How Generative AI Can Support Everyone

Part of a series  |  Generative AI Insights


Find out how one of today's hottest technologies can support people of all backgrounds, and what you can do to help.

Employers are weighing the benefits and responsible use of generative artificial intelligence (AI). While generative AI tools can enhance productivity and creativity and open up possibilities for innovation and problem-solving, it's essential to consider their impact on people. By applying a human touch, leaders can help ensure that generative AI tools include and foster opportunities for everyone.

What is generative AI, and how does it work?

Generative AI is a technology that creates content in response to a user's prompt. Text-based generative AI tools use large language models (LLMs), which draw on large volumes of text data to inform their content, or outputs. In popular interfaces, a person enters a question or request, known as a prompt, and the tool returns an output. Image-generating tools work similarly, producing images when prompted.

Such tools are not human or sentient; they are trained on existing content, which they repurpose to generate new outputs. They are sought after by people looking to enhance their efficiency and creativity because they can quickly produce many types of content.

Why generative AI needs a human touch

Generative AI tools perform best with a human touch that considers the nuances and richness of people.

They are remarkable technological innovations that can significantly benefit today's workplace and the world, particularly when humans are kept in the loop. For example, this technology is giving business owners in rural parts of the world access to resources to succeed; supporting veterans with PTSD management; and helping democratize experiences such as coding, music, art and written content, regardless of a person's disability, level of formal training or native language. To make generative AI more inclusive, anyone designing or using these tools should account for all people: for example, by checking the quality of the datasets, by reprompting the tools until the desired outputs are received, by being more specific when prompting and by including diverse groups of people in review processes so that multiple perspectives are considered.

"There's a human component we always want to consider," says Giselle Mota, chief of product inclusion, ADP. "Just because you're prompting a generative AI tool doesn't mean it will detect human complexity, different languages, nuances in translation and culture, biases or the absence of certain populations. Review the prompts and outputs and tweak them if needed. Review the images, coding, text and other content. Ask yourself who or what is missing."

Examining generative AI outputs

A generative AI experience that is inclusive of diverse human backgrounds involves monitoring both the language of the outputs and the accuracy of their information. And just as text-based generative AI tools are prone to producing limited outputs, so too are their image-generating counterparts.

"We've seen that image-generating tools can produce homogenous images," Mota says. "The image sets are trained mostly on similar data, and, as a result, the tools frequently generate similar-looking outputs. These types of outputs warrant further consideration to include more diverse data."

The three Cs: Consider, check, continue

Use the three Cs (consider, check, continue) each time you prompt a generative AI tool. First, consider the outputs: Do they incorporate perspectives from, or images of, multiple groups? Second, check the information: Is it accurate, fair and comprehensive? Can you cite third-party sources to enhance credibility? After these steps, continue as planned, staying mindful of changes and considerations that may arise later. If changes are made, repeat the process.
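For teams that build or script around generative AI tools, the three Cs can be sketched as a simple review loop. This is a minimal, hypothetical illustration: the generate function below is a stand-in for whatever AI tool or API you actually use, and the perspective check is deliberately simplistic — real review should involve human judgment.

```python
# A minimal sketch of the three Cs (consider, check, continue).
# generate() is a hypothetical stand-in for a real generative AI call.

def generate(prompt: str) -> str:
    """Hypothetical stand-in: a real tool would return AI-generated content."""
    return f"Output for: {prompt}"

def consider(output: str, required_perspectives: list[str]) -> list[str]:
    """Consider: which required perspectives are missing from the output?"""
    return [p for p in required_perspectives if p.lower() not in output.lower()]

def check(output: str, facts_verified: bool) -> bool:
    """Check: is the information accurate and sourced? (Stubbed as a flag
    here; in practice a person verifies against third-party sources.)"""
    return facts_verified

def three_cs(prompt: str, required_perspectives: list[str]) -> str:
    output = generate(prompt)
    missing = consider(output, required_perspectives)
    while missing:
        # Re-prompt with more specificity until the perspectives appear.
        prompt += " Include perspectives from: " + ", ".join(missing) + "."
        output = generate(prompt)
        missing = consider(output, required_perspectives)
    if not check(output, facts_verified=True):
        raise ValueError("Output failed the accuracy check; revise the prompt.")
    return output  # Continue: proceed with the reviewed output.

result = three_cs("Describe a modern workplace.", ["accessibility"])
```

The loop mirrors the article's advice: reprompt with more specific instructions until the missing perspectives are represented, verify accuracy, and only then continue.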

Workforce awareness and education are essential to instilling the three Cs. Leaders can support employees by providing training, resources and guidance on the responsible and inclusive use of generative AI. By encouraging collaborative and thoughtful interactions with the tools, leaders can draw on the power of the technology to support everyone.

Being mindful of technological considerations

Leaders should also consider setting technological guardrails for generative AI. Asking HR software vendors about their generative AI ethics, protections and governance models is a good place to start. Leaders should also be mindful of how the tools are prompted.

"You can engineer prompts carefully," Mota says. "Be very specific about what you want from the tools. In your prompts, include perspectives, nuances and contexts. Then, once you have the outputs, ask yourself again, 'Who is missing? Which perspectives are missing?' Ultimately, if prompts and outputs represent all people and the world as it is, we can deliver a more accurate, fairer and more comprehensive generative AI experience."

5 considerations to keep in mind

  1. Consider improving the representation, fairness and comprehensiveness of generative AI outputs and, if possible, the tools themselves.
  2. Consider accounting for the quality of generative AI datasets.
  3. Use the three Cs: consider, check, continue. Repeat the process if needed.
  4. Consider engineering prompts carefully. Aim to be specific about the outputs you want from the tools.
  5. Ask your HR software vendor about their generative AI protections and strategies.

ADP's commitment to product inclusion

ADP designs for people, using technology to improve human life. In addition to implementing generative AI to create better experiences for its clients and associates, ADP has adopted rigorous principles to govern its use of generative AI and is committed to creating ethical and supportive product interactions that are considerate of everyone.

"At ADP, we recognize the benefits of generative AI and are aware of the potential for risks and biases that comes with it," says Helena Almeida, vice president, managing counsel, ADP. Almeida is also a member of ADP's AI and Data Ethics Committee. "We're taking a careful and thoughtful approach to incorporating this technology into our products and services. Additionally, we're committed to ensuring that inclusion is part of our governance model so that diverse voices and perspectives are heard. As we progress, we'll continue being mindful of the need for accuracy, comprehensiveness, fairness and representation."

Find out what ADP is doing