By Dana Rao, EVP, General Counsel and Chief Trust Officer, Adobe

Generative AI is changing the way we all think about creativity. Type “3D render of a paper dragon, studio-style photography” and you’re instantly offered multiple variations of a ferocious origami creature. Or combine a few data points with simple instructions, and a chatbot can produce a compelling marketing email.

It’s easy to see the power this technology can unlock for individual creators and businesses alike. Generative AI lets people paint with text instead of pixels (or paint). On the business side, it lets you connect with customers efficiently through auto-generated texts, emails, and content. And implemented the right way, generative AI brings precision, power, speed, and ease to your existing workflows – allowing people to focus on more strategic or creative parts of their work.

Generative AI also opens the door to new questions about ethics and responsibility in the digital age. As Adobe and others harness the power of this cutting-edge technology, we must come together across industries to develop, implement and respect a set of guardrails that will guide its responsible development and use.

Grounded in Ethics and Responsibility

Any company building generative AI tools should start with an AI ethics framework. Having a set of concise, actionable AI ethics principles and a formal review process built into a company’s engineering structure can help ensure that AI technologies – including generative AI – are developed in a way that respects customers and aligns with company values. Core to this process are training, testing, and – when necessary – human oversight.

Generative AI, like any AI, is only as good as the data on which it’s trained. Mitigating harmful outputs starts with building and training on safe and inclusive datasets. For example, Adobe’s first model in our Firefly family of creative generative AI models is trained on Adobe Stock images, openly licensed content, and public domain content where copyright has expired. Training on curated, diverse datasets gives your model a competitive edge when it comes to producing commercially safe and ethical results.
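As a rough illustration, license-based curation can be as simple as filtering a content catalog before training. The sketch below is hypothetical – the field names and license labels are invented for illustration, not drawn from Adobe’s actual pipeline.

```python
# Minimal sketch of license-based dataset curation. All field names and
# license labels here are hypothetical stand-ins for illustration.
ALLOWED_LICENSES = {"licensed-stock", "openly-licensed", "public-domain"}

def is_safe_to_train(record: dict) -> bool:
    """Keep only records whose license clearly permits training."""
    return record.get("license") in ALLOWED_LICENSES

catalog = [
    {"id": "img-001", "license": "licensed-stock"},
    {"id": "img-002", "license": "all-rights-reserved"},
    {"id": "img-003", "license": "public-domain"},
]

training_set = [r for r in catalog if is_safe_to_train(r)]
print([r["id"] for r in training_set])  # ['img-001', 'img-003']
```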

But it’s not just about what goes into a model. It’s also about what comes out. Because even with good data, you can still end up with biased AI, which can unintentionally discriminate or disparage and cause people to feel less valued. The answer is rigorous and continuous testing.

At Adobe, under the leadership of our AI Ethics team, we constantly test our models for safety and bias internally and provide those results to our engineering team to resolve any issues. In addition, our AI features have feedback mechanisms so that when they go out to the public, users can report any concerns and we can take steps to remediate them. It’s critical that companies foster this two-way dialogue with the public so that we can work together to continue to make generative AI better for everyone.
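One simple form such testing can take is counterfactual prompting: generate outputs for prompts that differ only in a demographic term, then check the results for skew. Here is a minimal sketch of the idea, where `generate` is a placeholder for any model call:

```python
# Counterfactual prompt testing: vary only the demographic term and
# compare outputs across groups for quality or representation skew.
from itertools import product

def generate(prompt: str) -> str:
    # Placeholder model call; a real harness would invoke the model API here.
    return f"<output for: {prompt}>"

TEMPLATES = ["a portrait of a {} doctor", "a photo of a {} engineer"]
GROUPS = ["young", "elderly", "male", "female"]

results = {}
for template, group in product(TEMPLATES, GROUPS):
    prompt = template.format(group)
    results[prompt] = generate(prompt)

# Reviewers (or downstream classifiers) then inspect the outputs for skew.
for prompt, output in results.items():
    print(prompt, "->", output)
```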

On top of training, companies can build various technical measures into their products. Block lists, deny lists, and NSFW classifiers can be implemented to mitigate harmful bias in the output of an AI model. If a company is still unsure about or unsatisfied with the output, it can always add or require a human in the loop to ensure the output meets expectations.
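As a sketch of how those layers might compose – assuming a hypothetical `nsfw_score` classifier and an illustrative threshold – a review pipeline could look like this:

```python
# Layered output safeguards: a deny list, a safety classifier, and an
# escalation path to human review. The terms, classifier, and threshold
# are all illustrative assumptions, not a specific product's design.
DENY_LIST = {"slur1", "slur2"}   # placeholder terms
NSFW_THRESHOLD = 0.8             # illustrative cutoff

def nsfw_score(output: str) -> float:
    return 0.0  # stand-in for a real safety classifier

def review_output(output: str) -> str:
    tokens = set(output.lower().split())
    if tokens & DENY_LIST:
        return "blocked"              # hard block on denied terms
    if nsfw_score(output) >= NSFW_THRESHOLD:
        return "needs-human-review"   # escalate borderline cases
    return "approved"

print(review_output("a paper dragon in studio light"))  # approved
```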

And whenever a company is sourcing AI from an outside vendor – whether they’re integrating it into company workflows or into their own products – making sure the AI meets their ethical standards should be part of their vendor risk process.

Transparency Builds Trust

We also need transparency about the content that generative AI models produce. Think of our earlier example, but swap the dragon for a speech by a global leader. Generative AI raises concerns over its ability to conjure up convincing synthetic content in a digital world already flooded with misinformation. As the amount of AI-generated content grows, it will be increasingly important to give people a way to deliver a message and prove that it is authentic.

At Adobe, we’ve implemented this level of transparency in our products with our Content Credentials. Content Credentials allow creators to attach information to a piece of content – information like their names, dates, and the tools used to create it. Those credentials travel with the content, so that when people see it, they know exactly where the content came from and what happened to it along the way.

We’re not doing this alone; four years ago, we founded the Content Authenticity Initiative to build this solution in an open way so anyone can incorporate it into their own products and platforms. There are over 900 members from all areas of technology, media, and policy who are joining together to bring this solution to the world.

And for generative AI specifically, we automatically attach Content Credentials to indicate when something was created or modified with generative AI. That way, people can see how a piece of content came to be and make more informed decisions about whether to trust it.
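To make that concrete, here is a simplified, hypothetical picture of the kind of record a Content Credential carries. Real Content Credentials follow the open C2PA specification and are cryptographically signed and bound to the asset; the field names and values below are illustrative stand-ins.

```python
# Illustrative provenance record in the spirit of a Content Credential.
# Real credentials follow the C2PA spec and are cryptographically signed;
# these fields are simplified stand-ins for illustration.
import json
from datetime import datetime, timezone

credential = {
    "creator": "Example Artist",                         # who made it
    "created": datetime.now(timezone.utc).isoformat(),   # when
    "tool": "Example Image Editor 1.0",                  # what produced it
    "actions": [                                         # what happened to it
        {"action": "created", "softwareAgent": "Example Generative Model"},
        {"action": "edited", "softwareAgent": "Example Image Editor"},
    ],
    # Marker that the asset was made with generative AI (C2PA uses the
    # IPTC term "trainedAlgorithmicMedia" for this purpose).
    "digitalSourceType": "trainedAlgorithmicMedia",
}

print(json.dumps(credential, indent=2))
```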

Respecting Creators’ Choice and Control

Creators want control over whether or not their work is used to train generative AI. Some want their content kept out of AI training entirely. Others are happy to see it used to help this new technology grow, especially if they can retain attribution for their work.

Using provenance technology, creators can attach “Do Not Train” credentials that travel with their content wherever it goes. With industry adoption, this will help prevent web crawlers from using works with “Do Not Train” credentials as part of a dataset.
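In practice, honoring that signal means a crawler inspects the embedded credentials before admitting an asset into a training dataset. The sketch below assumes hypothetical field names for illustration:

```python
# Sketch of a crawler honoring a "Do Not Train" signal embedded in
# content credentials. The field names here are hypothetical.
def extract_credentials(asset: dict) -> dict:
    # Stand-in for parsing embedded provenance metadata from a file.
    return asset.get("credentials", {})

def allowed_for_training(asset: dict) -> bool:
    creds = extract_credentials(asset)
    return not creds.get("do_not_train", False)

crawled = [
    {"url": "https://example.com/a.jpg", "credentials": {"do_not_train": True}},
    {"url": "https://example.com/b.jpg", "credentials": {}},
]

dataset = [a for a in crawled if allowed_for_training(a)]
print([a["url"] for a in dataset])  # only b.jpg survives
```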

Together, along with exploratory efforts to compensate creators for their contributions, we can build generative AI that both empowers creators and enhances their experiences.

An Ongoing Journey

We’re just scratching the surface of generative AI, and the technology is improving every day. As it continues to evolve, generative AI will bring new challenges, and it’s imperative that industry, government, and communities work together to solve them.

By sharing best practices and adhering to standards to develop generative AI responsibly, we can unlock the unlimited possibilities it holds and build a more trustworthy digital space.
