By Shivendra Lal

AI Content Crackdown: What the ‘No Fakes Act’ Means for Your Business

Updated: Sep 17

The debate over AI's ethical use is heating up. US lawmakers have introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (better known as the No Fakes Act), and the EU has introduced similar legislation. Let's look at how the No Fakes Act approaches this issue, and what tech companies, digital creators, and marketers need to keep in mind.


What does the No Fakes Act say about generated content?

Artificial intelligence is increasingly integrated into content creation, leading to deepfakes and other AI-generated content that blurs the line between reality and fiction. Governments around the world are introducing legislation to regulate AI-driven content creation because of the ethical and legal challenges it poses. The No Fakes Act aims to ensure transparency, accountability, and ethical practices when it comes to AI-generated content.


The proposed law would make it illegal to create digital replicas of people's images, voices, and likenesses without consent. Right now, there's little harmonization among the statutory and common laws protecting images, voices, and likenesses. Here's what the bill proposes:


  • Hold individuals and companies liable for producing an unauthorized digital replica of a person in a performance;

  • Hold platforms liable for hosting a replica if they know it wasn't authorized by the person depicted;

  • Exempt a narrow set of digital replicas from liability where they serve fundamental rights; and

  • Create a workable national standard for digital replicas.


The bill says individuals should have the right to authorize the use of their voice or likeness in a digital replica. It defines digital replicas as "new, computer-generated, extremely realistic electronic representations that are easy to identify as voice or visual likenesses of people..." The bill doesn't clarify whether unauthorized replicas made before the law takes effect would be exempt. What counts as a 'readily identifiable' digital representation or rendering? What about lookalikes and soundalikes? Here's a good example: Scarlett Johansson claimed that her voice was imitated in a recent ChatGPT update. How would anyone determine that the voice in ChatGPT is Scarlett Johansson's? It's also unclear what the bill means by allowing copyright holders, such as artists and record labels, to authorize digital copies or altered performances based on their own material.


There are many more aspects of the right of publicity covered in the bill. Because they're not entirely relevant for this audience, I'm not covering them in this episode.


What does this all mean in simple terms?

The No Fakes Act is one of the most consequential bills introduced recently. Given the scale of investment in, adoption of, and deployment of AI, its ethical use is essential. The problem with proposed laws, however, is that they carry so much legal context that most people have trouble understanding them. As I understand it, there are three key takeaways from the No Fakes Act.


First, it mandates disclosure of AI-generated content. AI-created or AI-adjusted content needs to be clearly labeled. The goal is to prevent misinformation and make sure audiences know when they're interacting with AI-generated content. For example, if a company uses AI to produce product reviews, promo videos, or social media posts, it must disclose that. This maintains trust between the company and its audience, and lets consumers make informed decisions.


Second, it proposes significant restrictions on deepfake technology, especially in contexts where it might cause harm or deception. AI-generated videos and audio recordings that mimic real people can be used maliciously - to spread fake news, manipulate public opinion, or defame people. The controversy over Scarlett Johansson's claim is a good example.


Finally, the Act imposes new accountability measures on creators and distributors of AI-generated content. It's up to individuals and companies to ensure that AI-generated content complies with ethical standards and doesn't violate other people's rights.


What are the implications for tech companies, digital creators, and marketers?

Considering these rules, it's clear that the No Fakes Act has significant implications for tech companies, especially those that make or use AI tools.


If this bill becomes law, tech companies will have to implement new compliance measures to ensure their AI tools are used properly. That might mean updating software to label AI-generated content automatically or adding checks to prevent harmful deepfakes. Adobe and Meta have already started down this path, though their approaches have had flaws. Companies will also need to train employees on the new regulations and make sure they understand the ethical implications.


Transparency and accountability are key, so tech companies should focus on making AI systems that prevent misleading or harmful content, along with detecting and flagging deepfakes.


In addition, they might need to collaborate with regulators and industry bodies to develop best practices and standards. Such collaboration helps companies stay ahead of regulatory changes and keep their AI tools ethical and legal.


Influencers, content creators, and artists need to be transparent about using AI to generate or enhance their content. The key is to clearly label AI-generated content and explain how it's used.


Creators should avoid using AI in ways that could deceive or manipulate their audience, and they should be conscious of the potential impact of their content. If a creator uses AI to create a realistic deepfake of a public figure, they need to consider the potential harm this content might cause.


The creator community should also see the opportunities that the No Fakes Act creates for them. Creators who embrace the principles of the Act may also find new ways to collaborate with technology companies, marketers, and other stakeholders to push the boundaries of what’s possible with AI while remaining compliant with the law.


It's equally important for marketers to make sure AI-generated ads are clearly labeled and transparent. That includes disclosing AI use in ads, social media posts, and other marketing materials. Transparency is key to building consumer trust.


Marketers should use AI cautiously when targeting vulnerable audiences, like children or the elderly, and avoid creating content that's misleading or deceptive. Ethical marketing practices protect your brand's reputation and keep your campaigns on the right track.


It's clear that marketers will need to invest in compliance and training so their teams know what the No Fakes Act requires. That might mean updating marketing strategies, revising content creation processes, and implementing new tools to comply with the Act.


The No Fakes Act is a big step forward for the ethical development, application, and distribution of AI-powered tools. By adequately disclosing AI use, prioritizing ethical considerations, staying on top of regulations, consulting legal, ethical, and technical experts, and educating their in-house teams, tech companies, digital creators, and marketers can help advance the intent behind the Act. Employed ethically and tastefully, AI-generated content has the potential to inform, educate, and entertain. AI is all about extending and expanding human potential. More collaboration among stakeholders is essential, and the No Fakes Act could help.




