AI firms such as OpenAI, Alphabet (Google’s parent company), and Meta Platforms (formerly Facebook) have made voluntary commitments to improve the safety of artificial intelligence technologies. The commitments were announced by US President Joe Biden at a White House event aimed at addressing concerns about the potential misuse of AI and its impact on US democracy.

Biden welcomed the agreements as a significant step, but stressed that much more work remains to be done collectively. He emphasised the importance of remaining vigilant about the challenges posed by emerging technology, particularly AI, in order to protect national security and democratic values.

Companies participating in these voluntary pledges include Anthropic, Inflection, Amazon, and Microsoft, an OpenAI partner. They have pledged to rigorously test AI systems before release, share information on risk-reduction measures, and invest in cybersecurity to guard against potential attacks.

The pledges mark a significant step in the Biden administration’s efforts to regulate AI technology, which has attracted enormous investment and public adoption in recent years. Microsoft, responding to Biden’s leadership, expressed support for the joint effort to make AI safer, more secure, and more beneficial to the public.

One of the key issues addressed in the agreements is the rise of generative AI, which uses data to create new content, such as ChatGPT’s human-like text. Lawmakers around the world have been debating how to mitigate the risks this rapidly evolving technology poses to national security and the economy.

It is worth noting that the United States has lagged behind the European Union (EU) on artificial intelligence (AI) regulation. In June, EU lawmakers agreed on draft rules requiring AI systems like ChatGPT to disclose AI-generated content, distinguish deep-fake images from real ones, and provide safeguards against illegal content.
In response to US Senate Majority Leader Chuck Schumer’s calls for comprehensive regulation, Congress is currently debating legislation that would require political advertisements to disclose whether AI was used to create their images or other content.

Biden has been actively working on an executive order and bipartisan legislation focused on AI technology to step up regulatory efforts. He predicts that the next several years will bring more technological change than the previous five decades combined.

As part of their commitments, the seven companies have pledged to develop a watermarking system that can be applied to all forms of AI-generated content, including text, images, audio, and video. The watermark would be technically embedded in the content, allowing users to identify when it was created with AI technology.

The watermarking programme is intended to help users identify deep-fake images or audio that may depict violence that never occurred, facilitate scams, or distort images of politicians in a misleading way. However, it remains unclear how the watermark will survive as content is shared and transmitted.
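The companies have not published technical details of their watermarking scheme. As a purely illustrative sketch, the snippet below hides a short payload in text using zero-width Unicode characters; the `embed_watermark`/`extract_watermark` names and the `"AI"` payload are hypothetical, and real provenance systems (such as cryptographic metadata or statistical token-level watermarks) are far more robust than this toy approach.

```python
from typing import Optional

# Illustrative only: hide a bit pattern in text with zero-width characters.
ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed_watermark(text: str, payload: str = "AI") -> str:
    """Append the payload as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    hidden = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + hidden

def extract_watermark(text: str) -> Optional[str]:
    """Recover a hidden payload, or None if no watermark is present."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    if not bits:
        return None
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

marked = embed_watermark("This paragraph was machine-generated.")
print(extract_watermark(marked))          # AI
print(extract_watermark("plain text"))    # None
```

The scheme also illustrates the fragility the article notes: any platform that strips invisible characters during transmission would silently destroy the mark.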

Furthermore, the firms have committed to prioritising user privacy as AI technology advances, and to ensuring that AI systems are free of bias, with safeguards in place to prevent discrimination against vulnerable groups. The commitments also extend to developing AI solutions for scientific challenges such as medical research and climate change mitigation.
