AI companies, including OpenAI, Alphabet (parent company of Google), and Meta Platforms (formerly Facebook), have taken voluntary steps to enhance the safety of artificial intelligence technology. US President Joe Biden announced these commitments during a White House event aimed at addressing concerns regarding the potential misuse of AI and its impact on US democracy.

Biden acknowledged the importance of these commitments as a positive step but emphasised that there is still much work to be done collaboratively. He stressed the need to be vigilant about the threats posed by emerging technologies, particularly AI, to safeguard national security and democratic values.

The companies involved in these voluntary commitments also include Anthropic, Inflection, Amazon, and Microsoft, which is a partner of OpenAI. They have pledged to rigorously test AI systems before release, share information on risk reduction measures, and invest in cybersecurity to protect against potential attacks.

It is worth noting that the United States has been lagging behind the European Union (EU) in terms of AI regulation. In June, EU lawmakers reached an agreement on draft rules that require AI systems like ChatGPT to disclose AI-generated content, distinguish deep-fake images from real ones, and implement safeguards against illegal content.

In response to calls for comprehensive legislation from US Senate Majority Leader Chuck Schumer, Congress is currently considering a bill that would require political ads to disclose whether AI was used in creating their imagery or other content.
