
Global governments race to implement AI regulations amidst rapid technological advancements

In an era marked by the rapid evolution of artificial intelligence (AI), governments worldwide are grappling with how to regulate the technology effectively. The rapid rise of Microsoft-backed OpenAI’s ChatGPT in particular has intensified the debate, prompting national and international governing bodies to take action.

Australia is introducing new codes for search engines aimed at preventing the sharing of child sexual abuse material and the production of deepfakes.

In Britain, leading AI developers agreed with governments to test advanced AI models before their release, a significant step taken at the first global AI Safety Summit. More than 25 countries, including the U.S. and China, as well as the EU, signed the “Bletchley Declaration,” which calls for international cooperation in AI oversight. The UK also pledged to triple its funding for the “AI Research Resource,” aimed at enhancing AI model safety, and announced plans to establish the world’s first AI safety institute.

China is expanding its collaboration on AI safety, seeking to build an international governance framework. It published security requirements for firms offering generative AI services and implemented temporary measures, ensuring security assessments are conducted before releasing mass-market AI products.


The European Union moved closer to comprehensive regulation after lawmakers agreed on rules that classify certain AI systems as “high risk,” with a final agreement on the AI Act expected in December. European Commission President Ursula von der Leyen has proposed the creation of a global panel to assess AI’s risks and benefits.

France is investigating possible breaches related to ChatGPT, a sign of the regulatory scrutiny AI technologies now face.

The G7 countries have endorsed an 11-point code of conduct for advanced AI systems, aimed at promoting safe and trustworthy AI on a global scale.

Italy is reviewing AI platforms and employing experts in the field. ChatGPT faced a temporary ban in the country but was later reinstated.


Japan is working on regulations closer to the U.S. approach, with plans to introduce them by the end of 2023. The country’s privacy watchdog has cautioned OpenAI against collecting sensitive data without consent.

Poland is investigating OpenAI over potential EU data protection law violations linked to ChatGPT.

Spain initiated a preliminary investigation into potential data breaches by ChatGPT.

The United Nations has established a 39-member advisory body, comprising tech company executives, government officials, and academics, to address international AI governance issues. The U.N. Security Council delved into the global implications of AI, emphasising its impact on global peace and security.


In the United States, a new AI safety institute was announced to evaluate the risks associated with “frontier” AI models. President Joe Biden issued an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health, or safety to share safety test results with the government. The U.S. Congress held hearings on AI, featuring industry leaders such as Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk, who advocated for a U.S. “referee” for AI. Additionally, the U.S. Federal Trade Commission initiated an investigation into OpenAI for potential consumer protection law violations.

As governments worldwide grapple with the challenges posed by AI’s rapid advancements, it is clear that efforts to regulate and ensure the responsible use of this technology are a global priority.
