Power Play: Governments Swiftly Navigate the Regulatory Landscape of AI Tools in a Bid for Control

Governments Worldwide and AI Regulation

Governments worldwide are grappling with the complex task of regulating artificial intelligence (AI) technologies, given the rapid advancements in the field. Here’s an overview of the latest regulatory steps taken by various nations and international bodies:

AUSTRALIA

Australia is planning regulations that will compel search engines to create new codes preventing the dissemination of AI-generated child sexual abuse material and the production of deepfake versions of such content.

BRITAIN

In a landmark move, leading AI developers at the global AI Safety Summit in Britain agreed to collaborate with governments to test emerging AI models before their release. More than 25 countries, including the U.S., China, and the EU, signed the “Bletchley Declaration” to establish a unified approach to oversight. Britain also pledged to triple its funding for the AI Research Resource to £300 million, supporting research into ensuring the safety of advanced AI models.

Britain’s data watchdog issued a preliminary enforcement notice to Snap Inc.’s Snapchat in October, citing concerns about the privacy risks associated with its generative AI chatbot.

CHINA

China has implemented temporary regulations, with Vice Minister of Science and Technology Wu Zhaohui expressing readiness to enhance international collaboration on AI safety. In October, China published proposed security requirements for generative AI services, including a blacklist of unacceptable training sources. Temporary measures issued in August mandate security assessments for mass-market AI products.

EUROPEAN UNION

France, Germany, and Italy have reached an agreement on AI regulation, emphasizing that developers of foundation models should publish model cards documenting how their machine learning models work. European lawmakers are progressing towards a broader agreement on the AI Act, expected in December.

FRANCE

France’s privacy watchdog is investigating potential breaches related to ChatGPT.

G7

The Group of Seven countries has agreed to an 11-point code of conduct for firms developing advanced AI systems, aiming to promote safe and trustworthy AI globally.

ITALY

Italy’s data protection authority plans to review AI platforms and hire experts in the field. It temporarily banned ChatGPT in March, then restored access to the service in April.

JAPAN

Japan anticipates introducing regulations by the end of 2023, with an approach closer to that of the U.S. The country’s privacy watchdog has cautioned OpenAI against collecting sensitive data without proper consent.

POLAND

Poland’s Personal Data Protection Office is investigating OpenAI over a complaint alleging ChatGPT’s violation of EU data protection laws.

SPAIN

Spain’s data protection agency initiated a preliminary investigation into potential data breaches by ChatGPT in April.

UNITED NATIONS

The United Nations Secretary-General announced the creation of a 39-member advisory body to address international governance issues related to AI. The U.N. Security Council held its first formal discussion on AI in July, emphasizing its potential consequences for global peace and security.

UNITED STATES

The United States is set to launch an AI safety institute to assess risks associated with “frontier” AI models. President Joe Biden issued an executive order in October, requiring developers of AI systems posing risks to national security or public interests to share safety test results with the government. The U.S. Congress held hearings on AI in September, and the Federal Trade Commission opened an investigation into OpenAI in July over potential consumer protection law violations.
