Meta’s Restriction on AI Advertising Tools
Meta, the owner of Facebook, has made a decisive move concerning its new generative AI advertising tools: it is withholding access from political campaigns and from advertisers in regulated sectors. The decision was revealed after concerns were raised that these tools could accelerate the spread of election misinformation.
Policy Update on Access Restrictions
The update was publicly disclosed through Meta’s help center, which clarifies that the company’s advertising standards already prohibit content debunked by its fact-checking partners but currently contain no specific rules on AI-generated ads. Meta stated that advertisers in regulated industries related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, and financial services will not have access to these AI features during the testing phase.
The rationale? Meta says it wants to better understand the potential risks and build the necessary safeguards for using AI in ads, especially ads that touch on sensitive subjects within regulated industries.
Expansion of AI-Powered Ad Tools
This policy shift follows Meta’s recent announcement of expanding AI-powered ad tools, allowing instant creation of backgrounds, image adjustments, and ad variations based on text prompts. Initially available to a select few advertisers, these tools are slated for global release to all advertisers by next year.
In the tech industry’s race to build generative AI ad products and virtual assistants, spurred by the buzz around the debut of OpenAI’s ChatGPT chatbot, companies like Meta have rolled out such tools without disclosing extensive safety measures. Meta’s restriction on political ads marks a significant policy choice in the evolving landscape of AI and advertising, shedding light on how the industry is approaching these new systems.