OpenAI’s rules can be ‘easily’ dodged to target Latinos, study warns

Published Apr 26, 2024 00:03 | technology | Cristiano Lima-Strong

Happy Thursday! Today’s newsletter features two of your favorite Brazilian tech policy junkies in conversation. Send tips on news and the name of your favorite churrascaria to: cristiano.lima@washpost.com.

OpenAI’s rules can be ‘easily’ dodged to target Latinos, study warns

In January, OpenAI unveiled revamped policies aimed at preventing its tools from being used to spread disinformation ahead of the 2024 elections, including by blocking people from building chatbots “for political campaigning and lobbying.”

But a new study released Thursday argues that the rules can be “easily” bypassed “to maliciously target minority and marginalized communities in the U.S. with misleading content and political propaganda,” including Latinos and Spanish speakers.

The findings, researchers say, highlight key enforcement gaps in OpenAI’s rules that could have big implications for underrepresented and non-English-speaking communities during this year’s elections.

The Digital Democracy Institute of the Americas, a research unit that examines how Latinos navigate the internet, ran several tests asking OpenAI’s ChatGPT tool to help create a chatbot for campaigning purposes, including “to interact in Spanish” and to “target” Latino voters.

While all of the prompts “should not have generated responses” under OpenAI’s rules, researchers wrote, the tests “resulted in detailed instructions from GPT-4.” 

“Targeting a chatbot to Latino voters in the U.S. requires a nuanced approach that respects cultural diversity, language preferences and specific issues of importance to the Latino community. Here’s how you could tailor the chatbot for maximum effectiveness,” one reply read.

Roberta Braga, the group’s founder and executive director, told me that the results show the company’s safeguards “were super easily circumvented,” even when researchers “were not hiding the intent” of targeting campaigns at Latinos.

The report draws a parallel between those findings and broader efforts to crack down on misinformation online. Lawmakers and advocacy groups have long accused tech companies, particularly social media platforms, of underinvesting in the resources needed to adequately enforce their rules in non-English languages.

The study shows that with AI tools, too, the rules are “not yet being applied consistently or symmetrically across countries, contexts, or in non-English languages,” researchers wrote.

The report did note that “OpenAI’s terms of service are more advanced in addressing misuse and the spread of disinformation than those of most companies bringing generative AI products to market this year.”

Researchers also tested OpenAI’s image-generation tool, DALL-E, which company rules prohibit from being used to create visuals of “real people, including candidates.” 

While the tests did not bypass those guardrails, researchers were able to generate images of politicians holding up the “okay” hand gesture, which groups such as the Anti-Defamation League consider a hate symbol due to its ties to white-supremacist organizations.

“For us, it showed that the tool can't detect the nuance, so even though the terms are in place, this tool can very much be used to make strong political statements,” Braga said.

OpenAI spokeswoman Liz Bourgeois said in a statement that the “findings in this report appear to stem from a misunderstanding of the tools and policies we’ve put in place.”

Bourgeois said the company allows people to use its products as a resource for political advocacy and that providing instructions on how to build a chatbot for campaign purposes does not violate its policies.

But Braga, who previously worked at the Atlantic Council think tank, stressed that OpenAI’s tools still “offered guidance on how to define intent, create conversational flows, craft responses, integrate feedback loops” and on how to program and configure chatbots targeting Latinos.

Government scanner

Biden signs bill that could ban TikTok, a strike years in the making (By Cristiano Lima-Strong)

TikTok and the U.S. government dig in for legal war (By Drew Harwell)

FCC to reinstate net neutrality, but it’s not as easy as it once was (By Eva Dou)

Biden campaign plans to keep using TikTok through the election (NBC News)

Inside the industry

Meta’s advertising business keeps rolling, but costs rise in AI arms race (Wall Street Journal)

TikTok halts ‘lite’ rewards program, fending off EU suspension (Bloomberg)

Competition watch

Spotify is struggling to get Apple to approve its iOS updates in the E.U. (The Verge)

Britain probes Amazon and Microsoft over AI partnerships with Mistral, Anthropic and Inflection (TechCrunch)

Workforce report

The FTC banned noncompetes. What that means for workers and companies. (By Taylor Telford)

Trending

The Meta-morphosis of Mark Zuckerberg (New York Times)

Daybook

  • The Federal Communications Commission hosts an open meeting on Thursday at 10:30 a.m.
  • The Washington Post Live hosts an event, “Disparities in Digital Access,” featuring FCC Chairwoman Jessica Rosenworcel on Friday at 9 a.m.

Before you log off

That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!

