
AI Briefing: This is what transparency for AI-powered brand safety technology could look like

After Adalytics’ new report questions the effectiveness of AI-powered brand safety technology, industry insiders have more questions about what works, what doesn’t, and what advertisers are paying for.

The 100-page report, released Wednesday, examined whether brand safety technologies from companies such as DoubleVerify and Integral Ad Science can identify problematic content in real time and block ads from appearing next to hate speech, sexual innuendo or violent content.

After advertisers expressed dismay at the findings, DV and IAS defended their offerings with statements attacking the report’s methodology. According to a blog post by IAS, the company is “driven by a single mission: to be the global benchmark for trust and transparency in digital media quality.”

“We are committed to media measurement and optimization excellence,” IAS wrote. “And we are constantly innovating to exceed the high standards our clients and partners deserve while maximizing their ROI and protecting brand value across all digital channels.”

DoubleVerify’s statement said the Adalytics report lacked proper context and emphasized the options available to advertisers. However, sources from ad tech, brands and agencies said the report accurately identified the key issues. Despite their stated commitments, DV and IAS still have not provided enough transparency about their AI tools to address those concerns, the kind of transparency that would help the industry better understand and test the tools.

One expert cited the Star Wars scene in which Obi-Wan Kenobi uses mind control to redirect stormtroopers and put it this way: “If there was ever a moment in brand safety where you realized, ‘These are not the droids you’re looking for,’ this is it.”

Earlier this week, Digiday sent DV and IAS questions that advertisers and technology experts wanted answered ahead of the report’s release. The questions covered how the brand safety technology is applied, how the AI models analyze and assess page safety, and whether pages are crawled occasionally or in real time. Other questions asked whether the companies perform page-level analysis and whether user-generated content is analyzed differently than news content. Neither DV nor IAS answered the questions directly.

“There are clearly some gaps in the system where it makes obvious mistakes,” said Laura Edelson, a professor at NYU. “If I were a customer, the very first thing I would want is more information about how this system works.”

Without transparency, a report like Adalytics’ “really undermines trust” because “without trust, there is no foundation,” Edelson said.

So what might transparency look like? What information should advertisers receive from vendors? And how can AI brand safety tools better address issues plaguing content and ads online?

Rocky Moss, CEO and founder of DeepSee.io, an AI brand safety startup, argued that measurement companies should provide more detailed data on the accuracy and reliability of categorization at the page level. Advertisers should also press providers on other topics: how their prebid technology responds when a URL is uncategorized or behind a paywall, how they handle a potential over-reliance on aggregated ratings, and the risk of bid suppression for uncategorized URLs. He also thinks providers should share information on how they avoid false positives and how much time they spend each day reviewing flagged content on high-traffic and outdated news sites.

“However, categorization models will always be probabilistic, with (hopefully) only a small amount of false negatives and false positives,” Moss said. “If the product is sold without disclosing this, it’s dishonest. If someone buys BS protection and thinks it’s perfect, I know Twitter bots that have some NFTs to sell them.”
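Moss’s point about probabilistic categorization can be made concrete with standard error-rate arithmetic. The sketch below is illustrative only, assuming a hypothetical vendor disclosed page-level audit counts; none of the numbers come from the Adalytics report or any vendor.

```python
# Illustrative sketch: summarizing page-level categorization accuracy.
# All counts are invented for the example, not taken from the report.

def categorization_error_rates(true_pos, false_pos, true_neg, false_neg):
    """Return precision, recall and false-positive rate for a safety classifier."""
    precision = true_pos / (true_pos + false_pos)              # flagged pages that were truly unsafe
    recall = true_pos / (true_pos + false_neg)                 # unsafe pages the model actually caught
    false_positive_rate = false_pos / (false_pos + true_neg)   # safe pages wrongly blocked
    return precision, recall, false_positive_rate

# Hypothetical audit sample of 10,000 pages
p, r, fpr = categorization_error_rates(true_pos=420, false_pos=180, true_neg=9350, false_neg=50)
print(f"precision={p:.1%}, recall={r:.1%}, false-positive rate={fpr:.2%}")
```

Even a small false-positive rate translates into meaningful bid suppression at scale, which is why Moss argues the error rates should be disclosed rather than glossed over.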

The divide between brand safety and user safety is becoming increasingly blurred, said Tiffany Xingyu Wang, founder of a stealth startup and co-founder of Oasis Consortium, a nonprofit focused on ethical technology. She believes companies that are incentivized to address both issues deserve better tools for user safety, brand suitability and values-based advertising.

“We need to move away from the blocklist focus on filtering,” said Wang, who was previously CMO of AI content moderation company OpenWeb. “Given the increasingly complex environment, that is no longer sufficient for advertisers.”

At Seekr – which helps advertisers and individuals identify and filter misinformation and other harmful content – every piece of content that goes into its AI model is sent for review, including news articles, podcast episodes and other material. Instead of labeling content as “low,” “medium” or “high” risk, Seekr rates content on a scale of 1 to 100. It also shows what it rates, how it rates it, what is flagged and why.
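That description suggests the kind of per-page record such transparency could expose to buyers. The sketch below is hypothetical; the field names, score and flag reasons are invented for illustration and are not Seekr’s actual schema or output.

```python
# Hypothetical per-page rating record, loosely modeled on the 1-to-100 scoring
# approach described above. All fields and values are invented for illustration.
page_rating = {
    "url": "https://example.com/article",
    "score": 62,               # 1-100 suitability score instead of low/medium/high buckets
    "flags": [
        {"category": "violent content", "evidence": "paragraph 4", "confidence": 0.81},
    ],
    "model_version": "2024-07-review",
    "reviewed_at": "2024-08-01T12:00:00Z",
}

def explain(rating: dict) -> str:
    """Render a human-readable account of what was flagged and why."""
    reasons = "; ".join(
        f"{f['category']} ({f['evidence']}, confidence {f['confidence']:.0%})"
        for f in rating["flags"]
    ) or "no flags"
    return f"{rating['url']} scored {rating['score']}/100: {reasons}"

print(explain(page_rating))
```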

Transparency also helps companies make better business decisions, said Pat LaCroix, EVP of strategic partnerships at Seekr. He also noted that performance and suitability should not be mutually exclusive: “This should not be viewed as a nuisance or a tax that you pay, but as something that determines important metrics.”

“People need to change their perspective and everyone needs to go a level deeper to know how to price content. It’s too black box, too general,” said LaCroix, who previously worked in agencies and in-house at brands like Bose. “At the end of the day, CPM is still a real metric that buyers are beholden to, and advertisers are still looking for the lowest prices, and that’s why this keeps happening.”

Prompts and Products – More AI news and announcements

  • The Irish Data Protection Commission has taken action against X for allegedly using EU users’ data to train X’s Grok AI model without consent.
  • A new report from Check My Ads examines the damage caused by AI-generated obituaries.
  • The FCC has opened a new comment period to create new rules for the use of AI in political advertising.
  • A new report from Dentsu found that the percentage of CMOs who doubt AI’s ability to create content fell from 67% in 2023 to 49% in 2024.
