It’s not every day that a journalist on the AI beat for an ad tech trade pub gets to watch a play about a fictional software company set in the agentic age.
The experience is all the more jarring when the fictional company is developing a morally questionable AI tool and conversations start cropping up among the characters about whether to blow the whistle and involve the press.
It was very (lowercase “m”) meta.
Collecting Data
Last week, several AdExchanger reporters went to see “Data,” an off-Broadway play about AI, surveillance, data tracking and predictive modeling that parallels many of the ways ad technology is used to profile and target people.
The fictional software company in the play, called Athena, is based heavily on Palantir, playwright Matthew Libby said during a post-performance talkback. But the story makes it very clear that even those working on AI with good intentions aren’t safe from becoming implicated in more harmful uses.
The play revolves around a recent college graduate, Maneesh, who works at Athena as a designer on the UX team despite his talent for programming and data science. He’s quickly recruited by the data science team, whose leader seems intent on getting his hands on a powerful algorithm that Maneesh developed in college.
Maneesh, for his part, is apprehensive about joining the team, and adamant that the algorithm remain closed-source. Otherwise, what started as a fairly innocuous college project to predict rare events in baseball games could easily be used for more sinister purposes.
Without spoiling what happens, this proves to be the case.
Quiet bias
The play repeatedly challenges the idea that people can be defined by a series of data points.
As one character points out, it’s all too easy to hide behind mathematical code and call AI “objective.” But even if you were to take away the use of AI and automation, humans are still imperfect creatures with the same biases that informed the code in the first place.
That’s the quandary I found myself stuck on as the play drew to a close, and I asked Libby about it during the talkback.
AI doesn’t create bias, per se, but it does exacerbate existing biases that have now been built into purportedly objective algorithms. Dehumanization in any form – digital or otherwise – poses a grave danger to identity and safety, Libby said.
It’s tempting to think you can know who someone is, what they’ll become or “what their value is” if you collect enough data about them, he added. But that’s a dangerous assumption, regardless of whether you make it face-to-face or through a predictive algorithm.
At what cost?
But these kinds of predictions power every corner of ad tech.
Advertising is all about data collection and predictive modeling, which isn’t so different from the tools and algorithms created by companies like the fictional Athena.
For example, Palantir has been developing a mapping tool for ICE to target immigrants for detention and deportation based on details like geographic location.
The parallels between the data collected for advertising and for government purposes like immigration enforcement aren’t just abstract analogies. Earlier this year, ICE put out an RFI asking data providers and tech vendors to share information on how their tools and services could help with investigations.
Just because data is initially collected for one purpose doesn’t mean it can’t be used (or misused) for another.
For advertisers, the stakes aren’t as high. If you target the wrong person, maybe you waste some media and end up with lower-than-expected ROAS.
Algorithms may not create bias, but they scale it at warp speed. And when they’re used to make major decisions about people’s futures, even a small mistake can upend someone’s entire life – which Maneesh proves firsthand when he tests his algorithm on a more personal use case, with distressing results.
And that’s not to mention the ambiguities of consent. Users legally consent to data tracking all the time without thinking twice, whether they’re signing up for social media platforms or accessing websites with cookie opt-in widgets. Still, for most people outside the marketing world, “opting in” is a vague term that doesn’t explain just how much personal data they’re relinquishing.
Now imagine that same data being handed over to immigration enforcement without direct consent. According to 404 Media, ICE is collecting addresses from the Department of Health and Human Services, effectively turning information people share to access basic services into a tool for surveillance.
Even when backed by good intentions, the development of technology can quickly lead to harm. We’ve seen this before. Facial recognition technology consistently misidentifies Black people, and ChatGPT often assumes that female users in male-dominated categories (think leadership roles, cybersecurity, etc.) are men.
If you’re building technology, it’s imperative that you address potential biases and speak out against potentially harmful use cases.
AI isn’t a magic wand. It’s a powerful and sometimes dangerous tool.
Which isn’t to say AI doesn’t have its place. “Data” acknowledges that AI’s role is complicated, and Libby is “very sympathetic” to Maneesh’s AI-enthused supervisor, who eagerly points out all the clerical errors and latencies that automation can bypass.
But we can’t talk about the wins without the losses – or without acknowledging the risks.
The ad tech world likes to talk about AI only as progress, but progress comes with a cost. How much are we willing to pay?
“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media. This column is part of a series of perspectives from AdExchanger’s editorial team.