Introducing Fox’s Latest Innovation
Fox Corp. caused quite a stir in media circles recently with the debut of “Verify,” a cutting-edge blockchain-based tool designed to authenticate digital media in the era of AI.
At face value, the initiative seeks to tackle two increasingly challenging issues: the proliferation of “deepfake” content enabled by AI, which has the potential to mislead consumers, and the unauthorized use of content by AI models.
Seeing Beyond the Buzzwords
Some skeptics might view this move as a mere public-relations stunt, mixing “AI” and “blockchain” into a buzzword soup to restore trust in news, especially for aging media conglomerates grappling with credibility problems. After all, we’ve all had a glimpse of corporate dynamics through shows like “Succession,” haven’t we?
However, setting aside the potential irony, it’s worth giving Fox and its newest creation the consideration they deserve. On the deepfake front, Verify lets individuals submit URLs and images to the system to determine whether the digital assets are genuine, meaning a publisher has officially sanctioned them. On the licensing front, AI companies can use the Verify database to access and pay for content legitimately.
Fox’s in-house technology arm, Blockchain Creative Labs, partnered with Polygon, a low-fee, high-throughput blockchain built on top of the Ethereum network, to handle the operational side. When new content is added to Verify, an entry containing its metadata and other key details is appended to a database on the Polygon blockchain.
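Fox hasn’t published the exact schema of these entries, but the general idea, hashing the content and bundling that digest with metadata and licensing details into a registry record, can be sketched roughly. The function name, fields, and URL below are illustrative assumptions, not Verify’s actual API:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_verify_record(url: str, content: bytes, license_terms: str) -> dict:
    """Bundle a content hash with metadata, roughly as an on-chain
    registry entry might store it. Hypothetical structure, not Verify's."""
    return {
        "url": url,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "license": license_terms,
    }

record = build_verify_record(
    "https://www.foxnews.com/example-article",   # hypothetical URL
    b"<html>...article body...</html>",
    "AI-training license available",
)
print(json.dumps(record, indent=2))
```

Because the blockchain stores only the fixed-size digest and metadata rather than the article itself, lookups stay cheap while the record remains tamper-evident.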
Unlike many other crypto experiments, the blockchain integration may actually matter this time around. Polygon gives content on Verify an immutable audit trail, so third-party publishers don’t have to rely on Fox to manage their data.
Verify in Action
Although Verify currently resembles little more than a sophisticated database checker, it certainly isn’t redundant, particularly as a way to guide traditional publishers through licensing agreements in the age of large language models.
To evaluate it, we used Verify’s web app to see how it performs day to day. It quickly became evident that the app has its limitations for consumer use.
Testing the Limits
The Verify app furnishes a text input box for URLs. When we pasted a Fox News article link pertaining to Elon Musk and deepfakes, the app promptly presented a range of information affirming the article’s authenticity. This included a transaction hash and signature for the Polygon blockchain transaction representing the content, alongside associated metadata, licensing particulars, and featured images.
We then downloaded one of the images and re-uploaded it into the tool to check its veracity. To our satisfaction, the app returned much the same data as before. Trying another image revealed a useful feature: we could browse other Fox articles featuring the same image.
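One plausible way a “same image, other articles” lookup could work is an index keyed by the image’s hash, mapping each digest to every article that uses it. This toy sketch (with hypothetical URLs) is our assumption about the mechanism, not documented Verify behavior:

```python
import hashlib

# Toy index from an image's SHA-256 digest to the articles that use it.
index: dict = {}

def register(image: bytes, article_url: str) -> None:
    """Record that article_url uses this exact image."""
    index.setdefault(hashlib.sha256(image).hexdigest(), []).append(article_url)

def articles_using(image: bytes):
    """Return every registered article featuring this exact image."""
    return index.get(hashlib.sha256(image).hexdigest(), [])

img = b"stand-in image bytes"
register(img, "https://www.foxnews.com/article-a")  # hypothetical URLs
register(img, "https://www.foxnews.com/article-b")
print(articles_using(img))
```

The lookup is a single hash computation plus a dictionary read, which is why the feature can respond instantly even over a large archive.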
While Verify handled these rudimentary tasks well, it’s hard to imagine consumers needing to authenticate content lifted directly from the Fox News website.
Documentation suggests a plausible user of the service might be someone who stumbles upon an article on social media and wants to validate its source. Yet real-world testing uncovered glitches.
Challenges and Limitations
When we found an official Fox News post on X, the social media platform, featuring the same article, Verify failed to authenticate it, even though the link resolved to the identical Fox News page. A screenshot of the image from the Fox News post also couldn’t be confirmed, highlighting the app’s fragility when images are manipulated or even slightly altered.
While some of these technical deficiencies will presumably be resolved, the far harder engineering challenge of helping consumers identify AI-generated content will be a Herculean task for Fox. Even when functioning as advertised, Verify can’t detect AI-generated content; it only confirms that the source is Fox or the relevant uploader. That remains a significant roadblock to helping consumers differentiate AI-generated content from human-authored material.
Consumer Engagement and the Hurdle of Apathy
Beyond any technological limitations, there’s the formidable problem of user indifference. It’s a well-known fact that people often aren’t too concerned about the authenticity of what they’re reading, a reality Fox is acutely aware of. This is especially evident when confirmation bias comes into play, where individuals tend to believe information that aligns with their preconceived notions.