Tech News Now: Snap takes a tumble, Meta pushes image labels, and more
Published 8:00 am Wednesday, February 7, 2024
Good morning and welcome to Tech News Now, TheStreet’s daily tech rundown.
Tech earnings continued Tuesday with Snap Inc. (SNAP), whose stock plummeted around 30% on an earnings miss and weaker-than-anticipated guidance.
In the midst of growing concerns over artificial intelligence image generators and deepfake fraud, Meta (META) announced a new effort to label AI images on its platforms, and OpenAI revealed a new method of identifying images generated by its models.
And Google (GOOGL) has agreed to shell out $350 million to settle a shareholder lawsuit.
Tech earnings continue this week, with Disney (DIS), Uber (UBER) and Roblox (RBLX) reporting Wednesday, and Cloudflare (NET) up on Thursday.
Tickers we’re watching today: SNAP, DIS and UBER.
Let’s get into it.
Meta’s AI image labels
With instances of AI-generated deepfake fraud mounting, Meta said Tuesday that it is undertaking a more pointed effort to identify and label AI-generated images on its platforms. The social media giant already labels images generated by Meta’s AI tools as “Imagined with AI,” but will now work to identify and label work generated by other models.
The move, Meta said in a statement, comes during a year in which a “number of important elections are taking place around the world.”
“For those who are worried about video, audio content being designed to materially deceive the public on a matter of political importance in the run-up to the election, we’re going to be pretty vigilant,” Nick Clegg, Meta’s president of global affairs, told The Verge. “Do I think that there is a possibility that something may happen where, however quickly it’s detected or quickly labeled, nonetheless we’re somehow accused of having dropped the ball? Yeah, I think that is possible, if not likely.”
Meta is building tools that can identify invisible markers at scale, specifically metadata conforming to the C2PA and IPTC standards, which encode the origins of different content types.
Related: Disney shakes up sports as costs surge and activists circle ahead of earnings report
OpenAI’s image verification
OpenAI on the same day announced that it is integrating C2PA metadata into its models, enabling people and organizations like Meta to verify if a given image was created using the company’s DALL-E 3 model.
While it sounds like a good solution to issues of AI-generated misinformation, OpenAI noted that this metadata can be removed pretty easily from an image; taking a screenshot of an AI-generated image and uploading the screenshot removes the metadata. The company added that most social media platforms remove metadata from uploaded images, meaning this integration is “not a silver bullet to address issues of provenance.”
More deep dives on AI:
- Think tank director warns of the danger around ‘non-democratic tech leaders deciding the future’
- US Expert Warns of One Overlooked AI Risk
- Artificial Intelligence is a sustainability nightmare — but it doesn’t have to be
AI researcher Chomba Bupe said in response that the move is self-serving, ensuring that future OpenAI models are not trained on previously generated synthetic images, in order to “avoid model collapse.”
“Makes their future data scraping efforts much, much easier,” he said, adding: “I know maybe, there is a slim chance they actually do care, but I doubt it.”
Lou Steinberg, a deepfake expert and the founder of cyber research firm CTM Insights, told TheStreet last week that efforts to detect fake images and content represent a flawed approach.
“It’s much easier to check if a small number of things are real vs if an infinite number are fake,” he said, suggesting the importance of verifying metadata not in AI image generators but in camera apps.
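Steinberg's point can be illustrated with a simple signing scheme: if a camera app attaches a cryptographic signature at capture time, a verifier only needs the right key to confirm an image is real, rather than trying to detect every possible fake. The sketch below is a hypothetical illustration, not Steinberg's or CTM's actual design; it uses a shared-secret HMAC as a stand-in for the public-key signatures a real provenance system would use.

```python
import hashlib
import hmac

# Illustration only: a real camera would hold a private signing key,
# with verifiers using the matching public key. A shared secret
# stands in for that key pair here.
CAMERA_KEY = b"per-device secret provisioned at manufacture"

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Camera app: attach a signature the moment the photo is taken."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, signature: bytes) -> bool:
    """Verifier: check one signature rather than detect every fake."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"raw sensor bytes"
sig = sign_at_capture(photo)
print(verify(photo, sig))              # authentic capture checks out
print(verify(photo + b"edited", sig))  # any alteration fails the check
```

The asymmetry is the point: verification is a single constant-time check against a known key, while deepfake detection is an open-ended contest against an unbounded space of generators.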
Related: Stock Market Today: Stocks higher as markets track Fed rate bets, earnings
Google’s data privacy settlement
Google on Monday agreed to pay $350 million to settle a 2018 class action lawsuit over a security glitch in the tech giant’s now-defunct Google Plus social media platform that allegedly exposed user data.
Google denied any wrongdoing under the terms of the settlement.
“We regularly identify and fix software issues, disclose information about them and take these issues seriously,” spokesperson Jose Castaneda said. “This matter concerns a product that no longer exists and we are pleased to have it resolved.”
The case was initially dismissed in 2020 before the 9th U.S. Circuit Court of Appeals reinstated it in 2021.
Related: Human creativity persists in the era of generative AI
Snap falls 30%
Shares of Snap tumbled more than 30% Wednesday, following an earnings report and guidance that came in below expectations.
The social media company earned eight cents for the quarter — above expectations of six cents — on revenue of $1.36 billion, below expectations of $1.38 billion.
The company’s average revenue per user of $3.29 also came in below Street expectations.
“While we are encouraged by the progress we are making with our ad platform and the improved results we are delivering for many of our advertising partners, we estimate that the onset of the conflict in the Middle East was a headwind to year-over-year growth of approximately 2 percentage points in Q4,” Snap said in a statement.
Snap last week cut 10% of its staff, about 500 employees, in what it called an effort to “reduce hierarchy and promote in-person collaboration.”
Related: Deepfake porn: It’s not just about Taylor Swift
The AI Corner: Must-read research
If you’ve heard about all the ways in which AI companies scrape data, you might have come across the term “Common Crawl,” the name of a nonprofit organization that has made freely available some of the largest data sets used to train AI models.
ChatGPT, for instance, was trained on data from Common Crawl.
A new report from data scientist Stefan Baack, published by Mozilla, breaks down Common Crawl in depth, highlighting the ways in which the organization has, at least in part, shaped efforts to build “trustworthy” AI models.
Common Crawl has become a focal point for many copyright lawsuits and concerns, notably the New York Times’ lawsuit against OpenAI.
“Often it is claimed that Common Crawl contains the entire web, but that’s absolutely not true,” a main Common Crawl engineer told Baack. “Based on what I know about how many URLs exist, it’s very, very small.”
Read the full report here.
Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
Related: Veteran fund manager picks favorite stocks for 2024