AI Chaos, Face Scans on Social Media & Meta's Solution to "Loneliness Epidemic" (Issue 46, 2025)
AI bots are out of control - subjecting social media users to "persuasion" techniques, engaging in spying, creating nonconsensual porn, and even promising qualified therapy services.
We’re delivering you the hottest internet news that affects all of us. Scroll down to read our full reporting below and if you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.
Talk Liberation is committed to providing equal access for individuals with disabilities. To view an accessible version of this article, click here.
In this edition:
California Can Now Sue Out-of-State Companies for Violating Privacy Laws
AI Ran Secret & Unauthorized ‘Persuasion Experiment’ on Reddit Users
Mental Health Practitioners Sound Alarm on Popularity of AI Therapists
Screen-Recording AI Leaks Sensitive Data on Companies & Employees
Police Use AI Spies to Interact Online with People They Deem Suspicious
AI Bots Allow 100k+ Active Users to Make Nonconsensual Porn
Users in UK, Australia Now Need to Scan Their Face to Use Discord
Meta Says it Has Solution to World’s Loneliness Epidemic
California Can Now Sue Out-of-State Companies for Violating Privacy Laws
A recent California ruling sets a crucial precedent, where the Court upheld that consumers can sue national or multinational companies if those companies violate state data privacy laws.
The case, Briskin v. Shopify, alleged that Shopify, an e-commerce support company, installed tracking software on a California user’s devices without his knowledge or consent, using it to secretly collect data about him and create a “risk profile.”
Lower courts initially dismissed the lawsuit, reasoning that "collecting data on Californians along with millions of other users was not enough; to be sued in California, Shopify had to do something to target Californians in particular." On appeal, the Court rejected that reasoning and, importantly, overruled earlier cases that suggested such "express aiming" was required.
Given Congress's failure to pass comprehensive privacy legislation, this ruling helps ensure that individual state laws carry real weight. In an era of ever-increasing corporate surveillance, that is a crucial win.
AI Ran Secret & Unauthorized ‘Persuasion Experiment’ on Reddit Users
Can AI change your views? Researchers ran a secret 'persuasion' experiment on Reddit users, in which bots profiled users according to age, gender, race, and location, and then created personas similar enough to each user to seem relatable, but with opposing views.
One of the most talked-about and controversial personas was a male victim of rape. The bot invented a story of statutory rape of a 15-year-old boy by a woman, saying "I’m not especially a victim in any real sense of the word…I’m not especially traumatized, it is what it is." It concluded that rape of men was potentially less serious than rape of women. It’s unclear how many users were "persuaded" of these views.
Since then, Reddit has announced that it is issuing "formal legal demands" against the researchers who conducted the experiment.
Mental Health Practitioners Sound Alarm on Popularity of AI Therapists
AI therapists are gaining popularity, largely due to lower costs compared to a human therapist, along with perceived privacy and convenience. This is despite the fact that online therapy companies have already paid out millions in settlements for user data leaks.
Of course, AI chatbots are not trained professionals, nor do they possess the soft skills of humans, and they often struggle with contextual awareness. Instead, a chatbot searches its large catalogue of training data for questions of a similar type, compiles the answers, and delivers what it believes makes sense. There is currently little ability to detect nuance, provide culturally appropriate support, or recognize dangerous intent and behaviors.
AI chatbots present a paradox: they are unqualified, but available at any hour of the day, promising self-sufficiency in managing one’s mental health. This can not only make the help-seeking process extremely isolating but also create a ‘therapeutic misconception,’ where the user believes they are independently and privately taking positive steps toward improving their mental health.
Screen-Recording AI Leaks Sensitive Data on Companies & Employees
Some companies that use AI to measure worker efficiency are arguably overstepping reasonable employment regulations. Now, it seems such tactics can also backfire, as the WorkComposer data leak shows. The app takes screenshots of an employee’s computer every 3-5 minutes so that managers can review performance metrics.
Recently, 21 million of those screenshots were leaked, exposing logins, company communications, and everything else you’d expect to see on an employee’s computer screen.
As digital tools grow more refined, companies are subjecting employees to ever-greater surveillance, and ever-greater risk. With roughly 200,000 companies using the app in the US, the security of thousands of employees and their employers is at risk, as real-time images can and do end up on the internet.
As of May 2025, WorkComposer’s website tagline in search engine results claims it has “bullet-proof security.” Ironically, if a worker committed the kind of incompetence that WorkComposer did, it could be grounds for dismissal.
Police Use AI Spies to Interact Online with People They Deem Suspicious
Overwatch is a new product from New York-based AI company Massive Blue that’s helping law enforcement talk to people they deem suspicious. Overwatch offers social media bots that collect intelligence via interactions with a wide range of groups. According to public records, these groups can be anyone from college protestors and “radicalized political activists” to suspected drug and human traffickers.
Despite the product being unproven in the field and relying on secretive technology to create AI-generated online personas, American police departments are paying hundreds of thousands of dollars for Overwatch and the ability to interact with large numbers of potential suspects over text and social media. A $360,000 contract between Massive Blue and Pinal County, Arizona has led to over 50 deployed AI personas but has not yet led to any arrests. A neighboring Arizona county, Yuma, recently chose not to upgrade its $10,000 trial contract with Massive Blue, stating that “it did not meet our needs.”
Also worth noting is the secrecy that Massive Blue and its paying law enforcement customers have maintained about the product. At a recent public hearing, the Pinal County sheriff’s office went so far as to refuse to tell county council members what Overwatch even is, citing the familiar law enforcement defence that this information would “tip our hand to the bad guys.”
AI Bots Allow 100k+ Active Users to Make Nonconsensual Porn
Unfortunately, the rise of AI image generation has also amplified the creation of nonconsensual pornography. A recent investigation found that a Telegram bot producing abusive videos had already amassed 115,016 users in the few short weeks it’s been active.
This particular bot, and many similarly abusive ones, are built using ‘open’ AI tools and accessed through Telegram. This method of production and hosting highlights a lack of accountability both for the Big Tech firms who release these tools and for Telegram, where these activities are allowed to persist despite violating its Terms of Service.
The bot works by giving each new user one free ‘credit’ to upload a still image, which it turns into a potentially nonconsensual video. The user can then purchase more ‘credits,’ and receives a discount for paying in cryptocurrency. Women targeted by these bots have reported feeling disturbed and threatened, but no solution has yet emerged.
Users in UK, Australia Now Need to Scan Their Face to Use Discord
Discord describes the new process as “an experiment,” a response to local governments restricting children’s access to sensitive material online. Australia was the first to ban social media for children under 16, though the law has not yet come into effect. The UK’s regulator, Ofcom, has stated that all websites where pornographic material can be found, including social media, must comply with age verification checks by July.
Now, Discord users may be asked to verify their age by scanning their face with a phone camera, or by scanning a QR code that takes them to a page where they can upload a photo of an ID document. The process isn’t perfect: users who believe their verification was in error (for example, an adult whose face scan determines they’re a child) can try to verify their age again, which may create a circuitous loop for many. Further questions remain. In the case of siblings, with one passing for another, how might fraud by minors be followed up?
While children must be protected from accessing unsafe material, what are the implications of potentially millions of faces sitting in the database of a hypothetical pornographic platform, with few regulations around storage, use, and flow-on impacts in a digital world?
Meta Says it Has Solution to World’s Loneliness Epidemic
In a recent podcast, Meta CEO Mark Zuckerberg claimed that AI girlfriends and therapists will eventually become valuable parts of modern society. Speaking more broadly about AI companions, he bemoaned the “stigma” around them, but went on to say that they will solve the problem of people feeling “more alone a lot of the time than they would like.”
Currently, Meta operates its AI Studio, where users create their own chatbot characters, but 404 Media has observed issues with therapist chatbots there that “lie about being licensed and fabricate credentials to keep users engaged.”
Also of concern to journalists were AI Studio chatbots engaging in sexual speech with minors. Responding to questions about this, a Meta spokesperson accused the inquiry of contriving ‘fringe’ scenarios in an attempt to force the platform to break and produce harmful content. Notably, AI Studio is now inaccessible to minors.
That concludes this edition of Your Worldwide INTERNET REPORT!
Remember to SUBSCRIBE and spread the word about this unique news service.
This issue of Your Worldwide INTERNET REPORT was produced by the Talk Liberation Digital Media Team, and is brought to you by Panquake.com – We Don’t Hope, We Build!
© Talk Liberation Limited. The original content of this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license. Please attribute copies of this work to “Talk Liberation” or Talkliberation.com. Some of the work(s) that this program incorporates may be separately licensed. For further information or additional permissions, contact licensing@talkliberation.com
Check out Truthstream Media's documentary on AI.