8 Ways Big Brother Is Gorging Itself On Your Data This Week (Issue 49, 2025)
From Palantir’s mass data mining to ICE’s facial recognition, US intel, military AI, and Big Tech are rapidly eroding privacy rights and other fundamental freedoms
We’re delivering you the hottest internet news that affects all of us. Scroll down to read our full reporting below, and if you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.
Talk Liberation is committed to providing equal access for individuals with disabilities. To view an accessible version of this article, click here.
In this edition:
US Government Using Palantir to Build Mass Surveillance System
Tinder Now Requires ‘Video Selfies’ for New Users in California & Canada
Pentagon Goes All-In on Drone Warfare With Massive 2026 Funding Boost
Defense Department Partners With OpenAI on Next-Generation AI Military Projects
Leaked Emails Reveal ICE Use of Facial Recognition App to ID Everyone
British Police Pilot AI System Profiling Individuals Using Sensitive Data
COPPA 2.0: The Age Verification Trap Threatening Online Privacy
Digital Services Act: EU Disinformation Rules Fuel Trade Clash With U.S.
US Government Using Palantir to Build Mass Surveillance System
The Trump administration has significantly expanded its partnership with Palantir Technologies, the data-mining firm co-founded by billionaire Peter Thiel, to centralize vast amounts of federal data on American citizens. According to a New York Times report, Palantir has secured over $113 million in federal contracts since President Trump took office, including deals with the Department of Homeland Security (DHS) and the Pentagon, as well as talks with the Social Security Administration and the IRS.
The administration relies heavily on Palantir’s Foundry software, which organizes and analyzes data across agencies, fueling fears that it could enable unprecedented surveillance capabilities. Critics of data-mining and surveillance tech warn that this consolidation would allow the government to build detailed profiles of citizens, tracking everything from medical histories to financial records and weaponizing that data for political targeting. Though the effort was initially framed as a “cost-saving” measure, a Times investigation revealed that behind the scenes, officials are leveraging Palantir’s tech to merge disparate datasets, including immigration records, tax filings, and healthcare data.
The Department of Government Efficiency (DOGE), formerly headed by Elon Musk and staffed in part by former Palantir employees, is reportedly driving the effort. Internal dissent has also emerged: 13 former Palantir employees signed a letter condemning the company’s collaboration with the administration, arguing that aggregated data increases the risk of misuse or breaches. Palantir has vehemently denied allegations of enabling mass surveillance, asserting that it operates as a “data processor” rather than a collector and emphasizing its commitment to privacy protections.
Tinder Now Requires ‘Video Selfies’ for New Users in California & Canada
Tinder has rolled out a mandatory face liveness detection system for new users in California and Canada, marking a significant escalation in its efforts to combat fake profiles and AI-generated impersonations. The feature, called “Face Check,” requires users to submit a video selfie during account setup, which is then analyzed by FaceTec’s biometric technology to verify that the person is real and matches their profile photos. Unlike Tinder’s existing ID Check system—which cross-references government IDs for age and identity—Face Check focuses on detecting liveness, proving the user is present and preventing duplicate or deepfake accounts, a growing concern in online dating.
The move follows successful pilot programs in Colombia, Canada, Australia, and New Zealand, where early results showed a reduction in bots and fraudulent activity, according to Yoel Roth, Match Group’s head of trust and safety. California was chosen for its large, diverse user base and stringent online privacy laws, which align with Tinder’s need to balance security with data protection. The feature also targets rampant problems like catfishing and romance scams, which cost U.S. victims over $1.1 billion in 2023.
Match Group, Tinder’s parent company, recently cut 13% of its workforce due to declining revenues and user engagement. While Face Check seeks to restore trust in the platform, some question the long-term storage of encrypted face maps, even as Tinder assures users that video selfies are deleted post-verification. The initiative reflects the dating industry’s pivot toward AI-driven safety tools, though its ultimate success hinges on user adoption and regulatory scrutiny in an era of heightened concern over biometric surveillance.
Pentagon Goes All-In on Drone Warfare With Massive 2026 Funding Boost
The U.S. Department of Defense’s fiscal year 2026 budget includes a historic $13.4 billion investment in autonomous and non-crewed systems, marking the first time such funding has been designated as a standalone category. Senior defense officials revealed that the allocation spans multiple domains, with $9.4 billion earmarked for unmanned aerial vehicles, $1.7 billion for maritime autonomous systems, and $734 million for underwater capabilities, alongside $1.2 billion for cross-platform autonomy software. The budget also dedicates $3.1 billion to counter-drone technologies, reflecting the military’s focus on the threat posed by inexpensive, proliferating unmanned systems that overwhelm traditional defenses.
A significant portion of the funding will bolster the Navy’s unmanned initiatives, including $5.3 billion for sea-based autonomy—a $2.2 billion increase over FY2025—to procure three MQ-25 Stingray refueling drones and advance medium unmanned surface vessels. The Pentagon emphasized the operational urgency of these investments, noting that carrier strike groups in the Middle East are "engaged in combat every day" against enemy drones.
The budget’s focus on autonomy aligns with broader efforts to modernize the military’s technological edge, though challenges remain in integrating these systems. Officials emphasize the need for interoperable "central brain" software to synchronize disparate platforms, while critics warn of over-reliance on expensive solutions to counter asymmetric threats. As the Pentagon pushes to accelerate innovation, the 2026 defense budget signals a decisive shift toward unmanned warfare—one that could redefine battlefield dynamics.
Defense Department Partners With OpenAI on Next-Generation AI Military Projects
The U.S. Department of Defense has awarded OpenAI a $200 million contract to develop cutting-edge AI prototypes aimed at addressing critical national security challenges in both warfighting and administrative domains. The agreement, managed by the Pentagon’s Chief Digital and AI Office (CDAO), focuses on creating "agentic workflows"—semi-autonomous AI systems capable of handling tasks traditionally performed by humans, from logistics and operational planning to healthcare management and cybersecurity. The project aligns with the CDAO’s broader strategy to integrate commercial AI solutions into defense operations, with an estimated completion date of July 2026.
The partnership underscores the Pentagon’s urgency to leverage private-sector innovation, particularly amid rising competition with China’s state-backed AI advancements. OpenAI quietly removed a ban on military applications from its usage policy in early 2024 and now permits defense collaborations as long as they exclude weapons development. The deal also highlights the CDAO’s evolving role as a central hub for AI integration, despite internal challenges. OpenAI’s newly launched "OpenAI for Government" initiative will oversee the project, offering secure access to advanced models like ChatGPT Gov and custom AI tools tailored for “national security” needs, building on existing partnerships with NASA, the NIH, and other agencies.
While the initiative promises efficiency gains—like streamlining veterans’ healthcare or automating compliance paperwork—it raises ethical and operational concerns. The CDAO has hinted at additional partnerships with other frontier AI firms, signaling that this deal is likely the first of several. As OpenAI steps into its role as a federal contractor, the project could set a precedent for how AI transforms national security, balancing innovation with the imperative to safeguard ethical and operational integrity.
Leaked Emails Reveal ICE Use of Facial Recognition App to ID Everyone
Immigration and Customs Enforcement (ICE) has begun using a new smartphone app called Mobile Fortify that allows agents to identify individuals in real time by scanning their faces or fingerprints with a mobile phone camera, according to leaked internal emails. The app connects to Customs and Border Protection (CBP) databases, which typically collect biometric data from travelers entering or exiting the U.S., but is now being repurposed for domestic immigration enforcement. Mobile Fortify enables ICE’s Enforcement and Removal Operations (ERO) officers to instantly match a person’s face or fingerprints against DHS records, significantly expanding the agency’s reach in conducting on-the-spot identity checks in public spaces, private workplaces, and other non-border areas.
Civil liberties advocates see the tool as a dangerous escalation in surveillance, as facial recognition technology is prone to false matches and could lead to wrongful arrests. The app’s rollout aligns with the Trump administration’s aggressive deportation policies, which have prioritized the use of advanced technology to meet arrest quotas. It also exemplifies how biometric systems designed for limited purposes, like border security, can be quietly expanded into broader, unchecked surveillance tools.
ICE defends the app, stating it provides "real-time biometric identity verification" without requiring additional hardware. The lack of transparency around its deployment, including unclear safeguards for data retention, oversight, and error correction, has raised constitutional concerns under the Fourth Amendment. With ICE already relying on unsecured mobile devices and expansive data-mining practices, Mobile Fortify points toward pervasive biometric policing on American streets.
British Police Pilot AI System Profiling Individuals Using Sensitive Data
British police forces have begun implementing an AI-powered profiling system called Nectar, developed in partnership with Palantir Technologies, which consolidates sensitive personal data from approximately 80 sources—including race, health records, political views, religious beliefs, and trade union membership—into a unified intelligence platform. The system, currently piloted by Bedfordshire Police and the Eastern Region Serious Organised Crime Unit, aims to “enhance crime prevention” by generating in-depth profiles of suspects, victims, and vulnerable individuals, including minors. However, privacy advocates have raised alarms over the potential for misuse, particularly regarding data retention, algorithmic bias, and the risk of innocent individuals being flagged without oversight.
The initiative is part of a broader UK government push to integrate AI into public services, with Palantir’s Foundry platform organizing existing law enforcement data to accelerate investigations and identify at-risk groups. Yet critics argue the system effectively enables mass surveillance, creating "360-degree profiles" of individuals without adequate transparency or legal safeguards. The system also risks reinforcing racial profiling, given Palantir’s controversial history with tools like ICE’s migrant-tracking software.
While Palantir and police insist Nectar only accesses lawfully held data and avoids predictive policing, the lack of parliamentary scrutiny and vague deletion protocols fuel skepticism. Amnesty International warns such tools "supercharge racism" by codifying biased policing practices. The Home Office is considering a national rollout based on pilot results.
COPPA 2.0: The Age Verification Trap Threatening Online Privacy
A proposed update to the Children’s Online Privacy Protection Act (COPPA 2.0) aims to strengthen protections for minors by raising the covered age range from under 13 to under 17 and requiring platforms to obtain consent from teens aged 13–16 for data collection. While framed as a necessary modernization of digital privacy law, the bill’s ambiguous language could force platforms to adopt invasive age-verification systems for all users. Such measures, likely relying on government IDs or biometric scans, risk creating new surveillance infrastructures that extend far beyond their original intent, centralizing sensitive personal data vulnerable to breaches or misuse.
The shift from reactive to proactive age identification compels companies to analyze behavioral cues, usage patterns, or contextual data to avoid legal penalties. This could lead to widespread deployment of facial recognition, ID uploads, or other intrusive verification methods, even on platforms not primarily serving minors. The absence of federal safeguards for these data pools compounds the risk: once collected, biometric and identity information may be retained indefinitely or shared with third parties without transparency. The bill’s supporters, including bipartisan lawmakers and major tech firms, argue it closes loopholes exploited by advertisers, but opponents counter that it outsources privacy risks to users, disproportionately affecting minority communities reliant on anonymity for safety.
Beyond privacy erosion, COPPA 2.0 threatens to reshape online access by erecting barriers to participation. Age-gating could deter users from public forums, educational resources, or creative platforms unless they submit to verification, impeding free expression. While the bill’s intent—protecting minors from predatory data practices—is widely endorsed, its execution risks normalizing mass surveillance under the guise of child safety, underscoring the need for precise definitions and enforceable data rights to prevent unintended consequences.
Digital Services Act: EU Disinformation Rules Fuel Trade Clash With U.S.
The European Union’s Code of Practice on Disinformation, now formally integrated into the Digital Services Act (DSA), took full effect on July 1, 2025, imposing stringent transparency and auditing requirements on major tech platforms like Meta, Google, and TikTok. Under the new rules, designated Very Large Online Platforms (VLOPs) and Search Engines (VLOSEs) must demonstrate compliance with measures to combat disinformation—such as algorithmic transparency and ad disclosure—or face fines and investigations by EU regulators. The European Commission insists the framework targets systemic risks like manipulative algorithms, covert advertising, and “harmful content,” rather than censorship.
The policy has sparked transatlantic friction, with some U.S. officials accusing the EU of exporting "de facto global censorship" by forcing platforms to adopt Europe’s standards worldwide. In May 2025, Rep. Jim Jordan (R-OH) and colleagues warned the DSA could restrict Americans’ speech, as companies may harmonize moderation policies globally. The EU counters that the Code focuses on corporate accountability, not content removal.
The clash unfolds against a backdrop of high-stakes trade negotiations, with the Commission explicitly stating the DSA is "not on the table" in talks, mirroring tensions seen in Canada’s recent digital tax dispute with Washington. The U.S. frames the rules as protectionist and the EU defends them as democratic safeguards. The dispute risks escalating into a broader trade war over tech governance.
That concludes this edition of Your Worldwide INTERNET REPORT!
Remember to SUBSCRIBE and spread the word about this unique news service.
This issue of Your Worldwide INTERNET REPORT was produced by the Talk Liberation Digital Media Team, and is brought to you by Panquake.com – We Don’t Hope, We Build!
© Talk Liberation Limited. The original content of this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license. Please attribute copies of this work to “Talk Liberation” or Talkliberation.com. Some of the work(s) that this program incorporates may be separately licensed. For further information or additional permissions, contact licensing@talkliberation.com