Privacy Isn't Free & The EU Uses Ad Targeting Banned by their Own Laws (Issue 28, 2023)
Also, Australia endangers asylum seekers by publicizing their data and UK Biobank gave people's donated health data to insurance companies...
We’re delivering you the hottest internet news that affects all of us. Scroll down to read our full reporting below and if you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique reporting.
Talk Liberation is committed to providing equal access for individuals with disabilities. To view an accessible version of this article, click here.
In this edition:
Signal Details the ‘High Costs’ of Privacy
AI Cameras are Taking over America
Australia’s Gov to pay Millions in Asylum Seeker Data Breach
BetterHelp shared users’ sensitive health data, FTC says
UK Gives Donated Health Data to Insurance Companies
EU Lawmakers Allegedly Violated their own Privacy Laws
Commerce Department and VC firms encourage AI use for startups
Signal Details the ‘High Costs’ of Privacy
A recent blog post from the private messaging app Signal shows just how much it costs to provide its end-to-end encrypted (E2EE) service. With tens of millions of people using the app each month, Signal’s current costs are about $33 million per year and are expected to rise to roughly $50 million by 2025.
As a nonprofit, Signal most likely intends the post to encourage donations so it can continue providing a free, privacy-oriented service, and to highlight how expensive it is for a company to go against the industry norm of surveilling its users.
Further explaining the motivation for the post, Signal stated: “To put it bluntly, as a nonprofit we don’t have investors or profit-minded board members knocking during hard times, urging us to ‘sacrifice a little privacy’ in the name of hitting growth and monetary targets. This is important in an industry where ‘free’ consumer tech is almost always underwritten by monetizing surveillance and invading privacy.”
AI Cameras are Taking over America
During a recent demonstration of Fusus, an AI-powered surveillance system that is rapidly spreading through small-town America and major cities alike, the company showcased live camera feeds from multiple businesses, traffic intersections, and even a private residence’s doorbell. The system can also scan those feeds for people wearing certain clothes or carrying a particular bag, and search for a specific vehicle.
The system is marketed primarily to law enforcement, and city councils across the US are currently debating just how much surveillance they want in their jurisdictions. While many have already paid for Fusus’ big-brother service, others are pushing back and fighting against having the software and hardware installed in their neighborhoods.
According to Beryl Lipton from the Electronic Frontier Foundation (EFF), “The lack of transparency and community conversation around Fusus exacerbates concerns around police access of the system, AI analysis of video, and analytics involving surveillance and crime data, which can influence officer patrols and priorities.”
Australia’s Gov to pay Millions in Asylum Seeker Data Breach
After publicly posting the personal details of nearly 10,000 people seeking asylum in Australia, the government has agreed to pay $20,000 to anyone suffering “extreme loss and damage” resulting from a spreadsheet published in 2014 by the Immigration and Border Protection Department (now Home Affairs).
While the total government payout could run into the tens of millions of dollars, the compensation stems from the fact that the exposed personal information was allegedly used by third parties to threaten asylum seekers, or to persecute and even jail their family members.
A postmortem of the situation revealed that the data was “accessed more than 100 times, including from IP addresses in China, Russia, Egypt and Pakistan, and from masked anonymous locations”.
The fallout from this privacy breach was illustrated by a Chinese asylum seeker, who stated that when “public security officers” learned they “had escaped to Australia to seek protection”, their grandmother’s home was immediately searched and, tragically, “During a struggle, the [asylum seeker’s] grandmother suffered a fatal injury”.
BetterHelp shared users’ sensitive health data, FTC says
BetterHelp, an online counseling service, has agreed to pay customers $7.8 million in a settlement with the United States Federal Trade Commission (FTC). The payout results from BetterHelp sharing data it had promised to keep private, such as mental health challenges and other sensitive information, with companies including Facebook and Snapchat.
The FTC has recently cracked down on similar situations regarding sensitive health data and has made it clear that businesses not strictly classified as health care providers will be held accountable, despite not being subject to regulations such as HIPAA.
BetterHelp provided a statement, calling the data-sharing practices for which it was sanctioned “industry-standard practice” that is “routinely used by some of the largest health providers, health systems, and healthcare brands.” They added, “Nonetheless, we understand the FTC’s desire to set new precedents around consumer marketing, and we are happy to settle this matter with the agency.”
UK Gives Donated Health Data to Insurance Companies
British newspaper The Observer recently revealed that private health information from half a million UK citizens was shared with insurance companies, despite a promise not to do so from UK Biobank, the organization that collected the data.
The investigation found that, several times between 2020 and 2023, UK Biobank gave insurance-sector firms broad access to the data, conflicting with an FAQ page posted on Biobank’s website in 2006 that stated: “Insurance companies will not be allowed access to any individual results nor will they be allowed access to anonymized data.”
Adding more fuel to the fire are several public statements from Biobank backers, who said safeguards would be built in to ensure that “no insurance company or police force or employer will have access”.
The UK’s data privacy watchdog, the Information Commissioner’s Office, is considering becoming involved in the matter and recently commented: “People have the right to expect that organisations will handle their information securely and that it will only be used for the purpose they are told or agree to. Organisations must provide clear, accurate and comprehensive information…especially where sensitive personal information is involved.”
EU Lawmakers Allegedly Violated their own Privacy Laws
Privacy rights not-for-profit noyb has filed a complaint alleging that European Union lawmakers engaged in privacy-hostile practices banned by the very laws they helped pass.
The primary offender in noyb’s complaint is the EU Commission’s Directorate-General for Migration and Home Affairs, whose actions involved “unlawful micro-targeting” of ads on X (formerly Twitter) to garner support for a legislative proposal.
In a press release from noyb, the organization stated: “While online advertising isn’t illegal per se, the EU Commission targeted users based on their political views and religious beliefs. Specifically, the ads were only shown to people who weren’t interested in keywords like #Qatargate, brexit, Marine Le Pen, Alternative für Deutschland, Vox, Christian, Christian-phobia or Giorgia Meloni.”
Commerce Department and VC firms encourage AI use for startups
The Responsible AI Initiative is a joint effort between the Commerce Department and the Responsible Innovation Lab (RIL), a consortium of tech industry players that advocates for safe innovation practices and targets AI startups. RIL also represents one of the first, if not the first, AI lobbying groups in the United States.
Hemant Taneja, one of RIL’s founders, recently offered a possible motivation for creating the group, saying AI is “heading toward two hermetically sealed ecosystems: one that supports open systems but is also associated with democracy, privacy, and individual rights, versus one that supports state control, information-flow restriction, and politically imposed limits on openness.”
Explaining the deliberate focus on startups, a RIL spokesperson said: “These big companies have responsible AI teams, they have trust and safety teams. People have PhDs in this, and early-stage companies don’t have that.” RIL is also concerned that regulations may be written to favor only the largest tech players, according to a person close to the matter.
That concludes this edition of Your Worldwide INTERNET REPORT!
Remember to SUBSCRIBE and spread the word about this unique news service.
This issue of Your Worldwide INTERNET REPORT was written by Matt Millen of WillenRimer; Edited by Suzie Dawson and Sean O’Brien; Graphics by K4t4rt; with production support by Beth Bracken.
Talk Liberation - Your Worldwide INTERNET REPORT was brought to you by Panquake.com. We Don’t Hope, We Build!
© Talk Liberation Limited. The original content of this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license. Please attribute copies of this work to “Talk Liberation” or talkliberation.com. Some of the work(s) that this program incorporates may be separately licensed. For further information or additional permissions, contact licensing@talkliberation.com