News
- Security researchers found Russian network fingerprints on 12 free VPNs available on Google and Apple's app stores, and Chinese traces on six
- Five of these VPNs are also thought to have ties with a Shanghai-based firm believed to have links with the Chinese military
- While network fingerprints don’t necessarily signal Chinese or Russian ownership, experts advise caution
Twelve free VPN services available on the official Google Play and Apple App Store may have links with Russia, and six with China.
These are the findings from security researchers at Comparitech, who analyzed 24 VPNs and found Russian and Chinese network fingerprints on a total of 12 apps. Two of them (Turbo VPN, VPN Proxy Master) also include Chinese or Russian SDKs (software development kits) that, according to experts, "are clear indicators that the SDK was intentionally bundled into the app."
The researchers build on work by the Tech Transparency Project, which in April uncovered that millions of free VPN users across 20 apps may have unknowingly sent their data to China. On that occasion, researchers found that Turbo VPN and VPN Proxy Master, alongside three additional services (Thunder VPN, Snap VPN, and Signal Secure VPN), have ties with a Shanghai-based firm believed to have links with the Chinese military.
While these traces don't necessarily signal Russian or Chinese ownership, experts advise current users to exercise caution regarding their data privacy.
"China and Russia both force domestically-owned VPNs to register with the government and adhere to local laws, which may impact user privacy. For this reason, no Chinese or Russian VPN can offer a trustworthy 'no-logs' service, which is the only type of VPN we recommend," wrote researchers.
Which apps are impacted?
The impacted VPN apps can be divided into three groups:
- Six communicate with Chinese domains: Signal Secure VPN (Android), Turbo VPN (Android), VPN Proxy Master (Android), Snap VPN (Android), Now VPN (iOS), and Ostrich VPN (iOS)
- Eight Android apps communicate with Russian IP addresses: QuarkVPN, VPNify, Signal Secure VPN, Turbo VPN, VPN Proxy Master, Snap VPN, VPN Free, and Proxy Master
- Four iOS apps (NowVPN, WireVPN, FastVPN Super, and VPN - Fast VPN Super) communicate with Apple.com domains hosted in Russia, but only the latter two also contact third-party Russian domains
Four Android VPN apps (Signal Secure VPN, Turbo VPN, VPN Proxy Master, and Snap VPN) have links with both Chinese and Russian domains.
All in all, "Apple doesn’t list any of the VPN apps that, on their Android versions, communicate with third-party Russian domains. Based on this, Apple appears to be more strict about removing Russia-linked VPNs than China-linked ones," experts wrote.
What does it mean for your privacy?
While the best VPN services promise to boost online privacy by encrypting your online communications and working with strict no-logs policies, both China and Russia impose greater control and data retention requirements on domestic VPNs.
As mentioned earlier, the traces that researchers found don't necessarily indicate Chinese and Russian ownership. Yet, "it may be an indicator of potential ties, especially when combined with other signals," such as a Chinese or Russian SDK, publisher metadata, or similar behavior.
On a practical level, it means that the app may route some data or logs via servers located either in China or Russia. Foreign SDKs, especially, could signal deeper control or development origin, according to experts.
As a rule of thumb, you should avoid unverified free VPN apps, regardless of their ownership, as they can make you vulnerable to all sorts of privacy and security risks – from invasive ad-tracking to malware and even foreign surveillance.
If you're looking for a secure freebie, I recommend checking our up-to-date free VPN guide, with Privado VPN, Proton VPN, and Windscribe VPN being today's top picks. If you're willing to go premium, NordVPN is TechRadar's top-rated service at the time of writing.
You might also like
- The Google Pixel 10 series doesn't include Battery Share
- This feature was removed to allow for Qi2 magnetic wireless charging
- Not all Pixel fans are happy about this change
The Google Pixel 10 series comes with a number of new features, but it’s also missing some things, with Battery Share notably being absent from these phones.
That’s Google’s name for its reverse wireless charging feature – in other words, the ability to use your phone to wirelessly charge other devices. It’s a common feature on Android handsets, including the Google Pixel 9 series, but it’s missing from the Pixel 10.
However, there’s a good reason for this, as DroidReader asked Google and was told that the array of magnets required for Qi2 magnetic wireless charging “creates a strong connection with the charger but presents a physical limitation for reverse wireless charging.”
So in other words, the addition of Qi2 magnetic wireless charging (which allows you to use the new Pixelsnap accessories) meant Google had to remove Battery Share.
MagSafe with Pixel branding
The Google Pixel 10 Pro with a Pixelsnap accessory (Image credit: Philip Berne / Future)
For most people, we’d wager this is a good trade. Pixelsnap is a lot like MagSafe – it’s an ecosystem of wireless chargers and accessories like stands and grips that can attach to the back of your Pixel 10 with magnets.
It’s a handy feature, but not everyone is happy about this change, with a Reddit thread including comments like “this one hurts a lot”, and “Battery Share I found to be such a useful feature”.
Still, other commenters said they hardly, if ever, used it, so it’s certainly not a universally loved feature. Hopefully, even those who did love Battery Share will come to appreciate Pixelsnap too – but if not, other brands like Samsung still offer similar reverse wireless charging capabilities.
You might also like
- Google execs have been talking hardware following the Pixel 10 launch
- Flip phones, smart rings, and tablets aren't on the way
- The company is concentrating on phones and AI instead
We've just been treated to a host of new Google Pixel devices, including four different Pixel 10 phones, but we also have news about Google devices that aren't coming – including a flip foldable and a successor to the Pixel Tablet from 2023.
Speaking to Mark Gurman and Samantha Kelly at Bloomberg, Google's Vice President of Devices and Services Shakil Barkat confirmed that there are no plans for a Google flip foldable to join the Pixel 10 Pro Fold.
Barkat also ruled out a smart ring, and said the Pixel Tablet line is on pause until a "meaningful future" can be figured out for the product category. It seems the likes of Samsung will be left to release those kinds of devices for the time being.
The status on smart glasses, meanwhile, is "TBD" – it seems Google is happy to stay focused, for now. "Every time a new type of category of product gets added, the bar on maintenance for the end user keeps going up," says Barkat. "It's already pretty painful."
The "vanguard" of AI
Google is focused on Pixel phones and AI (Image credit: Philip Berne / Future)
Google execs also used the interview to hype up what they are working on. Rick Osterloh, who heads Google's hardware and Android divisions, described the Pixel 10 as a "super strong release" in what is now a "mature category".
The Pixel 11 is almost finalized, apparently, while work has started on the Pixel 12. Google design chief Ivy Ross says that the company is aiming for big visual changes to the Pixel phones "every two to three years" – so watch this space.
As you would expect, the Google team pushed AI as being the big innovation that'll be happening on phones over the next few years, via Gemini and features such as Magic Cue, which surfaces key info from your phone when you need it.
Osterloh says he wants Android to be "on the vanguard of where AI is going", and that Google isn't overly worried about Pixel sales: the phones account for about 3% of the US market at the moment, compared to a 49% share for Apple.
You might also like
- Trump criticizes legacy US Government websites as expensive and difficult to use
- New "America by Design" initiative will modernize government agency sites
- Airbnb co-founder Joe Gebbia appointed as Chief Design Officer of National Design Studio
President Trump may soon be browsing for the best website builders after ordering improvements to federal government websites and physical spaces, in the hope of making them more attractive for both workers and customers.
“The Government has lagged behind in usability and aesthetics,” Trump said in a new Executive Order, noting the need for system modernization that could tackle high maintenance costs in the process.
The Executive Order explains that legacy systems can be costly to maintain and costly for American citizens, who can spend more time than necessary trying to navigate them – hence the need for change.
Trump wants to modernize US Government websites
The Order introduces Trump’s new ‘America by Design’ initiative, which begins with the high-touch sites where citizens are most likely to interact with government agencies.
A newly formed National Design Studio, led by a newly appointed Chief Design Officer, will oversee the project.
“It is the policy of my Administration to deliver digital and physical experiences that are both beautiful and efficient, improving the quality of life for our Nation,” Trump wrote.
The National Design Studio has been tasked with reducing duplicative design costs, much in the same way that the White House has already started centralizing IT procurement to boost cost efficiency.
It will also use a standardized design for consistency and trust, and improve the quality of public-facing experiences.
Agencies have been given until July 4, 2026, to deliver their initial results after consulting with the Chief Design Officer.
Separate Reuters reporting has revealed Airbnb co-founder Joe Gebbia will lead the National Design Studio as Chief Design Officer, with the Internal Revenue Service set to be the first place to see an overhaul.
Trump’s Order also confirms the “temporary organization” will close in three years, on August 21, 2028, suggesting that site modernization could be complete even before that.
You might also like
- We’ve listed the best web hosting services
- On a budget? Get yourself online with the best free website builders around
- Trump's "One Big Beautiful Bill" set to award $1 billion funding to "offensive cyber operations"
As artificial intelligence (AI) tools like ChatGPT, Copilot, Grok, and predictive analytics platforms become embedded in everyday business operations, many companies are unknowingly walking a legal tightrope.
While AI tools provide many benefits - streamlining workflows, enhancing decision-making, and unlocking new efficiencies - the legal implications are vast, complex, and often misunderstood.
From data scraping to automated decision-making, the deployment of AI systems raises serious questions around copyright, data protection, and regulatory compliance.
Without robust internal frameworks and a clear understanding of the legal landscape, businesses risk breaching key laws and exposing themselves to reputational and financial harm.
GDPR and the Use of AI on Employee Data
One of the most pressing concerns is how AI is being used internally, particularly when it comes to processing employee data. Many organizations are turning to AI to support HR functions, monitor productivity, or even assess performance. However, these applications may be in direct conflict with the UK General Data Protection Regulation (GDPR).
GDPR principles such as fairness, transparency, and purpose limitation are often overlooked in the rush to adopt new technologies. For example, if an AI system is used for employee monitoring without their informed consent, or if the data collected is repurposed beyond its original intent, the business could be in breach of data protection law.
Moreover, automated decision-making that significantly affects individuals, such as hiring or disciplinary actions, requires specific safeguards under GDPR, including the right to human intervention.
The Legal Grey Area of Data Scraping
Another legal minefield is the use of scraped data to train AI models. While publicly available data may seem fair game, the reality is far more nuanced. Many websites explicitly prohibit scraping in their terms of service, and using such data without permission can lead to claims of breach of contract or even copyright infringement.
This issue is particularly relevant for businesses developing or fine-tuning their own AI models. If training data includes copyrighted material or personal information obtained without consent, the resulting model could be tainted from a legal standpoint. Even if the data was scraped by a third-party vendor, the business using the model could still be held liable.
Copyright Risks in Generative AI
Generative AI tools, such as large language models and image generators, present another set of challenges. Employees may use these tools to draft reports, create marketing content, or process third-party materials. However, if the input or output involves copyrighted content, and there are no proper permissions or frameworks in place, the business could be at risk of infringement.
For instance, using generative AI to summarize or repurpose a copyrighted article without a license could violate copyright law. Similarly, sharing AI-generated content that closely resembles protected work may also raise legal red flags. Businesses must ensure their employees understand these limitations and are trained to use AI tools within the bounds of copyright law.
The Danger of AI “Hallucinations”
One of the lesser-known but increasingly problematic risks of AI is the phenomenon of “hallucinations” - where AI systems generate outputs that are factually incorrect or misleading, but presented with confidence. In a business context, this can have serious consequences.
Consider a scenario where an AI tool is used to draft a public document or legal summary, in which it includes fabricated company information or incorrect regulations. If that content is published or relied upon, the business could face reputational damage, client dissatisfaction, or even legal liability. The risk is compounded when employees assume the AI’s output is accurate without proper verification.
The Need for Internal AI Governance
To mitigate these risks, businesses must act promptly to implement robust internal governance frameworks. This includes clear policies on how AI tools can be used, mandatory training for employees, and regular audits of AI-generated content.
Data Protection Impact Assessments (DPIAs) should be conducted whenever AI is used to process personal data, and ethical design principles should be embedded into any AI development process.
It’s also critical to establish boundaries around the use of proprietary or sensitive information. Employees interacting with large language models must be made aware that anything they input could potentially be stored or used to train future models. Without proper safeguards, there’s a real risk of inadvertently disclosing trade secrets or confidential data.
Regulatory Focus in 2025
Regulators are increasingly turning their attention to AI. In the UK, the Information Commissioner’s Office (ICO) has made it clear that AI systems must comply with existing data protection laws, and it is actively investigating cases where this may not be happening. The ICO is particularly focused on transparency, accountability, and the rights of individuals affected by automated decision-making.
Looking ahead, we can expect more guidance and enforcement around the use of AI in business. The UK is currently consulting on its AI Bill, which aims to regulate artificial intelligence by establishing an AI Authority, enforcing ethical standards, ensuring transparency, and promoting safe, fair, and accountable AI development and use - standards that businesses will have to comply with.
AI is transforming the way we work, but it’s not a free pass to bypass legal and ethical standards. Businesses must approach AI adoption with caution, clarity, and compliance to safeguard their staff and reputation. By investing in governance, training, and legal oversight, organizations can harness the power of AI while avoiding the pitfalls.
The legal risks are real, but with the right approach, they are also manageable.
We feature the best cloud document storage.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro