News
- Peacemaker season 2 makes a big change to its predecessor's finale
- Fans have long wondered how it would handle a key scene from season 1 episode 8
- Everything else from last season is deemed canon in the DCU
Ever since Peacemaker season 2 was first announced, one big question has plagued fans of the hit HBO Max show: what aspects of its forebear's story will be treated as canon in the nascent DC Universe (DCU)?
It's a query that James Gunn, co-CEO of DC Studios, and Peacemaker's head writer and occasional director, hasn't answered as clearly as many fans had hoped. Now, though, this season's premiere has – and it's solved a big mystery about how Gunn would handle a cameo-filled scene from last season's finale, too.
Full spoilers immediately follow for Peacemaker season 2 episode 1, titled 'The Ties That Grind'. Turn back now if you haven't seen it yet.
Season 2's 'Previously On...' segment confirms the events of last season are all canon in the DCU (Image credit: HBO Max)
The short answer is: all of season 1 is canon in the DCU. Well, everything except that cameo-stuffed scene, which I'll get to later.
'The Ties That Grind' opens with John Cena's eponymous anti-hero – real name Chris Smith – answering questions from a class full of kids. It's a funny and unexpected way to open this season of the DCU Chapter One project, especially when you consider the dichotomy between the foul-mouthed, substance-abusing titular character and his innocent inquirers.
Anyway, when Smith is asked if he has an origin story, we're treated to a 'Previously On...' montage from one of the best HBO Max shows' debut season. That includes clips of his traumatic childhood, joining 'Project Butterfly', killing his xenophobic and toxic father, and saving the world from an alien invasion. Cena even provides a voice-over saying "previously on the DCU", which basically confirms season 1 took place in the DCU instead of its now-defunct forebear, aka the DC Extended Universe (DCEU).
It's the end of this footage-based collage that addresses the multi-cameo scene I've been alluding to: the DCEU's Justice League appearing after Smith and the 11th Street Kids defeat the extraterrestrial Butterflies to stop them taking over planet Earth.
A different supergroup shows up in season 2 episode 1 (Image credit: HBO Max)
As the 11th Street Kids triumphantly walked away from a job well done in last season's finale, they were greeted by the Justice League. Admittedly, Jason Momoa's Aquaman and Ezra Miller's The Flash are the only actors who actually appear on the screen, but silhouettes of their fellow heroes – Henry Cavill's Superman, Ben Affleck's Batman, Gal Gadot's Wonder Woman, and Ray Fisher's Cyborg – are shown.
Peacemaker 2 retcons this Justice League cameo. Indeed, the 'previously on...' segment reveals that sextet has been replaced by the Justice Gang – i.e. Edi Gathegi's Mister Terrific (who only appears in silhouette form), Isabela Merced's Hawkgirl, and Nathan Fillion's Guy Gardner/Green Lantern. The trio are also joined by outlines of David Corenswet's Superman and Milly Alcock's Supergirl, even though the Kryptonian pair aren't part of the Justice Gang.
But I digress. Lambasting their belated appearance, Smith greets them in the same way he did the Justice League last season, saying "you're late, you f*****g d*******s!". Then, in a very brief scene designed to replace Aquaman and The Flash's short and semi-amusing exchange from season 1, Hawkgirl and Gardner have a similarly fleeting chat.
Hawkgirl and Guy Gardner appear again later on in season 2 episode 1 (Image credit: HBO Max)
So, problem solved, right? As long as you don't go back and re-watch season 1 episode 8, aka 'It's Cow or Never'. Speaking as part of a roundtable interview attended by TechRadar, Gunn admitted that the retooled Justice Gang scene from this season's premiere won't replace the Justice League one in the season 1 finale.
"I wish I could do that," Gunn replied when asked if he'd swap out the Justice League scene for the updated Justice Gang one. "But, I can't, because it's too expensive. I think we'd rather spend the money on a few more visual effects shots for Supergirl.
"The other thing is normal people don't give a s**t about all this canon stuff as intimately [as diehard fans do]," he continued. "They're just like 'Oh cool, Peacemaker is in Superman' or 'Oh, it's Alien and Predator together'. It doesn't matter if it's not completely consistent with the past, so I thought the simple way was really the best, which is just saying 'Yeah, this world is a little different'. We know there are other universes, and this is one where everything is exactly the same as what happened in season one, except for the Justice League's appearance."
Peacemaker season 2 episode 1 is out now. Read my Peacemaker season 2 release schedule to find out when new entries will arrive, or check out my Peacemaker season 2 review for some clues about what's to come in episodes 2 through 5.
You might also like- Here's everything you need to know about Peacemaker season 2 before watching it
- Peacemaker season 2 star Sol Rodríguez teases which DCU project she wants her character to appear in next: 'I would love that'
- Peacemaker season 2 cast tease what's next for the 11th Street Kids in the hit HBO Max show: 'it's something new'
Ever since I tried out Google's Real Time Voice translation on a call between a pair of Google Pixel 10 Pro phones, I've been imagining a future where I can speak to anyone in any language in my own voice and we can instantly understand each other.
I'm not alone in my amazement. When Jimmy Fallon and YouTuber Karen Polinesia demonstrated the feature live during Made By Google 2025 on August 20, 2025, the late-night TV host was gobsmacked, giggling in astonishment as his distinctive voice delivered sentences in Spanish to someone on a Pixel 10 Pro phone in an undisclosed location.
I don't blame him. As I've said, this feature uses AI to re-create both callers' voices in another language, almost without any lag, which is the closest we've come to Star Trek's Universal Translator. But it is missing something.
(Image credit: Lance Ulanoff / Future)
You see, Google's Real Time Call Translation only works when you're calling someone on the Pixel 10 phone. What I really want is a more ambient experience.
I can't believe I'm writing this, but what we need is a piece of wearable hardware that's always listening. When it hears someone speaking to you in a language other than your native tongue, it starts interpreting on the fly and "speaking" those same words in your native language, while, of course, using a voice that matches the speaker's own.
In Star Trek, the galaxy explorers would simply point the device at aliens, and it would transform their unintelligible speech into English. I know that's unlikely; however, I do have a vision of what I want.
I'm aware that Google has long had a translate feature with Pixel Buds (using Google Translate and Google Assistant), but it never worked like this and never used a simulacrum of the speaker's voice for the translation. As far as I'm concerned, the system doesn't work unless it includes this.
A wearable translator
In a perfect world, the system would be frictionless: worn by both speakers and always ready to transparently intercept, translate, and speak so that we don't have to call, tap, look up, or read.
In the real world, there would be some concessions to the current state of Google's mobile hardware ecosystem.
There are a few options: a system that works across the Pixel Watch and Pixel Buds (the watch translates and sends the voice to the buds), or buds that translate and deliver the voice on their own. As things stand, though, the Pixel Watch 4 and Pixel Buds (even the Pro models) lack the horsepower to handle the translation.
What's needed is another piece of hardware or a combination of wearable gadgets that can bring this ever-present live translation to life.
A hardware possibility
(Image credit: Future/Lance Ulanoff)
In general, I'm not a fan of dedicated AI hardware (see Rabbit R1 and Plaud.AI). Smartphones like the Pixel 10 Pro have all the generative AI we need, and a secondary device just to perform many of those same AI actions seems superfluous at best.
The Real Time Live Translation, however, has me thinking differently. Perhaps it's the combination of an enhanced Pixel Watch and Buds, but I'd prefer if the entire operation were housed in what we might call "Pixel Buds Pro Enhanced".
(Image credit: Lance Ulanoff / Future)
Inside would be a new Tensor chip small enough to fit inside one of the buds yet powerful enough to perform local translation and voice generation. We know the software works, so why not build special hardware to support it?
I know that's a tall order. Tensor G5 is a 3nm process chip. Could this be a 2nm? Maybe. The goal would be to both shrink the AI (with its neural network) and lower the power consumption so that one translation doesn't eat up half the Pixel Buds Pro Enhanced's battery life.
This is the one AI wearable idea I can get behind. Just think of what travel to another country might be like if you were wearing one of these. I get that it's unlikely that the person you're talking to also has a pair, but if they can run Gemini Live on their phone or if they have a Pixel Watch, perhaps they can hear what you're saying in their language (and in your voice), too.
My point is, this feature is too powerful to be stuck inside a smartphone, and I hope Google is working right now to bring my Star Trek Universal Translator dreams to life.
You may also like- Grok conversations shared by users have been found indexed by Google
- The interactions, no matter how private, became searchable by anyone online
- The problem arose because Grok's share button didn't add noindex tags to prevent search engine discovery
If you’ve been spending time talking to Grok, your conversations might be visible with a simple Google search, as first uncovered in a report from Forbes. More than 370,000 Grok chats became indexed and searchable on Google without users' knowledge or permission when they used Grok's share button.
The unique URL created by the button didn't mark the page as something for Google to ignore, making it publicly visible with a little effort.
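For context, the standard way to keep a publicly reachable page out of search results is a `noindex` directive, delivered either as an HTML meta tag or as an `X-Robots-Tag` response header. Here's a minimal sketch of what a share endpoint could emit; the function names and markup are illustrative assumptions, not Grok's actual code:

```python
# Hypothetical sketch of the two standard ways to keep a shared page
# out of search results -- not Grok's real implementation.

NOINDEX_META = '<meta name="robots" content="noindex, nofollow">'


def share_page_headers() -> dict:
    """HTTP response headers for a shared-conversation page.

    The X-Robots-Tag header tells crawlers not to index the page,
    even though its URL is publicly reachable.
    """
    return {
        "Content-Type": "text/html; charset=utf-8",
        "X-Robots-Tag": "noindex, nofollow",
    }


def render_share_page(transcript_html: str) -> str:
    """Embed the same directive in the markup as a belt-and-braces measure."""
    return (
        "<!doctype html><html><head>"
        f"{NOINDEX_META}"
        "</head><body>"
        f"{transcript_html}"
        "</body></html>"
    )
```

Without either the header or the meta tag, any crawler that discovers a shared link (for example, because it was posted on X) is free to index it, which is how the 370,000 chats ended up in search results.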
Passwords, private health issues, and relationship drama fill the conversations now publicly available. Even more troubling, questions for Grok about making drugs and planning murders appear as well. Grok transcripts are technically anonymized, but if identifiers slip in, people could work out who was raising the petty complaints or criminal schemes. These are not exactly the kind of topics you want tied to your name.
Unlike a screenshot or a private message, these links have no built-in expiration or access control. Once they’re live, they’re live. It's more than a technical glitch; it makes the AI hard to trust. If people are using AI chatbots for ersatz therapy or romantic roleplaying, they don't want those conversations to leak. Finding your deepest thoughts alongside recipe blogs in search results might drive you away from the technology forever.
No privacy with AI chats
So how do you protect yourself? First, stop using the “share” function unless you’re completely comfortable with the conversation going public. If you’ve already shared a chat and regret it, you can try to find the link again and request its removal from Google using its Content Removal Tool. But that’s a cumbersome process, and there’s no guarantee it will disappear immediately.
If you talk to Grok through the X platform, you should also adjust your privacy settings. Disabling the setting that allows your posts to be used for training the model might give you more protection. Even that is uncertain, though; the rush to deploy AI products has made a lot of the privacy protections fuzzier than you might think.
If this issue sounds familiar, that's because it's only the latest example of AI chatbot platforms fumbling user privacy while encouraging individual sharing of conversations. OpenAI recently had to walk back an “experiment” where shared ChatGPT conversations began showing up in Google results. Meta faced backlash of its own this summer when people found out that their discussions with the Meta AI chatbot could pop up in the app's discover feed.
Conversations with chatbots can read more like diary entries than like social media posts. And if the default behavior of an app turns those into searchable content, users are going to push back, at least until the next time. As with Gmail ads scanning your inbox or Facebook apps scraping your friends list, the impulse is always to apologize after a privacy violation.
The best-case scenario is that Grok and others patch this quickly. But AI chatbot users should probably assume that anything shared could be read by someone else eventually. As with so many other supposedly private digital spaces, there are a lot more holes than anyone can see. And maybe don't treat Grok like a trustworthy therapist.
You might also like- ChatGPT is no match for a 40-year-old digital Pocket Chess game, and I bet Garry Kasparov would be pleased
- Grok may start remembering everything you ask it to do, according to new reports
- I tried Grok’s new AI image editing features – they’re fun but won’t replace Photoshop any time soon
- Grok 3’s voice mode is unhinged, and that’s the point
- UK proxy use has grown as VPNs face potentially tighter rules and restrictions
- Specialist proxies give businesses flexibility, location accuracy, and resilience compared with VPNs
- British firms are adopting proxies across eCommerce, finance, and marketing
Many UK firms worry that further VPN regulation could be on the cards, after the divisive Online Safety Act led to explosive interest in the tools, driving regulators to take notice and businesses to explore alternatives.
This is not about traditional business VPNs (such as SonicWall, Cisco AnyConnect, or Fortinet) that secure employee access to internal networks, but rather about specialist VPN services used for external online operations.
As a result of growing uncertainty, companies are increasingly turning to proxy services, which offer greater flexibility and fewer compliance concerns than VPNs.
UK interest increasing
Proxies, unlike VPNs, which encrypt traffic and direct it through a single tunnel, offer more granular routing and customizable access, allowing organizations to conduct location-specific data collection, navigate geo-restrictions, and monitor competitors with reduced risk of detection or blocking.
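To illustrate that granularity: where a VPN sends all of a machine's traffic through one tunnel, a proxy can be assigned per task. A minimal Python stdlib sketch, assuming hypothetical per-task proxy endpoints (the addresses are placeholders, not a real provider's servers):

```python
import urllib.request

# Hypothetical per-task proxy endpoints -- placeholders for illustration.
PROXIES = {
    "price_tracking_uk": "http://uk-residential.example:8080",
    "ad_verification_us": "http://us-mobile.example:8080",
}


def opener_for(task: str) -> urllib.request.OpenerDirector:
    """Build an opener that routes one task's requests through its own proxy.

    Unlike a VPN, which tunnels all traffic the same way, each opener here
    carries only the requests for one job, from one apparent location.
    """
    handler = urllib.request.ProxyHandler(
        {"http": PROXIES[task], "https": PROXIES[task]}
    )
    return urllib.request.build_opener(handler)
```

In practice a provider's residential, mobile, or ISP proxy endpoint (with credentials) would replace the placeholders, and each opener could carry its own headers and retry policy, which is the kind of per-task control the article describes.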
Data from Decodo shows proxy users from the UK increased by 65% following the launch of the Online Safety Act, while proxy traffic rose by 88%.
That points to growing reliance on proxies as a standard part of digital infrastructure rather than a niche tool.
“Companies around the globe are getting smarter about how they operate in highly competitive landscapes. Instead of just picking the most popular tools, they’re choosing what actually works best for them, whether that’s faster, easier to use, or works better with region-specific restrictions. It shows that people are thinking more critically about their options,” said Vytautas Savickas, CEO at Decodo.
One reason proxies are expanding so fast is their technical maturity, Decodo says. Providers now bundle enterprise-grade security features with user-friendly designs, which makes them suitable for global enterprises as well as smaller firms.
At the same time, more UK businesses are learning how to differentiate between VPNs and proxies and are matching tools to their goals.
“More organizations in the UK are investing time in understanding the tools that power secure and efficient online operations. Most companies test out different solutions, providers, and do their research on proxies and VPNs, and they’re also making more informed, strategic choices,” said Gabriele Verbickaitė, Product Marketing Manager at Decodo.
Proxies are proving especially valuable in sectors such as eCommerce, finance, and digital marketing, with firms using them for tasks like ad verification, price tracking, SEO monitoring, and fraud prevention.
Options such as residential, mobile, and ISP proxies allow for greater stability and location accuracy compared with older methods.
“UK businesses are quickly adopting proxy services, moving beyond simple VPNs to more advanced setups that offer greater control over their online activity. It’s no longer just about staying private – performance and reliability are now just as important,” said Vaidotas Juknys, Head of Commerce at Decodo.
You might also like- These are the best VPNs that you can trust right now
- And these are the best VPNs with antivirus to choose
- The VPN trap: how criminal ecosystems exploit our need for privacy
The cyber-attack on Marks & Spencer is the kind of event that makes business leaders sit up and ponder whether their own organization could be next. While its services may now be up and running, the incident has still cost the brand over £300 million in lost profits, along with potential damage to its customer relationships.
The brand is not alone either: attackers also recently hit the specialist food distributor Peter Green Chilled, integral to several supermarket supply chains, along with Co-op, The North Face and Cartier. The lasting impacts of these cybersecurity breaches have revealed how quickly a single compromise can affect revenue, logistics and brand trust, even if organizations have well-rehearsed contingency plans.
Cyber criminals love retail data
The UK’s appetite for online shopping has grown from 18.1 per cent of total sales in September 2019 to around 26 per cent today. This growth brings increased volumes of payment credentials, loyalty data and personal profiles that retailers and their partners must store and access for the whole system to operate effectively.
As every part of the retail supply chain process, from stock control to fulfilment, is now digitally integrated in the battle for streamlined, multi-channel efficiency, it has become almost impossible to guarantee total security.
Criminals want that data for ransom, resale or misuse, and incessantly seek it out. They have learned that the easiest way past expensive perimeter tools starts inside each business. A seasonal employee’s click on an email, a misconfiguration in a loyalty-app update, or slack use of recycled passwords by a manager working from home are all weaknesses that criminals exploit.
The addition of hybrid working has also opened up many more potential entry points for criminals and complicates security vigilance.
The complex pipework of supply chain partner relationships makes continuous monitoring much harder. Retailers rely on third-party ecommerce software, CRM suites, point-of-sale systems and supply-chain tools. A vulnerability in even a single vendor or partner is enough to let criminals inside.
Artificial intelligence, meanwhile, has automated phishing lures and vulnerability scanning. The development of off-the-shelf ransomware kits also means criminals need less technical expertise to be effective. They can deliver cyberattacks at greater frequency and speed with superior precision.
Building defenses that contain attacks
Removing all cyber risk is impossible, so organizations must switch focus to damage limitation and maintenance of legitimate trade, using layered security instead of relying totally on a single gatekeeper.
High on the shopping list for retailers should be real-time endpoint detection and response (EDR) or extended detection and response (XDR) platforms. These solutions monitor devices, networks and cloud workloads for anomalous behavior, then isolate infected assets before malware spreads.
Strict network segmentation limits an intruder’s freedom of movement in systems. A zero-trust model will make life harder for them by demanding authentication for every access request.
Sometimes, the most effective containment measure is a deliberate shutdown to allow individual branches to keep trading on local platforms. This prevents attackers from scuttling through systems and enables investigators to get on with their work.
Layering defense
Layered defense must involve employees as well as technology. Multi-factor authentication cuts down the threat from stolen passwords, while least-privilege principles ensure staff only access what is required for the task in hand. Regular penetration tests expose weak spots before adversaries find them, and supply-chain audits encourage vendors to improve standards.
Preparation is essential. Immutable off-site backups provide clean copies of critical data, but only if recovery time and recovery point objectives are realistic and regularly rehearsed. Full fail-over, forensic hand-off and customer communications must all be rehearsed.
It is also important to diversify infrastructure, avoiding reliance on what becomes a single fault domain through the mistake of running production, back-up and disaster-recovery environments on the same platform. What retailers need is a hybrid or multi-cloud approach to spread risk and improve flexibility.
Instilling new confidence
After the immediate threat is contained and systems are restored, rebuilding confidence is tough: customers, staff and investors want details of what happened, the data exposed and how the company will prevent it from happening again.
A timetable of transparent updates shows respect and reduces speculation. Each cyber event or breach should trigger policy changes and fresh internal training, reinforcing the message that security is a collective responsibility shared by everyone in every department.
Many retailers use managed service providers (MSPs) to accelerate all these steps, bringing access to wider experience and expertise, round-the-clock monitoring and economies of scale. Retailers have the strategic oversight and sector knowledge, while the MSP supplies a deeper level of technical insight and a commitment to continuous improvement.
With the right partnerships, layered defenses, crisis response and security awareness, retailers can absorb attacks without day-to-day business grinding to a halt. They can continue to maintain the vital trust that is behind each customer transaction. There is certainly no reason to despair if organizations follow this multi-layered approach.
We list the best endpoint protection software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Few technological shifts have generated as much excitement and anxiety as the introduction of artificial intelligence in the workplace.
We’re seeing a marked step forward in the innovation and wider integration of AI tools as standard across all sectors and industries, driven by promises of streamlining, productivity gain, and growth opportunities.
This transformation is marked by both decision-makers’ eagerness to harness the full potential of AI and employees’ fears about job security.
Gradual, deliberate integration
Despite the demonstrable potential of enterprise AI tools, it’s important that companies deploy them incrementally, rather than actioning disruptive overhauls. A “rip and replace” mindset could result in internal resistance and operational disruption. Gradual integration will enable greater flexibility and alignment with strategic and technical goals.
We’ve seen first-hand how companies have failed to properly implement AI tools, for instance with Klarna in early 2024. Klarna aggressively automated customer support, introducing AI agents to handle huge workloads in place of humans. This led to poor customer experiences, and a public admission that overreliance on cost-cutting was a mistake. The human touch proved irreplaceable for complex human queries.
Similarly, there’s the risk of businesses falling into the trap of viewing AI as a one-size-fits-all solution, lured by the prospects of increased efficiency and decreased costs. Without a clear assessment of foundational challenges, like fragmented data and how to integrate with legacy systems, AI initiatives can hinder rather than deliver results.
Instead, organizations should turn their focus to integrating AI deliberately with existing IT infrastructure, at points where it’s truly able to add value. Targeted, measured deployments will unlock efficiencies that mesh with existing operational strategy and mitigate the chances of disruption.
Human-machine collaboration
There’s one key thing that’s overlooked in much of the discourse suggesting AI is replacing jobs: the simple fact that AI success depends on the humans that shape, supervise and steer AI output.
Think of it not as a substitute for human intelligence, but as an augmentor capable of transforming ideas into actionable results. To this end, the more that AI is implemented, the greater the potential productivity benefit, but the greater the need for accountability as well.
Accountability — and demonstrated adherence to ethical and legal guidelines — requires human oversight and judgement. Far from making human employees obsolete, widespread AI rollout is creating new demands for human expertise and a whole cache of professions.
Technological accessibility
This will only become the case through mass AI adoption, which itself can only happen with the emergence of zero- and low-code platforms. The goal is to make powerful IT automation tools accessible to non-technical teams.
This way, employees with specific domain expertise can devise tailored AI systems, and become active shapers of AI-infused business innovation.
This level of collaboration will reveal insights that otherwise might stay hidden in siloed processes, combining automation with deep and involved operational understanding.
It’s not about replacing talent: it’s about identifying it and finding ways of amplifying it to unlock smarter, more adaptive ways of working.
Recognizing value is value in itself
There’s a lot of talk about AI freeing up employees for high-value tasks, but what qualifies as “high value” is far from universal. A task deemed critical in healthcare might be routine in retail.
Precision might matter most in one industry, while creativity may trump it in others. The reality is that value is subjective and sector-specific, which is why one-size-fits-all actually fits none.
The companies that treat this question strategically, rather than as a bolt-on, are the ones that will gain a competitive edge and extract the most value from their AI deployments.
It’s no longer about what AI can take over, but what it should.
Working out a definition should sit beside broader business priorities: deciding where human focus belongs will be imperative to business success. In an AI-enabled future, the ability to evaluate what matters most will become one of the highest-value capabilities of all.
In short, AI won’t kill jobs, but lazy thinking might. The real threat isn’t the tech itself, but how it’s deployed. Businesses that chase efficiency at the expense of human insight risk shedding expertise. The message for decision makers is clear: equip people, don’t replace them — and you don’t just keep up, you lead.
We list the best IT management tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Microsoft’s decision to extend security updates for Windows 10 offers welcome breathing room for businesses still navigating complex migration paths. The move aligns with the tech giant’s long-standing pattern of accommodating the slower pace of enterprise IT change, giving organizations time to budget, plan and maintain operational continuity.
For many IT teams, this extension helps manage short-term risk and avoid disruption, particularly for those still reliant on existing infrastructure or specialist applications. But while this reprieve buys time, it does also delay the inevitable, potentially compounding the challenge.
As we’ve seen with previous extensions, each delay risks the loss of critical internal knowledge, as the people and vendors who originally managed these systems move on.
Over time, what could have been a straightforward application and data migration becomes a complex, costly rescue mission. The longer businesses wait, the more they risk accumulating technical debt, becoming dependent on expensive external partners and missing out on innovation. As such, organizations must treat the extension as a final window to take action before the real cliff edge arrives.
Here I explore the pros and cons of the recent Windows 10 security update extension, and what businesses should be considering over the coming months.
The pros
Let’s start with the obvious. An extension provides extended breathing room and gives organizations more time to plan and execute a migration strategy without immediate pressure. Concurrently, this reduces short-term risk, as continued security updates help mitigate vulnerabilities while businesses remain on Windows 10.
This provides operational continuity and avoids any disruption for businesses still dependent on existing, well-established applications or IT infrastructure.
More broadly, the extension offers budget flexibility. IT departments can spread out migration costs over a longer period, which can help with financial planning, especially in a climate of ongoing budget pressures.
It also provides alignment with past practices, keeping consistent with Microsoft’s historical approach of offering extended support to accommodate slow-moving enterprise migrations.
The cons (and the real risks)
An extension may provide breathing room, but this also creates a false sense of security. More time can encourage complacency, delaying necessary upgrades and strategic planning.
Even more importantly, it can contribute to a loss of internal knowledge. As time passes, key personnel with migration experience may leave, and vendor support may disappear - this makes future transitions harder and riskier.
What’s more, while short-term savings might be gained, there can be increased long-term costs. Maintaining older infrastructure often becomes more expensive than upgrading it, especially when emergency migrations are needed.
Crucially, delays mean companies accumulate technical debt; by not performing migrations, organizations can end up with a backlog of compatibility issues, unsupported applications and outdated hardware.
The combination of losing internal knowledge and maintaining unsupported systems means businesses can become increasingly reliant on expensive external partners to manage complex migrations and increases the chance of vendor lock-in and dependency.
Ultimately, staying on older systems can prevent organizations from leveraging new features, performance improvements, and security enhancements in Windows 11 or alternative operating systems.
A mindset of continuous modernization
The issue with deadlines and extension periods is that they signify an eventual point of completion. In this case, a completed migration project. While they are of course necessary for encouraging organizations to update their Windows applications, they also create the mindset that the process is then a done deal.
But technology evolves quickly and IT infrastructure requires continuous modernization. Adopting this mentality also stops companies from delaying projects whenever extensions are provided.
At the same time, existing Windows applications can be critical to operations and not modernizing them before the deadline will bring serious risks. So, how can organizations maintain operational continuity but also modernize over the coming months?
The ‘Rs’ approach – including AWS’s ‘7Rs’ and Gartner’s ‘5Rs’ – presents several strategies. This industry-standard framework is used by cloud providers and encompasses different ways for companies to migrate unsupported applications. ‘Retiring’, for instance, involves identifying applications that are no longer useful and can be turned off. Each method has its purpose in different contexts.
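To make the triage concrete, here is a minimal, hypothetical sketch of how an application inventory might be classified against the AWS-style ‘7Rs’. The application names and decision rules below are invented for illustration and don't come from any vendor's tooling:

```python
# Hypothetical triage of an application inventory against the AWS-style
# "7Rs". The app names and decision rules are invented for illustration.
SEVEN_RS = ["retire", "retain", "rehost", "relocate",
            "repurchase", "replatform", "refactor"]

inventory = [
    {"name": "legacy-reporting", "used": False, "cloud_ready": False},
    {"name": "payroll",          "used": True,  "cloud_ready": False},
    {"name": "intranet-portal",  "used": True,  "cloud_ready": True},
]

def triage(app):
    """Toy decision rule: unused apps are retired, cloud-ready apps
    are rehosted as-is, and everything else is replatformed."""
    if not app["used"]:
        return "retire"        # no longer useful: turn it off
    if app["cloud_ready"]:
        return "rehost"        # lift-and-shift
    return "replatform"        # redeploy onto a supported environment

plan = {app["name"]: triage(app) for app in inventory}
print(plan)
```

In a real engagement the rules would weigh many more signals (licensing, dependencies, data residency), but the shape of the exercise - a policy applied uniformly across the whole estate - is the same.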
But large IT estates can be too unique or complex to use such methods alone. In these cases, external cloud specialists can provide companies with vendor-neutral platforms that allow them to maintain their existing Windows 10 applications but redeploy them onto managed operating systems or cloud environments.
This means applications remain fully operational and secure, and continue to receive security patches, support and software updates. It acts as a smarter alternative to complex migration strategies or the costly redevelopment of applications.
Managing, not delaying, the inevitable
This extension is not a solution, but a grace period. It gives IT teams more time to manage short-term risk. But all an extension really does is delay the same situation repeating itself.
Time and time again we have seen the same pattern: a business delays its migration for a year, and then another year, and then - all of a sudden - Microsoft stops extending the extension.
The pattern often ends with a scramble when the final deadline hits - by which time the cost, complexity and risk have all increased, internal knowledge to migrate quickly and safely has disappeared, and vendors no longer exist.
So while there are pros to the extension, the cons present very real risks - and they emphasize why businesses need to adopt a mindset of continuous modernization.
The technology and providers are available to help companies maintain their existing Windows 10 applications but move them onto supported operating environments.
In the coming months, rather than delaying the inevitable scramble, IT teams can build modernization into their ongoing operations.
We've featured the best IT management tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Quantum computing has long occupied the edges of our collective imagination – frequently mentioned, rarely understood. For many, it remains a distant prospect rather than an immediate concern. But that mindset is fast becoming a risk in itself.
While understanding may be limited today, that must quickly change. Quantum computing has long been viewed as a technology several decades away, but recent breakthroughs suggest it could arrive far sooner.
Google’s Willow and Microsoft’s Majorana chips signal rapid technical acceleration, and the UK Government’s £500 million investment in quantum innovation confirms that global leaders are no longer treating this as speculative, but as a strategic priority.
Despite this, only 35% of professionals surveyed by ISACA believe quantum will enter the mainstream within years rather than decades, highlighting just how much industry perception is lagging behind reality.
That disconnect extends beyond expectations – it’s impacting readiness. Most organizations have yet to factor quantum into their cybersecurity planning, even though the technology is set to fundamentally reshape how vast sectors of society operate online.
This isn’t just about adopting a new form of computing – it’s about protecting the systems, economies and infrastructures that underpin our digital lives. And that starts with truly understanding what quantum is, and how it could both redefine and disrupt the cybersecurity landscape.
The Fundamentals: A Primer on Quantum Computing
If classical computers are powerful calculators, quantum computers are like probability engines, processing information in ways that allow them to explore many possibilities simultaneously.
Classical computing relies on bits, which are binary units of information that can either be 0 or 1. Quantum computers, by contrast, use qubits (quantum bits), which can be both 0 and 1 at the same time – a phenomenon known as superposition. Qubits can also be entangled, meaning the state of one can instantly influence another, even at a distance.
This means quantum computers can perform complex calculations by exploring multiple paths at once, rather than one-by-one. Where a classical computer might take thousands of years to crack encryption software or simulate a protein structure, a quantum computer could, in theory, complete the task in seconds.
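To make superposition concrete, here is a small illustrative sketch - plain NumPy simulating the underlying math, not real quantum hardware - of a qubit placed into an equal superposition and the 50/50 measurement statistics that result:

```python
import numpy as np

# A qubit's state is a 2-vector of complex amplitudes; measurement
# probabilities are the squared magnitudes of those amplitudes.
ket0 = np.array([1, 0], dtype=complex)                    # the |0> state

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]: a 50/50 chance of measuring 0 or 1

# Repeated simulated measurements reproduce those statistics.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(outcomes.mean())  # close to 0.5
```

The catch, of course, is that simulating n qubits this way needs a state vector of 2^n amplitudes - which is exactly why real quantum hardware can explore spaces that classical simulation cannot.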
But this is not about speed alone – it’s about capability. Quantum computing makes it possible to solve problems previously considered intractable: from modelling complex chemical reactions at the atomic level, optimizing vast and variable systems like global logistics, to breaking the mathematical problems that make today’s encryption secure.
When it comes to AI, the effect is expected to be hugely transformational: quantum capability could lead AI into a new era, both in terms of its intelligence and value and in terms of the risks that come along with it. These breakthroughs will have profound implications for the systems that underpin daily life, including cybersecurity, healthcare, and finance.
Why Quantum Matters: Revolutionary Potential Across Sectors
Quantum computers won’t replace classical machines, but they will be used to solve problems that today’s systems simply can’t, at exponentially faster speeds. Their ability to handle complexity at scale means quantum computing will unlock solutions that were previously impossible or impractical, with major implications across a range of sectors.
This potential is already being recognized by many in the industry. ISACA’s Quantum Pulse Poll found that a majority (56%) of European IT professionals welcome the arrival of quantum computing, with the same number predicting that it will create significant business opportunities.
In healthcare, quantum systems could accelerate drug discovery by modelling molecules and protein folding far more accurately than classical machines allow. In business and finance, they could transform how organizations optimize supply chains, manage risk, and harness artificial intelligence to process and learn from vast datasets.
In cybersecurity, quantum has the power to redefine how we protect systems and data. Quantum Key Distribution could enable theoretically unbreakable encryption. AI-driven threat detection could become faster and more effective. And quantum-secure digital identity systems could help prevent fraud and impersonation.
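As a loose illustration of the idea behind Quantum Key Distribution, the sketch below mimics the basis-matching step of the BB84 protocol using ordinary random numbers. No real quantum physics is simulated here - a mismatched measurement basis is simply modeled as a coin flip - but it shows how two parties end up with an identical shared key without ever transmitting the key bits directly:

```python
import random
random.seed(7)

# Toy BB84 sketch: Alice sends random bits in random bases ("+" or "x");
# Bob measures each one in his own randomly chosen basis.
n = 64
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

# If Bob's basis matches Alice's, he reads her bit; otherwise the
# quantum measurement is modeled as a 50/50 coin flip.
bob_bits = [bit if ab == bb else random.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Alice and Bob publicly compare bases (never the bits themselves)
# and keep only the positions where the bases matched.
alice_key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
             if ab == bb]
bob_key   = [bit for bit, ab, bb in zip(bob_bits, alice_bases, bob_bases)
             if ab == bb]

assert alice_key == bob_key  # matched-basis positions agree perfectly
print(f"{len(alice_key)} shared key bits from {n} transmissions")
```

In the real protocol, an eavesdropper measuring in transit disturbs the qubits and shows up as errors in the matched-basis positions - that detectability is what makes QKD theoretically unbreakable.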
But while these developments hold huge promise, they also introduce one of the most serious challenges facing cybersecurity today.
Quantum and Cybersecurity: A Looming Disruption
This isn’t a distant concern. Over two-thirds (67%) of cybersecurity professionals surveyed by ISACA believe that quantum computing will increase or shift cyber risk over the next decade, and it’s not hard to see why.
At the center of concern is encryption. Today’s most common cryptographic methods, like RSA and ECC, are built on mathematical problems that classical computers can’t solve in practical timeframes. But quantum machines could crack these with relative ease, putting the security of data at serious risk.
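As a rough illustration of why this matters: RSA's security rests on how hard it is to recover the two primes behind the public modulus. The toy sketch below uses deliberately tiny numbers, so a simple trial-division "attack" succeeds instantly - for real 2048-bit keys that factoring step is classically infeasible, and it is precisely the step Shor's algorithm would make tractable on a large quantum machine:

```python
# Toy RSA with deliberately tiny primes - for illustration only.
# Real moduli are 2048+ bits, where recovering p and q classically is
# infeasible; Shor's algorithm would make that step tractable.
p, q = 61, 53
n = p * q                        # public modulus (3233)
e = 17                           # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent, derived from the factors

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # the legitimate key holder can decrypt

# "Breaking" the toy key: trial division recovers the factors at once.
def factor(m):
    for f in range(2, int(m ** 0.5) + 1):
        if m % f == 0:
            return f, m // f

fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_recovered, n))  # 42 - the attacker reads the message
```

The asymmetry the whole scheme depends on - easy to multiply, hard to factor - simply collapses at toy sizes, and quantum computing threatens to collapse it at real ones.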
This raises the very real threat of “harvest now, decrypt later”, where malicious actors steal encrypted data today, intending to unlock it once quantum capabilities arrive. Sensitive information considered secure now - such as financial records, personal data, and classified communications - could be exposed overnight.
The implications are vast. If these foundational algorithms are broken, the ripple effect would be felt across every sector. Cryptography underpins not just cybersecurity systems, but digital infrastructure itself, from banking and healthcare to identity verification and cloud computing.
As quantum advances, preparing for this threat is no longer optional. It’s a critical step toward protecting the digital systems we all rely on.
The Reality Check: How ready are we for quantum?
While the pace of quantum innovation accelerates, organizational readiness is not keeping up.
Few organizations have started preparations. Just 4% of IT professionals say their organization has a defined quantum computing strategy in place. In many cases, quantum is still entirely off the radar. More than half of respondents (52%) report that the technology isn’t part of their roadmap, with no plans to include it.
Even when it comes to mitigation, most have yet to take basic steps. Despite the risks posed to current encryption standards, 40% of professionals say their organization hasn’t considered implementing post-quantum cryptography, creating worrying potential for disruption.
Part of the challenge lies in awareness. Quantum remains unfamiliar territory for most professionals, with only 2% describing themselves as extremely familiar with the technology. And while the U.S. National Institute of Standards and Technology (NIST) has spent more than a decade developing post-quantum encryption standards, just 5% of respondents say they have a strong understanding of them.
Meanwhile, global progress on quantum development continues to accelerate. Commercial applications are likely to arrive sooner than many expect, yet they may do so in a digital ecosystem unfit to cope. If encryption breaks before defenses are in place, the consequences could be severe, with widespread operational disruption, reputational harm, and regulatory fallout.
Preparing for quantum is no longer a theoretical exercise. The risk is real, and the window for proactive action is closing.
Preparing for the Post-Quantum Future
Preparing for quantum computing isn’t just a technical upgrade – it’s a strategic imperative. Yet most professionals still lack the awareness and skills needed to navigate what’s coming. Quantum education must now be a priority, not just for security teams, but across leadership, risk, and governance functions.
Governments have a role to play too. The UK’s £60 million investment in quantum skills is a strong start, but long-term readiness will depend on sustained collaboration between public and private sectors.
For organizations, action is needed now. That means identifying where quantum could pose a risk, assessing encryption dependencies, and beginning the shift to quantum-safe systems. Crucially, none of this will be possible without the right expertise.
Developing a workforce holistically trained in quantum (while continuing to do the same for AI) will enable organizations to apply new technologies effectively and securely before the threats materialize.
Quantum brings extraordinary potential, but it also demands urgent preparation. Those who act early will be far better positioned to secure their systems and lead confidently in a post-quantum world.
We've featured the best cloud firewall.
- Security researchers found Russian network fingerprints on 12 free VPNs available on Google and Apple's app stores, and Chinese traces on six
- Five of these VPNs are also thought to have ties with a Shanghai-based firm believed to have links with the Chinese military
- While network fingerprints don’t necessarily signal Chinese or Russian ownership, experts advise caution
Twelve free VPN services available on the official Google Play and Apple App Store may have links with Russia, and six with China.
These are the findings from security researchers at Comparitech, who analyzed 24 VPNs and found Russian and Chinese network fingerprints on a total of 12 apps. Two of them (Turbo VPN, VPN Proxy Master) also include Chinese or Russian SDKs (software development kits) that, according to experts, "are clear indicators that the SDK was intentionally bundled into the app."
Experts build on the work of the team at the Tech Transparency Project, which in April uncovered that millions of free VPN users across 20 apps may have sent their data to China without knowing it. On that occasion, experts found that Turbo VPN and VPN Proxy Master, alongside three additional services (Thunder VPN, Snap VPN, and Signal Secure VPN), have ties with a Shanghai-based firm believed to have links with the Chinese military.
While these traces don't necessarily signal Russian or Chinese ownership, experts advise current users to exercise caution regarding their data privacy.
"China and Russia both force domestically-owned VPNs to register with the government and adhere to local laws, which may impact user privacy. For this reason, no Chinese or Russian VPN can offer a trustworthy 'no-logs' service, which is the only type of VPN we recommend," wrote researchers.
Which apps are impacted?
The impacted VPN apps can be divided into three groups:
- Six communicate with Chinese domains: Signal Secure VPN (Android), Turbo VPN (Android), VPN Proxy Master (Android), Snap VPN (Android), Now VPN (iOS), and Ostrich VPN (iOS)
- Eight Android apps communicate with Russian IP addresses: QuarkVPN, VPNify, Signal Secure VPN, Turbo VPN, VPN Proxy Master, Snap VPN, VPN Free, and Proxy Master
- Four iOS apps (NowVPN, WireVPN, FastVPN Super, and VPN - Fast VPN Super) communicate with Apple.com domains hosted in Russia, but only the latter two also contact other third-party Russian domains
Four Android VPN apps (Signal Secure VPN, Turbo VPN, VPN Proxy Master, and Snap VPN) have links with both Chinese and Russian domains.
All in all, "Apple doesn’t list any of the VPN apps that, on their Android versions, communicate with third-party Russian domains. Based on this, Apple appears to be more strict about removing Russia-linked VPNs than China-linked ones," experts wrote.
What does it mean for your privacy?
While the best VPN services promise to boost online privacy by encrypting your online communications and working with strict no-log policies, both China and Russia impose greater control and data retention requirements on domestic VPNs.
As mentioned earlier, the traces that researchers found don't necessarily indicate Chinese and Russian ownership. Yet, "it may be an indicator of potential ties, especially when combined with other signals," such as a Chinese or Russian SDK, publisher metadata, or similar behavior.
On a practical level, it means that the app may route some data or logs via servers located either in China or Russia. Foreign SDKs, especially, could signal deeper control or development origin, according to experts.
As a rule of thumb, you should avoid unverified free VPN apps, regardless of their ownership, as they can make you vulnerable to all sorts of privacy and security risks – from invasive ad-tracking to malware and even foreign surveillance.
If you're looking for a secure freebie, I recommend checking our up-to-date free VPN guide, with Privado VPN, Proton VPN, and Windscribe VPN being today's top picks. If you're willing to go premium, NordVPN is TechRadar's top-rated service at the time of writing.
- The Google Pixel 10 series doesn't include Battery Share
- This feature was removed to allow for Qi2 magnetic wireless charging
- Not all Pixel fans are happy about this change
The Google Pixel 10 series comes with a number of new features, but it’s also missing some things, with Battery Share notably being absent from these phones.
That’s Google’s name for its reverse wireless charging feature – in other words the ability to use your phone to wirelessly charge other devices. It’s a common feature on Android handsets, including the Google Pixel 9 series, but it’s missing from the Pixel 10.
However, there’s a good reason for this, as DroidReader asked Google and was told that the array of magnets required for Qi2 magnetic wireless charging “creates a strong connection with the charger but presents a physical limitation for reverse wireless charging.”
So in other words, the addition of Qi2 magnetic wireless charging (which allows you to use the new Pixelsnap accessories) meant Google had to remove Battery Share.
MagSafe with Pixel branding
The Google Pixel 10 Pro with a PixelSnap accessory (Image credit: Philip Berne / Future)
For most people, we’d wager this is a good trade. Pixelsnap is a lot like MagSafe – it’s an ecosystem of wireless chargers and accessories like stands and grips that can attach to the back of your Pixel 10 with magnets.
It’s a handy feature, but not everyone is happy about this change, with a Reddit thread including comments like “this one hurts a lot”, and “Battery Share I found to be such a useful feature”.
Still, other commenters mentioned rarely if ever using it, so it’s certainly not a universally loved feature. Hopefully, even those who did love Battery Share will come to appreciate Pixelsnap too – but if not, other brands like Samsung still offer similar reverse wireless charging capabilities.
- Google execs have been talking hardware following the Pixel 10 launch
- Flip phones, smart rings, and tablets aren't on the way
- The company is concentrating on phones and AI instead
We've just been treated to a host of new Google Pixel devices, including four different Pixel 10 phones, but we also have news about Google devices that aren't coming – including a flip foldable and a successor to the Pixel Tablet from 2023.
Speaking to Mark Gurman and Samantha Kelly at Bloomberg, Google's Vice President of Devices and Services Shakil Barkat confirmed that there are no plans for a Google flip foldable to join the Pixel 10 Pro Fold.
Barkat also ruled out a smart ring, and said the Pixel tablet series is on pause until a "meaningful future" can be figured out for the product category. It seems the likes of Samsung will be left to release those kinds of devices for the time being.
The status on smart glasses, meanwhile, is "TBD" – it seems Google is happy to stay focused, for now. "Every time a new type of category of product gets added, the bar on maintenance for the end user keeps going up," says Barkat. "It's already pretty painful."
The "vanguard" of AIGoogle is focused on Pixel phones and AI (Image credit: Philip Berne / Future)Google execs did also use the interview to hype up what they are working on. Rick Osterloh, who is head of Google's hardware and Android divisions, described the Pixel 10 as a "super strong release" in what is now a "mature category".
The Pixel 11 is almost finalized, apparently, while work has started on the Pixel 12. Google design chief Ivy Ross says that the company is aiming for big visual changes to the Pixel phones "every two to three years" – so watch this space.
As you would expect, the Google team pushed AI as being the big innovation that'll be happening on phones over the next few years, via Gemini and features such as Magic Cue, which surfaces key info from your phone when you need it.
Osterloh says he wants Android to be "on the vanguard of where AI is going", and that Google isn't overly worried about Pixel sales: the phones account for about 3% of the US market at the moment, compared to a 49% share for Apple.
- Trump criticizes legacy US Government websites as expensive and poor to use
- New "American by Design" initiative will modernize government agency sites
- Airbnb co-founder Joe Gebbia appointed as Chief Design Officer of National Design Studio
President Trump may soon be browsing for the best website builders after ordering improvements to federal government websites and physical spaces in the hope of making them more attractive for both workers and customers.
“The Government has lagged behind in usability and aesthetics,” Trump said in a new Executive Order, noting the need for system modernization that could tackle high maintenance costs in the process.
The Executive Order explains legacy systems can be costly to maintain and costly to American citizens, who can spend more time than necessary trying to navigate them, hence the need for change.
Trump wants to modernize US Government websites
The Order introduces Trump’s new ‘America by Design’ initiative, which begins with high-touchpoint sites where citizens are most likely to interact with government agencies.
A newly formed National Design Studio and a newly appointed Chief Design Officer will oversee the project.
“It is the policy of my Administration to deliver digital and physical experiences that are both beautiful and efficient, improving the quality of life for our Nation,” Trump wrote.
The National Design Studio has been tasked with reducing duplicative design costs, much in the same way that the White House has already started centralizing IT procurement to boost cost efficiency.
It will also use a standardized design for consistency and trust, and improve the quality of public-facing experiences.
Agencies have been given until July 4, 2026, to deliver their initial results after consulting with the Chief Design Officer.
Separate Reuters reporting has revealed Airbnb co-founder Joe Gebbia will lead the National Design Studio as Chief Design Officer, with the Internal Revenue Service set to be the first place to see an overhaul.
Trump’s Order also confirms the “temporary organization” will close in three years, on August 21, 2028, suggesting that site modernization could be complete even before that.
As artificial intelligence (AI) tools like ChatGPT, Copilot, Grok and predictive analytics platforms become embedded in everyday business operations, many companies are unknowingly walking a legal tightrope.
While AI tools provide many benefits - streamlining workflows, enhancing decision-making, and unlocking new efficiencies - the legal implications are vast, complex, and often misunderstood.
From data scraping to automated decision-making, the deployment of AI systems raises serious questions around copyright, data protection, and regulatory compliance.
Without robust internal frameworks and a clear understanding of the legal landscape, businesses risk breaching key laws and exposing themselves to reputational and financial harm.
GDPR and the Use of AI on Employee Data
One of the most pressing concerns is how AI is being used internally, particularly when it comes to processing employee data. Many organizations are turning to AI to support HR functions, monitor productivity, or even assess performance. However, these applications may be in direct conflict with the UK General Data Protection Regulation (GDPR).
GDPR principles such as fairness, transparency, and purpose limitation are often overlooked in the rush to adopt new technologies. For example, if an AI system is used to monitor employees without their informed consent, or if the data collected is repurposed beyond its original intent, the business could be in breach of data protection law.
Moreover, automated decision-making that significantly affects individuals, such as hiring or disciplinary actions, requires specific safeguards under GDPR, including the right to human intervention.
The Legal Grey Area of Data Scraping
Another legal minefield is the use of scraped data to train AI models. While publicly available data may seem fair game, the reality is far more nuanced. Many websites explicitly prohibit scraping in their terms of service, and using such data without permission can lead to claims of breach of contract or even copyright infringement.
This issue is particularly relevant for businesses developing or fine-tuning their own AI models. If training data includes copyrighted material or personal information obtained without consent, the resulting model could be tainted from a legal standpoint. Even if the data was scraped by a third-party vendor, the business using the model could still be held liable.
Copyright Risks in Generative AI
Generative AI tools, such as large language models and image generators, present another set of challenges. Employees may use these tools to draft reports, create marketing content, or process third-party materials. However, if the input or output involves copyrighted content, and there are no proper permissions or frameworks in place, the business could be at risk of infringement.
For instance, using generative AI to summarize or repurpose a copyrighted article without a license could violate copyright law. Similarly, sharing AI-generated content that closely resembles protected work may also raise legal red flags. Businesses must ensure their employees understand these limitations and are trained to use AI tools within the bounds of copyright law.
The Danger of AI “Hallucinations”
One of the lesser-known but increasingly problematic risks of AI is the phenomenon of “hallucinations” - where AI systems generate outputs that are factually incorrect or misleading, but presented with confidence. In a business context, this can have serious consequences.
Consider a scenario where an AI tool is used to draft a public document or legal summary, in which it includes fabricated company information or incorrect regulations. If that content is published or relied upon, the business could face reputational damage, client dissatisfaction, or even legal liability. The risk is compounded when employees assume the AI’s output is accurate without proper verification.
The Need for Internal AI Governance
To mitigate these risks, businesses must act promptly to implement robust internal governance frameworks. This includes clear policies on how AI tools can be used, mandatory training for employees, and regular audits of AI-generated content.
Data Protection Impact Assessments (DPIAs) should be conducted whenever AI is used to process personal data, and ethical design principles should be embedded into any AI development process.
It’s also critical to establish boundaries around the use of proprietary or sensitive information. Employees interacting with large language models must be made aware that anything they input could potentially be stored or used to train future models. Without proper safeguards, there’s a real risk of inadvertently disclosing trade secrets or confidential data.
Regulatory Focus in 2025
Regulators are increasingly turning their attention to AI. In the UK, the Information Commissioner’s Office (ICO) has made it clear that AI systems must comply with existing data protection laws, and it is actively investigating cases where this may not be happening. The ICO is particularly focused on transparency, accountability, and the rights of individuals affected by automated decision-making.
Looking ahead, we can expect more guidance and enforcement around the use of AI in business. The UK is currently consulting on its AI Bill, which aims to regulate artificial intelligence by establishing an AI Authority, enforcing ethical standards, ensuring transparency, and promoting safe, fair, and accountable AI development and use - standards that businesses will have to meet.
AI is transforming the way we work, but it’s not a free pass to bypass legal and ethical standards. Businesses must approach AI adoption with caution, clarity, and compliance to safeguard their staff and reputation. By investing in governance, training, and legal oversight, organizations can harness the power of AI while avoiding the pitfalls.
The legal risks are real, but with the right approach, they are also manageable.
We feature the best cloud document storage.