- AMD's head of client CPUs says it's looking into dedicated NPU accelerators
- These would be the equivalent of a discrete GPU, but for AI tasks
- Such boards could ease demand for higher-end GPUs, which are sometimes bought for AI work rather than gaming
AMD is looking to a future where it might not just produce standalone graphics cards for desktop PCs, but similar boards which would be the equivalent of an AI accelerator - a discrete NPU, in other words.
CRN reports (via Wccftech) that AMD's Rahul Tikoo, head of its client CPU business, said that Team Red is “talking to customers” about “use cases” and “potential opportunities” for such a dedicated NPU accelerator card.
CRN points out that there are already moves along these lines afoot, such as an incoming Dell Pro Max Plus laptop, which is set to boast a pair of Qualcomm AI 100 PC inference cards. That's two discrete NPU boards with 16 AI cores and 32GB of memory apiece, for 32 AI cores and 64GB of RAM in total.
To put that in perspective, current integrated (on-chip) NPUs, such as those in Intel's Lunar Lake CPUs, or AMD's Ryzen AI chips, offer around 50 TOPS - ideal for Copilot+ PCs - whereas you're looking at up to 400 TOPS with the mentioned Qualcomm AI 100. These boards are for beefy workstation laptops and AI power users.
Tikoo observed: "It’s a very new set of use cases, so we're watching that space carefully, but we do have solutions if you want to get into that space - we will be able to."
The AMD exec wouldn't be drawn on a timeframe for realizing such discrete NPU ambitions, but said that "it's not hard to imagine we can get there pretty quickly" given the 'breadth' of Team Red's technologies.
(Image credit: Future / John Loeffler)
Analysis: potentially taking the pressure off high-end GPU demand
So, does this mean it won't be too long before you might be looking at buying your desktop PC and mulling a discrete NPU alongside a GPU? Well, not really: this still isn't consumer territory as such - as noted, it's more about AI power users - but it will have an important impact on everyday PCs, at least for enthusiasts.
These standalone NPU cards will only be needed by individuals working on more heavyweight AI tasks with their PC. They will offer benefits for running large AI models or complex workloads locally rather than on the cloud, with far more responsive performance (dodging the delay factor that's inevitably brought into the mix when piping work online, into the cloud).
There are obvious privacy benefits from keeping work on-device, rather than heading cloud-wards, and these discrete NPUs will be designed to be more efficient than GPUs taking on these kinds of workloads - so there will be power savings to be had.
And it's here we come to the crux of the matter for consumers, at least enthusiast PC gamers looking at buying more expensive graphics cards. As we've seen in the past, sometimes individuals working with AI purchase top-end GPUs - like the RTX 5090 or 5080 - for their rigs. When dedicated NPUs come out from AMD (and others), they will offer a better choice than a higher-end GPU - which will take pressure off the market for graphics cards.
So, especially when a new range of GPUs comes out, and there's an inevitable rush to buy, there'll be less overall demand on higher-end models - which is good news for supply and pricing, for gamers who want a graphics card to, well, play PC games, and not hunker down to AI workloads.
Roll on the development of these standalone NPUs, then - it’s got to be a good thing for gamers in the end. Another thought for the more distant future is that these NPUs may eventually be needed for AI routines within games, when complex AI-driven NPCs are brought into being. We've already taken some steps down this road, cloud-wise, although whether that's a good thing or not is a matter of opinion.
- Alone – Charity Multipurpose Non-profit WordPress Theme has a 9.8/10 flaw
- The bug allows crooks to create rogue admin accounts
- More than 120,000 takeover attempts already blocked
The "Alone – Charity Multipurpose Non-profit WordPress Theme", a commercial theme used in many WordPress websites, contained a critical vulnerability that allowed threat actors to completely take over the website, experts have warned.
The WordPress theme, designed for charities, NGOs, and fundraising campaigns, features more than 40 ready-to-use demos, donation integration, and compatibility with Elementor and WPBakery.
According to Themetix, around 200 active WordPress sites are running this theme today.
Ongoing attacks
Wordfence researchers claim exploitation started on July 12, two days before the vulnerability was publicly disclosed. So far, the company has blocked more than 120,000 exploitation attempts from almost a dozen different IP addresses.
In the attacks, the threat actors try to upload a ZIP archive with a PHP-based backdoor that grants them remote code execution capabilities, as well as the ability to upload arbitrary files. Crooks also used the flaw to deliver backdoors that can create additional admin accounts.
All versions up to 7.8.3 contained a vulnerability that allowed threat actors to upload arbitrary files, including malware that can create admin accounts. That way, crooks can completely take over websites and use them to host other malware, redirect visitors to other malicious pages, serve phishing landing pages, and more.
The vulnerability is now tracked as CVE-2025-4394, and has a severity score of 9.8/10 (critical). It was addressed in version 7.8.5, which was released on June 16, 2025. If you are using this theme, it would be wise to update it as soon as possible, since the bug is being actively exploited in the wild.
WordPress is generally considered a safe website builder platform, but third-party themes and plugins - not so much. That is why security pros advise WordPress users to only keep the plugins and themes they actively use, and to make sure they are always up to date.
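If you want a quick way to confirm whether a site is still running a vulnerable build, one option is to read the Version header that WordPress themes declare at the top of their style.css file. The Python sketch below is purely illustrative: the theme directory name ("alone") and install path are assumptions, and it simply compares the declared version against the patched 7.8.5 release.

```python
import re
from pathlib import Path

PATCHED = (7, 8, 5)  # CVE-2025-4394 was fixed in version 7.8.5

def theme_version(style_css: Path) -> tuple[int, ...]:
    """Read the 'Version:' header that WordPress themes declare in style.css."""
    text = style_css.read_text(encoding="utf-8", errors="ignore")
    match = re.search(r"^\s*Version:\s*([\d.]+)", text, re.MULTILINE)
    if not match:
        raise ValueError(f"No Version header found in {style_css}")
    return tuple(int(part) for part in match.group(1).split("."))

# Assumed install location - adjust to your own WordPress root and theme folder.
style = Path("/var/www/html/wp-content/themes/alone/style.css")
installed = theme_version(style)

if installed < PATCHED:
    print(f"Installed {'.'.join(map(str, installed))} looks vulnerable - update to 7.8.5 or later.")
else:
    print("Theme is at or above the patched release.")
```

A check like this is no substitute for updating through the WordPress dashboard or your host's tooling, but it can help when auditing several sites at once.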
Via The Hacker News
It feels like everyone and their aunt is making AI / AR smart glasses nowadays – something I'm acutely aware of as someone who tests the best smart glasses around. But something caught my eye when reading a description of Brilliant Labs’ new Halo glasses – thanks to their long-term memory capabilities, they promise to remind you of details of conversations and objects you’ve seen “years or even decades later.”
In real time, Brilliant Labs’ specs can apparently offer contextual information based on what they hear and see, too. This style of assistive help – in the moment and later on – sounds like a more ongoing version of features like the Ray-Ban Meta glasses’ visual reminders, which Meta and others have said they plan to make (or have already made) an optionally always-on tool.
Now, Brilliant Labs has said its agent Noa will serve as a sort of AI VPN. Just as a VPN reroutes your data to keep your online activity more private, Noa promises to offer similar levels of privacy as it communicates with the AI model powering its cognitive abilities.
Other Halo highlights are its “world’s thinnest AI glasses” design, a built-in display that sits in your periphery like some other AI specs we’ve seen announced this year, and a relatively affordable $299 asking price (around £225 / AU$465) when it launches in November.
But even as someone who loves my Ray-Ban smart glasses and can see the benefits of these Halo glasses, I’m worried these smart specs are a sign we're continuing to race towards the death of privacy.
AI in your glasses can be handy (Image credit: Meta)
Risk vs reward
Smart glasses with cameras are already, admittedly, something of a privacy conundrum. I think the Meta Ray-Ban specs do it well – only letting you snap pictures or short videos (or livestream to a public Meta account on Facebook or Instagram), and having an obvious light shine while you do so.
But the next generation of utility wants to boast an always-on mentality – cameras that activate frequently, or microphones that capture every conversation you have.
This would be like the Bee wristband I saw at CES (Amazon recently bought the company), which promises to help you remember what you talk about with detailed summaries.
You can instantly see the advantages of these features. An always-on camera could catch that you’re about to leave home without your keys, or remind you that your fridge is getting empty, and Bee highlighted to me that you could use it to help you remember ideas for gifts based on what people say, or recollect an important in-person work chat you might have.
However, possible pitfalls are close behind.
How private are we really? (Image credit: Shutterstock)Privacy is the big one.
Not just your own, though you’re arguably consenting to AI intrusiveness by using these tools, but the privacy of people around you.
They’ll be recorded by always-on wearables whether they want to be or not – and whether they even know they are.
Privacy makes up a big part of the media law training and exams that qualified journalists (like me) must complete, and always-recording wearables could very easily enable people to break a lot of legal and ethical rules – quite possibly without those people realising they’re doing anything wrong.
Move fast and break everything (Image credit: Shutterstock)
Big tech has always had an "ask for forgiveness" mentality – arguably because, time after time, the punishments (assuming companies are punished at all) are vastly outweighed by the benefits reaped from breaking the rules.
This has seemed especially true with privacy, as our data seems to get mishandled by a company every other month – in small, but also sometimes catastrophic ways.
I’m looking at you, Tea.
We’ve also already seen examples of AI companies playing fast and loose with copyright, and I expect the rulelessness will only get worse in the AI space as governments across the globe seem less than keen to properly regulate AI so they don’t hamper their country’s efforts to win the digital arms race.
AI wearables capturing every moment of our lives (from multiple angles to boot) with video and audio are a catastrophe waiting to happen.
Yes, there are always promises of privacy, and optional toggles you can switch on to supposedly enhance your data protection. Still, for every good actor that keeps its privacy promises, we can find plenty of companies that don’t – or quietly change them in new ToS you’re asked to sign.
Cooler than expected, just as scary (Image credit: Oakley / Meta)
We can hope that robust regulation and proper punishment for malpractice might come in and help avoid this disaster I foresee, but I’m not holding my breath.
Instead, I’m coming to terms with the demise of privacy – a concept already on its last legs – and accepting that while Big Brother might look different from how George Orwell pictured it, it will (as predicted) be watching us.
- One in six US workers say they lie about using AI to meet job expectations
- Engineers who use AI are the new threat, not the tools themselves
- Many workers copy AI-literate peers just to appear competent in modern workplaces
As AI tools spread across office environments, many US workers now find themselves in an odd situation: pretending to use artificial intelligence at work.
A recent survey by tech recruitment firm Howdy.com found that one in six employees claim to lie about using AI.
This phenomenon appears to be a reaction not only to managerial expectations but also to deeper insecurities around job stability in an AI-saturated landscape.
Survival of the most artificial
Underneath the behavior is what some are calling “AI-nxiety,” an unease born from conflicting narratives.
On the one hand, companies urge employees to embrace AI to boost productivity; on the other hand, those same workers are warned that AI, or someone more skilled at using it, could soon replace them.
This sense of pressure is particularly acute among workers who fear being displaced by technically skilled peers, such as engineers who actively use LLM-based systems and other AI tools.
As one commenter put it on The Register: “You may lose your job to an engineer who uses AI.”
For some, the message is clear: adapt or get left behind.
In late 2023, a survey by EY found that two-thirds of white-collar US workers feared being passed over for promotion by AI-savvy colleagues.
In this environment, mimicking the behavior of the AI literate becomes a way to hedge against obsolescence.
Further complicating the picture is the lack of adequate training.
Howdy.com reports that a quarter of workers expected to use AI receive no instruction on how to do so.
Without proper guidance, many are stuck between expectations from management and the reality of poorly integrated AI systems.
Some give up on mastering the tools and simply act like they are already doing it.
Meanwhile, contradictory workplace norms deepen the confusion.
Another survey from Slack’s Workforce Index found that nearly half of global desk workers felt uncomfortable telling managers they use AI, worrying it may make them appear lazy or unoriginal.
Thus, some pretend not to use AI even when they do.
At the heart of the issue is a growing mismatch between what companies signal, “AI is the future,” and what employees experience: unclear expectations, low support, and shifting norms around competence.
Whether AI actually replaces jobs or not, the psychological toll is already here, and pretending to be an AI user has become a strange new survival strategy.
- Nvidia has announced that support for GTX 10 series GPUs ends in October 2025
- After that, these graphics cards, including the GTX 1060, will only get security fixes
- It also announced that Windows 10 support will run through to October 2026, mirroring Microsoft's extended support program for the OS
Nvidia has released a new graphics driver and announced that it'll soon be drawing the curtain on support for GeForce GTX 10 series GPUs, as well as GTX 900 models - and the end for Windows 10 gamers will follow a year later.
As Ars Technica highlighted, the release notes for driver version 580.88 came with the revelation that graphics cards based on Maxwell and Pascal architecture - meaning GTX 900 and 10 series products - will witness their final driver release in October 2025.
After that, they will only get quarterly security updates to patch them against vulnerabilities, and that's all. Security patches will finish in October 2028 for these products, too.
If October 2025 rings a bell, that's because it's also the month when Microsoft casts aside support for Windows 10, and that's also wrapped up in this Nvidia announcement.
Team Green said that it's extending Game Ready Driver support for Windows 10 to October 2026, to mirror the extended support Microsoft is offering consumers who want to stick with the OS, and not upgrade to Windows 11 yet. Or indeed people who may not be able to upgrade to the newer operating system, due to their PC not meeting the hardware requirements.
This move comes as no surprise, as Nvidia already told us back at the start of July that the v580 drivers would be the last to support Maxwell and Pascal graphics cards - we just didn't know exactly when the cut-off was coming, and now we do.
(Image credit: Nvidia)
If you're affected, what does this mean exactly?
As stated, there are two categories of PC gamers this affects: those with GTX 10 model GPUs, like the GTX 1060, and those running Windows 10. Further, some folks will be in both camps, no doubt - maybe quite a few.
GTX 10 series graphics cards are still reasonably popular in some cases (whereas GTX 900 products have pretty much dwindled away to nothing). In fact, the GTX 1060 is actually the 12th most popular GPU according to the latest Steam hardware survey - and once reigned supreme - so it's still seeing a lot of use.
After October 2025, this GPU, along with other 10 series offerings like the 1070 and 1080, will only receive security updates. That means they'll still be safe to use - patched against any exploits in drivers that may be found by the bad actors out there - but they won't get support for new games or features.
So, as time rolls on, you'll find that your trusty GTX 1060 becomes less reliable with new games, as titles arrive that its final driver was never tuned for. Note that if you stick with older games, which were catered for before game support was frozen, you should be fine, at least in theory.
As for those on Windows 10, you'll be okay for another year yet. You'll still have full driver support through to October 2026, as noted, so you'll be fine until then. Assuming you keep Windows 10 itself secure, of course - using Microsoft's offer of extended support, which is now free, with a slight catch.
After October 2026, though, you'll need to upgrade to Windows 11, or you won't get new drivers – so no game support, or security patches either – no matter how new your Nvidia GPU is.
At this point, you're really looking at a Windows 11 upgrade - or a switch to something else entirely - unless Microsoft extends Windows 10 support further for consumers beyond 2026 (which seems unlikely, but could happen). In which case, Nvidia might again mirror the move with its own drivers - given that's what has happened here - but nothing's guaranteed by any means.
There's always something exciting going on over at Apple TV+, whether that's a new movie release like the upcoming crime-thriller Highest 2 Lowest directed by Spike Lee (it's in cinemas on August 22 and will land on Apple TV+ on September 5), or its lineup of fresh shows for August.
There are five new Apple TV+ titles to look forward to, including returning shows Platonic and Invasion. However, it's the new historical miniseries Chief of War starring Jason Momoa, the first two episodes of which will be released on August 1, that stands out among this month's new arrivals.
Apple TV+ is reliably one of the best streaming services – especially if you're a fan of original movies and shows – and below are our five picks of the next big titles arriving on the platform.
Chief of War (miniseries)
Age rating: TV-MA
Creators: Thomas Paʻa Sibbett & Jason Momoa
Arriving on: August 1
As well as starring in Chief of War, Jason Momoa is heavily involved off-screen, as he's one of the show's creators and executive producers.
Set at the turn of the 19th century, over the course of nine episodes Chief of War chronicles the true story of Hawaiian noble Ka'iana (Momoa), who embarks on a bloody mission to unite four war-torn Hawaiian kingdoms and prevent his people and culture from being colonized.
Platonic season 2
Age rating: TV-MA
Creators: Francesca Delbanco & Nicholas Stoller
Arriving on: August 6
Since premiering in 2023, Platonic has become one of the best-loved Apple TV+ shows and has received mostly positive reviews, scoring 93% from the critics on Rotten Tomatoes. Two years on from its debut, Platonic season 2, which looks even more unhinged than the first season, arrives this month.
Platonic is a comedy-drama about two former childhood best friends, single mother-of-three Sylvia (Rose Byrne) and recent divorcee Will (Seth Rogen). Years after a rift that led to their falling out, the two reconnect as adults as they both approach middle age, and help one another to navigate their various midlife crises.
Invasion season 3
Age rating: TV-MA
Creators: Simon Kinberg & David Weil
Arriving on: August 22
The popular sci-fi thriller from X-Men and The Martian (2015) producer Simon Kinberg and David Weil, the creator of the Prime Video thriller series Hunters, returns for its third season on August 22.
Invasion follows the events of an alien invasion of Earth from the perspective of different characters from around the world. These include an American mother struggling to keep her family together in the aftermath of the attack, a Japanese aerospace engineer who’s desperate to find her astronaut girlfriend, and a British schoolboy who experiences strange visions connected to the extra-terrestrial invaders.
Rather than focusing on action-packed battle scenes between humans and aliens, Invasion explores the different ways in which people respond in the face of unimaginable chaos and tragedy.
Stillwater season 4
Age rating: U
Developed by: Rob Hoegee
Arriving on: August 1
The animated Apple TV+ show has been running since 2020, bringing the pages of Jon J Muth's Zen series of books to life for the screen. It's been almost two years since Stillwater's third season was released, and now the wait for its fourth installment is almost over.
Centered on three young siblings, Karl, Addy, and Michael, Stillwater follows the trio as they navigate childhood challenges, from minor arguments to typical youthful frustrations. Living next door is Stillwater, a wise panda with a calming demeanour who offers the children new perspectives on the world, and themselves.
Stillwater makes use of stories and anecdotes that are often drawn from Zen Buddhist philosophy as a means of indirectly solving the children’s problems, while providing them with the tools to understand and process their emotions.
Snoopy Presents: A Summer Musical (Image credit: AppleTV+)
Age rating: N/A
Director: Erik Wiese
Arriving on: August 15
The final new Apple TV+ title this August is the short TV movie Snoopy Presents: A Summer Musical, featuring all the best characters from the Peanuts comics including Charlie Brown, Snoopy, and Woodstock.
Excited for another season at summer camp, Charlie Brown, his sister Sally, and their friends arrive ready for fun, adventures and music. When they arrive, however, they're saddened to learn that their beloved camp is set to close.
Snoopy and Woodstock come across a treasure map that leads them on a series of adventures, and when they find a treasure chest filled with musical instruments, it inspires Charlie and his gang to organize a benefit concert to save the camp.
Agentic AI tools are being hired into businesses faster than we can fathom. They’re already helping shape workflows, influencing decisions, and generally assisting with the day-to-day running of a business. But without the right training, oversight, and integration, they risk becoming more of a liability to a business than a competitive advantage.
Unlike generative AI tools, agentic AI tools are designed to act independently, by identifying tasks, making decisions, and taking actions without direct instruction. They’re built to think and act, not just respond: becoming actors in the workplace, participating in workflows, integrating across functions, and shaping outcomes.
Built for a specific outcome
Most of the tools we see today are built for purpose or a specific outcome and are generally more like a bot that front-ends a workflow or business process. Today, we are largely at the task-level agentic worker stage of automation, where repeatable tasks are completed within very specific guardrails and there is very little gray area. Where there is gray area, a human in the loop makes the ultimate decision or determination.
Traditionally, the role of a Chief Information Officer (CIO) is to manage and implement a company's information technology systems and strategies. As overseer of all IT operations, from hardware and software to data and IT infrastructure, the job is to ensure technology supports and drives business goals.
Managing agentic AI tools
Today, CIOs managing agentic AI tools are beginning to find themselves in less familiar territory. To manage these new digital entities responsibly and effectively, CIOs must step into a role that looks increasingly like that of an HR director: onboarding and training the tools, setting expectations for roles and responsibilities, and continuously evaluating their performance, just as you would with a new employee.
This might sound surprising, but if CIOs don’t adopt this mindset shift soon, they risk introducing not just inefficiency, but reputational and security risks across their business.
Agentic AI Is Your New Co-Worker
When a human employee joins a company, they're given training, clear expectations of their role, access to the right systems, and, crucially, guardrails. We should not be treating our new agentic AI co-workers any differently.
Many organizations believe that they can implement these tools quickly, with minimal oversight or structure - indeed many of the agentic AI vendors are marketing their technology with exactly these promises.
But agentic AI tools are dynamic: they evolve over time, learn from the data they interact with, and can behave in ways developers can’t fully anticipate. CIOs must therefore take the lead in designing structured, thoughtful onboarding processes for these tools, as well as ongoing reviews and feedback loops.
This includes defining their scope of responsibility, controlling what data and systems they can access, and outlining what constitutes successful performance. Without these guardrails, agentic AI tools can quickly create confusion, make flawed decisions, or even introduce bias or compliance risks.
Think Metrics, Not Magic
Like any other employee, agentic AI tools need clear success criteria, so CIOs should ask themselves: Is the AI doing what I expect it to do? And is it doing it well?
It’s critical that CIOs establish the right metrics to measure the performance of agentic AI tools. Primarily, CIOs must understand how agentic AI is improving workforce functionality by setting KPIs for tangible metrics, for example, speed and accuracy of execution, cost saved, and alignment with business goals. But performance should also be viewed through the lens of efficacy of outcomes: what are we willing to pay to achieve a specific outcome and what margin of error is acceptable?
This should not be a one-time thing. Annual performance reviews are inadequate for these new colleagues. Metrics need to be tracked and evaluated continuously because agentic AI can drift. As these tools learn and adapt, their behavior may change in subtle ways. Without regular check-ins, CIOs could find themselves managing a tool that no longer does the job well, or worse, one that has learned to manipulate its objectives or produce skewed results.
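To make "continuous, not annual" concrete, here is a minimal Python sketch of what rolling drift tracking for an agentic tool could look like: recent task metrics are compared against a baseline agreed at onboarding, and anything outside an agreed tolerance is flagged. The metric names, thresholds, and log_task helper are hypothetical illustrations, not a reference to any particular vendor's tooling.

```python
from collections import deque
from statistics import mean

# Hypothetical baseline agreed at onboarding: accuracy, cost per task (USD), latency (seconds)
BASELINE = {"accuracy": 0.95, "cost": 0.04, "latency": 2.0}
TOLERANCE = {"accuracy": 0.05, "cost": 0.02, "latency": 1.0}  # allowed drift per metric

WINDOW = 200  # evaluate over the most recent tasks, not a yearly review
history = {metric: deque(maxlen=WINDOW) for metric in BASELINE}

def log_task(accuracy: float, cost: float, latency: float) -> None:
    """Record one completed task; call this wherever the agent reports its results."""
    for metric, value in {"accuracy": accuracy, "cost": cost, "latency": latency}.items():
        history[metric].append(value)

def drift_report() -> dict[str, float]:
    """Return the metrics whose rolling average has drifted outside the agreed tolerance."""
    drifted = {}
    for metric, values in history.items():
        if not values:
            continue
        delta = abs(mean(values) - BASELINE[metric])
        if delta > TOLERANCE[metric]:
            drifted[metric] = delta
    return drifted

# Example: after each batch of tasks, escalate anything that has drifted.
log_task(accuracy=0.88, cost=0.07, latency=2.4)
if drifted := drift_report():
    print(f"Review needed - drift detected: {drifted}")
```

In a real deployment these checks would feed a dashboard or ticketing workflow rather than a print statement, but the point is the cadence: per batch of work, not per annual review.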
Ultimately, CIOs must understand not just how well an agentic AI tool is performing, but also whether it remains within regulatory boundaries. And, just as with humans, they must ask whether the tool can be corrupted into producing biased results. In many sectors, from finance to healthcare, a human must still make the final call. Agentic AI can assist, but cannot replace that responsibility.
Apply the Same Growth Mindset You Would to Employees
As CIOs assess and evaluate the performance of agentic AI tools, they should also consider mapping how they will expand their use across the business. Just as new employees are not expected to immediately take over every function of the business, you should start with small deployments of agentic AI and expand over time. Just like you coach and develop co-workers to take on new responsibilities, there should be a considered, phased testing and rollout of the appropriate tools for different jobs.
It's about building pathways for responsible expansion. Consider how the AI tool could take on increasingly complex tasks, under increasing scrutiny, with increasing access to sensitive data and systems, but only when it’s earned that trust through performance.
Similarly to how you might bring in a separate employee with a different skill set, you should also consider bringing in different tools based on different models that might be better suited to your desired task. Just as you search hard to find the right person to fit the team, so too should you find the right AI to support that function.
That level of careful progression doesn’t slow down innovation. It makes it sustainable.
Be Cautious About Permissions
With generative AI tools, concerns are largely centered around data leakage. But, with agentic AI, the more pressing issue is access. These tools rely on deep integration across systems in order to deliver value, but too much access creates unnecessary risk.
Just as they do for human employees, CIOs must apply a principle of least privilege to agentic AI, granting access only to the systems and data required for the task. Excessive user privileges contribute to more than half of security incidents for nearly a third (32%) of organizations.
Permissions are often given for a specific use case and never revoked, leading to unchecked access that can be exploited, or cause unintended consequences. Agentic AI adoption offers a clear opportunity for organizations to reevaluate legacy access structures and introduce zero trust access controls that are fit for purpose.
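To show what a deny-by-default, least-privilege grant might look like in code, here is a minimal Python sketch of an allowlist wrapper that only lets an agent call the tools and data scopes its role was granted. The role name, tools, scopes, and AccessDenied behaviour are all hypothetical; a production system would sit behind proper identity, secrets, and audit tooling rather than a wrapper like this.

```python
from dataclasses import dataclass, field

class AccessDenied(Exception):
    """Raised when an agent asks for a tool or data scope outside its grant."""

@dataclass
class AgentRole:
    name: str
    allowed_tools: set[str] = field(default_factory=set)
    allowed_scopes: set[str] = field(default_factory=set)

# Hypothetical grant: an invoice-processing agent gets only what its task needs.
invoice_agent = AgentRole(
    name="invoice-processor",
    allowed_tools={"read_invoice", "post_ledger_entry"},
    allowed_scopes={"finance/accounts-payable"},
)

def call_tool(role: AgentRole, tool: str, scope: str, payload: dict) -> dict:
    """Deny by default: the call only proceeds if both the tool and scope were granted."""
    if tool not in role.allowed_tools or scope not in role.allowed_scopes:
        raise AccessDenied(f"{role.name} may not use {tool!r} on {scope!r}")
    # A real system would dispatch to the tool here and log the call for audit.
    return {"status": "ok", "tool": tool, "scope": scope}

# Allowed: reading an invoice within the granted scope.
print(call_tool(invoice_agent, "read_invoice", "finance/accounts-payable", {"id": 42}))

# Denied: the same agent trying to touch HR records never reaches the system.
try:
    call_tool(invoice_agent, "read_invoice", "hr/payroll", {"id": 7})
except AccessDenied as err:
    print(f"Blocked: {err}")
```

The design choice mirrors how zero trust access is applied to people: every request is checked against an explicit grant, and anything not granted is refused, logged, and reviewed.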
Agentic AI Governance Is Workforce Management
Ultimately, what we're seeing is the evolution of CIO responsibilities. Where once the role was focused on infrastructure, systems, and uptime, it's now becoming an HR / IT hybrid: managing intelligent, autonomous systems that function like staff.
One thing is clear: agentic AI is coming. And if anything, it will soon be as embedded in business operations as cloud services or productivity tools. Agentic AI tools offer extraordinary potential, but only if they’re managed with the same discipline, foresight, and empathy we apply to human teams.
Today, CIOs have a unique opportunity to shape how these tools are brought into the business, not just as technology, but as contributors to outcomes and innovation.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
- The UK government says it has "no plans to repeal the Online Safety Act" despite growing concerns
- The laws are designed to make the internet a safer place, especially for children and vulnerable adults
- Mandatory age verification checks have been enforced since July 25, sparking concerns over web users' digital rights
The UK government says it has no plans to repeal the Online Safety Act, as opposition to it grows in the wake of the introduction of online age verification checks.
A petition calling on the government to scrap the Act has amassed over 450,000 signatures in just a few days. While the Act became law in 2023, the backlash has been sparked by the recent introduction of age verification checks for users of many websites.
From July 25, Britons need to go through robust mandatory age checks in order to access adult-only websites and any potentially harmful content online.
While the government says the laws are intended to protect both children and adults who use the internet, critics argue that they have serious implications for people's privacy, online security, free speech, and access to information.
The UK Government has responded to the petition with nearly 400,000 signatures to repeal the Online Safety Act. They have no plans to repeal the Online Safety Act. pic.twitter.com/m6UeVRfWg1 (July 28, 2025)
The UK Parliament must consider a debate on any petition that gets more than 100,000 signatures. However, responding to the petition, the UK's Department for Science, Innovation, and Technology said: "The Government has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections."
It added: "The Government will continue to work with Ofcom towards the full implementation of the Online Safety Act 2023, including monitoring proportionate implementation."
What's behind the age verification backlash?
The Online Safety Act is aimed at making the internet a safer place, especially for children. But many online privacy advocates and security experts fear that the age verification measures in particular may end up doing more harm than good.
You now need to scan your face, a credit card, or an identity document if you want to access certain content on X, Reddit, or Bluesky in the UK, and the same applies if you want to play a new over-18 video game, find a new match on a dating app, or watch a video reserved for adults only. Those platforms have to partner with third-party age verification services to implement checks.
Leaks, abuse, and misuse of data are just some of the risks linked to the mass data collection that age verification checks involve. Some commentators also fear that age-blocking certain content that's deemed harmful could lead to limitations on free speech and access to information.
"While the intent to protect kids is understandable, the execution raises serious concerns around privacy, censorship, and, functionally, whether it even works," Yegor Sak, co-founder and CEO of Canada-based Windscribe VPN, told TechRadar.
(Image credit: Getty Images)
Such concerns have led thousands of people to turn to a virtual private network (VPN) as a way to safeguard their sensitive information. Proton VPN, for example, recorded an hourly increase in sign-ups of over 1,400% starting at midnight on July 25.
If you have concerns over sharing your data online in order to comply with age verification checks, you can use a VPN to keep your data secure. You'll find some reliable options in our best free VPN guide.
However, free VPNs sometimes lack important security features, so for a more robust option, take a look at our best VPN guide. Our top recommendation is NordVPN, and right now, TechRadar's readers can grab an exclusive deal when they sign up for one of its two-year plans.
NordVPN: Get the best VPN for most people
NordVPN delivers a better mix of privacy, security, extra features, and value for money than any other VPN we’ve tested. It’s cheaper than close rival ExpressVPN, has a huge server network across the world, and is a great choice for streaming, with superb speeds and flawless unblocking.
Sign up to NordVPN to claim TechRadar's exclusive deal and get:
✅ Up to 76% OFF
✅ Up to $50 Amazon Gift card
✅ 4 months free protection (TechRadar exclusive)
There's a 30-day money-back guarantee, so if it isn't right you can cancel your subscription and get a refund.
At Future Publishing we rely on advertising to keep bringing you the content you love to read.
The majority of the content on TechRadar is created solely by our editorial team, but on occasion, we also work with external partners to create content we hope our readers will find interesting and useful.
In some cases, advertisers support us in producing content. This content is labelled so you can see who has funded it and how it was created. We use the label on the page to clarify the advertiser’s involvement in the content.
“Sponsor Content Created With…”
Articles that are labelled “Sponsor Content Created With…” are paid for and reviewed by a commercial partner.
They may be produced by the client or by staff employed by TechRadar. This is commercial content and so is subject to the Advertising Standards Authority regulations in the UK and Federal Trade Commission regulations in the US.
“Sponsored By…” and similar labels
Articles that are labelled "Sponsored By…" or with similar labels are independent editorial articles, created by writers employed by TechRadar, that have been funded through the support of a commercial partner.
When planning this content, the editorial team may find alignment with a funding partner on the topic and the headline of the article but the article is not subject to any client review in advance of its publish date.
This content abides by the Editors’ Code of Practice from the Independent Press Standards Organisation in the UK and Federal Trade Commission regulations in the US.
If the commercial partner receives a sponsored section within a larger editorial article, that section will have a clear “Sponsored” label.
“Preferred Partner”
Articles labelled "Preferred Partner" mean that a commercial partner is offering a preferential affiliate rate to Future in exchange for greater prominence on the page, such as by highlighting a particular deal for a product that our journalists recommend.
TechRadar will only write content featuring “Preferred Partners” when we feel the content or product is aligned with what our audience wants. It is not sent to the funding partner for approval.
This content abides by the Editors’ Code of Practice from the Independent Press Standards Organisation in the UK and Federal Trade Commission regulations in the US.
Cyber-attacks continue to dominate headlines, disrupting operations and putting sensitive data at risk. In the wake of the AI boom, threats are growing more complex. The endless game between attacker and defender is intensifying, and defenders know the stakes are high. Operational, financial, and reputational damage can be severe when an attack succeeds.
At the same time, security teams face a widening skills gap, growing threat complexity and tighter budgets. It’s a perfect storm for burnout. In fact, 79% of cybersecurity professionals reported that escalating threats are impacting their mental health, highlighting the need for an empathetic approach to these challenges.
Prevention as the shield, resilience as the backbone
Historically, organizations have measured cybersecurity success by how well they prevent attacks. But with 90% of IT and security leaders reporting cyber incidents in the past year alone, it’s clear that prevention alone is no longer enough.
It’s time to shift the focus towards recovery, transparency, and resilience. Resilience shouldn’t be seen as a fallback – it needs to become the frontline. This shift in mindset not only better prepares organizations for inevitable breaches but also reduces pressure on teams by redefining what success looks like.
When teams are judged on their ability to recover and minimize disruption (not just prevent attacks), they’re empowered to focus on what matters: early detection, rapid response, and recovery planning. This reduces burnout and builds a stronger long-term security posture.
We must also accept a hard truth: breaches will happen. Rather than fueling a culture of blame, we need to equip teams to respond effectively and confidently.
Securing the security team with transparency
As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty.
This is also true for communication, especially post-breach. Organizations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. Trust drives everything and must be built into architecture, communication, and response, from user behavior to board confidence.
Shared risk, shared responsibility
As seen with the recent retail cyber-attacks in the UK, the implications of a cyber breach can be business critical. Yet many CISOs still struggle to get alignment at board level: over three-quarters (77%) of UK CISOs feel that their IT budget does not fully match their board’s objectives for cybersecurity.
To make matters worse, this is heightened when it comes to regulatory pressures. New legislation like DORA and the upcoming Cyber Security and Resilience Bill is turning up the heat, with over half (58%) of CISOs feeling the pressure as a direct result.
There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. Security is everyone’s business - the attack surface isn’t just focused on IT, it’s every team, tool, and workflow.
This critical gap jeopardizes not only an organization's security posture but also its ability to meet evolving regulatory demands. CISOs, boards, and other stakeholders must work together to ensure that cyber resilience priorities are clearly defined, adequately funded, and effectively implemented to meet the evolving regulatory landscape.
The weight of responsibility for cyber security shouldn’t just lie on the security team’s shoulders. Cyber resilience is business resilience and security leaders, boards and stakeholders all have a part to play.
Building teams that thrive
To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment.
But resilience isn't built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. This means aligning investment with board expectations, embedding security into daily operations and ensuring every employee understands their role.
With regulatory pressure rising and the threat landscape evolving, resilience isn’t just a technical necessity, it’s a strategic imperative. CISOs who champion collaboration, drive cultural change, and lead with empathy will be best positioned to build security teams that are not only effective but built to last.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
- CMA says UK cloud market is uncompetitive
- AWS and Microsoft account for 30-40% of the UK market each
- The two companies disagree with the CMA's findings
The UK's Competition and Markets Authority (CMA) has determined Britain's cloud market shows too many anticompetitive traits, with Microsoft and AWS each holding around 30-40% of the UK market in 2024 and hyperscaler concentration especially high in infrastructure-as-a-service.
At the same time, fewer than 1% of customers switch providers annually and multicloud usage is rare (particularly among SMEs with more limited budgets).
The CMA has blamed high egress fees, incompatible interfaces, latency and skills gaps for widespread vendor lock-in, which is ultimately weakening competition.
CMA worried about AWS and Microsoft cloud market dominance
Behind the two hyperscalers, Google accounts for just 5-10% of the market, with others like IBM and Oracle having even smaller shares. Although AI capabilities are yet to change market dynamics drastically, existing positions are likely to be amplified, so the CMA has stepped in to ensure competition remains healthy.
In its Final Decision ruling, the CMA took its biggest swipes at Microsoft over its unfair licensing practices, which make it costlier to run Microsoft software on rival cloud providers.
A Microsoft spokesperson told TechRadar Pro: "The CMA Panel’s most recent publication misses the mark again, ignoring that the cloud market has never been so dynamic and competitive, with record investment, and rapid, AI-driven changes. Its recommendations fail to cover Google, one of the fastest-growing cloud market participants."
"Microsoft looks forward to working with the Digital Markets Unit toward an outcome that more accurately reflects the current competition in cloud that benefits UK customers," they continued.
"The action proposed by the Inquiry Group is unwarranted and undermines the substantial investment and innovation that have already benefited hundreds of thousands of UK businesses," an AWS spokesperson added.
On the flip side, Google supported the CMA's findings: "The conclusive finding that restrictive licensing harms cloud customers and competition is a watershed moment for the UK."
Elsewhere in the industry, the CMA has been criticized for not acting fast enough and addressing persistent issues like cloud credits, lock-in and procurement bias.
"We urge the CMA to use the powers at its disposal now to address these harms, rather than embark upon a new investigation that may not give customers relief for years to come," Coalition for Fair Software Licensing Executive Director Ryan Triplette shared.
Looking ahead, the CMA's next step is to designate Microsoft and AWS with strategic market status (SMS) under the Digital Markets, Competition and Consumers (DMCC) Act, allowing it to impose legally binding, targeted conduct requirements on the two giants.
"A significant driver of high cloud computing bills is the consolidation of the market into a handful of players. Until recently, these companies have been the only game in town, so they’ve been able to set the rules of the market, for example, including egress fees for switching, long lock-in periods, and more. In fact, Gartner has observed that most customers spend 10% to 15% of their cloud bill on egress charges," noted Akamai's John Bradshaw.
"UK businesses are under huge cost pressures. We need to make it easier for them to switch cloud computing providers and find pricing options that better fit their balance sheets."
- The average UK SMB now invests 36% of its annual revenue in new tech
- Finances and payments are common use cases
- Only 1% of companies remain non-tech-users
Britain's small and medium-sized businesses are investing more than a third (36%) of their annual revenue in new tools and technology, according to new Worldpay data, with most (90%) also agreeing tech investments have already significantly boosted efficiency.
Among the most popular areas for new tech investment across all types of UK SMBs are financial management (54%), marketing and sales (49%) and payment processing (47%), with employee management, inventory control and CRM also seeing a healthy boost.
On the flip side, only 1% of SMBs are now not using any technology, compared with one in five (20%) a decade ago, marking a huge departure from old ways.
UK SMBs are mostly tech-first
"This digital transformation is not just a trend - it's a vital evolution that enhances productivity, efficiency, and customer satisfaction," Worldpay GM for SMB International Chris Wood explained.
A number of factors could have contributed to the rise in spend on digital platforms, but the post-pandemic behavioral shift is likely chief among them: customers now expect contactless and omnichannel services that are fast and seamless.
Then, there are regulatory hurdles, for example HMRC's Making Tax Digital mandate, which requires bookkeeping to be completed using certain reporting software.
"Worldpay is on a mission provide SMBs with the right technology, empowering independent businesses to compete on a level playing field and thrive," Wood added.