News
- AMD Ryzen Threadripper PRO 9995WX workstation CPU has 96 cores and 192 threads
- It is set to go on sale with an expected price of around $13,000
- Zen 5-based Threadripper offers 26% gain over predecessor but costs 30% more
The AMD Ryzen Threadripper PRO 9995WX could be the most expensive desktop CPU ever listed at retail, with a rumored price of $13,000.
This price point is more than double that of AMD’s own EPYC 9655, a 96-core data center chip which can be found for just over $6,100.
Built on the Zen 5 architecture and using a 4nm process, the 9995WX targets workstation professionals who need extreme performance in AI, media, design and engineering workflows.
30% price hike
The chip features 96 cores, 192 threads, and a base clock of 2.5 GHz, boosting up to 5.4 GHz. It supports up to 144 usable PCIe lanes and 8-channel DDR5 ECC RAM running at 6400 MT/s.
There’s also 128MB of L3 cache. While the specs are aimed at users with heavy workloads, the high cost puts it in a niche category. No cooler is included and a dedicated graphics card is required.
The 9995WX is part of the new Threadripper 9000 series, with AMD skipping the 8000 line entirely.
It offers a generational improvement over the Zen 4-based 7995WX, including a reported 26% performance gain.
Even so, the price increase over the previous generation is steep, sitting at 30% higher than the 7995WX.
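As a sanity check, the percentages line up if we assume the 7995WX's widely reported $9,999 launch price (a figure not given in this article):

```python
# Rough price and perf-per-dollar comparison, assuming the Zen 4-based
# 7995WX launched at $9,999 (a widely reported figure, not from this article).
old_price, new_price = 9_999, 13_000
perf_gain = 1.26  # reported 26% generational performance gain

price_increase = new_price / old_price - 1
# Performance per dollar relative to the previous generation:
value_ratio = perf_gain / (new_price / old_price)

print(f"Price increase: {price_increase:.0%}")        # ~30%
print(f"Perf per dollar vs 7995WX: {value_ratio:.2f}x")
```

On these assumptions the 9995WX delivers slightly less performance per dollar than its predecessor, which matches the article's framing of a steep generational premium.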
While this could be justified for some niche professionals, it narrows the market to those with extremely specialized needs.
Preorders are expected to open on July 23, with listings appearing on B&H Photo Video and other retailers.
Although AMD has not confirmed final pricing, Videocardz notes that patterns across multiple stores point to a consistent number near $13,000.
The rest of the lineup includes 24-core to 64-core models, with price hikes ranging from 4% to 17% over previous generations.
Intel currently lacks a direct workstation-class competitor in this category, and with AMD pushing core counts and prices even higher, the gap remains wide.
This latest Threadripper generation extends AMD’s lead in ultra-high-end desktop processors, at least for now.
(Image credit: B&H)
You may also like
- Two new Garmin watches seem to be imminent
- We might get a cheaper Forerunner this Tuesday
- The Venu 4 could follow the recently launched Venu X1
It looks like there are going to be two new contenders for our list of the best Garmin watches in the very near future, with one official tease and one unofficial leak pointing towards new devices in the coming days and weeks.
To start with what we've heard directly from Garmin, the company has posted a teaser for a new watch arriving on July 22 (this coming Tuesday). The outline of the wearable suggests we're looking at a new Forerunner model.
Well-known tipster the5krunner says it's unlikely that this is an existing model launching in China. It's more probable that it's a China-specific Garmin, or it's a completely new model that's going to be launching globally.
Over at Garmin Rumors, the thinking is that the "1XXX" on the teaser image could refer to the price of the upcoming watch, in yuan. If that guess is right, then we'd be looking at a relatively affordable Forerunner compared to the rest of the series.
The Venu 4
The Garmin Venu X1 (Image credit: Garmin)
As for the less official news, Garmin Rumors (via Notebookcheck) has spotted the first ever mention of the Garmin Venu 4 in the documentation accompanying the Garmin Golf app. Garmin hasn't said anything about this watch, but it looks like it might be on the way.
Earlier this year the Garmin Venu X1 was launched, but based on this new information, that wasn't the true successor to the Garmin Venu 3 – although the brief mention we have of the Venu 4 doesn't tell us too much about it.
Given what Garmin has been doing with its other flagship wearable refreshes, there's a good chance the Venu 4 will come with a brighter screen, an updated user interface, a flashlight, and some additional health features and fitness metrics.
However, there have been no other leaks or rumors to date to give us any hints about what's coming. As soon as Garmin makes either of these smartwatches official, we'll of course bring you all the details on TechRadar.
You might also like
- Aokzoe mini PC flaunts a red rocket button with no clear functional explanation
- Branding overwhelms the chassis, with buzzwords replacing useful technical or design explanations
- The processor has real muscle, but the product’s direction feels uncertain and unfocused
Aokzoe has announced its first mini PC powered by AMD’s new Ryzen AI Max+ 395 APU will soon be launched globally.
The company has remained vague about key technical details, but the announcement has stirred attention for its daring design and ambiguous branding.
The mini PC has been previewed with terms like “AI PC,” “A IPC,” and “Hypermind Drive” emblazoned across its surfaces, leaving its final name uncertain.
Design choices raise questions about purpose and practicality
This device is visually striking, with aggressive angles, bright highlights, and an unexplained red “rocket” button that appears to be a custom or programmable function key, possibly tied to a performance mode.
Mini PCs often lean toward understated forms, but Aokzoe has taken the opposite approach.
Branding is everywhere, with large text and graphics dominating the chassis, raising doubts about whether this machine is intended as a functional business PC or a flashy collector's piece.
Speculation has intensified due to the inclusion of the Ryzen AI Max+ 395, a high-end Strix Halo APU.
This processor is part of AMD’s push into AI-enhanced computing and has only recently started appearing in compact desktops.
Although it holds appeal for demanding tasks like content creation, the lack of detailed specs from Aokzoe makes it difficult to gauge whether this mini PC can realistically serve as a capable video editing PC or handle long work sessions typical in business settings.
At this point, the hardware’s potential seems to outpace the product’s clarity.
Still, the official images show that the front panel includes a USB4 or Thunderbolt port, marked with a lightning bolt icon, sitting just before the red “rocket” button.
Next is a full-sized SD card reader, a USB-C port, two USB-A ports (likely differing in speed), and a 3.5mm audio jack for headphones or microphone use.
The company plans to confirm the device's specs intermittently through social media, avoiding formal release timelines or performance benchmarks.
While a global release has been promised, prospective buyers have little more than renderings and vague labels to assess.
For now, it's difficult to say if the product is serious about computing or simply playing with bold visuals and buzzwords.
Aokzoe’s approach is not unique: other brands such as GMKtec and Aoostar are also introducing Strix Halo-based systems.
The likes of the HP Z2 Mini G1a, GMKtec EVO-X2, Aoostar NEX395, and many more have already been announced.
But these devices are usually not cheap, often selling in the $1,500–$2,000 range.
You might also like
- This 34-inch business monitor is curved, fast, has Ethernet, Smart KVM, and a webcam
- These are the fastest SSDs you can buy right now
- Take a look at some of the best external hard drives available
- Yahoo Japan is betting big that mandatory AI use can unlock workplace innovation
- The company’s plan starts with automating 30% of daily tasks, like meetings and documents
- Internal tools like SeekAI will handle expenses, research prompts, and summarizing meeting notes
Yahoo Japan is taking a bold step by requiring all 11,000 of its employees to integrate generative AI into their daily work, aiming to double productivity by 2028.
The company, which also operates LINE, plans to make AI tools a standard part of tasks like research, meeting documentation, expense management, and even competitive analysis.
The idea is to shift employee focus from routine output to higher-level thinking and communication by letting AI handle the groundwork and create continuous innovation.
Targeting the 30% first
The rollout begins in the more universal aspects of office life: areas like searching, drafting, and routine documentation, which Yahoo Japan estimates take up about 30% of its employees’ time.
The company has already developed internal tools like SeekAI to manage tasks such as expense claims and data searches using prompt templates.
AI will also be used to help create agendas, summarize meetings, and proofread reports, thereby giving staff more room to concentrate on decision-making and discussion.
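As a rough illustration of the prompt-template approach the article attributes to SeekAI (the tool's real interface is not public, and the template text and field names below are invented), a reusable template can be filled with one claim's details:

```python
# Hypothetical sketch of a prompt-template workflow like the one described
# for SeekAI; the template wording and field names are invented.
from string import Template

EXPENSE_TEMPLATE = Template(
    "Categorize this expense claim and flag any policy issues.\n"
    "Employee: $employee\nAmount: $amount\nDescription: $description"
)

def build_prompt(employee: str, amount: str, description: str) -> str:
    """Fill the reusable template with one claim's details."""
    return EXPENSE_TEMPLATE.substitute(
        employee=employee, amount=amount, description=description
    )

prompt = build_prompt("A. Tanaka", "4,200 JPY", "Client lunch, 2 attendees")
print(prompt)
```

The point of templates like this is consistency: staff supply only the task-specific fields, while the instruction wording stays fixed and reviewable.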
This move might seem extreme, but it follows a broader trend of companies trying to harness AI as a productivity tool rather than just a cost-cutting one.
Yahoo Japan's strategy treats automation not just as an efficiency tool but as a workplace standard, yet there is growing evidence that treating AI as a complete replacement for human workers may be shortsighted.
A recent report by Orgvue claims that more than half of UK businesses that replaced workers with AI now regret the decision. This speaks to a crucial distinction: while AI can support and streamline, it often falls short in areas requiring nuance, empathy, or real-world context.
In this light, Yahoo Japan’s model, one that promotes AI as a support layer rather than a substitute, might prove more sustainable.
This is certainly a sign of things to come, and from my perspective, generative AI is not here to erase jobs, even though there are reports of people losing jobs to AI in some regions.
AI should only shift what jobs look like by removing repetitive tasks and freeing up space for critical thinking and creativity, where human input remains indispensable.
Yahoo Japan’s approach, if implemented with care and flexibility, might help shape that shift in a more inclusive and less disruptive way.
Via PC Watch
You might also like
- Here's our roundup of the best AI phones you can buy right now
- We've also listed the best business laptops for all budgets
- AI Agents: the next big phase of artificial intelligence
- Beelink GTi15 Ultra offers vapor cooling in a chassis barely larger than a paperback novel
- A fingerprint reader and dual 10GbE ports are rare finds on any mini PC
- External GPU support solves one problem and creates three others in terms of cost and footprint
Beelink’s GTi15 Ultra mini PC has been launched with features more commonly associated with full-sized desktops.
The standout elements include dual 10Gb Ethernet LAN ports, a fingerprint reader, and support for external graphics - additions which suggest it is built for users who demand more than casual browsing or media playback, especially those looking to downsize without giving up specific performance perks.
Compared to its predecessor, the GTi14, the new GTi15 Ultra brings an Intel Core Ultra 9 285H processor, but the raw CPU performance gain is modest, about 11%, based on internal benchmarks.
Marginal CPU gains, sharper GPU contrast
Beelink’s GTi15 Ultra doesn’t emerge in a vacuum; it’s the next step in a mini PC lineage that has gradually pushed the envelope.
Earlier models like the GTi12 Ultra and GTi14 Ultra pioneered the inclusion of a PCIe x8 expansion slot for Beelink’s proprietary EX GPU dock, targeting users who wanted a compact form factor but still needed the option of a desktop-class GPU.
The bigger change, however, lies in the integrated Arc Graphics 140T, which replaces the Arc 8-core iGPU from the previous model.
Despite the branding, this shift may not result in a meaningful leap for GPU-heavy tasks.
The option to connect Beelink’s own external GPU dock certainly offers more flexibility, but not without added cost and space concerns.
With up to 64GB of DDR5 memory and a built-in 145W PSU, the GTi15 Ultra is presented as a serious machine for demanding users.
The dual 10GbE ports point toward a networking edge that could appeal to niche professional workflows, potentially making it viable as a business PC - but in most work settings, such bandwidth far exceeds actual requirements.
The same goes for vapor chamber cooling, which may help thermals but feels more like a talking point than a necessity in typical office scenarios.
Starting at roughly $655 in barebones form and climbing to nearly $880 when configured with 64GB RAM and 1TB storage, this mini PC lands in price territory occupied by capable desktops and laptops.
While the appeal of a sleek video editing PC in such a small footprint is understandable, compromises remain, especially when factoring in the limited internal GPU and dependency on external docks for full graphics performance.
Via Notebookcheck
You might also like
- These are the best mobile workstations you can buy right now
- We've also listed the best mini PCs for every budget
- Inside the deepfake threat that’s reshaping corporate risk
Screenshots and PDFs have long served as the fallback tools of digital recordkeeping. They're easy to create, straightforward to file, and for a long time, they seemed “good enough.” But in today’s regulatory environment, where agencies like the SEC and FINRA are demanding complete, contextual, and verifiable records, “good enough” is quickly becoming a liability.
As communications become more dynamic and digital interactions more complex, static captures are increasingly out of step with the needs of modern compliance, and the expectations of U.S. regulators. Recent guidance and enforcement trends make it clear: partial records or flattened archives are no longer sufficient.
Compliance professionals have always adapted to new requirements and risk environments. It’s time to ask whether our current tools still meet the moment. For many firms, that answer is starting to shift.
Digital Communications Have Changed Dramatically
Not long ago, archiving a digital interaction was relatively straightforward. You saved an email. You took a screenshot of a webpage. It was static, predictable, and mostly text-based.
That’s no longer the case. Communications happen across platforms that are constantly updating - live chat software, dynamic websites, embedded widgets, interactive forms, and more. A webpage might display differently depending on who views it, or when. A chat thread might be edited minutes later, or disappear altogether.
In other words, what you're trying to capture isn’t standing still. It’s changing in real-time, sometimes invisibly, and when it comes to compliance, those changes matter a lot. Trying to preserve that complexity with a flat image or PDF is like trying to understand body language by looking at a photograph. You get part of the picture, but not the full story.
Why Static Archives Aren’t Enough Anymore
1. They Strip Away Context: Static captures freeze a single moment. They don’t show what came before or after, or how a page or chat evolved. That’s fine - until someone asks how a user experienced a disclosure, or when a message was edited, or whether a page displayed something different two hours later. In those moments, a flat PDF cannot elaborate.
2. They Lack Authenticity: A screenshot looks official, but lacks credibility. It’s difficult to verify when it was taken, whether it shows the whole interaction, or if it’s been altered. In a legal or regulatory setting, that opens the door to doubt and risk.
3. They Don’t Scale: Modern communications move fast and in high volume. Manually capturing and filing screenshots or PDFs is time-consuming, error-prone, and unsustainable. And if you’ve ever tried to search across a thousand PDFs for a single keyword, you know it’s far from ideal.
4. They’re Out of Step with Regulator Expectations: Agencies like the SEC and FINRA are no longer content with partial records. They want full, accurate reconstructions of conversations - especially those that touch customers and investors, or include compliance-sensitive content. They’ve made that clear in recent enforcement actions focused on off-channel communications and poor recordkeeping.
5. They Don’t Capture the Brand Experience: Even outside of compliance, faithfully preserving what happened still matters. Static archives miss how users interacted with a brand, how journeys unfolded, or how dynamic elements behaved. For marketing, product, support, or legal teams, that’s a real gap. Replay delivers full, authentic re-creations of digital experiences, helping brands understand and protect the moments that matter.
What’s the Alternative? Time-Accurate, Replayable Records
A growing number of compliance teams are moving toward replay-capable archiving systems, which not only save a file or a message, but allow you to recreate the experience as it happened.
With replay, you're not capturing a still image. You're preserving a moment in time that you can revisit, navigate, and verify.
Users can...
1. Revisit a webpage exactly as a user saw it - scrollable, clickable, and live with the same styling and interactive elements.
2. Watch how a digital disclosure evolved over time, with version histories intact.
3. Overlay and compare two captures of the same site or chat to quickly spot differences, updates, or unauthorized changes.
4. Provide regulators or auditors with a full, interactive view, backed by metadata and time-stamped proof.
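Point 3 above, comparing two captures to spot changes, is conceptually a diff. A toy sketch using Python's standard difflib gives the flavor (real replay systems diff full page snapshots with styling and interactivity, not plain text; the capture text below is invented):

```python
# Toy illustration of spotting changes between two captures of the same
# disclosure; real replay systems compare full DOM snapshots, not strings.
import difflib

capture_morning = [
    "Rates from 4.9% APR.",
    "Offer ends July 31.",
]
capture_evening = [
    "Rates from 5.9% APR.",  # the disclosure changed later in the day
    "Offer ends July 31.",
]

diff = list(difflib.unified_diff(
    capture_morning, capture_evening,
    fromfile="capture@09:00", tofile="capture@17:00", lineterm="",
))
print("\n".join(diff))
```

Even this crude version surfaces exactly the kind of mid-day edit a flat screenshot taken at either time would miss.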
Replay doesn’t just meet the letter of compliance, it helps meet the spirit of transparency, accuracy, and accountability.
Why Replay is a Better Fit for Today’s Risk Environment
Replayable archives offer a number of meaningful advantages for modern compliance. They provide a more accurate record, capturing conversational nuance rather than just a snapshot of what someone happened to say at a single point in time. These records are also auditable by design: time-stamped, tamper-resistant, and rich with metadata that supports their authenticity.
Beyond that, they’re easier to work with. Unlike static files buried in folders, replayable records can be indexed and searched dynamically across platforms, reducing the time and effort it takes to locate specific communications.
Perhaps most importantly, they improve the defensibility of your compliance posture. Regulators and legal teams don’t just receive an image, they can interact with a faithful reconstruction of the communication as it originally appeared and functioned. It’s a shift from passive recordkeeping to active, immersive documentation, a much stronger foundation for meeting both regulatory expectations and internal accountability standards.
Compliance is Evolving, Our Tools Should Too
Screenshots and PDFs were once enough. They were functional, and often the best available option. But the tools that served us well in a simpler digital world aren’t necessarily correct for today’s dynamic landscape.
Replay archiving isn’t just a technical upgrade, it’s a strategic one. It allows compliance teams to respond with confidence, investigate with precision, and align more closely with regulatory scrutiny, without adding unnecessary complexity.
Final Thought: Compliance Can’t Be Flat in a 3D World
In today’s regulatory environment, context and clarity aren't luxuries, but necessities. While static records might offer a snapshot, modern compliance often requires the ability to press play and experience the linear journey, first-hand.
The good news? The technology exists. And the case for using it is only getting stronger.
When it comes to compliance, seeing what happened should include seeing how it happened, and when. And for that, the PDF and screenshot era belongs in the scrapbook.
We list the best PDF merger tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Every organization claims security is a priority, yet 91 percent of Security and IT leaders admit they’re making compromises in their security strategies. In today’s environment, compromise has shifted from a failure point to a functional reality of modern enterprise.
Pressed to deliver agility, reduce cost, and keep up with the exponential demands of AI, security teams are being forced to make trade-offs they once would have rejected outright. Visibility is sacrificed for speed. Data quality is sidelined in the rush to deploy. Tools are added faster than they can be integrated. And all of it unfolds under the guise of “acceptable risk,” a term that now shifts depending on the urgency of the business goal at hand.
This is not a story of negligence; it’s one of systemic strain and of an urgent need to reset. As hybrid cloud environments grow more complex and threat actors grow more sophisticated, enterprises must confront an uncomfortable truth: the more compromise becomes routine, the harder it becomes to manage what comes next.
This article explores the consequences of this normalization, the fractures it is creating across the security landscape, and why visibility must be the foundation for regaining control in a world increasingly shaped by AI.
The business of compromise
Security leaders are not compromising out of carelessness. They are making calculated decisions under pressure. With cloud computing environments expanding, AI deployments accelerating, and infrastructure growing more fragmented, the operational burden on security teams is exceeding what existing tools and architectures were built to handle.
When asked where they are making trade-offs, the answers are telling. Nearly half of respondents to our 2025 Hybrid Cloud Security Survey say they lack clean, high-quality data to support secure AI workload deployment. The same proportion report insufficient visibility across their hybrid environments, particularly in lateral traffic, which remains one of the most critical yet overlooked areas for threat detection. Another 47 percent point to tool integration as a key area of compromise, highlighting the strain of managing sprawling tech stacks that fail to deliver cohesive insight.
These issues strike at the foundation of any viable security strategy. Without comprehensive visibility, detection becomes reactive. Without reliable data, AI initiatives carry unquantified risk. Without integrated tools, signal fragmentation makes it difficult to prioritize threats, let alone respond effectively.
The perception of risk is also changing. Seventy percent of Security and IT leaders now consider the public cloud to be their most vulnerable environment, citing concerns over governance, blind spots, and the difficulty of maintaining control across distributed architectures. This represents a departure from the early optimism that once accompanied widespread cloud adoption.
In this climate, compromise has become operationalized. What was once a contingency is now a constant, and the consequences extend far beyond tactical inconvenience. Each trade-off introduces ambiguity into risk calculations, increasing the likelihood that a blind spot becomes a breach. The underlying challenge is not just about resources or tooling. It is about the quiet erosion of standards that were once considered non-negotiable.
Where the cracks are showing
The consequences of compromise are materializing across every layer of the organization. This year, the percentage of organizations reporting a breach rose to 55 percent, a 17 percent increase from last year. Just as concerning, nearly half of security leaders say their current tools are falling short in detecting those intrusions. These failures are not due to a lack of investment. They are the result of environments that have outgrown traditional controls, where more data, more alerts, and more tools do not necessarily translate into better protection.
Tool sprawl is a prime example. Organizations are managing an average of 15 security tools across hybrid environments, yet 55 percent admit those tools are not as effective as they should be. Rather than delivering clarity, this growing stack often introduces friction and gaps. Overlapping capabilities generate noise without insight. And all the while, attackers are adapting faster than defenders can consolidate.
AI tools are compounding the issue. One in three organizations say their network data volumes have more than doubled over the past two years, driven largely by AI workloads. This surge is overwhelming existing monitoring tools and giving threat actors more opportunities to hide in plain sight. Nearly half of respondents report a rise in attacks targeting large language models (LLMs), while 58 percent say AI-powered threats are now a top security concern.
These developments reveal the hard truth that compromises made upstream—in visibility, data quality, and tool integration—are now surfacing downstream in the form of missed threats, delayed response times, and a growing sense that risk is outpacing control.
Visibility as a strategic equalizer
But at its core, the issue is not how much data flows through an environment, but how little of it can be fully understood or trusted. Without clear insight into where data travels and how it behaves, risk remains obscured. Eighty-eight percent of Security and IT leaders say access to network-derived telemetry is essential for securing AI deployments, which speaks to a broader shift.
As systems become more distributed and threats more subtle, traditional log-based telemetry is no longer enough. What organizations need is complete visibility into all data in motion, across all environments, at all times.
For CISOs, the implications go beyond threat detection. Without complete visibility, risk management becomes reactive. Security teams operate in the dark, relying on fragmented signals and assumptions rather than intelligence. And when accountability is high, but authority is limited, the gap between what leaders are responsible for and what they can control becomes a vulnerability.
Fusing network-derived telemetry with log data is the only way to close the space between what organizations believe is secure and what is actually at risk. This deep observability is what transforms fragmented environments into something defensible, and what gives teams the situational clarity to not just respond to threats, but to contain them before they escalate.
Just because compromise has become the norm does not mean it has to remain the standard. Risk can be recalibrated, but only if visibility is treated as the foundation for a more resilient, forward-looking security strategy.
We list the best online cybersecurity courses.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
- The Fantastic Four: First Steps director has responded to criticism of its visual effects
- Matt Shakman gave a polite but abrupt reply to fans who've reacted negatively
- Critics who've seen the Marvel movie are unanimous in their praise of its CGI
Matt Shakman has given a blunt response to anyone who's reacted negatively to The Fantastic Four: First Steps' visual effects (VFX).
Speaking to TechRadar, the Marvel movie's director gave a polite albeit direct 10-word reply to fans who've said they're less than impressed by some aspects of the film. Of particular note is its computer generated imagery (CGI) and special effects, which have come in for some criticism since the first trailer for First Steps was released in February.
For one, reactions to Reed Richards' elastic superpowers and the Silver Surfer's aesthetic, both of which were unveiled in First Steps' official trailer, were mixed when said footage dropped in April. Then there's those who initially found fault with how The Thing looks. Oh, and let's not forget how many folks hit out at how Giganto, one of the first villains that The Fantastic Four fought in the comics and who's expected to appear in First Steps, looked in a *checks notes* promo tie-in advert for the Little Caesar's restaurant chain.
Ahead of the Marvel Phase 6 film's release, I asked Shakman for his thoughts on said criticism and whether he and First Steps' hundreds-strong VFX team felt weighed down by it.
"I think the visual effects look beautiful in this movie," he replied frankly.
Initially, some people weren't enamored with the Silver Surfer's look (Image credit: Marvel Studios)
Now, some readers might think Shakman is dodging the question and/or giving a stock answer that toes the company line.
However, numerous individuals who've already seen one of the most anticipated new movies of the year, including critics, have reacted positively to the VFX in the final cut. Threads on the r/MarvelStudios and r/MarvelStudiosSpoilers sub-Reddits are full of social media posts from journalists praising the CGI and other special effects, so it seems the initial negativity toward these elements of the film was overblown.
Okay, the reactions in the aforementioned Reddit threads are just a fraction of those who've seen the final Marvel Cinematic Universe (MCU) movie of 2025. Nonetheless, the fact that there are people hailing First Steps' VFX is evidence that it's not as bad as some fans feared, and that it just needed a little polish and refinement ahead of launch.
You can judge for yourself when the latest Fantastic Four big-screen reboot arrives in a cinema near you on Friday, July 25. In the meantime, read my definitive guide to The Fantastic Four: First Steps or check out the section below for more pre-release coverage of the forthcoming superhero flick.
You might also like
- The Fantastic Four: First Steps cast and character guide: Pedro Pascal, Vanessa Kirby, and who else you'll see in the Marvel movie
- The final trailer for The Fantastic Four: First Steps is here – and I'm growing increasingly concerned about one character's fate in the Marvel movie
- The Fantastic Four: First Steps is reportedly getting a sequel – and it isn't the only thing I've just learned about my most anticipated Marvel movie of 2025
- Nvidia's GH200 chips are at the core of Britain's Isambard-AI supercomputer
- It's the 11th fastest in the world, and 10x faster than Britain's second-fastest
- The UK government hopes it'll aid in drug discovery and more
The UK's most powerful AI supercomputer, Isambard-AI, is now fully operational at the Bristol Centre for Supercomputing (BriCS), with some serious Nvidia power at its core.
With 21 exaFlops of AI performance backed by 5,448 Nvidia GH200 Grace Hopper superchips, Isambard-AI now ranks 11th on the Top 500 list of fastest supercomputers, making it one of the global leaders.
Nvidia declared the British supercomputer is now 10x faster than the next UK supercomputer, and more powerful than all the others in the UK combined.
A giant leap forward
Besides being the 11th fastest supercomputer globally, Isambard-AI also ranks fourth on the Green500 list for energy efficiency, demonstrating the progress being made to reduce the environmental impact of AI machines and data centers.
Its eco-credentials are extensive, including carbon-free power, waste heat recycling and a power usage effectiveness (PUE) below 1.1 – among the best in the world.
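PUE is simply total facility power divided by the power that actually reaches the IT equipment, so a value near 1.0 means almost no overhead for cooling and distribution. A quick illustrative calculation (the figures below are made up, not Isambard-AI's actual numbers):

```python
# PUE = total facility power / IT equipment power.
# The figures here are illustrative only, not Isambard-AI's real numbers.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness; 1.0 would mean zero facility overhead."""
    return total_facility_kw / it_load_kw

# A facility drawing 5,400 kW to feed a 5,000 kW IT load:
print(pue(5_400, 5_000))  # 1.08 -> under the sub-1.1 figure the article cites
```

For comparison, a conventional data center with a PUE around 1.5 would spend an extra 50% of its IT load on overhead, which is why sub-1.1 is notable.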
Built in collaboration with Nvidia, HPE and the University of Bristol, Isambard-AI received £225 million in government funding, in the hope that the supercomputer will go on to help with important challenges like drug discovery and climate modeling.
"And as we press this switch to activate the UK’s most powerful supercomputer, we are embarking on Britain’s super future where AI contributes towards the delivery of better public services, greater public prosperity, deeper scientific discovery and stronger national security," UK Secretary of State Peter Kyle commented.
Among its first projects are Nightingale AI, trained on NHS data to support earlier diagnoses and personalized care; BritLLM, designed to promote inclusivity and better public service delivery in the UK's languages, including Welsh; and UCL Cancer Screening AI for prostate cancer detection.
You might also like- These are the best AI tools and best AI writers
- Nvidia briefly became the first ever $4 trillion company
- Access powerful chips via the best cloud computing providers
- While fixing exploited flaws, Microsoft may have also introduced new bugs
- The issues affected multiple SharePoint on-prem variants
- Hackers are already exploiting them in the wild, so users should patch now
Microsoft has released an urgent patch to fix a zero-day vulnerability affecting on-premises SharePoint servers.
The vulnerability is already being exploited in the wild, which is why users are urged to apply the patch immediately and secure their assets.
Three Microsoft products were said to be affected: SharePoint Server Subscription Edition, SharePoint Server 2019, and SharePoint Server 2016. SharePoint Online (Microsoft 365) is not affected.
How to secure your endpoints
The vulnerability being addressed is described as a deserialization of untrusted data in on-premises Microsoft SharePoint Server, which allows an unauthorized attacker to execute code over a network. It is tracked as CVE-2025-53770, and carries a severity score of 9.8/10 (critical).
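To see why "deserialization of untrusted data" (CWE-502) is so dangerous as a class of bug, here is a minimal, generic Python sketch. It is purely illustrative and is not the SharePoint exploit itself: the point is simply that some serialization formats let a crafted payload run code the moment it is deserialized.

```python
import pickle

# Illustrative only: this demonstrates the generic flaw class (CWE-502,
# deserialization of untrusted data), not the actual SharePoint bug.
class Payload:
    def __reduce__(self):
        # On unpickling, Python calls this callable with these arguments.
        # A harmless len() call stands in for attacker-controlled code.
        return (len, ("attacker-controlled",))

blob = pickle.dumps(Payload())   # what an attacker would send over the network
result = pickle.loads(blob)      # "deserializing" runs len() instead of
                                 # restoring a Payload object
print(result)  # 19
```

This is why the fix is not just input validation: the act of deserializing the payload is itself the code execution, which is also why Microsoft pairs the patch with AMSI scanning and machine-key rotation rather than relying on filtering alone.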
“Microsoft is aware that an exploit for CVE-2025-53770 exists in the wild,” the National Vulnerability Database (NVD) said in its advisory.
To secure the endpoints, Microsoft recommends applying the July 2025 security updates immediately, as well as enabling Antimalware Scan Interface (AMSI) for SharePoint and making sure Defender Antivirus is deployed.
After patching or enabling AMSI, users should rotate their ASP.NET machine keys, deploy Microsoft Defender for Endpoint to detect post-exploitation activity, and upgrade to supported SharePoint versions if needed.
The vulnerability was actually introduced while fixing a pair of bugs that were also being exploited in the wild. Tracked as CVE-2025-49706 and CVE-2025-49704, these two were fixed in July, but the fix introduced two new flaws: CVE-2025-53770, and CVE-2025-53771, a 6.3/10 (medium) path traversal bug that allows spoofing over a network.
The new bugs were quickly spotted by threat actors and abused in attacks since July 18, with at least 85 organizations apparently being hit, including several multinationals and government entities, such as a private university and a private energy operator in California, a federal government health organization, and a private fintech firm in New York.
Via BleepingComputer
You might also like- Top satellite communications company Viasat was also hit by Salt Typhoon – which shows just how widespread this massive attack was
- Take a look at our guide to the best authenticator app
- We've rounded up the best password managers
- Battlefield content creators are reportedly receiving packages from EA
- Said packages seemingly confirm the name of the series' next game
- Battlefield 6 will allegedly be revealed officially on July 29, 2025
The title of the next Battlefield game seems to have been revealed, as content creators familiar with the series have reportedly been receiving special packages from EA.
As reported by Eurogamer, Battlefield content creator Rivalxfactor posted to X / Twitter, following up on a since-deleted post featuring a physical box emblazoned with the Battlefield 6 title.
"Content creators are receiving packages from EA," writes Rivalxfactor, who also states: "Battlefield 6 will be the title name and the preview to the world will probably be by the end of the month."
In a follow-up post, Rivalxfactor claims that Battlefield 6 will be officially announced on July 29, kicking off a three-day event that will encompass not only the reveal of the game, but also interviews with the developers. Rivalxfactor also says an open beta will take place "shortly after" this event, though no specific date is given.
"I have confirmed with another person that there is indeed a 3 day EA event starting on July 29th. This is where Battlefield 6 will be revealed, devs will be interviewed in a somewhat fixed format, and the game will be featured with content creators. The open beta will launch… https://t.co/OmLRH3Gg3Z" (July 19, 2025)
Previously, publisher Electronic Arts has suggested that the next Battlefield game will launch before April 2026, and we've also seen what is allegedly a leaked clip of the upcoming title's campaign mode.
If the footage indeed belongs to Battlefield 6, it looks like the series is returning to its roots, offering a more contemporary warfare setting and ditching the futuristic escapades of Battlefield 2042 and the historical battlegrounds of Battlefield 5 and Battlefield 1.
While we naturally encourage you to take any and all leaks with a healthy pinch of salt - we won't truly know anything concrete until EA officially reveals the game itself - it'll be worth marking July 29 on your calendar just in case the current crop of rumors turns out to be accurate.
You might also like...- This is not a drill: one of the best strategy games ever is free right now – here's how to get it before it's gone
- The latest Turtle Beach Rematch controller sports a lenticular Donkey Kong theme and yes, it's compatible with Switch 2
- Ubisoft names the company CEO's son Charlie Guillemot as co-CEO of new Tencent-funded subsidiary – 'What matters now isn’t my name, it’s the work ahead'
Many industries continue to navigate the complexities of hybrid work and shifting workforce dynamics driven by necessary digital transformations. However, there is one critical issue quietly challenging and reshaping the field services industry in particular. Field service professionals serve as the unsung heroes of modern IT infrastructure. They keep the lights on, the networks connected and the systems running. But behind the scenes, a demographic shift threatens to disrupt these operations.
Veteran technicians, who have spent their careers mastering the nuances of complex systems, are exiting the workforce in large numbers, and there is no clear plan for passing on their knowledge. To make matters worse, younger generations are not joining the industry at the rate needed to replace retiring workers. What’s left is a widening skills gap that threatens to slow operations, increase costs and compromise service quality.
The Retirement Wave Is Real — and Risky
Baby Boomers make up a significant portion of the field service workforce, and their retirement creates more than a staffing issue — it’s a knowledge crisis. These professionals hold a wealth of practical, hands-on insights, including how to troubleshoot legacy equipment, navigate customer preferences and solve problems that aren’t covered in manuals.
Compounding the problem, a recent survey from Service Council found that nearly half of field service engineers do not anticipate a lifelong career in the field. Of those looking to leave, half expect to do so in the next three years, which would mean the loss of invaluable institutional knowledge.
The potential consequences will be far-reaching for organizations across all industries, with longer resolution times, higher error rates and diminished customer experience. And while companies will hire new talent, it won’t be enough to truly fill the knowledge gap.
Replacing Workers Isn’t Replacing Wisdom — AI Can Help
New technicians, no matter how well-trained, need time to build the kind of intuition that comes with experience. Research shows that 70% of skill development happens through hands-on work, while just 10% is a result of formal training. Without structured systems to capture and transfer knowledge, organizations risk leaving new hires to learn through trial and error — a costly and inefficient approach in today’s fast-paced environment.
However, emergent technologies, such as AI, offer a promising path forward. Rather than replacing human expertise, AI can supplement it and accelerate training by providing real-time support, predictive insights and guided troubleshooting to technicians in the field. These systems can analyze equipment data, flag anomalies and suggest next steps, helping less experienced workers make informed decisions quickly.
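As a minimal sketch of the kind of anomaly flagging described above, the toy function below compares each sensor reading against a rolling baseline of recent readings. The function name, the window size and the three-sigma threshold are all illustrative assumptions, not anything from a real field-service product; production systems use far richer models and telemetry.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag indices of readings that sit more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Skip flat baselines (sigma == 0) to avoid division-by-zero logic
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

# A stable temperature trace with one sudden spike at index 6
temps = [70.1, 70.3, 69.9, 70.2, 70.0, 70.1, 95.4, 70.2]
print(flag_anomalies(temps))  # [6] -- only the 95.4 spike is flagged
```

Even this crude rule shows the value for a less experienced technician: the system surfaces "reading 6 is abnormal" without the worker needing years of intuition about what normal equipment behavior looks like.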
Advanced AI tools go even further, integrating telemetry, service logs, vendor documentation and industry best practices into a single, intelligent interface. The result is a personal digital assistant that is always available, up-to-date and ready to help. This type of support is invaluable to a newcomer in a fast-paced industry with high customer expectations.
Digital Twins: a Living Library of Expertise
Across industries, employees report difficulty accessing the information they need to do their jobs effectively, with only 12-16% saying the critical information they receive from leaders helps them do their jobs well. This matters because it shows that traditional knowledge management tools are falling short: they are fragmented, outdated or otherwise ineffective at breaking down silos.
Digital twins offer a dynamic solution. Serving as virtual replicas of physical assets and systems, these AI-powered models stream real-time data from the physical world into a virtual environment. This allows new technicians, who may lack critical hands-on knowledge, to simulate scenarios, monitor performance and optimize maintenance strategies. In practice, new technicians get to learn and practice their skills in a controlled environment.
But digital twins are not only advantageous for new technicians. All field service employees, regardless of experience, can benefit from them as an intuitive, on-demand source of expert guidance. They reduce learning curves and ensure that critical knowledge is preserved and accessible — regardless of who’s on the job.
The Time to Act Is Now
The field service industry is at a turning point. Organizations that invest in AI and knowledge-preserving technologies today will be better equipped to navigate tomorrow’s challenges. By proactively addressing the knowledge gap, companies can maintain operational excellence, safeguard institutional knowledge and build a more resilient, future-ready workforce.
Now is the time to bridge the gap and lead the next era of field services confidently, properly equipped with the latest cutting-edge technology.
We've listed the best COBOL online courses.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro