News
- French and UK tech experts will collaborate on multiple projects
- One of them is to secure technology used in GPS systems
- GPS needs to be more resilient to blocking and jamming
British and French technology experts will be working together more closely to make GPS and other similar technologies more resistant to disruptions.
The news was announced by the UK Department for Science, Innovation & Technology (DSIT) earlier this week. As per the announcement, experts from the two countries will work together on a number of different projects, including strengthening the resilience of critical infrastructure to the signal-jamming seen in the Russo-Ukrainian war.
“From our electricity infrastructure, to transport, to financial transactions, the tech we rely on for everyday life depends on reliable Positioning, Navigation and Timing (PNT), often provided via satellites,” the announcement reads. “The conflict in Ukraine has shown how new technologies – in some cases, just small hand-held devices – can be used to disrupt PNT services, potentially causing major disruption to the vast areas of life and the economy reliant on them.”
e-LORAN
One of these complementary technologies, highly resistant to jamming, is e-LORAN, a system that uses ground-based radio towers as a “backup” to GPS. DSIT describes it as “much more challenging” to block, meaning it can keep critical UK infrastructure technology running “even when GPS fails”.
The war in Ukraine seems to have exposed significant weaknesses in today’s GPS systems, weaknesses that could end in tragedy. According to Ukrainska Pravda, The Telegraph’s researchers examined Flight Radar data for the first four months of 2024, covering 63 UK military aircraft completing 1,467 flights over Eastern Europe and the Middle East.
“During this time, the United Kingdom’s military aviation flew 504 transport and reconnaissance missions over Eastern Europe, with 142 of them encountering GPS jamming, and in 60 cases, such efforts occurred multiple times,” the publication explained.
At the same time, Business Insider reported that Finnish soldiers were training with “basic navigation tools” - paper maps and compasses, due to the unreliability of GPS systems.
Via The Register
You might also like
- US government probing security risks of mobile devices using Russian or Chinese satellites
- Take a look at our guide to the best website builders around
- We've rounded up the best password managers
- Windows 11 has a fresh preview build in the Canary channel
- It offers a new adaptive energy saver feature which is opt-in by nature
- Turning it on means Windows 11 will intelligently save battery life whenever the system isn't doing anything taxing
Microsoft is trying out a new feature to help give Windows 11 laptops better battery life, and it sounds like a promising idea.
It's called adaptive energy saver, and as Windows Central noticed, the functionality is now in testing in the Canary channel (the earliest of the four test channels that Microsoft uses).
Normally, energy saver only kicks in when the battery is running low (the exact level at which that happens depends on what the user specifies), but with the new intelligent mode of operation, energy saver will be able to operate at any time.
The idea is that if the system detects that there's not much going on – just basic tasks are running, perhaps just light web browsing, or you're writing an email – energy saver will activate in the background and save some battery.
At the moment, the capability is just rolling out in testing, so not every Windows Insider in the Canary channel will see it to begin with.
It's also an opt-in feature, meaning that you'll have to turn it on in Settings (System > Power & battery) to get the benefit. In other words, by default, nothing will change with the way Windows 11 employs energy saver, unless you specifically turn on adaptive energy saver.
Analysis: a bright idea
(Image credit: Getty Images)
How does adaptive energy saver work? That isn't clear, and Microsoft doesn't provide much in the way of detail in its preview build blog post, save to say that the feature will do its magic "based on the power state of the device and the current system load".
I can only assume that it's going to rein in the CPU and GPU – two of the most power-hungry components inside a laptop (or desktop) – when they're not doing much, which, given how most of us use our laptops, is going to be quite often. So there's a fair chance that this energy-saving trick could actually conserve quite a lot of battery life. (Fingers crossed – and check here for more tips in that same vein, incidentally).
A key point is that the level of brightness set for the screen won't ever be changed by adaptive energy saver. While the display is the other major source of power drain in a laptop, messing with the brightness would likely only annoy users – I know I wouldn't want my screen suddenly growing dimmer for no apparent reason – so it's a sensible decision to put the display to one side here.
While it's obviously designed for laptops, when I first saw this feature I imagined that it could be useful in bringing an eco-friendly element to desktop PCs, too (saving on power bills). That isn't the case, though, and Microsoft makes it clear that this is a notebook-only innovation.
For the more paranoid who are worried about adaptive energy saver perhaps messing with performance when it shouldn't – perhaps due to bugs, for example – it's worth repeating that it will be an opt-in ability. If you don't like the sound of it, just don’t switch the adaptive mode on.
Also, we shouldn't forget that features in testing may not make the cut for final release in Windows 11 anyway – but I'm hoping this one does.
You might also like...
- Latest Windows 11 update fail brings yet more installation woes – but some other reported bugs have me seriously worried
- No, Windows 11 PCs aren't 'up to 2.3x faster' than Windows 10 devices, as Microsoft suggests – here's why that's an outlandish claim
- Windows 11 desktop PCs could soon get Copilot+ AI powers, as Intel might radically switch tactics with next-gen CPUs
Cloudflare’s 1.1.1.1 DNS resolver service fell victim to a simultaneous BGP hijack and route leak event, causing massive internet outages and degradation worldwide. The most famous BGP outage, though, was caused by Pakistan: in trying to block access to YouTube within the country, the government’s misconfiguration took YouTube down worldwide.
The average organization is the target of attacks like these 7.5 times a year. And while most incidents are resolved quickly, these are examples of public infrastructure failures that are beyond your control.
What other technology do you rely on every day that was invented in the 1980s? Not your smartphone. Not your car. Not your TV. And definitely not your work tools. Yet, every time you send an email, connect to a website, or deploy a cloud service, you’re relying on core internet protocols that predate the web itself.
The Fragile Foundation
The Border Gateway Protocol (BGP) was designed in 1989, an era when the “internet” was barely a concept and security was an afterthought. Back then:
- Home users connected via dial-up modems.
- Businesses considered themselves cutting-edge if they had a T1 line.
- Network reliability was a hope, not an expectation.
BGP’s original purpose was simple: keep the nascent internet stitched together. It provided large institutions with a means to announce which IP address blocks they controlled and to learn about others. The protocol allowed routers across autonomous systems (ASes) to share route announcements and dynamically discover paths to distant networks.
BGP was designed for resilience, not determinism. For openness, not security.
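To make that openness concrete, here is a deliberately simplified Python sketch, with made-up private-use AS numbers and a documentation prefix, of how BGP-style announcements spread: each network appends itself to the path and passes the route along, and nothing in the exchange is verified.

```python
# Toy model only - not real BGP. It shows how route announcements propagate
# between autonomous systems (ASes) on trust alone: any peer's announcement
# is accepted, and the "best" path is simply the shortest one heard.

class ToyAS:
    def __init__(self, asn):
        self.asn = asn
        self.neighbors = []   # directly connected ASes
        self.routes = {}      # prefix -> AS path (list of ASNs, origin first)

    def announce(self, prefix):
        """Originate a prefix and tell every neighbor about it."""
        self.routes[prefix] = [self.asn]
        for peer in self.neighbors:
            peer.learn(prefix, [self.asn])

    def learn(self, prefix, as_path):
        """Accept any announcement with a shorter path - no validation at all."""
        new_path = as_path + [self.asn]
        current = self.routes.get(prefix)
        if current is None or len(new_path) < len(current):
            self.routes[prefix] = new_path
            for peer in self.neighbors:
                if peer.asn not in new_path:   # rudimentary loop prevention
                    peer.learn(prefix, new_path)

# Three ASes in a line: a -- b -- c
a, b, c = ToyAS(64500), ToyAS(64501), ToyAS(64502)
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

a.announce("192.0.2.0/24")          # a originates a documentation prefix
print(c.routes["192.0.2.0/24"])     # [64500, 64501, 64502]: c reaches it via b, then a

# A rogue AS peered directly with c could announce the same prefix with a
# shorter path and immediately "win" - the essence of a BGP hijack.
```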
Speed, uptime, and security
Today, we demand speed, uptime, and security that BGP was never built to deliver. Multi-gigabit fiber reaches homes. Enterprises span multiple clouds across continents. Workloads like real-time video, financial transactions, and machine learning require low-latency, high-throughput data paths.
However, BGP still routes traffic based on trust and reachability, rather than performance or identity. It can’t enforce policies. It can’t prevent hijacks. And it certainly can’t guarantee who’s on the other end.
Despite multiple security incidents and efforts, such as RPKI and BGPsec, the internet still routes traffic based on a chain of trust that can be exploited by anyone with a few malicious route announcements. Most fixes require coordination that doesn’t exist and IT infrastructure upgrades that move at glacial speed.
The result? The modern internet rides on a protocol that thinks it’s still 1992.
Public by Default
Another artifact of that era is the Domain Name System (DNS). Created to make numeric IP addresses human-readable, DNS transformed how people accessed websites. Instead of memorizing strings of numbers, you could simply type in a name.
The problem? DNS is public by design.
Every query, every resolution, and every domain is visible and discoverable. Attackers can enumerate subdomains, discover shadow IT resources, and probe for vulnerabilities – all by posing as legitimate users.
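To illustrate the point, subdomain enumeration is nothing more exotic than ordinary name resolution in a loop. The sketch below uses only Python’s standard library; the domain and wordlist are placeholders, and a real attacker would simply run far bigger lists against a real target.

```python
# Minimal sketch of why "public by design" matters: anyone can ask the DNS
# whether a name exists, so discovering hosts is just guessing names and
# resolving them. No authentication, no authorization, no audit trail.
import socket

def enumerate_subdomains(domain, wordlist):
    """Return the candidate subdomains that resolve, with their addresses."""
    found = {}
    for word in wordlist:
        hostname = f"{word}.{domain}"
        try:
            found[hostname] = socket.gethostbyname(hostname)  # plain public lookup
        except socket.gaierror:
            pass  # name doesn't resolve; move on to the next guess
    return found

# Placeholder target and wordlist - purely illustrative
print(enumerate_subdomains("example.com", ["www", "mail", "vpn", "dev"]))
```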
We’ve seen this pattern before. Consider phone numbers. In the 1990s, receiving a call or piece of mail felt like an event. Now? Most calls are spam, and most email is junk. People don’t pick up unless they recognize the number. Our relationship with public identifiers has undergone a fundamental shift.
The same evolution is happening with network services. Public IP addresses and DNS names are easily scraped, scanned, and attacked. In an age of automation and AI-assisted hacking, exposing your infrastructure by default amounts to sending an invitation.
Yet we continue treating server addresses like phone numbers in a white pages directory – a model that no longer works for the threats we face.
Obsolete Assumptions
Both BGP and DNS reflect assumptions that simply don’t hold up anymore:
- Assumption: Networks are trusted.
-- Reality: Most attacks now originate from within or via compromised peers.
- Assumption: Routes are stable.
-- Reality: Internet routes change unpredictably due to performance tuning, outages, and misconfigurations.
- Assumption: Identities don’t matter.
-- Reality: Zero-trust architecture has become the standard for secure design.
- Assumption: Services are few and fixed.
-- Reality: Modern architectures dynamically spin up and down thousands of services.
The more we scale and automate, the more these assumptions crumble.
Time for a Rethink
The internet’s early architecture was undeniably brilliant for its time. But that time has passed.
Today’s needs are different. We need:
- Deterministic data paths that can be trusted end-to-end.
- Secure naming systems that are private by default.
- Policy-aware routing that aligns with business, performance, and compliance requirements.
- A model where services announce themselves securely to authorized peers, not to the entire internet.
These aren’t enhancements; they’re necessities.
The irony is striking: everything else in tech has evolved dramatically. Compute became elastic. Storage turned redundant and distributed. Deployment went fully automated. But networking? It’s still largely manual, primarily public, and built mainly on 40-year-old concepts.
This should be our wake-up call. We can’t keep patching internet security with duct tape and hoping for the best. It’s time to challenge the status quo and ask a hard question: are the foundational protocols we depend on every day actually fit for purpose anymore?
Security and privacy can’t remain afterthoughts we layer onto a crumbling foundation. They need to be built from the ground up. That means completely reimagining how the internet connects, routes, and identifies everything.
Think about it: what other critical system in your life still runs on ideas from the 1980s?
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
- VRI will complement NCSC's current vulnerability research efforts
- It will be tasked with communicating NCSC's needs with external experts
- The goal is to understand the flaws, patches, and research methodology
The UK’s National Cyber Security Centre (NCSC) has announced the formation of the Vulnerability Research Initiative (VRI), a new program that will see it partner with third-party cybersecurity experts on vulnerability research into commodity and specialized tech.
The NCSC said it currently operates a team of internal researchers who are experts in common technologies, and who conduct vulnerability research (VR) on a range of technologies and products, from traditional commodity tech to specialized solutions used in only a few places.
However, the team is unable to keep up with the speed at which the technology industry is changing. New tech is popping up every day, and old tech is evolving beyond recognition, “and thus VR is getting harder”.
Understanding the vulnerabilities
“This means the NCSC demand for VR continues to grow,” NCSC explained.
To tackle the challenge, it decided to create VRI, and bring in third-party help. The program’s goal is to help NCSC’s researchers understand the vulnerabilities present in today’s technologies, the necessary mitigations, how experts conduct their research, and which tools they use in the process.
“This successful way of working increases NCSC’s capacity to do VR and shares VR expertise across the UK’s VR ecosystem,” the press release further reads.
The VRI core team will include technical experts, relationship managers, and project managers, with the core team being responsible for communicating the VR team's requirements to VRI industry partners and for overseeing the progress and outcomes of the research.
In the (near) future, NCSC will bring in more experts to tackle AI-powered, or otherwise AI-related vulnerabilities. Those who are interested in participating in VRI should reach out to the agency via email at vri@ncsc.gov.uk. The address should not be used for sharing vulnerability reports.
Via BleepingComputer
You might also like
- NCSC gets influencers to sing the praises of 2FA
- Take a look at our guide to the best website builders around
- We've rounded up the best password managers
Congratulations, fellow Stranger Things fan! You’re on the home stretch of the three-year wait between Stranger Things season 4 and Stranger Things season 5. The final episodes will be split into three releases – volume 1 on November 26, volume 2 on December 25 and the season 5 finale on December 31 – so there’s still a small wait to go, but rumour has it we’re getting the first full season 5 trailer at some point this week.
Today (July 15) marks nine years since we first saw one of the best Netflix shows of all time on screen, with the series debuting in 2016. If you can’t remember what happened when we last visited Hawkins (and that’s understandable), our group of best friends attempted to defeat Vecna, causing Max's apparent death as well as the opening of a massive rift between the town and the Upside Down. No big deal, I’m sure.
But as Netflix finally tries to get its fanbase excited about the drawn-out end, a painful question has to be asked. Why should anybody care about Stranger Things season 5 when we’ve been left in the lurch for so long? I’m wondering if it would have been less of a hassle to have been eaten by Vecna when we first met him, and that’s a problem.
Stranger Things season 5 and its trailer are coming, but do we even care?
Let’s put it into context. Since Stranger Things season 4 aired, we’ve had four seasons of The Bear, five seasons of Slow Horses (if you count new episodes we’re going to get in September) and two seasons of Severance… and look how long that took to return. In the Stranger Things world alone, we’ve had the arrival of non-canon West End play The First Shadow, plus the announcement of two spinoffs: animated series Stranger Things: Tales from '85 and live-action show The Boroughs. Everyone and their nan has seemingly complained about not getting season 5 in the meantime, and they’ve got good reason to be annoyed.
Back in the good old days of the mid-2000s, we were regularly whipping through 22-episode seasons of TV like there was no tomorrow. Desperate Housewives and Lost were great examples of this, each requiring a high level of input and resources in their own way. Fast forward two decades, and the consolation prize of feeling lucky to get eight new episodes in three years doesn’t feel like something worth investing in.
Sure, these upcoming episodes are basically feature length movies and the technical craft needed to achieve them is immense, but this is Hollywood, for goodness sake! Every resource we allegedly have at our disposal is supposed to be at the top of its game, able to give us everything we have and haven’t yet dreamed up. From a marketing perspective, Netflix might have thought dragging out the jewel in the crown of its streaming back catalog would make fans hungrier for the end product, but there’s only so far you can stretch the theory in practice.
Of course, I’ll be streaming Stranger Things season 5 like the rest of us, but it will be a reluctant watch. The endless wait over the last few years has definitely made me think twice about investing in shows on one of the best streaming services around, and that’s before we even touch on the frequent cancellations (another story for another day).
- Stranger Things season 5's 12-month shoot yielded 650-plus hours of footage for its eight 'blockbuster movie' episodes, the popular Netflix show's creators say
- Stranger Things: The First Shadow's big lore reveal needs addressing in hit Netflix show's final season
- Everything new on Netflix in July 2025
- New Microsoft Research paper identifies areas where AI is already being used the most
- It also shows areas that AI has very little influence on currently
- The research could show potential for AI job augmentation, not just replacement
I don’t know about you, but I have this kind of nagging fear that AI is coming for me one of these days. If not imminently, then in the very near future. One thing that might allay that fear is knowing exactly where AI’s axe is going to fall in the labor market, so that I can make sure I’m always just out of its reach.
The problem is that right now we have a lot of people making bold assumptions about what sorts of jobs AI will take away, but as we all know, no plan survives contact with the enemy, so it might be better to approach the problem from another direction.
A new report from Microsoft Research has analyzed 200,000 real conversations between people and Copilot to understand how AI is being used by people in the workplace right now. This way, we can determine which roles are likely to be the most impacted as companies adopt generative AI in the future.
The most at risk
It should come as no surprise that, according to the report, the work activities people most commonly seek AI assistance with all involve gathering information and writing, and that the most common activities AI itself performs are providing information and assistance, writing, teaching, and advising.
It turns out that interpreters and translators are top of the list when it comes to compatibility with AI, with a stunning 98% of their activities overlapping with frequent Copilot tasks that have fairly high completion rates.
So, if you're thinking of changing careers to become a translator, it might be worth considering your options. Also at the top of the list are historians, writers and authors, and journalists. It should be no surprise to also see proofreaders, editors, and PR specialists high up on the list, too.
(Image credit: Shutterstock)
The most resistant to AI
Physical trades and jobs that involve working hands-on with people are the most resistant to the influence of AI. The report puts nursing assistants, massage therapists, and machinery operators, including truck and tractor drivers, as the most AI-resistant occupations. Manual laborers like roofers, dishwashers, maids, and housekeeping cleaners were also near the top of the list.
The news will be good for some jobs, but terrible for others. Of course, nothing is guaranteed, and if you’re working in one of the most compatible areas for AI (I know I am!), then don’t panic right now because the research could be simply indicating that your area is one that is ripe for augmentation by AI, rather than replacement.
I think there will always be a need for skilled humans in some capacity, even in areas that will be heavily dominated by AI. That said, understanding AI’s impact on jobs is probably going to put you in a better position than if you have no clue about its threats.
You might also like
- You don’t have to explain everything to Claude anymore – it’s finally in your apps
- The next generation of ChatGPT is just around the corner - here’s why GPT-5 could transform the way you use AI
- I tried asking ChatGPT what my favorite fictional characters say about me – here’s what I learned about myself
- AI job losses are inevitable, but new innovations will curb the effects, Nvidia CEO Jensen Huang says
- Huang has been warned by senators before an upcoming trip to China
- US tech should 'set the global standard', Huang argues
It’s long been prophesied AI will lead to mass unemployment, with several CEOs and tech leaders warning AI will wipe out millions of jobs, and firms such as Microsoft laying off thousands of workers whilst bringing in new AI productivity tools.
Now, Jensen Huang, CEO of chip manufacturer and AI firm Nvidia, has offered his (slightly stale) perspective. In an interview with CNN, Huang essentially passed job protection responsibilities over to business leaders, claiming: “If the world runs out of ideas, then productivity gains translates to job loss.”
“Everybody’s jobs will be affected. Some jobs will be lost. Many jobs will be created and what I hope is that the productivity gains that we see in all the industries will lift society,” Huang said.
Bipartisan warnings
Huang’s authority on AI is significant too, thanks to Nvidia's power in the market. The company's GPUs remain among the most influential tech products in the world, and are largely powering AI development across the globe - including in China, which is spooking some US politicians.
Huang recently received a warning written by Republican Senator Jim Banks and Democratic Senator Elizabeth Warren, Reuters reports, which advised against meeting with Chinese companies, arguing this could “legitimize companies that cooperate closely with the Chinese military or involve discussing exploitable gaps in U.S. export controls”.
An Nvidia spokesperson said US technology will ‘set the global standard’ and that ‘America wins’ - with China being one of the largest software markets in the world - adding that AI software "should run best on the U.S. technology stack, encouraging nations worldwide to choose America”.
That being said, Huang has recently argued Chinese military branches will avoid using US technology because of the associated risk: “it could be, of course, limited at any time,” he argued, “they simply can’t rely on it”.
He added how Chinese military services, which are already developing powerful tools, “don’t need Nvidia’s chips, certainly, or American tech stacks in order to build their military.”
This comes in response to growing concerns that Chinese companies and military agencies will use US tech to enhance capabilities.
Increasingly harsh restrictions have limited China’s access to top AI technologies, aimed at curbing China’s tech and AI advancement - but concerns remain about the threat to US national security should China use US companies to develop its capabilities.
You might also like
- Take a look at our picks for the best AI tools around
- Check out our choice for best antivirus software
- China has spent billions of dollars building far too many data centers for AI and compute
- The prices for the ROG Xbox Ally and ROG Xbox Ally X have reportedly leaked online
- The ROG Xbox Ally will seemingly cost €599 ($699) while the ROG Xbox Ally X will be priced at €899 ($1,050)
- If the prices are accurate, this will make them Xbox's most expensive consoles yet
The pricing information for the ROG Xbox Ally and ROG Xbox Ally X has seemingly leaked online.
As reported by Spanish publication 3djuegos (via GamesRadar), the console prices were leaked through product boxes on Google.
It appears that the ROG Xbox Ally will cost €599, while the ROG Xbox Ally X will be priced much higher at €899. After converting, the prices respectively translate to $699 and $1,050, and, if accurate, this will make them Xbox's most expensive consoles yet.
This is also how much the original Asus ROG Ally and Asus ROG Ally X cost in Europe.
The pricing has since been removed, but the boxes originally linked back to the official Asus website, suggesting that the company may have mistakenly shared the information ahead of time.
Microsoft announced its take on the Asus ROG Ally last month during the Xbox Games Showcase 2025. Both versions of the handheld will feature a 7-inch 1080p display with a 120Hz refresh rate, but the white Xbox Ally variant targets 720p gaming, while the black Xbox Ally X console aims for 900p to 1080p gaming.
Although the prices may be on the more expensive side, it is understandable considering the console/PC hybrid uses AMD Ryzen chips, with the more powerful Xbox Ally X utilizing the Ryzen AI Z2 Extreme, which comes with 24GB of LPDDR5X-8000 RAM and a 1TB SSD.
Both handhelds have also been redesigned with contoured grips to mimic the Xbox Wireless Controller, feature the Xbox's 'ABXY' button layout, hall-effect impulse triggers, a 3.5mm headphone jack, Bluetooth 5.4 connectivity, and more.
There's no word on release dates for either console just yet, but we'll keep you updated.
You might also like...
- The Nintendo Switch 2 is the company’s least ambitious console to date, but its improvements are astronomical
- I’ve spent 40 hours exploring Death Stranding 2: On the Beach, and it’s an incredible sequel that builds upon its unique predecessor to become a masterpiece
- I’ve spent 150 hours with The Legend of Zelda: Breath of the Wild, and the Switch 2 Edition is an incredible upgrade
- Binarly spotted multiple flaws in UEFI firmware built by AMI
- AMI released fixes months ago, so users should update now
- Many Gigabyte motherboards reached EOL and thus won't be patched
UEFI firmware on dozens of Gigabyte motherboards is vulnerable to a handful of flaws which theoretically allow threat actors to deploy bootkits on compromised devices, establish stubborn persistence and execute additional malicious code remotely, experts have warned.
Security researchers at Binarly recently discovered four vulnerabilities in UEFI firmware developed by American Megatrends Inc. (AMI). All four have a high severity score (8.2/10), and can lead to privilege escalation, malware installation, and other potentially destructive outcomes. They are tracked as CVE-2025-7026, CVE-2025-7027, CVE-2025-7028, and CVE-2025-7029.
Binarly reported its findings to Carnegie Mellon CERT/CC in mid-April 2025, resulting in AMI acknowledging the findings and releasing a patch in mid-June. The patch was pushed to OEMs privately, but apparently Gigabyte did not implement it at the time.
Hundreds of motherboard models affected
There are apparently more than 240 motherboard models that are impacted by these flaws.
Many won’t be patched at all because they have reached end of life, and as such, are no longer supported by Gigabyte. Instead, users worried about the vulnerabilities should upgrade their hardware to newer, supported versions.
Products from other OEMs are also said to be affected by these flaws, but until a patch is applied, their names will not be publicized.
UEFI firmware is low-level code that runs beneath the operating system, and whose job is to initialize the hardware (CPU, memory, storage), and then hand off control to the OS. When this code has flaws, threat actors can exploit them to install so-called “bootkits”, stealthy malware that loads at boot time, before the OS.
Because they run in privileged environments, bootkits can evade antivirus tools, and even survive OS reinstalls and disk replacements. This makes them highly persistent and dangerous, especially in high-security environments. The good news is that exploiting these vulnerabilities often requires admin access, which is not that easily obtainable.
Via BleepingComputer
You might also like
- The first UEFI bootkit malware for Linux has been detected, so users beware
- Take a look at our guide to the best website builders around
- We've rounded up the best password managers
- Cyberpunk 2077: Ultimate Edition will launch on macOS on July 17
- The game uses several exclusive Apple features and technologies
- MetalFX Frame Interpolation and Denoising are coming later this year
There was a time when the idea that a Mac could be a gaming machine was treated as a laughable concept. Those days, though, now seem to be over, as AAA title Cyberpunk 2077: Ultimate Edition has just launched on Apple’s computers after being teased for a few heady months.
The Ultimate Edition includes both the base game from 2020 and the Phantom Liberty expansion that was released in 2023. Although the game was plagued with bugs and glitches when it originally appeared, it has since gone on to earn “overwhelmingly positive” reviews on Steam, with expansion Phantom Liberty being similarly well-received.
Indeed, our Cyberpunk 2077 review called it “ambitious and deeply enjoyable,” while we declared Phantom Liberty to be a “DLC masterclass.”
Despite modern Macs lacking discrete graphics cards, Apple promises that this won’t be a low-resolution, low-frame-rate struggle. According to the company’s announcement, Cyberpunk 2077’s Mac edition has been designed to take “full advantage of Apple silicon and the advanced technologies of Metal … ensuring smooth performance and stunning visuals throughout V’s rise to Night City legend.”
Not just a PC port
(Image credit: CD Projekt Red)
Apple’s announcement made clear that this is not just a simple port of the PC version. Instead, it takes advantage of some of Apple’s own technologies, like Head Tracked Spatial Audio when using AirPods, “dynamically calibrated HDR optimized for Apple XDR displays” (HDR is also available on non-Apple monitors), support for the Magic Mouse and Magic Trackpad (alongside game controllers and keyboard-and-mouse setups), and “For This Mac” graphics presets that are “individually optimized for every Apple Silicon Mac model.”
You also get MetalFX Upscaling, which renders the game at a lower resolution, then uses artificial intelligence (AI) to increase the visual fidelity. The result is a sharp image that puts less load on your Mac’s chip. In addition to MetalFX Upscaling, Cyberpunk 2077 can also be played with AMD’s FSR upscaling and frame generation techs.
Speaking of frame generation, MetalFX Frame Interpolation will come to Cyberpunk 2077 later this year. Another AI feature, this generates an “intermediate frame” between every two input frames, giving you higher frame rates than your hardware might normally be able to manage. Combined with MetalFX Upscaling, Apple says you can achieve 120fps when running the game at Ultra settings – although it’s not clear what specific Apple silicon chip would be required for that level of performance.
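As a rough illustration of the arithmetic only (and emphatically not of Apple’s actual MetalFX algorithm, which uses machine learning rather than simple averaging), inserting one generated frame between each pair of rendered frames roughly doubles the presented frame rate:

```python
# Generic frame-interpolation arithmetic, for illustration only. MetalFX Frame
# Interpolation generates frames with ML; the naive blend below is just a
# stand-in to show where the "extra" frames come from.

def presented_fps(rendered_fps):
    """One generated frame per pair of rendered frames ~doubles the output rate."""
    return rendered_fps * 2

def naive_interpolate(frame_a, frame_b):
    """Average two frames' pixel values - a crude placeholder for the real model."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

print(presented_fps(60))                          # 120: render 60fps, present ~120fps
print(naive_interpolate([0.0, 1.0], [1.0, 1.0]))  # [0.5, 1.0]
```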
MetalFX Denoising is also on the way this year, a tech that Apple boasts will allow “real time path tracing on the game’s highest quality graphics settings.” Put together, Apple says you can expect “smooth performance, sharp visuals, and seamless gameplay.”
Cyberpunk 2077: Ultimate Edition requires a Mac with 16GB of unified memory and an Apple silicon chip (M1 or later). It also requires macOS 15.5 or later “for the best experience.” Following the introduction of AAA games like Baldur’s Gate 3 and Assassin’s Creed Shadows on macOS, it could herald a new era for Mac gamers.
- Nintendo has introduced new guidelines for the Switch 2 eShop in Asia
- The guidelines target several topics, including game bundles and how they can be sold, sensitive content restrictions, and more
- These new guidelines are not yet live in the West
Nintendo has introduced new guidelines for the Nintendo Switch 2 eShop in Asia to seemingly combat low-quality games.
Back in May, the company updated the Nintendo Switch eShop to filter out cheap games and "slop," and now, it has implemented further improvements by releasing new guidelines in Japan and some other Asian countries.
As IGN reports, the guidelines target several topics, including game bundles and how they can be sold, sensitive content restrictions, prohibitions on inaccurate product descriptions, and when and how product information can be updated.
Firstly, in the first year of a game's release, only a maximum of five game bundles may be distributed. The number can then increase for each year the game is available, up to a maximum of eight different bundles.
This new restriction appears to be a way to combat how publishers will constantly push bundles on the store to keep their games in the eShop top charts.
Nintendo is also tackling sensitive content on the platform, which includes "sexualization of children, overly sexual content, discrimination and hate, exploitation of social issues, instructing criminal activity, and political statements". Inaccurate descriptions will now be forbidden.
"It is prohibited to provide inaccurate descriptions of the contents of a product. It is prohibited to provide description of the content of a product as under development if it is not expected to be implemented in the product," the guidelines read.
Finally, publishers and developers will no longer be able to alter their game descriptions without good cause and are now prohibited from changing information on a game's product page after it has gone live.
Developers will also need to contact Nintendo representatives if they intend to distribute an application "that does not include game elements."
These new guidelines are not yet live in the West, but we expect something similar soon.
You might also like...
- The Nintendo Switch 2 is the company’s least ambitious console to date, but its improvements are astronomical
- I’ve spent 40 hours exploring Death Stranding 2: On the Beach, and it’s an incredible sequel that builds upon its unique predecessor to become a masterpiece
- I’ve spent 150 hours with The Legend of Zelda: Breath of the Wild, and the Switch 2 Edition is an incredible upgrade
Full spoilers immediately follow for Peacemaker season 1 and Superman.
Well, that's someone I didn't expect to see in Superman. In a movie jam-packed with fan-favorite and unfamiliar characters from DC Comics – read my Superman cast guide for more details on many of them – I was not expecting to see John Cena's Christopher Smith/Peacemaker at all.
Yet, around midway through James Gunn's Superman movie, there the increasingly popular anti-hero was. Okay, it wasn't a major appearance – in fact, compared to the cameo Milly Alcock's Supergirl makes, it was almost a missable moment. Nonetheless, even though the appearance he makes as a talk show guest is incredibly brief, Smith is technically part of the DC Universe (DCU) film's cast.
It's a cameo that raises some major questions about Peacemaker season 2, which is due to be released on HBO Max and other streaming platforms in late August, though. After all, the R-rated series' first season was set in the now-defunct DC Extended Universe (DCEU), so how does Cena's Smith wind up in Gunn and Peter Safran's rebooted cinematic franchise?
Right now, we don't have a definitive answer, but Gunn has indicated (per Entertainment Weekly) that Peacemaker's Quantum Unfolding Chamber (QUC), which we saw on a couple of occasions in season 1, has a big role to play. Season 2's first trailer also suggested there'll be some multiversal shenanigans at play that'll allow Smith to crossover into the DCU. According to fans who've seen a new teaser playing in front of Superman screenings in select theaters worldwide, it's increasingly likely that Smith will use the QUC to traverse the DC multiverse and end up in the DCU, too.
Yep, there'll be two versions of Peacemaker in the hit show's second season (Image credit: Max / YouTube)
If that ends up happening, we'll know how Peacemaker makes the leap from one cinematic universe to another prior to season 2's arrival. It'll also explain why he appears in Superman, aka the second DCU Chapter One project to be released after season 1 of Creature Commandos.
So, mystery solved, right? Not so fast. Peacemaker's wild cameo in Superman might provide a solution to the aforementioned query, but it generates three other questions about one of the best HBO Max shows' sophomore outings.
For starters, do the events of Peacemaker 2 run concurrent to Superman? If Cena's titular character appears on a talk show in DC Studios' rebooted cinematic universe, he must cross over into the DCU early on in the HBO TV Original's second season. That's the only logical explanation as to why Smith is seen in Superman, and it therefore indicates that these projects' stories run parallel to each other.
Then there's the fact that season 2's initial teaser confirms Peacemaker will have a job interview (one that quickly goes awry, based on the footage we've seen) to join The Justice Gang.
This group, funded by Maxwell Lord and featuring Guy Gardner/Green Lantern and Kendra Saunders/Hawkgirl, also appeared in one of 2025's most anticipated new movies. Unless he uses the QUC to access the DCU, there's no way Smith can meet this trio. Again, that suggests he'll enter the DCU early on in season 2 and lends further credence to Peacemaker's second season and Superman's narratives occurring at the same time.
Rick Flag Sr will spend much of Peacemaker's second season trying to track down the eponymous anti-hero (Image credit: Jessica Miglio/Max)
Lastly, there's the elephant in the room re: 2021's The Suicide Squad. This film was also part of the DCEU, but Peacemaker season 2 will pick up a huge unresolved plot thread from that Gunn-directed flick – that being Smith's murder of Rick Flag Jr.
We already know that Frank Grillo's Rick Flag Sr will appear in Peacemaker 2. By all accounts, it sounds like he'll be the show's next major antagonist, with Flag Sr looking to avenge the death of his only child.
What's particularly interesting about Grillo's own small role in Superman, though, is he seems to get an extra incentive to embark "on a mission" for justice in Peacemaker 2, as Grillo told me during a pre-release interview for Creature Commandos season 1.
Indeed, not only does he have a vast amount of resources at his disposal after becoming ARGUS' newest director, but there's also a notable shift in his sympathetic view on metahumans before Superman ends. That moment occurs when a fellow member of the Pentagon's executive team implies that superpowered beings are running the proverbial show following everything that happens in Superman. If Flag Sr didn't already have a major reason to find and kill Peacemaker, he will do now with that phrase ringing in his ears.
Okay, much of this is speculation on my part, but it's the best I can come up with ahead of Peacemaker season 2's debut. Thankfully, there's just over one month (at the time of publication) until it's released, so my questions will be officially answered sooner rather than later.
You might also like