News
- Microsoft is reportedly looking to formalize a three-day in-office working policy
- Rivals like Amazon now ask for full-time office attendance
- Workers must prepare for more changes, reports claim
Microsoft could be the latest tech giant to explore a stricter in-office working policy, with reports claiming the company is considering enacting a three-day office-working policy for most employees.
Until now, workers have been able to spend around half of their time at home (or away from the office) despite rivals like Amazon enforcing stricter full-time office-working policies.
A Microsoft spokesperson told Business Insider the company had been exploring changes to the policy, but no official alterations have been made yet.
Microsoft considering upping its office-working days
The report claims an official Microsoft announcement could come as soon as September 2025, with rollout of any changes arriving as soon as January 2026, although dates and indeed policies may vary depending on location.
Reports of upcoming changes come after the company has made other adjustments to its workforce, including ongoing worker readjustments and an updated PIP framework to exit underperforming workers more quickly.
In July 2025, Microsoft laid off around 9,000 of its workers, and two months earlier in May a further 6,000 workers lost their jobs.
Company CFO Amy Hood told workers in an internal memo (seen by Business Insider) that they should prepare for another year of "intensity."
"We're entering FY26 with clear priorities in security, quality, and AI transformation, building on our momentum and grounded in our mission and growth-mindset culture," she added.
Although the company has undergone major layoffs in recent months, hiring efforts in other areas and a broader restructuring have seen minimal change to actual overall headcount.
Microsoft CEO Satya Nadella recently said the layoffs had been "weighing heavily" on him, likening the ongoing transformation to that of the 1990s, when PCs and software became democratized, and blaming the shifts on evolving customer needs.
Microsoft told us that it is looking at refreshing its flexible working guidelines, as it has done many times before. The company has a page dedicated to its flexible work approach, which reads "No 'one size fits all'."
You might also like
- These are the best job sites and best recruitment platforms
- Prepare for future job changes with the best online learning platforms
- Workers are fighting back against RTO mandates - as survey claims remote work really does make you more productive
- Altman's tweet suggests that ChatGPT-5 Pro is coming to Plus subscribers
- It will be limited to a few queries a month
- The move would add more confusion to the ChatGPT Plus model selector
Following the backlash against OpenAI removing ChatGPT-4o when it introduced ChatGPT-5, the AI giant has now restored access to ChatGPT-4o, but only for ChatGPT Plus subscribers.
Free tier users are limited to just ChatGPT-5 for now, but it seems that OpenAI and CEO Sam Altman aren't done making changes to the LLM lineup just yet.
In reply to a post on X praising how good GPT-5 Pro is, Altman responded, “We are considering giving a (very) small number of GPT-5 Pro queries each month to Plus subscribers so they can try it out!”
we are considering giving a (very) small number of GPT-5 pro queries each month to plus subscribers so they can try it out! i like it too. but yeah if you wanna pay us $1k a month for 2x the input tokens feels like we should find a way to make that happen... https://t.co/9qC0rsDl6z — August 11, 2025
Plus users currently get a choice between ChatGPT-5 for fast answers and ChatGPT-5 Thinking for slower, but more thoughtful answers. ChatGPT-5 Pro is essentially the best of both worlds, delivering thoughtful answers at speed.
Making even a few queries a month available to Plus users would represent a serious added value to the $20 (£20 / AU$30) monthly subscription. OpenAI describes ChatGPT-5 Pro as “research grade” AI, and it’s currently only available to $200 (£200 / AU$300) a month ChatGPT Pro subscribers.
The current Plus user selection box, with GPT-4o added. (Image credit: Future)
Model confusion
Before I get too excited, it's worth noting that Altman's tweet only says "considering", which means this isn't definitely going to happen. However, if Altman thinks it's a good idea, then, being the CEO, he can probably make it happen.
Part of the ethos of ChatGPT-5 was to do away with the confusing LLM line-up and naming conventions that had arisen around ChatGPT-4. The streamlined ChatGPT-5 was supposed to simplify all the different options and intelligently decide which version of the model would best respond to your query.
By giving Plus users access to ChatGPT-5 Pro, in addition to reintroducing ChatGPT-4o, we will essentially be back in the same old situation where people are given too much choice about which model to use, meaning that OpenAI still has a product naming and line-up problem.
You might also like
- ChatGPT users are not happy with GPT-5 launch as thousands take to Reddit claiming the new upgrade ‘is horrible’
- ChatGPT just created the most ironic movie plot for Final Destination 7, and I would actually stream it over other truly awful installments in the franchise
- Google Gemini has started spiraling into infinite loops of self-loathing – and AI chatbots have never felt more human
- Threat actors cloned Brazilian government websites using generative AI
- The sites were used to steal personal information and money
- In both instances, the sites were almost identical, experts warn
Experts have warned hackers recently used a generative AI tool to replicate several web pages belonging to the Brazilian government in an effort to steal sensitive personal information and money.
The fake websites were examined by Zscaler ThreatLabz researchers, who discovered multiple indicators of the use of AI to generate code.
The websites look almost identical to the official sites, with the hackers using SEO poisoning to make the websites appear higher in search results, and therefore seem more legitimate.
AI-generated government websites
In the campaign examined by ThreatLabz, two websites were spotted mimicking important government portals. The first mimicked the State Department of Traffic’s portal for applying for a driver's license.
(Image credit: Zscaler ThreatLabz)
The two sites appear to be near-identical, with the only major difference being the website’s URL. The threat actor used ‘govbrs[.]com’ as the URL prefix, mimicking the official URL in a way that would be easily overlooked by those visiting the site. The webpage was also boosted in search results using SEO poisoning, making it appear to be the legitimate site.
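The lookalike ‘govbrs[.]com’ works on visual inspection, but it fails a simple structural check: Brazil's official portals sit under the gov.br suffix. As a minimal illustration of why suffix checks beat eyeballing a URL (the example hostnames below are assumptions for demo purposes, not domains taken from the campaign):

```python
def is_official_gov_br(hostname: str) -> bool:
    """Return True only if the hostname's registrable suffix is gov.br."""
    labels = hostname.lower().rstrip(".").split(".")
    # A lookalike such as "govbrs.com" fails this check even though it
    # visually resembles the official domain.
    return labels[-2:] == ["gov", "br"]

print(is_official_gov_br("www.detran.sp.gov.br"))  # True
print(is_official_gov_br("govbrs.com"))            # False
```

A real allowlist would be more involved (think Public Suffix List handling), but even this two-label check defeats the entire class of lookalike registered in this campaign.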
Once on the site, users are invited to enter their CPF number (a Brazilian personal identification number similar to an SSN), which the hacker would ‘authenticate’ using an API.
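That ‘authentication’ step is easy to fake convincingly, because a CPF's last two digits are public check digits that anyone can compute; no government API is needed to make a number look verified. For illustration, a sketch of the standard, publicly documented CPF check-digit algorithm (this is not code from the campaign):

```python
def is_valid_cpf(cpf: str) -> bool:
    """Verify the two public check digits of a Brazilian CPF number."""
    digits = [int(c) for c in cpf if c.isdigit()]
    if len(digits) != 11 or len(set(digits)) == 1:
        return False  # wrong length, or repeated sequences like 111.111.111-11
    for n in (9, 10):
        # Weighted sum of the first n digits, with weights n+1 down to 2
        s = sum(d * w for d, w in zip(digits[:n], range(n + 1, 1, -1)))
        if (s * 10) % 11 % 10 != digits[n]:
            return False
    return True

print(is_valid_cpf("111.444.777-35"))  # True (well-known example CPF)
print(is_valid_cpf("111.444.777-00"))  # False
```

A phishing page can run exactly this kind of check client-side, which is part of what makes the fake flow feel legitimate to victims.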
The victim would then fill out a web form asking for personal information such as name and address, before being asked to schedule psychometric and medical exams as part of the driving application.
The victim would then be prompted to use Pix, Brazil’s instant payment system, to complete their application. The funds would go directly to the hacker’s account.
A second website based on the job board for the Brazilian Ministry of Education lured applicants into handing over their CPF number and completing payments to the hacker. This website used similar URL squatting techniques and SEO poisoning to appear legitimate.
The user would apply to fake job listings, handing over personal information before again being prompted to use the Pix payment system to complete their application.
In ThreatLabz's technical analysis of both sites, much of the code showed signs of having been generated by Deepsite AI from a prompt to copy the official website, including Tailwind CSS styling and highly structured code comments stating “In a real implementation…”
The CSS files of the website also include templated instructions on how to reproduce the government sites.
The ThreatLabz blog concludes, “While these phishing campaigns are currently stealing relatively small amounts of money from victims, similar attacks can be used to cause far more damage. Organizations can reduce the risk by ensuring best practices along with deploying a Zero Trust architecture to minimize the attack surface.”
You might also like
- Take a look at the best identity theft protection tools on offer
- These are the best password managers around right now
- This devious ransomware is able to hijack your system to turn off Microsoft Defender
LG and Samsung have been locked in an OLED TV battle for a number of years, ever since Samsung reentered the OLED TV market in 2022 with the Samsung S95B.
Samsung has since been our TV of the year winner for two years in a row, with the Samsung S90C taking the crown in 2023 and the Samsung S95D taking the title in 2024. Even so, several LG OLED models still sit on our list for the best OLED TV.
I’ve already tested both brands' 2025 flagship models, the LG G5 and Samsung S95F, side-by-side. Recently, however, I also had the chance to do a side-by-side test of their entry-level OLEDs, the LG B5 and Samsung S85F.
It’s worth noting that both these TVs use the same standard W-OLED display panel. So they can’t really be that different, right? Well, let’s look at the results of my comparison to find out.
Brightness and contrast
The Samsung S85F (right) demonstrated higher brightness in some highlight areas despite having the same panel as the LG B5 (left) (Image credit: Future)
With both TVs using the same panel, I expected their brightness measurements to be similar, and that did turn out to be the case. When I measured peak HDR brightness, the LG B5 clocked in at 668 nits and the S85F at 777 nits. I assumed a difference of just over 100 nits wouldn’t make an impact on the picture, but I was wrong.
Although the difference was subtle, the S85F’s picture did have bolder highlights in specific movie scenes. Watching The Batman, highlights from light sources such as lamps and torches in the opening subway fight and crime scene sections were indeed brighter on the S85F. The B5 still demonstrated solid brightness, but I found my eye more drawn to the S85F’s picture.
In demo footage from the Spears & Munsil UHD Benchmark 4K Blu-ray, with images such as the sun behind a satellite dish or a horizon at sunset, the S85F had a bit more vibrancy, which made these highlight areas look more striking.
Both the LG B5 (left) and Samsung S85F (right) showed very good contrast, but the B5 handled darker tones better. (Image credit: Warner Bros. / Future)
Both the B5 and S85F demonstrated excellent contrast throughout testing. In The Batman, light sources balanced well with dark tones on screen, creating a good sense of contrast, though the S85F’s higher brightness gave it an edge.
Both TVs also had refined shadow detail when watching The Batman, but the B5 displayed deeper, richer black tones, and it better maintained shadow detail, with the S85F showing minor black crush. In Oppenheimer’s black and white scenes, both TVs again showed a good range of gray tones, but here again, the B5 maintained details in darker areas more accurately than the S85F.
I noticed that while Filmmaker Mode was the more accurate mode for darker movies such as Oppenheimer and The Batman, the differences between the two TVs were more obvious in Cinema mode, especially when it came to brightness, contrast and shadow detail.
Color profile
Both the LG B5 (left) and Samsung S85F showcased vivid colors, but the S85F's had more pop, whereas the B5's looked more natural (Image credit: Universal Pictures / Future)
Where the B5 and S85F really differed was in their color. Although both use the same OLED panel type, the S85F’s colors had a greater visual punch, especially when evaluating both TVs with their Cinema picture preset active.
In Wicked, during the Wizard & I scene where Elphaba stands under some pink flowers, the flowers looked more vibrant on the S85F than the B5, giving them an eye-popping quality. Elphaba’s green skin also appeared brighter, and later in the Emerald City, the greens appeared more dazzling on the S85F.
Where the B5 differed here was in its color depth. The B5’s deeper blacks had the effect of making the pink flowers and Elphaba’s green skin look richer and more lifelike compared to the S85F.
In the same Spears & Munsil footage, shots of colorful butterflies and flowers looked rich and refined on both TVs, but once again, the B5 displayed deeper, richer, and more subtle hues, whereas the S85F had more outright colorful images. I found myself more drawn to the S85F, especially with both TVs in Cinema mode.
Sports
The LG B5 (left) had the better motion handling for sports compared to the Samsung S85F (right) (Image credit: Future)
One thing I wanted to test on these TVs was sports viewing. OLEDs typically have very good motion handling, which is why they always feature in our best TVs for sport guide. I’ve found that Samsung TVs require more setup effort when it comes to sports than LG TVs, and it was no different with the S85F.
In Standard mode (color in the B5’s Sports mode is too oversaturated, so I preferred not to use it), the LG B5 displayed superior motion handling. An MLS soccer game I watched via Prime Video in this mode looked fluid and smooth throughout viewing, with no settings changes required.
The S85F, also in its Standard preset, showed several motion artifacts, such as ghosting on the ball and some stuttering. Changing blur and judder reduction to 5 did help, but even then, there was some picture judder compared to the B5.
Of the two TVs, the B5 was the clear winner when it came to motion handling.
Which TV should you choose?
With many similarities between the LG B5 (left) and Samsung S85F (right), the choice may ultimately come down to price (Image credit: Future)
After testing both the LG B5 and Samsung S85F side-by-side, the differences are generally subtle, so which one you should buy will likely come down to personal preference.
If you want a brighter, bolder-looking TV with more vibrant color, opt for the S85F. If you want a more natural-looking TV with richer blacks, opt for the B5.
Both TVs have the full suite of gaming features we look for on the best gaming TVs, and both have great smart TV platforms. But sports fans will want to go for the B5 due to its superior motion handling.
During my testing, I ultimately found myself more drawn to the S85F. So that’s the one I’d choose, but it was very close.
Honestly, it could all come down to discounts. The 55-inch B5 costs $1,499.99 / £1,399 / AU$1,995, and the 55-inch Samsung S85F costs $1,499.99 / £1,399 / AU$2,495, so in the US and UK, there's currently nothing between them. But as we approach the end of the year, both TVs will inevitably receive discounts, and the amount of those discounts could determine which TV is the better overall value.
You might also like
- Glare-Free vs anti-reflection: This is how Samsung and LG's flagship OLED TV screens fared when I tested them
- I tested LG, Samsung and Sony's elite 2025 OLED TVs side-by-side – here's the one I'd buy with my own money
- I tested LG's cheapest OLED TV and Samsung's more affordable mini-LED TV side-by-side and I know which one I'd buy
- Researchers claim to have found a way to turn a Lenovo webcam into a BadUSB device
- BadUSB is a firmware attack that can turn a USB device into a malware-delivery weapon
- Lenovo released a firmware update, so users should patch now
Your device's webcam can be reprogrammed to turn on you and serve as a backdoor for a threat actor, experts have warned.
Security researchers at Eclypsium claim certain Lenovo webcam models powered by Linux can be turned into so-called “BadUSB” devices.
The bug is now tracked as CVE-2025-4371. It doesn’t yet have a severity score, but it does have a nickname: BadCam.
Reflashing firmware
Roughly a decade ago, researchers found a way to reprogram a USB device’s firmware to act maliciously, letting it mimic keyboards, network cards, or other devices. A reflashed device can run commands, install malware, or steal data, and its biggest advantage over conventional malware is that it can bypass traditional security measures.
The vulnerability was dubbed “BadUSB”, and was abused in the wild when the FIN7 threat group started mailing weaponized USB drives to US-based organizations. At one point, the FBI even warned people not to plug in USB devices found in office toilets or airports, or received in the post.
Now, Eclypsium says that the same thing can be done with certain USB webcams, built by Lenovo and powered by Linux.
"This allows remote attackers to inject keystrokes covertly and launch attacks independent of the host operating system," Eclypsium told The Hacker News.
"An attacker who gains remote code execution on a system can reflash the firmware of an attached Linux-powered webcam, repurposing it to behave as a malicious HID or to emulate additional USB devices," the researchers explained.
"Once weaponized, the seemingly innocuous webcam can inject keystrokes, deliver malicious payloads, or serve as a foothold for deeper persistence, all while maintaining the outward appearance and core functionality of a standard camera."
Gaining remote access to a webcam requires the device to be compromised in the first place, in which case the attackers can do what they please anyway. However, users should be careful not to plug in other people’s webcams, or buy such products from shady internet shops.
Lenovo 510 FHD and Lenovo Performance FHD webcams were said to be vulnerable, and a firmware update (version 4.8.0) was released to mitigate the threat.
You might also like
- FBI warns over new malware targeting webcams and DVRs
- Take a look at our guide to the best authenticator app
- We've rounded up the best password managers
- Modat found more than 1.2 million misconfigured devices leaking info
- This includes MRI scans, X-rays, and other sensitive files, together with patient contact data
- The healthcare industry needs a proactive approach to cybersecurity, researchers warn
Researchers have warned that over a million misconfigured, internet-connected healthcare devices are currently leaking the data they generate online, putting millions of people at risk of identity theft, phishing, wire fraud, and more.
Modat recently scanned the internet for misconfigured, non-password-protected devices and their data. Using the tag ‘HEALTHCARE’, it found more than 1.2 million devices generating, and leaking, confidential medical images, including MRI scans, X-rays, and even blood work, from hospitals all over the world.
“Examples of data being leaked in this way include brain scans and X-rays, stored alongside protected health information and personally identifiable information of the patient, potentially representing both a breach of patient’s confidentiality and privacy,” the researchers explained.
Weak passwords and other woes
In some cases, the researchers found information unlocked and available to anyone who knows where to look; in other cases, the data was protected with passwords so weak and predictable that breaking in and grabbing it posed no challenge.
“In the worst-case scenario, leaked sensitive medical information could leave unsuspecting victims open to fraud or even blackmail over a confidential medical condition,” they added.
In theory, a threat actor could learn of a patient’s condition before they do. Together with names and contact details, they can reach out to the patient and threaten to release the information to friends and family, unless they pay a ransom.
Alternatively, they could impersonate the doctor or the hospital and send phishing emails inviting the victim to “view sensitive files” which would just redirect them to download malware or share login credentials.
The majority of the misconfigured devices are located in the United States (174K+), with South Africa a close second (172K+). Australia (111K+), Brazil (82K+), and Germany (81K+) round out the top five.
For Modat, a proactive security culture “beats a reactive response”.
“This research reinforces the urgent need for comprehensive asset visibility, robust vulnerability management, and a proactive approach to securing every internet-connected device in healthcare environments, ensuring that sensitive patient data remains protected from unauthorized access and potential exploitation," commented Errol Weiss, Chief Security Officer at Health-ISAC.
You might also like
- Major breach at medical billing giant sees data on 5.4 million users stolen - here's what we know
- Take a look at our guide to the best authenticator app
- We've rounded up the best password managers
- Meta has two new VR headsets you can try
- They're prototypes that aren't usually accessible to the public
- You'll have to attend SIGGRAPH 2025 to give them a whirl
Every so often, Meta will showcase some of its prototype VR headsets – models which aren’t for public release like its fully fledged Meta Quest 3, but which let its researchers test what happens when specs are pushed far beyond current commercial headset limits. Like the Starburst headset, which offered a peak brightness of 20,000 nits.
Tiramisu and Boba 3 – two more of its prototypes – are more concerned with offering “retinal resolution” and an extremely wide field of view than with boasting incredible brightness, but as with Starburst, Meta is giving folks the chance to demo these usually lab-exclusive headsets.
That is, if you happen to be attending SIGGRAPH 2025 in Vancouver.
(Image credit: Meta)
I’ve been to SIGGRAPH previously, and it’s full of futuristic XR tech and demos that companies like Meta and its Reality Labs have been cooking up.
Though usually the prototypes look just like Tiramisu. That is to say, a little impractical.
Tiramisu does at least seem to be a headset you can wear normally, even if it does look like a Meta Quest 2 that has been comically stretched – Starburst, for example, had to be suspended from a metal frame as it was far too heavy to wear.
But Tiramisu doesn’t look like the most practical model. The trade-off is that Meta can outfit the headset with µOLED displays and other tech, like custom lenses, to deliver high contrast and resolution – 3x and 3.6x, respectively, what the Meta Quest 3 offers.
As a result, Tiramisu is the closest Meta has got to achieving the “visual Turing test” – virtual visuals that are indistinguishable from real ones.
(Image credit: Meta)
Boba 3, on the other hand, looks like a headset you could buy tomorrow, and the way Meta talks about it, it does feel like something inspired by it could arrive at some point in the future.
That’s because it looks surprisingly compact – apparently it weighs just 660g, a little less than a Quest 3 with Elite strap at 698g. It also has a 4K by 4K resolution, and – the reason this headset is special – it boasts a horizontal field of view of 180° and a vertical field of view of 120°.
That’s significantly more than the 110° and 96°, respectively, offered by the Meta Quest 3, and while the 3 covers about 46% of a person’s field of view, Boba 3 captures about 90%.
The only issue is Boba 3 does require a “top-of-the-line GPU and PC system”, according to Display Systems Research Optical Scientist Yang Zhao. That’s because it needs to fill in the extra space the larger field of view creates, leading to higher compute requirements.
Though Zhao did note that Boba 3 is “something that we wanted to send out into the world as soon as possible”, and it does resemble goggles in a way – the design direction Meta’s next headset is said to be taking.
So we’ll have to keep our eyes peeled to see what Meta launches next, but while only a few lucky folks will get to try Boba 3 at SIGGRAPH, I’m hoping many more of us will get to experience the next-gen VR headsets it inspires.
You might also like
Online game chats are notorious for vulgar, offensive, and even criminal behavior. Even if toxic interactions are only a tiny percentage, the many millions of hours of chat add up to a real problem for players and video game companies, especially when kids are involved. Roblox has a lot of experience dealing with that aspect of gaming, and has used AI to build Sentinel, a whole system for enforcing safety rules among its more than 100 million mostly young daily users. Now, it's open-sourcing Sentinel, offering the AI and its capacity for identifying grooming and other dangerous behavior in chat before it escalates, for free, to any platform.
This isn’t just a profanity filter that gets triggered when someone types a curse word. Roblox has always had that. Sentinel is built to watch patterns over time. It can track how conversations evolve, looking for subtle signs that someone is trying to build trust with a kid in potentially problematic ways. For instance, it might flag a long conversation where an adult-sounding player is just a little too interested in a kid’s personal life.
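Roblox hasn't published Sentinel's internals, but the pattern-over-time idea it describes can be sketched as a rolling-window risk score, where individually mild signals accumulate within a conversation. The phrase list, weights, and threshold below are all invented for illustration:

```python
import time
from collections import defaultdict, deque

# Hypothetical signal phrases and weights -- not Sentinel's actual model.
RISK_SIGNALS = {
    "how old are you": 2,
    "what school do you go to": 3,
    "let's chat somewhere else": 4,
    "don't tell your parents": 5,
}

class ConversationMonitor:
    """Accumulates risk signals per conversation over a rolling time window."""

    def __init__(self, window_seconds=60, flag_threshold=8):
        self.window = window_seconds
        self.threshold = flag_threshold
        self.events = defaultdict(deque)  # convo id -> deque of (timestamp, score)

    def observe(self, convo_id, message, now=None):
        if now is None:
            now = time.time()
        # Score this message against the signal list
        text = message.lower()
        score = sum(w for phrase, w in RISK_SIGNALS.items() if phrase in text)
        q = self.events[convo_id]
        if score:
            q.append((now, score))
        # Drop signals that have fallen out of the rolling window
        while q and now - q[0][0] > self.window:
            q.popleft()
        # Flag when the windowed total crosses the threshold
        return sum(s for _, s in q) >= self.threshold

monitor = ConversationMonitor()
print(monitor.observe("c1", "hey, how old are you?", now=0.0))   # False: one mild signal
print(monitor.observe("c1", "don't tell your parents, let's chat somewhere else",
                      now=10.0))                                 # True: signals accumulate
```

The point of the window is exactly what the article describes: no single message here would trip a keyword filter on its own, but their combination within one conversation does.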
Sentinel helped Roblox moderators file about 1,200 reports to the National Center for Missing and Exploited Children in just the first half of this year. As someone who grew up in the Wild West of early internet chatrooms, where “moderation” usually meant suspecting that people who used correct spelling and grammar were adults, I can’t overstate how much of a leap forward that feels.
Open-sourcing Sentinel means any game or online platform, whether as big as Minecraft or as small as an underground indie hit, can adapt Sentinel and use it to make their own communities safer. It’s an unusually generous move, albeit one with obvious public relations and potential long-term commercial benefits for the company.
For kids (and their adult guardians), the benefits are obvious. If more games start running Sentinel-style checks, the odds of predators slipping through the cracks go down. Parents get another invisible safety net they didn’t have to set up themselves. And the kids get to focus on playing rather than navigating the online equivalent of a dark alley.
For video games as a whole, it’s a chance to raise the baseline of safety. Imagine if every major game, from the biggest esports titles to the smallest cozy simulators, had access to the same kind of early-warning system. It wouldn’t eliminate the problem, but it could make bad behavior a lot harder to hide.
AI for online safety
Of course, nothing with “AI” in the description is without its complications. The most obvious one is privacy. This kind of tool works by scanning what people are saying to each other, in real time, looking for red flags. Roblox says it uses one-minute snapshots of chat and keeps a human review process for anything flagged. But you can’t really get around the fact that this is surveillance, even if it’s well-intentioned. And when you open-source a tool like this, you’re not just giving the good guys a copy; you also make it easier for bad actors to see how you're stopping them and come up with ways around the system.
Then there’s the problem of language itself. People change how they talk all the time, especially online. Slang shifts, in-jokes mutate, and new apps create new shorthand. A system trained to catch grooming attempts in 2024 might miss the ones happening in 2026. Roblox updates Sentinel regularly, both with AI training and human review, but smaller platforms might not have the resources to keep up with what's happening in their chats.
And while no sane person is against stopping child predators or jerks deliberately trying to upset children, AI tools like this can be abused. If certain political talk, controversial opinions, or simply complaints about the game are added to the filter list, there's little players can do about it. Roblox and any companies using Sentinel will need to be transparent, not just with the code, but also with how it's being deployed and what the data it collects will be used for.
It's also important to consider the context of Roblox's decision. The company is facing lawsuits over what's happened with children using the platform. One lawsuit alleges a 13‑year‑old was trafficked after meeting a predator on the platform. Sentinel isn't perfect, and companies using it could still face legal problems. Ideally, it would serve as a component of online safety setups that include things like better user education and parental controls. AI can't replace all safety programs.
Despite the very real problems of deploying AI to help with online safety, I think open-sourcing Sentinel is one of the rare cases where the upside of using AI is both immediate and tangible. I’ve written enough about algorithms making people angry, confused, or broke to appreciate when one is actually pointed toward making people safer. And making it open-source can help make more online spaces safer.
I don’t think Sentinel will stop every predator, and I don’t think it should be a replacement for good parenting, better human moderation, and educating kids about how to be safe when playing online. But as a subtle extra line of defense, Sentinel has a part to play in building better online experiences for kids.
You might also like
How often do you upgrade your MacBook? I’m willing to bet it’s not very often, and certainly not every year. If so, that’s great news for you, but perhaps not so pleasing for Apple, which would rather you stumped up for one of the best MacBooks as often as possible. Yet is there really a reason to upgrade if your laptop does everything you need for years at a time?
Take me, for example. I’ve had a MacBook Pro with M1 Pro chip since 2022, and it’s served me superbly well in that time. It handles all my work without a hitch and gives me strong gaming performance for the titles I play. Even Cyberpunk 2077 performs impressively well if I turn frame generation on, and I’m happy to do that since it boosts the frame rates from my integrated laptop chip – which is several generations out of date – up to the mid-70s.
That all means that over the past few years, I’ve looked at advances in the MacBook Pro and decided to take a pass. New chips have been the only major changes of note, and with no big design adjustments or feature improvements to tempt me – and my M1 Pro chip performing so consistently – there’s been no need to rock the boat.
However, I’m starting to get the feeling that this situation is not going to last. Judging by the latest rumors, things could change in a big way in the next year or two, and it might be harder than ever for me to resist the lure of a new MacBook Pro. The good news, though, is that this step up could last me well into the next decade.
The OLED revolution
(Image credit: Apple)
That idea centers around Apple’s M6 chip, which is expected to land in the MacBook Pro in late 2026 or early 2027. That model should come with an OLED display as well as the new chip, according to Bloomberg journalist Mark Gurman’s latest Power On newsletter.
There, Gurman says that the upcoming M6 MacBook Pro “represents enough of a change to finally move the needle” in his opinion, bringing with it a new chip, an improved screen, plus a thinner, redesigned chassis for the first time in several years.
Gurman is not the only person who could be swayed by this upcoming Mac: it’s the kind of upgrade that might convince me to open the purse strings as well. After all, by the time the M6 model launches, my M1 Pro laptop will be five generations out of date and might start showing its age a little more. It’s still going strong for now, but that won’t be the case forever.
But the bigger change will be the OLED display. This has been rumored for years, but Apple’s obsessive perfectionism has meant we still haven’t seen it in action. When it finally arrives, though, Apple’s gaming gains could finally be married up with the kind of visual output they deserve. The question of whether MacBooks are actually gaming machines has been discussed much over the last few years, but adding an OLED display into the mix would surely settle the question in Apple’s favor once and for all.
What does the future hold?

(Image credit: Future)

But the fact that it would take an upgrade as momentous as this to convince me to get a new MacBook raises another question: what happens after the M6 MacBook Pro has been and gone?
Generally, MacBook upgrades aren’t as feature-packed as the one we’re expecting when the M6 chip and OLED display come around. The M4 MacBook Pro, for example, offered a new chip, added Center Stage to the front-facing camera, brought Thunderbolt 5 connectivity to the M4 Pro and M4 Max chips, added a nano-texture coating to the display… and not a whole lot else. Those changes are fine, but they’re not groundbreaking.
Apple has, in some ways, created a problem for itself: its chips are now so performant that they can last for generations, dissuading people from upgrading. Contrast that to the bad old Intel Mac days, when the chips were so underpowered that many people felt forced into expensive annual upgrades, and it’s clear that Apple users are in a better spot than ever.
These days, Apple silicon chips have a lot more longevity, which means it’s harder for Apple to persuade its users to buy new MacBooks on the regular. My hope, at least, is this means Apple will bring more significant new features in the coming years in a bid to tempt upgraders.
But even if it doesn’t, just having a chip that lasts years without faltering is a win for Apple fans, and my M1 Pro is a testament to that. If I upgrade to the M6 MacBook Pro and its OLED display, I’m hoping the improvements it brings last me half a decade or more, just as my long-serving M1 Pro chip has done before it.
You might also like

- Fraudulent TikTok Shops driving victims into fake portals designed to steal cryptocurrency and data
- Scammers mimic trusted seller profiles and lure shoppers with unrealistic discounts across popular platforms
- SparkKitty malware secretly collects sensitive data from devices, enabling long-term unauthorized surveillance and control
Cybercriminals are now making use of TikTok Shops to spread malware and steal funds from unsuspecting young users of the platform.
The campaign, revealed by security experts at CTM360, mimics the profile of legitimate ecommerce sellers to build its credibility, often using AI-generated content.
In addition to TikTok, these fake shops can also be found on Facebook, where their modus operandi is to advertise massive price cuts to lure potential victims.
Exploiting brand trust for profit

The main aim of these malicious actors is not only to defraud users, mostly in cryptocurrency, but also to deliver malicious software and steal login details.
At the moment, TikTok Wholesale and Mall pages have been linked to over 10,000 such fraudulent URLs.
These URLs, which look like official platforms, offer “buy links” that redirect visitors to a criminal phishing portal.
Once users click the link and enter the portal, they will be made to pay a deposit into an online wallet or purchase a product – the online wallet is fake and the product does not exist.
Some operations take the deception further by posing as an affiliate management service, pushing malicious apps disguised as tools for sellers.
More than 5,000 app download sources have been uncovered, many using embedded links and QR codes to bypass traditional scrutiny.
One identified threat, known as SparkKitty, is capable of harvesting data from both Android and iOS devices.
It can enable long-term access to compromised devices, creating ongoing risk even after the initial infection.
The malware is often delivered through these fake affiliate applications, turning what appears to be a legitimate opportunity into a direct path for account takeover and identity theft.
Because cryptocurrency transactions are irreversible, victims have little recourse once funds are transferred.
A common thread in the campaign is the use of pressure tactics, with countdown timers or limited-time discounts designed to force quick decisions.
These tactics, while common in legitimate marketing, make it harder for users to pause and assess the authenticity of an offer.
Domain checks reveal that many of the scam sites use inexpensive extensions such as .top, .shop, or .icu, which can be purchased and deployed rapidly.
How to stay safe

- Make sure you check the website address carefully before entering your payment information. Every detail of the website should match the legitimate domain.
- Ensure the site uses secure HTTPS encryption.
- If the price cut feels too huge, follow your gut and stay away.
- Do not allow a countdown timer to pressure you into making a payment; this pressure is a common tactic used by malicious actors.
- Always insist on standard payment methods and avoid direct wire transfers or cryptocurrency, as these are harder to trace and often used in scams.
- Install and maintain a trusted security suite that combines robust antivirus protection with real-time browsing safeguards to block malicious websites.
- Configure your firewall to actively monitor and filter network traffic, preventing unauthorized access and blocking suspicious connections before they reach your device.
- Pay close attention to alerts from reputable security programs, which can detect and warn you about known phishing sites or fraudulent activities in real time.
- Remain cautious even when shopping on professional-looking platforms, as well-designed storefronts can still conceal sophisticated attempts at theft.
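The first two checks above – the domain extension and the HTTPS scheme – are simple enough to automate. Here is a minimal Python sketch; the watch list contains only the extensions named in this report (it is illustrative, not exhaustive, and legitimate sites can use these TLDs too):

```python
from urllib.parse import urlparse

# Cheap, quickly deployed extensions named in the campaign
# (illustrative watch list -- not proof of fraud on its own)
SUSPICIOUS_TLDS = {"top", "shop", "icu"}

def url_warnings(url: str) -> list[str]:
    """Return red flags for a shop URL: a non-HTTPS scheme and a
    top-level domain on the watch list."""
    parsed = urlparse(url)
    warnings = []
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    host = parsed.hostname or ""
    if host.rsplit(".", 1)[-1].lower() in SUSPICIOUS_TLDS:
        warnings.append("suspicious domain extension")
    return warnings

print(url_warnings("http://mega-discount-mall.top/checkout"))
# -> ['not served over HTTPS', 'suspicious domain extension']
print(url_warnings("https://www.tiktok.com/shop"))
# -> []
```

A check like this is only a first filter – a plausible-looking .com site can still be fraudulent – which is why the remaining precautions in the list still apply.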
- These are the best AI website builders around
- Take a look at our pick of the best internet security suites
- OpenAI has new, smaller open models to take on DeepSeek - and they'll be available on AWS for the first time
- A TikTok user damaged their MacBook display in an unexpected way
- The issue was caused by a piece of card placed under the lid
- Even something as innocuous as this can break a laptop screen
For many MacBook owners, it’s a nightmare come true: you open the lid of your pricey laptop and switch it on, only to find the display is a mess, with black bars and glitchy colors everywhere you look. The screen has been ruined, and it’s going to cost a whole lot to put it right.
Worryingly, it’s actually a lot easier to experience this than you might expect: just one seemingly innocuous action can cause hundreds of dollars of damage.
That’s something TikTok user classicheidi found out the hard way. In a video uploaded to the social media platform, classicheidi explained that they had placed a piece of card on the keyboard of their MacBook Air, then closed the lid.
When they opened it again a while later, the screen was ruined.
A costly mistake

(Embedded TikTok video from @classicheidi: “Is this common knowledge omg”)

This is an unfortunate incident, but there’s a reason it happened. It’s not because the displays of Apple’s laptops (or those of any other manufacturer, for that matter) are weak or poorly made. But while they should certainly be treated with care, there’s another issue at play.
It’s what Apple describes in a support document as the “tight tolerances” of its laptops. Apple’s MacBooks are made to be as thin as possible, which means the gap between the keyboard and display is very small when the lid is closed.
Anything placed in that gap – even something as modest as a piece of card – can be pushed up against the display, with the resulting pressure leading to serious damage.
For that reason, Apple warns that “leaving any material on your display, keyboard, or palm rest might interfere with the display when it’s closed and cause damage to your display.” If you have a camera cover, a palm rest cover, or a keyboard cover, Apple says you should remove it before closing your laptop’s lid to avoid this kind of scenario – unfortunately, it’s something we've seen before.
If you want to sidestep the kind of outcome classicheidi suffered, it’s important to ensure there’s nothing between your laptop’s keyboard and screen when you close it. If there is, you might open it up to “the biggest jump scare of the century,” in classicheidi’s words.
You might also like

OpenAI CEO Sam Altman and several other researchers and engineers came to Reddit the day after debuting the powerful new GPT-5 AI model for the time-honored tradition of an Ask Me Anything thread.
Though the discussion ranged over all kinds of technical and product elements, there were a few topics that stood out as particularly important to posters based on the frequency and passion with which they were discussed. Here are a few of the most notable things we learned from the OpenAI AMA.
Pining for GPT-4o

The biggest recurring theme in the AMA was a mournful wail from users who loved GPT-4o and felt personally attacked by its removal. That's not an exaggeration, as one user posted, “BRING BACK 4o GPT-5 is wearing the skin of my dead friend.” To which Altman replied, “what an…evocative image. ok we hear you on 4o, working on something now.”
This wasn’t just one isolated request, either. Another post asked to keep both GPT-4o and GPT-4.1 alongside GPT-5, arguing that the older models had distinct personalities and creative rhythms. Altman admitted they were “looking into this now.”
Most requests were a little more subdued, with one poster asking, “Why are we getting rid of the variants and 4o when we all have unique communication styles? Please bring them back!”
Altman’s answer was brief but direct in conceding the point. He wrote, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it."
It is interesting that so many heavy users prefer the style of the older model, even over the objectively more capable newer ones.
Filtering history

Another big topic was ChatGPT's safety filter, which many users complained was overzealous, both in its current form and before GPT-5. One user described a scenario where they’d been flagged for discussing historical topics: a response about Gauguin was flagged and deleted because the artist was a "sex pest," and the user's own clarifying question was itself flagged.
Altman’s answer was a mixture of agreement and reality check. “Yeah, we will continue to improve this,” he said. “It is a legit hard thing; the lines are often really quite blurry sometimes.” He stressed that OpenAI wants to allow “very wide latitude” but admitted that the boundary between unsafe and safe content is far from perfect, but that "people should of course not get banned for learning."
New tier

Another questioner zeroed in on a gap in OpenAI’s subscription model: "Are you guys planning to add another plan for solo power users that are not pros? 20$ plan offers too little for some, and the $200 tier is overkill."
Altman’s answer was succinct, simply saying, “Yes we will do something here.” No details, just a confirmation that the idea’s on the table. That brevity leaves open possibilities from 'next week' to just saying 'the discussion starts now.' But the pricing gap is a big deal for power users who find themselves constrained by the Plus tier but can’t justify enterprise pricing. If OpenAI does create an intermediate tier, it could reshape how dedicated individual users engage with the platform.
The future

At the end of the AMA, Altman shared some new information about the current and future state of ChatGPT and GPT-5. He started by admitting to some issues with the release, writing that "we expected some bumpiness as we roll out so many things at once. But it was a little more bumpy than we hoped for!"
That bumpiness meant GPT-5 had seemed less impressive than it should have until now.
"GPT-5 will seem smarter starting today," Altman wrote. "Yesterday, we had a sev [severity, meaning system issue] and the autoswitcher was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber."
He also promised more access for ChatGPT Plus users, with double the rate limits, as well as the upcoming return of GPT-4o, at least for those same subscribers. The AMA did paint a clearer picture of what OpenAI is willing to change in response to public pressure.
The return of GPT-4o for Plus users at least acknowledges that raw capability isn’t the only metric that matters. If users are this vocal about keeping an older model alive, future releases of GPT-5 and beyond may be designed with more deliberate flavors built in beyond just the personality types promised for GPT-5.
You might also like

- OpenAI's first AI Agent is here, and Operator can make a dinner reservation and complete other tasks on the web for you
- There’s a new AI agent ready to browse the web and fill in forms without the need to touch your mouse
- I tried this new AI agent that takes control of your mouse and keyboard to help get tasks done, and I can’t believe how good it is
- Huawei makes its CANN AI GPU toolkit open source to challenge Nvidia’s proprietary CUDA platform
- CUDA’s near 20-year dominance has locked developers into Nvidia’s hardware ecosystem exclusively
- CANN provides multi-layer programming interfaces for AI applications on Huawei’s Ascend AI GPUs
Huawei has announced plans to make its CANN software toolkit for Ascend AI GPUs open source, a move aimed squarely at challenging Nvidia’s long-standing CUDA dominance.
CUDA, often described as a closed-off “moat” or “swamp,” has for years been viewed by some as a barrier for developers seeking cross-platform compatibility.
Its tight integration with Nvidia hardware has locked developers into a single vendor ecosystem for nearly two decades, with all efforts to bring CUDA functionality to other GPU architectures through translation layers blocked by the company.
Opening up CANN to developers

CANN, short for Compute Architecture for Neural Networks, is Huawei’s heterogeneous computing framework designed to help developers create AI applications for its Ascend AI GPUs.
The architecture offers multiple programming layers, giving developers options for building both high-level and performance-intensive applications.
In many ways, it is Huawei’s equivalent to CUDA, but the decision to open its source code signals an intent to grow an alternative ecosystem without the restrictions of a proprietary model.
Huawei has reportedly already begun discussions with major Chinese AI players, universities, research institutions, and business partners about contributing to an open-sourced Ascend development community.
This outreach could help accelerate the creation of optimized tools, libraries, and AI frameworks for Huawei’s GPUs, potentially making them more attractive to developers who currently rely on Nvidia hardware.
Huawei’s AI hardware performance has been improving steadily, with claims that certain Ascend chips can outperform Nvidia processors under specific conditions.
Benchmark reports, such as the CloudMatrix 384’s results against Nvidia hardware running DeepSeek R1, suggest that Huawei’s performance trajectory is closing the gap.
However, raw performance alone will not guarantee developer migration without equivalent software stability and support.
While open-sourcing CANN could be exciting for developers, its ecosystem is in its early stages and may not come close to matching CUDA, which has been refined for nearly 20 years.
Even with open-source status, adoption may depend on how well CANN supports existing AI frameworks, particularly for emerging workloads in large language models (LLMs) and AI writer tools.
Huawei’s decision could have broader implications beyond developer convenience, as open-sourcing CANN aligns with China’s broader push for technological self-sufficiency in AI computing, reducing dependence on Western chipmakers.
In the current environment, where U.S. restrictions target Huawei’s hardware exports, building a robust domestic software stack for AI tools becomes as critical as improving chip performance.
If Huawei can successfully foster a vibrant open-source community around CANN, it could present the first serious alternative to CUDA in years.
Still, the challenge lies not just in code availability, but in building trust, documentation, and compatibility at the scale Nvidia has achieved.
Via Tom's Hardware
You might also like

- We've rounded up the best portable monitors available now
- Take a look at our guide to the best authenticator app
- Time to ditch the pen and paper - Modus now boasts e-paper with 75Hz refresh, enough to challenge even a tablet
While the new ‘Liquid Glass’ look and a way more powerful Spotlight might be the leading features of macOS Tahoe 26, I’ve found that bringing over a much-loved iPhone feature has proven to be the highlight after weeks of testing.
Live Activities steal the show on the iPhone, thanks to their glanceability and effortless way of highlighting key info, whether it’s from a first or third-party app. Some of my favorites are:
- Flighty displays flight tracking details in real-time, for myself, family, or friends
- Airlines like United show my seat, a countdown for boarding, or even baggage claim
- Rideshare apps tell you what kind of car your driver is arriving in
- Apple Sports displays your favorite teams' live scores in real-time with the game
Now, all of this is arriving on the Mac – right at the top navigation bar, near the right-hand side. They appear when your iPhone is nearby, signed into the same Apple Account, and mirror the same Live Activities you’d see on your phone. It’s a simple but powerful addition.
Considering Apple brought iPhone Mirroring to the Mac in 2024, this 2025 follow-up isn’t surprising. But it’s exactly the kind of small feature that makes a big difference. I’ve loved being able to check a score, track a flight, or see my live position on a plane – without fishing for my phone.
(Image credit: Future/Jacob Krol)

I’ve used it plenty at my desk, but to me, it truly shines in Economy class. If you’ve ever tried balancing an iPhone and a MacBook Pro – or even a MacBook Air – on a tray table, you know the awkward overlap. I usually end up propping the iPhone against my screen, hanging it off the palm rest, or just tossing it in my lap. With Live Activities on the Mac, I can stick to one device and keep the tray table clutter-free.
Considering notifications already sync and iPhone Mirroring arrived last year, Live Activities were ultimately the missing piece. On macOS Tahoe, they sit neatly collapsed in the menu bar, just like the Dynamic Island on iPhone, and you can click on one to expand it and see the full Live Activity. Another click on any of these Live Activities quickly opens the app on your iPhone via the Mirroring app – it all works together pretty seamlessly.
(Image credit: Future/Jacob Krol)

You can also easily dismiss them – I have found they automatically expand for major updates – saving screen real estate on your Mac. If you already have a Live Activity that you really enjoy on your iPhone, there’s no extra work needed from the developer, as these will automatically carry over.
All in all, it’s a small but super helpful tool that really excels in cramped spaces. So, if you’ve ever struggled with the same balancing act as I have with a tray table, your iPhone, and a MacBook, know that relief is on the way.
It's arriving in the Fall (September or October) with the release of macOS Tahoe 26. If you want it sooner, the public beta of macOS Tahoe 26 is out now, but you'll need to be okay with some bugs and slowdowns.
You might also like

- Report claims AI adoption depends on critical human abilities
- Ethics, adaptability, and audience-specific communication all named
- The skills gap in AI workplaces is as much human as it is technical
As AI tools become more and more embedded in our everyday work, new research claims the challenge of getting the best out of them may not lie solely with the technology.
A report from Multiverse has identified thirteen core human skillsets which could determine whether companies fully realize AI’s potential.
The study warns without deliberate attention to these capabilities, investment in AI writer systems, LLM applications, and other AI tools could fall short of expectations.
Critical thinking under pressure

The Multiverse study draws from observation of AI users at varying experience levels, from beginners to experts, employing methods such as the Think Aloud Protocol Analysis.
Participants verbalised their thought processes while using AI to complete real-world tasks.
From this, researchers built a framework grouping the identified skills into four categories: cognitive skills, responsible AI skills, self-management, and communication skills.
Among the cognitive abilities, analytical reasoning, creativity, and systems thinking were found to be essential for evaluating AI outputs, pushing innovation, and predicting AI responses.
Responsible AI skills included ethics, such as spotting bias in outputs, and cultural sensitivity to address geographic or social context gaps.
Self-management covered adaptability, curiosity, detail orientation, and determination, traits that influence how people refine their AI interactions.
Communication skills included tailoring AI-generated outputs for audience expectations, engaging empathetically with AI as a thought partner, and exchanging feedback to improve performance.
Reports from academic institutions, including MIT, have raised concerns that reliance on generative AI can reduce critical thinking, a phenomenon linked to “cognitive offloading.”
This is the process where people delegate mental effort to machines, risking erosion of analytical habits.
While AI tools can process vast amounts of information at speed, the research suggests they cannot replace the nuanced reasoning and ethical judgement that humans contribute.
The Multiverse researchers note that companies focusing solely on technical training may overlook the “soft skills” required for effective collaboration with AI.
Leaders may assume their AI tool investments address a technology gap when in reality, they face a combined human-technology challenge.
The study refrains from claiming AI inevitably weakens human cognition; instead, it argues the nature of cognitive work is shifting, with less emphasis on memorising facts and more on knowing how to access, interpret, and verify information.
You might also like- These are the best AI website builders around
- Take a look at our pick of the best internet security suites
- The biggest heist of all time involved over $14 billion of crypto being stolen - and it went undetected for five years