The Sony Bravia 8 II is the company’s top OLED TV for 2025, and at $3,500 / £2,999 / AU$3,999, it’s priced at the level you’d expect for a flagship Sony TV.
The Sony Bravia 9, the company’s flagship mini-LED model, was one of the best TVs I reviewed in 2024, so I was very curious to get my hands on Sony’s new flagship OLED. A main reason was that Sony had claimed the new model would be 150% brighter than its Sony Bravia 8 predecessor, an advancement made possible by the company’s switch from a standard W-OLED panel, the type used in last year's Bravia 8, to a QD-OLED panel for the Bravia 8 II.
While our Sony Bravia 8 OLED review was positive overall, the TV’s peak brightness measured significantly below what we’d seen from the best OLED TVs of 2024, such as the Samsung S95D and LG G4. New flagship OLED TVs in 2025 are now even brighter than last year’s models, with the LG G5 measuring 2,268 nits peak brightness when I tested it – a level that surpasses many of the best mini-LED TVs.
Brightness matters
The LG G5 (shown above) has a picture that looks bright even with bright room lighting conditions (Image credit: Future)
The LG G5 features a new “four-stack” OLED display panel, which LG calls the Primary RGB Tandem Structure. Unlike previous panels, which use two blue OLED layers plus a third with red, green, and yellow elements, LG’s new design produces light via individual red, green, and blue layers. This design results in improved color detail and, notably, increased peak and fullscreen brightness. (See chart below for a benchmark comparison between the Sony Bravia 8 II, LG G5, and competing OLED TVs.)
The G5’s exceptional brightness had a real impact on its picture quality when I reviewed it. Movies with HDR had a near-3D quality due to the picture’s powerful contrast, which made bright highlights in pictures gleam with a high level of intensity. Colors also looked bright, which gave them a vivid quality without looking unnatural or boosted.
Another important factor with the G5 when I tested it was its ability to retain strong contrast when viewing in bright room lighting conditions, something helped by its anti-reflective screen. This made it a great TV for viewing daytime sports, and movies and darker TV shows also held up very well in bright lighting.
I’ve just started testing the 65-inch model of the Sony Bravia 8 II, so I’m only able to make preliminary judgments about its performance at this point. But as you can see in the brightness benchmark chart above, it falls short of Sony’s claimed 150% peak brightness boost over last year’s Bravia 8, which maxed out at 817 nits peak and 182 nits fullscreen brightness.
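As a quick sanity check on that claim, here's a back-of-the-envelope sketch using the Bravia 8's measured peak figure from our review. Note that "150% brighter" is ambiguous marketing language, so both plausible readings are shown:

```python
# Back-of-the-envelope check of Sony's "150% brighter" claim, using the
# Bravia 8's measured peak brightness from our review. "150% brighter"
# could mean 1.5x the original figure, or a 150% increase (2.5x).
bravia_8_peak_nits = 817  # measured peak brightness of last year's Bravia 8

implied_target_1_5x = bravia_8_peak_nits * 1.5  # "150% of" reading
implied_target_2_5x = bravia_8_peak_nits * 2.5  # "150% increase" reading

print(f"Implied target at 1.5x: {implied_target_1_5x:.0f} nits")
print(f"Implied target at 2.5x: {implied_target_2_5x:.0f} nits")
```

On either reading, the implied target sits well above what the benchmark chart shows for the Bravia 8 II's measured peak.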
That’s not to say the Bravia 8 II isn’t bright for an OLED TV. Its peak brightness (measured in Cinema mode, the most accurate available picture preset) is about the same as the 65-inch Samsung S90F, that company’s mid-range OLED TV for 2025, and another TV that uses a QD-OLED display panel. Fullscreen brightness is notably lower on the Bravia 8 II compared to the S90F, however.
As I said above, I’m just starting my subjective testing of the Sony Bravia 8 II, so I’ve yet to get a full sense of its capabilities. The Bravia 8 II’s lower brightness compared to the LG G5 (and also several flagship mini-LED TVs I’ve recently tested) means its picture has less of a vibrant pop when viewing in daylight conditions, and its color, while undoubtedly accurate (see picture accuracy chart), also appears a bit less vibrant.
Is the price right?
Sony's Bravia 8 II has a compelling picture, but its measured brightness falls below its premium OLED TV competition (Image credit: Future)
At $3,500 / £2,999 / AU$3,999 for the 65-inch model, the Sony Bravia 8 II is priced around the same as new flagship OLED TVs such as the LG G5 and Samsung S95D. It has a premium design, along with a great set of audio features such as Acoustic Surface Audio+, which turns the TV’s OLED panel into a speaker, and Acoustic Center Sync, which lets it be used as a center channel when paired with compatible Sony speaker systems and soundbars.
It’s hard to ignore, though, that the new Samsung S90F, an OLED TV with comparable brightness plus a superior set of gaming features, costs $1,000 less at $2,499 / £2,699 / AU$4,299 for the 65-inch model. That’s quite a bit of cash that could otherwise be spent on 4K Blu-rays and other home theater goodies.
Does the Sony Bravia 8 II justify its premium price? We’ll soon have our review wrapped up, and at that point will provide complete thoughts on its performance and value.
In the meantime, the premium OLED TV competition is looking pretty tough for 2025, and Sony’s flagship model has plenty to prove.
As we expected, WWDC 2025 – mainly the opening keynote – came and went without a formal update on Siri. Apple is still working on the AI-infused update, which is essentially a much more personable and actionable virtual assistant. TechRadar’s Editor at Large, Lance Ulanoff, broke down the specifics of what’s causing the delay after a conversation with Craig Federighi.
Now, even without the AI-infused Siri, Apple did deliver a pretty significant upgrade to Apple Intelligence, but it’s not necessarily in the spot you’d think. It’s giving Visual Intelligence – a feature exclusive to the iPhone 16 family, iPhone 15 Pro, and iPhone 15 Pro Max – an upgrade as it gains on-screen awareness and a new way to search, all housed within the power of a screenshot.
It’s a companion feature to the original Visual Intelligence feature set – a long press on the Camera Control button (or a customized Action Button on the 15 Pro) pulls up a live view of your iPhone’s camera and the ability to take a shot, as well as to “Ask” or “Search” for what your iPhone sees.
It’s kind of a more basic version of Google Lens, in that you can identify plants and pets, and search visually. Much of that won’t change with iOS 26, but you’ll be able to use Visual Intelligence for screenshots. Following a brief demo at WWDC 2025, I’m eager to use it again.
(Image credit: Jacob Krol/Future)
Visual Intelligence makes screenshots a lot more actionable, and could potentially save you space on your iPhone … especially if your Photos app is anything like mine and filled with screenshots. The big effort on Apple’s part here is that this gives us a taste of on-screen awareness.
Screenshotting a Messages chat with a poster for an upcoming movie night in the demo I saw revealed a glimpse of the new interface. It’s the iPhone’s classic screenshot interface, but on the bottom left is the familiar “Ask,” and “Search” is on the right, while in the middle is a suggestion from Apple Intelligence that can vary based on whatever you screenshot.
In this case, it was “Add to Calendar,” allowing me to easily create an invite with the name of the movie night on the right date and time, as well as the location. Essentially, it's identifying the elements in the screenshot and extracting the relevant information.
Pretty neat! Rather than just taking a screenshot of the image, you can have an actionable event added to your calendar in mere seconds. It also bakes in functionality that I think a lot of iPhone owners will appreciate – even if Android phones like the best Pixels or the Galaxy S25 Ultra have been able to do this for a while.
Apple Intelligence will provide these suggestions when it deems them right – that could be for creating an invite or a reminder, as well as translating other languages to your preferred one, summarizing text, or even reading aloud.
All very handy, but let’s say you’re scrolling TikTok or Instagram Reels and see a product – maybe a lovely button-down or a poster that catches your eye. Visual Intelligence has a solution for this, and it’s kind of Apple’s answer to ‘Circle to Search’ on Android.
You’ll screenshot, and then after it’s taken, simply scrub over the part of the image you want to search. It’s a similar on-screen effect to when you select an object to remove in Photos ‘Clean Up’, but after that, it will let you search it via Google or ChatGPT. Other apps can also opt in to this API that Apple is making available.
(Image credit: Jacob Krol/Future)
And that’s where this gets pretty exciting – you’ll be able to scroll through all the available places to search, such as Etsy or Amazon. I think this will be a fan-favorite when it ships, though not entirely a reason to go out and buy an iPhone that supports Visual Intelligence ... yet, at least.
Additionally, if you’d rather search for just the whole screenshot, that’s where the ‘Ask’ and ‘Search’ buttons come in. With those, you can use either Google or ChatGPT. Beyond the ability to analyze and suggest via screenshots, or search with a selection, Apple’s also expanding the types of things that Visual Intelligence can recognize beyond pets and plants to books, landmarks, and pieces of art.
It wasn’t all available immediately at launch, but Apple is clearly working to expand the capabilities of Visual Intelligence and enhance the feature set of Apple Intelligence. Considering this gives us a glimpse into on-screen awareness, I’m pretty excited.
This year’s Wimbledon tennis championships is set to be the most interactive and comprehensive yet for sports fans thanks to a major AI upgrade.
The iconic tournament, long a mainstay of the British summer, is introducing a range of AI-powered upgrades and services for fans through its Wimbledon.com website and app.
This includes an all-new feature allowing fans to explore information about certain matches in almost real-time, and an upgraded tool looking to predict the possible winners of every match.
Match Chat and more
IBM has now been the technology partner for the All-England Lawn Tennis Club (AELTC), tournament organizers for Wimbledon, for 36 years, with 2025 marking another significant milestone for the pair.
Following in the footsteps of recent AI-powered innovations – such as 2024’s Catch Me Up, which used Watsonx's generative AI to create player-based update "cards" on the Wimbledon.com website and mobile app, and the AI commentary introduced in 2023 – the two organizations hope these new advances will offer fans old and new greater insight into the tournament.
"The way sports are being consumed is ever-evolving,” Kevin Farrar, Head of Sport Partnerships at IBM UK, told a pre-tournament briefing attended by TechRadar Pro. “Our challenge is to see how we tap into that.”
(Image credit: IBM / AELTC)
New for 2025 is Match Chat - a conversational interface that lets fans explore info about a match in near real-time.
The tool is built on watsonx Orchestrate, combining AI agents and large language models (LLMs) such as IBM Granite, which have been trained on the Wimbledon editorial style and the language of tennis - so “Gentlemen’s” and “Ladies’” singles, rather than Men’s/Women’s.
Fans will be able to use a number of pre-written prompts, or ask their own questions (such as, ‘who has served the most aces in the match?’, or ‘who is performing better in the match?’), with replies delivered almost immediately.
IBM says the Match Chat training also ensures the tool stays focused on the tennis - users will only be able to ask it questions about matches at the tournament, so there’s hopefully no chance it will get distracted if you ask it where the tastiest strawberries are.
“Whenever we're designing something new, it always starts with the fan first...we think this is going to be a really engaging experience that addresses a number of different kinds of fans,” noted Chris Clements, Digital Products Lead at the AELTC.
"At its heart, sport is a human thing, it's an emotional thing - we're using AI to enable these stories to be told more effectively."
First introduced in 2024, the “Likelihood to Win” tool is also getting an AI boost, and will now alter its projected win percentage even within a single game, generating projections from AI-powered analysis of player statistics, expert opinion and match momentum.
The 2025 Wimbledon Championships run from June 30 to July 13, 2025, with the app available to download on Android and iOS now, and the features also accessible via the Wimbledon.com website.
- OM System's OM-5 II is a modest upgrade of the OM-5
- Available in three colorways, including a limited edition Sand Beige
- Body only price is £1,099 / AU$1,699.95 (US pricing TBC)
I'm a fan of OM System's Micro Four Thirds cameras. They're compact, travel-friendly, compatible with a huge range of superb lenses, deliver incredible image stabilization for easy handheld shooting, plus their computational photography modes are addictively fun.
OM System cameras hit the mark on many fronts. But what they have also hit, it seems, is a ceiling. Case in point – the new OM System OM-5 II. It comes two and a half to three years after the OM-5, but you wouldn't know it – there's so little to differentiate between the two cameras.
That's no bad thing per se – we still rate the OM-5 as a top travel camera. But where Panasonic is adding meaningful improvements to its Micro Four Thirds cameras, especially for video capture in the Lumix GH7 and Lumix G9 II, in the OM-5 II we get USB-C charging, some video color profiles, and a rejigged menu. That's just about it.
I can't say I'm surprised. Ever since OM Digital Solutions acquired Olympus, the most notable update we've seen in new cameras has been the OM System rebranding. I was still hoping for something bigger in the OM-5 II, though. If OM System was properly investing in the Micro Four Thirds system, there has been enough time since the acquisition for it to have started introducing new tech.
The OM-5 II is a highly rugged camera, ideal for travel and the outdoors (Image credit: OM System)
Instead, what we get is the same 20MP MFT sensor with 5-axis image stabilization, a modest 1.04m-dot touchscreen and a run-of-the-mill 2.36m-dot EVF, albeit packaged in a retro and rugged body. The OM-5 II still looks the part, and I'm a fan of the limited edition Sand Beige – it looks fab.
At least the legendary Olympus brand hasn't been killed off altogether, and continues to live on under a new name, because I still believe there's a place for such cameras.
Micro Four Thirds cameras, especially the inherited Olympus design ethos, hit a certain quality / portability sweet spot. I just wish OM System was giving fans a little more to be excited about going forward.
Can we ever expect meaningful upgrades again?
The glass-half-empty types have been preaching doom and gloom for Micro Four Thirds for some time now.
'Micro Four Thirds isn't dead', comes the response from fans who love what the camera system represents: superb build quality, a wide range of optics for specialist interests such as wildlife, birding and more, all in a lightweight system that weighs a fraction of full-frame.
But the fact remains, perhaps more specifically for OM System rather than Panasonic – we haven't seen any decent updates to its new cameras for years.
It's part of the Micro Four Thirds system, with many compatible lightweight lenses, such as the 12-45mm PRO, above, with which it is available as a kit (Image credit: OM System)
If OM System was indeed investing in future MFT cameras, I think we would have started to see it this year. Earlier in the year it launched the OM-3 – the first in a series with a slightly different retro styling. It was a delight to use, not because of big technological improvements, but because Micro Four Thirds remains a really fun and versatile system to shoot with.
I'm glad MFT is here to stay for another few years until the next update cycle lands. However, at that point I'm slightly concerned that we'll discover the system has sung its final song, to live on only through its fans.
Do you love Micro Four Thirds photography? What do you think of its future? Let me know in the comments below.
Artificial Intelligence (AI) touches virtually every industry, but it’s become a foundational element in today’s customer experience (CX) strategies. Contact centers, customer support platforms, and digital engagement tools rely on AI to enable faster response times and more personalized interactions, and to uncover valuable insights from massive amounts of customer data. Conversational AI, real-time voice analytics, and intelligent routing are just a few of the innovations transforming how organizations connect with their customers.
While there are plenty of benefits to AI, one thing remains true: AI will never be entirely free from bias. This is because AI is only as accurate as the data it was trained on - data that is ultimately created, curated, and maintained by humans, who unconsciously bring their own assumptions and blind spots into the AI systems they build.
This doesn’t mean AI can’t be trustworthy, responsible or fair. It simply means organizations need to implement strong guardrails and standards for monitoring and refining AI models to ensure fairness, inclusion, and neutrality. Mitigating bias is essential across industries, but is especially important in CX - not just for stronger performance and efficiency, but to build and maintain long-term customer trust and regulatory compliance.
Reducing AI bias improves agent performance and efficiency
When using AI to automate customer service tasks or assist human agents, even the smallest of biases in data can lead to low-quality experiences. For example, speech recognition tools might struggle to understand different accents and dialects, leading to frustrating customer experiences. Sentiment analysis might misread emotional cues, resulting in inaccurate responses or escalation to the wrong agent. Intelligent routing workflows can unintentionally prioritize certain customer profiles over others if historical training data skews unfairly.
These inconsistencies don’t just impact customers, but agents as well. Human agents may have to step in more often to correct AI mishaps or hallucinations, increasing their cognitive workload and decreasing employee morale, reducing the overall efficiency that AI-powered tools promise to deliver. Additionally, it erodes agents' trust in the technology, potentially leading to negative perceptions of how AI is used and how it impacts their work.
To address these challenges, organizations need to start by using diverse datasets to train AI models and ensure they can adapt to evolving inputs. From there, constantly auditing and refining data allows organizations to weed out biases before they creep into outputs, ensuring more fair, accurate results. Additionally, monitoring real-time customer feedback across multiple channels gives organizations a strong idea of where customer frustrations are occurring and allows them to take another look at the data feeding those interactions.
Ethical AI builds customer loyalty and supports compliance
Today’s consumers are more tech-savvy and privacy-conscious than ever. While recent data shows that more than half of consumers say AI alone doesn’t negatively impact their trust, how customer data is used with it can.
Organizations can address these concerns by adopting privacy-first principles to maintain trust and show commitment to responsible AI practices. Taking steps like encrypting sensitive data, restricting access through strong identity controls, and anonymizing customer data used in AI training models are great examples of a privacy-first approach. Transcripts, voice recordings, and behavior patterns must be handled with care - not just to build trust, but to comply with privacy laws like the GDPR, CCPA and the EU AI Act.
Transparency with consumers is equally as important, especially as it relates to how and what data is collected. Giving customers control over their data, ensuring transparent AI governance, clearly disclosing the use of AI chatbots or tools, and providing seamless escalation to human agents when needed, fosters a sense of trust among customers. Organizations that share how AI is used and decisions are made are likely to earn long-term customer loyalty.
What is easily forgotten is that there is an entire industry segment called Workforce Engagement Management, part of which is coaching agents and gathering customer feedback. The ethics of best practice are already in place. Whether it is a virtual agent or a human agent, the principle of improvement and compliance still applies. What AI can bring is that the time between a potential error and the review of that mistake can be almost instantaneous. We can also use AI to check AI, comparing the ethical answer with the actual answer. Just make your AI agents trainable as you would your human agents.
Responsible AI enables responsible innovation
AI-driven innovation seems to move at the speed of light, but innovation doesn’t have to come at the expense of responsibility. Unsurprisingly, the most forward-thinking organizations are those that embed ethical principles into the innovation process from day one. Achieving this means fostering open collaboration between developers, data scientists, business stakeholders, and IT teams to ensure that both innovation and security are balanced.
Establishing a clear AI governance framework or roadmap helps align stakeholders around a clear vision for ethical AI. When standards and processes are both clearly defined and consistently applied, organizations can scale innovation more responsibly and confidently.
Bias in AI is a complex issue that nearly every organization has or will face - but it’s not an unsolvable one. Feeding diverse datasets into AI training models and then consistently auditing the data helps to mitigate bias. While truly bias-free AI may be difficult to achieve, understanding the challenges and continuously working to limit bias leads to stronger customer loyalty, enhanced compliance, and more opportunities to innovate at scale.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
- The long-awaited Philips Hue AI assistant is now available in some countries
- It will be launched in the UK in July, and globally by the end of August
- The Philips Hue Play Wall Washer light is on sale today in the US and UK
If you want to have more fun with your smart lights, there's good news – the long-awaited Philips Hue AI assistant is finally here, letting you pick lighting scenes or create brand new ones with simple voice commands. Signify, the company behind Philips Hue, first teased the assistant back in January, and last week a few iPhone users found themselves with early access, but the launch is now official.
The assistant is available now for users in the Netherlands, Belgium, and Luxembourg whose app is set to English, and will be rolling out to Hue users in the UK in July. According to Signify, the global rollout is planned for the end of August.
With the AI assistant, you can either say or type what type of lighting effect you'd like to see, or what mood you want to set, and the app will either suggest something from its existing gallery of presets, or create something brand new if there's nothing that quite fits the bill.
Clean sweep
That's not all – Signify has also launched a new lamp that bathes your walls with a gradient of light. We got our first peek at the Philips Hue Play Wall Washer earlier this month when it was listed on Amazon ahead of its official release date.
Now it's available to buy, and despite measuring just 3.6 x 3 x 6.2 inches / 9.1 x 7.6 x 15.7cm (that's even smaller than the Philips Hue Play Light Bar), it promises to drench your whole wall with a smooth gradient of colored light that you can customize via a 3D drag-and-drop interface in the app, allowing you to set the direction and brightness of the light to suit your room.
The Wall Washer works with the Philips Hue Play HDMI Sync Box or Hue Sync desktop app for PC, allowing you to synchronize lighting effects with movies, TV shows, and games.
Alternatively, you can use it alone, or synced with other Philips Hue smart lights to help create a mood. It uses the same ColorCast light system as the Philips Hue Twilight lamp (one of the best smart lights we've ever tested), projecting a smooth gradient onto nearby surfaces. The video below gives you an idea of the overall effect.
The Wall Washer also looks appealing when switched off. Unlike the Hue Play Light Bar, which has a plastic case, the Wall Washer is finished in black or white matt aluminum.
The Wall Washer is available now in the US and UK, priced at $219.99 / £169.99 for a single light, or $384.99 / £299.99 for a pack of two. It's due to launch in Australia in September, but official pricing has yet to be announced.
The rise of remote and hybrid working, as well as the digitization and networking of a wide range of devices and systems, has made IT landscapes much more complex. Employees use so many devices – desktop computers, laptops, tablets, phones – that it’s all too easy to unwittingly give out information.
While organizations being exposed to cybercriminals is nothing new, over half of businesses in the US and UK have been targets of a financial scam powered by ‘deepfake’ technology, highlighting that deepfake scams are a serious concern.
In today’s digital landscape, CEOs and CFOs have large digital footprints. They have speeches, interviews and videos across many social media and business channels like YouTube and LinkedIn as well as corporate websites.
And while generative AI has transformed the way people work and create, the vast amount of online content now available is providing criminals with endless material to generate convincing deepfakes, which are being used by scammers worldwide.
In May 2024, British engineering group Arup was duped into transferring $25 million to cybercriminals. The employee attended a video call where everyone looked and sounded like familiar coworkers and bosses. But everyone in the call was a deepfake – AI-generated imitations of real people used to manipulate the employee into making the transfer.
This wasn’t an isolated incident, either. Advertising group WPP was also targeted by a deepfake scam, but thankfully it was unsuccessful. The group's CEO detailed the attempted fraud in an email to leadership, warning them to look out for calls claiming to be from top executives.
The number of deepfake attacks in the corporate world has surged in recent years. The use of rapidly advancing and now widely available technology is making it possible, and people in workplaces are susceptible to falling for it.
Why does this matter to you?
This deepfake technology presents a growing threat to businesses, particularly through financial fraud, and when scams like these happen, the damage isn’t just monetary – it can also come back on you. If you were the one who let the scammer in, accidentally shared sensitive data, or approved a fraudulent request, you could be held accountable, even if you didn’t realize what was happening.
AI-generated deepfakes exploit the element of trust, so while cybercriminals might be targeting your employer, you may be the entry point. Corporate deepfake fraud undermines business confidence and public trust.
Defending Your Employer (and Your Job)
Given how quickly these threats are evolving, organizations and their employees must develop adequate safeguards and policies to stay safe from exploitation.
Take Your Time and Confirm
Make sure you scrutinize and verify before responding to requests received digitally, especially if they include a request to disclose sensitive information or conduct financial transactions. If you’re encouraged to respond to any requests via phone or video call, call back using the channels you’re familiar with to confirm the task.
Watch for Signs of Unusual Behavior
If a co-worker’s voice sounds a bit off or their camera seems strangely blurry, it may be a sign of something unusual. Other signs that can indicate something is amiss include unnatural blinking or speech that is out of sync with their lips. AI and deepfakes can be deceiving, but they’re not perfect.
Create a culture of cyber awareness
Encourage conversations with your colleagues that allow you to take a step back, pause, and raise concerns whenever something about a request feels off. And while AI can be useful for a myriad of tasks, workplaces need detailed guidelines on its use.
Verify Attendees Before Letting Them In
If you've been invited to a meeting, double-check the invite to ensure you know who the sender is. If you’re hosting a meeting, it’s worth enabling waiting rooms or lobbies so you can approve who joins.
Don’t Hesitate to Question Unusual IT Assistance
If someone appears in a meeting claiming to be from IT and begins asking you to install software or allow them access, be cautious. Instead, verify with your IT department through your usual work channels about what the procedure is to make changes to your device.
- New Magnetic Switch to turn the earbuds on and off without the case
- Up to 65 hours of battery life with ANC off
- £125 / €149 / $149
If you're looking for earbuds to wear on a mission to space, Audio-Technica have just the things: their new ATH-CKS50TW2 earbuds have an extraordinary 65-hour battery life in total, which is just about long enough to fly to the moon. That makes my AirPods Pro 2 look pretty feeble.
The buds' own batteries deliver 25 hours of continuous playback, and the charging case adds another 40. Those figures are with active noise cancelling (ANC) off, but with ANC enabled the numbers are still astounding: 15 hours from the buds and a further 25 from the case.
Again, for comparison, the AirPods Pro 2 give you six hours from the buds alone. The Sony WF-1000XM5 give you eight hours. Audio-Technica's new earbuds absolutely crush any of the best earbuds in this measure.
(Image credit: Audio-Technica)
Audio-Technica ATH-CKS50TW2: key features and pricing
The icing on the long-lasting cake is that the new earbuds will cost only $149 / £125 (about AU$260).
One of the more unusual new features here is a magnetic switch, which powers the buds on or off by separating or joining their built-in magnets. The idea is to be able to turn off the buds without having to pop them in the case, although I'm not sure there are many people who've been cursing the tyranny of charging cases. It's nice to have the option, though.
The case is also compatible with Qi wireless chargers, which is somewhat rare among affordable earbuds – you don't get it from the Sony WF-C710N or the Nothing Ear (a) for example.
The earbuds feature hybrid ANC with hear-through and talk-through modes, Bluetooth LE Audio with the more advanced LC3 codec, custom-designed 9mm drivers with extended low-end response, and hybrid hard and soft silicone ear tips. They're waterproof and dustproof, rated IP55.
The specs and the battery specs in particular are impressive, but it's worth noting that Audio-Technica has had a few issues with earbud batteries in the past: its SQ1TW2 wireless earphones had a faulty batch that overheated and even produced smoke, and there has also been a recall of the charging case for the ATH-CK3TW earbuds – again due to overheating.
So while I'm glad to see game-changing battery life, if A-T is pushing the limits of battery tech here you might want to keep an eye out for any recalls, just in case. (No pun intended.)
- Salesforce research finds single-turn tasks see only 58% success, while multi-turn effectiveness drops to 35%
- Reasoning models like gemini-2.5-pro tend to outperform lighter models
- CRMArena-Pro has proven to be a challenging benchmark
Researchers from Salesforce AI Research have introduced a new benchmark – CRMArena-Pro – which uses synthetic enterprise data to assess LLM agent performance in different CRM scenarios.
It found LLM agents achieved around 58% success on tasks which can be completed in a single step, with tasks that require multiple interactions dropping in effectiveness to just 35% – barely more than one in three.
Although models like gemini-2.5-pro achieved over 83% success in workflow execution, the Salesforce researchers still highlighted some concerns with AI agents, suggesting they might not quite be up to scratch after all.
Are AI agents actually that good?
The paper, entitled 'Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions', explained that LLM agents displayed near-zero inherent confidentiality awareness, noting that their performance in handling sensitive information only improved with explicit prompting (which often came at the expense of task success).
They also criticized previous and existing benchmarks for failing to capture multi-turn interactions, address B2B scenarios or confidentiality, and reflect realistic data environments. CRMArena-Pro is built on synthetic data validated by CRM experts, covering B2B and B2C settings.
In terms of analysis results, reasoning models like gemini-2.5-pro and o1 outperformed lighter models most of the time – Salesforce's researchers concluded that models that seek more clarifications generally perform better, especially in multi-turn tasks.
For example, while the average performance across the nine models tested (three each from OpenAI, Google and Meta) resulted in a score of 35.1%, gemini-2.5-pro scored 54.5%.
"These findings suggest a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios, positioning CRMArena-Pro as a challenging testbed for guiding future advancements in developing more sophisticated, reliable, and confidentiality-aware LLM agents for professional use," the researchers concluded.
Looking ahead, Salesforce CEO Marc Benioff views AI agents as a high-margin opportunity, with major corporate clients including governments betting on AI agents for boosted efficiency and further cost savings.
You might also like
- These are the best AI tools and best AI writers
- Check out our roundup of the best productivity tools
- Many businesses are thinking twice on using AI bots
The Pitt is my very favorite show of 2025, and HBO has confirmed that production has now started on season 2 of the hit medical drama.
The HBO Max Original has been a huge success, gaining a 95% rating on Rotten Tomatoes from the critics and sitting comfortably in Max's top three most-watched streaming shows worldwide.
The press release hasn't shared any more information, but a few days ago it was revealed that the second season would stream in January 2026 and would bring a host of new faces into the ER – including Skinny Pete from Breaking Bad, aka the actor Charles Baker. Baker will be joined by Irene Choi, Laëtitia Hollard, and Lucas Iverson.
What to expect from The Pitt season 2
As Hello magazine reports, Baker will be playing an unhoused man called Troy; Irene Choi will be Joy, a medical student "with strong boundaries"; Laëtitia Hollard plays a recent nursing school graduate; and Lucas Iverson will play James, a fourth-year medical student.
Noah Wyle, the man with the saddest eyes on any streamer, will of course return as Dr Robby, and he previously told Deadline that the second season will take place over the Fourth of July weekend. Dr King, Dr Abbot, Dr Langdon and charge nurse Dana Evans are confirmed to be returning too.
I genuinely loved every episode of season 1 of one of the best Max shows, and cried quite a lot in every single one of them: it's a show with a huge heart and the cast are exceptional. In a time when there are many horrible things happening, it reminds me of Fred Rogers' famous line: "Look for the helpers. You will always find people who are helping."
The Pitt season 1 is streaming now on Max. Season 2 is scheduled for January 2026.
- The Google Pixel 10 series could allow you to use the telephoto camera for macro photos
- This would allow you to shoot from further away, and help avoid you blocking the light
- However, macro shots will also apparently be possible with the ultra-wide camera
Macro photography on phones often seems to be a bit of an afterthought, but with the Pixel 10 series, Google might be taking it more seriously.
This is according to Android Headlines, which claims that the Pixel 10 and its siblings – which are expected to include the Pixel 10 Pro, the Pixel 10 Pro XL, and the Pixel 10 Pro Fold – will have a tele-macro mode.
In other words, if this rumor is right, then these phones will be able to use their telephoto cameras for macro shots. That’s in contrast to the Google Pixel 9 series and most other high-end handsets, which tend to use their ultra-wide cameras for macro photography.
The advantage of tele-macro is that you can take macro photos from further away. That can be more convenient, especially when photographing something that might not appreciate you looming over it, like an insect. And because you can be further away from the subject, you also won’t be blocking the light as much.
But you may still want to get closer sometimes, and the Pixel 10 series should have you covered there too, because according to this leak, it will also offer macro capabilities with its ultra-wide camera.
So the Google Pixel 10 series might be doubling down on macro modes, offering two options where most phones have just one at most.
A macro focus
The Google Pixel 9 Pro (Image credit: Blue Pixl Media)
That – and especially the tele-macro mode – is great news, as it should make it much easier to take high-quality macro photos, and as a photography fan that could tempt me to upgrade.
Phone companies understandably tend to focus on their main and ultra-wide snappers, with telephotos often coming in third place and macro being even less of a consideration.
But if you like taking photos of a wide variety of things from a range of perspectives, then it’s important to have a wide range of focal lengths that you can shoot at.
It sounds like the Pixel 10 Pro and Pixel 10 Pro XL in particular could offer this, with wide, ultra-wide, telephoto (likely at 5x optical zoom), and two different macro modes potentially set to be offered – not to mention optical-quality 2x zoom, which is achieved on the Pixel 9 series through cropping the main sensor.
That could make for one of the most comprehensive camera setups you’ll find on a smartphone, and might even tempt me back from the Apple side.
- New QLED panels wouldn't need expensive barrier film layers
- Samsung doesn't yet know when the tech will be commercialized
- It's only for QLED; QD-OLEDs are made differently
One of the most expensive parts of a QLED display panel is about to get a whole lot cheaper, and that should mean even more affordable QLED televisions.
The component in question is the quantum dot sheet, which sits on top of the LCD panel to improve color reproduction – it's the actual quantum dot part of QLED TVs.
A QLED display currently has barrier film on either side of it to protect the quantum dot layer from oxygen and water. According to trade site The Elec, those films account for 40% of the cost of quantum dot sheets – and Samsung and its supplier Hansol Chemical have found a way to get rid of them.
What Samsung's tech means for QLED – and why it won't help QD-OLED TVs
At the moment, a quantum dot sheet has five layers. With the new design there are three.
Samsung and Hansol's new quantum dot sheet design does away with the barrier films altogether without exposing the quantum dots to potential problems.
That should mean a huge drop in the price of QLED panels, but not immediately: Samsung doesn't yet know when the technology will be commercialized.
And even then, it doesn't necessarily mean that QLED TVs will definitely become cheaper – the savings might just be used to absorb rising costs and keep the TVs the same price, or the money from the saving might be invested in other areas of the TV, such as improving the backlight or speaker system.
As The Elec points out, while the new design is good news for QLED TVs, it's not going to make any difference to QD-OLED displays.
That's because QD-OLED panels use a different design. Whereas QLED panels put a quantum dot layer atop an LCD light source, QD-OLED TVs use a blue OLED light source with red and green quantum dot conversion layers added via inkjet printing rather than as a separately manufactured sheet.
That's a bit of a shame, because QLED TVs are already getting pretty low-priced, but QD-OLED TVs such as the Samsung S95F or Sony Bravia 8 II very much are not.
I’d have placed a decent bet on Apple making a big deal about Apple Intelligence at WWDC this year, and from that I’d have predicted that the iPhone 17 would be Cupertino’s first proper AI phone.
The company somewhat fluffed the launch of Apple Intelligence, with AI-powered features for the iPhone 16 family taking a long time to roll out after its launch, and a smarter ChatGPT-centric Siri still absent. With that in mind, I’d have thought Apple would have gone harder on AI at its yearly developers' conference.
I was wrong.
Apple Intelligence was mentioned, but more as a smart virtual icing to a cake consisting mostly of the Liquid Glass design material and feature updates across Apple’s software ecosystem.
So with that in mind it’s arguably hard to draw any big insights into what’ll be in store at the next Apple event, which is likely to be a September one centred around new iPhones. But I think I can have a good stab at what the next iPhone will be like.
It’ll be boring.
Send me now new iOS
(Image credit: Apple)
My theory here is that the iPhone 17, if Apple does go with that nomenclature, will be a vehicle for iOS 26, with hardware upgrades taking a back seat.
While a lot of the core iOS experience will broadly be the same as iOS 18, the design changes could take a little getting used to; plus there are a host of new features in the native apps that could offer users new ways to do things.
So I suspect Apple won’t do much on the hardware side to get in the way of that experience; there’s not likely to be any big changes to the core iPhone design, camera array or materials.
Depressingly, I even expect the standard iPhone 17 will still have a 60Hz display, as it seems like Apple is one of the few companies who can get away with this and still charge a premium price.
There are some rumors that tout changes such as the use of aluminum for the frame of the iPhone 17 Pro, but I don’t buy them; the rumored iPhone 17 Air could use the lighter material, though I don’t see that phone shaking up the core design of iPhones.
Rather than champion many hardware upgrades, which in recent years have become iterative to the point of being dull, I think Apple will position the iPhone 17 range as a new chapter in getting the most out of a fresh iOS.
And I think a lot of people will buy into it.
The iPhone’s new clothes
(Image credit: Apple)
Much like changing up an outfit with the addition of a new shirt or coat, or swapping the strap of a watch, redesigned software can make tried and tested hardware seem fresh and new, even if most of those changes are merely aesthetic.
But I think new features like an overhauled Phone app, smart tools for Maps, Wallet and Music, plus a new dedicated Games app-meets-hub will make next-generation iPhones feel a lot newer than those that have simply had camera sensor or button upgrades over their predecessors.
I’m particularly intrigued to see how the Games app plays out, as Apple has quietly been strengthening the gaming experience on iPhone, with support for titles such as Death Stranding and a suite of original games in the growing Apple Arcade service – neither of which I feel Android has a strong answer to.
Add in a new chip, which is all but guaranteed for the next-gen iPhones, and you could be looking at some impressive stealthy gaming phones.
With that in mind, I can see the iPhone 17 offering a family of phones for people who’ve resisted upgrading to a new iPhone for a couple of years. That’s often the case, of course, but I feel iOS 26 will be more of an upgrade catalyst even though models dating back to the iPhone 11 can run this upcoming iteration of Apple’s mobile operating system.
I'd place a very solid bet that Apple will market the iPhone 17 range as the ideal vehicle for iOS 26, and I'm forecasting that'll suck in a lot of people; let me know in the comments if you don't agree.
All that being said, I’m totally open to Apple surprising me with an iPhone that’s being given a serious reworking or just has a good clutch of hardware upgrades. I don’t personally think this is the year for that – but I don’t think that matters either.
- James Gunn has provided some big updates on the next two Batman movies
- The DC Studios co-CEO may have found a "way in" for The Brave and the Bold's story
- Gunn also reconfirmed that The Batman Part II hasn't been canceled
James Gunn has provided some exciting – and slightly worrisome – updates on the next two Batman movies.
Speaking to Rolling Stone, the DC Studios co-chief said he might have found a "way in" to finally get The Brave and The Bold's script up and running. That film, which is part of his and Peter Safran's rebooted DC Universe (DCU), is one of the company's biggest creative priorities.
It's not the only Batman flick in development. Matt Reeves' long-gestating The Batman Part II, which is currently slated to arrive in October 2027, is also moving forward, Gunn reconfirmed. However, other comments he made to Rolling Stone about this DCU-adjacent movie didn't provide clarity on a persistent question DC fans have about this Robert Pattinson-led franchise.
But let's start with what Gunn had to say about The Brave and the Bold. Announced as part of the initial DCU Chapter One line-up in January 2023, this movie, which is inspired by Grant Morrison's graphic novel namesake, has been a tough nut for Gunn and company to crack. Now, though, it sounds like Gunn and the film's yet-to-be-announced writer have made a breakthrough on the storytelling front.
The Brave and the Bold has been in development for over two years at this point (Image credit: DC Studios)
"Batman has to have a reason for existing, right?" Gunn said. "Batman can’t just be 'oh, we’re making a Batman movie because Batman’s the biggest character in all of Warner Bros.,' which he is... so, we’re dealing with that.
"I think I have a way in, by the way," Gunn added. "I think I really know what it’s – I just am dealing with the writer to make sure that we can make it a reality."
It'll be a while before Batman makes his DCU debut – after all, Gunn and the unnamed scribe aside, the only talent attached to the project is Andy Muschietti (The Flash, It), as the DCU Chapter One film's director. Nevertheless, I'm glad to hear that there's been some movement on one of the most exciting DCU movies and TV shows on the DC Studios docket.
The two Batman problem
It's been three years since The Batman was released in theaters (Image credit: Jonathan Olley/Warner Bros.)
Okay, but what about The Batman's film sequel? Here's what Gunn said about the follow-up to The Batman Epic Crime Saga's first entry: "[The] Batman Part II is not canceled. That’s the other thing I hear all the time – that The Batman Part II is canceled. It’s not. We don’t have a script. Matt [Reeves] is slow. Let him take his time. Let him do what he’s doing. God, people are mean. Let him do his thing, man."
That's great news! Well, until you start to consider the wider implications of having two different Batman projects on the go and comments Gunn has made about distinguishing one from the other.
Where the latter is concerned, Gunn has stressed that the DCU's Caped Crusader has to be distinct enough to separate him from the gritty, grounded, and almost realistic universe Reeves has co-created. However, that doesn't mean we should expect the DCU's iteration to channel the flamboyance of the '90s era of Batman movies or the slapstick nature of the Adam West-starring TV show from the '60s.
"There’s a need that he’s not exactly the same as Matt’s Batman," Gunn opined. "But he’s not a campy Batman. I’m not interested in that. I’m not interested in a funny, campy Batman, really."
And therein lies the first problem: how will Gunn and company differentiate their Dark Knight from the Reeves-Verse's one? You could incorporate the fantastical elements from Batman literature, but there's a fine line to be drawn between the extraordinary and the purposeful realism that the best Batman movies, shows, and comic books contain.
The easiest solution – according to some fans, anyway – would be to merge the Reeves-Verse with the DCU and install Pattinson's Bruce Wayne as the latter's billionaire vigilante. It's a topic of conversation that's dominated online and in-person discussions for months, so much so, in fact, that it came up during the last big DCU update Gunn and Safran gave in February.
While Gunn and Reeves have discussed such a possibility, they have always played down suggestions that it'll ever happen. Gunn did so again during his chat with Rolling Stone – "It’s not likely at all", Gunn said. However, that quote, coupled with another – "I would never say zero, because you just never know" – haven't exactly closed the door on Pattinson becoming the DCU's Caped Crusader.
Do I think that'll happen? No. If it was going to, it would've happened by now. Each time that Gunn and/or Reeves leave the door ajar on it, though, it only reignites the perpetual debate about whether it should be done or not. So, here I am, Messrs Gunn, Safran, and Reeves: clarify this once and for all by ruling out a merging of the DCU and Reeves-Verse. Do so and we (including you three!) can all get on with our lives without having to read any more about this already tiresome discussion.
- Microsoft report warns of "the infinite workday" creeping in
- Workers are coming online earlier and finishing later than ever before
- They're also being interrupted by an email or chat message every few minutes
New research from Microsoft has revealed many of us are struggling to maintain a healthy work-life balance - and that an overload of tasks could be what's stopping us from achieving any kind of productivity.
The company's June 2025 Work Trend Index Special Report has warned of "the infinite workday" which it says is a "significant shift" in the hours we work, largely thanks to the influence of hybrid working locations - and, of course, AI.
The report, based on "trillions" of productivity signals such as emails, chat messages and meetings gathered across Microsoft 365, warns the modern workday no longer has a clear beginning or end - and has urged for greater AI tool adoption to help lessen this burden on everyday workers.
Working... 6am til 8pm? What a way to make a living
"Our research, based on trillions of globally aggregated and anonymized Microsoft 365 productivity signals, reveals a challenging new roadblock: a seemingly infinite workday," Microsoft noted.
"AI offers a way out of the mire, especially if paired with a reimagined rhythm of work. Otherwise, we risk using AI to accelerate a broken system."
Microsoft said it found a major increase in users coming online by 6am, when 40% of users are apparently scanning through their inbox to prioritize tasks for the day.
By 8am, Microsoft Teams chat has overtaken email, with half of all meetings then taking place between 9–11am and 1–3pm - notably, the time when most of us are the most focused and productive throughout the day.
Tuesdays were found to be the busiest day for meetings, accounting for 23% of the total - whereas Fridays host just 16% of all meetings. Troublingly, Microsoft found meetings being held after 8pm are up 16% year over year, showing late finishes are also becoming worryingly normal.
(Image credit: Pexels.com)
Weekend email usage also saw a major increase, with nearly 20% of employees checking their email before noon on Saturday and Sunday - and over 5% are back working on emails on Sunday evenings.
The report found the average worker receives 117 emails and 153 Teams messages daily, meaning they are disrupted by an email, chat, or meeting every 2 minutes. Most employees were now also found to send or receive over 50 chats outside of their core business hours, risking their winding-down time.
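Those headline numbers imply a startling cadence; a quick back-of-the-envelope check (the 8-hour core day is my assumption, not a figure from the report):

```python
# Figures quoted in Microsoft's Work Trend Index report
emails_per_day = 117
teams_messages_per_day = 153
interruptions = emails_per_day + teams_messages_per_day  # 270 per day

# Assumption: an 8-hour core working day (not stated in the report)
core_minutes = 8 * 60
print(interruptions)                           # 270
print(round(core_minutes / interruptions, 1))  # ~1.8 minutes between interruptions
```

That works out to roughly one interruption every 1.8 minutes across an assumed 8-hour day, consistent with the report's "every 2 minutes" figure, which presumably averages over a slightly longer window.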
"This points to a larger truth: the modern workday for many has no clear start or finish," Microsoft concluded. "As business demands grow more complex and expectations continue to rise, time once reserved for focus or recovery may now be spent catching up, prepping, and chasing clarity."
"The signals are clear: it’s time to break the cycle. The future of work won’t be defined by how much drudgery we automate, but by what we choose to fundamentally reimagine. AI can give us the leverage to redesign the rhythm of work, refocus our teams on new and differentiating work, and fix what has become a seemingly infinite workday. The question isn’t whether work will change. It’s whether we will."
You might also like
- We've rounded up the best online collaboration tools around
- These are the best time management tools
- And here are the best task management tools on offer today
- OpenAI is moving its government AI models under a new umbrella
- OpenAI for Government will provide AI at the federal, state, and local level
- The AI developer has already signed a pioneering deal with the DoD
OpenAI is consolidating its US government AI tools, such as ChatGPT Gov, under a single umbrella - OpenAI for Government.
OpenAI, alongside the likes of Anthropic and Meta, has partnered multiple times with the US government to develop new AI tools specialised for government workloads.
The new initiative will provide federal, state, and local governments with access to OpenAI’s most secure and compliant models, models specialized for national security, insight into upcoming models and tools, and support.
More AI models for government
Kicking off the new project, OpenAI has signed a $200 million deal with the U.S. Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO).
The project will focus on everything from improving administrative work, such as service member healthcare access, to boosting cyber defenses and data acquisition.
“Across these efforts, we’re aiming to improve both the day-to-day experience of public service and to help government employees feel more empowered, more efficient, and more supported in their critical missions,” OpenAI said.
“We are already seeing how OpenAI can help public servants at the state level spend less time on repetitive tasks and more time on high-impact work,” the announcement continued, with OpenAI referencing the effectiveness of ChatGPT use for the Commonwealth of Pennsylvania, which saved employees around 105 minutes per day.
OpenAI is also deploying AI models at Los Alamos, Lawrence Livermore, and Sandia National Labs to improve scientific research, innovation, and national security.
“We are just getting started, and we look forward to helping U.S. government leaders harness AI to better serve the public. We are committed to working in close partnership with agencies, advancing their missions with powerful tools that are safe and secure,” the company added.
You might also like
- These are the best AI tools and best AI writers
- Take a look at our pick for the best business password managers
- More US government departments ban controversial AI model DeepSeek