News
Well, for the iPad faithful, Apple’s WWDC 2025 keynote was the day that faith was rewarded. I, like countless others, have been waiting for a major upgrade for iPadOS, and the Cupertino-based tech giant delivered.
Yes, iPadOS 26 brings with it Liquid Glass, but more importantly, all iPads that support it get actual windowed multitasking, the ability to drop folders in the dock, a menu bar up top, one of the most addictive gestures I’ve used, and the ability for tasks to run in the background.
Easily, it was the standout moment from the keynote, and I got to go hands-on briefly with iPadOS 26 running on a 13-inch iPad Pro with M4 attached to a Magic Keyboard with an Apple Pencil Pro.
Now, let’s address the elephant in the room – the landmark features I listed above make the iPad seem like a Mac, but don’t call this a Mac. Yes, Apple did borrow some features from the Mac rather than reinventing the entire concept – say, the close, minimize, and expand buttons in the top left, or the menu bar – but it’s all well thought out for the iPad, and takes advantage of one of the best parts of an iPad.
Multitouch.
With the iPad’s approach, it’s sort of a choose-your-own-adventure, while on the Mac it’s keyboard and trackpad. I used it and saw a demo of fingers controlling the windows, as well as the Apple Pencil moving items around and even driving the cursor. It’s all about control: however you want to interact with your iPad, you can, and get more out of it. Let’s talk about why.
(Image credit: Jacob Krol/Future)
Let’s start with the most exciting part: from any app, you can pull from the bottom corner – marked with an effect, a slightly darker edge in the bottom right – to easily resize the window by dragging it back and forth. From full screen, you just pull it towards the other side to make it smaller, by width or height, and then you can grab the top of the window to place it where you like.
Using the dock, you can then drag and drop another app up or do a swipe up for the peek mode to access your home screen and place any app in this layout. It’s really smooth and lets you finally have your ultimate iPad layout. Maybe that’s a Safari window open to a Google Meet in the corner, the reminders app for your checklist, and your email as you start your day.
You can also split the screen with an image and then open an app like ProCreate, allowing you to see your starting point while drawing something awesome. It really lets you tailor the experience to how you see fit.
Now, this new windowing setup does replace Split View and Slide Over, and while that didn’t excite me when I first heard about it, I do like the various preset options you can pick via a long press in the top-left corner of any window, as well as the new gesture.
(Image credit: Jacob Krol/Future)
With a flick to the left or right, you can effortlessly split your screen and then adjust it further by moving the slider in the middle as needed. This feels like an easier way to achieve a similar result to Split View, and is quite frankly fun to do.
You can also tap the top of the iPad’s screen to access a menu bar for things like more precise settings or easy exports – it’s the most similar part of the experience to the Mac. Still, considering it’s hidden until you need it, I think iPad power users will likely get the most out of this.
It feels really natural in this implementation, and not a cookie-cutter copy and paste from the Mac, given the updated elements and the ability to control with both touch and a trackpad.
(Image credit: Jacob Krol/Future)
Complementing the new multitasking approach is a significantly improved Files app and a dock that can now display a live folder. The app will feel familiar, but a new list view with customizable modifiers – the columns you see – will really let you tailor it to your specific needs.
For instance, I could see myself sorting by last modified and then pulling the folder containing images to the dock to edit in an app like Pixelmator, export, and then upload into a content management system for a story build. Changes you make within folders or to these layouts sync across devices and update in iCloud as well. If you’re a fan of colored folders or keen to name them with emojis, you get that too.
Those larger exports – maybe a batch photo edit or a video export from Final Cut Pro – can now run in the background. I got a demo of this: progress either lives at the top of your screen with a progress bar or in a little icon near the time, where you can track multiple exports or tasks.
(Image credit: Jacob Krol/Future)
The really exciting part, even from these demos and a little usage, is that this isn’t limited to the iPad Pro with M4, the iPad Air with M3, or another step-up model. The new multitasking experience is the result of a new ‘Window Prioritization Model’ that works in conjunction with the performance and resource manager, and it has been entirely re-architected to run on any iPad that supports iPadOS 26.
That means the 9th Gen iPad – one of the best values Apple’s ever released – will get this new multitasking experience, as will the 10th and 11th Gen, the iPad Air, the iPad mini, and the iPad Pro. You might not be able to open a dozen windows at once on the older models, but it will let you push the chip inside further.
For now, iPadOS 26 is in a developer beta, which means it's not for your main device as bugs and issues are to be expected, but a public beta will arrive in July, and this will be released for everyone with an eligible device in the Fall. I’m super excited to spend time with it and eventually give it a full review treatment, but for now, it’s the upgrade we’ve been waiting for that feels distinctly like an iPad.
Sure, the Mac has long been the ultimate productivity machine, but it lacks touch and is truly designed for keyboard and trackpad. The iPad is multitouch-first, and Apple really put in the time to craft an experience that feels purpose-made for multiple inputs, with touch leading.
Just fair warning, I’ll be using many, many windows.
- Default passwords and outdated firmware are turning your home camera into a public livestream, report warns
- Thousands of exposed webcams are offering a front-row seat into private and corporate life
- A simple web browser is all it takes to peek into 40,000 unsecured camera feeds
Thousands of internet-connected webcams, intended to enhance safety and convenience, are now unintentionally offering a window into private lives and secure environments.
Research by Bitsight claims over 40,000 webcams around the world are publicly accessible online, often without their owners’ knowledge.
These include security cameras, baby monitors, office surveillance systems, and even devices inside hospitals and factories.
A growing digital threat, not a hypothetical one
The investigation highlights just how easily accessible these cameras are.
“No passwords. No protections. Just out there,” wrote João Cruz, Principal Security Research Scientist at Bitsight TRACE, noting it requires neither elite hacking skills nor expensive software. In many cases, all it takes is a web browser and a valid IP address.
“We first raised the alarm in 2023, and based on this latest study, the situation hasn’t gotten any better.”
Exposed footage ranges from innocent scenes, like bird feeders, to far more sensitive views, such as home entry points, live feeds from living rooms, whiteboards in office spaces, and even operations inside data centers.
Worryingly, disturbing conversations have emerged on dark web forums, where some users share methods for locating exposed cameras, or even sell access to live feeds.
“This isn’t hypothetical: this is happening right now,” Cruz emphasized.
The United States leads with roughly 14,000 exposed cameras, followed by Japan, Austria, Czechia, and South Korea. These aren’t isolated incidents but part of a broader failure in how internet-connected cameras are deployed and managed.
Bitsight’s team scanned for both HTTP- and RTSP-based cameras, and the results suggest these figures may only scratch the surface.
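To illustrate the kind of check such a scan performs – this is a minimal sketch for auditing cameras you own, with helper names of my own invention, not Bitsight’s tooling – an RTSP exposure test boils down to sending a DESCRIBE request and reading the status code the camera returns:

```python
# Sketch of an RTSP exposure check: build a DESCRIBE request (per RFC 2326)
# and classify the camera's reply. A 200 means the stream is described
# without credentials; a 401/403 means authentication is enforced.
# For auditing your own devices only.

def build_rtsp_describe(host: str, port: int = 554, path: str = "/") -> bytes:
    """Build a minimal RTSP DESCRIBE request for the given camera URL."""
    url = f"rtsp://{host}:{port}{path}"
    return (
        f"DESCRIBE {url} RTSP/1.0\r\n"
        "CSeq: 1\r\n"
        "Accept: application/sdp\r\n"
        "\r\n"
    ).encode()

def classify_rtsp_response(status_line: str) -> str:
    """Map the RTSP status line to an exposure verdict."""
    parts = status_line.split()
    if len(parts) < 2 or not parts[0].startswith("RTSP/"):
        return "not-rtsp"
    if parts[1] == "200":
        return "open"           # stream described with no credentials
    if parts[1] in ("401", "403"):
        return "auth-required"  # protected, as it should be
    return "other"
```

Sending the request over a plain TCP socket to port 554 and feeding the first response line to `classify_rtsp_response` is all the "elite hacking" involved, which is exactly the point the researchers are making.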
Many of the exposed devices result from basic setup errors: default credentials, open internet access, and outdated firmware that leave systems vulnerable.
While vendors and manufacturers must improve device security, users also share responsibility.
Choosing products vetted for cybersecurity can help, but users should also pair their camera setups with tools like leading antivirus software and top-rated parental control solutions, which often include network monitoring to flag unusual access or unprotected devices.
Ultimately, private users should always check remote accessibility settings, change default passwords, update firmware regularly, and, especially for enterprises, enforce firewall protections and require VPN access.
- AMD is aggressively acquiring talent to bridge the Instinct and Blackwell GPU performance gap
- Brium’s compiler expertise could help AMD accelerate inference without hardware-specific dependencies
- Untether AI's team joins AMD, but existing customers are left without product support
AMD’s recent moves in the AI sector have centered around strategic acquisitions aimed at strengthening its position in a market largely dominated by Nvidia.
These include the acquisitions of Brium, Silo AI, Nod.ai, and the engineering team from Untether AI, each targeted at bolstering AMD’s AI software, inference optimization, and chip design capabilities.
The goal is clear: narrow the performance and ecosystem gap between AMD’s Instinct GPUs and Nvidia’s Blackwell line.
Calculated acquisitions amid a competitive ecosystem
AMD described the acquisition of Brium as a key step toward enhancing its AI software capabilities.
“Brium brings advanced software capabilities that strengthen our ability to deliver highly optimized AI solutions across the entire stack,” the company said.
Brium's strengths lie in compiler technology and end-to-end AI inference optimization, areas that could be crucial for achieving better out-of-the-box performance and making AMD’s software stack less reliant on specific hardware configurations.
While this makes for a strong technical case, it also suggests that AMD is still playing catch-up in the AI software ecosystem, rather than leading it.
Brium’s integration will affect several ongoing projects, including OpenAI Triton and SHARK/IREE, which are seen as instrumental in boosting AMD’s inference and training capabilities.
The use of precision formats such as MX FP4 and FP6 points to a strategy of squeezing higher performance from existing hardware. But the industry has already seen similar moves from Nvidia, which continues to lead in both raw processing power and software maturity.
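The core idea behind block-scaled formats like MX FP4 is that a group of values shares one scale factor while each element is stored in just 4 bits. As a rough illustration – a deliberately simplified sketch, not AMD’s implementation or the exact OCP MX specification – here is block quantization to the FP4 (E2M1) value set:

```python
# Simplified block-scaled 4-bit quantization in the spirit of MX FP4:
# one shared scale per block, each element snapped to the nearest
# representable E2M1 magnitude. Illustrative only.

FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # E2M1 value set

def quantize_block(values):
    """Quantize one block to (scale, fp4 codes); scale maps max |v| to 6.0."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 6.0
    quantized = []
    for v in values:
        # pick the representable magnitude closest to |v| / scale
        mag = min(FP4_MAGNITUDES, key=lambda m: abs(abs(v) / scale - m))
        quantized.append(mag if v >= 0 else -mag)
    return scale, quantized

def dequantize_block(scale, quantized):
    """Recover approximate values by re-applying the shared scale."""
    return [scale * q for q in quantized]
```

The appeal for inference is that weights shrink to roughly a quarter of FP16 size while the shared scale preserves dynamic range within each block – the "squeeze more from existing hardware" strategy the article describes.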
Another notable move was AMD’s absorption of the entire engineering team from Untether AI, a Canadian startup known for its energy-efficient inference processors. AMD didn’t acquire the company, only the talent, leaving Untether’s products unsupported.
“AMD has entered into a strategic agreement to acquire a talented team of AI hardware and software engineers from Untether AI,” the company confirmed, highlighting a focus on compiler and kernel development along with SoC design.
This signals a strong push into inference-specific technologies, which are becoming increasingly critical as training-based GPU revenue faces potential decline.
“AMD’s acquisition of Untether’s engineering group is proof that the GPU vendors know model training is over, and that a decline in GPU revenue is around the corner,” said Justin Kinsey, president of SBT Industries.
While that may overstate the situation, it reflects a growing sentiment in the industry: energy efficiency and inference performance are the next frontiers, not simply building the fastest systems for training large models.
Despite AMD’s optimism and commitment to “an open, scalable AI software platform,” questions remain about its ability to match Nvidia’s tight integration between hardware and CUDA-based software.
Ultimately, while AMD is taking calculated steps to bridge the gap, Nvidia still holds a considerable lead in both hardware efficiency and software ecosystem.
These acquisitions may bring AMD closer, but for now, Nvidia’s Blackwell remains the benchmark for what is widely regarded as the best GPU for AI workloads.
- Forescout report finds many vulnerable solar devices run outdated firmware with known exploits active in the wild
- Europe holds 76% of all exposed solar power devices, with Germany and Greece particularly at risk
- SolarView Compact exposure jumped 350% in two years, and it's already linked to cybercrime
The rapid growth of solar energy adoption worldwide has sparked renewed concerns about cybersecurity vulnerabilities within solar infrastructure.
A study by Forescout’s Vedere Labs found nearly 35,000 solar power devices, including inverters, data loggers, and gateways, are exposed to the internet, making them susceptible to exploitation.
These findings follow a previous report by Forescout which identified 46 vulnerabilities in solar power systems.
High exposure and geopolitical implications
What’s particularly alarming now is that many of these devices remain unpatched, even as cyber threats grow more sophisticated.
Ironically, vendors with the highest number of exposed devices aren’t necessarily those with the largest global installations, suggesting issues such as poor default security configurations, insufficient user guidance, or unsafe manual settings.
Forescout found Europe accounts for a staggering 76% of all exposed devices, with Germany and Greece most affected.
While an internet-exposed solar system isn’t automatically vulnerable, it becomes a soft target for cybercriminals. For example, the SolarView Compact device experienced a 350% increase in online exposure over two years and was implicated in a 2024 cyber incident involving bank account theft in Japan.
Concerns around solar infrastructure deepened when Reuters reported rogue communication modules in Chinese-manufactured inverters.
Although not tied to a specific attack, the discovery prompted several governments to reevaluate the security of their energy systems.
According to Forescout, insecure configurations are common, and many devices still run outdated firmware versions. Some are known to have vulnerabilities currently under active exploitation.
Devices like the discontinued SMA Sunny WebBox still account for a significant share of exposed systems.
This is not just a matter of faulty products; it reflects a system-wide risk. While individually limited in impact, these internet-exposed devices may serve as entry points into critical infrastructure.
To mitigate risk, organizations should retire devices that cannot be patched and avoid exposing management interfaces to the internet.
For remote access, secure solutions such as VPNs, along with adherence to CISA and NIST guidelines, are essential.
Additionally, a layered approach using top-rated antivirus tools, endpoint protection solutions, and especially Zero Trust Network Access (ZTNA) architecture may be necessary to keep critical systems insulated from intrusion.
- Gigabyte's AI TOP 500 TRX50 is a desktop built for AI developers working on massive LLMs
- Older Threadripper CPU included but motherboard supports newer upgrades
- GeForce RTX 5090 GPU paired with software for model tuning
Gigabyte has quietly launched the AI TOP 500 TRX50, a high-end system aimed at developers working on AI models and advanced multimodal tasks.
The machine is powered by AMD’s 24-core Ryzen Threadripper PRO 7965WX processor and cooled by an AORUS 360 AIO liquid cooler. This combination allows it to outperform Gigabyte’s previously announced Arrow Lake-S-based AI TOP 100 Z890.
Interestingly, as TechPowerUp reports, the AI TOP 500 still relies on AMD’s current-generation "Zen 4/Storm Peak" architecture, even with the Threadripper PRO 9000 series expected to launch in the near future.
Ports galore
Fortunately, Gigabyte’s TRX50 motherboard supports future upgrades, which could appeal to those planning longer-term builds. VideoCardz suggests that a version using the 32-core 7975WX might be released soon.
Like the AI TOP 100, the 500-series prebuild also includes Gigabyte’s own GeForce RTX 5090 Windforce graphics card.
Internally, the desktop (which bears a striking resemblance to the Cooler Master HAF 700) supports up to 768GB of DDR5 R-DIMM memory across eight slots.
Storage comes in the form of a 1TB AI TOP 100E cache SSD, built to endure heavy write cycles, and a 2TB AORUS Gen 4 SSD for primary use. Power is provided by a 1600W AI TOP Ultra Durable PSU rated at 80 Plus Platinum and compatible with ATX 3.1.
The AI TOP 500 offers a wide range of connectivity options. Up front, users get four USB 3.0 ports, one USB 3.2 Gen 2 Type-C port, audio in and out jacks, and both power and reset buttons. On the rear are six USB 3.2 Gen 2 ports, two USB4 40Gbps Type-C ports, dual RJ-45 LAN ports, a DisplayPort input, and two additional audio jacks.
The workstation also supports multi-node expansion through Thunderbolt 5 and Dual 10G LAN, making it a practical option for research labs or development teams.
The system is tightly integrated with Gigabyte’s AI TOP Utility software platform, which helps users manage AI models, build datasets, and monitor hardware performance in real time.
With support for up to 405-billion-parameter models, Gigabyte is targeting users who need serious local compute without relying on cloud resources. And, apparently, gamers too, if its tagline of “Premium gaming & AI empowered desktop” is to be believed.
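Some back-of-the-envelope arithmetic shows why the 768GB RAM ceiling matters for a model that size – weights alone, ignoring the activation and KV-cache headroom a real runtime also needs:

```python
# Rough weight-storage arithmetic for a 405B-parameter model.
# Weights alone: bits per parameter / 8 gives bytes per parameter.

def weight_gigabytes(params: float, bits_per_param: int) -> float:
    """Approximate weight storage in decimal gigabytes."""
    return params * bits_per_param / 8 / 1e9

fp16 = weight_gigabytes(405e9, 16)  # ~810 GB: exceeds the 768GB RAM ceiling
int4 = weight_gigabytes(405e9, 4)   # ~202.5 GB: fits with room to spare
```

In other words, running a 405B model on this machine almost certainly implies aggressive quantization – at FP16 the weights alone wouldn’t fit in memory.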
- OpenAI has upgraded ChatGPT's Projects feature to remember past chats, tone preferences, and files
- Projects now offers deep research, voice mode, mobile file uploads, and more
- OpenAI wants Projects to function more like smart workspaces than one-off chats
ChatGPT's Projects feature has been a useful way to organize conversations with the AI chatbot since it debuted, but it has had its limitations. A major set of upgrades released by OpenAI this week has transformed Projects from a simple file folder into a highly focused version of ChatGPT as a whole.
The Projects feature debuted as a way to organize related chats and files into one digital shoebox. But now, that organization means ChatGPT will remember that those chats and files are related. So, if you start a chat within a Project, the AI will remember things from other chats in that project, referencing your past messages within the same workspace.
If you start a new Project, you can upload your notes, chat about the topic with ChatGPT, ask for online comparisons, and then come back three days later to continue the conversation without rehashing everything or having the AI cite unrelated discussions. ChatGPT won’t just remember the topics either. It will remember your formatting preferences, as well as your tone of voice.
And those can be a lot more complex conversations now that Projects includes the Deep Research tool, which lets you run multi-step tasks in ChatGPT, blending your files and instructions with live information from the web.
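The project-scoped recall described above can be sketched in a few lines. To be clear, this is a toy illustration of the concept – not OpenAI’s implementation, and the class and method names are my own:

```python
# Toy illustration of project-scoped memory: chats filed under a project
# are searchable from later chats in that project, while chats in other
# projects stay invisible. Real systems use semantic retrieval, not the
# naive word-overlap matching shown here.

from collections import defaultdict

class ProjectMemory:
    def __init__(self):
        self._chats = defaultdict(list)  # project name -> list of messages

    def add(self, project: str, message: str) -> None:
        """File a message under the given project."""
        self._chats[project].append(message)

    def recall(self, project: str, query: str) -> list:
        """Return past messages in this project sharing a word with the query."""
        words = set(query.lower().split())
        return [m for m in self._chats[project]
                if words & set(m.lower().split())]
```

The key property is the scoping: a query only ever searches its own project’s history, which is why a Project chat no longer dredges up unrelated discussions.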
You can also now use ChatGPT's voice mode in Projects. Tap the microphone inside any project and start talking about the files within or anything else, and you'll see it appear. And if you're using the mobile app, you can now upload files directly and switch between GPT-4o or other models on the fly.
Other upgrades are more minor but still significant. For instance, if you have a Project that you don't want to share in its entirety, but it includes a particular ChatGPT conversation you wish to send to someone, you can do that now. And if a discussion with ChatGPT suddenly inspires you to start a Project, you can now drag it directly into a project folder or convert it instantly from the sidebar.
Not everyone can use the upgraded Projects features as of yet. You have to be a ChatGPT Plus or Pro subscriber for now. However, based on many other ChatGPT features that were once exclusive to subscribers, I wouldn't be surprised if these become accessible to free users at some point in the future.
AI project power
As impressive as ChatGPT Projects could be now, I wouldn't expect to see offices throwing out their Notion or Trello setups anytime soon. Projects still lacks some of the common elements of those tools, like calendars. But for personal or smaller efforts, it's a nice enhancement of the AI assistant, one that might at least help OpenAI compete with the AI infusions Google has been adding to its ecosystem.
OpenAI has been clear that they don't just want to be a chatbot provider. They want to be your go-to for life and work. These upgrades feel like the early sketches of something more ambitious. OpenAI might someday pitch ChatGPT as an alternative to toggling between ten apps. Instead, you might one day just open ChatGPT and say, “Let’s pick up where we left off on the next work presentation.”
A little experimentation on my part found the upgraded Projects seemed more efficient almost immediately, but not without some hiccups. One large collection of conversations I've organized for testing other features was a little too eager to dredge up the initial interactions rather than pull from more recent discussions about ChatGPT's capabilities. And while Projects can now reference past chats, the actual search and navigation between those chats still isn’t perfect. There's no Boolean logic to use to isolate certain phrases yet, so you might have to do some scrolling to find what you're looking for.
Still, even with the inevitable friction, I can see the value of making Projects more of a self-organizing AI data source, rather than simply a file folder for documents, as it has been. Whether compiling research, analyzing data, or plotting the perfect party, it could make using ChatGPT a lot less chaotic.
Mattel is partnering with OpenAI to build AI‑powered toys, which might lead to some amazing fun, but also sounds like the premise for a million stories of things going wrong.
To be clear, I don't think AI is going to end the world. I've used ChatGPT in a million ways, including as an aide for activities as a parent. AI has helped me brainstorm bedtime stories and design coloring books, among other things. But that's me using it, not opening it up to direct interaction with children.
The official announcement is very optimistic, of course. Mattel says it’s bringing the “magic of AI” to playtime, promising age‑appropriate, safe, and creative experiences for kids. OpenAI says it’s thrilled to help power these toys with ChatGPT, and both companies seem intent on positioning this as a step forward for playtime and childhood development.
But I can’t help thinking of how ChatGPT conversations can spiral into bizarre conspiracy theories, except suddenly it's a Barbie doll talking to an eight-year-old. Or a GI Joe veering from positive messages about "knowing is half the battle," to pitching cryptocurrency mining because some six‑year‑old heard the word “blockchain” somewhere and thought it sounded like a cool weapon for the toy.
As you might have noted from the top image, the first thought I had was of the film Small Soldiers – the corny 1998 classic in which a toy company executive saves money by installing military-grade AI chips into action figures, leading to the toys staging guerrilla warfare in the suburbs. It was a satire, and not a bad one at that. But as over-the-top as that outcome might be, it's hard not to see the glimmer of chaotic potential in installing generative AI in the toys children may spend a lot of time with.
I do get the appeal of AI in a toy, I do. Barbie could be more than just a doll you dress up, she could be a curious, clever conversationalist who can explain space missions or play pretend in a dozen different roles. Or you could have a Hot Wheels car commenting on the track you built for it. I can even picture AI in Uno as a deckpad that actually teaches younger kids strategy and sportsmanship.
But I think generative AI models like ChatGPT shouldn't be used by kids. They may be pared down for safety's sake, but at a certain point, that stops being AI and just becomes a fairly robust set of pre-planned responses without the flexibility of AI. That means avoiding the weirdness, hallucinations, and moments of unintended inappropriateness from AI that adults can brush off but kids might absorb.
Toying with AI
Mattel has been at this a long time and, in general, knows what it's doing with its products. It's certainly not to the company's advantage to have its toys go even slightly haywire. Mattel says it will build safety and privacy into every AI interaction, and promises to focus on appropriate experiences. But “appropriate” is a very slippery word in AI, especially when it comes to language models trained on the internet.
ChatGPT isn’t a closed-loop system that was built for toys, though. It wasn’t designed specifically for young kids. And even when you train it with guidelines, filters, and special voice modules, it’s still built on a model that learns and imitates. There’s also the deeper question: what kind of relationship do we want kids to have with these toys?
There’s a big difference between playing with a doll and imagining conversations with it, and forming a bond with a toy that independently responds. I don’t expect a doll to go the full Chucky or M3gan, but when we blur the line between playmate and program, the outcomes can get hard to predict.
I use ChatGPT with my son in the same way I use scissors or glue – a tool for his entertainment that I control. I’m the gatekeeper, and AI built into a toy is hard to monitor that way. The doll talks. The car replies. The toy engages, and kids may not notice anything amiss because they don't have the experience.
If Barbie’s AI has a glitch, if GI Joe suddenly slips into dark military metaphors, if a Hot Wheels car randomly says something bizarre, a parent might not even know until it’s been said and absorbed. If we’re not comfortable letting these toys run unsupervised, they’re not ready.
It’s not about banning AI from childhood. It’s about knowing the difference between what’s helpful and what’s too risky. I want AI in the toy world to be very narrowly constrained, like how a TV show aimed at toddlers is carefully designed to be appropriate. Those shows hardly ever go off script, but AI's power is in writing its own script.
I might sound too harsh about this, and goodness knows there have been other tech toy scares. Furbies were creepy. Talking Elmo had glitches. Talking Barbies once had sexist lines about math being hard. All issues that could be resolved, except maybe the Furbies. I do think AI in toys has potential, but I'll be skeptical until I see how well Mattel and OpenAI navigate the narrow path between barely using AI at all and giving it so much freedom that it becomes a bad virtual friend to your child.
The Apple Watch’s fitness features have been getting consistently more impressive in recent years, between new running metrics, the recent addition of Training Load, and integrations with third-party apps such as TrainingPeaks. And yet, despite these advanced tools at my fingertips and as someone who tests the best Apple Watches as part of my job, I’m still lacking in the running department.
After a long day of work, as a man in his mid-thirties with a very active six-year-old, the last thing I want to do is pull on my shorts and underlayer and head out the door, and that’s even with some lovely running routes nearby.
Tracking my workouts is great, but how can I outsource my motivation to my Apple Watch? As it happens, Apple's new AI Workout Buddy might be the answer.
Workout Buddy could become my favorite new watchOS feature in years
(Image credit: Lance Ulanoff / Future)
I should stress that I have no such issues getting to the gym. My hesitance to run (which takes a lot less work and time than lifting weights) feels very much like a problem of my own making, so it’s gratifying that Apple may have a solution for me, one that could make me feel less like I’m the only “reluctant runner” out there.
As revealed at WWDC 25 this week, watchOS 26 will offer Workout Buddy, a “first-of-its-kind fitness experience with Apple Intelligence that incorporates a user’s workout data and their fitness history to generate personalized, motivational insights during their session, based on data like heart rate, pace, distance, Activity rings, personal fitness milestones, and more.”
It’s that word “motivational” that piqued my interest, and while I’m wary of the use of AI (especially as a journalist who makes his living using words) a helpful few words of encouragement in my ear when I’m pushing myself out the door for a 5K could make all the difference.
I recently completed my first 10K running event following some heart health issues in recent years. Having an AI assistant tap into my heart rate data and advise me how much further I could push myself every now and again could have quieted the fearful questions I was asking myself, such as “am I going too fast?” or “am I pushing too hard?”
Matching my style of running
When I do get out for a run, I try to avoid looking at my Apple Watch Ultra. I often don’t want to know my pace, because I try to be more intentional with exercise – namely, keeping my mind on the whole ‘moving my legs’ part of the workout rather than on the numbers.
Looking at my pace and seeing it slower than anticipated is a bit of a morale-buster, while checking the distance run and seeing I’m less than halfway around my circuit has a tendency to have a negative impact on my pace, as if I’m willing it to be over.
If I can tweak what the AI offers as encouragement, then I feel I’ll be having my proverbial cake and eating it, pushing me further without laying on too thick how far I’ve fallen since my prime a decade ago. Think less “here are your splits”, and more “keep going, you’re doing great!”.
I’ve tried AI coaching apps like Zing in the past, and as promising as they are, they can often feel overly complex when you just want to track some exercises or your step count. Having something like Workout Buddy running natively on my devices, which I can call upon when I need it and minimize when I don’t, really does feel like the best of both worlds. Roll on September!