News
Airbnb’s redesign – and its push to have users Airbnb more than just a vacation spot or getaway – has been out for a few weeks now. The idea is that you won’t only Airbnb a lovely cottage on a beach; you’ll also turn to the service when, for example, you want to experience a city close to home or add some adventure to a trip abroad.
We’ve already broken down the additions, including the launch of Experiences and Services, as well as the new look for the app. It all feels a lot sleeker, with visuals that adjust on the fly, a mini social network, and a passport of sorts that saves all the information from your trip. So, if you have a favorite spot, you can easily share it with a friend. The whole thing feels far more tactile.
Here we're taking a look under the hood and finding out how Airbnb is making the app work better for you. The company has rebuilt its entire tech stack for the app and the service as a whole, meaning you'll find easier navigation with three choices at the top: Homes, Experiences, and Services.
Furthermore, there's a redesigned profile that makes it easier for you to take a look back and even reconnect with people you've had experiences with. The rebuilding effort enables what's already launched while also setting Airbnb up for the future, as Jud Coplan, VP of Product Marketing at Airbnb, told TechRadar.
“One of the huge benefits of rebuilding the architecture and rethinking, really, the infrastructure of the app was that we created something that can expand beyond the 10 categories of services,” Coplan told me.
He was referencing the 10 that currently exist – chefs, photographers, massages, spa treatments, personal training, hair treatments, makeup, nails, prepared meals, and catering – but hinted that expansion is very possible, adding, “We've created a new Airbnb that can go even beyond what we've been talking about today.”
That could mean we see grocery deliveries integrated, so you can be well stocked when you arrive at your Airbnb to make dinner – or maybe even the ability to order directly to your home.
It’s really an expansion of the platform to book these services, whether you’re in your home city or on a trip. Coplan told me how his family had booked an experience in their home city of San Francisco, a kind of staycation excursion.
The 10 aforementioned options are further themed by activity, such as cooking, city walks, learning a new skill, and even more exclusive ones featuring celebrities.
During the keynote, Airbnb’s CEO, Brian Chesky, highlighted the importance of people, noting that it’s real folks who are experts offering these experiences and services, and it’s real people offering up their homes, lofts, or apartments for Airbnb.
There is an element of AI being used here, one example being an AI-powered photo tour, as Coplan explained. “For homes, if you upload all your photos, we recognize them, we organize them, we present them really nicely,” all with the aim of making listing a home easier and, in theory, encouraging more bookings. There are also AI-powered quick replies, where the app automatically suggests a response a host could send.
Services and Experiences, the two new offerings from Airbnb, utilize AI to recommend what you might find most appealing, basing suggestions on “where they are in their journey, their past bookings, their current trip, what they've told us about in their profile,” as Coplan explained.
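Airbnb hasn’t detailed how that ranking works under the hood, but conceptually it’s a familiar personalization problem. Here’s a minimal sketch of the idea – every field name and weight below is invented for illustration, not Airbnb’s actual system:

```python
# Hypothetical ranking sketch: the field names and weights are
# invented for illustration; this is not Airbnb's system.

def score_offering(item: dict, user: dict) -> float:
    score = 0.0
    # Past bookings: boost categories the user has booked before.
    if item["category"] in user["past_booking_categories"]:
        score += 2.0
    # Current trip: prefer offerings in the destination city.
    if item["city"] == user.get("current_trip_city"):
        score += 3.0
    # Profile: reward overlap with stated interests.
    score += 0.5 * len(set(item["tags"]) & set(user["profile_interests"]))
    return score

def recommend(items: list[dict], user: dict, k: int = 5) -> list[dict]:
    # Rank all candidates and return the top k.
    return sorted(items, key=lambda i: score_offering(i, user), reverse=True)[:k]
```

A real system would use learned models rather than hand-tuned weights, but the inputs – booking history, the current trip, and the profile – are exactly the signals Coplan names.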
Lastly on AI, Coplan shared that Airbnb has begun rolling out an AI assistant for customer service in the US, in English. What that “allows people to do is have a natural language conversation with customer service and get answers to questions really easily.” It’ll be interesting to see how this performs, and what feedback users provide.
(Image credit: Airbnb)
On the app’s new look and flow, it was clear that Coplan and the team at Airbnb focused on the human element, emphasizing that what one can book is all tied back to a real person. He noted that the color palette, animations, and dimensionality all tie back to the real world, giving a sense of what you might experience.
Even more interesting, and maybe hinting at Airbnb’s future, is the community aspect. “We didn't want this to be a place where you have followers, where you meet people online,” explained Coplan. “The people you see are people that you know from the real world, and so that connections page within your profile, those are people you've traveled with and those are people you've met on your experiences.”
It’s certainly a unique approach and a more intentional one, rooted in shared experience. It’s entirely opt-in, and there are privacy controls that let you turn the community aspect off.
Separate from the community, but also housed within the profile, is a sort of passport-like experience, allowing you to look back at where you’ve been and easily share details. Much like a card in Apple Wallet, it has a shimmer and shine effect when you move your phone around.
While Airbnb’s main event is its annual summer release, and that’s done with for 2025, I suspect we’ll hear more from the company sooner than a year from now, and I’m intrigued to see just how far the tech stack can go. For now, I’m on the hunt for an experience to try.
- Sony unveils its first-ever wireless fight stick, codenamed Project Defiant
- The fight stick is launching in 2026 and is designed for PS5 and PC
- The controller can be used wirelessly via PlayStation Link technology for ultra-low latency gameplay, or wired using a USB-C connection
Sony has revealed its first-ever wireless fight stick, codenamed Project Defiant, and it's releasing in 2026.
Announced during PlayStation's June State of Play, Project Defiant is designed for a variety of fighting games and can be used wirelessly or wired for the PlayStation 5 and PC.
"This sleek new controller will give players more flexibility to play their favorite fighting games, whether that is wirelessly with the innovative PlayStation Link technology that provides ultra-low latency, or through a wired connection on PS5 or PC," said Edwin Foo, Vice President, Product Development, SIE, in a PlayStation blog post.
Project Defiant features a high-quality digital stick that’s custom-designed by Sony, toolless interchangeable restrictor gates (square, circle, and octagon) for the stick, buttons with mechanical switches, and a touch pad like the one found on the DualSense wireless controller.
The controller also boasts a sturdy, ergonomic design for long gaming sessions, features a storage compartment for restrictor gates for convenience, and a PS Link USB adapter.
Like the DualSense wireless controller, Project Defiant also supports the ability to wirelessly wake up the user's PS5 by pressing and holding the PS button on the top side of the device.
As previously mentioned, input timing has been refined thanks to Sony's PS Link wireless technology, but there's also an option for players to plug in to play by using a wired USB-C connection.
Sony has also confirmed that the fight stick will come packaged with a sling carry case, which includes a lever gap to keep the digital stick safe, allowing players to take the hardware on the go.
Since Project Defiant is just a codename, we'll likely learn the name of the fight stick closer to launch.
- Ahead of WWDC 2025, Apple is sharing install stats for iOS 18 and iPadOS 18
- If you haven't installed iOS 18 yet, you're in the smaller group, as it's on 82% of all eligible devices
- Apple doesn't guarantee a number of years for software updates for its devices
Have you been using iOS 18 on your iPhone since it was released in September? Or maybe you’re in the camp of waiting a bit to upgrade until friends or family do, or you read reactions from those who’ve tested it.
Well, regardless of which camp you’re in, Apple’s iOS 18 – and iPadOS 18, the operating system for the iPad – are officially eight months old. While that’s not a year, the company’s 2025 Worldwide Developers Conference is kicking off on Monday, June 9, with an opening keynote. We’re expecting the reveal of the next generation of iOS and other platforms.
In the days leading up to that event, Apple is sharing the final usage numbers for iOS 18 and iPadOS 18. While the tech giant doesn’t promise a specific number of years of software updates and equally essential security updates, iOS 18 supports devices as far back as the iPhone XR and XS, and iPadOS 18 works on the iPad 7th Gen, iPad mini 5th Gen, iPad Air 3rd Gen, and iPad Pro 11-inch 1st Gen (12.9-inch 3rd Gen) and later.
That’s an extensive range of supported devices, and for iOS 18, 82% of all eligible iOS devices are running the latest and greatest from Apple. Regarding eligible devices released in the last four years, the installation rate is higher: 88%. Meanwhile, 71% of all iPads that can run iPadOS 18 have it installed, and iPads released in the last four years have an install ratio of about 81%.
(Image credit: Apple)
iOS 18 and iPadOS 18 were pretty big updates as well – for one, they kicked off the rolling launch of Apple Intelligence with iOS 18.1 in October of 2024, though that rollout remains incomplete, and the much-anticipated AI-powered Siri is still delayed. You can use features like Genmoji, Image Playground, Writing Tools, and Visual Intelligence on eligible iPhones or iPads that support Apple Intelligence.
Beyond that suite, though, iOS 18 broke the so-called app grid, allowing you to place apps wherever you like – even with spaces in between – on the iPhone and iPad. It also lets you adjust the color or tone of your entire home screen, including app icons. The Photos app was redesigned and gained customization, though it wasn’t loved by everyone. Also, Apple finally added support for RCS messaging, along with the ability to rework the layout of Control Center.
It was a solid, sizable release for iOS and iPadOS, which has since received smaller updates and currently sits at iOS 18.5 and iPadOS 18.5. Many of the features work on devices as far back as the iPhone XR, though overall speed and battery life might vary.
So if you haven’t updated to iOS 18 yet and you're on an iPhone, Apple’s latest numbers put you in the smaller group. That’s not a bad thing, but it's a good idea to keep your phone up to date beyond just the new features, since these releases also include privacy and security fixes.
(Image credit: Apple)
But how does this compare to Android phones? As I wrote above, Apple doesn’t guarantee a timeframe, but you can see that it goes back six years for iPhones, as the XS was released in 2018.
Samsung now guarantees seven years of software upgrades for most of its Galaxy phone lineup, while Google Pixel phones receive five years of updates up through the Pixel 7, rising to seven years for the Pixel 8 and newer. That means if you get a Pixel 9, you can expect updates through 2031.
Depending on the model, Motorola offers three years of major OS updates plus an additional year of security updates. OnePlus offers four years of OS and security updates on its eponymous flagship phones.
While we expect new versions of iOS and iPadOS at WWDC 2025, the rumor mill hasn’t mentioned Apple promising a specific number of years of software and privacy updates. That could happen, but I think the focus will be on the much-rumored ushering in of a Vision Pro-like design for the rest of the platforms – think glassy and sleek throughout.
- The RTX Pro 6000 beats the RTX 5090 in gaming despite lacking Game Ready drivers
- Nvidia’s $10,000 card was benchmarked across multiple modern game titles
- Extreme power draw, noise, and price make it impractical for most would-be buyers
Nvidia’s RTX Pro 6000 might not be marketed as a gaming GPU, but overclocking expert Roman ‘der8auer’ Hartung has shown it can outperform every consumer card Nvidia makes – including the flagship RTX 5090.
In his latest video, which you can watch below, der8auer benchmarked the $10,000 Blackwell-based workstation GPU across multiple titles, calling it “the new gaming king.”
The Pro 6000 is built on the same GB202 chip as the RTX 5090, but with more of the silicon enabled: it sports 24,064 CUDA cores, more Tensor and RT cores, and a massive 96GB of GDDR7 memory.
Coil whine
It lacks Nvidia’s Game Ready Drivers, but der8auer notes this didn’t seriously affect gaming performance. In 4K Cyberpunk 2077 tests (without ray tracing), the Pro 6000 pulled 14% ahead of the RTX 5090, though it also used 15% more power.
Performance across other titles echoed that trend. The card was 11% faster in Star Wars Outlaws and Remnant 2, and 3% faster in Assassin’s Creed Mirage, the latter possibly held back by driver limitations.
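Putting der8auer’s two percentages together shows why the win is less impressive on efficiency: 14% more performance for 15% more power works out to slightly worse performance per watt. A quick sanity check using only the figures quoted above:

```python
# Figures from der8auer's 4K Cyberpunk 2077 test (no ray tracing).
perf_ratio = 1.14    # Pro 6000 renders 14% faster than the RTX 5090
power_ratio = 1.15   # ...while drawing 15% more power

perf_per_watt = perf_ratio / power_ratio
print(f"Relative perf-per-watt vs RTX 5090: {perf_per_watt:.3f}")  # ~0.991, ~1% worse
```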
Power draw and heat are challenges, with the card reaching 600W during gaming. Noise was another factor. According to der8auer, the fan ramps up aggressively, and the coil whine was the loudest he’s ever heard.
While the Pro 6000 is clearly dominant in raw performance, its price point makes it unreachable for most.
Der8auer noted that although the card has three times the VRAM of the RTX 5090, the price is five times higher. He estimated the 64GB of additional VRAM might cost $200 more to produce, but that doesn’t justify the $8,000 difference for consumers.
Still, for those chasing the absolute peak of performance - and willing to overlook coil whine and noise - the RTX Pro 6000 has set a new bar. Just don’t expect it to be practical for most gamers.
Via Tom's Hardware
- A watered-down Nvidia RTX Pro 6000 is still potent enough to keep China’s AI ambitions alive
- Nvidia’s workaround isn’t top-tier, but it could still flood China’s data centers
- Export rules slow performance, but they can’t stop parallelized AI scaling by Chinese CSPs
In response to US export restrictions introduced in April 2025, Nvidia is reportedly preparing a special edition of its RTX Pro 6000 GPU for the Chinese market.
A report from TrendForce claims this new version will switch from high-bandwidth memory (HBM) to the slower but more accessible GDDR7.
The switch will allow the chip to comply with new regulations that prohibit GPUs with HBM-level memory bandwidth or advanced interconnect capabilities, resulting in a scaled-down GPU, but not one lacking power.
Not the best, but enough for decent AI work
The RTX Pro 6000 is a potent chip. Even after being watered down, TrendForce estimates its performance will fall between Nvidia’s older L40S and the L20 China edition. This places the chip well within the range of GPUs capable of meaningful AI workloads.
What’s driving interest is not just availability, but capability, even with the downgrade. Critics have pointed out that a cut-down version of a very powerful card is still extremely capable, especially if it's priced more affordably.
As a result, Chinese cloud service providers (CSPs) are expected to scale horizontally, buying more units and optimizing for larger node deployments.
Yes, this approach will be more expensive and consume more power, but that’s just a numbers game - CSPs will need to increase infrastructure investment and manage higher power demands. The downside, of course, is that such workarounds are inherently inefficient.
Nonetheless, if the price per unit is right, the aggregate performance could still meet, or even exceed, current needs.
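To make that “numbers game” concrete, here is a purely illustrative sketch – every figure below is invented, not a real spec, price, or TrendForce estimate – of how a CSP might size a fleet of cut-down cards to match flagship throughput:

```python
import math

# Illustrative numbers only: not real specs, prices, or TrendForce data.
flagship = {"perf": 1.00, "power_kw": 0.6, "price_usd": 30_000}
cut_down = {"perf": 0.45, "power_kw": 0.6, "price_usd": 9_000}

target = 100 * flagship["perf"]               # match 100 flagship GPUs
units = math.ceil(target / cut_down["perf"])  # units of the cut-down card needed

print(units)                                   # 223 cards instead of 100
print(units * cut_down["price_usd"])           # $2,007,000 in capex
print(round(units * cut_down["power_kw"], 1))  # 133.8 kW sustained draw
```

The aggregate throughput matches, but the fleet is more than twice as large and draws more than twice the power, which is exactly the inefficiency the workaround trades away.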
It may not be the fastest setup in traditional terms, but in parallelized environments, the performance gap could narrow. Meanwhile, Chinese chipmakers like Huawei and Cambricon are working to fill the gap left by restricted access to top-tier Nvidia GPUs.
If the special edition RTX Pro 6000 succeeds, it might delay the domestic adoption of homegrown alternatives. If it fails, it could accelerate them.
Nvidia’s strategy may help it navigate current U.S. restrictions, but it remains an open question whether that will be enough in the long run.
A weaker chip could still be one of the fastest GPUs on the market, and too powerful to ignore, especially when the line between compliance and capability is so finely drawn.
- Volvo's innovation uses sensors to help the belt adjust its load
- The company claims it can help reduce injuries
- The multi-adaptive safety belt will feature on the EX60
Volvo can claim to be part of the very history of the humble seatbelt, considering Swedish engineer and Volvo employee Nils Bohlin perfected his three-point harness with the company way back in the late 1950s.
Now, it wants to inject some serious smarts into a very simple device that has saved millions of lives over the years.
Thanks to input from the multitude of sensors, cameras, and compute tech onboard the upcoming EX60 (the EX90's sleek little brother), Volvo’s new multi-adaptive safety belt can provide the perfect tension in the unfortunate event of an accident.
Most regular seat belts have three “load-limiting” profile variations that help apply the right load for drivers and occupants of differing heights and weights.
However, Volvo’s latest invention features 11 profiles that adapt to traffic variations and the person wearing it, thanks to real-time data from the car’s advanced sensors, according to the Swedish marque.
Sensors inside can detect height, weight, and seating position of occupants, while the exterior sensor suite can analyze the characteristics of a crash and send the data to the belt to provide the appropriate load "in the blink of an eye".
And how will it help? Well, Volvo gives the example that larger occupants in a serious crash will receive a higher belt load, while smaller occupants in a less severe crash will receive a milder load to prevent common injuries associated with standard seat belts.
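Volvo hasn’t published the logic that picks between the 11 profiles, but the behavior it describes amounts to mapping occupant size and crash severity onto a belt-load index. Here’s a hypothetical sketch of that mapping – the thresholds, formula, and profile indices are entirely invented:

```python
# Hypothetical sketch: mapping sensor readings to one of 11 load-limiting
# profiles. The formula and numbers are invented for illustration;
# Volvo hasn't published how its belt actually chooses.

def select_belt_profile(occupant_mass_kg: float, crash_severity: float) -> int:
    """Return a profile index from 0 (mildest belt load) to 10 (highest).

    crash_severity is a normalized 0.0-1.0 estimate produced by the
    car's exterior sensor suite as the crash unfolds.
    """
    size_factor = min(occupant_mass_kg / 110.0, 1.0)  # cap at ~110 kg
    return round(size_factor * crash_severity * 10)

# Larger occupant in a serious crash -> firmer belt load (profile 8).
print(select_belt_profile(95, 0.9))
# Smaller occupant in a milder crash -> gentler load (profile 2).
print(select_belt_profile(55, 0.3))
```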
Passive tech gets active
(Image credit: Volvo)
Volvo states that it bases its safety innovations on the research it has conducted into some 80,000 real-life accidents over five decades, with a continual data feed helping it make improvements.
It is one of the few automotive companies that has a dedicated Accident Research Team that is permitted to attend the scene of an accident that occurs near its Gothenburg headquarters.
Thanks to this constant source of data, its latest multi-adaptive safety belt will apparently get better over time via over-the-air updates.
Volvo claims that as it gathers more data and insights, its cars will improve their understanding of the "occupants, new scenarios and response strategies". Clever stuff.
- Thunderbolt 5 brings external GPUs closer to delivering real desktop-class performance on thin laptops
- Gigabyte Aorus RTX 5090 AI Box is a dream for power users, not casual gamers
- Heat and power delivery are major concerns when running top-tier GPUs like the RTX 5090
At Computex 2025, Gigabyte introduced a new external GPU enclosure designed to deliver high-performance gaming and AI capabilities.
The Aorus RTX 5090 AI Box connects via Thunderbolt 5 and is powered by Nvidia’s flagship GeForce RTX 5090, following in the footsteps of previous models like the Aorus GTX 1070 and Aorus RTX 3080 Ti, which also featured top-tier Nvidia GPUs at the time.
With the RTX 5090 widely regarded as the best GPU on the market, the AI Box promises desktop-class performance for machines that previously maxed out with integrated graphics or modest discrete GPUs.
Thunderbolt 5 unlocks new performance potential
Thanks to Thunderbolt 5’s dramatically increased bandwidth, many of the bottlenecks that once plagued eGPU setups are being addressed, bringing users closer to the long-standing goal of running a high-end GPU on a lightweight, ultraportable machine.
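To put that bandwidth jump in numbers: Thunderbolt 5’s 80 Gbps symmetric link (120 Gbps with Bandwidth Boost) doubles what Thunderbolt 3 and 4 offered, yet it remains a fraction of the PCIe 5.0 x16 slot a desktop RTX 5090 sits in. The quick math, using nominal spec figures before protocol overhead:

```python
# Link bandwidths in Gbps (1 GB/s = 8 Gbps); nominal spec figures.
tb34_gbps = 40            # Thunderbolt 3/4: the classic eGPU bottleneck
tb5_gbps = 80             # Thunderbolt 5 symmetric (120 with Bandwidth Boost)
pcie5_x16_gbps = 64 * 8   # roughly 512 Gbps for a desktop PCIe 5.0 x16 slot

print(tb5_gbps / tb34_gbps)        # 2.0x the old eGPU link
print(pcie5_x16_gbps / tb5_gbps)   # ~6.4x gap to a desktop slot remains
```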
Theoretically, the Aorus RTX 5090 AI Box checks nearly every box: cutting-edge graphics, future-proof connectivity, and plug-and-play flexibility. However, eGPU setups still come with inherent limitations.
Despite lower latency and higher throughput, external GPUs often fall short of matching the performance of internal GPUs due to data transfer overhead and potential driver inconsistencies.
Heat and power management also remain critical concerns, especially with a GPU as power-hungry as the RTX 5090.
There's also the question of practicality. This setup will likely be overkill for casual gaming or office tasks, but it will be a compelling option for developers, video editors, and 3D artists who need the fastest PC performance with the flexibility of a mobile setup.
That said, pricing will be a key consideration. Gigabyte has yet to announce the price of the AI Box, but with the RTX 5090 already commanding a premium, and Thunderbolt 5 components adding to the cost, this device won’t come cheap.
For reference, the Gigabyte Aorus GV-N4090IXEB-24GD, launched two years ago, debuted at $2,000. The new model could very well surpass that figure.
Via PCWatch (originally published in Japanese)
- No new iPhone expected, not even a hint, really
- No apologies for the bad Apple Intelligence launch
- No Holy Grail products like XR glasses or a glucose monitor
Next week Apple hosts its Worldwide Developers Conference (WWDC) 2025, bringing together developers and media to discuss what’s new and what’s coming in the world of all things Apple. Apple makes a lot of stuff, and more importantly, Apple makes the software that runs it all. Apple’s developer conference is about software more than anything else, and hardware news will only serve to expand on Apple’s software development.
If I were a betting man, here’s my safe bet for what new products we’ll see at WWDC 2025: nothing! Apple almost never launches hardware at WWDC. The only time we get something new at WWDC is when it serves to make the developer news more interesting.
Of course, this year’s software news could be very interesting, with a major redesign possibly in the cards for every Apple OS, plus a new naming scheme that ties each version to the year ahead – i.e., iOS 26 for 2026. Do we need new hardware to go with the updated interface? No, but it could generate more excitement.
With that in mind, here’s what I do not expect to see at WWDC 2025.
Samsung Galaxy S25 Edge, iPhone 16 Pro Max, Galaxy S25 Ultra (Image credit: Philip Berne / Future)
No hardware, no new iPhone, not even the iPhone 17 Air
If you were hoping for a sneak peek at the iPhone 17 Air, I highly doubt Apple will drop any hints about its rumored upcoming thin phone. Even with the Samsung Galaxy S25 Edge now available, I don’t think Apple will ruin the surprise coming in September, when it launches a drastically redesigned iPhone 17 family. Showing off an iPhone 17 Air would give too much away today.
I also wouldn’t expect any new iPad models. Apple launched new iPad base models and new iPad Air tablets recently, with faster processors inside. There’s no reason to launch anything new at WWDC 2025. Even the iPad Mini got a recent refresh, so it’s doubtful we’ll see anything new.
Occasionally Apple announces a new chipset at WWDC, like the Apple M4 platform. That would be a good reason to show a new MacBook Pro, or possibly an updated iPad Pro model. We haven’t had any hints about this, so we’re not expecting new MacBooks or iPads with an Apple M5, for instance.
There is one lingering rumor about an updated Apple HomePod, possibly with a built-in display. That would make sense for WWDC 2025, because a new display means new opportunities for developers to build apps. If any hardware is announced next week, that would be my top bet.
(Image credit: Shutterstock)
No apologies about Apple AI, and no backing down
If you think Apple Intelligence hasn’t been going well, you’re right, but I wouldn’t expect Apple to admit as much, and it definitely won’t be apologizing for any of the missteps so far. In fact, I expect we’ll see Apple barreling forward with AI features at every level of every single OS.
We’re going to get Apple Intelligence on the Apple Watch, along with more AI on the iPhone, iPad, and MacBooks. The real question is whether Apple will keep promising the same features that never materialized – Siri’s ability to read your email and your personal info and provide you with tailored advice – or if there will be some new direction, perhaps with new partnerships to bridge the gaps in Apple’s capabilities.
I think the latter is most likely, as developers right now are keen on ways to integrate existing apps and features with AI. It makes sense for Apple to partner more closely with companies outside of OpenAI, while it continues to try to build its own AI infrastructure.
(Image credit: Apple)
No holy grail, either on your wrist or your face
I expect Apple will talk about advances in Apple Health on the Apple Watch, but I don’t expect any dramatic new capabilities will be announced. We won’t see improvements in glucose monitoring, for instance, or new hardware that can measure metabolic rates in non-invasive ways.
Apple still has a lot of catching up to do on its watch hardware. Google and its Wear OS partners have added features like loss of pulse detection plus more AI features, and battery life continues to climb on the Wear OS side. Apple has been a bit stagnant with its watchOS progress.
I also wouldn’t expect new face wearables. No update to Apple Vision Pro, and no new Apple Vision products. It is possible that we will get improved controls for Vision Pro, and maybe even real joystick controllers, but no new platform like XR smart glasses.
- Meta is developing its Aria Gen 2 smart glasses, which come packed with sensors and AI features
- The smart glasses can track your gaze, movement, and even heart rate to gauge what's happening around you and your feelings about it
- The smart glasses are currently being used to help researchers train robots and build better AI systems that could be incorporated into consumer smart glasses
The Ray-Ban Meta smart glasses are still relatively new, but Meta is already ramping up work with its new Aria Gen 2 smart glasses. Unlike the Ray-Bans, these smart glasses are only for research purposes, for now, but are packed with enough sensors, cameras, and processing power that it seems inevitable some of what Meta learns from them will be incorporated into future wearables.
Project Aria's research-level tools, like the new smart glasses, are used by people working on computer vision, robotics, or any relevant hybrid of contextual AI and neuroscience that draws Meta's attention. The idea for developers is to utilize these glasses to devise more effective methods for teaching machines to navigate, contextualize, and interact with the world.
The first Aria smart glasses came out in 2020. The Aria Gen 2s are far more advanced in hardware and software. They’re lighter, more accurate, pack more power, and look much more like glasses people wear in their regular lives, though you wouldn't mistake them for a standard pair of spectacles.
The four computer vision cameras can see an 80° arc around you and measure depth and relative distance, so the glasses can tell how far your coffee mug is from your keyboard, or where a drone’s landing gear might be heading. And that's just the beginning of the sensory equipment in the glasses, which includes an ambient light sensor with ultraviolet mode, a contact microphone that can pick up your voice even in noisy environments, and a pulse detector embedded in the nose pad that can estimate your heart rate.
Future facewear
There's also plenty of eye-tracking technology, able to tell where you’re looking, when you blink, how your pupils change, and what you're focusing on. It can even track your hands, measuring joint movement in a way that could help with training robots or learning gestures. Combined, the glasses can figure out what you're looking at, how you're holding an object, and if what you're seeing is getting your heart rate up because of an emotional reaction. If you're holding an egg and see your sworn enemy, the AI might be able to figure out you want to throw the egg at them, and help you aim it accurately.
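Meta hasn’t published an API for this kind of fusion, but the pipeline described above boils down to combining gaze, hand, and pulse streams into a single context snapshot. A hypothetical sketch – the field names and the arousal heuristic are invented, not Meta’s:

```python
# Hypothetical fusion of Aria Gen 2's sensor streams into one context
# snapshot. Fields and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ContextSnapshot:
    gaze_target: str         # from eye tracking: what you're fixating on
    held_object: str | None  # from hand/joint tracking
    heart_rate_bpm: float    # from the nose-pad pulse detector

    def emotionally_aroused(self, resting_bpm: float = 65.0) -> bool:
        # Crude stand-in for affect estimation: flag a heart rate
        # well above resting while fixating on something.
        return self.heart_rate_bpm > resting_bpm * 1.3

snap = ContextSnapshot(gaze_target="sworn enemy", held_object="egg",
                       heart_rate_bpm=92.0)
print(snap.emotionally_aroused())  # True: 92 bpm > 84.5 bpm threshold
```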
As stated, these are research tools. They’re not for sale to consumers, and Meta hasn’t said if they ever will be. Researchers have to apply to get access, and the company is expected to start taking those applications later this year.
But the implications are far larger. Meta's plans for smart glasses go well beyond checking for messages. They want to link human interactions with the real world to machines, teaching them to do the same. Theoretically, those robots could look, listen, and interpret the world around them like humans do.
It's not going to happen tomorrow, but the Aria Gen 2 smart glasses prove it's a lot closer than you might think. And it’s probably only a matter of time before some version of the Aria Gen 2 ends up for sale to the average person. You'll have that powerful AI brain sitting on your face, remembering where you left your keys and sending a robot to pick them up for you.
- Luma Labs' new Modify Video tool for Dream Machine uses AI to alter any video footage without reshoots
- Characters and environments keep their original motion and performances
- Anything from subtle wardrobe tweaks to full magical scene overhauls is feasible
Luma Labs is known for producing AI videos from scratch, but the company has a new feature for its Dream Machine that can utterly transform real video footage in subtle or blatant ways, even if it's just an old home movie.
The new Modify Video feature does for videos something like the best Photoshop tools do for images. It can change a scene's setting, style, even whole characters, all without reshooting, reanimating, or even standing up.
The company boasts that the AI video editing preserves everything that matters to you from the original recording, such as actor movement, framing, timing, and other key details, while altering anything else you want.
The outfit you're wearing, which you've decided wasn't you, is suddenly an entirely different set of clothing. That blanket fort is now a ship sailing a stormy sea, and your friend flailing on the ground is actually an astronaut in space, all without the use of green screens or editing bays.
Luma’s combination of advanced motion and performance capture, AI styling, and what it calls structured presets makes it possible to offer the full range of reimagined videos.
All you need to do is upload a video of up to 10 seconds in length to get started. Then pick from the Adhere, Flex, or Reimagine presets.
Adhere is the most subtle option; it focuses on minimal changes, such as adjusting clothing or swapping the textures on furniture. Flex does that but can also adjust the style of the video, the lighting, and other, more obvious details. Reimagine, as the name suggests, can completely remake everything about the video – taking it to another world, remaking people into cartoon animals, or sending someone standing on a flat board into a cyberpunk hoverboard race.
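Luma hasn’t documented a public call signature for Modify Video here, but the stated constraints – three presets and a 10-second clip cap – translate naturally into a small client-side check before upload. A sketch, with hypothetical names throughout:

```python
# Sketch of client-side validation for a Modify Video job, based only on
# the constraints described above (three presets, 10-second clips).
# Function and field names are hypothetical, not Luma's actual API.
from enum import Enum

class Preset(Enum):
    ADHERE = "adhere"        # minimal changes: clothing, textures
    FLEX = "flex"            # also restyles lighting and overall look
    REIMAGINE = "reimagine"  # full scene/character overhaul

MAX_CLIP_SECONDS = 10

def validate_job(clip_seconds: float, preset: Preset, prompt: str) -> None:
    if clip_seconds > MAX_CLIP_SECONDS:
        raise ValueError(
            f"Clips are capped at {MAX_CLIP_SECONDS}s; split longer footage "
            "into shots and edit them together afterwards.")
    if not prompt.strip():
        raise ValueError("A text prompt is required to describe the change.")

validate_job(8.0, Preset.REIMAGINE, "turn the blanket fort into a ship at sea")
```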
Flexible AI video
It all depends not just on prompts, but also on reference images and frame selections from your video, if you choose. As a result, the process is much more user-friendly and flexible.
Although AI video modification is hardly unique to Luma, the company claims it outperforms rivals like Runway and Pika due to its performance fidelity. The altered videos keep an actor’s body language, facial expressions, and lip sync. The final results appear as an organic whole, not just stitched-together bits.
Of course, the Modify Video tools have limitations. Clips are still capped at 10 seconds for now, which keeps things manageable in terms of wait times. However, if you want a longer film, you’ll need to plan ahead and work out how to artistically stitch different shots together.
Still, features like the ability to isolate elements within a shot are a big deal. Sometimes you have a performance you're very happy with, but it's supposed to be a different kind of character in a different setting. Well, you can keep the performance intact and swap a garage for the sea and your actor's legs for a fish tail.
Dreams to reality
It is genuinely impressive how quickly and thoroughly the AI tools can rework a bit of footage. These tools aren't just a gimmick; the models are aware of performances and timelines in a way that feels closer to human than anything I've seen. They don't actually understand pacing, continuity, or structure, but they are very good at mimicking those aspects.
While technical and ethical limitations will keep Luma Labs from recreating the entirety of cinema at this point, these tools will be tempting for many amateur and independent video producers. And while I don't see them becoming as widely used as common photo filters, there are some fun ideas in Luma's demos that you might want to try.
AI isn’t hype anymore—it’s real. IDC predicts that AI spending could hit $623 billion by 2028. That kind of investment doesn’t come from buzz. It comes from companies seeing real value.
AI tools are already cutting costs, speeding up work, and - let’s be honest - making jobs more enjoyable. Nobody misses the repetitive stuff. Instead, we’re doing more of what we’re actually good at: strategy, creativity, and problem-solving.
So now that companies have tasted that value, many want to go further. Not just use AI—but build entire internal AI-powered solutions themselves. Stitch together some models, build an app, launch it to their teams. The thinking goes: if off-the-shelf tools work, imagine how great it’ll be if we control the whole thing.
Here’s the reality: for most companies, especially non-tech companies, building in-house AI solutions is a bad bet. They take too long, cost too much, and rarely deliver what the business actually needs.
Let’s talk about why.
It’s not about the model. It’s about the missing link between tech and impact.
Companies are already experimenting with models. They’re using GPTs, building copilots, testing agents. That’s not the problem. The problem is believing the solution is just about picking a model or wiring one together. That’s not where most projects fail.
They fail because the solution—how it fits into your workflows, your systems, your people—isn’t well thought out. It’s fragmented. It’s not scalable. It doesn’t stick. The model might be powerful, but the experience around it doesn’t work. And without that, the value never materializes. This is why the connective layer matters.
The interface. The orchestration. The automation. The safeguards. It’s what turns "we have a model" into "we’re driving results." And most companies don’t have the internal expertise to build that layer right.
Going solo comes with hidden costs
Trying to build your own AI-powered solution might feel brave. But unless your company is a product and engineering company, the odds are stacked against you.
Here’s where most organizations get it wrong:
1. You Don’t Have the UX Muscle
AI only delivers value when people actually use it. That means seamless, intuitive, trustworthy interfaces. Most enterprises don’t have the product design, UX, and software development capabilities to build interfaces that users actually want to engage with. Internal tools often look—and perform—like science experiments.
2. You’re Flying Blind
Vendors bring learning from hundreds of deployments. You don’t. If you’re rolling out a custom AI solution based on a few internal tests and gut instinct, you’re guessing. You don’t have enough data to know what “good” looks like—or what real adoption takes.
3. You’re Not Budgeting for What Comes Next
AI isn’t static. Models evolve. Interfaces break. User needs change. If you’re not committing budget and headcount for constant iteration, retraining, and support, that in-house solution will be outdated in under a year. And it will sit unused, no matter how promising it looked at launch.
4. Security Concerns Are Overblown
Yes, protecting data is critical. But assuming vendor AI tools are inherently less secure? That’s a flawed take. The best AI providers build with security and compliance at the core. If you trust cloud infrastructure, you can trust enterprise-grade AI vendors.
5. "Only We Know Our Business" Misses the Point
Your internal team knows your business better. That’s not in question. But they likely don’t know how to build scalable, production-ready AI. Vendors do. They’ve already solved the engineering challenges, the data problems, the deployment mess. Why start from scratch?
If you’re not a tech company, stop trying to be one. There’s no shame in partnering with experts—it’s how the winners win faster.
Agentic AI is coming—and it’s even harder to build right
The next phase is agentic AI. These systems don’t just generate—they act. They make decisions. They learn. They execute. They’re already revolutionizing workstreams like customer service, reporting, and document creation.
But these aren’t lightweight features. They’re full systems—requiring real orchestration, context awareness, governance, and maintenance. Trying to build them internally without the right foundation? That’s not just inefficient. It’s risky.
You don’t need to build these things. You need to leverage the companies that already have.
AI is a team sport, play with the pros
AI feels like it’s getting easier. And in some ways, it is. Open-source models. No-code platforms. Accessible APIs.
But building an AI solution that actually moves the needle? That’s still hard. Really hard. And if you think your internal team can replicate what vendors have spent years perfecting, you’re wasting time—and likely money.
The smartest companies aren’t trying to do it all themselves. They’re focusing on what they do best and partnering for the rest.
AI is a team sport. Play with the pros.
That’s how you win.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro