News

Jury Decides Meta Stole Data from Users of Period-Tracking App. What to Do If You're Worried - Wednesday, August 6, 2025 - 16:01
The tech company lost a massive privacy case involving the Flo app that has raised huge questions about how health apps are secretly being used.
Researchers Seize Control of Smart Homes With Malicious Gemini AI Prompts - Wednesday, August 6, 2025 - 16:44
AI hacks show how Gemini can be secretly used to control lights, heaters and other smart home tech, signaling a new evolution in digital vulnerabilities.
Verizon Promo Offers NFL Sunday Ticket Access at No Extra Cost - Wednesday, August 6, 2025 - 17:50
The offer is open to new or existing customers.
AI Sucks at Sudoku. Much More Troubling Is That It Can't Explain Why - Wednesday, August 6, 2025 - 19:55
Research into how chatbots solve simple puzzles demonstrates one of the technology's big ethical concerns.
Trump Media Is Testing an AI Search Engine Powered by Perplexity - Wednesday, August 6, 2025 - 20:35
The president's media company is beta-testing Truth Search AI, but will the results lean toward conservative opinions?
Wi-Fi 8 Focuses on Reliability Over Speed to Handle Advanced AI Experiences - Wednesday, August 6, 2025 - 23:00
Wi-Fi 8 plans to prioritize reliability in a data-hungry AI world, but you'll have to wait a few years to get it. In the meantime, I know some decent router choices you can grab today.
Today's NYT Mini Crossword Answers for Thursday, Aug. 7 - Wednesday, August 6, 2025 - 23:35
Here are the answers for The New York Times Mini Crossword for Aug. 7.
Grinding Your Teeth While Sleeping? Here's How to Stop Naturally - Thursday, August 7, 2025 - 03:48
If you wake up with a sore jaw or headache, you may have sleep bruxism. Here are some natural ways to stop it.
The Owala FreeSip Is the Water Bottle I Swear By. It's Seeing a Major Discount for Back to School - Thursday, August 7, 2025 - 05:00
Move over, Stanley. This insulated water bottle is now the drinking vessel of choice among students, and it's 20% off for just a short time.
These Gut-Health Hydration Powders Are Getting Me Through Summer Workouts - Thursday, August 7, 2025 - 06:00
Blume SuperBelly hydration packets keep me running, even in the heat.
Want a Little More Sleep? Try This iOS 26 Alarm Trick - Thursday, August 7, 2025 - 06:00
This customizable feature can give you a few more minutes to snooze or force you out of bed faster.
Best 6 TVs I've Tested for August 2025 - Thursday, August 7, 2025 - 06:03
Here are the best TVs I've reviewed to suit most budgets, from top brands including LG, Samsung and TCL.
Want to Get the Most From Your Kindle? I Recommend These 10 Hacks - Thursday, August 7, 2025 - 06:45
From sharing your library with family to sending documents straight to your device, there are plenty of lesser-known tricks to make the most of your e-reader.
I Found Out Just How Much a Home Security System Can Save on Home Insurance - Thursday, August 7, 2025 - 07:00
Monthly home insurance premiums can be a pain, but I learned that most insurers will give you a rate discount with the right security system.
Can you run OpenAI's new gpt-oss AI models on your laptop or phone? Here's what you'll need and how to do it - Wednesday, August 6, 2025 - 20:00

As you may have seen, OpenAI has just released two new AI models – gpt‑oss‑20b and gpt‑oss-120b – which are the first open‑weight models from the firm since GPT‑2.

These two models – one is more compact, and the other much larger – are defined by the fact that you can run them locally. They'll work on your desktop PC or laptop – right on the device, with no need to go online or tap the power of the cloud, provided your hardware is powerful enough.

So, you can download either the 20b version – or, if your PC is a powerful machine, the 120b spin – and play around with it on your computer, check how it works (in text-to-text fashion) and how the model thinks (its whole process of reasoning is broken down into steps). And indeed, you can tweak and build on these open models, though safety guardrails and censorship measures will, of course, be in place.

But what kind of hardware do you need to run these AI models? In this article, I'm examining the PC spec requirements for both gpt‑oss‑20b – the more restrained model packing 21 billion parameters – and gpt‑oss-120b, which offers 117 billion parameters. The latter is designed for data center use, but it will run on a high-end PC, whereas gpt‑oss‑20b is the model designed specifically for consumer devices.

Indeed, when announcing these new AI models, Sam Altman referenced 20b working on not just run-of-the-mill laptops, but also smartphones – but suffice it to say, that's an ambitious claim, which I'll come back to later.

These models can be downloaded from Hugging Face (here's gpt‑oss‑20b and here’s gpt‑oss-120b) under the Apache 2.0 license, or for the merely curious, there's an online demo you can check out (no download necessary).

The smaller gpt-oss-20b model

Minimum RAM needed: 16GB

The official documentation from OpenAI simply lays out a requisite amount of RAM for these AI models, which in the case of this more compact gpt-oss-20b effort is 16GB.

This means you can run gpt-oss-20b on any laptop or PC that has 16GB of system memory (or 16GB of video RAM, or a combo of both). However, it's very much a case of the more, the merrier – or faster, rather. The model might chug along with that bare minimum of 16GB, but ideally you'll want a bit more on tap.

As for CPUs, AMD recommends the use of a Ryzen AI 300 series CPU paired with 32GB of memory (and half of that, 16GB, set to Variable Graphics Memory). For the GPU, AMD recommends any RX 7000 or 9000 model that has 16GB of memory – but these aren't hard-and-fast requirements as such.

Really, the key factor is simply having enough memory – the mentioned 16GB allocation, and preferably having all of that on your GPU. This allows all the work to take place on the graphics card, without being slowed down by having to offload some of it to the PC's system memory – although the Mixture of Experts (MoE) design OpenAI has used here helps to minimize any such performance drag, thankfully.

Anecdotally, to pick an example plucked from Reddit, gpt-oss-20b runs fine on a MacBook Pro M3 with 18GB.

The bigger gpt-oss-120b model

RAM needed: 80GB

It's the same overall deal with the beefier gpt-oss-120b model, except as you might guess, you need a lot more memory. Officially, this means 80GB, although remember that you don't have to have all of that RAM on your graphics card. That said, this large AI model is really designed for data center use on a GPU with 80GB of memory on board.

However, the RAM allocation can be split. So, you can run gpt-oss-120b on a computer with 64GB of system memory and a 24GB graphics card (an Nvidia RTX 3090 Ti, for example, as per this Redditor), which makes a total of 88GB of RAM pooled.

AMD's recommendation in this case, CPU-wise, is for its top-of-the-range Ryzen AI Max+ 395 processor coupled with 128GB of system RAM (and 96GB of that allocated as Variable Graphics Memory).

In other words, you're looking at a seriously high-end workstation laptop or desktop (maybe with multiple GPUs) for gpt-oss-120b. However, you may be able to get away with a bit less than the stipulated 80GB of memory, going by some anecdotal reports - though I wouldn't bank on it by any means.

How to run these models on your PC

Assuming you meet the system requirements outlined above, you can run either of these new gpt-oss releases on Ollama, which is OpenAI's platform of choice for using these models.

Head here to grab Ollama for your PC (Windows, Mac, or Linux) - click the button to download the executable, and when it's finished downloading, double-click the executable file to run it, and click Install.

Next, run the following two commands in Ollama to obtain and then run the model you want. In the example below, we're running gpt-oss-20b, but if you want the larger model, just replace 20b with 120b.

ollama pull gpt-oss:20b
ollama run gpt-oss:20b
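Once the model is pulled, Ollama also serves it over a local HTTP API (on port 11434 by default), so you can call it from your own scripts rather than just the command line. Here's a minimal Python sketch of that, assuming a default Ollama install with gpt-oss:20b already pulled as above - the prompt text is just a placeholder:

# Minimal sketch: query a locally running gpt-oss model via Ollama's default HTTP API.
# Assumes Ollama is installed and "ollama pull gpt-oss:20b" has already been run;
# the endpoint, port, and prompt below are illustrative defaults.
import json
import urllib.request

payload = {
    "model": "gpt-oss:20b",  # swap for "gpt-oss:120b" if your hardware can handle it
    "prompt": "In two sentences, what is an open-weight AI model?",
    "stream": False,  # ask for one complete JSON response rather than a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

print(result["response"])  # the model's generated text

With Ollama running in the background, running this script should print the model's answer to your terminal.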

If you prefer another option rather than Ollama, you could use LM Studio instead, using the following command. Again, you can switch 20b for 120b, or vice-versa, as appropriate:

lms get openai/gpt-oss-20b
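LM Studio can also expose the downloaded model through a local OpenAI-compatible server (port 1234 by default), which means the standard openai Python client can talk to it by pointing the base URL at your own machine. A rough sketch, assuming the local server is running and the model identifier matches what LM Studio shows for the download (it may differ on your setup):

# Rough sketch: chat with gpt-oss-20b via LM Studio's local OpenAI-compatible server.
# Assumes LM Studio's local server is running on its default port (1234) and the
# model pulled with "lms get openai/gpt-oss-20b" is loaded; the identifier may vary.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",  # any non-empty string works for a local server
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # adjust to the model name LM Studio reports
    messages=[
        {"role": "user", "content": "Give me one benefit of running an LLM locally."}
    ],
)

print(response.choices[0].message.content)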

Windows 11 (or 10) users can exercise the option of Windows AI Foundry (hat tip to The Verge).

In this case, you'll need to install Foundry Local - the caveat being that it's still in preview - and check out this guide for the full instructions on what to do. Also, note that right now you'll need an Nvidia graphics card with 16GB of VRAM on board (though other GPUs, like AMD Radeon models, will be supported eventually - remember, this is still a preview release).

Furthermore, macOS support is "coming soon," we're told.

What about smartphones?

As noted at the outset, while Sam Altman said that the smaller AI model runs on a phone, that statement is pushing it.

True enough, Qualcomm did issue a press release (as spotted by Android Authority) about gpt-oss-20b running on devices with a Snapdragon chip, but this is more about laptops – Copilot+ PCs that have Snapdragon X silicon – rather than smartphone CPUs.

Running gpt-oss-20b isn't a realistic proposition for today's phones, though it may be possible in a technical sense (assuming your phone has 16GB+ RAM). Even so, I doubt the results would be impressive.

However, we're not far away from getting these kinds of models running properly on mobiles, and this will surely be on the cards for the near future.

Grok rolls out AI video creator for X with bonus "spicy" mode - Wednesday, August 6, 2025 - 21:00
  • X's AI video maker Grok Imagine is live for SuperGrok and Premium+ subscribers
  • Grok Imagine turns prompts into looping six-second clips
  • The tool includes a controversial “spicy mode” for some NSFW content

xAI is pushing out the Grok Imagine AI video maker to those willing to pay for a SuperGrok or Premium+ subscription. Assuming you've paid your $30 or $35 a month, respectively, you can access Imagine in the Grok app under its own tab and turn prompts into short video clips. These last for around six seconds and include synced sound. You can also upload static images and animate them into looping clips.

Grok Imagine is another addition to the increasingly competitive AI video space, including OpenAI's Sora, Google's Veo 3, Runway, and more. Having audio built in also helps the tool, as sound is still not a universally available feature in all AI video tools.

To stand out, Elon Musk is encouraging people to think of it as “AI Vine,” tying the new tool to the classic, long-defunct short-form video platform once owned by Twitter - itself now a vanished brand name.

However, this isn’t just nostalgia for 2014 social media; the difference is that Grok Imagine blends active creation with passive scrolling.

"Grok Imagine should get better almost every day. Make sure to download the latest @Grok app, as we have an improved build every few days." - https://t.co/MGZtdMx26o, August 3, 2025

Spicy Grok

One potentially heated controversy around Grok Imagine is the inclusion of a “spicy mode” allowing for a limited amount of more explicit content generation. While the system includes filters and moderation to prevent actual nudity or anything sexual, users can still experiment with suggestive prompts.

Musk himself posted a video of a scantily clad angel made with Grok Imagine. It provoked quite a few angry and upset responses from users on X. xAI insists guardrails are in place, but that hasn’t stopped some early testers from trying to break them.

xAI is keen to promote Grok Imagine as a way to make AI video accessible for everyone, from businesses crafting ads to teachers animating lessons. Still, there are understandable concerns about whether an AI platform that was only recently in hot water for outright pro-Nazi statements can be trusted to share video content without courting further controversy. That goes double for the filters on the spicy content.

Gemini AI can turn prompts into picture books, but I still prefer Paddington - Wednesday, August 6, 2025 - 23:00
  • Gemini’s Storybook feature lets you instantly generate 10-page illustrated storybooks
  • You can pick art styles and themes
  • The results can be cute but are far from the quality of beloved classics

If you have a kid who loves to hear about themselves in a story, Google’s Gemini AI has a new trick that could keep them happy for a long time. Gemini's new Storybook feature lets you generate fully illustrated, ten-page storybooks with narration from a single prompt.

You describe the tale, the look you want, and any other details, and Gemini writes the story, creates images for each page, and reads it aloud within a few minutes.

Storybook, in some ways, just combines existing abilities like text composition, image generation, and voice narration. Still, by putting them into a single prompt system, it speeds up the final product enormously. If you don't like certain details of the look or writing, you can simply adjust the book with follow-up prompts. You can even feed it a photo to shape the setting or characters.

The appeal for those who might feel they lack creative writing or drawing skills is obvious. No need to hire an illustrator or record voiceovers yourself. If your child wants a bedtime story about a shy dragon who finds confidence at music camp, you type that in, and within minutes, you’ve got a book with pictures, narration, and page-by-page structure.

This isn’t just for bedtime, either. Teachers can create customized stories to explain hard topics, perhaps teaching second graders about gravity with a friendly astronaut cat. Therapists could use storybooks to help kids talk through emotions using characters they connect with. Aunts and uncles can make personalized birthday stories with inside jokes and family pets.

What used to be a labor-intensive creative project is now something you can do on your phone during lunch break.

AI storytellers

And it is a notable shift from the standard fill-in-the-blank template approach common to other AI tools. The narration even adapts to the tone of the story, with voices that can be whimsical, soothing, or dramatic, depending on what your story needs. Google is pitching the tool to busy parents, overworked teachers, and creative kids looking for a co-author and illustrator for their ideas.

I asked Gemini to make a story about my dogs going on an adventure in nature, sharing their names and describing their looks, and that's about it. You can read and listen to the Gemini-created story here.

It did a remarkably good job, albeit with a very inconsistent look to the dogs from page to page and a somewhat dull story. And when I tried it again to see how it would perform with the same prompt, the dogs sometimes had more than four limbs, not exactly reassuring to a child looking forward to a story about their pets.

And while it's theoretically possible that Gemini could write and illustrate a story better than the many classic and modern children's books out there, or one more personally resonant than writing it yourself, I personally have doubts. This is a fun little trick, but trading every bookstore, library, and box of crayons and pencils for an AI alternative that can't always even make your dog look the same on every page feels like a poor swap - storytelling is exactly the kind of activity I'd rather do myself. I'll stick to asking AI for help organizing my kitchen and leave the bedtime stories to me.

Cybersecurity must be a top priority for businesses from beginning to end - Thursday, August 7, 2025 - 02:51

Like it or not, cyberattacks are now a regular occurrence and part of everyday life. However, despite this predictability, it remains impossible to pinpoint exactly when and where they will occur. This means that businesses must remain vigilant, constantly on the lookout for any and all potential threats.

From the moment a company is created, it must be assumed that attacks will be coming. Just because it is new and unknown does not mean it is safe. Take DeepSeek, for example: despite being the new kid on the block, as soon as its name hit the news it was hit with a severe, large-scale attack. However, this does not give established companies an excuse to drop their guard.

The past couple of months alone have seen some of the biggest names in retail fall victim, with large companies like M&S and Dior unable to properly defend against attacks. No matter how big the company, it is vital to employ a well-rounded cybersecurity strategy that provides security from the foundational stages of development through to the latest iteration.

Siloed teams are outdated

The key to weathering the storm of cyberattacks is a firm foundation. Cybersecurity principles must be embedded from the outset, ensuring a strong and secure beginning for any product or system development. These defenses must be continually built upon, monitored, tested and updated on a proactive basis to ensure any potential vulnerabilities are mitigated before they can become a threat.

Threats are constantly evolving, and the attack defended against today could be the one that breaks through tomorrow. Therefore it is imperative to keep any and all threat intelligence up to date, monitoring threats in real-time and continuously sharing the information business-wide.

Unfortunately, it is the dissemination of this information that can cause issues - especially when different teams are receiving information late, or not at all. This is often the case in organizations that employ a siloed approach, with individual teams working in isolation from each other.

This fragmented structure can not only impact an organization's ability to detect and respond to threats, but the capability to learn from them and share these insights with other teams. Without a formal structure in place to facilitate cross-team collaboration, teams may develop different processes in parallel, use different tools, and fail to communicate across functions when facing risks or as incidents unfold.

As a result, security controls are inconsistent, making it tough, if not impossible, to establish standard methods for sharing threat intelligence and incident response procedures.

Introducing collaboration

A centralized platform that unifies threat intelligence company-wide will strengthen security efforts across departments and ensure that teams operate as part of a shared vision. Creating common goals and metrics encourages collaboration and establishes a clear sense of purpose. Threat Intelligence Platforms (TIPs) enable organizations to adopt this approach, integrating across business systems and providing automated intelligence sharing.

TIPs act as the heart of an organization's cyber defenses, gathering information from multiple sources, from public feeds to industry reports, and distributing it across all teams. They are able to sift through the data and identify serious threats, advising teams where to focus their efforts to prioritize the most at-risk vulnerabilities.

Through the automation of processes such as data collection and by removing internal communication barriers, organizations can translate scattered, complex cyber-threat information into coordinated action to protect critical assets faster and more comprehensively. This will result in improved threat detection, quicker incident response times and greater overall cyber resilience.

The hyper-orchestration approach

The hyper-orchestration approach builds upon these foundations of collaboration and collective defense, replacing siloed teams with a united threat intelligence network. Employing this structure from the formation of a business will allow organizations to avoid the formation of individual teams, and enhance their cybersecurity capabilities from the outset.

This collective defense approach coordinates threat intelligence and response activities to tackle specific security threats. Perhaps one of the most notable examples of collective defense in action is the Information Sharing and Analysis Center (ISAC), which collects, analyzes and disseminates actionable threat information to its members.

These centers enable organizations to identify and mitigate risks and boost their cyber resilience. ISACs bring together highly competent, professional organizations; the National Council of ISACs, for example, currently comprises almost 30 sector-specific ISACs.

Recent research highlights the importance of this collective defense approach, with 90% of cybersecurity professionals believing collaboration and information sharing are very important or crucial for a strong cyber defense. Despite this, 70% feel their organization needs to do more to improve its threat intelligence sharing capabilities.

It is clear that a collective defense approach is growing more popular, with dedicated information sharing roles now recognized at the highest levels of government and regulation. The EU Network and Information Systems Directive 2 (NIS2), which came into force last October, is a clear example of this, focusing on the resilience of sectors at particular risk.

With clear importance being placed on collaboration in cybersecurity, organizations must take steps to incorporate this approach into their cybersecurity strategies from day one. Employing hyper-orchestration and collective defense is key to enhancing cyber resilience and ensuring systems are secure through every stage of a business' development.

We list the best firewall for small business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

The UK is falling behind in the global race for digital sovereignty - Thursday, August 7, 2025 - 04:03

At London Tech Week we heard Keir Starmer make a commitment to the UK people that we would “become an AI maker, not an AI taker”. But how did this shift from an assumed frontrunner to an AI underdog happen?

Economic and geopolitical instability, including tariffs and ever-changing political alliances, has caused global technology leaders to realize that both their physical and digital infrastructure is best kept in-region. This allows them to protect themselves and not let innovation be hampered by outside forces. This has led to what we are seeing with Starmer’s comments - investment and commitment towards keeping AI systems developed and controlled within the UK.

Enterprises and governments are being shown that sovereign AI and data platforms are no longer a nice-to-have, but a must-have. These are defined as open source-based systems where data and AI are governed together at the edge, on-prem, or in-country cloud. This feels new to many organizations as it requires more than just turning a dial to more or less cloud.

Where we stand today

Research shows that the UK is falling behind in AI innovation, despite strong IT infrastructure and a robust workforce in place. Top enterprise leaders aren’t currently matching the government’s urgency or investment focus on AI and data. This disconnect is particularly stark in the banking sector, which was seen as the UK’s most likely AI growth engine only a year ago.

Sovereignty over AI and data must be mission-critical and applied quickly, for every economy and every enterprise within it. If we continue to hesitate, the concern is that the UK could lose its economic edge. In today’s world, if you can’t control your data and AI, you’ll struggle to stay ahead.

So what needs to be done to fix this growing problem and reposition the UK as an AI leader with a solid base to scale in-region?

1. An intention gap

When it comes to intent to build sovereign AI and data platforms, UK leaders are among the least committed across the globe, despite government-backed programs being critical infrastructure plays.

Needless to say, if national ambition isn’t matched by enterprise commitment, the UK risks losing its early advantage.

2. Seeing beyond the immediate, and building for it

Globally, it appears that success hinges on a strategic commitment to full data access, open source foundations, integrated AI tools, and hybrid infrastructure, as well as accelerating applications into an agentic state.

The fastest-moving economies aren’t siloed in their application; generative and agentic AI are transforming every industry. They’re building sovereign AI and data factories on open source, flexible, future-proof architectures. This means that their AI and data can adapt and deliver value across borders, partners, and time.

In countries leading the charge, enterprise leaders follow these core beliefs:

1. Deep integration of AI and data is critical.

2. Sovereignty isn’t a choice - it’s a necessity.

3. Sustainable success relies on controlling your AI and data platform.

The next three years will shape which economies control the future of data and, consequently, AI. Although trillions have been invested by UK enterprise and government to build one of the world’s most advanced AI ecosystems, without strategies tied to these three core principles, these assets won’t deliver ROI.

3. Sensing the urgency, and adapting to it

The UK is not alone in facing this crossroads - Germany, Saudi Arabia, and the UAE are also converting infrastructure into execution. However, the UK seems to be hesitating more than its counterparts. Among these competitors there is growing recognition that sovereign control over AI and data is now essential - a push the UK needs too.

This recognition is at the heart of reshaping enterprise priorities. As more leaders act, the foundations they’re choosing matter just as much as the strategy itself.

Closing remarks

The divide between early movers and those hesitating is already clear. Just 13% of enterprises have fully integrated AI and data operations, but they account for 21% of the total global ROI, signaling what’s possible when strategy and execution align at speed.

There’s a huge opportunity within this space, as the global AI and data economy is projected to reach $16.5 trillion by 2028. The UK still has a structural advantage with world-class infrastructure, talent, and public investment. All that’s left is action.

We list the best cloud storage.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Revenue redefined: why Agentic AI succeeds where traditional AI stalls - Thursday, August 7, 2025 - 04:53

AI has become synonymous with business transformation, promising insights and efficiency. Yet for many CEOs, traditional AI tools remain frustratingly passive, surfacing insights but failing to take action. Today’s business leaders don’t need more dashboards; they need execution.

This gap often stems from a misunderstanding of AI's role. Tools like “co-pilots” transcribe, summarize, and recommend, but they still rely on humans to follow through. That missing “last mile” is where execution breaks down, costing companies time, revenue, and agility.

Understanding the AI Dichotomy

There's a widespread misconception about AI's role in modern business operations, and many CEOs don’t understand the difference between AI categories. Traditional AI models, including generative AI (GenAI) and transcription services, rely on human intervention to move from insight to action.

They surface recommendations but require human oversight to execute, often causing operational stalls and insights that aren’t accounted for in decision-making. According to Gartner Research, 73% of insights captured by legacy AI tools never translate into executed actions, highlighting a tangible gap between data availability and operational execution.

Imagine a sales representative finishing a call where a potential customer expresses interest but mentions budget constraints. A traditional AI tool captures this interaction and generates a transcript, flagging the budget issue as a critical insight. However, it's up to the representative, assistant, or manager to manually review this flagged point, determine the next steps, update CRM records, and communicate that flagged point in their follow-ups.

This manual process introduces delays, allows for human errors, and increases the likelihood that the lead cools off or engages with a competitor in the meantime. Despite recognizing valuable data, the reactive nature of traditional AI means execution gaps persist, leaving executives puzzled when expected outcomes fail to materialize.

Misunderstandings Around Reactive and Proactive AI

The issue isn't just technological; it's conceptual. Organizations continue to misunderstand the distinct roles and capabilities of different AI categories throughout their operations. Traditional reactive AI solutions are often perceived as holistic operational fixes, setting unrealistic expectations and leading to implementation failures and skepticism regarding AI's overall efficacy.

The misunderstanding also encompasses risk and accountability.

Proactive agentic AI might raise concerns about automated errors or missteps. However, human leaders still hold the reins for overall strategy and are ultimately responsible for the outcomes. Agentic AI does not remove professional, human oversight; instead, it supports leaders by automating routine operational tasks, enabling teams to focus strategically and capitalize on high-value opportunities.

The Proactive Shift: Introducing Agentic AI

Agentic AI is a monumental leap in how AI operates, shifting from simply offering insights to actively taking the reins and executing tasks autonomously within existing workflows. Rather than merely highlighting data trends, it triggers structured, automated actions directly from the surfaced insights. This helps ensure that customer and market signals are promptly acted upon, ultimately boosting revenue outcomes.

There is a spectrum of agentic AI abilities, ranging from advanced automation to autonomous decision-making. It is important to know how and where to employ this power in a way that is both effective and secure.

This type of AI continuously captures structured, clean, first-party data from customer interactions, such as sales calls, emails, and meetings. It then automatically integrates this information into CRM systems, communication platforms, and operational workflows, so no insights fall through the cracks. Unlike traditional AI that merely suggests actions, agentic AI independently completes these tasks, reducing administrative overhead and operational friction.

The Cost of Administrative Overhead

Traditional AI's reactive approach exacerbates administrative burdens, inevitably impacting productivity and revenue potential. Boston Consulting Group reports that sales representatives spend up to 45% of their time on administrative tasks, such as CRM updates and manual follow-ups. This administrative overload limits their capacity to engage in revenue-generating activities and reduces overall sales effectiveness.

For CEOs and revenue leaders, execution speed directly correlates with revenue performance. Delays in responding to customer dissatisfaction, competitive shifts, or emerging market opportunities can lead to substantial financial setbacks. Even minor operational delays can mean the difference between growth and stagnation.

That execution gap is precisely what Agentic AI is built to resolve. By embedding directly into existing workflows and autonomously executing necessary tasks, it ensures immediate, structured responses to market signals. Instead of solely identifying churn risks, agentic AI proactively alerts customer success teams with clearly defined actions to prevent revenue loss.

Interoperability and Operational Agility Across the Enterprise

A major limitation of traditional AI tools is their siloed nature. Data outputs typically require manual intervention to distribute across departments, creating inefficiencies and inconsistencies. Agentic AI, in contrast, operationalizes intelligence by integrating across the enterprise's existing technology stack, enhancing transparency and consistency among sales, marketing, and customer success teams. This integration allows for interoperability while reducing delays associated with manual transfers and human-dependent workflows.

Operational agility has become a priority for CEOs who face rapidly shifting markets and fierce competition. While traditional AI provides important insights, it lacks the execution capacity to drive agile responses. Agentic AI meets this demand by automating real-time, responsive actions within core business processes.

Embracing Agentic AI: The Path Forward

Why is Agentic AI so important right now? Because understanding and embracing Agentic AI isn't just about gaining an edge; it's about finding and taking advantage of opportunities in today's fiercely competitive, resource-strained, and unpredictable markets. This goes beyond a simple tech improvement; it's a way to redefine how businesses turn intelligence into action, directly converting their strategic insights into real, immediate impact.

I’ve tested and ranked 12 of the best CRM platforms.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
