News

Today's NYT Strands Hints, Answers and Help for July 15 #499 - Monday, July 14, 2025 - 16:00
Here are hints and answers for the NYT Strands puzzle for July 15, No. 499.
Today's NYT Connections Hints, Answers and Help for July 15, #765 - Monday, July 14, 2025 - 16:00
Here are some hints and the answers for the NYT Connections puzzle for July 15, #765.
Today's NYT Connections: Sports Edition Hints and Answers for July 15, #295 - Monday, July 14, 2025 - 16:00
Here are hints and the answers for the NYT Connections: Sports Edition puzzle for July 15, No. 295.
Galaxy Z Fold 7 vs. Z Fold 6: Slimmer Body, Bigger Screens and Slightly Higher Price - Monday, July 14, 2025 - 17:24
Samsung's new book-style foldable has big upgrades over its predecessor -- so how do they stack up?
Can an Air Purifier Protect You From Wildfire Smoke? Here's What You Need to Know - Monday, July 14, 2025 - 17:33
Major fires continue to burn in Arizona and across Canada, impacting the air quality for millions of people. An air purifier can help, but it depends on the filter.
Expert Advice: How to Protect Yourself From Wildfire Smoke - Monday, July 14, 2025 - 17:38
Many communities in Arizona, Michigan, Illinois, New York and across North America are experiencing poor air quality due to numerous wildfires. Here's how to guard yourself against it.
Which Foldable Will You Flip For? Comparing Samsung's Galaxy Z Flip 7 and Z Flip 7 FE - Monday, July 14, 2025 - 17:50
One performs better overall, but is the price difference enough to matter? Let's compare Samsung's flip phones for 2025.
Would you buy a 500g laptop with a 7-inch display? $500 GPD MicroPC 2 netbook will appeal to makers and geeks, but I fear the rest of us won't see the appeal - Monday, July 14, 2025 - 15:31
  • GPD MicroPC 2 packs performance upgrades into one of the lightest laptops with full Windows
  • Full-size ports and PCIe 3.0 storage make this mini-PC surprisingly versatile for fieldwork
  • 7-inch screen is sharp, bright, and folds flat for flexible use, but not for long sessions or multitasking

While most modern ultraportables chase thin bezels and all-day battery life, GPD’s new MicroPC 2, the follow-up to the original MicroPC (first launched in 2018 and refreshed in 2021), takes a different route.

It brings back the netbook format with updated internals and rugged, field-ready durability, aimed at specific use cases like IT maintenance, networking, and mobile diagnostics.

Weighing around 490 grams and measuring 171.2 x 110.8 x 23.5 mm, it is one of the lightest laptops with full x86 compatibility.

Performance in a tiny shell

At its core, the MicroPC 2 runs Intel’s N250 processor, built on the newer Intel 7 process.

While still a low-power chip, it offers clear improvements in base and boost clock speeds, cache size, and integrated graphics performance.

Paired with 16GB of LPDDR5 memory and a 512GB M.2 SSD with PCIe 3.0 x4 bandwidth, the system delivers surprisingly capable performance for light workloads.

Benchmark scores show large gains in both CPU and GPU performance compared to the original MicroPC.

Even with those upgrades, it is hard to see this compact device - now available for backing on Indiegogo - gaining broad appeal, especially with an early-backer price close to $500.

Its 7-inch 1080p display offers 500 nits of brightness, making it readable and usable despite the compact form factor.

The screen folds flat for added flexibility in tight environments. Still, the size and layout make it uncomfortable for extended typing or multitasking.

For engineers or mobile professionals who need command-line access, it may serve as a compact problem-solver. But it is best viewed as a backup terminal, not a primary machine.

The MicroPC 2 does make smart use of its rear I/O layout. With dual USB-C and USB-A Gen 2 ports, HDMI 2.1, 2.5Gbps Ethernet, and microSD support, it offers more connectivity than most tablets.

However, the removal of legacy ports like RS-232, available on the original MicroPC, could be a drawback for technicians working with older systems.

Wi-Fi and Bluetooth have been upgraded, but their value depends on whether users see the benefit of these features in a 7-inch form factor.

For IT administrators, field testers, or mobile teams who need a physical keyboard on the go, it could function as an efficient, task-specific business PC.

But most users will find the cramped keyboard, limited performance, and narrow software support too restrictive for general use.

The 512GB version of the MicroPC 2 is currently priced at $495 for backers, with retail pricing set at $607.

At the time of writing, it has raised HKD 754,620 (about $96,131.80) in crowdfunding and is scheduled to begin shipping in September 2025.

You might also like
A workstation PC with 540TB storage is within reach — this tower case can hold up to 15 x 36TB Seagate HDDs - Monday, July 14, 2025 - 16:28
  • Silverstone Seta H2 could be overkill for some, but it solves a very specific problem
  • Storage density is the priority, and that comes with layout and thermal trade-offs
  • Cable clutter and airflow chaos are inevitable when you chase maximum drive capacity

In a market full of flashy PC cases with glass panels, RGB lighting, and limited internal expandability, SilverStone’s newly unveiled Seta H2 case takes a far more practical, function-first approach.

Built as a full tower workstation case, the Seta H2 is about storage expansion rather than stylistic embellishments, and while its 540TB capacity might sound like overkill, this case makes it technically possible.

At a glance, it may look like a throwback, but beneath its plain surface lies the capacity to support what could be the largest HDD array in any consumer-grade tower case.

Not flashy, but engineered for scale

The case’s internal volume of 70 liters is used efficiently to accommodate up to 15 hard drives, and if each of these drives is 36TB, this enables a theoretical storage capacity of 540TB.

This configuration requires multiple removable brackets and cages, which allow users to mount a mix of 2.5-inch and 3.5-inch drives.

Additional 2.5-inch slots are hidden behind the motherboard tray and in various corners, suggesting this design caters to users who value storage density over airflow or clean cable layouts.

Enthusiasts considering this setup may find cooling to be a bottleneck, despite support for multiple fans and even large radiators.

Airflow becomes more complicated when 15 drives are tightly packed in the front, and those drives themselves are not exactly low-power or low-heat components.

Support for E-ATX and SSI-EEB motherboards makes the Seta H2 viable for enterprise or heavy workstation use.

The ability to fit long GPUs, up to 428mm, is impressive given the limited internal space, but installing a side radiator or using one of the drive brackets near the GPU can reduce clearance and make cooling and layout choices more difficult.

Whether the Seta H2 offers the best HDD setup is debatable, as power, heat, and cable management issues may limit its practical use.

With a starting price of around $216 or €200, this case is neither budget-friendly nor prohibitively expensive.

However, if you want to fill the full 540TB capacity, the drives are another matter: a 36TB HDD like the Seagate Exos M 36TB is priced at $800.

At this rate, the total cost for 540TB could be over $12,000, depending on the models selected and current market conditions.
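For anyone running the numbers, a quick back-of-the-envelope sketch in Python shows how the capacity and cost figures above fit together (assuming a flat $800 per drive and the roughly $216 case price quoted above; real-world pricing will vary):

```python
# Back-of-the-envelope check of the figures in the article.
# Assumes a flat $800 street price per 36TB drive; actual pricing will vary.
DRIVE_BAYS = 15
DRIVE_CAPACITY_TB = 36
DRIVE_PRICE_USD = 800
CASE_PRICE_USD = 216

total_capacity_tb = DRIVE_BAYS * DRIVE_CAPACITY_TB   # 540 TB
total_drive_cost = DRIVE_BAYS * DRIVE_PRICE_USD      # $12,000 before the case

print(f"Capacity: {total_capacity_tb} TB")                     # Capacity: 540 TB
print(f"Drives:   ${total_drive_cost:,}")                      # Drives:   $12,000
print(f"Drives + case: ${total_drive_cost + CASE_PRICE_USD:,}")  # $12,216
```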

Via Techpowerup

You might also like
Top AI image generator announces unlimited usage - so get creating now - Monday, July 14, 2025 - 17:22
  • Freepik launches unlimited AI video and image generation for Premium+ and Pro plans
  • Model rollout will be gradual, but more are expected to be supported soon
  • It could cost Freepik in the short term, but it'll result in long-term customers

Freepik is rolling out unlimited AI generation across the platform for paying users, meaning they won't be held back by caps or credits, as is the case with many other rival platforms.

Premium+ and Pro account holders now get unlimited AI video and image generation as Freepik becomes one of the first major platforms to remove restrictions entirely.

Despite the lifted limits, users will still face restrictions in other respects - for example, video will be rendered in 768p resolution, using the MiniMax model from launch.

Freepik lifts AI limits

The company has promised to add further models as time goes on, with weekly launches planned for July 2025. Most will become unlimited, just like MiniMax, but some of the more powerful models like Veo 3 will be restricted.

"We decided to eliminate credits and offer unlimited generation because we understood that what holds users back is not the technology, but the frictions in the usage model," Freepik CEO Joaquin Cuenca explained.

Cuenca emphasized that accessibility, creativity and mass adoption - rather than monetizing limitations - are the company's key drivers, with Freepik expecting to absorb the technology costs to promote long-term platform loyalty and, ultimately, more sustainable revenue from returning customers.

To get access to unlimited video and image generation, users will need to be on one of the two paying plans, starting at $24.50 per month.

"That's the real revolution of AI: not just in what it can do, but in how it is put at the service of people," Cuenca added.

No details have been shared about upcoming models, but we do know that Premium+ and Pro plans get priority speed when using ChatGPT, Imagen 4 and Veo 3 compared with the cheaper and free plans. They also get early access to upcoming AI features, with top-tier Pro customers getting advanced AI models soonest.

You might also like
Get ready to brag that your Sony WH-1000XM6 headphones use the same tech as the headsets NFL coaches are wearing - Monday, July 14, 2025 - 18:00
  • Sony is sharing more details on its NFL coaching headset
  • The Sony NFL Coach's Headset will make its debut for the 2025 NFL season
  • This one won't be purchasable by consumers, but promises best-in-class ANC like the WH-1000XM6

Sony is the official technology partner of the NFL – the National Football League – and we know that the tech giant has been working on a headset for coaches and other officials. We even got a first look at it back in January at the Consumer Electronics Show.

Now, though, Sony’s NFL Coach’s Headset is official, comes in three styles, and will be making its debut ahead of the 2025 season. And when the season does kick off later this year, you can expect 32 teams to be using the headset.

Unlike Sony's latest consumer flagship, the WH-1000XM6, the NFL Coach’s Headset is tailor-made for, well, what the name describes. It’s designed from the ground up to work for coaches in a game environment, and that starts with connectivity. It doesn’t have Bluetooth onboard, but plugs into a special connectivity box that taps into the private network, powered by Verizon, for the NFL and the teams.


There are also no physical buttons or capacitive touch controls on the NFL Coach’s Headset. It also doesn’t have a rechargeable battery; rather, Sony is going old-school and powering this headset with two AAA batteries - possibly a quicker route than waiting on a recharge. It’s also not carrying a specific IPX rating, but Sony did stress-test the unit for both extreme cold and extreme heat at NFL games.

This was done in environments where these weather conditions are recreated, as well as through live testing during NFL games this past season, including a frigid and snowy game at the Buffalo Bills' Highmark Stadium.

Sony’s past experience with various consumer headphones, including the WH-1000 series, will inform the design and other aspects, with the likely focus on noise cancellation and voice pickup. It’s also not a one-size-fits-all product: there are multiple design options for this headset, depending on the team and even individual preference.

Yes, it comes in a model with left and right earcups, but there are also two other options – just a left earcup and just a right earcup. All three, though, come with a microphone on a boom, and Sony is offering active noise cancellation on all three models.

Sony put this to the test in stadium environments where ambient crowd noise was measured at 100 decibels.

The resulting noise cancellation does, Sony claims, work effectively and has been tuned specifically for this use case – here that means being able to hear communication while on the sidelines, while the onboard microphone uses signal processing to pick up just the person speaking, not the background chatter or even the sound of the stadium.

Considering there are no buttons on the headphones, the microphone will automatically mute when the boom is raised. However, the belt pack, to which the headset will be plugged, will also have some manual controls.

Much of the focus here clearly went into making a durable headset that could withstand game after game use, though Sony confirmed with us that each team would have backups and that the overall feature set was purpose-crafted for each coaching staff.

Sony’s NFL Coach’s Headset will be worn during the 2025 season, which starts in early September, but will make an early debut during the Hall of Fame game at the end of July. It does, of course, have some Sony branding front and center; after all, it’ll be shown on TV. I’m just curious if these will make it into a Beyond Sports simulcast – Homer rocking one could be neat.

As you might suspect with such a purpose-built product, there are no plans for a consumer release of the NFL Coach’s Headset. Those after a pair of Sony headphones will need to consider the Sony WH-1000XM6, and you can read our full review here.

You might also like
You don’t have to explain everything to Claude anymore – it’s finally in your apps - Monday, July 14, 2025 - 20:00
  • Claude can now connect to apps like Notion, Canva, and Stripe
  • The AI can understand and assist with tasks using your actual work data
  • Claude's secure access reduces the need to constantly explain context to the AI

Anthropic has upgraded Claude with a major new set of tools that let the AI assistant integrate directly with several popular software tools, including Notion, Canva, Stripe, Figma, Socket, and Prisma. The new Claude tool directory means you don't need to explain what you want to Claude every time you want to employ those tools; Claude can now look at the same information as you to help.

Until now, most AI interactions have required copying and pasting every detail from your project management tool, explaining what’s important, clarifying what each task means, and double-checking that the AI understood it. Now you can just ask it to do the task, and Claude will pull the information directly from the relevant tool to handle things.

That might not seem groundbreaking at first glance, but that context gap is where things usually fall apart when asking AI chatbots to help you. For instance, if you're working on a product launch in Notion and have a list of things to do, you'd normally have to retype or upload all the information to Claude. Now, once you connect Notion to Claude, the AI can read your project documents directly and start putting together timelines and presentation materials that fit the product because it sees what you see.

Or imagine a small business owner using Stripe to manage payments who wants a summary of which customers paid last week and which still owe for services. Claude can now pull that data directly from Stripe with your permission. And with Canva, a blank social media post template can now be filled in with a design and copy from Claude based on your brief. You describe what you need in plain language, and Claude will make something usable.

Claude connected

These integrations are powered by something called the Model Context Protocol, or MCP. That basically means Claude can understand and act on the tools you use without needing a whole tutorial. You just connect an app once, and Claude gets secure, limited access to the relevant information inside it. It doesn’t read your entire inbox or download your bank history, just what’s necessary to help you with the task at hand.
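As a rough mental model only - this is not Anthropic's actual MCP SDK, and every class and function name below is hypothetical, made up for illustration - the pattern looks something like an app exposing a short list of named tools that the assistant can discover and call, and nothing else:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical sketch of the MCP-style pattern described above: a connected
# app exposes a narrow set of named tools, and the assistant can only invoke
# what the user has connected and scoped. Not Anthropic's real SDK.

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., Any]

class ConnectedApp:
    """A stand-in for something like Notion or Stripe exposing scoped tools."""
    def __init__(self, name: str, tools: list[Tool]):
        self.name = name
        self._tools = {t.name: t for t in tools}

    def list_tools(self) -> list[str]:
        return list(self._tools)          # the assistant sees only these

    def call(self, tool_name: str, **kwargs) -> Any:
        return self._tools[tool_name].handler(**kwargs)

# Example: a 'payments' app that exposes only a read-only summary tool.
payments = ConnectedApp("payments", [
    Tool("list_recent_paid", "Customers who paid in the last 7 days",
         handler=lambda days=7: ["Acme Co", "Globex"]),
])

print(payments.list_tools())              # ['list_recent_paid']
print(payments.call("list_recent_paid"))  # ['Acme Co', 'Globex']
```

The point of the pattern is the narrow surface: the assistant never sees the whole account, just the tools the connection exposes.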

You can go to Claude’s tool directory and connect whatever apps you already use. If you’re on a paid Claude plan, you’ll get access to remote app connections like Stripe and Notion. Desktop integrations, like Figma and Socket, are available through the Claude desktop app.

Other AI tools are trying something similar. Google’s Gemini shows up in Docs and Gmail. Microsoft’s Copilot is baked into Word and Excel. But Anthropic’s take is more about linking what you already do with the AI, as opposed to baking the AI into those apps directly.

Of course, this doesn't make Claude autonomous. It can’t pay your bills or fully run your job. And while Anthropic says it has designed everything with privacy and security in mind, some users are likely to be wary, even though you can choose what Claude can access. But for most regular users, this update represents something potentially very useful for staying on top of things. If, as Anthropic claims, it saves time and means you don't have to redo a lot of tedious paperwork, it will likely be a very popular feature.

You might also like
Today's NYT Mini Crossword Answers for Tuesday, July 15 - Monday, July 14, 2025 - 22:12
Here are the answers for The New York Times Mini Crossword for July 15.
Is Nano-Hydroxyapatite Toothpaste an Effective Fluoride Alternative? Dentists Weigh In - Tuesday, July 15, 2025 - 03:06
We consulted dentists to discover the potential benefits and side effects of fluoride-free, nano-hydroxyapatite toothpaste.
Understanding the vibe coding trend and considerations for developers - Tuesday, July 15, 2025 - 02:38

AI is democratizing access to software development in new and innovative ways, with 'vibe coding' emerging as the latest buzzword for budding developers. For the uninitiated, vibe coding makes it easy for anyone to get a head start on projects, by merely describing what they want AI tools such as Cursor, GitHub’s Copilot or Replit to accomplish.

This is no small thing, especially for those who previously had not been able to create software on their own. However, vibe coding requires a high level of trust in the AI’s output, and there are potential trade-offs in confidence and security in exchange for a faster turnaround and for expanding capabilities to those who would not otherwise be able to code.

Exploring the latest AI technologies

Exploring the latest AI technologies can undoubtedly help developers experiment with new ways in which to do things better and faster, and vibe coding is no exception. However, even Andrej Karpathy, the former AI director at Tesla who coined the term, advises that the methodology is better suited for “weekend projects” than for more complex projects.

For smaller, proof-of-concept-style projects, the speed of vibe coding can shine, but as things grow, so does the need for important context and knowledge of edge cases. Vibe coding practicalities aside – recognising security exploits, lack of contextual understanding, bug fixing and software life cycle/maintenance considerations – there’s also a larger issue at play.

For AI-powered coding grunt work to improve in any format or scale, socially responsible AI must be the foundation on which technology solutions are built and delivered. The more trusted and accurate the data that large language models train on, the higher quality the outcome - for the code and most importantly, for the larger tech community.

There is a need for a symbiotic relationship to form: data helps create and improve AI experiences, and AI experiences help guide new, human verified information.

Understanding the opportunities and risks associated with vibe coding

AI continues to democratize access to software development in new and innovative ways for aspiring developers. Developers can ask AI tools such as Cursor, GitHub’s Copilot or Replit to help scaffold what they would like to see at the start of a new project. This is where AI-generated code can be very powerful, and I use it on my own homegrown projects.

There is little need for context, edge cases are far fewer, and security concerns are more “standard” than in other, larger projects with bespoke needs. However, the trade-off for improved speed can be a rise in security concerns as those codebases grow.

For budding programmers, vibe coding has the potential to provide the necessary support during the early stages of a project, and anything that gets more people into our field and shortens the learning curve to coding is a good thing. But caution should also be exercised, given the risks associated with the method for anything beyond those early, basic use cases.

Handing over the reins to AI

Andrej Karpathy describes vibe coding as interacting with AI, through LLMs (large language models), to shift developers away from manual programming: the developer steers the work through intuitive, high-level prompts, and creating software becomes easier as LLMs continue to improve their code-writing skills.

To fully embrace vibe coding, developers must cede much of the control to the AI assistant throughout the process, rather than reviewing and understanding the code as it is put into the codebase.

As LLM assistants continue to improve their corpus of knowledge on developing code, they make real-time decisions and predictions about what you would like to happen next to successfully complete a project – but these are still assumptions and educated guesses based on the experiences of others.

Vibe code with caution

It should always be remembered that no coding process can be overseen exclusively with the help of AI assistance. Small, low-risk side projects are ideal for vibe coding. When it comes to larger, more complex projects – a human should always be a first-class part of that loop.

AI coding tools powered by LLMs can and do produce mistakes. Developing larger datasets of information and considering other factors such as quality control and security requires an expert eye to monitor for flaws or weaknesses that could be new or unexpected to an AI that is only thinking of other use cases, or the most common ones.

Knowledgeable developers are able to identify and test vulnerable code themselves – LLMs are sometimes simply unable to even register any mistakes they may produce.
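As a concrete, hypothetical illustration of the kind of flaw a human reviewer should catch, consider SQL injection: an assistant can happily generate the string-built query below and never flag it, while the parameterized version is what actually belongs in the codebase.

```python
import sqlite3

# Hypothetical example of a flaw a human reviewer should catch in
# AI-generated code: string-built SQL is vulnerable to injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"  # injectable
    return conn.execute(query).fetchall()

# The parameterized version the codebase actually needs.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 'alice')")

print(find_user_safe(conn, "alice"))                # [(1, 'a@example.com')]
print(find_user_safe(conn, "alice' OR '1'='1"))     # [] - malicious input stays inert
print(find_user_unsafe(conn, "alice' OR '1'='1"))   # returns every row - injection succeeds
```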

In short, what the AI assistant doesn’t know, it will not flag as an issue – be that a bug or a security vulnerability – and AI can be quite confidently wrong. Vibe coding is a new trend for the industry, but it cannot be seen as a silver bullet or a license for cutting corners in the development process when quality and stability matter.

Human expertise must always contribute to the process along the way as code bases grow, either vertically or horizontally.

We list the best site for hiring developers.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Cloud sovereignty in Europe and beyond: a tipping point? - Tuesday, July 15, 2025 - 03:48

Europe has begun to actively test the waters of cloud sovereignty, with 72% of European businesses prioritizing data control when selecting technology vendors. However, despite the growing desire to protect data integrity and security within European borders, over 70% of European businesses use US hyperscalers as their cloud computing provider.

While Google is doubling down on its commitment to data sovereignty, there is a growing concern over hyperscalers' dominance over the European market, as it leaves critical infrastructure in the hands of dominant foreign providers.

As sweeping tariffs continue to escalate tensions between Europe and US big tech, many are questioning whether Google's commitment is enough to protect Europe’s data from the Big Three.

US influence on Europe’s tech ecosystem

US policies, like the 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act, give US hyperscalers massive influence in Europe. The act grants US authorities and federal agencies access to data stored by US cloud service providers, even when hosted in Europe. This raises concerns about whether European data stored with US-based providers can ever truly be sovereign, even if housed within European borders.

Another concern in Europe: being cut off from US services. If Europe were to suddenly lose access to US cloud services or face rising costs, businesses would experience immediate setbacks, from service disruptions to escalating operational expenses. These concerns, along with a push for more leadership, independence, and economic competitiveness, have led to Europe steadily building its own cloud ecosystem – fostering a network of regional providers and implementing policies that reinforce data independence.

The question has now become, do these changes signal a true tipping point for Europe? Or are they merely the first steps in a much longer transformation?

Who is driving the adoption of sovereignty?

Both the public and private sectors play pivotal roles in advancing cloud sovereignty across Europe. Governments have established regulatory frameworks to enhance standards and avoid fragmentation. However, policymaking is often slow and subject to political negotiation, making private sector initiatives crucial in accelerating the shift toward true sovereignty.

The private sector has emerged as a driving force behind the practical implementation of sovereignty ideals. According to Dominique Tessier, Head of Cybersecurity Focus Group at the European Champions Alliance (ECA), “the move to make sure the EU Cloud Certification Scheme will finally include an ‘upper security layer’ is mainly driven by private European companies, as AIRBUS, EDF, Telecom Italia and others, whose efforts are gaining momentum”.

While companies like Microsoft have invested heavily in EU infrastructure to comply with local regulations, concerns remain about whether this represents true sovereignty or just a regulatory workaround. In contrast, European companies and partnerships, such as the joint venture between OVHcloud and Capgemini, are working to offer services independent of US control, aiming to create fully sovereign cloud solutions.

These initiatives reflect a growing acknowledgement of the strategic importance of cloud sovereignty. This is supported by Rahiel Nasir, Research Director, IDC Europe, who states that “interest in sovereignty has moved from governments and regulated sectors to all industry sectors, especially in Europe, and everywhere else where cloud is just beginning to pick up”. These efforts are becoming more widespread, indicating a collaborative push towards achieving European digital independence, but more needs to be done to make this achievable.

How can Europe achieve ‘true sovereignty?’

Achieving true cloud sovereignty requires more than just localized data storage; it demands complete independence from hyperscalers. To achieve this, Europe must go beyond compliance and foster a robust ecosystem of local providers that can match and work alongside hyperscalers.

While hyperscalers play a role in the broader cloud landscape, they should not be relied upon for sovereign data. According to Tessier, “the new US Administration has shown that it won’t hesitate to resort either to sudden price increases or even to stiffening delivery policy. It’s time to reduce our dependencies, not to consider that there is no alternative”.

For Nasir, the key is striking a balance: “In an ideal scenario, local providers and global providers should partner for sovereignty to work at scale” – leveraging global providers’ capabilities where appropriate while ensuring critical data and workloads remain within truly sovereign infrastructure.

By shifting away from hyperscaler dependency and building a diverse, sovereign infrastructure, organizations can move beyond regulatory compliance and achieve operational independence within their own jurisdictions.

The path to sovereignty

While Europe is leading the way in advocating for cloud and digital sovereignty, achieving true independence requires a strategic balance. Reducing reliance on US hyperscalers while developing competitive local alternatives is crucial. This balance involves leveraging public and private sector initiatives to create an environment where local providers can thrive and compete on a global scale.

Ultimately, sovereignty is not just about regulatory compliance; it's about a strategic vision for independence. Empowering local providers and creating interconnected networks will allow Europe to set its own digital agenda and drive long-term economic and technological growth, helping to achieve “true” sovereignty.

We list the best cloud storage.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Refinance Rates Move Up: Refinance Rates for July 15, 2025 - Tuesday, July 15, 2025 - 04:00
Several key refinance rates were higher this week, so it might be worth waiting.
Mortgage Rates Climb: Mortgage Rates for July 15, 2025 - Tuesday, July 15, 2025 - 04:05
Mortgage rates climbed higher over the last week. Here's what to expect if you're in the market for a home loan.
Social Security Disability Insurance July 2025: Your Money Is Headed Out - Tuesday, July 15, 2025 - 06:00
The next round of payments for July 2025 will soon be headed out to recipients. Here's the monthly payout schedule.
Brainpower unleashed: agentic AI and beyond bots - Tuesday, July 15, 2025 - 04:52

What truly separates us from machines? Free will, creativity and intelligence? But think about it. Our brains aren't singular, monolithic processors. The magic isn't in one “thinking part,” but rather in countless specialized agents—neurons—that synchronize perfectly.

Some neurons catalog facts, others process logic or govern emotion, still more retrieve memories, orchestrate movement, or interpret visual signals. Individually, they perform simple tasks, yet collectively, they produce the complexity we call human intelligence.

Now, imagine replicating this orchestration digitally. Traditional AI was always narrow: specialized, isolated bots designed to automate mundane tasks. But the new frontier is Agentic AI—systems built from specialized, autonomous agents that interact, reason and cooperate, mirroring the interplay within our brains.

Large language models (LLMs) form the linguistic neurons, extracting meaning and context. Specialized task agents execute distinct functions like retrieving data, analyzing trends and even predicting outcomes. Emotion-like agents gauge user sentiment, while decision-making agents synthesize inputs and execute actions.

The result is digital intelligence and agency. But do we need machines to mimic human intelligence and autonomy?

Every domain has a choke point—Agentic AI unblocks them all

Ask the hospital chief who’s trying to fill a growing roster of vacant roles. The World Health Organization predicts a global shortfall of 10 million healthcare workers by 2030. Doctors and nurses pull 16-hour shifts like it’s the norm. Claims processors grind through endless policy reviews, while lab technicians wade through a forest of paperwork before they can even test a single sample.

In a well-orchestrated Agentic AI world, these professionals get some relief. Claim-processing bots can read policies, assess coverage and even detect anomalies in minutes—tasks that would normally take hours of mind-numbing, error-prone work. Lab automation agents could receive patient data directly from electronic health records, run initial tests and auto-generate reports, freeing up technicians for the more delicate tasks that truly need human skill.

The same dynamic plays out across industries. Take banking, where anti-money laundering (AML) and know-your-customer (KYC) processes remain the biggest administrative headaches. Corporate KYC demands endless verification steps, complex cross-checks, and reams of paperwork. An agentic system can orchestrate real-time data retrieval, conduct nuanced risk analysis and streamline compliance so that staff can focus on actual client relationships rather than wrestling with forms.

Insurance claims, telecom contract reviews, logistics scheduling—the list is endless. Each domain has repetitive tasks that bog down talented people.

AI is the flashlight in a dark basement

Yes, agentic AI is the flashlight in a dark basement: shining a bright light on hidden inefficiencies, letting specialized agents tackle the grunt work in parallel, and giving teams the bandwidth to focus on strategy, innovation and building deeper connections with customers.

But the true power of agentic AI lies in its ability to solve not just for efficiency in a single department, but to scale seamlessly across multiple functions—even multiple geographies. This is an improvement at 100x scale.

1. Scalability: Agentic AI is modular at its core, allowing you to start small—like a single FAQ chatbot—then seamlessly expand. Need real-time order tracking or predictive analytics later? Add an agent without disrupting the rest. Each agent handles a specific slice of work, cutting development overhead and letting you deploy new capabilities without ripping apart your existing setup.

2. Anti-fragility: In a multi-agent system, one glitch won’t topple everything. If a diagnostic agent in healthcare goes offline, other agents—like patient records or scheduling—keep working. Failures stay contained within their respective agents, ensuring continuous service. That means your entire platform won’t crash because one piece needs a fix or an upgrade.

3. Adaptability: When regulations or consumer expectations shift, you can modify or replace individual agents—like a compliance bot—without forcing a system-wide overhaul. This piecemeal approach is akin to upgrading an app on your phone rather than reinstalling the entire operating system. The result? A future-proof framework that evolves alongside your business, eliminating massive downtimes or risky reboots.
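To make the anti-fragility point above concrete, here is a minimal, purely illustrative sketch in Python - not any particular vendor's framework, and the agent names are invented - of an orchestrator that keeps the other agents running when one fails:

```python
# Minimal sketch of fault isolation in a multi-agent setup: each agent is an
# independent callable, and one failure doesn't topple the others.
# Illustrative only; not a real orchestration framework.

def diagnostics_agent(ctx):
    raise RuntimeError("model endpoint offline")   # simulate an outage

def records_agent(ctx):
    return f"fetched records for patient {ctx['patient_id']}"

def scheduling_agent(ctx):
    return f"next slot booked for patient {ctx['patient_id']}"

AGENTS = {
    "diagnostics": diagnostics_agent,
    "records": records_agent,
    "scheduling": scheduling_agent,
}

def run_all(ctx):
    results = {}
    for name, agent in AGENTS.items():
        try:
            results[name] = agent(ctx)
        except Exception as err:                    # failure stays contained
            results[name] = f"unavailable ({err})"
    return results

print(run_all({"patient_id": 42}))
# diagnostics reports unavailable; records and scheduling still return results
```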

You can’t predict the next AI craze, but you can be ready for it

Generative AI was the breakout star a couple of years ago; agentic AI is grabbing the spotlight now. Tomorrow, something else will emerge—because innovation never rests. How, then, do we future-proof our architecture so each wave of new technology doesn’t trigger an IT apocalypse? According to a recent Forrester study, 70% of leaders who invested over 100 million dollars in digital initiatives credit one strategy for success: a platform approach.

Instead of ripping out and replacing old infrastructure each time a new AI paradigm hits, a platform integrates these emerging capabilities as specialized building blocks. When agentic AI arrives, you don’t toss your entire stack—you simply plug in the latest agent modules. This approach means fewer project overruns, quicker deployments, and more consistent outcomes.

Even better, a robust platform offers end-to-end visibility into each agent’s actions—so you can optimize costs and keep a tighter grip on compute usage. Low-code/no-code interfaces also lower the entry barrier for business users to create and deploy agents, while prebuilt tool and agent libraries accelerate cross-functional workflows, whether in HR, marketing, or any other department.

Platforms that support PolyAI architectures and a variety of orchestration frameworks allow you to swap different models, manage prompts and layer new capabilities without rewriting everything from scratch. Being cloud-agnostic, they also eliminate vendor lock-in, letting you tap the best AI services from any provider. In essence, a platform-based approach is your key to orchestrating multi-agent reasoning at scale—without drowning in technical debt or losing agility.

So, what are the core elements of this platform approach?

1. Data: Plugged into a common layer

Whether you’re implementing LLMs or agentic frameworks, your platform’s data layer remains the cornerstone. If it’s unified, each new AI agent can tap into a curated knowledge base without messy retrofitting.

2. Models: Swappable brains

A flexible platform lets you pick specialized models for each use case—financial risk analysis, customer service, healthcare diagnoses—then update or replace them without nuking everything else.

3. Agents: Modular workflows

Agents thrive as independent yet orchestrated mini-services. If you need a new marketing agent or a compliance agent, you spin it up alongside existing ones, leaving the rest of the system stable.

4. Governance: Guardrails at scale

When your governance structure is baked into the platform—covering bias checks, audit trails, and regulatory compliance—you remain proactive, not reactive, regardless of which AI “new kid on the block” you adopt next.
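Pulling those four elements together, here is a compact, purely illustrative sketch - stand-in functions rather than real model APIs, with all names invented - of a platform where the data layer is shared, models are swappable, agents are modular, and a governance guardrail wraps every call:

```python
# Purely illustrative sketch of the four platform elements above:
# shared data layer, swappable "model" callables, modular agents,
# and a governance guardrail wrapped around every call.

DATA_LAYER = {"policy_no": "P-1234", "region": "EU"}            # 1. Data

MODELS = {                                                       # 2. Models (swappable)
    "risk": lambda text: "low risk" if "EU" in text else "needs review",
}

audit_log = []

def governance(agent_name, output):                              # 4. Governance
    audit_log.append((agent_name, output))                       # audit trail
    return output

AGENTS = {                                                       # 3. Agents (modular)
    "compliance": lambda data: MODELS["risk"](f"{data['policy_no']} {data['region']}"),
}

def run(agent_name):
    output = AGENTS[agent_name](DATA_LAYER)
    return governance(agent_name, output)

print(run("compliance"))   # 'low risk'
print(audit_log)           # [('compliance', 'low risk')]

# Swapping the risk model or registering a new agent touches only its own
# entry, leaving the data layer and the rest of the registry untouched.
```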

A platform approach is your strategic hedge against technology’s ceaseless evolution—ensuring that no matter which AI trend takes center stage, you’re ready to integrate, iterate, and innovate.

Start small and orchestrate your way up

Agentic AI isn’t entirely new—Tesla’s self-driving cars employ multiple autonomous modules. The difference is that new orchestration frameworks make such multi-agent intelligence widely accessible. No longer confined to specialized hardware or industries, Agentic AI can now be applied to everything from finance to healthcare, fueling renewed mainstream interest and momentum. Design for platform-based readiness.

Start with a single agent addressing a concrete pain point and expand iteratively. Treat data as a strategic asset, select your models methodically, and bake in transparent governance. That way, each new AI wave integrates seamlessly into your existing infrastructure—boosting agility without constant overhauls.

We list the best IT Automation software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
