News

Today's NYT Connections Hints, Answers and Help for July 10, #760 - Wednesday, July 9, 2025 - 16:00
Here are some hints and the answers for the NYT Connections puzzle for Thursday, July 10, No. 760.
An OpenAI Web Browser Is Imminent, Report Says. That Would Really Shake Up the Web - Wednesday, July 9, 2025 - 16:41
An AI-powered browser from the ChatGPT maker would inevitably compete with Google Chrome.
PlayStation Plus Subscribers Can Get Chromed Out in Cyberpunk 2077 Now - Wednesday, July 9, 2025 - 17:02
Subscribers -- and their kids -- can also play other games on PlayStation Plus, like the Bluey game, soon.
Hosting.com (formerly A2 Hosting) Review: A Great Option for First-Time Site Owners - Wednesday, July 9, 2025 - 17:41
I tested Hosting.com’s shared hosting tools and customer service -- here’s what I learned.
How to Use Chatbots and What to Know About These AI Tools - Wednesday, July 9, 2025 - 18:10
Artificial intelligence tools like ChatGPT and Claude are set apart from search engines thanks to their chat component.
Today's NYT Connections: Sports Edition Hints and Answers for July 10, #290 - Wednesday, July 9, 2025 - 18:24
Here are hints and the answers for the NYT Connections: Sports Edition puzzle for July 10, No. 290
6 Natural Sugar Substitutes To Satisfy Your Cravings - Wednesday, July 9, 2025 - 18:24
Cut your processed sugar intake with these tasty alternatives.
This is the weirdest looking AI MAX+ 395 Mini PC that I've ever seen — and you can apparently hold it comfortably in the palm of your hand - Wednesday, July 9, 2025 - 16:34
  • AOOSTAR’s NEX395 has the power, but the cooling system remains a complete mystery
  • Radeon 8060S beats RX 7600 XT in specs, making external GPU pairing confusing
  • Without OCuLink, the eGPU dock likely suffers major bottlenecks in real-world tasks

AOOSTAR NEX395 is the latest in a growing field of AI-focused mini PCs which comes in a box-like casing that departs from the more common designs found in the segment.

The company says the NEX395 uses AMD’s flagship Strix Halo processor, a 16-core, 32-thread chip with boost speeds up to 5.1GHz.

It includes 40 RONA 3.5 compute units and appears to support up to 128GB of memory, most likely LPDDR5X given the compact casing.

Memory capacity matches rivals, but key hardware details are missing

This level of memory is in line with other mini PCs targeting AI development workflows, especially those involving large language models.

However, no details have been confirmed regarding storage, cooling, or motherboard layout.

The device looks more like an oversized SSD enclosure or an external GPU dock than a full-fledged desktop system.

Its slim, rectangular, vent-heavy design completely deviates from the usual cube or NUC-style mini PCs.

Holding it in your palm feels more like gripping a chunky power bank or a Mac mini cut in half, definitely not what you’d expect from a 16-core AI workstation.

The layout makes you question where the thermal headroom or upgradable internals even fit.

The AOOSTAR NEX395 includes an integrated Radeon 8060S GPU, part of the Ryzen AI MAX+ 395 APU.

However, it also sells an external eGPU enclosure featuring the Radeon RX 7600 XT.

Given that the integrated GPU already offers a newer architecture and more compute units than the RX 7600 XT, the use case for pairing the two is unclear.

Also, the NEX395 does not appear to support high-speed eGPU connectivity like OCuLink, which would limit bandwidth for external graphics support.

Port selection includes dual Ethernet ports, four USB-A ports, USB-C, HDMI, and DisplayPort outputs, along with a dedicated power input, suggesting reliance on an external power brick.

Without confirmed thermal design or sustained performance metrics, it’s unclear whether this system can function reliably in roles normally filled by the best workstation PC or best business PC options.

Unfortunately, the pricing details for the NEX395 are currently unavailable.

Given the $1500–$2000 range of comparable models such as the HP Z2 Mini G1a and GMKTEC EVO-X2, AOOSTAR’s model is unlikely to be cheap.

Via Videocardz

You might also like
Ceramic-based startup wants to put more than 100,000TB in a 42U rack by 2030 — but it will take almost 50 years to fill it up - Wednesday, July 9, 2025 - 17:46
  • The first-generation system is slower than tape but aims to scale up rapidly by 2030
  • Cerabyte’s roadmap involves physics so advanced it sounds like sci-fi with helium ion beams
  • Long-term capacity hinges on speculative tech that doesn’t yet exist outside lab settings

Munich-based startup Cerabyte is developing what it claims could become a disruptive alternative to magnetic tape in archival data storage.

Using femtosecond lasers to etch data onto ceramic layers within glass tablets, the company envisions racks holding more than 100 petabytes (100,000TB) of data by the end of the decade.

Yet despite these bold goals, practical constraints mean it may take decades before such capacity sees real-world usage.

The journey to 100PB racks starts with slower, first-generation systems

CMO and co-founder Martin Kunze outlined the vision at the recent A3 Tech Live event, noting the system draws on “femtosecond laser etching of a ceramic recording layer on a glass tablet substrate.”

These tablets are housed in cartridges and shuttled by robotic arms inside tape library-style cabinets, a familiar setup with an unconventional twist.

The pilot system, expected by 2026, aims to deliver 1 petabyte per rack with a 90-second time to the first byte and just 100MBps in sustained bandwidth.

Over several refresh cycles, Cerabyte claims that performance will increase, and by 2029 or 2030, it anticipates “a 100-plus PB archival storage rack with 2GBps bandwidth and sub-10-second time to first byte.”

The company’s long-term projections are even more ambitious, and it believes that femtosecond laser technology could evolve into “a particle beam matrix tech” capable of reducing bit size from 300nm to 3nm.

With helium ion beam writing by 2045, Cerabyte imagines a system holding up to 100,000PB in a single rack.

However, such claims are steeped in speculative physics and should, as the report says, be “marveled at but discounted as realizable technology for the time being.”

Cerabyte’s stated advantages over competitors such as Microsoft’s Project Silica, Holomem, and DNA storage include greater media longevity, faster access times, and lower cost per terabyte.

“Lasting more than 100 years compared to tape’s 7 to 15 years,” said Kunze, the solution is designed to handle long-term storage with lower environmental impact.

He also stated the technology could ship data “at 1–2GBps versus tape’s 1GBps,” and “cost $1 per TB against tape’s $2 per TB.”

So far, the company has secured around $10 million in seed capital and over $4 million in grants.

It is now seeking A-round VC funding, with backers including Western Digital, Pure Storage, and In-Q-Tel.

Whether Cerabyte becomes a viable alternative to traditional archival storage methods or ends up as another theoretical advance depends not just on density, but on long-term reliability and cost-effectiveness.

Even if it doesn't become a practical alternative to large HDDs by 2045, Cerabyte’s work may still influence the future of long-term data storage, just not on the timeline it projects.

Via Blocksandfiles

You might also like
How to Watch the Wimbledon Women's Singles Semifinal: Sabalenka vs. Anisimova for Free - Wednesday, July 9, 2025 - 20:00
Which one will advance to the finals? Here's how to catch the action with a free trial or live TV subscription.
How to Watch Wimbledon Women's Singles Semifinals Iga Swiatek vs. Belinda Bencic for Free - Wednesday, July 9, 2025 - 21:00
It's the first time both women have made it this far in Wimbledon, but only one can advance to the finals.
Today's NYT Mini Crossword Answers for Thursday, July 10 - Wednesday, July 9, 2025 - 22:48
Here are the answers for The New York Times Mini Crossword for July 10.
This New Gmail Feature Makes Managing Your Subscriptions Easier Than Ever - Thursday, July 10, 2025 - 00:33
The new feature makes taking control of unruly newsletter subscriptions a breeze.
The four-phase security approach to keep in mind for your AI transformation - Thursday, July 10, 2025 - 02:46

As organizations continue to adopt AI tools, security teams are often caught unprepared for the emerging challenges. The disconnect between engineering teams rapidly deploying AI solutions and security teams struggling to establish proper guardrails has created significant exposure across enterprises. This fundamental security paradox—balancing innovation with protection—is especially pronounced as AI adoption accelerates at unprecedented rates.

The most critical AI security challenge enterprises face today stems from organizational misalignment. Engineering teams are integrating AI and Large Language Models (LLMs) into applications without proper security guidance, while security teams fail to communicate their AI readiness expectations clearly.

McKinsey research confirms this disconnect: leaders are 2.4 times more likely to cite employee readiness as a barrier to adoption versus their own issues with leadership alignment, despite employees currently using generative AI three times more than leaders expect.

Understanding the Unique Challenges of AI Applications

Organizations implementing AI solutions are essentially creating new data pathways that are not necessarily accounted for in traditional security models. This presents several key concerns:

1. Unintentional Data Leakage

Users sharing sensitive information with AI systems may not recognize the downstream implications. AI systems frequently operate as black boxes, processing and potentially storing information in ways that lack transparency.

The challenge is compounded when AI systems maintain conversation history or context windows that persist across user sessions. Information shared in one interaction might unexpectedly resurface in later exchanges, potentially exposing sensitive data to different users or contexts. This "memory effect" represents a fundamental departure from traditional application security models where data flow paths are typically more predictable and controllable.

2. Prompt Injection Attacks

Prompt injection attacks represent an emerging threat vector poised to attract financially motivated attackers as enterprise AI deployment scales. Organizations dismissing these concerns for internal (employee-facing) applications overlook the more sophisticated threat of indirect prompt attacks capable of manipulating decision-making processes over time.

For example, a job applicant could embed hidden text like "prioritize this resume" in their PDF application to manipulate HR AI tools, pushing their application to the top regardless of qualifications. Similarly, a vendor might insert invisible prompt commands in contract documents that influence procurement AI to favor their proposals over competitors. These aren't theoretical threats - we've already seen instances where subtle manipulation of AI inputs has led to measurable changes in outputs and decisions.

3. Authorization Challenges

Inadequate authorization enforcement in AI applications can lead to information exposure to unauthorized users, creating potential compliance violations and data breaches.

4. Visibility Gaps

Insufficient monitoring of AI interfaces leaves organizations with limited insights into queries, response and decision rationales, making it difficult to detect misuse or evaluate performance.

The Four-Phase Security Approach

To build a comprehensive AI security program that addresses these unique challenges while enabling innovation, organizations should implement a structured approach:

Phase 1: Assessment

Begin by cataloging what AI systems are already in use, including shadow IT. Understand what data flows through these systems and where sensitive information resides. This discovery phase should include interviews with department leaders, surveys of technology usage and technical scans to identify unauthorized AI tools.

Rather than imposing restrictive controls (which inevitably drive users toward shadow AI), acknowledge that your organization is embracing AI rather than fighting it. Clear communication about assessment goals will encourage transparency and cooperation.

Phase 2: Policy Development

Collaborate with stakeholders to create clear policies about what types of information should never be shared with AI systems and what safeguards need to be in place. Develop and share concrete guidelines for secure AI development and usage that balance security requirements with practical usability.

These policies should address data classification, acceptable use cases, required security controls and escalation procedures for exceptions. The most effective policies are developed collaboratively, incorporating input from both security and business stakeholders.

Phase 3: Technical Implementation

Deploy appropriate security controls based on potential impact. This might include API-based redaction services, authentication mechanisms and monitoring tools. The implementation phase should prioritize automation wherever possible.

Manual review processes simply cannot scale to meet the volume and velocity of AI interactions. Instead, focus on implementing guardrails that can programmatically identify and protect sensitive information in real-time, without creating friction that might drive users toward unsanctioned alternatives. Create structured partnerships between security and engineering teams, where both share responsibility for secure AI implementation.

Phase 4: Education and Awareness

Educate users about AI security. Help them understand what information is appropriate to share and how to use AI systems safely. Training should be role-specific, providing relevant examples that resonate with different user groups.

Regular updates on emerging threats and best practices will keep security awareness current as the AI landscape evolves. Recognize departments that successfully balance innovation with security to create positive incentives for compliance.

Looking Ahead

As AI becomes increasingly embedded throughout enterprise processes, security approaches must evolve to address emerging challenges. Organizations viewing AI security as an enabler rather than an impediment will gain competitive advantages in their transformation journeys.

Through improved governance frameworks, effective controls and cross-functional collaboration, enterprises can leverage AI's transformative potential while mitigating its unique challenges.

We've listed the best online cybersecurity courses.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Key Refinance Rate Moves Lower: Refinance Rates for July 10, 2025 - Thursday, July 10, 2025 - 04:00
Refinance rates were mixed, but one key rate fell. Even a slightly lower interest rate can save you money on your home loan.
Mortgage Rates Climb: Mortgage Rates for July 10, 2025 - Thursday, July 10, 2025 - 04:05
Homebuyers are facing unaffordable home prices and high mortgage rates. Will the housing market change before fall?
I Tried MyFitnessPal's New Feature and It Helped Me Plan Healthy Meals That Actually Taste Good - Thursday, July 10, 2025 - 05:03
I tried MyFitnessPal's new Meal Planner feature, which allows you to meal plan and order your groceries in one app.
Best Compression Socks for 2025 - Thursday, July 10, 2025 - 06:00
Improve blood flow and reduce discomfort with the best compression socks, handpicked by CNET's experts.
Why the AI boom requires an Wyatt Earp - Thursday, July 10, 2025 - 04:41

At a time when many believe that oversight of the Artificial Intelligence industry is desperately needed, the US government appears to have different ideas. The "One Big Beautiful Bill Act" (OBBBA)—recently given the nod by the House of Representatives—includes a 10-year moratorium on state and local governments enacting or enforcing regulations on AI models, systems, or automated decision-making tools.

Supporters claim the goal is to streamline AI regulation by establishing federal oversight, thereby preventing a patchwork of state laws that could stifle innovation and create compliance chaos. Critics warn that the moratorium could leave consumers vulnerable to emerging AI-related issues, such as algorithmic bias, privacy violations, and the spread of deepfakes.

Basically, if the AI sector is the Wild West, no one will be allowed to clean up Dodge.

Why should we care?

History may not literally repeat itself, but there are historical patterns and trends that we can view and hopefully be informed by, and our history books are packed with examples of technology reshaping the lives of the workforce.

And be it in the form of James Watt’s steam engine or Henry Ford’s moving assembly line, the cost of the progress brought by fresh technology is regularly paid by the large numbers of people sent home without a pay packet.

And AI will cost jobs too.

Experts such as those at McKinsey, the Lancet, or the World Economic Forum (WEF) may not agree on exact numbers or percentages of lost jobs, but the consistent message is that it will be bad:

  • 30% of US work hours across all sectors will be automated by 2030 says McKinsey
  • 25% of medical administrative tasks could vanish by 2035 according to a Lancet study
  • 39% of existing skill sets will become outdated between now and 2030 warns WEF

Of course, as with all new technologies, new jobs will be created. But we can’t all be prompt engineers.

The Great Brain Robbery

Essentially, those hit hardest by the bulk of new technologies from the Spinning Jenny onwards were the ones engaged to carry out physical work. But AI wants to muscle in on the intellectual and creative domains previously considered uniquely human. For example, nonpartisan American think tank the Pew Research Center reckons 30% of media jobs could be automated by 2035.

And those creative jobs are under threat because creatives are being ripped off.

Many AI models are trained on massive datasets scraped from the internet, and these often include articles, books, images, music and even code that are protected by copyright laws, but AI companies lean heavily towards take-first-ask-later. Obviously, artists, writers, and other content creators see this practice as unauthorized use of their intellectual property and they argue that ultimately, it’s not even in the best interests of the AI sector.

If AI takes work away from human creatives—devastating creative industries already operating on thin margins—there will be less and less innovative content to feed to AI systems which will result in AI feeding off homogenized AI content – a derivative digital snake eating its own tail.

A smarter way forward would be to find a framework where creatives are compensated for use of their work to ensure the sustainability of human produced product. The music industry already has a model where artists receive payments via performing rights organizations such as PRS, GEMA and BMI. The AI sector needs to find something similar.

To make this happen, regulators may need to be involved.

Competitive opportunity versus minimizing societal harm

Without regulation, we risk undermining the economic foundations of creative and knowledge-based industries. Journalism, photography, literature, music, and visual arts depend on compensation mechanisms that AI training currently bypasses.

The United Kingdom and the European Union are taking notably different paths when it comes to regulating AI. The EU is pursuing a strict, binding regulatory framework, an approach designed to protect fundamental rights, promote safety, and ensure ethical use of AI across member states. In contrast, the UK is currently opting for a more flexible approach, emphasizing innovation and light-touch oversight aiming to encourage rapid AI development and attracting investment.

But this light-touch strategy could be a massive misstep – one that in the long term could leave everyone wishing we’d thought things through.

While AI enthusiasts may initially be pleased with minimal interference from regulators, eventually AI businesses will come up against consumer trust, something they absolutely need.

While AI businesses operating in Europe will be looking at higher compliance costs, there is also a clearer regulatory landscape and therefore more likely to be greater consumer trust – a huge commercial advantage.

Meanwhile, AI businesses operating in light-touch markets (such as the UK) need to consider how their AI data practices align with their (and their competitors’) brand values and customer expectations. As public awareness grows, companies seen as exploiting creators may face reputational damage. And a lack of consumer confidence could lead to a shift in mindset from previously arm’s-length regulators.

Regardless of the initial regulatory environment, early adopters of ethical AI practices may gain competitive advantages as regulatory requirements catch up to ethical standards. Perhaps the wisest way forward is to voluntarily make Dodge City a better place, even if there’s no sheriff in town – for now.

I tried 70+ best AI tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Google is offering all its best Gemini AI features for free, but only long enough to get you hooked - Thursday, July 10, 2025 - 05:38

This week, Samsung Galaxy Unpacked unveiled its latest smartphones, the Galaxy Z Fold 7, Flip 7, and Flip 7 FE, all integrated with Google Gemini AI features. To sweeten the deal for potential customers, Google announced a special offer: six free months of Google AI Pro for those who purchase the new phones. This premium subscription service includes access to the Gemini 2.5 Pro model, the Google Veo 3 AI video generator, two terabytes of cloud storage, and early access to upcoming AI features.

Of course, once those six months are up, you'll have to pay the standard $20 a month to keep your subscription. But, Google likely believes more than a few people will be happy to pay after they get accustomed to its AI toolkit. The psychology behind this is as simple as free samples at the grocery store. Google isn’t trying to sell you a subscription right now because it thinks you won't want to give it up just because it isn't free anymore.

It's a pretty impressive set of features. Veo 3 is one of the most powerful consumer-facing video generators available. And Gemini 2.5 Pro is far more coherent in conversation than its predecessors.

Gemini try and buy

It's easy to imagine how Google hopes the six months will go. You might spend a month fiddling with Veo and creating movies about your dog going on adventures. Or start turning to Gemini to summarize very long emails, and eventually every email. Or you might get a great recipe from a random prompt to Gemini and soon use it to plan your every meal. By the end of six months, Google's AI might just be what you turn to a dozen times a day as a reflex. By the time Google asks for $20 a month, you might even consider it a bargain.

That's Google's dream scenario, but it comes with a risk. Google is betting that people will find these tools indispensable. But, if people take it for granted for six months, they might resent having to pay for it, no matter how much you enjoy playing with Veo 3 and talking to Gemini. Nobody likes the feeling of having something useful pulled away unless you pony up. That's probably even more likely when it comes bundled with a device that already costs over a thousand dollars. There’s a version of this where the user relationship becomes less “wow, this is useful,” and more “wait, I have to pay extra for that now?”

But, I wouldn't be surprised if a scenario somewhere between the extremes still makes Google happy. Tying its most advanced AI tools to Samsung’s brand-new hardware is smart. The Galaxy Z Fold 7 and Flip 7 are devices people buy because they want the bells and whistles. They're practically built for people who like to show off. In other words, the people most likely to find ways to use AI enough to justify $20.

But it cuts both ways. Because if the experience feels too essential, people will feel punished when it disappears. And if it doesn’t feel essential enough, they won’t bother subscribing. The six-month trial is walking a very fine line between generosity and locking someone in to an AI future.

You might also like

Pages