News
- Sensitive files held by US courts are being targeted
- The US Judiciary is strengthening its IT infrastructure following incidents
- The DOJ, DHS, and others were called in to help
The US Judiciary system has confirmed suffering a cyberattack, and says it is now working on reinforcing its systems to prevent further incursions.
In a press release published on the US Courts website, the body said it recently experienced, “escalated cyberattacks of a sophisticated and persistent nature.”
Without detailing the attacks or the perpetrators, the announcement said the crooks were targeting its case management system, looking for sensitive files hosted there.
Courts in the crosshairs
“The vast majority of documents filed with the Judiciary’s electronic case management system are not confidential and indeed are readily available to the public, which is fundamental to an open and transparent judicial system.
However, some filings contain confidential or proprietary information that are sealed from public view,” the announcement reads.
“These sensitive documents can be targets of interest to a range of threat actors. To better protect them, courts have been implementing more rigorous procedures to restrict access to sensitive documents under carefully controlled and monitored circumstances.”
The announcement does not go into detail about the reinforcement efforts. It says that the Administrative Office of the United States Courts is working with Congress, the Department of Justice (DoJ), the Department of Homeland Security (DHS), and other agencies.
US courts, both local and federal, have often been the targets of different cybercriminals.
Back in 2020, a cyberattack against the US federal court system ended up being far more damaging than initially thought. In 2024, unnamed hackers attacked court systems across the US state of Washington, forcing the judicial organization to shut down parts of its infrastructure to prevent further damage.
In summer 2024, the Superior Court of Los Angeles County, the largest in the United States, suffered a ransomware attack which forced it to close down its entire operation for a day.
You might also like
- US court software and systems have some worrying security flaws
- Take a look at our guide to the best authenticator app
- We've rounded up the best password managers
The touted benefits of artificial intelligence (AI) are vast. It’s promised to boost efficiency, create happier workers and drive innovation. Sounds great – but at what point do you see value for money? This is an issue that many businesses are continuing to grapple with, and the data paints a sobering picture.
Research reveals a mere 36% of organizations have successfully scaled their GenAI solutions, with just 13% achieving a significant, enterprise-level impact. The gap between pilot and profit is becoming a chasm, with Gartner predicting 30% of GenAI projects will be abandoned after the proof-of-concept stage this year alone.
So why the disconnect? The problem isn't a failure of the technology itself, but of foresight. In the race for AI dominance, many leaders are focused on the promise of the technology rather than calculating the true cost of the journey required to extract its value.
They often underestimate the long-term financial commitment, the necessary infrastructure overhaul and the critical change management needed to turn a promising algorithm into a pillar of the business. To move from AI ambition to AI achievement, it’s time for leaders to confront these hidden economics head-on, starting with the risks you can't yet see.
Planning for tomorrow’s AI
UK businesses are spending an average of £321,000 on AI, but 44% report seeing only minor gains. This disconnect between investment and impact is often rooted in a failure to plan for the hidden, long-term risks that emerge after deployment.
These risks fall into two main categories: the shifting landscape of future regulation and the unforeseen realities of implementation costs. Without a clear global rulebook, businesses operate in a regulatory fog. A patchwork of national policies means a system deemed compliant today could be rendered a liability by new rules tomorrow, putting a ticking economic clock on the investment.
This lack of foresight also applies to tangible costs, where on-premise expenses escalate with energy-intensive hardware, and cloud deployments trigger punishing "bill shock" from data charges not factored into initial plans.
A comprehensive solution to these uncertainties is to build with a flexible mindset from day one. A viable strategy requires designing systems that can be easily modified and implementing clear, strong policies for how data is managed. However, a flexible system is only as good as the team that manages it.
This is where addressing the skills gap becomes a necessity. Investing in upskilling and cultivating a culture of continuous learning is not just another cost; it is the core capability that allows an organization to adapt to whatever challenges – technical or legal – the future holds.
This means looking beyond a small pool of perfect-fit AI experts and instead hiring for adaptability, and seeking out individuals with strong foundational skills and capacity to constantly learn new technologies.
The sustainable AI equation
As AI's computational needs intensify, sustainability has shifted from a corporate ideal to a core economic imperative. The sheer power of the processors driving modern AI generates immense heat, and data centers are at the epicenter of this challenge.
With cooling already accounting for nearly 40% of a data center's energy consumption, traditional air-cooling methods are proving to be a bottleneck. Capable of capturing only 30% of the heat generated by servers, these legacy systems are not just inefficient, but a direct threat to the scalability and financial viability of the high-performance AI applications of tomorrow.
This is where advanced solutions like direct-to-chip and immersion liquid cooling become necessary. By using fluids to dissipate heat with far greater efficiency, these technologies address the problem at its source. Immersion cooling, for example, can capture 100% of the heat produced by servers, a capability that translates directly into lower carbon emissions and significant operational cost savings.
In addition, liquid cooling's superior thermal management allows data centers to handle much higher server densities, maximizing the value of existing infrastructure and reducing the need for costly physical expansions. This is a crucial advantage for scaling AI efficiently and responsibly. It transforms sustainability from a cost center into a powerful competitive edge.
Building the foundation for lasting AI value
The path forward is about approaching AI’s potential with a new strategic maturity. Success in this next chapter means treating AI as a business transformation built on a sound economic foundation, where the hidden costs of regulation, implementation and sustainability are interconnected pillars.
The true return on investment will not be found in simple cost savings, but in the ability to make smarter decisions faster, adapt to a changing market and build a lasting edge over the competition. No matter what industry you are in, it’s time to stop asking what the tool can do, and start asking whether your organization is truly ready to wield its power.
We've featured the best green web hosting.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro