News
- WeTransfer users were outraged when an updated terms of service seemed to imply their data would be used to train AI models.
- The company moved fast to assure users it does not use uploaded content for AI training.
- WeTransfer rewrote the clause in clearer language.
File-sharing platform WeTransfer spent a frantic day reassuring users that it has no intention of using any uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used for making or improving machine learning tools.
The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data "for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy."
That part about machine learning and the general broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with your data, without any specific safeguards or clarifying qualifiers to alleviate suspicions.
Perhaps understandably, a lot of WeTransfer users, who include many creative professionals, were upset at what this seemed to imply. Many started posting their plans to switch away from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or switch to old-school physical delivery methods.
"Time to stop using @WeTransfer who from 8th August have decided they'll own anything you transfer to power AI" pic.twitter.com/sYr1JnmemX (July 15, 2025)
WeTransfer noted the growing furor around the language and rushed to try and put out the fire. The company rewrote the section of the ToS and shared a blog explaining the confusion, promising repeatedly that no one's data would be used without their permission, especially for AI models.
"From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We’ve since updated the terms further to make them easier to understand," WeTransfer wrote in the blog. "We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension."
While still granting a standard license for improving WeTransfer, the new text omits references to machine learning, focusing instead on the familiar scope needed to run and improve the platform.
Clarified privacy
If this feels a little like deja vu, that's because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change to the company's fine print implied that Dropbox was taking content uploaded by users in order to train AI models. Public outcry led to Dropbox apologizing for the confusion and fixing the offending boilerplate.
The fact that it happened again in such a similar fashion is interesting not because of the awkward legal language used by software companies, but because it implies a knee-jerk distrust in these companies to protect your information. Assuming the worst is the default approach when there's uncertainty, and the companies have to make an extra effort to ease those tensions.
Creative professionals are sensitive to even the appearance of data misuse. In an era where tools like DALL·E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. Given the lawsuits and boycotts by artists over how their creations are used, not to mention broader suspicions of corporate data use, the kinds of reassurances offered by WeTransfer are probably something tech companies will want to have in place early on, lest they face the misplaced wrath of their customers.
It's a scenario that plays out far too often: A mid-sized company runs a routine threat validation exercise and stumbles on something unexpected, like an old infostealer variant that has been quietly active in their network for weeks.
This scenario doesn’t require a zero-day exploit or sophisticated malware. All it takes is one missed setting, inadequate endpoint oversight, or a user clicking what they shouldn’t. Such attacks don’t succeed because they’re advanced. They succeed because routine safeguards aren’t in place.
Take Lumma Stealer, for example. This infostealer is delivered through a simple phishing attack that lures users into running a fake CAPTCHA script. It spreads quickly but can be stopped cold by something as routine as restricting PowerShell access and providing basic user training. However, in many environments, even those basic defenses aren't deployed.
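What "restricting PowerShell access" looks like will differ by environment, but even verifying the current state is a useful first step. Below is a minimal, hypothetical sketch in Python, assuming Windows endpoints, that reads the machine-wide PowerShell execution policy from the registry; the set of "acceptable" policies is an assumption, and a check like this complements, rather than replaces, controls such as AppLocker or Constrained Language Mode.

```python
"""Minimal sketch: audit a Windows endpoint's PowerShell execution policy.

Illustrative only -- the policy names treated as "acceptable" below are an
assumption; your organization may enforce a different baseline (for example
via AppLocker or Constrained Language Mode) that this check does not cover.
"""
import winreg

# The machine-wide execution policy is stored under this registry key.
POLICY_KEY = r"SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell"
ACCEPTABLE = {"Restricted", "AllSigned"}  # assumed baseline, adjust as needed


def powershell_policy() -> str:
    """Return the configured execution policy, or 'Undefined' if not set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "ExecutionPolicy")
            return value
    except FileNotFoundError:
        return "Undefined"


if __name__ == "__main__":
    policy = powershell_policy()
    status = "OK" if policy in ACCEPTABLE else "REVIEW"
    print(f"PowerShell execution policy: {policy} [{status}]")
```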
This is the story behind many breaches today. Not headline-grabbing hacks or futuristic AI assaults—just overlooked updates, fatigued teams and basic cyber hygiene falling through the cracks.
Security Gaps That Shouldn't Exist in 2025
Security leaders know the drill: patch the systems, limit access and train employees. Yet these essentials often get neglected. While the industry chases the latest exploits and talks up advanced tools, attackers keep targeting the same weak points. They don't have to reinvent the wheel. They just need to find one that's loose.
Just as the same old techniques are still at work, old malware is making a comeback. Variants like Mirai, Matsu and Klopp are resurfacing with minor updates and major impact. These aren’t sophisticated campaigns, but recycled attacks retooled just enough to slip past tired defenses.
The reason they work isn’t technical, it’s operational. Security teams are burned out. They’re managing too many alerts, juggling too many tools and doing it all with shrinking budgets and rising expectations. In this kind of environment, the basics don’t just get deprioritized, they get lost.
Burnout Is a Risk Factor
The cybersecurity industry often defines risk in terms of vulnerabilities, threat actors and tool coverage, but burnout may be the most overlooked risk of all. When analysts are overwhelmed, they miss routine maintenance. When processes are brittle, teams can't keep up with the volume. When bandwidth runs out, even critical tasks can get sidelined.
This isn’t about laziness. It’s about capacity. Most breaches don’t reveal a lack of intelligence. They just demonstrate a lack of time.
Meanwhile, phishing campaigns are growing more sophisticated. Generative AI is making it easier for attackers to craft personalized lures. Infostealers continue to evolve, disguising themselves as login portals or trusted interfaces that lure users into running malicious code. Users often infect themselves, unknowingly handing over credentials or executing code.
These attacks still rely on the same assumptions: someone will click. The system will let it run. And no one will notice until it’s too late.
Why Real-World Readiness Matters More Than Tools
It's easy to think readiness means buying new software or hiring a red team, but true preparedness is quieter and more disciplined. It's about confirming that defenses such as access restrictions, endpoint rules and user permissions are working against the actual threats.
Achieving this level of preparedness takes more than monitoring generic threat feeds. Knowing that ransomware is trending globally isn’t the same as knowing which threat groups are actively scanning your infrastructure. That’s the difference between a broader weather forecast and radar focused on your ZIP code.
Organizations that regularly validate controls against real-world, environment-specific threats gain three key advantages.
First, they catch problems early. Second, they build confidence across their team: when everyone knows what to expect and how to respond, fatigue gives way to clarity. Third, by knowing which threats matter, and which ones are focused on them, they can prioritize the fundamental activities that otherwise get ignored.
You may not need to patch every CVE right now, just the ones being used by the threat actors targeting you. Which areas of your network are they actively performing reconnaissance on? Those subnets probably deserve extra focus for patching and remediation.
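As a concrete illustration of that kind of threat-informed prioritization, here is a minimal Python sketch. Every input in it is hypothetical: in practice the exploited-CVE set, the subnets under reconnaissance, and the scanner findings would come from your own threat intelligence and telemetry rather than hard-coded literals.

```python
"""Minimal sketch of threat-informed patch prioritization.

All data below is hypothetical -- real inputs would come from threat intel
feeds, network telemetry and the vulnerability scanner, not literals.
"""

# CVEs known to be used by threat actors actively targeting this organization.
actor_exploited_cves = {"CVE-2024-0001", "CVE-2024-0002"}

# Subnets where reconnaissance activity has actually been observed.
recon_subnets = {"10.0.20.0/24"}

# Open findings from the vulnerability scanner: (host, subnet, cve).
findings = [
    ("web-01", "10.0.20.0/24", "CVE-2024-0001"),
    ("db-01", "10.0.30.0/24", "CVE-2024-0002"),
    ("hr-05", "10.0.40.0/24", "CVE-2023-9999"),
]


def priority(subnet: str, cve: str) -> int:
    """Lower number = patch sooner. Weights are illustrative."""
    score = 2  # default: routine patch cycle
    if cve in actor_exploited_cves:
        score -= 1  # exploited by actors focused on us
    if subnet in recon_subnets:
        score -= 1  # sitting where they are already looking
    return score


for host, subnet, cve in sorted(findings, key=lambda f: priority(f[1], f[2])):
    print(f"P{priority(subnet, cve)}: {host} ({subnet}) -> {cve}")
```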
Security Doesn't Need to Be Sexy, It Needs to Work
There's a cultural bias in cybersecurity toward innovation and incident response. The new tool, the emergency patch and the major breach all get more attention than the daily habits that quietly prevent problems.
Real resilience depends on consistency. It means users can’t run untrusted PowerShell scripts. It means patches are applied on a prioritized schedule, not “when we get around to it.” It means phishing training isn’t just a checkbox, but a habit reinforced over time.
These basics aren’t glamorous, but they work. In an environment where attackers are looking for the easiest way in, doing the simplest things correctly is one of the most effective strategies a team can take.
Discipline Is the New Innovation
The cybersecurity landscape will continue to change. AI will keep evolving, adversaries will go on adapting, and the next headline breach is likely already in motion. The best defense isn't more noise or more tech, but better discipline.
Security teams don’t need to do everything. They need to do the right things consistently. That starts with reestablishing routine discipline: patch, configure, test, rinse and repeat. When those fundamentals are strong, the rest can hold.
For CISOs, now is the time to ask a simple but powerful question: Are we doing the basics well, and can we prove it? Start by assessing your organization’s hygiene baseline. What patches are overdue? What controls haven’t been tested in months? Where are your people stretched too thin to execute the essentials? The answers won’t just highlight the risks, they’ll point toward the pathway to resilience.
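A hygiene baseline can start out very simply. The sketch below is a hypothetical illustration of the two questions above, overdue patches and stale control tests, using made-up records and arbitrary SLA thresholds; real inputs would come from patch management and GRC tooling.

```python
"""Minimal sketch of a hygiene-baseline check: which patches are overdue and
which controls have gone untested for too long. Records, dates and SLA
thresholds are hypothetical placeholders."""
from datetime import date, timedelta

PATCH_SLA = timedelta(days=30)         # assumed patching SLA
CONTROL_TEST_SLA = timedelta(days=90)  # assumed control-validation cadence
TODAY = date(2025, 7, 16)

# Patch name -> release date (placeholders).
patches = {"KB5040442": date(2025, 5, 2), "openssl-3.0.14": date(2025, 7, 1)}

# Control -> date it was last validated (placeholders).
control_tests = {
    "PowerShell restrictions": date(2025, 2, 10),
    "Phishing simulation": date(2025, 6, 20),
}

overdue = [name for name, released in patches.items()
           if TODAY - released > PATCH_SLA]
stale = [name for name, tested in control_tests.items()
         if TODAY - tested > CONTROL_TEST_SLA]

print("Overdue patches:", overdue or "none")
print("Controls not tested recently:", stale or "none")
```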
AI isn’t just something to adopt; it’s already embedded in the systems we rely on. From threat detection and response to predictive analytics and automation, AI is actively reshaping how we defend against evolving cyber threats in real time. It’s not just a sales tactic (for some); it’s an operational necessity.
Yet, as with many game-changing technologies, the reality on the ground is more complex. The cybersecurity industry is once again grappling with a familiar disconnect: bold promises about efficiency and transformation that don’t always reflect the day-to-day experiences of those on the front lines. According to recent research, 71% of executives report that AI has significantly improved productivity, but only 22% of frontline analysts, the very people who use these tools, say the same.
When solutions are introduced without a clear understanding of the challenges practitioners face, the result isn’t transformation, it’s friction. Bridging that gap between strategic vision and operational reality is essential if AI is to deliver on its promise and drive meaningful, lasting impact in cybersecurity.
Executives love AI
According to Deloitte, 25% of companies are expected to have launched AI agents by the end of 2025, with that number projected to rise to 50% shortly thereafter. The growing interest in AI tools is driven not only by their potential but also by the tangible results they are already beginning to deliver.
For executives, the stakes are rising. As more companies begin releasing AI-enabled products and services, the pressure to keep pace is intensifying. Organizations that can’t demonstrate AI capabilities, whether in their customer experience, cybersecurity response, or product features, risk being perceived as laggards, out-innovated by faster, more adaptive competitors. Across industries, we're seeing clear signals: AI is becoming table stakes, and customers and partners increasingly expect smarter, faster, and more adaptive solutions.
This competitive urgency is reshaping boardroom conversations. Executives are no longer asking whether they should integrate AI, but how quickly and effectively they can do so, without compromising trust, governance, or business continuity. The pressure isn’t just to adopt AI internally to drive efficiency, but to productize it in ways that enhance market differentiation and long-term customer value.
But the scramble to implement AI is doing more than reshaping strategy, it’s unlocking entirely new forms of innovation. Business leaders are recognizing that AI agents can do more than just streamline functions; they can help companies bring entirely new capabilities to market. From automating complex customer interactions to powering intelligent digital products and services, AI is quickly moving from a behind-the-scenes tool to a front-line differentiator. And for executives willing to lead with bold, well-governed AI strategies, the payoff isn’t just efficiency, it’s market relevance.
Analysts distrust AI
If anyone wants to make their job easier, it's a SOC analyst, so their skepticism of AI comes from experience, not cynicism. The stakes in cybersecurity are high, and trust is earned, especially when systems that are designed to protect critical assets are involved. Research shows that only 10% of analysts currently trust AI to operate fully autonomously. This skepticism isn't about rejecting innovation, it's about ensuring that AI can meet the high standards required for real-time threat detection and response.
That said, while full autonomy is not yet on the table, analysts are beginning to see tangible results that are gradually building trust. For example, 56% of security teams report that AI has already boosted productivity by streamlining tasks, automating routine processes, and speeding up response times. These tools are increasingly trusted for well-defined tasks, giving analysts more time to focus on higher-priority, complex threats.
This incremental trust is key. While 56% of security professionals express confidence in AI for threat detection, they still hesitate to let it manage security autonomously. As AI tools continue to prove their ability to process vast amounts of data and provide actionable insights, initial skepticism is giving way to more measured, conditional trust.
Looking ahead
Closing the perception gap between executive enthusiasm and analyst skepticism is critical for business growth. Executives must create an environment where analysts feel empowered to use AI to enhance their expertise without compromising security standards. Without this, the organization risks falling into the hype cycle, where AI is overpromised but underdelivered.
In cybersecurity, where the margin for error is razor-thin, collaboration between AI systems and human analysts is critical. As these tools mature and demonstrate real-world impact, trust will grow, especially when their use is grounded in transparency, explainability, and accountability.
When AI is thoughtfully integrated and aligned with practitioner needs, it becomes a reliable asset that not only strengthens defenses but also drives long-term resilience and value across the organization.
- A reputable source claims the first public iOS 26 beta will land on or around July 23
- That would be later in the year than usual
- There's already an iOS 26 developer beta
It’s now over a month since iOS 26 was announced, and although it’s available in developer beta, the public beta is yet to launch. But we do now have a good idea of when the first public beta might land.
According to Apple watcher Mark Gurman in a reply to a post on X by @ParkerOrtolani, the first iOS 26 public beta will probably land on or around July 23.
That's a bit unusual, as typically we'd have had the first public beta before then. For example, the first public beta of iOS 18 launched on July 15 last year, following its announcement on June 10. So this year, with iOS 26 having been unveiled on June 9, we'd, if anything, have expected the first public beta to be out already.
"around the 23rd" (July 15, 2025)
A worthwhile wait
Still, if Gurman is right there's not too much longer to wait, and it should be worth the wait too, as iOS 26 is a significant upgrade for Apple's smartphone operating system.
It includes a completely new look, with more rounded and transparent elements, plus redesigned phone and camera apps, a new Apple Games app, and more.
Of course, we’d take the claim of it landing on or around July 23 with a pinch of salt, especially with that being later than normal. But Gurman has a superb track record for Apple information, and either way we’d expect it to land soon.
If you can't wait even that little bit longer, though, you can always grab the developer beta – the next version of which may well land before July 23. To get that, check out how to install the iOS 26 developer beta.
- Claude for Financial Services launches specifically for the financial industry
- Users can access powerful Claude 4 models and other Claude AI tools
- The system integrates with internal and external data sources
Anthropic has launched a special edition of Claude designed for the highly regulated financial industry, with a focus on market research, due diligence, and investment decision-making.
The OpenAI rival hopes financial institutions will use its tool for financial modeling, trading system modernization, risk modeling, and compliance automation, with pre-built MCP connectors offering seamless access to enterprise and market data platforms.
The company boasted that Claude for Financial Services offers a unified interface, combining Claude's AI powers with internal and external financial data sources from the likes of Databricks and Snowflake.
Claude for Financial Services is here to take on the financial sector
Anthropic highlighted four of the tool's key benefits: powerful Claude 4 models that outperform other frontier models, access to Claude Code and Claude for Enterprise, pre-built MCP connectors, and expert support for onboarding and training.
Testing revealed that Claude Opus 4 passed five of the seven Financial Modeling World Cup competition levels, scoring 83% accuracy on complex Excel tasks.
"Access your critical data sources with direct hyperlinks to source materials for instant verification, all in one platform with expanded capacity for demanding financial workloads," the company shared in a post.
Anthropic also stressed that users' data is not used to train its generative models, in order to protect intellectual property and keep client information confidential.
Besides Snowflake for data and Databricks for analytics, Claude for Financial Services also connects with the likes of Box for document management and S&P Global for market and valuation data, among others.
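For teams that want a feel for the underlying models, the sketch below shows a plain query to a Claude 4 model through Anthropic's standard Python SDK. The model identifier and prompt are assumptions, and this is not the Claude for Financial Services interface itself, nor does it use the product's MCP connectors.

```python
"""Illustrative sketch only: querying a Claude 4 model through Anthropic's
standard Python SDK. The model name and prompt are assumptions, and this is
not the Claude for Financial Services product or its MCP connectors."""
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed identifier; check current docs
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize the key risk factors in the attached 10-K excerpt.",
    }],
)
print(response.content[0].text)
```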
Among the early adopters is the Commonwealth Bank of Australia, whose CTO Rodrigo Castillo praised Claude for its "advanced capabilities" and "commitment to safety." The Australian banking giant envisions using Claude for Financial Services for fraud prevention and customer service enhancement.