News
- HighPoint Rocket 7638D combines extreme GPU power with massive SSD storage in just one PCIe slot
- Dual MCIO ports and a CDFP interface unlock true compute-storage fusion for HPC workflows
- Can host the RTX 5090 and 16 enterprise SSDs using a single compact expansion card
HighPoint Technologies is preparing to unveil the Rocket 7638D at FMS2025, a single-slot PCIe Gen5 x16 add-in card that aims to combine external GPU support and high-capacity SSD storage within a compact form factor.
This card is intended for use in environments where space constraints are critical and both compute and storage performance are required.
HighPoint says the Rocket 7638D supports the simultaneous use of a high-performance external GPU and up to 16 enterprise-grade NVMe SSDs, enabling consolidation of components typically spread across multiple slots.
Merging GPU support and SSD capacity in one PCIe slot
The design appears to be targeted at AI inference, high-performance computing (HPC), and media production workloads, where system density and thermal considerations could restrict expansion options.
The Rocket 7638D uses an external CDFP interface to accommodate a full-height, dual- or triple-slot Gen5 GPU, supporting lengths up to 370mm, including options like the Nvidia GeForce RTX 5090, which launched earlier this year.
Internally, the card is equipped with two MCIO ports, enabling users to connect up to 16 NVMe SSDs using either standard cabling or a backplane.
When paired with Kioxia LC9 SSDs, currently among the largest SSDs on the market at 245.76TB each, this setup can theoretically provide nearly 4PB of raw storage.
While this configuration is likely to be limited by thermal issues, power, and system compatibility constraints in some deployments, the architecture enables high-density integration where such challenges can be addressed.
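For context, the back-of-the-envelope arithmetic behind that headline figure, using the per-drive capacity quoted above, is straightforward (a rough sketch; usable capacity in practice will be lower once formatting and any RAID overhead are taken into account):

```python
# Rough capacity check for a fully populated Rocket 7638D (figures from the article).
DRIVES = 16            # maximum SSDs across the two MCIO ports
CAPACITY_TB = 245.76   # Kioxia LC9 capacity per drive, in decimal terabytes

total_tb = DRIVES * CAPACITY_TB
print(f"Raw capacity: {total_tb:.2f} TB (~{total_tb / 1000:.2f} PB)")
# -> Raw capacity: 3932.16 TB (~3.93 PB), i.e. close to the 4PB headline figure
```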
How to do it
- Install the Rocket 7638D into a PCIe Gen5 x16 slot on a supported motherboard
- Connect a compatible Gen5 x16 GPU (e.g., RTX 5090) via the CDFP port
- Attach up to 16 NVMe SSDs using dual MCIO cables or through a Gen5-capable backplane
- Ensure power delivery and cooling are appropriate for both GPU and SSD load
- Use firmware tools to manage lane distribution, power cycling, and device monitoring
- Monitor system status using onboard LED indicators or command-line utilities (a minimal enumeration sketch follows this list)
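HighPoint supplies its own management software for the Rocket series, so treat the following only as a vendor-neutral illustration of that last step: a minimal Python sketch that enumerates the NVMe controllers a Linux host can see once the drives are cabled up. It assumes the standard /sys/class/nvme sysfs layout; the attribute names are the common Linux ones, not HighPoint-specific.

```python
from pathlib import Path

def list_nvme_controllers(sysfs_root: str = "/sys/class/nvme"):
    """Return basic identity info for every NVMe controller the kernel exposes."""
    controllers = []
    for ctrl in sorted(Path(sysfs_root).glob("nvme*")):
        def read(attr: str) -> str:
            p = ctrl / attr
            return p.read_text().strip() if p.exists() else "unknown"
        controllers.append({
            "name": ctrl.name,                 # e.g. nvme0
            "model": read("model"),            # drive model string
            "serial": read("serial"),          # drive serial number
            "firmware": read("firmware_rev"),  # firmware revision
        })
    return controllers

if __name__ == "__main__":
    drives = list_nvme_controllers()
    print(f"Detected {len(drives)} NVMe controller(s)")
    for d in drives:
        print(f"{d['name']}: {d['model']} (fw {d['firmware']})")
```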
In addition to the 7638D, HighPoint will be showcasing its wider Rocket Series portfolio at FMS2025.
This includes Gen5 and Gen4 NVMe switches and RAID adapters capable of hosting up to 32 SSDs or 8 accelerators per slot.
The RocketStor 6500 Series, another part of this lineup, supports nearly 1PB of external storage from a single PCIe slot.
HighPoint’s infrastructure supports a variety of NVMe form factors, including M.2, U.2/U.3, E1.S, E3.S, and EDSFF.
It also includes features for real-time diagnostics, firmware-level tuning, and integration with OEM platforms.
- AMD Threadripper 9995WX tops PassMark with 174,825 points in multithreaded performance testing
- With 96 cores and 192 threads, it crushes benchmarks meant for server-grade processors
- The Threadripper 9995WX even outperforms AMD’s EPYC 9755 by more than 5% in tests
The AMD Ryzen Threadripper PRO 9995WX has emerged as the fastest CPU in PassMark’s multithreaded performance charts, claiming a score of 174,825 points.
This new benchmark positions the 96-core processor ahead of AMD’s own EPYC 9755, which trails by about 5% in multithreaded workloads with 166,328 points.
This lead is noteworthy not only because of the tight margin but also because of the distinct market segments the two chips are aimed at: Threadripper for high-end workstations and EPYC for data center servers.
Built for extreme performance in workstation-class systems
Launched in the second quarter of 2025, the Threadripper PRO 9995WX is built around the sTR5 socket and features a base clock speed of 2.5GHz with a boost speed reaching 5.4GHz.
It comes with 192 threads, and its typical TDP of 350W reflects the scale of its compute capabilities.
With a massive 384MB of L3 cache and substantial L1 and L2 cache arrangements, the CPU is engineered to handle highly parallelized tasks.
These features show AMD’s intent to offer extreme performance in high-end desktop and workstation markets where parallel compute power is critical.
In benchmark tests, it delivered 1,220,090 MOps/sec in integer math, 707,600 MOps/sec in floating point operations, and processed 3.6 million kilobytes per second (roughly 3.6GB/s) in data compression.
Its single-thread performance reached 4,565 MOps/sec, placing it 45th among 5,287 CPUs in that metric.
The new Threadripper PRO 9995WX is 21% faster than the 7995WX, AMD’s own earlier flagship.
This gain marks a substantial generational leap, particularly for users whose applications benefit from the full core and thread count.
The Threadripper PRO 9995WX has just gone on sale and can be found at major retailers like Amazon and Newegg, with a starting price of $11,699.
- OpenAI CEO Sam Altman said testing GPT-5 left him scared in a recent interview
- He compared GPT-5 to the Manhattan Project
- He warned that the rapid advancement of AI is happening without sufficient oversight
OpenAI chief Sam Altman has painted a portrait of GPT‑5 that reads more like a thriller than a product launch. In a recent episode of the This Past Weekend with Theo Von podcast, he described the experience of testing the model in breathless tones that invite more skepticism than the alarm he seemed to want listeners to feel.
Altman said that GPT-5 “feels very fast,” while recounting moments when he felt very nervous. Despite being the driving force behind GPT-5's development, Altman claimed that during some sessions, he looked at GPT‑5 and compared it to the Manhattan Project.
Altman also issued a blistering indictment of current AI governance, suggesting “there are no adults in the room” and that oversight structures have lagged behind AI development. It's an odd way to sell a product promising serious leaps in artificial general intelligence. Raising the potential risks is one thing, but acting like he has no control over how GPT-5 performs feels somewhat disingenuous.
Analysis: Existential GPT-5 fears
What spooked Altman isn’t entirely clear, as he didn’t go into technical specifics. Invoking the Manhattan Project is another over-the-top analogy: it signals irreversible, potentially catastrophic change on a global scale, which sits oddly as a comparison to a sophisticated auto-complete. And saying they built something they don’t fully understand makes OpenAI seem either reckless or incompetent.
GPT-5 is supposed to come out soon, and there are hints that it will expand far beyond GPT-4’s abilities. The "digital mind" described in Altman’s comments could indeed represent a shift in how the people building AI consider their work, but this kind of messianic or apocalyptic projection seems silly. Public discourse around AI has mostly toggled between breathless optimism and existential dread, but something in the middle seems more appropriate.
This isn't the first time Altman has publicly acknowledged his discomfort with the AI arms race. He’s been on record saying that AI could “go quite wrong,” and that OpenAI must act responsibly while still shipping useful products. But while GPT-5 will almost certainly arrive with better tools, friendlier interfaces, and a slightly snappier logo, the core question it raises is about power.
The next generation of AI, if it’s faster, smarter, and more intuitive, will be handed even more responsibility. Based on Altman’s own comments, that sounds like a bad idea. And even if he’s exaggerating, I’m not sure a company whose CEO talks this way is the one that should be deciding how that power is deployed.
Generative AI is a headline act in many industries, but the data powering these AI tools plays the lead role backstage. Without clean, curated, and compliant data, even the most ambitious AI and machine learning (ML) initiatives will falter.
Today, enterprises are moving quickly to integrate AI into their operations. According to McKinsey, in 2024, 65% of organizations reported regularly using generative AI, marking a twofold increase from 2023.
However, the true potential of AI and ML in the enterprise won’t come from surface-level content generation. It will come from deeply embedding models into decision-making systems, workflows, and customer-facing processes where data quality, governance, and trust become central.
Additionally, simply incorporating AI and ML features and functionality into foundational applications won’t do an enterprise any good. Organizations must leverage all aspects of their data to create strategic advantages that help them stand out from the competition.
To do this, the data powering their applications must be clean and accurate to mitigate bias, hallucinations, and/or regulatory infractions. Otherwise, they risk issues in training and output, ultimately negating the benefits that the AI and ML projects were initially meant to create.
The importance of good, clean data
Data is the foundation of any successful AI initiative, and enterprises need to raise the bar for data quality, completeness, and ethical governance. However, this isn’t always as easy as it sounds. According to Qlik, 81% of companies still struggle with AI data quality, and 77% of companies with over $5 billion in revenue expect poor AI data quality to cause a major crisis.
In 2021, for example, Zillow shut down Zillow Offers because faulty algorithms failed to accurately value homes, leading to massive losses. This case highlights a critical point: AI and ML projects must operate on good, clean data in order to produce accurate, reliable results.
Today, AI and ML technologies rely on data to learn patterns, make predictions and recommendations, and help enterprises drive better decision-making. Techniques like retrieval-augmented generation (RAG) pull from enterprise knowledge bases in real-time, but if those sources are incomplete or outdated, the model will generate inaccurate or irrelevant answers.
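To make that failure mode concrete, the hypothetical sketch below retrieves context with a naive keyword-overlap score and discards documents older than a freshness cutoff before they ever reach the model. The document store, scoring, and cutoff are invented for illustration, not taken from any particular RAG framework.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Doc:
    text: str
    last_updated: date

def retrieve(query: str, docs: list[Doc], max_age_days: int = 90, top_k: int = 2) -> list[Doc]:
    """Naive retrieval: keyword-overlap score, with stale documents filtered out first."""
    cutoff = date.today() - timedelta(days=max_age_days)
    fresh = [d for d in docs if d.last_updated >= cutoff]   # freshness gate
    terms = set(query.lower().split())
    scored = sorted(fresh, key=lambda d: len(terms & set(d.text.lower().split())), reverse=True)
    return scored[:top_k]

kb = [
    Doc("2023 returns policy: refunds within 14 days", date(2023, 1, 10)),   # outdated entry
    Doc("2025 returns policy: refunds within 30 days", date(2025, 6, 1)),
]
context = retrieve("what is the returns refund window", kb)
# Only the current policy survives the freshness gate and is passed to the model as context.
print([d.text for d in context])
```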
Agentic AI’s ability to act reliably hinges on consuming accurate, timely data in real time. For example, an autonomous trading algorithm reacting to faulty market data could trigger millions in losses within seconds.
Establishing and maintaining an environment of good data
In order for enterprises to establish and maintain an environment of good data that can be leveraged for AI and ML usage, there are three key elements to consider:
1. Build a comprehensive data collection engine
Effective data collection is essential for successful AI and ML projects, and enterprises need modern data platforms and tools, such as those for integration, transformation, quality monitoring, cataloging, and observability, to support the demands of their AI development and output. These tools help ensure the organization is getting the right data.
Whether the data is structured, semi-structured, or unstructured, it should come from a variety of sources and collection methods to support robust model training and testing across the different user scenarios a model may encounter after deployment. Additionally, companies must ensure they follow ethical data collection standards. Whether the data is first-, second-, or third-party, it must be sourced correctly and with consent given for its collection and use.
2. Ensure high data quality
High-quality, fit-for-purpose data is imperative for the performance, accuracy, and reliability of AI and ML models. Given that these technologies introduce new dimensions, the data used must be specifically aligned with the requirements of the intended use case. However, 67% of data and analytics professionals say they don’t have complete trust in their organizations’ data for decision-making.
To address this, it's essential that enterprises have data that is representative of real-world scenarios, monitor for missing data, eliminate duplicate data, and maintain consistency across data sources. Furthermore, recognizing and addressing biases in training data is critical, as biased data can compromise outcomes and fairness and negatively impact customer experience and credibility.
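As a minimal sketch of what those checks look like in practice, assuming tabular data handled with pandas (the column names and values below are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "country": ["US", "us", "us", "US", None],
    "spend": [120.0, 80.5, 80.5, 64.0, None],
})

# Missing data: share of nulls per column
print(df.isna().mean())

# Duplicate records: identical rows that would skew training
print("duplicate rows:", df.duplicated().sum())

# Consistency: normalize categorical values before they feed a model
df["country"] = df["country"].str.strip().str.upper()
print(df["country"].unique())
```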
3. Implement trust and data governance frameworks
The push for responsible AI has placed a spotlight on data governance. With 42% of data and analytics professionals saying their organization is unprepared to handle the governance of legal, privacy, and security policies for AI initiatives, it’s critical that there is a shift from traditional data governance frameworks to more dynamic frameworks.
In particular, with Agentic AI coming into significant prominence, it’s crucial to address why agents make specific decisions or take specific actions. Enterprises must have a sharp focus on Explainable AI techniques to build trust, assign accountability and ensure compliance. Trust in AI outputs begins with trust in the data behind them.
In summary
AI and ML projects will fail without good data because data is the foundation that enables these technologies to learn. Data strategies and AI and ML strategies are intertwined. Enterprises must make an operational shift that puts data at the core of everything they do, from technology infrastructure investment all the way to governance.
Those that take the time to put data first will see projects flourish. Those that don’t will be faced with ongoing struggles and competition biting at their heels.
IT teams know the balancing act all too well. Security teams implement new protocols that generate a flood of user complaints. The IT help desk is overwhelmed with tickets that could have been prevented.
Meanwhile, employees bypass carefully designed systems because they're too cumbersome. And today's increasingly distributed workforce only exacerbates this balancing act, creating a larger attack surface across more devices, locations, and applications.
While IT management may have accepted this as the inevitable reality, the challenges are only intensifying. AI-powered cyberattacks are becoming more sophisticated daily, capable of adapting faster than traditional security measures can respond. The old playbook of treating security, IT operations, and employee experience as separate functions has reached its breaking point.
A unified approach is needed, or IT leaders risk not only exposing their organizations to security vulnerabilities, but also losing visibility and control of their digital work environments.
The "self-driving car" of enterprise ITAlthough the rise of new AI tools and devices has created headaches for IT, AI-powered digital environments, or an autonomous workspace, offer IT leaders a path to modernizing and knocking down the divisions that exist across employee experience, security and operations.
These environments self-configure, self-heal, and self-secure with minimal human intervention. Think of it as the "self-driving car" of enterprise IT.
Unlike traditional automated systems that follow preset rules and require constant human oversight, autonomous workspaces continuously learn from data patterns and user behaviors.
Because these workspaces monitor every aspect of the digital environment simultaneously, the silos that previously plagued IT decision-making are eliminated, giving IT teams full context on their organization’s digital operations.
For example, when a security anomaly emerges, the system doesn't just alert administrators; it automatically quarantines the threat while maintaining seamless user access to legitimate resources. When a device falls out of compliance, it self-corrects without user intervention.
And rather than looking at these issues in a vacuum, autonomous workspaces enable IT to connect the dots across different functions of the workplace, understanding whether an employee’s application performance issue is underpinned by a larger problem or vulnerability.
The strategic imperative for not only IT teams, but the business's bottom line
While an autonomous workspace can free IT teams from the endless cycle of firefighting, its benefits extend beyond just the IT team, ultimately providing a foundation for business resiliency and cost efficiency.
1. Security rigor
As generative AI tools become embedded in daily workflows, they also broaden the attack surface, and a reactive security approach is proving inadequate. Autonomous workspaces flip this model by implementing predictive zero-trust security. Instead of waiting for threats to manifest, these systems continuously analyze patterns and behaviors to identify potential risks before they materialize.
The system makes intelligent trust decisions in milliseconds, based on comprehensive understanding of user behavior, network conditions, and threat intelligence, helping equip a business for the increasingly sophisticated cyberattacks of today and future.
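The exact decision logic is proprietary to each platform, but conceptually it amounts to weighing live signals into a risk score and acting on it. The sketch below is purely illustrative: the signal names, weights, and threshold are invented, not any vendor's actual model.

```python
# Purely illustrative: combine live signals into a risk score and gate access.
SIGNAL_WEIGHTS = {
    "unusual_login_location": 0.4,
    "unmanaged_device": 0.3,
    "known_bad_ip": 0.6,
    "impossible_travel": 0.7,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired, capped at 1.0."""
    return min(1.0, sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name)))

def access_decision(signals: dict[str, bool], threshold: float = 0.5) -> str:
    score = risk_score(signals)
    if score >= threshold:
        return f"quarantine session (risk {score:.2f})"
    return f"allow with continuous monitoring (risk {score:.2f})"

print(access_decision({"unusual_login_location": True, "unmanaged_device": True}))
# -> quarantine session (risk 0.70)
```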
2. Employee experience benefits
Organizations that take a holistic approach to employees’ digital experience gain more than just operational benefits. A modern digital experience gives employees self-service access to the apps, resources, and support they need, when they need it.
This approach helps reduce disruptions and prevents issues before they can impact employee productivity. With secure access from anywhere, employees can stay focused and in control of how they work.
The result is stronger collaboration, higher employee satisfaction, and a significant advantage in attracting and retaining top talent in a growing hybrid work environment.
3. Streamlined resources
Think about the traditional approach to endpoint management. Security teams set protocols. IT operations teams install management tools to ensure compliance. And user experience teams try to minimize the performance impact. The result? Conflicting priorities, duplicated efforts, and frustrated users. Autonomous workspaces break down silos and integrate these different functions into a single, intelligent platform, streamlining IT resources and costs, while enhancing collaboration across teams.
The most successful implementations of autonomous workspaces share a common characteristic: they eliminate artificial boundaries between security, IT operations, and employee experience teams. This convergence isn't just about organizational structure—it's about creating technology ecosystems where security and IT enhance rather than complicate employee productivity and collaboration.
As the enterprise landscape continues to evolve, the organizations that thrive will be those that embrace autonomous workspaces not merely as a technology solution, but as the foundation of their digital work strategy.
Experience Level Objectives (XLOs) represent a fundamental evolution in monitoring philosophy, moving beyond the conventional Service Level Objectives (SLOs) and SLAs that have dominated IT operations for years.
This post examines the key differences between these approaches and explains why XLOs provide a more business-aligned framework for modern digital operations.
User-centric vs. infrastructure-centric measurements
Traditional SLA and SLO monitoring has primarily focused on system availability and IT infrastructure health. This approach centers on technical metrics like uptime percentages, server response times, and infrastructure resource utilization. While these metrics provide valuable insights into system health, they create a significant disconnect between technical indicators and actual business metrics.
In contrast, XLO monitoring prioritizes metrics that directly gauge user experience and satisfaction. This shift reflects a growing recognition that digital service quality cannot be measured solely by whether systems are functioning, but rather by how well they are functioning from the user's perspective. As research increasingly shows, "slow is the new down"—acknowledging that poor performance, even without complete failure, can severely impact user satisfaction and business outcomes.
This philosophical difference addresses a critical blind spot in traditional monitoring approaches. A system can report 100% uptime while delivering a frustratingly slow experience that drives users away. XLOs close this gap by measuring what actually matters to users: the quality and speed of their interactions with digital services.
The importance of monitoring from where it matters
Most monitoring tools rely on cloud-based vantage points for digital experience monitoring: convenient (for the vendor), but disconnected from the actual user experience. These first-mile checks confirm whether the infrastructure is up, but say little about how your application is experienced by users in the real world. Hence, this kind of monitoring is primarily useful for QA purposes, especially for new code releases.
XLOs shift the perspective. They depend on insights captured from where users truly are, whether that’s a connection inside an office through a regional ISP, a mobile connection through a mobile operator, or even a laptop connected via Starlink. This visibility uncovers the real issues users face: congestion, routing delays, delays from third-party code, and other last-mile failures that cloud monitoring can’t see.
If SLOs tell you your system is available, XLOs tell you whether it’s delivering the experience the business expects to real users. This outside-in view is what turns data into real business insight. It closes the visibility gap between infrastructure health and user experience—and that’s where the real value lies.
End-to-end journey perspective
Traditional SLOs often focus on individual components or services, creating a fragmented view of performance. XLOs, by contrast, are designed to capture the complete user journey across multiple systems and services. This end-to-end perspective reflects the reality that users experience services holistically, not as isolated components. Modern digital services span multiple providers, platforms, and technologies, making isolated component monitoring inadequate for ensuring overall service quality.
While an SLA may measure the uptime of an S3 storage bucket, or the uptime of your DNS or CDN provider, these are only three of the dozens or hundreds of components in an entire system. As a rule of thumb, the quality of the experience a system delivers is only as good as the worst of its components. So while most components may be working perfectly, an issue in a third-party API can make the entire experience unacceptable for your users.
The XLO, by contrast, is less concerned with CPU utilization or database response time and entirely focused on the resulting experience for a user, whether that user is a customer, an internal user, or an API consumed by an internal or external system.
Business alignment and value demonstration
A critical difference between XLOs and traditional SLOs is their alignment with business outcomes. Traditional SLOs primarily serve technical teams, measuring system health in terms that may not translate directly to business impact, while SLAs establish accountability from vendors that deliver a component of the functionality of a system. This creates challenges in demonstrating IT's value to business stakeholders and securing resources for performance improvements.
XLOs fundamentally change this dynamic by providing metrics that directly correlate with business performance. By moving beyond "Is it up?" to answer "Is it meeting our users’ expectations?", XLOs address what business stakeholders actually care about. This alignment helps prove the value of IT Operations and justify investments in performance improvements by demonstrating clear connections between technical performance and business outcomes.
As more of our business and personal lives are based on digital experiences or supported by digital processes, delivering on those expectations is a business priority. A recent survey of thousands of users showed that bad digital experiences are the main reason consumers switch to different banking providers.
As a specific example, a team can set specific XLO targets that reflect business priorities, such as ensuring the critical part of loading a page, measured as Largest Contentful Paint (LCP), does not exceed 2.5 seconds 90% of the time in a given month. This specific threshold directly impacts bounce rates and user engagement, providing clear business value.
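A minimal sketch of how such a target can be checked against collected measurements (the sample values are invented for illustration):

```python
# Evaluate an XLO of the form: "LCP must be <= 2.5s for at least 90% of page views this month".
LCP_TARGET_SECONDS = 2.5
COMPLIANCE_TARGET = 0.90

def xlo_compliance(lcp_samples_seconds: list[float]) -> tuple[float, bool]:
    within_target = sum(1 for s in lcp_samples_seconds if s <= LCP_TARGET_SECONDS)
    ratio = within_target / len(lcp_samples_seconds)
    return ratio, ratio >= COMPLIANCE_TARGET

# Illustrative month of LCP measurements collected from real user locations
samples = [1.8, 2.1, 2.4, 3.2, 2.0, 2.6, 1.9, 2.2, 2.3, 2.4]
ratio, met = xlo_compliance(samples)
print(f"{ratio:.0%} of page views met the 2.5s LCP target -> XLO {'met' if met else 'missed'}")
# -> 80% of page views met the 2.5s LCP target -> XLO missed
```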
Accelerating maturity with XLOs
According to the GigaOm Maturity Model for IPM, organizations progress through five stages—from chaotic, reactive operations to optimized, business-driven monitoring. Traditional SLOs keep teams stuck in the early stages, focused on infrastructure uptime and siloed metrics. XLOs act as a catalyst for maturity by:
Aligning with advanced stages: XLOs introduce user-focused metrics that resonate with the 'Quantitative' and 'Optimized' stages, emphasizing business outcomes.
Facilitating proactive issue detection: Tools like burndown charts enable early identification of performance degradations, a hallmark of mature operations.
Fostering cross-functional collaboration: XLOs unify teams around shared objectives, essential for achieving higher maturity levels.
For example, a retail company using XLOs to monitor checkout flow performance (e.g., Time to Interactive across regions) isn’t just fixing errors—they’re optimizing a revenue-critical journey, a hallmark of GigaOm’s value-based observability.
Proactive vs. reactive monitoring
Traditional SLO monitoring often creates a reactive posture, where teams respond to issues after they've already impacted users. This approach typically waits for error thresholds to trigger alerts before teams mobilize to address problems. Once these thresholds are crossed, the business is already suffering some impact.
XLO monitoring enables a substantially more proactive approach. By tracking performance trends over time and proactively simulating user experiences from real-world locations, businesses can detect gradual degradations before they breach critical thresholds – and often before they impact users.
Tracking XLOs over time is where burndown charts come into play. Burndown charts track performance against the objectives you have set, showing how much of your performance budget remains as time goes on.
When a team adopts XLOs as a KPI, it influences how the team makes decisions, how it defines success, and what risks are acceptable. Operations can evaluate whether to release changes based on their projected impact on experience metrics, maintaining consistently high user satisfaction. In this way, burndown charts offer a clear view of service health over time.
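Continuing the earlier LCP example, if the objective tolerates 10% of page views missing the 2.5-second target in a month, that 10% is the performance budget, and a burndown view simply tracks how much of it has been consumed as the month progresses. A hedged sketch with invented traffic numbers:

```python
# Performance-budget burndown: the XLO tolerates 10% of page views missing the LCP target.
BUDGET_RATIO = 0.10

def budget_remaining(daily_views: list[int], daily_misses: list[int]) -> float:
    """Fraction of the month-to-date error budget still unspent (negative = budget blown)."""
    total_views = sum(daily_views)
    allowed_misses = BUDGET_RATIO * total_views
    return 1.0 - sum(daily_misses) / allowed_misses

# Ten days into the month: traffic and the number of slow page views per day (illustrative)
views  = [10_000] * 10
misses = [40, 55, 60, 120, 90, 80, 70, 150, 130, 110]
print(f"Budget remaining after day 10: {budget_remaining(views, misses):.0%}")
# -> 10,000 allowed misses so far; 905 spent -> roughly 91% of the budget left
```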
Breaking down organizational silos
A significant practical difference between XLO and traditional SLO approaches lies in their organizational impact. Traditional SLOs often reinforce existing silos between development, operations, and business teams, as each group focuses on their own specialized metrics.
XLOs, by contrast, create a common language and shared objectives across organizational boundaries. By providing metrics that matter to both technical and business stakeholders, XLOs facilitate cross-functional collaboration and shared accountability for user experience. This collaborative approach enables faster problem resolution and more effective performance optimization.
Building a digital operations center (DOC)
For a long time, IT operations teams have built NOCs and SOCs to manage network operations and security. In today’s world, where most business interactions are digital, many organizations are formalizing their cross-functional efforts as they mature by building Digital Operations Centers (DOCs).
A DOC brings together teams across IT, engineering, and business functions to monitor experience-centric metrics in real time. With XLOs at the core, a DOC isn’t just a control room—it’s a shared space for aligning around user outcomes, accelerating response times, and making performance a business-wide priority. It’s a sign of maturity and a strategic investment in digital resilience.
A DOC puts digital user experience at the center of the business and provides visibility into how every critical digital operation performs, and into the performance of all the key components that contribute to delivering that experience: from the internet backbone to third-party components, cloud services, APIs, DNS, front-end servers, databases, and microservices, down to application code.
A DOC is a natural evolution of the NOC and SOC as IT operations evolves from a systems-uptime focus into a true operational intelligence function that is a critical component of how the business operates, not just the team keeping the lights on.
Specific experience metrics
XLO monitoring can measure specific performance metrics that directly impact user experience, including:
Wait Time: The duration between the user’s request and the server’s initial response
Response Time: The total time taken for the server to process a request and send back the complete response
First Contentful Paint (FCP): The time it takes for the browser to render the first piece of content on the screen
Largest Contentful Paint (LCP): The time at which the largest content element becomes visible in the browser viewport
Cumulative Layout Shift (CLS): A measure of how much the layout of the page shifts unexpectedly during loading
Time to Interactive: The time it takes for a page to become fully interactive and responsive to user inputs
These metrics create a multidimensional view of the user experience that traditional infrastructure-focused SLOs simply cannot provide.
The strategic value of XLO monitoring
SLOs and Experience Level Objectives (XLOs) aren’t just buzzwords; they're guiding principles for ensuring performance indicators align with real customer expectations.
The SRE Report 2025
According to the SRE Report 2025, 40% of businesses are prioritizing the adoption of SLOs and XLOs over the next 12 months. By focusing on user experience rather than just system availability, providing specific experience-focused metrics, aligning with business outcomes, enabling proactive optimization, capturing end-to-end journeys, and breaking down organizational silos, XLOs provide a more comprehensive and business-relevant approach to monitoring.
This evolution reflects changing expectations from both users and businesses.
For organizations seeking to improve digital experience quality while demonstrating clear business value from IT investments, XLOs offer a powerful framework that goes beyond traditional SLO limitations. By implementing XLO monitoring, organizations can align technical performance with business objectives, ultimately delivering superior digital experiences that drive competitive advantage.
- PlayStation's Project Defiant fight stick is officially called FlexStrike
- The fight stick will pack mechanical switch buttons, PS Link support, and instantly swappable stick gates
- It's set to launch sometime in 2026
PlayStation's Project Defiant fight stick finally has an official name, alongside brand new details and a vague release window.
A new PlayStation Blog post has revealed that Project Defiant is officially called the FlexStrike, and it's currently set to arrive sometime in 2026. The news comes right before Sony's own EVO 2025 fighting game tournament event in Las Vegas, where the FlexStrike will be on display (but not playable) for the first time.
FlexStrike will be compatible with both PS5 and PC, and it supports Sony's proprietary PlayStation Link wireless tech. Here, a PlayStation Link USB adapter can be used to hook up a compatible gaming headset - like the Pulse Elite or Pulse Explore earbuds - as well as up to two FlexStrike controllers for local play.
Like many of the best fight sticks, the FlexStrike will also be customizable to a degree. One really cool feature shown in the trailer (above) is a 'toolless' gate swap. By opening the non-slip grip at the bottom, players will be able to swap between square, circular, and octagonal gates on the fly with the joystick. This means you won't have to buy a separate joystick or gate, or use any additional tools to get the job done.
The controller has several amenities you'll find on other top fight sticks, including a stick input swap for menu navigation, and a lock switch that disables certain buttons (like pausing) for tournament play. The eight face buttons are also mechanical, which means they should register clicky, instantaneous inputs.
Lastly, players can use a DualSense Wireless Controller in tandem with the FlexStrike for menu navigation, not unlike what we see with the PlayStation Access controller.
PlayStation appears to be investing quite heavily in fighting game hardware and software. It's likely that the FlexStrike will launch around the same time as Marvel Tokon: Fighting Souls, published by PlayStation Studios and developed by Arc System Works, the team behind Guilty Gear Strive, Granblue Fantasy Versus: Rising, and many more of the best fighting games.
TechRadar Gaming will be very keen to deliver a verdict on the FlexStrike when it launches next year, so stay tuned for a potential review in 2026.