As you may have seen, OpenAI has just released two new AI models – gpt‑oss‑20b and gpt‑oss-120b – which are the first open‑weight models from the firm since GPT‑2.
These two models – one is more compact, and the other much larger – are defined by the fact that you can run them locally. They'll work on your desktop PC or laptop – right on the device, with no need to go online or tap the power of the cloud, provided your hardware is powerful enough.
So, you can download either the 20b version – or, if your PC is a powerful machine, the 120b spin – and play around with it on your computer, check how it works (in text-to-text fashion) and how the model thinks (its whole process of reasoning is broken down into steps). And indeed, you can tweak and build on these open models, though safety guardrails and censorship measures will, of course, be in place.
But what kind of hardware do you need to run these AI models? In this article, I'm examining the PC spec requirements for both gpt‑oss‑20b – the more restrained model packing 21 billion parameters – and gpt‑oss-120b, which offers 117 billion parameters. The latter is designed for data center use, but it will run on a high-end PC, whereas gpt‑oss‑20b is the model designed specifically for consumer devices.
Indeed, when announcing these new AI models, Sam Altman referenced 20b working on not just run-of-the-mill laptops, but also smartphones – but suffice it to say, that's an ambitious claim, which I'll come back to later.
These models can be downloaded from Hugging Face (here's gpt‑oss‑20b and here’s gpt‑oss-120b) under the Apache 2.0 license, or for the merely curious, there's an online demo you can check out (no download necessary).
(Image credit: Future / Lance Ulanoff)

The smaller gpt-oss-20b model

Minimum RAM needed: 16GB
The official documentation from OpenAI simply lays out a requisite amount of RAM for these AI models, which in the case of this more compact gpt-oss-20b effort is 16GB.
This means you can run gpt-oss-20b on any laptop or PC that has 16GB of system memory (or 16GB of video RAM, or a combination of both). However, it's very much a case of the more, the merrier – or faster, rather. The model might chug along on that bare minimum of 16GB, but ideally you'll want a bit more on tap.
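If you're not sure how much system memory your machine has, a quick sketch of a check on Linux (macOS and Windows have their own equivalents in About This Mac and Task Manager) looks like this:

```shell
# Read total system memory from /proc/meminfo (Linux) and compare it
# against the 16GB minimum OpenAI states for gpt-oss-20b.
# Note: kernel-reserved memory means a 16GB machine may report slightly less.
total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
total_gb=$((total_kb / 1024 / 1024))
if [ "$total_gb" -ge 16 ]; then
  echo "${total_gb}GB RAM detected - meets the 16GB minimum for gpt-oss-20b"
else
  echo "${total_gb}GB RAM detected - below the 16GB minimum for gpt-oss-20b"
fi
```

Remember this only counts system RAM – video memory on a discrete graphics card is separate, and counts toward the total too.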
As for CPUs, AMD recommends the use of a Ryzen AI 300 series CPU paired with 32GB of memory (and half of that, 16GB, set to Variable Graphics Memory). For the GPU, AMD recommends any RX 7000 or 9000 model that has 16GB of memory – but these aren't hard-and-fast requirements as such.
Really, the key factor is simply having enough memory – the mentioned 16GB allocation – and preferably having all of it on your GPU. That allows all the work to take place on the graphics card, without the slowdown of offloading some of it to the PC's system memory. Thankfully, the Mixture of Experts (MoE) design OpenAI has used here helps to minimize any such performance drag.
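To see how much VRAM your graphics card actually has, you can ask the driver directly. A minimal sketch for Nvidia cards, assuming the `nvidia-smi` tool that ships with the driver is present (AMD users have `rocm-smi` on Linux, or can check via Task Manager on Windows):

```shell
# Query the GPU name and total video memory via nvidia-smi if an Nvidia
# driver is installed; print a fallback message otherwise.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found - no Nvidia driver detected on this system"
fi
```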
Anecdotally, to pick an example plucked from Reddit, gpt-oss-20b runs fine on a MacBook Pro M3 with 18GB.
(Image credit: TeamGroup)

The bigger gpt-oss-120b model

RAM needed: 80GB
It's the same overall deal with the beefier gpt-oss-120b model, except as you might guess, you need a lot more memory. Officially, this means 80GB, although remember that you don't have to have all of that RAM on your graphics card. That said, this large AI model is really designed for data center use on a GPU with 80GB of memory on board.
However, the RAM allocation can be split. So, you can run gpt-oss-120b on a computer with 64GB of system memory and a 24GB graphics card (an Nvidia RTX 3090 Ti, for example, as per this Redditor), which makes for a pooled total of 88GB.
AMD's recommendation in this case, CPU-wise, is for its top-of-the-range Ryzen AI Max+ 395 processor coupled with 128GB of system RAM (and 96GB of that allocated as Variable Graphics Memory).
In other words, you're looking at a seriously high-end workstation laptop or desktop (maybe with multiple GPUs) for gpt-oss-120b. However, you may be able to get away with a bit less than the stipulated 80GB of memory, going by some anecdotal reports - though I wouldn't bank on it by any means.
(Image credit: Shutterstock/AdriaVidal)

How to run these models on your PC

Assuming you meet the system requirements outlined above, you can run either of these new gpt-oss releases on Ollama, which is OpenAI's platform of choice for using these models.
Head here to grab Ollama for your PC (Windows, Mac, or Linux) – click the button to download the executable, and when it's finished downloading, double-click the file to run it, then click Install.
Next, run the following two commands in Ollama to obtain and then run the model you want. In the example below, we're running gpt-oss-20b, but if you want the larger model, just replace 20b with 120b.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b

If you'd rather not use Ollama, LM Studio is an alternative, via the following command. Again, you can switch 20b for 120b, or vice versa, as appropriate:
lms get openai/gpt-oss-20b

Windows 11 (or 10) users can exercise the option of Windows AI Foundry (hat tip to The Verge).
In this case, you'll need to install Foundry Local – the caveat being that it's still in preview – and check out this guide for full instructions on what to do. Also note that, for now, you'll need an Nvidia graphics card with 16GB of VRAM on board (other GPUs, such as AMD Radeon models, will be supported eventually – again, this is still a preview release).
Furthermore, macOS support is "coming soon," we're told.
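Going back to Ollama for a moment: once the app is running, it also exposes a local HTTP API on port 11434, so you can drive the model from scripts rather than the interactive prompt. A minimal sketch – the prompt text is just an example, and it assumes you've already pulled gpt-oss:20b as above:

```shell
# Send a one-shot prompt to a locally running Ollama instance.
# The /api/generate endpoint returns a JSON response; if the server
# isn't running, curl fails and we print a hint instead.
curl -sf http://localhost:11434/api/generate \
  -d '{"model": "gpt-oss:20b", "prompt": "Say hello", "stream": false}' \
  || echo "Ollama doesn't appear to be running on localhost:11434"
```

Setting "stream" to false returns the whole response in one JSON object, which is easier to handle in simple scripts.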
(Image credit: Shutterstock/ Alex Photo Stock)

What about smartphones?

As noted at the outset, while Sam Altman said that the smaller AI model runs on a phone, that statement is pushing it.
True enough, Qualcomm did issue a press release (as spotted by Android Authority) about gpt-oss-20b running on devices with a Snapdragon chip, but this is more about laptops – Copilot+ PCs that have Snapdragon X silicon – rather than smartphone CPUs.
Running gpt-oss-20b isn't a realistic proposition for today's phones, though it may be possible in a technical sense (assuming your phone has 16GB+ RAM). Even so, I doubt the results would be impressive.
However, we're not far away from getting these kinds of models running properly on mobiles, and that is surely on the cards for the near future.
- X's AI video maker Grok Imagine is live for SuperGrok and Premium+ subscribers
- Grok Imagine turns prompts into looping six-second clips
- The tool includes a controversial “spicy mode” for some NSFW content
xAI is pushing out the Grok Imagine AI video maker to those willing to pay for a SuperGrok or Premium+ subscription. Assuming you've paid your $30 or $35 a month, respectively, you can access Imagine in the Grok app under its own tab and turn prompts into short video clips. These last for around six seconds and include synced sound. You can also upload static images and animate them into looping clips.
Grok Imagine is another addition to the increasingly competitive AI video space, including OpenAI's Sora, Google's Veo 3, Runway, and more. Having audio built in also helps the tool, as sound is still not a universally available feature in all AI video tools.
To help it stand out, Elon Musk is encouraging people to think of it as “AI Vine,” tying the new tool to the classic, long-defunct short-form video platform once owned by Twitter – itself a now-vanished brand name.
However, this isn’t just nostalgia for 2014-era social media; the difference is that Grok Imagine blends active creation with passive scrolling.
“Grok Imagine should get better almost every day. Make sure to download the latest @Grok app, as we have an improved build every few days.” https://t.co/MGZtdMx26o (August 3, 2025)
Spicy Grok

One potentially heated controversy around Grok Imagine is the inclusion of a “spicy mode” allowing for a limited amount of more explicit content generation. While the system includes filters and moderation to prevent actual nudity or anything sexual, users can still experiment with suggestive prompts.
Musk himself posted a video of a scantily clad angel made with Grok Imagine. It provoked quite a few angry and upset responses from users on X. xAI insists guardrails are in place, but that hasn’t stopped some early testers from trying to break them.
xAI is keen to promote Grok Imagine as a way to make AI video accessible for everyone, from businesses crafting ads to teachers animating lessons. Still, there are understandable concerns about whether an AI platform that was only recently in hot water for outright pro-Nazi statements can be trusted to share video content without courting more controversy – and that goes double for the filters on the spicy content.