OpenAI, the developer of ChatGPT, has released two large language models (LLMs) under the Apache 2.0 open source licence. The models, gpt-oss-120b and gpt-oss-20b, are open-weight language models that OpenAI claims deliver strong real-world performance at low cost.
According to OpenAI, the new models outperform similarly sized open models on reasoning tasks and are optimised for efficient deployment on consumer hardware.
OpenAI said its gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU. It said the gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory.
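For illustration, the sketch below shows one way the smaller model could be run locally with the Hugging Face transformers library. The repository id openai/gpt-oss-20b, the prompt and the generation settings are assumptions for the example; the model card should be checked for the exact id and hardware requirements.

```python
# Illustrative sketch: running an open-weight model locally with Hugging Face transformers.
# The repository id "openai/gpt-oss-20b" is an assumption; check the model card for the
# exact id, licence terms and memory requirements before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # assumed repository id
    device_map="auto",            # spread weights across available GPU/CPU memory
    torch_dtype="auto",           # use the precision the weights were published in
)

messages = [{"role": "user", "content": "Summarise the Apache 2.0 licence in one sentence."}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```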
Graphics processing unit (GPU) maker Nvidia said OpenAI’s new models were trained on Nvidia H100 GPUs and can use Nvidia NIM microservices, which, it said, offer easy deployment on any GPU-accelerated infrastructure with flexibility, data privacy and enterprise-grade security.
Nvidia said that with software optimisations for the Nvidia Blackwell platform, the models can achieve inference throughput of 1.5 million tokens per second when run on Nvidia GB200 NVL72 systems.
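NIM microservices typically expose models through an OpenAI-compatible HTTP endpoint, so once a model is deployed it can be queried with standard client libraries. The base URL, API key and model name in the sketch below are placeholders for illustration, not values taken from Nvidia’s documentation.

```python
# Illustrative sketch: querying a locally hosted, OpenAI-compatible inference endpoint,
# such as one exposed by an Nvidia NIM container. URL, key and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder; use the model name reported by the running service
    messages=[{"role": "user", "content": "Explain what an open-weight model is."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```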
Amanda Brock, CEO at OpenUK, said: “The beauty of open source and openness in AI is that it feeds all sides in the global debate’s needs – it has the power to be a digital public good creating access for all, but commercially, as with open source software which has become the darling of Big Tech, it enables the creation of de facto standards and promotes adoption – think Meta’s open innovation model Llama. In a world of geo-political shift, it enables global reach and impact in AI.”
The main benefit of an open AI model is that, unlike a closed one, it can be inspected by anyone. This should help improve its quality, remove bugs and go some way towards tackling bias when the source data on which a model is trained is not diverse enough. Open models also offer businesses a way to fine-tune an LLM to how their organisation runs. However, CIOs should weigh the benefits of using an open AI model against those of a proprietary one, especially as they face significant operational costs associated with deploying any AI model.
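As a rough sketch of what such fine-tuning can look like in practice, the example below attaches LoRA adapters to an open-weight model using the Hugging Face peft library, so that only a small set of adapter weights needs to be trained on company data. The repository id and hyperparameters are illustrative assumptions rather than a recommended recipe.

```python
# Illustrative sketch: preparing an open-weight model for parameter-efficient fine-tuning
# with LoRA adapters. Repository id and hyperparameters are assumptions for the example.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",   # assumed repository id
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                       # rank of the low-rank adapter matrices
    lora_alpha=32,              # scaling factor applied to the adapter output
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# A standard Hugging Face Trainer loop over the organisation's own data would then
# train just these adapters, leaving the base weights untouched.
```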
Haritha Khandabattu, senior director and analyst at Gartner, said open models, popularised by LLMs such as Meta’s Llama, are mostly being used in regulated industries. “These industries are inclined to experiment with open models,” she said. “Depending on where and how you are deploying the open models, they might also require significant infrastructure.”
Khandabattu said the reason organisations are experimenting with open models is to retain control. However, based on the IT leaders she has spoken to, Khandabattu said the total cost of deployment is “very high”, with substantial operational and engineering costs required to customise, run and maintain an open model.
She added that open models used for AI applications such as AI-based coding may not always match the performance of proprietary models. This can negatively affect organisations, for example through a poorer overall employee or developer experience and slower operational performance.
Khandabattu urged IT leaders to consider the pros and cons of open models, including whether they offer the level of enterprise IT support needed by the organisation. “Like enterprise open source software, they also come with their own risk,” she added.