OpenAI recently launched an open-source model called GPT-OSS, its first open-weight release since GPT-2 in 2019. GPT-OSS comes in two versions: gpt-oss-120b with 120 billion parameters and gpt-oss-20b with 20 billion parameters. The larger model can run on a single Nvidia GPU and delivers performance rivaling the existing o4-mini model, while the smaller version can operate on devices with just 16GB of memory, performing comparably to o3-mini. Both versions are licensed under Apache 2.0, which permits commercial use, and are available for free download on platforms such as Hugging Face.
OpenAI CEO Sam Altman has acknowledged that the company has been on the wrong side of history when it comes to open-source models, especially after the Chinese startup DeepSeek launched a cost-effective open-source model this year. Altman said he hopes open-source innovation will be driven from the United States, emphasizing that these models are built on democratic values and freely provided for public use. Co-founder Greg Brockman, for his part, views the release as a complement to OpenAI's existing paid services rather than a competitor to them.
The GPT-OSS models employ the chain-of-thought reasoning method that OpenAI first applied in its o1 model last fall, allowing them to work through multiple reasoning steps before producing a response. Although these text-only models are not multimodal, they are capable of browsing the web, invoking cloud models to assist with tasks, executing code, and functioning as AI agents that control software. Unlike ChatGPT, GPT-OSS can operate independently offline and behind firewalls.
Because open weights lower the barrier to use, anyone could attempt to fine-tune the model for improper purposes. For this reason, OpenAI says it conducted the most rigorous safety testing in its history. Safety researcher Eric Wallace stated that the team performed careful fine-tuning tests in high-risk domains and assessed in depth the level of risk an adversary could realistically achieve. According to OpenAI's Preparedness Framework evaluation, the open-weight model does not reach a high-risk level, and it exposes its chain-of-thought process, allowing users to monitor model behavior, correct errors, and guard against misuse.
Researcher Chris Koch noted that gpt-oss-120b performs on par with OpenAI's o3 and o4-mini models, with some benchmarks even suggesting it surpasses them. The release also challenges the current leader of the open-source ecosystem, Meta. Meta has anchored that ecosystem since launching the first Llama model in 2023, and its latest release, Llama 4, remains a mainstream product; however, Mark Zuckerberg has hinted that future models may abandon the open-source strategy due to security concerns.
The release comes against the backdrop of escalating competition for AI talent among tech giants such as OpenAI and Meta. In 2025, highly sought-after AI researchers have been commanding lucrative job offers. OpenAI's new models could intensify that rivalry with Meta, with the precise impact hinging on how developers embrace GPT-OSS. Meanwhile, Meta is also pursuing superintelligence that goes beyond human cognition, establishing an internal lab led by former Scale AI CEO Alexandr Wang.
The launch marks a significant shift in the competitive landscape of the AI industry. As AI competition between China and the U.S. intensifies, more American tech companies can be expected to follow OpenAI's lead and release open-source models to counter their Chinese rivals. In the coming months, giants like Google and Microsoft may accelerate their own open-source efforts, while competition over safety standards and performance benchmarks heats up as well.