Daily Technology
18/03/2026
OpenAI has released GPT-5.4 mini and GPT-5.4 nano, two new artificial intelligence models engineered for speed and efficiency. These additions to the GPT-5.4 family are smaller, faster versions of the flagship model, designed to handle specific tasks where larger, more powerful models may be inefficient or cost-prohibitive.
The GPT-5.4 mini model demonstrates a significant leap in performance over its predecessor, the GPT-5 mini. According to OpenAI, it operates more than twice as fast on tasks involving coding, reasoning, and tool utilization. Performance benchmarks indicate that its capabilities approach those of the standard GPT-5.4 model in certain areas. The primary intended applications for GPT-5.4 mini include code editing and debugging. It can also function as a specialized subagent within larger systems like Codex, where a primary model could delegate specific, speed-sensitive tasks to the faster, more economical mini model.
As the smallest model in the new lineup, GPT-5.4 nano is tailored for high-volume, foundational workloads. OpenAI suggests its use for routine operations such as data classification and information extraction. Its design prioritizes speed and cost-effectiveness for tasks that do not require the complex reasoning abilities of larger models, positioning it as a utility for backend data processing.
This release positions OpenAI to compete more directly in the AI software engineering market, particularly against rivals like Anthropic. By offering a tiered selection of models, OpenAI lets developers choose the most appropriate tool for the complexity, speed, and cost requirements of their specific application.
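In practice, choosing among tiers amounts to matching task requirements to the cheapest model that can handle them. The sketch below illustrates that idea; the selection heuristic, function name, and task categories are illustrative assumptions, not OpenAI's published guidance (only the model names come from the article).

```python
# Illustrative heuristic for routing a task to a GPT-5.4 tier.
# The criteria below are assumptions for the sake of example.

def select_model(task: str, needs_reasoning: bool, latency_sensitive: bool) -> str:
    """Pick a model tier for a task (hypothetical routing logic)."""
    if task in {"classification", "extraction"} and not needs_reasoning:
        return "gpt-5.4-nano"   # high-volume, routine backend workloads
    if latency_sensitive or task in {"code-editing", "debugging"}:
        return "gpt-5.4-mini"   # fast coding, reasoning, and tool-use tasks
    return "gpt-5.4"            # full model for complex reasoning

print(select_model("classification", needs_reasoning=False, latency_sensitive=False))
# → gpt-5.4-nano
```

A real deployment would likely weigh cost and latency budgets explicitly rather than hard-coding task names, but the shape of the decision is the same.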
Access to these new models varies. GPT-5.4 mini is available to developers through the API and is integrated into Codex and ChatGPT. For ChatGPT Free and Go users, it is accessible via the "Thinking" feature. It also serves as the fallback model for users who exceed the rate limit for the standard GPT-5.4. In contrast, GPT-5.4 nano is available exclusively through the API.
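The fallback behavior described above can be pictured as a simple retry-with-downgrade pattern. This is a minimal sketch of that pattern only; the exception type, client interface, and method names are assumptions for illustration, not OpenAI's actual SDK.

```python
# Hypothetical sketch: if the standard model is rate-limited,
# retry the request against the mini tier instead.

class RateLimitError(Exception):
    """Stand-in for a rate-limit error raised by an API client."""

def complete_with_fallback(client, prompt: str) -> str:
    """Try gpt-5.4 first; fall back to gpt-5.4-mini on a rate limit."""
    try:
        return client.complete(model="gpt-5.4", prompt=prompt)
    except RateLimitError:
        return client.complete(model="gpt-5.4-mini", prompt=prompt)

# Toy client that simulates an exhausted quota on the full model.
class ToyClient:
    def complete(self, model: str, prompt: str) -> str:
        if model == "gpt-5.4":
            raise RateLimitError("quota exhausted")
        return f"[{model}] {prompt}"

print(complete_with_fallback(ToyClient(), "hello"))
# → [gpt-5.4-mini] hello
```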