
OpenAI released the O3-Mini model optimized for STEM
OpenAI officially announced the O3-Mini model, which was first previewed in December 2024.
O3-Mini, which the company describes as the most cost-effective model in its reasoning series, focuses on STEM reasoning, offering strong performance in science, mathematics, and coding. According to OpenAI, the model matches O1's performance on tasks requiring medium reasoning effort, with the added advantage of responding faster.
According to the company’s A/B tests, O3-Mini is 24% faster than O1-Mini, with the average response time dropping from 10.16 seconds to 7.7 seconds. It is also notable as the first small reasoning model to support popular developer features such as function calling, developer messages, and structured outputs. The model also integrates with the search feature to ground responses in current sources on the web. Users can choose between three reasoning-effort levels: low, medium, and high.
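As an illustration of how the three reasoning levels are selected, the sketch below builds a Chat Completions request payload without sending it. The field names follow OpenAI's publicly documented API (`reasoning_effort` accepts `low`, `medium`, or `high`; the `developer` role replaces `system` for reasoning models); the prompt and helper function are hypothetical.

```python
import json

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    """Build a Chat Completions payload selecting one of the three
    reasoning-effort levels described above (low, medium, high)."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError("effort must be 'low', 'medium', or 'high'")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [
            # Reasoning models take instructions via the "developer" role.
            {"role": "developer", "content": "Answer concisely."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_o3_mini_request("Factor x^2 - 5x + 6.", effort="high")
print(json.dumps(payload, indent=2))
```

In practice this dictionary would be passed to the API client; higher effort trades latency for more thorough reasoning.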
O3-Mini is available to ChatGPT Plus, Team, and Pro subscribers and replaces O1-Mini. ChatGPT Pro users get unlimited access to both O3-Mini and the more capable O3-Mini-High variant. Plus and Team users also see their daily message limit rise from 50 messages with O1-Mini to 150. Free users can try O3-Mini by selecting the “Reason” option in ChatGPT or by regenerating a response. The model is also accessible through Microsoft’s Azure OpenAI Service.
The new model stands out as part of OpenAI’s effort to build artificial intelligence that optimizes reasoning performance in STEM fields. With faster response times and configurable reasoning levels, it aims to be an important alternative for developers and researchers.
Artificial Intelligence
OpenAI’s own artificial intelligence chip is almost ready!
Bombshell news has hit the agenda: according to Reuters, OpenAI has pushed the button to end its dependence on Nvidia. The company is moving toward producing its own artificial intelligence chip. Without further ado, let’s take a look at what’s going on.

According to Reuters’ sources, the artificial intelligence chip designed by OpenAI could enter mass production next year. Sources said the company completed the chip design within a few months and plans to send it to TSMC for fabrication. OpenAI would thus use its own chips, instead of Nvidia’s, to train and run its artificial intelligence models. TSMC will manufacture the chip on its 3 nm process, and it will be equipped with high-bandwidth memory and extensive networking capabilities.
OpenAI will deploy the chip on a limited scale at launch, and sources said it will mostly be used to run, rather than train, artificial intelligence models. Future versions of the chip are expected to bring more advanced processing capabilities.
OpenAI’s chip design team is led by Richard Ho, a former TPU engineer at Google. The work has reportedly accelerated considerably in recent months, with the team growing from 20 to 40 people. Last year, Reuters published a report claiming that OpenAI was working with Broadcom to develop a custom chip, and another report a few months ago noted that the company had posted intriguing job listings and was turning toward chip design. Combining the information in those earlier reports, the pieces of the puzzle now fit together.
Firms operating in artificial intelligence have made billions of dollars in chip purchases to build AI infrastructure and power their compute-hungry models. But startups such as DeepSeek have opened a debate about whether acquiring thousands of chips is really necessary to achieve comparable capability.
Artificial Intelligence
Gemini 2.0 experimental models are now available
Google announced today that the Gemini app now includes the newest Gemini 2.0 experimental models.

One of the most notable changes for users is that the Gemini 2.0 Flash Thinking model can now be tried free of charge by everyone. The company states that this model is specifically trained to break prompts down into steps, strengthening its reasoning capabilities and producing better responses.
According to Google, Gemini 2.0 Flash Thinking lets users “see why the model responded in a certain way, what assumptions it made, and how it constructed its line of reasoning.” Of course, these explanations do not mean the model truly “thinks” or “reasons.” Still, it is an important step toward transparency in understanding how models reach their conclusions. In this way, when Gemini occasionally provides false or fabricated information, it will be a little easier to understand why.
In addition, there is a major new feature for Gemini Advanced subscribers: Gemini 2.0 Pro is now accessible. Google says this model has been developed especially for complex tasks, delivering more accurate results in coding and mathematics. The phrase “better factuality” in the company’s statement seems, in practice, to mean producing less fabricated information.
Both models are currently available in the Gemini mobile app and the Gemini web interface. Google also announced that Gemini 2.0 Pro will soon be available to Google Workspace Business and Enterprise customers. These developments form an important part of Google’s artificial intelligence strategy, which aims to make Gemini more useful for both individual users and businesses.
Artificial Intelligence
DeepSeek allegedly bought restricted Nvidia chips
The technological infrastructure and financial strategies behind DeepSeek’s success are causing controversy in the US.

A report published on Friday claimed that the company purchased advanced Nvidia chips in circumvention of US export restrictions on artificial intelligence chips, allegedly supplied through third-party intermediaries in Singapore.
DeepSeek is known to use far fewer resources than the billions of dollars that giants such as OpenAI and Google spend to develop their ChatGPT and Gemini models. This has US analysts debating whether China is pulling ahead of the US in AI development. However, DeepSeek’s success is not uniform across the board. In a test of 11 major artificial intelligence platforms, DeepSeek ranked 10th, answering only 17% of the questions correctly. Moreover:
- 30% of its answers contained false claims,
- 53% provided vague or unhelpful information about the news items.
This shows that the model is not yet able to compete directly with Google’s Gemini or OpenAI’s ChatGPT.
In-depth reviews and investigations in the US
The US administration is investigating whether the young company used intermediaries in Southeast Asia to circumvent restrictions on artificial intelligence chips. The FBI and the White House are examining whether Singapore played a role in supplying Nvidia chips to DeepSeek.
Notably, the share of Nvidia’s revenue coming from Singapore has risen from 9% to 22% over the last two years. This fuels speculation that Singapore has become an important hub for supplying chips to Chinese AI companies. Nvidia said in a statement that its partners are required to act in accordance with the law and that it does not believe DeepSeek has violated it. Meanwhile, Howard Lutnick, Trump’s pick for Commerce Secretary, said he is not against DeepSeek competing with US AI companies, but that this should not be done using America’s own tools.
The company has acknowledged using 2,048 Nvidia H800 GPUs to train its V3 model. However, the stronger R1 model is estimated to have been trained on advanced Nvidia GPUs whose sale is prohibited in China. All these developments suggest that the US may impose tighter controls on China in artificial intelligence technologies, further escalating the global AI race.