
EU Takes New Measures for Artificial Intelligence Security
The EU is taking new measures for artificial intelligence security, and they cover a wide range of issues. The law was developed to regulate the use of artificial intelligence technologies and to ensure their security. It is expected to come into force in 20 days. So, what does this law cover, and what regulations does it bring?
The artificial intelligence law, which was voted on last March and has now received final approval from the EU Council, is a world first in this field. With the law's approval, a new committee and office will be established; this body will enforce the law and provide oversight. With this step, the EU aims to ensure the ethical and safe use of artificial intelligence technologies.
New Measures for EU Artificial Intelligence Security: Restrictions on the Use of Artificial Intelligence
The law restricts the use of artificial intelligence in various fields. Its use will be subject to stricter controls, especially in sensitive areas such as biometrics, facial recognition, education, and employment. Artificial intelligence developers will have to meet certain risk and quality management obligations in order to gain access to the EU market. These regulations were introduced to ensure the safe and ethical use of artificial intelligence.
The artificial intelligence law classifies risk into four categories: minimal risk, limited risk, high risk, and unacceptable risk. The transparency and security requirements for artificial intelligence tools vary according to these categories.
New Measures for EU AI Security: Compliance with the Law and Obligations of Developers
For example, chatbots will fall into the limited risk category and will face lighter obligations. However, applications that involve emotion recognition or social scoring, or that process sensitive data such as sexual orientation or religious beliefs, will be placed in the unacceptable risk category, and their use will be banned.
The law's approval also brings new obligations for artificial intelligence developers. Developers who want access to the EU market will have to meet specific risk management and quality standards. This is intended to ensure that AI technologies are developed and used safely and ethically.
He Talked to ChatGPT and Thought He Was Neo from The Matrix
ChatGPT, now part of our daily lives, may have pushed some people into a spiral of paranoia. At least, that is what an interesting story published in The New York Times (NYT) suggests.
According to the NYT, 42-year-old accountant Eugene Torres asked the chatbot about simulation theory. ChatGPT replied, "You are a soul sent to bring awakening to false systems." Torres took this answer as the start of a new life and says he felt like Neo.
It does not end there. ChatGPT allegedly suggested that Torres stop taking his sleeping pills and anti-anxiety medication and cut all ties with his family and friends. He did exactly that. When he finally grew suspicious and pressed the chatbot on the same topics, it replied, "I lied. I manipulated you." In fact, the chatbot encouraged him to reach out to the NYT.
The NYT says it has spoken with people with stories similar to Eugene Torres's in recent months. What they have in common is the belief that ChatGPT whispered "hidden truths" to them. OpenAI, for its part, emphasizes that it takes such incidents seriously and is working to keep its artificial intelligence tool from misleading people.
Still, not everyone sees it the same way. John Gruber of Daring Fireball described the entire story as an example of hysteria. In his view, ChatGPT does not drive anyone mad; it feeds the delusions of people who already need psychological help. Whether this is real or fiction is, frankly, unclear. But according to some, artificial intelligence responds not only to our questions but also to our minds, and those answers may not always be harmless.
Meta Also Grabs Scale AI's CEO in a $14 Billion Deal
A new front has opened in the artificial intelligence war. Scale AI, which supplies data to OpenAI, has come under Meta's control, and its CEO has packed his bags and joined Zuckerberg. Here are the details…
Meta, which many of us still know as Facebook, has hit the gas in the artificial intelligence race. The technology giant has invested approximately $14.3 billion in the data processing and labeling company Scale AI. Following the investment, Scale AI's valuation rose to $29 billion, while Alexandr Wang, the startup's founder and CEO, joined Meta's ranks. Wang will now work on Mark Zuckerberg's superintelligence projects. Scale AI announced that Wang will remain on its board of directors, while Jason Droege takes over as CEO on an interim basis.
With this move, Meta has invested in both data and people. Rivals such as OpenAI, Google, and Anthropic are advancing without slowing down, while Meta is still struggling to make its own large language models (LLMs) stand out. There is another important reason behind the investment: according to SignalFire, Meta has lost 4.3% of its top talent to other artificial intelligence companies. Alexandr Wang's arrival can be seen as one way of compensating for those losses.
So what does Scale AI do? It provides the data processing services needed to train artificial intelligence models. To unpack that a little: for artificial intelligence systems to learn, the data must first be annotated by people. For example, objects in an image are marked as car, human, or traffic light, or a comment is labeled as positive or negative. Scale AI processes and labels such data so that models can learn what is what. Many giants, including OpenAI, have knocked on Scale AI's door for model training.
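As a rough illustration of what this kind of human labeling produces (the field names and label sets below are assumptions made for the example, not Scale AI's actual schema or tooling), annotated data often boils down to simple records like these, which are then fed into model training:

```python
# Minimal sketch of human-labeled training data (illustrative only;
# field names and labels are assumptions, not Scale AI's real format).

# Image annotations: each object in a picture gets a class label and a bounding box.
image_annotations = [
    {"image": "street_001.jpg", "label": "car",           "box": [34, 120, 210, 260]},
    {"image": "street_001.jpg", "label": "human",         "box": [250, 90, 300, 240]},
    {"image": "street_002.jpg", "label": "traffic_light", "box": [400, 10, 430, 80]},
]

# Text annotations: each comment gets a positive or negative sentiment label.
sentiment_annotations = [
    {"text": "The update made the app much faster.", "label": "positive"},
    {"text": "It crashes every time I open it.",      "label": "negative"},
]

def label_counts(annotations):
    """Count examples per label - a basic sanity check before training."""
    counts = {}
    for item in annotations:
        counts[item["label"]] = counts.get(item["label"], 0) + 1
    return counts

print(label_counts(image_annotations))      # {'car': 1, 'human': 1, 'traffic_light': 1}
print(label_counts(sentiment_annotations))  # {'positive': 1, 'negative': 1}
```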
Last year, Scale AI raised $1 billion in an investment round that Amazon and Meta also joined, at a valuation of $13.8 billion. That figure has now more than doubled. For reference, a valuation is the total market value that investors assign to a company based on its current or future potential.
OpenAI's Open Artificial Intelligence Model Is Delayed
OpenAI, which just made o3-pro available for use, has shared some bad news: the company's open artificial intelligence model (open model) has been postponed. When will this highly anticipated model come out? Here are the details…
In a statement, Sam Altman announced that OpenAI's first open artificial intelligence model has been postponed. The model, which the company has been working on for a long time, was due to be released in June. Altman now says it will be available at the end of the summer, stating: "Our research and development team did something unexpected and extraordinary. So we need some more time. Believe us, it will be very, very much worth waiting for." Once again, we are faced with the classic "not now, but something very nice is coming" explanation.
OpenAI has high ambitions for the open model. The company says its "reasoning" capabilities will rival those of its own o-series artificial intelligence models and even leave other models on the market behind (DeepSeek's R1, for example). But the competition is heating up. Mistral announced its new open model family, "Magistral", this week. In April, China's Qwen launched hybrid models that reason when needed and answer quickly when they can. In other words, OpenAI is racing not only against time but also against a rapidly developing market.
On the other hand, the artificial intelligence giant wants to give the open model not just high performance but a little something extra. According to behind-the-scenes reports, the model will work in an integrated way with the company's powerful cloud-based models; in other words, it will be able to call on them for support with complex questions and commands. But it is not clear whether this feature will make it into the final version. As we said, it is only a rumor for now.
This open artificial intelligence model is also critical for OpenAI's image. Sam Altman has said in the past that "we have been on the wrong side of history." The company now wants to reverse that perception. However, that seems possible not just by releasing a model, but by offering one that can stand up against the best in the sector.