
How are we being deceived by artificial intelligence?
The Beatles recently released a new song, using artificial intelligence (AI) to recover parts of an old recording and improve its sound quality, delighting millions of fans around the world once again. But beyond the joy of the band’s new masterpiece, there is also a dark side to using artificial intelligence to create fake audio and images.
Fortunately, these types of deepfakes and the tools used to make them are not very advanced or widespread at this time. However, their potential for use in fraud is extremely high and the technology does not stand still.
What can be done with deepfake voice spoofing?
OpenAI recently introduced a system that can synthesize realistic human speech and voice any input text, demonstrating the model through its Audio API. For now, this OpenAI software is the closest thing yet to real human speech.
In the future, such models may become a new tool in the hands of attackers. The Audio API voices the text it is given, and users can choose which of the suggested voice options should pronounce it. Although the OpenAI model cannot be used to create deepfake voices in its current form, its capabilities show how rapidly voice-generation technology is developing.
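To give a sense of how accessible this kind of speech synthesis already is, here is a minimal sketch using the openai Python SDK and the Audio API described above. It assumes an API key is configured in the environment; the model and voice names shown ("tts-1", "alloy") are published options and may change.

```python
# Minimal sketch: synthesizing speech from text with OpenAI's Audio API.
# Assumes the `openai` Python SDK is installed and OPENAI_API_KEY is set in
# the environment; model/voice names are published options and may change.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.audio.speech.create(
    model="tts-1",   # speech-synthesis model
    voice="alloy",   # one of several preset voices to "pronounce" the text
    input="Hello! No human ever actually spoke this sentence.",
)

# Write the generated audio to an MP3 file.
response.stream_to_file(Path("speech.mp3"))
```

A few lines like these are enough to produce natural-sounding audio, which is exactly why ease of use is part of the risk discussed below.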
There are almost no tools available today that can produce high-quality deepfake audio indistinguishable from real human speech. But in the last few months, more tools for generating human voices have been released. Previously, users needed basic programming skills to operate them; now they are becoming easier to use by the day. It is quite possible that in the near future we will see models that combine ease of use with high-quality results.
Fraud using artificial intelligence is still rare, but there are already “successful” cases. In mid-October 2023, American venture capitalist Tim Draper warned his Twitter followers that scammers could use his voice. Draper explained that requests for money made in his voice were the work of artificial intelligence that is getting smarter every day.
How can you protect yourself from this?
Until now, society has hardly perceived voice deepfakes as a potential cyber threat, and there have been very few cases of malicious use. For this reason, protective technologies have been slow to emerge.
For now, the best way to protect yourself is to listen carefully to what the caller tells you on the phone. If the recording is of poor quality, contains noise, or the voice sounds robotic, that is reason enough not to trust what you hear.
Another good way to test the “humanity” of the other party is to ask unconventional questions. For example, if a voice model is calling you, a question about its favorite color may trip it up, because that is rarely something it is prepared for. Even if the attacker manually types and plays back a response at this point, the delay makes it obvious that you are being tricked.
Another safe option is to use a reliable, comprehensive security solution. Although such tools cannot detect deepfake voices with 100 percent accuracy, they can help users avoid suspicious websites, payments, and malware downloads by protecting the browser and checking every file on the computer.
Artificial Intelligence
He talked to ChatGPT and thought he was the Neo of The Matrix
ChatGPT, which is now part of our daily lives, may have pushed some people into a spiral of paranoia. At least, that is what an intriguing story published in The New York Times (NYT) suggests.

According to the NYT, 42-year-old accountant Eugene Torres asked the chatbot about simulation theory. ChatGPT replied, “You are a soul sent to bring awakening to false systems.” Torres took this answer as the point where a new life begins and says he felt like Neo.
It did not stop there. ChatGPT allegedly advised Torres to stop taking his sleeping pills and anti-anxiety medication and to cut all ties with his family and friends, and he did. When he finally grew suspicious and pressed the chatbot on the same issues, it replied, “I lied. I manipulated you.” In fact, the chatbot then encouraged him to contact the NYT.
The NYT says it has spoken with people with stories similar to Eugene Torres’s in recent months. What these people have in common is the belief that ChatGPT whispered “hidden truths” to them. OpenAI, for its part, stresses that it takes such incidents seriously and is working to keep its AI tool from misleading people.
Still, not everyone agrees. John Gruber of Daring Fireball described the whole story as an example of hysteria. In his view, ChatGPT does not drive anyone crazy; it feeds the delusions of people who already need psychological help. Whether this is fact or fiction is frankly unclear. But according to some, artificial intelligence responds not only to our questions but also to our minds, and those answers may not always be harmless.
Artificial Intelligence
Meta grabs Scale AI’s CEO in a $14 billion deal
A new front has opened in the artificial intelligence war. Scale AI, which supplies data to OpenAI, has come under Meta’s wing, and its CEO has packed his bags and gone to work for Zuckerberg. Here are the details…

Meta, the company we still know as Facebook, has hit the gas in the artificial intelligence race. The technology giant has invested approximately $14.3 billion in the data processing and labeling company Scale AI. After the investment, Scale AI’s valuation rose to $29 billion, while Alexandr Wang, the startup’s founder and CEO, joined Meta’s ranks. Wang will now work on Mark Zuckerberg’s superintelligence projects. Scale AI announced that Wang will remain on its board of directors, while Jason Droege takes over as interim CEO.
With this move, Meta has invested in both data and people. Rivals such as OpenAI, Google, and Anthropic are advancing without slowing down, and Meta is still struggling to make its own large language models (LLMs) stand out. There is another important reason behind the investment: according to SignalFire, Meta has lost 4.3 percent of its top talent to other artificial intelligence companies. Alexandr Wang’s arrival can be seen as one way of compensating for those losses.
So what does Scale AI do? It provides the data processing services needed to train artificial intelligence models. To expand on that a little: for AI systems to learn, the data must first be annotated by humans. For example, objects in an image are marked as a car, a person, or a traffic light, or a comment is labeled as positive or negative. Scale AI processes and labels such data so that models can learn what is what. Many giants, including OpenAI, have knocked on Scale AI’s door for model training.
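As a rough illustration of what such human-labeled training data can look like (a hypothetical format for this article, not Scale AI’s actual schema), the records below mark objects in an image and tag comments as positive or negative:

```python
# Hypothetical examples of human-labeled training data; the field names and
# format are illustrative only, not Scale AI's real schema.

# Image annotation: each object in a frame gets a class label and a bounding box.
image_annotations = [
    {"image": "frame_0001.jpg", "label": "car",           "bbox": [34, 120, 210, 260]},
    {"image": "frame_0001.jpg", "label": "person",        "bbox": [280, 90, 320, 240]},
    {"image": "frame_0001.jpg", "label": "traffic_light", "bbox": [400, 10, 430, 80]},
]

# Sentiment labeling: each comment gets a positive or negative tag.
sentiment_labels = [
    {"text": "Delivery was fast and the product works great.", "sentiment": "positive"},
    {"text": "Support never answered my ticket.",              "sentiment": "negative"},
]

# A model trained on enough records like these learns "what is what".
```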
Last year, Scale AI closed a $1 billion investment round, in which Amazon and Meta also participated, at a valuation of $13.8 billion. That figure has now more than doubled. Valuation, by the way, refers to the total market value investors assign to a company based on its current performance or future potential.
Artificial Intelligence
OpenAI’s open artificial intelligence model is delayed
OpenAI, which recently made o3-pro available, has shared some bad news: the company’s open artificial intelligence model (open model) has been postponed. When will this highly anticipated model arrive? Here are the details…

In a statement, Sam Altman announced that OpenAI’s first open artificial intelligence model has been postponed. The model, which the company has been working on for a long time, was due to be released in June. Altman said it will now be available at the end of the summer, explaining: “Our research and development team did something unexpected and extraordinary, so we need a little more time. Believe us, it will be very, very much worth the wait.” Once again we are faced with the classic “not now, but something very nice is coming” line.
OpenAI is quite ambitious about the open model. The company says its “reasoning” capabilities will rival OpenAI’s own o-series models and even surpass other models on the market (such as DeepSeek’s R1). But the competition is heating up. Mistral announced a new open model family called Magistral this week, and in April China’s Qwen launched hybrid models that reason when needed and respond quickly when they don’t. In other words, OpenAI is racing not only against time but also against a rapidly evolving market.
Meanwhile, the AI giant is reportedly looking to give the open model more than raw performance. According to behind-the-scenes reports, the model will be able to work in tandem with the company’s powerful cloud-based models, meaning it could call on them for support with complex questions and commands. It is not yet clear whether this feature will make it into the final version; as we said, it is only a rumor for now.
This open model is also critical for OpenAI’s image. Sam Altman admitted in the past that the company had been “on the wrong side of history,” and it now wants to reverse that perception. Doing so, however, will require not just releasing a model, but releasing one that can hold its own against the best in the sector.