Artificial intelligence features, first introduced with the Galaxy S24 series, are gradually becoming available on Galaxy S23 series, Z Flip5, Z Fold5, S23 FE and Tab S9 series devices with the One UI 6.1 update released on March 28.

Samsung’s new One UI 6.1 update brings life-enhancing AI features to more Galaxy devices.

The innovative and versatile Galaxy AI features, already available on the Galaxy S24 series, introduce Galaxy S23 series, S23 FE, Z Flip5, Z Fold5 and Tab S9 series users to brand-new ways to communicate, create and be productive, starting March 28, 2024.

Galaxy AI will improve the lives of more users

Galaxy’s AI ecosystem is designed to provide users with premium mobile AI experiences that make their daily lives easier.

In line with Samsung’s mission of making the latest technologies accessible to everyone, the One UI 6.1 update expands the range of Galaxy models with Galaxy AI capabilities, bringing many life-improving AI features, from Live Translate and Circle to Search to Note Assistant and creative editing, to more users.


Web and search engine experience on mobile moves forward with artificial intelligence

Galaxy AI brings many features that advance the mobile web browsing and search experience to smartphones. With Circle to Search with Google, users can search Google for any object or content they see on the screen simply by drawing a circle around it with a finger.

When users spot an object in a scene while watching their favorite content and want to learn more about it, or see a product, outfit or device they would like to buy, Circle to Search quickly surfaces results for the searched objects from different websites.

Circle to Search with Google also provides relevant, personalized information based on the user’s location, with AI-powered global results. Web Assistant saves time by creating short summaries of news stories or web pages, making it easier to follow what is happening in the world.

In this way, browsing and news tracking become much more effortless and efficient. The transformation in the browsing and search experience that began with the Galaxy S24 series now reaches more devices with the One UI 6.1 update, bringing convenience and time savings to more users.


Galaxy AI technologies make life easier in the business environment too

Chat Assistant suggests message content appropriate to the context of the conversation, offering users a more refined way to communicate while chatting.

The AI-powered Chat Assistant can produce content suggestions for casual conversations with friends, as well as responses in a more formal, corporate tone for correspondence with colleagues or business partners. Its translation feature covers 13 languages and renders messages in an accurate, appropriate tone, letting two users who do not share a language chat and understand each other easily. Voice Recording Assistant uses AI and speech-to-text technology to transcribe, summarize and even translate voice recordings, reducing the daily workload.

Note Assistant makes note-taking more creative, practical and enjoyable. With AI support, it summarizes notes, places them in custom templates and suggests cover pages, offering a more organized and productive note-taking experience.

Thanks to these features, the tools of daily business life become easier to use and more efficient. Keeping track of meetings, tasks and correspondence becomes more practical, saving time in business routines.


Languages you don’t know are no longer an obstacle to your travels

With Interpreter and Live Translate, it is now easy for users to overcome language barriers in countries where they do not speak the language. Interpreter removes the language barrier in face-to-face communication, for example when you ask for directions in a city you are visiting.

When paired with Galaxy Buds 2 Pro, Interpreter works through the earbuds, offering a smoother experience without the need to remove them. Live Translate provides real-time translation of phone calls into the local language, helping you make restaurant and transportation reservations. Thanks to these two features, languages you do not know will no longer hinder your travels, and you will not have to worry about the language barrier during your holidays.



GPT-5 will be free, o3 canceled

OpenAI CEO Sam Altman has announced the company’s new roadmap. Altman said OpenAI has canceled the standalone release of o3, which had been planned as its next major model. Let’s look at the details together without further ado.


These days, OpenAI’s name comes up frequently alongside DeepSeek and other developments in artificial intelligence. OpenAI is said to have designed its own AI chip, with test production set to begin a few months from now; TSMC, the supplier for giants such as Apple and Nvidia, will handle manufacturing. Of course, neither TSMC nor OpenAI has commented on the AI chip. Meanwhile, Sam Altman dropped a bombshell on X (formerly Twitter), where the OpenAI CEO announced the company’s plans in detail.

Altman said that OpenAI will release the GPT-5 model in the coming months and that it will incorporate many of the company’s technologies, including o3. GPT-5 will, of course, be integrated into OpenAI’s ChatGPT platform and API. In line with this decision, OpenAI will no longer release o3 as a standalone model. The company had announced last December that it aimed to release o3 at the beginning of this year, and a few weeks after that statement, OpenAI’s product chief Kevin Weil said in an interview that o3 would arrive around February or March.


Regarding the roadmap, Sam Altman said: “We want to do a better job of sharing our intended roadmap and to simplify our products. We want AI to ‘just work’ for you; we realize how complicated our model and product lineup has become. Rest assured, we dislike the model picker in ChatGPT as much as you do. We want to return to a unified AI experience.”

Altman also announced that GPT-5 will be available for free. The free tier will, however, be subject to abuse thresholds and will run at a standard intelligence setting; the OpenAI CEO did not detail how those thresholds are determined or exactly what the standard intelligence setting means. ChatGPT Plus subscribers will be able to run GPT-5 at a higher intelligence level, and Pro subscribers at an even higher one. GPT-5 will include voice, canvas, search, deep research and more. Referring to a series of ChatGPT innovations over the past few months, Altman said the company’s most important goal is an effective unified model that can use all of its tools, know when to think for a long time, and handle a wide range of tasks.


Before GPT-5, OpenAI plans to release GPT-4.5, a model codenamed Orion. Altman added that it will be the company’s last model that does not use reasoning (chain of thought). Unlike o3 and other reasoning models, such models tend to be less reliable in areas like mathematics and physics.

Apparently, OpenAI fully embraced the reasoning trend with the o1 model at the end of last year, and o1 and the reasoning approach sparked debate. Because reasoning models have a built-in verification mechanism, they can avoid mistakes that trip up other models. However, that verification process slows responses: a reasoning model can take seconds to minutes longer to reach a solution. Nevertheless, they are more reliable and more capable.

OpenAI is in danger

DeepSeek’s R1 model rivals OpenAI’s o1, and R1 quickly attracted worldwide attention. Moreover, unlike o1, R1 is open source, meaning developers are free to use it as they wish. Sam Altman has acknowledged that DeepSeek has reduced OpenAI’s lead in artificial intelligence, stressing that OpenAI will pull some model releases forward to compete better. OpenAI is also said to have run into a number of challenges and technical problems with the performance of GPT-4.5, or Orion.



OpenAI’s own artificial intelligence chip is almost ready!

Bombshell news has hit the agenda. According to Reuters, OpenAI has pushed the button to end its dependence on Nvidia: the company is moving toward producing its own AI chip. Let’s see what’s going on together without further ado.


According to sources cited by Reuters, the AI chip designed by OpenAI could enter mass production next year. The sources said the company will finalize the chip’s design within a few months and plans to send it to TSMC for fabrication. OpenAI would thus use its own chips, instead of Nvidia’s, to train and run its AI models. TSMC will manufacture the chip on its 3 nm process, and it will be equipped with high-bandwidth memory and extensive networking capabilities.

OpenAI will use the chip on a limited scale at launch. The sources said, however, that it will mostly be used to run AI models rather than train them. Future versions of the chip will have more advanced processing capabilities.


OpenAI’s chip design team is led by Richard Ho, a former Google TPU engineer. Work is said to have accelerated considerably in recent months, with the team growing from 20 to 40 people. Last year, Reuters published a report claiming OpenAI was working with Broadcom to develop a custom chip, and another report from a few months ago mentioned that the company had posted intriguing job listings as it turned toward chip design. Combining the information in those earlier reports, we can see the puzzle had already been taking shape.


Companies operating in artificial intelligence have made billions of dollars in chip purchases to build AI infrastructure and power compute-hungry models. But ventures such as DeepSeek have opened a debate about whether buying thousands of chips is really necessary to match that power.



Gemini 2.0 experimental models are now available

Google announced today that the Gemini app now includes the newest Gemini 2.0 experimental models.


One of the most notable innovations for users is that the Gemini 2.0 Flash Thinking model can now be tried by everyone free of charge. The company states that this model is specifically trained to break prompts down into steps, strengthening its reasoning capabilities and producing better responses.

According to Google, Gemini 2.0 Flash Thinking lets users see why the model responds in a certain way, what assumptions it makes, and how it constructs its line of reasoning. Of course, these explanations do not mean the AI truly “thinks” or “reasons.” Still, it is an important step toward transparency in understanding how models reach their conclusions. When Gemini occasionally provides false or fabricated information, it will now be a little easier to understand why.

In addition, there is a major innovation for Gemini Advanced subscribers: Gemini 2.0 Pro is now accessible. Google says this model was developed especially for complex tasks, delivering more accurate results in coding and mathematics. The phrase “better factuality” in the company’s announcement appears to mean, in practice, producing less fabricated information.


Both models are currently available in the Gemini mobile app and the Gemini web interface. Google also announced that Gemini 2.0 Pro will soon be available to Google Workspace Business and Enterprise customers. These developments stand out as an important part of Google’s AI strategy, which aims to make Gemini more functional for both individual users and businesses.


Copyright © 2022 RAZORU NEWS.