Scientists have taught an artificial intelligence system a surprising new skill: recognizing pain in goats. Here are all the details.

Scientists have taught an artificial intelligence system to recognize goats in pain. According to those behind the system, the tool will not only improve animal welfare but could also help develop new methods of care for children and other non-verbal patients. Understanding when an animal is in pain is typically difficult and depends on a veterinarian’s expertise. Often the solution to a medical problem is simple, but knowing that it is needed can be much harder. The system was created by filming the faces of goats in pain and of other goats that were comfortable. These videos were then fed into a machine learning system, training it to spot goats that might be suffering.

Scientists added an interesting feature to artificial intelligence!

According to the researchers, the system was 62 to 80 percent accurate at detecting the faces of goats in pain, and they expect it to improve: so far it has been trained and tested on only 40 goats, but they hope to extend it to more goats and to other animal species. "If we solve the problem with animals, we can also solve the problem of children and other non-verbal patients," said Ludovica Chiavaccini of the University of Florida. "This isn’t just an animal welfare issue," she added. "We also know that animals in pain do not gain weight and are less productive. Farmers are increasingly aware of the need to control acute and chronic pain in animals."
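To make the approach concrete, here is a minimal sketch of how such a classifier could be trained: frames of goat faces labelled "pain" or "comfortable" are used to fine-tune a standard image model. The directory layout, labels and model choice below are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Illustrative sketch only: a generic binary image classifier fine-tuned on
# labelled face crops ("pain" vs. "comfortable"). Paths, labels and the choice
# of backbone are assumptions, not the study's actual pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects frames extracted from the videos, sorted into
# goat_faces/train/pain/ and goat_faces/train/comfortable/ (hypothetical paths).
train_set = datasets.ImageFolder("goat_faces/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer
# with a two-class head (pain / comfortable).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

With only 40 animals in the dataset, splitting the data by individual goat rather than by frame would matter for getting accuracy estimates comparable to the reported 62 to 80 percent.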



Tim Cook Went to China for Apple Intelligence

Apple CEO Tim Cook flew to China to discuss Apple Intelligence, but the tech giant faces tough legal hurdles.


Tim Cook visited China to discuss supply chain issues with Chinese Premier Li Qiang. Cook reportedly went to address trade tensions between the US and China and to clarify the situation of Apple Intelligence in the country. According to the Financial Times, a senior Chinese source said Apple is working to navigate China’s complex regulations. The technology giant clearly wants to bring Apple Intelligence to devices sold in China, but there are major obstacles for any US company trying to do this alone.


The senior source also emphasized that Apple will face a long and complex approval process. But if Apple uses existing large language models that the Chinese government is already reviewing, the approval process would become simpler and more straightforward.


Tim Cook is also said to have continued the negotiations that Apple began with Chinese companies in recent months. These talks are rumored to include meetings with the Chinese search engine Baidu, as well as the technology groups ByteDance and Moonshot. The Apple CEO did not comment on what he would do during the visit or whom he would meet. However, Cook said last October that he was working hard to make Apple Intelligence available in China. Cook has travelled to China three times this year, and Apple Chief Operating Officer Jeff Williams has made the trip twice.



Apple Pressed the Button to Improve Siri

Apple is taking important steps to make its digital assistant Siri more capable and competitive. According to reports, the company is following a two-stage plan to develop Siri.


As a first step, Apple plans to introduce ChatGPT integration to Siri in December this year. This integration will enable the voice assistant to respond to more complex and natural language requests more effectively. Integrating ChatGPT will serve as an interim solution until Apple’s own large language model (LLM) is fully ready.


Apple aims to create a more advanced artificial intelligence assistant by rebuilding Siri’s underlying infrastructure. This new version, known as "LLM Siri," will be powered by Apple’s own large language models. It is expected to offer a more natural, conversational experience and to handle complex operations such as text summarization, text generation and multi-step commands.

Timeline:

  • December 2024: ChatGPT integration is planned to be introduced.
  • June 2025: The LLM version is expected to be introduced at Apple’s annual Worldwide Developers Conference (WWDC).
  • Spring 2026: LLM Siri is expected to become generally available with iOS 19.4.

These developments are seen as part of Apple’s efforts to make Siri a more competitive and user-friendly artificial intelligence assistant. Users can expect their voice assistant to interact more naturally and effectively. Let’s see whether the new versions and improvements deliver the desired results.



Gemini Transforms into a Real Digital Assistant with Android 16

Google’s Gemini artificial intelligence is placed at the center of Android 16, surpassing the traditional experience offered by Google Assistant.


While Android 15 has only just been released, early developer previews of Android 16 show that Gemini could be much more than a chatbot. The dream of a generative AI assistant became a reality when Google integrated Gemini into its Android devices, but initially it was limited to basic chatbot functions. Thanks to extensions and integrations such as Google Home, it now has the potential to handle more complex tasks; for example, you can control Google Home devices simply by saying "turn off the lights."


This experience looks set to become even deeper with Android 16. Currently, Gemini can only access limited functionality through certain applications’ APIs, but the new Android version may remove these limitations. With the new APIs, application developers will be able to make their own apps work with Gemini, which could dramatically expand what the AI can do.

Application Function APIs: Powering AI

Android developer Mishaal Rahman discovered a new set of APIs called “app functions” in Android 16’s code. These APIs enable an application to provide certain functionality to the system. For example, a food ordering app can use these APIs so Gemini can place orders on your behalf.

This dynamic will reduce the burden on Google to make Gemini compatible across all applications, allowing app developers to extend Gemini’s capabilities. This could enable an AI assistant that can perform more complex actions on behalf of users.
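As a rough illustration of the pattern being described, the core idea is that an app registers callable functions with the assistant layer, and the assistant invokes them on the user’s behalf. The sketch below is language-agnostic (written in Python for brevity); the class and function names are hypothetical and are not the actual Android "app functions" API.

```python
# Conceptual sketch of the "app functions" idea: an app exposes named
# functions to the assistant, which can then call them for the user.
# All names here (AssistantRegistry, order_food, ...) are hypothetical;
# this is not the Android API.
from typing import Callable, Dict


class AssistantRegistry:
    """Stands in for the system-level assistant (e.g. Gemini)."""

    def __init__(self) -> None:
        self._functions: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, func: Callable[..., str]) -> None:
        # The app declares which of its functions the assistant may call.
        self._functions[name] = func

    def invoke(self, name: str, **kwargs) -> str:
        # The assistant calls the app's function on the user's behalf.
        return self._functions[name](**kwargs)


# A food-ordering app exposing one capability to the assistant.
def order_food(restaurant: str, item: str) -> str:
    return f"Ordered {item} from {restaurant}."


assistant = AssistantRegistry()
assistant.register("order_food", order_food)

# The assistant maps a request like "order me a pizza" to a function call.
print(assistant.invoke("order_food", restaurant="Gino's", item="a pizza"))
```

The point of the design is the same as in the article: the system assistant does not need built-in knowledge of every app, because each app declares what it can do and the assistant simply calls those declared functions.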


The development of Gemini is a major step in Google’s mobile AI strategy. The Android 16 developer preview shows that Gemini will be able to work with a much wider range of applications, and that users will be able to rely on the AI for complex tasks, not just simple commands.

It’s unclear whether Gemini will replace Google Assistant, but Android 16’s approach is on its way to making Gemini not just an AI tool but a full-fledged digital assistant. With these new APIs and integrations, Google could completely transform the way users interact with technology.

