
Twitch fires all members of its Security Advisory Council
Various events continue to occur on Twitch. Here are all the details.
Twitch convened cyberbullying experts, web researchers and community members in 2020 to form the Security Advisory Council. The review board was created to help draft new policies, develop products that improve security, and protect the interests of marginalized groups. Now CNBC reports that the company has terminated all members of the council. Twitch reportedly summoned the nine council members to a meeting on May 6 to inform them that their current contracts would expire on May 31 and that they would not receive payments in the second half of 2024.
Members of the Security Advisory Council include Dr. Sameer Hinduja, co-director of the Cyberbullying Research Center, and Dr. T.L. Taylor, co-founder and director of AnyKey, an organization that advocates for inclusion and diversity in video games and esports. Also on the council is Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology.
In an email to members, the company reportedly told them that from now on, “the Security Advisory Council will consist primarily of individuals who serve as company Ambassadors.” The Amazon subsidiary did not name any names, but it describes its Ambassadors as “individuals who contribute positively to the Twitch community, from being role models for their community to creating new types of content to having inspiring stories that empower those around them.” The dismissals surprised many people on the platform, but the technology sector has seen many similar developments recently.
Security
Google moves Gmail’s two-factor authentication from SMS to QR codes
Google is preparing to make a significant change to increase Gmail’s security.
Gmail has long used SMS codes for two-factor authentication (2FA), but it will switch to QR codes in the near future. The change is being made because of the security risks of SMS verification and aims to better protect users from cyberattacks.
According to Forbes, the new verification method, which will be available in the next few months, is a countermeasure against fraudsters who abuse SMS-based authentication. Google spokesperson Ross Richendrfer said the company is making the move to curb the global rise in SMS fraud and phishing attacks. Google is not removing SMS verification entirely; instead of receiving a six-digit code, users will be shown a QR code that they have to scan when signing in to Gmail.
Why is Google switching to QR code verification?
Traditional SMS verification has several security weaknesses:
– Vulnerability to phishing: Codes sent by SMS can be intercepted by scammers. QR codes reduce this risk because there is no shared code to steal.
– SIM swapping: Malicious actors who take over a user’s phone number can steal their authentication codes.
– SMS traffic fraud: Cybercriminals can deceive service providers into sending large volumes of SMS messages to numbers they control and profit from the traffic.
Google says QR code-based verification will largely close these security gaps. With the new system, a Gmail user will see a QR code on the screen when logging in and will complete authentication by scanning it with a Google app or another device that supports verification.
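Conceptually, such a flow pairs a short-lived, single-use token with the login session. The sketch below is illustrative only: it assumes the third-party qrcode package and an invented URI scheme, not Google’s actual implementation, whose details have not been published.

```python
import secrets
import time
import qrcode  # third-party: pip install qrcode

# Hypothetical server-side store of pending login challenges.
pending_challenges = {}

def create_login_challenge(session_id: str) -> None:
    """Generate a single-use token, bind it to the session, and render a QR code."""
    token = secrets.token_urlsafe(32)
    pending_challenges[session_id] = {"token": token, "expires": time.time() + 120}
    # The URI scheme here is invented for illustration; the real scheme is not public.
    qrcode.make(f"example-auth://verify?sid={session_id}&token={token}").save("challenge.png")

def verify_scan(session_id: str, scanned_token: str) -> bool:
    """Called when the authenticator app scans the code and posts the token back."""
    challenge = pending_challenges.pop(session_id, None)
    if challenge is None or time.time() > challenge["expires"]:
        return False  # unknown or expired challenge
    return secrets.compare_digest(challenge["token"], scanned_token)
```

Because the token never travels over SMS in a scheme like this, there is nothing for a SIM-swapper or an SMS traffic fraud operation to intercept.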
Google has also added a few other notable updates to the app recently. One of them lets Android and iOS users pay invoices directly through Gmail. The company plans to keep improving Gmail’s security and functionality, and QR code-based two-factor authentication, which will soon be available to everyone, should make logging in to Gmail both safer and easier.
Artificial Intelligence
Is Artificial Intelligence a Security Shield or a Threat Tool?
Artificial intelligence has given a major boost to the cybersecurity arms race in the past year. There will be no break from this race for the next 12 months. This has significant implications for corporate cybersecurity teams and employers, as well as everyday web users.
Security experts have underlined what to watch for in 2025, noting that in the hands of bad actors, artificial intelligence tools can increase the scale and severity of all kinds of fraud, disinformation campaigns and other threats.
The UK’s National Cyber Security Centre (NCSC) warned at the start of 2024 that AI is already being used by all types of threat actors and will “increase the volume and impact of cyber attacks over the next two years”. The threat is most visible in social engineering, where generative artificial intelligence (GenAI) can help malicious actors craft persuasive campaigns in error-free local languages.
These trends will continue into 2025, and we may also see artificial intelligence used for the following purposes:
· Authentication bypass: Deepfake technology used to help scammers impersonate customers in selfie and video-based checks for new account creation and account access.
· Business email compromise (BEC): AI will again be used for social engineering, but this time to trick corporate recipients into transferring money to an account under the fraudster’s control. Deepfake audio and video can also be used to impersonate CEOs and other senior leaders in phone calls and virtual meetings.
· Imitation scam: Open source large language models (LLMs) will present new opportunities for fraudsters. By training these models on data collected from hacked or public social media accounts, fraudsters can impersonate victims in virtual kidnappings and other scams designed to fool their friends and family.
· Influencer scam: Similarly, we expect to see GenAI used by scammers in 2025 to create fake or duplicate social media accounts impersonating celebrities, influencers, and other public figures. Deepfake videos will be released to trick followers into handing over their personal information and money, for example in investment and crypto scams, including the kind of tricks highlighted in ESET’s latest Threat Report. This will put more pressure on social media platforms to offer effective account verification tools and badges, and on users to stay vigilant.
· Disinformation: Hostile states and other groups will leverage GenAI to easily generate fake content to trick gullible social media users into following fake accounts. These users can then be turned into online amplifiers for influence operations that are more effective and harder to detect than content/troll farms.
· Password cracking: AI-driven tools can mass expose user credentials in seconds to gain access to corporate networks and data, as well as customer accounts.
AI privacy concerns for 2025
Artificial intelligence will not just be a tool for threat actors next year. It will also bring a heightened risk of data leakage. LLMs need large volumes of text, images and video for training, and some of this data is often sensitive: biometrics, health information or financial data. In some cases, social media and other companies may change their terms and conditions to allow customer data to be used for training models. Once this information has been ingested by an AI model, it poses a risk to individuals if the AI system itself is hacked or if the information is surfaced to others through GenAI applications running on the LLM.
There is also a concern that enterprise users may unknowingly share sensitive business information through GenAI prompts. One survey found that a fifth of UK companies have inadvertently exposed potentially sensitive corporate data through employee use of GenAI.
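One common mitigation is to screen prompts for obviously sensitive patterns before they leave the company network. The sketch below is a hypothetical, minimal example of such a filter; the patterns and the redact_prompt helper are invented for illustration, not a production data-loss-prevention tool.

```python
import re

# Illustrative patterns only; real DLP tooling uses far richer detection.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the prompt is sent to an LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_prompt("Summarise this: customer jane@example.com paid with 4111 1111 1111 1111"))
# -> Summarise this: customer [REDACTED EMAIL] paid with [REDACTED CREDIT_CARD]
```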
AI for defenders in 2025
The good news is that AI will play a larger role in the work of cybersecurity teams in the coming year as it is incorporated into new products and services.
· Creating synthetic data to train users, security teams and even AI security tools
· Summarizing long and complex threat intelligence reports for analysts
· Improving SecOps efficiency for overloaded teams by contextualizing and prioritizing alerts and automating workflows for investigation and remediation (see the sketch after this list)
· Scanning large data volumes for signs of suspicious behavior
· Upskilling IT teams through “co-pilot” functionality built into a variety of products to help reduce the possibility of misconfiguration
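As a rough illustration of the alert-prioritization idea, the sketch below scores alerts using a couple of contextual signals and sorts the queue accordingly. The fields and weights are invented for the example; real SecOps products combine far more signals, often with learned models.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int           # 1 (low) .. 5 (critical), as reported by the sensor
    asset_criticality: int  # 1 .. 5, how important the affected asset is
    correlated_events: int  # related events seen in the same time window

def priority(alert: Alert) -> float:
    """Toy scoring: weight raw severity by asset context plus corroborating events."""
    return alert.severity * alert.asset_criticality + 0.5 * alert.correlated_events

queue = [
    Alert("Port scan from guest Wi-Fi", severity=2, asset_criticality=1, correlated_events=0),
    Alert("Credential stuffing on VPN", severity=3, asset_criticality=5, correlated_events=12),
    Alert("Malware beacon from laptop", severity=4, asset_criticality=3, correlated_events=3),
]

# Highest-priority alerts surface first for the analyst.
for alert in sorted(queue, key=priority, reverse=True):
    print(f"{priority(alert):5.1f}  {alert.name}")
```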
However, IT and security leaders must also understand AI’s limitations and the continued importance of human expertise in decision-making. A balance between human and machine will be needed in 2025 to reduce the risk of hallucinations, model degradation and other potentially negative outcomes. AI is not a magic wand; it should be combined with other tools and techniques for optimal results.
AI challenges in compliance and implementation
The evolution of the threat landscape and AI security does not occur in a vacuum. Geopolitical changes in 2025, especially in the United States, could even lead to deregulation in the technology and social media sectors.
This could enable scammers and other malicious actors to flood online platforms with AI-generated threats. Meanwhile, in the EU, there is still some uncertainty around AI regulation, which could make life more difficult for compliance teams. As legal experts note, codes of practice and guidance still need to be finalized, and liability for AI system failures worked out. Lobbying from the tech sector could change how EU AI law is implemented in practice.
What is clear, however, is that artificial intelligence will fundamentally change the way we interact with technology in 2025, for better and for worse. It offers great potential benefits for businesses and individuals, but it also poses new risks that must be managed. Governments, private sector businesses and end users will all need to do their part and work together over the next year to harness AI’s potential while mitigating its risks.
Security
Kaspersky analyzes IT outage and supply chain risk scenarios
As part of Kaspersky’s annual “Security Bulletin”, the company’s experts analyzed major supply chain attacks and IT outages experienced last year.
In 2024, supply chain attacks and IT outages emerged as prominent cybersecurity concerns, demonstrating that almost no infrastructure is completely immune from risk. A faulty CrowdStrike update affected millions of systems, and sophisticated incidents such as the XZ backdoor and the Polyfill.io supply chain attack highlighted the risks inherent in widely used tools. These and other notable cases underline the need for rigorous security measures, robust patch and update management, and proactive defenses to protect global supply chains and infrastructure.
Evaluating the events of 2024 as its “Story of the Year”, the Kaspersky Security Bulletin discusses possible future scenarios and their potential consequences:
But what if a major AI provider experiences an outage or data breach? Businesses increasingly rely on models from providers such as OpenAI, Meta and Anthropic. However, despite the excellent user experience these integrations offer, they also carry significant cyber risks. Dependence on a single AI provider or a limited number of providers creates concentrated points of failure. If a large artificial intelligence company experiences a critical outage, it can significantly affect dozens or even thousands of services that depend on it.
Additionally, an incident at a major AI provider could lead to one of the most serious data leaks yet seen, since these systems often store large amounts of sensitive information.
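A common way to blunt this concentration risk is a failover wrapper that tries a secondary model provider when the primary is unavailable. The sketch below is a hypothetical illustration: call_openai and call_anthropic are stand-ins for whatever client code each provider’s SDK actually requires.

```python
from typing import Callable

class AllProvidersFailed(Exception):
    pass

# Hypothetical stand-ins for real SDK calls; each takes a prompt and returns text.
def call_openai(prompt: str) -> str: ...
def call_anthropic(prompt: str) -> str: ...

def complete(prompt: str, providers: list[Callable[[str], str]]) -> str:
    """Try each provider in order; fall through to the next on any failure."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # outage, rate limit, auth error, ...
            errors.append(f"{provider.__name__}: {exc}")
    raise AllProvidersFailed("; ".join(errors))

# Usage: primary first, fallback second.
# text = complete("Summarize this incident report...", [call_openai, call_anthropic])
```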
But what if on-device AI tools are exploited? As AI becomes more integrated into everyday devices, the risk of it becoming an attack vector increases significantly. For example, Kaspersky’s Operation Triangulation campaign, revealed last year, showed how attackers can compromise device integrity by exploiting system software and hardware using zero-day vulnerabilities to install advanced spyware. If software or hardware vulnerabilities were discovered in the neural processing units that run AI, including on platforms such as Apple Intelligence, they could significantly amplify the scale and impact of such attacks.
Kaspersky’s Operation Triangulation investigation also uncovered a first-of-its-kind case reported by the company: the misuse of on-device machine learning tools for data extraction. This suggests that features designed to improve user experience are already being weaponized by advanced threat actors.
But what if threat actors disrupt satellite connectivity? Although the space industry has faced various cyber attacks for some time, threat actors’ next target may be satellite internet providers, an important element of the global connectivity chain. Satellite internet can provide temporary communication links when other systems are down, and airlines, cruise lines and other platforms rely on the service to offer connectivity to passengers. It can also be used to enable secure communication services.
This creates cyber risks: a targeted cyber attack or a faulty update against a leading or dominant satellite provider can cause internet outages and possible communication breakdowns and seriously impact individuals and organizations.
But what if major physical threats to the internet materialize? Continuing on the topic of connectivity, the internet is also vulnerable to physical threats. About 95% of global data is transmitted via undersea cables, and there are approximately 1,500 Internet Exchange Points (IXPs), physical locations where different networks exchange data traffic. Many of these points are located in data centers.
An outage to just a few critical components of this chain – such as trunk cables or IXPs – could overload the remaining infrastructure and potentially lead to widespread outages, significantly impacting global connectivity.
But what if serious vulnerabilities are exploited in the Windows and Linux kernels? These operating systems run many critical assets around the world – servers, production equipment, logistics systems, IoT devices, and others. A remotely exploitable kernel vulnerability in these systems could expose countless devices and networks worldwide to potential attacks, creating a high-risk situation in which global supply chains could suffer major disruptions.
“Supply chain risks may seem daunting, but awareness is the first step to prevention,” said Igor Kuznetsov, Director of Kaspersky’s Global Research and Analysis Team (GReAT). “We can reduce single points of failure by rigorously testing updates, using AI-powered anomaly detection, and diversifying providers. We can eliminate weak elements and build resilience. It is also vital to create a culture of responsibility among staff, because human attention is the cornerstone of security. Together, these measures can ensure a safer future by protecting supply chains.”
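The AI-powered anomaly detection Kuznetsov mentions can be as simple as an unsupervised outlier model watching update telemetry before a rollout proceeds. Below is a minimal, illustrative sketch using scikit-learn’s IsolationForest; the features (update size and post-canary crash rate) are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented telemetry: [update size in MB, % of canary machines crashing after install]
rng = np.random.default_rng(0)
historical_updates = np.column_stack([
    rng.normal(40, 5, 200),     # typical update sizes cluster around 40 MB
    rng.normal(0.1, 0.05, 200), # crash rates normally sit near 0.1%
])

model = IsolationForest(contamination=0.01, random_state=0).fit(historical_updates)

candidate = np.array([[42.0, 9.5]])  # normal size, but an anomalous crash rate
if model.predict(candidate)[0] == -1:  # -1 flags an outlier
    print("Anomalous update: halt rollout and investigate")
else:
    print("Telemetry looks normal: continue staged rollout")
```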