Hello, World! I'm Vivian, a cybersecurity and AI Product Manager trying to keep up with an industry that moves faster than I can whisk up my morning matcha. Every week brings a new wave of vulnerabilities, AI security mishaps, and breaches that keep us on our toes, so I take some time to share the most interesting stories, which also helps me stay accountable and informed. Let's dive into what's been happening in cybersecurity and AI security recently…
AI Research & Vulnerabilities
First Commercial LLMjacking Campaign Monetizes Stolen AI Credits
Researchers uncovered Operation Bizarre Bazaar, the first attributed LLMjacking campaign where attackers steal cloud-based AI compute credits and resell them on underground marketplaces for profit. The attackers compromise cloud accounts with weak credentials or exposed API keys, then hijack access to hosted LLM instances from providers like OpenAI, Anthropic, and Azure to generate tokens that are packaged and sold at discounted rates to cybercriminals seeking cheaper AI access. The campaign demonstrates a new monetization model for cloud account compromises, where stolen AI resources are treated as a commodity.
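Campaigns like this typically begin with credentials leaked in source code or configuration files. As a minimal illustration, a few lines of Python can flag strings that look like AI provider API keys. The patterns below are simplified assumptions about key formats, not the providers' actual specifications; real secret scanners such as gitleaks or trufflehog maintain far more complete and current rule sets.

```python
import re

# Illustrative patterns only -- assumed shapes for provider keys, not
# official formats. Production scanners use vetted, regularly updated rules.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9\-]{20,}"),
}

def find_exposed_keys(text: str) -> list[tuple[str, str]]:
    """Return (provider, matched_string) pairs found in a blob of text."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((provider, match.group()))
    return hits
```

Running a check like this over repositories and CI artifacts before they are pushed is one low-cost way to cut off the initial access vector these campaigns rely on.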
Remote Code Execution in n8n Due to Sandbox Escape Vulnerability
Researchers disclosed a critical vulnerability in n8n, a popular workflow automation platform, that allows attackers to escape the JavaScript sandbox and execute arbitrary code on the host system. The flaw stems from insufficient isolation in how n8n processes user-supplied JavaScript code within workflow nodes, enabling an attacker to access Node.js built-in modules and file system operations that should be restricted. Exploitation requires the ability to create or modify workflows, meaning the primary risk is to organizations where untrusted users have workflow editing permissions.
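n8n's sandbox is JavaScript-based, but the underlying weakness is language-agnostic: if untrusted code can still reach language internals, it can claw its way back to restricted functionality. This Python sketch shows the same class of escape against a naive eval-based "sandbox"; it is an analogy for the reported flaw, not n8n's actual code.

```python
# A naive "sandbox": evaluate untrusted code with builtins stripped out.
# This mirrors the *class* of flaw reported in n8n, not its actual code.
untrusted = "().__class__.__base__.__subclasses__()"

result = eval(untrusted, {"__builtins__": {}}, {})

# Even with builtins removed, plain attribute access walks from an empty
# tuple up to `object` and back down to every loaded class, a foothold
# toward modules like os or subprocess.
```

The takeaway is that denylisting names or stripping globals is not isolation; robust sandboxing runs untrusted code in a separate process or VM with OS-level restrictions.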
Android Trojan Campaign Abuses Hugging Face to Host RAT Payloads
Researchers identified an ongoing Android malware campaign that leverages Hugging Face, a legitimate AI model hosting platform, to distribute Remote Access Trojan payloads. Attackers upload malicious Android APK files disguised as AI model datasets to Hugging Face repositories, then distribute links through messaging apps and social media, tricking victims into downloading what appear to be legitimate AI applications. Once installed, the trojanized apps request extensive permissions including accessibility services, allowing attackers to remotely control devices, steal credentials, intercept SMS messages, and exfiltrate sensitive data.
AI News
Former Google Engineer Convicted of Stealing Trade Secrets for Chinese AI Companies
A federal jury found Linwei Ding, a former Google software engineer, guilty of economic espionage and theft of trade secrets related to Google's AI infrastructure. Prosecutors presented evidence that Ding transferred over 500 confidential files containing information about Google's supercomputing data centers and AI chip architecture to his personal accounts while secretly working for two China-based AI technology companies. Ding faces up to 15 years in prison for each count of economic espionage and up to 10 years for each count of trade secret theft.
Mozilla Gives Users the Option to Disable All AI Features in Firefox
Mozilla introduced a new master toggle in Firefox that allows users to completely disable all AI-powered features with a single switch. The option, accessible through the browser settings under Privacy & Security, will turn off AI chatbot integration, AI-assisted content summarization, and other generative AI tools that have been gradually integrated into the browser. Mozilla stated the move responds to user feedback requesting more granular control over AI functionality, particularly from privacy-conscious users.
OpenAI Announces Multiple Product Updates
OpenAI launched Prism, a free cloud-based LaTeX workspace that integrates AI directly into scientific writing workflows. The company also unveiled a standalone Codex application for code generation and debugging, revealed details about an internal AI agent used for data analysis and research workflows, announced the retirement of GPT-4o and several older model versions, and implemented new safety controls to prevent AI agents from being manipulated through malicious links.
Cybersecurity News
Malicious Roblox Mods Distribute Infostealer Targeting Gamers
Researchers discovered a campaign distributing trojanized Roblox executors and mod menus through community forums and Discord servers that contain a sophisticated Lua-based infostealer. The malware, embedded within tools that promise enhanced gaming features or cheats, specifically targets credentials and session tokens for gaming platforms, social media accounts, and cryptocurrency wallets commonly used by the gaming community. Once installed, the stealer operates silently in the background while the mod continues to function normally, exfiltrating browser data, Discord tokens, and Roblox authentication cookies.
Malicious Chrome Extension Hijacks Affiliate Links to Steal Commissions
Researchers exposed a Chrome extension that secretly intercepts and replaces legitimate affiliate tracking codes on e-commerce sites with the attacker's own identifiers to steal referral commissions. The extension, which promoted itself as a productivity tool for online shoppers, monitors user navigation across major retail platforms including Amazon, eBay, and Walmart, then dynamically swaps affiliate parameters in URLs before the page fully loads, allowing the attacker to earn commissions while original content creators receive nothing.
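One defensive check for affected creators is to verify that their affiliate parameter survives the page load. The sketch below assumes Amazon-style `tag` query parameters and a hypothetical expected-tag table; real monitoring would cover each platform's own parameter names.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical expected tags -- real values depend on the creator's accounts.
EXPECTED_TAGS = {"www.amazon.com": "creator-20"}

def affiliate_tag_tampered(url: str) -> bool:
    """Return True if the URL's affiliate tag differs from the expected one."""
    parsed = urlparse(url)
    expected = EXPECTED_TAGS.get(parsed.hostname)
    if expected is None:
        return False  # no expectation recorded for this site
    tag = parse_qs(parsed.query).get("tag", [None])[0]
    return tag != expected
```

Because the extension swaps parameters before the page finishes loading, a check like this has to compare the final rendered URL, not the link as originally shared.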
FBI Takes Down RAMP Forum Used by Ransomware Operators
The Federal Bureau of Investigation seized the RAMP underground forum, a Russian-language cybercrime marketplace where ransomware affiliates recruited partners, traded stolen data, and coordinated attacks. The platform served as a hub for ransomware-as-a-service operations and initial access brokers who sold credentials to compromised corporate networks. The FBI takedown notice states that the agency collected extensive evidence during the operation, including user data and communication logs that will be used to pursue further criminal investigations.
ShinyHunters Expands SaaS Data Theft Operations Across Multiple Platforms
Google Cloud Threat Intelligence Group published an analysis revealing that the ShinyHunters cybercrime group has significantly expanded its operations beyond traditional database breaches to systematically target SaaS platforms through compromised administrative credentials and API abuse. The group exploits weak authentication, missing multi-factor authentication, and overly permissive API tokens to extract customer data at scale, gaining initial access mainly through social engineering. ShinyHunters has developed automated tools specifically designed to efficiently enumerate and exfiltrate data from popular SaaS platforms.
OpenClaw Security Vulnerabilities
Multiple security issues were disclosed in OpenClaw and related systems, including exposed Moltbook databases, a 1-click RCE via authentication token exfiltration, 341 malicious Clawed skills on ClawHub, fake Clawdbot VS Code extensions, exposed Clawdbot control panels, plaintext credential storage, real attack traffic targeting exposed gateways, and various MoltBot enterprise AI risks.
That's all for now… Stay informed and protected.