Brief #94: ESXi Server Attacks, Webcam-Based Ransomware, Google's AI Red Team Path

Nikoloz Kokhreidze
80% of organizations struggle to identify high-risk data in hybrid clouds. State actors bypass MFA via LinkedIn/WhatsApp social engineering. Anthropic's Claude outperforms GPT-4o in security testing.

Happy Sunday!
This week brings some interesting developments worth your attention. As you enjoy your morning coffee, here's what's happening:
• VMware ESXi servers are facing active exploitation of a critical vulnerability, with over 37,000 exposed instances worldwide. If your organization uses ESXi, you'll want to prioritize those patches.
• The job market is evolving - we're seeing an oversupply of security generalists while specialized skills (like OT security and zero-trust) remain in high demand. Might be time to focus on developing those niche capabilities.
• An interesting finding on LLM hacking capabilities shows that while technically possible, AI tools still require significant expert supervision to be effective for attacks - good news for defenders, at least for now.
Dive into the full newsletter for more details on these stories and other developments shaping our industry this week.
Your feedback shapes Mandos Brief and I'd love to hear your thoughts about the content I share.

INDUSTRY NEWS
Over 37,000 VMware ESXi Servers Vulnerable to Actively Exploited Critical Flaw
-
A critical out-of-bounds write vulnerability (CVE-2025-22224) affecting VMware ESXi is being actively exploited in the wild, with ShadowServer reporting approximately 37,000 internet-exposed vulnerable instances.
-
The flaw enables local attackers with administrative privileges on VM guests to escape the sandbox and execute code on the host as the VMX process, with CISA mandating federal agencies to patch by March 25, 2025, or discontinue using the product.
-
Most vulnerable servers are located in China (4,400), France (4,100), United States (3,800), and Germany (2,800), with Broadcom providing patches but no workarounds for this critical vulnerability.
Akira Ransomware Gang Bypasses EDR by Encrypting Network from Unsecured Webcam
-
Akira threat actors initially gained access through an exposed remote access solution, then pivoted to using an unsecured webcam running Linux after their Windows encryptor was blocked by the victim's EDR solution.
-
The attackers mounted Windows SMB network shares from the webcam device, which had no EDR protection, allowing them to encrypt files across the victim's network while generating unmonitored malicious SMB traffic.
-
Security firm S-RM confirmed that patches were available for the webcam vulnerabilities, highlighting the importance of isolating IoT devices from sensitive networks and maintaining regular firmware updates for all connected devices.
Microsoft 365 to Prompt Users for OneDrive Backups, Amid Multiple Security Threats
-
Microsoft is implementing a new feature in Microsoft 365 apps that will prompt users to back up their files to OneDrive, potentially improving data security as various threats emerge.
-
A malicious Chrome extension attack discovered by SquareX Labs can impersonate legitimate extensions like password managers by using the chrome.management API to disable the real extension and display phishing forms to steal credentials.
-
Over 37,000 VMware ESXi servers remain vulnerable to ongoing attacks, while a ransomware gang successfully encrypted a network by accessing it through a webcam to bypass endpoint detection and response (EDR) solutions.

LEADERSHIP INSIGHTS
Organizations Face Significant Data Security Challenges in Hybrid Cloud Environments
-
Survey reveals 80% of respondents lack high confidence in identifying high-risk data sources, with 31% reporting insufficient tooling to identify their riskiest data sources.
-
Misalignment exists between management and operational teams, with executives focusing on strategic goals while staff struggle with resource constraints—54% rely on semi-automated processes and 22% on entirely manual processes.
-
Organizations are shifting toward risk-based approaches, prioritizing vulnerability identification (7.06/8) and vulnerability prioritization (6.15/8) over compliance-driven strategies, with 54% using four or more tools to manage data risks.
State-Sponsored Threat Actor Compromises Cloud Environment via Social Engineering
-
Attackers used LinkedIn and WhatsApp to target key development staff, convincing them to run malicious code that harvested access keys and credentials from corporate laptops.
-
The threat actor bypassed MFA by stealing session tokens, gaining access to Microsoft 365 and AWS environments through both direct API access and web console via compromised Entra ID.
-
The sophisticated attack demonstrates how threat actors can chain together minor permission gaps to achieve privilege escalation, highlighting critical weaknesses in identity governance and cloud security monitoring.
Sophisticated Infostealer Malware "SneakThief" Sets New Standard for 2024 Cyber Threats
-
"SneakThief" malware employs multi-stage infiltration techniques including process injection, encrypted communications, and boot persistence to remain hidden while stealing valuable data.
-
Top ten MITRE ATT&CK techniques account for over 90% of observed malicious activity, with Process Injection (T1055), Command and Scripting Interpreter (T1059), and Credentials from Password Stores (T1555) being most prevalent.
-
Modern infostealers now perform an average of 14 malicious actions per sample, while ransomware groups have evolved to multi-stage extortion campaigns that combine data theft with traditional encryption tactics.

CAREER DEVELOPMENT
Google and Hack The Box Launch AI Red Teamer Path for Security Education
-
The partnership introduces a structured learning program designed to equip cybersecurity professionals with skills to evaluate, test, and defend AI systems against adversarial threats like data poisoning and model evasion.
-
The curriculum aligns with Google's Secure AI Framework (SAIF) and provides hands-on labs focused on red teaming methodologies specifically for AI security challenges.
-
Target audiences include penetration testers expanding into AI security, AI engineers developing secure models, and developers working with AI-integrated applications, with plans to expand coverage of MITRE Atlas and OWASP LLM/ML frameworks.
Entry-level cybersecurity jobs in US typically pay $50-80K, varying by location and experience
-
Most respondents indicate entry-level SOC analyst positions start at $50-60K, with some reporting increases to $60-70K after probationary periods or in higher cost-of-living areas.
-
True entry-level cybersecurity positions are relatively rare, with many employers preferring candidates who have 2-5 years of prior IT experience, which can push salaries toward the $70-90K range.
-
Location significantly impacts salary ranges, with coastal and high cost-of-living areas offering higher compensation (up to $90-100K), while specialized roles in consulting, engineering, or finance sectors may command premium starting salaries.
Cybersecurity Job Market Shifts: Generalist Oversupply While Specialized Skills Remain in Demand
-
The cybersecurity job market has evolved from "hire anyone who can spell cybersecurity" to a more competitive landscape, with generalists facing potential oversupply while specific skill shortages persist in areas like operational technology and zero-trust expertise.
-
HR practices are complicating the hiring process through "ghost jobs" (advertised positions that don't exist), AI-based resume filtering that rejects qualified candidates, and unrealistic job requirements that don't match actual needs or compensation levels.
-
Industry experts recommend employers work with existing security staff to create realistic job descriptions, focus on hiring for aptitude rather than experience for junior roles, and note that networking has become increasingly critical for job seekers in the security field.

AI & SECURITY
LLM Hacking Research Shows Limited Practical Threat Despite Technical Feasibility
-
OWASP researchers found that while LLMs can technically perform hacking tasks, they require extensive supervision from experts and are impractical for low-skill threat actors due to high time investment (82 developer hours for just five tasks).
-
GPT-4o outperformed Claude and local DeepSeek models (which failed completely), suggesting that advanced LLM hacking requires credentials for commercial APIs, increasing both cost and risk of detection for malicious actors.
-
LLMs demonstrated significant limitations including rigid goal-following (missing obvious vulnerabilities), installation loops creating "cycles of spend," and noisy fallback behaviors that would likely trigger detection in real environments.
AI Agents Set to Transform Work Functions and Reshape Industries
-
AI is entering the "Agentic" phase, where autonomous AI systems can perceive environments and take actions to achieve specific goals without constant human input.
-
Five types of AI agents are emerging: simple reflex, model-based reflex, goal-based, utility-based, and learning agents - with applications across customer support, online shopping, education, healthcare, and business decision-making.
-
While promising increased productivity, the shift raises concerns about job displacement and control problems, with experts predicting AI will affect nearly 40% of all jobs in coming years.
Security Researcher Publishes Comprehensive Guide to Hacking AI Applications
-
Security researcher Joseph (rez0) has released a detailed guide covering methodologies for hacking AI applications, focusing on systems that use language models as features.
-
The guide explores various attack vectors including prompt injection, traditional web vulnerabilities triggered through AI, and multimodal attacks that use invisible Unicode characters or image-based techniques.
-
The researcher includes a responsibility model for AI security, explaining how vulnerabilities should be attributed between model providers, application developers, and users, along with potential mitigations for the identified security issues.

MARKET UPDATES
Anthropic's Claude 3.5 Sonnet Tops AI Security Rankings in CalypsoAI's New Index
-
CalypsoAI has launched the first comprehensive security ranking system for major GenAI models, using their new Inference Red-Team solution that successfully compromised all tested models through automated attacks and "Agentic Warfare" techniques.
-
The CalypsoAI Security Index (CASI) shows Anthropic's Claude 3.5 Sonnet leading with a 96.25 score, while popular models like OpenAI's GPT-4o scored significantly lower at 75.06, revealing substantial vulnerabilities across even the most advanced AI systems.
-
The index provides critical metrics beyond security scores, including Risk-to-Performance ratio and Cost of Security, giving organizations essential data to make informed decisions about which AI models can be safely deployed in enterprise environments.
Rapid7 Enhances Exposure Management with Data Visibility and AI-Driven Risk Scoring
-
Rapid7's expanded offering provides continuous visibility into sensitive data across multicloud environments, integrating with AWS Macie, Google Cloud DLP, and Microsoft Defender for automated data classification.
-
New AI-driven vulnerability scoring enhances risk prioritization by generating intelligence-driven risk scores, helping security teams focus on critical exposures with greater accuracy.
-
Updates to Remediation Hub streamline the remediation process by embedding guidance directly within asset inventory pages, eliminating platform switching and accelerating mean-time-to-remediate.
NinjaOne secures $500 million in Series C funding at $5 billion valuation
-
The endpoint management platform raised funding led by ICONIQ Growth and CapitalG to drive R&D in autonomous management, patching, and vulnerability remediation while supporting its pending $262M acquisition of Dropsuite.
-
NinjaOne remains founder-led with co-founders Sal Sferlazza and Chris Matarese maintaining majority control of the company, which serves over 24,000 customers including Nvidia, Lyft, and Porsche.
-
The company plans to expand its AI capabilities and IT use cases while maintaining its commitment to customer support, having recently launched NinjaOne AI for Patch Sentiment, Mobile Device Management, and free Warranty Tracking.

TOOLS
CloudDefense.AI
CloudDefense.AI is a Cloud Native Application Protection Platform (CNAPP) that safeguards cloud infrastructure and cloud-native apps with expertise, precision, and confidence.
Wiz
Wiz Cloud Security Platform is a cloud-native security platform that enables security, dev, and devops to work together in a self-service model, detecting and preventing cloud security threats in real-time.
Anomali
Anomali is an AI-Powered Security Operations Platform that delivers speed, scale, and performance at a reduced cost, combining ETL, SIEM, XDR, SOAR, and TIP to detect, investigate, respond, and remediate threats.
Before you go
If you found this newsletter useful, I'd really appreciate if you could forward it to your community and share your feedback below!
For more frequent cybersecurity leadership insights and tips, follow me on LinkedIn, BlueSky and Mastodon.
Best,
Nikoloz