Brief #91: AWS AMI Attack, NVIDIA Container Escape, InfoSec Salaries

Nikoloz Kokhreidze

Nikoloz Kokhreidze

9 min read

Malicious AI models found on Hugging Face. Multiple PE firms compete for Trend Micro. Security leadership salaries reach $261.5K median

mandos brief cybersecurity leadership newsletter week 7 of 2025

Happy Sunday!

This week brings some interesting developments across the security landscape. AWS users should pay attention to a new supply chain attack targeting AMI deployments, while Meta's making waves with their new AI-powered testing tool that's already improving security across their major platforms. On the career front, the latest salary index shows security leadership roles breaking past $260K, though the gap between public and private sector compensation continues to widen.

Let's dive into this week's security updates and see what matters for your Monday morning.

Your feedback shapes Mandos Brief and I'd love to hear your thoughts about the content I share.

INDUSTRY NEWS

AWS AMI Name Confusion Attack Enables Malicious Image Deployment

  • A newly discovered supply chain attack allows threat actors to trick AWS services into using malicious AMIs by exploiting image name pattern matching when the owners attribute is not specified during AMI searches.

  • The vulnerability affects approximately 1% of organizations using AWS and impacted AWS's internal systems. AWS has released "Allowed AMIs" feature in December 2024 as a security control to prevent unauthorized AMI usage.

  • The attack can be detected using Datadog's new Cloud SIEM rule that monitors for ec2:DescribeImages API calls without owner filters followed by ec2:RunInstances. The open source "whoAMI-scanner" tool can also identify instances running unverified AMIs.

South American Foreign Ministry Targeted with Novel FINALDRAFT Malware Using Microsoft Graph API

  • Threat actor REF7707 deployed sophisticated malware against a South American foreign ministry, telecommunications entity, and university, using valid network credentials for lateral movement via Windows Remote Management.

  • The malware, named FINALDRAFT, is a remote administration tool that leverages Microsoft's Graph API to execute commands through Outlook draft folders, featuring 37 command handlers for process injection and file manipulation.

  • A Linux variant of FINALDRAFT was discovered, suggesting a cross-platform espionage campaign with both Windows and Linux versions sharing similar command-and-control functionality through Microsoft's email services.

NVIDIA Container Toolkit Vulnerability Enables Host System Access Through Container Escape

  • Critical vulnerability (CVE-2024-0132) in NVIDIA Container Toolkit allows attackers to escape container isolation by exploiting a Time-of-Check/Time-of-Use vulnerability in libnvidia-container, enabling full host system access.

  • Exploit technique involves manipulating container filesystem mounts to access the host's root filesystem and docker.sock, allowing attackers to launch privileged containers and achieve complete host compromise.

  • Affects multiple cloud providers using NVIDIA's toolkit, with potential for cross-tenant attacks in Kubernetes environments. Fixed in version 1.17.4, which addresses both the original vulnerability and a subsequent bypass (CVE-2025-23359).

LEADERSHIP INSIGHTS

Threat Actors Standardize Enterprise-Level Attack Methods Across All Business Sizes

  • Advanced techniques like defense tampering and BYOVD privilege escalations have become standard across organizations of all sizes, with attackers adapting enterprise-level strategies for smaller targets.

  • Infostealers and malicious scripts dominated the threat landscape (46% of incidents), while ransomware groups shifted focus to data theft and extortion rather than encryption due to improved detection capabilities.

  • Healthcare and education sectors were most targeted (38% of incidents), with attackers heavily exploiting RATs like AsyncRAT and abusing legitimate RMM tools for network infiltration and lateral movement.

Paris Peace Forum Policy Report Shows AI Governance Parallels in Cyber Policy Evolution

  • International cyber policy experience over the past 20 years offers valuable frameworks and lessons for governing emerging AI risks, particularly around trust-building and stakeholder inclusion.

  • Current global AI governance efforts show significant fragmentation, with 118 countries excluded from major initiatives, highlighting need for more inclusive participation similar to cyber policy development.

  • The report identifies AI-driven cyber threats as the most pressing short-term risk, requiring adaptation of existing cybersecurity frameworks rather than creating entirely new governance structures.

2025 CISO Compensation Survey Shows Growing Pay Gap Between Public and Private Sectors

  • Public company CISOs experienced a 6.1% year-over-year increase in cash compensation, while private sector CISOs saw only 1.7% growth, highlighting a widening compensation gap between sectors.

  • Gender pay disparities persist, with female CISOs in private companies earning 83% of male counterparts' salaries, though the gap narrows to 92.5% in public companies. Diversity remains a critical challenge in security leadership.

  • Security leaders face significant protection gaps, with over 50% of private company CISOs lacking indemnification policies or Directors & Officers insurance, while public company CISOs generally receive better benefits and protections.

Note: I've focused on the key compensation, diversity, and protection findings from the comprehensive survey, highlighting the most significant trends that security leaders should be aware of. The bold words emphasize critical aspects that organizations should consider when evaluating their security leadership structure and compensation packages.

📖
Discover my collection of industry reports, guides and cheat sheets in ‣ Cyber Strategy OS.

CAREER DEVELOPMENT

Global InfoSec Salary Index 2025 Released with Head of Security Leading at $261.5K

  • Dataset shows Head of Security and Director of Security as highest-paid roles, with median salaries of $261,500 and $257,500 respectively, based on community-sourced data from professionals worldwide.

  • The index reveals significant salary variations across 118 roles, with entry-level positions like SOC Analyst starting at $70,600, while specialized roles like Privacy Engineer command $200,000 median salaries.

  • Comprehensive data available through multiple channels including weekly updated GitHub repository and downloadable public domain dataset, with salary information from over 1,123 Security Engineers contributing to the index.

Cybersecurity Professionals Share Mixed Views on Certification Requirements for Career Growth

  • Industry veterans report successful careers without certifications, with multiple professionals having 10-30 years of experience in InfoSec roles while holding few or no certifications.

  • Continuous learning remains essential, but professionals emphasize that learning can occur through hands-on experience, practical implementation, and on-the-job training rather than formal certification.

  • Career advancement challenges include overcoming HR filters and employer mandates, with some organizations requiring specific certifications (like CISSP) for position retention or advancement, while others focus purely on demonstrated skills.

AI Tools Fuel Cybersecurity Job Demand Amid Rising Threats

  • AI accessibility has created new attack vectors, with threat actors leveraging AI for enhanced phishing campaigns and deepfakes, capable of generating convincing malicious content in minutes without requiring advanced technical skills.

  • Global cybersecurity workforce gap reaches 4.8 million jobs in 2024, with positions taking 21% longer to fill than other IT roles due to rapidly evolving threat landscape and required skill sets.

  • Information security analyst employment projected to grow 33% by 2033, with Chief Information Security Officers (CISOs) earning up to $1 million annually as organizations prioritize defense against sophisticated cyber threats.

AI & SECURITY

Meta Launches AI-Powered Software Testing Tool for Automated Bug Detection

  • Meta's new Automated Compliance Hardening (ACH) tool combines LLM capabilities with mutation testing to automatically generate both realistic test cases and code mutations, focusing on specific types of faults rather than just code coverage.

  • The system has been successfully deployed across Meta's major platforms (Facebook Feed, Instagram, Messenger, WhatsApp) to detect and prevent privacy regressions by automatically generating tests from plain text descriptions of potential vulnerabilities.

  • Unlike traditional mutation testing approaches that rely on rule-based systems, ACH uses LLMs to create more realistic fault scenarios and automatically generates the corresponding test cases, significantly reducing manual effort while providing verifiable assurances of test effectiveness.

OWASP Releases LLM Security Solutions Framework for AI Application Development

  • Document outlines security solutions landscape for LLM applications, targeting developers, AppSec teams, and security leaders with focus on four major application architectures: prompt-centric, AI agents, plugins/extensions, and complex applications.

  • Framework aligns with OWASP Top 10 for LLMs and CISO Governance Checklist, providing vendor-agnostic guidance on securing the complete AI lifecycle from development through deployment, addressing gaps in traditional security tools.

  • Emphasizes unique security challenges including prompt injection, data leakage, and unauthorized access, while maintaining a vendor-neutral stance to help organizations properly define business outcomes for LLMSecOps investments.

AI Model Repositories and Infrastructure Face Multiple Security Threats

  • Researchers found thousands of malicious files on Hugging Face, including compromised models capable of stealing credentials - in one case, attackers impersonated 23AndMe to distribute a model that stole AWS passwords.

  • AI model theft through extraction attacks is increasing, where attackers systematically query black-box models through APIs to collect enough data for reverse engineering, particularly targeting cloud-hosted systems.

  • Organizations must address "excessive agency" risks where AI systems have unnecessary permissions across integrated environments, while implementing proper access controls and conducting red team assessments to identify vulnerabilities in AI infrastructure.

MARKET UPDATES

Multiple Private Equity Firms Compete to Acquire Trend Micro in Potential Multi-Billion Deal

  • Major private equity firms including Bain, KKR, Advent International, and EQT AB have expressed interest in acquiring Japanese cybersecurity provider Trend Micro, which currently has a market cap of $8.54 billion.

  • Trend Micro's flagship platform Vision One protects over 500,000 organizations and 250 million devices, with recent quarterly revenue growth of 6% to $456 million and operating income increase of 42% to $99 million.

  • The company's shares jumped 16% following acquisition reports, though sources indicate Trend Micro may opt to remain independent rather than pursuing a sale.

CyberArk Acquires Access Management Startup Zilla Security for $165M

  • CyberArk enhances its identity security portfolio by acquiring Zilla Security, whose platform streamlines compliance processes through automated user access reviews and permission management for enterprise applications.

  • Zilla's technology helps implement separation-of-duties controls and automatically detects suspicious activities like unauthorized admin account creation, while also identifying potential access-related vulnerabilities.

  • The acquisition will result in two new standalone products - Zilla Comply and Zilla Provisioning - integrating with CyberArk's existing suite of secure access management solutions.

Andesite AI Launches Human-AI Security Operations Platform with $23M Funding

  • New bionic SOC platform combines human expertise with AI to help security teams shift from reactive alert triaging to proactive threat hunting, while maintaining data within security boundaries.

  • Platform features include context-aware AI for data unification, evidentiary AI for decision tracking, and adaptive automation for streamlined workflows, meeting standards like SOC2 Type I and NIST AI Risk Management Framework.

  • Investment from General Catalyst and Red Cell Partners will support platform enhancement and expansion into key industries, addressing the challenge of overwhelming alerts and fragmented security tools that plague modern SOC teams.

TOOLS

Crowdstrike Charlotte AI

CrowdStrike Charlotte AI is a conversational AI assistant that accelerates security operations by automating tasks and providing faster intelligence through generative AI capabilities.

WhyLabs LLM Security

WhyLabs is a platform that provides security, monitoring, and observability capabilities for Large Language Models (LLMs) and AI applications, enabling teams to protect against malicious prompts, data leaks, misinformation, and other vulnerabilities.

CalypsoAI

CalypsoAI is a platform that provides centralized security, observability, and control for deploying and scaling large language models and generative AI across an enterprise.


Before you go

If you found this newsletter useful, I'd really appreciate if you could forward it to your community and share your feedback below!

For more frequent cybersecurity leadership insights and tips, follow me on LinkedInBlueSky and Mastodon.

Best, 
Nikoloz

Share With Your Network

Check out these related posts