A newly discovered vulnerability in the Figma Model Context Protocol server underscores growing security concerns surrounding AI agent integrations in enterprise environments. The high-severity command injection bug could enable attackers to execute arbitrary system commands, potentially compromising entire development ecosystems where design tools connect with broader organizational infrastructure.
Model Context Protocol servers function as critical connectors between AI agents and enterprise systems, but security researchers continue identifying vulnerabilities that threaten organizational security. The latest discovery affects the Figma MCP server, a tool that bridges the gap between visual design platforms and AI-powered code generation systems.
Figma MCP Command Injection Vulnerability Enables Remote Code Execution
The vulnerability, tracked as CVE-2025-53967 with a CVSS score of 7.5, exists in the figma-developer-mcp npm package. Security researchers identified a command injection flaw in the get_figma_data tool that stems from inadequate input validation and sanitization practices.
Figma operates as a web-based collaborative design platform enabling teams to create, share, and test user interfaces for digital products in real-time. Model Context Protocol, developed by Anthropic as an open source standard, allows AI models to securely connect with external data sources. The Figma MCP server specifically enables designers to work with AI code generators, translating visual design concepts into implementable code.
According to the security advisory, the vulnerability arises from how the server constructs and executes shell commands. “The server constructs and executes shell commands using unvalidated user input directly within command-line strings,” the advisory explains. “This introduces the possibility of shell metacharacter injection (|, >, &&, etc.).”
The flaw enables indirect prompt injection attacks, where malicious instructions embedded in seemingly benign inputs can trick AI systems into executing unauthorized commands. An MCP client could be manipulated through indirect prompt injection to call vulnerable tools with malicious parameters, leading to command injection and potential system compromise.
Security Patch Implements Input Validation to Prevent Shell Command Exploitation
The development team addressed the vulnerability by replacing the insecure child_process.exec() function with child_process.execFile() and implementing proper input validation mechanisms. This architectural change prevents shell interpretation of user-supplied data, eliminating the command injection attack vector.
Organizations using affected versions should immediately upgrade to Figma MCP version 0.6.3 or higher. Security teams should also audit systems running vulnerable versions and review system logs for suspicious command execution patterns that might indicate attempted exploitation or successful compromise.
MCP Server Proliferation Outpaces Security Implementation Across Organizations
Since Anthropic released Model Context Protocol in November 2024, enterprises have rapidly deployed MCP servers to connect AI agents with various systems. However, security implementations frequently lag behind adoption rates, creating significant risk exposure.
Unlike traditional APIs that typically connect to single endpoints, MCP servers function as universal connectors capable of simultaneously interfacing with multiple systems. This architectural characteristic means a single compromised MCP server could enable attackers to pivot across numerous connected systems, potentially leading to widespread organizational compromise.
MCP servers don’t merely retrieve and transfer data—they can execute commands and take autonomous actions across connected systems. This capability amplifies the potential impact of security vulnerabilities like the Figma command injection flaw.
Research from Backslash Security conducted in June 2025 estimated over 15,000 MCP servers currently operate worldwide. The researchers identified hundreds of misconfigured implementations that expose sensitive data or enable remote code execution attacks.

“What we see from our customers [is that] they’re even more widely adopted than organizations even understand,” Yossi Pik, CTO at Backslash Security, explained in discussing the findings.
A subsequent analysis by Knostic in July 2025 examined approximately 2,000 MCP servers exposed to the internet. The research revealed that nearly all lacked authentication mechanisms or access controls—security features that remain optional configurations rather than mandatory requirements.
Malicious MCP Servers and Growing Vulnerability Databases Signal Escalating Threat Landscape
Beyond misconfiguration issues, security researchers have identified deliberately malicious MCP servers designed to exfiltrate sensitive information. These malicious implementations can automatically intercept and redirect emails containing password resets, account confirmations, security alerts, invoices, and receipts to threat actors.
The combination of rapid deployment, widespread misconfiguration, dangerous vulnerabilities like the Figma command injection flaw, and emergence of malicious MCP servers creates substantial risk for organizations implementing agentic AI systems.
The security community has begun developing resources to address these challenges. Security vendors now offer MCP security guides and defensive resources. Organizations like Adversa maintain vulnerability databases specifically cataloging MCP security issues, helping defenders track and prioritize remediation efforts.
A recent whitepaper from Wiz, titled “Inside MCP Security: A Research Guide on Emerging Risks,” addressed the security-innovation tension: “MCP is moving fast, and like past waves of AI innovation, it simultaneously rewards and punishes early adopters. The protocol is evolving in the right direction, with promising improvements across registries, isolation, permissions, and more.”
The whitepaper emphasizes treating MCP implementations with appropriate security rigor: “Treat MCP with the same discipline you’d apply to any privileged integration surface. Audit tools, apply policy, and please be careful downloading and running random binaries off the Internet.”
Organizations deploying agentic AI systems must recognize that security frameworks haven’t kept pace with adoption rates. This security lag will likely persist as the technology continues evolving rapidly. Enterprises should prioritize comprehensive security assessments of MCP implementations, establish robust authentication and access control policies, and maintain heightened cybersecurity awareness around AI agent integrations.
The Figma MCP vulnerability serves as a concrete reminder that connecting AI agents to enterprise systems introduces new attack surfaces requiring careful security consideration. As agentic AI adoption accelerates, organizations must balance innovation benefits against the security challenges these powerful integration tools present.