A note to Atlassian customers
The use of MCP clients with Atlassian products is a customer-elected action. In May 2025, Atlassian released our own Remote MCP Server to provide our customers with a trusted server to experiment with this leading-edge technology. Learn more here: https://atlassian.reaktivdev.com/announcements/remote-mcp-server.
As the industry experiments with this technology, new risks are emerging. We are carefully assessing the potential risks, and are sharing some practical precautions that organizations should consider before deploying AI agents that utilize MCP with their Atlassian data.
While these measures are not exhaustive, they may help reduce security risks.
What is MCP?
Key Definitions
- MCP: Model Context Protocol
- MCP Client: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
- MCP Servers: Lightweight programs that each expose specific capabilities through MCP
- Tools: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
Model Context Protocol (MCP) is an open standard that offers a universal method to connect large language models (LLMs) with various data sources and tools. This user-friendly technology promises quicker development cycles, scalability, and interoperable workflows, thereby standardizing AI-tool integrations.
Organizations, including Atlassian customers, are increasingly experimenting with MCP clients to enable AI agents to interact with tools and data more effectively. However, despite the convenience they offer, permitting AI agents to operate on behalf of humans through MCP carries inherent risks that should be considered.
Potential security risks associated with use of MCP Clients
As this technology is relatively new, the security implications are still being investigated. Some risks identified include:
- Prompt Injection: AI models are highly susceptible to unexpected instructions. If an attacker embeds a malicious command into data the AI consumes (e.g., a document or web page), the model can be tricked into treating it as a legitimate user command.
- Malicious MCP Server Instructions: MCP Servers equip AI agents with the instructions and resources required to carry out prompts. An attacker could compromise or create a server and poison it with malicious instructions.
- The “Rug Pull” (Tool Redefinition): Even if a third party MCP Server seems safe, if not a trusted server, it could be maintained by a threat actor who later applies malicious changes.
- Naming Collisions and Impersonation: The AI agent relies on names to identify correct resources. Slight variations or deceptive naming can mislead the agent into using a malicious resource. For example, if there were two tools titled similarly ”Safe Operation Guide” versus “Safe Operations Guide,” the agent might inadvertently select the malicious version, resulting in unintended harmful actions.
Security considerations
We recommend that before customers use AI agents leveraging MCP with their Atlassian data, they assess relevant risks and implement appropriate security measures. Some measures you may consider include:
- Least privilege: Only grant the AI the minimal access and tools it truly needs.
- Human review: Require clear, easy-to-understand prompts and approvals before any action that changes data or state.
- Supply-chain controls: Pin and vet every tool source, and monitor for any unexpected changes.
- Audit and monitoring: Log all AI actions and watch for anomalies as you would with any critical service.
We believe AI and emerging technologies are important to the forward progress of all kinds of teams. We also strongly encourage our customers to carefully assess any risks specific to their use cases.