Transform your workflow with Raygun's remote MCP
Posted Oct 21, 2025 | 9 min. (1784 words)
We’re happy to announce Raygun’s new remote MCP server, giving AI tools direct access to live error data so they can investigate issues, surface root causes, and take action with real context, not guesses.
It’s been nearly a year since Anthropic released the Model Context Protocol (MCP), and a lot has changed in the AI space. Since then, almost all major players now support MCP, allowing them to tap into the massive and ever-expanding catalogue of MCP servers. When MCP first launched, we shipped our own Raygun MCP within 48 hours of the spec dropping, which was an early step toward giving LLMs visibility into Raygun data.
Since then, the protocol and its ecosystem have evolved rapidly. In this post, we’re introducing our new remote MCP server, a rebuilt hosted version designed for better performance, simpler setup, and deeper context.
This launch comes at a time when MCP has grown more capable than ever, with broader language support, a thriving community of servers, and most importantly, official support for remote MCPs with authentication. It’s a milestone we’ve been eager to take advantage of.
- What’s changed
- Our new remote-first MCP server
- Demo and install instructions
- What we would like to see from the protocol
- What are some of our favourite MCP servers
- Conclusion
What’s changed
One of the biggest game changers for MCP servers was when agentic coding assistants like Amp, Cursor, Claude Code, Codex, and others added support for the protocol. Using an MCP server with something like the Claude desktop app was useful, but letting agents make changes on your behalf, armed with the additional context these servers provide, produces some genuinely impressive results. Before this, MCP servers were mostly limited to simple question-and-answer use, and even that was only supported by Claude Desktop. Now that assistants like Cursor and Claude Code can call these tools on your behalf, they can diagnose, test, and even fix issues in your codebase automatically.
In late May this year, Anthropic released a new version of the specification centred on making remote servers first-class citizens of MCP, with support for OAuth client registration and streamable HTTP servers. This was a very welcome change, as it brought remote access to MCPs into the mainstream: we no longer have to worry about people updating their MCP servers locally, or ask you to run arbitrary code on your machine. This change was exactly what we’d been waiting for. It unlocked the ability to deliver our MCP server as a remote endpoint, with built-in authentication and zero local setup.
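To make the transport concrete, here’s a minimal sketch of the JSON-RPC `initialize` handshake a client POSTs to a streamable HTTP MCP endpoint. The token, client name, and version here are placeholders, and the header set reflects our reading of the spec rather than any particular client’s implementation:

```python
import json

# Sketch: the "initialize" request a client POSTs to a streamable HTTP
# MCP server. YOUR_PAT_TOKEN and the client info are placeholders.
MCP_URL = "https://api.raygun.com/v3/mcp"

headers = {
    "Authorization": "Bearer YOUR_PAT_TOKEN",         # auth via a personal access token
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",  # streamable HTTP can answer with either
}

initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # the revision that introduced streamable HTTP
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize_request)
# An HTTP client would POST `body` with `headers` to MCP_URL; the server
# replies with its own capabilities and the tools it advertises.
```

Because the whole exchange is plain HTTP with a bearer token, there’s nothing to install and nothing to update locally.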
GitHub also recently launched its MCP registry, a curated set of MCP servers with one-click installs into your agent of choice. Anthropic has also created the MCP registry, which backs GitHub’s registry and enables discovery of all types of MCP servers.
Our new remote-first MCP server
When we first built our MCP server, our goal was simple: explore this new specification and give LLMs the same visibility into your Raygun data that you have in the dashboard. But as we and our users started using agentic assistants more, we hit a wall: they needed deeper context.
The main limitation of our existing MCP server was that it couldn’t fetch error instance details. As we at Raygun used agentic assistants more, we wanted to feed them more context to deliver better results, and we saw demand for this from the community too. That’s when we decided to revamp our MCP server, enhancing it with richer context through error instance details and optimising the tools to be friendlier for LLMs to use.
Given how much the specification had evolved since our first version, and the headaches involved with local setups (npx, uv, etc.), we decided to start fresh. Our new MCP server is built remote-first: updates are centralised and no arbitrary code runs on your machine, so what you see is what you get. This also improved performance and capability, as we were no longer limited to the exact API V3 endpoints we expose.
When MCP launched, only TypeScript and Python were supported. Now, with the official C# SDK, we could finally merge this with our core stack. No adapters, no shims. It also let us hook directly into Raygun’s API V3, so tool calls now translate directly to API queries without any extra network hops.
Designing for agents is different from designing for developers. Instead of exposing one-to-one API calls, we focused on intent-level tools, things like ‘investigate a deployment’ or ‘find related errors.’ These compound actions give agents the context they need to reason about a problem, rather than stringing together multiple endpoints to get the data that they need. If you’re interested in learning more about creating tools for agents, this blog by Anthropic is a great starting point.
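As a rough illustration of the difference (the function names, fields, and data below are hypothetical stubs, not Raygun’s actual API), an intent-level tool bundles the separate lookups an agent would otherwise have to chain itself:

```python
# Hypothetical sketch: an intent-level "investigate a deployment" tool
# vs. three one-to-one API calls. The fetch_* functions are stubs
# standing in for real API queries, with canned data for illustration.

def fetch_deployment(version):
    return {"version": version, "deployed_at": "2025-10-20T04:00:00Z"}

def fetch_errors_since(timestamp):
    return [{"message": "NullReferenceException", "count": 42}]

def fetch_error_details(error):
    return {
        **error,
        "stack_trace": ["at Checkout.Submit()"],
        "breadcrumbs": ["clicked #pay"],
    }

def investigate_deployment(version):
    """One intent-level tool call gathers everything the agent needs to
    reason about a deployment, instead of three separate endpoint calls."""
    deployment = fetch_deployment(version)
    errors = fetch_errors_since(deployment["deployed_at"])
    return {
        "deployment": deployment,
        "new_errors": [fetch_error_details(e) for e in errors],
    }

report = investigate_deployment("2.14.0")
```

The agent gets one coherent answer with deployment metadata, error details, and breadcrumbs together, which is much easier to reason over than three partial responses.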
Demo and install instructions
This demo highlights the new depth of context available. The agent isn’t just fetching error lists; it’s reasoning through stack traces to find the issues. Combine this with the ability to view associated deployment versions, browser information, breadcrumbs, customer data, and more, and the agent becomes far more capable at solving errors. We’ve even heard of early testers going from errors in production to fixes within minutes.
Amp
Guide: Amp MCP Documentation
amp mcp add raygun --header "Authorization=Bearer YOUR_PAT_TOKEN" https://api.raygun.com/v3/mcp
Claude Code
Guide: Claude Code MCP Documentation
claude mcp add --transport http raygun https://api.raygun.com/v3/mcp --header "Authorization: Bearer YOUR_PAT_TOKEN"
Codex
Guide: Codex MCP Documentation
[mcp_servers.raygun]
command = "npx"
args = ["mcp-remote", "https://api.raygun.com/v3/mcp", "--header", "Authorization: Bearer YOUR_PAT_TOKEN"]
Cursor
Go to Cursor Settings
→ MCP
→ New MCP Server
{
"mcpServers": {
"Raygun": {
"url": "https://api.raygun.com/v3/mcp",
"headers": {
"Authorization": "Bearer YOUR_PAT_TOKEN"
}
}
}
}
Gemini CLI
gemini mcp add --transport http raygun https://api.raygun.com/v3/mcp --header "Authorization: Bearer YOUR_PAT_TOKEN"
If you haven’t found yours, we have a bigger list on our GitHub repository here, and if you’d like one added, feel free to open an issue on the repository requesting it.
What we would like to see from the protocol
As in our last blog on MCP, I’m going to share some thoughts on the current state of the protocol and where I’d like to see improvement or progress.
Current challenges with MCP
There’s been quite a bit of talk about whether MCP is the wrong abstraction, with valid points on both sides. Some of the complaints centre on the fact that LLM performance degrades as you add more tools. Others argue that deriving tools directly from APIs is the wrong approach too. We think both have merit, and there’s definitely nuance here; it’s something we’ve tried to navigate with our MCP by building the best possible tools for LLMs to use.
Working on both Raygun’s MCP and our new AI agent platform, Autohive, has given us a front-row seat to MCP’s growing pains. Along the way, we’ve shipped beta MCP support in Autohive. It’s incredibly cool to see the use cases this has unlocked, but also the headaches it creates.
The biggest issue plaguing the MCP ecosystem right now is non-adherence to the specification. Some MCP servers support OAuth but not dynamic client registration (DCR), kneecapping anyone trying to use them. Quite a few were built on SSE endpoints, which are now deprecated (we use streamable HTTP), and supporting SSE is difficult because standards were ignored. Not all MCP servers are remote, meaning not all cloud platforms can use them. And some servers completely ignore the standard when it comes to OAuth discovery. Streamable HTTP servers have helped a lot here, but plenty of legacy servers are still out there.
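To show what OAuth discovery adherence actually means: given an authorization server’s issuer URL, a client derives a well-known metadata URL (per RFC 8414) and expects the server to serve a metadata document there. A minimal sketch of that derivation, using a hypothetical issuer URL:

```python
from urllib.parse import urlparse

def discovery_url(issuer: str) -> str:
    """Derive the RFC 8414 authorization server metadata URL from an
    issuer. Clients depend on servers actually serving this document;
    skipping it (or DCR) is what breaks interoperability."""
    parsed = urlparse(issuer)
    # Per RFC 8414, the well-known segment is inserted between the host
    # and the issuer's path component.
    path = parsed.path.rstrip("/")
    return (
        f"{parsed.scheme}://{parsed.netloc}"
        f"/.well-known/oauth-authorization-server{path}"
    )

url = discovery_url("https://auth.example.com")
```

A spec-compliant client never needs the endpoints hardcoded; it fetches them from this document, which is why servers that ignore discovery lock out whole classes of clients.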
Future directions
One experiment we’d love to see is output schemas supported alongside input schemas. With these, we believe LLMs could reason better about tools, planning and chaining multiple tool calls together by inferring and reusing information from each step. Another upside is that workflow software could consume tool outputs directly, since they would have defined schemas.
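To illustrate what we mean (a hypothetical tool definition, not an actual spec shape), an output schema would sit alongside the input schema so a caller knows what a tool returns, not just what it accepts:

```python
# Hypothetical tool definition pairing an input schema with an output
# schema. With a declared output shape, an LLM or workflow engine can
# plan a second call that consumes "errorGroupId" before the first
# call has even run.
tool = {
    "name": "find_related_errors",
    "inputSchema": {
        "type": "object",
        "properties": {"errorMessage": {"type": "string"}},
        "required": ["errorMessage"],
    },
    "outputSchema": {  # the addition we'd like to see widely adopted
        "type": "object",
        "properties": {
            "errorGroupId": {"type": "string"},
            "occurrences": {"type": "integer"},
        },
    },
}
```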
While there are still rough edges, MCP’s foundation is solid. With stronger spec adoption and continued iteration, it could remain the backbone of how agents and developer tools communicate well into the future.
What are some of our favourite MCP servers
Enough negatives, though. To round off this post, here are some other cool MCP servers we’ve been using that work really well for development and alongside our own.
Context7: This MCP lets you search documentation indexed by Context7, bringing any specific framework, specification, or documentation you need to check against into context. It was especially helpful when validating against the MCP specification: we could ask questions about the behaviour of our MCP and others, and whether they follow the RFCs the specification builds on.
Chrome DevTools: A new MCP server from Google that provides access to Chrome DevTools features such as profiling, reading the console, inspecting network requests, and much more, helping coding assistants find the root cause of whatever they’re trying to solve.
Figma: This lets you fetch images you’ve selected inside the Figma desktop app. Figma does provide a remote MCP server, but we find the local experience better: you don’t have to copy links from Figma, you can just select something and say “replicate this design component.”
Another neat little site, created by James Montemagno, is a one-click install button creator for Cursor and VS Code. Check it out here. It’s great for remote servers that don’t require auth or that have an OAuth flow configured.
These examples show how mature the MCP ecosystem has become, with support from many of the software market’s biggest players, and how Raygun’s remote MCP fits right into that growing toolkit.
Conclusion
The Model Context Protocol has come a long way in a year, and with Raygun’s remote MCP, we’re excited to help developers and AI assistants work together more effectively than ever. We can’t wait to see what you build with it.
If you have any questions or concerns, feel free to contact us or leave an issue on the GitHub repository for our MCP server.