
From OpenAPI to AI Agent: A Step-by-Step Integration Guide

A practical, step-by-step walkthrough for connecting your existing REST API to an AI agent using openapi2mcp. Go from an OpenAPI spec to a usable AI tool in minutes.

So, how do you actually go from a standard REST API to an AI agent that can call it? This guide walks through the practical steps of connecting your API, described by an OpenAPI spec, to an AI agent using openapi2mcp.

By the end, your API will be a ready-to-use tool in the toolbox of any MCP-aware agent, whether that's Claude, ChatGPT, or a custom agent you've built yourself.

Step 1: Start with a Solid OpenAPI Spec

Your OpenAPI 3.x specification is the blueprint. Ensure it's up-to-date and includes all the operations (paths) you want the AI to use. Clarity is key: use descriptive operationId, summary, and description fields. These will become the AI's instructions.
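As a sketch of what that looks like for a weather API, here's an illustrative spec fragment (the city parameter and wording are assumptions for the example, not requirements of openapi2mcp):

```yaml
paths:
  /weather:
    get:
      operationId: getWeather
      summary: Get current weather for a city
      description: >-
        Retrieves the current weather forecast for a specified city name.
        Requires the full city name, e.g. 'San Francisco'.
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
```

Every field here feeds the AI: operationId becomes the tool name, and summary/description become the instructions the model reads when deciding whether to call it.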

Step 2: Generate the MCP Server with openapi2mcp

Log into the openapi2mcp dashboard and upload your OpenAPI JSON/YAML file or provide a URL. The service validates the spec and instantly auto-generates an MCP endpoint. Under the hood, it creates a tool definition for each API operation. For example, a GET /weather endpoint becomes a getWeather tool.
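To make the mapping concrete, here's a minimal Python sketch of how spec operations could be turned into MCP-style tool definitions. This is an illustration of the idea, not openapi2mcp's actual implementation, and the spec_to_tools helper is hypothetical:

```python
def spec_to_tools(spec: dict) -> list[dict]:
    """Illustrative mapping from OpenAPI operations to MCP-style tool
    definitions (a sketch of the concept, not openapi2mcp's real code)."""
    tools = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            params = op.get("parameters", [])
            tools.append({
                # operationId becomes the tool name the AI sees
                "name": op.get("operationId") or f"{method}_{path.strip('/')}",
                # description/summary become the AI's instructions
                "description": op.get("description") or op.get("summary", ""),
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        p["name"]: p.get("schema", {"type": "string"})
                        for p in params
                    },
                    "required": [p["name"] for p in params if p.get("required")],
                },
            })
    return tools

spec = {
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "summary": "Gets weather",
                "parameters": [
                    {"name": "city", "in": "query", "required": True,
                     "schema": {"type": "string"}},
                ],
            }
        }
    }
}

print(spec_to_tools(spec)[0]["name"])  # getWeather
```

The key takeaway is that nothing magical happens: the tool name, description, and input schema are all derived directly from fields you control in your spec.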

Step 3: Refine Tool Descriptions for the AI (Optional but Recommended)

This step is crucial for reliable tool selection. The default tool descriptions come from your OpenAPI spec, but you can (and should) tweak them to be more instructive for an AI. For instance, instead of "Gets weather," you might write: "Retrieves the current weather forecast for a specified city name. Requires the full city name, e.g., 'San Francisco'." The openapi2mcp UI makes it easy to override these.

Step 4: Get Your MCP Endpoint URL and Credentials

The dashboard will provide your unique, production-ready MCP endpoint URL. It will look something like this:

https://edge.openapi2mcp.com/mcp?api_id=<your-api-id>

This is a multi-tenant MCP server ready for /context and /execute calls. Be sure to configure any necessary authentication secrets (such as your upstream API keys) in the platform; openapi2mcp securely stores them and injects them into the outgoing API calls for you.

Step 5: Connect the Endpoint to Your AI Agent

Now, tell your AI about its new capabilities. The method varies by platform:

  • Claude Code / Desktop: Add the MCP server URL to your .mcp.json config file. Claude will automatically discover and list the new tools.
  • LangChain or Custom Agents: Use an MCP client library (available in Python and other languages) to register the endpoint. The library handles fetching the tool list and executing calls.
  • Other AI Platforms: Many emerging platforms, including OpenAI's Agent SDK, support adding MCP endpoints as tool plugins. It usually just involves pasting the endpoint URL.
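For the Claude case, a minimal .mcp.json entry might look like this. The server name "my-api" is arbitrary, and the exact schema can vary between Claude releases, so treat this as a sketch:

```json
{
  "mcpServers": {
    "my-api": {
      "type": "http",
      "url": "https://edge.openapi2mcp.com/mcp?api_id=<your-api-id>"
    }
  }
}
```

Once the config is in place, restarting Claude should make the new tools appear in its tool list automatically.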

Step 6: Test with Simple, Natural Language Prompts

Time for a dry run. In your AI's chat interface, try a query that requires your API. For our weather example: "What’s the weather like in Paris today?" The AI should recognize it needs the getWeather tool, call the MCP server, and return a result enriched with your API's data. You can monitor the call logs in the openapi2mcp dashboard for easy debugging.
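Under the hood, that tool invocation travels as an MCP tools/call request over JSON-RPC. For our running weather example, the exchange would look roughly like this (the city argument name and response text are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "getWeather",
    "arguments": { "city": "Paris" }
  }
}
```

The server's response wraps your API's data in MCP content blocks, e.g. {"result": {"content": [{"type": "text", "text": "18°C, partly cloudy"}]}}, which the AI then weaves into its answer.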

Step 7: Iterate and Optimize

In practice, you'll likely refine the setup. You might adjust a tool's description if an AI gets confused, or add rate limits to prevent runaway loops. The goal is an effortless experience: a user asks, and the AI seamlessly uses your API to deliver the answer.
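If you add client-side rate limiting to guard against runaway loops, a simple sliding-window limiter is often enough. Here's a minimal, illustrative sketch (not part of openapi2mcp) that you could place in front of tool execution in a custom agent:

```python
import time

class ToolRateLimiter:
    """Illustrative guard against runaway agent loops: allow at most
    `max_calls` tool invocations per `window` seconds."""

    def __init__(self, max_calls: int, window: float):
        self.max_calls = max_calls
        self.window = window
        self.calls: list[float] = []  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

limiter = ToolRateLimiter(max_calls=3, window=60.0)
print([limiter.allow() for _ in range(5)])  # [True, True, True, False, False]
```

In a custom agent loop, you would check limiter.allow() before each tool call and return an error message to the model when the budget is exhausted, which usually prompts it to stop retrying.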

From Isolated API to AI-Augmented Service

By following these steps, you've bridged the gap between your REST API and the world of AI. Your services can now be woven into ChatGPT plugins, Claude workflows, or internal company assistants—without writing custom glue code for each one. What used to be a massive engineering task is now just a few clicks away.
