Part 2 - Connecting AI Agents to Microservices with MCP

#langchain4j #java #mcp #microservices
Pedro Santos


Connecting AI Agents to Microservices with MCP (No Custom SDKs)

In the previous post, I showed how LangChain4j lets you build agents with a Java interface and a couple of annotations. But those agents used @Tool methods defined in the same JVM. Fine for a monolith, but I’m running 5 microservices.

I needed the AI agent in service A to call business logic in services B, C, D, and E, without writing bespoke HTTP clients for each one.

That’s where MCP comes in, and it changed how I think about exposing business logic.

The Problem: @Tool Doesn’t Scale Across Services

In my saga orchestration system, I have:

  • order-service (port 3000): MongoDB, manages orders and events
  • product-validation-service (port 8090): PostgreSQL, validates catalog
  • payment-service (port 8091): PostgreSQL, handles payments and fraud scoring
  • inventory-service (port 8092): PostgreSQL, manages stock
  • orchestrator (port 8050): coordinates the saga via Kafka

And then there’s the ai-saga-agent (port 8099), the service that hosts my AI agents. It needs to query data from ALL other services.

With @Tool, I’d have to write HTTP clients and DTOs for each service. Error handling, retry logic, the whole nine yards. Every time a service adds a new capability, I’d update the agent’s code. Tight coupling everywhere.
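To make the coupling concrete, here is a rough sketch of what one of those bespoke clients might look like, using only java.net.http. InventoryHttpClient, its base URL, and the /api/stock path are all hypothetical; multiply this by every service and every capability:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical hand-written client the agent would need without MCP.
// One of these per downstream service, each with its own URL scheme,
// error handling, and DTOs to keep in sync with the remote API.
public class InventoryHttpClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public InventoryHttpClient(String baseUrl) {
        this.baseUrl = baseUrl; // e.g. "http://localhost:8092"
    }

    // Path and response format are assumptions for illustration.
    URI stockUri(String productCode) {
        return URI.create(baseUrl + "/api/stock/" + productCode);
    }

    public String getStockByProduct(String productCode) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(stockUri(productCode))
            .header("Accept", "application/json")
            .GET()
            .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```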

MCP: One Protocol for Everything

MCP (Model Context Protocol) is basically USB for AI. Instead of writing custom integrations per service, you expose tools via a standard JSON-RPC protocol over HTTP/SSE. Any agent can connect, discover available tools, and call them.

The before/after in my codebase was dramatic.

Before (without MCP): Agent needs stock data? Write InventoryHttpClient. Needs payment status? Write PaymentHttpClient. Needs order details? Write OrderHttpClient. New tool in inventory? Update the client, then update the agent.

After (with MCP): Each service exposes an MCP server. The agent connects to http://localhost:8092/sse and automatically discovers getStockByProduct, getLowStockAlert, and checkReservationExists. New tool? Just add it to the MCP server; the agent sees it on the next connection.

Making a Microservice an MCP Server

Let me show you the actual code from my payment-service. It already had a PaymentService and a FraudValidationService, real business logic with database queries. I just needed to expose some of those methods as MCP tools.

Add the Dependency

implementation 'io.modelcontextprotocol.sdk:mcp:0.9.0'

Set Up the Transport

@Bean
public HttpServletSseServerTransportProvider mcpTransport() {
    return HttpServletSseServerTransportProvider.builder()
        .objectMapper(new ObjectMapper())
        .messageEndpoint("/mcp/message")
        .build();
}

@Bean
public ServletRegistrationBean<HttpServletSseServerTransportProvider> mcpServlet(
        HttpServletSseServerTransportProvider transport) {
    return new ServletRegistrationBean<>(transport, "/sse", "/mcp/message");
}

Register Your Tools

Here’s the key part. I’m reusing the same PaymentService and FraudValidationService beans that already exist:

@Bean
public McpSyncServer mcpServer(
        HttpServletSseServerTransportProvider transport,
        PaymentService paymentService,
        FraudValidationService fraudService) {

    return McpServer.sync(transport)
        .serverInfo("payment-mcp", "1.0.0")
        .capabilities(ServerCapabilities.builder().tools(true).build())
        .tools(
            getPaymentStatus(paymentService),
            getRefundRate(paymentService),
            getFraudRiskScore(fraudService)  // same business logic, now via MCP
        )
        .build();
}

Each tool needs four things: a name and a description so the LLM understands what it does, a JSON schema for its parameters, and a handler function that runs your actual business logic:

private SyncToolSpecification getPaymentStatus(PaymentService paymentService) {
    return tool(
        "getPaymentStatus",
        "Returns the current payment status for a given transaction. " +
        "Use to verify whether a payment was processed, pending, or refunded.",
        """
        {
          "type": "object",
          "properties": {
            "transactionId": {
              "type": "string",
              "description": "Transaction ID associated with the saga"
            }
          },
          "required": ["transactionId"]
        }
        """,
        args -> {
            String txId = (String) args.get("transactionId");
            return paymentService.findByTransactionId(txId)
                .map(p -> "status=" + p.getStatus()
                    + " | totalAmount=" + p.getTotalAmount()
                    + " | totalItems=" + p.getTotalItems())
                .orElse("No payment found for transactionId=" + txId);
        }
    );
}

Notice: no new code. The paymentService.findByTransactionId() method already existed. I’m just wrapping it with a description so the LLM knows when to call it.
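The handler itself is just a function from an argument map to a text result. Here is a dependency-free sketch of that same pattern for a hypothetical getRefundRate handler; RefundStats and its numbers are invented stand-ins for the real PaymentService:

```java
import java.util.Locale;
import java.util.Map;
import java.util.function.Function;

// Invented stand-in for the real PaymentService's refund statistics.
class RefundStats {
    long totalPayments() { return 200; }
    long refundedPayments() { return 5; }
}

public class RefundRateHandler {

    // Same shape as the MCP handler lambda: a Map of JSON arguments in,
    // a plain-text result out for the LLM to read.
    static Function<Map<String, Object>, String> handler(RefundStats stats) {
        return args -> {
            long total = stats.totalPayments();
            if (total == 0) {
                return "No payments recorded yet";
            }
            double rate = 100.0 * stats.refundedPayments() / total;
            return "refundRate=" + String.format(Locale.ROOT, "%.1f%%", rate)
                + " | totalPayments=" + total;
        };
    }
}
```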

What Each Service Exposes

I did this for all 4 services:

| Service | MCP Tools |
| --- | --- |
| order-service | getOrderById, listRecentEvents, getLastEventByOrder |
| payment-service | getPaymentStatus, getRefundRate, getFraudRiskScore |
| inventory-service | getStockByProduct, getLowStockAlert, checkReservationExists |
| product-validation | checkProductExists, checkValidationExists, listCatalog |

Each service keeps full ownership of its data. The MCP layer is just a thin exposure.

The Agent Side: Connecting as an MCP Client

Now on the ai-saga-agent, I connect to all these servers:

@Bean
public McpToolProvider mcpToolProvider() {
    return McpToolProvider.builder()
        .mcpClients(List.of(
            buildClient("http://localhost:3000/sse"),     // order
            buildClient("http://localhost:8091/sse"),     // payment
            buildClient("http://localhost:8092/sse"),     // inventory
            buildClient("http://localhost:8090/sse")      // product-validation
        ))
        .build();
}

private McpClient buildClient(String sseUrl) {
    return new DefaultMcpClient.Builder()
        .transport(new HttpMcpTransport.Builder()
            .sseUrl(sseUrl)
            .build())
        .build();
}

Then when I build an agent, I just pass the mcpToolProvider:

DataAnalystAgent agent = AiServices.builder(DataAnalystAgent.class)
    .chatModel(gemini)
    .toolProvider(mcpToolProvider)   // discovers tools from all 4 services
    .build();

That’s it. The agent now has access to 12+ tools across 4 services, without a single HTTP client written by hand.
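For reference, the DataAnalystAgent type in that snippet is just a plain Java interface; with LangChain4j's AiServices, no annotations are strictly required. A minimal sketch of the shape (the method name here is an assumption):

```java
// Hypothetical shape of the DataAnalystAgent interface from the snippet
// above. AiServices generates the implementation at runtime, and the
// McpToolProvider supplies the tools the LLM can call behind it.
public interface DataAnalystAgent {
    String chat(String userMessage);
}
```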

The Saga Architecture (Quick Context)

For those not familiar with the Saga Pattern: it’s how you handle distributed transactions without two-phase commit. Instead of one big transaction, you have a chain of local transactions. If any step fails, you run compensating transactions to undo the previous steps.

My flow looks like this:

Order Service → Orchestrator → Product Validation → Payment → Inventory → Success
                                    ↑                  ↑          ↑
                                    └──── Rollback ←───┴──────────┘

Everything communicates via Kafka topics. The orchestrator listens for results and decides what to publish next. There’s a state transition table that maps (source, status) to the next topic:

| Source | Status | Next Topic |
| --- | --- | --- |
| ORCHESTRATOR | SUCCESS | product-validation-success |
| PRODUCT_VALIDATION | SUCCESS | payment-success |
| PAYMENT | SUCCESS | inventory-success |
| INVENTORY | SUCCESS | finish-success |
| INVENTORY | FAIL | payment-fail (rollback) |
| PAYMENT | FAIL | product-validation-fail (rollback) |

The beauty of this setup is that the saga flow is deterministic and auditable. Every event is stored, every transition is logged.
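That transition table boils down to a lookup keyed on (source, status). A dependency-free sketch of the lookup, with enum and topic names taken from the table above (the real orchestrator's implementation may differ):

```java
import java.util.Map;

// Minimal sketch of the orchestrator's state transition table:
// (source, status) -> next Kafka topic to publish to.
public class SagaTransitions {

    enum Source { ORCHESTRATOR, PRODUCT_VALIDATION, PAYMENT, INVENTORY }
    enum Status { SUCCESS, FAIL }

    private static final Map<String, String> NEXT_TOPIC = Map.of(
        "ORCHESTRATOR:SUCCESS", "product-validation-success",
        "PRODUCT_VALIDATION:SUCCESS", "payment-success",
        "PAYMENT:SUCCESS", "inventory-success",
        "INVENTORY:SUCCESS", "finish-success",
        "INVENTORY:FAIL", "payment-fail",          // rollback
        "PAYMENT:FAIL", "product-validation-fail"  // rollback
    );

    public static String nextTopic(Source source, Status status) {
        return NEXT_TOPIC.get(source + ":" + status);
    }
}
```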

@Tool vs MCP Tool: When to Use Each

After building this, my rule of thumb is simple:

Use @Tool when the logic lives in the same JVM as the agent. No network overhead, tightly coupled, only that agent can use it.

Use MCP when the logic lives in another service. Any agent can connect. The protocol is language-agnostic (just JSON-RPC), and adding new tools doesn’t require changes on the agent side.

In practice, my agents use MCP for everything. The only @Tool I still use is for utility functions that don’t belong in any microservice, like formatting helpers or date calculations.
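As an example of that last category, here is the kind of date helper I mean. In the real agent service the method would carry LangChain4j's @Tool annotation; the sketch below is plain JDK, so the annotation appears only in a comment:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Local utility that stays in the same JVM as the agent: no microservice
// owns this logic, so a local tool is the right fit. In the real service
// this method is annotated with LangChain4j's
// @Tool("Returns the number of whole days between two ISO-8601 dates"),
// omitted here to keep the sketch dependency-free.
public class DateTools {

    public long daysBetween(String startDate, String endDate) {
        return ChronoUnit.DAYS.between(
            LocalDate.parse(startDate), LocalDate.parse(endDate));
    }
}
```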

Testing MCP Endpoints Manually

You can test MCP without an AI agent. It’s just HTTP:

# 1. Open an SSE session
curl http://localhost:8092/sse
# Returns a sessionId

# 2. List available tools
curl -X POST "http://localhost:8092/mcp/message?sessionId=YOUR_SESSION" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'

# 3. Call a tool
curl -X POST "http://localhost:8092/mcp/message?sessionId=YOUR_SESSION" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc":"2.0","id":3,"method":"tools/call",
    "params":{"name":"getStockByProduct","arguments":{"productCode":"COMIC_BOOKS"}}
  }'

This is super useful for debugging. When an agent does something unexpected, I test the tool directly to check if it’s the tool or the prompt that’s wrong.

What’s Next

With MCP in place, the infrastructure was ready. But the interesting part is what the agents actually do with all these tools. In the next post, I’ll walk through the 3 agents I built. The OperationsAgent listens for failed sagas on Kafka and auto-diagnoses them using RAG. The SagaComposerAgent periodically rewrites the saga execution plan based on real failure data. And the DataAnalystAgent answers natural language questions like “list the 5 most recent failed sagas and assess their fraud risk.”

The code is all open source: github.com/pedrop3/sagaorchestration


This is part 2 of a 3-part series on integrating AI into a distributed saga system:

  1. Part 1 - Why I Picked LangChain4j Over Spring AI
  2. Part 2 - Connecting AI Agents to Microservices with MCP
  3. Part 3 - Agents That Diagnose, Plan, and Query a Distributed Saga