Runtime Adapter Hot-Swapping with Ports & Adapters — The Pattern Alistair Cockburn Didn't Document

#java #springboot #architecture #hexagonal
Charles Hornick

When Alistair Cockburn described the Hexagonal Architecture in 2005, he documented adapter interchangeability as a core property of the pattern. In Chapter 5 of his work, he writes:

"For each port, there may be multiple adapters, for various technologies that may plug into that port."

But the original pattern leaves something open: what happens when you need to switch adapters at runtime, across a distributed system, in response to a failure, and before other instances encounter the same problem?

This article presents a concrete implementation of emergency adapter hot-swapping using Spring Boot 4, Spring Cloud Bus, and Resilience4j. When one service instance detects a failing adapter, it broadcasts the failure via a message bus. All other instances switch to a fallback adapter before they encounter the timeout themselves.

Cockburn's reaction when I explained the concept on LinkedIn:

"Fabulous! I got around to documenting live swaps for long-running systems, but didn't dare think about emergency hot-swapping — thank you!"

The problem

Consider a standard Ports & Adapters setup where your domain service talks to an upstream API through an adapter. When that API starts timing out, every instance of your service will independently discover the failure, each burning through its own timeout window.

With 10 instances and a 3-second timeout, you're looking at 30 seconds of cumulative degraded experience across your cluster, at minimum. In practice, retries, thread pool saturation, and cascading failures make this much worse.

The standard circuit breaker pattern (Hystrix, Resilience4j) solves this per instance: each service independently opens its circuit after detecting failures. But there's no coordination. Instance 7 doesn't know that instance 1 already hit the timeout 2 seconds ago.

The solution: broadcast-driven adapter switching

The key insight: the first instance to detect a failure should warn all others.

Instance 1: timeout detected → broadcast "switch to fallback"
Instance 2: receives event → switches adapter (never hits the timeout)
Instance 3: receives event → switches adapter (never hits the timeout)
...
Instance N: receives event → switches adapter (never hits the timeout)

This is eventual consistency applied to infrastructure. The adapter state propagates across the cluster asynchronously, and every instance converges to the same adapter within milliseconds of the first detection.

Implementation

Full source code: github.com/charles-hornick/adapter-hotswap-spring

The Port

Nothing special here, just a standard port interface:

public interface AnimalPort {
    String getAnimal();
    String name();
}

The Adapter Registry

The critical piece. An AtomicReference holds the active adapter, making the swap lock-free and thread-safe:

@Component
public class AdapterConfig {

    private final Map<String, AnimalPort> adapters;
    private final AtomicReference<AnimalPort> activeAdapter;

    public AdapterConfig(
            @Qualifier("primaryAdapter") final AnimalPort primary,
            @Qualifier("fallbackAdapter") final AnimalPort fallback) {
        this.adapters = Map.of("primary", primary, "fallback", fallback);
        this.activeAdapter = new AtomicReference<>(primary);
    }

    public AnimalPort getActiveAdapter() {
        return activeAdapter.get();
    }

    public boolean switchTo(String adapterName) {
        final var target = adapters.get(adapterName);
        if (target == null) {
            return false; // unknown adapter name: ignore rather than swap to null
        }
        final var previous = activeAdapter.getAndSet(target);
        return !previous.name().equals(adapterName); // true only if the active adapter actually changed
    }
}

The swap is a single atomic operation. Threads currently executing a request through the old adapter will finish their call; new requests immediately use the new adapter. No locks, no synchronized blocks.
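
To see why the swap needs no locking, here is a minimal, stdlib-only sketch of the registry. The `SwapDemo` and `Fixed` names are illustrative stand-ins, not classes from the repo:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class SwapDemo {
    interface AnimalPort {
        String getAnimal();
        String name();
    }

    // Illustrative fixed-response adapter, for the sketch only
    record Fixed(String name, String animal) implements AnimalPort {
        public String getAnimal() { return animal; }
    }

    private final Map<String, AnimalPort> adapters;
    private final AtomicReference<AnimalPort> active;

    public SwapDemo(AnimalPort primary, AnimalPort fallback) {
        this.adapters = Map.of("primary", primary, "fallback", fallback);
        this.active = new AtomicReference<>(primary);
    }

    public String call() {
        // Readers never block: each sees either the old or the new adapter, never a torn state
        return active.get().getAnimal();
    }

    public boolean switchTo(String name) {
        AnimalPort target = adapters.get(name);
        if (target == null) return false;
        // getAndSet is a single atomic operation: no lock, no synchronized block
        return !active.getAndSet(target).name().equals(name);
    }
}
```

In-flight calls that already read the old reference simply complete against it, which is exactly the behavior described above.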

Timeout detection + broadcast

The service layer wraps adapter calls with Resilience4j's TimeLimiter. On timeout, it publishes a custom Spring Cloud Bus event:

@Service
public class AnimalService {

    private final AdapterConfig adapterConfig;
    private final ApplicationEventPublisher publisher;
    private final BusProperties busProperties;
    private final TimeLimiter timeLimiter;
    private final ExecutorService executor;

    public AnimalService(final AdapterConfig adapterConfig,
                         final ApplicationEventPublisher publisher,
                         final BusProperties busProperties) {
        this.adapterConfig = adapterConfig;
        this.publisher = publisher;
        this.busProperties = busProperties;

        this.timeLimiter = TimeLimiter.of(
                TimeLimiterConfig.custom()
                        .timeoutDuration(Duration.ofSeconds(3))
                        .cancelRunningFuture(true)
                        .build()
        );

        this.executor = Executors.newVirtualThreadPerTaskExecutor();
    }

    public String getAnimal() {
        final var adapter = adapterConfig.getActiveAdapter();
        try {
            final var future = executor.submit(adapter::getAnimal);
            return timeLimiter.executeFutureSupplier(() -> future);
        } catch (final Exception e) {
            // First instance to detect the failure → broadcast to all.
            // publishEvent also fires the local listener synchronously,
            // so the retry below already sees the fallback adapter.
            publisher.publishEvent(new AdapterSwitchEvent(
                this, busProperties.getId(), "**", "fallback"
            ));
            return adapterConfig.getActiveAdapter().getAnimal();
        }
    }
}

Key decisions:

  • Virtual threads (Executors.newVirtualThreadPerTaskExecutor()) - virtual threads, standard since Java 21, keep the timeout wrapper lightweight. No platform thread is blocked while waiting.

  • "**" destination - the bus event targets all services, not a specific instance.

  • Immediate local fallback - after broadcasting, the current request falls back locally without waiting for the bus round-trip.
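
The timeout semantics can be approximated without Resilience4j: submit the call to an executor and bound the wait with Future.get. This is a stdlib-only sketch of what TimeLimiter does here, not the library's implementation (the repo uses a virtual-thread executor; a plain pool behaves the same for this sketch):

```java
import java.time.Duration;
import java.util.concurrent.*;

public class TimeoutWrapper {
    private final ExecutorService executor = Executors.newCachedThreadPool();

    // Run the adapter call on a worker thread and bound the wait;
    // on timeout, cancel the running call (mirrors cancelRunningFuture(true))
    public String callWithTimeout(Callable<String> call, Duration timeout)
            throws Exception {
        Future<String> future = executor.submit(call);
        try {
            return future.get(timeout.toMillis(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);
            throw e;
        }
    }
}
```

The TimeoutException surfaces in the service's catch block, which is where the broadcast is triggered.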

The Bus Event

A custom RemoteApplicationEvent that carries the target adapter name:

public class AdapterSwitchEvent extends RemoteApplicationEvent {

    private String targetAdapter;

    // Required by Spring Cloud Bus for JSON deserialization
    public AdapterSwitchEvent() {
    }

    public AdapterSwitchEvent(final Object source,
                              final String originService,
                              final String destinationService,
                              final String targetAdapter) {
        super(source != null ? source : new Object(), originService,
              () -> destinationService != null ? destinationService : "**");
        this.targetAdapter = targetAdapter;
    }

    public String getTargetAdapter() {
        return targetAdapter;
    }
}

Spring Cloud Bus serializes this to JSON, pushes it to RabbitMQ, and every connected instance receives it. The listener on each instance performs the swap:

@EventListener
public void onAdapterSwitch(final AdapterSwitchEvent event) {
    adapterConfig.switchTo(event.getTargetAdapter());
}

Recovery detection

A scheduled health checker pings the primary service and broadcasts a switch-back when it recovers:

@Scheduled(fixedDelayString = "${health.check.interval:5000}")
public void checkPrimaryHealth() {
    if ("primary".equals(adapterConfig.getActiveAdapter().name())) {
        return; // already on primary, nothing to check
    }
    try {
        String response = restClient.get().uri("/health")
            .retrieve().body(String.class);
        if ("UP".equals(response)) {
            publisher.publishEvent(new AdapterSwitchEvent(
                this, busProperties.getId(), "**", "primary"
            ));
        }
    } catch (Exception e) {
        // still down, do nothing
    }
}

Running the demo

git clone https://github.com/charles-hornick/adapter-hotswap-spring.git
cd adapter-hotswap-spring
docker-compose up --build

The demo runs two instances of the hotswap service, a flaky primary service (cyclic timeouts), and a stable fallback. Watch the logs for:

Response: Chien          ← primary adapter
Response: Chien
EVENT: Primary adapter timeout — broadcasting switch to fallback
EVENT: Switching adapter from 'primary' to 'fallback'
(instance 2) EVENT RECEIVED: Switch to 'fallback'
Response: Chat            ← fallback adapter
HEALTHCHECK: Primary adapter responding again
EVENT: Switching adapter from 'fallback' to 'primary'
Response: Chien          ← back to primary

Instance 2 switches without ever experiencing the timeout.

Limitations and trade-offs

This is a demo, not a production framework. Real-world considerations:

Split-brain risk. If RabbitMQ is partitioned, instances may diverge on which adapter is active.
Mitigation: use a consensus mechanism or accept eventual convergence via the health checker.

Thundering herd on recovery. When the health checker broadcasts "switch back to primary", all instances hit the primary simultaneously.
Mitigation: add jitter to the health check interval, or use a canary approach where only one instance switches back first.
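
Jitter can be as simple as randomizing each instance's delay around the base interval so probes (and the switch-back) don't align. A sketch, with illustrative names not taken from the repo:

```java
import java.util.concurrent.ThreadLocalRandom;

public class Jitter {
    // Spread each instance's health check over base ± spread milliseconds,
    // so all instances don't probe the primary at the same instant
    public static long jitteredDelay(long baseMillis, long spreadMillis) {
        long offset = ThreadLocalRandom.current()
                .nextLong(-spreadMillis, spreadMillis + 1);
        return baseMillis + offset;
    }
}
```

With a 5000 ms base and 1000 ms spread, switch-backs land anywhere in a 2-second window instead of a single instant.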

Single point of decision. Whichever instance first detects the failure makes the decision for the entire cluster. If that detection is a false positive (network blip), the whole cluster switches unnecessarily.
Mitigation: require N consecutive failures before broadcasting, or use a quorum-based decision.
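
Requiring N consecutive failures can be done with a single atomic counter in the service. FailureGate is a hypothetical helper, not part of the repo:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FailureGate {
    private final int threshold;
    private final AtomicInteger consecutiveFailures = new AtomicInteger();

    public FailureGate(int threshold) {
        this.threshold = threshold;
    }

    // True only on the failure that crosses the threshold,
    // so the broadcast fires exactly once per failure streak
    public boolean recordFailure() {
        return consecutiveFailures.incrementAndGet() == threshold;
    }

    public void recordSuccess() {
        consecutiveFailures.set(0); // any success resets the streak
    }
}
```

The service would call recordFailure() in its catch block and only publish the AdapterSwitchEvent when it returns true, filtering out one-off network blips.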

Bus latency. There's a window between the broadcast and reception where other instances may still hit the failing adapter. This is inherent to eventual consistency. For most use cases, the RabbitMQ propagation delay (typically <100ms) is negligible compared to a 3-second timeout.

No adapter state persistence. If an instance restarts, it defaults back to the primary adapter regardless of the cluster state.
Mitigation: persist the adapter state in a shared store (Redis, database) or replay the latest bus event on startup.

When to use this

This pattern is most valuable when:

  • You have multiple instances of the same service
  • Adapter failures are detectable (timeout, HTTP 5xx, connection refused)
  • The cost of independent failure detection across instances is significant
  • You have a viable fallback adapter (degraded service, cache, alternative provider)

It's overkill for single-instance deployments or when failures are so rare that independent circuit breakers suffice.

Relation to Cockburn's original work

The Hexagonal Architecture describes adapters as interchangeable, so you can swap a database adapter for an in-memory adapter, or a REST adapter for a gRPC adapter. But the original pattern is silent on when and how this swap happens at runtime.

This implementation extends the pattern in two directions:

  1. Runtime swapping - adapters switch while the system is running, without restart or redeployment.
  2. Cluster-wide coordination - the swap propagates across all instances, turning a local failure detection into a global infrastructure decision.

The adapter remains a first-class architectural concept. We're just giving it operational capabilities that the original pattern implied but never specified.


Source code: github.com/charles-hornick/adapter-hotswap-spring

Stack: Java 25, Spring Boot 4.0.2, Spring Cloud 2025.1.0, Resilience4j 2.3.0, RabbitMQ

Charles Hornick - Java consultant specializing in smart refactoring and legacy application rescue. charles-hornick.be