When Alistair Cockburn described the Hexagonal Architecture in 2005, he documented adapter interchangeability as a core property of the pattern. In Chapter 5 of his work, he writes:
"For each port, there may be multiple adapters, for various technologies that may plug into that port."
But the original pattern leaves something open: what happens when you need to switch adapters at runtime, across a distributed system, in response to a failure, and before other instances encounter the same problem?
This article presents a concrete implementation of emergency adapter hot-swapping using Spring Boot 4, Spring Cloud Bus, and Resilience4j. When one service instance detects a failing adapter, it broadcasts the failure via a message bus. All other instances switch to a fallback adapter before they encounter the timeout themselves.
Cockburn's reaction when I explained the concept on LinkedIn:
"Fabulous! I got around to documenting live swaps for long-running systems, but didn't dare think about emergency hot-swapping — thank you!"
Consider a standard Ports & Adapters setup where your domain service talks to an upstream API through an adapter. When that API starts timing out, every instance of your service will independently discover the failure, each burning through its own timeout window.
With 10 instances and a 3-second timeout, you're looking at 30 seconds of cumulative degraded experience across your cluster, at minimum. In practice, retries, thread pool saturation, and cascading failures make this much worse.
The standard circuit breaker pattern (Hystrix, Resilience4j) solves this per instance: each service independently opens its circuit after detecting failures. But there's no coordination. Instance 7 doesn't know that instance 1 already hit the timeout 2 seconds ago.
The solution: broadcast-driven adapter switching
The key insight: the first instance to detect a failure should warn all others.
Instance 1: timeout detected → broadcast "switch to fallback"
Instance 2: receives event → switches adapter (never hits the timeout)
Instance 3: receives event → switches adapter (never hits the timeout)
...
Instance N: receives event → switches adapter (never hits the timeout)
This is eventual consistency applied to infrastructure. The adapter state propagates across the cluster asynchronously, and every instance converges to the same adapter within milliseconds of the first detection.
Full source code: github.com/charles-hornick/adapter-hotswap-spring
Nothing special here, just a standard port interface:
public interface AnimalPort {
    String getAnimal();
    String name();
}
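Not shown above: the two adapter beans behind the "primaryAdapter" and "fallbackAdapter" qualifiers used below. As a hedged sketch (the actual implementations in the repository may differ; the base-URL property is an assumption), the primary adapter could call the upstream HTTP service while the fallback returns a canned response:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestClient;

// Sketch only: illustrative implementations, not the repository's code.
@Component("primaryAdapter")
class PrimaryAnimalAdapter implements AnimalPort {

    private final RestClient restClient;

    PrimaryAnimalAdapter(@Value("${primary.base-url:http://primary:8080}") final String baseUrl) {
        this.restClient = RestClient.create(baseUrl); // hypothetical property name
    }

    @Override
    public String getAnimal() {
        // Blocks until the upstream responds; this is the call the TimeLimiter guards.
        return restClient.get().uri("/animal").retrieve().body(String.class);
    }

    @Override
    public String name() {
        return "primary";
    }
}

@Component("fallbackAdapter")
class FallbackAnimalAdapter implements AnimalPort {

    @Override
    public String getAnimal() {
        return "Chat"; // matches the fallback response in the demo logs
    }

    @Override
    public String name() {
        return "fallback";
    }
}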
The critical piece. An AtomicReference holds the active adapter, making the swap lock-free and thread-safe:
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
public class AdapterConfig {

    private final Map<String, AnimalPort> adapters;
    private final AtomicReference<AnimalPort> activeAdapter;

    public AdapterConfig(
            @Qualifier("primaryAdapter") final AnimalPort primary,
            @Qualifier("fallbackAdapter") final AnimalPort fallback) {
        this.adapters = Map.of("primary", primary, "fallback", fallback);
        this.activeAdapter = new AtomicReference<>(primary);
    }

    public AnimalPort getActiveAdapter() {
        return activeAdapter.get();
    }

    public boolean switchTo(final String adapterName) {
        final var target = adapters.get(adapterName);
        if (target == null) {
            return false; // unknown adapter name: ignore rather than swap in null
        }
        final var previous = activeAdapter.getAndSet(target);
        return !previous.name().equals(adapterName);
    }
}
The swap is a single atomic operation. Threads currently executing a request through the old adapter will finish their call; new requests immediately use the new adapter. No locks, no synchronized blocks.
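To see why no locks are needed, here is a minimal standalone sketch (plain Java, no Spring, purely illustrative): readers always observe either the old or the new reference, never a torn or null state:

import java.util.concurrent.atomic.AtomicReference;

public class SwapDemo {

    public static void main(final String[] args) throws InterruptedException {
        final AtomicReference<String> active = new AtomicReference<>("primary");

        // Readers: each "request" grabs the current reference once and keeps using it.
        final Runnable reader = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                final String adapter = active.get(); // always "primary" or "fallback"
                assert adapter != null;
            }
        };
        final Thread t1 = new Thread(reader);
        final Thread t2 = new Thread(reader);
        t1.start();
        t2.start();

        // Writer: the swap is one atomic operation, immediately visible to all threads.
        final String previous = active.getAndSet("fallback");
        System.out.println("swapped from " + previous);

        t1.join();
        t2.join();
    }
}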
The service layer wraps adapter calls with Resilience4j's TimeLimiter. On timeout, it publishes a custom Spring Cloud Bus event:
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import io.github.resilience4j.timelimiter.TimeLimiter;
import io.github.resilience4j.timelimiter.TimeLimiterConfig;
import org.springframework.cloud.bus.BusProperties;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Service;

@Service
public class AnimalService {

    private final AdapterConfig adapterConfig;
    private final ApplicationEventPublisher publisher;
    private final BusProperties busProperties;
    private final TimeLimiter timeLimiter;
    private final ExecutorService executor;

    public AnimalService(final AdapterConfig adapterConfig,
                         final ApplicationEventPublisher publisher,
                         final BusProperties busProperties) {
        this.adapterConfig = adapterConfig;
        this.publisher = publisher;
        this.busProperties = busProperties;
        this.timeLimiter = TimeLimiter.of(
                TimeLimiterConfig.custom()
                        .timeoutDuration(Duration.ofSeconds(3))
                        .cancelRunningFuture(true)
                        .build()
        );
        this.executor = Executors.newVirtualThreadPerTaskExecutor();
    }

    public String getAnimal() {
        final var adapter = adapterConfig.getActiveAdapter();
        try {
            final var future = executor.submit(adapter::getAnimal);
            return timeLimiter.executeFutureSupplier(() -> future);
        } catch (final Exception e) {
            // First instance to detect the failure -> broadcast to all
            publisher.publishEvent(new AdapterSwitchEvent(
                    this, busProperties.getId(), "**", "fallback"
            ));
            // The local listener has already swapped (publishEvent is synchronous),
            // so this retry goes through the fallback adapter.
            return adapterConfig.getActiveAdapter().getAnimal();
        }
    }
}
Key decisions:
Virtual threads (Executors.newVirtualThreadPerTaskExecutor()) - standard since Java 21, virtual threads keep the timeout wrapper lightweight: the thread parked on the upstream call is virtual, so no platform thread blocks for the full timeout window.
"**" destination - the bus event targets all services, not a specific instance.
Immediate local fallback - after broadcasting, the current request falls back locally without waiting for the bus round-trip. By default, ApplicationEventPublisher delivers the event synchronously to the instance's own @EventListener, so the adapter has already been swapped by the time getActiveAdapter() is called again.
A custom RemoteApplicationEvent that carries the target adapter name:
import org.springframework.cloud.bus.event.RemoteApplicationEvent;

public class AdapterSwitchEvent extends RemoteApplicationEvent {

    private String targetAdapter;

    // Required for JSON deserialization on receiving instances
    public AdapterSwitchEvent() { }

    public AdapterSwitchEvent(final Object source,
                              final String originService,
                              final String destinationService,
                              final String targetAdapter) {
        super(source != null ? source : new Object(), originService,
                () -> destinationService != null ? destinationService : "**");
        this.targetAdapter = targetAdapter;
    }

    public String getTargetAdapter() {
        return targetAdapter;
    }
}
Spring Cloud Bus serializes this to JSON, pushes it to RabbitMQ, and every connected instance receives it. The listener on each instance performs the swap:
@EventListener
public void onAdapterSwitch(final AdapterSwitchEvent event) {
    adapterConfig.switchTo(event.getTargetAdapter());
}
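One wiring detail the snippets assume: Spring Cloud Bus only deserializes custom event types that have been registered, so the event class must be declared via @RemoteApplicationEventScan (and the application needs spring-cloud-starter-bus-amqp on the classpath plus spring.rabbitmq.* connection properties). A minimal configuration:

import org.springframework.cloud.bus.jackson.RemoteApplicationEventScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@RemoteApplicationEventScan(basePackageClasses = AdapterSwitchEvent.class)
public class BusConfig {
    // Registers AdapterSwitchEvent with the bus's Jackson deserializer
    // so receiving instances can reconstruct it from JSON.
}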
A scheduled health checker pings the primary service and broadcasts a switch-back when it recovers:
@Scheduled(fixedDelayString = "${health.check.interval:5000}")
public void checkPrimaryHealth() {
    if ("primary".equals(adapterConfig.getActiveAdapter().name())) {
        return; // already on primary, nothing to check
    }
    try {
        final String response = restClient.get().uri("/health")
                .retrieve().body(String.class);
        if ("UP".equals(response)) {
            publisher.publishEvent(new AdapterSwitchEvent(
                    this, busProperties.getId(), "**", "primary"
            ));
        }
    } catch (final Exception e) {
        // still down, do nothing
    }
}
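The method references restClient, publisher, and busProperties without showing their wiring; here is a hedged sketch of the surrounding component (the class name and the base-URL property are assumptions):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.bus.BusProperties;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestClient;

@Component
public class PrimaryHealthChecker {

    private final AdapterConfig adapterConfig;
    private final ApplicationEventPublisher publisher;
    private final BusProperties busProperties;
    private final RestClient restClient;

    public PrimaryHealthChecker(final AdapterConfig adapterConfig,
                                final ApplicationEventPublisher publisher,
                                final BusProperties busProperties,
                                @Value("${primary.base-url:http://primary:8080}") final String baseUrl) {
        this.adapterConfig = adapterConfig;
        this.publisher = publisher;
        this.busProperties = busProperties;
        this.restClient = RestClient.create(baseUrl); // hypothetical property name
    }

    // checkPrimaryHealth() from the snippet above goes here.
}

Note that @Scheduled also requires @EnableScheduling on a configuration class.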
To run the demo:
git clone https://github.com/charles-hornick/adapter-hotswap-spring.git
cd adapter-hotswap-spring
docker-compose up --build
The demo runs two instances of the hotswap service, a flaky primary service (cyclic timeouts), and a stable fallback. Watch the logs for:
Response: Chien ← primary adapter
Response: Chien
EVENT: Primary adapter timeout — broadcasting switch to fallback
EVENT: Switching adapter from 'primary' to 'fallback'
(instance 2) EVENT RECEIVED: Switch to 'fallback'
Response: Chat ← fallback adapter
HEALTHCHECK: Primary adapter responding again
EVENT: Switching adapter from 'fallback' to 'primary'
Response: Chien ← back to primary
Instance 2 switches without ever experiencing the timeout.
This is a demo, not a production framework. Real-world considerations:
Split-brain risk. If RabbitMQ is partitioned, instances may diverge on which adapter is active.
Mitigation: use a consensus mechanism or accept eventual convergence via the health checker.
Thundering herd on recovery. When the health checker broadcasts "switch back to primary", all instances hit the primary simultaneously.
Mitigation: add jitter to the health check interval, or use a canary approach where only one instance switches back first.
Single point of decision. Whichever instance first detects the failure makes the decision for the entire cluster. If that detection is a false positive (network blip), the whole cluster switches unnecessarily.
Mitigation: require N consecutive failures before broadcasting (see the sketch after this list), or use a quorum-based decision.
Bus latency. There's a window between the broadcast and reception where other instances may still hit the failing adapter. This is inherent to eventual consistency. For most use cases, the RabbitMQ propagation delay (typically <100ms) is negligible compared to a 3-second timeout.
No adapter state persistence. If an instance restarts, it defaults back to the primary adapter regardless of the cluster state.
Mitigation: persist the adapter state in a shared store (Redis, database) or replay the latest bus event on startup.
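As a hedged sketch of the consecutive-failures mitigation (the threshold and method names are illustrative, not from the repository): keep a counter next to the broadcast in AnimalService and only publish once N failures have been seen in a row:

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative guard around the broadcast in AnimalService
private static final int FAILURE_THRESHOLD = 3; // hypothetical value

private final AtomicInteger consecutiveFailures = new AtomicInteger();

private void onTimeout() {
    // Only the Nth consecutive failure triggers the cluster-wide switch;
    // a single network blip never does, and any success resets the streak.
    if (consecutiveFailures.incrementAndGet() >= FAILURE_THRESHOLD) {
        consecutiveFailures.set(0);
        publisher.publishEvent(new AdapterSwitchEvent(
                this, busProperties.getId(), "**", "fallback"
        ));
    }
}

private void onSuccess() {
    consecutiveFailures.set(0);
}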
This pattern is most valuable when:
You run multiple instances behind a load balancer, so each instance discovering the failure independently multiplies the damage.
The failing dependency is shared by all instances, so the first detection is valid for the whole cluster.
Timeouts are long enough that sparing every other instance the wait is worth the extra moving parts.
A genuinely viable fallback adapter exists to switch to.
It's overkill for single-instance deployments or when failures are so rare that independent circuit breakers suffice.
The Hexagonal Architecture describes adapters as interchangeable, so you can swap a database adapter for an in-memory adapter, or a REST adapter for a gRPC adapter. But the original pattern is silent on when and how this swap happens at runtime.
This implementation extends the pattern in two directions: in time, because the swap happens at runtime in response to a detected failure rather than at build or configuration time; and in space, because the swap propagates across every instance of a distributed system via the event bus.
The adapter remains a first-class architectural concept. We're just giving it operational capabilities that the original pattern implied but never specified.
Source code: github.com/charles-hornick/adapter-hotswap-spring
Stack: Java 25, Spring Boot 4.0.2, Spring Cloud 2025.1.0, Resilience4j 2.3.0, RabbitMQ
Charles Hornick - Java consultant specializing in smart refactoring and legacy application rescue. charles-hornick.be