Geometry of Systems: Architecture as Field Physics

#architecture #ddd #systemdesign #cleanarchitecture

Vasiliy Shilov


1. We don't build systems - we define their geometry

Software architecture is often described using the metaphor of construction.

We "build" systems.
We "lay foundations".
We "assemble components".

This metaphor is convenient - and fundamentally wrong.

A system is not a building.

A system is a field - not classical mechanics on a lab bench, but a useful picture: a landscape of allowed configurations and the cost of moving between them. Not a collection of parts, but a space shaped by what pulls the system (intent), what walls it off (constraints), and how easily change spreads (coupling).

Architecture is not about assembling components.
It is about shaping the space in which the system is allowed to exist.

That shaping has a price. Every boundary, layer, and context is ongoing economics: decisions you must keep consistent and attention you must spend to navigate the model. A field can be over-engineered - when structural complexity costs more (cognitive load, change latency, coordination) than the value the system protects. Then the right move is not to add another ring or context, but to flatten until stakes and uncertainty justify the depth.

Once you see it this way, everything changes:

  • Bugs are not "mistakes" - they are allowed states
  • Complexity is not "growth" - it is distortion of the field
  • Stability is not "control" - it is energy minimization

Allowed by which layer? "Allowed" is always scoped. The same state can be forbidden in the spec and still reachable in production - because admissibility is defined separately for code (what actually runs), architecture (what may depend on what, which invariants exist), and process (review, gates, rollout rules). If a bug exists, it means some layer's geometry - not necessarily the one you wish were in charge - admitted that state or that transition. The fix is often to tighten geometry where it was too loose, not to relabel the outcome as a moral failure.

Figure A - Hero: architecture as one field.

Schematic energy landscape: one deep basin (intent), forbidden patches (constraints), dense core fading to fluid edges (coupling)

Below: a compact visual glossary (each term maps cleanly to a diagram), Clean Architecture and DDD read through the same lens, dynamics - debt, shortcuts, and decay - then a short bridge from metaphor to computation: how the same model can sit on enforceable artifacts, not only on slides. The physics here is deliberate analogy: it is meant to design and explain, not to replace metrics, tests, or incident response.

2. Visual glossary: the physics of architecture

Figure B - Glossary map (three forces on one diagram).

Same landscape annotated: gravity arrows into a well, topology as cut-out forbidden regions, viscosity as thick vs thin medium between modules

Intent = Gravity

Intent defines where the system "wants" to be.

A clear business goal creates a deep potential well -
pulling decisions, code, and behavior toward a stable configuration.

Weak or ambiguous intent?

-> Shallow field

-> Multiple competing minima

-> Architectural drift

If your system constantly diverges, it's not a discipline problem.
It's a gravity problem.

Constraints = Topology

Constraints define where the system cannot go.

They are not guidelines.
They are the shape of admissible space - which regions exist at all. Not topology in the mathematician's sense of continuous deformation, but in the practical sense: "you cannot get there from here without breaking an invariant."

Invariants = impassable ridges

Policies = restricted regions

Contracts = boundary surfaces

A well-designed system doesn't "check correctness".

It cannot physically reach incorrect states.

If you rely on validation instead of geometry -
you're simulating physics instead of defining it.

Figure C - Geometry vs runtime check.

Split panel: left path blocked by cliff topology; right same bad state reachable via dotted bridge labeled validation
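The split in Figure C can be sketched in code. A toy Python example (the `Balance` type and its invariant are invented for illustration): on the geometry side, the bad state cannot be constructed; on the validation side, it is reachable and a check must chase it.

```python
from dataclasses import dataclass

# Geometry: the invariant lives in the type's constructor.
# A negative balance is not a reachable state - construction fails.
@dataclass(frozen=True)
class Balance:
    cents: int

    def __post_init__(self):
        if self.cents < 0:
            raise ValueError("negative balance is not an admissible state")

    def withdraw(self, amount: int) -> "Balance":
        # Every transition goes through the constructor, so the
        # invariant is enforced by construction, not by callers.
        return Balance(self.cents - amount)

# Validation: the bad state is reachable; a check may or may not run.
def withdraw_unchecked(cents: int, amount: int) -> int:
    return cents - amount  # nothing stops the result from going negative

b = Balance(100)
try:
    b.withdraw(200)                        # blocked by geometry
    blocked = False
except ValueError:
    blocked = True

leaked = withdraw_unchecked(100, 200)      # -100 slips through; now validation's job
```

The left panel corresponds to the `ValueError` at construction time; the right panel is `leaked == -100` waiting for someone to notice.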

Coupling = Viscosity

Coupling defines how easily a local change propagates - how far a perturbation spreads through the codebase and runtime.

Think of it as viscosity of the medium:

High coupling -> thick, resistant medium

Low coupling -> fluid, easily changing environment

In a healthy architecture:

Core = high density, high stability

Edges = low density, high adaptability

If everything is equally fluid - nothing is stable.
If everything is equally rigid - nothing evolves.

Economics of decisions, economics of attention

Geometry is a budget. Each partition of the system - module, layer, bounded context - adds decision surface: where files live, what may call what, which invariant owns which story. It also taxes attention: what developers must hold in working memory to change behavior safely.

The goal is not maximal structure. It is minimal structure that still makes wrong states expensive or impossible - so that finding the correct state stays cheaper than fighting the diagram. If complexity of the shape outruns utility of the product (or of the experiment), you are over-constraining the field: fewer layers, wider contexts, softer topology until the problem's weight catches up.

That is the practical answer to "how many layers, how far to split": as many as pay rent in reduced risk or faster learning - not one more.

One compact handle for the cost of change (a heuristic for order-of-magnitude thinking):

cost_of_change ~ propagation × integration_depth × constraint_friction

Read ~ as scales with, in a rough sense - not a literal product.

Propagation is coupling - how far a perturbation travels. Integration depth is how many layers, contexts, or handoffs a change must cross. Constraint friction is how much surface area your invariants add - necessary drag when it buys safety, waste when it does not. When you feel a change is expensive, you can ask which term grew.
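As a sketch, the heuristic can be made concrete with invented 1-10 scores - the point is comparing which term grew, not the absolute numbers:

```python
def cost_of_change(propagation: float,
                   integration_depth: float,
                   constraint_friction: float) -> float:
    """Order-of-magnitude heuristic, not a real metric:
    each factor is a rough 1-10 score assigned by inspection."""
    return propagation * integration_depth * constraint_friction

# Same edit, two geometries (scores are invented for illustration):
# a hotspot in a tangle - change travels far, but few layers to cross
monolith_hotspot = cost_of_change(propagation=8, integration_depth=2, constraint_friction=3)
# a well-isolated edge module - change is contained, more layers, less friction
layered_edge = cost_of_change(propagation=2, integration_depth=3, constraint_friction=2)
```

Here the hotspot scores 48 against the edge's 12 - and the breakdown tells you the dominant term is propagation, i.e. coupling, not layering.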

3. Clean Architecture as a physics experiment

Let's reinterpret Clean Architecture as a field system.

The onion is an instrument, not a mandate. The physics reading tells you what each ring is for (density, flow, phase boundary); it does not say that every codebase needs the full stack of rings. A small, short-lived surface can be shallower - fewer adapters, thinner use-case shells - if attention and decision economics do not yet warrant the full depth. Deep wells are for heavy invariants and long horizons; flat plains are allowed when gravity is weak.

Figure D - Clean Architecture rings + physics overlay.

Concentric rings (Entities, Use Cases, Adapters) with density, flow lines, and phase boundary callouts

Entities - High-density core

This is the region of maximum stability.

Business invariants live here

Change is expensive and rare

The "laws of physics" are defined here

The closer you are to the core, the stronger the constraints and gravity.

Use Cases - Gradient lines

Use cases are not just application logic.

They are paths through the field.

They define:

  • how data moves
  • how decisions propagate
  • how energy flows toward the core

A good use case is not code - it's a stable trajectory.

Adapters - Phase boundaries

Adapters are where phase transitions happen.

External world = chaotic, noisy, probabilistic

Internal system = structured, deterministic

Adapters condense reality into invariants.

This is where entropy is reduced.
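A minimal sketch of an adapter as a phase boundary (the `Order` entity and the payload shape are invented): noise is resolved at the edge, so the core only ever sees states that satisfy its invariants.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    # Internal entity: structured, deterministic, invariant-holding.
    order_id: str
    quantity: int

    def __post_init__(self):
        if self.quantity < 1:
            raise ValueError("quantity must be at least 1")

def order_from_payload(payload: dict) -> Order:
    """Adapter: condenses a noisy external payload into the internal invariant.
    Missing keys, stray whitespace, and junk values are resolved here,
    so the core never sees them."""
    raw_qty = payload.get("qty", "")
    try:
        qty = int(str(raw_qty).strip())
    except ValueError:
        raise ValueError(f"unparseable quantity: {raw_qty!r}")
    order_id = str(payload.get("id", "")).strip()
    if not order_id:
        raise ValueError("missing order id")
    return Order(order_id=order_id, quantity=qty)

order = order_from_payload({"id": " A-42 ", "qty": " 3 "})  # noisy but recoverable
```

Everything probabilistic lives before the `return`; everything after it is inside the deterministic phase.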

4. DDD in the same picture

Domain-Driven Design is not a different physics - it is the same field language applied to meaning, boundaries, and strategic shape. Where Clean Architecture stresses dependency direction and layers, DDD stresses multiple local geometries (bounded contexts) and how they connect.

Figure E - Bounded contexts as neighboring fields.

Several labeled regions (bounded contexts) with different shading; core domain deepest well; arrows between regions with varied line styles suggesting relationship types

Bounded context - Local field

A bounded context is a region where the model is consistent: its own admissible states, its own ubiquitous language. In field terms: a local coordinate chart - the words and invariants line up with the code inside that region.

Different contexts do not share one global geometry. "The same word, different potential" is not a bug; forcing one model everywhere is how you warp the field and pay in integration pain.

Core, supporting, generic - Where gravity pulls hardest

Core domain is where business survival concentrates - the deepest well in the strategic landscape. Supporting and generic subdomains are shallower basins or flatter plains: important operationally, but not where you pour the same design energy.

That matches the earlier density story: invest viscosity and topology where differentiation lives; keep the periphery fluid or buy/borrow when the landscape is flat.

Aggregate - Local viscosity and gates

Inside a context, an aggregate is a cluster of high internal coupling held behind a small surface: identity, commands, consistency rules. Perturbations should not diffuse freely across the whole model - they stop at the gate.

So: tactical DDD is local field design - where change is sticky and what the rest of the system is allowed to see.
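A toy sketch of that gate (the `CartAggregate` and its limit rule are invented): internal state stays behind a small command surface, and the consistency rule is checked exactly once, at the boundary.

```python
class CartAggregate:
    """Toy aggregate: high internal coupling behind a small command surface.
    Callers see identity and commands - never the internal line items."""
    MAX_ITEMS = 10  # consistency rule owned by the aggregate

    def __init__(self, cart_id: str):
        self.cart_id = cart_id            # identity
        self._items: dict[str, int] = {}  # internal state, not part of the surface

    def add_item(self, sku: str, qty: int) -> None:
        # The invariant is enforced at the gate; perturbations stop here
        # instead of diffusing across the whole model.
        if qty < 1:
            raise ValueError("quantity must be positive")
        if sum(self._items.values()) + qty > self.MAX_ITEMS:
            raise ValueError("cart limit exceeded")
        self._items[sku] = self._items.get(sku, 0) + qty

    def item_count(self) -> int:
        # The rest of the system sees only what the surface allows.
        return sum(self._items.values())
```

Viscosity is high inside the class and low outside it: a change to the limit rule touches one gate, not every caller.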

Context mapping - Topology between fields

Strategic design is the shape between contexts: who leads, who conforms, where you translate (anti-corruption layer), where you share a kernel. That is inter-field topology - the same idea as adapters as phase boundaries, but at organizational and semantic scale.

DDD's collaboration practices (event storming, workshops) define that topology; they do not reduce to a diagram. The metaphor still helps: you are negotiating where one field ends and another begins, and how signals cross without collapsing two geometries into one lie.
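A minimal illustration of that translation (both `Customer` models are invented): the anti-corruption layer carries shared identity across the boundary and drops everything that belongs to the other context's geometry.

```python
from dataclasses import dataclass

# Two contexts use the word "customer" with different meanings - different potentials.
@dataclass(frozen=True)
class BillingCustomer:       # Billing context: what matters is payment standing
    account_id: str
    balance_cents: int

@dataclass(frozen=True)
class SupportCustomer:       # Support context: what matters is reachability
    account_id: str
    preferred_channel: str

def to_support(billing: BillingCustomer, channel: str = "email") -> SupportCustomer:
    """Anti-corruption layer: Support does not import Billing's model wholesale.
    It translates the shared identity and discards fields that do not belong."""
    return SupportCustomer(account_id=billing.account_id, preferred_channel=channel)
```

Neither context bends its model to the other; the translator is where the two geometries meet without collapsing into one.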

5. Dynamics: why systems decay

Technical debt = local minima

Bad architecture isn't random.

It's a stable but suboptimal state.

The system is stuck because:

  • escaping requires energy (time, money, risk)
  • local minimum is "good enough"

Refactoring is not cleanup.
It is energy injection to escape a basin.

Figure F - Technical debt as local minimum.

Energy curve: ball stuck in a shallow pit; a better global minimum separated by a hump; arrow marking the energy injection needed to escape

Vibe coding = bypassing the barrier

When you "just ship it" without respecting constraints, you are not climbing the landscape the architecture defined - you shortcut through what was meant to be a wall. People sometimes call that "tunneling"; it is not quantum mechanics, only a name for skipping the intended path and landing in a state the topology was supposed to exclude.

Figure G - Bypass vs intended path.

Side view: solid barrier; solid curve climbs over; dashed tunnel or broken outline cuts through; landing dot in the excluded region

It works... temporarily.

But the system ends up in a configuration that violates its geometry.

-> instability

-> unpredictable behavior

-> expensive corrections later

Speed without geometry doesn't remove cost.
It defers and amplifies it.

6. From metaphor to computation

The field is not only a metaphor for conversation. The same picture can be operationalized - translated into things you can evaluate, enforce, or measure. Then architecture is not frozen in a diagram; it extends into types, tests, policies, graphs, and telemetry.

Geometry as a rough dictionary

| In the article | In systems that compute |
| --- | --- |
| Intent | Objective function / utility - what you optimize (SLOs, error budgets, product metrics, cost caps) |
| Constraints | Invariants - types, schemas, policy engines, admission control, config guards |
| Coupling | Propagation cost - dependency and ownership graphs, change coupling, blast-radius estimates |

Execution is not, in the first place, "running code." It is state transition under constraints - the moves the system is permitted to make. Code is the mechanism - a materialized view of those permissions at runtime. The system evolves because transitions are allowed (or blocked) by types, policies, dependencies, and deploy gates. Motion in the field is the semantics; binaries and services are how that motion is instantiated.

You can read a design move as a question that admits an answer:

decision = evaluate(intent, constraints, current_state)

Not every team will literalize that as one function - the point is the shape: choices become conditional on explicit intent, explicit boundaries, and observable state. Refactors, deploys, and feature work are trajectories - you can ask whether a step respects the topology before it lands in production.
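One way that shape could look in Python (the gate, intent keys, and constraints are all hypothetical): a decision is just the evaluation of explicit constraints against observable state.

```python
def evaluate(intent: dict, constraints: list, current_state: dict) -> tuple[bool, list[str]]:
    """Illustrative shape only: a transition is admissible when every
    constraint (a named predicate over state) holds; intent supplies
    the thresholds the predicates read."""
    violations = [name for name, check in constraints
                  if not check(intent, current_state)]
    return (len(violations) == 0, violations)

# Hypothetical deploy gate expressed as intent + constraints + state:
intent = {"error_budget_remaining": 0.2}   # what we optimize / protect
constraints = [
    ("error_budget",     lambda i, s: s["recent_error_rate"] <= i["error_budget_remaining"]),
    ("no_open_incident", lambda i, s: not s["incident_open"]),
]
state = {"recent_error_rate": 0.05, "incident_open": False}

ok, why = evaluate(intent, constraints, state)
```

When the gate blocks, `why` names the violated constraint - the step fails against the topology before it lands in production, not after.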

Why this matters

Without something computable at the end of the chain:

  • architecture lives in documentation
  • constraints live as convention
  • intent lives as interpretation

With the bridge:

  • constraints can be enforceable (compiler, CI, policy-as-code)
  • intent can be evaluable (metrics, budgets, gates)
  • decisions can be traceable (ADRs, audit logs, deployment policies)
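A sketch of one such enforceable constraint (module names and the ring assignment are invented): the "dependencies point inward" rule as a check that could run in CI instead of living on a slide.

```python
# Hypothetical dependency graph: module -> modules it imports.
DEPENDS_ON = {
    "adapters":  {"use_cases"},
    "use_cases": {"entities"},
    "entities":  set(),
}
# The topology rule: dependencies may only point toward lower rings.
RING = {"entities": 0, "use_cases": 1, "adapters": 2}

def inward_only(graph: dict, ring: dict) -> list[str]:
    """Returns every edge that points outward or sideways -
    an empty list means the declared geometry holds."""
    return [
        f"{src} -> {dst}"
        for src, deps in graph.items()
        for dst in deps
        if ring[dst] >= ring[src]
    ]

violations = inward_only(DEPENDS_ON, RING)  # [] means the rule holds
```

In practice the graph would come from static analysis rather than a hand-written dict, but the shape of the check is the same.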

Where computation fails

Even when geometry is encoded, governance decays: metrics can drift from intent; policies can lag behind reality; coupling can grow invisibly while the dependency graph you trust goes stale. Computation does not remove errors - it moves them, from hidden runtime states into mis-specified or obsolete models of the field. The failure mode is not "no rules" - it is wrong rules you still believe.

The useful question shifts from "is this correct?" - often late, often political - to "is this state (or transition) reachable under the geometry we defined?" That is where architecture meets runtime: not every answer is automatic, but more of the field can be checked than if the model stayed purely narrative.

This does not mean humans disappear - partial formalization, judgment calls, and the economics of attention still apply. It means the metaphor has computational continuation where you choose to pay for it.

7. Conclusion: architect as a designer of forces

The role of an architect is not to control the system directly.

It is to design:

  • gravity (intent)
  • topology (constraints)
  • viscosity (coupling)

So that:

The system naturally evolves toward correct states.

If your system constantly breaks:

it's not a people problem

it's not a tooling problem

It's a geometry problem

Where this metaphor stops

This frame does not replace team dynamics, on-call and incidents, security threat models, or business economics - it situates the shape of the system next to those realities. Trade-off math (value vs complexity, risk vs ceremony) still lives in product and org context; the field picture helps you see when you are spending attention and decisions without buying commensurate safety or speed. Use it to align structure with intent and constraints; use other tools when the failure mode is human, operational, or adversarial.

Architecture is not the structure of the system.

It is the shape of its possible futures.

Final takeaway

Good architecture is when the system finds the correct state with minimal effort - machine effort and human effort: fewer forced decisions, less sustained attention to stay coherent, structure that matches the stakes.

Bad architecture is when correctness requires constant force, or when the cost of the geometry (layers, boundaries, process) exceeds the use the system provides - complexity without return.