🏛️ Real Architectures

Compound Patterns

No real-world system uses a single pattern. These are the famous architectures that combine multiple patterns — and how they fit together at Google, Netflix, Amazon, and beyond.

architecture · intermediate

Model-View-Controller (MVC)

Separates data, presentation, and user interaction into three layers

MVC decouples domain logic (Model) from presentation (View) via a control layer (Controller). The Model notifies Views of state changes through the Observer pattern, allowing multiple views to reflect the same data. The Controller uses the Strategy pattern to select appropriate business logic based on user input. Views often form a Composite tree for hierarchical UI construction. This separation enables independent development: designers refine Views, domain experts build Models, and developers wire Controllers. Changes to business logic don't require UI rewrites, and the same Model can power desktop, web, and mobile clients. MVC remains the backbone of web frameworks because it scales from small prototypes to large systems. Testability improves dramatically—you can unit-test Models and Controllers without rendering Views.

How the patterns combine

  1. Model encapsulates domain state and rules; notifies Views when data changes (Observer)
  2. View subscribes to Model and renders state; never modifies Model directly
  3. Controller intercepts user events, invokes Model methods, and may update View (Strategy selector)
  4. Changes flow: User → Controller → Model → View (via notifications)
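
The flow above can be sketched in a few lines of Python; the CounterModel / CounterView / CounterController names are illustrative, not taken from any framework:

```python
from typing import Callable

class CounterModel:
    """Model: holds state and notifies subscribed views (Observer)."""
    def __init__(self) -> None:
        self._count = 0
        self._observers: list[Callable[[int], None]] = []

    def subscribe(self, observer: Callable[[int], None]) -> None:
        self._observers.append(observer)

    def increment(self) -> None:
        self._count += 1
        for observer in self._observers:   # push state change to every View
            observer(self._count)

class CounterView:
    """View: renders Model state; never mutates the Model directly."""
    def __init__(self) -> None:
        self.rendered = ""

    def render(self, count: int) -> None:
        self.rendered = f"Count: {count}"

class CounterController:
    """Controller: translates user events into Model operations."""
    def __init__(self, model: CounterModel) -> None:
        self._model = model

    def on_click(self) -> None:
        self._model.increment()

model = CounterModel()
view = CounterView()
model.subscribe(view.render)          # View observes the Model
controller = CounterController(model)
controller.on_click()                 # User -> Controller -> Model -> View
print(view.rendered)                  # Count: 1
```

Note that the Controller never touches the View: the update reaches it only through the Model's notification, which is what lets several Views share one Model.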

Used by

Spring MVC (Pivotal / VMware): Java web framework using annotated controllers, JPA models, and JSP/Thymeleaf views
Ruby on Rails (Basecamp / DHH): Convention-over-configuration web framework with built-in MVC scaffolding

When to reach for this

  • Building web applications where you need clear separation between business logic and UI
  • Supporting multiple clients (web, mobile, desktop) from a single backend
  • Teams where frontend and backend developers work in parallel on the same feature
  • When testability of business logic is critical and UI volatility is expected

ui · intermediate

Model-View-ViewModel (MVVM)

Adds ViewModel as a state container and data-binding bridge between Model and View

MVVM extends MVC by introducing ViewModel—a facade that exposes Model state in View-friendly terms and handles View commands. The breakthrough is two-way data binding: when you change an input, the ViewModel automatically syncs to the Model, and when the Model updates, Views refresh without imperative code. ViewModels encapsulate presentation logic (formatting, validation, visibility toggles) that doesn't belong in Models. They act as facades, simplifying Models for the View layer. User actions become Commands that the ViewModel executes, decoupling UI events from business logic. MVVM shines in rich UI applications (desktop, modern SPAs) where constant synchronization between user input and display is essential. Frameworks like Angular, Vue, and WPF handle the data binding machinery, so you focus on what changes, not how to push updates.

How the patterns combine

  1. ViewModel holds observable properties mirroring Model state (e.g., isLoading, errorMessage)
  2. View binds directly to ViewModel properties; framework auto-syncs on change
  3. User actions (clicks, form input) trigger ViewModel commands, which invoke Model methods
  4. Model updates flow back to ViewModel, which re-computes derived properties, triggering View refresh
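
A minimal Python sketch of this loop, with a hand-rolled bind() standing in for the framework's data-binding machinery (all names here are hypothetical):

```python
class SettingsModel:
    """Domain model: persists a single setting."""
    def __init__(self) -> None:
        self.volume = 50

class SettingsViewModel:
    """Facade over the Model: observable, View-friendly state plus commands."""
    def __init__(self, model: SettingsModel) -> None:
        self._model = model
        self._listeners = []                       # Views bound to this ViewModel
        self.display = f"Volume: {model.volume}%"  # derived, View-friendly property

    def bind(self, listener) -> None:
        """View binds a callback; a real framework wires this automatically."""
        self._listeners.append(listener)

    def set_volume_command(self, raw: str) -> None:
        """Command from the View: validate, update Model, recompute derived state."""
        value = max(0, min(100, int(raw)))         # presentation-level validation
        self._model.volume = value
        self.display = f"Volume: {value}%"
        for listener in self._listeners:           # auto-sync back to the View
            listener(self.display)

# The "View" here just records whatever the binding pushes to it
rendered = []
vm = SettingsViewModel(SettingsModel())
vm.bind(rendered.append)
vm.set_volume_command("150")     # clamped by the ViewModel, not the Model
print(rendered[-1])              # Volume: 100%
```

The clamping lives in the ViewModel because it is presentation logic: the Model stays free of display concerns, and the View stays free of imperative update code.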

Used by

WPF (Windows Presentation Foundation, Microsoft): Desktop UI framework with XAML-based two-way data binding to C# ViewModels
Angular (modern) (Google): SPA framework using [(ngModel)] two-way binding and component logic in TypeScript

When to reach for this

  • Rich client applications (desktop, SPA) with heavy user interaction and frequent state updates
  • Complex UIs where presentation logic (validation, formatting, visibility) is non-trivial
  • Teams wanting explicit separation of Model, ViewModel (presentation facade), and View layers
  • Projects leveraging framework-native data binding (Vue reactivity, Angular RxJS, WPF bindings)

architecture · advanced

Command Query Responsibility Segregation (CQRS)

Separates write (command) and read (query) pathways for scalability and flexibility

CQRS splits the application into two asymmetric paths: one for state-changing operations (commands) and one for reading state (queries). Unlike traditional CRUD, which uses the same Model for both reads and writes, CQRS recognizes they have different optimization needs—writes must be consistent and auditable, while reads must be fast and flexible. Commands are executed against a write model that enforces business rules. Each command generates events that are appended to an event log. Queries hit a separate, denormalized read model optimized for performance. An observer watches the event log and rebuilds read models asynchronously, creating eventual consistency. This pattern excels in event-driven systems where you need audit trails, distributed consistency, or support for multiple query indexes. The tradeoff is increased complexity: you manage two models and must handle eventual consistency.

How the patterns combine

  1. Commands mutate state and publish events to an event store (write path)
  2. Event handlers (observers) consume events and update denormalized read models asynchronously
  3. Queries directly hit read models, bypassing business logic (read path)
  4. Strategy pattern allows swapping read model implementations or command handlers per business context
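
A toy Python sketch of the two paths, using an in-process list as the event store and a dict as the read model (a real system uses a durable log and runs the projection asynchronously):

```python
from collections import defaultdict

event_log: list[dict] = []                      # write path appends here
read_model: dict[str, int] = defaultdict(int)   # denormalized view for queries

def handle_deposit_command(account: str, amount: int) -> None:
    """Write path: enforce business rules, then append an event.
    Never touches the read model."""
    if amount <= 0:
        raise ValueError("deposit must be positive")
    event_log.append({"type": "Deposited", "account": account, "amount": amount})

def project_events() -> None:
    """Observer side: rebuild the read model from the event log
    (eventually consistent with the write path)."""
    read_model.clear()
    for event in event_log:
        if event["type"] == "Deposited":
            read_model[event["account"]] += event["amount"]

def query_balance(account: str) -> int:
    """Read path: hits the denormalized read model only."""
    return read_model[account]

handle_deposit_command("alice", 100)
handle_deposit_command("alice", 50)
project_events()                  # in production this runs in the background
print(query_balance("alice"))     # 150
```

The asymmetry is visible even at this scale: the command path validates and appends, while the query path does a plain lookup with no business logic in the way.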

Used by

Microsoft eShopOnContainers (Microsoft): Sample microservices showing CQRS with separate write DB (SQL Server) and read views (Redis)
Axon Framework (AxonIQ): Java CQRS & Event Sourcing framework with built-in event store and projection engine

When to reach for this

  • Systems with complex business logic and multiple query types (reporting, analytics, dashboards)
  • Event-driven microservices where commands must be auditable and replayed
  • High-read, low-write scenarios where you can afford eventual consistency
  • Projects requiring detailed audit trails or temporal queries (what was the balance at date X?)

data · advanced

Event Sourcing

Persist state changes as immutable events; rebuild state by replaying event history

Event Sourcing inverts the traditional database model. Instead of storing current state, you store a log of all state-changing events. The current state is derived by replaying events from the beginning. This provides a complete audit trail and enables temporal queries (what was the state at time T?). Each command produces one or more events that are immutable and append-only. The Memento pattern is implicit: events capture snapshots of state transitions. Observers (event handlers) subscribe to the event log and maintain materialized views or side effects. Recovery is trivial—replay events—and debugging is powerful because you have a record of every state change. Event Sourcing pairs naturally with CQRS, though they are independent. Complexity increases because you manage event versioning and eventual consistency. Not every system needs it, but it's invaluable for financial systems, auditing, and temporal analytics.

How the patterns combine

  1. Commands are validated and produce domain events (e.g., MoneyTransferred, UserRegistered)
  2. Events are appended to an immutable event log (Event Store)
  3. Current state is reconstructed by loading relevant events and applying them in order (Memento replay)
  4. Event handlers (observers) subscribe and perform side effects: update read models, send notifications, trigger other commands
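
The replay mechanic can be sketched as a fold over the event list; the account events here are illustrative:

```python
events = [
    {"type": "AccountOpened", "balance": 0},
    {"type": "MoneyDeposited", "amount": 200},
    {"type": "MoneyWithdrawn", "amount": 80},
]

def apply(state: dict, event: dict) -> dict:
    """Pure transition function: old state + event -> new state."""
    if event["type"] == "AccountOpened":
        return {"balance": event["balance"]}
    if event["type"] == "MoneyDeposited":
        return {"balance": state["balance"] + event["amount"]}
    if event["type"] == "MoneyWithdrawn":
        return {"balance": state["balance"] - event["amount"]}
    return state   # unknown event types are ignored, easing versioning

def replay(history, upto=None):
    """Rebuild state by folding events; `upto` gives temporal queries for free."""
    state = {}
    for event in history[:upto]:
        state = apply(state, event)
    return state

print(replay(events))            # {'balance': 120}
print(replay(events, upto=2))    # {'balance': 200}, state "as of" the first 2 events
```

Because apply is pure and events are immutable, replay is deterministic: the same history always yields the same state, which is what makes recovery and debugging trivial.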

Used by

Axon Framework (AxonIQ): Java framework with @EventHandler annotated handlers and automatic event store integration
Akka Persistence (Lightbend): Scala/Java library for actor-based systems with PersistentActor event journaling

When to reach for this

  • Systems requiring complete audit trails and compliance logging (financial, healthcare)
  • Temporal analytics: queries like 'show me all state transitions for this entity'
  • Debugging: replay a sequence of events to reproduce past bugs or understand decision history
  • Distributed systems: events are naturally suited for eventual consistency and replication

architecture · intermediate

Circuit Breaker

Prevents cascading failures by failing fast when a remote service is unhealthy

Circuit Breaker wraps calls to external services (APIs, databases, microservices) and monitors their success rate. When failures exceed a threshold, the circuit 'opens'—requests fail immediately without calling the remote service, giving it time to recover. After a timeout, the circuit enters 'half-open' state and tests a single request. If it succeeds, the circuit 'closes' and resumes normal operation. The State pattern is central: the circuit transitions between closed (normal), open (failing fast), and half-open (testing recovery). The Proxy pattern wraps the remote call, intercepting it to check circuit state. Observers watch metrics (latency, error rates) and trigger state transitions. Circuit Breaker is essential in microservices and distributed systems because it stops thundering-herd problems: one slow service doesn't drag down all callers. It works best paired with retries, timeouts, and fallbacks for resilient architectures.

How the patterns combine

  1. Closed state: requests pass through; on each call, record success/failure
  2. Threshold exceeded: transition to Open state, fail all requests immediately
  3. After timeout: transition to Half-Open, allow one test request
  4. Test succeeds: reset to Closed; test fails: return to Open; State pattern manages transitions
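
A compact Python sketch of that state machine, with an injectable clock so the open/half-open transition can be exercised deterministically (thresholds and names are illustrative):

```python
import time

class CircuitBreaker:
    """CLOSED -> OPEN after N consecutive failures; OPEN -> HALF_OPEN after a cooldown."""
    def __init__(self, failure_threshold=3, recovery_timeout=30.0, clock=time.monotonic):
        self.state = "CLOSED"
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.opened_at = 0.0
        self.clock = clock                     # injectable for testing

    def call(self, fn):
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.recovery_timeout:
                self.state = "HALF_OPEN"       # let one probe request through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"            # probe failed or threshold hit
                self.opened_at = self.clock()
            raise
        self.failures = 0                      # success resets the breaker
        self.state = "CLOSED"
        return result

# usage with a fake clock so the example is deterministic
now = [0.0]
breaker = CircuitBreaker(failure_threshold=2, recovery_timeout=10.0, clock=lambda: now[0])

def flaky():
    raise ConnectionError("remote service down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.state)          # OPEN: further calls fail fast without hitting the service
now[0] = 11.0                 # cooldown elapsed; next call is a half-open probe
print(breaker.call(lambda: "ok"), breaker.state)   # ok CLOSED
```

Production libraries add sliding failure-rate windows, per-endpoint breakers, and metrics hooks, but the three-state core is exactly this.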

Used by

Netflix Hystrix (Netflix): Pioneering Java library for circuit breaker, timeout, and bulkhead patterns in microservices
Resilience4j (Robert Winkler / OSS): Lightweight Java library with decorators for circuit breaker, retry, timeout, and rate limiting

When to reach for this

  • Microservices architectures where one slow service can cascade failures across the platform
  • Integrations with unreliable or fluctuating external APIs
  • Any distributed system where you need graceful degradation under partial outages
  • Load balancing: circuit breaker prevents routing to unhealthy instances

architecture · intermediate

Repository Pattern

Abstracts data persistence; swaps storage backends without changing business logic

Repository acts as a facade over data access logic, presenting collections as in-memory data structures (List, Set) rather than exposing SQL or API calls. It abstracts the persistence mechanism—swap SQL for NoSQL, REST API, or mock data, and the rest of the application stays unchanged. The Factory pattern appears when the repository creates domain objects from raw data (rows, JSON). Proxy pattern enables lazy loading: query results are wrapped in proxies that fetch related data on access. Together, these patterns isolate the application from persistence details, making testing trivial (mock the repository) and migrations painless. Repository is foundational in DDD and ORM-based architectures. It's less relevant in functional or query-oriented systems, but it remains the gold standard for object-oriented applications that value testability and separation of concerns.

How the patterns combine

  1. Repository interface defines domain-centric methods: findById, findByName, save, delete (no SQL exposed)
  2. Implementation talks to a database, cache, or API; translates between ORM objects and domain objects
  3. Factory sub-pattern: Repository.fromDataRow(row) creates domain objects from raw data
  4. Proxy sub-pattern: Lazy-loaded relationships materialize when accessed, not on load

Used by

Spring Data JPA (Pivotal / VMware): Java framework that auto-generates repositories from interface declarations; swaps DB backends easily
.NET Entity Framework Core (Microsoft): ORM with DbSet<T> repositories; LINQ enables flexible queries

When to reach for this

  • ORM-based applications where you want to hide SQL and swap databases
  • Unit tests: mock repositories to test business logic without touching a real database
  • DDD-driven systems where entities are queried by domain-specific methods, not raw SQL
  • Layered architectures separating persistence from business logic

architecture · advanced

Plugin Architecture

Load and execute extensible modules at runtime without recompiling the core

Plugin Architecture lets third-party code extend an application without modifying its source. The core defines plugin interfaces (strategies); plugins implement these interfaces. A plugin loader (factory) discovers, validates, and instantiates plugins at startup or runtime. The core publishes events (observer) that plugins subscribe to and extend. The Strategy pattern is explicit: plugins are alternative strategies injected at runtime. Factory pattern manages instantiation and dependency injection. Observer pattern allows plugins to hook into application lifecycle events (app-start, user-login, document-save) and react. This pattern powers IDEs (IntelliJ, VS Code), browsers (Chrome extensions), and gaming engines. The tradeoff is increased complexity: you must carefully design plugin interfaces, handle versioning, and manage security (sandboxing).

How the patterns combine

  1. Core defines plugin interface (e.g., Plugin, ToolProvider, Extension)
  2. Plugin loader scans directories/classpath, discovers classes implementing the interface (Factory)
  3. Each plugin is instantiated and registered; core holds a collection of plugins
  4. Core fires lifecycle events; plugins listen and respond (Observer)
  5. User action → core looks up applicable plugins (Strategy selection) → invokes them
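
A Python sketch of the three roles; subclass scanning stands in for a real loader's directory or entry-point discovery, and the whitespace-trimming plugin is purely illustrative:

```python
class Plugin:
    """Core-defined interface; third-party code subclasses it (Strategy)."""
    name = "base"
    def on_document_save(self, text: str) -> str:
        return text

class PluginHost:
    def __init__(self) -> None:
        self.plugins: list[Plugin] = []

    def discover(self) -> None:
        """Factory role: scan for implementations and instantiate them.
        A real loader scans plugin directories or package entry points."""
        for cls in Plugin.__subclasses__():
            self.plugins.append(cls())

    def fire_document_save(self, text: str) -> str:
        """Observer role: every registered plugin sees the lifecycle event."""
        for plugin in self.plugins:
            text = plugin.on_document_save(text)
        return text

class TrailingWhitespacePlugin(Plugin):
    """A third-party extension: strips trailing whitespace on save."""
    name = "trim"
    def on_document_save(self, text: str) -> str:
        return "\n".join(line.rstrip() for line in text.splitlines())

host = PluginHost()
host.discover()                                   # finds TrailingWhitespacePlugin
print(host.fire_document_save("hello   \nworld")) # hello\nworld, trimmed on save
```

Note the core never names TrailingWhitespacePlugin: it only knows the Plugin interface, which is what lets new plugins ship without recompiling the host.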

Used by

IntelliJ IDEA Plugin Platform (JetBrains): Kotlin/Java plugins extending IDE behavior: inspections, refactorings, language support
VS Code Extensions (Microsoft): TypeScript/Node.js extensions with Webview UI, language servers, and command palette contributions

When to reach for this

  • Applications requiring third-party extensibility (IDEs, browsers, content management systems)
  • Long-lived platforms where features are added without redeploying the core
  • Projects where different customers need different feature sets (modular SaaS)
  • Community-driven ecosystems: plugins are maintained by the community, reducing core burden

architecture · intermediate

Middleware / Decorator Pipeline

Chain decorators (middleware) to process requests through layers of concern

Middleware pipelines layer cross-cutting concerns (logging, authentication, compression, error handling) as decorators that wrap each other. A request flows through the chain: each middleware can inspect it, modify it, pass it to the next, and intercept the response. This separates concerns beautifully—add logging without touching auth code. The Decorator pattern is explicit: each middleware wraps the next middleware, adding behavior transparently. Chain-of-Responsibility is the execution model: a request passes through the chain until handled. Middleware can short-circuit the chain (e.g., reject authentication) or augment the response on the way back out. Middleware pipelines are ubiquitous in web frameworks (Express, Spring, Django) and are the primary way to organize cross-cutting concerns in modern applications. They're simpler than AOP and more flexible.

How the patterns combine

  1. Define middleware interface: next = (request) → response
  2. Each middleware receives the next middleware; it can call it, skip it, or modify args/response
  3. Chain built by nesting: middleware1(middleware2(middleware3(...)))
  4. Request flows in; each middleware unwraps and adds behavior; response flows back out wrapped
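
The nesting in step 3 can be sketched directly in Python; the logging and auth middleware (and the dict-based request/response shapes) are illustrative:

```python
trace = []   # records the order in which middleware runs

def logging_middleware(next_handler):
    def handle(request):
        trace.append(f"log: {request.get('user')}")
        return next_handler(request)           # pass the request inward
    return handle

def auth_middleware(next_handler):
    def handle(request):
        if "token" not in request:
            return {"status": 401}             # short-circuit: never reaches the app
        return next_handler(request)
    return handle

def app(request):
    """The innermost handler: actual business logic."""
    return {"status": 200, "body": f"hello {request['user']}"}

def build_pipeline(middlewares, handler):
    """Nest decorators so the first listed middleware is the outermost wrapper."""
    for middleware in reversed(middlewares):
        handler = middleware(handler)
    return handler

pipeline = build_pipeline([logging_middleware, auth_middleware], app)
print(pipeline({"user": "ada", "token": "abc"}))  # {'status': 200, 'body': 'hello ada'}
print(pipeline({"user": "eve"}))                  # {'status': 401}, rejected by auth
```

Adding compression or caching is just another entry in the list passed to build_pipeline, with no change to app: that is the separation of concerns the pattern buys.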

Used by

Express.js (StrongLoop / IBM): Node.js web framework with app.use() chaining middleware for routing, parsing, auth, and error handling
Java Servlet Filters (Oracle / Java EE): Standard Java web container mechanism for request/response decoration and filtering

When to reach for this

  • Web request processing where you need centralized logging, auth, and error handling
  • Cross-cutting concerns that shouldn't be scattered throughout business logic
  • Building extensible request/response handling with optional middleware (compression, caching)
  • Separating infrastructure concerns from domain logic cleanly

ai · advanced

RAG Agent (AI Compound Pattern)

Retrieval-Augmented Generation agents that combine knowledge retrieval, reasoning, and tool use

RAG Agents are LLM-powered systems that ground responses in retrieved knowledge rather than relying on parametric memory alone. The agent loops: (1) accept a user query, (2) retrieve relevant documents from a vector store (RAG), (3) decide what tools to invoke (web search, database query, calculator), (4) execute tools and feed results back to the LLM (ReAct pattern), (5) repeat until the LLM decides it has enough context to answer confidently. Memory management is critical: short-term context (conversation history), medium-term (retrieved documents), and long-term (vector embeddings). Guardrails ensure the agent doesn't hallucinate, invoke unauthorized tools, or answer outside its domain. The agent's reasoning is visible: it logs which documents were retrieved, which tools were called, and how it synthesized an answer. RAG Agents power customer support bots, research assistants, and enterprise question-answering systems. They're more reliable than pure LLMs because they cite sources and adapt to new information without retraining.

How the patterns combine

  1. Accept user query; store in short-term memory (conversation history)
  2. Query vector store for top-K similar documents (RAG retrieval)
  3. LLM reads query + retrieved docs + conversation history; decides next action
  4. If decision is 'invoke tool': validate tool call (guardrails), execute, append result to memory
  5. If decision is 'answer': synthesize response with citations from retrieved docs and tool outputs
  6. Loop until confident; return answer with audit trail (retrieved docs, tool calls, reasoning)
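
A deliberately tiny Python sketch of the loop's skeleton: keyword overlap stands in for vector similarity, and a single retrieval step stands in for the full ReAct tool loop. Everything here is a stub (no real LLM, vector store, or tools):

```python
# Toy knowledge base; a real system holds embeddings in a vector DB.
DOCS = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Toy retrieval: keyword overlap stands in for cosine similarity."""
    query_terms = set(query.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: -len(query_terms & set(kv[0].split())))
    return [text for _, text in scored[:k]]

def agent(query: str) -> dict:
    memory = [f"user: {query}"]          # short-term memory: conversation history
    context = retrieve(query)            # RAG step: ground the answer in documents
    memory.append(f"retrieved: {context}")
    # A real agent loops here, letting the LLM choose tools (ReAct) until it is
    # confident; in this stub, one retrieval always counts as "enough context".
    answer = f"{context[0]} (source: knowledge base)"
    return {"answer": answer, "audit_trail": memory}

result = agent("what is the refund policy?")
print(result["answer"])   # grounded answer with a citation
```

The audit_trail is the point: unlike a bare LLM call, every answer carries the documents it was grounded in, which is what makes the system inspectable.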

Used by

LangChain Agents (LangChain / OSS): Python framework for building agents with ReAct looping, tool definitions, and memory management
LlamaIndex (Jerry Liu / OSS): Data framework for RAG: connects LLMs to data via vector indices, query engines, and agents

When to reach for this

  • Customer support bots that need access to knowledge bases, ticketing systems, and FAQs
  • Research assistants: query papers, summarize findings, and cite sources
  • Enterprise Q&A: ground responses in proprietary data (internal docs, databases)
  • Autonomous workflows: agents autonomously solve problems by chaining tools and reasoning

ai · advanced

ML Feature Platform

End-to-end pipeline for feature engineering, training, serving, and monitoring ML models

ML Feature Platforms unify the infrastructure for managing features (derived inputs), training models, and serving predictions. A feature store decouples feature computation from model code: define a feature once, reuse across training and serving. A model registry catalogs trained models with metadata (version, performance, lineage). A training pipeline orchestrates data prep, feature engineering, and model training. A serving pipeline encapsulates inference, feature lookup, and caching for low-latency predictions. Monitoring continuously checks model drift and data quality. These platforms standardize ML workflows and reduce friction: data scientists focus on feature engineering and modeling, engineers operationalize training and serving, and both trust the shared feature store as the source of truth. Without a platform, teams end up with skew: training uses different features than serving, leading to model degradation in production. ML Feature Platforms are complex but essential in organizations running 100+ models. They're powered by orchestration (Airflow, Kubeflow), databases (Snowflake, BigQuery), and specialized tools (Tecton, Databricks Feature Store).

How the patterns combine

  1. Feature Store: compute and cache features; training and serving both query it (consistency)
  2. Training Pipeline: fetch features from store, train model, evaluate, log to Model Registry
  3. Model Registry: catalog models with versions, metrics, lineage, and deployment status
  4. Serving Pipeline: receive prediction request, fetch features from store, invoke model, cache result
  5. Monitoring: track prediction distribution, model drift, data quality, and performance; alert on anomalies
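
A Python sketch of how the shared store keeps training and serving consistent; the feature names, the "training" step, and the linear "model" are all stand-ins for real pipelines:

```python
class FeatureStore:
    """Single source of truth: training and serving read the same values."""
    def __init__(self) -> None:
        self._features: dict[tuple[str, str], float] = {}

    def write(self, entity_id: str, name: str, value: float) -> None:
        self._features[(entity_id, name)] = value

    def read(self, entity_id: str, names: list[str]) -> list[float]:
        return [self._features[(entity_id, n)] for n in names]

class ModelRegistry:
    """Catalogs trained models under name:version keys."""
    def __init__(self) -> None:
        self.models: dict[str, dict] = {}

    def register(self, name: str, version: int, model: dict) -> None:
        self.models[f"{name}:v{version}"] = model

FEATURES = ["clicks_7d", "purchases_30d"]   # defined once, shared by both paths

def train(store: FeatureStore, registry: ModelRegistry) -> None:
    """Training pipeline: fetch features, 'fit', log to the registry."""
    _x = store.read("user_42", FEATURES)
    model = {"weights": [0.5, 2.0]}          # stand-in for a real training run
    registry.register("propensity", 1, model)

def serve(store: FeatureStore, registry: ModelRegistry, entity_id: str) -> float:
    """Serving pipeline: same store, same feature names, so no skew."""
    model = registry.models["propensity:v1"]
    x = store.read(entity_id, FEATURES)
    return sum(w * v for w, v in zip(model["weights"], x))

store = FeatureStore()
store.write("user_42", "clicks_7d", 10.0)
store.write("user_42", "purchases_30d", 2.0)
registry = ModelRegistry()
train(store, registry)
print(serve(store, registry, "user_42"))     # 9.0
```

Because both pipelines read the shared FEATURES list from the same store, the training/serving skew described above cannot arise: there is no second feature definition to drift.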

Used by

Uber Michelangelo (Uber): Internal ML platform with Kafka-based feature store, distributed training, and real-time serving
Airbnb Bighead (Airbnb): ML platform with feature engineering notebooks, training orchestration, and model serving gateway

When to reach for this

  • Organizations with 10+ models in production needing consistency between training and serving
  • High-frequency prediction use cases (recommendations, rankings, personalization) where serving latency matters
  • Teams managing complex feature pipelines: multiple data sources, transformations, aggregations
  • Regulated industries (finance, healthcare) requiring audit trails and monitoring of model behavior

Ready for the individual patterns?