
The Core Science Behind FractalX

A deep dive into how FractalX uses static AST analysis and build-time code generation to automate the science of decomposing monoliths into production-ready microservices.

FractalX Team · March 18, 2026 · 12 min read
Introducing the FractalX Framework: static decomposition · AST analysis · build-time code generation
Apache 2.0 · Java 17+ · Spring Boot 3.2 · v0.2.5 on Maven Central

The Microservices Migration Tax

Every engineering team that has tried to break apart a monolith knows the experience well. You sketch an architecture diagram on a whiteboard. It looks clean. Then you sit down to actually build it, and weeks disappear into scaffolding — Dockerfiles, service discovery configuration, gRPC proto files, circuit breaker wiring, distributed tracing setup, database schema splitting, saga orchestrators for long-running transactions, admin dashboards to observe all of it. None of that code is your product. All of it has to be written, reviewed, deployed, and maintained.

FractalX is a framework built on a simple thesis: this entire class of work should be automated. You describe your bounded contexts with annotations. A Maven plugin reads your source code at build time, analyzes it with a Java AST parser, and generates a complete distributed system — independently deployable Spring Boot services, a gateway, a service registry, a saga orchestrator, an admin UI, Docker Compose files, and startup scripts — as a fully-functional output directory.

"No runtime agent. No bytecode manipulation. No manual wiring. Add three annotations, run one command, get a fully operational microservice platform."

Your monolith keeps compiling and running as-is throughout the process. FractalX reads it statically. You can keep developing the monolith and re-decompose at any time.


How It Works — The 30-Second Version

FractalX operates in two phases at build time. First comes an analysis phase: a Java AST parser reads your annotated source, resolves each @DecomposableModule, and builds the inter-module dependency graph. Then comes a generation phase: that graph drives the generators that emit each service, the gateway, the registry, and the supporting infrastructure.

The result is a directory called fractalx-output/ containing everything needed to start your distributed system immediately — on bare metal with a shell script, or containerized with Docker Compose.


The Three Annotations

The entire developer-facing API of FractalX fits into four annotations. Three of them cover nearly everything you’ll ever need.

@DecomposableModule — mark a bounded context

Place this on one class per module. It acts as the root anchor for that service’s package tree. FractalX reads the fields declared in this class to build the inter-service dependency graph.

@DecomposableModule(
    serviceName           = "order-service",
    port                  = 8081,
    independentDeployment = true,
    ownedSchemas          = {"orders", "order_items"}
)
@Service
public class OrderModule {

    // FractalX reads these field types to detect cross-service dependencies.
    // It will auto-generate a gRPC client for each.
    private final PaymentService paymentService;
    private final InventoryService inventoryService;

    // Local persistence stays within this module's owned schemas.
    private final OrderRepository orderRepository;

    @DistributedSaga(
        sagaId             = "place-order-saga",
        compensationMethod = "cancelOrder",
        timeout            = 30000
    )
    public Order placeOrder(Long userId, List<OrderItem> items) {
        // FractalX detects the cross-service calls here and
        // generates a full saga orchestrator from them.
        paymentService.charge(userId, calculateTotal(items));
        inventoryService.reserve(items);
        return orderRepository.save(new Order(userId, items));
    }
}

@DistributedSaga — generate saga orchestration

Place this on a method inside a @DecomposableModule class. FractalX detects the sequence of cross-service calls in the method body, maps compensation methods by naming convention, and generates a complete fractalx-saga-orchestrator service — a state machine with forward steps, automatic compensation on failure, and retry-backed completion callbacks to the owning service.
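When compensationMethod is omitted, the mapping appears to follow a naming rule. A minimal sketch, assuming the convention is simply a "cancel" prefix plus the capitalized forward-method name (consistent with the cancelProcessPayment client method in the NetScope example, though the exact rule is an assumption here):

```java
// Sketch of a compensation-name convention (ASSUMPTION: "cancel" + CapitalizedName).
// With no explicit compensationMethod, a rule like this would map
// processPayment -> cancelProcessPayment.
public class CompensationNames {
    public static String compensationFor(String forwardMethod) {
        // Capitalize the first letter and prepend the "cancel" prefix.
        return "cancel"
                + Character.toUpperCase(forwardMethod.charAt(0))
                + forwardMethod.substring(1);
    }
}
```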

@ServiceBoundary — explicit gRPC exposure

FractalX automatically detects which methods are called by other modules and marks them for gRPC exposure. @ServiceBoundary lets you be explicit when auto-detection isn’t sufficient, or when you want to enforce allowed-caller constraints verified at build time.

💡 Key insight: Your monolith continues to compile and run as a normal Spring Boot application throughout the entire process. FractalX only reads it statically and produces microservices as a separate output. The annotations are stripped from the generated service code by the AnnotationRemover step, so they never appear in production deployments.

What Gets Generated

Running mvn fractalx:decompose on a three-module e-commerce monolith produces the following output:

order-service/ (:8081 HTTP · :18081 gRPC)
Full Spring Boot project — pom.xml, Dockerfile, application.yml, source files, NetScope gRPC client for payment and inventory, Resilience4j config per dependency, OtelConfig.java, health endpoints.

fractalx-gateway/ (:9999 HTTP)
Spring Cloud Gateway with dynamic route resolution from the registry, CORS, rate limiting, circuit breakers at the gateway level, distributed tracing on every forwarded request.

fractalx-saga-orchestrator/ (:8099 HTTP)
Generated only when @DistributedSaga is found. Full state machine, compensation on failure, transactional outbox pattern for at-least-once delivery — no message broker required.

admin-service/ (:9090 HTTP)
14-section operations dashboard — service health, distributed traces (via Jaeger), aggregated logs, saga instance monitoring, service topology map, live config editor with hot reload.

fractalx-registry/ (:8761 HTTP)
Lightweight custom service registry. Every generated service self-registers on startup. The gateway and admin service query it for live routing and health dashboards. No Eureka, no Consul.

logger-service/ (:9099 HTTP)
Centralized structured log ingestion. Every service ships logs via FractalLogAppender, keyed by correlation ID and span ID. Queryable from the Admin UI.

Beyond the services themselves, the output also includes a docker-compose.yml with multi-stage Dockerfiles, a start-all.sh that starts services in dependency order, a stop-all.sh, per-service Flyway migration scaffolds for database isolation, and an API_CATALOG.md documenting every generated endpoint.
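Starting services "in dependency order" is essentially a topological sort of the service graph. A minimal sketch of that ordering logic, with illustrative service names (this is not the actual contents of start-all.sh, and the sketch assumes an acyclic graph):

```java
import java.util.*;

// Toy topological sort: start each service's dependencies before the
// service itself, as a dependency-ordered start-all.sh must guarantee.
// No cycle detection in this sketch.
public class StartOrder {
    public static List<String> order(Map<String, List<String>> deps) {
        List<String> out = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String s : deps.keySet()) visit(s, deps, visited, out);
        return out;
    }

    private static void visit(String s, Map<String, List<String>> deps,
                              Set<String> visited, List<String> out) {
        if (!visited.add(s)) return;              // already scheduled
        for (String d : deps.getOrDefault(s, List.of()))
            visit(d, deps, visited, out);         // dependencies first
        out.add(s);
    }
}
```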

Generated System Architecture
fractalx-registry :8761  ← all services self-register on startup

fractalx-gateway  :9999  ← single entry point for all clients
  ├─ /api/orders/**    → order-service    :8081
  ├─ /api/payments/**  → payment-service  :8082
  ├─ /api/inventory/** → inventory-service :8083
  ├─ TracingFilter     (X-Correlation-Id, W3C traceparent)
  └─ RateLimiter + CircuitBreaker per route

order-service :8081 ——[NetScope gRPC :18081]——→ payment-service :8082
  ├─ OtelConfig.java          (OTLP → Jaeger :4317)
  ├─ Resilience4j             (CircuitBreaker + Retry + TimeLimiter)
  └─ ServiceHealthConfig.java (TCP HealthIndicator per gRPC dep)

logger-service  :9099  ← structured log ingest (correlationId, spanId)
admin-service   :9090  ← ops dashboard (traces, logs, sagas, topology)
jaeger          :16686 ← distributed trace store

NetScope: gRPC Without the Proto Files

One of the most interesting parts of FractalX is its inter-service communication layer, called NetScope. When FractalX detects that order-service depends on PaymentService, it does something clever: it reads the PaymentService class from the monolith source, extracts all of its public methods, and generates a @NetScopeClient interface in the calling service with matching signatures.

// Generated in order-service - you never write this
@NetScopeClient("payment-service")
public interface PaymentServiceClient {
    PaymentResult processPayment(Long customerId, BigDecimal amount, Long orderId);
    void cancelProcessPayment(Long customerId, BigDecimal amount, Long orderId);
}
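FractalX performs this extraction with static AST parsing, but plain reflection is a convenient way to picture the step: collect the public methods of a service class that a generated client interface would mirror. The classes and method names below are illustrative stand-ins, not FractalX internals:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ClientSketch {
    // Stand-in for a monolith service class (hypothetical, for illustration).
    public static class PaymentService {
        public String charge(long customerId, double amount) { return "ok"; }
        public void refund(long paymentId) {}
        void internalAudit() {}   // package-private: would not be exposed
    }

    // Collect the public declared methods that a generated
    // @NetScopeClient interface would mirror.
    public static List<String> exposedMethods(Class<?> service) {
        return Arrays.stream(service.getDeclaredMethods())
                .filter(m -> Modifier.isPublic(m.getModifiers()))
                .map(Method::getName)
                .sorted()
                .collect(Collectors.toList());
    }
}
```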

On the server side, FractalX adds @NetworkPublic to the methods in payment-service that are called by other modules. The gRPC port follows a fixed convention: HTTP port + 10,000. So a service on :8082 exposes gRPC on :18082. Hosts are resolved dynamically from the registry at startup — no hardcoded hostnames anywhere.
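Because the offset is fixed, either side can derive a peer's gRPC address from the HTTP port it finds in the registry. A trivial sketch of the convention:

```java
public class Ports {
    // Fixed convention from the post: gRPC port = HTTP port + 10,000.
    static final int GRPC_OFFSET = 10_000;

    public static int grpcPort(int httpPort) {
        return httpPort + GRPC_OFFSET;
    }
}
```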

NetScope also transparently propagates correlation IDs across every gRPC call via NetScopeContextInterceptor, which acts as both a client interceptor (injecting the ID into outgoing gRPC metadata) and a server interceptor (extracting it and placing it in the SLF4J MDC). This means every log line across every service in a single request shares the same correlation ID, visible in the Admin UI.
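A toy model of that two-sided propagation, using a ThreadLocal in place of the SLF4J MDC and a plain map in place of gRPC metadata (the real NetScopeContextInterceptor API is not shown here):

```java
import java.util.Map;

// Toy model of the interceptor's two halves: the client injects the
// correlation ID into outgoing metadata; the server extracts it into a
// thread-local "MDC" so every log line on that thread can carry it.
public class CorrelationSketch {
    public static final String KEY = "x-correlation-id";
    public static final ThreadLocal<String> MDC = new ThreadLocal<>();

    // Client side: copy the current ID into the call's metadata.
    public static Map<String, String> inject(Map<String, String> metadata) {
        metadata.put(KEY, MDC.get());
        return metadata;
    }

    // Server side: pull the ID out of metadata before handling the call.
    public static void extract(Map<String, String> metadata) {
        MDC.set(metadata.get(KEY));
    }
}
```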


Resilience, Automatically

Each generated service gets a Resilience4j configuration per upstream dependency — circuit breaker, retry policy, and time limiter — baked directly into its application.yml. You don’t configure Resilience4j; FractalX generates the configuration based on the dependency graph it builds during analysis. At the gateway level, circuit breakers are independently generated per route.

The saga orchestrator adds another layer of resilience. When a saga step fails, compensation methods are invoked in reverse order. When an owner service is temporarily unavailable to receive a completion callback, the orchestrator retries every two seconds for up to ten attempts before marking the saga instance as a dead-letter — surfaced in the Admin UI for manual intervention.
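The forward/compensate flow can be sketched in a few lines: run the steps in order, remember what completed, and on the first failure unwind in reverse. The step names and data model here are illustrative, not the orchestrator's actual types:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Supplier;

public class SagaSketch {
    // Toy saga step: a forward action plus its compensation.
    public record Step(String name, Supplier<Boolean> forward, Runnable compensate) {}

    // Returns true if all steps succeeded; otherwise compensates the
    // already-completed steps in reverse order and returns false.
    public static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            if (step.forward().get()) {
                completed.push(step);               // remember for possible rollback
            } else {
                while (!completed.isEmpty())
                    completed.pop().compensate().run();  // unwind in reverse
                return false;
            }
        }
        return true;
    }
}
```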


Before and After

Capability                  | Without FractalX                   | With FractalX
----------------------------|------------------------------------|------------------------------------------
Service scaffolding         | Days per service                   | Seconds — fully generated
Inter-service communication | Write gRPC proto files manually    | Auto-generated @NetScopeClient
Distributed tracing         | Instrument every service           | Auto-injected OtelConfig.java per service
Service discovery           | Configure Eureka or Consul         | Lightweight fractalx-registry auto-generated
Circuit breaking & retry    | Configure Resilience4j per service | Auto-generated per dependency
Database isolation          | Manually split schemas             | DataIsolationGenerator + Flyway scaffolds
Distributed sagas           | Write orchestrator from scratch    | Full saga service from @DistributedSaga
Docker deployment           | Write Dockerfiles + Compose        | Generated Dockerfile + docker-compose.yml
Admin dashboard             | Build your own                     | 14-section ops dashboard auto-generated

Getting Started in Five Minutes

FractalX is available on Maven Central. Add the annotations dependency to your monolith, annotate your module classes, and run the plugin.

<!-- Step 1: Add to your monolith's pom.xml -->
<dependency>
    <groupId>org.fractalx</groupId>
    <artifactId>fractalx-annotations</artifactId>
    <version>0.2.5</version>
</dependency>

<!-- Step 2: Add the Maven plugin -->
<plugin>
    <groupId>org.fractalx</groupId>
    <artifactId>fractalx-maven-plugin</artifactId>
    <version>0.2.5</version>
</plugin>

# Step 3: Annotate, then decompose
mvn fractalx:decompose

# Step 4: Start everything
cd fractalx-output
./start-all.sh

# Or with Docker:
docker-compose up --build

After the decomposition runs, you’ll see a Vercel-style summary printed to the console listing every service URL. The Admin UI at http://localhost:9090 gives you a live view of service health, distributed traces, aggregated logs, and saga instance status from the moment services start.


Static Verification

FractalX ships a verifier goal, mvn fractalx:verify, that performs static analysis on the generated output without requiring any services to be running. It checks that every module has a unique service name and port, that all referenced cross-module dependencies were successfully resolved and have generated NetScope clients, that saga compensation methods follow naming conventions, and that @ServiceBoundary caller constraints are not violated. This makes it practical to add FractalX decomposition as a CI step, catching architectural violations before they reach deployment.
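The uniqueness checks are easy to picture. A small sketch of name/port validation under assumed types (the real verifier's rules and error reporting are richer):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class VerifySketch {
    // Hypothetical stand-in for a parsed @DecomposableModule declaration.
    public record Module(String serviceName, int port) {}

    // True only if every module has a unique service name and a unique port.
    public static boolean uniqueNamesAndPorts(List<Module> modules) {
        Set<String> names = new HashSet<>();
        Set<Integer> ports = new HashSet<>();
        for (Module m : modules) {
            if (!names.add(m.serviceName()) || !ports.add(m.port()))
                return false;   // duplicate name or port: verification fails
        }
        return true;
    }
}
```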


The Design Philosophy

FractalX makes a deliberately conservative bet: keep your monolith. There’s a strong argument in the industry — articulated well by teams at Stack Overflow, Shopify, and others — that a well-structured monolith is significantly easier to develop, test, and debug than a distributed system. FractalX doesn’t ask you to abandon that. It asks you to structure your monolith into clear bounded contexts and then, when you’re ready for independent scaling or team autonomy, produce the distributed system from that structure automatically.

The generated output is not a black box. Every file in fractalx-output/ is readable, editable, plain Spring Boot Java code. If FractalX doesn’t generate exactly what you need, you can modify the output. The framework’s value is in eliminating the first 95% of the work — the scaffolding, wiring, and cross-cutting concerns — so your team can focus on the last 5% that is specific to your system.


Where to Go Next

FractalX is open source under the Apache 2.0 license. The project lives on GitHub at github.com/Project-FractalX/FractalX, and the annotations artifact is published to Maven Central under org.fractalx:fractalx-annotations. The repository includes a complete developer guide, a full annotations reference, and a working e-commerce monolith example you can clone and decompose immediately.

If you’re maintaining a Spring Boot monolith and have ever stared down a microservices migration with dread, FractalX is worth an afternoon. The quick start takes about five minutes, and seeing a fully-wired distributed system emerge from your existing codebase with a single Maven command is — at minimum — a remarkable thing to witness.

· · ·
GitHub: github.com/Project-FractalX/FractalX
Maven Central: org.fractalx:fractalx-annotations:0.2.5
License: Apache 2.0  ·  Java 17+  ·  Spring Boot 3.2.0