v0.0.4b
Architecture Overview
Jan 1, 2026
How Three Sigma turns descriptions into running applications.
This page explains the technical architecture that enables Three Sigma's speed and reliability. Understanding this helps you make the most of the platform and appreciate why it works differently from traditional tools.
The Core Idea: Schema as Source of Truth
Everything in Three Sigma flows from a single JSON schema.
When you describe a process and the AI generates it, the output is a structured schema that completely defines:
Every step in the workflow
Every field that captures data
Every action users can take
Every transition between steps
Every validation rule and conditional requirement
Every automation that triggers on actions
This schema is the single source of truth. From it, the runtime automatically generates:
Database tables to store your data
API endpoints for every operation
User interface for each step
Validation logic that enforces your rules
Automation execution that runs when triggered
Change the schema, and everything updates together. There's no separate database design, no API layer to maintain, no UI code to write.
Why This Matters
Traditional Architecture
In traditional development:
Someone writes a requirements document
Database designers create table schemas
Backend developers build APIs
Frontend developers build user interfaces
QA tests that everything works together
DevOps deploys to production
Each layer is separate. Changes require updates to multiple systems by multiple people. Integration bugs lurk at every boundary.
Three Sigma Architecture
In Three Sigma:
A schema defines everything
The runtime interprets the schema and provides database, API, and UI
Changes to the schema automatically propagate everywhere
One artifact. One deployment. No integration seams.
The Four-Layer Stack
Layer 1: Schema Definition
The schema is a JSON document that describes your process using Three Sigma's vocabulary:
Process — name, description, metadata
Steps — ordered stages of the workflow
Fields — data captured at each step (with types, validation, requirements)
Actions — buttons that move work forward (with transitions and automations)
Expressions — conditional logic for routing and requirements
The schema is declarative—it says what you want, not how to implement it.
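A minimal sketch of what such a schema could look like, written as a Python dict for readability. The property names and layout here are illustrative assumptions, not Three Sigma's published schema format:

```python
# Illustrative process schema: steps, fields, actions with transitions
# and automations, and a conditional requirement expressed as an
# expression. All property names are assumptions for illustration.
expense_approval = {
    "process": {"name": "Expense Approval"},
    "steps": [
        {"id": "submit", "fields": ["amount", "receipt"],
         "actions": [{"id": "send", "transition": "review"}]},
        {"id": "review", "fields": ["decision_note"],
         "actions": [
             {"id": "approve", "transition": "done",
              "automations": ["notify_submitter"]},
             {"id": "reject", "transition": "submit"},
         ]},
        {"id": "done", "fields": [], "actions": []},
    ],
    "fields": {
        "amount": {"type": "number", "required": True},
        "receipt": {"type": "file", "required": False},
        "decision_note": {
            "type": "text",
            "required_if": "amount > 1000",  # conditional requirement
        },
    },
}
```

Note that nothing here says how to build a table, an endpoint, or a form; the runtime derives all of that from the declaration.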
Layer 2: Validation Engine
Before any schema becomes a running application, the Validation Engine checks it for correctness:
Structural validation:
Every step has required properties
Every field has a valid type
Every action has valid transitions
Reference validation:
Field references point to real fields
Step transitions point to real steps
Automation targets exist
Logical validation:
No circular dependencies in expressions
Required fields are defined before they're used
Conditional logic uses valid operators
Safety validation:
Expressions don't reference themselves
Automations don't create infinite loops
Permissions are properly scoped
Plus many more structural, technical, conceptual, and business validations.
If validation fails, the schema doesn't deploy. This guarantees that every running process is internally consistent.
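To make the fail-fast idea concrete, here is a minimal sketch of one reference check: verifying that every action's transition points at a real step and every step's field reference points at a defined field. The schema layout and function name are illustrative assumptions, not the engine's actual implementation:

```python
# Sketch of reference validation: collect declared step and field ids,
# then flag any reference that points at something undeclared.
def validate_references(schema):
    errors = []
    step_ids = {s["id"] for s in schema["steps"]}
    field_ids = set(schema["fields"])
    for step in schema["steps"]:
        for fid in step.get("fields", []):
            if fid not in field_ids:
                errors.append(f"step '{step['id']}': unknown field '{fid}'")
        for action in step.get("actions", []):
            target = action.get("transition")
            if target is not None and target not in step_ids:
                errors.append(
                    f"action '{action['id']}': transition to unknown step '{target}'")
    return errors

schema = {
    "steps": [
        {"id": "submit", "fields": ["amount"],
         "actions": [{"id": "send", "transition": "archived"}]},  # no such step
        {"id": "review", "fields": [], "actions": []},
    ],
    "fields": {"amount": {"type": "number"}},
}
errors = validate_references(schema)
print(errors)
```

A non-empty error list means the schema is rejected before it ever reaches the runtime.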
Layer 3: Runtime Engine
The runtime interprets validated schemas and provides the actual application:
Data Layer:
Automatically creates database tables for each process
Manages tenant-level, and even team-level, isolation (your data stays yours)
Handles field storage for all field types
Maintains audit trails for every change
API Layer:
Generates REST endpoints for CRUD operations
Enforces permissions at every endpoint
Validates incoming data against schema rules
Processes automations when actions are triggered
UI Layer:
Renders appropriate input controls for each field type
Shows/hides fields based on conditional logic
Displays available actions based on current state
Handles form validation and submission
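The three layers above cooperate whenever a user triggers an action. A sketch of that flow, with all names being illustrative assumptions: the runtime looks the action up in the validated schema, validates the submitted data against the field rules, applies the transition, and returns the automations to execute.

```python
# Sketch of schema-driven action handling at the runtime layer.
def handle_action(schema, record, action_id, data):
    step = next(s for s in schema["steps"] if s["id"] == record["current_step"])
    action = next(a for a in step["actions"] if a["id"] == action_id)
    for fid in step["fields"]:                       # schema-driven validation
        if schema["fields"][fid].get("required") and data.get(fid) is None:
            raise ValueError(f"field '{fid}' is required")
    record.update(data)
    record["current_step"] = action["transition"]    # move work forward
    return action.get("automations", [])             # queued for execution

schema = {
    "steps": [
        {"id": "submit", "fields": ["amount"],
         "actions": [{"id": "send", "transition": "review",
                      "automations": ["notify_reviewer"]}]},
        {"id": "review", "fields": [], "actions": []},
    ],
    "fields": {"amount": {"type": "number", "required": True}},
}
record = {"current_step": "submit"}
queued = handle_action(schema, record, "send", {"amount": 120})
print(record["current_step"], queued)
```

Because the same schema drives validation, transitions, and automations, there is no layer boundary where the three can disagree.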
Layer 4: AI Generation
The AI Process Generator sits above the schema layer:
Input: Natural language description of what you need
Processing: AI interprets requirements and generates schema structure
Output: Complete, valid JSON schema ready for deployment
The AI understands Three Sigma's vocabulary—field types, step patterns, action configurations, expression syntax—and produces schemas that pass validation.
This is different from AI that generates suggestions or documentation. Three Sigma's AI generates deployable artifacts.
Field Type Architecture
Fields are first-class citizens in Three Sigma: they are the atomic building blocks of data capture. Each field type knows:
How to store itself — Database column types and serialization
How to validate itself — Required checks, format validation, value constraints
How to render itself — Appropriate UI controls for input and display
How to serialize itself — API representation for external systems
Example: Signature Field
A signature field isn't just a text box. It:
Stores a cryptographic hash for tamper detection
Captures who signed, when, and what action triggered it
Is immutable once signed
Renders as a signature capture UI, not a text input
Validates that the signer has appropriate permissions
This complexity is encapsulated in the field type. Schema authors just say "this is a signature field" and get all the behavior automatically.
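The bullet points above can be sketched as a field type that owns its storage, validation, and rendering behavior. This is an illustrative assumption about how such encapsulation might look, not the platform's actual implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a field type encapsulating the signature behaviors
# described above: tamper-detection hash, captured signer/action
# metadata, immutability, and a dedicated UI control hint.
class SignatureField:
    column_type = "jsonb"  # how it stores itself

    def sign(self, existing, signer, action):
        if existing is not None:
            raise ValueError("signature is immutable once signed")
        payload = {"signer": signer, "action": action,
                   "signed_at": datetime.now(timezone.utc).isoformat()}
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        return payload

    def verify(self, value):
        expected = {k: v for k, v in value.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(expected, sort_keys=True).encode()).hexdigest()
        return digest == value["hash"]  # tamper detection

    def render(self):
        return "signature-capture"  # UI control hint, not a text input
```

The schema author never sees any of this; declaring the field's type is enough.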
Expression Evaluation
Expressions are evaluated at runtime using current field values:
Expressions can reference any field in the process. The runtime resolves field values dynamically, enabling conditional behavior that adapts as data changes.
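A toy evaluator illustrates the idea. Three Sigma's actual expression syntax is richer; the three-token "field operator literal" form here is an illustrative assumption:

```python
import operator

# Sketch of runtime expression evaluation against current field values.
OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq, "!=": operator.ne}

def evaluate(expression, field_values):
    field, op, literal = expression.split()
    return OPS[op](field_values[field], float(literal))

print(evaluate("amount > 1000", {"amount": 2500.0}))
print(evaluate("amount > 1000", {"amount": 300.0}))
```

Because evaluation happens against live field values, the same expression can make a field required on one record and optional on another.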
Multi-Tenant Isolation
Every Three Sigma deployment serves multiple tenants (organizations, teams). The architecture ensures complete isolation:
Data isolation:
Each tenant has separate database tables
Tenant ID is enforced on every query
Cross-tenant data access is logically and architecturally impossible
Schema isolation:
Each tenant can have different processes
Schema changes for one tenant don't affect others
Deployments are tenant-scoped
Runtime isolation:
API requests are authenticated and scoped to tenant
Users can only see their organization's processes and data
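The phrase "architecturally impossible" can be sketched as a data layer whose only query path requires a tenant id, so an unscoped query cannot even be expressed. Names here are illustrative assumptions:

```python
# Sketch of tenant scoping enforced by construction: tenant_id is a
# mandatory argument and there is no unscoped query method.
class TenantScopedStore:
    def __init__(self, rows):
        self._rows = rows  # each row carries a tenant_id

    def query(self, tenant_id, **filters):
        return [r for r in self._rows
                if r["tenant_id"] == tenant_id
                and all(r.get(k) == v for k, v in filters.items())]

rows = [
    {"tenant_id": "acme", "process": "expenses", "amount": 120},
    {"tenant_id": "globex", "process": "expenses", "amount": 950},
]
store = TenantScopedStore(rows)
print(store.query("acme"))  # only acme's rows, never globex's
```

Contrast this with isolation by policy, where a forgotten WHERE clause can leak data; here the leak has no code path.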
Deployment Model
Traditional deployment:
Write code
Build artifacts
Run tests
Stage deployment
Production deployment
Verify and monitor
Three Sigma deployment:
Schema is validated
Runtime picks up new schema
Database tables created if needed
Process is live
There's no build step, no staging environment, no deployment pipeline. Valid schemas become running applications immediately.
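The four-step flow above can be sketched in a few lines. The stub validator and runtime here are illustrative assumptions standing in for Layers 2 and 3:

```python
# Sketch of the deployment flow: validate, create tables if needed,
# activate. A rejected schema never reaches the runtime.
def validate(schema):
    return [] if schema.get("steps") else ["process has no steps"]

class Runtime:
    def __init__(self):
        self.tables, self.live = set(), set()

    def ensure_tables(self, schema):
        self.tables.add(schema["name"])   # create if needed

    def activate(self, schema):
        self.live.add(schema["name"])     # process is live

def deploy(schema, runtime):
    errors = validate(schema)             # Layer 2: fail fast
    if errors:
        raise ValueError(f"schema rejected: {errors}")
    runtime.ensure_tables(schema)
    runtime.activate(schema)

rt = Runtime()
deploy({"name": "expenses", "steps": ["submit", "review"]}, rt)
print("expenses" in rt.live)
```

Everything between "schema is valid" and "process is live" is the runtime's responsibility, which is why no pipeline exists for users to maintain.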
How This Creates Speed
No translation layers: Schema → Runtime is direct. No intermediate artifacts to build or maintain.
No integration: Database, API, and UI come from the same source. No integration bugs.
No deployment ceremony: Valid schema = running application. No DevOps tickets or release cycles.
No maintenance divergence: Schema is always the truth. No drift between documentation and reality.
How This Creates Reliability
Validation catches errors early: Invalid schemas don't deploy. Problems surface immediately, not in production.
Schema is auditable: Every process definition is a versioned artifact. You know exactly what's running.
Field types are battle-tested: Complex behaviors (signatures, calculations, automations) are implemented once and reused everywhere.
Runtime is shared: The same runtime engine runs all processes. Improvements benefit everyone.
Technical Decisions
JSON Schema as the canonical format: Human-readable, machine-parseable, version-controllable, AI-generatable.
Validation before deployment: Fail fast. Never deploy something that can't run correctly.
Modular field types: Add new capabilities once, available everywhere. No per-process coding.
Expression-based logic: Complex conditions without code. Evaluatable by runtime, generatable by AI.
Tenant-first architecture: Isolation by design, not by policy. Security through architecture.
Next Steps
→ How It Works — The user-facing deployment flow
→ Processes — Understanding the building blocks
→ Fields Reference — All available field types