โ† Back to Landing
๐Ÿ“–

Under the Hood

WhiteSpaceIQ Technical Documentation

🔒 GitHub repository private to protect client IP

WhiteSpaceIQ

Production · Private Repository · 4-Week Sprint

A production-grade multi-agent system that generates 40-page strategic marketing audits in 45 minutes instead of 30 days. Built for Sarra Richmond ("The Ghost"), the system preserves brand voice through a novel "Foundation Lock" pattern while orchestrating three specialized AI agents.

⚡ Key Innovation
Unlike traditional prompt chaining, WhiteSpaceIQ implements true multi-agent reasoning where agents can disagree, request revisions, and validate each other's work against a locked brand foundation.
🚀
40x Faster
Reduces 30 days of strategic work to 45 minutes without sacrificing quality
🎯
94% Voice Consistency
Maintains brand voice through validation loops and the foundation lock
🔄
3.6 Reasoning Loops
Average agent iterations per audit, ensuring quality through disagreement

The Challenge

Sarra Richmond creates comprehensive marketing audits that typically require a month of research, analysis, and strategic development. The challenge: automate this without losing her unique "Ghost Marketing" voice or compromising on strategic depth.

The Solution

A three-agent orchestration system where Research, Strategy, and Validation agents work in conversation (not sequence) around a locked brand foundation. The system can reject and revise its own work to maintain quality and voice consistency.

System Architecture

High-Level Architecture

Next.js Frontend → FastAPI Backend → LangGraph Orchestration → PostgreSQL + pgvector

Core Components

System Architecture
WhiteSpaceIQ/
├── backend/
│   ├── src/
│   │   ├── agents/          # Three specialized agents
│   │   │   ├── research_coordinator.py
│   │   │   ├── strategic_analyzer.py
│   │   │   └── insight_validator.py
│   │   ├── builders/        # Phase builders (1-5)
│   │   ├── services/        # Orchestration services
│   │   ├── templates/       # Jinja2 templates
│   │   └── main.py          # FastAPI application
│   └── data/
│       ├── foundations/     # Locked brand foundations
│       └── outputs/         # Generated audits
└── frontend/
    ├── components/          # React components
    ├── pages/               # Next.js pages
    └── services/            # API integration

Data Flow

Client Input
User provides business context, target audience, and brand preferences through the web interface
Foundation Lock
System locks brand voice parameters (tone, stance, ethic) before any generation begins
Agent Orchestration
Three agents work in parallel and conversation, sharing context and validating outputs
Validation Loops
Validator agent can reject work and request revisions (average 3.6 loops)
Audit Generation
40-page PDF generated with consistent voice and strategic depth
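
Taken together, the steps above can be read as one pipeline. The sketch below is a simplified illustration of that flow, not the production code; names such as generate_audit and AuditRenderer, and the exact constructor signatures, are assumptions for the example.

Python
# Simplified end-to-end pipeline mirroring the data flow above.
# Class and function names here are illustrative, not the repository's actual API.
async def generate_audit(client_input: dict) -> str:
    # Client Input: business context and brand preferences from the web interface
    brand_profile = BrandProfile(**client_input["brand_voice"])

    # Foundation Lock: freeze voice parameters before any generation begins
    foundation = FoundationLock(brand_profile)

    # Agent Orchestration + Validation Loops: the three agents converse,
    # and the validator may reject work and request revisions
    context = SharedContext(
        business=client_input["business_context"],
        foundation_lock=foundation,
    )
    sections = await AgentOrchestrator().orchestrate(context)

    # Audit Generation: assemble and render the 40-page PDF
    return AuditRenderer().render_pdf(sections)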

Workflow Phases

The system operates through five distinct phases, each building on the previous while maintaining the locked foundation throughout the process.

Phase 1: Strategic Foundation

Python
# Foundation elements that get locked
foundation = {
    "brand_essence": {
        "mission": "User-provided mission",
        "vision": "User-provided vision",
        "values": ["integrity", "innovation", "impact"]
    },
    "voice_parameters": {
        "tone": "rebellious",  # Locked parameter
        "stance": "direct",    # Locked parameter
        "ethic": "human_first" # Locked parameter
    }
}

Phase 2: Research & Discovery

Research Coordinator agent analyzes market landscape, identifies opportunities, and writes findings to shared context for other agents to access.

Phase 3: Strategic Analysis

Strategic Analyzer reads research context and crafts narrative sections, applying the locked foundation voice throughout.

Phase 4: Validation & Refinement

Insight Validator checks all content against foundation lock, measuring voice consistency and factual accuracy. Can reject and request rewrites.

Phase 5: Document Generation

Final assembly and rendering of the 40-page audit with all validated content.
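
The repository's actual rendering pipeline is not shown here; the sketch below assumes Jinja2 (which matches the templates/ directory in the tree above) plus WeasyPrint as the PDF engine, and the template filename is hypothetical.

Python
# Minimal Phase 5 rendering sketch: Jinja2 template -> HTML -> PDF.
# WeasyPrint and the template name are assumptions for this example.
from jinja2 import Environment, FileSystemLoader
from weasyprint import HTML

def render_audit(sections: dict, output_path: str = "data/outputs/audit.pdf") -> str:
    env = Environment(loader=FileSystemLoader("src/templates"))
    template = env.get_template("audit.html.j2")  # hypothetical template file

    # Render the validated sections into HTML, then convert to a PDF
    html = template.render(sections=sections)
    HTML(string=html).write_pdf(output_path)
    return output_path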

The Three Agents

Unlike traditional prompt chaining, these agents operate with genuine autonomy and can disagree with each other's outputs.

🔍
Research Coordinator
Gathers market intelligence, competitor analysis, and industry insights. Writes findings to shared context for other agents.
✍️
Strategic Analyzer
Reads research context and builds strategic narratives. Applies the rebel/ghost voice framework while maintaining factual accuracy.
🛡️
Insight Validator
Guards brand consistency and factual accuracy. Can reject sections and demand rewrites until the voice matches the foundation lock.

Agent Communication

Python
# Agents share context and can disagree
class AgentOrchestrator:
    async def orchestrate(self, context: SharedContext):
        research = await self.research_agent.gather(context)
        
        # Strategy agent reads research
        strategy = await self.strategy_agent.compose(
            research=research,
            foundation=context.foundation_lock
        )
        
        # Validation can reject and loop
        while not self.validator.approve(strategy):
            feedback = self.validator.get_feedback()
            strategy = await self.strategy_agent.revise(
                feedback=feedback,
                foundation=context.foundation_lock
            )
            
        return strategy

Agent Orchestration

The orchestration layer manages agent interactions, shared context, and validation loops using LangGraph for state management.

LangGraph Implementation

Python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Minimal state schema for this example; the full state also carries
# research findings, draft sections, and validation feedback
class AgentState(TypedDict):
    ready: bool
    approved: bool

# Define the agent graph
workflow = StateGraph(AgentState)

# Add nodes for each agent
workflow.add_node("research", research_agent)
workflow.add_node("strategy", strategy_agent)
workflow.add_node("validation", validation_agent)

# Define edges with conditional logic
workflow.set_entry_point("research")
workflow.add_edge("research", "strategy")
workflow.add_conditional_edges(
    "strategy",
    lambda state: "validation" if state["ready"] else "strategy"
)
workflow.add_conditional_edges(
    "validation",
    lambda state: END if state["approved"] else "strategy"
)

# Compile into a runnable graph
app = workflow.compile()

Shared Context Management

All agents access a shared context that maintains state across the orchestration:

Context Element      | Purpose                                  | Access
Foundation Lock      | Immutable brand voice parameters         | Read-only for all agents
Research Findings    | Market intelligence and insights         | Write: Research, Read: All
Strategic Sections   | Draft content for audit                  | Write: Strategy, Read: Validation
Validation Feedback  | Rejection reasons and revision requests  | Write: Validation, Read: Strategy
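
In code, the shared context can be as small as a dataclass whose fields mirror the table above. The field names below are assumptions for illustration, not the repository's actual model.

Python
# Illustrative shape of the shared context; field names follow the table above.
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    foundation_lock: "FoundationLock"                          # read-only for all agents
    research_findings: dict = field(default_factory=dict)      # write: Research, read: all
    strategic_sections: dict = field(default_factory=dict)     # write: Strategy, read: Validation
    validation_feedback: list = field(default_factory=list)    # write: Validation, read: Strategy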

Foundation Lock Pattern

🔒 Key Innovation
The Foundation Lock ensures brand voice consistency by establishing immutable parameters BEFORE any AI generation begins. This is what prevents AI drift and maintains the human voice.

Implementation

Python
class FoundationLock:
    """Immutable brand voice parameters"""
    
    def __init__(self, brand_profile: BrandProfile):
        self.tone = brand_profile.tone  # e.g., "rebellious"
        self.stance = brand_profile.stance  # e.g., "direct"
        self.ethic = brand_profile.ethic  # e.g., "human_first"
        self._locked = True
        
    def validate_content(self, content: str) -> ValidationResult:
        """Check if content matches locked foundation"""
        
        tone_score = self.measure_tone_alignment(content)
        stance_score = self.measure_stance_alignment(content)
        ethic_score = self.measure_ethic_alignment(content)
        
        overall_score = (tone_score + stance_score + ethic_score) / 3
        
        return ValidationResult(
            passed=overall_score >= 0.9,  # 90% threshold
            score=overall_score,
            feedback=self.generate_feedback(content)
        )
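
A single validation pass might look like the following. This is a usage sketch of the class above; it assumes BrandProfile accepts tone/stance/ethic keywords, and draft_section and request_revision are hypothetical stand-ins for the Strategy agent's draft and the revision hook.

Python
# One validation pass against the locked foundation (illustrative usage).
foundation = FoundationLock(
    BrandProfile(tone="rebellious", stance="direct", ethic="human_first")
)

result = foundation.validate_content(draft_section)
print(result.score)            # e.g. 0.94
if not result.passed:          # below the 0.9 threshold
    # Feedback flows back to the Strategic Analyzer for a rewrite
    request_revision(result.feedback)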

Voice Parameters

Tone
The emotional color of the content (e.g., rebellious, professional, playful)
Stance
The approach to communication (e.g., direct, diplomatic, provocative)
Ethic
The underlying values driving decisions (e.g., human_first, data_driven, innovation_focused)

Technology Stack

⚡
FastAPI
High-performance async Python backend with automatic API documentation
🔗
LangGraph
Agent orchestration and state management for complex multi-agent workflows
🗄️
PostgreSQL + pgvector
Vector database for semantic search and RAG-powered knowledge retrieval
⚛️
Next.js 14
React framework with app router for the user interface
🎨
Tailwind CSS
Utility-first CSS framework for rapid UI development
☁️
Render.com
Production deployment platform with automatic scaling

AI Models

Model             | Purpose                   | Configuration
GPT-4 Turbo       | Primary reasoning engine  | Temperature: 0.7, Max tokens: 4000
Claude 3 Opus     | Validation and refinement | Temperature: 0.3, Max tokens: 2000
OpenAI Embeddings | Vector search for RAG     | Model: text-embedding-3-large
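
The settings in this table map directly onto the official OpenAI and Anthropic SDKs. The repository's LLMClient wrapper is not shown; the sketch below is one way those settings might be applied, and the dated Claude model ID is an assumption.

Python
# How the model configuration above might map onto the vendor SDKs.
from openai import AsyncOpenAI
from anthropic import AsyncAnthropic

openai_client = AsyncOpenAI()        # reads OPENAI_API_KEY
anthropic_client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY

async def reason(prompt: str) -> str:
    # Primary reasoning: GPT-4 Turbo, temperature 0.7, max 4000 tokens
    resp = await openai_client.chat.completions.create(
        model="gpt-4-turbo-preview",
        temperature=0.7,
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def validate(prompt: str) -> str:
    # Validation and refinement: Claude 3 Opus, temperature 0.3, max 2000 tokens
    resp = await anthropic_client.messages.create(
        model="claude-3-opus-20240229",  # assumed dated ID for "claude-3-opus"
        temperature=0.3,
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text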

Setup & Deployment

Local Development Setup

$ git clone [private-repo]
$ cd whitespaceiq
# Backend setup
$ cd backend
$ python -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
$ uvicorn src.main:app --reload
# Frontend setup
$ cd ../frontend
$ npm install
$ npm run dev

Environment Variables

.env
# API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Database
DATABASE_URL=postgresql://user:pass@localhost/whitespaceiq
PGVECTOR_EXTENSION=true

# Application
ENVIRONMENT=development
DEBUG=true
SECRET_KEY=your-secret-key

# Model Configuration
PRIMARY_MODEL=gpt-4-turbo-preview
VALIDATION_MODEL=claude-3-opus
EMBEDDING_MODEL=text-embedding-3-large
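
A common way to load these variables in a FastAPI backend is pydantic-settings; the repository's actual config module is not shown, so the class below is an assumption that simply mirrors the .env keys above.

Python
# Illustrative settings loader mirroring the .env file above.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    openai_api_key: str
    anthropic_api_key: str
    database_url: str
    pgvector_extension: bool = True
    environment: str = "development"
    debug: bool = True
    secret_key: str
    primary_model: str = "gpt-4-turbo-preview"
    validation_model: str = "claude-3-opus"
    embedding_model: str = "text-embedding-3-large"

settings = Settings()  # raises a validation error if required keys are missing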

Production Deployment

The application is deployed on Render.com with automatic builds from the main branch:

🚀 Deployment Configuration
Backend: Web Service with Docker
Frontend: Static Site with build command
Database: PostgreSQL with pgvector extension
Environment: Production with auto-scaling

API Endpoints

Main Endpoints

Endpoint                      | Method | Description
/api/v1/audit/generate        | POST   | Generate complete audit from user input
/api/v1/foundation/create     | POST   | Create and lock brand foundation
/api/v1/status/{session_id}   | GET    | Check generation status
/api/v1/download/{session_id} | GET    | Download completed audit PDF

Example Request

JSON
{
  "business_context": {
    "company_name": "Example Corp",
    "industry": "Technology",
    "target_audience": "B2B SaaS buyers"
  },
  "brand_voice": {
    "tone": "rebellious",
    "stance": "direct",
    "ethic": "human_first"
  },
  "audit_focus": [
    "competitive_analysis",
    "market_positioning",
    "growth_opportunities"
  ]
}
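
With the payload above loaded into a variable, the end-to-end call sequence against the endpoints is straightforward. The base URL, polling interval, and response field names (session_id, status) are assumptions for this sketch.

Python
# Illustrative client flow: generate, poll status, download the PDF.
import time
import requests

BASE = "http://localhost:8000/api/v1"    # assumed local base URL

resp = requests.post(f"{BASE}/audit/generate", json=payload)
session_id = resp.json()["session_id"]   # assumed response field

# Poll until generation completes (target: under 45 minutes)
while requests.get(f"{BASE}/status/{session_id}").json().get("status") != "complete":
    time.sleep(30)

# Download the finished audit PDF
pdf = requests.get(f"{BASE}/download/{session_id}")
with open("audit.pdf", "wb") as f:
    f.write(pdf.content)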

Testing

Test Scripts

# Run individual phase tests
$ python scripts/test_phase1.py
$ python scripts/test_phase2.py
# Run complete workflow test
$ python scripts/run_phases.py --all
# Run with specific client
$ python scripts/run_phases.py --client rebel_marketing

Validation Metrics

Voice Consistency
Target: >90% match with foundation lock parameters
Generation Time
Target: <45 minutes for complete audit
Reasoning Loops
Expected: 2-5 validation cycles per audit
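
The repository's exact voice-consistency scorer is not shown; one plausible approach, given the embedding model in the stack, is to compare a draft's embedding against reference samples of the locked brand voice and average the cosine similarities.

Python
# Sketch of an embedding-based voice-consistency score (assumed approach).
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-large", input=text)
    return np.array(resp.data[0].embedding)

def voice_consistency(draft: str, reference_samples: list[str]) -> float:
    draft_vec = embed(draft)
    scores = []
    for sample in reference_samples:
        ref_vec = embed(sample)
        # Cosine similarity between the draft and one reference sample
        cos = np.dot(draft_vec, ref_vec) / (
            np.linalg.norm(draft_vec) * np.linalg.norm(ref_vec)
        )
        scores.append(float(cos))
    return float(np.mean(scores))  # compared against the >90% target above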

Troubleshooting

Common Issues

โš ๏ธ Connection Pool Issues
Problem: Stale connections after long-running processes
Solution: Restart backend between test runs
lsof -ti:8000 | xargs kill -9 && uvicorn src.main:app --reload
โš ๏ธ Template Rendering Failures
Problem: Pydantic validation errors with dynamic LLM output
Solution: Use flexible template rendering without rigid schemas
See: templates/TEMPLATE_ARCHITECTURE.md
โš ๏ธ Agent Timeout
Problem: Agents exceed 60-second timeout
Solution: Increase timeout in LLMClient configuration
timeout=httpx.Timeout(120.0)
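
Assuming the LLMClient wraps an httpx.AsyncClient (the wrapper itself is not shown here), the longer timeout would be applied where that client is constructed:

Python
# Illustrative placement of the extended timeout on the underlying HTTP client.
import httpx

client = httpx.AsyncClient(
    timeout=httpx.Timeout(120.0, connect=10.0)  # allow long-running agent responses
)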

Lessons Learned

📅 October 28, 2025
"KISS is a principle, not a suggestion"

Spent a day building complex connection pooling with context managers, health monitoring, and dependency injection. The actual solution? Restart the server between tests. Simple solutions often beat clever ones.

Key Insights

  • Foundation First: Lock brand voice before any generation to prevent drift
  • Agents Need Autonomy: Real reasoning requires ability to disagree and revise
  • Templates Must Be Flexible: LLM output varies; rigid schemas break
  • Measure Before Optimizing: Don't solve imaginary problems
  • Production != Perfect: Ship working code, iterate based on real usage

What Worked

Multi-Agent Orchestration
LangGraph enabled true agent conversation vs simple chaining
Foundation Lock Pattern
Novel approach to maintaining voice consistency
Flexible Templates
Recursive rendering handles any LLM output structure

Future Improvements

  • Implement streaming responses for better UX
  • Add more granular progress tracking
  • Expand validation metrics beyond voice consistency
  • Build agent memory for improved performance over time