Building in Public
Taking a month in Bangkok to train Muay Thai and learn to dance, while reflecting on building technology with wisdom.
What We're Picking Up
Cross-modal learning experiments: Learning both tango (intuition, feeling, weight shifts) and salsa (technical precision, rhythm-driven) as complementary skills rather than competing approaches. The assumption that "more technical = better" misses half the picture - sustainable mastery requires both systematic execution and intuitive adaptation.
AI coordination limitations: Prototyping multi-agent systems revealed that Claude Projects and similar tools hit walls when you need persistent, domain-specific intelligence. The general-purpose AI dream breaks down at implementation scale - you need specialized processing pipelines that don't eat through API budgets.
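One way to read the "specialized over general-purpose" lesson is as a routing problem: instead of one coordinator agent that tries to do everything, each domain gets its own narrow handler and unknown work fails loudly. The sketch below is illustrative only; the handler names and domains are assumptions, not part of any real system described above.

```python
# Hypothetical sketch: route each task to a narrow, domain-specific
# handler instead of one general-purpose coordinator agent.
# Handler names and routing domains are illustrative assumptions.

from typing import Callable

HANDLERS: dict[str, Callable[[str], str]] = {}

def handler(domain: str):
    """Register a specialized handler for exactly one domain."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        HANDLERS[domain] = fn
        return fn
    return register

@handler("citation")
def check_citation(task: str) -> str:
    # A narrow pipeline: only verifies references, nothing else.
    return f"citation-check: {task}"

@handler("summarize")
def summarize(task: str) -> str:
    # A narrow pipeline: only compresses text, nothing else.
    return f"summary: {task}"

def route(domain: str, task: str) -> str:
    """Fail loudly on unknown domains instead of guessing."""
    if domain not in HANDLERS:
        raise KeyError(f"no specialized handler for {domain!r}")
    return HANDLERS[domain](task)
```

The design choice mirrors the point above: constraints (a fixed handler registry) beat an open-ended coordinator that silently attempts anything.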
Hierarchical Reasoning Machines: These architectures challenge the fundamental "scale = intelligence" assumption driving current AI development. What if architectural efficiency matters more than parameter count? Real products need sustainable tokenomics, not just impressive demos.
What We're Working On
Content processing pipeline: Building systems that can ingest lectures, academic papers, and complex arguments into queryable knowledge bases. Think "search engine for systematic reasoning patterns" - but with anti-hallucination measures because accuracy matters when processing authoritative sources.
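A minimal sketch of the ingest-then-query idea, assuming a plain-text corpus: every chunk carries its source, so any answer can be traced back to a span of the original document, which is the simplest anti-hallucination measure. The keyword lookup stands in for whatever retrieval a real system would use.

```python
# Minimal sketch: ingest documents into provenance-tagged chunks,
# then query them. Source names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Chunk:
    source: str   # e.g. a paper title or lecture id
    text: str

def ingest(source: str, text: str, size: int = 200) -> list[Chunk]:
    """Split a document into fixed-size chunks tagged with provenance."""
    return [Chunk(source, text[i:i + size]) for i in range(0, len(text), size)]

def query(index: list[Chunk], term: str) -> list[Chunk]:
    """Naive keyword lookup; a real system would use embeddings or
    a proper search index, but the provenance guarantee is the same."""
    return [c for c in index if term.lower() in c.text.lower()]

index = ingest("lecture-01", "Systematic reasoning requires explicit premises.")
hits = query(index, "premises")
# Each hit keeps its source, so nothing is surfaced without provenance.
```

Because results are chunks rather than generated text, the pipeline can only return what the sources actually say.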
Token economics reality check: Hit the cost wall building consumer AI applications with real user data at scale, which led me into compression algorithms, context optimization, and selective processing. The "just throw more tokens at it" approach doesn't work for sustainable products.
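Selective processing can be sketched as a simple budgeting loop: estimate each chunk's token cost and stop when the budget is spent, rather than sending everything to the model. The ~4-characters-per-token heuristic is an assumption; real tokenizers vary by model and language.

```python
# Back-of-envelope token budgeting, using the rough heuristic of
# ~4 characters per token (an assumption; real tokenizers vary).

def est_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def select_within_budget(chunks: list[str], budget: int) -> list[str]:
    """Keep chunks until the token budget is spent, instead of
    throwing everything at the model."""
    kept, spent = [], 0
    for chunk in chunks:
        cost = est_tokens(chunk)
        if spent + cost > budget:
            break
        kept.append(chunk)
        spent += cost
    return kept

chunks = ["a" * 400, "b" * 400, "c" * 400]  # ~100 estimated tokens each
picked = select_within_budget(chunks, budget=250)  # fits only the first two
```

Even this crude gate changes the cost curve: spend scales with the budget you set, not with the size of the corpus.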
Truth verification infrastructure: Got burned by AI-generated fake academic references in grad school submissions. Now building verification layers as a separate service. When you're processing complex arguments and authoritative content, hallucinations aren't just annoying - they're academically destructive.
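One cheap first layer for a verification service is a structural gate: check that a citation's DOI is well-formed and its year plausible before paying for any external lookup. This is only a sketch of that gate; a real verifier would then confirm the record against a registry such as CrossRef, a step omitted here.

```python
# First-pass structural check for references: a malformed DOI or an
# implausible year flags a likely fabricated citation before any
# external API call. The year bounds are an illustrative assumption.

import re

DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def plausible_reference(doi: str, year: int) -> bool:
    """Reject obviously fabricated references cheaply; passing this
    gate does NOT prove the record exists, only that it could."""
    return bool(DOI_RE.match(doi)) and 1900 <= year <= 2025

plausible_reference("10.1038/nature14539", 2015)   # well-formed, passes
plausible_reference("not-a-doi", 2015)             # malformed, rejected
```

Layering the cheap check in front of the expensive one keeps verification costs bounded while still catching the most common fabrication pattern: references that merely look real.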
What We're Letting Go
Blockchain-first solutions: Moved from "how do we use distributed ledgers?" to "what specific problems need decentralized verification?" Most applications don't need immutable records - they need reliable information processing. Better to solve specific verification problems than build abstract truth infrastructure.
Overoptimization fallacy: Was training Muay Thai 8x/week while maintaining academic work and building projects. Reduced to 4x/week after recognizing that sustainable performance beats peak bursts. Energy management matters more than maximum effort when you're working on multiple complex problems.
General-purpose AI coordination: The elegant multi-agent architecture worked beautifully in testing, terribly at scale. Pivoting from trying to make everything work together to building specialized systems that do one thing well. Constraints breed better solutions than unlimited resources.
What We're Seeing
We're at an inflection point where raw compute is abundant enough to solve interesting problems, but expensive enough that sustainable solutions require actual engineering discipline. The next wave of useful AI applications will come from efficiency innovations, not scale increases - from people who've hit the token wall and been forced to think systematically about what actually matters.
Most AI discussion happens in two bubbles: researchers with unlimited budgets, or consumers who never see backend costs. The sustainable products emerge from the narrow middle - people building real applications for real users with real constraints.