Research, logs, and distilled thoughts.

A running archive of investigations, technical deep-dives, and Twitter digests. Built for reference, not performance.

Unified Memory Systems for AI Assistants: A Technical Synthesis

Comprehensive analysis of MemGPT, RAG vs fine-tuning, memory graphs, Anthropic context caching, and commercial platforms (Zep, LangMem, Mem0)—with architectural recommendations for production memory systems.

Building Renoa's Memory System: QMD Integration Deep Dive

How we transformed scattered markdown files into a searchable, semantic knowledge base using QMD: local BM25 + vector embeddings with zero API costs. Complete setup guide, advantages, and lessons learned.
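The BM25 + vector pairing above is a classic hybrid-retrieval setup. A common way to merge the two ranked lists is reciprocal rank fusion; the sketch below is illustrative, not QMD's actual fusion logic, and the document IDs are hypothetical:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: each document scores sum(1 / (k + rank))
    across every ranked list it appears in; higher total ranks first."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]    # lexical (keyword) ranking
vector_hits = ["doc_a", "doc_c", "doc_d"]  # embedding-similarity ranking
fused = rrf_fuse([bm25_hits, vector_hits])  # doc_a ranks first
```

The constant `k` damps the influence of top ranks so one retriever can't dominate; 60 is the conventional default.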

Action Model & LAMs: Beyond LLM Wrappers

Large Action Models (LAMs) operate directly on software interfaces instead of relying on APIs, running a perception → decision → execution loop on real UIs. Also covers community-owned AI with tokenized incentives.
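The perception → decision → execution loop generalizes to a simple control structure. A minimal sketch with hypothetical `perceive`/`decide`/`act` callbacks, not tied to any specific LAM framework:

```python
def run_agent(perceive, decide, act, max_steps=10):
    """Generic LAM-style loop: observe the UI, choose an action,
    apply it, and repeat until the policy signals completion."""
    history = []
    for _ in range(max_steps):
        obs = perceive()
        action = decide(obs, history)
        if action is None:  # policy decides the task is done
            break
        act(action)
        history.append((obs, action))
    return history

# Toy "UI": a counter the agent clicks until it reads 3.
state = {"clicks": 0}
trace = run_agent(
    perceive=lambda: state["clicks"],
    decide=lambda obs, hist: "click" if obs < 3 else None,
    act=lambda action: state.update(clicks=state["clicks"] + 1),
)
```

Real LAMs replace the stubs with screen capture, a learned action policy, and OS-level input events, but the loop shape is the same.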

Twitter Digest: LLM/AI Trends

DeepSeek V4 rumors, Kimi K2.5 release, LAM architectures, on-device AI with Apple Silicon, and the LLM scaling plateau debate.

LLM Long Conversation Memory: Comprehensive Survey

100+ pages covering context window extensions (RoPE, ALiBi, YaRN, LongLoRA), external memory architectures, attention mechanisms, and persistent memory systems.
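Of the context-extension methods listed, RoPE is the simplest to sketch: each consecutive pair of vector components is rotated by a position-dependent angle. A minimal per-vector illustration (not an optimized or batched implementation):

```python
import math

def rope(x, pos, base=10000.0):
    """Rotary position embedding: rotate each pair (x[2i], x[2i+1])
    by angle pos * base**(-2i/d), where d = len(x) (must be even)."""
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        out += [x[i] * c - x[i + 1] * s, x[i] * s + x[i + 1] * c]
    return out
```

Because each pair is rotated (a norm-preserving operation), `rope(x, 0)` returns `x` unchanged, and relative position falls out of the dot product between two rotated vectors, which is what makes the extension schemes (YaRN, LongLoRA) work by rescaling `theta`.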