Posts from @dsensenig
Link
@faun shared a link, 13 hours ago

Lessons learned from building a sync-engine and reactivity system with SQLite

A dev ditched Electric + PGlite for a lean, browser-native sync setup built around WASM SQLite, JSON polling, and BroadcastChannel reactivity. It’s running inside a local-first notes app. Changes get logged with DB triggers. Sync state? Tracked by hand. Svelte stores update via lightweight polling, wi..
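
For flavor, a minimal sketch of the trigger-based change log, with Python's sqlite3 standing in for WASM SQLite. The schema and table names (notes, _changes) are illustrative, not taken from the article; in the browser, the poll result would be fanned out over BroadcastChannel to refresh Svelte stores.

# Triggers append every write to a _changes table; a poller reads anything
# newer than the last sequence number it has seen.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE notes (id TEXT PRIMARY KEY, body TEXT, updated_at INTEGER);
CREATE TABLE _changes (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                       tbl TEXT, row_id TEXT, op TEXT);
CREATE TRIGGER notes_insert AFTER INSERT ON notes BEGIN
  INSERT INTO _changes (tbl, row_id, op) VALUES ('notes', NEW.id, 'insert');
END;
CREATE TRIGGER notes_update AFTER UPDATE ON notes BEGIN
  INSERT INTO _changes (tbl, row_id, op) VALUES ('notes', NEW.id, 'update');
END;
""")

def poll_changes(last_seq: int):
    """Return changes newer than last_seq -- the polling/reactivity half."""
    return db.execute(
        "SELECT seq, tbl, row_id, op FROM _changes WHERE seq > ? ORDER BY seq",
        (last_seq,),
    ).fetchall()

db.execute("INSERT INTO notes VALUES ('n1', 'hello', 0)")
db.execute("UPDATE notes SET body = 'hello world' WHERE id = 'n1'")
print(poll_changes(0))  # [(1, 'notes', 'n1', 'insert'), (2, 'notes', 'n1', 'update')]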

Link
@faun shared a link, 13 hours ago

Developer's block

Overdoing “best practices” can kill momentum. Think endless tests, wall-to-wall docs, airtight CI, and coding rules rigid enough to snap. Sounds responsible—until it slows dev to a crawl. The piece argues for flipping that script. Start scrappy. Build fast. Save the polish for later. It’s how you d..

Link
@faun shared a link, 13 hours ago

From GPT-2 to gpt-oss: Analyzing the Architectural Advances

OpenAI Returns to Openness. The company dropped gpt-oss-20B and gpt-oss-120B—its first open-weight LLMs since GPT-2. The models pack a modern stack: Mixture-of-Experts, Grouped Query Attention, Sliding Window Attention, and SwiGLU. They're also lean. Thanks to MXFP4 quantization, 20B runs on a 16GB consume..
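
To make one of those components concrete, here is a tiny NumPy sketch of a SwiGLU feed-forward block. Shapes and weights are toy placeholders, not gpt-oss values.

# SwiGLU feed-forward: (SiLU(x W_gate) * (x W_up)) W_down
import numpy as np

def silu(x):
    return x * (1.0 / (1.0 + np.exp(-x)))  # SiLU / swish activation

def swiglu_ffn(x, w_gate, w_up, w_down):
    # Gate path goes through SiLU, then multiplies the "up" projection
    # elementwise before being projected back down to model width.
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

d_model, d_ff = 8, 32
rng = np.random.default_rng(0)
x = rng.standard_normal((4, d_model))            # 4 tokens
w_gate = rng.standard_normal((d_model, d_ff))
w_up = rng.standard_normal((d_model, d_ff))
w_down = rng.standard_normal((d_ff, d_model))
print(swiglu_ffn(x, w_gate, w_up, w_down).shape)  # (4, 8)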

Link
@faun shared a link, 13 hours ago

Are OpenAI and Anthropic Really Losing Money on Inference?

DeepSeek R1 running on H100s puts input-token costs near $0.003 per million—while output tokens still punch in north of $3. That’s a 1,000x spread. So if a job leans heavy on input—think code linting or parsing big docs—those margins stay fat, even with cautious compute. System shift: This lop-sided ..
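
Back-of-envelope arithmetic with the per-million-token figures quoted above makes the asymmetry obvious. The workload mixes below are made-up examples, not numbers from the article.

# $0.003 per 1M input tokens vs. ~$3 per 1M output tokens (the 1,000x spread).
INPUT_COST_PER_M = 0.003
OUTPUT_COST_PER_M = 3.0

def job_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * INPUT_COST_PER_M + \
           (output_tokens / 1e6) * OUTPUT_COST_PER_M

# Input-heavy job (e.g. linting a big codebase): 5M tokens read, 50K written.
print(round(job_cost(5_000_000, 50_000), 4))   # 0.165 -- the 5M input tokens cost only $0.015
# Output-heavy job: long-form generation flips the bill entirely.
print(round(job_cost(100_000, 2_000_000), 4))  # 6.0003 -- almost all of it is output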

Link
@faun shared a link, 13 hours ago

Some thoughts on LLMs and Software Development

Most LLMs still play autocomplete sidekick. But seasoned devs? They get better results when the model reads and rewrites actual source files. That gap—between how LLMs are designed to work and how pros actually use them—messes with survey data and muddies the picture on real gains in code quality and..

Link
@faun shared a link, 13 hours ago

Combining GenAI & Agentic AI to build scalable, autonomous systems

Agentic AI doesn’t just crank out content—it takes the wheel. Where GenAI reacts, Agentic AI plans, perceives, and acts. Think less autocomplete, more autonomous ops. Hook them together, and you get a full-stack brain: content creation, real-time decisions, adaptive workflows, all learning as they ..
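
A deliberately tiny perceive-plan-act loop, just to make the GenAI vs. Agentic distinction concrete. Everything here is a hypothetical sketch; generate() stands in for whatever generative model you call.

def generate(prompt: str) -> str:
    return f"[model output for: {prompt}]"   # placeholder for an LLM call

def agent_loop(goal: str, max_steps: int = 3):
    history = []
    for step in range(max_steps):
        observation = f"step {step}, {len(history)} actions so far"                   # perceive
        plan = generate(f"Goal: {goal}. Observation: {observation}. Next action?")    # plan
        history.append(plan)                                                          # act (here: just record it)
    return history

print(agent_loop("triage incoming support tickets"))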

Link
@faun shared a link, 13 hours ago

37 Things I Learned About Information Retrieval in Two Years at a Vector Database Company

A Weaviate engineer pulls back the curtain on two years of hard-earned lessons in vector search—breaking down BM25, embedding models, ANN algorithms, and RAG pipelines. The real story? Retrieval workflows keep moving—from keyword-heavy (sparse) toward embedding-driven (dense). Across IR use cases, the ..
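
The dense half of that shift fits in a few lines of NumPy: brute-force cosine similarity over document embeddings, which ANN indexes (HNSW and friends) approximate at scale. Embeddings here are random placeholders, not real model output.

import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.standard_normal((1000, 384))   # 1000 docs, 384-dim vectors
doc_embeddings /= np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

def dense_search(query_vec, k: int = 5):
    q = query_vec / np.linalg.norm(query_vec)
    scores = doc_embeddings @ q                      # cosine similarity on unit vectors
    top = np.argsort(-scores)[:k]
    return list(zip(top.tolist(), scores[top].round(3).tolist()))

print(dense_search(rng.standard_normal(384)))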

Link
@faun shared a link, 13 hours ago

What makes Claude Code so damn good (and how to recreate that magic in your agent)!?

Claude Code skips the multi-agent circus. One main loop. At most, one fork in the road. Everything runs through a flat message history, tracked by a tidy little to-do list. Over half its LLM calls? Outsourced to lighter, cheaper models like claude-3-5-haiku. Smart split: heavyweight reasoning when y..
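
A hypothetical sketch of that single-loop shape: one flat message history and a cheap-model route for low-stakes calls. call_model(), the model names, and the routing rule are illustrative stand-ins, not Claude Code's actual internals or API.

CHEAP_MODEL = "claude-3-5-haiku"
STRONG_MODEL = "strong-model"   # placeholder name for the heavyweight model

def call_model(model: str, messages: list) -> str:
    return f"[{model} reply to {len(messages)} messages]"  # stub for a real API call

def run_turn(messages: list, task: str) -> list:
    messages.append({"role": "user", "content": task})
    # Route bookkeeping (summaries, to-do updates) to the cheap model,
    # keep real reasoning on the strong one.
    model = CHEAP_MODEL if task.startswith(("summarize", "update todo")) else STRONG_MODEL
    messages.append({"role": "assistant", "content": call_model(model, messages)})
    return messages

history = []
run_turn(history, "update todo: mark step 1 done")
run_turn(history, "refactor the sync module to remove polling")
print([m["content"] for m in history if m["role"] == "assistant"])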

Link
@faun shared a link, 13 hours ago

I set up an email triage system using Home Assistant and a local LLM, here's how you can too

A DIY email triage rig using Home Assistant, IMAP, and Ollama wires up local LLM smarts with YAML-fueled automation. At the core: an 8B dolphin-llama model running on GPU, chewing through messy HTML emails, tagging them, and firing off priority-sorted summaries via notifications. Why it matters: A signal..
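
The article wires this up through Home Assistant's IMAP integration and YAML automations; as a standalone sketch of just the LLM-tagging step, something like the following would do it, using Ollama's local /api/generate endpoint. The model tag and prompt are illustrative guesses.

import json
from html.parser import HTMLParser
from urllib import request

class TextExtractor(HTMLParser):
    """Strip an HTML email body down to plain text."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def classify_email(html_body: str, model: str = "dolphin-llama3:8b") -> str:
    extractor = TextExtractor()
    extractor.feed(html_body)
    text = " ".join(extractor.parts)[:2000]          # keep the prompt small
    prompt = f"Label this email as URGENT, NORMAL, or IGNORE. Reply with one word.\n\n{text}"
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = request.Request("http://localhost:11434/api/generate", data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

# print(classify_email("<p>Your server is down.</p>"))  # needs a local Ollama running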

Link
@faun shared a link, 13 hours ago

The Most Important Machine Learning Equations: A Comprehensive Guide

A new reference rounds up the core ML equations—Bayes’ Theorem, cross-entropy, eigen decomposition, attention—and shows how they plug into real Python code using NumPy, TensorFlow, and scikit-learn. It hits the big four: probability, linear algebra, optimization, and generative modeling. Stuff that..
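
Two of the equations the guide covers, written out in plain NumPy for reference: softmax cross-entropy and scaled dot-product attention. Shapes are toy-sized; this is a sketch, not the guide's own code.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # H(p, q) = -sum_i p_i log q_i, with p a one-hot label distribution
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
print(cross_entropy(rng.standard_normal((4, 3)), np.array([0, 2, 1, 0])))
Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 8)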