Posts from @faun
 Activity
@faun published "Some thoughts on LLMs and Software Development", 3 months, 4 weeks ago.
 Activity
@faun published "Combining GenAI & Agentic AI to build scalable, autonomous systems", 3 months, 4 weeks ago.
 Activity
@faun published "From GPT-2 to gpt-oss: Analyzing the Architectural Advances", 3 months, 4 weeks ago.
 Activity
@faun published "I set up an email triage system using Home Assistant and a local LLM, here's how you can too", 3 months, 4 weeks ago.
 Activity
@faun published "37 Things I Learned About Information Retrieval in Two Years at a Vector Database Company", 3 months, 4 weeks ago.
 Activity
@faun published "What makes Claude Code so damn good (and how to recreate that magic in your agent)!?", 3 months, 4 weeks ago.
 Activity
@faun published "The Most Important Machine Learning Equations: A Comprehensive Guide", 3 months, 4 weeks ago.
Link
@faun shared a link, 3 months, 4 weeks ago
FAUN.dev()

We Needed Better Cloud Storage for Python so We Built Obstore

Obstore is a new stateless object store that skips fsspec-style caching and keeps its API tight and predictable across S3, GCS, and Azure. Sync and async both work. Under the hood? Fast, zero-copy Rust-Python interop. And on small concurrent async GETs, it reportedly crushes S3FS with up to 9x better …

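The pitch above is a stateless store with one tight surface that works the same way sync and async. Here is a toy pure-Python sketch of that shape; it is not Obstore's actual API (the class and method names below are invented for illustration), just a minimal model of "no session state, no caching layer, sync and async share one code path":

```python
import asyncio


class MemoryObjectStore:
    """Toy stateless object store (hypothetical API, not Obstore's).

    No connection/session state beyond the blobs themselves, and the
    async path is a thin wrapper over the same lookup as the sync path.
    """

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, path: str, data: bytes) -> None:
        self._blobs[path] = bytes(data)

    def get(self, path: str) -> bytes:
        # No read cache in front: every get hits the store directly.
        return self._blobs[path]

    async def get_async(self, path: str) -> bytes:
        # A real backend would do non-blocking network I/O here;
        # we just yield to the event loop to keep the shape honest.
        await asyncio.sleep(0)
        return self.get(path)


store = MemoryObjectStore()
store.put("bucket/hello.txt", b"hi")
print(store.get("bucket/hello.txt"))  # b'hi'


async def fetch_many():
    # Small concurrent async GETs, the workload the benchmark claim is about.
    return await asyncio.gather(*(store.get_async("bucket/hello.txt") for _ in range(8)))


results = asyncio.run(fetch_many())
```

The point of the sketch is the absence of per-client caching state: predictable behavior across backends comes from keeping the surface this small.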
Link
@faun shared a link, 3 months, 4 weeks ago
FAUN.dev()

How Salesforce Delivers Reliable, Low-Latency AI Inference

Salesforce’s AI Metadata Service (AIMS) just got a serious speed boost. They rolled out a multi-layer cache, L1 on the client and L2 on the server, and cut inference latency from 400ms to under 1ms. That’s over 98% faster. But it’s not just about speed anymore. L2 keeps responses flowing even when the b…

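The L1/L2 pattern described above can be sketched in a few lines. This is a generic two-tier cache, not Salesforce's implementation: L1 is a short-TTL per-client cache, L2 a longer-TTL shared tier, and a miss on both falls through to a loader (standing in for the backing metadata store). TTLs and key names here are made up for the example:

```python
import time


class TwoTierCache:
    """Toy L1 (per-client, short TTL) / L2 (shared, longer TTL) cache."""

    def __init__(self, l1_ttl: float = 1.0, l2_ttl: float = 30.0):
        self.l1: dict = {}  # key -> (value, expires_at)
        self.l2: dict = {}
        self.l1_ttl, self.l2_ttl = l1_ttl, l2_ttl

    def _fresh(self, tier: dict, key):
        entry = tier.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        return None

    def get(self, key, loader):
        value = self._fresh(self.l1, key)
        if value is not None:
            return value, "l1"
        value = self._fresh(self.l2, key)
        if value is not None:
            # Promote to L1 so the next read skips the shared tier.
            self.l1[key] = (value, time.monotonic() + self.l1_ttl)
            return value, "l2"
        value = loader(key)  # fall through to the origin store
        now = time.monotonic()
        self.l1[key] = (value, now + self.l1_ttl)
        self.l2[key] = (value, now + self.l2_ttl)
        return value, "origin"


loads = []
cache = TwoTierCache()
v1, src1 = cache.get("model-a", lambda k: loads.append(k) or f"meta:{k}")
v2, src2 = cache.get("model-a", lambda k: loads.append(k) or f"meta:{k}")
print(src1, src2)  # origin l1
```

The availability angle in the blurb maps onto the same structure: because L2 holds entries longer than L1, a stale-but-present L2 value can still serve reads when the origin is down.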
Link
@faun shared a link, 3 months, 4 weeks ago
FAUN.dev()

From Python to Go: Why We Rewrote Our Ingest Pipeline at Telemetry Harbor

Telemetry Harbor tossed out Python FastAPI and rebuilt its ingest pipeline in Go. The payoff? 10x faster, no more CPU freakouts, and stronger data integrity thanks to strict typing. PostgreSQL is now the slowest link in the chain, not the app, which is the kind of bottleneck you actually want. Means the s…

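The "data integrity thanks to strict typing" claim boils down to rejecting malformed payloads at the edge instead of letting them coerce their way into the database. Here is a small Python sketch of that discipline (the field names `device_id`, `metric`, `value` are hypothetical, not Telemetry Harbor's schema; Go's static types enforce this at compile time, while here we validate explicitly at the boundary):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Reading:
    # Hypothetical ingest schema for illustration only.
    device_id: str
    metric: str
    value: float


def parse_reading(payload: dict) -> Reading:
    """Validate an incoming payload strictly; reject rather than coerce."""
    device_id = payload.get("device_id")
    metric = payload.get("metric")
    value = payload.get("value")
    if not isinstance(device_id, str) or not device_id:
        raise ValueError("device_id must be a non-empty string")
    if not isinstance(metric, str) or not metric:
        raise ValueError("metric must be a non-empty string")
    # bool is a subclass of int in Python, so screen it out explicitly.
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise ValueError("value must be numeric")
    return Reading(device_id, metric, float(value))


ok = parse_reading({"device_id": "sensor-1", "metric": "temp_c", "value": 21.5})
print(ok)
```

A payload like `{"value": "21.5"}` would raise here instead of being silently stored as a string, which is the integrity win the rewrite is pointing at.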