Posts from @vinothkaran88
@varbear shared a link, 1 week, 4 days ago
FAUN.dev()

How Netflix Tudum Supports 20 Million Users With CQRS

Netflix gutted Tudum’s old read path—Kafka, Cassandra, layers of cache—and swapped in RAW Hollow, a compressed, distributed, in-memory object store baked right into each microservice. Result? Homepage renders dropped from 1.4s to 0.4s. Editors get near-instant previews. No more read caches. No extern..
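
For flavor, a minimal and entirely hypothetical Python sketch of that CQRS shape (plain process memory standing in for RAW Hollow, all names made up): the write side appends events, and the read side materializes them into an in-process dict it serves directly, with no external cache on the read path.

from dataclasses import dataclass


@dataclass
class PageUpdated:
    page_id: str
    body: str


class WriteSide:
    """Command handler: accepts writes and appends events to a log."""

    def __init__(self) -> None:
        self.events: list[PageUpdated] = []

    def update_page(self, page_id: str, body: str) -> None:
        self.events.append(PageUpdated(page_id, body))


class ReadSide:
    """Query handler: materializes the event log into process memory,
    so reads never touch an external cache or database."""

    def __init__(self) -> None:
        self.pages: dict[str, str] = {}
        self.offset = 0

    def catch_up(self, log: list[PageUpdated]) -> None:
        for event in log[self.offset:]:
            self.pages[event.page_id] = event.body
        self.offset = len(log)

    def render(self, page_id: str) -> str:
        return self.pages.get(page_id, "<not published yet>")


writer, reader = WriteSide(), ReadSide()
writer.update_page("home", "Tudum homepage copy")
reader.catch_up(writer.events)   # read model refreshed in memory
print(reader.render("home"))

The real system adds replication and compression to that in-memory view; the sketch only shows why a read turns into a local lookup.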

@varbear shared a link, 1 week, 4 days ago
FAUN.dev()

Kafka is fast -- I'll use Postgres

Postgres is pulling Kafka moves—without the Kafka. On a humble 3-node cluster, it held 5MB/s ingest and 25MB/s egress like a champ. Low latency. Rock-solid durability. Crank things up, and single-node Postgres flexed hard: 240 MiB/s in, 1.16 GiB/s out for pub/sub. Thousands of messages per second in q..
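
As an illustration of the technique (not the article’s actual schema or benchmark code), a queue on top of Postgres usually leans on SELECT ... FOR UPDATE SKIP LOCKED so many consumers can pull rows without blocking each other. A rough psycopg2 sketch, assuming a local database named queue_demo:

import psycopg2

conn = psycopg2.connect("dbname=queue_demo")   # assumed local database
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            id      bigserial PRIMARY KEY,
            topic   text      NOT NULL,
            payload text      NOT NULL,
            done    boolean   NOT NULL DEFAULT false
        )
    """)


def publish(topic: str, payload: str) -> None:
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO messages (topic, payload) VALUES (%s, %s)",
            (topic, payload),
        )


def consume(topic: str, batch: int = 10) -> None:
    # SKIP LOCKED lets concurrent consumers each grab different rows
    # instead of queueing up behind one another's row locks.
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, payload FROM messages
            WHERE topic = %s AND NOT done
            ORDER BY id
            FOR UPDATE SKIP LOCKED
            LIMIT %s
            """,
            (topic, batch),
        )
        for msg_id, payload in cur.fetchall():
            print("processing", msg_id, payload)
            cur.execute("UPDATE messages SET done = true WHERE id = %s", (msg_id,))


publish("events", "user signed up")
consume("events")

Durability rides on Postgres’s WAL; numbers like the ones above will obviously depend on batching, indexing, and hardware.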

@varbear shared a link, 1 week, 4 days ago
FAUN.dev()

The bug that taught me more about PyTorch than years of using it

A sneaky bug in PyTorch’s MPS backend let non-contiguous tensors silently ignore in-place ops like addcmul_. That’s optimizer-breaking stuff. The culprit? The Placeholder abstraction - meant to handle temp buffers under the hood - forgot to actually write results back to the original tensor...
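
A rough way to see what “silently ignored” means (a hypothetical repro sketch, not the article’s exact case): run the same in-place addcmul_ on a non-contiguous view and on a contiguous copy, then compare.

import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

base = torch.randn(4, 8, device=device)
view = base.t()                       # transposed view: non-contiguous
assert not view.is_contiguous()

t1 = torch.randn(8, 4, device=device)
t2 = torch.randn(8, 4, device=device)

expected = view.contiguous()
expected.addcmul_(t1, t2, value=0.5)  # reference: contiguous input

view.addcmul_(t1, t2, value=0.5)      # in-place op on the non-contiguous view

# On a fixed build these match; on an affected MPS build the view keeps its
# old values because the temp buffer was never copied back to its storage.
print(torch.allclose(view, expected))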

@varbear shared a link, 1 week, 4 days ago
FAUN.dev()

uv is the best thing to happen to the Python ecosystem in a decade

uv is a new Rust-powered CLI from Astral that tosses Python versioning, virtualenvs, and dependency syncing into one blisteringly fast tool. It handles your pyproject.toml like a grown-up—auto-generates it, updates it, keeps your environments identical across machines. Need to run a tool once without t..

@kaptain shared a link, 1 week, 4 days ago
FAUN.dev()

eBPF Beginner Skill Path

This hands-on path drops devs straight into writing, loading, and poking at basic eBPF programs with libbpf, maps, and those all-important kernel safety checks. It starts simple - with a beginner-friendly challenge - then dives deeper into the verifier and tools for runtime introspection...
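
The path itself teaches libbpf in C; purely as a taste of what a first eBPF program looks like, here is the classic hello-world shape using the BCC Python bindings instead (a different toolchain than the path covers; needs root and the bcc package):

from bcc import BPF

# The eBPF program is still C; the kernel verifier checks it at load time.
prog = r"""
int hello(void *ctx) {
    bpf_trace_printk("hello from eBPF\n");
    return 0;
}
"""

b = BPF(text=prog)                    # compile + load; the verifier runs here
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
print("Tracing clone() syscalls... Ctrl-C to stop")
b.trace_print()                       # stream bpf_trace_printk output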

@kaptain shared a link, 1 week, 4 days ago
FAUN.dev()

How to build highly available Kubernetes applications with Amazon EKS Auto Mode

Amazon EKS Auto Mode now runs the cluster for you—handling control plane updates, add-on management, and node rotation. It sticks to Kubernetes best practices so your apps stay up through node drains, pod failures, AZ outages, and rolling upgrades. It also respects Pod Disruption Budgets, Readiness Ga..
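
The Pod Disruption Budget is the piece you still declare yourself. A hypothetical sketch with the official Kubernetes Python client (placeholder names, assumes the kubernetes package and a reachable cluster):

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

# Keep at least 2 "app=web" pods running through voluntary disruptions
# such as node drains during an upgrade.
pdb = client.V1PodDisruptionBudget(
    metadata=client.V1ObjectMeta(name="web-pdb", namespace="default"),
    spec=client.V1PodDisruptionBudgetSpec(
        min_available=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
    ),
)

client.PolicyV1Api().create_namespaced_pod_disruption_budget(
    namespace="default", body=pdb
)

Auto Mode can only honor budgets that exist, which is why the post pairs it with PDBs and readiness gates.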

@kaptain shared a link, 1 week, 4 days ago
FAUN.dev()

Building a Kubernetes Platform — Think Big, Think in Planes

Thinking in planes, as introduced by the Platform Engineering reference model, helps teams describe their platform in a simple, shared language, turning a collection of tools into a platform. It forces you to think horizontally, connecting teams and technologies instead of adding more layers, creati..

@kaptain shared a link, 1 week, 4 days ago
FAUN.dev()

Helm 4 Overview

Helm 4 ditches the old plugin model for a sharper, plugin-first architecture powered by WebAssembly. That means better isolation, tighter control, and deeper customization - if you're ready to adapt! Post-renderers are now plugins, which breaks compatibility with earlier exec-based setups, so expect some rewiring...
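
For context on what is being replaced: an exec-based post-renderer under Helm 3 is just an executable that reads the fully rendered manifests on stdin and writes modified manifests back to stdout. A hypothetical Python one that injects a label (PyYAML assumed installed):

#!/usr/bin/env python3
# Legacy Helm 3-style exec post-renderer: rendered manifests arrive on
# stdin, modified manifests must go back out on stdout.
import sys

import yaml

EXTRA_LABELS = {"team": "platform"}   # placeholder label to inject

docs = [d for d in yaml.safe_load_all(sys.stdin) if d]
for doc in docs:
    labels = doc.setdefault("metadata", {}).setdefault("labels", {})
    labels.update(EXTRA_LABELS)

yaml.safe_dump_all(docs, sys.stdout)

In Helm 3 this would be wired up with the --post-renderer flag; under Helm 4's model the same logic would move into a plugin instead.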

@kaptain shared a link, 1 week, 4 days ago
FAUN.dev()

The State of OCI Artifacts for AI/ML

OCI artifacts quietly leveled up. Over the last 18 months, they’ve gone from a niche hack to production muscle for AI/ML workloads on Kubernetes. The signs? Clear enough: KitOps and ModelPack landed in the CNCF Sandbox. Kubernetes 1.31 got native support for Image Volume Source. Docker pushed Model Runner..

@kaptain shared a link, 1 week, 4 days ago
FAUN.dev()

Unlocking next-generation AI performance with Dynamic Resource Allocation on Amazon EKS and Amazon EC2 P6e-GB200

Amazon just dropped EC2 P6e-GB200 UltraServers, packing NVIDIA GB200 Grace Blackwell chips. Built for running trillion-parameter AI models on Amazon EKS without losing sleep over scaling. Under the hood: NVLink 5.0, IMEX, and EFAv4 stitch up to 72 Blackwell GPUs into one memory-coherent cluster per UltraServ..
