Recent posts and updates
Link
@kaptain shared a link, 1 week ago
FAUN.dev()

Experimenting with Gateway API using kind

A new guide shows how to run Gateway API locally with kind and cloud-provider-kind. It spins up a one-node Kubernetes cluster in Docker - complete with LoadBalancer Services and a Gateway API controller. Cloud vibes, zero cloud bill. Fire it up to deploy demo apps, test routing, or poke around with CRD e.. read more
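
If you want a feel for what that setup boils down to, here's a minimal sketch that just shells out to kind and kubectl. The cluster name and the Gateway API release pinned below are placeholders rather than values from the guide, and cloud-provider-kind still has to run alongside the cluster to back the LoadBalancer Services.

```python
# Minimal sketch of the local setup: a one-node kind cluster plus the Gateway API
# CRDs. Assumes kind, kubectl, and Docker are installed. The cluster name and the
# release version below are assumptions, not values taken from the article;
# cloud-provider-kind must run separately to provision LoadBalancer Services.
import subprocess

GATEWAY_API_VERSION = "v1.2.0"  # assumption: pin whichever release you target
STANDARD_INSTALL = (
    "https://github.com/kubernetes-sigs/gateway-api/releases/download/"
    f"{GATEWAY_API_VERSION}/standard-install.yaml"
)


def run(cmd: list[str]) -> None:
    """Run a command and fail loudly if it exits non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # kind defaults to a single control-plane node: the "one-node cluster in Docker".
    run(["kind", "create", "cluster", "--name", "gateway-demo"])
    # Install the Gateway API CRDs (GatewayClass, Gateway, HTTPRoute, ...).
    run(["kubectl", "apply", "-f", STANDARD_INSTALL])
```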

Dev Swag
@ByteVibe shared a product

No comment - Heavy Blend™ Hoodie

#developer #merchandise #swag

This unisex heavy blend Hooded Sweatshirt is relaxation itself. It's made with a thick blend of Cotton and Polyester, which makes it plush, soft and warm. The spacious Kangaroo Pocket adds daily pract...

Link
@kaptain shared a link, 1 week ago
FAUN.dev()

Cluster API v1.12: Introducing In-place Updates and Chained Upgrades

Cluster API v1.12.0 adds in-place updates and chained upgrades, so machines can swap parts without going down, and clusters can jump versions without drama. KubeadmControlPlane and MachineDeployments now choose between full rollouts or surgical patching, depending on what changed. The goal: keep clusters .. read more

Link
@kaptain shared a link, 1 week ago
FAUN.dev()

Ingress NGINX: Statement from the Steering and Security Response Committees

Kubernetes is cutting off Ingress NGINX in March 2026. No more updates. No bug fixes. No security patches. Done. Roughly half of cloud-native setups still rely on it, but it's been understaffed for years. If you're one of them, it's time to move. There's no plug-and-play replacement, but the ecosystem.. read more

Link
@kaptain shared a link, 1 week ago
FAUN.dev()

Run a Private Personal AI with Clawdbot + DMR

Clawdbot just plugged into Docker Model Runner (DMR). That means you can now run your own OpenAI-compatible assistant, locally, on your hardware. No cloud. No per-token fees. No data leaking into the void!.. read more
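
Because DMR speaks the OpenAI API, pointing a standard client at it is most of the trick. A minimal sketch, assuming DMR is listening on the host - the base URL, port, and model name here are placeholders, not details from the article:

```python
# Sketch of calling a local, OpenAI-compatible endpoint such as the one Docker
# Model Runner exposes. The base URL, port, and model reference are assumptions;
# check your own DMR setup for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed local DMR endpoint
    api_key="not-needed-locally",                  # local runners typically ignore the key
)

response = client.chat.completions.create(
    model="ai/llama3.2",  # assumed model reference; substitute whatever you pulled
    messages=[{"role": "user", "content": "Summarize what cloud-provider-kind does."}],
)
print(response.choices[0].message.content)
```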

Link
@kaptain shared a link, 1 week ago
FAUN.dev()

New Conversion from cgroup v1 CPU Shares to v2 CPU Weight

A new quadratic formula now maps cgroup v1 CPU shares to cgroup v2 CPU weight. Why? Because the old linear approach messed with CPU fairness, especially at low share values. This fix nails prioritization where it counts. It lands at the OCI runtime layer, live in runc v1.3.2 and crun v1.23, so containers f.. read more
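
For context on why low share values were a problem, here's a small sketch of the long-standing linear conversion the runtimes used before this change (the new quadratic formula itself isn't reproduced here). At the low end the linear map collapses everything toward weight 1, which is exactly the fairness distortion being fixed:

```python
# Sketch of the old linear cgroup v1 shares -> v2 weight conversion, to show the
# low-end distortion the new quadratic formula addresses. (The quadratic formula
# itself ships in runc v1.3.2 / crun v1.23 and is not reproduced here.)

def shares_to_weight_linear(shares: int) -> int:
    """Classic linear map: shares in [2, 262144] -> weight in [1, 10000]."""
    return int(1 + ((shares - 2) * 9999) / 262142)

for a, b in [(2, 1024), (100, 1024), (1024, 2048)]:
    wa, wb = shares_to_weight_linear(a), shares_to_weight_linear(b)
    print(f"shares {a}:{b} = {b / a:.0f}x  ->  weight {wa}:{wb} = {wb / wa:.1f}x")

# Approximate output:
# shares 2:1024 = 512x  ->  weight 1:39 = 39.0x   <- ratio badly compressed at the low end
# shares 100:1024 = 10x ->  weight 4:39 = 9.8x
# shares 1024:2048 = 2x ->  weight 39:79 = 2.0x
```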

Link
@kala shared a link, 1 week ago
FAUN.dev()

AWS Frontier Agents: Kiro, DevOps Agent, and Security Agent

“Frontier Agents” drop straight into incident workflows. They kick off investigations on their own, whether triggered by alarms or a human hand, pulling together logs, metrics, and deployment context fast. Findings show up where they're needed: Slack threads, tickets, operator dashboards. No shell c.. read more

Link
@kala shared a link, 1 week ago
FAUN.dev()

Is that allowed? Authentication and authorization in Model Context Protocol

The Model Context Protocol (MCP) 2025-11-25 spec tightens up remote agent auth. It leans into OAuth 2.1 Authorization Code grants, PKCE required, step-up auth backed. No token passthrough allowed. What's new: experimental extensions for client credentials and client ID metadata. These smooth out agent reg.. read more
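
The PKCE requirement is the concrete bit you can sketch without the spec in front of you. A minimal example of the PKCE half of an Authorization Code flow - endpoint URLs, client ID, and redirect URI are placeholders; a real MCP client would discover these from the server's authorization metadata:

```python
# Minimal sketch of PKCE in an OAuth 2.1 Authorization Code flow, the grant the
# MCP spec requires for remote agents. All URLs and IDs below are placeholders.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

# 1. The client generates a one-time verifier and derives the S256 challenge.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode()).digest())
    .rstrip(b"=")
    .decode()
)

# 2. The challenge rides along on the authorization request...
auth_url = "https://auth.example.com/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "my-mcp-client",                      # placeholder
    "redirect_uri": "http://localhost:8765/callback",  # placeholder
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})
print(auth_url)

# 3. ...and the verifier is sent only on the token exchange, so an intercepted
#    authorization code is useless without it - the whole point of PKCE.
token_request = {
    "grant_type": "authorization_code",
    "code": "<code returned to the redirect URI>",
    "code_verifier": code_verifier,
}
```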

Link
@kala shared a link, 1 week ago
FAUN.dev()

Securing Agents in Production (Agentic Runtime, #1)

Palantir's AIP Agentic Runtime isn't just another agent platform, it's a control plane with teeth. Think tight policy enforcement, ephemeral autoscaling with Kubernetes (Rubix), and memory stitched in from the jump via Ontology. Tool usage? Traced and locked down with provenance-based security. Every.. read more

Link
@kala shared a link, 1 week ago
FAUN.dev()

Keeping 20,000 GPUs healthy

Modal unpacked how it keeps a 20,000+ GPU fleet sane across AWS, GCP, Azure, and OCI. Think autoscaling, yes, but with some serious moves behind the curtain. They're running instance benchmarking, enforcing machine image consistency, running boot-time checks, and tracking GPU health both passively a.. read more
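
The post doesn't publish its check scripts, but the boot-time idea is simple enough to sketch: ask the driver what it sees and keep the node out of the scheduler if anything looks off. A rough, assumption-laden example - the GPU count and temperature threshold below are illustrative, not Modal's actual values:

```python
# Rough sketch of a boot-time GPU health check in the spirit of the post: query
# the driver via nvidia-smi and refuse to take work if expectations aren't met.
# EXPECTED_GPUS and MAX_BOOT_TEMP_C are made-up thresholds, not Modal's.
import subprocess
import sys

EXPECTED_GPUS = 8      # assumption: whatever the instance type should expose
MAX_BOOT_TEMP_C = 60   # assumption: a GPU already hot at boot is suspicious


def query_gpus() -> list[dict]:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,temperature.gpu,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    gpus = []
    for line in out.strip().splitlines():
        index, name, temp, mem = [f.strip() for f in line.split(",")]
        gpus.append({"index": int(index), "name": name,
                     "temp_c": int(temp), "mem_mib": int(mem)})
    return gpus


if __name__ == "__main__":
    gpus = query_gpus()
    problems = []
    if len(gpus) != EXPECTED_GPUS:
        problems.append(f"expected {EXPECTED_GPUS} GPUs, driver reports {len(gpus)}")
    problems += [f"GPU {g['index']} is {g['temp_c']}C at boot"
                 for g in gpus if g["temp_c"] > MAX_BOOT_TEMP_C]
    if problems:
        print("health check failed:", "; ".join(problems))
        sys.exit(1)  # non-zero exit keeps the node out of the scheduler
    print(f"{len(gpus)} GPUs healthy")
```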

Link
@devopslinks shared a link, 1 week ago
FAUN.dev()

Nanoservices: Why Serverless Got Architecture Right

A fresh take on AWS Lambda and serverless: think nanoservices - tiny, isolated functions instead of chunky microservices. No shared state or shared runtime, just clean separation, lean logic, and fewer ways to screw up scaling. Where microservices can spiral into spaghetti, nanoservices stay crisp. Each f.. read more
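
The shape of a nanoservice is easy to picture: one function, one job, its own handler, nothing shared with its neighbours. A sketch in the Lambda Python handler style - the event shape and function name are made up for illustration, not taken from the article:

```python
# Sketch of the nanoservice idea: a single-purpose, Lambda-style handler with no
# shared state or runtime. Event shape and names are assumptions for illustration.
import json


def resize_avatar_handler(event: dict, context) -> dict:
    """One job only: validate the request, produce the avatar path, return."""
    body = json.loads(event.get("body") or "{}")
    user_id = body.get("user_id")
    if not user_id:
        return {"statusCode": 400, "body": json.dumps({"error": "user_id required"})}

    # The actual work stays tiny and local to this function - nothing is shared
    # with other nanoservices, so scaling and failure are isolated per function.
    result = {"user_id": user_id, "avatar": f"avatars/{user_id}/128x128.png"}
    return {"statusCode": 200, "body": json.dumps(result)}
```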
