@faun shared a link, 2 months ago

We built an MCP server so Claude can access your incidents

Incident.io dropped an open-source MCP server in Go that plugs Claude into their API using the Model Context Protocol. That means Claude can now ask questions, spin up incidents, and dig into timelines—just by talking. The server translates Claude’s prompts into REST calls, turning AI babble into real..
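The core move—turning a model's tool call into a REST request—is simple to sketch. This is an illustrative shape only, not incident.io's actual code: the tool names, routes, and parameters below are made up for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// toolCall mirrors the shape of an MCP "tools/call" request:
// the model names a tool and passes structured arguments.
type toolCall struct {
	Name string            `json:"name"`
	Args map[string]string `json:"arguments"`
}

// toRequest translates a tool call into a REST request against an
// incident API. Paths and parameter names here are hypothetical.
func toRequest(tc toolCall, baseURL, apiKey string) (*http.Request, error) {
	var path string
	switch tc.Name {
	case "list_incidents":
		path = "/v2/incidents?status=" + tc.Args["status"]
	case "get_incident":
		path = "/v2/incidents/" + tc.Args["id"]
	default:
		return nil, fmt.Errorf("unknown tool %q", tc.Name)
	}
	req, err := http.NewRequest(http.MethodGet, baseURL+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	return req, nil
}

func main() {
	// A tool call as the model would emit it over MCP.
	raw := []byte(`{"name":"list_incidents","arguments":{"status":"open"}}`)
	var tc toolCall
	if err := json.Unmarshal(raw, &tc); err != nil {
		panic(err)
	}
	req, err := toRequest(tc, "https://api.example.com", "token")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

The real server layers auth, pagination, and write operations on top, but the translation step is this small.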

@faun shared a link, 2 months ago

Does platform engineering make sense for startups?

Platform engineering isn't just for the big dogs anymore. Startups are picking it up as a strategic edge, building tight, high-leverage tooling from day one. Think: templated CI/CD pipelines, plug-and-play infra modules, zero-handoff onboarding. Done right, these early bets smooth the path and keep d..

@faun shared a link, 2 months ago

Proton launches free standalone cross-platform Authenticator app

Proton just dropped Proton Authenticator, a free 2FA app that actually respects your privacy. It’s cross-platform, offline-friendly, and skips the usual junk—no ads, no trackers, no bait-and-lock-in. It’s got end-to-end encryption, a biometric lock, and lets you export TOTP seeds like it’s your data (b..

@faun shared a link, 2 months ago

AWS Lambda now supports GitHub Actions to simplify function deployment

AWS Lambda just got a smoother ride to prod. There’s now a native GitHub Actions integration—no more DIY scripts to ship your serverless. On commit, the new action packages your code, wires up IAM via OIDC, and deploys using either .zip bundles or containers. All from a tidy, declarative GitHub workfl..
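A workflow along these lines is what the integration enables. The OIDC permissions and `configure-aws-credentials` step are standard GitHub Actions practice; the deploy action's exact name and inputs are assumptions here—check AWS's announcement and the marketplace listing before copying.

```yaml
name: deploy-lambda
on:
  push:
    branches: [main]

permissions:
  id-token: write   # required for OIDC federation with AWS
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Exchange a GitHub OIDC token for short-lived AWS credentials;
      # the role ARN below is a placeholder.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/lambda-deploy
          aws-region: us-east-1

      # The new first-party deploy step. Action name and input names
      # are illustrative and may differ from the shipped action.
      - uses: aws-actions/aws-lambda-deploy@v1
        with:
          function-name: my-function
          code-artifacts-dir: ./src
```

The point of the OIDC route is that no long-lived AWS access keys ever live in your repo secrets.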

@faun shared a link, 2 months ago

Who does the unsexy but essential work for open source?

Oracle led the line-count race in the Linux 6.1 kernel release—beating out flashier open source names. Most of its work isn’t headline material. It’s deep-core stuff: memory management tweaks, block device updates, the quiet machinery real systems run on...

@faun shared a link, 2 months ago

Pinterest Uncovers Rare Search Failure During Migration to Kubernetes

Pinterest hit a weird one-in-a-million query mismatch during its search infra move to Kubernetes. The culprit? A slippery timing bug. To catch it, engineers pulled out every trick—live traffic replays, their own diff tools, hybrid rollouts layered on both the legacy and K8s stacks. Painful, but it ..

@faun shared a link, 2 months ago

Terraform Validate Disagrees with Terraform Docs

Terraform’s CLI will throw errors on configs that match the docs—because your local provider schema might be stale or out of sync. Docs follow the latest release. Your machine might not. So even supported fields can break validation. Love that for us...
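One practical mitigation: pin the provider to the release whose docs you're actually reading, then refresh the cached schema. A minimal sketch (the `~> 5.0` constraint is just an example):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Pin to the release whose docs you're reading, so
      # `terraform validate` checks against the same schema.
      version = "~> 5.0"
    }
  }
}

# After bumping the constraint, refresh the locally cached
# provider schema and lock file:
#   terraform init -upgrade
```

Without `-upgrade`, `terraform init` keeps whatever version the lock file already recorded—which is exactly how the docs and your CLI drift apart.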

@faun shared a link, 2 months ago

How I Cut AWS Compute Costs by 70% with a Multi-Arch EKS Cluster and Karpenter

Swapping out the Kubernetes Cluster Autoscaler for Karpenter cut node launch times to under 20 seconds and dropped compute bills by 70%. The secret sauce? Smarter, faster spot instance scaling. Bonus perks: architecture-aware scheduling for multi-arch (ARM64/x86) workloads—more performance, better utilizati..
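The multi-arch part boils down to a Karpenter NodePool that is allowed to satisfy pods from either CPU architecture, on spot capacity. A minimal sketch against the Karpenter v1 API—the NodePool name and the assumed `default` EC2NodeClass are placeholders, not the article's actual config:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: multi-arch
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default   # assumes an EC2NodeClass named "default" exists
      requirements:
        # Let Karpenter pick the cheapest fit across both architectures.
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
        # Spot capacity is the main driver of the cost savings.
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
```

Workloads that must stay on one architecture can still pin themselves with a `kubernetes.io/arch` nodeSelector; Karpenter provisions accordingly.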

@faun shared a link, 2 months ago

Scale AI/ML Workloads with Amazon EKS: Up to 100K Nodes

Amazon EKS just leveled up—clusters can now run with up to 100,000 nodes, with support for Kubernetes 1.30 and up. That's not just big—it’s AI-and-ML-scale big. Cluster setup got a lot less manual, too. The AWS Console’s "auto mode" auto-builds your VPC and IAM configs. eksctl plugs right into the flow...

@faun shared a link, 2 months ago

Building a RAG chat-based assistant on Amazon EKS Auto Mode and NVIDIA NIMs

AWS and NVIDIA just dropped a full-stack recipe for running Retrieval-Augmented Generation (RAG) on Amazon EKS Auto Mode—built on top of NVIDIA NIM microservices. It's LLMs on Kubernetes, but without the hair-pulling. Inference? GPU-accelerated. Embeddings? Covered. Vector search? Handled by Amazon Op..
