
Updates and recent posts about Arti
@kaptain shared a link, 4 weeks, 1 day ago
FAUN.dev()

Kubernetes v1.36 Sneak Peek

Kubernetes v1.36, coming in April 2026, will feature removals and deprecations, including the retirement of the Ingress NGINX project and the deprecation of .spec.externalIPs in Service. Additionally, the release will remove the gitRepo volume driver and introduce enhancements like fas..

@kaptain shared a link, 4 weeks, 1 day ago
FAUN.dev()

Broadcom Makes Its Pitch To Run Kubernetes On VMware VCF

Broadcom's $69 billion acquisition of virtualization pioneer VMware in late 2023 brought about significant price increases and a shift towards subscription-based licensing. The company aims to establish VMware Cloud Foundation (VCF) as the foundation for enterprise workloads gravitating towards priv..

@kaptain shared a link, 4 weeks, 1 day ago
FAUN.dev()

Docker Offload now Generally Available: The Full Power of Docker, for Every Developer, Everywhere.

Docker Offload is a managed cloud service that moves the container engine to Docker’s secure cloud, allowing developers to run Docker from any environment without changing their workflows. With Docker Offload, developers can keep using the same commands and workflows they are accustomed to in Docker..

@kaptain shared a link, 4 weeks, 1 day ago
FAUN.dev()

llm-d officially a CNCF Sandbox project

Google Cloud announced that the llm-d project has been accepted as a Cloud Native Computing Foundation (CNCF) Sandbox project. This collaboration with industry leaders like Red Hat, IBM Research, CoreWeave, and NVIDIA aims to provide a framework for any model, accelerator, or cloud. The introduction of GKE Inf..

@kala shared a link, 4 weeks, 1 day ago
FAUN.dev()

From zero to a RAG system: successes and failures

An engineer spun up an internal chat with a local LLaMA model via Ollama, a Python Flask API, and a Streamlit frontend. They moved off in-memory LlamaIndex to batch ingestion into ChromaDB (SQLite). Checkpoints and tolerant parsing went in to stop RAM disasters. Indexing produced 738,470 vectors (~54 GB). They..
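The batch-ingestion-with-checkpoints approach described above can be sketched as follows. This is a minimal illustration of the pattern, not the engineer's actual code: `embed`, `store`, and the checkpoint file format are stand-ins (a real pipeline would upsert into a ChromaDB collection instead of appending to a list).

```python
import json
import os

def ingest_in_batches(documents, embed, store, checkpoint_path, batch_size=256):
    """Ingest documents in fixed-size batches, persisting a checkpoint
    after each batch so an interrupted run can resume where it left off,
    without re-embedding everything or holding all vectors in RAM."""
    start = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            start = json.load(f)["next_index"]  # resume after a crash
    for i in range(start, len(documents), batch_size):
        batch = documents[i:i + batch_size]
        vectors = [embed(doc) for doc in batch]  # only one batch in memory
        store.extend(zip(batch, vectors))        # stand-in for a vector-DB upsert
        with open(checkpoint_path, "w") as f:
            json.dump({"next_index": i + len(batch)}, f)
    return len(store)
```

At ~738k vectors, the checkpoint is what turns an out-of-memory crash from "start over" into "resume at the last completed batch".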

@kala shared a link, 4 weeks, 1 day ago
FAUN.dev()

Why we're rethinking cache for the AI era

Cloudflare data shows that 32% of network traffic originates from automated traffic, including AI assistants fetching data for responses. AI bots often issue high-volume requests and access rarely visited content, impacting cache efficiency. Cloudflare researchers propose AI-aware caching algorithms..
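One way bot-driven long-tail traffic hurts caches is that one-off fetches evict genuinely popular content. A classic countermeasure is an admission filter: only cache an object on its second request. The sketch below illustrates that idea; it is a hypothetical example of the problem being addressed, not Cloudflare's proposed algorithm.

```python
from collections import OrderedDict

class TwoHitCache:
    """LRU cache with a two-hit admission filter: an object is admitted
    only on its second request, so one-off fetches (typical of crawlers
    hitting rarely visited URLs) never displace popular entries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # admitted entries, in LRU order
        self.seen_once = set()     # keys requested exactly once so far

    def get(self, key, fetch):
        if key in self.data:
            self.data.move_to_end(key)        # refresh LRU position
            return self.data[key]
        value = fetch(key)                    # cache miss: go to origin
        if key in self.seen_once:
            self.seen_once.discard(key)
            self.data[key] = value            # second request: admit
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least-recently-used
        else:
            self.seen_once.add(key)           # first request: remember only
        return value
```

A production filter would bound `seen_once` with a Bloom filter or frequency sketch rather than an unbounded set; the point here is only that admission policy, not eviction policy, is what one-hit-wonder traffic stresses.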

@kala shared a link, 4 weeks, 1 day ago
FAUN.dev()

Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter

Built from Gemini 3 research and technology, Gemma 4 offers maximum compute and memory efficiency for mobile and IoT devices. Develop autonomous agents, multimodal applications, and multilingual experiences with Gemma 4's unprecedented intelligence-per-parameter...

@kala shared a link, 4 weeks, 1 day ago
FAUN.dev()

Qwen3.6-Plus: Towards Real World Agents

Qwen3.6-Plus, the latest release following the Qwen3.5 series, offers enhanced agentic coding capabilities and sharper multimodal reasoning. The model excels in frontend web development and complex problem-solving, setting a new standard in the developer ecosystem. Qwen3.6-Plus is available via Alibaba ..

@kala shared a link, 4 weeks, 1 day ago
FAUN.dev()

State of Context Engineering in 2026

Context engineering has evolved in the AI engineering field since mid-2025 with the introduction of patterns for managing context effectively. These patterns include progressive disclosure, compression, routing, retrieval strategies, and tool management, each addressing a different dimension of the ..
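Of the patterns listed above, progressive disclosure is the easiest to show concretely: every tool contributes only a one-line summary to the prompt, and full schemas are expanded only for tools the model has actually selected, subject to a budget. This is a hypothetical sketch of the pattern, not any specific framework's API.

```python
def build_context(tools, selected, budget_chars):
    """Progressive disclosure: all tools get a one-line summary, but full
    schemas are expanded only for selected tools, and only while the
    character budget allows it."""
    parts = [f"- {t['name']}: {t['summary']}" for t in tools]
    for t in tools:
        if t["name"] in selected:
            detail = f"{t['name']} schema: {t['schema']}"
            # expand detail only if it still fits the context budget
            if sum(len(p) for p in parts) + len(detail) <= budget_chars:
                parts.append(detail)
    return "\n".join(parts)
```

The other patterns (compression, routing, retrieval, tool management) apply the same trade: spend context on what the current step needs, summarize or defer the rest.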

@devopslinks shared a link, 4 weeks, 1 day ago
FAUN.dev()

RAM is getting expensive, so squeeze the most from it

The Register contrasts zram and zswap. It flags a patch that claims up to 50% faster zram ops. It notes Fedora enables zram by default. It details that zram provides compressed in-RAM swap (LZ4). zswap compresses pages before writing to disk and requires on-disk swap...

Arti is an official Tor Project initiative to rewrite the Tor client stack in Rust. Its primary goal is to address long-standing safety, reliability, and maintainability challenges inherent in the legacy C-based Tor implementation. By leveraging Rust’s strong compile-time guarantees for memory safety and concurrency, Arti eliminates entire classes of bugs that have historically affected Tor, including many security vulnerabilities.

Arti is architected as a modular, embeddable library rather than a monolithic application. This makes it easier for developers to integrate Tor networking capabilities directly into other applications, services, and platforms. From its earliest versions, Arti has supported multi-core cryptography, cleaner APIs, and a more maintainable internal design.

While early releases focused on client functionality such as bootstrapping, running as a SOCKS proxy, and routing traffic over the Tor network, the long-term roadmap includes full feature parity with the existing Tor client, support for onion services, anti-censorship mechanisms, and eventually Tor relay functionality. Arti represents the future foundation of the Tor ecosystem, prioritizing long-term security, developer velocity, and adaptability.