
Content

Updates and recent posts about vLLM.
Story
@laura_garcia shared a post, 1 month, 1 week ago
Software Developer, RELIANOID

🚀 RELIANOID at DevOpsDays Tel Aviv 2025

📅 December 11, 2025 • 📍 Tel Aviv, Israel What a week ahead! Our team is working full-throttle as we prepare to attend three major events in just a few days — and we’re thrilled to add DevOpsDays Tel Aviv to the list. We’ll be joining the community to share how RELIANOID helps DevOps and platform tea..

devopsdays telaviv relianoid
Story
@laura_garcia shared a post, 1 month, 1 week ago
Software Developer, RELIANOID

🛡️ RELIANOID at Black Hat Europe 2025

📅 December 8–11, 2025 • 📍 London, UK RELIANOID is heading to Black Hat Europe 2025, the premier global event for cutting-edge cybersecurity research and innovation. We’ll be in London showcasing how our high-performance ADCs, intelligent proxy architecture, and automated security capabilities help e..

black hat europe london 2025 relianoid
Link
@anjali shared a link, 1 month, 1 week ago
Customer Marketing Manager, Last9

OTel Updates: Unroll Processor Now in Collector Contrib

The OTel unroll processor splits bundled log records into individual events. Now in Collector Contrib v0.137.0 for VPC and CloudWatch logs.

Unroll Processor
Story
@laura_garcia shared a post, 1 month, 1 week ago
Software Developer, RELIANOID

Tesco’s latest outage is a reminder: uptime IS the customer experience.

Shoppers across the UK faced checkout failures, broken order updates, and Clubcard access issues as Tesco’s digital platforms suffered “intermittent” instability. In modern retail, even brief disruptions damage trust, loyalty, and sales. At RELIANOID, we help retailers stay resilient with intelligen..

tesco outage
Link
@anjali shared a link, 1 month, 1 week ago
Customer Marketing Manager, Last9

Instrumentation: Getting Signals In

See how instrumentation in OpenTelemetry helps track app issues, learn the difference between automatic and manual instrumentation, and when to use each.

otel_metrics_quarkus
Activity
@devopslinks added a new tool, Syft, 1 month, 1 week ago.
Activity
@kaptain added a new tool, KubeLinter, 1 month, 1 week ago.
Activity
@devopslinks added a new tool, Grype, 1 month, 1 week ago.
Activity
@kaptain added a new tool, Hadolint, 1 month, 1 week ago.
Activity
@varbear added a new tool, Bandit, 1 month, 1 week ago.
vLLM is an open-source framework for serving and running large language models efficiently at scale. Developed by researchers and engineers at UC Berkeley and adopted widely across the AI industry, vLLM optimizes inference performance through its PagedAttention mechanism, a memory-management scheme that all but eliminates waste in GPU KV-cache memory. It supports tensor parallelism, pipeline parallelism, and continuous batching across GPUs, making it well suited to real-world deployment of foundation models. vLLM integrates with Hugging Face Transformers, exposes an OpenAI-compatible API, and works with popular orchestration tools such as Ray Serve and Kubernetes. Its design lets developers and enterprises host LLMs with lower latency, reduced hardware costs, and higher throughput, powering everything from chatbots to enterprise-scale AI services.
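Since vLLM exposes an OpenAI-compatible API, a running server (for example, one started with `vllm serve <model>`) can be queried with nothing but the Python standard library. The sketch below builds a request for the server's `/v1/completions` endpoint; the base URL, port, and model name are placeholders to be replaced with your own deployment's values.

```python
import json
import urllib.request


def build_completion_request(base_url: str, model: str, prompt: str,
                             max_tokens: int = 64) -> urllib.request.Request:
    """Build an OpenAI-style /v1/completions request for a vLLM server."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Placeholder endpoint and model name; adjust for your deployment.
req = build_completion_request("http://localhost:8000", "my-model", "Hello")
# With a live server, send it with:
# resp = json.load(urllib.request.urlopen(req))
```

Because the server speaks the OpenAI wire format, the same request shape also works with official OpenAI client libraries pointed at the vLLM base URL.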