Content: Updates and recent posts about vLLM.
 Activity
@devopslinks added a new tool, JFrog Xray, 1 month, 2 weeks ago.
 Activity
@devopslinks added a new tool, OWASP Dependency-Check, 1 month, 2 weeks ago.
 Activity
@varbear added a new tool, pre-commit, 1 month, 2 weeks ago.
 Activity
@devopslinks added a new tool, GitGuardian, 1 month, 2 weeks ago.
 Activity
@devopslinks added a new tool, detect-secrets, 1 month, 2 weeks ago.
 Activity
@devopslinks added a new tool, Gitleaks, 1 month, 2 weeks ago.
Course
@eon01 published a course, 1 month, 2 weeks ago
Founder, FAUN.dev

DevSecOps in Practice

TruffleHog Flask NeuVector detect-secrets pre-commit OWASP Dependency-Check Docker checkov Bandit Hadolint Grype KubeLinter Syft GitLab CI/CD Trivy Kubernetes

A Hands-On Guide to Operationalizing DevSecOps at Scale

Story
@tairascott shared a post, 1 month, 2 weeks ago
AI Expert and Consultant, Trigma

How Do Large Language Models (LLMs) Work? An In-Depth Look

Discover how Large Language Models work through a clear, human-centered explanation. Learn about training, reasoning, and real-world applications, including Agentic AI development and LLM-powered solutions from Trigma.

Story
@laura_garcia shared a post, 1 month, 2 weeks ago
Software Developer, RELIANOID

🔐 RELIANOID at Gartner IAM Summit 2025 | Dec 8–10, Grapevine, TX

We’re heading to the Gartner Identity & Access Management Summit to showcase how RELIANOID’s intelligent proxy and ADC platforms empower modern IAM: enhancing Zero Trust enforcement, adaptive access, and hybrid/multi-cloud security. Join us to explore AI-driven automation, ITDR, and identity governance.

Link
@varbear shared a link, 1 month, 2 weeks ago
FAUN.dev()

Confessions of a Software Developer: No More Self-Censorship

A mid-career dev hits pause after ten years in the game, realizing core skills like polymorphism, SQL, and automated testing never quite clicked. Leadership roles, shipping products, mentoring junior devs - none of it filled those gaps. They'd been writing C#/.NET for a while too. Not out of love, just ..

vLLM is an open-source framework for serving and running large language models efficiently at scale. Developed by researchers and engineers at UC Berkeley and widely adopted across the AI industry, vLLM optimizes inference performance through its PagedAttention mechanism, a memory-management scheme that allocates the KV cache in fixed-size blocks on demand and so keeps GPU memory waste near zero. It supports tensor parallelism and continuous batching across GPUs, making it well suited to real-world deployment of foundation models. vLLM integrates with Hugging Face Transformers, exposes an OpenAI-compatible API, and works with popular orchestration tools like Ray Serve and Kubernetes. Its design lets developers and enterprises host LLMs with lower latency, lower hardware costs, and higher throughput, powering everything from chatbots to enterprise-scale AI services.
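The paging idea behind PagedAttention can be illustrated with a toy sketch: the KV cache is split into fixed-size blocks that are allocated only as a sequence actually grows, so memory is reserved per generated token rather than for a worst-case maximum length. This is a simplified illustration of the concept, not vLLM's actual implementation; the class and variable names are made up for the example (vLLM's default block size is 16 tokens).

```python
BLOCK_SIZE = 16  # tokens per KV-cache block (vLLM's default)

class BlockAllocator:
    """Pool of physical KV-cache blocks, handed out on demand."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))

    def allocate(self):
        if not self.free:
            raise MemoryError("KV cache exhausted")
        return self.free.pop()

    def release(self, block):
        self.free.append(block)

class Sequence:
    """A generating sequence that maps logical positions to physical blocks."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self):
        # A new block is needed only every BLOCK_SIZE tokens.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def free(self):
        # On completion, blocks return to the pool for other sequences.
        for block in self.block_table:
            self.allocator.release(block)
        self.block_table = []

alloc = BlockAllocator(num_blocks=64)
seq = Sequence(alloc)
for _ in range(40):  # generate 40 tokens
    seq.append_token()
print(len(seq.block_table))  # 3 blocks, i.e. ceil(40 / 16)
```

Because a sequence holds only ceil(tokens / BLOCK_SIZE) blocks at any moment, short or early-terminating requests never pin memory for tokens they never produce, which is what lets vLLM pack many more concurrent sequences onto one GPU.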