Join us

Content
Updates and recent posts about vLLM.
Activity
@adrian_schmidt started using tool Amazon S3, 1 week ago.
Activity
@adrian_schmidt started using tool Amazon EC2, 1 week ago.
Activity
@adrian_schmidt started using tool Amazon CloudFront, 1 week ago.
Activity
@adrian_schmidt started using tool Amazon ALB, 1 week ago.
Story
@laura_garcia shared a post, 1 week ago
Software Developer, RELIANOID

๐—•๐—ฒ๐˜๐˜ ๐—•๐—ฟ๐—ฎ๐˜€๐—ถ๐—น ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฒ

๐Ÿ“ Sรฃo Paulo, Brazil ๐Ÿ“… May 5โ€“8, 2026 ๐—ฅ๐—˜๐—Ÿ๐—œ๐—”๐—ก๐—ข๐—œ๐—— is heading to ๐—•๐—ฒ๐˜๐˜ ๐—•๐—ฟ๐—ฎ๐˜€๐—ถ๐—น ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฒ โ€” ๐˜ต๐˜ฉ๐˜ฆ ๐˜ญ๐˜ข๐˜ณ๐˜จ๐˜ฆ๐˜ด๐˜ต ๐˜Œ๐˜ฅ๐˜›๐˜ฆ๐˜ค๐˜ฉ ๐˜ฆ๐˜ท๐˜ฆ๐˜ฏ๐˜ต ๐˜ช๐˜ฏ ๐˜“๐˜ข๐˜ต๐˜ช๐˜ฏ ๐˜ˆ๐˜ฎ๐˜ฆ๐˜ณ๐˜ช๐˜ค๐˜ข. ๐Ÿš€ 46,000+ professionals ๐Ÿ’ก 270+ companies ๐ŸŒ One shared goal: transforming education Letโ€™s talk about secure, scalable, and high-performance digital learning. ๐Ÿ‘‰ See you at Expo Cen..

bett_brazil_sao_paulo_2026_relianoid
Link
@koukibadr shared a link, 1 week ago
Mobile Developer, Nventive

LiveData vs StateFlow

LiveData and StateFlow both stream data reactively, but differ in two key ways:

Initialization: LiveData needs no initial value; StateFlow requires one.

Lifecycle: LiveData is lifecycle-aware by default; StateFlow is not, so you must collect it inside a repeatOnLifecycle block to avoid leaking collectors while the UI is stopped.
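
The two differences can be sketched in Kotlin. This is a minimal illustration, not a complete screen: it assumes AndroidX Lifecycle and kotlinx.coroutines are on the classpath, and `render()` is a hypothetical UI function.

```kotlin
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.lifecycleScope
import androidx.lifecycle.repeatOnLifecycle
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.launch

// Initialization: LiveData may start empty, StateFlow must be seeded.
val liveCount = MutableLiveData<Int>()   // no initial value needed
val flowCount = MutableStateFlow(0)      // initial value is mandatory

// Lifecycle: LiveData observation stops automatically with its owner;
// a StateFlow must be collected inside repeatOnLifecycle to get the
// same behavior (collection pauses when the UI leaves STARTED).
fun androidx.fragment.app.Fragment.observeBoth() {
    liveCount.observe(viewLifecycleOwner) { value -> render(value) }

    viewLifecycleOwner.lifecycleScope.launch {
        viewLifecycleOwner.repeatOnLifecycle(Lifecycle.State.STARTED) {
            flowCount.collect { value -> render(value) }
        }
    }
}

fun render(value: Int) { /* hypothetical UI update */ }
```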

Code templating
Story
@pramod_kumar_0820 shared a post, 1 week, 1 day ago
Software Engineer, Teknospire

How To Crack Senior Java Interviews (6–10 YOE) In 4 Weeks

Javadoc Searchspring

A practical 4-week roadmap to crack Senior Java Developer interviews (6–10 YOE), covering Core Java, Spring Boot internals, Microservices, System Design, and real-world interview strategies.

Senior Java Interviews (6–10 YOE) In 4 Weeks
Activity
@smh started using tool TypeScript, 1 week, 1 day ago.
Activity
@smh started using tool Terraform, 1 week, 1 day ago.
Activity
@smh started using tool Python, 1 week, 1 day ago.
vLLM is an open-source framework for serving large language models efficiently at scale. Developed by researchers and engineers at UC Berkeley and now widely adopted across the AI industry, it optimizes inference performance with PagedAttention, a memory-management scheme that stores the KV cache in fixed-size blocks so that almost no GPU memory is wasted on over-allocation. Combined with continuous batching and tensor parallelism across GPUs, this makes vLLM well suited to real-world deployment of foundation models.

vLLM integrates with Hugging Face Transformers, exposes an OpenAI-compatible API, and works with orchestration tools such as Ray Serve and Kubernetes. Its design lets developers and enterprises host LLMs with lower latency, lower hardware cost, and higher throughput, powering everything from chatbots to enterprise-scale AI services.
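
The memory-saving idea behind PagedAttention can be illustrated with a toy calculation. This is not vLLM's implementation, just a sketch of why block-based KV-cache allocation wastes far less memory than pre-allocating each request's maximum possible length; the block size and lengths are made-up numbers.

```python
# Toy sketch of the paged KV-cache idea behind vLLM's PagedAttention.
# Real vLLM manages GPU memory blocks; this just compares the waste.

BLOCK_SIZE = 16  # tokens of KV cache per block (illustrative)

def contiguous_waste(max_len: int, actual_len: int) -> int:
    """Tokens reserved but unused when a request pre-allocates
    KV-cache space for its maximum possible output length."""
    return max_len - actual_len

def paged_waste(actual_len: int, block_size: int = BLOCK_SIZE) -> int:
    """With block-based allocation only the last block can be
    partially full, so waste is at most block_size - 1 tokens."""
    remainder = actual_len % block_size
    return 0 if remainder == 0 else block_size - remainder

# A request allowed up to 2048 tokens that actually stops at 100:
print(contiguous_waste(2048, 100))  # 1948 tokens of cache sit idle
print(paged_waste(100))             # 12 tokens (100 = 6*16 + 4)
```

Because the per-sequence waste is bounded by a single block, many more concurrent sequences fit in the same GPU memory, which is what enables vLLM's continuous batching throughput.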