
Building a cost-effective logging platform using ClickHouse for petabyte scale

Zomato's production systems generated over 50 TB of uncompressed logs per day, peaking at 150 million log lines per minute. To handle that volume, the team migrated from Elasticsearch to ClickHouse, leveraging its horizontal scalability and low query latency. Custom Golang workers batched log insertions efficiently, ran on AWS spot instances to cut costs, and wrote to a semi-structured schema that kept data management flexible. Query throttling and detailed monitoring rounded out the platform, keeping it performant and resilient.
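
The post summarizes the architecture without code, but the batching worker it describes can be sketched in Go. The sketch below assumes the official clickhouse-go v2 client; the LogEntry struct, the logs table, and its columns are illustrative placeholders, not Zomato's actual schema. Entries accumulate in memory and are flushed when the buffer fills or a timer fires, the usual way to keep ClickHouse insert frequency low and batch sizes large:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/ClickHouse/clickhouse-go/v2"
	"github.com/ClickHouse/clickhouse-go/v2/lib/driver"
)

// LogEntry is a hypothetical, simplified log record; the real
// pipeline would carry a semi-structured payload.
type LogEntry struct {
	Timestamp time.Time
	Service   string
	Message   string
}

// runBatcher drains entries from ch and flushes them to ClickHouse
// whenever the buffer reaches maxBatch entries or maxWait elapses,
// whichever comes first. A few large inserts beat many small ones in
// ClickHouse, which is the point of batching here.
func runBatcher(ctx context.Context, conn driver.Conn, ch <-chan LogEntry, maxBatch int, maxWait time.Duration) {
	buf := make([]LogEntry, 0, maxBatch)
	ticker := time.NewTicker(maxWait)
	defer ticker.Stop()

	flush := func() {
		if len(buf) == 0 {
			return
		}
		// PrepareBatch buffers rows client-side and ships them
		// as a single insert on Send.
		batch, err := conn.PrepareBatch(ctx, "INSERT INTO logs (timestamp, service, message)")
		if err != nil {
			log.Printf("prepare batch: %v", err)
			return
		}
		for _, e := range buf {
			if err := batch.Append(e.Timestamp, e.Service, e.Message); err != nil {
				log.Printf("append row: %v", err)
			}
		}
		if err := batch.Send(); err != nil {
			log.Printf("send batch: %v", err)
		}
		buf = buf[:0]
	}

	for {
		select {
		case e, ok := <-ch:
			if !ok { // producer closed the channel: final flush
				flush()
				return
			}
			buf = append(buf, e)
			if len(buf) >= maxBatch {
				flush()
			}
		case <-ticker.C: // flush on timeout so quiet periods still drain
			flush()
		case <-ctx.Done():
			flush()
			return
		}
	}
}

func main() {
	conn, err := clickhouse.Open(&clickhouse.Options{Addr: []string{"localhost:9000"}})
	if err != nil {
		log.Fatal(err)
	}
	entries := make(chan LogEntry, 10_000)
	go runBatcher(context.Background(), conn, entries, 50_000, 5*time.Second)
	entries <- LogEntry{Timestamp: time.Now(), Service: "api", Message: "request served"}
	close(entries)
	time.Sleep(time.Second) // give the worker a moment to flush before exit
}
```

In the setup the post describes, many such workers would run in parallel on spot instances, so losing one to a spot reclaim costs at most its current in-memory buffer.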

