Posts from @techish11
Link
@faun shared a link, 2 months, 1 week ago

Human coders are still better than LLMs

Antirez recounted a story from his work on Vector Sets for Redis, detailing a bug he encountered and how he worked toward a fix with a creative approach involving an LLM. He explored different methods to ensure link reciprocity and proposed a hashing solution that offered a balance between effic..
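The summary is cut off before the details, so the exact scheme isn't shown here. Below is a minimal Go sketch of one way to check link reciprocity with a hashing trick (a commutative XOR of per-edge hashes), not necessarily the approach Antirez settled on; the graph layout and function names are invented for illustration.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// edgeHash hashes the unordered node pair {a, b}, so that the link a->b and
// its reciprocal b->a hash to the same value.
func edgeHash(a, b uint64) uint64 {
	if a > b {
		a, b = b, a
	}
	h := fnv.New64a()
	var buf [16]byte
	binary.LittleEndian.PutUint64(buf[:8], a)
	binary.LittleEndian.PutUint64(buf[8:], b)
	h.Write(buf[:])
	return h.Sum64()
}

// reciprocalLikely XOR-accumulates the hash of every directed link. If every
// link a->b has a matching b->a, each pair hash appears an even number of
// times and the accumulator cancels to zero, so a zero result is a strong
// probabilistic signal of reciprocity using O(1) extra memory.
func reciprocalLikely(links map[uint64][]uint64) bool {
	var acc uint64
	for from, tos := range links {
		for _, to := range tos {
			acc ^= edgeHash(from, to)
		}
	}
	return acc == 0
}

func main() {
	graph := map[uint64][]uint64{
		1: {2, 3},
		2: {1},
		3: {1},
	}
	fmt.Println(reciprocalLikely(graph)) // true: every link is reciprocal

	graph[2] = nil // break reciprocity: 1->2 now has no 2->1
	fmt.Println(reciprocalLikely(graph)) // false (with high probability)
}
```

The appeal of this kind of check is that it trades certainty for memory: it can miss pathological cases where a missing pair cancels against another, but it never needs to materialize the full edge set.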

Link
@faun shared a link, 2 months, 1 week ago

Architecting Gen AI-Powered Microservices: The Unwritten Playbook

Plugging Gen AI into microservices isn't just a task. It's an adventure in tech wizardry. Get cozy with messaging queues, prompt caching, and the relentless art of watching it all in production...
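To make the prompt-caching piece concrete, here is a minimal in-process sketch in Go. The promptCache type and the fakeModel callback are invented for illustration; a production service would more likely keep the cache in Redis or lean on a provider-side prompt cache.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
	"sync"
)

// promptCache memoizes model completions keyed by a hash of the normalized
// prompt, so repeated requests skip the slow, costly model call.
type promptCache struct {
	mu    sync.RWMutex
	store map[string]string
}

func newPromptCache() *promptCache {
	return &promptCache{store: make(map[string]string)}
}

func (c *promptCache) key(prompt string) string {
	sum := sha256.Sum256([]byte(strings.TrimSpace(strings.ToLower(prompt))))
	return hex.EncodeToString(sum[:])
}

// Complete returns a cached answer when available; otherwise it calls the
// provided model function and stores the result.
func (c *promptCache) Complete(prompt string, model func(string) string) string {
	k := c.key(prompt)

	c.mu.RLock()
	if out, ok := c.store[k]; ok {
		c.mu.RUnlock()
		return out
	}
	c.mu.RUnlock()

	out := model(prompt) // the expensive LLM call goes here
	c.mu.Lock()
	c.store[k] = out
	c.mu.Unlock()
	return out
}

func main() {
	cache := newPromptCache()
	fakeModel := func(p string) string { return "answer for: " + p }

	fmt.Println(cache.Complete("Summarize this invoice", fakeModel))  // model call
	fmt.Println(cache.Complete("summarize this invoice ", fakeModel)) // cache hit
}
```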

Link
@faun shared a link, 2 months, 1 week ago

Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites

UNC6032 swindled millions by spinning a tangled web of fake "AI video generator" sites. They slipped Python-based infostealers right under our noses, using social media ads as their Trojan horses. Meta’s ad transparency pulled back the curtain on over 30 malicious sites, yet the sneaky STARKVEIL malware c..

Link
@faun shared a link, 2 months, 1 week ago

Why GCP Load Balancers Struggle with Stateful LLM Traffic — and How to Fix It

Deploying LLMs on GCP Load Balancers is like fitting a square peg in a round hole. These models aren't stateless, so skip HTTP, go straight for TCP Load Balancing. Toss in Redis to keep those sessions on a leash. Tweak load balancer settings to dodge mid-stream socket calamities. Embrace the power of GK..
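The post itself is truncated, but the Redis-for-sessions idea can be sketched roughly: pin each LLM session to one backend and record the mapping in Redis, so any proxy replica routes a returning session the same way. This sketch assumes the go-redis v9 client and a local Redis; the sessionRouter type, placement policy, and backend addresses are made up for illustration.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// sessionRouter pins each LLM session to one backend so a long-lived,
// stateful conversation keeps hitting the replica that holds its context.
type sessionRouter struct {
	rdb      *redis.Client
	backends []string
	ttl      time.Duration
}

// Backend returns the backend already assigned to sessionID, or assigns one
// (here: a cheap hash over the ID) and records the mapping in Redis with a TTL.
func (r *sessionRouter) Backend(ctx context.Context, sessionID string) (string, error) {
	key := "llm:session:" + sessionID

	if addr, err := r.rdb.Get(ctx, key).Result(); err == nil {
		return addr, nil // existing pin
	} else if !errors.Is(err, redis.Nil) {
		return "", err
	}

	// Pick a backend deterministically; any placement policy would do.
	var sum int
	for _, c := range sessionID {
		sum += int(c)
	}
	addr := r.backends[sum%len(r.backends)]

	// SetNX so two proxies racing on the same new session agree on one pin.
	if err := r.rdb.SetNX(ctx, key, addr, r.ttl).Err(); err != nil {
		return "", err
	}
	return r.rdb.Get(ctx, key).Result()
}

func main() {
	router := &sessionRouter{
		rdb:      redis.NewClient(&redis.Options{Addr: "localhost:6379"}),
		backends: []string{"10.0.0.11:8000", "10.0.0.12:8000"},
		ttl:      30 * time.Minute,
	}
	addr, err := router.Backend(context.Background(), "chat-abc123")
	fmt.Println(addr, err)
}
```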

Link
@faun shared a link, 2 months, 1 week ago

LLMOps: DevOps Strategies for Deploying Large Language Models in Production

LLMOps shakes up the MLOps scene with tailor-made Kubernetes magic. It wrestles GPU scheduling, caching, and autoscaling for those behemoth LLM deployments. Keep an eye out for serverless endpoints and model meshes: smooth scaling and a wallet-friendly operation...
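As a rough illustration of the GPU-scheduling piece, the sketch below builds a Kubernetes Deployment in Go that requests one nvidia.com/gpu per replica using the k8s.io/api type packages. The image name and replica count are placeholders, and autoscaling (an HPA or a model mesh) is not shown.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// A Deployment for an LLM server that asks the scheduler for one GPU per
	// replica; an autoscaler (not shown) would adjust Replicas under load.
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "llm-server"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "llm-server"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "llm-server"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "server",
						Image: "example.com/llm-server:latest", // placeholder image
						Resources: corev1.ResourceRequirements{
							Limits: corev1.ResourceList{
								"nvidia.com/gpu": resource.MustParse("1"),
							},
						},
					}},
				},
			},
		},
	}
	fmt.Println(dep.Name, *dep.Spec.Replicas)
}
```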

Link
@faun shared a link, 2 months, 1 week ago

Want a humanoid, open source robot for just $3,000? Hugging Face is on it.

Hugging Face just pulled the curtain back on HopeJR, a humanoid robot that packs 66 degrees of freedom, at just $3,000. This price tag shames the $16,000 slapped on Unitree's G1. Together with The Robot Studio, they've created this robot with a dash of Bender's charisma. The kicker? It's fully open-sou..

Link
@faun shared a link, 2 months, 1 week ago

It’s not your imagination: AI is speeding up the pace of change

AI takes a victory lap: Mary Meeker reveals ChatGPT snagged 800 million users in a brisk 17 months. Meanwhile, the bean counters cheer as inference costs nosedived 99% in just two years. Profitability? That's still a cliffhanger...

Link
@faun shared a link, 2 months, 1 week ago

Perplexity offers training wheels for building AI agents

Perplexity Labs is your quick-draw tool for crafting apps and digital delights, powered by LLMs like GPT-4 Omni. It’s a star where others stumble: fast, project-driven tasks. Expect example-heavy insights and real-world project demos. While competitors dawdle, it delivers. Need deep web browsing, code..

Link
@faun shared a link, 2 months, 1 week ago

Using AI to outsmart AI-driven phishing scams

Phishing scams are growing craftier, employing AI like FraudGPT to weave through filters and masquerade as real emails, boosting scam rates by 70%. AI can unveil sneaky phishing patterns humans miss, but it loves a good panic. It often cries wolf with false alarms and needs a babysitter to adjust to eve..

Link
@faun shared a link, 2 months, 1 week ago

We rewrote large parts of our API in Go using AI: we are now ready to handle one billion databases

Turso overhauled its API with Go and AI, gunning for 1 billion databases. Think big, act smart. They squeezed every byte by adopting string interning. No more in-memory maps; they swapped them for a SQLite-backed LRU cache. The result? Leaner memory usage and hassle-free proxy bootstrapping...
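The string-interning idea boils down to a very small sketch. This is not Turso's actual implementation (the post is truncated here), just the general technique with an invented interner type; Turso pairs it with a SQLite-backed LRU cache, which is not shown.

```go
package main

import (
	"fmt"
	"sync"
)

// interner deduplicates identical strings so that, e.g., millions of records
// sharing the same org ID or region name hold one backing copy instead of many.
type interner struct {
	mu   sync.Mutex
	pool map[string]string
}

func newInterner() *interner {
	return &interner{pool: make(map[string]string)}
}

// Intern returns a canonical copy of s: the first occurrence is stored, and
// every later identical string gets the stored copy back.
func (it *interner) Intern(s string) string {
	it.mu.Lock()
	defer it.mu.Unlock()
	if canon, ok := it.pool[s]; ok {
		return canon
	}
	it.pool[s] = s
	return s
}

func main() {
	it := newInterner()
	a := it.Intern("us-east-1")
	b := it.Intern("us-east-1")
	fmt.Println(a == b) // true; both refer to the same interned value
}
```

For what it's worth, recent Go releases (1.23+) ship a standard-library unique package that provides a similar canonicalization primitive out of the box.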
