ContentPosts from @lukates..
Link
@faun shared a link, 2 months, 2 weeks ago

Poison everywhere: No output from your MCP server is safe

Anthropic's MCPmakes LLMs groove with real-world tools but leaves the backdoor wide open for mischief. Full-Schema Poisoning (FSP) waltzes across schema fields like it owns the place.ATPAsneaks in by twisting tool outputs, throwing off detection like a pro magicians’ misdirection. Keep your eye on t..

Poison everywhere: No output from your MCP server is safe
Link
@faun shared a link, 2 months, 2 weeks ago

Vibe coding web frontend tests — from mocked to actual tests

Cursorwrestled with flaky tests, tangled in its over-reliance onXPath. A shift todata-testidfinally tamed the chaos. Though it tackled some UI tests, expired API tokens and timestamped transactions revealed its Achilles' heel...

Vibe coding web frontend tests — from mocked to actual tests
Link
@faun shared a link, 2 months, 2 weeks ago

Meta reportedly in talks to invest billions of dollars in Scale AI

Metawants a piece of the$10 billion pieat Scale AI, diving headfirst into the largest private AI funding circus yet.Scale AI'srevenue? Projected to rocket from last year’s $870M to$2 billionthis year, thanks to some beefy partnerships and serious AI model boot camps...

Meta reportedly in talks to invest billions of dollars in Scale AI
Link
@faun shared a link, 2 months, 2 weeks ago

Modern Test Automation with AI(LLM) and Playwright MCP (Model Context Protocol)

GenAI and Playwright MCP are shaking up test automation. Think natural language scripts and real-time adaptability, kicking flaky tests to the curb.But watch your step:security risks lurk, server juggling causes headaches, and dynamic UIs refuse to play nice...

Link
@faun shared a link, 2 months, 2 weeks ago

Automate Models Training: An MLOps Pipeline with Tekton and Buildpacks

Tekton plusBuildpacks: your secret weapon for training GPT-2 without Dockerfile headaches. They wrap your code in containers, ensuring both security and performance.Tekton Pipelineslean on Kubernetes tasks to deliver isolation and reproducibility. Together, they transform CI/CD for ML into something..

Automate Models Training: An MLOps Pipeline with Tekton and Buildpacks
Link
@faun shared a link, 2 months, 2 weeks ago

GenAI Meets SLMs: A New Era for Edge Computing

SLMspower up edge computing with speed and privacy finesse. They master real-time decisions and steal the spotlight in cramped settings like telemedicine andsmart cities. On personal devices, they outdoLLMs—trimming the fat with model distillation and quantization. Equipped withONNXandMediaPipe, the..

Link
@faun shared a link, 2 months, 2 weeks ago

The AI 4-Shot Testing Flow

4-Shot Testing Flowfuses AI's lightning-fast knack for spotting issues with the human knack for sniffing out those sneaky, context-heavy bugs. Trim QA time and expenses. While AI tears through broad test execution, human testers sharpen the lens, snagging false positives/negatives before they slip t..

The AI 4-Shot Testing Flow
Link
@faun shared a link, 2 months, 2 weeks ago

BenchmarkQED: Automated benchmarking of RAG systems

BenchmarkQEDtakes RAG benchmarking to another level. ImagineLazyGraphRAGsmashing through competition—even when wielding a hefty1M-tokencontext. The only hitch? It occasionally stumbles on direct relevance for local queries. But fear not,AutoQis in its corner, crafting a smorgasbord of synthetic quer..

Link
@faun shared a link, 2 months, 2 weeks ago

What execs want to know about multi-agentic systems with AI

Lack of resources kills agent teamwork in Multi-Agent Systems (MAS); clear roles and protocols rule the roost—plus a dash of rigorous testing and good AI behavior.Ignore bias, and your MAS could accidentally nudge e-commerce into the murky waters of socio-economic unfairness. Cue reputation hits and..

What execs want to know about multi-agentic systems with AI
Link
@faun shared a link, 2 months, 2 weeks ago

Disrupting malicious uses of AI: June 2025

OpenAI's June 2025 report, "Disrupting Malicious Uses of AI," is out. It highlights various cases where AI tools were exploited for deceptive activities, including social engineering, cyber espionage, and influence operations...

Disrupting malicious uses of AI: June 2025