ContentPosts from @mnowak-devops..
Link
@faun shared a link, 6 months ago
FAUN.dev()

AI Runbooks for Google SecOps: Security Operations with Model Context Protocol

Google's MCP servers arm SecOps teams with direct control of security tools using LLMs.Now, analysts can skip the fluff and get straight to work—no middleman needed. The system ties runbooks to live data, offeringautomated, role-specific security measures. The result? A fusion of top-tier protocols .. read more  

AI Runbooks for Google SecOps: Security Operations with Model Context Protocol
Link
@faun shared a link, 6 months ago
FAUN.dev()

Vibe coding web frontend tests — from mocked to actual tests

Cursorwrestled with flaky tests, tangled in its over-reliance onXPath. A shift todata-testidfinally tamed the chaos. Though it tackled some UI tests, expired API tokens and timestamped transactions revealed its Achilles' heel... read more  

Vibe coding web frontend tests — from mocked to actual tests
Link
@faun shared a link, 6 months ago
FAUN.dev()

Poison everywhere: No output from your MCP server is safe

Anthropic's MCPmakes LLMs groove with real-world tools but leaves the backdoor wide open for mischief. Full-Schema Poisoning (FSP) waltzes across schema fields like it owns the place.ATPAsneaks in by twisting tool outputs, throwing off detection like a pro magicians’ misdirection. Keep your eye on t.. read more  

Poison everywhere: No output from your MCP server is safe
Link
@faun shared a link, 6 months ago
FAUN.dev()

Meta Introduces LlamaRL: A Scalable PyTorch-Based Reinforcement Learning RL Framework for Efficient LLM Training at Scale

Reinforcement Learningfine-tunes large language models for better performance by adapting outputs based on structured feedback. Scaling RL for LLMs faces resource challenges due to massive computation, model sizes, and engineering problems like GPU idle time. Meta's LlamaRL is a PyTorch-based asynch.. read more  

Meta Introduces LlamaRL: A Scalable PyTorch-Based Reinforcement Learning RL Framework for Efficient LLM Training at Scale
Link
@faun shared a link, 6 months ago
FAUN.dev()

What execs want to know about multi-agentic systems with AI

Lack of resources kills agent teamwork in Multi-Agent Systems (MAS); clear roles and protocols rule the roost—plus a dash of rigorous testing and good AI behavior.Ignore bias, and your MAS could accidentally nudge e-commerce into the murky waters of socio-economic unfairness. Cue reputation hits and.. read more  

What execs want to know about multi-agentic systems with AI
Link
@faun shared a link, 6 months ago
FAUN.dev()

The AI 4-Shot Testing Flow

4-Shot Testing Flowfuses AI's lightning-fast knack for spotting issues with the human knack for sniffing out those sneaky, context-heavy bugs. Trim QA time and expenses. While AI tears through broad test execution, human testers sharpen the lens, snagging false positives/negatives before they slip t.. read more  

The AI 4-Shot Testing Flow
Link
@faun shared a link, 6 months ago
FAUN.dev()

GenAI Meets SLMs: A New Era for Edge Computing

SLMspower up edge computing with speed and privacy finesse. They master real-time decisions and steal the spotlight in cramped settings like telemedicine andsmart cities. On personal devices, they outdoLLMs—trimming the fat with model distillation and quantization. Equipped withONNXandMediaPipe, the.. read more  

Link
@faun shared a link, 6 months ago
FAUN.dev()

Automate Models Training: An MLOps Pipeline with Tekton and Buildpacks

Tekton plusBuildpacks: your secret weapon for training GPT-2 without Dockerfile headaches. They wrap your code in containers, ensuring both security and performance.Tekton Pipelineslean on Kubernetes tasks to deliver isolation and reproducibility. Together, they transform CI/CD for ML into something.. read more  

Automate Models Training: An MLOps Pipeline with Tekton and Buildpacks
Link
@faun shared a link, 6 months ago
FAUN.dev()

Disrupting malicious uses of AI: June 2025

OpenAI's June 2025 report, "Disrupting Malicious Uses of AI," is out. It highlights various cases where AI tools were exploited for deceptive activities, including social engineering, cyber espionage, and influence operations... read more  

Disrupting malicious uses of AI: June 2025
Link
@faun shared a link, 6 months ago
FAUN.dev()

How we’re responding to The New York Times’ data demands in order to protect user privacy

OpenAI is challenging a court order stemming from The New York Times' copyright lawsuit, which mandates the indefinite retention of user data from ChatGPT and API services. OpenAI contends this requirement violates user privacy commitments and sets a concerning precedent. While the company complies .. read more  

How we’re responding to The New York Times’ data demands in order to protect user privacy