
Updates and recent posts about Ollama.
That's all about @Ollama; explore more posts below.
Story
@laura_garcia shared a post, 35 minutes ago
Software Developer, RELIANOID

๐—›๐—ฎ๐—ฐ๐—ธ ๐—ฆ๐—ฝ๐—ฎ๐—ฐ๐—ฒ ๐—–๐—ผ๐—ป ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฒ

๐Ÿš€ ๐—›๐—ฎ๐—ฐ๐—ธ ๐—ฆ๐—ฝ๐—ฎ๐—ฐ๐—ฒ ๐—–๐—ผ๐—ป ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฒ ๐Ÿ“ Kennedy Space Center ๐Ÿ“… May 6โ€“9, 2026 ๐™’๐™๐™š๐™ง๐™š ๐™˜๐™ฎ๐™—๐™š๐™ง๐™จ๐™š๐™˜๐™ช๐™ง๐™ž๐™ฉ๐™ฎ ๐™ข๐™š๐™š๐™ฉ๐™จ ๐™จ๐™ฅ๐™–๐™˜๐™š ๐™ž๐™ฃ๐™ฃ๐™ค๐™ซ๐™–๐™ฉ๐™ž๐™ค๐™ฃ. Hack Space Con is not your typical event โ€” itโ€™s where cybersecurity, aerospace, and advanced technologies converge to shape the future of security beyond Earth. ๐Ÿ” ๐—ช๐—ต๐—ฎ๐˜ ๐˜๐—ผ ๐—ฒ๐˜…๐—ฝ๐—ฒ๐—ฐ๐˜: - Hands-on techn..

News FAUN.dev() Team
@devopslinks shared an update, 4 hours ago
FAUN.dev()

Ubuntu's Next Chapter: Local AI, Confined Agents, and a Bet Against the Cloud-First OS

Ubuntu Ollama Snap

Ubuntu is getting local AI as a native capability over the next year, with inference snaps that install models like any other package, AI-powered accessibility features, and confined agentic workflows for both desktops and server fleets. Canonical is betting on open weight models, local-by-default inference, and snap confinement, a deliberate counter to the cloud-first AI direction Microsoft, Apple, and Google are taking with their operating systems.

Activity
@devopslinks added a new tool Snap, 5 hours, 8 minutes ago.
Activity
@kala added a new tool Ollama, 5 hours, 22 minutes ago.
Activity
@backlinksaimnxt started using tool Python, 6 hours, 13 minutes ago.
Activity
@backlinksaimnxt started using tool Analysys Ark, 6 hours, 13 minutes ago.
Story Keploy Team
@sancharini shared a post, 2 days, 2 hours ago

Building Automated Regression Testing From Scratch: A Complete Walkthrough

Learn how to build automated regression testing from scratch in 4-6 weeks. Step-by-step walkthrough covering phases, implementation, tools, and avoiding mistakes.

Story
@elsie-rainee shared a post, 2 days, 2 hours ago
DevOps Engineer, Freelancer

Android Architecture: Components, Patterns & Best Practices Guide

Learn Android architecture with components, patterns, and best practices to build mobile apps that are scalable, easy to maintain, and high-performing.

Story
@viktoriiagolovtseva shared a post, 3 days ago

Online event planning template

Planning a webinar, workshop, or team-wide event in Jira? You're not alone. When you're managing internal demos, customer-facing webinars, or company-wide town halls, event coordination takes effort and often involves stakeholders across departments.

Missed deadlines, unclear responsibilities, or last-minute changes can turn even a small event into a major time sink. But there's good news: you can streamline your event workflows using the tools your team already uses.

Instead of juggling spreadsheets, emails, and calendar invites, create a customizable event planning template in Jira. It brings everything into one place, supports collaboration, and helps you keep track of dependencies, deliverables, and last-minute requests in real time.

Story
@viktoriiagolovtseva shared a post, 3 days ago

Performance Review Template That Actually Works

Hiring the right person is only half the equation; helping them grow is the other.

Ollama is an open source tool for running large language models locally on your own machine. It packages model weights, configuration, and a runtime into a single binary with a simple CLI, letting developers pull and run models like Llama, Mistral, or Qwen with one command (`ollama run <model>`). It exposes an HTTP API compatible with parts of the OpenAI spec, which makes it easy to swap into existing tooling. Ollama is one of the most popular entry points for local LLM inference, particularly on macOS and Linux developer machines.
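
To make that CLI-plus-API workflow concrete, here is a minimal sketch of calling a locally running Ollama server from Python. It assumes Ollama is already serving on its default port (11434) and that a model has been pulled beforehand (e.g. `ollama pull llama3`); the model name and prompt below are illustrative, not prescriptive.

```python
# Minimal sketch: querying a locally running Ollama server over its HTTP API.
# Assumption: Ollama is listening on the default port 11434 and the model
# named below has already been pulled with `ollama pull`.
import json
import urllib.request


def generate(prompt: str, model: str = "llama3") -> str:
    """Send one non-streaming generation request to the local Ollama API."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a token stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body.get("response", "")


if __name__ == "__main__":
    print(generate("Explain what local LLM inference means in one sentence."))
```

Because the server also exposes an OpenAI-compatible endpoint, existing OpenAI client code can often be pointed at the same local instance by changing only the base URL.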