Docker Model Runner: Running Machine Learning Models with Docker
Pulling and Running Models
To use a model, start by pulling it from Docker Hub. Before you do that, make sure the Docker Model Runner plugin is installed. On a Debian-based system, you can install it with the following commands:
# Update package lists
apt-get update
# Install Docker Model Runner plugin (if not installed)
apt-get install docker-model-plugin
# Verify installation
docker model version
To pull a model, use the docker model pull command followed by the model's name and tag. For example, to pull Qwen3 Coder (a large language model fine-tuned for coding tasks), you would pull it from the ai namespace on Docker Hub:
docker model pull ai/qwen3-coder:30B
AI models are typically large files, so the pull process may take some time, depending on your internet speed.
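Once the pull completes, you can confirm the model is stored locally by listing the models Docker Model Runner knows about:
# List locally available models
docker model list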
After pulling the model, you can send it a test prompt using the docker model run command. For example:
docker model run ai/qwen3-coder:30B "Write a Python function that calculates the Fibonacci sequence."
The 30B tag indicates the size of the model (30 billion parameters in this case). A model of this size needs substantial memory: even quantized to 4 bits, 30 billion parameters occupy roughly 15 GB for the weights alone, so plan for a machine with at least 16 GB of RAM (ideally more) or a GPU with comparable VRAM.
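Besides the CLI, Docker Model Runner also exposes an OpenAI-compatible REST API, which is handy when you want to call the model from your own code. The exact URL depends on how Model Runner is configured on your machine; the sketch below assumes host-side TCP access is enabled on the default port 12434, so adjust the host, port, and path to match your setup:
# Minimal sketch: chat completion request against the OpenAI-compatible endpoint
# (assumes Model Runner is reachable at localhost:12434; adjust if your setup differs)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/qwen3-coder:30B",
        "messages": [
          {"role": "user", "content": "Write a Python function that calculates the Fibonacci sequence."}
        ]
      }'
The request and response bodies follow the familiar OpenAI chat completions format, so most OpenAI client libraries should work against this endpoint by pointing them at the Model Runner base URL.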