Run LLM Models Locally with Docker
Byte Sized Updates about AI & Tech

Good morning! This week has felt a year long. I'm pretty confident that if I still had hair, I would've lost it again.
All the best things come from downturns. Let's focus on the positives and get to work.
Quick hitters, let's go!
Docker has been busy lately.
Meet Docker Model Runner:
- Run AI models with the tools you're using today
- GPU acceleration on Apple Silicon
- Pull models as OCI artifacts
- No extra setup, just dev
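If you want to kick the tires, the workflow looks roughly like this. A minimal sketch: the `ai/smollm2` model name is just an example from Docker Hub's AI namespace, and exact subcommands may vary with your Docker Desktop version.

```shell
# Pull a model, distributed as an OCI artifact, from Docker Hub
docker model pull ai/smollm2

# See which models are available locally
docker model list

# Send a one-shot prompt to the model
docker model run ai/smollm2 "Explain OCI artifacts in one sentence."
```

Because models ship as OCI artifacts, they flow through the same registries and tooling you already use for container images.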
Oh wait, wrong decade. You may have heard of the recent MCP server craze to standardize how LLMs interface with applications. Well, Google just slipped in another standard, A2A, setting the stage for a standards battle.
Put the coffee on, as this will be an interesting battle.
Killing them softly comes to mind, but for some, it is more drastic. The new Google AI Overview answers your search question directly and pushes all the other results down the page. This is good for Google but bad for anyone trying to rank in its search results.
This week has been all about making ChatGPT image action figures. If you want to try it yourself, check out the steps below:

That's it for this week. I'm off next week and plan to deploy vacation mode for a few days!
…That’s this week’s theByte newsletter!
-Brian