---
title: "💭 Ollama"
description: "!https://ollama.ai/"
date: 2023-10-14
published: true
tags:
  - llm
  - ai
  - thought
template: link
---


<div class="embed-card embed-card-external">
  <a href="https://ollama.ai/" class="embed-card-link" target="_blank" rel="noopener noreferrer">
    <div class="embed-card-image">
      <img src="https://ollama.com/public/og.png" alt="Ollama — Ollama is the easiest way to automate your work using open models, while keeping your data safe." loading="lazy">
    </div>
    <div class="embed-card-content">
      <div class="embed-card-title">Ollama</div>
      <div class="embed-card-description">Ollama is the easiest way to automate your work using open models, while keeping your data safe.</div>
      <div class="embed-card-meta">ollama.ai</div>
    </div>
  </a>
</div>


Ollama is the easiest-to-get-going local LLM tool I have tried, and it seems crazy fast. It feels faster than ChatGPT, which has not been my experience previously when running LLMs on my own hardware.


```bash
# Install (jpillora's installer proxy fetches the latest GitHub release)
curl https://i.jpillora.com/jmorganca/ollama | bash
# Start the local server (listens on localhost:11434)
ollama serve
# Pull and chat with a model interactively
ollama run mistral
# A code-focused variant
ollama run codellama:7b-code
# Show the models you have downloaded
ollama list
```
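
Once `ollama serve` is running, it also exposes a REST API on port 11434, so you can script against local models instead of using the interactive prompt. A minimal sketch of calling `/api/generate` from Python (the model name and prompt here are just placeholders):

```python
import json
import urllib.request

# Default address of a locally running `ollama serve`
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks for a single JSON object instead of a token stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the full completion in "response"
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("mistral", "Why is the sky blue?"))
```

Anything that can POST JSON can talk to it the same way, which is most of what makes it so easy to wire into other tools.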



!!! note

    This post is a <a href="/thoughts/" class="wikilink" data-title="Thoughts" data-description="These are generally my thoughts on a web page or some sort of url, except a rare few don&#39;t have a link. These are dual published off of my..." data-date="2024-04-01">thought</a>. It's a short note that I make
    about someone else's content online <a href="/tags/thoughts/" class="hashtag-tag" data-tag="thoughts" data-count=2 data-reading-time=3 data-reading-time-text="3 minutes">#thoughts</a>
