Ollama is the easiest-to-get-going local LLM tool that I have tried, and it seems crazy fast. It feels faster than ChatGPT, which has not been my experience previously when running LLMs on my own hardware.
curl https://i.jpillora.com/jmorganca/ollama | bash  # install the ollama binary
ollama serve                                         # start the local server (leave this running)
ollama run mistral                                   # pull Mistral if needed and open a chat
ollama run codellama:7b-code                         # the 7B code-completion variant of Code Llama
ollama list                                          # list the models you have downloaded
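Once `ollama serve` is running, the models are also reachable over a local HTTP API, which makes them easy to script against. A minimal sketch, assuming the server is on its default port 11434 and Mistral has already been pulled:

# query the local API; "stream": false returns one JSON object
# instead of a stream of token chunks
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'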
Note
This post is a thought. It's a short note that I make about someone else's content online #thoughts