Ollama

This add-on for AI Engine adds support for Ollama. So, what’s Ollama? It’s a tool that runs large language models (LLMs) locally, right on your own computer or server. It’s fast, easy to install, and frees you from relying on remote servers or third-party services.

Think of it as the closest thing to Apple Intelligence, powered by your own machine. Whether that’s a good thing or not… well, you decide! 😬

How to use it

Once installed, you’ll find a new Type available in Environments for AI. Pick it, and set the Endpoint. If you installed Ollama locally (on the same machine as your WordPress), the endpoint should be the default: http://127.0.0.1:11434.
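If you want to check that your endpoint is correct before saving it, here is a minimal sketch that queries Ollama’s /api/tags route (the route Ollama exposes for listing locally installed models). The helper names are hypothetical, not part of AI Engine:

```python
import json
from urllib import request, error

DEFAULT_ENDPOINT = "http://127.0.0.1:11434"  # Ollama's default local port

def tags_url(endpoint: str) -> str:
    """Build the URL for Ollama's model-listing route."""
    return endpoint.rstrip("/") + "/api/tags"

def list_models(endpoint: str = DEFAULT_ENDPOINT) -> list[str]:
    """Return the names of locally installed models, or [] if unreachable."""
    try:
        with request.urlopen(tags_url(endpoint), timeout=5) as resp:
            data = json.load(resp)
    except (error.URLError, OSError):
        return []
    return [m["name"] for m in data.get("models", [])]
```

If `list_models()` returns an empty list, Ollama is either not running or listening on a different address.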

When this is done, you should be able to Refresh Models. By default, Ollama should come with one model, llama3. Once the list is refreshed, you can use this new environment anywhere in AI Engine!
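Under the hood, chatting through this environment means sending requests to Ollama’s /api/chat route. A minimal sketch of such a request body (the helper name is hypothetical; only the payload shape follows Ollama’s API):

```python
def chat_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build a request body for Ollama's /api/chat route."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # False = return one complete response
    }

# Example: a single-turn chat request for llama3.
payload = chat_payload("llama3", "Hello!")
```

POSTing this JSON to http://127.0.0.1:11434/api/chat returns the model’s reply.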

Model Support

It works with all Ollama models, but these are recommended and well-tested:

  • Llama: Chat, Functions
  • Llava: Chat, Image Vision
  • Mistral: Chat
  • Gemma: Chat

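For vision models like Llava, Ollama’s chat API accepts base64-encoded images attached to a message. A sketch of such a payload (the helper name is hypothetical; the `images` field is part of Ollama’s API for multimodal models):

```python
import base64

def vision_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an /api/chat body with one attached image for a vision model."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": prompt,
            # Ollama expects raw base64 strings, without a data: URI prefix.
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
        "stream": False,
    }
```

AI Engine builds an equivalent request for you when you use Image Vision with a Llava-based environment.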
For more information about Ollama, please visit their official website.