Switched to OpenAI

I've switched from Ollama to OpenAI for Cognee's embeddings and LLM. If I'm being honest, I mostly wanted an excuse to play around with Ollama. It was fun, but using Ollama created some logistical headaches. The biggest was that I couldn't run it locally on my older Intel MacBook; it would run out of memory. That made debugging issues with Cognee or any other part of the setup...well...a PITA.
So I switched to OpenAI. I would have used Anthropic, but as far as I know Anthropic doesn't offer an embedding model. I discovered this after the fact, when I had already grabbed an Anthropic API key and bought $10.00 in credits. Oh well, I'm sure I'll eventually use them for something.
By switching to OpenAI as Cognee's embedding API and LLM, I got several big benefits:
- It simplifies my docker compose file: I no longer have to start an Ollama container or download the embedding model and the Qwen LLM.
- It's now a bit closer to how it would actually be used in a production environment. As a former platform engineering manager, I wouldn't want to increase my maintenance burden by hosting my own LLM.
- I can run this setup locally for debugging. I can still run it on my remote server, but when things start getting a little wonky, I don't have to ssh into that server, poke around, ship logs back to my dev machine, etc. It's just easier.
- I could simplify my setup scripts, and even get rid of a couple. Less code, less to maintain.
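To make the first point concrete, here's a rough sketch of what the slimmed-down compose file could look like without the Ollama service. The image name, port, and environment variable names below are illustrative assumptions, not copied from my actual setup:

```yaml
# Hypothetical docker-compose.yml after dropping the Ollama service.
# Service name, image tag, port, and env var names are assumptions.
services:
  cognee:
    image: cognee/cognee:latest          # assumed image name
    environment:
      LLM_PROVIDER: "openai"
      LLM_API_KEY: "${OPENAI_API_KEY}"   # key supplied via host env
      EMBEDDING_PROVIDER: "openai"
      EMBEDDING_API_KEY: "${OPENAI_API_KEY}"
    ports:
      - "8000:8000"
# Gone: the ollama service, the model-pull step for the embedding
# model and the Qwen LLM, and the memory tuning that went with them.
```

The whole self-hosted model lifecycle collapses into a couple of environment variables pointing at a hosted API.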
Overall, this should accelerate my work. By how much? That remains to be seen.