Ollama is very useful but also rather barebones. I recommend installing Open WebUI to manage models and conversations. It will also be useful if you want to tweak more advanced settings like system prompts, seed, temperature, and others.
You can install Open WebUI using Docker, or just pip, which is enough if you only care about serving yourself.
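For the pip route, a minimal sketch (assuming Python 3.11+ and the `open-webui` package name on PyPI; the default port is an assumption worth checking against the project's docs):

```shell
# Install and launch Open WebUI for a single local user
pip install open-webui
open-webui serve    # web UI on http://localhost:8080 by default
```

If you prefer Docker, the project publishes an image at ghcr.io/open-webui/open-webui, which you can run with a volume for persistent data.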
Edit: Open WebUI also renders Markdown, which makes responses much more appealing and easier to read.
Edit 2: you can also plug Ollama into continue.dev, a VS Code extension that brings LLM capabilities into your IDE.
Linking it to Open WebUI makes things easier.
Then you can feed it knowledge on, say, all the manuals in your house,
or your home insurance policy or something.
Link it to "speaches" and then you can make a voice chat.
Link it to continue.dev for coding
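A minimal sketch of pointing continue.dev at a local Ollama model. The model name and title here are just examples; Continue has historically read this from `~/.continue/config.json`, though newer releases have moved to a YAML config, so check its docs:

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ]
}
```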
I think a lot of the use cases come from developing system prompts.
You can then make a "custom" model for specific tasks.
E.g. this model knows about my home insurance policy but writes back like it's a pirate with a stutter.
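That kind of "custom" model is what an Ollama Modelfile is for. A minimal sketch (the base model and the pirate persona are just examples; retrieval over your actual policy documents would be layered on top, e.g. via Open WebUI's knowledge feature):

```
FROM llama3

SYSTEM """You answer questions about my home insurance policy,
but you write back like a pirate with a stutter."""
```

Build and run it with `ollama create pirate-insurance -f Modelfile` and then `ollama run pirate-insurance`.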
Great job trying to learn! Ignore the naysayers here. As a fellow programmer, like it or not, you're going to need to learn how to interact with this stuff. That's the real way we'd lose our jobs: if you don't keep up, you're doomed to fall behind.
I recommend first building a simple CLI client that you can ask questions, similar to ChatGPT. This will give you a good foundation in the APIs and how they work. Then you can move up to function calling for things like Home Assistant, and maybe even later training LoRAs. Start small, get basic stuff working first, and build from there.
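A minimal sketch of that first step, assuming a locally running Ollama server on its default port and a pulled `llama3` model (both are assumptions; swap in whatever model you have):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def build_chat_request(history, user_message, model="llama3"):
    """Append the new user turn and build the JSON body for Ollama's /api/chat."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def chat_once(history, user_message, model="llama3"):
    """Send one turn to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(history, user_message, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]["content"]
    # Keep the conversation so follow-up questions have context.
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return reply

# Example interactive loop (requires a running Ollama server):
# history = []
# while True:
#     print(chat_once(history, input("> ")))
```

Once this works, function calling is mostly a matter of adding a `tools` list to the same request body and dispatching on the tool calls the model returns.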