Ollama model feedback

We can go to https://ollama.com/ to get information on the large number of available models, including their parameter-size options (9B, 70B, etc.). For each parameter option the page also shows the download size of the model.

Based on the memory available on the system and the strengths of a particular model, we can decide which one to download. The name used to download a model is written alongside its size. For example, at https://ollama.com/library/phi3 we can see that phi3 is available with 3.8 billion parameters (3.8B) at a 2.2GB download size, and can be run via:

ollama run phi3
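
A specific parameter variant can also be pulled explicitly by appending its tag from the library page. A minimal sketch, assuming the 3.8b tag listed on the phi3 library page:

# Download the 3.8B variant explicitly (tag taken from https://ollama.com/library/phi3)
ollama pull phi3:3.8b

# List downloaded models along with their on-disk sizes
ollama list

# Start an interactive chat session with the downloaded variant
ollama run phi3:3.8b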

Overall, I can share feedback on the following models:

deepseek-r1 (multiple sizes)
: Very good model with thinking / chain-of-thought output, so we get very good results along with an explanation
phi4 (14B - 9.1GB)
: Very high accuracy among small models
llama3.2 (3B - 2.0GB)
: Very small and efficient model that can be run with limited resources
llama3 (8B - 4.7GB)
: Decently sized to run on a local laptop; fast and memory efficient while still giving reasonable output
llava (34B - 20GB)
: For image recognition or computer-vision related tasks
llama3.2-vision (11B - 7.9GB)
: For image recognition tasks with a smaller size (see the sketch after this list)
wizard-vicuna-uncensored (30B - 18GB)
: For queries that would otherwise get censored on other models
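
For the vision models, the Ollama CLI can take an image file path directly inside the prompt. A minimal sketch, assuming a local image at ./photo.jpg (a hypothetical path):

# Multimodal models such as llama3.2-vision pick up image file paths mentioned in the prompt
ollama run llama3.2-vision "Describe the contents of ./photo.jpg"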
