Ollama model feed-back

We can go to https://ollama.com/ and get information on the large number of available models, including their parameter options (9B, 70B, etc.). For each parameter option we can also see the size of the model on the same page.

Based on the available memory on the system and the advantages of a particular model, we can download it. The name through which we can download the model is also written before the size. For example, at https://ollama.com/library/phi3 we can see that phi3 is available with 3.8 billion parameters at a 2.2GB download size and can be downloaded via:

ollama run phi3
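
As a minimal sketch of the surrounding workflow, assuming Ollama is already installed and the ollama binary is on the PATH (phi3 is just the example model from above):

# Download the model without starting an interactive session
ollama pull phi3

# List locally downloaded models with their tags and sizes
ollama list

# Start an interactive chat session (pulls the model first if it is missing)
ollama run phi3

# Remove a model to free disk space
ollama rm phi3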

Overall, I can share feed-back on the following models (a short command-line sketch follows the list):

phi3:medium-128k (14B - 7.9GB)
: We get a good, large context window and decent accuracy.
deepseek-v2:16b (16B - 8.9GB)
: Based on a mixture-of-experts architecture, so it is very fast for its parameter / memory size.
gemma2:27b (27B - 15GB)
: This is the highest-parameter-count model that can still be run on a laptop with 32GB RAM. It is very slow compared to the others.
llama3.2 (3B - 2.0 GB)
: A very small and efficient model that can be run with limited resources.
llama3 (8B - 4.7GB)
: Decently sized to run on a local laptop: fast, memory efficient, and still gives reasonable output.
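
Each entry above starts with the exact tag that can be passed to ollama. As a sketch, assuming a reasonably recent Ollama release (ollama ps was added in newer versions) and that the model has already been pulled:

# Run a specific tag from the list above
ollama run phi3:medium-128k

# In another terminal, check which models are loaded and their memory use
ollama ps

# Show details (parameters, context length, quantization) for a model
ollama show gemma2:27b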

