=Installing ollama on Rocky 9.x=
To install Ollama on a local system use the following steps:
# Install Ollama directly from the official site using:
#:<pre>
#:: curl -fsSL https://ollama.com/install.sh | sh
#:</pre>
#: We can also install a specific version of Ollama using syntax similar to the below example:
#::<pre>
#::: curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh
#::</pre>
#:: For available version numbers look at the Ollama releases at https://github.com/ollama/ollama/releases
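#: To confirm the install worked, we can also check the installed version, for example:
#::<pre>
#::: ollama --version
#::</pre>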
# Check whether the ollama service is running via:
#:<pre>
#:: systemctl status ollama
#:</pre>
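#:: If the status shows the service as inactive, it can be enabled and started in one step (this assumes the install script created the systemd unit, which it normally does on systemd-based distributions such as Rocky):
#::<pre>
#::: sudo systemctl enable --now ollama
#::</pre>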
# After this, run a model on the local system via the below command and test:
#:<pre>
#:: ollama run deepseek-r1:1.5b
#:</pre>
#:: The above command may take considerable time when run for the first time, as it downloads the entire model.  The deepseek-r1:1.5b model in the above example is 1.1GB in size, so the command will first download the 1.1GB model and only after that show the ">>>" prompt where we can type our queries.
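#:: To download a model without entering the interactive prompt, or to check which models are already present locally, something like the following should work:
#::<pre>
#::: ollama pull deepseek-r1:1.5b
#::: ollama list
#::</pre>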
# The ollama service runs as the ollama user with home folder <tt>/usr/share/ollama</tt>.  Models are downloaded inside the <tt>.ollama</tt> folder under this home folder and can need a lot of space, so consider moving /usr/share/ollama to a partition with more space and creating a symbolic link at the original place.
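#: A minimal sketch of such a move, assuming <tt>/data</tt> is the partition with more space (the path is only an example; adjust it to your layout):
#:<pre>
#:: # /data below is only an example target path with more free space
#:: sudo systemctl stop ollama
#:: sudo mv /usr/share/ollama /data/ollama
#:: sudo ln -s /data/ollama /usr/share/ollama
#:: sudo systemctl start ollama
#:</pre>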
# Edit /etc/systemd/system/ollama.service and append one more Environment line:
#:<pre>
#:: Environment="OLLAMA_HOST=0.0.0.0"
#:</pre>
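#: Instead of editing the unit file in place, the same setting can be kept in a systemd drop-in override, which should also survive a later reinstall; this is a matter of preference:
#:<pre>
#:: sudo systemctl edit ollama
#:</pre>
#: and in the editor that opens add:
#:<pre>
#:: [Service]
#:: Environment="OLLAMA_HOST=0.0.0.0"
#:</pre>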
# Restart the service:
#:<pre>
#:: systemctl daemon-reload
#:: systemctl restart ollama
#:</pre>
#:: Without the above we cannot use Ollama from n8n etc. over http://localhost:11434/
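#:: To verify the API is reachable over HTTP we can query it with curl; the root endpoint replies with a short status message and /api/tags lists the locally available models:
#::<pre>
#::: curl http://localhost:11434/
#::: curl http://localhost:11434/api/tags
#::</pre>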
# To close the interactive session use:
#:<pre>
#:: /bye
#:</pre>
# Between any two queries which are unrelated we can clear the context using:
#:<pre>
#:: /clear
#:</pre>
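
Ollama can also be used non-interactively, for example from shell scripts, by passing the prompt on the command line. A quick sketch (the model and prompt text are only examples):
<source type="bash">
# model name and prompt text below are only examples
ollama run deepseek-r1:1.5b "Summarize what Ollama does in one line."
</source>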
[[Main Page|Home]] > [[Local system based AI tools]] > [[Ollama]] > [[Ollama installation]]
