Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0
- From: "Smirnov, Dmitri" <dsmirnov AT bnl.gov>
- To: "phys-npps-members-l AT lists.bnl.gov" <phys-npps-members-l AT lists.bnl.gov>
- Subject: Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0
- Date: Fri, 13 Mar 2026 22:26:31 +0000
Thank you, Shuwei. That would really help, as we have recently started seeing more collisions with our CI jobs. On our side, we will also look into whether it makes sense for our tests to use less than the full GPU memory.
Dmitri
On 3/13/26 6:12 PM, Ye, Shuwei wrote:
Dear Gabor,
I am responsible for managing the Ollama server on npps0. I can look into configuring the server to use only one GPU and limiting the VRAM to 24 GB.
Best regards,
--Shuwei
From: Galgoczi, Gabor (PO) <ggalgoczi1 AT bnl.gov>
Sent: Friday, March 13, 2026 4:18 PM
To: Torre Wenaus <wenaus AT gmail.com>; Ye, Shuwei <yesw AT bnl.gov>
Cc: NPPS members <phys-npps-members-l AT lists.bnl.gov>
Subject: Re: Instructions for Offsite Access to Ollama Server on npps0
Dear All,
Who is the owner of the Ollama process? Could you restrict it to GPU0 by setting CUDA_VISIBLE_DEVICES=0? The VRAM it uses could also be limited via the OLLAMA_MAX_VRAM setting.
When the server is running, it uses most of the memory on both GPUs. We cannot do eic-opticks work, and our GitHub CI tests also fail due to insufficient free memory.
Thank you,
Gabor
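[Editorial sketch] The two restrictions Gabor proposes could be set in the environment of the Ollama server process before it starts. This is a minimal sketch only; the variable names come from the thread, and the assumption that OLLAMA_MAX_VRAM takes a value in bytes (here, the 24 GB cap Shuwei later offers) is ours, not confirmed in the thread.

```shell
# Expose only the first GPU to the Ollama process (from Gabor's suggestion).
export CUDA_VISIBLE_DEVICES=0

# Cap Ollama's VRAM use at 24 GB, the limit mentioned later in the thread.
# Assumption: the setting is interpreted in bytes.
export OLLAMA_MAX_VRAM=$((24 * 1024 * 1024 * 1024))

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
echo "OLLAMA_MAX_VRAM=$OLLAMA_MAX_VRAM"
```

With these in place, `ollama serve` launched from the same shell would inherit both limits.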
From: phys-npps-members-l-request AT lists.bnl.gov <phys-npps-members-l-request AT lists.bnl.gov> on behalf of Ye, Shuwei <yesw AT bnl.gov>
Sent: Wednesday, March 11, 2026 11:11 AM
To: Torre Wenaus <wenaus AT gmail.com>
Cc: NPPS members <phys-npps-members-l AT lists.bnl.gov>
Subject: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0
Dear Torre,
You can find the detailed instructions for offsite access to the Ollama server on our group machine npps0 in the following document:
For example, to use the model qwen3.5:35b via the claude CLI on your laptop, follow these steps:
1. Establish an SSH tunnel to forward the local port 1080 to the remote Ollama server:
   ssh -f -N -L 1080:130.199.21.114:11434 ssh.bnl.gov
2. Set the required environment variables:
   export ANTHROPIC_AUTH_TOKEN=ollama
   export ANTHROPIC_API_KEY=""
   export ANTHROPIC_BASE_URL=http://localhost:1080
3. Launch Claude with the specified model:
   claude --model qwen3.5:35b
Please let me know if you encounter any issues or need further assistance.
Best regards,
--Shuwei
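[Editorial sketch] The three steps above amount to pointing an Anthropic-style client at the local end of the SSH tunnel. The environment setup can be sketched and sanity-checked as follows; the variable names are from Shuwei's instructions, and the final check merely confirms the base URL is well-formed (it does not contact the server, since the tunnel may not be up).

```shell
# Step 2 from the instructions: route Anthropic-compatible clients to the
# tunneled Ollama endpoint instead of the real Anthropic API.
export ANTHROPIC_AUTH_TOKEN=ollama
export ANTHROPIC_API_KEY=""
export ANTHROPIC_BASE_URL=http://localhost:1080

# Sanity check: the base URL must name the same local port the SSH tunnel
# forwards (1080 in step 1).
case "$ANTHROPIC_BASE_URL" in
  http://localhost:1080) echo "base URL ok" ;;
  *) echo "base URL mismatch" ;;
esac
```

Once the tunnel from step 1 is running, `claude --model qwen3.5:35b` started from this shell would send its requests through localhost:1080 to the Ollama server on npps0.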
- [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Ye, Shuwei, 03/11/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Galgoczi, Gabor (PO), 03/13/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Ye, Shuwei, 03/13/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Smirnov, Dmitri, 03/13/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Ye, Shuwei, 03/16/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Zhaoyu Yang, 03/16/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Ye, Shuwei, 03/16/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Galgoczi, Gabor (PO), 03/13/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Smirnov, Dmitri, 03/13/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Ye, Shuwei, 03/13/2026
- Re: [Phys-npps-members-l] Instructions for Offsite Access to Ollama Server on npps0, Galgoczi, Gabor (PO), 03/13/2026
Archive powered by MHonArc 2.6.24.