  • From: "Caramarcu Costin" <caramarc AT bnl.gov>
  • To: <sdcc_users-l AT lists.bnl.gov>
  • Subject: [Sdcc_users-l] IC cluster PASCAL P100
  • Date: Mon, 11 Sep 2017 12:25:37 -0400

Hi all,

 

As announced in our last liaison meeting, the Nvidia P100 nodes are now ready for production. The new nodes have the same specs as the current ones (same CPU, memory, and InfiniBand), but with 2x Pascal P100 instead of 2x Tesla K80.

 

In order to run on the P100 GPUs you need to specify the constraint (--constraint=pascal or -C pascal). Note that the P100 has only 1 GPU per card, so request --gres=gpu:2. If you need to make sure you run on the K80s, the same rule applies (--constraint=tesla or -C tesla).

 

Something to take note of:

-          constraints are not mandatory,

-          specifying just --gres=gpu:2 or --gres=gpu:1 can land your job on pascal, tesla, or mixed nodes
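For quick tests, the same constraint and GRES options can also be passed straight to sbatch on the command line instead of inside the script (job.sh below is a placeholder for your own batch script):

```shell
# Submit to the P100 nodes (equivalent to #SBATCH -C pascal --gres=gpu:2)
sbatch -C pascal --gres=gpu:2 job.sh

# Submit to the K80 nodes (2 GPUs per K80 card, 2 cards per node)
sbatch -C tesla --gres=gpu:4 job.sh

# List node names, features (constraints), and GRES to see what is available
sinfo -o "%N %f %G"
```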

 

For your convenience here are some sample scripts:

P100:

#SBATCH -p long

#SBATCH -t 01:00:00

#SBATCH --account accountname

#SBATCH -N 2

#SBATCH -n 64

#SBATCH -C pascal

#SBATCH --qos normal

#SBATCH --gres=gpu:2

module load mpi/openmpi-1.10.2

module load gcc/5.3.0

srun ./my_executable

 

K80:

#SBATCH -p long

#SBATCH -t 01:00:00

#SBATCH --account accountname

#SBATCH -N 2

#SBATCH -n 64

#SBATCH -C tesla

#SBATCH --qos normal

#SBATCH --gres=gpu:4

module load mpi/openmpi-1.10.2

module load gcc/5.3.0

srun ./my_executable

 

CPU ONLY:

#SBATCH -p long

#SBATCH -t 01:00:00

#SBATCH --account accountname

#SBATCH -N 2

#SBATCH -n 64

#SBATCH --qos normal

module load mpi/openmpi-1.10.2

module load gcc/5.3.0

srun ./my_executable
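Since the constraints are optional and a plain --gres request can land on either GPU type, one way to confirm which GPUs your job actually got is to query the driver from inside the job script (a sketch, assuming nvidia-smi is available on the compute nodes):

```shell
# Print the model of each GPU visible to the job,
# e.g. "Tesla K80" or "Tesla P100-PCIE-16GB"
nvidia-smi --query-gpu=name --format=csv,noheader
```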

 

 

For any problems, please contact us: https://www.sdcc.bnl.gov/#support

 

Regards,

Costin & Zihua


