This page is no longer updated.

For the latest information, please see the TSUBAME3.0 computing service web pages.

Documentation on how to migrate data from TSUBAME2.5 to TSUBAME3.0 is available here

How do I specify t2sub options for an MPI or MPI+OpenMP program? (2014.10.17 update)

MPI / MPI+OpenMP job script

First, create a job script like the following.

#!/bin/sh

# Total number of MPI processes = number of lines in the PBS node file
MPI_PROCS=`wc -l $PBS_NODEFILE | awk '{print $1}'`

# Print the job properties for the log
echo '---------- Properties ----------'
echo "Job ID:              $PBS_JOBID"
echo "# of MPI Processes:  $MPI_PROCS"
echo "MPI Node File"
sed s/^/'\t'/ $PBS_NODEFILE
echo "# of OpenMP Threads: $OMP_NUM_THREADS"
echo '--------------------------------'

# Move to the job directory and launch the program with OpenMPI
cd $HOME/t2_jobs/job01
mpirun -n $MPI_PROCS -x OMP_NUM_THREADS=$OMP_NUM_THREADS -hostfile $PBS_NODEFILE ./program

Here, $PBS_NODEFILE is a node-list file created by PBS from the select and mpiprocs options of t2sub. The variable $OMP_NUM_THREADS holds the number of OpenMP threads per process, which is defined by the ncpus option of t2sub; without the ncpus option, OMP_NUM_THREADS=1 and each process runs a single thread. The first line of the script counts the lines of the node file to obtain the total number of MPI processes.
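To illustrate (a sketch; the hostnames below are hypothetical), for a job submitted with select=2:mpiprocs=4 the node file lists each allocated node once per MPI process, so counting its lines yields the total process count:

$ cat $PBS_NODEFILE
t2a001001
t2a001001
t2a001001
t2a001001
t2a001002
t2a001002
t2a001002
t2a001002
$ wc -l $PBS_NODEFILE | awk '{print $1}'
8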

The option '-n $MPI_PROCS' is specified so that the job runs with exactly the number of processes allocated by PBS. To run with a fixed number of processes instead, specify '-n PROC_NUM'.

'-x OMP_NUM_THREADS=$OMP_NUM_THREADS': exports the variable so that all MPI processes run with the specified number of OpenMP threads.
This example uses OpenMPI. Other MPI implementations can be chosen on TSUBAME2; correct the launch line and the environment-variable settings according to the MPI implementation in use, as follows.

In the case of mvapich2
    No special settings are required on SP3 (2014.8-).
    On SP1 (the L system), pass the following (see the sketch below):
    'VIADEV_USE_AFFINITY=0 OMP_NUM_THREADS=$OMP_NUM_THREADS'
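For example, on SP1 the launch line of the script above might become the following (a sketch assuming the mpirun_rsh launcher, which accepts VAR=value environment settings as arguments; adjust to the launcher actually provided by the mvapich2 installation):

mpirun_rsh -np $MPI_PROCS -hostfile $PBS_NODEFILE \
    VIADEV_USE_AFFINITY=0 OMP_NUM_THREADS=$OMP_NUM_THREADS ./program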

In the case of mpich2 (used by the V queue)
    No special settings are required on SP3 (2014.8-).
    On SP1 (the L system), pass the following (see the sketch below):
    '-env OMP_NUM_THREADS=$OMP_NUM_THREADS'
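For example, on SP1 the launch line might become the following (a sketch; depending on the mpich2 version, -env may instead take the variable name and value as two separate arguments, and a machine-file option may also be required):

mpiexec -n $MPI_PROCS -env OMP_NUM_THREADS=$OMP_NUM_THREADS ./program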

'-hostfile $PBS_NODEFILE': the MPI processes are started on the nodes allocated by PBS.

N nodes and 1 process/node => total N processes

t2sub -q S -l select=N:mpiprocs=1 -l place=scatter job.sh

N nodes and 1 process/node, each with T OpenMP threads => total N processes, total NxT threads

t2sub -q S -l select=N:mpiprocs=1:ncpus=T -l place=scatter job.sh

N nodes and M processes/node => total NxM processes

t2sub -q S -l select=N:mpiprocs=M -l place=scatter job.sh

N nodes, M processes/node and T threads/process => total NxM processes, and total NxMxT threads

t2sub -q S -l select=N:mpiprocs=M:ncpus=T -l place=scatter job.sh
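
As a concrete worked example (the values 2, 4 and 3 are arbitrary), the following submission allocates 2 nodes, starts 4 MPI processes on each node (2x4 = 8 processes in total), and sets OMP_NUM_THREADS=3, so the job runs 8x3 = 24 threads in total:

t2sub -q S -l select=2:mpiprocs=4:ncpus=3 -l place=scatter job.sh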