HPC Cluster Job Scheduler
This content is under construction. Check back often for updates.
Submitting Your First HPC Job
- Log in to the HPC cluster using one of the methods described in Accessing the Cluster via SSH on the Getting Started page.
- Make a tutorial directory using the command
mkdir tutorial
and then change into that directory using
cd tutorial
- Next, make a copy of the submit script examples using
cp /opt/tcnjhpc/elsa-tutorial/examples/submit-* .
(make sure to include the trailing ., which represents the current directory as the copy target).
- List the names of the files that were copied to the current directory.
ls
- Edit one of the submission scripts to modify the email address in it. This email address will receive messages when the job starts and ends, or if there is some kind of failure. Use the simple text editor nano to edit the file; press CTRL-x to exit nano when you are done.
nano submit-mpi.sh
You could alternatively use the edit feature in Open OnDemand to make the change.
- Since the tutorial doesn't require any input file, we can simply submit this job to the cluster.
sbatch submit-mpi.sh
- Monitor the status of your running job (which should only take about 20-25 seconds to run). The shell will replace $USER in the command below with your username; you can also specify your username directly instead of $USER.
squeue --user=$USER
- When your job ends, look for the additional file that was added to your directory.
ls
This file will be named job.#####.out, where the ##### matches the number in the JOBID column of the squeue command output. This creates unique output files, which prevents subsequent job runs from overwriting previous outputs.
- You can view the job output file by running the following command, replacing ##### with the actual job ID (the complete sequence of commands is recapped in the sketch below).
cat job.#####.out
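For quick reference, the steps above can be run as the following shell session (a recap sketch; the job ID 12345 is illustrative and will differ on your run):
mkdir tutorial
cd tutorial
cp /opt/tcnjhpc/elsa-tutorial/examples/submit-* .
nano submit-mpi.sh              # update the --mail-user address, then CTRL-x to exit
sbatch submit-mpi.sh            # prints a line such as: Submitted batch job 12345
squeue --user=$USER             # repeat until the job leaves the queue
ls                              # a new file such as job.12345.out appears
cat job.12345.out               # view the job's output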
The video below demonstrates a sample run of the tutorial steps described above.
Anatomy of a SLURM Sbatch Submit Script
Content to be updated.
#!/bin/bash
#SBATCH --workdir=./                 # Set the working directory
#SBATCH --mail-user=nobody@tcnj.edu  # Who to send emails to
#SBATCH --mail-type=ALL              # Send emails on start, end and failure
#SBATCH --job-name=m_pi_dart         # Name to show in the job queue
#SBATCH --output=job.%j.out          # Name of stdout output file (%j expands to jobId)
#SBATCH --ntasks=10                  # Total number of mpi tasks requested
#SBATCH --nodes=2                    # Total number of nodes requested
#SBATCH --partition=short            # Partition (a.k.a. queue) to use

module add elsa-tutorial

# Disable selection of Infiniband networking
export OMPI_MCA_btl=^openib

# Run MPI program
echo "Starting on "`date`
mpirun mdart 50000 10000             # ^---- should be 500,000/ntasks to match serial version
echo "Finished on "`date`
Advanced Submit Script Options
Constraints
The SLURM constraint option allows further control over which nodes your job can be scheduled on within a particular partition/queue. For example, you may require a specific processor family or network interconnect. The features that can be used with the sbatch constraint option are defined by the system administrator and thus vary among HPC sites.
Be careful when combining multiple constraints: it is possible to specify a combination that cannot be satisfied (e.g. requiring a node with both a skylake and a broadwell family of processor).
Available ELSA HPC constraints.
Example 1 (single constraint):
#SBATCH --constraint=skylake
Example 2 (ANDing multiple constraints):
#SBATCH --constraint="skylake&ib"
Example 3 (ORing multiple constraints):
#SBATCH --constraint="skylake|broadwell"
Example 4 (complex constraints):
#SBATCH --constraint="(skylake|broadwell)&ib"
Node Exclusivity
The job allocation cannot share nodes with other running jobs.
This option should be used judiciously and sparingly. If, for example, your job requires only 2 CPU cores and is scheduled on a node with 32 cores, no other job will be able to make use of the remaining 30 cores (not even another of your own jobs). Where this may make sense is when your job is competing for memory (RAM) with other jobs running on the same node. The system is not yet configured to enforce memory limitations like it does for CPU cores. Using this option guarantees that the entire node is exclusive to your job.
Example:
#SBATCH --exclusive
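To make the trade-off above concrete, the sketch below reserves an entire node while actually using only two cores; everything except the --exclusive line is a placeholder for a real job:
#!/bin/bash
#SBATCH --job-name=exclusive_demo    # Placeholder job name
#SBATCH --partition=short            # Partition (a.k.a. queue) to use
#SBATCH --ntasks=2                   # Only two cores are actually used...
#SBATCH --exclusive                  # ...but no other job may share the node

srun ./my_program                    # my_program is a placeholder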
Job Arrays
Example 1 (100 tasks with array indices 1 through 100; in the output file name, %A expands to the array's master job ID and %a to the task index):
#SBATCH --output=job.%A_%a.out
#SBATCH --array=1-100
Example 2 (step size of 20, giving indices 1, 21, 41, 61, 81):
#SBATCH --output=job.%A_%a.out
#SBATCH --array=1-100:20
Example 3 (limit the number of simultaneously running tasks to 5):
#SBATCH --output=job.%A_%a.out
#SBATCH --array=1-100%5
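Within each array task, SLURM sets the environment variable SLURM_ARRAY_TASK_ID to that task's index, which is commonly used to select a different input per task. Below is a minimal sketch along those lines; the input naming scheme (input_1.dat, input_2.dat, ...) and the program name are assumptions for illustration, not ELSA conventions:
#!/bin/bash
#SBATCH --job-name=array_demo        # Placeholder job name
#SBATCH --partition=short            # Partition (a.k.a. queue) to use
#SBATCH --output=job.%A_%a.out       # One output file per array task
#SBATCH --array=1-10                 # Ten tasks, indices 1 through 10
#SBATCH --ntasks=1                   # Each task uses a single core

# Each task selects its own input file based on its array index
echo "Task ${SLURM_ARRAY_TASK_ID} starting on "`date`
./my_program input_${SLURM_ARRAY_TASK_ID}.dat
echo "Task ${SLURM_ARRAY_TASK_ID} finished on "`date`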
Example Submit Scripts
Content to be created.
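Until this section is filled in, a minimal single-core (serial) submit script modeled on the MPI example above might look like the following sketch; the job name and my_serial_program are placeholders:
#!/bin/bash
#SBATCH --mail-user=nobody@tcnj.edu  # Who to send emails to
#SBATCH --mail-type=ALL              # Send emails on start, end and failure
#SBATCH --job-name=serial_demo       # Name to show in the job queue
#SBATCH --output=job.%j.out          # Name of stdout output file (%j expands to jobId)
#SBATCH --ntasks=1                   # A serial job needs only one task
#SBATCH --partition=short            # Partition (a.k.a. queue) to use

echo "Starting on "`date`
./my_serial_program                  # my_serial_program is a placeholder
echo "Finished on "`date`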
ELSA Job Partitions/Queues
Partition/Queue Name | Max Time Limit | Resource Type |
---|---|---|
short | 6 hours | CPU |
normal | 24 hours | CPU |
long | 7 days | CPU |
nolimit* | none | CPU |
shortgpu | 6 hours | GPU |
gpu | 7 days | GPU |
* - Use of the nolimit partition is restricted to approved cluster users. Faculty may request access for themselves and students by emailing ssivy@tcnj.edu.
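For jobs that target one of the GPU partitions listed above, the partition request is typically combined with a GPU request via --gres. The following is a generic SLURM sketch; the exact GPU types and counts available on ELSA may differ:
#SBATCH --partition=gpu              # One of the GPU partitions from the table above
#SBATCH --gres=gpu:1                 # Request one GPU on the assigned node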