HPC Cluster Job Scheduler

This content is under construction. Check back often for updates.

Submitting Your First HPC Job

Content to be created.
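
Until this walkthrough is written, the usual SLURM workflow looks like the sketch below; myjob.sh is a placeholder for your own submit script (see the anatomy section that follows).

sbatch myjob.sh        # submit the script; SLURM replies with "Submitted batch job <jobid>"
squeue -u $USER        # list your jobs and their states (PD = pending, R = running)
scancel <jobid>        # cancel a job that is no longer needed

By default, anything the job prints is written to a file named slurm-<jobid>.out in the directory you submitted from.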

Anatomy of a SLURM Sbatch Submit Script

Content to be created.
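
As a sketch until the official write-up lands: an sbatch submit script is an ordinary bash script whose leading #SBATCH comment lines carry the resource request. The specific directives, partition choice, and module line below are illustrative, not site-specific recommendations.

#!/bin/bash
# ---- scheduler directives: read by sbatch, ignored by bash ----
#SBATCH --job-name=example          # name shown in squeue output
#SBATCH --partition=short           # queue/partition (see the table below)
#SBATCH --nodes=1                   # number of nodes
#SBATCH --ntasks=1                  # number of tasks (processes)
#SBATCH --cpus-per-task=1           # cores per task
#SBATCH --mem=1G                    # memory per node
#SBATCH --time=00:10:00             # wall-clock limit, hh:mm:ss
#SBATCH --output=%x-%j.out          # output file; %x = job name, %j = job ID

# ---- environment setup ----
# module load <software>            # placeholder: load whatever your workload needs

# ---- the actual work ----
srun hostname                       # replace with your real command

Unless a separate --error file is given, the job's stderr is written to the same file as its stdout.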

Advanced Submit Script Options

Content to be created.
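
Pending the full write-up, a few commonly used extra directives are sketched below; the array range, e-mail address, and job ID are placeholders.

#SBATCH --array=1-10                 # job array: 10 copies, each sees $SLURM_ARRAY_TASK_ID
#SBATCH --mail-type=END,FAIL         # send e-mail when the job ends or fails
#SBATCH --mail-user=you@tcnj.edu     # placeholder address
#SBATCH --gres=gpu:1                 # request one GPU (pair with a GPU partition such as shortgpu or gpu)
#SBATCH --dependency=afterok:12345   # start only after job 12345 finishes successfully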

Example Submit Scripts

Content to be created.
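
Until curated examples are posted, here is one sketch of a single-GPU job on the shortgpu partition; the module name and program are placeholders.

#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --partition=shortgpu         # 6-hour GPU partition (see table below)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1                 # one GPU
#SBATCH --time=01:00:00

# module load cuda                   # placeholder: load a CUDA toolkit if one is provided
nvidia-smi                           # show the GPU that was allocated
# ./my_gpu_program                   # placeholder for the real workload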

ELSA Job Partitions/Queues

Partition/Queue Name    Max Time Limit    Resource Type
short                   6 hours           CPU
normal                  24 hours          CPU
long                    7 days            CPU
nolimit*                none              CPU
shortgpu                6 hours           GPU
gpu                     7 days            GPU

* - Use of the nolimit partition is restricted to approved cluster users. Faculty may request access for themselves and their students by emailing ssivy@tcnj.edu.
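
To target one of these partitions, pass --partition (or -p) on the command line or as a #SBATCH directive; myjob.sh below is a placeholder script name.

sbatch --partition=long myjob.sh     # run on the 7-day CPU partition
sbatch -p gpu myjob.sh               # run on the 7-day GPU partition (request the GPU itself with --gres=gpu:1 in the script)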