\fBsalloc\fR [\fIOPTIONS(0)\fR...] [ : [\fIOPTIONS(N)\fR...]] [\fIcommand(0)\fR [\fIargs(0)\fR...]]

Option(s) define multiple jobs in a co-scheduled heterogeneous job.
For more details about heterogeneous jobs see the document
.br
https://slurm.schedmd.com/heterogeneous_jobs.html

.SH "DESCRIPTION"
salloc is used to allocate a Slurm job allocation, which is a set of resources
(nodes), possibly with some set of constraints (e.g. number of processors per
node).  When salloc successfully obtains the requested allocation, it then runs
the command specified by the user.  Finally, when the user specified command is
complete, salloc relinquishes the job allocation.

The command may be any program the user wishes.  Some typical commands are
xterm, a shell script containing srun commands, and srun (see the EXAMPLES
section). If no command is specified, then the value of
\fBSallocDefaultCommand\fR in slurm.conf is used. If
\fBSallocDefaultCommand\fR is not set, then \fBsalloc\fR runs the
user's default shell.

The following document describes the influence of various options on the
allocation of cpus to jobs and tasks.
.br
https://slurm.schedmd.com/cpu_management.html

NOTE: The salloc logic includes support to save and restore the terminal line
settings and is designed to be executed in the foreground. If you need to
execute salloc in the background, set its standard input to some file, for
example: "salloc \-n16 a.out </dev/null &"

.SH "RETURN VALUE"
If salloc is unable to execute the user command, it will
return 1 and print errors to stderr. Otherwise, on success or if killed by
signal HUP, INT, KILL, or QUIT, it will return 0.

.SH "COMMAND PATH RESOLUTION"

If provided, the command is resolved in the following order:
.br

1. If command starts with ".", then path is constructed as:
current working directory / command
.br
2. If command starts with a "/", then path is considered absolute.
.br
3. If command can be resolved through PATH. See \fBpath_resolution\fR(7).
.br
4. If command is in current working directory.
.P
Current working directory is the calling process working directory unless the
\fB\-\-chdir\fR argument is passed, which will override the current working
directory.

.SH "OPTIONS"

.TP
\fB\-\-acctg\-freq\fR
Define the job accounting and profiling sampling intervals.
This can be used to override the \fIJobAcctGatherFrequency\fR parameter in Slurm's
configuration file, \fIslurm.conf\fR.
The supported format is as follows:
.RS
.TP 12
\fB\-\-acctg\-freq=\fR\fI<datatype>\fR\fB=\fR\fI<interval>\fR
where \fI<datatype>\fR=\fI<interval>\fR specifies the task sampling
interval for the jobacct_gather plugin or a
sampling interval for a profiling type by the
acct_gather_profile plugin. Multiple,
comma-separated \fI<datatype>\fR=\fI<interval>\fR intervals
may be specified. Supported datatypes are as follows:
.RS
.TP
\fBtask=\fI<interval>\fR
where \fI<interval>\fR is the task sampling interval in seconds
for the jobacct_gather plugins and for task
profiling by the acct_gather_profile plugin.
NOTE: This frequency is used to monitor memory usage. If memory limits
are enforced, the highest frequency a user can request is the one configured
in the slurm.conf file, and sampling cannot be disabled (=0) either.
.TP
\fBenergy=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for energy profiling using the acct_gather_energy plugin
.TP
\fBnetwork=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for infiniband profiling using the acct_gather_interconnect
plugin.
.TP
\fBfilesystem=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for filesystem profiling using the acct_gather_filesystem
plugin.
.RE
.RE
.br
The default value for the task sampling interval is 30 seconds.
The default value for all other intervals is 0.
An interval of 0 disables sampling of the specified type.
If the task sampling interval is 0, accounting
information is collected only at job termination (reducing Slurm
interference with the job).
.br
Smaller (non\-zero) values have a greater impact upon job performance,
but a value of 30 seconds is not likely to be noticeable for
applications having less than 10,000 tasks.
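.br
For example, a hypothetical invocation that samples task accounting every 15
seconds and energy data every 60 seconds (the program name is illustrative,
and energy sampling assumes an acct_gather_energy plugin is configured):
.nf

    salloc \-n8 \-\-acctg\-freq=task=15,energy=60 ./my_program

.fi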
.RE
.TP
\fB\-B\fR, \fB\-\-extra\-node\-info\fR=<\fIsockets\fR[:\fIcores\fR[:\fIthreads\fR]]>
Restrict node selection to nodes with at least the specified number of
sockets, cores per socket, and/or threads per core.
Each level can also be specified individually:
.nf
    \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR>
    \fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
    \fB\-\-threads\-per\-core\fR=<\fIthreads\fR>
.fi
If task/affinity plugin is enabled, then specifying an allocation in this
manner also results in subsequently launched tasks being bound to threads
if the \fB\-B\fR option specifies a thread count, otherwise an option of
\fIcores\fR if a core count is specified, otherwise an option of \fIsockets\fR.
If SelectType is configured to select/cons_res, it must have a parameter of
CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory for this option
to be honored.
If not specified, the scontrol show job will display 'ReqS:C:T=*:*:*'. This
option applies to job allocations.
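.br
For example, a hypothetical request for nodes with at least two sockets, four
cores per socket, and one thread per core (the program name is illustrative):
.nf

    salloc \-N1 \-B 2:4:1 ./my_program

.fi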

.TP
\fB\-\-bb\fR=<\fIspec\fR>
Burst buffer specification. The form of the specification is system dependent.
Note that the burst buffer may not be accessible from a login node; it may
require that salloc spawn a shell on one of its allocated compute nodes. See the
description of SallocDefaultCommand in the slurm.conf man page for more
information about how to spawn a remote shell.

.TP
\fB\-\-bbf\fR=<\fIfile_name\fR>
Path of file containing burst buffer specification.
The form of the specification is system dependent.
Also see \fB\-\-bb\fR.
Note that the burst buffer may not be accessible from a login node; it may
require that salloc spawn a shell on one of its allocated compute nodes. See the
description of SallocDefaultCommand in the slurm.conf man page for more
information about how to spawn a remote shell.
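.br
For example, a hypothetical invocation referencing a site\-specific
specification file (the file name is illustrative, and the file's contents
are system dependent):
.nf

    salloc \-N1 \-\-bbf=/home/user/bb_spec.txt

.fi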

.TP
\fB\-\-begin\fR=<\fItime\fR>
Defer eligibility of this job allocation until the specified time.

Time may be of the form \fIHH:MM:SS\fR to run a job at
a specific time of day (seconds are optional).
(If that time is already past, the next day is assumed.)
You may also specify \fImidnight\fR, \fInoon\fR, \fIfika\fR (3 PM) or
\fIteatime\fR (4 PM) and you can have a time\-of\-day suffixed
with \fIAM\fR or \fIPM\fR for running in the morning or the evening.
You can also say what day the job will be run, by specifying
a date of the form \fIMMDDYY\fR, \fIMM/DD/YY\fR, or
\fIYYYY\-MM\-DD\fR. Combine date and time using the following
format \fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. You can also
give times like \fInow + count time\-units\fR, where the time\-units
can be \fIseconds\fR (default), \fIminutes\fR, \fIhours\fR,
\fIdays\fR, or \fIweeks\fR and you can tell Slurm to run
the job today with the keyword \fItoday\fR and to run the
job tomorrow with the keyword \fItomorrow\fR.
The value may be changed after job submission using the
\fBscontrol\fR command.
For example: \-\-begin=16:00, \-\-begin=now+1hour,
\-\-begin=now+60 (seconds by default), or \-\-begin=2010\-01\-20T12:34:00.
.RS
Notes on date/time specifications:
 \- Although the 'seconds' field of the HH:MM:SS time specification is
allowed by the code, note that the poll time of the Slurm scheduler
is not precise enough to guarantee dispatch of the job on the exact
second. The job will be eligible to start on the next poll
following the specified time. The exact poll interval depends on the
Slurm scheduler (e.g., 60 seconds with the default sched/builtin).
 \- If no time (HH:MM:SS) is specified, the default is (00:00:00).
 \- If a date is specified without a year (e.g., MM/DD) then the current
year is assumed, unless the combination of MM/DD and HH:MM:SS has
already passed for that year, in which case the next year is used.
.RE

.TP
\fB\-\-bell\fR
Force salloc to ring the terminal bell when the job allocation is granted
(and only if stdout is a tty).  By default, salloc only rings the bell
if the allocation is pending for more than ten seconds (and only if stdout
is a tty). Also see the option \fB\-\-no\-bell\fR.

.TP
\fB\-\-cluster\-constraint\fR=<\fIlist\fR>
Specifies features that a federated cluster must have for a sibling job to be
submitted to it. Slurm will attempt to submit a sibling job to a cluster if it
has at least one of the specified features.

.TP
\fB\-\-comment\fR=<\fIstring\fR>
An arbitrary comment.

.TP
\fB\-C\fR, \fB\-\-constraint\fR=<\fIlist\fR>
Nodes can have \fBfeatures\fR assigned to them by the Slurm administrator.
Users can specify which of these \fBfeatures\fR are required by their job
using the constraint option.
Only nodes having features matching the job constraints will be used to
satisfy the request.
Multiple constraints may be specified with AND, OR, matching OR,
resource counts, etc. (some operators are not supported on all system types).
Supported \fBconstraint\fR options include:
.PD 1
.RS
.TP
\fBSingle Name\fR
Only nodes which have the specified feature will be used.
For example, \fB\-\-constraint="intel"\fR
.TP
\fBNode Count\fR
A request can specify the number of nodes needed with some feature
by appending an asterisk and count after the feature name.
For example, \fB\-\-nodes=16 \-\-constraint="graphics*4 ..."\fR
indicates that the job requires 16 nodes and that at least four of those
nodes must have the feature "graphics."
.TP
\fBAND\fR
Only nodes with all of the specified features will be used.
The ampersand is used for an AND operator.
For example, \fB\-\-constraint="intel&gpu"\fR
.TP
\fBOR\fR
Only nodes with at least one of the specified features will be used.
The vertical bar is used for an OR operator.
For example, \fB\-\-constraint="intel|amd"\fR
.TP
\fBMatching OR\fR
If only one of a set of possible options should be used for all allocated
nodes, then use the OR operator and enclose the options within square brackets.
For example, \fB\-\-constraint="[rack1|rack2|rack3|rack4]"\fR might be used to
specify that all nodes must be allocated on a single rack of the cluster, but
any of those four racks can be used.
.TP
\fBMultiple Counts\fR
Specific counts of multiple resources may be specified by using the AND
operator and enclosing the options within square brackets.
For example, \fB\-\-constraint="[rack1*2&rack2*4]"\fR might
be used to specify that two nodes must be allocated from nodes with the feature
of "rack1" and four nodes must be allocated from nodes with the feature
"rack2".

\fBNOTE:\fR This construct does not support multiple Intel KNL NUMA or MCDRAM
modes. For example, while \fB\-\-constraint="[(knl&quad)*2&(knl&hemi)*4]"\fR is
not supported, \fB\-\-constraint="[haswell*2&(knl&hemi)*4]"\fR is supported.
Specification of multiple KNL modes requires the use of a heterogeneous job.
.TP
\fBBrackets\fR
Brackets can be used to indicate that you are looking for a set of nodes with
the different requirements contained within the brackets. For example,
\fB\-\-constraint="[(rack1|rack2)*1&(rack3)*2]"\fR will get you one node with
either the "rack1" or "rack2" features and two nodes with the "rack3" feature.
The same request without the brackets will try to find a single node that
meets those requirements.
.TP
\fBParentheses\fR
Parentheses can be used to group like node features together. For example,
\fB\-\-constraint="[(knl&snc4&flat)*4&haswell*1]"\fR might be used to specify
that four nodes with the features "knl", "snc4" and "flat" plus one node with
the feature "haswell" are required. All options within parentheses should be
grouped with AND (e.g. "&") operators.
.RE

.TP
\fB\-\-contiguous\fR
If set, then the allocated nodes must form a contiguous set.

\fBNOTE\fR: If SelectPlugin=cons_res this option won't be honored
with the \fBtopology/tree\fR or \fBtopology/3d_torus\fR
plugins, both of which can modify the node ordering.

.TP
\fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
Restrict node selection to nodes with at least the specified number of
cores per socket.  See additional information under \fB\-B\fR option
above when task/affinity plugin is enabled.

.TP
\fB\-\-cpu\-freq\fR=<\fIp1\fR[\-\fIp2\fR[:\fIp3\fR]]>
Request that job steps initiated by srun commands inside this allocation
be run at some requested frequency if possible, on the CPUs selected
for the step on the compute node(s).

\fBp1\fR can be [#### | low | medium | high | highm1], which will set the
frequency scaling_speed to the corresponding value and set the frequency
governor to UserSpace.
\fBp1\fR can also be [Conservative | OnDemand | Performance | PowerSave],
which will set the governor to the corresponding value.

When \fBp2\fR is present, \fBp1\fR will be the minimum scaling frequency and
\fBp2\fR will be the maximum scaling frequency; \fBp2\fR must be greater than
\fBp1\fR.

\fBp3\fR can be [Conservative | OnDemand | Performance | PowerSave |
UserSpace], which will set the governor to the corresponding value.
If \fBp3\fR is UserSpace, the frequency scaling_speed will be set by a power
or energy aware scheduling strategy to a value between p1 and p2 that lets the
job run within the site's power goal. The job may be delayed if p1 is higher
than a frequency that allows the job to run within the goal.

If the current frequency is < min, it will be set to min. Likewise,
if the current frequency is > max, it will be set to max.

Acceptable values at present include:
.RS
.TP 14
\fB####\fR
frequency in kilohertz
.TP
\fBLow\fR
the lowest available frequency
.TP
\fBHigh\fR
the highest available frequency
.TP
\fBHighM1\fR
(high minus one) will select the next highest available frequency
.TP
\fBMedium\fR
attempts to set a frequency in the middle of the available range
.TP
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (the default value)
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor
.RE

The following informational environment variable is set in the job
step when the \fB\-\-cpu\-freq\fR option is requested.
.nf
        SLURM_CPU_FREQ_REQ
.fi

This environment variable can also be used to supply the value for the
CPU frequency request if it is set when the 'srun' command is issued.
The \fB\-\-cpu\-freq\fR option on the command line will override the
environment variable value.

\fBNOTE\fR: Setting the frequency for just the CPUs of the job step implies
that the tasks are confined to those CPUs. If task confinement (i.e.,
TaskPlugin=task/affinity or TaskPlugin=task/cgroup with the "ConstrainCores"
option) is not configured, this parameter is ignored.

\fBNOTE\fR: When the step completes, the frequency and governor of each
selected CPU is reset to the previous values.

\fBNOTE\fR: Submitting jobs with the \fB\-\-cpu\-freq\fR option
with linuxproc as the ProctrackType can cause jobs to run too quickly before
Accounting is able to poll for job information. As a result, not all of the
accounting information will be present.
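.br
For example, a hypothetical allocation whose srun steps request a frequency
between the lowest available value and medium under the UserSpace governor
(the program name is illustrative):
.nf

    salloc \-n4 \-\-cpu\-freq=low\-medium:UserSpace ./my_program

.fi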
.RE

.TP
\fB\-\-cpus\-per\-gpu\fR=<\fIncpus\fR>
Advise Slurm that ensuing job steps will require \fIncpus\fR processors per
allocated GPU.
Not compatible with the \fB\-\-cpus\-per\-task\fR option.

.TP
\fB\-c\fR, \fB\-\-cpus\-per\-task\fR=<\fIncpus\fR>
Advise Slurm that ensuing job steps will require \fIncpus\fR processors per
task. By default Slurm will allocate one processor per task.

For instance,
consider an application that has 4 tasks, each requiring 3 processors.  If our
cluster is comprised of quad\-processors nodes and we simply ask for
12 processors, the controller might give us only 3 nodes.  However, by using
the \-\-cpus\-per\-task=3 options, the controller knows that each task requires
3 processors on the same node, and the controller will grant an allocation
of 4 nodes, one for each of the 4 tasks.
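.br
A hypothetical command for the scenario above (the program name is
illustrative):
.nf

    salloc \-\-ntasks=4 \-\-cpus\-per\-task=3 ./my_program

.fi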

.TP
\fB\-\-deadline\fR=<\fIOPT\fR>
Remove the job if no ending is possible before
this deadline (start > (deadline \- time[\-min])).
Default is no deadline. Valid time formats are:
.br
HH:MM[:SS] [AM|PM]
.br
MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
.br
MM/DD[/YY]\-HH:MM[:SS]
.br
YYYY\-MM\-DD[THH:MM[:SS]]
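.br
For example, a hypothetical 30\-minute job that must be able to finish before
an illustrative date and time:
.nf

    salloc \-t30 \-\-deadline=2021\-06\-01T18:00:00 ./my_program

.fi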

.TP
\fB\-\-delay\-boot\fR=<\fIminutes\fR>
Do not reboot nodes in order to satisfy this job's feature specification if
the job has been eligible to run for less than this time period.
If the job has waited for less than the specified period, it will use only
nodes which already have the specified features.
The argument is in units of minutes.
A default value may be set by a system administrator using the \fBdelay_boot\fR
option of the \fBSchedulerParameters\fR configuration parameter in the
slurm.conf file, otherwise the default value is zero (no delay).

.TP
\fB\-d\fR, \fB\-\-dependency\fR=<\fIdependency_list\fR>
Defer the start of this job until the specified dependencies have been
satisfied.
<\fIdependency_list\fR> is of the form
<\fItype:job_id[:job_id][,type:job_id[:job_id]]\fR> or
<\fItype:job_id[:job_id][?type:job_id[:job_id]]\fR>.
All dependencies must be satisfied if the "," separator is used.
Any dependency may be satisfied if the "?" separator is used.
Many jobs can share the same dependency and these jobs may even belong to
different users. The value may be changed after job submission using the
scontrol command.
Dependencies on remote jobs are allowed in a federation.
Once a job dependency fails due to the termination state of a preceding job,
the dependent job will never be run, even if the preceding job is requeued and
has a different termination state in a subsequent execution.
.PD
.RS
.TP
\fBafter:job_id[[+time][:jobid[+time]...]]\fR
This job can begin execution after the specified jobs start or are cancelled
and 'time' in minutes from job start or cancellation has passed. If no 'time'
is given then there is no delay after start or cancellation.
.TP
\fBafterany:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated.
.TP
\fBafterburstbuffer:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated and
any associated burst buffer stage out operations have completed.
.TP
\fBaftercorr:job_id[:jobid...]\fR
A task of this job array can begin execution after the corresponding task ID
in the specified job has completed successfully (ran to completion with an
exit code of zero).
.TP
\fBafternotok:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated
in some failed state (non-zero exit code, node failure, timed out, etc).
.TP
\fBafterok:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have successfully
executed (ran to completion with an exit code of zero).
.TP
\fBexpand:job_id\fR
Resources allocated to this job should be used to expand the specified job.
The job to expand must share the same QOS (Quality of Service) and partition.
Gang scheduling of resources in the partition is also not supported.
"expand" is not allowed for jobs that didn't originate on the same cluster
as the submitted job.
.TP
\fBsingleton\fR
This job can begin execution after any previously launched jobs
sharing the same job name and user have terminated.
In other words, only one job by that name and owned by that user can be running
or suspended at any point in time.
In a federation, a singleton dependency must be fulfilled on all clusters
unless DependencyParameters=disable_remote_singleton is used in slurm.conf.
.RE
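.P
For example, a hypothetical allocation that may begin only after job 12345
completes successfully (the job ID is illustrative):
.nf

    salloc \-\-dependency=afterok:12345 ./my_program

.fi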

.TP
\fB\-F\fR, \fB\-\-nodefile\fR=<\fInode file\fR>
Much like \-\-nodelist, but the list is contained in a file of name
\fInode file\fR.  The node names of the list may also span multiple lines
in the file.    Duplicate node names in the file will be ignored.
The order of the node names in the list is not important; the node names
will be sorted by Slurm.
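.br
For example, given a hypothetical file "my_nodes" containing illustrative
node names:
.nf

    $ cat my_nodes
    tux1
    tux2
    tux3
    $ salloc \-N3 \-\-nodefile=my_nodes

.fi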

.TP
\fB\-\-get\-user\-env\fR[=\fItimeout\fR][\fImode\fR]
This option will load login environment variables for the user specified
in the \fB\-\-uid\fR option.
The environment variables are retrieved by running something of this sort:
"su \- <username> \-c /usr/bin/env" and parsing the output.
Be aware that any environment variables already set in salloc's environment
will take precedence over any environment variables in the user's
login environment.
The optional \fItimeout\fR value is in seconds. Default value is 3 seconds.
The optional \fImode\fR value controls the "su" options.
With a \fImode\fR value of "S", "su" is executed without the "\-" option.
With a \fImode\fR value of "L", "su" is executed with the "\-" option,
replicating the login environment.
If \fImode\fR is not specified, the mode established at Slurm build time
is used.
Examples of use include "\-\-get\-user\-env", "\-\-get\-user\-env=10",
"\-\-get\-user\-env=10L", and "\-\-get\-user\-env=S".
NOTE: This option only works if the caller has an
effective uid of "root".

.TP
\fB\-\-gid\fR=<\fIgroup\fR>
Submit the job with the specified \fIgroup\fR's group access permissions.
\fIgroup\fR may be the group name or the numerical group ID.
In the default Slurm configuration, this option is only valid when used
by the user root.

.TP
\fB\-G\fR, \fB\-\-gpus\fR=[<\fItype\fR>:]<\fInumber\fR>
Specify the total number of GPUs required for the job.
An optional GPU type specification can be supplied.
For example "\-\-gpus=volta:3".
Multiple options can be requested in a comma separated list, for example:
"\-\-gpus=volta:3,kepler:1".
See also the \fB\-\-gpus\-per\-node\fR, \fB\-\-gpus\-per\-socket\fR and
\fB\-\-gpus\-per\-task\fR options.

.TP
\fB\-\-gpu\-bind\fR=<\fItype\fR>
Bind tasks to specific GPUs.
By default every spawned task can access every GPU allocated to the job.

Supported \fItype\fR options:
.RS
.TP 10
\fBmap_gpu:<list>\fR
Bind by setting GPU IDs on tasks (or ranks) as specified where <list> is
<gpu_id_for_task_0>,<gpu_id_for_task_1>,...
If the number of tasks (or ranks) exceeds the number of elements in this list,
elements in the list will be reused as needed starting from the beginning of
the list.
If the task/cgroup plugin is used and ConstrainDevices is set in cgroup.conf,
then the GPU IDs are zero-based indexes relative to the GPUs allocated to the
job (e.g. the first GPU is 0, even if the global ID is 3). Otherwise, the GPU
IDs are global IDs, and all GPUs on each node in the job should be allocated for
predictable binding results.
.TP
\fBmask_gpu:<list>\fR
Bind by setting GPU masks on tasks (or ranks) as specified where <list> is