
Running Mathematica Jobs

If you want to run Mathematica on your own computer, you can find information and an application form on this page.

We don’t explain how to use Mathematica itself here; for that, please consult the Wolfram Documentation Center. Instead, we show how to run Mathematica computations on the cluster using job scripts.

Running Mathematica with GUI

If you have followed our guide on connecting to the cluster, then you have already set up XQuartz (macOS) or MobaXterm (Windows) and can run graphical applications. You can start the Mathematica GUI like this:

# Log in to Deigo
$ ssh -X deigo

# Load the latest version of Mathematica
$ module load mathematica

# Run mathematica GUI with 16g memory and 8 cores
$ srun -p compute --mem=16g -c 8 --x11 --pty Mathematica

Running Mathematica in interactive mode

You may notice that the GUI can be quite slow, especially if you are connecting from outside OIST. It is often better to run Mathematica in text-only mode, which is much faster, if not as pretty:

# log in to Deigo
$ ssh -X deigo

# Load the latest version of Mathematica
$ module load mathematica

# Run mathematica in text mode with 16g memory and 8 cores
$ srun -p compute --mem=16g -c 8 --pty math

Basic Mathematica job script

Here is a sample Mathematica script, math_basic.m, that we want to run in a job script:

A = Sum[i, {i,1,100}]
B = Mean[{25, 36, 22, 16, 8, 42}]
Answer = A + B
Quit[];
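As a quick cross-check of the arithmetic in this script: A = Sum[i, {i,1,100}] = 5050, and B = Mean[{25, 36, 22, 16, 8, 42}], which Mathematica keeps as the exact fraction 149/6. The same numbers can be verified in plain shell:

```shell
# Cross-check of math_basic.m using shell integer arithmetic.
# A = Sum[i, {i,1,100}] = 100*101/2
A=$(( 100 * 101 / 2 ))
# B = Mean[{25,36,22,16,8,42}]; Mathematica returns the exact fraction
# 149/6, so we only compute the numerator here
Bnum=$(( 25 + 36 + 22 + 16 + 8 + 42 ))
echo "A = ${A}, B = ${Bnum}/6"
```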

The following job script, math_basic.slurm, will run our Mathematica script above. It uses the default of a single core:

#!/bin/bash
#SBATCH --partition=compute
#SBATCH --mem=8G
#SBATCH --time=10:00
#SBATCH --output=math_basic_%j.out

module load mathematica
math -run < math_basic.m

Use sbatch to submit our job script to the cluster:

$ sbatch math_basic.slurm

The result will be available in the file math_basic_*.out where all printed output ends up.
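The %j placeholder in the --output line is what keeps runs from overwriting each other: Slurm replaces it with the numeric job ID. A small illustration of the resulting naming pattern (the job ID below is made up, and the file is created by hand rather than by a real job):

```shell
# Slurm expands %j in --output to the job ID, so each run writes its own
# file. We fake one such file here to show the naming pattern.
jobid=12345                                   # hypothetical job ID
echo "example output" > "math_basic_${jobid}.out"
ls math_basic_*.out
```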

Parallelism through multithreading (single node)

Mathematica can use multiple cores for parallel computation via the built-in Parallel* commands or the parallelism API. Parallel Mathematica jobs are limited to a single node, but can use all the cores allocated on that node. A parallel Mathematica script must specify the number of processors it will use.

Here is a Mersenne prime computation Mathematica script, math_par_mp.m:

(*Limits Mathematica to requested resources*)
Unprotect[$ProcessorCount];$ProcessorCount = 8;

(*Prints the machine name that each kernel is running on*)
Print[ParallelEvaluate[$MachineName]];

(*Prints all exponents n <= 2000 for which 2^n - 1 is prime*)
Print[Parallelize[Select[Range[2000],PrimeQ[2^#-1]&]]];

The following Slurm script, math_par_mp.slurm, requests 8 cores to run the Mathematica script above:

#!/bin/bash
#SBATCH --partition=compute
#SBATCH --job-name=math_par_mp
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --output=math_par_mp_%j.out

module load mathematica

math -run < math_par_mp.m

To run this script as a job, run:

$ sbatch math_par_mp.slurm

The result will be available in the file math_par_mp_*.out.

Parallelism through SubKernels (multiple nodes)

To scale your computation further, Mathematica can also run in parallel across multiple nodes, using the built-in LaunchKernels and RemoteMachine commands to start sub-kernels on remote machines, combined with the usual Parallel* commands. This lets you exploit the parallelism of the cluster without the single-node limitation of the multithreading method above.

IMPORTANT: Mathematica starts each sub-kernel in a newly created login shell, so only the environment set up in your $HOME/.bashrc file is visible to the sub-kernels. The module load mathematica command must therefore go into $HOME/.bashrc rather than into the Slurm batch script. Do this by appending the following code AT THE VERY END of your .bashrc file:

MHOSTNAME="$(/bin/uname -n)"
if [[ "${MHOSTNAME:0:5}" == "deigo" ]]; then
  module load mathematica/13.1
fi
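The prefix test above can be tried outside the cluster by substituting a sample hostname for the output of /bin/uname -n (the node name below is made up):

```shell
# Same prefix check as in the .bashrc snippet, with a hardcoded sample
# hostname instead of the output of /bin/uname -n
MHOSTNAME="deigo-login1"   # hypothetical node name
if [[ "${MHOSTNAME:0:5}" == "deigo" ]]; then
  echo "matched: would run 'module load mathematica/13.1'"
fi
```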

In the following example, the prime number computation is distributed over cores on several different nodes. We tell Slurm to use 6 nodes, with 4 single-core tasks on each node. That gives us a total of 24 cores for our computation:

#!/bin/bash
#SBATCH --partition=compute
#SBATCH --job-name=math_par_dis
#SBATCH --nodes=6
#SBATCH --mem-per-cpu=8G
#SBATCH --ntasks-per-node=4
#SBATCH --output=math_par_dis_%j.out

### -------------- BEGIN USER INPUT -------------

## Put the full pathname of your Mathematica script here
myscript="math_par_dis.m"

### -------------- END USER INPUT -------------

get_abs_filename() {
  echo "$(cd "$(dirname "$1")" && pwd)/$(basename "$1")"
}


## Set up core and node information
nlname="${WORK_SCRATCH}/nodelist_${SLURM_JOB_NAME}_${SLURM_JOB_ID}"
msname="${WORK_SCRATCH}/script_${SLURM_JOB_NAME}_${SLURM_JOB_ID}"
mfname="${WORK_SCRATCH}/machines_${SLURM_JOB_NAME}_${SLURM_JOB_ID}"
mathscript="$(get_abs_filename "${myscript}")"
mkdir -pv "${WORK_SCRATCH}" >&2
scontrol show hostnames "${SLURM_NODELIST}" > "${nlname}"
for ((line=1; line<=${SLURM_JOB_NUM_NODES}; line++)); do
  for ((tN=1; tN<=${SLURM_NTASKS_PER_NODE}; tN++)); do
    head -n "${line}" "${nlname}" | tail -n 1 >> "${mfname}"
  done
done
cd "${WORK_SCRATCH}"
sed -e "s;MACHINES_LIST_FILE;${mfname};g" "${mathscript}" > "${msname}"

## Run the script
math -run < "${msname}"

## Clean-up
cd "${SLURM_SUBMIT_DIR}"
[[ -d "${WORK_SCRATCH}" ]] && /bin/rm -fvr "${WORK_SCRATCH}" >&2
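The nested loop in the job script repeats each node name once per task, so the machines file ends up listing one line per allocated core. Here is a standalone sketch of that expansion, using a made-up two-node list in place of the scontrol show hostnames output:

```shell
# Build a machines file from a fixed node list, repeating each node name
# once per task per node -- the same logic as the loop in the job script
printf '%s\n' deigo-node1 deigo-node2 > nodelist.txt   # made-up node names
ntasks_per_node=2
: > machines.txt
while read -r node; do
  for ((t=1; t<=ntasks_per_node; t++)); do
    echo "${node}" >> machines.txt
  done
done < nodelist.txt
cat machines.txt
```

With 2 nodes and 2 tasks per node, machines.txt contains 4 lines; in the real job (6 nodes, 4 tasks per node) it contains 24.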

Our example Mathematica script math_par_dis.m consists of the following code. Note that you must not modify the line hosts=Import["MACHINES_LIST_FILE","List"]: the sed command in the job script replaces MACHINES_LIST_FILE with the path of the generated machines file.

(* configuration for starting remote kernels *)

Needs["SubKernels`RemoteKernels`"]

(* initialize the kernels on all machines defined in the host file *)

hosts=Import["MACHINES_LIST_FILE","List"]

(* on the master node, start one kernel fewer, since the master kernel is already running there *)
imin=2;
imax=Length[hosts];
idelta=1;

Do[
  Print["starting Kernel: ",i," on ",hosts[[i]]];
  LaunchKernels[RemoteMachine[hosts[[i]]]];,
  {i,imin,imax,idelta}
]

(* actual calculation *)
primelist = ParallelTable[Prime[k], {k, 1, 2000}];
Print[primelist]

To submit the script, use sbatch as usual:

$ sbatch math_par_dis.slurm

The result will be available in the file math_par_dis_*.out.