Gurobi

Gurobi is licensed software, available on HPC Vega only to users who already hold a Gurobi license. To gain access to the software on HPC Vega, send an email to our support team containing the email address associated with your Gurobi license and your cluster username.

Modules

Eligible users have access to the Gurobi module.

module av Gurobi

Output:

------------------------------------------------------ /ceph/hpc/software/modulefiles ------------------------------------------------------
   Gurobi/9.5.1-GCCcore-11.2.0-env

------------------------------------------------- /cvmfs/sling.si/modules/el7/modules/all --------------------------------------------------
   Gurobi/9.5.1-GCCcore-11.2.0 (D)

  Where:
   D:  Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

Once you have been granted access to Gurobi on the Vega cluster, you will be able to load the Gurobi module with the -env suffix. The module exports the correct license path in GRB_LICENSE_FILE.

module load Gurobi/9.5.1-GCCcore-11.2.0-env

Output:

Gurobi shell based on Python 3.9.6 can be launched with command `gurobi.sh`
Gurobi Python Interface can be loaded in Python 3.9.6 with 'import gurobipy'
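
To confirm the setup, you can check the exported license path and the Python interface. This is an optional sanity check; it assumes the Python 3.9.6 interpreter provided with the module is first on your PATH:

echo $GRB_LICENSE_FILE                                            # license path exported by the module
python -c "import gurobipy; print(gurobipy.gurobi.version())"     # assumes the module's Python 3.9.6 is on PATH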

Running gurobi_cl on a login node shows the license configuration in use:

[teop@vglogin0007]$ gurobi_cl
Set parameter TSPort to value 38567
Set parameter Username
Set parameter TokenServer to value "gurobi.vega.izum.si"
Set parameter LogFile to value "gurobi.log"
Using license file /ceph/hpc/software/gurobi/lic/gurobi.lic

Usage: gurobi_cl [--command]* [param=value]* filename
Type 'gurobi_cl --help' for more information.

License

A floating license is currently available. You can check it with:

gurobi_cl --license

Set parameter TSPort to value 38567
Set parameter Username
Set parameter TokenServer to value "gurobi.vega.izum.si"
Set parameter LogFile to value "gurobi.log"
Using license file /ceph/hpc/software/gurobi/lic/gurobi.lic

Running batch jobs

To run batch jobs, you need to prepare a job script (see examples below) and submit it to the batch system with the sbatch command.
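
For example, if one of the job scripts below is saved as gurobi_job.sh (a placeholder name, choose your own), it can be submitted and monitored like this:

sbatch gurobi_job.sh     # gurobi_job.sh is a placeholder name for one of the scripts below
squeue -u $USER          # check the status of your jobs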

Example 1:

#!/bin/bash -l
#SBATCH --output=mpi-%j.out
#SBATCH --ntasks=10
#SBATCH --cpus-per-task=6
#SBATCH --time=00:20:00
#SBATCH --export=ALL

module load openmpi/4.1.2.1 
module --ignore-cache load Gurobi/9.5.1-GCCcore-11.2.0-env
export QUEUETIMEOUT=200

mpirun -np 1 gurobi_cl Threads=60 VarBranch=1 Cuts=1 PreSolve=2 ResultFile=/ceph/hpc/home/$USER/output-example1.sol input.lp

Example 2:

#!/bin/bash -l
#SBATCH --output=omp-%j.out
#SBATCH --cpus-per-task=24
#SBATCH --time=00:20:00
#SBATCH --export=ALL

module load openmpi/4.1.2.1
module --ignore-cache load Gurobi/9.5.1-GCCcore-11.2.0-env
export QUEUETIMEOUT=200
export OMP_NUM_THREADS=24

srun gurobi_cl Threads=24 VarBranch=1 Cuts=1 PreSolve=2 ResultFile=/ceph/hpc/home/$USER/output-example2.sol input.lp
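
When the job finishes, the solver log is written to the file given by --output and the solution to the ResultFile path from the script. The names below follow Example 2; adjust them to your own job:

less omp-<jobid>.out                              # solver log (--output file, %j expands to the job ID)
less /ceph/hpc/home/$USER/output-example2.sol     # solution written by ResultFile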