Centre of Operations of the Slovak Academy of Sciences


Account Creation and Logging in

Guides for the AUREL supercomputer (COO SAS - CC Bratislava)


User registration and administration are handled by the registration portal, where you can create your account.
User registration proceeds in the following steps:
  1. A new user creates an account on the registration portal
  2. The user confirms their email address
  3. User details are checked, the account is activated, and the user receives test access to all of the SIVVP computing resources. Further information is sent to the user's email address

Computing resources are accessible only through secure connection (SSH) to one of the login nodes. After logging in, the user can compile programs and run jobs. Logging in requires an SSH key.

  • Key creation and logging in for Linux/UNIX

    Run the following command in shell:

    ssh-keygen -b 2048 -t rsa

    this creates a key pair in the ~/.ssh directory (by default). Please choose a secure passphrase (at least 8 characters, including numbers and special symbols). You will upload your public key (stored under ~/.ssh/) during the registration process. Remember that ~/.ssh/id_rsa is your private key, which must be stored securely. Never send or show it to anyone. If you suspect that your private key has been compromised, please report it immediately. You can always generate a new key pair and send us the new public key as a replacement.
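    Since SSH refuses to use private keys with lax file permissions, it is worth checking the file modes as well. A minimal sketch, assuming the default ssh-keygen file names (the touch line only creates empty placeholders when no keys exist yet, so the snippet runs on its own):

```shell
# Restrict permissions so ssh accepts the key and nobody else can read it
mkdir -p ~/.ssh && chmod 700 ~/.ssh
touch ~/.ssh/id_rsa ~/.ssh/id_rsa.pub   # placeholders if no keys exist yet
chmod 600 ~/.ssh/id_rsa                 # private key: owner read/write only
chmod 644 ~/.ssh/id_rsa.pub             # public key may be world-readable
```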

  • Logging in for Windows

    You should be able to use any SSH client that supports authentication through SSH keys. The following is the procedure using PuTTY (an open-source SSH client). Along with the main client, you will also need the PuTTYgen key generator:
    Putty Gen1

    Select SSH-2 RSA as the key type and set "Number of bits in a generated key" to 2048. Click "Generate" and move your mouse around in the gray field. Do not forget to set a strong passphrase for your key. Upload your public key and keep your private key secure (see the information above).
    Putty Gen2

    To log in, open PuTTY, enter the IP address of a login node (see the table below) as the host name, and import your private key through the menu Connection -> SSH -> Auth.
    Putty Gen3


  • IP addresses of the login nodes

    Login node    IP address
    Aurel 1       147.213.80.175
    Aurel 2       147.213.80.176

    The Aurel supercomputer has two equivalent login nodes. If one is down, you can use the other.
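
    Putting the key and the addresses together, a login from Linux looks like this ("jdoe" is a hypothetical login name; substitute your own):

```shell
# Log in to Aurel 1; -i selects the private key explicitly
ssh -i ~/.ssh/id_rsa jdoe@147.213.80.175

# If Aurel 1 is down, use the second login node instead
ssh -i ~/.ssh/id_rsa jdoe@147.213.80.176
```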
  • File Transfer

    You can use the SCP protocol to transfer files to the Aurel supercomputer and the other compute clusters.

    File transfer from Linux/Mac
    Example command to transfer a local file to the cluster:

    scp /path/to/local/file login@IP:.

    Example command to transfer a file from the cluster to a local machine:

    scp login@IP:/path/to/remote/file .

    Just replace "IP" with the address of the target login node and "login" with your login name.

    File transfer through Windows
    You can use any SCP client that supports authentication with SSH keys. In our example we will use the freely available WinSCP.

    Click on "New" to create a new connection.

    1. Choose SCP as the protocol
    2. Fill in the IP address of a login node
    3. Fill in your login information (your login name and the passphrase of your SSH key)
    4. Input the path to your private key
    5. Click "Login" (you can also "Save" your settings)

    WinSCP provides a comfortable user interface for copying, transferring and deleting files.


    Code Compilation

    Please use the IBM compilers to create an optimized program from source code. The following compilers are available:
    - XL C/C++ (C/C++ compiler), version 13.1
    - XLF (Fortran compiler), version 15.1
    GNU compilers are also available, but they are not recommended for HPC applications.

    Environment setup
    The IBM C/C++/Fortran compilers are available with the standard environment settings.

    The GNU C/C++/Fortran compilers are also available with the standard settings.

    64-bit addressing
    By default, the compilers operate in 32-bit mode. You can compile your programs in 64-bit mode to achieve higher performance as well as to use more memory. To do so, pass the "-q64" flag, or set the OBJECT_MODE variable with this command:

    export OBJECT_MODE=64

    When using the GNU compilers, the corresponding flag is "-maix64".
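
    As an illustration, the two equivalent ways to request 64-bit mode (the source file name is arbitrary; the commands assume the compiler environment of the Aurel login nodes):

```shell
# Option 1: pass the flag explicitly
xlc_r -q64 -O3 myprog.c

# Option 2: set the environment variable; the XL compilers then default to 64-bit
export OBJECT_MODE=64
xlc_r -O3 myprog.c

# GNU equivalent of -q64 on this platform
gcc -maix64 -O3 myprog.c
```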
    • Serial Code Compilation

      Optimized code compilation examples:
      xlc_r -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.c

      xlC_r -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.c

      xlf_r -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.f

      xlf90_r -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.f90

      You can find a more detailed description of the IBM compilers in IBM's documentation.

    • MPI Code Compilation

      IBM Parallel Environment
      Optimized code compilation examples using the Message Passing Interface (MPI):
      mpcc -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.c

      mpCC -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.c

      mpxlf -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.f

      mpxlf90 -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.f90

    • OpenMP Code Compilation

      Optimized code compilation examples using Open Multi-Processing (OpenMP):

      xlc_r -q64 -qsmp=omp -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.c

      xlC_r -q64 -qsmp=omp -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.c

      xlf_r -q64 -qsmp=omp -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.f

      xlf90_r -qsmp=omp -q64 -qarch=pwr7 -qtune=pwr7 -O3 -qhot myprog.f90

    As an alternative to IBM PE, you can use MPICH: load the necessary commands through modules ("module load mpi/mpich2"). The syntax is the same as for IBM PE; only the commands themselves are different:
    C: mpicc
    C++: mpic++
    Fortran 77: mpif77
    Fortran 90: mpif90
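
    For example, compiling the earlier MPI program with MPICH instead of IBM PE could look like this (file and program names are illustrative):

```shell
module load mpi/mpich2          # make the MPICH wrappers available
mpicc  -O3 -o myprog myprog.c   # C
mpif90 -O3 -o myprog myprog.f90 # Fortran 90
```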

    Job Running

    Every compute job must be run through the IBM LoadLeveler queue system.
    • Basic Commands

      - llsubmit script.ll - submit a job to the queue
      - llq - check the status of all jobs in the queue
      - llstatus - check available resources
      - llclass - show available job classes and their parameters
      - llcancel JOBID - cancel the job with the id "JOBID"

      Job description and required resources must be defined in a special script (text file) for LoadLeveler.
      You can find some job script examples in this directory: /gpfs/home/info/examples.
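
      A typical session then looks like this (the job id "aurel1.123.0" is made up for illustration; llsubmit prints the real one):

```shell
llsubmit script.ll     # submit the job; LoadLeveler prints the assigned job id
llq                    # check the state of the queued jobs
llstatus               # see which resources are currently free
llcancel aurel1.123.0  # cancel the job, using the id printed at submission
```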

    • Job Script Syntax

      The script consists of LoadLeveler keyword lines (starting with #@) and the commands to be executed.
      At the beginning of the script, you need to specify the job's resources.
      Lines with LoadLeveler keywords should not be interrupted by lines that do not contain them.
      The keywords are followed by the commands for job execution. You will usually use 'mpiexec' or 'poe' to run your parallel program. You can also use shell commands inside your script. You can find your name and account number (account_no) using the command "showaccount".

    • Script Example (IBM PE)

      #@ job_type = parallel
      #@ job_name = My_job
      #@ account_no = name-number
      #@ class = My_class
      #@ error = job.err
      #@ output = job.out
      #@ network.MPI = sn_all,not_shared,US
      #@ node = 2
      #@ rset = RSET_MCM_AFFINITY
      #@ mcm_affinity_options = mcm_mem_req mcm_distribute mcm_sni_none
      #@ task_affinity = core(1)
      #@ tasks_per_node = 32
      #@ queue

      mpiexec /path/to/your/app -flags...

      Lines starting with "#@" are interpreted by LoadLeveler.
      The most important keywords are "node", which specifies the number of nodes, and "tasks_per_node" (or "total_tasks"), which determine the number of MPI processes. Our example runs 64 tasks on 2 nodes (2 nodes x 32 tasks per node). It is also important to choose the right job class. The following table shows the available classes.
      class     max nodes per job   max jobs per user   max total tasks per user   max walltime (HH:MM)   priority
      short     32                  -1                  -1                         12:00                  100
      medium    16                  -1                  1024                       48:00                  80
      long      4                   8                   512                        240:00                 60
      testing                                                                     1:00                   undefined*
      * this class runs on a single designated node, it is used to test and tune applications.

      Script Example (MPICH)

      #! /bin/bash
      #@ job_type = parallel
      #@ account_no = name-number
      #@ class = My_class
      #@ output = job.out
      #@ error = job.err
      #@ network.MPI = sn_all,not_shared,US
      #@ node = 1
      #@ tasks_per_node = 32
      #@ queue

      export LD_LIBRARY_PATH=/gpfs/home/utils/mpi/mpich2-1.5/lib:$LD_LIBRARY_PATH
      export PATH=/gpfs/home/utils/mpi/mpich2-1.5/bin:$PATH
      $(which mpiexec) ./soft.x

      Work Directory

    • Every user has their own subdirectory in /gpfs/scratch/, which is designated for storing intermediate results. This directory is not only larger than your home directory, it is also faster. If you are running a job that produces a lot of data, use this directory as your work directory.
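
      A job-script fragment using the scratch space could look like this (a sketch: "myjob" is an arbitrary subdirectory name, and the application and result-file names are placeholders):

```shell
# Run in scratch, then copy the results worth keeping back home
WORK=/gpfs/scratch/$USER/myjob
mkdir -p "$WORK"
cd "$WORK"
mpiexec /path/to/your/app
cp results.dat "$HOME"/   # home directory for permanent storage, scratch for intermediates
```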

      User Tools


      - simpler version of the llq command with various useful information


      resuse

      - counts the number of running jobs and CPU hours for the current user on all projects

      resuse -m

      - jobs and CPU hours in the last 30 days

      resuse -q

      - jobs and CPU hours in the last 100 days

      resuse -y

      - jobs and CPU hours in the last 350 days

      resuse username

      - jobs and CPU hours for the user "username"

      resuse username -q (or resuse -q username)

      - jobs and CPU hours in the last 100 days for the user "username"