Shared Cluster

  • Monika (current state of the cluster): http://visu-cp.inrialpes.fr/monika

  • DrawGantt (Gantt chart of reservations): http://visu-cp.inrialpes.fr/drawgantt

  • To access this cluster, connect to one of the frontends: access1-cp (Fedora 24) or access2-cp (Ubuntu 16).
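
For example, a connection from your own machine might look like this (the fully qualified hostname is an assumption, adapt it to the actual frontend address):

ssh username@access2-cp.inrialpes.fr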

  • Our nodes are named “nodeX-thoth” and can be selected with the OAR property -p "cluster='thoth'". They run Ubuntu 16.04, and you have “sudo” rights to install your own packages. Cluster-wide package installs should go through a helpdesk ticket: https://helpdesk.inria.fr/
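
For instance, a sketch of an interactive reservation of one Thoth node (the walltime value is only illustrative):

oarsub -I -p "cluster='thoth'" -l "/host=1,walltime=2:00:00"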

  • The usual NFS mounts work on the Thoth machines: the team's scratch and other volumes are available on the Thoth nodes, so you can read/write /scratch/clear, /scratch2/clear, /scratch/gpuhostX, /home/clear, etc.

  • On other (non-Thoth) machines, our scratch volumes are not accessible for now.

If you want to use other nodes of the shared cluster (including GPU nodes from other teams), you can copy your data and write your results in ‘/services/scratch/thoth/username’. The volume ‘/services/scratch/thoth’ is available from the submission server; run `mkdir username` there to create a folder for yourself (you have permission to do so). This scratch volume is shared with other teams and is quite full at the moment (only 230 GB left). As a reminder, /services/scratch/thoth should only hold data needed for a computation and the results it produces, all of which should be moved elsewhere afterwards; it must not be used for long-term storage.
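
A minimal sketch of that workflow, assuming your folder is named after your login ($USER) and using placeholder data/result paths:

mkdir /services/scratch/thoth/$USER
rsync -av /scratch/clear/$USER/my_dataset/ /services/scratch/thoth/$USER/my_dataset/
# ... run your jobs on the shared cluster, then move the results back and clean up:
rsync -av /services/scratch/thoth/$USER/results/ /scratch/clear/$USER/results/
rm -rf /services/scratch/thoth/$USER/my_dataset /services/scratch/thoth/$USER/results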

  • There are two queues: default and besteffort. People from other teams can only submit “besteffort” jobs on our machines, and vice versa.
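
For example, a besteffort submission on another team's nodes could look like this (the property, walltime and script name are illustrative):

oarsub -t besteffort -p "cluster='perception'" -l "/host=1,walltime=4:00:00" ./my_job.sh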

  • To reserve an interactive GPU job, you need a command slightly different from the ones used on edgar:

oarsub -I -p "cluster='perception'" -l "{gpu='YES'}/host=1/gpudevice=1"

You can replace perception with mistis, or drop the -p option entirely. This resource definition means “I want one GPU on one host”; it is presumably needed because GPU and CPU nodes share the same reservation system. GPU visibility is restricted by OAR, so inside the job you only have access to the single GPU you reserved.
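
If you also want to set a walltime or submit a non-interactive batch job, a variant might look like this (the walltime and script name are illustrative):

oarsub -p "cluster='perception'" -l "{gpu='YES'}/host=1/gpudevice=1,walltime=12:00:00" ./train.sh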