Getting Started

At this point, you should have met several smiling people and settled into your office. Comfy chair, splendid view of the surrounding mountains... Welcome to Grenoble!

Since you are reading this page, you have been given your temporary password and successfully logged in. You should change your password as soon as possible. In fact, you should change it right now. :-)

INRIA password change

If you’re coming from a different INRIA centre, you will keep your account, but it needs to be updated manually by Jean-Marc Joseph. In some cases you will have to change your password anyway to force the update.

We leave for lunch between 12:00 and 13:00. You will need your lunch card; Nathalie (our team assistant) should have briefed you on how to get it (I think you pick it up at the restaurant with an ID card, but I’m not sure). Your card has an associated balance that you can recharge by debit card, in cash, or online.

For any computer-related question you cannot find an answer to in the wiki, or when in doubt before doing something on your machine or on the team’s servers, your system administrators are here to help.

Summary

  • Every user gets a home directory with a 10 GB quota that is backed up regularly. Home directories are mounted on all the machines via NFS. Store important files there, but avoid installing large libraries: it is the right place for your code, not your data.

  • Every user can get access to one or more scratch directories, also mounted on all the machines via NFS, where data, checkpoints, and logs can be stored. If you need scratch space, let us know and we will create one for you (a quick sanity check of both storage areas is sketched after this list).

  • There is a GPU cluster that we host and administer ourselves, with various types of GPU cards. We use OAR as the job scheduler. There are two modes: interactive, where you get a shell on a GPU node (12h maximum), and passive, where the job simply runs in the background (unlimited time), either via a .sh script or with the full python command inside the OAR command itself. There are also two queues: default, where each user can have at most 3 jobs, which cannot be killed until they finish, and besteffort, where, as the name suggests, jobs can be killed to make room for others (based on “karma”, availability…) but can also be restarted automatically if you add the -t idempotent option. Note that the GPU cluster frontend is edgar, from which you should launch OAR commands (see the sketch after the warning below).

  • There is also a CPU cluster, which we do not administer directly, with a large number of CPUs. Here again we use OAR, and the same rules as above apply. Note that the CPU cluster frontend is access2-cp.
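
For reference, here is a quick sanity check of the storage areas described above. This is only a sketch: du and df are standard Linux commands, but the scratch path is a hypothetical placeholder, since the actual mount point depends on the directory we create for you.

    # How much of your 10 GB home quota you are using
    du -sh "$HOME"

    # NFS mounts visible on this machine (home and scratch should appear)
    df -h -t nfs -t nfs4

    # Rule of thumb for where things go (scratch path is hypothetical):
    #   code                   -> somewhere under $HOME
    #   data/checkpoints/logs  -> /scratch/<username>/...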

Warning

Do not launch python commands directly on the cluster frontends (edgar or access2-cp). If your command crashes the frontend, it takes the whole cluster down.
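
To make that concrete, here is roughly what OAR submissions look like once you are logged in on a frontend. Treat this as a sketch, not the exact syntax for our clusters: the walltimes, the nodes=1 resource request (in particular, how GPUs are selected is cluster-specific), and the train.sh / train.py names are all placeholders; check the dedicated wiki pages for the real options.

    # Interactive mode: get a shell on a node (12h maximum on the GPU cluster)
    oarsub -I -l nodes=1,walltime=12:00:00

    # Passive mode: run a .sh script, or the full python command, in the background
    oarsub -l nodes=1,walltime=48:00:00 ./train.sh
    oarsub -l nodes=1,walltime=48:00:00 "python train.py"

    # besteffort job that is resubmitted automatically if it gets killed
    oarsub -t besteffort -t idempotent -l nodes=1 ./train.sh

    # Monitor and cancel your jobs
    oarstat -u $USER
    oardel <job_id>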

  • Every user manages their own python environment, but we strongly recommend conda, which can be installed with Miniconda (a minimal install sketch follows this list). We can help you set that up!

  • Every user has access to a GitLab account (log in with your INRIA credentials) where code can be stored under version control.
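
To get started with the recommended setup, here is a minimal Miniconda install sketch. One caveat: installing into your home directory counts against the 10 GB quota, so if you expect heavy environments, a scratch directory may be a better install prefix (ask us). The environment name myproject is a placeholder.

    # Download and install Miniconda (pick an install prefix with enough space)
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
    "$HOME/miniconda3/bin/conda" init bash   # then open a new shell

    # One environment per project keeps things tidy
    conda create -n myproject python=3.11
    conda activate myproject

And once your GitLab access works, getting code onto a machine is the usual git workflow (the instance below is INRIA’s GitLab; the group/project path is a placeholder):

    git clone https://gitlab.inria.fr/<group>/<project>.git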