Due to the increase in the number of cores per node in the new system, the default memory allocation per job is being reduced from 4GB to 2GB. This change takes effect for all jobs submitted after 10am on Wednesday, 14th December.
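If your jobs need more than the new 2GB default, request memory explicitly in your job script. The sketch below assumes a Grid Engine-style scheduler; the directive name (`h_vmem`) and syntax will differ on other batch systems, so check the site documentation for the exact form.

```shell
#!/bin/bash
# Example job script requesting 4GB per slot instead of the new 2GB default.
# NOTE: assumes a Grid Engine-style scheduler -- the resource name (h_vmem)
# is an illustration, not confirmed for this cluster.
#$ -l h_vmem=4G
#$ -cwd

./my_program
```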
The new cluster also includes a number of General Purpose Graphics Processing Units (GPUs), along with the CUDA software libraries. To gain access to the GPU nodes, please email firstname.lastname@example.org
We've also implemented the 'module' command as a means of loading software libraries/environments. You can see a list of the modules available by running 'module avail' from any worker node.
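A typical 'module' session looks like the following. The module name and version shown are placeholders; run 'module avail' on a worker node to see what is actually installed.

```shell
# List every module available on this worker node
module avail

# Load a module into the current environment
# (the name/version below is an example only)
module load example/1.0

# Show what is currently loaded, and unload when finished
module list
module unload example/1.0
```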
As part of the upgrade we've also updated the parallel MPI environments, OpenMPI and MVAPICH2. The new environments can be accessed using the 'module' command. Support for previous versions of MPI will be discontinued shortly.
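To move to one of the updated MPI environments, load it via the 'module' command before compiling and launching. The module name below is a placeholder; use 'module avail' to find the exact names and versions installed on the cluster.

```shell
# Load one of the updated MPI stacks (placeholder name -- check 'module avail')
module load openmpi

# Compile and launch an MPI program with the loaded environment
mpicc -o hello hello_mpi.c
mpirun -np 4 ./hello
```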