High performance computing clusters available to researchers at the University of Newcastle, including the Red Hat Linux grid, the Windows cluster and the SCIT xGrid clusters.

HPC Clusters

High performance computing for research

The University of Newcastle has a new high performance computing (HPC) system available to staff and research higher degree students. This Research Compute Grid (RCG) is a University-funded facility focused on providing computing capacity for research. It can be used for computer modelling and data analysis, or as a platform to test and develop computing code before running it on national facilities such as the National Computational Infrastructure (NCI) in Canberra.

The RCG is based on DELL servers running Red Hat Linux, combined using the PBS Professional cluster management system. It has 648 CPU cores and over 3.5 terabytes of RAM, along with specialised accelerator capacity consisting of NVIDIA GPUs and Intel Xeon Phi (MIC) coprocessors. This provides approximately 20 teraflops of processing capacity to University of Newcastle researchers.

Compilers for Fortran, C and C++ are provided, along with scripting languages such as Python and Perl. A range of grid-aware applications is also available (refer to Table 2: Research Compute Grid Applications).

The Linux-based grid is best suited to non-interactive 'batch' jobs, where input data is read from a file and results are written to an output file. Interactive jobs are usually run one at a time and may have to wait until a free grid node is available.

The current grid configuration (refer to Table 1: RCG hardware configuration) consists of a grid head node that maintains the queues, a login node on which jobs are compiled, test run and submitted to the grid, and a number of execute nodes on which your program is executed. The execute nodes are configured identically and there is no way to tell which node will be selected by the grid. Additional nodes with specialist hardware also form part of the grid.
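
As an illustration of the batch model described above, the sketch below is a minimal non-interactive job: a short Python program that reads its input from a file, performs a calculation and writes the results to an output file, so it can run unattended on whichever execute node the grid selects. The file names and the calculation are hypothetical, not part of the RCG documentation; on the RCG such a script would be submitted from the login node as a PBS Professional batch job.

    # batch_job.py - minimal sketch of a non-interactive batch job.
    # File names and the calculation are illustrative only; adapt them
    # to your own data before submitting the job from the login node.
    import numpy as np  # NumPy is listed in Table 2 as installed on the RCG

    def main():
        # Read input data from a plain-text file (one number per line).
        data = np.loadtxt("input.txt")

        # Do the computation - here just some simple summary statistics.
        results = {
            "count": int(data.size),
            "mean": float(np.mean(data)),
            "std": float(np.std(data)),
        }

        # Write results to an output file so no interaction is needed.
        with open("output.txt", "w") as out:
            for name, value in results.items():
                out.write("{0}: {1}\n".format(name, value))

    if __name__ == "__main__":
        main()

Because the job touches only its input and output files, many copies of it can be queued at once and the scheduler can place them on any free execute node.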

Table 1: RCG hardware configuration

Grid Head Node
Purpose: Grid storage system; provides storage for all compute nodes within the grid.
Configuration: DELL R720XD rack server with:
  • 2 Intel 12-core CPUs (24 cores in total)
  • 192GB RAM
  • 8TB of disk storage

Login Node (Compile and Submit)
Purpose: Provides a system to compile code using one of the available compilers (Fortran, C or C++) and to submit compute jobs to the Research Compute Grid. (Refer to Table 2 for a list of applications available on the RCG.)
Configuration: DELL R715 rack server with:
  • 2 AMD 16-core CPUs (32 cores in total)
  • 256GB RAM

Traditional CPU Compute Nodes (total of 8 servers)
Purpose: Provide traditional CPU processing of grid jobs.
Configuration: DELL chassis with 8 x M915 blade servers. Each server has:
  • 4 AMD 16-core CPUs (64 cores in total)
  • 256GB RAM

Nvidia Kepler GPU Execute Nodes (total of 2 servers)
Purpose: Provide GPU processing using CUDA code (see the PyCUDA sketch after this table).
Configuration: Two DELL R720 rack servers. Each server has:
  • 2 Intel 8-core CPUs (16 cores in total)
  • 2 Nvidia K20 GPU cards
  • 128GB RAM

Intel Xeon Phi (MIC) Execute Nodes (total of 2 servers)
Purpose: Provide accelerator processing using the Intel Xeon Phi (MIC) coprocessor cards.
Configuration: Two DELL R720 rack servers. Each server has:
  • 2 Intel 8-core CPUs (16 cores in total)
  • 2 Intel 7120 Xeon Phi cards
  • 128GB RAM

Large Memory Execute Node (total of 1 server)
Purpose: Provides traditional CPU processing of grid jobs that require a larger memory footprint in a single server (512GB RAM).
Configuration: DELL R815 rack server with:
  • 2 AMD 8-core CPUs (16 cores in total)
  • 512GB RAM
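
The Nvidia Kepler GPU execute nodes above run CUDA code, which can be written in C/C++ with the CUDA toolkit or driven from Python through PyCUDA (listed in Table 2). The fragment below is a minimal sketch only, assuming a GPU node with a working PyCUDA installation: it compiles a trivial CUDA kernel at run time and applies it to a small array; the kernel and array size are illustrative rather than an RCG-supplied example.

    # gpu_double.py - minimal PyCUDA sketch for the Kepler GPU execute nodes.
    # The kernel simply doubles every element of an array; it illustrates the
    # CUDA workflow only and is not an RCG-supplied example.
    import numpy as np
    import pycuda.autoinit              # creates a CUDA context on the first GPU
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Compile a tiny CUDA kernel at run time.
    mod = SourceModule("""
    __global__ void double_elements(float *a)
    {
        int idx = threadIdx.x + blockIdx.x * blockDim.x;
        a[idx] *= 2.0f;
    }
    """)
    double_elements = mod.get_function("double_elements")

    a = np.ones(256, dtype=np.float32)
    # InOut copies the array to the GPU, runs the kernel and copies it back.
    double_elements(drv.InOut(a), block=(256, 1, 1), grid=(1, 1))
    print(a[:4])  # expected: [ 2.  2.  2.  2.]

The same structure scales to larger arrays by increasing the grid and block dimensions.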

Further expansion of the RCG is underway and will provide an additional 512 cores, 2 terabytes of RAM and a Microsoft Windows-based high performance computing environment.

The RCG is managed by the Academic & Research Computing Support (ARCS) team within IT Services. If you would like to run jobs on the RCG, please contact the 17triplezero IT Service Desk on 4921 7000 and request access to the Research Compute Grid.

Table 2: Research Compute Grid Applications

Name Description Installed
Abyss Pair Sequence Assembler Yes
ANSYS ANSYS CFX Solver Upon request
Autodock Molecule Docking Tool Yes
Bedtools Chromosome Overlap Tool Yes
Bioperl Perl Modules for Bioinformatics Yes
Boost C++ Scientific Libraries Yes
Blast Basic Local Alignment Search Tool Yes
Bowtie DNA Sequence Indexer Yes
BWA Burrows-Wheeler Aligner Yes
Cplex Linear Programming Solver Yes
Cufflinks RNA Sequence Analysis Tool Yes
Fluent Fluid Dynamics Upon request
Gromacs Molecular dynamics package Yes
Gsnap Genome Mapping Alignment Yes
Java Oracle Java Yes
LAMMPS Molecular Dynamics Simulator Yes
Mathematica Maths Package Yes
Matlab Maths Package Yes
MPICH MPI programming library Yes
NAMD Scalable Molecular Dynamics Yes
NetCDF Network Common Data Form Libraries Yes
Numpy Numerical Python Tools Yes
Plink Genotype Phenotype Analysis Yes
PennCNV Copy Number Variation Detection Yes
PyCUDA Python bindings for CUDA Yes
OpenMPI MPI Programming library Yes
R R Project Yes
Samtools Nucleotide Sequence Toolbox Yes
Scipy Scientific Python Tools Yes
Tophat RNA Sequence Mapper Yes
Trinity Transcriptome Reconstruction Yes
Velvet DNA Sequence Assembler Yes
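
Table 2 also lists the MPICH and OpenMPI libraries, which let a single job run in parallel across many cores or nodes. MPI programs on the grid are typically written in C, C++ or Fortran with the compilers mentioned earlier, but the idea can be sketched in a few lines of Python, assuming Python MPI bindings such as mpi4py are available; mpi4py itself is not listed above, so check with ARCS before relying on it.

    # mpi_sum.py - sketch of an MPI-parallel job. Assumes the mpi4py bindings,
    # which are NOT confirmed in Table 2 (only the MPICH/OpenMPI libraries are).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID (0 .. size-1)
    size = comm.Get_size()   # total number of MPI processes in the job

    # Each rank sums a strided slice of 0..999999.
    local_sum = sum(range(rank, 1000000, size))

    # Rank 0 combines the partial sums from all ranks.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print("total = {0}".format(total))

Launched under mpirun, every copy of the script works on its own slice of the problem and only rank 0 prints the combined result.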

Windows Cluster

The University of Newcastle provides a Microsoft Windows compute cluster that is available to staff and students wishing to run jobs.

Head node
Name:  WINGRID2008
Hardware:  Intel Xeon 2.33GHz, 8GB RAM, Total 300GB storage of which 200GB is available as shared space for users.
Operating System:  Windows Server 2008 – HPC Edition
Software Installed:  MS HPC Pack, MS Visual Studio.NET 2005
Compute Nodes (8 nodes)
Name: WinGrid-Node001 to WinGrid-Node008
Hardware: Intel Xeon 2.00GHz, 16GB RAM, 160GB hard drives
Software Installed: Matlab 2008b, R, Magma

Other software may be installed after consultation with ARCS staff.

SCIT xGrid Clusters

The School of Design, Communication and IT manages two separate ad hoc clustering systems, meaning that clients join the cluster when they are free or otherwise instructed to. One cluster runs Apple's xGrid technology; the other runs a proprietary animation rendering cluster technology.

Although the xGrid cluster can run a range of programmed mathematical computations, the School uses it for image processing when rendering and compressing video footage into the formats required for editing and output. The largest configuration the cluster has registered is 313 processors, totalling 751GHz of processing power and 592GB of RAM.

The second cluster works only with the School's chosen animation software, Cinema 4D, and renders animation output into a number of different formats. The largest number of nodes attached to this cluster to date is 121. As a real-world example from our own use, this cluster completed one project in less than 6 hours that took 16 days to process on a single machine.


If you have any questions about the above information, please contact ARCS.