A list of High Performance Computing options for University of Newcastle Researchers

High performance computing

Many researchers use computers, but desktop machines only go so far. If your overnight compute jobs run into the next day, if your research waits for a weekend to run, or if your computer is limiting the progress of your research, then High Performance Computing (HPC) is the solution.

High Performance Computing is used to solve real-world problems of significant scale or detail across a diverse range of disciplines, including physics, biology, chemistry, geosciences, climate science, engineering and many others.

For each option below: What is it and how do I get it? When should I use it? What caveats or obstacles are there? What support is available? (See the *Support key at the end of this page.)

University Compute Grids

The University has two Compute Grids available for researchers: one Linux based and one Microsoft Windows based. Both Compute Grids are managed by the ARCS team, who can arrange access.

ARCS provides general access to the Linux and Microsoft Windows compute grids. The team can also advise on, design, install and maintain research group or research grant funded computer systems, from high-end research workstations to dedicated compute servers, GPU-based compute grids and HPC Clusters (HPCCs) owned by specific research groups.

 

The University Compute Grids are well suited to:

Development of methodologies and code in preparation for running on national compute grid facilities (e.g. NCI).

When data analysis requires proprietary software that would be difficult (or cost prohibitive) to load on a national or state facility.

For researchers with moderate HPC compute requirements without funding to purchase their own compute grid systems.

For Academics and PhD/RHD students wanting to learn how to apply compute grid facilities to their research.

If your research requires dedicated compute resources and funding is available for their purchase, ARCS can design, build and provide ongoing management. Systems can be added to the existing University Grids, allowing the use of Grid Management software while restricting access to your research group, or they can be built completely standalone.

If your processing requirements exceed TODO, one of the state or national facilities might better meet your needs.

UoN Supplied (ARCS)

ARCS Compute Cloud

Note that this ARCS is the national organisation, as opposed to the University's research computing group of the same name.

The ARCS Compute Cloud is an interface to many of the HPC facilities around Australia. It provides the Grisu interface, which can make submitting jobs for certain applications easier.

External (ARCS)

The Berkeley Open Infrastructure for Network Computing (BOINC)

A non-commercial middleware system for volunteer and grid computing.

BOINC is a way of using the spare processor power of regular computers. A server must be set up to distribute work units to computers running the BOINC client.

The client computers can be lab machines on campus, or machines whose spare time is donated by volunteers anywhere on the Internet.

This method is unsuitable if you require a guaranteed amount of computing or do not wish to host any part of the HPC solution.
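To give a feel for how work is structured on the client side, the following is a minimal sketch of a BOINC application using the standard BOINC C/C++ API (boinc_api.h). The do_one_step function and the iteration count are hypothetical placeholders for a real work unit, and a matching BOINC project server must still be set up to distribute the work.

    /* Minimal sketch of a BOINC client application (C/C++, links against the
     * BOINC API library). do_one_step and total_steps are placeholders. */
    #include "boinc_api.h"

    static void do_one_step(long i) {
        (void)i;  /* placeholder for one chunk of the real computation */
    }

    int main(void) {
        const long total_steps = 1000000;  /* hypothetical work-unit size */

        /* Attach this process to the BOINC core client; non-zero means failure. */
        int rc = boinc_init();
        if (rc) return rc;

        for (long i = 0; i < total_steps; i++) {
            do_one_step(i);
            /* Report progress so the core client can display it to the volunteer. */
            boinc_fraction_done((double)i / total_steps);
        }

        /* Tell the core client the work unit finished successfully; does not return. */
        boinc_finish(0);
        return 0;
    }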

End User

Intersect/McLaren

Intersect encourages researchers who are interested in using HPC to contact them for advice & support at:
E: hpc_support@intersect.org.au
T: (02) 8079 2534

McLaren is a shared-memory SGI Altix 4700 with 128 dual-core CPUs, 1 TB of RAM and 12 TB of disk space.

It is ideally suited to applications where a large amount of RAM is required.

It is also a good choice for researchers who have outgrown their institutional facilities but still require the personal support that Intersect is able to offer.

 

External (Intersect)

NCI/Vayu

Researchers at UoN can apply for access to this service either through Intersect:
E: hpc_support@intersect.org.au
T: (02) 8079 2534
or directly to NCI.

NCI/Vayu is a Sun Constellation Cluster with 1492 nodes, each containing two quad-core Nehalem processors, for a total of 11,936 cores, 37 TB of RAM and 800 TB of disk space.

Applications not currently installed at NCI are unlikely to be added.

Please send an email to: hpc_support@intersect.org.au if you have custom requirements.

External (NCI)


*Support: Source of Support for Service
UoN Supplied: Deployed, managed and supported by the University and provided to Researchers at no additional cost.
UoN Supported: Deployed, managed and supported by the University with some additional costs to the Researcher, Centre or Faculty.
End User: The item is end user supported. The University is not in a position to provide technical support or assurances that the product will work within the University environment.
External (Organisation): Deployed, managed and supported by an organisation external to the University. Additional costs may apply.