
Scientific computing

High-performance computing
High-performance computing (HPC) is a powerful tool in today's research: it enables large-scale computations of complex systems, big-data analysis, and data visualization. To achieve this, HPC systems offer thousands of processor cores, multiple GPUs, hundreds of gigabytes of memory, and terabytes of storage.

Parallel computing
By default, a program you write is a serial program, meaning it uses a single CPU core. However, most computers, and HPC clusters in particular, have many cores available. To use them, your program must be designed as a parallel program, which runs on many CPU cores simultaneously and can cut processing time substantially, as the sketch below illustrates.
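As a hedged illustration, the sketch below uses Python's standard multiprocessing module to spread an independent, CPU-bound workload across the available cores; the simulate function and the number of steps are placeholders chosen for this example, and on a real cluster you would more commonly combine such parallelism with a job scheduler such as Slurm or with libraries such as MPI.

```python
# Minimal sketch: the same work done serially on one core,
# then in parallel across all available cores.
from multiprocessing import Pool

def simulate(step: int) -> float:
    """Stand-in for an expensive, independent computation."""
    total = 0.0
    for i in range(1, 200_000):
        total += (step * i) ** 0.5
    return total

if __name__ == "__main__":
    steps = range(32)

    # Serial version: a single CPU core works through the steps in order.
    serial_results = [simulate(s) for s in steps]

    # Parallel version: a pool of worker processes spreads the same
    # steps across the CPU cores available on the machine or node.
    with Pool() as pool:
        parallel_results = pool.map(simulate, steps)

    # Both approaches produce identical results; only the wall-clock time differs.
    assert serial_results == parallel_results
```

Because each call to simulate is independent of the others, the work divides cleanly among worker processes; problems with dependencies between steps require more careful decomposition.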

Using a grant provided by Compute Ontario, the University of Ottawa has created a series of notebooks that teach how to use AI and machine learning in your research. They also cover research data management for your input and output data. The notebooks are self-contained and are also presented regularly through our seminar series.
Resources
Each consortium has its own cluster documentation page.
| Consortium | Documentation |
|---|---|
| ACENET | |
| CAC | |
| Calcul Québec | |
| SciNet | |
General documentation is available from the Digital Research Alliance of Canada at
