Hardware Selection Advice
=Introduction=
We often get asked about the optimum computing setup to run TUFLOW models. While every model is different and will interact differently with your hardware, there is some general advice we can offer.
In the sections below you will find more detailed advice on GPU and CPU, but generally:<br>
* The amount of RAM in your computer will be the limiter for the size of model you can run. This applies to CPU RAM (TUFLOW Classic, TUFLOW FV and TUFLOW HPC with Hardware == CPU) and also GPU RAM (TUFLOW HPC and TUFLOW FV with Hardware == GPU).
* The processing speed of your CPU matters: the architecture, cache size, clock speed and number of processors all play a role.
* For GPU simulations, the number of CUDA cores, the core speed, GPU card architecture, memory speed and interfacing with the motherboard PCI lanes and CPU are all important.
* The system must be well cooled to avoid throttling (a reduction of clock speeds to limit heating) and must have a sufficient, reliable power supply. If upgrades to the system are expected in the future (such as adding a second GPU card), consider configuring these components now to avoid future limitations. <br>
For information on minimum and recommended system requirements, see <u>[[System_Requirements | System Requirements]]</u>.
To discover a computer's NVIDIA GPU hardware, see <u>[[Console_Window_GPU_Usage | NVIDIA GPU Hardware and Usage]]</u>.<br>
=The TUFLOW Software Suite=
The TUFLOW software suite has a range of solvers. Each interacts differently with your hardware, so pairing the correct solver (or the range of solvers you want to run) with suitable hardware is an important consideration. A brief summary of each solver's needs is provided as follows:<br>
* TUFLOW HPC - Run on CPU Hardware: A single model run uses the CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, the number of cores available to be run in parallel, architecture and cache size.
<br>
TUFLOW HPC on GPU Hardware is typically our fastest solver for 1D/2D pipe and floodplain simulations.
* TUFLOW HPC supports CUDA enabled NVIDIA GPU cards. For a list of supported CUDA enabled graphics cards, please visit the <u>[https://developer.nvidia.com/cuda-gpus NVIDIA website]</u>.
*To discover a computer's NVIDIA GPU hardware, see <u>[[Console_Window_GPU_Usage | NVIDIA GPU Hardware and Usage]]</u>.
The solver precision you require will determine the type of GPU card best suited to your compute. For any given generation/architecture of cards, the “gaming” cards such as the GTX GeForce and RTX series provide excellent single precision performance – typically comparable to that of the “scientific” cards such as the Tesla series. If double precision is required, the scientific cards are substantially faster, but they are also significantly more expensive. The Quadro series cards sit in between for both double precision performance and cost. The specifications of a card should provide a breakdown of its single and double precision throughput in FLOPs. Single precision compute is typically sufficient for TUFLOW HPC modelling.
For the higher end GPU cards, users may wish to consider server-based computers rather than workstations, and also weigh the cost of an extra TUFLOW licence against the cost of the high end hardware.
===GPU RAM===
===CPU RAM===
TUFLOW HPC on GPU hardware still uses the CPU to compute and store data (in CPU RAM) during model initialisation and for all 1D calculations. While we are working on improving our CPU RAM usage, we currently tend to find that CPU RAM is often the limiter on the size of the model domain you can run, particularly if running over multiple GPU cards. During initialisation and simulation a model will typically require 4-6 times the amount of CPU RAM relative to GPU RAM. For example, for a model that utilises 11GB of GPU RAM (typical memory for a high-end gaming card, corresponding to about a 50 million cell model), the CPU RAM required during initialisation will typically be in the range 44GB to 66GB. A model that fully utilises two 11GB GPUs (i.e. a 100 million cell model) may require as much as 128GB of CPU RAM during initialisation. Note that anything more than 256GB of CPU RAM will exceed the limitations of consumer chipsets available in 2025 and requires more expensive workstation hardware; additionally, users should consult a hardware expert to check the limitations of specific hardware.
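The 4-6 times rule of thumb above can be sketched as a small calculation (a minimal sketch only; the function name and the assumption that the ratio applies linearly are ours, not part of TUFLOW):

```python
def estimate_cpu_ram_gb(gpu_ram_gb, low_factor=4, high_factor=6):
    """Rule-of-thumb range of CPU RAM (GB) needed during model
    initialisation for a model occupying gpu_ram_gb of GPU RAM.
    The 4-6x factors follow the guidance above; actual usage
    varies per model."""
    return gpu_ram_gb * low_factor, gpu_ram_gb * high_factor

# A single 11 GB card (roughly a 50 million cell model):
low, high = estimate_cpu_ram_gb(11)
print(low, high)  # 44 66
```

For two fully utilised 11GB cards the same rule gives 88GB to 132GB, consistent with the "as much as 128GB" figure quoted above.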
===CUDA Cores, GPU Clock speed, and FLOPs ===
One way of reporting a GPU card's throughput is in Floating Point Operations per second (FLOPs). The more FLOPs, the more calculations can be performed per second and the faster the model should run. For any given generation of GPU, FLOPs are approximately proportional to the number of CUDA cores times the GPU clock speed. However, there have been significant improvements in GPU architecture since the inception of CUDA, and these have contributed to increases in overall FLOPs performance beyond just the increases in cores and clock speed over this time.
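The "cores times clock" proportionality can be illustrated as follows (a sketch under one assumption: the factor of two reflects one fused multiply-add, counted as two floating point operations, per CUDA core per clock, which is how peak single precision figures are commonly quoted; real-world throughput is lower):

```python
def peak_sp_tflops(cuda_cores, boost_clock_ghz, flops_per_core_per_clock=2):
    """Theoretical peak single precision throughput in TFLOPs.
    Assumes one fused multiply-add (two floating point operations)
    per CUDA core per clock cycle."""
    return cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000.0

# e.g. a card with 8704 CUDA cores and a 1.71 GHz boost clock:
print(round(peak_sp_tflops(8704, 1.71), 1))  # 29.8 TFLOPs
```

Comparing this theoretical figure across cards of the same generation gives a rough speed ranking; across generations, architectural improvements mean the comparison is only approximate.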
===Multiple GPUs===
TUFLOW can use multiple GPU cards on a machine to run a single model (TUFLOW FV can currently use a single GPU only). This is useful for models that are too large for a single GPU, or for running a model as quickly as possible. In general terms the run time benefit of using multiple cards increases with model size.
* TUFLOW auto-detects and utilises (as of build 2020-01-AA) peer-to-peer access over NVLink or the PCI bus on the motherboard. Note that not all GPUs support peer-to-peer access.
**PCI bus - this method requires cards that support TCC driver mode, and all cards must be in TCC driver mode. As TUFLOW primarily relies on GPU CUDA capabilities, the impact of using a higher or lower PCI slot is minimal.
**NVLink - high-end compute cards can have up to 8 cards communicating with each other through a high-spec NVLink, but many of the less expensive cards are limited to only two connected together over a dual socket NVLink.
*Models may still be run across multiple GPUs even if NVLink is not present and the GPUs do not support peer-to-peer access. In this case HPC reverts to exchanging the domain boundary data between the GPUs via the CPU. The memory bandwidth between the GPU and the main system is not a critical bottleneck for TUFLOW.
* When using multiple GPUs it is best to use cards of similar memory and performance. While it is possible (as of build 2020-01-AA) to re-balance a model over multiple GPUs, we do not recommend using cards with vastly disparate performance.
* Sufficient cooling and power supply should be considered if multiple cards are used. When installed in adjacent PCI slots, the preference is to use rear-vented rather than side-vented cards, to avoid blowing hot air onto the neighbouring cards (which could lead to overheating).
===GPU Performance Comparison===
Extensive GPU hardware speed comparison testing has been completed using TUFLOW's standardised hardware benchmarking dataset. Details for the benchmarking are available via the <u>[[Hardware_Benchmarking | Hardware Benchmarking]]</u> page. Review the GPU benchmarking runtime results table to compare the speed performance of different cards. If your GPU card is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to [mailto:support@tuflow.com support@tuflow.com]. We will add the details to the runtime results table.<br>
External video card benchmark websites can be used to compare GPU cards; for example, <u>[https://www.videocardbenchmark.net/high_end_gpus.html PassMark Software - Video Card (GPU) Benchmarks]</u> is an excellent performance guide. Note that PassMark may not be representative of TUFLOW performance for the highest end cards. GPUs are complex devices: newer cards may not perform as well on PassMark's benchmarks, which reflect the criteria consumers buy GPUs for (games, video editing, etc.), even though the same cards may perform much better for TUFLOW.
<br>
Faster RAM will result in quicker runtimes; however, this is usually a secondary consideration to chip speed, cache size and architecture.
===CPU Cores ===
===Hyperthreading===
===Processor Frequency and RAM Frequency===
Processor and RAM frequency directly affect run times. In general, the higher the frequency, the faster the model runs.
===CPU Performance Comparison===
Extensive CPU hardware speed comparison testing has been completed using TUFLOW's standardised hardware benchmarking dataset. Details for the benchmarking are available via the <u>[[Hardware_Benchmarking| Hardware Benchmarking]]</u> page of the Wiki. Review the CPU benchmarking runtime results table to compare the speed performance of different chips. If your chip is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to [mailto:support@tuflow.com support@tuflow.com]. We will add the details to the runtime results table.
<br>
=Storage Advice=
Solid state drives are preferred for temporary storage as they are faster to write to than traditional hard disk drives. Large data files can then be transferred to a more permanent location.<br>
<br>
{{Tips Navigation
|uplink=[[Main_Page| TUFLOW Main Page]]
}}