HPC FAQ


Will TUFLOW HPC and TUFLOW Classic results match?

No. TUFLOW Classic uses a 2nd order ADI (Alternating Direction Implicit) finite difference solution of the 2D Shallow Water Equations (SWE), while the HPC solver uses a 2nd order explicit finite volume TVD (Total Variation Diminishing) solution (a 1st order HPC solution is also available). As there is no exact solution of the equations (hence all the different solvers!), the two schemes produce different results.

However, in 2nd order mode the two schemes are generally consistent. Testing thus far indicates that Classic and HPC 2nd order produce peak level differences usually within a few percent of the depth in the primary conveyance flow paths. Greater differences can occur in areas adjoining the main flow paths and around the edge of the inundation extent, where floodwaters are still rising or are sensitive to a minor rise in main flow path levels, or where upstream controlled weir flow occurs across thick or wide embankments, due to the different numerical approaches.

For deep, fast flowing waterways, 1st order HPC tends to produce higher water levels and steeper gradients compared with the Classic and HPC 2nd order solutions. These differences can exceed 10% of the primary flow path depth. Typically, lower Manning’s n values are required for HPC 1st order (or the original TUFLOW GPU) to achieve a similar result to TUFLOW Classic or HPC 2nd order.

Significant differences may occur at 2D HQ boundaries. Classic treats the 2D HQ boundary as one HQ boundary across the whole HQ line, setting a water level based on the total flow across the line. Due to model splitting to parallelise the 2D domain across CPU or GPU cores, HPC applies the HQ boundary slope to each individual cell along the boundary. As with all HQ boundaries, the effect of the boundary should be well away from the area of interest, and sensitivity testing carried out to demonstrate this.

Is recalibration necessary if I switch from TUFLOW Classic to TUFLOW HPC?

Yes, if transitioning from Classic to HPC (or any other solver), it is best practice to compare the results, and if there are unacceptable differences, or the model calibration has deteriorated, to fine-tune the model performance through adjustment of key parameters.

Typically, between TUFLOW Classic and HPC 2nd order this would only require a slight adjustment to Manning’s n values, any additional form losses at bends/obstructions, or eddy viscosity values. Regardless, only industry standard Manning’s n values and other key parameters should be used or needed. Use of non-standard values is a strong indicator that there are other issues, such as problems with the inflows, poor boundary representation or missing/erroneous topography.

A greater adjustment of parameters would be expected if transitioning between HPC 1st order (or the original TUFLOW GPU) and Classic or HPC 2nd order.

Do I need to change anything to run a TUFLOW Classic model in TUFLOW HPC?

For single 2D domain models, no, other than inserting the following basic TCF command:
Solution Scheme == HPC
The following command is also required to run the model using GPU hardware:
Hardware == GPU
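For example, a minimal addition to an existing single 2D domain Classic TCF might look like the following (the comment line, marked with an exclamation mark, is illustrative only):
! Run the existing model using the HPC solver on GPU hardware
Solution Scheme == HPC
Hardware == GPU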

Why does my TUFLOW HPC simulation take longer than TUFLOW Classic?

The primary reasons why HPC may run slowly are discussed below:

If run on a single CPU thread, Classic is a more efficient scheme
If running on the same CPU hardware, a well-constructed Classic model on a good timestep is nearly always faster than HPC running on a single CPU thread (i.e. not using GPU hardware). Running a single HPC simulation across multiple CPU threads may produce a faster simulation than Classic. HPC is best run using GPU hardware; run on good GPU hardware it should be faster than Classic on CPU. The Computer Hardware Benchmark page includes guidance on the fastest available hardware for TUFLOW modelling.

Over utilisation of CPU threads/cores
This occurs when trying to run multiple HPC simulations across the same CPU threads. If, for example, you have 4 CPU threads on your computer and you run two simulations that both request 4 threads, then effectively you are overloading the CPU hardware by requesting 8 threads in total. This will slow down the simulations by more than a factor of 2. The most efficient approach in this case is to run both simulations using 2 threads each, noting that if you are performing other CPU intensive tasks, this also needs to be considered.
By default, the number of CPU threads taken is two (2). You can control the number of threads requested either by using the -nt<number_threads> run time option, e.g. -nt2, or by using the TCF command CPU Threads (see the example following the note below). The -nt run time option overrides CPU Threads.
Note: If Windows hyperthreading is active there typically will be two threads for each physical core. For computationally intensive processes such as TUFLOW, it is recommended that hyperthreading is deactivated so there is one thread for each core.
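For example, on a computer with 4 CPU threads, two concurrent simulations could each be limited to 2 threads. The executable and control file names below are placeholders only:
TUFLOW_executable.exe -nt2 run_01.tcf
TUFLOW_executable.exe -nt2 run_02.tcf
Alternatively, the same limit can be set in each simulation’s TCF:
CPU Threads == 2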

Poor GPU Hardware
If running a simulation using a low end or old GPU device, simulations may only be marginally faster than running Classic or HPC on CPU hardware. If running on a GPU device, high end NVidia graphics cards are strongly recommended. The performance of different NVidia cards varies by orders of magnitude. The Computer Hardware Benchmark page includes guidance on the fastest available hardware for TUFLOW modelling.

The HPC adaptive timestep is reducing to an extremely small number
See HPC Adaptive Timestepping

Why is the TUFLOW HPC adaptive timestepping selecting very small timesteps?

Common reasons for TUFLOW HPC selecting very small timesteps are:

  • The model has one or more erroneous deep cells. The Celerity Control Number described further above reduces the timestep in proportion to the square root of the depth, so any unintended deep cells can cause a reduction in the timestep (see the simple illustration after this list).
  • A poorly configured or schematised 2D boundary or 1D/2D link causing uncontrolled or inaccurate flow patterns. The high velocities may cause the Courant Number to control the timestep, or the high velocity differentials can cause the Diffusion Number to force the timestep downwards. In these situations, Classic would often become unstable, alerting the modeller to an issue. However, HPC will remain stable, relying on the modeller to perform more thorough reviews of flow patterns at boundaries and 1D/2D links.
  • If using the SRF (Storage Reduction Factor), this proportionally reduces the Δx and Δy length values in the control number formulae. This may further reduce the minimum timestep if a cell with an SRF value greater than 0.0 is the controlling cell. For example, applying an SRF of 0.8 to reduce the storage of a cell by 80%, or a factor of 5, also reduces the controlling timestep for that cell by a factor of 5.
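As a simple illustration of the depth dependence, the generic shallow water wave celerity constraint (not the exact HPC formulation; Nc denotes the celerity control number) is:
Δt ≤ Nc × Δx / √(g × h)
For a 10 m cell, an erroneous depth of 100 m gives √(g × h) ≈ 31 m/s, limiting Δt to roughly a third of that for a 10 m depth (√(g × h) ≈ 10 m/s).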

To review and isolate the location of the minimum timestep, the timesteps are output to:

  • Console window and .hpc.tlf file
  • .hpc.dt.csv file (this file contains every timestep)
  • “Minimum dt” map output (excellent for identifying the location of the minimum timestep adopted – add “dt” to Map Output Data Types ==, as shown in the example after this list)
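For example (the other data types shown are illustrative only – retain whatever the model already outputs and append dt):
Map Output Data Types == h V d dt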

I know TUFLOW Classic, do I need to be aware of anything different with TUFLOW HPC?

Yes! TUFLOW Classic tells you where your model has deficient or erroneous data, or where the model is poorly set up, by going unstable or producing a high mass error (a sign of poor numerical convergence of the matrix solution). The best approach when developing a Classic model is to keep the timestep high (typically a half to a quarter of the cell size in metres), and if the simulation becomes unstable, to investigate why. In most cases, there will be erroneous data or a poor set up, such as a badly orientated boundary, connecting a large 1D culvert to a single SX cell, etc.
HPC, however, remains stable by reducing its timestep and does not alert the modeller to these issues. Therefore, the following tips are highly recommended, otherwise there is a strong likelihood that any deficient aspects of the modelling won’t be found till much further down the track, potentially causing costly reworking. So, it’s very much modeller beware!

  • Use of excessively small timesteps is a strong indicator of poor model health (see discussion further above).
  • If the timestepping is erratic (i.e. not changing smoothly), or there is a high occurrence of repeated timesteps, these are indicators of an issue in the model data or set up.
  • Be more thorough in reviewing the model results. Although this is best practice for any modelling, it is paramount for unconditionally stable solvers like HPC that thorough checks of the model’s flow patterns and performance at boundaries and links are carried out.
  • The CME%, which is an excellent indicator that the Classic 2D solver is numerically converging, is not generally of use for HPC, which is volume conserving and effectively 0%, subject to numerical precision. A non-zero whole of model CME% for HPC 1D/2D linked models is usually an indication of either the 1D and 2D adaptive timesteps being significantly different, or a poorly configured 1D/2D link.

How much faster is TUFLOW HPC compared to Classic?

This largely depends on the hardware (CPU and GPU) used to run the HPC model and its performance. On average, HPC using GPU hardware runs about 10 to 20 times faster than Classic, and about 30 to 40 times faster than HPC using the default number of CPU threads. Even though HPC on CPU hardware with default settings is slower than Classic, more CPU threads can be used to achieve faster run times. As TUFLOW Classic is not parallelised, it can only run on one CPU thread and its runtime cannot be further improved with more CPU resources.
For further information and discussion see: Hardware Benchmarking Topic HPC on CPU vs GPU

Will results from TUFLOW HPC using CPU match with HPC using GPU?

TUFLOW HPC using CPU should produce results identical to TUFLOW HPC using GPU, because both use the same solver. However, HPC GPU and HPC CPU are compiled by different compilers, which can produce minor differences down at the level of numerical precision. Also note that minor differences between HPC CPU and HPC GPU can be amplified in a model that is already unstable. If there are large differences in modelling results, it could be an indicator of model instability.

Why is my model using 2020 HPC slower than 2018 HPC?

The change in runtime can be due to the different timestepping applied with the new default mesh size insensitive turbulence model (Wu instead of Smagorinsky). To confirm this is the case, test run your model with the 2020 release and the following commands:
Viscosity Formulation == Smagorinsky
Viscosity Coefficients == 0.5, 0.05
Not all HPC models will show an increase in runtime when changing from the 2018 to the 2020 release – models that are controlled by the wave celerity or velocity control numbers, and not the diffusion control number, are likely to be similar in runtime. Some models will be even faster with the latest 2020 release due to other improvements.

I have been given a model developed in an older release and the results are different in a newer release. Why?

If comparing a Classic model with HPC, check the first question, “Will TUFLOW HPC and TUFLOW Classic results match?”.
In addition to the above, there are numerous reasons why model results may differ between TUFLOW releases, regardless of whether Classic or HPC is used:

  • General improvements and fine-tuning of the solution scheme, especially for the more complex hydraulic physical terms and situations such as: sub-grid turbulence representation; treatment of shocks (e.g. hydraulic jumps); and transitioning between sub-critical and super-critical flow on steep slopes.
  • Some new functionality can cause a significant change in results. For example, Sub-Grid Sampling (SGS) where a model uses too coarse a cell resolution in high flow areas of highly variable topography (relative to the 2D cell size). In this situation SGS will greatly improve the model's ability to convey water accurately, with vastly improved results convergence when varying the cell size compared with not using SGS. Another example is the new default sub-grid turbulence scheme in the 2020 release of TUFLOW HPC, which is cell size independent and allows modellers to use cell sizes much smaller than the depth across all scales from flume to large rivers.
  • Changes to the default settings and values, e.g. different default eddy viscosity formulation and/or coefficients; improved data pre-processing approaches (e.g. sampling materials on cell mid-sides instead of cell centres). For backward compatibility the “Defaults ==” command is available to run old models on new releases to replicate past results (note, sometimes full backward compatibility cannot be catered for, especially for releases several versions earlier).
  • New features that use GIS attributes previously reserved (i.e. unused). If these attributes were not populated with the recommended “reserved” value (usually 0 or blank), then they can cause unpredictable results in later releases.
  • Bug fixes, noting that most bug fixes are input/output related and rarely affect the model's results.

Generally, there should not be substantial differences, as the fundamental equations being solved are unchanged and the TUFLOW Classic and HPC solvers have always solved all the physical terms using a 2nd order spatial approach. If significant differences (>10% of depth change across the whole model) are observed, then it’s most likely due to the first four dot points above. To identify the cause(s) of the change, the model can be run with the latest build and with past releases to identify in which release(s) the significant changes occurred (see the example below).
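As a sketch only, the same control file can be run against different TUFLOW builds from a batch file; the folder and executable names below are placeholders that should be replaced with the installed release paths (-b runs the simulation in batch mode):
"C:\TUFLOW\releases\2018-03\TUFLOW_iSP_w64.exe" -b my_model.tcf
"C:\TUFLOW\releases\2020-10\TUFLOW_iSP_w64.exe" -b my_model.tcf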

The changes for each release are documented in their release notes. Past releases and release notes are all available here.

The recommendation is usually for new or reworked models to use the newest build to take advantage of the latest features and enhancements. However, particularly if a model is calibrated, using prior builds of TUFLOW or winding back default settings using “Defaults ==” is considered reasonable for established models.
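As an illustrative sketch only (the exact argument accepted by “Defaults ==” should be checked against the release notes of the build being used; the value below is an assumption), winding back defaults takes the form:
Defaults == Pre 2020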