HPC FAQ
No, TUFLOW Classic uses a 2nd order ADI (Alternating Direction Implicit) finite difference solution of the 2D SWE (Shallow Water Equations), while the HPC solver uses a 2nd order explicit finite volume TVD (Total Variation Diminishing) solution (a 1st order HPC solution is also available). As there is no exact solution of the equations (hence all the different solvers!), the two schemes produce different results. <br>
Significant differences may occur at 2D HQ boundaries. Classic treats the 2D HQ boundary as one HQ boundary across the whole HQ line, setting a water level based on the total flow across the line. Due to model splitting to parallelise the 2D domain across CPU or GPU cores, HPC applies the HQ boundary slope to each individual cell along the boundary. As with all HQ boundaries, the effect of the boundary should be well away from the area of interest, and sensitivity testing carried out to demonstrate this.<br>
For a single 2D domain model, no changes are needed other than inserting the .tcf commands:<br>
<font color="blue"><tt>Solution Scheme == HPC</tt></font> and, if running on a GPU device, <font color="blue"><tt>Hardware == GPU</tt></font>. <br>
HPC does not yet support multiple 2D domain models. Note that some more specialised or rarely used features are still not incorporated into the HPC solver. <br>
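As an illustrative sketch of the conversion, the two commands above could be added to the .tcf as follows (only the commands themselves are from this page; the comments and layout are added for illustration):

```
! Switch the 2D solution scheme from Classic to HPC
Solution Scheme == HPC
! Optional: run the HPC solver on a GPU device rather than on CPU
Hardware == GPU
```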
Yes, if transitioning from Classic to HPC (or any other solver), it is best practice to compare the results, and if there are unacceptable differences, or the model calibration has deteriorated, to fine-tune the model performance through adjustment of key parameters. <br>
Use of non-standard values is a strong indicator that there are other issues such as problematic inflows, poor boundary representation, or missing/erroneous topography. A greater adjustment of parameters would be expected if transitioning between HPC 1st order (or the original TUFLOW GPU) and Classic or HPC 2nd order. <br>
If running on the same CPU hardware, a well-constructed Classic model on a good timestep is nearly always faster than HPC running on a single CPU thread. Running a single HPC simulation across multiple CPU threads may produce a faster simulation than Classic. The primary reasons why HPC may produce a slower run are discussed below. <br>
Trying to run multiple HPC simulations across the same CPU threads. If, for example, you have 4 CPU threads on your computer and you run two simulations that both request 4 threads, then effectively you are overloading the CPU hardware by requesting 8 threads in total. This will slow down the simulations by an approximate factor of 2.
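For example, one way to avoid this oversubscription on a 4-thread machine is to limit each of the two concurrent simulations in its .tcf (the command is from this page; the value of 2 is an assumption for this particular scenario):

```
! Limit this simulation to 2 threads so that two concurrent runs
! together use the machine's 4 threads without overloading them
CPU Threads == 2
```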
Note: To request the maximum number of threads on a machine use <font color="blue"><tt>CPU Threads == MAX</tt></font> if hyperthreading is deactivated and <font color="blue"><tt>CPU Threads == MAX/2</tt></font> if hyperthreading is active. The latter requests half the number of threads, which for most Windows machines is the same as the number of physical cores. <br>
If running a simulation using a low-end or old GPU device (this is usually the graphics card that comes standard with the computer), simulations can be only marginally faster, or even slower, than running Classic or HPC on CPU hardware. If running on a GPU device, high-end NVidia graphics cards are strongly recommended. The performance of different NVidia cards varies by orders of magnitude – for benchmark tests using the original TUFLOW GPU solver review the <font color="blue"><tt>Hardware Benchmarking Wiki page</tt></font>. <br>
Common reasons why HPC adopts a very small timestep are provided below. To review and isolate the location of the minimum timestep, the timesteps are output to: <br>
* Console window and .hpc.tlf file
If using the SRF (Storage Reduction Factor), this proportionally reduces the Δx and Δy length values in the control number formulae. This may further reduce the minimum timestep if a cell with an SRF value greater than 0.0 is the controlling cell. For example, applying an SRF of 0.8 to reduce the storage of a cell by 80% (a factor of 5) also reduces the controlling timestep for that cell by a factor of 5. <br>
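The SRF arithmetic above can be sketched as follows. This is only an illustration of the stated proportionality (timestep scaling by 1 − SRF), not TUFLOW source code, and the function name is made up:

```python
# Illustrative sketch, not TUFLOW source code. The wiki states the SRF
# proportionally reduces the dx/dy lengths used in the control number
# formulae, so a cell's controlling timestep scales by (1 - SRF).

def srf_timestep_factor(srf: float) -> float:
    """Return the factor by which a cell's controlling timestep shrinks."""
    if not 0.0 <= srf < 1.0:
        raise ValueError("SRF must be in the range [0.0, 1.0)")
    return 1.0 - srf

# An SRF of 0.8 leaves 20% of the storage, i.e. roughly a factor-of-5
# reduction in both storage and the cell's controlling timestep.
factor = srf_timestep_factor(0.8)
```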
Yes! TUFLOW Classic tells you where your model has deficient or erroneous data, or where the model is poorly set up by going unstable, or producing a high mass error (a sign of poor numerical convergence of the matrix solution). The best approach when developing a Classic model is to keep the timestep high (typically a half to a quarter of the cell size in metres), and if the simulation becomes unstable to investigate why. In most cases, there will be erroneous data or poor set up such as a badly orientated boundary, connecting a large 1D culvert to a single SX cell, etc. <br>
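The rule of thumb quoted above (a Classic starting timestep of half to a quarter of the cell size in metres) can be expressed as a small sketch; the helper name and interface are illustrative only, not a TUFLOW API:

```python
# Illustrative helper for the rule of thumb: start a Classic model with a
# timestep of between a quarter and a half of the 2D cell size in metres.

def classic_timestep_range(cell_size_m: float) -> tuple[float, float]:
    """Return (lower, upper) suggested starting timesteps in seconds."""
    return cell_size_m / 4.0, cell_size_m / 2.0

# e.g. a 10 m cell model would typically start with a 2.5 s to 5 s timestep
low, high = classic_timestep_range(10.0)
```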
* If the timestepping is erratic (i.e. not changing smoothly), or there is a high occurrence of repeated timesteps, these are indicators of an issue in the model data or set up.
* Be more thorough in reviewing the model results. Although this is best practice for any modelling, it is paramount for unconditionally stable solvers like HPC that thorough checks of the model’s flow patterns, and of performance at boundaries and links, are carried out.
* The CME%, which is an excellent indicator that the Classic 2D solver is numerically converging, is not generally of use for HPC, which is volume conserving and effectively 0% subject to numerical precision. Non-zero whole of model CME% for HPC 1D/2D linked models is usually an indication of either the 1D and 2D adaptive timesteps being significantly different, or a poorly configured 1D/2D link.<br>
<br>
If you experience an issue that is not detailed above or in one of our other HPC modelling guidance pages, please send an email to support@tuflow.com.