TUFLOW FAQ

TUFLOW Versions

What is the difference between single and double precision and when should I use them?

For each release of TUFLOW, both single and double precision engine versions are available. The executable filename includes iSP for the single precision version and iDP for the double precision version. Double precision won't make the results twice as good; it stores numbers as 8-byte real numbers (15-17 significant figures) as opposed to 4-byte (6-9 significant figures).

When to use single or double precision depends on the solution scheme:

  • TUFLOW Classic uses water level as the primary variable within the hydrodynamic solver, and double precision is typically necessary when using direct rainfall or with model elevations above 100m. In these cases, numerical precision (rounding errors) can cause mass conservation errors. For example, a small rainfall (e.g. 1mm/hr) converted to metres/second (~2.78e-7) may be lost through numerical precision and result in accumulated mass balance error, especially for longer model run times.
  • TUFLOW HPC uses cell-averaged water depth as the primary variable within the solver, rather than using water surface elevation as the primary variable and computing water depth on the fly as surface elevation minus bed elevation. This means that the precision issues associated with applying a very small rainfall and/or modelling high elevations do not apply to HPC. Unless testing shows otherwise, the single precision version of TUFLOW should be used for all HPC simulations. An error message is triggered if TUFLOW HPC is run in double precision unless HPC DP Check == OFF is specified within the TCF. Double precision may still be needed with HPC when the coupled ESTRY 1D engine requires the extra significant figures to achieve better stability in 1D. This usually happens with carved 1D channels within the 2D domain, where either the 1D channel itself or the boundary links between the 1D and 2D domains cause the mass error. On rare occasions, models with high elevations and a small QT inflow may also need to run in double precision, because QT boundaries have a hidden 1D node and as such are solved in the 1D ESTRY engine.

The single precision version of TUFLOW uses significantly less memory (RAM) and is about 20% faster for TUFLOW Classic and four times faster for HPC. Unless required otherwise, the single precision version of TUFLOW is recommended. A good step during model development is to run the model with both the single and double precision versions; if the results and mass balance are similar, the single precision version is sufficient.
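
The rounding behaviour described above can be illustrated with a short sketch using NumPy's explicit 4-byte and 8-byte floating point types (the elevation and rainfall values are taken from the Classic example above; this is an illustration only, not TUFLOW code):

```python
import numpy as np

# 1mm/hr of rainfall expressed in metres/second (~2.78e-7), as above.
rain_increment = 2.78e-7

# TUFLOW Classic's primary variable is water level; at an elevation of
# ~100m the increment falls below single precision (4-byte, ~7
# significant figures) resolution and is lost:
level_sp = np.float32(100.0) + np.float32(rain_increment)
print(level_sp == np.float32(100.0))  # True - the rainfall is lost

# Double precision (8-byte, ~16 significant figures) retains it:
level_dp = np.float64(100.0) + np.float64(rain_increment)
print(level_dp > 100.0)  # True
```

Accumulated over many timesteps, the lost increments appear as the mass balance error described above.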

Further reasons to use single precision with TUFLOW HPC:

  • HPC is an explicit finite volume scheme which is mass conserving to numerical precision.
  • The HPC scheme uses 4th order time integration, which means the simulation completes in fewer time steps compared to 1st or 2nd order time integration schemes.

Why are model results developed in an older release different to a newer release?

If comparing a Classic model with HPC, also check the Will TUFLOW HPC and TUFLOW Classic results match? page in addition to this answer.
In addition to the above, there are several reasons why model results may differ between TUFLOW releases, whether using the Classic or HPC solver:

  • General improvements and fine-tuning of the solution scheme, especially for the more complex hydraulic physical terms and situations such as: sub-grid turbulence representation; treatment of shocks (e.g. hydraulic jumps); and transitioning between sub-critical and super-critical flow on steep slopes.
  • Some new functionality can cause a significant change in results. For example:
    • Sub-Grid Sampling (SGS) applied to an existing model that used too coarse a cell resolution in high flow areas of highly variable topography (relative to the 2D cell size). SGS will greatly improve the model's ability to convey water accurately in these situations, with vastly improved results.
    • New default sub-grid turbulence scheme in the 2020 release of TUFLOW HPC that is cell size independent and allows modellers to use cell sizes much smaller than the flow depth across all scales from flume to large rivers. For more information on differences between Smagorinsky scheme (HPC releases up to 2020) and the new Wu turbulence scheme (2020 onwards) see here.
  • Changes to the default settings and values, e.g.:
    • different default eddy viscosity formulation and/or coefficients,
    • improved data pre-processing approaches such as sampling materials on cell mid-sides instead of cell centres,
    • and many others.
    • For backward compatibility the Defaults == command is available to run old models on new releases to replicate past results (note, sometimes full backward compatibility cannot be provided due to compiler changes and updates that cannot be reverted, especially across several releases).
  • New features that use GIS attributes previously reserved (i.e. unused). If these attributes were not populated with the recommended “reserved” value (usually 0 or blank), then they can cause unpredictable results in later releases.
  • Bug fixes, noting that most bug fixes are input/output related and rarely affect the model's hydraulic calculations.
  • Change in timestepping can also produce a small change in results. HPC uses the Runge-Kutta 4th order integrator, which is usually fairly insensitive to time step provided the model is running stably. However when a region is filled by flow that only just overtops an embankment, a 10 mm difference in water levels upstream of the embankment can create a much larger difference in levels downstream. Hence small differences in time-stepping (along with many other aspects of model setup) can trigger local differences in model results.
  • Model orientation (if changed) can also cause a slight change in results. This is mostly due to values being interpolated from different calculation points. Every cell has nine calculation points; depending on the model origin, most or all of these points will sample different topography elevations, which translates to slightly different results.
  • If using 1D channels, different cells may have been selected for the HX boundary and may have different elevations. This can be reviewed in the 1d_to_2d check file.

Generally, there should not be substantial differences, as the fundamental equations being solved are unchanged and the TUFLOW Classic and HPC solvers have always solved all the physical terms using a 2nd order spatial approach. The one exception is the turbulence (eddy viscosity) representation, which is the most complex and challenging to solve of all the physical terms (many 2D schemes simply omit this term). If significant differences (>10% of depth change across the whole model) are observed, they are most likely due to the first four dot points above. To identify in which release(s) the significant changes occurred, the model can be run with the latest build and with past releases. The changes for each release are documented in their release notes. Past releases and release notes are all available here. Once the exact release where the changes occurred is tracked down, individual features can be turned off to narrow down the cause.

The recommendation is usually for new or reworked models to use the newest build to take advantage of the latest features and enhancements; some level of recalibration may be required for reworked models. A new TUFLOW executable is no different from previous ones in this respect: any existing model should be re-calibrated if calibration data are available. However, particularly if a model is already calibrated, using prior builds of TUFLOW, or winding back default settings using the Defaults == command, is considered reasonable for established models that are to be used for minor tasks where an update of the model would not be cost effective.

I am running an existing and a developed case and see differences away from the model changes. Why?

Any geometry changes between models, no matter how small, will affect results, sometimes to a greater degree than that occurring in the area of change. For example, a few millimetres increase in water level can determine whether or not overtopping of an embankment occurs, and this can consequently cause even larger impacts on the downstream side of the embankment. Furthermore, these changes can be compounded by subsequent changes in timestepping when using the adaptive timestepping option (the default in TUFLOW HPC), especially at fringes of the flood extent, where cells are constantly wetting and drying. Modellers and reviewers should be judicious and pragmatic when assessing which impacts are real and which are numerical noise.
Suggestions:

  • Use the latest TUFLOW HPC release available.
  • Check that cell size is appropriate to the modelling exercise.
  • Use depth varying Manning's n (lower Manning's n for shallow water depths), especially for direct rainfall models.
  • Set an appropriate Map Cutoff Depth for the modelling task, e.g. direct rainfall models might use higher values to avoid undesirable noise at the wet/dry interface.

Why can seemingly identical models produce non-identical results?

Generally speaking, single-path numerical solvers such as those used for hydraulic modelling should be able to produce the same numerical results twice, to the last bit of every binary number calculated and output. However, this can become difficult with parallel computations, as the order in which a list of single or double precision numbers is summed can produce slightly different rounding errors and thereby very slightly different results. For the vast majority of models, TUFLOW Classic, TUFLOW HPC and TUFLOW FV will reproduce numerically identical results when run on the same CPU/GPU. Occasionally this might not be the case when identical simulations are run on different CPUs/GPUs, due to hardware differences.
Prior to the 2020-10-AB release, the new boundary method introduced in the TUFLOW HPC 2020-01-AA release for inflowing HT and QT boundaries (see Section 6.1 of the 2020 Release Notes) could in rare situations lose bitwise reproducibility when parallelised. When this issue occurs, very slight numerical differences can occur throughout the model; they will be of a much smaller magnitude than the differences that occur when carrying out impact assessments, but will cause undesirable numerical noise in the impact mapping.
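
The order-of-summation effect described above can be demonstrated with a minimal sketch (the values are contrived to force a visible rounding difference in 4-byte precision):

```python
import numpy as np

# A parallel reduction may sum the same numbers in a different order.
# In finite precision arithmetic, (a + b) + c != a + (b + c) in general:
values = np.array([1.0e8, 1.0, -1.0e8], dtype=np.float32)

left_to_right = (values[0] + values[1]) + values[2]  # 1e8 + 1 rounds back to 1e8
reordered = (values[0] + values[2]) + values[1]      # cancellation first, the 1 survives

print(left_to_right, reordered)  # 0.0 1.0 - same numbers, different results
```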

Do I need a TUFLOW licence to create TUFLOW inputs and view results from TUFLOW models?

No, a TUFLOW licence is only needed to run simulations. All TUFLOW inputs and outputs use free, open formats that can be read and edited by third party software, for example QGIS and Notepad++.

How closely would TUFLOW results match other hydraulic software?

Different software will give different results for the simple reason that they all include different calculation assumptions. Understanding what those assumptions are and how they influence results will be important for the sensitivity testing. TUFLOW, like all hydraulic modelling software, needs to be applied appropriately and models should be calibrated to real world events if calibration data are available.

Can TUFLOW's 1D engine be used for modelling complex pipe hydraulics?

TUFLOW can model complex pipe hydraulics with a level of accuracy similar to or better than industry peers. There are a few notable features that place TUFLOW ahead of other software:

  • TUFLOW's 1D solution accurately accounts for both non-pressurised and pressurised flows within the pipe network.
  • TUFLOW's treatment of pipe junction losses is one of the most sophisticated. The default method (Engelund) will adjust losses dynamically every timestep of the simulation based on the hydraulic conditions at that time and the following:
    • changes in pipe size
    • expansion/contraction if there is a manhole at the pipe junction
    • variation in pipe approach and exit angles at junctions
    • variation in pipe approach and exit elevation at junctions
  • Alternative loss methods to Engelund are also available, such as Fixed losses. The Fixed method conforms with some industry guidelines, such as the Qld Urban Drainage Manual (QUDM). Fixed losses are not the default as they generally require the modeller to manually enter appropriate values at every manhole, whereas the Engelund approach in TUFLOW, which is based on that in MIKE Urban with several improvements developed in conjunction with Gold Coast City Council’s infrastructure team, provides an excellent automatic approach with no or minimal user input beyond the pipe and manhole geometry. The other advantage of the Engelund approach is that it is dynamic and adjusts losses according to the flow conditions, whereas the Fixed approach assumes the same energy loss coefficient for all flow regimes. TUFLOW also allows a mix of different methods in the one model, for example, there may be a special manhole where the Fixed or another approach needs to be applied.
  • There are numerous pit inlet options, from automatic capture rates to manually defined depth-discharge relationships. In all cases the 2D cell water depth at the inlet influences the amount of flow entering the pit, and as such the 1D underground pipe network.
  • The 2020 TUFLOW release offers sub-grid topography sampling to process all elevations within the cell into a depth/volume relationship for its calculations. This approach ensures much more accurate water depth estimations at pit inlets, even if the 2D cell resolution is much larger than the geometry of the drain at the inlet. This in turn translates to more accurate representation of the pit inflow, and as such flow through the entire pipe network. No other 1D/2D stormwater drainage modelling software offers this functionality. The new Quadtree functionality also allows the user to model key flowpaths, such as road drains, in high resolution.
  • The 2D overland approach used by TUFLOW ensures any above ground inundation is defined by the model topography. This approach avoids the flow path definition mistakes, arising from engineering judgement, that 1D overland software can suffer from.

How can I speed up my TUFLOW model, make it more storage efficient and reduce RAM requirements?

The following tips provide advice on how to make TUFLOW models initialise, run and write results faster. Most of the suggestions will also keep the model output size to a minimum, minimising the required disk space.

  • Initialisation:
    • Use XF files to speed up the model initialisation (the default).
    • Use FLT grids for grid inputs instead of ASC as they are faster to read. FLT files are the ESRI binary (float) version of ASC files and are about 1/5 of the size. To convert ASC to FLT, use the -conv switch of the ASC_to_ASC utility.
  • Writing check files:
    • Writing check files to a local solid state drive usually takes less time than writing to a network drive.
    • Make sure there is enough free space, as drives slow down when they are nearly full.
    • Suppress unnecessary check files - some check files, such as the zpt, uvpt and grd check files, can take a relatively long time to write. If some check files are not changing, they can be written once and excluded from subsequent simulations. Any check file can be excluded, e.g. Write Check Files EXCLUDE == zpt uvpt grd.
    • Use output zones to write localised check files when specific areas of a model domain are of interest.
  • Simulation:
    • Use the single precision version of TUFLOW if double precision is not required. More information on single and double precision versions here.
    • If using TUFLOW HPC, review the minimum dt map output to check for locations with a low timestep and whether any improvements can be applied to speed up the simulation.
  • Writing results:
    • Writing results to a local solid state drive usually takes less time than writing to a network drive. This also applies to running models in the cloud; from our testing, writing outputs to the local drive of the virtual machine is noticeably faster than using network storage.
    • Make sure there is enough free space, as drives slow down when they are nearly full.
    • Use output zones to write localised results when specific areas of a model domain are of interest.
    • Use FLT for grid results instead of ASC as they are more efficient in terms of write speed and file size.
    • Specify XMDF for temporal output instead of DAT as they are more compressed and faster to write.
    • Use appropriate Map Output Interval for XMDF, e.g. around 100 output increments for the whole simulation unless an animation of high temporal resolution is to be created.
    • Output only maximum FLT grids by setting FLT Map Output Interval == 0 instead of writing grids every Map Output Interval, and keep the temporal output as XMDF only.
    • Specify only Map Output Data Types that are needed.
    • For further details, see Output Management Advice webinar.
  • Hardware:
    • Consider using multiple resources:
      • TUFLOW HPC is parallelised and can run on multiple devices. By default, TUFLOW HPC running on GPU uses one GPU card; to increase the number see Running HPC on multiple GPU devices. TUFLOW HPC running on CPU uses four CPU threads by default; to increase the number see Running HPC on multiple CPU threads. Note that model initialisation, 1D calculations and writing results are processed on the CPU even if GPU resources are used for the 2D simulation, and currently these processes only use one CPU thread.
      • TUFLOW Classic isn't parallelised and only uses a single CPU thread for every simulation. Model initialisation, 1D calculations and writing results are also limited to only one CPU thread.
    • The better the hardware, the faster a TUFLOW model will run. To see how different hardware compares, see Hardware Benchmarking Results.
    • For more information on hardware specification, see Hardware Selection Guidance.

Reducing RAM requirements:

  • The model domain should be as tight to the model code (active areas) as possible - search the .tlf for "Isolating redundant perimeter sections"; this reports, as a percentage, how much larger the domain is than it needs to be.
  • The larger the cell size, the less RAM is required. Cell size sensitivity testing can be carried out to find the largest cell size that does not compromise model accuracy.
  • Using SGS does increase the RAM requirement compared to a non-SGS model with the same cell size. However, thanks to SGS the cell size can typically be increased whilst still achieving good cell size convergence, and therefore the RAM requirement may be lower than for the original non-SGS model.
  • Using Quadtree may also decrease the RAM requirement, as with a Quadtree mesh only the active code polygon is processed instead of the whole rectangular model domain used for HPC. Also, judicious refinement and using larger cells in non-focus areas will decrease the number of active cells and therefore the RAM requirement.
  • Inputs should be clipped if their extent is much larger than the model code, e.g. grids, material layers and so on. For example, if a 100GB DEM is input to TUFLOW with the majority of the DEM outside the TUFLOW model, TUFLOW allocates memory to read the entire DEM in for processing. Memory for reading inputs is only used temporarily and is released after the input, e.g. the DEM, has been processed.
  • Large DEM and material grids can use up a lot of memory during model initialisation as the entire grid is read for processing. Re-tiling the grid into smaller sections will use less memory, as each smaller grid is processed one at a time, avoiding high memory peaks.
  • Inspecting the .tlf file can help identify which model features require the most RAM so that more RAM efficient methods can be considered. For example, using gridded rainfall is more RAM efficient than using a large number of rainfall polygons. Other features such as soil infiltration, hazard outputs and output zones all require additional memory.
  • Grid (raster) outputs from TUFLOW (including DEM check files and Map Output Formats ASC, FLT, NC, TGO and WRR), require significant memory for storing interpolation factors. An interpolation is required as these outputs are all north-south aligned, while TUFLOW models can include rotation, water level lines and varying cell sizes. By default the grid output resolution is half the 2D cell size. Increasing the output cell size with the .tcf command Grid Output Cell Size == <grid_output_cell_size> can reduce the amount of memory required for this. Refer to Section 9.6.4 of the 2018 TUFLOW manual for more information on grid outputs.
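
As a rough illustration of the last point, the number of north-south aligned output cells (which the interpolation-factor memory scales with) grows with the square of the reduction in output cell size. The domain extent and cell sizes below are hypothetical values for illustration only:

```python
# Hypothetical 10km x 8km model with a 10m 2D cell size
domain_x, domain_y = 10_000.0, 8_000.0  # model extent (m)
cell_2d = 10.0                          # 2D cell size (m)

def n_output_cells(output_cell_size):
    # count of north-south aligned grid output cells
    return int(domain_x / output_cell_size) * int(domain_y / output_cell_size)

default_size = cell_2d / 2.0  # default grid output cell size: half the 2D cell size
print(n_output_cells(default_size))  # 3,200,000 output cells
print(n_output_cells(cell_2d))       # 800,000 - a quarter of the memory
```

Doubling the grid output cell size therefore quarters the number of output cells and the associated interpolation-factor memory.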

Other processes can slow TUFLOW simulations down if they are competing for resources. A few potential causes:

  • Running another model at the same time (as the CPU RAM is shared between both simulations)
  • Scheduled computer backups
  • Operating system updates
  • Antivirus scans
  • Other activities (e.g. copying large files, writing large files).


TUFLOW Model Build and Schematisation

What is the difference between 1d_pit and 1d_nwk layer for pits?

The 1d_pit layer is a newer and simplified version of the 1d_nwk layer for pits.

1d_pit

  • Assumes default values for any missing attributes that are described for the 1d_nwk pits layers.
  • Does not have invert attributes. The upstream invert is assumed to be the ZC elevation of the connected 2D SX cell, as such the "L" flag is not supported. The downstream invert retrieves its value from the connected 1d_nwk channel.
  • The nodal area assumes the default: the downstream pit channel node has its nodal area automatically assigned based on the connected 1d_nwk channel segments that have the UCS attribute set to “true”. The applied nodal area can be confirmed in the .eof file.
  • Virtual pipes can only be setup with the 1d_pit layer.

1d_nwk

  • Used for models that need to deviate from the default approaches.

Should I use Z Lines or Z Shapes for topographic modifications?

The Z Line feature is a predecessor of Z Shapes. As it has only one attribute, elevation, the GULLY or MIN / RIDGE or MAX / THICK / ADD options have to be specified in the control file at the end of the “Read GIS Z Line” command. This limits the use of the Z Line layer for multiple applications: a separate command, including a separate GIS layer, has to be used for every option combination. The development of Z Lines has ceased and some options might not be available with newer features, such as Sub-Grid Sampling.
The newer Z Shape feature has four attributes to capture all possible option combinations within one GIS layer and a single “Read GIS Z Shape” command in the control file. As such, Z Shapes are recommended over Z Lines.
Z Lines can be converted to Z Shapes by copying the GIS lines and points into a 2d_zsh layer and specifying the options in the Shape_Option attribute.

How can I read approach-capture flow curves for on-grade pits into TUFLOW?

TUFLOW currently doesn't support the use of approach-capture flow curves directly. To get around this, a pit depth-flow relationship can be derived as follows:

  • Select a few representative slope increments for the project area and do a Manning's equation calculation for each to calculate the gutter flowrates for incrementally increasing flow depths.
  • Multiply the above flows by the % value in the on-grade curve. This turns the % capture vs. flow curve into a depth vs. flow relationship.
  • Repeat for different gutter shapes.
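
The steps above can be sketched as follows. The gutter geometry, Manning's n, slopes and capture percentages are hypothetical values for illustration; real kerb and gutter profiles and capture curves should come from the drainage design or guidelines:

```python
import math

def gutter_flow(depth, crossfall, long_slope, n):
    """Manning's equation flow (m3/s) for a simple triangular gutter."""
    spread = depth / crossfall                                   # flow width (m)
    area = 0.5 * spread * depth                                  # flow area (m2)
    perimeter = spread * math.sqrt(1 + crossfall ** 2) + depth   # road surface + kerb face
    radius = area / perimeter                                    # hydraulic radius (m)
    return area * radius ** (2.0 / 3.0) * math.sqrt(long_slope) / n

# Hypothetical on-grade capture curve: depth (m) -> fraction of approach flow captured
capture_fraction = {0.02: 1.00, 0.05: 0.85, 0.10: 0.60}

# Depth vs captured flow relationship for the pit
depth_flow = {d: gutter_flow(d, crossfall=0.03, long_slope=0.01, n=0.013) * f
              for d, f in capture_fraction.items()}
for d, q in sorted(depth_flow.items()):
    print(f"depth {d:.2f} m -> captured flow {q:.4f} m3/s")
```

Repeating this for each representative road grade and gutter shape gives the family of depth-discharge curves to assign to the pits.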

Should I model culverts in the 1D or 2D domain?

This largely depends on the size of the culvert in comparison with the cell size. Most commonly culverts are modelled within the 1D domain; however, with the extra computational power offered by GPU devices, cell sizes are getting smaller and it might be beneficial to model bigger culverts in 2D. If the culvert is an important aspect of the study, it is always recommended to model it in a number of ways, using different methods and/or different software, and to cross-check the afflux against a desktop analysis and hand calculations. None will be 100% right, but the intent is to establish a verification of the preferred approach.

2D only approach

  • Benefits:
    • Good stability and improved hydraulic behaviour at the culvert entrance/exit.
    • The expansion loss can be explicitly handled in 2D provided the 2D cell resolution is sufficiently fine to model the expansion of flow downstream of the culvert.
  • Challenges:
    • The contraction loss, which is related to the re-expansion of water after the vena contracta that forms just inside the culvert inlet, won't be picked up well, and some additional energy loss (form loss) might be needed to cover this.
    • Side and soffit wall friction are not modelled unless Manning's n varies with depth, i.e. increase Manning's n with depth to account for the missing friction on the side and top surfaces.
    • The vertical walls not only create extra friction but also straighten the flow in the direction of the wall. Thin breaklines can be used to represent these walls in 2D, but this is likely to cause a saw-tooth effect, i.e. extra numerical head loss, if there are too few cells between the walls. Sub-grid sampling is recommended in this case.
    • Flow overtopping can be represented to some extent by assuming 100% blockage of layer 2 in a 2d_lfcsh, but the flow immediately upstream and downstream before the culvert is overtopped is hard to model.

2D-1D-2D approach

  • Benefits:
    • A more appropriate approach where the 2D cell size is greater than around half the total culvert width.
    • Contraction losses are handled better.
    • Flow overtopping can be modelled in 2D.
  • Challenges:
    • Expansion losses are very dependent on the 2D cell resolution. A 2D cell size much larger than the culvert width will not reproduce the expansion losses very well (even with SGS), and the culvert's exit loss needs to cover this. A finer 2D cell size (several or more cells across the culvert) will reproduce the expansion losses much better, and the culvert's exit loss needs to be reduced to compensate. This usually only matters for large 1D culverts with high velocities, as losses can be duplicated in the 2D on the exit side as the 2D flow expands (i.e. duplication of the exit/expansion loss). The latest release has a new feature to automatically adjust 1D culvert losses based on the 2D approach/departure velocities, as occurs with a 1D-1D-1D arrangement. See this paper for more information.
    • HX connections may cause instability, especially with a skewed culvert outlet, but can be demonstrated to produce better velocity patterns downstream of the culvert, where the expansion losses occur in the 2D domain.
    • Given that the SX connection applies the flow leaving the 1D culvert as a source term without momentum, it is difficult to completely prevent the water from piling up. If required, wingwalls can be modelled as thin breaklines to help guide the water away. The SX boundary Z flag lowers other SX cells below the 1D culvert invert level, which can help mitigate the water piling up.

What entry/exit loss and contraction coefficients should I use for 1D culverts?

We don’t provide hard recommendations on the exit and entry losses to use for culverts, as we have found different organisations around the world, typically government, have their own guidelines for different inlet configurations and require these to be used, for example, the Queensland Urban Drainage Manual (QUDM). However, it is very important to understand how losses are applied, and that different 1D solvers may treat them differently. For cross-checking your results from any hydraulic modelling software, a simple calculation applying the entry and exit losses (allowing for any automatic adjustments as discussed below) to the velocity head (V²/2g), plus allowing for any surface roughness losses (Manning's equation) for longer culverts, is the best practice for culverts flowing in a sub-critical flow condition (i.e. downstream controlled flow).
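
A minimal sketch of such a hand check for a sub-critical culvert follows. The entrance loss uses the standard unadjusted form K·V²/2g, the exit uses the momentum-derived expansion loss (Vc − Vd)²/2g (what an exit coefficient of 1.0 with velocity adjustment reduces to, as discussed below), and Manning's equation covers barrel friction. All numbers are hypothetical and this is not TUFLOW's implementation:

```python
G = 9.81  # gravitational acceleration (m/s2)

def culvert_head_loss(v_culvert, v_departure, k_entry, length, n, hyd_radius):
    """Hand-check of sub-critical culvert head losses (m); sketch only."""
    h_entry = k_entry * v_culvert ** 2 / (2 * G)             # entrance loss
    h_exit = (v_culvert - v_departure) ** 2 / (2 * G)        # expansion (exit) loss
    h_friction = (v_culvert * n / hyd_radius ** (2.0 / 3.0)) ** 2 * length  # Manning
    return h_entry + h_exit + h_friction

# Hypothetical culvert: 2 m/s barrel velocity, 0.5 m/s departure velocity,
# entrance coefficient 0.5, 20m long, n = 0.013, hydraulic radius 0.3m
print(round(culvert_head_loss(2.0, 0.5, 0.5, 20.0, 0.013, 0.3), 3))

# When the departure velocity matches the barrel velocity, the adjusted
# exit loss vanishes, consistent with the zero-loss behaviour noted below
print(culvert_head_loss(2.0, 2.0, 0.0, 0.0, 0.013, 0.3))  # 0.0
```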

For the entrance loss values, the approach should be to use values as quoted in the literature or guidelines for the inlet shape and design unless there is evidence to use another value (e.g. comparison with reliable calibration data would indicate different energy losses).

For the exit loss, a value of 1.0 is recommended in nearly all situations, provided losses are being adjusted every timestep. A value of 1.0 with adjusted losses is derived from fluid flow physics (momentum or energy conservation) for expanding flow and will be the most precise approach for modelling exit losses; alternatively, a bend loss can be applied to the approach/departure channel if these are modelled in 1D.

Occasionally there are situations where non-standard entrance and exit loss values are needed. A good example is if the approach or departure flow is skewed to the culvert direction. In these situations there may also be a significant bend (energy) loss occurring as the water changes direction entering or leaving the structure. To account for this the modeller may need to increase the entrance and/or exit loss values.

By default, TUFLOW adjusts the entrance and exit losses of 1D structures flowing sub-critical every timestep based on the approach/departure velocities for 1D-1D-1D arrangements, with a new feature in the 2020-10-AA build extending this to 2D-1D-2D. The entrance losses are adjusted based on an empirical relationship from flume testing, whilst the exit loss equation is theoretically derived as mentioned above - see Section 5.7.6 of the 2018 TUFLOW Manual. The modeller should be familiar with the approach taken by the software they are using, as some software don't adjust for approach/departure velocities at structures (overestimating losses if standard values are used) and some apply a limiting loss, thereby not allowing the losses to sufficiently reduce.

TUFLOW, by default, allows the losses to reduce to effectively a zero loss coefficient (i.e. 0.0001). A zero loss occurs where the approach and departure velocity is the same as the structure velocity. For example, a clear-spanning bridge over a concrete lined channel with the water level below the bridge deck will experience no energy losses until the bridge deck is surcharged, so if your software is applying unadjusted or limited energy loss coefficients there will be an unrealistic energy loss at the structure for flow below the bridge deck. For culverts, in most cases there will be some losses, as it is rare that the channel is of identical shape and slope to the culvert; usually the culvert is more constrictive and therefore has a higher velocity, so the adjusted coefficients are nearly always non-zero. At the other extreme is flow from or into a near still body of water (e.g. a lake or the ocean). In this situation the loss coefficient(s) will not be reduced and the maximum possible energy loss should occur.

If the default adjust losses approach is used (Structure Losses == ADJUST) the recommendations are to use industry guidelines for the entrance loss coefficient based on the shape/design of the inlet (these coefficients are typically based on a near zero approach velocity), and to use 1.0 for the exit loss coefficient. This applies to 1D culverts connected to 1D channels. The adjusted entrance and exit losses can be viewed over time in the _TSL layer, see Table 13-2 from the 2018 TUFLOW Manual.

Since the 2020-10-AA release, TUFLOW has new (beta) functionality to automatically adjust the losses for linked 1D culverts and other structures connected to 2D domains through SX links (Structure Losses SX == ADJUST), see Section 6.2 of the 2020-10 Release Notes. Note, it is likely this setting will become the default in a future TUFLOW release once beta testing and further benchmarking is completed.

The other values to consider for modelling culverts are the inlet contraction coefficients, which are used should upstream controlled flow occur. Typically the TUFLOW default values should be used unless the inlet shape and design indicates otherwise.
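As a hedged illustration of the velocity adjustment concept above: a theoretically derived exit (expansion) loss can be written as K(Vs - Vd)²/2g, which is equivalent to applying an adjusted coefficient K(1 - Vd/Vs)² to the usual Vs²/2g term. A minimal sketch assuming this form (the function name and the capping behaviour are illustrative choices, not TUFLOW code):

```python
def adjusted_exit_loss(k_exit, v_structure, v_departure):
    """Velocity-adjusted exit loss coefficient (illustrative sketch only).

    Assumes the theoretical expansion loss K * (Vs - Vd)^2 / 2g, re-expressed
    as an adjusted coefficient K * (1 - Vd/Vs)^2 applied to Vs^2 / 2g.
    """
    if v_structure <= 0.0:
        return k_exit                                # no structure flow; leave K unchanged
    ratio = min(v_departure / v_structure, 1.0)      # cap so the loss never goes negative
    return k_exit * (1.0 - ratio) ** 2

# Discharging into near-still water (Vd ~ 0): the full exit loss applies
print(adjusted_exit_loss(1.0, 2.0, 0.0))   # 1.0
# Departure velocity equals the structure velocity: the loss reduces to zero
print(adjusted_exit_loss(1.0, 2.0, 2.0))   # 0.0
```

This reproduces the behaviour described above: near-still receiving water retains the full coefficient, while matched approach/departure and structure velocities reduce the loss to effectively zero.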

What should I do if a 1D pump doesn't convey as much flow as expected and/or seems to be unstable?

A couple of things can be checked:

  • The inlet of the pipe has to be fully submerged, otherwise the pump will shut down and there will be no flow through the pump.
  • Add extra storage to the node upstream of the pump with a 1d_na table or a 1d_nwk type NODE through the ANA attribute.
  • Try a reasonably smaller 1D timestep.
  • If using a non-operational pump (P), check that the depth discharge relationship is appropriate. If the outlet of the pipe is lower than the inlet, the pump will always try to pump at full capacity.
  • Consider using an operational pump (PO), where the pump switches off if the water level upstream falls below the pump soffit.

What is the difference between rainfall excess and soil infiltration?

Rainfall Excess Approach - Read Materials File and Read GIS SA RF continuing loss specification:

  • This initial and continuing loss approach is a simplistic calculation method comparable to the loss methods included in traditional hydrology models (e.g. RORB, URBS, WBNM etc).
  • The calculation approach is as follows:
    • The user defines the rainfall hyetograph (time vs depth (mm)) boundary condition inputs.
    • The rainfall value is reduced by the loss value (i.e. rainfall excess) before the boundary condition input is applied to the 2D cells.
    • The GIS SA RF approach takes the rainfall excess (units = m) and multiplies the value by the Area attribute (units = m2) in the GIS object to convert rainfall depth to a volume (m3).
    • TUFLOW applies the calculated rainfall excess flow to the lowest cell in each GIS polygon during the first timestep that wetting occurs. Every timestep thereafter the inflow is distributed over the wet cells within the polygon.
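The calculation steps above can be sketched as follows (a simplified illustration only; all names are hypothetical and the IL/CL handling is a simplified version of what a hydrologic loss model does internally):

```python
def rainfall_excess(hyetograph_mm, dt_hr, il_mm, cl_mm_per_hr):
    """Apply an initial loss (IL) then a continuing loss (CL) to a rainfall
    hyetograph. Simplified sketch; all names are hypothetical."""
    il_remaining = il_mm
    excess = []
    for depth in hyetograph_mm:
        take = min(depth, il_remaining)              # satisfy the initial loss first
        il_remaining -= take
        remaining = max(depth - take - cl_mm_per_hr * dt_hr, 0.0)  # then the CL
        excess.append(remaining)
    return excess

def excess_to_volume(excess_mm, area_m2):
    # Depth (mm -> m) multiplied by the polygon Area attribute (m2) gives m3
    return [d / 1000.0 * area_m2 for d in excess_mm]

rain = [5.0, 10.0, 2.0]                  # mm per 0.5 h interval
ex = rainfall_excess(rain, 0.5, 8.0, 2.0)
print(ex)                                # [0.0, 6.0, 1.0]
print(excess_to_volume(ex, 10000.0))     # ~[0.0, 60.0, 10.0] m3 over the polygon
```

The key point is that the losses are removed from the rainfall before it reaches the 2D cells, in contrast to the soil infiltration approach below where the full rainfall is applied and infiltration removes water from wet cells afterwards.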

Soil Infiltration Approach - TUFLOW Soils File (.tsoilf):

  • This approach is a more realistic representation of the actual physics associated with water infiltration into the soil.
  • The calculation approach is as follows:
    • The user defines the rainfall hyetograph (time vs depth (mm)) boundary condition inputs.
    • The total rainfall value is applied directly to every 2D cell within the 2d_rf polygon.
    • When a 2D cell is wet the soil infiltration function subtracts the appropriate loss volume of water from it. Computationally this is referred to as a “Sink” term.

TUFLOW Check Files

My cross sections have various Manning's n, but only one value is reported in 1d_ta_tables_check.csv. Is it applied correctly?

When the Manning’s n values differ throughout the cross section (N flag is specified), they are used to proportionally adjust the width of the cross section to an effective width, so the whole cross section can be calculated as if it had only one Manning’s n. In the processed hydraulic properties part of 1d_ta_tables_check.csv, the width is different to the effective width and the header of the last column K (conveyance) only shows a single Manning’s n value. The .eof file and the nwk_C_check file will report one Manning's n and n_nf_cd value - the bed elevation Manning's n, which was used for the calculation of the effective width.
Note: When an N flag is specified in an XZ cross section, the n_nf_cd value in the 1d_nwk layer becomes a Manning's n multiplier. It is usually set to 1 and can be used for calibration purposes.
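A simplified sketch of the effective width idea for the wide-channel case, where each panel's conveyance (1/nᵢ)·wᵢ·d^(5/3) is preserved by scaling its width by n_ref/nᵢ (illustrative only; TUFLOW's exact implementation may differ):

```python
def effective_width(panel_widths, panel_mannings, n_ref):
    """Collapse varying-roughness panels into one effective width.

    Simplified wide-channel illustration: at the same depth d, a panel's
    conveyance (1/n_i) * w_i * d^(5/3) is unchanged if the width is scaled
    by n_ref / n_i and the single reference n_ref is then used throughout.
    """
    return sum(w * n_ref / n for w, n in zip(panel_widths, panel_mannings))

# 10 m of channel bed (n = 0.03) flanked by 20 m of rougher floodplain
# (n = 0.06), referenced to the bed Manning's n:
print(effective_width([10.0, 20.0], [0.03, 0.06], 0.03))  # 20.0
```

In this sketch the rougher floodplain contributes only half its physical width, which is why the check file reports a single Manning's n alongside a width that differs from the surveyed width.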

TUFLOW Model Stability Troubleshooting

How can I stabilise 2D boundary?

All boundaries:

  • Use the latest TUFLOW release:
    • The new cell size insensitive Wu turbulence scheme in TUFLOW HPC became the default in the 2020 release. The previous default, the Smagorinsky viscosity formulation, could produce unrealistic eddies in the model when the cell size became smaller than the flow depth.
    • Since the 2020 release all H boundaries, including the QT boundary, adjust the water level by the dynamic head when water is entering the model (a common approach in CFD modelling), which reduces the chance of spurious circulations caused by the boundary assumptions and/or schematisation of the boundary.
  • Boundaries should be digitised perpendicular to the flow and well away from the area of interest.
  • Every flow path should have its own boundary.
  • Rapid changes in elevation immediately adjacent to the boundary should be avoided; the terrain should be reasonably smooth. A Z shape polygon can be used to smooth out the topography if required.
  • A sufficient number of cells should be connected to the boundary in the main channel. TUFLOW Classic requires at least 3-4 cells and TUFLOW HPC 7-8 cells. Since the 2020 release, HPC uses a non-slip boundary similar to Classic and will produce reasonable results with a minimum of 3-4 cells across a primary waterway.
  • Using sharp edges and concave angles in boundary polylines should be avoided.
  • Apply Boundary Viscosity Factor command. For example, applying a factor of 2 will double the eddy viscosity coefficient along all external boundaries. Sensitivity testing should be carried out to confirm model results have not changed.
  • Timestep adjustments:
    • TUFLOW Classic - the timestep in seconds should typically be between 1/2 and 1/5 of the cell size in metres. Some models perform well at the higher end of this range, while others need the timestep kept at the lower end.
    • TUFLOW HPC - most HPC models won't need timestep adjustment. In rare cases Control Number Factor command can be used to lower the control number factor which controls the adaptive timestep.
  • Increasing bed resistance or adding form losses can also help with stability due to the much higher resistive forces balancing the driving forces (gravity, inertia). This should only be done in rare cases and sensitivity testing should be carried out to confirm model results have not changed.
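The Classic timestep rule of thumb above can be expressed as a trivial helper (the function name is hypothetical):

```python
def classic_timestep_range_s(cell_size_m):
    # Rule of thumb: TUFLOW Classic 2D timestep (s) between 1/5 and 1/2 of
    # the cell size (m); start higher and reduce if instabilities appear
    return cell_size_m / 5.0, cell_size_m / 2.0

low, high = classic_timestep_range_s(10.0)
print(low, high)  # 2.0 5.0
```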

Specific boundaries:

  • Downstream HQ boundaries:
    • Should be digitised only at the location where the water is exiting the model, e.g. not around the whole perimeter of the model.
    • When the downstream HQ boundary is unstable, the boundary line can be split into sections so that the channel provides a different HQ boundary than the floodplain. The reason this may work is that HQ boundaries work by cutting a cross section along the line and then producing a rating curve using the Manning's equation. If the HQ line is really broad, it may not be representative of what is happening locally at the channel.
  • Upstream QT boundaries:
    • QT boundaries have a hidden 1D node, and standard 1D output can be written for the node using the Reveal 1D Nodes == ON command.
    • Storage can be increased using the "a" attribute. Setting the attribute to 10 will double the storage; the default is 5.
    • Unstable QT boundaries can be replaced by 2d_sa boundaries in some cases. Source area (SA) boundaries apply the flow onto the cells as a volume and let gravity pull the water downstream; they don't preserve momentum and will usually result in lower velocities. QT boundaries, when well behaved, will produce a good flow distribution as they apply flow as both a volume and momentum. For river boundaries, the preference is to use a 2D QT line, but either can be used. If an SA boundary is used instead of a QT boundary, it should be placed further upstream and sensitivity testing should be carried out to confirm there are no negative effects at the location of interest.
  • HT boundaries:
    • Set an initial water level so the model doesn't start completely dry. This can be done globally with Set IWL command or locally with 2d_iwl polygon.

How can I stabilise 1D features?

1D open channels

  • Split or merge open channel sections - the general advice is to have the length of open channels about 3 to 5 times their width. Merging very short channels will provide sufficient storage, while splitting long channels into shorter ones with more 1D nodes can smooth the water level slope so that the simulated flow rate becomes more stable.
  • Increase the storage area where required with a 1d_na table or the 1d_nwk ANA attribute with type NODE. The extra storage can cause a lag in the hydrograph and sensitivity testing should be undertaken.
  • Use a non-inertial open channel (SN) instead of the standard inertial channel (S) at the location of oscillations. For a non-inertial channel, the inertial terms are ignored which might help stabilise problematic S channels. Removing the inertial terms is a common approach when for example supercritical flow is present.
  • Poor digitisation and interpolation of water level lines (WLL) can cause questionable display of results. Review the spacing of the water level lines, the distance_to_A attribute for triangulation, and whether the WLLs are digitised within the HX lines. The calculated 1D results will be stretched across the WLL based on the cross section processed data and the location where the 1d_nwk intersects the WLL.

1D culverts

  • Smooth out the 2D topography at the inlet and outlet to match the culvert inverts. This is best done with Z shape polygons.
  • Lower the 2D topography upstream and downstream of the culvert; the DEM levels should not be elevated above the culvert’s inverts. If using a "Z" flag, this will only lower the cell centres and might not be sufficient in some cases. A Z shape polygon is recommended as it will lower all elevation points within the polygon.
  • Check that the flow width of the SX cells connected upstream and downstream is the same as or greater than the flow width of the 1D culvert. Review the 1d_to_2d check file to confirm the connected cells are as intended. The flow through the culvert is determined from the upstream and downstream water levels in the 2D, not from the volume of water entering the 1D/2D cells directly. The volume removed from (or added to) the 2D is determined by the culvert hydraulics. Removing large volumes of water from a small number of cells can have an exaggerated drawdown effect at the SX cells, which in turn will reduce the flow through the culvert since the upstream water level is lower. Potentially the downstream water level can also be higher, which will further decrease the flow through the culvert.
  • Connect cells that have a constant water level (perpendicular to flow), as the water level boundary will be averaged from the connected cells. Connecting cells that are not perpendicular to the flow could mean the average water level upstream of the culvert is higher than in reality while the downstream average water level is lower.
  • Multiply the default SX boundary storage using the 'a' attribute.
  • Use a polygon SX for more stability, see further guidance here. If using shapefiles, the 2d_bc SX polygon should be read into TUFLOW on the same line as the 2d_bc CN line, separated by a vertical bar.
  • Use Sub-Grid Sampling (SGS) to improve volume calculations upstream and downstream of the structure.

1D pits and pipes

  • Check that a sufficient number of SX cells are selected for the magnitude of the inflows. If a relatively small 2D cell size is used and a large pit inflow (negative flow) or surcharge (positive flow) is applied to too few SX cells, this may cause instabilities (which is why TUFLOW selects more than one cell at a pit SX if the width of the pit exceeds the 2D cell size). Increasing the number of SX cells may help.

Note: A lower 1D timestep can be sensitivity tested, however the timestep should still remain within reasonable bounds. If the 1D timestep needs to be very small for stability reasons, usually other measures can be taken to improve the stability and consequently the run time of the model.

My model reports very high mass balance error at the beginning, then it lowers. Is it unhealthy?

With a small number of wet cells in the model, even a small erroneous volume can look large when reported as a percentage mass balance error, because the limited volume in the model makes the error a larger proportion. As more volume enters the model domain, it takes a much bigger error to be of the same proportion overall. Adding a little bit of water that should not be there to a teacup gives a big error in comparison to the total volume; adding the same little bit of water (erroneous volume) to a lake is negligible.
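The teacup versus lake effect is just a percentage calculation (a trivial sketch; the helper name is hypothetical):

```python
def mass_error_percent(error_m3, total_m3):
    # Cumulative erroneous volume expressed as a percentage of the total
    # volume currently in the model (hypothetical helper for illustration)
    return 100.0 * error_m3 / total_m3

# The same 0.5 m3 of erroneous volume early in the run (few wet cells) and
# later on (much more volume in the model):
print(mass_error_percent(0.5, 10.0))      # 5.0     - looks alarming
print(mass_error_percent(0.5, 100000.0))  # 0.0005  - negligible
```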


TUFLOW Results and Outputs

Why are grid outputs (ASC, FLT) slightly different from mesh outputs (XMDF, DAT)?

Slight differences are expected between the result grids (ASC or FLT) and mesh results (XMDF or DAT) due to small differences in the interpolation method.

  • When outputting directly to FLT or ASC using the Map Output Format command, the output grid results are derived from both the cell centres and cell corners, and the final result has those values interpolated to the centres of half cells (by default).
  • When outputting to XMDF (or DAT) format, the mesh results are derived from cell centres for depths / levels and cell sides for vector data. The final XMDF stores data interpolated to the cell corners and doesn’t retain any memory of where the corner data were derived from or what those values were.

The differences can also be amplified when the rotation of the model is not perfectly north-south aligned (i.e., a non-zero orientation).

Why are grids processed with TUFLOW_to_GIS different from TUFLOW grids?

Slight differences are expected between the result grids (ASC or FLT) produced directly from TUFLOW and those post-processed with the TUFLOW_to_GIS utility from XMDF or DAT, due to differences in the interpolation approach. See the question above, Why are grid outputs (ASC, FLT) slightly different from mesh outputs (XMDF, DAT)?, on differences between outputs direct from the TUFLOW engine.
When ASC/FLT results are sampled from the XMDF files with the TUFLOW_to_GIS utility, fewer data points are used than when the grids are written directly from TUFLOW.

The differences can also be amplified when the rotation of the model is a non-zero value. The extent of the grids is also different. When TUFLOW starts it removes any redundant areas (if any) outside of 2d_code layer to reduce the calculation time:

  • The extent of grids written directly from TUFLOW works with the unreduced area, encapsulating the entire rotated model domain in a north-south orientated rectangle.
  • The extent of XMDF (or DAT) results is based on the reduced area. When the TUFLOW_to_GIS utility is used, the post-processed north-south aligned grids encapsulate the reduced model extent. The origin coordinates of such grids match neither the origin of the model (model domain) nor the grids written out from TUFLOW directly.

In general, a better interpolation of results is achieved when the grids are written directly from TUFLOW than when post-processed with TUFLOW_to_GIS utility. Importantly, when comparing results the recommendation is to be consistent and to use the same method for the duration of a project to ensure there are no differences as a consequence of the post-processing method.

Why is Z0 hazard coming directly from TUFLOW different to the one calculated from maximum velocity and depth?

The maximum velocity and maximum depth don’t necessarily occur at the same time. The Z0 hazard written directly from TUFLOW has the depth x velocity product calculated every timestep, and at the end outputs the maximum over all timesteps. When using the velocity and depth grids, the maximum velocity is always combined with the maximum depth.
Outputting the Z0 hazard grid directly from TUFLOW is more accurate, however, if this is not an option, manual creation is an acceptable workaround. The general recommendation is to choose one method and keep it consistent for the length of the project.
When calculating the Z0 hazard from the velocity and depth grids, the grids have already been interpolated by TUFLOW before the raster calculation (less data is used for the calculation), whereas the TUFLOW Z0 output is interpolated from the calculated data directly. For more information, please also refer to the question Why are grid outputs (ASC, FLT) slightly different from mesh outputs (XMDF, DAT)? above.
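The difference between the two approaches can be demonstrated with a few hypothetical values (illustrative only; the time series below is invented):

```python
# Hypothetical depth and velocity time series at one location; the peak
# depth and peak velocity occur at different times
depths = [0.2, 0.8, 1.0, 0.9]
vels = [1.5, 1.2, 0.6, 0.4]

# TUFLOW approach: depth x velocity is evaluated every timestep, then the
# maximum of those products is output
z0_tracked = max(d * v for d, v in zip(depths, vels))

# Post-processed approach: the separate maximum depth and maximum velocity
# grids are multiplied together
z0_from_maxima = max(depths) * max(vels)

print(z0_tracked)      # ~0.96
print(z0_from_maxima)  # 1.5 - overestimates because the peaks never coincide
```

Because max(d·v) can never exceed max(d)·max(v), the post-processed hazard is conservative (biased high) wherever the peaks are out of phase.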

Why does hazard output (Z1, Z2, ...) show float values and not just integers?

The hazard values are calculated at cell corners and are in the form of an integer. When the values are processed into a grid, which by default has a resolution of half the cell size, the values are interpolated to the centres of the half cells - four values per cell. Further interpolation occurs when the model has a non-zero orientation, as grid files are north-south aligned.

Why are results from 2d_po features, POMM.csv and plots extracted from XMDF in a GIS package not matching?

There are numerous reasons why the outputs may not match:

  • Plot output:
    • The 2d_po results are tracked at every Time Series Output Interval and saved into the PO.csv file. If the water level peaks in between the intervals, that maximum water level won’t be recorded in the PO.csv.
    • The POMM.csv tracks maximums of PO features at every computational timestep.
  • Map output:
    • The temporal XMDF (or DAT) results are tracked every Map Output Interval and interpolated to the cell corners.
    • The maximum XMDF map output records the value at every computational timestep, however the final maximum map output is also interpolated to the cell corners.
    • The extraction tools in GIS packages use interpolated data for the plot creation, i.e. less data than the TUFLOW direct plot output.
  • Although the maximums from POMM.csv and the maximum map output will be very close, they won't match exactly.
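The effect of sampling only at the output interval can be shown with a hypothetical hydrograph (illustrative only; the stage series below is invented):

```python
import math

# Hypothetical stage hydrograph at one location, computed every 0.1 h timestep
times = [i / 10 for i in range(51)]                          # 0..5 h
stage = [10.0 + math.sin(t * math.pi / 5.0) for t in times]  # peaks at t = 2.5 h

# POMM-style: the maximum is tracked at every computational timestep
pomm_max = max(stage)

# PO-style: values are only sampled at the Time Series Output Interval (1 h here)
po_sampled = stage[::10]
po_max = max(po_sampled)

print(round(pomm_max, 3))  # 11.0   - the true peak is captured
print(round(po_max, 3))    # 10.951 - the peak between intervals is clipped
```

This is why a reasonably small Time Series Output Interval is recommended: a peak falling between output times is never written to the PO.csv, while POMM.csv still records it.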

Further notes for using post-processed flux line:

  • No information is stored in the map output about whether the flow is upstream or downstream controlled.
  • The staggered computational grid means there is always some interpolation occurring when projecting levels and velocities to the flux line. This will cause significant uncertainty in areas of sudden topographic change such as at an embankment. It is also the reason that two similar lines side by side may provide different outputs.
  • Highly variable/complex flows, which are usually associated with highly variable topography, will add to the uncertainty when interpolating to the flux line.

The general recommendation is to use plenty of 2d_po features with a reasonably small Time Series Output Interval to achieve better accuracy; they won't slow the simulation down. In the case of flux lines, the 2d_po lines use the actual computed fluxes across each cell face and will be an exact representation of the flow being calculated by the 2D solver. Post-processed fluxes should only be located in areas of little topographic change and, even then, used as a rough guide.

My model has puddles of water with high depth around buildings. How can I fix that?

This situation can happen when using direct rainfall and raising the elevation of the buildings above the surrounding ground level. In a rain on grid model the rainfall is assigned to all active cells, and therefore to the cells on top of the buildings as well. This can create a waterfall-like output around the buildings as a result of the water falling from the high topography (buildings) to the ground.

A couple of suggestions to consider:

  • Represent the buildings as high Manning's n instead of an increased elevation within the topography.
  • Exclude the building polygons from the RF polygon, so the buildings won't receive rainfall. This will however remove rainfall from the model that was supposed to fall on the buildings:
    • If the area of the buildings is negligible in comparison to the full model, it might be acceptable to leave the rainfall out.
    • Use a 2d_sa_rf layer to include the cut out rainfall in the model again with the Read GIS SA RF command. For example, most rainfall that falls on buildings will collect on the roof and, through gutters and downpipes, find its way directly into the sub-surface drainage system or elsewhere in the model domain. Digitise a small polygon for each building on the ground where the runoff from the building is expected to leave the property, and assign a few extra attributes that will convert an input rainfall hyetograph into a flow, which is then applied as a usual 2d_sa boundary. See the TUFLOW example model EG02_2D_5m_005.tcf for how to set up 2d_sa_rf.
    • If the building rainfall should go directly to the 1D pit and pipe network, the Read GIS SA RF PITS command can be used.
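Conceptually, the SA RF style conversion from a rainfall rate on a contributing area to an inflow is just a unit conversion (illustrative helper, not TUFLOW code):

```python
def rainfall_to_flow_m3s(rate_mm_per_hr, contributing_area_m2):
    # Convert a rainfall rate (mm/hr -> m/s) on a contributing area (m2)
    # into an inflow (m3/s); hypothetical helper for illustration
    return rate_mm_per_hr / 1000.0 / 3600.0 * contributing_area_m2

# 50 mm/hr on a 200 m2 roof, routed to a small ground-level polygon:
print(rainfall_to_flow_m3s(50.0, 200.0))  # ~0.00278 m3/s
```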