Modelling Accuracy, Uncertainties and Impact Mapping
Why do models vary widely in their accuracy?
- The level of uncertainty/inaccuracy in the input data, especially uncertainties in hydrological inflows (which can be considerable) and topography.
- Whether the model is calibrated, and if calibrated, the range of calibration events and the quantity/quality/type of calibration data. A model well calibrated to a range of flood events will be much more accurate than an uncalibrated model. More information on calibration can be found in Flood Modelling: How Accurate is Your Model? or the Australian Water School webinar Maximising Hydraulic Model Accuracy.
- Whether the model’s sensitivity to changes in uncertain parameters (e.g. Manning’s n values) has been tested/quantified. Sensitivity testing can help firm up error margins, especially if the model is uncalibrated.
- The scale of the hydraulics. ±10mm would be a large error margin for a flume model flowing 10cm deep, but is negligible for a deep river system flowing 10m or more deep. It’s important to think about the error margin as a percentage (not an absolute amount) of the depth/flow in the main flow paths.
- The suitability of the software being used for the application, whether a 1D or 2D solution, how the equations are solved, and whether any key terms in the equations are omitted. For example, for non-complex (i.e. slow moving) flows, terms such as the sub-grid turbulence (commonly known as eddy viscosity) can be omitted; however, this can be an essential term for faster, more complex flows.
- Whether the discretisation (element size) is reasonable for the modelling objectives. TUFLOW 2D Cell Size Selection provides detailed discussion and documents two case studies.
- Whether the approximations at boundaries are influencing the modelling objectives (this can particularly be an issue for HQ boundaries, which can have considerable uncertainties).
What is the difference between absolute and relative accuracy?
- Absolute accuracy is a measure of how accurately a model can reproduce real flood behaviour. For example, model calibration assesses absolute accuracy, since it compares model results against real data. Absolute accuracy can be a significant amount (e.g. 0.5 m).
- Relative accuracy is a measure of the difference between two model results when a single input or parameter is changed. This is typically what is being assessed in development impact assessments. In other words, holding everything else fixed, what is the impact of removing storage from the floodplain due to the development of a lot? A significantly smaller tolerance is usually used in this instance.
What are the main flood modelling uncertainties?
Uncertainties in computational flood hydraulics are present at all stages of an analysis and relate to the:
- Accuracy of the input data (e.g. terrain, landuse, hydrologic inflows).
- Approach used for solving the underlying mathematical equations describing free surface fluid flow.
- Degree and quality of model calibration/verification.
Understanding the degree of uncertainty is important for setting absolute metrics such as design levels and freeboard. It is less important for impact mapping, because the uncertainties are present on both sides of the comparison. The hydraulic modelling carried out should be based on recommended industry and software guidelines, and follow sound modelling practices.
An overview of uncertainties in flood modelling can be viewed in the Australian Water School webinar How Wrong is Your Flood Model?
What is numerical noise?
The nonlinear nature of the shallow water (long wave) equations used for modelling floods can cause localised differences in the results at locations well removed from where the model has changed. These differences should be treated as artefacts of the solution, often referred to as numerical noise.
The most common types of numerical noise are:
- Numerical errors - Numerical errors arise from poor solution convergence and/or numerical instabilities. In a well designed model using good input data and appropriate computational timestepping, the level of numerical error should be minimal and within typical mapping tolerances used in the industry. If numerical errors are of concern, a review of the model’s input data, numerical stability and timestepping should be carried out. A common check is to reduce the timestep and confirm that the results or impacts are consistent with those from the original timestep. Numerical noise from numerical errors is usually evident as impact mapping that appears uneven or irregular.
- Edge effects - Edge effects occur where a very slight change in water level can cause a 2D cell on the edge of the flood extent at that point in time to be dry in the base case and flooded in the change case, or vice versa. The wet/dry disparity can cause larger impacts than occurs in adjoining areas. Edge effects can also appear due to how a threshold is applied.
- Isolated instances - Isolated instances occur at locations distant from the area of interest. These are an artefact of the solution and are often seen in more sensitive output fields, such as velocity where the flow is turbulent or fast moving, or as edge effects. Isolated instances are typically smaller than the lower bound of the impact mapping tolerance and are therefore not readily apparent. Where they do appear, however, they should not be interpreted as a real impact.
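As an illustration of how a mapping tolerance and wet/dry edge screening might be applied when post-processing results, the sketch below differences two peak level grids with NumPy. This is a hypothetical example, not TUFLOW's internal method; the 0.02 m tolerance, array layout and NaN-for-dry convention are all assumptions.

```python
import numpy as np

def impact_map(base, developed, tolerance=0.02):
    """Difference two peak flood level grids and screen out numerical noise.

    base, developed : 2D arrays of peak water levels (m), np.nan where dry.
    tolerance       : assumed mapping tolerance (m); absolute changes below
                      this magnitude are masked out as numerical noise.

    Returns (impacts, edge_cells): the masked difference grid, and a boolean
    grid flagging cells wet in only one case (potential edge effects).
    """
    wet_base = ~np.isnan(base)
    wet_dev = ~np.isnan(developed)
    # Cells wet in only one scenario are candidate wet/dry edge effects,
    # flagged for manual review rather than reported as impacts.
    edge_cells = wet_base ^ wet_dev
    # Only difference cells wet in both cases.
    diff = np.where(wet_base & wet_dev, developed - base, np.nan)
    # Mask changes smaller than the mapping tolerance.
    diff = np.where(np.abs(diff) >= tolerance, diff, np.nan)
    return diff, edge_cells
```

Flagging edge cells separately, rather than mapping them directly, reflects the guidance above that wet/dry disparities can produce larger apparent impacts than occur in adjoining areas.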
What are the approaches for setting flood impact tolerances?
Guidelines for mapping tolerances for flood impact assessments typically follow one of the approaches below, with the approach taken dependent on the objectives and the type of hydraulic modelling output field being mapped:
- A percentage, for example, a maximum increase in velocity of 10%. A threshold or cutoff is sometimes applied, below which the impact is assumed to be inconsequential, or to discard slight changes to near-zero values.
- An absolute amount, for example, maximum increase in water level of 0.1 m.
- A hybrid approach where the percentage change is computed relative to a value, which is the greater of the pre (or post) value and the absolute reference (threshold) value. This reduces the magnitude of the percentage change in regions where the base case value is considered insignificant or near zero.
- A risk-based probability approach that sets a limit using a more extreme event. For example, an increase in the 1% AEP level must not exceed the 0.5% AEP level.
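The hybrid approach above can be sketched as a small function. This is a hypothetical illustration; the function name and the values in the usage example are assumptions, not values from any guideline.

```python
def hybrid_percentage_change(base, post, threshold):
    """Percentage change relative to the greater of the base value and an
    absolute reference (threshold) value.

    Using max(base, threshold) as the denominator prevents inflated
    percentages in regions where the base value is insignificant or
    near zero.
    """
    reference = max(base, threshold)
    return 100.0 * (post - base) / reference
```

For example, a velocity rising from 0.01 m/s to 0.05 m/s is a 400% change on a plain percentage basis, but only 8% when referenced to an assumed 0.5 m/s threshold, reflecting that the base value is near zero.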
In terms of the practical application of setting a tolerance, the following need to be considered:
- Using a percentage is statistically the most meaningful way to cover uncertainties in the data and modelling. However, percentage change is only applicable for output fields that are zero-based (e.g. velocities, depths, velocity times depth - VxD). For non-zero-based output fields such as water level, which does not start at zero and varies according to the terrain, a percentage approach is not meaningful, and an absolute or risk-based approach should be adopted.
- A risk-based approach is particularly relevant for setting factors of safety such as freeboard. A deep, fast flowing river can exhibit differences of several metres between AEP events, whereas slower moving rivers with large floodplains acting as storage can exhibit minor changes in peak flood levels between AEP events once the river has broken its banks. A risk-based approach distinguishes between different hydraulic behaviours (conveyance based versus storage based). However, it is rarely used in practice, even though it is arguably the soundest approach.
- For a change in output to be significant, it needs to represent the risk to the sensitive receptor. For example, erosion potential is best measured by an output field such as bed shear stress. However, bed shear stress can be difficult to interpret, especially where depths are shallow and where Manning’s n represents the vegetation rather than the soil surface. More complicated output fields are also difficult to present and convey to stakeholders. Therefore, tolerances and thresholds tend to be set using output fields more easily understood by all stakeholders.
- The potential cumulative impact of multiple changes in the floodplain. For example, flood behaviour changes associated with a single development in isolation may be negligible; however, dozens of neighbouring developments over decades may cause a significant change in flood behaviour relative to the pre-developed catchment state.
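The risk-based probability check described above can be sketched as follows. This is a hypothetical illustration using NumPy; the function name, the 1%/0.5% AEP pairing and the dry-cell sentinel convention are assumptions, not a prescribed method.

```python
import numpy as np

def risk_based_check(developed_1pct, existing_0p5pct):
    """Flag cells where the developed-case 1% AEP peak level exceeds the
    existing-case 0.5% AEP peak level (the assumed risk-based limit).

    Inputs are 2D grids of peak water levels (m). Dry cells are assumed to
    carry a very low sentinel level (e.g. -9999) so they never fail.

    Returns (fails, worst): a boolean grid of failing cells, and the
    largest exceedance of the limit (m), zero if no cell fails.
    """
    exceedance = developed_1pct - existing_0p5pct
    fails = exceedance > 0.0
    # Clamp negatives to zero so the worst exceedance is 0.0 when no
    # cell breaches the limit.
    worst = float(np.maximum(exceedance, 0.0).max())
    return fails, worst
```

Because the limit is itself a flood level from a rarer event, the allowable change automatically adapts to the hydraulic behaviour: large in steep conveyance-dominated reaches, small in storage-dominated floodplains.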
I am running existing and developed cases and see differences away from the model changes. Why?
Any geometry changes between models, no matter how small, will affect results, sometimes to a greater degree than that occurring in the area of change. For example, a few millimetres increase in water level can determine whether or not overtopping of an embankment occurs, and this can consequently cause even larger impacts on the downstream side of the embankment. Furthermore, these changes can be compounded by subsequent changes in timestepping when using the adaptive timestepping option (the default in TUFLOW HPC), especially at fringes of the flood extent, where cells are constantly wetting and drying. Modellers and reviewers should be judicious and pragmatic when assessing which impacts are real and which are numerical noise. The following steps can help minimise numerical noise:
- Use the latest TUFLOW HPC release available.
- Check that cell size is appropriate to the modelling exercise.
- Use depth varying Manning's n values (lower Manning's n for shallow water depths), particularly for direct rainfall models.
- Set an appropriate Map Cutoff Depth for the modelling task. For example, direct rainfall models might use higher values to avoid undesirable noise at the wet/dry interface.
- Use a smaller 1D timestep for models with 1D features.
- Try double precision, particularly for models with high elevations, 1D features and/or very small flow/rainfall depth increments.
- When running HPC and/or Quadtree, test a control number factor smaller than 1.
Why can seemingly identical models produce non-identical results?
Generally speaking, single path numerical solvers such as those used for hydraulic modelling should be able to reproduce the same numerical results twice, down to the last bit of every binary number calculated and output. However, this becomes difficult with parallel computations, as the order in which a list of single or double precision numbers is summed can produce slightly different rounding errors, and thereby very slightly different results. For the vast majority of models, TUFLOW Classic and TUFLOW HPC will reproduce numerically identical results when run on the same CPU/GPU. Occasionally this might not be the case when identical simulations are run on different CPUs/GPUs, due to hardware differences.
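The rounding-order effect is easy to demonstrate in any language: floating-point addition is not associative, so summing the same numbers in a different order can change the last bits of the result. A minimal Python illustration (the values are chosen only for convenience):

```python
# Floating-point addition is not associative: regrouping the same three
# operands changes how the intermediate results are rounded.
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # (a + b) rounds to 0.30000000000000004
right_to_left = a + (b + c)   # (b + c) rounds to exactly 0.5

print(left_to_right == right_to_left)  # False
print(left_to_right)                   # 0.6000000000000001
print(right_to_left)                   # 0.6
```

A parallel reduction partitions and combines such sums in an order that depends on thread scheduling and hardware, so bitwise-identical results are only guaranteed when the summation order is fixed; each individual result is nonetheless a valid rounding of the true sum.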
Prior to the 2020-10-AB release, the boundary method introduced in the TUFLOW HPC 2020-01-AA release for inflowing HT and QT boundaries (refer to Section 6.1 of the 2020 Release Notes) could, in rare situations, lose bitwise reproducibility when parallelised. When this issue occurs, very slight numerical differences can appear throughout the model; these are of much smaller magnitude than the changes assessed in impact assessments, but can cause undesirable numerical noise in the impact mapping.