<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.tuflow.com/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jaap.vandervelde</id>
	<title>Tuflow - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.tuflow.com/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jaap.vandervelde"/>
	<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/Special:Contributions/Jaap.vandervelde"/>
	<updated>2026-05-09T15:56:16Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Linux_Install&amp;diff=45670</id>
		<title>Linux Install</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Linux_Install&amp;diff=45670"/>
		<updated>2026-03-29T22:39:22Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Jaap.vandervelde moved page Linux Install Draft to Linux Install without leaving a redirect: Publishing Linux install Draft&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
TUFLOW is installed on Linux using the &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; installer packages, which can be [https://www.tuflow.com/downloads/#tuflow downloaded here]. There&#039;s also a tar.gz archive available from the same location, which can serve as a portable application, similar to the .zip archive available for Windows.&lt;br /&gt;
&lt;br /&gt;
The installers have been tested on Rocky Linux 9 and Ubuntu 22.04, but should work on other modern Red Hat and Debian distributions.&lt;br /&gt;
&lt;br /&gt;
== Codemeter Configuration ==&lt;br /&gt;
To provide licenses to TUFLOW on Linux, install and configure the CodeMeter User Runtime Package for Linux (.rpm and .deb options are available) from https://www.wibu.com/support/user/user-software.html.&lt;br /&gt;
&lt;br /&gt;
* If using a hardware-based USB dongle (for either network or local licenses), please follow the instructions in [[Installing_Wibu_CodeMeter_Linux | Installing Wibu CodeMeter Linux]].&lt;br /&gt;
* If using a software lock (for either network or local licenses), please follow the instructions in [[WIBU_Software_Licence_Linux | Wibu Software License Linux]].&lt;br /&gt;
&lt;br /&gt;
== TUFLOW Versioning ==&lt;br /&gt;
TUFLOW uses a year.minor.patch versioning convention as follows. &lt;br /&gt;
&lt;br /&gt;
* The year is the major version number, e.g. 2026.0.0. Major releases are the only releases that may include breaking changes, i.e. changes to defaults or features that can alter model results between versions. There is one major release per year.&lt;br /&gt;
* Minor releases contain new features and bug fixes, but no breaking changes, and increment the minor version number, e.g. 2026.1.0.&lt;br /&gt;
* Patch releases contain bug fixes only and increment the patch version number, e.g. 2026.0.1.&lt;br /&gt;
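&lt;br /&gt;
As an illustration only (not a TUFLOW tool), a version string following this convention can be split into its parts in the shell:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# split a version string into year, minor and patch components&lt;br /&gt;
version=&#039;2026.0.1&#039;&lt;br /&gt;
year=${version%%.*}&lt;br /&gt;
patch=${version##*.}&lt;br /&gt;
rest=${version#*.}&lt;br /&gt;
minor=${rest%%.*}&lt;br /&gt;
echo major=$year minor=$minor patch=$patch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;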
&lt;br /&gt;
== RPM Install ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Download the &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; for the specific TUFLOW build from https://www.tuflow.com/downloads/#tuflow&lt;br /&gt;
&amp;lt;li&amp;gt; Save the &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; locally. A common location is the home folder for the current user &amp;lt;code&amp;gt;~/&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Run the preferred installation command. For example, if installing the TUFLOW 2026.0.0 release:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo dnf install ~/tuflow-2026.0-2026.0.0-1.x86_64.rpm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;This installs a version-specific copy of the TUFLOW binaries and libraries under &amp;lt;code&amp;gt;/opt/tuflow/&amp;amp;lt;version&amp;amp;gt;/bin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/opt/tuflow/&amp;amp;lt;version&amp;amp;gt;/lib&amp;lt;/code&amp;gt;. A symbolic link to the executable is created in &amp;lt;code&amp;gt;/usr/bin&amp;lt;/code&amp;gt;, allowing the required version to be run via a versioned shortcut, e.g. &amp;lt;code&amp;gt;tuflow-2026.0-isp&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;li&amp;gt;There is also a launcher script, &amp;lt;code&amp;gt;/opt/tuflow/tuflow-&amp;amp;lt;version&amp;amp;gt;/bin/tuflow-&amp;amp;lt;version&amp;amp;gt;-idp.sh&amp;lt;/code&amp;gt;, for use cases where environment variables need to be set before execution. If you need to edit this script, we recommend copying it into &amp;lt;code&amp;gt;/usr/local/bin&amp;lt;/code&amp;gt;, as it will be overwritten when a patch release is installed.&lt;br /&gt;
&amp;lt;li&amp;gt;To uninstall run the preferred uninstaller command. For example:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo dnf remove tuflow-2026.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;li&amp;gt;To list previously installed versions and confirm the correct package name to uninstall, use &amp;lt;code&amp;gt;dnf list installed&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;dnf list installed &#039;tuflow*&#039;&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&#039;&#039;Note: Major and minor releases are installed into a new directory under &amp;lt;code&amp;gt;/opt/tuflow&amp;lt;/code&amp;gt;. For example, TUFLOW 2026.0.0 is installed under &amp;lt;code&amp;gt;/opt/tuflow/tuflow-2026.0&amp;lt;/code&amp;gt;. Patch releases overwrite the existing installation with the same major and minor release number. For example, TUFLOW 2026.0.1 will update 2026.0.0 if present.&#039;&#039;&lt;br /&gt;
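&lt;br /&gt;
For example, to reduce a line of &amp;lt;code&amp;gt;dnf list installed&amp;lt;/code&amp;gt; output to just the package name, something like the following can be used (the sample output line is hypothetical):&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# take the first column of a &#039;dnf list installed&#039; line and strip the arch suffix&lt;br /&gt;
sample=&#039;tuflow-2026.0.x86_64    2026.0.0-1    @commandline&#039;&lt;br /&gt;
echo &quot;$sample&quot; | awk &#039;{print $1}&#039; | sed &#039;s/\.x86_64$//&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;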
&lt;br /&gt;
== DEB Install ==&lt;br /&gt;
The process for &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; packages is essentially the same as for &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt;; only the install and uninstall commands differ, as shown below for Debian derivatives (assuming TUFLOW 2026.0.0; modify for the specific version required):&lt;br /&gt;
&lt;br /&gt;
To install:&lt;br /&gt;
  sudo apt install ./tuflow-2026.0_2026.0.0-1_amd64.deb&lt;br /&gt;
&lt;br /&gt;
To uninstall:&lt;br /&gt;
  sudo apt remove tuflow-2026.0&lt;br /&gt;
If unsure which versions you have previously installed, you can list them via:&lt;br /&gt;
  apt list --installed | grep tuflow&lt;br /&gt;
&#039;&#039;Note: Major and minor releases are installed into a new directory under &amp;lt;code&amp;gt;/opt/tuflow&amp;lt;/code&amp;gt;. For example, TUFLOW 2026.0.0 is installed under &amp;lt;code&amp;gt;/opt/tuflow/tuflow-2026.0&amp;lt;/code&amp;gt;. Patch releases overwrite the existing installation with the same major and minor release number. For example, TUFLOW 2026.0.1 will update 2026.0.0 if present.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;span&amp;gt;Notes for Running TUFLOW Under WSL&amp;lt;/span&amp;gt; ==&lt;br /&gt;
If you are running Linux under WSL (Windows Subsystem for Linux) on Microsoft Windows, things should work as expected. To use NVIDIA hardware with CUDA support under WSL, add the WSL library folder to the &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; environment variable so that TUFLOW can find the CUDA shared libraries that work in WSL:&lt;br /&gt;
  export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH&lt;br /&gt;
That folder also contains the utility &amp;lt;code&amp;gt;/usr/lib/wsl/lib/nvidia-smi&amp;lt;/code&amp;gt;, which indicates whether the Linux installation can reach the NVIDIA hardware otherwise managed by the Microsoft Windows driver.&lt;br /&gt;
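&lt;br /&gt;
To make the setting persistent, the export can be appended to a shell startup file such as &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; (assuming bash). A quick sketch that sets the variable and confirms the WSL folder is first on the search path:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# prepend the WSL library folder for the current shell session&lt;br /&gt;
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH&lt;br /&gt;
# print the first entry of the library search path&lt;br /&gt;
echo $LD_LIBRARY_PATH | cut -d: -f1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;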
&lt;br /&gt;
== Running a Model ==&lt;br /&gt;
On either distribution, TUFLOW is run by calling the versioned shortcut, e.g. &amp;lt;code&amp;gt;tuflow-2026.0-isp&amp;lt;/code&amp;gt; (this example runs the currently installed patch of version 2026.0 in single precision), and passing it a control file name:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tuflow-2026.0-isp my_run.tcf&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Run-specific settings, such as the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable, can be set on the command line or in a custom run script that calls TUFLOW with the control file name, for example &#039;&#039;&amp;lt;code&amp;gt;run_model.sh&amp;lt;/code&amp;gt;&#039;&#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
export OMP_NUM_THREADS=8&lt;br /&gt;
tuflow-2026.0-idp my_run.tcf&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Once saved, the script needs to be made executable:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
chmod +x run_model.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
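&lt;br /&gt;
The same pattern extends to a simple batch wrapper for multiple runs (a sketch only; the control file names are examples):&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
export OMP_NUM_THREADS=8&lt;br /&gt;
TUFLOW=tuflow-2026.0-isp  # versioned shortcut; adjust to the installed version&lt;br /&gt;
# run each control file in turn&lt;br /&gt;
for tcf in run_001.tcf run_002.tcf; do&lt;br /&gt;
    echo starting $tcf&lt;br /&gt;
    $TUFLOW $tcf&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;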
&lt;br /&gt;
== Return to Home Page ==&lt;br /&gt;
Return to [[Main_Page | Wiki Home Page]].&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Linux_Install&amp;diff=45667</id>
		<title>Linux Install</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Linux_Install&amp;diff=45667"/>
		<updated>2026-03-27T07:47:58Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: a few minor corrections, and reformatting of code examples&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
TUFLOW is installed on Linux using the &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; installer packages, which can be [https://www.tuflow.com/downloads/#tuflow downloaded here]. There&#039;s also a tar.gz archive available from the same location, which can serve as a portable application, similar to the .zip archive available for Windows.&lt;br /&gt;
&lt;br /&gt;
The installers have been tested on Rocky Linux 9 and Ubuntu 22.04, but should work on other modern Red Hat and Debian distributions.&lt;br /&gt;
&lt;br /&gt;
== Codemeter Configuration ==&lt;br /&gt;
To license TUFLOW on Linux, install and configure the CodeMeter User Runtime Package for Linux (.rpm and .deb options are available) from https://www.wibu.com/support/user/user-software.html.&lt;br /&gt;
&lt;br /&gt;
* If using a hardware-based USB dongle (for either network or local licenses), please follow the instructions in [[Installing_Wibu_CodeMeter_Linux | Installing Wibu CodeMeter Linux]].&lt;br /&gt;
* If using a software lock (for either network or local licenses), please follow the instructions in [[WIBU_Software_Licence_Linux | Wibu Software License Linux]].&lt;br /&gt;
&lt;br /&gt;
== TUFLOW Versioning ==&lt;br /&gt;
TUFLOW uses a year.minor.patch versioning convention as follows. &lt;br /&gt;
&lt;br /&gt;
* The year is the major version number, e.g. 2026.0.0. Major releases are the only releases that may include breaking changes, i.e. changes to defaults or features that can alter model results between versions. There is one major release per year.&lt;br /&gt;
* Minor releases contain new features and bug fixes, but no breaking changes, and increment the minor version number, e.g. 2026.1.0.&lt;br /&gt;
* Patch releases contain bug fixes only and increment the patch version number, e.g. 2026.0.1.&lt;br /&gt;
&lt;br /&gt;
== RPM Install ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Download the &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; for the specific TUFLOW build from https://www.tuflow.com/downloads/#tuflow&lt;br /&gt;
&amp;lt;li&amp;gt; Save the &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; locally. A common location is the home folder for the current user &amp;lt;code&amp;gt;~/&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Run the preferred installation command. For example, if installing the TUFLOW 2026.0.0 release:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo dnf install ~/tuflow-2026.0-2026.0.0-1.x86_64.rpm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;This installs a version-specific copy of the TUFLOW binaries and libraries under &amp;lt;code&amp;gt;/opt/tuflow/&amp;amp;lt;version&amp;amp;gt;/bin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/opt/tuflow/&amp;amp;lt;version&amp;amp;gt;/lib&amp;lt;/code&amp;gt;. A symbolic link to the executable is created in &amp;lt;code&amp;gt;/usr/bin&amp;lt;/code&amp;gt;, allowing the required version to be run via a versioned shortcut, e.g. &amp;lt;code&amp;gt;tuflow-2026.0-isp&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;li&amp;gt;There is also a launcher script, &amp;lt;code&amp;gt;/opt/tuflow/tuflow-&amp;amp;lt;version&amp;amp;gt;/bin/tuflow-&amp;amp;lt;version&amp;amp;gt;-idp.sh&amp;lt;/code&amp;gt;, for use cases where environment variables need to be set before execution. If you need to edit this script, we recommend copying it into &amp;lt;code&amp;gt;/usr/local/bin&amp;lt;/code&amp;gt;, as it will be overwritten when a patch release is installed.&lt;br /&gt;
&amp;lt;li&amp;gt;To uninstall run the preferred uninstaller command. For example:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo dnf remove tuflow-2026.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;li&amp;gt;To list previously installed versions and confirm the correct package name to uninstall, use &amp;lt;code&amp;gt;dnf list installed&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;dnf list installed &#039;tuflow*&#039;&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&#039;&#039;Note: Major and minor releases are installed into a new directory under &amp;lt;code&amp;gt;/opt/tuflow&amp;lt;/code&amp;gt;. For example, TUFLOW 2026.0.0 is installed under &amp;lt;code&amp;gt;/opt/tuflow/tuflow-2026.0&amp;lt;/code&amp;gt;. Patch releases overwrite the existing installation with the same major and minor release number. For example, TUFLOW 2026.0.1 will update 2026.0.0 if present.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== DEB Install ==&lt;br /&gt;
The process for &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; packages is essentially the same as for &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt;; only the install and uninstall commands differ, as shown below for Debian derivatives (assuming TUFLOW 2026.0.0; modify for the specific version required):&lt;br /&gt;
&lt;br /&gt;
To install:&lt;br /&gt;
  sudo apt install ./tuflow-2026.0_2026.0.0-1_amd64.deb&lt;br /&gt;
&lt;br /&gt;
To uninstall:&lt;br /&gt;
  sudo apt remove tuflow-2026.0&lt;br /&gt;
If you are unsure which versions you have previously installed, you can list all installed versions via:&lt;br /&gt;
  apt list --installed | grep tuflow&lt;br /&gt;
&#039;&#039;Note: Major and minor releases are installed into a new directory under &amp;lt;code&amp;gt;/opt/tuflow&amp;lt;/code&amp;gt;. For example, TUFLOW 2026.0.0 is installed under &amp;lt;code&amp;gt;/opt/tuflow/tuflow-2026.0&amp;lt;/code&amp;gt;. Patch releases overwrite the existing installation that shares the same major and minor release number. For example, TUFLOW 2026.0.1 will update 2026.0.0 if present.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;span&amp;gt;Notes For Running Linux Under WSL&amp;lt;/span&amp;gt; ==&lt;br /&gt;
If you are running Linux under WSL (Windows Subsystem for Linux) on Microsoft Windows, things should run as expected. If you want to use NVIDIA hardware with CUDA support with TUFLOW under WSL, you need to add the correct folder to the LD_LIBRARY_PATH environment variable, so that TUFLOW can find the CUDA shared libraries that work in WSL:&lt;br /&gt;
  export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH&lt;br /&gt;
That folder also contains a utility, &amp;lt;code&amp;gt;/usr/lib/wsl/lib/nvidia-smi&amp;lt;/code&amp;gt;, which indicates whether the Linux installation can connect to the NVIDIA hardware otherwise managed by the Microsoft Windows driver.&lt;br /&gt;
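Putting the two steps above together, a quick check might look like the following sketch (the paths are the standard WSL locations; adjust them if your setup differs):

```shell
# Point TUFLOW at the WSL CUDA libraries, then confirm the GPU is visible.
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
if [ -x /usr/lib/wsl/lib/nvidia-smi ]; then
    /usr/lib/wsl/lib/nvidia-smi   # lists the GPU(s) if the Windows driver is reachable
else
    echo "WSL NVIDIA utilities not found - check the Windows NVIDIA driver"
fi
```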
&lt;br /&gt;
== Running A Model ==&lt;br /&gt;
For either Linux distribution, TUFLOW can be run by calling &amp;lt;code&amp;gt;tuflow-2026.0-isp&amp;lt;/code&amp;gt; (this example would run the currently installed patch of version 2026.0 in single precision) and passing it a configuration file name:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tuflow-2026.0-isp my_run.tcf&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Model-specific settings such as the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable can either be set on the command line, or in your own custom run script that calls TUFLOW with the configuration file name, for example &#039;&#039;&amp;lt;code&amp;gt;run_model.sh&amp;lt;/code&amp;gt;&#039;&#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
export OMP_NUM_THREADS=8&lt;br /&gt;
tuflow-2026.0-idp my_run.tcf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Once saved, the script needs to be made executable as follows:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
chmod +x run_model.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Return To Home Page ==&lt;br /&gt;
Return to [[Main_Page | Wiki Home Page]].&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=45662</id>
		<title>Installing Wibu CodeMeter Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=45662"/>
		<updated>2026-03-27T01:58:06Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: modernise and align with other wiki&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides a basic set of instructions to install the Wibu CodeMeter Runtime on a Linux host through the command line interface (CLI). For more information about using Wibu dongles or software licenses, refer to &amp;lt;u&amp;gt;[[Wibu_Dongles|Wibu Dongles]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions, but were tested on CentOS and Debian. Note that these instructions are provided as a courtesy to users new to Linux; please ensure you understand what the commands mean before you run them, and be aware of the [[Tuflow:General_disclaimer|general disclaimer]].&lt;br /&gt;
&lt;br /&gt;
==Getting the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
The appropriate version of the CodeMeter Runtime can be obtained from the Wibu website at &amp;lt;u&amp;gt;https://www.wibu.com/support/user/user-software.html&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using Debian, Ubuntu, Mint or another Linux distribution in the Debian family of distributions, you should obtain a copy of the `.deb` installer for your hardware. If you are using Red Hat (RHEL), Fedora, CentOS, Rocky, or another Linux distribution in the Red Hat family of distributions, you should obtain a copy of the `.rpm` installer for your hardware. Download the 64-bit version for use with 64-bit TUFLOW (all modern TUFLOW versions are 64-bit). From here on, we&#039;ll refer to &#039;Debian&#039; or &#039;Red Hat&#039; to mean any distribution in that family.&lt;br /&gt;
&lt;br /&gt;
Depending on your level of access to the machine running Linux and whether or not it is running a graphical user interface, you may have some trouble getting the file onto your machine. You can download the file directly from the command line with: &amp;lt;pre&amp;gt;wget -O codemeter.rpm &amp;lt;direct link&amp;gt;&amp;lt;/pre&amp;gt; where &amp;quot;&amp;lt;direct link&amp;gt;&amp;quot; is the &#039;direct link&#039; provided on the Wibu download page for the version you are downloading. &lt;br /&gt;
&lt;br /&gt;
The download page also provides an MD5 checksum. You can run &amp;lt;pre&amp;gt;md5sum codemeter.rpm&amp;lt;/pre&amp;gt; and verify that the file downloaded correctly by comparing the output with this checksum.&lt;br /&gt;
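As a sketch, `md5sum -c` can do the comparison for you. The file contents and checksum below are stand-ins; use the real download and the checksum published on the Wibu page:

```shell
# Create a stand-in file (replace with the real codemeter.rpm download).
printf 'abc' > codemeter.rpm
# Verify against a known checksum (here, the MD5 of "abc");
# md5sum -c prints "codemeter.rpm: OK" on a match and exits non-zero otherwise.
echo "900150983cd24fb0d6963f7d28e17f72  codemeter.rpm" | md5sum -c -
```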
&lt;br /&gt;
If your Linux distribution does not provide `wget`, you can obtain a copy on Debian with `sudo apt-get install wget` and on Red Hat with `sudo dnf install wget`.&lt;br /&gt;
&lt;br /&gt;
==Installing the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
On Red Hat you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo dnf install ./codemeter.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Debian you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo apt install ./codemeter.deb&amp;lt;/pre&amp;gt; If that fails due to missing dependencies, you can attempt &amp;lt;pre&amp;gt;sudo apt install -f&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once these commands complete (on either Debian or Red Hat), you can start, stop and restart the services with `systemctl`: &amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `systemctl` is not available to you, try: &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring Codemeter Runtime for Server Host ==&lt;br /&gt;
Stop the CodeMeter service (note: if this is not done, you will not be able to edit the configuration files):&amp;lt;pre&amp;gt;sudo systemctl stop codemeter.service&amp;lt;/pre&amp;gt;Open &amp;lt;code&amp;gt;/etc/wibu/CodeMeter/Server.ini&amp;lt;/code&amp;gt; with write access.&lt;br /&gt;
&lt;br /&gt;
Within Server.ini set: &amp;lt;code&amp;gt;IsNetworkServer=1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the CodeMeter service:&amp;lt;pre&amp;gt;sudo systemctl start codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
Refer to the CodeMeter manual for instructions on configuring CodeMeter. &lt;br /&gt;
&lt;br /&gt;
However, if you are installing CodeMeter as a client for network licenses, the following is an example of a section you can add to the `/etc/wibu/CodeMeter/Server.ini`: &amp;lt;pre&amp;gt;[ServerSearchList]&lt;br /&gt;
UseBroadcast=1&lt;br /&gt;
&lt;br /&gt;
[ServerSearchList\Server1]&lt;br /&gt;
Address=&amp;lt;ip number of your license host&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add multiple `ServerSearchList\Server&amp;lt;n&amp;gt;` sections, one for each license host you have, with the IP address of the license host. Once you update and save the configuration file, restart the CodeMeter service and your licenses from the network server should then be available locally.&lt;br /&gt;
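As an illustrative sketch, a search list pointing at two licence hosts might look like this (the addresses shown are placeholders for your licence hosts' IP addresses):

```ini
[ServerSearchList]
UseBroadcast=1

[ServerSearchList\Server1]
Address=192.0.2.10

[ServerSearchList\Server2]
Address=192.0.2.11
```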
&lt;br /&gt;
If you are setting up a license host that you wish to access from another machine, you will need to install the CodeMeter Runtime on that machine as well, and ensure the firewall allows requests to the license host on port 22350.&lt;br /&gt;
&lt;br /&gt;
On Red Hat, you can achieve this with:&amp;lt;pre&amp;gt;sudo firewall-cmd --get-active-zones&lt;br /&gt;
sudo firewall-cmd --zone=public --add-port=22350/tcp --permanent&lt;br /&gt;
sudo firewall-cmd --reload&amp;lt;/pre&amp;gt;&lt;br /&gt;
This assumes you see the `public` zone after the first command.&lt;br /&gt;
&lt;br /&gt;
On Debian, you can run:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ufw allow 22350/tcp&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `ufw` is not available to you, try:&amp;lt;pre&amp;gt;sudo iptables -A INPUT -p tcp -m tcp --dport 22350 -j ACCEPT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, if you want users to be able to access the web admin interface for CodeMeter on the server, you need to ensure the firewall allows requests on port 22352 (for http) and/or 22353 (for https). However, access to the web admin interface from other machines is not required for obtaining a license; in typical configurations, you can access the web admin interface on the host itself (localhost) without additional firewall rules.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=WIBU_Licence_for_Linux&amp;diff=45661</id>
		<title>WIBU Licence for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=WIBU_Licence_for_Linux&amp;diff=45661"/>
		<updated>2026-03-27T01:31:26Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: fix mixed RHEL / Debian syntax&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For the process of requesting a Wibu-Systems Software License for a &#039;&#039;Windows&#039;&#039; licence host, please refer to  [[WIBU Licence Update Request|Wibu Software Licence Update Request]]. This article describes the process of setting up a software licence container and installing licence updates from a &amp;lt;i&amp;gt;Linux&amp;lt;/i&amp;gt; command line interface, like one you would access on an SSH console. It applies to both Local and Network software licences, not cloud-based licences.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions but were tested on CentOS and Debian. This article assumes you have downloaded and installed an appropriate version of the [https://www.wibu.com/support/user/user-software.html CodeMeter Runtime (download)] for your Linux host and CodeMeter is running as a service. If you are uncertain, you can run &amp;lt;pre&amp;gt;systemctl | grep codemeter&amp;lt;/pre&amp;gt; and you should see both `codemeter.service` and `codemeter-webadmin.service` as `running` or `exited`. For the CodeMeter service, you can also check &amp;lt;pre&amp;gt;systemctl status codemeter.service&amp;lt;/pre&amp;gt; which should report CodeMeter Server as running. By default, your CodeMeter Control Center web application would be hosted on &amp;lt;font color=&amp;quot;#3366CC&amp;quot;&amp;gt;http://&amp;lt;your hostname&amp;gt;:22352/&amp;lt;/font&amp;gt; or &amp;lt;font color=&amp;quot;#3366CC&amp;quot;&amp;gt;https://&amp;lt;your hostname&amp;gt;:22353/&amp;lt;/font&amp;gt; and accessible from &amp;lt;font color=&amp;quot;#3366CC&amp;quot;&amp;gt;http://&amp;lt;your hostname&amp;gt;:22350/&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a basic setup of CodeMeter on Linux, refer to [[Installing_Wibu_CodeMeter_Linux | Installing Wibu CodeMeter on Linux]].&lt;br /&gt;
&lt;br /&gt;
Software licences are an alternative option to hardware USB dongle licences. Please select the licence host carefully as a software-based dongle will be bound to it when it is first imported. If over time you decide you want to move to another computer, we will need to re-issue you with a new software licence (which will incur a small administration fee).&lt;br /&gt;
&lt;br /&gt;
== Setting up a new software licence container ==&lt;br /&gt;
&lt;br /&gt;
Email &amp;lt;u&amp;gt;[mailto:sales@tuflow.com sales@tuflow.com]&amp;lt;/u&amp;gt; to request a software licence. You will be sent an empty licence container file (*.WibuCmLif). &lt;br /&gt;
&lt;br /&gt;
Install the licence container file with:&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu --import --file Universal_Firm_Code_CmActLicense_6000224.WibuCmLif&amp;lt;/pre&amp;gt;&lt;br /&gt;
Which should result in something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu - CodeMeter Universal Support Tool.&lt;br /&gt;
Version 6.60a of 2018-Feb-26 (Build 2878) for Linux&lt;br /&gt;
Copyright (C) 2007-2018 by WIBU-SYSTEMS AG. All rights reserved.&lt;br /&gt;
&lt;br /&gt;
The file contains 1 Update:&lt;br /&gt;
  CmActLtLicense binding information: FirmCode 6000224&lt;br /&gt;
&lt;br /&gt;
Execute Update ...&lt;br /&gt;
The file contains 1 Update:&lt;br /&gt;
  CmActLtLicense update: Serial number xxx-xxxxxxxxxx, FirmCode 6000224.&lt;br /&gt;
   --&amp;gt; successful&lt;br /&gt;
1 successful update done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installing the licence container file, the CodeMeter WebAdmin interface may report an error if you try to check the new container (e.g. &#039;Error 407: Unknown error&#039;). This can be resolved by restarting the CodeMeter service after installing the licence container:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make a note of the serial number displayed. You can list serial numbers of installed dongles and software licence containers with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu --list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Updating licences in an existing licence container ==&lt;br /&gt;
&lt;br /&gt;
Once you have an existing licence container for either local or network licences, you can update it with licences by first creating a licence update request.&lt;br /&gt;
&lt;br /&gt;
For hardware-based licences, on a dongle:&amp;lt;pre&amp;gt;cmu --context 101139 --serial x-xxxxxxx --file x-xxxxxxx.WibuCmRaC&amp;lt;/pre&amp;gt;For software-based licences:&amp;lt;pre&amp;gt;cmu --context 6000224 --serial xxx-xxxxxxxxxx --file xxx-xxxxxxxxxx.WibuCmRaC&amp;lt;/pre&amp;gt;Where &amp;quot;xxx-xxxxxxxxxx&amp;quot; / &amp;quot;x-xxxxxxx&amp;quot; is the serial number of your licence container. E-mail the created licence request file (.WibuCmRaC) to [mailto:sales@tuflow.com sales@tuflow.com].&lt;br /&gt;
&lt;br /&gt;
Once your licence request is processed, you will receive a licence update file (.WibuCmRaU) in return, which you can install with&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu --import --file &amp;lt;serial&amp;gt;.WibuCmRaU&amp;lt;/pre&amp;gt;&lt;br /&gt;
Which should result in something like (this example for a software-based licence):&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu - CodeMeter Universal Support Tool.&lt;br /&gt;
Version 6.60a of 2018-Feb-26 (Build 2878) for Linux&lt;br /&gt;
Copyright (C) 2007-2018 by WIBU-SYSTEMS AG. All rights reserved.&lt;br /&gt;
&lt;br /&gt;
The file contains 1 Updates:&lt;br /&gt;
  CmActLtLicense binding information: FirmCode 6000224&lt;br /&gt;
  CmDongle update for 130-3796453031 (FirmCode 6000224).&lt;br /&gt;
&lt;br /&gt;
Execute Update ...&lt;br /&gt;
The file contains 1 Updates:&lt;br /&gt;
  CmActLtLicense update: Serial number xxx-xxxxxxxxxx, FirmCode 6000224.&lt;br /&gt;
   --&amp;gt; successful&lt;br /&gt;
1 successful update done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once your licence update has been imported, you should see your installed licences in the CodeMeter WebAdmin. If they fail to show up, restart the CodeMeter Service:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
==Using the installed licenses==&lt;br /&gt;
If you are running a model on another machine and have installed the license on a remote license host, refer to &amp;lt;u&amp;gt;[[WIBU_Configure_Network_Server|Wibu Configure Network Server]]&amp;lt;/u&amp;gt; and &amp;lt;u&amp;gt;[[WIBU_Configure_Network_Client|Wibu Configure Network Client]]&amp;lt;/u&amp;gt; to learn how to connect to it.&lt;br /&gt;
&lt;br /&gt;
If the software license container is the only license container (i.e. you have no dongle installed), you can test using the installed license by simply starting a model run that requires it (i.e. some run other than a benchmark or tutorial model). &lt;br /&gt;
&lt;br /&gt;
However, if you have both a dongle and software license container installed, you can ensure TUFLOW prefers the software license by creating a license control file (.lcf) for your model, with the line: &amp;lt;pre&amp;gt;WIBU Firm Code Search Order == 6000224 101139&amp;lt;/pre&amp;gt; This ensures that TUFLOW will prefer a software license container (6000224) over a dongle license container (101139) when obtaining a license. You can learn more about where such a file can be placed and what logic is followed for obtaining a license in the &amp;lt;u&amp;gt;[https://docs.tuflow.com/classic-hpc/manual/latest/ TUFLOW Manual]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=WIBU_Licence_for_Linux&amp;diff=45660</id>
		<title>WIBU Licence for Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=WIBU_Licence_for_Linux&amp;diff=45660"/>
		<updated>2026-03-27T01:13:20Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Reviewed, corrected errors on licence types, aligned with other wiki&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For the process of requesting a Wibu-Systems Software License for a &#039;&#039;Windows&#039;&#039; licence host, please refer to  [[WIBU Licence Update Request|Wibu Software Licence Update Request]]. This article describes the process of setting up a software licence container and installing licence updates from a &amp;lt;i&amp;gt;Linux&amp;lt;/i&amp;gt; command line interface, like one you would access on an SSH console. It applies to both Local and Network software licences, not cloud-based licences.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions but were tested on CentOS and Debian. This article assumes you have downloaded and installed an appropriate version of the [https://www.wibu.com/support/user/user-software.html CodeMeter Runtime (download)] for your Linux host and CodeMeter is running as a service. If you are uncertain, you can run &amp;lt;pre&amp;gt;systemctl | grep codemeter&amp;lt;/pre&amp;gt; and you should see both `codemeter.service` and `codemeter-webadmin.service` as `running` or `exited`. For the CodeMeter service, you can also check &amp;lt;pre&amp;gt;/etc/init.d/codemeter status&amp;lt;/pre&amp;gt; which should report &amp;quot;CodeMeter Server is running.&amp;quot;. By default, your CodeMeter Control Center web application would be hosted on &amp;lt;font color=&amp;quot;#3366CC&amp;quot;&amp;gt;http://&amp;lt;your hostname&amp;gt;:22352/&amp;lt;/font&amp;gt; or &amp;lt;font color=&amp;quot;#3366CC&amp;quot;&amp;gt;https://&amp;lt;your hostname&amp;gt;:22353/&amp;lt;/font&amp;gt; and accessible from &amp;lt;font color=&amp;quot;#3366CC&amp;quot;&amp;gt;http://&amp;lt;your hostname&amp;gt;:22350/&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a basic setup of CodeMeter on Linux, refer to [[Installing_Wibu_CodeMeter_Linux | Installing Wibu CodeMeter on Linux]].&lt;br /&gt;
&lt;br /&gt;
Software licences are an alternative option to hardware USB dongle licences. Please select the licence host carefully as a software-based dongle will be bound to it when it is first imported. If over time you decide you want to move to another computer, we will need to re-issue you with a new software licence (which will incur a small administration fee).&lt;br /&gt;
&lt;br /&gt;
== Setting up a new software licence container ==&lt;br /&gt;
&lt;br /&gt;
Email &amp;lt;u&amp;gt;[mailto:sales@tuflow.com sales@tuflow.com]&amp;lt;/u&amp;gt; to request a software licence. You will be sent an empty licence container file (*.WibuCmLif). &lt;br /&gt;
&lt;br /&gt;
Install the licence container file with:&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu --import --file Universal_Firm_Code_CmActLicense_6000224.WibuCmLif&amp;lt;/pre&amp;gt;&lt;br /&gt;
Which should result in something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu - CodeMeter Universal Support Tool.&lt;br /&gt;
Version 6.60a of 2018-Feb-26 (Build 2878) for Linux&lt;br /&gt;
Copyright (C) 2007-2018 by WIBU-SYSTEMS AG. All rights reserved.&lt;br /&gt;
&lt;br /&gt;
The file contains 1 Update:&lt;br /&gt;
  CmActLtLicense binding information: FirmCode 6000224&lt;br /&gt;
&lt;br /&gt;
Execute Update ...&lt;br /&gt;
The file contains 1 Update:&lt;br /&gt;
  CmActLtLicense update: Serial number xxx-xxxxxxxxxx, FirmCode 6000224.&lt;br /&gt;
   --&amp;gt; successful&lt;br /&gt;
1 successful update done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installing the licence container file, the CodeMeter WebAdmin interface may report an error if you try to check the new container (e.g. &#039;Error 407: Unknown error&#039;). This can be resolved by restarting the CodeMeter service after installing the licence container:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make a note of the serial number displayed. You can list serial numbers of installed dongles and software licence containers with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu --list&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Updating licences in an existing licence container ==&lt;br /&gt;
&lt;br /&gt;
Once you have an existing licence container for either local or network licences, you can update it with licences by first creating a licence update request.&lt;br /&gt;
&lt;br /&gt;
For hardware-based licences, on a dongle:&amp;lt;pre&amp;gt;cmu --context 101139 --serial x-xxxxxxx --file x-xxxxxxx.WibuCmRaC&amp;lt;/pre&amp;gt;For software-based licences:&amp;lt;pre&amp;gt;cmu --context 6000224 --serial xxx-xxxxxxxxxx --file xxx-xxxxxxxxxx.WibuCmRaC&amp;lt;/pre&amp;gt;Where &amp;quot;xxx-xxxxxxxxxx&amp;quot; / &amp;quot;x-xxxxxxx&amp;quot; is the serial number of your licence container. E-mail the created licence request file (.WibuCmRaC) to [mailto:sales@tuflow.com sales@tuflow.com].&lt;br /&gt;
&lt;br /&gt;
Once your licence request is processed, you will receive a licence update file (.WibuCmRaU) in return, which you can install with&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu --import --file &amp;lt;serial&amp;gt;.WibuCmRaU&amp;lt;/pre&amp;gt;&lt;br /&gt;
Which should result in something like (this example for a software-based licence):&lt;br /&gt;
&amp;lt;pre&amp;gt;cmu - CodeMeter Universal Support Tool.&lt;br /&gt;
Version 6.60a of 2018-Feb-26 (Build 2878) for Linux&lt;br /&gt;
Copyright (C) 2007-2018 by WIBU-SYSTEMS AG. All rights reserved.&lt;br /&gt;
&lt;br /&gt;
The file contains 1 Updates:&lt;br /&gt;
  CmActLtLicense binding information: FirmCode 6000224&lt;br /&gt;
  CmDongle update for 130-3796453031 (FirmCode 6000224).&lt;br /&gt;
&lt;br /&gt;
Execute Update ...&lt;br /&gt;
The file contains 1 Updates:&lt;br /&gt;
  CmActLtLicense update: Serial number xxx-xxxxxxxxxx, FirmCode 6000224.&lt;br /&gt;
   --&amp;gt; successful&lt;br /&gt;
1 successful update done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once your licence update has been imported, you should see your installed licences in the CodeMeter WebAdmin. If they fail to show up, restart the CodeMeter Service:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt; or &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Using the installed licenses==&lt;br /&gt;
If you are running a model on another machine and have installed the license on a remote license host, refer to &amp;lt;u&amp;gt;[[WIBU_Configure_Network_Server|Wibu Configure Network Server]]&amp;lt;/u&amp;gt; and &amp;lt;u&amp;gt;[[WIBU_Configure_Network_Client|Wibu Configure Network Client]]&amp;lt;/u&amp;gt; to learn how to connect to it.&lt;br /&gt;
&lt;br /&gt;
If the software license container is the only license container (i.e. you have no dongle installed), you can test using the installed license by simply starting a model run that requires it (i.e. some run other than a benchmark or tutorial model). &lt;br /&gt;
&lt;br /&gt;
However, if you have both a dongle and software license container installed, you can ensure TUFLOW prefers the software license by creating a license control file (.lcf) for your model, with the line: &amp;lt;pre&amp;gt;WIBU Firm Code Search Order == 6000224 101139&amp;lt;/pre&amp;gt; This ensures that TUFLOW will prefer a software license container (6000224) over a dongle license container (101139) when obtaining a license. You can learn more about where such a file can be placed and what logic is followed for obtaining a license in the &amp;lt;u&amp;gt;[https://docs.tuflow.com/classic-hpc/manual/latest/ TUFLOW Manual]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Hardware_Selection_Advice&amp;diff=45627</id>
		<title>Hardware Selection Advice</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Hardware_Selection_Advice&amp;diff=45627"/>
		<updated>2026-03-23T04:33:38Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Removing statement on misinformation, added links, made testing advice specific to non-ECC memory&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides general hardware advice for running TUFLOW models on GPU or CPU. &amp;lt;br&amp;gt;&lt;br /&gt;
[[File: Hardware_Configuration_001.jpg ||450px|right]]&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We often get asked about the optimum computing setup to run TUFLOW models. While every model is different and will interact differently with your hardware, there is some general advice we can offer. Note that these recommendations focus specifically on running TUFLOW. It is highly recommended to consult your IT team to confirm that all components of the machine are fully capable of supporting your intended uses and meet your requirements for quality, speed, and durability.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the sections below you will find more detailed advice on GPU and CPU but generally:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The amount of RAM in the computer will be the limiter for the size of model you can run. This applies to CPU RAM (TUFLOW Classic, TUFLOW FV and TUFLOW HPC with Hardware == CPU) and also GPU RAM (TUFLOW HPC and TUFLOW FV with Hardware == GPU). If available RAM becomes a limitation, users should also investigate improvements to their model configuration to reduce RAM requirements (see &amp;lt;u&amp;gt;[[TUFLOW Simulation Speed | TUFLOW Simulation Speed]]&amp;lt;/u&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
* The processing speed of the CPU, its architecture, cache size and the number of processors all play a role.&lt;br /&gt;
* For GPU simulations, the number of CUDA cores, the core speed, GPU card architecture, memory speed and interfacing with the motherboard PCI lanes and CPU are all important. &lt;br /&gt;
* The system must be well cooled to avoid throttling (meaning reduction of clock speeds to reduce heating), and have sufficient and reliable power supply. Should upgrades to the system be expected in the future (such as adding a second GPU card), then consider configuring these components to avoid future limitations. &amp;lt;br&amp;gt;&lt;br /&gt;
For information on minimum and recommended system requirement, see &amp;lt;u&amp;gt;[[System_Requirements | System Requirements]]&amp;lt;/u&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
To discover a computer&#039;s NVIDIA GPU hardware, see &amp;lt;u&amp;gt;[[Console_Window_GPU_Usage | NVIDIA GPU Hardware and Usage]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=The TUFLOW Software Suite=&lt;br /&gt;
The TUFLOW Software suite has a range of solvers. Each interacts differently with your hardware, so pairing the correct solver (or the range of solvers you want to run) with your hardware is an important consideration. A brief summary of each solver&#039;s needs follows:&amp;lt;br&amp;gt;&lt;br /&gt;
*TUFLOW Classic: A single model run can only use the CPU and cannot be run across multiple CPU cores or GPU hardware. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, architecture and cache size.&lt;br /&gt;
* TUFLOW HPC - Run on CPU Hardware: A single model run uses the CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, the number of cores available to be run in parallel, architecture and cache size.&lt;br /&gt;
*TUFLOW HPC - Run on GPU Hardware: A single model run uses the GPU(s) for computation. In general terms: The maximum model size is dependent on the available GPU and CPU RAM and the runtime is driven by the CUDA core speed, the number of CUDA cores available and the GPU architecture. GPU performance is complex and is not easily inferred from GPU clock speed and number of cores; it is also very dependent on the ‘generation’ or architecture of the GPU. As TUFLOW HPC requires some data exchange between GPU and CPU, the motherboard bus speeds and CPU speeds also play a role, but typically a much lesser one compared to the GPU CUDA compute.&lt;br /&gt;
*TUFLOW FV - Run on CPU Hardware: A single model run uses CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is determined by the CPU speed, the number of cores available to be run in parallel, chip architecture and cache size.&lt;br /&gt;
*TUFLOW FV - Run on GPU Hardware: A single model run uses the GPU(s) for computation. In general terms: The maximum model size is dependent on the available GPU and CPU RAM and the runtime is driven by the CUDA core speed, the number of CUDA cores available and the GPU architecture. GPU performance is complex and is not easily inferred from GPU clock speed and number of cores; it is also very dependent on the ‘generation’ or architecture of the GPU. As TUFLOW FV requires some data exchange between GPU and CPU, the motherboard bus speeds and CPU speeds also play a role, but typically a much lesser one compared to the GPU CUDA compute.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;u&amp;gt;[[Hardware_Benchmarking_-_Results#CPU_Results | Hardware Benchmarking]]&amp;lt;/u&amp;gt; page shows recently run combinations of GPU, CPU and RAM. These can be compared with the system intended for purchase. The recommendation is to seek advice from an appropriate computer hardware vendor who can advise on the compatibility and optimisation of the setup.&lt;br /&gt;
&lt;br /&gt;
=GPU Advice=&lt;br /&gt;
TUFLOW HPC on GPU Hardware is typically our fastest solver for 1D/2D pipe and floodplain simulations. &lt;br /&gt;
* TUFLOW HPC supports CUDA enabled NVIDIA GPU cards. For list of supported CUDA enabled graphics cards please visit the &amp;lt;u&amp;gt;[https://developer.nvidia.com/cuda-gpus NVIDIA website]&amp;lt;/u&amp;gt;.&lt;br /&gt;
*To discover a computer&#039;s NVIDIA GPU hardware, see &amp;lt;u&amp;gt;[[Console_Window_GPU_Usage | NVIDIA GPU Hardware and Usage]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
*TUFLOW HPC on GPU Hardware can be run in either single or double precision. However, for the vast majority of flood applications single precision is sufficient. We typically run our models on single precision. If you are unsure we recommend running with both the single and double precision solvers and comparing your results.&lt;br /&gt;
The precision required will determine the type of GPU card best suited to the compute. For any given generation/architecture of cards, the “gaming” cards such as the GTX GeForce and RTX series provide excellent single precision performance – typically comparable to that of the “scientific” cards such as the Tesla series. If double precision is required, the scientific cards are substantially faster, but they are also significantly more expensive. The Quadro series cards sit in between for both double precision performance and cost. The specifications of a card should provide a breakdown of its single and double precision throughput in FLOPS.&lt;br /&gt;
&lt;br /&gt;
For the higher end GPU cards, users may wish to consider server-based computers rather than workstations, and also weigh the cost of an extra TUFLOW licence against the cost of the high end hardware.&lt;br /&gt;
&lt;br /&gt;
===GPU RAM===&lt;br /&gt;
RAM is the computer memory required to store all of the model data used during the computation. A computer has CPU RAM which is located on the motherboard and accessed from the CPU, and it has GPU RAM which is located on the GPU device and accessed from the GPU. The two memory storage systems are physically separate. &lt;br /&gt;
The amount of GPU RAM is one of two factors that will determine the size of the model that can be run (the other being CPU RAM). As a rule of thumb, approximately 5 million cells can be run per gigabyte (GB) of GPU RAM, depending on the model features, e.g. a model with infiltration requires more memory due to the extra variables needed for the infiltration calculation. &lt;br /&gt;
&lt;br /&gt;
===CPU RAM===&lt;br /&gt;
TUFLOW HPC on GPU hardware still uses the CPU to compute and store data (in CPU RAM) during model initialisation and for all 1D calculations. While we are working on improving our CPU RAM usage, currently we tend to find that CPU RAM is often the limiter on the size of the model domain that can be run, particularly if running over multiple GPU cards. During initialisation and simulation a model will typically require 4-6 times as much CPU RAM as GPU RAM. As an example, for a model that utilises 11GB of GPU RAM (typical memory for a high-end gaming card, corresponding to about a 50 million cell model) the CPU RAM required during initialisation will typically be in the range of 44GB to 66GB. A model that fully utilises two 11GB GPUs (i.e. a 100 million cell model) may require as much as 128GB of CPU RAM during initialisation. Note that anything more than 256GB of CPU RAM will exceed the limitations of consumer chipsets available in 2025 and requires more expensive workstation hardware; additionally, users should consult a hardware expert to check the limitations of specific hardware.&lt;br /&gt;
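As a rough sketch of the rules of thumb above (about 5 million cells per GB of GPU RAM, and CPU RAM at 4-6 times GPU RAM), the sizing arithmetic can be checked with a few shell lines; the 11GB figure is the example card from this section:

```shell
# Back-of-envelope sizing using the rules of thumb from this section:
# ~5 million cells per GB of GPU RAM, CPU RAM at 4-6x GPU RAM.
gpu_ram_gb=11                       # example high-end gaming card
cells=$((gpu_ram_gb * 5000000))     # ~55 million cells (the article rounds to ~50M)
cpu_ram_min=$((gpu_ram_gb * 4))     # 44 GB
cpu_ram_max=$((gpu_ram_gb * 6))     # 66 GB
echo "${cells} cells, CPU RAM ${cpu_ram_min}-${cpu_ram_max} GB"
```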
&lt;br /&gt;
=== RAM Reliability (ECC vs non-ECC) ===&lt;br /&gt;
ECC (Error-Correcting Code) RAM detects and corrects memory errors, improving reliability, while non-ECC cannot. Use of non-ECC memory may raise worries about global error rates affecting simulation results. However, large [https://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf field studies] show errors are usually caused by physical faults in specific DIMMs (Dual In-line Memory Modules, the removable RAM sticks), not uniform random events. Most DIMMs experience no errors, while a small number produce the vast majority of faults. Modern DDR5 memory also includes on-die correction that silently fixes some errors before they leave the chip.&lt;br /&gt;
&lt;br /&gt;
A failing DIMM on a non-ECC system is more likely to cause crashes or obvious corruption than a silent incorrect result. In numerical solvers, bit flips often trigger instability or failure rather than plausible but wrong outputs. For a single TUFLOW workstation, ECC is generally not required solely to protect result quality, though it may be beneficial for servers, critical workloads, or environments operating many machines.&lt;br /&gt;
&lt;br /&gt;
If additional confidence in non-ECC memory is desired, run a memory test (for example [https://www.memtest.org/ Memtest86+]) for multiple passes after installation. Consistent errors indicate defective hardware that should be replaced.&lt;br /&gt;
&lt;br /&gt;
===CUDA Cores, GPU Clock speed, and FLOPs ===&lt;br /&gt;
One way of reporting a GPU card&#039;s throughput is in Floating Point Operations per second (FLOPS). The more FLOPS, the more calculations can be performed per second and the faster the model should run. For any given generation of GPU, FLOPS are approximately proportional to the number of CUDA cores times the GPU clock speed. However, there have been significant improvements in GPU architecture since the inception of CUDA, and this has contributed to increases in overall FLOPS performance beyond just the increases in cores and clock speed that have occurred over this time. &lt;br /&gt;
&lt;br /&gt;
===Multiple GPUs===&lt;br /&gt;
TUFLOW can use multiple GPU cards on a machine to run a single model (TUFLOW FV can currently use a single GPU only). This is useful for models that are too large for a single GPU, or for running a model as quickly as possible. In general terms the run time benefit of using multiple cards increases with model size. &lt;br /&gt;
*TUFLOW HPC-GPU does not support SLI for inter-GPU communications.&lt;br /&gt;
*It does (as of build 2020-01-AA) auto detect and utilise peer-to-peer access over NVLink or PCI bus on the motherboard. Note that not all GPUs support peer-to-peer access. &lt;br /&gt;
**PCI bus - this method requires cards that support TCC driver mode, and all cards must be in TCC driver mode. As TUFLOW primarily relies on GPU CUDA capabilities, the impact of using a higher or lower PCI slot option is minimal.&lt;br /&gt;
**NVLink - high-end compute cards can have up to 8 cards talking to each other through a high-spec NVLink, but many of the less expensive cards are limited to only having two connected together over a dual socket NVLink.&lt;br /&gt;
*Models may still be run across multiple GPUs even if an NVLink is not present and the GPUs do not support peer-to-peer access. In this case HPC reverts to exchanging the domain boundary data between the GPUs via the CPU. The memory bandwidth between the GPU and the main system is not a critical bottleneck for TUFLOW.&lt;br /&gt;
*When using multiple GPUs it is best to use cards of similar memory and performance. While it is possible (as of build 2020-01-AA) to re-balance a model over multiple GPUs, we do not recommend using cards with vastly disparate performance.&lt;br /&gt;
*Sufficient cooling and power supply should be considered if multiple cards are used. When installed in adjacent PCI slots, the preference is to use rear vented cards rather than side vented to avoid blowing hot air onto the neighbouring cards (which could lead to overheating).&lt;br /&gt;
&lt;br /&gt;
===GPU Performance Comparison===&lt;br /&gt;
Extensive GPU hardware speed comparison testing has been completed using TUFLOW&#039;s standardised hardware benchmarking dataset. Details for the benchmarking are available via the &amp;lt;u&amp;gt;[[Hardware_Benchmarking | Hardware Benchmarking]]&amp;lt;/u&amp;gt; page. Review the GPU benchmarking runtime results table to compare the speed performance of different cards. If your GPU card is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to [mailto:support@tuflow.com support@tuflow.com]. We will add the details to the runtime results table.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
External video card benchmark websites can be used to compare GPU cards; for example, &amp;lt;u&amp;gt;[https://www.videocardbenchmark.net/high_end_gpus.html PassMark Software - Video Card (GPU) Benchmarks]&amp;lt;/u&amp;gt; is a useful performance guide. Note that PassMark results may not be representative of TUFLOW performance for the highest-end cards. GPUs are complex devices; newer cards may not score as well on PassMark&#039;s benchmarks, which target the criteria consumers buy GPUs for (games, video editing, etc.), even though those same cards may perform considerably better for TUFLOW.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=CPU Advice=&lt;br /&gt;
In general terms a more recent architecture, higher clock speed CPU with a large cache will perform better than a slower clock speed chip. This section discusses CPU RAM, RAM speed, Processor frequency, Multi-core processing and hyper-threading.&lt;br /&gt;
&lt;br /&gt;
===CPU RAM===&lt;br /&gt;
The amount of CPU RAM will determine the size of the model that can be run, or the number of models that can be run at one time. &lt;br /&gt;
Faster RAM will result in quicker runtimes; however, this is usually a secondary consideration to chip speed, cache size and architecture.&lt;br /&gt;
&lt;br /&gt;
===CPU Cores ===&lt;br /&gt;
*TUFLOW HPC - Run on GPU Hardware: The parallel processing is being done on the GPU card. However, TUFLOW HPC-GPU still uses the CPU for model initialisation and for 1D calculations. If multiple GPU cards are used, TUFLOW will use the equivalent number of CPU threads for controlling the GPUs and migrating data. So for a machine dedicated to HPC-GPU modelling, the number of CPU cores should be higher than the number of installed GPUs.&lt;br /&gt;
*TUFLOW HPC - Run on CPU Hardware: HPC models can also be run on multiple CPU cores. For a comparison of simulation speed, please refer to [[Hardware_Benchmarking_Topic_HPC_on_CPU_vs_GPU | HPC on CPU vs GPU]].&lt;br /&gt;
*TUFLOW Classic: A TUFLOW Classic simulation can only use one CPU core due to the implicit nature of the numerical solution. More CPU cores allow more simulations to be run efficiently at the same time.&lt;br /&gt;
&lt;br /&gt;
===Hyperthreading===&lt;br /&gt;
See the [https://fvwiki.tuflow.com/index.php?title=TUFLOW_FV_Parallel_Computing TUFLOW FV Parallel Computing] page for a discussion of hyper-threading.&lt;br /&gt;
&lt;br /&gt;
===Processor Frequency and RAM Frequency===&lt;br /&gt;
Processor and RAM frequencies directly affect run times. In general, the higher the frequency, the faster the model runs.&lt;br /&gt;
&lt;br /&gt;
===CPU Performance Comparison===&lt;br /&gt;
Extensive CPU hardware speed comparison testing has been completed using TUFLOW&#039;s standardised hardware benchmarking dataset. Details for the benchmarking are available via the &amp;lt;u&amp;gt;[[Hardware_Benchmarking| Hardware Benchmarking]]&amp;lt;/u&amp;gt; page of the Wiki. Review the CPU benchmarking runtime results table to compare the speed performance of different chips. If your chip is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to [mailto:support@tuflow.com support@tuflow.com]. We will add the details to the runtime results table.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage Advice=&lt;br /&gt;
Solid state hard drives are preferred for temporary storage as they are faster to write to than traditional hard drives. Large data files can then be transferred to a more permanent location.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[Main_Page| TUFLOW Main Page]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=TUFLOW_on_Linux&amp;diff=45212</id>
		<title>TUFLOW on Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=TUFLOW_on_Linux&amp;diff=45212"/>
		<updated>2025-11-17T06:17:52Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: First draft of an article for release alongside 2026.0&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Starting with version 2026.0, TUFLOW Classic/HPC will also be available for Linux, just as TUFLOW FV has been since its inception. Although the application functions the same on Microsoft Windows and Linux, there are some caveats to consider, especially when working across both platforms within a single project.&lt;br /&gt;
&lt;br /&gt;
= How to install =&lt;br /&gt;
TUFLOW for Linux will be made available in two types of packages, &amp;lt;code&amp;gt;.deb&amp;lt;/code&amp;gt; for Debian family distributions (like Debian, Ubuntu, Mint, etc.) and &amp;lt;code&amp;gt;.rpm&amp;lt;/code&amp;gt; for RHEL family distributions (Red Hat, CentOS, Rocky, etc.).&lt;br /&gt;
&lt;br /&gt;
These can be downloaded and installed with tools that are available on any Linux distribution by default. It is recommended that admins use &amp;lt;code&amp;gt;dnf&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;apt&amp;lt;/code&amp;gt; respectively, to ensure that dependencies are automatically downloaded and installed as needed.&lt;br /&gt;
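As a sketch of the two install paths (the package file names below are hypothetical placeholders; use the names as downloaded), a small helper can pick the matching tool from the installer extension:

```shell
# Pick the matching package-manager command from the installer extension.
# The package file names used here are hypothetical placeholders.
install_cmd() {
  case "$1" in
    *.deb) echo "sudo apt install ./$1" ;;   # Debian family (Debian, Ubuntu, Mint)
    *.rpm) echo "sudo dnf install ./$1" ;;   # RHEL family (Red Hat, CentOS, Rocky)
    *) echo "unsupported package: $1" >&2; return 1 ;;
  esac
}
install_cmd tuflow_2026.0.deb   # prints: sudo apt install ./tuflow_2026.0.deb
```

Using `apt` or `dnf` on the local file (rather than `dpkg -i` or `rpm -i`) ensures dependencies are resolved automatically, as recommended above.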
&lt;br /&gt;
= How to use =&lt;br /&gt;
After installation, TUFLOW will be available from the command line as &amp;lt;code&amp;gt;tuflow_2026.0&amp;lt;/code&amp;gt;. Users may define an alias like &amp;lt;code&amp;gt;alias tuflow=&#039;tuflow_2026.0&#039;&amp;lt;/code&amp;gt; or use the specific versioned command directly in their scripts. Command-line options are passed just as in Windows.&lt;br /&gt;
&lt;br /&gt;
However, because Linux often runs without a graphical environment, TUFLOW on Linux runs as if the &amp;lt;code&amp;gt;-nmb&amp;lt;/code&amp;gt; option was provided.&lt;br /&gt;
&lt;br /&gt;
= Use across both Microsoft Windows and Linux =&lt;br /&gt;
Some of the quirky differences between Windows and Linux will affect users who want to use TUFLOW across both systems.&lt;br /&gt;
&lt;br /&gt;
By no means a complete overview, the following are some aspects to consider:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
*&#039;&#039;&#039;Slashes and backslashes&#039;&#039;&#039;&amp;lt;br/&amp;gt;Windows uses backslashes in file and directory paths like &amp;lt;code&amp;gt;D:\Project\results\&amp;lt;/code&amp;gt;. Linux uses forward slashes instead like &amp;lt;code&amp;gt;~/Project/results&amp;lt;/code&amp;gt; and will often interpret characters following backslashes as special characters. TUFLOW will usually deal with either format on either system in configuration files, but when writing scripts or commands users should remain aware. Also, logs and outputs referring to files will use the format appropriate to the system it is running on.&lt;br /&gt;
*&#039;&#039;&#039;Drive letters&#039;&#039;&#039;&amp;lt;br/&amp;gt;Drive letters like &amp;lt;code&amp;gt;D:&amp;lt;/code&amp;gt; are specific to Windows. Configuration files that need to be usable across both systems should avoid their use and instead use relative paths like &amp;lt;code&amp;gt;../Model/Materials_001.csv&amp;lt;/code&amp;gt; or absolute paths that assume the file is on the current drive like &amp;lt;code&amp;gt;/Project/model/Materials_001.csv&amp;lt;/code&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Character case in names&#039;&#039;&#039;&amp;lt;br/&amp;gt;Windows is case-insensitive, which means that files called &amp;lt;code&amp;gt;Hello.TXT&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;hello.txt&amp;lt;/code&amp;gt; are considered to have the same name, and cannot exist in the same location. Linux is case-sensitive and considers uppercase and lowercase characters to be different, so those two files can exist side by side. This is very relevant when TUFLOW users use one spelling in one place and another spelling elsewhere - whereas Windows would interpret both references as pointing to the same file, on Linux they would result in two separate files. Similarly, if a file&#039;s name is spelled with different case from its actual name, a Windows application would find it, but a Linux application might not. (see below)&lt;br /&gt;
*&#039;&#039;&#039;Encoding and special characters&#039;&#039;&#039;&amp;lt;br/&amp;gt;On older versions of Windows in English-speaking countries, text files (including TUFLOW configuration files) would use the 1252-Windows encoding. In modern versions of Windows, UTF-8 with BOM is the standard. On Linux UTF-8 without BOM is the standard. TUFLOW accepts any of these encodings and will typically write files in UTF-8 without BOM. On Windows, line endings in such text files are encoded as two characters, a carriage return and a line feed (CR/LF), while Linux uses only a single LF. TUFLOW accepts both forms and writes what is appropriate to the system it is running on.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
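The case-sensitivity point above can be demonstrated in a few lines of shell on any Linux system (the file names are hypothetical):

```shell
# On Linux, Hello.TXT and hello.txt are two distinct files;
# on Windows, the second name would refer to the same file as the first.
dir=$(mktemp -d)
cd "$dir"
touch Hello.TXT hello.txt
ls -1 | wc -l   # prints 2 on Linux
```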
&lt;br /&gt;
To ensure maximum compatibility of models across systems, TUFLOW will do its best to match names in the model files in the same way between Windows and Linux. This may surprise some new users on Linux; future releases will provide a command-line flag to disable this behaviour if it is undesirable.&lt;br /&gt;
&lt;br /&gt;
As a general principle, TUFLOW will default to trying to run a model with as little change as possible required across both platforms, writing results in a form that is appropriate to the platform it runs on. Future releases may provide users with more control over this behaviour, if needed.&lt;br /&gt;
&lt;br /&gt;
= Common Questions Answered (FAQ)=&lt;br /&gt;
== Why use TUFLOW on Linux at all? ==&lt;br /&gt;
TUFLOW on Linux is not just for users that use Linux as their primary work environment, although those certainly exist. Other reasons to consider are:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&amp;lt;br/&amp;gt; Servers or workstations that are dedicated for the running of models can be configured to run TUFLOW and the necessary supporting software (e.g. CodeMeter, NVIDIA GPU drivers, etc.) and very little else, using Linux. There may also be small performance differences in general between an executable built and optimised for Windows or Linux.&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure cost&#039;&#039;&#039;&amp;lt;br/&amp;gt; Most distributions of Linux do not require a licence, whereas Windows does. This is especially relevant when running virtual machines dedicated to TUFLOW, or many of them in the cloud, and can substantially reduce the cost of infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;HPC tooling&#039;&#039;&#039;&amp;lt;br/&amp;gt; Automation of running TUFLOW models across many computers or in the cloud can be greatly simplified with tooling designed for that purpose. Many such tools are available &amp;quot;off the shelf&amp;quot;, but mostly on Linux. (e.g. SLURM, PBS/Torque, etc.). Similarly, containerisation (e.g. Docker, Podman, LXC) typically requires running on Linux as well.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Must TUFLOW users on Linux use the command line interface? ==&lt;br /&gt;
Linux offers a variety of desktop environments, and TUFLOW can be used from those just as it would be on Microsoft Windows. However, in either case TUFLOW itself always runs as a so-called console application, which makes it uniquely suited to running in lightweight non-graphical environments as well.&lt;br /&gt;
&lt;br /&gt;
Like Windows, Linux offers a variety of scripting options that can help minimise the direct use of the command line by users, if that is desirable. Bash shell scripts are as easy to write and run as batch files are on Windows. Running Python or even PowerShell on Linux is easy to set up as well.&lt;br /&gt;
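For example, a minimal Bash run script might look like the sketch below. The model file names are hypothetical placeholders, and the small wrapper falls back to printing the command so the sketch can be tried on a machine without TUFLOW installed:

```shell
#!/bin/bash
# Minimal sketch of a Bash run script (the Linux counterpart of a Windows
# batch file). Model file names are hypothetical placeholders.
run() {
  if command -v tuflow_2026.0 >/dev/null 2>&1; then
    tuflow_2026.0 "$@"                # run the real simulation
  else
    echo "tuflow_2026.0 $*"           # TUFLOW absent: print the command instead
  fi
}
run -b MyModel_001.tcf
run -b MyModel_002.tcf
```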
&lt;br /&gt;
However, given the nature of the application and of Linux as an operating system, TUFLOW users on Linux would do well to understand some of the very basics of using TUFLOW from the command line, and the TUFLOW Support team can assist with that.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43876</id>
		<title>Configure CUDA device selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43876"/>
		<updated>2025-06-18T03:51:13Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: rewrite for a most consistent style&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Computers running TUFLOW may have multiple GPUs. These may be multiple NVIDIA GPUs with CUDA capabilities, used to accelerate simulation runs. Alternatively, they can be GPUs used for purposes such as rendering the interactive desktop or handling other computational tasks. A common occurrence on modern motherboards is the availability of an integrated GPU.&lt;br /&gt;
&lt;br /&gt;
Generally, it is recommended to use a GPU that is not used for TUFLOW modelling as the primary GPU for rendering the desktop, if needed. If there is no additional GPU available, one of the NVIDIA GPUs can be used, in which case it is recommended to use the most capable card for running your models and the less capable one for rendering the desktop.&lt;br /&gt;
&lt;br /&gt;
TUFLOW allows selection of a specific GPU for computation using command-line options such as &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; for the first GPU, &amp;lt;code&amp;gt;-pu1&amp;lt;/code&amp;gt; for the second, and so on. (See [[HPC Running and Converting Models]].)  &lt;br /&gt;
&lt;br /&gt;
However, what TUFLOW considers the first or second GPU may not match the order shown in tools such as Windows Device Manager, Task Manager, or the output of &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; on the command line. Another common problem is that the GPUs are not enumerated in the expected order, making it difficult to select them in the preferred order.&lt;br /&gt;
&lt;br /&gt;
To this end, an environment variable called &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; limits the devices that will be visible to CUDA-capable applications like TUFLOW, as well as specifying the order they will appear in. The remainder of this article outlines how to configure that setting. As an example, a Windows computer is used that has 2 NVIDIA GPUs, and an on-board AMD GPU. In Windows, all available GPUs can be listed using a PowerShell command like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_VideoController | Select-Object DeviceID, Name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(PowerShell commands can be run by opening PowerShell from the Windows Start Menu and pasting a command there)&lt;br /&gt;
&lt;br /&gt;
The output for the example computer is as follows (note that virtual adapters, such as a Remote Desktop adapter, will also appear):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DeviceID         Name&lt;br /&gt;
--------         ----&lt;br /&gt;
VideoController1 AMD Radeon(TM) Graphics&lt;br /&gt;
VideoController2 Microsoft Remote Display Adapter&lt;br /&gt;
VideoController3 NVIDIA GeForce RTX 4090&lt;br /&gt;
VideoController4 NVIDIA GeForce RTX 4090&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this case, only &#039;VideoController3&#039; and &#039;VideoController4&#039; need to be visible to CUDA-enabled applications like TUFLOW. More details on those can be obtained by running the following command (from either PowerShell, Command Prompt, or a Linux shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;batch&amp;quot;&amp;gt;&lt;br /&gt;
nvidia-smi --query-gpu=name,uuid --format=csv,noheader,nounits&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And the output is as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-5060f556-4eb4-7155-4020-abadcb2fd735&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The tool does not list the AMD card, but up to and including version 2025.1 of TUFLOW, that card may still interfere with the GPU selection order. Also, from this readout, it is not at all clear which card is which, and the order here may not match the order expected from tools like Task Manager (&#039;GPU 0&#039;, &#039;GPU 1&#039;, etc.).&lt;br /&gt;
&lt;br /&gt;
This issue can be resolved by setting the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. There are two possible formats. It can either have a value like &amp;lt;code&amp;gt;0,1&amp;lt;/code&amp;gt; or a more explicit value like &amp;lt;code&amp;gt;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;lt;/code&amp;gt; using the identifiers from the &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; output. &lt;br /&gt;
&lt;br /&gt;
The short format just affects the default order. If using &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; with TUFLOW selects the GPU considered #1 and vice versa, setting &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;1,0&amp;lt;/code&amp;gt; reverses the default order. However, this order may change as new hardware is installed, or existing hardware reinstalled. The recommendation is to use the explicit values in the long format.&lt;br /&gt;
&lt;br /&gt;
The value of the environment variable can either be set at the start of scripts used to run models, like batch files, PowerShell scripts, or Linux shell scripts, or globally so that it automatically applies to all running applications.&lt;br /&gt;
&lt;br /&gt;
In a batch file or from the Command Prompt use this (note there are no quotes around the values, replace the values with the identifiers for detected GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
SET CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a PowerShell script or from the PowerShell prompt use this (note the quotes around the values, replace the values with the identifiers for detected GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
$env:CUDA_VISIBLE_DEVICES = &amp;quot;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
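In a Linux shell script, the equivalent is an export statement; the UUIDs below are the example identifiers from the nvidia-smi output earlier in this article and should be replaced with those of the detected GPUs:

```shell
# Linux equivalent of the batch and PowerShell examples above; replace the
# UUIDs with the identifiers reported by nvidia-smi on the target machine.
export CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd
```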
&lt;br /&gt;
If a globally set value is preferred, it can either be set for a single user account by finding &amp;quot;Edit environment variables &#039;&#039;for your account&#039;&#039;&amp;quot; in the Windows Start menu and entering the values without quotes, or it can be set for all users on the machine by finding &amp;quot;Edit the &#039;&#039;system&#039;&#039; environment variables&amp;quot; in the Windows Start menu and doing the same in the &#039;System Variables&#039; section. Note that administrator rights are required (elevation) to be able to do the latter. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning:&#039;&#039;&#039; setting the value globally affects all CUDA-capable applications, not just TUFLOW. Please ensure that no other applications need the CUDA capabilities of the GPUs that are left out or use a local value in scripts or batch files instead.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43870</id>
		<title>Configure CUDA device selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43870"/>
		<updated>2025-06-18T02:26:03Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: fix syntax error&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The computer you use to run TUFLOW may have multiple GPUs. These can be multiple NVIDIA GPUs with CUDA-capabilities, which you may want to use to accelerate running your models. Or they can be additional GPUs for other purposes like rendering the interactive desktop for users of the computer, or other computational tasks. A common occurrence on modern motherboards is the availability of an integrated GPU.&lt;br /&gt;
&lt;br /&gt;
Generally, we recommend using a GPU you don&#039;t use for TUFLOW modelling as your primary GPU for rendering the desktop, if needed. If you don&#039;t have an additional GPU available, you can use one of the NVIDIA GPUs, but we would then recommend using the most capable card as the primary card for running your models, and the secondary card as the primary GPU for rendering the desktop.&lt;br /&gt;
&lt;br /&gt;
TUFLOW allows you to select a specific GPU for its compute, using command line options like &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; for the first GPU, &amp;lt;code&amp;gt;-pu1&amp;lt;/code&amp;gt; for the second, etc. (see [[HPC Running and Converting Models]])  &lt;br /&gt;
&lt;br /&gt;
However, you may find that what TUFLOW considers the first or second GPU does not match your expectations based on what you see in tools like the Windows Device Manager, Task Manager, or the output from &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; on the command line. Another common problem is that the GPUs you want to use are not actually #0 and #1 and you may have trouble selecting the cards you prefer, in the order you prefer them in.&lt;br /&gt;
&lt;br /&gt;
To this end, you can set an environment variable called &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;, which limits the devices that will be visible to CUDA-capable applications like TUFLOW, as well as specifying the order they will appear in. The rest of this article will explain how to go about that. As an example, we&#039;ll use a Windows computer that has 2 NVIDIA GPUs, and an on-board AMD GPU. In Windows, you can list all the available GPUs using a PowerShell command like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_VideoController | Select-Object DeviceID, Name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(you can run PowerShell commands by opening PowerShell from the Windows Start Menu and pasting a command there)&lt;br /&gt;
&lt;br /&gt;
The output for the example computer looks like this (note that even virtual adapters like a Remote Desktop adapter will show):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DeviceID         Name&lt;br /&gt;
--------         ----&lt;br /&gt;
VideoController1 AMD Radeon(TM) Graphics&lt;br /&gt;
VideoController2 Microsoft Remote Display Adapter&lt;br /&gt;
VideoController3 NVIDIA GeForce RTX 4090&lt;br /&gt;
VideoController4 NVIDIA GeForce RTX 4090&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this case, we only need &#039;VideoController3&#039; and &#039;VideoController4&#039; to be visible to CUDA-enabled applications like TUFLOW. We can get more details on those by running the following command (from either PowerShell, Command Prompt, or a Linux shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;batch&amp;quot;&amp;gt;&lt;br /&gt;
nvidia-smi --query-gpu=name,uuid --format=csv,noheader,nounits&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And the output looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-5060f556-4eb4-7155-4020-abadcb2fd735&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The tool won&#039;t list the AMD card, but up to and including version 2025.1 of TUFLOW, that card may still interfere with your GPU selection order. Also, from this readout, it is not at all clear which card is which and the order here may not match the order you expect from tools like Task Manager (&#039;GPU 0&#039;, &#039;GPU 1&#039;, etc.).&lt;br /&gt;
&lt;br /&gt;
This is what we will solve by setting the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. There are two possible formats. It can either have a value like &amp;lt;code&amp;gt;0,1&amp;lt;/code&amp;gt; or a more explicit value like &amp;lt;code&amp;gt;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;lt;/code&amp;gt; using the identifiers from the &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; output.&lt;br /&gt;
&lt;br /&gt;
The short format just affects the default order. If you find using &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; with TUFLOW selects the GPU you&#039;d consider #1 and vice versa, you could set &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;1,0&amp;lt;/code&amp;gt;, to reverse the default order. However, this order may change as you install new hardware or reinstall existing hardware, so the recommendation is to use the explicit values in the long format.&lt;br /&gt;
&lt;br /&gt;
You can either set the value of the environment variable at the start of scripts you use to run your models, like batch files, PowerShell scripts, or Linux shell scripts, or you can set it globally so that it automatically applies to all running applications.&lt;br /&gt;
&lt;br /&gt;
In a batch file or from the Command Prompt use this (note there are no quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
SET CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a PowerShell script or from the PowerShell prompt use this (note the quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
$env:CUDA_VISIBLE_DEVICES = &amp;quot;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
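In a Linux shell script or at the shell prompt, the equivalent is the following (a sketch assuming a POSIX-compatible shell such as bash; again, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;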
&lt;br /&gt;
If you prefer to set the value globally, you can either set it for a single user account by finding &amp;quot;Edit environment variables &#039;&#039;for your account&#039;&#039;&amp;quot; in the Windows Start menu and entering the values without quotes, or you can set it for all users on the machine by finding &amp;quot;Edit the &#039;&#039;system&#039;&#039; environment variables&amp;quot; in the Windows Start menu and doing the same in the &#039;System Variables&#039; section. Note that you need to be an administrator to be able to do the latter. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning:&#039;&#039;&#039; setting the value globally affects all CUDA-capable applications, not just TUFLOW. Please ensure that no other applications need the CUDA capabilities of the GPUs you&#039;re leaving out, or use a local value in your scripts or batch files instead.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43869</id>
		<title>Configure CUDA device selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43869"/>
		<updated>2025-06-18T02:25:27Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: fix syntax error&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The computer you use to run TUFLOW may have multiple GPUs. These can be multiple NVIDIA GPUs with CUDA capabilities, which you may want to use to accelerate running your models. Or they can be additional GPUs for other purposes, like rendering the interactive desktop for users of the computer, or other computational tasks. A common occurrence on modern motherboards is the availability of an integrated GPU.&lt;br /&gt;
&lt;br /&gt;
Generally, we recommend using a GPU you don&#039;t use for TUFLOW modelling as your primary GPU for rendering the desktop, if needed. If you don&#039;t have an additional GPU available, you can use one of the NVIDIA GPUs, but we would then recommend using the most capable card as the primary card for running your models, and the secondary card as the primary GPU for rendering the desktop.&lt;br /&gt;
&lt;br /&gt;
TUFLOW allows you to select a specific GPU for its compute, using command line options like &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; for the first GPU, &amp;lt;code&amp;gt;-pu1&amp;lt;/code&amp;gt; for the second, etc. (see [[HPC Running and Converting Models]])  &lt;br /&gt;
&lt;br /&gt;
However, you may find that what TUFLOW considers the first or second GPU does not match your expectations based on what you see in tools like the Windows Device Manager, Task Manager, or the output from &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; on the command line. Another common problem is that the GPUs you want to use are not actually #0 and #1 and you may have trouble selecting the cards you prefer, in the order you prefer them in.&lt;br /&gt;
&lt;br /&gt;
To this end, you can set an environment variable called &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;, which limits the devices visible to CUDA-capable applications like TUFLOW and specifies the order in which they appear. The rest of this article explains how to go about that. As an example, we&#039;ll use a Windows computer that has two NVIDIA GPUs and an on-board AMD GPU. In Windows, you can list all the available GPUs using a PowerShell command like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_VideoController | Select-Object DeviceID, Name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(you can run PowerShell commands by opening PowerShell from the Windows Start Menu and pasting a command there)&lt;br /&gt;
&lt;br /&gt;
The output for the example computer looks like this (note that even virtual adapters like a Remote Desktop adapter will show):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DeviceID         Name&lt;br /&gt;
--------         ----&lt;br /&gt;
VideoController1 AMD Radeon(TM) Graphics&lt;br /&gt;
VideoController2 Microsoft Remote Display Adapter&lt;br /&gt;
VideoController3 NVIDIA GeForce RTX 4090&lt;br /&gt;
VideoController4 NVIDIA GeForce RTX 4090&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this case, we only need &#039;VideoController3&#039; and &#039;VideoController4&#039; to be visible to CUDA-enabled applications like TUFLOW. We can get more details on those by running the following command (from either PowerShell, Command Prompt, or a Linux shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;batch&amp;quot;&amp;gt;&lt;br /&gt;
nvidia-smi --query-gpu=name,uuid --format=csv,noheader,nounits&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And the output looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-5060f556-4eb4-7155-4020-abadcb2fd735&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The tool won&#039;t list the AMD card, but up to and including version 2025.1 of TUFLOW, that card may still interfere with your GPU selection order. Also, from this readout, it is not at all clear which card is which and the order here may not match the order you expect from tools like Task Manager (&#039;GPU 0&#039;, &#039;GPU 1&#039;, etc.).&lt;br /&gt;
&lt;br /&gt;
This is what we will solve by setting the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. There are two possible formats. It can either have a value like &amp;lt;code&amp;gt;0,1&amp;lt;/code&amp;gt; or a more explicit value like &amp;lt;code&amp;gt;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;lt;/code&amp;gt; using the identifiers from the &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; output.&lt;br /&gt;
&lt;br /&gt;
The short format just affects the default order. If you find using &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; with TUFLOW selects the GPU you&#039;d consider #1 and vice versa, you could set &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;1,0&amp;lt;/code&amp;gt;, to reverse the default order. However, this order may change as you install new hardware or reinstall existing hardware, so the recommendation is to use the explicit values in the long format.&lt;br /&gt;
&lt;br /&gt;
You can either set the value of the environment variable at the start of scripts you use to run your models, like batch files, PowerShell scripts, or Linux shell scripts, or you can set it globally so that it automatically applies to all running applications.&lt;br /&gt;
&lt;br /&gt;
In a batch file or from the Command Prompt use this (note there are no quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
SET CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a PowerShell script or from the PowerShell prompt use this (note the quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
$env:CUDA_VISIBLE_DEVICES = &amp;quot;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
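In a Linux shell script or at the shell prompt, the equivalent is the following (a sketch assuming a POSIX-compatible shell such as bash; again, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;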
&lt;br /&gt;
If you prefer to set the value globally, you can either set it for a single user account by finding &amp;quot;Edit environment variables &#039;&#039;for your account&#039;&#039;&amp;quot; in the Windows Start menu and entering the values without quotes, or you can set it for all users on the machine by finding &amp;quot;Edit the &#039;&#039;system&#039;&#039; environment variables&amp;quot; in the Windows Start menu and doing the same in the &#039;System Variables&#039; section. Note that you need to be an administrator to be able to do the latter. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning:&#039;&#039;&#039; setting the value globally affects all CUDA-capable applications, not just TUFLOW. Please ensure that no other applications need the CUDA capabilities of the GPUs you&#039;re leaving out, or use a local value in your scripts or batch files instead.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43868</id>
		<title>Configure CUDA device selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43868"/>
		<updated>2025-06-18T02:22:08Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: update deprecated source tags&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The computer you use to run TUFLOW may have multiple GPUs. These can be multiple NVIDIA GPUs with CUDA capabilities, which you may want to use to accelerate running your models. Or they can be additional GPUs for other purposes, like rendering the interactive desktop for users of the computer, or other computational tasks. A common occurrence on modern motherboards is the availability of an integrated GPU.&lt;br /&gt;
&lt;br /&gt;
Generally, we recommend using a GPU you don&#039;t use for TUFLOW modelling as your primary GPU for rendering the desktop, if needed. If you don&#039;t have an additional GPU available, you can use one of the NVIDIA GPUs, but we would then recommend using the most capable card as the primary card for running your models, and the secondary card as the primary GPU for rendering the desktop.&lt;br /&gt;
&lt;br /&gt;
TUFLOW allows you to select a specific GPU for its compute, using command line options like &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; for the first GPU, &amp;lt;code&amp;gt;-pu1&amp;lt;/code&amp;gt; for the second, etc. (see [[HPC Running and Converting Models]])  &lt;br /&gt;
&lt;br /&gt;
However, you may find that what TUFLOW considers the first or second GPU does not match your expectations based on what you see in tools like the Windows Device Manager, Task Manager, or the output from &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; on the command line. Another common problem is that the GPUs you want to use are not actually #0 and #1 and you may have trouble selecting the cards you prefer, in the order you prefer them in.&lt;br /&gt;
&lt;br /&gt;
To this end, you can set an environment variable called &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;, which limits the devices visible to CUDA-capable applications like TUFLOW and specifies the order in which they appear. The rest of this article explains how to go about that. As an example, we&#039;ll use a Windows computer that has two NVIDIA GPUs and an on-board AMD GPU. In Windows, you can list all the available GPUs using a PowerShell command like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_VideoController | Select-Object DeviceID, Name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(you can run PowerShell commands by opening PowerShell from the Windows Start Menu and pasting a command there)&lt;br /&gt;
&lt;br /&gt;
The output for the example computer looks like this (note that even virtual adapters like a Remote Desktop adapter will show):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DeviceID         Name&lt;br /&gt;
--------         ----&lt;br /&gt;
VideoController1 AMD Radeon(TM) Graphics&lt;br /&gt;
VideoController2 Microsoft Remote Display Adapter&lt;br /&gt;
VideoController3 NVIDIA GeForce RTX 4090&lt;br /&gt;
VideoController4 NVIDIA GeForce RTX 4090&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this case, we only need &#039;VideoController3&#039; and &#039;VideoController4&#039; to be visible to CUDA-enabled applications like TUFLOW. We can get more details on those by running the following command (from either PowerShell, Command Prompt, or a Linux shell):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;batch&amp;quot;&amp;gt;&lt;br /&gt;
nvidia-smi --query-gpu=name,uuid --format=csv,noheader,nounits&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And the output looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-5060f556-4eb4-7155-4020-abadcb2fd735&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The tool won&#039;t list the AMD card, but up to and including version 2025.1 of TUFLOW, that card may still interfere with your GPU selection order. Also, from this readout, it is not at all clear which card is which and the order here may not match the order you expect from tools like Task Manager (&#039;GPU 0&#039;, &#039;GPU 1&#039;, etc.).&lt;br /&gt;
&lt;br /&gt;
This is what we will solve by setting the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. There are two possible formats. It can either have a value like &amp;lt;code&amp;gt;0,1&amp;lt;/code&amp;gt; or a more explicit value like &amp;lt;code&amp;gt;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;lt;/code&amp;gt; using the identifiers from the &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; output.&lt;br /&gt;
&lt;br /&gt;
The short format just affects the default order. If you find using &amp;lt;code&amp;gt;-pu0&amp;lt;/code&amp;gt; with TUFLOW selects the GPU you&#039;d consider #1 and vice versa, you could set &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;1,0&amp;lt;/code&amp;gt;, to reverse the default order. However, this order may change as you install new hardware or reinstall existing hardware, so the recommendation is to use the explicit values in the long format.&lt;br /&gt;
&lt;br /&gt;
You can either set the value of the environment variable at the start of scripts you use to run your models, like batch files, PowerShell scripts, or Linux shell scripts, or you can set it globally so that it automatically applies to all running applications.&lt;br /&gt;
&lt;br /&gt;
In a batch file or from the Command Prompt use this (note there are no quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
SET CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In a PowerShell script or from the PowerShell prompt use this (note the quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
$env:CUDA_VISIBLE_DEVICES = &amp;quot;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
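In a Linux shell script or at the shell prompt, the equivalent is the following (a sketch assuming a POSIX-compatible shell such as bash; again, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;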
&lt;br /&gt;
If you prefer to set the value globally, you can either set it for a single user account by finding &amp;quot;Edit environment variables &#039;&#039;for your account&#039;&#039;&amp;quot; in the Windows Start menu and entering the values without quotes, or you can set it for all users on the machine by finding &amp;quot;Edit the &#039;&#039;system&#039;&#039; environment variables&amp;quot; in the Windows Start menu and doing the same in the &#039;System Variables&#039; section. Note that you need to be an administrator to be able to do the latter. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning:&#039;&#039;&#039; setting the value globally affects all CUDA-capable applications, not just TUFLOW. Please ensure that no other applications need the CUDA capabilities of the GPUs you&#039;re leaving out, or use a local value in your scripts or batch files instead.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=HPC_Running_and_Converting_Models&amp;diff=43867</id>
		<title>HPC Running and Converting Models</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=HPC_Running_and_Converting_Models&amp;diff=43867"/>
		<updated>2025-06-18T02:17:36Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
This page contains information about converting an existing TUFLOW Classic or GPU (pre 2017 HPC release) model to a format that can be run using the TUFLOW HPC engine. This page provides a quick summary for experienced TUFLOW users to use as a reference point for updating their models. It is recommended that less experienced TUFLOW users refer to our &amp;lt;u&amp;gt;[[Tutorial_Introduction |TUFLOW Tutorial Modules]]&amp;lt;/u&amp;gt; for greater support and guidance on creating an HPC model.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To convert an existing TUFLOW Classic or GPU model to run on HPC, an update to the TUFLOW Control File (TCF) is needed. Some TUFLOW Classic features are not currently supported in HPC and may prevent the HPC model from running successfully. To find out more about unsupported features in HPC, refer to the &amp;lt;u&amp;gt;[https://docs.tuflow.com/classic-hpc/manual/latest/ TUFLOW Manual]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Converting TUFLOW Classic to HPC (TCF Updates) =&lt;br /&gt;
To run an existing TUFLOW Classic simulation with the new HPC engine, the following lines of text need to be added to the TUFLOW Control File (TCF).&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Solution Scheme &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; HPC &amp;lt;/tt&amp;gt; &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !This command specifies that you want to run TUFLOW using the HPC solution scheme or engine.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following command is also required to run the model using GPU hardware:&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Hardware &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; GPU &amp;lt;/tt&amp;gt;        &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !CPU is default. The hardware command instructs TUFLOW HPC to run using GPU hardware. This is typically orders of magnitude faster than on CPU.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
These two commands are all that&#039;s needed to convert the TUFLOW Classic model to HPC and run it using GPU hardware. There are, however, more commands provided below that give the modeller greater control over the hardware that HPC uses.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Running HPC on Multiple CPU Threads =&lt;br /&gt;
As mentioned in the &amp;lt;u&amp;gt;[[HPC_Introduction | HPC Introduction]]&amp;lt;/u&amp;gt; page, HPC can be parallelised to run across multiple CPU processors when run on CPU (i.e. not GPU). The following command allows the modeller to dictate the number of core processors to run TUFLOW HPC across.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;CPU Threads &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; 8 &amp;lt;/tt&amp;gt;  &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !Default is 4. This instructs TUFLOW to search for and run the model across eight core processors. &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If the number of processors or TUFLOW licences found by TUFLOW is less than the specified value, TUFLOW will utilise the maximum number of core processors available within the licence and hardware limitations.&amp;lt;br&amp;gt;&lt;br /&gt;
Alternatively, the number of CPU threads can be specified in the batch file / command line by using the -nt&amp;lt;number of threads&amp;gt; argument. If both control file and command line are used to specify number of threads, the command line option will prevail.&lt;br /&gt;
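For example, the following command line (the executable and model file names are illustrative) runs a simulation on 8 CPU threads:&lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -nt8 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;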
&lt;br /&gt;
= Running HPC on Multiple GPU Devices =&lt;br /&gt;
Much like HPC can be run across multiple CPU processors, HPC can be run across multiple GPU cards. Models can also be instructed to run on a specific GPU card.&amp;lt;br&amp;gt;&lt;br /&gt;
If a machine has only a single GPU card, its GPU Device ID will be 0, which is the default. If a second GPU card were added, its Device ID would be 1, and so on. The GPU IDs can be checked by reviewing the machine&#039;s &#039;&#039;Device Manager&#039;&#039;. (Note that the order may not match your expectations; more on that in [[Configure CUDA device selection]].)&amp;lt;br&amp;gt;&lt;br /&gt;
The most common method is to specify the GPU card ID in the batch file / command line by using the -pu&amp;lt;id&amp;gt; argument.&amp;lt;br&amp;gt;&lt;br /&gt;
The example below will run a single simulation across the first and second GPU cards (IDs 0 and 1).&lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -pu0 -pu1 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
The example below will run a single simulation on the fourth GPU card (ID 3).&lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -pu3 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the following TCF command can be used to set the number of GPU devices and which devices to use.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Device IDs &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; 0, 1 &amp;lt;/tt&amp;gt;	&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !This command instructs TUFLOW to run the model on GPU Device 0 and GPU Device 1.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If both control file and command line are used to specify devices, the command line option will prevail.&lt;br /&gt;
&lt;br /&gt;
= Converting TUFLOW GPU to HPC (TCF Updates) =&lt;br /&gt;
When converting a TUFLOW GPU model across to HPC, first confirm that all features in the GPU model are available in HPC by referring to the &amp;lt;u&amp;gt;[https://tuflow.com/Download/TUFLOW/Releases/2017-09/TUFLOW%20Release%20Notes.2017-09.pdf TUFLOW 2017-09 Release Notes]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
Delete the following command from the *.tcf file and insert the commands specified above for converting a TUFLOW Classic model to HPC.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;s&amp;gt;&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Solver &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; ON &amp;lt;/tt&amp;gt;&amp;lt;/s&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= HPC Scenarios and Updating the Batch File =&lt;br /&gt;
Modellers may want to change the hardware that HPC is run on throughout the course of a project. For example, if your company owns more CPU than GPU licences, it may be beneficial to run the model using CPU hardware during the initial model build phase, so your colleagues have access to the higher speed GPU licences for production runs on other projects running in parallel. If this is the case, it may be easier to set up a Scenario Logic statement in the TUFLOW Control File (TCF) that allows the modeller to change the hardware being used with a simple switch in the batch file used to run the model.&lt;br /&gt;
&lt;br /&gt;
To set up a scenario for varying hardware options, the following commands can be used in the TCF file:&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Solution Scheme &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; HPC &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Hardware &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; &amp;lt;&amp;lt;~s1~&amp;gt;&amp;gt; &amp;lt;/tt&amp;gt;    &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !The scenario will either be &amp;quot;CPU&amp;quot; or &amp;quot;GPU&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This basic scenario logic can be configured further within the TCF as shown below:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;If Scenario &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; CPU &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:: &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;CPU Threads &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; 8 &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Else If Scenario &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; GPU &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:: &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Device IDs&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; 0, 1 &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;End If &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If using the above Scenario Logic, the modeller must include a scenario call in the batch file.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-s1 &amp;lt;Hardware Type&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If you are unfamiliar with using Scenario Logic, please refer to &amp;lt;u&amp;gt;[[Tutorial_M08 |Tutorial Module 08]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other useful batch file switches include:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-nt &amp;lt;number_of_threads&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;: This switch is used to set the number of CPU threads used for CPU mode simulations.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-pu &amp;lt;GPU Device IDs&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;: This switch is used to set the number of GPU devices and which devices to use.&amp;lt;br&amp;gt;&lt;br /&gt;
Examples of how this would be implemented in a simple batch file for CPU and GPU are shown below.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;CPU&amp;lt;/u&amp;gt;&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;TUFLOW_iSP_w64.exe -s1 &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;CPU&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;-nt8&amp;lt;/font&amp;gt; FMA_T2_~s1~_001.tcf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This example will run TUFLOW HPC on CPU using 8 CPU threads.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;GPU&amp;lt;/u&amp;gt;&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;TUFLOW_iSP_w64.exe -s1 &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;GPU&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;-pu0 -pu1&amp;lt;/font&amp;gt; FMA_T2_~s1~_001.tcf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This example will run TUFLOW HPC on GPU using 2 GPU cards.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[ HPC_Modelling_Guidance | Back to HPC Modelling Guidance]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=HPC_Running_and_Converting_Models&amp;diff=43866</id>
		<title>HPC Running and Converting Models</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=HPC_Running_and_Converting_Models&amp;diff=43866"/>
		<updated>2025-06-18T02:17:18Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: add reference back to Configure CUDA device selection&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
This page describes how to convert an existing TUFLOW Classic or GPU (pre 2017 HPC release) model to a format that can be run using the TUFLOW HPC engine. It provides a quick summary for experienced TUFLOW users to use as a reference point for updating their models. It is recommended that less experienced TUFLOW users refer to our &amp;lt;u&amp;gt;[[Tutorial_Introduction |TUFLOW Tutorial Modules]]&amp;lt;/u&amp;gt; for greater support and guidance on creating an HPC model.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To convert an existing TUFLOW Classic or GPU model to run on HPC, an update to the TUFLOW Control File (TCF) is needed. Some features from TUFLOW Classic are not currently supported in HPC and may prevent the HPC model from running successfully. To find out more about unsupported features in HPC, refer to the &amp;lt;u&amp;gt;[https://docs.tuflow.com/classic-hpc/manual/latest/ TUFLOW Manual]&amp;lt;/u&amp;gt;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Converting TUFLOW Classic to HPC (TCF Updates) =&lt;br /&gt;
To run an existing TUFLOW Classic simulation with the new HPC engine, the following lines of text need to be added to the TUFLOW Control File (TCF).&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Solution Scheme &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; HPC &amp;lt;/tt&amp;gt; &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !This command specifies that you want to run TUFLOW using the HPC solution scheme or engine.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following command is also required to run the model using GPU hardware:&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Hardware &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; GPU &amp;lt;/tt&amp;gt;        &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !CPU is default. The hardware command instructs TUFLOW HPC to run using GPU hardware. This is typically orders of magnitude faster than on CPU.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
These two commands are all that&#039;s needed to convert the TUFLOW Classic model to HPC and run it using GPU hardware. More commands that give the modeller greater control over the hardware that HPC uses are provided below.&amp;lt;br&amp;gt;&lt;br /&gt;
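As a minimal sketch, a converted TCF would then contain both commands together:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Solution Scheme == HPC&lt;br /&gt;
Hardware == GPU&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;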
&lt;br /&gt;
= Running HPC on Multiple CPU Threads =&lt;br /&gt;
As mentioned on the &amp;lt;u&amp;gt;[[HPC_Introduction | HPC Introduction]]&amp;lt;/u&amp;gt; page, HPC can be parallelised across multiple CPU cores when run on CPU hardware (i.e. not GPU). The following command allows the modeller to specify the number of CPU threads that TUFLOW HPC runs across.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;CPU Threads &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; 8 &amp;lt;/tt&amp;gt;  &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !Default is 4. This instructs TUFLOW to run the model across eight CPU threads. &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If the number of processors or TUFLOW licences found by TUFLOW is less than the specified value, TUFLOW will utilise the maximum number of CPU cores available within the licence and hardware limitations.&amp;lt;br&amp;gt;&lt;br /&gt;
Alternatively, the number of CPU threads can be specified in the batch file / command line using the -nt&amp;lt;number of threads&amp;gt; argument. If both the control file and the command line specify the number of threads, the command line option prevails.&lt;br /&gt;
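For example, the batch file line below (using the same example model file name as the GPU examples on this page) runs a simulation on CPU hardware using 8 threads:&lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -nt8 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;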
&lt;br /&gt;
= Running HPC on Multiple GPU Devices =&lt;br /&gt;
Much like HPC can be run across multiple CPU processors, HPC can be run across multiple GPU cards. Models can also be instructed to run on a specific GPU card.&amp;lt;br&amp;gt;&lt;br /&gt;
If a machine has only a single GPU card, its GPU Device ID will be 0 (the default). If a second GPU card is added, its Device ID will be 1, and so on. The GPU IDs can be checked by reviewing the machine&#039;s &#039;&#039;Device Manager&#039;&#039;. (Note that the order may not match your expectations; more on that in [[Configure CUDA device selection]].) &amp;lt;br&amp;gt;&lt;br /&gt;
The most common method is to specify the GPU card ID in the batch file / command line by using the -pu&amp;lt;id&amp;gt; argument.&amp;lt;br&amp;gt;&lt;br /&gt;
The example below will run a single simulation across the first and second GPU cards (IDs 0 and 1). &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -pu0 -pu1 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
The example below will run a single simulation on the fourth GPU card (ID 3). &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -pu3 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the following TCF command can be used to set the number of GPU devices and which devices to use.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Device IDs &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; 0, 1 &amp;lt;/tt&amp;gt;	&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !This command instructs TUFLOW to run the model on GPU Device 0 and GPU Device 1.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If both control file and command line are used to specify devices, the command line option will prevail.&lt;br /&gt;
&lt;br /&gt;
= Converting TUFLOW GPU to HPC (TCF Updates) =&lt;br /&gt;
When converting a TUFLOW GPU model across to HPC, first confirm that all features in the GPU model are available in HPC by referring to the &amp;lt;u&amp;gt;[https://tuflow.com/Download/TUFLOW/Releases/2017-09/TUFLOW%20Release%20Notes.2017-09.pdf TUFLOW 2017-09 Release Notes]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
Delete the following command from the *.tcf file and insert the commands specified above for converting a TUFLOW Classic model to HPC.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;s&amp;gt;&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Solver &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; ON &amp;lt;/tt&amp;gt;&amp;lt;/s&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= HPC Scenarios and Updating the Batch File =&lt;br /&gt;
Modellers may want to change the hardware that HPC runs on throughout the course of a project. For example, if your company owns more CPU than GPU licences, it may be beneficial to run the model using CPU hardware during the initial model build phase, so your colleagues have access to the higher-speed GPU licences for production runs on other projects running in parallel. If this is the case, it may be easier to set up a Scenario Logic statement in the TUFLOW Control File (TCF) that allows the modeller to change the hardware being used with a simple switch in the batch file used to run the model.&lt;br /&gt;
&lt;br /&gt;
To setup a scenario for varying hardware options, the following commands can be used in the TCF file:&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Solution Scheme &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; HPC &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Hardware &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; &amp;lt;&amp;lt;~s1~&amp;gt;&amp;gt; &amp;lt;/tt&amp;gt;    &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !The scenario will either be &amp;quot;CPU&amp;quot; or &amp;quot;GPU&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This basic scenario logic can be configured further within the TCF as shown below:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;If Scenario &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; CPU &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:: &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;CPU Threads &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; 8 &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Else If Scenario &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; GPU &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:: &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Device IDs&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; 0, 1 &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;End If &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If using the above Scenario Logic, the modeller must include a scenario call in the batch file.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-s1 &amp;lt;Hardware Type&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If you are unfamiliar with using Scenario Logic, please refer to &amp;lt;u&amp;gt;[[Tutorial_M08 |Tutorial Module 08]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other useful batch file switches include:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-nt &amp;lt;number_of_threads&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;: This switch is used to set the number of CPU threads used for CPU mode simulations.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-pu &amp;lt;GPU Device IDs&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;: This switch is used to set the number of GPU devices and which devices to use.&amp;lt;br&amp;gt;&lt;br /&gt;
Examples of how these switches would be used in a simple batch file for CPU and GPU are shown below.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;CPU&amp;lt;/u&amp;gt;&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;TUFLOW_iSP_w64.exe -s1 &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;CPU&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;-nt8&amp;lt;/font&amp;gt; FMA_T2_~s1~_001.tcf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This example will run TUFLOW HPC on CPU using 8 CPU threads.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;GPU&amp;lt;/u&amp;gt;&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;TUFLOW_iSP_w64.exe -s1 &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;GPU&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;-pu0 -pu1&amp;lt;/font&amp;gt; FMA_T2_~s1~_001.tcf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This example will run TUFLOW HPC on GPU using two GPU cards (IDs 0 and 1).&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[ HPC_Modelling_Guidance | Back to HPC Modelling Guidance]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43865</id>
		<title>Configure CUDA device selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Configure_CUDA_device_selection&amp;diff=43865"/>
		<updated>2025-06-18T02:15:14Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: new article on setting CUDA_VISIBLE_DEVICES referenced from the FAQ for Hardware Selection Advice&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The computer you use to run TUFLOW may have multiple GPUs. These can be multiple NVIDIA GPUs with CUDA-capabilities, which you may want to use to accelerate running your models. Or they can be additional GPUs for other purposes like rendering the interactive desktop for users of the computer, or other computational tasks. A common occurrence on modern motherboards is the availability of an integrated GPU.&lt;br /&gt;
&lt;br /&gt;
Generally, we recommend using a GPU you don&#039;t use for TUFLOW modelling as your primary GPU for rendering the desktop, if needed. If you don&#039;t have an additional GPU available, you can use one of the NVIDIA GPUs, but we would then recommend using the most capable card as the primary card for running your models, and the secondary card as the primary GPU for rendering the desktop.&lt;br /&gt;
&lt;br /&gt;
TUFLOW allows you to select a specific GPU for its compute, using command line options like &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;-pu0&amp;lt;/source&amp;gt; for the first GPU, &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;-pu1&amp;lt;/source&amp;gt; for the second, etc. (see [[HPC Running and Converting Models]])  &lt;br /&gt;
&lt;br /&gt;
However, you may find that what TUFLOW considers the first or second GPU does not match your expectations based on what you see in tools like the Windows Device Manager, Task Manager, or the output from &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;nvidia-smi&amp;lt;/source&amp;gt; on the command line. Another common problem is that the GPUs you want to use are not actually #0 and #1, making it hard to select the cards you prefer, in the order you prefer.&lt;br /&gt;
&lt;br /&gt;
To this end, you can set an environment variable called &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/source&amp;gt;, which limits the devices that will be visible to CUDA-capable applications like TUFLOW, as well as specifying the order they will appear in. The rest of this article will explain how to go about that. As an example, we&#039;ll use a Windows computer that has 2 NVIDIA GPUs, and an on-board AMD GPU. In Windows, you can list all the available GPUs using a Powershell command like this:&lt;br /&gt;
&amp;lt;source language=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_VideoController | Select-Object DeviceID, Name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
(you can run PowerShell commands by opening PowerShell from the Windows Start Menu and pasting a command there)&lt;br /&gt;
&lt;br /&gt;
The output for the example computer looks like this (note that even virtual adapters like a Remote Desktop adapter will show):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DeviceID         Name&lt;br /&gt;
--------         ----&lt;br /&gt;
VideoController1 AMD Radeon(TM) Graphics&lt;br /&gt;
VideoController2 Microsoft Remote Display Adapter&lt;br /&gt;
VideoController3 NVIDIA GeForce RTX 4090&lt;br /&gt;
VideoController4 NVIDIA GeForce RTX 4090&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this case, we only need &#039;VideoController3&#039; and &#039;VideoController4&#039; to be visible to CUDA-enabled applications like TUFLOW. We can get more details on those by running the following command (from either PowerShell, Command Prompt, or a Linux shell):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
nvidia-smi --query-gpu=name,uuid --format=csv,noheader,nounits&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And the output looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-5060f556-4eb4-7155-4020-abadcb2fd735&lt;br /&gt;
NVIDIA GeForce RTX 4090, GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The tool won&#039;t list the AMD card, but up to and including version 2025.1 of TUFLOW, that card may still interfere with your GPU selection order. Also, from this readout, it is not at all clear which card is which and the order here may not match the order you expect from tools like Task Manager (&#039;GPU 0&#039;, &#039;GPU 1&#039;, etc.).&lt;br /&gt;
&lt;br /&gt;
This is what we will solve by setting the environment variable &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/source&amp;gt;. There are two possible formats. It can either have a value like &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;0,1&amp;lt;/source&amp;gt; or a more explicit value like &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;lt;/source&amp;gt; using the identifiers from the &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;nvidia-smi&amp;lt;/source&amp;gt; output. &lt;br /&gt;
&lt;br /&gt;
The short format just affects the default order. If you find using &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;-pu0&amp;lt;/source&amp;gt; with TUFLOW selects the GPU you&#039;d consider #1 and vice versa, you could set &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/source&amp;gt; to &amp;lt;source enclose=&amp;quot;none&amp;quot;&amp;gt;1,0&amp;lt;/source&amp;gt;, to reverse the default order. However, this order may change as you install new hardware or reinstall existing hardware, so the recommendation is to use the explicit values in the long format.&lt;br /&gt;
&lt;br /&gt;
You can either set the value of the environment variable at the start of scripts you use to run your models, like batch files, PowerShell scripts, or Linux shell scripts, or you can set it globally so that it automatically applies to all running applications.&lt;br /&gt;
&lt;br /&gt;
In a batch file or from the Command Prompt use this (note there are no quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;source language=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
SET CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In a PowerShell script or from the PowerShell prompt use this (note the quotes around the values, replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;source language=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
$env:CUDA_VISIBLE_DEVICES = &amp;quot;GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
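In a Linux shell script or at a shell prompt, the equivalent command is shown below (a sketch assuming a bash-compatible shell; note there are no quotes around the values, and again replace the values with the identifiers for your GPUs):&lt;br /&gt;
&amp;lt;source language=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=GPU-5060f556-4eb4-7155-4020-abadcb2fd735,GPU-f3825978-37f8-b933-5327-583196d560cd&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;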
&lt;br /&gt;
If you prefer to set the value globally, you can either set it for a single user account by finding &amp;quot;Edit environment variables &#039;&#039;for your account&#039;&#039;&amp;quot; in the Windows Start menu and entering the values without quotes, or you can set it for all users on the machine by finding &amp;quot;Edit the &#039;&#039;system&#039;&#039; environment variables&amp;quot; in the Windows Start menu and doing the same in the &#039;System Variables&#039; section. Note that you need to be an administrator to be able to do the latter. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Warning:&#039;&#039;&#039; setting the value globally affects all CUDA-capable applications, not just TUFLOW. Please ensure that no other applications need the CUDA-capabilities of the GPUs you&#039;re leaving out or use a local value in your scripts or batch files instead.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Hardware_Selection_Advice&amp;diff=43864</id>
		<title>Hardware Selection Advice</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Hardware_Selection_Advice&amp;diff=43864"/>
		<updated>2025-06-18T01:22:51Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: add FAQ section on Pavlina&amp;#039;s request, add link to CUDA device selection article&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides general hardware advice for running TUFLOW models on GPU or CPU. &amp;lt;br&amp;gt;&lt;br /&gt;
[[File: Hardware_Configuration_001.jpg ||450px|right]]&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We often get asked about the optimum computing setup to run TUFLOW models. While every model is different and will interact differently with your hardware, there is some general advice that we can offer. The sections below provide more detailed advice on GPU and CPU, but generally:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The amount of RAM in the computer will be the limiter for the size of model you can run. This applies to CPU RAM (TUFLOW Classic, TUFLOW FV and TUFLOW HPC with Hardware == CPU) and also GPU RAM (TUFLOW HPC and TUFLOW FV with Hardware == GPU).&lt;br /&gt;
* The CPU&#039;s processing speed, architecture, cache size and number of processors all play a role.&lt;br /&gt;
* For GPU simulations, the number of CUDA cores, the core speed, GPU card architecture, memory speed and interfacing with the motherboard PCI lanes and CPU are all important. &lt;br /&gt;
* The system must be well cooled to avoid throttling (meaning reduction of clock speeds to reduce heating).&amp;lt;br&amp;gt;&lt;br /&gt;
For information on minimum and recommended system requirements, see &amp;lt;u&amp;gt;[[System_Requirements | System Requirements]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To discover a computer&#039;s NVIDIA GPU hardware, see &amp;lt;u&amp;gt;[[Console_Window_GPU_Usage | NVIDIA GPU Hardware and Usage]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=The TUFLOW Software Suite=&lt;br /&gt;
The TUFLOW Software suite has a range of solvers. Each interacts differently with your hardware, so pairing the correct solver (or the range of solvers you want to run) with your hardware is an important consideration. A brief summary of each solver&#039;s needs is provided below:&amp;lt;br&amp;gt;&lt;br /&gt;
*TUFLOW Classic: A single model run can only use the CPU and cannot be run across multiple CPU cores or GPU hardware. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, architecture and cache size.&lt;br /&gt;
* TUFLOW HPC - Run on CPU Hardware: A single model run uses the CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, the number of cores available to be run in parallel, architecture and cache size.&lt;br /&gt;
*TUFLOW HPC - Run on GPU Hardware: A single model run uses the GPU(s) for computation. In general terms: The maximum model size is dependent on the available GPU and CPU RAM and the runtime is driven by the CUDA core speed, the number of CUDA cores available and the GPU architecture. GPU performance is complex and is not easily inferred from GPU clock speed and number of cores; it is also very dependent on the ‘generation’ or architecture of the GPU. As TUFLOW HPC requires some data exchange between GPU and CPU, the motherboard bus speeds and CPU speeds also play a role, but typically a much lesser role compared to the GPU CUDA compute.&lt;br /&gt;
*TUFLOW FV - Run on CPU Hardware: A single model run uses CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is determined by the CPU speed, the number of cores available to be run in parallel, chip architecture and cache size.&lt;br /&gt;
*TUFLOW FV - Run on GPU Hardware: A single model run uses the GPU(s) for computation. In general terms: The maximum model size is dependent on the available GPU and CPU RAM and the runtime is driven by the CUDA core speed, the number of CUDA cores available and the GPU architecture. GPU performance is complex and is not easily inferred from GPU clock speed and number of cores; it is also very dependent on the ‘generation’ or architecture of the GPU. As TUFLOW FV requires some data exchange between GPU and CPU, the motherboard bus speeds and CPU speeds also play a role, but typically a much lesser role compared to the GPU CUDA compute.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;u&amp;gt;[[Hardware_Benchmarking_-_Results#CPU_Results | Hardware Benchmarking]]&amp;lt;/u&amp;gt; page shows recently run combinations of GPU, CPU and RAM. These can be compared with the system planned for purchase. The recommendation is to seek advice from an appropriate computer hardware vendor who can advise on the compatibility and optimisation of the setup.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=GPU Advice=&lt;br /&gt;
TUFLOW HPC on GPU Hardware is typically our fastest solver for 1D/2D pipe and floodplain simulations. &lt;br /&gt;
* TUFLOW HPC supports CUDA enabled NVIDIA GPU cards. For a list of supported CUDA enabled graphics cards, please visit the &amp;lt;u&amp;gt;[https://developer.nvidia.com/cuda-gpus NVIDIA website]&amp;lt;/u&amp;gt;.&lt;br /&gt;
*To discover a computer&#039;s NVIDIA GPU hardware, see &amp;lt;u&amp;gt;[[Console_Window_GPU_Usage | NVIDIA GPU Hardware and Usage]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
*TUFLOW HPC on GPU Hardware can be run in either single or double precision. However, for the vast majority of flood applications single precision is sufficient. We typically run our models on single precision. If you are unsure we recommend running with both the single and double precision solvers and comparing your results.&lt;br /&gt;
The precision you require will determine the type of GPU card that is best suited for your compute. For any given generation/architecture of cards, the “gaming” cards such as the GeForce GTX and RTX provide excellent single precision performance – typically comparable to that of the “scientific” cards such as the Tesla series. If double precision is required then the scientific cards are substantially faster, but they are also significantly more expensive. The Quadro series cards sit in between for both double precision performance and cost. The specifications of a card should provide a breakdown of its single and double precision throughput in FLOPS. Single precision compute is typically sufficient for TUFLOW HPC modelling.&lt;br /&gt;
&lt;br /&gt;
===GPU RAM===&lt;br /&gt;
RAM is the computer memory required to store all of the model data used during the computation. A computer has CPU RAM which is located on the motherboard and accessed from the CPU, and it has GPU RAM which is located on the GPU device and accessed from the GPU. The two memory storage systems are physically separate. &lt;br /&gt;
The amount of GPU RAM is one of two factors that will determine the size of the model that can be run (the other being CPU RAM). As a rule of thumb, approximately 5 million cells can be run per gigabyte (GB) of GPU RAM, depending on the model features, e.g. a model with infiltration requires more memory due to the extra variables needed for the infiltration calculation. By this rule, a card with 11 GB of GPU RAM would accommodate a model of roughly 50 million cells. &lt;br /&gt;
&lt;br /&gt;
===CPU RAM===&lt;br /&gt;
TUFLOW HPC on GPU hardware still uses the CPU to compute and store data (in CPU RAM) during model initialisation and for all 1D calculations. While we are working on improving our CPU RAM usage, currently we tend to find that CPU RAM is often the limiter on the size of the model domain you can run, particularly if running over multiple GPU cards. During initialisation and simulation a model will typically require 4-6 times the amount of CPU RAM relative to GPU RAM. As an example, for a model that utilises 11 GB of GPU RAM (typical memory for a high-end gaming card, corresponding to about a 50 million cell model), the CPU RAM required during initialisation will typically be in the range of 44 GB to 66 GB. A model that fully utilises two 11 GB GPUs (i.e. a 100 million cell model) may require as much as 128 GB of CPU RAM during initialisation. &lt;br /&gt;
&lt;br /&gt;
===CUDA Cores, GPU Clock speed, and FLOPs ===&lt;br /&gt;
One way of reporting a GPU card&#039;s throughput is in Floating Point Operations per second (FLOPS). The more FLOPS, the more calculations that can be crunched per second and the faster the model should run. For any given generation of GPU, FLOPS are approximately proportional to the number of CUDA cores times the GPU clock speed. However, there have been significant improvements in GPU architecture since the inception of CUDA, and this has contributed to increases in overall FLOPS performance beyond just the increases in cores and clock speed that have occurred over this time. &lt;br /&gt;
&lt;br /&gt;
===Multiple GPUs===&lt;br /&gt;
TUFLOW can use multiple GPU cards on a machine to run a single model (TUFLOW FV can currently use a single GPU only). This is useful for models that are too large for a single GPU, or for running a model as quickly as possible. In general terms the run time benefit of using multiple cards increases with model size. &lt;br /&gt;
*TUFLOW HPC-GPU does not support SLI for inter-GPU communications.&lt;br /&gt;
*It does (as of build 2020-01-AA) auto-detect and utilise peer-to-peer access over NVLink or the PCI bus on the motherboard. Note that not all GPUs support peer-to-peer access. &lt;br /&gt;
**PCI bus - this method requires cards that support TCC driver mode, and all cards must be in TCC driver mode. As TUFLOW primarily relies on the GPU&#039;s CUDA capabilities, the impact of using a higher or lower PCI slot is minimal.&lt;br /&gt;
**NVLink - high-end compute cards can have up to 8 cards talking to each other through a high-spec NVLink, but many of the less expensive cards are limited to only two connected together over a dual socket NVLink.&lt;br /&gt;
*Models may still be run across multiple GPUs even if an NVLink is not present and the GPUs do not support peer-to-peer access. In this case HPC reverts to exchanging the domain boundary data between the GPUs via the CPU. The memory bandwidth between the GPU and the main system is not a critical bottleneck for TUFLOW.&lt;br /&gt;
*When using multiple GPUs it is best to use cards of similar memory and performance. While it is possible (as of build 2020-01-AA) to re-balance a model over multiple GPUs, we do not recommend using cards with vastly disparate performance.&lt;br /&gt;
*Sufficient cooling and power supply should be considered if multiple cards are used. When installed in adjacent PCI slots, the preference is to use rear-vented cards rather than side-vented, to avoid blowing hot air onto the neighbouring cards (which could lead to overheating).&lt;br /&gt;
&lt;br /&gt;
===GPU Performance Comparison===&lt;br /&gt;
Extensive GPU hardware speed comparison testing has been completed using TUFLOW&#039;s standardised hardware benchmarking dataset. Details for the benchmarking are available via the &amp;lt;u&amp;gt;[[Hardware_Benchmarking_(2018-03-AA)| Hardware Benchmarking]]&amp;lt;/u&amp;gt; page of the Wiki. Review the GPU benchmarking runtime results table to compare the speed performance of different cards. If your GPU card is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to [mailto:support@tuflow.com support@tuflow.com]. We will add the details to the runtime results table.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
External videocard benchmark websites can be used to compare GPU cards, for example, &amp;lt;u&amp;gt;[https://www.videocardbenchmark.net/high_end_gpus.html PassMark Software - Video Card (GPU) Benchmarks]&amp;lt;/u&amp;gt; is an excellent performance guide.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=CPU Advice=&lt;br /&gt;
In general terms, a CPU with a more recent architecture, a higher clock speed and a larger cache will perform better than a slower chip. This section discusses CPU RAM, RAM speed, processor frequency, multi-core processing and hyper-threading.&lt;br /&gt;
&lt;br /&gt;
===CPU RAM===&lt;br /&gt;
The amount of CPU RAM will determine the size of the model that can be run, or the number of models that can be run at one time. &lt;br /&gt;
Faster RAM will result in quicker runtimes, however this is usually a secondary consideration to chip speed, cache size and architecture.&lt;br /&gt;
&lt;br /&gt;
===CPU Cores===&lt;br /&gt;
*TUFLOW HPC - Run on GPU Hardware: The parallel processing is done on the GPU card(s). However, TUFLOW HPC-GPU still uses the CPU for model initialisation and for 1D calculations. If multiple GPU cards are used, TUFLOW will use an equivalent number of CPU threads for controlling the GPUs and migrating data. For a machine dedicated to HPC-GPU modelling, the number of CPU cores should therefore be higher than the number of installed GPUs.&lt;br /&gt;
*TUFLOW HPC - Run on CPU Hardware: HPC model can also be run on multiple CPU cores. For the comparison of simulation speed, please refer to [[Hardware_Benchmarking_Topic_HPC_on_CPU_vs_GPU | HPC on CPU vs GPU]].&lt;br /&gt;
*TUFLOW Classic: A TUFLOW Classic simulation can only use one CPU core due to the implicit nature of the numerical solution. More CPU cores allow more simulations to be run efficiently at the same time.&lt;br /&gt;
&lt;br /&gt;
===Hyperthreading===&lt;br /&gt;
For a discussion of hyper-threading, see &amp;lt;u&amp;gt;[https://fvwiki.tuflow.com/index.php?title=TUFLOW_FV_Parallel_Computing TUFLOW FV Parallel Computing]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Processor Frequency and RAM Frequency===&lt;br /&gt;
Processor and RAM frequencies directly affect run times: in general, the higher the frequency, the faster the model runs.&lt;br /&gt;
&lt;br /&gt;
===CPU Performance Comparison===&lt;br /&gt;
Extensive CPU hardware speed comparison testing has been completed using TUFLOW&#039;s standardised hardware benchmarking dataset. Details for the benchmarking are available via the &amp;lt;u&amp;gt;[[Hardware_Benchmarking_(2018-03-AA)| Hardware Benchmarking]]&amp;lt;/u&amp;gt; page of the Wiki. Review the CPU benchmarking runtime results table to compare the speed performance of different chips. If your chip is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to [mailto:support@tuflow.com support@tuflow.com]. We will add the details to the runtime results table.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage Advice=&lt;br /&gt;
Solid state drives (SSDs) are preferred for temporary storage as they are faster to write to than traditional spinning hard drives. Large data files can then be transferred to a more permanent location.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Common Questions Answered (FAQ)=&lt;br /&gt;
==How do I reliably select the correct GPU on systems with multiple GPUs?==&lt;br /&gt;
You may encounter issues like these:&lt;br /&gt;
* The order of GPUs in Task Manager does not match the order of GPUs for TUFLOW.&lt;br /&gt;
* The order of GPUs in the &amp;lt;tt&amp;gt;nvidia-smi&amp;lt;/tt&amp;gt; tool does not match the order of GPUs for TUFLOW.&lt;br /&gt;
* Non-NVIDIA GPUs or GPUs you do not want to use interfere with GPU selection.&lt;br /&gt;
You can resolve these issues by reading about &amp;lt;u&amp;gt;[[configure_CUDA_device_selection | specifying the exact CUDA-capable GPUs]]&amp;lt;/u&amp;gt; you want to be available to CUDA-enabled applications like TUFLOW.&lt;br /&gt;
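As a minimal sketch of that approach (the environment variables shown, CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES, are standard CUDA runtime settings; the device indices are example values only):&lt;br /&gt;

```shell
# Make CUDA enumerate GPUs in PCI bus order, so indices match nvidia-smi.
export CUDA_DEVICE_ORDER=PCI_BUS_ID
# Expose only the chosen GPUs to CUDA-enabled applications such as TUFLOW.
# The indices here are example values only.
export CUDA_VISIBLE_DEVICES=0,1
echo "CUDA devices restricted to: $CUDA_VISIBLE_DEVICES"
```

GPU UUIDs (as listed by &amp;lt;tt&amp;gt;nvidia-smi -L&amp;lt;/tt&amp;gt;) can be used in place of indices, which makes the selection independent of enumeration order.&lt;br /&gt;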
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[Main_Page| TUFLOW Main Page]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Hardware_Selection_Advice&amp;diff=41902</id>
		<title>Hardware Selection Advice</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Hardware_Selection_Advice&amp;diff=41902"/>
		<updated>2025-01-06T23:15:22Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: &amp;#039;throttling&amp;#039; is not a common phrase.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides general hardware advice for running TUFLOW models on GPU or CPU. &amp;lt;br&amp;gt;&lt;br /&gt;
[[File: Hardware_Configuration_001.jpg ||450px|right]]&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We often get asked about the optimum computing setup to run TUFLOW models. While every model is different and will interact differently with your hardware there is some general advice that we can offer. In the sections below you will find more detailed advice on GPU and CPU but generally:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The amount of RAM in the computer will be the limiter for the size of model you can run. This applies to CPU RAM (TUFLOW Classic, TUFLOW FV and TUFLOW HPC with Hardware == CPU) and also GPU RAM (TUFLOW HPC and TUFLOW FV with Hardware == GPU).&lt;br /&gt;
* The processing speed of the CPU, the architecture, cache size, speed and number of processors play a role.&lt;br /&gt;
* For GPU simulations, the number of CUDA cores, the core speed, GPU card architecture, memory speed and interfacing with the motherboard PCI lanes and CPU are all important. &lt;br /&gt;
* The system must be well cooled to avoid throttling (meaning reduction of clock speeds to reduce heating).&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=The TUFLOW Software Suite=&lt;br /&gt;
The TUFLOW software suite has a range of solvers. Each interacts differently with your hardware, so pairing the correct solver (or the range of solvers you want to run) with your hardware is an important consideration. A brief summary of each solver&#039;s needs follows:&amp;lt;br&amp;gt;&lt;br /&gt;
* TUFLOW Classic: A single model run can only use the CPU and cannot be run across multiple CPU cores or GPU hardware. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, architecture and cache size.  &lt;br /&gt;
* TUFLOW HPC - Run on CPU Hardware: A single model run uses the CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is driven by the CPU speed, the number of cores available to be run in parallel, architecture and cache size.&lt;br /&gt;
* TUFLOW HPC - Run on GPU Hardware: A single model run uses the GPU(s) for computation. In general terms: The maximum model size is dependent on the available GPU and CPU RAM and the runtime is driven by the CUDA core speed, the number of CUDA cores available and the GPU architecture. GPU performance is complex and is not easily inferred from GPU clock speed and number of cores; it is also very dependent on the ‘generation’ or architecture of the GPU. As TUFLOW HPC requires some data exchange between GPU and CPU, the motherboard bus speeds and CPU speeds also play a role, but typically a much lesser one compared to the GPU CUDA compute.&lt;br /&gt;
* TUFLOW FV - Run on CPU Hardware: A single model run uses CPU and is parallelised to run across multiple cores. In general terms: The maximum model size is dependent on the available CPU RAM and the runtime is determined by the CPU speed, the number of cores available to be run in parallel, chip architecture and cache size.&lt;br /&gt;
* TUFLOW FV - Run on GPU Hardware: A single model run uses the GPU(s) for computation. In general terms: The maximum model size is dependent on the available GPU and CPU RAM and the runtime is driven by the CUDA core speed, the number of CUDA cores available and the GPU architecture. GPU performance is complex and is not easily inferred from GPU clock speed and number of cores; it is also very dependent on the ‘generation’ or architecture of the GPU. As TUFLOW FV requires some data exchange between GPU and CPU, the motherboard bus speeds and CPU speeds also play a role, but typically a much lesser one compared to the GPU CUDA compute.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On our &amp;lt;u&amp;gt;[[Hardware_Benchmarking_-_Results#CPU_Results | Hardware Benchmarking]]&amp;lt;/u&amp;gt; page you can compare recently run combinations of GPU, CPU and RAM with the system you are planning to purchase. If building a computer, we recommend seeking advice from an appropriate computer hardware vendor who can advise on the compatibility and optimisation of your setup.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=GPU Advice=&lt;br /&gt;
TUFLOW HPC on GPU Hardware is typically our fastest solver for 1D/2D pipe and floodplain simulations. &lt;br /&gt;
* TUFLOW HPC supports CUDA enabled NVIDIA GPU cards. For a list of supported CUDA enabled graphics cards please visit the &amp;lt;u&amp;gt;[https://developer.nvidia.com/cuda-gpus NVIDIA website]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* TUFLOW HPC on GPU Hardware can be run in either single or double precision. However, for the vast majority of flood applications single precision is sufficient. We typically run our models on single precision. If you are unsure we recommend running with both the single and double precision solvers and comparing your results. &lt;br /&gt;
The precision solver you require will determine the type of GPU card that is best suited to your compute. For any given generation/architecture of cards, the “gaming” cards such as the GTX GeForce and RTX provide excellent single precision performance – typically comparable to that of the “scientific” cards such as the Tesla series. If double precision is required then the scientific cards are substantially faster, but they are also significantly more expensive. The Quadro series cards sit in between for both double precision performance and cost. The specifications of a card should provide a breakdown of its single and double precision throughput in FLOPS.&lt;br /&gt;
&lt;br /&gt;
===GPU RAM===&lt;br /&gt;
RAM is the computer memory required to store all of the model data used during the computation. A computer has CPU RAM which is located on the motherboard and accessed from the CPU, and it has GPU RAM which is located on the GPU device and accessed from the GPU. The two memory storage systems are physically separate. &lt;br /&gt;
The amount of GPU RAM is one of two factors that will determine the size of the model that can be run (the other being CPU RAM). As a rule of thumb, approximately 5 million cells can be run per gigabyte (GB) of GPU RAM depending on the model features, e.g. a model with infiltration requires more memory due to the extra variables needed for the infiltration calculation. &lt;br /&gt;
&lt;br /&gt;
===CPU RAM===&lt;br /&gt;
TUFLOW HPC on GPU hardware still uses the CPU to compute and store data (in CPU RAM) during model initialisation and for all 1D calculations. While we are working on improving our CPU RAM usage, we currently tend to find that CPU RAM is often the limiter on the size of the model domain you can run, particularly if running over multiple GPU cards. During initialisation and simulation a model will typically require 4-6 times the amount of CPU RAM relative to GPU RAM. As an example, for a model that utilises 11GB of GPU RAM (typical memory for a high-end gaming card, corresponding to about a 50 million cell model), the CPU RAM required during initialisation will typically be in the range 44GB to 66GB. A model that fully utilises two 11GB GPUs (i.e. a 100 million cell model) may require as much as 128GB of CPU RAM during initialisation. &lt;br /&gt;
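The rules of thumb above can be combined into a quick sizing check. The sketch below is a rough illustration only: the 5 million cells per GB and the 4-6x CPU-to-GPU RAM multipliers are the approximations quoted above, and &amp;lt;tt&amp;gt;estimate_ram&amp;lt;/tt&amp;gt; is a hypothetical helper name, not part of TUFLOW.&lt;br /&gt;

```python
# Rough TUFLOW HPC-GPU memory sizing using the rules of thumb above:
#   ~5 million cells per GB of GPU RAM, and CPU RAM of roughly
#   4-6x the GPU RAM used, peaking during initialisation.
CELLS_PER_GB_GPU = 5_000_000
CPU_TO_GPU_RAM_RATIO = (4, 6)  # low and high multipliers

def estimate_ram(cells: int) -> dict:
    """Return approximate GPU and CPU RAM needs (GB) for a given cell count."""
    gpu_gb = cells / CELLS_PER_GB_GPU
    lo, hi = CPU_TO_GPU_RAM_RATIO
    return {"gpu_gb": gpu_gb, "cpu_gb_min": gpu_gb * lo, "cpu_gb_max": gpu_gb * hi}

# A ~50 million cell model, as in the example above:
print(estimate_ram(50_000_000))
```

By this rule of thumb a 50 million cell model needs roughly 10GB of GPU RAM and 40-60GB of CPU RAM, of the same order as the 44-66GB quoted above for a fully utilised 11GB card.&lt;br /&gt;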
&lt;br /&gt;
===CUDA Cores, GPU Clock speed, and FLOPs===&lt;br /&gt;
One way of reporting a GPU card&#039;s throughput is in Floating Point Operations per second (FLOPs). The more FLOPs, the more calculations that can get crunched per second and the faster the model should run. For any given generation of GPU, FLOPs are approximately proportional to number of CUDA cores times the GPU clock speed. However, there have been significant improvements in GPU architecture since the inception of CUDA, and this has contributed to increases in overall FLOPs performance beyond just the increases in cores and clock speed that have occurred over this time. &lt;br /&gt;
&lt;br /&gt;
===Multiple GPUs===&lt;br /&gt;
TUFLOW can use multiple GPU cards on a machine to run a single model (TUFLOW FV can currently use a single GPU only). This is useful for models that are too large for a single GPU, or for running a model as quickly as possible. In general terms the run time benefit of using multiple cards increases with model size. &lt;br /&gt;
* TUFLOW HPC-GPU does not support SLI for inter-GPU communications.&lt;br /&gt;
* It does (as of build 2020-01-AA) auto-detect and utilise peer-to-peer access over NVLink or the PCI bus on the motherboard. Note that not all GPUs support peer-to-peer access. &lt;br /&gt;
** PCI bus - this method requires cards that support TCC driver mode, and all cards must be in TCC driver mode. As TUFLOW primarily relies on GPU CUDA compute, the impact of using a higher or lower bandwidth PCI slot is minimal.&lt;br /&gt;
** NVLink - high-end compute cards can have up to 8 cards talking to each other through a high-spec NVLink, but many of the less expensive cards are limited to only having two connected together over a dual socket NVLink.&lt;br /&gt;
* Models may still be run across multiple GPUs even if an NVLink is not present and the GPUs do not support peer-to-peer access. In this case HPC reverts to exchanging the domain boundary data between the GPUs via the CPU. The memory bandwidth between the GPU and the main system is not a critical bottleneck for TUFLOW.&lt;br /&gt;
* When using multiple GPUs it is best to use cards of similar memory and performance. While it is possible (as of build 2020-01-AA) to re-balance a model over multiple GPUs, we do not recommend using cards with vastly disparate performance.&lt;br /&gt;
* Sufficient cooling and power supply should be considered if multiple cards are used. When installed in adjacent PCI slots, the preference is to use rear vented cards rather than side vented to avoid blowing hot air onto the neighbouring cards (which could lead to overheating).&lt;br /&gt;
&lt;br /&gt;
===GPU Performance Comparison===&lt;br /&gt;
Extensive GPU hardware speed comparison testing has been completed using TUFLOW&#039;s standardised hardware benchmarking dataset. Details for the benchmarking are available via the &amp;lt;u&amp;gt;[[Hardware_Benchmarking_(2018-03-AA)| Hardware Benchmarking]]&amp;lt;/u&amp;gt; page of the Wiki. Review the GPU benchmarking runtime results table to compare the speed performance of different cards. If your GPU card is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to support@tuflow.com. We will add the details to the runtime results table.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
External videocard benchmark websites can be used to compare GPU cards, for example, &amp;lt;u&amp;gt;[https://www.videocardbenchmark.net/high_end_gpus.html PassMark Software - Video Card (GPU) Benchmarks]&amp;lt;/u&amp;gt; is an excellent performance guide.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=CPU Advice=&lt;br /&gt;
In general terms, a CPU with a more recent architecture, a higher clock speed and a larger cache will perform better than a slower chip. This section discusses CPU RAM, RAM speed, processor frequency, multi-core processing and hyper-threading.&lt;br /&gt;
&lt;br /&gt;
===CPU RAM===&lt;br /&gt;
The amount of CPU RAM will determine the size of the model that can be run, or the number of models that can be run at one time. &lt;br /&gt;
Faster RAM will result in quicker runtimes, however this is usually a secondary consideration to chip speed, cache size and architecture.&lt;br /&gt;
&lt;br /&gt;
===CPU Cores===&lt;br /&gt;
* TUFLOW HPC - Run on GPU Hardware: The parallel processing is done on the GPU card(s). However, TUFLOW HPC-GPU still uses the CPU for model initialisation and for 1D calculations. If multiple GPU cards are used, TUFLOW will use an equivalent number of CPU threads for controlling the GPUs and migrating data. For a machine dedicated to HPC-GPU modelling, the number of CPU cores should therefore be higher than the number of installed GPUs.&lt;br /&gt;
* TUFLOW HPC - Run on CPU Hardware: HPC model can also be run on multiple CPU cores. For the comparison of simulation speed, please refer to [[Hardware_Benchmarking_Topic_HPC_on_CPU_vs_GPU | HPC on CPU vs GPU]].&lt;br /&gt;
* TUFLOW Classic: A TUFLOW Classic simulation can only use one CPU core due to the implicit nature of the numerical solution. More CPU cores allow more simulations to be run efficiently at the same time.&lt;br /&gt;
&lt;br /&gt;
===Hyperthreading===&lt;br /&gt;
For a discussion of hyper-threading, see &amp;lt;u&amp;gt;[https://fvwiki.tuflow.com/index.php?title=TUFLOW_FV_Parallel_Computing TUFLOW FV Parallel Computing]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Processor Frequency and RAM Frequency===&lt;br /&gt;
Processor and RAM frequencies directly affect run times: in general, the higher the frequency, the faster the model runs.&lt;br /&gt;
&lt;br /&gt;
===CPU Performance Comparison===&lt;br /&gt;
Extensive CPU hardware speed comparison testing has been completed using TUFLOW&#039;s standardised hardware benchmarking dataset. Details for the benchmarking are available via the &amp;lt;u&amp;gt;[[Hardware_Benchmarking_(2018-03-AA)| Hardware Benchmarking]]&amp;lt;/u&amp;gt; page of the Wiki. Review the CPU benchmarking runtime results table to compare the speed performance of different chips. If your chip is not listed in the result dataset please download and run the benchmarking dataset, and provide the result summary to support@tuflow.com. We will add the details to the runtime results table.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Storage Advice=&lt;br /&gt;
Solid state drives (SSDs) are preferred for temporary storage as they are faster to write to than traditional spinning hard drives. Large data files can then be transferred to a more permanent location.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[Main_Page| TUFLOW Main Page]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Wibu_Dongles&amp;diff=39710</id>
		<title>Wibu Dongles</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Wibu_Dongles&amp;diff=39710"/>
		<updated>2024-07-01T00:49:22Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Link to cloud licence troubleshooting section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
TUFLOW Products are licenced via Codemeter locks, available in three forms: WIBU USB-2 Dongles (hardware lock), WIBU Software Licences (software lock) and WIBU Cloud locks.&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;WIBU Hardware Lock&#039;&#039;&#039;: WIBU hardware locks are physical dongles (USB-2) that contain TUFLOW licences. Licences are coded onto the dongle and can be moved between computers. WIBU hardware locks are recognised by the 2006-06-BD release onwards. &lt;br /&gt;
*&#039;&#039;&#039;WIBU Software Lock&#039;&#039;&#039;: WIBU software locks are coded onto the computer&#039;s or server&#039;s motherboard and cannot be transferred to a different host. WIBU software locks are recognised by the 2016-03-AF release onwards. &lt;br /&gt;
* &#039;&#039;&#039;WIBU Cloud Licence&#039;&#039;&#039;: WIBU cloud licences are network licences hosted on the WIBU cloud server. An internet connection is required to access a cloud licence. WIBU Cloud licences are recognised by the 2016-03-AF release onwards.&amp;lt;br&amp;gt;&lt;br /&gt;
The following pages provide details on how to install, update and manage TUFLOW licences using all of the above forms.&lt;br /&gt;
&lt;br /&gt;
=Installation=&lt;br /&gt;
==Installing CodeMeter RunTime Kit==&lt;br /&gt;
The first step in using the Wibu licence is to install the CodeMeter Runtime Kit. This needs to be installed on any computer that will be running TUFLOW, as well as on the network licence server.&amp;lt;br&amp;gt;&lt;br /&gt;
The latest version of CodeMeter can be downloaded from the CodeMeter site:&amp;lt;br&amp;gt;&lt;br /&gt;
* &amp;lt;u&amp;gt;[https://www.wibu.com/support/user/downloads-user-software.html https://www.wibu.com/support/user/downloads-user-software.html]&amp;lt;/u&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The correct file to download is the &#039;&#039;&#039;CodeMeter Runtime Kit for Windows&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CodeMeter_RuntimeKit_Download.jpg|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If a computer is used as a network licence server, please select the &amp;quot;Network Server&amp;quot; option during the installation, so that CodeMeter can configure TCP and UDP protocols in the Windows Firewall.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CodeMeter_Network_Server_install.PNG|500px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Once installed, the configuration depends on whether the TUFLOW licence is a local, network or software licence. &lt;br /&gt;
* If this is the first time the licence has been used, or your existing licence has expired, you will need to update your licence file. Please proceed to the &amp;lt;u&amp;gt;[[#Request_a_licence_update| Request a licence update]]&amp;lt;/u&amp;gt; section.&lt;br /&gt;
* If there is already an active licence associated with the dongle:&lt;br /&gt;
:* For a local licence, the dongle can now be inserted into the machine and TUFLOW simulations can be started.&lt;br /&gt;
:* For a network and software licence, continue to the &amp;lt;u&amp;gt;[[#Configuring_Network_Server | configure network server]]&amp;lt;/u&amp;gt; and &amp;lt;u&amp;gt;[[#Configuring_Access_to_Network Licence | configure network access]]&amp;lt;/u&amp;gt; sections below.&lt;br /&gt;
&lt;br /&gt;
===Silent Install===&lt;br /&gt;
It is possible to do a silent install of the CodeMeter Runtime kit.  CodeMeter support staff have advised that this can be done with the following parameters:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;CodeMeterRuntime.exe /ComponentArgs &amp;quot;*&amp;quot;:&amp;quot;/qn&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Installing CodeMeter RunTime Kit for Linux===&lt;br /&gt;
If you are installing on a Linux computer from the command line, refer to:&lt;br /&gt;
* &amp;lt;u&amp;gt;[[Installing_Wibu_CodeMeter_Linux|Installing Wibu CodeMeter Linux]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring Network Server==&lt;br /&gt;
If the TUFLOW licence is a network licence, the computer hosting the dongle will need to be configured as a TUFLOW server.  This is required even if the simulations are to be performed on the server.  Instructions for configuring the network licence are detailed in the following page:&amp;lt;br&amp;gt;&lt;br /&gt;
*&amp;lt;u&amp;gt;[[WIBU_Configure_Network_Server_2016| WIBU Configure Network Server]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring Access to Network Licence==&lt;br /&gt;
To access TUFLOW licences on a remote network server, the CodeMeter runtime kit needs to be installed on the client machine.  Once installed, CodeMeter needs to be configured to use the network licence.&lt;br /&gt;
Instructions for configuring the network licence are detailed in the following page:&amp;lt;br&amp;gt;&lt;br /&gt;
*&amp;lt;u&amp;gt;[[WIBU_Configure_Network_Client_2016| WIBU Configure Network Client]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Updating=&lt;br /&gt;
There are a number of reasons that the Wibu licence may need to be updated, for example:&lt;br /&gt;
* To add additional modules&lt;br /&gt;
* To update to a new support year&lt;br /&gt;
* To add rental licences&lt;br /&gt;
For each change to the dongle, it will be necessary to provide a licence update request file to the TUFLOW staff.&amp;lt;br&amp;gt;&lt;br /&gt;
The procedure is the same for local and network licences; the request will need to be generated on the computer which has the dongle plugged in. &lt;br /&gt;
==Request a licence update==&lt;br /&gt;
The instructions for creating a licence request differ slightly depending on whether the dongle has previously been coded for TUFLOW simulations and whether it was provided by BMT.  &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Hardware Licence (USB)&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_Update_Request | Wibu licence update request for Windows (normal)]]&amp;lt;/u&amp;gt; - Unless specified otherwise by the TUFLOW staff, this option is the correct one to choose.&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_Update_Request_Uncoded | Wibu licence update request for Windows (uncoded or blank dongle)]]&amp;lt;/u&amp;gt; - If you have been provided with a blank dongle or are using a non BMT dongle the TUFLOW producer code needs to be added when requesting the licence update.&amp;lt;br&amp;gt;&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_for_Linux | Wibu licence update request for Linux]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Software Licence (File)&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Software_Licence_Update_Request | Wibu software licence update request for Windows]]&amp;lt;/u&amp;gt;&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_for_Linux | Wibu software licence update request for Linux]]&amp;lt;/u&amp;gt;&lt;br /&gt;
After creating the licence update request, please email the created file (&#039;&#039;&#039;.WibuCmRaC&#039;&#039;&#039;) to &amp;lt;u&amp;gt;[mailto:sales@tuflow.com sales@tuflow.com]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Cloud Licence&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* Contact &amp;lt;u&amp;gt;sales@tuflow.com&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Import a licence update==&lt;br /&gt;
Once a licence update has been created, an update file will be provided to you via email. This update file will have the extension &#039;&#039;&#039;.WibuCmRaU&#039;&#039;&#039;. The same method is used for network, local and software licences.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Hardware Licence (USB)&#039;&#039;&#039; &lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_Update_Import | Importing a Wibu licence update for Windows]]&amp;lt;/u&amp;gt;&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_for_Linux | Importing a Wibu licence update request for Linux]]&amp;lt;/u&amp;gt;&lt;br /&gt;
Applying an update modifies the content of the dongle itself; it does not need to be applied on each computer that will be used for TUFLOW modelling.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Software Licence (File)&#039;&#039;&#039; &lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Licence_Update_Softlock_Import |Importing a Wibu licence update for Windows]]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Cloud Licence&#039;&#039;&#039;&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_Configure_Cloud_Client | Importing a WIBU Cloud License]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Troubleshooting=&lt;br /&gt;
*&amp;lt;u&amp;gt;[[WIBU_Dongle_Not_Working_Correctly | Dongle Not Working Correctly]]&amp;lt;/u&amp;gt;&lt;br /&gt;
*&amp;lt;u&amp;gt;[[WIBU_Unable_to_Remove_Licence_Container | Unable to Remove Licence Container]]&amp;lt;/u&amp;gt;&lt;br /&gt;
*[[WIBU Configure Cloud Client#Troubleshooting|Unable to Connect to Cloud Licence]]&lt;br /&gt;
&lt;br /&gt;
==Diagnostics==&lt;br /&gt;
===cmDust===&lt;br /&gt;
When the CodeMeter runtime kit is installed, a diagnostics utility called &#039;&#039;&#039;cmDust&#039;&#039;&#039; is also installed.  Instructions for creating a diagnostics file:&lt;br /&gt;
* &amp;lt;u&amp;gt;[[WIBU_create_cmDust | Create CM Dust Diagnostics File]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Enabling Logging===&lt;br /&gt;
Codemeter allows you to write extended log files to your local drive. To set up this feature:&lt;br /&gt;
* &amp;lt;u&amp;gt;[[Codemeter_Enable_Logging | Enable Codemeter Logging]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Enabling Network Server License Monitoring===&lt;br /&gt;
Codemeter allows you to conduct real-time licence network monitoring. To set up this feature:&lt;br /&gt;
* &amp;lt;u&amp;gt;[[Network_Server_License_Monitoring | Network License Monitoring]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[Main_Page| Back to Wiki Main Page]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=TUFLOW_Licensing&amp;diff=39081</id>
		<title>TUFLOW Licensing</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=TUFLOW_Licensing&amp;diff=39081"/>
		<updated>2024-03-27T07:04:07Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: /* Frequently Asked Questions (FAQ) - What is the best licence for me */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
A TUFLOW licence is required to run TUFLOW, except when using third party software such as a GIS to prepare input data or view results, or when running TUFLOW demo, tutorial or example models in licence free mode.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Lock Types==&lt;br /&gt;
TUFLOW Products are licenced via locks, available in two forms: WIBU USB-2 Dongles (hardware lock) and WIBU Software Licences (software lock). A third form, WIBU Cloud locks, has been available since 2023.&lt;br /&gt;
&lt;br /&gt;
===Current Licence Lock Options===&lt;br /&gt;
Refer to &amp;lt;u&amp;gt;[[Wibu_Dongles | WIBU Lock Guidance]]&amp;lt;/u&amp;gt; for further information on the following licence hosting options:&lt;br /&gt;
* &#039;&#039;&#039;WIBU Hardware Lock&#039;&#039;&#039;: WIBU hardware locks are physical dongles (USB-2) that contain TUFLOW licences. Licences are coded onto the dongle and can be moved between computers. WIBU hardware locks are recognised by the 2006-06-BD release onwards. &lt;br /&gt;
*&#039;&#039;&#039;WIBU Software Lock&#039;&#039;&#039;: WIBU software locks are coded onto the computer&#039;s or server&#039;s motherboard and cannot be transferred to a different host. WIBU software locks are recognised by the 2016-03-AF release onwards. &lt;br /&gt;
* &#039;&#039;&#039;WIBU Cloud Licence&#039;&#039;&#039;: WIBU cloud licences are network licences hosted on the WIBU cloud server. An internet connection is required to access a cloud licence. WIBU Cloud licences are recognised by the 2016-03-AF release onwards. &lt;br /&gt;
&lt;br /&gt;
===Legacy Licence Lock Options===&lt;br /&gt;
* &#039;&#039;&#039;Softlok Dongle&#039;&#039;&#039;: As of August 2010 Softlok USB dongles are no longer issued due to the dongle provider not supporting 64-bit. Maintained Softlok dongles may be exchanged for a WIBU dongle for a nominal fee, please contact &amp;lt;u&amp;gt;[mailto:sales@tuflow.com sales@tuflow.com]&amp;lt;/u&amp;gt;. For the 2009-07, 2008-08, 2007-07 and 2006-06 releases, the “DB” builds or later will need to be used to recognise a WIBU Codemeter dongle. Refer to &amp;lt;u&amp;gt;[[Softlok_Dongles | Softlok Guidance]]&amp;lt;/u&amp;gt; for further information.&lt;br /&gt;
&lt;br /&gt;
==Licence Types==&lt;br /&gt;
TUFLOW simulations can be executed using a variety of licence types:&lt;br /&gt;
* &#039;&#039;&#039;Licence Free Mode&#039;&#039;&#039;: A licence free mode is built into TUFLOW. For information on the limits of licence free mode, see &amp;lt;u&amp;gt;[[New_User_Guide_Free_Demo_Version | TUFLOW Free DEMO Version Guide]]&amp;lt;/u&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Local Licence&#039;&#039;&#039;: TUFLOW simulations can only be run on the computer hosting the Lock. For installation guidance, see &amp;lt;u&amp;gt;[[New_User_Guide_Local_Licences | Local Licence Installation Guide]]&amp;lt;/u&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Network Licence&#039;&#039;&#039;: The Lock can be hosted on any computer or server. Other computers &#039;check out&#039; licences from the host computer via a company’s network. There are no regional restrictions associated with Network licences. For installation guidance, see &amp;lt;u&amp;gt;[[New_User_Guide_Network_Licences | Network Licence Installation Guide]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Cloud Licence&#039;&#039;&#039;: The Lock is hosted on a central WIBU cloud server. Other computers &#039;check out&#039; licences from the host server over an internet connection. There are no regional restrictions associated with Cloud licences. For installation guidance, see &amp;lt;u&amp;gt;[[New_User_Guide_Cloud_Licences | Cloud Licence Installation Guide]]&amp;lt;/u&amp;gt;. Note that a cloud licence can be provided either as a Local Cloud Licence, so that TUFLOW simulations can only be run on the computers that have the cloud licence installed, or as a Network Cloud Licence, which provides the same benefits as a Network Licence, but also carries the same administrative overhead.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Frequently Asked Questions (FAQ) =&lt;br /&gt;
== Do I require a TUFLOW licence to create TUFLOW inputs and view results from a TUFLOW simulation? ==&lt;br /&gt;
No, a TUFLOW licence is only needed to run a TUFLOW .exe file (excluding tutorial, demo and example models or small models eligible for free mode).  Running a TUFLOW .exe is required to:&lt;br /&gt;
* Process control files (.tcf, etc.), check the data inputs and construct the model using the GIS layers and other inputs. As part of this process any ERROR, WARNING or CHECK messages are issued to help resolve input data conflicts. Check files and GIS layers representing the final model construct are also produced to quality control the model’s inputs.&lt;br /&gt;
* Carry out the hydraulic computations, provided the first step above produces no ERROR messages.&lt;br /&gt;
No licence is needed for any other task, including:&lt;br /&gt;
* Creation and editing of all input files and GIS layers.&lt;br /&gt;
* Running GIS plugins such as the QGIS TUFLOW Viewer.&lt;br /&gt;
* Running utilities (e.g. asc_to_asc.exe).&lt;br /&gt;
* Reviewing check files/layers in GIS.&lt;br /&gt;
* Viewing results (e.g. using TUFLOW Viewer in QGIS).&lt;br /&gt;
All TUFLOW inputs and outputs use free open formats that are readable and editable by third party software, for example QGIS and Notepad++:&lt;br /&gt;
* Download Notepad++ to create and review tabular data: &amp;lt;u&amp;gt;[[NotepadPlusPlus_Tips | Notepad++ installation and tips]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* Download QGIS: &amp;lt;u&amp;gt;[[QGIS_Tips | QGIS installation and tips]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* Install the TUFLOW Plugin: &amp;lt;u&amp;gt;[[TUFLOW_QGIS_Plugin | TUFLOW QGIS plugin installation and tips]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* Use TUFLOW Viewer to review XMDF results: &amp;lt;u&amp;gt;[[TUFLOW_Viewer |TUFLOW Viewer]]&amp;lt;/u&amp;gt;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Why is a pre 2010 version of TUFLOW not working?==&lt;br /&gt;
In 2010 &amp;lt;u&amp;gt;[[TUFLOW_Licensing#Softlok_Dongles_.28Legacy_Product.29 | Softlok dongles]]&amp;lt;/u&amp;gt; were replaced by WIBU dongles. TUFLOW versions earlier than 2010 might be searching for a Softlok licence, however it is likely you have a WIBU licence. The &amp;quot;DB&amp;quot; builds of TUFLOW were created so earlier versions of TUFLOW recognise the new WIBU licences. Please download a &amp;quot;DB&amp;quot; version from the &amp;lt;u&amp;gt;[https://tuflow.com/downloads/tuflow-classichpc-archive/ TUFLOW website]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How many simulations can be run at the same time?==&lt;br /&gt;
The number of licences reflects how many TUFLOW simulations can be run in parallel at any given time. For example, a Local 4 can run 4 simulations at the same time on one computer. A Network 5 allows up to 5 simulations at any one time across an organisation’s network. If all licences are in use when a TUFLOW simulation starts, the simulation enters a holding pattern until a free licence is available.&lt;br /&gt;
&lt;br /&gt;
== What is the best licence for me? ==&lt;br /&gt;
&lt;br /&gt;
=== Hardware vs. Software ===&lt;br /&gt;
The advantage of a hardware lock is that it is portable and not tied to a specific computer. You can unplug it and plug it into another computer. A software lock is tied to a specific computer and cannot be moved once bound to that computer. This also means that if the computer changes substantially, the lock may become invalid and may need replacement. &lt;br /&gt;
&lt;br /&gt;
This is especially relevant if you install the lock on a virtual machine (VM), since migrating the VM or rebuilding it can cause the lock to become invalid. Specifically, if you run a VM in the cloud as a licence server, keep in mind that stopping and starting the VM is safe, but retaining only the physical storage and basing a new VM on it will invalidate the licence.&lt;br /&gt;
&lt;br /&gt;
The advantage of a software lock is that it can be installed on a computer that you don&#039;t have physical access to, and doesn&#039;t require a free USB port. This makes software locks well-suited to installation on VMs, or remote computers.&lt;br /&gt;
&lt;br /&gt;
=== Local vs. Network ===&lt;br /&gt;
The advantage of a local lock is that you only need CodeMeter installed on the computer that will be running your TUFLOW software, and the required configuration will be minimal. No network access beyond this computer is required. The lock is either installed directly on this local CodeMeter, as a software lock, or CodeMeter will pick up an inserted hardware lock automatically.&lt;br /&gt;
&lt;br /&gt;
The advantage of a network lock is that you can manage all your locks in a single place (called the CodeMeter Network Server), and that licences can easily be accessed from multiple computers on your network. However, this does mean that each additional computer that needs to run TUFLOW will also need CodeMeter installed and configured to find the Network Server. These computers also need to be able to reach the Network Server over port 22350, which may require some cooperation from your IT staff to configure.&lt;br /&gt;
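As a minimal sketch, the reachability requirement can be checked from a client machine with a plain TCP connection test; note this only confirms the port is open, it does not speak the CodeMeter protocol or confirm a licence is available, and any hostname you pass in is your own.&lt;br /&gt;

```python
# Quick TCP reachability check for a CodeMeter Network Server.
# This only verifies the port is open; it does not talk the CodeMeter
# protocol or confirm that a licence is actually available.
import socket

def can_reach(host, port=22350, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, can_reach("licence-server.example.local") would tell you whether the default CodeMeter port is open from where TUFLOW will run (the hostname here is a placeholder).&lt;br /&gt;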
&lt;br /&gt;
=== Cloud vs. non-Cloud ===&lt;br /&gt;
The advantage of a cloud licence is that you can install it on as many machines as you like, and reuse it on future machines, for as long as the licence is valid. This gives you a similar advantage to a Network Licence, without the complexity of network connectivity. Another benefit is that you can put the licence on computers that are not connected to your local network; a Network Licence hosted on a Network Server on your network would not reach them, and Local Software Licences would not be reusable.&lt;br /&gt;
&lt;br /&gt;
A disadvantage of a cloud licence can be the requirement that the computer it is installed on has https (web) internet access to &amp;lt;code&amp;gt;wibu.cloud&amp;lt;/code&amp;gt;. Using a Local Cloud Licence also means that you need to distribute your licence to many locations, which may be a security concern.&lt;br /&gt;
&lt;br /&gt;
Since Cloud Licences can be provided as either Local Cloud or Network Cloud licences, you can combine the benefits of both: a Network Cloud Licence allows you to licence computers not connected to your local network, while not requiring https access to &amp;lt;code&amp;gt;wibu.cloud&amp;lt;/code&amp;gt; for each machine running TUFLOW. Instead you can organise network connections from the computers running TUFLOW to the Network Cloud Licence server, while the entire network can be independent of your local network. This makes the Cloud Licence ideal for situations where you want to create your own cloud infrastructure for running TUFLOW.{{Tips Navigation&lt;br /&gt;
|uplink=[[Main_Page| TUFLOW Main Page]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=TUFLOW_Licensing&amp;diff=39076</id>
		<title>TUFLOW Licensing</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=TUFLOW_Licensing&amp;diff=39076"/>
		<updated>2024-03-27T06:25:40Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Update to date + clarification of Local vs Network Cloud Licence&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
A TUFLOW licence is required to run TUFLOW, except when using third party software such as a GIS to prepare input data or view results, or when running TUFLOW demo, tutorial or example models in licence free mode.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Lock Types==&lt;br /&gt;
TUFLOW products are licensed via locks, available in two forms: WIBU USB-2 Dongles (hardware lock) and WIBU Software Licences (software lock). A third form, WIBU Cloud locks, has been available since 2023.&lt;br /&gt;
&lt;br /&gt;
===Current Licence Lock Options:===&lt;br /&gt;
Refer to &amp;lt;u&amp;gt;[[Wibu_Dongles | WIBU Lock Guidance]]&amp;lt;/u&amp;gt; for further information on the following licence hosting options:&lt;br /&gt;
* &#039;&#039;&#039;WIBU Hardware Lock&#039;&#039;&#039;: WIBU hardware locks are physical dongles (USB-2) that contain TUFLOW licences. Licences are coded onto the dongle and can be moved between computers. WIBU hardware locks are recognised by the 2006-06-BD release onwards. &lt;br /&gt;
* &#039;&#039;&#039;WIBU Software Lock&#039;&#039;&#039;: WIBU software locks are coded onto the computer&#039;s or server&#039;s motherboard and cannot be transferred to a different host. WIBU software locks are recognised by the 2016-03-AF release onwards. &lt;br /&gt;
* &#039;&#039;&#039;WIBU Cloud Licence&#039;&#039;&#039;: WIBU cloud licences are available for Network licences and are hosted on the WIBU cloud server. An internet connection is required to access a cloud licence. WIBU Cloud licences are recognised by the 2016-03-AF release onwards. &lt;br /&gt;
&lt;br /&gt;
===Legacy Licence Lock Options===&lt;br /&gt;
* &#039;&#039;&#039;Softlok Dongle&#039;&#039;&#039;: As of August 2010 Softlok USB dongles are no longer issued due to the dongle provider not supporting 64-bit. Maintained Softlok dongles may be exchanged for a WIBU dongle for a nominal fee, please contact &amp;lt;u&amp;gt;[mailto:sales@tuflow.com sales@tuflow.com]&amp;lt;/u&amp;gt;. For the 2009-07, 2008-08, 2007-07 and 2006-06 releases, the “DB” builds or later will need to be used to recognise a WIBU Codemeter dongle. Refer to &amp;lt;u&amp;gt;[[Softlok_Dongles | Softlok Guidance]]&amp;lt;/u&amp;gt; for further information.&lt;br /&gt;
&lt;br /&gt;
==Licence Types==&lt;br /&gt;
TUFLOW simulations can be executed using a variety of licence types:&lt;br /&gt;
* &#039;&#039;&#039;Licence Free Mode&#039;&#039;&#039;: A licence free mode is built into TUFLOW. For information on the limits of licence free mode, see &amp;lt;u&amp;gt;[[New_User_Guide_Free_Demo_Version | TUFLOW Free DEMO Version Guide]]&amp;lt;/u&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Local Licence&#039;&#039;&#039;: TUFLOW simulations can only be run on the computer hosting the Lock. For installation guidance, see &amp;lt;u&amp;gt;[[New_User_Guide_Local_Licences | Local Licence Installation Guide]]&amp;lt;/u&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Network Licence&#039;&#039;&#039;: The Lock can be hosted on any computer or server. Other computers &#039;check out&#039; licences from the host computer via a company’s network. There are no regional restrictions associated with Network licences. For installation guidance, see &amp;lt;u&amp;gt;[[New_User_Guide_Network_Licences | Network Licence Installation Guide]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Cloud Licence&#039;&#039;&#039;: The Lock is hosted on a central WIBU cloud server. Other computers &#039;check out&#039; licences from the host server over an internet connection. There are no regional restrictions associated with Cloud licences. For installation guidance, see &amp;lt;u&amp;gt;[[New_User_Guide_Cloud_Licences | Cloud Licence Installation Guide]]&amp;lt;/u&amp;gt;. Note that a cloud licence can be provided either as a Local Cloud Licence, so that TUFLOW simulations can only be run on the computers that have the cloud licence installed, or as a Network Cloud Licence, which provides the same benefits as a Network Licence, but also carries the same administrative overhead.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Frequently Asked Questions (FAQ) =&lt;br /&gt;
== Do I require a TUFLOW licence to create TUFLOW inputs and view results from a TUFLOW simulation? ==&lt;br /&gt;
No, a TUFLOW licence is only needed to run a TUFLOW .exe file (excluding tutorial, demo and example models or small models eligible for free mode).  Running a TUFLOW .exe is required to:&lt;br /&gt;
* Process control files (.tcf, etc.), check the data inputs and construct the model using the GIS layers and other inputs. As part of this process any ERROR, WARNING or CHECK messages are issued to help resolve input data conflicts. Check files and GIS layers representing the final model construct are also produced to quality control the model’s inputs.&lt;br /&gt;
* Carry out the hydraulic computations, provided the first step above produces no ERROR messages.&lt;br /&gt;
No licence is needed for any other task, including:&lt;br /&gt;
* Creation and editing of all input files and GIS layers.&lt;br /&gt;
* Running GIS plugins such as the QGIS TUFLOW Viewer.&lt;br /&gt;
* Running utilities (e.g. asc_to_asc.exe).&lt;br /&gt;
* Reviewing check files/layers in GIS.&lt;br /&gt;
* Viewing results (e.g. using TUFLOW Viewer in QGIS).&lt;br /&gt;
All TUFLOW inputs and outputs use free open formats that are readable and editable by third party software, for example QGIS and Notepad++:&lt;br /&gt;
* Download Notepad++ to create and review tabular data: &amp;lt;u&amp;gt;[[NotepadPlusPlus_Tips | Notepad++ installation and tips]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* Download QGIS: &amp;lt;u&amp;gt;[[QGIS_Tips | QGIS installation and tips]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* Install the TUFLOW Plugin: &amp;lt;u&amp;gt;[[TUFLOW_QGIS_Plugin | TUFLOW QGIS plugin installation and tips]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
* Use TUFLOW Viewer to review XMDF results: &amp;lt;u&amp;gt;[[TUFLOW_Viewer |TUFLOW Viewer]]&amp;lt;/u&amp;gt;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Why is a pre 2010 version of TUFLOW not working?==&lt;br /&gt;
In 2010 &amp;lt;u&amp;gt;[[TUFLOW_Licensing#Softlok_Dongles_.28Legacy_Product.29 | Softlok dongles]]&amp;lt;/u&amp;gt; were replaced by WIBU dongles. TUFLOW versions earlier than 2010 might be searching for a Softlok licence, however it is likely you have a WIBU licence. The &amp;quot;DB&amp;quot; builds of TUFLOW were created so earlier versions of TUFLOW recognise the new WIBU licences. Please download a &amp;quot;DB&amp;quot; version from the &amp;lt;u&amp;gt;[https://tuflow.com/downloads/tuflow-classichpc-archive/ TUFLOW website]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How many simulations can be run at the same time?==&lt;br /&gt;
The number of licences reflects how many TUFLOW simulations can be run in parallel at any given time. For example, a Local 4 can run 4 simulations at the same time on one computer. A Network 5 allows up to 5 simulations at any one time across an organisation’s network. If all licences are in use when a TUFLOW simulation starts, the simulation enters a holding pattern until a free licence is available.{{Tips Navigation&lt;br /&gt;
|uplink=[[Main_Page| TUFLOW Main Page]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36121</id>
		<title>Organisation Cloud Software Execution</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36121"/>
		<updated>2023-12-15T04:06:55Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: /* Technical Terms Glossary */ Alphabetical ordering&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The TUFLOW &amp;lt;u&amp;gt;[https://www.tuflow.com/Download/Licensing/TUFLOW%20Products%20Licence%20Agreement.pdf End User Licence Agreement]&amp;lt;/u&amp;gt; was updated in 2018 allowing companies to host their own licences on the cloud. The only restrictions associated with users running TUFLOW simulations on their own company public or private cloud environment are:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; The licence must be a “Network” type (use of “Local” licences is not permitted on the cloud).&lt;br /&gt;
&amp;lt;li&amp;gt; Usage of TUFLOW software on a virtual machine is confined to Authorised Users within the Licensee&#039;s Network. This clause means companies cannot on-sell access to TUFLOW licences hosted in the cloud or otherwise (excluding TUFLOW vendor contract arrangements). &lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configuration of your cloud environment is your own responsibility. There are numerous ways TUFLOW licensing and simulation can be configured in a cloud environment depending on the cloud provider (Microsoft, Google, Amazon, etc.) and internal company protocols. We recommend engaging a professional with suitable cloud architecture expertise to design your bespoke system. Clients who have already migrated to the cloud have done so in a variety of ways:&lt;br /&gt;
* Some use a hardware lock (USB) dongle that resides in their office on a physical computer or server. Cloud virtual machines link to the network licence via the IP address of the hardware lock.&lt;br /&gt;
* Others use a software lock. Software locks are a digital licence file that is locked to a dedicated host computer, server or virtual machine. When using a software lock please select the host carefully as the software licence will be bound to it. Relocating the licence to a new location will require TUFLOW sales staff to reissue the licence, which incurs a small administration fee.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Please Note: Network licence rentals can be used to upscale the available licences on your cloud system when demand requires it.&#039;&#039;&#039; &lt;br /&gt;
 Refer to the &amp;lt;u&amp;gt;[https://www.tuflow.com/Prices.aspx TUFLOW Pricelist]&amp;lt;/u&amp;gt; for more information.&lt;br /&gt;
&lt;br /&gt;
This detailed report from the TUFLOW Library discusses some benefits, challenges and solutions relating to cloud computing to help people who are setting up their own system: &lt;br /&gt;
&amp;lt;u&amp;gt;[https://downloads.tuflow.com/Licensing/2021_Running_TUFLOW_on_the_Cloud.pdf Running TUFLOW on the Cloud (Whitepaper)]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:2021_Running_TUFLOW_on_the_Cloud.png]]&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Common Questions Answered (FAQ)=&lt;br /&gt;
== Q1: How do I execute a simulation on the cloud? Can I still use batch files? ==&lt;br /&gt;
Running a simulation on the cloud can be very similar to running it on any other computer. You can access a VM remotely just like you would any other remote computer, using Remote Desktop, SSH, VNC, an X-Server client, etc. - whatever you are used to and what is set up on the VM. However, that assumes the VM is set up for that type of access and is running when you need to connect to it. If you want to make use of the real benefits of the cloud, like the ability to run on many computers at once, started automatically only when needed, working through such a manual process would be very cumbersome. Consider more advanced techniques like [https://azure.microsoft.com/en-au/products/batch Azure Batch], AWS Batch, or [https://cloud.google.com/batch/docs/get-started Google Cloud Batch].&lt;br /&gt;
&lt;br /&gt;
In either case, you will need access to a TUFLOW licence server from VMs running the model. Have a look at &amp;quot;Do I need a different licence to run models on the cloud?&amp;quot; below. And the VMs will always need to have CodeMeter installed, configured to find the licence you plan to use, as well as appropriate drivers for hardware like GPUs.&lt;br /&gt;
&lt;br /&gt;
When running on the cloud, consider that you may not have network access to locations where you would normally store your results. You may need to set up storage in the cloud separate from the VM, but connected to it, to collect your results and still have them available to you once the VM stops running.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re using remote access to desktop VMs, you can still use &#039;&#039;batch files&#039;&#039; or scripts like you&#039;re used to. If you look into batch services, you will need more involved scripting, and you would typically not use batch files, but split up the work into separate tasks for the cloud platform to schedule on available computers. Keep in mind that this is a substantial and complex task, requiring some development and IT skills. If you plan on this type of cloud use, plan ahead and be ready with a working and tested solution, before you take on a deadline.&lt;br /&gt;
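As an illustration of the scripting involved, here is a minimal Python sketch that turns a folder of runs into one command line per control file, so each can become a separate batch task or be run in sequence; the executable name and folder layout are placeholders, and the -b and -nc switches are discussed under Q2 below.&lt;br /&gt;

```python
# Sketch: build one command line per .tcf control file found in a
# folder. Each entry could be submitted as a separate cloud batch
# task, or executed locally in sequence with
# subprocess.run(cmd, check=True). "tuflow_linux" is a placeholder.
from pathlib import Path

def build_commands(run_dir, exe="tuflow_linux"):
    """Return a list of command lines, one per .tcf file in run_dir."""
    tcfs = sorted(Path(run_dir).glob("*.tcf"))
    return [[exe, "-b", "-nc", str(tcf)] for tcf in tcfs]
```

The split into independent command lines is what lets a batch service schedule the runs across as many VMs as your quota and licences allow.&lt;br /&gt;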
== Q2: Do I need a different TUFLOW executable to run models on the cloud? ==&lt;br /&gt;
No, you can use the same executable appropriate to the operating system you are on. Keep in mind that running TUFLOW with a licence does require that CodeMeter is installed as well and configured to find the licence. And if you are using a GPU on the cloud, you will need to have the appropriate NVIDIA drivers with CUDA installed, and a GPU licence available.&lt;br /&gt;
&lt;br /&gt;
Although you use the same executable, it may be advantageous to provide some additional command line options to TUFLOW when you run it on the cloud. Since you typically won&#039;t be present and looking at the screen, consider using the &amp;lt;code&amp;gt;-nc&amp;lt;/code&amp;gt; switch, which prevents user interaction on the console. Also, the familiar &amp;lt;code&amp;gt;-b&amp;lt;/code&amp;gt; option prevents the simulation from waiting for a key press at the end of the simulation. And finally, given the possible cost of running models at scale, you would do well to test your model with the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; switch before sending it to the cloud. In addition to command line options, learn about TUFLOW override files to override configuration that may need to be different on the cloud VM, like the location where TUFLOW should write results.&lt;br /&gt;
== Q3: What steps do I need to take to run my model on the cloud? ==&lt;br /&gt;
In no particular order:&lt;br /&gt;
&lt;br /&gt;
* Assuming you have chosen a cloud provider you will use, make sure you understand the answers to the previous questions. If some of this is too technical, ensure you go over this with staff with appropriate IT skills and administrative access.&lt;br /&gt;
* With regard to the model itself, ensure that it has no references to files on computers that wouldn&#039;t be accessible from the cloud VM running the model. Ideally, construct your model configuration so that it can be self-contained within a single folder and would run wherever you put it.&lt;br /&gt;
* Ensure you have sufficient TUFLOW licences available and accessible to your cloud VMs to run the number of simulations you plan to run in parallel on the cloud.&lt;br /&gt;
* Ensure you have sufficient quota for the storage and cloud resources you need to run the number of simulations you plan to run, specifically when using the &#039;Batch&#039; services mentioned under Q1.&lt;br /&gt;
* Ensure you have the right level of access to make use of the cloud resources you need, and that you&#039;re able to use and manage them when you do.&lt;br /&gt;
* Ensure that what you&#039;re planning on the cloud complies with your company and client&#039;s security policies for the work. Think about where the cloud computers are, how data is transferred to and from the cloud, and who has access.&lt;br /&gt;
* If you can, pick a region that puts the compute and storage relatively close to your own location, ensuring that your access (or perhaps your clients&#039; access) to them over the internet can achieve good total network speeds.&lt;br /&gt;
* Test your model before putting it on the cloud and test your preferred method of running a model on the cloud before scaling it up.&lt;br /&gt;
* Make sure your model configuration matches your actual needs before sending it to the cloud. Consider the frequency of writing outputs, whether you need check files, etc.&lt;br /&gt;
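To illustrate the self-contained-folder point above, here is a minimal Python sketch that scans control files for absolute Windows or UNC paths that would break on a cloud VM; the extension list and the pattern are illustrative assumptions, not an official TUFLOW tool.&lt;br /&gt;

```python
# Sketch: flag absolute C:\... or \\server\... paths in control files,
# since these usually will not resolve on a cloud VM. The extension
# list and regex are illustrative assumptions, not an official check.
import re
from pathlib import Path

ABS_PATH = re.compile(r"[A-Za-z]:\\|\\\\\w")

def find_absolute_paths(model_dir, exts=(".tcf", ".tgc", ".tbc")):
    """Return (file name, line number, line) for each suspect line."""
    hits = []
    for f in sorted(Path(model_dir).rglob("*")):
        if f.suffix.lower() in exts:
            lines = f.read_text(errors="ignore").splitlines()
            for n, line in enumerate(lines, 1):
                if ABS_PATH.search(line):
                    hits.append((f.name, n, line.strip()))
    return hits
```

An empty result is a good (though not sufficient) sign that the model folder can be dropped onto any VM and run in place.&lt;br /&gt;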
&lt;br /&gt;
When in doubt, feel free to contact [mailto:support@tuflow.com TUFLOW Support] and [mailto:sales@tuflow.com TUFLOW Sales] with questions, but keep in mind that we can only offer limited guidance when it comes to the specifics of your chosen cloud provider, and that your company&#039;s IT policies may further limit your options.&lt;br /&gt;
&lt;br /&gt;
== Q4: How can I download the simulation results? ==&lt;br /&gt;
This depends on your chosen solution.&lt;br /&gt;
&lt;br /&gt;
If you have cloud VMs that have access to your company&#039;s internal network, you may be able to copy the results automatically (with a script or batch file) after a simulation completes, and no download would be needed. If you have cloud VMs that you interactively use remotely, you can use whatever tools you would use from any remote machine, like OneDrive, Dropbox, FTP, SSH, to name but a few.&lt;br /&gt;
&lt;br /&gt;
However, all cloud service providers also provide cloud storage, and it may be cheaper and faster to keep unprocessed results in the cloud. Once a run completes, you typically do not want to keep the results on storage that is local to the VM that ran the model (e.g. its C: or D: drive on a Windows computer), unless you plan to use the same VM for post-processing of the results. But you can set up network file shares in the cloud that can be connected to your VM as extra drives or mounts, or you can make use of blob storage like Azure Blob, S3 Buckets, etc. Depending on the cloud service provider, there will be relatively user-friendly tools to access these remotely and download your data later.&lt;br /&gt;
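As a minimal sketch of moving results off VM-local storage, the following copies everything under a local results folder to a network share mounted on the VM; both paths are placeholders and the share is assumed to be already mounted.&lt;br /&gt;

```python
# Sketch: copy everything under a local results folder to a mounted
# network share, preserving the folder layout. Both paths are
# placeholders; the share must already be mounted on the VM.
import shutil
from pathlib import Path

def collect_results(local_results, share_mount):
    """Copy all files under local_results to share_mount; return the copies."""
    copied = []
    for f in sorted(Path(local_results).rglob("*")):
        if f.is_file():
            dest = Path(share_mount) / f.relative_to(local_results)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)
            copied.append(dest)
    return copied
```

A script like this can be run as the last step of a batch task, so results survive after the VM and its local disks are removed.&lt;br /&gt;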
&lt;br /&gt;
For particularly massive datasets, some cloud providers also offer services where they put the data on physical media and ship it to you. However, keep in mind that this takes substantial lead time to reserve beforehand and then some time to execute after you complete the work, and the service may not be available for the smaller volumes you may need.&lt;br /&gt;
&lt;br /&gt;
Finally, at the risk of stating the obvious: perform the download on a good internet connection. Cloud providers charge a small amount per GB downloaded, and in return they offer very good download speeds for your data. But your internet connection may end up limiting how quickly you get your data to your computer.&lt;br /&gt;
== Q5: What are the benefits of running a simulation in the cloud rather than locally? ==&lt;br /&gt;
Not all benefits apply in all cases, but consider these:&lt;br /&gt;
&lt;br /&gt;
* You can get access to as many cloud VMs (and GPUs) as you need, to run as many simulations as you need in parallel, provided you have sufficient licences and quota with the provider.&lt;br /&gt;
* If you only need compute infrequently, it&#039;s there in the cloud when you need it and you only pay for it when you use it.&lt;br /&gt;
* If your workload suddenly increases (which may be a good thing), you can quickly increase the amount of compute with cloud computing, provided you&#039;re set up to do so.&lt;br /&gt;
* Most cloud providers offer access to a variety of very capable hardware that may allow you to run larger or longer-running models than you could on your own hardware.&lt;br /&gt;
* If you collaborate with others from various locations (wherever they are in the world), having the results in the cloud may be a real benefit.&lt;br /&gt;
&lt;br /&gt;
However, there are some potential downsides to consider as well:&lt;br /&gt;
&lt;br /&gt;
* If you make efficient use of hardware you own, the compute is likely cheaper per model run than cloud computing, especially on-demand compute.&lt;br /&gt;
* Although it&#039;s not very complicated to set up a VM for cloud runs and get up and running, it may be complicated to do so in a way that satisfies your company&#039;s or client&#039;s security policies.&lt;br /&gt;
* Similarly, just running some models on an interactively accessible VM may be simple, but developing scripts for automated model running may require time and skills that prevent you from doing so yourself.&lt;br /&gt;
&lt;br /&gt;
== Q6: Do I need to add in any extra commands in my control files? ==&lt;br /&gt;
If your model is self-contained and could run from its folder on any computer, perhaps not. However, you may want to change where a VM in the cloud tries to write its results, for example. You can achieve that with extra commands in your control files, but also consider the use of TUFLOW override control files, which you can tailor to the cloud VMs you&#039;re using, without affecting the control files you use for running or testing locally.&lt;br /&gt;
&lt;br /&gt;
To keep costs of storage and transport manageable, as well as saving on some run time, configure your model to write only the outputs you need. This includes selecting the right variables to output, at the appropriate time intervals. Have a look at our [https://www.youtube.com/watch?v=-CsKKjG7jpQ Output Management Advice] webinar (15 minutes) for more tips on that.&lt;br /&gt;
&lt;br /&gt;
Also look at the command line switches mentioned in the answer to Q2.&lt;br /&gt;
== Q7: Do I need a different licence to run models on the cloud? ==&lt;br /&gt;
Not necessarily, but there are some things to keep in mind. If your existing licences are on a dongle, they would need to be network licences and the server they are installed on would have to be accessible over the network from the cloud VMs you&#039;re looking to run models on. If you have sufficient existing network licences you can use in this manner, including licences for special hardware you&#039;d be using on the cloud (like a GPU), you will not need different licences.&lt;br /&gt;
&lt;br /&gt;
You can also set up a dedicated VM to run a small CodeMeter network licence server in the cloud for software network licences. But keep in mind that licences on such a server cannot be moved elsewhere - they are bound to this specific VM. Access to this licence server would be limited to VMs in the cloud, on the same virtual network as the licence server. Or you&#039;d need to have someone with the appropriate IT skills make the licence server accessible from all locations you need access from.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may be able to make use of web licences, please contact [mailto:sales@tuflow.com TUFLOW Sales] for more information on that.&lt;br /&gt;
== Q8: What can go wrong when running models on the cloud? ==&lt;br /&gt;
For starters, almost everything that can go wrong when running models locally, although power failures and loss of network connection are exceedingly rare on the cloud.&lt;br /&gt;
&lt;br /&gt;
Common problems arise from the differences in the computer&#039;s environment: software you may have installed that batch files rely on, software required to run TUFLOW (CodeMeter, NVIDIA drivers for GPU), access to networked resources you get inputs from, or write results to, etc.&lt;br /&gt;
&lt;br /&gt;
Also, if you&#039;re using Batch services from your cloud provider, once a VM completes its tasks, it may disappear. If something went wrong during the run, you may have very limited access to information about what went wrong, so you want to be careful about logging and where logs are written to.&lt;br /&gt;
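One way to be careful about logging is to pipe the run's console output to storage that outlives the VM. The sketch below is self-contained (it uses a temporary directory and a fake simulation command as placeholders for a mounted durable share and the real TUFLOW run):

```shell
# Sketch: keep a copy of the run's console output on storage that outlives
# the VM. LOG_DIR stands in for a mounted network or cloud file share; a
# temporary directory is used here so the sketch runs anywhere.
LOG_DIR="$(mktemp -d)"

# The simulation is faked with an echo; in practice this would be the
# TUFLOW command itself. tee writes the console output to the durable log
# while still passing it through to the console.
( echo "simulation console output" ) 2>&1 | tee "$LOG_DIR/run_001.log"
```

If the VM disappears after its task completes, the log file on the durable share remains available for diagnosis.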
&lt;br /&gt;
Similarly, but much simpler: if you run models interactively on a desktop VM, once you turn it off, you will no longer have access to its local storage. And once you remove the VM to save on cost, keep in mind that its attached disk storage will be removed as well, so ensure you have your results in a safe place before that.&lt;br /&gt;
&lt;br /&gt;
Finally, access to licences using CodeMeter from the cloud VM can sometimes cause complications, as can user access to the VM or the data, depending on your IT setup.&lt;br /&gt;
&lt;br /&gt;
None of these should stop you from trying, but ensure everything works like you expect, before scaling up to many model runs at once.&lt;br /&gt;
== Q9: If I stop the cloud VM after models are finished, can I still download the results? ==&lt;br /&gt;
If the results were written to local storage on the VM (like the default C: or D: drive on a Windows VM), you will only be able to access these when the cloud VM is running. If you stopped it, you could restart it to gain access again. Once you delete the VM, data on those volumes will be deleted as well, and cannot be recovered.&lt;br /&gt;
&lt;br /&gt;
To be able to access results in the cloud even when a VM is stopped, or deleted, copy the results to a network share on the cloud. On the VM, you may be able to mount this storage as a network share, or tools will be available to perform a copy to cloud storage, depending on the cloud provider and operating system you are using.&lt;br /&gt;
== Q10: Why is my run on the cloud slower than I expected based on the specs? ==&lt;br /&gt;
Although cloud hardware may be faster for some use cases, and certainly a lot more expensive to purchase, it is not guaranteed to run your TUFLOW model faster. This mostly depends on how modern the NVIDIA hardware architecture is, how many CUDA cores it has available and specific metrics of the hardware like the amount of memory, the clock speed of the memory, the clock speed of the cores, and how the GPU is connected to the rest of the hardware. For a good assessment of whether you should expect better performance, refer to our [[Hardware Benchmarking (2018-03-AA)|Hardware Benchmarking]] pages.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re wondering why TUFLOW software doesn&#039;t benefit from these supposedly faster and more expensive GPUs, consider that a GPU has many different features, and TUFLOW only makes use of an important subset of these. Also, most TUFLOW models are executed using the single-precision floating-point executable, which is faster than the double-precision executable. Desktop GPUs are highly optimised for single-precision compute, because that is what benefits gaming and, as it happens, TUFLOW runs. Data centre GPUs are more optimised for double-precision compute, but most TUFLOW simulations don&#039;t benefit in result quality from using it.&lt;br /&gt;
&lt;br /&gt;
Even when the hardware should be faster according to benchmarks, it&#039;s possible that you have some other restrictions. For one, if your cloud environment shares GPUs between many users, the part of the GPU available to your model run may only see a small percentage of the performance it would show with exclusive access to the GPU. This is particularly true in Virtual Desktop Infrastructure (VDI) setups. The way TUFLOW uses the GPU is very different from normal graphics processing, and VDI solutions are often not good for model running.&lt;br /&gt;
&lt;br /&gt;
Another common cause of slowdown is writing results directly to network shares, which may be accessed over connections that are orders of magnitude slower than local disk access. In these situations, the recommendation is to write results locally (with minimal overhead) on the cloud VM and then copy them to other storage in one go when the run completes. Even if you perform this copy while another run starts, you&#039;ll find that running first and copying after is a lot faster than writing directly to the network share. To understand why, imagine writing and sending an email one word at a time versus writing it all and sending it in one go. The amount of typing is roughly the same, but the word-at-a-time process takes far longer and sends far more data back and forth over the network. Writing results to the network one part at a time instead of all at once is analogous.&lt;br /&gt;
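The write-locally-then-copy pattern can be sketched as follows. All paths are placeholders: in practice the &#039;share&#039; would be a mounted network location, and the result file would be written by the simulation itself rather than faked with an echo.

```shell
# Write results to fast local scratch first, then copy to the share in one go.
SCRATCH="$(mktemp -d)"   # stands in for fast local disk on the VM
SHARE="$(mktemp -d)"     # stands in for the mounted network share

# The simulation would write its outputs here; we fake a single result file.
echo "water levels" > "$SCRATCH/M01_5m_001.flt"

# One bulk copy after the run completes, instead of many small writes
# over the (much slower) network connection.
cp -r "$SCRATCH/." "$SHARE/"
```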
== Q11: How can I lower the cost of running simulations on the cloud? ==&lt;br /&gt;
The first step would be to select the hardware that&#039;s best suited to your needs, at the lowest price, from the most affordable provider.&lt;br /&gt;
&lt;br /&gt;
Secondly, if you get cloud hardware on-demand, you&#039;re paying the highest rates for the flexibility this affords. You can also reserve instances of specific hardware types for periods of one or three years (depending on the cloud provider), dramatically lowering the price - but you will then have to pay for the reserved instances for the entire period. If your organisation is large enough, it can be worthwhile to have access to a pool of reserved resources, as long as the business achieves high utilisation over time, so that you only pay on-demand prices when you exceed your reserved instances.&lt;br /&gt;
&lt;br /&gt;
If you do end up using on-demand hardware, ensure you only run it when you&#039;re actually using it. By automatically turning off VMs when the work is done and copied to appropriate storage, you can save on compute costs - you&#039;re not paying for how much power they use, you&#039;re paying for the hours they&#039;re on. And keep data on cheap storage like blob storage or online file shares, where you pay only for the size you&#039;re using, instead of keeping expensive VMs around that have massive virtual hard drives that you&#039;re paying for as long as they exist, empty or not.&lt;br /&gt;
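As a concrete illustration of &#039;only pay for the hours they&#039;re on&#039;, an end-of-run hook on an Azure VM could deallocate the machine once results are safely copied off. This is a sketch only: the resource group and VM name are placeholders, and the command is merely assembled and printed rather than executed.

```shell
RG="my-modelling-rg"    # placeholder resource group name
VM="tuflow-worker-01"   # placeholder VM name

# 'az vm deallocate' releases the compute so billing for it stops;
# 'az vm stop' alone powers the VM off but leaves it allocated and billed.
CMD="az vm deallocate --resource-group $RG --name $VM"
echo "$CMD"
```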
&lt;br /&gt;
Don&#039;t download data repeatedly, especially if you need access to it frequently. If you only need access to a small part of the data, it may be worth accessing it remotely. But if you need to process entire files, or multiple users need a copy, it will be more economical to download the data to your network once and use it from there.&lt;br /&gt;
&lt;br /&gt;
You may have heard about &#039;spot pricing&#039; for VMs. This may be suitable if you&#039;re running many small simulations in sequence and are not under strict time pressure to deliver results, but in many cases it won&#039;t be ideal, especially if your model is not set up with restart files that are stored away from the VM. The discounts on VMs obtained through spot pricing can be substantial, but we find that the price difference rarely outweighs the added complexity.&lt;br /&gt;
&lt;br /&gt;
If you find that the number of licences you need to scale up model running on the cloud is the main limiting factor for cost, contact our [mailto:sales@tuflow.com TUFLOW Sales] to discuss options for your situation.&lt;br /&gt;
&lt;br /&gt;
Finally, read through these questions, and take the advice given to heart. Optimising your model configuration and making the right choices when running on the cloud can save a lot of run time, and thus cost.&lt;br /&gt;
== Q12: Is there a developed service to run large numbers of model runs on the cloud, if we cannot set it up ourselves? ==&lt;br /&gt;
As of 2019, TUFLOW offer an [[TUFLOW Cloud Simulation Service|on-demand cloud simulation service]] that may suit your needs if your project is sufficiently urgent or large. As of 2023, you may find third parties providing services on the cloud as well, and TUFLOW may support use of its software in such services.&lt;br /&gt;
== Q13: Which machine size / hardware type do you recommend for my model runs? ==&lt;br /&gt;
Hardware selection is very specific to the modelling requirements of each organisation and project. There is no one-size-fits-all recommendation to make. &lt;br /&gt;
&lt;br /&gt;
However, some comments that generally apply:&lt;br /&gt;
&lt;br /&gt;
* As with physical hardware, top speed comes at a premium. If you compare model run times between different VM sizes, you may find that running on the slower machines works out cheaper for a given amount of work than using the faster ones. Of course, you will have to consider project lead time, and time spent on licences as well.&lt;br /&gt;
* For most cloud providers, the number of vCPU cores scales together with the type and number of available GPUs. And together with vCPUs, the amount of available RAM and storage scales up as well. As a result, you may end up with a lot of unused resources on some machine types.&lt;br /&gt;
* If you&#039;re considering purchasing cloud infrastructure for permanent use, keep in mind that Virtual Desktop Infrastructure solutions often share resources like GPUs between many users. You may find that a specific type of hardware works really well in a test setup, where you&#039;re the only user on it, but performs really poorly when under load from many users. If you purchase access to a cloud VM directly, you will have it all to yourself, but additional infrastructure on top of the VMs may affect your performance greatly.&lt;br /&gt;
* Conversely, when selecting a VM type that provides access to only &#039;half&#039; of a data centre VM, you don&#039;t have to worry about negative performance impact. This type of sharing (that your IT can also achieve in your own data centre with NVIDIA MIG) still ensures that you always get full access to a dedicated part of the GPU and performance should be as expected.&lt;br /&gt;
== Technical Terms Glossary ==&lt;br /&gt;
A brief explanation of some of the technical terms used in this FAQ, and relevant to cloud model running:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Azure Batch&#039;&#039;&#039;: A cloud computing service provided by Microsoft Azure for running large-scale parallel and batch compute jobs.&lt;br /&gt;
*&#039;&#039;&#039;Azure Blob&#039;&#039;&#039;: Microsoft Azure&#039;s object storage solution for the cloud.&lt;br /&gt;
*&#039;&#039;&#039;AWS Batch&#039;&#039;&#039;: Amazon Web Services&#039; batch computing service that enables the processing of a large number of batch jobs.&lt;br /&gt;
*&#039;&#039;&#039;Blob Storage (Binary Large Object Storage)&#039;&#039;&#039;: A storage service for large amounts of unstructured data.&lt;br /&gt;
*&#039;&#039;&#039;CodeMeter&#039;&#039;&#039;: A software technology developed by WIBU, used by TUFLOW for software licensing and protection.&lt;br /&gt;
*&#039;&#039;&#039;CUDA&#039;&#039;&#039;: A parallel computing platform and application programming interface model created by Nvidia.&lt;br /&gt;
*&#039;&#039;&#039;GPU (Graphics Processing Unit)&#039;&#039;&#039;: A specialized processor designed to accelerate graphics rendering.&lt;br /&gt;
*&#039;&#039;&#039;Google Cloud Batch&#039;&#039;&#039;: A batch computing service provided by Google Cloud, similar in functionality to Azure Batch and AWS Batch.&lt;br /&gt;
*&#039;&#039;&#039;Network File Shares&#039;&#039;&#039;: Storage locations on a network that multiple users can access to store and retrieve files, as on a regular file system.&lt;br /&gt;
*&#039;&#039;&#039;NVIDIA MIG (Multi-Instance GPU)&#039;&#039;&#039;: A technology that provides hardware partitioning of NVIDIA GPUs.&lt;br /&gt;
*&#039;&#039;&#039;On-Demand Compute&#039;&#039;&#039;: A cloud computing service model where computing resources are made available immediately to the user as needed.&lt;br /&gt;
*&#039;&#039;&#039;Remote Desktop&#039;&#039;&#039;: A program or feature that allows a user to connect to a computer in another location, see that computer&#039;s desktop, and interact with it as if it were local.&lt;br /&gt;
*&#039;&#039;&#039;S3 Buckets&#039;&#039;&#039;: Amazon Web Services&#039; scalable storage buckets in the cloud.&lt;br /&gt;
*&#039;&#039;&#039;Spot Pricing&#039;&#039;&#039;: A pricing model in cloud computing where available compute capacity can be purchased at potentially lower costs compared to on-demand rates.&lt;br /&gt;
*&#039;&#039;&#039;SSH (Secure Shell)&#039;&#039;&#039;: A cryptographic network protocol for operating network services securely over an unsecured network.&lt;br /&gt;
*&#039;&#039;&#039;Virtual Desktop Infrastructure (VDI)&#039;&#039;&#039;: Technology that hosts a desktop operating system on a centralized server in a data center.&lt;br /&gt;
*&#039;&#039;&#039;VM (Virtual Machine)&#039;&#039;&#039;: A software emulation of a physical computer that runs an operating system and applications just like a physical machine.&lt;br /&gt;
*&#039;&#039;&#039;VNC (Virtual Network Computing)&#039;&#039;&#039;: A graphical desktop-sharing system that uses the Remote Frame Buffer protocol to remotely control another computer.&lt;br /&gt;
*&#039;&#039;&#039;X-Server&#039;&#039;&#039;: A program that manages and displays graphical user interfaces in a Unix or Linux environment.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36120</id>
		<title>Organisation Cloud Software Execution</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36120"/>
		<updated>2023-12-15T04:04:21Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Numbering error&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The TUFLOW &amp;lt;u&amp;gt;[https://www.tuflow.com/Download/Licensing/TUFLOW%20Products%20Licence%20Agreement.pdf End User Licence Agreement]&amp;lt;/u&amp;gt; was updated in 2018 allowing companies to host their own licences on the cloud. The only restrictions associated with users running TUFLOW simulations on their own company public or private cloud environment are:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; The licence must be a “Network” type (use of “Local” licences is not permitted on the cloud).&lt;br /&gt;
&amp;lt;li&amp;gt; Usage of TUFLOW software on a virtual machine is confined to Authorised Users within the Licensee&#039;s Network. This clause means companies cannot on-sell access to TUFLOW licences hosted in the cloud or otherwise (excluding TUFLOW vendor contract arrangements). &lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configuration of your cloud environment is your own responsibility. There are numerous ways TUFLOW licensing and simulation can be configured in a cloud environment depending on the cloud provider (Microsoft, Google, Amazon, etc.) and internal company protocols. We recommend engaging a professional with suitable cloud architecture expertise to design your bespoke system. Clients who have already migrated to the cloud have done so in a variety of ways:&lt;br /&gt;
* Some use a hardware lock (USB) dongle that resides in their office on a physical computer or server. Cloud virtual machines link to the network licence via the IP address of the hardware lock.&lt;br /&gt;
* Others use a software lock. Software locks are a digital licence file that is locked to a dedicated host computer, server or virtual machine. When using a software lock please select the host carefully as the software licence will be bound to it. Relocating the licence to a new location will require TUFLOW sales staff to reissue the licence, which incurs a small administration fee.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Please Note: Network licence rentals can be used to upscale the available licences on your cloud system when demand requires it.&#039;&#039;&#039; &lt;br /&gt;
 Refer to the &amp;lt;u&amp;gt;[https://www.tuflow.com/Prices.aspx TUFLOW Pricelist]&amp;lt;/u&amp;gt; for more information.&lt;br /&gt;
&lt;br /&gt;
This detailed report from the TUFLOW Library discusses some benefits, challenges and solutions relating to cloud computing to help people who are setting up their own system: &lt;br /&gt;
&amp;lt;u&amp;gt;[https://downloads.tuflow.com/Licensing/2021_Running_TUFLOW_on_the_Cloud.pdf Running TUFLOW on the Cloud (Whitepaper)]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:2021_Running_TUFLOW_on_the_Cloud.png]]&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Common Questions Answered (FAQ)=&lt;br /&gt;
== Q1: How do I execute a simulation on the cloud? Can I still use batch files? ==&lt;br /&gt;
Running a simulation on the cloud can be very similar to running it on any other computer. You can access a VM remotely just like you would any other remote computer, using Remote Desktop, SSH, VNC, an X-Server client, etc. - whatever you are used to and whatever is set up on the VM. However, that assumes the VM is set up for that type of access and is running when you need to connect to it. If you want to make use of the real benefits of the cloud, like the ability to run on many computers at once and start them automatically only when needed, working through such an interactive process would be very cumbersome. You may want to consider more advanced techniques like [https://azure.microsoft.com/en-au/products/batch Azure Batch], AWS Batch, or [https://cloud.google.com/batch/docs/get-started Google Cloud Batch].&lt;br /&gt;
&lt;br /&gt;
In either case, you will need access to a TUFLOW licence server from VMs running the model. Have a look at &amp;quot;Do I need a different licence to run models on the cloud?&amp;quot; below. And the VMs will always need to have CodeMeter installed, configured to find the licence you plan to use, as well as appropriate drivers for hardware like GPUs.&lt;br /&gt;
&lt;br /&gt;
When running on the cloud, consider that you may not have network access to locations where you would normally store your results. You may need to set up storage in the cloud separate from the VM, but connected to it, to collect your results and still have them available to you once the VM stops running.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re using remote access to desktop VMs, you can still use &#039;&#039;batch files&#039;&#039; or scripts like you&#039;re used to. If you look into batch services, you will need more involved scripting, and you would typically not use batch files, but split up the work into separate tasks for the cloud platform to schedule on available computers. Keep in mind that this is a substantial and complex task, requiring some development and IT skills. If you plan on this type of cloud use, plan ahead and be ready with a working and tested solution, before you take on a deadline.&lt;br /&gt;
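On an interactively used Linux VM, such a run script might look like the sketch below. The executable and control file names are assumptions (use the ones from your own install), and the command lines are only assembled and printed here, not executed:

```shell
TUFLOW_EXE="TUFLOW_iSP_w64"   # placeholder executable name - adjust to your install

# Assemble the TUFLOW command line for one control file.
# -b: batch mode (no key press at the end); -nc: no console interaction.
build_tuflow_cmd() {
    printf '%s -b -nc %s\n' "$TUFLOW_EXE" "$1"
}

# Loop over the runs; a real script would execute each command
# instead of just printing it.
for tcf in M01_5m_001.tcf M02_5m_001.tcf; do
    build_tuflow_cmd "$tcf"
done
```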
== Q2: Do I need a different TUFLOW executable to run models on the cloud? ==&lt;br /&gt;
No, you can use the same executable appropriate to the operating system you are on. Keep in mind that running TUFLOW with a licence does require that CodeMeter is installed as well and configured to find the licence. And if you are using a GPU on the cloud, you will need to have the appropriate NVIDIA drivers with CUDA installed, and a GPU licence available.&lt;br /&gt;
&lt;br /&gt;
Although you do use the same executable, it may be advantageous to provide some additional command line options to TUFLOW when you run it on the cloud. Since you typically won&#039;t be present and looking at the screen, consider using the `-nc` switch, which prevents user interaction on the console. Also, the familiar `-b` option will prevent the simulation waiting for a key press at the end of the simulation. And finally, given the possible cost of running models at scale, you would do well to test your model with the `-t` switch before sending it to the cloud. In addition to command line options, learn about TUFLOW override files to override configuration that may need to be different on the cloud VM, like the location where TUFLOW should write results.&lt;br /&gt;
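Putting these switches together, a pre-flight check with `-t` followed by the real run might look like this sketch. The executable and control file names are placeholders, and the commands are only assembled and printed here, not executed:

```shell
TUFLOW_EXE="TUFLOW_iSP_w64"   # placeholder - use your installed executable
TCF="M01_5m_001.tcf"          # placeholder control file

# -t runs TUFLOW's input test only, catching configuration errors before
# you pay for cloud compute; -b and -nc keep both runs non-interactive.
TEST_CMD="$TUFLOW_EXE -t -b -nc $TCF"
RUN_CMD="$TUFLOW_EXE -b -nc $TCF"
echo "$TEST_CMD"
echo "$RUN_CMD"
```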
== Q3: What steps do I need to take to run my model on the cloud? ==&lt;br /&gt;
In no particular order:&lt;br /&gt;
&lt;br /&gt;
* Assuming you have chosen a cloud provider you will use, make sure you understand the answers to the previous questions. If some of this is too technical, ensure you go over this with staff with appropriate IT skills and administrative access.&lt;br /&gt;
* With regard to the model itself, ensure that it has no references to files on computers that wouldn&#039;t be accessible from the cloud VM running the model. Ideally, construct your model configuration so that it can be self-contained within a single folder and would run wherever you put it.&lt;br /&gt;
* Ensure you have sufficient TUFLOW licences available and accessible to your cloud VMs to run the number of simulations you plan to run in parallel on the cloud.&lt;br /&gt;
* Ensure you have sufficient quota for the storage and cloud resources you need to run the number of simulations you plan to run, specifically when using the &#039;Batch&#039; services mentioned under Q1.&lt;br /&gt;
* Ensure you have the right level of access to make use of the cloud resources you need, and that you&#039;re able to use and manage them when you do.&lt;br /&gt;
* Ensure that what you&#039;re planning on the cloud complies with your company and client&#039;s security policies for the work. Think about where the cloud computers are, how data is transferred to and from the cloud, and who has access.&lt;br /&gt;
* If you can, pick a region that puts the compute and storage relatively close to your own location, ensuring that your access (or perhaps your clients&#039; access) to them over the internet can achieve good total network speeds.&lt;br /&gt;
* Test your model before putting it on the cloud and test your preferred method of running a model on the cloud before scaling it up.&lt;br /&gt;
* Make sure your model configuration matches your actual needs before sending it to the cloud. Consider the frequency of writing outputs, whether you need check files, etc.&lt;br /&gt;
&lt;br /&gt;
When in doubt, feel free to contact [mailto:support@tuflow.com TUFLOW Support] and [mailto:sales@tuflow.com TUFLOW Sales] with questions, but keep in mind that we can only offer limited guidance when it comes to the specifics of your chosen cloud provider, and that your company&#039;s IT policies may further limit your options.&lt;br /&gt;
&lt;br /&gt;
== Q4: How can I download the simulation results? ==&lt;br /&gt;
This depends on your chosen solution.&lt;br /&gt;
&lt;br /&gt;
If you have cloud VMs that have access to your company&#039;s internal network, you may be able to copy the results automatically (with a script or batch file) after a simulation completes, and no download would be needed. If you have cloud VMs that you interactively use remotely, you can use whatever tools you would use from any remote machine, like OneDrive, Dropbox, FTP, SSH, to name but a few.&lt;br /&gt;
&lt;br /&gt;
However, all cloud service providers also provide cloud storage, and it may be cheaper and faster to keep unprocessed results in the cloud. Once a run completes, you typically do not want to keep the results on storage that is local to the VM that ran the model (e.g. its C: or D: drive on a Windows computer), unless you plan to use the same VM for post-processing of the results. But you can set up network file shares in the cloud that can be connected to your VM as extra drives or mounts, or you can make use of blob storage like Azure Blob, S3 Buckets, etc. Depending on the cloud service provider, there will be relatively user-friendly tools to access these remotely and download your data later.&lt;br /&gt;
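For example, with Azure Blob Storage the bulk copy could be done with azcopy. The account, container and SAS token below are placeholders, and the command is only assembled and printed here, not executed:

```shell
SRC="/mnt/scratch/results"                                        # placeholder local results folder
DST="https://myaccount.blob.core.windows.net/results?SAS_TOKEN"   # placeholder container URL + SAS token

# --recursive copies the whole results folder tree in one bulk operation.
CMD="azcopy copy $SRC $DST --recursive"
echo "$CMD"
```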
&lt;br /&gt;
For particularly massive datasets, some cloud providers also offer services where they can put the data on physical media and ship them to you. However, keep in mind that this takes substantial time to reserve beforehand and then some time to execute after you complete the work. And the service may not be available for smaller volumes you may need.&lt;br /&gt;
&lt;br /&gt;
Finally, at the risk of stating the obvious: perform the download on a good internet connection. Cloud providers charge a small amount per GB downloaded, and in return they offer very good download speeds for your data. But your internet connection may end up limiting how quickly you get your data to your computer.&lt;br /&gt;
== Q5: What are the benefits of running a simulation in the cloud rather than locally? ==&lt;br /&gt;
Not all benefits apply in all cases, but consider these:&lt;br /&gt;
&lt;br /&gt;
* You can get access to as many cloud VMs (and GPUs) as you need to run as many simulations in parallel as you need, provided you have sufficient licences and quota with the provider.&lt;br /&gt;
* If you only need compute infrequently, it&#039;s there in the cloud when you need it and you only pay for it when you use it.&lt;br /&gt;
* If your workload suddenly increases (which may be a good thing), you can quickly increase the amount of compute with cloud computing, provided you&#039;re set up to do so.&lt;br /&gt;
* Most cloud providers offer access to a variety of very capable hardware that may allow you to run larger or longer-running models than you could on your own hardware.&lt;br /&gt;
* If you collaborate with others from various locations (wherever they are in the world), having the results in the cloud may be a real benefit.&lt;br /&gt;
&lt;br /&gt;
However, there are some potential downsides to consider as well:&lt;br /&gt;
&lt;br /&gt;
* If you make efficient use of hardware you own, the compute is likely cheaper per model run than cloud computing, especially on-demand compute.&lt;br /&gt;
* Although it&#039;s not very complicated to set up a VM for cloud runs and to get up and running, it may be complicated to do so in a way that satisfies your company or client&#039;s security policies.&lt;br /&gt;
* Similarly, just running some models on an interactively accessible VM may be simple, but developing scripts for automated model running may require time and skills beyond what you have available yourself.&lt;br /&gt;
&lt;br /&gt;
== Q6: Do I need to add in any extra commands in my control files? ==&lt;br /&gt;
If your model is self-contained and could run from its folder on any computer, perhaps not. However, you may want to change where a VM in the cloud tries to write its results, for example. You can achieve that with extra commands in your control files, but also consider using TUFLOW override control files, which you can tailor to the cloud VMs you&#039;re using, without affecting the control files you use for running or testing locally.&lt;br /&gt;
&lt;br /&gt;
To keep costs of storage and transport manageable, as well as saving on some run time, configure your model to write only the outputs you need. This includes selecting the right variables to output, at the appropriate time intervals. Have a look at our [https://www.youtube.com/watch?v=-CsKKjG7jpQ Output Management Advice] webinar (15 minutes) for more tips on that.&lt;br /&gt;
&lt;br /&gt;
Also look at the command line switches mentioned in the answer to Q2.&lt;br /&gt;
== Q7: Do I need a different licence to run models on the cloud? ==&lt;br /&gt;
Not necessarily, but there are some things to keep in mind. If your existing licences are on a dongle, they would need to be network licences and the server they are installed on would have to be accessible over the network from the cloud VMs you&#039;re looking to run models on. If you have sufficient existing network licences you can use in this manner, including licences for special hardware you&#039;d be using on the cloud (like a GPU), you will not need different licences.&lt;br /&gt;
&lt;br /&gt;
You can also set up a dedicated VM to run a small CodeMeter network licence server in the cloud for software network licences. But keep in mind that licences on such a server cannot be moved elsewhere - they are bound to this specific VM. Access to this licence server would be limited to VMs in the cloud, on the same virtual network as the licence server. Or you&#039;d need to have someone with the appropriate IT skills make the licence server accessible from all locations you need access from.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may be able to make use of web licences, please contact [mailto:sales@tuflow.com TUFLOW Sales] for more information on that.&lt;br /&gt;
== Q8: What can go wrong when running models on the cloud? ==&lt;br /&gt;
For starters, almost everything that can go wrong when running models locally, although power failures and loss of network connection are exceedingly rare on the cloud.&lt;br /&gt;
&lt;br /&gt;
Common problems arise from the differences in the computer&#039;s environment: software you may have installed that batch files rely on, software required to run TUFLOW (CodeMeter, NVIDIA drivers for GPU), access to networked resources you get inputs from, or write results to, etc.&lt;br /&gt;
&lt;br /&gt;
Also, if you&#039;re using Batch services from your cloud provider, once a VM completes its tasks, it may disappear. If something went wrong during the run, you may have very limited access to information about what went wrong, so you want to be careful about logging and where logs are written to.&lt;br /&gt;
&lt;br /&gt;
Similarly, but much simpler: if you run models interactively on a desktop VM, once you turn it off, you will no longer have access to its local storage. And once you remove the VM to save on cost, keep in mind that its attached disk storage will be removed as well, so ensure you have your results in a safe place before that.&lt;br /&gt;
&lt;br /&gt;
Finally, access to licences using CodeMeter from the cloud VM can sometimes cause complications, as can user access to the VM or the data, depending on your IT setup.&lt;br /&gt;
&lt;br /&gt;
None of these should stop you from trying, but ensure everything works like you expect, before scaling up to many model runs at once.&lt;br /&gt;
== Q9: If I stop the cloud VM after models are finished, can I still download the results? ==&lt;br /&gt;
If the results were written to local storage on the VM (like the default C: or D: drive on a Windows VM), you will only be able to access these when the cloud VM is running. If you stopped it, you could restart it to gain access again. Once you delete the VM, data on those volumes will be deleted as well, and cannot be recovered.&lt;br /&gt;
&lt;br /&gt;
To be able to access results in the cloud even when a VM is stopped, or deleted, copy the results to a network share on the cloud. On the VM, you may be able to mount this storage as a network share, or tools will be available to perform a copy to cloud storage, depending on the cloud provider and operating system you are using.&lt;br /&gt;
== Q10: Why is my run on the cloud slower than I expected based on the specs? ==&lt;br /&gt;
Although cloud hardware may be faster for some use cases, and certainly a lot more expensive to purchase, it is not guaranteed to run your TUFLOW model faster. This mostly depends on how modern the NVIDIA hardware architecture is, how many CUDA cores it has available, and specific metrics of the hardware like the amount of memory, the clock speed of the memory, the clock speed of the cores, and how the GPU is connected to the rest of the hardware. For a good assessment of whether you should expect better performance, refer to our [[Hardware Benchmarking (2018-03-AA)|Hardware Benchmarking]] pages.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re wondering why TUFLOW software doesn&#039;t benefit from these supposedly faster and more expensive GPUs, consider that a GPU has many different features, and TUFLOW only makes use of an important subset of these. Also, most TUFLOW models are executed using the single-precision floating-point executable, which is faster than the double-precision executable. Desktop GPUs are highly optimised for single-precision compute, because this is what benefits gaming and, as it happens, TUFLOW model runs. Data centre GPUs are more optimised for double-precision compute, but most TUFLOW simulations don&#039;t benefit in result quality from using it.&lt;br /&gt;
&lt;br /&gt;
Even when the hardware should be faster according to benchmarks, it&#039;s possible that you have some other restrictions. For one, if your cloud environment shares GPUs between many users, the part of the GPU available to your model run may only see a small percentage of the performance it would show with exclusive access to the GPU. This is particularly true in Virtual Desktop Infrastructure (VDI) setups. The way TUFLOW uses the GPU is very different from normal graphics processing, and VDI solutions are often not good for model running.&lt;br /&gt;
&lt;br /&gt;
Another common cause of slowdowns is writing results directly to network shares, over network connections that are orders of magnitude slower than local disk access. In these situations, the recommendation is to write results locally (with minimal overhead) on the cloud VM and then copy the results to other storage in one go, when the run completes. Even if you perform this copy while another run starts, you&#039;ll find that running first and copying after is a lot faster than writing directly to the network share. To understand why, imagine sending an email one word at a time, instead of writing it all and sending it in one go. The amount of typing is roughly the same, but the word-at-a-time process takes far longer, with the network sending far more data back and forth. Writing results to the network one part at a time, instead of all at once, is analogous.&lt;br /&gt;
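As a rough illustration of why one bulk copy wins, the following sketch compares the two approaches. All latency and throughput figures are assumptions for illustration, not measurements of any real network:&lt;br /&gt;

```python
# Rough, illustrative comparison of writing each output over the network
# versus writing locally and copying once. All figures below are assumed
# placeholders, not measurements.
PER_WRITE_LATENCY_S = 0.05   # round-trip overhead per network write (assumed)
NETWORK_BW_MBPS = 100.0      # sustained network throughput (assumed)
LOCAL_BW_MBPS = 2000.0       # local disk throughput (assumed)

def incremental_network_writes(n_writes, mb_per_write):
    # each small write pays the round-trip latency plus its transfer time
    return n_writes * (PER_WRITE_LATENCY_S + mb_per_write / NETWORK_BW_MBPS)

def local_then_bulk_copy(n_writes, mb_per_write):
    total_mb = n_writes * mb_per_write
    # write everything locally, then one sequential copy over the network
    return total_mb / LOCAL_BW_MBPS + PER_WRITE_LATENCY_S + total_mb / NETWORK_BW_MBPS

# 10,000 small result writes of 1 MB each:
print(incremental_network_writes(10_000, 1.0))  # roughly 600 s
print(local_then_bulk_copy(10_000, 1.0))        # roughly 105 s
```

The data volume is identical in both cases; the per-write latency, paid thousands of times, dominates the incremental approach.&lt;br /&gt;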
== Q11: How can I lower the cost of running simulations on the cloud? ==&lt;br /&gt;
The first step would be to select the hardware that&#039;s best suited to your needs, at the lowest price, from the most affordable provider.&lt;br /&gt;
&lt;br /&gt;
Secondly, if you get cloud hardware on-demand, you&#039;re paying the highest rates for the flexibility this affords. You can also reserve instances of specific hardware types, for periods like a year, or three years (depending on the cloud provider), dramatically lowering the price - but then you will have to pay for the entire period for the reserved instances. If your organisation is large enough, it can be worthwhile to have access to a pool of reserved resources, as long as the business achieves high utilisation over time, so that you only pay on-demand prices when you exceed your reserved instances.&lt;br /&gt;
&lt;br /&gt;
If you do end up using on-demand hardware, ensure you only run it when you&#039;re actually using it. By automatically turning off VMs when the work is done and copied to appropriate storage, you can save on compute costs - you&#039;re not paying for how much power they use, you&#039;re paying for the hours they&#039;re on. And keep data on cheap storage like blob storage or online file shares, where you pay only for the size you&#039;re using, instead of keeping expensive VMs around that have massive virtual hard drives that you&#039;re paying for as long as they exist, empty or not.&lt;br /&gt;
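To see how turning VMs off compares with storage costs, here is a toy calculation. Both rates are made-up placeholders, not real provider pricing; check your provider&#039;s actual rates:&lt;br /&gt;

```python
# Toy comparison: a VM is billed per hour it is powered on, while blob
# storage is billed per GB per month. Both rates below are made-up
# placeholders, not real provider pricing.
VM_RATE_PER_HOUR = 4.00        # assumed on-demand rate for a GPU VM
BLOB_RATE_PER_GB_MONTH = 0.02  # assumed blob storage rate

def vm_idle_cost(idle_hours):
    # an idle VM costs the same per hour as a busy one
    return idle_hours * VM_RATE_PER_HOUR

def blob_storage_cost(gb, months):
    return gb * months * BLOB_RATE_PER_GB_MONTH

# Leaving the VM on overnight (12 h) just to hold 500 GB of results
# costs more than keeping those results on blob storage for a month:
print(vm_idle_cost(12))           # 48.0
print(blob_storage_cost(500, 1))  # roughly 10
```

With these illustrative rates, one idle night on the VM costs almost five times a full month of blob storage for the same data.&lt;br /&gt;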
&lt;br /&gt;
Don&#039;t download data repeatedly, especially if you need access to it frequently. If you only need a small part of the data, it may be worth accessing it remotely. But if you need to process entire files, or multiple users need a copy, it will be more economical to download the data to your network once and use it from there.&lt;br /&gt;
&lt;br /&gt;
You may have heard about &#039;spot pricing&#039; for VMs. This may be suitable if you&#039;re running many small simulations in sequence, and if you&#039;re not under strict time pressure to deliver results, but in many cases it won&#039;t be ideal, especially if your model is not set up with restart files that get stored away from the VM. Although the discounts on VMs obtained through spot pricing can be substantial, we find that the price difference rarely outweighs the added complexity.&lt;br /&gt;
&lt;br /&gt;
If you find that the number of licences you need to scale up model running on the cloud is the main limiting factor for cost, contact our [mailto:sales@tuflow.com TUFLOW Sales] to discuss options for your situation.&lt;br /&gt;
&lt;br /&gt;
Finally, read through these questions, and take the advice given to heart. Optimising your model configuration and making the right choices when running on the cloud can save a lot of run time, and thus cost.&lt;br /&gt;
== Q12: Is there a developed service to run large numbers of model runs on the cloud, if we cannot set it up ourselves? ==&lt;br /&gt;
As of 2019, TUFLOW offer an [[TUFLOW Cloud Simulation Service|on-demand cloud simulation service]] that may suit your needs if your project is sufficiently urgent or large. As of 2023, you may find third parties providing services on the cloud as well, and TUFLOW may support use of its software in such services.&lt;br /&gt;
== Q13: Which machine size / hardware type do you recommend for my model runs? ==&lt;br /&gt;
Hardware selection is very specific to the modelling requirements of each organisation and project. There is no one-size-fits-all recommendation to make. &lt;br /&gt;
&lt;br /&gt;
However, some comments that generally apply:&lt;br /&gt;
&lt;br /&gt;
* As with physical hardware, top speed comes at a premium. If you compare model run times between different VM sizes, you may find that running on the slower machines works out cheaper for a given amount of work than using the faster ones. Of course, you will also have to consider project lead time and time spent on licences.&lt;br /&gt;
* For most cloud providers, the number of vCPU cores scales together with the type and number of available GPUs. And together with vCPUs, the amount of available RAM and storage scales up as well. As a result, you may end up with a lot of unused resources on some machine types.&lt;br /&gt;
* If you&#039;re considering purchasing cloud infrastructure for permanent use, keep in mind that Virtual Desktop Infrastructure solutions often share resources like GPUs between many users. You may find that a specific type of hardware works really well in a test setup, where you&#039;re the only user on it, but performs really poorly when under load from many users. If you purchase access to a cloud VM directly, you will have it all to yourself, but additional infrastructure on top of the VMs may affect your performance greatly.&lt;br /&gt;
* Conversely, when selecting a VM type that provides access to only &#039;half&#039; of a data centre VM, you don&#039;t have to worry about negative performance impact. This type of sharing (that your IT can also achieve in your own data centre with NVIDIA MIG) still ensures that you always get full access to a dedicated part of the GPU and performance should be as expected.&lt;br /&gt;
== Technical Terms Glossary ==&lt;br /&gt;
A brief explanation of some of the technical terms used in this FAQ, and relevant to cloud model running:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Azure Batch&#039;&#039;&#039;: A cloud computing service provided by Microsoft Azure for running large-scale parallel and batch compute jobs.&lt;br /&gt;
* &#039;&#039;&#039;AWS Batch&#039;&#039;&#039;: Amazon Web Services&#039; batch computing service that enables the processing of a large number of batch jobs.&lt;br /&gt;
* &#039;&#039;&#039;Google Cloud Batch&#039;&#039;&#039;: A batch computing service provided by Google Cloud, similar in functionality to Azure Batch and AWS Batch.&lt;br /&gt;
* &#039;&#039;&#039;VM (Virtual Machine)&#039;&#039;&#039;: A software emulation of a physical computer that runs an operating system and applications just like a physical machine.&lt;br /&gt;
* &#039;&#039;&#039;Remote Desktop&#039;&#039;&#039;: A program or feature that allows a user to connect to a computer in another location, see that computer&#039;s desktop, and interact with it as if it were local.&lt;br /&gt;
* &#039;&#039;&#039;SSH (Secure Shell)&#039;&#039;&#039;: A cryptographic network protocol for operating network services securely over an unsecured network.&lt;br /&gt;
* &#039;&#039;&#039;VNC (Virtual Network Computing)&#039;&#039;&#039;: A graphical desktop-sharing system that uses the Remote Frame Buffer protocol to remotely control another computer.&lt;br /&gt;
* &#039;&#039;&#039;X-Server&#039;&#039;&#039;: A program that manages and displays graphical user interfaces in a Unix or Linux environment.&lt;br /&gt;
* &#039;&#039;&#039;CodeMeter&#039;&#039;&#039;: A software technology developed by WIBU, used by TUFLOW for software licensing and protection.&lt;br /&gt;
* &#039;&#039;&#039;GPU (Graphics Processing Unit)&#039;&#039;&#039;: A specialized processor designed to accelerate graphics rendering.&lt;br /&gt;
* &#039;&#039;&#039;CUDA&#039;&#039;&#039;: A parallel computing platform and application programming interface model created by Nvidia.&lt;br /&gt;
* &#039;&#039;&#039;Blob Storage (Binary Large Object Storage)&#039;&#039;&#039;: A storage service for large amounts of unstructured data.&lt;br /&gt;
* &#039;&#039;&#039;Azure Blob&#039;&#039;&#039;: Microsoft Azure&#039;s object storage solution for the cloud.&lt;br /&gt;
* &#039;&#039;&#039;S3 Buckets&#039;&#039;&#039;: Amazon Web Services&#039; scalable storage buckets in the cloud.&lt;br /&gt;
* &#039;&#039;&#039;Network File Shares&#039;&#039;&#039;: Storage locations on a network that multiple users can access to store and retrieve files, as on a regular file system.&lt;br /&gt;
* &#039;&#039;&#039;Virtual Desktop Infrastructure (VDI)&#039;&#039;&#039;: Technology that hosts a desktop operating system on a centralized server in a data center.&lt;br /&gt;
* &#039;&#039;&#039;NVIDIA MIG (Multi-Instance GPU)&#039;&#039;&#039;: A technology that provides hardware partitioning of NVIDIA GPUs.&lt;br /&gt;
* &#039;&#039;&#039;Spot Pricing&#039;&#039;&#039;: A pricing model in cloud computing where available compute capacity can be purchased at potentially lower costs compared to on-demand rates.&lt;br /&gt;
* &#039;&#039;&#039;On-Demand Compute&#039;&#039;&#039;: A cloud computing service model where computing resources are made available immediately to the user as needed.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36119</id>
		<title>Organisation Cloud Software Execution</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36119"/>
		<updated>2023-12-15T04:03:38Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Added a technical term glossary at the end.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The TUFLOW &amp;lt;u&amp;gt;[https://www.tuflow.com/Download/Licensing/TUFLOW%20Products%20Licence%20Agreement.pdf End User Licence Agreement]&amp;lt;/u&amp;gt; was updated in 2018 allowing companies to host their own licences on the cloud. The only restrictions associated with users running TUFLOW simulations on their own company public or private cloud environment are:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; The licence must be a “Network” type (use of “Local” licences is not permitted on the cloud).&lt;br /&gt;
&amp;lt;li&amp;gt; Usage of TUFLOW software on a virtual machine is confined to Authorised Users within the Licensee&#039;s Network. This clause means companies cannot on-sell access to TUFLOW licences hosted in the cloud or otherwise (excluding TUFLOW vendor contract arrangements). &lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configuration of your cloud environment is your own responsibility. There are numerous ways TUFLOW licensing and simulation can be configured in a cloud environment, depending on the cloud provider (Microsoft, Google, Amazon, etc.) and internal company protocols. We recommend engaging a professional with suitable cloud architecture expertise to design your bespoke system. Clients who have already migrated to the cloud have done so in a variety of ways:&lt;br /&gt;
* Some use a hardware lock (USB) dongle that resides in their office on a physical computer or server. Cloud virtual machines link to the network licence via the IP address of the hardware lock.&lt;br /&gt;
* Others use a software lock. Software locks are a digital licence file that is locked to a dedicated host computer, server or virtual machine. When using a software lock please select the host carefully as the software licence will be bound to it. Relocating the licence to a new location will require TUFLOW sales staff to reissue the licence, which incurs a small administration fee.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Please Note: Network licence rentals can be used to upscale the available licences on your cloud system when demand requires it.&#039;&#039;&#039; &lt;br /&gt;
 Refer to the &amp;lt;u&amp;gt;[https://www.tuflow.com/Prices.aspx TUFLOW Pricelist]&amp;lt;/u&amp;gt; for more information.&lt;br /&gt;
&lt;br /&gt;
This detailed report from the TUFLOW Library discusses some benefits, challenges and solutions relating to cloud computing to help people who are setting up their own system: &lt;br /&gt;
&amp;lt;u&amp;gt;[https://downloads.tuflow.com/Licensing/2021_Running_TUFLOW_on_the_Cloud.pdf Running TUFLOW on the Cloud (Whitepaper)]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:2021_Running_TUFLOW_on_the_Cloud.png]]&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Common Questions Answered (FAQ)=&lt;br /&gt;
== Q1: How do I execute a simulation on the cloud? Can I still use batch files? ==&lt;br /&gt;
Running a simulation on the cloud can be very similar to running it on any other computer. You can access a VM remotely just like you would any other remote computer, using Remote Desktop, SSH, VNC, an X-Server client, etc. - whatever you are used to and what is set up on the VM. However, that assumes the VM is set up for that type of access and is running when you need to connect to it. If you want to make use of the real benefits of the cloud, like the ability to run on many computers at once, starting them automatically only when needed, such a manual process would be very cumbersome. Consider looking at more advanced techniques like [https://azure.microsoft.com/en-au/products/batch Azure Batch], AWS Batch, or [https://cloud.google.com/batch/docs/get-started Google Cloud Batch].&lt;br /&gt;
&lt;br /&gt;
In either case, you will need access to a TUFLOW licence server from VMs running the model. Have a look at &amp;quot;Do I need a different licence to run models on the cloud?&amp;quot; below. And the VMs will always need to have CodeMeter installed, configured to find the licence you plan to use, as well as appropriate drivers for hardware like GPUs.&lt;br /&gt;
&lt;br /&gt;
When running on the cloud, consider that you may not have network access to locations where you would normally store your results. You may need to set up storage in the cloud separate from the VM, but connected to it, to collect your results and still have them available to you once the VM stops running.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re using remote access to desktop VMs, you can still use &#039;&#039;batch files&#039;&#039; or scripts like you&#039;re used to. If you look into batch services, you will need more involved scripting, and you would typically not use batch files, but split up the work into separate tasks for the cloud platform to schedule on available computers. Keep in mind that this is a substantial and complex task, requiring some development and IT skills. If you plan on this type of cloud use, plan ahead and be ready with a working and tested solution, before you take on a deadline.&lt;br /&gt;
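For scripted runs, the work can be split into one command per model, ready to hand to a batch service or run in sequence. A minimal sketch, assuming a folder of &#039;&#039;.tcf&#039;&#039; files; the executable name is a placeholder (use the path to your actual TUFLOW executable), and the &#039;&#039;-b&#039;&#039; and &#039;&#039;-nc&#039;&#039; switches are discussed under Q2:&lt;br /&gt;

```python
from pathlib import Path

# Sketch: turn a folder of models into one command per run, as you might
# feed them to a batch service or a simple sequential script. The
# executable name is a placeholder; -b (batch) and -nc (no console
# interaction) are the switches discussed under Q2.
def build_run_commands(model_dir, exe="TUFLOW_iSP_w64.exe"):
    tcfs = sorted(Path(model_dir).glob("*.tcf"))
    return [[exe, "-b", "-nc", str(tcf)] for tcf in tcfs]
```

Each command list can then become one task for the cloud platform to schedule, or be passed to &#039;&#039;subprocess.run&#039;&#039; in a local loop.&lt;br /&gt;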
== Q2: Do I need a different TUFLOW executable to run models on the cloud? ==&lt;br /&gt;
No, you can use the same executable appropriate to the operating system you are on. Keep in mind that running TUFLOW with a licence does require that CodeMeter is installed as well and configured to find the licence. And if you are using a GPU on the cloud, you will need to have the appropriate NVIDIA drivers with CUDA installed, and a GPU licence available.&lt;br /&gt;
&lt;br /&gt;
Although you do use the same executable, it may be advantageous to provide some additional command line options to TUFLOW when you run it on the cloud. Since you typically won&#039;t be present and looking at the screen, consider using the &amp;lt;code&amp;gt;-nc&amp;lt;/code&amp;gt; switch, which prevents user interaction on the console. Also, the familiar &amp;lt;code&amp;gt;-b&amp;lt;/code&amp;gt; option will prevent the simulation from waiting for a key press at the end of the simulation. And finally, given the possible cost of running models at scale, you would do well to test your model with the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; switch before sending it to the cloud. In addition to command line options, learn about TUFLOW override files to override configuration that may need to be different on the cloud VM, like the location where TUFLOW should write results.&lt;br /&gt;
== Q3: What steps do I need to take to run my model on the cloud? ==&lt;br /&gt;
In no particular order:&lt;br /&gt;
&lt;br /&gt;
* Assuming you have chosen a cloud provider you will use, make sure you understand the answers to the previous questions. If some of this is too technical, ensure you go over this with staff with appropriate IT skills and administrative access.&lt;br /&gt;
* With regard to the model itself, ensure that it has no references to files on computers that wouldn&#039;t be accessible from the cloud VM running the model. Ideally, construct your model configuration so that it can be self-contained within a single folder and would run wherever you put it.&lt;br /&gt;
* Ensure you have sufficient TUFLOW licences available and accessible to your cloud VMs to run the number of simulations you plan to run in parallel on the cloud.&lt;br /&gt;
* Ensure you have sufficient quota for the storage and cloud resources needed to run the number of simulations you plan to run, specifically when using the &#039;Batch&#039; services mentioned under Q1.&lt;br /&gt;
* Ensure you have the right level of access to make use of the cloud resources you need, and that you&#039;re able to use and manage them when you do.&lt;br /&gt;
* Ensure that what you&#039;re planning on the cloud complies with your company and client&#039;s security policies for the work. Think about where the cloud computers are, how data is transferred to and from the cloud, and who has access.&lt;br /&gt;
* If you can, pick a region that puts the compute and storage relatively close to your own location, ensuring that your access (or perhaps your clients&#039; access) to them over the internet can achieve good total network speeds.&lt;br /&gt;
* Test your model before putting it on the cloud and test your preferred method of running a model on the cloud before scaling it up.&lt;br /&gt;
* Make sure your model configuration matches your actual needs before sending it to the cloud. Consider the frequency of writing outputs, whether you need check files, etc.&lt;br /&gt;
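One of the steps above is making the model self-contained, with no references to files that won&#039;t exist on the cloud VM. A heuristic sketch of a pre-flight check that flags absolute Windows paths in control files (the regex and the scanned extensions are assumptions; adjust them to your setup):&lt;br /&gt;

```python
import re
from pathlib import Path

# Heuristic check that a model folder is self-contained: flag control-file
# lines referencing absolute Windows paths (drive letters like C:\ or UNC
# paths like \\server\share), which would break on a cloud VM. The scanned
# extensions are common TUFLOW control files; both the pattern and the
# extension list are illustrative assumptions.
ABS_PATH = re.compile(r"[A-Za-z]:\\|\\\\")

def find_absolute_refs(model_dir, patterns=("*.tcf", "*.tgc", "*.tbc", "*.ecf")):
    hits = []
    for pattern in patterns:
        for path in Path(model_dir).rglob(pattern):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
                if ABS_PATH.search(line):
                    hits.append((path.name, lineno, line.strip()))
    return hits
```

An empty result doesn&#039;t guarantee portability (relative paths can still climb out of the folder), but any hit is worth fixing before uploading.&lt;br /&gt;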
&lt;br /&gt;
When in doubt, feel free to contact [mailto:support@tuflow.com TUFLOW Support] and [mailto:sales@tuflow.com TUFLOW Sales] with questions, but keep in mind that we can only offer limited guidance when it comes to the specifics of your chosen cloud provider, and that your company&#039;s IT policies may further limit your options.&lt;br /&gt;
&lt;br /&gt;
== Q4: How can I download the simulation results? ==&lt;br /&gt;
This depends on your chosen solution.&lt;br /&gt;
&lt;br /&gt;
If you have cloud VMs that have access to your company&#039;s internal network, you may be able to copy the results automatically (with a script or batch file) after a simulation completes, and no download would be needed. If you have cloud VMs that you interactively use remotely, you can use whatever tools you would use from any remote machine, like OneDrive, Dropbox, FTP, SSH, to name but a few.&lt;br /&gt;
&lt;br /&gt;
However, all cloud service providers also provide cloud storage, and it may be cheaper and faster to keep unprocessed results in the cloud. Once a run completes, you typically do not want to keep the results on storage that is local to the VM that ran the model (e.g. its C: or D: drive on a Windows computer), unless you plan to use the same VM for post-processing of the results. But you can set up network file shares in the cloud that can be connected to your VM as extra drives or mounts, or you can make use of blob storage like Azure Blob, S3 Buckets, etc. Depending on the cloud service provider, there will be relatively user-friendly tools to access these remotely and download your data later.&lt;br /&gt;
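The scripted copy mentioned above can be as simple as one recursive copy after the run finishes. A minimal sketch of the &#039;write locally, copy once&#039; pattern; the paths are placeholders for a local results folder and a cloud file share mounted on the VM:&lt;br /&gt;

```python
import shutil
from pathlib import Path

# Minimal sketch: after a run completes, copy the whole local results
# folder to storage mounted on the VM (e.g. a cloud file share mapped as
# a drive or mount point). Paths are placeholders for illustration.
def archive_results(local_results, mounted_share, run_name):
    dest = Path(mounted_share) / run_name
    shutil.copytree(local_results, dest, dirs_exist_ok=True)
    return dest
```

Run this at the end of a batch script or wrapper, so the results survive the VM being stopped or deleted (see Q10).&lt;br /&gt;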
&lt;br /&gt;
For particularly massive datasets, some cloud providers also offer services where they can put the data on physical media and ship them to you. However, keep in mind that this takes substantial time to reserve beforehand and then some time to execute after you complete the work. And the service may not be available for smaller volumes you may need.&lt;br /&gt;
&lt;br /&gt;
Finally, at the risk of stating the obvious: perform the download on a good internet connection. Cloud providers charge a small amount per GB downloaded, and in return they offer very good download speeds for your data. But your internet connection may end up limiting how quickly you get your data to your computer.&lt;br /&gt;
== Q5: What are the benefits of running a simulation in the cloud rather than locally? ==&lt;br /&gt;
Not all benefits apply in all cases, but consider these:&lt;br /&gt;
&lt;br /&gt;
* You can get access to as many cloud VMs (and GPUs) as you need, to run as many simulations in parallel as required, provided you have sufficient licences and quota with the provider.&lt;br /&gt;
* If you only need compute infrequently, it&#039;s there in the cloud when you need it and you only pay for it when you use it.&lt;br /&gt;
* If your workload suddenly increases (which may be a good thing), you can quickly increase the amount of compute with cloud computing, provided you&#039;re set up to do so.&lt;br /&gt;
* Most cloud providers offer access to a variety of very capable hardware that may allow you to run larger or longer-running models than you could on your own hardware.&lt;br /&gt;
* If you collaborate with others from various locations (wherever they are in the world), having the results in the cloud may be a real benefit.&lt;br /&gt;
&lt;br /&gt;
However, there are some potential downsides to consider as well:&lt;br /&gt;
&lt;br /&gt;
* If you make efficient use of hardware you own, the compute is likely cheaper per model run than cloud computing, especially compared to on-demand rates.&lt;br /&gt;
* Although it&#039;s not very complicated to set up a VM for cloud runs and get up and running, it may be complicated to do so in a way that satisfies your company&#039;s or client&#039;s security policies.&lt;br /&gt;
* Similarly, just running some models on an interactively accessible VM may be simple, but developing scripts for automated model running may require time and skills that prevent you from doing so yourself.&lt;br /&gt;
&lt;br /&gt;
== Q6: Do I need to add in any extra commands in my control files? ==&lt;br /&gt;
If your model is self-contained and could run from its folder on any computer, perhaps not. However, you may want to change where a VM in the cloud tries to write its results, for example. You can achieve that with extra commands in your control files, but also consider the use of TUFLOW override control files, which you can tailor to the cloud VMs you&#039;re using, without affecting the control files you use for running or testing locally.&lt;br /&gt;
&lt;br /&gt;
To keep costs of storage and transport manageable, as well as saving on some run time, configure your model to write only the outputs you need. This includes selecting the right variables to output, at the appropriate time intervals. Have a look at our [https://www.youtube.com/watch?v=-CsKKjG7jpQ Output Management Advice] webinar (15 minutes) for more tips on that.&lt;br /&gt;
&lt;br /&gt;
Also look at the command line switches mentioned in the answer to Q2.&lt;br /&gt;
== Q7: Do I need a different licence to run models on the cloud? ==&lt;br /&gt;
Not necessarily, but there are some things to keep in mind. If your existing licences are on a dongle, they would need to be network licences and the server they are installed on would have to be accessible over the network from the cloud VMs you&#039;re looking to run models on. If you have sufficient existing network licences you can use in this manner, including licences for special hardware you&#039;d be using on the cloud (like a GPU), you will not need different licences.&lt;br /&gt;
&lt;br /&gt;
You can also set up a dedicated VM to run a small CodeMeter network licence server in the cloud for software network licences. But keep in mind that licences on such a server cannot be moved elsewhere - they are bound to this specific VM. Access to this licence server would be limited to VMs in the cloud, on the same virtual network as the licence server. Or you&#039;d need to have someone with the appropriate IT skills make the licence server accessible from all locations you need access from.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may be able to make use of web licences, please contact [mailto:sales@tuflow.com TUFLOW Sales] for more information on that.&lt;br /&gt;
== Q8: What can go wrong when running models on the cloud? ==&lt;br /&gt;
For starters, almost everything that can go wrong when running models locally, although power failure and loss of network connection are exceedingly rare on the cloud.&lt;br /&gt;
&lt;br /&gt;
Common problems arise from the differences in the computer&#039;s environment: software you may have installed that batch files rely on, software required to run TUFLOW (CodeMeter, NVIDIA drivers for GPU), access to networked resources you get inputs from, or write results to, etc.&lt;br /&gt;
&lt;br /&gt;
Also, if you&#039;re using Batch services from your cloud provider, once a VM completes its tasks, it may disappear. If something went wrong during the run, you may have very limited access to information about what went wrong, so be careful about logging and where logs are written.&lt;br /&gt;
&lt;br /&gt;
Similarly, but much simpler: if you run models interactively on a desktop VM, once you turn it off, you will no longer have access to its local storage. And once you remove the VM to save on cost, keep in mind that its attached disk storage will be removed as well, so ensure you have your results in a safe place before that.&lt;br /&gt;
&lt;br /&gt;
Finally, access to licences via CodeMeter from the cloud VM can sometimes cause complications, as can user access to the VM or the data, depending on your IT setup.&lt;br /&gt;
&lt;br /&gt;
None of these should stop you from trying, but ensure everything works like you expect, before scaling up to many model runs at once.&lt;br /&gt;
== Q10: If I stop the cloud VM after models are finished, can I still download the results? ==&lt;br /&gt;
If the results were written to local storage on the VM (like the default C: or D: drive on a Windows VM), you will only be able to access these when the cloud VM is running. If you stopped it, you could restart it to gain access again. Once you delete the VM, data on those volumes will be deleted as well, and cannot be recovered.&lt;br /&gt;
&lt;br /&gt;
To be able to access results in the cloud even when a VM is stopped, or deleted, copy the results to a network share on the cloud. On the VM, you may be able to mount this storage as a network share, or tools will be available to perform a copy to cloud storage, depending on the cloud provider and operating system you are using.&lt;br /&gt;
== Q11: Why is my run on the cloud slower than I expected based on the specs? ==&lt;br /&gt;
Although cloud hardware may be faster for some use cases, and certainly a lot more expensive to purchase, it is not guaranteed to run your TUFLOW model faster. This mostly depends on how modern the NVIDIA hardware architecture is, how many CUDA cores it has available, and specific metrics of the hardware like the amount of memory, the clock speed of the memory, the clock speed of the cores, and how the GPU is connected to the rest of the hardware. For a good assessment of whether you should expect better performance, refer to our [[Hardware Benchmarking (2018-03-AA)|Hardware Benchmarking]] pages.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re wondering why TUFLOW software doesn&#039;t benefit from these supposedly faster and more expensive GPUs, consider that a GPU has many different features, and TUFLOW only makes use of an important subset of these. Also, most TUFLOW models are executed using the single-precision floating-point executable, which is faster than the double-precision executable. Desktop GPUs are highly optimised for single-precision compute, because this is what benefits gaming and, as it happens, TUFLOW model runs. Data centre GPUs are more optimised for double-precision compute, but most TUFLOW simulations don&#039;t benefit in result quality from using it.&lt;br /&gt;
&lt;br /&gt;
Even when the hardware should be faster according to benchmarks, it&#039;s possible that you have some other restrictions. For one, if your cloud environment shares GPUs between many users, the part of the GPU available to your model run may only see a small percentage of the performance it would show with exclusive access to the GPU. This is particularly true in Virtual Desktop Infrastructure (VDI) setups. The way TUFLOW uses the GPU is very different from normal graphics processing, and VDI solutions are often not good for model running.&lt;br /&gt;
&lt;br /&gt;
Another common cause of slowdowns is writing results directly to network shares, over network connections that are orders of magnitude slower than local disk access. In these situations, the recommendation is to write results locally (with minimal overhead) on the cloud VM and then copy the results to other storage in one go, when the run completes. Even if you perform this copy while another run starts, you&#039;ll find that running first and copying after is a lot faster than writing directly to the network share. To understand why, imagine sending an email one word at a time, instead of writing it all and sending it in one go. The amount of typing is roughly the same, but the word-at-a-time process takes far longer, with the network sending far more data back and forth. Writing results to the network one part at a time, instead of all at once, is analogous.&lt;br /&gt;
== Q12: How can I lower the cost of running simulations on the cloud? ==&lt;br /&gt;
The first step would be to select the hardware that&#039;s best suited to your needs, at the lowest price, from the most affordable provider.&lt;br /&gt;
&lt;br /&gt;
Secondly, if you get cloud hardware on-demand, you&#039;re paying the highest rates for the flexibility this affords. You can also reserve instances of specific hardware types, for periods like a year, or three years (depending on the cloud provider), dramatically lowering the price - but then you will have to pay for the entire period for the reserved instances. If your organisation is large enough, it can be worthwhile to have access to a pool of reserved resources, as long as the business achieves high utilisation over time, so that you only pay on-demand prices when you exceed your reserved instances.&lt;br /&gt;
&lt;br /&gt;
If you do end up using on-demand hardware, ensure you only run it when you&#039;re actually using it. By automatically turning off VMs when the work is done and copied to appropriate storage, you can save on compute costs - you&#039;re not paying for how much power they use, you&#039;re paying for the hours they&#039;re on. And keep data on cheap storage like blob storage or online file shares, where you pay only for the size you&#039;re using, instead of keeping expensive VMs around that have massive virtual hard drives that you&#039;re paying for as long as they exist, empty or not.&lt;br /&gt;
&lt;br /&gt;
Avoid downloading the same data repeatedly, especially data you need access to frequently. If you only need a small part of the data, accessing it remotely may be worthwhile. But if you need to process entire files, or multiple users need a copy, it will be more economical to download the data to your network once and use it from there.&lt;br /&gt;
&lt;br /&gt;
You may have heard about &#039;spot pricing&#039; for VMs. This may be suitable if you&#039;re running many small simulations in sequence, and if you&#039;re not under strict time pressure to deliver results, but in many cases, it won&#039;t be ideal, especially if your model is not set up with restart files that get stored away from the VM. The discounts on VMs obtained through spot pricing can be substantial, but we find that the price difference rarely outweighs the added complexity.&lt;br /&gt;
&lt;br /&gt;
If you find that the number of licences you need to scale up model running on the cloud is the main limiting factor for cost, contact our [mailto:sales@tuflow.com TUFLOW Sales] to discuss options for your situation.&lt;br /&gt;
&lt;br /&gt;
Finally, read through these questions, and take the advice given to heart. Optimising your model configuration and making the right choices when running on the cloud can save a lot of run time, and thus cost.&lt;br /&gt;
== Q13: Is there a developed service to run large numbers of model runs on the cloud, if we cannot set it up ourselves? ==&lt;br /&gt;
As of 2019, TUFLOW offer an [[TUFLOW Cloud Simulation Service|on-demand cloud simulation service]] that may suit your needs if your project is sufficiently urgent or large. As of 2023, you may find third parties providing services on the cloud as well, and TUFLOW may support use of its software in such services.&lt;br /&gt;
== Q14: Which machine size / hardware type do you recommend for my model runs? ==&lt;br /&gt;
Hardware selection is very specific to the modelling requirements of each organisation and project. There is no one-size-fits-all recommendation to make. &lt;br /&gt;
&lt;br /&gt;
However, some comments that generally apply:&lt;br /&gt;
&lt;br /&gt;
* As with physical hardware, top speed comes at a premium. If you compare model run times between different VM sizes, you may find that slower machines work out cheaper for a given amount of work than faster ones. Of course, you will also have to consider project lead time and the time licences are held.&lt;br /&gt;
* For most cloud providers, the number of vCPU cores scales together with the type and number of available GPUs. And together with vCPUs, the amount of available RAM and storage scales up as well. As a result, you may end up with a lot of unused resources on some machine types.&lt;br /&gt;
* If you&#039;re considering purchasing cloud infrastructure for permanent use, keep in mind that Virtual Desktop Infrastructure solutions often share resources like GPUs between many users. You may find that a specific type of hardware works really well in a test setup, where you&#039;re the only user on it, but performs really poorly when under load from many users. If you purchase access to a cloud VM directly, you will have it all to yourself, but additional infrastructure on top of the VMs may affect your performance greatly.&lt;br /&gt;
* Conversely, when selecting a VM type that provides access to only &#039;half&#039; of a data centre GPU, you don&#039;t have to worry about negative performance impact. This type of sharing (which your IT department can also achieve in your own data centre with NVIDIA MIG) still ensures that you always get full access to a dedicated part of the GPU, and performance should be as expected.&lt;br /&gt;
== Technical Terms Glossary ==&lt;br /&gt;
A brief explanation of some of the technical terms used in this FAQ, and relevant to cloud model running:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Azure Batch&#039;&#039;&#039;: A cloud computing service provided by Microsoft Azure for running large-scale parallel and batch compute jobs.&lt;br /&gt;
* &#039;&#039;&#039;AWS Batch&#039;&#039;&#039;: Amazon Web Services&#039; batch computing service that enables the processing of a large number of batch jobs.&lt;br /&gt;
* &#039;&#039;&#039;Google Cloud Batch&#039;&#039;&#039;: A batch computing service provided by Google Cloud, similar in functionality to Azure Batch and AWS Batch.&lt;br /&gt;
* &#039;&#039;&#039;VM (Virtual Machine)&#039;&#039;&#039;: A software emulation of a physical computer that runs an operating system and applications just like a physical machine.&lt;br /&gt;
* &#039;&#039;&#039;Remote Desktop&#039;&#039;&#039;: A program or feature that allows a user to connect to a computer in another location, see that computer&#039;s desktop, and interact with it as if it were local.&lt;br /&gt;
* &#039;&#039;&#039;SSH (Secure Shell)&#039;&#039;&#039;: A cryptographic network protocol for operating network services securely over an unsecured network.&lt;br /&gt;
* &#039;&#039;&#039;VNC (Virtual Network Computing)&#039;&#039;&#039;: A graphical desktop-sharing system that uses the Remote Frame Buffer protocol to remotely control another computer.&lt;br /&gt;
* &#039;&#039;&#039;X-Server&#039;&#039;&#039;: A program that manages and displays graphical user interfaces in a Unix or Linux environment.&lt;br /&gt;
* &#039;&#039;&#039;CodeMeter&#039;&#039;&#039;: A software technology developed by WIBU, used by TUFLOW for software licensing and protection.&lt;br /&gt;
* &#039;&#039;&#039;GPU (Graphics Processing Unit)&#039;&#039;&#039;: A specialized processor designed to accelerate graphics rendering.&lt;br /&gt;
* &#039;&#039;&#039;CUDA&#039;&#039;&#039;: A parallel computing platform and application programming interface model created by Nvidia.&lt;br /&gt;
* &#039;&#039;&#039;Blob Storage (Binary Large Object Storage)&#039;&#039;&#039;: A storage service for large amounts of unstructured data.&lt;br /&gt;
* &#039;&#039;&#039;Azure Blob&#039;&#039;&#039;: Microsoft Azure&#039;s object storage solution for the cloud.&lt;br /&gt;
* &#039;&#039;&#039;S3 Buckets&#039;&#039;&#039;: Amazon Web Services&#039; scalable storage buckets in the cloud.&lt;br /&gt;
* &#039;&#039;&#039;Network File Shares&#039;&#039;&#039;: Storage locations on a network that multiple users can access to store and retrieve files, as on a regular file system.&lt;br /&gt;
* &#039;&#039;&#039;Virtual Desktop Infrastructure (VDI)&#039;&#039;&#039;: Technology that hosts a desktop operating system on a centralized server in a data center.&lt;br /&gt;
* &#039;&#039;&#039;NVIDIA MIG (Multi-Instance GPU)&#039;&#039;&#039;: A technology that provides hardware partitioning of NVIDIA GPUs.&lt;br /&gt;
* &#039;&#039;&#039;Spot Pricing&#039;&#039;&#039;: A pricing model in cloud computing where available compute capacity can be purchased at potentially lower costs compared to on-demand rates.&lt;br /&gt;
* &#039;&#039;&#039;On-Demand Compute&#039;&#039;&#039;: A cloud computing service model where computing resources are made available immediately to the user as needed.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36118</id>
		<title>Organisation Cloud Software Execution</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36118"/>
		<updated>2023-12-15T03:55:55Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Added Q14 on hardware selection&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The TUFLOW &amp;lt;u&amp;gt;[https://www.tuflow.com/Download/Licensing/TUFLOW%20Products%20Licence%20Agreement.pdf End User Licence Agreement]&amp;lt;/u&amp;gt; was updated in 2018 allowing companies to host their own licences on the cloud. The only restrictions associated with users running TUFLOW simulations on their own company public or private cloud environment are:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; The licence must be a “Network” type (use of “Local” licences is not permitted on the cloud).&lt;br /&gt;
&amp;lt;li&amp;gt; Usage of TUFLOW software on a virtual machine is confined to Authorised Users within the Licensee&#039;s Network. This clause means companies cannot on-sell access to TUFLOW licences hosted in the cloud or otherwise (excluding TUFLOW vendor contract arrangements). &lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configuration of your cloud environment is your own responsibility. There are numerous ways TUFLOW licensing and simulation can be configured in a cloud environment depending on the cloud provider (Microsoft, Google, Amazon, other etc.) and internal company protocols. We recommend engaging a professional with suitable cloud architecture expertise to design your bespoke system. Clients who have already migrated to the cloud have done so in a variety of ways:&lt;br /&gt;
* Some use a hardware lock (USB) dongle that resides in their office on a physical computer or server. Cloud virtual machines link to the network licence via the IP address of the hardware lock.&lt;br /&gt;
* Others use a software lock. Software locks are a digital licence file that is locked to a dedicated host computer, server or virtual machine. When using a software lock please select the host carefully as the software licence will be bound to it. Relocating the licence to a new location will require TUFLOW sales staff to reissue the licence, which incurs a small administration fee.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Please Note: Network licence rentals can be used to upscale the available licences on your cloud system when demand requires it.&#039;&#039;&#039; &lt;br /&gt;
 Refer to the &amp;lt;u&amp;gt;[https://www.tuflow.com/Prices.aspx TUFLOW Pricelist]&amp;lt;/u&amp;gt; for more information.&lt;br /&gt;
&lt;br /&gt;
This detailed report from the TUFLOW Library discusses some benefits, challenges and solutions relating to cloud computing to help people who are setting up their own system: &lt;br /&gt;
&amp;lt;u&amp;gt;[https://downloads.tuflow.com/Licensing/2021_Running_TUFLOW_on_the_Cloud.pdf Running TUFLOW on the Cloud (Whitepaper)]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:2021_Running_TUFLOW_on_the_Cloud.png]]&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Common Questions Answered (FAQ)=&lt;br /&gt;
== Q1: How do I execute a simulation on the cloud? Can I still use batch files? ==&lt;br /&gt;
Running a simulation on the cloud can be very similar to running it on any other computer. You can access a VM remotely just like you would any other remote computer, using Remote Desktop, SSH, VNC, an X-Server client, etc. - whatever you are used to and what is set up on the VM. However, that assumes the VM is set up for that type of access and is running when you need to connect to it. If you want to make use of the real benefits of the cloud, like the ability to run on many computers at once, starting them automatically only when needed, working through such a manual process would be very cumbersome. You may want to consider looking at more advanced techniques like [https://azure.microsoft.com/en-au/products/batch Azure Batch], AWS Batch, or [https://cloud.google.com/batch/docs/get-started Google Cloud Batch].&lt;br /&gt;
&lt;br /&gt;
In either case, you will need access to a TUFLOW licence server from VMs running the model. Have a look at &amp;quot;Do I need a different licence to run models on the cloud?&amp;quot; below. And the VMs will always need to have CodeMeter installed, configured to find the licence you plan to use, as well as appropriate drivers for hardware like GPUs.&lt;br /&gt;
&lt;br /&gt;
When running on the cloud, consider that you may not have network access to locations where you would normally store your results. You may need to set up storage in the cloud separate from the VM, but connected to it, to collect your results and still have them available to you once the VM stops running.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re using remote access to desktop VMs, you can still use &#039;&#039;batch files&#039;&#039; or scripts like you&#039;re used to. If you look into batch services, you will need more involved scripting, and you would typically not use batch files, but split up the work into separate tasks for the cloud platform to schedule on available computers. Keep in mind that this is a substantial and complex task, requiring some development and IT skills. If you plan on this type of cloud use, plan ahead and be ready with a working and tested solution, before you take on a deadline.&lt;br /&gt;
== Q2: Do I need a different TUFLOW executable to run models on the cloud? ==&lt;br /&gt;
No, you can use the same executable appropriate to the operating system you are on. Keep in mind that running TUFLOW with a licence does require that CodeMeter is installed as well and configured to find the licence. And if you are using a GPU on the cloud, you will need to have the appropriate NVIDIA drivers with CUDA installed, and a GPU licence available.&lt;br /&gt;
&lt;br /&gt;
Although you do use the same executable, it may be advantageous to provide some additional command line options to TUFLOW when you run it on the cloud. Since you typically won&#039;t be present and looking at the screen, consider using the `-nc` switch, which prevents user interaction on the console. Also, the familiar `-b` option will prevent the simulation waiting for a key press at the end of the simulation. And finally, given the possible cost of running models at scale, you would do well to test your model with the `-t` switch before sending it to the cloud. In addition to command line options, learn about TUFLOW override files to override configuration that may need to be different on the cloud VM, like the location where TUFLOW should write results.&lt;br /&gt;
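For example, a small Python wrapper could assemble such an unattended command line; the `-b`, `-nc` and `-t` switches are those discussed above, while the executable and control file names are hypothetical:&lt;br /&gt;

```python
import subprocess

def build_tuflow_command(exe, tcf, test_only=False):
    """Assemble a TUFLOW command line for unattended cloud runs:
    -b  runs in batch mode (no key press needed at the end of the simulation),
    -nc prevents user interaction on the console,
    -t  tests the model inputs without running the full simulation."""
    args = [exe, "-b", "-nc"]
    if test_only:
        args.append("-t")  # dry run before paying for a full cloud simulation
    args.append(tcf)
    return args

# Hypothetical usage; uncomment the run on a machine with TUFLOW and CodeMeter set up:
cmd = build_tuflow_command("TUFLOW_iSP_w64.exe", "my_model.tcf", test_only=True)
# subprocess.run(cmd, check=True)
```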
== Q3: What steps do I need to take to run my model on the cloud? ==&lt;br /&gt;
In no particular order:&lt;br /&gt;
&lt;br /&gt;
* Assuming you have chosen a cloud provider you will use, make sure you understand the answers to the previous questions. If some of this is too technical, ensure you go over this with staff with appropriate IT skills and administrative access.&lt;br /&gt;
* With regard to the model itself, ensure that it has no references to files on computers that wouldn&#039;t be accessible from the cloud VM running the model. Ideally, construct your model configuration so that it can be self-contained within a single folder and would run wherever you put it.&lt;br /&gt;
* Ensure you have sufficient TUFLOW licences available and accessible to your cloud VMs to run the number of simulations you plan to run in parallel on the cloud.&lt;br /&gt;
* Ensure you have sufficient quota for the storage and cloud resources needed for the number of simulations you plan to run, specifically when using the &#039;Batch&#039; services mentioned under Q1.&lt;br /&gt;
* Ensure you have the right level of access to make use of the cloud resources you need, and that you&#039;re able to use and manage them when you do.&lt;br /&gt;
* Ensure that what you&#039;re planning on the cloud complies with your company and client&#039;s security policies for the work. Think about where the cloud computers are, how data is transferred to and from the cloud, and who has access.&lt;br /&gt;
* If you can, pick a region that puts the compute and storage relatively close to your own location, ensuring that your access (or perhaps your clients&#039; access) to them over the internet can achieve good total network speeds.&lt;br /&gt;
* Test your model before putting it on the cloud, and test your preferred method of running a model on the cloud before scaling it up.&lt;br /&gt;
* Make sure your model configuration matches your actual needs before sending it to the cloud. Consider the frequency of writing outputs, whether you need check files, etc.&lt;br /&gt;
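One way to check the &#039;self-contained&#039; point above is a crude scan of your control files for absolute Windows paths (drive letters or UNC shares). This is not TUFLOW tooling, just an illustrative sketch:&lt;br /&gt;

```python
import re
from pathlib import Path

# Matches drive-letter paths (C:\...) and UNC shares (\\server\...)
ABS_PATH = re.compile(r"[A-Za-z]:\\|\\\\\w+")

def find_absolute_refs(control_file_text: str) -> list:
    """Return lines that reference absolute paths, which would likely break
    once the model folder is copied to a cloud VM."""
    return [line for line in control_file_text.splitlines()
            if ABS_PATH.search(line)]

# Hypothetical usage over all control files in a model folder:
# for f in Path("my_model").rglob("*.tcf"):
#     for line in find_absolute_refs(f.read_text()):
#         print(f, "->", line)
```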
&lt;br /&gt;
When in doubt, feel free to contact [mailto:support@tuflow.com TUFLOW Support] and [mailto:sales@tuflow.com TUFLOW Sales] with questions, but keep in mind that we can only offer limited guidance when it comes to the specifics of your chosen cloud provider, and that your company&#039;s IT policies may further limit your options.&lt;br /&gt;
&lt;br /&gt;
== Q4: How can I download the simulation results? ==&lt;br /&gt;
This depends on your chosen solution.&lt;br /&gt;
&lt;br /&gt;
If you have cloud VMs that have access to your company&#039;s internal network, you may be able to copy the results automatically (with a script or batch file) after a simulation completes, and no download would be needed. If you have cloud VMs that you interactively use remotely, you can use whatever tools you would use from any remote machine, like OneDrive, Dropbox, FTP, SSH, to name but a few.&lt;br /&gt;
&lt;br /&gt;
However, all cloud service providers also provide cloud storage, and it may be cheaper and faster to keep unprocessed results in the cloud. Once a run completes, you typically do not want to keep the results on storage that is local to the VM that ran the model (e.g. its C: or D: drive on a Windows computer), unless you plan to use the same VM for post-processing of the results. But you can set up network file shares in the cloud that can be connected to your VM as extra drives or mounts, or you can make use of blob storage like Azure Blob, S3 Buckets, etc. Depending on the cloud service provider, there will be relatively user-friendly tools to access these remotely and download your data later.&lt;br /&gt;
&lt;br /&gt;
For particularly massive datasets, some cloud providers also offer services where they put the data on physical media and ship it to you. However, keep in mind that such a service needs to be reserved well in advance and takes some time to execute after you complete the work, and it may not be available for the smaller volumes you need.&lt;br /&gt;
&lt;br /&gt;
Finally, at the risk of stating the obvious: perform the download on a good internet connection. Cloud providers charge a small amount per GB downloaded, and in return they offer very good download speeds for your data. But your internet connection may end up limiting how quickly you get your data to your computer.&lt;br /&gt;
== Q5: What are the benefits of running a simulation in the cloud rather than locally? ==&lt;br /&gt;
Not all benefits apply in all cases, but consider these:&lt;br /&gt;
&lt;br /&gt;
* You can get access to as many cloud VMs (and GPUs) as you need, to run as many simulations in parallel as required, provided you have sufficient licences and quota with the provider.&lt;br /&gt;
* If you only need compute infrequently, it&#039;s there in the cloud when you need it and you only pay for it when you use it.&lt;br /&gt;
* If your workload suddenly increases (which may be a good thing), you can quickly increase the amount of compute with cloud computing, provided you&#039;re set up to do so.&lt;br /&gt;
* Most cloud providers offer access to a variety of very capable hardware that may allow you to run larger or longer-running models than you could on your own hardware.&lt;br /&gt;
* If you collaborate with others from various locations (wherever they are in the world), having the results in the cloud may be a real benefit.&lt;br /&gt;
&lt;br /&gt;
However, there are some potential downsides to consider as well:&lt;br /&gt;
&lt;br /&gt;
* If you make efficient use of hardware you own, the compute is likely cheaper per model run than cloud computing, especially on-demand compute.&lt;br /&gt;
* Although it&#039;s not very complicated to set up a VM for cloud runs and to get up and running, it may be complicated to do so in a way that satisfies your company or client&#039;s security policies.&lt;br /&gt;
* Similarly, just running some models on an interactively accessible VM may be simple, but developing scripts for automated model running may require time and skills that prevent you from doing so yourself.&lt;br /&gt;
&lt;br /&gt;
== Q6: Do I need to add in any extra commands in my control files? ==&lt;br /&gt;
If your model is self-contained and could run from its folder on any computer, perhaps not. However, you may want to change where a VM in the cloud tries to write its results, for example. You can achieve that with extra commands in your control files, but also consider the use of TUFLOW override control files, which you can tailor to the cloud VMs you&#039;re using, without affecting the control files you use for running or testing locally.&lt;br /&gt;
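As an illustration, an override control file might redirect outputs to a drive that exists on the cloud VM. The sketch below follows TUFLOW control file syntax (&#039;!&#039; starts a comment), but the file name and paths are hypothetical:&lt;br /&gt;

```
! _override.tcf - hypothetical override file for cloud VMs.
! Redirect outputs to fast local storage on the VM; copy them off when the run completes.
Output Folder == D:\TUFLOW\results\
```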
&lt;br /&gt;
To keep costs of storage and transport manageable, as well as saving on some run time, configure your model to write only the outputs you need. This includes selecting the right variables to output, at the appropriate time intervals. Have a look at our [https://www.youtube.com/watch?v=-CsKKjG7jpQ Output Management Advice] webinar (15 minutes) for more tips on that.&lt;br /&gt;
&lt;br /&gt;
Also look at the command line switches mentioned in the answer to Q2.&lt;br /&gt;
== Q7: Do I need a different licence to run models on the cloud? ==&lt;br /&gt;
Not necessarily, but there are some things to keep in mind. If your existing licences are on a dongle, they would need to be network licences and the server they are installed on would have to be accessible over the network from the cloud VMs you&#039;re looking to run models on. If you have sufficient existing network licences you can use in this manner, including licences for special hardware you&#039;d be using on the cloud (like a GPU), you will not need different licences.&lt;br /&gt;
&lt;br /&gt;
You can also set up a dedicated VM to run a small CodeMeter network licence server in the cloud for software network licences. But keep in mind that licences on such a server cannot be moved elsewhere - they are bound to this specific VM. Access to this licence server would be limited to VMs in the cloud, on the same virtual network as the licence server. Or you&#039;d need to have someone with the appropriate IT skills make the licence server accessible from all locations you need access from.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may be able to make use of web licences; please contact [mailto:sales@tuflow.com TUFLOW Sales] for more information on that.&lt;br /&gt;
== Q8: What can go wrong when running models on the cloud? ==&lt;br /&gt;
For starters, almost everything that can go wrong when running models locally, although power failures and loss of network connection are exceedingly rare on the cloud.&lt;br /&gt;
&lt;br /&gt;
Common problems arise from the differences in the computer&#039;s environment: software you may have installed that batch files rely on, software required to run TUFLOW (CodeMeter, NVIDIA drivers for GPU), access to networked resources you get inputs from, or write results to, etc.&lt;br /&gt;
&lt;br /&gt;
Also, if you&#039;re using Batch services from your cloud provider, once a VM completes its tasks, it may disappear. If something went wrong during the run, you may have very limited access to information about what went wrong, so you want to be careful about logging and where logs are written to.&lt;br /&gt;
&lt;br /&gt;
Similarly, but much simpler: if you run models interactively on a desktop VM, once you turn it off, you will no longer have access to its local storage. And once you remove the VM to save on cost, keep in mind that its attached disk storage will be removed as well, so ensure you have your results in a safe place before that.&lt;br /&gt;
&lt;br /&gt;
Finally, access to licences via CodeMeter from the cloud VM can sometimes cause complications, as can user access to the VM or the data, depending on your IT setup.&lt;br /&gt;
&lt;br /&gt;
None of these should stop you from trying, but ensure everything works like you expect, before scaling up to many model runs at once.&lt;br /&gt;
== Q10: If I stop the cloud VM after models are finished, can I still download the results? ==&lt;br /&gt;
If the results were written to local storage on the VM (like the default C: or D: drive on a Windows VM), you will only be able to access these when the cloud VM is running. If you stopped it, you could restart it to gain access again. Once you delete the VM, data on those volumes will be deleted as well, and cannot be recovered.&lt;br /&gt;
&lt;br /&gt;
To be able to access results in the cloud even when a VM is stopped, or deleted, copy the results to a network share on the cloud. On the VM, you may be able to mount this storage as a network share, or tools will be available to perform a copy to cloud storage, depending on the cloud provider and operating system you are using.&lt;br /&gt;
== Q11: Why is my run on the cloud slower than I expected based on the specs? ==&lt;br /&gt;
Although cloud hardware may be faster for some use cases, and is certainly a lot more expensive to purchase, it is not guaranteed to run your TUFLOW model faster. This mostly depends on how modern the NVIDIA hardware architecture is, how many CUDA cores it has available, and on specific metrics of the hardware like the amount of memory, the clock speed of the memory, the clock speed of the cores, and how the GPU is connected to the rest of the hardware. For a good assessment of whether you should expect better performance, refer to our [[Hardware Benchmarking (2018-03-AA)|Hardware Benchmarking]] pages.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re wondering why TUFLOW software doesn&#039;t benefit from these supposedly faster and more expensive GPUs, consider that a GPU has many different features, and TUFLOW only makes use of an important subset of these. Also, most TUFLOW models are executed using the single-precision floating-point executable, which is faster than the double-precision executable. Desktop GPUs are highly optimised for single-precision compute, because this is what benefits gaming and, as it happens, TUFLOW runs as well. Data centre GPUs are optimised more for double-precision compute, but most TUFLOW simulations gain no result quality from using it.&lt;br /&gt;
&lt;br /&gt;
Even when the hardware should be faster according to benchmarks, it&#039;s possible that you have some other restrictions. For one, if your cloud environment shares GPUs between many users, the part of the GPU available to your model run may only see a small percentage of the performance it would show with exclusive access to the GPU. This is particularly true in Virtual Desktop Infrastructure (VDI) setups. The way TUFLOW uses the GPU is very different from normal graphics processing, and VDI solutions are often not good for model running.&lt;br /&gt;
&lt;br /&gt;
Another common cause of slowdowns is writing results directly to network shares, which may be accessed over connections that are orders of magnitude slower than local disk access. In these situations, the recommendation is to write results locally (with minimal overhead) on the cloud VM and then copy them to other storage in one go, when the run completes. Even if you perform this copy while another run starts, you&#039;ll find that running first and copying after is a lot faster than writing directly to the network share. To understand why, imagine writing and sending an email one word at a time, versus writing it all and sending it in one go. The amount of typing is roughly the same, but the first approach clearly takes far longer, with the network sending far more data back and forth. Writing results to the network one part at a time, instead of all at once, is analogous.&lt;br /&gt;
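As a rough illustration of the copy-after-run approach, the sketch below (plain Python; the folder names are hypothetical) copies a completed run&#039;s results folder to slower network or cloud storage in a single operation:&lt;br /&gt;

```python
import shutil
from pathlib import Path

def archive_results(local_results: Path, remote_dir: Path) -> Path:
    """Copy a finished run's results folder to network/cloud storage in one go,
    rather than writing every output there piecemeal during the run."""
    remote_dir.mkdir(parents=True, exist_ok=True)
    dest = remote_dir / local_results.name
    shutil.copytree(local_results, dest, dirs_exist_ok=True)
    return dest

# Hypothetical usage after a run completes:
# archive_results(Path("C:/TUFLOW/results/run_001"), Path("//fileserver/project/results"))
```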
== Q12: How can I lower the cost of running simulations on the cloud? ==&lt;br /&gt;
The first step would be to select the hardware that&#039;s best suited to your needs, at the lowest price, from the most affordable provider.&lt;br /&gt;
&lt;br /&gt;
Secondly, if you get cloud hardware on-demand, you&#039;re paying the highest rates for the flexibility this affords. You can also reserve instances of specific hardware types, for periods like a year, or three years (depending on the cloud provider), dramatically lowering the price - but then you will have to pay for the entire period for the reserved instances. If your organisation is large enough, it can be worthwhile to have access to a pool of reserved resources, as long as the business achieves high utilisation over time, so that you only pay on-demand prices when you exceed your reserved instances.&lt;br /&gt;
&lt;br /&gt;
If you do end up using on-demand hardware, ensure you only run it when you&#039;re actually using it. By automatically turning off VMs when the work is done and copied to appropriate storage, you can save on compute costs - you&#039;re not paying for how much power they use, you&#039;re paying for the hours they&#039;re on. And keep data on cheap storage like blob storage or online file shares, where you pay only for the size you&#039;re using, instead of keeping expensive VMs around that have massive virtual hard drives that you&#039;re paying for as long as they exist, empty or not.&lt;br /&gt;
&lt;br /&gt;
Avoid downloading the same data repeatedly, especially data you need access to frequently. If you only need a small part of the data, accessing it remotely may be worthwhile. But if you need to process entire files, or multiple users need a copy, it will be more economical to download the data to your network once and use it from there.&lt;br /&gt;
&lt;br /&gt;
You may have heard about &#039;spot pricing&#039; for VMs. This may be suitable if you&#039;re running many small simulations in sequence, and if you&#039;re not under strict time pressure to deliver results, but in many cases, it won&#039;t be ideal, especially if your model is not set up with restart files that get stored away from the VM. The discounts on VMs obtained through spot pricing can be substantial, but we find that the price difference rarely outweighs the added complexity.&lt;br /&gt;
&lt;br /&gt;
If you find that the number of licences you need to scale up model running on the cloud is the main limiting factor for cost, contact our [mailto:sales@tuflow.com TUFLOW Sales] to discuss options for your situation.&lt;br /&gt;
&lt;br /&gt;
Finally, read through these questions, and take the advice given to heart. Optimising your model configuration and making the right choices when running on the cloud can save a lot of run time, and thus cost.&lt;br /&gt;
== Q13: Is there a developed service to run large numbers of model runs on the cloud, if we cannot set it up ourselves? ==&lt;br /&gt;
As of 2019, TUFLOW offer an [[TUFLOW Cloud Simulation Service|on-demand cloud simulation service]] that may suit your needs if your project is sufficiently urgent or large. As of 2023, you may find third parties providing services on the cloud as well, and TUFLOW may support use of its software in such services.&lt;br /&gt;
== Q14: Which machine size / hardware type do you recommend for my model runs? ==&lt;br /&gt;
Hardware selection is very specific to the modelling requirements of each organisation and project. There is no one-size-fits-all recommendation to make. &lt;br /&gt;
&lt;br /&gt;
However, some comments that generally apply:&lt;br /&gt;
&lt;br /&gt;
* As with physical hardware, top speed comes at a premium. If you compare model run times between different VM sizes, you may find that the slower machines work out cheaper for a given amount of work than the faster ones. Of course, you will have to consider project lead time and time spent on licences as well.&lt;br /&gt;
* For most cloud providers, the number of vCPU cores scales together with the type and number of available GPUs. And together with vCPUs, the amount of available RAM and storage scales up as well. As a result, you may end up with a lot of unused resources on some machine types.&lt;br /&gt;
* If you&#039;re considering purchasing cloud infrastructure for permanent use, keep in mind that Virtual Desktop Infrastructure solutions often share resources like GPUs between many users. You may find that a specific type of hardware works really well in a test setup, where you&#039;re the only user on it, but performs really poorly when under load from many users. If you purchase access to a cloud VM directly, you will have it all to yourself, but additional infrastructure on top of the VMs may affect your performance greatly.&lt;br /&gt;
* Conversely, when selecting a VM type that provides access to only &#039;half&#039; of a data centre GPU, you don&#039;t have to worry about negative performance impact. This type of sharing (which your IT team can also achieve in your own data centre with NVIDIA MIG) still ensures that you always get full access to a dedicated part of the GPU and performance should be as expected.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36117</id>
		<title>Organisation Cloud Software Execution</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Organisation_Cloud_Software_Execution&amp;diff=36117"/>
		<updated>2023-12-15T03:40:06Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: First set of questions answered, based on list of questions provided by Pavlina via email.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The TUFLOW &amp;lt;u&amp;gt;[https://www.tuflow.com/Download/Licensing/TUFLOW%20Products%20Licence%20Agreement.pdf End User Licence Agreement]&amp;lt;/u&amp;gt; was updated in 2018 allowing companies to host their own licences on the cloud. The only restrictions associated with users running TUFLOW simulations on their own company public or private cloud environment are:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; The licence must be a “Network” type (use of “Local” licences is not permitted on the cloud).&lt;br /&gt;
&amp;lt;li&amp;gt; Usage of TUFLOW software on a virtual machine is confined to Authorised Users within the Licensee&#039;s Network. This clause means companies cannot on-sell access to TUFLOW licences hosted in the cloud or otherwise (excluding TUFLOW vendor contract arrangements). &lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configuration of your cloud environment is your own responsibility. There are numerous ways TUFLOW licensing and simulation can be configured in a cloud environment depending on the cloud provider (Microsoft, Google, Amazon, etc.) and internal company protocols. We recommend engaging a professional with suitable cloud architecture expertise to design your bespoke system. Clients who have already migrated to the cloud have done so in a variety of ways:&lt;br /&gt;
* Some use a hardware lock (USB) dongle that resides in their office on a physical computer or server. Cloud virtual machines link to the network licence via the IP address of the hardware lock.&lt;br /&gt;
* Others use a software lock. Software locks are a digital licence file that is locked to a dedicated host computer, server or virtual machine. When using a software lock please select the host carefully as the software licence will be bound to it. Relocating the licence to a new location will require TUFLOW sales staff to reissue the licence, which incurs a small administration fee.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Please Note: Network licence rentals can be used to upscale the available licences on your cloud system when demand requires it.&#039;&#039;&#039; &lt;br /&gt;
 Refer to the &amp;lt;u&amp;gt;[https://www.tuflow.com/Prices.aspx TUFLOW Pricelist]&amp;lt;/u&amp;gt; for more information.&lt;br /&gt;
&lt;br /&gt;
This detailed report from the TUFLOW Library discusses some benefits, challenges and solutions relating to cloud computing to help people who are setting up their own system: &lt;br /&gt;
&amp;lt;u&amp;gt;[https://downloads.tuflow.com/Licensing/2021_Running_TUFLOW_on_the_Cloud.pdf Running TUFLOW on the Cloud (Whitepaper)]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:2021_Running_TUFLOW_on_the_Cloud.png]]&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Common Questions Answered (FAQ)=&lt;br /&gt;
== Q1: How do I execute a simulation on the cloud? Can I still use batch files? ==&lt;br /&gt;
Running a simulation on the cloud can be very similar to running it on any other computer. You can access a VM remotely just like you would any other remote computer, using Remote Desktop, SSH, VNC, an X-Server client, etc. - whatever you are used to and whatever is set up on the VM. However, that assumes the VM is set up for that type of access and is running when you need to connect to it. If you want to make use of the real benefits of the cloud, like the ability to run on many computers at once that start automatically only when needed, such a manual process would be very cumbersome. You may want to consider more advanced techniques like [https://azure.microsoft.com/en-au/products/batch Azure Batch], AWS Batch, or [https://cloud.google.com/batch/docs/get-started Google Cloud Batch].&lt;br /&gt;
&lt;br /&gt;
In either case, you will need access to a TUFLOW licence server from VMs running the model. Have a look at &amp;quot;Do I need a different licence to run models on the cloud?&amp;quot; below. And the VMs will always need to have CodeMeter installed, configured to find the licence you plan to use, as well as appropriate drivers for hardware like GPUs.&lt;br /&gt;
&lt;br /&gt;
When running on the cloud, consider that you may not have network access to locations where you would normally store your results. You may need to set up storage in the cloud separate from the VM, but connected to it, to collect your results and still have them available to you once the VM stops running.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re using remote access to desktop VMs, you can still use &#039;&#039;batch files&#039;&#039; or scripts like you&#039;re used to. If you look into batch services, you will need more involved scripting, and you would typically not use batch files, but split up the work into separate tasks for the cloud platform to schedule on available computers. Keep in mind that this is a substantial and complex task, requiring some development and IT skills. If you plan on this type of cloud use, plan ahead and be ready with a working and tested solution, before you take on a deadline.&lt;br /&gt;
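As a rough illustration of splitting work into per-simulation tasks, here is a minimal sketch. The `submit_task` wrapper is hypothetical, standing in for your batch service's real submission command (Azure Batch, AWS Batch and Google Cloud Batch each have their own CLIs); it only prints what would be queued, so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# Minimal sketch: queue one cloud task per TUFLOW control file.
# "submit_task" is a hypothetical stand-in for a batch service's
# submission command; here it only prints the task it would queue.
submit_task() {
    printf 'queue task: TUFLOW_iSP_w64.exe -b -nc "%s"\n' "$1"
}

for tcf in M01_5m_001.tcf M01_5m_002.tcf; do
    submit_task "$tcf"
done
```

Replacing the `printf` with an actual submission call is where the real scripting effort (and the development and IT skills mentioned above) comes in.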
== Q2: Do I need a different TUFLOW executable to run models on the cloud? ==&lt;br /&gt;
No, you can use the same executable appropriate to the operating system you are on. Keep in mind that running TUFLOW with a licence does require that CodeMeter is installed as well and configured to find the licence. And if you are using a GPU on the cloud, you will need to have the appropriate NVIDIA drivers with CUDA installed, and a GPU licence available.&lt;br /&gt;
&lt;br /&gt;
Although you do use the same executable, it may be advantageous to provide some additional command line options to TUFLOW when you run it on the cloud. Since you typically won&#039;t be present and looking at the screen, consider using the `-nc` switch, which prevents user interaction on the console. Also, the familiar `-b` option will prevent the simulation waiting for a key press at the end of the simulation. And finally, given the possible cost of running models at scale, you would do well to test your model with the `-t` switch before sending it to the cloud. In addition to command line options, learn about TUFLOW override files to override configuration that may need to be different on the cloud VM, like the location where TUFLOW should write results.&lt;br /&gt;
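To show how these switches combine, here is a sketch with a hypothetical `run_tuflow` wrapper that only echoes the command line, so it runs without TUFLOW installed; replace the echo with a real invocation of your TUFLOW executable.

```shell
#!/bin/sh
# Dry-run sketch of the command line switches discussed above.
run_tuflow() {
    echo TUFLOW_iSP_w64.exe "$@"
}

run_tuflow -t "model.tcf"      # -t: test the model locally first, no simulation
run_tuflow -b -nc "model.tcf"  # -b: no key press at the end; -nc: no console interaction
```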
== Q3: What steps do I need to take to run my model on the cloud? ==&lt;br /&gt;
In no particular order:&lt;br /&gt;
&lt;br /&gt;
* Assuming you have chosen a cloud provider you will use, make sure you understand the answers to the previous questions. If some of this is too technical, ensure you go over this with staff with appropriate IT skills and administrative access.&lt;br /&gt;
* With regard to the model itself, ensure that it has no references to files on computers that wouldn&#039;t be accessible from the cloud VM running the model. Ideally, construct your model configuration so that it can be self-contained within a single folder and would run wherever you put it.&lt;br /&gt;
* Ensure you have sufficient TUFLOW licences available and accessible to your cloud VMs to run the number of simulations you plan to run in parallel on the cloud.&lt;br /&gt;
* Ensure you have sufficient quota for the storage and cloud resources you need to run the number of simulations you plan to run, specifically when using the &#039;Batch&#039; services mentioned under Q1.&lt;br /&gt;
* Ensure you have the right level of access to make use of the cloud resources you need, and that you&#039;re able to use and manage them when you do.&lt;br /&gt;
* Ensure that what you&#039;re planning on the cloud complies with your company and client&#039;s security policies for the work. Think about where the cloud computers are, how data is transferred to and from the cloud, and who has access.&lt;br /&gt;
* If you can, pick a region that puts the compute and storage relatively close to your own location, ensuring that your access (or perhaps your clients&#039; access) to them over the internet can achieve good total network speeds.&lt;br /&gt;
* Test your model before putting it on the cloud and test your preferred method of running a model on the cloud before scaling it up.&lt;br /&gt;
* Make sure your model configuration matches your actual needs before sending it to the cloud. Consider the frequency of writing outputs, whether you need check files, etc.&lt;br /&gt;
&lt;br /&gt;
When in doubt, feel free to contact [mailto:support@tuflow.com TUFLOW Support] and [mailto:sales@tuflow.com TUFLOW Sales] with questions, but keep in mind that we can only offer limited guidance when it comes to the specifics of your chosen cloud provider, and that your company&#039;s IT policies may further limit your options.&lt;br /&gt;
&lt;br /&gt;
== Q4: How can I download the simulation results? ==&lt;br /&gt;
This depends on your chosen solution.&lt;br /&gt;
&lt;br /&gt;
If you have cloud VMs that have access to your company&#039;s internal network, you may be able to copy the results automatically (with a script or batch file) after a simulation completes, and no download would be needed. If you have cloud VMs that you interactively use remotely, you can use whatever tools you would use from any remote machine, like OneDrive, Dropbox, FTP, SSH, to name but a few.&lt;br /&gt;
&lt;br /&gt;
However, all cloud service providers also provide cloud storage, and it may be cheaper and faster to keep unprocessed results in the cloud. Once a run completes, you typically do not want to keep the results on storage that is local to the VM that ran the model (e.g. its C: or D: drive on a Windows computer), unless you plan to use the same VM for post-processing of the results. But you can set up network file shares in the cloud that can be connected to your VM as extra drives or mounts, or you can make use of blob storage like Azure Blob, S3 Buckets, etc. Depending on the cloud service provider, there will be relatively user-friendly tools to access these remotely and download your data later.&lt;br /&gt;
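As one concrete (hypothetical) example, copying a finished run from the VM's local disk to Azure blob storage could look like the sketch below; the account and container names are placeholders, a real call would also need a SAS token or signed-in `azcopy` session, and AWS (`aws s3 cp`) and Google (`gsutil cp`) have equivalents. The command is only printed here as a dry run.

```shell
#!/bin/sh
# Dry-run sketch: upload a run's results folder to blob storage once
# the simulation completes. Destination URL is a placeholder.
upload_cmd() {
    printf 'azcopy copy "%s" "%s" --recursive\n' "$1" "$2"
}

upload_cmd "./results" "https://myaccount.blob.core.windows.net/runs/M01"
```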
&lt;br /&gt;
For particularly massive datasets, some cloud providers also offer services where they can put the data on physical media and ship them to you. However, keep in mind that this takes substantial time to reserve beforehand and then some time to execute after you complete the work. And the service may not be available for smaller volumes you may need.&lt;br /&gt;
&lt;br /&gt;
Finally, at the risk of stating the obvious: perform the download on a good internet connection. Cloud providers charge a small amount per GB downloaded, and in return they offer very good download speeds for your data. But your internet connection may end up limiting how quickly you get your data to your computer.&lt;br /&gt;
== Q5: What are the benefits of running a simulation in the cloud rather than locally? ==&lt;br /&gt;
Not all benefits apply in all cases, but consider these:&lt;br /&gt;
&lt;br /&gt;
* You can get access to as many cloud VMs (and GPUs) as you need to run as many simulations as you need in parallel, provided you have sufficient licences and quota with the provider.&lt;br /&gt;
* If you only need compute infrequently, it&#039;s there in the cloud when you need it and you only pay for it when you use it.&lt;br /&gt;
* If your workload suddenly increases (which may be a good thing), you can quickly increase the amount of compute with cloud computing, provided you&#039;re set up to do so.&lt;br /&gt;
* Most cloud providers offer access to a variety of very capable hardware that may allow you to run larger or longer-running models than you could on your own hardware.&lt;br /&gt;
* If you collaborate with others from various locations (wherever they are in the world), having the results in the cloud may be a real benefit.&lt;br /&gt;
&lt;br /&gt;
However, there are some potential downsides to consider as well:&lt;br /&gt;
&lt;br /&gt;
* If you make efficient use of hardware you own, the compute is likely cheaper per model run than cloud computing, especially on-demand compute.&lt;br /&gt;
* Although it&#039;s not very complicated to set up a VM for cloud runs and to get up and running, it may be complicated to do so in a way that satisfies your company or client&#039;s security policies.&lt;br /&gt;
* Similarly, just running some models on an interactively accessible VM may be simple, but developing scripts for automated model running may require time and skills that prevent you from doing so yourself.&lt;br /&gt;
&lt;br /&gt;
== Q6: Do I need to add in any extra commands in my control files? ==&lt;br /&gt;
If your model is self-contained and could run from its folder on any computer, perhaps not. However, you may want to change where a VM in the cloud tries to write its results, for example. You can achieve that with extra commands in your control files, but also consider the use of TUFLOW override control files, which you can tailor to the cloud VMs you&#039;re using, without affecting the control files you use for running or testing locally.&lt;br /&gt;
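A minimal sketch of the override idea, assuming TUFLOW's override-file convention (an `_TUFLOW_Override.tcf` placed alongside the executable) and the `Output Folder` command; check the TUFLOW manual for the exact file name and commands your version supports.

```shell
#!/bin/sh
# Sketch: write a small override file on the cloud VM so outputs go to
# a VM-local folder, without touching the model's own control files.
# File name and command follow TUFLOW's override-file convention;
# verify both against the TUFLOW manual for your version.
cat > _TUFLOW_Override.tcf <<'EOF'
Output Folder == D:\cloud_results\
EOF
```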
&lt;br /&gt;
To keep costs of storage and transport manageable, as well as saving on some run time, configure your model to write only the outputs you need. This includes selecting the right variables to output, at the appropriate time intervals. Have a look at our [https://www.youtube.com/watch?v=-CsKKjG7jpQ Output Management Advice] webinar (15 minutes) for more tips on that.&lt;br /&gt;
&lt;br /&gt;
Also look at the command line switches mentioned in the answer to Q2.&lt;br /&gt;
== Q7: Do I need a different licence to run models on the cloud? ==&lt;br /&gt;
Not necessarily, but there are some things to keep in mind. If your existing licences are on a dongle, they would need to be network licences and the server they are installed on would have to be accessible over the network from the cloud VMs you&#039;re looking to run models on. If you have sufficient existing network licences you can use in this manner, including licences for special hardware you&#039;d be using on the cloud (like a GPU), you will not need different licences.&lt;br /&gt;
&lt;br /&gt;
You can also set up a dedicated VM to run a small CodeMeter network licence server in the cloud for software network licences. But keep in mind that licences on such a server cannot be moved elsewhere - they are bound to this specific VM. Access to this licence server would be limited to VMs in the cloud, on the same virtual network as the licence server. Or you&#039;d need to have someone with the appropriate IT skills make the licence server accessible from all locations you need access from.&lt;br /&gt;
&lt;br /&gt;
Alternatively, you may be able to make use of web licences; please contact [mailto:sales@tuflow.com TUFLOW Sales] for more information on that.&lt;br /&gt;
== Q8: What can go wrong when running models on the cloud? ==&lt;br /&gt;
For starters, almost everything that can go wrong when running models locally, although power failure and loss of network connection are exceedingly rare on the cloud.&lt;br /&gt;
&lt;br /&gt;
Common problems arise from the differences in the computer&#039;s environment: software you may have installed that batch files rely on, software required to run TUFLOW (CodeMeter, NVIDIA drivers for GPU), access to networked resources you get inputs from, or write results to, etc.&lt;br /&gt;
&lt;br /&gt;
Also, if you&#039;re using Batch services from your cloud provider, once a VM completes its tasks, it may disappear. If something went wrong during the run, you may have very limited access to information about what went wrong, so you want to be careful about logging and where logs are written to.&lt;br /&gt;
&lt;br /&gt;
Similarly, but much simpler: if you run models interactively on a desktop VM, once you turn it off, you will no longer have access to its local storage. And once you remove the VM to save on cost, keep in mind that its attached disk storage will be removed as well, so ensure you have your results in a safe place before that.&lt;br /&gt;
&lt;br /&gt;
Finally, access to licences using CodeMeter from the cloud VM can sometimes cause complications, as can user access to the VM or the data, depending on your IT setup.&lt;br /&gt;
&lt;br /&gt;
None of these should stop you from trying, but ensure everything works like you expect, before scaling up to many model runs at once.&lt;br /&gt;
== Q10: If I stop the cloud VM after models are finished, can I still download the results? ==&lt;br /&gt;
If the results were written to local storage on the VM (like the default C: or D: drive on a Windows VM), you will only be able to access these when the cloud VM is running. If you stopped it, you could restart it to gain access again. Once you delete the VM, data on those volumes will be deleted as well, and cannot be recovered.&lt;br /&gt;
&lt;br /&gt;
To be able to access results in the cloud even when a VM is stopped, or deleted, copy the results to a network share on the cloud. On the VM, you may be able to mount this storage as a network share, or tools will be available to perform a copy to cloud storage, depending on the cloud provider and operating system you are using.&lt;br /&gt;
== Q11: Why is my run on the cloud slower than I expected based on the specs? ==&lt;br /&gt;
Although cloud hardware may be faster for some use cases, and certainly a lot more expensive to purchase, it is not guaranteed to run your TUFLOW model faster. This mostly depends on how modern the NVIDIA hardware architecture is, how many CUDA cores it has available, and specific metrics of the hardware like the amount of memory, the clock speed of the memory, the clock speed of the cores, and how the GPU is connected to the rest of the hardware. For a good assessment of whether you should expect better performance, refer to our [[Hardware Benchmarking (2018-03-AA)|Hardware Benchmarking]] pages.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re wondering why TUFLOW software doesn&#039;t benefit from these supposedly faster and more expensive GPUs, consider that a GPU has many different features, and TUFLOW only makes use of an important subset of these. Also, most TUFLOW models are executed using the single-precision floating-point executable, which is faster than the double-precision executable. Desktop GPUs are highly optimised for single-precision compute, because this is what benefits gaming and, as it happens, TUFLOW runs. Data centre GPUs are more optimised for double-precision compute, but most TUFLOW simulations don&#039;t benefit in result quality from using it.&lt;br /&gt;
&lt;br /&gt;
Even when the hardware should be faster according to benchmarks, it&#039;s possible that you have some other restrictions. For one, if your cloud environment shares GPUs between many users, the part of the GPU available to your model run may only see a small percentage of the performance it would show with exclusive access to the GPU. This is particularly true in Virtual Desktop Infrastructure (VDI) setups. The way TUFLOW uses the GPU is very different from normal graphics processing, and VDI solutions are often not good for model running.&lt;br /&gt;
&lt;br /&gt;
Another common cause of slowing is writing results directly to network shares that may be accessed over network connections that are orders of magnitude slower than local disk access. In these situations, the recommendation is to write results locally (with minimal overhead) on the cloud VM and then copy the results to other storage in one go, when the run completes. Even if you perform this copy while another run starts, you&#039;ll find that running first and copying after is a lot faster than writing directly to the network share. To understand why, imagine writing and sending an email one word at a time, or writing it all in one go. The amount of typing you have to do is roughly the same, if you do it cleverly, but clearly the whole process will take longer, and you can imagine the network having to send far more data back and forth. The difference between writing results to the network one part at a time, instead of all at once is analogous.&lt;br /&gt;
== Q12: How can I lower the cost of running simulations on the cloud? ==&lt;br /&gt;
The first step would be to select the hardware that&#039;s best suited to your needs, at the lowest price, from the most affordable provider.&lt;br /&gt;
&lt;br /&gt;
Secondly, if you get cloud hardware on-demand, you&#039;re paying the highest rates for the flexibility this affords. You can also reserve instances of specific hardware types, for periods like a year, or three years (depending on the cloud provider), dramatically lowering the price - but then you will have to pay for the entire period for the reserved instances. If your organisation is large enough, it can be worthwhile to have access to a pool of reserved resources, as long as the business achieves high utilisation over time, so that you only pay on-demand prices when you exceed your reserved instances.&lt;br /&gt;
&lt;br /&gt;
If you do end up using on-demand hardware, ensure you only run it when you&#039;re actually using it. By automatically turning off VMs when the work is done and copied to appropriate storage, you can save on compute costs - you&#039;re not paying for how much power they use, you&#039;re paying for the hours they&#039;re on. And keep data on cheap storage like blob storage or online file shares, where you pay only for the size you&#039;re using, instead of keeping expensive VMs around that have massive virtual hard drives that you&#039;re paying for as long as they exist, empty or not.&lt;br /&gt;
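The "run, copy, then turn off" pattern above can be sketched as below. `az vm deallocate` is the Azure CLI command that stops compute billing for a VM (other providers have equivalents); the resource names are placeholders, and the wrapper only prints the command as a dry run.

```shell
#!/bin/sh
# Dry-run sketch of "only pay while you use it": once the run finishes
# and results are copied to cloud storage, deallocate the VM so its
# compute billing stops. Resource names are placeholders.
shutdown_cmd() {
    printf 'az vm deallocate --resource-group "%s" --name "%s"\n' "$1" "$2"
}

shutdown_cmd "my-rg" "my-tuflow-vm"
```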
&lt;br /&gt;
Don&#039;t download data repeatedly, especially if you need access to it frequently. If you only need access to a small part of the data, it may be worth it to do so remotely. But if you need to process entire files, or multiple users need a copy, it will be more economical to download the data to your network once and use it from there.&lt;br /&gt;
&lt;br /&gt;
You may have heard about &#039;spot pricing&#039; for VMs. This may be suitable if you&#039;re running many small simulations in sequence, and if you&#039;re not under strict time pressure to deliver results, but in many cases, it won&#039;t be ideal, especially if your model is not set up with restart files that get stored away from the VM. We find that the price difference for the hardware rarely outweighs the added complexity, even though the discounts on VMs obtained through spot pricing can be substantial.&lt;br /&gt;
&lt;br /&gt;
If you find that the number of licences you need to scale up model running on the cloud is the main limiting factor for cost, contact our [mailto:sales@tuflow.com TUFLOW Sales] to discuss options for your situation.&lt;br /&gt;
&lt;br /&gt;
Finally, read through these questions, and take the advice given to heart. Optimising your model configuration and making the right choices when running on the cloud can save a lot of run time, and thus cost.&lt;br /&gt;
== Q13: Is there a developed service to run large numbers of model runs on the cloud, if we cannot set it up ourselves? ==&lt;br /&gt;
As of 2019, TUFLOW offer an [[TUFLOW Cloud Simulation Service|on-demand cloud simulation service]] that may suit your needs if your project is sufficiently urgent or large. As of 2023, you may find third parties providing services on the cloud as well, and TUFLOW may support use of its software in such services.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=HPC_Running_and_Converting_Models&amp;diff=35763</id>
		<title>HPC Running and Converting Models</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=HPC_Running_and_Converting_Models&amp;diff=35763"/>
		<updated>2023-12-11T01:54:07Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Brought partially correct command line option description in line with tcf command description (which was correct)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
This page contains information about converting an existing TUFLOW Classic or GPU (pre 2017 HPC release) model to a format that can be run using the TUFLOW HPC engine. This page provides a quick summary for experienced TUFLOW users to use as a reference point for updating their models. It is recommended that less experienced TUFLOW users refer to our &amp;lt;u&amp;gt;[[Tutorial_Introduction |TUFLOW Tutorial Modules]]&amp;lt;/u&amp;gt; for greater support and guidance on creating a HPC model.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To convert an existing TUFLOW Classic or GPU model to run on HPC, an update to the TUFLOW Control File (TCF) is needed. Some features from TUFLOW Classic that are not currently supported in HPC may prevent the HPC model from running successfully. To find out more about unsupported features in HPC, be sure to review the &amp;lt;u&amp;gt;[https://tuflow.com/Download/TUFLOW/Releases/2017-09/TUFLOW%20Release%20Notes.2017-09.pdf TUFLOW 2017-09 Release Notes]&amp;lt;/u&amp;gt; or the &amp;lt;u&amp;gt;[[HPC Features | HPC Features Wiki Page]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Converting TUFLOW Classic to HPC (TCF Updates) =&lt;br /&gt;
To run an existing TUFLOW Classic simulation with the new HPC engine, the following lines of text need to be added to the TUFLOW Control File (TCF).&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Solution Scheme &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; HPC &amp;lt;/tt&amp;gt; &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !This command specifies that you want to run TUFLOW using the HPC solution scheme or engine.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following command is also required to run the model using GPU hardware:&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Hardware &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; GPU &amp;lt;/tt&amp;gt;        &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !CPU is default. The hardware command instructs TUFLOW HPC to run using GPU hardware. This is typically orders of magnitude faster than on CPU.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
These two commands are all that&#039;s needed to convert the TUFLOW Classic model to HPC and run it using GPU hardware. There are, however, more commands provided below that give the modeller greater control over the hardware that HPC uses.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Running HPC on Multiple CPU Threads =&lt;br /&gt;
As mentioned in the &amp;lt;u&amp;gt;[[HPC_Introduction | HPC Introduction]]&amp;lt;/u&amp;gt; page, HPC can be parallelised to run across multiple CPU processors when run on CPU (i.e. not GPU). The following command allows the modeller to dictate the number of core processors to run TUFLOW HPC across.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;CPU Threads &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; 8 &amp;lt;/tt&amp;gt;  &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !Default is 4. This instructs TUFLOW to search for and run the model across eight different core processors. &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If the number of processors or TUFLOW licences found by TUFLOW is less than the specified value, then TUFLOW will utilise the maximum number of core processors available within the licence and hardware limitations.&amp;lt;br&amp;gt;&lt;br /&gt;
Alternatively, the number of CPU threads can be specified in the batch file / command line by using the -nt&amp;lt;number of threads&amp;gt; argument. If both the control file and the command line are used to specify the number of threads, the command line option will prevail.&lt;br /&gt;
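For example, requesting eight threads from the command line could look like the sketch below; the wrapper only prints the command line it would run (matching the style of the GPU examples further down), so it is safe to execute without TUFLOW installed.

```shell
#!/bin/sh
# Dry-run sketch of the -nt switch: request 8 CPU threads for an HPC
# run on CPU hardware. Printed only; call the executable directly to
# run for real.
nt_cmd() {
    printf '"TUFLOW_iSP_w64.exe" -nt%s "%s"\n' "$1" "$2"
}

nt_cmd 8 "M01_5m_001.tcf"
```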
&lt;br /&gt;
= Running HPC on Multiple GPU Devices =&lt;br /&gt;
Much like HPC can be run across multiple CPU processors, HPC can be run across multiple GPU cards. Models can also be instructed to run on a specific GPU card.&amp;lt;br&amp;gt;&lt;br /&gt;
If a machine only has a single GPU card, the GPU Device ID will be 0, which is the default. If a second GPU card is added, its Device ID will be 1, and so on. The GPU IDs can be checked by reviewing the machine&#039;s &#039;&#039;Device Manager&#039;&#039;.&amp;lt;br&amp;gt;&lt;br /&gt;
The most common approach is to specify the GPU card ID in the batch file / command line using the -pu&amp;lt;id&amp;gt; argument.&amp;lt;br&amp;gt;&lt;br /&gt;
The example below will run a single simulation across the first and second GPU cards (IDs 0 and 1). &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -pu0 -pu1 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
The example below will run a single simulation on the fourth GPU card (ID 3). &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;quot;TUFLOW_iSP_w64.exe&amp;quot; -pu3 &amp;quot;M01_5m_001.tcf&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the following TCF command can be used to set the number of GPU devices and which devices to use.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Device IDs &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; 0, 1 &amp;lt;/tt&amp;gt;	&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !This command instructs TUFLOW to run the model on GPU Device 0 and GPU Device 1.&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If both control file and command line are used to specify devices, the command line option will prevail.&lt;br /&gt;
&lt;br /&gt;
= Converting TUFLOW GPU to HPC (TCF Updates) =&lt;br /&gt;
When converting a TUFLOW GPU model across to HPC, first confirm that all features in the GPU model are available in HPC by referring to the &amp;lt;u&amp;gt;[https://tuflow.com/Download/TUFLOW/Releases/2017-09/TUFLOW%20Release%20Notes.2017-09.pdf TUFLOW 2017-09 Release Notes]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
Delete the following command from the *.tcf file and insert the commands specified above for converting a TUFLOW Classic model to HPC.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;s&amp;gt;&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Solver &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; ON &amp;lt;/tt&amp;gt;&amp;lt;/s&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= HPC Scenarios and Updating the Batch File =&lt;br /&gt;
Modellers may want to change the hardware that HPC runs on over the course of a project. For example, if your company owns more CPU than GPU licences, it may be beneficial to run the model on CPU hardware during the initial model build phase, so that your colleagues have access to the higher speed GPU licences for production runs on other projects running in parallel. If this is the case, it may be easier to set up a Scenario Logic statement in the TUFLOW Control File (TCF) that allows the modeller to change the hardware being used with a simple switch in the batch file used to run the model.&lt;br /&gt;
&lt;br /&gt;
To set up a scenario for varying hardware options, the following commands can be used in the TCF file:&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Solution Scheme &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;tt&amp;gt; HPC &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Hardware &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; &amp;lt;&amp;lt;~s1~&amp;gt;&amp;gt; &amp;lt;/tt&amp;gt;    &amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;   !The scenario will either be &amp;quot;CPU&amp;quot; or &amp;quot;GPU&amp;quot;&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This basic scenario logic can be configured further within the TCF as shown below:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;If Scenario &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; CPU &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:: &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;CPU Threads &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; 8 &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Else If Scenario &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; GPU &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:: &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;GPU Device IDs&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;==&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;tt&amp;gt; 0, 1 &amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;End If &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If using the above Scenario Logic, the modeller must include a scenario call in the batch file.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-s1 &amp;lt;Hardware Type&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If you are unfamiliar with using Scenario Logic, please refer to &amp;lt;u&amp;gt;[[Tutorial_M08 |Tutorial Module 08]]&amp;lt;/u&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other useful batch file switches include:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-nt &amp;lt;number_of_threads&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;: This switch is used to set the number of CPU threads used for CPU mode simulations.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;-pu &amp;lt;GPU Device IDs&amp;gt; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;: This switch is used to set the number of GPU devices and which devices to use.&amp;lt;br&amp;gt;&lt;br /&gt;
Examples of how this would be implemented in a simple batch file for CPU and GPU are shown below.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;CPU&amp;lt;/u&amp;gt;&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;TUFLOW_iSP_w64.exe -s1 &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;CPU&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;-nt8&amp;lt;/font&amp;gt; FMA_T2_~s1~_001.tcf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This example will run TUFLOW HPC on CPU using 8 CPU threads.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;u&amp;gt;GPU&amp;lt;/u&amp;gt;&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;TUFLOW_iSP_w64.exe -s1 &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;GPU&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;-pu0 -pu1&amp;lt;/font&amp;gt; FMA_T2_~s1~_001.tcf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This example will run TUFLOW HPC on GPU using 2 GPU cards.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Tips Navigation&lt;br /&gt;
|uplink=[[ HPC_Modelling_Guidance | Back to HPC Modelling Guidance]]&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25210</id>
		<title>Installing Wibu CodeMeter Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25210"/>
		<updated>2022-02-01T23:07:02Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Apparently managed to make the same mistake three times in a row - apologies to anyone confused. 22350, 22352 and 22353 are definitely correct, more in CodeMeter documentation.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides a basic set of instructions to install the Wibu CodeMeter Runtime on a Linux host through the command line interface (CLI). For more information about using Wibu dongles or software licenses, refer to &amp;lt;u&amp;gt;[[Wibu_Dongles|Wibu Dongles]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions, but were tested on CentOS and Debian. Note that these instructions are provided as a courtesy to users new to Linux; please ensure you understand what the commands mean before you run them, and be aware of the [[Tuflow:General_disclaimer|general disclaimer]].&lt;br /&gt;
&lt;br /&gt;
==Getting the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
The appropriate version of the CodeMeter Runtime can be obtained from the Wibu website at &amp;lt;u&amp;gt;[https://www.wibu.com/support/user/user-software.html www.wibu.com/support/user/user-software.html]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using Debian, Ubuntu, Mint or another Linux distribution in the Debian family of distributions, you should obtain a copy of the `.deb` installer for your hardware. If you are using Red Hat (RHEL), Fedora, CentOS or another Linux distribution in the Red Hat family of distributions, you should obtain a copy of the `.rpm` installer for your hardware. If your hardware supports 64-bit software (which is likely for modern systems), using that version is recommended. From here on, we&#039;ll refer to &#039;Debian&#039; or &#039;Red Hat&#039; to mean any distribution in that family.&lt;br /&gt;
&lt;br /&gt;
Depending on your level of access to the machine running Linux and whether or not it is running a graphical user interface, you may have some trouble getting the file onto your machine. You can download the file directly from the command line with: &amp;lt;pre&amp;gt;wget -O codemeter.rpm &amp;lt;direct link&amp;gt;&amp;lt;/pre&amp;gt; where &amp;quot;&amp;lt;direct link&amp;gt;&amp;quot; is the &#039;direct link&#039; provided on the Wibu download page for the version you are downloading.&lt;br /&gt;
&lt;br /&gt;
The download page also provides an MD5 checksum. You can run &amp;lt;pre&amp;gt;md5sum codemeter.rpm&amp;lt;/pre&amp;gt; and verify that the file downloaded correctly by comparing the output with this checksum.&lt;br /&gt;
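As a minimal sketch, the comparison can also be scripted so it fails loudly on a mismatch. The file name and the expected checksum below are stand-ins (the real values depend on the installer version you download from the Wibu page):

```shell
# Stand-in installer file so the sketch is self-contained; in practice
# this would be the codemeter.rpm (or .deb) you downloaded with wget.
printf 'test\n' > installer.rpm

# In practice, copy this value from the Wibu download page.
expected="d8e8fca2dc0f896fd7cb4cb0031ba249"

# Take only the checksum field from the md5sum output.
actual=$(md5sum installer.rpm | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
```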
&lt;br /&gt;
If your Linux distribution does not provide `wget`, you can obtain a copy on Debian with `sudo apt-get install wget` and on Red Hat with `sudo yum install wget`.&lt;br /&gt;
&lt;br /&gt;
==Installing the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
On Red Hat, using `yum`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo yum localinstall codemeter.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Debian, using `dpkg` and `apt-get`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo dpkg -i codemeter.deb&amp;lt;/pre&amp;gt; If that fails due to missing dependencies, you can run &amp;lt;pre&amp;gt;sudo apt-get -f install&amp;lt;/pre&amp;gt; to pull in the missing dependencies and complete the installation.&lt;br /&gt;
&lt;br /&gt;
Once these commands complete (on either Debian or Red Hat), you can start, stop and restart the services with `systemctl`: &amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `systemctl` is not available to you, you can use: &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
Refer to the CodeMeter manual for instructions on configuring CodeMeter. &lt;br /&gt;
&lt;br /&gt;
However, if you are installing CodeMeter as a client for network licenses, the following is an example of a section you can add to the `/etc/wibu/CodeMeter/Server.ini`: &amp;lt;pre&amp;gt;[ServerSearchList]&lt;br /&gt;
UseBroadcast=1&lt;br /&gt;
&lt;br /&gt;
[ServerSearchList\Server1]&lt;br /&gt;
Address=&amp;lt;ip number of your license host&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add multiple `ServerSearchList\Server&amp;lt;n&amp;gt;` sections, one for each license host you have, with the IP address of the license host. Once you update and save the configuration file, restart the CodeMeter service and your licenses from the network server should then be available locally.&lt;br /&gt;
&lt;br /&gt;
If you are setting up a license host, which you wish to access from another machine, you will need to install the CodeMeter Runtime on that machine as well and you need to ensure the firewall allows requests to the license host on port 22350.&lt;br /&gt;
&lt;br /&gt;
On Red Hat, you can achieve this with:&amp;lt;pre&amp;gt;sudo firewall-cmd --get-active-zones&lt;br /&gt;
sudo firewall-cmd --zone=public --add-port=22350/tcp --permanent&lt;br /&gt;
sudo firewall-cmd --reload&amp;lt;/pre&amp;gt;&lt;br /&gt;
This assumes you see the `public` zone after the first command.&lt;br /&gt;
&lt;br /&gt;
On Debian, you can run:&amp;lt;pre&amp;gt;sudo iptables -A INPUT -p tcp -m tcp --dport 22350 -j ACCEPT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, if you want users to be able to access the web admin interface for CodeMeter on the server, you would need to ensure the firewall allows requests on port 22352 (for http) and/or 22353 (for https). However, access to the web admin interface from other machines is not required for obtaining a license, and in typical configurations you would be able to access the web admin interface on the host itself (localhost) without additional firewall rules.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25209</id>
		<title>Installing Wibu CodeMeter Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25209"/>
		<updated>2022-02-01T07:31:14Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides a basic set of instructions to install the Wibu CodeMeter Runtime on a Linux host through the command line interface (CLI). For more information about using Wibu dongles or software licenses, refer to &amp;lt;u&amp;gt;[[Wibu_Dongles|Wibu Dongles]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions, but were tested on CentOS and Debian. Note that these instructions are provided as a courtesy to users new to Linux, please ensure you understand what the commands mean before you run them and be aware of the [[Tuflow:General_disclaimer|general disclaimer]].&lt;br /&gt;
&lt;br /&gt;
==Getting the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
The appropriate version of the CodeMeter Runtime can be obtained from the Wibu website at &amp;lt;u&amp;gt;[https://www.wibu.com/support/user/user-software.html www.wibu.com/support/user/user-software.html]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using Debian, Ubuntu, Mint or another Linux distribution in the Debian family of distributions, you should obtain a copy of the `.deb` installer for your hardware. If you are using Red Hat (RHEL), Fedora, CentOS or another Linux distribution in the Red Hat family of distributions, you should obtain a copy of the `.rpm` installer for your hardware. If your hardware supports 64-bit software (which is likely for modern systems), using that version is recommended. From here on, we&#039;ll refer to &#039;Debian&#039; or &#039;Red Hat&#039; to mean any distribution in that family.&lt;br /&gt;
&lt;br /&gt;
Depending on your level of access to the machine running Linux and whether or not it is running a graphical user interface, you may have some trouble getting the file onto your machine. You can download the file directly from the command line with: &amp;lt;pre&amp;gt;wget -O codemeter.rpm &amp;lt;direct link&amp;gt;&amp;lt;/pre&amp;gt; where &amp;quot;&amp;lt;direct link&amp;gt;&amp;quot; is the &#039;direct link&#039; provided on the Wibu download page for the version you are downloading.&lt;br /&gt;
&lt;br /&gt;
The download page also provides an MD5 checksum. You can run &amp;lt;pre&amp;gt;md5sum codemeter.rpm&amp;lt;/pre&amp;gt; and verify that the file you downloaded was downloaded correctly by comparing this checksum.&lt;br /&gt;
&lt;br /&gt;
If your Linux distribution does not provide `wget`, you can obtain a copy on Debian with `sudo apt-get install wget` and on Red Hat with `sudo yum install wget`.&lt;br /&gt;
&lt;br /&gt;
==Installing the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
On Red Hat, using `yum`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo yum localinstall codemeter.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Debian, using `dpkg` and `apt-get`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo dpkg -i codemeter.deb&amp;lt;/pre&amp;gt; If that fails due to missing dependencies, you can run &amp;lt;pre&amp;gt;sudo apt-get -f install&amp;lt;/pre&amp;gt; to pull in the missing dependencies and complete the installation.&lt;br /&gt;
&lt;br /&gt;
Once these commands complete (on either Debian or Red Hat), you can start, stop and restart the services with `systemctl`: &amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `systemctl` is not available to you, you can use: &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
Refer to the CodeMeter manual for instructions on configuring CodeMeter. &lt;br /&gt;
&lt;br /&gt;
However, if you are installing CodeMeter as a client for network licenses, the following is an example of a section you can add to the `/etc/wibu/CodeMeter/Server.ini`: &amp;lt;pre&amp;gt;[ServerSearchList]&lt;br /&gt;
UseBroadcast=1&lt;br /&gt;
&lt;br /&gt;
[ServerSearchList\Server1]&lt;br /&gt;
Address=&amp;lt;ip number of your license host&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add multiple `ServerSearchList\Server&amp;lt;n&amp;gt;` sections, one for each license host you have, with the IP address of the license host. Once you update and save the configuration file, restart the CodeMeter service and your licenses from the network server should then be available locally.&lt;br /&gt;
&lt;br /&gt;
If you are setting up a license host, which you wish to access from another machine, you will need to install the CodeMeter Runtime on that machine as well and you need to ensure the firewall allows requests to the license host on port 22350.&lt;br /&gt;
&lt;br /&gt;
On Red Hat, you can achieve this with:&amp;lt;pre&amp;gt;sudo firewall-cmd --get-active-zones&lt;br /&gt;
sudo firewall-cmd --zone=public --add-port=22350/tcp --permanent&lt;br /&gt;
sudo firewall-cmd --reload&amp;lt;/pre&amp;gt;&lt;br /&gt;
This assumes you see the `public` zone after the first command.&lt;br /&gt;
&lt;br /&gt;
On Debian, you can run:&amp;lt;pre&amp;gt;sudo iptables -A INPUT -p tcp -m tcp --dport 22350 -j ACCEPT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, if you want users to be able to access the web admin interface for CodeMeter on the server, you would need to ensure the firewall allows requests on port 22352 (for http) and/or 22353 (for https). However, access to the web admin interface from other machines is not required for obtaining a license and in typical configurations, you would be able to access the web admin interface on the host itself (localhost) without additional firewall rules.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25207</id>
		<title>Installing Wibu CodeMeter Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25207"/>
		<updated>2022-02-01T01:40:39Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: changed &amp;#039;web interface&amp;#039; to &amp;#039;web admin interface&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides a basic set of instructions to install the Wibu CodeMeter Runtime on a Linux host through the command line interface (CLI). For more information about using Wibu dongles or software licenses, refer to &amp;lt;u&amp;gt;[[Wibu_Dongles|Wibu Dongles]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions, but were tested on CentOS and Debian. Note that these instructions are provided as a courtesy to users new to Linux, please ensure you understand what the commands mean before you run them and be aware of the [[Tuflow:General_disclaimer|general disclaimer]].&lt;br /&gt;
&lt;br /&gt;
==Getting the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
The appropriate version of the CodeMeter Runtime can be obtained from the Wibu website at &amp;lt;u&amp;gt;[https://www.wibu.com/support/user/user-software.html www.wibu.com/support/user/user-software.html]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using Debian, Ubuntu, Mint or another Linux distribution in the Debian family of distributions, you should obtain a copy of the `.deb` installer for your hardware. If you are using Red Hat (RHEL), Fedora, CentOS or another Linux distribution in the Red Hat family of distributions, you should obtain a copy of the `.rpm` installer for your hardware. If your hardware supports 64-bit software (which is likely for modern systems), using that version is recommended. From here on, we&#039;ll refer to &#039;Debian&#039; or &#039;Red Hat&#039; to mean any distribution in that family.&lt;br /&gt;
&lt;br /&gt;
Depending on your level of access to the machine running Linux and whether or not it is running a graphical user interface, you may have some trouble getting the file onto your machine. You can download the file directly from the command line with: &amp;lt;pre&amp;gt;wget -O codemeter.rpm &amp;lt;direct link&amp;gt;&amp;lt;/pre&amp;gt; where &amp;quot;&amp;lt;direct link&amp;gt;&amp;quot; is the &#039;direct link&#039; provided on the Wibu download page for the version you are downloading.&lt;br /&gt;
&lt;br /&gt;
The download page also provides an MD5 checksum. You can run &amp;lt;pre&amp;gt;md5sum codemeter.rpm&amp;lt;/pre&amp;gt; and verify that the file you downloaded was downloaded correctly by comparing this checksum.&lt;br /&gt;
&lt;br /&gt;
If your Linux distribution does not provide `wget`, you can obtain a copy on Debian with `sudo apt-get install wget` and on Red Hat with `sudo yum install wget`.&lt;br /&gt;
&lt;br /&gt;
==Installing the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
On Red Hat, using `yum`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo yum localinstall codemeter.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Debian, using `dpkg` and `apt-get`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo dpkg -i codemeter.deb&amp;lt;/pre&amp;gt; If that fails due to missing dependencies, you can run &amp;lt;pre&amp;gt;sudo apt-get -f install&amp;lt;/pre&amp;gt; to pull in the missing dependencies and complete the installation.&lt;br /&gt;
&lt;br /&gt;
Once these commands complete (on either Debian or Red Hat), you can start, stop and restart the services with `systemctl`: &amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `systemctl` is not available to you, you can use: &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
Refer to the CodeMeter manual for instructions on configuring CodeMeter. &lt;br /&gt;
&lt;br /&gt;
However, if you are installing CodeMeter as a client for network licenses, the following is an example of a section you can add to the `/etc/wibu/CodeMeter/Server.ini`: &amp;lt;pre&amp;gt;[ServerSearchList]&lt;br /&gt;
UseBroadcast=1&lt;br /&gt;
&lt;br /&gt;
[ServerSearchList\Server1]&lt;br /&gt;
Address=&amp;lt;ip number of your license host&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add multiple `ServerSearchList\Server&amp;lt;n&amp;gt;` sections, one for each license host you have, with the IP address of the license host. Once you update and save the configuration file, restart the CodeMeter service and your licenses from the network server should then be available locally.&lt;br /&gt;
&lt;br /&gt;
If you are setting up a license host, which you wish to access from another machine, you will need to install the CodeMeter Runtime on that machine as well and you need to ensure the firewall allows requests to the license host on port 22350.&lt;br /&gt;
&lt;br /&gt;
On Red Hat, you can achieve this with:&amp;lt;pre&amp;gt;sudo firewall-cmd --get-active-zones&lt;br /&gt;
sudo firewall-cmd --zone=public --add-port=22350/tcp --permanent&lt;br /&gt;
sudo firewall-cmd --reload&amp;lt;/pre&amp;gt;&lt;br /&gt;
This assumes you see the `public` zone after the first command.&lt;br /&gt;
&lt;br /&gt;
On Debian, you can run:&amp;lt;pre&amp;gt;sudo iptables -A INPUT -p tcp -m tcp --dport 22350 -j ACCEPT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, if you want users to be able to access the web admin interface for CodeMeter on the server, you would need to ensure the firewall allows requests on port 22352 (for http) and/or 22353 (for https). However, access to the web admin interface from other machines is not required for obtaining a license and in typical configurations, you would be able to access the web admin interface on the host itself (localhost) without additional firewall rules.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25206</id>
		<title>Installing Wibu CodeMeter Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=25206"/>
		<updated>2022-02-01T01:39:08Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: updated port number and added remark about http / https access to web interface&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides a basic set of instructions to install the Wibu CodeMeter Runtime on a Linux host through the command line interface (CLI). For more information about using Wibu dongles or software licenses, refer to &amp;lt;u&amp;gt;[[Wibu_Dongles|Wibu Dongles]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions, but were tested on CentOS and Debian. Note that these instructions are provided as a courtesy to users new to Linux, please ensure you understand what the commands mean before you run them and be aware of the [[Tuflow:General_disclaimer|general disclaimer]].&lt;br /&gt;
&lt;br /&gt;
==Getting the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
The appropriate version of the CodeMeter Runtime can be obtained from the Wibu website at &amp;lt;u&amp;gt;[https://www.wibu.com/support/user/user-software.html www.wibu.com/support/user/user-software.html]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using Debian, Ubuntu, Mint or another Linux distribution in the Debian family of distributions, you should obtain a copy of the `.deb` installer for your hardware. If you are using Red Hat (RHEL), Fedora, CentOS or another Linux distribution in the Red Hat family of distributions, you should obtain a copy of the `.rpm` installer for your hardware. If your hardware supports 64-bit software (which is likely for modern systems), using that version is recommended. From here on, we&#039;ll refer to &#039;Debian&#039; or &#039;Red Hat&#039; to mean any distribution in that family.&lt;br /&gt;
&lt;br /&gt;
Depending on your level of access to the machine running Linux and whether or not it is running a graphical user interface, you may have some trouble getting the file onto your machine. You can download the file directly from the command line with: &amp;lt;pre&amp;gt;wget -O codemeter.rpm &amp;lt;direct link&amp;gt;&amp;lt;/pre&amp;gt; where &amp;quot;&amp;lt;direct link&amp;gt;&amp;quot; is the &#039;direct link&#039; provided on the Wibu download page for the version you are downloading.&lt;br /&gt;
&lt;br /&gt;
The download page also provides an MD5 checksum. You can run &amp;lt;pre&amp;gt;md5sum codemeter.rpm&amp;lt;/pre&amp;gt; and verify that the file you downloaded was downloaded correctly by comparing this checksum.&lt;br /&gt;
&lt;br /&gt;
If your Linux distribution does not provide `wget`, you can obtain a copy on Debian with `sudo apt-get install wget` and on Red Hat with `sudo yum install wget`.&lt;br /&gt;
&lt;br /&gt;
==Installing the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
On Red Hat, using `yum`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo yum localinstall codemeter.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Debian, using `dpkg` and `apt-get`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo dpkg -i codemeter.deb&amp;lt;/pre&amp;gt; If that fails due to missing dependencies, you can run &amp;lt;pre&amp;gt;sudo apt-get -f install&amp;lt;/pre&amp;gt; to pull in the missing dependencies and complete the installation.&lt;br /&gt;
&lt;br /&gt;
Once these commands complete (on either Debian or Red Hat), you can start, stop and restart the services with `systemctl`: &amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `systemctl` is not available to you, you can use: &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
Refer to the CodeMeter manual for instructions on configuring CodeMeter. &lt;br /&gt;
&lt;br /&gt;
However, if you are installing CodeMeter as a client for network licenses, the following is an example of a section you can add to the `/etc/wibu/CodeMeter/Server.ini`: &amp;lt;pre&amp;gt;[ServerSearchList]&lt;br /&gt;
UseBroadcast=1&lt;br /&gt;
&lt;br /&gt;
[ServerSearchList\Server1]&lt;br /&gt;
Address=&amp;lt;ip number of your license host&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add multiple `ServerSearchList\Server&amp;lt;n&amp;gt;` sections, one for each license host you have, with the IP address of the license host. Once you update and save the configuration file, restart the CodeMeter service and your licenses from the network server should then be available locally.&lt;br /&gt;
&lt;br /&gt;
If you are setting up a license host, which you wish to access from another machine, you will need to install the CodeMeter Runtime on that machine as well and you need to ensure the firewall allows requests to the license host on port 22350.&lt;br /&gt;
&lt;br /&gt;
On Red Hat, you can achieve this with:&amp;lt;pre&amp;gt;sudo firewall-cmd --get-active-zones&lt;br /&gt;
sudo firewall-cmd --zone=public --add-port=22350/tcp --permanent&lt;br /&gt;
sudo firewall-cmd --reload&amp;lt;/pre&amp;gt;&lt;br /&gt;
This assumes you see the `public` zone after the first command.&lt;br /&gt;
&lt;br /&gt;
On Debian, you can run:&amp;lt;pre&amp;gt;sudo iptables -A INPUT -p tcp -m tcp --dport 22350 -j ACCEPT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, if you want users to be able to access the web interface for CodeMeter on the server, you would need to ensure the firewall allows requests on port 22352 (for http) and/or 22353 (for https). However, access to the web interface from other machines is not required for obtaining a license; in typical configurations, you can access the web interface on the host itself (localhost) without additional firewall rules.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=TUFLOW_crashing&amp;diff=19670</id>
		<title>TUFLOW crashing</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=TUFLOW_crashing&amp;diff=19670"/>
		<updated>2021-02-15T06:19:11Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Some rewording around crashes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;&#039;&#039;&#039;This Page is under construction&#039;&#039;&#039; &amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
From time to time TUFLOW simulations can crash, and there are multiple possible causes. This page can be used as a guide to finding and rectifying the cause of a crash.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Troubleshooting Tips=&lt;br /&gt;
* Use the latest TUFLOW release available at the [https://www.tuflow.com/downloads/ TUFLOW website].&lt;br /&gt;
* If using GPU, update the graphics card driver to the latest version from the [https://www.nvidia.com/Download/index.aspx Nvidia website].&lt;br /&gt;
* Restart the modelling machine.&lt;br /&gt;
* Check the end of .tlf file for an error message.&lt;br /&gt;
* Test running the model on a different machine.&lt;br /&gt;
* Save all outputs (checks, results and logs) to a local drive and use a TUFLOW executable saved on a local drive to determine if the network is causing the issue.&lt;br /&gt;
* Monitor whether the issue happens to a single model only or to every model, and whether it occurs at a specific time during the simulation or randomly.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=TUFLOW simulation DOS window only flicks and disappears=&lt;br /&gt;
The problem might be the filepath of the TUFLOW executable in the batch file.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestion&#039;&#039;&#039;:&lt;br /&gt;
* Check that the executable filepath exists.&lt;br /&gt;
* TUFLOW doesn&#039;t currently support UNC paths. The folder with the executable has to be accessed via a mapped drive. Type &amp;quot;net use &amp;lt;drive&amp;gt;: \\server_name\share_name&amp;quot; in the command line to map the desired drive.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=TCF does not exist=&lt;br /&gt;
The .tcf name and filepath might be incorrect, or an unsupported UNC path was used.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestions:&#039;&#039;&#039;&lt;br /&gt;
* Check the name of the .tcf file and filepath (if used) is correct.&lt;br /&gt;
* TUFLOW doesn&#039;t currently support UNC paths. The folder with the model has to be accessed via a mapped drive. Type &amp;quot;net use &amp;lt;drive&amp;gt;: \\server_name\share_name&amp;quot; in the command line to map the desired drive.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=TUFLOW crashes during model initialisation=&lt;br /&gt;
A crash at the start of the model might be connected to erroneous input data or an error in the control files. It might be captured as a standard TUFLOW error or as a Fortran compiler error, leaving messages only in the console window.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestions:&#039;&#039;&#039;&lt;br /&gt;
* Insert &amp;quot;pause&amp;quot; at the end of the batch file to keep the console window open. The control file/GIS layer written just above the error should be the cause of the issue.&lt;br /&gt;
* Redirect the console window output to a text file, e.g. “TUFLOW.exe my_model.tcf &gt; dump.txt 2&gt;&amp;amp;1”. This will redirect the console output messages as well as the standard error stream to the “dump.txt” file, and it will likely record more error information than the usual TUFLOW log file.&lt;br /&gt;
If multiple large models are initialising at the same time, this could cause a memory overload and stop the simulations.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestions:&#039;&#039;&#039;&lt;br /&gt;
* Display &amp;quot;Peak working set (memory)&amp;quot; in the Task Manager to confirm a memory overload; it is not displayed by default.&lt;br /&gt;
* Insert &amp;quot;timeout xxx&amp;quot; in the batch file between the runs to allow the models to initialise separately, where xxx is the number of seconds to wait before the next run starts.&amp;lt;br&amp;gt;&lt;br /&gt;
If there is no clear indication of what the cause might be, send a snapshot of the console window and .tlf to support@tuflow.com.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=TUFLOW crashes randomly at any time during the simulation=&lt;br /&gt;
This can happen when TUFLOW is writing outputs to a network drive and/or the model uses a TUFLOW executable located on a network drive, and the computer loses connection to the network. There may be no message box when this happens, or a window stating that TUFLOW.exe has stopped working, and nothing error related will be written to the .tlf file. If multiple models were running simultaneously, all unfinished models would crash. This applies to the Windows 10, 8 and 7 operating systems. With Windows XP, the simulation would only pause and resume when access to the network drive is restored. The difference in behaviour is unfortunately down to the operating system and, as far as we are aware, we are unable to do anything within the TUFLOW code to handle this situation.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestions&#039;&#039;&#039;:&lt;br /&gt;
* Use a TUFLOW executable saved on a local drive.&lt;br /&gt;
* Set a local drive as the output drive for all checks, results and logs. This can be done using the &amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;&amp;lt;tt&amp;gt;Output Drive&amp;lt;/tt&amp;gt;&amp;lt;/font&amp;gt; command in the .tcf. All outputs can be copied back to the network drive at the end of the simulation if required; Robocopy can be added to the end of the .bat file, example below.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the issue persists after trying all the suggestions, contact support@tuflow.com, attaching the .tlf file and a snapshot of the console window.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;@echo off&lt;br /&gt;
&lt;br /&gt;
set TUFLOWEXE_iSP=O:\TUFLOW\Releases\2020-01\2020-10-AA\TUFLOW_iSP_w64.exe&lt;br /&gt;
set RUN_iSP=start &amp;quot;TUFLOW&amp;quot; /wait &amp;quot;%TUFLOWEXE_iSP%&amp;quot; -b&lt;br /&gt;
&lt;br /&gt;
set A=5m 2.5m&lt;br /&gt;
set B=EXG DEV&lt;br /&gt;
set source_results=D:\TUFLOW\results&lt;br /&gt;
set source_log=D:\TUFLOW\runs&lt;br /&gt;
set destination_results=O:\TUFLOW\support\results&lt;br /&gt;
set destination_log=O:\TUFLOW\support\runs&lt;br /&gt;
&lt;br /&gt;
FOR %%a in (%A%) do (&lt;br /&gt;
	FOR %%b in (%B%) do (&lt;br /&gt;
		REM Run model&lt;br /&gt;
		echo Running Cell Size %%a Model Scenario %%b&lt;br /&gt;
		%RUN_iSP% -s1 %%a -s2 %%b M10_~s1~_~s2~_003.tcf&lt;br /&gt;
		&lt;br /&gt;
		REM Move results folder to different location&lt;br /&gt;
		robocopy &amp;quot;%source_results%&amp;quot; &amp;quot;%destination_results%&amp;quot; /e /move&lt;br /&gt;
		&lt;br /&gt;
		REM Move log folder to different location&lt;br /&gt;
		robocopy &amp;quot;%source_log%&amp;quot; &amp;quot;%destination_log%&amp;quot; /e /move&lt;br /&gt;
		timeout 5&lt;br /&gt;
	)&lt;br /&gt;
)&lt;br /&gt;
pause&amp;lt;/pre&amp;gt;&lt;br /&gt;
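As a minimal sketch of the &amp;lt;tt&amp;gt;Output Drive&amp;lt;/tt&amp;gt; command mentioned above (this assumes D: is a local drive on your machine; use any local drive letter): &amp;lt;pre&amp;gt;Output Drive == D:&amp;lt;/pre&amp;gt;&lt;br /&gt;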
&lt;br /&gt;
=TUFLOW is crashing at the same/similar time of the day/week=&lt;br /&gt;
Something might be preventing the simulation from writing to the network drive, such as a scheduled backup, updates, a restart, deduplication or other scheduled processes. In this case TUFLOW does not release the licence at the end of the simulation; rather, the CodeMeter system determines that the TUFLOW application is no longer running and therefore releases the licence - this would show as “Handle xxx automatically released. The application is no longer available.” in the cmDust file.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestions:&#039;&#039;&#039;&lt;br /&gt;
* Ensure storage locations that results are written to have ample free space. Depending on your infrastructure, reported free space may not be accurate, or performance of storage may substantially degrade as it nears being full.&lt;br /&gt;
* Check with IT whether such processes are occurring and if reasonable limits/preferences/priorities can be set to reduce (disk/network) resource use, to allow other processes to run in parallel.&lt;br /&gt;
* Run models locally if it is known the runs will coincide with such processes. More information on running models locally &amp;lt;u&amp;gt;[[TUFLOW_crashing#TUFLOW_crashes_randomly_at_any_time_during_the_simulation | here]]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=TUFLOW crashes at the end of the simulation before writing out maximum results=&lt;br /&gt;
The output drive might not have enough free space to write out the full results. If multiple TUFLOW simulations run in parallel, the free space might fill up faster than expected. There can also be other processes filling up the drive, e.g. other software, backups, or other users copying data. The target drive should never be planned to be filled to the brim, as performance will suffer for all processes.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Suggestions:&#039;&#039;&#039;&lt;br /&gt;
* Make sure there is enough free space on the output drive.&lt;br /&gt;
If the output drive does have more than enough space, insert &amp;quot;pause&amp;quot; at the end of the batch file to keep the console window open and send a snapshot of this window and .tlf to support@tuflow.com.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=TUFLOW can&#039;t find a valid licence=&lt;br /&gt;
Confirmation that the licence is the issue can usually be found in the .tlf.&lt;br /&gt;
&lt;br /&gt;
==Unmaintained dongle==&lt;br /&gt;
Once a year (usually after mid-year) a new TUFLOW release will need the licence to be updated. This will show in the .tlf file as &amp;quot;Unmaintained since &amp;lt;year&amp;gt;&amp;quot;. Follow &amp;lt;u&amp;gt;[[WIBU_Licence_Update_Request | WIBU Licence Update Request]]&amp;lt;/u&amp;gt; to get the licence updated.&lt;br /&gt;
&lt;br /&gt;
==Running very old TUFLOW releases==&lt;br /&gt;
When using an old legacy model with its original TUFLOW release, there might be an error &amp;quot;Could not find standalone or network dongle server&amp;quot;. Such versions of TUFLOW can only run with the old blue Softlok licence dongle. If only the metal Wibu dongle is available, the DB version of the same year&#039;s release can be used. We still have some of the old dongles in our possession and can rent them out if required.&amp;lt;br&amp;gt;&lt;br /&gt;
More information on TUFLOW licence dongles and which releases are affected: &amp;lt;u&amp;gt;[[TUFLOW_Licensing | TUFLOW Licensing]]&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==No TUFLOW licence settings files found==&lt;br /&gt;
Some users might mistake this sentence in the .tlf file as indicating an issue with the licence. The real cause of the crash will be noted in the last couple of lines of the .tlf. The &amp;quot;No TUFLOW licence settings files found&amp;quot; sentence is followed by &amp;quot;Default settings applied&amp;quot; and the default settings are listed on the next three lines, e.g. WIBU Retry Time, WIBU Retry Count, WIBU Dongles Only. A licence control file (.lcf) only needs to be created if these settings are required to be different.&lt;br /&gt;
&lt;br /&gt;
==Technical licence issues==&lt;br /&gt;
&amp;lt;u&amp;gt;[[Wibu_Dongles#Troubleshooting | Wibu Dongles Troubleshooting]]&amp;lt;/u&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Wibu_Dongles&amp;diff=18564</id>
		<title>Wibu Dongles</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Wibu_Dongles&amp;diff=18564"/>
		<updated>2020-06-15T04:16:06Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Added a reference to a (new) Linux installation page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
This page contains a brief introduction to the Wibu dongles.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=Installation=&lt;br /&gt;
==Installing CodeMeter RunTime Kit==&lt;br /&gt;
The first step in using the Wibu licence is to install the CodeMeter Runtime Kit.  This needs to be installed on any computer that will be running TUFLOW (from either a local or network licence) as well as on the network licence server.&amp;lt;br&amp;gt;&lt;br /&gt;
If you are installing on a Linux computer from the command line, refer to [[Installing_Wibu_CodeMeter_Linux|Installing Wibu CodeMeter Linux]]&amp;lt;br&amp;gt;&lt;br /&gt;
The latest version can be downloaded from the CodeMeter site:&amp;lt;br&amp;gt;&lt;br /&gt;
[https://www.wibu.com/support/user/downloads-user-software.html https://www.wibu.com/support/user/downloads-user-software.html]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The correct file to download is the &#039;&#039;&#039;CodeMeter Runtime Kit for Windows&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CodeMeter RuntimeKit Download.jpg|400px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Once installed, the configuration depends on whether the TUFLOW licence is a Local or Network licence. &lt;br /&gt;
* If this is the first time the licence has been used, or your existing licence has expired, you will need to update your licence file. Please progress to the &amp;lt;u&amp;gt;[[#Request_a_licence_update| Request a licence update]]&amp;lt;/u&amp;gt; section.&lt;br /&gt;
* If there is already an active licence associated with the dongle:&lt;br /&gt;
:* For a local licence, the dongle can now be inserted into the machine and TUFLOW simulations can be started.&lt;br /&gt;
:* For a network licence, continue to the &amp;lt;u&amp;gt;[[#Configuring_Network_Server | configure network server]]&amp;lt;/u&amp;gt; and &amp;lt;u&amp;gt;[[#Configuring_Access_to_Network Licence | configure network access]]&amp;lt;/u&amp;gt; sections below.&lt;br /&gt;
&lt;br /&gt;
===Silent Install===&lt;br /&gt;
It is possible to do a silent install of the CodeMeter Runtime kit.  CodeMeter support staff have advised that this can be done with the following parameters:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;CodeMeterRuntime.exe /ComponentArgs &amp;quot;*&amp;quot;:&amp;quot;/qn&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring Network Server==&lt;br /&gt;
If the TUFLOW licence is a network licence, the computer hosting the dongle will need to be configured as a TUFLOW server.  This is required even if the simulations are to be performed on the server.  Instructions for configuring the network licence are detailed in the following page:&amp;lt;br&amp;gt;&lt;br /&gt;
*[[WIBU_Configure_Network_Server_2016| WIBU Configure Network Server - Post 2016 Codemeter (recommended)]]&lt;br /&gt;
*[[WIBU_Configure_Network_Server | WIBU Configure Network Server - Pre 2016 Codemeter]]&lt;br /&gt;
&lt;br /&gt;
==Configuring Access to Network Licence==&lt;br /&gt;
To access TUFLOW licences on a remote network server, the CodeMeter runtime kit needs to be installed on the client machine.  Once installed, CodeMeter needs to be configured to use the network licence.&lt;br /&gt;
Instructions for configuring the network licence are detailed in the following page:&amp;lt;br&amp;gt;&lt;br /&gt;
*[[WIBU_Configure_Network_Client_2016| WIBU Configure Network Client - Post 2016 Codemeter (recommended)]]&lt;br /&gt;
*[[WIBU_Configure_Network_Client | WIBU Configure Network Client - Pre 2016 Codemeter]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Updating=&lt;br /&gt;
There are a number of reasons that the Wibu licence may need to be updated, for example:&lt;br /&gt;
* To add additional modules&lt;br /&gt;
* To update to a new support year&lt;br /&gt;
* To add rental licences&lt;br /&gt;
For each change to the dongle, it will be necessary to provide a licence update request file to the TUFLOW staff.&amp;lt;br&amp;gt;&lt;br /&gt;
The procedure is the same for local and network licences; the request needs to be generated on the computer which has the dongle plugged in. &lt;br /&gt;
==Request a licence update==&lt;br /&gt;
The instructions for creating a licence request differ slightly depending on whether the dongle has previously been coded for TUFLOW simulations and whether the dongle was provided by BMT.  &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Hardware Licence (USB)&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* Unless specified otherwise by the TUFLOW staff, this option is the correct one to choose (please click the link to the right): [[WIBU_Licence_Update_Request | Wibu licence update request (normal)]] &lt;br /&gt;
* If you have been provided with a blank dongle or are using a non BMT dongle the TUFLOW producer code needs to be added when requesting the licence update: [[WIBU_Licence_Update_Request_Uncoded | Wibu licence update request (uncoded or blank dongle)]]&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Software Licence (File)&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* If you would like a software licence instead of a hardware licence: [[WIBU_Software_Licence_Update_Request | Wibu software licence update request]]&lt;br /&gt;
After creating the licence update request, please email the created file (&#039;&#039;&#039;.WibuCmRaC&#039;&#039;&#039;) to sales@tuflow.com.&lt;br /&gt;
&lt;br /&gt;
==Import a licence update==&lt;br /&gt;
Once a licence update has been created, an update file will be provided to you via email.  This update file will have the extension &#039;&#039;&#039;.WibuCmRaU&#039;&#039;&#039;.  To import the file please follow the steps below; the same method is used for network and local licences.&lt;br /&gt;
[[WIBU_Licence_Update_Import | Importing a Wibu licence update]]&amp;lt;br&amp;gt;&lt;br /&gt;
When an update is applied, it modifies the content of the dongle; it does not need to be applied on each computer that will be used for TUFLOW modelling!&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=Troubleshooting=&lt;br /&gt;
==Dongle Not Working Correctly==&lt;br /&gt;
If the drivers have been installed, the colour of the CodeMeter icon in the taskbar indicates whether a dongle is being detected correctly.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CM CM Stick Grey.jpg|20px]] --- Grey, No CM stick detected.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CM CM Stick Green.jpg|20px]] --- Green, CM stick detected.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A range of other colours are also available but not frequently used:&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CM CM Stick Yellow.jpg|20px]] --- Yellow, CM stick enabled until unplugged (password protected).&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CM CM Stick Red.jpg|20px]] --- Red, CM stick disabled (password protected).&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CM CM Stick Blue.jpg|20px]] --- Blue, Multiple CM sticks.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If the CodeMeter icon remains grey (not detected) when the dongle is inserted, please follow the steps below:&amp;lt;br&amp;gt;&lt;br /&gt;
[[WIBU_Dongle_Not_Detected | WIBU Dongle Not Detected]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If the dongle is correctly detected (the icon changes colour to green), but you are unable to run a TUFLOW simulation, please follow the steps below:&amp;lt;br&amp;gt;&lt;br /&gt;
[[WIBU_Dongle_Not_Running | WIBU Dongle Not Running TUFLOW]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If you are having trouble accessing a network dongle from a remote computer, please follow the steps below:&amp;lt;br&amp;gt;&lt;br /&gt;
[[WIBU_Network_Dongle_Connectivity | WIBU Network Dongle Issues]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=Diagnostics=&lt;br /&gt;
==cmDust==&lt;br /&gt;
When the CodeMeter Runtime Kit is installed, a diagnostics utility called &#039;&#039;&#039;cmDust&#039;&#039;&#039; is also installed.  Instructions for creating a diagnostics file can be found on the &amp;lt;u&amp;gt;[[WIBU_create_cmDust | create CM Dust diagnostics file]]&amp;lt;/u&amp;gt; page.&lt;br /&gt;
&lt;br /&gt;
==Enabling Logging==&lt;br /&gt;
CodeMeter allows you to write extended log files to your local drive. To set up this feature, please follow the instructions on our [[Codemeter_Enable_Logging | Enable Codemeter Logging]] page.&lt;br /&gt;
&lt;br /&gt;
== Enabling Network Server License Monitoring==&lt;br /&gt;
CodeMeter allows you to conduct real-time network licence monitoring. To set up this feature, please see our [[Network_Server_License_Monitoring | Network License Monitoring]] page.&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=18563</id>
		<title>Installing Wibu CodeMeter Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=Installing_Wibu_CodeMeter_Linux&amp;diff=18563"/>
		<updated>2020-06-15T04:13:12Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Page created with basic installation instructions for Red Hat and Debian variants.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides a basic set of instructions to install the Wibu CodeMeter Runtime on a Linux host through the command line interface (CLI). For more information about using Wibu dongles or software licenses, refer to [[Wibu_Dongles|Wibu Dongles]].&lt;br /&gt;
&lt;br /&gt;
The Linux commands used on this wiki should work on most modern Linux distributions, but were tested on CentOS and Debian. Note that these instructions are provided as a courtesy to users new to Linux; please ensure you understand what the commands mean before you run them, and be aware of the [[Tuflow:General_disclaimer|general disclaimer]].&lt;br /&gt;
&lt;br /&gt;
==Getting the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
The appropriate version of the CodeMeter Runtime can be obtained from the Wibu website at &amp;lt;u&amp;gt;[https://www.wibu.com/support/user/user-software.html www.wibu.com/support/user/user-software.html]&amp;lt;/u&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using Debian, Ubuntu, Mint or another Linux distribution in the Debian family of distributions, you should obtain a copy of the `.deb` installer for your hardware. If you are using Red Hat (RHEL), Fedora, CentOS or another Linux distribution in the Red Hat family of distributions, you should obtain a copy of the `.rpm` installer for your hardware. If your hardware supports 64-bit software (which is likely for modern systems), using that version is recommended. From here on, we&#039;ll refer to &#039;Debian&#039; or &#039;Red Hat&#039; to mean any distribution in that family.&lt;br /&gt;
&lt;br /&gt;
Depending on your level of access to the machine running Linux and whether or not it is running a graphical user interface, you may have some trouble getting the file onto your machine. You can download the file directly from the command line with: &amp;lt;pre&amp;gt;wget -O codemeter.rpm &amp;lt;direct link&amp;gt;&amp;lt;/pre&amp;gt; where &amp;quot;&amp;lt;direct link&amp;gt;&amp;quot; is the &#039;direct link&#039; provided on the Wibu download page for the version you are downloading.&lt;br /&gt;
&lt;br /&gt;
The download page also provides an MD5 checksum. You can run &amp;lt;pre&amp;gt;md5sum codemeter.rpm&amp;lt;/pre&amp;gt; and verify that the file downloaded correctly by comparing the result with this checksum.&lt;br /&gt;
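If you prefer an automated comparison, you can check the file against the published checksum with &amp;lt;tt&amp;gt;md5sum -c&amp;lt;/tt&amp;gt; (replace the placeholder below with the checksum shown on the download page): &amp;lt;pre&amp;gt;echo &amp;quot;&amp;lt;md5 checksum from the download page&amp;gt;  codemeter.rpm&amp;quot; | md5sum -c -&amp;lt;/pre&amp;gt; Note the two spaces between the checksum and the filename; md5sum prints &amp;quot;codemeter.rpm: OK&amp;quot; when the file matches.&lt;br /&gt;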
&lt;br /&gt;
If your Linux distribution does not provide `wget`, you can obtain a copy on Debian with `sudo apt-get install wget` and on Red Hat with `sudo yum install wget`.&lt;br /&gt;
&lt;br /&gt;
==Installing the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
On Red Hat, using `yum`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo yum localinstall codemeter.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Debian, using `dpkg` and `apt-get`, you can install the CodeMeter Runtime with &amp;lt;pre&amp;gt;sudo dpkg -i codemeter.deb&amp;lt;/pre&amp;gt; If that fails due to missing dependencies, you can fix them and complete the installation with &amp;lt;pre&amp;gt;sudo apt-get -f install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once these commands complete (on either Debian or Red Hat), you can start, stop and restart the services with `systemctl`: &amp;lt;pre&amp;gt;sudo systemctl restart codemeter.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, if `systemctl` is not available to you, you can use: &amp;lt;pre&amp;gt;sudo /etc/init.d/codemeter restart&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring the CodeMeter Runtime==&lt;br /&gt;
&lt;br /&gt;
Refer to the CodeMeter manual for instructions on configuring CodeMeter. &lt;br /&gt;
&lt;br /&gt;
However, if you are installing CodeMeter as a client for network licenses, the following is an example of a section you can add to the `/etc/wibu/CodeMeter/Server.ini`: &amp;lt;pre&amp;gt;[ServerSearchList]&lt;br /&gt;
UseBroadcast=1&lt;br /&gt;
&lt;br /&gt;
[ServerSearchList\Server1]&lt;br /&gt;
Address=&amp;lt;ip number of your license host&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can add multiple `ServerSearchList\Server&amp;lt;n&amp;gt;` sections, one for each license host you have, with the IP address of the license host. Once you update and save the configuration file, restart the CodeMeter service and your licenses from the network server should then be available locally.&lt;br /&gt;
&lt;br /&gt;
If you are setting up a license host that you wish to access from another machine, you will need to install the CodeMeter Runtime on that machine as well, and you need to ensure the firewall allows requests to the license host on port 22350.&lt;br /&gt;
&lt;br /&gt;
On Red Hat, you can achieve this with:&amp;lt;pre&amp;gt;sudo firewall-cmd --get-active-zones&lt;br /&gt;
sudo firewall-cmd --zone=public --add-port=22350/tcp --permanent&lt;br /&gt;
sudo firewall-cmd --reload&amp;lt;/pre&amp;gt;&lt;br /&gt;
This assumes you see the `public` zone after the first command.&lt;br /&gt;
&lt;br /&gt;
On Debian, you can run:&amp;lt;pre&amp;gt;sudo iptables -A INPUT -p tcp -m tcp --dport 22350 -j ACCEPT&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
	<entry>
		<id>https://wiki.tuflow.com/w/index.php?title=WIBU_Software_Licence_Update_Request&amp;diff=18542</id>
		<title>WIBU Software Licence Update Request</title>
		<link rel="alternate" type="text/html" href="https://wiki.tuflow.com/w/index.php?title=WIBU_Software_Licence_Update_Request&amp;diff=18542"/>
		<updated>2020-06-11T06:09:35Z</updated>

		<summary type="html">&lt;p&gt;Jaap.vandervelde: Added a reference to a (new) Linux software license page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Software licences are an alternative option to hardware USB licences. Please select the licence host carefully, as the software-based licence will be bound to it. If over time you decide you want to move to another computer, we will need to re-issue you with a new software licence (which will incur a small administration fee).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Email &amp;lt;u&amp;gt;sales@tuflow.com&amp;lt;/u&amp;gt; to request a software licence. You will be sent an empty licence container file (*.wibucmlif). &lt;br /&gt;
&amp;lt;li&amp;gt; Install &amp;lt;u&amp;gt;[[Wibu_Dongles#Installing_CodeMeter_RunTime_Kit | Codemeter Control Centre]]&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If your license host is a Linux host, please refer to &amp;lt;u&amp;gt;[[WIBU_Software_Licence_Linux | WIBU Software Licence Linux]]&amp;lt;/u&amp;gt; for the rest of the process.&lt;br /&gt;
&amp;lt;li&amp;gt; Open Codemeter Control Centre (from the start menu) and drag and drop the provided .wibucmlif onto it. This will import an empty licence container onto the computer (it will show with a grey icon in the licence tab).&lt;br /&gt;
&amp;lt;li&amp;gt; With the empty licence container selected click the “Licence Update” option, as per the image below.&amp;lt;br&amp;gt; [[File:Software_Licence_001.PNG|600px]]&lt;br /&gt;
&amp;lt;li&amp;gt; Select “Next” at the next dialogue.&amp;lt;br&amp;gt; [[File:Software_Licence_002.PNG|600px]]&lt;br /&gt;
&amp;lt;li&amp;gt; Select “Create Licence Request”.&amp;lt;br&amp;gt; [[File:Software_Licence_003.PNG|600px]]&lt;br /&gt;
&amp;lt;li&amp;gt; Save the licence request file to your computer.&amp;lt;br&amp;gt; [[File:Software_Licence_004.PNG|600px]]&lt;br /&gt;
&amp;lt;li&amp;gt; Email the licence request file (.WibuCmRaC) to &amp;lt;u&amp;gt;sales@tuflow.com&amp;lt;/u&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jaap.vandervelde</name></author>
	</entry>
</feed>