Following the release of Arc Hydro for ArcGIS Pro 2.5 earlier this year, here is the formal Arc Hydro release for ArcGIS Pro 2.6. The major advancements include implementation of the Trace Network capabilities fully released in Pro 2.6, new functionality for hydrologic modeling support, a LAS to DEM creation toolset, two new functions for terrain ruggedness quantification, and several other functions transitioned from the ArcMap version. This release is identified as the “ArcHydroPro2.0.221_signed” release in the Arc Hydro Pro download setup directory.
Arc Hydro Pro development continues and plans for the 2.7 release focus on including additional floodplain delineation and hydraulic modeling support tools, advanced Trace Network tools, and more extensive documentation. If you have any questions or comments, please contact the Arc Hydro team either via the GeoNet page or directly via email using firstname.lastname@example.org.
Quantifying the characteristics of terrain can be beneficial in many analysis workflows including sediment transport modelling, ecological studies, geomorphological evaluation of land forms, and landslide hazards assessment. To help with terrain analysis, Arc Hydro is adding a Terrain Ruggedness Index (TRI) tool and a Vector Ruggedness Measure (VRM) tool to its terrain pre-processing capabilities.
How TRI and VRM contribute to Terrain Analyses
Terrain Ruggedness Index:
TRI expresses the amount of elevation difference between adjacent cells of a DEM. Using the methodology developed by Riley et al. (1999) and published in the paper “A Terrain Ruggedness Index That Quantifies Topographic Heterogeneity,” the tool measures the difference in elevation between a center cell and the eight cells directly surrounding it. The eight elevation differences are then squared and averaged, and the square root of this average is the TRI value for the center cell. This calculation is conducted for every cell of the DEM.
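The per-cell computation described above can be sketched in plain Python. This is an illustrative sketch, not the Arc Hydro implementation; the function and variable names are made up for the example:

```python
import math

def tri_for_cell(dem, row, col):
    """Terrain Ruggedness Index for one interior cell: the square root of the
    mean of squared elevation differences between the center cell and its
    eight neighbors, following the description in Riley et al. (1999)."""
    center = dem[row][col]
    sq_diffs = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the center cell itself
            sq_diffs.append((dem[row + dr][col + dc] - center) ** 2)
    return math.sqrt(sum(sq_diffs) / len(sq_diffs))

# A perfectly flat DEM yields TRI = 0; any relief yields a positive value.
flat = [[10.0] * 3 for _ in range(3)]
print(tri_for_cell(flat, 1, 1))  # 0.0
```

In practice the tool applies this neighborhood operation across the whole raster, so edge cells and NoData handling also come into play.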
Vector Ruggedness Measure:
VRM provides a way to measure terrain ruggedness as the variation in the three-dimensional orientation of grid cells within a neighborhood. Slope and aspect are captured in a single measure, which decouples terrain ruggedness from slope or elevation alone. The VRM was first proposed by Hobson (1972) in “Surface roughness in topography: quantitative approach” and was later adapted by Sappington et al. (2007) in “Quantifying landscape ruggedness for animal habitat analysis: A case study using bighorn sheep in the Mojave Desert.”
Both tool scripts use geoprocessing workflows developed by Dr. Barry Nickel, a professor at the University of California, Santa Cruz.
Where to Find TRI and VRM
Many ecologists and geologists have employed TRI and VRM to further their research and understanding of terrain. Prior to this release, analysts had to derive both TRI and VRM through a series of geoprocessing steps or write their own custom tools. Now, both tools are available in Arc Hydro's Terrain Preprocessing toolbox for ArcMap (build 10.8.0.33) and ArcGIS Pro (build 2.6.26), and future releases of Arc Hydro will continue to support them.
How to Use the Tools
The TRI tool requires only a DEM as input. The output is a TRI raster dataset that expresses the amount of elevation difference between adjacent cells of the DEM in meters. Output values depend on the input DEM resolution.
The VRM tool has two input parameters: the DEM to analyze and an integer specifying the neighborhood size. A larger window size can be useful but often has a smoothing effect on the landscape, so the window size defaults to 3. The output is a VRM raster dataset containing a dimensionless ruggedness value between 0 (flat) and 1 (most rugged). Typical values for natural terrain range between 0 and 0.5, with rugged landscapes generally above 0.02.
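Conceptually, the VRM for one neighborhood follows the Sappington et al. (2007) approach: each cell's surface-normal unit vector is decomposed from its slope and aspect, the vectors are summed, and ruggedness is one minus the normalized magnitude of the resultant vector. A minimal sketch in plain Python (illustrative only, not the Arc Hydro tool code):

```python
import math

def vrm(slopes, aspects):
    """Vector Ruggedness Measure for one neighborhood.
    slopes/aspects: equal-length sequences of per-cell slope and aspect, in
    radians. Each cell contributes a unit normal vector with components
    x = sin(slope)*sin(aspect), y = sin(slope)*cos(aspect), z = cos(slope).
    VRM = 1 - |resultant| / n, ranging from 0 (flat) to 1 (most rugged)."""
    n = len(slopes)
    x = sum(math.sin(s) * math.sin(a) for s, a in zip(slopes, aspects))
    y = sum(math.sin(s) * math.cos(a) for s, a in zip(slopes, aspects))
    z = sum(math.cos(s) for s in slopes)
    return 1.0 - math.sqrt(x * x + y * y + z * z) / n

# Flat terrain: every normal points straight up, so the vectors align and VRM = 0.
print(vrm([0.0] * 9, [0.0] * 9))  # 0.0
```

Because the vectors are unit normals, uniform steep slopes with a common aspect still align and score near 0, which is what lets VRM separate ruggedness from slope alone.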
Over the last few years, we have been working to integrate Arc Hydro into the ArcGIS Pro platform. The release of Arc Hydro for ArcGIS Pro 2.5 includes over 200 Arc Hydro functions in the domains of terrain preprocessing, watershed and stream delineation, and characterization. Most of the “old friends” from the Arc Hydro 10.x implementation are there, and some of the new functionality is not available in 10.x. For example, a whole toolset dedicated to wetland identification using a machine learning approach is available only in Arc Hydro in Pro – check out Gina’s blog on that.
Also, Arc Hydro for Pro 2.5 leverages the new Trace Network functionality that is available by participating in the Trace Network Beta program for ArcGIS Pro 2.5. This core ArcGIS Pro functionality enables the Pro implementation of Arc Hydro functions that were previously unavailable due to dependency on geometric networks. If you are interested in testing and reviewing the Trace Network and Arc Hydro tools built on top of it, please e-mail email@example.com and we will get you registered in the Trace Network Beta program.
If you did not get a chance to recently review “Arc Hydro – Project Development Best Practices” and, specifically for Pro, the “Arc Hydro – ArcGIS Pro Project Startup Best Practices” documents, now would be a good time to do so. Both documents, as well as all others in the Arc Hydro documentation series, are available on the Arc Hydro GeoNet page.
Arc Hydro Pro development continues and plans for the 2.6 release focus on including additional hydrologic and hydraulic modeling support tools and more extensive documentation. If you have any questions or comments, please contact the Arc Hydro team either via the GeoNet page or directly via email using firstname.lastname@example.org.
Wetlands are an important ecosystem that provide habitat for many plant and animal species, improve water quality, recharge groundwater, and ease flood and drought severity. However, the quality and existence of wetlands are threatened by agricultural or development repurposing, pollutant runoff, and climate change. Given the ecological value provided by wetlands and the ongoing threat to wetland health, wetland management and conservation efforts are imperative. Rapid and reliable creation of wetland distribution maps can benefit these efforts. The Wetland Identification Model (WIM) is a proposed framework for creating these data.
How the WIM Aims to Support Wetland Protection Efforts
While there are many types of wetlands, all wetlands can be identified by common features, including the presence of hydrologic conditions that inundate the area, vegetation adapted for life in saturated soil conditions, and hydric soils. Light detection and ranging (LiDAR) data offer new opportunities to observe these features at varying scales and provide higher resolution and wider availability than other remote sensing options. LiDAR returns can be interpolated to create high-resolution digital elevation models (DEMs), which can then be used to derive topographic metrics that describe flow convergence and near-surface soil moisture to indicate wetlands. Furthermore, deriving topographic metrics from LiDAR DEMs has been shown to increase the accuracy of saturation extent mapping compared to coarser DEMs (i.e., resolution greater than 2 m).
The WIM uses LiDAR DEMs to derive topographic metrics that describe hydrologic drivers of wetland formation and uses these as predictors of wetland areas through the random forests algorithm (Breiman, 2001). The WIM consists of three main parts: preprocessing, predictor variable calculation, and classification and accuracy assessment. Required input data are a high‐resolution digital elevation model (DEM) and verified wetland/nonwetland coverage (i.e., ground truth data), both in TIFF format. The current implementation also requires a surface water input raster, although future implementations will derive this directly from the input DEM. Final model outputs are wetland predictions and an accuracy report.
1) The input DEM is preprocessed using methods specific to hydrologic parameter derivation from high-resolution DEMs.
2) The preprocessed DEM is used to calculate the predictor variables: the topographic wetness index (TWI), curvature, and cartographic depth-to-water index (DTW).
3) Training data are derived from the ground truth data.
4) The training data are coupled with the merged predictor variables to train the random forests algorithm (Breiman, 2001).
5) The ground truth data that were not used to train the model are used to assess the accuracy of predictions. The accuracy metrics generated are chosen to minimize unrepresentative evaluations of model performance due to imbalanced target classes.
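As a concrete illustration of one predictor variable, the topographic wetness index is commonly computed per cell as TWI = ln(a / tan β), where a is the specific catchment area and β is the local slope. A minimal sketch, assuming this standard formulation (the function name and the slope floor are illustrative; Arc Hydro's implementation may differ):

```python
import math

def twi(specific_catchment_area, slope_radians, min_slope=1e-6):
    """Topographic Wetness Index: ln(a / tan(beta)), where a is the specific
    catchment area (upslope contributing area per unit contour width) and
    beta is the local slope. Higher values indicate flatter, high-accumulation
    cells that tend to be wetter. A small floor on tan(beta) avoids division
    by zero on perfectly flat cells."""
    tan_b = max(math.tan(slope_radians), min_slope)
    return math.log(specific_catchment_area / tan_b)

# A flat, high-accumulation cell scores higher than a steep, low-accumulation one.
print(twi(1000.0, math.radians(1)) > twi(10.0, math.radians(30)))  # True
```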
Previous Performance of the WIM and Potential Applications
The WIM was created through original research at the University of Virginia. It was originally developed and evaluated for environmental planning applications, specifically to streamline the wetland permitting process by providing accurate wetland inventories that limit manual surveying to likely wetland areas. After calibration for four geographic regions in Virginia using a rich ground truth dataset of jurisdictionally confirmed wetlands and nonwetlands, the WIM was able to identify 80-90% of true wetlands across the sites. The proportion of wetland predictions that were correct varied from 22% to 69%. Overall, the results suggest strong potential for the WIM to support wetland delineation. However, success in other landscapes will depend on the quality of the DEM and available ground truth data, which allow for the necessary calibration of WIM parameters to specific landscapes. This iterative process will likely reveal unique DEM preprocessing parameters that improve the representation of the land surface for wetlands specific to the region. Further, reliable and abundant ground truth data will allow the model to learn a range of wetland characteristics and provide representative accuracy assessments.
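The two figures quoted above correspond to two standard metrics: the share of true wetlands the model found (recall, 80-90%) and the share of wetland predictions that were correct (precision, 22-69%). With imbalanced classes such as sparse wetlands, these are far more informative than overall accuracy. A minimal sketch of the computation (illustrative, not the WIM's accuracy-report code):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision (share of predicted wetlands that are correct) and recall
    (share of true wetlands that are found), from paired label sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 9 nonwetland cells and 1 wetland cell: the model finds the wetland
# (recall 1.0) but also flags one nonwetland cell (precision 0.5), even
# though overall accuracy is a flattering 90%.
y_true = [0] * 9 + [1]
y_pred = [1] + [0] * 8 + [1]
print(precision_recall(y_true, y_pred))  # (0.5, 1.0)
```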
For further reading on the development and evaluation of the WIM, see the following publications:
O'Neil, G. L., Goodall, J. L., Behl, M., Saby, L. (2020). Deep Learning using Physically-Informed Input Data for Wetland Identification. Environmental Modelling and Software. 104665. https://doi.org/10.1016/j.envsoft.2020.104665.
O'Neil, G. L., Saby, L., Band, L. E., Goodall, J. L. (2019). Effects of LiDAR DEM Smoothing and Conditioning Techniques on a Topography-Based Wetland Identification Model. Water Resources Research, 55. https://doi.org/10.1029/2019WR024784.
O'Neil, G. L., Goodall, J. L., Watson, L. T. (2018). Evaluating the potential for site-specific modification of LiDAR DEM derivatives to improve environmental planning-scale wetland identification using random forest classification. Journal of Hydrology, 559, 192-208. https://doi.org/10.1016/j.jhydrol.2018.02.009.
Breiman, L. (2001). Random forests. Machine learning, 45(1), 5-32.
I am new to Arc Hydro (AH) and have been working to identify sinks in the landscape. I encountered several issues, so I want to share my methods in the hope that this saves someone else the time I spent figuring it out. I started my project with raw LiDAR data and processed it using Quick Terrain Modeler, which resulted in 10 subset DEMs in .tif format for just one county. I did not mosaic the subsets because I later learned that a county-level DEM is far too large to process in AH. Many of these tips were compiled from the AH tutorial found online, the GeoNet AH Problem Solvers group, and the developers of AH. I used ArcMap 10.4.
*Before starting, make sure you have downloaded the correct version of AH for your system and that your computer has the hardware/software needed for this level of geoprocessing. You will need a Spatial Analyst license.
1) Create a folder on your local machine, i.e., C:\GIS\project
*When using AH, always work on the local machine, and keep names and pathways short and simple.
2) Open ArcMap, select My Template/Blank Map, and designate a geodatabase for this map, i.e., C:\GIS\project\project.gdb
7) Set Target Locations
AH toolbar > ApUtilities > Set Target Locations > ApUtilitiesConfig > OK > make sure this location is correct
AH toolbar > ApUtilities > Set Target Locations > HydroConfig > OK > make sure this location is correct
*Rasters should go into a folder (not a .gdb), and vectors go into a .gdb
8) Add your DEM raster to the map, and build pyramids if prompted.
9) Save the map
File > Save As > navigate to C:\GIS\project and save the map, i.e., project.mxd
10) Preprocess the DEM if necessary. The raster format that I found to be most useful included: a small raster (columns and rows < 10,000 x 10,000), integer (not floating point), GRID format, a defined Z coordinate system, and NoData values converted to "0". I started with a 32-bit, floating-point .tif raster, so I had to preprocess the raster before running it in AH:
a) Original DEM Float to Integer
Spatial Analyst > Math > Int
Output Location: C:\GIS\project
*I added "i" to the end of the file name so I knew which raster this was, i.e., "projecti.tif"
b) DEM .tif to GRID
Conversion > To Raster > Raster to Other Format
Output Location: C:\GIS\project
c) Define Projection
Input: DEM GRID in integer format (from the previous step)
Open the coordinate system list, Spatial Reference Properties > select the Z Coordinate System tab > select NAVD88 > double-click NAVD88 > the Vertical Coordinate System Properties window will open > change Linear Unit Name: centimeters > Apply > OK > OK > OK
*This step changes the coordinate system of the input file and does not create an output.
d) Raster Calculator, convert NoData to "0"
Spatial Analyst > Map Algebra > Raster Calculator
Use this equation for the raster: Con(IsNull("inputraster"),0,"inputraster")
Output: C:\GIS\project\projectc
*I added a "c" to the end of the name so I knew this was the calculated version of the raster.
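The Raster Calculator expression Con(IsNull("inputraster"),0,"inputraster") simply replaces NoData cells with 0 and passes every other cell through unchanged. The same logic, mirrored in plain Python over a small nested-list grid (None standing in for NoData; this is an illustration, not arcpy code):

```python
def fill_nodata(grid, fill=0):
    """Mirror of Con(IsNull("inputraster"), 0, "inputraster"): replace NoData
    cells (represented here as None) with a fill value, keep the rest."""
    return [[fill if cell is None else cell for cell in row] for row in grid]

dem = [[12, None], [None, 15]]
print(fill_nodata(dem))  # [[12, 0], [0, 15]]
```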
11) Remove the other files that may have been added to your map, except for the "projectc" file (this should be the raster that is integer, GRID format, with a defined projection, and NoData=0). It wouldn't hurt to save your map again and clean your temp folder one last time before running AH.
I followed all of the above steps and still could not get a successful output from Sink Evaluation; the issue turned out to be my computer's graphics/video card. It worked perfectly once I ran it on a more robust machine.