ArcGIS Pro 3.0 makes some exciting new capabilities available to you for the Spatial Analyst extension. In the Suitability Modeler it is now easier to create more complex models, and there is an option to process large raster data efficiently with Raster Analytics. If you do water flow modelling, there are some very powerful new tools for hydrology analysis. There is a new tool to calculate a spatial relative risk surface. You can now perform zonal analysis with circular statistics. Read on for details on these changes, and more.
ArcGIS Pro 3.0 was released on June 23, 2022.
A selection of the changes in this release can be found in the What's New in ArcGIS Pro 3.0 blog post and video. For a more comprehensive list, see What's New in ArcGIS Pro 3.0 in the online Help. Because this is a major release, the information in the Migration from ArcGIS Pro 2.x to 3.0 help topic will help guide you through the transition.
Listed here are the main functional areas that have been improved since the last release:
The Suitability Modeler in ArcGIS Pro 3.0 sees continued improvements in capability, usability, and the ability to work on large tasks.
The Suitability Modeler now has the ability to split complicated models into component sub-models. Breaking the analysis down into logical groupings makes it easier for domain experts to collaborate within their areas of expertise. A central planner can then combine those specialized models into a more comprehensive overall plan to support better decisions.
You can now share and run suitability models on servers using ArcGIS Pro as a client. By running the models with Raster Analytics, you can take advantage of distributed processing to analyze larger datasets more efficiently than before. The process is straightforward: first, the suitability model is shared as a portal item. When the model is run, the processing occurs on the servers, and the output is created as web imagery layers in the active portal.
The new Calculate Kernel Density Ratio tool uses two input feature datasets to calculate a spatial relative risk surface. This is useful when the phenomenon being analyzed requires a control.
One application of this tool could be an epidemiologist studying the occurrences of a disease to determine whether high prevalence in certain areas could be linked to environmental factors. The density ratio is calculated using the disease occurrences as the numerator and the total population as the denominator. The resulting surface shows the density of disease occurrence normalized by population density, which makes it possible to determine where disease occurrences are higher than expected.
In comparison to the Kernel Density tool, the output from this new tool is normalized, meaning the resulting values represent the ratio of the two input densities rather than absolute density values.
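The normalization idea can be sketched in a few lines of plain Python. This is an illustrative sketch of the ratio arithmetic only, operating on two small, already-computed density grids; the `density_ratio` helper is hypothetical and is not the tool's implementation (the actual tool also performs the kernel density estimation itself).

```python
def density_ratio(event_density, control_density, nodata=None):
    """Cell-by-cell ratio of two equally sized density grids.

    Illustrative only: each output cell is the event (numerator) density
    divided by the control (denominator) density at that cell.
    """
    out = []
    for e_row, c_row in zip(event_density, control_density):
        row = []
        for e, c in zip(e_row, c_row):
            # Where the control density is zero the ratio is undefined,
            # so emit a NoData placeholder instead of dividing by zero.
            row.append(nodata if c == 0 else e / c)
        out.append(row)
    return out

disease = [[2.0, 4.0], [1.0, 0.5]]   # hypothetical disease-case density
people  = [[4.0, 4.0], [2.0, 0.0]]   # hypothetical population density

print(density_ratio(disease, people))  # [[0.5, 1.0], [0.5, None]]
```

A value above 1 in this sketch would flag a cell where events are denser than the control, which is the kind of "higher than expected" area the epidemiologist in the example is looking for.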
The Extract By Mask tool has been updated with two new optional parameters.
With the Extraction Area parameter, you can now extract the areas outside the mask, as well as inside it. This brings the tool in line with other tools in the extraction toolset.
The Analysis Extent parameter gives you more control over the extent of the output raster. You can define the output analysis area explicitly in several ways, either by typing values, choosing the display extent, selecting a layer, or browsing for an input dataset. The default analysis extent will be the intersection of the input raster and the input feature or mask data.
A hallmark feature of this release is the new Derive Continuous Flow tool, which creates consistent flow direction and flow accumulation rasters directly, in one step, regardless of whether sinks have been treated. Accompanying it are two new tools for extracting streams directly from elevation surfaces, Derive Stream As Line and Derive Stream As Raster. All of these tools let you specify a dataset that delineates real depressions in the elevation surface, giving you more control over the landscape features that dictate water flow.
If you've done hydrology analysis in the past, you'll know one of the requirements was to spend time creating what is called a hydrologically conditioned DEM to use as the elevation surface. This typically meant identifying sinks (low points) or depressions in the data, especially artificial ones caused by artifacts in the source data or previous processing steps. Once identified, additional work had to be done to smooth them over, with the end goal of producing an elevation surface over which water will flow in the expected direction. With these new tools, you can get to the fun part of analysis and modelling right away!
For the Block Statistics tool, Focal Statistics tool, and Focal Statistics raster function, when the Weight neighborhood type is selected, the calculations used for the Mean and Standard deviation statistics have been improved. The denominator in the equation is now the sum of the weight values applied to the kernel, instead of the number of cells in the kernel neighborhood. Note that for both the weighted mean and the weighted standard deviation, the weights must be positive values. For more details, see the help content for the weighted neighborhood calculations. The calculation for the weighted Sum statistic is unchanged.
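As a rough illustration of the updated denominator, here is a minimal pure-Python sketch of a weighted mean and standard deviation over a single neighborhood. The `weighted_mean_std` helper is hypothetical, not Esri's implementation, and it assumes a population-style variance with the weight sum as the denominator, consistent with the description above.

```python
import math

def weighted_mean_std(values, weights):
    """Weighted mean and standard deviation for one neighborhood, where
    the denominator is the sum of the kernel weights rather than the
    number of cells (the ArcGIS Pro 3.0 behavior described above).
    Weights must be positive values."""
    w_sum = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / w_sum
    # Population-style weighted variance (an assumption for illustration).
    var = sum(w * (v - mean) ** 2 for v, w in zip(values, weights)) / w_sum
    return mean, math.sqrt(var)

# A 3-cell neighborhood with kernel weights 1, 2, 1:
mean, std = weighted_mean_std([10.0, 20.0, 30.0], [1.0, 2.0, 1.0])
print(mean)            # 20.0
print(round(std, 4))   # 7.0711
```

With unit weights this reduces to the ordinary mean and standard deviation, which is why only the Weight neighborhood type is affected by the change.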
As a part of this work, for the Block Statistics tool when the Weight neighborhood is selected, the Median and Minority options are no longer available. The available choices of statistics now match those of Focal Statistics.
For the Compute Confusion Matrix raster tool, the Intersection over Union (IoU) mean value is now computed for each class. IoU is the area of overlap between the predicted segmentation and the ground truth divided by the area of union between the predicted segmentation and the ground truth.
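The IoU formula itself is simple to sketch. The hypothetical pure-Python helper below computes IoU for one class from sets of cell indices labeled with that class; it illustrates the formula only, not the tool's implementation.

```python
def iou(pred, truth):
    """Intersection over Union for one class: the area of overlap between
    the predicted segmentation and the ground truth, divided by the area
    of their union. Inputs are sets of (row, col) cell indices."""
    overlap = len(pred & truth)
    union = len(pred | truth)
    return overlap / union if union else 0.0

# Predicted and ground-truth cells for a single class:
predicted = {(0, 0), (0, 1), (1, 0), (1, 1)}
actual    = {(0, 1), (1, 1), (2, 1)}
print(iou(predicted, actual))  # 2 cells overlap, 5 in the union -> 0.4
```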
The Export Training Data for Deep Learning tool has three new optional input parameters: Instance Feature Class, Instance Class Value Field, and Minimum Polygon Overlap Ratio. For the Metadata Format parameter, a new Panoptic_Segmentation metadata format option is available. This tool can now also take advantage of parallel processing for improved performance.
Several Spatial Analyst tools calculate various statistics on rasters based on particular sets of input values. Among those statistics are Majority and Minority, which identify the most frequently occurring and the least frequently occurring values in those sets, respectively. There can be cases where there is a tie, with multiple values occurring at the same highest or lowest frequency. For the Zonal Statistics tool, the logic applied in this scenario is to select the lowest of the tied values. For example, if the cell values in a zone were 1, 1, 2, 2, 2, 5, 5, 5, and 6, the majority is a tie between values 2 and 5, which each have a frequency of 3. The tool returns a value of 2 for the zone, since it is the lowest of the tied values.
For the Cell Statistics, Block Statistics, and Focal Statistics tools, as well as the Cell Statistics and Focal Statistics raster functions, the logic historically applied when there was a tie was to return NoData. This often led to more areas of NoData in the output raster than may have been expected. To avoid this, the logic for these tools was updated in Pro 3.0 to match that of Zonal Statistics, so they now also return the lowest of the tied values. For the Focal Statistics tool, a slightly different approach is used in order to retain the significance of the processing cell itself. Here, the lowest of the tied values is returned, unless the processing cell's value is one of the tied values. In that case, the value returned for that neighborhood is the value of the processing cell.
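The tie-breaking logic described above can be sketched in plain Python using the example zone from earlier. Both helpers are hypothetical illustrations of the stated rules, not Esri's code.

```python
from collections import Counter

def majority_lowest_tie(values):
    """Majority with ties resolved to the lowest tied value
    (the Zonal Statistics rule, now shared by Cell/Block Statistics)."""
    counts = Counter(values)
    top = max(counts.values())
    return min(v for v, n in counts.items() if n == top)

def focal_majority(values, processing_cell):
    """Focal Statistics variant: if the processing cell's value is among
    the tied majority values, return it; otherwise return the lowest."""
    counts = Counter(values)
    top = max(counts.values())
    tied = {v for v, n in counts.items() if n == top}
    return processing_cell if processing_cell in tied else min(tied)

zone = [1, 1, 2, 2, 2, 5, 5, 5, 6]
print(majority_lowest_tie(zone))  # 2 and 5 tie; the lowest is returned -> 2
print(focal_majority(zone, 5))    # processing cell value 5 is tied -> 5
```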
Have you been using the advanced capabilities for geodesic surface analysis offered by the Surface Parameters geoprocessing tool? With this release, these capabilities are now available to you in two additional ways. One is with the Surface Parameters raster function. The other is with the Surface Parameters raster analysis portal tool, which will be available when you are signed in to a suitably configured ArcGIS Enterprise portal.
This release sees the introduction of circular calculations when performing zonal statistics. What does this mean? Let's consider an abstract example where the statistic you want to calculate is the mean (average). Say you have two input cell values A and B for a particular zone, and those values represent measures of 0 degrees and 360 degrees. If you do a regular arithmetic calculation for the mean [(valueA + valueB) / 2 = (0 + 360) / 2 = 180], the result is probably not what you would have intended. How can the average of two values that represent due North be due South? By applying circular calculations, the mean of these two values would actually be 0. See the following table for some additional examples comparing the arithmetic mean to the circular mean for different angle inputs.
Examples:

| Input angles | Arithmetic mean | Circular mean |
| --- | --- | --- |
| 0, 360 | 180 | 0 |
| 0, 90, 180, 270 | 135 | 129.6 |
| 0, 90, 180, 270, 360 | 180 | 0 |
To calculate circular statistics correctly, two new parameters are available. Use the Calculate Circular Statistics parameter to indicate whether the tool should calculate ordinary linear statistics or cyclical statistics. An additional parameter, Circular Wrap Value, specifies the value at which a cyclical quantity wraps back around to zero. Besides compass direction in degrees (0 to 360), other examples of cyclical quantities include the hours of a day (0 to 24) and the fractional parts of real numbers.
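The vector-based calculation behind a circular mean can be sketched in a few lines of Python. The `circular_mean` helper below is a hypothetical illustration, not the implementation used by the tools: each value is mapped onto the unit circle and the mean is the angle of the summed vectors. Its `wrap` argument plays the role of the Circular Wrap Value parameter.

```python
import math

def circular_mean(values, wrap=360.0):
    """Circular mean of cyclical values that wrap around at `wrap`
    (360 for compass degrees, 24 for hours of a day). Each value is
    mapped to a point on the unit circle, and the mean is the angle of
    the summed unit vectors, mapped back into the [0, wrap) range."""
    s = sum(math.sin(2 * math.pi * v / wrap) for v in values)
    c = sum(math.cos(2 * math.pi * v / wrap) for v in values)
    mean = math.atan2(s, c) % (2 * math.pi) * wrap / (2 * math.pi)
    # Snap floating-point noise so results a hair below `wrap` report as 0.
    return round(mean, 9) % wrap

print(circular_mean([0, 360]))          # 0.0 (the arithmetic mean is 180)
print(circular_mean([23, 1], wrap=24))  # 0.0 (midnight, not noon)
```

Passing `wrap=24` in the second call shows why the wrap value matters: 23:00 and 01:00 average to midnight on a clock, even though their arithmetic mean is 12.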
The option to perform circular statistics applies only to the following statistics: Mean, Majority, Minority, Standard Deviation, and Variety.
Circular statistics are available in the following functionality:
• Spatial Analyst geoprocessing tools: Zonal Statistics, Zonal Statistics as Table
• Raster Analysis geoprocessing tools for performing raster analysis on data in your portal: Summarize Raster Within, Zonal Statistics as Table
• Raster functions: Zonal Statistics
In the ArcPy Spatial Analyst module, you can now use the Render function to apply symbology to a raster object. The symbology can be a rendering rule or a color map. This is particularly useful for displaying data in a Jupyter notebook.
We hope that you will find the new and updated functionality available in ArcGIS Pro 3.0 to be useful for your work. We have some other blogs to come that go into more detail on some of this functionality, so be on the lookout for them.