Have you ever tried to get a route between several points in ArcMap and received the message "Warning: Location X in 'Stops' is on a non-traversable network element position"? You do a little research and find out you need to enable the setting "Exclude restricted portions of the network". Even after turning that setting on, you still get the error. What's going on?
If you try the same route in ArcGIS Pro, there’s no error message. And you can't find the “Exclude restricted portions of the network” setting anyway.
What exactly is the problem here, and why do ArcMap and ArcGIS Pro behave differently?
The problem that's preventing the stop from being on the route is with the network location.
What is the network location? Per Esri Help documentation, “a network location is a type of network analysis object that is tied to the network; furthermore, its position on the network is input for the analysis." (Network Locations, http://bit.ly/2gfVM6m). In plain terms, it's where ArcGIS Network Analyst routes to.
When ArcGIS Network Analyst solves a route, Network Analyst doesn't route to the XY location of the point. Instead, the extension will snap the point to the nearest street and calculate some location values on that street. That location is the network location.
The network location can be seen in four fields*: SourceID, SourceOID, PosAlong, and SideOfEdge.
SourceID: This will be the name of the source feature class that the network location is on.
SourceOID: The OID of the source feature that the network location is on in the source feature class.
PosAlong: The position along the digitized direction of the source line feature**. The number is expressed as a ratio, between 0 and 1. For example, a PosAlong value of 0.557 indicates that the location is 55.7% down the line.
SideOfEdge: The side of the line that the original XY location is on with reference to the digitized direction of the line.
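As a rough illustration, the snap-and-measure step can be sketched in a few lines of Python. This is a toy model, not Esri's implementation; the field names simply mirror the four described above, and the SideOfEdge coding used here (1 = right, 2 = left) is illustrative.

```python
from dataclasses import dataclass

@dataclass
class NetworkLocation:
    source_id: int      # stands in for SourceID (which source feature class)
    source_oid: int     # stands in for SourceOID (which street feature)
    pos_along: float    # stands in for PosAlong, a 0..1 ratio along the digitized direction
    side_of_edge: int   # stands in for SideOfEdge; 1 = right, 2 = left (illustrative coding)

def locate_on_segment(px, py, ax, ay, bx, by, source_id, source_oid):
    """Snap point (px, py) onto the segment A->B and build a toy network location."""
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    # Project the point onto the segment and clamp the ratio into [0, 1]
    # so the snapped position stays on the line.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    # The sign of the cross product tells us which side of the digitized
    # direction the original point lies on.
    cross = dx * (py - ay) - dy * (px - ax)
    side = 2 if cross > 0 else 1
    return NetworkLocation(source_id, source_oid, t, side)

# A point slightly above a street digitized from (0,0) to (10,0):
loc = locate_on_segment(5.57, 1.0, 0.0, 0.0, 10.0, 0.0, source_id=1, source_oid=42)
print(loc.pos_along)     # ~0.557, i.e. 55.7% along the line
print(loc.side_of_edge)  # left of the digitized direction in this toy coding
```

The PosAlong example in the text (0.557 meaning 55.7% down the line) falls out of the clamped projection ratio directly.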
The message "Warning: Location X in 'Stops' is on a non-traversable network element position" indicates that the network location for that point is on a street that is considered prohibited or non-traversable. Some examples of a location that is on a prohibited street include, but are not limited to:
The analysis has been set so that it simulates driving a car, and the network location is on a pedestrian-only street.
The stop is on the right side of a one-way street which is prohibited in the "along" direction***.
The network location is on an unpaved road, and unpaved roads are prohibited in the analysis.
You can use the Network Identify tool on an edge in the network dataset to see which network attribute restrictions (like one-way or unpaved roads) determine whether the edge is traversable or prohibited.
So, the network location is on a prohibited network edge. What do you do about that? Let's continue by looking at the "Exclude restricted portions of the network" setting, since that's the setting we use to fix the error.
The "Exclude restricted portions of the network" setting causes network analysis objects to locate only on elements that don't have active prohibit-restrictions, which are restrictions that are checked in the Analysis Settings tab. With this on, then a network location will not be placed on any edge considered prohibited at the time.
How does it work? If the "Exclude restricted portions of the network" setting is on when ArcGIS Network Analyst calculates the network locations, it skips any street considered prohibited and finds the closest traversable street instead.
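Conceptually, the setting changes which candidate edges are even considered when snapping a point. Here is a hedged Python sketch of that idea; the edge records and the nearest_edge helper are invented for illustration and are not part of any Esri API.

```python
import math

# Toy stand-ins for network edges near a stop. Each has a snap point
# (the closest point on that edge) and a flag for whether an active
# restriction prohibits it.
edges = [
    {"oid": 1, "snap": (1.0, 0.0), "restricted": True},   # pedestrian-only walkway (closest)
    {"oid": 2, "snap": (4.0, 0.0), "restricted": False},  # main road through campus
    {"oid": 3, "snap": (9.0, 0.0), "restricted": False},  # a farther side street
]

def nearest_edge(point, edges, exclude_restricted):
    """Pick the closest edge, optionally filtering out prohibited ones first."""
    candidates = [e for e in edges if not (exclude_restricted and e["restricted"])]
    return min(candidates, key=lambda e: math.dist(point, e["snap"]), default=None)

stop = (0.0, 0.0)
print(nearest_edge(stop, edges, exclude_restricted=False)["oid"])  # lands on the walkway
print(nearest_edge(stop, edges, exclude_restricted=True)["oid"])   # skips to the main road
```

With the setting off, the stop snaps to the prohibited walkway and the solve warns; with it on, the restricted edge is filtered out before the nearest-edge search, so the stop locates on the main road.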
Let's go back to the example of the analysis being set up to simulate driving a car, and the point is closest to a street marked as pedestrian-only. Getting more specific, let's say I work for a pizza delivery service. A customer called and ordered a pizza. They live in a college dorm, which is located on a pedestrian walkway. If I have the "Exclude restricted portions of the network" setting enabled when I load the point for that location, I'll get a route. And I'll see that it's not taking me to that pedestrian walkway; it's taking me to a point on the main road through the campus. From there, I'll park on the side of the road, get out and walk to the dorm to deliver the pizza. Then go back to my car and continue the route.
This setting is where we see one of the biggest differences in network locations between ArcMap and ArcGIS Pro. Let's start with ArcMap.
In ArcMap, all network location settings are accessed through the Network Locations tab of the network analysis Layer Properties****.
The order of changing settings related to network location matters because network location settings in ArcMap are not retroactive—they don’t go back and change any network locations already calculated. So, if you change a network location setting after loading your locations, you'll need to recalculate the network locations.
By default, ArcGIS Network Analyst in ArcMap does not use the "Exclude restricted portions of the network” setting, so you will need to turn it on. Either turn it on before loading the locations or after—if after, be sure to recalculate the network locations before solving.
In the situation described at the beginning of this blog, the "Exclude restricted portions of the network" setting was turned on, but the network locations were not recalculated. Here are some example steps to follow to ensure stops are included in the route:
Load the locations into the analysis layer.
Turn on "Exclude restricted portions of the network".
Make any remaining changes to the analysis settings, including which restrictions are turned on or off.
Recalculate the network locations *****.
ArcGIS Pro has more advanced network location settings, all found in the Add Locations geoprocessing tool, which loads the points into the network analysis layer and calculates the network locations.
But wait, where is "Exclude restricted portions of the network"? It's not gone; it's still there. In fact, ArcGIS Pro turns it on by default, so it's always in effect. ArcGIS Pro also automatically recalculates network locations affected by setting changes before the solve, so you do not need to recalculate locations manually; it does it for you.
These are some of the most used settings to keep in mind when working with ArcGIS Network Analyst, but there are many more. I encourage you to check out the settings and see how they can improve your network analysis.
* For points. Network locations for lines and polygons (for barriers, route zones, etc.) are stored in a single blob field and cannot be easily read.
** One of the easiest ways to see the digitized direction of a line is to add an arrow at the end of the line symbology. In ArcMap and ArcGIS Pro, there is a default symbology called “Arrow at End” that can be used.
*** “Along” indicates travelling with the digitized direction. “Against” indicates travelling against the digitized direction.
**** Common ways to access the network analysis Layer Properties are either double-clicking the analysis layer name in the Table of Contents or by clicking the Layer Properties box in the top-right corner of the Network Analysis window.
***** To recalculate the network locations, right-click the sublayer in the Network Analysis window, and choose Recalculate Location Fields.
I'm trying to download some basemaps that cover the extent of Alaska using the Download Map tool in ArcGIS Pro 2.2, but since Alaska spans the 180° line, I get everything except the area that I want. Hopefully this can be addressed in a future update.
I've included a couple screen captures, one of the area of interest, and the other being the resulting tile package.
In a previous blog post, my esteemed colleague and board game nemesis Kelly Gerrow-Wilcox discussed the basics of capturing web traffic in a web browser using the built-in browser developer tools. But what about when you’re consuming services in ArcGIS Pro, ArcMap, or any other non-browser client? Enter: web traffic capturing tools. There are numerous free tools (such as Fiddler, Wireshark, Charles, and others) which allow users to capture web traffic from their computers. This blog will focus on capturing HTTP/HTTPS traffic using Fiddler. I've chosen Fiddler because of its relatively simple interface and broad adoption within Esri Technical Support.
Basics: download, configuration and layout
Fiddler can be downloaded here. After installation, the only critical configuration that needs to occur is to enable it to capture traffic over HTTPS.
With Fiddler open go to Tools > Options
In the pane that opens, check Capture HTTPS CONNECTs and Decrypt HTTPS traffic. (This allows you to capture any requests sent using HTTPS, which is slowly but inevitably replacing HTTP as the protocol for transferring data across the web.)
If it’s necessary to capture network traffic from a mobile device, some extra configuration is required. Both the mobile device and the machine where Fiddler is installed will need to be on the same Wi-Fi network. The following documents outline the steps to capture mobile traffic:
The Fiddler application itself is split into two main sections: the Web Sessions list and the…other pane (I couldn’t find an official name, so for the purposes of this blog we’ll refer to it as the Details pane).
The Web Sessions pane includes a sequential list of every request sent by the client to a web server. Important information contained here includes:
The type of protocol used (HTTP or HTTPS)
Which server the request was sent to, and the full URL
Note: the columns in the Web Sessions pane can be custom configured by right clicking anywhere in the headers and selecting Customize Columns. Some useful fields to turn on can be:
Overall Elapsed (available under Collection: Session Timers. This is the overall time it takes for the request to be sent and returned)
ClientBeginRequest (available under Collection: Session Timers. This is the time your software first began sending the request, using your computer’s time)
X-HostIP (Select Collection: Session Flags and manually enter the Header Name. This is the IP address of the server destination of the request)
If you click a single web session, that triggers the Details pane to populate with a wide variety of information for that specific request.
Intermediate: What do these details mean? Do they mean things? Let's find out!
There are numerous tabs in the Details pane. The most useful (for our purposes) are Timeline, Statistics, and Inspectors. The others are all advanced functionality outside of the scope of this blog.
The Statistics and Timeline tabs are both helpful when investigating any performance-related issue, for example, if a service is taking a long time to load in the Map Viewer. The Timeline tab is useful for identifying which request in a multi-request process is acting as a bottleneck. To utilize the Timeline tab, select multiple requests in the Web Sessions list. The timeline will display the requests in a sequential “cascade” format. Any requests taking an unusually long time will clearly stand out with a significantly longer bar in the timeline.
Statistics displays the exact time every step of the request took, from the client initially making a connection to the last step of the client receiving the response. This breakdown is useful for identifying which step within a single request is acting as a bottleneck. For example, if every step takes a fraction of a second but there is a multi-second pause between ServerGotRequest and ServerBeginResponse, that would indicate that something on the server side is causing a slowdown.
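To make the idea concrete, here is a small Python sketch that computes the gaps between a handful of session timers and flags the largest gap as the likely bottleneck. The timer names echo Fiddler's Statistics tab, but the timestamps are invented for illustration.

```python
from datetime import datetime

# Toy session timers in the style of Fiddler's Statistics tab (values are made up).
timers = {
    "ClientBeginRequest":  "10:15:02.100",
    "ServerGotRequest":    "10:15:02.150",
    "ServerBeginResponse": "10:15:06.900",   # long gap before this: server-side slowdown
    "ClientDoneResponse":  "10:15:07.000",
}

def parse(t):
    """Parse an HH:MM:SS.fff timestamp."""
    return datetime.strptime(t, "%H:%M:%S.%f")

# Compute the elapsed seconds between each consecutive pair of timers.
steps = list(timers)
deltas = {
    f"{a} -> {b}": (parse(timers[b]) - parse(timers[a])).total_seconds()
    for a, b in zip(steps, steps[1:])
}

bottleneck = max(deltas, key=deltas.get)
print(bottleneck)          # ServerGotRequest -> ServerBeginResponse
print(deltas[bottleneck])  # 4.75 seconds: the server is the slow step here
```

In this fabricated trace, the 4.75-second gap between ServerGotRequest and ServerBeginResponse points at the server, exactly the kind of pattern described above.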
Lastly, the Inspectors tab is where the bulk of information is displayed and likely where the vast majority of any troubleshooting will be done. Here is where the curtain is drawn back to reveal the nitty gritty of how applications interact with web services. Inspectors is further divided into two main sections: the Request information (everything related to the request sent by the client) and the Response information (everything related to the response returned by the server). Both divisions have a nearly identical set of subdivisions which display the content of the request/response in different formats. Below are the useful tabs for our purposes:
Headers – A list of additional information that is not part of the main request. This may include information like security/authentication information, the data format of the request or response, the type of client making the request, etc. This is a good place to find an ArcGIS Online token, when relevant.
WebForms (request specific) – Depending on the type of request, this will display a breakdown of each request parameter and the value of that parameter. For example, when submitting a search query this section will display the parameters of the query (like keywords, date ranges, etc).
ImageView (response specific) – If the request is for an image, the ImageView will display the image which is returned. Obviously, this is particularly useful for requests involving tiled services.
Raw – This will display the entire request or response in text format.
JSON – If the request/response includes content in JSON format, this tab displays the content in a more human-readable format. This is particularly useful for requests/responses to the REST API of ArcGIS Enterprise servers.
XML – If the request/response includes content in XML format, this tab displays the content in a more human readable format. This is particularly useful for requests/responses to OGC services.
Advanced: That’s great Alan. But what am I supposed to actually do with this information?
How you use network traffic information is going to depend on what you’re trying to learn or solve. Checking network traffic can help identify the where and what of a problem but cannot tell you the solution. This is where your knowledge of your app, your web services and if all else fails, some good old fashioned web searching come into play. Here are a few common examples of ways to isolate the problem you’re facing:
Check the HTTP/HTTPS response code in the Web Sessions pane. Anything that isn’t 200 should be investigated (it might not necessarily be a problem, but it’s worth looking at). Even a 200 response can contain error messages or other useful information.
A 304 response from a server will trigger the client (web browser, ArcMap, etc) to use the client’s cache and Fiddler is therefore not actually capturing a complete response from the server. If there is a 304 response on a critically important request, try again either in Incognito mode or clear your client’s cache.
A 401 or 403 response typically means the server requires some sort of authentication. This can help, for example, identify an unshared feature service in a web map which is shared publicly.
A 504 response typically means something timed out. Use this in conjunction with the Timeline, Statistics and Overall Elapsed column mentioned above to troubleshoot performance issues.
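The triage logic in the bullets above can be summarized in a small Python helper. The hint strings are paraphrases of the guidance in this post, not Fiddler output.

```python
def triage(status):
    """Map an HTTP status code to a rough troubleshooting hint, per the tips above."""
    hints = {
        200: "OK, but still check the body for embedded error messages",
        304: "Served from the client's cache; retry in Incognito or clear the cache",
        401: "Authentication required; check service sharing and credentials",
        403: "Authentication required; check service sharing and credentials",
        504: "Timeout; check the Timeline/Statistics tabs and the Overall Elapsed column",
    }
    if status in hints:
        return hints[status]
    if 400 <= status < 500:
        return "Client-side error; inspect the request parameters"
    if 500 <= status < 600:
        return "Server-side error; check the server logs"
    return "Not a 200; worth investigating"

print(triage(403))  # points you at authentication/sharing
print(triage(502))  # falls through to the generic server-side hint
```

A helper like this is handy when skimming a long capture: sort or filter the Web Sessions by result code, then work through anything that isn't a plain 200.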
If you can’t find the problematic request, open the Raw, JSON or XML tabs of the response and just scroll through the requests looking for one that returns an error.
Raw, JSON and XML contain the exact same information, just formatted differently.
When errors occur, the error listed in the response may be more detailed than the error provided in the user interface of whichever application was being utilized.
Find a way to ignore irrelevant requests!!
One of the most challenging factors in troubleshooting network traffic is the volume of requests that are sent and received for even minor actions. Below are strategies to help avoid cluttering your log with unnecessary requests.
Turn off Capture (File > uncheck Capture Traffic) when you know Fiddler’s not capturing relevant information.
Close any browser windows or background processes that don’t need to be running.
If Fiddler is capturing traffic you know is not related to what you’re investigating, Filter it out of the Web Sessions by right clicking a session > Filter > select what session parameter you want to filter.
If you’ve captured a number of requests that you know you don’t need, select and delete them.
Target Fiddler to only capture requests from a single application by clicking the ‘Any Process’ button (next to the small bullseye icon), holding and then releasing your mouse over the application you want to capture from. This would be useful, for example, to capture all traffic coming from ArcMap while ignoring everything that occurs with your browsers.
Once you have isolated the request(s) relevant to the issue you’re investigating, the following tips can help determine what the actual problem is.
If you can isolate the problematic request, consider what is the nature of that request in order to help determine any next steps.
Are ALL requests to the service failing? Perhaps the entire server is down or inaccessible.
It’s possible to resend any requests by right clicking a Web Session > Replay > Reissue and Edit.
This is especially helpful for isolating a specific header or request parameter that might be problematic. Modify the information under WebForms or Headers to see if that fixes the problem you’re encountering or reproduces the problem you’re investigating.
If you have a request that’s succeeding and one that’s failing, copy the headers or WebForms parameters one at a time from the request that’s working to the request that’s failing. Once the request works, you’ve successfully isolated the parameter/header in the requests that’s causing the problem.
It’s possible to send repeated requests by right clicking a Web Session > Replay > Reissue Sequentially.
This is helpful for capturing issues which might be intermittent. Send the request 20 or 30 times automatically and see if you hit the issue you’re looking for.
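The copy-one-parameter-at-a-time technique from the tip above can be expressed as a tiny loop. In this Python sketch, send is a stand-in for replaying the edited request in Fiddler, and the parameter names are hypothetical.

```python
def find_breaking_param(working, failing, send):
    """Copy parameters one at a time from the working request into the failing one.

    `send` stands in for replaying the edited request (returns True on success).
    Returns the name of the first parameter whose working value fixes the request,
    or None if copying every parameter still doesn't help.
    """
    patched = dict(failing)
    for name, good_value in working.items():
        if patched.get(name) == good_value:
            continue  # already identical; can't be the culprit
        patched[name] = good_value
        if send(patched):
            return name
    return None

# Hypothetical WebForms parameters from a working and a failing query:
working = {"f": "json", "token": "GOOD_TOKEN", "where": "1=1"}
failing = {"f": "json", "token": "EXPIRED",    "where": "1=1"}

# Pretend the server only accepts requests carrying the good token.
send = lambda params: params.get("token") == "GOOD_TOKEN"
print(find_breaking_param(working, failing, send))  # token
```

In Fiddler you would do the same thing manually with Reissue and Edit, swapping one header or WebForms value per replay until the failing request starts working.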
Web service query requests can be viewed in the browser with a user-friendly interface. This allows you to easily tweak and resend requests. To view a query request in the browser:
Right click the query session
Copy > Copy the URL
Paste in a browser window
Change the section in the URL “…f=json…” to “…f=html…”
Press Enter to browse to the page
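Steps 4 and 5 amount to rewriting a single query parameter. If you are scripting this instead of editing the URL by hand, a Python helper might look like the following; the service URL is hypothetical.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def to_html_format(url):
    """Rewrite the f parameter of a REST query URL from json to html."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))  # decode the existing parameters
    query["f"] = "html"                   # swap the response format
    return urlunsplit(parts._replace(query=urlencode(query)))

# A hypothetical feature service query copied out of a Fiddler web session:
url = ("https://example.com/arcgis/rest/services/Roads/FeatureServer/0/query"
       "?where=1%3D1&outFields=*&f=json")
print(to_html_format(url))  # same query, but with f=html for the browser UI
```

Pasting the rewritten URL into a browser gives you the REST endpoint's HTML form, where each parameter can be tweaked and resubmitted interactively.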
Fiddler and other network capture software are not silver bullets to solve all web traffic related GIS issues, but they are useful tools to help. With a bit of practice, utilizing this type of software can help resolve a wide variety of issues when accessing web services in GIS applications.
Got any good Fiddler (or general network traffic logging) tips? Feel free to leave them in the comments!
Imagine this: you've been assigned a project where you must find the drive times (at 5 minutes, 10 minutes, and 30 minutes) for 100 different customers and the best routes to deliver supplies to all customers. On top of that, you'll need to do it for many different datasets. The result of each analysis, along with the underlying data used to produce those results, must be sent to the client.
ArcGIS Network Analyst is the best option, but you'll need your own network dataset. So, you reach out to a colleague or friend. They'd be happy to give you a network dataset, but it contains data for a much larger area than needed. While it may work for your analyses, you can't send the client the whole dataset.
A network dataset containing turn features, sign features, and/or traffic data can be difficult to clip. Using a regular Clip operation on the streets can break connectivity between the streets, as well as break the link between the network edges and the turns, signs, and traffic data.
So, the question is how can you clip the network dataset to a manageable size and keep all the connectivity between the streets, turns, signs, and even the traffic data?
There are a few ways to accomplish this, as outlined in this post.
Using Extract Data from the Distributed Geodatabase Toolbar in ArcMap:
From the Distributed Geodatabase toolbar, select Extract Data.
In the Extract Data Wizard, check the box to 'Show advanced options for overriding data extraction defaults when I click Next'.
Choose the extent of the data to extract to a new geodatabase (when using an extent smaller than the full extent of the network dataset, the network dataset will be clipped to that extent during the extraction).
Choose the feature classes to extract. By default, all feature classes in the map are checked, and the network dataset is one of those layers.
Click Next > Finish.
Using the Consolidate Layer Geoprocessing Tool in ArcMap or ArcGIS Pro:
In the Data Management toolbox, select Package toolset > Consolidate Layer.
Choose the input layers and the output folder. Choosing the network dataset layer (for example, Streets_ND) brings all source layers with it.
Choose the output format.
Choose the extent of the data to extract to a new geodatabase (when using an extent smaller than the full extent of the network dataset, the network dataset will be clipped to that extent during the extraction).
Using the Create Mobile Map Package Geoprocessing Tool in ArcGIS Pro:
Note: This is the best option if you plan to use routing in Navigator for ArcGIS.
Choose the input map(s) and the output location.
Optional: Choose an input locator. If you want to use data in Navigator for ArcGIS, you must use an input locator other than the World Geocoding Service or the default XY locator.
Choose the appropriate extent (when using an extent smaller than the full extent of the network dataset, the network dataset will be clipped to that extent during the extraction).
Check the box to Clip Features.
With all methods above, your data will still allow routing and other network analysis, but will now be a much more manageable size for sharing with others.
Rachel A. - Desktop Support Analyst
Nobody likes to talk about it, but sometimes computers can crash. Yup, the entire thing just fails and nothing at all can be recovered (if you haven’t backed up your data, go do it now!). Or what if your laptop is stolen, or you flipped your kayak and your machine sank to the bottom of Lake Superior? You just don’t have it anymore and there is absolutely nothing you can do to get it back. When these types of things happen, any Esri licenses that were authorized on the machine may be lost, too.
In the past, an authorized maintenance contact had to call Esri Technical Support to submit a license appeal and recover the lost licenses. Now, this functionality is built into My Esri, empowering your organization with self-service functionality and enabling you to get back up and running quickly.
I wanted to make sure that our customers are aware of this great new functionality and walk through how you’d go about getting your licenses back in the event of a catastrophic failure or loss as described above – though I really hope that never happens.
To perform the following steps, you will either need “Esri Admin” permission or the “Take Licensing Actions” permission. Sensitive information such as machine IDs, license numbers, and other personal information have been replaced with asterisks in the following screenshots.
First, log in to My Esri and click the My Organizations tab.
Please note that I’m demonstrating the steps in a QA environment and that your experience won’t include the green QA…
Click the Licensing tab.
This will bring up the Licensing Overview page and if you have the correct permissions, you should see the Recover Lost Licenses option both in the Licensing panel as well as a card.
Next, click Recover Lost Licenses.
The Recover Lost Licenses screen explains that this is a process to retrieve licenses from a machine that is no longer accessible due to system failure, system loss, or destruction. The License Recovery process requires the signature of the organization’s License Administrator in a Certificate of Destruction. This process is irreversible and should only be used as the absolute last option when all other solutions to rectify the problem have failed.
An example of when you would not use the Recover Lost Licenses option is if you can still access the machine and deauthorize the licenses normally. The instructions provided describe how to perform standard license deauthorization:
Once you’ve determined that it really isn’t feasible to scuba dive to the bottom of Lake Superior to recover your machine (and hence, its licenses), follow the steps outlined below to complete the recovery.
Step 1: Find Your Machine
To proceed with license recovery, select how you would like to find the machine. There is an option to search by products on the machine or use the machine’s UMN IDs if you know those.
Option A: Search for machine by product
Search for the machine by populating the dropdown boxes.
We see that the search for ArcGIS Desktop Advanced Concurrent Use licenses for this organization returns five machines.
Selecting the machine from which the licenses need to be recovered will take you to Step 3.
Option B: Select Machine using the UMN
Enter the UMN for the machine and click Search. Since the UMN by definition is associated with a single machine, you should get only one result in this case, as opposed to searching for a machine by product.
Click Select to take you to Step 3.
Step 3: Review Selected Machine
This step will show you a list of products our records show were activated for the selected machine.
After reviewing the selected machine, you have the option to go back if this is not the correct machine or proceed with the license recovery process.
Step 4: Accept Terms and Conditions
Review and agree to the terms and conditions, and click Next.
Step 5: Summary to process License Return
This step gives you another opportunity to fully review the selected licenses to return. If the selection is correct, click the “Process Return” button near the bottom of the page.
You’ll receive a confirmation screen showing the status of each license return.
And that’s it. You are now able to authorize these licenses on a new, dry machine!
In the event that not all licenses are returned successfully, you will be presented with a summary of which licenses were returned and which were not. These should be the exception, not the norm. In these cases, please work with Esri Customer Service or your local distributor to finalize the recovery process.
Kory K. - Customer Advocacy Lead
3D data is becoming more ubiquitous nowadays and is especially promoted throughout the ArcGIS Platform. From web scenes, to CityEngine, to ArcGIS Pro, there are many different applications to import, manage, model, and share your 3D data. Getting the output you are looking for may require numerous steps and tools. To help navigate some of these steps and tools, here are some tips and tricks for working with 3D data in ArcGIS.
3D File Coordinate Systems
The majority of 3D formats do not store a coordinate system; GeoVRML and KML are the lone exceptions. KML uses the WGS 1984 coordinate system and meters as the unit of measurement. All other types (DAE, 3DS, OBJ) must be placed properly; otherwise, they may import at "0,0" (off the coast of Africa).
Trick #1
If you are using CityEngine, you can drag and drop your shape from the Navigator window into the scene (this workflow assumes a scene coordinate system is already set). When you export the shape to a multipatch feature class, the coordinate system is created with the data so you can bring it into another ArcGIS product.
Trick #2
The same workflow can be accomplished in ArcGIS Pro. Create an empty multipatch feature class, navigate to Editor > Create Features > Select Model, and click the globe to place the model.
Trick #3
Use the Replace with Model tool (ArcScene or ArcGlobe) or the Replace with Multipatch tool (ArcGIS Pro).
Trick #4
If you are using ArcScene, ArcGlobe, or ArcGIS Pro, manually place the model during an edit session using the Move, Rotate, or Scale operations.
Note: There is a known issue with the Import 3D Files tool. The placement points parameter is not honored, so as of ArcGIS 10.4.1 and ArcGIS Pro 1.3, this tool is not a viable option. This issue is planned to be fixed in a future release.
To import your 3D file with textures, you must ensure the texture resides next to the 3D file, either as an individual image file or a folder with the images.
Note: Both the file and folder must have the same name for the software to recognize the texture.
Trick #1
Textures are only supported in file or enterprise geodatabases. Shapefile multipatches do not support textures, so make sure to import the multipatch into a geodatabase.
Make sure your 3D data has valid z-values. When sharing a web scene or importing the data into ArcGIS Pro, you want to make sure the elevation values are correct.
Trick #1
If your multipatch is not at the correct elevation, you can use this trick: in ArcGIS Pro, set the multipatch data "on the ground" and use the Layer 3D To Feature Class tool. The elevation values are then embedded into the multipatch.
Trick #2
If you are using simple feature data (non-multipatch), use the Add Surface Information tool to add z-values to the data. You can also write z-values to the attribute table with the Add Z Information tool and verify them in the tool's output. If the data does not have valid elevation values, see the next tip.
Tools to Create 3D Data
Understand which tools can create 3D data: Layer 3D To Feature Class, Interpolate Shape, or Feature To 3D By Attribute.
Understanding 3D Data
Understand your 3D data. Extruded 2D polygons are not true 3D features, so you must export to multipatch to make the polygon a true 3D feature. Simple point, line, and polygon features can be considered 3D data if they have the correct z-values. 2D features can also be symbolized using 3D marker symbology.
Know the difference between a z-enabled feature class and a non-z-enabled feature class with a z field in the attribute table. Feature classes must be z-enabled to display at the correct elevation. You might see a z field in the attribute table, but that does not mean the geometry has the correct z-values. This can be verified by editing the vertices or adding z-values to an attribute table, as described above.
While this blog does not cover every facet of working with 3D data, it is my hope that this will provide some valuable information for working with 3D data on the ArcGIS Platform.
Andrew J. – Desktop Support Analyst
I’ve installed the new PerfTools add-in for ArcGIS Pro; what are some scenarios in which this new tool can help optimize performance?
Displaying and Logging Rendering Time for Specific Spatial Extents
Have you created a series of spatial bookmarks in your ArcGIS Pro project? A one-line script command (ZoomToBookmarks all) can zoom through these spatial bookmarks and log draw time, frames per second (FPS) metrics, and other timestamps. No bookmarks? No problem…you can also specify extents by providing 2D or 3D camera positions in the same spatial coordinates as your data.
Playing and Timing Animations
Have you added an animation? Using the PlayAnimation command returns measures of total elapsed animation time, as well as average and minimum FPS.
Roaming Across the Active View
To build a thorough display cache or simulate navigation through large datasets without specifying bookmarks or camera positions, you can use the roaming capabilities of PerfTools. This allows you to virtually “walk” across the active view, starting from the upper left and moving row-by-row towards the lower right. The total draw time, in addition to average and minimum FPS, are logged for your reference.
Timing Spatial Selection
Moving from a file geodatabase to an enterprise geodatabase? Or have you updated your spatial index? You can examine the impacts these changes have on making spatial selections in ArcGIS Pro. The SelectFeatures command allows you to specify your selection bounding box in screen coordinates on the active 2D or 3D view. PerfTools logs a count of the features selected, as well as the selection and draw complete times.
Most of the power of the PerfTools add-in comes from a comprehensive scripting language that allows you to assemble several commands into a larger scenario. With this functionality, you can simulate typical user interactions with ArcGIS Pro, including creating and opening projects, panning, zooming, selecting, and so forth. You can add delays or “think time”, as well as looping commands (ForCount, ForFile, ForFolder, and ForTime), to repeat key parts of your workflow. Script commands also let you control key aspects of the content and structure of PerfTools logging.
Custom Script Commands
Not finding the script command you’re looking for? PerfTools allows you to create your own commands by leveraging the ArcGIS Pro SDK. The PerfTools download includes documentation and a sample, “T1Command”, to get you started with your own customizations.
After installing the add-in and opening ArcGIS Pro, take a look in your Documents\ArcGIS\AddIns\ArcGISPro\PerfTools folder. You should see a PDF there titled "PerfTools_for_ArcGIS_Pro.pdf", which contains comprehensive documentation and sample code snippets.
Is PerfTools comprehensive? You bet! We’ll be taking a closer look at some of these techniques in upcoming blog posts. In the meantime, feel free to download the PerfTools add-in and try it out for yourself!
With almost every new release of ArcGIS Desktop and ArcGIS for Server, there are changes that aim to improve software quality and performance; sometimes, these changes require you to update your workflows. The improvements and deprecations made for geocoding in ArcMap 10.5 and ArcGIS Pro 1.4 may break some existing workflows or require you to prepare before installing ArcGIS 10.5. In this post, we'll give you an overview of these changes.
1. Address locators stored in geodatabases are no longer supported, as specified in the deprecation notice for ArcGIS 10.4 and 10.4.1. As such, you must move or copy the address locators from the geodatabase to a file folder before installing ArcGIS Desktop and ArcGIS Server 10.5. By doing this, you'll avoid the following issues in ArcGIS 10.5:
Address locators currently stored in geodatabases do not display as inputs to tools and are not visible in ArcCatalog when viewing the geodatabase's contents.
Starting a geocode service published from an address locator that is stored in a geodatabase fails to create an instance and returns an error in the server logs.
Publishing an address locator stored in a geodatabase or an .sd file that references an address locator stored in a geodatabase directly to ArcGIS Server 10.5 will return an error in the server logs.
2. We made several improvements to the US Address locator styles, such as adding fields and reordering the input fields used when building an address locator. However, these improvements break any existing workflows that use Python scripts and ModelBuilder models to create address locators. These issues occur without an error or warning message and render the address locators unusable. Furthermore, geocoding services created from these locators and used in web applications are impacted in ArcGIS 10.5.
To avoid these issues, update the field mapping in your scripts and models after installing ArcMap 10.5 and ArcGIS Pro 1.4 but before running them. Additional output fields, similar to the output fields of StreetMap Premium and the World Geocoding Service, also display in the geocode result.
3. If it is necessary to continue using the US Address locator style from ArcGIS 10.4 to create address locators after installing ArcGIS Desktop 10.5, contact Esri Support Services to request access to the USAddress.lot.xml file.
For more information, please refer to this technical article index, which covers more detailed solutions to the aforementioned issues.

Shana B. - Product Engineer
With the addition of the Train Random Trees Classifier, Create Accuracy Assessment Points, Update Accuracy Assessment Points, and Compute Confusion Matrix tools in ArcMap 10.4, as well as all of the image classification tools in ArcGIS Pro 1.3, it is a great time to check out the image segmentation and classification tools in ArcGIS for Desktop. Here we discuss image segmentation, compare the four classifiers (Iso Cluster, Maximum Likelihood, Random Trees, and Support Vector Machine), and review the basic classification workflow.

Image Segmentation
Before you begin image classification, you may want to consider segmenting the image first. Segmentation groups similar pixels together and assigns the average value to all of the grouped pixels. This can improve classification significantly and remove speckles from the image.

Train Iso Cluster Classifier
The Iso Cluster classifier is an unsupervised classifier (that is, it does not require a training sample) that lets the user set the number of classes and divide a multiband image into that number of classes. It is the easiest of all the classifiers to use, as it does not require creating a training sample, and it can handle very large segmented images. However, it is not as accurate as the other classifiers due to the lack of a training sample.

Train Maximum Likelihood Classifier
The Maximum Likelihood Classifier (MLC) is a supervised classifier (that is, it requires a training sample) that uses Bayes' theorem for decision making. The training data is used to create a class signature based on the variance and covariance. The algorithm assumes a normal distribution for each class sample in multidimensional space, where the number of dimensions equals the number of bands in the image. The classifier then compares each pixel to the multidimensional space for each class and assigns the pixel to the class it has the maximum likelihood of belonging to, based on its location in that space.

Train Random Trees Classifier
One supervised classifier introduced with ArcGIS 10.4 is the random trees classifier, which breaks the training data into random sub-selections and creates a classification decision tree for each sub-selection. The decision trees run for each pixel, and the class assigned to the pixel most often by the trees is selected as the final classification. This method is resistant to over-fitting caused by small amounts of training data and/or large numbers of bands. The classifier also allows the inclusion of auxiliary data, such as segmented images and digital elevation model (DEM) data.

Train Support Vector Machine Classifier
Support Vector Machine (SVM) is a supervised classifier similar to MLC, in that it looks at multidimensional points defined by the band values of each training sample. However, instead of evaluating the maximum likelihood that a pixel belongs to a class cluster, the algorithm defines the multidimensional space such that the gap between class clusters is as large as possible, dividing the space into sections separated by gaps. Each pixel is then classified according to where it falls in the divided space.

Image Classification Workflow
With the addition of the Create Accuracy Assessment Points, Update Accuracy Assessment Points, and Compute Confusion Matrix tools in ArcGIS 10.4, it is now possible to both create and assess image classification in ArcMap and ArcGIS Pro.
The general workflow for image classification and assessment in ArcGIS is:
1. Segment the image (optional, but often beneficial).
2. Create a training sample for each class.
3. Train the classifier of your choice.
4. Classify the image.
5. Create accuracy assessment points.
6. Update the accuracy assessment points with ground truth values.
7. Compute the confusion matrix.
Use the measures of accuracy (the user’s accuracy, producer's accuracy, and Kappa index) calculated by the confusion matrix to assess the classification. Make changes to the training sample, as needed, to improve the classification.
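As a concrete illustration, the accuracy measures reported by the Compute Confusion Matrix tool can be derived from the matrix by hand. The sketch below is plain Python (not the arcpy tool itself) and assumes the matrix rows hold the classified counts and the columns hold the ground truth counts:

```python
# Derive accuracy measures from a confusion matrix.
# Rows = classified counts, columns = ground truth counts (a common convention).

def accuracy_measures(matrix):
    n = sum(sum(row) for row in matrix)                 # total assessment points
    diag = [matrix[i][i] for i in range(len(matrix))]   # correctly classified per class
    row_totals = [sum(row) for row in matrix]           # points classified into each class
    col_totals = [sum(col) for col in zip(*matrix)]     # ground truth points per class

    users = [d / r for d, r in zip(diag, row_totals)]       # user's accuracy per class
    producers = [d / c for d, c in zip(diag, col_totals)]   # producer's accuracy per class

    observed = sum(diag) / n                            # overall (observed) accuracy
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)  # chance agreement
    kappa = (observed - expected) / (1 - expected)      # Kappa index
    return users, producers, kappa
```

For example, the two-class matrix [[45, 5], [10, 40]] yields user's accuracies of 0.9 and 0.8 and a Kappa index of 0.7.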
The best part about this seven-step process is that it makes it easy to compare different classification methods, which is often important. Getting your training sites nailed down (step 2) is usually the toughest part, but steps 3 through 7 fly by since the analysis is done for you. In the end, you have several classified raster images to use in your work and can choose the best result based on your objectives.
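To make the decision rules described above more concrete, here is a toy one-dimensional sketch of the maximum likelihood assignment. This is illustrative Python only, with made-up sample values; the actual Train Maximum Likelihood Classifier tool works in n dimensions (one per band) with covariance matrices:

```python
import math

# Toy 1-D maximum likelihood classification: estimate a normal distribution
# (mean, variance) per class from training samples, then assign each pixel
# to the class whose distribution gives it the highest likelihood.

def train(training):
    # training: {class name: [sample pixel values]}
    stats = {}
    for cls, samples in training.items():
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        stats[cls] = (mean, var)
    return stats

def likelihood(x, mean, var):
    # Normal probability density function
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(pixel, stats):
    # Pick the class with the maximum likelihood for this pixel value
    return max(stats, key=lambda c: likelihood(pixel, *stats[c]))
```

For instance, after training on hypothetical "water" samples near 11 and "urban" samples near 85, a pixel value of 20 is assigned to "water" and a value of 82 to "urban".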
As an example, we used this workflow to classify a Landsat 8 image of the Ventura area in Southern California. We used the MLC, SVM, and Random Trees (RT) methods to classify a single Landsat 8 raster captured on February 15, 2016. We classified the image into nine classes and manually selected training samples and accuracy assessment (“ground truth”) points. Additionally, we used a segmented image as an additional input raster for each classifier. Once we classified the rasters, we computed a confusion matrix for each output to determine the accuracy of the classification when compared to ground truth points. The Kappa index in the Confusion Matrix gives us an overall idea of how accurate each classification method is.
The results showed that each method performed well, judging by both the Kappa indexes and a visual assessment. Ranked from the highest Kappa index to the lowest, the SVM output was the most accurate (Kappa = 0.915), followed by Random Trees (Kappa = 0.88) and, finally, the MLC method (Kappa = 0.846).
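The maximum-margin idea behind SVM, described earlier, can likewise be illustrated in one dimension: for two separable classes, the boundary that maximizes the gap is the midpoint between the closest samples of each class (the support vectors). This is a toy sketch with hypothetical values, not the tool's actual n-dimensional optimization:

```python
# Toy 1-D illustration of the maximum-margin rule behind SVM. Real SVMs
# solve a quadratic optimization in n dimensions, possibly with kernels.

def max_margin_boundary(class_a, class_b):
    # Assumes all of class_a lies below all of class_b on the number line
    support_a = max(class_a)             # class A sample nearest the gap
    support_b = min(class_b)             # class B sample nearest the gap
    return (support_a + support_b) / 2   # midpoint maximizes the margin

def classify_side(pixel, boundary):
    # Assign a pixel according to which side of the boundary it falls on
    return "A" if pixel < boundary else "B"
```

With hypothetical class A samples [10, 15, 20] and class B samples [60, 70], the boundary lands at 40.0, so a pixel value of 35 falls on the A side and 45 on the B side.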
We can also see from the confusion matrix that some methods did better than others for specific classes. For example, MLC didn’t do too well with the Bare Earth class, and RT and SVM weren’t much better. This is great information for homing in on a better classification: now we know that we should focus on getting better Bare Earth training samples to improve our results. You could keep iterating until you get very high accuracy for all classes, if that’s what your analysis requires; if you need just a general idea of the area, you could take what you get in round one! Check out what we got:
This blog post provides the latest updates regarding deprecated features in ArcGIS 10.4 and in the recent release of ArcGIS 10.4.1.
With each release, the platforms and functionality supported by ArcGIS are assessed and adjusted based on customer needs and technology trends. The purpose of the Deprecated Features for ArcGIS document is to provide as much advance notice as possible regarding these changes.