POST
I am really struggling to understand how to use building polygons to create an output that will work with the Export Training Data for Deep Learning tool. My assumption is that I need to create classified tiles for use with the Classify Pixels Using Deep Learning model, but that assumption could be wrong. I cannot figure out how to make the tool create classified tiles that make any sense to me.

When I run the Polygon to Raster tool with just the building footprint shapes, it creates a raster with no background. When I use that raster with the Export Training Data tool, it reports success but no image chips are created.

(Aerial with Polygon to Raster output on top. No training image chips are created when these two rasters are used as inputs to the Export Training Data for Deep Learning tool.)

I tried running the Segment Mean Shift tool. It creates a raster with an added background surrounding the buildings that seems to work with the Export tool. However, that raster seems to treat the buildings as the NoData values and the background as the raster feature. When I run the Export Training Data tool, it only creates classified tile chips for the portions of my aerial that contain no buildings. That output will do me no good for classifying pixels as buildings.

(Segment Mean Shift tool output based on Polygon to Raster input.)

(Training image chips are created by the Export Training Data for Deep Learning tool when I use the Segment Mean Shift raster as the classified raster, but none of the chips have buildings in them. They only cover portions of the aerial that had no buildings at all within the chip. The Output No Feature Tiles option made no difference.)

I have no idea what I should expect from these tools, because the documentation and examples are no help. I have tried at least 50 workflow variants to get classified tile chips that show buildings, and nothing has worked. Outputs have ranged from the Export tool creating nothing, to creating only chips without buildings, to errors stating that the raster is incompatible with the Export Training Data tool. The Segment Mean Shift output shown above is the best result I have had so far. My trial and error definitely seems to be just error at this point. I am sure Esri support will be useless for this task, since at this point it is clear to me that they know less than I do.

Sandeep, I really, really need someone to provide more details about your building footprints workflow than what your blog says.
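For what it's worth, my current guess at why the Polygon to Raster output produced no chips is that a classified-tiles raster has to give every pixel a class value, so the background has to be burned in as its own class (for example 0) instead of being left as NoData. Here is a pure-Python sketch of that idea; it is not Esri code, and the `rasterize` helper and its box format are made up for illustration:

```python
# Illustration only (not an Esri API): a classified training raster
# generally needs every pixel labeled, so the area outside the building
# polygons must become an explicit background class, not NoData.

def rasterize(grid_w, grid_h, building_boxes, background=0, building=1):
    """Burn axis-aligned building footprints into a fully classified grid.

    building_boxes: list of (xmin, ymin, xmax, ymax) in cell coordinates.
    """
    grid = [[background] * grid_w for _ in range(grid_h)]
    for xmin, ymin, xmax, ymax in building_boxes:
        for y in range(ymin, ymax):
            for x in range(xmin, xmax):
                grid[y][x] = building
    return grid

grid = rasterize(8, 8, [(1, 1, 4, 3)])
# Every cell now has a class value; none are left unclassified.
assert all(cell in (0, 1) for row in grid for cell in row)
```

If this guess is right, the fix in ArcGIS terms would be making sure the Polygon to Raster output (or a raster combined with it) covers the whole extent with valid class values.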
08-18-2019 05:25 PM
POST
|
Sandeep: When do you expect to release the notebook you are working on? I have managed to use aspects of the learn.ai module code in the land cover notebook with my local data, so I expect I will be able to make use of the notebook for building footprints. I have several questions about the approach you described. Did you use the Classified_Tiles metadata output from the Export Training Data for Deep Learning tool, and did you have to first convert your building footprint polygons to rasters in order to do that? Or were you able to use a different metadata format that worked with the original building footprint polygons you had? Also, were you able to use the Classify Pixels modeling with just a single class of buildings, or did you have to have two classes, buildings and non-buildings (everything surrounding your building footprints)? Anyway, I would prefer to benefit from your experience on this task; otherwise I will have to do all of the trial and error on my own, since I am going to come up with a building footprint creation process one way or the other.
08-16-2019 12:55 PM
POST
|
Sandeep: I would very much like to see the UNet model your team used to extract building footprints. Did your model include a RefineNet subroutine to enhance the quality of the classified image output or did you just rely on the Regularize Building Footprint tool to clean up lower resolution raster images?
08-14-2019 05:03 PM
POST
|
The notebook really confused me, since it caused me to read the online help for Image Server and not for ArcGIS Pro or Desktop, so I thought the tool only worked with Image Server. Since I have Spatial Analyst, I decided to just try the Export Training Data for Deep Learning tool on my own data in Desktop and output it to a local directory. I found that my Image Service was too large and had download restrictions that caused a 999999 error, so I used the Clip tool to extract a GDB raster of a smaller portion. My Building Footprint feature class was also too large, so I selected footprints that overlapped the clipped image and exported them. I made sure that my building footprint polygons had 5 fields matching the fields the Image Classification Manager adds (Classname - text, 256 char; Classvalue - Long; RED - Long; GREEN - Long; BLUE - Long) and populated them. That finally worked to output PNG files and KITTI_rectangles metadata to a local directory.

For the benefit of anyone like me who wants a real-life example of what the tool produces rather than just the description given in the tool help: the output created an images directory, a labels directory, and a stats.txt file.
There were 644 PNG files in the images directory, based on the number of pixels and stride I specified and the number of images that contained a polygon, and there were 644 text files in the labels directory, all with numeric file names padded with leading zeros (i.e., 000000000.png and 000000000.txt, respectively). A sample label text file is shown below (in my screenshot the image from the images directory is shown with the building polygons overlaid; the polygons are not part of the output):

1 0.00 0 0 0.00 433.91 24.63 507.57 0 0 0 0 0 0 0
1 0.00 0 0 33.03 497.67 82.45 512.00 0 0 0 0 0 0 0
1 0.00 0 0 85.77 384.83 198.12 512.00 0 0 0 0 0 0 0
1 0.00 0 0 408.83 386.81 512.00 506.69 0 0 0 0 0 0 0
1 0.00 0 0 388.53 195.90 502.51 290.04 0 0 0 0 0 0 0
1 0.00 0 0 409.18 0.00 512.00 18.65 0 0 0 0 0 0 0

The tool help says the first position in each line is the classification code, the next three are skipped, the next four are image coordinates that define the minimum bounding rectangle of the polygon, and the rest of the positions are skipped. The minimum bounding rectangle defines a separate training chip within the image that the deep learning classifier will use for each building.

The stats.txt file summarized the output of the tool as follows:

images = 644 * 3 * 512 * 512
features = 4539
features per image = [min = 1, mean = 7.05, max = 14]
classes = 1

cls name    cls value   images   features   min size   mean size   max size
Buildings   1           644      4539       0.02       1978.09     6068.74
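In case it helps anyone, the label lines above can be parsed with a few lines of Python, following the field layout the tool help describes (position 1 is the class value, positions 5 through 8 are the bounding rectangle). The `parse_kitti_labels` function is my own sketch, not part of the tool output:

```python
# Sketch of a parser for KITTI_rectangles label files as described above.
# Each line has 15 whitespace-separated fields; only the class value
# (field 1) and the bounding box (fields 5-8: left, top, right, bottom
# in image coordinates) are populated by the export.

def parse_kitti_labels(text):
    boxes = []
    for line in text.strip().splitlines():
        fields = line.split()
        cls = int(fields[0])
        left, top, right, bottom = map(float, fields[4:8])
        boxes.append((cls, left, top, right, bottom))
    return boxes

sample = """\
1 0.00 0 0 0.00 433.91 24.63 507.57 0 0 0 0 0 0 0
1 0.00 0 0 33.03 497.67 82.45 512.00 0 0 0 0 0 0 0
"""
for cls, left, top, right, bottom in parse_kitti_labels(sample):
    print(cls, left, top, right, bottom)
```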
08-14-2019 04:52 PM
POST
|
I will not be using Esri tools for deep learning as long as Esri only publishes examples that rely on Image Server, since I cannot and will not work in that environment. Telling me that working outside of Image Server can be done does me no good without any examples or clear explanation showing how to actually do it. I called Esri help and they could not tell me how to adapt the code the Esri deep learning team provided in their notebook to work outside of Image Server, so I really need an example where Image Server is not used by the Export Training Data for Deep Learning tool.

I only had a trial license for Image Analyst, and it expired while I was trying to get help from Esri on how to actually use the Export Training Data for Deep Learning tool without Image Server. I believe my organization is still trying to get an Image Analyst license added to our Enterprise license, but I am frustrated that the deep learning team's notebooks were unusable without Image Server access. I do have an Advanced license, a Spatial Analyst license and a 3D Analyst license, so I can use the Eliminate tool, the Classify Pixels Using Deep Learning tool, the Majority Filter tool and the Regularize Building Footprint tool shown in the ModelBuilder diagram for the Building Footprint Extraction portion of the blog. However, I will not be getting access to Image Server, so I really need help making your notebook work without it.

The deep learning team really needs to lay out all of these license requirements up front in the notebooks more clearly, so that people don't waste their time trying them when they don't have the necessary licenses. And please provide an alternative that doesn't involve Image Server if it is only highly recommended and not an absolute requirement.
08-14-2019 08:55 AM
POST
|
The notebook this post was originally based on was set up to use Image Services and Image Server. Will your code work with data stored locally rather than online or using Image Server? Will it work with just an Image Analyst license in ArcGIS Pro?
08-13-2019 12:18 PM
POST
|
I had not seen that. Is there any code I could see that is related to the image? What was involved in training the model?
08-13-2019 11:52 AM
POST
|
Most likely the basemap is in the Geographic Coordinate System WGS 1984 and all of your other layers are in a different coordinate system (probably a local projection suited to your area of interest). Your data frame was probably using the basemap coordinate system, which means all of your other layers needed to be projected on the fly pretty much every time the map refreshed. That can take a long time if you have a lot of layers that need to project on the fly, especially if you are zoomed out pretty far. I don't use basemaps for this reason, and my jurisdiction bought our own aerials and serves them in my local projection, so performance is always good.

You may want to try making the data frame use your local projection so that only the basemap projects on the fly, and set the basemap visibility to turn off beyond a certain zoom level so that it does not project on the fly over a large area. Of course, if you have control over the imagery and have Image Server, you want to make sure the imagery is projected into your local coordinate system. Mixing coordinate systems in an editing map is generally a bad idea, and the effort of projecting all of the data into the same coordinate system is worth it if you will be using your editing map regularly.
08-13-2019 09:53 AM
POST
|
Ben: A linear referenced route feature class is a standard feature class that is just M-coordinate enabled (no different from a feature class with Z coordinates enabled), and when it is created in a file geodatabase it has an ObjectID, so there actually is nothing special about routes versus any other feature class. I have not tried ArcGIS Pro for modifying tables, but in ArcMap you cannot use the Catalog view to edit a schema if the feature class is a layer in a map document, so maybe ArcGIS Pro is the same. In any case, in both ArcMap and ArcGIS Pro you should be able to use the Add Field tool even if the feature class is open in a map document.

Assuming you can get through all of the steps, please post back with pictures of the final results, especially if it didn't work out as expected or if you find a lot of situations where this approach needs adjustment. Although I have applied similar techniques to my own problems, I have never used this exact process with data like yours, so I am curious to see how well it works.
08-12-2019 07:39 AM
POST
|
1. Do both the centerlines and the road casings have street names that can be matched after running the Intersect tool? If so, that would allow you to select only the portions of centerline that match the road casing street name.
2. Use the Dissolve tool on the centerlines, dissolving on the road casing name and FID, to make sure the centerlines are a single segment within each road casing.
3. Make each intersected and dissolved centerline into a linear referenced route with the Create Routes tool, basing the route name on the road casing FID, with measures that start at 0, based on length, in units that work for you (feet, miles, meters, kilometers, etc.), using lower-left priority.
4. Add a double field for the halfway measure and calculate it using the Python expression: !Shape!.lastPoint.M / 2
5. Export the route table view to create an event table (not a feature class).
6. Add double fields for X_Coord, Y_Coord and Offset_Dist to the table.
7. Calculate the Offset_Dist field to be a distance that will definitely fall outside of the road casings, in the units you prefer, e.g., 500 feet.
8. Use the Make Route Event Layer tool with the exported table to create a point layer, using both the angle and the complement angle options.
9. Use the Geometry Calculator to calculate the X and Y coordinate values into the X_Coord and Y_Coord fields.
10. Export the table view to create a new table that has the Route ID (road casing FID), X, Y, offset, angle and complement angle values.
11. Use the Bearing Distance To Line tool twice to create two line feature classes extending from the midpoint coordinate of the centerline, perpendicular to the centerline, one for each side, by alternating the two angle fields. If the roads curve a lot and could create multiple intersecting segments with the casing, you may also want to Select By Location using the route event layer points to be sure the segment touches that point of the centerline.
12. Intersect the perpendicular line features created in step 11 with the road casings and select the portions of the lines that have the same road casing FID as the Route ID.
13. The lengths of the selected lines are the distances from the midpoint of the centerline to the edge of the road casing, on either the right or left side of the centerline depending on the angle field you used for that feature class.
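The geometry behind the midpoint and perpendicular-offset steps can be sketched in plain Python. This is only an illustration under the simplifying assumption of a straight, two-point centerline; the function name and inputs are made up, not any Esri API:

```python
import math

# Sketch of the midpoint-and-perpendicular-offset idea: find the
# centerline midpoint (half the end measure), then offset a chosen
# distance perpendicular to the line in both directions, which is
# what the angle and complement-angle event options produce.

def midpoint_perpendiculars(p1, p2, offset_dist):
    """p1, p2: centerline endpoints (x, y). Returns the midpoint and
    the two points offset_dist away on either side, perpendicular to
    the p1->p2 direction."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    left = (mx + offset_dist * math.cos(angle + math.pi / 2),
            my + offset_dist * math.sin(angle + math.pi / 2))
    right = (mx + offset_dist * math.cos(angle - math.pi / 2),
             my + offset_dist * math.sin(angle - math.pi / 2))
    return (mx, my), left, right
```

For an east-west centerline from (0, 0) to (10, 0) with a 500-unit offset, the midpoint is (5, 0) and the two offset points land 500 units due north and due south of it.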
08-11-2019 08:13 PM
POST
|
Are you using ArcMap Desktop or ArcGIS Pro? You keep saying ArcMap, but you posted in the ArcGIS Pro content group. I will assume you are using ArcMap Desktop for the moment, since I am more familiar with troubleshooting that application than ArcGIS Pro.

Normally, you should first assume that strange behaviors are created by the specific map document you are working with. This is especially true if you have been using this map document over the course of several ArcMap version upgrades. Have you tried opening a brand new map and adding data to it directly from disk? If your data displays normally in the new map while doing the activities you have described, then your saved map is either configured wrong or corrupt. If this is ArcMap, you should try MXD Doctor. If that doesn't work, try copying the layers of your corrupt map and pasting them into a brand new map. If that solves your problem, kill the original map and start over with the fresh one. If the problem starts occurring as a result of the copy and paste, then the problem is definitely in the way the layers are configured or in the data. You may need to rebuild the map completely from scratch, only referencing the original map during the rebuild so that you replicate it without copying and pasting anything. If you rebuild from scratch you may discover several problems in your configuration, such as invalid label or definition query expressions, possibly caused by schema changes in your source data, or corrupted data. You may need to run the Repair Geometry tool on your edit feature classes.

If the problem occurs when you start with a completely new map and directly add data to it from disk, your default map template may be corrupt and should be deleted and regenerated. If regenerating the map template fails to resolve the problem, then your software install may be corrupt. This does not exhaust all of the possible causes of your problem, but this is the normal order of things I try when troubleshooting this kind of problem.
08-11-2019 10:41 AM
POST
|
I tried running the code from the Light-Weight RefineNet (in PyTorch) GitHub project. I was able to run the notebooks without a problem using the pretrained models. However, when I tried to run the model training script, I was unable to complete the first epoch because it used up all of my GPU memory. I am using an NVIDIA Quadro P4000 with 8 GB of VRAM. I thought the batch size was defaulting to 1, but it was actually set to 6 or higher. It appears I am able to run the training after setting the batch size to 5 or less.
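A rough back-of-the-envelope calculation is consistent with that cutoff, assuming training memory grows roughly linearly with batch size. The overhead and per-sample numbers below are placeholders I made up for illustration, not measurements from the P4000:

```python
# Back-of-the-envelope check of why batch size 6 could overflow an
# 8 GB card while batch size 5 fits. Activation memory during training
# scales roughly linearly with batch size; the constants here are
# assumed placeholders, not measured values.

GPU_BUDGET_GB = 8.0
FIXED_OVERHEAD_GB = 1.5   # weights, gradients, CUDA context (assumed)
PER_SAMPLE_GB = 1.2       # activations per training image (assumed)

def max_batch_size(budget=GPU_BUDGET_GB, fixed=FIXED_OVERHEAD_GB,
                   per_sample=PER_SAMPLE_GB):
    """Largest batch size whose linear memory estimate fits the budget."""
    return int((budget - fixed) // per_sample)

print(max_batch_size())  # 5 under these assumed numbers
```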
08-06-2019 08:12 AM
POST
|
You cannot do this in one step with geoprocessing tools. Select the records with values that you want dissolved and dissolve them. Then select all of the records with Null values in your original parcels and use the Append tool with the NO_TEST schema option to insert them into the Dissolve output.
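The two-step logic can be sketched in plain Python. This is only an illustration of the selection logic, not arcpy; the record structure and the use of summed areas to stand in for merged geometry are made up:

```python
# Sketch (not arcpy): dissolve parcels that share a non-null key, then
# carry the null-key parcels through untouched, as appending them to
# the Dissolve output would.

def dissolve_then_append(parcels, key="owner"):
    """parcels: list of dicts, each with a key field and an 'area' field."""
    dissolved = {}
    nulls = []
    for p in parcels:
        if p[key] is None:
            nulls.append(dict(p))  # Null records pass through unchanged
        else:
            d = dissolved.setdefault(p[key], {key: p[key], "area": 0.0})
            d["area"] += p["area"]  # summed area stands in for merged geometry
    return list(dissolved.values()) + nulls

parcels = [{"owner": "A", "area": 1.0}, {"owner": "A", "area": 2.0},
           {"owner": None, "area": 0.5}]
print(dissolve_then_append(parcels))
```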
08-05-2019 01:11 PM
POST
|
What really got me interested in deep learning was the Microsoft Building Footprints project, and my goal is essentially to recreate their process so that I can apply it to more recent aerial photos. They applied semantic segmentation like the project in my previous post, but achieved much better results. Apparently it is crucial to incorporate the RefineNet upsampling layers described in this paper to achieve resolutions that make it possible to extract individual building footprints when buildings cluster near each other. The RefineNet paper included a link to the MATLAB code the authors used, and their GitHub page included a link to a PyTorch implementation of RefineNet. Sadly, I have not found any code published by Microsoft related to the footprints they created, so I am stuck trying to build up enough of a knowledge base to move beyond a conceptual understanding of what they did toward my own practical applications.
08-02-2019 10:26 AM
POST
|
I have been trying to run code from the last link I posted, but the code was developed by Microsoft and promotes the use of Azure. I am having problems setting up the Azure workspace, and I don't really want to risk getting charged for storing data that is purely experimental at first. I am frustrated that most of my problems center on licensing and access requirements of proprietary software, or the setup of online services that may require payment, and that I can't even begin running the code that actually processes any imagery or does any deep learning.

I found yet another GitHub project for extracting building footprints, described in the video here. This code was developed by a graduate student. He limited himself to using 3-band imagery, which is the kind of imagery I have. I am not sure how easy this code is to run, but it seems to include a command line interface, which is nice. Hopefully it avoids software that isn't open source and online resources that have to be paid for, but I haven't verified that yet.
08-01-2019 11:02 AM