POST
I just found this link to the GitHub code for a deep learning project that extracts building footprints from satellite imagery. An overview description of the project is here. I have not tried it or really explored the code yet, but at first glance it seems much closer to the kind of code I need. The code was designed for use on Linux, so I expect it will need some tweaking to run on Windows.
Posted 07-31-2019 01:45 PM

POST
Dan: I found this video on image segmentation using U-Net to detect cell nuclei in images, which shows the basic principles of a modeling approach that I think could be adapted to extract building footprints from aerials. I was able to get the code to work after pip installing a few site packages (opencv-python, tensorflow and tensorflow-gpu) and some NVIDIA developer software for GPU acceleration (CUDA 10.0 and cuDNN 7.6.2.24). The division of the training/test data into tiles or chips was not done by the video example code, so I still have to develop my own routines for preparing training data from much larger rasters, since the arcgis.learn.export_training_data() method esri has created requires access rights I don't have. My starting data is similar to the esri sample: I have a raster covering a much bigger extent than my area of interest, and a polygon layer of building footprints within my area of interest that optionally could be converted to a classified raster.

    export = learn.export_training_data(input_raster=naip_input_layer,
                                        output_location=samplefolder,
                                        input_class_data=label_layer.url,
                                        chip_format="PNG",
                                        tile_size={"x": 400, "y": 400},
                                        stride_size={"x": 0, "y": 0},
                                        metadata_format="Classified_Tiles",
                                        context={"startIndex": 0, "exportAllTiles": False, "cellSize": 2},
                                        gis=gis)

The export_training_data parameters suggest that this tool is very similar to the Split Raster tool. I don't have experience using the Split Raster tool either, but the main difference in the parameters seems to be the metadata_format that outputs Classified_Tiles. I wish I could see a sample of the output of export_training_data to compare against the Split Raster output, so I could determine what, if any, additional processing is done beyond what the Split Raster tool does.

The video example seems to handle training and testing of the model fairly well; however, it does not deal with creating a final model output from a new raster, and it seems best suited to processing separate photos that do not have to be reassembled into a single image at the end, so I would also have to figure out how to accomplish that. The esri example seems to have enclosed the final classification process in a black-box method. I assume that method tiles the new raster and combines the tiles at the end to create a final classified raster covering the original raster extent. Anyway, I would appreciate your thoughts on the assumptions I am making, and any suggestions you may have that might help me create code or apply other techniques so that I could design a process of my own that might work for my needs.
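The chipping step that export_training_data would have handled is simple enough to reproduce locally. Below is a minimal sketch (plain Python; the function name and defaults are my own, with the tile size mirroring the 400x400 call above) that computes chip origins for a raster of a given size; the origins could then drive NumPy slicing or gdal_translate windows:

```python
def chip_windows(width, height, tile=400, stride=None):
    """Return (xmin, ymin) origins of square chips covering a raster.

    stride defaults to the tile size (no overlap), matching a
    stride_size of {"x": 0, "y": 0}; the last row/column is shifted
    back so chips never run off the raster edge.
    """
    if stride is None:
        stride = tile
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    # cover the far edges even when the size is not a multiple of the stride
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Shifting the last row and column back to the raster edge avoids partial chips without losing coverage, which should also simplify reassembling per-chip predictions into a single raster later.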
Posted 07-31-2019 10:37 AM

POST
Dan: Thanks for the links and comments. The answer to "what do I want to use 'IT' for?" initially is orthophoto image analysis. The value of AI was proven to me by the building footprints Microsoft released. The inclusion of those 800K shapes overnight in my source data has allowed me to quickly analyze and dramatically improve the positional accuracy of my address points, parcels, land use tracking cases, general plan, zoning, cities, etc. in a very short amount of time, single-handed, which was never possible before. But the Microsoft data is already getting out of date, and I am certain that AI is the only practical way to make the maintenance of this layer possible for my jurisdiction and the 29 cities we encompass. I want to be part of the ability to create and maintain data in near real time, with ever increasing resolution, that is comprehensible and integrated across multiple objects in a variety of formats at scales both large and small for my jurisdiction. Currently my jurisdiction is most lacking in its ability to extract useful information from orthophoto imagery that goes beyond making it a map background.
Posted 07-26-2019 10:02 PM

POST
Well, it looks like even if I could access the data, I personally can't really run any of the deep learning code. Virtually all of the tools involved in model training require full access to Image Server. While I believe my organization has Image Server, I personally don't have rights to access it. I find some of the documentation in the notebook and elsewhere misleading when it says that code like this can be run in ArcGIS Pro, since it looks like none of it is really run directly by Pro. Anyway, I feared that esri was blowing smoke at the conference saying that deep learning is coming to ArcGIS, when in reality it looks like it is only available to an extremely small percentage of their users, and only an extremely small percentage of those will ever have any interest in tackling the learning curve. I estimate that 99% of the people who attended the deep learning presentations at the UC wasted their time, since they will never have the rights to use deep learning the way esri is currently deploying it.

My frustration is that while the potential of deep learning is obvious to me, I didn't get enough information at the UC, and can't get enough information from esri web articles, to even intelligently talk to my organization about all of the licenses and server setups we would really need to even attempt to use it. I feel like there is no way for me to know whether what we already have is enough, or whether we can justify the cost of upgrading our enterprise agreement and systems to make deep learning even possible.
Posted 07-26-2019 01:51 PM

POST
After doing some searching, it looks like ArcPy handles local data and arcgis.GIS handles web map data, and the two don't really mix. So while I figured out that I can load a local layer into a Jupyter notebook by using arcpy.MakeRasterLayer_management, I can't display it in a web map. It looks like Spatially Enabled DataFrames can handle both local and online data together, but they seem to be more of an environment for manipulating feature class data in tabular format than for visualizing rasters on a map. Anyway, it does look like the code would have to undergo a major rewrite to work with a mixture of local and online data, if that is even possible. I also tried to use an anonymous login to ArcGIS Online, but that only caused more errors. The security issue seems to be that the data is housed in an online location that requires HTTPS, and my organization's setup is not compliant with the SSL certificate protocols required to access the data. It is a little frustrating that they didn't just post this data like all the other esri services that I can access through Portal. My best bet may be to publish the image data I downloaded from the Conservancy as an Image Service through my organization, although I am not completely sure even that will work.
Posted 07-25-2019 09:01 PM

POST
I contacted esri tech support and they determined that the image is published, but it is being published under an organization operated by a division of esri rather than under the normal main esri organization, and my organization's security doesn't recognize that division and is blocking me from accessing the data. So I will have to talk to my AGOL administrator to see if he either can grant me rights to see data published by that esri division or can move the data as a service under my organization. It appears that the notebook is expecting an image service and the tech person did not think it can be served from my local machine without a substantial rewrite of the code, but she said she would talk to some of the Python specialists to see if they can suggest any options. Anyway, I will post back if I make any progress. I also will see if any of the other deep learning notebooks has a different way of setting up the training data that might work for me.
Posted 07-25-2019 12:25 PM

POST
Dan wants you to show examples (pictures) of results you got using any method you chose where you think the results are wrong. We won't make the judgment of what is right or wrong using any method, since our level of tolerance for error and yours could be very different, and also, as the complexity of this problem increases, there could be multiple solutions that are technically right that you might reject as wrong. Also, showing examples may lead to methods designed specifically for those special situations, if there are enough of them to make it worth the effort.
Posted 07-25-2019 07:16 AM

POST
The GitHub link you referenced only has the Jupyter notebook as far as I can see, not any image file. I did change directories to the location of the notebook I downloaded before launching it. There is no image in that directory. There was a data subdirectory, but the only image there is called percipitation.tif, without any classification labeling. I searched all downloaded subdirectories for a file name containing the word Kent and found nothing. Anyway, I have downloaded the Kent classified image from the Chesapeake Conservancy land cover project, and it looks like the image shown by the notebook. However, the way the code is written, it is looking at ArcGIS Online, not the local directory where the notebook is located. Anyway, I would think there would be a way to create a layer from a file on my local hard drive; I just am not having success searching the ArcGIS Pro Python documentation for it.
Posted 07-24-2019 07:16 PM

POST
I downloaded a set of sample Jupyter notebooks from esri at https://developers.arcgis.com/python/sample-notebooks/. One of the notebooks is called land_cover_classification_using_unet, which is supposed to showcase an end-to-end land cover classification workflow using the ArcGIS API for Python. The workflow consists of three major steps: (1) extract training data, (2) train a deep learning image segmentation model, (3) deploy the model for inference and create maps. I am having trouble running the notebook, and so far have only gotten the first two lines of code to work, which just create a connection to ArcGIS Online. The third and fourth lines of code are supposed to access a labeled image to train the model, but I get an error that the index value is out of range no matter what index value I use, which basically means the image was not found.

    label_layer = gis.content.search("Kent_county_full_label_land_cover")[1] # the index might change
    label_layer
    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    <ipython-input-29-a4ac34d0306c> in <module>
    ----> 1 label_layer = gis.content.search("Kent_county_full_label_land_cover")[1] # the index might change
          2 label_layer
    IndexError: list index out of range

I downloaded the original classified image for Kent County in Delaware from the Chesapeake Conservancy land cover project. It looks the same, although I am not completely sure it matches the extent or classifications of the training image the notebook was supposed to use. How do I change the code to use the image I downloaded and saved on my computer rather than the image from ArcGIS Online? I will probably be asking more questions as I progress through the code, since it seems likely I will hit other problems. I am hoping first to complete the notebook example covering the Delaware region and afterward adapt it to process the NAIP imagery for my jurisdiction.
Posted 07-24-2019 01:45 PM

POST
I agree with Joshua that the Minimum Bounding Geometry tool may be a good starting point for analyzing your parcels, since you could compare the area of the parcel to the area of the minimum bounding geometry to separate parcels that are basically rectangular from parcels that are not. Most likely it will be much easier to develop a process that can be successfully validated for parcels that are rectangular than for parcels that are more complex. Separating parcels into different classes like rectangular and non-rectangular is critical to being able to build up a multi-tiered automated process that might ultimately be capable of handling the majority of parcels correctly.
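The area-ratio comparison described above can be sketched in a few lines (plain Python; the 0.95 cutoff is purely illustrative and would need tuning against real parcels):

```python
def is_rectangular(parcel_area, mbg_area, tol=0.95):
    """Classify a parcel as basically rectangular when its area fills
    at least tol of its minimum bounding rectangle's area.

    parcel_area and mbg_area would come from the parcel polygon and the
    Minimum Bounding Geometry (rectangle-by-area) output respectively.
    """
    if mbg_area <= 0:
        return False
    return parcel_area / mbg_area >= tol
```

A perfectly rectangular parcel has a ratio of 1.0; flag lots, pie-shaped cul-de-sac lots, and other complex shapes fall well below the cutoff and would be routed to a different tier of processing.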
Posted 07-19-2019 10:09 AM

POST
No code exists, except for the ideal situation. While I can imagine some other methods, there is no way to know for sure what the parcel shape is without deep learning AI that I don't know how to write. All I could do is solve the problem for the ideal situation, then come up with a way to determine whether it worked, and try another set of geoprocessing steps for the parcels that failed. That would probably involve weeks of trial and error to come up with a 95% to 99% solution. I don't have the time to do that.

Python is not the issue. The issue is coming up with the sequence of geoprocessing steps and validation checks at each stage to identify the parcels that passed and failed the previous steps, and then coming up with an alternative set of geoprocessing steps that fits the majority of the parcels that failed. No one knows all of those steps and validation checks for this problem. Python only makes it easier to apply the steps in the right order in a repeatable manner once you figure them out; it cannot figure out what those steps are for you (without AI).

AI will only get to a 90% to 99% solution at best as well, although the benefit is that it does the learning process and applies the solutions much quicker, if you can come up with a way to train it. However, no set of automated logical solutions will ever reach 100%. 100% can only be achieved by coming up with some way to identify candidates that you suspect were not solved by the steps you have already applied, then inspecting and manually solving each one, and even then some things won't get caught.
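The tiered process described above (apply steps, validate, route failures to the next set of steps, inspect whatever survives) can be expressed generically. A sketch, with the solver and validator functions left as placeholders for actual geoprocessing:

```python
def run_tiers(items, tiers):
    """Run items through successive (solver, validator) tiers.

    Each tier's solver proposes a result; items whose result passes that
    tier's validator are done, and the rest fall through to the next
    tier.  Whatever survives every tier comes back for manual review.
    """
    solved = {}
    remaining = list(items)
    for solver, validator in tiers:
        still_failing = []
        for item in remaining:
            result = solver(item)
            if validator(item, result):
                solved[item] = result
            else:
                still_failing.append(item)
        remaining = still_failing
    return solved, remaining
```

The returned remaining list is exactly the manual-inspection queue: the candidates no automated tier could validate.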
Posted 07-16-2019 08:18 AM

POST
Based on your map example, it is an ideal situation where the parcel is rectangular. If every parcel were rectangular and at an exact tangent angle to the street, the easiest method would be to create linear referenced routes from your centerlines. Then use Feature To Point with the Inside option to extract the centroid of the parcel. Then you can use the Locate Features Along Routes tool to get the measure and distance of the point along the centerline with an adequate search radius.

I usually get all events for all centerlines within the radius and eliminate events whose address street name does not match the centerline name. I summarize the points on the parcel field to get a count of events for each parcel, to determine whether more than one event was created along a curve in the centerline, since technically all points along a radial curve tangent to the centroid are valid events, and I manually eliminate all but one of those points.

An event layer can be created that displays an angle field, and that layer can be exported to a real point feature class that stores the angle values permanently. From the angle you can determine the compass bearing of the tangent line from the parcel centroid to the centerline. You can also tell whether the parcel is on the left- or right-hand side of the centerline, because the distance values are positive or negative depending on which side of the line the point is on. You can join the point to the parcel using the parcel number and calculate over the angle or compass direction.
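The angle-to-bearing conversion at the end can be sketched as follows (plain Python, function names mine; note the side-of-line sign convention is my reading of typical Locate Features Along Routes output and should be verified against the tool's documentation for your route direction):

```python
import math

def compass_bearing(dx, dy):
    """Compass bearing in degrees clockwise from north for the vector
    (dx, dy), assuming a map coordinate system where +y points north."""
    return math.degrees(math.atan2(dx, dy)) % 360.0

def side_of_centerline(offset_distance):
    """Interpret the signed offset distance from the event table.

    Assumed convention: negative = left of route direction,
    positive = right; verify against your routes' digitized direction."""
    if offset_distance == 0:
        return "on line"
    return "right" if offset_distance > 0 else "left"
```

compass_bearing uses atan2(dx, dy) rather than the math-convention atan2(dy, dx) so that north is 0, east is 90, and values increase clockwise, matching compass directions.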
Posted 07-16-2019 01:59 AM

BLOG
Ultimately I did not use this approach, since I didn't find a way to make it work over really large areas, because the quality of the footprints is resolution dependent. Also, at that time Google only had footprints covering one or two of the 29 cities in my county and nothing in the unincorporated areas. However, I have been able to get building footprints for free from Microsoft through GitHub. They used deep learning AI to create the footprints from 2014/2015 aerials for the entire US (and it looks like they may now have done Canada as well, based on Google search results for Microsoft Building Footprints GitHub). The JSON files with the footprints did not import with the standard JSON import process of desktop, partially because the JSON format is not standard and partially because the California file was over 2 GB, so one of our administrators used FME to do the import. The quality of the footprints is very good. Unfortunately the documentation is limited, I have not seen anybody figure out how to access and apply the model to their own aerials, and Microsoft may never offer updates. However, being able to get over 800K footprints for my county for free has been a game changer for me.
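For the oversized JSON problem, one stdlib-only workaround is to split the FeatureCollection into smaller pieces before import. A sketch (function and file names are mine; it loads the whole file once, so it suits files that fit in RAM, and a streaming parser such as ijson would be needed for the truly multi-gigabyte state files):

```python
import json

def split_feature_collection(src_path, out_prefix, chunk_size=50000):
    """Split a GeoJSON FeatureCollection into smaller collections so
    each piece stays within desktop import limits."""
    with open(src_path) as f:
        fc = json.load(f)
    feats = fc["features"]
    paths = []
    for i in range(0, len(feats), chunk_size):
        part = {"type": "FeatureCollection",
                "features": feats[i:i + chunk_size]}
        path = f"{out_prefix}_{i // chunk_size:04d}.geojson"
        with open(path, "w") as out:
            json.dump(part, out)
        paths.append(path)
    return paths
```

Each output file is itself a valid FeatureCollection, so the standard JSON To Features import can be run on the pieces and the results merged afterward.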
Posted 07-03-2019 08:22 AM

BLOG
The code was designed to compile a list of values, but it can be adjusted to keep only a single value based on the maximum from a set of values. Basically, lines 26 and 30 in your code would change. You would have to change line 26 to an elif statement that compares the date currently stored in the dictionary with the date of the relate row being processed and, if the row's date is greater, replace the value in the dictionary rather than append to it. Something like:

    elif relateDict[relateKey][0][1] < relateRow[1]:
        # the relate key is already in the dictionary
        # and the current row's date is greater than the stored date,
        # so replace the value associated with the key
        relateDict[relateKey] = [relateRow[0:]]

I did not test the code, so I am not entirely sure I got the dictionary indexing right, and I am not fully revamping the code to gain greater efficiency by eliminating the list-within-a-list structure that is no longer really needed for the dictionary value, but this should give you the idea of how to adapt the logic to your needs.
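Here is the same replace-on-newer idea as a small self-contained function that can actually be run (the (key, date, value) row convention is my own simplification; the list-within-a-list value shape from the thread is preserved):

```python
def keep_latest(rows):
    """Build a dictionary that keeps, per key, only the entry with the
    greatest date, mirroring the replace-on-newer elif above.

    rows are (key, date, value) tuples; values keep the
    list-within-a-list shape used in the original dictionary code."""
    relateDict = {}
    for key, date, value in rows:
        if key not in relateDict:
            relateDict[key] = [[date, value]]
        elif relateDict[key][0][0] < date:
            # stored date is older than this row's date, so replace
            relateDict[key] = [[date, value]]
    return relateDict
```

Rows arriving out of date order are handled correctly, because every row is compared against whatever is currently stored rather than the last row seen.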
Posted 06-13-2019 04:54 PM

POST
There are a lot more real world zoning configurations where the largest zone is not in the center than you may realize, such as with natural feature zoning where a thin band of a water course zone happens to go through the exact center of a parcel, but overall that zone covers just a small portion of the parcel. Or in oddly shaped parcels like flag lots where the centroid does not fall in the parcel itself and the label point may be in the driveway, not in the heart of the parcel. The archery target example was just easy to describe and visualize as a situation where the centroid is not the best place to find the largest area. In any case, my method works regardless of how the zones are laid out in the parcel.
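For reference, the core of this method (pick the zone covering the largest intersected area, ignoring centroids and label points entirely) reduces to a few lines once the zoning/parcel intersection pieces are in hand (plain Python sketch, names mine):

```python
def dominant_zone(pieces):
    """Given (zone, area) pieces from intersecting the zoning layer with
    a single parcel, return the zone covering the largest total area.

    Works regardless of where the centroid or label point falls, e.g. a
    thin water-course band through the center or a flag lot's driveway."""
    totals = {}
    for zone, area in pieces:
        totals[zone] = totals.get(zone, 0.0) + area
    return max(totals, key=totals.get)
```

Summing before taking the maximum matters: a zone split into several slivers by the parcel's shape still wins if its combined area is largest.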
Posted 05-10-2019 11:24 AM