Hydrographic Applications of AI and Deep Learning

06-29-2020 03:33 PM

Join Esri in an AI Deep Learning Maritime seminar on Artificial Intelligence and Deep Learning in ArcGIS. This webinar covers how to use and automate machine-based object detection using convolutional neural networks in ArcGIS to solve real-world problems.

You can view the presentation slides here: https://bit.ly/3dJ5w46 

Below are additional materials from the webinar's Q&A session, answered by our presenters, Madhu Hosuru and David Yu:

Q1: When you say to collect ''as many as possible'' for training data, are you suggesting that many analysts collect from the same data set? What do you suggest, and on what does your answer depend?


  • Yes, but labels shouldn't be duplicated. You can also use a third-party labelling service, or take a crowdsourcing approach in ArcGIS Online by giving users the ability to edit labels in a common feature service.


Q2: Any suggestions on how to collect training data, i.e. how to collect 'as many as possible'?


  • Best practice is to collect labels from a diverse set of images (i.e. from different locations) and to train and predict with the model iteratively. For simple features such as shipwrecks, no more than 500 labels are needed to achieve very good accuracy.


Q3: The presentation has only shown GeoAI applied against static features. How can GeoAI be applied to temporal objects that change or move, such as cargo ships, or to determining tidal ranges?


  • Typically, inferencing against streaming data can be broken down into detections against static frames. Tracking on top of those detections is a separate workflow that can be achieved with libraries such as SORT (Simple Online and Realtime Tracking).
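As a rough illustration of the idea (not the SORT library itself, which adds Kalman filtering and the Hungarian algorithm), per-frame detections can be linked into tracks by matching boxes between frames on intersection-over-union. The function names and data below are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_frame(tracks, detections, threshold=0.3):
    """Greedily assign each new detection to the best-overlapping track."""
    assignments = {}
    for d_idx, det in enumerate(detections):
        best_t, best_iou = None, threshold
        for t_id, box in tracks.items():
            if t_id in assignments.values():
                continue  # each track claims at most one detection
            score = iou(box, det)
            if score > best_iou:
                best_t, best_iou = t_id, score
        if best_t is not None:
            assignments[d_idx] = best_t
            tracks[best_t] = det  # update the track with the new position
    return assignments

# A ship moving slightly between two frames keeps its track id:
tracks = {0: (10, 10, 30, 30)}
print(match_frame(tracks, [(12, 11, 32, 31)]))  # {0: 0}
```

Detections that match no existing track would start new tracks, and tracks unmatched for several frames would be retired; SORT handles both cases more robustly than this greedy sketch.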


Q4: Are the outputs' confidence and validity a function of the raster data resolution? That is, does low resolution give low confidence/accuracy, and higher resolution higher confidence/accuracy?


  • It also depends on the resolution the model is trained against. Training and prediction rasters should have the same resolution. Confidence is more a function of the number of training samples.


Q5: Is ArcGIS a Python library?


Q6: Could this be applied to multibeam backscatter image classification, e.g. extraction of seabed features? How successful would this be on greyscale imagery?


  • If your input data can be rasterized and your positive features can be visually identified then it is generally a sign that it can be done with deep learning. You will be able to input single channel imagery as well.


Q7: Can we request Esri's professional services in other regions, such as Asia, instead of going through a regional distributor?


  • Please contact your regional distributor. Depending on your need they will either be able to provide GeoAI support or coordinate with Esri Inc. for professional services support.


Q8: Can GeoAI deep learning be applied to a digital elevation model (DEM), a digital surface model (DSM), or other GIS models?


  • Yes, as long as data can be rasterized, it can be fed into a DL model.


Q9: Is it possible to detect traffic volume using this technique?


Q10: How can false positive results after QA be fed back into the model to improve it?


  • It is possible to artificially inject misclassified examples into your training set through a process called hard negative mining. However, this can be avoided by selecting a balanced set of positive and negative classes as your training set, or by employing image augmentation through the prepare_data() function in the ArcGIS API for Python.
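A minimal sketch of the hard negative mining loop described above, in plain Python with hypothetical data structures (each sample is a (chip, label) pair; `model_predict` stands in for any trained classifier):

```python
def hard_negative_mining(model_predict, training_set, qa_reviewed):
    """Append QA-confirmed misclassifications back into the training set.

    model_predict: function returning a predicted label for a chip
    training_set:  list of (chip, label) pairs the model was trained on
    qa_reviewed:   list of (chip, true_label) pairs from QA of model output
    """
    hard_negatives = [
        (chip, true_label)
        for chip, true_label in qa_reviewed
        if model_predict(chip) != true_label  # misclassified after QA
    ]
    # Retraining on the enriched set teaches the model its own mistakes.
    return training_set + hard_negatives

# Toy example: a "model" that calls everything a wreck (label 1),
# so the QA-flagged rock becomes a hard negative:
predict = lambda chip: 1
train = [("chip_a", 1), ("chip_b", 0)]
qa = [("rock_1", 0), ("wreck_2", 1)]
print(len(hard_negative_mining(predict, train, qa)))  # 3
```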


Q11: Can you provide the PPT of this session?


  • Yes, PPTs and resources will be provided as a follow-up.


Q12: Can the tool (or another in conjunction with it) extract dimensional attributes of the detected targets e.g. length, width, height?


  • Not directly, but dimensions can be inferred from the resolution of your rasters.


Q13: How does GeoAI distinguish wrecks from obstructions and underwater rocks?


  • We can do what we call "hard negative mining" by deliberately including negative examples in the training set.


Q14: Is it necessary to create a buffer around your training samples?


  • Buffers around bounding boxes are not needed since during training the model takes into account the wider context.


Q15: I have a question about Jupyter. Does it use the GPU automatically for training the model? (Assuming that the GPU is best equipped for training.)


  • Yes, as long as GPUs are available on your machine, training will automatically be carried out on GPU. For inference you can also specify CPU/GPU as an argument.
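In the underlying PyTorch stack, that auto-selection boils down to a check like the following sketch (an illustration, not Esri code; the import is guarded so it also runs where PyTorch is absent):

```python
import importlib.util

def pick_device():
    """Return 'cuda' when PyTorch can see a usable GPU, else 'cpu'."""
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"

print(pick_device())  # 'cuda' on a GPU machine, otherwise 'cpu'
```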


Q16: In what part of training does the model create buffers around your training data?


  • Buffers are not used at all in training. Instead, during the forward pass of a convolutional neural network, a series of image filters are used to sample information from the entire image. That is to say, every location in your input contributes a little bit to your prediction. As for the polygon output, how much buffer is generated is a learnt parameter depending on your input polygon buffer size.


Q17: What licensing options are required, and are the deep learning tools available in Pro versions before 2.6?


  • The Image Analyst extension will be required. However, we do not recommend using Pro versions lower than 2.5, since many of the deep learning tools are a recent addition.


Q18: How much training data was used, i.e. how many image chips were exported?


  • We exported around 86,000 image chips (with cell size values ranging from 0.1 to 0.9, and with Stride X and Stride Y values of 50).
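To see how the stride setting drives chip counts, here is standard sliding-window arithmetic (a back-of-the-envelope sketch, not the exact behaviour of the Export Training Data For Deep Learning tool, which also handles edge padding):

```python
def chip_count(raster_size, chip_size, stride):
    """Number of chips a window of chip_size pixels, advanced by
    stride pixels, cuts from one raster dimension (edges ignored)."""
    if raster_size < chip_size:
        return 0
    return (raster_size - chip_size) // stride + 1

# A 1000-pixel-square raster, 256-pixel chips, stride 50 in x and y:
per_row = chip_count(1000, 256, 50)
print(per_row, per_row * chip_count(1000, 256, 50))  # 15 225
```

A small stride relative to the chip size means heavily overlapping chips, which is how a modest survey area can yield tens of thousands of training chips.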


Q19: Do you provide any pre-trained models for fine-tuning?



Q20: Is it possible to subsequently execute this model on newly acquired imagery?


  • Yes. The newly acquired imagery should be at the same resolution as the imagery the model was trained on.


Q21: Will the shown Jupyter notebook be available?



Q22: Is there a reason this example is done in Jupyter instead of ArcGIS notebooks?


  • Yes, to show that the ArcGIS API for Python can integrate with third-party tools. Also, the MaskRCNN model notebook will be available in Pro 2.6.


Q23: Can AI reliably detect gradients from imagery, instead of objects?


  • Yes: apply Sobel filters to the raw input, then pass the output layer into the model.
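A minimal pure-Python Sobel pass over a toy grid, illustrating how a gradient layer can be derived before it is fed to the model (hypothetical data, not webinar code; in practice you would use a raster function or NumPy):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel, r, c):
    """Apply a 3x3 kernel centred on pixel (r, c)."""
    return sum(
        kernel[i][j] * img[r - 1 + i][c - 1 + j]
        for i in range(3) for j in range(3)
    )

def sobel_magnitude(img):
    """Gradient magnitude for the interior pixels of a 2D grid."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = convolve3x3(img, SOBEL_X, r, c)
            gy = convolve3x3(img, SOBEL_Y, r, c)
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical edge (values jump from 0 to 10) yields a strong response:
grid = [[0, 0, 10, 10]] * 4
edges = sobel_magnitude(grid)
print(edges[1])  # [0.0, 40.0, 40.0, 0.0]
```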


Q24: Does the training data include the BAG layer or just the aspect?


  • Yes, you can train on the BAG, but we found that better accuracy can be achieved using aspect.


Q25: Which wreck position is taken into S-57, and which depth value is selected?


  • We selected the centroid of the polygon as the wreck position when updating the S-57 data. In this webinar we did not use the depth value, but raster analysis geoprocessing tools can obtain the shallowest pixel value within a polygon.
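The centroid step can be sketched with the standard shoelace (area-weighted) formula; the coordinates below are hypothetical, and in practice you would use the ArcGIS geometry engine rather than hand-rolled code:

```python
def polygon_centroid(pts):
    """Area-weighted centroid of a simple polygon given as (x, y) vertices."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # shoelace term for this edge
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

# A 2 x 2 square wreck footprint centred on (1, 1):
print(polygon_centroid([(0, 0), (2, 0), (2, 2), (0, 2)]))  # (1.0, 1.0)
```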