POST
OK, I reply to myself: there seems to be a problem when you specify the GPU number in Pro while the environment variable CUDA_VISIBLE_DEVICES is set.

Some time ago I used the Viewshed2 command, which uses CUDA processing to speed up visibility calculations with the full geodetic solution. To harness the power of one of my GPUs, I set the environment variable CUDA_VISIBLE_DEVICES=0. This variable seems to override whatever GPU you indicate in the Pro GUI, so if you have several GPUs on your system, don't forget to always leave the GPU number in Pro as 0 and use the CUDA_VISIBLE_DEVICES variable to point to the GPU you actually want to use. At least this has solved the issue for me.

Before this, I double-checked that my CUDA drivers were properly installed, that the bandwidth to my GPU was healthy, and so on. This is complex, but it works.
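A minimal sketch of the workaround described above, assuming the variable is set before any CUDA-based library initializes in the process (setting it after CUDA initialization has no effect):

```python
import os

# Expose only the second physical GPU (index 1) to this process.
# CUDA renumbers visible devices starting from 0, so inside the
# process this GPU appears as device 0 -- which is why the GPU ID
# in the Pro GUI should stay at 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints "1"
```

The same value can also be set system-wide in the Windows environment variables dialog, which is what applies when Pro launches its processing jobs.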
12-01-2020
11:18 AM
POST
Hi everyone. I have 2 GPUs on my system: #0, configured for display, and #1, a high-end NVIDIA Titan RTX for AI processing. I am running ArcGIS Pro 2.6 with the Deep Learning frameworks installed using the installer provided by Esri here: https://github.com/Esri/deep-learning-frameworks/

I can train a model and detect objects using GPU #0, so it works with the Windows default GPU. That said, when I ask Pro to detect objects using GPU #1, it says:

ERROR 999999: Something unexpected caused the tool to fail. Contact Esri Technical Support (http://esriurl.com/support) to Report a Bug, and refer to the error help for potential solutions or workarounds. Parallel processing job timed out [Failed to generate table] Failed to execute (DetectObjectsUsingDeepLearning).

Both GPUs are configured in WDDM mode with ECC disabled. I have also tried setting the #1 Titan RTX to TCC mode instead of WDDM; with that configuration I get the same error.

Any ideas about what I should check in the configuration of the second GPU? Is there currently any limitation in the software that prevents processing on GPU instance #1 or above? Can anyone confirm that there are no limitations on running processes on other GPUs, or do I have a misconfiguration on my side? Any help will be appreciated. Best regards.
11-30-2020
10:00 AM
POST
I reply to myself: sorry, one of the XML files had errors. Once it was corrected, I could run the process and discovered a brand-new error, which I will describe in another post. Thanks.
11-28-2020
01:31 AM
POST
I am using ArcGIS Pro 2.6 and I have installed all the Deep Learning Frameworks using the integrated installer that Esri has provided for 2.6 here: https://links.esri.com/deep-learning-framework-install (it was really useful, by the way).

I created a training sample with some points and exported those features to chips using the Export Training Sample command in Pro 2.6, with the Pascal Visual Object Classes metadata format (I am trying to apply the Single Shot Detector, SSD) and a buffer radius of 2.

Then I tried to train the model and started to get error messages. I could get rid of some of them; for instance, reading similar topics in this forum I found that the regional configuration of the computer can be a source of problems, since in some regions a comma is used instead of a period for decimals, so I replaced all the commas in my XML chips with periods.

When I try to train the model now, it runs for about a second (a little longer than before), but soon another error appears:

Traceback (most recent call last):
  File "c:\program files\arcgis\pro\Resources\ArcToolbox\toolboxes\Image Analyst Tools.tbx\TrainDeepLearningModel.tool\tool.script.execute.py", line 196, in <module>
    execute()
  File "c:\program files\arcgis\pro\Resources\ArcToolbox\toolboxes\Image Analyst Tools.tbx\TrainDeepLearningModel.tool\tool.script.execute.py", line 141, in execute
    data_bunch = prepare_data(in_folder, **prepare_data_kwargs)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgis\learn\_data.py", line 817, in prepare_data
    .label_from_func(get_y_func)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\data_block.py", line 475, in _inner
    self.train = ft(*args, from_item_lists=True, **kwargs)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\data_block.py", line 299, in label_from_func
    return self._label_from_list([func(o) for o in self.items], label_cls=label_cls, **kwargs)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\data_block.py", line 299, in <listcomp>
    return self._label_from_list([func(o) for o in self.items], label_cls=label_cls, **kwargs)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgis\learn\_data.py", line 167, in _get_bbox_lbls
    return _get_bbox_classes(xmlfile, class_mapping, height_width)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgis\learn\_data.py", line 135, in _get_bbox_classes
    tree = ET.parse(xmlfile)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\xml\etree\ElementTree.py", line 1196, in parse
    tree.parse(source, parser)
  File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\xml\etree\ElementTree.py", line 597, in parse
    self._root = parser._parse_whole(source)
xml.etree.ElementTree.ParseError: XML declaration not well-formed: line 1, column 16
Failed to execute (TrainDeepLearningModel).

Any hint will be really appreciated. For example, it could be useful to have access to chips that any of you have already tested. If I could train a model with your tested chips, that would mean my errors are in the process of creating the chips and not an installation issue with the frameworks. Thanks in advance for your help.
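The traceback shows the tool dying inside `ET.parse(xmlfile)`, so at least one of the Pascal VOC label files is not well-formed XML (a decimal comma that strayed into the XML declaration itself, e.g. `version="1,0"`, would produce exactly this "declaration not well-formed" error on line 1). A small script like the following could locate the offending file; this is only a sketch, and the folder path is whatever directory holds the exported label XMLs:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def find_bad_xml(folder):
    """Return (path, error message) pairs for every .xml file
    under `folder` that fails to parse."""
    bad = []
    for xml_path in sorted(Path(folder).rglob("*.xml")):
        try:
            ET.parse(xml_path)
        except ET.ParseError as err:
            bad.append((xml_path, str(err)))
    return bad

# Example: find_bad_xml(r"C:\chips\labels") -- path is illustrative only.
```

Running this over the chip folder before training would report the same file names and line/column positions that ElementTree complains about, without having to wait for the tool to fail.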
11-27-2020
12:10 PM
POST
Yeah, I have the exact same question. I designed the same solution as you: in my intranet I create a table view of my feature class containing only the fields that I want to show, thereby converting my feature class into a non-spatial table. I upload my data to a feature service in AGOL without any warning, and I can even see the table in my feature service in AGOL. So far, so good. But when I open the table, it is empty: it has all the fields but no records. Any hint would be appreciated. P.S.: I am using Pro 2.6 to define the table view, with data from an Enterprise Geodatabase.
10-24-2020
11:49 AM
POST
Thanks for your information. We have opted to do something similar. I am attracted by the idea of using attachments, because having all the information embedded in the geodatabase is appealing, but I don't want to jeopardize the whole project for it. Syncing is now fast and stable, and I'd rather not take more risks...
07-03-2015
01:18 AM
POST
Hi, everyone. We are using Collector with our own Portal (10.3 now, upgrading to 10.3.1 in several weeks). We've sent our task force of inspectors into the field with Collector on 64 GB iPads, loaded with precached orthoimagery and other raster maps, as well as huge databases of several million parcels. The system works fine: queries are really quick, visualization is astoundingly fast, and the user experience is very good.

However, the sync process still seems weak, and this raises a question: should we be working with photos attached in the geodatabase? We have the option of attaching pictures to every parcel verified in the field (it is configured in our databases and web map, and it actually works fine), but we don't know whether this will work at scale. We will have 6 teams, each one having to visit and check about 7,000 parcels, and for each parcel they can collect 1 or 2 photos. As a result, the sync process will become heavy very quickly: every inspector will have to upload, for instance, 300 pictures to the central database, but will also have to download the 300 x 5 photos from the rest of the teams. I recommended starting without the photo-attachment function. What would you do? Do you have experience with attachments in such a complex project environment? We would appreciate any comments based on your experience; they would be very useful for us.

Further details about our systems:
- ArcGIS for Server 10.3 (will update to 10.3.1 soon)
- Portal 10.3 (will update to 10.3.1 soon)
- SQL Server 2008 R2 as the backend DBMS
- Field devices: iPad Mini 2, 64 GB, with 4G cards and encrypted VPN connections

Further details of the geographic information included in the web map:
- Vector data: 2 parcel databases of about 1 million records each (sync and query only)
- Vector data: 2 parcel databases with the selected parcels to verify (about 42,000 polygon features with update, sync, and query enabled)
- Vector data: 2 additional databases (about 800 polygon features, sync and query only)
- Raster data: 2014 aerial imagery, precached to 1:1,000 scale and transferred to the devices by cable (TPK files of 32 GB); this is the default background map
- Raster data: 2010 aerial imagery, precached to 1:4,000 scale and also transferred by cable (TPK of 8 GB)
- Raster data: 3 additional background maps precached to 1:8,000 scale (TPKs of about 2 GB)

Thanks in advance for your help.
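To put rough numbers on the sync concern, here is a back-of-the-envelope estimate using the figures above; the ~2 MB average photo size is purely my assumption:

```python
TEAMS = 6
PHOTOS_PER_SYNC = 300   # photos one inspector uploads per sync, per the figures above
AVG_PHOTO_MB = 2.0      # ASSUMED average attachment size

# What one inspector pushes to the central database.
upload_mb = PHOTOS_PER_SYNC * AVG_PHOTO_MB
# What that same inspector pulls down: the other 5 teams' photos.
download_mb = PHOTOS_PER_SYNC * (TEAMS - 1) * AVG_PHOTO_MB

print(f"upload per sync:   {upload_mb:.0f} MB")    # 600 MB
print(f"download per sync: {download_mb:.0f} MB")  # 3000 MB
```

Even under these modest assumptions, each sync moves gigabytes over a 4G VPN link, which supports the recommendation to start without attachments.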
07-01-2015
05:14 AM