I received very large OSGB data, about 600 GB, from the Institute of Surveying and Mapping, and I am converting it with the Create Integrated Mesh Scene Layer Package tool in Pro 2.5. But when the tool is about halfway through (which may take many hours or several days), Pro crashes, either because of the quality of the OSGB data or because of unexpected circumstances such as power loss or the application being closed by mistake.
Then I check and repair the OSGB data, and I want the tool to resume from its last result rather than start from scratch. I know Pro has no such function in the current version, but could it be added in a future version? This capability is very important when dealing with very large OSGB data, because the tool may run for a very long time before it fails. Thank you!
Supposedly fixed in 2.3, if this is the error:
BUG-000117586: ArcGIS Pro crashes when running the Create Integrate..
There are others; just sort from Newest to Oldest.
Thanks for the bug references. The user wants to resume the tool from his last result, because it may have run for many hours or several days; if he must run it again from scratch, he finds that inefficient and unacceptable. Thanks again.
Tech Support would be my suggestion; there may be underlying issues behind the failure, making restarting the process a secondary concern.
Yes, it may be the quality of the OSGB data, but the user wants to process only the remaining OSGB data after he repairs it, and doesn't want to run all the OSGB data again. Thank you.
I think we need to get to the bottom of why this is crashing ArcGIS Pro. We cannot pick up a half-baked scene layer package after the tool has already failed. Is the user inputting the folder of OSGB files, or the root OSGB file? My recommendation would be to input the root OSGB file, as the tool will then read the child nodes from that root file; if you input the entire folder, the tool will attempt to read every OSGB file in that directory. We have seen cases where there are OSGB files in a folder that should not be there, so please try inputting the root file and let me know if that resolves the crash.
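If it helps to locate the root files programmatically, here is a minimal sketch. It assumes the common ContextCapture-style layout, where each tile folder contains exactly one root .osgb named after the folder itself (e.g. Tile_+000_+000/Tile_+000_+000.osgb) and the other .osgb files in the folder are child LOD nodes referenced from that root; that naming convention is an assumption, not something the tool requires.

```python
import os

def find_root_osgb(tile_dir):
    """Return the path of the root .osgb in a tile folder, or None.

    Assumes the root node of each tile is an .osgb named exactly after
    its folder (e.g. Tile_+000_+000/Tile_+000_+000.osgb); all other
    .osgb files in the folder are child LOD nodes.
    """
    name = os.path.basename(os.path.normpath(tile_dir))
    candidate = os.path.join(tile_dir, name + ".osgb")
    return candidate if os.path.isfile(candidate) else None
```

Collecting `find_root_osgb` over every tile subfolder gives the list of root files to feed the tool, instead of the whole directory.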
Thanks for your recommendation; it works when I convert a small set of OSGB files that previously crashed Pro. I will tell the user to try this, and I will update you when he answers me.
Thanks so much
The user converted about 10 GB of OSGB data, and the tool ran successfully. But he has a question about inputting only the root OSGB files: the source data has 585 subfolders, yet only 432 of them contain a root OSGB file. If he inputs only those 432 root files, will data be lost? Or are the subfolders lacking a root OSGB file all bad data that don't need to be included in the input dataset?
Additionally, the user is testing the big 320 GB OSGB dataset; when he finishes, I will update you. Thanks so much, your recommendation is very helpful!
Glad to hear it worked! It's hard to say whether there's lost data or not; they're going to have to analyze the data to understand whether it's all there. I've seen cases where OSGB files from a different dataset somehow got put in that directory, so that could be the case here. What I would recommend is viewing some of those OSGB files in a viewer (e.g. Acute3D Viewer) to see if they match. They can also write the OSGB file out to an ASCII file (osgLab utility) to see if the nodes are referenced properly within the file.
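Before opening files in a viewer, a quick script can list which subfolders contain .osgb files but no root file, so only those need inspecting. This is a sketch under the same assumption as before (the root file is named after its folder), which may not hold for every OSGB export:

```python
import os

def subfolders_missing_root(data_dir):
    """List tile subfolders that contain .osgb files but no root file
    named after the folder, so they can be inspected before deciding
    whether they belong to the dataset."""
    missing = []
    for entry in sorted(os.listdir(data_dir)):
        sub = os.path.join(data_dir, entry)
        if not os.path.isdir(sub):
            continue
        has_osgb = any(f.lower().endswith(".osgb") for f in os.listdir(sub))
        root = os.path.join(sub, entry + ".osgb")
        if has_osgb and not os.path.isfile(root):
            missing.append(entry)
    return missing
```

In the 585-subfolders case described above, this would be expected to report the 153 folders without a root file; whether those folders are stray data or orphaned child nodes still has to be judged by eye.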
Thanks for your recommendation again; it is very valuable. I will test these OSGB files.
The user has answered me that the tool failed on the 320 GB OSGB data, with the error shown in the picture below:
The output .slpk file had already reached 450 GB when the tool failed. Do you know the possible reasons for this error, or is there another way to process this OSGB data successfully? I'm grateful for your help.
Unfortunately, with that error message it could be anything: the process being stopped prematurely by the machine going to sleep, the D drive running out of space, etc. It also doesn't seem feasible to get that much data to try to reproduce the issue on our end, so the only things I can recommend are:
1. Run the tool again to see if it fails at the exact same time.
2. Try on a different machine.
3. Split up the job so that you have multiple .slpk files instead of one large .slpk.
Let me know if any of those options are feasible.
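For option 3, one way to split the job is to batch the root files and package each batch into its own, smaller .slpk. The batching helper below is a plain-Python sketch; the arcpy call is left commented out because its exact parameters (and whether a list of root files is accepted as input) should be checked against the Create Integrated Mesh Scene Layer Package documentation for your Pro version.

```python
def batch_root_files(root_files, batch_size):
    """Split the list of root .osgb files into fixed-size batches so
    each batch can be packaged into its own, smaller .slpk."""
    return [root_files[i:i + batch_size]
            for i in range(0, len(root_files), batch_size)]

# Sketch of driving the tool per batch (hypothetical parameter values;
# verify against the tool's documentation before use):
# import arcpy
# for n, batch in enumerate(batch_root_files(all_roots, 50)):
#     arcpy.management.CreateIntegratedMeshSceneLayerPackage(
#         batch, r"D:\out\mesh_{0:03d}.slpk".format(n))
```

A failure then costs only one batch's worth of processing time instead of the whole job.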