Dear readers,
One of the tasks we are trying to accomplish is serving lidar DEM (raster) data stored on our servers; the lidar was collected across many states. LERC compression in MRF was a suggested path, and we are in the process of evaluating it with a large sample dataset.
We are planning to do the following; please feel free to offer suggestions or recommend changes to make the best use of our available resources.
Objective: evaluate and confirm that lidar data processed into LERC/MRF with OptimizeRasters would be efficient to store and transmit.
1. Based on our reading, a tolerance of 10 cm of allowable error should yield a size reduction of 40% or more. Will LERC compression consume extra processing resources and add time while serving out the data?
2. Is 10 cm too much error for 1 m lidar data? Will it introduce noticeable errors?
3a. Two of our dev servers run ArcGIS Server based services and ArcCatalog toolbox functions; does the OptimizeRasters toolbox take a lot of processing and memory resources?
3b. When I selected a whole folder of 1 m lidar data, OptimizeRasters failed halfway through processing, and restarting a new instance locks up the machine. Is there a batch-processing technique for the OptimizeRasters toolbox?
4. After creating MRF/LERC DEMs with OptimizeRasters, we plan to create two sets of services exposing the same data, one optimized and one serving the raw data, to confirm that we can continue to use the OptimizeRasters module from Esri (JPL, NASA, GDAL, etc.).
5. For 2 m lidar data, a 10 cm error tolerance to reduce file size should be fine; for 1 m lidar data, any suggestions on the tolerance for optimum storage?
6. There have been statements questioning whether LERC/MRF is actually more efficient than unprocessed data, given the compression and decompression that must be done. Any informed opinion would help us choose our path.
7. Size of data: our current test data is on the order of 10.1 TB of 1 m lidar data for a few counties. We have users across the nation and are actively involved with other federal, state, and local agencies collecting data, so 10 TB may end up being a fraction of the total data size.
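To make the tolerance questions (1, 2, and 5) concrete, here is a small sketch of the idea behind LERC's MaxZError guarantee. This is not the LERC codec itself, just a simplified quantization illustration with made-up elevation values: snapping elevations to a grid of spacing 2 × tolerance bounds the round-trip error by the tolerance, which is what lets the encoder spend fewer bits per cell.

```python
# Simplified illustration of LERC-style lossy quantization at a
# 10 cm tolerance. Hypothetical values; not the actual LERC codec.
TOLERANCE = 0.10  # metres (LERC's MaxZError)

def quantize(values, tol):
    """Snap each elevation to a grid of spacing 2*tol, so the
    round-trip error is at most tol."""
    step = 2 * tol
    return [round(v / step) * step for v in values]

elevations = [101.237, 101.341, 101.298, 101.455]  # sample 1 m DEM cells
restored = quantize(elevations, TOLERANCE)
max_err = max(abs(a - b) for a, b in zip(elevations, restored))
assert max_err <= TOLERANCE  # error never exceeds the tolerance
```

With a 0.2 m grid there are far fewer distinct values to encode than in raw 32-bit floats, which is where the 40%+ size reduction comes from; the trade-off question is whether 10 cm is acceptable against the vertical accuracy of the source lidar.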
Given the large amount of data we need to share, we decided to reach out for ideas.
Thank you for taking the time to read through the entire post.
Regards,
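For question 3b above, the kind of batch driver we are considering is sketched below: process one county sub-folder at a time so a failure only costs that batch, and leave a marker so a re-run resumes instead of restarting from scratch. The folder paths, config name, and OptimizeRasters.py flags are assumptions to be checked against the tool's actual command-line syntax.

```python
# Hypothetical resumable batch driver around OptimizeRasters.
# Paths, config file name, and CLI flags are assumptions.
import pathlib
import subprocess

def pending(src, dst):
    """Sub-folders of src that have no .done marker under dst."""
    return [p.name for p in sorted(src.iterdir())
            if p.is_dir() and not (dst / p.name / ".done").exists()]

def run_batches(src, dst):
    for county in pending(src, dst):
        out_dir = dst / county
        # Exact flags are an assumption; see the OptimizeRasters README.
        cmd = ["python", "OptimizeRasters.py",
               f"-input={src / county}", f"-output={out_dir}",
               "-config=Imagery_to_MRF_LERC.xml"]
        if subprocess.run(cmd).returncode == 0:
            out_dir.mkdir(parents=True, exist_ok=True)
            (out_dir / ".done").touch()  # mark batch complete for resume
```

Calling run_batches() again after a crash would skip already-marked counties, so a locked-up machine only loses the batch in flight.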
Ravi Kaushika