Challenges to Geospatial Solution Development

Today, data are growing exponentially everywhere, becoming so large and complex that they are difficult to handle effectively with traditional computing and algorithms. This includes geospatial data (GIS, remote sensing) and (upstream) big data.

upstream-big-data.PNG

Big Data in E&P sector

 

The challenges include capture, curation, storage, search, sharing, transfer, analysis, and visualization.

 

For example, in an oilfield, thousands of production wells are equipped with sensors that capture massive amounts of information about well flowing conditions every day. Tens of terabytes of data are collected and stored each year, and they must be processed and analyzed effectively.


Similarly, in the geospatial domain, when dealing with hundreds of gigabytes to terabytes of high-resolution 'continuous' image data, most CPU-based algorithms in research and development lack the computing power to perform traditional image-processing tasks in a timely ('on-demand' or real-time) fashion, even though recent developments in GIS and remote sensing can handle real-time 'discrete' event-based data, signals, and networks (for example, ArcGIS GeoEvent and the Esri GIS Tools for Hadoop). In operation, therefore, the processing power of a typical CPU desktop workstation can become a severe bottleneck when viewing and enhancing high-resolution image data: color balancing, bundle adjustment, real-time change detection, oil-spill detection, and so on.


All those data applications require timely responses for swift decisions, which depend on real-time or near-real-time algorithm analysis and imagery processing (such as color balancing) in advanced IT environments (e.g., GPU cluster computing, cloud computing, MapReduce-enabled applications).


These on-demand systems and applications can greatly benefit from high-performance computing techniques and practices to speed up data processing and visualization, either after the data have been collected and transmitted to a ground station on Earth, or in real time during data collection onboard the sensor.
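As a small illustration (my sketch, not from the original post; it assumes the CuPy library and an NVIDIA GPU), a percentile contrast stretch, one building block of color balancing, can be offloaded to the GPU with an almost NumPy-identical API:

```python
import numpy as np
import cupy as cp  # GPU drop-in for most of the NumPy API

def gpu_percentile_stretch(band: np.ndarray, lo_pct=2.0, hi_pct=98.0):
    """Stretch one image band between two percentiles, computed on the GPU."""
    g = cp.asarray(band)                           # host -> device copy
    lo, hi = cp.percentile(g, (lo_pct, hi_pct))    # GPU percentile scan
    out = cp.clip((g - lo) / (hi - lo), 0.0, 1.0)  # rescale and clamp
    return cp.asnumpy(out)                         # device -> host copy
```

For the multi-gigabyte scenes discussed above, the same pattern would be applied tile by tile so that each tile fits in GPU memory.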

 

Parallel and distributed computing facilities and algorithms, as well as high-performance FPGA and DSP systems, have become indispensable tools for processing massive streams of GIS, remote sensing, and upstream big data (below). To use those facilities and algorithms properly, some solution vendors offer tools such as SAS® Grid Manager (SAS Grid Manager | SAS).


In recent years, GPUs have evolved into highly parallel many-core processors whose tremendous computing power and high memory bandwidth offer two to three orders of magnitude speedup over CPUs. A cost-effective GPU computer has become an affordable alternative to an expensive CPU cluster for many engineers and researchers running engineering and scientific applications. Comparison of Laptop Graphics Cards - NotebookCheck.net Tech

 

In operation, many advanced high-performance big data algorithms in SAS and R have been applied successfully to oil-production data analytics. However, research and solution development in the remote sensing community still face challenges in meeting operational requirements for real-time or MapReduce-enabled applications, especially automated color balancing and bundle adjustment, as well as real-time detection of changes and oil spills.

 

In practice, we should use effective tools to monitor, manage, and diagnose the performance of 'high availability' computing, especially when it behaves as a single computing system, as in parallel processing and (load-balanced) cluster computing. As is well known, these high-performance architectures usually present themselves as a single computing system comprising N (CPU-GPU) nodes, shared memory, and/or virtual machines, which makes them behave quite differently from grid computing and cloud computing.

 

For example, the simplest task is to monitor how well (or poorly) the geoprocessing tools in ArcGIS 10.3 and Pro 1.0 use multiple cores and processors.
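One low-tech way to watch this (a sketch assuming the psutil package; the PID and sampling window are placeholders) is to sample per-core utilization while a geoprocessing tool runs; a single-threaded tool will pin one core near 100% while the rest stay idle:

```python
import psutil

proc = psutil.Process(1234)   # placeholder PID of the geoprocessing process

for _ in range(30):           # sample once a second for ~30 s
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for c in per_core if c > 50.0)
    print(f"process CPU: {proc.cpu_percent():6.1f}%   "
          f"cores >50% busy: {busy}/{len(per_core)}")
```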

 

 

+++++++++++++++++


Linux and CUDA-enabled GPU Computing


CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).


Refer to Getting Started Linux :: CUDA Toolkit Documentation

 

Windows and CUDA-enabled GPU Computing

 

Refer to Getting Started Windows :: CUDA Toolkit Documentation
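The Toolkit documentation above targets C/C++, but the same CUDA model is reachable from Python. Here is a minimal sketch (assuming the numba package and a CUDA-capable GPU; the image sizes and stretch limits are arbitrary) of a per-pixel linear-stretch kernel:

```python
import numpy as np
from numba import cuda

@cuda.jit
def linear_stretch(img, lo, hi, out):
    # One GPU thread per pixel: rescale to [0, 1] and clamp.
    i, j = cuda.grid(2)
    if i < img.shape[0] and j < img.shape[1]:
        v = (img[i, j] - lo) / (hi - lo)
        out[i, j] = min(max(v, 0.0), 1.0)

img = (np.random.rand(4096, 4096) * 255).astype(np.float32)
out = np.empty_like(img)
threads = (16, 16)
blocks = ((img.shape[0] + 15) // 16, (img.shape[1] + 15) // 16)
linear_stretch[blocks, threads](img, np.float32(10), np.float32(245), out)
```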

 

WebGL and the implementation of MPEG-DASH (a streaming video standard that has been slowly picking up steam among industry players) in IE11

 

Microsoft (Finally) Confirms WebGL Support For Internet Explorer 11 | TechCrunch

In operation, people commonly assume that today's powerful computers let imagery specialists fully automate color balancing and spatial rectification for imagery processing (i.e., without intensive human involvement).

 

That impression is wrong, even with GPU-based supercomputing workstations or multi-core, multi-processor CPU-based servers (on load-balanced cluster or cloud infrastructure)...

 

In fact, imagery specialists still face challenges in processing high-resolution (0.31-2.5 m) optical imagery 'effectively and accurately' over large coverage areas (hundreds of thousands to millions of km²), which is critical to support operations and many applications (#1 below).

 

From frontline experience, beyond the technical challenges, a major barrier is communicating the limitations of current computing solutions, even with the latest imagery-processing algorithms, no matter how powerful those solution packages are...


Certainly, massive 'manual' adjustment of color and spatial accuracy is still required to meet demanding operational requirements such as feature enhancement in mosaics, in addition to color-balancing requirements (#2 & 3 below).

 

Due to spectral variations, issues such as seamlines remain in some areas (#4 below).

 

mosaic.png

1. High-resolution (0.5 m) satellite imagery mosaic covering over 2 million km², well balanced in color and spatially rectified

well-color-balanced.png
2. Overall, visible and near-infrared remote sensing technology applies well to areas with limited landscape variation, which allows for clean color balancing across the entire region
high-level and clear features in color-balanced mosaic.PNG
3. High-level and clear features for feature extraction, mapping, and cartography in well-balanced color mosaic
some-seamlines.png
4. However, some seamlines are still visible between scenes in some areas (around 3-5% of scenes), due to large spectral differences, including atmospheric conditions

The most recent version of imaging radar, known as spotlight-mode SAR, can produce imagery with spatial resolution that begins to approach that of remote optical imagers.

 

For all of these reasons, 'airborne' spotlight synthetic aperture radar imaging is rapidly becoming a key technology in modern remote sensing for high-resolution DEM generation, despite higher cost and imagery-processing workflows that differ from scan-mode SAR/InSAR or LiDAR.

 

With the wide adoption of spotlight mode on satellites, 'spaceborne' spotlight SAR sensors such as COSMO-SkyMed, TerraSAR-X, and TanDEM-X let us handle high-resolution stereo-pair HH/VV SAR images (up to 1 m) to generate 3-5 m resolution DEMs accurately and cost-effectively over large areas, via the rigorous workflow below.

 

Stereo-SAR-DEM.JPG

For spotlight-mode SAR imagery registration and processing, refer to the attachment.

 

 

++++++++++++++++++++++++

PS:

 

The techniques for generating high-resolution DEMs (0.15-5 m resolution) are widely discussed and used in operation, including Interferometric Synthetic Aperture Radar (InSAR) and Light Detection and Ranging (LiDAR), along with advances in extracting elevation data using conventional stereo-pair photogrammetric methods for airborne and spaceborne optical/near-IR sensors (GeoEye-1, WorldView-2/3, Pleiades-1A/1B, SPOT-6/7, etc.).

 

Among those, modern airborne and spaceborne imaging radars, known as 'scan-mode' synthetic aperture radars (SARs), are capable of producing high-quality pictures of the earth's surface while avoiding some of the shortcomings of other remote imaging systems.

 

Primarily, radar overcomes the nighttime limitations of optical cameras and the cloud-cover limitations of both optical and infrared imagers. In addition, because imaging radars use a form of coherent illumination, they can operate in special modes such as 'interferometry' to produce unique derivative image products that 'incoherent' systems cannot.

In practice, many customers, such as hydrologists and water managers, are not satisfied with flood analysis and prediction studies, especially when the results come from GIS-based flood modeling applications.


To improve the reliability of flood disaster prediction and mapping, GIS experts, together with many hydrologists, usually concentrate on acquiring and processing high-resolution geospatial data, such as point clouds for better DEM generation. Some flood applications also improve their analysis via simulation...


However, experience shows that those efforts alone do not produce more reliable analysis, because of differences in surface roughness, the rock hardness of geological formations, flow depth, and slope; the timing of runoff from most parts of a watershed differs from that along the principal flow path (which is generally used to compute times of concentration).


Those GIS-based flood applications are just 'simplified' hydrologic models. Besides, uncertainty analysis in those flood models mostly ignores geological (and some very important hydrologic) factors, which usually have major impacts on the reliability of flood analysis and mapping. Hydrological modelling - Wikipedia, the free encyclopedia and Hydrological transport model - Wikipedia, the free encyclopedia


As is well known, people commonly apply 3D geological modeling technology to groundwater analysis applications (attachment). In fact, when dealing with surface and surface-subsurface water applications, we also should not ignore this important geological approach.


For example, when applying the SWMM/RUNOFF algorithm in a GIS-based flood model, differences in the rock hardness of geological formations, faults, soil and tree cover for infiltration, and upland erosion (as in CASC2D-SED, which simulates soil erosion from overland flow and routes sediment by size fraction to the outlet of a watershed) are hardly considered, even in some well-known surface runoff models (rainfall, simplified recharge)...

 

runoff.jpg

flooding-groundwaterillustratio-l.jpg

Illustration of water cycle


A similar intention (but an entirely different implementation) to combine large quantities of outcrop data into a geodatabase can be found at The SAFARI geodatabase: Exploring geological outcrop analogues for reservoir modeling | ArcGIS Blog

 

Certainly, without details on both 3D surface and subsurface geology and hydrology, it is really challenging to get reliable flood analysis and prediction...

O&G downstream large-scale operations drive intelligent infrastructure and asset management throughout the operating life of assets, which significantly impacts safety, operational efficiency, production predictability, and overall profitability.

 

One positive effect on asset information management is the seamless integration of 3D process plant models, 3D utility models, visualization, simulation, and mobility in a 3D GIS platform with 'real-time' capability.

 

The first step for a 3D GIS platform is to build the 3D GIS solution database in one ecosystem while seamlessly interoperating with others.

 

Constructing the 3D GIS database requires integrating different data and data models, in particular 3D process plant models (refinery, gas plant, petrochemical, power, etc.) and 3D utility models (powerlines, pipelines, etc.).

 

3D Plant Model (as 3D block in ArcScene): refinary.JPG
After reducing the detail of the 3D plant model by 100-500 times, CityEngine can convert it into a multipatch via Collada; but there is no effective way to geolocate the block onto the earth in ArcScene or ArcGlobe in edit mode.

3D Powerline Model (3D scene in CityEngine): 3D-powerline-model.JPG
The 3D powerline model is converted to a 3D block in a 3D scene.

 

 

However, it is hugely challenging in theory and practice to integrate 3D plant models into geospatial solution platforms like Esri ArcGIS, because of the different numeric precisions of 3D GIS and 3D plant models (mostly 3D CAD design models from MicroStation and Intergraph in O&G). Conversion from 3D CAD to 3D GIS via open standard formats like Collada exhibits the following major issues:

 

1. too much detail and huge size (commonly 1-2 GB or more via Collada);

2. no georeferencing (most plant engineers and designers are still unaware of spatial technology);

3. loss of intelligence (all attributes and annotations associated with objects in the 3D CAD model are lost in 3D GIS).

 

Worth mentioning, asset information models in a 3D GIS platform should provide an accurate digital representation of the physical asset (which for many infrastructure assets is in a constant state of flux) and detailed knowledge of the asset context (including the decisions behind why the assets were designed the way they were, as well as how the asset was constructed and modified).


Let's compare the major capabilities of two products (MicroStation, AutoCAD) for 3D CAD plant modeling (attachment)...


Secondly, how do we convert the following major 3D process plant models into 3D geospatial solution platforms?

 

• Intergraph PDS / MicroStation (v6, 7): still many 3D models remain in PDS as the corporate standard; in particular, massive 3D plant models were delivered in PDS.

• MicroStation (v8) / i-models / Hypermodeling / Bentley Map 3D: direct and seamless integration with 3D plant and utility models via i-models; direct editing of 3D CAD in DWG or DGN, plus LiDAR point clouds (TranScan), within Bentley Map 3D...

• Intergraph Smart 3D / GeoMedia® 3D: interoperability with both the graphics and the data attributes of foreign CAD models and PDS models, enabling an even richer, centrally managed 3D ecosystem, especially with Bentley Map 3D / i-models.

 

So, what is the operational solution to meet those challenges?

Coastlines, shoals and reefs are some of the most dynamic and constantly changing regions of the globe.

 

Monitoring and measuring these changes is critical to marine navigation and an important tool in understanding our environment. Near-shore bathymetry is currently calculated using high-resolution multispectral satellite imagery. However, with the introduction of WorldView-2's higher resolution, increased agility, and Coastal Blue band (400-450 nm), bathymetric measurements will improve substantially in both depth and accuracy, and can be applied cost-effectively in operation, potentially replacing traditional marine surveying, particularly down to depths of 10-20 m.

 

WorldView-2-Blue-Band-for-Bathymetric-Extraction.JPG

WorldView-2-Blue-Band-for-Bathymetric-Extraction2.JPG

Blue band (396-460 nm) in WorldView-2 & -3 penetrating shallow water (coastal, lake, etc.)

 

* Note:

Marine surveyors perform inspections of vessels of all types including oil rigs, ferries, cargo vessels and warships, pleasure craft, passenger vessels, tugboats, barges, dredges, as well as marine cargo, marine engines and facilities such as canals, drydocks, loading docks and more for the purpose of pre-purchase evaluation, insurance eligibility, insurance claim resolution and regulation compliance.

 


There are two established techniques for calculating bathymetry using multispectral satellite imagery: a radiometric approach and a photogrammetric approach.

 

The Radiometric Approach

 

The radiometric approach exploits the fact that different wavelengths of light are attenuated by water to differing degrees, with red light being attenuated much more rapidly than blue light.

 

Analysts have leveraged existing multispectral satellites' ability to detect light in the blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands to achieve good depth estimates in water up to 15 meters deep. With the addition of sonar-based ground-truth measurements, they have achieved vertical and horizontal accuracies of less than 1 meter.

 

In order to improve bathymetric measurements, analysts have turned to airborne high-resolution multispectral platforms. These sensors can detect light between 400 and 450 nm, the spectrum that provides the deepest penetration of clear water. Studies using these data have shown that accurate bathymetric measurements can be achieved to 20 meters and deeper.

 

WorldView-2 is the first commercial high-resolution satellite to provide 1.84 m resolution multispectral imagery plus a Coastal Blue detector focused on the 400-450 nm range. WorldView-2's large single-pass collection capability will also make the application of ground-truth data more accurate and reliable. Multiple small collections contain differences in sun angle, sea state, and other parameters, and it is challenging to calibrate one series of measurements and then apply it across a broad area.

 

Large synoptic collections, enabled by WorldView-2’s agility and rapid retargeting capabilities, allow analysts to compare the differing absorption of the Coastal Blue, Blue and Green bands, calibrate their bathymetric estimations using a few known points, and then reliably extend the model across the entire collection area.
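To make the calibrate-then-extend step concrete, here is a rough sketch of a common log-ratio formulation from the bathymetry literature (my illustration, not DigitalGlobe's algorithm; the band arrays are assumed to be atmospherically corrected, positive reflectances):

```python
import numpy as np

def calibrate_ratio_model(coastal, green, depths, n=1000.0):
    """Fit depth ~ m1 * ln(n*coastal)/ln(n*green) + m0 at a few
    known tie points (e.g., sonar soundings)."""
    ratio = np.log(n * coastal) / np.log(n * green)
    m1, m0 = np.polyfit(ratio, depths, 1)
    return m1, m0

def estimate_depth(coastal_band, green_band, m1, m0, n=1000.0):
    """Apply the calibrated log-ratio model across the whole scene."""
    ratio = np.log(n * coastal_band) / np.log(n * green_band)
    return m1 * ratio + m0
```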

 

The Photogrammetric Approach

 

In this method, stereoscopic images are collected over the target area, and a digital elevation model (DEM) of the shallow ocean floor is produced from the imagery. Early studies with both satellite imagery and digital photography appeared promising and demonstrated that this technique can provide accurate bathymetric models of shallow environments without ground truth. However, the technique has not been widely studied due to limitations in the capabilities of current sensors.

 

The challenge with collecting stereoscopic imagery of the shallow ocean floor is in how light interacts with the air/water interface. At high angles of incidence, light is completely reflected off the surface of the water, preventing any sub-aquatic features from being observed. Current multispectral satellite sensors are not able to collect enough high-resolution stereoscopic imagery within the narrow angle necessary to penetrate the ocean surface. In addition, none of them are able to measure the shorter wavelength blue light necessary for maximum depth penetration.

 

WorldView-2 will make this new method for measuring bathymetry possible. The Coastal Blue band will deliver maximum water penetration, and WorldView-2's enhanced agility will enable the collection of large amounts of high-resolution in-track stereo imagery at the ideal angle for water penetration. The advantage of this approach is that multiple images can be registered using tie points that are visible on land and in the water, and the resulting stereo composite can be used to calculate water depth without relying on ground-truth measurements. No other satellite is able to deliver this unique combination of high spatial and spectral resolution, agility, and stereo collection capacity.

 

Please refer to the White Paper from DigitalGlobe in 2010 (attachment), and also the earlier study by Lee (2010).

 

 

Worth mentioning, Lee's study did not show an obvious correlation between depth and the Coastal Blue band, because only traditional classification methods were used in their study...

LAS point cloud data can be accurate and reliable only when rigorous quality-control standards are followed during acquisition and processing in operation.

 

LiDAR acquisition systems capable of recording lidar data with sufficient accuracy over a range of altitudes should be required. With a good acquisition plan, highly qualified field personnel, including professional licensed land surveyors, licensed pilots, and LiDAR technicians, operate the system to ensure quality results from each flight and to meet project requirements.

 

Generally, laser data processing involves numerous 'rigorous' working steps.


Typical steps, in their working order, are as follows:

• Working with trajectories;
• Dividing data into smaller geographical regions (blocks);
• Classifying points by echo;
• Deducing line numbers to points;
• Classifying ground points separately for each flightline;
• Measuring the match of overlapping strips;
• Solving heading, roll, and pitch for the whole data set;
• Verifying corrections visually;
• Cutting overlapping point strips;
• Classifying ground points back to default;
• Running the final classification into ground, vegetation, building, etc. classes.


In practice, the data should be divided into smaller blocks of around 5-10 million points, due to the limits of current operating systems and computing.


Never delete points, add points, or change the elevation of points when working in the LAS format. Specialists only attribute each point with various flags that reflect the attributes or characteristics of that point. Usually, returns are flagged in several ways: by return number, by layer, or by type class.

 

Return number is simply first, second, third, fourth, etc., depending on the number of returns recorded by the particular sensor (attachment - Major LiDAR Sensors). Layer relates to return number but takes a step toward elevation classification. In the LAS format, class types can be classified properly and even user-defined (attachment – LAS 1.4). If the end result of the project is a bare-earth terrain model, the following categories are recommended (see the sketch after this list):

    • Bare Ground (Terrain);
    • Features above ground (including buildings, tree crowns, cars, poles, bridges…);
    • Water; and
    • Noise
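For instance (a sketch assuming the laspy package and a placeholder file name), the ASPRS class codes behind these categories can be read and tallied straight from a LAS file:

```python
import numpy as np
import laspy

las = laspy.read("tile.las")   # placeholder input tile
cls = np.asarray(las.classification)

# ASPRS standard codes: 2 ground, 3/4/5 low/medium/high vegetation,
# 6 building, 7 low point (noise), 9 water
for code, name in [(2, "ground"), (6, "building"), (7, "noise"), (9, "water")]:
    print(f"{name:>8}: {np.count_nonzero(cls == code):,} points")
print("by return number:", np.bincount(np.asarray(las.return_number)))
```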


Noise Removal


The very first step in post-processing is to identify and eliminate noise points: extremely high or low points outside the range of realistic elevations for the project area. Anomalously high points can be caused by atmospheric aerosols, birds, or low-flying aircraft; low points can be caused by laser multipath. While noise points would probably be removed later by automated filtering, it is usually advantageous to remove them earlier in the processing workflow. Many software packages use the absolute minimum and maximum elevations in a dataset as the basis for assigning the scale for color-by-elevation symbology.


Noise points will cause the elevation range in areas of real interest to be compressed within the color scale. In addition to a simple band filter with high and low limits that classifies these points as noise (Class 7 - Low Point (noise) in the LAS format), remote sensing specialists should use advanced algorithms to minimize noise, including manual editing.
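A minimal version of that band filter (again a sketch with laspy; the elevation limits are illustrative project-area bounds) flags, but never deletes, the offending returns:

```python
import numpy as np
import laspy

Z_MIN, Z_MAX = -10.0, 900.0          # plausible elevations for the project area

las = laspy.read("tile.las")         # placeholder input tile
z = np.asarray(las.z)
cls = np.asarray(las.classification)

cls[(z < Z_MIN) | (z > Z_MAX)] = 7   # LAS Class 7: Low Point (noise)
las.classification = cls
las.write("tile_flagged.las")        # points are re-flagged, never removed
```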


Once labeled, noise points can then be ignored by software for display purposes and in analytical computations, including feature extraction.

 

Manual Classification


As is well known, automated filtering, appropriately applied, can effectively classify about 90% of the ground points in a LiDAR point cloud. The remaining roughly 10% of the points must be visually inspected and classified manually, which involves human interaction with the data, familiarity with the subject landscape, and knowledge of fundamental mapping principles, conventions, and 'best practice', plus high-resolution optical imagery (airphotos, GeoEye, QuickBird, Pleiades) and an advanced LiDAR package such as TerraScan, ENVI LiDAR, Leica CloudPro (Leica XPro, Leica Cyclone), or ERDAS LPS (point cloud).

These LAS point cloud processing packages introduce manual editing and classification techniques to help improve data accuracy and reliability, particularly when LAS data are not classified to the latest LAS standard. This type of technique, called 'Classify on Point Cloud', has been developed and is applied in the context of accepted mapping conventions and practices.

 

For example, Point Cloud Classify lets us classify point clouds based on parameters that define objects (man-made structures) and vegetation, producing a DTM, a city model (buildings), and a canopy model (low, medium, and high vegetation). Please refer to the paper 'Classification of LiDAR Point Cloud and Generation of DTM from LiDAR Height and Intensity Data in Forested Area'.

 

The parameters for vegetation include height and greenness criteria. The greenness criterion is applicable only to point clouds that carry RGB information (below).
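A crude version of such a height-plus-greenness test (a sketch assuming laspy and an RGB-carrying point format; the thresholds and the flat-ground proxy stand in for a real DTM) might look like this:

```python
import numpy as np
import laspy

las = laspy.read("tile_rgb.las")     # placeholder RGB-encoded cloud
r = np.asarray(las.red, dtype=np.float64)
g = np.asarray(las.green, dtype=np.float64)
b = np.asarray(las.blue, dtype=np.float64)
z = np.asarray(las.z)

greenness = g / np.maximum(r + g + b, 1.0)       # simple green ratio
height = z - np.percentile(z, 1)                 # crude flat-ground proxy

cls = np.asarray(las.classification)
veg = (greenness > 0.40) & (height > 0.5)        # illustrative thresholds
cls[veg & (height <= 2.0)] = 3                   # low vegetation
cls[veg & (height > 2.0) & (height <= 5.0)] = 4  # medium vegetation
cls[veg & (height > 5.0)] = 5                    # high vegetation
las.classification = cls
```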

 

Leica_CloudPro.jpg

RGB-encoded Points Imagery, after high-quality-controlled Acquisition and Processing

(Accurate and Reliable LAS Points, ready for Geospatial and G&G applications)


If the final result of manual editing and classification is a detailed bare-earth terrain model (DTM), that means the data have been quality-controlled for completeness and lack of artifacts.


If the final products include 3D feature clouds, advanced photorealistic rendering and 3D modeling techniques should be applied to those 3D point clouds to create realistic representations and support 3D analysis.

 

For city modeling, 3D features (buildings, tree crowns, powerlines, tanks, cars, poles, bridges) can be classified and modeled.

Finally, LiDAR intensity data can also be used to extract features, similar to raster imagery (see the slides by Bill)...

 

LiDAR-photorealistic-rendering.PNG

LiDAR-photorealistic-rendering1.PNG

Photorealistic Rendering from Point Clouds for Visualization and 3D Modeling


++++++++++++


The LAS file format is a binary file format that maintains information specific to the LiDAR nature of the data while not being overly complex (http://www.asprs.org/Committee-General/LASer-LAS-File-Format-Exchange-Activities.html).

Keep in mind that LAS point clouds can also be generated from stereo-pair optical imagery, in addition to LiDAR.


Fill Voids in DEM

Posted by hlzhang525 Oct 28, 2014

Traditionally, there are three types of methods for filling voids in DEMs such as SRTM and InSAR products, which are available in ArcGIS.


One approach fills voids using a variety of interpolators; another determines the most appropriate void-filling algorithm using a classification of the voids based on their size and a typology of their surrounding terrain; the third classifies the most appropriate algorithm for each of the voids in the SRTM data.

 

Obviously, the choice of void-filling algorithm depends on both the size and the terrain type of the void. Generally, the best methods can be summarized as follows (a small interpolation sketch follows the list):

 

  • Kriging or Inverse Distance Weighting interpolation for small and medium-size voids in relatively flat, low-lying areas;
  • Spline interpolation for small and medium-size voids in high-altitude, dissected terrain;
  • Triangular Irregular Network or Inverse Distance Weighting interpolation for large voids in very flat areas, and an advanced spline method (Topo to Raster in ArcGIS) for large voids in other terrain.
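As an illustration of the interpolation step (a sketch using SciPy's general-purpose griddata; 'linear' roughly plays the TIN role and 'cubic' the spline role, a simplification of the dedicated tools named above):

```python
import numpy as np
from scipy.interpolate import griddata

def fill_voids(dem, nodata=-9999.0, method="cubic"):
    """Fill DEM voids by interpolating from the surrounding valid pixels."""
    rows, cols = np.indices(dem.shape)
    valid = dem != nodata
    filled = dem.copy()
    filled[~valid] = griddata(
        (rows[valid], cols[valid]),    # known sample locations
        dem[valid],                    # known elevations
        (rows[~valid], cols[~valid]),  # void locations to estimate
        method=method)                 # "linear" ~ TIN, "cubic" ~ spline
    return filled
```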


However, in our practice, two more recent methods (Fill and Feather, Delta Surface Fill), which are available only in some leading remote sensing packages (below), are the ones mostly recommended for DEM/DTM void-filling tasks in operation.
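The Delta Surface Fill idea itself is simple enough to sketch (a simplified version: real implementations interpolate a smooth delta surface across each void rather than applying one mean offset):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def delta_surface_fill(dem, ref, nodata=-9999.0):
    """Fill voids with a reference DEM shifted by the local bias (delta)
    measured on the rim of valid pixels around the voids."""
    void = dem == nodata
    rim = binary_dilation(void, iterations=3) & ~void
    delta = np.mean(dem[rim] - ref[rim])   # crude single delta for all voids
    out = dem.copy()
    out[void] = ref[void] + delta
    return out
```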

 

DSF-Algorithm.JPG

Introduction

 

Airborne and terrestrial 3D mapping systems are widely available on different platforms (airplane, helicopter, UAV, mobile vehicle, etc.) and are mainly used to collect 3D imagery for civil mapping applications, including stereo pairs, point clouds, obliques, panoramas, motion video, and others.


With 3D imaging technology, those 'combined' systems mostly produce very accurate 3D imagery (point clouds) while capturing very high-resolution geospatial detail on the ground. One of the main reasons for the high accuracy and spatial detail is:


Those mapping systems are commonly assembled with Global Positioning System (GPS) receivers and Inertial Navigation Systems (INS), in addition to high-precision 'civil-mapping' sensors, scanners, or cameras.

 

GPS

The Global Positioning System can be used for determining one's precise location and providing a highly accurate time reference almost anywhere on Earth or in Earth orbit. The accuracy of the GPS signal itself is about 5 meters. However, using differential GPS and other error-correcting techniques, the accuracy can be improved to about 1 cm over short distances.

 

INS

An Inertial Navigation System provides the position, velocity, and attitude of an aircraft by measuring the accelerations and rotations applied to the system's inertial frame. INSs have angular and linear accelerometers (for changes in position).

 

Angular accelerometers measure how the aircraft is rotating in space. Generally, there's at least one sensor for each of the three axes: pitch (nose up and down), yaw (nose left and right) and roll (clockwise or counterclockwise from the cockpit).

 

Linear accelerometers measure how the aircraft is moving in space. Since it can move in three axes (up and down, left and right, forward and back), there is a linear accelerometer for each axis. A computer continually calculates the aircraft's current position: first, for each of the six axes, it integrates the sensed acceleration over time to obtain the current velocity; then it integrates the velocity to obtain the current position.
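In discrete form, that double integration is just two cumulative sums. A toy sketch with made-up constant acceleration (real INS processing also rotates body-frame readings into the navigation frame and corrects for gravity and drift):

```python
import numpy as np

dt = 0.01                                    # 100 Hz IMU sampling interval (s)
accel = np.zeros((1000, 3))                  # 10 s of toy body-frame data
accel[:, 0] = 0.5                            # constant 0.5 m/s^2 forward

velocity = np.cumsum(accel, axis=0) * dt     # integrate acceleration -> velocity
position = np.cumsum(velocity, axis=0) * dt  # integrate velocity -> position
print(position[-1])                          # ~[25, 0, 0] m, i.e. 0.5*a*t^2
```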


After acquisition (and before serving, analysis, or information extraction), 3D imagery from those mapping systems must be processed by remote sensing specialists with a proper 'algorithm-enabled' software package for higher accuracy, in addition to other steps (noise removal, color enhancement, ...).

 

Manage 3D Imaging and Point Clouds

 

Different types of 3D imaging (and point cloud) data call for different serving approaches when managed at the server side, which ensures that each can be used effectively and seamlessly in various applications.

 

In the market, some solution vendors offer specific server-side solutions for managing certain types of 3D imaging data directly at the server side, such as optical stereo pairs, optical obliques, optical panoramas, optical motion video, or point clouds.


Pictometry at EagleView Technologies - Roof Measurement & Aerial Measurement Service

GEOSPAN at GEOSPAN is the industry leading Photogrammetric provider of multi-angle oblique aerial and integrated 360° street-level   …

CycloMedia at Home (EN) | CycloMedia - EN

earthmine at earthmine - 3D Street Level Imagery Solutions

Leica Cyclone (Cyclone-server)

AXIS 241 Video Servers

...

3D-printed topo maps, with or without city models, are in high demand in practice, including for the creation of sand tables.

 

However, most 3D mapping systems, such as Esri CityEngine, ArcScene, or ArcGIS Pro, do not support 'direct' 3D printing of 3D topo maps (with or without 3D features).


A workaround for this task is described in the article below from Roy.

city-model.jpgtopo-model.jpg

 

C Tech's MVS software includes the ability to create specially formatted VRML files, which allow full-color 3D physical models to be produced on the most advanced 3D printers. However, C Tech only offers 3D model printing as a service to clients. They assist with modifying the models to ensure they print correctly the first time, and they perform the actual printing.

 

What is 3D printing? How does 3D printing work?


3D printer creates 10 houses in a DAY | Daily Mail Online      

What are the major challenges in 3D GIS development and applications, especially for complicated facilities and surface-subsurface 3D systems?

 

3D spatial data modeling and realization (raster, topology, TIN, geometry) in one 3D solution system, and then seamless interoperation with others!

Among those, 3D design models (3D CAD models) and some 3D geo-models (3D G&G models) must also be considered (even integrated), even though their geometry may be too detailed for some 3D GIS systems...

 

The techniques and theories of 2D spatial data modeling cannot be fully transferred to 3D data modeling and structuring. Besides, realizing a 'true' 3D GIS spatial system requires much effort from 3D computer graphics, 3D computational geometry, 3D CAD, close-range photogrammetry, and LiDAR, work that is extensively taking place in the energy and mining industries.
Data_Modelling_Today.png

A basic reference is available below

Traditionally, geospatial and G&G techniques belong to different disciplines.


Over the last decade, however, the geospatial community has been facing new challenges, because more and more customers and applications demand surface-subsurface data integration (say, creating drill-hole traces in 3D plan/profile/section views) and geological analysis 'within' geospatial solutions. These are commonly needed in geological studies, subsurface engineering (utilities, civil engineering), hydrology, river and flood risk analysis, earthquake studies, mining, energy, ...


Even though huge challenges lie ahead, this also offers big opportunities...


How closely should the two disciplines be combined to reach those diverse goals, in particular by combining both spatial data models and G&G data models 'within' geospatial databases/servers with rich client applications?


We see some good moves in this direction. For example, Aquaveo (Subsurface Analyst | Aquaveo.com) and C Tech (EnterVol Product Suite | C Tech Development Corporation) partially or fully combine some G&G data models within an ArcGIS file geodatabase (FGDB).


Worth mentioning, major commercial G&G data models are already well designed to combine both geospatial and G&G data models 'inside' the G&G domain, while also allowing integration or seamless interoperation with major geospatial databases (such as a WMS server or Esri ArcGIS Server) and other G&G project databases (OpenWorks, GeoFrame).


For example, Paradigm SKUA-GOCAD (Epos), a 'proprietary' G&G data model with geospatial data models, widely available in the petroleum and mining industries, offers this type of 'integrated' feature set (complete geospatial and G&G data models in one domain) to give users much more efficiency and productivity for geological exploration and resource appraisal in operation...

Diagram_Epos_Apr2013.jpg

Technically, SKUA-GOCAD is a complete 3D GIS - G&G modeling system covering both surface 3D and subsurface 3D.

 

SKUA-GOCAD provides seamless interaction between 3D model visualization, multiple data-space views, and a powerful query environment for selecting subsets of both geospatial and G&G data through one central software application.

profile-image-display.png

Schlumberger's Seabed E&P data model (Petrel) and GEOVIA Surpac™ (GEMS) offer similar capabilities and functionality...


So far, some vendors offer a good start, with real geological analysis capabilities in geospatial workflows as an extension inside ArcGIS that connects to their own G&G databases, even if not completely within the geospatial database yet.

 

RockWare GIS Link 16 - cross-sections, fence diagrams in ArcView (sold by RockWare)

 

Geological Software | Geology Software | Geosoft Solutions

 

GSI3D - free near-surface 3D modeling package: GSI3D - Wikipedia, the free encyclopedia

 

Certainly, some open data models are also moving in this direction, including BGS OpenGeoscience, USGS, and PPDM... However, those data models still face development challenges on geospatial platforms like ArcGIS...

 

++++++++++++

 

OneGeology OneGeology - To be the provider of geoscience data globally

Open Geoscience data models | British Geological Survey (BGS)

Borehole data model from BGS: Borehole index & interpretations | British Geological Survey (BGS)

Geochemistry data model: Geochemistry data model | Geochemistry | British Geological Survey (BGS)

Lithology data model: Lexicon data model | British Geological Survey (BGS)

 

NADM - The North American Geologic Map Data Model

 

FGDC Proposal for Geologic Data Model Standard

 

National Geologic Map Database -- Standards Development

From variant index images (such as NDVI) derived from time-series remote sensing images, both the 'spatial variation' and the 'temporal variation' for multiple disciplines can be defined, as is widely reported in the literature. However, most of those research results show high uncertainty and low reliability.

 

 

In practice, in order to *accurately* define spatial variations or detect spatial changes directly from time-series images or index images, many technical challenges must be solved.

 

For example, in GIS and land management, how can we *effectively* detect spatial changes in land cover over time (i.e., land use, building lots, fences, tree crowns, etc.)?

 

Obviously, algorithms for change detection and for defining spatial variation are mostly different from feature extraction algorithms; they include traditional change detection algorithms and object-oriented algorithms.

 

For the last few years, many researchers and practitioners have been discussing object-oriented change detection with eCognition and ERDAS Objective.


With eCognition, change detection uses the multivariate alteration detection (MAD) transformation (Allan et al., 1998; Nielsen & Conradsen, 1997), which is based on established canonical correlation analysis.
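For reference, the core of the MAD transformation is a canonical correlation analysis of the two dates' band vectors; below is a rough NumPy/SciPy sketch of the published method (my illustration, not eCognition's implementation):

```python
import numpy as np
from scipy.linalg import eigh, inv

def mad(X, Y):
    """Multivariate Alteration Detection.
    X, Y: (n_pixels, n_bands) matrices for dates 1 and 2.
    Returns the MAD variates; large magnitudes indicate change."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(Xc)
    Sxx, Syy = Xc.T @ Xc / (n - 1), Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)

    # Canonical correlations: solve Sxy Syy^-1 Syx a = rho^2 Sxx a
    rho2, A = eigh(Sxy @ inv(Syy) @ Sxy.T, Sxx)
    rho2, A = rho2[::-1], A[:, ::-1]             # decreasing correlation
    B = inv(Syy) @ Sxy.T @ A                     # paired Y-side vectors
    B /= np.sqrt(np.sum(B * (Syy @ B), axis=0))  # unit variance
    B *= np.sign(np.sum(A * (Sxy @ B), axis=0))  # positive pairwise correlation

    return Xc @ A - Yc @ B                       # one MAD variate per band
```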

 


However, it appears that MAD can be challenging to use as a 'real' object-oriented solution for accurate change detection.

 

Conversely, ERDAS Objective uses a discriminant-function change algorithm to help extract change features, which offers a direct and efficient way to map 'spatial variation' and perform change detection...