hi Larry.
sadly, I'm not yet very familiar with ArcGIS products .. where do I find those 'referenced' images in ArcGlobe?
* * *
[ATTACH=CONFIG]12114[/ATTACH]
* * *
Okay, let me explain how this workflow would work in the CityEngine, what works and what does not, and what role metadata would play.
Read on carefully.. 🙂
1]
First off, the CityEngine cannot texture 3d models based on oblique and ortho imagery automatically. So the CE cannot 'grab' a 3d multipatch and search for the appropriate real-world textures using e.g. a photogrammetric approach. At least at the moment, this is not possible.
2]
The CityEngine is used to texture surfaces (with a series of given input pictures) based on input via code or metadata, which teaches the (dumb) computer what to do with the 'polygon soup'.
Thus, that classification process is of course MUCH easier if you already have metadata describing what each object actually represents.
3]
If no metadata is available at all, what you can do - as seen in the screenshot above - is query the dimensions and orientation of each object and 'embed' the metadata by creating your own (code-based) classification.
This is of course quite complex and will never catch all possible cases, but it can produce quite good results, though of course it does not represent the real world !
By code-based classification, I mean the following 'tests' for each object:
In pseudo code :
if object is wider and deeper than 5 meters and higher than 3 meters, it's a 'building' or 'room' or a 'garden shed'.
if object is longer than 2 meters, higher than 2 meters and less than 0.5 meters deep, it's a 'wall'.
if object is less than 1 meter in each dimension, it's a 'technicalBox'
...
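The tests above can be sketched as a small function. This is a minimal sketch, assuming each object has already been reduced to its bounding-box dimensions in meters; all names here are hypothetical, and in CityEngine itself this logic would be expressed as CGA rules rather than Python:

```python
def classify(width, depth, height):
    """Guess what an untagged object represents from its bounding box (meters)."""
    if width > 5 and depth > 5 and height > 3:
        return "building"       # could equally be a 'room' or 'garden shed'
    if (width > 2 or depth > 2) and height > 2 and min(width, depth) < 0.5:
        return "wall"           # long, tall, and thin
    if width < 1 and depth < 1 and height < 1:
        return "technicalBox"   # small box-like object
    return "unknown"            # fall through: not every case is caught
```

As the post says, a rule set like this never covers everything, so an explicit 'unknown' fallback is important.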
After this classification of each object type, continue by testing each of the subshapes (individual polygons) of that object.
E.g. if a volume is classified as 'building', then check how its subshapes are oriented in space: vertical ones can then be facades with windows, if they are wide enough.
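The subshape test could be sketched like this: a polygon whose normal is (roughly) horizontal is vertical in space, and a vertical polygon that is wide enough is a facade candidate. The normal representation and the width threshold are assumptions for illustration; CityEngine would express this check in CGA:

```python
def is_facade_candidate(normal, width, min_width=3.0, tolerance=0.1):
    """normal: (nx, ny, nz) unit vector of the polygon; z points 'up'.

    A near-zero z component means the normal lies in the horizontal
    plane, i.e. the polygon itself stands vertically in space.
    """
    is_vertical = abs(normal[2]) < tolerance
    return is_vertical and width >= min_width
```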
This breakdown of the process is crude but shows the workflow. Of course - as you see - all that classification can be simplified a lot if the CityEngine already knows what an object actually represents.
Does this make sense ?
Another example:
One customer I've been in contact with recently uses a pool of facade images with a certain naming convention that encodes the buildingID and the facadeIndex. Using this metadata in the file name, the CityEngine can assign a large pool of real-world facade images onto the correct sides of the buildings created from a 2D GIS dataset.
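The file-name-as-metadata idea can be sketched as follows. The concrete convention here ('building&lt;ID&gt;_facade&lt;INDEX&gt;.jpg') is purely hypothetical, since the customer's actual scheme isn't shown; the point is that parsing the names yields a lookup table a texturing rule can query per building side:

```python
import re

# Hypothetical convention: building<ID>_facade<INDEX>.jpg
PATTERN = re.compile(r"building(?P<bid>\d+)_facade(?P<fidx>\d+)\.jpg$")

def build_texture_index(filenames):
    """Map (buildingID, facadeIndex) -> image file name."""
    index = {}
    for name in filenames:
        m = PATTERN.search(name)
        if m:  # silently skip files that don't follow the convention
            index[(int(m.group("bid")), int(m.group("fidx")))] = name
    return index
```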