With the Image Analyst extension in ArcGIS Pro 2.1 (or later), non-orthorectified and suitably overlapping images with appropriate metadata can be viewed in stereo! This stereoscopic viewing experience can enable 3D feature extraction. See more information at http://esriurl.com/stereo.
If your organization has a collection of images and you’d like to use the stereo viewing capability in ArcGIS Pro, where do you start? The key questions are:
What type of sensor collected the data, and
What orientation data do you have along with the images?
To display images as stereo pairs, ArcGIS must have detailed information about the location of the sensor (x, y, z) as well as its orientation, and this information is unique to every image. Information about the sensor itself (typically called a camera model or sensor model) is also required.
There are a few conceptually simple cases, although each has important details covered in its own workflow and documentation.
If you have two overlapping satellite images, you can go directly to stereo viewing.
If you have a collection of satellite images, build a mosaic dataset and ingest the images using the raster type specific to that satellite, run the Build Stereo Model geoprocessing tool, then proceed to the stereo view. The satellite's raster type reads the required orientation data for you.
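The satellite workflow above can be automated with arcpy. This is a minimal sketch, not a definitive implementation: the function name, the WGS84 default, and all path parameters are placeholders, and the exact raster type string depends on your satellite (check the AddRastersToMosaicDataset documentation for the supported names).

```python
def prepare_stereo_mosaic(gdb_path, md_name, image_folder, raster_type):
    """Sketch: build a mosaic dataset from satellite scenes and compute
    stereo models so the collection can be viewed in stereo.
    gdb_path, md_name, image_folder, and raster_type are placeholders.
    """
    import arcpy  # deferred so this module loads without ArcGIS installed

    # A mosaic dataset needs a coordinate system; WGS84 is a common default.
    arcpy.management.CreateMosaicDataset(
        gdb_path, md_name, arcpy.SpatialReference(4326))
    mosaic = f"{gdb_path}/{md_name}"

    # The sensor-specific raster type reads the orientation metadata
    # (e.g. RPCs) delivered alongside each scene.
    arcpy.management.AddRastersToMosaicDataset(
        mosaic, raster_type, image_folder)

    # Pair up overlapping images so the stereo map view can use them.
    arcpy.management.BuildStereoModel(mosaic)
    return mosaic
```

After this runs, the mosaic dataset can be added to a stereo map in ArcGIS Pro.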
If your imagery came from a professional aerial camera system:
If you have an output project file from aerotriangulation (AT) software (e.g. Match-AT or ISAT), ArcGIS includes raster types that ingest the orientation data for you, so this is similar to the satellite case: build a mosaic dataset with the proper raster type, run Build Stereo Model, and proceed to stereo viewing.
If you have a project file from AT software that is not currently supported, Python raster types are under development for additional sensors, e.g. the Vexcel UltraCam. For more information, watch for announcements on GeoNet or on http://esriurl.com/ImageryWorkflows. Alternatively, if you have a table of camera and frame orientation values, see the next bullet.
If you have a table of data values representing the exterior orientation as well as a camera model (interior orientation), build a mosaic dataset and ingest the images using the "Frame Camera" raster type.
This document (http://esriurl.com/FrameCameraBestPractices) describes how to prepare the necessary camera and frame data, then configure the mosaic dataset.
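For an existing mosaic dataset, the frame-camera ingest step can be sketched in arcpy as below. This is an assumption-laden illustration: the function name and both path parameters are placeholders, and the details of supplying the camera model (interior orientation) alongside the frames table follow the FrameCameraBestPractices document rather than this sketch.

```python
def add_frame_camera_images(mosaic_dataset, frames_table):
    """Sketch: ingest frame imagery into an existing mosaic dataset
    using the "Frame Camera" raster type, then build stereo models.
    mosaic_dataset and frames_table are placeholder paths.
    """
    import arcpy  # deferred so this module loads without ArcGIS installed

    # The frames table supplies the exterior orientation (x, y, z and
    # rotation) for each image; the camera model is configured as part
    # of the Frame Camera raster type setup.
    arcpy.management.AddRastersToMosaicDataset(
        mosaic_dataset, "Frame Camera", frames_table)

    # Pair overlapping frames for the stereo map view.
    arcpy.management.BuildStereoModel(mosaic_dataset)
```
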
If you have scanned film but no AT results, refer to the FrameCameraBestPractices document. With ArcGIS Pro 2.1, some values may have to be estimated, and the positional accuracy may not be optimal. ArcGIS Pro 2.2 (and later versions) supports fiducial measurement.
If your imagery was captured using a drone, you will need photogrammetric software to generate the camera model and orientation data.
If you process your drone imagery using Ortho Mapping in ArcGIS Pro Advanced (see http://esriurl.com/OrthoMappingHelp), then after the Adjust step is completed, the Image Collection mosaic dataset will be ready for stereo viewing (once you run Build Stereo Model).
If you are using Drone2Map, please see this ArcGIS Online item (http://esriurl.com/D2Mmanagement) to download a geoprocessing tool that can ingest the images into a mosaic dataset.