I have a reasonably dense point cloud and a 3D textured mesh (.obj) captured from a UAV flight over a partially vegetated hillside. The mesh contains specific areas of interest, and I'd like to crop these out of the point cloud or mesh and then calculate their surface area.
I may also be able to obtain an orthomosaic of the UAV data; if so, I was planning to digitise the areas of interest as 2D polygons and then use the Interpolate Polygon To Multipatch tool to derive 3D multipatch features, which would give me the surface areas. This has worked for me in the past on similar work, but I'm wondering whether there are alternative, potentially more accurate workflows I could use?
In addition, I have had a few issues preparing the data for analysis.
First I have to translate the .obj file into a format supported by the Import 3D Files tool (.dae, .3ds, .wrl or .flt). Using a .dae file I get the terrain and imagery texture, but the georeferencing is missing. A .3ds file retains the georeferencing, but the imagery texture is missing. A .wrl file also retains the georeferencing, but I cannot view the output in a 3D scene, which I assume is due to memory issues. Lastly, the .flt file doesn't seem to import or play nicely with the Import 3D Files tool.
I have also had some trouble viewing the 3D multipatch layers in ArcGIS Pro. When added as a 3D layer in a 3D scene they are drawn flat, without any z-values, yet in ArcCatalog they display in 3D just fine.
Any advice would be appreciated.
Why don't you just bring the point cloud in as a LAS dataset (.lasd) and go from there?
If I remember correctly, a *.flt file is a floating-point raster, so you may be able to import it directly as a raster.
As for your actual question: if you can get a raster DEM, the 3D area of each pixel depends on its slope angle, so 3D pixel area = 2D pixel area / cos(slope).
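A minimal numpy sketch of that slope-correction idea, summed over a whole DEM. The tiny DEM array here is made-up sample data (a uniform ramp in x) just to illustrate the calculation; in practice you would load your real DEM grid and its cell size instead:

```python
import numpy as np

# Hypothetical DEM: elevations in metres on a 1 m grid (a uniform ramp in x).
cell = 1.0
dem = np.array([
    [10.0, 10.5, 11.0],
    [10.0, 10.5, 11.0],
    [10.0, 10.5, 11.0],
])

# Elevation gradients (np.gradient returns the axis-0/row gradient first,
# i.e. d z / d y, then the axis-1/column gradient, d z / d x).
dzdy, dzdx = np.gradient(dem, cell)

# Slope angle per cell from the gradient magnitude.
slope = np.arctan(np.hypot(dzdx, dzdy))

# 3D surface area = sum over cells of (planimetric cell area / cos(slope)).
planar_area = cell * cell
surface_area = float(np.sum(planar_area / np.cos(slope)))
```

For the ramp above every cell has a constant gradient of 0.5, so each 1 m² cell contributes 1/cos(arctan 0.5) = √1.25 m² of true surface, and the total is 9·√1.25. Note the formula treats each cell as a planar facet, so it slightly underestimates area where the surface curves within a cell.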