When you access a 3-D geometry through the API, it must allocate an SE_POINT buffer for
the X,Y data and an LFLOAT buffer for the Zs. An LFLOAT (double) is half the size of an
SE_POINT (struct { double x, y; }), so the client-side allocation grows by half. Inside the
stored object itself, the Z values must be kept as well, though all zeros only add
8 + (nverts - 1) bytes (which means the storage requirement for point layers increases
by 50%). Spatial indexes are unchanged, but the
time it takes to retrieve the rows (which represents the bulk of the query time) increases
with larger shapes (this is why a -400,-400,1000000 WGS84 coordref is so much faster
than a -400,-400,1000000000 one).
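To make the arithmetic concrete, here is a rough sketch of the cost; the SE_POINT and
LFLOAT typedefs follow the usual ArcSDE C API conventions (check sdetype.h for the exact
definitions in your release), and the all-zero figure is the 8 + (nverts - 1) estimate
above, not an exact on-disk layout.

    #include <stdio.h>
    #include <stdlib.h>

    /* Assumed definitions, in the style of the ArcSDE C API headers. */
    typedef double LFLOAT;
    typedef struct { LFLOAT x, y; } SE_POINT;

    int main(void)
    {
        long nverts = 1000;   /* vertex count of the shape being fetched */

        /* 2-D fetch: one SE_POINT (16 bytes) per vertex. */
        size_t xy_bytes = nverts * sizeof(SE_POINT);

        /* 3-D fetch: an extra LFLOAT (8 bytes) per vertex for the Zs,
           i.e. the client-side allocation grows by half. */
        size_t z_bytes = nverts * sizeof(LFLOAT);

        /* Inside the stored object, all-zero Zs add roughly
           8 + (nverts - 1) bytes: one full-width Z, then 1 byte per delta. */
        size_t stored_z = 8 + (size_t)(nverts - 1);

        printf("XY buffer: %zu bytes\n", xy_bytes);
        printf("Z buffer:  %zu bytes (+%.0f%%)\n",
               z_bytes, 100.0 * z_bytes / xy_bytes);
        printf("Stored Z overhead (all zeros): %zu bytes\n", stored_z);
        return 0;
    }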
I'm actually glad it's so very inconvenient to add zero Z values to a large dataset. It
should raise the question of whether so much work for so little gain is worthwhile.
Calculating actual Z values from a scale-appropriate DEM (be it GTOPO, DTED0, DTED1,
DTED2, or a few petabytes of LIDAR) would also be a lot of effort, but then you would at
least have a real 3-D dataset (well, 2.5-D), not one that pays the same storage cost and
worse query performance without carrying any useful values.
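If you do go the DEM route, the core of it is just sampling the grid at each vertex. A
minimal bilinear-sampling sketch follows, assuming a generic in-memory grid; DemGrid,
dem_sample, and assign_z_from_dem are illustrative names, not part of any SDE or DEM
product API, and it glosses over datum shifts, vertical units, and nodata cells.

    #include <math.h>

    /* Hypothetical DEM tile: row-major, north-up grid of elevations. */
    typedef struct {
        const float *cells;   /* nrows * ncols elevation samples */
        long   ncols, nrows;
        double xmin, ymax;    /* upper-left corner of the grid */
        double cellsize;      /* ground units per cell */
    } DemGrid;

    /* Bilinear sample of the DEM at (x, y); returns 0 outside the grid. */
    static double dem_sample(const DemGrid *dem, double x, double y)
    {
        double col = (x - dem->xmin) / dem->cellsize;
        double row = (dem->ymax - y) / dem->cellsize;
        long c0 = (long)floor(col), r0 = (long)floor(row);

        if (c0 < 0 || r0 < 0 || c0 + 1 >= dem->ncols || r0 + 1 >= dem->nrows)
            return 0.0;

        double fx = col - c0, fy = row - r0;
        const float *g = dem->cells;
        double z00 = g[r0 * dem->ncols + c0];
        double z01 = g[r0 * dem->ncols + c0 + 1];
        double z10 = g[(r0 + 1) * dem->ncols + c0];
        double z11 = g[(r0 + 1) * dem->ncols + c0 + 1];

        /* Blend the four surrounding cells by the fractional offsets. */
        return (1 - fy) * ((1 - fx) * z00 + fx * z01)
             + fy       * ((1 - fx) * z10 + fx * z11);
    }

    /* Fill a Z array for n vertices given their X and Y coordinates. */
    void assign_z_from_dem(const DemGrid *dem, const double *x,
                           const double *y, double *z, long n)
    {
        for (long i = 0; i < n; i++)
            z[i] = dem_sample(dem, x[i], y[i]);
    }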
- V