Hi Everybody:

I wrote a program to generate geometries for linear features. However, I found that the geometries are slightly changed after assigned to IFeature.Shape. Following is my code:

// newShape is IPolyline
m_editor.StartOperation();
m_linear_feature.Shape = newShape;
m_linear_feature.Store();
m_editor.StopOperation("Reshape linear feature");

I checked the coordinates of (newShape as IPolyline).ToPoint before and after running the line "m_linear_feature.Shape = newShape".

Before: X = -92.836516205759764, Y = 40.870457300856849, Z = 900.537152489813

After: X = -92.836516205999942, Y = 40.870457301000044, Z = 900.53715249001107

What is the reason for this change? Is there any way to avoid it?

Thank you!

Your tolerance and resolution settings appear to be set to 9 decimal places; when a shape is stored, the geodatabase snaps its coordinates to that resolution grid, which is why the values round off at the ninth decimal. Any digits beyond that are an artifact of how binary floating-point represents base-10 numbers. Preserving all 15 decimal digits is effectively impossible with standard double-precision values, since the coordinates pass through code you have no control over, which may alter the value at that level of precision. Most programming-language math libraries fall short of preserving all 15 decimal positions, and any pass through a validation routine that invokes a lower-precision function in the background will reduce the precision of the value.
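As a rough illustration of that snapping (Python here, since the arithmetic is language-agnostic; the 1e-9 grid size and the rounding rule are assumptions, not the geodatabase's exact algorithm):

```python
# Snap the questioner's X coordinate to an assumed 1e-9 resolution grid.
# This is a sketch of the idea, not the geodatabase's actual implementation.
x = -92.836516205759764   # value before Store()
resolution = 1e-9         # assumed: 9 decimal places of resolution

snapped = round(x / resolution) * resolution

print(snapped != x)                    # True: the stored value is not the original
print(abs(snapped - x) < resolution)   # True: but it moved less than one grid cell
print(f"{snapped:.17g}")               # trailing digits are binary round-off noise
```

The long string of digits after the ninth decimal place in the snapped value is exactly the kind of noise shown in the "After" coordinates above.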

The values in degrees are at most off by .00037 of a foot, which is beyond any normal real-world survey precision/accuracy. The actual precision/accuracy of your input values is suspect if this data relies on standard survey instrumentation measurements, since you have exceeded its rated accuracy. Therefore, in reality, there is no loss of accuracy or precision in terms of meaningful, measurable data.
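A quick back-of-the-envelope check of that magnitude (the feet-per-degree figure below is a rough mid-latitude approximation I am assuming, not a value from the original post):

```python
# Approximate real-world size of the snap adjustment.
feet_per_degree = 364_000      # rough feet per degree of latitude (assumption)
max_shift_deg = 0.5e-9         # at most half of an assumed 1e-9 resolution cell

max_shift_ft = feet_per_degree * max_shift_deg
print(max_shift_ft < 0.001)    # True: well under a thousandth of a foot per axis
```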

Your concern more likely stems from some form of programmatic join or comparison between double fields or values. Joins should never rely on double values at such high resolutions, and exact equality comparisons are unreliable for doubles beyond a certain number of decimal digits. At these levels of resolution I normally have to compare two fractional double values using greater-than/less-than (tolerance) logic rather than equals logic.
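For example, comparing the questioner's Z values with exact equality fails, while a tolerance-based comparison succeeds (sketched in Python; `math.isclose` plays the role of the greater-than/less-than bracketing described above, and the tolerance value is an arbitrary choice for illustration):

```python
import math

z_before = 900.537152489813     # Z before Store()
z_after = 900.53715249001107    # Z after Store()

# Exact equality is unreliable at this precision...
print(z_before == z_after)                            # False

# ...so compare within a tolerance instead, chosen comfortably
# larger than the snap adjustment but smaller than meaningful differences.
print(math.isclose(z_before, z_after, abs_tol=1e-6))  # True
```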

Anyway, this is simply the behavior that follows from choosing a tolerance and resolution for your data. Setting the tolerance at a given number of decimal places means exactly what it says: you will tolerate adjustments to the value as long as they stay within that decimal resolution.