
ESRI and Microsoft SQL disagreeing on what is a valid shape

06-18-2016 07:13 PM
JohnCuthbertson
Emerging Contributor

We are using Microsoft SQL Server 2012 and ESRI 10.2 in a Windows environment; shapes are stored in the SQL Server Geometry format.

We are trying to programmatically add shapes directly to an ESRI geodatabase (i.e., via SQL, not ESRI routines).

Some of the shapes have suspect self-intersections and we need to either make them valid, or reject them.

The issue we are having is that the raw SQL environment considers the shapes valid, but when they are read by ESRI products some of them are considered invalid.

This issue seems to be related to the number of decimal places used by SQL versus that used by ESRI.

The attached SQL code demonstrates the issue:

  • The initial shape is considered by SQL to be valid, but when inserted into an ESRI layer, ESRI considers it self-intersecting.
  • By reducing the number of decimal places (7 for longitude and 8 for latitude), we can get ESRI to agree with SQL that the shape is ‘valid’.
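
The attachment is not reproduced here, but a minimal T-SQL sketch of the same effect might look like the following. The coordinates are invented for illustration, and the XY resolution assumed (~1.6e-8 degrees) is the one quoted later in this thread; the interior vertex sits 1e-10 degrees above the bottom edge, so the ring is simple at full double precision but touches itself once snapped to the resolution grid.

    -- Hypothetical sketch (not the original attachment): the vertex at
    -- (5.0000000001, 0.0000000001) lies 1e-10 degrees above the bottom edge,
    -- which is finer than an XY resolution of ~1.6e-8 degrees.
    DECLARE @g geometry = geometry::STGeomFromText(
        'POLYGON ((0 0, 10 0, 10 10, 5.0000000001 0.0000000001, 0 10, 0 0))', 4326);

    SELECT @g.STIsValid() AS valid_in_sql;   -- expected: 1 (SQL Server sees a simple ring)

    -- Rounding before the INSERT (7 decimals for longitude, 8 for latitude)
    -- means SQL Server evaluates the same coordinates ArcGIS will see, so the
    -- two systems then agree (here both would report the self-touch).
    SELECT ROUND(5.0000000001, 7) AS lng_rounded,   -- 5.0
           ROUND(0.0000000001, 8) AS lat_rounded;   -- 0.0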

My question is: have I found a solution that applies in all cases (i.e., will ESRI and SQL always agree on whether a shape is valid)?

Feedback from anyone with knowledge in this area would be appreciated.

12 Replies
JoshuaBixby
MVP Esteemed Contributor

A couple of questions:

  1. What are the XY Resolution and Tolerance of the feature class you are working with?
  2. You appear to be using a geographic (not projected) spatial reference, but you are working with SQL Server's GEOMETRY data type instead of the GEOGRAPHY type. Why not use GEOGRAPHY, since you are using a geographic spatial reference?
JohnCuthbertson
Emerging Contributor

XY Resolution is 0.000000016065314

Tolerance is 0.000000032130629

A response to the Geography vs. Geometry question could fill a book, but I do not think it is relevant to this subject (or is it?).
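
Worth noting: that tolerance is twice the resolution, which is the minimum ArcGIS allows. Coordinates in the feature class effectively live on a grid spaced at the XY resolution, so values closer together than that spacing can collapse to the same grid cell. A rough T-SQL sketch of the effect follows; the sample coordinates are invented, and whether ArcGIS rounds or truncates when snapping is an internal detail, so ROUND is used purely for illustration.

    -- Snap two nearby longitudes to a grid spaced at the posted XY resolution.
    DECLARE @res float = 0.000000016065314;           -- XY resolution from above
    DECLARE @x1  float = 151.209300000000;
    DECLARE @x2  float = 151.209300000009;            -- 9e-12 apart: finer than the grid

    SELECT ROUND(@x1 / @res, 0) * @res AS snapped_x1,
           ROUND(@x2 / @res, 0) * @res AS snapped_x2; -- almost always identical after snapping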

DanPatterson_Retired
MVP Emeritus

Is SQL using float or double? (i.e., single precision or double precision per the IEEE 754 spec)
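
For anyone unfamiliar with the distinction Dan is drawing, a minimal T-SQL illustration (the test value is arbitrary):

    -- In T-SQL, float defaults to float(53): IEEE 754 double, ~15-17 significant
    -- digits. real is float(24): IEEE 754 single, ~7 significant digits.
    DECLARE @d float = 151.20930001;
    DECLARE @s real  = 151.20930001;

    SELECT @d AS as_double,   -- 151.20930001
           @s AS as_single;   -- 151.2093 (the trailing digits are lost)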

VinceAngelo
Esri Esteemed Contributor

I believe they pack the bits themselves, but it's a lot closer to the old BASIC precision limitations of pre-9.2 ArcSDE (and not even the full 31 bits of BASIC in the 2008 R1 implementation). The part that peeves me is that the generated WKT carries twelve to fourteen digits, even though only the first four to six are stored.

- V

DanPatterson_Retired
MVP Emeritus

Vince... It was reminding me of the geometry errors introduced when the spatial reference wasn't specified and the default for geometry calculations fell back to single precision rather than double. I just wanted to rule that out as a possibility, at least for the Esri files/queries/calculations.

VinceAngelo
Esri Esteemed Contributor

There's certainly a noise-to-signal issue when the topology is evaluated at a resolution which is coarser than the coordinates are expressed. Data quality is an issue -- I've seen SQL Server shift coordinates a dozen Angstroms and be happy with the new geometry, where even the insanely precise default coordinate reference found an intersection (all for coordinates which were originally recorded at five decimal places). The geometries really need to be cleaned with respect to the native data precision (be it decimeter, centimeter, or meter). It's rare that WGS84 data is accurate to under a meter, so thrashing around at sub-millimeter precision is just a distraction.

- V
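
One possible in-database cleanup along these lines, sketched under assumptions: MakeValid() and Reduce() are standard SQL Server geometry methods, but Reduce() thins vertices (Douglas-Peucker) rather than snapping to a grid, so this only approximates the cleanup Vince describes. The 1e-5 degree "native precision" (roughly a metre) and the sample shape are invented.

    -- Repair the ring per OGC rules, then drop vertex detail finer than the
    -- assumed native precision of the data.
    DECLARE @dirty geometry = geometry::STGeomFromText(
        'POLYGON ((0 0, 10 0, 10 10, 5 0, 0 10, 0 0))', 4326);  -- self-touching ring

    SELECT @dirty.MakeValid()      -- returns a corrected (possibly multi-part) shape
                 .Reduce(0.00001)  -- simplify below ~1 m (assumed capture precision)
                 .STAsText();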

JohnCuthbertson
Emerging Contributor

Dan, SQL is using the vanilla spatial facilities provided by SQL Server 2012. I do not know how to check whether it is using float or double (or how to change it).

If your question relates to the code that is generating the coordinates, then that is irrelevant. The fact is that I have a shape that SQL considers valid and ESRI considers invalid.

I have just read the interchange between Dan and Vince.

VinceAngelo
Esri Esteemed Contributor

I think part of the problem is the way you are defining the issue.  ArcGIS has always allowed users to define the precision at which coordinates are captured.  SQL-Server has hard-coded internal limits which have evolved over time, and which are significantly smaller than the precision permitted by Esri.

If you wish to reduce the default XY Tolerance of a feature class in an enterprise geodatabase, all you need to do is specify it during feature class creation. Of course, this is more often the cause of geometry corruption issues than it is the solution.

In the end, what Microsoft considers "close enough" isn't good enough for a data model that can process 53 significant bits. If you want to load geometries outside of ArcGIS, you need to be prepared to repair improper geometries.

- V

JohnCuthbertson
Emerging Contributor

Vince, if I understand what you are saying (and what you said in the comments above), then Microsoft Geometry only uses 31 bits to store the individual coordinates, even though WKT may display more. Accordingly, if we are storing ESRI shapes using the default GEOMETRY storage, our precision is limited to 31 bits no matter how we insert the data (using native SQL or ESRI APIs). Does this change if we use GEOGRAPHY? My question then becomes: what is the resultant accuracy, and what is an appropriate maximum XY Tolerance that reflects it? I am going away now to experiment with raw SQL Spatial code to verify this understanding.

Please stop me if I am going down the wrong track.
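
A minimal round-trip test along those lines, sketched with an arbitrary coordinate: if GEOMETRY stores the full IEEE 754 double, the difference below is exactly zero, while a non-zero result would confirm that precision is being discarded. (The same check for GEOGRAPHY would use geography::Point(lat, long, 4326) and its Long property instead of STX.)

    -- Round-trip a coordinate through the GEOMETRY type and measure the loss.
    DECLARE @x float = 151.12345678901234;   -- arbitrary test longitude
    DECLARE @p geometry = geometry::Point(@x, -33.12345678901234, 4326);

    SELECT @p.STX - @x   AS x_roundtrip_error,  -- non-zero => bits were dropped
           @p.STAsText() AS wkt_as_printed;     -- count the digits WKT reports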
