Problem with precision and scale in new field

07-18-2016 12:28 PM
Md_YousufReja
New Contributor

Hello GeoNet community,

I have a question regarding the precision and scale of a new field. I am making a new field where I need a total of 18 digits, including 8 digits after the decimal point. For example, the math is

900000000 + 1111*10000 + 1111*1 + 11111111/100000000

I need the result to be 911111111.11111111

But it shows 911111111.11111116.

FYI, I used Double as the field type and set the precision and scale when I added the field. I would appreciate it if anyone has some ideas to explore.

Thanks

Reja

DanPatterson_Retired
MVP Emeritus

[attached screenshot: the same expression evaluated in a Python 3.5 session, with the trailing digits coming out ...6 / ...2] I am using Python 3.5 as a demo... maybe mine is more 'incorrectly accurate'.
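A session along these lines reproduces the effect (a minimal sketch, not the exact screenshot; the trailing digits you get depend on how the nearest double rounds):

>>> # the expression from the original post, evaluated in ordinary double precision
>>> x = 900000000 + 1111*10000 + 1111*1 + 11111111/100000000
>>> format(x, '.8f')   # typically ends in something other than ...11111111
>>> repr(x)            # the shortest string that round-trips to the stored double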

AbdullahAnter
Occasional Contributor III

Dan, the 6 is not logical.

I think there is a different problem here, not precision.

Or maybe Yousuf wrote a wrong number.

DanPatterson_Retired
MVP Emeritus

Get him to do the test... subtract the two numbers. In the big picture I am sure it won't matter, since if he is using floating-point representations in that manner, the problem should be handled by moving the data towards the 0,0 origin (subtracting the mean from all values), or by working with integers, or by one of the other ways of dealing with floating-point math.
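A minimal sketch of both ideas, using the numbers from the original post (my illustration of the suggestion, not tested against the poster's data):

>>> # 1) the test: subtract the intended value from the computed one to see the size of the error
>>> x = 900000000 + 1111*10000 + 1111*1 + 11111111/100000000
>>> x - 911111111.11111111     # on the order of 1e-07 or smaller at this magnitude
>>> # 2) the offset idea: drop the constant 900000000 base before doing float math;
>>> #    near 11 million a double has spare precision, so the 8 decimals survive
>>> small = 1111*10000 + 1111*1 + 11111111/100000000
>>> format(small, '.8f')       # should print 11111111.11111111; rounding error here is ~1e-09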

Md_YousufReja
New Contributor

Thanks Dan and Anter for going deep into it. I am working on making 2,880,000 unique cell IDs for a panel. When I joined the dbf file of all the IDs and did the math, I found that the last digit, or sometimes the last two digits, of the values change. When I added the field, I used Double as the field type and set the precision to 17 and the scale to 8. Here are some examples:

900000000 + 2252*10000 + 1245*1 + 11111111/100000000 = 922521245.11111116

900000000 + 2252*10000 + 1245*1 + 11111121/100000000 = 922521245.11111116

900000000 + 2252*10000 + 1245*1 + 11112111/100000000 = 922521245.11112106

900000000 + 2252*10000 + 1245*1 + 11112121/100000000 = 922521245.11112118

... and so on.
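For what it's worth, that pattern matches plain double rounding rather than a field-setting problem: near 922 million, adjacent doubles are roughly 1.2e-07 apart, so anything in the 8th decimal place has to snap to the nearest representable step. A quick sketch of the spacing:

>>> # 922521245 lies between 2**29 and 2**30, so neighbouring doubles differ by 2**-23
>>> 2.0**-23
1.1920928955078125e-07
>>> # a result such as 922521245.11111111 is rounded to the nearest multiple of that step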

Thanks

DanPatterson_Retired
MVP Emeritus

don't use a dbase file

unique values should be integer, don't rely on float

or scale your unique values as an ordered selection from the system minimum to the system maximum

perhaps if you elaborated on why you need 2,880,000 unique cell IDs for a panel (of what?), it might help to come up with some other solutions

Md_YousufReja
New Contributor

Thanks Dan! I need to develop 2,880,000 cell IDs for a mapping application model, where each cell has to be unique, because other raster and data layers will be analyzed by these cells. I know that the maximum number of digits stored internally in a double is about 17. In this case I'm using that maximum number of digits, but somehow the last one or two digits get rounded. Is there anything that I have to fix in the settings?
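As a side note, that 15-17 digit ceiling comes from the double type itself, not from the precision/scale entered on the field, so as far as I know no field setting will bring the lost digits back. A minimal check in Python:

>>> import sys
>>> sys.float_info.mant_dig    # mantissa bits in an IEEE-754 double
53
>>> sys.float_info.dig         # decimal digits guaranteed to survive a round trip
15
>>> 2**53                      # integers are exact only up to here; beyond this, whole numbers get skipped
9007199254740992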

DanPatterson_Retired
MVP Emeritus

64-bit system, integer range

>>> min_int = -2**63
>>> max_int = 2**63 - 1
>>> print(" from {} to {}".format(min_int, max_int))
from -9223372036854775808 to 9223372036854775807

32-bit system, integer range

>>> min_int = -2**31
>>> max_int = 2**31 - 1
>>> print(" from {} to {}".format(min_int, max_int))
from -2147483648 to 2147483647

I think you have enough unique values by sticking to integer
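One way to act on that, if the workflow allows it, is to carry the 8 "decimal" digits as the low digits of an integer key instead of as a fraction. This is only a sketch of the idea, not something from the thread:

>>> # the same components as the examples above, scaled by 10**8 so the value stays an exact integer
>>> cell_id = 900000000 * 10**8 + 2252*10000 * 10**8 + 1245 * 10**8 + 11111111
>>> cell_id                    # 17 digits, stored exactly
92252124511111111
>>> cell_id <= 2**63 - 1       # comfortably inside the 64-bit range above
True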