
Problem using INT

08-01-2010 07:42 AM
SethBinder
Emerging Contributor
Hi. I've got 32-bit floating-point rasters that I'm trying to convert to 32-bit unsigned integer format (for the ultimate purpose of being able to use the Combine tool). All of the values are already positive integers, and the largest values fall (barely) below the 32-bit unsigned maximum of 4,294,967,295. However, when I use INT, all values above the *signed* integer maximum of 2,147,483,647 are cut off. (Similarly, when I create an empty 32-bit unsigned integer raster dataset, the default NoData value is set at the signed maximum rather than the unsigned maximum.) Does anyone know why this is happening and/or whether there's a work-around? Thanks.
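
Roughly, the step that clips the values looks like this (a simplified arcpy sketch, assuming ArcGIS 10 with Spatial Analyst; the paths are placeholders):

# Simplified sketch of the failing step (placeholder paths).
# Int appears to return a signed 32-bit result, which would explain
# why values above 2,147,483,647 get clipped.
import arcpy
from arcpy.sa import Int

arcpy.CheckOutExtension("Spatial")

float_ras = arcpy.Raster(r"C:\data\my_float_raster")  # 32-bit float input
int_ras = Int(float_ras)                              # truncate to integer
int_ras.save(r"C:\data\my_int_raster")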

--Seth
7 Replies
RobertBerger
Occasional Contributor
Hi Seth,

Have you tried using the Copy Raster GP tool and specifying the output pixel type as 32-bit unsigned integer?
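
Something along these lines (a rough sketch; substitute your own paths):

# Rough sketch: Copy Raster with the pixel type forced to 32-bit unsigned
# (placeholder paths).
import arcpy

arcpy.CopyRaster_management(
    in_raster=r"C:\data\my_float_raster",
    out_rasterdataset=r"C:\data\mygdb.gdb\my_uint_raster",
    pixel_type="32_BIT_UNSIGNED")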

Robert
SethBinder
Emerging Contributor
Robert,

Thank you. I hadn't thought of that. It worked perfectly.
SethBinder
Emerging Contributor
It seems I spoke too soon. I just built the attribute tables for the copied rasters and discovered that the values don't correspond at all to the original data. According to the dataset Properties, everything should be in order, but looking at the tables, I get negative values where there should be none, and the largest values don't approach the true max value. This is bizarre.
RobertBerger
Occasional Contributor
Seth,

What format are you outputting your image to? If your output is 32-bit unsigned integer (such as a TIFF or IMG), you shouldn't be able to get negative values. Don't output to GRID, since that is a special case. Also, I don't think we support attribute tables with 32-bit data.
To check the output values, you could use the Identify tool in ArcMap: overlay the two images and show Identify results for all visible layers (make sure both images have visibility turned on).
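If you'd rather check from a script, comparing the raster statistics should also work (a rough sketch; the paths are placeholders):

# Compare the value ranges of the original and copied rasters
# (placeholder paths; statistics are recalculated first so they exist).
import arcpy

for ras in (r"C:\data\my_float_raster", r"C:\data\mygdb.gdb\my_uint_raster"):
    arcpy.CalculateStatistics_management(ras)
    vmin = arcpy.GetRasterProperties_management(ras, "MINIMUM").getOutput(0)
    vmax = arcpy.GetRasterProperties_management(ras, "MAXIMUM").getOutput(0)
    print("{0}: min={1}, max={2}".format(ras, vmin, vmax))
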
Does that help?

Robert
SethBinder
Emerging Contributor
Robert,

I'm using a file geodatabase. The original data format was Arc/Info ASCII GRID (metadata here).

I tried using ArcMap as you suggested. Identify does not reveal any discrepancy between the original and copied layers. Interestingly, if I switch the Symbology of the copied layer from Stretched to Classified, the values change completely, appearing in the TOC as they do in the attribute table. Even then, the Identify tool still reveals no discrepancy; it reads the original values.

The problem certainly seems to be with the attribute table, but my understanding was that ArcGIS does support attribute tables for 32-bit integer data (though not for floating point). In fact, I've created attribute tables for other 32-bit int datasets, and they seem fine. It's very important for me to be able to generate these tables in order to export data for use with other software. For the moment, I'm sacrificing a significant digit and creating tables from 16-bit data, but I'd like to avoid that if I can.

Thanks again for your help,

Seth
RobertBerger
Occasional Contributor
Hi Seth,

Can you try rebuilding the attribute table?
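
For example (a rough sketch; the path is a placeholder):

# Rebuild the raster attribute table, overwriting the existing one
# (placeholder path).
import arcpy

arcpy.BuildRasterAttributeTable_management(
    r"C:\data\mygdb.gdb\my_uint_raster", "Overwrite")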

Robert
SethBinder
Emerging Contributor
Robert,

It seems my data were corrupted somewhere along the line. I started from scratch and have now been able to Combine the processed datasets as needed, including some 32-bit integer data. 
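
For reference, the Combine step that now works is essentially this (a simplified sketch; the paths are placeholders, and the Spatial Analyst extension is required):

# Combine the processed integer rasters (placeholder paths).
import arcpy
from arcpy.sa import Combine

arcpy.CheckOutExtension("Spatial")

combined = Combine([r"C:\data\int_raster_1", r"C:\data\int_raster_2"])
combined.save(r"C:\data\mygdb.gdb\combined")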

(I didn't rebuild the attribute tables for the corrupted data, but if you're still interested in seeing how that would turn out, I'm happy to try.)

Thanks,

Seth