I am a somewhat experienced ArcMap user. Regardless, this is my first time encountering issues while processing tifs. I have several inputs corresponding to population, built-up areas, residential areas and floor areas. All of them are .tif files and are to be used in a linear regression that aims to estimate built-up floor areas for a whole country. The regression is run externally with a Python script that uses rasterio, amongst other packages.
In this first image you can see the value ranges of the inputs. For all of them, the NoData value is set to 0 in each specific .tif.
However, when running the code, estimates are created on NoData (0) pixels. Moreover, the output values are abnormally high, and in most of my attempts they come out negative; consequently, the regression is not working. Alternatively, I have tried making small calculations, and even just exporting the rasters still outputs random values. The following image is from the Copy Raster tool (the original is out_pop.tif).
Can someone please give me a hint about what may be happening? From my point of view, the answer has to be on the GIS side and not in the code, since the code only takes the raster data to create the matrices.
Thanks to all!
I suspect your tif's bit depth (data type) is too small for the values you are producing. I can only demonstrate this with NumPy, so bear with me... just pretend every 'array' is a raster.
# ---- maximum values for various bit types
np.iinfo(np.int8).max # 127
np.iinfo(np.uint8).max # 255
np.iinfo(np.uint16).max # 65535
np.iinfo(np.int16).max # 32767
np.iinfo(np.uint32).max # 4294967295
np.iinfo(np.int32).max # 2147483647
# ---- now what happens when you get a value > the maximum allowed for the bit type
a = np.array([30000], dtype=np.int16)
b = a * 2
b   # ---- array([-5536]) ... hmmmm, 60000 overflows int16 and wraps around to a weird negative number
a1 = np.array([30000], dtype=np.int32)  # keep the number the same, change the bit type
b1 = a1 * 2
b1  # ---- array([60000]) ... all is good
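On the other symptom — estimates appearing on NoData pixels — note that rasterio hands your script a plain array, so pixels whose value happens to be 0 are just ordinary numbers unless you mask them out yourself before fitting. A minimal sketch of that idea, using small made-up arrays in place of your rasters (the values and the simple `np.polyfit` fit are illustrative assumptions, not your actual data or model):

```python
import numpy as np

# Toy stand-ins for two raster bands; 0 marks NoData in both, as in the question.
pop = np.array([[0, 10, 20],
                [30, 0, 50]], dtype=np.float64)
floor = np.array([[0, 12, 25],
                  [33, 0, 55]], dtype=np.float64)

# Keep only pixels that are valid in every band involved in the regression.
valid = (pop != 0) & (floor != 0)

# Flatten to 1-D samples; NoData pixels never reach the fit,
# so they cannot receive or distort estimates.
x = pop[valid]
y = floor[valid]

# Simple degree-1 least-squares fit on the valid pixels only.
slope, intercept = np.polyfit(x, y, 1)
```

Casting to a float dtype before any arithmetic, as above, also sidesteps the integer-overflow wraparound shown earlier.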