Calculate Slope Between Pixels Through Time: A Time Series Trend Analysis

05-26-2014 03:02 PM
BenSciance
New Contributor II
I have a 9-year weekly time series (~500 raster grids of equal cell size and extent). I'm interested in obtaining the regression-line slope at each pixel through time (imagine stacking all 500 grids on top of one another and running a linear regression through each pixel's stack of values). Essentially, this would measure the net change at each pixel over my time series.

My main question is about the coefficients defined in the first answer of the posts linked below. Is the coefficient 12 because the original question was at a monthly temporal scale? And if so, should my code's coefficient be set to 7 due to the weekly temporal scale between my grids?
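As a quick sanity check on that question, here is a small standalone snippet I put together (it assumes NumPy is available and uses a made-up 1-D series, not my actual grids) that compares the weighted-sum slope from that coefficient formula against numpy.polyfit:

from __future__ import division
import numpy as np

# made-up weekly series; any 1-D sequence works for the comparison
values = np.array([2.0, 3.5, 3.1, 4.8, 6.0, 5.7, 7.2])
n = len(values)

# weighted-sum slope using the coefficients from the linked answer
seed = n**3 - n
coefs = np.array([(12*i - 6*(n + 1)) / seed for i in range(1, n + 1)])
slope_weighted = (coefs * values).sum()

# ordinary least-squares slope against time steps 1..n
slope_polyfit = np.polyfit(np.arange(1, n + 1), values, 1)[0]

print slope_weighted, slope_polyfit  # the two should match

Either way, the slope that comes out of this formula is in units of change per time step, which for my grids would be change per week.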

I have been referencing these posts that are similar:

Make linear regression across multiple raster layers:
(http://gis.stackexchange.com/questions/65787/arcgis-make-linear-regression-across-multiple-raster-la...)

Representing trends over time:
(http://gis.stackexchange.com/questions/52502/how-to-represent-trend-over-time?rq=1)


I wrote this Python code after reading the links referenced above:
from __future__ import division
import arcpy
import os
from arcpy import env
from arcpy.sa import *


# define workspace
arcpy.env.workspace=r"C:\Users\mbs7038\Documents\Remove_Gauge_Outliers\test\forSlope.gdb"

# enable overwriting
arcpy.env.overwriteOutput=True

# check out the Spatial Analyst extension
arcpy.CheckOutExtension('Spatial')

# define output paths
slopePath=r"C:\Users\mbs7038\Documents\Remove_Gauge_Outliers\SpatialTrends\SlopeTEST.gdb"

# list all rasters in the workspace
rasters=arcpy.ListRasters('*')
# sort rasters by name (lexical sort; assumes zero-padded names so the order is chronological)
rasters.sort()

# get the number of rasters
n=len(rasters)
print n

# time-step index (1-based, as the coefficient formula expects)
i=1

# denominator of the least-squares slope formula: n^3 - n
seed=(n*n*n)-n
print 'the global seed is {0}'.format(seed)

for raster in rasters:
    print i
    coef=(12*i-6*(n+1))/seed  # weight for raster i: (12i - 6(n+1)) / (n^3 - n)

    print 'Raster {0} is {1}:'.format(i,raster)
    print 'the coef for raster {0} is {1}'.format(i,coef)

    # Multiply raster by coefficient and add to output slope grid
    if i==1:
        outSlope=(Raster(raster)*coef)
        i+=1  # same as saying i=i+1
    else:
        print 'adding {0} to outSlope'.format(raster)
        outSlope=outSlope+(Raster(raster)*coef)
        i+=1
    # stop after the first 5 rasters (test subset only; remove this block to run the full series)
    if i==6:
        break

# Save final slope grid
print 'saving final slope grid'
outSlope.save(os.path.join(slopePath, "floodChange_py"))

print 'script is complete'


I wrote this code as a test and it appeared to work on a 5-week subset of my data. If what I have referenced above is correct, the same approach should work on a time series (of equal cell size and extent) of any length once the test break is removed. The script ran without errors, but I have a feeling I may have made a mistake somewhere and would appreciate extra eyes on the code.
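One extra check I'm thinking about (a rough sketch only; it assumes the 5-week subset fits in memory and ignores NoData handling) is to recompute the per-pixel slope for the subset with NumPy and compare it against the saved grid:

import numpy as np
import arcpy

arcpy.env.workspace = r"C:\Users\mbs7038\Documents\Remove_Gauge_Outliers\test\forSlope.gdb"
rasters = arcpy.ListRasters('*')
rasters.sort()

# stack the subset into a 3-D array: (time, rows, cols)
stack = np.array([arcpy.RasterToNumPyArray(r) for r in rasters], dtype=np.float64)
n, rows, cols = stack.shape

# least-squares slope at every pixel against time steps 1..n
t = np.arange(1, n + 1)
slopes = np.polyfit(t, stack.reshape(n, rows * cols), 1)[0].reshape(rows, cols)

print slopes.min(), slopes.max()  # compare against the floodChange_py grid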

Any advice or recommendations?

On a side note, I'm running this under Python 2 (in PyScripter), hence the from __future__ import division in the first line. Without it, dividing two integers with / performs floor division, rounding the result down toward negative infinity, which would collapse my small fractional coefficients to 0 or -1.
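For example, in a Python 2 interactive session:

>>> 3 / 2        # integer division: floors to 1
1
>>> -3 / 2       # floors toward negative infinity, not toward zero
-2
>>> from __future__ import division
>>> 3 / 2        # true division
1.5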