This raster is pretty small.
If you are at 10.1 or later, have you considered copying it to the in_memory workspace and seeing if that runs faster? I would expect it to, though you may have to try the different sampling tools to see which one works best.
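A minimal sketch of that approach (the paths are hypothetical, and it assumes a Spatial Analyst license for the Sample tool):

import arcpy
from arcpy.sa import Sample

arcpy.CheckOutExtension("Spatial")

# Copy the source raster into the in_memory workspace (10.1+)
arcpy.CopyRaster_management(r"C:\data\my_raster.tif", "in_memory/ras")

# Sample the in-memory copy at your point locations, and compare the
# timing against sampling the raster on disk
Sample("in_memory/ras", r"C:\data\points.shp", "in_memory/sample_table", "NEAREST")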
James,
Assuming 32-bit values, you're still only looking at something like 268 MB (236*552*540*4 bytes) of data, which should fit into memory without a problem. Moving both your rasters and points into memory would be a good place to start, as Curtis mentions, since your sampling has to iterate over every point and every raster, which isn't particularly fast. Another option that would take more work is to stack the rasters into a multidimensional array, for example with NumPy or NetCDF. Then you can pull out the values of all 540 rasters at one point by slicing a single vector, which should greatly improve performance; see the sketch below.
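A rough sketch of the stacking idea with NumPy (the workspace path and cell indices are placeholders, and it assumes all 540 rasters share the same extent and cell size so their arrays line up):

import arcpy
import numpy as np

arcpy.env.workspace = r"C:\data\rasters"   # hypothetical folder holding the 540 rasters

# Read every raster into a 2D array, then stack them into one 3D array of
# shape (n_rasters, n_rows, n_cols) -- roughly 268 MB at 32-bit, per the math above
arrays = [arcpy.RasterToNumPyArray(r) for r in arcpy.ListRasters()]
stack = np.array(arrays, dtype=np.float32)

# All 540 values at a single cell come back as one vector
row, col = 100, 200             # placeholder cell indices
values = stack[:, row, col]     # shape (540,)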
cheers,
Shaun
import arcpy

# Read each raster into a NumPy array and collect the arrays in a list
rasterarray = []
for conraster in concrasters:
    rasArray = arcpy.RasterToNumPyArray(conraster)
    rasterarray.append(rasArray)
Then I run Sample with my input polyline.
This sounds like a great idea. In my experience NumPy arrays are stored very efficiently, so if you have zeros/NoData values etc. you will probably be just fine memory-wise.
You'd have to convert your line to points and then convert the points to NumPy row/column indices using basic geometric arithmetic against your extent and cell size.
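Something along these lines should do for the coordinate conversion (a sketch; the raster path is a placeholder, and it assumes north-up rasters with square cells):

import arcpy

raster_path = r"C:\data\rasters\ras1.tif"   # hypothetical; any one of the stacked rasters
desc = arcpy.Describe(raster_path)
xmin = desc.extent.XMin
ymax = desc.extent.YMax
cell = desc.meanCellWidth                   # assumes square cells

def xy_to_rowcol(x, y):
    # NumPy row 0 is the top of the grid, so rows count down from YMax
    col = int((x - xmin) / cell)
    row = int((ymax - y) / cell)
    return row, col

Feed each vertex (or densified point) of your line through xy_to_rowcol and use the results to index the stacked array as shown above.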