POST
|
I don't think anything has changed after all these years... Curve features still send me into fits of rage (in fact, just 2 days ago I encountered an odd buffered polygon geometry that, when used as input to the Clip_management tool for clipping rasters, resulted in weird inverted output). The fix: export to shapefile and use the densified geometry as the clipping geometry... Problem solved! So, needless to say, I am still hatin' on them!!! Sure wish there was an environment setting to keep them from happening in the 1st place... or at least a method of detecting their presence in a feature class. Nope!
03-31-2016 10:12 AM | 1 | 1 | 3784
POST
|
Before you run the Clip tool, run the Dice tool (Dicing Godzillas (features with too many vertices) | ArcGIS Blog) on the layer you are trying to clip. The Dice tool is specifically for the case when the input geometry is "too big to process" (one of the symptoms being the 'out of memory' error).
01-12-2016 09:33 AM | 1 | 4 | 1401
POST
|
Yes, there is a way to improve performance... Like Dan mentioned, don't use ArcGIS (directly at least). Here is another approach leveraging Python dictionaries. Note that this code only works for planar coordinates (UTM, State Plane, etc.), so if your data is in Geographic coordinates you need to use another distance function! BTW: this code assumes all fields are fully populated with valid values; if not, you'll have to handle those exceptions. Numpy has some nice ways to do all this stuff too - I usually prefer to roll my own though.

import math
import arcpy

def getDist(x1, y1, x2, y2):
    return math.sqrt((x1 - x2)**2 + (y1 - y2)**2)

# key each point by its (x, y) coordinates; value is the remaining fields
sitesDict = {(r[0], r[1]): r[2:] for r in arcpy.da.SearchCursor(sitesFC, ["SHAPE@X", "SHAPE@Y", "RESTAURANT_ID"])}
customersDict = {(r[0], r[1]): r[2:] for r in arcpy.da.SearchCursor(custFC, ["SHAPE@X", "SHAPE@Y", "INCOME"])}

for x1, y1 in sitesDict:
    incomeList = []
    for x2, y2 in customersDict:
        if getDist(x1, y1, x2, y2) <= 1000:  # or whatever map unit threshold...
            income = customersDict[(x2, y2)][0]
            if income > 0:
                incomeList.append(income)
    incomeListLen = len(incomeList)
    if incomeListLen:
        print("Restaurant #" + str(sitesDict[(x1, y1)][0]) + " has " + str(incomeListLen) +
              " customers - average income is " + str(sum(incomeList) / float(incomeListLen)))
01-04-2016 02:21 PM | 0 | 0 | 940
POST
|
The tool 'Generate Near Table' (BTW: only available at the ArcInfo license level) would also work for something like this... Basically it builds a distance-based one-to-many association table. A warning: the resulting output table is often many orders of magnitude larger than the number of input features - the explanation being that one house can be within 100 miles of many different restaurants. Be sure to apply a search radius! As an experiment, you might 1st apply a very small search radius (< 1 mile) to see the format of the output data you will be dealing with... before you accidentally end up with an output table that is 300 million records. If you are running out of RAM, consider installing the 64-bit background geoprocessor: Background Geoprocessing (64-bit)—Help | ArcGIS for Desktop. Not sure exactly what you are doing here, but if I were doing this in a script, I would just loop through the restaurants, select the houses, gather their stats, and write out some result. Something like:

import arcpy

foodPnts = r"C:\temp\test.gdb\restaurants"
housePnts = r"C:\temp\test.gdb\houses"
arcpy.MakeFeatureLayer_management(housePnts, "fl")
searchRows = arcpy.da.SearchCursor(foodPnts, ["SHAPE@", "RESTAURANT_ID"])
for searchRow in searchRows:
    shapeObj, restaurantId = searchRow
    arcpy.SelectLayerByLocation_management("fl", "INTERSECT", shapeObj, "100 MILES")
    recordCount = int(arcpy.GetCount_management("fl").getOutput(0))
    print("There are " + str(recordCount) + " houses within 100 miles of restaurant #" + str(restaurantId))
    # using the selected features/records in "fl", you now have a hook to get their names, incomes, etc.
del searchRow, searchRows
12-28-2015 04:48 PM | 2 | 2 | 2380
POST
|
Another method using Python:

import arcpy

myTbl = r"C:\temp\my_fgdb.gdb\my_table"
arcpy.AddField_management(myTbl, "UNIQ_ID", "LONG")
dataDict = {}
i = 1
updateRows = arcpy.da.UpdateCursor(myTbl, ["LAST_NAME", "UNIQ_ID"])
for updateRow in updateRows:
    if updateRow[0] not in dataDict:
        dataDict[updateRow[0]] = i
        i += 1
    updateRow[1] = dataDict[updateRow[0]]
    updateRows.updateRow(updateRow)
del updateRow, updateRows
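The cursor code above is really just "first-seen numbering": each distinct value gets the next integer, and repeats reuse the id already assigned. The same idea stripped of arcpy, with made-up sample names:

```python
# First-seen numbering: each distinct value gets the next integer id,
# repeats reuse the id already assigned (sample data, not from the thread).
lastNames = ["Smith", "Jones", "Smith", "Lee", "Jones"]
dataDict = {}
i = 1
uniqIds = []
for name in lastNames:
    if name not in dataDict:
        dataDict[name] = i
        i += 1
    uniqIds.append(dataDict[name])
print(uniqIds)  # → [1, 2, 1, 3, 2]
```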
12-28-2015 12:11 PM | 3 | 1 | 2055
POST
|
Well at least we have a new forum in which we can do meaningful things...
07-23-2015 04:36 PM | 1 | 0 | 1157
POST
|
So with that mystery solved, does anyone know off the top of their head which versions of scipy and pandas are "sort of" compatible with v10.3.1? The last time I tried this I recall it wasn't very easy and I only got about half the stuff working... which I suppose is why ESRI dropped it.
07-23-2015 02:57 PM | 0 | 1 | 664
POST
|
The hype (Esri Advances Scientific Analysis with SciPy) said it'd be there, but I'm not seeing it as part of the standard ESRI Python install (32- or 64-bit). What's up with that?
07-23-2015 02:08 PM | 1 | 4 | 3233
POST
|
I was actually thinking of converting my units to chains to really throw things off. BTW: according to your converter, 1 m = 3 ft 3 3⁄8 in. I think this output is intended to mock the U.S. system.
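The converter's odd answer is easy to reproduce; a quick sketch of the arithmetic behind "3 ft 3 3⁄8 in" (rounding to the nearest eighth of an inch; `metersToFtIn` is my own throwaway helper):

```python
from fractions import Fraction

INCHES_PER_METER = 39.3700787  # 1 m in inches

def metersToFtIn(m):
    """Split a metric length into feet, whole inches, and the
    nearest-1/8-inch fraction, mimicking a U.S.-style readout."""
    totalInches = m * INCHES_PER_METER
    feet, inches = divmod(totalInches, 12)
    whole = int(inches)
    frac = Fraction(round((inches - whole) * 8), 8)
    return int(feet), whole, frac

print(metersToFtIn(1))  # → (3, 3, Fraction(3, 8))
```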
02-17-2015 05:54 PM | 0 | 1 | 268
POST
|
Sorry Dan and Chris, I should have clarified, but bit depth promotion is not what is going on here. For example, a value of 34.5678 * 3.2808 would not result in an output value >= 3.402823466e+38 (which according to the documentation is the max of the 32-bit float range), yet the resulting raster is output as a 64-bit float anyway... even though there is no reason to warrant that. If I were trying to resolve 3.402823466e+38 * 3.2808 I would of course expect (and want!) the result promoted to a 64-bit depth. Basically what I'm looking for (wishing for!) is either an environment setting or a map algebra function that would suppress the unwarranted bit depth promotion. My feeling is that ESRI probably just hard-coded it so that if a float-type operand is used with an input 32-bit float dataset it auto-promotes, regardless of whether the output values exceed the 32-bit max/min. A more elegant algorithm would test the max values of the rasters, the operations, and the operands, then resolve whether promotion was necessary. More code for sure, but certainly more elegant. Worth noting that multiplying a 32-bit float raster by an integer (34.5678 * 3) retains the original 32-bit float depth in the output.
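The 32-bit threshold being argued about here is easy to check outside ArcGIS. A standard-library sketch (no arcpy; `fitsInFloat32` is my own helper) of when promotion would actually be warranted:

```python
import struct

FLOAT32_MAX = 3.402823466e+38  # approximate max of a 32-bit IEEE-754 float

def fitsInFloat32(value):
    """Try packing a value into 4 bytes of IEEE-754 single precision;
    an OverflowError means the value genuinely needs a 64-bit float."""
    try:
        struct.pack('<f', value)
        return True
    except OverflowError:
        return False

print(fitsInFloat32(34.5678 * 3.2808))       # → True: promotion unwarranted
print(fitsInFloat32(FLOAT32_MAX * 3.2808))   # → False: promotion warranted
```

An "elegant" promotion rule would amount to running a test like this against the raster's actual min/max before choosing the output depth.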
02-17-2015 03:27 PM | 0 | 3 | 1316
POST
|
Hi Jay - Not sure what you mean here... your explanation of the process is not very clear. Basically I have a 32-bit float input raster in FGDB format, and I want to do some simple math involving floating point numbers, for example: myRst * 3.2808. I want the output raster to remain in FGDB format. Currently when this is executed, the output raster (in FGDB format) is pumped out as a 64-bit raster. Is there a direct way to "force" the output to remain a 32-bit float? For what it's worth, I did try the following with no success: 1. Created a 32-bit raster (a copy of the input) and used that as the output (i.e., I overwrote the copy with the output). Result: 64-bit. 2. Altered the NoData environment setting (under the Raster Storage heading) to 'MINIMUM'. Result: 64-bit.
02-17-2015 01:38 PM | 0 | 0 | 1316
POST
|
GRID only supports up to a 32-bit float (64-bit isn't even an option!), so yes, that would work. I actually prefer the GRID format usually, but indeed some things are faster/more efficient with FGDB format. One thing in particular the FGDB format offers (over GRID, .img, etc.) is compression support for 32-bit floats. This comes into play especially for rasters with sparse data coverage (the NoData values compress!). Otherwise your 7.2 GB LiDAR raster in FGDB format ends up being a 120 GB GRID format raster! Yes, a bit depth environment setting or spatial analyst function sure would be nice!
02-12-2015 06:35 PM | 0 | 1 | 1316
POST
|
I've noticed an annoying issue: when I do raster math on a 32-bit float raster dataset, the output (if in FGDB format) gets its bit depth upped to 64-bit. Other than using the CopyRaster tool to copy it back to a 32-bit float format, is there any way to control the bit depth? As far as I know there is no geoprocessing environment setting or spatial analyst function that does this... but it seems like there should be!
02-12-2015 04:54 PM | 1 | 12 | 6787
POST
|
I agree with Mr. Bixby, no need to store row ids. This code using set() objects also does the job:

import arcpy

magicNumberSet = set([16, 17, 18])  # the building must have all of these codes to be in the yesList
buildingDict = {}
searchRows = arcpy.da.SearchCursor('L_DAMAGE_RESULTS_WIND', ["BLDG_ID", "HAZARD_ID"])
for searchRow in searchRows:
    buildingId, hazardId = searchRow
    if buildingId in buildingDict:
        buildingDict[buildingId].add(hazardId)
    else:
        buildingDict[buildingId] = set([hazardId])
yesList = [buildingId for buildingId in buildingDict if magicNumberSet.issubset(buildingDict[buildingId])]
noList = list(set(buildingDict.keys()).difference(yesList))
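Stripped of the cursor, the grouping-plus-issubset() logic behaves like this (building/hazard rows below are made up for illustration):

```python
# Group hazard codes per building, then test each building's set of codes
# against the required codes with issubset() (sample rows, not real data).
magicNumberSet = {16, 17, 18}
rows = [(101, 16), (101, 17), (101, 18), (102, 16), (103, 17), (103, 18)]
buildingDict = {}
for buildingId, hazardId in rows:
    buildingDict.setdefault(buildingId, set()).add(hazardId)
yesList = [b for b in buildingDict if magicNumberSet.issubset(buildingDict[b])]
noList = sorted(set(buildingDict).difference(yesList))
print(yesList, noList)  # → [101] [102, 103]
```

Only building 101 accumulates all three required codes, so it alone lands in yesList.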
01-08-2015 04:42 PM | 1 | 2 | 458
Title | Kudos | Posted
---|---|---
 | 1 | 08-29-2024 08:21 AM
 | 1 | 02-13-2012 09:06 AM
 | 2 | 10-05-2010 07:50 PM
 | 1 | 02-08-2012 03:09 PM
 | 1 | 10-31-2013 02:18 PM
Online Status | Offline
Date Last Visited | 08-30-2024 12:25 AM