POST
Hi Curtis, thanks for the following. I'm currently reading up on sys.path to better understand how it works, as well as virtualenv/virtualenvwrapper. I'll post my workflow once I've got it working, as I'm also going to be using GitHub for version management of my code.
Posted 05-24-2016, 01:47 AM

POST
I have found the following two sites explaining how to set up a Python virtualenv for ArcGIS: ArcGIS and Virtual Environments part 1; ArcGIS and Virtual Environments part 2; and Calling arcpy from an external virtual Python environment. I've successfully created a virtual environment on my external drive: E:\Python\Venv\ArcPy64bit

Where it comes apart is when I follow the second part of ArcGIS and Virtual Environments part 2, as well as Calling arcpy from an external virtual Python environment, to create the Python path file for ArcPy and my ArcHydroTools. As instructed I ran the following within my Python IDLE:

import sys
sys.path

I then created the Python path file from the list above and copied the path file (arcpy64bit_paths.pth) into the following directory: E:\Python\Venv\ArcPy64bit\Lib\site-packages

When I set up the Python interpreter in Eclipse using the virtualenv, the following paths are recognised, but ArcPy and ArcHydroTools are not recognised. When I set up the Python interpreter in Eclipse using the ArcGIS Python, the following paths are recognised, and ArcPy and ArcHydroTools are recognised.

Any help to resolve this will be appreciated, as I really need to use virtualenv to manage my Python environment due to the different versions of packages required.
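For reference, a minimal sketch of how the .pth file could be generated instead of typing it out by hand. It assumes it is run with the ArcGIS 64-bit python.exe so that sys.path already contains the ArcGIS entries; the "ArcGIS" filter is an assumption and may need adjusting for ArcHydroTools paths:

# Run with the ArcGIS 64-bit python.exe (not the virtualenv interpreter),
# so sys.path already holds the ArcGIS/arcpy entries to be shared.
import sys

# Target .pth file inside the virtualenv's site-packages
pth_file = r"E:\Python\Venv\ArcPy64bit\Lib\site-packages\arcpy64bit_paths.pth"

# Keep only the sys.path entries that point at the ArcGIS install
# (the "ArcGIS" substring filter is an assumption; adjust to suit)
arcgis_paths = [p for p in sys.path if "ArcGIS" in p]

with open(pth_file, "w") as f:
    f.write("\n".join(arcgis_paths))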
Posted 05-22-2016, 09:27 AM

POST
Hi Dan, thanks for the following. I'll work through it tonight and get back to you tomorrow.
Posted 05-18-2016, 03:47 AM

POST
Hi Dan, I'll go through the following and post my code. I will most likely come back to you with some questions regarding summarizing the new classes based on the above.
Posted 05-17-2016, 12:29 PM

POST
Hi Blake, thanks for the following, it looks great. I'll post my final code using the Data Access Module and NumPy for comparison.
Posted 05-17-2016, 12:26 PM

POST
Hi Dan, thanks for the reply. In this case each record is unique. The original table is a summary of the number of buildings that were found within each service area interval (i.e. TIME5 = 0-5 min, TIME10 = 5-10 min, etc.). This time around I'm summarising the percentage of total buildings found within each new time interval (i.e. TIME0_15MIN = 0-15 min, etc.). For the unique columns, could I use the OBJECTID? How do I go about collapsing the columns to generate the new time intervals based on the following? With regard to the questions: the number of records is reasonably small, but I'll be running this regularly for different study areas. Yes, I'd like to generate graphs, but I would like them displayed in ArcMap as part of my Data Driven Pages at a later stage. Not at this stage; I'm looking at using OpenPyXL to write the final table out to Excel based on a predefined template for the various social facility types (i.e. schools, health facilities, community services, recreation facilities, etc.).
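For the Excel step, a minimal OpenPyXL sketch of what I have in mind; the template path, sheet name, starting row and sample values are all placeholders:

from openpyxl import load_workbook

# Hypothetical template and output paths
template = r"E:\Python\Templates\Social_Facility_Template.xlsx"
output = r"E:\Python\Testing\Schools_Summary.xlsx"

wb = load_workbook(template)
ws = wb["Summary"]  # assumed sheet name in the template

# rows would come from the final summary table; these values are placeholders
rows = [("Ceres", "De Villiers Graaff HS", 35.2, 40.1, 20.3, 4.4)]
for r, row in enumerate(rows, start=2):      # assume headers sit in row 1
    for c, value in enumerate(row, start=1):
        ws.cell(row=r, column=c, value=value)

wb.save(output)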
Posted 05-17-2016, 12:19 PM

POST
I'd like to find out what approaches the community have used to achieve similar results using either the Data Access Module or NumPy. The following table (stats_table1) is my starting point. I'd like to populate a new table (stats_table2) based on the following structure, for each row in the first table:

stats_table2[SETTLEMENTNAME] = stats_table1[SETTLEMENTNAME]
stats_table2[SOCIAL_FACILITY] = stats_table1[NAME]
stats_table2[TIME0_15MIN] = ((TIME5 + TIME10 + TIME15) / TOTALBUILD) * 100
stats_table2[TIME15_30MIN] = ((TIME20 + TIME25 + TIME30) / TOTALBUILD) * 100
stats_table2[TIME30_60MIN] = (TIME60 / TOTALBUILD) * 100
stats_table2[TIME60_PLUS] = (TIME60P / TOTALBUILD) * 100

Final results:

Data Access Module: would you use a Search Cursor to loop through stats_table1, perform the calculations above and write the results to a Python dictionary, then use an Update Cursor to populate stats_table2?

NumPy: would you convert stats_table1 to a NumPy array, perform the calculations and write the results into a temporary array, and then back to a table to be appended to stats_table2?

Any sample code or references will be appreciated, as I originally was looking at nesting a Search Cursor within an Update Cursor and then realised it was a bad idea.
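To make the Data Access Module option concrete, a minimal sketch of what I have in mind (the table paths are hypothetical, it assumes stats_table2 already exists with the fields named above, and I've used an Insert Cursor rather than an Update Cursor for the new table). Would this be a reasonable approach?

import arcpy

# Hypothetical paths; substitute the actual tables
stats_table1 = r"E:\Python\Testing\Summary.gdb\stats_table1"
stats_table2 = r"E:\Python\Testing\Summary.gdb\stats_table2"

src_fields = ["SETTLEMENTNAME", "NAME", "TIME5", "TIME10", "TIME15",
              "TIME20", "TIME25", "TIME30", "TIME60", "TIME60P", "TOTALBUILD"]
out_fields = ["SETTLEMENTNAME", "SOCIAL_FACILITY", "TIME0_15MIN",
              "TIME15_30MIN", "TIME30_60MIN", "TIME60_PLUS"]

with arcpy.da.SearchCursor(stats_table1, src_fields) as scur, \
     arcpy.da.InsertCursor(stats_table2, out_fields) as icur:
    for (settlement, name, t5, t10, t15,
         t20, t25, t30, t60, t60p, total) in scur:
        total = float(total)  # avoid integer division on Python 2
        icur.insertRow((settlement, name,
                        (t5 + t10 + t15) / total * 100,
                        (t20 + t25 + t30) / total * 100,
                        t60 / total * 100,
                        t60p / total * 100))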
Posted 05-17-2016, 07:16 AM

POST
Hi Dan, I've used the code example within your "Observation_Summary_1.pdf" and understand most of your code, but would you mind unpacking lines 45 to 50? I'm unsure whether line 45 is a list comprehension of some sort, and I'm not understanding how it's working.

# coding: utf-8
"""
Dan Patterson:
Numpy Pivot Table Summary
16/05/2016
"""
# import site-packages and modules
import numpy as np
import arcpy

# set input summary table and output geodatabase
input_table = r"E:\Projects\2016\G112224\Models\Schools\Schools_Combined_160505.gdb\De_Villiers_Graaff_Hs_SAA_Stats"
output_gdb = r"E:\Python\Testing\dan_patterson_numpy\SAA_Summary_Report_Testing.gdb"


# numpy pivot table function
def pivot_summary(input_table, output_gdb):
    # convert summary table to structured numpy array
    numpy_fields = ("OBJECTID", "TOWN", "SETTLEMENTNAME",
                    "NAME", "TIME", "FREQUENCY")
    sum_array = arcpy.da.TableToNumPyArray(input_table, numpy_fields)  # @UndefinedVariable
    # obtain unique records based on first three columns
    unique_records = np.unique(sum_array[['TOWN', 'SETTLEMENTNAME', 'NAME']])
    # number of unique rows
    shp = len(unique_records)
    # construct the output array
    dt = [('TOWN', 'U20'), ('SETTLEMENTNAME', 'U20'), ('NAME', 'U20'),
          ('TIME5', np.int32), ('TIME10', np.int32), ('TIME15', np.int32),
          ('TIME20', np.int32), ('TIME25', np.int32), ('TIME30', np.int32),
          ('TIME60', np.int32)]
    # populate array with zeros
    pivot_array = np.zeros(shp, dtype=dt)
    # assign the first three columns
    pivot_array['TOWN'] = unique_records['TOWN']
    # the values from the unique test
    pivot_array['SETTLEMENTNAME'] = unique_records['SETTLEMENTNAME']
    # everything is sorted
    pivot_array['NAME'] = unique_records['NAME']
    # loop through unique records array
    for i in range(shp):
        # boolean mask: pull out the sum_array rows whose three key
        # fields match the i-th unique record
        row_match = sum_array[sum_array[['TOWN', 'SETTLEMENTNAME',
                                         'NAME']] == unique_records[i]]
        # transfer each matching row's FREQUENCY into its TIMExx column
        for j in range(len(row_match)):
            column = 'TIME' + str(row_match[j]['TIME'])
            buildings = row_match[j]['FREQUENCY']
            pivot_array[column][i] = buildings
    pivot_table = "{0}\\{1}".format(output_gdb, "Pivot_Table_Summary")
    arcpy.da.NumPyArrayToTable(pivot_array, pivot_table)  # @UndefinedVariable
    return pivot_table

pivot_table = pivot_summary(input_table, output_gdb)

Thanks for your help Dan
Posted 05-17-2016, 02:28 AM

POST
Hi Devin, I wrote the following Python script to copy the datasets (feature classes and rasters) for each mxd within a folder into a new file geodatabase. You could use it as a starting point and, instead of copying the data out, write the location of the layers (feature classes and rasters) into a summary table; a sketch of that amendment follows the script below.

'''
Created on Jan 20, 2016
Copy all feature classes and
rasters from each mxd in a folder
into a new File Geodatabase.
@author: PeterW
'''
import os
import time
import arcpy

# set arguments
folder = arcpy.GetParameterAsText(0)
out_gdb = arcpy.GetParameterAsText(1)
# folder = arcpy.GetParameterAsText(0)
# out_gdb = arcpy.GetParameterAsText(1)


# Processing time
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int(sec_elapsed % (60 * 60) / 60)
    s = sec_elapsed % 60
    return "{}h:{:>02}m:{:>05.2f}s".format(h, m, s)

start_time1 = time.time()


# copy feature layers function
def copy_features():
    try:
        if arcpy.Exists(os.path.join(out_gdb, layer.datasetName)):
            arcpy.AddMessage("Feature class already exists, it will be skipped")
        else:
            arcpy.FeatureClassToGeodatabase_conversion(lyr_source, out_gdb)
    except:
        arcpy.AddMessage("Error copying: " + layer.name)
        arcpy.AddError(arcpy.GetMessages())


# copy raster layers function
def copy_rasters():
    try:
        if arcpy.Exists(os.path.join(out_gdb, layer.datasetName)):
            arcpy.AddMessage("Raster already exists, it will be skipped")
        else:
            out_raster = os.path.join(out_gdb, layer.datasetName)
            arcpy.CopyRaster_management(lyr_source, out_raster)
    except:
        arcpy.AddMessage("Error copying: " + layer.name)
        arcpy.AddError(arcpy.GetMessages())


# Loop through each data frame and layer and copy to the new file geodatabase
for filename in os.listdir(folder):
    fullpath = os.path.join(folder, filename)
    if os.path.isfile(fullpath):
        basename, extension = os.path.splitext(fullpath)
        if extension.lower() == ".mxd":
            arcpy.AddMessage("Processing: " + basename)
            mxd = arcpy.mapping.MapDocument(fullpath)
            dfs = arcpy.mapping.ListDataFrames(mxd)
            for df in dfs:
                arcpy.AddMessage("DataFrame: " + df.name)
                layers = arcpy.mapping.ListLayers(mxd, "", df)
                for layer in layers:
                    if layer.isFeatureLayer:
                        lyr_source = layer.dataSource
                        lyr_name = layer.name.encode("utf8", "replace")
                        arcpy.AddMessage("Copying: {}".format(lyr_name))
                        copy_features()
                    if layer.isRasterLayer:
                        lyr_source = layer.dataSource
                        lyr_name = layer.name.encode("utf8", "replace")
                        arcpy.AddMessage("Copying: {}".format(lyr_name))
                        copy_rasters()

# Determine the time taken to copy features
end_time1 = time.time()
print ("It took {} to copy all layers to file geodatabase".format(hms_string(end_time1 - start_time1)))

Let me know if you need help amending this to meet your needs.
Posted 05-15-2016, 03:31 AM

POST
Hi Joshua, the spaces and commas are required between the strings, i.e. "Rivers, Wetlands". The spaces and commas to be removed are the ones at the end of the concatenated string, i.e. the trailing ", " in "Rivers, Wetlands, ".
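A small illustration of the difference (just a sketch of the idea, not the exact code from the thread):

values = ["Rivers", "Wetlands"]

# repeating the separator after every item leaves a trailing ", "
with_trailing = ("{}, " * len(values)).format(*values)   # 'Rivers, Wetlands, '

# join() only places the separator between items
clean = ", ".join(values)                                # 'Rivers, Wetlands'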
Posted 05-09-2016, 12:20 PM

POST
Hi Dan, the following dealt with both the space and the comma, thanks so much. Regards
Posted 05-09-2016, 07:35 AM

POST
Thanks Dan, I was just thinking about that. I was considering using a nested if statement to test the number of values in each row, to remove the space after the comma where there is only one item and to remove the comma at the end.
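A simpler alternative to the nested if (just a thought, not the solution that was eventually posted) would be to strip the trailing separator from the finished string:

# rstrip removes the trailing comma/space left by the "{}, " * len(good) pattern
frmt = "Rivers, Wetlands, "
cleaned = frmt.rstrip(", ")        # 'Rivers, Wetlands'

# a single item works too: 'Rivers, ' becomes 'Rivers'
single = "Rivers, ".rstrip(", ")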
Posted 05-09-2016, 07:24 AM

POST
Hi Dan, I've used slicing to select the correct fields that are defined within my UpdateCursor, as the fields are made up of "TYPE_RIV", "TYPE_WET" and "TYPE_CBA" (the inputs to good) plus "ENVIRO_TYPE" (the output field to be populated).

field_name = "ENVIRO_TYPE"
arcpy.AddField_management(union_name, "ENVIRO_TYPE", "TEXT", field_length=100)
enviro_fields = [f.name for f in arcpy.ListFields(union_name, "TYPE*", "TEXT")]
enviro_fields.append(field_name)
with arcpy.da.UpdateCursor(union_name, enviro_fields) as upcur:  # @UndefinedVariable
    for row in upcur:
        # all fields except the last are inputs; drop empty/None values
        good = [val for val in list(row[:-1]) if val not in ["", None]]
        frmt = ("{}, " * len(good)).format(*good)
        row[-1] = frmt
        upcur.updateRow(row)
        print row[-1]

Thanks for all the help.
Posted 05-09-2016, 06:58 AM

POST
Hi Dan, the problem is that I'm not sure how to get the field values without having to specify the field index. The last field, row[2] = "ENVIRO_TYPE", is the field I need to update; row[0] = "TYPE_RIV" and row[1] = "TYPE_WET" this time, but that won't always be the case. Any suggestions welcome.

Print statement:

Python code:

# environmental features within each settlement
import arcpy


def environmental_features(enviro_input):
    enviro_intersect = []
    for fcs in enviro_input:
        orig_name = arcpy.Describe(fcs).name
        input_features = ["Settlements_Amended", orig_name]
        intersect_name = "{0}\\{1}_int".format("in_memory", orig_name)
        arcpy.Intersect_analysis(input_features, intersect_name)
        enviro_intersect.append(intersect_name)
    merge_name = "{0}\\{1}_merge".format("in_memory", "Environmental")
    arcpy.Merge_management(enviro_intersect, merge_name)
    diss_name = "{0}\\{1}_diss".format("in_memory", "Environmental")
    arcpy.Union_analysis(merge_name, diss_name, "ALL")
    field_name = "ENVIRO_TYPE"
    arcpy.AddField_management(diss_name, "ENVIRO_TYPE",
                              "TEXT",
                              field_length=100)
    field_names = [f.name for f in arcpy.ListFields(diss_name, "TYPE*", "TEXT")]
    cur_fields = list(field_names)
    cur_fields.append(field_name)
    print cur_fields
    with arcpy.da.UpdateCursor(diss_name, cur_fields) as ucur:  # @UndefinedVariable
        for row in ucur:
            print row[0], row[1], row[2]
Posted 05-08-2016, 02:49 PM