POST
You could put your particular subdirectories in a list. Then you could wrap the shapefile copy loop code that forest knutsen wrote in a loop that iterates through the subdirectory list.

import arcpy
import os

rootdir = r"C:\xxx\xxx\shp"
out_path = r"C:\xxx\xxx\output"  # destination workspace
subfolders = [
    "Folder1",
    "Another Folder",
    "AndAnother",
]
for folder in subfolders:
    arcpy.env.workspace = os.path.join(rootdir, folder)
    for shapefile in arcpy.ListFeatureClasses():
        print 'copying: ' + shapefile[:-4]
        arcpy.CopyFeatures_management(shapefile, os.path.join(out_path, shapefile[:-4]))

Or you could use Python's own os.walk() to navigate your folders instead of listing them out.
02-09-2015 07:41 AM
POST
My first thought is to use Python and arcpy.AddField_management(). This assumes all the fields are the same type and length. If they are all different, it can still be done, but the code would have to be changed a bit.

import arcpy

testtable = r"C:\temp\mygdb.gdb\Test"
newfields = [
    "Test1",
    "Test2",
    "Test3",
    "Test4",
    "Test5",
]
for field in newfields:
    arcpy.AddField_management(
        testtable,  ## in_table
        field,      ## field_name
        "TEXT",     ## field_type
    )
    print "{} added".format(field)
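If the fields do differ, one way to change the code "a bit" is to drive the same loop from (name, type, length) tuples instead of bare names. A sketch with made-up field specs:

```python
import arcpy

testtable = r"C:\temp\mygdb.gdb\Test"
# (field_name, field_type, field_length) -- length only matters for TEXT
newfields = [
    ("Test1", "TEXT", 50),
    ("Test2", "LONG", None),
    ("Test3", "DOUBLE", None),
]
for name, ftype, length in newfields:
    if length:
        arcpy.AddField_management(testtable, name, ftype, field_length=length)
    else:
        arcpy.AddField_management(testtable, name, ftype)
    print "{} added".format(name)
```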
02-09-2015 07:16 AM
POST
Will Esri be bundling all of the patches into a single, easy-to-install service pack for any of the ArcGIS 10.2.2 products? New installs are so tedious with all these patches to install! EDIT: I did find this Knowledge Base article about doing a batch-file installation, but it's for 9.3. Has anyone updated this for 10.2.2?
02-06-2015 03:20 PM
POST
Well, I didn't end up using any joins after all! Like you mentioned, it was always writing the fields from both feature classes, and I got tired of fighting with it. Here's my solution:

def main():
    import arcpy
    import os
    # Local variables
    sourcegdb = r"C:\temp\83629_Baseline.mdb"
    panels = os.path.join(sourcegdb, "AllSignPanels")
    supports = os.path.join(sourcegdb, "SignSupports")
    template = os.path.join(sourcegdb, "SignPanelTemplate")
    # Get all SupportIDs for the where clause IN() statement
    supportid = tuple(i[0] for i in arcpy.da.SearchCursor(supports, "SupportID"))
    # Optional truncate
    arcpy.TruncateTable_management(template)
    # Retrieve only panel records that have a support and write them to the output table
    fields = [f.name for f in arcpy.ListFields(template)]
    where_clause = "SupportID IN{}".format(supportid)
    with arcpy.da.SearchCursor(panels, fields, where_clause) as s_cursor:
        with arcpy.da.InsertCursor(template, fields) as i_cursor:
            for row in s_cursor:
                i_cursor.insertRow(row)

if __name__ == '__main__':
    main()

Since you had that nice SignPanelTemplate table, I just used that for the field names. If you want to overwrite the table each time, just truncate the table before you start writing the rows again.
02-06-2015 03:08 PM
POST
To Mody Buchbinder, re: running compress on Linux/Unix. We have ArcSDE in Oracle 11g on a Linux server. We perform database maintenance operations (including compress) with a Python script executed by Task Scheduler on a Windows Server 2012 R2 machine. We use a process very similar to the one Esri recommends and it works quite well; no SDEMON required. However, we did find that killing all users did not clean up the processes and sessions correctly on the Linux server. See my thread here for more information.
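A minimal sketch of that kind of maintenance script, assuming a .sde connection file (the path here is made up) and the arcpy workspace-admin functions available since 10.1:

```python
import arcpy

# Placeholder path to an .sde connection file connecting as the sde user
sde = r"C:\connections\oracle_sde.sde"

# Block new connections, then kick existing users before compressing
arcpy.AcceptConnections(sde, False)
arcpy.DisconnectUser(sde, "ALL")

# Compress the geodatabase, then reopen it to users
arcpy.Compress_management(sde)
arcpy.AcceptConnections(sde, True)
```

Scheduled from Task Scheduler, this covers the compress step; other maintenance (rebuild indexes, analyze) would slot in before reopening connections.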
02-06-2015 09:44 AM
POST
My first paid experience with GIS was a summer internship with the local Township government. Classic make-a-parcel-section-map type stuff. I've actually been doing GIS in local government ever since...
02-06-2015 09:30 AM
POST
If you feel like posting some sample data, we can look at how to get this working the way you want.
02-06-2015 07:44 AM
POST
There is nothing special you have to write to the file to close it, but you do need to close the open CSV file object in the code. If you have it in a with statement (like in my code), it will always close automatically. In the code you posted, you do not have it in a with statement, so you need the f.close() at the end. Glad you got it working.

Keep in mind that when you open the CSV file for writing, you can open it in different modes. Use 'r' when the file will only be read, 'w' for writing only (an existing file with the same name will be erased), and 'a' to open the file for appending; any data written to the file is automatically added to the end. The mode argument is optional; 'r' is assumed if it's omitted. If you try using the append mode, make sure you don't write the field name header row again!
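A small stand-alone sketch of the three modes (the file name and rows are invented):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.csv")

# 'w' truncates (or creates) the file; write the header row once here
with open(path, "w") as f:
    writer = csv.writer(f, delimiter=",", lineterminator="\n")
    writer.writerow(["Name", "Value"])
    writer.writerow(["first", 1])
# the with statement has closed f automatically at this point

# 'a' appends to the end -- do NOT rewrite the header in this mode
with open(path, "a") as f:
    writer = csv.writer(f, delimiter=",", lineterminator="\n")
    writer.writerow(["second", 2])

# 'r' is read-only, and is the default mode if omitted
with open(path) as f:
    rows = list(csv.reader(f))
```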
02-06-2015 07:10 AM
POST
I don't know what outLayerFile is, so this is the best I can do for you. See if you can make this work.

import arcpy
import os
import csv

lyrFile = arcpy.mapping.Layer(outLayerFile)
for lyr in arcpy.mapping.ListLayers(lyrFile):
    if lyr.name == "Routes":
        # Create CSV
        outputCSV = r'F:\Workspace\Sandy\GM_costAnalysis\analysis2\allRoutes.csv'
        with open(outputCSV, "w") as csvfile:
            csvwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
            ## Write field name header line
            fields = ['OrderCount', 'TotalTravelTime', 'TotalDistance', 'Block', 'Scale']
            csvwriter.writerow(fields)
            ## Write data rows
            with arcpy.da.SearchCursor(lyr.dataSource, fields) as s_cursor:
                for row in s_cursor:
                    csvwriter.writerow(row)
02-05-2015 01:22 PM
POST
Although I haven't used it yet, the zip() function should work well too. Map two lists into a dictionary in Python - Stack Overflow
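For example (the list contents here are invented), zip() pairs the two lists positionally and dict() consumes the pairs:

```python
# Two parallel lists: field names and the values for one record
keys = ["OrderCount", "TotalTravelTime", "TotalDistance"]
values = [12, 47.5, 103.2]

# zip() pairs elements by position; dict() turns the pairs into a mapping
record = dict(zip(keys, values))
print(record["OrderCount"])  # -> 12
```

If the lists are different lengths, zip() silently stops at the shorter one, so it's worth checking the lengths match first.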
02-05-2015 01:00 PM
POST
If you're writing one row at a time, use w.writerow(). If you're writing all the data at once from some kind of iterable, use w.writerows(). Also, I recommend opening the CSV file in a with statement to ensure it is closed even if there is an error. Here is what I use for writing a geodatabase table to CSV.

import arcpy
import os
import csv

# Environment variables
workingDir = r"C:\temp"
workingGDB = os.path.join(workingDir, "MyGeodatabase.gdb")
inputTable = os.path.join(workingGDB, "MyInputTable")
outputCSV = os.path.join(workingDir, "MyOutput.csv")

# Create CSV
with open(outputCSV, "w") as csvfile:
    csvwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    ## Write field name header line
    fields = ['FirstField', 'NextField', 'AndThirdExample']
    csvwriter.writerow(fields)
    ## Write data rows
    with arcpy.da.SearchCursor(inputTable, fields) as s_cursor:
        for row in s_cursor:
            csvwriter.writerow(row)
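To illustrate the writerows() variant with plain lists (the data here is made up), the whole iterable goes in one call:

```python
import csv
import os
import tempfile

out = os.path.join(tempfile.mkdtemp(), "rows.csv")

# Any iterable of row sequences works: a list of lists, a generator, etc.
data = [
    ["FirstField", "NextField", "AndThirdExample"],  # header row
    [1, "a", 0.5],
    [2, "b", 1.5],
]
with open(out, "w") as f:
    csv.writer(f, delimiter=",", lineterminator="\n").writerows(data)

with open(out) as f:
    lines = f.read().splitlines()
```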
02-05-2015 12:52 PM
POST
After further investigation, I should clarify that the feature class needs to be a feature layer before you can do the join. Once you do the join, the field names on the feature layer are indeed "featureclassname.fieldname".
02-05-2015 12:45 PM
POST
After further investigation, JoinField might be a better option since you can specify the fields you want. And to clarify: if you're using feature classes, you have to make a feature layer first before doing the join. After the join is done, field names will look like "featureclassname.fieldname".
02-05-2015 12:38 PM
POST
Try doing the join in Python using arcpy.AddJoin_management() (see example 2 at the bottom). Once you've done the join, then print the field names and see what you need to use for the search cursor.
02-05-2015 11:02 AM
POST
Maybe you have to refer to your field name as "featureclass.fieldname" because that's how ArcMap renames fields when you join. Try using ListFields to print the names of the fields on the feature class you're running the SearchCursor on (Texas2) so you can see what you're working with.

fields = arcpy.ListFields(fc)
for f in fields:
    print f.name

EDIT: Are you doing the join in Python? ArcGIS Help 10.1 - Add Join (Data Management)
02-05-2015 10:33 AM