POST
I have some simple ArcPy script tool validator code that is giving me grief. I am using Desktop at 10.3.1, and I'm open to whatever help can be offered to get this to work as intended. My script tool has two parameters: the first is the type of product to be created, the second is the scale of the product. "Digital Map" / "No scale required" are the default values defined in the Tool Validator code. If it is a paper product, the user selects the appropriate scale from the drop-down list.

The problem I can't solve: if the user changes parameter 0 to "Paper Map", the list of scales becomes available, but the value is still the default ("No scale required"), so an error is thrown because that option is not in the scale list. The error clears when I select a scale from the list, but I want only valid choices available and obviously don't want the error to appear at all. Likewise, if I then change parameter 0 back to "Digital Map", the "No scale required" value appears but the error is thrown again, because that value is not in the scale list, which is somehow still in effect. There is no way to clear the error at this point.

I'd really appreciate help with how to get these combinations to appear and change as intended without any errors: if they pick Paper, only the scales appear; if they switch back to Digital, only the "No scale required" value appears; and so on. I can't guarantee the user's behaviour, so I'd like it to be bulletproof.

import arcpy
class ToolValidator(object):
    def __init__(self):
        self.params = arcpy.GetParameterInfo()

    def initializeParameters(self):
        # Define the list of product types the user can choose from
        self.params[0].filter.list = ["Digital Map", "Paper Map"]
        self.params[0].value = "Digital Map"
        return

    def updateParameters(self):
        if self.params[0].value == "Digital Map":
            self.params[1].value = "No scale required"
        if self.params[0].value == "Paper Map":
            self.params[1].filter.list = ["1:10,000", "1:25,000"]
        return

    def updateMessages(self):
        return
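The behaviour I'm after could be sketched like this: whenever the product type changes, the scale filter list and the scale value get reset together, so the current value is always a member of the current list. This is a minimal stand-in written without arcpy so the branching can be followed; the SCALES dictionary and helper name are illustrative only, and in the real validator the returned pair would be assigned to self.params[1].filter.list and self.params[1].value inside updateParameters.

```python
# Sketch of updateParameters logic: whenever the product type changes,
# reset BOTH the scale filter list and the scale value, so the current
# value is always a member of the current list. Plain-Python stand-ins
# are used here instead of arcpy parameter objects.

SCALES = {
    "Digital Map": ["No scale required"],
    "Paper Map": ["1:10,000", "1:25,000"],
}

def update_scale(product_type, current_scale):
    """Return the (filter_list, value) the scale parameter should hold."""
    allowed = SCALES[product_type]
    # If the old value is no longer legal, snap to the first legal choice.
    value = current_scale if current_scale in allowed else allowed[0]
    return allowed, value
```

For example, switching to "Paper Map" while the value is still "No scale required" would snap the value to "1:10,000" instead of leaving an illegal value in place.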
Posted 11-21-2019 06:29 AM

POST
I'll see what I can do on Friday. Thank you for your insight so far!
Posted 10-10-2019 04:03 PM

POST
I'm working in ArcMap 10.3.1. If I type an underscore in my attribute field and type the same underscore into the code (or copy-paste it), would that ensure it's the same character? I'll keep playing with this, and I'll check for white space. I appreciate your help!
Posted 10-10-2019 11:12 AM

POST
The first error you found came from me being sloppy when anonymizing my fields for posting here. The second was a typo introduced when moving the snippet to a computer connected to the outside world. I've edited the code above to fix both. Good catches, but the real script (of which this is a generic snippet) runs without errors; it just doesn't do what I expect when the underscore character is present. If I amend my data to be "NA" and my script to look for "NA", it works as expected as an error check, but unfortunately editing the source data when doing the real work is not an option at the moment.
Posted 10-10-2019 09:55 AM

POST
I am using an update cursor to update one field based on the content of another. I can't get part of the Python script I've written to properly handle the underscore special character that is present in my data. I think I need to do something to ensure the string is read as Unicode and not ASCII, but I cannot figure out how to do this when referring to an index value. Can someone help? (Below is not the real script, but it is the part I am having trouble with.)

with arcpy.da.UpdateCursor(in_table="Test", field_names=["CATS", "DOGS"]) as cursor:
    for row in cursor:
        if str(row[0].upper()) not in ["PIZZA", "N_A"]:
            row[1] = 12345
            cursor.updateRow(row)
del cursor, row
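One way to see whether the stored character really is a plain ASCII underscore (and not a look-alike character or padded with whitespace) is to inspect its repr and Unicode character names. A small diagnostic sketch in pure Python; in the real script, `value` would be row[0] inside the cursor loop, and the function name is mine:

```python
# Diagnostic sketch: compare a field value against the literal you test
# for, and expose look-alike characters or stray whitespace by name.
import unicodedata

def diagnose(value, expected):
    value_clean = value.strip()          # hidden leading/trailing whitespace?
    same = value_clean == expected
    names = [unicodedata.name(ch, "UNKNOWN") for ch in value_clean]
    return same, repr(value_clean), names

# An underscore look-alike (U+FF3F FULLWIDTH LOW LINE) fails the comparison
# even though it prints almost identically to "_":
same, shown, names = diagnose(u"N\uFF3FA ", u"N_A")
```

If `names` shows something like "FULLWIDTH LOW LINE" instead of "LOW LINE", the data contains a look-alike rather than a true underscore.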
Posted 10-10-2019 06:26 AM

POST
I have figured out a solution that gives me exactly what I want, and it is fast (enough). It parses through the large file and writes the data I need in about 2.5 minutes, which is about 4 times faster than the external VB script it replaces, plus it gives me a GIS output within my current workflow. I did not use any of the array code that Dan mentioned, but I will be reading those answers very closely and applying whatever I can find there to future projects. Much appreciated, Dan Patterson. In case anyone is in the same circumstance, this is what I am using:

Input = arcpy.GetParameterAsText(0)
codes = arcpy.GetParameterAsText(1).split(",")
arcpy.env.workspace = arcpy.GetParameterAsText(2)
sr = arcpy.SpatialReference(4326)
arcpy.env.overwriteOutput = True
Output = arcpy.CreateFeatureclass_management(arcpy.env.workspace, "Output", "POINT", "", "DISABLED", "DISABLED", sr)
arcpy.AddField_management(Output, "LAT", "DOUBLE", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "LONG", "DOUBLE", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "IOH", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "MEASURE1", "LONG", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "MEASURE2", "LONG", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "MEASURE3", "LONG", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "CODE", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "TYPECODE", "LONG", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "TYPENAME", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "FEATCODE", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "FEATNAME", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "COL", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "MULT", "LONG", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "DATE3", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "DATE5", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Output, "AORD", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
with open(Input, 'r') as file:
    with arcpy.da.InsertCursor(Output, ["SHAPE@XY", "LAT", "LONG", "IOH", "MEASURE1", "MEASURE2", "MEASURE3", "CODE", "TYPECODE", "TYPENAME", "FEATCODE", "FEATNAME", "COL", "MULT", "DATE3", "DATE5", "AORD"]) as InsCur:
        for line in file:
            values = line.split("\t")
            LAT = values[5]
            LONG = values[6]
            IOH = values[1]
            MEASURE1 = float(values[9]) * 0.2691
            MEASURE2 = float(values[10]) * 0.2691
            MEASURE3 = float(values[11]) * 0.2691
            CODE = values[2]
            TYPECODE = values[7]
            TYPENAME = values[8]
            FEATCODE = values[38]
            FEATNAME = values[39]
            COL = values[18]
            MULT = values[20]
            DATE3 = values[23]
            DATE5 = values[24]
            AORD = values[32]
            if values[2] in codes and float(values[9]) * 0.2691 >= 75:
                newGeom = arcpy.PointGeometry(arcpy.Point(LONG, LAT))
                InsCur.insertRow([newGeom, LAT, LONG, IOH, MEASURE1, MEASURE2, MEASURE3, CODE, TYPECODE, TYPENAME, FEATCODE, FEATNAME, COL, MULT, DATE3, DATE5, AORD])

OutputCount = arcpy.GetCount_management(Output)
OutputCountInt = int(OutputCount.getOutput(0))
if OutputCountInt < 1:
    arcpy.AddMessage("There are no valid points in your AOI.")
else:
    arcpy.AddMessage("Points have been written to the Output feature class in your input database.")
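For anyone adapting this, the per-line filter in the loop can be pulled out as a standalone function, shown here without arcpy so it can be tested on its own. The 0.2691 factor and the 75 threshold are the ones used in the script above; the function name is mine.

```python
# Sketch of the per-line filter from the script above as a standalone,
# testable function (no arcpy). values[2] is the code field and
# values[9] is the raw measurement field, as in the script.
def keep_line(line, codes, factor=0.2691, threshold=75):
    """Return the parsed fields if the line passes the code/size filter, else None."""
    values = line.rstrip("\n").split("\t")
    if values[2] in codes and float(values[9]) * factor >= threshold:
        return values
    return None
```

Keeping the filter in one function makes the threshold and factor easy to change, and it can be unit-tested against sample lines before being wired back into the cursor loop.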
Posted 04-05-2019 09:16 AM

POST
Hi Dan - attached is some sample data and another file that explains my goal in clearer terms. In my previous posts I'm sure I wasn't as succinct as I could have been. One thing to keep in mind, and that I am clearly getting hung up on, is that filtering the data first could weed out 99% of the lines from the input file. In my head this seems like the best way to process the data, but clearly what I see in my head doesn't always translate easily to code, or to my understanding of how the most efficient code works. I appreciate your posts on this and I've already learned a ton. Thank you!
Posted 04-05-2019 05:37 AM

POST
All that sounds ideal - I just need to wrap my head around it now. The ratio of input lines to output lines is very lopsided due to the filtering I need to do, so I'm wondering how that will work. I'm going to post some data in the morning, and hopefully that will help.
Posted 04-04-2019 07:46 PM

POST
OK, that would go a long way toward my understanding of this. I will attach 100 lines of the data Friday morning. I will have to condition and anonymize it, but for these purposes it will be good to go. Appreciate it.
Posted 04-04-2019 07:42 PM

POST
Thanks for all your help on this. I have done some more reading and just want to make sure I understand the method here... It looks much simpler and more lightweight than what I was doing previously, so I'm glad to try it out. For your bullet points:

1. You are right about the baggage - each time this is run, it will only return maybe 0.25% of the data from the input file, hence the tool's necessity. I understand that field widths have a big impact on memory usage with this method, so I'd have to constrain those and really see whether all the fields I currently output are crucial to my goals... There may be one or two that I can drop; they are integers, so it won't save a lot of room, but every little bit helps, I suppose. In your example, would the code create a massive array of my specified columns for the entire file, or does it work one line at a time? I am confusing myself, I think.

2. If I have the metaphor right, can I do the pruning before the gathering - just leave the rotten berries on the plant? That is, can the if statement come before the array, so lines that don't meet the threshold don't even get considered? Or is that just not how it works? Does the array need to be completely created before I pick and choose the data to come out of it and into my feature class? Pick every berry, throw out the bad ones, then make the jam... Or does something else need to happen altogether?

3. Do you mean that the InsertCursor business is compatible with your code, or that the InsertCursor is a separate function that will also achieve the same result? I will do much more reading on this...

By the way, I was never all that good with Avenue back when I was in school, but that was my fault and not my instructor's; he tried his best with me, and I'm still doing GIS, so that's good.
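The "pruning before gathering" idea in point 2 can be sketched as a generator that discards lines before anything is stored, instead of loading the whole file first. Plain Python; the single numeric field here stands in for my actual code/height checks, and the names are illustrative.

```python
# Sketch of "pruning before gathering": a generator filters lines so
# only qualifying rows are ever accumulated. Bad berries stay on the
# plant; nothing is stored for lines that fail the threshold.
def gather(lines, threshold):
    """Yield split rows whose numeric field (index 1) passes the threshold."""
    for line in lines:
        values = line.split("\t")
        if float(values[1]) >= threshold:   # prune here, before storing
            yield values

rows = list(gather(["a\t150", "b\t20", "c\t99"], 75))
```

Because `gather` is lazy, the only memory cost is whatever downstream code chooses to keep, not the full input file.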
Posted 04-04-2019 12:02 PM

POST
Also, the if statement I'm using is crucial to finding the needles in the haystack and then only processing and outputting the needles... Can it be incorporated using your method?
Posted 04-04-2019 05:23 AM

POST
Hi Dan; your suggestions are different from what I was expecting, so it might take me a bit longer to understand them, but I'm willing to try them out, thanks. Glad to have some different routes to try.

For more reference information: I am writing this to replace a standalone VB parser that wrote to a new text file, which was then manually ingested into Arc using the Add XY Data function in Catalog. I have a bit of time, so I thought I'd try to migrate that process into the Arc environment and create the output file all in one go. The VB parser is legacy from many years ago and chunked the data. I'm using 10.3 but am making the leap to 10.6 or 10.7 soon.

The input file is tab-delimited and does not mix data types within columns. The format of the output file is important because other processes after this depend on the field types - 5 fields are required by a further process, while the other 5 fields are needed just for reference. There is no header on the file, but I know the schema. Some fields use -999 as a null while others have blank space, but all the fields I need to perform further tasks on will be 100% populated with real information (this is a condition of the file when it is created and is defined in the schema).

As for the memory issue - I thought the reason for processing the file line by line was to avoid memory issues? That's what all the guidance and help I could find online said: that reading the file in one go was a bad move, and that the easy solution was to process a line, dump it, process the next line, and so on... I could be misunderstanding your process and the guidance I read, though... Or perhaps line-by-line isn't available in the method you're suggesting? Thanks - I hope this additional information helps.
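On the memory question: as I understand it, the array approach and line-by-line reading are not mutually exclusive, because numpy.genfromtxt accepts a generator of lines, so filtering can happen before NumPy ever stores anything. A sketch under that assumption; the column indexes and the 100 threshold here are illustrative, not my real schema.

```python
# Sketch: numpy.genfromtxt can consume a *generator* of kept lines, so
# line-by-line filtering and array building combine. usecols pulls only
# the fields of interest; the generator discards non-matching lines
# before numpy sees them.
import io
import numpy as np

data = u"x\t1\t300\ny\t2\t50\nz\t3\t120\n"

def kept(fobj, threshold=100):
    for line in fobj:
        if float(line.split("\t")[2]) >= threshold:
            yield line

arr = np.genfromtxt(kept(io.StringIO(data)), delimiter="\t", usecols=(1, 2))
```

Only the two rows that pass the threshold reach the array, and only the two requested columns are stored, which keeps the memory footprint proportional to the output rather than the 5 GB input.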
Posted 04-04-2019 04:53 AM

POST
Ah, yes - I guess the casualness doesn't lend itself to getting the full picture across, sorry about that. I didn't want to further bog down my question, but I see that the missing information is pertinent.

My input file is tab-delimited (which I have accounted for) and has ~50 fields, of which I am only interested in 10. I've created the feature class to have those 10 fields. Three of the fields need to be transformed: converted from feet to metres and then rounded to have no decimal places (I only included one of those fields in the example because the methodology would be identical). I could do this in two steps by creating an imperial field in the output file, populating the metric field via the field calculator, and then deleting the imperial field, but I was hoping to do it in one step to save that file maintenance at the end.

I have the index positions of all the input file fields and I know the order in which they should be inserted into the feature class. For example, for the 4 fields in my original post the index positions are [5], [6], [9], [4]. (The Lat and Long fields, [5] and [6], are required in the output file and will also be used with the SHAPE@XY token to create the point geometry.) The if statement filters only records whose code has been specified by the user and whose tree height is >= 100 ft.

The bottom part of this post from Stack Exchange looks promising, but I haven't been able to test it out yet, and I clearly am not entirely sure what I'm doing just yet.
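The feet-to-metres-then-round transform I'm describing is a one-liner per field, e.g. the following sketch (0.3048 is the standard feet-to-metres factor; the helper name is mine):

```python
# One-step transform: feet string -> rounded integer metre value, done
# while building the insert row rather than via a temporary imperial
# field. int() makes the rounded value suitable for a LONG field.
def to_metres_int(feet_text):
    """Convert a feet string to a whole-metre integer value."""
    return int(round(float(feet_text) * 0.3048))
```

Calling it inline while assembling the row avoids creating and later deleting an imperial field in the output feature class.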
Posted 04-03-2019 07:35 PM

POST
I have a very large text file (~5 GB, ~30 million lines) that I need to parse, outputting some of the data to a new point feature class. I cannot figure out how to proceed with the da.InsertCursor. I've created the feature class so the fields are in the required order, and there is an if statement that parses out the required lines of the file.

The units in the text file are feet; however, I need rounded metres in the output, and the output field for that information must be LONG. The round command returns a float, but my final value in the Tree_Height output must be long - can the float value be mapped into the long field? Converting from feet to metres is simple multiplication, but I believe my field types might get messed up then?

The index positions in the source file: Lat and Long are 5 and 6; Description is 1; Tree_Height (in feet) is 9; Code is 2 - this is the link between the user-inputted codes and the codes in the if statement. The if statement works, but after that... Can someone help me set up the da.InsertCursor so that these operations can (a) get done and (b) get done efficiently? I've been fumbling with lists of fields and tuples of things and am not making any progress. I have looked at help files and Penn State's online courses, but still no joy...

import arcpy, os
treesfile = arcpy.GetParameterAsText(0)
codes = arcpy.GetParameterAsText(1).split(",") # user types in comma-delimited wildlife codes for the AOI
arcpy.env.workspace = arcpy.GetParameterAsText(2)
sr = arcpy.SpatialReference(4326)
arcpy.env.overwriteOutput = True
Filtered_Trees = arcpy.CreateFeatureclass_management(arcpy.env.workspace, "Filtered_Trees", "POINT", "", "DISABLED", "DISABLED", sr)
arcpy.AddField_management(Filtered_Trees, "Lat", "DOUBLE", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Filtered_Trees, "Long", "DOUBLE", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Filtered_Trees, "Description", "TEXT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")
arcpy.AddField_management(Filtered_Trees, "Tree_Height", "LONG", "", "", "", "", "NULLABLE", "NON_REQUIRED", "") # height value must be rounded and in Metres.
# there are many more fields but you get the idea
with open(treesfile, 'r') as file:
    for line in file:
        values = line.split("\t")
        if values[2] in codes and float(values[9]) >= 100:
            # now I am stuck...
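What I think a single insert row needs to look like, sketched without arcpy so the value handling is visible: I'm assuming a cursor opened with ["SHAPE@XY", "Lat", "Long", "Description", "Tree_Height"], and using 0.3048 as the standard feet-to-metres factor. Passing a Python int appears to be how a rounded value goes into a LONG field; the helper name is mine.

```python
# Sketch of building one insert row from a parsed line (no arcpy).
# The SHAPE@XY tuple comes first, matching a cursor opened with
# ["SHAPE@XY", "Lat", "Long", "Description", "Tree_Height"].
def build_row(values):
    lat, lon = float(values[5]), float(values[6])
    height_m = int(round(float(values[9]) * 0.3048))  # feet -> whole metres
    return [(lon, lat), lat, lon, values[1], height_m]
```

With the row assembled this way, the loop body would just be `InsCur.insertRow(build_row(values))` inside the if statement.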
Posted 04-03-2019 01:27 PM
| Title | Kudos | Posted |
|---|---|---|
| | 1 | 05-02-2025 12:30 PM |
| | 1 | 04-05-2019 09:16 AM |
| | 1 | 03-20-2019 10:27 AM |
| | 1 | 03-13-2020 08:18 AM |
Online Status: Offline
Date Last Visited: 11-13-2025 03:56 PM