POST
Use the Dissolve tool. For future reference, when you run the Buffer tool, there are options to choose whether you want the output buffers dissolved or not.
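Something like this in arcpy, as a sketch (the feature class names are made up):

    import arcpy

    # Dissolve an existing, undissolved buffer output into one feature.
    arcpy.Dissolve_management("buffers", "buffers_dissolved")

    # Or skip the extra step: Buffer can dissolve its own output.
    # dissolve_option="ALL" merges all output buffers together.
    arcpy.Buffer_analysis("roads", "roads_buf", "100 Meters",
                          dissolve_option="ALL")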
Posted 05-17-2013 10:58 AM

POST
Start with the product documentation. See A Quick Tour of ModelBuilder, a topic in the help system. You'll find tutorials there and some good overview topics. The specific topics on variables are Creating a stand-alone variable and Exposing tool parameters as model variables.
Posted 05-17-2013 07:25 AM

POST
This might help: use the Intersect tool. For the input features, add your line feature class TWICE. Yes, twice -- you're going to intersect the lines with themselves. (If you enter it only once, you'll get empty output.) The output will have more features than the input: wherever line segments overlap each other, a new line (without the overlap) is created in the output. Here's a picture of the result: [ATTACH=CONFIG]24384[/ATTACH]

I'm not sure what you want to do next. If you want to stitch the lines back together, use Dissolve: dissolve on one of the FID_ fields and specify that you want multipart features created. (See the sketch below.)

I thought about this a bit more; if you want to stitch the lines back together, you don't need the Intersect tool -- you can use the Feature To Line tool instead. The input is your line features. The output will be a bunch of line fragments with the FIDs carried through. You then Dissolve on the FID and create multipart features. In theory, Feature To Line should be much quicker than Intersect since it doesn't have to assemble topology.
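A minimal sketch of both routes in arcpy (paths and the FID_lines field name are hypothetical; the actual FID_ field name depends on your input feature class):

    import arcpy

    # Route 1: intersect the lines with themselves, then stitch them back.
    # Passing the same feature class twice intersects it with itself.
    arcpy.Intersect_analysis(["lines", "lines"], "line_frags", "ALL")
    arcpy.Dissolve_management("line_frags", "lines_stitched",
                              dissolve_field="FID_lines",
                              multi_part="MULTI_PART")

    # Route 2: Feature To Line splits lines at intersections without
    # assembling full topology, carrying attributes through.
    arcpy.FeatureToLine_management("lines", "line_frags2",
                                   attributes="ATTRIBUTES")
    arcpy.Dissolve_management("line_frags2", "lines_stitched2",
                              dissolve_field="FID_lines",
                              multi_part="MULTI_PART")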
Posted 05-16-2013 10:07 AM

POST
For those of you keeping score at home: we added subtype and domain descriptions when exporting geodatabase features to shapefiles beginning with 10.0 SP4, and refined the algorithm (there were cases where writing the descriptions could be really slow) at 10.0 SP5. We added the environment setting at 10.1.

Only certain tools write the domain description. So, how can you determine which 10.0 tools write it? By looking at the online 10.1 tool documentation: if the 10.1 tool documentation does not list the Transfer Field Domain Descriptions environment, then the 10.0 tool doesn't write the domain description.

Tools that write the domain description:
- Feature Class To Feature Class
- Table To Table

Tools that DO NOT write the domain description:
- Feature Class To Shapefile
- Copy Features
- Copy Rows

I apologize for the confusion.
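As a sketch (paths are made up, and I'm assuming the environment is exposed in arcpy as env.transferDomains at 10.1):

    import arcpy

    # Assumption: the Transfer Field Domain Descriptions environment
    # maps to arcpy.env.transferDomains (10.1+).
    arcpy.env.transferDomains = True

    # Feature Class To Feature Class honors the environment...
    arcpy.FeatureClassToFeatureClass_conversion("parcels", "C:/data",
                                                "parcels_out.shp")

    # ...but Copy Features does not write domain descriptions.
    arcpy.CopyFeatures_management("parcels", "C:/data/parcels_copy.shp")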
Posted 05-16-2013 09:30 AM

POST
You're not missing anything. I don't have the definitive answer for you because we're all gone for the day (except me, apparently). What service pack are you on? It turns out that we put this into one of the service packs w/o exposing the environment. I'll know more tomorrow. In the meantime, can you just delete the fields on the output with Delete Field?
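If you go the Delete Field route, a minimal sketch (assuming the unwanted description fields all start with "d_"; the path is hypothetical):

    import arcpy

    shp = "C:/data/export.shp"  # hypothetical shapefile output

    # Find the subtype/domain description fields, which start with "d_".
    drop = [f.name for f in arcpy.ListFields(shp)
            if f.name.startswith("d_")]
    if drop:
        arcpy.DeleteField_management(shp, drop)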
Posted 05-15-2013 06:09 PM

POST
Good luck. I spoke with a colleague and he says he isn't aware of many issues with dissolving lines. However, he did say that when problems occur, it's when you're dissolving with no attributes (no dissolve fields); in that case, everything is dissolved into a single feature. My mental model of dissolving streets is that you're typically choosing a dissolve field like STREET_NAME or ROAD_TYPE, and the result features are therefore not problematic from the standpoint of vertex count. One final question: are you using Dissolve or Unsplit Line?
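For illustration, the difference between the two calls (names are hypothetical):

    import arcpy

    # Dissolving on a field produces one feature per STREET_NAME value.
    arcpy.Dissolve_management("streets", "streets_by_name",
                              dissolve_field="STREET_NAME")

    # No dissolve field: everything collapses into a single (possibly
    # enormous) multipart feature -- this is where trouble can start.
    arcpy.Dissolve_management("streets", "streets_all")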
Posted 05-15-2013 12:28 PM

POST
Something is amiss here. You shouldn't be running into memory problems if you're dissolving streets unless an individual output street, when dissolved, contains many hundreds of thousands of vertices (typically, problems occur with millions of vertices). This seems unusual to me because most street features have very few vertices. Are you dissolving streets for an entire country?

For more information, see the blog post Dicing Godzillas. An excessive number of vertices, as described in the post, typically occurs when dissolving many intricate polygons into one big Godzilla polygon. The post shows you how to calculate the number of vertices you're dealing with, so you can anticipate problems and pick a vertex limit for the Dice tool.

I hate to throw this out here, but have you checked the geometry using the Check Geometry tool?
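A quick way to get a vertex count in arcpy (a sketch; the feature class name is made up):

    import arcpy

    # Sum the vertex counts of all features using the geometry's
    # pointCount property.
    total = 0
    with arcpy.da.SearchCursor("streets_dissolved", ["SHAPE@"]) as cur:
        for (geom,) in cur:
            total += geom.pointCount
    print(total)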
Posted 05-15-2013 10:56 AM

POST
These fields (that start with "d_") contain the description of your subtype. This was added due to numerous requests to add the subtype description (if one exists) to exported shapefiles. Shapefiles don't support subtypes, so we export both the subtype code and description as regular fields. You can control whether these fields get written by setting the Transfer Field Domain Descriptions environment setting.
Posted 05-15-2013 08:15 AM

POST
I hunted around a bit for the SQL... this could be done with GROUP BY in SQL (something like SELECT STACK_ID, MAX(DEPTH) FROM points GROUP BY STACK_ID). Unfortunately, GROUP BY queries aren't supported on file geodatabases, only enterprise databases (e.g., SQL Server, Oracle).
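Since a file geodatabase won't run that query, a cursor-based workaround in Python (a sketch; the feature class and field names are hypothetical):

    import arcpy

    # Emulate SELECT STACK_ID, MAX(DEPTH) ... GROUP BY STACK_ID
    # with a dictionary keyed on STACK_ID.
    max_depth = {}
    with arcpy.da.SearchCursor("points", ["STACK_ID", "DEPTH"]) as cur:
        for stack_id, depth in cur:
            if stack_id not in max_depth or depth > max_depth[stack_id]:
                max_depth[stack_id] = depth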
Posted 05-10-2013 09:21 AM

POST
If all the points in your 'point stack' have a common ID (e.g., STACK_ID), try using the Summary Statistics tool. Take the MAX of the DEPTH field, with STACK_ID as your Case Field. The result will be a table with a record for each unique STACK_ID and its maximum depth. What it won't have is the alkalinity field value for that point. But you do have enough information (the STACK_ID and the DEPTH) to find that unique point using an attribute query.

The way I'd do this is to create a new field on the point feature class that concatenates STACK_ID and DEPTH. This would be a text field. See this blog post on how to concatenate field values. If you can create this field on your point feature class, you can also create it on the output of Summary Statistics. Then you have a common key field to use in joins and relates. You could use Join Field to join alkalinity from the point features to the statistics table, for example.

If you have an Advanced license, you could use the Sort tool to sort your points on STACK_ID and DEPTH (descending), then use Summary Statistics with the FIRST statistic to find the alkalinity value (see the sketch below). STACK_ID would be your Case Field. This'll return the first value of alkalinity found for each unique STACK_ID, and since the table is sorted, the first occurrence is the deepest point.

There's probably a better way to do this using an SQL query, but I can't think of it right now.
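The Sort + Summary Statistics route as a sketch (names are hypothetical; Sort requires an Advanced license):

    import arcpy

    # Sort points so the deepest record per stack comes first.
    arcpy.Sort_management("points", "points_sorted",
                          [["STACK_ID", "ASCENDING"],
                           ["DEPTH", "DESCENDING"]])

    # FIRST picks the first (deepest) alkalinity value per STACK_ID.
    arcpy.Statistics_analysis("points_sorted", "max_depth_stats",
                              [["ALKALINITY", "FIRST"],
                               ["DEPTH", "FIRST"]],
                              case_field="STACK_ID")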
Posted 05-09-2013 02:49 PM

POST
The only way to tell if you're running out of disk space is to periodically check the available space on the drive while the tool is executing. 128 GB at the beginning of the run seems like enough to me (1.8 million polygons at a guess of ~2 KB per polygon is only ~4 GB).

The failure of 64-bit background processing is disturbing. The fact that Buffer comes back quickly with an error suggests a machine/OS configuration issue, not a tool issue (unless you're running Buffer from within a script tool and not passing the layer to the script tool as a parameter). Do other tools work in 64-bit background? If not, then something subtle about your configuration is amiss and it might take a while to figure out. I haven't heard of problems with 64-bit background, and a quick check/search of our support site didn't turn up anything either. There's not much to the install -- I doubt you did anything incorrect.

Sigh -- I'm not being much help here. Let me see if I can get you the ftp site to send your data to us -- my contact for this is out of the office at the moment.
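For the periodic free-space check, a small sketch (the drive letter is hypothetical; shutil.disk_usage needs Python 3.3+, so on older Pythons you'd reach for ctypes instead):

    import shutil
    import time

    # Poll free space on the drive holding the scratch workspace while
    # the tool runs; stop with Ctrl+C.
    while True:
        free_gb = shutil.disk_usage("C:/").free / 1024 ** 3
        print("free: %.1f GB" % free_gb)
        time.sleep(60)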
Posted 05-07-2013 11:08 AM

POST
First, have a look at this blog article -- it discusses large overlays in detail (1.8 million polygons + 860 polygons is considered large): http://blogs.esri.com/esri/arcgis/2012/06/15/be-successful-overlaying-large-complex-datasets-in-geoprocessing/. The blog post mentions the 64-bit background processing product you can install and use; this is in full release. Your version of Windows needs to be 64-bit.

Beyond the suggestions the blog post gives, the first thing I would look at is whether you're running out of disk space on your C: drive. (Kinda smells like that to me.) You don't have to run Repair Geometry on the output of Buffer. I assume you checked/trust the geometry of the other feature class (the one with 830 polygons).

If you still get stuck, we can run it here with your data.
Posted 05-07-2013 09:36 AM

POST
I'm assuming you have a feature class of the lines already constructed. If so, you could use the Intersect tool: intersect the polygons and the lines. The result is a line feature class in which each line carries the feature ID of the polygon it intersects, and the length you're after is simply the length of that line (Shape_Length, if you're writing the output to a file geodatabase).
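A minimal arcpy sketch (the names are made up); setting the output type to LINE keeps only the line portions that fall inside the polygons:

    import arcpy

    # Each output line carries FID_* attributes identifying the polygon
    # it fell inside; Shape_Length gives the length per polygon.
    arcpy.Intersect_analysis(["polygons", "lines"], "lines_in_polys",
                             "ALL", output_type="LINE")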
Posted 05-02-2013 09:05 AM

POST
I played around with this a bit more given that you can't know the number of fields at runtime. What you can do is receive the arguments as a tuple. So, in the code block, you do this:

    def getMaxField(*args):

All the arguments are now available to you, assuming you pass in the field values followed by the field names, like this:

    getMaxField(!FIELD_A_1!, !FIELD_A_2!, !FIELD_A_3!, "FIELD_A_1", "FIELD_A_2", "FIELD_A_3")

You can receive the arguments and split them up into two sequences, one of values, one of names. To get the total number of args, use the len() function. Here is the code to create the two sequences:

    nfields = len(args)
    vals = args[0:nfields // 2]
    fields = args[nfields // 2:nfields]

Next, we need to find the maximum value in vals. Python has the max() function:

    maxValue = max(vals)

But we don't want just the max value, we want its position, and Python lists/tuples have an .index() method. So, to find the index of the max value:

    i = vals.index(max(vals))

Now all that's left is to return the name of the field found at that index:

    return fields[i]

Here's the completed (verbose) code. It worked for my simple test case. It doesn't do anything about duplicate maximum values, and note the integer division (//) so the split also works under Python 3:

    def getMaxField(*args):
        # First half of args holds values, second half holds field names.
        nfields = len(args)
        vals = args[0:nfields // 2]
        fields = args[nfields // 2:nfields]
        # The index of the largest value maps to the matching field name.
        i = vals.index(max(vals))
        return fields[i]
Posted 05-01-2013 08:38 AM

POST
You can use Calculate Field. The attached shows a graphic of a table named "A" with three numeric fields. In the Expression, I call the getMaxField routine I wrote, passing in the values of the three fields and the names of the three fields:

    getMaxField(!FIELD_A_1!, !FIELD_A_2!, !FIELD_A_3!, "FIELD_A_1", "FIELD_A_2", "FIELD_A_3")

The Code Block:

    def getMaxField(v1, v2, v3, name1, name2, name3):
        # Compare the three values and return the matching field name.
        maxval = max(v1, v2, v3)
        if maxval == v1:
            return name1
        if maxval == v2:
            return name2
        return name3
Posted 04-30-2013 11:26 AM