POST
Business Analyst has some great address locators, but I can't find detailed documentation for them. For example, there is a locator called USA_PointAddresses. It matches to address points - but is it rooftops, E911 data, or what? The reference data is listed as \REF_DATA\USA.gdb\EsriPointAddress, and the path suggests it is on my desktop, but it isn't. I need to understand how the data was created, how old it is, whether it is consistent across the US or concentrated in urban areas, etc. Another example: I'm finding addresses that match with the 5-digit ZIP locator but not with the ZIP+4 locator, even though according to the USPS the ZIP+4 is valid. I need the documentation on how these address locators were created, with details on the underlying source data, and I can't seem to find it anywhere. I'm using the desktop version of Business Analyst.
Posted 05-03-2017 08:38 AM

POST
Well... this all worked marvelously, but it didn't. I spread my 18 subsets over 3 machines with Xeon processors, started the batches last night, and when I came in this morning, voila, they were done. I didn't believe it could work so quickly, and sadly, it didn't. The results in the OD Cost Matrix table look correct (I've only tested a few so far); however, it didn't solve for all the origins. I had subsets of 50,000 origins, and in one set it only solved the first 1,443 origins, in another the first 9,000 - and these were run on the same machine, in the same batch. The OD Cost Matrices ended up with vastly different numbers of rows. For example, in one batch, one matrix had 6,594,231 rows, another 1,021,000, and a third 155,000 (there were 2 more in this batch that I haven't looked at yet). I'm doing all the processing in file geodatabases, one for each subset, so I don't think I'm bumping up against any size limits. Each GDB is in a separate folder, although all of these folders are subsumed in one parent folder. I have 48GB of RAM and gobs of hard drive space, so I don't think that is the issue. Does anyone have any idea what is going on? Or how I can log error messages so I might have a clue? Thanks, Heather
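To pin down which origins were skipped, one quick check is a set difference between the submitted origin IDs and the origin IDs that actually appear in the matrix rows. This is a minimal generic sketch, not an Esri API - it assumes you have already pulled the two ID lists out of the inputs and the solved table (e.g. with a cursor):

```python
def unsolved_origins(submitted_ids, matrix_origin_ids):
    """Return submitted origin IDs that never appear in the solved matrix."""
    return sorted(set(submitted_ids) - set(matrix_origin_ids))

# Toy illustration: 5 origins submitted, but matrix rows only cover 3 of them.
submitted = [1, 2, 3, 4, 5]
matrix_rows = [1, 1, 2, 2, 3]  # one entry per O-D pair, so origins repeat
print(unsolved_origins(submitted, matrix_rows))  # [4, 5]
```

If the missing IDs cluster at the end of each subset, that points to the solve stopping early rather than individual origins failing to locate.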
Posted 02-12-2013 11:09 AM

POST
Hello, this work is for a health care access grant. My origins are patients; destinations are facilities that provide a particular service. I'm doing this for different areas of the country. I am hoping that the 900,000 x 240 is my largest OD cost matrix, but I won't know until all the data comes in. I've got my Python queries set and managed to run a very small sample as a batch, and it all worked great - thanks, Chris. However, now I'm wondering how best to optimize this process for minimum processing time - I worry that if I leave a process running for 2 weeks, we will have a power outage and I'll have to start over. I currently have it set up to do 18 subsets of 50,000 origins each. But would it run quicker with 36 subsets of 25,000, or would this just overload the CPU? It is hard to test this since everything takes so long, so if anyone has experience with this it would be great to know, as I sense there are a lot of OD Cost Matrices in my future. Thanks for the advice, really appreciate it, Heather
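For what it's worth, the batching itself is cheap to script, which makes it easy to experiment with subset sizes. A generic sketch (no Esri calls; the ID range is illustrative):

```python
def chunk(ids, size):
    """Split a list of origin IDs into consecutive batches of at most `size`."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

origins = list(range(900_000))
print(len(chunk(origins, 50_000)))  # 18 batches
print(len(chunk(origins, 25_000)))  # 36 batches
```

Smaller batches mostly change how much work a crash can cost you rather than total throughput: with one solver process per machine, halving the batch size shouldn't overload the CPU, but each solve carries fixed setup overhead, so very small batches waste time on overhead.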
Posted 02-08-2013 05:27 AM

POST
Wow, that would make a world of difference. I'll give it a try. Thanks, Heather
Posted 02-04-2013 07:57 AM

POST
Heather, obviously this is a very in-depth approach and involves some upfront work (and perhaps some learning of new things). However, for large jobs like yours, using scripts to do the processing can really reduce the amount of processing time (and headaches). Hopefully some of this is helpful. I think it is very beneficial when analysts learn a bit of scripting. It definitely has its benefits. Happy Geoprocessing! Chris B.

Hi Chris, thanks for the in-depth reply. I should have stated that I have a model that runs the OD cost matrix process and exports the travel times to a geodatabase file. In the past, though, I manually created the subsets (which doesn't take much time), and I haven't set up the model to run all the subsets consecutively, so your approach would save a bit of my intervention. But I didn't see how it would speed up the actual processing time - my guess is it would still take 2 weeks to run each subset, or am I missing something? Heather
Posted 02-04-2013 05:56 AM

POST
Hi, I need to run a 900,000 x 240 OD Cost Matrix (not by choice!) and I'm looking for the most efficient way of getting this done. I'm using the StreetMap dataset, my destinations are spread throughout North Carolina, and most of my origins are in NC, but not all. I do have a limit of 180 min of travel time. I recently did a 300,000 x 240 matrix by running subsets of 50,000 x 240. Each subset took about 2 weeks to process on a desktop dedicated to this (64-bit, 8GB RAM, Windows 7, ArcGIS v10.1). The good news is I didn't run into any out-of-memory errors; the bad news is we could have had a power outage minutes before completion and I would have lost 2 weeks. Also, I did this manually, and it was a bit of a hassle to keep track of the different subsets (I had several false starts and stops, and I have 3 different desktops dedicated to this, living in different offices). So, the question is, how best to tackle the 900,000 x 240 matrix (minimizing my time as well as computer time):
1) use the 50,000-origin subset concept, since it worked before
2) automate a loop that runs much smaller subsets, say 1,000 x 240 - but this leaves me with lots and lots of files to merge together
3) instead of subsetting the origins, subset the destinations (i.e. 900,000 x 10, run 24 times)
4) other reasonable options? (Not doing it isn't an option.)
Thanks for any ideas, Heather
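On the power-outage worry: whichever subset size is chosen, a restartable driver loop means an outage only costs the subset that was in flight. A minimal sketch (generic Python, no Esri calls; the `.csv` naming convention and the commented helper names are made up for illustration) that skips subsets whose output file already exists:

```python
import os


def pending_subsets(subset_names, out_dir):
    """Return subsets with no result file on disk yet, so a restarted run
    resumes where it left off instead of redoing finished work."""
    return [s for s in subset_names
            if not os.path.exists(os.path.join(out_dir, s + ".csv"))]

# Driver loop sketch (helper names are hypothetical):
# for name in pending_subsets(["sub01", "sub02"], "results"):
#     solve_od_matrix(name)           # per-subset solve
#     write_result(name, "results")   # writing the file marks the subset done
```

The key design point is that the output file itself acts as the checkpoint: finishing a subset and recording it are one step, so a crash can never mark work done that wasn't.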
Posted 02-01-2013 11:27 AM

POST
Thanks so much, worked like a charm. Some day I'll make the time to work through a Python course - it would save me so much time! Thanks, Heather
Posted 02-27-2012 10:03 AM

POST
Hello, I need to parse some street addresses (stored as strings in one attribute field) into 2 separate fields - one for the house number, one for the street name. Of course the addresses have varying lengths of house numbers (or sometimes none) and varying lengths of street names (and varying word counts), so I can't just use Left() or Right(). For example:
1 Main St
10 Main St
100 Maine Street
100 Main Podunk St
Podunk St
I think I need to tokenize the string with the space as a delimiter, but I've never used Python (and only basic VB scripts), so I'm getting lost as I troll through the various Python resources. Can someone help me out? I'd like to do this in the field calculator. Thanks, Heather
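One way to handle all of those cases, sketched under the assumption that a leading all-digit token is always the house number (`!ADDRESS!` below stands in for whatever the source field is actually called):

```python
def split_address(addr):
    """Split an address into (house_number, street_name).

    The first token is taken as the house number only when it is all
    digits; otherwise the whole string is treated as the street name.
    """
    parts = addr.split()
    if parts and parts[0].isdigit():
        return parts[0], " ".join(parts[1:])
    return "", " ".join(parts)

print(split_address("100 Main Podunk St"))  # ('100', 'Main Podunk St')
print(split_address("Podunk St"))           # ('', 'Podunk St')
```

In the Field Calculator's Python parser you would paste the def into the code block and use `split_address(!ADDRESS!)[0]` as the expression for the house-number field and `[1]` for the street-name field. Addresses with unit numbers or fractional house numbers (100 1/2 Main St) would need extra rules.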
Posted 02-27-2012 09:17 AM

POST
Hi, I have two point datasets of tobacco retailers - the gold standard is based on actual tobacco license files, the other on the types of stores that sell tobacco (convenience store, grocery store, etc.). I want to compare the two datasets spatially, to understand how different they are. Neither dataset has attributes in common (one has the name, the other the type), and I don't really care about the attributes. Essentially, what I want to know is: is the dataset with store type (which I can get nationally) a good proxy for the dataset based on license files (which isn't available nationally, but which I can get for a few states to test)? Any suggestions on comparing these datasets? Anything I've come across is really getting at a relationship between their attributes rather than their locations. Thanks, Heather Carlos, Dartmouth College
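One simple attribute-free comparison (a sketch, assuming projected coordinates so Euclidean distance is meaningful) is the mean nearest-neighbor distance from each licensed retailer to the proxy dataset. Running it in both directions is worthwhile, since each direction catches a different kind of mismatch (stores the proxy misses vs. proxy points with no real store nearby):

```python
import math


def mean_nearest_distance(points_a, points_b):
    """Mean distance from each point in A to its nearest neighbor in B
    (brute force; fine for a few thousand points per dataset)."""
    total = 0.0
    for ax, ay in points_a:
        total += min(math.hypot(ax - bx, ay - by) for bx, by in points_b)
    return total / len(points_a)

# Toy coordinates: each "licensed" point has a proxy point 1 unit away.
licensed = [(0, 0), (10, 0), (0, 10)]
proxy = [(0, 1), (10, 1), (1, 10)]
print(mean_nearest_distance(licensed, proxy))  # 1.0
```

A small mean in both directions suggests the proxy is spatially faithful; a small mean one way but a large mean the other flags systematic over- or under-coverage.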
Posted 02-15-2012 09:11 AM

POST
I've attached a zipfile of the toolbox with both the model and submodel. I think one of my problems is that I'm not feeding the frequency table into Calculate Value (I don't even have it in the submodel). But I'm not sure how to get Calculate Value to know that is the table I want it to calculate from. I've also tried using Get Field Value - but I am running into a problem with the default iterator value of 1 - I posted in another thread about this. Thanks, Heather
Posted 09-22-2011 07:46 AM

POST
Hello, I'm having a similar problem, but I have the precondition set. I'm using Iterate Field Values and then use the output as in-line variable substitution in Get Field Value. However, Get Field Value won't run because the default value of the iterator is 1, and I get the error "Field 1 does not exist within table". You can see in the attachment that Get Field Value is "not ready to run" - even once I run the iterator (and its value is no longer 1), the state of Get Field Value doesn't change, and therefore it doesn't run. I can manually trick it into the ready-to-run state (by running the iterator, then opening, editing, cancelling the edit, and closing Get Field Value), but since the iterator has already run, that doesn't help, and once I reset the model, it turns white again.
Posted 09-21-2011 09:10 AM

POST
Hello all! I have a frequency table with a bunch of different attributes (destinations). I want to iterate through the rows of the table and check that a select list of the attributes have a value > 0; if so, I then want to make a selection from a related shapefile and draw a convex hull around the selection. If any of the listed attributes = 0, then I do nothing and move on to the next row of the frequency table. To do this, I set up a submodel with Iterate Field Values and Calculate Value to check the list of attributes. I use "stop" if one of the attributes = 0, otherwise "continue". In the main model, I Iterate Row Selection on the frequency table and then send the selected row to the submodel. I then have the result of the submodel (the "continue") as a precondition for selecting from the shapefile and drawing the convex hull. Two problems: 1) my Calculate Value always returns "true", even if the attribute = 0; 2) even when the submodel finds all the attributes > 0, the next step in the main model won't run because "the precondition is false". I've attached the model and submodel. The Calculate Value code is as follows (Data Type is Boolean):

Expression: getVal("%DestName%")
Code block:
    def getVal(intValue):
        if intValue >= 1:
            return "TRUE"
        else:
            return "FALSE"
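A plausible cause of problem 1 (a guess, since I can't see the model run): with quotes around the in-line variable, getVal("%DestName%") always hands the function a string, and under Python 2 (which ArcGIS 10.x geoprocessing uses) any string compares greater than any number, so intValue >= 1 is true no matter what value was substituted. Converting to a number before comparing avoids that; a minimal sketch:

```python
def getVal(value):
    """Return "TRUE" only when the (possibly text) value is numerically >= 1.

    int(value) makes the comparison numeric even when the in-line variable
    substitution delivers the field value as a quoted string like "0".
    """
    if int(value) >= 1:
        return "TRUE"
    return "FALSE"

print(getVal("0"))  # FALSE
print(getVal("3"))  # TRUE
```

The same conversion works whether the expression passes the value quoted or not, which makes the Calculate Value expression less fragile.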
Posted 09-16-2011 12:53 PM

POST
I'm having trouble nesting 2 iterators. I have tax parcels that are categorized by the type of destination (grocery store, bank, etc.). My study subjects are in a point file, and in another file I have polygons that are buffers (sometimes network buffers, sometimes simple circles) around each study subject. For each individual study subject, I want to select all the parcels that are in that study subject's buffer; then I'll use Near to calculate the distance from the study subject to each of the selected parcels. So, I have two iterators: 1) Iterate Feature Selection - this iterates through the buffer file and selects each buffer (I can also use the value from this process to select the related study subject); 2) Iterate Field Values - this iterates through the types of destinations (grocery, bank, etc.). I would like to loop as follows:
Select one buffer/subject pair
Select one destination group
-- calculate Near (which also means adding fields, calculating values, converting to layers, and whatnot)
Repeat for all of the destinations (for the first buffer/subject pair)
Repeat for the next buffer/subject pair, and so on
Since I can only use one iterator per model, I've tried lots of combinations of submodels, but I still end up with the first iterator running completely through ALL of its iterations before the second one starts. So, in essence, they aren't nested; one does all of its business before the next one starts. I've included JPGs of the current incarnation of my model. Thanks for any thoughts. Heather Carlos
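In a script the nesting the steps above describe is just two for-loops, which is one reason jobs like this are often easier outside ModelBuilder. A generic sketch (the subject and destination names are illustrative; no Esri calls):

```python
subjects = ["S1", "S2"]             # buffer/subject pairs
destinations = ["grocery", "bank"]  # destination groups

pairs = []
for subject in subjects:            # outer loop: one buffer/subject pair
    for dest in destinations:       # inner loop: each destination group
        # here you would select parcels of type `dest` inside this
        # subject's buffer and run Near for the combination
        pairs.append((subject, dest))

print(pairs)
```

The inner loop restarts for every outer iteration, which is exactly the ordering a single ModelBuilder iterator (or two chained ones) can't express.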
Posted 11-23-2010 06:09 AM