POST
Thanks Curtis, this is very useful information, certainly a help for future development with ModelBuilder.
05-21-2012 10:15 PM

POST
This is the way to do it. It seems elaborate, but there is no other way, as ModelBuilder accesses those "%" variables as strings, not objects. The commas separating arguments are Python syntax; there is no way around that.

Thanks Curtis, it is appreciated. This is a clear explanation, and I was suspecting something like that. As I am just starting with Python, it is all a bit new to me, and getting to know these specific issues and how to handle them takes time.

The way this all works is that the input expression to Calculate Value is interpreted by ModelBuilder (variables substituted in), and the resulting code is then passed to Python literally. This is the same reason that pathnames must be used with Calculate Value like this: def func(r"%mbuilder path variable%") Hope this helps!

Could you elaborate a bit more on this specific subject regarding pathnames? I haven't read about it before. And is the "r" before the inline variable a typing error, or does it need to be there?
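To partly answer that last question: the "r" is not a typo, it is Python's raw-string prefix. A minimal pure-Python sketch (the function name and path here are hypothetical, not taken from the model) of what goes wrong without it when ModelBuilder substitutes a Windows path as literal text:

```python
# Without the r prefix, backslash sequences in a substituted Windows
# path are interpreted as escape characters:
plain = "C:\temp\new_data"   # "\t" becomes a TAB, "\n" a newline
raw = r"C:\temp\new_data"    # raw string: backslashes are kept literally

print(repr(plain))  # shows the mangled string, with \t and \n expanded
print(repr(raw))    # shows the intact path

# So in a Calculate Value expression you would write, e.g.:
#   func(r"%My Path Variable%")
# so that after ModelBuilder's literal text substitution, Python
# receives the path characters unmodified.
def func(path):
    return path

print(func(raw))
```

Since ModelBuilder only pastes text into the expression, the r"..." wrapper is the only thing protecting the backslashes once Python evaluates it.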
05-21-2012 01:14 PM

POST
The error is in the incorrect usage of Python's "rstrip" method. "rstrip" removes ANY occurrence of the characters in the search string you supply (in your case "_shp") from the right side of the processed string. This is why it is going wrong: "citiespop_shp" ends with an additional "p" character, and since "p" is part of "_shp", it is also removed from "citiespop", resulting in "citiespo". The removal only stops at the "o" character, as that is the first character from the right that is not part of the search string "_shp".

Please note that for "rstrip", the order of the characters you supply is irrelevant. If you coded teststring.rstrip("phs_"), the result would be exactly the same, i.e. giving you "citiespo" for the processed basename "citiespop_shp".

You will need to use another of Python's string methods to accomplish what you want to do.
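A quick pure-Python demonstration of the rstrip behaviour described above, plus one safe alternative (a suffix check with slicing; this alternative is my suggestion, not necessarily the approach the original poster took):

```python
name = "citiespop_shp"

# rstrip treats its argument as a SET of characters, not a suffix:
print(name.rstrip("_shp"))  # -> "citiespo" (trailing p, h, s, _, p all stripped)
print(name.rstrip("phs_"))  # -> "citiespo" (character order is irrelevant)

# A safe way to remove exactly the "_shp" suffix:
suffix = "_shp"
base = name[:-len(suffix)] if name.endswith(suffix) else name
print(base)  # -> "citiespop"
```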
05-20-2012 07:13 AM

POST
Hi all,

For the record: ArcGIS 10.0 SP4.

Just getting my feet wet for the first time with more advanced ModelBuilder techniques in combination with Python, I noticed a possible issue / bug with the in-line variable substitution in non-English locales. My computer is set to Dutch, which uses "," as the decimal separator. However, if I have a numeric variable with a decimal separator in it (in my case the values were Double), think of "1234,56789", the in-line variable substitution leads to a fault as soon as the variable is inserted in a Python function's argument list, because Python uses the comma character to separate arguments.

I saw this issue pop up when I tried using two Double variables as arguments in the Calculate Value tool of ModelBuilder. See the attached screenshots. The first screenshot shows the original Python code I inserted in the Calculate Value tool. This code returned the error seen in screenshot 2. Notice it clearly states that the "getMBRsqrt" function I defined "takes exactly 2 arguments (4 given)". Also look at the string of numbers in the red-lettered error description: the two variables I passed are split on the decimal commas of the values themselves, causing 4 "pseudo arguments" to be passed to the function instead of two numbers using decimal points. This is of course wrong and leads to the fault, as my function only accepts two arguments and must receive the decimal values / Doubles.

Only when I elaborately changed the function, passing the variables as strings and converting them back to numbers, could I run it. See screenshot 3 for the modified code in the Calculate Value tool that did run properly.

Before I needlessly start reporting this to ESRI, can someone else confirm this issue? I tried finding it in the list of fixed issues for SP5, but no such issue seems listed for fixing.

Marco
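The string-based workaround described above can be sketched in pure Python. The function name getMBRsqrt comes from the post; its body and parameter names are my assumption, since the actual code is only in the screenshots. The idea: quote the inline variables so each substituted value arrives as one string argument, then normalize the decimal comma before converting:

```python
def getMBRsqrt(width_str, height_str):
    # Values substituted by ModelBuilder may use a decimal comma
    # ("1234,56789") in non-English locales; normalize it before
    # converting to float.
    w = float(width_str.replace(",", "."))
    h = float(height_str.replace(",", "."))
    return (w * h) ** 0.5

# In Calculate Value the expression would then be written with QUOTED
# inline variables, e.g.:
#   getMBRsqrt("%Width%", "%Height%")
# so the decimal comma can no longer be mistaken for an argument separator.
print(getMBRsqrt("4,0", "9,0"))  # -> 6.0
```

Because the substitution is purely textual, the quotes make each value a single string argument regardless of locale.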
05-19-2012 02:47 AM

POST
Getting good at geostatistical modeling requires study, practice, and often a bit of luck. X2. I think many people underestimate the need to delve into these kinds of subjects, whether it's geostatistical or plain statistical modelling on numbers. A lot goes wrong in practice, and unfortunately many datasets are not only misinterpreted, but often under-used, because people don't know what to do with them to get interpretable results.

If you read the help and decide you want more information, a colleague published a book last year on performing spatial statistics (and geostatistics) in ArcGIS: http://esripress.esri.com/display/index.cfm?fuseaction=display&websiteID=194

Don't forget this small but helpful introductory whitepaper by the same author: http://www.esri.com/library/whitepapers/pdfs/intro-modeling.pdf
05-16-2012 01:39 PM

POST
Can't answer your question as I haven't tried what you are attempting, and you probably already read it, but this ESRI PDF gives some more details, see point 5... Maybe there is a limit to the number of Feature Datasets open for editing (just 1 allowed??). E.g. you are not allowed to make two Feature Classes in two different Feature Datasets editable, but are allowed to edit one stand-alone Feature Class (not in a Feature Dataset) and one residing in a Feature Dataset...? Just guessing here.
05-12-2012 11:34 AM

POST
OK, I now discovered more or less what is actually going on, and where. While ArcMap only shows a progress bar for a few minutes, the unaccounted time afterwards is spent writing classic ESRI grid format files in the Windows user profile's Temp directory. I can see ESRI grid files being built up (file sizes increase) in cryptically named subfolders. It seems the processing needed for a Fill is far from over when ArcMap stops showing progress. Frustrating.

It is still unclear exactly what is going on. I know the Help states Fill is the equivalent of a number of tools combined: "The Fill tool uses the equivalents of several tools, such as Focal Flow, Flow Direction, Sink, Watershed, and Zonal Fill, to locate and fill sinks." So I wonder if ArcMap is just combining the outputs of these processes, or really "filling up" / replacing sinks with hydrologically corrected DEM values.

Whatever the case, I am glad it is now doing this on an external SSD, and not my hard disk, because it is literally writing gigabytes of grid data on end to the Temp directory and subsequently removing the files again.
05-12-2012 09:37 AM

POST
Hi all,

Has anyone else noticed this: when I run the Fill tool to remove spurious sinks in my raster DEM, ArcMap and the Fill tool seem at first to be working quite fast, 0-100% in limited time, in consecutive iterative runs to properly fill the sinks. However, after the 0-100% progress bar fully finishes 3 times or so (or as often as the iterative filling process needs), ArcMap then fails to add the created DEM. Instead, it starts pounding away like crazy on the hard disk, the kind of activity you really want only in RAM or on an SSD, as it will wear a hard disk out fast, and it only adds the final result a very long time after the progress bar of the Fill tool has disappeared. So it does finish, but maybe an hour or so after the five minutes of Fill tool processing displayed by the progress bar. I noticed the final processing of the raster takes place in your Windows user account's "Temp" directory, as I can see ArcMap-related activity going on there.

I am fully aware that the fill process is an intensive one, but this seems wrong. It seems like ArcGIS needs an excessive amount of time to write the final result. I realize the Fill tool needs to update the existing grid in specific locations, and I begin to wonder if that is the issue: ArcMap writing the raster in a kind of extremely inefficient database SQL UPDATE fashion, replacing records in a file geodatabase raster table, instead of just dumping a whole new raster table into the file geodatabase that is my final destination for the result...

Anyone else notice the same issues with the Fill tool?

Marco
05-12-2012 04:37 AM

POST
Hi Marco, You could try the Tiled Labels To Anno tool or export using Data Driven Pages - either way a new labelling run takes place for each extent, so you should see every feature labelled when you zoom in.

Thanks Jon, that is a useful tip and a good addition to this thread. I am already setting up Data Driven Pages and determining which labels I would like to have tiled, if not all. Of course, if you already have a fully annotated layer stored in an Annotation feature class, and have disabled "Label features" for the corresponding layer, Data Driven Pages will not solve the issue. That is just one minor caveat to that solution: it needs active labeling set.
05-11-2012 11:19 AM

POST
OK, well, I think I now figured it out myself... 🙂

What I hadn't reckoned with is that, when using normal labeling, labels are only placed within the visible extent of the data frame on the layout. That is good for display, as labels are not "cut off" at the borders of the data frame on the layout. But it also means that when exporting annotation to an Annotation feature class, where you would usually want to export the entire feature class so as to support panning across the entire dataset and still see labels, labels may be placed OUTSIDE the current view if there are long lines like roads. Hence they are not visible in the same way as with normal labeling...

You can solve this by choosing "Only label current extent" in the export options for "Convert Labels to Annotation", but this also means the exported annotation will not cover the entire dataset. Seems we have a trade-off here: either label only "Features in current extent" when exporting to annotation, to get consistency between the dynamic labeling and the annotation, but with the drawback of not being able to pan your entire dataset with annotated labels; or label using "All features" when exporting to annotation, but risk the labeling being different from what the current extent of the data frame in layout view shows... Food for thought... :confused:

*** EDIT ***: this also opens up another possible solution. If you have an already predefined set of extents for which you want to make maps, like 5 regions (not necessarily equal in size like a regular map grid!), you could simply create a (poly-)line feature class of the outlines of the regions and add it to the view. Set it to a high "Feature Weight" so as to enforce that the features may not be overlapped by labels (this is what happens at the borders of a data frame in layout view), set other labeling to a lower weight, and set the symbology to 100% transparent so as to hide its existence. You probably need to enable labeling as well for the labeling engine to have it participate in the labeling process, and maybe you need to set the labels to transparent too. You now have a dataset that will at least enforce that annotation created during export to the geodatabase will hopefully not overlap / be cut off at the region borders, and will more or less confine itself within the polyline features. It is not entirely the same as the dynamic labeling, but it might help in some cases (I haven't tested this thought yet though!).

I also see now that so-called "Map Document Annotation" might be better suited for my purposes. I have to read a bit more about that and about annotation groups.
05-04-2012 06:03 AM