BLOG
Interesting post with some food for thought and good reminders. Seeing that these tests were run with se_toolkit, and that the ArcSDE SDK won't be supported beyond ArcGIS 10.2, it makes me wonder how easy or hard it would be to run similar tests in ArcGIS 10.3 or beyond. Esri's DEPRECATION PLAN FOR ARCGIS 10.1 AND ARCGIS 10.2 states:
ArcGIS 10.2 will be the last major release to support the ArcSDE SDK with the ArcSDE C and
Java APIs. Today many other options are available for developers, including SQL, which is
available as a result of the widespread adoption of spatial types, the File Geodatabase API
(introduced in 2011), and ArcGIS Runtime SDKs that offer modern environments for the
creation of compelling custom applications.
I don't think the File Geodatabase API is really applicable here, which leaves "SQL" and the ArcGIS Runtime SDKs. Maybe it is a long week catching up with me, but I am struggling to see where an ArcGIS Runtime SDK fits in here. Thinking of these specific tests, even going down the SQL path seems onerous, or at least not clear at first glance. What options do you see for recreating these types of tests post-se_toolkit?
Posted 09-26-2014 07:57 PM

POST
Are you not able to use SQL Server's native spatial types (GEOMETRY, GEOGRAPHY)? Either native spatial type works with ArcSDE and allows one to run SQL directly against the data. If the data is registered as versioned, you may need to query the versioned view instead of the base tables.
Posted 09-26-2014 01:02 PM

POST
SDEBINARY? I don't think it is possible, whether Oracle or SQL Server. Are you using SDEBINARY with the code snippet above? It looks like SDO_Geometry. With SQL Server, you would need to use GEOMETRY or GEOGRAPHY. Look at STWithin or STContains, depending on whether you want to look at it from the point's or the polygon's perspective.
Posted 09-26-2014 08:22 AM

POST
Daniel, to code wrap you need to use the 'advanced editor' option in the upper right corner. Once in the advanced editor, you can do syntax highlighting. Just to clarify, the code you just posted is generating the cursor object error you mentioned earlier? Can you try running the command and pasting the exact error messages that are returned, all of them? One potential problem I see, although it wouldn't generate the cursor object error, is your string representing the feature class. If you are going to use Windows-style paths (single backslashes for directories), you should put an 'r' in front of the first quote to signify a raw string. For example, r"C:\Users\damrine" instead of "C:\Users\damrine". In Python, a backslash is an escape character, which can create problems with Windows-style paths if people don't realize that.
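To make the escape-character point concrete, here is a small sketch with a made-up path (note that in Python 3 a plain "C:\Users\..." literal can even be a syntax error, since \U starts a unicode escape):

```python
# "\t" and "\n" in a plain string silently become tab and newline characters,
# so the path on disk is not the path you typed.
plain = "C:\temp\new_folder"   # contains a real tab and a real newline
raw = r"C:\temp\new_folder"    # raw string: backslashes are kept literally

print(repr(plain))  # shows the hidden tab/newline escapes
print(repr(raw))    # shows literal backslashes
```

The raw-string prefix is the simplest fix; forward slashes or doubled backslashes also work.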
Posted 09-25-2014 07:41 AM

POST
A few things come to mind. I: drive, mapped network drive or local storage? If a mapped network drive, have you tried copying the MDB to local storage and testing it? Can you create a new MDB and copy the data into it? If so, does that give the same error messages? Can you make an OLE DB connection to the MDB in question?
Posted 09-24-2014 06:40 PM

POST
Are you using my second code snippet? The error message you are getting is likely caused by using a Python "with" statement with the older cursors. The second code snippet I posted most recently does not use the with statement and should work. Your issue with working over the network has nothing to do with Python; I would argue ArcGIS Desktop has historically handled network-based data, like UNC paths, very poorly. I believe the second code snippet I posted will work, and with any data set. To defend scripting with Python: it is way more powerful than most of the GUI tools, as one would expect. And GUI tools don't always work as well; I find troubleshooting them much more frustrating than troubleshooting Python.
Posted 09-24-2014 09:13 AM

POST
I don't believe the original cursors (non-da cursors) supported the Python with statement. Original code example, modified for the original cursors:
def normalizeField_10(in_table, in_field, out_field):
    cur = arcpy.SearchCursor(in_table)
    row = next(iter(cur))
    minimum = maximum = row.getValue(in_field)
    for row in cur:
        x = row.getValue(in_field)
        if x < minimum:
            minimum = x
        if x > maximum:
            maximum = x
    del cur
    cur = arcpy.UpdateCursor(in_table)
    for row in cur:
        row.setValue(out_field, (float(row.getValue(in_field)) - minimum)/(maximum - minimum))
        cur.updateRow(row)
    del cur
Posted 09-23-2014 01:59 PM

POST
What is the error message if you use arcpy.da.InsertCursor instead of arcpy.InsertCursor? Do you have a requirement to use the older InsertCursor? Besides better performance, the data access InsertCursor supports Python's with statement, which I find to be a big plus.
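To show why with-statement support is a big plus, here is a toy stand-in for a cursor (a hypothetical class, not part of arcpy): the cleanup in __exit__ runs even when an error is raised mid-loop, which is exactly how a da cursor releases its locks.

```python
# Hypothetical cursor-like class illustrating context-manager cleanup.
class ToyCursor:
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # release locks/handles, like a da cursor would
        return False         # don't swallow the exception

cur = ToyCursor()
try:
    with cur:
        raise RuntimeError("row failed")  # simulate an error mid-edit
except RuntimeError:
    pass
print(cur.closed)  # True: cleanup ran despite the error
```

Without the with statement you would need an explicit try/finally with del to get the same guarantee.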
Posted 09-23-2014 12:54 PM

POST
I roughed out the example to run from the interactive Python window. I don't work with Model Builder much, or at all, so I can't help much in terms of incorporating it into that workflow. The function takes strings for all arguments. If VAL is the name of the field, use "VAL" with just quotes and no brackets or parentheses; the same with NORM.
Posted 09-23-2014 12:38 PM

POST
Xander Bakker, I agree the list-based example you provide is more succinct, not to mention that it also works. The reason I went with a slightly more verbose iterator rather than a more succinct list comprehension was memory management. If one happens to be dealing with millions, tens of millions, or even more records, fully populating a list will impact memory and probably performance. That said, most users likely never deal with data sets large enough to see memory or performance differences between the two approaches.
Posted 09-23-2014 12:23 PM

POST
There is definitely a way to address this without resorting to plugging values in by hand. That being said, it isn't going to be a simple expression in Field Calculator that does it. Do you have a requirement to use Field Calculator? If so, I think "The Web" has already given you the answer(s).

The behavior you are seeing, at least the parts that aren't errors, is expected. Field Calculator operates like a cursor, at least logically: it only sees one record at a time, even if it is operating over an entire set of records. The Codeblock section allows for a little bit of kung fu, but we are talking Kung Fu Panda and not Caine. This is why your min and max functions are returning VAL instead of the min and max for the field. If you were looking at minimum or maximum values across fields for a given record, then the approach might work.

As Ian Grasshoff mentions, you could use the Summary Statistics tool. My rub with that tool, and why I seldom use it, is that it creates a table to hold the values. The last thing I want is another table to extract values from and have to clean up afterwards.

As Johannes Bierer links to, you could use arcpy.da.TableToNumPyArray to dump the table and find the minimum, maximum, or other statistics that way. My only concern with that approach, well, all approaches involving lists, is that creating the lists can consume lots of memory depending on the data sets involved. Why create a list with a million elements if you only want to know the minimum and maximum values for the field?

Here is an example of a function you could create and then call from the Python interactive window:
def normalizeField(in_table, in_field, out_field):
    with arcpy.da.SearchCursor(in_table, in_field) as cur:
        x, = next(iter(cur))
        minimum = maximum = x
        for x, in cur:
            if x < minimum:
                minimum = x
            if x > maximum:
                maximum = x
    with arcpy.da.UpdateCursor(in_table, [in_field, out_field]) as cur:
        for x, y in cur:
            y = (float(x) - minimum)/(maximum - minimum)
            cur.updateRow([x, y])
The float with the UpdateCursor ensures the normalization value doesn't truncate to an integer if the in_field happens to be an integer.
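The normalization math itself can be checked with plain lists standing in for table rows (the values below are made up for illustration):

```python
# Min-max normalization: (x - min) / (max - min) maps the field onto [0, 1].
values = [10, 25, 40, 55, 70]
minimum, maximum = min(values), max(values)
normalized = [(float(x) - minimum) / (maximum - minimum) for x in values]
print(normalized)  # smallest value maps to 0.0, largest to 1.0
```

The float() call mirrors the one in the cursor version: without it, integer input fields would truncate every result to 0 or 1 under Python 2 division.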
Posted 09-23-2014 11:35 AM

BLOG
Esri has released a KB article acknowledging and describing the limitation of feature class modified dates in ArcCatalog: FAQ: In ArcCatalog, why is the time stamp incorrect on the date modified field of a file geodatabase feature class? A small step in a positive direction....
Posted 09-22-2014 07:01 AM

BLOG
Final answer... "known issue." Unfortunately, the response to my request for clarification on the first answer was simply to repeat the symptoms I was seeing. I don't consider repeating observable facts as offering an explanation, let alone clarification, but they have a story and are sticking to it. Accepting that Esri isn't going to change a behavior, a behavior that I clearly argue is a bug, there are still a couple of outstanding questions. First, Esri Development says it is a known "limitation" that the feature class modified times aren't actually accurate. The problem with this statement is that neither I nor anyone I know has ever found documentation of this limitation. If it is known, why not document it so that it can be known by the tens or hundreds of thousands of ArcGIS Desktop users around the world, and not just a small team of people in Redlands? Second, there is a flawed logic in giving users incorrect information with no caveats rather than no information at all. When it comes to ArcSDE and Personal Geodatabases, the "Modified" column is empty because there is no mechanism for Esri to give users reliable information. Why not do the same with file geodatabases instead of giving incorrect information? This ties in well with my What's in a Name When Known = Unknown blog post, where I raise the issue of known limits of the software not being documented and the impact that causes for users.
Posted 09-17-2014 01:30 PM

POST
Right, I forgot about that second part, which is why I wasn't using it in 10.1. I know it is no consolation, but it does work in 10.2.2 once you can upgrade.
Posted 09-12-2014 09:10 AM

POST
With 10.1 I seem to remember some quirky behavior, I would argue a bug, where you need to include the OBJECTID field in the GROUP BY clause even if you aren't selecting OBJECTID. Try the following and see if it works:
tab = r'H:\Documents\ArcGIS\Default.gdb\arlist'
with arcpy.da.SearchCursor(tab, ["fc", "item2"], sql_clause=(None, "GROUP BY fc, item2, objectid")) as cursor:
    for row in cursor:
        print "{0}, {1}".format(row[0], row[1])
I believe the root cause was that OBJECTID in 10.1 is being implicitly selected with the SearchCursor and not handling it in the GROUP BY clause creates the error. I don't recall this being listed in Addressed Issues for 10.2/10.2.1/10.2.2, but the behavior did change between 10.1 and 10.2.x.
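For anyone unsure what the GROUP BY clause buys you here, the grouping result itself (one row per distinct fc/item2 pair) can be sketched with stdlib sqlite3 and made-up data; this shows the expected output, not the 10.1 objectid quirk, which is specific to the arcpy/SDE layer.

```python
import sqlite3

# Made-up table mimicking arlist: objectid, fc, item2.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE arlist (objectid INTEGER PRIMARY KEY, fc TEXT, item2 TEXT)")
con.executemany("INSERT INTO arlist (fc, item2) VALUES (?, ?)",
                [("roads", "a"), ("roads", "a"), ("roads", "b"), ("parcels", "a")])

# GROUP BY collapses duplicate fc/item2 combinations into one row each.
rows = con.execute("SELECT fc, item2 FROM arlist GROUP BY fc, item2").fetchall()
for fc, item2 in rows:
    print("{0}, {1}".format(fc, item2))
```

Four input rows collapse to three distinct pairs, which is what the SearchCursor with the sql_clause GROUP BY returns as well.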
Posted 09-12-2014 08:56 AM