POST
ModelBuilder is straining to support iterators. It is much better to use Python for iteration: you still call the same functions, but the iteration is far easier to understand, you get much better control over missing data and errors, and the whole process is more transparent. You can also use cursors, which are much faster. Start your upgrade by right-clicking each tool to copy it as a Python snippet, then assemble the snippets into a script. Alternatively, encapsulate just the iterator in a single script, publish it as a custom script tool, and leave the rest of the process in ModelBuilder.
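A minimal sketch of the loop pattern described above, with a hypothetical `run_tool()` standing in for any geoprocessing call (a real script would call an arcpy tool here). The point is the explicit per-item error handling that a ModelBuilder iterator makes hard:

```python
# Sketch of replacing a ModelBuilder iterator with a plain Python loop.
# run_tool() is a hypothetical stand-in for a geoprocessing function.

def run_tool(dataset):
    """Hypothetical worker; raises on bad input like a real tool would."""
    if not dataset:
        raise ValueError("missing dataset")
    return f"{dataset}_out"

datasets = ["roads", "rivers", "", "parcels"]  # one entry is bad on purpose

results, failures = [], []
for ds in datasets:
    try:
        results.append(run_tool(ds))
    except ValueError as err:
        failures.append((ds, str(err)))  # log the failure and keep going

print(results)   # ['roads_out', 'rivers_out', 'parcels_out']
print(failures)  # [('', 'missing dataset')]
```

Unlike a model iterator, a bad record is caught, logged and skipped instead of aborting the whole batch.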
03-11-2023 03:54 PM

POST
To operate tools on data stored on disk you always have to create in-memory definitions or layers first. MakeTableView is the simpler option: it works on one table and is reliable. It can make a subset of records and a subset of fields, although some downstream tools get confused by it (Intersect overlays, for example).

MakeQueryTable attempts to set up a relational database expression across several tables in the same database. It may work on sample sets, but it has never worked for me on real-sized datasets; often it just hangs or crashes after hours. After trying it for a while, I now avoid it at all costs.

So what to do? If you are using ModelBuilder or interactive selections you are stuck with it: simplify and reduce first by exporting to a temporary feature class, or partition the data into smaller sets and repeat. If you are using Python there are much better options. You can copy the data straight into NumPy or Pandas and run Python functions. Note that an SQL expression has to stay within one database. Break this limit by extracting the whole dataset with a SearchCursor() and then using Python set functions to do the joins; this means you are no longer limited to one database. Python dictionaries are very fast lookups and can be very large, say a million records. I often use a list comprehension over a SearchCursor to extract a table, then loop through another table in another database and look records up in the dictionary. Alternatively, I build a huge list into an SQL IN clause to do the selection; make sure the field is indexed in the database and you will have your selection in milliseconds, compared to hours with MakeQueryTable.
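A small sketch of the two techniques described above: a dictionary-based "join" across tables from different databases, and building an SQL IN clause from a key list. The table and field names are made up, and plain tuples stand in for rows from `arcpy.da.SearchCursor`:

```python
# Table A (e.g. from database 1): (parcel_id, owner)
table_a = [(101, "Smith"), (102, "Jones"), (103, "Lee")]
# Table B (e.g. from database 2): (parcel_id, area)
table_b = [(101, 5.0), (103, 2.5), (104, 9.9)]

# Build a fast lookup dictionary from table A; with real data a list
# comprehension over a SearchCursor would produce the same structure.
lookup = {pid: owner for pid, owner in table_a}

# "Join" table B to table A via the dictionary - no shared database needed.
joined = [(pid, area, lookup[pid]) for pid, area in table_b if pid in lookup]
print(joined)  # [(101, 5.0, 'Smith'), (103, 2.5, 'Lee')]

# Alternative: build an SQL IN clause for a selection on an indexed field.
ids = [pid for pid, _ in table_a]
sql = "PARCEL_ID IN ({})".format(",".join(str(i) for i in ids))
print(sql)  # PARCEL_ID IN (101,102,103)
```

The dictionary lookup is O(1) per record, which is why this scales to millions of rows where MakeQueryTable stalls.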
03-11-2023 03:47 PM

POST
Thanks, I have been laid low with an Achilles tendon rupture, Covid, and cancer in the family. Back soon.
03-11-2023 02:33 PM

POST
This is tricky even for polygons. Esri enforces a convention of outer rings clockwise with holes counterclockwise. This is the right-hand rule, and it defines which side of a polygon is up. But OGC forgot to define this, so lots of open tools do not enforce the rule, causing spatial analysis errors. To standardise, run the Repair Geometry tool; note that all KML polygons will be corrupt after importing, for example.

But what about polylines? There is no convention here, yet it often matters, such as when labelling text (now more sophisticated, thank goodness). There is no from-to convention for polylines, and even finding the start and end nodes is no longer supported with shapefiles. It was for coverages, with their built-in arc-node topology.

Someone could write a small Python script to find the start point, end point and midpoint of each polyline (all available as functions), then do some clever vector geometry: calculate the cross product of the two segment vectors to see whether the polyline curves to the left or the right, then flip the polylines that need it. Another approach might be to temporarily join the start and end points to create polygons, run Repair Geometry, and find the ones that have been reversed (repaired): those are the source polylines that need flipping.

You might also be able to use linear referencing. When you define routes you nominate the corner of the extent for the start of each route, which makes all the routes run in the same direction: not quite clockwise, but directed. Similarly, when you calculate the convex hull of a cloud of points, the boundary is a clockwise polyline; maybe you can adapt that very simple algorithm to achieve a similar result, or make a convex hull and then order your data.

Now that I have thought about it, I don't really understand your question! A picture would help. Do you want all your polylines flowing like a stream, or is each one independent? I am very curious as to why you need them flipped.
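The cross-product test suggested above can be sketched in a few lines. Given three points along a polyline (start, mid, end), the sign of the 2D cross product of the two segment vectors says which way the line bends:

```python
def turn_direction(p1, p2, p3):
    """Return 'left', 'right' or 'straight' for the bend at p2."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])  # first segment vector
    v2 = (p3[0] - p2[0], p3[1] - p2[1])  # second segment vector
    cross = v1[0] * v2[1] - v1[1] * v2[0]  # z-component of the cross product
    if cross > 0:
        return "left"
    if cross < 0:
        return "right"
    return "straight"

print(turn_direction((0, 0), (1, 0), (1, 1)))   # left  (counterclockwise bend)
print(turn_direction((0, 0), (1, 0), (1, -1)))  # right (clockwise bend)
```

With real features you would pull the three points from each geometry (first point, midpoint, last point) and flip the polylines that bend the wrong way.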
03-11-2023 02:19 PM

IDEA
I have been starting to use the Python API and I have found the functions very difficult to understand. The samples are just 'showoff' examples of why you would use GIS, not how to debug the tools. I find the arcpy documentation much more helpful, because there are example snippets on every function right in the help, not buried somewhere in a sample. The edit tools have 16 samples for the whole module, and most of the functions and parameters are not covered by an example anywhere. I cannot find a working example of extract_changes(). I note in the community noticeboard that there was a bug in 2020 where 'optional' parameters (the URI, for example) were actually required. Has this changed? Let's see something working, so I can backtrack to see why my environment is different. I thought the open-source Python API on GitHub would have the source for arcgis! Some hope: it only has a smattering of examples that give a once-over-lightly of each module. My initial thought was that since there was a function missing where I was porting an arcpy script, I would add one myself (open source, published...). I can see a child table separately from the parent table in the GUI, so why not in the API?
10-04-2022 10:13 PM

POST
There is already a function, validate_sql, built into the API to validate an SQL string. Very straightforward: https://developers.arcgis.com/python/api-reference/arcgis.features.html#arcgis.features.FeatureLayer.validate_sql
10-04-2022 04:01 PM

POST
I am searching for a working example of extract_changes() too, and I cannot understand the error messages. I did see a post from 2020 saying it does not accept the defaults as documented; you have to add in the URI, for example.
10-04-2022 03:49 PM

POST
What you have to do is import the arcgis module as well as arcpy. Then you can access AGOL tools, which include the geocoding service. Or switch to the geocoding service directly from ArcGIS Pro. The functions and parameters are different, but it is ultimately the same service, and it will also cost credits.
09-13-2022 10:22 PM

POST
The GlobalID is read-only and out of our control. Make a new GUID field, which you do own, and save the ID into it. Then build a dictionary of oldID to newID and change the foreign key in the attachment table to point to the new one. Phew!
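A sketch of that dictionary remap with plain Python objects. The field name `REL_GLOBALID` and the record values are made up for illustration; with real data the parent IDs and attachment rows would come from cursors:

```python
import uuid

# Parent features keyed by their old, read-only GlobalID (made-up values)
parents = ["guid-old-1", "guid-old-2"]

# 1. Mint a new GUID for each parent; this goes into the owned GUID field.
old_to_new = {old: "{%s}" % uuid.uuid4() for old in parents}

# 2. Repoint each attachment's foreign key at the new GUID.
attachments = [{"REL_GLOBALID": "guid-old-1"},
               {"REL_GLOBALID": "guid-old-2"}]
for att in attachments:
    att["REL_GLOBALID"] = old_to_new[att["REL_GLOBALID"]]

# Every foreign key now matches one of the newly minted GUIDs.
assert all(a["REL_GLOBALID"] in old_to_new.values() for a in attachments)
```

In a real script step 2 would be an UpdateCursor over the attachment table, writing `old_to_new[row[fk]]` back into the foreign-key field.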
09-13-2022 10:13 PM

POST
I am trying to do the same. My hope was that you might be able to do this with Arcade from a parent/child relationship, and maybe you can. However, it will be quite complex, because you will need to allow for multiple child records and use only the latest one, so turn on editor tracking. I have found it is possible to update the parent feature with the latest record using the Python API by offloading all the hard work to Pandas: get a list of visit records and find the latest date for each GUID that is a foreign key to the GlobalID. Now you have a dataframe of these records to update the main layer with whatever you want to calculate into a status field, ready to be symbolised. Note the magic groupby one-liner. This script was written in Pro, so there are a few changes to get a featureset in AGOL using fl.query_related_records(). The excess WeedLocations fields joined when adding extra visit events will be cleaned up later.
import collections
import arcpy
import pandas as pd

# 'visits' and 'weeds' are assumed to point at the Visits_Table and
# WeedLocations datasets (set earlier in the script)
# dictionary of field mappings so far from Visits_Table to WeedLocations
visit_to_weed = {
'Guid_visits': 'GlobalID', # foreign key -> primary key
'EditDate_1':'DateVisitMadeFromLastVisit', # for latest date
"WeedVisitStatus":'StatusFromLastVisit', # as inspected
'DifficultyChild':'DifficultyFromLastVisit', # as inspected
'VisitStage':'LatestVisitStage', # as inspected
'Area':'LatestArea' # as inspected
}
# extract the data required from the database Visits_Table, don't even need a primary key eg GlobalID
# In the future we will put on a filter greater than the last update
filter = '' #"""EditDate_1 > date '{}'""".format('2022-07-01')
in_flds = list(visit_to_weed.keys()) # ['Guid_visits','EditDate_1','WeedVisitStatus','DifficultyChild','VisitStage','Area']
out_flds = list(visit_to_weed.values())
vdate =[row for row in arcpy.da.SearchCursor(visits,in_flds,filter)]
print('vdate:',len(vdate))
# put in a pandas dataframe
df = pd.DataFrame(vdate,columns=in_flds)
# find the record with the max edit date per visit, keeping the other details, all in one line!
idx = df.groupby(['Guid_visits'])['EditDate_1'].transform('max') == df['EditDate_1']
# print(df[idx])
# keep only the latest record per GUID, then pivot to {guid: [field values]} for lookup
dVisit = df[idx].set_index('Guid_visits').T.to_dict('list')
# print(dVisit.get('{08125D4D-4725-4A26-BAC9-98BE7EF0784C}',"Missing"))
# {08125D4D-4725-4A26-BAC9-98BE7EF0784C}, 2022-08-23 03:04:01.644000, None, None, None, NaN
# Count visits for each location
vguid = [row[0] for row in arcpy.da.SearchCursor(visits,['GUID_visits'], "GUID_visits is not NULL")]
# dict of counts by GlobalID for updating
vguid_counts = collections.Counter(vguid)
# put counts in new field VisitCount for now
if len(arcpy.ListFields(weeds, 'VisitCount')) == 0:
    arcpy.management.AddField(weeds, 'VisitCount', 'LONG')
with arcpy.da.UpdateCursor(weeds, ['VisitCount'] + out_flds) as cur:
    n = 0
    for row in cur:
        try:
            guid = row[1]  # row[1] is the GlobalID, the key into both lookups
            row[0] = vguid_counts.get(guid, 0)
            vals = dVisit.get(guid, None)
            if vals:
                row[2], row[3], row[4], row[5], row[6] = vals
            cur.updateRow(row)
            n += 1
        except Exception as e:
            arcpy.AddMessage(row)
            arcpy.AddMessage(e)
arcpy.AddMessage("Well Done, {} records updated".format(n))
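For anyone without pandas to hand, the "latest record per foreign key" step that the groupby/transform line performs can also be written with a plain dictionary. A sketch with made-up `(guid, edit_date, status)` records:

```python
# Plain-Python version of "keep the latest record per GUID".
records = [
    ("g1", "2022-08-01", "active"),
    ("g1", "2022-08-23", "cleared"),
    ("g2", "2022-07-15", "active"),
]

latest = {}
for guid, edit_date, status in records:
    # ISO-format date strings compare correctly as plain strings
    if guid not in latest or edit_date > latest[guid][0]:
        latest[guid] = (edit_date, status)

print(latest["g1"])  # ('2022-08-23', 'cleared')
print(latest["g2"])  # ('2022-07-15', 'active')
```

The resulting dictionary plays the same role as dVisit in the script: one lookup per parent feature inside the UpdateCursor loop.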
09-13-2022 10:02 PM

POST
Yes you can! But as pointed out, the performance is terrible and limited, so you have to precalculate it to be useful on a map. Even that is painful! I have the same problem. In my case the user wanted to move the visit records to a related child table and have the location table as the parent, but some records do not have a child. Since the foreign key (a horrible GUID) is the link to the GlobalID (a read-only GUID), I needed to add the missing records from the location feature layer to the related visit table.

I did this in a prototype using relates in Pro, and then in an arcpy script using dictionaries, because arcpy does not use relates (or the tools that do are not reliable). It works a treat in Pro because you have da cursors and georelational tables. But in AGOL you have a giant object that you have to query in single REST requests, which have limits on the data returned. In my Pro example there is still some magic performed in Pandas: it finds the record with the latest date, grouped by foreign key, all ready to update the feature layer. My next strategy is to go at the Visit related table with a more direct query in AGOL. You can get all the Visit records, slowly, WeedLocation record by WeedLocation record, using query_related_records(); it is slow and painful and the return is a nested JSON mess, but Pandas can unpack it. This enables me to update the feature layer attributes that are used for symbology.

# add missing visits
# oh yeah, cannot use relates
# use keyfiles
# Creative Commons NZ 4.0 Kim Ollivier
# 10 Sept 2022
import os
import sys
import collections
import arcpy
try:
    gdb = sys.argv[1]
except IndexError:
    gdb = 'm:/project/econet/source9Sep/CAMS_weed.gdb'
if not arcpy.Exists(gdb):
    raise IOError(gdb)
arcpy.env.workspace = gdb
arcpy.env.overwriteOutput = True
arcpy.AddMessage(gdb)
# "Visits_Table", "GUID_visits" "WeedLocations", "ParentGuid" or "GlobalID"
vguid = [row[0] for row in arcpy.da.SearchCursor('Visits_Table',['GUID_visits'], "GUID_visits is not NULL")]
print('vguid:',len(vguid))
# dict of counts by GUID for inspection later
vguid_counts = collections.Counter(vguid)
# build the quoted IN list explicitly; str(tuple(...)) breaks on a single element
sql = """GlobalID NOT IN ({})""".format(",".join("'{}'".format(g) for g in vguid))
print(sql[0:60], ' ... ', sql[-42:])
arcpy.management.MakeFeatureLayer('Weedlocations','weed_no_visit_lay',sql) # use a fieldinfo to limit fields?
arcpy.management.CopyRows('weed_no_visit_lay','Visit_extra') # this adds in my name as an editor??
print("extras",arcpy.management.GetCount('Visit_extra'))
# maybe extract a dict and insert
#
# make a new visits table to allow repeated tests
arcpy.management.Merge(['Visits_Table','Visit_extra'], 'Visits_Table_new',add_source="ADD_SOURCE_INFO")
# group GUIDs by edit date; the comprehension must iterate a cursor, not the
# table-name string ('EditDate_1' is assumed as the date field, matching the other script)
max_dates = {}
for key, value in {r[0]: r[1] for r in arcpy.da.SearchCursor('Visits_Table', ['GUID_visits', 'EditDate_1'])}.items():
    max_dates.setdefault(value, set()).add(key)
with arcpy.da.UpdateCursor('WeedLocations', ['GlobalID', 'VisitCount']) as cur:
    for row in cur:
        row[1] = vguid_counts.get(row[0], None)
        cur.updateRow(row)
09-13-2022 09:51 PM

POST
The only workaround that I see often is to create a ParentGUID GUID field and copy the GlobalID into it first. Then you have to change the foreign key GUID to the new GlobalID with an update, maybe with a Python script or the Python API. This has also been driving me mad. The GlobalID is like an OBJECTID: not yours, so the software decides whether the feature is a new record.
09-13-2022 09:33 PM

POST
Sort of retired? Jack Dangermond hasn't! Covid lockdowns have made me virtually retired!
09-04-2022 04:01 AM

POST
Here I am in 2022, with ArcGIS Pro 3.0.1 installed, still looking to see why it does not work with a couple of simple file geodatabases!
09-04-2022 03:45 AM
| Title | Kudos | Posted |
|---|---|---|
| | 1 | 08-26-2025 03:48 PM |
| | 1 | 05-08-2025 02:07 PM |
| | 1 | 05-07-2025 05:13 PM |
| | 3 | 04-04-2025 03:16 PM |
| | 2 | 05-07-2025 05:21 PM |