IDEA
When importing data from a GeoPackage or SQLite database, any field defined as Integer in the DDL gets cast to Big Integer instead of (Long) Integer. This is new, incompatible behaviour since Big Integers were introduced (ArcGIS Pro 3.2?). Several tools do not work with Big Integers, such as relationship classes. It would be helpful if there were a switch to turn off this unwelcome enhancement. There is a switch in Options > Map and Scene, but it doesn't work. The only way to fix the problem is to define a FieldMapping on every copy operation, and many operations do not take FieldMapping as a parameter. If the field is read-only, such as OBJECTID, it cannot be changed at all. I see another user has used FME to fix this. It is like a reverse single-precision/double-precision incompatibility! Maybe a switch somewhere in the settings or the environment settings could retrofit a fix?
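For anyone wondering why the importer defaults this way: SQLite's INTEGER storage class holds up to 8 bytes, so a stored value can legitimately exceed the 32-bit Long range. A minimal sketch with the standard sqlite3 module (table and column names invented for the example):

```python
import sqlite3

# SQLite's INTEGER storage class is up to 8 bytes wide, so a faithful
# reader has to assume 64-bit values are possible -- presumably why the
# importer promotes INTEGER to Big Integer by default.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER)")
big = 2**40  # legal in SQLite, but outside a 32-bit Long field
con.execute("INSERT INTO t VALUES (?)", (big,))
value, = con.execute("SELECT id FROM t").fetchone()
print(value > 2**31 - 1)  # True: this row could not survive a blind cast to Long
con.close()
```

Of course, a database that only ever stores 32-bit keys pays for this generality, which is exactly the complaint above.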
Posted 02-22-2024 01:25 AM

POST
ArcGIS Pro is apparently backward compatible with ArcMap, but I have found a serious limitation in the TableToRelationshipClass_management tool while finally upgrading my scripts. ArcGIS Pro will only accept GlobalID or String fields as keys, and only String fields in the relate table. What has happened to Integer fields? These were supposedly best practice in database design for primary and foreign keys.

Edit: it appears that Big Integers are not supported; Short and Long are allowed, but when a SQLite table is translated, the INTEGER type is by default typecast to Big Integer, and Big Integers are not supported in relationship classes.

I have a clone of a large survey database (50 tables, 70 GB) that uses Integer keys in nearly every table, and I use them in the relationship class definitions. There is also a sequence field, used to order related records, that is an integer (of course). Choices:

a. I could adopt GlobalID-GUID relationships. This is a lot of additional work, and not in the source tables. The GlobalID is read-only and not under my control; it can be changed arbitrarily by Esri tools if they consider a feature "new", thus breaking the link to the foreign key.

b. I could copy the integer keys to strings just for the relates. This is also a lot of work (60 min processing per relate x 5 relates, indexing, rebuilding the relate). At least I am still in control and the keys are static. I do not do any editing except for bulk validation, subsets, and copies.

c. Force a cast of all integers in the entire database to Long instead of Big Integer when translating.

Using string values for the integer keys does work, but it takes over an hour to build the relationship class; using GlobalID-GUID pairs is faster at 7 minutes. The proper solution is to return to basics and somehow force the cast to Long everywhere.
To do that I found I had to replace the deprecated tools TableToTable and MakeFeatureLayer/CopyFeatures with tools from the Conversion toolbox: ExportTable and ExportFeatures. These have a parameter for a FieldMappings/FieldMap/Field expression that allowed me to override the default and change the field_type before copying to a file geodatabase. I also had to replace the simpler FieldInfo parameter, which does not allow type mappings. If you are only doing a couple, the interactive tool lets you define the field map, but in Python it is a nightmare of obscure FieldMapping objects. If only there were an environment setting to switch the default integer type to Long instead of Big Integer.
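For reference, a sketch of the FieldMappings override described above. Treat it as an unverified outline: the paths, table name, and field name are placeholders, and it assumes an ArcGIS Pro Python environment where arcpy is available.

```python
import arcpy

# Sketch only: force one Big Integer field back to Long before export.
# All names and paths below are placeholders, not from the original post.
src = r"C:\data\survey.gpkg\main.observations"
out_table = r"C:\data\survey.gdb\observations"

fms = arcpy.FieldMappings()
fms.addTable(src)                        # seed the mappings from the source schema
i = fms.findFieldMapIndex("SITE_ID")     # the key that arrived as Big Integer
fm = fms.getFieldMap(i)
fld = fm.outputField                     # copy the output field out, edit it,
fld.type = "Long"                        # then assign it back -- the properties
fm.outputField = fld                     # cannot be edited in place
fms.replaceFieldMap(i, fm)

arcpy.conversion.ExportTable(src, out_table, field_mapping=fms)
```

The copy-out/assign-back dance around outputField is the part that usually trips people up; editing the returned Field object without reassigning it has no effect.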
Posted 02-20-2024 02:48 PM

POST
Yes you can; try it. Relates are hard to use. You can get a tree when querying with a popup. You can also select from a table and find the related selections with the hamburger menu at the end of the table view, but it requires manual triggering every time. (Not like ArcView 3 and the Avenue Dialog Designer, where the selected records are highlighted, you can raise the selection to the top rather than just filter, and generally have a much better experience.)
Posted 02-20-2024 01:34 PM

POST
Have a look at the validation tools in the forms. You can build a really complex interface with a bit of Python programming, show error messages, and check that the input is valid before starting the run. You can even build the whole form in Python so you don't use the wizard at all, although I haven't gone that far myself. Here is an example that analyses our census data: http://www.ollivier.co.nz/support/census2018/index.htm
Posted 02-18-2024 02:09 AM

POST
Well, I think that a list comprehension wrapped around a da.SearchCursor is a very efficient and fast way to extract the features into a more open format. But why do you think that Pandas (a non-spatial module) would handle spatial data? I am impressed that it worked at all! Have you considered GeoPandas? That does handle shape columns and provides you with spatial functions. Or maybe use the arcgis module? That has spatial extensions for Pandas, where GeoJSON data returned by the Python REST API is handled smoothly.

There is another way to extract the features: use NumPy. There is a function in the da module, FeatureClassToNumPyArray, that translates directly from a featureclass to a NumPy array. Maybe you can convert from the NumPy array to a dataframe?

Finally, there are geometry objects. These are in-memory arrays of the geometry from a featureclass with a complete set of spatial operators built in. Every spatial tool in the toolbox has an equivalent function for pairs of geometry objects that does not need an Advanced licence; you could write a complete ArcGIS clone using a Basic licence. I do this for selected operations when I need to stay within a Basic licence or when I find a faster way by short-cutting the search.
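The NumPy route is straightforward: the structured array that FeatureClassToNumPyArray returns converts to a DataFrame directly. A sketch with a hand-built structured array standing in for the arcpy output (field names invented):

```python
import numpy as np
import pandas as pd

# Stand-in for the structured array that arcpy.da.FeatureClassToNumPyArray
# returns -- the same kind of thing: named, typed columns.
arr = np.array(
    [(1, 12.5, "oak"), (2, 7.1, "pine"), (3, 30.2, "oak")],
    dtype=[("OBJECTID", "i4"), ("HEIGHT", "f8"), ("SPECIES", "U10")],
)

# pandas accepts a structured ndarray directly; dtype names become columns.
df = pd.DataFrame(arr)
print(df["SPECIES"].tolist())  # ['oak', 'pine', 'oak']
```

From there the usual Pandas machinery (groupby, merge, etc.) applies, with the geometry left behind in the featureclass.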
Posted 02-18-2024 01:58 AM

POST
There is no need to create a new class; just a function would do. Shapefiles are obsolete and have lots of limitations; use a featureclass in a file geodatabase. Then you can have null geometry, for example. There are much easier ways of loading data than this painful way, which is also very old.

Here are 5 records of the PointFile:
======================
id, shapeId, x, y, z
1,1,2197490.3821680555,1.5003079009938888E7,1573.2872741540364
2,2,2197478.7284111492,1.5003076194199622E7,1570.29
3,3,2197459.2878189506,1.500307149703293E7,1570.29
4,4,2197446.2932236306,1.5003068357325058E7,1573.6321294959107
5,5,2197490.3821680555,1.5003079009938888E7,1573.2872741540364

Here is the python program:
===================
import arcpy
import os

class TextToShapefileConverter:
    def __init__(self, input_directory, output_directory, spatial_reference_id=26914):
        self.input_directory = input_directory
        self.output_directory = output_directory
        self.spatial_reference_id = spatial_reference_id
        self.spatial_reference = arcpy.SpatialReference(self.spatial_reference_id)

    def convert(self, point_file_name):
        input_text_file = os.path.join(self.input_directory, point_file_name)
        output_shapefile_name = os.path.splitext(point_file_name)[0] + '.shp'
        output_shapefile = os.path.join(self.output_directory, output_shapefile_name)
        print("generating", output_shapefile_name)
        if "PolygonPointFile" in point_file_name:
            self._create_polyline_shapefile(output_shapefile, input_text_file)
            # self._create_polygon_shapefile(output_shapefile, input_text_file)

    def _create_polyline_shapefile(self, output_shapefile, input_text_file):
        arcpy.env.overwriteOutput = True
        arcpy.CreateFeatureclass_management(
            os.path.dirname(output_shapefile),
            os.path.basename(output_shapefile),
            "POLYLINE", has_z="ENABLED", has_m="DISABLED",
            spatial_reference=self.spatial_reference)
        arcpy.AddField_management(output_shapefile, "PairID", "LONG")
        cursor = arcpy.da.InsertCursor(output_shapefile, ['SHAPE@', 'PairID'])
        points = []
        pair_id = 0
        with open(input_text_file, 'r') as file:
            next(file)  # Skip header
            for line in file:
                if line.strip():
                    _, _, x, y, z = line.split(',')
                    point = arcpy.Point(float(x), float(y), float(z))
                    points.append(point)
                    # Check if a group of points has been added
                    if len(points) == 5:
                        print(points)
                        polyline = arcpy.Polyline(arcpy.Array(points), self.spatial_reference, True)
                        cursor.insertRow([polyline, pair_id])
                        points = []  # Reset for next group
                        pair_id += 1
        del cursor

    def _create_polygon_shapefile(self, output_shapefile, input_text_file):
        arcpy.env.overwriteOutput = True
        arcpy.CreateFeatureclass_management(
            os.path.dirname(output_shapefile),
            os.path.basename(output_shapefile),
            "POLYGON", has_z="ENABLED", has_m="DISABLED",
            spatial_reference=self.spatial_reference)
        arcpy.AddField_management(output_shapefile, "GroupID", "LONG")
        cursor = arcpy.da.InsertCursor(output_shapefile, ['SHAPE@', 'GroupID'])
        points = []
        group_id = 0
        with open(input_text_file, 'r') as file:
            next(file)  # Skip header
            for line in file:
                if line.strip():
                    id, _, x, y, z = line.split(',')
                    point = arcpy.Point(float(x), float(y), float(z))
                    points.append(point)
                    if len(points) == 5:
                        if points[0] != points[-1]:
                            points.append(points[0])  # Ensure the polygon is closed
                        polygon = arcpy.Polygon(arcpy.Array(points), self.spatial_reference, True)
                        cursor.insertRow([polygon, group_id])
                        points = []  # Reset for next group
                        group_id += 1
        del cursor

if __name__ == "__main__":
    input_directory = r'C:\Tutorial\GIS\ArcPyTutorial\Data\MyTestFolder'
    output_directory = r'C:\Tutorial\GIS\ArcPyTutorial\Data\MyTestFolder\outPut'
    converter = TextToShapefileConverter(input_directory, output_directory)
    # Process each file
    for filename in os.listdir(input_directory):
        if filename.endswith('.txt'):
            converter.convert(filename)
Posted 02-11-2024 03:20 AM

POST
Yes, well, the Pro developers forgot to read the ArcMap manuals... there is now a synonym to patch up the syntax, though I presume it is a different mechanism under the hood. It never worked reliably for me anyway, so I just use an SSD drive for my scratch.gdb. You can easily run out of memory if you do not release the features, and who does that? Projections did not work either, etc.
Posted 02-11-2024 02:58 AM

POST
Save in_memory for simple tasks. It is a lot of people's experience that it does not always work for complex tasks like Dissolve; it did not work for me with geometry objects or reprojections. If you have an SSD, define a partition or just a drive letter and use that for the scratch.gdb and other temporary featureclasses. Since it is effectively equivalent hardware, you get all the benefits of cached reads/writes and more reliable operation at the same speed, without running out of memory.
Posted 02-11-2024 02:54 AM

POST
They look like reserved words in the dropdown data types for parameters to me. Maybe escape them somehow if you have to use them. If you run arcpy.ValidateFieldName() it might prepend underscores when the test finds unacceptable field names. It's a fairly bad bug that these keywords have overflowed into the parameter space.

ValidateFieldName(name, {workspace})
Takes a string (field name) and a workspace path, and returns a valid field name based on name restrictions in the output geodatabase. All invalid characters in the input string will be replaced with an underscore (_). The field name restrictions depend on the specific database used (SQL or Oracle).

name (String): The field name to be validated. If the optional workspace is not specified, the field name is validated against the current workspace.
workspace {String}: An optional workspace to validate the field name against. The workspace can be a file system or a personal, file, or enterprise geodatabase. If the workspace is not specified, the field name is validated using the current workspace environment; if that has not been set, the field name is validated based on a folder workspace.
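The documented replacement rule is easy to mimic in plain Python. This stand-in is only an illustration of the behaviour described above; it is not the arcpy implementation and skips the per-workspace reserved-word and length checks:

```python
import re

def validate_field_name(name: str) -> str:
    """Illustrative stand-in for arcpy.ValidateFieldName: swap anything
    outside [A-Za-z0-9_] for an underscore, and prefix a leading digit.
    (The real tool also applies workspace-specific rules.)"""
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if cleaned and cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned

print(validate_field_name("area (m^2)"))  # area__m_2_
print(validate_field_name("2024count"))   # _2024count
```

Running names through the real function before AddField is a cheap way to avoid the reserved-word surprises in the first place.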
Posted 02-11-2024 02:46 AM

POST
Actually, you cannot use numeric codes for text fields any more in ArcGIS Pro domains; they only allow ranges with numeric types. It half works in ArcMap, but the text is always right-justified and the widths go wonky. So even if you want to use digits, say for sorting, they have to be character strings. I still hate this because our postal service uses leading zeros: my postcode is "0626", but if Excel gets hold of that it morphs into 626, and so on. Sometimes export tools let you choose between the code and the description, but since I am a programmer the codes will do just fine, because you have to use the code in SQL expressions anyway.
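To keep the leading zeros, store codes as fixed-width strings; padding also keeps lexicographic sort in step with numeric order. A quick illustration:

```python
# Codes kept as text: leading zeros survive and string sorting matches
# numeric order, provided every code is padded to the same width.
codes = [626, 1010, 90]
as_text = [str(c).zfill(4) for c in codes]
print(as_text)           # ['0626', '1010', '0090']
print(sorted(as_text))   # ['0090', '0626', '1010'] -- same as numeric order
print(int("0626"))       # 626 -- what Excel-style numeric coercion does to it
```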
Posted 02-11-2024 02:35 AM

POST
The basic issue is that in the interactive window the layers are recognised as objects and the underlying source is found. In a script the layers are not visible, because you are not inside ArcGIS; you have to use the path to the featureclass. If you still want to use layer names, you could save the layer to a .lyrx file and read that in as a layer object, which has a property that gives you the featureclass path.

In the script tool you could add a parameter that asks for a 'layer name', but then you would have to guess that the featureclass has the same name as the layer. If you ask for a layer object instead, you get an object that knows its source featureclass.

Tip: when debugging, don't use try/except error trapping. Just let the script crash and you will get better feedback on where and why it has failed. If you do add the trap, then print what the error was.

In a standalone script, to make a layer you have to: 1. define the featureclass and path; 2. my_layer = arcpy.MakeFeatureLayer(...). You can create a filter, select by location, and do all the other fancy things a layer makes possible without making a copy of the featureclass. Then the layer name can be used as in your interactive script.

Print statements only show when you run the script from an editor. That is a good thing to do anyway, because you will trap the syntax errors earlier and you can add breakpoints and print intermediate results. I use arcpy.management.GetCount() a lot to stop if there are no records selected, and arcpy.Exists(fc) to check that I have got the data.
Posted 02-11-2024 02:21 AM

POST
No, it cannot be done; not supported. Anyway it only works for a demo, not large datasets. I find I have to do my own 'join' using dictionaries. This turns out to be faster, more reliable, and more scaleable. One benefit is that the data do not have to be in the same database, because you are using Python structures as an intermediary. You do get into the weeds of cursors, dictionaries, and error trapping a bit. Actually, with Pandas now installed as a default package, you may be able to streamline a join; I must look into that. My legacy workflow is like this:

1. Describe the two featureclasses or tables and get the schema for the table you wish to join, plus the keys. Or use arcpy.ListFields().
2. Choose the foreign key and the user fields you wish to join (make a list of names).
3. Add the new fields to the target featureclass, or perhaps to a copy that you want to create.
4. Use a SearchCursor to create a dictionary of the table data, with the foreign key as the dictionary key and the attributes as a tuple. You will not be needing the shape field, OBJECTID, or dynamic fields for area and perimeter.
5. Open an UpdateCursor on your target featureclass with a list of the fields in the tuple. Iterate through the table and update the new empty fields using the dictionary and key. Best to use dict.get(key, None) to avoid missing keys.

It all sounds like a bit of work, but once you have the pattern it is easy to adapt to the next project. It is really, really FAST. If you have billions of records you can always do some partitioning, but you should be OK with millions of records. There is an example of a table join in my Python Tips talk.
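The workflow above can be sketched with plain lists standing in for the cursors (field names invented for the example); steps 4 and 5 are the heart of it:

```python
# Step 4 equivalent: rows read from the join table via a SearchCursor,
# keyed by the foreign key, with the attributes kept as a tuple.
join_rows = [("A1", "gravel", 12), ("A2", "clay", 7)]   # (key, soil, depth)
lookup = {key: (soil, depth) for key, soil, depth in join_rows}

# Step 5 equivalent: iterate the target rows (an UpdateCursor in arcpy)
# and fill the new fields from the dictionary; .get avoids missing keys.
target = [["A1", None, None], ["A3", None, None]]       # [key, soil, depth]
for row in target:
    vals = lookup.get(row[0], None)
    if vals:
        row[1], row[2] = vals

print(target)  # [['A1', 'gravel', 12], ['A3', None, None]]
```

Note that "A3" has no match and its fields stay empty rather than raising a KeyError, which is the point of using .get.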
Posted 02-11-2024 02:02 AM

POST
Here is an example of finding the latest visit record (using Pandas) and then transferring that to the parent location table.

# current_status.py
# from Visits_Table put back status, edit date, difficulty on WeedLocations
# use Pandas for ease, speed, simplicity
# 19 Sept 2022 latest schema, different gdb
# 12 Oct 2022 change to DateCheck from EditDate but keep EditDate as last record
import sys
import collections
from datetime import datetime

import arcpy
import pandas as pd

try:
    gdb = sys.argv[1]
except IndexError:
    disk = sys.argv[0][0]
    gdb = '{}:/project/econet/source/cams_weed.gdb'.format(disk)
start = datetime.now()
if not arcpy.Exists(gdb):
    raise IOError(gdb)
arcpy.env.workspace = gdb
arcpy.env.overwriteOutput = True
arcpy.AddMessage(gdb)
debug = True
# two tables in gdb
weeds = 'WeedLocations'
visits = 'Visits_Table'
# basic attributes to be transferred from Visits to WeedLocations, not validated yet
visit_to_weed = {
    'Guid_visits': 'GlobalID',                              ## 0 foreign key -> primary key
    'DateCheck': 'DateVisitMadeFromLastVisit',              ## 1 for latest date; note not the same as EditDate
    'WeedVisitStatus': 'StatusFromLastVisit',               ## 2 as inspected
    'DifficultyChild': 'DifficultyFromLastVisit',           ## 3 as inspected
    'VisitStage': 'LatestVisitStage',                       ## 4 as inspected
    'Area': 'LatestArea',                                   ## 5 as inspected
    'DateForReturnVisit': 'DateForNextVisitFromLastVisit',  ## 6 calculated
    'EditDate': 'EditDate'                                  ## dummy for pandas to find latest record
}
in_flds = list(visit_to_weed.keys())
out_flds = list(visit_to_weed.values())
filter = ''  # e.g. """EditDate > date '{}'""".format('2022-07-01')
vdate = [row for row in arcpy.da.SearchCursor(visits, in_flds, filter)]
print('vdate:', len(vdate))
# put in a pandas dataframe and process woohoo
df = pd.DataFrame(vdate, columns=in_flds)
# find the record with max edit date by visit and keep the other details all in one line!
idx = df.groupby(['Guid_visits'])['EditDate'].transform('max') == df['EditDate']
dVisit = df[idx].set_index('Guid_visits').T.to_dict('list')
# Count visits for each location
vguid = [row[0] for row in arcpy.da.SearchCursor(visits, ['Guid_visits'], "Guid_visits is not NULL")]
# dict of counts by GlobalID for updating
vguid_counts = collections.Counter(vguid)
# update weeds with visit count and latest details
with arcpy.da.UpdateCursor(weeds, ['VisitCount'] + out_flds) as cur:
    n = 0
    for row in cur:
        try:
            row[0] = vguid_counts[row[1]]  # row[1] holds the GlobalID key
            vals = dVisit.get(row[1], None)
            if vals:
                row[2] = vals[0]
                row[3] = vals[1]
                row[4] = vals[2]
                row[5] = vals[3]
                row[6] = vals[4]
            cur.updateRow(row)
            n += 1
        except Exception as e:
            arcpy.AddMessage(row)
            arcpy.AddMessage(e)
print("Well Done, {} records updated in {}".format(n, datetime.now() - start))
Posted 03-17-2023 02:46 PM

POST
I am afraid that relationship classes are not supported in arcpy, so you will have to create your own equivalent, which as it turns out is much better and faster anyway.

1. Read the related table using a SearchCursor inside a list comprehension to get an in-memory list of records, with the key as one of the fields.
1a. Convert the list to a Pandas dataframe.
2. Use Pandas to find the minimum or maximum values with a groupby on the foreign key.
3. Make a dictionary of the resulting statistics.
4. Run an UpdateCursor on your featureclass and use the dictionary of min/max values to update it.

This will be lightning fast, reliable, and easy to understand. You might add some error trapping, such as using .get(key, None) instead of a plain lookup to avoid missing values. See example.
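Steps 2 and 3 above can be sketched with pandas on a toy related table (column names invented):

```python
import pandas as pd

# Toy related table: several visit records per location key.
related = pd.DataFrame({
    "loc_id": ["A", "A", "B", "B", "B"],
    "reading": [3, 9, 4, 1, 6],
})

# Step 2: max per foreign key; step 3: turn it into a plain dict,
# ready to drive an UpdateCursor on the featureclass.
max_by_key = related.groupby("loc_id")["reading"].max().to_dict()
print(max_by_key)                 # {'A': 9, 'B': 6}
print(max_by_key.get("C", None))  # None for a location with no related rows
```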
Posted 03-15-2023 06:41 PM

POST
Maybe start with a temporary layer of the start nodes and end nodes from the pipelines? There is a tool to do this. Then overlaying the node points would give you the ends. Keep track of the ObjectIDs to get references back to the original lines.
Posted 03-12-2023 03:07 PM