POST
Yes, you can. Try it. Relates are awkward to use. You can get a tree of related records when querying with a popup, and you can also select from a table and find the related selections with the hamburger menu at the end of the table view. It requires manual triggering every time. (Not like ArcView 3 and Avenue Dialog Designer, where the selected records were highlighted, you could raise the selection to the top rather than just filter, and the whole experience was much better.)
Posted: 02-20-2024 01:34 PM

POST
Have a look at the validation tools in the script tool forms. With a bit of Python you can build a really complex interface, show error messages, and check that the input is valid before starting the run. You can even build the whole form in Python so you don't use the wizard at all, though I haven't gone that far myself. Here is an example that analyses our census data: http://www.ollivier.co.nz/support/census2018/index.htm
Posted: 02-18-2024 02:09 AM

POST
Well, I think that a list comprehension wrapped around a da.SearchCursor is a very efficient and fast way to extract features into a more open format. But why do you think that Pandas (a non-spatial module) would handle spatial data? I am impressed that it worked at all!

Have you considered GeoPandas? That does handle shape columns and provides you with spatial functions. Or maybe use the arcgis module? That has spatial extensions for Pandas, where GeoJSON data returned by the Python REST API is handled smoothly.

There is another way to extract the features: use NumPy. There is a function in the da module, FeatureClassToNumPyArray, that translates directly from a featureclass to a NumPy array. Maybe you can convert from the NumPy array to a dataframe?

Finally, there are geometry objects. These are in-memory arrays of the geometry from a featureclass with a complete set of spatial operators built in. Every spatial tool in the toolbox has an equivalent function for pairs of geometry objects that does not need an Advanced licence; you could write a complete ArcGIS clone using a Basic licence. I do this for selected operations when I have to stay within a Basic licence, or when I find a faster way by short-cutting the search.
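On the NumPy route: FeatureClassToNumPyArray returns a structured array, and Pandas can build a dataframe from one directly. A minimal sketch, with a hand-built structured array standing in for the arcpy output (the field names and values here are invented for illustration):

```python
import numpy as np
import pandas as pd

# Stand-in for what da.FeatureClassToNumPyArray would return:
# a structured array with named fields and mixed dtypes
arr = np.array(
    [(1, 2197490.38, 15003079.01), (2, 2197478.73, 15003076.19)],
    dtype=[("OID", "i4"), ("X", "f8"), ("Y", "f8")],
)

# A structured array converts straight into a dataframe,
# one column per named field
df = pd.DataFrame(arr)
print(list(df.columns), len(df))  # ['OID', 'X', 'Y'] 2
```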
Posted: 02-18-2024 01:58 AM

POST
There is no need to create a new class; a plain function would do. Shapefiles are obsolete and have many limitations; use a featureclass in a file geodatabase instead. Then you can have null geometry, for example. There are much easier ways of loading data than this painful, very old approach.

Here are 5 records of the PointFile:

id, shapeId, x, y, z
1,1,2197490.3821680555,1.5003079009938888E7,1573.2872741540364
2,2,2197478.7284111492,1.5003076194199622E7,1570.29
3,3,2197459.2878189506,1.500307149703293E7,1570.29
4,4,2197446.2932236306,1.5003068357325058E7,1573.6321294959107
5,5,2197490.3821680555,1.5003079009938888E7,1573.2872741540364

Here is the Python program:

import arcpy
import os

class TextToShapefileConverter:
    def __init__(self, input_directory, output_directory, spatial_reference_id=26914):
        self.input_directory = input_directory
        self.output_directory = output_directory
        self.spatial_reference_id = spatial_reference_id
        self.spatial_reference = arcpy.SpatialReference(self.spatial_reference_id)

    def convert(self, point_file_name):
        input_text_file = os.path.join(self.input_directory, point_file_name)
        output_shapefile_name = os.path.splitext(point_file_name)[0] + '.shp'
        output_shapefile = os.path.join(self.output_directory, output_shapefile_name)
        print("generating", output_shapefile_name)
        if "PolygonPointFile" in point_file_name:
            self._create_polyline_shapefile(output_shapefile, input_text_file)
            # self._create_polygon_shapefile(output_shapefile, input_text_file)

    def _create_polyline_shapefile(self, output_shapefile, input_text_file):
        arcpy.env.overwriteOutput = True
        arcpy.CreateFeatureclass_management(
            os.path.dirname(output_shapefile),
            os.path.basename(output_shapefile),
            "POLYLINE",
            has_z="ENABLED",
            has_m="DISABLED",
            spatial_reference=self.spatial_reference)
        arcpy.AddField_management(output_shapefile, "PairID", "LONG")
        cursor = arcpy.da.InsertCursor(output_shapefile, ['SHAPE@', 'PairID'])
        points = []
        pair_id = 0
        with open(input_text_file, 'r') as file:
            next(file)  # Skip header
            for line in file:
                if line.strip():
                    _, _, x, y, z = line.split(',')
                    point = arcpy.Point(float(x), float(y), float(z))
                    points.append(point)
                    # Check if a group of points has been added
                    if len(points) == 5:
                        print(points)
                        polyline = arcpy.Polyline(arcpy.Array(points), self.spatial_reference, True)
                        cursor.insertRow([polyline, pair_id])
                        points = []  # Reset for next pair
                        pair_id += 1
        del cursor

    def _create_polygon_shapefile(self, output_shapefile, input_text_file):
        arcpy.env.overwriteOutput = True
        arcpy.CreateFeatureclass_management(
            os.path.dirname(output_shapefile),
            os.path.basename(output_shapefile),
            "POLYGON",
            has_z="ENABLED",
            has_m="DISABLED",
            spatial_reference=self.spatial_reference)
        arcpy.AddField_management(output_shapefile, "GroupID", "LONG")
        cursor = arcpy.da.InsertCursor(output_shapefile, ['SHAPE@', 'GroupID'])
        points = []
        group_id = 0
        with open(input_text_file, 'r') as file:
            next(file)  # Skip header
            for line in file:
                if line.strip():
                    id, _, x, y, z = line.split(',')
                    point = arcpy.Point(float(x), float(y), float(z))
                    points.append(point)
                    if len(points) == 5:
                        if points[0] != points[-1]:
                            points.append(points[0])  # Ensure the polygon is closed
                        polygon = arcpy.Polygon(arcpy.Array(points), self.spatial_reference, True)
                        cursor.insertRow([polygon, group_id])
                        points = []  # Reset for next group
                        group_id += 1
        del cursor

if __name__ == "__main__":
    input_directory = r'C:\Tutorial\GIS\ArcPyTutorial\Data\MyTestFolder'
    output_directory = r'C:\Tutorial\GIS\ArcPyTutorial\Data\MyTestFolder\outPut'
    converter = TextToShapefileConverter(input_directory, output_directory)
    # Process each file
    for filename in os.listdir(input_directory):
        if filename.endswith('.txt'):
            converter.convert(filename)
Posted: 02-11-2024 03:20 AM

POST
Yes, well, the Pro developers forgot to read the ArcMap manuals... there is now a synonym to patch up the syntax, though I presume it is a different mechanism under the hood. It never worked reliably for me anyway, so I just use an SSD drive for my scratch.gdb. You can easily run out of memory if you do not release the features, and who does that? Projections did not work either, etc.
Posted: 02-11-2024 02:58 AM

POST
Save the (in_)memory workspace for simple tasks. It is a lot of people's experience that it does not always work for complex tasks like Dissolve; it did not work for me with geometry objects or reprojections. If you have an SSD, define a partition or just a drive letter and use that for scratch.gdb and other temporary featureclasses. Since it is effectively equivalent hardware, you get all the benefits of cached reads/writes and more reliable operation at much the same speed, without running out of memory.
Posted: 02-11-2024 02:54 AM

POST
They look like reserved words in the dropdown data types for parameters to me. Maybe escape them somehow if you have to use them. If you run arcpy.ValidateFieldName() it might prepend underscores where the test finds unacceptable field names. It's a fairly bad bug that these keywords have overflowed into the parameter space.

ValidateFieldName(name, {workspace})

Takes a string (field name) and a workspace path, and returns a valid field name based on the name restrictions in the output geodatabase. All invalid characters in the input string are replaced with an underscore (_). The field name restrictions depend on the specific database used (Structured Query Language [SQL] or Oracle).

name (String): The field name to be validated. If the optional workspace is not specified, the field name is validated against the current workspace.

workspace (String): An optional workspace to validate the field name against. The workspace can be a file system or a personal, file, or enterprise geodatabase. If the workspace is not specified, the field name is validated using the current workspace environment; if the workspace environment has not been set, the field name is validated based on a folder workspace.
Posted: 02-11-2024 02:46 AM

POST
Actually, you cannot use numeric codes for text fields in domains any more in ArcGIS Pro; ranges are only allowed with numeric fields. It does half work in ArcMap, but the text is always right-justified and the widths go wonky. So even if you want to use digits, say for sorting, they have to be character strings. I still hate this because our postal service uses leading zeros: my postcode is "0626", but if Excel gets hold of that it morphs into 626, and so on. Sometimes export tools let you choose between the code and the description, but since I am a programmer the codes will do just fine, because you have to use the code in SQL expressions anyway.
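The leading-zero problem is easy to demonstrate, and to avoid, in Pandas by forcing a string dtype on the code column. A minimal sketch (the column names and values are invented for illustration):

```python
import io
import pandas as pd

csv = "postcode,suburb\n0626,Torbay\n1010,Auckland"

# Default type inference turns "0626" into the integer 626
bad = pd.read_csv(io.StringIO(csv))
# Forcing a string dtype keeps the leading zero
good = pd.read_csv(io.StringIO(csv), dtype={"postcode": str})

print(bad["postcode"].iloc[0], good["postcode"].iloc[0])  # 626 0626
```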
Posted: 02-11-2024 02:35 AM

POST
The basic issue is that in the interactive window the layers are recognised as objects and the underlying source is found. In a script the layers are not visible, because you are not in ArcGIS; you have to use the path to the featureclass. If you still want to use layer names, you could save the layer to a .lyrx file and read that in as a layer object, which has a property giving the featureclass path.

In a script tool you could add a parameter that asks for a 'layer name', but then you would have to guess that the featureclass is named the same as the layer. If you ask for a layer object instead, you get an object that knows its source featureclass.

Tip: when debugging, don't use try/except error trapping. Just let it crash and you will get better feedback on where and why it failed. If you do add the trap, print what the error was.

In a standalone script, to make a layer you have to: 1. define the featureclass and path; 2. my_layer = arcpy.MakeFeatureLayer_management(...). You can then create a filter, select by location, and do all the other fancy things a layer makes possible without making a copy of the featureclass, and the layer name can be used like in your interactive script.

Print statements only show when you run the script from an editor. That is a good thing to do anyway, because you will trap the syntax errors earlier, and you can add breakpoints and print intermediate results. I use arcpy.management.GetCount() a lot to stop if there are no records selected, and arcpy.Exists(fc) to check that I have got the data.
Posted: 02-11-2024 02:21 AM

POST
No, it cannot be done; it is not supported. Anyway, it only works for a demo, not large datasets. I find I have to do my own 'join' using dictionaries. This turns out to be faster, more reliable and more scalable. One benefit is that the data does not have to be in the same database, because you are using Python structures as an intermediary. You do get into the weeds of cursors, dictionaries and trapping errors a bit. Actually, with Pandas now installed as a default package, you may be able to streamline a join; I must look into that.

My legacy workflow is like this:
1. Describe the two featureclasses or tables and get the schema for the table you wish to join, plus the keys. Or use arcpy.ListFields().
2. Choose the foreign key and the user fields you wish to join (make a list of names).
3. Add the new fields to the target featureclass, or perhaps to a copy that you want to create.
4. Use a SearchCursor to create a dictionary of the table data, with the foreign key as the dictionary key and the attributes as a tuple. You will not be needing the shape field, OBJECTID, or dynamic fields such as area and perimeter.
5. Open an UpdateCursor on your target featureclass with a list of the fields in the tuple. Iterate through the table and update the new empty fields using the dictionary and key. Best to use dict.get(key, None) to avoid missing keys.

It all sounds like a bit of work, but once you have the pattern it is easy to adapt to the next project. It is really, really FAST. If you have billions of records you can always do some partitioning, but you should be OK with millions of records. There is an example of a table join in my Python Tips talk.
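The dictionary-join steps can be sketched in plain Python, with small lists standing in for the cursor rows (all keys, field values and list contents here are made up for illustration):

```python
# Step 4: rows as a SearchCursor would return them from the
# join table: (foreign_key, attr1, attr2)
join_rows = [
    ("A1", "gorse", 12.5),
    ("A2", "broom", 3.0),
]
# foreign key -> tuple of the attributes to transfer
lookup = {row[0]: row[1:] for row in join_rows}

# Step 5: rows from the target featureclass, with the new
# fields still empty: [key, species, area]
target_rows = [
    ["A1", None, None],
    ["A3", None, None],  # no match in the join table
]
for row in target_rows:
    attrs = lookup.get(row[0], None)  # .get avoids KeyError on missing keys
    if attrs:
        row[1], row[2] = attrs

print(target_rows)  # [['A1', 'gorse', 12.5], ['A3', None, None]]
```

In the real workflow the inner loop body runs inside an arcpy.da.UpdateCursor, with cur.updateRow(row) after the fields are filled.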
Posted: 02-11-2024 02:02 AM

POST
Here is an example of finding the latest visit record (using Pandas) and then transferring it to the parent location table.

# current_status.py
# from Visits_Table put back status, edit date, difficulty on WeedLocations
# use Pandas for ease, speed, simplicity
# 19 Sept 2022 latest schema, different gdb
# 12 Oct 2022 change to DateCheck from EditDate but keep EditDate as last record
import sys
import arcpy
import pandas as pd
import collections
from datetime import datetime
try:
    gdb = sys.argv[1]
except IndexError:
    disk = sys.argv[0][0]
    gdb = '{}:/project/econet/source/cams_weed.gdb'.format(disk)
start = datetime.now()
if not arcpy.Exists(gdb):
    raise IOError(gdb)
arcpy.env.workspace = gdb
arcpy.env.overwriteOutput = True
arcpy.AddMessage(gdb)
debug = True
# two tables in gdb
weeds = 'WeedLocations'
visits = 'Visits_Table'
# basic attributes to be transferred from Visits to WeedLocations, not validated yet
visit_to_weed = {
    'Guid_visits': 'GlobalID',  ## 0 foreign key -> primary key
    'DateCheck': 'DateVisitMadeFromLastVisit',  ## 1 for latest date; note not the same as EditDate
    'WeedVisitStatus': 'StatusFromLastVisit',  ## 2 as inspected
    'DifficultyChild': 'DifficultyFromLastVisit',  ## 3 as inspected
    'VisitStage': 'LatestVisitStage',  ## 4 as inspected
    'Area': 'LatestArea',  ## 5 as inspected
    'DateForReturnVisit': 'DateForNextVisitFromLastVisit',  # 6 calculated
    'EditDate': 'EditDate'  ## dummy for pandas to find latest record
}
in_flds = list(visit_to_weed.keys())
out_flds = list(visit_to_weed.values())
filter = ''  # """EditDate > date '{}'""".format('2022-07-01')
vdate = [row for row in arcpy.da.SearchCursor(visits, in_flds, filter)]
print('vdate:', len(vdate))
# put in a pandas dataframe and process woohoo
df = pd.DataFrame(vdate, columns=in_flds)
# find the record with the max edit date per visit and keep the other details, all in one line!
idx = df.groupby(['Guid_visits'])['EditDate'].transform('max') == df['EditDate']
dVisit = df[idx].set_index('Guid_visits').T.to_dict('list')
# Count visits for each location
vguid = [row[0] for row in arcpy.da.SearchCursor(visits, ['Guid_visits'], "Guid_visits is not NULL")]
# dict of counts by GlobalID for updating
vguid_counts = collections.Counter(vguid)
# update weeds with visit count and latest details
with arcpy.da.UpdateCursor(weeds, ['VisitCount'] + out_flds) as cur:
    n = 0
    for row in cur:
        try:
            key = row[1]  # GlobalID
            row[0] = vguid_counts[key]  # VisitCount; Counter returns 0 for missing keys
            latest = dVisit.get(key, None)
            if latest:
                row[2] = latest[0]  # DateCheck
                row[3] = latest[1]  # WeedVisitStatus
                row[4] = latest[2]  # DifficultyChild
                row[5] = latest[3]  # VisitStage
                row[6] = latest[4]  # Area
            cur.updateRow(row)
            n += 1
        except Exception as e:
            arcpy.AddMessage(row)
            arcpy.AddMessage(e)
print("Well Done, {} records updated in {}".format(n, datetime.now() - start))
Posted: 03-17-2023 02:46 PM

POST
I am afraid that relationship classes are not supported in arcpy, so you will have to create your own equivalent, which as it turns out is much better and faster anyway.

1. Read in the related table using a SearchCursor inside a list comprehension, to get an in-memory list of records with the key as one of the fields.
1a. Convert the list to a Pandas dataframe.
2. Use Pandas to find the minimum or maximum values with a groupby on the foreign key.
3. Make a dictionary of the resulting statistics.
4. Run an UpdateCursor on your featureclass and use the dictionary of min/max values to update it.

This will be lightning fast, reliable and easy to understand. You might add some error trapping, such as using .get(key, None) instead of a plain lookup, to avoid missing values. See the example.
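Steps 1a to 3 can be sketched in Pandas, with invented foreign keys and values standing in for the cursor rows:

```python
import pandas as pd

# Step 1/1a: rows as a SearchCursor list comprehension might
# return them: (foreign_key, value)
rows = [("L1", 5.0), ("L1", 9.0), ("L2", 2.0)]
df = pd.DataFrame(rows, columns=["fk", "value"])

# Step 2: maximum value per foreign key
stats = df.groupby("fk")["value"].max()

# Step 3: dictionary for the UpdateCursor lookup
lookup = stats.to_dict()
print(lookup)  # {'L1': 9.0, 'L2': 2.0}
```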
Posted: 03-15-2023 06:41 PM

POST
Maybe start with a temporary layer of start nodes and end nodes from the pipelines? There is a tool to do this. Then overlays of the node points would give you the ends. Keep track of the objectids to get references to the original lines.
Posted: 03-12-2023 03:07 PM

POST
ModelBuilder is straining to support iterators; it is much better to use Python for iteration. You are still using the same functions, but the iteration is much easier to understand, you have far better control over missing data and errors, and the whole process is more understandable. You can also use cursors, which are much faster. Start your upgrade by right-clicking each function to copy it as a Python snippet, then put them all in a script. Alternatively, encapsulate just the iterator in a single script, publish it as a custom script tool, and leave the rest of the process in ModelBuilder.
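A sketch of the shape such a loop takes, with an invented process() function standing in for the geoprocessing call, showing the per-item error control that ModelBuilder iterators lack (all names and file lists here are made up):

```python
# Inputs an iterator would normally supply, e.g. from os.listdir()
inputs = ["a.shp", "b.shp", "missing.shp"]

def process(name):
    # stand-in for a geoprocessing call such as a buffer or clip
    if name.startswith("missing"):
        raise FileNotFoundError(name)
    return name.replace(".shp", "_out.shp")

results, failures = [], []
for name in inputs:
    try:
        results.append(process(name))
    except FileNotFoundError as err:
        failures.append(str(err))  # trap missing data and keep going

print(results, failures)  # ['a_out.shp', 'b_out.shp'] ['missing.shp']
```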
Posted: 03-11-2023 03:54 PM