POST
Your Census Block features are not unique in the shapefile or the dbf table that you say has only one feature/record per Census Block value. The only way I will accept that you have proven me wrong is if you do the Dissolve I have shown you, or you do a Summary of the Census Blocks and find that the count of every Census Block value is 1. If you refuse to do that, you're on your own to figure this out. But I can assure you that the problem is not with the software; it is with your unproven assumptions about your data. A difference in the presence or absence of an FID field in the shapefile and table is another factor that can affect the export behavior, but it would be unusual for either of those file types to lack an FID field.
11-24-2021 11:10 PM
POST
Use the Dissolve tool (Data Management Tools, Generalization toolset) on the shapefile that is only supposed to have one shape for each Census Block. Use only the Census Block field as a Dissolve field in the tool, and add the Census Block field as a Statistics field with a Statistics Type of Count. The output will have a single shape for each Census Block and will export the results you are wanting. Anything in the output Count field with a value greater than 1 was split into more than one piece in your original Census Block shapefile. Below is how I would set up the tool for my Census Block feature class and the output, with all Census Blocks that collapsed from many features for those Census Block values to a single feature.
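If you prefer to run the same Dissolve from Python, a minimal arcpy sketch would look something like this (the paths and the BLOCKCE10 field name are placeholders for your own shapefile and Census Block field):
import arcpy
# Placeholder paths and field name - substitute your own Census Block shapefile and field
inBlocks = r"C:\Path\CensusBlocks.shp"
outBlocks = r"C:\Path\CensusBlocks_Dissolved.shp"
# Dissolve on the Census Block field and count how many input features collapse into each output feature
arcpy.Dissolve_management(inBlocks, outBlocks, "BLOCKCE10", [["BLOCKCE10", "COUNT"]], "MULTI_PART")
# Any value greater than 1 in the output Count field means that Census Block was split into multiple pieces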
11-24-2021 03:53 PM
IDEA
@SamMontoia1 Your suggestion doesn't help very much in my opinion. It would show me what won't work, but only if I take action to use the new options you are adding. I most likely won't ever use those options because, as I mentioned, I rarely consider the selectable state of my layers while I set up a transfer. Your proposal continues to make me responsible for thinking about the selectable state of my layers, and that responsibility should not fall on me; it should fall on the designers of the tool. Additionally, I don't want the tool to show me that it currently won't allow me to do what I want; I want the tool to assist me with getting to the point where it will do what I want. My goal is not to prevent myself from setting up the transfer I want. My goal is to make the transfer I set up actually work with as little thought about the selectable state of my layers as possible. I want it to be the tool's responsibility to think about and help me correctly set up the selectable state of my layers for every transfer I set up by the time I have exited the tool, regardless of whatever selectable state my layers were in before I entered the tool.
11-24-2021 08:01 AM
IDEA
As a frequent user of the Attribute Transfer tool, I know that the tool only operates when the source and the target layers are both selectable. More often than not, I fail to think about the selectable state of my layers when I set up the Attribute Transfer Mapping and use an unselectable layer that will disable the tool until I change that layer to selectable. There is no message that ever appears saying that either layer is unselectable at any point, and the user is completely on their own to figure out why the tool won't work when they try it and to troubleshoot it. It would be helpful if the Attribute Transfer Mapping tool at least gave a warning when I close the tool that one or more of the layers set up for the transfer is unselectable, so that I would immediately make the connection between the tool and the cause of it not working. Ideally the warning would also list the unselectable layers that have been used in my setups. Even better, if the message gave me the option to make all of the layers set up for transfer selectable, I would almost always use that option. I don't really want the proposal offered by this idea, because making the selectable state of my layers control which transfer setups I can do is backwards to me. To me that just moves the confusion about why I can't set up the tool to do what I want inside the tool, and makes me have to exit the tool to change my selectable layers before I can even create a setup with the Attribute Transfer tool. When I use the Attribute Transfer setup, I want to be able to set up any transfer that I want and expect it to work immediately upon exiting the tool, or be warned that the tool won't work until I fix the selectable state of the layers I set up.
11-24-2021 07:29 AM
POST
Attribute Joins do not display the full set of records that actually result from a one-to-many or many-to-many join. The table shown by ArcMap is functionally a one-to-one or many-to-one set of records, meaning that new records are not created in the parent table to match all of the records in the joined table. For all intents and purposes it is hiding from you, for performance reasons, the true number of denormalized records that would result from the full output of that relational join. However, when you do an export, the true denormalized record set of the one-to-many or many-to-many records is generated, which inserts records that you would not see in a layer attribute join. This is a desired behavior, since this is the best way to make ArcMap behave like Access when outputting a result from a one-to-many or many-to-many relationship. The export increases the number of records to convert these relationships to a true one-to-one output. Creating a one-to-one representation of the data in the ?-to-many relationships is often useful for a variety of analysis purposes, and I have made use of this export behavior many times.

The difference in your results is exposing an erroneous assumption on your part that the Census Block shapefile has only unique values for each Census Block field value. If you do a Summary of your Census Block field in the Census Block table, you almost certainly will discover there are duplicate values for one or more of your Census Blocks. If even one Census Block value is duplicated in the shapefile that should only have unique values, it will double the number of records in the export for that value, and if it is duplicated on 3 features it will triple the records in the output of the export for that value. So if the shapefile that allows duplicate values has 100,000 features with that Census Block value, you will end up with 100,000 more records than you expect when you do an export if that value is duplicated and not actually unique.

You need to clean up your features to merge together all parts of the Census Block into a single multipart feature to actually have a many-to-one relationship like the Attribute Join shows you. The Dissolve tool can do the merge and output a new feature class, or you can edit the existing feature class by using the Merge option under the Editor button when you have selected all features for a single Census Block value. That will make that shapefile conform to your relationship assumption, and it will behave the same as the Attribute Join when you do an export, since then it will create a many-to-one join and not a many-to-many join with denormalized records hidden by the Attribute Join.
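If you would rather confirm this with a quick script than with the Summarize dialog, a short sketch like this would list every Census Block value that appears on more than one feature (the path and the BLOCKCE10 field name are placeholders for your own data):
import arcpy
from collections import Counter
# Placeholder path and field name - substitute your own Census Block shapefile and field
blocksFC = r"C:\Path\CensusBlocks.shp"
counts = Counter(row[0] for row in arcpy.da.SearchCursor(blocksFC, ["BLOCKCE10"]))
duplicates = {value: count for value, count in counts.items() if count > 1}
print("{0} duplicated Census Block values".format(len(duplicates)))
for value, count in sorted(duplicates.items()):
    print("{0}: {1} features".format(value, count))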
11-23-2021 05:25 PM
BLOG
Using embedded cursors is not a good idea for any data that will continue to grow into more than 1,000 matching records. Each cursor reopens the table on disk and processes all records in the table during each loop to isolate the records specified by the query filter, so embedded cursors result in an exponential performance hit as the record set grows. Using an attribute index on all filter fields gives a performance boost, but that boost gets overwhelmed when the number of unique values it needs to filter grows beyond a certain point. Dictionaries give a huge performance enhancement over query filters when there are a huge number of unique values to access. Dictionaries are random access, with no real performance delay between accessing the 1st record and the 1 millionth record. The maximum time it has taken me to load 40 fields from 1 million records into a dictionary (including the shape field) is 10 minutes, and that allows me to match them to 1 million records in an update table in a single pass. While I haven't done this in my blog, I frequently gain another performance enhancement by adding a Python expression that compares the 40 field values so I only write updates to the records that actually changed (typically 1,000 records or fewer). With that enhancement I can complete the updates in under another 10 minutes. The comparison expression is up to 20 times faster than a cursor writing to a record that isn't actually changing, especially in large tables. The base code in my blog and the code I provided in my latest response can be revised so that it speeds up dramatically again, if your record set is very large and the number of records actually getting updated is small, by only writing to records that actually changed. It only takes a revision of one or two lines of code, so I will go back and add that.
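To illustrate the compare-before-write revision I am describing, a sketch of the change inside the update loop might look like this (the path, field list, and valueDict contents are placeholders following the conventions of my blog examples, where the dictionary holds a list of source values keyed by the join value):
import arcpy
# Placeholder path and fields illustrating the compare-before-write revision
updateFC = r"C:\Path\UpdateFeatureClass"
updateFieldsList = ["KeyField", "Field1", "Field2"]
# valueDict would already be loaded from the source table, keyed on the join value,
# holding a list of source values in the same order as updateFieldsList[1:]
valueDict = {}
with arcpy.da.UpdateCursor(updateFC, updateFieldsList) as updateRows:
    for updateRow in updateRows:
        keyValue = updateRow[0]
        if keyValue in valueDict:
            # Only write the row when at least one transferred value actually differs
            if list(updateRow[1:]) != list(valueDict[keyValue]):
                for n in range(1, len(updateFieldsList)):
                    updateRow[n] = valueDict[keyValue][n - 1]
                updateRows.updateRow(updateRow)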
11-12-2021 03:20 PM
BLOG
@ChaseSchieffer, the last example in my blog, covering using a dictionary to replace a summary table, applies to your situation with a little adaptation to your specific needs. If your main feature class had a LatestStatusDate field, you could also update that to go along with your LatestStatus value for the record. November 12, 2021: I enhanced the code performance by adding an if condition that only updates the master table record when the related table date in the dictionary is more recent than the one in the LatestStatusDate field, or the status value in the dictionary is different from the one held in the LatestStatus field. This gives a significant performance boost if you have two large sets of records to process and match, but in reality only a few records actually need to be updated due to value changes.
from time import strftime
print( "Start script: " + strftime("%Y-%m-%d %H:%M:%S"))
import arcpy
sourceFC = r"C:\Path\RelateTable"
sourceFieldsList = ["HydrantID", "CreateDate", "LastEditDate", "Status"]
# Build a summary dictionary from a da SearchCursor with unique key values
# of a field storing a list of the latest date and status.
valueDict = {}
with arcpy.da.SearchCursor(sourceFC, sourceFieldsList) as searchRows:
for searchRow in searchRows:
keyValue = searchRow[0]
if not keyValue in valueDict:
# assign a new keyValue entry to the dictionary with the
#latest date of the Created or Last Edit Date field.
lastDate = searchRow[1]
if searchRow[2] > lastDate:
lastDate = searchRow[2]
valueDict[keyValue] = [lastDate, searchRow[3]]
else:
# update the date and status of an existing keyValue entry in
# the dictionary if the current record's
# Created or Last Edit Date is the Latest date.
newDate = searchRow[1]
if newDate < searchRow[2]:
newDate = searchRow[2]
if valueDict[keyValue][0] < newDate:
valueDict[keyValue] = [newDate, searchRow[3]]
updateFC = r"C:\Path\UpdateFeatureClass"
updateFieldsList = ["FacilityID", "LatestStatusDate", "LatestStatus"]
with arcpy.da.UpdateCursor(updateFC, updateFieldsList) as updateRows:
for updateRow in updateRows:
# store the Join value of the row being updated in a keyValue variable
keyValue = updateRow[0]
# verify that the keyValue is in the Dictionary
if keyValue in valueDict:
if updateRow[1] < valueDict[keyValue][0] or updateRow[2] != valueDict[keyValue][1]:
# Transfer the data
updateRow[1] = valueDict[keyValue][0] # Latest Status Date
updateRow[2] = valueDict[keyValue][1] # Latest Status
updateRows.updateRow(updateRow)
del valueDict
print( "Finished script: " + strftime("%Y-%m-%d %H:%M:%S"))
11-10-2021 11:10 PM
IDEA
I am using ArcGIS Pro 2.7, and out of the box it behaves the way I described, but it does have the Angular Units option, so I tried that. That resolved the issue. I am glad to hear that ArcGIS Pro 2.8 has fully fixed this issue for those like me who didn't think to look at the Angular Units option. I tried installing 2.8 yesterday, but it failed. I will contact Esri Support to get that resolved. I look forward to the version where the default line symbol can be set for the initial course of each new traverse, since my Idea for that is in the product plan.
05-21-2021 08:15 AM
IDEA
Every time I create a new course in the Traverse tool for a curve, the units reset to decimal degrees (dd) for the Delta Angle. Unlike the line symbol choice, which remembers the symbol for the next course, the Delta Angle resets for every newly created course. I always enter the Delta Angle in degrees-minutes-seconds (dms) units, so I am constantly having to change this setting for every curve course I create. If I fail to change the units from dd to dms and enter the value as a dms value, the tool treats the dashes like minus signs and completely changes the value. This error causes the tangent bearing of the next course to be wrong, and I have to press the undo button to remove the Delta Angle value so I can change the units, enter the dms value, and set the correct course. Switching from keyboard to mouse to change the units for every curve course slows the whole process to a crawl. I presume the units are internally stored as dd, since every time I enter a dms value and exit the Delta Angle field, the value and units are displayed as dd. However, if I click on the value to edit the Delta Angle, it remembers the units I previously chose and changes the value and units back to dms. If I change the unit to ra it remembers that choice as well. I would like a way to set the default Delta Angle units to dms the first time I enter the field for editing for every new course I create in the Traverse tool. It can still display dd units when I am not editing in the field. It should then continue to remember any changes I make to the units for the next time I enter the field. This would save considerable time, and along with my other idea to set a default line symbol, this change would make the Traverse tool a keyboard-only tool from the starting course to the ending course and for every new single-course traverse I create.
05-20-2021 01:04 PM
IDEA
No. I want to choose the template before ever creating any traverse and before the drop down is available. I am currently having to use the drop down every time for thousands of new traverses that only have one course. My traverses always start with the top item in my domain and ignore the field default value. If I am creating side-yard traverses, I want a way to set the starting template for the traverse tool to use the template below once and have it stay that way as long as I am working on side-yards. It should take me a total of 1 mouse move and 3 mouse clicks to do this. Then I may do 300 separate traverses in a row with this template for an average tract. Currently, to do that I have to add 1,495 more mouse moves and clicks using the drop down for 299 of the side-yard traverses. I want to avoid this wasteful and tedious set of steps that almost doubles the time I spend working on these traverses. If I am creating centerline traverses, I want a way to set the starting template for the traverse tool to use the template below once and have it stay that way as long as I am working on centerlines. It should take me a total of 1 mouse move and 3 mouse clicks to do this. Then I may do 40 separate traverses in a row with this template for an average tract. Currently, to do that I have to add 195 more mouse moves and clicks using the drop down for 39 of the centerline traverses. I want to avoid this tedious set of steps that almost doubles the time I spend working on these traverses. A potential way to solve this is by adding another drop down to the main body of the tool, under the Layer drop down, for the Initial Traverse Template. The picture below shows what the tool should look like immediately after pressing the Set Starting Location button and choosing a position on the map. This would make the Traverse tool act like the Create Features tool, where I only have to choose an editing template once to create as many features as I want with that template until I change the template in the Create Features tool window.
04-01-2021 03:25 PM
IDEA
I can't find a way to choose the editing template for my layer that is used when I hit the New button and the Set Start Location button in the Traverse tool. My layer is symbolized based on a field with a coded domain, and every time I start a new traverse the tool always chooses the alphabetically first item in the domain as the editing template, no matter what I do. I rarely want to use that template, and I have to manually change the template virtually every time I create a traverse. The Traverse tool doesn't respect the active template I have chosen in the Create Features pane. The Traverse tool has its own template drop down that can switch the template the tool uses for each course of the traverse, with choices that are limited to the applicable templates for the layer I have chosen in the tool, which is probably the reason it can't respect the Create Features pane template. Also, after a template is chosen in the drop down, each added course will use that template until another is chosen. However, every time a new traverse is started, the template the Traverse tool uses for the initial course of the traverse is essentially arbitrary and can only be changed manually after a traverse has been started. This adds extra mouse movements and three mouse clicks to change the template nearly every time I create a new traverse. That is extremely tedious, time-consuming, and inefficient when I am doing a large number of traverses, such as doing separate traverses for all of the property side-yard boundaries that divide hundreds of lots in a tract. I need a setting that will make it possible for me to choose the template initially used by the Traverse tool when I hit the New button or the Set Start Location button, and that will apply to every new traverse I create until I change that initial template setting.
04-01-2021 12:04 PM
POST
Press the Windows button on your computer (lower left corner of the Windows task bar), open the ArcGIS application group, then press the Python Command Prompt application to open a command prompt window. Type idle and press Enter to start IDLE (the Python GUI application). From the File menu create a New File, or use the shortcut Ctrl-N. Paste the code into the file. From the File menu, Save the file (Ctrl-S shortcut) in the directory you want. Give the file the name you want and make sure to put the .py extension at the end. You can run the file from the Run menu with Run Module (F5 shortcut). If there are errors, they will be shown in the main IDLE window. You can use this .py file to schedule a weekly task with the Windows Task Scheduler that will run automatically at the date and time you specify. For the Program/script portion of the action, use the path below (keep it in double quotes and replace yourusername with your actual user name): "C:\Users\yourusername\AppData\Local\Programs\ArcGIS Pro\bin\Python\envs\arcgispro-py3\pythonw.exe" For the Add arguments field, put a double-quoted string containing the path and file name of your Python script.
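As a hypothetical example (the script path is made up, just to show how the two action fields go together), the scheduled task action would look like:
Program/script: "C:\Users\yourusername\AppData\Local\Programs\ArcGIS Pro\bin\Python\envs\arcgispro-py3\pythonw.exe"
Add arguments: "C:\Scripts\MyWeeklyScript.py"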
03-24-2021 12:07 PM
POST
Since the purpose of your join is to transfer data from the table to the feature class, my blog applies. If you change the lines of code below, based on the comment that precedes them, to match your path, table/feature class names, and field names, this code should work to transfer the data. This code is not set up for versioned SDE data or SDE data that requires an Editor session to make changes, but it could be adapted to deal with that if needed. This code assumes the concatenation of the 3 fields uniquely identifies each row in the table. If the concatenated values of the three fields in the table are not unique, only the value from the last table record holding that concatenated value will be passed to all of the matching features in the feature class. The code could be modified to check for concatenated keys in the source table that are not unique and to do statistical operations on the value being transferred, like First, Last, Min, Max, Sum, Mean, etc. (depending on the data type of the value being transferred).
from time import strftime
print( "Start script: " + strftime("%Y-%m-%d %H:%M:%S"))
import arcpy
## Change the path and source table name to match your table
sourceFC = r"C:\Path\SourceTable"
## Change the names of the three join fields and the fourth value field
sourceFieldsList = ["JoinField1", "JoinField2", "JoinField3", "ValueField"]
# Use list comprehension to build a dictionary from a da SearchCursor where the key values are based on 3 separate feilds
valueDict = {(r[0],r[1],r[2]):r[3:] for r in arcpy.da.SearchCursor(sourceFC, sourceFieldsList)}
## Change the path and update feature class name to match your feature class
updateFC = r"C:\Path\UpdateFeatureClass"
## Change the names of the three join fields and the fourth value field
updateFieldsList = ["JoinField1", "JoinField2", "JoinField3", "ValueField"]
with arcpy.da.UpdateCursor(updateFC, updateFieldsList) as updateRows:
for updateRow in updateRows:
# store the Join value by combining 3 field values of the row being updated in a keyValue variable
keyValue = (updateRow[0],updateRow[1],updateRow[2])
# verify that the keyValue is in the Dictionary
if keyValue in valueDict:
# transfer the value stored under the keyValue from the dictionary to the updated field.
updateRow[3] = valueDict[keyValue]
updateRows.updateRow(updateRow)
del valueDict
print( "Finished script: " + strftime("%Y-%m-%d %H:%M:%S"))
03-22-2021 07:43 AM
POST
What is the ultimate purpose of the join? Is the join primarily being used for transferring data between the feature class and table, creating feature labels, creating layer symbology, or expanding the feature records to show all of the combinations in a one-to-many or many-to-many relationship with the table? Is the data relationship supposed to refresh in real time as you edit the features and/or the table and maintain performance in a real-time map? Without knowing your specific needs, all I can do is offer a few suggestions and my feeling about their usefulness. The Make Query Table tool can do multi-field joins, but it is not editable, it only shows records that match, it doesn't really refresh, and it performs badly for large dataset joins. For real-time needs, I would say it is best to concatenate the values of multiple fields into a single field with a delimiter character separator and use that field to do a standard single-field join. The concatenated field can be maintained in real time for both the feature class and the table using a field-calculation-style expression, with Attribute Assistant in an Editor session under ArcMap Desktop or with Attribute Rules in Pro, provided your data is in a geodatabase that you can customize. If the multi-field join is being used for geoprocessing data transfer operations, see the "Creating a Multi-Field Python Dictionary Key to Replace a Concatenated Join Field" section in my Turbo-Charging Data Manipulation with Python Cursors and Dictionaries blog. Since your data is in SDE, perhaps creating a view using the underlying database would be your best bet, although I have no real experience in setting up enterprise database views.
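If you go the concatenated-field route, a rough arcpy sketch of the one-time setup in ArcGIS Pro might look like the following (the JoinKey field name and the three source field names are placeholders; the same expression would also work in the Field Calculator with the Python parser, and keeping the field current over time would still fall to Attribute Rules or Attribute Assistant as described above):
import arcpy
# Placeholder feature class, key field, and source fields - substitute your own names
fc = r"C:\Path\YourFeatureClass"
arcpy.management.AddField(fc, "JoinKey", "TEXT", field_length=100)
arcpy.management.CalculateField(fc, "JoinKey",
    "'|'.join([str(!Field1!), str(!Field2!), str(!Field3!)])", "PYTHON3")
# Run the same AddField/CalculateField against the standalone table, then join on JoinKey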
03-17-2021 07:48 PM
POST
I am experimenting with a script that I may develop into a tool that provides a fast alternative workflow to the Eliminate tool, assigning attributes from a polygon within a case-field group that shares a border. The scenario driving the script/tool/workflow occurs when the Intersect or Union tool combines two feature classes and slivers must remain within the feature boundaries from one of the feature classes. The typical scenario where this is needed occurs when a parcel feature class is one of the inputs to the Intersect or Union tool and the slivers or portions of a parcel created by the other polygon feature class can only take on the attributes of touching features that are also inside of that parcel. It is assumed that the parcels and the other feature class each adhere to a topology rule that does not allow overlapping features within themselves, but they have not been topologically controlled relative to each other. They also must be in a projection that uses linear units (like meters or feet) for the area and perimeter fields, and, at this point, I have only designed the script to work with feature classes contained in a file geodatabase.

So far, I have a script that deals with the most straightforward fix: the script identifies all of the parcels that have been divided into only two parts by the other feature class, and if the smaller part falls below a certain thinness ratio value and area size value and touches the larger portion, it inherits the attributes of the larger portion of the parcel, meaning that the entire parcel ends up being covered by only one of the features from the other feature class. When the Dissolve tool is applied to the ObjectID and attributes of the second feature class, the resulting feature boundary will conform to the parcel boundary where the tool modified the attributes. The script has been tested on 50K+ features and, based on a thinness ratio of 0.05 and a maximum area of 500 sq. ft., it identified and modified 1,451 slivers that met these criteria in 4 seconds.

I could redesign the script into a tool. It could potentially do multiple passes using a default list containing several levels of thinness ratios, area size ranges, query expressions, and/or minimum/maximum feature counts within a parcel. I need to do more tests to come up with the best recommended values for these inputs. I may also design it to accept user inputs for these parameters or to accept a layer selection like the Eliminate tool. I would appreciate some feedback from users to find out if they are dealing with scenarios similar to mine so I can decide if it is worth my time to make a tool interface that allows users to customize the inputs to their own needs. If I do design this tool I will not give any assurance that the tool will give good results if the user provides inputs that do not conform to the requirements I have mentioned. I also will most likely not enhance the tool to fit any alternative requirements.
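For anyone wondering how the sliver test could work, below is a rough sketch of flagging candidate slivers with arcpy. The path is a placeholder, the 0.05 and 500 sq. ft. values are just the example cutoffs mentioned above, and the thinness formula shown is the common 4*pi*area/perimeter^2 compactness measure, which may not be exactly what the final tool uses:
import arcpy
import math

# Placeholder sliver feature class produced by Intersect/Union - substitute your own
sliverFC = r"C:\Path\Output.gdb\IntersectResult"
thinnessCutoff = 0.05      # example cutoff
maxArea = 500.0            # example maximum area in square feet

sliverOIDs = []
with arcpy.da.SearchCursor(sliverFC, ["OID@", "SHAPE@AREA", "SHAPE@LENGTH"]) as rows:
    for oid, area, perimeter in rows:
        if area is None or perimeter in (None, 0):
            continue
        # One common thinness/compactness measure: 1.0 for a circle, near 0 for long thin slivers
        thinness = 4.0 * math.pi * area / (perimeter ** 2)
        if thinness < thinnessCutoff and area < maxArea:
            sliverOIDs.append(oid)

print("{0} candidate slivers found".format(len(sliverOIDs)))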
02-24-2021 06:13 PM