IDEA
I was still using ArcMap when I wrote this post, and the version of ArcGIS Pro I am using now (2.9) apparently predates the release of the Export Features and Export Table tools, so at this time I cannot test them. However, ArcGIS Pro 2.9 does offer the "Preserve Global IDs" environment setting, and that does work with Feature Class to Feature Class and Table to Table at 2.9, so that aspect of my request can be considered Implemented. However, the bulk of the data I deal with only uses the ObjectID as the unique primary key value for each record, and preserving these values is paramount to many of my workflows. I do note that the Export Features and Export Table tools include a field map, which may allow me to preserve the ObjectID. However, other tools that include a field map have been notoriously unsuccessful and unreliable at preserving ObjectID values up through ArcGIS Pro 2.9, and I cannot test whether this is a viable solution, since the field map is not available with Feature Class to Feature Class or Table to Table. I do not foresee my organization upgrading to ArcGIS Pro 3.x any time soon, although I may push to upgrade my own ArcGIS Pro installation sooner if you can confirm that the Export Features and Export Table field map supports ObjectID value preservation. If you can confirm that it does, then you can mark this idea as Implemented. But if it does not preserve the ObjectID foreign key values reliably, then consider implementing this aspect of this idea through the field map or an additional option.
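For anyone who can run Pro 3.x, an untested sketch of what the field-map approach might look like with Export Features (the paths, dataset, and `SRC_OID` field name are hypothetical examples, not from any confirmed workflow):

```python
# Hypothetical sketch: carry the source OBJECTID into an Export Features
# output via a field map. Untested here; paths and names are examples only.
import arcpy

src = r"C:\data\source.gdb\Permits"

fms = arcpy.FieldMappings()
fms.addTable(src)  # start from the default field list of the source

# Build a field map that reads OBJECTID and writes it to a new Long field.
fm = arcpy.FieldMap()
fm.addInputField(src, "OBJECTID")
out_fld = fm.outputField
out_fld.name = "SRC_OID"       # hypothetical foreign key field name
out_fld.aliasName = "SRC_OID"
out_fld.type = "Integer"       # OID input must be retyped to a writable Long
fm.outputField = out_fld
fms.addFieldMap(fm)

arcpy.conversion.ExportFeatures(
    src, r"C:\data\out.gdb\Permits_copy", field_mapping=fms)
```

Whether the field map actually evaluates OBJECTID values reliably is exactly the open question above; this only shows the setup that would need to be tested.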
Posted 08-05-2024, 11:58 AM
POST
|
The original suggestion using MOD only reliably works if the field being evaluated is a Short, Long or Big Integer field. However, if the field being evaluated is a Float or Double and contains any values that are not whole numbers, the expression suggested by @MatthewLeonard is the way to go.
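The original expressions are not quoted in this reply, so as a hedged illustration only (assuming the goal is to test whether a numeric value is a whole number): a plain MOD comparison can be thrown off by floating-point representation error in Float/Double fields, which a tolerance-based check avoids.

```python
import math

def is_whole(value, tol=1e-9):
    """Return True when a numeric value is a whole number.

    A plain `value % 1 == 0` test is safe for integer fields but can fail
    for Float/Double values such as 3.0000000000000004, where the remainder
    is a tiny nonzero float. Comparing with a tolerance handles both cases.
    """
    if value is None:
        return False
    frac = value % 1.0
    return math.isclose(frac, 0.0, abs_tol=tol) or math.isclose(frac, 1.0, abs_tol=tol)
```

For example, `3.0000000000000004 % 1 == 0` is False even though the stored Double is effectively whole, while `is_whole(3.0000000000000004)` returns True.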
Posted 08-03-2024, 09:18 AM
IDEA
|
Being able to retain foreign key values in any extract of data that the user wants to create that match the source primary key values is fundamental to most data administration workflows. Extractions of records from a source that relies on its OBJECTIDs and/or its GLOBALIDs as its primary key in relationship to other records should not be relationally destroyed relative to the source that they came from if the user has an explicit requirement to preserve those values as a foreign key in their extracted output. However, this kind of relational destruction is currently always happening whenever the Feature Class to Feature Class and Table to Table tools are used, which are the default tools for creating extractions in ArcGIS Pro. The Feature Class to Feature Class and Table to Table tools (and possibly other tools) need to add two options that would allow the user to create new foreign key fields that preserve the OBJECTID and/or GLOBALID values of the source feature class/table in the output. If the user checks either of these options a control in the tool should suggest a default Field Name for each which the user could override. Validation should ensure that the source contains OBJECTID/GLOBALID fields before enabling these options, and that the user definable field names are valid and not already in the source feature class/table or duplicated if the user checks both options. The source OBJECTID values would be preserved in a Long field and the source GLOBALID values would be preserved in a GUID field. The use case for this is when the only field(s) that contain a unique value for each of the source features or rows is the OBJECTID and/or the GLOBALID. Often the source table is not owned by the user and cannot be modified to store these values into a Long and/or GUID field in advance. Using either of these tools overwrites the source OBJECTID and GLOBALID values with new values in the output, especially when the user only wants to extract a subset of records. 
Having to copy the source data into a workspace the user can modify in order to add these fields in advance is very time consuming when the source contains large numbers of features and/or rows, and the user has to repeatedly copy the source every time they need a new extraction if they have any reason to believe that the source contains new records they want. This Copy workaround is especially burdensome to the user who only wants a small subset of the source features or rows, not an entire copy, just to apply the fundamentals of relational database design to their output. There are also many instances where the Copy function is not even supported for this workaround, such as when the source feature class is part of a topology within a feature dataset. Incorporating the preservation of the OBJECTIDs in a static foreign key field in the output of these two tools is absolutely possible, as demonstrated by numerous tools like Spatial Join, Union, Intersect, Polygon to Line, Multipart to Singlepart, etc., all of which preserve a foreign key FID field in their output. Extending this kind of foreign key preservation behavior to also allow for the preservation of GLOBALID values in a GUID foreign key output field should not be significantly more difficult. This would be a great enhancement and time saver for users who need extractions from Parcel Fabric data sources and any other advanced feature dataset capabilities that are built around GLOBALIDs as primary keys for relationships. Adding a GLOBALID-to-GUID option to the many other tools that currently preserve OBJECTIDs in a foreign key FID field by default would be a great time saver as well, but that is less critical, since at least those outputs already preserve usable foreign key values from the source OBJECTIDs by default. Additionally, some of those tools support field mapping, which may allow the user to accomplish the GLOBALID value preservation themselves.
But the Feature Class to Feature Class and Table to Table tools currently do not offer any option that allows the user to preserve any usable foreign key values in their output relative to any sources that rely on OBJECTIDs or GLOBALIDs as their sole unique primary keys. If the existing Feature Class to Feature Class and Table to Table tools should not be modified to add this option because they are accessible to all license levels and intentionally provide limited functionality as a result, a new pair of Conversion tools called Feature Class to Feature Class with FIDs and Table to Table with FIDs should be added for Advanced License users at minimum. These two new tools would definitely be a selling point for many Basic and Standard users to consider upgrading to an Advanced license, as well as two very befitting and useful additions for truly Advanced users.
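The manual workaround described above can be sketched in arcpy as follows (untested here; the geodatabase paths, dataset, field names, and where clause are all hypothetical examples):

```python
# Sketch of the workaround: on an editable copy of the source, preserve
# OBJECTID/GLOBALID values as static foreign key fields before extracting.
# Paths and field names are hypothetical.
import arcpy

src = r"C:\data\work.gdb\Parcels"  # editable copy of the source

arcpy.management.AddField(src, "SRC_OID", "LONG")
arcpy.management.AddField(src, "SRC_GUID", "GUID")
arcpy.management.CalculateField(src, "SRC_OID", "!OBJECTID!", "PYTHON3")
arcpy.management.CalculateField(src, "SRC_GUID", "!GLOBALID!", "PYTHON3")

# Any extraction taken now carries usable foreign keys back to the source,
# even though the tool assigns new OBJECTID/GLOBALID values in the output.
arcpy.conversion.FeatureClassToFeatureClass(
    src, r"C:\data\extract.gdb", "Parcels_subset", "SUBDIV = 'TR 12345'")
```

The idea above is precisely that the tools should do this themselves, so the user never has to make the editable copy in the first place.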
Posted 08-02-2024, 03:59 PM
IDEA
|
In my opinion, the centralization of the Attribute Field Mapping configuration into a single setup interface that simultaneously affects virtually all editing tools is a fundamentally flawed design. It introduces confusion, contradiction, conflicts, constraints, and inefficiency into my workflows that never existed before. It seems designed to work only if I repeatedly use just one of the tools that relies on this setup and rarely switch to any other tool affected by it. While I can imagine there are some users who think the centralized Field Mapping setup is an improvement, I doubt they have developed many workflows that switch between the different tools these settings affect. I think anyone who considers the behaviors of the different editing tools understands why ArcMap never centralized them into a single setup. The Attribute Transfer tool setup is designed around the spatial behavior of that tool, which works best with only a very small set of field mappings in place, especially when the most common behavior involves transferring data between overlapping features in two different feature classes. The Copy/Paste behavior, on the other hand, needs nearly every attribute mapping set up by default, especially for copying and pasting records within the same feature class/attribute table. But the Copy/Paste field mapping setup means I have to turn off the layer covering the other in order to use Attribute Transfer to click between the two feature classes, adding a lot of clicks that were never necessary in ArcMap. Conversely, my ideal Attribute Transfer field mapping setup would cause the Copy/Paste behavior to fail to preserve attributes in Pro. I never had to think about any of that while setting up the Attribute Field Map for the Attribute Transfer tool in ArcMap. The two behaviors were never meant to control each other, but now, like it or not, I am being forced into an unwanted choice with unintended impacts on both behaviors.
Using the ArcGIS Pro Attribute Field Mapping now feels like trying to configure the separate behaviors of four traffic signal arms where I can only set up one of them, and every change I make to one signal arm is applied to all of the signal arms simultaneously and identically. The result is not the smooth flow of traffic through that intersection, but confusion, collisions, and a tremendous slowdown once the drivers realize the signal arms are all showing green, yellow, or red at the same time instead of operating independently and in concert with each other. While playing from a single sheet of music may be fine for a solo performance, no symphony worth listening to was ever performed by giving all the players of the different instruments the same sheet of music to play at the same time. The Copy/Paste function, the Attribute Transfer tool, and any other editing tools affected by the Attribute Field Mapping settings are symphony players: they need to be given separate sheets of music to play a symphony together. The Attribute Field Mapping is their sheet music. They are rarely meant to play the melody, harmony, or rhythm identically when they perform together, and you can't get them to play together by having only one sheet of music available for all of the instruments to read from at any given moment. I want the field maps of the editing tools to be separable again. The superficial fact that the user interfaces for setting up the Field Mappings of the different editing tools can look identical did not justify connecting all of the editing tool behaviors that use a Field Mapping to a single centralized model underneath. Adding Field Mapping capabilities to the Copy/Paste function is an improvement insofar as it lets me map between different field names for that operation, which I couldn't do in ArcMap. However, that setup should not have been combined with the setup of the Attribute Transfer tool in the mistaken belief that they use this functionality in fundamentally the same way.
Posted 07-18-2024, 02:58 AM
POST
|
The problem reappeared in a template map. I found that the coordinate system (projection) of the template map did not match my feature classes, and it was fixed by changing the map's coordinate system. So the project was not corrupt; I am just still not used to how to track down such basic things in ArcGIS Pro compared to ArcMap. It took me this long to figure out that I needed to right-click the map root container at the top of the layer list to check that setting, and I only figured that out because I had two maps in the same project that were behaving completely differently. The other thing this showed me is that true curves have to be displayed in their native projection to appear and behave correctly. Attempting to display them in any other projection on the fly, without formally applying the Project tool, simply does not work.
Posted 07-08-2024, 09:51 AM
POST
|
After further investigation, the strange behaviors I was observing all seem to be due to using a corrupted project. Bringing the data into a fresh project seems to have eliminated the issue with the true curves. The behavior was primarily being observed in ArcGIS Pro. I am now no longer certain that the exact same behaviors were appearing in ArcMap, or whether I was assuming they were on the basis of some behaviors that I may not have investigated as thoroughly as I thought. Bottom line: if a project begins to act strangely, try building a fresh one.
Posted 07-08-2024, 08:54 AM
POST
|
During preparations for migrating to a Parcel Fabric, I am examining the results from the Simplify by Straight Lines and Circular Arcs (SLACA) tool in some detail. I have several instances of strange results that I want to discuss with Esri technical support to see whether these are expected behaviors or, more likely, signs of bugs. I am using ArcGIS Pro 2.9 and ArcMap 10.8.1. [Image: an original densified polyline that makes up part of a polygon boundary, shown next to the results of running the Buffer tool on the same polygons with a negative 1-foot buffer (-1'); everything is as expected.] The dark red line below shows the output of the SLACA tool using a 0.5-foot distance tolerance from the polyline edges. The curves generated are clearly farther than 0.5 feet away from the original polyline edges (exactly 5' offset rather than 0.5' in the most extreme case, as far as I can tell) and visibly cross into the -1' buffer polygon. However, when I use the Select by Location tool to find the set of SLACA lines that intersect the buffer polygons, none of these lines are selected. I created a copy of the SLACA lines and ran the Densify tool using the ANGLE option to alter only curves, placing a vertex at every 0.5-degree change. The output line in purple is nearly restored to the original form of the polyline before I ran the SLACA tool, when it did not intersect the buffer polygons. This is apparently the way the radial bend/curve geometry is being interpreted by the Select by Location tool, which would explain why that tool did not identify any intersection between the two features. [Images: the densified output shown by itself; the original and densified polylines overlaying each other.] When I build polygons using the SLACA lines, the output is visibly identical to the SLACA lines.
If I select one of those SLACA polygons (the bulb in the center of the picture), it looks like this relative to the negative buffer polygons. If I run the Select by Location tool using this selected polygon to select all intersecting buffer polygons, it only selects the buffer polygon that makes up the center of the bulb and nothing else. This would seem to indicate bugs either in the depiction of the radial bend/curve on the screen, so that it does not accurately represent the underlying geometry being stored with the feature, or in the way that geometry is being processed. I think it most likely that the screen depiction is not an accurate representation of the underlying geometry stored with these features, but either way, the disconnect between what the user sees and how the feature behaves in standard geoprocessing operations will clearly cause confusion for the viewer and the analyst (as it did for me). The good news is that the fundamental true curve geometry being stored in the feature is apparently correct within the tolerances I set using the geoprocessing tools, and is apparently not being corrupted by any of the geoprocessing steps I performed; otherwise the Densify tool could not have created an output that met my tolerance criteria and virtually restored the original shape. But the bad news is that the depiction of the true curve is visibly distorted in a way that would lead the user to believe the geometry has been corrupted and no longer matches the tolerance settings. Additionally, interacting with the geometry on the screen behaves as though the depicted distortion were really there, i.e., the user has to click on what they see on screen to identify the feature they are viewing, even though in reality the coordinates being clicked would not touch the underlying geometry.
If the screen depiction and interactions were corrected to actually match the stored underlying geometry the geoprocessing tools are generating, this would likely resolve many of the frustrations users have expressed over the years about how true curves behave. Additionally, only a relatively small subset of curves exhibit this behavior, which suggests that it affects only one or perhaps two subroutines of the drawing module that are triggered under specific circumstances. This would explain the seemingly random appearance of gaps or overlaps on screen after inputting geometry that should perfectly align, because the behavior does not occur universally. As evidence, no gaps/overlaps were created between any true curves when I created my polygons using only a single polyline to define the shared boundary between every left and right polygon, whether the boundary contained one or more true curves or not. Artificial gaps/overlaps can only be created when separate polylines define the shared boundaries between two polygons and can be depicted using different drawing subroutines. Such randomness leads users to mistrust the results even more than an easily explained universal error that everyone understands, because it is both harder to detect/predict and harder to explain. But in reality this means, to me, that the overall routines involving true curves are trustworthy, and the effect is actually less damaging to our data outputs than what we feel we are observing. Correcting this behavior would have a much greater impact on user perceptions of the trustworthiness of true curves than the behavior is actually having on the true curves themselves. At the same time, this bug may never be fixed.
The performance requirements of the drawing module and the geoprocessing module are so different that any resolution of this bug that results in a noticeable degradation in drawing performance will likely not be implemented. The number of users who would tolerate a noticeable drop in drawing performance is certainly far smaller than the number who will continue to tolerate these small sets of features with their apparent distortions and geoprocessing behavior disconnects. This geometry appears and performs the same way in ArcMap and ArcGIS Pro, so this apparent bug has either gone undetected, or been detected and rejected as a fix, ever since the original implementation of true curves in Esri's products.
Posted 07-04-2024, 08:59 AM
BLOG
|
Without knowing your data or what field is being referenced in the field list, it is hard to determine the cause of this behavior. Can you confirm that the field in your field list for the update cursor is never Null? If any record contains a Null value in the first field in your list, the behavior is what I would expect. Potentially, you need to add the ObjectID field to your update cursor field list and use a modified index expression for the update loop to avoid updating that field, and then only use that field index in your output for the condition of an unmatched record. In any case, without the full field list and a clear understanding of what values are possible in the update target key field you are referencing, I can only guess at what is causing this behavior. There are ways to modify the code to accomplish what you want to do, but they may require somewhat more sophisticated field index handling than the relatively straightforward examples I provided in my blog. I often build a new dictionary in the condition where records are unmatched and iterate through it after the update loop is finished. I frequently use multiple loops to process an update cursor, followed by a loop through a dictionary of records for an insert cursor, followed by another loop over another dictionary with an update cursor that deletes records, to achieve a complete update/insert/delete synchronization of a target to match a source. Each of these loops has to be kept separate to avoid causing locking conflicts. I also include logic to not touch records that are validated as already synchronized, to avoid the performance hit of doing unnecessary edits with my cursor processing, which is slower than reading and comparing a record in memory. Rich
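A generic, pure-Python sketch of that multi-pass pattern (names are illustrative; in arcpy, each pass would run over its own `arcpy.da` SearchCursor, UpdateCursor, or InsertCursor so the edits never collide):

```python
# Rows are (key, value) tuples standing in for cursor rows keyed on a
# unique field. The function only *plans* the three passes; each plan
# would then be applied by its own cursor loop to avoid locking conflicts.
def plan_sync(source_rows, target_rows):
    src = {k: v for k, v in source_rows}
    tgt = {k: v for k, v in target_rows}
    # Matched rows whose values already agree are left untouched, avoiding
    # the performance hit of unnecessary edits.
    updates = {k: src[k] for k in src if k in tgt and tgt[k] != src[k]}
    inserts = {k: src[k] for k in src if k not in tgt}
    deletes = [k for k in tgt if k not in src]
    return updates, inserts, deletes
```

For example, syncing a target that has a stale value for key "b" and an orphan key "c" against a source that adds key "d" yields one update, one insert, and one delete.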
Posted 05-23-2024, 09:31 PM
IDEA
|
As an illustration of a common workflow I do with the ArcMap Attribute Transfer tool: I need to reshape building permits that were originally created as overlapping shapes based on the underlying common lot, because at the time of intake the condo parcels did not exist. I need to reshape these permits to match the final condominium parcels. In the picture below there are 10 overlapping permit shapes, and each condo parcel overlaps a single common lot parcel within the same feature class. Without the ArcMap dialogs this task would be nearly impossible, but with them it is easy to match up the legal descriptions contained in the permits to the desired condo lots. [Image: the final permit shapes.] The ArcMap Attribute Transfer tool dialogs are the key to this entire workflow, which results in a dramatic increase in the resolution of my permit data and greatly enhances my ability to understand the arrangement of the permits, which can lead to better analysis results and opportunities. If I used the ArcGIS Pro Attribute Transfer tool, I would first have to filter out the common lots as source features, since they have the lowest ObjectIDs and would prevent me from selecting the condo lots as my source features. I would then have to select every source feature in the order of the ObjectIDs of the target features to match them up correctly, rather than by the logical and natural arrangement of the legal descriptions of the source features. Being forced to know the order of the target features that ArcGIS Pro will pick in order to select my source features operates exactly backwards from the way I need it to work, which should naturally be driven by the source feature I choose, not the target feature it forces me to choose. ArcMap does not require me to work through the features in the random order of the target permit ObjectIDs, nor does it require me to take any preliminary steps to filter out the source common lots with lower ObjectIDs before I can start editing.
I can simply choose the desired condo parcel from the source dialog and the desired permit from the target dialog in whatever order makes the most sense to me (in this case the order of the condo lot numbers), rather than the arbitrary ObjectID order of the target permits. It is typical for me to tackle workflows like this in ArcMap involving hundreds of overlapping features in my target feature class. The amount of forethought I would be forced to engage in, and the number of QC steps I would have to add to prevent errors, to accomplish the same task in ArcGIS Pro seems intentionally designed to make my head hurt, or to prevent me from even attempting to enhance and maintain my data in the way that I want. This workflow alone makes it impossible for me to justify taking advantage of any ArcGIS Pro enhancements that would block my data from being backward compatible with ArcMap, since none of the enhancements Esri has provided is as vital to my data's usefulness and integrity as this ArcMap Attribute Transfer tool behavior.
Posted 05-20-2024, 05:39 PM
IDEA
|
Another important aspect of the dialog behavior in ArcMap is that when the user clicks an item in the list in the left-hand pane, the feature in the map flashes. This allows the user to visually confirm that they have selected the correct item in the list that they want the transfer to affect before they commit any change by hitting the OK button to proceed. The current behavior of the ArcGIS Pro tool carries with it the strong potential for silently introducing data corruption and confusion rather than being reliable as one of the primary tools for correcting it, and for that reason is virtually worthless for replacing any of my ArcMap workflows that rely on this tool. Additionally, as it currently stands, ArcGIS Pro has no alternative editing workflow that can serve as a passable substitute for the Attribute Transfer tool provided in ArcMap.
Posted 05-20-2024, 05:01 PM
IDEA
|
Using Python in the Field Calculator is notoriously difficult to troubleshoot in this situation. Knowing which row the calculation failed on would speed up the process of correcting the Python expression, or of refining the record selection to avoid triggering the error and having the calculation stop midway through.
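In the meantime, a cursor-based loop can report the failing row itself. A generic sketch (the fields and the per-row function are hypothetical; with arcpy the same loop would run over `arcpy.da.UpdateCursor(table, ["OID@", "VALUE"])`):

```python
# Apply a per-row calculation and collect the IDs of rows that raise,
# instead of silently stopping on the first bad row the way the Field
# Calculator does. Rows are (oid, value) pairs standing in for cursor rows.
def calculate_rows(rows, func):
    failed = []
    for oid, value in rows:
        try:
            func(value)  # in a real cursor loop, write the result back here
        except Exception as exc:
            failed.append((oid, str(exc)))
    return failed
```

Running it with `int` as the calculation over values `"2"`, `"x"`, `"4"` reports that the row with OID 2 is the one that cannot be converted.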
Posted 05-20-2024, 04:41 PM
POST
|
I suspect the example from the Riverside County parcels data probably caused some confusion. That is because the Book and Page fields actually represent the recorded Book and Page from our Recorder's office for our Tract or Parcel Map recordings; they have nothing to do with the Book and Page of the Assessor's Parcel Number. In fact, there are no fields in the parcel schema that directly contain just the Assessor's Book or Page of the parcel. To correctly extract the Book and Page of the Assessor's Parcel, you should instead have parsed the first 6 digits of the NAME field (Book is digits 1-3 and Page is digits 4-6) and done the feature selection based on an SQL expression of "NAME LIKE '" + Name6Digits + "%'". Then you could take the max NAME value of the set of returned features and increment that parcel number by 1 to correctly generate the next parcel number in the sequence based on the parcel the user clicked. Arcade and ArcGIS Pro are actually especially useful for this data, because they allow me to use parsing expressions to generate symbology based on the Assessor's Book, the Assessor's Page, or the Book and Page together, without any need to create fields that directly contain those values. In ArcMap I would have to alter the schema or create a view that generated actual fields for those parsed values in order to symbolize the data based on them. Arcade can also convert the original string value of the NAME field to numeric values, so I have the option to display the symbology based on a range of numeric values rather than being limited to unique value symbology for a set of discrete string values.
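The parse-select-increment logic above can be sketched in plain Python (assuming, for illustration, 9-digit NAME values where Book is digits 1-3, Page is digits 4-6, and the remaining digits are the parcel sequence; the real schema may differ):

```python
# Given the NAME of the clicked parcel and the NAME values of all parcels,
# find the max NAME on the same Assessor's Book/Page and increment it by 1.
def next_parcel_number(clicked_name, all_names):
    book_page = clicked_name[:6]  # Book = digits 1-3, Page = digits 4-6
    # Equivalent to selecting with: NAME LIKE '<book_page>%'
    same_page = [n for n in all_names if n.startswith(book_page)]
    # zfill preserves any leading zeros in the fixed-width NAME.
    return str(int(max(same_page)) + 1).zfill(len(clicked_name))
```

For example, clicking a parcel named "123456001" when "123456007" already exists on that Book/Page yields "123456008".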
Posted 05-15-2024, 06:19 PM
POST
|
I agree about the parentheses. As my original post indicated, the code was untested. The post was intended to show the structure of how I typically tackle this kind of problem, but I must not have had time to design a real world test case to do an actual debug of the code at that time. Anyway, I hope you have gotten what you needed.
Posted 01-31-2024, 11:17 AM
POST
|
As I recall, the sorted function may fail if any value it is trying to sort is None, since the sorted function must compare all values to each other to determine the order of the values, and None cannot be compared to other actual values the way that underlying function is written. Since one of the fields making up your tuple key is a date field I imagine it can contain None values. In any case you could print the keys prior to the sorted function to determine what the actual key values are to see if this may explain the code behavior. You can try using a value substitution in your list comprehension as shown in the top reply to this post: https://stackoverflow.com/questions/30976124/sort-when-values-are-none-or-empty-strings-python
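The substitution trick from that linked answer, applied to tuple keys whose date element may be None (the actual key fields in the question are not shown here, so the names and values below are illustrative):

```python
import datetime

# Tuple keys whose second element (a date field) can be None, as suspected above.
keys = [("B", datetime.date(2024, 1, 3)),
        ("A", None),
        ("C", datetime.date(2023, 5, 1))]

# Sorting on the raw date raises TypeError, because None cannot be
# compared to a date. Substituting a sentinel for None makes every key
# comparable; here None sorts first, before any real date.
ordered = sorted(keys, key=lambda k: (k[1] or datetime.date.min, k[0]))
# ordered is [("A", None), ("C", 2023-05-01), ("B", 2024-01-03)]
```

The same substitution idea works for any other non-comparable placeholder value, such as empty strings mixed with real strings.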
Posted 01-31-2024, 10:26 AM
IDEA
|
Thank you for implementing this enhancement, and for the video demonstrating how it works. The behavior is now much more like the way the Create Features template works, and it makes it much easier to quickly create a set of single-course features that all need to be classified with the same attribute/symbology.
Posted 10-06-2023, 11:39 AM