IDEA
Summary

Esri's current Spatial Reference model is static: it understands projection, datum, and units, but not time (epoch) or control network realization. That was sufficient when most work was local and datums were treated as fixed. It is no longer sufficient for a world that depends on Global Navigation Satellite System (GNSS) measurements, dynamic datums, and cross-boundary critical infrastructure.

This idea proposes a Dynamic Spatial Reference Extension that adds epoch and control-network awareness directly into the feature class Spatial Reference object, so that Esri tools can enforce it the same way they already enforce projection and datum. This is not just a "California problem" or a "surveying problem." It is a critical infrastructure, population mobility, and cross-jurisdictional data integrity problem.

Problem

Today, Esri's Spatial Reference:
- Enforces: projection, datum, units, XY tolerance/resolution.
- Does NOT enforce: epoch, control network, velocity model, bearing basis, distance basis.

As a result:
- GNSS-derived coordinates are inherently epoch-dependent, but GIS treats them as timeless.
- Control networks move (especially in plate boundary regions like California), but feature classes don't record which epoch or realization they align to.
- Bearings and distances derived from coordinates can change over time, but there is no way to track or enforce this.
- Datasets from different epochs and control networks can be merged, appended, or overlaid with no warning, silently corrupting geometry.
- Cross-boundary infrastructure (pipelines, power lines, fiber, transportation, water systems) and cross-state property ownership rely on consistent coordinates that may span multiple epochs and control realizations.

This is manageable for a single county with custom scripts and local rules. It is not manageable at national or global scale. And it cannot be solved reliably by external JSON, sidecar metadata, or custom extensions, because Esri tools only enforce what is embedded in the feature class Spatial Reference.

Why this matters (beyond "just" surveying)

1. Critical infrastructure
- Pipelines, power transmission, rail, fiber, and water systems cross state and county lines.
- If different segments are referenced to different epochs or control networks, the geometry is wrong even if the projection matches.
- This affects maintenance, safety, emergency response, and regulatory compliance.

2. Population mobility and cross-boundary property
- People buy property across state lines and move between regions.
- Parcels, easements, and rights-of-way can span jurisdictions with different control and epochs.
- Without epoch-aware SR, parcel and boundary data can drift or misalign when combined.

3. GNSS and dynamic datums
- GNSS coordinates are inherently time-dependent.
- National geodetic agencies (such as NGS in the United States) rely on Continuously Operating Reference Stations (CORS) to define modern control networks. These CORS networks are tied to specific epochs and realizations.
- The United States National Spatial Reference System (NSRS) is being modernized into a dynamic, time-dependent framework.
- The International Terrestrial Reference Frame (ITRF) — the global standard for Earth-centered coordinates — is inherently epoch-based and updated regularly.
- GIS is the last major system still treating coordinates as if the Earth is static.

4. Global interoperability
- Dynamic datums are already in use or planned in multiple regions (e.g., Australia, Europe, US NSRS modernization).
- Without epoch-aware SR, Esri becomes a bottleneck for accurate cross-border and international data exchange.

5. The Parcel Fabric
The true promise of the Parcel Fabric—seamless editing, authoritative record management, cross-boundary consistency, and survey-grade lineage—cannot be fully realized without an enhanced, epoch-aware Spatial Reference. Parcels are not just drawings; they are legal objects tied to control networks, GNSS-derived measurements, and bearings that change over time as the Earth moves. When the Spatial Reference lacks epoch and control-network metadata, the Fabric can maintain topology but cannot guarantee positional integrity, especially when data crosses county or state lines or is updated from modern GNSS observations. A dynamic, time-aware Spatial Reference is what allows the Parcel Fabric to function as a truly authoritative, future-proof cadastral system rather than a sophisticated drawing tool.

Proposed Solution: Dynamic Spatial Reference Extension

Extend the feature class Spatial Reference object to include a "survey-grade half" that captures the dynamic and realization-specific aspects of the coordinate system. Suggested additional fields (conceptual, not prescriptive):
- ControlNetworkID: identifies the control network or realization (e.g., NSRS2011, future NSRS, ITRF-based realization, regional CORS network).
- Epoch: the reference epoch of the coordinates (e.g., 2010.00, 2022.00).
- VelocityModel: the velocity or deformation model used (if any).
- BearingBasis: grid vs. true, and any relevant projection-based bearing definition.
- BearingNotation: quadrant vs. azimuth, degrees vs. grads, etc.
- DistanceBasis: ground vs. grid, and any scale factor or combination factor assumptions.
- TransformationLineage: a record of the CRS and epoch transformations applied (e.g., Esri WKIDs and epoch shifts used).

Key behaviors and enforcement

Once this Dynamic SR Extension is embedded in the feature class Spatial Reference, Esri tools should:

1. Detect mismatches
- Prevent or warn when merging/appending/overlaying datasets with different epochs or control networks.
- Similar to how Esri currently prevents mixing different projections or units.

2. Require epoch alignment
- Just as "Project" is required to reconcile different projections, an epoch-alignment step should be required when combining data from different epochs or control networks.
- This could be implemented via new tools or extensions to existing tools (e.g., "Project with Epoch Alignment").

3. Preserve and propagate metadata
- Ensure that the Dynamic SR metadata is preserved through geoprocessing operations, exports, and schema changes.
- Make it visible in layer properties and accessible via APIs.

4. Integrate with GNSS workflows
- Allow GNSS-derived data to be stored with explicit epoch and control network metadata.
- Support transformations from GNSS/ITRF epochs into local realizations and epochs using the Dynamic SR model.

Relationship to Bearing WKID / Bearing Profiles

A separate but related idea is the introduction of a Bearing WKID or bearing profile concept, which defines how bearings are expressed (basis, notation, distance basis, etc.). The Dynamic SR Extension complements this by:
- Anchoring coordinates in time (epoch) and control network.
- Ensuring that bearings are recomputed correctly when coordinates are epoch-shifted or transformed.
- Providing a consistent framework for survey-grade bearings in GIS.

Why this must be native to Esri (and not just an extension)

Organizations like counties or agencies can and do build local extensions:
- Custom scripts for epoch shifting.
- Local control network alignment workflows.
- Ad hoc metadata conventions.

These work in-house, but:
- They are fragile across software updates.
- They are invisible to core Esri tools.
- They cannot be reliably enforced outside the organization.
- They cannot scale to national or global infrastructure.

Only Esri can:
- Extend the Spatial Reference object.
- Integrate Dynamic SR into the projection engine.
- Update core tools (Project, Append, Merge, Spatial Join, etc.) to respect and enforce epoch and control metadata.
- Provide a consistent, global implementation that supports critical infrastructure and cross-boundary data.

What this Idea asks Esri to do

1. Acknowledge the need for dynamic, epoch-aware spatial references as a first-class platform concern.
2. Extend the feature class Spatial Reference model to include:
- Epoch
- ControlNetworkID
- VelocityModel
- Bearing/Distance basis metadata
- Transformation lineage
3. Update core tools to:
- Detect and warn on mismatched epochs/control networks.
- Require or assist with epoch alignment before blending datasets.
4. Document and expose this model clearly so users understand:
- Why epochs matter.
- How control networks (including CORS networks) affect coordinates.
- How to maintain spatial integrity across time and jurisdictional boundaries.

Closing

This is not just a technical refinement. It is a necessary evolution for a world that depends on:
- GNSS,
- dynamic datums,
- cross-boundary infrastructure,
- and a mobile population.

Static spatial references were enough for a static view of the Earth. They are not enough for the planet we actually live on. Dynamic, epoch-aware Spatial References—embedded in the feature class and enforced by Esri tools—are the next logical step.
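To make the enforcement idea concrete, here is a minimal sketch of what a Dynamic SR Extension check could look like. Everything here is illustrative: the class, its field names, and the check function do not exist in any Esri API; they only model the metadata fields proposed above.

```python
# Conceptual sketch only: these names do not exist in any Esri API.
from dataclasses import dataclass

@dataclass(frozen=True)
class DynamicSRExtension:
    control_network_id: str   # e.g. "NSRS2011", an ITRF realization, a CORS network
    epoch: float              # reference epoch of the coordinates, e.g. 2010.00
    velocity_model: str       # deformation/velocity model, empty string if none

def check_compatible(a: DynamicSRExtension, b: DynamicSRExtension) -> list:
    """Return the mismatches an Append/Merge-style tool could warn about."""
    problems = []
    if a.control_network_id != b.control_network_id:
        problems.append("control network mismatch")
    if a.epoch != b.epoch:
        problems.append("epoch mismatch: epoch alignment required")
    return problems

src = DynamicSRExtension("NSRS2011", 2010.00, "HTDP")
new = DynamicSRExtension("NSRS2011", 2022.00, "HTDP")
print(check_compatible(src, new))  # ['epoch mismatch: epoch alignment required']
```

This is the same pattern Esri tools already apply to projection and datum: compare the embedded metadata before combining data, and block or warn on mismatch.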
Posted 03-24-2026, 11:37 PM

IDEA
Summary

ArcGIS needs a Bearing WKID system—parallel to the existing Spatial Reference WKID system—to formally define and standardize how bearings are represented, stored, and interpreted across ArcGIS tools. Today, bearings are treated as raw numbers with no declared semantics. This leads to ambiguity, inconsistent tool outputs, and user confusion. A Bearing WKID system would make bearings portable, self-describing, and reliably interpretable across all ArcGIS workflows.

Why This Matters

Bearings are used in linear referencing, survey workflows, engineering design, CAD/GIS integration, navigation, custom grid systems, field data collection, and event tables. Despite this, ArcGIS provides no standard way to define angular units, reference frame, zero direction, rotation direction, or basis (compass, grid, magnetic, true north, Euclidean, geodetic, engineering grid). A bearing value like "45.0" is meaningless without these semantics.

It is important to note that the spatial reference itself is not a substitute for a well-defined bearing definition. A spatial reference describes how coordinates relate to the earth or a grid, but it does not define how angular directions should be interpreted. Disciplines such as surveying, engineering, navigation, and LRS require bearings to be precise, unambiguous, and consistently applied. Without a dedicated bearing definition system, ArcGIS cannot meet these requirements.

Current Pitfalls in ArcGIS Bearing Handling

1. Bearings are unitless and ambiguous. ArcGIS stores them as plain numbers with no declared meaning.
2. Tools output bearings inconsistently. Different tools use different conventions, often undocumented.
3. Bearings drift toward compass interpretation. When semantics are missing, tools and users assume compass bearings even when the data is grid-based.
4. Metadata is not preserved. Geoprocessing tools often strip metadata, making bearings even harder to interpret.
5. No way to declare custom angular systems. Engineering grids, survey bearings, and custom angular units have no native representation.
6. No validation or error detection. ArcGIS cannot detect mismatches between bearing systems because no standard exists.
7. Bearings are not portable. Exporting or publishing bearing data loses all semantic meaning.

Proposed Solution: A Bearing WKID System

Esri should introduce a Bearing WKID system that works like Spatial Reference WKIDs but for angular semantics. A Bearing WKID would define:
- Angular units (degrees, radians, grads, surveyor's degrees, custom)
- Reference frame (compass, grid, magnetic, true north, custom)
- Zero direction (north, east, grid X-axis, custom)
- Rotation direction (clockwise or counterclockwise)
- Basis (Euclidean, geodetic, engineering grid)
- Optional parameters (magnetic declination, custom offsets, local engineering definitions)

This would make bearings self-describing, portable, consistent across tools, safe for export and publication, validatable, and future-proof. Just as Spatial Reference WKIDs solved projection ambiguity, Bearing WKIDs would solve bearing ambiguity.

Interim Support: JSON Packaging

Until Esri implements Bearing WKIDs, ArcGIS could support JSON packaging of bearing definitions, use a reserved invalid WKID (such as -1) to signal "custom bearing system," and adopt a standard JSON schema for storing bearing semantics. This would allow tools and add-ins to interpret bearings safely today while providing a migration path to future Esri WKIDs.

Benefits to the ArcGIS Community

A Bearing WKID system would eliminate silent misinterpretation, standardize bearing outputs across tools, improve engineering and survey workflows, support custom angular systems, make bearings portable and self-describing, reduce user confusion, improve documentation clarity, and align ArcGIS with real-world practices.
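As a sketch of the interim JSON packaging idea, here is one possible shape for a self-describing bearing definition. Every key name is illustrative, not an existing Esri schema; the only detail taken from the proposal itself is using a reserved invalid WKID such as -1 to signal a custom bearing system.

```python
import json

# Illustrative schema only; key names are assumptions, not an Esri standard.
bearing_def = {
    "bearingWKID": -1,            # reserved invalid WKID: "custom bearing system"
    "angularUnits": "degrees",
    "referenceFrame": "grid",
    "zeroDirection": "north",
    "rotation": "clockwise",
    "basis": "engineering grid",
    "magneticDeclination": None,
}

# Package the definition as JSON so it can travel with the data
packed = json.dumps(bearing_def)

# A consuming tool or add-in could refuse to mix bearings from different systems:
other = dict(bearing_def, referenceFrame="compass")
compatible = (bearing_def["referenceFrame"] == other["referenceFrame"]
              and bearing_def["angularUnits"] == other["angularUnits"])
print(compatible)  # False: grid and compass bearings must not be blended silently
```

The point of the sketch is the migration path: once real Bearing WKIDs exist, the same fields could be resolved from a WKID instead of carried as literal JSON.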
Call to Action

If you have ever struggled with inconsistent bearing outputs, undocumented bearing conventions, ambiguous azimuths, grid versus compass confusion, custom engineering bearings, LRS offset calculations, or CAD/GIS integration, please upvote this idea and share your use cases. A Bearing WKID system would bring clarity, consistency, and reliability to one of the most misunderstood parts of ArcGIS.
Posted 03-24-2026, 08:01 PM

IDEA
This default behavior is not a recent change, so I oppose changing the default directly. However, I would support an option to turn it on or off with a one-time setting at the user's discretion.
Posted 03-16-2026, 07:46 AM

POST
ArcGIS Pro 3.x and earlier are built on an oversimplified, early-2010s async model that promised non-blocking behavior — but the more Esri adds on top of it, the more obvious the architectural limits become. Hardware manufacturers keep delivering more cores, more GPU power, more VRAM, and more memory, but Pro simply isn't architected to take advantage of modern hardware in the ways those vendors intend. Two threads — the UI thread and the Main CIM Thread (MCT) — are not enough when all GIS operations are funneled through a blocking MCT. It's unrealistic to expect ArcMap-level responsiveness from this model.

If Esri is going to undertake another major architectural overhaul, then let's ask for the right one — a real modernization of the engine under the hood. Below is what ArcGIS Pro 4.0 would actually need to fully utilize multi-core CPUs, GPUs, VRAM, and parallel pipelines. This is not fantasy. This is the real engineering work required to bring Pro into the 2026 world.

What ArcGIS Pro Needs Under the Hood to Truly Use Modern Hardware

1. A real multi-threaded GIS engine (not a global lock)

Current reality:
• The Main CIM Thread (MCT) is a single global mutex.
• Nearly all GIS operations serialize through it.

What Pro needs:
• ✔ A thread-safe, lock-free GIS core
• ✔ Independent pipelines for:
  - geometry operations
  - event layer computation
  - definition query evaluation
  - table access
  - editing
  - labeling
  - snapping
  - topology
  - attribute rules
• ✔ A scheduler that distributes work across all CPU cores

This is the foundation of a modern engine.

2. Parallel geometry and topology processing

Current reality:
• Geometry, topology, snapping, and measure calculations all run on one core.

What Pro needs:
• ✔ SIMD-optimized geometry kernels
• ✔ Multi-core spatial indexing
• ✔ Parallel topology validation
• ✔ Parallel snapping
• ✔ Parallel measure interpolation

This is how CAD, game engines, and modern GIS engines operate today.

3. A real event-layer engine that runs off the UI thread

Current reality:
• Event layers recompute on the MCT.
• Definition queries trigger full recomputation.
• The UI freezes.

What Pro needs:
• ✔ A dedicated event-layer computation thread
• ✔ Incremental recomputation (not full rebuilds)
• ✔ GPU-accelerated measure interpolation
• ✔ Cached route geometry segments

This alone would eliminate a huge percentage of UI stalls.

4. A GPU-accelerated spatial engine (not just rendering)

Current reality:
• The GPU draws symbols and rasters — and that's about it.

What Pro needs:
• ✔ GPU-accelerated:
  - spatial joins
  - buffering
  - clipping
  - intersections
  - event layer generation
  - measure calculations
  - topology checks
  - snapping
  - definition query filtering
• ✔ VRAM-resident spatial indexes
• ✔ Compute shaders for geometry operations

This is standard in modern 3D and simulation engines.

5. A non-blocking UI model

Current reality:
• The UI thread and MCT block each other.
• Buttons grey out.
• The TOC freezes.
• The map freezes.

What Pro needs:
• ✔ A UI that never waits on GIS operations
• ✔ A GIS engine that never waits on the UI
• ✔ A message-passing model (not shared state)
• ✔ A rendering thread independent of the MCT

This is how modern CAD and 3D engines stay responsive.

6. A real async model (not a syntax wrapper)

Current reality:
• QueuedTask.Run is just a ticket to the MCT.
• async/await is mostly ceremony.

What Pro needs:
• ✔ True asynchronous pipelines
• ✔ Futures/promises that run on worker threads
• ✔ A scheduler that distributes GIS tasks across cores
• ✔ No global lock

This is what async is supposed to mean.

7. A modern refresh pipeline

Current reality: any small change triggers a full cascade:
• renderer
• labeling
• event layers
• definition queries
• snapping
• topology
• attribute rules
• TOC
• map view

What Pro needs:
• ✔ Incremental refresh
• ✔ Dirty-region rendering
• ✔ Partial event-layer updates
• ✔ Partial table refresh
• ✔ Partial labeling refresh
• ✔ Partial snapping refresh

This is how modern engines avoid freezing.

Closing Thought

ArcGIS Pro has a modern exterior — GPU rendering, WPF UI, async syntax — but the engine underneath is still fundamentally serialized. If Esri is planning a major architectural overhaul for Pro 4.0, this is the opportunity to build a truly modern GIS engine that matches the hardware and expectations of 2026.
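The non-blocking, message-passing model described in sections 5 and 6 can be sketched in a few lines. This is a toy Python stand-in (a real Pro engine would be native code): work is dispatched to worker threads and results come back over a queue, so the dispatching thread never shares mutable state with the workers.

```python
import queue
import threading

results = queue.Queue()  # message channel from engine workers back to the "UI"

def engine_worker(task_id, payload):
    # Simulated GIS operation running on a worker thread,
    # instead of serializing through a single global lock.
    results.put((task_id, sum(payload)))

# The "UI thread" dispatches work and keeps running instead of blocking:
threads = [threading.Thread(target=engine_worker, args=(i, [i, i + 1]))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # in a real UI this would be event-driven, not a blocking join

received = sorted(results.get_nowait() for _ in range(3))
print(received)  # [(0, 1), (1, 3), (2, 5)]
```

The design point is the queue: the producer and consumer never touch the same state at the same time, which is what "message passing, not shared state" means in practice.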
Posted 02-23-2026, 08:34 AM

POST
@Wolf Does the code you provided work for version 3? If the user does a Save As of a previously saved project that had already stored a GUID in its custom settings, does this code handle assigning a new GUID to the custom settings of the newly created duplicate project? If it does, could you point out the event that ensures that happens? If instead the new project would carry over the GUID custom setting of the original saved project under the code you provided, can you extend the code to handle the Save As event, or explain how that would be done? Finally, do maps in a project need a similar custom property to retain a unique GUID that would be accessible to an add-in, or is the GUID created by Esri for project items like maps accessible through an exposed property or task?
Posted 01-21-2026, 10:11 AM

BLOG
@AnninaRupe1 Thank you for sharing your appreciation of my efforts to help people make the connection between the key Python concepts I'm describing and the handful of lines of sometimes densely compacted code I've presented. It also helps me to spell out why it was done that way, to see whether it has really stood up over time or could benefit from an update with things I have learned since I originally wrote this blog. I can only recall one other Python blog I wrote, called: I've Saved Time in a Bottle. How Do I Get it Back Out? - Doing More with Date Fields Using the Field Calculator and Python. The Turbo Charging Data Manipulation blog remains the single most useful coding concept I have repeatedly applied and adapted to solve problems I have encountered in my own career. And I am grateful to see that it has stood the test of time and continues to be a resource for many other people.
Posted 03-31-2025, 03:25 PM

BLOG
@AnninaRupe1 Thank you for the question and for trying to understand the concepts taught in this blog better. Constructing a dictionary from a list can be done in many ways, depending on how you want to process the list data in your code. In the case of my code I will try to break the dictionary construction down more and explain why I chose to do it the way I did. Here is the code from example 3 that built the dictionary:

valueDict = {r[0]:(r[1:]) for r in arcpy.da.SearchCursor(sourceFC, sourceFieldsList)}

The dictionary key is the field value stored under the first field name in the field list: r[0]. The dictionary value contained under that particular key is a tuple () containing the field values for every field in the field names list except the first one (index 0), since the slice starts at index 1 of the row: (r[1:]).

The reason for this is that I only use the first field as a matching key value, and not as a data transfer value, in the rest of my code. I saw no need to include code to transfer and overwrite the key into the field in the matched record in the target data, which by definition must already contain the matched key value before a transfer can take place. valueDict[keyValue] returns the tuple associated with the key, which only contains values that actually need to be written to the matched record.

The consequence of my choice to exclude the key value from the tuple is that the tuple indexes are shifted left (-1) relative to the original field list. Not overwriting the key value in the target record makes the code faster than an unnecessary data overwrite, since writing data is the slowest part of the code, and you want to avoid every unnecessary data write that you possibly can. The data transfer part of the code makes the necessary index shift relative to the original field name list indexing to keep everything aligned.
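The index shift described above can be seen with plain lists standing in for SearchCursor rows (the row values are hypothetical; no arcpy is required to follow the construction):

```python
# Each "row" mimics a SearchCursor row: [key_field, field1, field2].
rows = [
    ["P-001", "Smith", 125.4],
    ["P-002", "Jones", 98.7],
]

# Same comprehension shape as example 3: key = r[0], value = tuple of the rest.
# (A cursor yields tuples, so r[1:] is already a tuple there; with lists we
# wrap the slice in tuple() to match.)
valueDict = {r[0]: tuple(r[1:]) for r in rows}

print(valueDict["P-001"])     # ('Smith', 125.4)
print(valueDict["P-001"][0])  # tuple index 0 corresponds to row index 1
```

The last line is the left shift in action: field index n in the row becomes index n-1 in the stored tuple, which is exactly why the data transfer loop uses valueDict[keyValue][n-1].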
There are alternative ways the code could have accomplished the same thing, but you would have to adjust the indexing shown in my example in both the dictionary construction and the data transfer part of the code for it to work and still avoid the unnecessary overwrite of the key value that my code intentionally avoids. List manipulation is a topic unto itself and is largely driven by the goals of your code, which must consider and balance valid logic, speed, understandability, elegance and efficiency. Choosing, understanding and applying at least one form of internally consistent and valid logic and code syntax that accomplishes your goal is always essential before you try to adjust the logic or syntax to optimize and improve the code for the other 4 code factors. Hopefully, this post is useful in helping you better understand the choices driving the way I wrote my code. If you believe you have discovered ways to improve my code relative to the other factors, feel free to offer working code examples here.
Posted 03-28-2025, 06:54 PM

BLOG
If you only want area values that have changed to be updated you should use the if condition below:

if valueDict[keyValue] != updateRow[1]:

Potentially you would need an additional prior if clause to handle Null values. Something like:

if valueDict[keyValue] != None and updateRow[1] == None:

Also, you need to change the update assignment line from:

updateRow[n] = valueDict[keyValue][n-1]

to:

updateRow[n] = valueDict[keyValue]

This change is needed since your dictionary values are floats, not a tuple or list that requires the use of an index to retrieve a value from within it. The code could be simplified further since your dictionary is not returning a list, but if you just make this change the code will work without an error. If that does not accomplish your goal, please try to explain the final result you would like to achieve in more detail.
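Putting those pieces together, here is a sketch of the corrected loop using plain Python data in place of the cursors (the keys and area values are made up for illustration):

```python
# Hypothetical stand-ins: source dictionary with float values, target rows [key, area]
source = {"A": 10.0, "B": 25.0, "C": None}
target_rows = [["A", 10.0], ["B", 20.0], ["C", 5.0], ["D", 7.0]]

updated = []
for updateRow in target_rows:
    keyValue = updateRow[0]
    if keyValue in source:
        # Skip rows where the source value is Null or already matches the target
        if source[keyValue] is not None and source[keyValue] != updateRow[1]:
            # No [n-1] indexing: the dictionary value is a bare float, not a tuple
            updateRow[1] = source[keyValue]
            updated.append(keyValue)

print(updated)  # ['B']  (A already matches, C is Null in the source, D has no match)
```

Only row B is written, which is the whole point of the condition: unchanged and Null-source rows never trigger a write.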
Posted 03-16-2025, 09:49 PM

BLOG
list() converts an iterable object like a tuple to a list. valueDict[keyValue] evaluates to a float, which cannot be converted to a list with list(); a float can only be appended to an existing list or wrapped in brackets. Also, since valueDict[keyValue] is always a single float value, there is no benefit to enclosing it in a list over working with it directly as a float, unless you were going to append multiple float values to the list. This is how you would create a list variable that contains your float value:

myList = [valueDict[keyValue]]

You need to explain what you are trying to accomplish by putting a float value into a list and why you think that is required for your code to accomplish its purpose. I don't see any purpose for what you are doing or have any idea what your end goal is with this portion of your code.
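The distinction between list() and bracket wrapping can be shown in a couple of lines (the value 42.5 is just a stand-in for valueDict[keyValue]):

```python
value = 42.5  # a float, like valueDict[keyValue] in the discussion above

# list() requires an iterable; calling it on a bare float raises TypeError.
try:
    myList = list(value)
except TypeError:
    myList = [value]  # wrapping in brackets is how you put one float into a list

print(myList)  # [42.5]
```

list("ab") works because a string is iterable, but list(42.5) never does; brackets build a one-element list regardless of the value's type.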
Posted 03-16-2025, 09:37 PM

BLOG
I got the upper syntax wrong. It is fixed in my previous post now. The syntax is supposed to be:

text.upper()

You missed the parentheses at the end. I don't know why the case is changing in the two environments. I don't work with portal data very much, so someone with more experience will have to let you know whether that behavior is normal or not.

Also, this line of code makes no sense to me:

if list(valueDict[keyValue]) and updateRow[1:1]:

Those are not meaningful true/false test conditions; in particular, updateRow[1:1] is always an empty list, which evaluates as False. What is this line supposed to do?

Rich
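Why that condition can never succeed comes down to Python truthiness: an empty list is falsy, and the slice [1:1] is empty by construction. A quick illustration with a made-up row:

```python
updateRow = ["A", 10.0, "x"]  # hypothetical cursor row

print(bool(updateRow[1:1]))  # False: the slice [1:1] selects nothing
print(bool(updateRow[1:]))   # True: a non-empty slice is truthy

# So any condition of the form "something and updateRow[1:1]" is always falsy:
result = bool(bool(["anything"]) and updateRow[1:1])
print(result)  # False
```

This is why the quoted if clause silently skips every row rather than testing anything useful.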
Posted 03-12-2025, 08:40 PM

BLOG
You can change the case to all upper or all lower in both the dictionary and the cursor to force a match even if the values are stored in different letter cases on disk. This modification is similar to your previous code where you made the key the combination of two field values that are stored as two separate field values on disk.

valueDict = {r[0].upper(): r[1] for r in arcpy.da.SearchCursor(temp_Areas, sourceFieldsList)}
print(valueDict)

keyValue = updateRow[0].upper()

So one of the key benefits of this code is that tables that don't work using a standard attribute join can be matched, and at a faster speed than a Join and field calculation (which would also update all rows rather than just the rows that have actual differences).
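The case-insensitive matching can be demonstrated without arcpy, using hypothetical street names stored with different letter cases on each side:

```python
# Hypothetical source and target values stored with different cases on disk
source_rows = [("Main St", 100), ("OAK AVE", 200)]
target_keys = ["MAIN ST", "Oak Ave"]

# Upper-case the key on both sides, as in the snippet above
valueDict = {r[0].upper(): r[1] for r in source_rows}

matches = [valueDict[k.upper()] for k in target_keys if k.upper() in valueDict]
print(matches)  # [100, 200]
```

Without the .upper() on both sides, neither key would match, which is exactly the situation a standard attribute join cannot handle.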
Posted 03-12-2025, 07:03 PM

BLOG
I found the error in my previous code. I had not noticed that I had pasted the for loop twice when it should only have been pasted once:

for updateRow in updateRows:

I have removed the second for loop and it will now work according to your needs. This revised code is many times more efficient than your current code, especially if only a few records actually need to be updated. The larger your record set becomes, the larger the benefit of using my revised code, since the act of updating a row is the most costly part of the loop, and avoiding it when no change is actually occurring is the most significant optimization you can implement. You also won't have to disable editor tracking with the updated code, since records that have no change will not be updated and will not trigger the modified date. I left the insert and delete code for the benefit of others. The complete synchronization of a relationship to produce a complete match efficiently from a source requires all three steps.
Posted 03-03-2025, 10:43 PM

BLOG
tempDict = {}
delDict = {}
with arcpy.da.UpdateCursor(fs2, updateFieldsList) as updateRows:
    for updateRow in updateRows:
        # store the Join value by combining 2 field values of the row being updated in a keyValue variable
        keyValue = updateRow[0] + "," + str(updateRow[1])
        tempDict[keyValue] = keyValue.split(",", 1)
        # verify that the keyValue is in the Dictionary
        if keyValue in valueDict:
            update = False
            for n in range(2, len(sourceFieldsList)):
                if valueDict[keyValue][n-2] != None:
                    if updateRow[n] == None:
                        updateRow[n] = valueDict[keyValue][n-2]
                        update = True
                    elif updateRow[n] != valueDict[keyValue][n-2]:
                        updateRow[n] = valueDict[keyValue][n-2]
                        update = True
            if update == True:
                updateRows.updateRow(updateRow)
        else:
            delDict[keyValue] = keyValue

with arcpy.da.InsertCursor(fs2, updateFieldsList) as cursor:
    for key in valueDict.keys():
        if not key in tempDict:
            insRow = key.split(",", 1) + list(valueDict[key])
            print(insRow)
            cursor.insertRow(insRow)

with arcpy.da.UpdateCursor(fs2, updateFieldsList) as updateRows:
    for updateRow in updateRows:
        keyValue = updateRow[0] + "," + str(updateRow[1])
        if keyValue in delDict:
            updateRows.deleteRow()

del delDict
del tempDict
del valueDict

The above code should only update row A if the row B field is not Null and one or more fields in row A are Null or have a different value from the row B field. It should not do an update if, for all fields, the row B field is Null, both the row A field and row B field are Null, or the row A field and row B field contain the same value. This code also inserts rows from the source that are missing in the update target and deletes rows from the update target that are not in the source. The insert and delete operations are done in separate loops from the initial UpdateCursor to prevent locks and to avoid having the UpdateCursor get confused by rows being deleted.
It is important to note that InsertCursors require that the updateFieldsList has the field names and field values arranged in the exact same order as the actual field order of the underlying table/feature class/service, otherwise the insert will fail. SearchCursors and UpdateCursors do not have this requirement and can process fields in any order, but InsertCursors can't rearrange the underlying field order.
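The three-step synchronization (update changed rows, insert missing rows, delete orphaned rows) can be exercised with plain dictionaries in place of the cursors. The data here is hypothetical and the arcpy-specific parts are omitted; only the control flow of the sync is reproduced:

```python
# source: key -> tuple of transfer values; target: key -> list of stored values
valueDict = {"A": ("x", 1), "B": ("y", 2)}   # source records
target = {"A": ["x", 1], "C": ["z", 3]}      # update target

# Step 1: update existing rows that differ; note rows missing from the source
delKeys = []
for key, row in target.items():
    if key in valueDict:
        if tuple(row) != valueDict[key]:
            target[key] = list(valueDict[key])  # only write when values differ
    else:
        delKeys.append(key)

# Step 2: insert source rows that are missing from the target
for key in valueDict:
    if key not in target:
        target[key] = list(valueDict[key])

# Step 3: delete target rows that are not in the source
for key in delKeys:
    del target[key]

print(target)  # {'A': ['x', 1], 'B': ['y', 2]}
```

After the three passes the target exactly mirrors the source: "A" was left untouched (no unnecessary write), "B" was inserted, and "C" was deleted.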
Posted 02-28-2025, 08:03 AM

BLOG
For a record that was updated but that you believe should not have been, print both halves of the if condition on separate lines. The difference could be that the lists being compared are misaligned, which wouldn't necessarily cause a runtime error, but it would be a coding logic error. Also, post your full code so far as a base for me to edit. Otherwise I cannot piece together the code on my phone without retyping everything. I need the full code to add the code sections and logic for processing insert and delete cursors. In any case, I am done for the night.
Posted 02-27-2025, 10:50 PM

BLOG
Try indenting all of the code you just posted under this if clause, so that you only update rows that are actually different:

if list(updateRow[2:]) != list(valueDict[keyValue]):

If it triggers errors, post them.
Posted 02-27-2025, 10:16 PM