<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Loading Data into Versioned, Archiving tables on a daily basis in Data Management Questions</title>
    <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587178#M45580</link>
    <description>&lt;P&gt;Should I insert into the versioned view (which writes to the delta tables) or directly into the base table through SQL? When inserting directly into the table, I ran into issues generating the GLOBALID and OBJECTID values. Is there a way to auto-increment them using a procedure?&lt;/P&gt;</description>
    <pubDate>Wed, 19 Feb 2025 20:07:10 GMT</pubDate>
    <dc:creator>vijaybadugu</dc:creator>
    <dc:date>2025-02-19T20:07:10Z</dc:date>
    <item>
      <title>Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1586152#M45570</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Current Data Loading Approach:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Currently, I use the &lt;STRONG&gt;Append&lt;/STRONG&gt; tool to load data. Initially, I truncate the entire table and then load the data using the &lt;STRONG&gt;Append&lt;/STRONG&gt; tool. However, this approach causes the data to be inserted into &lt;STRONG&gt;delta tables&lt;/STRONG&gt;, leading to their continuous growth, which may negatively impact overall database performance.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Why Did I Enable Versioning and Archiving?&lt;/STRONG&gt;&lt;BR /&gt;We have a few &lt;STRONG&gt;feature classes&lt;/STRONG&gt; that are part of a &lt;STRONG&gt;topology&lt;/STRONG&gt; and are used in &lt;STRONG&gt;Field Maps&lt;/STRONG&gt; with &lt;STRONG&gt;sync-enabled feature services&lt;/STRONG&gt;. Additionally, we created several &lt;STRONG&gt;relational tables&lt;/STRONG&gt; in the &lt;STRONG&gt;spatial database&lt;/STRONG&gt; that are updated daily from a &lt;STRONG&gt;non-spatial database&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;To publish these tables with &lt;STRONG&gt;sync-enabled services&lt;/STRONG&gt; and display this relational data in &lt;STRONG&gt;Field Maps&lt;/STRONG&gt;, we need to enable &lt;STRONG&gt;versioning&lt;/STRONG&gt; and &lt;STRONG&gt;archiving&lt;/STRONG&gt; on the tables.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;How Should the Data Be Loaded into These Tables?&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;What are the best practices for updating this data daily?&lt;/P&gt;&lt;P&gt;Could you suggest a more efficient approach?&lt;/P&gt;</description>
      <pubDate>Mon, 17 Feb 2025 23:33:01 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1586152#M45570</guid>
      <dc:creator>vijaybadugu</dc:creator>
      <dc:date>2025-02-17T23:33:01Z</dc:date>
    </item>
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1586778#M45573</link>
      <description>&lt;P&gt;Truncate and Append is pretty much worst case for a versioned geodatabase, particularly a TRUNCATE, since it will bypass the Delete event.&lt;/P&gt;&lt;P&gt;Best practice is also the trickiest -- Identify the changes (INSERT/UPDATE/DELETE) that need to occur to bring the data into sync, and issue commands against a versioned view to make it happen.&lt;/P&gt;&lt;P&gt;- V&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2025 02:28:29 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1586778#M45573</guid>
      <dc:creator>VinceAngelo</dc:creator>
      <dc:date>2025-02-19T02:28:29Z</dc:date>
    </item>
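The change-identification approach in the reply above (compute the INSERT/UPDATE/DELETE sets, then issue only those against the versioned view) can be sketched in plain Python before any SQL is involved. This is a minimal illustration with hypothetical key-to-attributes dicts, not Esri code:

```python
def diff_changes(source, target):
    """Compare source and target rows keyed by a stable business key.

    source, target: dict mapping key -> tuple of attribute values.
    Returns (inserts, updates, deletes) as lists of keys, i.e. the
    minimal set of edits needed to bring target into sync with source.
    """
    inserts = [k for k in source if k not in target]
    updates = [k for k in source if k in target and source[k] != target[k]]
    deletes = [k for k in target if k not in source]
    return inserts, updates, deletes

# Example: one new row (4), one changed row (2), one vanished row (3).
src = {1: ("a",), 2: ("b2",), 4: ("d",)}
tgt = {1: ("a",), 2: ("b",), 3: ("c",)}
ins, upd, dele = diff_changes(src, tgt)
# ins == [4], upd == [2], dele == [3]
```

Issuing only these three row-level edits, instead of a truncate-and-append of every row, is what keeps the delta tables from growing by the full table size on each load.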
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587082#M45578</link>
      <description>&lt;P&gt;The Append tool has an option to match fields for updates. However, if records are no longer present in the source, they have to be deleted from the target manually or with a script. That is why I thought truncate and Append would be the right option. Before that, I unregistered the feature class.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2025 17:46:52 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587082#M45578</guid>
      <dc:creator>vijaybadugu</dc:creator>
      <dc:date>2025-02-19T17:46:52Z</dc:date>
    </item>
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587140#M45579</link>
      <description>&lt;P&gt;I have batch scripts running in client sites several times a day, doing updates and inserts of existing and new rows. I use SQL to manage the change detection, using &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;INSERT ... SELECT ... FROM ({virtual_table_query}) vt&lt;/FONT&gt;&lt;/STRONG&gt;&amp;nbsp;and &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;UPDATE ... FROM ({virtual_table_query}) WHERE u.keycol = vt.keycol&lt;/FONT&gt;&lt;/STRONG&gt;, editing both unversioned and versioned data. TRUNCATE/APPEND is too blunt an instrument, and unregistering the feature class defeats the purpose of versioning and archiving.&lt;/P&gt;&lt;P&gt;The key here is to populate a temporary table (which &lt;U&gt;is&lt;/U&gt; truncated and appended, or sometimes only appended), then use SQL for the gentler bits.&lt;/P&gt;&lt;P&gt;- V&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2025 19:11:29 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587140#M45579</guid>
      <dc:creator>VinceAngelo</dc:creator>
      <dc:date>2025-02-19T19:11:29Z</dc:date>
    </item>
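The staging-table pattern described above (bulk-load a scratch table, then apply only the deltas with targeted SQL) can be sketched with sqlite3 standing in for the enterprise geodatabase. Table and column names here are hypothetical; against a real versioned view you would run the equivalent statements through your DBMS client, and the UPDATE uses the correlated-subquery form so it stays portable to databases without UPDATE ... FROM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (keycol INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE TABLE staging (keycol INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO target VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
# Staging holds today's full extract (truncate-and-append is fine HERE,
# because staging is an ordinary, unversioned scratch table).
cur.executemany("INSERT INTO staging VALUES (?, ?)", [(1, "a"), (2, "b2"), (4, "d")])

# 1. Insert rows that exist only in staging.
cur.execute("""
    INSERT INTO target
    SELECT s.keycol, s.val FROM staging s
    WHERE s.keycol NOT IN (SELECT keycol FROM target)""")

# 2. Update only rows whose attributes actually changed.
cur.execute("""
    UPDATE target
    SET val = (SELECT s.val FROM staging s WHERE s.keycol = target.keycol)
    WHERE EXISTS (SELECT 1 FROM staging s
                  WHERE s.keycol = target.keycol AND s.val <> target.val)""")

# 3. Delete rows that disappeared from the source.
cur.execute("DELETE FROM target WHERE keycol NOT IN (SELECT keycol FROM staging)")

rows = sorted(cur.execute("SELECT keycol, val FROM target"))
# rows == [(1, "a"), (2, "b2"), (4, "d")]
```

Only three rows are touched instead of all of them, which is the whole point: the delta and archive tables record three edits rather than a full truncate-and-reload.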
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587178#M45580</link>
      <description>&lt;P&gt;Should I insert into the versioned view (which writes to the delta tables) or directly into the base table through SQL? When inserting directly into the table, I ran into issues generating the GLOBALID and OBJECTID values. Is there a way to auto-increment them using a procedure?&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2025 20:07:10 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587178#M45580</guid>
      <dc:creator>vijaybadugu</dc:creator>
      <dc:date>2025-02-19T20:07:10Z</dc:date>
    </item>
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587811#M45583</link>
      <description>&lt;P&gt;If the table is non-versioned, you're responsible for a &lt;FONT face="courier new,courier"&gt;serial&lt;/FONT&gt; or &lt;FONT face="courier new,courier"&gt;SEQUENCE&lt;/FONT&gt; to keep it clean. UUID generation is trivial.&lt;/P&gt;&lt;P&gt;- V&lt;/P&gt;</description>
      <pubDate>Fri, 21 Feb 2025 02:59:05 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1587811#M45583</guid>
      <dc:creator>VinceAngelo</dc:creator>
      <dc:date>2025-02-21T02:59:05Z</dc:date>
    </item>
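On the "UUID generation is trivial" point: a GlobalID-style value can be produced client-side with the standard library. The braced upper-case formatting below follows the convention Esri GlobalID columns typically display; whether your geodatabase accepts a client-supplied GlobalID in your workflow is an assumption to verify for your setup:

```python
import uuid

def new_globalid():
    """Return a random UUID formatted in the braced, upper-case
    style conventionally seen in Esri GlobalID columns."""
    return "{%s}" % str(uuid.uuid4()).upper()

gid = new_globalid()
# e.g. '{3F2504E0-4F89-41D3-9A0C-0305E82C3301}' (random each call)
```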
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1588509#M45585</link>
      <description>&lt;P&gt;The table is versioned and archived. Can we update this table through SQL without writing to the delta tables and _H tables? I am using next_rowid and next_globalid to populate OBJECTID and GLOBALID.&lt;/P&gt;&lt;P&gt;How can I reset OBJECTID to 1 when using next_rowid?&lt;/P&gt;</description>
      <pubDate>Mon, 24 Feb 2025 14:19:21 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1588509#M45585</guid>
      <dc:creator>vijaybadugu</dc:creator>
      <dc:date>2025-02-24T14:19:21Z</dc:date>
    </item>
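For context on the next_rowid/next_globalid helpers mentioned above: the geodatabase provides them for exactly this case, but their call syntax varies by DBMS (a function in some databases, a stored procedure with an output parameter in others). A sketch of assembling the PostgreSQL-style calls from Python; the owner and table names are hypothetical, and the exact form for your database should be checked against the Esri documentation:

```python
def next_rowid_sql(owner: str, table: str) -> str:
    """Build the PostgreSQL-style call to the geodatabase's
    next_rowid helper (other DBMSs use a different form)."""
    return f"SELECT sde.next_rowid('{owner}', '{table}')"

def next_globalid_sql() -> str:
    """Build the PostgreSQL-style call that returns a new GlobalID."""
    return "SELECT sde.next_globalid()"

print(next_rowid_sql("gisowner", "assets"))
# SELECT sde.next_rowid('gisowner', 'assets')
```

You would execute the generated statement through your database connection and use the returned value in the subsequent INSERT, one fetch per inserted row.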
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1588943#M45587</link>
      <description>&lt;P&gt;Inserting, updating, and deleting data directly using SQL is possible but is not recommended due to its complexity and risk, &lt;STRONG&gt;especially when there is a controller dataset involved&lt;/STRONG&gt; (topology, parcel fabric, utility network, etc.). The recommended, safe way to perform these updates is to use an API (Python, .NET, REST, etc.) that will manage the edits to the archive and ensure integrity with the controller dataset is maintained.&lt;/P&gt;&lt;P&gt;As an example, if you insert a new feature into a topology using SQL, the row will be created in the database, but no dirty area will be created. This means the topology will not be able to validate the feature.&lt;/P&gt;&lt;P&gt;For controller datasets that perform tracing the problem is significantly worse, because the feature will not be incorporated into the network index. Likewise, if you make updates that affect the state of the network, you will end up in a similar situation: edits applied to features with no means of getting them reflected in the network index.&lt;/P&gt;&lt;P&gt;If you make the same edit using our API, we ensure that, in addition to performing the requested edits, any system fields are properly managed. If you later find you have made a mistake that affects your topology and try to log a case with support, they will be unable to help you because you used SQL instead of the APIs.&lt;/P&gt;</description>
      <pubDate>Tue, 25 Feb 2025 14:52:30 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1588943#M45587</guid>
      <dc:creator>RobertKrisher</dc:creator>
      <dc:date>2025-02-25T14:52:30Z</dc:date>
    </item>
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1589140#M45588</link>
      <description>&lt;P&gt;Thanks for the response. &lt;SPAN&gt;I am updating a versioned, archived table on a weekly/daily basis using SQL instead of geoprocessing tools like truncate → append. Each update contains 70,000 records, which are stored in the history (_H) and delta tables, causing growth until a compress operation is performed. This may impact overall geodatabase performance. I am considering updating the table through SQL without modifying the delta or archive tables. These tables are related to versioned feature classes; because of that, I have enabled both versioning and archiving on them. Otherwise, I would have enabled only archiving.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;These tables are independent and do not participate in any network or topology. I pull the data from a non-spatial database, and the tables are used in Field Maps to display the data. There is just a relationship class mapping the actual versioned feature class to this table.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 25 Feb 2025 20:42:15 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1589140#M45588</guid>
      <dc:creator>vijaybadugu</dc:creator>
      <dc:date>2025-02-25T20:42:15Z</dc:date>
    </item>
    <item>
      <title>Re: Loading Data into Versioned, Archiving tables on a daily basis</title>
      <link>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1589533#M45591</link>
      <description>&lt;P&gt;&lt;a href="https://community.esri.com/t5/user/viewprofilepage/user-id/386769"&gt;@vijaybadugu&lt;/a&gt;&amp;nbsp;In your original post you said that some of the feature classes participated in a topology, so that's why I raised concerns about direct updates.&lt;/P&gt;&lt;P&gt;If you make direct edits to features using SQL then there will be no history for these edits captured in the GIS. This keeps table sizes down, but also means that if someone takes the data offline and tries to sync it, the server won't see that you've made any changes. This would require all your clients to do full downloads.&lt;/P&gt;&lt;P&gt;Doing transactional edits using our APIs will ensure that clients and systems downstream can use that history to perform incremental updates. The important thing is to be selective in your updates and only push inserts/updates/deletes as necessary. This will slow down the growth of the tables/history and will keep the size of the deltas for your field clients down.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Feb 2025 16:38:24 GMT</pubDate>
      <guid>https://community.esri.com/t5/data-management-questions/loading-data-in-to-versioned-archiving-tables-on/m-p/1589533#M45591</guid>
      <dc:creator>RobertKrisher</dc:creator>
      <dc:date>2025-02-26T16:38:24Z</dc:date>
    </item>
  </channel>
</rss>

