POST
If the table is non-versioned, you're responsible for a serial or SEQUENCE to keep it clean. UUID generation is trivial. - V
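To illustrate the "UUID generation is trivial" point, here is a minimal sketch using Python's standard uuid module, with an in-memory SQLite table standing in for the real non-versioned table (the table and column names are hypothetical):

```python
import sqlite3
import uuid

# Stand-in table; the real non-versioned table would live in the enterprise DBMS.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parcels (globalid TEXT PRIMARY KEY, apn TEXT)")

# Generating a unique key per row is trivial with uuid4.
rows = [(str(uuid.uuid4()), apn) for apn in ("001-010", "001-011", "001-012")]
con.executemany("INSERT INTO parcels (globalid, apn) VALUES (?, ?)", rows)

count, distinct = con.execute(
    "SELECT COUNT(*), COUNT(DISTINCT globalid) FROM parcels"
).fetchone()
print(count, distinct)  # every row received a distinct key
```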
Posted 02-20-2025 06:59 PM

POST
I have batch scripts running in client sites several times a day, doing updates and inserts of existing and new rows. I use SQL to manage the change detection, using INSERT ... SELECT ... FROM ({virtual_table_query}) vt and UPDATE ... FROM ({virtual_table_query}) WHERE u.keycol = vt.keycol, editing both unversioned and versioned data. TRUNCATE/APPEND is too blunt an instrument, and unregistering the feature class defeats the purpose of versioning and archiving. The key here is to populate a temporary table (which is truncated and appended, or sometimes only appended), then use SQL for the gentler bits. - V
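A sketch of that staging-table pattern, using SQLite from Python as a stand-in (table and column names are hypothetical, and a correlated subquery replaces UPDATE ... FROM, which older SQLite lacks; against an enterprise geodatabase these statements would run on the versioned view):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE target (keycol INTEGER PRIMARY KEY, val TEXT);
CREATE TABLE staging (keycol INTEGER PRIMARY KEY, val TEXT);
INSERT INTO target  VALUES (1,'a'), (2,'old');
INSERT INTO staging VALUES (2,'new'), (3,'c');
""")

-- placeholder
""")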
Posted 02-19-2025 11:11 AM

POST
Truncate and Append is pretty much worst case for a versioned geodatabase, particularly a TRUNCATE, since it will bypass the Delete event. Best practice is also the trickiest -- Identify the changes (INSERT/UPDATE/DELETE) that need to occur to bring the data into sync, and issue commands against a versioned view to make it happen. - V
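One way to sketch the change-identification step in pure Python (the key/value dicts are hypothetical stand-ins for the source query and the versioned view; the resulting sets drive the INSERT/UPDATE/DELETE statements issued against the view):

```python
# Desired state (source) and current contents (versioned view), keyed by keycol.
source  = {1: "a", 2: "new", 4: "d"}
current = {1: "a", 2: "old", 3: "c"}

inserts = sorted(source.keys() - current.keys())  # rows only in source
deletes = sorted(current.keys() - source.keys())  # rows only in current
updates = sorted(k for k in source.keys() & current.keys()
                 if source[k] != current[k])      # shared keys whose values differ

print(inserts, updates, deletes)  # [4] [2] [3]
```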
Posted 02-18-2025 06:28 PM

POST
The point behind using separate development and production databases is to keep dev as close as possible to production. The choice between DBO- and sde-owned geodatabases is fairly fundamental, and straddling the two models like this is bound to cause exactly this sort of problem. The sde-login owned geodatabase model does not know how to respect DBO rights, so piling on more permissions is unlikely to help. At this point, you need to choose between not locking out non-sde logins in DEV, and reconfiguring the DEV database to be DBO-owned. I personally hate DBO geodatabases with a passion, but if that's your production configuration, I'd suggest you restart with a new DEV and use that same model. - V
Posted 02-15-2025 04:50 PM

POST
It depends on situational requirements, but best practice is to keep the database server and all timestamp values set to UTC (aka Greenwich Mean Time). Web clients are exceptionally persistent in converting date/time values to localtime, even when they're already sharing a timezone. - V
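A small illustration of the keep-everything-UTC rule with Python's standard datetime module (the UTC-5 offset is just an example of a display-side conversion):

```python
from datetime import datetime, timezone, timedelta

# Store and compare timestamps in UTC...
stored = datetime(2025, 2, 12, 15, 26, tzinfo=timezone.utc)

# ...and convert to local time only at the display edge (here, UTC-5 as an example).
local = stored.astimezone(timezone(timedelta(hours=-5)))

print(stored.isoformat())  # 2025-02-12T15:26:00+00:00
print(local.isoformat())   # 2025-02-12T10:26:00-05:00
assert stored == local     # same instant, different rendering
```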
Posted 02-12-2025 07:26 AM

POST
File geodatabase hasn't really changed since 10.0 (vice the 9.x architecture). There have been types added, the use of which would make older clients not recognize those tables, but the underlying format hasn't changed, and there isn't an upgrade procedure for them. - V
Posted 01-28-2025 01:28 PM

POST
Table ownership and schema are different things in PostgreSQL. ArcObjects geodatabase registration functionality requires that the schema be the same as the owner login. You can use the tables without registration, or you can name them as required for registration. - V
Posted 01-22-2025 08:53 AM

POST
Use a backup solution for a backup purpose, and a failover solution for a failover purpose. There is no "best" replication solution here. - V
Posted 01-21-2025 05:00 PM

POST
A replica isn't a backup; it's a point-in-time snapshot. A true standby database is the right pattern for failover/resiliency. - V
Posted 01-21-2025 11:54 AM

POST
It's usually the case that you should not ever need to Define a projection. The only case where this is appropriate is when:
1. The existing coordinate system is wrong, and
2. You know what the correct coordinate system is.
It's not clear from your attachments (which really ought to be embedded inline) which datasets are which. It does look like two of them may be in geographic (angular) units (the extents look like degrees). Altering the false origins is useless in this context. You need to identify which sources were corrupt before you got them, and which ones you corrupted with an ill-advised Define Projection. Once you've got that straight, you need to Project the sources to the desired coordinate reference, not clobber the existing correct metadata. - V
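One quick way to sanity-check whether an extent is plausibly in geographic (degree) units rather than projected linear units is a bounds check -- this is a hypothetical helper, not an Esri function:

```python
def looks_geographic(xmin, ymin, xmax, ymax):
    """Heuristic: extents that fit within +/-180 x +/-90 are plausibly degrees."""
    return -180 <= xmin <= xmax <= 180 and -90 <= ymin <= ymax <= 90

print(looks_geographic(-122.5, 37.2, -121.9, 37.9))                # True: degree-scale
print(looks_geographic(548000.0, 4179000.0, 551000.0, 4183000.0))  # False: meter-scale
```

A True result is only a hint that the data may be geographic; the authoritative answer is the metadata of the original, uncorrupted source.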
Posted 01-15-2025 07:33 AM

POST
I think the deprecation is being enforced by 3.3/10.3. - V
Posted 01-08-2025 11:58 AM

POST
User schema geodatabases have been deprecated for more than four years. Are you sure this is even possible? - V
Posted 01-08-2025 10:13 AM

POST
The definitive Esri document on coordinate domains is the Understanding Coordinate Management in the Geodatabase whitepaper. Basically, the geometry management library stores all coordinates in a compressed integer form which allows for fast, efficient topological comparison. The domain properties provide details on the way that compression manifests.

The minX/minY values form the lower-left (SW) corner point past which no mapping can occur (not even search circle construction). The maxX/maxY are also hard limits (NE), but are likely to be impossibly far from actual data, unless the XY resolution is ridiculously fine (e.g. Ångströms). Within those corners, all coordinates are snapped to the nearest resolution value (and the compressed values are less efficiently compressed as the resolution gets smaller).

Coordinate domains are immutable -- they cannot be altered after the data table is populated. They must be established at feature class creation time.

I override the defaults all the time, but I found assembly language to be an intuitive way to write code. The easiest way to leverage custom domains in the Pro UI is through a feature dataset. Create your FD with the desired tolerances, populate new feature classes in the FD, then immediately remove the FC from the FD (to avoid locking issues associated with feature datasets). - V
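The snapping behavior can be sketched in a few lines of Python. This is a conceptual model only -- the origin and resolution values are made up, and the actual compressed encoding is described in the whitepaper:

```python
def to_internal(coord, origin, resolution):
    """Snap a coordinate to the domain grid, returning the compressed integer."""
    return round((coord - origin) / resolution)

def from_internal(i, origin, resolution):
    """Expand a compressed integer back to the snapped coordinate it represents."""
    return origin + i * resolution

min_x, res = -5120900.0, 0.0001   # hypothetical domain origin and XY resolution
x = 6453277.123456789             # more precision than the domain can hold

i = to_internal(x, min_x, res)
snapped = from_internal(i, min_x, res)
print(i, snapped)  # the stored integer, and the value every reader will see
```

Any digits finer than the resolution are discarded at write time, which is why the domain must be chosen before the table is populated.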
Posted 01-06-2025 08:17 AM

POST
There's quite a bit wrong with the assumptions behind this, so let's make sure the ground is clear:

- ArcGIS Pro is a database client. Changing display properties on a layer has no impact on the data within the database server, only on the queries which are generated (and the results returned).
- Queries against a database server have no impact on the data storage of the rows, which are usually bundled up into blocks in the database I/O subsystem.
- Databases read blocks, not rows, so if the DBMS thinks it needs any row in a block, it has to read the entire block.
- Blocks are cached, so supplemental reads of any other rows in the block will likely not cost more (see fragmentation, later).
- Rows which span blocks generally require chaining to the supplemental block(s).
- Databases often "read ahead" once they detect a sequence of blocks being queried, flushing out least recently used blocks, so that subsequent reads are "faster" (which isn't always the case, especially if the read-ahead wasn't needed and the flushed blocks could have been needed instead).

Now then:

"Specifically, when I disable certain fields in a layer before adding it to the map, how does this impact the underlying database query?"

It changes the column list in the SELECT clause, and therefore the content of the rows which are returned -- no more.

"Does SQL Server still read all data pages and fetch all fields from disk, or is the query optimized to retrieve only the enabled fields that are actively being displayed?"

Usually the former, though it's really "all fields from all rows from all blocks (or the subset of all blocks indicated by the index), plus all of the chained blocks". In order to optimize for selected columns, you'd need to store every combination of the row contents for every possible query, using orders of magnitude more disk space and making updates an I/O nightmare.

"My expectation is that the query should be optimized since fewer fields are being queried. However, I am uncertain about this because of how SQL Server stores data. For instance, if the data uses row-store storage, the database pages would still contain all fields, and the I/O operations on the disk would read all fields regardless of what is needed. This behavior would contradict the expected optimization."

Which indicates the expectation is incorrect. The key piece you're missing, though, is the optimizer, which might be able to take the data from the skinnier pages (and therefore more densely packed blocks) associated with indexes. This is why a covering index is created.

"[Could anyone] shed light on how ArcGIS Pro handles such scenarios and whether this affects database performance."

ArcGIS Pro is a database client. It fashions SQL queries. It doesn't play any role in how the database responds to them.

The other pieces missing here are fragmentation and partitioning. If you load imagery footprint data into a database, adding to it each day for years, then query for "all images between time x and time y" with an index on the image timestamp, the RDBMS is going to find the starting block pretty quickly, and will probably be able to plow through the subsequent blocks to get the remaining rows. But if you instead query on imagery platform, without an index, the database needs to do a full table scan, inspecting each block for rows and each row for data. Even if you have an index which identifies the blocks where all the Landsat7 images are, the database still needs to read a lot of blocks, and depending on the size of the rows and how they are allocated across the blocks, might indeed need to read ALL the blocks. This, in essence, is fragmentation -- the need to read "unnecessary" data because it is stored in the same block as the data you want.

Now, you could build a clustered index on the platform, which instructs the database to build an index and order the rows in the table so that consecutive rows all share the same platform (until it changes), but then the footprint data would be fragmented with respect to date. The other thing you could do is make several virtual tables, partitions, and store all the consecutive date data for each platform separately, so a "platform and time" query is likely to process the least amount of rows. Note that you could also partition by date and cluster on platform, if that made more sense in your overall query set. And because the partitions are very nearly tables themselves, the database can leverage multiple CPUs to search the partitions in parallel, assembling the final result set after reducing the number of blocks as best it can.

Finally, there are spatial queries, which depend on the storage of an indication of where the features are, either by a grid algorithm (Esri ST_Geometry) or a corner+envelope size mechanism (R-trees). Data can also be spatially fragmented, as in the years-long sequential storage of imagery footprints above, and can be defragmented by creating a clustered index on something spatial, like UTM zone, country code, or province code (or a combination, like UTM+Admin1, then centroid Y value). Again, though, this optimization will reduce the performance of temporal queries. It really is a zero-sum game -- optimizations have costs (UPDATEs on indexed fields need to update the index values in addition to the rows) and are subject to the law of diminishing returns (too many partitions will overfill the catalog and eventually slow queries to a crawl).

Since the majority of the work is done in packing the result stream back to the client as rows, if wide, unnecessary field values can be left out, then there will be less transmission delay across the network from the database. - V
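The covering-index point can be demonstrated with SQLite from Python. SQLite's plan output differs from SQL Server's, but the principle -- answering the query entirely from the skinnier index pages, never touching the wide base table -- is the same (the table and index names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE footprints (id INTEGER PRIMARY KEY, platform TEXT,
                         acquired TEXT, shape BLOB);
CREATE INDEX ix_platform_acquired ON footprints (platform, acquired);
""")

-- placeholder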
Posted 01-03-2025 11:07 AM

POST
It's been decades since my last Oracle upgrade, but sometimes they run long, and it's hard to predict, with side-by-side systems that ought to be identical, which one will be the slow child. - V
Posted 12-18-2024 07:12 AM