POST
The file geodatabase format hasn't really changed since 10.0 (vice the 9.x architecture). Types have been added whose use would make older clients unable to recognize those tables, but the underlying format hasn't changed, and there isn't an upgrade procedure for file geodatabases. - V
Posted 01-28-2025 01:28 PM

POST
Table ownership and schema are different things in PostgreSQL. The ArcObjects geodatabase registration functionality requires that the schema name match the owning login. You can use the tables without registering them, or you can place them in a schema named to meet that requirement. - V
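As a minimal sketch of what that naming looks like, assuming a PostgreSQL geodatabase reached via psycopg2; the login, schema, and table names (gisowner, parcels) and connection details are hypothetical, not taken from the thread.

```python
# Sketch: make the schema name match the owning login so the table can be
# registered with the geodatabase. All names and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(host="dbserver", dbname="gis", user="postgres", password="***")
conn.autocommit = True
cur = conn.cursor()

# A login role named "gisowner" and a schema of the same name, owned by it.
cur.execute("CREATE ROLE gisowner LOGIN PASSWORD 'change_me';")
cur.execute("CREATE SCHEMA gisowner AUTHORIZATION gisowner;")

# Tables moved into that schema are qualified consistently (gisowner.parcels),
# which is what registration expects.
cur.execute("ALTER TABLE public.parcels SET SCHEMA gisowner;")
cur.execute("ALTER TABLE gisowner.parcels OWNER TO gisowner;")

conn.close()
```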
Posted 01-22-2025 08:53 AM

POST
Use a backup solution for a backup purpose, and a failover solution for a failover purpose. There is no "best" replication solution here. - V
Posted 01-21-2025 05:00 PM

POST
A replica isn't a backup; it's a point-in-time snapshot. A true standby database is the right pattern for failover/resiliency. - V
Posted 01-21-2025 11:54 AM

POST
You should almost never need to Define a projection. The only case where it's appropriate is when (1) the existing coordinate system metadata is wrong, and (2) you know what the correct coordinate system is. It's not clear from your attachments (which really ought to be embedded inline) which datasets are which. It does look like two of them may be in geographic (angular) units (the extents look like degrees). Altering the false origins is useless in this context. You need to identify which sources were corrupt before you got them, and which ones you corrupted with an ill-advised Define Projection. Once you've got that straight, you need to Project the sources to the desired coordinate reference, not clobber the existing correct metadata. - V
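For illustration only, here is how the two operations differ in arcpy; the feature class paths and EPSG codes below are placeholders, not from the original question.

```python
# Sketch: Project vs. Define Projection in arcpy. Paths and EPSG codes are
# hypothetical examples.
import arcpy

src = r"C:\data\work.gdb\roads"                     # placeholder feature class
print(arcpy.Describe(src).spatialReference.name)    # check what the metadata claims

# Correct metadata, but you need a different coordinate system?  Project
# (reprojects the coordinates AND writes correct metadata on the output):
arcpy.management.Project(src, r"C:\data\work.gdb\roads_utm",
                         arcpy.SpatialReference(26915))   # e.g. NAD83 / UTM 15N

# Wrong metadata on otherwise-correct coordinates (the only legitimate case)?
# Define Projection rewrites ONLY the metadata, never the coordinates:
# arcpy.management.DefineProjection(src, arcpy.SpatialReference(4269))  # e.g. NAD83
```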
Posted 01-15-2025 07:33 AM

POST
I think the deprecation is being enforced by 3.3/10.3. - V
Posted 01-08-2025 11:58 AM

POST
User schema geodatabases have been deprecated for more than four years. Are you sure this is even possible? - V
Posted 01-08-2025 10:13 AM

POST
The definitive Esri document on coordinate domains is the Understanding Coordinate Management in the Geodatabase whitepaper. Basically, the geometry management library stores all coordinates in a compressed integer form which allows for fast, efficient topological comparison. The domain properties describe how that compression manifests: the minX/minY values form the lower-left (SW) corner point past which no mapping can occur (not even search circle construction), and the maxX/maxY values are also hard limits (NE), but are likely to be impossibly far from actual data unless the XY resolution is ridiculously fine (e.g. Ångströms). Within those corners, all coordinates are snapped to the nearest resolution value (and the compression becomes less efficient as the resolution gets finer).

Coordinate domains are immutable -- they cannot be altered after the data table is populated. They must be established at feature class creation time.

I override the defaults all the time, but I found assembly language to be an intuitive way to write code. The easiest way to leverage custom domains in the Pro UI is through a feature dataset: create your FD with the desired tolerances, populate new feature classes in the FD, then immediately remove the FC from the FD (to avoid locking issues associated with feature datasets). - V
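A minimal arcpy sketch of that feature-dataset workflow, assuming a file geodatabase; the path, names, EPSG code, and resolution/tolerance values are placeholders, and the geoprocessing environments shown are one way to set the domain properties at creation time.

```python
# Sketch of the "create in a feature dataset with custom domain properties" workflow.
# Paths, names, EPSG code, and values are hypothetical.
import arcpy

gdb = r"C:\data\work.gdb"
sr = arcpy.SpatialReference(26915)          # e.g. NAD83 / UTM 15N

# Resolution/tolerance must be decided before the feature class exists;
# they cannot be changed once the table is populated.
arcpy.env.XYResolution = "0.0005 Meters"
arcpy.env.XYTolerance = "0.002 Meters"

# The feature dataset carries the coordinate reference (and these settings)...
arcpy.management.CreateFeatureDataset(gdb, "Staging", sr)

# ...so feature classes created inside it pick them up.
arcpy.management.CreateFeatureclass(gdb + "\\Staging", "parcels",
                                    geometry_type="POLYGON",
                                    spatial_reference=sr)

# Then move the new feature class out of the feature dataset (a drag in the
# Catalog pane, or a Copy to the geodatabase root followed by deleting the original).
```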
Posted 01-06-2025 08:17 AM

POST
There's quite a bit wrong with the assumptions behind this, so let's make sure the ground is clear:

- ArcGIS Pro is a database client. Changing display properties on a layer has no impact on the data within the database server, only on the queries which are generated (and the results returned).
- Queries against a database server have no impact on the data storage of the rows, which are usually bundled up into blocks in the database I/O subsystem.
- Databases read blocks, not rows, so if the DBMS thinks it needs any row in a block, it has to read the entire block.
- Blocks are cached, so supplemental reads of any other rows in the block will likely not cost more (see fragmentation, later).
- Rows which span blocks generally require chaining to the supplemental block(s).
- Databases often "read ahead" once they detect a sequence of blocks is being queried, flushing out least recently used blocks, so that subsequent reads are "faster" (which isn't always the case, especially if the read-ahead wasn't needed and the flushed blocks could have been needed instead).

Now then:

"Specifically, when I disable certain fields in a layer before adding it to the map, how does this impact the underlying database query?"

It changes the column list in the SELECT clause, and therefore the content of the rows which are returned, no more.

"Does SQL Server still read all data pages and fetch all fields from disk, or is the query optimized to retrieve only the enabled fields that are actively being displayed?"

Usually the former, though it's really "all fields from all rows from all blocks (or the subset of blocks indicated by the index), plus all of the chained blocks". In order to optimize storage for selected columns, you'd need to store every combination of the row contents for every possible query, using orders of magnitude more disk space and making updates an I/O nightmare.

"My expectation is that the query should be optimized since fewer fields are being queried. However, I am uncertain about this because of how SQL Server stores data. For instance, if the data uses row-store storage, the database pages would still contain all fields, and the I/O operations on the disk would read all fields regardless of what is needed. This behavior would contradict the expected optimization."

Which indicates the expectation is incorrect. The key piece you're missing, though, is the optimizer, which might be able to take the data from the skinnier pages (and therefore more densely packed blocks) associated with indexes. This is why a covering index is created.

"[Could anyone] shed light on how ArcGIS Pro handles such scenarios and whether this affects database performance."

ArcGIS Pro is a database client. It fashions SQL queries. It doesn't play any role in how the database responds to those queries.

The other pieces missing here are fragmentation and partitioning. If you load imagery footprint data into a database, add to it each day for years, then do a query for "all images between time x and time y", and have an index on the image timestamp, the RDBMS is going to find the starting block pretty quickly, and probably will be able to plow through the subsequent blocks to get the remaining rows. But if you instead query on imagery platform, without an index, then the database needs to do a full table scan to inspect each block for rows, and each row for data.

Even if you have an index which identifies the blocks where all the Landsat7 images are, the database still needs to read a lot of blocks, and depending on the size of the rows and how they are allocated across the blocks, might indeed need to read ALL the blocks. This, in essence, is fragmentation -- the need to read "unnecessary" data because it is stored in the same block as the data you want.

Now, you could build a clustered index on the platform, which instructs the database to build an index and order the rows in the table so that consecutive rows all share the same platform (until it changes), but then the footprint data would be fragmented with respect to date. The other thing you could do is make several virtual tables, partitions, and store all the consecutive-date data for each platform separately, so a "platform and time" query is likely to process the fewest rows. Note that you could also partition by date and cluster by platform, if that made more sense for your overall query set. And because the partitions are very nearly tables themselves, the database could leverage multiple CPUs to search the partitions in parallel, assembling the final result set after reducing the number of blocks read as much as it can.

Finally, there are spatial queries, which depend on the storage of an indication of where the features are, either by a grid algorithm (Esri ST_Geometry) or a corner + envelope size mechanism (R-trees). Data can also be spatially fragmented, as in the years-long sequential storage of imagery footprints example above, and can be defragmented by creating a clustered index on something spatial, like UTM zone, or country code, or province code (or a combination, like UTM+Admin1, then centroid Y value). Though, again, this optimization will reduce the performance of temporal queries (it really is a zero-sum game -- optimizations have costs, like UPDATEs on indexed fields needing to update the index values in addition to the rows, and are subject to the law of diminishing returns -- too many partitions will overfill the catalog and eventually slow queries to a crawl).

Since the majority of the work is done in packing the result stream back to the client as rows, if wide, unnecessary field values can be left out, then there will be less transmission delay across the network from the database. - V
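To make the covering-index point concrete, here is a hedged sketch against a hypothetical imagery-footprint table, using pyodbc; the server, database, table, and column names are invented for illustration.

```python
# Sketch: a covering index lets SQL Server answer a narrow query from the
# index's skinnier pages instead of the full-width table blocks.
# Server, database, table, and column names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbhost;DATABASE=imagery;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Key on the filtering column, INCLUDE the few columns the query returns,
# so the base table never has to be touched for this query shape.
cur.execute("""
    CREATE NONCLUSTERED INDEX ix_footprints_acq
    ON dbo.image_footprints (acquired_date)
    INCLUDE (platform, cloud_cover);
""")
conn.commit()

# This query can now be satisfied entirely from the index pages:
cur.execute("""
    SELECT acquired_date, platform, cloud_cover
    FROM dbo.image_footprints
    WHERE acquired_date BETWEEN ? AND ?;
""", ("2024-01-01", "2024-01-31"))
rows = cur.fetchall()
```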
Posted 01-03-2025 11:07 AM

POST
It's been decades since my last Oracle upgrade, but sometimes they run long, and it's hard to predict, with side-by-side systems that ought to be identical, which one will be the slow child. - V
Posted 12-18-2024 07:12 AM

POST
The undefined environment makes identifying a cause challenging. Is the database on-prem or in the cloud? Is the workstation on-prem or in the cloud? If in the cloud, are they in the same neighborhood (subnet)? A full architecture diagram of both execution environments could be useful. - V
Posted 12-17-2024 07:18 PM

POST
If versioning and/or replication is involved, you really don't want to use anything but a database restore. The management service folks should be capable of snapshotting old-Dev and replacing it with a production snapshot for development. If replication is involved, you then need to immediately drop the replica in new-Dev, lest something very ugly happen. If old-Dev had a different keycode, you'll need to apply that to new-Dev before you forget (save the contents of old-Dev's sde_server_config table before it's archived).
- V
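A minimal sketch of stashing that table before old-Dev is archived, assuming a SQL Server geodatabase with an sde-schema layout and pyodbc; the server, database, and schema names are placeholders and will differ in your environment.

```python
# Sketch: dump sde_server_config (which holds the keycode-related properties)
# to a CSV before the old Dev database goes away. Connection details and the
# schema qualifier are assumptions; adjust for your DBMS and geodatabase layout.
import csv
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=old-dev;DATABASE=gisdb;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("SELECT * FROM sde.sde_server_config")

with open("old_dev_sde_server_config.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])   # column names
    writer.writerows(cur.fetchall())
```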
Posted 12-06-2024 10:10 AM

POST
I never have. I do run CLUSTER, VACUUM, and REINDEX at quiet times, but "quiet" can be hard to find in 24x7 systems. REINDEX will slow down queries, but not cause them to fail. VACUUM might grab a lock; the docs will say for sure.
- V
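If you script that quiet-window maintenance, a hedged psycopg2 sketch looks like the following; the table and connection names are placeholders, and note that VACUUM has to run outside a transaction block.

```python
# Sketch: off-hours PostgreSQL maintenance. Table and connection details are
# placeholders; run this from a scheduler during the quiet window.
import psycopg2

conn = psycopg2.connect(host="dbserver", dbname="gis", user="sde", password="***")
conn.autocommit = True          # VACUUM cannot run inside a transaction block
cur = conn.cursor()

for table in ("gisdata.parcels", "gisdata.roads"):
    cur.execute(f"VACUUM (ANALYZE) {table};")   # reclaims space, refreshes stats
    cur.execute(f"REINDEX TABLE {table};")      # rebuilds indexes; concurrent queries wait

# CLUSTER physically reorders the table by an index and takes a heavier lock,
# so it belongs in the quietest window of all.
# cur.execute("CLUSTER gisdata.parcels USING parcels_pk;")
conn.close()
```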
Posted 12-02-2024 06:42 AM

POST
Never, never, NEVER, NEVER publish as the data owner. This is a HUGE security risk. Just don't do it.
This is a Security Modeling 101 issue. The principle here is "minimum necessary privilege". The owner has way too much access to the table. Instead, create one or more browse users and a role for each kind of access, grant table access to the roles, and grant the roles to users. Publish data with the user holding the least possible access to make effective use of the data. If some apps need UPDATE but others don't, publish with different publishing users (e.g., "app1_pub" & "app2_pub"), granting only the minimum necessary to each.
- V
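A hedged sketch of that role pattern, in PostgreSQL flavor via psycopg2; the role, schema, and table names are invented for illustration, and the same idea applies in any enterprise DBMS.

```python
# Sketch: least-privilege publishing users. A read-only role gets SELECT on the
# published tables; per-app publishing logins get only the role(s) they need.
# All names and credentials are hypothetical.
import psycopg2

ddl = """
CREATE ROLE gis_viewer NOLOGIN;
GRANT USAGE ON SCHEMA gisdata TO gis_viewer;
GRANT SELECT ON gisdata.parcels, gisdata.roads TO gis_viewer;

CREATE ROLE gis_editor NOLOGIN;
GRANT gis_viewer TO gis_editor;
GRANT INSERT, UPDATE, DELETE ON gisdata.parcels TO gis_editor;

-- One publishing login per application, holding only what that app needs.
CREATE ROLE app1_pub LOGIN PASSWORD 'change_me';
GRANT gis_viewer TO app1_pub;

CREATE ROLE app2_pub LOGIN PASSWORD 'change_me';
GRANT gis_editor TO app2_pub;
"""

conn = psycopg2.connect(host="dbserver", dbname="gis", user="postgres", password="***")
conn.autocommit = True
conn.cursor().execute(ddl)
conn.close()
```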
Posted 11-30-2024 06:10 PM

POST
@Yogesh_Chavan wrote:
Question: In our case as we have only one user being the data owner, should we have a new dedicated user for publishing? or is it okay to keep using the user X for publishing?
If you only have one login, you are not anywhere close to best practice. Publishing with the table owner means that any zero-day security bug would allow "read-only" users to delete the contents of your database (or just systematically corrupt it), with no way to determine who did it.
It is NOT okay to keep using user X for publishing. You need to create a browsing login and user, and grant it only SELECT access to the tables involved in publishing (and nothing else), then publish connected as that user.
Using enterprise-class database tools means having an enterprise-class security model. There are entire books on database security, but you can start with a chapter in any database administration guide.
- V
Posted 11-29-2024 08:24 AM