IDEA

I've been using PostgreSQL/PostGIS with ArcGIS clients since at least PG 8.0, probably back to PG 7.something_small. Desktop clients have supported direct write to even non-enabled PostgreSQL instances since ArcGIS 10.4.1. Every ArcGIS Pro release has supported at least three PostgreSQL releases, sometimes as many as five. Instead of filing an enhancement request for what the software has already done for two decades, you'd probably be better served by asking a specific question about your particular use case. Critical information includes:

- The exact PostgreSQL release
- The database name and table owner name
- The exact command you're using
- The exact error or other problem encountered

It's not like there aren't some quirks to how PostgreSQL is supported, but it's not all that difficult to get full functionality, both with and without geodatabase enablement. - V

Posted yesterday

POST

This isn't so much a "Data Management" question as it is a raster manipulation one. You might be better off looking in the Spatial Analyst, ArcGIS Pro, or Geoprocessing communities. - V

Posted 2 weeks ago

POST

Rather than rewriting the whole dataset, and having to take the service offline to do so, you could amend your process to only UPDATE the changed records, INSERT the new records, and DELETE the no-longer-active ones. In one system where I did this, I'm able to complete UPDATE and INSERT operations in ten minutes, processing 100k change messages (many of which are redundant), while the parent system takes three hours to make its changes using the "reload everything" procedure.

For me, the key is to crunch the rows into a SHA-1 hash, with a key (or compound key) associated with each row. Then, after the changes in the parent occur, I pull the hash table into a Python dictionary and scan the parent table, hashing each row. If the key exists in the dictionary and the hash is equal, no change is necessary (and I delete the matching dictionary item). If the hash is different, I set the query row (and hash) aside into an update queue (and also delete the dictionary key). If the key isn't present, the row goes into an insert queue (with hash). Once I'm done with input rows, any keys still in the dictionary become my deletion queue.

Then the deletes are made from the table by key (and likewise from the hash table), the updates are made to the child table (and the child's hash table in parallel), and the inserts are performed (to both table and hash). It depends on the data change frequency and volume whether I vacuum and rebuild indexes daily, weekly, or monthly.

A service that I've deployed processes 100k-500k rows every 12 hours, performing 4k-10k updates and 3k-50k inserts in 10-25 minutes (and has been doing so for half a decade now). I'm about to deploy another system like it, which will process 26-30 million rows each month, with 1k-5k combined U/I/Ds, in an hour or so (loading the full tables from scratch takes 6-8 hours, which is a lot of downtime for a 24x7 system). - V
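
A minimal sketch of that compare pass, assuming a DB-API cursor (psycopg2 or similar) and hypothetical names: src_table holds the current parent rows, hash_table holds the key/SHA-1 pairs recorded on the previous run:

```python
import hashlib

def diff_rows(cur, src_table, key_col, cols, hash_table):
    """Compare current rows against stored hashes; build U/I/D queues."""
    # Pull the stored key -> hash pairs into a dictionary
    cur.execute(f"SELECT {key_col}, row_sha1 FROM {hash_table}")
    stored = dict(cur.fetchall())

    updates, inserts = [], []
    cur.execute(f"SELECT {key_col}, {', '.join(cols)} FROM {src_table}")
    for row in cur.fetchall():
        key, values = row[0], row[1:]
        # A stable text encoding matters; '|' as a separator is a simplification
        sha = hashlib.sha1('|'.join(map(str, values)).encode()).hexdigest()
        if key in stored:
            if stored.pop(key) != sha:          # changed row -> update queue
                updates.append((key, values, sha))
        else:
            inserts.append((key, values, sha))  # new row -> insert queue

    deletes = list(stored)   # keys never seen this pass -> deletion queue
    return updates, inserts, deletes
```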

Posted 3 weeks ago

POST

Feature classes haven't been "in an SDE" since SDE 3.0 was released. The current term of art is "enterprise geodatabase in {name of RDBMS}". Please provide the RDBMS name (and version), and the CREATE TABLE statement for the feature class in question. Generally, Shape_AREA is a computed column that doesn't even exist in the database, and therefore can't be defined NOT NULL. - V

Posted 3 weeks ago

POST

There are no effective limits on a file geodatabase (roughly two billion features with 32-bit ObjectIDs, or four quintillion with 64-bit ObjectIDs). It would help if you could document your exact steps, to the point of describing the feature counts in the eight tables and specifying the exact geoprocessing command in the Analysis log. - V

Posted 4 weeks ago

POST

@yockee wrote: "The survey user is portal user and it does not have any connection with database user. Am I wrong?"

The Portal user has nothing to do with the connection that was used to make the map service. The map service should NOT be published using table owner authentication, which means you need to validate your trigger function in pgAdmin while logged in as the publishing user. - V
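
If you'd rather script that check than click through pgAdmin, a minimal sketch with psycopg2 (the connection details and table/column names here are hypothetical):

```python
import getpass
import psycopg2

# Connect as the publishing user, NOT the table owner (all names hypothetical)
conn = psycopg2.connect(host="gisdb", dbname="gis",
                        user="svc_publisher", password=getpass.getpass())
with conn, conn.cursor() as cur:
    # Fire the trigger the same way the feature service would
    cur.execute("INSERT INTO survey.responses (globalid, status) "
                "VALUES (%s, %s)", ("TEST-0001", "new"))
# A permission or trigger error surfaces here instead of failing silently
```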

Posted 05-07-2025 07:47 AM

POST

Frankly, this requirement makes no sense. Using HTTP is inherently insecure, many browsers now refuse or warn on plain HTTP, and forcing processes to use HTTP for "security reasons" is bizarre. Since Portal was created after HTTPS was standardized, and Portal messages include authentication, running HTTP would compromise your Portal instance; I'd expect there is no legacy wiring to revert Portal into feeble security mode. With the volume of GIS data flowing, it's probably for the best for the scanner people that they can't process it (especially because the HTTP data from Server is being forwarded, so they'd get it twice). - V

Posted 05-05-2025 07:51 AM

POST

Index fragmentation is an issue with large tables, but not so much with small/tiny ones. If your Adds table is tiny, then you're probably wasting time by indexing it, and any subsequent INSERT or UPDATE is going to massively fragment the index again, so as Ryan has written, the timing of the report is important. With Traditional Versioning, if you rebuild indexes and then compress and post, the tables (and therefore their indexes) may see significant change, which would fragment the indexes again and require another REINDEX. Having a clean rowid column index before the compress is probably worth the effort, but you should probably wait until after geodatabase maintenance to rebuild all the other indexes. - V

Posted 05-01-2025 11:36 AM

POST

Well, you certainly shouldn't be connecting a web app to a table as the table owner (huge security risk), so you need to make sure the trigger works when logged in as the survey user. If the table is versioned, you won't see the UPDATEs until the version is posted to the business table (and detecting UPDATE events becomes problematic, since the Adds and Deletes tables both change on an update). You'd probably be better off enabling Editor Tracking on the table, then using a SQL batch job to LEFT OUTER JOIN to the survey table, driving an INSERT into the id tracking table. - V
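
A sketch of what that batch job's statement might look like (the schema, table, and column names are hypothetical; last_edited_date is the Editor Tracking column):

```python
import psycopg2

# Track ids of survey rows not yet present in the tracking table
# (schema, table, and column names are hypothetical)
SQL = """
    INSERT INTO app.id_tracking (survey_id, edited_at)
    SELECT s.objectid, s.last_edited_date
    FROM survey.responses s
    LEFT OUTER JOIN app.id_tracking t ON t.survey_id = s.objectid
    WHERE t.survey_id IS NULL
"""

with psycopg2.connect("dbname=gis user=batch_user") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
```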

Posted 04-28-2025 05:20 AM

POST

You really shouldn't have any need to create logins that map to the sde schema, because you should never, ever store user data in the sde schema. Enterprise geodatabases actually have a requirement that the schema name be the same as the login/user name for any tables to be edited with ArcGIS tools (the only exception is when the geodatabase is owned by DBO, at which point it really has no security model at all). There is a Create Database User (Data Management) tool to help you create working enterprise geodatabase logins. This has the feel of an XY Problem: what were you attempting that made creating multiple logins mapped to the sde schema seem necessary? Maybe we can help with that problem instead. - V
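
That tool can also be scripted; a minimal sketch, assuming a connection file made as the geodatabase administrator (the path, user name, and password here are hypothetical):

```python
import arcpy

# Run against a geodatabase-administrator connection; the connection file
# path, user name, and password are hypothetical placeholders
arcpy.management.CreateDatabaseUser(
    input_database=r"C:\connections\gisdb_as_sde.sde",
    user_name="editor1",
    user_password="change_me",
)
```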

Posted 04-28-2025 05:10 AM

POST

That sounds like a firewall, VPN, or other port manager issue. It's possible there's an Oracle switch for this, but there's no place in the geodatabase API to set a timeout, and I've had Oracle connections up over weekends. - V

Posted 04-23-2025 08:05 AM

POST

That makes more sense. You should be able to use SQL to update the lat/lon geometry component, if you just isolate the lat/lon coords and project them. - V
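
In PostGIS, that projection step might look something like this (the table, column names, and target SRID 26917 are hypothetical stand-ins):

```python
import psycopg2

# Rebuild the geometry from stored lat/lon columns and project it;
# table, column names, and the target SRID (26917) are hypothetical
SQL = """
    UPDATE app.sites
    SET shape = ST_Transform(ST_SetSRID(ST_MakePoint(lon, lat), 4326), 26917)
    WHERE lon IS NOT NULL AND lat IS NOT NULL
"""

with psycopg2.connect("dbname=gis") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
```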

Posted 04-17-2025 07:46 AM

POST

I've done this in PG13, PG14, and PG15 with no difficulty, except for the one time when I didn't grant access on the target table's sequence to the role the inserts were running as, and the INSERTs silently failed. I only found the problem by connecting in pgAdmin as the insert script user, generating an appropriate INSERT statement, and getting a useful error message. - V
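
The grant that was missing in that case would look something like this (the schema, sequence, and role names are hypothetical):

```python
import psycopg2

# Grant the inserting role access to the table's serial/identity sequence;
# schema, sequence, and role names are hypothetical
with psycopg2.connect("dbname=gis user=table_owner") as conn:
    with conn.cursor() as cur:
        cur.execute("GRANT USAGE, SELECT ON SEQUENCE app.parcels_objectid_seq "
                    "TO loader_role")
```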

Posted 04-16-2025 01:39 PM

POST

Can you query the rows in a DA SearchCursor and output the Geometry.WKT for each? Does that display the correct projected values? I have no idea how the layer could project on the fly to correct units if it didn't have correct SRID metadata. - V
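
That check might look like this (the feature class path is a hypothetical placeholder):

```python
import arcpy

# Print each feature's WKT; coordinates should be in the projected units
fc = r"C:\connections\gisdb.sde\app.sites"  # hypothetical path
with arcpy.da.SearchCursor(fc, ["OID@", "SHAPE@WKT"]) as cursor:
    for oid, wkt in cursor:
        print(oid, wkt)
```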

Posted 04-16-2025 12:09 PM

POST

If CAD objects are involved, Pro will render the CAD objects, not the geometry out of the database. It would be quite odd to get there, but that's the only reason that makes sense. - V

Posted 04-16-2025 09:54 AM