POST
Doug -- Views are, well, views. The geodatabase doesn't have the XML metadata for the columns of the original table(s), so it can't know what the aliases were. So it's not a matter of "dropping" them so much as never having had them. And if the view can't participate in geodatabase behaviors, it doesn't get the XML that permits aliases to exist. Note that nothing is ever "in the SDE". It's always been in the database, accessed through what was last called ArcSDE (but nowadays, it's all just ArcObjects). SDE is long gone. "There is no Dana, only Zuul." - V
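You can see this for yourself with a quick sketch along these lines (the connection file path and table/view names below are placeholders, not your actual data):

```python
# Compare field aliases on a registered table vs. a database view in
# the same enterprise geodatabase; names here are assumptions only.
import os
import arcpy

sde = r"C:\connections\gis.sde"
for name in ("owner.parcels", "owner.parcels_vw"):
    print(name)
    for field in arcpy.ListFields(os.path.join(sde, name)):
        # On the view, aliasName just echoes name -- there is no
        # geodatabase XML document behind it to hold anything else.
        print(f"  {field.name}: alias = {field.aliasName!r}")
```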
08-13-2025 07:10 AM

POST
Please edit the question to place the code in a code block (click on "..." then "</>" in the second row of icons). Reading Python code without indentation is nearly impossible. It's also nearly impossible to debug incomplete code (especially when there are TWO SearchCursor flavors [and arcpy.SearchCursor is deprecated, and should not be used in code written since 2013]). But you should know that nesting cursors is an antipattern, and the likely cause of your error. - V
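For what it's worth, the usual fix is one pass per table with an in-memory join instead of a cursor inside a cursor. A minimal sketch, with made-up table and field names:

```python
# Replace the inner cursor with a dictionary built in a single pass.
# Table and field names are placeholders for illustration only.
import arcpy

# One pass over the related table...
related = {
    key: value
    for key, value in arcpy.da.SearchCursor("related_table", ["join_key", "value"])
}

# ...then one pass over the main table, with O(1) dictionary lookups
# instead of reopening a second cursor on every iteration.
with arcpy.da.SearchCursor("main_table", ["join_key", "name"]) as cursor:
    for key, name in cursor:
        print(name, related.get(key))
```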
08-04-2025 01:11 PM

POST
The PostgreSQL team made a significant change at PG 12 that eliminated a hidden column that Esri was using as the primary key on the objectid allocation algorithm. I'm not sure you can make the leap from PG10 to PG15 just by doing a backup/restore, since all your "i-tables" are now invalid. If the tables aren't versioned, you might be better off exporting through file geodatabase, or just copying from the PG10 instance to the PG15 one (you can run both on the same box using different ports). Tech Support likely has better tools to work out the problem here. - Vince
07-30-2025 06:27 AM

POST
You have corrupted geodatabase metadata. TL;DR solution:

1. Rename the table to a temporary name.
2. Create a dummy table (just objectid is enough) with the "existing" name.
3. Use ArcObjects (Pro UI or ArcPy) to delete the dummy table.
4. Rename the temp name back to the original.
5. Register the no-longer-"existing" table.

- V
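If it helps, here is one way those steps could look in ArcPy against a PostgreSQL-backed geodatabase. This is strictly a sketch: the connection file, schema, and "owner.broken" table name are all assumptions, and you should test on a copy first.

```python
# Hedged sketch of the five recovery steps above; every name here
# is a placeholder for your actual data.
import arcpy

sde = r"C:\connections\gis.sde"
sql = arcpy.ArcSDESQLExecute(sde)

# 1. Move the real table out of the way at the database level.
sql.execute("ALTER TABLE owner.broken RENAME TO broken_tmp")
# 2. Create a dummy table under the "existing" name.
sql.execute("CREATE TABLE owner.broken (objectid integer)")
# 3. Delete the dummy with arcpy, which removes the orphaned
#    geodatabase metadata along with it.
arcpy.management.Delete(sde + r"\owner.broken")
# 4. Put the original name back.
sql.execute("ALTER TABLE owner.broken_tmp RENAME TO broken")
# 5. Re-register the repaired table with the geodatabase.
arcpy.management.RegisterWithGeodatabase(sde + r"\owner.broken")
```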
07-17-2025 08:16 PM

POST
Management Studio is a client to the SQL Server database, just like Pro, so it doesn't really have any role in enterprise geodatabase compatibility. You don't state the database server version in use, so it's hard to determine compatibility, but that message seems to come from the database drivers, not from Esri components. Note that it's recommended to upgrade your clients to the intended Enterprise version before upgrading the ArcGIS Server/Portal components, so you really ought to be running ArcGIS Pro 3.4 with Enterprise 11.4. The software compatibility matrix is available in this FAQ topic. - V
07-13-2025 07:47 AM

POST
Well, if you had used the sde schema, not sdo, and didn't register any user tables, then dropping the sde schema (including contents) could have been enough. And if you can identify all the dbo tables that were added (most have 'sde_' or 'gdb_' prefixes), you might be able to pull this off (again, provided you didn't register any tables). Once you've registered a table with the geodatabase (with either sde or sdo model), the cleanup becomes significantly more complicated, and falling back to the last backup (Ryan's suggestion) makes good sense. - V
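A quick, hedged way to inventory the candidates before dropping anything (the connection file path is a placeholder, and execute() behavior can vary with row count, so verify the output):

```python
# List dbo tables that look like geodatabase system tables in a
# SQL Server database; the connection file path is an assumption.
import arcpy

sql = arcpy.ArcSDESQLExecute(r"C:\connections\gis.sde")
rows = sql.execute("""
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema = 'dbo'
      AND (table_name LIKE 'sde[_]%' OR table_name LIKE 'gdb[_]%')
""")
# execute() returns a list of rows for multi-row results; [_] escapes
# the underscore wildcard in SQL Server's LIKE.
for schema, name in rows or []:
    print(f"{schema}.{name}")
```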
06-20-2025 07:00 PM

POST
Branch versioning isn't just another flavor of database versioning. It has a specific set of use cases, all of which go through a services architecture endpoint. So, it's not a property of the connection so much as a property of the service. I wouldn't expect ArcPy to be really useful here; this seems more in the domain of the ArcGIS API for Python (which is more Portal-centric). This blog from 2019 might help clarify that what you're describing is the intended design of Branch Versioning. - V
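To illustrate the "property of the service" point, a sketch using the ArcGIS API for Python (the portal URL, credentials, and service URL are all placeholders, and this assumes the service has the Version Management capability enabled):

```python
# Branch versions hang off the feature service endpoint, not the
# database connection. Every name and URL here is a placeholder.
from arcgis.gis import GIS
from arcgis.features import FeatureLayerCollection

gis = GIS("https://portal.example.com/portal", "username", "password")
flc = FeatureLayerCollection(
    "https://portal.example.com/server/rest/services/Parcels/FeatureServer",
    gis,
)
# The service's version manager lists the branch versions it exposes.
for version in flc.versions.all:
    print(version.properties.versionName)
```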
06-15-2025 11:44 AM

POST
First of all, it's important to point out that "SDE" no longer exists. The functionality of the product formerly known as "SDE", then "ArcSDE", has dissolved like ink into water within ArcGIS Enterprise functionality. The last dregs of "SDE" naming are the "sde" login that owns enterprise geodatabase metadata (in most environments), the ".sde"-suffixed file known as an "Enterprise Geodatabase Connection File" (or just "connection file"), and the arcpy.ArcSDESQLExecute() function (which executes arbitrary SQL code as the connection user).

The connection file UI widget has a specific functionality in Desktop interfaces (based on permissions), and there isn't any way to change that. If you know how to create login and group roles and GRANT access to tables, you know all that is necessary to get an approximation of what you are requesting here out of connection files. Doing it this way would be cumbersome, moderately hideous, and would probably result in awful performance (burdening the database server with far more connections than is necessary).

The only way you can filter tables like this is if you make your own UI component, fed by a JSON file or other config file. The particulars of implementing such a UI component would be better researched in an ArcObjects, Pro API, Python, or other programming interface forum, since this isn't really a data management issue so much as a custom UI one. - V
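As a rough illustration of the config-file approach (the workspace path, config file, and table names are all made up):

```python
# Filter the geodatabase table list against a JSON allow-list before
# handing it to a custom UI component. Paths and names are placeholders.
import json
import arcpy

arcpy.env.workspace = r"C:\connections\gis.sde"
with open("visible_tables.json") as fh:
    allowed = set(json.load(fh))   # e.g. ["owner.roads", "owner.parcels"]

visible = sorted(t for t in arcpy.ListTables() if t in allowed)
print(visible)
```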
06-13-2025 05:56 AM

IDEA
I've been using PostgreSQL/PostGIS with ArcGIS clients since at least PG 8.0, probably back to PG 7.something_small. Desktop clients have supported direct write to even non-enabled PostgreSQL instances since ArcGIS 10.4.1. All the ArcGIS Pro releases have supported at least three PostgreSQL releases, sometimes as many as five. Instead of filing an enhancement request to do what the software already has done for two decades, you'd probably be better served by asking a specific question about your particular use case. Critical information includes:

- The exact PostgreSQL release
- The database name and table owner name
- The exact command you're using
- The exact error or other problem encountered

It's not like there aren't some quirks to how PostgreSQL is supported, but it's not all that difficult to get full functionality, both with and without geodatabase enablement. - V
06-11-2025 01:45 PM

POST
This isn't so much a "Data Management" question as it is a raster manipulation one. You might be better off looking in the Spatial Analyst, ArcGIS Pro, or Geoprocessing communities. - V
05-27-2025 02:30 PM

POST
Rather than rewriting the whole dataset, and having to take the service offline to do so, you could amend your process to only UPDATE the changed records, INSERT the new records, and DELETE the no-longer-active ones. In one system where I did this, I'm able to complete UPDATE and INSERT operations in ten minutes, processing 100k change messages (many of which are redundant), while the parent system takes three hours to make its changes using the "reload everything" procedure.

For me, the key is to crunch the rows into a SHA-1 hash, with a key (or compound key) associated with each row. Then, after the changes in the parent occur, I pull the hash table into a Python dictionary, then scan the parent table, hashing each row. If the key exists in the dictionary and the hash is equal, no change is necessary (and I delete the matching dictionary item). If the hash is different, I set the query row (and hash) aside into an update queue (and also delete the dictionary key). If the key isn't present, then the row goes into an insert queue (with hash). Once I'm done with input rows, any keys still in the dictionary become my deletion queue.

Then the deletes are made from the table by key (and likewise from the hash table), updates are made to the child table (and the child's hash table in parallel), and the inserts are performed (to both table and hash). It depends on the data change frequency and volume whether I vacuum and rebuild indexes daily, weekly, or monthly.

A service that I've deployed processes 100k-500k rows every 12 hours, performing 4k-10k updates and 3k-50k inserts in 10-25 minutes (and has been doing so for half a decade now). I'm about to deploy another system like it, and it will process 26-30 million rows each month, with 1k-5k in combined U/I/Ds, in an hour or so (loading the full tables from scratch takes 6-8 hours, which is a lot of downtime for a 24x7 system). - V
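A compressed sketch of that pattern follows. The key field, attribute list, and table names are all placeholders, and a real version also has to maintain the hash table alongside the child table:

```python
# Hash-diff change detection, condensed; every name is a placeholder.
import hashlib
import arcpy

KEY_FIELD = "asset_id"
FIELDS = [KEY_FIELD, "name", "status", "SHAPE@WKT"]

def row_hash(values):
    # Stable SHA-1 digest over the row's attribute values.
    return hashlib.sha1("|".join(map(str, values)).encode()).hexdigest()

# 1. Load the key -> hash map saved by the previous run.
hashes = dict(arcpy.da.SearchCursor("child_hashes", [KEY_FIELD, "sha1"]))

inserts, updates = [], []
# 2. One pass over the parent table, classifying each row.
with arcpy.da.SearchCursor("parent_table", FIELDS) as cursor:
    for row in cursor:
        key, digest = row[0], row_hash(row[1:])
        old = hashes.pop(key, None)
        if old is None:
            inserts.append((row, digest))    # brand-new row
        elif old != digest:
            updates.append((row, digest))    # changed row
        # old == digest: unchanged, nothing to do

# 3. Keys still left in the dictionary were deleted upstream.
deletes = list(hashes)

# 4. Apply deletes, updates, and inserts to the child table and its
#    hash table with arcpy.da.UpdateCursor / InsertCursor (omitted).
```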
05-23-2025 06:20 AM

POST
Feature classes haven't been "in an SDE" since SDE 3.0 was released. The current term of art is "enterprise geodatabase in {name of RDBMS}". Please provide the RDBMS name (and version), and the CREATE TABLE statement for the feature class in question. Generally, Shape_AREA is a computed column that doesn't even exist in the database, and therefore can't be defined NOT NULL. - V
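One hedged way to confirm whether Shape_AREA is a physical column at all (the connection file and table name are placeholders, and the query assumes a backend that exposes information_schema, like PostgreSQL or SQL Server):

```python
# Check the system catalog for a stored Shape_AREA column; names
# here are placeholders for your actual connection and table.
import arcpy

sql = arcpy.ArcSDESQLExecute(r"C:\connections\gis.sde")
result = sql.execute("""
    SELECT column_name, is_nullable
    FROM information_schema.columns
    WHERE LOWER(table_name) = 'parcels'
      AND LOWER(column_name) = 'shape_area'
""")
# No rows back means the value is computed by the client, not stored.
print(result)
```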
05-20-2025 05:10 AM

POST
There are no effective limits on a file geodatabase (two billion or four quintillion features). It would help if you could document your exact steps, to the point of describing the feature counts in the eight tables, and specifying the exact geoprocessing command in the Analysis log. - V
05-15-2025 03:00 PM

POST
@yockee wrote: "The survey user is portal user and it does not have any connection with database user. Am I wrong?" The Portal user has nothing to do with the connection that was used to make the map service. The map service should NOT be published using table owner authentication, which means that you need to validate your trigger function in pgAdmin while logged in as the publishing user. - V
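If you'd rather script the check than click through pgAdmin, a hedged sketch with psycopg2 (the host, database, role, and table are placeholders):

```python
# Exercise the trigger under the publishing login, the same way the
# map service will. Connection details and names are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="dbserver", dbname="gis", user="publisher", password="***"
)
try:
    with conn.cursor() as cur:
        # If the trigger touches objects the publisher can't see,
        # this fails here exactly as it does behind the service.
        cur.execute("INSERT INTO owner.survey_points (notes) VALUES ('test')")
finally:
    conn.rollback()   # leave no test row behind
    conn.close()
```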
05-07-2025 07:47 AM

POST
Frankly, this requirement makes no sense. Using HTTP is inherently insecure. Many browsers refuse to use HTTP. Forcing processes to use HTTP for "security reasons" is bizarre. Since Portal was created after HTTPS was standardized, and Portal messages include authentication, running HTTP would compromise your Portal instance. I'd expect there is no legacy wiring to revert Portal into feeble security mode. With the volume of GIS data flowing, it's probably for the best for the scanner people that they can't process it (especially because the HTTP data from Server is being forwarded, so you get it twice). - V
05-05-2025 07:51 AM