POST
The undefined environment makes identifying a cause challenging. Is the database on-prem or in the cloud? Is the workstation on-prem or in the cloud? If in the cloud, are they in the same neighborhood (subnet)? A full architecture diagram of both execution environments could be useful. - V
Posted 12-17-2024 07:18 PM

POST
If versioning and/or replication is involved, you really don't want to use anything but a database restore. The management service folks should be capable of snapshotting old-Dev and replacing it with a production snapshot for development. If replication is involved, you then need to immediately drop the replica in new-Dev, lest something very ugly happen. If old-Dev had a different keycode, you'll need to apply that to new-Dev before you forget (save the contents of old-dev.sde_server_config before it's archived).
- V
Posted 12-06-2024 10:10 AM

POST
I never have. I do run CLUSTER, VACUUM, and REINDEX at quiet times, but "quiet" can be hard to find in 24x7 systems. REINDEX will slow down queries, but not cause them to fail. VACUUM might grab a lock; the docs will say for sure.
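For reference, a quiet-window maintenance pass might look like the sketch below. All table and index names are hypothetical, and REINDEX ... CONCURRENTLY requires PostgreSQL 12 or later:

```sql
-- Hypothetical names; run during the quietest window you can find.
VACUUM (ANALYZE, VERBOSE) parcels;               -- reclaims dead tuples, refreshes planner stats
REINDEX INDEX CONCURRENTLY parcels_shape_idx;    -- PG 12+; avoids locking out readers/writers
CLUSTER parcels USING parcels_objectid_idx;      -- takes ACCESS EXCLUSIVE; table unavailable while it runs
```

CLUSTER is the heavy hitter of the three, since it rewrites the table under an exclusive lock, which is why the quiet window matters.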
- V
Posted 12-02-2024 06:42 AM

POST
Never, never, NEVER, NEVER publish as the data owner. This is a HUGE security risk. Just don't do it.
This is a Security Modeling 101 issue. The principle here is "minimum necessary privilege". The owner has far too much access to the table. Instead, create one or more browse users and roles for each kind of access, grant access to the tables to the roles, and grant the roles to users. Publish data with the user holding the least possible access to make effective use of the data. If some apps need UPDATE but others don't, publish with different publishing users (e.g., "app1_pub" & "app2_pub"), only granting the minimum necessary to each.
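A minimal sketch of that pattern in PostgreSQL terms (every name here is made up for illustration):

```sql
-- One role per kind of access; grants go to roles, roles go to logins.
CREATE ROLE parcel_reader NOLOGIN;                 -- the "kind of access" role
GRANT USAGE ON SCHEMA gisdata TO parcel_reader;
GRANT SELECT ON gisdata.parcels TO parcel_reader;  -- least privilege: read only

CREATE ROLE app1_pub LOGIN PASSWORD 'change_me';   -- the publishing login
GRANT parcel_reader TO app1_pub;                   -- app1_pub can now SELECT, nothing more
```

A second publisher that needs edits would get a separate role with UPDATE granted, never the owner's credentials.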
- V
Posted 11-30-2024 06:10 PM

POST
@Yogesh_Chavan wrote:
Question: In our case as we have only one user being the data owner, should we have a new dedicated user for publishing? or is it okay to keep using the user X for publishing?
If you only have one login, you are not anywhere close to best practice. Publishing with the table owner means that any zero-day security bug would allow "read-only" users to delete the contents of your database (or just systematically corrupt it), with no way to determine who did it.
It is NOT okay to keep using user X for publishing. You need to create a browsing login and user, and grant it only SELECT access to the tables involved in publishing (and nothing else), then publish connected as that user.
Using enterprise-class database tools means having an enterprise-class security model. There are entire books on database security, but you can start with a chapter in any database administration guide.
- V
Posted 11-29-2024 08:24 AM

POST
That's not a supported combination. It might work, or it might spawn a small black hole which consumes the data center. I wouldn't hold my breath for either of these outcomes for ArcGIS 11.0.
- V
Posted 11-12-2024 10:52 PM

POST
Your first step should be to search on "arcgis postgresql requirements" and look to see if the database is supported (PG 16 isn't supported at 11.4.0, much less 11.0).
I haven't tuned an RDBMS in at least a decade, possibly two, so I doubt this is a priority.
- V
Posted 11-12-2024 06:57 AM

POST
Why do you need the MV registered? Is it going to participate in geodatabase behavior? If you just want to render from it, making a Query Layer of the unregistered table would likely suffice.
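If the Query Layer route fits, the database side is just the view itself; a sketch with hypothetical names (note that a query layer needs a unique, not-null id column):

```sql
-- Hypothetical MV; a query layer can read this without geodatabase registration.
CREATE MATERIALIZED VIEW gisdata.parcel_summary AS
SELECT objectid, shape, status
FROM gisdata.parcels
WHERE status = 'ACTIVE';

-- The unique index doubles as the query layer's id column
-- and is required for REFRESH ... CONCURRENTLY.
CREATE UNIQUE INDEX ON gisdata.parcel_summary (objectid);

REFRESH MATERIALIZED VIEW CONCURRENTLY gisdata.parcel_summary;
```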
The conflicts that could be created between the ArcSDE metadata and geodatabase metadata were the reason the ancient command-line utilities were deprecated long, long ago. Even if you could make a 10.2.2 sdetable function, you could only harm the integrity of a modern geodatabase.
Registration of views is a relatively new feature. If you haven't installed a service pack to 3.2, you probably ought to, at least 3.2.1, but the 3.2.4 terminal release is likely to be better.
- V
Posted 11-08-2024 10:54 AM

POST
The length and area columns are often wrong, or useless, so it's quite possible you won't find what you're looking for.
"Internally calculated Cartesian value that should not be trusted for any real-world purpose" would suffice for both length and area.
- V
Posted 11-08-2024 07:16 AM

POST
Dissolving millions of features to a few hundred is a nightmare case. You didn't provide most of the requested information necessary to help. Having corrupt geometries before the Dissolve would put a knife into the back of the post-Dissolve topology; that might not be recoverable without significant data loss.
- V
Posted 11-07-2024 07:24 AM

POST
Even 200k rows can be slow if they're wide enough. You should certainly have an index on the query column, but the first priority is to copy the FGDB directory to local disk.
- V
Posted 11-06-2024 07:18 AM

POST
There are a bunch of things here:
- 200M rows is an order of magnitude higher than I would feel comfortable using for a file geodatabase (yeah, it functions, but a real database would function much better).
- Shared folders are performance death for file geodatabase, with a minimum 2x cost for accessing even a local network share.
- Full-table-scan queries are performance poison for relational databases with very large tables. If it's important enough to do a query, it's important enough to build an index.
- You should not be using an OR when you could use an IN: rel_objectid in (26,19804). Remember that FGDB doesn't have an RDBMS optimizer, so you should always pitch softballs for queries.
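As a fragment, the friendlier form of that filter (table and column list are hypothetical):

```sql
-- One indexable predicate instead of a chain of ORs
SELECT rel_objectid, label
FROM related_records              -- hypothetical table
WHERE rel_objectid IN (26, 19804);
-- rather than: WHERE rel_objectid = 26 OR rel_objectid = 19804
```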
- V
Posted 11-05-2024 07:36 AM

POST
There are really gobs of options here. I worked one project where Survey 123 was used to populate attributes for imagery, and database triggers and batch geoprocessing scripts handled it from there. You can also have a true "not ArcGIS" solution with a tiny web form that populates a database, with geoprocessing steps taking it from there. If Excel is your thing, completing metadata in a spreadsheet and dropping that in a folder or S3 bucket to trigger further processing is an option.
- V
Posted 11-01-2024 12:28 PM

POST
At some point, you need to leave 10.8.x behind. I'm an extreme late adopter, and I've been using Pro exclusively for years, even for hobby projects. Python 3 is worth the transition cost, and clinging to an unsupported platform is just no fun.
- V
Posted 10-31-2024 06:15 PM

POST
Yes, you can manage indexes manually with SQL in pgAdmin, psql, or even arcpy.ArcSDESQLExecute().
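For example, in psql against the geodatabase (hypothetical names; CONCURRENTLY keeps the table writable while the index builds, but cannot run inside a transaction block):

```sql
CREATE INDEX CONCURRENTLY idx_parcels_status
    ON gisdata.parcels (status);        -- hypothetical schema/table/column
DROP INDEX CONCURRENTLY IF EXISTS gisdata.idx_parcels_status_old;
```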
- V
Posted 10-30-2024 01:06 PM
| Title | Kudos | Posted |
|---|---|---|
| | 2 | 03-27-2026 12:04 PM |
| | 1 | 02-25-2026 07:30 PM |
| | 2 | 10-10-2025 07:28 AM |
| | 2 | 10-07-2025 11:00 AM |
| | 1 | 08-13-2025 07:10 AM |