POST | 11 hours ago
Whenever something new comes up, the question to ask is "What changed?" The error message indicates login/schema issues, so in this case: what changed in the database? Did someone alter the login/schema mappings? Has there been a database revision update?
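If this is an enterprise geodatabase on SQL Server, a quick way to check (a sketch only; SQL Server is an assumption, adjust for your DBMS):

```sql
-- List database users, the server logins they map to, and their default schemas.
-- A NULL server_login means an orphaned user (a classic post-restore symptom).
SELECT dp.name AS database_user,
       sp.name AS server_login,
       dp.default_schema_name
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
       ON dp.sid = sp.sid
WHERE dp.type IN ('S', 'U');   -- SQL and Windows users
```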
- V
POST | 11 hours ago
The definition of "unused" needs scope. They were all used at least once.
It would be possible to scan a number of map projects to inventory the "used" tables (a sketch follows), but it is much more difficult to scan for all the scripts or SQL functions/triggers that exploit "unused" data to manifest "used" data.
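A rough sketch of that map-project scan (assumes ArcGIS Pro .aprx files and arcpy; the projects folder is hypothetical):

```python
import arcpy
from pathlib import Path

used = set()
# Walk a hypothetical projects folder and record every layer/table data source.
for aprx_path in Path(r"C:\projects").rglob("*.aprx"):
    aprx = arcpy.mp.ArcGISProject(str(aprx_path))
    for m in aprx.listMaps():
        for lyr in m.listLayers():
            if lyr.supports("DATASOURCE"):
                used.add(lyr.dataSource)
        for tbl in m.listTables():
            used.add(tbl.dataSource)

print(f"{len(used)} distinct data sources referenced")
```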
- V
POST | a week ago
You ought to back up file geodatabases in their entirety, and shapefiles as complete sets. You can make it easier on yourself by placing logically grouped shapefiles into a single subdirectory, then zipping by subdirectory (sketch below).
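A minimal sketch of the zip-by-subdirectory step (the root path and layout are hypothetical):

```python
import shutil
from pathlib import Path

# Hypothetical layout: one subdirectory per logical group of shapefiles.
root = Path(r"C:\backups\shapefiles")
for sub in (p for p in root.iterdir() if p.is_dir()):
    # Writes <sub>.zip beside the folder, capturing the complete shapefile
    # sets (.shp, .shx, .dbf, .prj, ...) in one archive.
    shutil.make_archive(str(sub), "zip", root_dir=sub)
```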
I wouldn't use ModelBuilder for this, but then I wouldn't use ModelBuilder for anything (I'd sooner write assembly, which puts ModelBuilder a good fifteen language environments back in the queue).
The Compare tool has the advantage that it already knows what data to compare. Coding this yourself for generic sources would be a non-trivial task.
- V
POST | a week ago
In what way do you want to compare the datasets? Plain files have byte order, but geodata, since it is a type of relation, needs to be compared unordered, which makes comparison, umm..., complex (if not impossible).
Obviously you could compare them in simplistic ways, but the number of false positives would reduce the usefulness of such comparisons.
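For illustration, the most simplistic of those approaches treats each dataset as an unordered set of row tuples (a sketch; assumes arcpy, matching schemas, and hypothetical field names; geometry goes through WKT so rows are hashable):

```python
import arcpy

def row_set(fc, fields):
    # WKT makes geometry hashable, but coordinate noise alone will mark
    # rows as "different" -- one of the false-positive sources above.
    return {tuple(row) for row in arcpy.da.SearchCursor(fc, fields)}

fields = ["SHAPE@WKT", "NAME"]   # hypothetical
a, b = row_set("parcels_a", fields), row_set("parcels_b", fields)
print(f"{len(a - b)} rows only in A, {len(b - a)} rows only in B")
```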
- V
POST | 2 weeks ago
Rather than give generous permissions to the users, create a power_user role, write a utility procedure that encapsulates the minimum needed privileges, and grant the power_user role the privilege to execute that procedure. In this way you grant only the access necessary to terminate user sessions according to the rules established in the PL/SQL procedure (the PL/SQL equivalent of Unix sudo or some other setuid script).
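A minimal Oracle sketch of the pattern (all names hypothetical; the procedure body is where your termination rules live):

```sql
CREATE ROLE power_user;

-- Definer's-rights procedure: it runs with its owner's ALTER SYSTEM
-- privilege, so grantees never hold that privilege directly.
CREATE OR REPLACE PROCEDURE kill_session (
    p_sid    IN NUMBER,
    p_serial IN NUMBER
) AUTHID DEFINER AS
BEGIN
    -- Real code would validate the target session against your rules here.
    EXECUTE IMMEDIATE
        'ALTER SYSTEM KILL SESSION ''' || p_sid || ',' || p_serial || '''';
END kill_session;
/

GRANT EXECUTE ON kill_session TO power_user;
GRANT power_user TO jdoe;   -- hypothetical power user
```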
- V
POST | 3 weeks ago
It's not exactly best practice to use the master database for anything GIS. In fact, it's pretty close to worst practice. It wouldn't surprise me if it weren't permitted (and if it is, I would file a Defect to have that loophole closed).
Pretty much any training in database technology is going to tell you to create new logical storage, a database or databases to house your data (leveraging that storage), additional user logins and group logins, and schemas in the databases to match and/or correspond to the logins.
Creating user data in reserved admin databases presents an unnecessary risk to database stability. Creating a "toto" database for your non-Kansas data would in fact be best practice, and I encourage you to do so.
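On SQL Server, for instance, the bare bones look something like this (names are placeholders):

```sql
-- New logical storage for user data, well away from master.
CREATE DATABASE toto;
GO

-- Server-level login, then a matching user and schema inside the database.
CREATE LOGIN gis_editor WITH PASSWORD = 'Change_me_1!';
GO
USE toto;
GO
CREATE USER gis_editor FOR LOGIN gis_editor;
GO
CREATE SCHEMA gis_editor AUTHORIZATION gis_editor;
GO
```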
- V
POST | 09-13-2024 07:45 AM
Never, ever change the selection environment on a source while a cursor is open against that same source. Also, please use code formatting in your post so that the indentation is legible.
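To illustrate with a hedged arcpy sketch (layer name, field, and where clause are hypothetical): exhaust the cursor first, then change the selection.

```python
import arcpy

# Collect OIDs while the cursor runs to completion...
oids = [oid for (oid,) in arcpy.da.SearchCursor(
    "roads_lyr", ["OID@"], "STATUS = 'NEW'")]

# ...and only change the selection after the cursor is closed.
where = f"OBJECTID IN ({', '.join(map(str, oids)) or 'NULL'})"
arcpy.management.SelectLayerByAttribute("roads_lyr", "NEW_SELECTION", where)
```
- V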
POST | 09-06-2024 07:15 AM
Hi. This is the user forums, where users communicate with other users (and with some Esri employees who also use the software, most of whom are not authorized to speak on behalf of the company). Communicating vendor requirements to Esri should be done through Tech Support and/or Customer Service and/or your local government representative. I would note that backfitting fundamental changes into software in Mature or Retired support status is not a likely outcome, especially if modern software in active support already has that capability.
- V
POST | 09-05-2024 01:41 PM
I haven't ever used SQL*Loader, so that should be asked as a separate question (and not of me). The procedure I'm recommending is: populate new tables, add all appropriate indexes, then run INSERT/DELETE based on a LEFT OUTER JOIN mismatch, and UPDATE on only the changed rows. How you actually implement that is outside the scope of my answer.
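In skeletal SQL (Oracle-flavored; table, key, and column names hypothetical):

```sql
-- Rows in staging but not in target: insert them.
INSERT INTO target (id, attr)
SELECT s.id, s.attr
FROM staging s
LEFT OUTER JOIN target t ON t.id = s.id
WHERE t.id IS NULL;

-- Rows in target but not in staging: delete them.
DELETE FROM target
WHERE id IN (
    SELECT t.id FROM target t
    LEFT OUTER JOIN staging s ON s.id = t.id
    WHERE s.id IS NULL);

-- Rows present in both but changed: update them.
UPDATE target t
SET attr = (SELECT s.attr FROM staging s WHERE s.id = t.id)
WHERE EXISTS (
    SELECT 1 FROM staging s
    WHERE s.id = t.id AND s.attr <> t.attr);
```
- V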
POST | 09-04-2024 12:19 PM
I'm not very good with the step-by-step thing, and I'm forbidden by NDA from giving explicit details. All I can say is that I used FeatureClassToFeatureClass and TableToTable to populate a few dozen tables as an interim change set into a staging schema within the database, using a common randomly generated name prefix, then executed many tens of thousands of SQL statements (via arcpy.ArcSDESQLExecute) to manifest the change. The load took twenty minutes, the base table population via SQL ten more, and the hierarchical data propagation another twenty. Over 120 million rows were processed, and the services publishing the data remained live throughout. I haven't attempted anything with annotation.
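A heavily reduced sketch of that shape (every name here is a placeholder, not the real workflow):

```python
import arcpy, uuid

sde_conn = r"C:\connections\prod.sde"     # hypothetical connection file
prefix = f"stg_{uuid.uuid4().hex[:8]}_"   # common random staging prefix

# Stage the change set inside the database...
arcpy.conversion.FeatureClassToFeatureClass(
    r"C:\delivery\update.gdb\parcels", sde_conn, prefix + "parcels")
arcpy.conversion.TableToTable(
    r"C:\delivery\update.gdb\owners", sde_conn, prefix + "owners")

# ...then manifest the change with SQL against the base tables.
sde = arcpy.ArcSDESQLExecute(sde_conn)
sde.execute(
    f"INSERT INTO parcels (parcel_id, owner_id) "
    f"SELECT s.parcel_id, s.owner_id FROM {prefix}parcels s "
    f"LEFT JOIN parcels p ON p.parcel_id = s.parcel_id "
    f"WHERE p.parcel_id IS NULL")
```
- V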
POST | 08-20-2024 08:44 AM
@RogerDunnGIS wrote: "An exception should be throw[n] in these instances so I know what the issue is and where it is."
While that sounds great in theory, it's actually pretty hard to accomplish, at least with enterprise geodatabases. Data loading, for efficiency, is done as a bulk insert operation. This means the error isn't encountered until much later, possibly thousands of rows after the offending row was staged for array insert. If you organize your code to COMMIT after each row, the error can be caught, but performance may degrade by several orders of magnitude (e.g., 5 minutes instead of 300 milliseconds). If no exception is ever raised, then that is a problem, and a reproducible test case should be submitted through Tech Support.
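A generic DB-API illustration of the tradeoff (pyodbc here is my assumption for illustration, not what Esri uses internally; the DSN and table are hypothetical):

```python
import pyodbc

conn = pyodbc.connect("DSN=gisdb")         # hypothetical DSN
cur = conn.cursor()
rows = [(1, "a"), (2, "b"), (3, None)]     # imagine row 3 violates a constraint

# Bulk path: fast, but the failure surfaces after the whole batch is staged.
# cur.executemany("INSERT INTO parcels (id, name) VALUES (?, ?)", rows)

# Row-at-a-time path: the bad row is pinpointed, but each round trip
# and commit can cost orders of magnitude in throughput.
for r in rows:
    try:
        cur.execute("INSERT INTO parcels (id, name) VALUES (?, ?)", r)
        conn.commit()
    except pyodbc.DatabaseError as e:
        print(f"row {r} failed: {e}")
        conn.rollback()
```
- V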
POST | 08-15-2024 07:04 PM
"Best" approaches don't generally exist, but storing only geometry in one database, and everything else in another, then linking between them in real time is an anti-pattern that approaches worst case. The least-worst case solution set has all the data for each table in one place. There are many ways to get there, but they all require more details than is really appropriate for a public forum (and often involve multi-month implementation contracts). I have implemented change detection solutions that use a hash of the data row contents to identify which records have changed over time, so that only the records which need update are updated, allowing me to keep table collections exceeding 160 million rows in sync with 20 minutes of processing for 200k-800k (often redundant) change messages a day (ironically, the system generating the change messages takes 4-5 hours to identify the change candidates, and most of those 20 minutes are due to transmission delay across a wide area network). Suffice it to say that "maintenance" and "publication" databases are in your near future. If you calculate a reasonably secure hash during data ingest into a staging table, and preserve the hash of the existing data as you load it, you can drive a simple UPDATE statement into the publishing tables in nearly no time, without any publishing downtime. - V
POST | 08-07-2024 07:44 AM
I just went through something like this with an architecture dataset I was toying with, though it was hundredths of feet, not sub-millimeter rounding. The default coordinate reference generally preserves thousandths of meters, though some dip into the ten-thousandths. Unfortunately, it varies by coordinate system (it is the maximum resolution across the entire mappable space).
The only way to change ArcGIS coordinate reference behavior is to take proactive ownership of the coordinate reference used in your data conversion. The first step is to read (and understand) the "Understanding Coordinate Management in the Geodatabase" white paper. Then determine your actual XY/Z/M coordinate reference range requirements and the exact offsets and scales necessary to accomplish them. At that point, you can create a Feature Dataset which uses those values, and all data conversion should go through that Feature Dataset. Since spatial references are immutable, once a feature class is created inside the FDS, you can drag it back out to the parent file/enterprise geodatabase.
You should also consider whether tenth-millimeter precision is actually necessary. I can assure you that the carpenter building my retirement home is not going to achieve better than 1/16" precision, possibly as little as 1/4".
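A sketch of taking that ownership with arcpy (the CRS and values are illustrative only; derive yours from the white paper and your actual extent):

```python
import arcpy

sr = arcpy.SpatialReference(2231)        # hypothetical: NAD83 Colorado North (US ft)
arcpy.env.XYResolution = "0.0005 Feet"   # ~0.15 mm coordinate grid
arcpy.env.XYTolerance = "0.001 Feet"
arcpy.management.CreateFeatureDataset(r"C:\data\work.gdb", "precise_fds", sr)
# Feature classes created inside precise_fds pick up these values; once
# created, they can be dragged back out to the geodatabase root.
```
- V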
POST | 07-26-2024 12:31 PM
As a SQL query, it's pretty basic, but doing this in Python is a bit trickier, since you have to do the inner query in memory. The key is to chunk the features into rows by Y, so you can order by X. Then you need to flip the listed X values on alternate rows. I'm on a deadline, so I can't offer even a rough untested code block.
- V
POST | 07-26-2024 10:37 AM
Relying on feature order based on OID is kind of iffy. The only way to change OIDs is to create a new feature class in the order you desire, so this isn't an UpdateCursor task but an InsertCursor one. Getting the "back-and-forth" numbering is a matter of assigning bands (by Y value), then using modulus 2 of the band number to assign left-to-right or right-to-left order.
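A hedged sketch of that banding (point features assumed; the band height and dataset names are hypothetical):

```python
import arcpy

BAND_HEIGHT = 100.0   # hypothetical: match your data's row spacing

pts = [(x, y) for ((x, y),) in arcpy.da.SearchCursor("pts_in", ["SHAPE@XY"])]

def serpentine_key(pt):
    x, y = pt
    band = int(y // BAND_HEIGHT)
    # Even bands run left-to-right, odd bands right-to-left.
    return (band, x if band % 2 == 0 else -x)

# The new feature class receives rows, and therefore OIDs, in serpentine order.
with arcpy.da.InsertCursor("pts_out", ["SHAPE@XY"]) as icur:
    for pt in sorted(pts, key=serpentine_key):
        icur.insertRow([pt])
```
- V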