POST
It's not exactly best practice to use the master database for anything GIS; in fact, it's pretty close to worst practice. It wouldn't surprise me if it weren't permitted at all (and if it is, I would file a Defect to have that loophole closed).
Pretty much any database training will tell you to create new logical storage: a database or databases to house your data (leveraging that storage), additional user and group logins, and schemas in the databases that match and/or correspond to those logins.
Creating user data in reserved admin databases presents an unnecessary risk to database stability. Creating a "toto" database for your non-Kansas data would in fact be best practice, and I encourage you to do so.
- V
Posted: yesterday
POST
Never, ever change the selection environment on a source while inside a cursor that is using that source. Also, please use code formatting on your post so that the indentation is legible. - V
Posted: 2 weeks ago
POST
Hi. This is the user forum, where users communicate with other users (and some Esri employees who also use the software, most of whom are not authorized to speak on behalf of the company). Communicating vendor requirements to Esri should be done through Tech Support, Customer Service, and/or your local government representative. I would note that backfitting fundamental changes into software in Mature or Retired support status is not a likely outcome, especially if modern software in active support already has that capability. - V
Posted: 3 weeks ago
POST
I haven't ever used SQL*Loader, so this should be asked as a different question (and not of me). Populating new tables, adding all appropriate indexes, then running INSERT/DELETE based on a LEFT OUTER JOIN mismatch and UPDATE on only the changed rows is the procedure I'm recommending. How you actually implement that is outside the scope of my answer. - V
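A minimal sketch of that recommended pattern, using sqlite3 stand-ins for the real tables (the table and column names here are hypothetical, not from the original thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target  (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE staging (id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO target  VALUES (1,'a'), (2,'old'), (3,'c');
    INSERT INTO staging VALUES (2,'new'), (3,'c'), (4,'d');
""")

# INSERT rows present in staging but missing from target (LEFT OUTER JOIN mismatch)
con.execute("""
    INSERT INTO target (id, val)
    SELECT s.id, s.val
    FROM staging s LEFT JOIN target t ON s.id = t.id
    WHERE t.id IS NULL
""")

# DELETE rows present in target but absent from staging
con.execute("DELETE FROM target WHERE id NOT IN (SELECT id FROM staging)")

# UPDATE only the rows whose contents actually changed
con.execute("""
    UPDATE target
    SET val = (SELECT val FROM staging s WHERE s.id = target.id)
    WHERE val <> (SELECT val FROM staging s WHERE s.id = target.id)
""")

print(sorted(con.execute("SELECT id, val FROM target")))
# → [(2, 'new'), (3, 'c'), (4, 'd')]
```

With indexes on the join keys in place first, each of the three statements stays a single set-based operation.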
Posted: 3 weeks ago
POST
I'm not really good with the step-by-step thing, and am forbidden by NDA from giving explicit details. All I can say is that I used FeatureClassToFeatureClass and TableToTable to populate a few dozen tables as an interim change set in a staging schema within the database, using a common randomly generated name prefix, then executed many tens of thousands of SQL statements (via arcpy.ArcSDESQLExecute) to manifest the change. The load took twenty minutes, the base table population via SQL ten more, and the hierarchical data propagation another twenty. More than 120 million rows were processed, and the services publishing the data remained live throughout. I haven't attempted anything with annotation. - V
Posted: 3 weeks ago
POST
@RogerDunnGIS wrote: "An exception should be thrown in these instances so I know what the issue is and where it is." While that sounds great in theory, it's actually pretty hard to accomplish, at least with enterprise geodatabases. Data loading, for efficiency, is done as a bulk insert operation, which means the error isn't encountered until much later, possibly thousands of rows after the offending data has been staged for array insert. If you organize your code to COMMIT after each row, then the error can be caught, BUT performance may be degraded by several orders of magnitude (e.g., 5 minutes instead of 300 milliseconds). If no exception is ever raised, then that is a problem, and a reproducible test case should be submitted through Tech Support. - V
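The tradeoff shows up even in a toy sqlite3 session, used here only as a stand-in for the geodatabase load path:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
rows = [(1,), (2,), (2,), (3,)]   # third row violates the primary key

# Bulk insert: fast, but the failure gives no indication of which row was bad
try:
    con.executemany("INSERT INTO t VALUES (?)", rows)
except sqlite3.IntegrityError as e:
    print("bulk load failed:", e)

# Row-at-a-time with commits: slow, but pinpoints the offending record
con.execute("DELETE FROM t")
bad = []
for i, row in enumerate(rows):
    try:
        con.execute("INSERT INTO t VALUES (?)", row)
        con.commit()
    except sqlite3.IntegrityError:
        bad.append(i)
print("bad row indexes:", bad)   # → [2]
```

The per-row loop pays one transaction per record, which is where the orders-of-magnitude slowdown comes from.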
Posted: 08-20-2024 08:44 AM
POST
"Best" approaches don't generally exist, but storing only geometry in one database, and everything else in another, then linking between them in real time is an anti-pattern that approaches worst case. The least-worst case solution set has all the data for each table in one place. There are many ways to get there, but they all require more details than is really appropriate for a public forum (and often involve multi-month implementation contracts). I have implemented change detection solutions that use a hash of the data row contents to identify which records have changed over time, so that only the records which need update are updated, allowing me to keep table collections exceeding 160 million rows in sync with 20 minutes of processing for 200k-800k (often redundant) change messages a day (ironically, the system generating the change messages takes 4-5 hours to identify the change candidates, and most of those 20 minutes are due to transmission delay across a wide area network). Suffice it to say that "maintenance" and "publication" databases are in your near future. If you calculate a reasonably secure hash during data ingest into a staging table, and preserve the hash of the existing data as you load it, you can drive a simple UPDATE statement into the publishing tables in nearly no time, without any publishing downtime. - V
Posted: 08-15-2024 07:04 PM
POST
I just went through something like this with an architecture dataset I was toying with, though it was hundredths of feet, not sub-millimeter rounding. The default coordinate reference generally preserves thousandths of meters, though some dip into the ten-thousandths; unfortunately, it varies by coordinate system (maximum resolution across the entire mappable space).

The only way to change ArcGIS coordinate reference behavior is to take proactive ownership of the coordinate reference used in your data conversion. The first step is to read (and understand) the Understanding Coordinate Management in the Geodatabase white paper. Then determine your actual XY/Z/M coordinate reference range requirements and the exact offsets and scales necessary to satisfy them. At that point, you can create a Feature Dataset which uses those values, and all data conversion should go through that Feature Dataset. Since spatial references are immutable, once the feature class is created inside the FDS, you can drag it back out to the parent file/enterprise geodatabase.

You should also consider whether tenth-millimeter precision is actually necessary. I can assure you that the carpenter building my retirement home is not going to achieve better than 1/16" precision, possibly as little as 1/4". - V
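The effect of the coordinate grid can be modeled in a few lines: coordinates are stored as integer counts of grid steps, so detail below one step (1/scale) is lost. The scale value below is illustrative, not any product's default:

```python
# Simplified model of geodatabase coordinate storage:
# stored = round((value - origin) * scale); recovered = stored / scale + origin
xy_scale = 10000.0   # i.e. a resolution of 0.0001 map units (illustrative)
origin = 0.0

def snap(value, scale=xy_scale, falseorigin=origin):
    """Round-trip a coordinate through the integer storage grid."""
    return round((value - falseorigin) * scale) / scale + falseorigin

x = 1234.56789
print(snap(x))   # → 1234.5679, the fifth decimal place is gone
```

Choosing your own origin and scale when creating the Feature Dataset is what moves this rounding below your precision requirement.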
Posted: 08-07-2024 07:44 AM
POST
As a SQL query it's pretty basic, but doing this in Python is a bit trickier, since you have to do the inner query in memory. The key is to chunk the features into rows by Y, so you can order by X within each row, then flip the listed X values on alternate rows. I'm on a deadline, so I can't offer even a rough untested code block. - V
Posted: 07-26-2024 12:31 PM
POST
Relying on feature order based on OID is kind of iffy. The only way to change OIDs is to create a new feature class in the order you desire, so this isn't an UpdateCursor task but an InsertCursor one. Getting the "back-and-forth" numbering is a matter of assigning bands (by Y value), then using the band number modulo 2 to assign left-to-right or right-to-left order. - V
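A rough sketch of that banding approach in plain Python (simple (x, y) tuples stand in for features, and the band height is an assumption you would tune to your data):

```python
def serpentine_order(points, band_height):
    """Sort (x, y) points into back-and-forth rows: band by y, flip odd bands."""
    bands = {}
    for x, y in points:
        bands.setdefault(int(y // band_height), []).append((x, y))
    ordered = []
    for i, key in enumerate(sorted(bands)):
        row = sorted(bands[key])    # left-to-right by x
        if i % 2 == 1:              # modulo 2: flip alternate bands
            row.reverse()
        ordered.extend(row)
    return ordered

pts = [(2, 0), (1, 0), (3, 0), (1, 5), (2, 5), (3, 5)]
print(serpentine_order(pts, band_height=4))
# → [(1, 0), (2, 0), (3, 0), (3, 5), (2, 5), (1, 5)]
```

Feeding features to an InsertCursor in this order would yield sequential OIDs matching the serpentine path.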
Posted: 07-26-2024 10:37 AM
POST
Filegroup placement is controlled by the "storage keyword" environment (which supplies an "ON filegroup" clause in the storage definition). I always create my tables with SQL and then register them, so filegroup allocation is trivial. Of course, you can also use SQL Server tools to migrate storage via a clustered index. In the days of petabyte disk arrays, filegroup management is mostly a waste of time for tiny things like vector data under a quarter-billion rows. - V
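A sketch of that create-then-register flow; the table, column, and filegroup names below are made up for illustration, and the execution step is only indicated in comments:

```python
# SQL Server DDL placing a table on a specific filegroup (hypothetical names)
ddl = """
CREATE TABLE dbo.Parcels (
    OBJECTID int NOT NULL,
    APN      nvarchar(20)
) ON [VectorFG]
"""
# In practice the DDL would be run against the database (for example via
# arcpy.ArcSDESQLExecute), and the table then registered with the geodatabase.
print("ON [VectorFG]" in ddl)
```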
Posted: 07-25-2024 07:29 AM
POST
Measurements in Web Mercator aren't really in meters. The poles are infinitely far from the Equator, so distance accuracy is wrong to start (sphere), and it gets worse as the magnitude of the latitude increases (the scale error grows roughly as sec(latitude), i.e. 1/cos(latitude)). Never measure ANYTHING in Web Mercator, because it can't give a correct answer. If you're working in MD State Plane, keep on using MD State Plane to get accurate distances. - V
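The distortion is easy to quantify with the spherical-Mercator scale factor; a quick check (the latitudes are illustrative, with 39° roughly covering Maryland):

```python
import math

def mercator_overstatement(latitude_deg):
    """Approximate factor by which Web Mercator lengths exceed ground distance."""
    return 1.0 / math.cos(math.radians(latitude_deg))

for lat in (0, 39, 60):
    print(f"{lat:2d}°: x{mercator_overstatement(lat):.2f}")
# At 60° latitude a Web Mercator "meter" is only about half a real meter.
```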
Posted: 07-19-2024 12:21 PM
POST
There's a bunch of information missing here:

- What version of ArcGIS are you using? What product?
- What is the coordinate system used in the display canvas?
- What is the coordinate system of the Buffer source? Of the Buffer output?
- Where is the data located?
- What exact parameters did you provide to the Buffer command?
- What is the actual measured distance?

There are many potential causes, but the most common is use of a coordinate system for which distance is undefined, like Web Mercator. - V
Posted: 07-19-2024 10:28 AM
POST
Please provide an example of the invalid polygon coordinates.
Posted: 07-16-2024 07:30 AM
POST
While latitude has hard limits on its values ([-90, 90]), longitude does not: pi radians is equivalent in every way to 3pi radians, and 2pi, 0, and -2pi are likewise equivalent. There may be limitations in the storage of coordinate values in SQL Server, which, like Esri's ST_Geometry, uses an integer internal representation, but values between -360 and +360 degrees ought to be safe. STDistance(), if done correctly for geodesic values, should find no distance at all between longitudes 278.440065912902 and -81.559934087098 at the same latitude. Measuring in Cartesian degrees is of course useless for all purposes. So it seems your problem is not with the longitude values but with the distance measurement. You would need to edit or reply to your own post with a more specific description of that issue to get a useful answer. - V
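Normalizing longitude to [-180, 180) shows the two values in question name the same meridian (a quick sketch, not tied to any particular database):

```python
def wrap_lon(lon):
    """Normalize a longitude in degrees to the interval [-180, 180)."""
    return ((lon + 180.0) % 360.0) - 180.0

a, b = 278.440065912902, -81.559934087098
print(wrap_lon(a), wrap_lon(b))
print(abs(wrap_lon(a) - wrap_lon(b)) < 1e-9)   # → True
```

A geodesic distance function that wraps its inputs this way would report zero distance between the two points.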
Posted: 07-04-2024 11:15 AM