POST
Hexagons can't be sub-tessellated into hexagons. You need to move on to triangles after the initial layer; I've actually got code for this in my blog post. Squares (aka diamonds) can be sub-tessellated, as can triangles themselves. It's not clear what a "unic" layer might be. - V
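To illustrate why triangles keep working at every level (a property hexagons lack), here's a minimal Python sketch of midpoint subdivision. The subdivide helper and the plain coordinate tuples are mine for illustration, not the code from the blog post; each pass turns every triangle into four congruent triangles:

```python
def subdivide(tri):
    """Split one triangle into four by connecting the edge midpoints."""
    (ax, ay), (bx, by), (cx, cy) = tri
    ab = ((ax + bx) / 2, (ay + by) / 2)  # midpoint of edge a-b
    bc = ((bx + cx) / 2, (by + cy) / 2)  # midpoint of edge b-c
    ca = ((cx + ax) / 2, (cy + ay) / 2)  # midpoint of edge c-a
    return [(tri[0], ab, ca),   # corner triangle at a
            (ab, tri[1], bc),   # corner triangle at b
            (ca, bc, tri[2]),   # corner triangle at c
            (ab, bc, ca)]       # inverted center triangle

# Three rounds of subdivision: 1 -> 4 -> 16 -> 64 triangles
tris = [((0.0, 0.0), (1.0, 0.0), (0.5, 1.0))]
for _ in range(3):
    tris = [t for tri in tris for t in subdivide(tri)]
print(len(tris))  # 64
```

The same midpoint trick works for squares/diamonds (split into four quads), which is why those shapes can carry a multi-level tessellation while hexagons can't.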
2 weeks ago

POST
The value in the computed length field depends on the map projection used in your canvas. Web Mercator is useless for planar measurement (the poles are infinitely far from the Equator, so all measurements not along the Equator itself are skewed, potentially infinitely), as are all geographic coordinate systems (Cartesian degrees are meaningless). Some measurement calculations in the UI are smart enough to not trust Web Mercator, and to do geodesic computation with a geographic coordsys, but the table rows are going to return Cartesian results (the units of which may not make sense). If you use a coordinate reference which is not the same as the data source, any stored values in the original coordref may not agree with the values computed "on-the-fly" by reprojection. The amount of error may vary by location (e.g. over Africa vs. over Iceland). - V
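To put numbers on the Web Mercator skew, here's a rough pure-Python sketch. It uses spherical approximations for both the projection and the geodesic (real coordinate systems use an ellipsoid, and the function names here are mine), so treat it as illustrative only:

```python
import math

R = 6371008.8  # mean Earth radius in metres (spherical assumption)

def web_mercator(lon, lat):
    """Project degrees to Web-Mercator-style planar metres (spherical)."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

def haversine(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres (spherical approximation)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# A one-degree east-west segment at 60 degrees north
x1, y1 = web_mercator(10.0, 60.0)
x2, y2 = web_mercator(11.0, 60.0)
planar = math.hypot(x2 - x1, y2 - y1)   # Cartesian length in the projection
geodesic = haversine(10.0, 60.0, 11.0, 60.0)
print(planar / geodesic)  # ~2.0: Mercator roughly doubles lengths at 60 N
```

The inflation factor is about 1/cos(latitude), so it grows without bound toward the poles, which is exactly why planar measurements in Web Mercator can't be trusted.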
2 weeks ago

POST
Any query returning over a hundred thousand rows of 2 million is always going to be slow, especially through a service that only returns 2000 records at a time, in a format that's much, much larger than the raw data. This has nothing to do with PostgreSQL and everything to do with throughput. - V
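As a back-of-envelope sketch of why this is a throughput problem (the per-row byte counts below are assumptions for illustration, not measurements of any particular service):

```python
# Pulling 120,000 of 2,000,000 rows through a paged service.
total_rows = 120_000        # rows the query returns (illustrative)
page_size = 2_000           # records per service request
requests = -(-total_rows // page_size)  # ceiling division
print(requests)             # 60 sequential round trips

raw_bytes_per_row = 100     # assumed size of the raw data per row
json_bytes_per_row = 900    # assumed size once wrapped in verbose JSON
inflation = json_bytes_per_row / raw_bytes_per_row
print(inflation)            # 9.0x more bytes on the wire than the raw data
```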
4 weeks ago

POST
It's more likely that the ST_Intersects() is your performance bottleneck, especially if you have not constructed a spatial index. Personally, I'd do this query as a single statement using a virtual table, and with an ST_Within (note that ST_PointOnSurface takes a single geometry, so the point goes in the first ST_Within argument):

UPDATE parcel_new_pin u
SET zoning = vt.zoning
FROM (
    SELECT p.pin,
           z2.zoning_district AS zoning
    FROM parcel_new_pin p
    JOIN ZONING z2 ON sde.ST_Within(sde.ST_PointOnSurface(p.shape), z2.shape) = 1
    WHERE p.zoning <> z2.zoning_district
    AND z2.zoning_district <> 'HB-S MLKO'
) vt
WHERE u.pin = vt.pin;

For this query, I'd make sure that ZONING has an index on the geometry column, and PARCEL_NEW_PIN has an index on PIN. Additionally, you can test the raw performance of the ST_PointOnSurface query in isolation:

SELECT sde.ST_PointOnSurface(p.shape)
FROM parcel_new_pin p

And the performance of the JOIN without the UPDATE:

SELECT p.pin,
       z2.zoning_district
FROM parcel_new_pin p
JOIN ZONING z2 ON sde.ST_Within(sde.ST_PointOnSurface(p.shape), z2.shape) = 1
WHERE p.zoning <> z2.zoning_district
AND z2.zoning_district <> 'HB-S MLKO'

If you haven't built a spatial index on the polygons of ZONING, that should be your first priority. It's probably easiest to use the Add Spatial Index (Data Management) geoprocessing tool via an enterprise geodatabase connection (.sde) file, at least at first, since the parameters can be tricky. - V
4 weeks ago

POST
I've done a join with 72M ST_PointOnSurface calculations against a 35M polygon layer to verify parent-child relationships using PostgreSQL with PostGIS geometry, and it ran in less than 25 min (on an RDS xlarge instance). I never tried that in Oracle with SDE.ST_GEOMETRY (or with the PG sde.ST_Geometry implementation). How many features? What is the average vertex count in the polygons? How are the indexes built? What does the query plan look like? How long does a simple "SELECT geomcol FROM tablename" query take on the table, and how long does an unbounded ST_PointOnSurface query take? Including the actual query will help generate feedback. - V
4 weeks ago

POST
I'm mostly using AWS RDS, where the Esri sde.st_geometry type isn't available. I haven't ever seen performance issues with PostGIS geometry types, though I've always built appropriate indexes. I have seen poor performance with millions of spatially fragmented features (at 1M it's noticeable; after 10M it starts to impact functionality), but spatial fragmentation is storage-agnostic, and impacts all storage implementations (shapefile, file geodatabase, Oracle, SQL Server, PostgreSQL, Informix, DB2, ...). More details would be necessary in order to distinguish whether the issue was with the storage format or the underlying data. Among the information needed:

- Exact versions of PostgreSQL and PostGIS
- The number of tables involved (both with and without geometry columns)
- The number of features in the largest table
- The geometry type (point/line/poly)
- For line/polygon, the average number of vertices in each feature
- The frequency of update on the features
- The average number of features returned on the average query
- A description of the performance problem using numbers (e.g., "It takes 14.7 seconds to return 40,123 features in GeoJSON format over a window that extends 17% of the total area")

- V
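That last item is the one people most often skip. A trivial, hypothetical helper (the function and its defaults are mine, not from any library) shows the kind of numeric description that makes a performance report actionable:

```python
import time

def describe_query(fetch, fmt="GeoJSON", window_pct=17):
    """Time a fetch callable and phrase the result with concrete numbers."""
    start = time.perf_counter()
    features = list(fetch())
    elapsed = time.perf_counter() - start
    return ("It takes {:.1f} seconds to return {:,} features in {} format "
            "over a window that extends {}% of the total area"
            ).format(elapsed, len(features), fmt, window_pct)

# Stand-in for a real query; returns 40,123 dummy features
print(describe_query(lambda: range(40_123)))
```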
4 weeks ago

POST
Put me in the "crazy number" camp. It's also suboptimal for an enterprise geodatabase. The tools are optimized for rows, not for tables. When a Windows "list directory contents" request takes three minutes to complete, there's no chance that row access will be fast. 2500 feature classes is too many, even spread across 100 file geodatabases. Far better to have 25 tables with 100M rows each. - V
02-16-2024 07:13 AM

POST
Since the "file" in "file geodatabase" is there to distinguish it from an RDBMS-based geodatabase, it's important to know that a file geodatabase is never a single file (it's a directory with many files within [currently a minimum of 41]). If you have a single file, then the provider (or somewhere in between) messed up. You can try renaming it to "....gdb.zip" and opening that, but even if it doesn't fail, you really ought to go back to the provider for a correctly named file. (Part of this is Microsoft's fault, for trying to make file suffixes optional by hiding them by default.) - V
01-05-2024 12:22 PM

POST
@SteveCole wrote: "You cannot edit an enterprise SDE feature class with a basic ArcGIS license level." This is not completely true. There are a number of ways to access and maintain tables and rows in an EGDB that don't require a Standard or Advanced license (for example, a Portal survey could start data upload without even a Basic license). And with Standard/Advanced, you can do a number of tasks in a PostgreSQL database without enabling it as an enterprise geodatabase. - V
12-27-2023 06:27 AM

POST
This is a really broad question. The "most efficient" method depends on the nature of the data, the way the data is managed, the nature of the updates, and your experience and skill with ETL solutions. The source formats are somewhat trivial; it's what you need to do with them that matters. Some folks have no problem taking 10 minutes to update 100k rows via a Portal API, while others are updating 100 million rows every 8 hours. Once you add in the different ways to acquire data, well, there are too many options for anything beyond generic advice like "As efficiently as possible." If you want specific advice, you'll need to provide more details on the process you are trying to optimize. - V
12-22-2023 07:49 AM

DOC
I have a copy of ArcInfo 5.0 FORTRAN executive library reference manuals, but I'm not giving those up (my wife can put them in my casket to get rid of them). - V
12-05-2023 08:59 AM

POST
I built a minimal-power-draw, high-end gaming-class system (65W AMD Ryzen 7 (8-core), 32GB RAM, 2x 500GB M.2 disks) with a GeForce RTX 4060 graphics card for around $1K not too long ago. It was originally planned around motherboard graphics, but that was too laggy for dual HD displays, and I was able to get the RTX on sale (it doubled the power draw during graphics benchmarking, with more than ten times the frames per second, but stayed well under the capability of the 80+ Platinum 650W power supply). It runs fairly cool with just two (silent) case fans and the stock fan-on-a-heat-sink CPU cooler, and our teenager is jealous of mom's WFH desktop GPU. You can build a new system that meets your requirements, or work with a build-to-suit vendor to do so, as long as you don't need to purchase new monitors, too. - V
12-05-2023 08:54 AM

POST
You can write your own, but I doubt there's any general-purpose tool for this. The atan2 function takes dy & dx parameters, but the result is measured counter-clockwise from 3 o'clock, so you'd need to reverse the direction and rotate the zero to get a compass bearing. - V
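A minimal sketch of the reverse-and-rotate, assuming the goal is a compass bearing (degrees clockwise from north); the compass_bearing helper is hypothetical, not a library function:

```python
import math

def compass_bearing(dx, dy):
    """Bearing in degrees clockwise from 12 o'clock (north) for a planar
    displacement of dx east and dy north.

    math.atan2(dy, dx) measures counter-clockwise from 3 o'clock (east);
    swapping the arguments to atan2(dx, dy) rotates the zero to north
    and reverses the direction to clockwise in one step.
    """
    return math.degrees(math.atan2(dx, dy)) % 360.0

print(compass_bearing(0, 1))   # due north
print(compass_bearing(1, 0))   # due east (90)
print(compass_bearing(-1, 0))  # due west (270)
```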
12-01-2023 09:26 AM

POST
The long answer is that you might be able to use the fid returned by arcpy.da.InsertCursor.insertRow() to build a dictionary (lookup[fid] = old_globalid), then use the created fid in a da.UpdateCursor pass to update the uuid value:

updated = 0
with arcpy.da.UpdateCursor(fc, ['objectid', 'globalid']) as cursor:
    for row in cursor:
        if row[0] in lookup:
            cursor.updateRow([row[0], lookup[row[0]]])
            updated += 1
print("{:d} globalid values updated".format(updated))

This could only work because cursors ignore GlobalID values. I haven't tested this, but it's worth a shot. - V

PS: Oh, and there exists a chance (however slight) that one of the new UUIDs could conflict with a replacement GlobalID value, though it's so slight that you probably don't need to check first (but if it does occur, the lookup dictionary in memory will be lost with the error).
11-28-2023 11:16 AM

BLOG
Thanks, Dan. Having changed it all to make the center exclusion more restrictive early on, then fixed a bug in the center point calculation (it's 2/3, not 3/4 the height), I think I might have bollixed the level counts and introduced duplicate points in centers. Yup, duplicates. I'll just undo that correction, which makes the point count smaller (though still exponential), and simpler. Just a tweak to two code blocks, and a couple of graphics, and the charts. ---- Okay, all done. On to the tool...
11-26-2023 01:21 PM