POST
I'm mostly using AWS RDS, where the Esri sde.st_geometry type isn't available. I've never seen performance issues with PostGIS geometry types, though I've always built appropriate indexes. I have seen poor performance with millions of spatially fragmented features (at 1 million it's noticeable; past 10 million it starts to impact functionality), but spatial fragmentation is storage-agnostic, and impacts all storage implementations (shapefile, file geodatabase, Oracle, SQL Server, PostgreSQL, Informix, DB2,...). More details would be needed to distinguish whether the issue is with the storage format or the underlying data. Among the information needed:
- Exact versions of PostgreSQL and PostGIS
- The number of tables involved (both with and without geometry columns)
- The number of features in the largest table
- The geometry type (point/line/polygon)
- For line/polygon, the average number of vertices per feature
- The frequency of updates on the features
- The average number of features returned by a typical query
- A description of the performance problem using numbers (e.g., "It takes 14.7 seconds to return 40,123 features in GeoJSON format over a window that extends across 17% of the total area")
- V
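For reference, a minimal sketch of the kind of spatial index that keeps PostGIS geometry queries fast, using psycopg2; the connection string, table name (parcels), and column name (geom) are hypothetical, not from the original question:

import psycopg2

# Hypothetical connection; swap in your own RDS endpoint and credentials.
conn = psycopg2.connect("dbname=gisdb user=gis host=example.rds.amazonaws.com")
with conn, conn.cursor() as cur:
    # GiST is the standard index type for PostGIS geometry columns.
    cur.execute("CREATE INDEX IF NOT EXISTS parcels_geom_gix "
                "ON parcels USING GIST (geom)")
    # Confirm the planner actually uses the index for a window query.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM parcels "
                "WHERE geom && ST_MakeEnvelope(0, 0, 1000, 1000, 3857)")
    for (line,) in cur.fetchall():
        print(line)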
02-20-2024 07:41 AM | 1 | 3 | 507
POST
Put me in the "crazy number" camp. It's also suboptimal for an enterprise geodatabase: the tools are optimized for rows, not for tables. When a Windows "list directory contents" request takes three minutes to complete, there's no chance that row access will be fast. 2500 feature classes is too many, even spread across 100 file geodatabases. Far better to have 25 tables with 100 million rows each. - V
02-16-2024 07:13 AM | 1 | 0 | 191
POST
Since the "file" in "file geodatabase" is there to distinguish it from an RDBMS-based geodatabase, it's important to know that a file geodatabase is never a single file (it's a directory with many files within [currently a minimum of 41]). If you have a single file, then the provider (or some step in between) messed up. You can try renaming it to "....gdb.zip" and opening that, but even if that doesn't fail, you really ought to go back to the provider for a correctly named file. (Part of this is Microsoft's fault, for trying to make file suffixes optional by hiding them by default.) - V
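As a quick sanity check on a delivery, you can confirm that a ".gdb" path is actually a directory full of files; a sketch, assuming a hypothetical path:

import os

path = r"C:\data\delivery.gdb"  # hypothetical path from a provider
if os.path.isdir(path):
    count = len(os.listdir(path))
    print("{} is a directory containing {} files".format(path, count))
else:
    print("{} is a single file -- ask the provider for the real .gdb".format(path))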
01-05-2024 12:22 PM | 1 | 1 | 682
POST
@SteveCole wrote: "You cannot edit an enterprise SDE feature class with a basic ArcGIS license level." This is not completely true. There are a number of ways to access and maintain tables and rows in an EGDB that don't require a Standard or Advanced license (for example, a Portal survey can upload data without even a Basic license). And with Standard/Advanced, you can do a number of tasks in a PostgreSQL database without enabling it as an enterprise geodatabase. - V
12-27-2023 06:27 AM | 0 | 0 | 393
POST
This is a really broad question. The "most efficient" method depends on the nature of the data, the way the data is managed, the nature of updates, and your experience and skill with ETL solutions. The source formats are somewhat trivial; it's what you need to do with them that matters. Some folks have no problem taking 10 minutes to update 100k rows via a Portal API, while others are updating 100 million rows every 8 hours. Once you add in different ways to acquire data, well, there are too many options for anything beyond generic advice like "As efficiently as possible." If you want specific advice, you'll need to provide more details on the process you are trying to optimize. - V
12-22-2023 07:49 AM | 1 | 0 | 233
DOC
I have a copy of the ArcInfo 5.0 FORTRAN executive library reference manuals, but I'm not giving those up (my wife can put them in my casket to get rid of them). - V
12-05-2023 08:59 AM | 0 | 0 | 500
POST
I built a minimal-power-draw, high-end gaming-class system (65W AMD Ryzen 7 (8-core), 32 GB RAM, 2x 500 GB M.2 drives) with a GeForce RTX 4060 graphics card for around $1K not too long ago. It was originally planned with motherboard graphics, but that was too laggy for dual HD displays, and I was able to get the RTX on sale (it doubled the power draw during graphics benchmarking, with more than ten times the frames per second, but still well under the capability of the 80+ Platinum 650W power supply). It runs fairly cool with just two (silent) case fans and the default fan-on-a-heatsink CPU cooler, and our teenager is jealous of mom's WFH desktop GPU. You can build a new system that meets your requirements, or work with a build-to-suit vendor to do so, as long as you don't need to purchase new monitors, too. - V
12-05-2023 08:54 AM | 2 | 0 | 1140
POST
You can write your own, but I doubt there's any general-purpose tool for this. The atan2 function takes dy & dx parameters, but its angle is measured counter-clockwise from 3 o'clock, so to get a compass bearing you'd need to reverse the sign and rotate by 90 degrees. - V
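A minimal sketch of that conversion (the function name is mine, not from any toolset):

import math

def compass_bearing(dx, dy):
    # atan2 is counter-clockwise from 3 o'clock (east); negating the angle
    # and rotating by 90 degrees yields clockwise-from-north bearings.
    return (90.0 - math.degrees(math.atan2(dy, dx))) % 360.0

print(compass_bearing(0, 1))   # 0.0   due north
print(compass_bearing(1, 0))   # 90.0  due east
print(compass_bearing(-1, 0))  # 270.0 due west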
12-01-2023 09:26 AM | 0 | 0 | 330
POST
The long answer is that you might be able to use the fid returned by arcpy.da.InsertCursor.insertRow() to build a dictionary (lookup[fid] = old_globalid), then use the created fid in a DA UpdateCursor pass to update the UUID value:

updated = 0
# assumes fc and lookup (fid -> old GlobalID) already exist
with arcpy.da.UpdateCursor(fc, ['objectid', 'globalid']) as cursor:
    for row in cursor:
        if row[0] in lookup:
            cursor.updateRow([row[0], lookup[row[0]]])
            updated += 1
print("{:d} globalid values updated".format(updated))

This could only work because cursors ignore GlobalID values. I haven't tested this, but it's worth a shot. - V PS: Oh, and there exists a chance (however slight) that one of the new UUIDs could conflict with a replacement GlobalID value, though it's so slight that you probably don't need to check first (but if a conflict does occur, the lookup dictionary in memory will be lost when the error is raised).
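For completeness, the lookup itself might be built during the insert pass along these lines (a sketch, untested like the rest; src_fc and fc are hypothetical paths):

import arcpy

src_fc = r"C:\data\old.gdb\features"  # hypothetical source
fc = r"C:\data\new.gdb\features"      # hypothetical target

lookup = {}
with arcpy.da.SearchCursor(src_fc, ['SHAPE@', 'globalid']) as scur, \
     arcpy.da.InsertCursor(fc, ['SHAPE@']) as icur:
    for shape, old_gid in scur:
        fid = icur.insertRow([shape])  # insertRow returns the new objectid
        lookup[fid] = old_gid          # remember which fid had which GlobalID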
11-28-2023 11:16 AM | 0 | 1 | 484
BLOG
Thanks, Dan. Having changed it all to make the center exclusion more restrictive early on, then fixed a bug in the center-point calculation (the centroid divides each median 2:1, so it sits at 2/3, not 3/4, of the height from a vertex), I think I might have bollixed the level counts and introduced duplicate points in the centers. Yup, duplicates. I'll just undo that correction, which makes the point count smaller (though still exponential), and simpler. Just a tweak to two code blocks, a couple of graphics, and the charts. ---- Okay, all done. On to the tool...
11-26-2023 01:21 PM | 1 | 0 | 259
BLOG
In Calculate the number of fixed-size circles which can fit in an arbitrary polygon (Part I), I got to the point where I could generate an ideally packed collection of circles for fitting inside a figure within a fixed grid. Expanding the hexagon to a size in which the desired circles fit without overlapping was the first step, so now we need to move on to:

1. Making the circles actually overlay an input layer,
2. Finding all shifted and/or rotated circle sets which have the most fitting circles, and
3. Attempting a "best fit" among the alternatives which meet the criteria.

But first, I'm going to divert into computing theory... It should be obvious that unless the test condition is this: there are likely going to be an infinite number of sets of circles which meet the "most circles in the figure" criterion (if any do at all). Any time the number of answers is infinite, the mechanism to obtain those answers is going to be, umm..., slow (where "slow" means "takes infinite time"). Even finding the first example of a set of circles that meets the criterion could take a while.

This brings us to "P" and "NP". Briefly, these are, respectively, the set of "easy" problems (those that can be solved in polynomial time) and the set of "hard" problems (loosely, those for which no polynomial-time solution is known). This barely scratches the surface of the topic, but the basic idea is that "easy" problems, where the solution cost grows as a polynomial in the number of data elements n (say, n squared, as in an ugly sort solution), are different from "hard" problems, where the cost grows exponentially, like 2 to the n-th power (there is disagreement as to whether P == NP, but I'm not going to go down that rabbit hole). Suffice it to say, heuristics are needed to solve problems which potentially have an effectively infinite number of solutions, and perfect can be the enemy of "good enough". In this case, we probably don't need to test every possible origin and rotation of the packed circle set; we might be able to generate just a sample of the potential origins and rotations, to see if a "good enough" solution emerges (and if it doesn't, we can try a bit longer).

Now, there are two ways to generate the origins and rotations: systematic and random. In the systematic approach, we divide the figure evenly (which works out to be tessellations). For example, starting with a square to partition, we choose the center point (aka "reference point", level zero), then choose the quarter centroids: And then partition those (and so on): Great, right? Well, maybe not, because our reference hexagons look like this: with both type I and type II error (points outside and gaps within). Doh!

So maybe tessellations of a square are not going to optimally partition the hexagon space. But do we need to partition via the square at all? Maybe we need to partition the circle, inscribed within the hexagon. But how do we do that? We could use even angular spacing, say at 15 degrees, with eighths of the radius: but that doesn't partition the space in 2D, since the gap increases with radius. If we use equidistant arc segments at each radius, we can control vertex density: but the distribution is still sort of funky, and it's really hard to control the number of radial segments when trying to increase or decrease density (this works pretty well at eighths, but there was a constant in the computation, and I'd be afraid to halve or double the density).
I guess we could try a spiral (prorated along its length, for which we can obtain a formula), but we'd still have the issue that the hexagons aren't filled. Maybe we need to use hexagons after all. Since a hexagon can be partitioned into six equilateral triangles, we can use the center as level 0 and the centroids of the six triangles as level 1, then tessellate each triangle into four equilateral triangles of side s/2 (skipping the center point of each center triangle, which is already present in the level above). In Python, the point count works out to this simple expression (which, yes, is complicated by ternary logic):

count = 1 if levels == 0 else 6 * 4**(levels-1) + 1

Here are the points generated to level five, with triangles generated to level three: And because it's pretty cool, I'll include the Python for allocating the points here:

import math
import arcpy
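# A triangle that knows its centroid and can split itself into four
# half-size triangles (the middle one is flagged because its centroid,
# which coincides with the parent's, was already emitted a level up).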
class Equilateral:
def __init__(self,p1,p2,p3,is_center=False):
self._p1 = p1
self._p2 = p2
self._p3 = p3
self.was_center = is_center
m2 = arcpy.Point((p2.X + p3.X) * 0.5,
(p2.Y + p3.Y) * 0.5)
dy = m2.Y - p1.Y
dx = m2.X - p1.X
h = math.sqrt(dx*dx + dy*dy)
        cm = h * 2.0 / 3.0  # the centroid sits 2/3 along the median from p1
theta = math.atan2(dy,dx)
self._p0 = arcpy.Point(p1.X + (math.cos(theta) * cm),
p1.Y + (math.sin(theta) * cm))
def as_wkt(self):
return ("POLYGON (({:f} {:f}, {:f} {:f}, {:f} {:f}, {:f} {:f}))".format(
self._p1.X,self._p1.Y,
self._p2.X,self._p2.Y,
self._p3.X,self._p3.Y,
self._p1.X,self._p1.Y))
def split(self):
m1 = arcpy.Point((self._p1.X + self._p2.X) * 0.5,
(self._p1.Y + self._p2.Y) * 0.5)
m2 = arcpy.Point((self._p2.X + self._p3.X) * 0.5,
(self._p2.Y + self._p3.Y) * 0.5)
m3 = arcpy.Point((self._p3.X + self._p1.X) * 0.5,
(self._p3.Y + self._p1.Y) * 0.5)
return [Equilateral(m1,m2,m3,True),
Equilateral(self._p1,m1,m3),
Equilateral(m1,self._p2,m2),
Equilateral(m3,m2,self._p3)]
def get_center(self):
return self._p0
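# A regular hexagon sized by the diameter of its inscribed circle; it splits
# into six equilateral triangles and allocates sample points level by level.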
class Hexagon:
def __init__(self,center,diameter,is_transverse=False):
        radius = float(diameter) * 0.5         # apothem: diameter is across the flats
        side = radius / math.sin(math.pi / 3)  # side length from the apothem
#print("-- radius = {:f}, side = {:f}".format(radius,side))
self._p0 = center
if is_transverse:
self._p1 = arcpy.Point(center.X,
center.Y - side)
self._p2 = arcpy.Point(center.X + radius,
center.Y - side * math.sin(math.pi / 6.0))
self._p3 = arcpy.Point(center.X + radius,
center.Y + side * math.sin(math.pi / 6.0))
self._p4 = arcpy.Point(center.X,
center.Y + side)
self._p5 = arcpy.Point(center.X - radius,
center.Y + side * math.sin(math.pi / 6.0))
self._p6 = arcpy.Point(center.X - radius,
center.Y - side * math.sin(math.pi / 6.0))
else:
self._p1 = arcpy.Point(center.X + side,
center.Y)
self._p2 = arcpy.Point(center.X + side * math.cos(math.pi / 3.0),
center.Y + radius)
self._p3 = arcpy.Point(center.X + side * math.cos(2.0 * math.pi / 3.0),
center.Y + radius)
self._p4 = arcpy.Point(center.X - side,
center.Y)
self._p5 = arcpy.Point(center.X - side * math.cos(math.pi / 3.0),
center.Y - radius)
self._p6 = arcpy.Point(center.X - side * math.cos(2.0 * math.pi / 3.0),
center.Y - radius)
def as_wkt(self):
return ("POLYGON (({:f} {:f}, {:f} {:f}, {:f} {:f}, {:f} {:f}, "
"{:f} {:f}, {:f} {:f}, {:f} {:f}))".format(
self._p1.X,self._p1.Y,
self._p2.X,self._p2.Y,
self._p3.X,self._p3.Y,
self._p4.X,self._p4.Y,
self._p5.X,self._p5.Y,
self._p6.X,self._p6.Y,
self._p1.X,self._p1.Y))
def split(self):
return [Equilateral(self._p0,self._p1,self._p2,True),
Equilateral(self._p0,self._p2,self._p3,True),
Equilateral(self._p0,self._p3,self._p4,True),
Equilateral(self._p0,self._p4,self._p5,True),
Equilateral(self._p0,self._p5,self._p6,True),
Equilateral(self._p0,self._p6,self._p1,True)]
def get_center(self):
return self._p0
@classmethod
def compute_point_count(cls,levels):
return 1 if levels == 0 else 6 * 4**(levels-1) + 1
def allocate_points(self,levels):
points = [arcpy.Point(self._p0.X,self._p0.Y)]
if levels > 0:
parts = self.split()
for part in parts:
c = part.get_center()
points.append(arcpy.Point(c.X,c.Y))
for level in range(2,levels+1):
prev = parts
parts = []
for part in prev:
parts.extend(part.split())
for part in parts:
if (not part.was_center):
c = part.get_center()
points.append(arcpy.Point(c.X,c.Y))
return points
ref_hex = Hexagon(center=arcpy.Point(32.0,32.0),diameter=64)
for levels in range(0,5):
points = ref_hex.allocate_points(levels)
print("Level {:d} has {:3d} point{:s}".format(
        levels,len(points),'' if len(points) == 1 else 's'))

Basically, I define two classes, Equilateral and Hexagon, let the hexagon split itself into triangles, and let the triangles split themselves (keeping track of whether a triangle is at the center of the previous level). By returning the points as they're generated, in level order, we still get good coverage if we need to mash the Cancel button while the app is running. I should note here that allocating those points is pretty intensive, and takes a long time at higher tessellation levels (even on a fairly recent high-end laptop).

When partitioning the rotation angle, we can skip the need to rotate a full 360 degrees (2*pi radians) and just consider 180 degrees (pi radians), because hexagons are biaxially symmetric. Here we catch a break, because the same partitioning technique used on the square will work great on the angle, just in one dimension (with a bit of a twist): start with zero and ninety degrees as level 0 (that is, both hexagon forms, default and transverse), then split those in half (at 45 and 135, using 180 as an implicit endpoint), and again (22.5, 67.5, 112.5, 157.5), and again, which, to level four, looks like this: The code for pre-calculating the rotation count doesn't even involve ternary logic:

num_rotations = 2**(levels+1)

This component also grows exponentially, but more slowly, since it's a power of two. Okay, so with these partitioning levels, we can predict the number of iterations (and, after some executions, generate a useful "time remaining" estimate). Here's a table showing the number of iterations, and the products thereof: The color gradations in this chart are at powers of 10 -- starting with green under ten thousand, yellow under one hundred thousand, orange under a million,..., the values in black exceed 32-bit INT_MAX (more than 2.14 billion* iterations). For obvious reasons, I'm going to cap the tessellation exponents at 9(-ish), and place a hard limit on the number of valid permutations permitted (over which a -99999 will show and a warning will be reported).

One last consideration: systematic sampling is not the only option. We could also choose point placements randomly, using Create Random Points (Data Management). The trick here is that randomness generally requires more iterations to make sure there are no significant gaps. Here are the same number of points as four hexagon tessellation levels generate (385 points): systematic first, then an equal number of random dots (overlaid, systematic crosses and random dots), then several multipliers of random densification: What densification value would you choose? You know what, since this is all going into a tool, why not make that a parameter, too: if randomness is chosen, we can also select a densification factor (default of 3.0).

Finally, we know what the parameters are, and how many iterations they could take, and guess what? I'm going to need a Part III to go over the tool and the results thereof. (Sorry.) - V

---- *In American usage; one thousand million in British parlance.
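A quick sketch reproducing the iteration arithmetic above (the point-count and rotation-count formulas, and their products; rotation_angles enumerates the same angle set the level scheme produces, just in sorted rather than level order):

def point_count(levels):
    # Hexagon tessellation: 1 center point, then 6 * 4**(levels-1) more.
    return 1 if levels == 0 else 6 * 4**(levels - 1) + 1

def rotation_count(levels):
    # Angle partition: 0 and 90 at level 0, midpoints added at each level.
    return 2**(levels + 1)

def rotation_angles(levels):
    # The angles, in degrees, over the symmetric 180-degree range.
    step = 180.0 / rotation_count(levels)
    return [i * step for i in range(rotation_count(levels))]

print(rotation_angles(1))  # [0.0, 45.0, 90.0, 135.0]
for p in range(6):
    print([point_count(p) * rotation_count(r) for r in range(6)])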
11-26-2023 08:36 AM | 5 | 2 | 534
POST
Because the admin username is half of the authentication pair needed to obtain administrative access to the server, it's probably not a good idea to publicize what is used currently, or what it might be in the future. I suggest using something that makes sense for your organization, with "adm" or "admin" or "root" or "boss" in there somewhere, but going out of your way not to use the admin naming pattern used in training. You should, of course, document this adequately, potentially in a locked cabinet to which management has a key, just in case. The exam-writing consultants hate the term "headless user", but from a security standpoint I find it better than publishing all data as user "BobJones" because he was the admin eight years (and three admins) ago. The publishing credentials should be closely held as well (and archived in the boss's lockbox), but not as closely as full-on admin privileges. - V
11-21-2023 01:42 PM | 1 | 0 | 318
POST
An export (exp or expdp) of the table would of course capture all column values, as would an import, though it's been too many decades for me to suggest how to alter the intermediate SQL file to prevent difficulties when re-registering the resulting table. - V
11-14-2023 12:48 PM | 0 | 0 | 558
BLOG
I've generated a toolbox and populated arrays of offset and rotation permutations. I just need to implement the execute function. I didn't intend for this to be a cliff-hanger; I'll try to wrap it up soon. - V
11-14-2023 09:38 AM | 0 | 0 | 194
POST
That's a small enough set that you should probably just try it, though it still wouldn't meet your initial criteria, which included setbacks and topography. There's also an issue with web scraping in general, since it usually violates the terms of service of most websites. - V
11-13-2023 01:47 PM | 1 | 0 | 328