POST
It depends on what type of work you are going to do with the database, on your current connection to your local database, and on what your expectations are: e.g. whether you just want to do an occasional update of a limited number of records and/or view the data, or whether you need fast, low-latency access because you are going to rewrite millions of records on a regular basis.

Remote databases can have a very large latency (the time to do a round trip from your local machine to the server and back), which can kill performance if you need to fire thousands or even millions of requests at them. If you run a remote database and need low-latency access, you may need a virtual machine running ArcGIS Pro in the same data center or area, so as to avoid the cost of the round trip.

E.g. I personally run a local PostgreSQL database filled with the entire planet's worth of OpenStreetMap data on a professional dual-CPU, 44-core HP Z840 workstation. In the most outrageous configuration, I filled it with Facebook's now-defunct "Daylight distribution" of OpenStreetMap, which integrated an additional >1 billion buildings from Google's "Open Buildings", causing the largest table to exceed 2.4 billion records. Yes, you read that right, billions! I subsequently process and update hundreds of millions of records in derived tables. I certainly wouldn't want to do this against a remote database with latencies in the hundreds of milliseconds; it would take ages. Running this all locally against PCIe NVMe drives, I am capable of updating hundreds of millions of records per hour from ArcGIS Pro (although batched in sets of tens of thousands to reduce the number of requests). This would be unthinkable against a remote database, unless, as suggested above, you run a VM with Pro in the same data center as the remote database.

Finally, running a local database, although a burden, can teach you a lot about the proper configuration and running of a PostgreSQL database and server, knowledge that you may not gain with a remote database and its fully pre-installed configuration (which may or may not suit your use case or be adjustable to your taste or needs).
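For illustration, here is a minimal sketch of that batching pattern with psycopg2; the connection string and the table/column names ("my_table", "attr", "objectid") are hypothetical placeholders, not my actual setup:

```python
# Hedged sketch: batching UPDATEs with psycopg2 to reduce round trips.
# "my_table", "attr" and the connection string are hypothetical placeholders.
import psycopg2
from psycopg2.extras import execute_batch

conn = psycopg2.connect("dbname=osm user=postgres")  # adjust to your setup
work_items = [(i, f"value_{i}") for i in range(100_000)]  # stand-in data

with conn, conn.cursor() as cur:
    # page_size controls how many statements are sent per round trip;
    # batches of tens of thousands keep the request count (and thus the
    # latency cost) low.
    execute_batch(
        cur,
        "UPDATE my_table SET attr = %s WHERE objectid = %s",
        [(value, oid) for oid, value in work_items],
        page_size=20_000,
    )
conn.close()
```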
05-13-2025
12:44 AM
POST
The CVE seems to concern only one specific library for the Avro format, which doesn't seem to be present in the Pro install (see my listing below, which differs slightly from yours but does not show a file name containing 'avro'). The modules that were found are different ones and, as far as I can tell, not involved in the CVE. I guess the affected module is simply called 'parquet-avro-<VERSION>.jar', but I didn't see the actual full filename listed in the CVE.
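If you want to reproduce such a listing yourself, a small sketch like the one below should work; the install path is an assumption and will differ per machine:

```python
# Hedged sketch: list jars in the Pro install whose name suggests Avro/Parquet.
# The install path below is a hypothetical default; adjust it to your machine.
from pathlib import Path

pro_java = Path(r"C:\Program Files\ArcGIS\Pro\java")  # assumed location
for jar in sorted(pro_java.rglob("*.jar")):
    if "avro" in jar.name.lower() or "parquet" in jar.name.lower():
        print(jar)
```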
05-10-2025
10:49 AM
POST
Those ping latencies do not look too good, although that is to be expected across larger distances with machines in different regions. My local network setup, which is of course not comparable since it is a best case, has <1 ms ping latency to my VM running PostgreSQL. I think this explains a lot of your issue, as does George's comment about running ArcGIS Enterprise and Pro on the same VNet as PostgreSQL: https://innerjoin.bit.io/the-distance-dilemma-measuring-and-mitigating-postgres-network-latency-76f6cd1a6c57
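Besides ping, you can also measure the round-trip latency as the database client actually sees it. A minimal sketch with psycopg2 (the connection string is an assumption):

```python
# Hedged sketch: average PostgreSQL round-trip time from the client side.
# "SELECT 1" keeps server-side work negligible, so the elapsed time is
# dominated by the network round trip.
import time
import psycopg2

conn = psycopg2.connect("dbname=osm user=postgres")  # adjust to your setup
with conn.cursor() as cur:
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        cur.execute("SELECT 1")
        cur.fetchone()
    print(f"average round trip: {(time.perf_counter() - start) / n * 1000:.2f} ms")
conn.close()
```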
05-02-2025
06:46 AM
POST
Of course, OpenStreetMap was already updated a year ago 😉, but you likely already know that: Relation: Blue Heron Lake (12908) | OpenStreetMap
04-28-2025
01:15 PM
POST
@_____ wrote: "The bugs we can report and many of them get fixed with the next update."

I don't believe that for one second. That hasn't been my experience, or the experience of any other person I know. In fact, it's a meme online how terrible Esri is at fixing bugs in a timely manner.

I do think this quote hits on one important aspect of the issue raised here: ESRI rarely, if ever, backports bugfixes to the previous major release. This causes major issues, because even if a bugfix is introduced "in the next release" (which, I agree with you, is definitely not a given; lots of recognized bugs are only fixed after maybe two or three major releases, which is 2-3 years), it may well end up in a version of ArcGIS with a new bug affecting the same functionality.

If that sounds improbable: I have a tool that heavily relies on the Maplex label engine. When ESRI made some major changes to Maplex in ArcMap 10.3 and, if I remember well, Pro 1.4, I was confronted with unusable labeling results. Reporting back confirmed the issue, and a bugfix was introduced in the next major release, only to introduce a new major issue! I finally had to wait until ArcMap 10.6 and Pro 1.6 for the Maplex labeling engine to be fully sorted out and give proper results...

Best software practices regarding bugfixes require fixes to be backported to previous releases, at the very least to the last major release cycle. A fix for an issue in Pro 3.4 should not require upgrading to Pro 3.5, but should also land in the Pro 3.4.x cycle. If you don't do that, you end up with a potentially perpetually broken product, as each new release also inevitably introduces new bugs, and the version users actually use (often the last major one) never gets fixed properly to the extent of being really usable.

I really think that if ESRI sorted out the way they handle these issues and dramatically reduced their bugfix backlog, both ESRI support employees and users would see a dramatic drop in the time needed to deal with issues: rather than seeing the same unfixed issue pop up for the tenth time in the support process, support employees and users would primarily have to deal with truly new issues. There is so much wasted time in the current process that it ultimately also holds back developers and wastes their time, as some repeated issues will ultimately end up on their plate as well.
04-23-2025
01:14 PM
POST
@RyJohnsen wrote: I'm on windows 11. I initially used ThreadPoolExecutor, but I'm not I/O bound and the GIL caused it to be much slower than ProcessPoolExecutor.

Yes, whether ThreadPoolExecutor or ProcessPoolExecutor is the best solution depends on how much actual CPU work versus I/O you are doing (and on the available resources in terms of RAM etc., although that starts to become a pretty moot discussion on modern powerful desktops, which usually have plenty of resources). However, in my experience, it is pretty hard to become purely CPU bound rather than I/O bound. You really need to do significant work to be CPU bound or limited.
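If you want to see the trade-off for yourself, here is a minimal self-contained sketch (standard library only; it assumes nothing beyond CPython's GIL behavior) that times the same CPU-heavy function under both executors:

```python
# Hedged sketch: compare ThreadPoolExecutor vs ProcessPoolExecutor on pure
# CPU work. On CPython, the GIL serializes the threaded version, while the
# process version spreads work over cores at the cost of startup/pickling.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # pure CPU work, no I/O at all
    return sum(i * i for i in range(n))

def timed(executor_cls):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(burn, [2_000_000] * 8))
    return time.perf_counter() - start

if __name__ == "__main__":  # guard required for processes on Windows
    print(f"threads:   {timed(ThreadPoolExecutor):.2f} s")
    print(f"processes: {timed(ProcessPoolExecutor):.2f} s")
```

With any real I/O mixed in (database calls, file access), the threaded version quickly becomes competitive again, which matches my experience above.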
03-25-2025
08:25 AM
POST
Actually, for Python, the maximum number of workers that can be launched on Windows is 61, according to the documentation for concurrent.futures: https://docs.python.org/3/library/concurrent.futures.html

By the way, if you don't want irritating pop-ups of command windows during the execution of the processes, you can add the code below, which changes the Python executable used for child processes and prevents the pop-ups, so execution looks like a thread (although it is still a process):

```python
import multiprocessing
import os
import sys

# use the windowless interpreter for child processes to suppress console pop-ups
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
```
03-25-2025
07:56 AM
POST
Can't help you with the specific error message that you receive, but a couple of remarks:

- If you are still running Windows 10, you may not be able to go beyond 64 processes due to limitations with Windows Processor Groups: https://bitsum.com/general/the-64-core-threshold-processor-groups-and-windows/ It is still not entirely clear to me personally if and how Windows 11 handles this and whether it allows truly unlimited process counts. There were changes to this aspect of Windows, but I am still on Windows 10, so I can't verify it.
- Do you really need processes? If you connect to a database, the database may well turn a threaded application into something close to a process-based multiprocessing solution, while still allowing you to use threads in your Python application. E.g. in the screenshot below, I am using a concurrent.futures.ThreadPoolExecutor to execute up to 44 threads with SQL statements that generalize data on the database using PostGIS commands. As you can see from the inset of the remote desktop to the server, this pushes the PostgreSQL database to a full 100% CPU usage, without any Python processes. A minimal sketch of that pattern follows after this list.
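The sketch assumes psycopg2 and hypothetical per-region tables; each thread holds its own connection and fires one long-running statement, so the heavy CPU work happens in the PostgreSQL server processes, not in Python:

```python
# Hedged sketch: threads dispatching server-side PostGIS work.
# Connection string, table names and the tolerance are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import psycopg2

DSN = "dbname=osm user=postgres"  # adjust to your setup

def run_sql(sql):
    conn = psycopg2.connect(DSN)  # one connection per thread
    try:
        with conn, conn.cursor() as cur:
            cur.execute(sql)  # blocks in this thread; CPU burns on the server
    finally:
        conn.close()

# e.g. generalize per-region partitions in parallel on the server
statements = [
    f"UPDATE roads_{region} SET geom = ST_SimplifyPreserveTopology(geom, 10)"
    for region in ("eu", "na", "sa", "af")
]
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run_sql, statements))
```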
03-21-2025
02:29 AM
POST
If your GPU overheats, then this is not likely a Pro software issue. Laptop CPUs and GPUs need proper cooling, but the cooling grating at the end of the copper heat pipes inside a powerful laptop easily gets clogged with dust. With heavy GPU usage in a dusty home or office environment, it can take just a few months on a brand-new laptop to clog up the cooling grating and cause overheating issues. Open up the laptop and vacuum-clean the grating. The fact that the laptop crashes might also be a reason to send it in for warranty: the laptop should detect improper cooling and throttle the CPU and GPU frequencies, causing a slow laptop, but not a full crash.
03-21-2025
01:56 AM
POST
Well, this is getting to be quite a hack, but an *.mmpk file is just a ZIP file that you can unzip to an empty (new) folder. If you do that, you should see its contents. (An ArcGIS Pro project file can be unzipped as well.) There are likely *.mmap and *.mapx files in the unzipped result. When I look at a custom-created Mobile Map Package for ESRI's OpenStreetMap vector tile service, I see that these files contain a reference to the service as a URL/URI, which you can copy and paste into a browser. That shows a JSON service description, which contains a "defaultStyles" reference to the subfolder with styles (in my case resources/styles). If you then add this subfolder to the URL, you get to the JSON style of the service, where you can see the actual layer definitions as JSON items. Whether all of this gets you anywhere nearer to editing the style of your *.mmpk, IDK..., but at least it should give you some insight into where it gets its styling.
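The unzip step itself can be scripted with the standard library; a minimal sketch (the file names are placeholders for your own package):

```python
# Hedged sketch: treat an .mmpk as a ZIP archive and inspect its contents.
# "MyPackage.mmpk" and the output folder are hypothetical placeholders.
import zipfile
from pathlib import Path

mmpk = Path("MyPackage.mmpk")       # your mobile map package
out = Path("MyPackage_unzipped")    # a new/empty folder
out.mkdir(exist_ok=True)

with zipfile.ZipFile(mmpk) as zf:
    zf.extractall(out)

# list the map definition files that may contain the service URL
for pattern in ("*.mmap", "*.mapx"):
    for f in out.rglob(pattern):
        print(f)
```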
10-24-2024
12:00 PM
POST
Does the package not have some customizable JSON style in it that you could edit, e.g. in ESRI's vector style editor or by hand in a text editor? AFAIK, vector tiles are always styled dynamically, and thus require a style file to define their symbology, layers, etc. It must be coming from somewhere in the map package... or reference ESRI's navigation style JSON on the web.
10-24-2024
10:35 AM
POST
@TimMinter wrote: Data Interoperability extension claimed a successful load after 11 hours 54 minutes (ugh), all on the local machine with no network transfer happening.

Considering you were loading 860M records to begin with, that 12-hour FME load doesn't sound too bad if it used database INSERTs (which it likely did, unless FME can use COPY). INSERTs are far more expensive in terms of performance than COPY or UPDATEs in PostgreSQL; this is the nature of PostgreSQL and its technical implementation. Your timing works out to a rate of about 72M records/hour, which seems reasonable. My custom multi-threaded code also doesn't get above about 150M INSERTs per hour: about double the rate, but not an order of magnitude faster. Considering that Data Interoperability likely has more overhead due to all its inner circuitry, I think it might well be slower than the raw performance I am getting from custom code.
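For comparison, this is roughly what a COPY-based load looks like with psycopg2; the table, columns and connection string are assumptions for the sketch:

```python
# Hedged sketch: bulk load via COPY instead of per-row INSERTs.
# COPY streams all rows in a single command, which is where the large
# speed difference over INSERTs comes from in PostgreSQL.
import io
import psycopg2

conn = psycopg2.connect("dbname=osm user=postgres")  # adjust to your setup
# stand-in data: tab-separated rows, the default COPY text format
buf = io.StringIO("".join(f"{i}\tname_{i}\n" for i in range(1_000_000)))

with conn, conn.cursor() as cur:
    cur.copy_expert("COPY my_table (id, name) FROM STDIN", buf)
conn.close()
```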
10-24-2024
10:29 AM
POST
I have never used StreetMap Premium, but reading this page: https://doc.arcgis.com/en/streetmap-premium/latest/get-started/overview.htm shouldn't you be downloading the GCS mobile map package from your StreetMap Premium licensed organizational "My ESRI" portal? That map package appears to contain pre-symbolized data similar to ESRI's "Navigation Map" style, if I understand the Help text correctly. See the last bullet, "Cartographic display".
10-24-2024
10:06 AM
POST
@TimMinter wrote: In further poking about the web, I ran across a bunch of grumbling about how Python will just hang during larger data loads. I'm assuming that ArcGIS Pro Append GP tool uses Python code, and I have a suspicion (that I won't pursue) that it's just another example of that behavior.

AFAIU, quite a lot of the core ArcGIS geoprocessing tools have been written in C++, not Python, but Vince can give a more informed and balanced answer. I don't agree that "Python will just hang during larger data loads" is the cause of data load issues; see my response about OpenStreetMap data processing at planetary scale. Crappy programming and poor hardware / networks can cause major issues, though.

Marco
10-22-2024
02:58 PM
POST
Well, I can't vouch for any specific tool like Append, but my experience has shown that you can handle multi-billion record spatial tables in PostgreSQL and create ArcGIS Query Layers for them in ArcGIS Pro. However, this experience has also shown that some ArcGIS geoprocessing tools are very efficient and can handle such large datasets, while others cannot. It all depends on the tool and its exact implementation.

I currently run a >4TB OpenStreetMap database as a non-geodatabase PostgreSQL database (v17.0 currently) based on Facebook/Meta's "Daylight Map Distribution for OpenStreetMap", which includes nearly all of Google Open Buildings. The hardware I use is a refurbished HP Z840 workstation that I beefed up with 20TB of NVMe disks (RAID-0), half of them serving as superfast backup, and 512GB RAM. The largest table, containing all buildings and other polygon geometries of OpenStreetMap for the entire planet, has nearly 2.5 billion (!) records (yes, I am using 64-bit ObjectIDs); see the screenshot of DBeaver, with the actual styled data in Pro visible in the background as a PDF exported by ArcGIS Pro.

To handle import and export, I have successfully used Python modules available in ArcGIS Pro, like sqlite3, pyodbc and psycopg2. The original import of OpenStreetMap data uses osm2pgsql; subsequent processing steps use the other libraries. All of these tools have proven capable of transporting hundreds of millions of records into and out of the database without significant slowdown, and of creating >400M record File Geodatabases and SQLite spatial databases as export, but it requires careful thought about how to handle things in your Python code. E.g. storing ObjectIDs to process as ordinary Python number objects in some huge Python list will quickly have you run out of RAM with billions of records, even if you have 64GB or more available. I solved that issue by using numpy arrays and data types, which can be far more memory efficient (see the sketch below). But there is a plethora of issues to deal with if you want to write efficient Python / ArcPy code for handling these amounts of data, so I am not at all surprised that some of the current ArcGIS geoprocessing tools fail to scale to those table sizes, likely because no developer ever tested them at those scales of data processing.

That said, using standard SQL and pyodbc / psycopg2, I have been able to run SQL UPDATE statements that modify an entire >1B record table at a rate of 500M - 3.5B records per hour (up to about 1 million rows updated per second), depending on the operation performed, on this 2016 hardware using Python multi-threading options... Modern hardware should be able to easily double that.
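To make the numpy point concrete, here is a minimal self-contained sketch comparing the per-ObjectID memory cost of a plain Python list with a numpy int64 array (the count is scaled down; extrapolate to billions):

```python
# Hedged sketch: memory per id, Python list of ints vs numpy int64 array.
import sys
import numpy as np

n = 1_000_000  # scale mentally to billions
py_list = list(range(n))
np_array = np.arange(n, dtype=np.int64)

# list cost = list object itself + one int object per element
list_bytes = sys.getsizeof(py_list) + sum(sys.getsizeof(i) for i in py_list)
print(f"python list : ~{list_bytes / n:.0f} bytes per id")
print(f"numpy array : {np_array.nbytes / n:.0f} bytes per id")  # flat 8 bytes
```

On a typical 64-bit CPython this works out to roughly 35+ bytes per id for the list versus a flat 8 bytes for the array, which is the difference between fitting a few billion ObjectIDs in RAM or not.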
10-22-2024
02:09 PM