POST
Well, this is getting to be quite a hack, but an *.mmpk file is just a ZIP file that you can unzip to an empty (new) folder. If you do that, you should see its contents (an ArcGIS Pro project file can be unzipped the same way). There are likely an *.mmap and a *.mapx file among the unzipped contents. When I look at a custom-created Mobile Map Package for ESRI's OpenStreetMap vector tile service, I see these files containing a reference to the service as a URL/URI that you can copy and paste into a browser. That shows a JSON service description, which contains a "defaultStyles" reference to the subfolder with the styles (in my case resources/styles). If you then append this subfolder to the URL, it gets you to the JSON style of the service, where you can see the actual layer definitions as JSON items. Whether any of this gets you closer to editing the style of your *.mmpk, I don't know, but at least it should give you some insight into where it gets its styling.
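A minimal sketch of the unzip step with Python's standard library. All file and folder names here are made-up placeholders, and the script builds a tiny dummy "package" first so it runs end to end; in practice you would point `mmpk_path` at your real *.mmpk instead:

```python
import zipfile
from pathlib import Path

# Build a tiny stand-in "package" so this sketch is runnable end to end.
# The inner names (p13/p13.mmap, resources/styles/root.json) are hypothetical.
mmpk_path = Path("demo.mmpk")
with zipfile.ZipFile(mmpk_path, "w") as zf:
    zf.writestr("p13/p13.mmap", "{}")
    zf.writestr("p13/resources/styles/root.json", "{}")

# An .mmpk is a plain ZIP archive, so zipfile can extract it directly.
out_dir = Path("demo_unzipped")
with zipfile.ZipFile(mmpk_path) as zf:
    zf.extractall(out_dir)

# List the extracted files and look for *.mmap / *.mapx / style JSON.
found = sorted(str(p.relative_to(out_dir))
               for p in out_dir.rglob("*") if p.is_file())
print(found)
```

From there you can open the *.mmap / *.mapx files in a text editor and search for the service URL mentioned above.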
10-24-2024
12:00 PM
POST
Does the package not contain some customizable JSON style that you could edit, e.g. in ESRI's vector tile style editor or by hand in a text editor? AFAIK, vector tiles are always styled dynamically, and thus require a style file to define their symbology, layers, etc. It must either come from somewhere inside the map package, or reference ESRI's navigation style JSON on the web.
10-24-2024
10:35 AM
POST
@TimMinter wrote: "Data Interoperability extension claimed a successful load after 11 hours 54 minutes (ugh), all on the local machine with no network transfer happening." Considering you were loading 860M records to begin with, that 12-hour FME load doesn't sound too bad if it used database INSERTs (which it likely did, unless FME can use COPY). Database INSERTs are far more expensive in terms of performance than COPY or UPDATEs in PostgreSQL; this is the nature of PostgreSQL and its technical implementation. Your timing works out to a rate of about 72M records per hour, which seems reasonable. My custom multi-threaded code also doesn't get above about 150M INSERTs per hour: about double the rate, but not an order of magnitude faster. Considering Data Interoperability likely has more overhead due to all its inner circuitry, I think it might well be slower than the raw performance I am getting from custom code.
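The COPY-versus-INSERT difference can be sketched with psycopg2. This is illustrative only: the table name `pts`, its columns, and the helper names are hypothetical, and a live `conn` (a psycopg2 connection) is assumed rather than created here:

```python
import io

# Assumed target table (hypothetical):
#   CREATE TABLE pts (id bigint, x double precision, y double precision);

def load_with_insert(conn, rows):
    """Per-row INSERTs: psycopg2's executemany is essentially a loop of
    single INSERT statements, each with its own parse/plan overhead."""
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO pts (id, x, y) VALUES (%s, %s, %s)", rows)
    conn.commit()

def load_with_copy(conn, rows):
    """COPY FROM STDIN: streams all rows through one command, which is
    typically several times faster for bulk loads in PostgreSQL."""
    buf = io.StringIO("".join(f"{i}\t{x}\t{y}\n" for i, x, y in rows))
    with conn.cursor() as cur:
        cur.copy_expert("COPY pts (id, x, y) FROM STDIN", buf)
    conn.commit()
```

Whether FME/Data Interoperability can be made to use COPY under the hood I don't know, but if it can, that would be the first knob to try.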
10-24-2024
10:29 AM
POST
I have never used StreetMap Premium, but reading this page: https://doc.arcgis.com/en/streetmap-premium/latest/get-started/overview.htm, shouldn't you be downloading the GCS mobile map package from your StreetMap Premium licensed organizational "My ESRI" portal? That map package appears to contain pre-symbolized data similar to ESRI's "Navigation Map" style, if I understand the Help text correctly; see the last bullet, "Cartographic display".
10-24-2024
10:06 AM
POST
@TimMinter wrote: "In further poking about the web, I ran across a bunch of grumbling about how Python will just hang during larger data loads. I'm assuming that ArcGIS Pro's Append GP tool uses Python code, and I have a suspicion (that I won't pursue) that it's just another example of that behavior." AFAIU, quite a lot of the core ArcGIS geoprocessing tools are written in C++, not Python, but Vince can give a more informed and balanced answer. I also don't agree with "Python will just hang during larger data loads" as the cause of data load issues; see my response about OpenStreetMap data processing at planetary scale. Crappy programming and poor hardware / networks can cause major issues, though. Marco
10-22-2024
02:58 PM
POST
Well, I can't vouch for any specific tool like Append, but my experience has shown that you can handle multi-billion-record spatial tables in PostgreSQL and create ArcGIS Query Layers for them in ArcGIS Pro. That same experience has also shown that some ArcGIS geoprocessing tools are very efficient and can handle such large datasets, while others cannot. It all depends on the tool and its exact implementation.

I currently run a >4TB OpenStreetMap database as a non-geodatabase PostgreSQL database (v17.0 currently) based on Facebook/Meta's "Daylight Map Distribution for OpenStreetMap", which includes nearly all of Google Open Buildings. The hardware I use is a refurbished HP Z840 workstation that I beefed up with 20TB of NVMe disks (RAID-0), half of it serving as super-fast backup, and 512GB RAM. The largest table, containing all buildings and other polygon geometries of OpenStreetMap for the entire planet, has nearly 2.5 billion(!) records (yes, I am using 64-bit ObjectIDs); see the screenshot of DBeaver, with the actual styled data in Pro visible in the background as a PDF exported by ArcGIS Pro.

To handle import and export, I have successfully used Python modules available in ArcGIS Pro, like sqlite3, pyodbc and psycopg2. The original import of the OpenStreetMap data uses osm2pgsql; subsequent processing steps use the other libraries. All of these tools have proven capable of moving hundreds of millions of records in and out of the database without significant slowdown, and of creating >400M-record File Geodatabases and SQLite spatial databases as export, but it requires careful thought about how to handle things in your Python code. E.g. storing the ObjectIDs to process as ordinary Python number objects in one huge Python list will quickly have you run out of RAM with billions of records, even if you have 64GB or more available. I solved that issue by using numpy arrays and data types, which can be far more memory efficient.

But there is a plethora of issues to deal with if you want to write efficient Python / ArcPy code for handling these amounts of data, so I am not at all surprised that some of the current ArcGIS geoprocessing tools fail to scale to those table sizes, likely because no developer ever tested them at those scales of data processing. That said, using standard SQL and pyodbc / psycopg2, I have been able to e.g. run SQL UPDATE statements that modify an entire >1B-record table at a rate of 500M - 3.5B records per hour (up to about 1 million rows updated per second), depending on the operation performed, on this 2016 hardware using Python multi-threading options... Modern hardware should easily double that.
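The memory point about Python lists versus numpy arrays can be illustrated with a short sketch (sizes are approximate and Python-version dependent; scale the idea up to billions of IDs):

```python
import sys
import numpy as np

n = 1_000_000  # one million ObjectIDs for the demo

# Plain Python list: every int is a full object (~28 bytes) plus an
# 8-byte pointer per slot in the list itself.
ids_list = list(range(n))
list_bytes = sys.getsizeof(ids_list) + sum(sys.getsizeof(i) for i in ids_list)

# numpy int64 array: one flat buffer, exactly 8 bytes per value.
ids_arr = np.arange(n, dtype=np.int64)
arr_bytes = ids_arr.nbytes

print(f"list : {list_bytes / 1e6:.0f} MB")
print(f"numpy: {arr_bytes / 1e6:.0f} MB")
```

The list version costs roughly 4x the memory of the numpy array here; at billions of records that difference is what decides whether the job fits in RAM at all.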
10-22-2024
02:09 PM
POST
Well, there is the Recalculate Feature Class Extent (Data Management) geoprocessing tool, which you could easily call from ArcPy. Do note, though, that it requires an exclusive schema lock, which might be problematic in some cases. It also calculates the extent based on the actual features, according to the Help page, while your first post suggests setting a predefined "organization" extent that may not reflect the data's true extent. I am not sure whether that is possible with the ESRI tools, or whether you would need to hack into the geodatabase's system tables to achieve it.
09-27-2024
09:52 AM
POST
If I remember it well, the "Number of Points" setting determines the minimum number of data points used to calculate a grid cell value, and the "Maximum Distance" setting can limit this:

- If you set a 'fixed' distance for the search radius, and only 3 points are found within this distance, the cell's value will be based solely on those 3 points, not on the set "Number of Points" (e.g. the default of 12).
- If you set a 'variable' distance, the search for the nearest data points will continue until "Number of Points" (e.g. 12) is found, irrespective of the distance.

So you have to answer these questions:

- Do I mind potentially adding data points beyond the set search radius? You likely shouldn't worry too much about data points beyond the search radius. Since Kriging is in a sense a form of IDW (Inverse Distance Weighted) interpolation, points further away will influence the grid cell's value less anyway, and shouldn't overly contribute to or distort the results even if less appropriate (unless some major break in data values is visible due to e.g. geological factors, in which case you might wish to set barriers).
- Do I care if fewer data points are included to calculate a cell's value? If the data is erratic (which already means it is less suitable for interpolation), including more data points might be better for getting the overall picture.

Either way, I think the differences between the options will be limited. Try it out to find out, and explore the error surface after interpolation.

A last question you always need to ask yourself: is my data suitable for interpolation in the first place? Sometimes other statistical methods are better suited to certain types of data, and classification of data points and correlation with environmental factors based on ordinary statistics is the more appropriate method to extract value from your dataset. E.g. if you have statistically proven that certain values correlate with certain geological strata, classifying a geological map based on that knowledge could also be a valid method of generating a space-filling dataset, instead of interpolation.

Of course, with all the options for data exploration and post-result evaluation in a tool like Geostatistical Analyst, you should be able to tell whether your data is suitable for interpolation or not. But sometimes people forget that the data should in fact have spatial auto-correlation, and stubbornly ignore indications otherwise, in a desperate attempt to "create a surface" from a set of data points, because the data "must be interpolated!" (no, it doesn't always need to be, and if you've sampled at the wrong spatial scale to capture the actual phenomenon you're trying to get a handle on, spatial auto-correlation may also be virtually absent). If your data exploration says there is no real spatial auto-correlation, don't attempt to interpolate; find other ways to handle your data. It may well still be suitable for some other type of statistical analysis with proper input of environmental factors.
09-01-2024
02:19 PM
POST
Due to a historic JavaScript and Node.js limitation, 64-bit is actually currently 53-bit support for some applications and uses in the ESRI product line. This might change in the future: https://blog.logrocket.com/how-to-represent-large-numbers-node-js-app/ https://v8.dev/features/bigint Not that you are likely to hit 53-bit ObjectIDs anytime soon, but see the "Caution" remark on this ESRI Help page.
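A quick Node.js illustration of where that 53-bit boundary comes from, and of the BigInt type that lifts it:

```javascript
// A JavaScript Number is an IEEE-754 double: integers are only exact
// up to 2**53 - 1 (= Number.MAX_SAFE_INTEGER).
const maxSafe = Number.MAX_SAFE_INTEGER;   // 9007199254740991
console.log(maxSafe + 1 === maxSafe + 2);  // true -- precision is lost

// BigInt (V8 / Node.js 10.4+) has arbitrary precision, so a true
// 64-bit maximum value is representable exactly.
const big = 2n ** 63n - 1n;
console.log(big.toString());               // "9223372036854775807"
```

So a "64-bit ObjectID" larger than 2^53 - 1 cannot round-trip through plain JavaScript Numbers without silently losing its low bits, which is presumably why the Caution remark exists.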
08-27-2024
10:56 AM
POST
This ESRI page mentioning Microsoft Azure Database for PostgreSQL (Flexible Server): https://enterprise.arcgis.com/en/system-requirements/latest/windows/databases-in-the-cloud.htm refers to this Microsoft Help page: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-connect-tls-ssl Maybe there is something there that explains the issue.
08-25-2024
07:38 AM
POST
Can't speak for ESRI, and I'm not a web developer, but considering all the blog posts about new Calcite features that ESRI has released in the past few years, I highly doubt Calcite will be ditched any time soon for another framework. Maybe the Jimu thing being the default is specific to the iOS and Mac platforms, as ESRI may feel Calcite is not yet mature enough there? Just speculation though...
08-25-2024
07:27 AM
POST
Some tools, like Calculate Field, do not actually create a new dataset as output, but just modify the existing dataset. I think this may be why the "Add To Display" option is not working: the modified dataset may already have been added to the TOC in a previous step. You could add an extra "Make Feature Layer" tool step in the model just after the "Calculate Field" tool, and add the output of that instead.
08-02-2024
03:00 AM
POST
One thing you might still try, though, is to disable the graphics card in Windows before installing the graphics driver, then re-enable it after the successful install of the driver, if the installation process hasn't already done so.
05-07-2024
05:33 AM
POST
Also note that these "dedicated" graphics cards in laptops are not like true desktop graphics cards plugged into a PCIe slot. They are usually, in essence, co-processor chips mounted directly on the motherboard to work alongside the integrated graphics. That is fundamentally different from a high-end desktop graphics card, and may explain some of these issues as well.
05-07-2024
05:23 AM