POST
Yeah, I was getting 1.471 million records/second (that's right) doing COPY from CSV to a PostgreSQL unlogged table, which straight up spoiled me. I'm glad I didn't have to get all chunky on the input data, and I'm glad that the local power company, Dell, Microsoft, SAFE, and Esri loaded the 680m records to the EGDB table in the 12 hours that I was not paying any attention at all. Cheers Marco, tim
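For anyone curious what that kind of load looks like, here is a minimal sketch of driving a CSV-to-unlogged-table COPY through psql from Python. The database, table, and file names are illustrative only (not from the post), and the sketch just builds the command rather than executing it:

```python
# Hypothetical sketch of a bulk CSV load into an UNLOGGED PostgreSQL table.
# UNLOGGED tables skip write-ahead logging, which is much of the speed win.
# Database, table, and file names below are illustrative, not from the post.
import shlex

CREATE_SQL = "CREATE UNLOGGED TABLE staging.readings (...);"  # columns elided

def psql_copy_command(database: str, table: str, csv_path: str) -> list[str]:
    """Build a psql invocation that streams a client-side CSV via \\copy."""
    copy_meta = (
        f"\\copy {table} FROM {shlex.quote(csv_path)} "
        "WITH (FORMAT csv, HEADER true)"
    )
    return ["psql", "-d", database, "-c", copy_meta]

cmd = psql_copy_command("scratch", "staging.readings", "/data/part_0001.csv")
print(cmd)
```

The `\copy` meta-command streams the file from the client side, so psql does the reading and the server never needs filesystem access to the CSV.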
Posted 10-24-2024 02:51 PM

POST
Ok, I have _an_ answer, maybe not _the_ answer. So, no, ArcGIS Pro will not do this for me when I'm using the Append tool. The Data Interoperability extension claimed a successful load after 11 hours 54 minutes (ugh), all on the local machine with no network transfer happening. I still need to poke at it a bit to see if I agree with FME's "successful" claim. So, that's that. Thanks everyone. tim
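A first sanity check on a "successful load" claim is simply comparing source and target row counts. This sketch uses sqlite3 in-memory databases as stand-ins for the real PostgreSQL source and geodatabase target, just to keep it self-contained; the table name and row values are made up:

```python
# Sanity-check a claimed load by comparing row counts on both sides.
# sqlite3 stands in for the real source/target connections here.
import sqlite3

def row_count(conn: sqlite3.Connection, table: str) -> int:
    """Return the row count of a table (table name assumed trusted)."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (v INTEGER)")
dst.execute("CREATE TABLE t (v INTEGER)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
dst.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

loaded_ok = row_count(src, "t") == row_count(dst, "t")
print(loaded_ok)  # True when counts agree
```

Matching counts don't prove the values survived intact, but a mismatch immediately disproves the claim.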
Posted 10-24-2024 08:29 AM

POST
Looks like the CMS configuration got jiggled so there's no way to select an off-menu avatar. That's my UX, anyway.
Posted 10-23-2024 01:41 PM

POST
Thanks Marco, lots of potentially useful info there. I'm thankful that I'm only dealing with 1.3b records on this one. The PC I'm using for this work is just a June 2023-vintage Dell Precision workstation that meets or exceeds ArcGIS Pro's recommended and/or optimal specs, so nothing noteworthy.
Posted 10-23-2024 08:26 AM

POST
Well, I'm with you, Vince. However, my employer needs it done, and I still enjoy being employed, so the part of me that I sell for 2080 hours a year would certainly try this. The rest of me wouldn't. 🙂

I loaded 1.3 billion records from CSV files to PostgreSQL last week in about 24 minutes, so that's pretty cool. In further poking about the web, I ran across a bunch of grumbling about how Python will just hang during larger data loads. I'm assuming that the ArcGIS Pro Append GP tool uses Python code, and I have a suspicion (that I won't pursue) that it's just another example of that behavior. If Append could swing it, it would get my records into a geodatabase table schema with fields ordered, named, and aliased per requirements without a bunch of re-messing about. One of my principles is to configure before I code, so I hope I don't have to code up your hints, but I'm glad you shared them.

Some of my workmates noted that the Data Interoperability (FME) extension has loaded larger recordsets for them, so I'm headed that direction now, hoping and expecting that SAFE didn't code up their tools in Python. I'll report back here with what worked, probably.

The table is not a feature class. It is raw material for defined and ad-hoc data products that can support spatio-temporal analysis, etc. by being related to a polygon feature class. So, big table becomes smaller table that meets someone's specialized requirements. We all know that I'm going to try Query Layer and/or Query Table on that big table whether it's wise or not. Cheers, tim
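The back-of-envelope arithmetic for that CSV-to-PostgreSQL load works out to roughly 900 thousand rows per second:

```python
# Throughput estimate for the load described above:
# ~1.3 billion rows in ~24 minutes.
rows = 1_300_000_000
minutes = 24
rows_per_second = rows / (minutes * 60)
print(f"{rows_per_second:,.0f} rows/s")  # about 902,778 rows/s
```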
Posted 10-22-2024 02:31 PM

POST
I'm attempting to use ArcGIS Pro 3.3.1 to read a non-geodatabase PostgreSQL 15.8 database table with ~860 million rows, then write those rows to a geodatabase table. I'm using the Append GP tool to map the input table columns to the output geodatabase table fields and load the records.

When I try with an enterprise geodatabase target table in SQL Server 2019 or 2022, the process slows to a terrible crawl after ~262 million records are loaded. Same thing when I try with a file geodatabase target table, except it looks more like the load stopped, rather than slowed, at ~262 million records. Storage is fine, RAM looks happy, and CPU says ArcGIS Pro is doing something noticeable. Disk reads and writes drop off for the local attempts, and network transport drops off for the remote server attempts, when the load stalls.

I ensured that the only index on the target table columns was the ArcGIS ObjectID index, and that it's not clustered, in order to eliminate the SQL Server "load slows when unsorted data is loaded to a table with a clustered index" problem. I'm using the 32-bit ObjectID because I'm not getting close to the ~2.14 billion limit that would bump me up into 64-bit ObjectID territory. The documentation and community discussions suggest that the file geodatabase itself can handle "unlimited" records.

Any thoughts on why ArcGIS Pro is getting stuck at ~262m? cheers, tim
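The ObjectID headroom check mentioned above can be written out explicitly: a 32-bit ObjectID tops out at 2**31 - 1, so ~860 million rows fits with well over a billion IDs to spare, and the ~262 million stall point is nowhere near that ceiling either:

```python
# Headroom arithmetic for the 32-bit ObjectID limit discussed above.
OID32_MAX = 2**31 - 1          # 2,147,483,647, the signed 32-bit maximum
rows_to_load = 860_000_000     # rows in the source PostgreSQL table
stall_point = 262_000_000      # approximate record count where the load stalls

assert rows_to_load < OID32_MAX    # 32-bit ObjectIDs are sufficient
assert stall_point < OID32_MAX     # the stall is not the ObjectID ceiling
headroom = OID32_MAX - rows_to_load
print(f"{headroom:,} IDs of headroom")  # 1,287,483,647
```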
Posted 10-21-2024 02:01 PM

DOC
Hi @MartinF - anecdotally, I just upgraded the EGDB schema in a SQL Server 2019 db from 10.9.1.*** to 11.3 on Monday in a development environment, after a good db backup. ArcGIS Server map/feature/editing services that depend on the EGDB objects worked just fine without any changes. ArcGIS Pro layer files have the db name prefix part of the EGDB schema hardcoded, but ArcGIS Pro sorted it out without any changes to the layer files. ModelBuilder models ran fine without changes.

I found some Python code that does string manipulation on the fully expressed object name (e.g. <db>.<schema>.<object>) that I expect to fail when I test it soon. Code that depends on the db objects outside of the ArcGIS sphere continues to work because the db name element in the fully expressed object name is really just a value in one or more of the EGDB sde management tables, so it's not returned when you're working with the objects in SSMS.

All pretty much good to go without any major fixes. Cheers and good luck, tim
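The string manipulation mentioned above is usually something like splitting the fully expressed `<db>.<schema>.<object>` name into its parts. A minimal sketch of the simple case (the object names here are invented for illustration; real SQL Server identifiers can contain dots when bracket-quoted, which a plain split mishandles):

```python
# Sketch of splitting a fully qualified '<db>.<schema>.<object>' name.
# Handles simple (undelimited) identifiers only; bracket-quoted names
# containing dots would need real identifier parsing.
def split_qualified_name(name: str) -> tuple[str, str, str]:
    """Split '<db>.<schema>.<object>' into (db, schema, object)."""
    db, schema, obj = name.split(".", 2)
    return db, schema, obj

parts = split_qualified_name("gisdb.dbo.parcels")
print(parts)  # ('gisdb', 'dbo', 'parcels')
```

Code like this breaks when the db-name prefix changes across an upgrade or restore, which is why it's worth testing after the schema upgrade.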
Posted 07-10-2024 07:58 AM

IDEA
"Coming in with the golden light In the morning" -KB
Posted 04-08-2024 06:26 AM

IDEA
@Michele_Hosking, @KoryKramer - sounds like a requirement that didn't ship with this capability the first go-round. Maybe soon? Maybe some day?
Posted 03-27-2024 08:02 AM

POST
@MarceloMarques - yep, aware of the opportunities to manage my own data stores. Trying to understand if a hosted feature layer can be configured to use a file geodatabase under the hood, based on choices I make when publishing from an ArcGIS Pro layer. https://enterprise.arcgis.com/en/portal/latest/use/publish-features.htm
Posted 02-07-2024 09:16 AM

POST
Thank you, @MichaelJenkins. I'm holding the "publish a file geodatabase, and make a service from it" path to success in my back pocket for now. Last time I looked at this approach, it was unclean for several reasons. Maybe it's improved at AGE 11.1.0.
Posted 02-07-2024 09:08 AM

POST
Thank you, @MarceloMarques. So, it sounds like the short answer is, "ArcGIS Enterprise Portal 11.1.0 will not use a file geodatabase to support a hosted feature layer item." Is that correct? tim

Edit - solution acceptance note: I feel fairly confident that I have evaluated Marcelo's and Michael's input and understood it correctly. The simple answer is in my reflection post here, so I marked it as the solution. Someone tell me if I'm wrong, and I'll adjust. Thanks all!
Posted 02-07-2024 09:03 AM

POST
In a December 2022 "Sharing Content to ArcGIS Enterprise" Esri course, the instructor noted that when I share an ArcGIS Pro layer to Portal as a hosted feature layer, if I ensure that editing capability is not enabled, then AGE-Portal persists the layer's data as a file geodatabase instead of as an enterprise geodatabase (PostgreSQL, probably). The instructor said that the file geodatabase is the highest READ performance data structure that AGE-Portal uses for its hosted feature layers. I want to use the highest READ performance data structure.

I've only now come around to actually making use of this information, and everything changes all the time. When peeking under the AGE-Portal hood, I'm not seeing (yet; I hope not to dive into the dark box on this) that it persists hosted feature layers without editing enabled in anything other than its managed PostgreSQL enterprise geodatabase data store.

So, my questions:

1. If, in early 2024, I use ArcGIS Pro 3.2.1 to share a layer to AGE-Portal 11.1.0 as a hosted feature layer without editing enabled, in what data structure/format is the data persisted?
2. What is the highest performance data structure/format I can convince AGE-Portal to manage for me?

I have a time-enabled point feature class with ~53m rows that I need to persist in an AGE-Portal data store. Typically, only about 800 active records are queried and rendered for the full spatial envelope of the data set. Scale dependencies are set for ~1.5m active records, so they get queried and subsetted to reasonable quantities when zooming in and rendering.

There are numerous other performance, gotcha, bug, and version discrepancy red flags in my questions and examples, and I'm pretty much aware of and controlling for all of them. I'm specifically interested in understanding what controls I have over AGE-Portal's decisions on what data structures/formats it uses to persist the data it manages for me. I'll be selecting for highest possible READ performance.

Ok, well, I also want to know whether or not AGE-Portal respects and includes all of my performance configurations (field indexes, coordinate reference system, and the like) when it ingests my data into its data store. Maybe it does some of its own? I have noticed that it has not been entirely respectful of things like field order and field naming choices, so maybe it's happy to jack up or ignore my other configurations. Cheers & thanks, tim

edit: the only way to see typos is to glance at the post after posting it.
Posted 02-07-2024 08:11 AM
Title | Kudos | Posted
---|---|---
 | 1 | 10-24-2024 02:51 PM
 | 1 | 10-24-2024 02:39 PM
 | 1 | 10-22-2024 02:31 PM
 | 1 | 02-07-2024 09:16 AM
 | 1 | 02-07-2024 08:11 AM