POST
In the error I see a reference to PostgreSQL's 'public' schema. As of PostgreSQL 15, storing user data in the public schema is strongly discouraged by the PostgreSQL developers and community due to security concerns, and the default CREATE privilege on it has been revoked for ordinary users. This may be causing your issues. I recommend creating a new user with its own dedicated schema, not granting it superuser rights, and just leaving it with its ordinary limited rights.
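A minimal sketch of that setup in plain SQL, run as a superuser (the role and schema names are hypothetical placeholders, and an Esri enterprise geodatabase may have additional requirements on top of this):

```sql
-- Create an ordinary, non-superuser login role.
CREATE ROLE gis_user LOGIN PASSWORD 'change_me';

-- Give it a dedicated schema that it owns, instead of using 'public'.
CREATE SCHEMA gis_data AUTHORIZATION gis_user;

-- Optional: make the dedicated schema the default target for this user's objects.
ALTER ROLE gis_user SET search_path = gis_data, public;
```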
Saturday
POST
@JesseCloutier I am running into the same issue: no problem creating new posts, but no longer the ability to edit my own posts. Can you fix this? Honestly, I think ESRI should do an overall review of all users that lost their editing rights here in the community through whatever hiccup may have caused it.
Saturday
POST
About the life cycle: to me, any(!) bug that is detected and fixed in any of the current "general availability" major versions (generally the last three major releases, e.g. 3.3, 3.4 and 3.5) should automatically be applied and / or backported to all the other major releases in "general availability", provided the fix is also applicable to those other major versions (there may be cases where it is not, e.g. if the issue only relates to new functionality not available in lower versions). Currently, this is not the case: ESRI seems to apply a highly opaque "vetting" system for backporting fixes that feels essentially random to Pro users, and only true security issues seem to automatically make their way to lower versions.
Saturday
POST
What happens if you make the path a raw string? So: arcpy.Exists(r"<YOUR_PATH>")
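To illustrate why this can matter (a generic Python sketch, not specific to arcpy): in a normal string literal, backslashes in a Windows-style path are interpreted as escape sequences and can silently corrupt the path, while a raw string keeps them literal.

```python
# In a regular string, "\t" and "\n" become a tab and a newline,
# so this Windows-style path is silently corrupted:
broken = "C:\temp\new_folder"

# A raw string (r"...") keeps the backslashes literally:
correct = r"C:\temp\new_folder"

print(broken)             # the \t and \n were turned into control characters
print(correct)            # C:\temp\new_folder
print(broken == correct)  # False -- genuinely different strings
```

So if `arcpy.Exists()` returns False for a path you typed as a plain string, try it again as a raw string before digging deeper.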
Friday
POST
@RTPL_AU wrote: "How valid is it to think that the release model should change to a mix of Long Term Stable + New Feature Releases Short Term Unstable?" Introducing such Long Term Stable releases would only be a viable approach if the LTS release were actually actively maintained, with additional (backported) bugfixes. If it is just another unmaintained, essentially dead release cycle nobody cares about, it would solve nothing.
Friday
POST
I totally agree that this approach of only "fixing" issues in new releases is highly problematic. In my opinion, a bug is only truly "fixed" if the fix is backported to the release cycle where the issue / bug was first introduced. Fixing issues only in new releases is "addressing" the issue, not fixing it. A fix for a bug introduced in e.g. 3.4.0 should not require upgrading to 3.5 or 3.6, but should be delivered in a 3.4.x minor bugfix release.
Friday
POST
Since this other thread about ArcGIS Monitor suggests the Datastore and associated database is just a regular PostgreSQL installation, the logical option to monitor your Datastore health would be any of the available (commercial) software packages for monitoring PostgreSQL database health. E.g. pgAdmin would be a logical choice, and DBeaver is another option. Both of these are more about database management, though, and not so much about alerting or warning you when something catastrophic is about to happen. There are other options out there that do more of a true monitoring job if you do a bit of internet searching.
a week ago
POST
Well, since you are writing about using an API, that essentially already means programming... I guess you could use the "Python Client for Google Maps Services" together with ArcPy to write your own implementation as a Python script to get access from within Pro: https://github.com/googlemaps/google-maps-services-python
2 weeks ago
IDEA
@TanuHoque Thanks, I overlooked the setting, and hadn't really expected it to reside in the "Map and Scene" section of the Options. I expected this somewhere in "General"/"Geoprocessing" or maybe "User Interface". But at least I am glad to see it implemented! Now I need to get a horrid bug in Pro 3.4+ fixed that has plagued and totally blocked my workflow ever since the Pro 3.4 upgrade, and forced me to stick to Pro 3.3 to get anything done, so I will unfortunately not be able to take advantage of it anytime soon...
3 weeks ago
POST
Honestly, that hardly sounds like anything needing "special configuration". It is a tiny database by all measures nowadays. As said, I have run a PostgreSQL database with >2.5 B records on a single server. Secondly, any PostgreSQL configuration is also tightly coupled with the actual hardware specs. E.g. something like the 'shared_buffers' setting is almost always recommended to be set as a percentage of your RAM, so unless you share your configuration here, there is no sensible recommendation to give. That said, three things really will make a difference:
- Do not run from HDD; use NVMe-attached PCIe SSDs. It is hardly a question nowadays, but up to maybe five years ago I still regularly read about people putting their database on HDD, which kills performance compared to modern NVMe SSDs.
- Switch off 'synchronous_commit' in the postgresql.conf file. If you do not need it (no replication to standby servers), this is probably the single most effective configuration change you can make to enhance performance on writes / UPDATEs.
- Set an appropriate 'shared_buffers' value to give PostgreSQL breathing room in RAM.
All other settings are likely to contribute only minor gains, especially in a small use case like yours.
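For illustration only, the settings above might look like this in postgresql.conf on a machine with, say, 64 GB of RAM. The numbers are hypothetical placeholders, not recommendations; they depend entirely on your hardware and workload:

```
# postgresql.conf -- illustrative values only, tune to your own hardware
shared_buffers = 16GB          # often set to roughly 25% of system RAM
synchronous_commit = off       # only if you can tolerate losing the last few
                               # transactions on a crash (no standby replication)
effective_cache_size = 48GB    # planner hint about available OS file cache
```

Note that 'synchronous_commit = off' never corrupts the database; at worst, the most recent transactions are lost on a crash, which is why it is such a cheap write-performance win when that trade-off is acceptable.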
3 weeks ago
IDEA
@TanuHoque Is this really implemented in Pro 3.5, or is it still "In Product Plan" and will it be implemented in 3.6? I do not see such an option to switch between "Clause" and "SQL" mode application-wide in Pro 3.5. I assumed it would be available somewhere in "Project/Options", but in Pro 3.5 there still isn't such a setting as far as I can see. Or am I missing something obvious?
3 weeks ago
POST
I am not sure what you mean. If the hosted database is based on PostgreSQL, there is no "magic spell" to make it better than any other PostgreSQL installation. In the distant past, ArcSDE - one of the backbones of ESRI database technology - supported the 'RAW' spatial storage format which, being directly convertible to the internal ESRI geometry representation in applications, was generally regarded as more performant for ESRI software than the databases' 'native' geometry implementations (although ESRI never really made much noise about this or admitted to it). However, 'RAW' is a thing of the past in favor of 'native' or ESRI ST_Geometry storage in all databases, AFAIK. If with your remark you are just referring to PostgreSQL configuration as done with e.g. the 'postgresql.conf' file, I suggest you start delving into the wonderful world of PostgreSQL configuration and read up on the subject. There is definitely stuff you need to adjust in there, as the default settings are not suited for large (Post-)GIS databases, but are just the absolute minimum to get a server running on minimal hardware.
a month ago
POST
Since the error states a "time out": has anything changed lately in your configuration that could bog down your server and cause such a time out? Since you have been running this for years, possibly on the same hardware, your hardware may no longer be able to fully keep up, causing time outs on some requests. Certainly if your database and usage have grown considerably over the years, the server may be running against its limits. Have you checked that?
a month ago
POST
It depends on what type of work you are going to do with the database, your current connection to your local database, and thus what your expectations are. E.g. do you just want to do an occasional update of a limited number of records and / or view the data, or do you need fast, low-latency access because you are going to re-write millions of records on a regular basis? Remote databases can have a very large latency (the time to do a round trip from your local machine to the server and back), which can kill performance if you need to fire thousands or even millions of requests at it. If you run a remote database and need low-latency access, you may need a virtual machine running ArcGIS Pro in the same data center or area, so as to avoid the cost of the round trip.

E.g. I personally run a local PostgreSQL database filled with the entire planet's worth of OpenStreetMap data on a professional dual-CPU 44-core HP Z840 workstation. In the most outrageous configuration, I filled it with Facebook's now defunct "Daylight distribution" of OpenStreetMap, which integrated an additional >1 billion buildings from Google's "Open Buildings", causing the largest table to exceed >2.4 billion records. Yes, you read that right, billions! I subsequently process and update hundreds of millions of records in derived tables. I certainly wouldn't want to do this against a remote database with latencies running in the hundreds of milliseconds; it would take ages. Running all this locally against PCIe NVMe drives, I am capable of updating hundreds of millions of records per hour from ArcGIS Pro (although batched in sets of tens of thousands to reduce the number of requests). This would be unthinkable against a remote database, unless running, as suggested above, a VM with Pro in the same data center as the remote database.

Finally, running a local database, although a burden, can teach you a lot about the proper configuration and running of a PostgreSQL database and server, knowledge that you may not gain with a remote database and its fully pre-installed configuration (which may or may not suit your use case or be adjustable to your taste or needs).
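The batching idea mentioned above (grouping row updates into sets of tens of thousands to cut down on round trips) can be sketched generically in Python. The batch size and the update step here are hypothetical placeholders, not actual arcpy or database API calls:

```python
from itertools import islice

def batched(iterable, batch_size):
    """Yield successive lists of at most batch_size items."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical example: 100,000 row updates sent in batches of 25,000,
# so only 4 round trips to the database instead of 100,000.
updates = range(100_000)
round_trips = 0
for batch in batched(updates, 25_000):
    # in real code: issue one multi-row UPDATE / executemany for this batch
    round_trips += 1

print(round_trips)  # 4
```

The point of the pattern is that the per-request latency is paid once per batch instead of once per row, which is exactly why it matters so much more against a high-latency remote database.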
a month ago