POST
@c1asse wrote: The GPU drivers get disabled automatically, and on startup there is currently no display. Please note that I have used other simulation software such as HEC-RAS and FLOW3D and have never faced any issue like this. I am unsure what mistake I am making while using the software or saving its files. Any help is appreciated. Thank you.

Do you mean that the display in the Map view is permanently stuck on the "Loading Map" progress icon, the actual map display stays grey and shows nothing, and Pro reports "Changes in your graphics hardware detected"? If so, welcome to the club ;-(.

I have had a similar experience with what is still my only laptop, a Core i7-7700HQ with an NVIDIA GeForce 1050 graphics card. Despite contacting ESRI support about this, I have never managed to get Pro working with the NVIDIA graphics card, not with DirectX, not with OpenGL, and not by upgrading to any of the latest driver versions or Windows updates since I acquired it. I finally resorted to disabling the NVIDIA GeForce 1050 altogether through Windows settings, and now run only on the Core i7-7700HQ's integrated graphics. That actually works better than I initially expected, but it of course feels a bit dumb knowing I have "dead", unused hardware lying around that could probably do a bit better.

Nonetheless, if you have no option to switch laptops, I would say go for it and run on integrated graphics only. Laptops more modern than my six-year-old machine should have more muscle for that work too, so if I can run it on integrated graphics, you probably can as well, and with a better experience.
05-07-2024 05:14 AM | 0 | 0 | 1247

IDEA
@RTPL_AU wrote: @SSWoodward From a paying customer perspective, if you already have internal testing processes that detect regressions, these regressions should be public and part of the release notes. If not, this is still a valid idea and should be moved to a more appropriate section such as "things that Esri could do better that aren't a product feature".

I think I would even go a step further. Regarding the nature of the bugs that come with a new software release, there are two clearly distinguishable kinds in my opinion:

1) Bugs in newly added functionality that do not affect any of the existing functionality of ArcGIS. These allow the current user base to continue using Pro unhindered, as they did with the previous release, and to potentially upgrade to the latest release with the minor caveat of needing to avoid the affected functionality. While it might be a nuisance that some fancy new feature is not usable due to such an issue, these bugs are in my opinion relatively harmless and lower priority from the perspective of the user base (unless there is a risk that the bug causes major damage to, e.g., an enterprise geodatabase, but such bugs are rare).

2) Regression bugs that directly affect the usability of existing functionality of ArcGIS and make it impossible to use and/or upgrade to the latest release, either due to a severe performance issue or due to tools / functionality being truly broken and totally unusable.

In my opinion, the second type of bug - the regression bug - is far worse than the first, and should always be top priority for fixing, as regressions make it impossible for the existing user base to use or upgrade to the latest version while continuing their regular workflow.

"The specific issue you've brought up was introduced in Pro 2.9.6 and 3.1 and has been addressed in the coming release of ArcGIS Pro 3.3."

In my opinion, regression bugs should always be fixed in the next patch release, not in a major or minor release! That is, this bug should have been fixed in a 2.9.x or 3.1.x patch, not in Pro 3.3. Or, if the original fix was developed against the 3.3 code base, it should have been backported to all previously affected releases. I realize time constraints may make it impossible to fix something in the very next patch release, e.g. 2.9.7, but then the fix should have been part of 2.9.8 or so. If you do not follow such practices, you may end up with a perpetually broken product, because regressions are never fixed within the release cycle actually used by the existing user base.
04-30-2024 01:31 AM | 0 | 0 | 779

POST
With such a big difference, and all other things being equal, I would definitely recommend contacting ESRI support; they should be able to tell you whether this is a known issue and whether it is scheduled for a fix. It would not be the first time ArcGIS has suffered a performance regression that needed a fix.
04-28-2024 02:21 AM | 0 | 0 | 479

POST
@Crinoid I have one more question for you: what are the "Word spacing" and "Letter spacing" settings of the label class's symbol? This help page: https://pro.arcgis.com/en/pro-app/latest/help/mapping/text/spread-a-label-through-a-polygon-feature.htm contains the note: "These values become the minimum values when you use the Spread labels labeling option." Is "Word spacing" perhaps accidentally set to 0%? It defaults to 100%.
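In case it is easier to check those values outside the Symbol pane, below is a minimal arcpy sketch that reads them from the layer's CIM definition. The map name "Map", the layer name "Waterbodies", and the wordSpacing / letterSpacing property names on the text symbol are assumptions on my side (taken from the CIM spec), so please verify them against your own project and Pro version.

```python
# Hedged sketch: print word/letter spacing of each label class's text symbol
# by reading the layer's CIM definition. "Map" and "Waterbodies" are placeholder
# names; wordSpacing / letterSpacing are assumed from the CIMTextSymbol spec.
import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")
lyr = aprx.listMaps("Map")[0].listLayers("Waterbodies")[0]
cim_lyr = lyr.getDefinition("V3")

for lc in cim_lyr.labelClasses:
    sym = lc.textSymbol.symbol  # CIMTextSymbol of this label class
    print(f"{lc.name}: wordSpacing={sym.wordSpacing}, letterSpacing={sym.letterSpacing}")
```

A word spacing of 100 (percent) is the default; a value of 0 would explain labels rendering without visible spaces.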
03-12-2024 12:06 PM | 0 | 0 | 918

POST
@Crinoid wrote: If the engine isn't able to fit the label horizontally without deleting the space, it shouldn't use horizontal placement. I can't really imagine a scenario where this behavior would be desirable, and there's no method to toggle it, so I think it's either just an oversight or a bug.

Totally agree, this behavior doesn't make any sense from a cartographer's perspective. No real-world cartographer would ever completely remove the spaces between words, making labels essentially illegible. It is a bit sad that we need to explain cartography 101 to a company like ESRI...
03-07-2024 01:48 PM | 0 | 0 | 937

POST
Welcome to the wonderful world of "processor groups", "CPU sets", "processor affinity", NUMA (Non-Uniform Memory Access - a design in which local and remote RAM are accessed at different speeds, with the faster local RAM being preferred) and other processor scheduling details you have likely never heard of, but that do start to creep in on your work once you hit large numbers of logical processors on multi-socket server systems.

As I also learned the hard way, software doesn't just magically use all logical processors on your system. In fact, most software out there has never been designed to take full advantage of systems with 64 or more logical processors, simply because such systems barely existed until very recently. A minor introduction is this article from Bitsum: https://bitsum.com/general/the-64-core-threshold-processor-groups-and-windows/ There is a lot more to read out there on these subjects, but I can tell you it doesn't necessarily provide answers or working solutions.

As to practical advice, you might attempt to:

- Disable hyperthreading
- Disable NUMA in your system's BIOS

This will, in your case, create a system that is logically seen as more or less a "single socket" 32-core system without hyperthreading, with both CPUs having equal and predictable access times to local and remote RAM sticks. In this configuration it is more likely that all cores will be used, but they will have (slightly) higher RAM latency / access times. While both of these features are designed to potentially run your system faster, they do not necessarily do so, and may in fact reduce the overall performance of specific workflows if those workflows need access to all physical cores. Note that disabling NUMA does not seem to be recommended on older multi-socket AMD systems from what I have read so far, as the data links between the different CPUs' memory had far too high a latency, actually harming overall performance, but this seems to be less of an issue on Intel systems. I don't know if that is still an issue with modern AMD systems (likely not).

If you do disable these options in the BIOS of your system, I strongly recommend running a controlled performance test of your workflow before and after the change, to see which configuration is faster; it may turn out that disabling the options is actually harmful (but in my case it was beneficial to disable them).
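If you want to see how Windows has carved your machine up into processor groups (and therefore whether a process confined to a single group is limited to at most 64 logical processors), a minimal sketch like the one below can print the layout. It assumes a Windows box with Python available and uses only the documented kernel32 processor-group functions; it is purely diagnostic and has nothing to do with ArcGIS itself.

```python
# Minimal diagnostic sketch (Windows only): list the active processor groups
# and how many logical processors each group contains. A thread that never
# sets a group affinity stays inside one group, i.e. at most 64 logical CPUs.
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

ALL_PROCESSOR_GROUPS = 0xFFFF  # documented sentinel meaning "all groups"

group_count = kernel32.GetActiveProcessorGroupCount()
total = kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)
print(f"Active processor groups: {group_count}")
print(f"Total active logical processors: {total}")

for group in range(group_count):
    count = kernel32.GetActiveProcessorCount(group)
    print(f"  Group {group}: {count} logical processors")
```

Running it before and after changing the BIOS settings should show you directly how the group layout (and thus what a single-group-bound application can see) changes.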
02-18-2024 03:07 AM | 0 | 0 | 414

POST
Not sure if there is a solution for you in here, but John Nelson's ArcGIS blog posts are always a joy: How to reattach land that spills over the International Dateline
02-05-2024 12:48 PM | 1 | 0 | 466

POST
If you are really worried about it, or have noticed changed behavior, then why not? ESRI support, at least here in the Netherlands, is usually quite willing to help you out. I wouldn't call four hours of continuous high load during working hours a "CPU spike", though...
01-24-2024 12:50 PM | 0 | 0 | 1237

POST
I am not too familiar with ArcGIS Monitor and have never used it, but from what I've read about its function and from what I see in your images, it is clear that the activity tracked by ArcGIS Monitor is directly driven by your users' activity. You can even see the 12:00-13:00 lunch dip in the CPU usage of Monitor...

Personally, I wouldn't worry too much about 80-100% CPU usage unless it directly impacts the usability of Monitor (slow updating, a web interface that is no longer accessible due to time-outs, or whatever else blocks you from using it properly). If anything, having 80-90% of your computing power sitting idle is a waste of compute resources and ultimately money. Computers are designed to do work, not to sit idle, and servers and professional workstations are designed to withstand 100% CPU usage essentially indefinitely. I routinely process global OpenStreetMap data on an HP Z840 workstation with dual CPUs (2x22 cores), which is essentially server hardware packaged in a very large desktop case. I thrash it at 100% CPU for days to process the global OpenStreetMap data, and it has never failed on the hardware side.

Of course, the high CPU usage may indicate that this server is at its limit doing real work, perhaps because it needs to monitor a very large enterprise deployment with many users and servers and handle all the traffic that comes with it, and you may need to anticipate a server upgrade if you intend to use it for an even larger enterprise deployment in the future.
01-24-2024 11:50 AM | 0 | 0 | 1243

POST
Maybe these links are of some use:

https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/how-to-reattach-land-that-spills-over-the-international-dateline/
https://www.esri.com/arcgis-blog/products/data-management/data-management/arcgis-pro-tips-scroll-around-the-world/
https://pro.arcgis.com/en/pro-app/latest/help/mapping/properties/allow-dateline-panning.htm

If this is possible in a custom add-in at all, you'll likely need to limit it to one of the supported projections, or change the projection's central meridian as in John's blog post. That won't give you "wrap-around" panning, but it will let you adjust the display across the dateline to suit your needs.
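As an illustration of the "change the central meridian" approach from John's blog post, here is a minimal arcpy sketch that derives a coordinate system with its central meridian shifted to 150°E, so the area around the dateline ends up in the middle of the map. The choice of World Robinson (WKID 54030) and the 150.0 value are placeholders for illustration only; apply the resulting coordinate system to your map through the map properties, or from your add-in, as appropriate.

```python
# Hedged sketch: build a projected coordinate system whose central meridian is
# shifted to 150E, so features near the international dateline display together.
# WKID 54030 (World Robinson) and 150.0 are placeholder choices.
import arcpy

base = arcpy.SpatialReference(54030)  # World Robinson, central meridian at 0
wkt = base.exportToString().replace(
    "PARAMETER['Central_Meridian',0.0]",
    "PARAMETER['Central_Meridian',150.0]",
)

shifted = arcpy.SpatialReference()
shifted.loadFromString(wkt)
print(shifted.name, shifted.centralMeridian)  # confirm the shift took effect
```

This does not give you true wrap-around panning either, but it keeps Pacific-centred data in one piece on screen.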
01-21-2024 01:55 AM | 0 | 0 | 327

POST
Well, that might indeed explain why I haven't seen this issue. Although I have used the "Spread letters up to a fixed limit" option in multiple label classes for years, I do not routinely use the option to spread words. However, the removal of spaces with "Spread words" sounds like a bug that needs reporting; it really doesn't make sense to remove the spaces entirely, as it can heavily affect the readability of the label. In my experience, though, just using "Spread letters" is enough and also gives a convincing and cartographically pleasing larger gap between words, so maybe that is an option for you too: leave the "Spread words" setting at its default value.
01-17-2024 11:40 AM | 0 | 1 | 1146

POST
I have used the Maplex label engine extensively with similar settings for waterbodies in various versions of Pro (1.0 to 3.2), but I have never seen the "disappearing spaces". Looking through the Maplex settings, the only option I see that might cause this is if you have the Position > Abbreviate option selected and accidentally added a space to the string of characters in the "Characters to remove first" box under Truncation, which might not be immediately obvious. I am not even sure that box accepts spaces; I have never used this specific option.
01-17-2024 09:34 AM | 0 | 4 | 1191

POST
Also see this probably related thread, which includes some posts from an ESRI employee: https://community.esri.com/t5/arcgis-enterprise-questions/arcgis-enterprise-security-patch-dec-2023/m-p/1361553#M38012
12-17-2023 03:40 AM | 1 | 1 | 1900

POST
Note that I am not an ESRI employee. I am not sure, but you may have run into the major issue with a defective security patch ESRI released in June: "Portal for ArcGIS Enterprise Sites Security Patch for 10.8.1, 10.9.1, and 11.1".

If I understood it correctly, if you had this defective patch installed, then for an upgrade to succeed properly you would first need to run the recently released "Validation and Repair" tool that ESRI published to deal with this issue. However, this tool cannot be run after an upgrade, which you appear to have already done. Based on what I've read, I strongly recommend contacting ESRI Support as soon as possible.

https://www.esri.com/arcgis-blog/products/trust-arcgis/administration/portal-for-arcgis-enterprise-sites-security-patch-is-now-available/
https://support.esri.com/en-us/patches-updates/2023/defective-arcgis-enterprise-patch
https://support.esri.com/en-us/patches-updates/2023/portal-for-arcgis-validation-and-repair
12-17-2023 03:33 AM | 1 | 0 | 1901

POST
From what I read in the links you posted, fault tolerance is not what is hurting the performance of administering services. Fault tolerance just means that nothing gets lost when a single disk or server fails, e.g. by having a RAID that stores redundant copies of every file, or a secondary server holding a backup; it says nothing about the performance of the underlying file system.

Your problem is already pointed out by the text you quoted: apparently DFS does not immediately expose changes / writes to all machines accessing it, which appears to be required for properly managing your ArcGIS Server installation. This replication delay is what makes DFS unsuitable for storing your configuration store. Based on the remarks about high random IO requirements in the links you posted, you are probably best off putting the configuration store on a (dedicated) single server with an NVMe-based RAID 1 or RAID 5 array so the data is still protected. Modern NVMe drives should give you the required random IO performance.
12-14-2023 12:24 PM | 0 | 0 | 623