BLOG
Hi @AndreaB_, I think that is a good guess on the query calls. Such log entries could come from users or from dashboards. These map layer query requests return data in the form of JSON, PBF, or KML. For MapServer services, this textual information is pulled from each layer separately and can then be rendered in a client (e.g., a JavaScript app). Export, on the other hand, is a request to generate an image of this same data from one or more layers at once. While the function is called export, "export map" only returns a rendered image...not the actual data. Hope that helps. Aaron
Posted 04-04-2025 12:08 AM | Kudos 1 | Replies 0 | Views 589

BLOG
Hi @RandyBonds_Jr_, Generally speaking, there is not a direct relationship between the number of CPUs and the number of ArcSOC instances. That said, for your most critical services (typically there are only a handful), these should be set to a min/max of N, where N is equal to the number of CPU cores. The remaining services could then be shared and/or hosted. From the perspective of memory, there is a more direct relationship with ArcSOCs. The key factors are the number of ArcSOC instances, the memory size of each ArcSOC, and usage patterns. A very simple equation could be something like: (number of ArcSOCs x Avg ArcSOC size); see the worked example below. Hope that helps. Aaron
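To illustrate that equation with purely hypothetical numbers: a Site running 20 ArcSOC instances that each average around 250 MB of memory would need roughly 20 x 250 MB = 5 GB of RAM for the ArcSOCs alone, before accounting for the operating system and the other ArcGIS Enterprise components.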
Posted 01-22-2025 03:05 PM | Kudos 1 | Replies 0 | Views 1368

BLOG
Hi @Br1an_Peters, Not sure about creating a dashboard. However, you can tell System Log Parser to write out the statistical analysis of a Simple report as JSON (instead of an xlsx file). This could be used as input to a dashboard or observability tool: slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 2hr -a simple -u siteadmin -p sit3adm1nPW -r json
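As a minimal sketch of that idea (the report file naming and the JSON layout depend on your System Log Parser version, so treat both as assumptions to verify against an actual report):

```python
import json
from pathlib import Path

# Folder passed to slp.exe via -d; the *.json file naming is an assumption.
report_dir = Path(r"C:\Folder1\MyReports")
latest = max(report_dir.glob("*.json"), key=lambda p: p.stat().st_mtime)

with latest.open() as f:
    report = json.load(f)

# Inspect the top-level structure before wiring it into a dashboard tool.
print("Report:", latest.name)
print("Top-level keys:", list(report))
```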
Posted 01-14-2025 10:47 AM | Kudos 0 | Replies 0 | Views 1006

BLOG
Hi @Br1an_Peters, To specify a custom output location, use the -d option with a path. For example: slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 2hr -a Optimized -u siteadmin -p sit3adm1nPW
Posted 01-08-2025 04:36 PM | Kudos 1 | Replies 0 | Views 1072

BLOG
Hi @SGTomlins,
I believe the Response code: 499 is a token-related error. Most likely, the service your test is consuming requires authentication. As it currently stands, cache_tiles2.zip does not have authentication functionality added in. A potential solution would be to look at the authentication components from this test: Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced) and copy them into the tile test.
Hope that helps. Aaron
Posted 11-15-2024 12:31 PM | Kudos 1 | Replies 0 | Views 433

POST
Hi @SGTomlins, Finding delays or bottlenecks in a system can be a complex problem to solve. Sometimes the issue is hardware (e.g., not enough memory, too few CPU cores) and sometimes it is software (e.g., not enough service instances, max number of db connections reached, map is showing too much detail at small scales). As for an integrated, easier way to accomplish such a task, I'm not sure. Logs/traces are certainly valuable; as you mentioned, they are in the realm of "forensics" and take more time to gather and analyze. Whatever method you choose, I would offer some strategies. I would start with identifying your top 3 most popular services. This will help narrow the focus and give the best return on investment of time. From there, look to understand how the services are being utilized (e.g., what functions are people requesting from these services...export map, query, applyEdits, etc.). It is also advantageous to know the performance profile of the functions...maybe it's only one particular operation that impacts users the most. From an ArcGIS Server logging perspective, different functions have different levels of detail recorded. For some, you may have to increase the ArcGIS Server LogLevel to get that information. For example:
-- export map at verbose/debug can list the duration of the overall request (from the point of view of the ArcSOC) and how long it takes for the symbology to render
-- feature query at verbose can list the duration of the overall request (from the point of view of the ArcSOC) and show how much of that time was spent in the database...this is helpful for understanding if you're spending more time on data retrieval or data serialization
Once items for improvement have been identified, you can start researching potential ways to tune and remedy a solution to improve performance and remove the bottleneck. A sketch of pulling these log entries programmatically is shown below. Hope that helps.
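As a minimal sketch (assuming built-in ArcGIS Server security and the default admin port; the URL and credentials are placeholders, so adjust them, and your certificate setup, to your environment):

```python
import requests

ADMIN = "https://myserver.domain.com:6443/arcgis/admin"  # hypothetical Site URL

# Acquire an admin token via the Admin API's generateToken endpoint.
token = requests.post(
    f"{ADMIN}/generateToken",
    data={"username": "siteadmin", "password": "sit3adm1nPW",
          "client": "requestip", "f": "json"},
).json()["token"]

# Query recent FINE-level entries, which include the elapsed-time details
# that make per-operation performance analysis possible.
logs = requests.post(
    f"{ADMIN}/logs/query",
    data={"level": "FINE", "token": token, "f": "json"},
).json()

for entry in logs.get("logMessages", []):
    print(entry.get("time"), entry.get("source"), entry.get("message"))
```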
Aaron
Posted 11-15-2024 12:25 PM | Kudos 1 | Replies 1 | Views 693

BLOG
Benchmark ArcGIS Enterprise...The Original Approach

A while ago, I discussed using the Natural Earth dataset with a preconfigured Apache JMeter test to benchmark an ArcGIS Enterprise deployment. The results from that test could then be compared to runs from other deployments to get a comparative idea of the underlying hardware's performance and scalability characteristics. This approach had some benefits:
- Natural Earth is a free GIS dataset, available for public use
- Low-to-moderate data complexity (easy to work with)
- The Test Plan featured a step load for observing scalability capabilities

While useful and a good measuring stick, the scalability component meant the test would typically run for a long time (which also added some complication). I had wondered if there was an easier way to benchmark just the processing hardware (e.g., the CPU), but still through ArcGIS Enterprise: Was it possible to use JMeter from a performance-only perspective? Could I create a test to benchmark ArcGIS Enterprise without an underlying FGDB or enterprise geodatabase dataset (which should simplify the overall effort)? It turns out the answers were yes!

Benchmark ArcGIS Enterprise...An Alternative Approach

Okay...I am speaking in half-truths. The new benchmark test does not depend on a service backed by an FGDB or eGDB dataset, but it does need some data. To keep things simple, the data (e.g., pre-generated geometries) is simply passed through the JMeter sampler elements to an ArcGIS resource that does not have a referenced dataset behind the scenes. So, how is this done? Through the tried-and-true Geometry service. ArcGIS Server's Geometry service is a built-in resource that provides access to many functions for performing geometric operations. The calculations of these operations (like buffer or generalize) can be simple or complex (depending on what you ask of it). From a performance analyst's perspective, it provides a fantastic means for benchmarking the CPU hardware of the machine running ArcGIS Server.

Note: Although the term ArcGIS Enterprise includes ArcGIS Server, this benchmark primarily exercises the latter (e.g., ArcGIS Server). Some traffic may go through the ArcGIS Web Adaptor and there would be a small amount of Portal for ArcGIS authentication taking place, but by design, the bulk of the work will be performed by ArcGIS Server.

Benefits of Using the Geometry Service

The Geometry service has been around in ArcGIS Server since version 9.3, so it's ubiquitous. That makes a test utilizing it easy and reliable. Since the data driving the test is put inside the key/value pairs of the requests, that adds portability (e.g., no dataset to lug around).

Note: While the Geometry service has been included with ArcGIS Server for some time, by default it is off and not running. The service would need to be started and shared to the appropriate Portal for ArcGIS members before running the test.

The Geometry_Functions_Benchmark Test Plan

To download the Apache JMeter Test Plan used in this article, see: geometry_functions_benchmark1.zip. Downloading and opening the Test Plan in Apache JMeter should look similar to the following screenshot. Adjust the User Defined Variables to fit your environment.
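To give a concrete feel for the kind of request the Test Plan's samplers send, here is a minimal sketch of calling the Geometry service's buffer operation directly. The server URL is a placeholder, the service path shown is the default Utilities/Geometry location, and a token parameter would also be needed if the service is secured:

```python
import requests

# Hypothetical Site URL; Utilities/Geometry is the default Geometry service path.
BUFFER_URL = ("https://myserver.domain.com/server/rest/services/"
              "Utilities/Geometry/GeometryServer/buffer")

params = {
    "geometries": '{"geometryType": "esriGeometryPoint",'
                  ' "geometries": [{"x": -117.19, "y": 34.06}]}',
    "inSR": 4326,        # WGS84 input coordinates
    "bufferSR": 102100,  # perform the buffer in Web Mercator (meters)
    "outSR": 4326,
    "distances": 10000,  # buffer distance
    "unit": 9001,        # esriSRUnit_Meter
    "f": "json",
}

response = requests.get(BUFFER_URL, params=params)
rings = response.json()["geometries"][0]["rings"]
print("Buffer polygon vertex count:", len(rings[0]))
```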
What Types of Functions Should Be Tested?

For a benchmark, the short answer is only a few. This particular Test Plan only calls a few different operations...as well as the same operations in different ways (e.g., changing request parameters to purposely get a variant response). This provides variability, so the test is not just doing the same thing over and over. Below is a look at the operations used in this benchmark:
- generalize
- toGeoCoordinateString
- project
- buffer

Expected Test and Operation Performance

This test has some operations that may perform fast and others that will take more time. The speed will vary based on the hardware. Ultimately, we just want ArcGIS Enterprise (e.g., Server) to work for just a few minutes so we can get an idea of the processing performance. If each operation took 10 minutes (with the overall test many times longer), the benchmark itself would become too time-consuming and less practical to use.

Deployment Architecture Example

This benchmark test was run in a lab against two different servers (e.g., run once per server):

ArcGIS Enterprise -- Machine #1 (older hardware)
- Intel Xeon E5-4650, 2.70 GHz
- SPECint_base2006 Score: 50.5
- 32 processing cores, HyperThreading disabled
- 64GB RAM
- 10Gbps network

ArcGIS Enterprise -- Machine #2 (newer hardware)
- Intel Xeon Gold 6126, 2.60 GHz
- SPECint_base2006 Score: 71.9
- 24 processing cores, HyperThreading disabled
- 128GB RAM
- 10Gbps network

Note: Since this testing effort was more focused on speed instead of throughput, SPECint_base numbers were used instead of SPECint_rate_base.

Benchmark Test Execution

For long-running tests, it is not recommended to run the Test Plan within the GUI. However, since this is a relatively short test, the impact is nominal.

Note: When running any test, it is always recommended to coordinate the start time and expected duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site of interest (e.g., the production deployment). Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Results

After adjusting the User Defined Variables to point to the appropriate environment (Machine #1…devlab05), the benchmark was run right in the JMeter GUI. The results can be observed from the View Results in Table element. For convenience, the Test Plan automatically calculates the overall test run duration, right in the name of the last operation. This makes the benchmark time easy to observe from the table. The Test Plan was then adjusted to point to a server on newer hardware (Machine #2…eistsrv05) and the benchmark was rerun. In the table, those results appear after the first run. Expectedly, the first machine required more time to complete the same operations. This resulted in a measurable difference in performance between the two machines:
- Machine #1…devlab05 benchmark duration: 259946 ms
- Machine #2…eistsrv05 benchmark duration: 181441 ms

Calculate Percentage Change

Since the response times were lower (e.g., faster) with the newer hardware (compared to the first run on older hardware), we'll calculate a percentage decrease:
First, original server time - newer server time = the decrease
Then, the decrease ÷ original server time × 100 = the % decrease
(259946 ms - 181441 ms) / 259946 ms = 0.302
0.302 x 100 = 30.2%
In other words, the benchmark time decreased by about 30% when moving from the older hardware (our starting point) to the newer hardware. This percentage change suggests a measurable improvement when using the newer hardware.

Percentage Change Estimate Based on SPEC

Let's use the SPEC ratio with the benchmark time from the original run to predict the target time (the benchmark time on the newer machine). This can help determine whether roughly the same percentage change could be estimated from the SPEC numbers alone.
(Baseline_SPEC x Baseline_Time) = (Target_SPEC x Target_Time)
((Baseline_SPEC x Baseline_Time) / Target_SPEC) = Target_Time
(36.875 x 259946 ms) / 53.75 = 178335 ms (rounded down to the nearest millisecond)
(259946 ms - 178335 ms) / 259946 ms = 0.314
0.314 x 100 = 31.4%
From this prediction, the benchmark time on the newer hardware was estimated to be about 31% lower than on the older hardware. This is very close to the percentage change that was calculated from the observed benchmark times.
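For readers who want to reproduce the arithmetic, here is a small sketch of both calculations, using the observed benchmark durations and the SPEC values from the formulas above:

```python
# Observed benchmark durations (ms) from the two runs above.
baseline_time = 259946   # Machine #1 (older hardware)
target_time = 181441     # Machine #2 (newer hardware)

# Observed percentage decrease.
observed = (baseline_time - target_time) / baseline_time * 100
print(f"Observed decrease: {observed:.1f}%")      # ~30.2%

# SPEC-based prediction: Baseline_SPEC * Baseline_Time = Target_SPEC * Target_Time.
baseline_spec, target_spec = 36.875, 53.75
predicted_time = baseline_spec * baseline_time / target_spec
predicted = (baseline_time - predicted_time) / baseline_time * 100
print(f"Predicted decrease: {predicted:.1f}%")    # ~31.4%
```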
Future Hardware

Processor architectures and CPU speeds are always improving. Eventually, such a benchmark test (as it is currently built) may only take a minute or tens of seconds to run (what a great problem to have). At that point, complexity could be added to the test to increase its run duration to better match the new technology. You may have noticed the last transaction in the test was disabled. This 1000 Point Buffer request, with a distance of 10000 meters and a unit of 9035 (International Meter), takes some time to calculate (even on decent hardware). It was disabled to shorten the run time to a reasonable duration. However, if helpful, it can be enabled as an additional calculation, depending on the CPU speed of the deployment of interest.

Final Thoughts

As mentioned in other community articles, there is no one service or function that can cover the entire breadth and depth of ArcGIS. However, the Geometry service is a resource that represents a portion of the amazing field of GIS and is easy to work with. This makes it a good option for benchmark testing efforts.

A Fast Response Time Is All About CPU Speed, Right?

For this Geometry benchmark test, yes. However, for real-world services, processing speed is not the only factor. Server hardware components like disk speed, available memory, and network speed are other resources which can improve response times (in addition to CPU speed). Together, they all have a positive effect on the user experience. This benchmark focused on CPU performance since it is a large part of the client request/server response process, but as just mentioned, it is not the only server resource that matters when taking other potential ArcGIS services into account.

What About Other CPU Comparison Tools?

There are many utilities out there that can profile and test the various pieces of server hardware using a whole battery of exercises. These tests are great and certainly add value for understanding the hardware. Again, there is no one test that can represent all things GIS. But hopefully, this Geometry Benchmark Test Plan can be a useful tool in the analyst's tool chest. To download the Apache JMeter Test Plan used in this article, see: geometry_functions_benchmark1.zip

Attribution

- Resource: File:Wikimedia_Foundation_Servers-8055_43.jpg; Description: Rack-mounted 11th-generation PowerEdge servers; Author: Victorgrigas - Own work; Created: 16 July 2012; Uploaded: 20 July 2012; License: CC BY-SA 3.0, Link
- Resource: File:Cpu-processor.jpg; Author: Fx Mehdi - Own work; Uploaded: 30 May 2019; License: Creative Commons Attribution-Share Alike 4.0 International
Posted 09-03-2024 11:52 AM | Kudos 3 | Replies 0 | Views 1345

BLOG
Hi @AnjulPandey,
> Is there a way to obtain usage statistics of a vector tile service?
There might be some usage statistics available through the ArcGIS Web Adaptor access logs (the IIS log), if it is available. The information here would be counts (how many times a vector tile service was requested) and performance (how long users were waiting to download the tiles). System Log Parser should be able to address this with its Internet Information Services Log Query capability; see the sketch below for the same idea done by hand.
> Has anyone created a script or notebook to collect a list of all the vector tile packages with their storage size?
Not that I am aware of, but I have not specifically looked for tools that address that functionality.
> Logs for map and vector tiles generated by publishing services?
Is this similar to the first item? Are you after usage statistics or publishing statistics? Either way, System Log Parser would summarize requests for such data (usage and publishing) from the ArcGIS Web Adaptor access log. It is worth pointing out that for publishing statistics, only the job's GUID would be listed in the report.
> Logs for how many features were edited by the user?
This is interesting. Would this be hosted feature services, traditional, or both? For traditional feature services, this might involve queries to the enterprise geodatabase to understand what, if anything, has changed since the last check. I am not sure if any observability or reporting tools do this; I have not looked. Assuming the edited data went through the applyEdits function of a feature service, System Log Parser would list if that method was called. Of course, this does not mean the data was changed...just that the applyEdits function was called.
> Logs of how many vector or raster tiles were downloaded by the user?
Map, vector, and raster tile requests through a web adaptor are reported like any other request. System Log Parser would be able to report on usage statistics (e.g., counts and performance) from the ArcGIS Web Adaptor access logs. But these log sources do not typically record the ArcGIS user that made the request, so the "who" would not be available.
Hope that helps. Aaron
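As a rough illustration of mining the IIS log directly for vector tile request counts (this assumes the default W3C log location and field order, and that vector tile requests contain "VectorTileServer" in the URI stem; all are assumptions to check in your environment):

```python
from collections import Counter
from pathlib import Path

# Default IIS log folder for the first site; adjust to your web server.
log_dir = Path(r"C:\inetpub\logs\LogFiles\W3SVC1")
counts = Counter()

for log_file in log_dir.glob("u_ex*.log"):
    for line in log_file.read_text(errors="ignore").splitlines():
        if line.startswith("#"):      # skip W3C header lines
            continue
        fields = line.split(" ")
        uri_stem = fields[4]          # cs-uri-stem in the default field order
        if "VectorTileServer" in uri_stem:
            counts[uri_stem.split("/VectorTileServer")[0]] += 1

for service, hits in counts.most_common():
    print(hits, service)
```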
Posted 08-28-2024 11:35 AM | Kudos 1 | Replies 0 | Views 1726

BLOG
Hi @ghaskett_aaic, My guess is that your 11.2 Site and System Log Parser are fine. There are two conditions that can lead to the generated report listing 0 hits:
1. The LogLevel of ArcGIS Server is set to Warning, Error, or Info. The statistics in the System Log Parser report are based on elapsed-time request entries, but this requires the LogLevel to be set to Fine. Verbose and Debug work too, but they are not recommended for Production sites due to how much detail is recorded.
2. There were no requests to the Site during the specified log query duration (e.g., 2024-03-01T10:21:57 through 2024-03-05T10:21:57).
For a quick check, authenticate to the Site from the REST service endpoint, select a service, pan or zoom around a few times, then generate the report with an end time that includes those requests (e.g., end time = now); a command like the hedged example below would do it. This report should include the requests that were just made to the Site. Hope that helps. Aaron
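For reference, such a quick-check run might look like the following, using the same flags shown in my earlier replies (the server URL, credentials, and output folder are placeholders for your environment): slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 1hr -a simple -u siteadmin -p sit3adm1nPW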
Posted 06-14-2024 11:00 PM | Kudos 1 | Replies 0 | Views 2225

BLOG
The Shared Service Instance Pool

When people talk about load testing an ArcGIS Enterprise Site, such conversations typically involve the consumption of dedicated or hosted feature services. For many years, dedicated and hosted services have provided a fast, dependable mechanism for consuming high-traffic map resources online. This has not changed. However, there is another way to provide maps to users: the shared service instance pool. Introduced in 10.7, the shared instance pool makes it easier to view and query services that are still important but where memory usage is favored over performance. This allows for high service density publishing (e.g., being able to publish, and have running, many services) at the expense of some speed and throughput. It can be a good trade-off, considering that for many organizations there are generally more shared service candidates than dedicated or hosted ones. Given this advantageous characteristic, shared services have been a true game changer. But from a load testing perspective, there are some considerations.

Note: As a GIS administrator, assume all shared services have an equal weight of importance with each other. Also assume that dedicated services take a higher priority than shared services.

Should Shared Services Be Load Tested? The $64,000 Question!

As a performance analyst, this question may come up as you publish services to the Site. While it can be very tempting to load test shared services to understand their scalability profile, there are several reasons why this strategy does *not* make sense:
- If scalability were paramount, the service should be moved to dedicated. Shared services can still scale (e.g., support multiple, concurrent requests for the same item), but this is not their primary function.
- The administrator has already designated the service to favor memory usage. By configuring the service as shared, it is expected that the service will not be requested frequently. If the service occasionally has slower or slightly slower response times, that is okay.
- Testing such services steals hardware resources from the dedicated services. Dedicated services are the MVP services; do not try to have the shared services compete with them. Dedicated and hosted are the go-to mechanisms for service scalability. By testing or frequently sending requests to shared services, (limited) system resources like CPU and memory can be drawn away from the services that need them for delivering fast performance to users.
- Test Plan management challenges. It is not uncommon for Sites to have dozens or hundreds of shared services. Assuming all shared services are equal, a test plan for effectively testing and profiling a hundred or more shared services could be daunting and difficult to manage.

A Site Has a Mix of Shared and Dedicated Services; Can the Dedicated Ones Still Be Tested?

Yes. Understanding the performance and scalability profile of dedicated services is still valuable information to have for deploying and managing the Site optimally. Test dedicated services as you normally would.

Can Shared Services Be Tested If That Is the Only Instance Pool Type Published?

There are no technical limitations that prevent shared services from being load tested. While it is certainly possible to run such a test, it is not recommended for the reasons above.

Analyze and Monitor the Site Periodically

The popularity of services can increase or decrease over time. Periodically analyzing the traffic patterns of service requests can help provide administrators with information to configure and manage the Site optimally. This means that as some services are requested more frequently (or it is anticipated that they will be), they can be manually moved from being a shared service to a dedicated service.
Posted 05-01-2024 11:17 AM | Kudos 5 | Replies 0 | Views 874

BLOG
The following is an additional resource which may help provide information on the "anonymous" entries observed in the ArcGIS Enterprise (e.g., ArcGIS Server) logs: ArcGIS Enterprise Analysis with System Log Parser: Understanding Anonymous Entries for the User Name (Beginner)
Posted 04-22-2024 05:40 PM | Kudos 0 | Replies 0 | Views 2314

BLOG
System Log Parser's Statistics By User Reports: the Anonymous Value

For evaluating Site performance and quantifying service popularity, System Log Parser (SLP) has several report offerings for conducting ArcGIS Enterprise log analysis. When selecting Analysis Types such as Simple, WithOverviewCharts, or Complete, there is an option called "Add Statistics By User to Report" which will include an additional worksheet called Statistics By User in the generated output. The information on this worksheet includes a statistical summary of successful Portal member requests (as reported by ArcGIS Enterprise). This can be quite helpful for GIS administrators to understand who is asking for what. Sometimes, however, the listed User on this worksheet may show the unexpected value of "anonymous". For a Site with secured services, this might be a puzzling username to observe.

Is an Anonymous User Sending Successful Queries to a Secured Service?

The short answer: no, they are not. The long answer: no, they are still not, but some background is needed to provide the proper context on "anonymous" entries for the User Name value in the logs.

Portal Member Log Entry Identity

When System Log Parser queries the ArcGIS Enterprise (e.g., ArcGIS Server) logs, it reads the "User Name" field to determine the member identity for each log entry of a successful request. This value is only read from very specific log entries (e.g., where the log Code=100004). Such entries also have the final elapsed-time duration of the work performed (e.g., how long the request took from the point of view of ArcGIS Server's ArcSOC.exe). These entries are some of the best places to look for quantification analysis of the Site. For many service request log entries, this field lists the authenticated Portal member username value...as expected. But there are log entry cases, when a member has just authenticated to the Site, where the recorded value of "anonymous" is listed instead, even though anonymous (e.g., a non-authenticated user) was not actually reading the service. If log queries are executed manually (for the same window of time) through Manager or the REST Admin API, additional details are revealed which can help explain this initial user impersonation by the entity called "anonymous". By using the Request ID field in the logs, one can correlate multiple entries together, since all entries with the same Request ID belong to the same request...which is really awesome; a hedged sketch of this correlation is shown at the end of this article. So, while the Code=100004 entry shows the user as "anonymous", the Code=9029 entry actually lists the requesting user's Portal member identity. In this case, "admin". Subsequent queries by that user are listed under the expected name (e.g., not "anonymous").

Note: In the log entry screenshot above, "NaturalEarth/NaturalEarth_SQLServer.MapServer" was a service shared only to specific Portal members.
Note: System Log Parser does not currently present this additional user impersonation detail. Whatever value is recorded under User Name is what SLP uses for the Statistics By User worksheet.
Note: There can also be a separate Code=8522 log entry which lists the recorded member value under the User Name column.

Actual Anonymous Requests to Services Shared to the Public

There are also log entries where the value of the "User Name" field can list "anonymous" as the member, but this is where it truly represents an anonymous user. In this situation, the logs are identifying a successful request made by someone for a publicly available service, where the connecting client was not challenged to authenticate. In other words, the service was intentionally shared to Everyone (e.g., the public). By performing another manual, in-depth log query (for the same window of time) for these types of requests, more details can be derived which show the associated Code=9029 entry. This helps highlight that the request was actually made on behalf of the "Anonymous user".

Note: In the log entry screenshot above, "SampleWorldCities.MapServer" was a service shared to Everyone.
Note: System Log Parser does not currently present this additional user impersonation detail. Whatever value is recorded under User Name is what SLP uses for the Statistics By User worksheet.

Are There Anonymous User Log Entries for Secured Services?

No. Requests issued for any non-publicly shared resource will be prompted to authenticate (even if what is requested does not exist). Therefore, Code=100004 entries will not exist for the "Anonymous user" against secured services.

Note: In the log entry screenshot above, "NaturalEarth/NaturalEarth_SQLServer.MapServer" was a service shared only to specific Portal members.
Note: ArcGIS Enterprise will still acknowledge an "Anonymous user" request for a secured service (existing or not) with a Code=9029 entry (and potentially a Code=8522 entry as well).

What Release Is This User Name Log Entry Information Based On?

This article is based on ArcGIS Enterprise 11.2/11.3, but the User Name information has been available in the ArcGIS Server logs for many releases.

Variability

The purpose of this Community Article is to offer guidance and help explain several of the situations where "anonymous" is listed as the User Name in the ArcGIS Enterprise (e.g., ArcGIS Server) logs. Expect some variability in this behavior (over the years and) across releases. Additionally, since there are many ArcGIS Server service capabilities, each may handle the persistence of the User Name value slightly differently within the framework's internal logging logic.
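As promised above, here is a minimal sketch of correlating entries by Request ID. It assumes the log records have already been fetched (e.g., via the Admin API's logs/query endpoint) and that each record carries requestID, code, and user values as described in this article; verify the exact field names against your release:

```python
from collections import defaultdict

# log_messages: records fetched from the Admin API's logs/query endpoint.
# Field names (requestID, code, user) are assumptions to verify per release.
def identity_by_request(log_messages):
    groups = defaultdict(list)
    for entry in log_messages:
        groups[entry.get("requestID")].append(entry)

    for request_id, entries in groups.items():
        # Code=100004 carries the elapsed time; Code=9029 carries the real member.
        elapsed = [e for e in entries if str(e.get("code")) == "100004"]
        identity = [e for e in entries if str(e.get("code")) == "9029"]
        if elapsed and identity:
            print(request_id,
                  "logged as:", elapsed[0].get("user"),
                  "| actual member:", identity[0].get("user"))
```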
Posted 04-22-2024 05:31 PM | Kudos 6 | Replies 0 | Views 1559

BLOG
Hi @ZachBodenner,
> could possibly expand on why it would be "potentially faster."
> Is that just because it's easier to devote dedicated instances to the service?
Yes, but I think it would have more to do with not splitting time across different service instances for retrieving the same data. If all 20 dedicated instances are from one service, then there is a greater chance of improved performance from the benefit of "cache hits". There is "cache" all over, but the one I am thinking of is at the ArcSOC level (depending on the service, there can be a workspace cache that can be taken advantage of). In the end, the performance of both configurations is probably really close, but if I were to go with one (without performance testing the differences between the two), I would pick the layers coming from the same service. Aaron
Posted 03-08-2024 02:55 PM | Kudos 0 | Replies 0 | Views 4033

BLOG
Hi @ZachBodenner, My take is that it would be more efficient (and potentially faster) to have all of the layers coming from the same service. This assumes all the layers are using the same connection to the data under the hood. With this approach, the web map can be more easily managed, as there is just one service to optimize and tune (e.g., number of instances). Granting permissions in Portal for ArcGIS should also be simpler. Of course, the elephant in the room is 20 layers. lol. If all 20 layers are required to be there for functionality, that is one thing. But, if possible, consider opting in for some of them or enabling some based on the map scale. Hope that helps. Aaron
Posted 03-01-2024 06:19 PM | Kudos 0 | Replies 0 | Views 4142

BLOG
Hi @ChiefKeefSosa300, It depends. You stated you tested a different map service. Does that mean different data than Natural Earth? If yes, each dataset will have its own performance and scalability profile, because geometry density and complexity can vary. Your data may have more (or fewer) layers, and each layer may have more (or fewer) attributes. If no, and you also tested Natural Earth, then there is a greater chance of seeing a similar performance/scalability profile, but there are still other variables which can impact response time and throughput, like the system architecture (number of machines) and hardware (number of CPUs, CPU speed, and memory). Hope this helps. Aaron
Posted 02-09-2024 05:52 PM | Kudos 0 | Replies 0 | Views 882