BLOG
> Does anyone have a sample bat file that schedules the running of System Log Parser that they can share?

Hello @NickMiller2,

What is the log source you want to parse with SLP? I'll list examples for several of the common ones. Select one of the slp.exe commands below (with its arguments) and put it into a bat file.

ArcGIS Server Log Query (File System):
slp.exe -f AGSFS -i \\myserver.domain.com\c$\arcgis\arcgisserver\logs\MYSERVER.DOMAIN.COM -eh now -sh 30day -a optimized -validate true

ArcGIS Server Log Query (Web):
slp.exe -f AGS -s https://myserver.domain.com/server -u gisadmin -p Myp@ssword -eh now -sh 7day -a optimized -validate true

Internet Information Services Log Query:
slp.exe -f IIS -i \\myserver.domain.com\c$\inetpub\logs\LogFiles\W3SVC1 -eh now -sh 30day -a optimized -validate true

All of the example commands above use the Optimized report, which is the recommended Analysis Type. The Optimized report has tremendous memory savings over the other analysis types, especially if you are reading 30 days' worth of logs. If you want to query ArcGIS Server logs but can only use the web, reading 30 days might be difficult (due to the nature of the REST Admin API that is used behind the scenes).

To assist in troubleshooting, "-validate true" is passed into the slp.exe command to print to the console any potential errors that might be encountered. Otherwise, slp.exe runs "silently". For in-depth troubleshooting, you can optionally pass in "-apploglevel DEBUG". This will create a unique run log of the slp.exe execution...with a lot of detail. The log file is typically found in: C:\Users\gisadmin\Documents\System Log Parser\Logs\Application_agsfsx2_20220702T220628_k6a7xrhk.log

Hope this helps.
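Putting one of the commands above into a minimal bat file might look like the following (the install path, server names, and log locations are assumptions; adjust everything for your environment):

```shell
@echo off
REM run_slp.bat -- hypothetical wrapper around slp.exe; adjust all paths.
REM Assumes System Log Parser is installed in C:\SLP.
cd /d C:\SLP
slp.exe -f AGSFS -i \\myserver.domain.com\c$\arcgis\arcgisserver\logs\MYSERVER.DOMAIN.COM -eh now -sh 30day -a optimized -validate true
```

To schedule it, you could register the bat file with the built-in Windows Task Scheduler, for example weekly on Sundays at 2 AM (task name and times are just examples): schtasks /create /tn "Weekly SLP Report" /tr "C:\SLP\run_slp.bat" /sc weekly /d SUN /st 02:00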
07-05-2022
11:29 AM
BLOG
System Log Parser's Optimized Analysis Type

Although you are probably familiar with using System Log Parser (SLP) to read logs and help you quantify your ArcGIS Enterprise usage, there is a relatively new feature in this popular, free utility that can make the effort easier and the analysis more powerful. This feature is the "Optimized" report option, a new parameter in the GUI listed under Analysis Type. For log queries such as ArcGIS Server (Web), ArcGIS Server (File System), Internet Information Services (IIS) and Elastic Load Balancer (AWS), it is now the default. This option is not yet available for CloudFront and Azure log queries. For backward compatibility, the Simple, WithOverviewCharts and Complete types are still an option, but for the best analysis experience they are not recommended.

The Optimized Analysis Type option shown in the GUI:

A System Log Parser spreadsheet report generated using the Optimized option:

Also Available From the Command-Line

The GUI (SystemLogsGUI.exe) is convenient for creating reports, but the command-line version (slp.exe) is great for automating System Log Parser analysis. The Optimized report option is available from both!

Optimized Report Origins

Over the years, one of the most requested enhancements for SLP has been to improve the speed and memory usage (of the machine running it) when parsing logs for very busy Sites or large time spans (e.g. several weeks or more). The Optimized report directly addresses these two items. Performance is measurably faster and memory usage is orders of magnitude lower. With the ability to search greater spans of time with quicker executions, more powerful statistics can be computed, as the report can analyze more requests. For the best results, use the Optimized report with ArcGIS Server (File System), Internet Information Services (IIS) or Elastic Load Balancer (AWS) log queries.
Memory usage savings can still be obtained with ArcGIS Server (Web) log queries, but the act of retrieving large amounts of logs through the REST Admin API is still the performance bottleneck.

Optimized Report Contents

System Log Parser reports focus on grouping the values from the logs into key time-based categories. While the Optimized report for the different log sources (ArcGIS Server, IIS, or ELB) has some commonality, there can be additional statistics depending on which was utilized. For example, if the Optimized report was used for querying ArcGIS Server (Web) or ArcGIS Server (File System) logs, wait time, instance creation time, and arrival time will each be statistically broken down in addition to elapsed time.

System Log Parser Support

For support, contact SystemTestTool@esri.com. Note: System Log Parser is developed by Esri's Professional Services but is not a product of Esri.

Latest Version

Bug fixes and new features are always being added to SLP. The latest version can be found here: System Log Parser (0.12.17.0)
07-03-2022
03:35 PM
POST
Hi @ZachBodenner, Maybe I can help answer some of your questions...

"What is the difference between the Elapsed Time (AVG) in the Statistics by Resource tab, Wait Time (AVG) in the Wait Time (Queue Time) tab, and Instance Creation Time (AVG) in the Instance Creation Time tab?"

I'll start with elapsed times. These entries in the ArcGIS Server log (code 100004) are not response times per se, but an elapsed time. This value marks the duration the ArcSOC.exe process spent working on the request. This time also includes data access time but does not include wait time or creation time (or transport time).

Wait time or queue time is the duration of the request spent waiting for an available ArcSOC.exe process just to start working. Typically, this time is low (subsecond). When it is higher than a second, that is important information to note and usually suggests that the service's maximum number of instances is not high enough. Since there are not enough instances at that moment, requests start queueing...this waiting can impact the user experience.

Instance creation time (or start time) is the duration of the overall request time spent "waiting" (a different wait than the previous one) for an available ArcSOC.exe (service instance) to start up. Sometimes there are enough instances ready to handle incoming requests, sometimes not. Some deployments have configurations (for dedicated services) that use different values for the minimum and maximum number of instances...the default is a min of 1 and a max of 2. In the case where a lot of requests for a service come in all at once, a demand is (first) put on the number of instances that need to be running to meet this need. If not enough are running, more are started until the maximum is reached. All service start-up times are different, but these log values help identify the cost. Sometimes it is expensive and the start-up times are very long.
If you feel the times are too long and the user experience is impacted, increase the service instance minimum. Another strategy when seeing long wait times and/or instance creation times for a service is to change its type from dedicated to shared. This is great to do for seldom-used services and more optimal than setting the instances to a min of 0 and a max of 1. While switching to a shared service type has several benefits, the service capability has to be supported (as a shared service) and the service needs to be published through ArcGIS Pro.

Note, there is no entry in the ArcGIS Server logs that shows transport time: in other words, the time it takes to transfer the response from ArcGIS Server to the Web Adaptor and then to the client. However, using System Log Parser to analyze the Web Adaptor access logs shows request times in the report that are closer to actual response times.

"What factors impact the time it takes for a server to create an instance? For example, I'm looking at a server resource that I have configured to have 2 minimum services running, yet the instance creation time is listed as 55 seconds on average. If I have 2 ArcSOCs dedicated to that service, why would it take so long to create?"

The number of instances available to the service does not directly impact the creation time. But the composition and complexity of the MXD or ArcGIS Pro project are two factors that can impact the creation time. As for places to start troubleshooting the 55 seconds, try opening up the MXD or ArcGIS Pro project and examining it. Perhaps one layer, at the default extent, is showing too much data and detail. You might improve this by zooming in closer, turning off the layer, or optimizing the layer (e.g. adding a missing index, simplifying the geometry).
"Finally (though I have a lot more questions, I'll end here to avoid too long a post), in the Summary tab, no matter how far back I set my query in the Parser tool, the Data TimeSpan field seems to max out at a certain distance into the past. Is there an action that resets the .log file so that I can only see back so far? Restarting the ArcGIS Server instance or rebooting the server, for example?"

Running System Log Parser from the command line can yield more options to run the query for a longer duration of time. It can go really far back...though by default, ArcGIS Server only keeps logs for 90 days. If the Data TimeSpan field appears to max out, it might be due to the log retention configuration of ArcGIS Server (this setting can be adjusted from Manager). When creating a report against a long duration of time, it is highly recommended to select the Optimized Analysis Type and, if available, use the ArcGIS Server Log Query (File System) for faster log-reading IO.

Hope that helps.
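To make the relationship between these time components concrete, here is a small illustrative sketch. This is a simplified mental model, not an official ArcGIS formula (and remember that transport time is not logged by ArcGIS Server at all):

```python
# Simplified model of where an ArcGIS Server request spends its time.
# The component names mirror the System Log Parser report tabs; the
# breakdown below is illustrative only.

def approximate_request_time(wait_s, creation_s, elapsed_s, transport_s=0.0):
    """Approximate end-to-end request time from its logged components.

    wait_s      -- queue time waiting for an available ArcSOC.exe
    creation_s  -- time spent starting a new ArcSOC.exe, if one was needed
    elapsed_s   -- time the ArcSOC.exe worked on the request (log code 100004)
    transport_s -- response transfer time (not present in ArcGIS Server logs)
    """
    return wait_s + creation_s + elapsed_s + transport_s

# A healthy request: sub-second wait, no instance start-up needed.
healthy = approximate_request_time(0.1, 0.0, 1.0)
# A request that triggered a slow instance start-up (the 55-second example).
cold_start = approximate_request_time(0.1, 55.0, 1.0)
print(healthy, cold_start)
```

The point of the sketch: a 55-second average instance creation time dominates everything else, which is why raising the minimum instance count (so instances are already running) helps so much.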
07-01-2022
03:26 PM
BLOG
Hi @ABDURRAHAMANMIRZA,

From your User Defined Variables, it looks like ServiceName was set to GPServer, but I think it should be "Buffer" (just like ToolName).

I noticed that your JobStatusCheck request is no longer dynamic but fixed. Is there a reason it was changed? Having a static jobid will not work in the long run, as the "id" will change every time a new one (e.g. a job) is submitted. Did the dynamic JobStatusCheck work? For example, a JobStatusCheck with a Path like: /${ServerInstanceName}/rest/services/${ServiceName}/GPServer/${ToolName}/jobs/${jobId}

"All the values are appearing as 0 for LoopJobStatus in the Result Tree"

All of the requests in the "playback" might be green but appear 0 because the test was using a hard-coded jobid. This is just a guess, but the jobid should be dynamic and captured when the job was first submitted. At this point in the test, the job id is captured with a regular expression and placed into the variable called ${jobId}.

If you are trying to troubleshoot a test, I recommend adding the JMeter test element called "Debug Sampler". You can add Debug Samplers all over as you troubleshoot. This item lets you see all the variables as the test runs.

Hope that helps. Aaron
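Outside of JMeter, the dynamic capture can be illustrated in a few lines of Python. The response string below is an assumed example of an asynchronous submitJob JSON response, and the regular expression mirrors the kind of pattern a Regular Expression Extractor might use:

```python
import re

# An assumed example of a JSON response from an asynchronous GP submitJob call.
response = '{"jobId": "j4fa1db2338ca4866aa96f03fea2a7774", "jobStatus": "esriJobSubmitted"}'

# Capture the job id, as JMeter's Regular Expression Extractor would,
# so it can be substituted into later JobStatusCheck requests.
match = re.search(r'"jobId"\s*:\s*"([^"]+)"', response)
job_id = match.group(1)
print(job_id)  # j4fa1db2338ca4866aa96f03fea2a7774
```

Each new submitJob response yields a different id, which is why the extraction must run on every test iteration rather than being hard-coded.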
06-19-2022
11:51 PM
BLOG
The roads hosted feature service Apache JMeter Test Plan has recently been updated: roads_hfs2.zip. Updates include improved Linux support, as well as a correction to the name of the ArcGIS Server instance, which was not being reflected through the User Defined Variable. The HTTP Request Names have also been shortened for easier readability.
05-03-2022
02:23 PM
BLOG
Recommended Strategies for Load Testing an ArcGIS Server Deployment has been updated to include several additional tips and items to consider when testing.
05-03-2022
11:19 AM
BLOG
Hello @AYUSHYADAV,

> Just wanted to ask whether we need to install any plugin for that or how we will get that in our test plan.

The "bzm – Concurrency Thread Group" does require the JMeter Plugins Manager to be installed. With the Plugins Manager in place, JMeter will automatically download and install any additional items that are referenced in a Test Plan when you open it, which is really convenient.

To install the Plugins Manager: download the plugins-manager.jar file and put it into JMeter's lib/ext directory (e.g. C:\apache-jmeter-5.4.3\lib\ext). Then (re)start JMeter.
04-04-2022
09:42 AM
BLOG
Administration Automation with Apache JMeter

Apache JMeter is a great load testing tool, but it's a fantastic automation framework too! There are many ArcGIS Enterprise administrative workflows and automation solutions for your portal. This Article focuses on using JMeter to call the ArcGIS REST API in order to carry out user management tasks that would be tedious for large numbers of members. Thankfully, JMeter's GUI makes the test setup and REST request building easy.

The User Administration Test Plans

To download the Apache JMeter Test Plans used in this Article see: portal_administration1.zip

This project includes 6 Test Plans for ArcGIS Enterprise 10.9/10.9.1:

Add a new user -- A simple, basic test that just adds new members
Add a new user with a few options -- A test that adds new members but allows the Start page and a Portal Group to be specified
Add a new user with more options -- A test that adds new members (Start page and Group) but can also set Add-on licenses
Set the security question/answer for new users -- Sets the security question and answer for newly added members that have not logged in
Disable a user -- A test that disables users
Enable a user -- A test that enables users

The CSV Data Set Config of Users

For convenience, all the Test Plans in the project work off the same list of users from the same file. In the Test Plans, this is referenced by the CSV Data Set Config element named "Users File".

The Text File List of Users

The included text file contains user information for working with 10 different members. However, it can be adjusted and/or expanded to suit the needs of your organization. There are many different options for the role and userLicenseTypeId fields. These choices can also impact the Add-on licenses, as some automatically include specific entitlements.

Note: It is recommended to first run the test plans with a small list of users in order to see if everything is configured correctly for your Site.
Administrator Login

With the exception of one, all the included Test Plans have logic to log in as a built-in Portal for ArcGIS administrator at the beginning of the test. For efficiency, this action is only executed once (at the start) per test thread.

Note: When connecting to the Portal for ArcGIS component of ArcGIS Enterprise, the Test Plans will be sending requests directly to the "arcgis" instance on port 7443.

Add a New User (portal_users_add1)

The portal_users_add1 Test Plan is a simple way to add new members to Portal for ArcGIS. The administrator credentials are specified in the User Defined Variables section of the Test Plan. Except where noted, this login step is performed at the beginning of all the included tests. Once the test authenticates as the administrator, it calls the createUser function and repeats it for each line in the file containing the list of users. Since this test uses a single HTTP Request to create the user, it is the fastest and most scalable way to add new members. This test only adds users; it does not perform any other duties such as joining a member to a group, setting the Start page, or selecting add-on licenses.

For convenience, the username is appended to all user-based transactions and requests. This assists troubleshooting if a particular iteration of the test could not add a specific user. All of the included tests follow this design pattern.

This test is similar to the process used in the Example: Add members to the portal resource, the command line utility, and the Add members from a file feature built into Portal for ArcGIS.

Add a New User (portal_users_add2)

The portal_users_add2 Test Plan is an easy way to add new members to Portal for ArcGIS but includes a few options. In addition to creating the user, this test allows the administrator to set additional properties like the Start page (also known as the landing page) and a Portal Group.
This test expands the user creation process by 3 additional requests per user. If creating thousands of users, you may notice this test takes longer to complete than portal_users_add1; this is due to the fact that more work is taking place.

Note: A new member can actually be added to more than one Portal Group on creation. However, for simplicity, portal_users_add2 only adds the user to one group, and the same group is used for all members. The group used is defined by the PortalGroupId User Defined Variable. This GUID needs to be manually looked up from your Portal for ArcGIS Site. If you do not wish to add the user to a Group, simply disable the setProperties request in the test.

Add a New User (portal_users_add3)

The portal_users_add3 Test Plan is an automated way to add new members to Portal for ArcGIS with the most options for an administrator. This test allows you to set the Start page and Portal Group but adds the ability to specify Add-on licenses like ArcGIS Pro and Extensions and certain User type extensions.

Immediately after the administrator authentication, the test makes a call to retrieve GUIDs for the ArcGIS Pro and User type extensions. These GUIDs will be used later when assigning the licenses to the users. The add-on licenses add several more requests to the user creation process. Although powerful, these additional requests can add time to the overall task, as they are performed for each member that is created. The test is configured to assign the user: ArcGIS Pro Advanced and all available Extensions (as of 10.9/10.9.1), and all User type extensions.

Note: There are other Add-on licenses, such as Applications and ArcGIS Runtime extensions, that were not included in the portal_users_add3 Test Plan. Many of these other licenses would require their own specific HTTP request. Again, while this can be convenient and powerful, it can add time to the process of adding each user.
There are also some licenses, like App bundles, which were not included in the test as they are automatically included with the user license type (e.g. Creator).

Set the Security Question/Answer for New Users (portal_users_update_profile1)

The portal_users_update_profile1 Test Plan is a little unique. It is the only test in the project which does not log in as a Portal for ArcGIS administrator. Instead, it logs in as each user and assumes it is performing the initial login for each member, as it will set their security question and answer. Presetting the security question and answer is completely optional; your organization may instead prefer to have each user set these values when they first log in.

Disable a User (portal_users_disable1)

The portal_users_disable1 Test Plan is an automated way of taking a list of users and disabling their membership to the portal. Once the account is disabled, the user cannot log in. This is a less destructive function than delete. Disabling users is fairly straightforward and performed with one REST call to disableUsers.

Note: For simplicity, the disableUsers request in the portal_users_disable1 Test Plan only disables one member at a time. However, each call to the disableUsers function will accept groups of users for improved efficiency. As of 10.9/10.9.1, disableUsers accepts up to 25 users at a time.

Note: The portal_users_disable1 test can be executed over the same users successfully. From the point of view of ArcGIS Enterprise, it is just disabling the member(s) again.

Enable a User (portal_users_enable1)

The portal_users_enable1 Test Plan is an automated way of taking a list of users and enabling their membership to the portal. Once the account is enabled, the user can log in again. Enabling users is fairly straightforward and performed with one REST call to enableUsers.

Note: For simplicity, the enableUsers request in the portal_users_enable1 Test Plan only enables one member at a time.
However, each call to the enableUsers function will accept groups of users for improved efficiency. As of 10.9/10.9.1, enableUsers accepts up to 25 users at a time.

Note: The portal_users_enable1 test can be executed over the same users successfully. From the point of view of ArcGIS Enterprise, it is just enabling the member(s) again.

The Thread Group Configuration

Unlike the previous Apache JMeter Article tests that are time-dominant, the Test Plans in this project are iteration-based. In other words, when creating or disabling specific users, we only need to work through the list of users of interest once. The Thread Group "step load" configuration that is included by default with every Apache JMeter installation includes a very convenient Loop Count setting to specify exactly how many iterations the Test Plan should execute. The Loop Count setting should match the number of lines in the "Users File" that contain members to be added/disabled/enabled.

Note: All Test Plans in the project are configured with the same Thread Group setting. Additionally, all of the included tests are executed with one concurrent test thread.

Test Execution

Also, unlike the previous Apache JMeter Article tests that are executed from the command line, you can probably get away with running these administrative automation Test Plans right from the GUI. Of course, this depends on how many users you are planning to create, disable, or enable. If you are working with a few hundred, then the GUI would be fine. However, if you plan to create thousands or tens of thousands of users (or more), you will want to run the Test Plans from the command line for the best usage efficiency of the test workstation resources. See the runMe.bat script included with the portal_administration1.zip project for an example of how to run a test as recommended by the Apache JMeter team. This script is configured to run portal_users_add3 but can easily be adjusted to run any of the tests.
The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment.

Note: It is always recommended to coordinate the start time with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premises ArcGIS Enterprise Site.

Validating the Test Plans

If the test is being run from the GUI, there are several listeners that have been added to all of the included Test Plans that offer immediate feedback on the status. The View Results Tree element offers a convenient way to quickly examine the status of each transaction (e.g. "Create User Account -- portalpublisher2") and its respective requests (e.g. "/arcgis/portaladmin/security/users/createUser--portalpublisher2"). Thanks to the Response Assertion rule elements added to each request, the green check mark is a trusted indicator of a successful transaction or request. The View Results in Table element offers a handy way to see the status of each transaction and its response time, all from one table.

Troubleshooting a Command-line Test Execution

As mentioned earlier, when working with large numbers of users, the recommended approach is to run the Test Plans from the command line. However, administrators will be very interested to understand which users, if any, encountered errors during the automation. It is here that the JMeter Test Report can offer great insight. From the Request Summary pie chart on the initial page of the Test Report, you can quickly see if any errors were encountered. If errors were encountered during the test run, the Statistics table (at the bottom of the first report page) makes the failed user requests easy to find when sorting by the FAIL column.

Final Thoughts

There are many frameworks, tools and utilities out there to perform administrative task automation for ArcGIS Enterprise. Most likely, they all have their own strengths.
Apache JMeter is handy as it provides a graphical interface for building and adjusting the REST requests needed to perform these functions. The HTML/JavaScript reports, which can be automatically created at the end of a test run, are a nice bonus for understanding whether the whole job was successful or which particular parts failed.

To download the Apache JMeter Test Plans used in this Article see: portal_administration1.zip

A Quick Word on Using Multiple Threads

All of the included tests could be configured to use multiple, concurrent threads for faster execution. This is fine from a technical point of view, but all of these tests perform write operations against the internal database of the Portal for ArcGIS component. As with any database, such operations can be resource intensive and can only go so fast. Using too many concurrent threads may actually slow down the performance of these tests.

A Quick Word on Deleting Users

The tests included with the project do not include a delete user operation. Deleting a member from the portal is permanent (without backups being available), and tools that automate this action should be used with caution. Additionally, some users may have uploaded a plethora of content to the portal. This content would need to be deleted or transferred to another user before removing that member.

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
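As a rough sketch of what one iteration of a user-creation Test Plan sends, the snippet below builds (but does not send) a createUser request. The host name, credentials, and exact parameter set are assumptions for illustration; consult the ArcGIS REST API documentation for your Enterprise version before using anything like this:

```python
# Sketch of the kind of createUser request the Test Plans issue.
# PORTAL, the token, and the user values below are all hypothetical.
import urllib.parse

PORTAL = "https://myportal.domain.com:7443/arcgis"  # assumed host

def build_create_user_request(token, user):
    """Return the URL and urlencoded form body for one createUser call."""
    url = f"{PORTAL}/portaladmin/security/users/createUser"
    params = {
        "username": user["username"],
        "password": user["password"],
        "email": user["email"],
        "role": user["role"],                            # e.g. org_publisher
        "userLicenseTypeId": user["userLicenseTypeId"],  # e.g. creatorUT
        "f": "json",
        "token": token,
    }
    return url, urllib.parse.urlencode(params)

url, body = build_create_user_request(
    "FAKE_TOKEN",
    {"username": "portalpublisher2", "password": "changeme1",
     "email": "pub2@example.com", "role": "org_publisher",
     "userLicenseTypeId": "creatorUT"},
)
print(url)
```

In JMeter the same form fields live in an HTTP Request sampler, with the username and license values fed in per iteration by the "Users File" CSV Data Set Config.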
03-07-2022
11:38 AM
BLOG
For easier readability, the "A Quick Word on Sizing" and "A Quick Word on Instances" sections were moved from Why Test an Asynchronous Geoprocessing Service? to under Final Thoughts. "A Quick Word on the Location of the arcgisjobs Folder" was just added and also placed under Final Thoughts.
02-18-2022
04:20 PM
BLOG
Why Test an Asynchronous Geoprocessing Service?

The most popular reason is probably, "because you have been asked to test it". As a tester, GIS administrators may look to you to show how a geoprocessing (GP) model behaves as a service under load. Many GP models are built to perform long-running, critical and complex tasks. Since they are a resource often found on ArcGIS Enterprise deployments (as a service), this makes understanding their performance and scalability profiles key.

Asynchronous Geoprocessing Service Testing Challenges

The hardest part of load testing an asynchronous geoprocessing service is the loop logic and being able to handle the different states of the job. The loop should not be too aggressive, and you need to exit the loop under the right conditions. Appropriately marking a test iteration as failed based on the job status, or passed based on the output results from the task, is also critical.

How to Test an Asynchronous Geoprocessing Service?

The basic steps for load testing an asynchronous GP service are:

1. Provide inputs for the geoprocessing task
2. Submit the job
3. Capture the unique job ID
4. Perform an initial job status check
5. Loop on the job status check:
   - If the job succeeded, exit the loop
   - Sleep for a short duration
6. If the job succeeded, examine the results
7. If available, download the output/data

At a minimum, the Apache JMeter Test Plan should handle this logic. However, there are some enhancements that can be added to this process, like:

- A maximum number of job status checks in the loop
- A maximum number of test iterations
- Marking the job as failed if a non-successful status is returned

The Test Plan included in the Article provides these additional features.

The "Summarize Invasive Species" Geoprocessing Model and Dataset

The understanding of the process in this Article is most effective if the steps can be reproduced. For that, we can turn to a modern ArcGIS Pro GP model and dataset that is free and publicly available.
The Share a web tool -- Summarize Invasive Species package is available from arcgis.com. View of the New Zealand data from ArcGIS Pro (with Topographic Basemap):

This Article will not cover the details of configuring or publishing this model as a service in ArcGIS Enterprise. For information on such actions, see: Share a web tool

Test Data Generation

Although the JMeter Test Plan will utilize some data to make the inputs to the model/service dynamic and more realistic, this data is pre-generated and automatically included with the test. The reason is to focus on the test logic (and not the data generation).

The Asynchronous Geoprocessing Service Test Plan

To download the Apache JMeter Test Plan used in this Article see: async_gp1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

Adjust the User Defined Variables to fit your environment.

Note: The test has different variables for the name of the (GP) service and (GP) tool. When published, they often use the same name (e.g. "SummarizeInvasiveSpecies") but do not have to.

Components of the Test Plan

SubmitJob

It all starts with "submit the job". This is probably the easiest part of the test. Once this request has been sent, the service returns a job id and the server can begin to work on the task. This id is captured by the Regular Expression Extractor element.

Note: The job id is unique to each job (and test thread). It is used in every subsequent request in the test.

CSV Data Set Config

We briefly skip to the bottom to mention the CSV Data Set Config element. The inputs are important and required to submit a job, but the generation of their data is not a heavy area of focus of this testing Article. Contents example of the inputs.csv file:

Note: The full inputs.csv file is included with the async_gp1 Test Plan.
InitialJobStatus

The initial job status transaction has one HTTP request element inside that is used to find and populate a variable (jobStatus) with the current state of the submitted task. This value will be used to enter the upcoming while loop. As of ArcGIS Enterprise 10.9, there are just a few different values that the status of a job can have; these states reflect the job's life cycle:

esriJobSubmitted
esriJobWaiting
esriJobExecuting
esriJobSucceeded
esriJobFailed
esriJobTimedOut
esriJobCancelling
esriJobCancelled

Ideally, from the status perspective, we expect the job's states to be:

esriJobSubmitted --> (esriJobWaiting -->) esriJobExecuting --> esriJobSucceeded

Accounting for the other values is what makes the testing of the service both fun and tricky.

LoopJobStatus

The job status loop (also known as the while loop) has several parts to it. They all play an important role in periodically examining the status of the (test thread's) unique job id and then properly handling the state when it changes (e.g. succeeds or fails or just takes too long):

While loop logic
Job status check
Short sleep timer

There are also some nice-to-have extras (mentioned earlier as enhancements):

Response assertion check
Maximum loop check

WhileLoop

As long as the returned job status is either "esriJobSubmitted", "esriJobWaiting", or "esriJobExecuting", the loop will continue running. Just checking against these three states helps keep the loop logic simple.

Note: A job status of "esriJobSucceeded" will exit the loop. This is a good thing and what we want the test logic to encounter.

JobStatusCheck

The job status check is an HTTP request asking for the current value of the task. It does the same thing as the request inside InitialJobStatus, but in a loop.

Note: Since this element is inside a loop and there will most likely be multiple occurrences, it is important not to give the HTTP Request the same name as the Transaction.
This helps avoid confusion in the analysis and reporting.

ResponseAssertion

As mentioned, this logic is a nice-to-have. The value of the job status from the LoopJobStatus request is immediately checked. If it is "esriJobSubmitted", "esriJobWaiting", "esriJobExecuting" or "esriJobSucceeded", the request will be marked as successful. If any other job status appears (e.g. "esriJobFailed", "esriJobTimedOut"), the request (and the loop transaction) will be marked as failed, which is what we want. This design favors a simple approach to the testing logic.

Note: "esriJobSucceeded" is not a condition in the while loop, but it is a value we look for with the ResponseAssertion rule to determine a successful request.

SleepWhileLoop

The sleep timer is critical. Without it, too many status check requests are sent to the service, which causes unnecessary load. Since the job status request is fast and lightweight but the overall task is long-running, it makes sense to delay each status check by a second or two. This is exactly what this timer does.

Note: The Test Plan sleep variable, WhileLoopSleep, is set to 2000 (ms), which is 2 seconds.

IfWhileLoopMax

The while loop iteration checking logic is also a nice-to-have. It has several parts to it, and the logic is carried out independently for each job.
IfWhileLoopMax -- This element verifies whether the job status check loop has executed more than the allowed maximum number of iterations (the WhileLoopMax variable...default is 300); if the limit has been reached, it carries out the following test elements:

WhileLoopMaxReached -- An HTTP request identical to LoopJobStatus
JSR223Listener -- This logic purposely fails the WhileLoopMaxReached request that was just sent
FlowControlAction -- This logic ends the job status check loop by immediately stopping the current, individual test thread

CounterWhileLoop -- A counter that is incremented on every iteration of the job status check

Note: The purpose of the IfWhileLoopMax logic is to stop the job, fail the loop operation, and make it easy for the tester to see that a job is taking "too long" to execute the task. The Test Plan uses 2000 (ms) and 300 for the WhileLoopSleep and WhileLoopMax variables, respectively. This allows for about a 10 minute job execution time. If your jobs' run times are longer, adjust these values as needed.

IfJobSucceeded

Now out of the while loop (finally!), the test logic checks the last known status of the job. If it succeeded, it carries out some additional logic:

GetParamUrl -- This HTTP request is identical to the LoopJobStatus (and InitialJobStatus) elements
The server response for this request is examined differently, as it is parsed for the value of the paramUrl string

DownloadOutput

The value of the paramUrl variable is appended to the unique job request URL and the contents are downloaded.

DownloadOutput -- This HTTP Request downloads the content whose name is based on the value of the paramUrl variable populated from the previous request
ResponseAssertion -- A response assertion looks for the key word "rings" to validate that the contents actually contain a geometry

Note: This step is optional, but it represents the full delivery of the task...the data specific to the submitted job.
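The polling pattern described above (check the status, sleep briefly, cap the iterations) can be sketched outside of JMeter in a few lines of Python. This is only an illustration of the while loop's control flow, not the actual Groovy/JSR223 logic from the Test Plan; fetch_status is a hypothetical callable standing in for the job status HTTP request.

```python
import time

# Transient states keep the loop running; any other state exits it
TRANSIENT = {"esriJobSubmitted", "esriJobWaiting", "esriJobExecuting"}

def poll_job(fetch_status, sleep_ms=2000, max_iterations=300):
    """Poll fetch_status() until the job leaves a transient state or the
    iteration cap is reached (mirrors WhileLoopSleep and WhileLoopMax)."""
    for _ in range(max_iterations):
        status = fetch_status()
        if status not in TRANSIENT:
            return status  # e.g. esriJobSucceeded or esriJobFailed
        time.sleep(sleep_ms / 1000.0)
    # The IfWhileLoopMax equivalent: the job took "too long"
    return "loopMaxReached"
```

With the defaults, the cap allows roughly 2000 ms x 300 = 600 seconds (10 minutes) of job execution before the loop gives up, matching the Test Plan's variables.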
Verifying that the job's output contained geometry data helps the test show that the service was working as expected. Other jobs may produce an entirely different output; adjust the ResponseAssertion logic as needed.

IfTestIterationMax

The test iteration check is also a nice-to-have feature for load testing an asynchronous geoprocessing service. Its logic is very similar to the IfWhileLoopMax check. However, it keeps track of the total number of jobs (successful or failed) across all test threads.

IfTestIterationMax -- This element verifies whether the number of executed tests (e.g. jobs submitted) is more than the allowed maximum number of iterations (the TestIterationMax variable...default is 2500); if the limit has been reached, it carries out the following test elements:

TestIterationMaxReached -- An HTTP request identical to LoopJobStatus
JSR223Listener -- This logic purposely fails the TestIterationMaxReached request that was just sent
FlowControlAction -- This logic ends the load test by immediately stopping all test threads

CounterTestIteration -- A counter that is incremented on every test iteration (in other words, one job submitted equals one test iteration)

Note: The purpose of the IfTestIterationMax logic is to stop the load test after a specific number of jobs have been sent. Not all tests need to utilize this feature or hit this maximum. However, if you are experimenting with the test logic, it is a good strategy to set the maximum to a low value until you have verified that things are behaving as expected. Otherwise, your test might send many long-running jobs to the service at once, which in turn could take a while to complete. This feature helps avoid that scenario.

The Thread Group Configuration

The JMeter Test Plan is currently configured for a 30 minute load test with each step lasting a little under 2 minutes.
Different environments and data may require an alternative setting to achieve the desired test results; adjust as needed.

The average "Summarize Invasive Species" job in this example takes between 8 and 18 seconds
If your service's jobs take significantly longer, you should adjust the Thread Group Configuration to produce a step duration longer than 2 minutes in order to obtain a decent sampling (per step)

Validating the Test Plan

As a best testing practice, it is always a good idea to validate the results coming back from the server before applying the actual load.

Use the View Results Tree listener to assist with the validation
The Test Plan includes a View Results Tree listener, but it is disabled by default
Enable it to view the results
From the GUI, start the test

Transactions

Select and expand one of the "LoopJobStatus" Transactions
The results should resemble the following:

In this example, the LoopJobStatus transaction above contained 7 status check requests that all completed successfully, and because of this, the job also succeeded
The response time of the loop (e.g. the job) was just over 12 seconds (12066 ms)
As more pressure is applied (e.g. via the load test), each job will require more status checks, which will in turn take longer to complete
By design, this shows up as longer response times for the LoopJobStatus operation
The response time of the LoopJobStatus transaction is a great measuring stick for judging the overall performance and scalability of an asynchronous geoprocessing service

Note: Generally speaking, the "job status loop" component of an asynchronous geoprocessing service test will represent the bulk of the time for every test iteration. All the other operations (SubmitJob, InitialJobStatus, DownloadOutput, etc.) typically happen very quickly.
Requests

Expand one of the "DownloadOutput" Transactions
Select one of the https requests
The results should resemble the following:

The contents from the DownloadOutput request are helpful for validating that the job was able to produce an expected output
In this case, it is a geometry formatted in JSON
Based on the GP model used as the service, this geometry summarizes the range of invasive grass species near locations where people may come into contact with the grasses and facilitate their spread

Note: Other geoprocessing services may produce a different type of output than the JSON shown in the example above.

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the async_gp1.zip project for an example of how to run a test as recommended by the Apache JMeter team.

The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel in your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Note: For several reasons, it is strongly advised to never load test services provided by ArcGIS Online.

JMeter Report

Throughput Curves

As expected for a long running job, the throughput of the LoopJobStatus operation is low
The peak throughput appeared to occur at the 11:48 mark
At this time, the service produced about 0.4 transactions/second
This equated to around 1,440 jobs through the system per hour

Note: The JobStatusCheck request is selected to disable its rendering in the chart.
Since this was a fast request, it showed a higher throughput than the LoopJobStatus operation, but that is not what we are interested in for understanding the scalability of the service.

Performance Curves

The performance of the job at the beginning of the test was about 14 seconds
At the point where the throughput maxed out (the 11:48 mark), the response time had increased to over 21 seconds
Despite an increased load that produced longer and longer response times, the service and system continued to complete the jobs successfully

Final Thoughts

While there are many geoprocessing models out there that perform a variety of different tasks, this Article can be used as a guide on how to load test an asynchronous GP service. As with many things related to testing, geoprocessing services are easy to apply load against. However, since each job has a life cycle that needs to be tracked, the test logic has to account for the changing job status. It is this characteristic of the service that introduces some complexity to the Test Plan. That said, Apache JMeter is a feature-rich testing framework that helps testers meet this challenge.

To download the Apache JMeter Test Plan used in this Article see: async_gp1.zip
To download the geoprocessing package and data used in this Article see: Share a web tool

A Quick Word on Sizing

The focus of previous testing articles has typically not been to offer strategies and techniques on capacity planning. However, generally speaking, long-running jobs like the ones from an asynchronous geoprocessing service are relatively easy to size. For example, if the average job duration for a service is something long like 30 seconds and your ArcGIS Server machine has 8 CPU cores, you would be able to support 8 concurrent users (before queueing begins to occur).
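This rough sizing arithmetic can be captured in a small helper. This is a back-of-the-envelope sketch only (the function name and shape are mine, not part of the Test Plan): one running job occupies roughly one core, so concurrency before queueing is about the core count, and hourly throughput follows from the average job duration.

```python
def rough_gp_capacity(cpu_cores: int, avg_job_seconds: float):
    """Back-of-the-envelope sizing for an asynchronous GP service."""
    concurrent_users = cpu_cores                      # ~1 job per core before queueing
    jobs_per_hour = cpu_cores * 3600 / avg_job_seconds
    return concurrent_users, jobs_per_hour

# The example from the text: 30-second jobs on an 8-core ArcGIS Server machine
users, hourly = rough_gp_capacity(8, 30.0)
# users == 8 concurrent users before queueing; hourly == 960.0 jobs/hour
```

As noted below, exceeding this concurrency does not fail the service; it just introduces queueing, so response times lengthen for all running jobs.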
In other words, for the situation just mentioned, the rough sizing would be: 1 job == 1 core == 1 user (where the response times are > 1 second).

Note: This assumes the minimum number of instances for the GP service is set to the number of available CPU cores (e.g. 8 on an 8 core machine). This setting would be done for predictable performance and maximum throughput.

Would things fall over with more than 8 concurrent jobs running? Most likely not; this is just when queuing starts to occur. Whenever queuing starts to happen, the response times of the job completions will become a little longer (e.g. slower) for all the running jobs. Knowing this, do you still have to load test the GP service? Most likely yes. All geoprocessing models, data and inputs are different, behave differently and can use the available hardware in various ways. Your test will show this impact under load, which is very important to understand.

A Quick Word on Instances

While it is possible to have the dedicated GP service instances set to use all the CPU cores on ArcGIS Server, a GIS Administrator may intentionally choose not to go with that configuration. Since the jobs of the GP service could be very long running and resource intensive, an alternate deployment strategy might be to purposely set the maximum number of instances for the GP service to a lower value so other users can use different services without waiting (due to resource constraints).

A Quick Word on the Location of the arcgisjobs Folder

While not a focus of this Article, the location of the arcgisjobs folder can have an impact on the performance and/or scalability of the geoprocessing service. This location is where the service will temporarily read and write data as the job is being processed. The final output of each job is also stored in this location. For extremely busy Sites where thousands of jobs are concurrently being carried out from multiple ArcGIS Servers, consider the storage capabilities (e.g.
I/O speed, reliability) of this location.

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
02-18-2022 12:40 PM | BLOG
I have also uploaded an update to the cache tile test: cache_tiles2.zip

This version contains some added logic to better support vector tile caches. I added a User Defined Variable called "TileExtension". If your vector tiles show the extension for a "Protocolbuffer Binary Format" file, simply set TileExtension to .pbf (note the dot, as it would need to be included). Otherwise, leave this variable empty. I also added additional ContentTypes to the Response Assertion rules in order to check for returns that would be in "Protocolbuffer Binary Format".

Another important detail: the default User Defined Variable value for the ServiceName is NaturalEarth_256_Cache. However, if your cache was a Hosted service, this would be set to Hosted/NaturalEarth_256_Cache. The ServiceName should include the preceding directory, if it exists. Lastly, if testing a vector tile cache, the User Defined Variable for ServiceType would need to be changed to VectorTileServer.
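To make the variable relationships above concrete, here is a small sketch (in Python, outside JMeter) of how ServiceName, ServiceType and TileExtension combine into the tile path the test requests. The helper name is mine, but the path shape follows the standard ArcGIS REST cache tile endpoint.

```python
def tile_path(server_instance, service_name, service_type,
              level, row, col, tile_extension=""):
    """Build an ArcGIS REST cache tile path. For vector tile caches set
    tile_extension to ".pbf" (dot included); leave it "" for raster caches."""
    return (f"/{server_instance}/rest/services/{service_name}/{service_type}"
            f"/tile/{level}/{row}/{col}{tile_extension}")

# A Hosted vector tile cache: ServiceName keeps its "Hosted/" directory prefix
print(tile_path("arcgis", "Hosted/NaturalEarth_256_Cache",
                "VectorTileServer", 3, 2, 5, ".pbf"))
# -> /arcgis/rest/services/Hosted/NaturalEarth_256_Cache/VectorTileServer/tile/3/2/5.pbf
```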
02-08-2022 12:56 PM | BLOG
The Assumptions and Constraints section of the Article has been updated to include some environment details that might be helpful for certain conditions.
01-20-2022 11:59 AM | BLOG
Hi @DeanHowell1, Here is an Article you may find interesting for testing cached services: Creating a Load Test in Apache JMeter Against a Cached Map Service (Advanced). This Article focuses on testing a map service, but the same Test Plan should work against a cached image service. Hope this helps. Aaron
01-18-2022 12:56 PM | BLOG
Why Test a Cached Map Service?

Cached map services are a popular and recommended way to provide a well performing presentation of static data. The cache service type is a proven technology, but there may still be requirements to test it under load to observe its scalability first hand on a specific deployment architecture. While cached map services perform well, serving up thousands of simultaneous tile requests can be resource intensive on the server hardware.

Note: Due to the fast rate of delivery and consumption of the resource, load testing cached map services can also be intensive on the hardware utilization of the test client workstation.

Cached Map Service Testing Challenges

Compared to load testing the export map function, proper testing of a cached map service introduces several challenges, as the request composition changes with each map screen. Since the underlying cache scheme uses a grid design, the map extents of some pans or zooms may pull down more or fewer tile images than others. Accounting for this real-world behavior of the cache service makes the test logic more complex than if it were exercising the export map function. The test logic should also be dynamic and cover a decent area of interest. Converting a HAR file of captured cache tile requests into a test might be quick and easy to do, but it does not show a realistic scalability of the service. This is due to the small sample of tile requests being used over and over again. Generally speaking, requests for individual cache tiles are fast...very fast. Due to this behavior, the test logic also needs to perform well, scale with the service and have minimal overhead on the test client.

How to Test a Cached Map Service?

The steps in this Article should work with any existing cached map service on your local ArcGIS Enterprise deployment. However, if one is not available, it is recommended to give the Natural Earth dataset a look for the task.
The Natural Earth Dataset

Although the steps should work with any data, the walkthrough of the process in this Article might be more effective if it can be directly followed. In such cases, it is great to turn to the Natural Earth dataset, which provides some decent map detail (at smaller scales) covering the whole world.

Download the Natural Earth dataset here
The download above is a subset of the larger Natural_Earth_quick_start.zip and includes a modified MXD for ArcMap 10.8.1 and an ArcGIS Pro 2.8 project
Either can be used to publish and create a cached map service on ArcGIS Enterprise
The Natural Earth subset of data should look similar to the following when opened in ArcGIS Pro (or ArcMap)

This Article will not cover the details of creating, configuring or publishing a cached map service in ArcGIS Enterprise. For information on such actions, see: Tutorial: Creating a cached map service

Note: It is recommended to become familiar with some of the metadata details of the cached map service, as the load testing effort will require knowledge of some of that information (e.g. xorigin, yorigin, tileCols, tileRows, and spatial reference, as well as the scales that contain tiles).

Test Data Generation

With a cached map service available, the next step is to generate test data over an area of interest. As with other JMeter Articles on Community, we need good test data to get the most value from the results. And like before, the Load Testing Tools package (for ArcGIS Pro) makes short work of this job. There is even a specific tool for creating bounding box data to use with cached map services.

Note: Version 1.3.0 of Load Testing Tools added the "Generate Bounding Boxes (Precision)" tool.

Download and unzip the package, then make that folder available to your ArcGIS Pro project.
The Generate Bounding Boxes (Precision) Tool

Launching the Generate Bounding Boxes (Precision) tool should present an interface similar to the following:

Before running the tool, let's adjust the input to target the data generation process to:

Specific map scales (in this case three different scales)
Scales 4622324.434309 and 1155581.108577 were kept
Scale 2311162.217155 was added
The number of records to be generated was adjusted to reflect larger map scales
As the scale number goes down, we want the tool to generate more boxes
A specific area of interest (optional)
A polygon of the United States was added to a new map
This feature was set as the Constraining Polygon

Click Run
Tool execution may take a few moments

Visualizing the Generated Data in ArcGIS Pro

The Contents screen will populate with new feature classes that visually represent the generated data
Not all the generated map scales will be immediately seen

Visualizing the Generated Data in a Text Editor

Using the file system explorer, navigate to the ArcGIS Pro project used for generating the data and open one of the csv files using your favorite text editor
The file contents should look similar to the following:

The Apache JMeter test will be configured to convert each of these bounding boxes into the corresponding cached map tiles

The Cached Map Service Test Plan

To download the Apache JMeter Test Plan used in this Article see: cache_tiles1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

Adjust the User Defined Variables to fit your environment
Xorigin, Yorigin, TileCols, TileRows are properties of the created map cache that can be found on the REST endpoint page of the service
TileCols and TileRows are typically found under Tile Info Height and Width

Components of the Test Plan

CSV Data Set Config

The CSV Data Set Config elements in JMeter are used to reference the newly generated test data from the file system.
The current version of the Test Plan is built to utilize 3 different CSV files (one for each map scale data file).

Note: Other than the User Defined Variables and the setting of the Filename in the CSV Data Set Config elements, there should not be anything else that requires editing or changing in the Test Plan. The test logic is listed below just to explain how the values in the HTTP Request become populated.

Levels Of Detail List Logic

To avoid more complex JMeter test logic, 24 fixed map cache levels of detail are placed inside a class in a JSR223 Sampler test element. The "complex alternative" would be to connect to the endpoint of the service at the start of the test and pull down the cache tile metadata. Putting HTTP logic into JSR223 Samplers is technically doable, but not the route I chose.

There is only one JSR223 Sampler inside the Levels Of Detail Transaction
This item is executed only once, at the start of each test thread
The element contains 24 fixed cache levels of detail, with level 0 starting at scale 591657527.591555
If your cache scheme starts at a different scale for level 0, then the JSR223 Sampler will need to be manually adjusted
Otherwise, this JSR223 Sampler does not need to be edited to run the test
This assumes the cached map service has a Spatial Reference of 102100 (3857)

Levels Of Detail -- JSR223 Sampler (Full Logic):

// FileServer class
import org.apache.jmeter.services.FileServer
public class Lod{
int level
double resolution
double scale
double tolerance
}
public class MyLodList1{
public List<Lod> LodList = new ArrayList()
MyLodList1(){
// Based on ArcGIS Online Map Scales
// https://services.arcgisonline.com/arcgis/rest/services/World_Street_Map/MapServer
//
// Spatial Reference: 102100 (3857)
Lod lod = new Lod()
lod = new Lod()
lod.level = 0
lod.resolution = 156543.03392800014 //11
lod.scale = 591657527.591555
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 1
lod.resolution = 78271.51696399994 //11
lod.scale = 295828763.795777
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 2
lod.resolution = 39135.75848200009 //11
lod.scale = 147914381.897889
lod.tolerance = 0.25
this.LodList.add(lod)
lod = new Lod()
lod.level = 3
lod.resolution = 19567.87924099992 //11
lod.scale = 73957190.948944
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 4
lod.resolution = 9783.93962049996 //11
lod.scale = 36978595.474472
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 5
lod.resolution = 4891.96981024998 //11
lod.scale = 18489297.737236
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 6
lod.resolution = 2445.98490512499 //11
lod.scale = 9244648.868618
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 7
lod.resolution = 1222.9924525624949 //13
lod.scale = 4622324.434309
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 8
lod.resolution = 611.49622628137968 //14
lod.scale = 2311162.217155
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 9
lod.resolution = 305.74811314055756 //14
lod.scale = 1155581.108577
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 10
lod.resolution = 152.87405657041106 //14
lod.scale = 577790.554289
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 11
lod.resolution = 76.437028285073239 //15
lod.scale = 288895.277144
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 12
lod.resolution = 38.21851414253662 //14
lod.scale = 144447.638572
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 13
lod.resolution = 19.10925707126831 //15
lod.scale = 72223.819286
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 14
lod.resolution = 9.5546285356341549 //16
lod.scale = 36111.909643
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 15
lod.resolution = 4.77731426794937 //14
lod.scale = 18055.954822
lod.tolerance = 0.05
this.LodList.add(lod)
lod = new Lod()
lod.level = 16
lod.resolution = 2.388657133974685 //15
lod.scale = 9027.977411
lod.tolerance = 0.025
this.LodList.add(lod)
lod = new Lod()
lod.level = 17
lod.resolution = 1.1943285668550503 //16
lod.scale = 4513.988705
lod.tolerance = 0.025
this.LodList.add(lod)
lod = new Lod()
lod.level = 18
lod.resolution = 0.5971642835598172 //16
lod.scale = 2256.994353
lod.tolerance = 0.005
this.LodList.add(lod)
lod = new Lod()
lod.level = 19
lod.resolution = 0.29858214164761665 //17
lod.scale = 1128.497176
lod.tolerance = 0.005
this.LodList.add(lod)
lod = new Lod()
lod.level = 20
lod.resolution = 0.14929107082380833 //17
lod.scale = 564.248588
lod.tolerance = 0.0025
this.LodList.add(lod)
lod = new Lod()
lod.level = 21
lod.resolution = 0.07464553541190416 //17
lod.scale = 282.124294
lod.tolerance = 0.0005
this.LodList.add(lod)
lod = new Lod()
lod.level = 22
lod.resolution = 0.03732276770595208 //17
lod.scale = 141.062147
lod.tolerance = 0.0005
this.LodList.add(lod)
lod = new Lod()
lod.level = 23
lod.resolution = 0.01866138385297604 //17
lod.scale = 70.5310735
lod.tolerance = 0.0005
this.LodList.add(lod)
}
}
MyLodList1 mylods = new MyLodList1()
List<Lod> LodList = mylods.LodList
vars.putObject("LodList",LodList)

GetMapTile Logic

The JSR223 Samplers inside the GetMapTile Transaction contain the logic responsible for taking a bounding box and transforming it into the corresponding cache tiles.

There is one JSR223 Sampler for each map scale (e.g. one for each corresponding CSV Data Set Config)
CSV Data Set Config A --> JSR223 Sampler A1
This is executed with every test thread iteration
This is executed frequently...every time a new bounding box is read in
These JSR223 Samplers do not need to be edited to run the test

Note: JSR223 Samplers using Groovy are generally executed quickly and add very little overhead to the test

GetMapTile -- JSR223 Sampler A1 (Full Logic):

// Script to process a CSV file (from Load Testing Tools) with lines in the following format:
// bbox,width,height,mapUnits,sr,scale
// FileServer class
import org.apache.jmeter.services.FileServer
import org.apache.commons.math3.util.Precision
//import java.math.BigDecimal
// GetMapTile
bbox_var = vars.get("bbox_A")
String[] bboxParts = bbox_var.split(',')
double xmin = Double.parseDouble(bboxParts[0])
double ymin = Double.parseDouble(bboxParts[1])
double xmax = Double.parseDouble(bboxParts[2])
double ymax = Double.parseDouble(bboxParts[3])
width_var = vars.get("width_A")
height_var = vars.get("height_A")
// Use map scale resolution (map units per pixel) to determine tile level
double mapresolution = 0
int resolutionprecision = 10
mapresolution = Precision.round((Math.abs(xmax - xmin) / Double.parseDouble(width_var)), resolutionprecision)
scale_var = vars.get("scale_A")
double bbox_scale_double = Double.parseDouble(scale_var)
// Map units per pixel
double tileresolution = 0
double lod_resolution = 0
double scale = 0
int tilelevel = 0
LodList = vars.getObject("LodList") // Assuming cached map service has a Spatial Reference of 102100 (3857)
boolean firstIteration = true;
for(int i = 0; i < LodList.size; i++)
{
lod_resolution = Precision.round(LodList[i].resolution, resolutionprecision)
tileresolution = lod_resolution
tilelevel = LodList[i].level
scale = LodList[i].scale
if (mapresolution >= lod_resolution)
{
break
}
}
tileCols_var = vars.get("TileCols")
cols = Double.parseDouble(tileCols_var)
tileRows_var = vars.get("TileRows")
rows = Double.parseDouble(tileRows_var)
// Origin of the cache (upper left corner)
xorigin_var = vars.get("Xorigin")
xorigin = Double.parseDouble(xorigin_var)
yorigin_var = vars.get("Yorigin")
yorigin = Double.parseDouble(yorigin_var)
// Get minimum tile column
double minxtile = (xmin - xorigin) / (cols * tileresolution)
// Get minimum tile row
// From the origin, maxy is minimum y
double minytile = (yorigin - ymax) / (rows * tileresolution)
// Get maximum tile column
double maxxtile = (xmax - xorigin) / (cols * tileresolution)
// Get maximum tile row
// From the origin, miny is maximum y
double maxytile = (yorigin - ymin) / (rows * tileresolution)
// Return integer value for min and max, row and column
int mintilecolumn = (int)Math.floor(minxtile)
int mintilerow = (int)Math.floor(minytile)
int maxtilecolumn = (int)Math.floor(maxxtile)
int maxtilerow = (int)Math.floor(maxytile)
Scheme_var = vars.get("Scheme")
WebServerName_var = vars.get("WebServerName")
ServerInstanceName_var = vars.get("ServerInstanceName")
ServiceName_var = vars.get("ServiceName")
ServiceType_var = vars.get("ServiceType")
def cacheRequest
def tilePaths = []
int count = 0
for (int row = mintilerow; row <= maxtilerow; row++)
{
// for each column in the row, in the map extent
for (int col = mintilecolumn; col <= maxtilecolumn; col++)
{
cacheRequest = ("/").concat(ServerInstanceName_var).concat("/rest/services/").concat(ServiceName_var).concat("/").concat(ServiceType_var)
cacheRequest = cacheRequest.concat("/tile").concat("/").concat(tilelevel.toString()).concat("/").concat(row.toString()).concat("/").concat(col.toString())
count++
tilePaths.add(cacheRequest)
}
}
def requestCount = count.toString()
vars.putObject("RequestCount_A",requestCount)
vars.putObject("TilePaths_A",tilePaths)

Cache Tile Loop and Path Population

There are several components needed for this part of the Test Plan. With the bounding box translated into the corresponding cache tiles and assembled into a list of URLs, a third JSR223 Sampler is needed to place each URL into a variable inside a loop. The loop logic takes place inside the Cache Tiles transaction.

There is one JSR223 Sampler for each map scale
CSV Data Set Config A --> JSR223 Sampler A2
These JSR223 Samplers do not need to be edited to run the test
There is a Loop Controller added to only ask for the actual number of tiles per bounding box, since this amount can change from extent to extent
The number of tiles that correspond to each bounding box varies by extent, but also by the map resolution (1920x1080)
Higher screen resolutions require more tiles

The Loop Controller contains the following elements:

Counter
JSR223 Sampler
HTTP Request

All of the test logic above exists just for this component of the test. For each map scale, there is only one HTTP Request! This simple design favors readability and maintainability.

Note: The HTTP Requests contain a Response Assertion element to validate the items returned from the server. If the content type of the response is image/jpeg or image/png, then the request will pass. However, some VectorTileServer caches may return a Protocolbuffer Binary Format (*.pbf) file. In these cases, the Patterns to Test would need to be manually expanded to the following: image/jpeg || image/png || application/octet-stream || application/x-protobuf

The Thread Group Configuration

The JMeter Test Plan is currently configured for a relatively short test of 20 minutes. Cached map services perform well, so a lot of throughput will be taking place within each step (2 minutes per step) and from the test overall.
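The bounding-box-to-tile arithmetic performed by the GetMapTile JSR223 Sampler above can be restated in plain Python for reference. This is a sketch of the same formulas (cache origin at the upper-left corner, floor to integer tile indices), not code used by the Test Plan itself:

```python
import math

def tile_range(bbox, tile_resolution, xorigin, yorigin,
               tile_cols=256, tile_rows=256):
    """Translate a map extent into min/max tile rows and columns.
    bbox is (xmin, ymin, xmax, ymax) in map units; tile_resolution is
    map units per pixel at the chosen level of detail."""
    xmin, ymin, xmax, ymax = bbox
    tile_width = tile_cols * tile_resolution    # map units spanned by one tile
    tile_height = tile_rows * tile_resolution
    min_col = math.floor((xmin - xorigin) / tile_width)
    max_col = math.floor((xmax - xorigin) / tile_width)
    # The cache origin is the upper-left corner, so ymax gives the minimum row
    min_row = math.floor((yorigin - ymax) / tile_height)
    max_row = math.floor((yorigin - ymin) / tile_height)
    return min_row, max_row, min_col, max_col
```

Every (row, col) pair in that range becomes one tile request, which is why the number of tiles per bounding box varies from extent to extent.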
Different environments may require an alternative pressure configuration to achieve the desired test results; adjust as needed.

Validating the Test Plan

As a best practice, it is always a good idea to validate the results coming back before executing the actual load test.

Use the View Results Tree listener to assist with the validation
The Test Plan includes a View Results Tree listener, but it is disabled by default
Enable it to view the results
From the GUI, start the test

Transactions

Select one of the "Cache Tiles" Transactions
The results should resemble the following:

In this example, all the transactions completed successfully (e.g. the green checkmark)
Cache Tiles (map scale: 4622324.434309)
Cache Tiles (map scale: 2311162.217155)
Cache Tiles (map scale: 1155581.108577)

Selecting one of the transactions and its Sampler result element lists some key information:

Take a quick glance at the Size in bytes
In the example above, the Transaction size was over 50KB, which suggests decent tile data (for this dataset) was being returned and the responses were not all "blank" images
The Number of samples in the transaction was 80
Since there is a JSR223 Sampler with every tile request, this actually resulted in 40 tiles being downloaded
The Load time shows 62 (ms), meaning it only took 0.062 seconds to pull down 40 tile images

Requests

Expand the selected Transaction
In this example, Cache Tiles (map scale: 1155581.108577)
Select one of the HTTPS requests
The results should resemble the following:

In this example, the selected request completed successfully (e.g.
the green checkmark)

Take a quick glance at the Load time
In this example, the individual tile request only took 2 ms (0.002 seconds) to download
Clicking on the Response data tab allows you to preview the requested tile:

Note: Once visual validation and debugging are complete, it is recommended to disable the View Results Tree element before executing the load test

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the cache_tiles1.zip project for an example of how to run a test as recommended by Apache JMeter.

The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel in your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" your test results.

Note: For several reasons, it is strongly advised to never load test ArcGIS Online.
JMeter Report

The auto-generated JMeter Report can provide insight into the throughput of the cached map service under load. This report is auto-generated from the command-line options passed in from the runMe.bat script.

Throughput Curve

The JMeter Report for a cached map service load test may appear sluggish and slow when viewed in a web browser. This is due to the default nature of its composition, which attempts to render every unique request in some of the charts; in a test such as this, there will be many. From the chart legend, select all JSR223 Sampler items to disable their rendering (as they may skew the scale).

In this case, the peak throughput for any one of the given map scale transactions of cached tiles was about 15 transactions/second. Since 3 map scales were tested, the total throughput achieved was 45 transactions/second. This equated to around 162,000 cache transactions/hour. The peak throughput appears to have occurred at the 10:34 mark.

Performance Curve

The performance of the cache throughput was good at roughly 120 ms or 0.12 seconds. This observation was taken where the peak transactions/sec occurred, at the 10:34 mark.

Note: "Peak throughput" is a point in a test where no higher throughput can be achieved. This does not mean that is the maximum amount of pressure the service will support without "falling over". Generally speaking, if additional users ask for cache tiles after the system has reached peak throughput (e.g. you run the step load configuration higher), the service will still fulfill their requests but they will just wait longer for the responses to return (due to queueing).

Final Thoughts

The Apache JMeter Test Plan in this Article represents a programmatic approach for applying load to an ArcGIS cached map service. One of the strengths of this test is that it is easy to build, configure and maintain.
The auto-generated JMeter report provides charts and summaries that can be used to analyze the performance and scalability of the cached map service.

To download the Apache JMeter Test Plan used in this Article see: cache_tiles1.zip

Additional Items Worth Mentioning

Every cached service is different. But generally speaking, the performance and scalability of a cached service can be affected by a variety of factors:

Deployment architecture
The location of the cache data with respect to the ArcGIS tile handler(s)
Cache data storage disk technology and speed
Network bandwidth: between the cache data storage and ArcGIS tile handler(s); between the ArcGIS tile handler(s) and ArcGIS Web Adaptor(s); between the ArcGIS Web Adaptor(s) and the Test Client
The processor speed and number of processing cores: the delivery of cache tiles is quick, but under heavy load the overall process utilizes CPU resources from the ArcGIS tile handler and the ArcGIS Web Adaptor (if it exists in the deployment) hosting technology (e.g. Microsoft's Internet Information Services)
Different data can perform differently: the average tile size (e.g. size on disk) matters, and smaller tiles that contain less data might perform differently than larger, more detailed tiles
Tested map scales: even for the same dataset, map scale 36111.909643 may have "heavier" cache tiles than map scale 1155581.108577

Assumptions and Constraints

JDK 17 or greater will not work with this (JMeter 5.4.x) Test Plan. Running on these JDK releases will throw the following error:

org.codehaus.groovy.GroovyBugError: BUG! exception in phase 'semantic analysis' in source unit 'Script161.groovy' Unsupported class file major version 61

Using JDK 16 or earlier avoids this error, because JMeter 5.4.x only supports JDK 16 (or earlier). If JDK 17 or greater is required for your environment, you must use JMeter 5.5 (which supports JDK 17).

On-Demand Cache is not enabled. It might work but has not been tested.
Single Fused Map Cache is TRUE.
The cache Storage Format is COMPACT.
The image format of the tiles is JPG or PNG, due to the Response Assertion rule that validates the return from the server.

The included Test Plan should work with a cached service for:
Map Image: the ServiceType variable (under User Defined Variables) would need to be changed. Not heavily tested.
Vector: the ServiceType variable (under User Defined Variables) would need to be changed. VectorTile service tile images can be in Protocolbuffer Binary Format (*.pbf), so the Response Assertion rule would need to expand to include application/octet-stream or application/x-protobuf, and the JSR223 Samplers within the GetMapTile transaction would need to be adjusted to add ".pbf" to the end of the cacheRequest variable. Not heavily tested.

Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
01-18-2022
12:03 PM
|
3
|
7
|
4054
|
|
BLOG
|
Network Analyst Route

Simply put, the Network Analyst route solver is used for finding the quickest way to get from one place to another. The traveled path might involve just a start and end location, but it could optionally stop at several locations while also asking the solver to generate turn-by-turn directions for each route in the solution.

Note: Route functionality is available with a Network Analyst license.

Load Testing a Network Analyst Route Service

Network Analyst is packed with many capabilities and features for route solving. Such solutions can be executed through ArcGIS Pro, but many times they are consumed through an ArcGIS service. Since it provides industry-leading technology for route solutions, it is logical to want to load test your locally running (route) solver service to see its scalability potential.

There are several types of analysis provided by the Network Analyst extension; this Article uses routes as they are very easy to work with...the only required inputs are at least two valid stop points. This characteristic makes it a good choice for demonstrating how to generate data and use it in a load test against a route service.

Note: The walkthrough in this Article used ArcGIS Pro 2.9 with Network Analyst services that ran in an ArcGIS Enterprise 10.9 deployment.

How to Test a Network Analyst Route Service?

Network Analyst ArcGIS Pro Tutorial Data

Understanding the processes in this Article is most effective if the steps can be followed using the same data. For such a task, the Network Analyst team has made a great set of data available. There is a tutorial found on arcgis.com called Network Analyst ArcGIS Pro Tutorial Data. Zipped, it is about 132MB and consists of Network Analyst data for several different cities: San Diego, Paris, and San Francisco. The Geographic Coordinate System is WGS 1984 (WKID: 4326). The data is publicly accessible.

Note: The examples in this Article will focus on the San Diego dataset.
View of the San Diego Streets data from ArcGIS Pro (with Topographic Basemap):

The Streets, Walking_Pathways, and Network Dataset (NewSanDiego_ND) layers do not need to be enabled to utilize the Network Analyst capabilities. In the example above, they are enabled to act as a point of reference for the San Diego streets.

This Article will not cover the details of creating, configuring or publishing a network dataset in ArcGIS Enterprise. For information on such tasks, see:
Create a network dataset (a tutorial that specifically uses this San Diego geodatabase)
Publish routing services

Note: The route solver examples in this Article use a map service (with the network analysis capability) as opposed to a geoprocessing service. The map service uses synchronous execution.

Test Data Generation

This testing effort will require valid stop points to use within the JMeter test. As with other JMeter Articles on Community, we need good test data to get the most value from the results. And like before, the Load Testing Tools make short work of this job. There is even a specific tool for creating route data. Version 1.3.0 adds some nice enhancements to the "Generate Data (Solve Route)" tool.

Making the Tools Available from ArcGIS Pro

Once the load-testing-tools project has been downloaded to your machine, place the unzipped folder in a directory that is accessible (or made accessible) by ArcGIS Pro. If you have a previous version of the Load Testing Tools already installed, this updated version can be placed alongside it (although with a different folder name) or completely replace the previous version. For example:

Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS
Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory.

The "Generate Data (Solve Route)" tool can create test data from the (map) service, a local copy of the data, or the data within an enterprise geodatabase.
For this example, any data in WGS 1984 (WKID: 4326) with an area of interest focusing around San Diego could be used.

Launch the Generate Data (Solve Route) Tool

Launching the Generate Data (Solve Route) tool should present an interface similar to the following:

In its simplest form, only the path of the CSV file, which will contain the stop points, needs to be specified. However, since we want to generate random points to use as the stops, we would like to avoid creating them in the bays, lakes or ocean. This is where the optional Constraining Polygon parameter comes in: this input field can be used to reference a data layer that spatially limits where the points are generated. In actuality, we will adjust all of the default values.

View of the polygon (in pink) outlining the area of interest of the San Diego streets data in ArcGIS Pro:

Note: This polygon was created manually and is not included with the San Diego dataset. To download the SanDiegoPolygon shapefile used in this Article see: SanDiegoPolygon.zip

Note: From a testing point of view, the polygon does not need to include every segment of the streets layer.

The Generate Data (Solve Route) Tool Inputs

Adjust the Number of Tests to: 1000
Adjust the Stops Per Test to: 2
Point the Constraining Polygon to: SanDiegoPolygon
Set the Output to a file path location where the results will get written: C:\Users\[username]\Documents\ArcGIS\Projects\NetworkAnalystMap1\sandiegostops1.csv
Click Run to execute the tool

Examining the CSV file will reveal the generated stop data. This data will be used directly in the Apache JMeter test as input. Viewing the file in a text editor should show something similar to the following:

The features of the route solver are amazingly vast and could accept other spatial data. For example, Barriers, Polyline Barriers, and Polygon Barriers are other inputs that could be passed into a request parameter. The generation of these other inputs for route solver requests will not be covered in this Article.
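To make the generation step above concrete, here is a hypothetical pure-Python sketch of the rejection-sampling idea: draw random points inside the constraining polygon's bounding box and keep only those that fall within the polygon. The rectangle extent is a made-up stand-in for the real SanDiegoPolygon geometry, and the actual tool's implementation and CSV column layout may well differ.

```python
import csv
import io
import random

def point_in_polygon(x, y, polygon):
    """Ray-casting test; polygon is a list of (x, y) vertex tuples."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def generate_stops(polygon, num_tests, stops_per_test, seed=42):
    """Rejection-sample random points inside the constraining polygon,
    grouped as one row (test) of stops_per_test points each."""
    rng = random.Random(seed)
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    rows = []
    for _ in range(num_tests):
        stops = []
        while len(stops) < stops_per_test:
            x = rng.uniform(min(xs), max(xs))
            y = rng.uniform(min(ys), max(ys))
            if point_in_polygon(x, y, polygon):
                stops.append((x, y))
        rows.append(stops)
    return rows

# Hypothetical rectangle roughly around San Diego (WGS 1984, WKID 4326) --
# a stand-in for the real SanDiegoPolygon shapefile geometry.
extent = [(-117.3, 32.5), (-116.9, 32.5), (-116.9, 33.0), (-117.3, 33.0)]
rows = generate_stops(extent, num_tests=5, stops_per_test=2)

# One CSV row per test: x1, y1, x2, y2 (the real tool's columns may differ)
buf = io.StringIO()
writer = csv.writer(buf)
for stops in rows:
    writer.writerow([coord for pt in stops for coord in pt])
print(buf.getvalue())
```

Rejection sampling is a reasonable fit here because the constraining polygon covers most of its bounding box; for a sparse polygon, many candidate points would be thrown away.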
Spatially Visualize the Generated Points

The generated points that are used for the stops in the requests can be added to the ArcGIS Pro project to spatially view their location:

From ArcGIS Pro, use Catalog to locate and open the file geodatabase inside the project
Locate the random_pts feature class
Add the feature class to the Current Map

The Route Solver Test Plan

To download the Apache JMeter Test Plan used in this Article see: route_solver1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

Adjust the User Defined Variables to fit your environment.

Note: The Apache JMeter release used for this Article was 5.4.3 (this version provides critical security updates for Apache Log4j2). It is strongly recommended that all Apache JMeter deployments run on the latest release.

HTTP Request

The route solve test is simple and fairly straight-forward. All of the test logic can be found within one JMeter HTTP Request object. Following the testing style used in previous Articles, this request item is placed inside a Transaction Controller.

The key/value pairs for the request in this JMeter test are based on two factors:

The functionality available in the published Network Analyst service (and underlying data). The values in this test were taken directly from the default ones used at the REST endpoint of the published San Diego service, for example: https://yourwebadaptor.domain.com/server/rest/services/NetworkAnalyst/SanDiegoRoute/NAServer/Route/solve

The version of ArcGIS Enterprise (ArcGIS Server), since some versions add new capabilities. This test is based on the published service from the San Diego dataset and ArcGIS Enterprise 10.9.

Different network datasets may have different request parameter options available or populated by default. Some parameters, if enabled (like returnDirections), will tell the solver to return more information. This in turn asks the service to do more work, which will increase the response time of the request.
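The key/value pairs described above can be sketched in a few lines of Python. The stops, f, and returnDirections names are standard ArcGIS REST API parameters for a Route solve request; the endpoint URL is the placeholder from the Article, and the exact defaults exposed by your published service may differ.

```python
from urllib.parse import urlencode

# Placeholder endpoint from the Article -- replace with your own.
SOLVE_URL = ("https://yourwebadaptor.domain.com/server/rest/services/"
             "NetworkAnalyst/SanDiegoRoute/NAServer/Route/solve")

def build_solve_params(stops, return_directions=False):
    """Build key/value pairs for a Route solve request. stops is a list of
    (x, y) tuples; the comma/semicolon syntax is one of the stop formats
    the REST endpoint accepts."""
    return {
        "f": "json",
        "stops": ";".join(f"{x},{y}" for x, y in stops),
        "returnDirections": str(return_directions).lower(),
    }

# Two stops of the kind produced by the generated CSV data
params = build_solve_params([(-117.15, 32.72), (-117.05, 32.8)])
print(f"{SOLVE_URL}?{urlencode(params)}")
```

In the JMeter Test Plan, these same pairs appear as parameters on the HTTP Request object, with the stop coordinates supplied from the CSV via JMeter variables.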
Note: The view of the HTTP Request from the Table of Contents (left side of the Test Plan) will appear as a mix of JMeter variables and strings. This is by design. These values will become populated on playback (in the View Results Tree object and raw results file).

The Thread Group Configuration

The JMeter Test Plan is configured for a load test of 20 minutes. With this test example using two stops for each route request, the solver should perform well and return a good handful of samples (e.g. responses from the server) for each step. Different environments and data may require an alternative setting to achieve the desired test results; adjust the test thread settings as needed.

Validating the Test Plan

As a best practice, it is always a good idea to validate the results coming back within the JMeter GUI before executing the actual load test from the command-line. Use the View Results Tree listener to assist with the validation. The Test Plan for this Article includes a View Results Tree Listener, but it is disabled. Enable it to view the results when the test is played from the GUI:

From the GUI, Start the test
Let the test run for 20 seconds or so
Click Stop

Transactions

Select one of the "Route" Transactions. The View Results Tree section should resemble the following:

In this example, all transactions completed successfully. Sometimes when stopping the playback, the last Transaction in the View Results Tree may fail because it was stopped "mid-request"; this is safe to ignore.

Requests

Expand one of the "Route" Transactions and select the HTTPS request within it. The results should resemble the following:

In this example, the selected request completed successfully (as indicated by the green check mark); the success of the parent Transaction already indicated this status. From the Sampler result tab, take a quick glance at the Size in bytes field. In this example, the request size was about 15KB, which usually means good geometry data was returned; in other words, the responses were not "empty", which is further proof that the request was successful.

Examine the URL of the request. As mentioned earlier, the value of the request URL becomes populated at runtime.

Click on the Response data tab and the Response Body sub-tab. This shows a textual view of the data returned from the request.

Note: The route geometries returned are commonly rendered in web browser based JavaScript applications. Although Apache JMeter is a (test) client, it does not spatially render these geometry responses from the server in that way.

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the route_solver1.zip project for an example of how to run a test as recommended by the Apache JMeter team. The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment.

If the Network Analyst route service was published as dedicated, adjust the minimum and maximum instances accordingly prior to running the load test. For more information see: Configure service instance settings. The published route service used in this Article was dedicated with the maximum instances set to 4; the ArcGIS Server component was running on a system with 4 CPU cores.

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Note: For several reasons, it is strongly advised to never load test ArcGIS Online.
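Beyond the auto-generated report, the raw results (.jtl) file written during test execution can be post-processed directly. A minimal sketch, assuming JMeter's standard CSV output with timeStamp/elapsed/label/success columns (real files carry additional columns):

```python
import csv
import io
from collections import defaultdict

# Simplified sample of JMeter's CSV results (.jtl) output; real files
# carry more columns, but timeStamp/elapsed/label/success are standard.
sample_jtl = """timeStamp,elapsed,label,success
1640995200000,310,Route,true
1640995201000,350,Route,true
1640995202000,340,Route,true
"""

def average_elapsed(jtl_text):
    """Average response time (ms) per label, counting successful samples."""
    by_label = defaultdict(list)
    for row in csv.DictReader(io.StringIO(jtl_text)):
        if row["success"] == "true":
            by_label[row["label"]].append(int(row["elapsed"]))
    return {label: sum(ms) / len(ms) for label, ms in by_label.items()}

print(average_elapsed(sample_jtl))  # e.g. {'Route': 333.33...}
```

This kind of quick per-label average is a useful cross-check against the response-time figures shown in the JMeter Report charts.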
JMeter Report

Throughput Curve

The auto-generated JMeter Report can provide insight into the throughput of the route service under load. Since each Route Transaction contained one request, both metrics (request and transaction) showed virtually the same value; this is expected given the design of the test.

In this case, the peak throughput for the two-stop route solves was about 15 transactions/second. Given the environment tested, this equates to around 54,000 route solves/hour.

Performance Curves

The auto-generated JMeter Report can also provide insight into the performance of the route service under load. Again, since each Route Transaction contained one request, both metrics showed virtually the same value.

The performance of the route requests was good, staying under 1 second throughout the load test. The response time was measured where the throughput first peaked at 15 transactions/second; at this point in the test, the average response time was about 333 ms or 0.33 seconds.

It may also be helpful to see the plotted response times with respect to the step load (configured threads); the previous charts showed values with respect to time.

Final Thoughts

The Apache JMeter Test Plan in this Article represents a programmatic approach for applying load to a Network Analyst route service. One of the strengths of this test is that it is easy to configure and maintain. The auto-generated JMeter report provides charts and summaries that can be used to quickly analyze the performance and scalability of the route service.

To download the Apache JMeter Test Plan used in this Article see: route_solver1.zip
To download the San Diego dataset used in this Article see: Network Analyst ArcGIS Pro Tutorial Data

Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
12-31-2021
01:40 PM
|
0
|
0
|
1885
|