BLOG
Hello @AYUSHYADAV,

> Just wanted to ask whether we need to install any plugin for that or how we will get that in our test plan.

The "bzm – Concurrency Thread Group" does require the JMeter Plugins Manager to be installed. With the Plugins Manager in place, JMeter will automatically download and install any additional items that are referenced in a Test Plan when you open it, which is really convenient.

To install the Plugins Manager: download the plugins-manager.jar file, put it into JMeter's lib/ext directory (e.g. C:\apache-jmeter-5.4.3\lib\ext), and then (re)start JMeter.
04-04-2022 09:42 AM

BLOG
Administration Automation with Apache JMeter

Apache JMeter is a great load testing tool, but it's a fantastic automation framework too! There are many ArcGIS Enterprise administrative workflows and automation solutions for your portal. This Article focuses on using JMeter to call the ArcGIS REST API in order to carry out user management tasks that would be tedious for large numbers of members. Thankfully, JMeter's GUI makes the test setup and REST request building easy.

The User Administration Test Plans
To download the Apache JMeter Test Plans used in this Article see: portal_administration1.zip
This project includes 6 Test Plans for ArcGIS Enterprise 10.9/10.9.1:
- Add a new user -- A simple, basic test that just adds new members
- Add a new user with a few options -- A test that adds new members but allows the Start page and a Portal Group to be specified
- Add a new user with more options -- A test that adds new members (Start page and Group) but can also set Add-on licenses
- Set the security question/answer for new users -- Sets the security question and answer for newly added members that have not logged in
- Disable a user -- A test that disables users
- Enable a user -- A test that enables users

The CSV Data Set Config of Users
For convenience, all the Test Plans in the project work off the same list of users from the same file. In the Test Plans, this is referenced by the CSV Data Set Config element named "Users File".

The Text File List of Users
The included text file contains user information for working with 10 different members. However, it can be adjusted and/or expanded to suit the needs of your organization.
- There are many different options for the role and userLicenseTypeId fields
- These choices can also impact the Add-on licenses, as some automatically include specific entitlements
Note: It is recommended to first run the Test Plans with a small list of users in order to see if everything is configured correctly for your Site.

Administrator Login
With the exception of one, all the included Test Plans have logic to log in as a built-in Portal for ArcGIS administrator at the beginning of the test. For efficiency, this action is only executed once (at the start) per test thread.
Note: When connecting to the Portal for ArcGIS component of ArcGIS Enterprise, the Test Plans send requests directly to the "arcgis" instance on port 7443.

Add a New User (portal_users_add1)
The portal_users_add1 Test Plan is a simple way to add new members to Portal for ArcGIS.
- The administrator credentials are specified in the User Defined Variables section of the Test Plan
- Except where noted, this step is performed at the beginning of all the included tests
- Once the test authenticates as the administrator, it calls the createUser function and repeats it for each line in the file containing the list of users
- Since this test uses a single HTTP Request to create each user, it is the fastest and most scalable way to add new members
- This test only adds users; it does not perform any other duties such as joining a member to a group, setting the Start page, or selecting Add-on licenses
- For convenience, the username is appended to all user-based transactions and requests
- This assists troubleshooting if a particular iteration of the test could not add a specific user
- All of the included tests follow this design pattern
This test is similar to the process used by the Example: Add members to the portal resource, the command line utility, and the Add members from a file feature built into Portal for ArcGIS.
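As a point of reference for the createUser call mentioned above, the request is a POST against the Portal Admin API. The sketch below is illustrative only: the host name, the variable names (which would come from the "Users File" CSV Data Set Config) and the exact parameter set are assumptions, so check the request in the downloaded Test Plan for the authoritative version.

POST https://portal.example.com:7443/arcgis/portaladmin/security/users/createUser

username=${username}
password=${password}
fullname=${fullname}
email=${email}
role=org_user
userLicenseTypeId=creatorUT
provider=arcgis
f=json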
Add a New User (portal_users_add2)
The portal_users_add2 Test Plan is an easy way to add new members to Portal for ArcGIS but includes a few options. In addition to creating the user, this test allows the administrator to set additional properties like the Start page (also known as the landing page) and a Portal Group.
- This test expands the user creation process by 3 additional requests per user
- If creating thousands of users, you may notice this test takes longer to complete than portal_users_add1
- This is due to the fact that more work is taking place
Note: A new member can actually be added to more than one Portal Group on creation. However, for simplicity, portal_users_add2 only adds the user to one group and the same group is used for all members. The group used is defined by the PortalGroupId User Defined Variable. This GUID needs to be manually looked up from your Portal for ArcGIS Site. If you do not wish to add the user to a Group, simply disable the setProperties request in the test.

Add a New User (portal_users_add3)
The portal_users_add3 Test Plan is an automated way to add new members to Portal for ArcGIS with the most options for an administrator. This test allows you to set the Start page and Portal Group, and adds the ability to specify Add-on licenses like ArcGIS Pro and its Extensions and certain User type extensions.
- Immediately after the administrator authentication, the test makes a call to retrieve GUIDs for the ArcGIS Pro and User type extensions
- These GUIDs will be used later when assigning the licenses to the users
- The Add-on licenses add several more requests to the user creation process
- Although powerful, these additional requests can add time to the overall task as they are performed for each member that is created
The test is configured to assign the user:
- ArcGIS Pro Advanced and all available Extensions (as of 10.9/10.9.1)
- All User type extensions
Note: There are other Add-on licenses, such as Applications and ArcGIS Runtime extensions, that were not included in the portal_users_add3 Test Plan. Many of these other licenses would require their own specific HTTP request. Again, while this can be convenient and powerful, it can add time to the process of adding each user. There are also some licenses, like App bundles, which were not included in the test as they are automatically included with the user license type (e.g. Creator).

Set the Security Question/Answer for New Users (portal_users_update_profile1)
The portal_users_update_profile1 Test Plan is a little unique. It is the only test in the project which does not log in as a Portal for ArcGIS administrator. Instead, it logs in as each user and assumes it is performing the initial login for each member, as it will set their security question and answer.
- Presetting the security question and answer is completely optional
- Your organization may instead prefer to have each user set these values when they first log in

Disable a User (portal_users_disable1)
The portal_users_disable1 Test Plan is an automated way of taking a list of users and disabling their membership in the portal. Once the account is disabled the user cannot log in. This is a less destructive function than delete.
- Disabling users is fairly straightforward and performed with one REST call to disableUsers
Note: For simplicity, the disableUsers request in the portal_users_disable1 Test Plan only disables one member at a time. However, each call to the disableUsers function will accept groups of users for improved efficiency. As of 10.9/10.9.1, disableUsers accepts up to 25 users at a time.
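For illustration, a batched disableUsers call might look like the sketch below. The host name and usernames are placeholders, and the batching itself is the optional optimization described above; the included Test Plan sends one username per call:

POST https://portal.example.com:7443/arcgis/portaladmin/security/users/disableUsers

users=portalpublisher1,portalpublisher2,portalpublisher3
f=json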
Note: The portal_users_disable1 test can be executed over the same users successfully. From the point of view of ArcGIS Enterprise, it is just disabling the member(s) again.

Enable a User (portal_users_enable1)
The portal_users_enable1 Test Plan is an automated way of taking a list of users and enabling their membership in the portal. Once the account is enabled the user can log in again.
- Enabling users is fairly straightforward and performed with one REST call to enableUsers
Note: For simplicity, the enableUsers request in the portal_users_enable1 Test Plan only enables one member at a time. However, each call to the enableUsers function will accept groups of users for improved efficiency. As of 10.9/10.9.1, enableUsers accepts up to 25 users at a time.
Note: The portal_users_enable1 test can be executed over the same users successfully. From the point of view of ArcGIS Enterprise, it is just enabling the member(s) again.

The Thread Group Configuration
Unlike the previous Apache JMeter Article tests that are time-dominant, the Test Plans in this project are iteration based. In other words, when creating or disabling specific users, we only need to work on the users of interest from the list once.
- The Thread Group "step load" configuration that is included by default with every Apache JMeter installation includes a very convenient Loop Count setting to specify exactly how many iterations the Test Plan should execute
- The Loop Count setting should match the number of lines in the "Users File" that contain members to be added/disabled/enabled
Note: All Test Plans in the project are configured with the same Thread Group setting. Additionally, all of the included tests are executed with one concurrent test thread.

Test Execution
Also, unlike the previous Apache JMeter Article tests that are executed from the command line, you can probably get away with running these administrative automation Test Plans right from the GUI. Of course, this depends on how many users you are planning to create, disable, or enable. If you are working with a few hundred, then the GUI would be fine. However, if you plan to create thousands or tens of thousands of users (or more), you will want to run the Test Plans from the command line for the most efficient use of the test workstation resources.
See the runMe.bat script included with the portal_administration1.zip project for an example of how to run a test as recommended by the Apache JMeter team. This script is configured to run portal_users_add3, but it can easily be adjusted to run any of the tests.
- The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment
Note: It is always recommended to coordinate the start time with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site.

Validating the Test Plans
If the test is being run from the GUI, several listeners have been added to all of the included Test Plans that offer immediate feedback on the status.
- The View Results Tree element offers a convenient way to quickly examine the status of each transaction (e.g. "Create User Account -- portalpublisher2") and its respective requests (e.g. "/arcgis/portaladmin/security/users/createUser--portalpublisher2")
- Thanks to the Response Assertion rule elements added to each request, the green check mark is a trusted indicator of a successful transaction or request
- The View Results in Table element offers a handy way to see the status of each transaction and its response time all from one table

Troubleshooting a Command-line Test Execution
As mentioned earlier, when working with large numbers of users, the recommended approach is to run the Test Plans from the command line. However, administrators will be very interested to understand which users, if any, encountered errors during the automation. It is here that the JMeter Test Report can offer great insight.
- From the Request Summary pie chart on the initial page of the Test Report, you can quickly see if any errors were encountered
- If errors were encountered during the test run, the Statistics table (bottom of the first report page) makes the failed user requests easy to find when sorting by the FAIL column

Final Thoughts
There are many frameworks, tools and utilities out there to perform administrative task automation for ArcGIS Enterprise. Most likely, they all have their own strengths. Apache JMeter is handy as it provides a graphical interface for building and adjusting the REST requests needed to perform the functions. The HTML/JavaScript reports, which can be automatically created at the end of a test, are a nice bonus for understanding whether the whole job was successful or which particular parts failed.
To download the Apache JMeter Test Plans used in this Article see: portal_administration1.zip

A Quick Word on Using Multiple Threads
All of the included tests could be configured to use multiple, concurrent threads for faster execution. This is fine from a technical point of view, but all of these tests perform write operations against the internal database of the Portal for ArcGIS component. As with any database, such operations can be resource intensive and can only go so fast. Using too many concurrent threads may actually slow down the performance of these tests.

A Quick Word on Deleting Users
The tests included with the project do not include a delete user operation. Deleting a member from the portal is permanent (unless backups are available), so tools that automate this action should be used with caution. Additionally, some users may have uploaded a plethora of content to the portal. This content would need to be deleted or transferred to another user before removing that member.

Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
03-07-2022 11:38 AM

BLOG
For easier readability, the "A Quick Word on Sizing" and "A Quick Word on Instances" sections were moved from "Why Test an Asynchronous Geoprocessing Service?" to under Final Thoughts. "A Quick Word on the Location of the arcgisjobs Folder" was just added and also placed under Final Thoughts.
02-18-2022 04:20 PM

BLOG
Why Test an Asynchronous Geoprocessing Service?
The most popular reason is probably "because you have been asked to test it". As a tester, GIS administrators may look to you to show how a geoprocessing (GP) model behaves as a service under load. Many GP models are built to perform long-running, critical and complex tasks. Since they are a resource often found on ArcGIS Enterprise deployments (as a service), this makes understanding their performance and scalability profiles key.

Asynchronous Geoprocessing Service Testing Challenges
The hardest part of load testing an asynchronous geoprocessing service is the loop logic and being able to handle the different states of the job. The loop should not be too aggressive, and you need to exit the loop under the right conditions. Appropriately marking a test iteration as failed based on the job status, or passed based on the output results from the task, is also critical.

How to Test an Asynchronous Geoprocessing Service?
The basic steps for load testing an asynchronous GP service are:
- Provide inputs for the geoprocessing task
- Submit the job
- Capture the unique job ID
- Perform an initial job status check
- Loop on the job status check
  - If the job has succeeded, exit the loop
  - Otherwise, sleep for a short duration
- If the job succeeded
  - Examine the results
  - If available, download the output/data
At a minimum, the Apache JMeter Test Plan should handle this logic. However, there are some enhancements that can be added to this process, like:
- A maximum number of job status checks in the loop
- A maximum number of test iterations
- Marking the job as failed if a non-successful status is returned
The Test Plan included in the Article provides these additional features.

The "Summarize Invasive Species" Geoprocessing Model and Dataset
The understanding of the process in this Article is most effective if the steps can be reproduced. For that, we can turn to a modern ArcGIS Pro GP model and dataset that is free and publicly available. The Share a web tool -- Summarize Invasive Species package is available from arcgis.com.
View of the New Zealand data from ArcGIS Pro (with Topographic Basemap):
This Article will not cover the details of configuring or publishing this model as a service in ArcGIS Enterprise. For information on such actions, see: Share a web tool

Test Data Generation
Although the JMeter Test Plan will utilize some data to make the inputs to the model/service dynamic and more realistic, this data is pre-generated and automatically included with the test. The reason is to keep the focus on the test logic (and not the data generation).

The Asynchronous Geoprocessing Service Test Plan
To download the Apache JMeter Test Plan used in this Article see: async_gp1.zip
Opening the Test Plan in Apache JMeter should look similar to the following:
- Adjust the User Defined Variables to fit your environment
Note: The test has different variables for the name of the (GP) service and (GP) tool. When published, they often use the same name (e.g. "SummarizeInvasiveSpecies") but do not have to.

Components of the Test Plan

SubmitJob
It all starts with "submit the job". This is probably the easiest part of the test. Once this request has been sent, the service returns a job id and the server can begin to work on the task. This id is captured by the Regular Expression Extractor element (a sketch follows below).
Note: The job id is unique to each job (and test thread). It is used in every subsequent request in the test.
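As an illustration of that extraction step, a Regular Expression Extractor pulling the job id out of the submitJob response might be configured roughly as follows, assuming the request asks for a JSON response (f=json). The variable name and regular expression here are assumptions; the extractor in the downloaded Test Plan is the authoritative version.

Apply to: Main sample only
Name of created variable: jobId
Regular expression: "jobId"\s*:\s*"([^"]+)"
Template: $1$
Match No.: 1
Default value: NOTFOUND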
CSV Data Set Config
We briefly skip to the bottom of the Test Plan to mention the CSV Data Set Config element. The inputs are important and required to submit a job, but the generation of this data is not a heavy area of focus for this testing Article.
Contents example of the inputs.csv file:
Note: The full inputs.csv file is included with the async_gp1 Test Plan.

InitialJobStatus
The initial job status transaction has one HTTP request element inside that is used to find and populate a variable (jobStatus) with the current state of the submitted task. This value will be used to enter the upcoming while loop.
As of ArcGIS Enterprise 10.9, there are just a few different values that the status of a job can have; these states reflect the job's life cycle:
- esriJobSubmitted
- esriJobWaiting
- esriJobExecuting
- esriJobSucceeded
- esriJobFailed
- esriJobTimedOut
- esriJobCancelling
- esriJobCancelled
Ideally, we are expecting that from the status perspective, the job's states will be:
esriJobSubmitted --> (esriJobWaiting -->) esriJobExecuting --> esriJobSucceeded
Accounting for the other values is what makes the testing of the service both fun and tricky.

LoopJobStatus
The job status loop (also known as the while loop) has several parts to it. They all play an important role in periodically examining the status of the (test thread's) unique job id and then properly handling the state when it changes (e.g. it succeeds, fails or just takes too long).
- While loop logic
- Job status check
- Short sleep timer
There are also some nice-to-have extras (mentioned earlier as enhancements):
- Response assertion check
- Maximum loop check

WhileLoop
As long as the returned job status is either "esriJobSubmitted", "esriJobWaiting", or "esriJobExecuting", the loop will continue running. Just checking against these three states helps keep the loop logic simple (see the condition sketch below).
Note: A job status of "esriJobSucceeded" will exit the loop. This is a good thing and what we want the test logic to encounter.
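A minimal sketch of how that While Controller condition could be expressed, assuming the status is stored in a variable named jobStatus as described above (the exact expression used in the downloaded Test Plan may differ):

${__groovy(vars.get("jobStatus") == "esriJobSubmitted" || vars.get("jobStatus") == "esriJobWaiting" || vars.get("jobStatus") == "esriJobExecuting")}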
JobStatusCheck
The job status check is an HTTP request asking for the current value of the task. It does the same thing as the request inside the InitialJobStatus transaction, but in a loop.
Note: Since this element is inside a loop and there will most likely be multiple occurrences, it is important not to give the HTTP Request the same name as the Transaction. This helps avoid confusion in the analysis and reporting.

ResponseAssertion
As mentioned, this logic is a nice-to-have. The value of the job status from the LoopJobStatus request is immediately checked. If it is "esriJobSubmitted", "esriJobWaiting", "esriJobExecuting" or "esriJobSucceeded", the request will be marked as successful. If any other job status appears (e.g. "esriJobFailed", "esriJobTimedOut"), the request (and the loop transaction) will be marked as failed, which is a good thing. This design favors a simple approach to the testing logic.
Note: "esriJobSucceeded" is not a condition in the while loop, but it is a value we look for with the ResponseAssertion rule to determine a successful request.

SleepWhileLoop
The sleep timer is critical. Without it, too many status check requests are sent to the service, which causes unnecessary load. Since the job status request is fast and light-weight but the overall task is long-running, it makes sense to delay each status check by a second or two. This is exactly what this timer does.
Note: The Test Plan sleep variable, WhileLoopSleep, is set to 2000 (ms), which is 2 seconds.

IfWhileLoopMax
The while loop iteration checking logic is also a nice-to-have. It has several parts to it, and the logic is carried out independently for each job.
- IfWhileLoopMax -- This element verifies whether the job status check loop has executed more than the allowed maximum number of iterations (WhileLoopMax variable, default is 300); if the limit has been reached, it carries out the following test elements:
  - WhileLoopMaxReached -- An HTTP request identical to LoopJobStatus
  - JSR223Listener -- This logic purposely fails the WhileLoopMaxReached request that was just sent (see the listener sketch below)
  - FlowControlAction -- This logic ends the job status check loop by immediately stopping the current, individual test thread
- CounterWhileLoop -- A counter that is incremented for every iteration of the job status check
Note: The purpose of the IfWhileLoopMax logic is to stop the job, fail the loop operation and make it easy for the tester to see that a job is taking "too long" to execute the task. The Test Plan uses 2000 (ms) and 300 for the WhileLoopSleep and WhileLoopMax variables, respectively. This would allow for about a 10 minute job execution time. If your job run times are longer, please adjust these as needed.

IfJobSucceeded
Now out of the while loop (finally!), the test logic checks the last known status of the job. If it succeeded, it carries out some additional logic.
- GetParamUrl -- this HTTP request is identical to the LoopJobStatus (and the InitialJobStatus) elements
- The server response for this request is examined differently, as it is parsed for the value of the paramUrl string

DownloadOutput
The value of the paramUrl variable is added to the end of the unique job request and the contents are downloaded.
- DownloadOutput -- this HTTP Request downloads the content whose name is based on the value of the paramUrl variable populated from the previous request
- ResponseAssertion -- a response assertion is added looking for the key word "rings" to validate that the contents actually contain a geometry
Note: This step is optional, but it represents the full delivery of the task...the data specific to the submitted job. Verifying that the job's output contained geometry data helps the test show that the service was working as expected. Other jobs may produce an entirely different output; adjust the ResponseAssertion logic as needed.

IfTestIterationMax
The test iteration check is also a nice-to-have feature for load testing an asynchronous geoprocessing service. Its logic is very similar to the IfWhileLoopMax check. However, it keeps track of the total number of jobs (successful or failed) across all test threads.
- IfTestIterationMax -- This element verifies whether the number of executed tests (e.g. jobs submitted) is more than the allowed maximum number of iterations (TestIterationMax variable, default is 2500); if the limit has been reached, it carries out the following test elements:
  - TestIterationMaxReached -- An HTTP request identical to LoopJobStatus
  - JSR223Listener -- This logic purposely fails the TestIterationMaxReached request that was just sent
  - FlowControlAction -- This logic ends the load test by immediately stopping all test threads
- CounterTestIteration -- A counter that is incremented for every test iteration (in other words, one job submitted equals one test iteration)
Note: The purpose of the IfTestIterationMax logic is to stop the load test after a specific number of jobs have been sent. Not all tests need to utilize this feature or hit this maximum. However, if you are experimenting with the test logic, it is a good strategy to set the maximum to a low value until you have verified that things are behaving as expected. Otherwise, your test might send many long-running jobs to the service at once, which in turn could take a while to complete. This feature helps avoid that scenario.

The Thread Group Configuration
The JMeter Test Plan is currently configured for a 30 minute load test with each step lasting a little under 2 minutes.
- Different environments and data may require an alternative setting to achieve the desired test results; adjust as needed
- The average "Summarize Invasive Species" job in this example takes between 8 and 18 seconds
- If your service's jobs run significantly longer, you should adjust the Thread Group Configuration to produce a step duration longer than 2 minutes in order to obtain a decent sampling (per step)

Validating the Test Plan
As a best testing practice, it is always a good idea to validate the results coming back from the server before applying the actual load.
- Use the View Results Tree listener to assist with the validation
- The Test Plan includes a View Results Tree listener but it is disabled by default
- Enable it to view the results
- From the GUI, start the test

Transactions
- Select and expand one of the "LoopJobStatus" Transactions
- The results should resemble the following:
- In this example, the LoopJobStatus transaction above contained 7 status check requests that all completed successfully and, because of this, the job also succeeded
- The response time of the loop (e.g. the job) was just over 12 seconds (12066 ms)
- As more pressure is applied (e.g. via the load test), each job will require more status checks, which will in turn take longer to complete
- By design, this will show up as longer response times for the LoopJobStatus operation
- The response time of the LoopJobStatus transaction is a great measuring stick for judging the overall performance and scalability of an asynchronous geoprocessing service
Note: Generally speaking, the "job status loop" component of an asynchronous geoprocessing service test will represent the bulk of the time for every test iteration. All the other operations (SubmitJob, InitialJobStatus, DownloadOutput, etc.) typically happen very quickly.

Requests
- Expand one of the "DownloadOutput" Transactions
- Select one of the https requests
- The results should resemble the following:
- The contents of the DownloadOutput request are helpful for validating that the job was able to produce an expected output
- In this case, it is a geometry formatted in JSON
- Based on the GP model used as the service, this geometry summarizes the range of invasive grass species near locations where people may come into contact with the grasses and facilitate their spread
Note: Other geoprocessing services may produce a different type of output than the JSON shown in the example above.

Test Execution
The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the async_gp1.zip project for an example of how to run a test as recommended by the Apache JMeter team.
- The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment
Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.
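For reference, the "purposely fail the request" step in those JSR223Listener elements can be done in a couple of lines of Groovy. This is a minimal sketch of the idea (the message text is an assumption; see the listener in the downloaded Test Plan for the actual logic):

// JSR223 Listener (Groovy): mark the sample that just ran as failed
// 'prev' is the previous SampleResult exposed to JSR223 elements by JMeter
prev.setSuccessful(false)
prev.setResponseMessage("Maximum number of iterations reached")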
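For reference, a non-GUI run as recommended by the JMeter team boils down to a single command similar to the following. The paths and .jmx/.jtl file names here are assumptions; the included runMe.bat wraps this up and may set additional options:

cd /d C:\apache-jmeter-5.4.3\bin
jmeter -n -t C:\tests\async_gp1.jmx -l C:\tests\results\async_gp1.jtl -e -o C:\tests\results\report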
Note: For several reasons, it is strongly advised to never load test services provided by ArcGIS Online.

JMeter Report

Throughput Curves
- As expected for a long-running job, the throughput of the LoopJobStatus operation is low
- The peak throughput appeared to occur at the 11:48 mark
- At this time, the service produced about 0.4 transactions/second
- This equated to around 1,440 jobs through the system per hour
Note: The JobStatusCheck request is selected to disable its rendering in the chart. Since this is a fast request, it showed higher throughput than the LoopJobStatus operation, but that is not what we are interested in for understanding the scalability of the service.

Performance Curves
- The performance of the job at the beginning of the test was about 14 seconds
- At the point where the throughput maxed out (the 11:48 mark), the response time had increased to over 21 seconds
- Despite an increased load that produced longer and longer response times, the service and system continued to complete the jobs successfully

Final Thoughts
While there are many geoprocessing models out there that perform a variety of different tasks, this Article can be used as a guide on how to load test an asynchronous GP service. As with many things related to testing, geoprocessing services are easy to apply load against. However, since each job has a life cycle that needs to be tracked, the test logic has to account for this changing job status. It is this characteristic of the service that introduces some complexity to the Test Plan. That said, Apache JMeter is a feature-rich testing framework that helps testers meet this challenge.
To download the Apache JMeter Test Plan used in this Article see: async_gp1.zip
To download the geoprocessing package and data used in this Article see: Share a web tool

A Quick Word on Sizing
The focus of previous testing articles has typically not been to offer strategies and techniques on capacity planning. However, generally speaking, long-running jobs like the ones from an asynchronous geoprocessing service are relatively easy to size. For example, if the average job duration for a service is something long like 30 seconds and your ArcGIS Server machine has 8 CPU cores, you would be able to support 8 concurrent users (before queueing begins to occur). In other words, for the situation just mentioned, the rough sizing would be:
1 job == 1 core == 1 user (where the response times are > 1 second)
Note: This assumes the minimum number of instances for the GP service is set to the number of available CPU cores (e.g. 8 on an 8 core machine). This setting would be done for predictable performance and maximum throughput.
Would things fall over with more than 8 concurrent jobs going? Most likely not; this is just when queuing starts to occur. Whenever queuing starts to happen, the response times of the job completions will become a little longer (e.g. slower) for all the running jobs. Knowing this, do you still have to load test the GP service? Most likely yes. All geoprocessing models, data and inputs are different, behave differently and can use the available hardware in various ways. Your test will show this impact under load, which will be very important to understand.
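To make that rule of thumb concrete, a rough back-of-the-envelope throughput ceiling for the example above (8 cores, ~30 second jobs, one instance per core) would be:

8 cores / 30 seconds per job ≈ 0.27 jobs/second ≈ 16 jobs/minute ≈ 960 jobs/hour

Actual throughput will vary with the model, data and inputs, which is exactly why the load test is still worth running.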
A Quick Word on Instances
While it is possible to have the dedicated GP service instances set to use all the CPU cores on ArcGIS Server, a GIS Administrator may intentionally choose not to go with that configuration. Since the jobs of the GP service could be very long running and resource intensive, an alternate deployment strategy might be to purposely set the maximum number of instances for the GP service to a lower value so other users can use different services without waiting (due to resource constraints).

A Quick Word on the Location of the arcgisjobs Folder
While not a focus of this Article, the location of the arcgisjobs folder can have an impact on the performance and/or scalability of the geoprocessing service. This location is where the service will temporarily read and write data as the job is being processed. The final output of each job is also stored in this location. For extremely busy Sites where thousands of jobs are concurrently being carried out from multiple ArcGIS Servers, consider the storage capabilities (e.g. I/O speed, reliability) of this location.

Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
02-18-2022 12:40 PM

BLOG
I have also uploaded an update to the cache tile test: cache_tiles2.zip
This version contains some added logic to better support vector tile caches. I added a User Defined Variable called "TileExtension". If your vector tiles show the extension for a "Protocolbuffer Binary Format" file, simply set TileExtension to .pbf (note the dot, as it would need to be included). Otherwise, leave this variable empty. I also added additional ContentTypes to the Response Assertion rules in order to check for returns that would be in "Protocolbuffer Binary Format".
Another important detail: the default User Defined Variable value for the ServiceName is NaturalEarth_256_Cache. However, if your cache was published as a Hosted service, this would be set to Hosted/NaturalEarth_256_Cache. The ServiceName should include the preceding directory, if it exists.
Lastly, if testing a vector tile cache, the User Defined Variable for ServiceType would need to be changed to VectorTileServer.
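Pulling those settings together, the User Defined Variables for a hosted vector tile cache might end up looking like this (the service name is just the example from above; substitute your own):

ServiceName = Hosted/NaturalEarth_256_Cache
ServiceType = VectorTileServer
TileExtension = .pbf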
02-08-2022 12:56 PM

BLOG
The Assumptions and Constraints section of the Article has been updated to include some environment details that might be helpful for certain conditions.
01-20-2022 11:59 AM

BLOG
Hi @DeanHowell1,
Here is an Article you may find interesting for testing cached services: Creating a Load Test in Apache JMeter Against a Cached Map Service (Advanced)
This Article focuses on testing a map service, but the same Test Plan should work against a cached image service.
Hope this helps.
Aaron
01-18-2022 12:56 PM

BLOG
Why Test a Cached Map Service?
Cached map services are a popular and recommended way to provide a well performing presentation of static data. The cached service type is a proven technology, but there may still be requirements to test it under load to observe its scalability first hand on a specific deployment architecture. While cached map services perform well, serving up thousands of simultaneous tile requests can be resource intensive on the server hardware.
Note: Due to the fast rate of delivery and consumption of the resource, load testing cached map services can also be intensive on the hardware utilization of the test client workstation.

Cached Map Service Testing Challenges
Compared to load testing the export map function, proper testing of a cached map service introduces several challenges, as the request composition changes with each map screen. Since the underlying cache scheme uses a grid design, the map extents of some pans or zooms may pull down more or fewer tile images than others. Accounting for this real-world behavior of the cache service makes the test logic more complex than if it were exercising the export map function.
- The test logic should also be dynamic and cover a decent area of interest
- Converting a HAR file of captured cache tile requests into a test might be quick and easy to do, but it does not show a realistic scalability picture of the service
- This is due to the small sample of tile requests being used over and over again
- Generally speaking, requests for individual cache tiles are fast...very fast
- Due to this behavior, the test logic also needs to perform well, scale with the service and have minimal overhead on the test client

How to Test a Cached Map Service?
The steps in this Article should work with any existing cached map service on your local ArcGIS Enterprise deployment. However, if one is not available, it is recommended to give the Natural Earth dataset a look for the task.

The Natural Earth Dataset
Although the steps should work with any data, the walkthrough of the process in this Article might be more effective if it can be directly followed. In such cases, it is great turning to the Natural Earth dataset, which provides some decent map detail (at smaller scales) covering the whole world.
- Download the Natural Earth dataset here
- The download above is a subset of the larger Natural_Earth_quick_start.zip and includes a modified MXD for ArcMap 10.8.1 and an ArcGIS Pro 2.8 project
- Either can be used to publish and create a cached map service in ArcGIS Enterprise
- The Natural Earth subset of data should look similar to the following when opened in ArcGIS Pro (or ArcMap)
This Article will not cover the details of creating, configuring or publishing a cached map service in ArcGIS Enterprise. For information on such actions, see: Tutorial: Creating a cached map service
Note: It is recommended to become familiar with some of the metadata details of the cached map service, as the load testing effort will require knowledge of some of that information (e.g. xorigin, yorigin, tileCols, tileRows, and spatial reference, as well as the scales that contain tiles). See the sketch below.
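Most of those metadata details can be read straight off the service's REST endpoint (the JSON view of the MapServer). The trimmed sketch below is illustrative only; the keys shown are typical of a tileInfo block, but the actual values for your cache will differ:

"tileInfo": {
  "rows": 256,
  "cols": 256,
  "origin": { "x": -20037508.342787, "y": 20037508.342787 },
  "spatialReference": { "wkid": 102100, "latestWkid": 3857 },
  "lods": [ { "level": 0, "resolution": 156543.03392800014, "scale": 591657527.591555 }, ... ]
}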
Test Data Generation
With a cached map service available, the next step is to generate test data over an area of interest. As with other JMeter Articles on Community, we need good test data to get the most value from the results. And like before, the Load Testing Tools package (for ArcGIS Pro) makes short work of this job. There is even a specific tool for creating bounding box data to use with a cached map service.
Note: Version 1.3.0 of Load Testing Tools added the "Generate Bounding Boxes (Precision)" tool.
Download and unzip the package, then make that folder available to your ArcGIS Pro project.

The Generate Bounding Boxes (Precision) Tool
Launching the Generate Bounding Boxes (Precision) tool should present an interface similar to the following:
Before running the tool, let's adjust the inputs to target the data generation process to:
- Specific map scales (in this case three different scales)
  - Scales 4622324.434309 and 1155581.108577 were kept
  - Scale 2311162.217155 was added
- The number of records to be generated was adjusted to reflect the larger map scales
  - As the scale number goes down, we want the tool to generate more boxes
- A specific area of interest (optional)
  - A polygon of the United States was added to a new map
  - This feature was set as the Constraining Polygon
- Click Run
- Tool execution may take a few moments

Visualizing the Generated Data in ArcGIS Pro
- The Contents pane will populate with a new feature class that visually represents the generated data
- Not all the generated map scales will be immediately seen

Visualizing the Generated Data in a Text Editor
- Using the file system explorer, navigate to the ArcGIS Pro project used for generating the data and open one of the csv files using your favorite text editor
- The file contents should look similar to the following:
- The Apache JMeter test will be configured to convert each of these bounding boxes into the corresponding cached map tiles

The Cached Map Service Test Plan
To download the Apache JMeter Test Plan used in this Article see: cache_tiles1.zip
Opening the Test Plan in Apache JMeter should look similar to the following:
- Adjust the User Defined Variables to fit your environment
- Xorigin, Yorigin, TileCols, TileRows are properties of the created map cache that can be found on the REST endpoint page of the service
- TileCols and TileRows are typically found under Tile Info Height and Width

Components of the Test Plan

CSV Data Set Config
The CSV Data Set Config elements in JMeter are used to reference the newly generated test data from the file system. The current version of the Test Plan is built to utilize 3 different CSV files (one for each map scale data file).
Note: Other than the User Defined Variables and the setting of the Filename in the CSV Data Set Config elements, there should not be anything else that requires editing or changing in the Test Plan. The test logic is listed below just to explain how the values in the HTTP Request become populated.

Levels Of Detail List Logic
To avoid more complex JMeter test logic, 24 fixed map cache levels of detail are placed inside a class in a JSR223 Sampler test element. The "complex alternative" would be to connect to the endpoint of the service at the start of the test and pull down the cache tile metadata. Putting HTTP logic into JSR223 Samplers is technically doable, but not the route I chose.
- There is only one JSR223 Sampler inside the Levels Of Detail Transaction
- This item is executed only once, at the start of each test thread
- The element contains 24 fixed cache levels of detail, with level 0 starting at scale 591657527.591555
- If your cache scheme starts at a different scale for level 0, then the JSR223 Sampler will need to be manually adjusted
- Otherwise, this JSR223 Sampler does not need to be edited to run the test
- This assumes the cached map service has a Spatial Reference of 102100 (3857)

Levels Of Detail -- JSR223 Sampler (Full Logic):

// FileServer class
import org.apache.jmeter.services.FileServer
public class Lod{
int level
double resolution
double scale
double tolerance
}
public class MyLodList1{
public List<Lod> LodList = new ArrayList()
MyLodList1(){
// Based on ArcGIS Online Map Scales
// https://services.arcgisonline.com/arcgis/rest/services/World_Street_Map/MapServer
//
// Spatial Reference: 102100 (3857)
Lod lod = new Lod()
lod = new Lod()
lod.level = 0
lod.resolution = 156543.03392800014 //11
lod.scale = 591657527.591555
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 1
lod.resolution = 78271.51696399994 //11
lod.scale = 295828763.795777
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 2
lod.resolution = 39135.75848200009 //11
lod.scale = 147914381.897889
lod.tolerance = 0.25
this.LodList.add(lod)
lod = new Lod()
lod.level = 3
lod.resolution = 19567.87924099992 //11
lod.scale = 73957190.948944
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 4
lod.resolution = 9783.93962049996 //11
lod.scale = 36978595.474472
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 5
lod.resolution = 4891.96981024998 //11
lod.scale = 18489297.737236
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 6
lod.resolution = 2445.98490512499 //11
lod.scale = 9244648.868618
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 7
lod.resolution = 1222.9924525624949 //13
lod.scale = 4622324.434309
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 8
lod.resolution = 611.49622628137968 //14
lod.scale = 2311162.217155
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 9
lod.resolution = 305.74811314055756 //14
lod.scale = 1155581.108577
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 10
lod.resolution = 152.87405657041106 //14
lod.scale = 577790.554289
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 11
lod.resolution = 76.437028285073239 //15
lod.scale = 288895.277144
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 12
lod.resolution = 38.21851414253662 //14
lod.scale = 144447.638572
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 13
lod.resolution = 19.10925707126831 //15
lod.scale = 72223.819286
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 14
lod.resolution = 9.5546285356341549 //16
lod.scale = 36111.909643
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 15
lod.resolution = 4.77731426794937 //14
lod.scale = 18055.954822
lod.tolerance = 0.05
this.LodList.add(lod)
lod = new Lod()
lod.level = 16
lod.resolution = 2.388657133974685 //15
lod.scale = 9027.977411
lod.tolerance = 0.025
this.LodList.add(lod)
lod = new Lod()
lod.level = 17
lod.resolution = 1.1943285668550503 //16
lod.scale = 4513.988705
lod.tolerance = 0.025
this.LodList.add(lod)
lod = new Lod()
lod.level = 18
lod.resolution = 0.5971642835598172 //16
lod.scale = 2256.994353
lod.tolerance = 0.005
this.LodList.add(lod)
lod = new Lod()
lod.level = 19
lod.resolution = 0.29858214164761665 //17
lod.scale = 1128.497176
lod.tolerance = 0.005
this.LodList.add(lod)
lod = new Lod()
lod.level = 20
lod.resolution = 0.14929107082380833 //17
lod.scale = 564.248588
lod.tolerance = 0.0025
this.LodList.add(lod)
lod = new Lod()
lod.level = 21
lod.resolution = 0.07464553541190416 //17
lod.scale = 282.124294
lod.tolerance = 0.0005
this.LodList.add(lod)
lod = new Lod()
lod.level = 22
lod.resolution = 0.03732276770595208 //17
lod.scale = 141.062147
lod.tolerance = 0.0005
this.LodList.add(lod)
lod = new Lod()
lod.level = 23
lod.resolution = 0.01866138385297604 //17
lod.scale = 70.5310735
lod.tolerance = 0.0005
this.LodList.add(lod)
}
}
MyLodList1 mylods = new MyLodList1()
List<Lod> LodList = mylods.LodList
vars.putObject("LodList",LodList)

GetMapTile Logic
The JSR223 Samplers inside the GetMapTile Transaction contain the logic responsible for taking a bounding box and transforming it into the corresponding cache tiles.
- There is one JSR223 Sampler for each map scale (e.g. one for each corresponding CSV Data Set Config)
  - CSV Data Set Config A --> JSR223 Sampler A1
- This is executed with every test thread iteration
- This is executed frequently...every time a new bounding box is read in
- These JSR223 Samplers do not need to be edited to run the test
Note: JSR223 Samplers using Groovy are generally executed quickly and add very little overhead to the test

GetMapTile -- JSR223 Sampler A1 (Full Logic):

// Script to process a CSV file (from Load Testing Tools) with lines in the following format:
// bbox,width,height,mapUnits,sr,scale
// FileServer class
import org.apache.jmeter.services.FileServer
import org.apache.commons.math3.util.Precision
//import java.math.BigDecimal
// GetMapTile
bbox_var = vars.get("bbox_A")
String[] bboxParts = bbox_var.split(',')
double xmin = Double.parseDouble(bboxParts[0])
double ymin = Double.parseDouble(bboxParts[1])
double xmax = Double.parseDouble(bboxParts[2])
double ymax = Double.parseDouble(bboxParts[3])
width_var = vars.get("width_A")
height_var = vars.get("height_A")
// Use map scale resolution (map units per pixel) to determine tile level
double mapresolution = 0
int resolutionprecision = 10
mapresolution = Precision.round((Math.abs(xmax - xmin) / Double.parseDouble(width_var)), resolutionprecision)
scale_var = vars.get("scale_A")
double bbox_scale_double = Double.parseDouble(scale_var)
// Map units per pixel
double tileresolution = 0
double lod_resolution = 0
double scale = 0
int tilelevel = 0
LodList = vars.getObject("LodList") // Assuming cached map service has a Spatial Reference of 102100 (3857)
boolean firstIteration = true;
for(int i = 0; i < LodList.size; i++)
{
lod_resolution = Precision.round(LodList[i].resolution, resolutionprecision)
tileresolution = lod_resolution
tilelevel = LodList[i].level
scale = LodList[i].scale
if (mapresolution >= lod_resolution)
{
break
}
}
tileCols_var = vars.get("TileCols")
cols = Double.parseDouble(tileCols_var)
tileRows_var = vars.get("TileRows")
rows = Double.parseDouble(tileRows_var)
// Origin of the cache (upper left corner)
xorigin_var = vars.get("Xorigin")
xorigin = Double.parseDouble(xorigin_var)
yorigin_var = vars.get("Yorigin")
yorigin = Double.parseDouble(yorigin_var)
// Get minimum tile column
double minxtile = (xmin - xorigin) / (cols * tileresolution)
// Get minimum tile row
// From the origin, maxy is minimum y
double minytile = (yorigin - ymax) / (rows * tileresolution)
// Get maximum tile column
double maxxtile = (xmax - xorigin) / (cols * tileresolution)
// Get maximum tile row
// From the origin, miny is maximum y
double maxytile = (yorigin - ymin) / (rows * tileresolution)
// Return integer value for min and max, row and column
int mintilecolumn = (int)Math.floor(minxtile)
int mintilerow = (int)Math.floor(minytile)
int maxtilecolumn = (int)Math.floor(maxxtile)
int maxtilerow = (int)Math.floor(maxytile)
Scheme_var = vars.get("Scheme")
WebServerName_var = vars.get("WebServerName")
ServerInstanceName_var = vars.get("ServerInstanceName")
ServiceName_var = vars.get("ServiceName")
ServiceType_var = vars.get("ServiceType")
def cacheRequest
def tilePaths = []
int count = 0
for (int row = mintilerow; row <= maxtilerow; row++)
{
// for each column in the row, in the map extent
for (int col = mintilecolumn; col <= maxtilecolumn; col++)
{
cacheRequest = ("/").concat(ServerInstanceName_var).concat("/rest/services/").concat(ServiceName_var).concat("/").concat(ServiceType_var)
cacheRequest = cacheRequest.concat("/tile").concat("/").concat(tilelevel.toString()).concat("/").concat(row.toString()).concat("/").concat(col.toString())
count++
tilePaths.add(cacheRequest)
}
}
def requestCount = count.toString()
vars.putObject("RequestCount_A",requestCount)
vars.putObject("TilePaths_A",tilePaths)

Cache Tile Loop and Path Population
There are several components needed for this part of the Test Plan. With the bounding box translated into the corresponding cache tiles and assembled into a list of URLs, a third JSR223 Sampler is needed to place each URL into a variable inside a loop. The loop logic takes place inside the Cache Tiles transaction.
- There is one JSR223 Sampler for each map scale
  - CSV Data Set Config A --> JSR223 Sampler A2
- These JSR223 Samplers do not need to be edited to run the test (a sketch of the idea is shown below)
- A Loop Controller is added to only ask for the actual number of tiles per bounding box, since this amount can change from extent to extent
- The number of tiles that correspond to each bounding box varies by extent but also by the map resolution (1920x1080)
  - Higher screen resolutions require more tiles
- The Loop Controller contains the following elements:
  - Counter
  - JSR223 Sampler
  - HTTP Request
All of the test logic above exists just for this component of the test. For each map scale, there is only one HTTP Request! This simple design favors readability and maintainability.
Note: The HTTP Requests contain a Response Assertion element to validate the items returned from the server. If the content type of the response is image/jpeg or image/png, then the request will pass. However, some VectorTileServer caches may return a Protocolbuffer Binary Format (*.pbf) file. In these cases, the Patterns to Test would need to be manually expanded to the following: image/jpeg || image/png || application/octet-stream || application/x-protobuf
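The tile paths built by JSR223 Sampler A1 follow the pattern /{ServerInstanceName}/rest/services/{ServiceName}/{ServiceType}/tile/{level}/{row}/{col}. As a rough sketch of what the per-iteration sampler (A2) inside the Loop Controller does, the Groovy below pulls one path from the stored list using the Counter value and exposes it to the HTTP Request. The variable names CounterA and TilePath_A are assumptions here; check the actual sampler in cache_tiles1.zip for the real names.

// JSR223 Sampler (Groovy) inside the Loop Controller -- illustrative sketch only
def tilePaths = vars.getObject("TilePaths_A") // list built by JSR223 Sampler A1
int i = Integer.parseInt(vars.get("CounterA")) // Counter element value for this loop iteration (starting at 0)
vars.put("TilePath_A", tilePaths[i].toString()) // the HTTP Request then uses ${TilePath_A} as its Path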
The Thread Group Configuration
The JMeter Test Plan is currently configured for a relatively short test of 20 minutes. Cached map services perform well, so a lot of throughput will be taking place within each step (2 minutes per step) and from the test overall.
- Different environments may require an alternative pressure configuration to achieve the desired test results; adjust as needed

Validating the Test Plan
As a best practice, it is always a good idea to validate the results coming back before executing the actual load test.
- Use the View Results Tree listener to assist with the validation
- The Test Plan includes a View Results Tree listener but it is disabled by default
- Enable it to view the results
- From the GUI, start the test

Transactions
- Select one of the "Cache Tiles" Transactions
- The results should resemble the following:
- In this example, all the transactions completed successfully (e.g. the green checkmark)
  - Cache Tiles (map scale: 4622324.434309)
  - Cache Tiles (map scale: 2311162.217155)
  - Cache Tiles (map scale: 1155581.108577)
- Selecting one of the transactions and the Sampler result element lists some key information
- Take a quick glance at the Size in bytes
  - In the example above, the Transaction size was over 50KB, which suggests decent tile data (for this dataset) was being returned and the responses were not all "blank" images
- The Number of samples in the transaction was 80
  - Since there is a JSR223 Sampler with every tile request, this actually resulted in 40 tiles being downloaded
- The Load time shows 62 (ms), meaning it only took 0.062 seconds to pull down 40 tile images

Requests
- Expand the selected Transaction
  - In this example, Cache Tiles (map scale: 1155581.108577)
- Select one of the HTTPS requests
- The results should resemble the following:
- In this example, the selected request completed successfully (e.g. the green checkmark)
- Take a quick glance at the Load time
  - In this example, the individual tile request only took 2 ms (0.002 seconds) to download
- Clicking on the Response data tab allows you to preview the requested tile:
Note: Once visual validation and debugging is complete, it is recommended to disable the View Results Tree element before executing the load test

Test Execution
The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the cache_tiles1.zip project for an example of how to run a test as recommended by the Apache JMeter team.
- The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment
Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" your test results.
Note: For several reasons, it is strongly advised to never load test ArcGIS Online.

JMeter Report
- The auto-generated JMeter Report can provide insight into the throughput of the cached map service under load
- This report is auto-generated from the command-line options passed in from the runMe.bat script

Throughput Curve
- The JMeter Report for a cached map service load test may appear sluggish and slow when viewed in a web browser
  - This is due to the default nature of its composition, which attempts to render every unique request in some of the charts
  - In a test such as this, there will be many
- From the chart legend, select all JSR223 Sampler items to disable their rendering (as they may skew the scale)
- In this case, the peak throughput for any one of the given map scale transactions of cached tiles was about 15 transactions/second
- Since 3 map scales were tested, the total achieved was about 45 transactions/second
- This equated to around 162,000 cache transactions/hour
- The peak throughput appeared to occur at the 10:34 mark

Performance Curve
- The performance of the cache throughput was good, at roughly 120 ms or 0.12 seconds
- This observation was taken where the peak transactions/sec occurred, at the 10:34 mark
Note: "Peak throughput" is a point in a test where no higher throughput can be achieved. This does not mean that it is the maximum amount of pressure the service will support without "falling over". Generally speaking, if additional users ask for cache tiles after the system has reached peak throughput (e.g. you run the step load configuration higher), the service will still fulfill their requests, but they will just wait longer for the responses to return (due to queueing).

Final Thoughts
The Apache JMeter Test Plan in this Article represents a programmatic approach for applying load to an ArcGIS cached map service. One of the strengths of this test is that it is easy to build, configure and maintain. The auto-generated JMeter report provides charts and summaries that can be used to analyze the performance and scalability of the cached map service.
To download the Apache JMeter Test Plan used in this Article see: cache_tiles1.zip

Additional Items Worth Mentioning
Every cached service is different.
But generally speaking, the performance and scalability of a cached service can be affected by a variety of factors:
- Deployment architecture
  - The location of the cache data with respect to the ArcGIS tile handler(s)
  - Cache data storage disk technology and speed
- Network bandwidth
  - Between the cache data storage and ArcGIS tile handler(s)
  - Between the ArcGIS tile handler(s) and ArcGIS Web Adaptor(s)
  - Between the ArcGIS Web Adaptor(s) and Test Client
- The processor speed and number of processing cores
  - The delivery of cache tiles is quick, but under heavy load the overall process utilizes CPU resources from the ArcGIS tile handler and the ArcGIS Web Adaptor hosting technology, if it exists in the deployment (e.g. Microsoft's Internet Information Services)
- Different data can perform differently
  - The average tile size (e.g. size on disk)
  - Smaller tiles that contain less data might perform differently than larger, more detailed tiles
- Tested map scales
  - Even for the same dataset, map scale 36111.909643 may have "heavier" cache tiles than map scale 1155581.108577

Assumptions and Constraints
- JDK 17 or greater will not work with this (JMeter 5.4.x) Test Plan
  - Running on these JDK releases will throw the following error: org.codehaus.groovy.GroovyBugError: BUG! exception in phase 'semantic analysis' in source unit 'Script161.groovy' Unsupported class file major version 61
  - Using JDK 16 or earlier avoids this error
  - The reason is that JMeter 5.4.x only supports JDK 16 (or earlier)
  - If JDK 17 or greater is required for your environment, you must use JMeter 5.5 (which supports JDK 17)
- On-Demand Cache is not enabled
  - Might work but has not been tested
- Single Fused Map Cache is TRUE
- The cache Storage Format is COMPACT
- Image format of the tiles is JPG or PNG
  - Due to the Response Assertion rule to validate the return from the server
- The included Test Plan should work with a cached service for:
  - Map Image
    - The ServiceType variable (under User Defined Variables) would need to be changed
    - Not heavily tested
  - Vector
    - The ServiceType variable (under User Defined Variables) would need to be changed
    - VectorTile service tile images can be in Protocolbuffer Binary Format (*.pbf)
    - The Response Assertion rule would need to expand to include application/octet-stream or application/x-protobuf
    - The JSR223 Samplers within the GetMapTile transaction would need to be adjusted to add ".pbf" to the end of the cacheRequest variable
    - Not heavily tested

Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
01-18-2022 12:03 PM
BLOG
Network Analyst Route Simply put, the Network Analyst route solver is used for finding the quickest way to get from one place to another. This traveled path might just involve a start and end location but could optionally stop at several locations while also asking the solver to generate turn-by-turn directions for each route in the solution. Note: Route functionality is available with a Network Analyst license. Load Testing a Network Analyst Route Service Network Analyst is packed with many capabilities and features for route solving. Such solutions can be executed through ArcGIS Pro, but many times they are consumed through an ArcGIS service. Since it provides industry-leading technology for route solutions, it is logical to want to load test your locally running (route) solver service to see its scalability potential. There are several types of analysis provided by the Network Analyst extension; this Article uses routes as they are very easy to work with...the only required inputs are at least two valid stop points. This characteristic makes it a good choice for demonstrating how to generate data and use it in a load test against a route service. Note: The walkthrough in this Article used ArcGIS Pro 2.9 with Network Analyst services that ran in an ArcGIS Enterprise 10.9 deployment. How to Test a Network Analyst Route Service? Network Analyst ArcGIS Pro Tutorial Data The understanding of the processes in this Article is most effective if the steps can be followed using the same data. For such a task, the Network Analyst team has made a great set of data available. There is a tutorial found on arcgis.com called Network Analyst ArcGIS Pro Tutorial Data. Zipped, it is about 132MB and consists of Network Analyst data for several different cities: San Diego, Paris, and San Francisco. The Geographic Coordinate System is: WGS 1984 (WKID: 4326). The data is publicly accessible. Note: The examples in this Article will focus on the San Diego dataset. View of the San Diego Streets data from ArcGIS Pro (with Topographic Basemap): The Streets, Walking_Pathways or Network Dataset (NewSanDiego_ND) layers do not need to be enabled to utilize the Network Analyst capabilities In the example above, they are enabled to act as a point of reference for the San Diego streets This Article will not cover the details of creating, configuring or publishing a network dataset in ArcGIS Enterprise. For information on such tasks, see: Create a network dataset tutorial that specifically uses this San Diego geodatabase Publish routing services Note: The route solver examples in this Article use a map service (with the network analysis capability) as opposed to a geoprocessing service. The map service uses synchronous execution. Test Data Generation This testing effort will require valid stop points to use within the JMeter test. As with other JMeter Articles on Community, we need good test data to get the most value from the results. And like before, the Load Testing Tools make short work of this job. There is even a specific tool for creating route data. Version 1.3.0 adds some nice enhancements to the "Generate Data (Solve Route)" tool. Making the Tools Available from ArcGIS Pro Once the load-testing-tools project has been downloaded to your machine, place the unzipped folder in a directory that is accessible or made accessible by ArcGIS Pro.
If you have a previous version of the Load Testing Tools already installed, this updated version can be placed alongside it (although with a different folder name) or completely replace the previous version. For example: Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory: The "Generate Data (Solve Route)" tool can create test data from the (map) service, a local copy of the data or the data within an enterprise geodatabase. For this example, any data in WGS 1984 (WKID: 4326) with an area of interest focusing around San Diego could be used. Launch the Generate Data (Solve Route) Tool Launching the Generate Data (Solve Route) tool should present an interface similar to the following: In its simplest form, only the path of the CSV file, which will contain the stop points, needs to be specified However, while we want to generate random points to use as the stops, we would like to avoid creating them in the bays, lakes or ocean This is where the optional Constraining Polygon parameter comes in This input field can be used to reference a data layer to spatially limit where the points are generated In actuality, we will adjust all of the default values View of the polygon (in pink) outlining the area of interest of the San Diego streets data in ArcGIS Pro: Note: This polygon was created manually and is not included with the San Diego dataset To download the SanDiegoPolygon shapefile used in this Article see: SanDiegoPolygon.zip Note: From a testing point of view, the polygon does not need to include every segment of the streets layer The Generate Data (Solve Route) Tool Inputs Adjust the Number of Tests to: 1000 Adjust the Stops Per Test to: 2 Point the Constraining Polygon to: SanDiegoPolygon Set the Output to a file path location where the results will get written: C:\Users\[username]\Documents\ArcGIS\Projects\NetworkAnalystMap1\sandiegostops1.csv Click Run to execute the tool Examining the CSV file will reveal the generated stop data This data will be used directly in the Apache JMeter test as input Viewing the file in a text editor should show something similar to the following: The features of the route solver are amazingly vast and could accept other spatial data, for example: Barriers, Polyline Barriers, and Polygon Barriers are other inputs that could be passed into a request parameter The generation of these other inputs for route solver requests will not be covered in this Article Spatially Visualize the Generated Points The generated points that are used for the stops in the requests can be added to the ArcGIS Pro project to spatially view their location. From ArcGIS Pro, use Catalog to locate and open the file geodatabase inside the project Locate the random_pts feature class Add the feature class to the Current Map: The Route Solver Test Plan To download the Apache JMeter Test Plan used in this Article see: route_solver1.zip Opening the Test Plan in Apache JMeter should look similar to the following: Adjust the User Defined Variables to fit your environment Note: The Apache JMeter release used for this Article was 5.4.3 (this version provides critical security updates for Apache Log4j2). It is strongly recommended that all Apache JMeter deployments run on the latest release. HTTP Request The route solve test is simple and fairly straightforward. All of the test logic can be found within one JMeter HTTP Request object.
Following the testing style used in previous Articles, this request item is placed inside a Transaction Controller. The key/value pairs for the request in this JMeter test are based on two factors: The functionality available in the published Network Analyst service (and underlying data) The values in this test were taken directly from the default ones used at the REST endpoint of the published San Diego service, for example: https://yourwebadaptor.domain.com/server/rest/services/NetworkAnalyst/SanDiegoRoute/NAServer/Route/solve The version of ArcGIS Enterprise (ArcGIS Server) Some versions add new capabilities This test is based on the published service from the San Diego dataset and ArcGIS Enterprise 10.9 Different network datasets may have different request parameter options available or populated, by default. Some parameters, if enabled (like returnDirections), will tell the solver to return more information. This in turn asks the service to do more work which will increase the response time of the request. Note: The view of the HTTP Request from the Table of Contents (left side of Test Plan) will appear as a mix of JMeter variables and strings. This is by design. These values will become populated on playback (in the View Results Tree object and raw results file). The Thread Group Configuration The JMeter Test Plan is configured for a load test of 20 minutes. With this test example using two stops for each route request, the solver should perform well and return a good handful of samples (e.g. responses from the server) for each step. Different environments and data may require an alternative setting to achieve the desired test results; adjust the test thread settings as needed Validating the Test Plan As a best practice, it is always a good idea to validate the results coming back within the JMeter GUI before executing the actual load test from the command-line. Use the View Results Tree listener to assist with the validation The Test Plan for this Article includes a View Results Tree Listener but it is disabled Enable it to view the results when the test is played from the GUI From the GUI, Start the test Let the test run for 20 seconds or so Click Stop Transactions Select one of the "Route" Transactions The View Results Tree section should resemble the following: In this example, all transactions completed successfully Sometimes when stopping the playback, the last Transactions in the View Results Tree may fail as they were stopped "mid-request"; this is safe to ignore Requests Expand one of the "Route" Transactions Select the HTTPS request within it The results should resemble the following: In this example, the selected request completed successfully (as indicated by the green check mark) The success of the parent Transaction already indicated this status From the Sampler result tab, take a quick glance at the Size in bytes field In this example, the Request Size was about 15KB which usually means good geometry data was returned; in other words, the responses were not "empty", which is more proof that the request was successful Examine the URL of the request As mentioned earlier, the value of the request URL becomes populated at runtime Click on the Response data tab and the Response Body sub-tab This shows a textual view of the data returned from the request: Note: The route geometries returned are commonly rendered in web browser based JavaScript applications. Although Apache JMeter is a (test) client, it does not spatially render these geometry responses from the server in that way.
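Beyond the visual check above, an automated response check can complement the GUI validation. Below is a minimal, hedged sketch of a JSR223 Assertion (Groovy) that could be attached to the solve request to verify the response actually contains route geometry. It is not part of the downloadable route_solver1.zip Test Plan and assumes the request asked for a JSON response (f=json).

```groovy
import groovy.json.JsonSlurper

// Hedged sketch only -- not included in route_solver1.zip.
// Assumes the solve request used f=json so the body parses as JSON.
def body = prev.getResponseDataAsString()
def json = new JsonSlurper().parseText(body)

if (json.error != null) {
    // The REST API returns an "error" object when the solve fails.
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("Solve returned an error: " + json.error)
} else if (json.routes?.features == null || json.routes.features.isEmpty()) {
    // No route geometry came back -- flag the sample as failed.
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("Solve response contained no route features")
}
```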
Test Execution The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the route_solver1.zip project for an example of how to run a test as recommended by the Apache JMeter team. The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment If the Network Analyst route service was published as dedicated, adjust the minimum and maximum instances accordingly prior to running the load test For more information see: Configure service instance settings The published route service used in this Article was dedicated with the maximum instances set to 4 The ArcGIS Server component was running on a system with 4 CPU cores Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results. Note: For several reasons, it is strongly advised to never load test ArcGIS Online. JMeter Report Throughput Curve The auto-generated JMeter Report can provide insight into the throughput of the route service under load Since each Route Transaction contained one request, both metrics (request and transaction) virtually showed the same value; this is expected given the design of the test In this case, the peak throughput for the two stop route solves was about 15 transactions/second Given the environment tested, this equates to around 54,000 route solves/hour Performance Curves The auto-generated JMeter Report can also provide insight into the performance of the route service under load Since each Route Transaction contained one request, both metrics (request and transaction) virtually showed the same value; this is expected given the design of the test The performance of the route requests was good and under 1 second throughout the load test Where the throughput first peaked at 15 transactions/second is where the response time was measured At this point in the test, the average response time was about 333 ms or 0.33 seconds It may also be helpful to see the plotted response times with respect to the step load (configured threads) Previous charts showed values with respect to time Final Thoughts The Apache JMeter Test Plan in this Article represents a programmatic approach for applying load to a Network Analyst route service. One of the strengths of this test is that it is easy to configure and maintain. The auto-generated JMeter report provides charts and summaries that can be used to quickly analyze the performance and scalability of the route service. To download the Apache JMeter Test Plan used in this Article see: route_solver1.zip To download the San Diego dataset used in this Article see: Network Analyst ArcGIS Pro Tutorial Data Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
12-31-2021 01:40 PM
BLOG
Hi @DeanHowell1, Are you referring to the testing of a single fused image cache? If so, a possible solution would be to take generated bounding boxes of interest and convert them on-the-fly to the appropriate set of tiles. This conversion process would be based on the GetLayerTile logic (there might be some older resources out on the internet which still list these steps in various coding languages). Of course, the newer developer APIs from Esri do this for you with a simple function call, but in Apache JMeter's case, this logic would need to be added to the test (e.g. using something like Groovy); a rough sketch of this tile-index math follows below. I would recommend this strategy over converting a HAR file into a load test. Although technically valid, with the HAR file approach the requests are quickly cached and the load test then typically shows high network utilization. But using the first approach (conversion of extents to underlying tiles), the requests are more realistic as the test can spatially cover a lot more area. This topic was also recently discussed as a potential future Community Article. If one is put together I will definitely send you the link. Thanks again for the feedback. Aaron.
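As a rough illustration of the bbox-to-tile conversion mentioned above, a Groovy (JSR223) sketch might look like the following. The tiling-scheme constants (256x256 pixel tiles, Web Mercator origin, and the per-level resolution) are assumptions based on the standard ArcGIS Online/Bing/Google tiling scheme; a real test should read these values from the target service's tiling scheme, and the example bbox values are purely illustrative.

```groovy
// Hedged sketch: expand a Web Mercator bounding box into the cache tile
// (row, column) indices that cover it at one level of detail.
// Origin and resolution assume the standard Web Mercator tiling scheme
// with 256x256 pixel tiles -- verify against your service's tiling scheme.
double originX = -20037508.342787
double originY =  20037508.342787
int    tileSizePx = 256
double resolution = 76.43702828507324   // map units per pixel (assumed mid-range level)

double tileSpan = tileSizePx * resolution

// Example bbox (xmin, ymin, xmax, ymax) -- would normally come from generated CSV test data.
double xmin = -9630000.0, ymin = 4860000.0
double xmax = -9620000.0, ymax = 4870000.0

int colStart = (int) Math.floor((xmin - originX) / tileSpan)
int colEnd   = (int) Math.floor((xmax - originX) / tileSpan)
int rowStart = (int) Math.floor((originY - ymax) / tileSpan)
int rowEnd   = (int) Math.floor((originY - ymin) / tileSpan)

// Each (row, col) pair would become one tile request, e.g. .../MapServer/tile/{level}/{row}/{col}
def tiles = []
for (int r = rowStart; r <= rowEnd; r++) {
    for (int c = colStart; c <= colEnd; c++) {
        tiles << [row: r, col: c]
    }
}
log.info("bbox expands to " + tiles.size() + " tile requests")
```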
11-15-2021 08:50 PM
BLOG
Hi @DeanHowell1, The Performance Engineering team recently released a Community Article on Creating a Load Test in Apache JMeter Against a Hosted Feature Layer Service! The discussion covers how to generate (feature service) test data and how to plug this data (query extents) into an Apache JMeter Test Plan for load testing. We took a programmatic approach to tackling this challenge; however, this part of the test logic for the most part remains hidden from the tester. Happy testing! Aaron
10-26-2021 02:11 PM
BLOG
Why Test a Hosted Feature Layer Service? Previous Community Articles on performance testing with Apache JMeter focused on exercising Map Services through the export function. However, Hosted (feature) layers are also a popular capability of ArcGIS Enterprise and are widely used in deployments. Additionally, querying these layers is based on a "repeated" grid design which can help provide a higher degree of scalability over other visualization technologies. Couple this with client-side rendering of the data returned and it's a win-win. Given that hosted feature services are a proven and favorite service technology, it makes sense to want to test feature queries under load to observe their scalability firsthand. Hosted Feature Layer Service Testing Challenges Compared to testing the export map function, testing Hosted Feature Layer Service queries is a challenge as the requests are more complex to achieve programmatically. A navigational "pan" or a "zoom" in the web browser produces a handful of different queries, each with their own geometry. To repeat this behavior, the constructed load test will not have just one request to issue but many, and a varying amount. Couple this with the fact that each query request in the transaction will have a unique geometry and a changing maxAllowableOffset (depending on the map scale), and it's a lot of moving parts to keep track of. How to Test a Hosted Feature Service? The USGS Motor Vehicle Use Roads Dataset The understanding of the process in this Article is most effective if the steps can be reproduced. But this repeatability requires access to the same set of data. The spatial size of the data source also needs to be large enough to generate decent test data but not so big that it is cumbersome to download. Enter the Motor Vehicle Use Map: Roads feature layer dataset on hub.arcgis.com. The 179K polyline records of USGS Roads data in WGS 1984 Web Mercator (Auxiliary_Sphere) equate to about 200MB when zipped. It is provided through the Creative Commons (CC0) license. View of Roads data from ArcGIS Pro: Large scale view with labeling enabled: This data will be published from ArcGIS Pro to a hosted feature service in ArcGIS Enterprise or loaded directly through Portal for ArcGIS. To create a service from this data, see Publish hosted feature layers in ArcGIS Enterprise Test Data Generation This test will require some good test data to use within the JMeter test. To tackle such a task, it is highly recommended to use the very excellent Load Testing Tools. Version 1.2.2 adds new capabilities like the "Generate Query Extents" tool which will be a great help for generating feature service test data. This data utilizes the grid-based design which is what we want. With the grid-based approach, envelopes for the desired area are created behind the scenes. Then, these envelopes are converted to the appropriate 512x512 query extents. The number of the queries (for each initial envelope) will vary based on where it lands on the grid...this mimics the service behavior in a web browser. Making the Tools Available from ArcGIS Pro Once the load-testing-tools project has been downloaded to your machine, place the unzipped folder in a directory that is accessible or made accessible by ArcGIS Pro. If you have a previous version of the Load Testing Tools already installed, this updated version can be installed alongside it (although with a different folder name) or completely replace the existing folder.
For example: Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory: The "Generate Query Extents" tool can work off the hosted feature service, a local copy of the data or the data within an enterprise geodatabase. Note: the tool should generate query extents from any data but it does require the Projected Coordinate System to be WGS 1984 Web Mercator Auxiliary_Sphere (WKID: 3857). Select an Area of Interest Select an area of interest from the map in which to generate test data. In this example, the Roads data is being viewed from the Northwestern United States (near the state borders of Idaho and Montana). The selected map scale is 1:1,000,000. Run the Generate Query Extents Tool Running the Generate Query Extents tool should present inputs similar to the following: Adjust the Inputs for the Generate Query Extents Tool The default inputs were adjusted to reflect the following: Several smaller and larger scale levels were removed The remaining scale levels are 12, 13, and 14 which correspond to the map scales 144448, 72224, and 36112, respectively The Number of Records for these scales was increased Scale Level 14 may be omitted depending on the release of Load Testing Tools (if absent, please add this Scale Level manually) The File Output Location which should be something similar to: C:\Users\username\Documents\ArcGIS\Projects\Catalog2\query_extents.csv Click Run to execute the tool Note: The duration of time to generate the test data is based on several factors such as the number of different Scale Levels, the Number of Records (per each Scale Level) and the current map scale of the Project. Note: Generating test data using other datasets may dictate the need to use different Scale Levels based on level of detail and feature density. Validating the Generated Test Data It is a good practice to visually verify generated test data. This lets the tester know what the load test will be spatially requesting from the feature service. Once the tool has completed successfully it will generate 3 primary sets of data that are of interest: Bounding box feature classes Contains randomly generated areas of interest One feature class for each requested Scale Level Query Extent feature classes Contains a (512x512) tile grid that each feature query will be based on One feature class for each requested Scale Level Query Extent CSV files Contains the generated test data Each line is composed of the dynamic components of a feature service request One file for each requested Scale Level From the Catalog panel, load the bbox_36112 feature class onto the current map in ArcGIS Pro This output is very similar to the data from the Generate Bounding Boxes tool In this example, the randomly generated boxes are in pink These areas represent the screen resolution of a user requesting data from the feature service Now, from the Catalog panel, load the query_extents_36112 feature class onto the current map but behind (underneath) the bbox_36112 data In this example, the query tile grid boxes are in green These tiles correspond to the areas on the map that the bboxes are requesting data from Zooming in to the map can yield a better understanding of the relationship between these two datasets As seen in the map below, some bboxes are slightly offset from each other but still share a common query tile from the grid beneath them The coordinates of these query tiles (e.g.
from the query_extent feature class) are what will go into the CSV files and ultimately the JMeter load test Looking closer at the bboxes reveals details about their respective query composition For example, some bboxes might require 12 "underlying" tiles to fulfill, others 15 or 20 As seen in the map below, the bbox highlighted in black requires 12 specific query tiles colored in red Note: The tile grid design of the feature service is one of its key strengths as it lends itself to repeatability. This repeatability can be leveraged with caching in a deployment for improved scalability. This is not possible with export map. Examining the generated CSV files will reveal the end results of this transformation Viewing the query_extents_36112.csv file in a text editor should show something similar to the following Depending on the release of Load Testing Tools, the CSV files might be sorted by the operationid column Depending on the release of Load Testing Tools, the lines may or may not be grouped by the operationid column The understanding of the operationid, in this case, is an important testing concept as each operation represents a navigation action (e.g. a pan or zoom) From JMeter's point of view, an operation is the same as a transaction All the lines with a matching operationid will become feature service query request geometries under the same transaction controller The Hosted Feature Service Query Test Plan To download the Apache JMeter Test Plan used in this Article see: roads_hfs1.zip Opening the Test Plan in Apache JMeter should look similar to the following: Adjust the User Defined Variables to fit your environment The 3 CSV files generated from the tool are referenced through the JMeter variables DataFile_A, DataFile_B, and DataFile_C by just the file name (the file system path is not included here) Components of the Test Plan Data Reader Logic The roads_hfs test is a bit of a different beast than other Apache JMeter test examples used in previous articles. The primary difference is that while it's still a data-driven test (e.g. CSV files are used for request input), it is not using the typical "CSV Data Set Config" Config Element object to read in the data. Instead, this logic is performed through JSR223 Samplers that execute Groovy code. The reason Groovy is utilized is due to the nature of interacting with a feature service mentioned earlier. Recall that some transactions will have 12 requests and others may have 15 or 20 (depending on where the overall area of interest lands on the tile grid). This difference in the number of requests requires the test to use a more flexible mechanism for reading and using the data from the CSV files since this will not be constant. There is one JSR223 Sampler for each CSV file (e.g. each map scale) All JSR223 Samplers for reading data are put into a Once Only Controller to minimize overhead The CSV file read will only be carried out once, at the beginning of each test thread Shown below is "JSR223 Sample A1" which will be reading in the file query_extents_72224.csv Experience coding in Groovy is not required for running this test; in fact, these JSR223 Samplers do not need to be edited to run the test, but it is helpful to understand what logic is responsible for reading in the CSV data (a simplified, illustrative sketch of this reader and selection logic appears at the end of this post) Operation ID Selection Logic Once the CSV data has been read in, the test will need to select an operation id for each scale with every test iteration. To accomplish this, a second set of JSR223 Samplers is used to pick from each list of operations.
There is one JSR223 Sampler for each map scale that randomly selects an operation id All JSR223 Samplers for generating this operation id are put into a Transaction Controller called Operation Generator This is executed with every test thread iteration These JSR223 Samplers do not need to be edited to run the test Note: JSR223 Samplers using Groovy are generally executed quickly and add very little overhead to the test Operation Loop and Parameter Population With an operation id chosen, the focus becomes the loop logic, where the test will look up the number of feature service queries that make up the transaction. From there it will use a third set of JSR223 Samplers to populate the request parameters associated with the previously selected operation id with each iteration in the loop. There is one JSR223 Sampler for each map scale that populates the associated JMeter variables based on the operation id and iteration value These items then become key/value pairs which are picked up by the HTTP Request The iteration values are tracked by a Counter Config Element These JSR223 Samplers do not need to be edited to run the test The Loop Controller, Counter, JSR223 Sampler, and HTTP Request objects are all placed inside a corresponding Transaction Controller to logically separate the items for each map scale HTTP Request Essentially, all of the test logic above exists just for this component of the test. Here, the JMeter HTTP Request object can read in the JMeter variables for specific key/value parameters that have been populated by the JSR223 Sampler immediately before it. Since this approach is highly programmatic, there is only one HTTP Request per map scale! Such a design favors maintainability. Note: This test approach would also work for traditional, non-hosted feature layer services. However, these feature services do not have the same request parameter optimizations that hosted services do, such as maxAllowableOffset and quantizationParameters. These options would just need to be deleted from the HTTP Request. The Thread Group Configuration The JMeter Test Plan is currently configured for a relatively short test of 10 minutes. Generally speaking, hosted feature services perform well, so a lot of throughput will be taking place within each step (1 minute per step) as well as from the test overall. Different environments and data may require an alternative setting to achieve the desired test results; adjust as needed Validating the Test Plan As a best practice, it is always a good idea to validate the results coming back before executing the actual load test.
Use the View Results Tree listener to assist with the validation The Test Plan includes a View Results Tree Listener but it is disabled by default Enable it to view the results From the GUI, Start the test Transactions Select one of the "HFS" Transactions The results should resemble the following: In this example, the transactions listed above: HFS (mapscale: 72224), HFS (mapscale: 36112), and HFS (mapscale: 144448) all completed successfully The Sampler result lists some more details Although each Transaction sent one HTTP request per feature query extent, the JMeter test is counting the Sampler as part of the operation The JSR223 Samplers add very little overhead to the Transaction, although they do double the number of samples; this is just a detail to be aware of Take a quick glance at the Size in bytes In this example, the Transaction Size was almost 65KB which suggests some data was being returned and the responses were not "empty" Requests Expand one of the "HFS" Transactions Select one of the https requests The results should resemble the following: In this example, the selected request completed successfully Take a quick glance at the Size in bytes In this example, the Request Size was about 5KB which suggests some data was being returned and the responses were not "empty" (e.g. 1500 bytes) The ContentType is also important Per the parameters in the Test Plan, the requested format is pbf which returns application/x-protobuf Requesting protocol buffers is a best practice as it optimizes the payload The resulting format is binary and cannot easily be viewed without additional help that is not covered in this Article Note: Feature services (including hosted feature services) are rendered on the client (not on the server like export map). Although Apache JMeter is a (test) client, it does not render the server responses through JavaScript like a web browser. Test Execution The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the roads_hfs1.zip project for an example of how to run a test as recommended by the Apache JMeter team. The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.
JMeter Report Throughput Curves The auto-generated JMeter Report can provide insight into the throughput of the HFS transactions under load Non-HFS Transactions have been manually filtered out In this case, the peak throughput for the HFS operations was about 16.5 transactions/second Since there were 3 HFS transactions, this equates to almost 50 transactions/second (or 178,200 transactions/hour) Note: Each of the HFS Transactions will naturally have a similar throughput as their respective execution in the test was weighted the same Performance Curves The auto-generated JMeter Report can provide insight into the performance of the HFS transactions under load Non-HFS Transactions have been manually filtered out In this case, HFS transactions for all scales were sub-second (under 1 second) Even toward the end of the test, under the heaviest load, the average response time was under 225 ms or 0.225 seconds Final Thoughts There are other ways to test hosted feature layer service queries, such as through captured traffic from a web browser while interacting with the endpoint or application. This would produce a list of the service URLs which could be translated into a test. However, a programmatic approach such as the one listed in this Article offers a strategy for testing a wide spatial area of the service covering many more extents than can be practically done with the captured traffic approach. The programmatic approach is also easier to maintain as the size of the Test Plan is much smaller. To put this into perspective, the JMeter test in this Article only contained 3 HTTP Requests (one for each map scale). To download the Apache JMeter Test Plan used in this Article see: roads_hfs1.zip To download the roads dataset used in this Article see: Motor Vehicle Use Map: Roads (Feature Layer) Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
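As referenced in the Data Reader discussion earlier in this post, the following is a minimal, hedged Groovy (JSR223) sketch of the general idea: read one query_extents CSV once per thread, group its lines by the operationid column, and then pick a random operation each iteration. The column name, file name, and JMeter variable names are assumptions for illustration; the actual Samplers in roads_hfs1.zip differ in their details.

```groovy
import org.apache.jmeter.services.FileServer

// Hedged sketch only -- the real JSR223 Samplers in roads_hfs1.zip differ.
// Reader portion (would sit under a Once Only Controller): group the CSV lines
// by operationid so each simulated pan/zoom keeps its variable number of
// query extents together.
def baseDir = FileServer.getFileServer().getBaseDir()
def lines   = new File(baseDir, "query_extents_36112.csv").readLines()

def header = lines.head().split(",")
int opIdx  = header.toList().indexOf("operationid")   // assumed column name

def operations = [:].withDefault { [] }
lines.tail().each { line ->
    def cols = line.split(",")
    operations[cols[opIdx]] << line
}
vars.putObject("operations_36112", operations)   // stash on the thread for later samplers

// Selector portion (would run every iteration): pick one operation id at random
// and expose its request count so a Loop Controller can iterate over it.
def ops  = vars.getObject("operations_36112")
def opId = ops.keySet().toList()[new Random().nextInt(ops.size())]
vars.put("opId_36112", opId)
vars.put("opCount_36112", ops[opId].size().toString())
```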
10-26-2021 01:21 PM
BLOG
Hello @RDSpire, I am very happy to hear that you found these articles useful. Yes...our team definitely has more planned, including one that covers our take on interpreting load test results. Your suggested topic, "How to test a web application published from ArcGIS Enterprise" also sounds like a good subject to socialize on Community. Thank you for your feedback! Aaron
08-26-2021 10:15 AM
BLOG
What is Test Data? Simply put, test data is used to drive a performance or load test by requesting different areas of interest from an ArcGIS Enterprise map service. The spatial part of the data usually takes the form of points or bounding boxes (bboxes) and is typically stored in a plain text file or, in some cases, a database. Previous Community Articles on Load Testing ArcGIS Enterprise with Apache JMeter focused on strategies for building test logic and running the test. The sample projects provided on these blogs included test data in the form of plain text comma separated value (CSV) files that plugged right into the requests. These CSV files contained items like bounding boxes and a corresponding spatial reference to provide the HTTP requests in the test with parameter information. With each iteration of the test, the next line of data is read in and then populated into the request. For demonstration purposes, this test data worked well for requesting different map scales against services like SampleWorldCities and NaturalEarth. However, those sample test datasets are limited for use with other map services as the pre-generated bounding boxes were created to only ask for areas of interest around the world at a high level. If your organization is working with data at the state, county or city level, you'll want to have test data that focuses on those areas to maximize load test value. In other words, you want test data at a larger map scale that covers a specific area of interest. Generating such data that is specific to your services or your spatial data becomes a critical piece of the process for making a good load test. While composing a few geometries by hand for a simple test is certainly doable, the request signatures are quickly repeated resulting in scalability patterns that are skewed and not realistic. A better test is one that utilizes a large amount of random geometries to push the map service and hardware resources more effectively. Tools for Creating Custom Load Data Thankfully, there is a set of recently released testing tools for ArcGIS Pro on GitHub that makes the task of data generation extraordinarily easy. The utility is called Load Testing Tools and is available at: https://www.arcgis.com/home/item.html?id=b06ef175665a45d68f5796f321b56e61 The examples in this Article were based on version 1.1 of the toolset One of my favorite tools in the group is "Generate Bounding Boxes" which can quickly generate bounding boxes by either the map's current extent or a selected polygon. Having the ability to pass in a specific polygon is a very powerful feature as the geometries that are created can be filtered to just your area of interest (e.g. Country, State, County or City). The generated data can be validated visually (via separate feature classes that are created) and plugged right into a JMeter Test Plan (via CSV files that are also created). Again, very easy...very powerful. Creating Custom Test Data Making the Tools Available from ArcGIS Pro Once the load-testing-tools project has been downloaded to your machine, place the folder in a directory that is accessible or made accessible by ArcGIS Pro. For example: Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory: Using A Polygon to Outline the Area of Interest In this ArcGIS Pro project, a polygon feature class (U.S.
State of Indiana in pink) has been added to the Map to define a boundary around the area where the bounding boxes for the requests in the test will be generated. The Projected Coordinate System of the Indiana State feature class is: WGS 1984 Web Mercator (auxiliary sphere) Its WKID is: 3857 For a point of reference, the default Basemap (World Topographic Map) is left in the map The Projected Coordinate System of the Basemap is also: WGS 1984 Web Mercator (auxiliary sphere) Generate Bounding Boxes Tool Inputs You can launch the Generate Bounding Boxes tool by navigating to the load-testing-tools folder from the ArcGIS Pro Catalog screen. Expand the Load Testing Tools.tbx and double-click on Generate Bounding Boxes. The Geoprocessing screen should populate and look similar to the following: One of the convenient features of the Generate Bounding Boxes tool is that it is technically ready to go just by clicking Run! With the default options, it will randomly generate bounding boxes using the current extent of the ArcGIS Pro map. Note: The default map scales of the Generate Bounding Boxes tool are similar to those of ArcGIS Online but for brevity, only every other scale is listed. If additional map scales are needed, they can be manually added from within the tool. While this makes the data generation really easy, in this example we are interested in generating boxes inside a particular polygon (the State of Indiana). We also want to be very specific about the map scales our test will be using, so we'll want to remove some scales and add others from the tool's interface. From the Generate Bounding Boxes tool: Click the red X in front of 73957191, 18489298, 4622324, 1155581, 288895, 282, 70 to remove these map scales From the empty text box under the Scale column, Add 36112 and use 100 for the Number of Records column From the empty text box under the Scale column, Add 9028 and use 1000 for the Number of Records column From the empty text box under the Scale column, Add 2257 and use 3000 for the Number of Records column Increase the Number of Records for 4514 to 1000 records Increase the Number of Records for 1128 to 3000 records Click the drop down under Polygon Layer and select the feature class of interest within the Map, in this case, Indiana Expand Output Options Note the location of the bounding boxes csv file Separate csv files per map scale will also be created at this location Select "Output Separate Feature Class Per Scale" option After the customization, the Generate Bounding Boxes tool input should look like the following ([username] would reflect your Windows username): Click Run Tool execution may take a few moments The Table of Contents screen will start to populate by adding feature classes to the Map (one per scale) Visualizing the Generated Data from the Individual Feature Classes Once complete, the output within ArcGIS Pro should look similar to the following: The individual feature classes make quality checking a breeze as it's easy to see the areas of interest that the test will be requesting from the generated data Note: Some of the generated bounding boxes may have portions of their geometry that fall outside the polygon of interest. This is okay.
Thanks to the visualization of the data, it is also easy to see why fewer bounding boxes were created for smaller map scales like 1:72,224 and 1:36,112 Similarly, this is why more bounding boxes were created for larger map scales like 1:2,257 and 1:1,128 Note: Depending on your data and its density at the larger scales, it could be advantageous to generate more than 3000 bounding boxes (per scale) in order to "cover more ground". Keep in mind that some load test frameworks may read CSV data into memory, and creating extremely large datasets may require more memory from the test client. Visualizing the Generated Data from the Individual CSV Files Using the file system explorer, navigate to the ArcGIS Pro project used for generating the data: C:\Users\[username]\Documents\ArcGIS\Projects\MyProject1 The folder contents should look similar to the following: Opening the contents of bounding_boxes_2257.csv should resemble the following: This data will work with most load testing tools that allow the parameterization of HTTP requests from CSV files Note: The feature class to use as a Polygon Layer for spatial filtering can utilize a Projected Coordinate System other than WGS 1984 Web Mercator (auxiliary sphere). However, the generated CSV data will still be projected into bounding boxes that have a WKID of 3857. Using the Generated Data in an Apache JMeter Test Plan With a procedure for generating spatially customized data, you can take the CSV files and import them into an Apache JMeter Test Plan to use in a load test. The previous testing Articles: Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced) Using Public Domain Data to Benchmark an ArcGIS Enterprise Map Service (Intermediate) Provided Apache JMeter sample tests that would make good templates to use with your new data and against your map services. CSV Data Set Config Using the CSV Data Set Config element in JMeter, the new generated test data can be referenced from its path on the file system. The Filename path value refers to the location of the CSV file on the disk C:/JMeter Tests/naturalearth1/datasets/bounding_boxes_288895.csv Sample test projects from previous Articles used variables for the path ${ProjectFolder}/datasets/bounding_boxes_288895.csv The Variable Names denote the column headers in the CSV file bbox,width,height,mapUnits,sr,scale would then become bbox_288895,width_288895,height_288895,mapUnits_288895,sr_288895,scale_288895 as the test may be using other map scales where just "bbox" would be ambiguous The HTTP Request elements pointing to your map service can then be adjusted to utilize variables such as ${bbox_288895} that reference your generated test data (a hedged example of such an export request follows below). Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
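To make the last point concrete, below is a hedged sketch of how an export map HTTP Request might reference the generated variables. The service path and the format/dpi/transparent values are placeholders for illustration only; bbox, bboxSR, and size are standard export parameters being fed from the generated CSV columns described above.

```
GET https://yourwebadaptor.domain.com/server/rest/services/YourMapService/MapServer/export
    ?bbox=${bbox_288895}
    &bboxSR=${sr_288895}
    &size=${width_288895},${height_288895}
    &format=png32
    &dpi=96
    &transparent=true
    &f=image
```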
07-29-2021 11:17 AM
BLOG
Updates to the following sections: Testing Framework, Bottleneck, Interactive Response Time Law. Additions of the following sections: Testing Framework Architecture.
07-18-2021 04:07 PM