
Creating a Load Test in Apache JMeter Against a Hosted Feature Layer Service (Intermediate/Advanced)

10-26-2021 01:21 PM
AaronLopez
Esri Contributor

Why Test a Hosted Feature Layer Service?

Previous Community Articles on performance testing with Apache JMeter focused on exercising Map Services through the export function. However, hosted (feature) layers are also a popular capability of ArcGIS Enterprise and are widely used in deployments. Additionally, querying these layers is based on a "repeated" grid design, which can provide a higher degree of scalability than other visualization technologies. Couple this with client-side rendering of the returned data and it's a win-win.

Given that hosted feature services are a proven and favorite service technology, it makes sense to test feature queries under load and observe their scalability first-hand.

Hosted Feature Layer Service Testing Challenges

Compared to testing the export map function, testing Hosted Feature Layer Service queries is a challenge, as the requests are more complex to reproduce programmatically. A navigational "pan" or "zoom" in the web browser produces a handful of different queries, each with its own geometry. To repeat this behavior, the constructed load test must issue not just one request but many, and a varying amount. Couple this with the fact that each query request in the transaction has a unique geometry and a changing maxAllowableOffset (depending on the map scale), and it's a lot of moving parts to keep track of.

How to Test a Hosted Feature Service?

The USGS Motor Vehicle Use Roads Dataset

Understanding the process in this Article is most effective if the steps can be reproduced, and this repeatability requires access to the same set of data. The data source also needs to be spatially large enough to generate decent test data, but not so big that it is cumbersome to download.

Enter the Motor Vehicle Use Map: Roads feature layer dataset on hub.arcgis.com. The 179K polyline records of USGS Roads data in WGS 1984 Web Mercator (Auxiliary_Sphere) equate to about 200MB when zipped. The data is provided under the Creative Commons (CC0) license.

  • View of Roads data from ArcGIS Pro:

usgs_roads_small_scale_arcgispro.png

  • Large scale view with labeling enabled:

usgs_roads_large_scale_arcgispro.png

This data will be published from ArcGIS Pro to a hosted feature service in ArcGIS Enterprise or loaded directly through Portal for ArcGIS.

Test Data Generation

This test will require good test data to drive the JMeter requests.

To tackle such a task, it is highly recommended to use the excellent Load Testing Tools.

Version 1.2.2 adds new capabilities like the "Generate Query Extents" tool which will be a great help for generating feature service test data.

The tool's output follows the grid-based design, which is what we want. With the grid-based approach, envelopes for the desired area are created behind the scenes. These envelopes are then converted to the appropriate 512x512 query extents. The number of queries (for each initial envelope) will vary based on where it lands on the grid; this mimics the service behavior in a web browser.
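The grid idea can be sketched with a little math: given a bounding box and a map resolution, the covering 512x512 query extents come from flooring the box corners against a fixed grid origin. The Python below is an illustrative sketch, not the tool's actual code; the Web Mercator origin is the standard top-left tiling origin, and the resolution is the one commonly paired with the ~1:144,448 scale (Scale Level 12).

```python
import math

# Standard Web Mercator top-left tiling origin (assumed here, not taken from the tool)
ORIGIN = (-20037508.342787, 20037508.342787)

def covering_tiles(xmin, ymin, xmax, ymax, resolution, tile_px=512):
    """Return the fixed-grid tile envelopes (tile_px x tile_px pixels) that
    cover the given bounding box, in map units (Web Mercator meters)."""
    tile_size = tile_px * resolution                  # tile edge length in map units
    ox, oy = ORIGIN
    col0 = math.floor((xmin - ox) / tile_size)        # leftmost touched column
    col1 = math.floor((xmax - ox) / tile_size)        # rightmost touched column
    row0 = math.floor((oy - ymax) / tile_size)        # rows count downward from the origin
    row1 = math.floor((oy - ymin) / tile_size)
    return [(ox + c * tile_size, oy - (r + 1) * tile_size,
             ox + (c + 1) * tile_size, oy - r * tile_size)
            for r in range(row0, row1 + 1) for c in range(col0, col1 + 1)]

# Bboxes slightly offset from one another still resolve to shared grid tiles,
# which is what makes the queries repeatable (and cache-friendly).
tiles = covering_tiles(-12600000, 5900000, -12500000, 5960000, resolution=38.2185141425366)
print(len(tiles))  # 24 tiles cover this example bbox
```

The key property is that the tile envelopes depend only on the grid, never on the individual bbox, so overlapping areas of interest produce identical query geometries.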

Making the Tools Available from ArcGIS Pro

Once the load-testing-tools project has been downloaded to your machine, place the unzipped folder in a directory that is accessible (or made accessible) by ArcGIS Pro. If a previous version of the Load Testing Tools is already installed, this updated version can be installed alongside it (under a different folder name) or completely replace the existing folder.

For example:

  • Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS
  • Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory:

arcgispro_catalog_loadtestingtools_update.png

The "Generate Query Extents" tool can work off the hosted feature service, a local copy of the data or the data within an enterprise geodatabase.

Note: the tool should generate query extents from any data but it does require the Projected Coordinate System to be WGS 1984 Web Mercator Auxiliary_Sphere (WKID: 3857).

Select an Area of Interest

Select an area of interest from the map in which to generate test data. In this example, the Roads data is being viewed from the Northwestern United States (near the state borders of Idaho and Montana). The selected map scale is 1:1,000,000.

usgs_roads_areaofinterest_arcgispro.png

Run the Generate Query Extents Tool

  • Running the Generate Query Extents tool should present inputs similar to the following:

generate_query_extents_tool_defaults.png

Adjust the Inputs for the Generate Query Extents Tool

The default inputs were adjusted to reflect the following:

  • Several smaller and larger scale levels were removed
  • The remaining scale levels are 12, 13, and 14 which correspond to the map scales 144448, 72224, and 36112, respectively
    • The Number of Records for these scales were increased
    • Scale Level 14 may be omitted depending on the release of Load Testing Tools (if absent, please add this Scale Level manually)
  • The File Output Location which should be something similar to:
    • C:\Users\username\Documents\ArcGIS\Projects\Catalog2\query_extents.csv
  • Click Run to execute the tool

Note: The duration of time to generate the test data is based on several factors such as the number of different Scale Levels, the Number of Records (per each Scale Level) and the current map scale of the Project.

generate_query_extents_tool_adjusted.png

Note: Generating test data using other datasets may dictate the need to use different Scale Levels based on level of detail and feature density.

Validating the Generated Test Data

It is a good practice to visually verify generated test data. This lets the tester know what the load test will be spatially requesting from the feature service.

Once the tool has completed successfully it will generate 3 primary sets of data that are of interest:

  • Bounding box feature classes
    • Contains randomly generated areas of interest
    • One feature class for each requested Scale Level
  • Query Extent feature classes
    • Contains a (512x512) tile grid that each feature query will be based on
    • One feature class for each requested Scale Level
  • Query Extent CSV files
    • Contains the generated test data
    • Each line is composed of the dynamic components of a feature service request
    • One file for each requested Scale Level

generate_query_extents_tool_output.png

  • From the Catalog panel, load the bbox_36112 feature class onto the current map in ArcGIS Pro
    • This output is very similar to the data from the Generate Bounding Boxes tool
  • In this example, the randomly generated boxes are in pink
    • These areas represent the screen resolution of a user requesting data from the feature service

viewing_bbox_featureclass_generated_data.png

  • Now, from the Catalog panel, load the query_extents_36112 feature class onto the current map but behind (underneath) the bbox_36112 data
  • In this example, the query tile grid boxes are in green
    • These tiles correspond to the areas on the map for which the bboxes are requesting data

viewing_queryextents_featureclass_generated_data.png

  • Zooming in to the map can yield a better understanding of the relationship between these two datasets
  • As seen in the map below, some bboxes are slightly offset from each other but still share a common query tile from the grid beneath them
  • The coordinates of these query tiles (e.g. from the query_extent feature class) are what will go into the CSV files and ultimately the JMeter load test

viewing_queryextents_featureclass_generated_data_zoom.png

  • Looking closer at the bboxes reveals details on their respective query composition
    • For example, some bboxes might require 12 "underlying" tiles to fulfill, others 15 or 20
    • As seen in the map below, the bbox highlighted in black requires 12 specific query tiles colored in red

viewing_queryextents_featureclass_generated_data_examine.png

 Note: The tile grid design of the feature service is one of its key strengths as it lends itself to repeatability. This repeatability can be leveraged with caching in a deployment for improved scalability. This is not possible with export map.

  • Examining the generated CSV files will reveal the end results of this transformation
  • Viewing the query_extents_36112.csv file in a text editor should show something similar to the following
    • Depending on the release of Load Testing Tools, the CSV files might be sorted by the operationid column

csv_file_notepad.png

  • Depending on the release of Load Testing Tools, the lines may or may not be grouped by the operationid column
  • Understanding the operationid is an important testing concept here, as each operation represents a navigation action (e.g. a pan or zoom)
  • From JMeter's point of view, an operation is the same as a transaction
    • All the lines with a matching operationid will become feature service query request geometries under the same transaction controller
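Conceptually, this boils down to grouping the CSV lines by operationid and then picking one operation at random per test iteration. The Python below is a sketch of that idea only; the actual Test Plan does this in Groovy inside JSR223 Samplers, and the extent column names here are hypothetical.

```python
import csv, io, random
from collections import defaultdict

# Hypothetical CSV content shaped like the tool's output: each line is one
# query extent, and lines sharing an operationid form one transaction.
csv_text = """operationid,minx,miny,maxx,maxy
op1,-100,40,-99,41
op1,-99,40,-98,41
op2,-101,42,-100,43
"""

def load_operations(f):
    """Group the CSV rows by operationid: one list of query extents per operation."""
    ops = defaultdict(list)
    for row in csv.DictReader(f):
        ops[row["operationid"]].append(row)
    return ops

ops = load_operations(io.StringIO(csv_text))
op_id = random.choice(list(ops))   # the selection logic: pick one operation per iteration
print(op_id, len(ops[op_id]))      # the number of requests varies per operation
```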

The Hosted Feature Service Query Test Plan 

  • To download the Apache JMeter Test Plan used in this Article see: roads_hfs1.zip
  • Opening the Test Plan in Apache JMeter should look similar to the following:
    • Adjust the User Defined Variables to fit your environment
      • The 3 CSV files generated from the tool are referenced through the JMeter variables DataFile_A, DataFile_B, and DataFile_C by file name only (the file system path is not included here)

jmeter_hfs_testplan.png

Components of the Test Plan

Data Reader Logic

The roads_hfs test is a bit of a different beast than the Apache JMeter test examples used in previous articles. The primary difference is that while it is still a data-driven test (e.g. CSV files are used for request input), it does not use the typical "CSV Data Set Config" Config Element to read in the data. Instead, this logic is performed through JSR223 Samplers that execute Groovy code. Groovy is utilized due to the nature of interacting with a feature service mentioned earlier. Recall that some transactions will have 12 requests and others may have 15 or 20 (depending on where the overall area of interest lands on the tile grid). This variation in the number of requests requires the test to use a more flexible mechanism for reading and using the data from the CSV files, since the count will not be constant.

  • There is one JSR223 Sampler for each CSV file (e.g. each map scale)
    • All JSR223 Samplers for reading data are put into a Once Only Controller to minimize overhead
      • The CSV file read will only be carried out once, at the beginning of each test thread
  • Shown below is "JSR223 Sample A1" which will be reading in the file query_extents_72224.csv
    • Experience coding in Groovy is not required for running this test; in fact, these JSR223 Samplers do not need to be edited to run the test, but it is helpful to understand which logic is responsible for reading in the CSV data

jmeter_hfs_datareader_logic.png

Operation ID Selection Logic

Once the CSV data has been read in, the test will need to select an operation id for each scale with every test iteration. To accomplish this, a second set of JSR223 Samplers is used to pick from each list of operations.

  • There is one JSR223 Sampler for each map scale that randomly selects an operation id 
    • All JSR223 Samplers for generating this operation id are put into a Transaction Controller called Operation Generator
      • This is executed with every test thread iteration
      • These JSR223 Samplers do not need to be edited to run the test

Note: JSR223 Samplers using Groovy are generally executed quickly and add very little overhead to the test

jmeter_hfs_operationselector_logic.png

Operation Loop and Parameter Population

With an operation id chosen, the focus becomes the loop logic, where the test looks up the number of feature service queries that make up the transaction. From there, it uses a third set of JSR223 Samplers to populate the request parameters associated with the previously selected operation id on each iteration of the loop.

  • There is one JSR223 Sampler for each map scale that populates the associated JMeter variables based on the operation id and iteration value
    • These items then become key/value pairs which are picked up by the HTTP Request
    • The iteration values are tracked by a Counter Config Element
    • These JSR223 Samplers do not need to be edited to run the test
  • The Loop Controller, Counter, JSR223 Sampler, and HTTP Request objects are all placed inside a corresponding Transaction Controller to logically separate the items for each map scale

jmeter_hfs_parameterpopulator_logic.png
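The loop-and-populate step can be sketched as follows. This is illustrative Python, not the Groovy from the Test Plan, and the variable and column names are hypothetical.

```python
# A small grouped dataset, shaped like the CSV data after it has been read in:
# one operation id mapping to a variable-length list of query extents.
operations = {
    "op1": [{"minx": "-100", "miny": "40", "maxx": "-99", "maxy": "41"},
            {"minx": "-99",  "miny": "40", "maxx": "-98", "maxy": "41"}],
}

def populate_requests(op_id, operations):
    """Mimic the loop: one set of request parameters per query extent in the
    operation. In JMeter these become variables (key/value pairs) consumed by
    a single, parameterized HTTP Request."""
    params = []
    for i, extent in enumerate(operations[op_id]):   # the Counter element tracks i
        geometry = ",".join([extent["minx"], extent["miny"],
                             extent["maxx"], extent["maxy"]])
        params.append({"iteration": i, "geometry": geometry})
    return params

for p in populate_requests("op1", operations):
    print(p["iteration"], p["geometry"])
```

Because the loop length comes from the data, an operation with 12 extents produces 12 parameter sets and an operation with 20 produces 20, all through the same single HTTP Request per map scale.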

HTTP Request

Essentially, all of the test logic above exists just for this component of the test. Here, the JMeter HTTP Request object can read-in the JMeter variables for specific key/value parameters that have been populated by the JSR223 Sampler immediately before it.

Since this approach is highly programmatic, there is only one HTTP Request per map scale! Such a design favors maintainability.

jmeter_hfs_request_parameter_details.png

Note: This test approach would also work for traditional, non-hosted feature layer services. However, these feature services do not have the same request parameter optimizations that hosted services do such as maxAllowableOffset and quantizationParameters. These options would just need to be deleted from the HTTP Request.
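For reference, a query request of this kind can be assembled from key/value pairs like the ones below. The parameter names follow the public ArcGIS REST API query operation; the URL and values are placeholders, not taken from the actual Test Plan.

```python
from urllib.parse import urlencode

# Illustrative parameters for a hosted feature service query. Names follow the
# ArcGIS REST API "query" operation; values here are placeholders.
params = {
    "f": "pbf",                                        # protocol buffers: optimized payload
    "where": "1=1",
    "geometry": "-12600000,5900000,-12580432,5919568", # one 512x512 query extent
    "geometryType": "esriGeometryEnvelope",
    "inSR": "102100",
    "spatialRel": "esriSpatialRelIntersects",
    "outFields": "*",
    "outSR": "102100",
    "maxAllowableOffset": "38.2185141425366",          # varies with the map scale
}
# Placeholder service URL (not a real endpoint)
base = "https://example.com/server/rest/services/Hosted/Roads/FeatureServer/0/query"
url = base + "?" + urlencode(params)
print(url)
```

In the Test Plan itself, the geometry and maxAllowableOffset values come from the JMeter variables populated by the JSR223 Samplers rather than being hard-coded.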

The Thread Group Configuration

The JMeter Test Plan is currently configured for a relatively short test of 10 minutes. Generally speaking, hosted feature services perform well, so a lot of throughput will be taking place within each step (1 minute per step) as well as from the test overall.

  • Different environments and data may require an alternative setting to achieve the desired test results, adjust as needed

jmeter_hfs_stepload.png

Validating the Test Plan

As a best practice, it is always a good idea to validate the results coming back before executing the actual load test.

  • Use the View Results Tree listener to assist with the validation
    • The Test Plan includes a View Results Tree Listener but it is disabled by default
      • Enable it to view the results
  • From the GUI, Start the test

Transactions 

  • Select one of the "HFS" Transactions
    • The results should resemble the following:

jmeter_viewresults_transaction.png

  • In this example, all three transactions listed above, HFS (mapscale: 72224), HFS (mapscale: 36112), and HFS (mapscale: 144448), completed successfully
    • The Sampler result lists some more details
      • Although each Transaction sent one HTTP request per feature query extent, the JMeter test counts the JSR223 Sampler as part of the operation
        • The JSR223 Samplers add very little overhead to the Transaction, although they do double the number of samples; this is just a detail to be aware of
  • Take a quick glance at the Size in bytes
    • In this example, the Transaction Size was almost 65KB which suggests some data was being returned and the responses were not "empty"

Requests

  • Expand one of the "HFS" Transactions
  • Select one of the https requests
    • The results should resemble the following:

jmeter_viewresults_request.png

  • In this example, the selected request completed successfully
  • Take a quick glance at the Size in bytes
    • In this example, the Request Size was about 5KB, which suggests some data was being returned and the response was not "empty" (e.g. roughly 1500 bytes)
  • The ContentType is also important
    • Per the parameters in the Test Plan, the requested format is pbf which returns application/x-protobuf 
      • Requesting protocol buffers is a best practice as it optimizes the payload
        • The resulting format is binary and cannot easily be viewed without additional tooling, which is not covered in this Article

Note: Feature services (including hosted feature services) are rendered on the client (not on the server like export map). Although Apache JMeter is a (test) client, it does not render the server responses through JavaScript like a web browser.

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan.

See the runMe.bat script included with the roads_hfs1.zip  project for an example on how to run a test as recommended by the Apache JMeter team. 

  • The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment
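For reference, a non-GUI JMeter run typically looks like the sketch below. These are the standard JMeter command line options; the paths are placeholders and this is not the literal contents of runMe.bat.

```bat
rem Minimal sketch of a non-GUI JMeter run (paths are placeholders).
rem -n = non-GUI mode, -t = test plan, -l = results log,
rem -e/-o = generate the HTML report into an empty output folder.
set jmeterbin=C:\apache-jmeter\bin
"%jmeterbin%\jmeter" -n -t roads_hfs1.jmx -l results.jtl -e -o report
```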

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

JMeter Report

Throughput Curves

  • The auto-generated JMeter Report can provide insight into the throughput of the HFS transactions under load
  • Non-HFS Transactions have been manually filtered out
  • In this case, the peak throughput for the HFS operations was about 16.5 transactions/second
    • Since there were 3 HFS transactions, this equates to almost 50 transactions/second (or 178,200 transactions/hour)
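The reported figures can be double-checked with quick arithmetic (numbers taken from the report above):

```python
per_transaction_tps = 16.5     # peak throughput of each HFS transaction
transaction_types = 3          # one HFS transaction per map scale
total_tps = per_transaction_tps * transaction_types
print(total_tps)               # 49.5 transactions/second, i.e. "almost 50"
print(int(total_tps * 3600))   # 178200 transactions/hour
```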

jmeter_report_throughput.png

Note: Each of the HFS Transactions will naturally have a similar throughput as their respective execution in the test was weighted the same

Performance Curves

  • The auto-generated JMeter Report can provide insight into the performance of the HFS transactions under load
    • Non-HFS Transactions have been manually filtered out
  • In this case, HFS transactions for all scales were sub-second (under 1 second)
    • Even toward the end of the test, under the heaviest load, the average response time was under 225 ms or 0.225 seconds

jmeter_report_responsetime.png

Final Thoughts

There are other ways to test hosted feature layer service queries, such as capturing traffic from a web browser while interacting with the endpoint or application. This would produce a list of service URLs which could be translated into a test. However, a programmatic approach such as the one described in this Article offers a strategy for testing a wide spatial area of the service, covering many more extents than is practical with the captured-traffic approach.

The programmatic approach is also easier to maintain, as the size of the Test Plan is much smaller. To put this into perspective, the JMeter test contained in this Article has only 3 HTTP Requests (one for each map scale).


Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
