BLOG
Hi @DeanHowell1, here is an Article you may find interesting for testing a cached service: Creating a Load Test in Apache JMeter Against a Cached Map Service (Advanced). That Article focuses on testing a map service, but the same Test Plan should work against a cached image service. Hope this helps. Aaron
01-18-2022 12:56 PM
Why Test a Cached Map Service?

Cached map services are a popular and recommended way to provide a well-performing presentation of static data. The cache service type is a proven technology, but there may still be requirements to test it under load to observe its scalability first hand on a specific deployment architecture. While cached map services perform well, serving up thousands of simultaneous tile requests can be resource intensive on the server hardware.

Note: Due to the fast rate of delivery and consumption of the resource, load testing cached map services can also be intensive on the hardware utilization of the test client workstation.

Cached Map Service Testing Challenges

Compared to load testing the export map function, proper testing of a cached map service introduces several challenges, as the request composition changes with each map screen. Since the underlying cache scheme uses a grid design, the map extents of some pans or zooms may pull down more or fewer tile images than others. Accounting for this real-world behavior of the cache service makes the test logic more complex than if it were exercising the export map function. The test logic should also be dynamic and cover a decent area of interest. Converting a HAR file of captured cache tile requests into a test might be quick and easy to do, but it does not show the realistic scalability of the service, because the same small sample of tile requests is used over and over again.

Generally speaking, requests for individual cache tiles are fast...very fast. Due to this behavior, the test logic also needs to perform well, scale with the service, and have minimal overhead on the test client.

How to Test a Cached Map Service?

The steps in this Article should work with any existing cached map service on your local ArcGIS Enterprise deployment. However, if one is not available, it is recommended to give the Natural Earth dataset a look for the task.
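To put the tile volume in perspective, here is a quick back-of-the-envelope sketch (hypothetical numbers; actual counts depend on how each extent aligns with the tile grid) estimating how many 256x256 tiles a single 1920x1080 map view can request:

```python
import math

# Upper-bound estimate of 256x256 tiles needed to fill one map view.
# An extent rarely aligns exactly with the tile grid, so one extra
# row and column of partially visible tiles is allowed on each axis.
def tiles_per_view(screen_width, screen_height, tile_size=256):
    cols = math.ceil(screen_width / tile_size) + 1
    rows = math.ceil(screen_height / tile_size) + 1
    return cols * rows

print(tiles_per_view(1920, 1080))  # up to 54 tiles for one pan or zoom
```

At that rate, even a few dozen simulated users panning the map can generate thousands of tile requests per minute, which is why both the server and the test client need headroom.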
The Natural Earth Dataset

Although the steps should work with any data, the walkthrough of the process in this Article might be more effective if it can be directly followed. For such cases, it is worth turning to the Natural Earth dataset, which provides some decent map detail (at smaller scales) covering the whole world.

- Download the Natural Earth dataset here
- The download above is a subset of the larger Natural_Earth_quick_start.zip and includes a modified MXD for ArcMap 10.8.1 and an ArcGIS Pro 2.8 project
- Either can be used to publish and create a cached map service to ArcGIS Enterprise

The Natural Earth subset of data should look similar to the following when opened in ArcGIS Pro (or ArcMap).

This Article will not cover the details of creating, configuring or publishing a cached map service in ArcGIS Enterprise. For information on such actions, see: Tutorial: Creating a cached map service

Note: It is recommended to become familiar with some of the metadata details of the cached map service, as the load testing effort will require knowledge of some of that information (e.g. xorigin, yorigin, tileCols, tileRows, and spatial reference, as well as the scales that contain tiles).

Test Data Generation

With a cached map service available, the next step would be to generate test data over an area of interest. As with other JMeter Articles on Community, we need good test data to get the most value from the results. And like before, the Load Testing Tools package (for ArcGIS Pro) makes short work of this job. There is even a specific tool for creating bounding box data to use with a cached map service.

Note: Version 1.3.0 of Load Testing Tools added the "Generate Bounding Boxes (Precision)" tool.

Download and unzip the package, then make that folder available to your ArcGIS Pro project.
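As a hedged illustration of where those metadata values live, the service's REST endpoint can be queried with ?f=json and the tile scheme read from the response. The JSON below is a hand-made excerpt modeled on a typical cached MapServer response; verify the field names against your own service's output:

```python
import json

# Hypothetical excerpt of a cached MapServer's ?f=json metadata
# (field names assumed from a typical response; check your service).
meta_json = """{
  "tileInfo": {
    "rows": 256,
    "cols": 256,
    "origin": {"x": -20037508.342787, "y": 20037508.342787},
    "spatialReference": {"wkid": 102100, "latestWkid": 3857},
    "lods": [
      {"level": 0, "resolution": 156543.03392800014, "scale": 591657527.591555}
    ]
  }
}"""

tile_info = json.loads(meta_json)["tileInfo"]
xorigin = tile_info["origin"]["x"]   # -> Xorigin user defined variable
yorigin = tile_info["origin"]["y"]   # -> Yorigin
tile_cols = tile_info["cols"]        # -> TileCols
tile_rows = tile_info["rows"]        # -> TileRows
scales = [lod["scale"] for lod in tile_info["lods"]]
```

These are the same values that will be entered into the Test Plan's User Defined Variables later on.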
The Generate Bounding Boxes (Precision) Tool

Launching the Generate Bounding Boxes (Precision) tool should present an interface similar to the following:

Before running the tool, let's adjust the input to target the data generation process to:

- Specific map scales (in this case three different scales)
  - Scales 4622324.434309 and 1155581.108577 were kept
  - Scale 2311162.217155 was added
- The number of records to be generated was adjusted to reflect larger map scales
  - As the scale number goes down, we want the tool to generate more boxes
- A specific area of interest (optional)
  - A polygon of the United States was added to a new map
  - This feature was set as the Constraining Polygon
- Click Run
  - Tool execution may take a few moments

Visualizing the Generated Data in ArcGIS Pro

The Contents pane will populate with new feature classes that visually represent the generated data. Not all the generated map scales will be immediately seen.

Visualizing the Generated Data in a Text Editor

Using the file system explorer, navigate to the ArcGIS Pro project used for generating the data and open one of the CSV files using your favorite text editor. The file contents should look similar to the following:

The Apache JMeter test will be configured to convert each of these bounding boxes into the corresponding cached map tiles.

The Cached Map Service Test Plan

To download the Apache JMeter Test Plan used in this Article see: cache_tiles1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

Adjust the User Defined Variables to fit your environment:

- Xorigin, Yorigin, TileCols, TileRows are properties of the created map cache that can be found on the REST endpoint page of the service
- TileCols and TileRows are typically found under Tile Info Height and Width

Components of the Test Plan

CSV Data Set Config

The CSV Data Set Config elements in JMeter are used to reference the newly generated test data from the file system.
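Each bounding box read from these CSV files is eventually converted into a range of cache tile indices. A language-neutral sketch of that conversion follows (the Test Plan performs the equivalent math in Groovy; the origin and level 0 resolution below are the standard Web Mercator values and are illustrative):

```python
import math

# ArcGIS relates map scale and resolution via 96 DPI and 39.37 inches/meter:
#   resolution (meters/pixel) = scale / (96 * 39.37)
LEVEL0_RESOLUTION = 591657527.591555 / (96 * 39.37)  # ~156543.03392800014

def bbox_to_tile_range(xmin, ymin, xmax, ymax, resolution,
                       xorigin, yorigin, tile_cols=256, tile_rows=256):
    # The cache origin is the upper-left corner of the tiling scheme,
    # so the maximum y of the extent maps to the minimum tile row.
    min_col = math.floor((xmin - xorigin) / (tile_cols * resolution))
    max_col = math.floor((xmax - xorigin) / (tile_cols * resolution))
    min_row = math.floor((yorigin - ymax) / (tile_rows * resolution))
    max_row = math.floor((yorigin - ymin) / (tile_rows * resolution))
    return min_col, max_col, min_row, max_row

# At level 0 the whole world fits in one 256x256 tile, so any extent
# collapses to tile (0, 0).
tiles = bbox_to_tile_range(-13e6, 3e6, -12e6, 4e6, LEVEL0_RESOLUTION,
                           -20037508.342787, 20037508.342787)
```

Every tile in the resulting (row, column) range becomes one HTTP request during the test.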
The current version of the Test Plan is built to utilize 3 different CSV files (one for each map scale data file).

Note: Other than the User Defined Variables and the setting of the Filename in the CSV Data Set Config elements, there should not be anything else that requires editing or changing in the Test Plan. The test logic is listed below just to explain how the values in the HTTP Request become populated.

Levels Of Detail List Logic

To avoid more complex JMeter test logic, 24 fixed map cache levels of detail are placed inside a class in a JSR223 Sampler test element. The "complex alternative" would be to connect to the endpoint of the service at the start of the test and pull down the cache tile metadata. Putting HTTP logic into JSR223 Samplers is technically doable, but not the route I chose.

- There is only one JSR223 Sampler inside the Levels Of Detail Transaction
- This item is executed only once, at the start of each test thread
- The element contains 24 fixed cache levels of detail, with level 0 starting at scale 591657527.591555
  - If your cache scheme starts at a different scale for level 0, then the JSR223 Sampler will need to be manually adjusted
- This JSR223 Sampler does not need to be edited to run the test
- This assumes the cached map service has a Spatial Reference of 102100 (3857)

Levels Of Detail -- JSR223 Sampler (Full Logic):

// FileServer class
import org.apache.jmeter.services.FileServer
public class Lod{
int level
double resolution
double scale
double tolerance
}
public class MyLodList1{
public List<Lod> LodList = new ArrayList()
MyLodList1(){
// Based on ArcGIS Online Map Scales
// https://services.arcgisonline.com/arcgis/rest/services/World_Street_Map/MapServer
//
// Spatial Reference: 102100 (3857)
Lod lod = new Lod()
lod.level = 0
lod.resolution = 156543.03392800014 //11
lod.scale = 591657527.591555
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 1
lod.resolution = 78271.51696399994 //11
lod.scale = 295828763.795777
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 2
lod.resolution = 39135.75848200009 //11
lod.scale = 147914381.897889
lod.tolerance = 0.25
this.LodList.add(lod)
lod = new Lod()
lod.level = 3
lod.resolution = 19567.87924099992 //11
lod.scale = 73957190.948944
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 4
lod.resolution = 9783.93962049996 //11
lod.scale = 36978595.474472
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 5
lod.resolution = 4891.96981024998 //11
lod.scale = 18489297.737236
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 6
lod.resolution = 2445.98490512499 //11
lod.scale = 9244648.868618
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 7
lod.resolution = 1222.9924525624949 //13
lod.scale = 4622324.434309
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 8
lod.resolution = 611.49622628137968 //14
lod.scale = 2311162.217155
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 9
lod.resolution = 305.74811314055756 //14
lod.scale = 1155581.108577
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 10
lod.resolution = 152.87405657041106 //14
lod.scale = 577790.554289
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 11
lod.resolution = 76.437028285073239 //15
lod.scale = 288895.277144
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 12
lod.resolution = 38.21851414253662 //14
lod.scale = 144447.638572
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 13
lod.resolution = 19.10925707126831 //15
lod.scale = 72223.819286
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 14
lod.resolution = 9.5546285356341549 //16
lod.scale = 36111.909643
lod.tolerance = 0.5
this.LodList.add(lod)
lod = new Lod()
lod.level = 15
lod.resolution = 4.77731426794937 //14
lod.scale = 18055.954822
lod.tolerance = 0.05
this.LodList.add(lod)
lod = new Lod()
lod.level = 16
lod.resolution = 2.388657133974685 //15
lod.scale = 9027.977411
lod.tolerance = 0.025
this.LodList.add(lod)
lod = new Lod()
lod.level = 17
lod.resolution = 1.1943285668550503 //16
lod.scale = 4513.988705
lod.tolerance = 0.025
this.LodList.add(lod)
lod = new Lod()
lod.level = 18
lod.resolution = 0.5971642835598172 //16
lod.scale = 2256.994353
lod.tolerance = 0.005
this.LodList.add(lod)
lod = new Lod()
lod.level = 19
lod.resolution = 0.29858214164761665 //17
lod.scale = 1128.497176
lod.tolerance = 0.005
this.LodList.add(lod)
lod = new Lod()
lod.level = 20
lod.resolution = 0.14929107082380833 //17
lod.scale = 564.248588
lod.tolerance = 0.0025
this.LodList.add(lod)
lod = new Lod()
lod.level = 21
lod.resolution = 0.07464553541190416 //17
lod.scale = 282.124294
lod.tolerance = 0.0005
this.LodList.add(lod)
lod = new Lod()
lod.level = 22
lod.resolution = 0.03732276770595208 //17
lod.scale = 141.062147
lod.tolerance = 0.0005
this.LodList.add(lod)
lod = new Lod()
lod.level = 23
lod.resolution = 0.01866138385297604 //17
lod.scale = 70.5310735
lod.tolerance = 0.0005
this.LodList.add(lod)
}
}
MyLodList1 mylods = new MyLodList1()
List<Lod> LodList = mylods.LodList
vars.putObject("LodList",LodList)

GetMapTile Logic

The JSR223 Samplers inside the GetMapTile Transaction contain the logic responsible for taking a bounding box and transforming it into the corresponding cache tiles.

- There is one JSR223 Sampler for each map scale (e.g. one for each corresponding CSV Data Set Config)
  - CSV Data Set Config A --> JSR223 Sampler A1
- This is executed with every test thread iteration
  - This is executed frequently...every time a new bounding box is read in
- These JSR223 Samplers do not need to be edited to run the test

Note: JSR223 Samplers using Groovy are generally executed quickly and add very little overhead to the test.

GetMapTile -- JSR223 Sampler A1 (Full Logic):

// Script to process a CSV file (from Load Testing Tools) with lines in the following format:
// bbox,width,height,mapUnits,sr,scale
// FileServer class
import org.apache.jmeter.services.FileServer
import org.apache.commons.math3.util.Precision
//import java.math.BigDecimal
// GetMapTile
bbox_var = vars.get("bbox_A")
String[] bboxParts = bbox_var.split(',')
double xmin = Double.parseDouble(bboxParts[0])
double ymin = Double.parseDouble(bboxParts[1])
double xmax = Double.parseDouble(bboxParts[2])
double ymax = Double.parseDouble(bboxParts[3])
width_var = vars.get("width_A")
height_var = vars.get("height_A")
// Use map scale resolution (map units per pixel) to determine tile level
double mapresolution = 0
int resolutionprecision = 10
mapresolution = Precision.round((Math.abs(xmax - xmin) / Double.parseDouble(width_var)), resolutionprecision)
scale_var = vars.get("scale_A")
double bbox_scale_double = Double.parseDouble(scale_var)
// Map units per pixel
double tileresolution = 0
double lod_resolution = 0
double scale = 0
int tilelevel = 0
LodList = vars.getObject("LodList") // Assuming cached map service has a Spatial Reference of 102100 (3857)
for(int i = 0; i < LodList.size; i++)
{
lod_resolution = Precision.round(LodList[i].resolution, resolutionprecision)
tileresolution = lod_resolution
tilelevel = LodList[i].level
scale = LodList[i].scale
if (mapresolution >= lod_resolution)
{
break
}
}
tileCols_var = vars.get("TileCols")
cols = Double.parseDouble(tileCols_var)
tileRows_var = vars.get("TileRows")
rows = Double.parseDouble(tileRows_var)
// Origin of the cache (upper left corner)
xorigin_var = vars.get("Xorigin")
xorigin = Double.parseDouble(xorigin_var)
yorigin_var = vars.get("Yorigin")
yorigin = Double.parseDouble(yorigin_var)
// Get minimum tile column
double minxtile = (xmin - xorigin) / (cols * tileresolution)
// Get minimum tile row
// From the origin, maxy is minimum y
double minytile = (yorigin - ymax) / (rows * tileresolution)
// Get maximum tile column
double maxxtile = (xmax - xorigin) / (cols * tileresolution)
// Get maximum tile row
// From the origin, miny is maximum y
double maxytile = (yorigin - ymin) / (rows * tileresolution)
// Return integer value for min and max, row and column
int mintilecolumn = (int)Math.floor(minxtile)
int mintilerow = (int)Math.floor(minytile)
int maxtilecolumn = (int)Math.floor(maxxtile)
int maxtilerow = (int)Math.floor(maxytile)
Scheme_var = vars.get("Scheme")
WebServerName_var = vars.get("WebServerName")
ServerInstanceName_var = vars.get("ServerInstanceName")
ServiceName_var = vars.get("ServiceName")
ServiceType_var = vars.get("ServiceType")
def cacheRequest
def tilePaths = []
int count = 0
for (int row = mintilerow; row <= maxtilerow; row++)
{
// for each column in the row, in the map extent
for (int col = mintilecolumn; col <= maxtilecolumn; col++)
{
cacheRequest = ("/").concat(ServerInstanceName_var).concat("/rest/services/").concat(ServiceName_var).concat("/").concat(ServiceType_var)
cacheRequest = cacheRequest.concat("/tile").concat("/").concat(tilelevel.toString()).concat("/").concat(row.toString()).concat("/").concat(col.toString())
count++
tilePaths.add(cacheRequest)
}
}
def requestCount = count.toString()
vars.putObject("RequestCount_A",requestCount)
vars.putObject("TilePaths_A",tilePaths)

Cache Tile Loop and Path Population

There are several components needed for this part of the Test Plan. With the bounding box translated into the corresponding cache tiles and assembled into a list of URLs, a third JSR223 Sampler is needed to place each URL into a variable inside a loop. The loop logic takes place inside the Cache Tiles Transaction.

- There is one JSR223 Sampler for each map scale
  - CSV Data Set Config A --> JSR223 Sampler A2
- These JSR223 Samplers do not need to be edited to run the test
- A Loop Controller is added so the test only asks for the actual number of tiles per bounding box, since this amount can change from extent to extent
- The number of tiles that corresponds to each bounding box varies by extent but also by the screen resolution (e.g. 1920x1080)
  - Higher screen resolutions require more tiles
- The Loop Controller contains the following elements:
  - Counter
  - JSR223 Sampler
  - HTTP Request

All of the test logic above exists just for this component of the test. For each map scale, there is only one HTTP Request! This simple design favors readability and maintainability.

Note: Each HTTP Request contains a Response Assertion element to validate the items returned from the server. If the content type of the response is image/jpeg or image/png, then the request will pass. However, some VectorTileServer caches may return a Protocolbuffer Binary Format (*.pbf) file. In these cases, the Patterns to Test would need to be manually expanded to the following: image/jpeg || image/png || application/octet-stream || application/x-protobuf

The Thread Group Configuration

The JMeter Test Plan is currently configured for a relatively short test of 20 minutes. Cached map services perform well, so a lot of throughput will be taking place within each step (2 minutes per step) and across the test overall.
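The Response Assertion's content-type check described above can be pictured as a small stand-alone sketch (illustrative only, not part of the Test Plan):

```python
# Content types the expanded Response Assertion pattern would accept.
ACCEPTED_TILE_TYPES = {
    "image/jpeg",
    "image/png",
    "application/octet-stream",  # some VectorTileServer caches
    "application/x-protobuf",    # Protocolbuffer Binary Format (*.pbf)
}

def tile_response_ok(content_type_header):
    # Strip any "; charset=..." parameter before comparing.
    media_type = content_type_header.split(";")[0].strip().lower()
    return media_type in ACCEPTED_TILE_TYPES

print(tile_response_ok("image/png"))                 # a tile passes
print(tile_response_ok("text/html; charset=utf-8"))  # an error page fails
```

Validating the content type this way catches the common failure mode where the server returns an HTML error page with an HTTP 200 status.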
Different environments may require an alternative pressure configuration to achieve the desired test results; adjust as needed.

Validating the Test Plan

As a best practice, it is always a good idea to validate the results coming back before executing the actual load test. Use the View Results Tree listener to assist with the validation.

- The Test Plan includes a View Results Tree Listener, but it is disabled by default
- Enable it to view the results
- From the GUI, Start the test

Transactions

Select one of the "Cache Tiles" Transactions. The results should resemble the following:

In this example, all the transactions completed successfully (e.g. the green checkmark):

- Cache Tiles (map scale: 4622324.434309)
- Cache Tiles (map scale: 2311162.217155)
- Cache Tiles (map scale: 1155581.108577)

Selecting one of the transactions and the Sampler result element lists some key information:

- Take a quick glance at the Size in bytes
  - In the example above, the Transaction size was over 50KB, which suggests decent tile data (for this dataset) was being returned and the responses were not all "blank" images
- The Number of samples in the transaction was 80
  - Since there is a JSR223 Sampler with every tile request, this actually resulted in 40 tiles being downloaded
- The Load time shows 62 (ms), meaning it only took 0.062 seconds to pull down 40 tile images

Requests

Expand the selected Transaction (in this example, Cache Tiles (map scale: 1155581.108577)) and select one of the HTTPS requests. The results should resemble the following:

In this example, the selected request completed successfully (e.g. the green checkmark).

- Take a quick glance at Load time
  - In this example, the individual tile request only took 2 ms (0.002 seconds) to download
- Clicking on the Response data tab allows you to preview the requested tile

Note: Once visual validation and debugging is complete, it is recommended to disable the View Results Tree element before executing the load test.

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the cache_tiles1.zip project for an example of how to run a test as recommended by Apache JMeter.

- The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" your test results.

Note: For several reasons, it is strongly advised to never load test ArcGIS Online.
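Beyond the auto-generated report, per-label throughput can also be derived directly from the raw results file. The sketch below assumes the default CSV output columns (timeStamp in milliseconds, elapsed, label, ...) and uses a tiny hand-made sample; it also notes the arithmetic for converting transactions/second into transactions/hour:

```python
import csv
import io
from collections import defaultdict

# Hand-made sample in the shape of a JMeter CSV results file
# (column names assumed from the default CSV output configuration).
sample_results = """timeStamp,elapsed,label,success
1000,50,Cache Tiles,true
1500,40,Cache Tiles,true
2000,45,Cache Tiles,true
"""

def throughput_per_label(csv_text):
    # Group sample timestamps by label, then divide the sample count
    # by the observed time span (minimum 1 ms to avoid division by zero).
    by_label = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        by_label[row["label"]].append(int(row["timeStamp"]))
    result = {}
    for label, stamps in by_label.items():
        duration_s = max(1, max(stamps) - min(stamps)) / 1000.0
        result[label] = len(stamps) / duration_s
    return result

print(throughput_per_label(sample_results))
# A sustained 45 transactions/second works out to
# 45 * 3600 = 162,000 transactions/hour.
```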
JMeter Report

The auto-generated JMeter Report can provide insight into the throughput of the cached map service under load. This report is auto-generated from the command-line options passed in from the runMe.bat script.

Throughput Curve

The JMeter Report for a cached map service load test may appear sluggish and slow when viewed in a web browser. This is due to the default nature of its composition, which attempts to render every unique request in some of the charts; in a test such as this, there will be many.

- From the chart legend, select all JSR223 Sampler items to disable their rendering (as they may skew the scale)
- In this case, the peak throughput for any one of the given map scale transactions of cached tiles was about 15 transactions/second
- Since 3 map scales were tested, the total throughput achieved was 45 transactions/second
  - This equated to around 162,000 cache transactions/hour
- The peak throughput appears to occur at the 10:34 mark

Performance Curve

The performance of the cache throughput was good at roughly 120 ms, or 0.12 seconds. This observation was taken where the peak transactions/sec occurred, at the 10:34 mark.

Note: "Peak throughput" is a point in a test where no higher throughput can be achieved. This does not mean that is the maximum amount of pressure the service will support without "falling over". Generally speaking, if additional users ask for cache tiles after the system has reached peak throughput (e.g. you run the step load configuration higher), the service will still fulfill their requests, but they will just wait longer for the responses to return (due to queueing).

Final Thoughts

The Apache JMeter Test Plan in this Article represents a programmatic approach for applying load to an ArcGIS cached map service. One of the strengths of this test is that it is easy to build, configure and maintain.
The auto-generated JMeter report provides charts and summaries that can be used to analyze the performance and scalability of the cached map service.

To download the Apache JMeter Test Plan used in this Article see: cache_tiles1.zip

Additional Items Worth Mentioning

Every cached service is different. But generally speaking, the performance and scalability of a cached service can be affected by a variety of factors:

- Deployment architecture
  - The location of the cache data with respect to the ArcGIS tile handler(s)
  - Cache data storage disk technology and speed
- Network bandwidth
  - Between the cache data storage and ArcGIS tile handler(s)
  - Between the ArcGIS tile handler(s) and ArcGIS Web Adaptor(s)
  - Between the ArcGIS Web Adaptor(s) and Test Client
- The processor speed and number of processing cores
  - The delivery of cache tiles is quick, but under heavy load the overall process utilizes CPU resources from the ArcGIS tile handler and the ArcGIS Web Adaptor (if it exists in the deployment) hosting technology (e.g. Microsoft's Internet Information Services)
- Different data can perform differently
  - The average tile size (e.g. size on disk)
  - Smaller tiles that contain less data might perform differently than larger, more detailed tiles
- Tested map scales
  - Even for the same dataset, map scale 36111.909643 may have "heavier" cache tiles than map scale 1155581.108577

Assumptions and Constraints

- JDK 17 or greater will not work with this (JMeter 5.4.x) Test Plan
  - Running on these JDK releases will throw the following error: org.codehaus.groovy.GroovyBugError: BUG! exception in phase 'semantic analysis' in source unit 'Script161.groovy' Unsupported class file major version 61
  - Using JDK 16 or earlier avoids this error
  - The reason is that JMeter 5.4.x only supports JDK 16 (or earlier)
  - If JDK 17 or greater is required for your environment, you must use JMeter 5.5 (which supports JDK 17)
- On-Demand Cache is not enabled
  - Might work but has not been tested
- Single Fused Map Cache is TRUE
- The cache Storage Format is COMPACT
- The image format of the tiles is JPG or PNG
  - Due to the Response Assertion rule to validate the return from the server
- The included Test Plan should also work with a cached service of type:
  - Map Image
    - The ServiceType variable (under User Defined Variables) would need to be changed
    - Not heavily tested
  - Vector
    - The ServiceType variable (under User Defined Variables) would need to be changed
    - VectorTile service tile images can be in Protocolbuffer Binary Format (*.pbf)
      - The Response Assertion rule would need to expand to include application/octet-stream or application/x-protobuf
      - The JSR223 Samplers within the GetMapTile transaction would need to be adjusted to add ".pbf" to the end of the cacheRequest variable
    - Not heavily tested

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
01-18-2022 12:03 PM
Network Analyst Route

Simply put, the Network Analyst route solver is used for finding the quickest way to get from one place to another. The traveled path might involve just a start and end location, but it could optionally stop at several locations while also asking the solver to generate turn-by-turn directions for each route in the solution.

Note: Route functionality is available with a Network Analyst license.

Load Testing a Network Analyst Route Service

Network Analyst is packed with many capabilities and features for route solving. Such solutions can be executed through ArcGIS Pro, but many times the functionality is consumed through an ArcGIS service. Since it provides industry-leading technology for route solutions, it is logical to want to load test your locally running (route) solver service to see its scalability potential.

There are several types of analysis provided by the Network Analyst extension; this Article uses routes, as they are very easy to work with...the only required inputs are at least two valid stop points. This characteristic makes routes a good choice for demonstrating how to generate data and use it in a load test against a route service.

Note: The walkthrough in this Article used ArcGIS Pro 2.9 with Network Analyst services that ran in an ArcGIS Enterprise 10.9 deployment.

How to Test a Network Analyst Route Service?

Network Analyst ArcGIS Pro Tutorial Data

The understanding of the processes in this Article is most effective if the steps can be followed using the same data. For such a task, the Network Analyst team has made a great set of data available. There is a tutorial found on arcgis.com called Network Analyst ArcGIS Pro Tutorial Data. Zipped, it is about 132MB and consists of Network Analyst data for several different cities: San Diego, Paris, and San Francisco. The Geographic Coordinate System is WGS 1984 (WKID: 4326). The data is publicly accessible.

Note: The examples in this Article will focus on the San Diego dataset.
View of the San Diego Streets data from ArcGIS Pro (with Topographic Basemap):

- The Streets, Walking_Pathways or Network Dataset (NewSanDiego_ND) layers do not need to be enabled to utilize the Network Analyst capabilities
- In the example above, they are enabled to act as a point of reference for the San Diego streets

This Article will not cover the details of creating, configuring or publishing a network dataset in ArcGIS Enterprise. For information on such tasks, see:

- Create a network dataset (a tutorial that specifically uses this San Diego geodatabase)
- Publish routing services

Note: The route solver examples in this Article use a map service (with the network analysis capability) as opposed to a geoprocessing service. The map service uses synchronous execution.

Test Data Generation

This testing effort will require valid stop points to use within the JMeter test. As with other JMeter Articles on Community, we need good test data to get the most value from the results. And like before, the Load Testing Tools make short work of this job. There is even a specific tool for creating route data. Version 1.3.0 adds some nice enhancements to the "Generate Data (Solve Route)" tool.

Making the Tools Available from ArcGIS Pro

Once the load-testing-tools project has been downloaded to your machine, place the unzipped folder in a directory that is accessible (or can be made accessible) by ArcGIS Pro. If you have a previous version of the Load Testing Tools already installed, this updated version can be placed alongside it (although with a different folder name) or can completely replace the previous version. For example:

- Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS
- Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory

The "Generate Data (Solve Route)" tool can create test data from the (map) service, a local copy of the data, or the data within an enterprise geodatabase.
For this example, any data in WGS 1984 (WKID: 4326) with an area of interest focused around San Diego could be used.

Launch the Generate Data (Solve Route) Tool

Launching the Generate Data (Solve Route) tool should present an interface similar to the following:

- In its simplest form, only the path of the CSV file, which will contain the stop points, needs to be specified
- However, while we want to generate random points to use as the stops, we would like to avoid creating them in the bays, lakes or ocean
  - This is where the optional Constraining Polygon parameter comes in
  - This input field can be used to reference a data layer to spatially limit where the points are generated
- In actuality, we will adjust all of the default values

View of the polygon (in pink) outlining the area of interest of the San Diego streets data in ArcGIS Pro:

Note: This polygon was created manually and is not included with the San Diego dataset. To download the SanDiegoPolygon shapefile used in this Article see: SanDiegoPolygon.zip

Note: From a testing point of view, the polygon does not need to include every segment of the streets layer.

The Generate Data (Solve Route) Tool Inputs

- Adjust the Number of Tests to: 1000
- Adjust the Stops Per Test to: 2
- Point the Constraining Polygon to: SanDiegoPolygon
- Set the Output to a file path location where the results will get written: C:\Users\[username]\Documents\ArcGIS\Projects\NetworkAnalystMap1\sandiegostops1.csv
- Click Run to execute the tool

Examining the CSV file will reveal the generated stop data. This data will be used directly in the Apache JMeter test as input. Viewing the file in a text editor should show something similar to the following:

The features of the route solver are amazingly vast and could accept other spatial data. For example, Barriers, Polyline Barriers, and Polygon Barriers are other inputs that could be passed into a request parameter. The generation of these other inputs for route solver requests will not be covered in this Article.
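Conceptually, constrained point generation can be thought of as rejection sampling: draw random points within the polygon's bounding box and keep only those that fall inside the polygon. The following is a self-contained sketch of that idea, not the actual implementation of the Generate Data (Solve Route) tool:

```python
import random

def point_in_polygon(x, y, poly):
    # Ray-casting test; poly is a list of (x, y) vertices.
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside

def random_points(n, bounds, poly, rng=random.Random(42)):
    # Rejection sampling: keep drawing until n points land inside poly.
    # (A polygon that is tiny relative to its bounds would make this slow.)
    xmin, ymin, xmax, ymax = bounds
    pts = []
    while len(pts) < n:
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        if point_in_polygon(x, y, poly):
            pts.append((x, y))
    return pts
```

With a San Diego outline as poly, the generated points would land on land rather than in the bays or ocean, which is exactly what the Constraining Polygon parameter accomplishes.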
Spatially Visualize the Generated Points

The generated points that are used for the stops in the requests can be added to the ArcGIS Pro project to spatially view their location.

- From ArcGIS Pro, use Catalog to locate and open the file geodatabase inside the project
- Locate the random_pts feature class
- Add the feature class to the Current Map

The Route Solver Test Plan

To download the Apache JMeter Test Plan used in this Article see: route_solver1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

- Adjust the User Defined Variables to fit your environment

Note: The Apache JMeter release used for this Article was 5.4.3 (this version provides critical security updates for Apache Log4j2). It is strongly recommended that all Apache JMeter deployments run on the latest release.

HTTP Request

The route solve test is simple and fairly straightforward. All of the test logic can be found within one JMeter HTTP Request object. Following the testing style used in previous Articles, this request item is placed inside a Transaction Controller.

The key/value pairs for the request in this JMeter test are based on two factors:

- The functionality available in the published Network Analyst service (and underlying data)
  - The values in this test were taken directly from the default ones used on the REST endpoint of the published San Diego service, for example: https://yourwebadaptor.domain.com/server/rest/services/NetworkAnalyst/SanDiegoRoute/NAServer/Route/solve
- The version of ArcGIS Enterprise (ArcGIS Server)
  - Some versions add new capabilities
  - This test is based on the published service from the San Diego dataset and ArcGIS Enterprise 10.9

Different network datasets may have different request parameter options available or populated by default. Some parameters, if enabled (like returnDirections), will tell the solver to return more information. This in turn asks the service to do more work, which will increase the response time of the request.
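To make the key/value pairs concrete, here is a minimal sketch of how a two-stop solve request's parameters might be assembled. The coordinates are hypothetical San Diego-area points in WGS 1984, and the parameters shown (stops, f, returnDirections, returnRoutes) are a small subset; take the full default set from your own service's REST page:

```python
# Two hypothetical stops (x, y) in WGS 1984 (WKID 4326).
stops = [(-117.1956, 32.7157), (-117.1611, 32.7076)]

# Simple comma/semicolon syntax for the stops parameter; other
# parameter names/values here are illustrative defaults only.
params = {
    "stops": ";".join("{},{}".format(x, y) for x, y in stops),
    "f": "json",
    "returnDirections": "false",
    "returnRoutes": "true",
}
print(params["stops"])  # -117.1956,32.7157;-117.1611,32.7076
```

In the JMeter Test Plan, the two coordinate pairs come from the generated CSV via the CSV Data Set Config, so each thread iteration solves a different route.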
Note: The view of the HTTP Request from the Table of Contents (left side of the Test Plan) will appear as a mix of JMeter variables and strings. This is by design. These values will be populated on playback (in the View Results Tree object and the raw results file).

The Thread Group Configuration

The JMeter Test Plan is configured for a load test of 20 minutes. With this test example using two stops for each route request, the solver should perform well and return a good number of samples (e.g. responses from the server) for each step. Different environments and data may require an alternative setting to achieve the desired test results; adjust the test thread settings as needed.

Validating the Test Plan

As a best practice, it is always a good idea to validate the results coming back within the JMeter GUI before executing the actual load test from the command line.

Use the View Results Tree listener to assist with the validation
The Test Plan for this Article includes a View Results Tree Listener but it is disabled
Enable it to view the results when the test is played from the GUI
From the GUI, Start the test
Let the test run for 20 seconds or so
Click Stop

Transactions

Select one of the "Route" Transactions
The View Results Tree section should resemble the following:
In this example, all transactions completed successfully
Sometimes when stopping the playback, the last Transaction in the View Results Tree may fail as it was stopped "mid-request"; this is safe to ignore

Requests

Expand one of the "Route" Transactions
Select the HTTPS request within it
The results should resemble the following:
In this example, the selected request completed successfully (as indicated by the green check mark)
The success of the parent Transaction already indicated this status
From the Sampler result tab, take a quick glance at the Size in bytes field
In this example, the Request Size was about 15KB, which usually means good geometry data was returned; in other words, the responses were
not "empty" and is more proof that it was successful
Examine the URL of the request
As mentioned earlier, the value of the request URL becomes populated at runtime
Click on the Response data tab and the Response Body sub-tab
This shows a textual view of the data returned from the request:

Note: The route geometries returned are commonly rendered in web browser based JavaScript applications. Although Apache JMeter is a (test) client, it does not spatially render these geometry responses from the server in that way.

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the route_solver1.zip project for an example of how to run a test as recommended by the Apache JMeter team.

The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment
If the Network Analyst route service was published as dedicated, adjust the minimum and maximum instances accordingly prior to running the load test
For more information see: Configure service instance settings
The published route service used in this Article was dedicated with the maximum instances set to 4
The ArcGIS Server component was running on a system with 4 CPU cores

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Note: For several reasons, it is strongly advised to never load test ArcGIS Online.
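The sanity checks from the validation step earlier (response big enough to carry geometry, no error object, a non-empty route) can be automated rather than eyeballed. The sketch below assumes the solve response is the public JSON shape with a routes.features array; verify against your own service's output, and note the 1500-byte floor is an arbitrary illustrative threshold:

```python
import json

def validate_solve_response(body, min_bytes=1500):
    """Return (ok, message) for a solve response body (a JSON string).

    Mirrors the manual View Results Tree checks: size, error object,
    presence of at least one route feature.
    """
    if len(body) < min_bytes:
        return False, "response suspiciously small"
    data = json.loads(body)
    if "error" in data:
        # REST errors still come back as HTTP 200 with an error object,
        # so a size check alone is not enough
        return False, f"service error: {data['error'].get('message', 'unknown')}"
    routes = data.get("routes", {}).get("features", [])
    if not routes:
        return False, "no route features returned"
    return True, "ok"
```

In JMeter the equivalent automation would be a Response Assertion or a JSR223 Assertion applied to the HTTP Request.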
JMeter Report

Throughput Curve

The auto-generated JMeter Report can provide insight into the throughput of the route service under load
Since each Route Transaction contained one request, both metrics (request and transaction) showed virtually the same value; this is expected given the design of the test
In this case, the peak throughput for the two-stop route solves was about 15 transactions/second
Given the environment tested, this equates to around 54,000 route solves/hour (15 x 3,600 seconds)

Performance Curves

The auto-generated JMeter Report can also provide insight into the performance of the route service under load
Since each Route Transaction contained one request, both metrics (request and transaction) showed virtually the same value; this is expected given the design of the test
The performance of the route requests was good and under 1 second throughout the load test
Where the throughput first peaked at 15 transactions/second is where the response time was measured
At this point in the test, the average response time was about 333 ms or 0.33 seconds
It may also be helpful to see the plotted response times with respect to the step load (configured threads)
Previous charts showed values with respect to time

Final Thoughts

The Apache JMeter Test Plan in this Article represents a programmatic approach for applying load to a Network Analyst route service. One of the strengths of this test is that it is easy to configure and maintain. The auto-generated JMeter report provides charts and summaries that can be used to quickly analyze the performance and scalability of the route service.

To download the Apache JMeter Test Plan used in this Article see: route_solver1.zip
To download the San Diego dataset used in this Article see: Network Analyst ArcGIS Pro Tutorial Data

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
12-31-2021 01:40 PM

BLOG
Hi @DeanHowell1, Are you referring to the testing of a single fused image cache? If so, a possible solution would be to take generated bounding boxes of interest and convert them on-the-fly to the appropriate set of tiles. This conversion process would be based on the GetLayerTile logic (there might be some older resources out on the internet which still list these steps in various coding languages). Of course, the newer developer APIs from Esri do this for you with a simple function call, but in Apache JMeter's case, this logic would need to be added to the test (e.g. using something like Groovy). I would recommend this strategy over converting a HAR file into a load test. Although technically valid, with the HAR file approach the requests are quickly cached and the load tests then typically show high network utilization. But using the first approach (conversion of extents to underlying tiles), the requests are more realistic as the test can spatially cover a lot more area. This topic was also recently discussed as a potential future Community Article. If one is put together I will definitely send you the link. Thanks again for the feedback. Aaron.
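For illustration, the extent-to-tiles conversion described above might look something like the following. This is a generic sketch (in Python rather than the Groovy a JMeter JSR223 element would use) assuming a simple tiling scheme with a top-left origin and a known resolution per level of detail; a real cache's tiling scheme values come from the service's metadata:

```python
import math

def extent_to_tiles(xmin, ymin, xmax, ymax, resolution,
                    origin_x, origin_y, tile_size=256):
    """Convert a map extent to the set of (row, col) cache tiles covering it.

    resolution: map units per pixel at the level of detail being requested.
    origin_x/origin_y: top-left origin of the tiling scheme.
    """
    span = tile_size * resolution            # tile width/height in map units
    col_min = math.floor((xmin - origin_x) / span)
    col_max = math.floor((xmax - origin_x) / span)
    row_min = math.floor((origin_y - ymax) / span)  # rows count downward from origin
    row_max = math.floor((origin_y - ymin) / span)
    return [(r, c)
            for r in range(row_min, row_max + 1)
            for c in range(col_min, col_max + 1)]
```

Each (row, col) pair then becomes one tile request URL, so the same generated bounding boxes used for export map tests can drive realistic cache traffic. Note how the number of tiles varies with where the extent lands on the grid, which is exactly why this beats replaying a fixed HAR capture.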
11-15-2021 08:50 PM

BLOG
Hi @DeanHowell1, The Performance Engineering team recently released a Community Article on Creating a Load Test in Apache JMeter Against a Hosted Feature Layer Service! The discussion covers how to generate (feature service) test data and how to plug this data (query extents) into an Apache JMeter Test Plan for load testing. We took a programmatic approach to tackling this challenge; however, this part of the test logic for the most part remains hidden from the tester. Happy testing! Aaron
10-26-2021 02:11 PM

BLOG
Why Test a Hosted Feature Layer Service?

Previous Community Articles on performance testing with Apache JMeter focused on exercising Map Services through the export function. However, Hosted (feature) layers are also a popular capability of ArcGIS Enterprise and are widely used in deployments. Additionally, querying these layers is based on a "repeated" grid design which can help provide a higher degree of scalability over other visualization technologies. Couple this with client-side rendering of the data returned and it's a win-win. Given that hosted feature services are a proven and favorite service technology, it makes sense to test feature queries under load to observe their scalability first hand.

Hosted Feature Layer Service Testing Challenges

Compared to testing the export map function, testing Hosted Feature Layer Service queries is a challenge as the requests are more complex to achieve programmatically. A navigational "pan" or "zoom" in the web browser produces a handful of different queries, each with their own geometry. To repeat this behavior, the constructed load test will not have just one request to issue but many, and a varying amount. Couple this with the fact that each query request in the transaction will have a unique geometry and a changing maxAllowableOffset (depending on the map scale), and it's a lot of moving parts to keep track of.

How to Test a Hosted Feature Service?

The USGS Motor Vehicle Use Roads Dataset

The understanding of the process in this Article is most effective if the steps can be reproduced. But this repeatability requires access to the same set of data. The spatial size of the data source also needs to be large enough to generate decent test data but not so big that it is cumbersome to download. Enter the Motor Vehicle Use Map: Roads feature layer dataset on hub.arcgis.com. The 179K polyline records of USGS Roads data in WGS 1984 Web Mercator (Auxiliary Sphere) equate to about 200MB when zipped.
It is provided through the Creative Commons (CC0) license.

View of Roads data from ArcGIS Pro:

Large scale view with labeling enabled:

This data will be published from ArcGIS Pro to a hosted feature service in ArcGIS Enterprise or loaded directly through Portal for ArcGIS. To create a service from this data, see Publish hosted feature layers in ArcGIS Enterprise

Test Data Generation

This test will require some good test data to use within the JMeter test. To tackle such a task, it is highly recommended to use the excellent Load Testing Tools. Version 1.2.2 adds new capabilities like the "Generate Query Extents" tool, which will be a great help for generating feature service test data. This data utilizes the grid-based design, which is what we want. With the grid-based approach, envelopes for the desired area are created behind the scenes. Then, these envelopes are converted to the appropriate 512x512 query extents. The number of queries (for each initial envelope) will vary based on where it lands on the grid...this mimics the service behavior in a web browser.

Making the Tools Available from ArcGIS Pro

Once the load-testing-tools project has been downloaded to your machine, place the unzipped folder in a directory that is accessible or made accessible by ArcGIS Pro. If you have a previous version of the Load Testing Tools already installed, this updated version can be installed alongside it (although with a different folder name) or completely replace the existing folder. For example:

Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS
Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory:

The "Generate Query Extents" tool can work off the hosted feature service, a local copy of the data or the data within an enterprise geodatabase.
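The envelope-to-query-extent conversion described under Test Data Generation can be sketched as follows. This is an illustrative Python version (not the tool's actual code) that assumes a grid origin of (0, 0) for simplicity; the real tool works against the Web Mercator tiling origin:

```python
import math

def query_extents(bbox, resolution, grid_px=512):
    """Snap a bounding box to the 512x512 query-tile grid it overlaps.

    bbox: (xmin, ymin, xmax, ymax) in map units; resolution: map units/pixel.
    Returns one grid-aligned envelope per feature-service query.
    """
    span = grid_px * resolution              # tile size in map units
    xmin, ymin, xmax, ymax = bbox
    tiles = []
    for col in range(math.floor(xmin / span), math.floor(xmax / span) + 1):
        for row in range(math.floor(ymin / span), math.floor(ymax / span) + 1):
            tiles.append((col * span, row * span,
                          (col + 1) * span, (row + 1) * span))
    return tiles
```

Note that two same-sized screens can need a different number of queries depending on where they land on the grid, which is exactly the varying-request-count behavior the Article describes.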
Note: The tool should generate query extents from any data but it does require the Projected Coordinate System to be WGS 1984 Web Mercator Auxiliary Sphere (WKID: 3857).

Select an Area of Interest

Select an area of interest from the map in which to generate test data. In this example, the Roads data is being viewed from the Northwestern United States (near the state borders of Idaho and Montana). The selected map scale is 1:1,000,000.

Run the Generate Query Extents Tool

Running the Generate Query Extents tool should present inputs similar to the following:

Adjust the Inputs for the Generate Query Extents Tool

The default inputs were adjusted to reflect the following:

Several smaller and larger scale levels were removed
The remaining scale levels are 12, 13, and 14, which correspond to the map scales 144448, 72224, and 36112, respectively
The Number of Records for these scales was increased
Scale Level 14 may be omitted depending on the release of Load Testing Tools (if absent, please add this Scale Level manually)
The File Output Location should be something similar to: C:\Users\username\Documents\ArcGIS\Projects\Catalog2\query_extents.csv
Click Run to execute the tool

Note: The duration of time to generate the test data is based on several factors such as the number of different Scale Levels, the Number of Records (per each Scale Level) and the current map scale of the Project.

Note: Generating test data using other datasets may dictate the need to use different Scale Levels based on level of detail and feature density.

Validating the Generated Test Data

It is a good practice to visually verify generated test data. This lets the tester know what the load test will be spatially requesting from the feature service.
Once the tool has completed successfully it will generate 3 primary sets of data that are of interest:

Bounding box feature classes
Contains randomly generated areas of interest
One feature class for each requested Scale Level
Query Extent feature classes
Contains a (512x512) tile grid that each feature query will be based on
One feature class for each requested Scale Level
Query Extent CSV files
Contains the generated test data
Each line is composed of the dynamic components of a feature service request
One file for each requested Scale Level

From the Catalog panel, load the bbox_36112 feature class onto the current map in ArcGIS Pro
This output is very similar to the data from the Generate Bounding Boxes tool
In this example, the randomly generated boxes are in pink
These areas represent the screen resolution of a user requesting data from the feature service

Now, from the Catalog panel, load the query_extents_36112 feature class onto the current map but behind (underneath) the bbox_36112 data
In this example, the query tile grid boxes are in green
These tiles correspond to an area on the map for which the bboxes are asking for data

Zooming in to the map can yield a better understanding of the relationship between these two datasets
As seen in the map below, some bboxes are slightly offset from each other but still share a common query tile from the grid beneath them
The coordinates of these query tiles (e.g. from the query_extent feature class) are what will go into the CSV files and ultimately the JMeter load test

Looking closer at the bboxes reveals details on their respective query composition
For example, some bboxes might require 12 "underlying" tiles to fulfill, others 15 or 20
As seen in the map below, the bbox highlighted in black requires 12 specific query tiles colored in red

Note: The tile grid design of the feature service is one of its key strengths as it lends itself to repeatability.
This repeatability can be leveraged with caching in a deployment for improved scalability. This is not possible with export map.

Examining the generated CSV files will reveal the end results of this transformation
Viewing the query_extents_36112.csv file in a text editor should show something similar to the following
Depending on the release of Load Testing Tools, the CSV files might be sorted by the operationid column
Depending on the release of Load Testing Tools, the lines may or may not be grouped by the operationid column
The understanding of the operationid, in this case, is an important testing concept as each operation represents a navigation action (e.g. a pan or zoom)
From JMeter's point of view, an operation is the same as a transaction
All the lines with a matching operationid will become feature service query request geometries under the same transaction controller

The Hosted Feature Service Query Test Plan

To download the Apache JMeter Test Plan used in this Article see: roads_hfs1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

Adjust the User Defined Variables to fit your environment
The 3 CSV files generated from the tool are referenced through the JMeter variables DataFile_A, DataFile_B, and DataFile_C by just the file name (the file system path is not included here)

Components of the Test Plan

Data Reader Logic

The roads_hfs test is a bit of a different beast than other Apache JMeter test examples used in previous Articles. The primary difference is that while it's still a data-driven test (e.g. CSV files are used for request input), it is not using the typical "CSV Data Set Config" Config Element object to read in the data. Instead, this logic is performed through JSR223 Samplers that execute Groovy code. The reason Groovy is utilized is due to the nature of interacting with a feature service mentioned earlier.
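The read-then-group-by-operationid logic that the JSR223 Samplers perform can be sketched as follows. This illustration is in Python rather than the Groovy the actual samplers use, and the CSV column names are illustrative:

```python
import csv
import random
from collections import defaultdict

def read_operations(path):
    """Read the generated CSV once and group its rows by operationid.

    Mirrors the data reader samplers, which run inside a Once Only
    Controller so the file is read a single time per test thread.
    """
    operations = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            operations[row["operationid"]].append(row)
    return dict(operations)

def run_one_iteration(operations, rng=random):
    """Pick a random operation (a simulated pan/zoom) and yield the request
    parameters for each of its query tiles.

    The number of requests varies per operation (12 for one navigation
    action, 15 or 20 for another), which is why a fixed CSV Data Set
    Config cannot drive this test.
    """
    op_id = rng.choice(sorted(operations))
    for row in operations[op_id]:
        yield row
```

Each yielded row corresponds to one feature service query issued under the operation's Transaction Controller.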
Recall that some transactions will have 12 requests and others may have 15 or 20 (depending on where the overall area of interest lands on the tile grid). This difference in the number of requests requires the test to use a more flexible mechanism for reading and using the data from the CSV files since this will not be constant.

There is one JSR223 Sampler for each CSV file (e.g. each map scale)
All JSR223 Samplers for reading data are put into a Once Only Controller to minimize overhead
The CSV file read will only be carried out once, at the beginning of each test thread
Shown below is "JSR223 Sample A1" which will be reading in the file query_extents_72224.csv
Experience coding in Groovy is not required for running this test; in fact, these JSR223 Samplers do not need to be edited to run the test, but it is helpful to understand what logic is responsible for reading in the CSV data

Operation ID Selection Logic

Once the CSV data has been read in, the test will need to select an operation id for each scale with every test iteration. To accomplish this, a second set of JSR223 Samplers is used to pick from each list of operations.

There is one JSR223 Sampler for each map scale that randomly selects an operation id
All JSR223 Samplers for generating this operation id are put into a Transaction Controller called Operation Generator
This is executed with every test thread iteration
These JSR223 Samplers do not need to be edited to run the test

Note: JSR223 Samplers using Groovy are generally executed quickly and add very little overhead to the test

Operation Loop and Parameter Population

With an operation id chosen, the focus becomes the loop logic where the test will look up the number of feature service queries that make up the transaction. From there, it will use a third set of JSR223 Samplers to populate the request parameters associated with the previously selected operation id with each iteration in the loop.
There is one JSR223 Sampler for each map scale that populates the associated JMeter variables based on the operation id and iteration value
These items then become key/value pairs which are picked up by the HTTP Request
The iteration values are tracked by a Counter Config Element
These JSR223 Samplers do not need to be edited to run the test
The Loop Controller, Counter, JSR223 Sampler and HTTP Request objects are all placed inside a corresponding Transaction Controller to logically separate the items for each map scale

HTTP Request

Essentially, all of the test logic above exists just for this component of the test. Here, the JMeter HTTP Request object can read in the JMeter variables for specific key/value parameters that have been populated by the JSR223 Sampler immediately before it. Since this approach is highly programmatic, there is only one HTTP Request per map scale! Such a design favors maintainability.

Note: This test approach would also work for traditional, non-hosted feature layer services. However, these feature services do not have the same request parameter optimizations that hosted services do, such as maxAllowableOffset and quantizationParameters. These options would just need to be deleted from the HTTP Request.

The Thread Group Configuration

The JMeter Test Plan is currently configured for a relatively short test of 10 minutes. Generally speaking, hosted feature services perform well, so a lot of throughput will be taking place within each step (1 minute per step) as well as from the test overall. Different environments and data may require an alternative setting to achieve the desired test results; adjust as needed.

Validating the Test Plan

As a best practice, it is always a good idea to validate the results coming back before executing the actual load test.
Use the View Results Tree listener to assist with the validation
The Test Plan includes a View Results Tree Listener but it is disabled by default
Enable it to view the results
From the GUI, Start the test

Transactions

Select one of the "HFS" Transactions
The results should resemble the following:
In this example, the transactions listed above: HFS (mapscale: 72224), HFS (mapscale: 36112), and HFS (mapscale: 144448) all completed successfully
The Sampler result lists some more details
Although each Transaction sent one HTTP request per feature query extent, the JMeter test is counting the Sampler as part of the operation
The JSR223 Samplers add very little overhead to the Transaction although they do double the number of samples; this is just a detail to be aware of
Take a quick glance at the Size in bytes
In this example, the Transaction Size was almost 65KB, which suggests some data was being returned and the responses were not "empty"

Requests

Expand one of the "HFS" Transactions
Select one of the https requests
The results should resemble the following:
In this example, the selected request completed successfully
Take a quick glance at the Size in bytes
In this example, the Request Size was about 5KB, which suggests some data was being returned and the responses were not "empty" (e.g. 1500 bytes)
The ContentType is also important
Per the parameters in the Test Plan, the requested format is pbf which returns application/x-protobuf
Requesting protocol buffers is a best practice as it optimizes the payload
The resulting format is binary and cannot easily be viewed without additional help that is not covered in this Article

Note: Feature services (including hosted feature services) are rendered on the client (not on the server like export map). Although Apache JMeter is a (test) client, it does not render the server responses through JavaScript like a web browser.

Test Execution

The load test should be run in the same manner as a typical JMeter Test Plan.
See the runMe.bat script included with the roads_hfs1.zip project for an example of how to run a test as recommended by the Apache JMeter team.

The runMe.bat script contains a jmeterbin variable that will need to be set to the appropriate value for your environment

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

JMeter Report

Throughput Curves

The auto-generated JMeter Report can provide insight into the throughput of the HFS transactions under load
Non-HFS Transactions have been manually filtered out
In this case, the peak throughput for the HFS operations was about 16.5 transactions/second
Since there were 3 HFS transactions, this equates to almost 50 transactions/second (or 178,200 transactions/hour)

Note: Each of the HFS Transactions will naturally have a similar throughput as their respective execution in the test was weighted the same

Performance Curves

The auto-generated JMeter Report can provide insight into the performance of the HFS transactions under load
Non-HFS Transactions have been manually filtered out
In this case, HFS transactions for all scales were sub-second (under 1 second)
Even toward the end of the test, under the heaviest load, the average response time was under 225 ms or 0.225 seconds

Final Thoughts

There are other ways to test hosted feature layer service queries, such as through captured traffic from a web browser while interacting with the endpoint or application. This would produce a list of the service URLs which could be translated into a test.
However, a programmatic approach such as the one listed in this Article offers a strategy for testing a wide spatial area of the service, covering many more extents than can practically be done with the captured traffic approach. The programmatic approach is also easier to maintain as the size of the Test Plan is much smaller. To put this into perspective, the JMeter test in this Article only contained 3 HTTP Requests (one for each map scale).

To download the Apache JMeter Test Plan used in this Article see: roads_hfs1.zip
To download the USGS Roads dataset used in this Article see: Motor Vehicle Use Map: Roads (Feature Layer)

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
10-26-2021 01:21 PM

BLOG
Hello @RDSpire, I am very happy to hear that you found these articles useful. Yes...our team definitely has more planned, including one that covers our take on interpreting load test results. Your suggested topic, "How to test a web application published from ArcGIS Enterprise," also sounds like a good subject to socialize on Community. Thank you for your feedback! Aaron
08-26-2021 10:15 AM

BLOG
What is Test Data?

Simply put, test data is used to drive a performance or load test by requesting different areas of interest from an ArcGIS Enterprise map service. The spatial part of the data usually takes the form of points or bounding boxes (bboxes) and is typically stored in a plain text file or, in some cases, a database.

Previous Community Articles on Load Testing ArcGIS Enterprise with Apache JMeter focused on strategies for building test logic and running the test. The sample projects provided in these blogs included test data in the form of plain text comma separated value (CSV) files that plugged right into the requests. These CSV files contained items like bounding boxes and a corresponding spatial reference to provide the HTTP requests in the test with parameter information. With each iteration of the test, the next line of data is read in and populated into the request.

For demonstration purposes, this test data worked well for requesting different map scales against services like SampleWorldCities and NaturalEarth. However, those sample test datasets are limited for use with other map services as the pre-generated bounding boxes were created to only ask for areas of interest around the world at a high level. If your organization is working with data at the state, county or city level, you'll want test data that focuses on those areas to maximize load test value. In other words, you want test data at a larger map scale that covers a specific area of interest. Generating such data that is specific to your services or your spatial data becomes a critical piece of the process for making a good load test.

While composing a few geometries by hand for a simple test is certainly doable, the request signatures are quickly repeated, resulting in scalability patterns that are skewed and not realistic. A better test is one that utilizes a large amount of random geometries to push the map service and hardware resources more effectively.
Tools for Creating Custom Load Data

Thankfully, there is a set of recently released testing tools for ArcGIS Pro on GitHub that makes the task of data generation extraordinarily easy. The utility is called Load Testing Tools and is available at: https://www.arcgis.com/home/item.html?id=b06ef175665a45d68f5796f321b56e61

The examples in this Article were based on version 1.1 of the toolset

One of my favorite tools in the group is "Generate Bounding Boxes" which can quickly generate bounding boxes by either the map's current extent or a selected polygon. Having the ability to pass in a specific polygon is a very powerful feature as the geometries that are created can be filtered to just your area of interest (e.g. Country, State, County or City). The generated data can be validated visually (via separate feature classes that are created) and plugged right into a JMeter Test Plan (via CSV files that are also created). Again, very easy...very powerful.

Creating Custom Test Data

Making the Tools Available from ArcGIS Pro

Once the load-testing-tools project has been downloaded to your machine, place the folder in a directory that is accessible or made accessible by ArcGIS Pro. For example:

Place the load-testing-tools folder in C:\Users\[username]\Documents\ArcGIS
Use the Add Folder Connection from Catalog in ArcGIS Pro to list the contents of this directory:

Using a Polygon to Outline the Area of Interest

In this ArcGIS Pro project, a polygon feature class (U.S. State of Indiana in pink) has been added to the Map to define a boundary around the area where the bounding boxes for the requests in the test will be generated.
The Projected Coordinate System of the Indiana State feature class is: WGS 1984 Web Mercator (auxiliary sphere)
Its WKID is: 3857
For a point of reference, the default Basemap (World Topographic Map) is left in the map
The Projected Coordinate System of the Basemap is also: WGS 1984 Web Mercator (auxiliary sphere)

Generate Bounding Boxes Tool Inputs

You can launch the Generate Bounding Boxes tool by navigating to the load-testing-tools folder from the ArcGIS Pro Catalog screen. Expand Load Testing Tools.tbx and double-click on Generate Bounding Boxes. The Geoprocessing screen should populate and look similar to the following:

One of the convenient features of the Generate Bounding Boxes tool is that it is technically ready to go just by clicking Run! With the default options, it will randomly generate bounding boxes using the current extent of the ArcGIS Pro map.

Note: The default map scales of the Generate Bounding Boxes tool are similar to those of ArcGIS Online but, for brevity, only every other scale is listed. If additional map scales are needed, they can be manually added from within the tool.

While this makes the data generation really easy, in this example, we are interested in generating boxes inside a particular polygon (State of Indiana). We also want to be very specific about the map scales our test will be using, so we'll want to remove some scales and add others from the tool's interface.
From the Generate Bounding Boxes tool:

Click the red X in front of 73957191, 18489298, 4622324, 1155581, 288895, 282, 70 to remove these map scales
From the empty text box under the Scale column, add 36112 and use 100 for the Number of Records column
From the empty text box under the Scale column, add 9028 and use 1000 for the Number of Records column
From the empty text box under the Scale column, add 2257 and use 3000 for the Number of Records column
Increase the Number of Records for 4514 to 1000 records
Increase the Number of Records for 1128 to 3000 records
Click the drop down under Polygon Layer and select the feature class of interest within the Map, in this case, Indiana
Expand Output Options
Note the location of the bounding boxes csv file
Separate csv files per map scale will also be created at this location
Select the "Output Separate Feature Class Per Scale" option

After the customization, the Generate Bounding Boxes tool input should look like the following ([username] would reflect your Windows username):

Click Run
Tool execution may take a few moments
The Table of Contents screen will start to populate by adding feature classes to the Map (one per scale)

Visualizing the Generated Data from the Individual Feature Classes

Once complete, the output within ArcGIS Pro should look similar to the following:

The individual feature classes make quality checking a breeze as it's easy to see the areas of interest that the test will be requesting from the generated data

Note: Some of the generated bounding boxes may have portions of their geometry that fall outside the polygon of interest. This is okay.
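Under the hood, each record the tool generates is essentially a screen-sized box centered on a random point, with dimensions derived from the requested map scale. A rough sketch of that math, assuming a 96 DPI display and a 1920x1080 screen in a meter-based coordinate system like WKID 3857 (the tool's actual defaults and implementation may differ):

```python
import random

INCHES_PER_METER = 39.3701

def bbox_for_scale(center_x, center_y, scale, width_px=1920, height_px=1080, dpi=96):
    """Bounding box (xmin, ymin, xmax, ymax) in meters that a screen of
    width_px x height_px covers at the given map scale."""
    m_per_px = scale / (dpi * INCHES_PER_METER)  # ground meters per screen pixel
    half_w = width_px * m_per_px / 2
    half_h = height_px * m_per_px / 2
    return (center_x - half_w, center_y - half_h,
            center_x + half_w, center_y + half_h)

def random_bboxes(extent, scale, count, rng=random):
    """Generate `count` screen-sized boxes with random centers inside `extent`,
    which is the essence of one Scale/Number-of-Records row in the tool."""
    xmin, ymin, xmax, ymax = extent
    return [bbox_for_scale(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax), scale)
            for _ in range(count)]
```

This also explains why more records are wanted at larger scales (e.g. 1:1,128): each box covers far less ground, so more of them are needed to cover the area of interest.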
Thanks to the visualization of the data, it is also easy to see why fewer bounding boxes were created for smaller map scales like 1:72,224 and 1:36,112. Similarly, this is why more bounding boxes were created for larger map scales like 1:2,257 and 1:1,128.

Note: Depending on your data and its density at the larger scales, it could be advantageous to generate more than 3000 bounding boxes (per scale) in order to "cover more ground". Keep in mind that some load test frameworks may read CSV data into memory, and creating extremely large datasets may require more memory from the test client.

Visualizing the Generated Data from the Individual CSV Files

Using the file system explorer, navigate to the ArcGIS Pro project used for generating the data: C:\Users\[username]\Documents\ArcGIS\Projects\MyProject1

The folder contents should look similar to the following:

Opening the contents of bounding_boxes_2257.csv should resemble the following:

This data will work with most load testing tools that allow the parameterization of HTTP requests from CSV files.

Note: The feature class to use as a Polygon Layer for spatial filtering can utilize a Projected Coordinate System other than WGS 1984 Web Mercator (auxiliary sphere). However, the generated CSV data will still be projected into bounding boxes that have a WKID of 3857.

Using the Generated Data in an Apache JMeter Test Plan

With a procedure for generating spatially customized data, you can take the CSV files and import them into an Apache JMeter Test Plan to use in a load test. The previous testing Articles:

- Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced)
- Using Public Domain Data to Benchmark an ArcGIS Enterprise Map Service (Intermediate)

provided Apache JMeter sample tests that would make good templates to use with your new data and against your map services.
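As a quick sanity check before wiring the files into a test, the generated CSVs can be validated with a few lines of Python. The sample rows below are fabricated to mimic the column layout described above; the actual values in your files will differ:

```python
import csv
import io

# A tiny in-memory sample mimicking the generated CSV layout.
# Column names follow the article; the two rows are illustrative only.
sample = """bbox,width,height,mapUnits,sr,scale
"-9650000.0,4800000.0,-9649388.5,4800458.6",1024,768,esriMeters,3857,2257
"-9700000.0,4900000.0,-9699388.5,4900458.6",1024,768,esriMeters,3857,2257
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    xmin, ymin, xmax, ymax = map(float, row["bbox"].split(","))
    assert xmin < xmax, "xmin should be west of xmax"
    assert ymin < ymax, "ymin should be south of ymax"
    assert row["sr"] == "3857", "generated data is always WKID 3857"
print(f"validated {len(rows)} rows")
```

Pointing the same loop at your real bounding_boxes_*.csv files (via `open(path)`) makes for a cheap pre-flight check before a long test run.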
CSV Data Set Config

Using the CSV Data Set Config element in JMeter, the newly generated test data can be referenced from its path on the file system.

- The Filename path value refers to the location of the CSV file on disk
  - C:/JMeter Tests/naturalearth1/datasets/bounding_boxes_288895.csv
  - Sample test projects from previous Articles used variables for the path: ${ProjectFolder}/datasets/bounding_boxes_288895.csv
- The Variable Names field denotes the column headers in the CSV file
  - bbox,width,height,mapUnits,sr,scale would then become bbox_288895,width_288895,height_288895,mapUnits_288895,sr_288895,scale_288895, as the test may be using other map scales where just "bbox" would be ambiguous

The HTTP Request elements pointing to your map service can then be adjusted to utilize variables such as ${bbox_288895} that reference your generated test data.

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
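The per-scale renaming described above can also be mimicked outside of JMeter. If you prefer to bake the suffixed names into each CSV header itself, rather than typing them into every Variable Names field, a small helper (hypothetical, for illustration) could do it:

```python
def suffix_headers(csv_text, scale):
    """Rewrite a CSV header row so every column name carries the map
    scale (e.g. bbox -> bbox_288895), mirroring the per-scale variable
    naming used in the JMeter CSV Data Set Config elements."""
    lines = csv_text.splitlines()
    # Only the header line is touched; data rows (which may contain
    # quoted commas inside the bbox field) pass through unchanged.
    header = ",".join(f"{name}_{scale}" for name in lines[0].split(","))
    return "\n".join([header] + lines[1:])

original = ('bbox,width,height,mapUnits,sr,scale\n'
            '"-98,35,-97,36",1024,768,esriMeters,3857,288895')
renamed = suffix_headers(original, 288895)
print(renamed.splitlines()[0])
# bbox_288895,width_288895,height_288895,mapUnits_288895,sr_288895,scale_288895
```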
07-29-2021
11:17 AM
Updates to the following sections: Testing Framework, Bottleneck, Interactive Response Time Law. Additions of the following sections: Testing Framework Architecture.
07-18-2021
04:07 PM
Request

Also known as a sampler.

An HTTP request is the "smallest" unit of work you can define a test to perform. Generally, when testing ArcGIS Enterprise, it can be a URL for a resource like a map service, feature service or route solve, but it can also be a call for a static object like a *.css or *.js file. The protocol can be HTTP (plain text) or HTTPS (secured) and the method can be one of many, although GET, POST, and HEAD are typically the most common.

A dynamic map service request would resemble the following form:

https://yourwebadaptor.domain.com/server/rest/services/NaturalEarth/MapServer/export?bbox=-130.9656801129776%2C18.608785315857112%2C-57.52504741730332%2C52.34557596043248&bboxSR=4326&imageSR=4326&size=1920%2C882&dpi=96&format=png32&transparent=true&layers=show%3A15%2C16%2C17%2C19%2C20%2C21%2C22%2C23%2C24%2C25%2C26%2C27%2C28%2C29%2C30%2C31%2C32%2C33%2C34%2C35&f=image

The same URL as an Apache JMeter HTTP request:

What a static request would look like:

https://yourwebadaptor.domain.com/portal/home/10.9.0/js/jsapi/dojo/dojo.js

Apache JMeter also makes a distinction between a request and a sampler, though both define an action to perform. A sampler could also be the execution of a process at the operating system level that performs some type of action, like running a geoprocessing tool to create a file geodatabase or creating a new SDE version in an enterprise geodatabase. Every test has to have at least one request or sampler.

Another type of sampler is a web socket. While it is like an HTTP request in that it makes a call over the "web" and can be secured, it uses a different protocol for communicating with the remote server as well as different parameters to specify its options.

Transaction

A transaction is a logical grouping of one or more HTTP requests. The requests can be dynamic and/or static.
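A transaction's dynamic requests are typically parameterized URLs like the export example shown earlier. For reference, such a query string can be assembled programmatically, which is handy when sanity-checking request composition outside of JMeter (the host name below is the same placeholder used above):

```python
from urllib.parse import urlencode

# Placeholder host for illustration -- substitute your own Web Adaptor URL
base = ("https://yourwebadaptor.domain.com"
        "/server/rest/services/NaturalEarth/MapServer/export")

params = {
    "bbox": "-130.9656801129776,18.608785315857112,"
            "-57.52504741730332,52.34557596043248",
    "bboxSR": 4326,
    "imageSR": 4326,
    "size": "1920,882",
    "dpi": 96,
    "format": "png32",
    "transparent": "true",
    "f": "image",
}

# urlencode percent-encodes the commas (%2C), matching the example above
url = f"{base}?{urlencode(params)}"
print(url)
```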
Together, these requests typically make up one user operation, for example:

- The loading of a web app
- A navigation action like a pan or zoom
- A search function
- Creation of a new SDE Version within an Enterprise Geodatabase

It is not a technical requirement to use transactions in a test, but doing so can greatly enhance the analysis, as individual operations (e.g. transactions) can then be isolated to show their respective performance behaviors throughout the run. This can be very informative. Understanding that only requests for map scale 1:72,224 had performance problems is very useful from a tuning perspective, as you would know exactly what areas of the map document or project need to be adjusted...transactions can help you accomplish this.

Apache JMeter Transaction containing three requests from one operation:

Test

Also known as a test plan or test project.

The term "test" is rather generic and is often used as both a noun (I created a test to call the resource) and a verb (I am going to test the service). Transactions and requests are usually defined in a test. The test will have additional options to configure, such as: how long the test will run, where the results go, and whether metrics on the remote servers should be collected. Different frameworks use slightly different terminology for describing a test. In Apache JMeter's case, a test or test project is called a Test Plan and is designated with a *.jmx file extension.

Step Load

Also known as load.

The step load is a characteristic that defines how long and how many concurrent test threads to apply during the test through even, incrementing pressure (e.g. similar to a staircase). Configuring the test for a step load is helpful for understanding how a map service performs or scales, or how deployment resources behave as more and more requests are thrown at it. The defined pressure can also decrease (toward the end of the test) but does not have to.
Apache JMeter Thread Group (bzm - Concurrency) specifying and visualizing a specific step load:

Constant Load

A constant load also defines how long and how many test threads to apply, but is usually set for a steady rate over long periods of time. Instead of focusing on performance and scalability, this configuration is typically for understanding durability and stability.

Apache JMeter Thread Group (bzm - Concurrency) specifying and visualizing a specific constant load:

Test Threads

Also known as threads.

This is the mechanism responsible for applying load by taking the defined work to be done in the test, such as the transactions and/or requests, and executing it repeatedly. Test threads typically behave in a serial fashion where each thread starts by reading the first request defined in the test, sends it to the server, then awaits its response. The next request in the test will not be issued until a response comes back from the server or a timeout has elapsed. Once one of these conditions is met, the thread moves to the next request. Most tests are configured to have each test thread repeat this process continuously for the duration of the run.

Various technologies often refer to test threads as virtual users, but this can be misleading. The test threads of a test are just the means (pressure) to an end (delivered throughput). In other words, the execution of a test that is configured with a step load that reaches 100 test threads does not mean the environment is supporting 100 concurrent, virtual users. In this case, determining users would be calculated off the test's throughput; transactions/sec, for example.

Apache JMeter Thread Group (bzm - Concurrency) defining the step load via (test) threads:

Users

Also known as virtual users.

The number of supported users is one of the most requested items to determine from a load test and usually takes the form of: How many users will this specific service or application support?
Will a particular service or application support at least X users?

The calculation of users is closely tied to think time as well as measured test artifacts such as throughput and response time. Using Little's Law with these inputs can provide a theoretical estimate of the number of users an environment can support.

Think Time

Also known as workflow pacing.

Think time is a duration (defined in seconds or milliseconds) that is added into a test to simulate the delays of human behavior that would occur from a person naturally interacting with the map service or web application. Think time delays can be added to transactions (e.g. an operation) or requests, or even to the test itself (which is then referred to as workflow pacing). How they are added can vary based on the testing framework involved. In Apache JMeter's case, there are several different timers available that can be added to the test to simulate various types of delays.

Key Performance Indicators (KPIs)

KPIs are test metrics that assist with the analysis of a load test. Some of the most popular ones are associated with measuring the response time and throughput of the test. However, they also extend to items that count the number of failed requests, track the average content length (per request) or collect information on hardware utilization (such as CPU, memory, network and disk). Although the ability to capture hardware utilization often requires additional test configuration and permissions within the environment, this information is one of the most important artifacts captured from a load test.

Note: Captured hardware utilization is one of the most important artifacts captured from a load test.

Response Time

Response time is a common metric that is used to measure the performance of a request, transaction or test. Simply put, it provides an understanding of how fast an operation is behaving. The value is typically presented in seconds or milliseconds.
Faster performance means lower response times, which translates to a more favorable user experience. Response time can be plotted over the duration of the test to understand how performance scaled, or listed together with throughput for a particular point in the test (e.g. where throughput peaked).

Note: Response times are one of the most important artifacts captured from a load test.

Ideally, the performance of the item being tested will take on the following curve, where the response times climb more quickly around the point of peak throughput. In the following example, the average request response time at peak throughput was about 0.4 seconds.

Throughput

Throughput is a common metric that is used to measure the scalability of a map service, web application or hardware infrastructure. Essentially, it provides an understanding of the rate at which an operation can be conducted over a duration of time. The value can usually be captured as requests/sec, transactions/sec (e.g. operations/sec) or tests/sec, though it is often expressed over the duration of an hour (the rate in seconds multiplied by 3600). Higher scalability means more throughput, which translates to support for more users. Some test analysis will focus on the average throughput of all transactions for a test, while other analysis might examine the average throughput for each individual operation.

Note: Throughput is one of the most important artifacts captured from a load test.

Ideally, the throughput of the item being tested will resemble the following curve, where it reaches a peak then plateaus. When throughput peaks and/or plateaus, it suggests that the test has encountered some form of a bottleneck. In the following example, the average request throughput at peak was about 24 requests/second (or 86,400 requests/hour).

Bottleneck

A bottleneck is a condition of a deployment where one of its components or tiers is limiting the rate at which it can respond to incoming requests.
A bottleneck can take the form of:

Hardware examples:
- All of the CPU cores of ArcGIS Server are fully utilized
- Available memory is exhausted
- Storage disk I/O of the database server is fully utilized
- Network card is saturated due to send or receive traffic

Software examples:
- The database was configured to only allow 25 concurrent connections despite having ample hardware resources available
- Throughput for consuming a map service plateaus but ArcGIS Server CPU utilization does not increase above 25%

A bottleneck will always exist in a deployment, and determining which component restricts throughput first is part of the analysis. It will often take a load test to expose where the first bottleneck occurs, since it may only be observed under a large amount of pressure. While server resources and settings are typically the focus of bottleneck analysis, test client resources (CPU, memory, network, disk and, in some cases, the testing license) can also be a factor.

Reaching a bottleneck is not necessarily a problem; it just lets you know where the first weakness or limitation is within the system. Sometimes a bottleneck is considered a "good thing". For example, when running a large ArcGIS caching process, it is desired that the CPU becomes the first bottleneck because it is doing the work to create the map tiles. If the CPU can only reach 50% because of another bottleneck (e.g., disk I/O), it will take twice as long for the job to finish relative to 100% CPU utilization.

Note: A bottleneck always exists in a deployment.

Test Type

Also known as a performance test, load test, stress test, endurance test, benchmark test.

Many organizations use different categories to classify the testing being carried out. A performance test is typically utilized to troubleshoot issues with a service or application when it is behaving slowly or producing longer than expected response times.
Such tests do not need to involve a step load and could be conveniently executed as a single user directly interacting from a web browser with the endpoint of interest.

A load test is often used to describe a step load test with the goal of meeting a particular throughput and response time target. For example, X transactions/sec with a response time under Y seconds and no failures. This might result in the exhaustion of one of the server hardware resources, but that is usually not the goal. A load test can also be referred to as a scalability test.

A stress test is a similar test but is frequently focused on reaching a pressure that is a multiple of the load test's goal. In other words, if the load test was trying to reach X transactions/sec, the stress test might try to reach X * 5 transactions/sec without encountering a significant number of failures.

An endurance test has the distinction of trying to break components of the system. Its applied load can be a multiple of the stress test's, where the goal is to encounter significant errors and observe the throughput and response time when they occur. An endurance test can also be referred to as a durability test, where the applied load is constant for a very long duration and hardware utilization and reclamation patterns are observed.

Test Plan

In the general sense, a test plan is a document, table or list which defines the specific tests that will be executed as well as their respective goals. These goals are the reason and purpose of each test. The analysis of the results (by hand or from generated test reports) should help you determine whether or not the goals of each test were achieved.

Testing Framework

The testing framework is the tool or technology used, in the form of libraries, APIs and a graphical user interface (GUI), for assembling requests and the test, as well as defining the load to be applied. There are many great testing frameworks out there and Apache JMeter is just one of them.
While they are all similar in purpose, many of them take different approaches to the vocabulary of certain components and how they create a test and apply load. Some put the definition of the requests and transactions into their own files with the step load configuration in another. With Apache JMeter, all of the test objects are defined in the Test Plan and are logically separated within the tree.

Some load testing framework examples:
- Apache JMeter
- LoadRunner
- Silk Performer

Some performance testing framework examples:
- wget
  - A command line tool for retrieving one or more URLs
  - Can provide a high level of detail on each request and response
- curl
  - A command line tool for retrieving one or more URLs
  - Can provide a high level of detail on each request and response
- Fiddler
  - GUI-based HTTP debugger that can be used alone or with a web browser
  - Can provide a high level of detail on each request and response

Testing Framework Architecture

When testing ArcGIS Enterprise, most of the architectural attention centers around the scalability of the deployment tiers: Load Balancer, Web Adaptor, Portal for ArcGIS, ArcGIS Data Store, ArcGIS Server, Enterprise Geodatabase and Network Storage. While one 8-core test machine can usually send a fair number of requests and satisfy the typical test, sometimes multiple machines are needed if the load to apply requires serious horsepower. Depending on the testing framework involved, several of the testing components can be separated out to different machines to improve the scalability of the test client.
Common components to scale out are:

Test Controller
- As the name implies, the main focus of the controller is to stop and start the test as well as coordinate the collection of test metrics from one or more Test Agents
- In Apache JMeter's case, the controller is integrated right into the GUI but is also running when the test is executed from the command line
- Other testing frameworks may have a web-based Test Controller frontend
- Typically, only one Test Controller is needed for any given test environment, but it can run on dedicated hardware that is separate from the Test Agents

Test Agent
- The primary job of the Test Agent is to send requests to and receive responses from the server
- This component performs most of the work and requires the most CPU resources
- For big jobs, multiple Test Agent machines might be needed
- In Apache JMeter's case, by default, the Test Agent runs on the same machine as the Test Controller

Test Repository
- A machine dedicated to storing the load test results
- This can include test metrics like response time, throughput and hardware utilization
- In Apache JMeter's case, the results are stored on the controller in text (*.jtl) files
- It is possible to send the results to a database, but this is not the default

Test Visualization
- A machine used to visualize the test metrics and hardware utilization in real time
- In Apache JMeter's case, the GUI is not recommended for the data visualization of a production test run, but the command line is
- If results are sent to a database, additional software can connect to the Test Repository to visualize the information

Interactive Response Time Law

The Interactive Response Time Law is a formula that defines the relationship between key performance factors, namely users, throughput, response time, and user think time. The calculation can be arranged to determine the parameter of interest as long as you know the other three.
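The law's formula, N = X * (R + Z) (its terms are defined below), lends itself to a quick calculation in either direction. The numbers in this sketch are illustrative only:

```python
def concurrent_users(throughput, response_time, think_time):
    """N = X * (R + Z): users supported given throughput X (per second),
    response time R (seconds) and think time Z (seconds)."""
    return throughput * (response_time + think_time)

def required_throughput(users, response_time, think_time):
    """Rearranged as X = N / (R + Z): the throughput demand that
    N users place on the system."""
    return users / (response_time + think_time)

# Illustrative numbers only: 24 requests/sec at a 0.4 s response time,
# with 10 seconds of think time between requests
print(round(concurrent_users(24, 0.4, 10), 1))   # 249.6 -> roughly 250 users
```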
For example, if the number of users utilizing the system is known, along with the average response time for requests and the average user think time, we can then derive the estimated throughput demand on the system. This law is very useful when attempting to convert users to throughput and throughput to users (among other use cases) and is foundational to areas related to testing such as capacity planning.

Given the following formula: N = X * (R + Z)

- N = Number of jobs or concurrent users
- X = Throughput per second in the system
- R = Response time, or average time a job spends in the system
- Z = Think time

For more information on the Interactive Response Time Law see:

http://downloads.esri.com/Support/downloads/other_/ArcGIS%20Enterprise%20deployment%20guide_Scene%20layer%20benchmark%20testing.pdf
https://homepages.inf.ed.ac.uk/jeh/biss2013/Note2.pdf

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
07-16-2021
04:14 PM
Hi @DeanHowell1, This is a good question. We are planning to address strategies for testing feature services in future Community Articles, as well as ways to generate test data (bounding boxes and points) from ArcGIS Pro. In the short term, the quickest way forward may be to use the HTTP recorder that is built in to Apache JMeter. The recorder would capture the requests as you interact with the feature service from a web application and put them right into the Test Plan. Ideally, each pan or zoom from the application would be its own transaction in the test. If you have pre-recorded HTTP Archive (*.har) files of captured feature service traffic, there is a free utility called HAR2JMX (har to jmx) that can convert them right into an Apache JMeter Test Plan.
06-30-2021
12:00 PM
Actual Performance Curve chart adjusted to show the response time point that corresponds more accurately to the maximum throughput.
06-30-2021
09:54 AM
Choosing a Capability of ArcGIS Enterprise to Benchmark

As the foundational software system for GIS, ArcGIS Enterprise performs many duties such as mapping, visualization, and analytics. From this wide range of capabilities and functions, there is no single test that can represent all of its abilities. However, if one function were to be used as a benchmark for testing an ArcGIS Enterprise deployment, a strong case can be made for the map service export function. Export map can be called easily and programmatically in an Apache JMeter Test Plan by varying the spatial extents of the requests from CSV data files across several map scales. This translates to just one request for each map scale transaction, which helps keep the test from becoming complicated and difficult to maintain. Coupled with the fact that the export function has been available since version 9.3, this makes for a proven and reliable operation to benchmark.

What is a Benchmark of a Map Service?

GIS testers and administrators are often tasked with understanding the differences in throughput between two systems, or the same system after some form of environment modification. In such scenarios, a benchmark is the process of carrying out a load test to act as a standard against which multiple things can be compared to one another. With respect to GIS, this load test would be an Apache JMeter Test Plan executing a step load test against an ArcGIS Enterprise map service to understand the highest rate of throughput (transactions/sec or requests/sec) that can be achieved from the deployment given a particular state or configuration. This rate is also known as the peak throughput. At peak throughput, understanding the performance (transaction or request response time) is also critical to measure.

Benchmark Dataset

Any dataset can be used for a benchmark as long as it is kept constant, where changes like feature class additions, updates, deletes and versions are not being made.
This consistency helps create a dependable "standard" since it is a non-moving target. The test data can be private (e.g. proprietary) or public domain based.

What is Public Domain Data?

Generally speaking, public domain data would be any raster or vector datasets that are free to download and use. There are many public domain datasets out there (and potentially different licenses that define them). The data used in this Article is Made with Natural Earth and provided through the Creative Commons (CC0) license.

Why Use Public Domain Data?

One of the characteristics that makes a good benchmark is constructing a test so that others are able to repeat the same test that you did. Public domain data is a good choice in this regard as it promotes a testing standard and a dependable measuring stick for performance and scalability.

SampleWorldCities vs Natural Earth

While ArcGIS Server's inclusion of SampleWorldCities through its installation helps make the dataset ubiquitous and good for test examples and walkthroughs, its extremely small size does not make it ideal for benchmarking a map service. The Natural Earth dataset, on the other hand, provides some decent map detail (at smaller scales) covering the whole world. Additionally, this is achieved with an easily accommodating disk size footprint, which helps make it more practical to share, download and use.

The Benchmark Natural Earth Dataset

Download the benchmark dataset here. The data is a subset of the Natural_Earth_quick_start.zip and includes a modified MXD for ArcMap 10.8.1 and an ArcGIS Pro 2.8 project. Either can be used to publish a map service to ArcGIS Enterprise. The Natural Earth subset of data should look similar to the following when opened in ArcGIS Pro (or ArcMap):

Deployment Architecture

Architecture is an important detail of a benchmark. The following are all important components of benchmark architecture that have an impact on the test:

- Does a Web Adaptor exist?
- Was authentication involved or was the service made available to everyone?
  - Portal for ArcGIS authentication
  - ArcGIS Server token authentication
  - Available to everyone
- How many machines took part in the ArcGIS Site?
- Processor details
  - Processor model and architecture
  - Number of CPU cores for each server (including the testing client workstation)
  - Physical, virtual or cloud
- Physical memory details
  - Total amount of system memory
- Network speed
- ArcGIS Enterprise version
- Operating system version

Note: It is recommended to take note of the deployment architecture details. Saving this information with the test results can help give proper context and meaning to the analysis or conclusions.

The results listed for this benchmark test were run against the following environment architecture:

ArcGIS Server (10.9 Final)
- Dell PowerEdge R640
- SPECint_rate_base2006
- HyperThreading disabled
- 128GB RAM
- Windows Server 2019
- 10G network

ArcGIS Web Adaptor (10.9 Final)
- Dell PowerEdge R440
- SPECint_base2006
- HyperThreading disabled
- 64GB RAM
- Windows Server 2019
- 10G network

Test Client
- Apache JMeter 5.4.1
- Dell PowerEdge R640
- SPECint_rate_base2006
- 6 virtual CPUs
- 16GB RAM
- Windows Server 2019
- 10G network

Data Source Type and Location

Using either a file geodatabase or an enterprise geodatabase to store data for the benchmark test is fine. Regardless of which is used, the detail of the data source is an important property of the environment which should be noted.

Note: It is recommended to take note of the data source type. Saving this information with the test results can help give proper context and meaning to the analysis or conclusions.

As for location, using a remote file geodatabase instead of a local file geodatabase might be necessary if the deployment has multiple servers that make up the ArcGIS Enterprise Site. In either case, remote or local, the data source location is also an important detail of the test environment that should be noted.
Note: It is recommended to take note of the data source location. Saving this information with the test results can help give proper context and meaning to the analysis or conclusions.

Service Type and Number of Instances

For the most widely used ArcGIS map services in a Site, it is recommended to publish the resource as a Dedicated instance instead of Shared. Although both types can scale to fully utilize the available hardware, a Dedicated service instance has resources behind the scenes that are devoted to it, which makes it an ideal choice for a benchmark test. For predictable performance, it is recommended to set the Minimum and Maximum number of instances for the Dedicated instance type equal to the number of CPU cores of the ArcGIS Server machine.

Note: It is recommended to take note of the service type and number of instances. Saving this information with the test results can help give proper context and meaning to the analysis or conclusions.

Do the Request Options in a Benchmark Test Matter?

Absolutely! Using a common dataset and the export map function is not enough to establish a dependable benchmark. The export operation is extremely versatile, but through this flexibility an image can be generated through a variety of different input options. A load test that sends requests to the map service consistently is important for establishing a reliable benchmark. Can the test request a BMP image format instead of a PNG, or ask for data in a spatial reference other than the default of 4326? Yes, but changing such options may impact the performance and scalability of the test, so it is recommended to leave these Test Plan settings as-is.
The Map Service Benchmark Test Plan

To download the Apache JMeter Test Plan used in this Article see: naturalearth1.zip. This Test Plan is largely based on the SampleWorldCities test project from a previous Article.

Downloading and opening the Test Plan in Apache JMeter should look similar to the following:

Adjust the User Defined Variables to fit your environment. The request composition (one for each of the 5 tested map scales) should look similar to the following:

The Thread Group Configuration

The Thread Group defines the step load characteristics of the test and plays an important role. For an export map test, the maximum Number of Threads has a close relationship with the maximum number of ArcGIS Server CPU cores (and similarly, the maximum number of service instances). Configuring the test threads to exceed the number of cores helps ensure enough pressure is applied to fully utilize the server CPU resources. From there, peak throughput should be observed, which is a primary goal of a benchmark test.

Note: Not all tested datasets may show the respective service fully utilizing the CPU of the ArcGIS Server tier. In such cases, additional troubleshooting is needed to understand where the bottleneck exists that is limiting the scalability of the given workflow.

As a general rule of thumb, configure the maximum step load to be 25%-60% higher than the number of server CPU cores. As seen below, the Test Plan is configured to run for 1 hour and reach a maximum step load of 40 concurrent test threads:

- This would start the benchmark at 1 test thread and add an additional thread every 90 seconds
- This benchmark was designed to test an ArcGIS Server deployment running on 24 physical CPU cores
- Adjust accordingly; not every ArcGIS Server will run on 24 physical cores, and the maximum step values may be too high for your deployment

Note: It is recommended to take note of the step load configuration details.
Saving this information with the test results can help give proper context and meaning to the analysis or conclusions.

Benchmark Test Execution

The benchmark should be run in the same manner as a typical JMeter Test Plan. See the runMe.bat script included with the naturalearth1.zip project for an example of how to run a test as recommended by the Apache JMeter team.

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use, which may "pollute" the test results.

Results and Analysis

Once the load test has completed, the runMe.bat script instructs Apache JMeter to automatically generate a report to assist with the analysis of the results. There could be entire Articles and internet resources devoted exclusively to analyzing the components of the results from a load test. So, in the interest of keeping things simple, our focus will be on the request throughput (requests/sec) and request performance (seconds) metrics from the report. The diagrams below illustrate the ideal trends of these two items over the course of the test.

The Ideal Throughput Curve

Ideally, the throughput curve will have the form of the orange line above. The point where the curve peaks and begins to flatten is an indication that the system has reached its highest level of throughput (due to a hardware or software bottleneck). This area of the graph where the curve bends is referred to as the knee, and the value for maximum throughput is at this point. The blue line represents the increasing step load of the test.

The Ideal Performance Curve

Ideally, the response time curve will have the form of the green line above. It is taken at the same point in the test as maximum throughput. The blue line represents the increasing step load of the test.
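As a quick sanity check on the Thread Group configuration described earlier (start at 1 thread, add one every 90 seconds, up to a maximum of 40 over a 1-hour run), the staircase schedule can be sketched in plain Python:

```python
def step_load_schedule(max_threads=40, ramp_interval_s=90):
    """Yield (elapsed_seconds, active_threads) pairs for a staircase
    load that starts at 1 thread and adds one every ramp_interval_s
    seconds -- a plain-Python sketch of the Thread Group settings
    described in this Article, not JMeter's own scheduling code."""
    for step in range(max_threads):
        yield step * ramp_interval_s, step + 1

schedule = list(step_load_schedule())
full_load_at, full_threads = schedule[-1]
print(f"full load of {full_threads} threads reached "
      f"at {full_load_at / 60:.1f} minutes")
```

With the defaults above, the 40th thread starts at 58.5 minutes, which is why the 1-hour duration leaves just enough room for the final step to register before the run ends.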
JMeter Report

Included with the naturalearth1.zip project is an Apache JMeter report called naturalearth1_run1 within the reports folder. Opening the index.html will reveal multiple charts and tables to assist with the analysis.

Actual Throughput Curve

From the report: Under Charts --> Throughput, the Hits Per Second chart can be found, where the request throughput from the test is plotted. Since the test was constructed with each transaction containing only one request, "hits per second" is equivalent to both transactions/sec and requests/sec. The system achieved a maximum throughput of about 80 transactions/sec (or 80 requests/sec).

Actual Performance Curve

From the report: Under Charts --> Response Times, the Time Vs Threads chart can be found, where the request performance from the test is plotted. All items except "/pvtserver/rest/services/NaturalEarth/MapServer/export" are filtered out (by clicking on them within the legend). Since the test was constructed with each transaction containing only one request, the export request also represents the average transaction performance. At the point of maximum throughput, the system delivered a transaction performance of about 314ms (0.3 seconds).

Note: A different approach to the analysis will be needed for load tests containing transactions with more than one request.

Comparing the Results

After you have completed the test of your system with the provided data and Test Plan, you can compare the results with those listed in this Article. This can provide an approximate measuring stick for equating two systems.

To download the Apache JMeter Test Plan used in this Article see: naturalearth1.zip

To download the Natural Earth subset of data used in this Article see: Natural_Earth_Test_Data

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
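As a final sanity check, the throughput and response-time figures reported above can be related through Little's Law (in-flight requests N = throughput X times response time R). The figures are the ones from this Article's report; the helper function itself is illustrative:

```python
# Little's Law sanity check: concurrency N = throughput X * response time R.
# At ~80 requests/sec and ~0.314 s per request, roughly 25 requests are in
# flight at the knee -- consistent with a step load near the 24-core count.

def concurrency(throughput_rps, response_time_s):
    return throughput_rps * response_time_s

n = concurrency(80, 0.314)
print(round(n))  # 25
```

When the implied concurrency at the knee lands near the server core count, it supports the conclusion that the CPU tier is the limiting resource.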
06-29-2021
12:55 AM
BLOG
Hi @DeanHowell1, Thanks for reading our Article on Creating a Load Test in Apache JMeter. I agree with you; it appears your deployment requires a token to consume the SampleWorldCities map service. We recently added a walkthrough on Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced) which may help. At the end of that Article, there is a Test Plan you can download which includes all of the pieces listed in the discussion. Hope this helps! Aaron
06-21-2021
05:25 PM
BLOG
Performance Engineering: Load Testing ArcGIS Enterprise

What is Performance Engineering?

Performance Engineering is the practice of proactively testing, monitoring and analyzing an ArcGIS Enterprise deployment or application from the perspective of performance and/or scalability. It can encompass both the hardware (e.g. CPU and memory utilization) and software components (e.g. map service composition) of a Site. Performance Engineering efforts typically involve multiple tools to carry out the testing and monitoring functions.

Why is Performance and Scalability Important?

System performance and scalability are critical factors in the successful adoption, operation, and long-term use of an ArcGIS Enterprise deployment. They are often key determinants of end-user satisfaction. The Performance Engineering team in Professional Services provides resources in the form of Community Articles to help achieve those results through the implementation of modern performance and scalability testing and troubleshooting best practices with ArcGIS Enterprise.

What Tools are Recommended for Load Testing and Analysis?

There are many high-quality testing tools for troubleshooting, analyzing and monitoring the performance of web applications and map services. Unfortunately, it is impossible to cover them all and discuss how each can be used. Instead, Performance Engineering Articles will focus heavily on using Apache JMeter for our performance and load testing tutorials with ArcGIS Enterprise. For many of our Articles, we provide the Apache JMeter Test Plan that was built specifically for each walkthrough.
Performance Engineering Articles

Strategies, Integration & Configuration and Operational Support

Recommended Strategies for Load Testing an ArcGIS Server Deployment (Beginner/Intermediate) -- General strategies for load testing an ArcGIS Server setup; not specific to any testing tool
Testing Fundamentals, Meanings and How They Are Used (Beginner) -- Vocabulary definitions for common testing items and phrases
ArcGIS Enterprise Analysis with System Log Parser's Optimized Analysis Type (Beginner) -- Improve your log parsing experience by taking advantage of the Optimized Analysis Type in System Log Parser
Automating System Log Parser from the Windows Command Line (Beginner/Intermediate) -- Several helpful tips and tricks for automating the parsing of ArcGIS Enterprise logs with the command line version of System Log Parser
ArcGIS Enterprise Analysis with System Log Parser's ServiceDetails Analysis Type (Beginner) -- Use the ServiceDetails Analysis Type in System Log Parser to summarize important information from your services
Optimizing ArcSOC Availability and Utilization (Beginner/Intermediate) -- How to observe and match the Instance configuration of dedicated services to the incoming demand
ArcGIS Server Performance Strategies (Beginner/Intermediate) -- Common performance challenges and strategies for overcoming them
ArcGIS Enterprise Analysis with System Log Parser: Understanding Anonymous Entries for the User Name (Beginner) -- Understanding why the value of "anonymous" can be seen in the System Log Parser report's "Statistics By User" worksheet
ArcGIS Enterprise: Is It a Good Idea to Load Test Shared Services? (Beginner) -- A quick discussion of items to consider before load testing shared services

Performance and Load Testing Walkthroughs

Performance Testing with Apache JMeter (An Introduction) -- An introduction to performance testing with Apache JMeter; set up a very simple load test
Creating a Load Test in Apache JMeter against the SampleWorldCities Map Service (Beginner/Intermediate) -- A detailed walkthrough for building a dynamic load test in Apache JMeter against a map service
Running an Apache JMeter Load Test from Command-line mode (Beginner/Intermediate) -- Procedures and strategies for running an Apache JMeter load test from the command line
Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced) -- A discussion on how authentication can be used in an Apache JMeter test to apply load to a secured map service
Using Public Domain Data to Benchmark an ArcGIS Enterprise Map Service (Intermediate) -- A discussion on running an export map test with public data to act as a benchmark of a map service; benchmark results from an Esri lab environment included
Using ArcGIS Pro to Generate Test Data for Use with Map Services (Beginner/Intermediate) -- A walkthrough on using the new Load Test Tools utility to generate test data that can be spatially customized
Creating a Load Test in Apache JMeter Against a Hosted Feature Layer Service (Intermediate/Advanced) -- A walkthrough on using an update to the Load Test Tools utility to generate test data for a hosted feature layer service and how to utilize this programmatically with an Apache JMeter Test Plan
Creating a Load Test in Apache JMeter Against a Network Analyst Route Service (Intermediate/Advanced) -- A walkthrough on using an update to the Load Test Tools utility to generate test data for a Network Analyst route service and how to utilize this programmatically with an Apache JMeter Test Plan
Creating a Load Test in Apache JMeter Against a Cached Map Service (Advanced) -- A walkthrough on using the latest Load Test Tools utility to generate test data for a cached map service and how to utilize this programmatically with an Apache JMeter Test Plan
Load Test an Asynchronous Geoprocessing Service Using Apache JMeter (Advanced) -- A walkthrough on how to load test an asynchronous geoprocessing service; includes an Apache JMeter Test Plan and a link to a full featured GP model
Capturing Hardware Utilization During an Apache JMeter Load Test (Intermediate) -- A discussion of several common scenarios for capturing the hardware usage from machines in an ArcGIS Enterprise deployment
Using a Branch Versioning Editing Load Test with Apache JMeter (Advanced) -- A discussion on strategies for using a load test to conduct branch versioning editing
Benchmark ArcGIS Enterprise Without a Dataset (Intermediate) -- Use the built-in Geometry service to easily benchmark the underlying hardware of the ArcGIS Server machine (*** Added September 2024 ***)

Administration Automation

ArcGIS Enterprise User Administration Automation with Apache JMeter (Intermediate) -- A walkthrough on how to use Apache JMeter to automate some common administrative user tasks in ArcGIS Enterprise; includes several Apache JMeter Test Plans

Related Boards

Implementing ArcGIS | ArcGIS Enterprise

Attribution

File:Wikimedia_Foundation_Servers-8055_17.jpg; Victorgrigas, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons, Created: 16 July 2012
File:Blumfield_V-twin_motorcycle_engine.jpg, Public Domain, Created: 1 January 1912

Apache JMeter is released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
06-21-2021
03:41 PM