Subnetwork Scalability Analysis

What is Subnetwork Scalability Analysis?

Before talking about Utility Network subnetwork scalability analysis, let's review what a subnetwork is and what scalability analysis is:

- A subnetwork: In a utility network, a subnetwork is a partition or topological subset of the data in a tier where all participating features have connectivity to the same controllers. A subnetwork is often used for tracing to determine if connectivity is available.
- Scalability analysis: Scalability analysis determines a Utility Network deployment's ability to execute multiple requests concurrently (e.g., updateSubnetwork and exportSubnetwork). This is typically done to understand whether time-dominant business objectives can be met. For example, can a particular deployment export 5,000 subnetworks within a 4-hour window? The answer to this question is unknown until tested and verified.

The scalability of a deployment is tied to how much concurrency can take place at once while maintaining optimal performance of the operations of interest. This Article discusses how to apply concurrency via a test (included in the Article).

Note: For more information on a subnetwork, see: The Life Cycle of Subnetwork in Utility Network

Our Utility Network Objective and Goal

- Automate the tasks of updating or exporting a list of subnetworks
- Capture performance time
  - Overall task
  - Individual operations (e.g., time to update a specific subnetwork)
- Solution configurable to meet requirements
  - Time sensitive
  - Resource sensitive

Why Perform Update/Export Subnetwork Analysis?
- Typical purpose and time constraints are a business requirement
- Exercise two functions per subnetwork
  - Update (updateSubnetwork)
    - Commonly performed after both editing and validate network topology have been called, or after enable network topology has been carried out
  - Export (exportSubnetwork)
    - Used to extract information about the subnetwork to a file, which can then be consumed by an external system like outage management
    - Before a subnetwork can be exported out of the system, it needs to be up-to-date (updated)

Both are important utility network functions. As a GIS administrator or developer, it is important to understand how these operations scale. Understanding the time to process a list of subnetworks is the primary analysis. This can be taken further by exploring solutions that optimize the task for time or system resources. These challenges have similar approaches for accomplishing the task of updating or exporting many subnetworks.

- Gain insight on business objectives
  - Exporting subnetworks can take more compute time depending on the options utilized
  - Such options may be a business requirement
- Performance and scalability can affect architecture
  - Based on the test results and findings, the deployment configuration might need to change to meet business needs

How to Accomplish Subnetwork Scalability Analysis?

List of Subnetworks

The first step is to extract a list of subnetworks from the Utility Network dataset. This can be done through a variety of methods:

- ArcGIS Pro
- ArcPy script
- SQL select statements

SQL example:

1. Connect to the Utility Network geodatabase.
2. Find the ObjectId for the Subnetwork table.
   - This ObjectId is dynamic and can vary
   - The Type GUID is constant

-- Find Utility Network ObjectId for Subnetwork table
SELECT OBJECTID FROM sde.GDB_ITEMS WHERE type='{37672BD2-B9F3-48C1-89B5-8C43BBBB6D57}'

For our example database, this returned 446. This ObjectId value will be used in the next query.

3. Export the list of subnetworks.

-- Export list of Subnetworks
SELECT
T1.SUBNETWORKCONTROLLERNAME, T1.SUBNETWORKNAME, T1.ISDIRTY,
T1.ISDELETED, T1.TIERNAME, T1.DOMAINNETWORKNAME, T1.GDB_FROM_DATE
FROM
elec.UN_446_SUBNETWORKS T1
INNER JOIN (SELECT SUBNETWORKNAME, MAX(GDB_FROM_DATE) AS MaxDate
FROM
elec.UN_446_SUBNETWORKS
GROUP BY SUBNETWORKNAME) T2
ON T1.SUBNETWORKNAME = T2.SUBNETWORKNAME
AND T1.GDB_FROM_DATE = T2.MaxDate

4. Save the output to a file (csv or tsv).
   - Some SubnetworkControllerName values may contain commas (e.g., ","), in which case a tab-separated value file is more appropriate than a comma-separated value file

For the Utility Network dataset used in this Article, the resulting list of subnetworks looks like the following when the data rows are separated with tabs.

Note: It is assumed the current state of the network contains clean and dirty subnetworks. How subnetworks become dirty is beyond the scope of this Article.

Note: Several business factors can go into a selection to pull out subnetworks. This query is to help get you started.

Apache JMeter

As mentioned in other Community Articles, Apache JMeter is a free testing tool. It is great for exercising the REST endpoint of ArcGIS Enterprise to test many functions:

- Performance Testing with Apache JMeter (An Introduction)
- Running an Apache JMeter Load Test from Command-line mode (Beginner/Intermediate)
- Creating a Load Test in Apache JMeter against the SampleWorldCities Map Service (Beginner/Intermediate)
- Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced)

While there are many testing tools out there, JMeter will be used to call a Utility Network service in ArcGIS Enterprise for making updateSubnetwork and exportSubnetwork requests.

Utility Network Dataset -- Naperville Electric

Naperville Electric data, as seen from ArcGIS Pro:

Update Subnetwork

Apache JMeter – File system view of Test Plan folder

Outside of JMeter, the Test Plan is just a jmx file, all by itself. As mentioned in other Community Articles, it is recommended to create some folder structure for the project you work with. This can aid with management, especially if you have many different tests doing different things.
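Before wiring the exported list into a test, it can help to sanity-check it with a short script — for example, confirming how many subnetworks are dirty and that controller names containing commas parse correctly. A minimal Python sketch (the column names come from the SQL query above; the data is shown inline to stay self-contained, and the sample rows are invented — in practice you would read your .tsv file):

```python
import csv
import io

# Hypothetical tab-separated export; headers match the SQL query columns.
tsv_data = (
    "SUBNETWORKCONTROLLERNAME\tSUBNETWORKNAME\tISDIRTY\n"
    "Controller A\tRMT001\t1\n"
    "Controller B\tRMT002\t0\n"
    "Controller C, Phase 2\tRMT003\t1\n"  # controller name contains a comma
)

reader = csv.DictReader(io.StringIO(tsv_data), delimiter="\t")
rows = list(reader)

# Only dirty subnetworks need an updateSubnetwork call.
dirty = [r["SUBNETWORKNAME"] for r in rows if r["ISDIRTY"] == "1"]
print(dirty)  # ['RMT001', 'RMT003']
```

In the Test Plan itself this filtering is done by the If Controller, but a quick check like this confirms the file parses cleanly, including controller names that contain commas.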
In the folders pictured above, the list of subnetworks and test results can be kept inside the UpdateSubnetwork directory, where they are less likely to be confused with other test run data.

The Update Subnetwork Test Plan

To download the Apache JMeter Test Plan used in this Article, see: UpdateSubnetwork1.zip

Opening the Test Plan in Apache JMeter should look similar to the following. Adjust the User Defined Variables to fit your environment.

Key Components of the Test Plan

With the UpdateSubnetwork Test Plan expanded, the following elements can be seen:

- User Defined Variables
  - The list above, used to easily tailor the test to an environment
- Aggregate Report
  - Used for test debugging and post-test analysis
- Thread Group
  - Defines the thread concurrency or scalability of the test
  - Recommended to use default values (e.g., 1 test thread) while building a test
- GenerateToken
  - Obtains a token from ArcGIS Enterprise
  - Uses credentials defined earlier
  - Holds onto it during the test
- While Controller Sync
  - Used when business logic dictates calls should be synchronous
  - Loops over the list of subnetworks
- CSV Data Set Config
  - Links the list of subnetworks to the test
  - CSV or TSV data text file
- If Controller
  - A logic branch of the test using Groovy
  - Follows the branch if the subnetwork is dirty (IsDirty=True or IsDirty=1)
- USN Request (${SUBNETWORKNAME})
  - The test element that issues the request performing the bulk of the work
  - From the data file source, ${SUBNETWORKNAME} is populated with the name of the subnetwork
- While Controller Async
  - Used when business logic dictates calls should be asynchronous
  - Ideal for very long running updateSubnetwork requests
  - Loops over the list of subnetworks
  - To utilize: right-click on While Controller Async and select Enable, then right-click on While Controller Sync and select Disable

Note: Set the Thread Group values to determine the scalability profile of the Update Subnetwork test. A larger value for "Users" means a higher maximum concurrency that the test will reach. Increase "Rampup" to gradually increase the amount of concurrency across that duration of time (in seconds).

A Closer Look at GenerateToken

The GenerateToken request is an important part of the test. It occurs early in the test, and the Utility Network service will most likely require authentication.

The Generate Token Request

Capturing the Token

Requesting the token is half the work. The other half is capturing it and storing it as a variable so JMeter can access it while the test runs. This is done with the Regular Expression Extractor element from JMeter.

Note: How regular expressions work is beyond the scope of this Article, but the extractor essentially looks for a particular string signature in the response and, if found, puts it into a JMeter variable called agstoken.

Update Subnetwork Test Plan Data File -- The List of Subnetworks

The CSV Data Set Config element plays a huge role as it ties our subnetwork data list to the test.

Note: Some Utility Network datasets may contain the comma character (e.g., ",") in the SubnetworkControllerName. Where this occurs, it is advantageous to save the list of subnetworks as tab-separated values instead of comma-separated values.

Note: In cases where a tab-separated values file is used, JMeter will still expect the header line in the list of subnetworks to be comma separated.

If Controller Logic

For convenience, test logic was added to only update subnetworks from the list if the IsDirty field equals 1. Groovy is used to accomplish this task in the test. Apache Groovy is a scripting language based on Java that provides powerful functions and programmatic support for things that might be difficult for the standard JMeter test elements.

The Update Subnetwork HTTP Request

This test element in the Test Plan is responsible for the updateSubnetwork web calls in the test. This component issues the request, and JMeter evaluates the response coming back from ArcGIS Enterprise through the REST endpoint.
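The Regular Expression Extractor's job, described earlier — find the token signature in the generateToken response and stash it in a variable — can be illustrated in plain Python. Both the response body and the pattern below are simplified examples, not the exact expression used in the Test Plan:

```python
import re

# Simplified generateToken-style JSON response (illustrative only).
response_body = '{"token":"AbCdEf123456","expires":1700000000000,"ssl":true}'

# Look for the token signature, similar in spirit to JMeter's
# Regular Expression Extractor with a pattern like "token":"(.+?)"
match = re.search(r'"token":"(.+?)"', response_body)
agstoken = match.group(1) if match else None
print(agstoken)  # AbCdEf123456
```

In the Test Plan, the extracted value lives in the agstoken variable and is appended to subsequent requests so they pass authentication.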
Test Execution

- Graphical User Interface (GUI) execution
  - Ideal for test authoring and debugging
  - Play and Stop buttons
- Command-line execution
  - Great for official test runs
  - More efficient memory utilization, especially if listeners like the Aggregate Report are disabled for the run
  - Play (consoleRun.bat) and Stop (Control-C)
  - When the test is run via consoleRun.bat, a results.jtl file will be generated in the results folder
  - This is the "results file" and contains valuable performance information on operations from the test

Note: It is always recommended to coordinate the load test start time and duration with the appropriate personnel of your organization. This ensures minimal impact to users and other colleagues that may also need to use your on-premise ArcGIS Enterprise Site. Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Note: For several reasons, it is strongly advised to never load test services provided by ArcGIS Online.

Analysis -- Reporting and Evaluation

Aggregate Report Element (Test Debugging)

From this analysis, we can easily see the UpdateSubnetwork_Sync transaction and some key statistics that tell us important performance information about the operation overall. For a rough total time estimate when using 1 test thread of concurrency, just multiply the number of samples by the average. For example: "# Samples * Average" = Total Test Run Time (ms)

For "official" test runs, it is recommended to disable the Aggregate Report listener from the Tree (a simple right-click, then select Disable) and only use it for test authoring, debugging, and post-test processing.

Note: For post-test processing, the Aggregate Report can still be used to browse, select, and load a test results file even if it is disabled in the GUI.

The Results File

When run from the command-line script, JMeter will produce a results file as part of its output.
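The "# Samples * Average" estimate above — and the earlier 5,000-subnetworks-in-4-hours question — both come down to simple arithmetic. A sketch with invented numbers (every figure here is illustrative, not a measurement):

```python
import math

# Rough single-thread run-time estimate: samples * average (invented numbers).
samples = 25
avg_ms = 12_000  # assumed average response time per updateSubnetwork, in ms
total_ms = samples * avg_ms
print(total_ms / 1000 / 60)  # 5.0 minutes for one thread to process 25 samples

# Sizing a time window: how many concurrent threads would 5,000 exports
# in a 4-hour window need, if one request averages 30 seconds?
subnetworks = 5000
window_s = 4 * 3600
avg_s = 30.0
per_thread = window_s / avg_s             # 480 requests per thread in the window
threads = math.ceil(subnetworks / per_thread)
print(threads)  # 11 -- a starting point only; concurrency itself affects averages
```

Real averages shift as concurrency increases, so numbers like these are only a starting point for designing a test, never a substitute for running one.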
The results file (a *.jtl file) contains raw request information in text form. Its contents include a listing of all the subnetworks and their individual response times. While easy to open and view with any common editor, performing analysis in this form is not recommended.

Aggregate Report Element (Post-Test Analysis)

Let us circle back to the Aggregate Report element for some post-test analysis. From the JMeter GUI, click the Browse button to find the jtl results file, then select Open to load it. Once loaded, the jtl file is processed by JMeter and a statistical report is generated in the Aggregate Report section.

The Aggregate Report element is interactive, so you can click on the Maximum header field to easily re-sort the listing and find the subnetworks that were taking the most time. The UpdateSubnetwork_Sync transaction is still listed, which is critical for understanding if the overall numbers are meeting certain performance requirements from a service level agreement (SLA).

Note: The number of subnetworks in this dataset is small. Test results from production datasets would have more samples.

Taking the Analysis Further

Since the results file is comma-separated values, it can easily be opened in a spreadsheet. A view of the result data with some minimal formatting. This is good, but can it be better? Charting the transaction performance offers improved clarity.

Note: If you are just after overall numbers, filter out individual subnetwork requests (e.g., the field where responseMessage = OK).

Export Subnetwork

Apache JMeter – File system view of Test Plan folder

The Export Subnetwork Test Plan

To download the Apache JMeter Test Plan used in this Article, see: ExportSubnetwork1.zip

Opening the Test Plan in Apache JMeter should look similar to the following. Adjust the User Defined Variables to fit your environment.

Note: Set the Thread Group values to determine the scalability profile of the Export Subnetwork test.
A larger value for "Users" means a higher maximum concurrency that the test will reach. Increase "Rampup" to gradually increase the amount of concurrency across that duration of time (in seconds) until the maximum is met.

The Export Subnetwork HTTP Request

This test element in the Test Plan is responsible for the exportSubnetwork web calls in the test. This component issues the request, and JMeter evaluates the response coming back from ArcGIS Enterprise through the REST endpoint.

Note: This Export Subnetwork Test Plan is designed to export every subnetwork in the data file.

Advanced JMeter

Getting the Size of the Export Subnetwork Output

The exportSubnetwork request produces a file on ArcGIS Server. This output file (commonly formatted as json or pbf) can then be imported into other systems. The size on disk of this item can vary but is helpful test metadata to capture. While having the Request element directly read the output to obtain the size might be simple, it would be very inefficient. However, this task is an opportunity to utilize the Groovy language mentioned earlier and perform some heavier file system computation quickly. After the exportSubnetwork request, there are additional requests in the form of two JSR223 Samplers:

- CalculateStatistics SN:${SUBNETWORKNAME}
- SN:${SUBNETWORKNAME} Size:${outputFileSizeKB}KB

The first Sampler contains Groovy logic to read the output file from the export and put its size into a JMeter variable. A closer look at the Groovy logic:

// Grab variables and some dynamic artifacts to assemble file system path of exportSubnetwork output
// Examine file system of exportSubnetwork output on the disk
String outputUrl = vars.getObject("outputUrl"); // Captured from previous request (exportSubnetwork)
String outputFormat = vars.getObject("outputFormat"); // User Defined Variable
String subnetworkName = vars.getObject("SUBNETWORKNAME"); // Test Plan variable from CSV
String serverName = vars.getObject("serverHostname"); // User Defined Variable
String folderServiceName = vars.getObject("serviceName"); // User Defined Variable
File srvc = new File(folderServiceName);
String folderName = srvc.getParent(); // Isolates folder name from service
String serviceName = srvc.getName(); // Isolates service name
String outputName = org.apache.commons.io.FilenameUtils.getBaseName(outputUrl); // Isolates file name from exportSubnetwork output
outputNamewExt = outputName + "." + outputFormat; // Re-add file extension
// Create filesystem path where output should reside; assumes arcgisoutput directory is shared to user running JMeter
// Use "\\\\MyServer.domain.org\\arcgisoutput\\" if serverName variable is not the ArcGIS Server machine
String cifsPath = "\\\\" + serverName + "\\arcgisoutput\\" + folderName + "\\" + serviceName + "_MapServer\\" + outputNamewExt;
File file = new File(cifsPath);
long cifsFileSize = file.length(); // Get size of exportSubnetwork output
double oneKilobyte = 1024.0;
double cifsFileSizeKB = (cifsFileSize / oneKilobyte);
double cifsFileSizeKBRound = cifsFileSizeKB.round(2); // User friendly formatted file size
log.info("folderServiceName: " + folderServiceName);
log.info("outputUrl: " + outputUrl);
log.info("cifsPath: " + cifsPath);
log.info("cifs file.exists: " + file.exists().toString());
log.info("serverName: " + serverName + " subnetworkName: \"" + subnetworkName + "\" size: " + cifsFileSizeKBRound + " KB outputName: " + outputNamewExt); // Write information to JMeter log
vars.putObject("outputFileSizeKB", cifsFileSizeKBRound); // Put calculated value into variable to use later

The second Sampler simply lists the captured file size in the Name property of the JSR223 Sampler element in the test. This value is then reported as the name of the request in the jtl results file, which can be used for further analysis.

Getting Individual Subnetwork Response Time from an Asynchronous Job

The Asynchronous Execution Style

The default execution style in the test is synchronous. It is simple and straight-forward: issue the request and just wait for the response. This is the ideal approach when the response time per request is expected to be less than 10 minutes. For requests that run longer than 10 minutes, asynchronous is the recommended approach. However, there are more moving parts with an asynchronous test:

- A job is submitted
- The job status is periodically polled in a loop (there can be more than one loop)
- Once complete, the output location for that job can then be captured

Each of these steps is its own request in the test, and each has a response time which is captured in the results file. While JMeter reports this total time under the transaction name of ExportSubnetwork Async (this is great because it provides handy overall statistics on the function), it does not easily link each individual subnetwork to its respective response time.

Groovy to the Rescue (Again)

Calculating a single response time for each subnetwork operation can be accomplished in Groovy by setting a timer at the beginning and end of the asynchronous loop. Groovy simply calculates the difference and reports the time back into the test (like the size calculation). A closer look at the Groovy logic:

// Grab variables and some dynamic artifacts to assemble file system path of exportSubnetwork output
// Examine file system of exportSubnetwork output on the disk
String outputUrl = vars.getObject("outputUrl"); // Captured from previous request (exportSubnetwork)
String outputFormat = vars.getObject("outputFormat"); // User Defined Variable
String subnetworkName = vars.getObject("SUBNETWORKNAME"); // Test Plan variable from CSV
String serverName = vars.getObject("serverHostname"); // User Defined Variable
String folderServiceName = vars.getObject("serviceName"); // User Defined Variable
String startEpoch = vars.getObject("StartEpoch"); // Test Plan variable from inside While Controller Async
String stopEpoch = vars.getObject("StopEpoch"); // Test Plan variable from inside While Controller Async
File srvc = new File(folderServiceName);
String folderName = srvc.getParent(); // Isolates folder name from service
String serviceName = srvc.getName(); // Isolates service name
String outputName = org.apache.commons.io.FilenameUtils.getBaseName(outputUrl); // Isolates file name from exportSubnetwork output
outputNamewExt = outputName + "." + outputFormat; // Re-add file extension
// Create filesystem path where output should reside; assumes arcgisoutput directory is shared to user running JMeter
// Use "\\\\MyServer.domain.org\\arcgisoutput\\" if serverName variable is not the ArcGIS Server machine
String cifsPath = "\\\\" + serverName + "\\arcgisoutput\\" + folderName + "\\" + serviceName + "_MapServer\\" + outputNamewExt;
File file = new File(cifsPath);
long cifsFileSize = file.length(); // Get size of exportSubnetwork output
long startEpochLong = Long.parseLong(startEpoch);
long stopEpochLong = Long.parseLong(stopEpoch);
long responseTimeMS = stopEpochLong - startEpochLong; // Calculate an ***estimated response time value*** of previous request (for asynchronous exportSubnetwork only)
double oneKilobyte = 1024.0;
double cifsFileSizeKB = (cifsFileSize / oneKilobyte);
double cifsFileSizeKBRound = cifsFileSizeKB.round(2); // User friendly formatted file size
double responseTimeSec = ((responseTimeMS / 1000).round(2)); // Convert estimated response time from milliseconds to seconds
log.info("folderServiceName: " + folderServiceName);
log.info("outputUrl: " + outputUrl);
log.info("cifsPath: " + cifsPath);
log.info("cifs file.exists: " + file.exists().toString());
log.info("serverName: " + serverName + " subnetworkName: \"" + subnetworkName + "\" size: " + cifsFileSizeKBRound + " KB responseTimeEstimate: " + responseTimeSec + " seconds outputName: " + outputNamewExt); // Write information to JMeter log
vars.putObject("outputFileSizeKB", cifsFileSizeKBRound); // Put calculated value into variable to use later
vars.putObject("responseTimeSec", responseTimeSec); // Put calculated value into variable to use later

Additional Strategies

Time vs Resources

- Time constraint example
  - Need to complete the task within a certain duration (e.g., export 5,000 subnetworks in 4 hours)
  - Client (JMeter) and server (ArcGIS Server) can work together for exercising deployment scalability
    - Client sends more concurrent test requests, server responds accordingly
    - The end result of the overall test run time is the scalability analysis
  - From the Thread Group element, increase the number of Threads and the Ramp-up period
  - Considerations
    - Resource intensive on CPU as more service instances are used, on the servers as well as the test client machine
    - This higher rate of concurrency could impact other users; coordinate and schedule accordingly
- Resource constraint example
  - Current setup
    - Most flexible
    - Recommended for initial test authoring and debugging
  - Longer overall compute time
  - Most forgiving on resources
    - Fewer test threads of concurrent execution
    - Requires fewer service instances on the Server
    - Saves memory resources

Final Thoughts

The Apache JMeter Test Plans in this Article represent a programmatic approach for applying load to a Utility Network service through the updateSubnetwork and exportSubnetwork functions. Understanding how the system responds to this load, through the response time numbers returned by ArcGIS Enterprise, can help determine if the deployment will be able to meet your performance needs. The authored tests can be adjusted to meet time challenges (more concurrency, higher scalability required) or resource challenges (memory constrained systems). Strategies for analyzing the test results were also discussed, including using the Aggregate Report in the JMeter Test Plan for some quick performance statistics, as well as charting individual response times for visual clarity. The overall process for the Update Subnetwork and Export Subnetwork Test Plans is similar.
Export Subnetwork, however, contained an additional Transaction which examined the resulting output on ArcGIS Server using Groovy to obtain the file size for each subnetwork.

To download the Apache JMeter Test Plans used in this Article, see: UpdateSubnetwork1.zip and ExportSubnetwork1.zip

Things to Consider

This list contains topics that can enhance the overall scalability analysis or that could pose potential challenges:

- Metrics
  - Capturing processor and memory utilization of the machines involved in the test can strengthen the analysis and help detect hardware contention
  - There are a variety of methods for performing this task
    - The Article: Capturing Hardware Utilization During an Apache JMeter Load Test (Intermediate) highlights several approaches
  - Detecting a hardware bottleneck like full processor utilization or memory exhaustion can inform the GIS administrator and/or developer that additional resources are needed before more pressure is applied
- Additional analysis
  - Optimal service instances
    - Utility Network services require dedicated service instances
    - Dedicated services have a minimum and maximum number of instances that can be adjusted to meet your needs (concurrent user demand or memory constraints)
    - Testing can help find the best configuration to fit your business needs
  - Service Level Agreement (SLA)
    - Statistics from the Aggregate Report can help GIS administrators and/or developers understand if the performance of operations is meeting their needs
- Common pitfalls
  - The test client machine
    - Test client software can be a resource intensive application
    - This is amplified when many concurrent test threads are used in the test
    - Consider installing the test client on its own machine (separate from machines performing other ArcGIS duties), with adequate hardware resources (number of processing cores and memory)
  - Too much pressure, too quickly
    - Applying too many concurrent threads to a test too quickly can negatively impact performance
    - From the User Defined Variables, adjust the Users and Rampup values in small increments until a configuration is found that works best
- Synchronous vs Asynchronous Test Execution
  - Both JMeter Test Plans listed in this Article include the ability to execute the updateSubnetwork or exportSubnetwork requests synchronously or asynchronously
  - If it is unknown which execution style to use, it is recommended to use synchronous (Sync), which is active and enabled by default in the tests
  - Synchronous is simpler and the optimal choice for requests that are expected to complete in under 10 minutes
  - Asynchronous (from the point of view of the Test Plan) contains more moving parts, but is ideal for very long running tasks
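The submit-then-poll pattern behind the asynchronous execution style can be sketched outside of JMeter as well. A minimal Python illustration — the submit_job and check_status helpers are hypothetical stand-ins (not the ArcGIS REST API), and the job is simulated to finish on its third status check:

```python
import time

# Hypothetical stand-ins for an asynchronous job endpoint (illustration only).
def submit_job():
    return "job-001"

def check_status(job_id, _state={"polls": 0}):
    _state["polls"] += 1
    # Pretend the job finishes on the third poll.
    return "esriJobSucceeded" if _state["polls"] >= 3 else "esriJobExecuting"

start = time.time()            # start of the polling loop ("StartEpoch")
job_id = submit_job()
while check_status(job_id) != "esriJobSucceeded":
    time.sleep(0.01)           # polling interval; a real test would wait longer
elapsed = time.time() - start  # per-subnetwork time ("StopEpoch" - "StartEpoch")
print(f"{job_id} finished after {elapsed:.2f}s")
```

This mirrors how the Test Plan brackets the While Controller Async loop with StartEpoch and StopEpoch, then lets Groovy report the difference as an estimated per-subnetwork response time.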
ArcGIS Enterprise Logs and System Log Parser

While several Community Articles exist which discuss analyzing ArcGIS Enterprise logs and how to get started with System Log Parser:

- ArcGIS Enterprise Analysis with System Log Parser's Optimized Analysis Type (Beginner)
- ArcGIS Enterprise Analysis with System Log Parser's ServiceDetails Analysis Type (Beginner)
- ArcGIS Enterprise Analysis with System Log Parser: Understanding Anonymous Entries for the User Name (Beginner)
- Benefits of Analyzing ArcGIS Server Log Entries of Level Info (Beginner)
- Automating System Log Parser from the Windows Command Line (Beginner/Intermediate)

there are few resources that go into the details of the generated spreadsheet report and how to make GIS administrator and/or developer decisions based on the performance information it provides. This Article will walk through how to perform log analysis on a Utility Network deployment, which can then be used to build knowledge about Site usage and efficiency.

Performance Analysis of ArcGIS Enterprise Logs

Before jumping in, let us review what log analysis is and where such log data can be found in an ArcGIS deployment.

- Log Analysis: The process of extracting information from log data. This information can be used to quantify GIS usage and help answer:
  - What services are users asking for?
  - What operations are they performing?
  - What performance are they experiencing?
- Log Data: The log data to analyze is the ArcGIS Enterprise logs. Typically, this data resides on the deployment servers and comes in different forms:
  - ArcGIS Web Adaptor access logs (the log data source used in this Article)
  - ArcGIS Server access logs
  - ArcGIS Pro generated logs

Each log source offers its own wealth of information.

Example Scenario for Log Analysis

The following scenario is the use case for our log analysis. Your manager has assigned a task: quantify the ArcGIS Utility Network usage and efficiency of your Site. This means the following will need to be answered:

- Is the Site well managed and optimal?
- What services are the most popular?
- Which methods are called? (query, applyEdits, updateSubnetwork)
- How are services performing? What is the overall user experience?

Our manager has also asked that such analysis be carried out in a cost-effective manner.

Note: Before the log analysis is started, it is recommended to reference any performance targets that may already exist for your organization. Such criteria may state what performance is expected for specific operations, for a given period of time. This can help you answer how your services are performing with respect to your deployment.

Why Perform Log Analysis?

We have our task at hand, but why go through the logs? Why perform log analysis? In order to recognize Site usage and efficiency of the Utility Network deployment, a proven strategy is to understand request performance and utilization through the logs. There are several benefits to this approach:

- Easy to do
- Performed quickly
- Very likely that data already exists to analyze
- Read non-invasively
- Can have minimal cost to server resources

ArcGIS Enterprise logs (ArcGIS Web Adaptor access logs or ArcGIS Server logs) can provide a valuable record of client requests and server responses. This data is a powerful and accurate view of the past.

How to Accomplish the Analysis?

The strategy for this analysis is straight-forward:

- Consume the deployment logs
- Extract request information
- Generate statistics on the service and function performance

Log data can be read and analyzed quickly using a free utility: System Log Parser. System Log Parser is a tool for digesting logs that can be run via a graphical user interface (GUI) or the command line (for automated scripting).
It is compiled for the Windows platform. Log Analysis Strategies Which log source should be used for the analysis? Several source options may exist for a deployment: ArcGIS Enterprise ArcGIS Server ArcGIS Web Adaptor access logs Microsoft Internet Information Services (IIS) Apache Tomcat Cloud Accessibility options may also exist: Web access Local network access Local file system access Each log source has unique strengths and while several log sources might exist for a deployment, it is recommended to pick one and use that for the primary analysis (e.g., ArcGIS Web Adaptor access logs). For this Article, reading the ArcGIS Web Adaptor access logs over the local network will be the source of the data. Note: Most log formats are operating system agnostic and standardize the data columns. ArcGIS Server logs follows this pattern. Running System Log Parser Graphical User Interface Easy to use and configurable. Options: Choose log source Internet Information Services Log Query Set log location path Local: C:\inetpub\logs\LogFiles\W3SVC1 Could also specify a shared network location: \\server1.yourdomain.org\w3svc1 Set Date range Local Time Set Analysis Type to Optimized (the Default) Fast and memory efficient on machine running System Log Parser Analyze logs! Command-line Automation Same functionality as GUI but a little more flexibility. Ideal for generating automated reports periodically (e.g., a Windows Scheduled Task). PowerShell Example: Choose log source -f IIS Set log location path Local: C:\inetpub\logs\LogFiles\W3SVC1 Could also specify a shared network location: \\server1.yourdomain.org\w3svc1 Set Date range Time in UTC Can pass in a specific datetime value -startstring "[Start_DateTime_UTC]" -endstring "[End_DateTime_UTC]" Set Analysis Type to Optimized -a Optimized Analyze logs! PS C:\> # Run System Log Parser via PowerShell
PS C:\> $startLocal = $endLocal = Get-Date # Now
PS C:\> $startLocal = $startLocal.AddDays(-7) # Go back 7 days ago
PS C:\> $startUtc = $startLocal.ToUniversalTime().ToString("yyyy/MM/dd h:mm:ss tt")
PS C:\> $endUtc = $endLocal.ToUniversalTime().ToString("yyyy/MM/dd h:mm:ss tt")
PS C:\> $iisLogPath = "C:\inetpub\logs\LogFiles\W3SVC1" # Could also be \\server1\W3SVC1
PS C:\> $reportDate = $endLocal.ToString("yyyyMMddTHHmm")
& "C:\SystemLogParser\slp.exe" -f IIS -i "$iisLogPath" -startstring "$startUtc" -endstring "$endUtc" -a Optimized -d "C:\MyReports" -n "SLP_IIS_Optimized_$reportDate.xlsx" -o false

Note: If the size of your log data is unknown, it is recommended to start with a smaller query window (e.g., 1hr or 6hrs) until a relative compute time is understood.

The Log Report -- Understanding System Log Parser Output

By default, the generated report is spreadsheet-based. It creates an XLSX file, which is an Office Open XML document. The report typically consists of several worksheets, each of which summarizes a particular metric.

Summary Worksheet

The initial page lists some information detailing: the date the report was generated, the analysis type of the report, the specified log path, the start and end times, and Query Data (high-level log query and request statistics).

Statistics By Method Worksheet

The Statistics By Method worksheet is a breakdown of requests and responses per function that helps answer some of the primary questions of the task: Which methods (also referred to as functions or operations) were called? query, applyEdits, updateSubnetwork? How were the services performing? The table on this worksheet provides a great deal of information. The default view sorts the columns of time by largest Sum value. Sum is derived from "request occurrence (Count column) * average response time (Avg column)", which highlights the service and operation the servers spent the most time on to fulfill responses.

Note: Response times shown for demonstration purposes only. Response times for each deployment are unique as performance is influenced by many factors.

Note: The tabular data view for an actual deployment may be much larger, with more services and additional functions reported.
In addition to Sum, Count and Avg, the following statistics are displayed to help provide a deeper understanding of how the services and functions are performing: Min (minimum, the least or fastest response time value observed for that function from that service source); P50 (the 50th percentile; 50% of the response time data for that function fall at or below this point); P95 (the 95th percentile; 95% of the response time data for that function fall at or below this point); P99 (the 99th percentile; 99% of the response time data for that function fall at or below this point); Max (maximum, the highest response time value observed for that function); Stdev (standard deviation, a calculation of the spread or variation of the response times for that function).

The table highlights that the query function was the most popular method requested. By deployment compute time, the top three operations were actually all query (e.g., MapServer, FeatureServer, and Hosted) across two different services (Naperville_Electric and Naperville_Overlay). Looking at the Avg column, the average response times (in seconds) of these query functions are summarized, and all three were sub-second (meaning less than 1 second). Scanning the table for other functions of interest, we can observe that applyEdits and updateSubnetwork were also called. On average, applyEdits had sub-second response times and updateSubnetwork took several seconds.
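The worksheet statistics above can be reproduced from a raw list of response times. Below is a minimal Python sketch for illustration only (this is not System Log Parser's actual implementation; the nearest-rank percentile method and the sample values are assumptions):

```python
import statistics

def method_stats(times):
    """Summarize response times (seconds) the way the worksheet does."""
    s = sorted(times)

    def pct(p):
        # Nearest-rank percentile (an assumed method, for illustration)
        return s[max(0, int(round(p / 100 * len(s))) - 1)]

    avg = statistics.mean(s)
    return {
        "Count": len(s),
        "Avg": round(avg, 3),
        "Sum": round(len(s) * avg, 3),  # Count * Avg, as in the report
        "Min": s[0],
        "P50": pct(50),
        "P95": pct(95),
        "P99": pct(99),
        "Max": s[-1],
        "Stdev": round(statistics.stdev(s), 3),
    }

# Hypothetical response times for a query function
print(method_stats([0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.9, 1.2, 2.5, 4.0]))
```

Running System Log Parser regularly and comparing these per-method statistics over time is what makes drift in a service's performance profile visible.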
This table helps answer the question of which methods were called: a performance profile for three different queries was found, along with applyEdits and updateSubnetwork, and other functions were present like trace, reconcile, and post. As for how the two reported services were performing, the definitive answer to this question can vary by organization. It is typically based on existing performance criteria for the service, function and metric. Initial runs of System Log Parser will yield an understanding of the current timeframe selected, which should provide a good overall sample. Running System Log Parser on a regular basis will allow you to compare changes in performance profiles. If unusual behavior is observed in subsequent reports, an investigation or review may be required.

Note: Typically, not all methods follow the same performance criteria. Each function performs different work than the others. A feature query would have a different performance profile than updateSubnetwork.

Request Counts By Resource Worksheet

The Request Counts By Resource worksheet is an easy view of which services were the most popular. The totals are separated into two groups: Resource Requests (method based requests), which lists counts based on requests that used known service functions (similar totals to the Statistics By Method worksheet), and Resource Requests (method and service endpoint based requests), which lists counts based on requests that used known service functions plus requests to the REST service endpoints that typically pull metadata (similar totals to the Capability - Server worksheet). This table can help answer the remaining questions of the task: What services are the most popular? Based on the Source and Total columns, we can see that Naperville_Electric was the most requested resource.

Capability - Server Worksheet

The Capability - Server worksheet is an alternative view for assessing service performance. It lists the breakdown by Capability and Source, but method is removed.
Without the separation by method, the total number of requests for many of the services is higher. This is because there can be requests for a service that do not call a function. Some requests just make calls to the service's or layer's metadata. Instead of a statistical view of request performance, the times are grouped into response time buckets. These buckets consist of a duration range (e.g. 0-1 seconds or 6-10 seconds). This simplified view can make it easy to see where time is being spent. In the following example, there are two buckets of time that are highlighted. While these represented a small percentage of the overall number of MapServer requests, they highlight a tuning opportunity for the GIS administrator and/or developer who can decide if the amounts and values were acceptable. Note: Response times shown for demonstration purposes only. Response times for each deployment are unique as performance is influenced by many factors. This table can help answer the remaining questions of the task: Is the Site well managed and optimal? 
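The response time bucket view described above is easy to approximate from raw timings. Below is an illustrative Python sketch; the exact bucket boundaries used by the report are assumptions here:

```python
# Assumed duration-range buckets, similar in spirit to the worksheet's
BUCKETS = [
    ("0-1s", 0, 1),
    ("1-3s", 1, 3),
    ("3-6s", 3, 6),
    ("6-10s", 6, 10),
    (">10s", 10, float("inf")),
]

def bucket_times(times_s):
    """Count response times (seconds) per duration-range bucket."""
    counts = {label: 0 for label, _, _ in BUCKETS}
    for t in times_s:
        for label, lo, hi in BUCKETS:
            if lo <= t < hi:
                counts[label] += 1
                break
    return counts

# Hypothetical request times: most fast, a couple of slow outliers
print(bucket_times([0.4, 0.8, 2.0, 7.5, 12.0]))
```

A small count in the larger buckets is exactly the kind of tuning opportunity the worksheet highlights: a handful of slow requests standing apart from an otherwise fast majority.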
Overall user experience. A well managed and optimal Site implies that the majority of the operations performed were fast or within the range deemed good/acceptable. "Fast" and "within the range" are subjective quantifications and are best answered by preexisting performance criteria from your organization; another perspective is using your best judgement from experience. A small number of requests in the larger time buckets indicates there are some longer running functions. Perhaps these can be tuned; function input (e.g., the parameters of the request) can also be a factor. If the services are dedicated (as is the case with Utility Network services) and they are configured with the appropriate number of instances, configuration tuning could be eliminated as a cause. The overall user experience is related to the functions exercised and their respective performance profiles.

Note: There are several factors that can play a part in service performance: the number of instances (only available with dedicated services), the data, the methods called (as well as the parameters used), the deployment architecture, the available hardware resources, and demand (e.g., concurrent requests). Tuning these characteristics to adjust performance is outside the scope of this Article.

Log Report

The generated log report made it easier to answer the questions for the task. What services were the most popular? Naperville_Electric was the most popular, followed by Naperville_Overlay. What functions were called? Many different methods were called: several spatial queries, applyEdits, updateSubnetwork, trace, validateNetworkTopology, reconcile and post. How were the services performing?
Ultimately, this definition can vary by organization. It is typically based on a criteria or agreement that lists an expectation for each service and operation. However, from the System Log Parser report, we could identify service performance, see a statistical profile for each operation (by service), and gain decision support for maintenance and tuning opportunities.

Summary

From the ArcGIS Enterprise log analysis, a report was generated that summarized the request and response activity of the Utility Network deployment. The analysis and performance breakdown were from System Log Parser. System Log Parser is a free tool for Windows and is also listed in the Well Architected Systems tools section. The report provided statistical data to help answer questions about the Site such as: what was the most popular service, which methods were called by these services, how the services were performing, and whether the Site was well managed. These could be answered when paired with the performance goals of the organization, or from judgement based on experience. However, despite the analysis and task being completed...your work is not done! The best Site analysis comes from periodically evaluating the deployment. Usage trends change over time: some services may become more popular, others less, which is a potential opportunity to optimize dedicated service configurations. You also want to build up historical knowledge of the Site's performance behavior. Understanding how functions perform from the services can help identify when behaviors and/or patterns look out-of-place or unusual. This can assist with troubleshooting and tuning, and can help highlight when the services and functions are, or are not, optimal. Running System Log Parser regularly (e.g., once a month) can help you build a historical understanding of your Site's performance. This Article focused on analyzing a Site with Utility Network services, but the practice and strategies could be applied to any GIS deployment.
02-13-2026 04:55 PM
Hi @AndreaB_, The promotion of this entry's log level from DEBUG to INFO took place at ArcGIS Enterprise 11.4. I will make sure I clarify this in the Article to avoid any confusion going forward. Thank you for pointing this out. Aaron
10-29-2025 07:11 AM
Hi @AndreaB_ Hosted services are a great way to share data but they do not use the Shared instance pool. A quick recap on the three main technologies for running a service in ArcGIS Enterprise:

Dedicated instance services: ArcSOC based. Each service has its own min and max instances, and each instance consumes memory. Supports many different capabilities. Recommended if performance is paramount and you require the most functionality (e.g., UtilityNetwork, Geoprocessing).

Hosted services: Hosted request handler based (e.g., a single Java process). Auto-scales (no need to set the min and max instances) and delivers great performance. Great for feature service queries, with support for other capabilities like vector tiles, WFS and scene. All services will reside in the Hosted folder on ArcGIS Server.

Shared instance pool services: ArcSOC based. All shared services use the same set of reserved ArcSOCs. The screenshot you listed from Manager is a place to adjust some properties for these ArcSOCs, if needed. The number of shared instances is detected at install time...8 logical cores were detected. Auto-scales (to a degree): if there are 8 shared instances running, 8 requests for one service could use all of them at the same time to meet demand; or, if requests for 8 different services came in, it could use all 8 instances to meet demand. Great for data that is important to have in a service but not frequently requested. If you find a shared service is frequently requested, it should be switched to dedicated (a much better strategy than setting a dedicated service to min 0, max 1). Not all service capabilities and features are available as a shared instance.

Regarding your questions... Hosted performance: If you find that the Hosted services are not meeting your performance/scalability criteria, it could be that the 4 cores are not enough to meet your demand (as you suggested). With Hosted services, there are not many dials to tune like number of instances.
If you have several web applications that are serving up these Hosted services, I would recommend following best practices: ensure only the most critical layers are enabled by default; the default scale should be the best for the given data (e.g., avoid having the user zoom in repeatedly to get to critical data); and ensure that quantization view mode is being used unless editing data. If you find that the Shared services are not meeting your performance/scalability criteria, it becomes a different conversation and strategy. My personal take: if request performance for Shared services is "slow", that is okay because they were knowingly marked as shared. With Shared, the service and its data are important, but there is demand and priority elsewhere (e.g., Dedicated services). Therefore, expect Shared response times that are a little slower. You could increase the number of Shared instances, but that is really against the intention of this service pool type...and it could compete with your Dedicated instances, which you would not want. If some services over time are recognized to have a higher priority (based on your log analysis), then it is recommended for them to be changed to Dedicated, where you would need to manage the number of instances (e.g., setting min and max to the number of cores).

Scaling Up and Out: Due to the success and popularity of your Site (a good problem to have), you may find that more cores on the single box are needed (scaling up) and/or you need more machines (scaling out). This can allow you to have Dedicated, Shared, and Hosted service duties on their own respective servers (or some combination). It sounds like you are already thinking that way. Hope this helps Aaron
10-16-2025 12:05 PM
Hi @AndreaB_, SimonSchütte_ct lists some helpful resources on Hosted services. They are a great service option providing good scalability, if the capabilities meet your needs. You can do a lot with them, though features like branch versioning, utility network and geoprocessing are not available. Here is some additional documentation which may help provide more information on what Hosted services/layers can and cannot do: Services and portal items; Hosted layers. Hope that helps. Aaron
10-15-2025 11:32 PM
Hi @EduardBeneke, Yes, you could probably construct a JMeter test to perform the publishing of a large service. This effort would most likely involve an SD file (a service definition that was previously created from ArcGIS Pro) and some asynchronous JMeter test logic. However, the publishing of a service is generally considered an administrative function. While it is very important that publishing works as intended, whether it is slow or fast is not as high a priority (just as long as it still works). That said, here are some reasons that come to mind that might cause service publishing to have slow performance:

When publishing the service, "copy all data" is selected under the Data and Layer Type section. While there are very good reasons to do this, one of the considerations is that ArcGIS Pro will need to upload all the data listed in the Contents pane in the project to ArcGIS Enterprise...for large datasets this can take quite a while.

The network between ArcGIS Pro and ArcGIS Enterprise has measurable latency. If you are publishing from your local machine into the cloud or into a deployment that is physically far away, then performance (of many different operations) could be impacted. In this case, try publishing from an ArcGIS Pro that is on the local network of ArcGIS Enterprise. Alternatively, if the data resides in an Enterprise Geodatabase, consider publishing with the Reference registered data (Map Image and/or Feature) option. As another alternative, consider using the "Save As Offline Service Definition" option when attempting to share/publish the service. This will create an SD file (which might be large and take some time to create), but you take that hit once. Then when you want to republish, you can do it right from Manager over the web.

The PublishingTools.GPServer service is busy publishing other services. By default, up to 3 instances can run at a time. I list this as a possibility, but it would probably be unlikely. Hope that helps Aaron
09-18-2025 02:58 PM
Hi @StevenBeothy I think that test plan is valid and a good programmatic way to get a performance profile from each of the layers. However, there are a few items worth mentioning:

My roads_hfs test requested * for the outFields. This is fine but can be considered aggressive, especially if the tables are "wide" (e.g., 50 or more attributes). In that case, I would consider asking for a few specific attributes (e.g., OBJECTID plus one or two others). But, that assumes the requested attributes exist in all the layers, which is why using * is so easy.

My roads_hfs test was built around using hosted feature service queries (e.g., where resultType=tile). I am a huge fan of the feature tiles, and their data structure lends itself to efficient querying (repeatable extents for better caching, more predictable quantization tolerances for each of the layers). A resultType of, say, standard would be similar but the test logic would be different (and in my opinion, more complex). Additionally, requests with resultType=standard can ask for varying numbers of records to be returned (through the use of the resultRecordCount and resultOffset parameters). This adds variance and test complexity...which again is why I like testing with resultType=tile.

To reiterate, I think your approach is good and resultType=tile style tests are a great way to consume a feature service. I bring up these other items to give my opinion and perspective on what your test is doing and how it's asking for the data...as well as the potential test logic impacts when asking for feature data when the resultType is not tile. Hope that helps.
07-22-2025 11:41 AM
ArcGIS Server's Info Log Level

In previous Community Articles, I've talked about the wealth of information you can obtain by analyzing the ArcGIS Server logs for elapsed time, wait times (e.g., queue times), and instance creation times. Analyzing these metrics with tools such as System Log Parser can help you quantify your GIS deployment to understand what resources users are requesting and how long they are waiting to receive responses for them. From there you can decide if you need to increase service instances or add more processing capability. While getting a statistical view of service performance using elapsed time is good (really good), there is another ArcGIS Server log metric that can bring a whole new perspective to this analysis. The metric is the Info Log Level entry. Now, this entry is not a new feature, but in recent ArcGIS Server releases its Log Level has been elevated (to Info). This makes its valuable information more accessible without the performance penalty of levels like Verbose and Debug.

Why Info Is A Great Entry for Analysis

What makes the Info entry great is that it provides a representation of a request's response time! I say "representation" because the time is taken from the point-of-view of the platform (the moment the request entered the framework) as opposed to the requesting client. This duration combines elapsed time, wait time and creation time into one value! For example: from this screenshot, the Info entry (with Log Code 9999) contains a listing of the request's URL (under the Message column) and the response time (under the Time Elapsed column). With the URL, you get the name of the service requested as well as the function called.

Note: The time of the Info request entry is in milliseconds. ArcGIS Server may list other log entries at the Info level that do not correspond to a request's response time.
While you can expect to find times for requests for resources like map services and feature services, you'll also observe entries that go through the framework but do not utilize an ArcSOC service instance in the traditional sense. For example, calls for static files such as rest.js and main.css (as shown in the screenshot). In addition to those static items, you may also see response time entries for such resources as hosted feature services and vector tile services. These items, if such services exist in your deployment, are a nice bonus! Granted, requests for such items are typically very fast, but by examining Info type entries, analysis can be performed that usually requires an access log.

Note: An access log is commonly found with IIS or Tomcat (e.g., where the ArcGIS Web Adaptor is installed). While these are perfectly good technologies that efficiently log every request into the system, they might be a component that is not available with your deployment. The ArcGIS Server Info Level log type helps bridge that gap.

Create a System Log Parser Report from Info Log Level Entries

The ability to create a System Log Parser report from the Info Log Level entries can be found from the Analysis Type option, under ArcGIS Server Log Query (File System). Note: ArcGIS Server's Log Level must be set to at least Info to create a System Log Parser report based on Info entries.

System Log Parser Info Report

The System Log Parser Info report will show response time statistics for service requests as well as other items that go through the ArcGIS Server framework. Note: The Info entry discussed in this Article highlights the response times that are recorded for many ArcGIS requests. However, the log entry currently does not break down these requests by user. This detail could probably be inferred by using the RequestID attribute and finding a matching User Authentication entry, but that analysis is not covered in this Article or currently by System Log Parser.
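To illustrate the idea of mining these entries, here is a rough Python sketch that pulls the elapsed time and URL out of Log Code 9999 Info entries. The log line below is a simplified, hypothetical example of the format (real ArcGIS Server log files may be laid out differently), and System Log Parser already does this work for you:

```python
import re

# Hypothetical, simplified Info log line for illustration only
sample = ('<Msg time="2025-07-02T10:15:30,123" type="INFO" code="9999" '
          'elapsed="312">https://gis.example.com/server/rest/services/'
          'Naperville_Electric/FeatureServer/0/query</Msg>')

def info_entries(lines):
    """Yield (url, elapsed_ms) pairs for Log Code 9999 Info entries."""
    pat = re.compile(r'code="9999"[^>]*elapsed="([\d.]+)"[^>]*>([^<]+)</Msg>')
    for line in lines:
        m = pat.search(line)
        if m:
            yield m.group(2), float(m.group(1))

for url, ms in info_entries([sample]):
    print(f"{ms} ms  {url}")
```

With the URL and elapsed time in hand, the per-method statistics and bucket views discussed earlier can be computed from ArcGIS Server logs alone, without an IIS or Tomcat access log.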
In the end, this analysis is all about turning log data into information...with Info log entries, you can have more information about your GIS deployment and services available to you to help make the best decisions possible.
07-02-2025 03:34 PM
Hi @AndreaB_, I think that is a good guess on the query calls. Such log entries could be from users or from dashboards. These map layer query requests return data in the form of json, pbf or kml. For MapServer services, this textual info is pulled from each layer separately, which can then be rendered in a client (e.g., a JavaScript app). Export, on the other hand, is a request to generate an image of this same data from one or more layers at once. While the function is called export, "export map" only returns a rendered image...not the actual data. Hope that helps Aaron
04-04-2025 12:08 AM
Hi @RandyBonds_Jr_, Generally speaking... There is not a direct relationship between the number of CPUs and the number of ArcSOC instances. That said, for your most critical services (typically there are only a handful), these should be set to a min/max of N, where N is equal to the number of CPU cores. The remaining services could then be shared and/or hosted. From the perspective of memory, there is a more direct relationship with ArcSOCs. The key factors are the number of ArcSOC instances, the memory size of each ArcSOC, and usage patterns. A very simple equation could be something like: (number of ArcSOCs x Avg ArcSOC size). Hope that helps Aaron
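That simple equation can be sketched in a couple of lines of Python. The instance count and average size below are hypothetical values for illustration; measure your own ArcSOC sizes before planning capacity:

```python
def estimated_service_memory_gb(num_arcsoc, avg_arcsoc_mb):
    """Rough footprint: number of ArcSOCs x average ArcSOC size (MB), in GB."""
    return num_arcsoc * avg_arcsoc_mb / 1024

# Hypothetical: 12 instances averaging 500 MB each
print(estimated_service_memory_gb(12, 500))
```

This is only a floor for planning purposes; usage patterns (e.g., instances growing under load) can push the real footprint higher.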
01-22-2025 03:05 PM
Hi @Br1an_Peters, Not sure on creating a dashboard. However, you can tell System Log Parser to write out the statistical analysis of a Simple report as JSON (instead of an xlsx file). This could be used as input to a dashboard or observability tool. slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 2hr -a simple -u siteadmin -p sit3adm1nPW -r json
01-14-2025 10:47 AM
Hi @Br1an_Peters, To specify a custom output location use the -d option with a path. For example: slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 2hr -a Optimized -u siteadmin -p sit3adm1nPW
01-08-2025 04:36 PM
Hi @SGTomlins,
I believe the Response code: 499 is a token-related error. Most likely, the service your test is consuming requires authentication. As it currently stands, cache_tiles2.zip does not have authentication functionality added in. A potential solution would be to look at the authentication components from this test: Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced) and copy them into the tile test.
Hope that helps Aaron
11-15-2024 12:31 PM
Hi @SGTomlins, Finding delays or bottlenecks in a system can be a complex problem to solve. Sometimes the issue is hardware (e.g., not enough memory, too few CPU cores) and sometimes it is software (e.g., not enough service instances, max number of db connections reached, map is showing too much detail at small scales). As for an integrated, easier way to accomplish such a task, I'm not sure. Logs/traces are certainly valuable; as you mentioned, they are in the realm of "forensics" and take more time to gather/analyze. Whatever method you choose, I would offer some strategies. I would start with identifying your top 3 most popular services. This will help narrow the focus and give the best return on investment of time. From there, look to understand how the services are being utilized (e.g., what functions are people requesting from these services...export map, query, applyEdits, etc...). It is also advantageous to know the performance profile of the functions...maybe it's only one particular operation that impacts users the most. From an ArcGIS Server logging perspective, different functions have different levels of detail recorded. For some, you may have to increase the ArcGIS Server LogLevel to get that information. For example: -- export map at verbose/debug can list the duration of the overall request (from the point of view of the ArcSOC) and how long it takes for the symbology to render -- feature query at verbose can list the duration of the overall request (from the point of view of the ArcSOC), and show how much of that time was spent in the database...this is helpful for understanding if you're spending more time on data retrieval or data serialization. Once items for improvement have been identified, you can start researching potential ways to tune and remedy a solution to improve performance and remove the bottleneck. Hope that helps.
Aaron
11-15-2024 12:25 PM
Benchmark ArcGIS Enterprise...The Original Approach

A while ago, I discussed using the Natural Earth dataset with a preconfigured Apache JMeter test to benchmark an ArcGIS Enterprise deployment. The results from that test could then be compared to runs from other deployments to get a comparative idea of the underlying hardware's performance and scalability characteristics. This approach had some benefits: Natural Earth is free GIS data, available for public use, with low-to-moderate data complexity (easy to work with), and the Test Plan featured a step load for observing scalability capabilities. While useful and a good measuring stick, the scalability component meant the test would typically run for a long time (which also added some complication). I had wondered if there was an easier way to just benchmark the processing hardware (e.g., the CPU) but still through ArcGIS Enterprise: Was it possible to use JMeter from a performance-only perspective? Could I create a test to benchmark ArcGIS Enterprise without an underlying FGDB or enterprise geodatabase dataset (which should simplify the overall effort)? It turns out the answers were yes!

Benchmark ArcGIS Enterprise...An Alternative Approach

Okay...I am speaking in half-truths. The new benchmark test does not depend on an FGDB or eGDB dataset based service, but it does need some data. To help keep things simple, the data (e.g., pre-generated geometries) is simply passed through the JMeter sample elements to an ArcGIS resource that does not have a referenced dataset behind-the-scenes. So, how is this done? Through the tried-and-true Geometry service. ArcGIS Server's Geometry service is a built-in resource that provides access to many functions for performing geometric operations. The calculations of these operations (like buffer or generalize) can be simple or complex (depending on what you ask of it).
From a performance analyst's perspective, it provides a fantastic means for benchmarking the CPU hardware of the machine running ArcGIS Server.

Note: Although the term ArcGIS Enterprise includes ArcGIS Server, this benchmark primarily exercises the latter (e.g., ArcGIS Server). Some traffic may go through the ArcGIS Web Adaptor and there would be a small amount of Portal for ArcGIS authentication taking place, but by design, the bulk of the work will be performed by ArcGIS Server.

Benefits of Using the Geometry Service

The Geometry service has been around in ArcGIS Server since version 9.3, so it's ubiquitous. That makes a test utilizing it easy and reliable. Since the data driving the test is put inside the key/value pairs of the requests, that adds portability (e.g., no dataset to lug around). Note: While the Geometry service has been included with ArcGIS Server for some time, by default it is off and not running. The service would need to be started and shared to the appropriate Portal for ArcGIS members before running the test.

The Geometry_Functions_Benchmark Test Plan

To download the Apache JMeter Test Plan used in this Article see: geometry_functions_benchmark1.zip. Downloading and opening the Test Plan in Apache JMeter should look similar to the following: Adjust the User Defined Variables to fit your environment.

What Types of Functions Should be Tested?

For a benchmark, the short answer is only a few. This particular Test Plan only calls a few different operations...as well as the same operations in different ways (e.g., changing request parameters to purposely get a variant response). This provides mutability so the test is not just doing the same thing over and over. Below is a look at the operations used in this benchmark: generalize, toGeoCoordinateString, project, buffer.

Expected Test and Operation Performance

This test has some operations that may perform fast and others that will take more time. This speed will vary based on the hardware.
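To give a feel for what the Test Plan sends, here is a hedged Python sketch that builds the form parameters for one Geometry service buffer request. The server URL is hypothetical and the coordinate and parameter values are illustrative, not the exact ones used in the Test Plan:

```python
import json

# Hypothetical server; the Geometry service must be started and shared first
GEOMETRY_URL = ("https://gis.example.com/server/rest/services/"
                "Utilities/Geometry/GeometryServer")

def buffer_params(points, distance_m):
    """Build form parameters for a Geometry service buffer request."""
    return {
        "geometries": json.dumps({
            "geometryType": "esriGeometryPoint",
            "geometries": [{"x": x, "y": y} for x, y in points],
        }),
        "inSR": 102100,        # Web Mercator
        "bufferSR": 102100,
        "outSR": 102100,
        "distances": distance_m,
        "unit": 9001,          # meters
        "unionResults": "false",
        "f": "json",
    }

# Illustrative point near the Naperville sample data extent
params = buffer_params([(-9756305.0, 5124699.0)], 10000)
# POST params to f"{GEOMETRY_URL}/buffer" (e.g., with the requests library)
```

Because the geometries travel inside the request's key/value pairs, the whole benchmark is portable: no dataset needs to be staged on the server being tested.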
Ultimately, we just want ArcGIS Enterprise (e.g., Server) to work for just a few minutes so we can get an idea of the processing performance. If each operation took 10 minutes (with the overall test many times longer), the benchmark itself would become too time-consuming and less practical to use.

Deployment Architecture Example

This benchmark test was run in a lab against two different servers (e.g., run once per server):

ArcGIS Enterprise -- Machine #1 (older hardware): Intel Xeon E5-4650, 2.70 GHz; SPECint_base2006 Score: 50.5; 32 processing cores, HyperThreading disabled; 64GB RAM; 10Gbps network

ArcGIS Enterprise -- Machine #2 (newer hardware): Intel Xeon Gold 6126, 2.60 GHz; SPECint_base2006 Score: 71.9; 24 processing cores, HyperThreading disabled; 128GB RAM; 10Gbps network

Note: Since this testing effort was more focused on speed instead of throughput, SPECint_base numbers were used instead of SPECint_rate_base.

Benchmark Test Execution

For long running tests, it is not recommended to run the Test Plan within the GUI. However, since this is a relatively short test, the impact is nominal. Note: When running any test, it is always recommended to coordinate the start time and expected duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site of interest (e.g., the production deployment). Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Results

After adjusting the User Defined Variables to point to the appropriate environment (Machine #1…devlab05), the benchmark was run right in the JMeter GUI.
The results can be observed from the View Results in Table element:

For convenience, the Test Plan automatically calculates the overall test run duration, right in the name of the last operation. This makes the benchmark time easy to observe from the table.

The Test Plan was adjusted to point to a server on newer hardware (Machine #2…eistsrv05) and the benchmark was rerun. In the table, these results appear after those of the first run.

As expected, the first machine required more time to complete the same operations. This resulted in a measurable difference in performance between the two machines:

Machine #1…devlab05 -- Benchmark duration: 259946 ms
Machine #2…eistsrv05 -- Benchmark duration: 181441 ms

Calculate Percentage Change

Since the response times were lower (e.g., faster) with the newer hardware (compared to the first run on older hardware), we'll calculate a percentage decrease:

First, original server time - newer server time = the decrease
Then, the decrease ÷ original server time × 100 = the % decrease

(259946 ms - 181441 ms) / 259946 ms = 0.302
0.302 × 100 = 30.2%

The benchmark time on the newer hardware was 30.2% lower than on the older hardware (our starting point). This percentage change represents a measurable improvement when using the newer hardware.

Percentage Change Estimate Based on SPEC

Let's use the SPEC ratio with the benchmark time from the original run to predict the Target_Time (the benchmark time on the newer machine). This can help us understand whether roughly the same percentage change could be estimated from the hardware specifications alone.

(Baseline_SPEC × Baseline_Time) = (Target_SPEC × Target_Time)
Target_Time = (Baseline_SPEC × Baseline_Time) / Target_SPEC
(36.875 × 259946 ms) / 53.75 = 178335 ms (rounded down to the nearest millisecond)
(259946 ms - 178335 ms) / 259946 ms = 0.314
0.314 × 100 = 31.4%

From this prediction, the benchmark time on the newer hardware was estimated to be 31.4% lower than on the older hardware.
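Both calculations can be reproduced in a few lines of Python, with the durations and SPEC figures taken directly from the runs above:

```python
# Observed benchmark durations from the two runs (milliseconds)
baseline_time = 259946  # Machine #1 (older hardware)
newer_time = 181441     # Machine #2 (newer hardware)

# Observed percentage decrease: (decrease / original) * 100
observed_pct = (baseline_time - newer_time) / baseline_time * 100
print(f"Observed decrease:  {observed_pct:.1f}%")   # 30.2%

# SPEC-based prediction: Target_Time = (Baseline_SPEC * Baseline_Time) / Target_SPEC
baseline_spec, target_spec = 36.875, 53.75
target_time = (baseline_spec * baseline_time) / target_spec
predicted_pct = (baseline_time - target_time) / baseline_time * 100
print(f"Predicted decrease: {predicted_pct:.1f}%")  # 31.4%
```

The two figures land within about one percentage point of each other, which is what makes the SPEC ratio a useful back-of-the-envelope predictor here.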
This is very close to the percentage change that was calculated from the observed benchmark times.

Future Hardware

Processor architectures and CPU speeds are always improving. Eventually, such a benchmark test (as it is currently built) may only take a minute or tens of seconds to run (what a great problem to have). At that point, complexity could be added to the test to increase its run duration to better match the new technology.

You may have noticed the last transaction in the test was disabled. This 1000 Point Buffer request, with a distance of 10000 meters and a unit of 9035 (International Meter Distance), takes some time to calculate (even on decent hardware). It was disabled to shorten the run time to a reasonable duration. However, if helpful, it can be enabled as an additional calculation, depending on the CPU speed of the deployment of interest.

Final Thoughts

As mentioned in other community articles, there is no one service or function that can cover the entire breadth and depth of ArcGIS. However, the Geometry service is a resource that represents a portion of the amazing field of GIS and is easy to work with. This makes it a good option to use for benchmark testing efforts.

A Fast Response Time Is All About CPU Speed, Right?

For this Geometry benchmark test, yes. However, for real-world services, processing speed is not the only factor. Server hardware components like disk speed, available memory, and network speed are other resources which can improve response times (in addition to CPU speed). Together, they all have a positive effect on the user experience. This benchmark focused on CPU performance because it is a large part of the client request/server response process, but as just mentioned, it is not the only server resource that matters when taking other potential ArcGIS services into account.

What About Other CPU Comparison Tools?

There are many utilities out there that can profile and test the various pieces of server hardware using a whole battery of exercises.
These tests are great and certainly add value for understanding the hardware. Again, there is no one test that can represent all things GIS. But hopefully, this Geometry Benchmark Test Plan can be a useful tool in the analyst's tool chest.

To download the Apache JMeter Test Plan used in this Article, see: geometry_functions_benchmark1.zip

Attribution

Resource: File:Wikimedia_Foundation_Servers-8055_43.jpg
Description: Rack-mounted 11th-generation PowerEdge servers
Author: Victorgrigas - Own work
Created: 16 July 2012
Uploaded: 20 July 2012
License: CC BY-SA 3.0, Link

Resource: File:Cpu-processor.jpg
Author: Fx Mehdi - Own work
Uploaded: 30 May 2019
License: Creative Commons Attribution-Share Alike 4.0 International
Posted 09-03-2024, 11:52 AM