BLOG
Hi @ZachBodenner,
> could possibly expand on why it would be "potentially faster." Is that just because it's easier to devote dedicated instances to the service?
Yes, but I think it would have more to do with not splitting time across different service instances when retrieving the same data. If all 20 dedicated instances belong to one service, there is a greater chance of improved performance from the benefit of "cache hits". There is "cache" all over, but the one I am thinking of is at the ArcSOC level (depending on the service, there can be a workspace cache that can be taken advantage of). In the end, the performance of both configurations is probably really close, but if I were to go with one (without performance testing the differences between the two), I would pick the layers coming from the same service. Aaron
03-08-2024 02:55 PM
BLOG
Hi @ZachBodenner,
My take is that it would be more efficient (and potentially faster) to have all of the layers coming from the same service. This assumes all the layers are using the same connection to the data under the hood. With this approach the web map can be more easily managed, as there is just one service to optimize and tune (e.g., number of instances). Granting permissions in Portal for ArcGIS should also be simpler. Of course, the elephant in the room is 20 layers. lol. If all 20 layers are required for functionality, that is one thing. But, if possible, consider making some of them opt-in or enabling some based on the map scale. Hope that helps. Aaron
03-01-2024 06:19 PM
BLOG
Hi @ChiefKeefSosa300,
It depends. You stated you tested a different map service. Does that mean different data than Natural Earth? If yes, each dataset will have its own profile for performance and scalability because the geometry density and complexity can vary. Your data may have more (or fewer) layers, and each layer may have more (or fewer) attributes. If no, and you also tested Natural Earth, then there is a greater chance of seeing a similar profile for performance/scalability, but there are still other variables which can impact response time and throughput, like the system architecture (number of machines) and hardware (number of CPUs, CPU speed and memory). Hope this helps. Aaron
02-09-2024 05:52 PM
BLOG
Branch Versioning

Branch versioning is a type of versioning for an enterprise geodatabase that aligns with the ArcGIS Enterprise Web GIS model. It uses a service-oriented design to support multiuser editing workflows and long transaction scenarios through web feature layers. When datasets are registered as branch versioned in the geodatabase, there is an option to enable the Version Management capability as the resource is being shared/published. This creates a version management endpoint that can perform the creation of new named versions as well as the administration of existing ones. Web users can query and edit data in their own named version through the web feature layer, or reconcile and post the changes into the default parent version for other members to utilize.

In short, branch versioning is built for the modern web and uses a simplified data model that is optimized for today's clients like ArcGIS Pro.

For more information see:
- Versioning types -- branch versioning
- Version management
- Version administrator

Why Test Branch Versioning?

Since branch versioning is the data model for multi-user editing over the web with ArcGIS Enterprise and is a major component of frameworks such as the utility network and parcel fabric, understanding its performance profile can greatly benefit GIS administrators.

Branch Versioning Editing Testing Challenges

While exercising branch versioning queries from JMeter is not too difficult, things become more complex when editing is involved. For example:
- When a new named version is created, it needs to be referenced in the test by the associated GUID in some places and by the name in others
- Any time a "write" operation takes place against the data (like creating a new named version, applying edits, reconciling or posting), a new historicalMoment is generated
- Any web feature layer queries that take place after a write operation will want to use the updated historicalMoment to retrieve the most up-to-date information from the version management system
- Prior to adding new data, the test needs to ask the system to reserve objectIDs and use the returned numeric value as the starting point
- In addition to objectIDs, items being inserted or updated also have globalIDs to keep track of
- To properly perform write operations, an editing lock needs to be acquired that utilizes a unique sessionID

With editing in a load test, it is easy to see the additional moving parts that the Test Plan needs to keep track of (a small sketch of this identifier bookkeeping appears at the end of this section). Of course, when this happens in ArcGIS Pro, this complexity is handled automatically for the user.

Note: The historic moment is one of the strengths of branch versioning as it gives a user a mechanism for viewing data as of a particular date and time. Lock acquisition is another key feature of branch versioning which allows the system to have multiple readers for each named version or just one exclusive editor.

How to Test Branch Versioning Editing?

The branch versioning editing workflow was conducted in ArcGIS Pro 3.2 Final (and ArcGIS Enterprise 11.2). While the operations were executed in ArcGIS Pro, the HTTP traffic (between ArcGIS Pro and ArcGIS Enterprise) was captured and later converted from a HAR file to a JMX file (JMeter Test Plan).
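As promised above, here is a tiny Python sketch of the identifier bookkeeping called out in the challenges list. In the actual Test Plan this is handled with a User Parameters element; the variable names below are illustrative, not the exact names used in the JMX file.

```python
# Minimal sketch: generating the per-session identifiers the Test Plan tracks.
# Variable names are illustrative; the Test Plan defines its own equivalents.
import uuid

session_id = "{" + str(uuid.uuid4()).upper() + "}"        # sessionID for lock acquisition
gdb_version = "version_" + uuid.uuid4().hex[:16].upper()  # named version suffix
point_global_id = "{" + str(uuid.uuid4()).upper() + "}"   # GlobalID for the new point
polygon_global_id = "{" + str(uuid.uuid4()).upper() + "}" # GlobalID for the new polygon

print(session_id, gdb_version, point_global_id, polygon_global_id)
```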
At a high level, the workflow operations in this Article contained the following steps, which represent a user's (branch versioning) editing session in ArcGIS Pro:
1. Portal Member Authentication
2. Open the Project
3. Create a new named version
4. Switch to the named version
5. Zoom in to an area of interest
6. Insert a new point
7. Update attributes of the new point
8. Insert a new polygon
9. Update attributes of the new polygon
10. Save Edits
11. Reconcile with Default
12. Post the changes back to Default
13. Zoom out (back to the initial map scale)

The general steps in this Article should work with any service data that is registered as branch versioned, though the test workflow makes specific use of a point layer and a polygon layer that existed in the utilized dataset. The dataset tested was the free and publicly available Natural Earth.

View of the Natural Earth service from ArcGIS Pro: Southern California, USA

The Natural Earth Dataset

The Natural Earth dataset provides some decent map detail (at smaller scales) covering the whole world.
- Download the Natural Earth dataset here
- The download above is a subset of the larger Natural_Earth_quick_start.zip and includes a modified MXD for ArcMap 10.8.1 and an ArcGIS Pro 2.8 project
- Either can be used to publish and create a cached map service to ArcGIS Enterprise

The ArcGIS Pro Feature Cache

ArcGIS Pro uses a feature cache to improve display performance when navigating to areas of interest that have already been visited on the map. If any editing takes place, this cache is refreshed. Feature caching is an all-around good performance strategy. However, when capturing the HTTP traffic for this workflow, the feature cache was turned off.

The Editing Workflow via ArcGIS Pro

This section lists the editing workflow steps from an ArcGIS Pro visualization point of view.

Open the Project
- The resources of the project are based on the web feature layers from the published service

Create a new, public named version and switch to it
- When manually conducting the editing workflow, the value of "edits_20231211_2" was used, but within the JMeter test this value will be dynamically generated and unique to better allow for automated execution
- After opening the project, ArcGIS Pro will issue multiple queries to the various layers listed in the table. When a new version is created and ArcGIS Pro switches to it, similar-looking layer queries will be sent again, but these are actually for the new version.

Note: Although the editing workflow starts with "Portal Member Authentication", the logic for this operation was manually adjusted in the Test Plan to remove some additional requests and reduce complexity.

Zoom in to an area of interest (where the inserts will take place)
- In this example, the area of interest is southwest of Palm Springs, California, USA

Insert a new point
- From ArcGIS Pro, select the Edit tab from the top menu; click on Create (under the Features section); then from the list of Create Features Templates select "ne_10m_populated_places"
- Then, click on an area in the map to insert the point

Note: After clicking in the map, the point was actually inserted into the geodatabase. However, since the Save operation has not been executed, the "Branch Moment" for the named version has not yet been updated. Once Save has taken place, the edits become "official".

The next step involved changing attributes of the newly inserted point, as it was an opportunity to later show a feature update within the JMeter Test Plan.
Alternatively, you could have just used the Feature Template to pre-populate the attribute fields and then perform the insert.

Update attributes of the new point
- With the newly added point still selected, click Attributes (under Selection), then populate several of the fields
- SCALERANK, NATSCALE and LABELRANK were given arbitrary numeric values
- The NAME field was specifically set to "Otisburg"
- To commit the changes, click Apply

Insert a new polygon
- From ArcGIS Pro, select the Edit tab from the top menu; click on Create (under the Features section); then from the list of Create Features Templates select "ne_10m_urban_areas"
- Then click on an area of the map and draw a polygon around the "Otisburg" point

Update attributes of the new polygon
- With the newly added polygon still selected, click Attributes (under Selection), then populate several of the fields
- scalerank was given an arbitrary numeric value
- The featurecla field was specifically set to "It's a little bitty place"

Note: When adding the point and polygon, several feature queries were issued as the cursor moved around the map. These queries are the result of the Snapping engine. With snapping enabled (the default option), the additional requests are expected behavior. The captured workflow used in this Article left this option enabled. The functionality can be turned off from the Snapping management if the additional requests are not desired in the captured traffic.

Save Edits
- With the branch versioning edits complete, the changes were internally finalized by the system once Save was clicked

Reconcile with Default
- Reconciling with default is the process of resolving conflicts that result from merging parent version data (e.g., from default) into the edit session
- The captured workflow used the Reconcile defaults (as captured by ArcGIS Pro):
  - Detect conflicts by attribute (instead of object)
  - Abort if conflicts were detected was false
  - Issue a Post (after the reconcile) automatically was false
  - Run the job asynchronously was false
- Since this editing workflow is relatively simple, there should be no existing attributes that conflict with the edits that were performed

Post the changes back to Default
- Post is the process of merging the edit session data into the default version
- The inserts will now become part of default and any new named versions that are created afterward
- The captured workflow used the Post defaults (as captured by ArcGIS Pro):
  - Run the job asynchronously was false

Note: If the captured traffic is executed in the Test Plan multiple times (e.g., a load test), the "same" items will appear multiple times in default (as expected). From a visual perspective, it might be difficult to "see" these new objects spatially in the default version as they would all be using the same geometry. However, the labeling of the point and polygon layers might give hints that multiple occurrences exist for items using the same geometry.

Note: A few words about an asynchronous reconcile or post: with ArcGIS Pro 3.2, the default execution from the ribbon menu is synchronous. If asynchronous execution is used instead, it follows a different HTTP request pattern (since the operation becomes a submitted job whose status must be periodically polled). Additionally, the action utilizes a separate ArcGIS Server service (e.g., the System GP services like SyncTools, UtilityNetworkTools, or VersionManagementTools) to carry out the appropriate asynchronous task.
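To illustrate that submit-and-poll pattern, here is a hedged Python sketch of waiting on an asynchronous job. The job resource path, parameter names and status values are assumptions for illustration only; the workflow captured in this Article used synchronous execution and never issues these requests.

```python
# Hedged sketch: polling a hypothetical asynchronous reconcile/post job.
# The job resource path and status strings below are assumptions, not captured traffic.
import time
import requests

base = "https://server.domain.com/arcgis/rest/services/NaturalEarth/FeatureServer"
job_url = f"{base}/VersionManagementServer/jobs/JOB_ID"  # hypothetical job resource

while True:
    status = requests.get(job_url, params={"f": "json", "token": "TOKEN"}).json()
    # Stop polling once the job reports a terminal state (state names are assumptions)
    if status.get("status") in ("Succeeded", "Failed", "CompletedWithErrors"):
        break
    time.sleep(2)  # wait between polls instead of hammering the endpoint

print(status)
```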
Zoom out (back to the initial map scale)
- Finally, the session ends by navigating back to the original map scale of 1:1,000,000 to help provide a visual point of reference for the edits that were performed

The Branch Versioning Editing JMeter Test Plan

To download the Apache JMeter Test Plan used in this Article see: branch_versioning_editing1.zip

Opening the Test Plan in Apache JMeter should look similar to the following:

The test steps closely follow the workflow performed in ArcGIS Pro:
1. Portal Member Authentication
2. Open the Project
3. Create a new named version
4. Switch to the named version
5. Zoom in to an area of interest
6. Insert a new point
7. Update attributes of the new point
8. Insert a new polygon
9. Update attributes of the new polygon
10. Save Edits
11. Reconcile with Default
12. Post the changes back to Default
13. Zoom out (back to the initial map scale)

Components of the Test Plan

To help keep this branch versioning editing Test Plan as simple as possible, only a few options exist under User Defined Variables that need to be configured. A single HTTP Header Manager element is utilized, and just one Response Assertion exists to throw messages if the word "error" occurs in any response.

CSV Data Set Config

The test does make use of a CSV Data Set Config element in JMeter in order to authenticate as a different Portal member when executed under load. This item is located inside the Thread Group.

Note: As administrators, testers and analysts, the JMeter Test Plan could be as complex as needed to fit your needs. It is not uncommon to have an editing test drive the actual changes (e.g., geometry inserts or attribute updates) from a CSV file to make the effort more dynamic and realistic. However, one of the goals of this Article is to help show how the editing can be done while keeping the complexity of the test to a minimum.

(User Test Thread) Initialization Transaction

Once the user (or more accurately, the test thread) starts, but before authentication takes place, we want JMeter to generate values for a few items and assign them to variables:
- sessionID
  - Used for acquiring locks and for write operations
  - Unique for each user's ArcGIS Pro session
- gdbVersion
  - Geodatabase version name identifier
  - Used to help create the named version's "official name"
  - Once created, ArcGIS Enterprise will also generate a GUID associated with the new named version
  - A unique name for each named version
- A GUID for our point to be added
  - Unique for every added point
  - Unique for each user's ArcGIS Pro session
- A GUID for our polygon to be added
  - Unique for every added polygon
  - Unique for each user's ArcGIS Pro session

These are defined through a User Parameters element.

Open ArcGIS Pro Project Transaction

When expanded, the Open ArcGIS Pro Project transaction will list the requests issued when the aprx file was launched. There are typically many requests for such an operation as ArcGIS Pro gathers metadata on the service and layers. In the middle of this operation are a few requests calling a special GUID resource. These are calls to the default version in the geodatabase. Typically, "BD3F4817-9A00-41AC-B0CC-58F78DBAE0A1" is the GUID associated with Default. There is also a startReading request to obtain a read lock that utilizes the sessionID generated earlier in the test.

Create Version and Switch Version Transaction

The Create Version operation does a lot of important work in the Test Plan. It ensures that the edits will have a place to go (the named version) until they are intentionally merged with Default.
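As a rough illustration of what the Test Plan does here, the following Python sketch issues the create call against the service's VersionManagementServer. The hostname, service name and response key names are assumptions based on the captured traffic described in this Article, not a definitive API reference.

```python
# Hedged sketch: creating a named version through the VersionManagementServer.
# Hostname, service name and response key names are assumptions for illustration.
import requests

vms = ("https://server.domain.com/arcgis/rest/services/NaturalEarth/"
       "FeatureServer/VersionManagementServer")

resp = requests.post(f"{vms}/create", data={
    "versionName": "version_O5AHEMD89CYTH7GW",  # the test-generated gdbVersion value
    "accessPermission": "public",               # the captured workflow created a public version
    "f": "json",
    "token": "TOKEN",
}).json()

# Capture the pieces that subsequent requests depend on
version_info = resp.get("versionInfo", {})
version_guid = version_info.get("versionGuid")  # key name is an assumption
version_name = version_info.get("versionName")  # e.g., USER007.version_O5AHEMD89CYTH7GW
```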
Upon creating the new version, the versionGUID, versionName and branch versioning moment are returned; these need to be captured and stored into additional test variables since they will all be used in subsequent requests. The versionName is the official vanity designation of the named version. While the test generated a value for "gdbVersion", the system adds some additional information to its final form. For example:
- gdbVersion value (generated by JMeter): version_O5AHEMD89CYTH7GW
- versionName value (returned by ArcGIS Enterprise): USER007.version_O5AHEMD89CYTH7GW

Note: Create (version) is available through the service's VersionManagementServer capability.

Remember, when creating the new named version, ArcGIS Pro was instructed to switch to it. Immediately following create, there were requests issued against the new version using its GUID, as well as several queries. These queries had specific key/value parameters that depended on the new branch versioning moment and the name of the new version that was returned (from the response of create).

Add Point Spatially Transaction

Like the create operation, Add Point Spatially contains several critical pieces. This particular action is the first to "start" the edit session for the long transaction by sending a startEditing request, which acquires a write lock against the named version.

Since an insert is taking place, an objectID for the point is also required. The reserveObjectIDs call asks the system to return an objectID starting point for that particular layer. The test requested 100 IDs and the system responded with a firstObjectID to start with (which is captured to a JMeter variable) followed by another number. The second number is the contiguous, reserved range of IDs (the firstObjectID is the first item in this range).

Note: While the GlobalID GUIDs can be generated by the Test Plan, ObjectIDs cannot. ObjectIDs must be allocated by the web feature layer.

Note: A reserveObjectIDs request asking for 100 allocated ObjectIDs does not guarantee that 100 contiguous IDs were reserved. The system will respond with the range available from that initial request. Additional reserveObjectIDs calls might be needed to obtain the necessary number of IDs if the tested workflow is planning to perform multiple inserts that exceed the allocation. For such a situation, the test plan would need to handle the objectID logic itself. ArcGIS Pro does this automatically for the editing user.
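As a hedged sketch, the reservation call might look like the following in Python. The operation name comes from the captured traffic; the exact parameter and response key names are assumptions for illustration.

```python
# Hedged sketch: reserving a block of ObjectIDs before performing an insert.
# Parameter and response key names are assumptions based on the captured traffic.
import requests

layer_url = "https://server.domain.com/arcgis/rest/services/NaturalEarth/FeatureServer/1"

resp = requests.post(f"{layer_url}/reserveObjectIDs", data={
    "requestedCount": 100,  # ask the web feature layer for a block of 100 IDs
    "f": "json",
    "token": "TOKEN",
}).json()

first_object_id = resp.get("firstObjectId")  # starting point of the reserved range
# The response also reports how many contiguous IDs were actually reserved;
# the test captures the starting value into a JMeter variable for the insert.
```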
With the ObjectID for the point reserved and captured, the insert can now take place. Web feature layer edits (adds, deletes, and updates) happen through the applyEdits function. This request requires the named version name and sessionID, as well as the edit payload. This payload lists the edits to be performed, specifying the layer ID, operation (e.g., adds), attributes and geometry. The pointGlobalID (GUID) generated at the start of the test is also used.

The value for the edits key:

[{"id":1,"adds":[{"attributes":{"OBJECTID":${pointObjectID},"SCALERANK":null,"NATSCALE":null,"LABELRANK":null,"FEATURECLA":null,"NAME":null,"NAMEPAR":null,"NAMEALT":null,"DIFFASCII":null,"NAMEASCII":null,"ADM0CAP":null,"CAPIN":null,"WORLDCITY":null,"MEGACITY":null,"SOV0NAME":null,"SOV_A3":null,"ADM0NAME":null,"ADM0_A3":null,"ADM1NAME":null,"ISO_A2":null,"NOTE":null,"LATITUDE":null,"LONGITUDE":null,"CHANGED":null,"NAMEDIFF":null,"DIFFNOTE":null,"POP_MAX":null,"POP_MIN":null,"POP_OTHER":null,"RANK_MAX":null,"RANK_MIN":null,"GEONAMEID":null,"MEGANAME":null,"LS_NAME":null,"LS_MATCH":null,"CHECKME":null,"MAX_POP10":null,"MAX_POP20":null,"MAX_POP50":null,"MAX_POP300":null,"MAX_POP310":null,"MAX_NATSCA":null,"MIN_AREAKM":null,"MAX_AREAKM":null,"MIN_AREAMI":null,"MAX_AREAMI":null,"MIN_PERKM":null,"MAX_PERKM":null,"MIN_PERMI":null,"MAX_PERMI":null,"MIN_BBXMIN":null,"MAX_BBXMIN":null,"MIN_BBXMAX":null,"MAX_BBXMAX":null,"MIN_BBYMIN":null,"MAX_BBYMIN":null,"MIN_BBYMAX":null,"MAX_BBYMAX":null,"MEAN_BBXC":null,"MEAN_BBYC":null,"COMPARE":null,"GN_ASCII":null,"FEATURE_CL":null,"FEATURE_CO":null,"ADMIN1_COD":null,"GN_POP":null,"ELEVATION":null,"GTOPO30":null,"TIMEZONE":null,"GEONAMESNO":null,"UN_FID":null,"UN_ADM0":null,"UN_LAT":null,"UN_LONG":null,"POP1950":null,"POP1955":null,"POP1960":null,"POP1965":null,"POP1970":null,"POP1975":null,"POP1980":null,"POP1985":null,"POP1990":null,"POP1995":null,"POP2000":null,"POP2005":null,"POP2010":null,"POP2015":null,"POP2020":null,"POP2025":null,"POP2050":null,"CITYALT":null,"min_zoom":null,"wikidataid":null,"wof_id":null,"CAPALT":null,"name_en":null,"name_de":null,"name_es":null,"name_fr":null,"name_pt":null,"name_ru":null,"name_zh":null,"label":null,"name_ar":null,"name_bn":null,"name_el":null,"name_hi":null,"name_hu":null,"name_id":null,"name_it":null,"name_ja":null,"name_ko":null,"name_nl":null,"name_pl":null,"name_sv":null,"name_tr":null,"name_vi":null,"wdid_score":null,"ne_id":null,"GlobalID":"${pointGlobalID}","created_user":"${username}","created_date":1702339858000,"last_edited_user":"${username}","last_edited_date":1702339858000},"geometry":{"x":-12994151.880899999,"y":3983674.1925999969,"spatialReference":{"wkid":102100,"latestWkid":3857,"xyTolerance":0.001,"zTolerance":0.001,"mTolerance":0.001,"falseX":-20037700,"falseY":-30241100,"xyUnits":10000,"falseZ":0,"zUnits":1,"falseM":0,"mUnits":1}}}]}]

As mentioned earlier, if the insert utilized the feature template, the attributes would have actual values for the initial edit instead of "null". The point insert operation in this test plan is "simple" in that it uses a fixed geometry of "x":-12994151.880899999,"y":3983674.1925999969. The test plan could be made to utilize a CSV file with different values to make this part more dynamic. If the applyEdits is successful, a new edit moment is returned to be used on subsequent feature queries.

Note: The value of the historicMoment key in the query requests before applyEdits is called is typically different than the value after. The value of the property (e.g., historicMoment) is updated through the Regular Expression Extractor element.

After an edit operation, there are often a few feature queries which use the edit moment, version name and ObjectID that was just inserted.
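For reference, here is a trimmed, hedged Python sketch of the same insert. Only a handful of attributes are shown, and the parameter and response key names are assumptions based on the captured traffic described above; the JMeter Test Plan captures the returned moment with a Regular Expression Extractor instead of JSON parsing.

```python
# Hedged sketch: a trimmed applyEdits insert plus capturing the new edit moment.
# Parameter/response key names are assumptions based on the captured traffic.
import json
import requests

service_url = "https://server.domain.com/arcgis/rest/services/NaturalEarth/FeatureServer"

edits = [{"id": 1, "adds": [{
    "attributes": {"OBJECTID": 5001,              # from the reserved range
                   "NAME": None,                  # remaining attributes trimmed for brevity
                   "GlobalID": "{POINT-GLOBALID}"},
    "geometry": {"x": -12994151.8809, "y": 3983674.1926,
                 "spatialReference": {"wkid": 102100}},
}]}]

resp = requests.post(f"{service_url}/applyEdits", data={
    "edits": json.dumps(edits),
    "gdbVersion": "USER007.version_O5AHEMD89CYTH7GW",  # the named version
    "sessionID": "{SESSION-GUID}",                     # the write-lock session
    "f": "json",
    "token": "TOKEN",
}).json()

edit_moment = resp.get("editMoment")  # key name is an assumption; feeds later queries
```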
Update Point Attributes Transaction

Since the editing session has already been started and the objectID(s) already reserved, the Update Point Attributes transaction does not have as many moving parts as Add Point Spatially. It contains a handful of queries and one applyEdits request (to perform the update). The update performed by the applyEdits is relatively simple as it's just changing a few attributes. However, more columns could be changed, in addition to the geometry.

The value for the edits key:

[{"id":1,"updates":[{"attributes":{"OBJECTID":${pointObjectID},"SCALERANK":550,"NATSCALE":425,"LABELRANK":231,"NAME":"Otisburg","GlobalID":"${pointGlobalID}"}}]}]

Polygon Transactions

The same editing process (e.g., insert followed by an update) is repeated for the polygon layer. While Add Polygon Spatially contains a new reserveObjectIDs request (since the insert is taking place against a different layer), there is not an additional startEditing request against the named version GUID because the long transaction is still active.

The value for the edits key:

[{"id":7,"adds":[{"attributes":{"OBJECTID":${polygonObjectID},"scalerank":null,"featurecla":null,"area_sqkm":null,"min_zoom":null,"GlobalID":"${polygonGlobalID}","created_user":"${username}","created_date":1702340363000,"last_edited_user":"${username}","last_edited_date":1702340363000,"Shape__Area":293911790.76777756,"Shape__Length":85235.280315167271},"geometry":{"rings":[[[-12973934.7456,3982326.3835999966],[-12974309.137,3979331.2524999976],[-12977753.537900001,3978732.2261999995],[-12982021.5997,3977908.565200001],[-12984193.069800001,3974913.4340000004],[-12991456.2629,3974464.1643000022],[-12996697.7424,3976336.121299997],[-13002688.004700001,3978956.8611000031],[-13003736.3006,3984872.245099999],[-13005758.0142,3991910.8033000007],[-13004485.0834,3994381.7864999995],[-13002762.883000001,3990413.2377000004],[-12996922.3772,3986893.9585999995],[-12986589.174800001,3987193.4717999995],[-12983369.4088,3985995.4192999974],[-12981197.9387,3984348.0971999988],[-12973934.7456,3982326.3835999966]]],"spatialReference":{"wkid":102100,"latestWkid":3857,"xyTolerance":0.001,"zTolerance":0.001,"mTolerance":0.001,"falseX":-20037700,"falseY":-30241100,"xyUnits":10000,"falseZ":0,"zUnits":1,"falseM":0,"mUnits":1}}}]}]

The polygon layer update is similar to the point update in that it modifies several column attributes of the data.

Save Edits (to the Named Version) Transaction

At this point in the Test Plan, the edits are done and the long transaction can be completed. This happens by a stopEditing request being issued against the named version. When the commit is successful, the branch moment value is updated and the edits performed will persist in the named version.

Reconcile With Default Transaction

If the changes made against the named version need to be visible in Default, the reconcile operation must take place first. As mentioned earlier, reconcile helps resolve data conflicts that might exist between the two versions. Since a write could be taking place during this process, a startEditing request is issued to the named version GUID resource just prior to the reconcile. From an HTTP request point of view, reconcile is called against the GUID of the named version (which was created from this iteration of the test). The key/value pair request parameters are fairly straightforward, with the only dynamic pieces being the sessionID (and of course the token). A stopEditing request is issued (also against the named version GUID) once reconcile has finished. Once the reconcile has completed successfully, a new branch moment is returned and captured.
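As a hedged Python sketch of that reconcile call, mirroring the defaults listed earlier (conflicts detected by attribute, no abort, no automatic post, synchronous execution). The parameter names are assumptions based on the captured traffic, not a definitive API reference.

```python
# Hedged sketch: reconcile against the named version GUID with the captured defaults.
# Parameter and response key names are assumptions for illustration.
import requests

vms = ("https://server.domain.com/arcgis/rest/services/NaturalEarth/"
       "FeatureServer/VersionManagementServer")
version_guid = "VERSION-GUID"  # GUID returned when this test iteration created the version

resp = requests.post(f"{vms}/versions/{version_guid}/reconcile", data={
    "sessionId": "{SESSION-GUID}",
    "conflictDetection": "byAttribute",  # detect conflicts by attribute, not object
    "abortIfConflicts": "false",
    "withPost": "false",                 # post is issued as its own request afterward
    "async": "false",
    "f": "json",
    "token": "TOKEN",
}).json()

new_moment = resp.get("moment")  # key name is an assumption; captured for later queries
```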
Note: This test focuses on "write operations" returning an updated edit moment. While this is true, the edit moment change can also be reflected through the responses from VersionManagementServer/versions/[GUID] requests. Many times, this value is identical to the moment returned by the write operation. Sometimes it is different but still very close. For simplicity, this test plan did not reflect this potentially different moment change with an additional Regular Expression Extractor element (for these VersionManagementServer/versions/[GUID] requests).

Post To Default Transaction

Post is the last step to bring the edits in the named version to the default version. Like reconcile, the captured editing session used the default options. As a result, the key/value pair request parameters are straightforward, with the only dynamic pieces being the sessionID and the token. There are a few more reading and editing calls to the named version and default version than in the reconcile operation, but the operation is also straightforward.

Managing the Named Versions

Named versions can build up over time, but if the post operation is the last action in the user's edit session, the named version could be deleted.

Note: For simplicity, deleting the named version after the post transaction completed was not added to the test.

Zoom Out (to Scale 1 Million) Transaction

The edit session concludes with a navigation zoom out to the map scale of 1:1,000,000. This was done in ArcGIS Pro to make it easier to see the edits with respect to the rest of the data. The result of this simple operation is a handful of feature layer queries. However, it is important to understand that these queries (based on the values for historicMoment and gdbVersion) are still against the named version and not the default version.

Note: Queries against default would (always) use the gdbVersion value of "sde.DEFAULT". However, since Default has been updated in this test plan as a result of the post operation, new queries would need to utilize an updated historicMoment in order to pull down the latest data. The most recent value for default's historicMoment can be found in the response of a VersionManagementServer request (e.g., VersionManagementServer/versions/BD3F4817-9A00-41AC-B0CC-58F78DBAE0A1). Such a request exists in the post transaction, but for simplicity, this was not captured with a regular expression.

Final Thoughts

The Apache JMeter Test Plan in this Article represents a basic programmatic approach for handling some of the key components of a branch versioning editing workflow. While the steps of the workflow were straightforward, the end result was a Test Plan that still contains several moving parts. To help with this, the edits themselves were static and hard-coded into the test. This was done to keep things as simple as possible so the Article could focus on critical points and strategies. However, there is nothing to stop you from recording and creating more complex edits in a new test, or making the edits of your effort dynamic, where the values are derived from a CSV file. As a tester and GIS analyst, the sky's always the limit when building a JMeter Test Plan to consume or edit data from ArcGIS Enterprise.

To download the Apache JMeter Test Plan used in this Article see: branch_versioning_editing1.zip

Running the Load Test

Note: Please coordinate with your GIS team if your Apache JMeter test will be sending requests to a server that might impact other users. A load test should be scheduled to run during non-peak business hours.
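When the Test Plan is ready for an actual load run, JMeter's non-GUI (command line) mode is the recommended way to execute it; the GUI is intended for building and debugging tests. For example (the JMX file name is from this Article; adjust paths as needed):

jmeter -n -t branch_versioning_editing1.jmx -l results.jtl

Here -n runs JMeter in non-GUI mode, -t points at the Test Plan and -l writes the sample results to a JTL file for later analysis.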
Note: Before running an editing test against an enterprise geodatabase, ensure that proper backups exist, are accessible, and can be utilized if needed. While named versions can be easily deleted via the version management system, edits and changes posted to default are more difficult to undo. Typically, restoring the data from a backup to get the system back to a clean state is the last step after the edit testing effort has completed anyway.

Historic Moment Testing Details

The branch moment is a critical component of branch versioning as it allows the feature layer queries to pull data from a specific moment in time in the geodatabase. However, from an HTTP testing perspective, the responses from various functions list this value under different string names. Some resources list it under creationdate or modifiedDate, while others use editmoment or just moment. As a tester, that is something to look out for when constructing the regular expression to capture the appropriate value. In the case of create (version), creationdate and modifiedDate are returned, though they often contain the same value.

For more information see: Historical moments

Attribution

File:c8db9197-fc08-4673-8bd1-6b622c7d4f6e_text.gif, Otisburg -- Superman (1978)

Apache JMeter released under the Apache License 2.0. Apache, Apache JMeter, JMeter, the Apache feather, and the Apache JMeter logo are trademarks of the Apache Software Foundation.
12-23-2023 01:03 AM
BLOG
Hi @SimonGIS, A good find! The more recent versions of this Test Plan (roads_hfs2.zip and roads_hfs3.zip) should have that issue corrected. Thanks. Aaron
11-20-2023 09:32 AM
BLOG
Thanks to @NoahMayer for pointing out that the "CSV Data Set Config" elements in the test need to have all the variable names listed if some data columns are to be skipped inside a request. In the original sampleworldcities2.zip test, the columns for bbox, width, height, mapUnits, sr, and scale were read in as "bbox_4622324,width_4622324,height_4622324,sr_4622324" (using map scale 4622324 as an example). mapUnits was intentionally skipped. However, the assignment of sr_4622324 picked up the value of mapUnits, which was not intended. A new version of the test called sampleworldcities2B.zip has been uploaded which assigns variables to all the data columns to correct this behavior. Thanks again Noah...good find!! Aaron
10-25-2023 10:20 AM
BLOG
Hi @JoshBillings, That combination of JMeter and JDK looks good. It's unclear exactly what Groovy is having a problem with. That said, I tried adjusting the declaration logic to help avoid any ambiguity. I just uploaded a new version of the test: roads_hfs3.zip. Please give this one a try. This new test ran fine using OpenJDK 17.0.2, OpenJDK 20.0.2, OpenJDK 11.0.2 and OpenLogic 8u382. But if you still encounter issues, try downgrading the JDK (e.g., to 11 or even 8). Hope that helps. Aaron
10-13-2023 05:57 PM
BLOG
Hi @JoshBillings, The operation logic is fairly simple in that it just performs some JMeter variable retrieval and a little Groovy arithmetic. Can you confirm what version of JMeter and which Java JDK you are using? Thanks. Aaron
10-12-2023 07:31 PM
BLOG
Performance: Challenges and Strategies

ArcGIS Enterprise provides a robust and scalable platform for delivering GIS resources to users through services and web applications. Sometimes, however, deployments may experience slow performance from the published resource endpoints. Since ArcGIS is very versatile, there can be different ways, configurations and options when making these resources available for consumption. Obtaining good performance is not always as easy and straightforward as toggling a "fast = true" setting. What might be a handy feature for some administrators could be a configuration that, while helpful, may interfere with performance for others. In the real world, performance is often a function of several items and configuration settings. Having strategies to understand and overcome the most common ones may help put the deployment on the road to achieving faster performance.

What is Performance?

Performance is a description of how fast (or slow) a server or service operates for a particular function of interest. How long it takes an ArcGIS Server service to complete an operation like query, applyEdits or export and then send the response back to the client that requested it would be an example of performance. This duration is measured in seconds or milliseconds and is commonly referred to as the response time.

Although called "response time", there are several important steps that make up the overall time a client like a web browser or ArcGIS Pro spends waiting for a requested server resource:
1. DNS lookup of the server's hostname
2. SSL handshake between client and server
3. TCP/IP connection between client and server
4. Sending the request to the server
4.1. Server processing the request
5. Receiving the response from the server

For large responses, time-to-first-byte (or TTFB) can be used instead to measure response time.

Typically, the bulk of the time is spent at 4.1. This is where the server is working on the response. This Community Article will explore areas that can impact this portion of the response time.

Why is Performance Important?

Simply put: the faster the performance, the lower the response time. The lower the response time, the more requests the server can support at one time. This higher concurrency of requests translates to greater scalability, which ultimately means support for more users.

Note: Performance is measured as the unit of time needed to complete a single operation at a time (e.g., 0.238 seconds to execute a feature query request). Scalability, on the other hand, is frequently measured as transactions or operations over time (e.g., requests/sec or operations/hour).

Note: When talking about getting better or "more" performance, it implies achieving lower response times. On the other hand, better scalability implies a higher rate of throughput (e.g., more operations/hour).

What is Acceptable Performance?

It depends. The criteria or requirements for classifying an item as having fast (or slow) performance can vary greatly by organization, the published services and the expected operations users will be calling. It is not uncommon to have different response time goals for ArcGIS Server functions or user application workflows (several requests grouped together to represent one operation). Any number of seconds is fine for a requirement, but keep in mind it may take more hardware as well as more extensive tuning and strategies (this Article) to achieve aggressive goals.

How is Performance Measured?
Response time is the key metric for determining if performance is meeting, or staying within or under, a target requirement. Common strategies for measuring it are:
- Single user interaction
  - Through the web browser or ArcGIS Pro
  - This is the easiest place to start
  - If there is no understanding yet of performance, start here
- Statistically analyzing large volumes of response times
  - Through log analysis tools, the ArcGIS Server Manager Statistics page, or other observability utilities
  - These approaches have the benefit of analyzing real-world requests that users have already executed against the deployment
  - This is discussed in more depth later in the Article
- Load Testing
  - Can provide an understanding of both performance and scalability
  - More time consuming to set up
  - Online resources exist for getting started: Performance Engineering: Load Testing ArcGIS Enterprise

When analyzing logs or running tests, a common strategy is to leverage statistics for breaking down large amounts of response times. The Average, the 90th (or 95th) percentile, the Minimum, and the Maximum help provide an understanding of the performance the user may have been experiencing.

Capturing Response Times -- Web Browser

How response times are captured through single user interaction is a fun topic of discussion. The easiest approach to capture response times of REST requests from a web application is with the browser's "developer tools" functionality. All major browsers offer some view of the requests, responses and times being sent and received. This duration can give an idea of how fast a request or operation (potentially multiple requests) performed. Decisions can then be made as to whether this is acceptable or needs to be improved.

Capturing Response Times -- ArcGIS Pro

While ArcGIS Pro also communicates with ArcGIS Enterprise via REST, it does not have a built-in equivalent of the developer tools. For capturing response times, you'll need a separate HTTP debugger. There are many available; a popular choice is Fiddler. With Fiddler installed on the same machine as ArcGIS Pro, it can be configured to intercept the traffic. Request parameters, response content and times can be similarly captured and examined.

Are Goals Required for Improving Performance?

Absolutely not. GIS administrators can always analyze, tune and apply best practices to the system even if no official performance requirements are in place. However, it is highly recommended to understand what performance your system is initially delivering (these are typically referred to as baseline response time numbers) before adjustments are made. This way you can determine if the applied changes are having a positive effect.

Common Performance Challenges and Potential Strategies

Service Pool Types and Instances

One of the most frequent areas where ArcGIS administrators encounter performance challenges is setting the appropriate number of instances for dedicated services. But first, let's review the different types. There are 3 service types, each with its own strengths:
- Dedicated
- Hosted
- Shared

As a GIS administrator it is important to be able to identify the instance type for a service. This can be easily viewed within ArcGIS Server Manager, under Manage Services.

Selecting the Appropriate Type

Choosing a dedicated service type is ideal when the most performance and scalability control is desired.
With this type, the administrator can:
- Set the maximum number of instances to the number of CPU cores to take full advantage of the available processing capability of the ArcGIS Server machine (via the maximum)
- Conserve memory when idle (via the minimum)
- Adjust for predictable performance by setting the min and max to the same value

Dedicated services are ideal for heavily requested services or services where performance is paramount.

Hosted services do not utilize ArcSOC instances and auto-scale as needed. However, the ArcGIS capabilities available to them are limited, as hosted services are used primarily for feature queries.

Shared services are great for accessing items that are requested less frequently. They typically have more ArcGIS capabilities available to them, but not all ArcGIS functionality is available (e.g., branch versioning). A shared instance pool is the default type when publishing a service with ArcGIS Pro.

Note: It is important to reiterate that if the selected service type is set to dedicated, the (minimum and maximum) number of instances should be evaluated to ensure they are optimal. The default when publishing in ArcGIS Pro is a maximum of only 2 instances. This might be too low and inadequate for services where performance/scalability are important.

Focus the Map

Optimizing the map is not a new strategy, but it is a relatively easy one to follow for getting better performance from your deployment. When the map is focused on its primary purpose and presentation, the system does not have to do unnecessary work. Remember, the web is a multiuser platform. Making the display of the dynamic data as streamlined as possible is key to good performance (and scalability). With potentially many requests occurring at the same time, the content being shared needs to be as efficient as possible.

Map Strategies
- Choose an optimal default extent
  - If the map is providing data on Los Angeles, the default extent should not be showing all of California
- If many different map scales are required, use scale dependencies and generalization to limit the detail to the scales where it is needed most (e.g., the largest scales)
- Remove unneeded layers
  - Consider removing nice-to-have data layers
  - At the very least, unselect them and have users opt in to enable them
- Purposely limit what users can do with a service
- Avoid projecting on the fly
  - Use the same coordinate system for the data frame and the data
- Definition queries
  - Ensure indexes are in place if comparison logic is applied to attribute columns

Note: "Focusing the map" applies to maps, apps and services.

Software Releases

A particular version of ArcGIS Enterprise (and its related solutions) can have a handful of patches after its initial base release. These patches can offer performance improvements as well as functionality and security fixes. It is highly recommended to periodically check the Esri Patches and Updates site or run the "Check for ArcGIS Enterprise Updates" tool. Then, apply the updates at the appropriate time.

Resource Contention and Expansion

There are times when best practices and strategies for performance are applied but lower response times and higher scalability are still required. Perhaps the current hardware running ArcGIS Server is simply exhausted, where the processing power or memory has become the bottleneck for improving the user experience. For such situations, you need to consider expanding and/or upgrading the hardware.
Scalability

For the ArcGIS Server and ArcGIS Web Adaptor tiers of the deployment, you generally have the following options for improving scalability using hardware:
- Scaling up
  - Adding more resources to the existing machine (e.g., additional processing cores)
  - Additional memory can also assist with scaling by allowing the deployment to have more ArcSOC instances running concurrently
  - Ideal for deployments such as: cloud, virtualization, Kubernetes
- Scaling out
  - Adding more machines of equal resource capacity
  - Ideal for deployments such as: on-premise, cloud, virtualization, Kubernetes

Performance

To improve performance with hardware, there is typically just one option:
- Obtaining faster processing cores
  - Ideal for deployments such as: cloud, Kubernetes

The system's paging configuration can also impact scalability, even when ample memory resources have been added to a system. Although this is an operating system setting and not hardware, it can play a crucial part in running many concurrent ArcSOC instances. Be sure to set this accordingly to handle a workload with many instances or instances with a large memory footprint.

There can be situations where performance is limited and appears to be CPU bound (e.g., a bottleneck due to limited processing power). Further inspection may reveal the "culprit" to be one or more slow queries which were suboptimal to begin with, or an expensive operation called too frequently (through a periodic administrative task). In such a case, it may be more effective to address the "bad" queries to improve performance and scalability.

Note: These expansion strategies are for overcoming general processing and memory limitations. But disk and network (bandwidth, latency) resources can also be bottlenecks. For some environments, these can be more complicated to expand, requiring additional steps to upgrade.

A General Approach to Scaling

For ArcGIS Server it can be easier to scale up first, then out. The reason is the Site configuration and directories, which would need to be migrated to shared storage if the architecture is switched from a single-machine to a multi-machine deployment. How much should you scale up...two servers, five servers? Without details on average response times and the anticipated number of users to support, it's difficult to provide a concise answer. However, a simplistic, general approach would be to just try doubling the current amount (memory and/or physical processing cores) and observing the impact. For many cases, this probably works well up to 16 CPUs. At that point, adding another machine may be more advantageous.

Note: Your current product license may impact how many physical cores you can use with ArcGIS Server. Check with your Esri Account Manager for more details.

Note: For support with capacity planning or architecture design (which can help provide a detailed understanding and estimate of the required hardware resources for a given set of workflows), contact Todd Jarrard (tjarrard@esri.com) in Esri Professional Services.

Observability

"Quantifying ArcGIS" is a great strategy...a personal favorite. It defines what resources were being requested and how fast the responses were that fulfilled those requests. It is important for obtaining an understanding of general system performance. If system resource utilization can also be captured, the analysis can be further elevated. There are many utilities available for periodically examining your system.
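Whichever utility is used, the statistical breakdown mentioned earlier (Average, 90th/95th percentile, Minimum, Maximum) is simple to reproduce. Here is a small Python sketch over a set of captured response times; the values are illustrative, not real measurements.

```python
# Small sketch: the statistical breakdown of response times described above.
# The sample values (in milliseconds) are illustrative only.
import statistics

response_times_ms = [238, 180, 310, 1450, 210, 275, 190, 880, 240, 205]

print("avg:", round(statistics.mean(response_times_ms), 1))
print("min:", min(response_times_ms), "max:", max(response_times_ms))

# quantiles(n=20) yields cut points at 5% steps; index 17 is the 90th percentile,
# index 18 the 95th percentile
q = statistics.quantiles(response_times_ms, n=20)
print("p90:", round(q[17], 1), "p95:", round(q[18], 1))
```

The percentiles are often more telling than the average here, since a handful of slow outliers (like the 1450 ms sample above) can hide behind an otherwise healthy-looking mean.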
Some tools may read the access logs and primarily focus on the statistical performance of requests made by users. Others may poll Server's statistics page and capture the CPU utilization (of ArcGIS Server or the database) for that duration of time. Which approach is best? If observability is currently not taking place, then most likely any one of them would be a good addition. They all help provide some type of insight into the performance and health of the deployment.

Once analysis has been conducted for a deployment, reports can typically be generated that highlight which map services might be of interest due to:
- The observed response times being slower than expected
- The number of requests issued for the resource
- Both response time and number of requests

For an ArcGIS Site with many services, knowing which ones are statistically slow or are consuming the most resources helps focus tuning efforts. With such reports, the GIS administrator has turned data into valuable information and is now better informed when making decisions for improving the user experience. That said, examining logs and statistics is just one (important) slice of the analysis pie.

A Challenge with Common Observability Tools

Many tools for system observability and monitoring focus the analysis on requests and responses for services. This is a good approach and definitely assists administrators with quantifying ArcGIS, but it can have a limitation. The limitation can appear with the assumption that a slow map service can be "fixed" by simply adding processing cores. More cores might improve some aspects of the situation, but it is recommended that the service be examined (or even reexamined) in more depth before more resources are obtained. This goes back to the "Focus the Map" section, for example:
- Ensure the data for the service is not being shown at too small a scale
- Avoid suboptimal queries

Detailed query analysis can help show the occurrence of these behaviors, which might get masked by general service reporting. However, while the breakdown of service request parameters and the underlying queries can improve the analysis, it can add complexity to the reporting itself (e.g., more time to execute, more views of what to look at, the understanding of the views). Additionally, not all observability tools perform this type of inspection.

Some recent efforts that are gaining traction are attempting to tackle this issue. They are based on a bottom-up approach of analysis where the starting point is the underlying database queries themselves, through a mechanism known as "query datastore". Query datastore analysis is powerful and does not impact database performance like a trace can, but it does require some knowledge of the queries themselves and their purpose. Look to this type of analysis capability in the future to help get the most from your observability tools.

Conclusion

There is no single item to easily adjust for boosting the performance and scalability of an ArcGIS Enterprise Site. However, this Article lists some common strategies that can be applied together to improve it. It is also important to understand that these are items that should be periodically revisited and acted upon. User habits change over time, as does the popularity of a web application or service. Resources that were assigned to a particular service can be reevaluated or reduced to make room for the next featured item in your Site. ArcGIS performance analysis can be fun, but it is also a continuous effort for maintaining the best user experience.
Attribution

- Resource: File:Grayson_running_the_4x100.jpg
  Description: Grayson running the first leg of the 4x100 at the 2010 Tigered invite
  Author: Graysonbay
  Created: 02:02, 29 November 2010
  License: Creative Commons Attribution 3.0 Unported
- Resource: File:Kurvimeter_1_fcm.jpg
  Author: Frank C. Müller, Baden-Baden
  License: Creative Commons Attribution-Share Alike 4.0 International
- Resource: File:My_Opera_Server.jpg
  Description: A server used for the My Home
  Author: William Viker, william.viker@gmail.com (c) 2006
  License: The copyright holder of this file allows anyone to use it for any purpose, provided that the copyright holder is properly attributed. Redistribution, derivative work, commercial use, and all other use is permitted.
- Resource: File:Samsung-1GB-DDR2-Laptop-RAM.jpg
  Description: A 1 gigabyte stick of DDR2 667 MHz (PC2-5300) laptop RAM, made by Samsung and pulled from a 2007 MacBook laptop.
  Author: Evan-Amos
  Created: 1 August 2018
  License: Public Domain
10-06-2023 06:22 PM
BLOG
Hi @ChiefKeefSosa300 (tq), These are really thoughtful questions, thanks for asking them! The statements about changing the output format to BMP or utilizing another spatial reference are mentioned with respect to that specific test. But they need more context...

I favor PNGs over BMPs as the former (Portable Network Graphics) is generally much faster to generate than the latter (Bitmap Image). For example, an export map request that creates a PNG is about 87KB. Using a BMP for the same area, the resulting image is around 15MB. The drastic difference in size would impact scalability (as more network bandwidth is required per request). That said, if your application or workflow requires BMP, then that is the way to go and your benchmark should reflect that. There is nothing wrong with using BMP as an image format for a vector data benchmark, but you should get more throughput by using PNG.

As for the spatial reference, this test used bounding boxes in the geographic coordinate system "WGS 1984" (WKID 4326) because that is what the data (and published service) was originally in. Of course, there are many different spatial references. The goal, when possible, is to try to avoid projecting on the fly. In other words, if the data (and service) are in "WGS 1984" and you ask for "NAD27" (using the appropriate NAD27 bounding box), ArcGIS Server should appropriately fulfill the request. However, while very convenient, this transformation between coordinate systems comes at a performance cost. If you use this test and data as a benchmark, "WGS 1984" (WKID 4326) is the optimal choice. But, if you are wanting to build a benchmark test against your own data that is in another coordinate system, I would use bounding boxes in that coordinate system to ensure you get the best performance.

So to conclude, does using BMP or another coordinate system invalidate the test? Not at all. Performance would be impacted, but you still have a good "measuring stick". As long as the test parameters are known and kept constant between runs over time, you'll have a reliable benchmark regardless of the options used. Hope this helps. Aaron
10-04-2023 05:45 PM
BLOG
Hi @Jen_Zumbado-Hannibal, Both tools are similar but also different. The 10.8.1 Python script listed on the arcgis.com page focuses on retrieving usage statistics (total number of requests, maximum and average response time, and total timed-out requests). The soccer captures some of that info, but its primary purpose is to help the GIS analyst or administrator understand the optimal instance configuration of a service. In other words, say the max instances of a dedicated service was set to 12...was the system able to reach this maximum for the duration of interest? If the captured busy-instance data constantly matches the maximum, or is drastically less than it, you can then choose a more optimal configuration to improve scalability or save memory. Hope that helps. Aaron
09-26-2023 07:15 PM
BLOG
Hi @ChiefKeefSosa300 (tq), Would you be able to delete one of the "image/png Validation" elements and try running the test again? These validation items check whether the content returned from ArcGIS Server looks like a PNG image (the default image format this test requests). If it does not match a PNG signature, the request is failed. Failing a request whose response is not expected is a common strategy when you want to know if your test encounters an error. Aaron
09-26-2023 06:56 PM
BLOG
Hi @ChiefKeefSosa300, I was able to test the portal_administration1 JMeter project against an 11.0 deployment and it worked. Can you confirm that the value for "PortalServerName" contains only the hostname of the machine running Portal for ArcGIS (e.g., portalserver.domain.com)? It should not contain any protocol or trailing spaces (e.g., http://portalserver.domain.com ). Also, can you verify that you can connect to the portal through a web browser on the same machine running the test? This can be through port 7443 or 443. Don't forget to use the web adaptor's portal instance name if connecting through 443 (e.g., portal). Thanks. Aaron
09-08-2023 11:30 AM
BLOG
Hi @ChiefKeefSosa300, I have tried this test to create members with ArcGIS Enterprise 10.9, 10.9.1 and 11.1. I do not recall the results with 11.0 specifically. Did you encounter an issue? Thanks. Aaron
08-23-2023 10:17 PM
BLOG
Hi @ErickTGG, Any observability tool could be used to capture operational data like CPU and memory usage from a performance/load test. And, if these reports and filters already exist and can be consumed, that is definitely a good strategy. One thing to keep in mind is that, by default, many monitoring tools capture metrics for long-term analysis. A common polling rate is every 1 minute or 5 minutes, since they are expecting to store a lot of data (and this frequency can help with the manageability and scalability of that system). As such, that "lower" resolution is generally not ideal for examining the impact of a load test. A typical load test runs for 1--2 hours. For such a short duration, shorter capture intervals like 5, 10, or 20 seconds are usually favored so a more detailed impact on the deployment's resources can be seen. This might be adjustable with most monitoring tools...not sure. In cases where this higher resolution is not an option, or maybe there is no monitoring tool available at all, having the testing framework capture this type of data can be extremely valuable for the analysis. Hope that helps. Aaron
06-21-2023 04:50 PM