BLOG
Hi @StevenBeothy, I think that test plan is valid and a good programmatic way to get a performance profile from each of the layers. However, there are a few items worth mentioning:

My roads_hfs test requested * for the outFields. This is fine, but it can be considered aggressive, especially if the tables are "wide" (e.g., 50 or more attributes). In that case, I would consider asking for a few specific attributes instead (e.g., OBJECTID plus one or two others). But that assumes the requested attributes exist in all the layers, which is why using * is so easy.

My roads_hfs test was built around hosted feature service queries (e.g., where resultType=tile). I am a huge fan of feature tiles, and their data structure lends itself to efficient querying (repeatable extents for better caching, more predictable quantization tolerances for each of the layers). A resultType of, say, standard would be similar, but the test logic would be different (and in my opinion, more complex). Additionally, requests with resultType=standard can ask for varying numbers of records to be returned (through the resultRecordCount and resultOffset parameters). This adds variance and test complexity...which again is why I like testing with resultType=tile. The sketch below contrasts the two request styles.

To reiterate, I think your approach is good, and resultType=tile style tests are a great way to consume a feature service. I bring up these other items to give my opinion and perspective on what your test is doing and how it's asking for the data...as well as the potential test logic impacts when asking for feature data when the resultType is not tile. Hope that helps.
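For illustration, here is a minimal Python sketch contrasting the two request styles. The service URL, layer, envelope, and paging values are hypothetical examples; the parameter names come from the standard feature service query operation:

```python
# Hypothetical hosted feature service layer; swap in a real URL to try this.
import requests

LAYER_URL = ("https://services.example.com/arcgis/rest/services/"
             "roads_hfs/FeatureServer/0/query")

# Feature-tile style request: repeatable extents and predictable quantization
# make these responses very cache friendly.
tile_params = {
    "f": "pbf",                    # feature tiles are commonly requested as pbf
    "resultType": "tile",
    "where": "1=1",
    "outFields": "OBJECTID",       # narrower than '*' for wide tables
    "geometry": "-13052769,4024869,-13041758,4035880",  # example extent
    "geometryType": "esriGeometryEnvelope",
    "spatialRel": "esriSpatialRelIntersects",
}

# Standard style request: the paging parameters add variance (and test
# complexity), since each request can ask for a different slice of records.
standard_params = {
    "f": "json",
    "resultType": "standard",
    "where": "1=1",
    "outFields": "OBJECTID",
    "resultRecordCount": 1000,     # records per page
    "resultOffset": 0,             # varies from request to request
}

for params in (tile_params, standard_params):
    r = requests.get(LAYER_URL, params=params)
    print(params["resultType"], r.status_code, len(r.content), "bytes")
```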
a month ago

BLOG
ArcGIS Server's Info Log Level

In previous Community Articles, I've talked about the wealth of information you can obtain by analyzing the ArcGIS Server logs for elapsed time, wait times (e.g., queue times), and instance creation times. Analyzing these metrics with tools such as System Log Parser can help you quantify your GIS deployment to understand what resources users are requesting and how long they are waiting to receive responses to them. From there you can decide if you need to increase service instances or add more processing capability. While getting a statistical view of service performance using elapsed time is good (really good), there is another ArcGIS Server log metric that can bring a whole new perspective to this analysis: the Info Log Level entry. Now, this entry is not a new feature, but in recent ArcGIS Server releases its Log Level has been elevated (to Info). This makes its valuable information more accessible without the performance penalty of levels like Verbose and Debug.

Why Info Is A Great Entry for Analysis

What makes the Info entry great is that it provides a representation of a request's response time! I say "representation" because the time is taken from the point of view of the platform (the moment the request entered the framework) as opposed to the requesting client. This duration combines elapsed time, wait time, and creation time into one value! For example: in the screenshot, the Info entry (with Log Code 9999) contains a listing of the request's URL (under the Message column) and the response time (under the Time Elapsed column). With the URL, you get the name of the requested service as well as the function called.

Note: The time of the Info request entry is in milliseconds.

ArcGIS Server may list other log entries at the Info level that do not correspond to a request's response time. While you can expect to find times for requests for resources like map services and feature services, you'll also observe entries that go through the framework but do not utilize an ArcSOC service instance in the traditional sense, for example, calls for static files such as rest.js and main.css (as shown in the screenshot). In addition to those static items, you may also see response time entries for such resources as hosted feature services and vector tile services. These items, if such services exist in your deployment, are a nice bonus! Granted, requests for such items are typically very fast, but by examining Info type entries, analysis can be performed that would usually require an access log.

Note: An access log is commonly found with IIS or Tomcat (e.g., where the ArcGIS Web Adaptor is installed). While these are perfectly good technologies that efficiently log every request into the system, they might be a component that is not available with your deployment. The ArcGIS Server Info Level log type helps bridge that gap.

Create a System Log Parser Report from Info Log Level Entries

The ability to create a System Log Parser report from the Info Log Level entries can be found from the Analysis Type option, under ArcGIS Server Log Query (File System).

Note: ArcGIS Server's Log Level must be set to at least Info to create a System Log Parser report based on Info entries.

System Log Parser Info Report

The System Log Parser Info report will show response time statistics for service requests as well as other items that go through the ArcGIS Server framework.
Note: The Info entry discussed in this Article highlights the response times that are recorded for many ArcGIS requests. However, the log entry currently does not break down these requests by user. This detail could probably be inferred by using the RequestID attribute and finding a matching User Authentication entry, but that analysis is not covered in this Article, nor is it currently performed by System Log Parser. In the end, this analysis is all about turning log data into information...with Info log entries, you have more information about your GIS deployment and services available to help make the best decisions possible. A hedged sketch of pulling these entries programmatically follows.
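The sketch below queries the ArcGIS Server Admin API's logs/query endpoint for Info-level entries. The host and token are placeholders, and the exact filter syntax for narrowing to code 9999 is an assumption to verify against your release's Admin API documentation:

```python
# Hedged sketch: list Info-level entries and their elapsed times (milliseconds).
# Host, token, and the filter value are assumptions to adapt and verify.
import requests

ADMIN_URL = "https://myserver.domain.com:6443/arcgis/admin/logs/query"
TOKEN = "<admin-token>"   # e.g., obtained from the generateToken endpoint

resp = requests.post(ADMIN_URL, data={
    "f": "json",
    "token": TOKEN,
    "level": "INFO",
    "filter": '{"codes": [9999]}',   # assumed filter body; verify per release
    "pageSize": 1000,
})

for entry in resp.json().get("logMessages", []):
    # For code 9999 entries, the message holds the request URL and the
    # elapsed attribute holds the response time.
    print(entry.get("elapsed"), entry.get("message"))
```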
07-02-2025 03:34 PM

BLOG
Hi @AndreaB_, I think that is a good guess on the query calls. Such log entries could be from users or from dashboards. These map layer query requests return data in the form of JSON, PBF, or KML. For MapServer services, this textual info is pulled from each layer separately and can then be rendered in a client (e.g., a JavaScript app). Export, on the other hand, is a request to generate an image of this same data from one or more layers at once. While the function is called export, "export map" only returns a rendered image...not the actual data. The sketch below contrasts the two. Hope that helps, Aaron
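To make the distinction concrete, here is a small hedged Python sketch against a hypothetical MapServer; query returns the data itself, while export map returns a rendered picture of it:

```python
import requests

SVC = "https://services.example.com/arcgis/rest/services/Roads/MapServer"

# query: pulls the data (json here; pbf/kml are other formats) per layer
q = requests.get(f"{SVC}/0/query",
                 params={"where": "1=1", "outFields": "*", "f": "json"})
print("query ->", q.headers.get("Content-Type"))    # textual data

# export: draws one or more layers into a single image...no data returned
e = requests.get(f"{SVC}/export",
                 params={"bbox": "-120,30,-100,45", "f": "image"})
print("export ->", e.headers.get("Content-Type"))   # image/png or similar
```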
04-04-2025 12:08 AM

BLOG
Hi @RandyBonds_Jr_, Generally speaking...there is not a direct relationship between the number of CPUs and the number of ArcSOC instances. That said, for your most critical services (typically there are only a handful), these should be set to a min/max of N, where N is equal to the number of CPU cores. The remaining services could then be shared and/or hosted.

From the perspective of memory, there is a more direct relationship with ArcSOCs. The key factors are the number of ArcSOC instances, the memory size of each ArcSOC, and usage patterns. A very simple equation could be something like: (number of ArcSOCs x average ArcSOC size), as in the sketch below. Hope that helps, Aaron
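As a worked example of that equation (the instance count and average size below are illustrative numbers only, not recommendations):

```python
# Rough ArcSOC memory estimate: (number of ArcSOCs x average ArcSOC size).
n_arcsocs = 40          # total running instances across all services (example)
avg_size_mb = 500       # average memory per ArcSOC; varies widely by service
estimate_gb = n_arcsocs * avg_size_mb / 1024
print(f"~{estimate_gb:.1f} GB")   # ~19.5 GB, before the OS and other processes
```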
01-22-2025 03:05 PM

BLOG
Hi @Br1an_Peters, Not sure on creating a dashboard. However, you can tell System Log Parser to write out the statistical analysis of a Simple report as JSON (instead of an xlsx file). This could be used as input to a dashboard or observability tool (a minimal loading sketch follows). For example:

slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 2hr -a simple -u siteadmin -p sit3adm1nPW -r json
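A downstream script could then pick up that JSON and feed it elsewhere. A minimal sketch, where the output file name is hypothetical and the report's schema is whatever System Log Parser writes:

```python
# Load the JSON report System Log Parser produced and peek at its top level.
import json

with open(r"C:\Folder1\MyReports\report.json") as f:   # hypothetical file name
    report = json.load(f)

# Print the top-level keys so you can see what is available to a dashboard.
print(list(report) if isinstance(report, dict) else type(report).__name__)
```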
01-14-2025 10:47 AM

BLOG
Hi @Br1an_Peters, To specify a custom output location, use the -d option with a path. For example:

slp.exe -d "C:\Folder1\MyReports" -s https://myserver.domain.com/server -eh now -sh 2hr -a Optimized -u siteadmin -p sit3adm1nPW
01-08-2025 04:36 PM

BLOG
Hi @SGTomlins,
I believe the Response code: 499 is a token-related error. Most likely the service your test is consuming requires authentication. As it currently stands, cache_tiles2.zip does not have authentication functionality added in. A potential solution would be to look at the authentication components from this test: Using Apache JMeter to Load Test an ArcGIS Enterprise Authenticated Service (Intermediate/Advanced), and copy them into the tile test.
Hope that helps, Aaron
11-15-2024 12:31 PM

POST
Hi @SGTomlins, Finding delays or bottlenecks in a system can be a complex problem to solve. Sometimes the issue is hardware (e.g., not enough memory, too few CPU cores) and sometimes it is software (e.g., not enough service instances, max number of db connections reached, map is showing too much detail at small scales). As for an integrated, easier way to accomplish such a task, I'm not sure. Logs/traces are certainly valuable; as you mentioned, they are in the realm of "forensics" and take more time to gather/analyze.

Whatever method you choose, I would offer some strategies. I would start with identifying your top 3 most popular services. This will help narrow the focus and give the best return on investment of time. From there, look to understand how the services are being utilized (e.g., what functions are people requesting from these services...export map, query, applyEdits, etc.). It is also advantageous to know the performance profile of the functions...maybe it's only one particular operation that impacts users the most. From an ArcGIS Server logging perspective, different functions have different levels of detail recorded. For some, you may have to increase the ArcGIS Server LogLevel to get that information. For example:

-- export map at verbose/debug can list the duration of the overall request (from the point of view of the ArcSOC) and how long it takes for the symbology to render

-- feature query at verbose can list the duration of the overall request (from the point of view of the ArcSOC) and show how much of that time was spent in the database...this is helpful for understanding if you're spending more time on data retrieval or data serialization

Once items for improvement have been identified, you can start researching potential ways to tune and remedy a solution to improve performance and remove the bottleneck. Hope that helps.
Aaron
11-15-2024 12:25 PM

BLOG
Benchmark ArcGIS Enterprise...The Original Approach

A while ago, I discussed using the Natural Earth dataset with a preconfigured Apache JMeter test to benchmark an ArcGIS Enterprise deployment. The results from that test could then be compared to runs from other deployments to get a comparative idea of the underlying hardware's performance and scalability characteristics. This approach had some benefits:

Natural Earth is free GIS data
Available for public use
Low-to-moderate data complexity (easy to work with)
Test Plan featured a step load for observing scalability capabilities

While useful and a good measuring stick, the scalability component meant the test would typically run for a long time (which also added some complication). I had wondered if there was an easier way to benchmark just the processing hardware (e.g., the CPU) but still through ArcGIS Enterprise: Was it possible to use JMeter from a performance-only perspective? Could I create a test to benchmark ArcGIS Enterprise without an underlying FGDB or enterprise geodatabase dataset (which should simplify the overall effort)? It turns out the answers were yes!

Benchmark ArcGIS Enterprise...An Alternative Approach

Okay...I am speaking in half-truths. The new benchmark test does not depend on a service backed by an FGDB or eGDB dataset, but it does need some data. To help keep things simple, the data (e.g., pre-generated geometries) is simply passed through the JMeter sampler elements to an ArcGIS resource that does not have a referenced dataset behind the scenes. So, how is this done? Through the tried-and-true Geometry service. ArcGIS Server's Geometry service is a built-in resource that provides access to many functions for performing geometric operations. The calculations of these operations (like buffer or generalize) can be simple or complex (depending on what you ask of it). From a performance analyst's perspective, it provides a fantastic means for benchmarking the CPU hardware of the machine running ArcGIS Server.

Note: Although the term ArcGIS Enterprise includes ArcGIS Server, this benchmark primarily exercises the latter (e.g., ArcGIS Server). Some traffic may go through the ArcGIS Web Adaptor and there would be a small amount of Portal for ArcGIS authentication taking place, but by design, the bulk of the work will be performed by ArcGIS Server.

Benefits of Using the Geometry Service

The Geometry service has been around in ArcGIS Server since version 9.3, so it's ubiquitous. That makes a test utilizing it easy and reliable. Since the data driving the test is put inside the key/value pairs of the requests, that adds portability (e.g., no dataset to lug around).

Note: While the Geometry service has been included with ArcGIS Server for some time, by default it is off and not running. The service would need to be started and shared to the appropriate Portal for ArcGIS members before running the test.

The Geometry_Functions_Benchmark Test Plan

To download the Apache JMeter Test Plan used in this Article see: geometry_functions_benchmark1.zip. Downloading and opening the Test Plan in Apache JMeter should look similar to the following: adjust the User Defined Variables to fit your environment.

What Types of Functions Should be Tested?

For a benchmark, the short answer is only a few. This particular Test Plan calls only a few different operations...as well as the same operations in different ways (e.g., changing request parameters to purposely get a variant response). A single buffer call, for instance, might look like the sketch below.
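In the hedged Python sketch below, the host is a placeholder; the Utilities/Geometry/GeometryServer path is ArcGIS Server's default Geometry service location, and a token would be needed if the service is secured:

```python
# One buffer call of the kind a JMeter sampler in this test would issue.
import requests

GEOM_URL = ("https://myserver.domain.com/server/rest/services/"
            "Utilities/Geometry/GeometryServer/buffer")

params = {
    "f": "json",
    "geometries": ('{"geometryType":"esriGeometryPoint",'
                   '"geometries":[{"x":-117.19,"y":34.06}]}'),
    "inSR": 4326,
    "bufferSR": 3857,      # buffer in a projected coordinate system
    "outSR": 4326,
    "distances": "1000",
    "unit": 9001,          # esriSRUnit_Meter
}
r = requests.get(GEOM_URL, params=params)
print(r.status_code, len(r.text), "bytes of buffered geometry")
```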
Varying the requests this way provides variability, so the test is not just doing the same thing over and over. Below is a look at the operations used in this benchmark:

generalize
toGeoCoordinateString
project
buffer

Expected Test and Operation Performance

This test has some operations that may perform fast and others that will take more time. The speed will vary based on the hardware. Ultimately, we just want ArcGIS Enterprise (e.g., Server) to work for just a few minutes so we can get an idea of the processing performance. If each operation took 10 minutes (with the overall test taking many times longer), the benchmark itself would become too time-consuming and less practical to use.

Deployment Architecture Example

This benchmark test was run in a lab against two different servers (e.g., run once per server):

ArcGIS Enterprise -- Machine #1 (older hardware)
Intel Xeon E5-4650, 2.70 GHz
SPECint_base2006 Score: 50.5
32 processing cores
HyperThreading disabled
64GB RAM
10Gbps network

ArcGIS Enterprise -- Machine #2 (newer hardware)
Intel Xeon Gold 6126, 2.60 GHz
SPECint_base2006 Score: 71.9
24 processing cores
HyperThreading disabled
128GB RAM
10Gbps network

Note: Since this testing effort was more focused on speed instead of throughput, SPECint_base numbers were used instead of SPECint_rate_base.

Benchmark Test Execution

For long running tests, it is not recommended to run the Test Plan within the GUI. However, since this is a relatively short test, the impact is nominal.

Note: When running any test, it is always recommended to coordinate the start time and expected duration with the appropriate personnel. This ensures minimal impact to users and other colleagues that may also need to use the ArcGIS Enterprise Site of interest (e.g., the production deployment). Additionally, this helps prevent system noise from other activity and use which may "pollute" the test results.

Results

After adjusting the User Defined Variables to point to the appropriate environment (Machine #1…devlab05), the benchmark was run right in the JMeter GUI. The results can be observed from the View Results in Table element. For convenience, the Test Plan automatically calculates the overall test run duration, right in the name of the last operation. This makes the benchmark time easy to observe from the table. The Test Plan was then adjusted to point to a server on newer hardware (Machine #2…eistsrv05) and the benchmark was rerun; from the table, those results are added after the first run. Expectedly, the first machine required more time to complete the same operations. This resulted in a measurable difference in performance between the two machines:

Machine #1…devlab05 Benchmark duration: 259946 ms
Machine #2…eistsrv05 Benchmark duration: 181441 ms

Calculate Percentage Change

Since the response times were lower (e.g., faster) with the newer hardware (compared to the first run on older hardware), we'll calculate a percentage decrease:

First, original server time - newer server time = the decrease
Then, the decrease ÷ original server time × 100 = the % decrease

(259946 ms - 181441 ms) / 259946 ms = 0.302
0.302 x 100 = 30.2%

The benchmark time on the newer hardware was 30% lower than on the older hardware (our starting point). This percentage change suggests a measurable improvement when using the newer hardware.

Percentage Change Estimate Based on SPEC

Let's use the SPEC ratio with the benchmark time from the original run to predict the target_time (the benchmark time on the newer machine). This can help determine whether roughly the same percentage change could be estimated.
(Baseline_SPEC x Baseline_Time) = (Target_SPEC x Target_Time)
((Baseline_SPEC x Baseline_Time) / Target_SPEC) = Target_Time
(36.875 x 259946 ms) / 53.75 = 178335 ms (after rounding down to the nearest second)
(259946 ms - 178335 ms) / 259946 ms = 0.314
0.314 x 100 = 31.4%

From this prediction, the benchmark time on the newer hardware was estimated to be 31% lower than on the older hardware. This is very close to the percentage change that was calculated from the observed benchmark times.

Future Hardware

Processor architectures and CPU speeds are always improving. Eventually, such a benchmark test (as it is currently built) may only take a minute or tens of seconds to run (what a great problem to have). At that point, complexity could be added to the test to increase its run duration to better match the new technology. You may have noticed the last transaction in the test was disabled. This 1000 Point Buffer request, with a distance of 10000 meters and a unit of 9035 (International Meter Distance), takes some time to calculate (even on decent hardware). It was disabled to shorten the run time to a reasonable duration. However, if helpful, it can be enabled as an additional calculation, depending on the CPU speed of the deployment of interest.

Final Thoughts

As mentioned in other community articles, there is no one service or function that can cover the entire breadth and depth of ArcGIS. However, the Geometry service is a resource that represents a portion of the amazing field of GIS that is easy to work with. This makes it a good option to use for benchmark testing efforts.

A Fast Response Time Is All About CPU Speed, Right?

For this Geometry benchmark test, yes. However, for real-world services, processing speed is not the only factor. Server hardware components like disk speed, available memory, and network speed are other resources which can improve response times (in addition to CPU speed). Together, they all have a positive effect on the user experience. This benchmark focused on CPU performance as it is a large part of the client request/server response process, but as just mentioned, it is not the only server resource that matters when taking into account other potential ArcGIS services.

What About Other CPU Comparison Tools?

There are many utilities out there that can profile and test the various pieces of server hardware using a whole battery of exercises. These tests are great and certainly add value for understanding the hardware. Again, there is no one test that can represent all things GIS. But hopefully, this Geometry Benchmark Test Plan can be a useful tool in the analyst's tool chest. To download the Apache JMeter Test Plan used in this Article see: geometry_functions_benchmark1.zip

Attribution

Resource: File:Wikimedia_Foundation_Servers-8055_43.jpg
Description: Rack-mounted 11th-generation PowerEdge servers
Author: Victorgrigas - Own work
Created: 16 July 2012
Uploaded: 20 July 2012
License: CC BY-SA 3.0, Link

Resource: File:Cpu-processor.jpg
Author: Fx Mehdi - Own work
Uploaded: 30 May 2019
License: Creative Commons Attribution-Share Alike 4.0 International
09-03-2024 11:52 AM

BLOG
Hi @AnjulPandey,

Is there a way to obtain usage statistics of a vector tile service? There might be some usage statistics through the ArcGIS Web Adaptor access logs (e.g., the IIS log), if available. The information here would be counts (how many times a vector tile service was requested) and performance (how long users were waiting to download the tiles). System Log Parser should be able to address this through its Internet Information Services Log Query capability; a rough counting sketch follows at the end of this reply.

Has anyone created a script or notebook to collect a list of all the vector tile packages with their storage size? Not that I am aware of, but I have not specifically looked for tools that address that functionality.

Logs for map and vector tiles generated by publishing services? Is this similar to the first item? Are you after usage statistics or publishing statistics? Either way, System Log Parser would summarize requests for such data (usage and publishing) from the ArcGIS Web Adaptor access log. But it is worth pointing out that for publishing statistics, only the job's GUID would be listed in the report.

Logs for how many features were edited by the user? This is interesting. Would this be hosted feature services, traditional, or both? For traditional feature services, this might involve queries to the enterprise geodatabase to understand what, if anything, has changed since the last check. I am not sure if any observability or reporting tools do this; I have not looked. Assuming the edited data went through the applyEdits function of a feature service, System Log Parser would list if that method was called. Of course, this does not mean the data was changed...just that the applyEdits function was called.

Logs of how many vector or raster tiles were downloaded by the user? Map, vector, and raster tile requests through a web adaptor are reported like any other request. System Log Parser would be able to report on usage statistics (e.g., counts and performance) from the ArcGIS Web Adaptor access logs. But these log sources do not typically record the ArcGIS user that made the request, so the "who" would not be available.

Hope that helps, Aaron
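For the first item, a rough Python sketch of counting vector tile requests in an IIS (W3C format) access log might look like this; the log path is hypothetical and, since field order depends on the site's #Fields configuration, it matches on a substring rather than fixed columns:

```python
# Count requests per vector tile service found in one IIS access log file.
from collections import Counter

counts = Counter()
with open(r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex240801.log") as log:
    for line in log:
        if line.startswith("#"):           # skip W3C header directives
            continue
        if "VectorTileServer" not in line:
            continue
        # Grab the URI token containing the service path (crude but workable).
        stem = next(t for t in line.split() if "VectorTileServer" in t)
        service = stem.split("/VectorTileServer")[0].rsplit("/", 1)[-1]
        counts[service] += 1

for service, n in counts.most_common():
    print(service, n)
```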
08-28-2024 11:35 AM

BLOG
Hi @ghaskett_aaic, My guess is that your 11.2 Site and System Log Parser are fine. There are two conditions that can lead to the generated report listing 0 hits:

The LogLevel of ArcGIS Server is set to a value of Warning, Error, or Info. The statistics in the System Log Parser report are based on elapsed-time request entries, and this requires the LogLevel to be set to Fine. Verbose and Debug work too, but they are not recommended for production Sites due to how much detail is recorded.

There were no requests to the Site during the specified log query duration (e.g., 2024-03-01T10:21:57 through 2024-03-05T10:21:57). For a quick check, authenticate to the Site from the REST service endpoint, select a service, pan or zoom around a few times, then generate the report with an end time that includes those requests (e.g., end time = now). This report should include the requests that were just made to the Site.

Hope that helps, Aaron
06-14-2024 11:00 PM

BLOG
The Shared Service Instance Pool

When people talk about load testing an ArcGIS Enterprise Site, such conversations typically involve the consumption of dedicated or hosted feature services. For many years, dedicated and hosted services have provided a fast, dependable mechanism for consuming high-traffic map resources online. This has not changed. However, there is another type of resource for providing maps to users: the shared service instance pool. Introduced in 10.7, the shared instance pool makes it easier to view and query services that are still valuable but where system memory usage is favored over performance. This feature allows for high service density (e.g., being able to publish and have many running services) at the expense of some speed and throughput. It can be a good trade-off considering that for many organizations, there are generally more shared service candidates than dedicated or hosted ones. From this advantageous characteristic, shared services have been a true game changer. But from a load testing perspective, there are some considerations.

Note: As a GIS administrator, assume all shared services have equal weight and value with each other. Also, assume that dedicated services should take a higher priority than shared services.

Should Shared Services Be Load Tested? The $64,000 Question!

As a performance analyst, this question may come up as you're publishing services to the Site. While it can be very tempting to load test shared services to understand their scalability profile, there are several reasons why this strategy does *not* make sense:

If scalability was paramount, the service should be moved to dedicated. Shared services can still scale (e.g., support multiple, concurrent requests for the same item), but this is not their primary function.

The administrator has already designated the service to favor memory usage. By configuring the service as shared, it is expected that the service will not be requested frequently. If the service occasionally has slower or slightly slower response times, that is okay.

Testing such services steals hardware resources from the dedicated services. Dedicated services are your "first class" services; do not have the shared services compete with them. Dedicated and hosted services are the go-to mechanisms for scalability; shared services are not. By testing or frequently sending requests to shared services, (limited) system resources like CPU and memory can be drawn away from the services that need them for delivering fast performance to users.

Test Plan management challenges. It is not uncommon for Sites to have dozens or hundreds of shared services. Assuming all shared services are equal, the test plan for effectively testing and profiling a hundred or more shared services could be daunting and difficult to manage.

A Site Has a Mix of Shared and Dedicated Services, Can the Dedicated Ones Still Be Tested?

Yes. Understanding the performance and scalability profile of dedicated services is still valuable information to have for deploying and managing the Site optimally. Test your dedicated services as you normally would.

Can Shared Services Be Tested If That Is the Only Instance Pool Type Published?

There are no technical limitations that prevent shared services from being load tested. While it is certainly possible to run such a test, it is not recommended for the reasons above.

Analyze and Monitor the Site Periodically

The popularity of services can increase or decrease over time.
Periodically analyzing the traffic patterns of service requests can help provide administrators with information to configure and manage the Site optimally. This means that as some services are requested more frequently (or it is anticipated that they will be), they can be manually moved from being a shared service to a dedicated service.
05-01-2024 11:17 AM

BLOG
The following is an additional resource which may help provide information on the "anonymous" entries observed in the ArcGIS Enterprise (e.g., ArcGIS Server) logs: ArcGIS Enterprise Analysis with System Log Parser: Understanding Anonymous Entries for the User Name (Beginner)
04-22-2024 05:40 PM

BLOG
System Log Parser's Statistics By User Reports: the Anonymous Value

For evaluating Site performance and quantifying service popularity, System Log Parser (SLP) has several report offerings for conducting ArcGIS Enterprise log analysis. When selecting Analysis Types such as Simple, WithOverviewCharts, or Complete, there is an option called "Add Statistics By User to Report" which will include an additional worksheet called Statistics By User in the generated output. The information on this worksheet includes a statistical summary of successful Portal member requests (as reported by ArcGIS Enterprise). This can be quite helpful for GIS administrators in understanding who is asking for what. Sometimes, however, the listed User on this worksheet may show the unexpected value of "anonymous". For a Site with secured services, this might be a puzzling username to observe.

Is an Anonymous User Sending Successful Queries to a Secured Service?

The short answer: no, they are not. The long answer: no, they are still not, but some background is needed to provide the proper context on "anonymous" entries for the User Name value in the logs.

Portal Member Log Entry Identity

When System Log Parser queries the ArcGIS Enterprise (e.g., ArcGIS Server) logs, it reads the "User Name" field to determine the member identity for each log entry of a successful request. This value is only read from very specific log entries (e.g., where the log Code=100004). Such entries also have the final elapsed time duration of the work performed (e.g., how long the request took from the ArcGIS Server ArcSOC.exe point of view). These entries are some of the best places to look for quantification analysis of the Site. For many service request log entries, this lists the authenticated Portal member username value...as expected. But there are log entry cases where a member has just authenticated to the Site and the recorded value of "anonymous" is listed instead, even though anonymous (e.g., a non-authenticated user) was not actually reading the service. If log queries are executed manually (for the same window of time) through Manager or the REST Admin API, additional details are revealed which can help explain this initial user impersonation by the entity called "anonymous". By using the Request ID field in the logs, one can correlate multiple entries together (since all entries with the same Request ID belong to the same request...which is really awesome). So, while the Code=100004 entry shows the user as "anonymous", the Code=9029 entry actually lists the requesting user's Portal member identity, in this case, "admin". Subsequent queries by that user are listed with the expected name (e.g., not "anonymous"). A sketch of this correlation idea appears at the end of this Article.

Note: In the log entry screenshot above, "NaturalEarth/NaturalEarth_SQLServer.MapServer" was a service shared only to specific Portal members.

Note: System Log Parser does not currently present this additional user impersonation detail. Whatever value is recorded under User Name is what SLP uses for the Statistics By User worksheet.

Note: There can also be a separate Code=8522 log entry which lists the recorded member value under the User Name column.

Actual Anonymous Requests to Services Shared with the Public

There are also log entries where the value of the "User Name" field can list "anonymous" as the member, but where it truly represents an anonymous user. In this situation, the logs are identifying a successful request made by someone for a publicly available service where the connecting client was not challenged to authenticate.
In other words, the service was intentionally shared with Everyone (e.g., the public). By performing another manual, in-depth log query (for the same window of time) for these types of requests, more details can be derived which show the associated Code=9029 entry. This helps highlight that the request was actually made on behalf of the "Anonymous user".

Note: In the log entry screenshot above, "SampleWorldCities.MapServer" was a service shared with Everyone.

Note: System Log Parser does not currently present this additional user impersonation detail. Whatever value is recorded under User Name is what SLP uses for the Statistics By User worksheet.

Are There Anonymous User Log Entries for Secured Services?

No. Requests issued for any non-publicly shared resource will be prompted to authenticate (even if what is requested does not exist). Therefore, Code=100004 entries will not exist for the "Anonymous user" against secured services.

Note: In the log entry screenshot above, "NaturalEarth/NaturalEarth_SQLServer.MapServer" was a service shared only to specific Portal members.

Note: ArcGIS Enterprise will still acknowledge an "Anonymous user" request for a secured service (existing or not) with a Code=9029 entry (and potentially a Code=8522 entry as well).

What Release Is This User Name Log Entry Information Based On?

This article is based on ArcGIS Enterprise 11.2/11.3, but the User Name information has been available in the ArcGIS Server logs for many releases.

Variability

The purpose of this Community Article is to offer guidance and help explain several of the situations where "anonymous" is listed as the User Name in the ArcGIS Enterprise (e.g., ArcGIS Server) logs. Expect some variability in this behavior (over the years and) across releases. Additionally, since there are many ArcGIS Server service capabilities, each may handle the persistence of the User Name value slightly differently within the framework's internal logging logic.
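Appendix: a hedged Python sketch of the Request ID correlation described above. It assumes `entries` is a list of log-message dictionaries already fetched from the Admin API's logs/query endpoint; the attribute names ("requestID", "code", "user", "elapsed") are assumptions to verify against your release:

```python
# Resolve "anonymous" Code=100004 entries using the matching Code=9029 entry.
from collections import defaultdict

def resolve_users(entries):
    by_request = defaultdict(list)
    for e in entries:
        by_request[e.get("requestID")].append(e)

    for rid, group in by_request.items():
        timed = next((e for e in group if e.get("code") == 100004), None)
        if timed is None:
            continue                      # no elapsed-time entry to report
        user = timed.get("user")
        if user == "anonymous":
            ident = next((e for e in group if e.get("code") == 9029), None)
            if ident is not None:
                user = ident.get("user", user)   # real Portal member identity
        yield rid, user, timed.get("elapsed")
```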
04-22-2024 05:31 PM

BLOG
Hi @ZachBodenner,

> could possibly expand on why it would be "potentially faster."
> Is that just because it's easier to devote dedicated instances to the service?

Yes, but I think it would have more to do with not splitting time across different service instances for retrieving the same data. If all 20 dedicated instances are from one service, then there is a greater chance of improved performance from the benefit of "cache hits". There is "cache" all over, but the one I am thinking of is at the ArcSOC level (depending on the service, there can be a workspace cache that can be taken advantage of). In the end, the performance of both configurations is probably really close, but if I were to pick one (without performance testing the differences between the two), I would pick the layers coming from the same service. Aaron
03-08-2024 02:55 PM