
Tracing Response Time Problems For Services

11-05-2024 01:14 PM
SGTomlins
Regular Contributor

Greetings,

I have ArcGIS Monitor tracking response times for my services.  I consistently see response times for most services in the 0.8 second range.  Several times a day, response times on my Image Services jump to nearly 4 seconds.  The spikes are intermittent; these services are normally under 1 second.

I have reviewed all the logs in Portal, Server, the Server statistics, Monitor, and the System Event Viewer for the specific time frames in question.  I can find the requests made to the problematic services in each respective logging environment.

What I do not have is the "trace" information to discover exactly what is causing the long response times.  Once the request for the Image Service is made, it leaves a trail from the requesting application through Portal, Server, the tile cache, the database, and so on... I am not exactly sure where the time is going.

Can someone direct me to a solution for discovering the delay?  I am accustomed to the SDE intercept process, for example, to discover slow or bad queries.

After all the traditional configuration tweaks to speed things up have been made, I want to discover what goes "wrong" at different times of the day for specific services.

What is the prescribed method for finding delays in response times?  Is there anything like "tracert" or "sdeintercept" available for this?  Am I missing something obvious?

@EsriEvan, I thought I would hit you up since you chimed in on another performance question I posted along these lines: Solved: What is measured for the "Average Response Time"? - Esri Community

Thank you in advance.

@SGTomlins 


3 Replies
SGTomlins
Regular Contributor

@AaronLopez - Forgive me if this is not the right way to connect with you, but I am looking for someone to nudge me in the right direction on a trail to follow for finding delays in my system.  I know I can open each and every log environment, isolate the time frames, and attempt the forensics.  I was hoping there is a more integrated way to do this.

Thank you,

Steve

AaronLopez
Esri Contributor

Hi @SGTomlins,
Finding delays or bottlenecks in a system can be a complex problem to solve. Sometimes the issue is hardware (e.g., not enough memory, too few CPU cores) and sometimes it is software (e.g., not enough service instances, the maximum number of database connections reached, a map showing too much detail at small scales).
As for an integrated, easier way to accomplish such a task, I'm not sure one exists. Logs and traces are certainly valuable but, as you mentioned, they fall into the realm of "forensics" and take more time to gather and analyze.

Whatever method you choose, I would offer a few strategies. I would start by identifying your top three most popular services. This narrows the focus and gives the best return on the investment of your time.
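
If it helps, one quick, unofficial way to get that list is to tally requests per service from your web server (or Web Adaptor) access logs. This is only a minimal Python sketch; the log file path and the /arcgis/rest/services URL pattern are assumptions about a typical deployment, so adjust them for your site:

# Hypothetical sketch: count requests per ArcGIS REST service in a web server access log.
# The log path and the /rest/services URL pattern are assumptions; adjust for your site.
import re
from collections import Counter

LOG_FILE = "access.log"  # placeholder path to an IIS/Apache/Web Adaptor access log
# Matches .../rest/services/<optional folder>/<service>/<ServiceType>
PATTERN = re.compile(r"/rest/services/((?:[\w\-]+/)?[\w\-]+/\w+Server)", re.IGNORECASE)

counts = Counter()
with open(LOG_FILE, encoding="utf-8", errors="ignore") as f:
    for line in f:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1

# Print the three busiest services; start the tuning effort there.
for service, hits in counts.most_common(3):
    print(f"{hits:>8}  {service}")

ArcGIS Monitor or your web analytics may already give you the same breakdown; the script is just a fallback when raw access logs are all you have.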

From there, look to understand how the services are being utilized (e.g., what operations people are requesting from them: export map, query, applyEdits, and so on). It is also advantageous to know the performance profile of those operations; maybe only one particular operation impacts users the most.
From an ArcGIS Server logging perspective, different operations record different levels of detail. For some, you may have to increase the ArcGIS Server log level to get that information.
For example (see the log-query sketch after these examples):
   -- An export map request at verbose/debug can list the duration of the overall request (from the point of view of the ArcSOC) and how long the symbology takes to render
   -- A feature query at verbose can list the duration of the overall request (from the point of view of the ArcSOC) and show how much of that time was spent in the database; this is helpful for understanding whether you are spending more time on data retrieval or on data serialization
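
If you would rather pull those records programmatically than scroll through Server Manager, here is a minimal sketch against the ArcGIS Server Administrator API logs/query endpoint. The server URL, credentials, and service filter are placeholders, and the assumption that an "elapsed" value is populated on the records of interest depends on the message type and log level, so verify all of it against your own site:

# Hypothetical sketch: query ArcGIS Server logs for one service and print request durations.
# Server URL, credentials, and the service filter are placeholders; "elapsed" is only
# populated for certain messages and log levels, so missing values are expected.
import requests

ADMIN = "https://gisserver.example.com:6443/arcgis/admin"  # placeholder admin URL
USER, PASSWORD = "siteadmin", "changeme"                   # placeholder credentials

# Get an administrative token.
token = requests.post(
    f"{ADMIN}/generateToken",
    data={"username": USER, "password": PASSWORD, "client": "requestip", "f": "json"},
    verify=False,  # self-signed certs are common on port 6443; verify properly in production
).json()["token"]

# Query verbose (FINE) messages for one image service (placeholder service name).
resp = requests.post(
    f"{ADMIN}/logs/query",
    data={
        "level": "FINE",
        "filter": '{"services": ["SampleFolder/SampleImagery.ImageServer"]}',
        "pageSize": 1000,
        "token": token,
        "f": "json",
    },
    verify=False,
).json()

# Print records that carry an elapsed time so the slow requests stand out.
for rec in resp.get("logMessages", []):
    elapsed = rec.get("elapsed")
    if elapsed:
        print(elapsed, rec.get("source"), rec.get("message", "")[:80])

Running that around the time windows where Monitor shows the 4-second spikes should make it easier to see whether the time is going to rendering, the database, or something else entirely.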

Once items for improvement have been identified, you can start researching ways to tune the configuration, improve performance, and remove the bottleneck.

Hope that helps.

Aaron

SGTomlins
Regular Contributor

Thank you, Aaron, I am grateful for your time.  I will proceed with the helpful thoughts you offered here.

All the best,

@SGTomlins 
