Units for Python API Data Store metrics query

10-09-2024 10:59 AM
Jay_Gregory
Frequent Contributor

I'm trying to query AVG_CPU for my m2 data store per https://developers.arcgis.com/python/latest/api-reference/arcgis.gis.admin.html#datastoremetricsmana... and I'm trying to understand the units of the results. My code is:

gis.admin.datastore_metrics.query(metric=DataStoreMetric.AVG_CPU, bin_size=1, bin_unit=DataStoreTimeUnit.HOUR, ago=7, ago_unit=DataStoreTimeUnit.DAY)

And my results are:

[{'ts': datetime.datetime(2024, 10, 9, 17, 0), 'value': 793.0},
 {'ts': datetime.datetime(2024, 10, 9, 16, 0), 'value': 2905.0},
 {'ts': datetime.datetime(2024, 10, 9, 15, 0), 'value': 3252.0},
 {'ts': datetime.datetime(2024, 10, 9, 14, 0), 'value': 2101.0},
 {'ts': datetime.datetime(2024, 10, 9, 13, 0), 'value': 49.0},
 {'ts': datetime.datetime(2024, 10, 9, 12, 0), 'value': 29.0},
 {'ts': datetime.datetime(2024, 10, 9, 11, 0), 'value': 48.0},
 {'ts': datetime.datetime(2024, 10, 9, 10, 0), 'value': 24.0},
 {'ts': datetime.datetime(2024, 10, 9, 9, 0), 'value': 0.0},
 {'ts': datetime.datetime(2024, 10, 9, 8, 0), 'value': 1.0},

Am I reading this correctly that my avg_cpu is over 100% at certain times?

PeterKnoop
MVP Regular Contributor

query has an aggregation parameter, which defaults to DataStoreAggregation.SUM. Since you are not overriding that value, I believe your code is producing the aggregated sum of the per-minute data over the course of each hour, which is why some values come out well above 100.
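As a rough sanity check of that explanation (assuming roughly 60 per-minute samples per one-hour bin), dividing the hourly sums from the question by 60 brings them back onto a plausible 0-100% scale:

```python
# Hourly values from the original query output (sums of per-minute readings,
# under the DataStoreAggregation.SUM default).
hourly_sums = [793.0, 2905.0, 3252.0, 2101.0]

# Dividing each sum by ~60 samples/hour recovers an approximate average percentage.
approx_avg_pct = [round(v / 60, 1) for v in hourly_sums]
print(approx_avg_pct)  # prints: [13.2, 48.4, 54.2, 35.0]
```

All of the recovered values land comfortably below 100%, consistent with the sum-of-minutes reading.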

I haven't found where the aggregation parameter values are explicitly documented; however, through autocompletion, it appears the possible values are:

[Screenshot: autocomplete listing the DataStoreAggregation enum values]

So you will want to add aggregation=DataStoreAggregation.AVG to your call if you want to see the average CPU utilization over one hour on the expected 0-100% scale.
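A sketch of the adjusted call, wrapped in a helper for clarity (the import path and enum names are taken from the thread and from autocompletion, not from official documentation, so verify them against your installed arcgis version):

```python
def query_avg_cpu_hourly(gis):
    """Query hourly average CPU for the data store, averaging (not summing)
    the per-minute readings within each one-hour bin."""
    # Import lazily so the helper can be defined without arcgis installed.
    from arcgis.gis.admin import (
        DataStoreMetric,
        DataStoreTimeUnit,
        DataStoreAggregation,
    )

    return gis.admin.datastore_metrics.query(
        metric=DataStoreMetric.AVG_CPU,
        bin_size=1,
        bin_unit=DataStoreTimeUnit.HOUR,
        ago=7,
        ago_unit=DataStoreTimeUnit.DAY,
        aggregation=DataStoreAggregation.AVG,  # override the SUM default
    )
```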

On a side note, if you are doing this to evaluate how much time your organization's utilization is at 100%, and users are experiencing unexpected delays, then you probably want to capture this data at the minute scale. You can then look back at, for example, how many minutes per day utilization was pegged at 100%, and how many consecutive minutes it stayed there. If those numbers exceed your users' tolerance for waiting on results or receiving timeout errors, you likely want to upgrade your Feature Data Store, or identify what is maxing things out so you can work with users on alternative workflows to reduce utilization.
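The minute-scale analysis above can be sketched as follows. The per-minute values here are hypothetical sample data; in practice you would fetch them with the same query method using bin_unit=DataStoreTimeUnit.MINUTE and aggregation=DataStoreAggregation.AVG:

```python
from itertools import groupby

# Hypothetical per-minute average-CPU readings (0-100%) standing in for
# the real query results.
per_minute = [40, 100, 100, 100, 55, 100, 100, 30, 100]

# Total minutes pegged at 100%.
pegged = sum(1 for v in per_minute if v >= 100)

# Longest run of consecutive minutes pegged at 100%.
longest_run = max(
    (sum(1 for _ in grp)
     for hit, grp in groupby(per_minute, key=lambda v: v >= 100)
     if hit),
    default=0,
)

print(pegged, longest_run)  # prints: 6 3
```

Tracking both numbers matters: many scattered pegged minutes suggest sustained load, while one long consecutive run points at a single runaway workload.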