<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Units for Python API Data Store metrics query in ArcGIS API for Python Questions</title>
    <link>https://community.esri.com/t5/arcgis-api-for-python-questions/units-for-python-api-data-store-metrics-query/m-p/1547598#M10758</link>
    <description>&lt;P&gt;&lt;A href="https://developers.arcgis.com/python/latest/api-reference/arcgis.gis.admin.html#arcgis.gis.admin.DataStoreMetricsManager.query" target="_self"&gt;query&lt;/A&gt; has an aggregation parameter, which by default is set to DataStoreAggregation.SUM. Since you are not overriding that value, I believe your code is producing the aggregated sum of per-minute data over the course of an hour.&lt;/P&gt;&lt;P&gt;I haven't found where the aggregation parameter values are explicitly documented; however, through autocompletion, it appears the possible values are:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="PeterKnoop_0-1728570093432.png" style="width: 400px;"&gt;&lt;img src="https://community.esri.com/t5/image/serverpage/image-id/116980i5E51FBEFCA5A48C5/image-size/medium?v=v2&amp;amp;px=400" role="button" title="PeterKnoop_0-1728570093432.png" alt="PeterKnoop_0-1728570093432.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;So you will want to add aggregation=DataStoreAggregation.AVG to your call if you want to see the average CPU utilization over one hour on the expected scale of 0-100%.&lt;/P&gt;&lt;P&gt;On a side note, if you are doing this to evaluate how much time your organization's utilization is at 100%, and users are experiencing unexpected delays, then you probably want to capture this data at the minute scale. You can then retrospectively look, for example, at how many minutes over a day your utilization was pegged at 100%, as well as how many consecutive minutes it was pegged at 100%. If those numbers exceed your users' tolerance for waiting for results or receiving timeout errors, then you likely want to upgrade your Feature Data Store, or identify what is maxing things out so that you can work with users on potential alternative workflows to reduce utilization.&lt;/P&gt;</description>
    <pubDate>Thu, 10 Oct 2024 14:33:51 GMT</pubDate>
    <dc:creator>PeterKnoop</dc:creator>
    <dc:date>2024-10-10T14:33:51Z</dc:date>
    <item>
      <title>Units for Python API Data Store metrics query</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/units-for-python-api-data-store-metrics-query/m-p/1547220#M10750</link>
      <description>&lt;P&gt;I'm trying to query the AVG_CPU for my m2 data store per &lt;A href="https://developers.arcgis.com/python/latest/api-reference/arcgis.gis.admin.html#datastoremetricsmanager" target="_blank" rel="noopener"&gt;https://developers.arcgis.com/python/latest/api-reference/arcgis.gis.admin.html#datastoremetricsmanager&lt;/A&gt; and I'm trying to understand the units of the results. My code is:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;gis.admin.datastore_metrics.query(
    metric=DataStoreMetric.AVG_CPU,
    bin_size=1,
    bin_unit=DataStoreTimeUnit.HOUR,
    ago=7,
    ago_unit=DataStoreTimeUnit.DAY,
)&lt;/LI-CODE&gt;&lt;P&gt;And my results are:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;[{'ts': datetime.datetime(2024, 10, 9, 17, 0), 'value': 793.0},
 {'ts': datetime.datetime(2024, 10, 9, 16, 0), 'value': 2905.0},
 {'ts': datetime.datetime(2024, 10, 9, 15, 0), 'value': 3252.0},
 {'ts': datetime.datetime(2024, 10, 9, 14, 0), 'value': 2101.0},
 {'ts': datetime.datetime(2024, 10, 9, 13, 0), 'value': 49.0},
 {'ts': datetime.datetime(2024, 10, 9, 12, 0), 'value': 29.0},
 {'ts': datetime.datetime(2024, 10, 9, 11, 0), 'value': 48.0},
 {'ts': datetime.datetime(2024, 10, 9, 10, 0), 'value': 24.0},
 {'ts': datetime.datetime(2024, 10, 9, 9, 0), 'value': 0.0},
 {'ts': datetime.datetime(2024, 10, 9, 8, 0), 'value': 1.0},&lt;/LI-CODE&gt;&lt;P&gt;Am I reading this correctly that my avg_cpu is over 100% at certain times?&lt;/P&gt;</description>
      <pubDate>Wed, 09 Oct 2024 17:59:32 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/units-for-python-api-data-store-metrics-query/m-p/1547220#M10750</guid>
      <dc:creator>Jay_Gregory</dc:creator>
      <dc:date>2024-10-09T17:59:32Z</dc:date>
    </item>
    <item>
      <title>Re: Units for Python API Data Store metrics query</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/units-for-python-api-data-store-metrics-query/m-p/1547598#M10758</link>
      <description>&lt;P&gt;&lt;A href="https://developers.arcgis.com/python/latest/api-reference/arcgis.gis.admin.html#arcgis.gis.admin.DataStoreMetricsManager.query" target="_self"&gt;query&lt;/A&gt; has an aggregation parameter, which by default is set to DataStoreAggregation.SUM. Since you are not overriding that value, I believe your code is producing the aggregated sum of per-minute data over the course of an hour.&lt;/P&gt;&lt;P&gt;I haven't found where the aggregation parameter values are explicitly documented; however, through autocompletion, it appears the possible values are:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="PeterKnoop_0-1728570093432.png" style="width: 400px;"&gt;&lt;img src="https://community.esri.com/t5/image/serverpage/image-id/116980i5E51FBEFCA5A48C5/image-size/medium?v=v2&amp;amp;px=400" role="button" title="PeterKnoop_0-1728570093432.png" alt="PeterKnoop_0-1728570093432.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;So you will want to add aggregation=DataStoreAggregation.AVG to your call if you want to see the average CPU utilization over one hour on the expected scale of 0-100%.&lt;/P&gt;&lt;P&gt;On a side note, if you are doing this to evaluate how much time your organization's utilization is at 100%, and users are experiencing unexpected delays, then you probably want to capture this data at the minute scale. You can then retrospectively look, for example, at how many minutes over a day your utilization was pegged at 100%, as well as how many consecutive minutes it was pegged at 100%. If those numbers exceed your users' tolerance for waiting for results or receiving timeout errors, then you likely want to upgrade your Feature Data Store, or identify what is maxing things out so that you can work with users on potential alternative workflows to reduce utilization.&lt;/P&gt;</description>
      <pubDate>Thu, 10 Oct 2024 14:33:51 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/units-for-python-api-data-store-metrics-query/m-p/1547598#M10758</guid>
      <dc:creator>PeterKnoop</dc:creator>
      <dc:date>2024-10-10T14:33:51Z</dc:date>
    </item>
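    The arithmetic behind the answer above can be sketched in plain Python: with the default DataStoreAggregation.SUM, each hourly bin is the sum of roughly 60 per-minute readings, so dividing by the sample count approximates what aggregation=DataStoreAggregation.AVG would return. This is a sketch under the assumption of one reading per minute; the sample values are copied from the question.

    ```python
    # Hourly AVG_CPU bins returned with the default SUM aggregation
    # (values copied from the question above).
    summed_bins = [
        {"ts": "2024-10-09 17:00", "value": 793.0},
        {"ts": "2024-10-09 16:00", "value": 2905.0},
        {"ts": "2024-10-09 15:00", "value": 3252.0},
    ]

    # Assumption: the data store records one sample per minute, so an
    # hourly bin aggregates about 60 readings.
    SAMPLES_PER_HOUR = 60

    # Dividing each summed bin by the sample count recovers a value on
    # the expected 0-100% scale, approximating the AVG aggregation.
    averaged_bins = [
        {"ts": b["ts"], "value": b["value"] / SAMPLES_PER_HOUR}
        for b in summed_bins
    ]

    for b in averaged_bins:
        print(f"{b['ts']}: {b['value']:.1f}% average CPU")
    ```

    For example, the 15:00 bin of 3252.0 works out to about 54.2% average CPU, which is back on the 0-100% scale the question expected.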
  </channel>
</rss>

