<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic spatiotemporal bds REST query performance in ArcGIS API for Python Questions</title>
    <link>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766415#M556</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Using the ArcGIS API for Python, I'm comparing query performance against data in Oracle vs. a Hosted Feature Layer whose data is in the Spatiotemporal Big Data Store. We've got ArcGIS Enterprise (portal, hosted server, GeoEvent Server, data store) at version 10.6.1 running on Linux; each component is on its own machine that meets the recommended specs. Our spatiotemporal big data store has 3 nodes, each with 16 cores, 32 GB RAM, and 1 TB storage. I'm using ArcGIS API for Python v1.4.2. I used GeoEvent Server to load 450 million rows into the Spatiotemporal Big Data Store. I made a table view of the same data in Oracle.&lt;/P&gt;&lt;P&gt;For one day at a time, my query gets all of the unique values in a field (myField) and counts how many records there are for each unique value.&lt;/P&gt;&lt;P&gt;Using a cx_Oracle database connection, I query my Oracle data like so:&lt;/P&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;&lt;CODE&gt;select = """SELECT COUNT(myField) AS CNT, myField
            FROM MY_TABLE
            WHERE DATETIME &amp;gt;= TO_DATE('2018-08-01 00:00', 'YYYY-MM-DD HH24:MI')
              AND DATETIME &amp;lt; TO_DATE('2018-08-02 00:00', 'YYYY-MM-DD HH24:MI')
            GROUP BY myField
            ORDER BY CNT DESC"""
unique_values = pandas.read_sql(select, &amp;lt;database connection&amp;gt;)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;That query returns in 38 seconds and tells me that I have 3.8 million unique values for that day in myField. When I put that in a for loop and run the query for each day in a month, the average query time is 31 seconds.&lt;/P&gt;&lt;P&gt;I expected my queries against the spatiotemporal big data store to be faster, but instead they are far slower. The same query takes 1 hr 12 min, and iterating over a month one day at a time yields an average query time of over an hour. I'm doing the query like this:&lt;/P&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;&lt;CODE&gt;import arcgis
import datetime
from datetime import timedelta

fl = arcgis.features.FeatureLayer('&amp;lt;url&amp;gt;/MapServer/0', gis)
time1 = datetime.datetime(2018, 8, 1)
time2 = time1 + timedelta(days=1)
unique_values = fl.query(where='1=1',
                         time_filter=[time1, time2],
                         out_fields='myField',
                         return_distinct_values=True,
                         return_geometry=False)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;The query returns exactly what I expect, which is the same thing the Oracle query returns. This is good. But it's much slower, at about 1 hr 15 min on average (again, about 30 seconds via Oracle).&lt;/P&gt;&lt;P&gt;How can I be sure I've optimized this query? Does Elasticsearch have some kind of query overhead that makes queries take a long time? In other words, should I only expect Elasticsearch to outperform Oracle when the scale of my data is huge, or should I not expect Elasticsearch to outperform Oracle at all for this type of query?&lt;/P&gt;&lt;P&gt;Incidentally, what does the dynamic_layer parameter in the FeatureLayer constructor do?&lt;/P&gt;&lt;P&gt;&lt;A class="link-titled" href="https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#arcgis.features.FeatureLayer" title="https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#arcgis.features.FeatureLayer" rel="nofollow noopener noreferrer" target="_blank"&gt;arcgis.features module — arcgis 1.5.0 documentation&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The built-in docstring viewed via ctrl+tab in my Jupyter Notebook says, "Optional dictionary. If the layer is given a dynamic layer definition, this will be added to the functions." What does that mean? There are no examples and I can't figure out what this is for. I was hoping I could use it like a definition expression to limit the data in my feature layer to the time range in question, thereby making my query faster.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Sun, 12 Dec 2021 08:31:35 GMT</pubDate>
    <dc:creator>RyanClancy</dc:creator>
    <dc:date>2021-12-12T08:31:35Z</dc:date>
    <item>
      <title>spatiotemporal bds REST query performance</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766415#M556</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Using the ArcGIS API for Python, I'm comparing query performance against data in Oracle vs. a Hosted Feature Layer whose data is in the Spatiotemporal Big Data Store. We've got ArcGIS Enterprise (portal, hosted server, GeoEvent Server, data store) at version 10.6.1 running on Linux; each component is on its own machine that meets the recommended specs. Our spatiotemporal big data store has 3 nodes, each with 16 cores, 32 GB RAM, and 1 TB storage. I'm using ArcGIS API for Python v1.4.2. I used GeoEvent Server to load 450 million rows into the Spatiotemporal Big Data Store. I made a table view of the same data in Oracle.&lt;/P&gt;&lt;P&gt;For one day at a time, my query gets all of the unique values in a field (myField) and counts how many records there are for each unique value.&lt;/P&gt;&lt;P&gt;Using a cx_Oracle database connection, I query my Oracle data like so:&lt;/P&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;&lt;CODE&gt;select = """SELECT COUNT(myField) AS CNT, myField
            FROM MY_TABLE
            WHERE DATETIME &amp;gt;= TO_DATE('2018-08-01 00:00', 'YYYY-MM-DD HH24:MI')
              AND DATETIME &amp;lt; TO_DATE('2018-08-02 00:00', 'YYYY-MM-DD HH24:MI')
            GROUP BY myField
            ORDER BY CNT DESC"""
unique_values = pandas.read_sql(select, &amp;lt;database connection&amp;gt;)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;That query returns in 38 seconds and tells me that I have 3.8 million unique values for that day in myField. When I put that in a for loop and run the query for each day in a month, the average query time is 31 seconds.&lt;/P&gt;&lt;P&gt;I expected my queries against the spatiotemporal big data store to be faster, but instead they are far slower. The same query takes 1 hr 12 min, and iterating over a month one day at a time yields an average query time of over an hour. I'm doing the query like this:&lt;/P&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;&lt;CODE&gt;import arcgis
import datetime
from datetime import timedelta

fl = arcgis.features.FeatureLayer('&amp;lt;url&amp;gt;/MapServer/0', gis)
time1 = datetime.datetime(2018, 8, 1)
time2 = time1 + timedelta(days=1)
unique_values = fl.query(where='1=1',
                         time_filter=[time1, time2],
                         out_fields='myField',
                         return_distinct_values=True,
                         return_geometry=False)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;The query returns exactly what I expect, which is the same thing the Oracle query returns. This is good. But it's much slower, at about 1 hr 15 min on average (again, about 30 seconds via Oracle).&lt;/P&gt;&lt;P&gt;How can I be sure I've optimized this query? Does Elasticsearch have some kind of query overhead that makes queries take a long time? In other words, should I only expect Elasticsearch to outperform Oracle when the scale of my data is huge, or should I not expect Elasticsearch to outperform Oracle at all for this type of query?&lt;/P&gt;&lt;P&gt;Incidentally, what does the dynamic_layer parameter in the FeatureLayer constructor do?&lt;/P&gt;&lt;P&gt;&lt;A class="link-titled" href="https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#arcgis.features.FeatureLayer" title="https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#arcgis.features.FeatureLayer" rel="nofollow noopener noreferrer" target="_blank"&gt;arcgis.features module — arcgis 1.5.0 documentation&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The built-in docstring viewed via ctrl+tab in my Jupyter Notebook says, "Optional dictionary. If the layer is given a dynamic layer definition, this will be added to the functions." What does that mean? There are no examples and I can't figure out what this is for. I was hoping I could use it like a definition expression to limit the data in my feature layer to the time range in question, thereby making my query faster.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 08:31:35 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766415#M556</guid>
      <dc:creator>RyanClancy</dc:creator>
      <dc:date>2021-12-12T08:31:35Z</dc:date>
    </item>
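<!-- Editor's sketch (not from the thread): the per-day GROUP BY count in the question is expressed as a distinct-values query, but the same aggregation can be pushed to the server as a groupBy statistics query, which is usually closer to the SQL the Oracle version runs. This is a hedged example under assumptions: the layer URL, the field name 'myField', and the `gis` object are placeholders from the question; `group_by_fields_for_statistics` and `out_statistics` are parameters of `arcgis.features.FeatureLayer.query` in the 1.x API.

```python
# Sketch: express "SELECT COUNT(myField) AS CNT, myField ... GROUP BY myField"
# as a REST statistics query instead of a distinct-values query.
import datetime

def day_window(day):
    """Return a [start, end) datetime pair covering one calendar day."""
    start = datetime.datetime(day.year, day.month, day.day)
    return start, start + datetime.timedelta(days=1)

# The outStatistics definition mirrors COUNT(myField) AS CNT.
out_stats = [{
    "statisticType": "count",
    "onStatisticField": "myField",
    "outStatisticFieldName": "CNT",
}]

time1, time2 = day_window(datetime.date(2018, 8, 1))

# Commented out because it needs a live ArcGIS Enterprise deployment:
# fl = arcgis.features.FeatureLayer('<url>/MapServer/0', gis)
# result = fl.query(where='1=1',
#                   time_filter=[time1, time2],
#                   group_by_fields_for_statistics='myField',
#                   out_statistics=out_stats,
#                   return_geometry=False)
```

Whether this is faster than `return_distinct_values=True` on a spatiotemporal big data store would need to be measured; it simply avoids transferring per-feature distinct values when only the grouped counts are wanted. -->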
    <item>
      <title>Re: spatiotemporal bds REST query performance</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766416#M557</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks for the post, not because I can answer it but because I have seen similar results and haven't taken the time to write them up. When it comes to all of the "distributed GIS" offerings (spatiotemporal data store, GeoAnalytics Server, raster analytics), we have not been impressed with performance. Given the amount of money involved in deploying numerous virtual machines and licensing them with OS and GIS applications, I don't understand how "distributed GIS" is supposed to compete with a high-end local workstation conducting similar analyses.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 04 Oct 2018 22:26:10 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766416#M557</guid>
      <dc:creator>JoshuaBixby</dc:creator>
      <dc:date>2018-10-04T22:26:10Z</dc:date>
    </item>
    <item>
      <title>Re: spatiotemporal bds REST query performance</title>
      <link>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766417#M558</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Sharing with &lt;A href="https://community.esri.com/group/1456"&gt;Big Data&lt;/A&gt; and &lt;A href="https://community.esri.com/group/1199"&gt;Real-Time &amp;amp; Big Data GIS&lt;/A&gt; since STDS falls under that area.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 04 Oct 2018 22:32:17 GMT</pubDate>
      <guid>https://community.esri.com/t5/arcgis-api-for-python-questions/spatiotemporal-bds-rest-query-performance/m-p/766417#M558</guid>
      <dc:creator>JoshuaBixby</dc:creator>
      <dc:date>2018-10-04T22:32:17Z</dc:date>
    </item>
  </channel>
</rss>

