Does anyone know if there is a way to find an AGO item's (story map, etc.) usage stats with a custom date range using Python?
Thanks!
Hi Edward,
The usage functionality isn't currently documented in the REST API. Consider supporting the following idea, or creating a new one, to have this functionality documented so that you can interact with it using Python: https://community.esri.com/ideas/7408
When posting the idea, it's really helpful to include a description of your desired workflow and the expected output.
Thanks,
Kelly
Hi Edward,
The new ArcGIS API for Python may be able to do what you're looking for.
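For example, something along these lines should get item-level usage; the credentials and item ID are placeholders, and the exact usage() signature is my understanding of the API, so double-check the reference for your version:

from arcgis.gis import GIS

# Connect to ArcGIS Online (placeholder credentials)
gis = GIS('https://www.arcgis.com', 'AGOL_USER_NAME', 'AGOL_PASSWORD')

# Placeholder item ID for the story map (or other item) of interest
item = gis.content.get('0123456789abcdef0123456789abcdef')

# Pull usage for a preset window; as_df=True returns a pandas DataFrame
stats = item.usage(date_range='60D', as_df=True)
print(stats.head())

Newer releases of the API also appear to accept a (start, end) tuple of datetimes for date_range, which would cover the custom-range part of the question, but I haven't verified that on every version.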
Thank you,
Lesi
Do you know where the arguments for usage are documented?
I found this sample that would indicate usage is available (beta?), but there's nothing in the API reference about it.
Usage statistics and credit consumption | ArcGIS for Developers
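From poking at that sample, the request seems to look roughly like the following; every parameter name here is inferred from the sample rather than from official documentation, so treat it all as a guess:

import urllib

# All parameters below are inferred from the developer sample, not from
# the official API reference; treat them as assumptions.
params = urllib.urlencode({
    'f': 'json',
    'startTime': 1451606400000,  # range start, epoch milliseconds
    'endTime': 1456099200000,    # range end, epoch milliseconds (~52 days)
    'period': '1d',              # aggregation interval
    'vars': 'num',               # ask for request counts
    'name': 'MY_SERVICE_NAME',   # hypothetical service name
    'stype': 'features',
    'token': 'YOUR_TOKEN',
})
url = 'https://www.arcgis.com/sharing/rest/portals/self/usage?' + params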
Thanks,
-Tyler
This is really hokey and could use a lot of cleaning. For some reason, I couldn't get around a limit on how much data the API returns per request, so I had to make a separate URL call for each 52-ish-day block to get a year's worth of stats. This reads from a CSV of item names and writes the stats out to a separate file per item. You have to pass dates as Julian dates. This idea was originally posted two years ago: Access Raw (Rest API) data from AGOL Feature Service usage statistics. It has been hugely helpful when we compare the raw AGOL service numbers against Google Analytics for the web pages where we embed the maps, so we can see what folks are interested in or actually using. The service statistics are far more telling than the number of times an item was viewed.
This could easily be modified to write to one file, or to be more dynamic on dates instead of the ridiculous number of try blocks I have (a rough sketch of that refactor follows the script).
import csv
import urllib
import urllib2
import json
import datetime
# need to get a token
tokenURL = 'https://www.arcgis.com/sharing/rest/generateToken'
params = {'f': 'pjson', 'username': 'AGOL_USER_NAME', 'password': 'AGOL_PASSWORD!', 'referer': 'https://www.arcgis.com'}
req = urllib2.Request(tokenURL, urllib.urlencode(params))
response = urllib2.urlopen(req)
data = json.load(response)
token = data['token']
# read list of AGOL service names
readCSV = list(csv.reader(open('service.csv')))
for row in readCSV:
    name = row[0]
    # generate url list of up to 7 API calls
    # Esri truncates usage stats into blocks of approx. 52 days,
    # so making a single url call for the entire year will fail
    url1 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    url2 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    url3 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    url4 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    url5 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    url6 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    url7 = 'https://[YOUR_ORGANIZATIONAL_NAME].maps.arcgis.com/sharing/rest/portals/[YOUR_ORGANIZATIONAL_ID]/usa...'+(name)+'&stype=features&token='+token
    # Make 7 url calls for the year. This fails if the feature was posted
    # mid-year, so the nested try blocks fall back to fewer and fewer calls
    try:
        print 'Reading the usage statistics for '+(name)+' for 2016....'
        filename = (name)+'.csv'
        data = json.loads(urllib.urlopen(url1).read())['data'][0]['num']
        data = data + json.loads(urllib.urlopen(url2).read())['data'][0]['num']
        data = data + json.loads(urllib.urlopen(url3).read())['data'][0]['num']
        data = data + json.loads(urllib.urlopen(url4).read())['data'][0]['num']
        data = data + json.loads(urllib.urlopen(url5).read())['data'][0]['num']
        data = data + json.loads(urllib.urlopen(url6).read())['data'][0]['num']
        data = data + json.loads(urllib.urlopen(url7).read())['data'][0]['num']
        print 'Writing '+(filename)+'....'
        with open(filename, 'wb') as output:
            csv_writer = csv.writer(output)
            csv_writer.writerow(['name','date','count'])
            # 'date' comes back as epoch milliseconds, hence the /1e3
            for date, count in data:
                csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
    except Exception:
        try:
            print 'Reading the usage statistics for '+(name)+' for the last 313 days of 2016....'
            filename = (name)+'.csv'
            data = json.loads(urllib.urlopen(url2).read())['data'][0]['num']
            data = data + json.loads(urllib.urlopen(url3).read())['data'][0]['num']
            data = data + json.loads(urllib.urlopen(url4).read())['data'][0]['num']
            data = data + json.loads(urllib.urlopen(url5).read())['data'][0]['num']
            data = data + json.loads(urllib.urlopen(url6).read())['data'][0]['num']
            data = data + json.loads(urllib.urlopen(url7).read())['data'][0]['num']
            print 'Writing '+(filename)+'....'
            with open(filename, 'wb') as output:
                csv_writer = csv.writer(output)
                csv_writer.writerow(['name','date','count'])
                for date, count in data:
                    csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
        except Exception:
            try:
                print 'Reading the usage statistics for '+(name)+' for the last 261 days of 2016....'
                filename = (name)+'.csv'
                data = json.loads(urllib.urlopen(url3).read())['data'][0]['num']
                data = data + json.loads(urllib.urlopen(url4).read())['data'][0]['num']
                data = data + json.loads(urllib.urlopen(url5).read())['data'][0]['num']
                data = data + json.loads(urllib.urlopen(url6).read())['data'][0]['num']
                data = data + json.loads(urllib.urlopen(url7).read())['data'][0]['num']
                print 'Writing '+(filename)+'....'
                with open(filename, 'wb') as output:
                    csv_writer = csv.writer(output)
                    csv_writer.writerow(['name','date','count'])
                    for date, count in data:
                        csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
            except Exception:
                try:
                    print 'Reading the usage statistics for '+(name)+' for the last 209 days of 2016....'
                    filename = (name)+'.csv'
                    data = json.loads(urllib.urlopen(url4).read())['data'][0]['num']
                    data = data + json.loads(urllib.urlopen(url5).read())['data'][0]['num']
                    data = data + json.loads(urllib.urlopen(url6).read())['data'][0]['num']
                    data = data + json.loads(urllib.urlopen(url7).read())['data'][0]['num']
                    print 'Writing '+(filename)+'....'
                    with open(filename, 'wb') as output:
                        csv_writer = csv.writer(output)
                        csv_writer.writerow(['name','date','count'])
                        for date, count in data:
                            csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
                except Exception:
                    try:
                        print 'Reading the usage statistics for '+(name)+' for the last 157 days of 2016....'
                        filename = (name)+'.csv'
                        data = json.loads(urllib.urlopen(url5).read())['data'][0]['num']
                        data = data + json.loads(urllib.urlopen(url6).read())['data'][0]['num']
                        data = data + json.loads(urllib.urlopen(url7).read())['data'][0]['num']
                        print 'Writing '+(filename)+'....'
                        with open(filename, 'wb') as output:
                            csv_writer = csv.writer(output)
                            csv_writer.writerow(['name','date','count'])
                            for date, count in data:
                                csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
                    except Exception:
                        try:
                            print 'Reading the usage statistics for '+(name)+' for the last 105 days of 2016....'
                            filename = (name)+'.csv'
                            data = json.loads(urllib.urlopen(url6).read())['data'][0]['num']
                            data = data + json.loads(urllib.urlopen(url7).read())['data'][0]['num']
                            print 'Writing '+(filename)+'....'
                            with open(filename, 'wb') as output:
                                csv_writer = csv.writer(output)
                                csv_writer.writerow(['name','date','count'])
                                for date, count in data:
                                    csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
                        except Exception:
                            try:
                                print 'Reading the usage statistics for '+(name)+' for the last 53 days of 2016....'
                                filename = (name)+'.csv'
                                data = json.loads(urllib.urlopen(url7).read())['data'][0]['num']
                                print 'Writing '+(filename)+'....'
                                with open(filename, 'wb') as output:
                                    csv_writer = csv.writer(output)
                                    csv_writer.writerow(['name','date','count'])
                                    for date, count in data:
                                        csv_writer.writerow([name,datetime.datetime.fromtimestamp(int(date)/1e3), count])
                            except Exception:
                                print 'No usage statistics available; the time range is too short'
                                pass
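And for what it's worth, here's the kind of refactor I meant above. It's an untested sketch: build_url is a hypothetical helper that would fill in your org URL plus the startTime/endTime pair for block i (the same values the url1-url7 strings hardcode), and the rest reuses the same endpoint calls as the script:

import json
import urllib

def get_year_stats(name, token):
    # Same logic as the try cascade above: start with all 7 blocks and
    # drop the oldest block each time a request fails.
    urls = [build_url(name, token, i) for i in range(7)]  # build_url is hypothetical
    for skip in range(7):
        try:
            data = []
            for url in urls[skip:]:
                data = data + json.loads(urllib.urlopen(url).read())['data'][0]['num']
            return data
        except Exception:
            continue  # service is younger than this window; try fewer blocks
    return None  # no usage statistics available for any window

Calling that once per row of the CSV and writing the returned list out with the same csv.writer code would replace everything from the first try block down.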