When I query the server logs via POST on the admin page, I almost always get the JSON response below. I can't be 100% sure, but in the last 30 minutes there should be at least one debug-level log entry on one of the two machines in the site. Occasionally I get a set of logs back, but it's not consistent. I'm doing the query with Python. Has anyone else seen this behavior? Is there a maximum number of queries per minute, perhaps? I'm not running this more than once a minute, and I get the same response from both the web GUI and the Python request.
Query code:
import datetime
import json
import urllib
import urllib2

# Note: the logs query searches backward in time, so startTime is the
# most recent timestamp and endTime is the oldest.
now = datetime.datetime.now()
data = urllib.urlencode({'startTime': now.strftime('%Y-%m-%dT%H:%M:%S,%f'),
                         'endTime': (now - datetime.timedelta(seconds=1800)).strftime('%Y-%m-%dT%H:%M:%S,%f'),
                         'level': 'debug',
                         'f': 'JSON',
                         'pageSize': 1000,
                         'token': token,
                         # JSON-encode the filter so it survives urlencoding
                         # as valid JSON rather than a Python dict repr
                         'filter': json.dumps({'server': '*',
                                               'services': '*',
                                               'machines': '*'})})
request = urllib2.Request(logsURL, data, headers)
response = urllib2.urlopen(request)
Typical response:
{
"hasMore": false,
"startTime": 0,
"endTime": 0,
"logMessages": []
}
I figured it out. It's the server taking time to return the logs. If I make a query and then poll once a second for up to 10 seconds, it picks up the logs consistently.
import json
import time
import urllib2

# Retry the same request once a second, up to 10 times,
# until the server returns a non-empty set of log messages.
for _ in range(10):
    response = urllib2.urlopen(request)
    logs = json.load(response)['logMessages']
    if not logs:
        time.sleep(1)
    else:
        # some logic here
        break
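For anyone wanting to reuse this, the poll-until-non-empty pattern can be pulled out into a small helper. This is just a sketch; `poll`, `fetch`, `attempts`, and `delay` are names I made up here, not part of any server API:

```python
import time

def poll(fetch, attempts=10, delay=1):
    """Call fetch() up to `attempts` times, sleeping `delay` seconds
    after each empty result; return the first non-empty result, or
    an empty list if every attempt came back empty."""
    for _ in range(attempts):
        result = fetch()
        if result:
            return result
        time.sleep(delay)
    return []
```

You'd then pass in a function that issues the request and returns the `logMessages` list, e.g. `poll(lambda: json.load(urllib2.urlopen(request))['logMessages'])`.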