
I’m trying to use the following APIs for a data extraction pipeline with the Python SDK, grouping results at the user level...

Aggregate:

/analytics/calls/v1/accounts/~/aggregation/fetch

Timeline:

/analytics/calls/v1/accounts/~/timeline/fetch

I am struggling with both the page limit and the rate limit, and was wondering if anyone has done this before, since just one day’s worth of data seems to tip it over the edge.

What is the best way to iterate and feed the page number into the query parameters successfully?

For those APIs, “page” and “perPage” are query parameters. After calling the API, check the “paging” object in the response to decide whether there is another page to read; if there is, call the API again with the same body params and the next page index in the query params.

E.g.

# body params stay the same across pages
bodyParams = {
    'grouping': {
        'groupBy': "Users"
    },
    ...
}

# request the second page, 100 records per page
queryParams = {
    'page': 2,
    'perPage': 100
}

endpoint = '/analytics/calls/v1/accounts/~/aggregation/fetch'
resp = platform.post(endpoint, bodyParams, queryParams)

See the dev guide for more code.
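To read everything in one go, you can loop until the paging object says there are no more pages. This is only a sketch: it assumes the response body carries a "paging" object with "page" and "totalPages" fields and a "data" array of records (check the actual response shape in the dev guide), and that platform is an already-authenticated SDK instance.

import json

endpoint = '/analytics/calls/v1/accounts/~/aggregation/fetch'
bodyParams = {
    'grouping': {
        'groupBy': "Users"
    }
    # same body params on every page (time range, filters, etc.)
}

page = 1
records = []
while True:
    resp = platform.post(endpoint, bodyParams, {'page': page, 'perPage': 100})
    body = json.loads(resp.text())            # resp.text() is the raw JSON string
    records.extend(body.get('data', []))      # assumed field holding the per-user records
    paging = body.get('paging', {})
    if page >= paging.get('totalPages', page):  # assumed field; stop after the last page
        break
    page += 1

The same loop works for the timeline endpoint; only the endpoint and body params change.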

 

To detect and handle the API rate limit, please read this article and add some delay to your code. The example there is in Node JS, but you can easily use the time.sleep() method in Python to add the delay.

Here is the code showing how to read the rate-limit values from the response headers:

headers = resp.response().headers
limit = int(headers['X-Rate-Limit-Limit'])
remaining = int(headers['X-Rate-Limit-Remaining'])
window = int(headers['X-Rate-Limit-Window'])
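
For example, a simple way to use those values is to pause whenever the remaining count runs out. This is just a sketch; post_with_throttle is an illustrative helper name, not part of the SDK.

import time

def post_with_throttle(platform, endpoint, bodyParams, queryParams):
    # Make the call, then sleep out the window if the quota for it is used up
    resp = platform.post(endpoint, bodyParams, queryParams)
    headers = resp.response().headers
    remaining = int(headers.get('X-Rate-Limit-Remaining', 1))
    window = int(headers.get('X-Rate-Limit-Window', 60))
    if remaining <= 0:
        time.sleep(window)
    return resp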

 

