I believe the PyGitHub library can do this, but for the purposes of my project I am calling the GitHub API directly with the requests library, since I am caching the results of my API calls with the requests-cache library.
According to the documentation for the per_page parameter, the maximum number of releases you can retrieve per request is 100. If I am dealing with a repository that has more than 100 releases, how can I still get a list of all of them? Example code below.
import requests

ACCESS_TOKEN = '<insert token>'

headers = {
    'Authorization': 'token ' + ACCESS_TOKEN,
    'Accept': 'application/vnd.github.v3+json',
}

response = requests.get(
    'https://api.github.com/repos/{insert author}/{insert repository}/releases?per_page=100',
    headers=headers,
)
print(response.json())
CodePudding user response:
The GitHub API uses pagination to deal with large numbers of results. You can request pages other than the first by appending page=<n> to your request URL. For example:
import json

import requests

page = 1
while True:
    response = requests.get(
        f"https://api.github.com/repos/{user}/{repo}/releases?per_page=100&page={page}",
        headers={"Authorization": "token " + ACCESS_TOKEN},
    )
    releases = response.json()
    if not releases:
        # An empty page means there are no more releases to fetch.
        break
    print(json.dumps(releases, indent=2))
    page += 1
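Since you mentioned requests-cache, the same pagination loop works with a CachedSession in place of the module-level requests.get. Here is a minimal sketch that collects every page into one list; the cache name github_cache, the get_all_releases helper, and the user/repo/token parameters are placeholders of my own, not anything from your project:

import requests_cache

# CachedSession behaves like requests but transparently stores responses
# (in a local SQLite database by default for this cache name).
session = requests_cache.CachedSession('github_cache')

def get_all_releases(user, repo, token):
    """Hypothetical helper: gather releases from every page into one list."""
    releases = []
    page = 1
    while True:
        response = session.get(
            f"https://api.github.com/repos/{user}/{repo}/releases",
            params={"per_page": 100, "page": page},
            headers={"Authorization": "token " + token},
        )
        batch = response.json()
        if not batch:
            break
        releases.extend(batch)
        page += 1
    return releases

One caveat when caching a paginated endpoint: cached pages will not reflect releases published after the response was stored, so you may want to pass an expire_after value when constructing the CachedSession.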