Use GitHub throttling plugin for @octokit/rest #3983
Would be a good addition, but it will not solve the rate limit problem. May I ask what size of org / deployments you have?
@npalm I was able to solve most of the rate limiting problems, which were occurring almost every hour, by varying the scheduled lambda event. The pool docs suggest the following: This reduces the overall concurrent requests to GitHub and resolves most of the throttling issues. I think this should be added to the docs.
We have one deployment with 21 runners. Because of the resolution above, we decided to remove pools altogether, as they are expensive. Once we removed pools, we noticed that every so often a job is never allocated a runner. When looking into the logs, I see an error around the same time:
The job will now hang forever. I believe this happens because of the size of some of our workflow's matrix jobs: one workflow launches around 25 jobs in parallel, and so far we have only noticed this error for that specific workflow. The workaround for us is to ensure there is always at least one runner available, so we have to add a pool of size 1 to all our runners. Obviously this isn't ideal. I am not sure if I have any more control over how philips will process my requests. Also, see the updated overview of this issue. You can see the original 403 error was from the pool lambda, since resolved, but the new 503 is from the scale-up lambda, which makes sense since that lambda would be receiving all the parallel requests from my job matrix.
@andrewdibiasio6 the module now supports a job retry mechanism, which will solve the problem for some hanging jobs.
@npalm Yes, this would solve the issue for some hanging jobs, but a 900s upper bound for retries isn't going to help here. When throttled by GitHub, you're usually throttled for 1 hour, which means no amount of retries will help. If anything, retrying more will likely get you throttled more, since GitHub's guidance is to back off for the suggested amount of time before retrying, hence the suggestion to use the Octokit throttling plugin.
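To illustrate the back-off behavior described above: a self-contained sketch (not the module's actual code) of retrying a throttled request while honoring GitHub's `Retry-After` header, falling back to exponential back-off when the header is absent. `sendRequest` is a hypothetical function returning `{ status, headers }`.

```javascript
// Sketch of GitHub's recommended back-off: wait Retry-After seconds if the
// header is present, otherwise back off exponentially (1s, 2s, 4s, ...).
// `sendRequest` is a hypothetical async function returning { status, headers }.
async function requestWithBackoff(sendRequest, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await sendRequest();
    // 403/429 are the statuses GitHub uses for rate limiting.
    if (res.status !== 403 && res.status !== 429) {
      return res;
    }
    const retryAfter = Number(res.headers["retry-after"]);
    const delaySeconds =
      Number.isFinite(retryAfter) && retryAfter > 0
        ? retryAfter // honor the server-suggested wait
        : 2 ** attempt; // fall back to exponential back-off
    await new Promise((resolve) => setTimeout(resolve, delaySeconds * 1000));
  }
  throw new Error("rate limited: retries exhausted");
}
```

Note that with a 1-hour throttle window, even this strategy only helps for short secondary-rate-limit cool-downs, which is the point made above.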
The intent of the retry is mostly to handle messages that are missed or crossed, which leaves the pool not scaling properly. Indeed, 900s is the maximum delay for SQS. Ideas or help to make the runners more resilient are very welcome. But the tough part is that querying GitHub to find jobs will only add to the rate limit usage. Also, GitHub does not have an API to ask for the depth of the queues.
GitHub limits the number of REST API requests that you can make within a specific amount of time.
We authorize a GitHub App or OAuth app, which can then make API requests on our behalf. All of these requests count towards a rate limit of 5,000 requests per hour.
In addition to primary rate limits, GitHub enforces secondary rate limits in order to prevent abuse and keep the API available for all users.
We may encounter a secondary rate limit if we:
We are seeing many errors like:
I suggest we add the recommended throttling plugin to help with this issue, or adopt one of the other suggestions here.
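For reference, wiring the throttling plugin into `@octokit/rest` looks roughly like the sketch below. This is a configuration sketch based on the plugin's documented options, not the module's actual client setup; the auth value and log messages are placeholders, and the real module authenticates as a GitHub App.

```javascript
// Sketch: @octokit/rest with @octokit/plugin-throttling, which intercepts
// rate-limit responses and retries after the server-suggested wait.
const { Octokit } = require("@octokit/rest");
const { throttling } = require("@octokit/plugin-throttling");

const ThrottledOctokit = Octokit.plugin(throttling);

const octokit = new ThrottledOctokit({
  auth: process.env.GITHUB_TOKEN, // placeholder; the module uses App auth
  throttle: {
    onRateLimit: (retryAfter, options, octokit, retryCount) => {
      octokit.log.warn(`Rate limit hit for ${options.method} ${options.url}`);
      // Retry once after the suggested wait, then give up.
      return retryCount < 1;
    },
    onSecondaryRateLimit: (retryAfter, options, octokit) => {
      octokit.log.warn(
        `Secondary rate limit hit for ${options.method} ${options.url}`
      );
      // Do not retry automatically; let the cool-down pass.
      return false;
    },
  },
});
```

Returning `true` from a handler tells the plugin to retry after `retryAfter` seconds; returning `false` (or nothing) fails the request immediately, which is usually the right call for secondary rate limits.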