Resolved -
Between 13:25 UTC and 18:35 UTC on Dec 11th, GitHub experienced an increase in scraper activity on public parts of our website. This scraper activity caused traffic to a low-priority web request pool to grow until it exceeded the pool's total capacity, resulting in users experiencing 500 errors. In particular, this affected the Login, Logout, and Signup routes, along with less than 1% of requests from within Actions jobs. At the peak of the incident, 7.6% of login requests were impacted, which was the most significant impact of this scraping attack.
We mitigated the incident by identifying the scraping activity and blocking it. We also increased the capacity of the affected web request pool, and we moved key user login routes to higher-priority queues.
Going forward, we are working to identify this type of scraper activity more proactively and to reduce our mitigation times.
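To illustrate the failure mode and mitigation described above, here is a minimal sketch assuming a simplified model of priority-tiered request pools. The pool names, capacities, and route-to-pool mapping are hypothetical and do not reflect GitHub's actual implementation; the sketch only shows how a saturated low-priority pool surfaces 500s and how reassigning authentication routes to a higher-priority pool avoids them.

```python
# Hypothetical sketch, not GitHub's implementation: a fixed-capacity,
# low-priority worker pool saturated by scraper traffic starts rejecting
# requests (surfacing as 500s), and reclassifying auth routes to a
# higher-priority pool restores them. Requests never "complete" here;
# the model is deliberately simplified to show only the capacity effect.

from dataclasses import dataclass


@dataclass
class RequestPool:
    name: str
    capacity: int      # max concurrent requests the pool will accept
    in_flight: int = 0

    def try_accept(self) -> bool:
        """Accept the request if capacity remains, else reject it."""
        if self.in_flight >= self.capacity:
            return False
        self.in_flight += 1
        return True


# Hypothetical pool sizes and route mapping.
high_priority = RequestPool("high", capacity=1000)
low_priority = RequestPool("low", capacity=200)

ROUTE_POOL = {
    "/login": low_priority,    # before the fix: auth routes shared the low-priority pool
    "/logout": low_priority,
    "/signup": low_priority,
    "/public": low_priority,   # anonymous traffic targeted by scrapers
}


def handle(route: str) -> int:
    """Return an HTTP-style status code: 200 if the pool accepts, 500 if exhausted."""
    pool = ROUTE_POOL[route]
    return 200 if pool.try_accept() else 500


# A scraper burst fills the low-priority pool, so login requests begin failing.
for _ in range(200):
    handle("/public")
assert handle("/login") == 500

# Mitigation sketch: move key auth routes into the higher-priority pool.
for route in ("/login", "/logout", "/signup"):
    ROUTE_POOL[route] = high_priority
assert handle("/login") == 200
```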
Dec 11, 20:05 UTC
Update -
We see signs of full recovery and will post a more in-depth update soon.
Dec 11, 20:05 UTC
Update -
We are continuing to monitor and are seeing continued signs of recovery. We will post another update when we are confident that we have fully recovered.
Dec 11, 19:58 UTC
Update -
We've applied a mitigation to fix intermittent failures in anonymous requests and downloads from GitHub, including Login, Signup, Logout, and some requests from within Actions jobs. We are seeing improvements in telemetry, but we will continue to monitor for full recovery.
Dec 11, 19:04 UTC
Update -
We currently have ~7% of users experiencing errors when attempting to sign up, log in, or log out. We are deploying a change to mitigate these failures.
Dec 11, 18:47 UTC
Investigating -
We are investigating reports of degraded performance for some GitHub services.
Dec 11, 18:40 UTC