Allow configuring the chunk size for object storage to make Scaleway S3 work with large files
Problem to solve
At the moment there is no way to configure the chunk size for object storage when using S3; this option exists only for backups: gitlab_rails['backup_multipart_chunk_size'].
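For reference, a minimal sketch of how the existing backup setting is used in /etc/gitlab/gitlab.rb; the 100 MB value is only illustrative, and the setting takes a size in bytes:

# /etc/gitlab/gitlab.rb
# Existing, backup-only knob: size of each multipart upload part, in bytes.
gitlab_rails['backup_multipart_chunk_size'] = 104857600  # 100 MB per part (illustrative value)

No equivalent knob exists for the object storage connections used for artifacts, uploads, LFS, and so on.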
Because of that, Scaleway S3 cannot handle files larger than 5,000 MB: the default chunk size is 5 MB and Scaleway S3 allows at most 1,000 parts per multipart upload (5 MB × 1,000 parts = 5,000 MB).
Attempting to upload a file larger than 5,000 MB results in the following error in the Workhorse log:
{"correlation_id":"01HB8VE723NX2N6Q1YNW2E4JW8","error":"handleFileUploads: extract files from multipart:
multipart: NextPart: unexpected EOF","level":"error","method":"POST","msg":"","time":"2023-09-26T13:50:17Z","uri":"/api/v4/jobs/1971/artifacts?artifact_format=zip\u0026artifact_type=archive\u0026expire_in=1+hours"}
Proposal
Introduce the ability to modify the multipart chunk size for object storage data types, similar to backup_multipart_chunk_size.
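A minimal sketch of what this could look like in /etc/gitlab/gitlab.rb, assuming it sits alongside the existing consolidated object storage settings; the multipart_chunk_size key below is hypothetical and does not exist today, while the connection keys mirror the documented consolidated form:

# /etc/gitlab/gitlab.rb
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'endpoint' => 'https://s3.nl-ams.scw.cloud',   # example Scaleway S3-compatible endpoint
  'aws_access_key_id' => '<ACCESS_KEY>',
  'aws_secret_access_key' => '<SECRET_KEY>'
}
# Hypothetical new key, mirroring backup_multipart_chunk_size (size in bytes):
gitlab_rails['object_store']['multipart_chunk_size'] = 104857600  # 100 MB per part

With a provider limited to 1,000 parts, such as Scaleway, raising the chunk size from 5 MB to 100 MB would lift the effective upload limit from 5,000 MB to roughly 100 GB.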
A similar issue, although it concerns Cloudflare limitations: #326083.
This page may contain information related to upcoming products, features and functionality. It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.