Thursday, September 24, 2015

Is there any way to specify a max number of retries when using s3cmd?

I've looked through the usage guide as well as the config docs and I'm just not seeing it. This is the output from my bash script that uses s3cmd sync when S3 appeared to be down:

WARNING: Retrying failed request: /some/bucket/path/
WARNING: 503 (Service Unavailable): 
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /some/bucket/path/
WARNING: 503 (Service Unavailable): 
WARNING: Waiting 6 sec...
ERROR: The read operation timed out

It looks like it retries twice with exponential backoff, then fails. Surely there must be some way to explicitly specify how many times s3cmd should retry a failed network call?
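
In the meantime, the closest workaround I can think of is to cap the retries outside s3cmd by wrapping the sync call in a loop in the bash script itself. A minimal sketch, assuming a made-up MAX_ATTEMPTS variable and placeholder paths (this is not an s3cmd option):

#!/usr/bin/env bash
# Retry the whole s3cmd sync call up to MAX_ATTEMPTS times.
# MAX_ATTEMPTS and the paths below are placeholders, not s3cmd settings.

MAX_ATTEMPTS=5
attempt=1

until s3cmd sync /local/path/ s3://some-bucket/some/path/; do
    if [ "$attempt" -ge "$MAX_ATTEMPTS" ]; then
        echo "s3cmd sync failed after $MAX_ATTEMPTS attempts" >&2
        exit 1
    fi
    sleep $(( attempt * 3 ))   # simple linear backoff between whole-command retries
    attempt=$(( attempt + 1 ))
done

That still doesn't answer whether s3cmd itself exposes a retry limit, though.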
