Thursday, April 2, 2015

Most efficient way to copy http resource into AWS s3 bucket

A company has provided me with temporary access to a backup archive at a URL like



http://ift.tt/1Iu7TSL


As you can see, that file resides in their S3 bucket. I was hoping I could copy it to my own S3 bucket by doing something like this:



$ export AWS_ACCESS_KEY_ID=AKIAJEYKXMCPBZQYJYXT
$ export AWS_SECRET_ACCESS_KEY=<the matching secret key>
$ aws s3 cp s3://3e1d1268-0c97-63f9-2519-19591e8b6271/live/1427843203_backup/telcat_live_2015-03-31T23-06-43_UTC_database.sql.gz s3://ist-drupal-pantheon-managed-site-backups

A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining


They've probably configured this so that I don't have S3-protocol access to the file... (Is there anything else I might try?)
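One thing I might try first, sketched below (this assumes the 403 is about my credentials and that the object itself allows public reads, which the plain HTTP link hints it might): have the CLI make an anonymous, unsigned request:


# anonymous GET; works only if the object grants public read access
$ aws s3 cp s3://3e1d1268-0c97-63f9-2519-19591e8b6271/live/1427843203_backup/telcat_live_2015-03-31T23-06-43_UTC_database.sql.gz . --no-sign-request


Note that an unsigned request can't write to my own bucket, so this would only land the file locally; and if the HTTP link is really a presigned URL rather than a public object, this will still come back 403.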


Assuming I can't do an S3-to-S3 copy, my next best idea is to download the file to my EC2 instance and then upload it to my own S3 bucket (as a multipart upload, since the file is larger than 100 MB). Since I'll have to do this periodically for several backups, I'm wondering if anyone can think of ways to save resources (storage, throughput, etc.) on my EC2 instance. One idea is sketched below.
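For instance, here is a minimal sketch (assuming the ift.tt link redirects to a directly downloadable file, and that my awscli version supports streaming from stdin via "-"): pipe the download straight into the upload, so the archive never has to be written to the instance's disk:


# -sL follows the redirect quietly; "-" tells aws to read the object body from stdin
$ curl -sL "http://ift.tt/1Iu7TSL" | aws s3 cp - s3://ist-drupal-pantheon-managed-site-backups/telcat_live_2015-03-31T23-06-43_UTC_database.sql.gz


As I understand it, the CLI also switches to multipart automatically for large uploads; since it can't know the total size of a stdin stream up front, the --expected-size option can help it pick part sizes if the backups grow much larger.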


Thanks for any ideas.




