Create buckets
$ aws s3 mb s3://bucket-name
Remove buckets which are empty
$ aws s3 rb s3://bucket-name
Remove buckets which are non-empty
$ aws s3 rb s3://bucket-name --force
List all buckets
$ aws s3 ls
List all objects and folders (prefixes) in a bucket
$ aws s3 ls s3://bucket-name
List the objects in bucket-name/path (objects in bucket-name filtered by the prefix path)
$ aws s3 ls s3://bucket-name/path
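If you also need the objects under every nested prefix, ls accepts a --recursive flag as well (shown here with the same example bucket):
// List every object in the bucket, regardless of prefix depth
$ aws s3 ls s3://bucket-name --recursive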
Copy an object into a bucket. It grants read permissions on the object to everyone and full permissions (read, readacl, and writeacl) to the account associated with user@example.com.
$ aws s3 cp file.txt s3://my-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=user@example.com
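If all you need is to make the object publicly readable, a canned ACL is a simpler alternative. A sketch, assuming ACLs are still enabled on the bucket:
// Upload and mark the object publicly readable via a canned ACL
$ aws s3 cp file.txt s3://my-bucket/ --acl public-read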
sync command
$ aws s3 sync <source> <target> [--options]
$ aws s3 sync . s3://my-bucket/path
upload: MySubdirectory\MyFile3.txt to s3://my-bucket/path/MySubdirectory/MyFile3.txt
upload: MyFile2.txt to s3://my-bucket/path/MyFile2.txt
upload: MyFile1.txt to s3://my-bucket/path/MyFile1.txt
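sync works in either direction, so swapping source and target pulls the bucket contents down instead (a sketch reusing the same example path):
// Download s3://my-bucket/path into the current directory
$ aws s3 sync s3://my-bucket/path .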
Normally, sync only copies missing or outdated files or objects between the source and target. However, you may supply the --delete option to remove files or objects from the target that are not present in the source.
// Sync with deletion - object is deleted from bucket
$ aws s3 sync . s3://my-bucket/path --delete
delete: s3://my-bucket/path/MyFile1.txt
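Because --delete is destructive, it can be worth previewing the operation first. The s3 commands accept a --dryrun flag that prints what would be done without changing anything:
// Preview the sync, including deletions, without touching the bucket
$ aws s3 sync . s3://my-bucket/path --delete --dryrun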
The --exclude and --include options allow you to specify rules to filter the files or objects to be copied during the sync operation.
Local directory contains 3 files:
MyFile1.txt
MyFile2.rtf
MyFile88.txt
// Exclude all .txt files
$ aws s3 sync . s3://my-bucket/path --exclude '*.txt'
upload: MyFile2.rtf to s3://my-bucket/path/MyFile2.rtf

// Exclude all .txt files, then re-include those matching MyFile*.txt
$ aws s3 sync . s3://my-bucket/path --exclude '*.txt' --include 'MyFile*.txt'
upload: MyFile1.txt to s3://my-bucket/path/MyFile1.txt
upload: MyFile88.txt to s3://my-bucket/path/MyFile88.txt
upload: MyFile2.rtf to s3://my-bucket/path/MyFile2.rtf

// Re-exclude anything matching MyFile?.txt (a single character after MyFile)
$ aws s3 sync . s3://my-bucket/path --exclude '*.txt' --include 'MyFile*.txt' --exclude 'MyFile?.txt'
upload: MyFile2.rtf to s3://my-bucket/path/MyFile2.rtf
upload: MyFile88.txt to s3://my-bucket/path/MyFile88.txt
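Filters are evaluated in the order they appear, with later rules taking precedence, so a common pattern is to exclude everything and then include only what you want (sketched here for .txt files):
// Copy only the .txt files from the current directory
$ aws s3 sync . s3://my-bucket/path --exclude '*' --include '*.txt'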
The s3 command set includes cp, mv, ls, and rm, and they work in similar ways to their Unix counterparts. The following are some examples.
// Copy MyFile.txt in current directory to s3://my-bucket/path
$ aws s3 cp MyFile.txt s3://my-bucket/path/
// Move all .jpg files in s3://my-bucket/path to ./MyDirectory
$ aws s3 mv s3://my-bucket/path ./MyDirectory --exclude '*' --include '*.jpg' --recursive
// List the contents of my-bucket
$ aws s3 ls s3://my-bucket
// List the contents of path in my-bucket
$ aws s3 ls s3://my-bucket/path
// Delete s3://my-bucket/path/MyFile.txt
$ aws s3 rm s3://my-bucket/path/MyFile.txt
// Delete s3://my-bucket/path and all of its contents
$ aws s3 rm s3://my-bucket/path --recursive
When the --recursive option is used on a directory/folder with cp, mv, or rm, the command walks the directory tree, including all subdirectories.
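For example, to upload an entire local directory tree (a sketch reusing the ./MyDirectory and my-bucket names from the examples above):
// Copy ./MyDirectory and all of its subdirectories to the bucket
$ aws s3 cp ./MyDirectory s3://my-bucket/path --recursive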
// List of files in human readable form with sizes in KB/MB/GB
$ aws s3 ls s3://mybucket/path --recursive --human-readable --summarize
--human-readable displays file sizes in Bytes/KiB/MiB/GiB/TiB/PiB/EiB. --summarize displays the total number of objects and the total size at the end of the result listing.