PHP Amazon S3: avoid overwriting objects with the same name

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute the original authors (not the translator). Original: http://stackoverflow.com/questions/12654828/

Date: 2020-08-25 03:58:49  Source: igfitidea

Amazon S3 avoid overwriting objects with the same name

Tags: php, file, file-upload, amazon-s3, amazon-web-services

Asked by CyberJunkie

If I upload a file to S3 with a filename identical to that of an existing object in the bucket, it overwrites that object. What options exist to avoid overwriting files with identical filenames? I enabled versioning on my bucket, thinking it would solve the problem, but objects still appear to be overwritten.


Accepted answer by Prinzhorn

My comment from above doesn't work. I thought the WRITE ACL would apply to objects as well, but it only works on buckets.


Since you enabled versioning, your objects aren't actually overwritten. But if you don't specify a version in your GET request or URL, the latest version is returned. This means that when you put an object into S3, you need to save the version ID returned in the response in order to retrieve that specific version later.

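As a minimal sketch of that flow using the AWS SDK for PHP v3 (the bucket name, region, key, and file path below are placeholder assumptions):

```php
<?php
// Sketch: write to a versioned bucket, persist the VersionId, and
// later fetch that exact version. Requires the AWS SDK for PHP v3
// (composer require aws/aws-sdk-php) and configured credentials.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => '2006-03-01']);

// Each PUT to a versioned bucket creates a new version; the response
// carries the VersionId of the version just written.
$result = $s3->putObject([
    'Bucket' => 'my-bucket',           // placeholder
    'Key'    => 'uploads/photo.jpg',   // placeholder
    'Body'   => fopen('/tmp/photo.jpg', 'rb'),
]);
$versionId = $result['VersionId']; // persist this, e.g. in your database

// Without VersionId, GET returns the latest version; with it, the
// exact version you stored.
$object = $s3->getObject([
    'Bucket'    => 'my-bucket',
    'Key'       => 'uploads/photo.jpg',
    'VersionId' => $versionId,
]);
```

Note that versioning must already be enabled on the bucket, or the response will not include a `VersionId`.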

See Amazon S3 ACL for read-only and write-once access for more.


Answered by Ryan Parman

You can also configure an IAM user with limited permissions. Writes are still writes (i.e., updates), but using an IAM user is a best practice anyway.


The owner (i.e., your "long-term access key and secret key") always has full control unless you go completely out of your way to disable it.

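As an illustration of the limited-permissions approach, an IAM policy like the following (the bucket name is a placeholder) grants a user only put and get on a bucket's objects. Note that IAM alone cannot distinguish a first write from an overwrite of an existing key, so this limits the blast radius of a leaked credential rather than preventing overwrites outright:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```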

Answered by Patch92

Here is my suggestion, if you are using a database to store the key of every file in your S3 bucket.


Generate a random key. Try to insert the key into your DB, in a field with a UNIQUE constraint (one that allows null entries). If the insert fails, the key has already been used; repeat until you get a unique key.


Then put your file on S3 under the key you now know is unique.

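A sketch of that reserve-then-upload flow, assuming PDO with MySQL and the AWS SDK for PHP v3; the table, column, bucket, and connection details are hypothetical:

```php
<?php
// Sketch: reserve a provably unique key via a UNIQUE constraint,
// then upload to S3 under that key so nothing can be overwritten.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

function reserveUniqueKey(PDO $db): string
{
    // `files.s3_key` is assumed to carry a UNIQUE constraint, so a
    // duplicate insert throws and we simply try again with a new key.
    $stmt = $db->prepare('INSERT INTO files (s3_key) VALUES (:k)');
    while (true) {
        $key = bin2hex(random_bytes(16)); // 32 hex chars
        try {
            $stmt->execute([':k' => $key]);
            return $key; // insert succeeded, key is reserved
        } catch (PDOException $e) {
            // SQLSTATE 23000 = integrity constraint violation
            // (duplicate key); anything else is a real error.
            if ($e->getCode() !== '23000') {
                throw $e;
            }
        }
    }
}

$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$s3 = new S3Client(['region' => 'us-east-1', 'version' => '2006-03-01']);

$key = reserveUniqueKey($db);
$s3->putObject([
    'Bucket' => 'my-bucket', // placeholder
    'Key'    => $key,
    'Body'   => fopen('/tmp/upload.bin', 'rb'),
]);
```

With 16 random bytes a collision is astronomically unlikely, so the retry loop almost never runs more than once; the constraint is there as a hard guarantee rather than an expected path.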