Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not the translator). Original: http://stackoverflow.com/questions/43275575/

How to zip files in Amazon s3 Bucket and get its URL

Tags: java, spring, amazon-web-services, amazon-s3

Asked by jeff ayan

I have a bunch of files inside an Amazon S3 bucket. I want to zip those files and get the resulting archive via an S3 URL, using Java Spring.

Accepted answer by mootmoot

S3 is not a file server, nor does it offer operating system file services, such as data manipulation.

If there are many "HUGE" files, your best bet is to:

  1. Start a simple EC2 instance.
  2. Download all of those files to the EC2 instance, compress them there, and re-upload the archive back to the S3 bucket under a new object name (a sketch of this step follows the list).
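
A minimal sketch of step 2, assuming the AWS SDK for Java v1 (`com.amazonaws:aws-java-sdk-s3`); the bucket name and archive key below are hypothetical placeholders, and pagination and error handling are omitted for brevity:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class S3ZipSketch {
    public static void main(String[] args) throws Exception {
        String bucket = "my-bucket";               // hypothetical bucket name
        String zipKey = "archives/all-files.zip";  // new object name for the archive

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File zipFile = File.createTempFile("s3-archive", ".zip");
        zipFile.deleteOnExit();

        // Stream every object in the bucket into one local zip file.
        // (listObjects returns at most 1000 keys; paginate for larger buckets.)
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile))) {
            for (S3ObjectSummary summary : s3.listObjects(bucket).getObjectSummaries()) {
                zos.putNextEntry(new ZipEntry(summary.getKey()));
                try (InputStream in = s3.getObject(bucket, summary.getKey()).getObjectContent()) {
                    byte[] buf = new byte[8192];
                    int len;
                    while ((len = in.read(buf)) != -1) {
                        zos.write(buf, 0, len);
                    }
                }
                zos.closeEntry();
            }
        }

        // Re-upload the archive to the bucket under the new object name.
        s3.putObject(bucket, zipKey, zipFile);
    }
}
```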

Yes, you can use AWS Lambda to do the same thing, but Lambda is bound by a 900-second (15-minute) execution timeout, so it is recommended to allocate more RAM to boost Lambda execution performance.
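
For reference, a Lambda entry point would look like the skeleton below (assuming the `aws-lambda-java-core` library; the class name and event shape are hypothetical), with the remaining time checked explicitly:

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical handler that would wrap the same download-zip-upload logic.
public class ZipFilesHandler implements RequestHandler<Map<String, String>, String> {
    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // Bail out early rather than letting the hard 15-minute timeout
        // kill the function mid-upload.
        if (context.getRemainingTimeInMillis() < 60_000) {
            throw new IllegalStateException("Not enough time left to zip and upload");
        }
        // ... download, zip, and re-upload as in the EC2 sketch above ...
        return "ok";
    }
}
```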

Traffic from S3 to EC2 instances and other services within the same region is free.

If your main purpose is just to read those files within the same AWS region using EC2 or other services, then you don't need this extra step; just access the files directly.

Note:

It is recommended to access and share files using the AWS API. If you intend to share a file publicly, you must take the security implications seriously and impose download restrictions, because AWS traffic out to the internet is never cheap.
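
One common way to impose such a restriction is a pre-signed URL, which expires after a set time. A minimal sketch with the AWS SDK for Java v1 (the bucket and key are hypothetical placeholders):

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUrlSketch {
    public static void main(String[] args) {
        String bucket = "my-bucket";               // hypothetical bucket name
        String zipKey = "archives/all-files.zip";  // key of the uploaded archive

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The link stops working one hour from now.
        Date expiration = new Date(System.currentTimeMillis() + 3_600_000L);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucket, zipKey)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiration);

        URL url = s3.generatePresignedUrl(request);
        System.out.println("Time-limited download URL: " + url);
    }
}
```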

Answered by Frode N. Rosand

If you need individual files (objects) in S3 compressed, it is possible to do so in a roundabout way. You can define a CloudFront endpoint pointing to the S3 bucket, then let CloudFront compress the content on the way out: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

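A quick way to verify the behavior, assuming compression is enabled on the distribution (the domain below is a hypothetical placeholder). CloudFront only compresses when the client advertises support via the Accept-Encoding header:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class CloudFrontCompressionCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical CloudFront distribution in front of the S3 bucket.
        URL url = new URL("https://d1234example.cloudfront.net/data/report.json");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Without this header CloudFront serves the object uncompressed.
        conn.setRequestProperty("Accept-Encoding", "gzip");

        System.out.println("Status: " + conn.getResponseCode());
        // "gzip" here means CloudFront compressed the object on the way out.
        System.out.println("Content-Encoding: " + conn.getHeaderField("Content-Encoding"));
        conn.disconnect();
    }
}
```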