Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/3871430/
How to upload files to Amazon S3 (official SDK) that are larger than 5 MB (approx)?
Asked by InvertedAcceleration
I am using the latest version of the official Amazon S3 SDK (1.0.14.1) to create a backup tool. So far everything works correctly if the size of the file I'm uploading is below 5 MB, but when any of the files is above 5 MB the upload fails with the following exception:
System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written.
   at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
   --- End of inner exception stack trace ---
   at Amazon.S3.AmazonS3Client.ProcessRequestError(String actionName, HttpWebRequest request, WebException we, HttpWebResponse errorResponse, String requestAddr, WebHeaderCollection& respHdrs, Type t)
   at Amazon.S3.AmazonS3Client.Invoke[T](S3Request userRequest)
   at Amazon.S3.AmazonS3Client.PutObject(PutObjectRequest request)
   at BackupToolkit.S3Module.UploadFile(String sourceFileName, String destinationFileName) in W:\code\AutoBackupTool\BackupToolkit\S3Module.cs:line 88
   at BackupToolkit.S3Module.UploadFiles(String sourceDirectory) in W:\code\AutoBackupTool\BackupToolkit\S3Module.cs:line 108
Note: 5 MB is roughly the boundary of failure; it can be slightly lower, and anything higher fails.
I am assuming that the connection is timing out and the stream is being automatically closed before the file upload completes.
I've tried to find a way to set a long timeout (but I can't find the option in either AmazonS3 or AmazonS3Config).
Any ideas on how to increase the time-out (like an application-wide setting I can use), or is it unrelated to a timeout issue?
Code:
var s3Client = AWSClientFactory.CreateAmazonS3Client(AwsAccessKey, AwsSecretKey);

var putObjectRequest = new PutObjectRequest
{
    BucketName = Bucket,
    FilePath = sourceFileName,
    Key = destinationFileName,
    MD5Digest = md5Base64,
    GenerateMD5Digest = true
};

using (var upload = s3Client.PutObject(putObjectRequest)) { }
Answered by InvertedAcceleration
Updated answer:
I recently updated one of my projects that uses the Amazon AWS .NET SDK (to version 1.4.1.0) and in this version there are two improvements that did not exist when I wrote the original answer here.
- You can now set Timeout to -1 to have an infinite time limit for the put operation.
- There is now an extra property on PutObjectRequest called ReadWriteTimeout which can be set (in milliseconds) to time out at the stream read/write level, as opposed to the entire put operation level.
So my code now looks like this:
var putObjectRequest = new PutObjectRequest
{
    BucketName = Bucket,
    FilePath = sourceFileName,
    Key = destinationFileName,
    MD5Digest = md5Base64,
    GenerateMD5Digest = true,
    Timeout = -1,
    ReadWriteTimeout = 300000 // 5 minutes in milliseconds
};
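As an aside, in still later versions of the SDK (v2 and above), client-wide timeouts also became available on the configuration object itself, which answers the original "application-wide setting" question. A sketch, assuming a newer SDK (these TimeSpan-based properties do not exist in the 1.0.x series the question uses):

```csharp
// Sketch for newer SDK versions (v2+): AmazonS3Config inherits client-wide
// Timeout / ReadWriteTimeout settings from ClientConfig, so every request
// made through this client gets them. Not available in SDK 1.0.x.
var config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1,
    Timeout = TimeSpan.FromHours(1),           // overall request timeout
    ReadWriteTimeout = TimeSpan.FromMinutes(5) // per stream read/write
};
var s3Client = new AmazonS3Client(AwsAccessKey, AwsSecretKey, config);
```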
Original answer:
I managed to figure out the answer...
Before posting the question I had explored AmazonS3 and AmazonS3Config, but not PutObjectRequest.
Inside PutObjectRequest there is a Timeout property (set in milliseconds). I have successfully used this to upload the larger files (note: setting it to 0 doesn't remove the timeout; you need to specify a positive number of milliseconds... I've gone for 1 hour).
This works fine:
var putObjectRequest = new PutObjectRequest
{
    BucketName = Bucket,
    FilePath = sourceFileName,
    Key = destinationFileName,
    MD5Digest = md5Base64,
    GenerateMD5Digest = true,
    Timeout = 3600000 // 1 hour in milliseconds
};
Answered by Nick Randell
I've been having similar problems and started using the TransferUtility class to perform multipart uploads.
At the moment this code is working. I did have problems when the timeout was set too low, though!
// utility is a previously created TransferUtility instance
var request = new TransferUtilityUploadRequest()
    .WithBucketName(BucketName)
    .WithFilePath(sourceFile.FullName)
    .WithKey(key)
    .WithTimeout(100 * 60 * 60 * 1000)
    .WithPartSize(10 * 1024 * 1024)
    .WithSubscriber((src, e) =>
    {
        Console.CursorLeft = 0;
        Console.Write("{0}: {1} of {2} ", sourceFile.Name, e.TransferredBytes, e.TotalBytes);
    });

utility.Upload(request);
As I'm typing this, I have a 4GB upload taking place and it's already got further through than ever before!
Answered by Malik Khalil
The AWS SDK for .NET has two main APIs for working with Amazon S3. Both are able to upload large and small files to S3.
1. Low-level API:
The low-level API uses the same pattern used for other service low-level APIs in the SDK. There is a client object called AmazonS3Client that implements the IAmazonS3 interface. It contains methods for each of the service operations exposed by S3.
Namespace: Amazon.S3, Amazon.S3.Model
// Step 1 : Create the client configuration
AmazonS3Config s3Config = new AmazonS3Config();
s3Config.RegionEndpoint = GetRegionEndPoint();

// Step 2 : Create the client
using (var client = new AmazonS3Client(My_AWSAccessKey, My_AWSSecretKey, s3Config))
{
    // Step 3 : Build the put request
    PutObjectRequest request = new PutObjectRequest();
    request.Key = My_key;
    request.InputStream = My_fileStream;
    request.BucketName = My_BucketName;

    // Step 4 : Finally place the object in S3
    client.PutObject(request);
}
2. TransferUtility: (I would recommend using this API)
The TransferUtility runs on top of the low-level API. For putting and getting objects into S3, it is a simple interface for handling the most common uses of S3. The biggest benefit comes with putting objects. For example, TransferUtility detects if a file is large and switches into multipart upload mode.
Namespace: Amazon.S3.Transfer
// Step 1 : Create the "Transfer Utility" (replacement for the old "Transfer Manager")
TransferUtility fileTransferUtility =
    new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));

// Step 2 : Create the request object
TransferUtilityUploadRequest uploadRequest =
    new TransferUtilityUploadRequest
    {
        BucketName = My_BucketName,
        FilePath = My_filePath,
        Key = My_keyName
    };

// Step 3 : Event handler that will be automatically called on each transferred byte
uploadRequest.UploadProgressEvent +=
    new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);

static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
{
    Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}

// Step 4 : Hit upload and send data to S3
fileTransferUtility.Upload(uploadRequest);
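The point at which TransferUtility switches into multipart mode can itself be tuned via a TransferUtilityConfig passed to the constructor. A hedged sketch (the exact property names below, MinSizeBeforePartUpload and ConcurrentServiceRequests, should be verified against your SDK version's Amazon.S3.Transfer documentation):

```csharp
// Sketch: tune TransferUtility's multipart behaviour. Property names are
// assumptions based on later SDK versions; verify against your SDK docs.
var transferConfig = new TransferUtilityConfig
{
    MinSizeBeforePartUpload = 16 * 1024 * 1024, // use multipart above 16 MB
    ConcurrentServiceRequests = 5               // parts uploaded in parallel
};

var tunedTransferUtility = new TransferUtility(
    new AmazonS3Client(Amazon.RegionEndpoint.USEast1),
    transferConfig);
```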
Answered by user369142
Nick Randell has the right idea on this. Further to his post, here's another example with some alternative event handling, and a method to get the percentage completed for the uploaded file:
private static string WritingLargeFile(AmazonS3 client, int mediaId, string bucketName, string amazonKey, string fileName, string fileDesc, string fullPath)
{
    try
    {
        Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtilityUploadRequest");
        var request = new TransferUtilityUploadRequest()
            .WithBucketName(bucketName)
            .WithKey(amazonKey)
            .WithMetadata("fileName", fileName)
            .WithMetadata("fileDesc", fileDesc)
            .WithCannedACL(S3CannedACL.PublicRead)
            .WithFilePath(fullPath)
            .WithTimeout(100 * 60 * 60 * 1000) // 100 hours in milliseconds
            .WithPartSize(5 * 1024 * 1024);    // upload in 5 MB pieces
        request.UploadProgressEvent += new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);

        Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtility");
        TransferUtility fileTransferUtility = new TransferUtility(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"]);

        Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Start Upload");
        fileTransferUtility.Upload(request);
        return amazonKey;
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        if (amazonS3Exception.ErrorCode != null &&
            (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
             amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
        {
            Log.Add(LogTypes.Debug, mediaId, "Please check the provided AWS Credentials.");
        }
        else
        {
            Log.Add(LogTypes.Debug, mediaId, String.Format("An error occurred with the message '{0}' when writing an object", amazonS3Exception.Message));
        }
        return String.Empty; // Failed
    }
}
private static Dictionary<string, int> uploadTracker = new Dictionary<string, int>();

static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
{
    TransferUtilityUploadRequest req = sender as TransferUtilityUploadRequest;
    if (req != null)
    {
        string fileName = req.FilePath.Split('\\').Last();
        if (!uploadTracker.ContainsKey(fileName))
            uploadTracker.Add(fileName, e.PercentDone);

        // When the percentage done changes, add a log entry:
        if (uploadTracker[fileName] != e.PercentDone)
        {
            uploadTracker[fileName] = e.PercentDone;
            Log.Add(LogTypes.Debug, 0, String.Format("WritingLargeFile progress: {1} of {2} ({3}%) for file '{0}'", fileName, e.TransferredBytes, e.TotalBytes, e.PercentDone));
        }
    }
}

public static int GetAmazonUploadPercentDone(string fileName)
{
    if (!uploadTracker.ContainsKey(fileName))
        return 0;
    return uploadTracker[fileName];
}
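To show how GetAmazonUploadPercentDone might be consumed, here is a minimal, hypothetical polling loop; the file names, bucket name, and one-second interval are placeholders, not part of the original answer:

```csharp
// Hypothetical caller: run the upload on a background task and poll the
// tracker until the task completes. All names here are illustrative.
var uploadTask = System.Threading.Tasks.Task.Run(() =>
    WritingLargeFile(client, 1, "my-bucket", "backups/big.zip",
                     "big.zip", "nightly backup", @"d:\big.zip"));

while (!uploadTask.IsCompleted)
{
    Console.WriteLine("Progress: {0}%", GetAmazonUploadPercentDone("big.zip"));
    System.Threading.Thread.Sleep(1000); // poll once per second
}
```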
Answered by EKanadily
See this topic here: How to upload a file to Amazon S3 super easy using C#, including a demo project to download. It is high level, using AWS SDK .NET 3.5 (and higher), and can be utilised using the following code:
// preparing our file and directory names
string fileToBackup = @"d:\mybackupFile.zip";  // test file
string myBucketName = "mys3bucketname";        // your S3 bucket name goes here
string s3DirectoryName = "justdemodirectory";
string s3FileName = @"mybackupFile uploaded in 12-9-2014.zip";

AmazonUploader myUploader = new AmazonUploader();
myUploader.sendMyFileToS3(fileToBackup, myBucketName, s3DirectoryName, s3FileName);