JavaScript: How to copy/move all objects in Amazon S3 from one prefix to another using the AWS SDK for Node.js
Disclaimer: this page is an English-Chinese parallel translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same CC BY-SA license, keep the original link and author information, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/30959251/
How to copy/move all objects in Amazon S3 from one prefix to another using the AWS SDK for Node.js
Asked by Yousaf
How do I copy all objects from one prefix to another? I have tried every way I could find to copy all the objects under one prefix to another in one shot, but the only approach that seems to work is looping over a list of objects and copying them one by one. This is really inefficient. If I have hundreds of files in a folder, will I have to make hundreds of calls?
var params = {
  Bucket: bucket,
  CopySource: bucket + '/' + oldDirName + '/filename.txt',
  Key: newDirName + '/filename.txt',
};

s3.copyObject(params, function(err, data) {
  if (err) {
    callback.apply(this, [{
      type: "error",
      message: "Error while renaming Directory",
      data: err
    }]);
  } else {
    callback.apply(this, [{
      type: "success",
      message: "Directory renamed successfully",
      data: data
    }]);
  }
});
Accepted answer by Aditya Manohar
You will need to make one AWS.S3.listObjects() call to list your objects with a specific prefix. But you are correct that you will need to make one call for every object that you want to copy from one bucket/prefix to the same or another bucket/prefix.
You can also use a utility library like async to manage your requests.
var AWS = require('aws-sdk');
var async = require('async');

var bucketName = 'foo';
var oldPrefix = 'abc/';
var newPrefix = 'xyz/';

var s3 = new AWS.S3({params: {Bucket: bucketName}, region: 'us-west-2'});

var done = function(err, data) {
  if (err) console.log(err);
  else console.log(data);
};

s3.listObjects({Prefix: oldPrefix}, function(err, data) {
  if (data.Contents.length) {
    async.each(data.Contents, function(file, cb) {
      var params = {
        Bucket: bucketName,
        CopySource: bucketName + '/' + file.Key,
        Key: file.Key.replace(oldPrefix, newPrefix)
      };
      s3.copyObject(params, function(copyErr, copyData) {
        if (copyErr) {
          console.log(copyErr);
        } else {
          console.log('Copied: ', params.Key);
          cb();
        }
      });
    }, done);
  }
});
Hope this helps!
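If the prefix holds a large number of objects, firing every copy request at once can run into connection or throttling limits. One variation (a sketch only, reusing the requires and variables from the snippet above; the concurrency value of 10 is an arbitrary choice) is to swap async.each for async.eachLimit so that only a handful of copyObject calls run at the same time:

// Same listing as above, but cap concurrency at 10 parallel copyObject calls.
s3.listObjects({Prefix: oldPrefix}, function(err, data) {
  if (err) return done(err);
  if (!data.Contents.length) return done(null, 'nothing to copy');

  async.eachLimit(data.Contents, 10, function(file, cb) {
    var params = {
      Bucket: bucketName,
      CopySource: bucketName + '/' + file.Key,
      Key: file.Key.replace(oldPrefix, newPrefix)
    };
    // Pass any copy error to cb so the final callback is always reached.
    s3.copyObject(params, function(copyErr) {
      cb(copyErr);
    });
  }, done);
});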
Answered by Peter Peng
Here is a code snippet that does it the async/await way:
const AWS = require('aws-sdk');

AWS.config.update({
  credentials: new AWS.Credentials(....), // credential parameters
});
AWS.config.setPromisesDependency(require('bluebird'));

const s3 = new AWS.S3();

... ...

const bucketName = 'bucketName';                // example bucket
const folderToMove = 'folderToMove/';           // old folder name
const destinationFolder = 'destinationFolder/'; // new destination folder

try {
  const listObjectsResponse = await s3.listObjects({
    Bucket: bucketName,
    Prefix: folderToMove,
    Delimiter: '/',
  }).promise();

  const folderContentInfo = listObjectsResponse.Contents;
  const folderPrefix = listObjectsResponse.Prefix;

  await Promise.all(
    folderContentInfo.map(async (fileInfo) => {
      await s3.copyObject({
        Bucket: bucketName,
        CopySource: `${bucketName}/${fileInfo.Key}`, // old file Key
        // destinationFolder already ends with '/', so no extra slash is added here
        Key: `${destinationFolder}${fileInfo.Key.replace(folderPrefix, '')}`, // new file Key
      }).promise();

      await s3.deleteObject({
        Bucket: bucketName,
        Key: fileInfo.Key,
      }).promise();
    })
  );
} catch (err) {
  console.error(err); // error handling
}
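Since await is only valid inside an async function, the "... ..." above presumably sits inside one. As a usage sketch (the moveFolder wrapper name and its parameters are hypothetical, not part of the original answer), the same calls can be wrapped so that the caller decides how to handle errors:

const AWS = require('aws-sdk');
const s3 = new AWS.S3(); // assumes credentials are already configured, as in the snippet above

// Hypothetical wrapper around the same listObjects/copyObject/deleteObject calls as above.
async function moveFolder(bucketName, folderToMove, destinationFolder) {
  const listObjectsResponse = await s3.listObjects({
    Bucket: bucketName,
    Prefix: folderToMove,
    Delimiter: '/',
  }).promise();

  await Promise.all(
    listObjectsResponse.Contents.map(async (fileInfo) => {
      await s3.copyObject({
        Bucket: bucketName,
        CopySource: `${bucketName}/${fileInfo.Key}`,
        Key: `${destinationFolder}${fileInfo.Key.replace(listObjectsResponse.Prefix, '')}`,
      }).promise();
      await s3.deleteObject({ Bucket: bucketName, Key: fileInfo.Key }).promise();
    }),
  );
}

// Usage: any listing, copy or delete error now surfaces in the caller's catch.
moveFolder('bucketName', 'folderToMove/', 'destinationFolder/')
  .then(() => console.log('move finished'))
  .catch((err) => console.error(err));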
Answered by erwinkarim
A further update to the original code that copies folders recursively. One limitation is that the code does not handle more than 1,000 objects per prefix (see the pagination sketch after the code), and of course recursion depth can become an issue if your folders are nested very deeply.
import AWS from 'aws-sdk';

AWS.config.update({ region: 'ap-southeast-1' });

/**
 * Copy an S3 "folder" (prefix), including its sub-folders.
 * @param {string} bucket the bucket name
 * @param {string} source the source prefix, must end with '/'
 * @param {string} dest the destination prefix, must end with '/'
 * @returns {Promise} resolves when the copy has finished
 */
export default async function s3CopyFolder(bucket, source, dest) {
  // sanity check: source and dest must end with '/'
  if (!source.endsWith('/') || !dest.endsWith('/')) {
    return Promise.reject(new Error('source or dest must end with a forward slash'));
  }

  const s3 = new AWS.S3();

  // list the source prefix; sub-folders come back as CommonPrefixes
  const listResponse = await s3.listObjectsV2({
    Bucket: bucket,
    Prefix: source,
    Delimiter: '/',
  }).promise();

  // copy the objects directly under the source prefix
  await Promise.all(
    listResponse.Contents.map(async (file) => {
      await s3.copyObject({
        Bucket: bucket,
        CopySource: `${bucket}/${file.Key}`,
        Key: `${dest}${file.Key.replace(listResponse.Prefix, '')}`,
      }).promise();
    }),
  );

  // recursively copy the sub-folders
  await Promise.all(
    listResponse.CommonPrefixes.map(async (folder) => {
      await s3CopyFolder(
        bucket,
        `${folder.Prefix}`,
        `${dest}${folder.Prefix.replace(listResponse.Prefix, '')}`,
      );
    }),
  );

  return Promise.resolve('ok');
}
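To work around the 1,000-objects-per-page limit mentioned above, one option is to follow the continuation tokens that listObjectsV2 returns. The sketch below is not part of the original answer (listAllKeys is a hypothetical helper name); it collects every key under a prefix before any copying starts:

// Sketch: collect all keys under a prefix, following continuation tokens.
async function listAllKeys(s3, bucket, prefix) {
  const keys = [];
  let continuationToken;
  do {
    // Each call returns at most 1,000 keys plus a token for the next page.
    const page = await s3.listObjectsV2({
      Bucket: bucket,
      Prefix: prefix,
      Delimiter: '/',
      ContinuationToken: continuationToken,
    }).promise();
    page.Contents.forEach((item) => keys.push(item.Key));
    continuationToken = page.IsTruncated ? page.NextContinuationToken : undefined;
  } while (continuationToken);
  return keys;
}

The recursive copy above would also need to accumulate CommonPrefixes across pages before recursing; that part is left out here for brevity.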
Answered by Guppie70
A small change to Aditya Manohar's code that improves the error handling around the s3.copyObject call and actually completes the "move" by removing the source files once the copy requests have finished:
const AWS = require('aws-sdk');
const async = require('async');

const bucketName = 'foo';
const oldPrefix = 'abc/';
const newPrefix = 'xyz/';

const s3 = new AWS.S3({
  params: {
    Bucket: bucketName
  },
  region: 'us-west-2'
});

// 1) List all the objects in the source "directory"
s3.listObjects({
  Prefix: oldPrefix
}, function (err, data) {
  if (data.Contents.length) {
    // Build up the parameters for the delete statement
    let paramsS3Delete = {
      Bucket: bucketName,
      Delete: {
        Objects: []
      }
    };

    // Collect every key returned by listObjects so that all of them can be
    // removed in a single deleteObjects call after the copies have finished
    data.Contents.forEach(function (content) {
      paramsS3Delete.Delete.Objects.push({
        Key: content.Key
      });
    });

    // 2) Copy all the source files to the destination
    async.each(data.Contents, function (file, cb) {
      var params = {
        CopySource: bucketName + '/' + file.Key,
        Key: file.Key.replace(oldPrefix, newPrefix)
      };
      s3.copyObject(params, function (copyErr, copyData) {
        if (copyErr) {
          console.log(copyErr);
        } else {
          console.log('Copied: ', params.Key);
        }
        cb();
      });
    }, function (asyncError, asyncData) {
      // All the copy requests have finished
      if (asyncError) {
        return console.log(asyncError);
      } else {
        console.log(asyncData);
        // 3) Now remove the source files - that way we have effectively moved the content
        s3.deleteObjects(paramsS3Delete, (deleteError, deleteData) => {
          if (deleteError) return console.log(deleteError);
          return console.log(deleteData);
        });
      }
    });
  }
});
Note that I have moved the cb() callback outside the if-then-else block. That way, even when an error occurs, the async module will still fire the done() function.
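An alternative design, based on how the async library's iteratee callbacks are documented to work, is to pass the copy error to cb(copyErr) instead of only logging it: async.each then calls the final callback immediately with that error, so the deleteObjects step is skipped whenever any copy fails. A rough sketch of that variation of the copy loop, reusing the variables from the answer above:

async.each(data.Contents, function (file, cb) {
  var params = {
    Bucket: bucketName,
    CopySource: bucketName + '/' + file.Key,
    Key: file.Key.replace(oldPrefix, newPrefix)
  };
  s3.copyObject(params, function (copyErr) {
    // Passing copyErr (or null) through lets async.each decide whether to abort early.
    cb(copyErr);
  });
}, function (asyncError) {
  if (asyncError) return console.log(asyncError);
  // Only remove the source objects once every copy has succeeded.
  s3.deleteObjects(paramsS3Delete, (deleteError, deleteData) => {
    if (deleteError) return console.log(deleteError);
    return console.log(deleteData);
  });
});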