Delete Amazon S3 buckets?

I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in the popup, and... nothing happens. Is there another tool I should be using?

Solution

Answer

I've always ended up using their C API and little scripts to do this. I'm not sure why S3Fox can't do it, but that functionality appears to be broken in it at the moment. I'm sure plenty of the other S3 tools can do it as well.

Answer

Delete all of the objects in the bucket first. Then you can delete the bucket itself.

Apparently you cannot delete a bucket that still contains objects, and S3Fox will not do this for you.
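A minimal sketch of that two-step sequence, using Python with boto3 (boto3 postdates these answers and is my assumption here, as is the placeholder bucket name):

import boto3

# Placeholder bucket name; credentials come from the usual AWS config/environment.
s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')

# Step 1: delete every object in the bucket.
bucket.objects.all().delete()

# Step 2: the bucket is now empty, so it can be deleted.
bucket.delete()

If versioning has ever been enabled on the bucket, the leftover object versions also have to be removed (bucket.object_versions.all().delete() in boto3) before the bucket delete will succeed.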

I've had other small issues with S3Fox myself, like this one, and now use the Java-based tool jets3t, which handles error conditions better. There must be others as well.

Answer

This may be a bug in S3Fox, since it is generally able to delete items recursively. However, I'm not sure whether I've ever tried to delete a whole bucket and its contents at once.

As Stu mentioned, the JetS3t project includes a Java GUI applet, Cockpit, that you can easily run in a browser to manage your S3 buckets. It has both strengths and weaknesses compared to S3Fox, but there's a good chance it will deal with your troublesome bucket. It will, however, require you to delete the objects first, then the bucket.

Disclaimer: I'm the author of JetS3t and Cockpit

Answer

SpaceBlock also makes it simple to delete s3 buckets: right-click the bucket, delete, wait for the job to complete in the transfers view, done.

This is the free and open source Windows s3 front-end that I maintain, so shameless plug alert, etc.

Answer

You have to make sure you have the correct write permissions set on the bucket, and that the bucket contains no objects.
Some useful tools that can help with the deletion: CrossFTP, which lets you view and delete buckets like an FTP client, and the jets3t tools mentioned above.

Answer

Using s3cmd:
Create a new empty directory
s3cmd sync --delete-removed empty_directory s3://yourbucket

Answer

I guess the easiest way would be to use S3fm, a free online file manager for Amazon S3. No applications to install, no third-party web site registrations. It runs directly from Amazon S3, secure and convenient.

Just select your bucket and hit delete.

Answer

One thing to remember is that S3 buckets must be empty before they can be deleted. The good news is that most third-party tools automate this process. If you are running into problems with S3Fox, I suggest trying S3FM for a GUI or S3Sync for the command line. There's a great article on Amazon describing how to use S3Sync. After setting up your variables, the key command is

./s3cmd.rb deleteall <your bucket name>

Deleting buckets with lots of individual files tends to crash a lot of S3 tools, because they try to display a list of all the files in the directory. You need a way to delete in batches. The best GUI tool I've found for this purpose is Bucket Explorer. It deletes the files in an S3 bucket in chunks of 1000, and it doesn't crash when trying to open large buckets the way s3Fox and S3FM do.
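That chunked approach is also easy to script. Here is a hedged sketch in Python with boto3 (my assumption, along with the placeholder bucket name) that lists keys page by page and removes each page with a single bulk-delete request, which S3 caps at 1000 keys:

import boto3

client = boto3.client('s3')
bucket_name = 'your-bucket-name'  # placeholder

# list_objects_v2 returns at most 1000 keys per page, which maps neatly
# onto delete_objects, whose limit is also 1000 keys per request.
paginator = client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket_name):
    contents = page.get('Contents', [])
    if not contents:
        continue
    client.delete_objects(
        Bucket=bucket_name,
        Delete={'Objects': [{'Key': obj['Key']} for obj in contents]},
    )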

I've also found a few scripts that you can use for this. I haven't tried these scripts yet, but they look pretty straightforward.

Ruby

require 'aws/s3'

AWS::S3::Base.establish_connection!(
:access_key_id => 'your access key',
:secret_access_key => 'your secret key'
)

bucket = AWS::S3::Bucket.find('the bucket name')

while(!bucket.empty?)
  begin
    puts "Deleting objects in bucket"

    bucket.objects.each do |object|
      object.delete
      puts "There are #{bucket.objects.size} objects left in the bucket"
    end

    puts "Done deleting objects"

  rescue SocketError
    puts "Had socket error"
  end

end

Perl

#!/usr/bin/perl
use Net::Amazon::S3;
my $aws_access_key_id = 'your access key';
my $aws_secret_access_key = 'your secret access key';
my $increment = 50; # 50 at a time
my $bucket_name = 'bucket_name';

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, });
my $bucket = $s3->bucket($bucket_name);

print "Incrementally deleting the contents of $bucket_name\n";

my $deleted = 1;
my $total_deleted = 0;
while ($deleted > 0) {
    print "Loading up to $increment keys...\n";
    $response = $bucket->list({'max-keys' => $increment, }) or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} });
    $total_deleted += $deleted;
    print "Deleting $deleted keys($total_deleted total)...\n";
    foreach my $key ( @{ $response->{keys} } ) {
        my $key_name = $key->{key};
        $bucket->delete_key($key->{key}) or die $s3->err . ": " . $s3->errstr . "\n";
    }
}
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";

Source: Tarkblog

Hope this helps!

Answer

I wrote a script in Python to do this, and it successfully removed 9000 of my objects. See this page:

https://efod.se/blog/archive/2009/08/09/delete-s3-bucket

Answer

I'll have to take a look at some of these alternative file managers. I have used (and liked) BucketExplorer, which you can get from http://www.bucketexplorer.com/.

It's a 30-day free trial, then (currently) US$49.99 per license (US$49.95 on the purchase cover page).

Answer

Another shameless plug: I got tired of waiting for individual HTTP delete requests when I had to delete 250,000 items, so I wrote a Ruby script that does it multithreaded and finishes in a fraction of the time:

http://github.com/sfeley/s3nuke/

It runs much faster in Ruby 1.9 because of the way threads are handled.
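The idea behind s3nuke, keeping many DELETE requests in flight at once instead of issuing them one at a time, can be sketched with a thread pool. This is an illustration of the technique in Python with boto3, not the s3nuke code itself, and the bucket name is a placeholder:

from concurrent.futures import ThreadPoolExecutor
import boto3

bucket_name = 'your-bucket-name'  # placeholder
client = boto3.client('s3')       # boto3 clients can be shared across threads

def delete_key(key):
    # One HTTP DELETE per object; running them concurrently hides the
    # per-request latency that makes serial deletion so slow.
    client.delete_object(Bucket=bucket_name, Key=key)

def all_keys():
    paginator = client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get('Contents', []):
            yield obj['Key']

with ThreadPoolExecutor(max_workers=20) as pool:
    # Consume the results so any worker exception surfaces here.
    list(pool.map(delete_key, all_keys()))

client.delete_bucket(Bucket=bucket_name)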

Answer

The latest version of s3cmd has --recursive

e.g.,

~/$ s3cmd rb --recursive s3://bucketwithfiles

http://s3tools.org/kb/item5.htm

Answer

Try https://s3explorer.appspot.com/ to manage your S3 account.

Answer

This is what I use. Just simple ruby code.

case bucket.size
  when 0
    puts "Nothing left to delete"
  when 1..1000
    bucket.objects.each do |item|
      item.delete
      puts "Deleting - #{bucket.size} left"        
    end
end

Answer

This is a hard problem. My solution is at http://stuff.mit.edu/~jik/software/delete-s3-bucket.pl.txt. It describes, in a comment at the top, everything I've figured out that can go wrong. Here's the current version of the script (if I change it, I'll put the new version at that URL, but probably not here).

#!/usr/bin/perl

# Copyright (c) 2010 Jonathan Kamens.
# Released under the GNU General Public License, Version 3.
# See <http://www.gnu.org/licenses/>.

# $Id: delete-s3-bucket.pl,v 1.3 2010/10/17 03:21:33 jik Exp $

# Deleting an Amazon S3 bucket is hard.
#
# * You can't delete the bucket unless it is empty.
#
# * There is no API for telling Amazon to empty the bucket, so you have to
# delete all of the objects one by one yourself.
#
# * If you've recently added a lot of large objects to the bucket, then they
# may not all be visible yet on all S3 servers. This means that even after the
# server you're talking to thinks all the objects are all deleted and lets you
# delete the bucket, additional objects can continue to propagate around the S3
# server network. If you then recreate the bucket with the same name, those
# additional objects will magically appear in it!
# 
# It is not clear to me whether the bucket delete will eventually propagate to
# all of the S3 servers and cause all the objects in the bucket to go away, but
# I suspect it won't. I also suspect that you may end up continuing to be
# charged for these phantom objects even though the bucket they're in is no
# longer even visible in your S3 account.
#
# * If there's a CR, LF, or CRLF in an object name, then it's sent just that
# way in the XML that gets sent from the S3 server to the client when the
# client asks for a list of objects in the bucket. Unfortunately, the XML
# parser on the client will probably convert it to the local line ending
# character, and if it's different from the character that's actually in the
# object name, you then won't be able to delete it. Ugh! This is a bug in the
# S3 protocol; it should be enclosing the object names in CDATA tags or
# something to protect them from being munged by the XML parser.
#
# Note that this bug even affects the AWS Web Console provided by Amazon!
#
# * If you've got a whole lot of objects and you serialize the delete process,
# it'll take a long, long time to delete them all.

use threads;
use strict;
use warnings;

# Keys can have newlines in them, which screws up the communication
# between the parent and child processes, so use URL encoding to deal
# with that. 
use CGI qw(escape unescape); # Easiest place to get this functionality.
use File::Basename;
use Getopt::Long;
use Net::Amazon::S3;

my $whoami = basename $0;
my $usage = "Usage: $whoami [--help] --access-key-id=id --secret-access-key=key --bucket=name [--processes=#] [--wait=#] [--nodelete] Specify --processes to indicate how many deletes to perform in parallel. You're limited by RAM (to hold the parallel threads) and bandwidth for the S3 delete requests. Specify --wait to indicate seconds to require the bucket to be verified empty. This is necessary if you create a huge number of objects and then try to delete the bucket before they've all propagated to all the S3 servers (I've seen a huge backlog of newly created objects take *hours* to propagate everywhere). See the comment at the top of the script for more information about this issue. Specify --nodelete to empty the bucket without actually deleting it.\n";

my($aws_access_key_id, $aws_secret_access_key, $bucket_name, $wait);
my $procs = 1;
my $delete = 1;

die if (! GetOptions(
    "help" => sub { print $usage; exit; },
    "access-key-id=s" => \$aws_access_key_id,
    "secret-access-key=s" => \$aws_secret_access_key,
    "bucket=s" => \$bucket_name,
    "processess=i" => \$procs,
    "wait=i" => \$wait,
    "delete!" => \$delete,
));
die if (! ($aws_access_key_id && $aws_secret_access_key && $bucket_name));

my $increment = 0;

print "Incrementally deleting the contents of $bucket_name\n";

$| = 1;

my(@procs, $current);

for (1..$procs) {
    my($read_from_parent, $write_to_child);
    my($read_from_child, $write_to_parent);
    pipe($read_from_parent, $write_to_child) or die;
    pipe($read_from_child, $write_to_parent) or die;
    threads->create(sub {
        close($read_from_child);
        close($write_to_child);
        my $old_select = select $write_to_parent;
        $| = 1;
        select $old_select;
        &child($read_from_parent, $write_to_parent);
    }) or die;
    close($read_from_parent);
    close($write_to_parent);
    my $old_select = select $write_to_child;
    $| = 1;
    select $old_select;
    push(@procs, [$read_from_child, $write_to_child]);
}

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, });

my $bucket = $s3->bucket($bucket_name);

my $deleted = 1;
my $total_deleted = 0;
my $last_start = time;
my($start, $waited);

while ($deleted > 0) {
    $start = time;
    print "\nLoading ", ($increment ? "up to $increment" : "as many as possible"), " keys...\n";
    my $response = $bucket->list({$increment ? ('max-keys' => $increment) : ()}) or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} });
    if (! $deleted) {
        if ($wait and ! $waited) {
            my $delta = $wait - ($start - $last_start);
            if ($delta > 0) {
                print "Waiting $delta second(s) to confirm bucket is empty\n";
                sleep($delta);
                $waited = 1;
                $deleted = 1;
                next;
            }
            else {
                last;
            }
        }
        else {
            last;
        }
    }
    else {
        $waited = undef;
    }
    $total_deleted += $deleted;
    print "\nDeleting $deleted keys($total_deleted total)...\n";
    $current = 0;
    foreach my $key ( @{ $response->{keys} } ) {
        my $key_name = $key->{key};
        while (! &send(escape($key_name) . "\n")) {
            print "Thread $current died\n";
            die "No threads left\n" if (@procs == 1);
            if ($current == @procs-1) {
                pop @procs;
                $current = 0;
            }
            else {
                $procs[$current] = pop @procs;
            }
        }
        $current = ($current + 1) % @procs;
        threads->yield();
    }
    print "Sending sync message\n";
    for ($current = 0; $current < @procs; $current++) {
        if (! &send("\n")) {
            print "Thread $current died sending sync\n";
            if ($current = @procs-1) {
                pop @procs;
                last;
            }
            $procs[$current] = pop @procs;
            $current--;
        }
        threads->yield();
    }
    print "Reading sync response\n";
    for ($current = 0; $current < @procs; $current++) {
        if (! &receive()) {
            print "Thread $current died reading sync\n";
            if ($current = @procs-1) {
                pop @procs;
                last;
            }
            $procs[$current] = pop @procs;
            $current--;
        }
        threads->yield();
    }
}
continue {
    $last_start = $start;
}

if ($delete) {
    print "Deleting bucket...\n";
    $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
    print "Done.\n";
}

sub send {
    my($str) = @_;
    my $fh = $procs[$current]->[1];
    print($fh $str);
}

sub receive {
    my $fh = $procs[$current]->[0];
    scalar <$fh>;
}

sub child {
    my($read, $write) = @_;
    threads->detach();
    my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, });
    my $bucket = $s3->bucket($bucket_name);
    while (my $key = <$read>) {
        if ($key eq "\n") {
            print($write "\n") or die;
            next;
        }
        chomp $key;
        $key = unescape($key);
        if ($key =~ /[\r\n]/) {
            my(@parts) = split(/\r\n|\r|\n/, $key, -1);
            my(@guesses) = shift @parts;
            foreach my $part (@parts) {
                @guesses = (map(($_ . "\r\n" . $part, $_ . "\r" . $part, $_ . "\n" . $part), @guesses));
            }
            foreach my $guess (@guesses) {
                if ($bucket->get_key($guess)) {
                    $key = $guess;
                    last;
                }
            }
        }
        $bucket->delete_key($key) or die $s3->err . ": " . $s3->errstr . "\n";
        print ".";
        threads->yield();
    }
    return;
}

Answer

One technique that can be used to avoid this problem is putting all the objects in a "folder" within the bucket, which lets you just delete the folder and then go ahead and delete the bucket. In addition, the s3cmd tool available from http://s3tools.org can be used to delete a bucket with files in it:

s3cmd rb --force s3://bucket-name
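A hedged sketch of that "folder" trick in Python with boto3 (the bucket name and folder prefix are placeholders): deleting every key under the prefix is what "deleting the folder" amounts to, after which the now-empty bucket can itself be removed.

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')  # placeholder

# "Deleting the folder" means deleting every object under its key prefix.
bucket.objects.filter(Prefix='myfolder/').delete()

# With the folder gone, the bucket is empty and can be deleted.
bucket.delete()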