Fastest way to download 3 million objects from an S3 bucket on Linux

Disclaimer: this page is based on a popular Stack Overflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/4720735/


Fastest way to download 3 million objects from a S3 bucket

python, linux, amazon-s3, boto, eventlet

Asked by Jagtesh Chadha

I've tried using Python + boto + multiprocessing, S3cmd, and J3tset, but I'm struggling with all of them.

Any suggestions, perhaps a ready-made script you've been using or another way I don't know of?

EDIT:

eventlet + boto is a worthwhile solution, as mentioned below. I found a good eventlet reference article here: http://web.archive.org/web/20110520140439/http://teddziuba.com/2010/02/eventlet-asynchronous-io-for-g.html

I've added the Python script I'm using right now below.

Accepted answer by Jagtesh Chadha

Okay, I figured out a solution based on @Matt Billenstein's hint. It uses the eventlet library. The first step is the most important one here (monkey patching the standard IO libraries).

Run this script in the background with nohup and you're all set.


import eventlet
eventlet.monkey_patch(all=True)  # patch the standard IO libraries first, before anything else is imported

import logging

from boto.s3.connection import S3Connection
from boto.s3.bucket import Bucket

logging.basicConfig(filename="s3_download.log", level=logging.INFO)


def download_file(key_name):
    # It's important to fetch the key over a new connection:
    # boto connections are not safe to share between greenlets.
    conn = S3Connection("KEY", "SECRET")
    bucket = Bucket(connection=conn, name="BUCKET")
    key = bucket.get_key(key_name)

    try:
        key.get_contents_to_filename(key.name)
    except Exception:
        logging.info(key_name + ":" + "FAILED")


if __name__ == "__main__":
    conn = S3Connection("KEY", "SECRET")
    bucket = Bucket(connection=conn, name="BUCKET")

    logging.info("Fetching bucket list")
    bucket_list = bucket.list(prefix="PREFIX")

    logging.info("Creating a pool")
    pool = eventlet.GreenPool(size=20)

    logging.info("Saving files in bucket...")
    for key in bucket_list:  # iterate the prefixed listing created above
        pool.spawn_n(download_file, key.key)
    pool.waitall()
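
For example, assuming the script above is saved as s3_download.py (the filename is just illustrative):

nohup python s3_download.py &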

Answered by Matt Billenstein

Use eventlet to give you I/O parallelism, write a simple function to download one object using urllib, then use a GreenPile to map that to a list of input urls -- a pile with 50 to 100 greenlets should do...

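A minimal sketch of the GreenPile approach described above. The URL list, pool size, and file naming are placeholders (not from the original answer), and it assumes the objects are publicly readable or the URLs are pre-signed:

import eventlet
eventlet.monkey_patch()  # make urllib's sockets cooperative; do this before other imports

import urllib2  # Python 2, to match the boto-era code above


def download(url):
    # Download one object and save it under the last path segment of its URL.
    filename = url.rsplit("/", 1)[-1]
    data = urllib2.urlopen(url).read()
    with open(filename, "wb") as f:
        f.write(data)
    return url


urls = ["https://BUCKET.s3.amazonaws.com/key-%d" % i for i in range(1000)]  # placeholder URLs

pile = eventlet.GreenPile(100)  # 50-100 greenlets, as suggested above
for url in urls:
    pile.spawn(download, url)

for finished_url in pile:  # iterating the pile yields each call's return value as it completes
    print(finished_url)

Unlike spawn_n on a GreenPool, a GreenPile keeps each call's return value (and re-raises its exception), so you can iterate the results to confirm every object was actually fetched.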