Original URL: http://stackoverflow.com/questions/39383465/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow
Python Read-only file system Error With S3 and Lambda when opening a file for reading
Asked by user1530318
I'm seeing the error below from my Lambda function when I drop a file.csv into an S3 bucket. The file is not large, and I even added a 60-second sleep prior to opening the file for reading, but for some reason the file has the extra ".6CEdFe7C" appended to it. Why is that?
[Errno 30] Read-only file system: u'/file.csv.6CEdFe7C': IOError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 75, in lambda_handler
    s3.download_file(bucket, key, filepath)
  File "/var/runtime/boto3/s3/inject.py", line 104, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/var/runtime/boto3/s3/transfer.py", line 670, in download_file
    extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 685, in _download_file
    self._get_object(bucket, key, filename, extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 709, in _get_object
    extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 723, in _do_get_object
    with self._osutil.open(filename, 'wb') as f:
  File "/var/runtime/boto3/s3/transfer.py", line 332, in open
    return open(filename, mode)
IOError: [Errno 30] Read-only file system: u'/file.csv.6CEdFe7C'
Code:
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    s3_response = {}
    counter = 0
    event_records = event.get("Records", [])
    s3_items = []
    for event_record in event_records:
        if "s3" in event_record:
            bucket = event_record["s3"]["bucket"]["name"]
            key = event_record["s3"]["object"]["key"]
            filepath = '/' + key
            print(bucket)
            print(key)
            print(filepath)
            s3.download_file(bucket, key, filepath)
The result of the above is:
mytestbucket
file.csv
/file.csv
[Errno 30] Read-only file system: u'/file.csv.6CEdFe7C'
If the key/file is "file.csv", then why does the s3.download_file method try to download "file.csv.6CEdFe7C"? I'm guessing that when the function is triggered the file is file.csv.xxxxx, but by the time it gets to line 75 the file is renamed to file.csv?
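For context: the suffix is not your file being renamed in S3. boto3's transfer code downloads into a temporary file named after the target plus a random hex extension, and renames it into place once the transfer succeeds, which is why the error message shows the temporary name. A minimal sketch of that pattern (the helper names here are illustrative, not boto3's exact internals):

import os
import random
import string

def random_file_extension(num_digits=8):
    # builds a suffix like "6CEdFe7C" out of hex digits
    return ''.join(random.choice(string.hexdigits) for _ in range(num_digits))

def download_then_rename(write_body, filename):
    # download into "<filename>.<random suffix>" first ...
    temp_name = filename + os.extsep + random_file_extension()
    with open(temp_name, 'wb') as f:  # this open() is what raises Errno 30 above
        write_body(f)
    # ... then rename to the final name only on success
    os.rename(temp_name, filename)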
Answered by joonas.fi
Only /tmp seems to be writable in AWS Lambda.

Therefore this would work:

filepath = '/tmp/' + key
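Applied to the handler from the question, a corrected sketch (assuming the rest of the function is unchanged; os.path.basename is added here to guard against keys that contain a prefix):

import os
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for event_record in event.get("Records", []):
        if "s3" in event_record:
            bucket = event_record["s3"]["bucket"]["name"]
            key = event_record["s3"]["object"]["key"]
            # /tmp is the only writable location in the Lambda filesystem
            filepath = os.path.join('/tmp', os.path.basename(key))
            s3.download_file(bucket, key, filepath)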
Answered by Te ENe Te
According to http://boto3.readthedocs.io/en/latest/guide/s3-example-download-file.html, the example shows how to use the first parameter for the name of the object in the cloud and the second parameter for the local path it should be downloaded to.

On the other hand, the Amazon docs say:

Ephemeral disk capacity ("/tmp" space): 512 MB

Thus, we have 512 MB for creating files. Here is my code in AWS Lambda; for me it works like a charm:

.download_file(Key=nombre_archivo, Filename='/tmp/{}'.format(nuevo_nombre))
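That call shape matches boto3's Bucket resource. A self-contained sketch with placeholder values (the bucket name, key, and local name here are assumptions, not values from the answer):

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('mytestbucket')  # placeholder bucket name

nombre_archivo = 'file.csv'  # key of the object in S3 (placeholder)
nuevo_nombre = 'file.csv'    # name to save it under in /tmp (placeholder)
bucket.download_file(Key=nombre_archivo, Filename='/tmp/{}'.format(nuevo_nombre))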
Answered by Pavlo Kovalchuk
I noticed that when I uploaded the code for the Lambda directly as a zip file, I was able to write only to the /tmp folder, but when the code was uploaded from S3, I was able to write to the project root folder too.
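One way to check this for a given deployment is to probe the candidate directories at runtime; a minimal sketch (the probe helper is illustrative, not part of the answer):

import os
import tempfile

def is_writable(path):
    # try to create (and auto-delete) a throwaway file in the directory
    try:
        with tempfile.NamedTemporaryFile(dir=path):
            pass
        return True
    except OSError:
        return False

def lambda_handler(event, context):
    # os.getcwd() in Lambda is typically /var/task, the deployed project root
    for path in ('/tmp', os.getcwd()):
        print(path, 'writable:', is_writable(path))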