Python: write to a csv file

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/20719263/

Date: 2020-08-18 21:05:13  Source: igfitidea

Write to a csv file scrapy

python, csv, scrapy

Asked by blackmamba

I want to write to a csv file in scrapy:

for rss in rsslinks:
    item = AppleItem()
    item['reference_link'] = response.url
    base_url = get_base_url(response)
    item['rss_link'] = urljoin_rfc(base_url, rss)
    #item['rss_link'] = rss
    items.append(item)
    #items.append("\n")
f = open(filename, 'a+')    # filename is apple.com.csv
for item in items:
    f.write("%s\n" % item)

My output is this:


{'reference_link': 'http://www.apple.com/',
 'rss_link': 'http://www.apple.com/rss '}
{'reference_link': 'http://www.apple.com/rss/',
 'rss_link': 'http://ax.itunes.apple.com/WebObjects/MZStore.woa/wpa/MRSS/newreleases/limit=10/rss.xml'}
{'reference_link': 'http://www.apple.com/rss/',
 'rss_link': 'http://ax.itunes.apple.com/WebObjects/MZStore.woa/wpa/MRSS/newreleases/limit=25/rss.xml'}

What I want is this format:


reference_link               rss_link  
http://www.apple.com/     http://www.apple.com/rss/

Accepted answer by jonrsharpe

You need to


  1. Write your header row; then
  2. Write the entry rows for each object.

You could approach it like:


fields = ["reference_link", "rss_link"] # define fields to use
with open(filename,'a+') as f: # handle the source file
    f.write("{}\n".format('\t'.join(str(field) 
                              for field in fields))) # write header 
    for item in items:
        f.write("{}\n".format('\t'.join(str(item[field]) 
                              for field in fields))) # write items

Note that "{}\n".format(s) gives the same result as "%s\n" % s.
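The two steps above can be sketched as a self-contained script; the sample data here is hypothetical, standing in for the scraped items from the question:

```python
# Minimal demonstration of the header-then-rows approach,
# using hypothetical sample data in place of the scraped items.
fields = ["reference_link", "rss_link"]
items = [
    {"reference_link": "http://www.apple.com/",
     "rss_link": "http://www.apple.com/rss/"},
]

with open("apple.com.csv", "w") as f:
    # Header row: the field names, tab-separated.
    f.write("{}\n".format("\t".join(fields)))
    # Entry rows: one line per item, fields in the same order.
    for item in items:
        f.write("{}\n".format("\t".join(str(item[field])
                                        for field in fields)))
```

Note this writes a tab-separated file, matching the answer's approach; for a true comma-separated CSV with proper quoting, the csv module shown in a later answer is the safer choice.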


Answered by Guy Gavriely

Simply crawl with -o csv, like:

scrapy crawl <spider name> -o file.csv -t csv
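In newer Scrapy versions (2.1+), the same CSV export can also be declared once in the project settings via the FEEDS setting instead of passing -o on every run. A minimal sketch (the output filename is an assumption):

```python
# settings.py (Scrapy 2.1+): equivalent of "-o file.csv -t csv".
# The key is the output path; "format" selects the feed exporter.
FEEDS = {
    "file.csv": {"format": "csv"},
}
```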

Answered by uhbif19

Try tablib.


import tablib

dataset = tablib.Dataset()
dataset.headers = ["reference_link", "rss_link"]

def add_item(item):
    # Append one row, pulling values in header order.
    dataset.append([item.get(field) for field in dataset.headers])

for item in items:
    add_item(item)

with open(filename, 'w') as f:
    f.write(dataset.csv)


Answered by Anurag Misra

The best approach to solve this problem is to use Python's built-in csv module.

import csv

fieldnames = ['reference_link', 'rss_link']  # header fields

# newline='' prevents blank rows on Windows; "with" closes the file for us.
with open('Output_file.csv', 'w', newline='') as csv_file:  # Output_file.csv is the output file
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()  # write the header row
    for rss in rsslinks:
        base_url = get_base_url(response)
        writer.writerow({'reference_link': response.url,
                         'rss_link': urljoin_rfc(base_url, rss)})  # write data rows
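Since rsslinks and response only exist inside a running spider, here is a self-contained sketch of the same csv.DictWriter pattern with hypothetical sample data in their place:

```python
import csv

# Hypothetical rows standing in for the scraped response.url / rss links.
rows = [
    {"reference_link": "http://www.apple.com/",
     "rss_link": "http://www.apple.com/rss/"},
]

fieldnames = ["reference_link", "rss_link"]
with open("Output_file.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()          # header row
    for row in rows:
        writer.writerow(row)      # one data row per dict
```

DictWriter handles quoting and commas automatically, which the manual f.write approach in the question does not.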

Answered by jwalman

This is what worked for me using Python 3:

scrapy runspider spidername.py -o file.csv -t csv