Scrapy from a script: output in JSON

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/23574636/


scrapy from script output in json

python, json, web-scraping, scrapy, scrapy-spider

Asked by Wasif Khalil

I am running Scrapy in a Python script:

def setup_crawler(domain):
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)
    spider = ArgosSpider(domain=domain)
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
    reactor.run()

It runs successfully and stops, but where is the result? I want the result in JSON format; how can I do that?

result = responseInJSON

just like we get when using the command:

scrapy crawl argos -o result.json -t json

Accepted answer by alecxe

You need to set the FEED_FORMAT and FEED_URI settings manually:

settings.overrides['FEED_FORMAT'] = 'json'
settings.overrides['FEED_URI'] = 'result.json'
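Note that settings.overrides belongs to the old Scrapy API and is no longer available in newer releases; a rough equivalent for recent versions (a sketch, not part of the original answer) is to set the same values through the Settings API:

# Sketch for newer Scrapy versions (assumption, not from the original answer):
# settings.overrides was removed, so set the feed options via Settings.set().
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
settings.set('FEED_FORMAT', 'json')
settings.set('FEED_URI', 'result.json')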

If you want to get the results into a variable, you can define a Pipeline class that collects the items into a list. Then use a spider_closed signal handler to see the results:

import json

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings


class MyPipeline(object):
    def process_item(self, item, spider):
        results.append(dict(item))
        return item  # keep the item flowing through any later pipelines

results = []
def spider_closed(spider):
    print(results)

# set up spider    
spider = TestSpider(domain='mydomain.org')

# set up settings
settings = get_project_settings()
settings.overrides['ITEM_PIPELINES'] = {'__main__.MyPipeline': 1}

# set up crawler
crawler = Crawler(settings)
crawler.signals.connect(spider_closed, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)

# start crawling
crawler.start()
log.start()
reactor.run() 
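A variation on the same idea, if you would rather not register a pipeline, is to collect the items through the item_scraped signal; a minimal sketch (reusing the crawler, results, and signals names from the code above, not part of the original answer):

# Sketch: collect items via the item_scraped signal instead of a pipeline
# (reuses the `crawler`, `results` and `signals` names from the code above).
def item_scraped(item, response, spider):
    results.append(dict(item))

crawler.signals.connect(item_scraped, signal=signals.item_scraped)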

FYI, look at how Scrapy parses command-line arguments.

Also see: Capturing stdout within the same process in Python.

Answer by Alvaro Cavalcanti

I managed to make it work simply by adding FEED_FORMAT and FEED_URI to the CrawlerProcess constructor, using the basic Scrapy API tutorial code, as follows:

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',
    'FEED_URI': 'result.json'
})
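To actually start the crawl with that process, the usual follow-up calls (assuming the ArgosSpider class from the question is importable here) are:

# Assumes ArgosSpider from the question is importable in this script.
process.crawl(ArgosSpider)
process.start()  # blocks until the crawl finishes; result.json is written by the feed export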

Answer by Aminah Nuraini

Easy!

from scrapy import cmdline

cmdline.execute("scrapy crawl argos -o result.json -t json".split())

Put that script in the same directory as your scrapy.cfg.
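If the exported data is also needed back inside the Python program, one simple approach (a sketch, not from any of the answers) is to run the crawl as a subprocess and load the feed file afterwards:

import json
import subprocess

# Run the same command as above, then read the exported feed back into a variable.
subprocess.run(["scrapy", "crawl", "argos", "-o", "result.json", "-t", "json"], check=True)

with open("result.json") as f:
    results = json.load(f)  # a list of dicts, one per scraped item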