Parsing out data using BeautifulSoup in Python

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license, link to the original, and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/1501690/

Date: 2020-11-03 22:25:44  Source: igfitidea

Parsing out data using BeautifulSoup in Python

python, html, parsing, beautifulsoup

Question by GobiasKoffi

I am attempting to use BeautifulSoup to parse through a DOM tree and extract the names of authors. Below is a snippet of HTML to show the structure of the code I'm going to scrape.


<html>
<body>
<div class="list-authors">
<span class="descriptor">Authors:</span> 
<a href="/find/astro-ph/1/au:+Lin_D/0/1/0/all/0/1">Dacheng Lin</a>, 
<a href="/find/astro-ph/1/au:+Remillard_R/0/1/0/all/0/1">Ronald A. Remillard</a>, 
<a href="/find/astro-ph/1/au:+Homan_J/0/1/0/all/0/1">Jeroen Homan</a> 
</div>
<div class="list-authors">
<span class="descriptor">Authors:</span> 
<a href="/find/astro-ph/1/au:+Kosovichev_A/0/1/0/all/0/1">A.G. Kosovichev</a>
</div>

<!--There are many other div tags with this structure-->
</body>
</html>

My point of confusion is that when I do soup.find, it finds only the first occurrence of the div tag that I'm searching for. After that, I search for all 'a' link tags. At this stage, how do I extract the author names from each of the link tags and print them out? Is there a way to do it using BeautifulSoup, or do I need to use a regex? And how do I continue iterating over every other div tag and extract the author names?

import re
import urllib2, sys
from BeautifulSoup import BeautifulSoup, NavigableString

html = urllib2.urlopen(address).read()
soup = BeautifulSoup(html)

try:
    authordiv = soup.find('div', attrs={'class': 'list-authors'})
    links = authordiv.findAll('a')

    for link in links:
        print ''.join(link[0].contents)

    #Iterate through entire page and print authors

except IOError:
    print 'IO error'

Answer by John La Rooy

Just use findAll for the divs, like you do for the links:

for authordiv in soup.findAll('div', attrs={'class': 'list-authors'}):

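Applied to the question's HTML, the full loop might look like the sketch below. It uses the modern `bs4` package rather than the old `BeautifulSoup` 3 import from the question; `findAll` still works there as an alias of `find_all`, and the sample markup is taken from the question itself:

```python
from bs4 import BeautifulSoup  # modern bs4; the question used the older BS3 import

html = """
<html><body>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/astro-ph/1/au:+Lin_D/0/1/0/all/0/1">Dacheng Lin</a>,
<a href="/find/astro-ph/1/au:+Remillard_R/0/1/0/all/0/1">Ronald A. Remillard</a>,
<a href="/find/astro-ph/1/au:+Homan_J/0/1/0/all/0/1">Jeroen Homan</a>
</div>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/astro-ph/1/au:+Kosovichev_A/0/1/0/all/0/1">A.G. Kosovichev</a>
</div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# findAll over every matching div, then findAll over every link inside it
authors = []
for authordiv in soup.findAll('div', attrs={'class': 'list-authors'}):
    for link in authordiv.findAll('a'):
        authors.append(link.contents[0])

for name in authors:
    print(name)
```

The outer findAll replaces the single find from the question, which is what lets the loop reach every author div instead of only the first one.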

Answer by Mark Rushakoff

Since link is already taken from an iterable, you don't need to subindex link -- you can just do link.contents[0].

print link.contents[0] with your new example with two separate <div class="list-authors"> divs yields:

Dacheng Lin
Ronald A. Remillard
Jeroen Homan
A.G. Kosovichev

So I'm not sure I understand the comment about searching other divs. If they are different classes, you will either need to do a separate soup.find and soup.findAll, or just modify your first soup.find.
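One way to cover several different classes in a single call is to pass a list of acceptable class values to findAll. This is a sketch under an assumption: the `list-editors` class is hypothetical, invented here for illustration, and it again uses the modern `bs4` package rather than the question's BS3 import:

```python
from bs4 import BeautifulSoup  # modern bs4; the question used the older BS3 import

# hypothetical markup where author lists appear under two different class names
html = """
<div class="list-authors"><a href="#">Dacheng Lin</a></div>
<div class="list-editors"><a href="#">A.G. Kosovichev</a></div>
"""

soup = BeautifulSoup(html, "html.parser")

# a list as the attribute filter matches a div carrying either class
names = [link.contents[0]
         for div in soup.findAll('div', attrs={'class': ['list-authors', 'list-editors']})
         for link in div.findAll('a')]
print(names)
```

If the classes need different handling, separate soup.findAll calls per class, as the answer suggests, are the clearer option.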