Python BeautifulSoup scrape tables

Note: this page is a translated copy of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/18966368/

Tags: python, html, web-scraping, beautifulsoup, html-parsing

Asked by kingcope

I am trying to scrape a table with BeautifulSoup. I wrote this Python code:

import urllib2
from bs4 import BeautifulSoup

url = "http://dofollow.netsons.org/table1.htm"  # change to whatever your url is

page = urllib2.urlopen(url).read()
soup = BeautifulSoup(page)

for i in soup.find_all('form'):
    print i.attrs['class']

I need to scrape Nome, Cognome, Email.

Accepted answer by alecxe

Loop over the table rows (tr tags) and get the text of the cells (td tags) inside:

for tr in soup.find_all('tr')[2:]:
    tds = tr.find_all('td')
    print "Nome: %s, Cognome: %s, Email: %s" % \
          (tds[0].text, tds[1].text, tds[2].text)

prints:

Nome: Massimo, Cognome: Allegri, Email: [email protected]
Nome: Alessandra, Cognome: Anastasia, Email: [email protected]
...

FYI, the [2:] slice here skips the two header rows.

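As an editorial aside (not part of the original answer), you can avoid hard-coding the number of header rows by skipping any row that does not yield three td cells. This is a minimal sketch, assuming the header rows either use th cells or contain fewer than three td cells:

for tr in soup.find_all('tr'):
    tds = tr.find_all('td')
    if len(tds) < 3:
        # header or malformed rows have no (or too few) <td> cells, so skip them
        continue
    print("Nome: %s, Cognome: %s, Email: %s" % (tds[0].text, tds[1].text, tds[2].text))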

UPD: here's how you can save the results to a txt file:

with open('output.txt', 'w') as f:
    for tr in soup.find_all('tr')[2:]:
        tds = tr.find_all('td')
        f.write("Nome: %s, Cognome: %s, Email: %s\n" % \
              (tds[0].text, tds[1].text, tds[2].text))
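
If you prefer structured output over plain text, the standard-library csv module works the same way. This is a sketch, not part of the original answer; it assumes the same soup object and the same three-column table, and follows the Python 2 conventions of the code above ('wb' mode and utf-8 encoded byte strings for the csv module):

import csv

with open('output.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(['Nome', 'Cognome', 'Email'])
    for tr in soup.find_all('tr')[2:]:
        tds = tr.find_all('td')
        # the Python 2 csv module expects byte strings, hence the utf-8 encode
        writer.writerow([td.text.strip().encode('utf-8') for td in tds[:3]])

On Python 3 you would open the file with open('output.csv', 'w', newline='') and drop the .encode('utf-8') call.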

Answer by Rakesh Kumar

# Library
from bs4 import BeautifulSoup

# Empty List
tabs = []

# File handling
with open('/home/rakesh/showHW/content.html', 'r') as fp:
    html_content = fp.read()

    table_doc = BeautifulSoup(html_content, 'html.parser')
    # parsing html content
    for tr in table_doc.table.find_all('tr'):
        tabs.append({
            'Nome': tr.find_all('td')[0].string,
            'Cognome': tr.find_all('td')[1].string,
            'Email': tr.find_all('td')[2].string
            })

    print(tabs)