Beautiful Soup Usage

Every web page has its own structure and hierarchy, and many nodes are distinguished by id or class attributes, so we can use this structure and these attributes to extract data.

Simply put, Beautiful Soup is a Python library for parsing HTML and XML. With it we can conveniently extract data from web pages.

To use the lxml parser, just pass 'lxml' as the second argument when initializing Beautiful Soup:

from bs4 import BeautifulSoup
soup = BeautifulSoup('<p>Hello</p>', 'lxml')
print(soup.p.string)
# Hello
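
Note: Beautiful Soup and the lxml parser are third-party packages; if they are not installed yet, they can typically be installed with pip:

pip install beautifulsoup4 lxml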

Example

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.prettify())
print(soup.title.string)

Output:

<html>
<head>
<title>
The Dormouse's story
</title>
</head>
<body>
<p class="title" name="dromouse">
<b>
The Dormouse's story
</b>
</p>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<!-- Elsie -->
</a>
,
<a class="sister" href="http://example.com/lacie" id="link2">
Lacie
</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">
Tillie
</a>
;
and they lived at the bottom of a well.
</p>
<p class="story">
...
</p>
</body>
</html>

The Dormouse's story

First we declare the variable html, which is an HTML string. Note that it is not a complete HTML document: the body and html nodes are not closed. We then pass it as the first argument to BeautifulSoup; the second argument is the parser type (lxml here). This completes the initialization of the BeautifulSoup object, which we assign to the variable soup.

The prettify() method outputs the string being parsed in a standard, indented format.

We then call soup.title.string, which outputs the text content of the title node in the HTML. In other words, soup.title selects the title node from the HTML, and calling its string attribute gives us the text inside it.

Extracting Information

Getting the Node Name

You can use the name attribute to get the name of a node. Still using the HTML above as an example, select the title node and call its name attribute to get the node name:

print(soup.title.name)

Output:

title

Getting Attributes

A node may have multiple attributes, such as id and class. After selecting a node element, you can call attrs to get all of its attributes:

print(soup.p.attrs)
print(soup.p.attrs['name'])

Output:

{'class': ['title'], 'name': 'dromouse'}
dromouse

As you can see, attrs returns a dictionary that combines all the attributes and attribute values of the selected node. To get a particular attribute you simply read that key from the dictionary with square brackets. For example, the name attribute is obtained with attrs['name'].

This is a little cumbersome, though. There is a simpler way: skip attrs and index the node element directly with the attribute name in square brackets to get the attribute value. For example:

print(soup.p['name'])
print(soup.p['class'])

Output:

dromouse
['title']

Note that some results are strings while others are lists of strings. The name attribute has a unique value, so a single string is returned; a node element may have several classes, however, so class returns a list. In practice, remember to check the type.
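
A minimal sketch of handling both cases, reusing the soup object above (joining class names with spaces is just for illustration):

value = soup.p['class']
# class comes back as a list of class names, while most other attributes
# (such as name or id) come back as plain strings
if isinstance(value, list):
    print(' '.join(value))  # title
else:
    print(value)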

Getting Content

You can use the string attribute to get the text contained in a node element, for example the text of the first p node:

print(soup.p.string)

Output:

The Dormouse's story

The p node selected here is the first p node, and the text obtained is the text inside that first p node.
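
One caveat worth knowing about string: if a node contains several mixed children (text plus tags), string becomes ambiguous and returns None; get_text(), which is also part of the bs4 API, concatenates all the text inside the node instead. A minimal standalone sketch (soup2 is just a throwaway name):

from bs4 import BeautifulSoup

soup2 = BeautifulSoup('<p>Hello <b>World</b></p>', 'lxml')
# The p node has both a text child and a <b> child, so string returns None
print(soup2.p.string)
# get_text() concatenates all the text inside the node
print(soup2.p.get_text())  # Hello World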

Nested Selection

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.head.title)
print(type(soup.head.title))
print(soup.head.title.string)

Output:

<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story

Relational Selection

Sometimes you cannot reach the desired node element in a single step. Instead, you first select some node and then, using it as a starting point, select its children, parent, siblings, and so on.

Children and Descendants

After selecting a node element, if you want its direct children, you can call the contents attribute:

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.contents)

Output:

['\n            Once upon a time there were three little sisters; and their names were\n            ', <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>, '\n', <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, ' \n and\n ', <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>, '\n and they lived at the bottom of a well.\n ']

Alternatively, you can call the children attribute to get the corresponding result:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.children)
for i, child in enumerate(soup.p.children):
    print(i, child)

Output:

<list_iterator object at 0x1064f7dd8>
0
Once upon a time there were three little sisters; and their names were

1 <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
2

3 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
4
and

5 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
6
and they lived at the bottom of a well.

To get all descendant nodes, you can call the descendants attribute:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.descendants)
for i, child in enumerate(soup.p.descendants):
    print(i, child)

Parent and Ancestor Nodes

To get the parent node of a node element, you can call the parent attribute:

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.a.parent)

To get all ancestor nodes, you can call the parents attribute:

html = """
<html>
<body>
<p class="story">
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(type(soup.a.parents))
print(list(enumerate(soup.a.parents)))

Sibling Nodes

html = """
<html>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
Hello
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print('Next Sibling', soup.a.next_sibling)
print('Prev Sibling', soup.a.previous_sibling)
print('Next Siblings', list(enumerate(soup.a.next_siblings)))
print('Prev Siblings', list(enumerate(soup.a.previous_siblings)))

next_sibling and previous_sibling get the next and previous sibling element of a node, respectively, while next_siblings and previous_siblings return all the following and all the preceding siblings.
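
Note that siblings include plain text nodes (such as the stray "Hello" text above), not just tags. A minimal sketch, reusing the html string above; find_next_sibling is part of the bs4 API and skips over non-matching nodes:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
# next_sibling here is the text node containing 'Hello', not the second <a>
print(repr(soup.a.next_sibling))
# find_next_sibling('a') returns the next sibling that is an <a> tag
print(soup.a.find_next_sibling('a'))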

Method Selectors

find_all

find_all queries for all elements that match the given conditions. You can pass it node names, attributes, or text to get the matching elements.
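
Its signature is roughly as follows (in newer bs4 versions the text parameter is also available under the name string):

find_all(name, attrs, recursive, text, limit, **kwargs)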

name

Query for all ul nodes:

html='''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(name='ul'))
print(type(soup.find_all(name='ul')[0]))

attrs

html='''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))
print(soup.find_all(attrs={'name': 'elements'}))
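
Commonly used attributes can also be passed as keyword arguments instead of an attrs dictionary; because class is a Python reserved word, the keyword is written class_. A minimal sketch, reusing the soup object above:

print(soup.find_all(id='list-1'))
print(soup.find_all(class_='element'))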

text

The text parameter is used to match the text of nodes; you can pass either a string or a compiled regular-expression object:

import re
html='''
<div class="panel">
<div class="panel-body">
<a>Hello, this is a link</a>
<a>Hello, this is a link, too</a>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text=re.compile('link')))

find

The find method returns a single element, namely the first one that matches, whereas find_all returns a list of all matching elements.
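
A minimal sketch of the difference, assuming html is the ul markup defined in the name example above:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
# find returns the first matching Tag (or None if nothing matches)
print(soup.find(name='ul'))
print(type(soup.find(name='ul')))
# find_all returns a list containing every match
print(len(soup.find_all(name='ul')))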