Parsing and Scraping Data with XPath
XPath data parsing
XPath is the most commonly used, efficient, and convenient approach to data parsing, and it generalizes well across pages.
- How XPath parsing works:
1. Instantiate an etree object and load the page source to be parsed into it.
2. Call the etree object's xpath method with an XPath expression to locate tags and capture their content.
- Environment requirement:
pip install lxml
- How to instantiate an etree object: from lxml import etree
1. Load the source of a local HTML file into an etree object:
etree.parse(filePath)
2. Load a page fetched from the internet into the object:
etree.HTML(page_text)
tree.xpath('XPath expression')
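The two instantiation paths above can be sketched in a few lines. Note that `etree.HTML` takes the HTML string itself, not a quoted placeholder name; the file path in the sketch is a hypothetical example.

```python
# Minimal sketch of both ways to instantiate an etree object.
# 'local.html' is a hypothetical placeholder path; page_text below is an
# inline string standing in for HTML fetched over the network.
from lxml import etree

# 1. From a local HTML file (commented out since the file may not exist):
# tree = etree.parse('local.html')

# 2. From an HTML string — pass the variable, not the quoted name:
page_text = '<html><body><div class="song">hello</div></body></html>'
tree = etree.HTML(page_text)
print(tree.xpath('//div[@class="song"]/text()'))  # ['hello']
```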
- XPath expressions
/html/body/div   a single / starts from the root node and denotes one level
/html//div, //div   // spans multiple levels and can start matching from any position
Attribute locating: //div[@class='song']   pattern: tag[@attrName="attrValue"]
Index locating: //div[@class='song']/p[3]   XPath index numbering starts at 1, not 0.
Taking text:
/text()   gets only the text directly inside the tag, e.g. //div[@class='tang']/ul/li[5]/a/text() (xpath returns a Python list, so take element [0] of the result)
//text()   gets all text content under the tag, at any depth, e.g. //li[7]//text()
Taking attributes:
/@attrName   gets an attribute value, e.g. //div[@class='tang']/img/@src
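The expressions above can be tried against a small inline HTML snippet; the class names below mirror the notes but the content is made up for illustration.

```python
from lxml import etree

html = '''
<div class="song"><p>one</p><p>two</p><p>three</p></div>
<div class="tang">
  <ul>
    <li><a href="/a1">Li Bai</a></li>
    <li><a href="/a2">Du Fu</a></li>
  </ul>
  <img src="/cover.jpg"/>
</div>
'''
tree = etree.HTML(html)

# Attribute locating: tag[@attrName="attrValue"]
song_div = tree.xpath('//div[@class="song"]')
# Index locating: XPath indices start at 1
third_p = tree.xpath('//div[@class="song"]/p[3]/text()')     # ['three']
# /text(): only the text directly inside the tag
poet = tree.xpath('//div[@class="tang"]/ul/li[2]/a/text()')  # ['Du Fu']
# //text(): all text under the tag, at any depth (includes whitespace nodes)
all_text = tree.xpath('//div[@class="tang"]/ul//text()')
# /@attrName: take an attribute value
src = tree.xpath('//div[@class="tang"]/img/@src')            # ['/cover.jpg']
print(third_p, poet, src)
```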
Example 1
Requirement
Scrape the listing information from 58.com's second-hand housing section; page: 58 second-hand housing
Code
# Requirement: scrape the listing titles from 58.com second-hand housing
from lxml import etree
import requests

if __name__ == "__main__":
    # Spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
    }
    # Fetch the page source
    url = "https://tj.58.com/ershoufang/"
    page_text = requests.get(url=url, headers=headers).text
    # Parse the data
    tree = etree.HTML(page_text)
    # Collect the li tags
    li_list = tree.xpath('//ul[@class="house-list-wrap"]/li')
    ans_list = []
    # fp = open('58.txt', 'w', encoding='utf-8')
    # Extract each title
    for li in li_list:
        detail = li.xpath('./div[2]/h2/a/text()')[0]
        # fp.write(detail + '\n')
        print(detail)
        ans_list.append(detail)
Example 2
Requirement
Parse and download images from netbian; page: netbian anime images
Code
import requests
import os
from lxml import etree

if __name__ == '__main__':
    url = 'http://pic.netbian.com/4kdongman/'
    # Spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
    }
    # Request the page data
    response = requests.get(url=url, headers=headers)
    # Fix garbled Chinese, option 1: hard-code the site's encoding
    # response.encoding = 'gbk'
    # Fix garbled Chinese, option 2: let requests infer the encoding
    response.encoding = response.apparent_encoding
    page_text = response.text
    # Instantiate an etree object
    tree = etree.HTML(page_text)
    # Collect the li tags
    li_list = tree.xpath('//ul[@class="clearfix"]/li')
    if not os.path.exists('./dongman'):
        os.mkdir('./dongman')
    # Extract each image URL and name
    for li in li_list:
        img_link = li.xpath('./a/img/@src')[0]
        img_link = "http://pic.netbian.com" + img_link
        img_name = li.xpath('./a/img/@alt')[0]
        img_name = img_name + '.jpg'
        print(img_link, img_name)
        # Fetch the binary image data
        img_data = requests.get(url=img_link, headers=headers).content
        # Build the storage path and write the file
        img_path = 'dongman/' + img_name
        with open(img_path, 'wb') as fp:
            fp.write(img_data)
        print(img_name + " downloaded successfully!")
Example 3
Requirement
Scrape all city names; page: aqistudy.cn
Code
import requests
from lxml import etree

if __name__ == '__main__':
    url = 'https://www.aqistudy.cn/historydata/'
    # Spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
    }
    # Fetch the page data
    page_text = requests.get(url=url, headers=headers).text
    # Instantiate an etree object
    tree = etree.HTML(page_text)
    # Collect every city <a> tag; the | union covers both list layouts on the page
    all_city = []
    a_list = tree.xpath('//div[@class="bottom"]/ul/li/a | //div[@class="bottom"]/ul/div[2]/li/a')
    for cur in a_list:
        # xpath returns a list; take the string itself rather than appending the list
        city = cur.xpath('./text()')[0]
        all_city.append(city)
    print(all_city)
Example 4
Requirement
Scrape free resume templates; page: sc.chinaz.com free resume templates
Code
import requests
import os
from lxml import etree

if __name__ == '__main__':
    url = 'https://sc.chinaz.com/jianli/free.html'
    # Spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Safari/537.36'
    }
    # Fetch the listing page
    response = requests.get(url=url, headers=headers)
    # response.encoding = "gbk"
    response.encoding = response.apparent_encoding
    page_text = response.text
    # print(page_text)
    # Instantiate an etree object for parsing
    tree = etree.HTML(page_text)
    if not os.path.exists('./jianli'):
        os.mkdir('./jianli')
    a_list = tree.xpath('//div[@id="container"]/div/a')
    for cur_a in a_list:
        re_link = cur_a.xpath('./@href')[0]
        re_name = cur_a.xpath('./img/@alt')[0] + '.rar'
        # print(re_name, re_link)
        re_link = "http:" + re_link
        # Visit each template's detail page
        response = requests.get(url=re_link, headers=headers)
        response.encoding = response.apparent_encoding
        detail_page = response.text
        detail_tree = etree.HTML(detail_page)
        # Take one of the mirror download links on the detail page
        detail_link = detail_tree.xpath('//*[@id="down"]/div[2]/ul/li[10]/a/@href')[0]
        # print(detail_link)
        re_data = requests.get(url=detail_link, headers=headers).content
        re_path = 'jianli/' + re_name
        with open(re_path, 'wb') as fp:
            fp.write(re_data)
        print(re_name + " downloaded successfully!")