Scrapy framework
Full-site data crawling with a Spider
Crawl the data from every numbered page under one section of a website.
Two ways to implement it:
- Add every page's url to the start_urls list (not recommended; a sketch follows this list)
- Send follow-up requests manually (recommended):
yield scrapy.Request(url=new_url, callback=self.parse)  # callback handles data parsing
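For contrast, here is a minimal sketch of the first approach (the spider name is hypothetical; the 28-page count comes from the case study below). It works, but it hardcodes the page range and builds the whole url list up front:

import scrapy

class AllPagesSpider(scrapy.Spider):
    name = 'allpages'
    # First page plus index_2 .. index_28, all enumerated in advance
    start_urls = ['http://www.521609.com/tuku/mxxz/index.html'] + [
        'http://www.521609.com/tuku/mxxz/index_%d.html' % n
        for n in range(2, 29)
    ]

    def parse(self, response):
        # The same parsing logic runs once per page
        pass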
Case study
Crawl the image names from every page under www.521609.com/tuku/mxxz/index.html
scrapy startproject xhyl
cd xhyl
scrapy genspider xhyla www.521609.com/tuku/mxxz/index.html
Comparing the urls of different pages:
http://www.521609.com/tuku/mxxz/index.html
http://www.521609.com/tuku/mxxz/index_2.html
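Only the _%d suffix changes from page 2 onward, so a single format-string template covers them all:

url = 'http://www.521609.com/tuku/mxxz/index_%d.html'
print(url % 2)  # http://www.521609.com/tuku/mxxz/index_2.html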
settings.py configuration:
BOT_NAME = 'xhyl'
SPIDER_MODULES = ['xhyl.spiders']
NEWSPIDER_MODULE = 'xhyl.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'
Spider file: xhyla.py
import scrapy

class XhylaSpider(scrapy.Spider):
    name = 'xhyla'
    # allowed_domains = ['www.521609.com/tuku/mxxz/index_28.html']
    start_urls = ['http://www.521609.com/tuku/mxxz/index.html']
    # Template for pages 2 onward; the section has 28 pages in total
    url = 'http://www.521609.com/tuku/mxxz/index_%d.html'
    page_num = 2

    def parse(self, response):
        # One <li> per image entry in the section's list container
        li_list = response.xpath('/html/body/div[4]/div[3]/ul/li')
        for li in li_list:
            img_name = li.xpath('./a/p/text()').extract_first()
            print(img_name)
        if self.page_num <= 28:
            new_url = self.url % self.page_num
            self.page_num += 1
            # Send the next page's request manually; callback handles data parsing
            yield scrapy.Request(url=new_url, callback=self.parse)
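Run it from inside the project directory; with LOG_LEVEL = 'ERROR' the console output is just the printed image names:

scrapy crawl xhyla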