Scraping Lagou job listings with a Python crawler

This article walks through using Python's Scrapy framework to crawl Java engineer job postings from Lagou, covering the whole project: setting the goal, analyzing the page structure, configuring the Scrapy project, setting up middleware, writing the spider, saving the data, and finally generating word clouds of position and company keywords. The word clouds show the main requirements in current Java engineer hiring and the characteristics of the hiring companies.


First, the results (word clouds of the job and company keywords):

Goal

Scrape the Java engineer job postings on Lagou and turn them into word clouds.
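Jumping ahead a little: the word-cloud step at the end can be done with the wordcloud package plus jieba for Chinese word segmentation. A minimal sketch, assuming the scraped keywords were saved one per line to keywords.txt (a hypothetical file name) and that a font with Chinese glyphs is available locally:

import jieba
from wordcloud import WordCloud

# Read the keywords collected by the crawl (the file name is an assumption).
with open('keywords.txt', encoding='utf-8') as f:
    text = f.read()

# Segment the Chinese text so WordCloud can count individual words.
segmented = ' '.join(jieba.cut(text))

wc = WordCloud(
    font_path='simhei.ttf',  # any font with Chinese glyphs; this path is an assumption
    width=800,
    height=600,
    background_color='white',
).generate(segmented)
wc.to_file('keyword_cloud.png')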

Analyzing the target site

Opening Lagou, the target URL turns out to be https://www.lagou.com/zhaopin/Java/2/?filterOption=2. Flipping through the pages reveals that the 2 in filterOption=2 corresponds to the page number, so all listings can be crawled by iterating from page 1 up to the total page count.
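For example, the full set of page URLs can be generated up front. A short sketch; the total page count of 30 here is a made-up value, so in practice read the real count from the first page:

# Generate one URL per results page.
TOTAL_PAGES = 30  # placeholder; read the real page count from the first response

start_urls = [
    'https://www.lagou.com/zhaopin/Java/{0}/?filterOption={0}'.format(page)
    for page in range(1, TOTAL_PAGES + 1)
]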
The data we can scrape:
position name, posting date, salary, minimum requirements, job tags, company name, company type, company address, and company keywords.

Starting the Scrapy project:

For the project setup itself, see my previous article: http://blog.csdn.net/qq_33850908/article/details/79063271
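If you just need the commands, the skeleton comes from Scrapy's own CLI; the project and spider names below are only examples:

scrapy startproject lagou
cd lagou
scrapy genspider lagou_spider www.lagou.com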

Writing the code:

items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class LagouItem(scrapy.Item):
    # The fields we want to scrape for each job posting:
    name = scrapy.Field()
    day = scrapy.Field()
    salary = scrapy.Field()
    require = scrapy.Field()
    tag = scrapy.Field()
    keyWord = scrapy.Field()
    companyName = scrapy.Field()
    companyType = scrapy.Field()
    location = scrapy.Field()

Nothing much to say here; this just lists the fields we want to scrape.
middlewares.py:

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
import random
from scrapy import signals
import unit.userAgents as userAgents
from unit.proxyMysql import sqlHelper


class LagouSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        # Connected to the spider_opened signal in from_crawler() above.
        spider.logger.info('Spider opened: %s' % spider.name)
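The otherwise-unused imports at the top of middlewares.py (random, userAgents, sqlHelper) hint at where the code was heading: a downloader middleware that rotates user agents (and presumably proxies pulled from MySQL via sqlHelper) so Lagou's anti-crawler checks don't block the spider. The original code is cut off at this point, so what follows is only a minimal sketch of the user-agent half, living in the same middlewares.py (so the imports above apply) and assuming unit/userAgents.py exposes a list of UA strings named agents:

class RandomUserAgentMiddleware(object):
    # Downloader middleware (a sketch, not the article's original code):
    # pick a random User-Agent for every outgoing request.

    def process_request(self, request, spider):
        # userAgents.agents is assumed to be a list of UA strings.
        request.headers['User-Agent'] = random.choice(userAgents.agents)
        return None  # returning None lets Scrapy keep processing the request

To activate it, register the class in settings.py (the class name above is hypothetical):

DOWNLOADER_MIDDLEWARES = {
    'lagou.middlewares.RandomUserAgentMiddleware': 543,
}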