Python crawler pitfall notes: _pickle.PicklingError: Can't pickle <class>

Summary: Running a Python crawler on Windows raised a pickle error from the multiprocessing module, because the process object could not be serialized. I worked around it by running the crawler in an Ubuntu VM, where MongoDB then refused to start for lack of free disk space; starting mongod with --smallfiles fixed that, and the crawler ran normally.


For a course assignment, the teacher asked me to help their group run a crawler. I downloaded the source code, ran it inside Anaconda, and got a strange error.

Traceback (most recent call last):
  File "ccf_crawler.py", line 118, in <module>
    save_dblp_papers()
  File "ccf_crawler.py", line 102, in save_dblp_papers
    watcher_process.start()
  File "E:\jupter\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "E:\jupter\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "E:\jupter\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "E:\jupter\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "E:\jupter\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'paper_crawler.crawler_manager.CrawlerManager'>: it's not the same object as paper_crawler.crawler_manager.CrawlerManager
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "E:\jupter\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "E:\jupter\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

This is a Python pickle error. pickle is a simple persistence module: it serializes an object and saves it to a file. But why does the crawler fail only on Windows? After looking it up, I found the answer lies in how pickle interacts with multiprocessing.
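As a quick refresher, here is a minimal sketch of what pickle itself does (standard-library usage only, unrelated to the crawler's own objects):

import pickle

data = {"title": "ccf_crawler", "pages": 3}

# Serialize the dict to a file...
with open("data.pkl", "wb") as f:
    pickle.dump(data, f)

# ...and load it back.
with open("data.pkl", "rb") as f:
    print(pickle.load(f))  # {'title': 'ccf_crawler', 'pages': 3}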

It turns out the crawler is built on multiprocessing. On Windows, new processes are started with the spawn method: the process object (and everything it references) must be pickled and sent to the child, and things like socket objects, or a class that is no longer the same object as the one defined in its module, cannot be pickled, hence the error above. On Linux the default start method is fork: the child simply inherits the parent's memory, so nothing needs to be pickled.
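You can check which start method your platform uses with the standard library (a quick sketch, not taken from the original project):

import multiprocessing as mp

if __name__ == "__main__":
    # Prints 'spawn' on Windows (children must receive pickled arguments)
    # and 'fork' on Linux (children inherit the parent's memory directly).
    print(mp.get_start_method())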
So there are two possible fixes:

  1. Refactor the code to use threads instead of processes (the performance hit should be small), so it can run on Windows; see the sketch after this list.
  2. Just run it on a Linux system.
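Here is a minimal sketch of option 1, with hypothetical names (crawler_manager.watch stands in for whatever the real watcher process runs):

import threading

def start_watcher(crawler_manager):
    # The original code does roughly multiprocessing.Process(target=...).start(),
    # which has to pickle crawler_manager on Windows (spawn) and fails.
    # A thread shares the parent's memory, so nothing needs to be serialized.
    watcher = threading.Thread(target=crawler_manager.watch, daemon=True)
    watcher.start()
    return watcher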

So I fired up an Ubuntu virtual machine, set up the environment, installed MongoDB, ran the code, and it worked!
Then the VM ran out of memory, sigh.

Side note: a MongoDB startup error on Ubuntu 14.04
I installed MongoDB following an online tutorial, then hit this error:

 mongod --dbpath data/db
Mon Dec  7 19:03:14.143 [initandlisten] MongoDB starting : pid=129519 port=27017 dbpath=data/db 64-bit host=ubuntu
Mon Dec  7 19:03:14.143 [initandlisten] db version v2.4.9
Mon Dec  7 19:03:14.143 [initandlisten] git version: nogitversion
Mon Dec  7 19:03:14.143 [initandlisten] build info: Linux orlo 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 BOOST_LIB_VERSION=1_54
Mon Dec  7 19:03:14.143 [initandlisten] allocator: tcmalloc
Mon Dec  7 19:03:14.143 [initandlisten] options: { dbpath: "data/db" }
Mon Dec  7 19:03:14.149 [initandlisten] journal dir=data/db/journal
Mon Dec  7 19:03:14.149 [initandlisten] recover : no journal files present, no recovery needed
Mon Dec  7 19:03:14.149 [initandlisten] 
Mon Dec  7 19:03:14.149 [initandlisten] ERROR: Insufficient free space for journal files
Mon Dec  7 19:03:14.149 [initandlisten] Please make at least 3379MB available in data/db/journal or use --smallfiles
Mon Dec  7 19:03:14.149 [initandlisten] 
Mon Dec  7 19:03:14.150 [initandlisten] exception in initAndListen: 15926 Insufficient free space for journals, terminating
Mon Dec  7 19:03:14.150 dbexit: 
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: going to close listening sockets...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: going to flush diaglog...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: going to close sockets...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: waiting for fs preallocator...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: lock for final commit...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: final commit...
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: closing all files...
Mon Dec  7 19:03:14.150 [initandlisten] closeAllFiles() finished
Mon Dec  7 19:03:14.150 [initandlisten] journalCleanup...
Mon Dec  7 19:03:14.150 [initandlisten] removeJournalFiles
Mon Dec  7 19:03:14.150 [initandlisten] shutdown: removing fs lock...
Mon Dec  7 19:03:14.150 dbexit: really exiting now

So the problem is insufficient free disk space (the virtual machine is just too small), and the only way out is to fall back to the --smallfiles option, which makes MongoDB preallocate much smaller journal and data files.
Mind the spaces and other formatting details (I have been bitten by stray spaces a lot these past few days; details matter).
Run the command:

 mongod --dbpath data/db --smallfiles


Success!

Mon Dec  7 19:03:59.144 [initandlisten] MongoDB starting : pid=129886 port=27017 dbpath=data/db 64-bit host=ubuntu
Mon Dec  7 19:03:59.145 [initandlisten] db version v2.4.9
Mon Dec  7 19:03:59.145 [initandlisten] git version: nogitversion
Mon Dec  7 19:03:59.145 [initandlisten] build info: Linux orlo 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 BOOST_LIB_VERSION=1_54
Mon Dec  7 19:03:59.145 [initandlisten] allocator: tcmalloc
Mon Dec  7 19:03:59.145 [initandlisten] options: { dbpath: "data/db", smallfiles: true }
Mon Dec  7 19:03:59.148 [initandlisten] journal dir=data/db/journal
Mon Dec  7 19:03:59.148 [initandlisten] recover : no journal files present, no recovery needed
Mon Dec  7 19:03:59.215 [FileAllocator] allocating new datafile data/db/local.ns, filling with zeroes...
Mon Dec  7 19:03:59.215 [FileAllocator] creating directory data/db/_tmp
Mon Dec  7 19:03:59.217 [FileAllocator] done allocating datafile data/db/local.ns, size: 16MB,  took 0 secs
Mon Dec  7 19:03:59.218 [FileAllocator] allocating new datafile data/db/local.0, filling with zeroes...
Mon Dec  7 19:03:59.219 [FileAllocator] done allocating datafile data/db/local.0, size: 16MB,  took 0 secs
Mon Dec  7 19:03:59.223 [initandlisten] waiting for connections on port 27017
Mon Dec  7 19:03:59.225 [websvr] admin web console waiting for connections on port 28017

Mon Dec  7 19:07:59.266 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 8ms
Mon Dec  7 19:48:17.641 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 5ms
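If mongod is started through the service script instead of by hand, the same option can be set in the legacy INI-style config file. The sketch below assumes the stock Ubuntu package layout (/etc/mongodb.conf with dbpath /var/lib/mongodb), which may differ from a manual install:

# /etc/mongodb.conf (legacy 2.4-era INI-style options)
dbpath = /var/lib/mongodb
journal = true
smallfiles = true

Then restart the service (with the distro package it is typically sudo service mongodb restart) and the small-file limits apply from the next start.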

