2023-07-10 23:08:00 [scrapy.core.engine] INFO: Closing spider (finished)
2023-07-10 23:08:00 [selenium.webdriver.remote.remote_connection] DEBUG: DELETE http://localhost:60596/session/be2dde14f6d45a27f7c73bf2fdb4c6fc/window {}
2023-07-10 23:08:00 [urllib3.connectionpool] DEBUG: http://localhost:60596 "DELETE /session/be2dde14f6d45a27f7c73bf2fdb4c6fc/window HTTP/1.1" 200 0
2023-07-10 23:08:00 [selenium.webdriver.remote.remote_connection] DEBUG: Remote response: status=200 | data={"value":[]} | headers=HTTPHeaderDict({'Content-Length': '12', 'Content-Type': 'application/json; charset=utf-8', 'cache-control': 'no-cache'})
2023-07-10 23:08:00 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request

What does this log mean?

Posted: 2024-04-10 08:28:49 · Views: 350
This log records Scrapy and Selenium shutting down at the end of a crawl. Line by line:

1. `2023-07-10 23:08:00 [scrapy.core.engine] INFO: Closing spider (finished)`: the spider has finished its work and Scrapy is closing it.
2. `... [selenium.webdriver.remote.remote_connection] DEBUG: DELETE http://localhost:60596/session/be2dde14f6d45a27f7c73bf2fdb4c6fc/window {}`: Selenium is sending an HTTP DELETE request to the local WebDriver server to close the browser window belonging to this session.
3. `... [urllib3.connectionpool] DEBUG: ... "DELETE /session/.../window HTTP/1.1" 200 0`: urllib3 (the HTTP client underneath Selenium) confirms the DELETE request was sent and the server answered with status code 200 (success).
4. `... DEBUG: Remote response: status=200 | data={"value":[]} | headers=...`: the WebDriver server's full response; status 200 and a body of `{"value":[]}`, i.e. the window was closed and no window handles remain.
5. `... DEBUG: Finished Request`: the request/response cycle is complete.

In short, these INFO/DEBUG lines record the normal cleanup that Scrapy and Selenium perform when a crawl ends; they are useful for debugging and for following the program's execution flow.
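Every entry above follows the same layout: timestamp, `[logger name]`, level, message. A small sketch of pulling those parts out of a line with the standard library (the regex mirrors the format seen in this log; the names are my own):

```python
import re

# Split a log line like the ones above into timestamp, logger, level, message.
LOG_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<logger>[^\]]+)\] "
    r"(?P<level>[A-Z]+): "
    r"(?P<msg>.*)$"
)

line = "2023-07-10 23:08:00 [scrapy.core.engine] INFO: Closing spider (finished)"
m = LOG_RE.match(line)
print(m.group("logger"))  # scrapy.core.engine
print(m.group("level"))   # INFO
print(m.group("msg"))     # Closing spider (finished)
```

Filtering on the `logger` and `level` groups is a quick way to separate Scrapy's engine messages from Selenium's wire-level DEBUG chatter when reading long logs like these.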

Related

raise KeyError(f"{self.__class__.__name__} does not support field: {key}")
KeyError: 'NepuContentItem does not support field: publish_date'
2025-07-07 20:46:23 [nepudata] INFO: Started parsing page: https://www.nepu.edu.cn/gjjl/xjjl.htm
2025-07-07 20:46:23 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/gjjl/xjjl.htm> (referer: None)
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
    yield next(it)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepudata_spider.py", line 93, in parse
    item = self.extract_content(response)
  File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepudata_spider.py", line 175, in extract_content
    item['publish_date'] = publish_date
  File "D:\annaCONDA\Lib\site-packages\scrapy\item.py", line 85, in __setitem__
    raise KeyError(f"{self.__class__.__name__} does not support field: {key}")
KeyError: 'NepuContentItem does not support field: publish_date'
2025-07-07 20:46:23 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/yjsy/>: HTTP status code is not handled or not allowed
2025-07-07 20:46:24 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/jwc/>: HTTP status code is not handled or not allowed
2025-07-07 20:46:24 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsc/>: HTTP status code is not handled or not allowed
2025-07-07 20:46:25 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/rsc/>: HTTP status code is not handled or not allowed
2025-07-07 20:46:26 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/kyc/>: HTTP status code is not handled or not allowed
2025-07-07 20:46:26 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xxyw/?year=2020>: HTTP status code is not handled or not allowed
2025-07-07 20:46:27 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xxyw/?year=2021>: HTTP status code is not handled or not allowed
2025-07-07 20:46:28 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xxyw/?year=2022>: HTTP status code is not handled or not allowed
2025-07-07 20:46:28 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xxyw/?year=2023>: HTTP status code is not handled or not allowed
2025-07-07 20:46:29 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xxyw/?year=2024>: HTTP status code is not handled or not allowed
2025-07-07 20:46:29 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xxyw/?year=2025>: HTTP status code is not handled or not allowed
2025-07-07 20:46:30 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/tzgg/?year=2020>: HTTP status code is not handled or not allowed
2025-07-07 20:46:30 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/tzgg/?year=2021>: HTTP status code is not handled or not allowed
2025-07-07 20:46:31 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/tzgg/?year=2022>: HTTP status code is not handled or not allowed
2025-07-07 20:46:31 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/tzgg/?year=2023>: HTTP status code is not handled or not allowed
2025-07-07 20:46:32 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/tzgg/?year=2024>: HTTP status code is not handled or not allowed
2025-07-07 20:46:32 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/tzgg/?year=2025>: HTTP status code is not handled or not allowed
2025-07-07 20:46:33 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsdt/?year=2020>: HTTP status code is not handled or not allowed
2025-07-07 20:46:34 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsdt/?year=2021>: HTTP status code is not handled or not allowed
2025-07-07 20:46:34 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsdt/?year=2022>: HTTP status code is not handled or not allowed
2025-07-07 20:46:35 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsdt/?year=2023>: HTTP status code is not handled or not allowed
2025-07-07 20:46:36 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsdt/?year=2024>: HTTP status code is not handled or not allowed
2025-07-07 20:46:37 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.nepu.edu.cn/xsdt/?year=2025>: HTTP status code is not handled or not allowed
2025-07-07 20:46:37 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-07 20:46:37 [nepudata] INFO: Database connection closed
2025-07-07 20:46:37 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 10178,
 'downloader/request_count': 32,
 'downloader/request_method_count/GET': 32,
 'downloader/response_bytes': 117980,
 'downloader/response_count': 32,
 'downloader/response_status_count/200': 9,
 'downloader/response_status_count/404': 23,
 'elapsed_time_seconds': 23.999904,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2025, 7, 7, 12, 46, 37, 62452),
 'httpcompression/response_bytes': 187569,
 'httpcompression/response_count': 9,
 'httperror/response_ignored_count': 23,
 'httperror/response_ignored_status_count/404': 23,
 'log_count/ERROR': 9,
 'log_count/INFO': 45,
 'log_count/WARNING': 1,
 'response_received_count': 32,
 'scheduler/dequeued': 32,
 'scheduler/dequeued/memory': 32,
 'scheduler/enqueued': 32,
 'scheduler/enqueued/memory': 32,
 'spider_exceptions/KeyError': 9,
 'start_time': datetime.datetime(2025, 7, 7, 12, 46, 13, 62548)}
2025-07-07 20:46:37 [scrapy.core.engine] INFO: Spider closed (finished)
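The `KeyError` in the log above means the spider assigns `item['publish_date']`, but the `NepuContentItem` class never declared that field; `scrapy.Item` rejects keys that were not declared as `Field()` attributes. A stdlib-only mimic of that check (the field names other than `publish_date` are made up; the real fix is adding `publish_date = scrapy.Field()` to the item class):

```python
# Minimal re-creation of the check from scrapy/item.py quoted in the
# traceback: only keys declared in `fields` may be assigned.
class MiniItem:
    fields = {}

    def __init__(self):
        self._values = {}

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError(f"{self.__class__.__name__} does not support field: {key}")
        self._values[key] = value

class NepuContentItem(MiniItem):
    # 'publish_date' is intentionally missing, matching the failing spider.
    fields = {"title": {}, "content": {}, "url": {}}

item = NepuContentItem()
item["title"] = "ok"  # declared field: accepted
try:
    item["publish_date"] = "2025-07-07"
except KeyError as e:
    print(e)  # 'NepuContentItem does not support field: publish_date'
```

Note `'spider_exceptions/KeyError': 9` in the stats dump: the same undeclared-field assignment failed on all nine pages that parsed successfully.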

2025-07-06 22:15:17 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2025-07-06 22:15:17 [scrapy.extensions.telnet] INFO: Telnet Password: 94478ec1879f1a75
2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.throttle.AutoThrottle']
2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled item pipelines:
['nepu_spider.pipelines.ContentCleanPipeline',
 'nepu_spider.pipelines.DeduplicatePipeline',
 'nepu_spider.pipelines.SQLServerPipeline']
2025-07-06 22:15:17 [scrapy.core.engine] INFO: Spider opened
2025-07-06 22:15:17 [nepu_spider.pipelines] INFO: ✅ Database table 'knowledge_base' created or already exists
2025-07-06 22:15:17 [nepu_info] INFO: ✅ Successfully connected to the SQL Server database
2025-07-06 22:15:17 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-06 22:15:17 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in C:\Users\Lenovo\nepu_qa_project\.scrapy\httpcache
2025-07-06 22:15:17 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2025-07-06 22:15:17 [nepu_info] INFO: 🚀 Starting to crawl the Northeast Petroleum University website...
2025-07-06 22:15:17 [nepu_info] INFO: Initial URL count: 4
2025-07-06 22:15:17 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/robots.txt> from <GET http://www.nepu.edu.cn/robots.txt>
2025-07-06 22:15:24 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.nepu.edu.cn/robots.txt> (referer: None)
2025-07-06 22:15:24 [protego] DEBUG: Rule at line 12 without any user agent to enforce it on.
2025-07-06 22:15:24 [protego] DEBUG: Rule at line 13 without any user agent to enforce it on.
2025-07-06 22:15:24 [protego] DEBUG: Rule at line 14 without any user agent to enforce it on.
2025-07-06 22:15:24 [protego] DEBUG: Rule at line 15 without any user agent to enforce it on.
2025-07-06 22:15:24 [protego] DEBUG: Rule at line 16 without any user agent to enforce it on.
2025-07-06 22:15:28 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/index.htm> from <GET http://www.nepu.edu.cn/>
2025-07-06 22:15:34 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/tzgg.htm> from <GET http://www.nepu.edu.cn/tzgg.htm>
2025-07-06 22:15:38 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/xwzx.htm> from <GET http://www.nepu.edu.cn/xwzx.htm>
2025-07-06 22:15:45 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/xxgk.htm> from <GET http://www.nepu.edu.cn/xxgk.htm>
2025-07-06 22:15:51 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/index.htm> (referer: None)
2025-07-06 22:15:51 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.gov.cn': <GET https://www.gov.cn/gongbao/content/2001/content_61066.htm>
2025-07-06 22:15:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/tzgg.htm> (referer: None)
2025-07-06 22:16:01 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.nepu.edu.cn/xwzx.htm> (referer: None)
2025-07-06 22:16:01 [nepu_info] ERROR: Request failed: https://www.nepu.edu.cn/xwzx.htm | status: 404 | error: Ignoring non-200 response
2025-07-06 22:16:03 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.nepu.edu.cn/xxgk.htm> (referer: None)
2025-07-06 22:16:03 [nepu_info] ERROR: Request failed: https://www.nepu.edu.cn/xxgk.htm | status: 404 | error: Ignoring non-200 response
2025-07-06 22:16:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/28877.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:05 [nepu_info] ERROR: ❌ Parse failed: https://www.nepu.edu.cn/info/1049/28877.htm | error: Expected selector, got <DELIM '/' at 0>
Traceback (most recent call last):
  File "C:\Users\Lenovo\nepu_qa_project\nepu_spider\spiders\info_spider.py", line 148, in parse_item
    date_text = response.css(selector).get()
  File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css
    return self.selector.css(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css
    return self.xpath(self._css2xpath(query))
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath
    return self._csstranslator.css_to_xpath(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath
    return super(HTMLTranslator, self).css_to_xpath(css, prefix)
  File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath
    for selector in parse(css))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse
    return list(parse_selector_group(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group
    yield Selector(*parse_selector(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector
    result, pseudo_element = parse_simple_selector(stream)
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector
    raise SelectorSyntaxError(
cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:05 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.nepu.edu.cn/info/1049/28867.htm> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2025-07-06 22:16:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1313/16817.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:05 [nepu_info] ERROR: ❌ Parse failed: https://www.nepu.edu.cn/info/1313/16817.htm | error: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1313/17517.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:06 [nepu_info] ERROR: ❌ Parse failed: https://www.nepu.edu.cn/info/1313/17517.htm | error: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1313/19127.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:07 [nepu_info] ERROR: ❌ Parse failed: https://www.nepu.edu.cn/info/1313/19127.htm | error: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/28867.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:58 [nepu_info] ERROR: ❌ Parse failed: https://www.nepu.edu.cn/info/1049/28867.htm | error: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:58 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-06 22:16:58 [nepu_info] INFO: ✅ Database connection closed
2025-07-06 22:16:58 [nepu_info] INFO: 🛑 Spider finished, reason: finished
2025-07-06 22:16:58 [nepu_info] INFO: Total pages crawled: 86
2025-07-06 22:16:58 [scrapy.utils.signal] ERROR: Error caught on signal handler: <function Spider.close at 0x000001FF77BA2C00>
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 312, in maybeDeferred_coro
    result = f(*args, **kw)
  File "D:\annaCONDA\Lib\site-packages\pydispatch\robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "D:\annaCONDA\Lib\site-packages\scrapy\spiders\__init__.py", line 92, in close
    return closed(reason)
  File "C:\Users\Lenovo\nepu_qa_project\nepu_spider\spiders\info_spider.py", line 323, in closed
    json.dump(stats, f, ensure_ascii=False, indent=2)
  File "D:\annaCONDA\Lib\json\__init__.py", line 179, in dump
    for chunk in iterable:
  File "D:\annaCONDA\Lib\json\encoder.py", line 432, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "D:\annaCONDA\Lib\json\encoder.py", line 406, in _iterencode_dict
    yield from chunks
  File "D:\annaCONDA\Lib\json\encoder.py", line 406, in _iterencode_dict
    yield from chunks
  File "D:\annaCONDA\Lib\json\encoder.py", line 439, in _iterencode
    o = _default(o)
  File "D:\annaCONDA\Lib\json\encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type SettingsAttribute is not JSON serializable
2025-07-06 22:16:58 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 35146,
 'downloader/request_count': 96,
 'downloader/request_method_count/GET': 96,
 'downloader/response_bytes': 729404,
 'downloader/response_count': 96,
 'downloader/response_status_count/200': 88,
 'downloader/response_status_count/302': 5,
 'downloader/response_status_count/404': 3,
 'dupefilter/filtered': 184,
 'elapsed_time_seconds': 101.133916,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2025, 7, 6, 14, 16, 58, 758524),
 'httpcache/firsthand': 96,
 'httpcache/miss': 96,
 'httpcache/store': 96,
 'httpcompression/response_bytes': 2168438,
 'httpcompression/response_count': 88,
 'log_count/DEBUG': 120,
 'log_count/ERROR': 89,
 'log_count/INFO': 18,
 'log_count/WARNING': 1,
 'offsite/domains': 2,
 'offsite/filtered': 4,
 'request_depth_max': 3,
 'response_received_count': 91,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/404': 1,
 'scheduler/dequeued': 94,
 'scheduler/dequeued/memory': 94,
 'scheduler/enqueued': 94,
 'scheduler/enqueued/memory': 94,
 'start_time': datetime.datetime(2025, 7, 6, 14, 15, 17, 624608)}
2025-07-06 22:16:58 [scrapy.core.engine] INFO: Spider closed (finished)

(scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl nepu
2025-07-04 11:44:20 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: nepu_spider)
2025-07-04 11:44:20 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.10 1 Aug 2023), cryptography 41.0.3, Platform Windows-10-10.0.26100-SP0
2025-07-04 11:44:20 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'nepu_spider',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'NEWSPIDER_MODULE': 'nepu_spider.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'SPIDER_MODULES': ['nepu_spider.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2025-07-04 11:44:20 [asyncio] DEBUG: Using selector: SelectSelector
2025-07-04 11:44:20 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-07-04 11:44:20 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-07-04 11:44:20 [scrapy.extensions.telnet] INFO: Telnet Password: 97bca17b548b5608
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled item pipelines:
['nepu_spider.pipelines.NepuSpiderPipeline']
2025-07-04 11:44:20 [scrapy.core.engine] INFO: Spider opened
2025-07-04 11:44:21 [nepu] INFO: 🆕 Table NewsArticles created successfully or already exists
2025-07-04 11:44:21 [nepu] INFO: ✅ Successfully connected to the SQL Server database
2025-07-04 11:44:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-04 11:44:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.nepu.edu.cn/xsdt.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9837.htm> from <GET http://www.nepu.edu.cn/info/1049/9837.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9836.htm> from <GET http://www.nepu.edu.cn/info/1049/9836.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9812.htm> from <GET http://www.nepu.edu.cn/info/1049/9812.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9815.htm> from <GET http://www.nepu.edu.cn/info/1049/9815.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9809.htm> from <GET http://www.nepu.edu.cn/info/1049/9809.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9808.htm> from <GET http://www.nepu.edu.cn/info/1049/9808.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/10155.htm> from <GET http://www.nepu.edu.cn/info/1049/10155.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/10129.htm> from <GET http://www.nepu.edu.cn/info/1049/10129.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9813.htm> from <GET http://www.nepu.edu.cn/info/1049/9813.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/10162.htm> from <GET http://www.nepu.edu.cn/info/1049/10162.htm>
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9812.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9809.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9837.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/10155.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9815.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9813.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9836.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/10162.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/10129.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9808.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9812.htm> (referer: None)
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
    yield next(it)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail
    text = response.css(selector).get()
  File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css
    return self.selector.css(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css
    return self.xpath(self._css2xpath(query))
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath
    return self._csstranslator.css_to_xpath(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line
107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9809.htm> (referer: None) Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File 
"D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return 
list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9837.htm> (referer: None) Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in 
<genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/10155.htm> (referer: None) Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9815.htm> (referer: None) Traceback (most recent call last): File 
"D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return 
self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9813.htm> (referer: None) Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 
104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", 
line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9836.htm> (referer: None) Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File 
"D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return 
list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector raise SelectorSyntaxError( cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0> 2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/10129.htm> (referer: None) Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback yield next(it) ^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in 
<genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync for r in iterable: File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail text = response.css(selector).get() ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css return self.selector.css(query) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css return self.xpath(self._css2xpath(query)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath return self._csstranslator.css_to_xpath(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath return super(HTMLTranslator, self).css_to_xpath(css, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath for selector in parse(css)) ^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse return list(parse_selector_group(stream)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group yield Selector(*parse_selector(stream)) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector result, pseudo_element = parse_simple_selector(stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
```text
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector
    raise SelectorSyntaxError(
cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/10162.htm> (referer: None)
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
    yield next(it)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail
    text = response.css(selector).get()
  File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css
    return self.selector.css(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css
    return self.xpath(self._css2xpath(query))
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath
    return self._csstranslator.css_to_xpath(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath
    return super(HTMLTranslator, self).css_to_xpath(css, prefix)
  File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath
    for selector in parse(css))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse
    return list(parse_selector_group(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group
    yield Selector(*parse_selector(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector
    result, pseudo_element = parse_simple_selector(stream)
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector
    raise SelectorSyntaxError(
cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9808.htm> (referer: None)
Traceback (most recent call last):
  [same frames as above, again ending in]
cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
2025-07-04 11:44:21 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-04 11:44:21 [nepu] INFO: 🔌 已安全关闭数据库连接
2025-07-04 11:44:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 5100,
 'downloader/request_count': 21,
 'downloader/request_method_count/GET': 21,
 'downloader/response_bytes': 93797,
 'downloader/response_count': 21,
 'downloader/response_status_count/200': 11,
 'downloader/response_status_count/302': 10,
 'elapsed_time_seconds': 0.502389,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2025, 7, 4, 3, 44, 21, 531133),
 'httpcompression/response_bytes': 251471,
 'httpcompression/response_count': 11,
 'log_count/DEBUG': 24,
 'log_count/ERROR': 10,
 'log_count/INFO': 13,
 'request_depth_max': 1,
 'response_received_count': 11,
 'scheduler/dequeued': 21,
 'scheduler/dequeued/memory': 21,
 'scheduler/enqueued': 21,
 'scheduler/enqueued/memory': 21,
 'spider_exceptions/SelectorSyntaxError': 10,
 'start_time': datetime.datetime(2025, 7, 4, 3, 44, 21, 28744)}
2025-07-04 11:44:21 [scrapy.core.engine] INFO: Spider closed (finished)

(scrapy_env) C:\Users\Lenovo\nepu_spider>
```
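Every `SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>` above comes from the same bug: an XPath expression (one that starts with `/`) is being passed to `response.css()` in `parse_detail`, and cssselect cannot parse a leading `/`. The fix is to send XPath strings to `.xpath()` and CSS strings to `.css()`. A minimal sketch of such a dispatch helper — the stub classes here are hypothetical stand-ins for Scrapy's real `Response`/`SelectorList`, used only so the routing logic is visible:

```python
class FakeSelectorList:
    """Stand-in for parsel's SelectorList; records which API handled the query."""
    def __init__(self, kind):
        self.kind = kind

    def get(self):
        return self.kind


class FakeResponse:
    """Stand-in for scrapy.http.TextResponse (illustration only)."""
    def xpath(self, query):
        return FakeSelectorList("xpath")

    def css(self, query):
        return FakeSelectorList("css")


def extract_first(response, selector):
    """Route XPath expressions to .xpath() and CSS selectors to .css().

    An XPath expression starts with '/' or '(' -- exactly the character
    that makes cssselect raise SelectorSyntaxError inside .css().
    """
    if selector.lstrip().startswith(("/", "(")):
        return response.xpath(selector).get()
    return response.css(selector).get()


resp = FakeResponse()
print(extract_first(resp, "//div[@class='content']//text()"))  # routed to .xpath()
print(extract_first(resp, "div.content::text"))                # routed to .css()
```

In the real spider the simplest fix is to change line 33 of `nepu.py` to `response.xpath(selector)` for the selectors that are XPath, or to use a helper like the one above if the selector list mixes both styles.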

```text
2025-07-07 15:39:05 [scrapy.utils.log] INFO: Scrapy 2.13.3 started (bot: scrapybot)
2025-07-07 15:39:05 [scrapy.utils.log] INFO: Versions: {'lxml': '6.0.0', 'libxml2': '2.11.9', 'cssselect': '1.3.0', 'parsel': '1.10.0', 'w3lib': '2.3.1', 'Twisted': '25.5.0', 'Python': '3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]', 'pyOpenSSL': '25.1.0 (OpenSSL 3.5.1 1 Jul 2025)', 'cryptography': '45.0.5', 'Platform': 'Windows-10-10.0.22631-SP0'}
Traceback (most recent call last):
  File "D:\python\python3.11.5\Lib\site-packages\scrapy\spiderloader.py", line 106, in load
    return self._spiders[spider_name]
KeyError: 'faq_spider'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\code\nepu_spider\run_all_spiders.py", line 15, in <module>
    process.crawl(name)
  File "D:\python\python3.11.5\Lib\site-packages\scrapy\crawler.py", line 338, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "D:\python\python3.11.5\Lib\site-packages\scrapy\crawler.py", line 374, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "D:\python\python3.11.5\Lib\site-packages\scrapy\crawler.py", line 458, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "D:\python\python3.11.5\Lib\site-packages\scrapy\spiderloader.py", line 108, in load
    raise KeyError(f"Spider not found: {spider_name}")
KeyError: 'Spider not found: faq_spider'
```

```text
(scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl nepu
2025-07-04 10:50:26 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: nepu_spider)
2025-07-04 10:50:26 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.10 1 Aug 2023), cryptography 41.0.3, Platform Windows-10-10.0.26100-SP0
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 77, in load
    return self._spiders[spider_name]
KeyError: 'nepu'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\annaCONDA\Scripts\scrapy-script.py", line 10, in <module>
    sys.exit(execute())
  File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 158, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 111, in _run_print_help
    func(*a, **kw)
  File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 166, in _run_command
    cmd.run(args, opts)
  File "D:\annaCONDA\Lib\site-packages\scrapy\commands\crawl.py", line 24, in run
    crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
  File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 232, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 266, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 346, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 79, in load
    raise KeyError(f"Spider not found: {spider_name}")
KeyError: 'Spider not found: nepu'
```

```text
(scrapy_env) C:\Users\Lenovo\nepu_qa_project>scrapy crawl nepu_info
2025-07-06 22:49:54 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: nepu_spider)
2025-07-06 22:49:54 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.10 1 Aug 2023), cryptography 41.0.3, Platform Windows-10-10.0.26100-SP0
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 77, in load
    return self._spiders[spider_name]
KeyError: 'nepu_info'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  [same call chain through cmdline.py, crawl.py and crawler.py, ending in]
  File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 79, in load
    raise KeyError(f"Spider not found: {spider_name}")
KeyError: 'Spider not found: nepu_info'
```
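All three `KeyError: 'Spider not found: …'` runs fail the same way: the name given to `scrapy crawl` (or to `process.crawl(name)` in `run_all_spiders.py`) matches no registered spider. Scrapy scans the modules listed in `SPIDER_MODULES`, indexes every `scrapy.Spider` subclass by its `name` attribute, and looks the requested name up in that dict. So check that the spider file lives inside the project's `spiders` package, that the class defines `name = "nepu"` (etc.), and that the command is run from the directory containing `scrapy.cfg`; `scrapy list` prints what was actually registered. A toy sketch of the lookup — class names here are hypothetical stand-ins, not Scrapy's API:

```python
class MiniSpiderLoader:
    """Toy stand-in for scrapy.spiderloader.SpiderLoader."""

    def __init__(self, spider_classes):
        # Index spiders by their `name` attribute, the way Scrapy does
        # after importing the modules listed in SPIDER_MODULES.
        self._spiders = {cls.name: cls for cls in spider_classes}

    def load(self, spider_name):
        try:
            return self._spiders[spider_name]
        except KeyError:
            # This is the message seen in the tracebacks above.
            raise KeyError(f"Spider not found: {spider_name}") from None


class NepuSpider:            # stand-in for a scrapy.Spider subclass
    name = "nepu"            # this string is what `scrapy crawl nepu` looks up


loader = MiniSpiderLoader([NepuSpider])
print(loader.load("nepu"))   # found: the class registered under name "nepu"
```

If the spider class exists but `scrapy crawl` still cannot find it, the usual causes are running from the wrong directory (so the project's `settings.py` and `SPIDER_MODULES` are never loaded) or an import error inside the spider module that silently prevents registration.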

```text
scrapy.exceptions.NotSupported: Response content isn't text
2025-07-03 18:03:25 [scrapy.core.downloader.handlers.http11] WARNING: Expected response size (74394949) larger than download warn size (33554432) in request <GET https://www.nepu.edu.cn/system/_content/download.jsp?urltype=news.DownloadAttachUrl&owner=1598682733&wbfileid=4A9D4034D4D267393F4383B735426785>.
2025-07-03 18:03:25 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/system/_content/download.jsp?urltype=news.DownloadAttachUrl&owner=1598682733&wbfileid=B46A9651835E933A9B0C365BFA063965> (referer: https://www.nepu.edu.cn/info/1251/8720.htm)
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
    yield next(it)
  [spider-middleware frames: offsite.py, referer.py, urllength.py, depth.py]
  File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 11, in parse
    title = response.xpath('//title/text()').get()
  File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\__init__.py", line 149, in xpath
    raise NotSupported("Response content isn't text")
scrapy.exceptions.NotSupported: Response content isn't text
```
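The `NotSupported` error is a different problem from the others: those `download.jsp` URLs return binary attachments, so Scrapy builds a plain `Response` rather than a `TextResponse`, and the base class raises as soon as `.xpath()` or `.css()` is called (visible in the traceback: `response/__init__.py` raises `NotSupported("Response content isn't text")`). In a real spider the guard is `isinstance(response, scrapy.http.TextResponse)`, or filtering attachment URLs out before requesting them. A minimal sketch with stub classes standing in for Scrapy's response types:

```python
class Response:
    """Stand-in for scrapy.http.Response: binary body, no selector support."""


class TextResponse(Response):
    """Stand-in for scrapy.http.TextResponse: HTML/XML/JSON bodies."""

    def xpath(self, query):
        # A real TextResponse would run the query; return a fixed value here.
        return "NEPU News"


def parse_title(response):
    """Skip binary responses instead of letting .xpath() raise NotSupported."""
    if not isinstance(response, TextResponse):
        return None          # e.g. a downloaded attachment -- nothing to parse
    return response.xpath("//title/text()")


print(parse_title(TextResponse()))   # parsed normally
print(parse_title(Response()))       # skipped, no exception raised
```

The `download warn size` warning in the same log is related: a 74 MB attachment was being downloaded just to be discarded, so skipping (or never scheduling) those URLs also saves bandwidth.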
