How do I add a Spider via Scrapy's settings.py?

In Scrapy, `settings.py` is the file that holds the project-wide configuration. Spiders are not registered there one by one; what you add to `settings.py` is the location Scrapy should scan for them:

1. First, make sure the spider is defined in its own module, e.g. `my_spiders.py`, and subclasses `scrapy.Spider`:

```python
# my_spiders.py
import scrapy

class MyNewSpider(scrapy.Spider):
    name = 'new_spider'                   # the name used by `scrapy crawl`
    start_urls = ['http://example.com']   # list of initial URLs

    def parse(self, response):
        # required callback that handles each downloaded response
        ...
```

2. In `settings.py`, set `SPIDER_MODULES` to the package that contains your spider modules. Assuming the spiders live under `src/spiders`:

```python
# src/settings.py
SPIDER_MODULES = ['src.spiders']
NEWSPIDER_MODULE = 'src.spiders'   # also where `scrapy genspider` puts new spiders
```

Now when you run `scrapy crawl new_spider`, Scrapy will find `MyNewSpider` and run the new crawl with the configuration defined on it.
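Once both steps are done, `scrapy list` should print `new_spider`, confirming the spider was discovered. If you want to start it from a script rather than the CLI, Scrapy's `CrawlerProcess` can do that too; below is a minimal sketch reusing the class from step 1 (the `my_spiders` module path is an assumption carried over from the example above):

```python
# run_spider.py - start MyNewSpider programmatically (sketch)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from my_spiders import MyNewSpider  # hypothetical module from step 1

if __name__ == '__main__':
    process = CrawlerProcess(get_project_settings())  # loads settings.py
    process.crawl(MyNewSpider)
    process.start()  # blocks until the crawl finishes
```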
Related questions

How do I disable a setting in Scrapy's settings.py?

In Scrapy, `settings.py` is the project's configuration file, where the crawler's global settings are defined. If you want to disable a particular setting temporarily, instead of deleting it or commenting out the whole file, you can use `os.environ` to read an environment variable at startup and let it override the value in `settings.py`. For example, to disable the `FEED_FORMAT` setting, set a custom environment variable when you launch the crawl:

```bash
$ SCRAPY_SETTINGS_MODULE=your_project_name.settings DISABLED_FEED_FORMAT=True scrapy crawl your_spider_name
```

Here `DISABLED_FEED_FORMAT` is a custom environment-variable name, and `True` signals that you want `FEED_FORMAT` disabled. Then, in `settings.py`, check the variable to decide whether to apply the setting:

```python
import os

if os.environ.get('DISABLED_FEED_FORMAT'):
    FEED_FORMAT = None        # effectively disable the feed export format
else:
    FEED_FORMAT = 'json'      # your default format
```

This adjusts `FEED_FORMAT` dynamically according to the environment variable. That said, the approach is not recommended for long-term use, because it makes the code harder to understand and maintain. As a rule, avoid patching core configuration directly; if you really need a setting to vary, consider restructuring your code or using a more suitable mechanism such as middlewares.
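Note that Scrapy also has built-in ways to override settings per run, which are usually cleaner than environment-variable checks: the `-s` option of `scrapy crawl` overrides any setting for that invocation (e.g. `scrapy crawl your_spider_name -s LOG_LEVEL=WARNING`), and a spider class can carry its own overrides in `custom_settings`. A minimal sketch, with a hypothetical spider name:

```python
import scrapy

class QuietSpider(scrapy.Spider):
    name = 'quiet_spider'                  # hypothetical example spider
    start_urls = ['http://example.com']

    # Applied on top of settings.py for this spider only;
    # command-line -s overrides still take precedence.
    custom_settings = {
        'LOG_LEVEL': 'WARNING',
        'DOWNLOAD_DELAY': 2,
    }

    def parse(self, response):
        yield {'url': response.url}
```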

```log
2025-07-08 15:43:37 [scrapy.utils.log] INFO: Scrapy 2.13.3 started (bot: scrapybot)
2025-07-08 15:43:37 [scrapy.utils.log] INFO: Versions: {'lxml': '6.0.0', 'libxml2': '2.11.9', 'cssselect': '1.3.0', 'parsel': '1.10.0', 'w3lib': '2.3.1', 'Twisted': '25.5.0', 'Python': '3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]', 'pyOpenSSL': '25.1.0 (OpenSSL 3.5.1 1 Jul 2025)', 'cryptography': '45.0.5', 'Platform': 'Windows-10-10.0.22631-SP0'}
2025-07-08 15:43:37 [scrapy.addons] INFO: Enabled addons: []
2025-07-08 15:43:37 [asyncio] DEBUG: Using selector: SelectSelector
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet Password: 8a6ca1391bfb9949
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2025-07-08 15:43:37 [scrapy.crawler] INFO: Overridden settings: {'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'nepu_spider.spiders', 'SPIDER_MODULES': ['nepu_spider.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'}
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.start.StartSpiderMiddleware', 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled item pipelines: ['nepu_spider.pipelines.MultiJsonPipeline']
2025-07-08 15:43:37 [scrapy.addons] INFO: Enabled addons: []
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet Password: 671a36aa7bc330e0
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2025-07-08 15:43:37 [scrapy.crawler] INFO: Overridden settings: {'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'nepu_spider.spiders', 'SPIDER_MODULES': ['nepu_spider.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'}
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.start.StartSpiderMiddleware', 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled item pipelines: ['nepu_spider.pipelines.MultiJsonPipeline']
2025-07-08 15:43:37 [scrapy.addons] INFO: Enabled addons: []
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet Password: 76f044bac415a70c
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2025-07-08 15:43:37 [scrapy.crawler] INFO: Overridden settings: {'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'nepu_spider.spiders', 'SPIDER_MODULES': ['nepu_spider.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'}
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.start.StartSpiderMiddleware', 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled item pipelines: ['nepu_spider.pipelines.MultiJsonPipeline']
2025-07-08 15:43:37 [scrapy.addons] INFO: Enabled addons: []
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-07-08 15:43:37 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet Password: fc500ad4454da624
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2025-07-08 15:43:37 [scrapy.crawler] INFO: Overridden settings: {'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'nepu_spider.spiders', 'SPIDER_MODULES': ['nepu_spider.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'}
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.start.StartSpiderMiddleware', 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-08 15:43:37 [scrapy.middleware] INFO: Enabled item pipelines: ['nepu_spider.pipelines.MultiJsonPipeline']
2025-07-08 15:43:37 [scrapy.core.engine] INFO: Spider opened
2025-07-08 15:43:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-08 15:43:37 [scrapy.core.engine] INFO: Spider opened
2025-07-08 15:43:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-08 15:43:37 [scrapy.core.engine] INFO: Spider opened
2025-07-08 15:43:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-08 15:43:37 [scrapy.core.engine] INFO: Spider opened
2025-07-08 15:43:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6025
2025-07-08 15:43:37 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6026
2025-07-08 15:43:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://xxgk.nepu.edu.cn/xxgklm/xxgk.htm> (referer: None)
2025-07-08 15:43:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/jgsz/jxdw.htm> (referer: None)
2025-07-08 15:43:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://zsxxw.nepu.edu.cn/> (referer: None)
2025-07-08 15:43:37 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/xxgk/xxjj.htm> (referer: None)
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-08 15:43:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 314, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 4815, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.265455, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2025, 7, 8, 7, 43, 38, 4643, tzinfo=datetime.timezone.utc), 'httpcompression/response_bytes': 18235, 'httpcompression/response_count': 1, 'items_per_minute': None, 'log_count/DEBUG': 8, 'log_count/INFO': 26, 'response_received_count': 1, 'responses_per_minute': None, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2025, 7, 8, 7, 43, 37, 739188, tzinfo=datetime.timezone.utc)}
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Spider closed (finished)
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-08 15:43:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 311, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 5880, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.282532, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2025, 7, 8, 7, 43, 38, 21720, tzinfo=datetime.timezone.utc), 'httpcompression/response_bytes': 18387, 'httpcompression/response_count': 1, 'items_per_minute': None, 'log_count/DEBUG': 6, 'log_count/INFO': 22, 'response_received_count': 1, 'responses_per_minute': None, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2025, 7, 8, 7, 43, 37, 739188, tzinfo=datetime.timezone.utc)}
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Spider closed (finished)
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-08 15:43:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 300, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 9026, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.284539, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2025, 7, 8, 7, 43, 38, 22730, tzinfo=datetime.timezone.utc), 'httpcompression/response_bytes': 32943, 'httpcompression/response_count': 1, 'items_per_minute': None, 'log_count/DEBUG': 10, 'log_count/INFO': 39, 'response_received_count': 1, 'responses_per_minute': None, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2025, 7, 8, 7, 43, 37, 738191, tzinfo=datetime.timezone.utc)}
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Spider closed (finished)
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-08 15:43:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 311, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 9736, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.285536, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2025, 7, 8, 7, 43, 38, 22730, tzinfo=datetime.timezone.utc), 'httpcompression/response_bytes': 25723, 'httpcompression/response_count': 1, 'items_per_minute': None, 'log_count/DEBUG': 13, 'log_count/INFO': 49, 'response_received_count': 1, 'responses_per_minute': None, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2025, 7, 8, 7, 43, 37, 737194, tzinfo=datetime.timezone.utc)}
2025-07-08 15:43:38 [scrapy.core.engine] INFO: Spider closed (finished)
```
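One structural observation before the analysis below: the paste contains four `Telnet Password` lines, four telnet consoles (ports 6023-6026), four `Spider opened` lines, and four stats dumps, i.e. a single process launched four crawlers at once. A launch script along the following lines would produce exactly this shape of log; the spider module and class names here are hypothetical stand-ins, not the real `nepu_spider` code:

```python
# multi_crawl.py - sketch of one process starting several crawlers,
# matching the four telnet consoles / four stats dumps in the log above
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from nepu_spider.spiders.example import (  # hypothetical module and classes
    InfoSpider, NewsSpider, AdmissionsSpider, OrgSpider,
)

if __name__ == '__main__':
    process = CrawlerProcess(get_project_settings())
    for spider_cls in (InfoSpider, NewsSpider, AdmissionsSpider, OrgSpider):
        process.crawl(spider_cls)   # each call creates its own Crawler
    process.start()                 # runs all four concurrently
```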

### A guide to analyzing Scrapy 2.13.3 crawl logs

The Scrapy log is the core tool for monitoring how a crawl is running. Here is how to read the key records, followed by tuning advice:

#### 1. Core log structure

1. **Framework startup records**
   ```log
   [scrapy.utils.log] INFO: Scrapy 2.13.3 started (bot: your_project)
   [scrapy.utils.log] INFO: Versions: lxml 4.9.4.0, libxml2 2.9.14, ...
   ```
   - Check that dependency versions match (e.g. lxml/libxml2)
   - An `ImportError` at this stage usually points at the virtual-environment setup

2. **Request scheduling records**
   ```log
   [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.com/page1>
   [scrapy.core.engine] DEBUG: Crawled (404) <GET https://example.com/invalid_page>
   ```
   - Reading the HTTP status codes:
     - `200`: successful response
     - `404`: page not found (check the URL-generation logic)
     - `403`: forbidden (anti-bot measures may have been triggered)
     - `503`: service unavailable (add a retry mechanism)

3. **Item processing records**
   ```log
   [scrapy.core.scraper] DEBUG: Scraped from <200 https://example.com/item>
   {'title': 'Sample', 'price': 100}
   [scrapy.core.scraper] ERROR: Spider error processing <GET https://example.com/bad_page>
   ```
   - When an `ERROR` appears, check whether:
     - the XPath/CSS selectors have gone stale
     - an item pipeline raised an exception

4. **Performance statistics**
   ```log
   [scrapy.extensions.logstats] INFO: Crawled 120 pages (at 20 pages/min)
   [scrapy.core.engine] INFO: Closing spider (finished)
   ```
   - Key metrics:
     - **crawl rate**: `pages/min`
     - **items produced**: `item_scraped_count`
     - **request depth**: `request_depth_max`
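To connect these record types to actual spider code, here is a minimal, hypothetical spider (the name `demo` and `http://example.com` are placeholders). Running it with `scrapy crawl demo` produces the startup, `Crawled (200)`, and `Scraped from` records discussed above:

```python
import scrapy

class DemoSpider(scrapy.Spider):
    """Hypothetical spider used only to illustrate the log records above."""
    name = 'demo'
    start_urls = ['http://example.com']

    custom_settings = {'LOG_LEVEL': 'DEBUG'}  # show the DEBUG-level crawl records

    def parse(self, response):
        # Yielding an item emits a "Scraped from <200 ...>" DEBUG record
        yield {'title': response.css('title::text').get()}
        # Messages logged here appear under this spider's own logger name
        self.logger.info("parsed %s", response.url)
```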
#### 2. Diagnosing typical problems

1. **Signs of anti-bot blocking**
   ```log
   [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET url> (failed 3 times)
   [scrapy.core.scraper] ERROR: 403 Forbidden
   ```
   **Fixes**:
   - Enable random User-Agents:
     ```python
     DOWNLOADER_MIDDLEWARES = {
         'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
         'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
     }
     ```
   - Add a proxy middleware

2. **Extraction failures**
   ```log
   [scrapy.core.scraper] ERROR: ValueError('Missing required field')
   ```
   **Fixes**:
   - Debug the selectors live in Scrapy shell:
     ```bash
     scrapy shell 'https://target-url.com'
     >>> response.css('div.price::text').get()
     ```
   - Add error handling:
     ```python
     def parse(self, response):
         try:
             yield {'price': response.css('div.price::text').get()}
         except Exception as e:
             self.logger.error(f"Extraction failed: {e}")
     ```

3. **Locating performance bottlenecks**
   If the `DEBUG: Crawled` records arrive at long, uneven intervals, responses are coming back slowly.
   **Tuning**:
   - Adjust the concurrency parameters:
     ```python
     CONCURRENT_REQUESTS = 32  # default is 16
     DOWNLOAD_DELAY = 0.5      # avoid getting banned
     ```
   - Enable auto-throttling:
     ```python
     AUTOTHROTTLE_ENABLED = True
     AUTOTHROTTLE_TARGET_CONCURRENCY = 4.0
     ```

#### 3. Log configuration

1. **Log level** (`settings.py`):
   ```python
   LOG_LEVEL = 'INFO'       # DEBUG/INFO/WARNING/ERROR
   LOG_FILE = 'crawl.log'   # write the log to a file
   ```
2. **Stats extensions**:
   ```python
   EXTENSIONS = {
       'scrapy.extensions.corestats.CoreStats': 500,
       'scrapy.extensions.memusage.MemoryUsage': 501,
   }
   ```
   Sample output:
   ```log
   [scrapy.extensions.memusage] INFO: Memory usage: 1024MB
   [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_count': 120}
   ```
3. **Custom log format**:
   ```python
   LOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
   ```

#### 4. Advanced analysis tools

1. **Live monitoring over Telnet**:
   ```log
   [scrapy.extensions.telnet] INFO: Telnet console on 6023
   ```
   Connect and inspect:
   ```bash
   telnet localhost 6023
   >>> est()    # engine status
   >>> prefs()  # performance statistics
   ```
2. **Reading metrics through the Stats API**:
   ```python
   class MySpider(scrapy.Spider):
       def closed(self, reason):
           stats = self.crawler.stats.get_stats()
           print(f"Total requests: {stats.get('downloader/request_count')}")
   ```

---

### Related questions

1. How do I deal with large numbers of `Ignored duplicate request` warnings in a Scrapy log?
2. What causes an abnormally high `item_dropped_count` in Scrapy's stats?
3. How do I configure automatic log rotation and archiving in Scrapy?
4. How do I optimize when Telnet monitoring shows a persistently high `engine.slot.inprogress`?
5. How do I fix the `ReactorNotRestartable` error in a Scrapy log?
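As a concrete starting point for the first related question (the distribution of HTTP status codes), a short offline script can tally the codes in a saved log file; the `crawl.log` file name and the default record layout are assumptions consistent with the `LOG_FILE`/`LOG_FORMAT` examples above:

```python
import re
from collections import Counter

# Matches records such as: "... DEBUG: Crawled (200) <GET https://...>"
CRAWLED = re.compile(r"Crawled \((\d{3})\)")

def status_distribution(path: str = "crawl.log") -> Counter:
    """Count HTTP status codes in a Scrapy log file (assumed default format)."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            match = CRAWLED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for status, count in status_distribution().most_common():
        print(f"{status}: {count}")
```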

相关推荐

Traceback (most recent call last): File "/root/anaconda3/bin/scrapy", line 8, in <module> sys.exit(execute()) File "/root/anaconda3/lib/python3.9/site-packages/scrapy/cmdline.py", line 160, in execute cmd.crawler_process = CrawlerProcess(settings) File "/root/anaconda3/lib/python3.9/site-packages/scrapy/crawler.py", line 332, in __init__ super().__init__(settings) File "/root/anaconda3/lib/python3.9/site-packages/scrapy/crawler.py", line 202, in __init__ self.spider_loader = self._get_spider_loader(settings) File "/root/anaconda3/lib/python3.9/site-packages/scrapy/crawler.py", line 196, in _get_spider_loader return loader_cls.from_settings(settings.frozencopy()) File "/root/anaconda3/lib/python3.9/site-packages/scrapy/spiderloader.py", line 72, in from_settings return cls(settings) File "/root/anaconda3/lib/python3.9/site-packages/scrapy/spiderloader.py", line 27, in __init__ self._load_all_spiders() File "/root/anaconda3/lib/python3.9/site-packages/scrapy/spiderloader.py", line 56, in _load_all_spiders for module in walk_modules(name): File "/root/anaconda3/lib/python3.9/site-packages/scrapy/utils/misc.py", line 106, in walk_modules submod = import_module(fullpath) File "/root/anaconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/house_info/house_info/spiders/HouseSpyder.py", line 2, in <module> from scrapy_redis.spiders import RedisSpider ModuleNotFoundError: No module named 'scrapy_

(scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl nepu Traceback (most recent call last): File "D:\annaCONDA\Scripts\scrapy-script.py", line 10, in <module> sys.exit(execute()) ^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 157, in execute cmd.crawler_process = CrawlerProcess(settings) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 318, in __init__ super().__init__(settings) File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 190, in __init__ self.spider_loader = self._get_spider_loader(settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 184, in _get_spider_loader return loader_cls.from_settings(settings.frozencopy()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 69, in from_settings return cls(settings) ^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 24, in __init__ self._load_all_spiders() File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 53, in _load_all_spiders for module in walk_modules(name): ^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\misc.py", line 87, in walk_modules submod = import_module(fullpath) ^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1204, in _gcd_import File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 2, in <module> from nepu_Spider.items import NepuSpiderItem # 导入定义好的 Item ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'nepu_Spider' (scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl nepu Traceback (most recent call last): File "D:\annaCONDA\Scripts\scrapy-script.py", line 10, in <module> sys.exit(execute()) ^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 157, in execute cmd.crawler_process = CrawlerProcess(settings) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 318, in __init__ super().__init__(settings) File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 190, in __init__ self.spider_loader = self._get_spider_loader(settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 184, in _get_spider_loader return loader_cls.from_settings(settings.frozencopy()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 69, in from_settings return cls(settings) ^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 24, in __init__ self._load_all_spiders() File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 53, in _load_all_spiders for module in walk_modules(name): ^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\utils\misc.py", line 87, in walk_modules submod = import_module(fullpath) ^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\importlib\__init__.py", line 126, 
in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1204, in _gcd_import File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 3, in <module> from nepu_spider.items import NewsItem ImportError: cannot import name 'NewsItem' from 'nepu_spider.items' (C:\Users\Lenovo\nepu_spider\nepu_spider\items.py)

(scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl nepu 2025-07-04 10:50:26 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: nepu_spider) 2025-07-04 10:50:26 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.10 1 Aug 2023), cryptography 41.0.3, Platform Windows-10-10.0.26100-SP0 Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 77, in load return self._spiders[spider_name] ~~~~~~~~~~~~~^^^^^^^^^^^^^ KeyError: 'nepu' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\annaCONDA\Scripts\scrapy-script.py", line 10, in <module> sys.exit(execute()) ^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 158, in execute _run_print_help(parser, _run_command, cmd, args, opts) File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 111, in _run_print_help func(*a, **kw) File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 166, in _run_command cmd.run(args, opts) File "D:\annaCONDA\Lib\site-packages\scrapy\commands\crawl.py", line 24, in run crawl_defer = self.crawler_process.crawl(spname, **opts.spargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 232, in crawl crawler = self.create_crawler(crawler_or_spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 266, in create_crawler return self._create_crawler(crawler_or_spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 346, in _create_crawler spidercls = self.spider_loader.load(spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 79, in load raise KeyError(f"Spider not found: {spider_name}") KeyError: 'Spider not found: nepu'

(scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl test -o test_output.json Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 77, in load return self._spiders[spider_name] ~~~~~~~~~~~~~^^^^^^^^^^^^^ KeyError: 'test' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\annaCONDA\Scripts\scrapy-script.py", line 10, in <module> sys.exit(execute()) ^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 158, in execute _run_print_help(parser, _run_command, cmd, args, opts) File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 111, in _run_print_help func(*a, **kw) File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 166, in _run_command cmd.run(args, opts) File "D:\annaCONDA\Lib\site-packages\scrapy\commands\crawl.py", line 24, in run crawl_defer = self.crawler_process.crawl(spname, **opts.spargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 232, in crawl crawler = self.create_crawler(crawler_or_spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 266, in create_crawler return self._create_crawler(crawler_or_spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 346, in _create_crawler spidercls = self.spider_loader.load(spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 79, in load raise KeyError(f"Spider not found: {spider_name}") KeyError: 'Spider not found: test'

(scrapy_env) C:\Users\Lenovo\nepu_qa_project>scrapy crawl nepu_info 2025-07-06 22:49:54 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: nepu_spider) 2025-07-06 22:49:54 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.10 1 Aug 2023), cryptography 41.0.3, Platform Windows-10-10.0.26100-SP0 Traceback (most recent call last): File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 77, in load return self._spiders[spider_name] ~~~~~~~~~~~~~^^^^^^^^^^^^^ KeyError: 'nepu_info' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\annaCONDA\Scripts\scrapy-script.py", line 10, in <module> sys.exit(execute()) ^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 158, in execute _run_print_help(parser, _run_command, cmd, args, opts) File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 111, in _run_print_help func(*a, **kw) File "D:\annaCONDA\Lib\site-packages\scrapy\cmdline.py", line 166, in _run_command cmd.run(args, opts) File "D:\annaCONDA\Lib\site-packages\scrapy\commands\crawl.py", line 24, in run crawl_defer = self.crawler_process.crawl(spname, **opts.spargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 232, in crawl crawler = self.create_crawler(crawler_or_spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 266, in create_crawler return self._create_crawler(crawler_or_spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\crawler.py", line 346, in _create_crawler spidercls = self.spider_loader.load(spidercls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\annaCONDA\Lib\site-packages\scrapy\spiderloader.py", line 79, in load raise KeyError(f"Spider not found: {spider_name}") KeyError: 'Spider not found: nepu_info'

# settings.py 添加配置 SELENIUM_DRIVER_NAME = 'chrome' SELENIUM_DRIVER_EXECUTABLE_PATH = '/path/to/chromedriver' DOWNLOADER_MIDDLEWARES = { 'scrapy_selenium.SeleniumMiddleware': 800 } import scrapy from scrapy_selenium import SeleniumRequest class JudicialSaleSpider(scrapy.Spider): name = 'ali_auction' def start_requests(self): yield SeleniumRequest( url="https://sf.taobao.com/list/50025969.htm", wait_time=3, screenshot=True, callback=self.parse ) def parse(self, response): # 使用Selenium获取完全渲染后的页面 driver = response.meta['driver'] # 提取拍品信息 for item in driver.find_elements_by_css_selector('.sf-item'): yield { 'title': item.find_element_by_css_selector('.title').text, 'price': item.find_element_by_css_selector('.price').text, 'view_count': item.find_element_by_css_selector('.view').text } # 处理分页 next_page = driver.find_elements_by_css_selector('.next-page') if next_page: yield SeleniumRequest( url=next_page[0].get_attribute('href'), callback=self.parse ) from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, ".auction-list")) ) class MongoDBPipeline: def process_item(self, item, spider): db[spider.name].insert_one(dict(item)) return item 在pycharm上运行后 返回:# settings.py 添加配置 SELENIUM_DRIVER_NAME = 'chrome' SELENIUM_DRIVER_EXECUTABLE_PATH = '/path/to/chromedriver' DOWNLOADER_MIDDLEWARES = { 'scrapy_selenium.SeleniumMiddleware': 800 } import scrapy from scrapy_selenium import SeleniumRequest class JudicialSaleSpider(scrapy.Spider): name = 'ali_auction' def start_requests(self): yield SeleniumRequest( url=

Traceback (most recent call last): File "E:\anaconda\Scripts\scrapy-script.py", line 10, in <module> sys.exit(execute()) ^^^^^^^^^ File "E:\anaconda\Lib\site-packages\scrapy\cmdline.py", line 160, in execute cmd.crawler_process = CrawlerProcess(settings) ^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\anaconda\Lib\site-packages\scrapy\crawler.py", line 357, in __init__ super().__init__(settings) File "E:\anaconda\Lib\site-packages\scrapy\crawler.py", line 227, in __init__ self.spider_loader = self._get_spider_loader(settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\anaconda\Lib\site-packages\scrapy\crawler.py", line 221, in _get_spider_loader return loader_cls.from_settings(settings.frozencopy()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\anaconda\Lib\site-packages\scrapy\spiderloader.py", line 79, in from_settings return cls(settings) ^^^^^^^^^^^^^ File "E:\anaconda\Lib\site-packages\scrapy\spiderloader.py", line 34, in __init__ self._load_all_spiders() File "E:\anaconda\Lib\site-packages\scrapy\spiderloader.py", line 63, in _load_all_spiders for module in walk_modules(name): ^^^^^^^^^^^^^^^^^^ File "E:\anaconda\Lib\site-packages\scrapy\utils\misc.py", line 106, in walk_modules submod = import_module(fullpath) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\anaconda\Lib\importlib\__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1387, in _gcd_import File "<frozen importlib._bootstrap>", line 1360, in _find_and_load File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 935, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 995, in exec_module File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "C:\Users\xiaopang1\Desktop\scrapy\DoubanProject\DoubanProject\spiders\douban.py", line 34, in <module> from .items import DoubanprojectItem # 导入定义好的 Item 数据结构 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'DoubanProject.spiders.items'

(base) PS D:\2025\internship\pachong\py\my12306> scrapy crawl train 2025-07-15 16:31:48 [scrapy.utils.log] INFO: Scrapy 2.11.1 started (bot: my12306) 2025-07-15 16:31:48 [scrapy.utils.log] INFO: Versions: lxml 5.2.1.0, libxml2 2.13 .1, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.2, Twisted 23.10.0, Python 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AM D64)], pyOpenSSL 24.2.1 (OpenSSL 3.0.15 3 Sep 2024), cryptography 43.0.0, Platform Windows-11-10.0.26100-SP0 2025-07-15 16:31:48 [scrapy.addons] INFO: Enabled addons: [] 2025-07-15 16:31:48 [py.warnings] WARNING: D:\anaconda\Lib\site-packages\scrapy\u tils\request.py:254: ScrapyDeprecationWarning: '2.6' is a deprecated value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. It is also the default value. In other words, it is normal to get this warning if you have not defined a value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' sett ing. This is so for backward compatibility reasons, but it will change in a future version of Scrapy. See the documentation of the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting for information on how to handle this deprecation. return cls(crawler) 2025-07-15 16:31:48 [scrapy.extensions.telnet] INFO: Telnet Password: b500c51afa127fa5 2025-07-15 16:31:48 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle'] 2025-07-15 16:31:48 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'AUTOTHROTTLE_START_DELAY': 10, 'BOT_NAME': 'my12306', 'DOWNLOADER_CLIENT_TLS_METHOD': 'TLSv1.2', 'DOWNLOAD_DELAY': 5, 'DOWNLOAD_TIMEOUT': 15, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'my12306.spiders', 'RETRY_HTTP_CODES': [302, 403, 404, 500, 502, 503, 504], 'RETRY_TIMES': 5, 'SPIDER_MODULES': ['my12306.spiders']} 2025-07-15 16:31:49 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'my12306.middlewares.RandomUserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2025-07-15 16:31:49 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2025-07-15 16:31:49 [scrapy.middleware] INFO: Enabled item pipelines: ['my12306.pipelines.JsonWriterPipeline'] 2025-07-15 16:31:49 [scrapy.core.engine] INFO: Spider opened 2025-07-15 16:31:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2025-07-15 16:31:49 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2025-07-15 16:31:55 [scrapy.core.scraper] ERROR: Spider error processing <GET htt ps://kyfw.12306.cn/otn/login/init> (referer: 
https://kyfw.12306.cn/otn/index/init) Traceback (most recent call last): File "D:\2025\internship\pachong\py\my12306\my12306\spiders\train_spider.py", line 16, in safe_selector return Selector(response) ^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\selector\unified.py", line 97, in __init__ super().__init__(text=text, type=st, **kwargs) File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 496, in __init__ root, type = _get_root_and_type_from_text( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 377, in _get_root_and_type_from_text root = _get_root_from_text(text, type=type, **lxml_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 329, in _get_root_from_text return create_root_node(text, _ctgroup[type]["_parser"], **lxml_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 110, in create_root_node parser = parser_cls(recover=True, encoding=encoding, huge_tree=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\lxml\html\__init__.py", line 1887, in __init__ super().__init__(**kwargs) File "src\\lxml\\parser.pxi", line 1806, in lxml.etree.HTMLParser.__init__ File "src\\lxml\\parser.pxi", line 858, in lxml.etree._BaseParser.__init__ LookupError: unknown encoding: 'b'utf8'' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\anaconda\Lib\site-packages\scrapy\utils\defer.py", line 279, in iter_errback yield next(it) ^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\utils\python.py", line 350, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\utils\python.py", line 350, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync for r in iterable: File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync for r in iterable: File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 352, in <genexpr> return (self._set_referer(r, response) for r in result or ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync for r in iterable: File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr> return (r for r in result or () if self._filter(r, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync for r in iterable: File "D:\anaconda\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync for r in iterable: File "D:\2025\internship\pachong\py\my12306\my12306\spiders\train_spider.py", line 241, in login_page sel = safe_selector(response) ^^^^^^^^^^^^^^^^^^^^^^^ File 
"D:\2025\internship\pachong\py\my12306\my12306\spiders\train_spider.py", line 31, in safe_selector return ParselSelector(text=text, type='html', encoding=encoding) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 496, in __init__ root, type = _get_root_and_type_from_text( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 377, in _get_root_and_type_from_text root = _get_root_from_text(text, type=type, **lxml_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 329, in _get_root_from_text return create_root_node(text, _ctgroup[type]["_parser"], **lxml_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\parsel\selector.py", line 110, in create_root_node parser = parser_cls(recover=True, encoding=encoding, huge_tree=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\anaconda\Lib\site-packages\lxml\html\__init__.py", line 1887, in __init__ super().__init__(**kwargs) File "src\\lxml\\parser.pxi", line 1806, in lxml.etree.HTMLParser.__init__ File "src\\lxml\\parser.pxi", line 858, in lxml.etree._BaseParser.__init__ LookupError: unknown encoding: 'b'utf8'' 2025-07-15 16:32:02 [train] INFO: 成功加载 3399 个车站信息 2025-07-15 16:32:02 [train] INFO: 部分车站示例: [('北京北', 'VAP'), ('北京东', 'BOP'), ('北京', 'BJP'), ('北京南', 'VNP'), ('北京大兴', 'IPP')] 请输入出发站: 北京 请输入到达站: 北京北 请输入日期(格式: yyyymmdd): 20250720 2025-07-15 16:33:01 [scrapy.extensions.logstats] INFO: Crawled 3 pages (at 3 pages/min), scraped 0 items (at 0 items/min) 2025-07-15 16:33:38 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://www.12306.cn/mormhweb/logFiles/error.html> (failed 6 times): [<twist ed.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>] 2025-07-15 16:33:38 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.12306.cn/mormhweb/logFiles/error.html> Traceback (most recent call last): File "D:\anaconda\Lib\site-packages\scrapy\core\downloader\middleware.py", line 54, in process_request return (yield download_func(request=request, spider=spider)) twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure tw isted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>] 2025-07-15 16:33:38 [scrapy.core.engine] INFO: Closing spider (finished) 2025-07-15 16:33:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/exception_count': 6, 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 6, 'downloader/request_bytes': 5916, 'downloader/request_count': 10, 'downloader/request_method_count/GET': 10, 'downloader/response_bytes': 86615, 'downloader/response_count': 4, 'downloader/response_status_count/200': 3, 'downloader/response_status_count/302': 1, 'elapsed_time_seconds': 109.671222, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2025, 7, 15, 8, 33, 38, 966742, tzinfo=datetime.timezone.utc), 'httpcompression/response_bytes': 230430, 'httpcompression/response_count': 3, 'log_count/ERROR': 3, 'log_count/INFO': 13, 'log_count/WARNING': 1, 'request_depth_max': 2, 'response_received_count': 3, 'retry/count': 5, 'retry/max_reached': 1, 
'retry/reason_count/twisted.web._newclient.ResponseNeverReceived': 5, 'scheduler/dequeued': 10, 'scheduler/dequeued/memory': 10, 'scheduler/enqueued': 10, 'scheduler/enqueued/memory': 10, 'spider_exceptions/LookupError': 1, 'start_time': datetime.datetime(2025, 7, 15, 8, 31, 49, 295520, tzinfo=datetime.timezone.utc)} 2025-07-15 16:33:38 [scrapy.core.engine] INFO: Spider closed (finished) (base) PS D:\2025\internship\pachong\py\my12306>

2025-07-06 22:15:17 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor 2025-07-06 22:15:17 [scrapy.extensions.telnet] INFO: Telnet Password: 94478ec1879f1a75 2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle'] 2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats', 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware'] 2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2025-07-06 22:15:17 [scrapy.middleware] INFO: Enabled item pipelines: ['nepu_spider.pipelines.ContentCleanPipeline', 'nepu_spider.pipelines.DeduplicatePipeline', 'nepu_spider.pipelines.SQLServerPipeline'] 2025-07-06 22:15:17 [scrapy.core.engine] INFO: Spider opened 2025-07-06 22:15:17 [nepu_spider.pipelines] INFO: ✅ 数据库表 'knowledge_base' 已创建或已存在 2025-07-06 22:15:17 [nepu_info] INFO: ✅ 成功连接到 SQL Server 数据库 2025-07-06 22:15:17 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2025-07-06 22:15:17 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in C:\Users\Lenovo\nepu_qa_project\.scrapy\httpcache 2025-07-06 22:15:17 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2025-07-06 22:15:17 [nepu_info] INFO: 🚀 开始爬取东北石油大学官网... 2025-07-06 22:15:17 [nepu_info] INFO: 初始URL数量: 4 2025-07-06 22:15:17 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/robots.txt> from <GET http://www.nepu.edu.cn/robots.txt> 2025-07-06 22:15:24 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.nepu.edu.cn/robots.txt> (referer: None) 2025-07-06 22:15:24 [protego] DEBUG: Rule at line 12 without any user agent to enforce it on. 2025-07-06 22:15:24 [protego] DEBUG: Rule at line 13 without any user agent to enforce it on. 2025-07-06 22:15:24 [protego] DEBUG: Rule at line 14 without any user agent to enforce it on. 2025-07-06 22:15:24 [protego] DEBUG: Rule at line 15 without any user agent to enforce it on. 2025-07-06 22:15:24 [protego] DEBUG: Rule at line 16 without any user agent to enforce it on. 
2025-07-06 22:15:28 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/index.htm> from <GET http://www.nepu.edu.cn/>
2025-07-06 22:15:34 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/tzgg.htm> from <GET http://www.nepu.edu.cn/tzgg.htm>
2025-07-06 22:15:38 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/xwzx.htm> from <GET http://www.nepu.edu.cn/xwzx.htm>
2025-07-06 22:15:45 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/xxgk.htm> from <GET http://www.nepu.edu.cn/xxgk.htm>
2025-07-06 22:15:51 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/index.htm> (referer: None)
2025-07-06 22:15:51 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.gov.cn': <GET https://www.gov.cn/gongbao/content/2001/content_61066.htm>
2025-07-06 22:15:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/tzgg.htm> (referer: None)
2025-07-06 22:16:01 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.nepu.edu.cn/xwzx.htm> (referer: None)
2025-07-06 22:16:01 [nepu_info] ERROR: 请求失败: https://www.nepu.edu.cn/xwzx.htm | 状态: 404 | 错误: Ignoring non-200 response
2025-07-06 22:16:03 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.nepu.edu.cn/xxgk.htm> (referer: None)
2025-07-06 22:16:03 [nepu_info] ERROR: 请求失败: https://www.nepu.edu.cn/xxgk.htm | 状态: 404 | 错误: Ignoring non-200 response
2025-07-06 22:16:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/28877.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:05 [nepu_info] ERROR: ❌ 解析失败: https://www.nepu.edu.cn/info/1049/28877.htm | 错误: Expected selector, got <DELIM '/' at 0>
Traceback (most recent call last):
  File "C:\Users\Lenovo\nepu_qa_project\nepu_spider\spiders\info_spider.py", line 148, in parse_item
    date_text = response.css(selector).get()
  File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css
    return self.selector.css(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css
    return self.xpath(self._css2xpath(query))
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath
    return self._csstranslator.css_to_xpath(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath
    return super(HTMLTranslator, self).css_to_xpath(css, prefix)
  File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath
    for selector in parse(css))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse
    return list(parse_selector_group(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group
    yield Selector(*parse_selector(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector
    result, pseudo_element = parse_simple_selector(stream)
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector
    raise SelectorSyntaxError(
cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
(以下各详情页的解析失败 Traceback 与上文完全相同,此处省略)
2025-07-06 22:16:05 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.nepu.edu.cn/info/1049/28867.htm> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2025-07-06 22:16:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1313/16817.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:05 [nepu_info] ERROR: ❌ 解析失败: https://www.nepu.edu.cn/info/1313/16817.htm | 错误: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1313/17517.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:06 [nepu_info] ERROR: ❌ 解析失败: https://www.nepu.edu.cn/info/1313/17517.htm | 错误: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1313/19127.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:07 [nepu_info] ERROR: ❌ 解析失败: https://www.nepu.edu.cn/info/1313/19127.htm | 错误: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/28867.htm> (referer: https://www.nepu.edu.cn/index.htm)
2025-07-06 22:16:58 [nepu_info] ERROR: ❌ 解析失败: https://www.nepu.edu.cn/info/1049/28867.htm | 错误: Expected selector, got <DELIM '/' at 0>
2025-07-06 22:16:58 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-06 22:16:58 [nepu_info] INFO: ✅ 数据库连接已关闭
2025-07-06 22:16:58 [nepu_info] INFO: 🛑 爬虫结束,原因: finished
2025-07-06 22:16:58 [nepu_info] INFO: 总计爬取页面: 86
2025-07-06 22:16:58 [scrapy.utils.signal] ERROR: Error caught on signal handler: <function Spider.close at 0x000001FF77BA2C00>
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 312, in maybeDeferred_coro
    result = f(*args, **kw)
  File "D:\annaCONDA\Lib\site-packages\pydispatch\robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "D:\annaCONDA\Lib\site-packages\scrapy\spiders\__init__.py", line 92, in close
    return closed(reason)
  File "C:\Users\Lenovo\nepu_qa_project\nepu_spider\spiders\info_spider.py", line 323, in closed
    json.dump(stats, f, ensure_ascii=False, indent=2)
  File "D:\annaCONDA\Lib\json\__init__.py", line 179, in dump
    for chunk in iterable:
  File "D:\annaCONDA\Lib\json\encoder.py", line 432, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "D:\annaCONDA\Lib\json\encoder.py", line 406, in _iterencode_dict
    yield from chunks
  File "D:\annaCONDA\Lib\json\encoder.py", line 406, in _iterencode_dict
    yield from chunks
  File "D:\annaCONDA\Lib\json\encoder.py", line 439, in _iterencode
    o = _default(o)
  File "D:\annaCONDA\Lib\json\encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type SettingsAttribute is not JSON serializable
2025-07-06 22:16:58 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 35146,
 'downloader/request_count': 96,
 'downloader/request_method_count/GET': 96,
 'downloader/response_bytes': 729404,
 'downloader/response_count': 96,
 'downloader/response_status_count/200': 88,
 'downloader/response_status_count/302': 5,
 'downloader/response_status_count/404': 3,
 'dupefilter/filtered': 184,
 'elapsed_time_seconds': 101.133916,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2025, 7, 6, 14, 16, 58, 758524),
 'httpcache/firsthand': 96,
 'httpcache/miss': 96,
 'httpcache/store': 96,
 'httpcompression/response_bytes': 2168438,
 'httpcompression/response_count': 88,
 'log_count/DEBUG': 120,
 'log_count/ERROR': 89,
 'log_count/INFO': 18,
 'log_count/WARNING': 1,
 'offsite/domains': 2,
 'offsite/filtered': 4,
 'request_depth_max': 3,
 'response_received_count': 91,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/404': 1,
 'scheduler/dequeued': 94,
 'scheduler/dequeued/memory': 94,
 'scheduler/enqueued': 94,
 'scheduler/enqueued/memory': 94,
 'start_time': datetime.datetime(2025, 7, 6, 14, 15, 17, 624608)}
2025-07-06 22:16:58 [scrapy.core.engine] INFO: Spider closed (finished)
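这份日志暴露了两个互相独立的问题:

1. 大量 `SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>`。`response.css()` 只接受 CSS 选择器,而以 `/` 或 `//` 开头的字符串是 XPath 表达式,cssselect 在第一个 `/` 处就解析失败。从 Traceback 看,`info_spider.py` 第 148 行在循环里把选择器原样传给了 `css()`,选择器列表里很可能混入了 XPath。下面是一个按前缀分流的修复思路(示意代码,`DATE_SELECTORS` 的内容和函数名均为假设,需按实际代码调整):

```python
# info_spider.py 节选(示意,选择器列表按实际页面结构填写)
DATE_SELECTORS = [
    "span.date::text",                     # CSS 选择器
    "//div[@class='info']/span/text()",    # XPath 表达式,不能传给 response.css()
]

def extract_date(response):
    """CSS 选择器走 css(),XPath 表达式走 xpath(),避免 SelectorSyntaxError。"""
    for selector in DATE_SELECTORS:
        if selector.lstrip().startswith(("/", "(")):
            date_text = response.xpath(selector).get()
        else:
            date_text = response.css(selector).get()
        if date_text:
            return date_text.strip()
    return None
```

2. 收尾时的 `TypeError: Object of type SettingsAttribute is not JSON serializable`:`closed()` 里要 dump 的 `stats` 字典混进了 Scrapy 的设置对象(stats 自带的 `datetime` 同样不能直接序列化)。最简单的兜底是给 `json.dump` 传 `default=str`(以下假设统计数据取自 Scrapy 的 StatsCollector,文件名为示意):

```python
import json

def closed(self, reason):
    stats = dict(self.crawler.stats.get_stats())  # 假设统计来自 StatsCollector
    with open("crawl_stats.json", "w", encoding="utf-8") as f:
        # default=str 把 datetime、SettingsAttribute 等不可序列化对象统一转成字符串
        json.dump(stats, f, ensure_ascii=False, indent=2, default=str)
```

另外,`xwzx.htm` 与 `xxgk.htm` 稳定返回 404,这两个初始 URL 可能已随官网改版失效,需要核对新路径。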

(scrapy_env) C:\Users\Lenovo\nepu_spider>scrapy crawl nepu
2025-07-04 11:44:20 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: nepu_spider)
2025-07-04 11:44:20 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.10 1 Aug 2023), cryptography 41.0.3, Platform Windows-10-10.0.26100-SP0
2025-07-04 11:44:20 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'nepu_spider',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'NEWSPIDER_MODULE': 'nepu_spider.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'SPIDER_MODULES': ['nepu_spider.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2025-07-04 11:44:20 [asyncio] DEBUG: Using selector: SelectSelector
2025-07-04 11:44:20 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-07-04 11:44:20 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-07-04 11:44:20 [scrapy.extensions.telnet] INFO: Telnet Password: 97bca17b548b5608
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-07-04 11:44:20 [scrapy.middleware] INFO: Enabled item pipelines:
['nepu_spider.pipelines.NepuSpiderPipeline']
2025-07-04 11:44:20 [scrapy.core.engine] INFO: Spider opened
2025-07-04 11:44:21 [nepu] INFO: 🆕 数据表 NewsArticles 创建成功或已存在
2025-07-04 11:44:21 [nepu] INFO: ✅ 成功连接到 SQL Server 数据库
2025-07-04 11:44:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-04 11:44:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.nepu.edu.cn/xsdt.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9837.htm> from <GET http://www.nepu.edu.cn/info/1049/9837.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9836.htm> from <GET http://www.nepu.edu.cn/info/1049/9836.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9812.htm> from <GET http://www.nepu.edu.cn/info/1049/9812.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9815.htm> from <GET http://www.nepu.edu.cn/info/1049/9815.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9809.htm> from <GET http://www.nepu.edu.cn/info/1049/9809.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9808.htm> from <GET http://www.nepu.edu.cn/info/1049/9808.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/10155.htm> from <GET http://www.nepu.edu.cn/info/1049/10155.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/10129.htm> from <GET http://www.nepu.edu.cn/info/1049/10129.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/9813.htm> from <GET http://www.nepu.edu.cn/info/1049/9813.htm>
2025-07-04 11:44:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.nepu.edu.cn/info/1049/10162.htm> from <GET http://www.nepu.edu.cn/info/1049/10162.htm>
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9812.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9809.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9837.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/10155.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9815.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9813.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9836.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/10162.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/10129.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nepu.edu.cn/info/1049/9808.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9812.htm> (referer: None)
Traceback (most recent call last):
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
    yield next(it)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
    return next(self.data)
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "D:\annaCONDA\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "D:\annaCONDA\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
    for r in iterable:
  File "C:\Users\Lenovo\nepu_spider\nepu_spider\spiders\nepu.py", line 33, in parse_detail
    text = response.css(selector).get()
  File "D:\annaCONDA\Lib\site-packages\scrapy\http\response\text.py", line 147, in css
    return self.selector.css(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 282, in css
    return self.xpath(self._css2xpath(query))
  File "D:\annaCONDA\Lib\site-packages\parsel\selector.py", line 285, in _css2xpath
    return self._csstranslator.css_to_xpath(query)
  File "D:\annaCONDA\Lib\site-packages\parsel\csstranslator.py", line 107, in css_to_xpath
    return super(HTMLTranslator, self).css_to_xpath(css, prefix)
  File "D:\annaCONDA\Lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath
    for selector in parse(css))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 415, in parse
    return list(parse_selector_group(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 428, in parse_selector_group
    yield Selector(*parse_selector(stream))
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 436, in parse_selector
    result, pseudo_element = parse_simple_selector(stream)
  File "D:\annaCONDA\Lib\site-packages\cssselect\parser.py", line 544, in parse_simple_selector
    raise SelectorSyntaxError(
cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
(以下 9 个页面的 Spider error Traceback 与上文完全相同,此处省略)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9809.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9837.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/10155.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9815.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9813.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9836.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/10129.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/10162.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nepu.edu.cn/info/1049/9808.htm> (referer: None)
2025-07-04 11:44:21 [scrapy.core.engine] INFO: Closing spider (finished)
2025-07-04 11:44:21 [nepu] INFO: 🔌 已安全关闭数据库连接
2025-07-04 11:44:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 5100,
 'downloader/request_count': 21,
 'downloader/request_method_count/GET': 21,
 'downloader/response_bytes': 93797,
 'downloader/response_count': 21,
 'downloader/response_status_count/200': 11,
 'downloader/response_status_count/302': 10,
 'elapsed_time_seconds': 0.502389,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2025, 7, 4, 3, 44, 21, 531133),
 'httpcompression/response_bytes': 251471,
 'httpcompression/response_count': 11,
 'log_count/DEBUG': 24,
 'log_count/ERROR': 10,
 'log_count/INFO': 13,
 'request_depth_max': 1,
 'response_received_count': 11,
 'scheduler/dequeued': 21,
 'scheduler/dequeued/memory': 21,
 'scheduler/enqueued': 21,
 'scheduler/enqueued/memory': 21,
 'spider_exceptions/SelectorSyntaxError': 10,
 'start_time': datetime.datetime(2025, 7, 4, 3, 44, 21, 28744)}
2025-07-04 11:44:21 [scrapy.core.engine] INFO: Spider closed (finished)

(scrapy_env) C:\Users\Lenovo\nepu_spider>
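第二份日志(Scrapy 2.8.0,`nepu.py` 第 33 行)是同一个根因:`parse_detail` 把以 `//` 开头的 XPath 字符串传给了 `response.css()`,10 个详情页在同一处全部抛出异常。最小对照示例如下(选择器内容仅为示意):

```python
# ✅ CSS 选择器用 css()
title = response.css("div.content h1::text").get()

# ✅ XPath 表达式用 xpath()
date_text = response.xpath("//span[@class='date']/text()").get()

# ❌ 把 XPath 传给 css() 会抛出:
#    cssselect.parser.SelectorSyntaxError: Expected selector, got <DELIM '/' at 0>
# date_text = response.css("//span[@class='date']/text()").get()
```

修正调用方式后,统计里的 `spider_exceptions/SelectorSyntaxError` 应当归零。顺带一提,这次运行的 21 个请求中有 10 个是 http 到 https 的 302 跳转,把 start_urls 直接写成 https 可以省掉近一半请求。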
