
converting to local hdfs://singlenode:8020/jars/java-hdfs-api-jar-with-dependencies.jar Failed to read external resource hdfs://singlenode:8020/jars/java-hdfs-api-jar-with-dependencies.jar Failed to register default.my_max using class com.cssl.hive.udaf.UserDefineFunctionMax FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.FunctionTask

Posted: 2024-04-24 17:21:13 · Views: 188
This error can have several causes:

1. **Wrong file path**: make sure `hdfs://singlenode:8020/jars/java-hdfs-api-jar-with-dependencies.jar` actually exists at that HDFS path and that the path is spelled correctly.
2. **HDFS permissions**: check that you have permission to read `java-hdfs-api-jar-with-dependencies.jar`. You can inspect the file's permissions and owner with `hadoop fs -ls`.
3. **HDFS configuration**: make sure your Hadoop configuration is correct and HDFS is reachable. Try another HDFS command (e.g. `hadoop fs -ls`) to verify that you can connect to HDFS and see the file.
4. **Hive UDF registration**: the class `com.cssl.hive.udaf.UserDefineFunctionMax` may not have been registered correctly as a Hive UDF. Make sure the jar is added to Hive's classpath before registering, and that the registration statement itself is correct.

Work through these possibilities for your environment. If the problem persists, provide more details so the issue can be narrowed down further.
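A minimal way to verify the jar and re-register the function, using the paths and class name taken from the error message above (`CREATE FUNCTION ... USING JAR` is standard HiveQL since Hive 0.13):

```bash
# Confirm the jar exists and is readable at the expected HDFS path
hadoop fs -ls hdfs://singlenode:8020/jars/java-hdfs-api-jar-with-dependencies.jar

# Re-create the function, letting Hive fetch the jar from HDFS itself
hive -e "
DROP FUNCTION IF EXISTS default.my_max;
CREATE FUNCTION default.my_max
  AS 'com.cssl.hive.udaf.UserDefineFunctionMax'
  USING JAR 'hdfs://singlenode:8020/jars/java-hdfs-api-jar-with-dependencies.jar';
"
```

If the `hadoop fs -ls` step fails, fix the path or the permissions (for example with `hadoop fs -chmod`) before retrying the registration.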
Related Questions

[root@k8s-master ~]# kubectl create -f custom-resources.yaml
error: error parsing custom-resources.yaml: error converting YAML to JSON: yaml: line 12: mapping values are not allowed in this context
[root@k8s-master ~]# cat custom-resources.yaml
[indentation of the original file was lost in the paste; keys shown one per line]
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
# Configures Calico networking.
calicoNetwork:
# Note: The ipPools section cannot be modified post-install.
ipPools:
- blockSize: 26
cidr: 0.0.0.0
encapsulation: VXLANCrossSubnet
natOutgoing: Enabled
nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}

Your YAML file has two main problems: **incorrect indentation** and **wrong structural nesting**. Step by step:

---

### Problem analysis

1. **Indentation errors break the hierarchy**
   - `spec` is indented inside the `metadata` field, so Kubernetes cannot recognize the document structure; `spec` must be a top-level key, a sibling of `metadata`.
   - The fields inside the `ipPools` list item (such as `cidr`) are over-indented, which breaks the YAML hierarchy and produces "mapping values are not allowed in this context".

2. **CIDR value and `nodeSelector` syntax**
   - `cidr: 0.0.0.0` is not a valid address range (it needs something like `192.168.0.0/16`).
   - `nodeSelector: all()` lacks quotes, so it may not be parsed as a string.

---

### Corrected YAML

```yaml
# Installation resource
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:  # note: spec is a sibling of metadata, not nested inside it
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/16   # corrected to a valid CIDR
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: "all()"  # quotes added
---
# APIServer resource
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}  # intentionally empty
```

---

### Key fixes

1. **Fixed the `spec` nesting**: the original file incorrectly nested `spec` inside `metadata`; after the fix it sits at the same level as `metadata`.
2. **Fixed the `ipPools` indentation**: the fields following the list item (`- blockSize: 26`) are indented consistently (2 or 4 spaces) at the same level.
3. **Added the missing syntax details**:
   - `nodeSelector: "all()"`: the quotes ensure the value is treated as a string.
   - `cidr: 192.168.0.0/16`: an example of a valid CIDR (adjust to your actual network).

---

### Verification commands

1. Check the YAML syntax:
   ```bash
   kubectl create -f custom-resources.yaml --dry-run=client
   ```
2. Apply the configuration:
   ```bash
   kubectl create -f custom-resources.yaml
   ```

---

### General YAML advice

- **Use an IDE or an online validator** such as [YAML Lint](https://yamllint.com/) to check indentation.
- **Never mix spaces and tabs**: use spaces only (2 or 4, consistently).
- **Consult the documentation for complex configs**: Calico's official example: [Installation API](https://projectcalico.docs.tigera.io/reference/installation/api).
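For a purely local syntax check before touching the cluster, a quick parse also pinpoints the offending line and column. A minimal sketch, assuming Python 3 with PyYAML installed:

```bash
# Parse every document in the file; on a syntax error, PyYAML
# reports the exact line/column of the problem.
python3 -c "
import sys, yaml
with open('custom-resources.yaml') as f:
    try:
        list(yaml.safe_load_all(f))
        print('YAML syntax OK')
    except yaml.YAMLError as e:
        print(e); sys.exit(1)
"
```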

Converting circular structure to JSON
    --> starting at object with constructor 'HTMLButtonElement'
    |     property '__reactFiber$eb3i9glrsyt' -> object with constructor 'FiberNode'
    --- property 'stateNode' closes the circle
TypeError: Converting circular structure to JSON
    --> starting at object with constructor 'HTMLButtonElement'
    |     property '__reactFiber$eb3i9glrsyt' -> object with constructor 'FiberNode'
    --- property 'stateNode' closes the circle
    at JSON.stringify (<anonymous>)
    at CellClickModal (http://localhost:3001/static/js/bundle.js:2809:20)
    at renderWithHooks (http://localhost:3001/static/js/bundle.js:65440:22)
    at updateFunctionComponent (http://localhost:3001/static/js/bundle.js:68322:24)
    at beginWork (http://localhost:3001/static/js/bundle.js:70034:20)
    at HTMLUnknownElement.callCallback (http://localhost:3001/static/js/bundle.js:55032:18)
    at Object.invokeGuardedCallbackDev (http://localhost:3001/static/js/bundle.js:55076:20)
    at invokeGuardedCallback (http://localhost:3001/static/js/bundle.js:55133:35)
    at beginWork$1 (http://localhost:3001/static/js/bundle.js:75007:11)
    at performUnitOfWork (http://localhost:3001/static/js/bundle.js:74254:16)

This error occurs because you are trying to convert an object containing a circular reference into a JSON string, and `JSON.stringify()` does not support circular references. In your code the `HTMLButtonElement` and the `FiberNode` reference each other (React attaches a fiber to the DOM node via the `__reactFiber$...` property, and the fiber points back to the node through `stateNode`), so the object cannot be serialized with `JSON.stringify()`.

One way to solve this is to use a third-party library such as flatted, which provides circular-reference-safe counterparts of `JSON.stringify()` and `JSON.parse()`: it can serialize an object graph containing cycles into a string and restore the cycles when parsing.

Here is example code using the flatted library:

```jsx
import { parse, stringify } from 'flatted';

// A plain object with a genuine circular reference
const myObj = { name: 'button-state' };
myObj.self = myObj; // JSON.stringify(myObj) would throw here

const jsonString = stringify(myObj);
console.log(jsonString);

const restored = parse(jsonString);
console.log(restored.self === restored); // true: the cycle is restored
```

In this example, `stringify()` converts the object with the circular reference into a string, and `parse()` turns the string back into an object, avoiding the `JSON.stringify()` circular-reference error. Note that in your stack trace the cycle comes from passing a DOM element (carrying React's internal fiber) to `JSON.stringify`; often the cleaner fix is to serialize only the plain data you actually need rather than the DOM node or React fiber itself.
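If you adopt flatted, it is installed like any other npm package (assuming an npm-managed project):

```bash
npm install flatted
```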

Related Recommendations

Resolving this error:

[root@localhost docker]# docker logs -f tdm-jar
LOGBACK: No context given for c.q.l.core.rolling.SizeAndTimeBasedRollingPolicy@667346055
[ASCII-art "TDM" startup banner omitted]
2025-07-25 07:37:16,287 INFO [main] - cn.serverx.sx.ServerXApplication:55 - Starting ServerXApplication on 974d7161bb6c with PID 1 (/app/sx-admin-1.0-SNAPSHOT.jar started by root in /app)
2025-07-25 07:37:16,300 INFO [main] - cn.serverx.sx.ServerXApplication:655 - The following profiles are active: p
2025-07-25 07:37:26,626 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationDelegate:249 - Multiple Spring Data modules found, entering strict repository configuration mode!
2025-07-25 07:37:26,632 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationDelegate:127 - Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2025-07-25 07:37:27,700 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationExtensionSupport:348 - Spring Data JPA - Could not safely identify store assignment for repository candidate interface cn.serverx.sx.core.dao.elasticsearch.EsLogDao. If you want this repository to be a JPA repository, consider annotating your entities with one of these annotations: javax.persistence.Entity, javax.persistence.MappedSuperclass (preferred), or consider extending one of the following types with your repository: org.springframework.data.jpa.repository.JpaRepository.
2025-07-25 07:37:28,073 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationDelegate:187 - Finished Spring Data repository scanning in 1405ms. Found 27 JPA repository interfaces.
2025-07-25 07:37:28,223 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationDelegate:249 - Multiple Spring Data modules found, entering strict repository configuration mode!
2025-07-25 07:37:28,227 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationDelegate:127 - Bootstrapping Spring Data Redis repositories in DEFAULT mode.
2025-07-25 07:37:28,307 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationExtensionSupport:348 - Spring Data Redis - Could not safely identify store assignment for repository candidate interface cn.serverx.sx.base.dao.DictDao. If you want this repository to be a Redis repository, consider annotating your entities with one of these annotations: org.springframework.data.redis.core.RedisHash (preferred), or consider extending one of the following types with your repository: org.springframework.data.keyvalue.repository.KeyValueRepository.
[... the same "Could not safely identify store assignment" message repeats for the remaining DAO interfaces: DictDataDao, ActBusinessDao, ActCategoryDao, ActModelDao, ActNodeDao, ActProcessDao, LeaveDao, GithubDao, QQDao, WechatDao, WeiboDao, DepartmentDao, DepartmentHeaderDao, EsLogDao, LogDao, MessageDao, MessageSendDao, PermissionDao, RoleDao, RoleDepartmentDao, RolePermissionDao, SettingDao, UserDao, UserRoleDao, FileDao, QuartzJobDao, ClientDao ...]
2025-07-25 07:37:28,397 INFO [main] - org.springframework.data.repository.config.RepositoryConfigurationDelegate:187 - Finished Spring Data repository scanning in 149ms. Found 0 Redis repository interfaces.
2025-07-25 07:37:29,164 WARN [main] - org.mybatis.spring.mapper.ClassPathMapperScanner:44 - Skipping MapperFactoryBean with name 'permissionMapper' and 'cn.serverx.sx.core.dao.mapper.PermissionMapper' mapperInterface. Bean already defined with the same name!
[... the same warning repeats for 'userRoleMapper', 'actMapper', 'historyIdentityMapper' and 'runIdentityMapper' ...]
2025-07-25 07:37:30,140 INFO [main] - com.ulisesbocchio.jasyptspringboot.configuration.EnableEncryptablePropertiesBeanFactoryPostProcessor:48 - Post-processing PropertySource instances
2025-07-25 07:37:30,481 INFO [main] - com.ulisesbocchio.jasyptspringboot.EncryptablePropertySourceConverter:44 - Converting PropertySource configurationProperties [org.springframework.boot.context.properties.source.ConfigurationPropertySourcesPropertySource] to AOP Proxy
[... similar "Converting PropertySource ..." messages follow for commandLineArgs, servletConfigInitParams, servletContextInitParams, systemProperties, systemEnvironment, random, applicationConfig [file:./application-p.yml] and applicationConfig [classpath:/application.yml] ...]
2025-07-25 07:37:32,773 INFO [main] - org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor:213 - No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
2025-07-25 07:37:32,785 INFO [main] - org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor:300 - No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
2025-07-25 07:37:32,807 INFO [main] - org.springframework.integration.config.DefaultConfiguringBeanFactoryPostProcessor:460 - No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
2025-07-25 07:37:33,452 INFO [main] - com.ulisesbocchio.jasyptspringboot.filter.DefaultLazyPropertyFilter:31 - Property Filter custom Bean not found with name 'encryptablePropertyFilter'. Initializing Default Property Filter
2025-07-25 07:37:33,763 INFO [main] - org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker:330 - Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[... the same "not eligible for getting processed by all BeanPostProcessors" message repeats for 'redisCacheConfig', 'ObjectPostProcessorConfiguration', 'objectPostProcessor', 'DefaultMethodSecurityExpressionHandler', 'GlobalMethodSecurityConfiguration', 'methodSecurityMetadataSource', 'IntegrationManagementConfiguration', 'integrationChannelResolver' and 'integrationDisposableAutoCreatedBeans' ...]
2025-07-25 07:37:36,005 INFO [main] - com.ulisesbocchio.jasyptspringboot.resolver.DefaultLazyPropertyResolver:35 - Property Resolver custom Bean not found with name 'encryptablePropertyResolver'. Initializing Default Property Resolver
2025-07-25 07:37:36,029 INFO [main] - com.ulisesbocchio.jasyptspringboot.detector.DefaultLazyPropertyDetector:35 - Property Detector custom Bean not found with name 'encryptablePropertyDetector'. Initializing Default Property Detector
2025-07-25 07:37:37,980 INFO [main] - org.springframework.boot.web.embedded.tomcat.TomcatWebServer:92 - Tomcat initialized with port(s): 8862 (http)
2025-07-25 07:37:38,088 INFO [main] - org.apache.catalina.core.StandardService:173 - Starting service [Tomcat]
2025-07-25 07:37:38,089 INFO [main] - org.apache.catalina.core.StandardEngine:173 - Starting Servlet engine: [Apache Tomcat/9.0.29]
2025-07-25 07:37:38,484 INFO [main] - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/]:173 - Initializing Spring embedded WebApplicationContext
2025-07-25 07:37:38,485 INFO [main] - org.springframework.web.context.ContextLoader:284 - Root WebApplicationContext: initialization completed in 21879 ms
2025-07-25 07:37:42,651 INFO [main] - com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure:56 - Init DruidDataSource
2025-07-25 07:37:45,200 INFO [main] - com.alibaba.druid.pool.DruidDataSource:1010 - {dataSource-1} inited
2025-07-25 07:37:47,770 INFO [main] - org.hibernate.jpa.internal.util.LogHelper:31 - HHH000204: Processing PersistenceUnitInfo [name: default]
2025-07-25 07:37:48,017 INFO [main] - org.hibernate.Version:46 - HHH000412: Hibernate Core {5.4.9.Final}
2025-07-25 07:37:48,022 INFO [main] - org.hibernate.cfg.Environment:184 - HHH000205: Loaded properties from resource hibernate.properties: {hibernate.bytecode.use_reflection_optimizer=false, hibernate.dialect.storage_engine=innodb}
2025-07-25 07:37:48,995 INFO [main] - org.hibernate.annotations.common.Version:49 - HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
2025-07-25 07:37:49,477 INFO [main] - org.hibernate.dialect.Dialect:172 - HHH000400: Using dialect: org.hibernate.dialect.MySQL8Dialect
2025-07-25 07:37:56,804 INFO [main] - org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator:52 - HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2025-07-25 07:37:57,007 INFO [main] - org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:416 - Initialized JPA EntityManagerFactory for persistence unit 'default'
2025-07-25 07:37:59,055 WARN [main] - cn.serverx.sx.core.config.security.permission.MyFilterSecurityInterceptor:159 - Could not validate configuration attributes as the SecurityMetadataSource did not return any attributes from getAllConfigAttributes()
2025-07-25 07:38:00,730 INFO [main] - org.springframework.boot.actuate.endpoint.web.ServletEndpointRegistrar:75 - Registered '/serverx/actuator/jolokia' to jolokia-actuator-endpoint
[ASCII-art MyBatis-Plus banner omitted: 3.3.1]
2025-07-25 07:38:12,093 INFO [main] - org.quartz.impl.StdSchedulerFactory:1220 - Using default implementation for ThreadExecutor
2025-07-25 07:38:12,235 INFO [main] - org.quartz.core.SchedulerSignalerImpl:61 - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2025-07-25 07:38:12,235 INFO [main] - org.quartz.core.QuartzScheduler:229 - Quartz Scheduler v.2.3.2 created.
2025-07-25 07:38:12,275 INFO [main] - org.springframework.scheduling.quartz.LocalDataSourceJobStore:672 - Using db table-based data access locking (synchronization).
2025-07-25 07:38:12,281 INFO [main] - org.springframework.scheduling.quartz.LocalDataSourceJobStore:145 - JobStoreCMT initialized.
2025-07-25 07:38:12,283 INFO [main] - org.quartz.core.QuartzScheduler:294 - Scheduler meta-data: Quartz Scheduler (v2.3.2) 'quartzScheduler' with instanceId 'NON_CLUSTERED'
  Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
  NOT STARTED.
  Currently in standby mode.
  Number of jobs executed: 0
  Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
  Using job-store 'org.springframework.scheduling.quartz.LocalDataSourceJobStore' - which supports persistence. and is not clustered.
2025-07-25 07:38:12,283 INFO [main] - org.quartz.impl.StdSchedulerFactory:1374 - Quartz scheduler 'quartzScheduler' initialized from an externally provided properties instance.
2025-07-25 07:38:12,284 INFO [main] - org.quartz.impl.StdSchedulerFactory:1378 - Quartz scheduler version: 2.3.2
2025-07-25 07:38:12,284 INFO [main] - org.quartz.core.QuartzScheduler:2293 - JobFactory set to: org.springframework.scheduling.quartz.SpringBeanJobFactory@3ea84e01
Hibernate: select quartzjob0_.id as id1_41_, quartzjob0_.create_by as create_b2_41_, quartzjob0_.create_time as create_t3_41_, quartzjob0_.del_flag as del_flag4_41_, quartzjob0_.update_by as update_b5_41_, quartzjob0_.update_time as update_t6_41_, quartzjob0_.cron_expression as cron_exp7_41_, quartzjob0_.description as descript8_41_, quartzjob0_.job_class_name as job_clas9_41_, quartzjob0_.parameter as paramet10_41_, quartzjob0_.status as status11_41_ from t_quartz_job quartzjob0_ where quartzjob0_.job_class_name=?
2025-07-25 07:38:27,622 ERROR [main] - cn.serverx.sx.job.init.JobInitConfig:122 - org.quartz.impl.jdbcjobstore.LockException: Failure obtaining db row lock: Table 'tdm.qrtz_locks' doesn't exist [See nested exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'tdm.qrtz_locks' doesn't exist]
2025-07-25 07:38:28,007 WARN [main] - org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext:558 - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jobInitConfig': Invocation of init method failed; nested exception is ServerXException(msg=创建定时任务失败)
2025-07-25 07:38:28,104 INFO [main] - org.springframework.scheduling.quartz.SchedulerFactoryBean:845 - Shutting down Quartz Scheduler
2025-07-25 07:38:28,106 INFO [main] - org.quartz.core.QuartzScheduler:666 - Scheduler quartzScheduler_$_NON_CLUSTERED shutting down.
2025-07-25 07:38:28,106 INFO [main] - org.quartz.core.QuartzScheduler:585 - Scheduler quartzScheduler_$_NON_CLUSTERED paused.
2025-07-25 07:38:28,118 INFO [main] - org.quartz.core.QuartzScheduler:740 - Scheduler quartzScheduler_$_NON_CLUSTERED shutdown complete.
2025-07-25 07:38:28,155 INFO [main] - org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:598 - Closing JPA EntityManagerFactory for persistence unit 'default'
2025-07-25 07:38:28,200 INFO [main] - com.alibaba.druid.pool.DruidDataSource:2003 - {dataSource-1} closing ...
2025-07-25 07:38:28,549 INFO [main] - com.alibaba.druid.pool.DruidDataSource:2075 - {dataSource-1} closed
2025-07-25 07:38:28,886 INFO [main] - org.apache.catalina.core.StandardService:173 - Stopping service [Tomcat]
2025-07-25 07:38:29,338 INFO [main] - org.springframework.boot.autoconfigure.logging.ConditionEvaluationReportLoggingListener:136 - Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2025-07-25 07:38:29,385 ERROR [main] - org.springframework.boot.SpringApplication:826 - Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jobInitConfig': Invocation of init method failed; nested exception is ServerXException(msg=创建定时任务失败)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:160)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:416)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1788)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:879)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215)
	at cn.serverx.sx.ServerXApplication.main(ServerXApplication.java:87)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: cn.serverx.sx.core.common.exception.ServerXException: 创建定时任务失败
	at cn.serverx.sx.job.init.JobInitConfig.add(JobInitConfig.java:123)
	at cn.serverx.sx.job.init.JobInitConfig.init(JobInitConfig.java:94)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:389)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:333)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:157)
	... 26 common frames omitted
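The decisive line in this log is the ERROR near the end: Quartz's JDBC job store cannot find its database tables (Table 'tdm.qrtz_locks' doesn't exist), which is why the 'jobInitConfig' bean fails with ServerXException(msg=创建定时任务失败, i.e. "failed to create the scheduled task"). A minimal sketch of the usual fix, assuming a MySQL database named `tdm` and the DDL script shipped with the Quartz 2.3.2 distribution (`docs/dbTables/tables_mysql_innodb.sql`):

```bash
# Create the QRTZ_* tables that Quartz's JDBC job store expects,
# then restart the container so the scheduler can initialize.
mysql -u root -p tdm < tables_mysql_innodb.sql
docker restart tdm-jar
```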

/home/wiseatc/.local/lib/python3.11/site-packages/jieba/_compat.py:18: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  import pkg_resources
W0703 16:30:36.069853 3914856 torch/distributed/run.py:766]
W0703 16:30:36.069853 3914856 torch/distributed/run.py:766] *****************************************
W0703 16:30:36.069853 3914856 torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0703 16:30:36.069853 3914856 torch/distributed/run.py:766] *****************************************
[... the jieba pkg_resources UserWarning above repeats once per worker process ...]
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,321 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,322 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,322 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,322 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,322 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,322 >> loading file chat_template.jinja
[INFO|tokenization_utils_base.py:2313] 2025-07-03 16:30:43,904 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|configuration_utils.py:697] 2025-07-03 16:30:43,913 >> loading configuration file /mnt/data1/models/1.5B/config.json
[INFO|configuration_utils.py:771] 2025-07-03 16:30:43,919 >> Model config Qwen2Config {
  "_name_or_path": "/mnt/data1/models/1.5B",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 1536,
  "initializer_range": 0.02,
  "intermediate_size": 8960,
  "max_position_embeddings": 131072,
  "max_window_layers": 21,
  "model_type": "qwen2",
  "num_attention_heads": 12,
  "num_hidden_layers": 28,
  "num_key_value_heads": 2,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "use_mrope": false,
  "use_sliding_window": false,
  "vocab_size": 151936
}
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,920 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,920 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,920 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,920 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,920 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2048] 2025-07-03 16:30:43,920 >> loading file chat_template.jinja
[INFO|tokenization_utils_base.py:2313] 2025-07-03 16:30:44,493 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/usr/local/lib/python3.11/dist-packages/torch/distributed/distributed_c10d.py:4631: UserWarning: No device id is provided via init_process_group or barrier. Using the current device set by the user.
  warnings.warn(  # warn only once
[rank1]:[W703 16:30:45.102845887 ProcessGroupNCCL.cpp:4718] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[... the same UserWarning and NCCL warning repeat for rank 2 (GPU 2) and rank 3 (GPU 3) ...]
Setting num_proc from 16 back to 1 for the train split to disable multiprocessing as it only contains one shard.
Generating train split: 0 examples [00:00, ? examples/s]
Generating train split: 120 examples [00:00, 6525.39 examples/s]
Converting format of dataset (num_proc=16):   0%| | 0/120 [00:00<?, ? examples/s]
[... the progress line repeats ...]
/usr/local/lib/python3.11/dist-packages/torch/distributed/distributed_c10d.py:4631: UserWarning: No device id is provided via init_process_group or barrier. Using the current device set by the user.
  warnings.warn(  # warn only once
[rank0]:[W703 16:31:05.679961201 ProcessGroupNCCL.cpp:4718] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank0]: multiprocess.pool.RemoteTraceback:
[rank0]: """
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
[rank0]:     result = (True, func(*args, **kwds))
[rank0]:              ^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/utils/py_utils.py", line 688, in _write_generator_to_queue
[rank0]:     for i, result in enumerate(func(**kwargs)):
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3501, in _map_single
[rank0]:     for i, example in iter_outputs(shard_iterable):
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3475, in iter_outputs
[rank0]:     yield i, apply_function(example, i, offset=offset)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3398, in apply_function
[rank0]:     processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
[rank0]:                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/data/converter.py", line 94, in __call__
[rank0]:     if self.dataset_attr.prompt and example[self.dataset_attr.prompt]:
[rank0]:        ~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 278, in __getitem__
[rank0]:     value = self.data[key]
[rank0]:             ~~~~~~~~~^^^^^
[rank0]: KeyError: 'instruction'
[rank0]: """
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/launcher.py", line 23, in <module>
[rank0]:     launch()
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/launcher.py", line 19, in launch
[rank0]:     run_exp()
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
[rank0]:     _training_function(config={"args": args, "callbacks": callbacks})
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
[rank0]:     run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 51, in run_sft
[rank0]:     dataset_module = get_dataset(template, model_args, data_args, training_args, stage="sft", **tokenizer_module)
[rank0]:                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/data/loader.py", line 304, in get_dataset
[rank0]:     dataset = _get_merged_dataset(data_args.dataset, model_args, data_args, training_args, stage)
[rank0]:               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/data/loader.py", line 182, in _get_merged_dataset
[rank0]:     datasets[dataset_name] = _load_single_dataset(dataset_attr, model_args, data_args, training_args)
[rank0]:                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/data/loader.py", line 162, in _load_single_dataset
[rank0]:     return align_dataset(dataset, dataset_attr, data_args, training_args)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/LLaMA-Factory/src/llamafactory/data/converter.py", line 279, in align_dataset
[rank0]:     return dataset.map(
[rank0]:            ^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
[rank0]:     out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
[rank0]:                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3171, in map
[rank0]:     for rank, done, content in iflatmap_unordered(
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/utils/py_utils.py", line 728, in iflatmap_unordered
[rank0]:     [async_result.get(timeout=0.05) for async_result in async_results]
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/datasets/utils/py_utils.py", line 728, in <listcomp>
[rank0]:     [async_result.get(timeout=0.05) for async_result in async_results]
[rank0]:      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/wiseatc/.local/lib/python3.11/site-packages/multiprocess/pool.py", line 774, in get
[rank0]:     raise self._value
[rank0]: KeyError: 'instruction'
[rank0]:[W703 16:31:06.912491219 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources.
For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) W0703 16:31:07.960560 3914856 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3914916 closing signal SIGTERM W0703 16:31:07.961188 3914856 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3914917 closing signal SIGTERM W0703 16:31:07.961536 3914856 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 3914918 closing signal SIGTERM E0703 16:31:08.371267 3914856 torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 3914915) of binary: /usr/bin/python3.11 Traceback (most recent call last): File "/usr/local/bin/torchrun", line 8, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper return f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/distributed/run.py", line 892, in main run(args) File "/usr/local/lib/python3.11/dist-packages/torch/distributed/run.py", line 883, in run elastic_launch( File "/usr/local/lib/python3.11/dist-packages/torch/distributed/launcher/api.py", line 139, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/distributed/launcher/api.py", line 270, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ /home/wiseatc/LLaMA-Factory/src/llamafactory/launcher.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2025-07-03_16:31:07 host : wiseatc-Super-Server rank : 0 (local_rank: 0) exitcode : 1 (pid: 3914915) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ Traceback (most recent call last): File "/home/wiseatc/.local/bin/llamafactory-cli", line 8, in <module> sys.exit(main()) ^^^^^^ File "/home/wiseatc/LLaMA-Factory/src/llamafactory/cli.py", line 130, in main process = subprocess.run( ^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/subprocess.py", line 569, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['torchrun', '--nnodes', '1', '--node_rank', '0', '--nproc_per_node', '4', '--master_addr', '127.0.0.1', '--master_port', '41919', '/home/wiseatc/LLaMA-Factory/src/llamafactory/launcher.py', 'saves/DeepSeek-R1-1.5B-Distill/lora/train_2025-07-03-16-29-46/training_args.yaml']' returned non-zero exit status 1.
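The decisive line in this log is `KeyError: 'instruction'`, raised from LLaMA-Factory's dataset converter: with the default alpaca format, `dataset_attr.prompt` is `"instruction"`, so every sample is expected to carry an `instruction` field (plus optional `input`/`output`), and the custom dataset registered in `data/dataset_info.json` does not have it. A minimal sketch of the two usual fixes follows; the file name `my_data.json` and the field names `question`/`answer` are assumptions about your dataset, not LLaMA-Factory requirements:

    # Option 1 - reshape each sample to carry the alpaca keys the converter expects
    # (my_data.json is a hypothetical dataset file)
    cat > data/my_data.json <<'EOF'
    [
      {"instruction": "Translate to English.", "input": "你好", "output": "Hello"}
    ]
    EOF

    # Option 2 - keep your own field names and declare a column mapping in
    # data/dataset_info.json; "prompt"/"response" are LLaMA-Factory's mapping keys,
    # "question"/"answer" stand in for whatever your data actually uses
    cat > data/dataset_info.json <<'EOF'
    {
      "my_data": {
        "file_name": "my_data.json",
        "columns": {
          "prompt": "question",
          "response": "answer"
        }
      }
    }
    EOF

Either way, the entry name in `dataset_info.json` must match the dataset selected in `training_args.yaml`; all sixteen `num_proc` map workers fail with the same `KeyError` as soon as the first sample is missing the expected key.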

ucar_ws$ catkin_make
Base path: /home/haha/ucar_ws
Source space: /home/haha/ucar_ws/src
Build space: /home/haha/ucar_ws/build
Devel space: /home/haha/ucar_ws/devel
Install space: /home/haha/ucar_ws/install
####
#### Running command: "cmake /home/haha/ucar_ws/src -DCATKIN_DEVEL_PREFIX=/home/haha/ucar_ws/devel -DCMAKE_INSTALL_PREFIX=/home/haha/ucar_ws/install -G Unix Makefiles" in "/home/haha/ucar_ws/build"
####
CMake Warning (dev) in CMakeLists.txt:
  No project() command is present.  The top-level CMakeLists.txt file must
  contain a literal, direct call to the project() command.  Add a line of
  code such as

    project(ProjectName)

  near the top of the file, but after cmake_minimum_required().

  CMake is pretending there is a "project(Project)" command on the first line.
This warning is for project developers.  Use -Wno-dev to suppress it.
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Using CATKIN_DEVEL_PREFIX: /home/haha/ucar_ws/devel
-- Using CMAKE_PREFIX_PATH: /home/haha/gazebo_test_ws/devel;/home/haha/catkin_ws/devel;/opt/ros/noetic
-- This workspace overlays: /home/haha/gazebo_test_ws/devel;/home/haha/catkin_ws/devel;/opt/ros/noetic
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.10", minimum required is "3")
-- Using PYTHON_EXECUTABLE: /usr/bin/python3
-- Using Debian Python package layout
-- Found PY_em: /usr/lib/python3/dist-packages/em.py
-- Using empy: /usr/lib/python3/dist-packages/em.py
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/haha/ucar_ws/build/test_results
-- Forcing gtest/gmock from source, though one was otherwise available.
-- Found gtest sources under '/usr/src/googletest': gtests will be built
-- Found gmock sources under '/usr/src/googletest': gmock will be built
-- Found PythonInterp: /usr/bin/python3 (found version "3.8.10")
-- Found Threads: TRUE
-- Using Python nosetests: /usr/bin/nosetests3
-- catkin 0.8.12
-- BUILD_SHARED_LIBS is on
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~  traversing 39 packages in topological order:
-- ~~  - geometry (metapackage)
-- ~~  - geometry2 (metapackage)
-- ~~  - navigation (metapackage)
-- ~~  - eigen_conversions
-- ~~  - kdl_conversions
-- ~~  - tf2_msgs
-- ~~  - tf2
-- ~~  - tf2_bullet
-- ~~  - tf2_eigen
-- ~~  - map_server
-- ~~  - speech_command
-- ~~  - tf2_py
-- ~~  - tf
-- ~~  - fdilink_ahrs
-- ~~  - tf2_geometry_msgs
-- ~~  - amcl
-- ~~  - fake_localization
-- ~~  - tf2_kdl
-- ~~  - test_tf2
-- ~~  - tf2_sensor_msgs
-- ~~  - tf2_tools
-- ~~  - tf_conversions
-- ~~  - ucar_camera
-- ~~  - ucar_controller
-- ~~  - ucar_map
-- ~~  - ucar_nav
-- ~~  - voxel_grid
-- ~~  - costmap_2d
-- ~~  - nav_core
-- ~~  - base_local_planner
-- ~~  - carrot_planner
-- ~~  - clear_costmap_recovery
-- ~~  - dwa_local_planner
-- ~~  - move_slow_and_clear
-- ~~  - navfn
-- ~~  - global_planner
-- ~~  - rotate_recovery
-- ~~  - move_base
-- ~~  - ydlidar
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin metapackage: 'geometry'
-- ==> add_subdirectory(geometry/geometry)
-- +++ processing catkin metapackage: 'geometry2'
-- ==> add_subdirectory(geometry2/geometry2)
-- +++ processing catkin metapackage: 'navigation'
-- ==> add_subdirectory(navigation/navigation)
-- +++ processing catkin package: 'eigen_conversions'
-- ==> add_subdirectory(geometry/eigen_conversions)
-- +++ processing catkin package: 'kdl_conversions'
-- ==> add_subdirectory(geometry/kdl_conversions)
-- +++ processing catkin package: 'tf2_msgs'
-- ==> add_subdirectory(geometry2/tf2_msgs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: thread
-- Generating .msg files for action tf2_msgs/LookupTransform /home/haha/ucar_ws/src/geometry2/tf2_msgs/action/LookupTransform.action
Generating for action LookupTransform
-- tf2_msgs: 9 messages, 1 services
-- +++ processing catkin package: 'tf2'
-- ==> add_subdirectory(geometry2/tf2)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: system thread
-- +++ processing catkin package: 'tf2_bullet'
-- ==> add_subdirectory(geometry2/tf2_bullet)
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
-- Checking for module 'bullet'
--   Found bullet, version 2.88
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'tf2_eigen'
-- ==> add_subdirectory(geometry2/tf2_eigen)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'map_server'
-- ==> add_subdirectory(navigation/map_server)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Bullet: /usr/lib/x86_64-linux-gnu/libBulletDynamics.so
-- Found SDL: /usr/lib/x86_64-linux-gnu/libSDLmain.a;/usr/lib/x86_64-linux-gnu/libSDL.so;-lpthread (found version "1.2.15")
-- Found SDL_image: /usr/lib/x86_64-linux-gnu/libSDL_image.so (found version "1.2.12")
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: filesystem
-- +++ processing catkin package: 'speech_command'
-- ==> add_subdirectory(speech_command)
-- +++ processing catkin package: 'tf2_py'
-- ==> add_subdirectory(geometry2/tf2_py)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.8.so (found suitable version "3.8.10", minimum required is "2")
-- +++ processing catkin package: 'tf'
-- ==> add_subdirectory(geometry/tf)
-- Performing Test COMPILER_SUPPORTS_CXX11
-- Performing Test COMPILER_SUPPORTS_CXX11 - Success
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: thread system
-- tf: 1 messages, 1 services
-- +++ processing catkin package: 'fdilink_ahrs'
-- ==> add_subdirectory(fdilink_ahrs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'tf2_geometry_msgs'
-- ==> add_subdirectory(geometry2/tf2_geometry_msgs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: thread
-- +++ processing catkin package: 'amcl'
-- ==> add_subdirectory(navigation/amcl)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0")
-- Looking for unistd.h
-- Looking for unistd.h - found
-- Looking for drand48
-- Looking for drand48 - found
-- +++ processing catkin package: 'fake_localization'
-- ==> add_subdirectory(navigation/fake_localization)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'tf2_kdl'
-- ==> add_subdirectory(geometry2/tf2_kdl)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'test_tf2'
-- ==> add_subdirectory(geometry2/test_tf2)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: thread
-- +++ processing catkin package: 'tf2_sensor_msgs'
-- ==> add_subdirectory(geometry2/tf2_sensor_msgs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'tf2_tools'
-- ==> add_subdirectory(geometry2/tf2_tools)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'tf_conversions'
-- ==> add_subdirectory(geometry/tf_conversions)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'ucar_camera'
-- ==> add_subdirectory(ucar_camera)
-- +++ processing catkin package: 'ucar_controller'
-- ==> add_subdirectory(ucar_controller)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- ucar_controller: 0 messages, 6 services
-- +++ processing catkin package: 'ucar_map'
-- ==> add_subdirectory(ucar_map)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'ucar_nav'
-- ==> add_subdirectory(ucar_nav)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'voxel_grid'
-- ==> add_subdirectory(navigation/voxel_grid)
-- Looking for sys/time.h
-- Looking for sys/time.h - found
-- +++ processing catkin package: 'costmap_2d'
-- ==> add_subdirectory(navigation/costmap_2d)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: system thread
-- costmap_2d: 1 messages, 0 services
-- +++ processing catkin package: 'nav_core'
-- ==> add_subdirectory(navigation/nav_core)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'base_local_planner'
-- ==> add_subdirectory(navigation/base_local_planner)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: thread
-- base_local_planner: 1 messages, 0 services
-- +++ processing catkin package: 'carrot_planner'
-- ==> add_subdirectory(navigation/carrot_planner)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'clear_costmap_recovery'
-- ==> add_subdirectory(navigation/clear_costmap_recovery)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'dwa_local_planner'
-- ==> add_subdirectory(navigation/dwa_local_planner)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'move_slow_and_clear'
-- ==> add_subdirectory(navigation/move_slow_and_clear)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'navfn'
-- ==> add_subdirectory(navigation/navfn)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- navfn: 0 messages, 2 services
-- Looking for C++ include FL/Fl.H
-- Looking for C++ include FL/Fl.H - found
-- Looking for C++ include pgm.h
-- Looking for C++ include pgm.h - not found
-- NAVFN_HAVE_FLTK: 1, NETPBM:
-- FLTK orf NETPBM not found: cannot build navtest
-- +++ processing catkin package: 'global_planner'
-- ==> add_subdirectory(navigation/global_planner)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'rotate_recovery'
-- ==> add_subdirectory(navigation/rotate_recovery)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'move_base'
-- ==> add_subdirectory(navigation/move_base)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'ydlidar'
-- ==> add_subdirectory(ydlidar)
-- Configuring done
-- Generating done
-- Build files have been written to: /home/haha/ucar_ws/build
####
#### Running command: "make -j2 -l2" in "/home/haha/ucar_ws/build"
####
Scanning dependencies of target std_msgs_generate_messages_py
Scanning dependencies of target geometry_msgs_generate_messages_eus
[  0%] Built target std_msgs_generate_messages_py
[  0%] Built target geometry_msgs_generate_messages_eus
Scanning dependencies of target geometry_msgs_generate_messages_cpp
Scanning dependencies of target geometry_msgs_generate_messages_py
[  0%] Built target geometry_msgs_generate_messages_cpp
[  0%] Built target geometry_msgs_generate_messages_py
Scanning dependencies of target geometry_msgs_generate_messages_nodejs
Scanning dependencies of target std_msgs_generate_messages_cpp
[  0%] Built target geometry_msgs_generate_messages_nodejs
[  0%] Built target std_msgs_generate_messages_cpp
Scanning dependencies of target geometry_msgs_generate_messages_lisp
Scanning dependencies of target std_msgs_generate_messages_eus
[  0%] Built target geometry_msgs_generate_messages_lisp
[  0%] Built target std_msgs_generate_messages_eus
Scanning dependencies of target std_msgs_generate_messages_lisp
Scanning dependencies of target std_msgs_generate_messages_nodejs
[  0%] Built target std_msgs_generate_messages_lisp
[  0%] Built target std_msgs_generate_messages_nodejs
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformGoal
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformActionResult
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformActionResult
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformGoal
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_TF2Error
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformAction
[  0%] Built target _tf2_msgs_generate_messages_check_deps_TF2Error
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformAction
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_TFMessage
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformActionGoal
[  0%] Built target _tf2_msgs_generate_messages_check_deps_TFMessage
Scanning dependencies of target actionlib_msgs_generate_messages_nodejs
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformActionGoal
[  0%] Built target actionlib_msgs_generate_messages_nodejs
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformActionFeedback
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformResult
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformActionFeedback
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformResult
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_LookupTransformFeedback
Scanning dependencies of target _tf2_msgs_generate_messages_check_deps_FrameGraph
[  0%] Built target _tf2_msgs_generate_messages_check_deps_LookupTransformFeedback
[  0%] Built target _tf2_msgs_generate_messages_check_deps_FrameGraph
Scanning dependencies of target actionlib_msgs_generate_messages_py
Scanning dependencies of target actionlib_msgs_generate_messages_lisp
[  0%] Built target actionlib_msgs_generate_messages_py
[  0%] Built target actionlib_msgs_generate_messages_lisp
Scanning dependencies of target actionlib_msgs_generate_messages_cpp
Scanning dependencies of target actionlib_msgs_generate_messages_eus
[  0%] Built target actionlib_msgs_generate_messages_cpp
[  0%] Built target actionlib_msgs_generate_messages_eus
Scanning dependencies of target roscpp_generate_messages_cpp
Scanning dependencies of target gtest
[  0%] Built target roscpp_generate_messages_cpp
Scanning dependencies of target nav_msgs_generate_messages_lisp
[  0%] Built target nav_msgs_generate_messages_lisp
Scanning dependencies of target nav_msgs_generate_messages_eus
[  1%] Building CXX object gtest/googletest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
[  1%] Built target nav_msgs_generate_messages_eus
Scanning dependencies of target roscpp_generate_messages_lisp
[  1%] Built target roscpp_generate_messages_lisp
Scanning dependencies of target nav_msgs_generate_messages_cpp
[  1%] Built target nav_msgs_generate_messages_cpp
Scanning dependencies of target rosgraph_msgs_generate_messages_eus
[  1%] Built target rosgraph_msgs_generate_messages_eus
Scanning dependencies of target rosgraph_msgs_generate_messages_cpp
[  1%] Built target rosgraph_msgs_generate_messages_cpp
Scanning dependencies of target roscpp_generate_messages_nodejs
[  1%] Built target roscpp_generate_messages_nodejs
Scanning dependencies of target roscpp_generate_messages_py
[  1%] Built target roscpp_generate_messages_py
Scanning dependencies of target rosgraph_msgs_generate_messages_nodejs
[  1%] Built target rosgraph_msgs_generate_messages_nodejs
Scanning dependencies of target roscpp_generate_messages_eus
[  1%] Built target roscpp_generate_messages_eus
Scanning dependencies of target rosgraph_msgs_generate_messages_lisp
[  1%] Built target rosgraph_msgs_generate_messages_lisp
Scanning dependencies of target nav_msgs_generate_messages_nodejs
[  1%] Built target nav_msgs_generate_messages_nodejs
Scanning dependencies of target nav_msgs_generate_messages_py
[  1%] Built target nav_msgs_generate_messages_py
Scanning dependencies of target rosgraph_msgs_generate_messages_py
[  1%] Built target rosgraph_msgs_generate_messages_py
Scanning dependencies of target audio_recorder
[  1%] Building CXX object speech_command/CMakeFiles/audio_recorder.dir/src/AudioRecorder.cpp.o
In file included from /home/haha/ucar_ws/src/speech_command/include/AudioRecorder.h:4,
                 from /home/haha/ucar_ws/src/speech_command/src/AudioRecorder.cpp:1:
/home/haha/ucar_ws/src/speech_command/include/hidapi.h:34:23: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
   34 | char *device_index = "iflytek";
      |                       ^~~~~~~~~
[  2%] Linking CXX shared library /home/haha/ucar_ws/devel/lib/libaudio_recorder.so
/usr/bin/ld: skipping incompatible /home/haha/ucar_ws/src/speech_command/lib/arm64/libhid_lib.so when searching for -lhid_lib
/usr/bin/ld: cannot find -lhid_lib
/usr/bin/ld: skipping incompatible /home/haha/ucar_ws/src/speech_command/lib/arm64/libmsc.so when searching for -lmsc
/usr/bin/ld: cannot find -lmsc
collect2: error: ld returned 1 exit status
make[2]: *** [speech_command/CMakeFiles/audio_recorder.dir/build.make:84: /home/haha/ucar_ws/devel/lib/libaudio_recorder.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:4943: speech_command/CMakeFiles/audio_recorder.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[  2%] Linking CXX shared library ../lib/libgtest.so
[  2%] Built target gtest
make: *** [Makefile:141: all] Error 2
Invoking "make -j2 -l2" failed

What is the problem, and what is a detailed way to fix it?
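The decisive lines are the two `skipping incompatible ... when searching for -lhid_lib / -lmsc` messages: the prebuilt iFlytek libraries bundled with `speech_command` sit under `lib/arm64/` and are AArch64 binaries, while this build is running on an x86_64 host, so `ld` rejects them and then reports `cannot find -lhid_lib`. A quick way to confirm, as a sketch (the paths come from the log; any `lib/x64` directory is an assumption about what the SDK ships):

    # These should report "ARM aarch64" shared objects, not x86-64
    file /home/haha/ucar_ws/src/speech_command/lib/arm64/libhid_lib.so
    file /home/haha/ucar_ws/src/speech_command/lib/arm64/libmsc.so

    # The host toolchain target (here: x86_64)
    uname -m

    # See where the package hardcodes the arm64 library directory; if the SDK
    # also ships an x86_64 build, point link_directories at that instead
    grep -rn "lib/arm64" /home/haha/ucar_ws/src/speech_command/CMakeLists.txt

If only the arm64 binaries exist (this package targets an ARM in-car board), the realistic options are to build the workspace on the target ARM device, cross-compile the whole workspace for aarch64, or simply exclude the package from the x86 build, e.g. `catkin_make -DCATKIN_BLACKLIST_PACKAGES="speech_command"`.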

(SGSlam) huoduo@huoduo:~/SG-SLAM$ catkin_make --pkg cv_bridge
Base path: /home/huoduo/SG-SLAM
Source space: /home/huoduo/SG-SLAM/src
Build space: /home/huoduo/SG-SLAM/build
Devel space: /home/huoduo/SG-SLAM/devel
Install space: /home/huoduo/SG-SLAM/install
####
#### Running command: "make cmake_check_build_system" in "/home/huoduo/SG-SLAM/build"
####
####
#### Running command: "make -j8 -l8" in "/home/huoduo/SG-SLAM/build/cv_bridge"
####
[  0%] Built target std_msgs_generate_messages_nodejs
[  0%] Built target std_msgs_generate_messages_py
[  0%] Built target geometry_msgs_generate_messages_eus
[  0%] Built target std_msgs_generate_messages_lisp
[  0%] Built target sensor_msgs_generate_messages_cpp
[  0%] Built target std_msgs_generate_messages_cpp
[  0%] Built target sensor_msgs_generate_messages_eus
[  0%] Built target std_msgs_generate_messages_eus
[  0%] Built target geometry_msgs_generate_messages_lisp
[  0%] Built target sensor_msgs_generate_messages_lisp
[  0%] Built target sensor_msgs_generate_messages_nodejs
[  0%] Built target geometry_msgs_generate_messages_cpp
[  0%] Built target geometry_msgs_generate_messages_py
[  0%] Built target geometry_msgs_generate_messages_nodejs
[  0%] Built target sensor_msgs_generate_messages_py
[ 50%] Built target cv_bridge
[ 66%] Building CXX object cv_bridge/src/CMakeFiles/cv_bridge_boost.dir/module_opencv2.cpp.o
In file included from /home/huoduo/SG-SLAM/src/cv_bridge/src/module_opencv2.cpp:35:
/home/huoduo/SG-SLAM/src/cv_bridge/src/module.hpp: In function ‘int do_numpy_import()’:
/home/huoduo/SG-SLAM/src/cv_bridge/src/module.hpp:39:5: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
   39 |     import_array( );
      |     ^~~~~~~~~~~~
/home/huoduo/SG-SLAM/src/cv_bridge/src/module_opencv2.cpp: At global scope:
/home/huoduo/SG-SLAM/src/cv_bridge/src/module_opencv2.cpp:151:16: error: cannot declare variable ‘g_numpyAllocator’ to be of abstract type ‘NumpyAllocator’
  151 | NumpyAllocator g_numpyAllocator;
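This is the usual cv_bridge-versus-OpenCV version mismatch: `module_opencv2.cpp` implements `NumpyAllocator` against the OpenCV 2 `cv::MatAllocator` interface, whose pure-virtual methods changed in OpenCV 3/4, so under a newer OpenCV the class remains abstract and cannot be instantiated. cv_bridge ships a matching `module_opencv3.cpp`, but the build above selected the OpenCV 2 variant, which typically means the version check in `src/cv_bridge/src/CMakeLists.txt` did not recognize the installed OpenCV (a common symptom with OpenCV 4, since older checks only match version 3). A hedged way to confirm, assuming nothing beyond what the log shows:

    # Which OpenCV is installed? (try the OpenCV 4 pkg-config name first)
    pkg-config --modversion opencv4 2>/dev/null || pkg-config --modversion opencv

    # How does the bundled cv_bridge pick the allocator implementation?
    # With OpenCV >= 3 it should compile module_opencv3.cpp, not module_opencv2.cpp
    grep -n "module_opencv" /home/huoduo/SG-SLAM/src/cv_bridge/src/CMakeLists.txt

If the `grep` reveals a check like `if (OpenCV_VERSION_MAJOR VERSION_EQUAL 3)`, extending it to also match 4 (or hard-selecting `module_opencv3.cpp`) and rebuilding is the standard fix; the alternative is to build against the OpenCV version this bundled cv_bridge was written for.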

6. How to Run Locally

DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:

DeepSeek-Infer Demo: We provide a simple and lightweight demo for FP8 and BF16 inference.
SGLang: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction coming soon.
LMDeploy: Enables efficient FP8 and BF16 inference for local and cloud deployment.
TensorRT-LLM: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
vLLM: Supports the DeepSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
AMD GPU: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes.
Huawei Ascend NPU: Supports running DeepSeek-V3 on Huawei Ascend devices.

Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation. Here is an example of converting FP8 weights to BF16:

    cd inference
    python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights

Answer:

II. Software environment setup

Pick an inference framework (installed via pip):

Native FP8 inference — use the SGLang framework:

    pip install sglang

Mixed BF16/FP8 support — use LMDeploy:

    pip install lmdeploy

TensorRT acceleration — install TensorRT-LLM:

    git clone https://github.com/NVIDIA/TensorRT-LLM.git && cd TensorRT-LLM && pip install -e .

Fetch the model weights:

    huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./deepseek-weights

III. FP8-to-BF16 weight conversion

Run the official conversion script (it ships in the `inference/` folder of the DeepSeek-V3 repository, as quoted in the README excerpt above):

    python fp8_cast_bf16.py \
        --input-fp8-hf-path ./deepseek-weights \
        --output-bf16-hf-path ./bf16-weights

This script converts the original FP8 weights to BF16 format while preserving the model's configuration.

Question: I didn't understand step III — what are the concrete operations?
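Step III is just running a Python script that lives inside the DeepSeek-V3 GitHub repository; it is not something installed by pip. A minimal end-to-end sketch follows — the script name, flags, and `inference/` path come from the README excerpt above, while the local directory names and the final config-copy step are assumptions (check whether your version of the script already writes the config and tokenizer files itself):

    # 1. Get the conversion script: it lives in the inference/ folder of the repo
    git clone https://github.com/deepseek-ai/DeepSeek-V3.git
    cd DeepSeek-V3/inference
    pip install -r requirements.txt   # torch, safetensors, and friends

    # 2. Convert: read the downloaded FP8 checkpoint, write a BF16 copy
    python fp8_cast_bf16.py \
        --input-fp8-hf-path  ~/deepseek-weights \
        --output-bf16-hf-path ~/deepseek-weights-bf16

    # 3. Defensively copy config/tokenizer files next to the new weights,
    #    then point your inference framework at the BF16 directory
    cp ~/deepseek-weights/config.json ~/deepseek-weights/tokenizer* ~/deepseek-weights-bf16/

Note that a BF16 copy of the full 671B checkpoint roughly doubles the disk footprint of the FP8 one, so check free space before converting.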
