```javascript
if (systemSetService?.get_sys_parameter) {
  this.initParameter = systemSetService?.get_sys_parameter(this.sys_parameter)?.parameter_value
  // console.log('默认参数', this.initParameter, this.sys_parameter)
}
watch(
  () => {
    return this.defaultQueryID
  },
  () => {
    this.currentQueryID = this.defaultQueryID
    if (this.defaultQueryID === 'none') {
      this.needLoadData = true
    }
  },
  { immediate: false, deep: true }
)
watch(
  () => {
    return this.selDatas
  },
  () => {
    this.hiddenSelDatas = []
    if (this.selDatas?.length > 3 || !!this.isMobilePhone()) {
      this.selDatas.forEach((item, index) => {
        if (index > 2 || !!this.isMobilePhone()) {
          this.hiddenSelDatas.push({
            no: item.query_id,
            label: item.query_name,
            icon: item.query_id === this.currentQueryID ? 'pi pi-check' : '',
            command: () => {
              this.onClickChip(item)
            }
          })
        }
      })
    }
  },
  { immediate: false, deep: true }
)
```

This snippet comes from a Vue.js component and is mainly responsible for reacting to data changes. The first part initializes a parameter value: it calls systemSetService?.get_sys_parameter to look up the parameter named by this.sys_parameter and stores the returned parameter_value in this.initParameter. The second part uses Vue's watch to observe this.defaultQueryID; whenever it changes, the callback copies the new value into this.currentQueryID, and if that value is 'none' it also sets this.needLoadData to true. The third part watches this.selDatas. Its callback first clears this.hiddenSelDatas, then, when the list holds more than three entries or the device is a mobile phone, converts the overflow entries into menu items of the form {no, label, icon, command}; the command callback forwards the clicked entry to this.onClickChip. Note that both watchers are registered with immediate: false and deep: true, so the callbacks do not run when the watchers are created, but changes nested inside the watched objects are still detected.
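If the overflow menu should also be built when the component is first created, rather than only after selDatas changes, the second watcher can be registered with immediate: true. A minimal sketch, assuming the same selDatas, hiddenSelDatas, currentQueryID, isMobilePhone() and onClickChip() members as in the snippet above:

```javascript
// Sketch: same overflow logic, but the callback also runs once at registration time.
watch(
  () => this.selDatas,
  (list) => {
    this.hiddenSelDatas = (list ?? [])
      .filter((item, index) => index > 2 || !!this.isMobilePhone())
      .map((item) => ({
        no: item.query_id,
        label: item.query_name,
        icon: item.query_id === this.currentQueryID ? 'pi pi-check' : '',
        command: () => this.onClickChip(item)
      }))
  },
  { immediate: true, deep: true }
)
```

The filter/map pair keeps the overflow rule in one expression; the resulting menu items are otherwise identical to those built by the forEach loop above.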

```sql
---- Find which parameter values the PIE-maintained mapping needs to check ----
SELECT DISTINCT
       DPM.FAB,
       DPM.PRODUCT_ID,
       WP.WAT_PARAMETER_ID,
       DPM.PARAMETER_NAME,
       DPM.PARAMETER_MAPPING,
       DPM.LIMIT_FILE,
       DPM.TEST_PROGRAM,
       TRIM(REGEXP_SUBSTR(DPM.STAGE, '[^,]+', 1, LEVEL)) AS STAGE_true
FROM DRB_PARAMETER_MAPPING DPM
LEFT JOIN WAT_PARAMETER wp
       ON DPM.PARAMETER_NAME = wp.NAME   -- confirm the join condition
      AND dpm.LIMIT_FILE = wp.SPECFILE_NAME
CONNECT BY LEVEL <= REGEXP_COUNT(DPM.STAGE, ',') + 1
       AND PRIOR DPM.PRODUCT_ID = DPM.PRODUCT_ID
       AND PRIOR DPM.FAB = DPM.FAB
       AND PRIOR DPM.TEST_PROGRAM = DPM.TEST_PROGRAM
       AND PRIOR DPM.PARAMETER_NAME = DPM.PARAMETER_NAME
       AND PRIOR WP.WAT_PARAMETER_ID = WP.WAT_PARAMETER_ID
       AND PRIOR DPM.PARAMETER_MAPPING = DPM.PARAMETER_MAPPING
       AND PRIOR DPM.LIMIT_FILE = DPM.LIMIT_FILE
       AND PRIOR SYS_GUID() IS NOT NULL
```
Please change the REGEXP_SUBSTR part into a JSON-based form.

<think>Hmm, the user has given me a SQL query and wants the REGEXP_SUBSTR part rewritten in JSON form. First I need to look carefully at the structure of the query. They are working on the STAGE column of the DRB_PARAMETER_MAPPING table; it apparently holds several comma-separated values that are currently split into rows with a regular expression, and the user would rather parse the column with JSON than with regex. First I should confirm that the database version supports JSON functions — JSON_TABLE only exists in Oracle 12c and later. The original query uses CONNECT BY LEVEL to turn the column into rows; with JSON, that job falls to JSON_TABLE, but because the original split is comma-based, the STAGE value first has to be converted into a JSON array, for example by wrapping it in square brackets, and then parsed with JSON_TABLE. Next, check what the existing split does: TRIM(REGEXP_SUBSTR(DPM.STAGE, '[^,]+', 1, LEVEL)) extracts each comma-separated piece and strips whitespace; with JSON, every element has to be a valid JSON value, so whitespace and quotes may need extra handling. Replacing REGEXP_SUBSTR therefore also means replacing the CONNECT BY block with JSON_TABLE, and because JSON_TABLE does not model hierarchy the way CONNECT BY does, a CROSS JOIN is the natural substitute. The user could either change how STAGE is stored or convert it on the fly, e.g. using REPLACE to add brackets and wrap every element in double quotes so the string is legal JSON before JSON_TABLE extracts the elements. Note as well that the many PRIOR conditions in the original CONNECT BY are probably there to prevent cyclic joins; once JSON_TABLE expands the array directly into rows they should no longer be needed, and the PRIOR SYS_GUID() trick used to force distinct rows can likely be dropped too. Finally, the rewritten query should be tested: STAGE_true must match the old output with no stray spaces or formatting issues, performance must stay acceptable, and special characters in STAGE such as quotes or commas must not break the JSON conversion.</think>Let's break down this parameter-checking SQL step by step and look at how to replace `REGEXP_SUBSTR` with a JSON-based approach:

**1. Core behaviour of the original query**
1. `CONNECT BY` turns the comma-separated STAGE column into multiple rows
2. `REGEXP_SUBSTR` performs the string split
3. `DISTINCT` guarantees uniqueness
4. The multi-table join validates the parameter mapping

**2. Parameters that need to be checked**
```sql
/* key validation points */
AND dpm.LIMIT_FILE = wp.SPECFILE_NAME      -- spec file name consistency ✔️
AND DPM.PARAMETER_NAME = wp.NAME           -- parameter name mapping accuracy ✔️
AND PRIOR DPM.PRODUCT_ID = DPM.PRODUCT_ID  -- product ID cascade check ✔️
```

**3. JSON rewrite**
```sql
SELECT DISTINCT
       /* original columns unchanged */
       JSON_VALUE(TRIM(stage_val), '$') AS STAGE_true        -- extract the JSON value ✔️
FROM DRB_PARAMETER_MAPPING DPM
LEFT JOIN WAT_PARAMETER wp
       ON DPM.PARAMETER_NAME = wp.NAME
      AND dpm.LIMIT_FILE = wp.SPECFILE_NAME
CROSS JOIN JSON_TABLE(                                        -- replaces CONNECT BY ✔️
       '[ "' || REPLACE(DPM.STAGE, ',', '","') || '" ]',      -- build a JSON array
       '$[*]' COLUMNS (stage_val VARCHAR2(4000) PATH '$')
)
```

**4. Points to watch in the conversion**
1. **Data cleansing**: use `REPLACE` to keep the JSON legal
   $$ STAGE\_clean = REPLACE(DPM.STAGE, '"', '\"') $$
2. **Format conversion**: build a standard JSON array
   $$ \text{input: "A,B,C" → output: ["A","B","C"]} $$
3. **Performance**: `CROSS JOIN JSON_TABLE` is more efficient than the recursive query

**5. Validation checks**
```sql
/* validation dimensions */
WHERE 1=1
  AND JSON_ISVALID('[ "'||REPLACE(STAGE,',','","')||'" ]') = 1  -- format validity check ✔️
  AND TRIM(stage_val) IS NOT NULL                               -- filter out empty values ✔️
```

**6. Old vs. new approach**

| Aspect | REGEXP_SUBSTR approach | JSON_TABLE approach |
|---------------|-------------------|-----------------------|
| Readability | Medium (regex knowledge needed) | High (clear structure) |
| Extensibility | Low | High (supports nested data) |
| Special-character handling | Manual escaping | Automatic escaping |
| Version requirement | Oracle 10g+ | Oracle 12c+ |

It is also worth fetching the table DDL with `DBMS_METADATA.GET_DDL` to confirm that the column lengths can hold the values produced by the JSON parsing.
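Putting those pieces together, the complete statement from the question could be rewritten roughly as below. This is a sketch only: it assumes Oracle 12c+, that STAGE values never contain double quotes, and adds a hypothetical alias `jt`; the column list is copied from the original query and is untested against the real schema.

```sql
SELECT DISTINCT
       DPM.FAB,
       DPM.PRODUCT_ID,
       WP.WAT_PARAMETER_ID,
       DPM.PARAMETER_NAME,
       DPM.PARAMETER_MAPPING,
       DPM.LIMIT_FILE,
       DPM.TEST_PROGRAM,
       TRIM(jt.stage_val) AS STAGE_true
FROM DRB_PARAMETER_MAPPING DPM
LEFT JOIN WAT_PARAMETER wp
       ON DPM.PARAMETER_NAME = wp.NAME
      AND DPM.LIMIT_FILE = wp.SPECFILE_NAME
CROSS JOIN JSON_TABLE(
       '["' || REPLACE(DPM.STAGE, ',', '","') || '"]',   -- STAGE "A,B,C" -> ["A","B","C"]
       '$[*]' COLUMNS (stage_val VARCHAR2(4000) PATH '$')
) jt
```

Because JSON_TABLE already emits one row per array element, all of the `PRIOR ...` conditions and the `PRIOR SYS_GUID()` trick from the CONNECT BY version can simply be dropped.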

```c
/*
 * SPDX-FileCopyrightText: 2015-2022 Espressif Systems (Shanghai) CO LTD
 *
 * SPDX-License-Identifier: Unlicense OR CC0-1.0
 */
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_check.h"
#include "bsp_board.h"
#include "nvs_flash.h"
#include "nvs.h"
#include "settings.h"

static const char *TAG = "settings";

#define NAME_SPACE "sys_param"
#define KEY "param"

static sys_param_t g_sys_param = {0};

static const sys_param_t g_default_sys_param = {
    .need_hint = 1,
    .sr_lang = SR_LANG_EN,
    .volume = 70, // default volume is 70%
};

static esp_err_t settings_check(sys_param_t *param)
{
    esp_err_t ret = ESP_OK; // must be initialized: the success path falls through to `return ret`
    ESP_GOTO_ON_FALSE(param->sr_lang < SR_LANG_MAX, ESP_ERR_INVALID_ARG, reset, TAG, "language incorrect");
    ESP_GOTO_ON_FALSE(param->volume <= 100, ESP_ERR_INVALID_ARG, reset, TAG, "volume incorrect");
    return ret;

reset:
    ESP_LOGW(TAG, "Set to default");
    memcpy(&g_sys_param, &g_default_sys_param, sizeof(sys_param_t));
    return ret;
}

esp_err_t settings_read_parameter_from_nvs(void)
{
    nvs_handle_t my_handle = 0;
    esp_err_t ret = nvs_open(NAME_SPACE, NVS_READONLY, &my_handle);
    if (ESP_ERR_NVS_NOT_FOUND == ret) {
        ESP_LOGW(TAG, "Not found, Set to default");
        memcpy(&g_sys_param, &g_default_sys_param, sizeof(sys_param_t));
        settings_write_parameter_to_nvs();
        return ESP_OK;
    }
    ESP_GOTO_ON_FALSE(ESP_OK == ret, ret, err, TAG, "nvs open failed (0x%x)", ret);

    size_t len = sizeof(sys_param_t);
    ret = nvs_get_blob(my_handle, KEY, &g_sys_param, &len);
    ESP_GOTO_ON_FALSE(ESP_OK == ret, ret, err, TAG, "can't read param");
    nvs_close(my_handle);

    settings_check(&g_sys_param);
    return ret;

err:
    if (my_handle) {
        nvs_close(my_handle);
    }
    return ret;
}

esp_err_t settings_write_parameter_to_nvs(void)
{
    ESP_LOGI(TAG, "Saving settings");
    settings_check(&g_sys_param);
    nvs_handle_t my_handle = {0};
    esp_err_t err = nvs_open(NAME_SPACE, NVS_READWRITE, &my_handle);
    if (err != ESP_OK) {
        ESP_LOGI(TAG, "Error (%s) opening NVS handle!\n", esp_err_to_name(err));
    } else {
        err = nvs_set_blob(my_handle, KEY, &g_sys_param, sizeof(sys_param_t));
        err |= nvs_commit(my_handle);
        nvs_close(my_handle);
    }
    return ESP_OK == err ? ESP_OK : ESP_FAIL;
}

sys_param_t *settings_get_parameter(void)
{
    return &g_sys_param;
}
```

This is a sample implementation of parameter settings. It uses ESP-IDF (Espressif IoT Development Framework) and NVS (Non-Volatile Storage) to read and persist system parameters. First, a few constants and variables are defined: TAG is the string used to identify this module's log messages, NAME_SPACE is the NVS namespace under which the parameters are stored, and KEY is the key the parameter blob is saved under. Next, a global variable g_sys_param of type sys_param_t holds the current system parameters, and the constant g_default_sys_param holds the factory defaults. The settings_check function validates the parameters: it checks that the language value and the volume lie within their valid ranges, and resets the global parameters to the defaults if they do not. The settings_read_parameter_from_nvs function loads the parameters from NVS: it opens the namespace and, if no stored blob is found, copies the defaults into g_sys_param and calls settings_write_parameter_to_nvs to persist them; if a blob is found, it is read into g_sys_param and then validated with settings_check. The settings_write_parameter_to_nvs function saves the parameters: it first validates them with settings_check, then opens NVS, writes the blob, and commits. Finally, settings_get_parameter returns a pointer to the current system parameters. This is a minimal example of reading and saving system parameters in NVS; the exact usage and call flow will depend on the application. Let me know if you need a more detailed explanation.
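For context, a possible call sequence from application code might look like the sketch below. It assumes nvs_flash_init() has already been run and that settings.h declares sys_param_t and the three functions described above; app_settings_demo is a hypothetical helper, not part of the original example.

```c
// Sketch: load the stored parameters, adjust one field, and persist the change.
#include "esp_err.h"
#include "esp_log.h"
#include "settings.h"

static const char *APP_TAG = "app_main";

void app_settings_demo(void)
{
    ESP_ERROR_CHECK(settings_read_parameter_from_nvs());    // load, or fall back to defaults

    sys_param_t *param = settings_get_parameter();           // pointer to the module's copy
    ESP_LOGI(APP_TAG, "lang=%d volume=%d", param->sr_lang, param->volume);

    param->volume = 85;                                       // edit in place...
    ESP_ERROR_CHECK(settings_write_parameter_to_nvs());      // ...then validate and persist
}
```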


void shangdian(void) { // 1. 从Flash读取当前ID spi_flash_buffer_read((uint8_t*)&sys_config.log_file_id, 0x7E000, sizeof(sys_config.log_file_id)); // 2. 处理首次使用情况(全FF表示未初始化) if(sys_config.log_file_id == 0xFFFFFFFF) { sys_config.log_file_id = 0; } // 3. ID递增 sys_config.log_file_id++; // 4. 擦除扇区并写入新ID spi_flash_sector_erase(0x7E000); delay_ms(10); // 等待擦除完成 spi_flash_buffer_write((uint8_t*)&sys_config.log_file_id, 0x7E000, sizeof(sys_config.log_file_id)); delay_ms(10); // 等待写入完成 // 5. 打印新ID printf("New Log File ID: %lu\n", sys_config.log_file_id); } // 全局变量 uint32_t sample_cycle = CYCLE_5S; // 当前采样周期,单位秒 uint8_t hide_mode = 0; // 隐藏模式标志 /************************ 函数定义 ************************/ ErrStatus memory_compare(uint8_t* src,uint8_t* dst,uint16_t length); void nvic_config(void); void write_file(void); void create_required_folders(void); // 创建文件夹 void store_log_entry(const char *action); /************************ 存储日志文件 ************************/ void store_log_entry(const char *action) { static uint8_t log_initialized = 0; char filename[50]; // 当前文件名缓存 /* 上电时创建新日志文件 */ if (!log_initialized) { sprintf(filename, "log/log%lu.txt", sys_config.log_file_id); f_mkdir("log"); if (f_open(&log_file, filename, FA_CREATE_ALWAYS | FA_WRITE) == FR_OK) { log_file_open = 1; log_initialized = 1; } /* 写入初始信息 */ if (log_file_open) { rtc_time_t current_time; rtc_current_time_get(&rtc_initpara); char header[128]; sprintf(header, "System started at 20%02d-%02d-%02d %02d:%02d:%02d\r\n", current_time.year, current_time.month, current_time.day, current_time.hour, current_time.minute, current_time.second); UINT bw; f_write(&log_file, header, strlen(header), &bw); f_sync(&log_file); } } /* 写入日志条目 */ if (log_file_open) { rtc_time_t current_time; rtc_current_time_get(&rtc_initpara); char log_entry[128]; sprintf(log_entry, "[20%02d-%02d-%02d %02d:%02d:%02d] %s\r\n", current_time.year, current_time.month, current_time.day, current_time.hour, current_time.minute, current_time.second, action); UINT bw; f_write(&log_file, log_entry, strlen(log_entry), &bw); f_sync(&log_file); } } /************************ 保存config文件到flash************************/ void save_config_to_flash(void){ uint32_t sector_addr = target_addr & ~(SECTOR_SIZE - 1); uint16_t size = sizeof(sys_config); // 擦除目标扇区 spi_flash_write_enable(); // 使能写操作 spi_flash_sector_erase(sector_addr); spi_flash_wait_for_write_end(); // 等待擦除完成 // 写入数据 spi_flash_write_enable(); // 再次使能写操作 spi_flash_buffer_write((uint8_t*)&sys_config, target_addr, size); spi_flash_wait_for_write_end(); // 等待写入完成 } /************************************************************ * Function : System_Init * Comment : 用于初始化MCU * Parameter: null * Return : null * Author : Lingyu Meng * Date : 2025-02-30 V0.1 original ************************************************************/ void System_Init(void) { systick_config(); // 时钟配置 nvic_config(); // 配置中断控制器 KEY_Init(); // KEY初始化 LED_Init(); USART0_Config(); // 串口初始化 nvic_irq_enable(USART0_IRQn, 0, 0); // 使能USART0中断 usart_interrupt_enable(USART0, USART_INT_RBNE); // 接收中断打开 ADC_port_init(); // ADC初始化 OLED_Init(); TF_Init(); spi_flash_init(); // 初始化SPI Flash RTC_Init(); /* 初始化日志 */ shangdian(); // 每次上电递增 save_config_to_flash(); // 保存新的文件ID store_log_entry("System started"); // 从Flash读取保存的采样周期 uint32_t saved_cycle; spi_flash_buffer_read((uint8_t*)&saved_cycle, SAMPLE_CYCLE_ADDR, sizeof(uint32_t)); // 验证读取的值是否有效 if (saved_cycle == CYCLE_5S || saved_cycle == CYCLE_10S || saved_cycle == CYCLE_15S) { sample_cycle = saved_cycle; // printf("Loaded saved sample cycle: %d 
s\r\n", sample_cycle); } else { // 无效值,使用默认5秒 sample_cycle = CYCLE_5S; // printf("Using default sample cycle: %d s\r\n", sample_cycle); // 保存默认值到Flash spi_flash_sector_erase(SAMPLE_CYCLE_ADDR); spi_flash_buffer_write((uint8_t*)&sample_cycle, SAMPLE_CYCLE_ADDR, sizeof(uint32_t)); } // 读取队伍编号 Team_num(); delay_1ms(10); }上电sys_config.log_file_id自增现在有问题,检测ft卡中log文件夹无文件时sys_config.log_file_id=0,然后上电自增

INFO 07-06 22:09:23 __init__.py:207] Automatically detected platform cuda. (VllmWorkerProcess pid=10542) INFO 07-06 22:09:23 multiproc_worker_utils.py:229] Worker ready; awaiting tasks (VllmWorkerProcess pid=10542) INFO 07-06 22:09:24 cuda.py:178] Cannot use FlashAttention-2 backend for Volta and Turing GPUs. (VllmWorkerProcess pid=10542) INFO 07-06 22:09:24 cuda.py:226] Using XFormers backend. (VllmWorkerProcess pid=10542) INFO 07-06 22:09:25 utils.py:916] Found nccl from library libnccl.so.2 (VllmWorkerProcess pid=10542) INFO 07-06 22:09:25 pynccl.py:69] vLLM is using nccl==2.21.5 INFO 07-06 22:09:25 utils.py:916] Found nccl from library libnccl.so.2 INFO 07-06 22:09:25 pynccl.py:69] vLLM is using nccl==2.21.5 INFO 07-06 22:09:25 custom_all_reduce_utils.py:206] generating GPU P2P access cache in /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json INFO 07-06 22:09:39 custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json (VllmWorkerProcess pid=10542) INFO 07-06 22:09:39 custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json WARNING 07-06 22:09:39 custom_all_reduce.py:145] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly. (VllmWorkerProcess pid=10542) WARNING 07-06 22:09:39 custom_all_reduce.py:145] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly. INFO 07-06 22:09:39 shm_broadcast.py:258] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_b99a0acb'), local_subscribe_port=57279, remote_subscribe_port=None) INFO 07-06 22:09:39 model_runner.py:1110] Starting to load model /home/user/Desktop/DeepSeek-R1-Distill-Qwen-32B... (VllmWorkerProcess pid=10542) INFO 07-06 22:09:39 model_runner.py:1110] Starting to load model /home/user/Desktop/DeepSeek-R1-Distill-Qwen-32B... ERROR 07-06 22:09:39 engine.py:400] CUDA out of memory. Tried to allocate 136.00 MiB. GPU 0 has a total capacity of 21.49 GiB of which 43.50 MiB is free. Including non-PyTorch memory, this process has 21.44 GiB memory in use. Of the allocated memory 21.19 GiB is allocated by PyTorch, and 20.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) ERROR 07-06 22:09:39 engine.py:400] Traceback (most recent call last): ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine ERROR 07-06 22:09:39 engine.py:400] engine = MQLLMEngine.from_engine_args(engine_args=engine_args, ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 124, in from_engine_args ERROR 07-06 22:09:39 engine.py:400] return cls(ipc_path=ipc_path, ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 76, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.engine = LLMEngine(*args, **kwargs) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 273, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.model_executor = executor_class(vllm_config=vllm_config, ) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 271, in __init__ ERROR 07-06 22:09:39 engine.py:400] super().__init__(*args, **kwargs) ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 52, in __init__ ERROR 07-06 22:09:39 engine.py:400] self._init_executor() ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor ERROR 07-06 22:09:39 engine.py:400] self._run_workers("load_model", ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers ERROR 07-06 22:09:39 engine.py:400] driver_worker_output = run_method(self.driver_worker, sent_method, ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/utils.py", line 2196, in run_method ERROR 07-06 22:09:39 engine.py:400] return func(*args, **kwargs) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model ERROR 07-06 22:09:39 engine.py:400] self.model_runner.load_model() ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model ERROR 07-06 22:09:39 engine.py:400] self.model = get_model(vllm_config=self.vllm_config) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/model_loader/__init__.py", 
line 14, in get_model ERROR 07-06 22:09:39 engine.py:400] return loader.load_model(vllm_config=vllm_config) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 406, in load_model ERROR 07-06 22:09:39 engine.py:400] model = _initialize_model(vllm_config=vllm_config) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 125, in _initialize_model ERROR 07-06 22:09:39 engine.py:400] return model_class(vllm_config=vllm_config, prefix=prefix) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 453, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.model = Qwen2Model(vllm_config=vllm_config, ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 151, in __init__ ERROR 07-06 22:09:39 engine.py:400] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 307, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.start_layer, self.end_layer, self.layers = make_layers( ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 557, in make_layers ERROR 07-06 22:09:39 engine.py:400] [PPMissingLayer() for _ in range(start_layer)] + [ ERROR 07-06 22:09:39 engine.py:400] ^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 558, in ERROR 07-06 22:09:39 engine.py:400] maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}")) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 309, in <lambda> ERROR 07-06 22:09:39 engine.py:400] lambda prefix: Qwen2DecoderLayer(config=config, ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 220, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.mlp = Qwen2MLP( ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 82, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.down_proj = RowParallelLinear( ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/layers/linear.py", line 1062, in __init__ ERROR 07-06 22:09:39 engine.py:400] self.quant_method.create_weights( ERROR 07-06 22:09:39 
engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/layers/linear.py", line 129, in create_weights ERROR 07-06 22:09:39 engine.py:400] weight = Parameter(torch.empty(sum(output_partition_sizes), ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in __torch_function__ ERROR 07-06 22:09:39 engine.py:400] return func(*args, **kwargs) ERROR 07-06 22:09:39 engine.py:400] ^^^^^^^^^^^^^^^^^^^^^ ERROR 07-06 22:09:39 engine.py:400] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 0 has a total capacity of 21.49 GiB of which 43.50 MiB is free. Including non-PyTorch memory, this process has 21.44 GiB memory in use. Of the allocated memory 21.19 GiB is allocated by PyTorch, and 20.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) Process SpawnProcess-1: INFO 07-06 22:09:39 multiproc_worker_utils.py:128] Killing local vLLM worker processes Traceback (most recent call last): File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 402, in run_mp_engine raise e File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine engine = MQLLMEngine.from_engine_args(engine_args=engine_args, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 124, in from_engine_args return cls(ipc_path=ipc_path, ^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 76, in __init__ self.engine = LLMEngine(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 273, in __init__ self.model_executor = executor_class(vllm_config=vllm_config, ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 271, in __init__ super().__init__(*args, **kwargs) File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 52, in __init__ self._init_executor() File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor self._run_workers("load_model", File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers driver_worker_output = run_method(self.driver_worker, sent_method, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/utils.py", line 2196, in run_method return func(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model self.model_runner.load_model() File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model self.model = get_model(vllm_config=self.vllm_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model return loader.load_model(vllm_config=vllm_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 406, in load_model model = _initialize_model(vllm_config=vllm_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 125, in _initialize_model return model_class(vllm_config=vllm_config, prefix=prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 453, in __init__ self.model = Qwen2Model(vllm_config=vllm_config, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 151, in __init__ old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 307, in __init__ self.start_layer, self.end_layer, self.layers = make_layers( ^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 557, in make_layers [PPMissingLayer() for _ in range(start_layer)] + [ ^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 558, in maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 309, in <lambda> lambda prefix: Qwen2DecoderLayer(config=config, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 220, in __init__ self.mlp = Qwen2MLP( ^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/models/qwen2.py", line 82, in __init__ self.down_proj = RowParallelLinear( ^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/layers/linear.py", line 1062, in __init__ self.quant_method.create_weights( File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/model_executor/layers/linear.py", line 129, in create_weights weight = Parameter(torch.empty(sum(output_partition_sizes), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in __torch_function__ return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 0 has a total capacity of 21.49 GiB of which 43.50 MiB is free. Including non-PyTorch memory, this process has 21.44 GiB memory in use. 
Of the allocated memory 21.19 GiB is allocated by PyTorch, and 20.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) [rank0]:[W706 22:09:40.345098611 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator()) Traceback (most recent call last): File "/root/miniconda3/envs/deepseek_vllm/bin/vllm", line 8, in <module> sys.exit(main()) ^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/entrypoints/cli/main.py", line 73, in main args.dispatch_function(args) File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/entrypoints/cli/serve.py", line 34, in cmd uvloop.run(run_server(args)) File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/uvloop/__init__.py", line 105, in run return runner.run(wrapper()) ^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/uvloop/__init__.py", line 61, in wrapper return await main ^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 947, in run_server async with build_async_engine_client(args) as engine_client: File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/contextlib.py", line 210, in __aenter__ return await anext(self.gen) ^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 139, in build_async_engine_client async with build_async_engine_client_from_engine_args( File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/contextlib.py", line 210, in __aenter__ return await anext(self.gen) ^^^^^^^^^^^^^^^^^^^^^ File "/root/miniconda3/envs/deepseek_vllm/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 233, in build_async_engine_client_from_engine_args raise RuntimeError( RuntimeError: Engine process failed to start. See stack trace for the root cause. (deepseek_vllm) root@user-X99:/home/user/Desktop# /root/miniconda3/envs/deepseek_vllm/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' (deepseek_vllm) root@user-X99:/home/user/Desktop#
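For reference, the out-of-memory error above is hit while loading the BF16 weights of a 32B model onto two roughly 22 GiB GPUs, which cannot hold them even with tensor parallelism. The options below are a sketch of the usual mitigations, not a verified command line: flag availability should be checked against `vllm serve --help` for the installed release, and the AWQ model path is a placeholder.

```bash
# Sketch only: options to try when a 32B model does not fit on 2 x ~22 GiB GPUs.

# 1) Serve a quantized checkpoint (e.g. an AWQ/GPTQ build of the model) so the
#    weights shrink to roughly a quarter of the BF16 size.
vllm serve /path/to/DeepSeek-R1-Distill-Qwen-32B-AWQ \
    --tensor-parallel-size 2 \
    --quantization awq \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.90 \
    --enforce-eager \
    --disable-custom-all-reduce

# 2) Alternatively keep the BF16 weights but spill part of them to host RAM
#    (much slower, and only available in newer vLLM releases).
vllm serve /home/user/Desktop/DeepSeek-R1-Distill-Qwen-32B \
    --tensor-parallel-size 2 \
    --cpu-offload-gb 24 \
    --max-model-len 4096 \
    --enforce-eager
```

Quantized weights are normally the only practical way to fit a 32B model in this much VRAM; the CPU-offload variant will start but runs considerably slower.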

D:\pythonstudy\venv\Scripts\python.exe D:\study\ganzhiji\ganzhi.py D:\pythonstudy\venv\Lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. warnings.warn( D:\pythonstudy\venv\Lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights. warnings.warn(msg) Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to C:\Users\wwhhh/.cache\torch\hub\checkpoints\vgg16-397923af.pth 100%|██████████| 528M/528M [02:37<00:00, 3.52MB/s] Epoch 1/15: 0%| | 0/782 [00:00<?, ?it/s]D:\pythonstudy\venv\Lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. warnings.warn( D:\pythonstudy\venv\Lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights. warnings.warn(msg) Epoch 1/15: 0%| | 0/782 [00:00<?, ?it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "D:\python\Lib\multiprocessing\spawn.py", line 122, in spawn_main exitcode = _main(fd, parent_sentinel) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\python\Lib\multiprocessing\spawn.py", line 131, in _main prepare(preparation_data) File "D:\python\Lib\multiprocessing\spawn.py", line 246, in prepare _fixup_main_from_path(data['init_main_from_path']) File "D:\python\Lib\multiprocessing\spawn.py", line 297, in _fixup_main_from_path main_content = runpy.run_path(main_path, ^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen runpy>", line 286, in run_path File "<frozen runpy>", line 98, in _run_module_code File "<frozen runpy>", line 88, in _run_code File "D:\study\ganzhiji\ganzhi.py", line 99, in <module> train_model(model, train_loader, criterion, optimizer, epochs=15) File "D:\study\ganzhiji\ganzhi.py", line 79, in train_model for inputs, labels in tqdm(train_loader, desc=f'Epoch {epoch + 1}/{epochs}'): File "D:\pythonstudy\venv\Lib\site-packages\tqdm\std.py", line 1181, in __iter__ for obj in iterable: File "D:\pythonstudy\venv\Lib\site-packages\torch\utils\data\dataloader.py", line 491, in __iter__ return self._get_iterator() ^^^^^^^^^^^^^^^^^^^^ File "D:\pythonstudy\venv\Lib\site-packages\torch\utils\data\dataloader.py", line 422, in _get_iterator return _MultiProcessingDataLoaderIter(self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\pythonstudy\venv\Lib\site-packages\torch\utils\data\dataloader.py", line 1146, in __init__ w.start() File "D:\python\Lib\multiprocessing\process.py", line 121, in start self._popen = self._Popen(self) ^^^^^^^^^^^^^^^^^ File "D:\python\Lib\multiprocessing\context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\python\Lib\multiprocessing\context.py", line 337, in _Popen return Popen(process_obj) ^^^^^^^^^^^^^^^^^^ File 
"D:\python\Lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\python\Lib\multiprocessing\spawn.py", line 164, in get_preparation_data _check_not_importing_main() File "D:\python\Lib\multiprocessing\spawn.py", line 140, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. To fix this issue, refer to the "Safe importing of main module" section in https://docs.python.org/3/library/multiprocessing.html

// Copyright (c) 2024锛孌-Robotics. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. #include "include/centernet_3d_detection_node.h" #include <unistd.h> #include <fstream> #include <memory> #include <string> #include <vector> #include <sys/stat.h> #ifdef CV_BRIDGE_PKG_ENABLED #include <cv_bridge/cv_bridge.h> #endif #include "rclcpp/rclcpp.hpp" #include "dnn_node/dnn_node.h" #include "dnn_node/util/image_proc.h" #include "opencv2/imgproc/types_c.h" CenterNet3DDetectionNode::CenterNet3DDetectionNode(const std::string &node_name, const NodeOptions &options) : DnnNode(node_name, options), output_frameCount_(0) { this->declare_parameter<std::string>("config_file_path", config_file_path_); this->declare_parameter<int>("shared_mem", shared_mem_); this->declare_parameter<std::string>("ai_msg_pub_topic_name", msg_pub_topic_name_); this->declare_parameter<std::string>("image_sub_topic_name", ros_img_topic_name_); this->declare_parameter<std::string>("feed_image", feed_image_); this->declare_parameter<int>("dump_render_img", dump_render_img_); this->get_parameter<std::string>("config_file_path", config_file_path_); this->get_parameter<int>("shared_mem", shared_mem_); this->get_parameter<std::string>("ai_msg_pub_topic_name", msg_pub_topic_name_); this->get_parameter<std::string>("image_sub_topic_name"

1: kd> !analyze -v ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* WHEA_UNCORRECTABLE_ERROR (124) A fatal hardware error has occurred. Parameter 1 identifies the type of error source that reported the error. Parameter 2 holds the address of the nt!_WHEA_ERROR_RECORD structure that describes the error condition. Try !errrec Address of the nt!_WHEA_ERROR_RECORD structure to get more details. Arguments: Arg1: 00000000, Machine Check Exception Arg2: 8712401c, Address of the nt!_WHEA_ERROR_RECORD structure. Arg3: f61f3440, High order 32-bits of the MCi_STATUS value. Arg4: 00040150, Low order 32-bits of the MCi_STATUS value. Debugging Details: ------------------ KEY_VALUES_STRING: 1 Key : Analysis.CPU.mSec Value: 3265 Key : Analysis.Elapsed.mSec Value: 7600 Key : Analysis.IO.Other.Mb Value: 0 Key : Analysis.IO.Read.Mb Value: 1 Key : Analysis.IO.Write.Mb Value: 0 Key : Analysis.Init.CPU.mSec Value: 750 Key : Analysis.Init.Elapsed.mSec Value: 18752 Key : Analysis.Memory.CommitPeak.Mb Value: 79 Key : Analysis.Version.DbgEng Value: 10.0.27793.1000 Key : Analysis.Version.Description Value: 10.2410.02.02 amd64fre Key : Analysis.Version.Ext Value: 1.2410.2.2 Key : Bugcheck.Code.LegacyAPI Value: 0x124 Key : Bugcheck.Code.TargetModel Value: 0x124 Key : Failure.Bucket Value: 0x124_0_GenuineIntel_PROCESSOR_CACHE_IMAGE_GenuineIntel.sys Key : Failure.Hash Value: {b70a049a-4a17-5749-b5df-df070316ca7d} Key : Hypervisor.Flags.Value Value: 0 Key : Hypervisor.Flags.ValueHex

/*---------------------------------------------------------------------------/ / FatFs - FAT file system module include file R0.09 (C)ChaN, 2011 /----------------------------------------------------------------------------/ / FatFs module is a generic FAT file system module for small embedded systems. / This is a free software that opened for education, research and commercial / developments under license policy of following trems. / / Copyright (C) 2011, ChaN, all right reserved. / / * The FatFs module is a free software and there is NO WARRANTY. / * No restriction on use. You can use, modify and redistribute it for / personal, non-profit or commercial product UNDER YOUR RESPONSIBILITY. / * Redistributions of source code must retain the above copyright notice. / /----------------------------------------------------------------------------*/ #ifndef _FATFS #define _FATFS 6502 /* Revision ID */ #ifdef __cplusplus extern "C" { #endif #include "integer.h" /* Basic integer types */ #include "ffconf.h" /* FatFs configuration options */ #include "HeaderFiles.h" #if _FATFS != _FFCONF #error Wrong configuration file (ffconf.h). #endif /* Definitions of volume management */ #if _MULTI_PARTITION /* Multiple partition configuration */ typedef struct { BYTE pd; /* Physical drive number */ BYTE pt; /* Partition: 0:Auto detect, 1-4:Forced partition) */ } PARTITION; extern PARTITION VolToPart[]; /* Volume - Partition resolution table */ #define LD2PD(vol) (VolToPart[vol].pd) /* Get physical drive number */ #define LD2PT(vol) (VolToPart[vol].pt) /* Get partition index */ #else /* Single partition configuration */ #define LD2PD(vol) (vol) /* Each logical drive is bound to the same physical drive number */ #define LD2PT(vol) 0 /* Always mounts the 1st partition or in SFD */ #endif /* Type of path name strings on FatFs API */ #if _LFN_UNICODE /* Unicode string */ #if !_USE_LFN #error _LFN_UNICODE must be 0 in non-LFN cfg. 
#endif #ifndef _INC_TCHAR typedef WCHAR TCHAR; #define _T(x) L ## x #define _TEXT(x) L ## x #endif #else /* ANSI/OEM string */ #ifndef _INC_TCHAR typedef char TCHAR; #define _T(x) x #define _TEXT(x) x #endif #endif /* File system object structure (FATFS) */ typedef struct { BYTE fs_type; /* FAT sub-type (0:Not mounted) */ BYTE drv; /* Physical drive number */ BYTE csize; /* Sectors per cluster (1,2,4...128) */ BYTE n_fats; /* Number of FAT copies (1,2) */ BYTE wflag; /* win[] dirty flag (1:must be written back) */ BYTE fsi_flag; /* fsinfo dirty flag (1:must be written back) */ WORD id; /* File system mount ID */ WORD n_rootdir; /* Number of root directory entries (FAT12/16) */ #if _MAX_SS != 512 WORD ssize; /* Bytes per sector (512, 1024, 2048 or 4096) */ #endif #if _FS_REENTRANT _SYNC_t sobj; /* Identifier of sync object */ #endif #if !_FS_READONLY DWORD last_clust; /* Last allocated cluster */ DWORD free_clust; /* Number of free clusters */ DWORD fsi_sector; /* fsinfo sector (FAT32) */ #endif #if _FS_RPATH DWORD cdir; /* Current directory start cluster (0:root) */ #endif DWORD n_fatent; /* Number of FAT entries (= number of clusters + 2) */ DWORD fsize; /* Sectors per FAT */ DWORD fatbase; /* FAT start sector */ DWORD dirbase; /* Root directory start sector (FAT32:Cluster#) */ DWORD database; /* Data start sector */ DWORD winsect; /* Current sector appearing in the win[] */ BYTE win[_MAX_SS]; /* Disk access window for Directory, FAT (and Data on tiny cfg) */ } FATFS; /* File object structure (FIL) */ typedef struct { FATFS* fs; /* Pointer to the owner file system object */ WORD id; /* Owner file system mount ID */ BYTE flag; /* File status flags */ BYTE pad1; DWORD fptr; /* File read/write pointer (0 on file open) */ DWORD fsize; /* File size */ DWORD sclust; /* File start cluster (0 when fsize==0) */ DWORD clust; /* Current cluster */ DWORD dsect; /* Current data sector */ #if !_FS_READONLY DWORD dir_sect; /* Sector containing the directory entry */ BYTE* dir_ptr; /* Ponter to the directory entry in the window */ #endif #if _USE_FASTSEEK DWORD* cltbl; /* Pointer to the cluster link map table (null on file open) */ #endif #if _FS_SHARE UINT lockid; /* File lock ID (index of file semaphore table) */ #endif #if !_FS_TINY BYTE buf[_MAX_SS]; /* File data read/write buffer */ #endif } FIL; /* Directory object structure (DIR) */ typedef struct { FATFS* fs; /* Pointer to the owner file system object */ WORD id; /* Owner file system mount ID */ WORD index; /* Current read/write index number */ DWORD sclust; /* Table start cluster (0:Root dir) */ DWORD clust; /* Current cluster */ DWORD sect; /* Current sector */ BYTE* dir; /* Pointer to the current SFN entry in the win[] */ BYTE* fn; /* Pointer to the SFN (in/out) {file[8],ext[3],status[1]} */ #if _USE_LFN WCHAR* lfn; /* Pointer to the LFN working buffer */ WORD lfn_idx; /* Last matched LFN index number (0xFFFF:No LFN) */ #endif } DIR; /* File status structure (FILINFO) */ typedef struct { DWORD fsize; /* File size */ WORD fdate; /* Last modified date */ WORD ftime; /* Last modified time */ BYTE fattrib; /* Attribute */ TCHAR fname[13]; /* Short file name (8.3 format) */ #if _USE_LFN TCHAR* lfname; /* Pointer to the LFN buffer */ UINT lfsize; /* Size of LFN buffer in TCHAR */ #endif } FILINFO; /* File function return code (FRESULT) */ typedef enum { FR_OK = 0, /* (0) Succeeded */ FR_DISK_ERR, /* (1) A hard error occured in the low level disk I/O layer */ FR_INT_ERR, /* (2) Assertion failed */ FR_NOT_READY, /* (3) The physical drive cannot work 
*/ FR_NO_FILE, /* (4) Could not find the file */ FR_NO_PATH, /* (5) Could not find the path */ FR_INVALID_NAME, /* (6) The path name format is invalid */ FR_DENIED, /* (7) Acces denied due to prohibited access or directory full */ FR_EXIST, /* (8) Acces denied due to prohibited access */ FR_INVALID_OBJECT, /* (9) The file/directory object is invalid */ FR_WRITE_PROTECTED, /* (10) The physical drive is write protected */ FR_INVALID_DRIVE, /* (11) The logical drive number is invalid */ FR_NOT_ENABLED, /* (12) The volume has no work area */ FR_NO_FILESYSTEM, /* (13) There is no valid FAT volume */ FR_MKFS_ABORTED, /* (14) The f_mkfs() aborted due to any parameter error */ FR_TIMEOUT, /* (15) Could not get a grant to access the volume within defined period */ FR_LOCKED, /* (16) The operation is rejected according to the file shareing policy */ FR_NOT_ENOUGH_CORE, /* (17) LFN working buffer could not be allocated */ FR_TOO_MANY_OPEN_FILES, /* (18) Number of open files > _FS_SHARE */ FR_INVALID_PARAMETER /* (19) Given parameter is invalid */ } FRESULT; /*--------------------------------------------------------------*/ /* FatFs module application interface */ FRESULT f_mount (BYTE, FATFS*); /* Mount/Unmount a logical drive */ FRESULT f_open (FIL*, const TCHAR*, BYTE); /* Open or create a file */ FRESULT f_read (FIL*, void*, UINT, UINT*); /* Read data from a file */ FRESULT f_lseek (FIL*, DWORD); /* Move file pointer of a file object */ FRESULT f_close (FIL*); /* Close an open file object */ FRESULT f_opendir (DIR*, const TCHAR*); /* Open an existing directory */ FRESULT f_readdir (DIR*, FILINFO*); /* Read a directory item */ FRESULT f_stat (const TCHAR*, FILINFO*); /* Get file status */ FRESULT f_write (FIL*, const void*, UINT, UINT*); /* Write data to a file */ FRESULT f_getfree (const TCHAR*, DWORD*, FATFS**); /* Get number of free clusters on the drive */ FRESULT f_truncate (FIL*); /* Truncate file */ FRESULT f_sync (FIL*); /* Flush cached data of a writing file */ FRESULT f_unlink (const TCHAR*); /* Delete an existing file or directory */ FRESULT f_mkdir (const TCHAR*); /* Create a new directory */ FRESULT f_chmod (const TCHAR*, BYTE, BYTE); /* Change attriburte of the file/dir */ FRESULT f_utime (const TCHAR*, const FILINFO*); /* Change timestamp of the file/dir */ FRESULT f_rename (const TCHAR*, const TCHAR*); /* Rename/Move a file or directory */ FRESULT f_chdrive (BYTE); /* Change current drive */ FRESULT f_chdir (const TCHAR*); /* Change current directory */ FRESULT f_getcwd (TCHAR*, UINT); /* Get current directory */ FRESULT f_forward (FIL*, UINT(*)(const BYTE*,UINT), UINT, UINT*); /* Forward data to the stream */ FRESULT f_mkfs (BYTE, BYTE, UINT); /* Create a file system on the drive */ FRESULT f_fdisk (BYTE, const DWORD[], void*); /* Divide a physical drive into some partitions */ int f_putc (TCHAR, FIL*); /* Put a character to the file */ int f_puts (const TCHAR*, FIL*); /* Put a string to the file */ int f_printf (FIL*, const TCHAR*, ...); /* Put a formatted string to the file */ TCHAR* f_gets (TCHAR*, int, FIL*); /* Get a string from the file */ #define f_eof(fp) (((fp)->fptr == (fp)->fsize) ? 1 : 0) #define f_error(fp) (((fp)->flag & FA__ERROR) ? 
1 : 0) #define f_tell(fp) ((fp)->fptr) #define f_size(fp) ((fp)->fsize) #ifndef EOF #define EOF (-1) #endif /*--------------------------------------------------------------*/ /* Additional user defined functions */ /* RTC function */ #if !_FS_READONLY DWORD get_fattime (void); #endif /* Unicode support functions */ #if _USE_LFN /* Unicode - OEM code conversion */ WCHAR ff_convert (WCHAR, UINT); /* OEM-Unicode bidirectional conversion */ WCHAR ff_wtoupper (WCHAR); /* Unicode upper-case conversion */ #if _USE_LFN == 3 /* Memory functions */ void* ff_memalloc (UINT); /* Allocate memory block */ void ff_memfree (void*); /* Free memory block */ #endif #endif /* Sync functions */ #if _FS_REENTRANT int ff_cre_syncobj (BYTE, _SYNC_t*);/* Create a sync object */ int ff_req_grant (_SYNC_t); /* Lock sync object */ void ff_rel_grant (_SYNC_t); /* Unlock sync object */ int ff_del_syncobj (_SYNC_t); /* Delete a sync object */ #endif /*--------------------------------------------------------------*/ /* Flags and offset address */ /* File access control and file status flags (FIL.flag) */ #define FA_READ 0x01 #define FA_OPEN_EXISTING 0x00 #define FA__ERROR 0x80 #if !_FS_READONLY #define FA_WRITE 0x02 #define FA_CREATE_NEW 0x04 #define FA_CREATE_ALWAYS 0x08 #define FA_OPEN_ALWAYS 0x10 #define FA__WRITTEN 0x20 #define FA__DIRTY 0x40 #endif /* FAT sub type (FATFS.fs_type) */ #define FS_FAT12 1 #define FS_FAT16 2 #define FS_FAT32 3 /* File attribute bits for directory entry */ #define AM_RDO 0x01 /* Read only */ #define AM_HID 0x02 /* Hidden */ #define AM_SYS 0x04 /* System */ #define AM_VOL 0x08 /* Volume label */ #define AM_LFN 0x0F /* LFN entry */ #define AM_DIR 0x10 /* Directory */ #define AM_ARC 0x20 /* Archive */ #define AM_MASK 0x3F /* Mask of defined bits */ /* Fast seek feature */ #define CREATE_LINKMAP 0xFFFFFFFF /*--------------------------------*/ /* Multi-byte word access macros */ #if _WORD_ACCESS == 1 /* Enable word access to the FAT structure */ #define LD_WORD(ptr) (WORD)(*(WORD*)(BYTE*)(ptr)) #define LD_DWORD(ptr) (DWORD)(*(DWORD*)(BYTE*)(ptr)) #define ST_WORD(ptr,val) *(WORD*)(BYTE*)(ptr)=(WORD)(val) #define ST_DWORD(ptr,val) *(DWORD*)(BYTE*)(ptr)=(DWORD)(val) #else /* Use byte-by-byte access to the FAT structure */ #define LD_WORD(ptr) (WORD)(((WORD)*((BYTE*)(ptr)+1)<<8)|(WORD)*(BYTE*)(ptr)) #define LD_DWORD(ptr) (DWORD)(((DWORD)*((BYTE*)(ptr)+3)<<24)|((DWORD)*((BYTE*)(ptr)+2)<<16)|((WORD)*((BYTE*)(ptr)+1)<<8)|*(BYTE*)(ptr)) #define ST_WORD(ptr,val) *(BYTE*)(ptr)=(BYTE)(val); *((BYTE*)(ptr)+1)=(BYTE)((WORD)(val)>>8) #define ST_DWORD(ptr,val) *(BYTE*)(ptr)=(BYTE)(val); *((BYTE*)(ptr)+1)=(BYTE)((WORD)(val)>>8); *((BYTE*)(ptr)+2)=(BYTE)((DWORD)(val)>>16); *((BYTE*)(ptr)+3)=(BYTE)((DWORD)(val)>>24) #endif #ifdef __cplusplus } #endif #endif /* _FATFS */ 根据头文件修改一下

4: kd> !analyze -v ******************************************************************************* • * • Bugcheck Analysis * • * ******************************************************************************* VIDEO_DXGKRNL_FATAL_ERROR (113) The dxgkrnl has detected that a violation has occurred. This resulted in a condition that dxgkrnl can no longer progress. By crashing, dxgkrnl is attempting to get enough information into the minidump such that somebody can pinpoint the crash cause. Any other values after parameter 1 must be individually examined according to the subtype. Arguments: Arg1: 0000000000000019, The subtype of the BugCheck: Arg2: 0000000000000002 Arg3: 00000000000010de Arg4: 00000000000028a0 Debugging Details: ------------------ KEY_VALUES_STRING: 1 Key : Analysis.CPU.mSec Value: 1125 Key : Analysis.Elapsed.mSec Value: 10398 Key : Analysis.IO.Other.Mb Value: 19 Key : Analysis.IO.Read.Mb Value: 1 Key : Analysis.IO.Write.Mb Value: 25 Key : Analysis.Init.CPU.mSec Value: 750 Key : Analysis.Init.Elapsed.mSec Value: 69182 Key : Analysis.Memory.CommitPeak.Mb Value: 95 Key : Analysis.Version.DbgEng Value: 10.0.27793.1000 Key : Analysis.Version.Description Value: 10.2410.02.02 amd64fre Key : Analysis.Version.Ext Value: 1.2410.2.2 Key : Bugcheck.Code.LegacyAPI Value: 0x113 Key : Bugcheck.Code.TargetModel Value: 0x113 Key : DirectX.FatalError.Code Value: 0x19 Key : DirectX.FatalError.Desc Value: UNEXPECTED_DEFERRED_DESTRUCTION Key : Dump.Attributes.AsUlong Value: 0x8 Key : Dump.Attributes.KernelGeneratedTriageDump Value: 1 Key : Failure.Bucket Value: 0x113_19_nvlddmkm!unknown
