
```
  0%|          | 0/20 [00:01<?, ?it/s]
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-155-26d05ea2d790> in <cell line: 6>()
      4
      5 # compute the losses over the main directions of the gradient updates
----> 6 x, y, Z, _ = get_loss_grid(net, data_loader, loss_fn, directions=directions, resolution=(20, 20), scale=loss_coordinates.abs().max().item())
      7
      8 # plot the landscape as a contour plot

1 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1612         if name in modules:
   1613             return modules[name]
-> 1614         raise AttributeError("'{}' object has no attribute '{}'".format(
   1615             type(self).__name__, name))
   1616

AttributeError: 'Softmax' object has no attribute 'out_features'
```

How do I fix this error?

Posted: 2023-08-15 08:42:32 · Views: 139
This error is raised inside `get_loss_grid()`: somewhere in that function the model's `out_features` attribute is read, but the module it reaches is an `nn.Softmax`, which has no `out_features` attribute. To fix it, make sure the layer the function inspects is one that actually defines `out_features`, such as an `nn.Linear`. In practice that means either removing the trailing `Softmax` from the model (or moving it into the loss), or otherwise ensuring the model's final layer is an `nn.Linear`-like module. You can do this in the model definition itself, or by adjusting the model just before calling `get_loss_grid()`.
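Since the implementation of `get_loss_grid()` is not shown, here is only a minimal sketch of the two options, assuming the function inspects the network's last layer for its `out_features`. The helper `last_layer_with_out_features` and the stand-in `net` are illustrative, not part of any library:

```python
import torch.nn as nn

# A stand-in model mirroring the situation in the traceback: the last module is a Softmax.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3), nn.Softmax(dim=1))

def last_layer_with_out_features(model: nn.Module) -> nn.Module:
    """Return the last submodule that exposes an `out_features` attribute (e.g. nn.Linear)."""
    candidates = [m for m in model.modules() if hasattr(m, "out_features")]
    if not candidates:
        raise ValueError("model has no layer with an `out_features` attribute")
    return candidates[-1]

# Option 1: strip the trailing Softmax so the model's last layer is the nn.Linear itself
# (losses such as nn.CrossEntropyLoss expect raw logits anyway).
if isinstance(net, nn.Sequential) and isinstance(net[-1], nn.Softmax):
    net = nn.Sequential(*list(net.children())[:-1])

# Option 2: read the output width from the right layer and leave the model unchanged.
num_outputs = last_layer_with_out_features(net).out_features
print(num_outputs)  # 3
```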
Related questions

When I actually run it, the log shows:

```
视频处理:   0%|          | 0/1610 [00:00<?, ?it/s, file=Abuse001_x264.mp4]
帧处理:   0%|          | 0/2720 [00:00<?, ?it/s]
帧处理:   0%|          | 10/2720 [00:00<00:28, 94.70it/s]
帧处理:   1%|          | 20/2720 [00:00<02:07, 21.14it/s]
帧处理:   1%|          | 32/2720 [00:00<01:15, 35.83it/s]
帧处理:   1%|▏         | 40/2720 [00:01<01:49, 24.47it/s]
帧处理:   2%|▏         | 49/2720 [00:02<02:08, 20.80it/s]
```

Judging from the log, your progress-bar setup has two core problems worth fixing. Detailed analysis and solutions follow:

---

### **Problem 1: the outer video progress bar never updates**

**Symptom**: the `视频处理` bar stays at 0/1610.

**Diagnosis**:
```python
# The outer loop in the original code never advances the bar explicitly
for video_path in video_pbar:   # relies on the iterator alone, which only moves when the next video is fetched
    # missing video_pbar.update(1)
```

**Fix**:
```python
# Update the outer bar explicitly (either option works)
# Option 1: give tqdm the total so it can track overall progress
video_pbar = tqdm(video_paths, total=len(video_paths), ...)

# Option 2: update manually (recommended)
for video_path in video_pbar:
    # processing logic ...
    video_pbar.update(1)   # advance the bar explicitly
```

---

### **Problem 2: frame-processing speed fluctuates heavily**

**Symptom**: throughput drops from 94.70 it/s to 20.80 it/s.

**Root causes**:
1. **OpenCV frame reading is the bottleneck**
```python
while cap.isOpened():
    ret, frame = cap.read()                       # default BGR decoding on every read is expensive
    processed_frame = some_processing(frame)      # per-frame processing takes time
```
2. **The progress bar refreshes too often**
```python
frame_pbar.update(1)   # a redraw is triggered on every single frame
```

**Optimizations**:

#### **1. Faster video reading**
```python
# Decode video frames in a background thread
from queue import Queue
from threading import Thread

class VideoLoader:
    def __init__(self, video_path):
        self.cap = cv2.VideoCapture(video_path)
        self.queue = Queue(maxsize=500)   # bounded buffer between decoder and consumer

    def start_loading(self):
        def _load():
            while True:
                ret, frame = self.cap.read()
                if not ret:
                    break
                self.queue.put(frame)
            self.cap.release()
        Thread(target=_load).start()
```

#### **2. Batch the progress updates**
```python
# Update the bar once every N frames (N = 5-30 is a good range)
BATCH_SIZE = 10
frame_counter = 0

while True:
    # ... fetch a frame ...
    frame_counter += 1
    if frame_counter % BATCH_SIZE == 0:
        frame_pbar.update(BATCH_SIZE)
        frame_pbar.refresh()   # force a redraw
```

#### **3. Hardware-accelerated decoding**
```python
# Use GPU acceleration (where supported)
cap = cv2.VideoCapture(video_path)
cap.set(cv2.CAP_PROP_OPENGL, True)    # enable OpenGL acceleration
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)   # shrink the read-ahead buffer
```

---

### **Complete revised code**
```python
from tqdm import tqdm
import cv2

def process_videos(video_paths):
    video_pbar = tqdm(video_paths, desc="视频处理", position=0)

    for idx, video_path in enumerate(video_pbar):
        video_pbar.set_postfix(file=video_path.split("/")[-1])

        # ➕ explicit outer-bar update (option 1; in practice pick either this or update(1) below)
        video_pbar.n = idx
        video_pbar.refresh()

        # ➕ hardware-accelerated decoding
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_OPENGL, True)
        total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

        # ➕ scale the batch size with the video length
        batch_size = max(10, total_frames // 100)

        with tqdm(total=total_frames, desc="帧处理", position=1, leave=False,
                  mininterval=0.5,   # lower the refresh frequency
                  bar_format="{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}]") as frame_pbar:

            frame_counter = 0
            while cap.isOpened():
                ret, frame = cap.read()
                if not ret:
                    break

                # process the frame (example)
                # processed_frame = some_processing(frame)

                frame_counter += 1
                if frame_counter % batch_size == 0:
                    frame_pbar.update(batch_size)
                    rate = frame_pbar.format_dict.get("rate")   # tqdm's own measured it/s
                    if rate:
                        frame_pbar.set_postfix({"fps": f"{rate:.1f}"}, refresh=False)

            # flush the remaining frames
            if frame_counter % batch_size != 0:
                frame_pbar.update(frame_counter % batch_size)

        cap.release()
        video_pbar.update(1)   # ➕ explicit outer-bar update (option 2)
```

---

### **Expected output after the changes**
```log
视频处理:  63%|██████▎   | 1004/1610 [2:35:12<1:12:11, 8.2s/file, file=Abuse042_x264.mp4]
帧处理: 1543/3921 [00:03<00:05] | fps: 452.3
```

**Further performance tips**:
1. Use the `cv2.CAP_FFMPEG` backend to skip non-key frames
2. Accelerate the processing function with Numba's `@njit`
3. Split video pre-processing and post-processing into separate threads
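If it helps to check the nested-bar behaviour in isolation (without any video files), here is a small self-contained sketch of the same pattern: batched inner updates, `leave=False` inner bar, outer bar advancing per item. The file names, frame counts, and `time.sleep` delays are placeholders for illustration only:

```python
import time
from tqdm import tqdm

videos = [f"video_{i}.mp4" for i in range(3)]   # placeholder names
frame_counts = [120, 80, 200]                   # simulated frames per video
BATCH_SIZE = 10

video_pbar = tqdm(videos, desc="视频处理", position=0)
for video, total_frames in zip(video_pbar, frame_counts):
    video_pbar.set_postfix(file=video)
    with tqdm(total=total_frames, desc="帧处理", position=1,
              leave=False, mininterval=0.5) as frame_pbar:
        done = 0
        for _ in range(total_frames):
            time.sleep(0.002)                   # stand-in for real frame processing
            done += 1
            if done % BATCH_SIZE == 0:
                frame_pbar.update(BATCH_SIZE)
        frame_pbar.update(done % BATCH_SIZE)    # flush the remainder
```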

```c
#include <AT89x51.h>
#define uchar unsigned char
#define uint  unsigned int

// 7-segment code table (common anode, digits 0-9)
uchar code segTable[] = {0xC0,0xF9,0xA4,0xB0,0x99,0x92,0x82,0xF8,0x80,0x90};

sbit key1 = P3^2;    // start/pause key (polled)
sbit key2 = P3^3;    // reset key (external interrupt 1)

uchar count = 60;    // countdown value in seconds
bit runFlag = 0;     // running flag

// millisecond delay
void delay_ms(uint ms)
{
    uint i, j;
    for(i = 0; i < ms; i++)
        for(j = 0; j < 123; j++);   // loop count tuned to the crystal frequency
}

// two-digit display scan
void display()
{
    static uchar pos = 0;
    if(pos == 0) {
        P0 = segTable[count/10];   // tens digit
        P2 = 0x01;                 // select the tens digit
        pos = 1;
    } else {
        P0 = segTable[count%10];   // units digit
        P2 = 0x02;                 // select the units digit
        pos = 0;
    }
    delay_ms(5);   // hold 5 ms for a stable display
}

// timer 0 interrupt service routine (50 ms)
void timer0() interrupt 1
{
    static uchar tCount = 0;
    TH0 = 0x3C;                    // reload the initial value (11.0592 MHz crystal)
    TL0 = 0xB0;
    if(++tCount >= 20) {           // 20 overflows = 1 s
        tCount = 0;
        if(runFlag && count > 0) {
            count--;               // count down one second
        }
    }
}

// external interrupt 1 service routine (reset key)
void int1() interrupt 2
{
    delay_ms(20);    // debounce
    // reset the countdown
    count = 60;
    runFlag = 0;     // stop running
    while(!key2);    // wait for the key to be released
}

void main()
{
    // initialisation
    P0 = 0xFF;       // all segments off
    P2 = 0x00;       // all digits deselected

    // configure timer 0 (50 ms)
    TMOD = 0x01;     // mode 1: 16-bit timer
    TH0 = 0x3C;
    TL0 = 0xB0;
    ET0 = 1;         // enable timer 0 interrupt
    TR0 = 1;         // start timer 0

    // configure external interrupt 1 (edge triggered)
    IT1 = 1;
    EX1 = 1;         // enable external interrupt 1
    EA = 1;          // global interrupt enable

    while(1) {
        // poll key1 (start/pause)
        if(!key1) {
            delay_ms(20);   // debounce
            if(!key1) {
                runFlag = !runFlag;   // toggle the run state
                // if the countdown has finished, restart from 60 s
                if(count == 0 && runFlag) {
                    count = 60;
                }
                while(!key1);   // wait for the key to be released
            }
        }
        display();   // refresh the two-digit display
    }
}
```

How do I change the timer/counter to work in mode 0 and mode 2, and how are the initial values calculated?

### Switching the AT89x51 timer/counter from mode 1 to mode 0 and mode 2: calculations and code

#### Mode 0: calculation and implementation

Mode 0 is the 13-bit timer/counter mode. The counter is formed by all 8 bits of THx plus the low 5 bits of TLx, for 13 bits in total[^2].

- **Machine cycle**: \( T = \frac{12}{f_{osc}} \), where \( f_{osc} \) is the crystal frequency.
- **Count range**: \( 2^{13} = 8192 \)
- **Timer period**: \( t = (2^{13} - InitialValue) \times T \)

With an 11.0592 MHz crystal, \( T = \frac{12}{11.0592 \times 10^{6}} \approx 1.085\,\mu s \), so the longest period mode 0 can produce is \( 8192 \times 1.085\,\mu s \approx 8.89\,ms \). A 50 ms tick therefore cannot be generated in a single overflow; use a shorter tick and count overflows in software. For a 5 ms tick:

- **Counts needed**: \( Count = \frac{t}{T} = \frac{5000}{1.085} \approx 4608 \)
- **Initial value**: \( InitialValue = 2^{13} - 4608 = 3584 \)

Splitting the initial value between TH0 (high 8 bits) and TL0 (low 5 bits):

- \( TH0 = 3584 / 32 = 112 = 0x70 \)
- \( TL0 = 3584 \bmod 32 = 0 = 0x00 \)

Ten overflows then give the original 50 ms period (200 overflows give 1 s). The modified code:

```c
void Timer0_Init_Way0()
{
    TMOD &= 0xF0;   // timer 0, mode 0 (13-bit): clear the mode bits
    TH0 = 0x70;     // high 8 bits of the initial value
    TL0 = 0x00;     // low 5 bits of the initial value
    ET0 = 1;        // enable timer 0 interrupt
    EA  = 1;        // global interrupt enable
    TR0 = 1;        // start timer 0
}
```

#### Mode 2: calculation and implementation

Mode 2 is the 8-bit auto-reload mode: THx holds the reload value, and on every overflow the hardware automatically copies THx back into TLx[^2].

- **Count range**: \( 2^{8} = 256 \)
- **Timer period**: \( t = (2^{8} - InitialValue) \times T \)

The longest period is \( 256 \times 1.085\,\mu s \approx 278\,\mu s \), so neither 50 ms nor even 1 ms fits in a single overflow. Choose a tick of roughly 250 µs:

- **Counts needed**: \( Count = \frac{250}{1.085} \approx 230 \)
- **Initial value**: \( InitialValue = 256 - 230 = 26 = 0x1A \)

The modified code:

```c
void Timer0_Init_Way2()
{
    TMOD &= 0xF0;
    TMOD |= 0x02;   // timer 0, mode 2 (8-bit auto-reload)
    TH0 = 0x1A;     // reload value (~250 us per overflow)
    TL0 = 0x1A;     // in mode 2, TL0 starts from the same value
    ET0 = 1;        // enable timer 0 interrupt
    EA  = 1;        // global interrupt enable
    TR0 = 1;        // start timer 0
}
```

#### Interrupt service routine

In mode 2 the timer reloads itself, so the ISR only has to count overflows to build longer delays.

```c
unsigned int timer_count = 0;

void Timer0_ISR() interrupt 1
{
    timer_count++;   // one more ~250 us tick
}
```

With a ~250 µs tick, 50 ms corresponds to about 200 overflows, so the main program simply waits for 200 interrupts:

```c
void Delay50ms()
{
    timer_count = 0;
    while (timer_count < 200);   // wait for ~200 overflows (~50 ms)
}
```

### Notes

- Mode 0 has a smaller count range than mode 1, so it suits shorter base ticks; longer delays are built by counting overflows in software.
- Mode 2's auto-reload removes the manual reload from the ISR and simplifies the design, but long delays must be accumulated from many short overflows.
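Tying this back to the countdown program in the question: with the ~250 µs mode 2 tick above, the original `timer0` ISR has to count many more overflows per second. Below is only a sketch of that adaptation; the threshold of 4000 ticks ≈ 1 s follows from the numbers above, and `runFlag` and `count` are the variables from the question's code:

```c
// Sketch: the question's timer0 ISR adapted to mode 2 (see Timer0_Init_Way2 above).
// With a ~250 us overflow period, roughly 4000 overflows make one second.
void timer0() interrupt 1
{
    static unsigned int tCount = 0;   // widened from uchar: it must count up to ~4000

    // no TH0/TL0 reload needed here - mode 2 reloads TL0 from TH0 automatically

    if (++tCount >= 4000) {           // ~4000 * 250 us = 1 s
        tCount = 0;
        if (runFlag && count > 0) {
            count--;                  // one second of the countdown has elapsed
        }
    }
}
```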
obj_camera_frame1.setY(cameraPointMat.at<double>(1,0)); obj_camera_frame1.setZ(cameraPointMat.at<double>(2,0)); obj_robot_frame = camera_to_robot_ * obj_camera_frame1; std::cout<< " X :" << obj_robot_frame.getX() << std::endl; std::cout<< " Y :" << obj_robot_frame.getY() << std::endl; std::cout<< " Z :" << obj_robot_frame.getZ() << std::endl; if(angle2>176) angle2=176; if(angle2<-176) angle2=-176; std::cout << " angle2 :" << angle2 << std::endl; Grasping(obj_robot_frame.getX(),obj_robot_frame.getY(),obj_robot_frame.getZ(),angle2); } } else if(contour.size()>0&&contour.size()<280) { std::cout << "没有黄紫像素坐标:" << std::endl; }} } //绿色和蓝色 void Camera_TO_Robot_Process_GB(const std::vector<std::vector<cv::Point>>& contours, int start_number, int end_number) { std::cout << "green or blue" << std::endl; // 输出提示信息 cv::Point2f center; for(start_number; start_number < end_number; start_number++) { const std::vector<cv::Point>& contour = contours[start_number]; if (contour.size()>300) { // 检查轮廓是否为空 // 获取最小外接矩形 cv::RotatedRect minRect = cv::minAreaRect(contours[start_number]); center = minRect.center; double angleInRadians = minRect.angle * M_PI / 180.0; if(center.y<60) { center.y=60; } if(center.x<82) { center.x=82; } if(center.y>=60&¢er.x>=82) { cv::Mat pixelPointMat = (cv::Mat_<double>(3, 1) << center.x, center.y,1); std::cout << pixelPointMat << std::endl; cameraPointMat = objectHeight * K.inv() * pixelPointMat; //平面下的坐标转为相机三维坐标 //相机坐标转为基座标 obj_camera_frame1.setX(cameraPointMat.at<double>(0, 0)); obj_camera_frame1.setY(cameraPointMat.at<double>(1, 0)); obj_camera_frame1.setZ(cameraPointMat.at<double>(2, 0)); obj_robot_frame= camera_to_robot_ * obj_camera_frame1; std::cout << "坐标为: " << obj_robot_frame << std::endl; /**************** 获取矩形的角度*******************/ double angle = minRect.angle; cv::Size2f size = minRect.size; double width = size.width; double height = size.height; //放平角度 if (width > height) { angle += 90; } Grasping(obj_robot_frame.getX(),obj_robot_frame.getY(),obj_robot_frame.getZ(),angle); } } else if(contour.size()>0&&contour.size()<100) { std::cout << "没有蓝绿像素坐标:" << std::endl; } } } }; /**********调用服务运行机械臂*********************/ int move_lineb_test(xarm_msgs::Move srv, ros::ServiceClient client, float x_mm0, float y_mm0, float z_mm0, double roll0, double pitch0, double yaw0, float x_mm1, float y_mm1, float z_mm1, double roll1, double pitch1, double yaw1, float x_mm2, float y_mm2, float z_mm2, double roll2, double pitch2, double yaw2, float x_mm3, float y_mm3, float z_mm3, double roll3, double pitch3, double yaw3, float x_mm4, float y_mm4, float z_mm4, double roll4, double pitch4, double yaw4) { // 设置机械臂的运动速度 srv.request.mvvelo = 160; // 设置机械臂的运动加速度 srv.request.mvacc = 1000; // 设置机械臂的运动时间 srv.request.mvtime = 0; // 设置机械臂运动路径的圆角半径 srv.request.mvradii = 20; ROS_INFO("ZUOBIAOR: %f, %f,%f", x_mm1, y_mm1, z_mm1); std::vector<float> pose[5] = { {x_mm0, y_mm0, z_mm0, static_cast<float>(roll0), static_cast<float>(pitch0), static_cast<float>(yaw0)}, {x_mm1, y_mm1, z_mm1, static_cast<float>(roll1), static_cast<float>(pitch1), static_cast<float>(yaw1)}, {x_mm2, y_mm2, z_mm2, static_cast<float>(roll2), static_cast<float>(pitch2), static_cast<float>(yaw2)}, {x_mm3, y_mm3, z_mm3, static_cast<float>(roll3), static_cast<float>(pitch3), static_cast<float>(yaw3)}, {x_mm4, y_mm4, z_mm4, static_cast<float>(roll4), static_cast<float>(pitch4), static_cast<float>(yaw4)} }; for(int i = 0; i < 5; i++) { srv.request.pose = pose[i]; if(client.call(srv)) { ROS_INFO("%s\n", srv.response.message.c_str()); std::cout 
<< "success111" << std::endl; } else { ROS_ERROR("Failed to call service move_lineb"); } } return 0; } bool is_at_pose(const geometry_msgs::Pose& current_pose, const geometry_msgs::Pose& target_pose, double tolerance = 0.005) { return std::abs(current_pose.position.x - target_pose.position.x) < tolerance && std::abs(current_pose.position.y - target_pose.position.y) < tolerance && std::abs(current_pose.position.z - target_pose.position.z) < tolerance; } //阻塞服务,检测是否到达目标位置 void control_suction_during_move(float x_mm4, float y_mm4, float z_mm4) { // 定义目标位 moveit::planning_interface::MoveGroupInterface arm("xarm7"); ros::AsyncSpinner spinner(1); spinner.start(); geometry_msgs::Pose target_pose4; target_pose4.position.x = x_mm4 / 1000.0; target_pose4.position.y = y_mm4 / 1000.0; target_pose4.position.z = (z_mm4-76.7398-13.9+8.5) / 1000.0; bool suction_off_triggered = false; // 循环检查当前位姿 ros::Rate rate(10); // 10 Hz 检查频率 while (ros::ok()) { geometry_msgs::Pose current_pose = arm.getCurrentPose().pose; // ROS_INFO("ZUOBIAO_SHISHI: %f, %f, %f", current_pose.position.x, current_pose.position.y, current_pose.position.z); // 检查是否到达关闭吸盘的位置 if (!suction_off_triggered && is_at_pose(current_pose, target_pose4)) { ROS_INFO("Reached suction off position, trying to turn off suction."); sleep(0.8); writeSpeed(0); writeSpeed(0); writeSpeed(0); sleep(0.5); ROS_INFO("Suction off commands sent."); suction_off_triggered = true; } // 如果两个条件都满足,退出循环 if (suction_off_triggered) { break; } rate.sleep(); } } int main(int argc, char** argv) { // 初始化ROS节点 ros::init(argc, argv, "xarm_api"); ros::NodeHandle nh; XArmAPItest ic; nh.setParam("/xarm/wait_for_finish", true); ros::Publisher sleep_pub_ = nh.advertise<std_msgs::Float32>("/xarm/sleep_sec", 1); ros::ServiceClient motion_ctrl_client_ = nh.serviceClient<xarm_msgs::SetAxis>("/xarm/motion_ctrl"); ros::ServiceClient set_mode_client_ = nh.serviceClient<xarm_msgs::SetInt16>("/xarm/set_mode"); ros::ServiceClient set_state_client_ = nh.serviceClient<xarm_msgs::SetInt16>("/xarm/set_state"); ros::ServiceClient move_lineb_client_ = nh.serviceClient<xarm_msgs::Move>("/xarm/move_lineb"); xarm_msgs::SetAxis set_axis_srv_; xarm_msgs::SetInt16 set_int16_srv_; xarm_msgs::Move move_srv_; float x_mm0, y_mm0, z_mm0; double roll0, pitch0, yaw0; float x_mm1, y_mm1, z_mm1; double roll1, pitch1, yaw1; float x_mm2, y_mm2, z_mm2; double roll2, pitch2, yaw2; float x_mm3, y_mm3, z_mm3; double roll3, pitch3, yaw3; float x_mm4, y_mm4, z_mm4; double roll4, pitch4, yaw4; set_axis_srv_.request.id = 8; set_axis_srv_.request.data = 1; if(motion_ctrl_client_.call(set_axis_srv_)) { ROS_INFO("%s\n", set_axis_srv_.response.message.c_str()); } else { ROS_ERROR("Failed to call service motion_ctrl"); return 1; } set_int16_srv_.request.data = 0; if(set_mode_client_.call(set_int16_srv_)) { ROS_INFO("%s\n", set_int16_srv_.response.message.c_str()); } else { ROS_ERROR("Failed to call service set_mode"); return 1; } set_int16_srv_.request.data = 0; if(set_state_client_.call(set_int16_srv_)) { ROS_INFO("%s\n", set_int16_srv_.response.message.c_str()); } else { ROS_ERROR("Failed to call service set_state"); return 1; } nh.setParam("/xarm/wait_for_finish", false); std_msgs::Float32 sleep_msg; sleep_msg.data = 1.0; sleep_pub_.publish(sleep_msg); if(move_lineb_test(move_srv_, move_lineb_client_, x_mm0, y_mm0, z_mm0, roll0, pitch0, yaw0, x_mm1, y_mm1, z_mm1, roll1, pitch1, yaw1, x_mm2, y_mm2, z_mm2, roll2, pitch2, yaw2, x_mm3, y_mm3, z_mm3, roll3, pitch3, yaw3, x_mm4, y_mm4, z_mm4, roll4, pitch4, yaw4) == 1) return 1; // 
Call the control_suction_during_move function control_suction_during_move(x_mm4, y_mm4, z_mm4); sleep(0.8); nh.setParam("/xarm/wait_for_finish", true); while(ros::ok()) { ros::spinOnce(); } return 0; } Explain this code
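For reference, the central coordinate computation in the code above is the standard pinhole-camera back-projection (general textbook math, not anything specific to this project), assuming `objectHeight` is the distance from the camera to the working plane: with intrinsic matrix $K$, pixel coordinates $(u, v)$ and known depth $Z$,

$$
\mathbf{p}_{\text{cam}} = Z \, K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix},
\qquad
\mathbf{p}_{\text{robot}} = T_{\text{cam}\rightarrow\text{robot}} \, \mathbf{p}_{\text{cam}}
$$

which corresponds line-for-line to `cameraPointMat = objectHeight * K.inv() * pixelPointMat` followed by `obj_robot_frame = camera_to_robot_ * obj_camera_frame1`; the grasp angle is then taken from the minimum-area rectangle of the contour.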

#include #include <sys_app.h> #include <subghz_phy_app.h> #include <radio.h> #include <string.h> /* USER CODE BEGIN Includes */ #include <stm32_timer.h> #include <stm32_seq.h> #include <utilities_def.h> #include <app_version.h> #include <subghz_phy_version.h> /* USER CODE END Includes */ /* External variables ---------------------------------------------------------*/ /* USER CODE BEGIN EV */ /* USER CODE END EV */ /* Private typedef -----------------------------------------------------------*/ void MX_SPI2_Init(void); void ADS1120reset(void); uint16_t ADS1120readonce(void); /* USER CODE BEGIN PTD */ #include <stdio.h> #include <math.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #define SAMPLES_TO_COLLECT 10 static int16_t calibrated_samples[SAMPLES_TO_COLLECT][3]; static uint8_t sample_index = 0; static uint8_t prediction_results[SAMPLES_TO_COLLECT] = {0}; typedef struct Vector { double x, y, z; }Vector1; const Vector1 a_static = { 9.8 / 1.414, 0, 9.8 / 1.414 }; //void cc(Vector1 a_static) //{ // a_static.x=g/sqrt(2); // a_static.y=0; // a_static.z=g/sqrt(2); //} //Vector1 a_static; // cc(*a_static); void normalize(Vector1 *v) // ¸ÄΪָÕë´«µÝ { double l = sqrt(v->x*v->x + v->y*v->y + v->z*v->z); v->x /= l; v->y /= l; v->z /= l; } Vector1 cross(Vector1 a, Vector1 b) { Vector1 result; result.x = a.y * b.z - a.z * b.y; result.y = a.z * b.x - a.x * b.z; result.z = a.x * b.y - a.y * b.x; return result; } Vector1 Convert(double R[3][3], Vector1 a_device) { Vector1 a_world; a_world.x = R[0][0] * a_device.x + R[1][0] * a_device.y + R[2][0] * a_device.z; a_world.y = R[0][1] * a_device.x + R[1][1] * a_device.y + R[2][1] * a_device.z; a_world.z = R[0][2] * a_device.x + R[1][2] * a_device.y + R[2][2] * a_device.z; return a_world; } Vector1 Remove(Vector1 a_world, double g) { Vector1 a_motion; a_motion.x = a_world.x/100; a_motion.y = a_world.y/100; a_motion.z = (a_world.z - g)/100; return a_motion; } Vector1 processBufferTx(uint8_t *BufferTx) { Vector1 res = { 0, 0, 0 }; for (int rain = 0; rain < 10; rain++) { int ax_sign = BufferTx[5 + 6 * rain] ? -1 : 1; int ax_value = BufferTx[6 + 6 * rain]; double ax = ax_sign * (ax_value * 100.0); int ay_sign = BufferTx[7 + 6 * rain] ? -1 : 1; int ay_value = BufferTx[8 + 6 * rain]; double ay = ay_sign * (ay_value * 100.0); int az_sign = BufferTx[9 + 6 * rain] ? 
-1 : 1; int az_value = BufferTx[10 + 6 * rain]; double az = az_sign * (az_value * 100.0); ax = ax / 4.0 * 0.976; ay = ay / 4.0 * 0.976; az = az / 4.0 * 0.976; res.x += ax; res.y += ay; res.z += az; } res.x /= 10; res.y /= 10; res.z /= 10; return res; } Vector1 solve(Vector1 raw_data) { static Vector1 a_static = {9.8/1.414, 0, 9.8/1.414}; normalize(&a_static); /* ×ø±êϵת»» */ Vector1 t = { 1, 0, 0 }; Vector1 x_temp = cross(t, a_static); normalize(&x_temp); Vector1 y = cross(a_static, x_temp); normalize(&y); double R[3][3] = { {x_temp.x, y.x, a_static.x}, {x_temp.y, y.y, a_static.y}, {x_temp.z, y.z, a_static.z} }; /* ×ø±êת»» */ Vector1 a_world = Convert(R, raw_data); /* È¥³ýÖØÁ¦Ó°Ïì */ return Remove(a_world, 9.8); } #define MAX_DATA_POINTS 200 // ѵÁ·¼¯´óС #define INPUT_FEATURE_DIM 3 // Ñù±¾Î¬¶È /* ½á¹¹Ìå */ /* ѵÁ·Êý¾Ý½á¹¹Ìå */ typedef struct { int16_t f[3]; // ÌØÕ÷Êý×é uint8_t label; // ±êÇ© } DataPoint; /* È«¾ÖѵÁ·Êý¾Ý¼¯£¨ÒÑÔ¤ÖÃ50¸öÑù±¾£© */ DataPoint dataset[MAX_DATA_POINTS] = { {{35, -13, -9}, 1},{{37, -13, -11}, 1},{{35, -9, -19}, 1},{{36, -8, -18}, 1}, {{37, -5, -18}, 1},{{37, -11, -23}, 1},{{36, -7, -17}, 1},{{36, -6, -15}, 1}, {{36, -9, -12}, 1},{{35, -9, -12}, 1},{{35, -8, -11}, 1}, {{34, -8, -11}, 1},{{35, -16, -6}, 1},{{35, -16, -5}, 1}, {{36, -17, -6}, 1},{{35, -17, -6}, 1},{{35, -18, -7}, 1}, {{36, -18, -6}, 1},{{36, -18, -6}, 1},{{36, -18, -6}, 1}, {{36, -17, -4}, 1},{{37, -16, -3}, 1},{{37, -23, -6}, 1}, {{37, -23, -6}, 1},{{37, -18, 0}, 1},{{37, -18, -1}, 1}, {{38, -18, -1}, 1},{{38, -18, -1}, 1},{{38, -8, -11}, 1}, {{38, -9, -12}, 1},{{38, -9, -12}, 1},{{26, -8, -11}, 1}, {{25, -20, 0}, 1},{{25, -20, 0}, 1},{{29, -19, 1}, 1}, {{28, -20, 0}, 1},{{30, -19, 1}, 1},{{29, -19, -1}, 1}, {{19, -23, -4}, 0},{{18, -23, -5}, 0},{{15, -14, -6}, 0}, {{20, -23, -11}, 0},{{19, -27, -17}, 0},{{20, -20, -10}, 0}, {{26, -15, -2}, 0},{{27, -16, -3}, 0},{{27, -15, -2}, 0}, {{28, -16, -3}, 0},{{27, -15, -3}, 0},{{27, -16, -3}, 0}, {{26, -8, -11}, 0},{{25, -8, -11}, 0},{{27, -8, -11}, 0}, {{27, -8, -11}, 0},{{3, -14, -1}, 0},{{4, -14, 0}, 0}, {{25, -8, -11}, 0},{{27, -8, -11}, 0},{{27, -26, 8}, 0}, {{3, -24, 9}, 0},{{4, -24, 10}, 0},{{3, -24, 10}, 0}, {{4, -24, 10}, 0},{{3, -23, 10}, 0},{{4, -23, 10}, 0}, {{7, -23, 10}, 0},{{4, -23, 9}, 0},{{4, -23, 9}, 0}, {{4, -23, 11}, 0},{{0, -25, 10}, 0},{{0, -25, 9}, 0}, {{0, -27, 9}, 1},{{28, -26, 12}, 2},{{30, -27, 10}, 2}, {{30, -27, 10}, 2},{{29, -27, 10}, 2},{{29, -28, 9}, 2}, {{29, -28, 9}, 2},{{28, -28, 9}, 2},{{28, -28, 7}, 2}, {{26, -29, 8}, 2},{{27, -28, 7}, 2},{{26, -40, -4}, 2}, {{23, -40, -4}, 2},{{26, -38, -4}, 2},{{24, -39, -5}, 2}, {{25, -37, -3}, 2},{{-1, -40, -4}, 2},{{17, -29, 6}, 2}, {{20, -29, 8}, 2},{{20, -28, 8}, 2},{{17, -28, 7}, 2}, {{18, -29, 6}, 2},{{19, -30, 6}, 2},{{22, -29, 6}, 2}, {{20, -28, 6}, 2},{{21, -28, 4}, 2},{{23, -27, 6}, 2}, {{19, -27, 6}, 2},{{20, -28, 5}, 2},{{18, -28, 5}, 2}, {{22, -30, 2}, 2},{{23, -34, -1}, 2},{{11, -27, 6}, 2}, {{16, -29, 5}, 2},{{16, -33, 1}, 2},{{28, -28, 7}, 2}, {{28, -28, 6}, 2},{{27, -28, 7}, 2},{{28, -28, 7}, 2},{{26, -29, 6}, 2}, {{25, -28, 6}, 2},{{27, -29, 6}, 2},{{28, -30, 6}, 2}, {{29, -28, 6}, 2},{{31, -28, 6}, 2},{{26, -37, -3}, 2}, {{22, -38, -4}, 2},{{22, -35, -1}, 2},{{21, -40, -6}, 2}, {{27, -25, 8}, 2},{{28, -24, 9}, 2},{{27, -25, 7}, 2}, {{30, -25, 7}, 2},{{31, -25, 7}, 2},{{30, -27, 9}, 2}, {{30, -28, 8}, 2},{{30, -29, 6}, 2},{{29, -27, 9}, 2}, {{30, -25, 11}, 2},{{30, -23, 11}, 2},{{28, -25, 11}, 2}, {{26, -25, 9}, 2},{{20, -37, -1}, 2},{{16, -39, -2}, 2}, {{9, -38, -3}, 
2},{{13, -38, -3}, 2},{{12, -37, -2}, 2}, {{-10, -37, -2}, 2},{{0, -30, 4}, 2},{{10, -27, 9}, 2}, {{9, -26, 8}, 2},{{9, -27, 9}, 2},{{12, -27, 9}, 2}, {{14, -26, 8}, 2},{{16, -26, 9}, 2},{{13, -25, 10}, 2},{{15, -25, 11}, 2}, {{16, -25, 11}, 2},{{17, -25, 11}, 2},{{16, -25, 11}, 2}, {{13, -26, 9}, 2},{{15, -27, 9}, 2},{{22, -29, 6}, 2}, {{23, -29, 6}, 2},{{24, -28, 6}, 2},{{24, -26, 5}, 2}, {{25, -26, 6}, 2},{{25, -25, 7}, 2},{{24, -26, 8}, 2}, {{25, -26, 6}, 2},{{25, -25, 7}, 2},{{23, -26, 8}, 2}, {{3, -28, 5}, 2},{{3, -28, 6}, 2},{{1, -29, 6}, 2}, {{2, -30, 6}, 2},{{3, -30, 4}, 2},{{25, -21, 4}, 2}, {{25, -21, 4}, 2},{{25, -21, 4}, 2},{{26, -21, 5}, 2}, {{25, -21, 6}, 2},{{26, -21, 4}, 2},{{32, -11, 14}, 2}, {{32, -11, 14}, 2},{{32, -11, 15}, 2},{{32, -10, 16}, 2}, {{20, 3, 27}, 2},{{19, 5, 29}, 2},{{13, 7, 31}, 2}, {{-5, -6, 16}, 2},{{-6, -13, 9}, 2},{{4, -4, 21}, 2}, {{11, -23, 1}, 2},{{9, -23, 1}, 2},{{9, -23, 0}, 2}, {{10, -23, 1}, 2},{{11, -23, 0}, 2}, {{7, -23, 0}, 2},{{8, -24, -1}, 2},{{10, -24, -1}, 2}, {{10, -23, -1}, 2},{{11, -23, 0}, 2},{{37, -17, 6}, 2}, {{36, -18, 5}, 2},{{34, -17, 6}, 2},{{35, -18, 5}, 2}, {{35, -17, 7}, 2},{{34, -17, 7}, 2},{{35, -18, 6}, 2}, {{36, -17, 6}, 2},{{34, -17, 6}, 2},{{35, -16, 6}, 2}, {{34, 7, -23}, 3},{{33, 1, -23}, 3},{{30, -5, -29}, 3}, {{22, -9, -32}, 3},{{16, -6, -30}, 3},{{22, -10, -34}, 3},{{12, 0, -23}, 3}, {{14, -7, -28}, 3},{{11, -3, -24}, 3},{{13, 2, -16}, 3}, {{27, -20, 6}, 3},{{28, -21, 6}, 3},{{27, -21, 5}, 3}, {{27, -21, 5}, 3},{{27, -19, 4}, 3},{{26, -21, 4}, 3}, {{26, -22, 4}, 3},{{25, -25, 5}, 3},{{25, -26, 4}, 3}, {{22, -33, -2}, 3},{{9, -38, 2}, 3},{{12, -34, 6}, 3},{{18, -23, 2}, 3}, {{17, -22, 1}, 3},{{19, -20, 0}, 3},{{20, -21, 0}, 3}, {{21, -21, 2}, 3},{{20, -21, 2}, 3},{{20, -21, 2}, 3}, {{20, -21, 1}, 3},{{22, -20, 2}, 3},{{23, -23, 3}, 3}, {{23, -10, -15}, 3},{{23, -16, -23}, 3},{{19, -17, -24}, 3}, {{9, -14, -21}, 3},{{18, -21, -27}, 3},{{15, -14, -20}, 3},{{18, -20, -23}, 3}, {{-1, -5, -8}, 3},{{9, -5, -8}, 3},{{8, -4, -7}, 3} }; float train_mean[3] = {0}; float train_std[3] = {0}; void compute_train_stats(void); float euclidean_distance(const float *a, const float *b); uint8_t predict_class(const int16_t *features); void test_knn_performance(void); float euclidean_distance(const float *a, const float *b) { return sqrt(pow(a[0]-b[0],2) + pow(a[1]-b[1],2) + pow(a[2]-b[2],2)); } void compute_train_stats() { // ¼ÆËã¾ùÖµ float sum[3] = {0}; for (int i = 0; i < MAX_DATA_POINTS; i++) { sum[0] += dataset[i].f[0]; sum[1] += dataset[i].f[1]; sum[2] += dataset[i].f[2]; } train_mean[0] = sum[0] / MAX_DATA_POINTS; train_mean[1] = sum[1] / MAX_DATA_POINTS; train_mean[2] = sum[2] / MAX_DATA_POINTS; // ¼ÆËã±ê×¼²î float variance[3] = {0}; for (int i = 0; i < MAX_DATA_POINTS; i++) { variance[0] += pow(dataset[i].f[0] - train_mean[0], 2); variance[1] += pow(dataset[i].f[1] - train_mean[1], 2); variance[2] += pow(dataset[i].f[2] - train_mean[2], 2); } for (int j = 0; j < 3; j++) { train_std[j] = sqrt(variance[j]/MAX_DATA_POINTS); if (train_std[j] < 1e-5) train_std[j] = 1e-5; // ·ÀÖ¹³ýÁã } } /* KNNÔ¤²âʵÏÖ£¨k=10£© */ uint8_t predict_class(const int16_t *features) { float normalized_input[3]; // ÊäÈëÌØÕ÷¹éÒ»»¯ for (int j=0; j<3; j++) { normalized_input[j] = (features[j]-train_mean[j])/train_std[j]; } // ´æ´¢¾àÀëºÍ±êÇ© float distances[MAX_DATA_POINTS]; uint8_t labels[MAX_DATA_POINTS]; const uint8_t k = 7; // ¾­ÊµÑéµ÷ÕûµÄ×î¼ÑkÖµ // ¼ÆËãËùÓоàÀë for (int i=0; i<MAX_DATA_POINTS; i++) { float normalized_train[3]; for (int j=0; j<3; j++) { 
normalized_train[j] = (dataset[i].f[j]-train_mean[j])/train_std[j]; } distances[i] = euclidean_distance(normalized_input, normalized_train); labels[i] = dataset[i].label; } // ÕÒ³ök¸ö×î½üÁÚ for (int i=0; i<k; i++) { int min_idx = i; for (int j=i+1; j<MAX_DATA_POINTS; j++) { if (distances[j] < distances[min_idx]) min_idx = j; } // ½»»»Î»Öà float tmp_d = distances[i]; distances[i] = distances[min_idx]; distances[min_idx] = tmp_d; uint8_t tmp_l = labels[i]; labels[i] = labels[min_idx]; labels[min_idx] = tmp_l; } // ¼ÓȨͶƱ£¨Ê¹ÓþàÀëµ¹Êý×÷ÎªÈ¨ÖØ£© float weight_sum[4] = {0}; for (int i=0; i<k; i++) { weight_sum[labels[i]] += 1/(distances[i]+1e-5); // ·ÀÖ¹³ýÁã } // È·¶¨×îÖÕÀà±ð uint8_t predicted = 0; float max_weight = 0; for (int i=0; i<4; i++) { if (weight_sum[i] > max_weight) { max_weight = weight_sum[i]; predicted = i; } } printf("predicted label: %d\n", predicted); return predicted; } /* USER CODE END EV */ int * features; /* Private typedef -----------------------------------------------------------*/ /* USER CODE BEGIN PTD */ typedef enum { S_IDLE, S_RX, S_RX_TIMEOUT, S_RX_ERROR, S_TX, S_TXing, S_TX_TIMEOUT, S_SEND_DATA, } States_t; /* USER CODE END PTD */ /* Private define ------------------------------------------------------------*/ /* USER CODE BEGIN PD */ /* Configurations */ /*Timeout*/ #define RX_TIMEOUT_VALUE 5000 #define TX_TIMEOUT_VALUE 10000 //???? #define SEND_PERIOD_MS 15000 //??ID #define LOCAL_ID 0x01 //??ID #define REMOTE_ID 0x10 //???? #define DATA_TYPE 0x01 /*Size of the payload to be sent*/ /* Size must be greater of equal the PING and PONG*/ #define MAX_APP_BUFFER_SIZE 255 #if (PAYLOAD_LEN > MAX_APP_BUFFER_SIZE) #error PAYLOAD_LEN must be less or equal than MAX_APP_BUFFER_SIZE #endif /* (PAYLOAD_LEN > MAX_APP_BUFFER_SIZE) */ /*????*/ /*SHT3x??*/ #define SHT3x_ADDRESS 0x44<<1 //?????7?,?????? #define SCL_Pin GPIO_PIN_15 #define SCL_GPIO_Port GPIOB #define SDA_Pin GPIO_PIN_9 #define SDA_GPIO_Port GPIOB #define SHT3x_W_SCL(x) HAL_GPIO_WritePin(GPIOB, SCL_Pin, (GPIO_PinState)(x)) #define SHT3x_W_SDA(x) HAL_GPIO_WritePin(GPIOB, SDA_Pin, (GPIO_PinState)(x)) #define SHT3x_R_SDA() HAL_GPIO_ReadPin(GPIOB, SDA_Pin) #define SHT3x_R_SCL() HAL_GPIO_ReadPin(GPIOB, SCL_Pin) /* USER CODE END PD */ /* Private macro -------------------------------------------------------------*/ /* USER CODE BEGIN PM */ /*??????*/ void SHT3x_I2C_START(){ SHT3x_W_SDA(1); SHT3x_W_SCL(1); HAL_Delay(1); SHT3x_W_SDA(0); HAL_Delay(1); SHT3x_W_SCL(0); HAL_Delay(1); } /*??????*/ void SHT3x_I2C_STOP(){ SHT3x_W_SDA(0); SHT3x_W_SCL(1); HAL_Delay(1); SHT3x_W_SDA(1); HAL_Delay(1); } /** * @brief ?????? * @param ? * @retval 1-?????,0-???? */ uint8_t WaitAck(void) { uint8_t ret; SHT3x_W_SCL(1); HAL_Delay(1); if(SHT3x_R_SDA()){ ret=1; } else{ ret=0; } SHT3x_W_SCL(0); HAL_Delay(1); return ret; } /** * @brief I2C?????? * @param Byte ???????? * @retval ? */ void SHT3x_I2C_SendByte(uint8_t Byte) { uint8_t i; for (i = 0; i < 8; i++) { SHT3x_W_SDA((Byte<<i) & 0x80 ); SHT3x_W_SCL(1); HAL_Delay(1); SHT3x_W_SCL(0); HAL_Delay(1); } SHT3x_W_SDA(1);//??SDA?? } /** * @brief ???????? * @param MSB ?8? * @param LSB ?8? * @retval ? */ void SHT3x_WriteByte(uint8_t MSB,uint8_t LSB) { SHT3x_I2C_START(); //?????? SHT3x_I2C_SendByte(SHT3x_ADDRESS); //??????? WaitAck(); SHT3x_I2C_SendByte(MSB);//???8??? WaitAck(); SHT3x_I2C_SendByte(LSB);//???8??? WaitAck(); SHT3x_I2C_STOP(); } /** * @brief I2C?????? * @param NACK 1-?????,0-???? * @retval ???????? */ uint8_t SHT3x_ReadData(uint8_t NACK) { uint8_t i,Byte=0; SHT3x_W_SDA(1);//??SDA?? 
for (i = 0; i < 8; i++) { SHT3x_W_SCL(1); HAL_Delay(1); Byte=Byte|(SHT3x_R_SDA()<<(7-i)); HAL_Delay(1); SHT3x_W_SCL(0); HAL_Delay(1); } SHT3x_W_SDA(NACK);//????/????? SHT3x_W_SCL(1); HAL_Delay(1); SHT3x_W_SCL(0); HAL_Delay(1); SHT3x_W_SDA(1);//??SDA?? return Byte; } /*????SHT3x*/ void SHT3x_SoftReset(void) { SHT3x_WriteByte(0x30,0xA2); //?????SHT3x } /*?????*/ void SHT3x_Init(void) { GPIO_InitTypeDef GPIO_InitStruct = {0}; /* GPIO Ports Clock Enable */ __HAL_RCC_GPIOA_CLK_ENABLE(); /*Configure GPIO pin Output Level */ HAL_GPIO_WritePin(GPIOB, SCL_Pin | SDA_Pin, GPIO_PIN_SET); /*Configure GPIO pins : PBPin PBPin */ GPIO_InitStruct.Pin = SCL_Pin | SDA_Pin; GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_OD; GPIO_InitStruct.Pull = GPIO_NOPULL; GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_HIGH; HAL_GPIO_Init(GPIOB, &GPIO_InitStruct); SHT3x_W_SCL(1); SHT3x_W_SDA(1); // SHT3x_WriteByte(0X27,0X21); //????????(??10?,Medium Repeatability) HAL_Delay(1); } #define SCL1_Pin GPIO_PIN_4 #define SCL1_GPIO_Port GPIOA #define SDA1_Pin GPIO_PIN_15 #define SDA1_GPIO_Port GPIOA #define SHT3x1_W_SCL(x) HAL_GPIO_WritePin(GPIOA, SCL1_Pin, (GPIO_PinState)(x)) #define SHT3x1_W_SDA(x) HAL_GPIO_WritePin(GPIOA, SDA1_Pin, (GPIO_PinState)(x)) #define SHT3x1_R_SDA() HAL_GPIO_ReadPin(GPIOA, SDA1_Pin) #define SHT3x1_R_SCL() HAL_GPIO_ReadPin(GPIOA, SCL1_Pin) /* USER CODE END EF */ void MyI2C_W_SCL(uint8_t BitValue) { SHT3x1_W_SCL(BitValue); HAL_Delay(1); } void MyI2C_W_SDA(uint8_t BitValue) { SHT3x1_W_SDA(BitValue); HAL_Delay(1); } uint8_t MyI2C_R_SDA(void) { uint8_t BitValue; BitValue = SHT3x1_R_SDA(); HAL_Delay(1); return BitValue; } void MyI2C_Init(void) { GPIO_InitTypeDef GPIO_InitStruct = {0}; /* GPIO Ports Clock Enable */ __HAL_RCC_GPIOA_CLK_ENABLE(); /*Configure GPIO pin Output Level */ HAL_GPIO_WritePin(GPIOA, SCL1_Pin | SDA1_Pin, GPIO_PIN_SET); /*Configure GPIO pins : PBPin PBPin */ GPIO_InitStruct.Pin = SCL1_Pin | SDA1_Pin; GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_OD; GPIO_InitStruct.Pull = GPIO_NOPULL; GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_HIGH; // GPIO_InitStruct.Pin = GPIO_PIN_5; // GPIO_InitStruct.Mode = GPIO_MODE_INPUT; // GPIO_InitStruct.Pull = GPIO_NOPULL; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); } void MyI2C_Start(void) { MyI2C_W_SDA(1); MyI2C_W_SCL(1); MyI2C_W_SDA(0); MyI2C_W_SCL(0); } void MyI2C_Stop(void) { MyI2C_W_SDA(0); MyI2C_W_SCL(1); MyI2C_W_SDA(1); } void MyI2C_SendByte(uint8_t Byte) { uint8_t i; for (i = 0; i < 8; i ++) { MyI2C_W_SDA(Byte & (0x80 >> i)); MyI2C_W_SCL(1); MyI2C_W_SCL(0); } } uint8_t MyI2C_ReceiveByte(void) { uint8_t i, Byte = 0x00; MyI2C_W_SDA(1); for (i = 0; i < 8; i ++) { MyI2C_W_SCL(1); if (MyI2C_R_SDA() == 1){Byte |= (0x80 >> i);} MyI2C_W_SCL(0); } return Byte; } void MyI2C_SendAck(uint8_t AckBit) { MyI2C_W_SDA(AckBit); MyI2C_W_SCL(1); MyI2C_W_SCL(0); } uint8_t MyI2C_ReceiveAck(void) { uint8_t AckBit; MyI2C_W_SDA(1); MyI2C_W_SCL(1); AckBit = MyI2C_R_SDA(); MyI2C_W_SCL(0); return AckBit; } #define MPU6050_ADDRESS 0x32 void MPU6050_WriteReg(uint8_t RegAddress, uint8_t Data) { MyI2C_Start(); MyI2C_SendByte(MPU6050_ADDRESS); MyI2C_ReceiveAck(); MyI2C_SendByte(RegAddress); MyI2C_ReceiveAck(); MyI2C_SendByte(Data); MyI2C_ReceiveAck(); MyI2C_Stop(); } uint8_t MPU6050_ReadReg(uint8_t RegAddress) { uint8_t Data; MyI2C_Start(); MyI2C_SendByte(MPU6050_ADDRESS); MyI2C_ReceiveAck(); MyI2C_SendByte(RegAddress); MyI2C_ReceiveAck(); MyI2C_Start(); MyI2C_SendByte(MPU6050_ADDRESS | 0x01); MyI2C_ReceiveAck(); Data = MyI2C_ReceiveByte(); MyI2C_SendAck(1); MyI2C_Stop(); return Data; } #define 
MPU6050_PWR_MGMT_1 0x20 #define MPU6050_PWR_MGMT_2 0x25 #define MPU6050_SMPLRT_DIV 0x23 void MPU6050_Init(void) { MyI2C_Init(); MPU6050_WriteReg(MPU6050_PWR_MGMT_1, 0x60); MPU6050_WriteReg(MPU6050_PWR_MGMT_2, 0x84); MPU6050_WriteReg(MPU6050_SMPLRT_DIV, 0x00); } #define MPU6050_ACCEL_XOUT_H 0x29 #define MPU6050_ACCEL_XOUT_L 0x28 #define MPU6050_ACCEL_YOUT_H 0x2B #define MPU6050_ACCEL_YOUT_L 0x2A #define MPU6050_ACCEL_ZOUT_H 0x2D #define MPU6050_ACCEL_ZOUT_L 0x2C void MPU6050_GetData(int16_t *AccX, int16_t *AccY, int16_t *AccZ) { uint8_t DataH, DataL; DataH = MPU6050_ReadReg(MPU6050_ACCEL_XOUT_H); DataL = MPU6050_ReadReg(MPU6050_ACCEL_XOUT_L); *AccX = (DataH << 8) | DataL; DataH = MPU6050_ReadReg(MPU6050_ACCEL_YOUT_H); DataL = MPU6050_ReadReg(MPU6050_ACCEL_YOUT_L); *AccY = (DataH << 8) | DataL; DataH = MPU6050_ReadReg(MPU6050_ACCEL_ZOUT_H); DataL = MPU6050_ReadReg(MPU6050_ACCEL_ZOUT_L); *AccZ = (DataH << 8) | DataL; } /* USER CODE END PM */ /* Private variables ---------------------------------------------------------*/ /* Radio events function pointer */ static RadioEvents_t RadioEvents; /* USER CODE BEGIN PV */ /*Ping Pong FSM states */ static States_t State = S_IDLE; /* App Rx Buffer*/ static uint8_t BufferRx[MAX_APP_BUFFER_SIZE]; /* App Tx Buffer*/ static uint8_t BufferTx[MAX_APP_BUFFER_SIZE]; /* Last Received Buffer Size*/ uint16_t RxBufferSize = 0; /* Last Received packer Rssi*/ int8_t RssiValue = 0; /* Last Received packer SNR (in Lora modulation)*/ int8_t SnrValue = 0; /* Led Timers objects*/ //????? static UTIL_TIMER_Object_t timerSendData; /* random delay to make sure 2 devices will sync*/ /* the closest the random delays are, the longer it will take for the devices to sync when started simultaneously*/ static int32_t random_delay; //???? static int16_t temp_v=0; static uint16_t bat_v=0; static uint16_t pa11_v=0; /* USER CODE END PV */ /* Private function prototypes -----------------------------------------------*/ /*! 
* @brief Function to be executed on Radio Tx Done event */ static void OnTxDone(void); /** * @brief Function to be executed on Radio Rx Done event * @param payload ptr of buffer received * @param size buffer size * @param rssi * @param LoraSnr_FskCfo */ static void OnRxDone(uint8_t *payload, uint16_t size, int16_t rssi, int8_t LoraSnr_FskCfo); /** * @brief Function executed on Radio Tx Timeout event */ static void OnTxTimeout(void); /** * @brief Function executed on Radio Rx Timeout event */ static void OnRxTimeout(void); /** * @brief Function executed on Radio Rx Error event */ static void OnRxError(void); /* USER CODE BEGIN PFP */ /** * @brief Function executed on when led timer elapses * @param context ptr of LED context */ static void OnSendDataEvent(void *context); /** * @brief PingPong state machine implementation */ static void SendData_Process(void); static char ReadSHT3x(float *Hum,float *Temp); static uint8_t SHT3x_CRC_CAL(uint16_t DAT); /* USER CODE END PFP */ void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin) { switch (GPIO_Pin) { case 1: /* Note: when "EventType == TX_ON_TIMER" this GPIO is not initialized */ APP_LOG(TS_OFF, VLEVEL_M, "\n\rBUT1 Event\n\r"); break; default: break; } } /* Exported functions ---------------------------------------------------------*/ void SubghzApp_Init(void) { /* USER CODE BEGIN SubghzApp_Init_1 */ /* Print APP version*/ /*???????*/ UTIL_TIMER_Create(&timerSendData, 0xFFFFFFFFU, UTIL_TIMER_ONESHOT, OnSendDataEvent, NULL); /*??????*/ UTIL_TIMER_SetPeriod(&timerSendData, SEND_PERIOD_MS); /*????*/ UTIL_TIMER_Start(&timerSendData); /* USER CODE END SubghzApp_Init_1 */ /* Radio initialization */ RadioEvents.TxDone = OnTxDone; RadioEvents.RxDone = OnRxDone; RadioEvents.TxTimeout = OnTxTimeout; RadioEvents.RxTimeout = OnRxTimeout; RadioEvents.RxError = OnRxError; /*?????*/ Radio.Init(&RadioEvents); /*??????*/ Radio.SetChannel(RF_FREQUENCY); /* Radio configuration */ APP_LOG(TS_OFF, VLEVEL_M, "---------------\n\r"); APP_LOG(TS_OFF, VLEVEL_M, "LORA_MODULATION\n\r"); APP_LOG(TS_OFF, VLEVEL_M, "LORA_BW=%d kHz\n\r", (1 << LORA_BANDWIDTH) * 125); APP_LOG(TS_OFF, VLEVEL_M, "LORA_SF=%d\n\r", LORA_SPREADING_FACTOR); /*??????*/ Radio.SetTxConfig(MODEM_LORA, TX_OUTPUT_POWER, 0, LORA_BANDWIDTH, LORA_SPREADING_FACTOR, LORA_CODINGRATE, LORA_PREAMBLE_LENGTH, LORA_FIX_LENGTH_PAYLOAD_ON, true, 0, 0, LORA_IQ_INVERSION_ON, TX_TIMEOUT_VALUE); /*??????*/ Radio.SetRxConfig(MODEM_LORA, LORA_BANDWIDTH, LORA_SPREADING_FACTOR, LORA_CODINGRATE, 0, LORA_PREAMBLE_LENGTH, LORA_SYMBOL_TIMEOUT, LORA_FIX_LENGTH_PAYLOAD_ON, 0, true, 0, 0, LORA_IQ_INVERSION_ON, true); /*????????*/ Radio.SetMaxPayloadLength(MODEM_LORA, MAX_APP_BUFFER_SIZE); /* LED initialization*/ compute_train_stats(); /* ?? initialization*/ BSP_PB_Init(BUTTON_SW1, BUTTON_MODE_EXTI); /*calculate random delay for synchronization*/ random_delay = (Radio.Random()) >> 22; /*10bits random e.g. 
from 0 to 1023 ms*/ /*fills tx buffer*/ memset(BufferTx, 0x0, MAX_APP_BUFFER_SIZE); APP_LOG(TS_ON, VLEVEL_L, "rand=%d\n\r", random_delay); /*starts reception*/ //Radio.Rx(RX_TIMEOUT_VALUE + random_delay); /*register task to to be run in while(1) after Radio IT*/ /* ????*/ UTIL_SEQ_RegTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), UTIL_SEQ_RFU, SendData_Process); /* USER CODE END SubghzApp_Init_2 */ } /* USER CODE BEGIN EF */ /* USER CODE END EF */ /* Private functions ---------------------------------------------------------*/ /* ??????*/ static void OnTxDone(void) { /* USER CODE BEGIN OnTxDone */ /* Update the State of the FSM*/ State = S_TX; /* Run PingPong process in background*/ UTIL_SEQ_SetTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), CFG_SEQ_Prio_0); /* USER CODE END OnTxDone */ } /* ??????*/ static void OnRxDone(uint8_t *payload, uint16_t size, int16_t rssi, int8_t LoraSnr_FskCfo) { /* USER CODE BEGIN OnRxDone */ APP_LOG(TS_ON, VLEVEL_L, "OnRxDone\n\r"); APP_LOG(TS_ON, VLEVEL_L, "RssiValue=%d dBm, SnrValue=%ddB\n\r", rssi, LoraSnr_FskCfo); /* Update the State of the FSM*/ State = S_RX; /* Clear BufferRx*/ memset(BufferRx, 0, MAX_APP_BUFFER_SIZE); /* Record payload size*/ RxBufferSize = size; if (RxBufferSize <= MAX_APP_BUFFER_SIZE) { memcpy(BufferRx, payload, RxBufferSize); } /* Record Received Signal Strength*/ RssiValue = rssi; /* Record payload content*/ APP_LOG(TS_ON, VLEVEL_H, "payload. size=%d \n\r", size); for (int i = 0; i < PAYLOAD_LEN; i++) { APP_LOG(TS_OFF, VLEVEL_H, "%02X", BufferRx[i]); if (i % 16 == 15) { APP_LOG(TS_OFF, VLEVEL_H, "\n\r"); } } APP_LOG(TS_OFF, VLEVEL_H, "\n\r"); /* Run PingPong process in background*/ UTIL_SEQ_SetTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), CFG_SEQ_Prio_0); /* USER CODE END OnRxDone */ } /* ??????*/ static void OnTxTimeout(void) { /* USER CODE BEGIN OnTxTimeout */ APP_LOG(TS_ON, VLEVEL_L, "OnTxTimeout\n\r"); /* Update the State of the FSM*/ State = S_TX_TIMEOUT; /* Run PingPong process in background*/ UTIL_SEQ_SetTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), CFG_SEQ_Prio_0); /* USER CODE END OnTxTimeout */ } /* ??????*/ static void OnRxTimeout(void) { /* USER CODE BEGIN OnRxTimeout */ APP_LOG(TS_ON, VLEVEL_L, "OnRxTimeout\n\r"); /* Update the State of the FSM*/ State = S_RX_TIMEOUT; /* Run PingPong process in background*/ UTIL_SEQ_SetTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), CFG_SEQ_Prio_0); /* USER CODE END OnRxTimeout */ } /* ??????*/ static void OnRxError(void) { /* USER CODE BEGIN OnRxError */ APP_LOG(TS_ON, VLEVEL_L, "OnRxError\n\r"); /* Update the State of the FSM*/ State = S_RX_ERROR; /* Run PingPong process in background*/ UTIL_SEQ_SetTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), CFG_SEQ_Prio_0); /* USER CODE END OnRxError */ } # define n 51.27 # define m -5.55 float change(int c) { float ph; ph=(c-n)/m; return ph; } int i=0; int k; void process_calibrated_data(void) { // ͳ¼ÆÖ÷ÒªÐÐΪ£¨±£Áô£© uint8_t counts[4] = {0}; for(int i=0; i<SAMPLES_TO_COLLECT; i++) { counts[prediction_results[i]]++; } uint8_t max_label = 0; for(int i=0; i<4; i++) { if(counts[i] > counts[max_label]) max_label = i; } printf("Dominant Behavior: %d\n", max_label); int16_t input_features[SAMPLES_TO_COLLECT][3]; /* ת»»ÎªKNNÊäÈë¸ñʽ */ for(int i=0; i<SAMPLES_TO_COLLECT; i++){ input_features[i][0] = calibrated_samples[i][0]; input_features[i][1] = calibrated_samples[i][1]; input_features[i][2] = calibrated_samples[i][2]; } /* Ö´ÐÐÔ¤²â²¢Í³¼Æ×¼È·¶È */ uint8_t correct_count = 0; for(int i=0; i<SAMPLES_TO_COLLECT; i++){ uint8_t pred = 
predict_class(input_features[i]); prediction_results[i] = pred; /* Óëʵ¼Ê±êÇ©±È½Ï£¨ÕâÀïÐèÒª¸ù¾Ýʵ¼ÊÇé¿öÐ޸ģ© */ if(pred == dataset[i].label) correct_count++; } /* Êä³ö½á¹û */ float accuracy = (float)correct_count/SAMPLES_TO_COLLECT*100; printf("Prediction Accuracy: %.1f%%\n", accuracy); } static uint8_t sample_count = 0; // ¶¨ÒåÒ»¸öÊý×éÀ´´æ´¢Ð£×¼ºóµÄÊý¾Ý #define SAMPLE_BUFFER_SIZE 10 static Vector1 sample_buffer[SAMPLE_BUFFER_SIZE]; static uint8_t buffer_index = 0; static void SendData_Process(void) { switch (State) { case S_IDLE: case S_TXing: break; case S_RX: Radio.Sleep(); State = S_IDLE; break; case S_TX: Radio.Sleep(); State = S_IDLE; break; case S_RX_TIMEOUT: case S_RX_ERROR: case S_TX_TIMEOUT: Radio.Sleep(); State = S_IDLE; break; case S_SEND_DATA: { MPU6050_Init(); int16_t AX, AY, AZ; for(int rain=0; rain<10; rain++) { MPU6050_GetData(&AX, &AY, &AZ); AX = AX/4 * 0.976; AY = AY/4 * 0.976; AZ = AZ/4 * 0.976; BufferTx[5+6*rain] = (AX < 0) ? 1 : 0; BufferTx[6+6*rain] = abs(AX)/100; BufferTx[7+6*rain] = (AY < 0) ? 1 : 0; BufferTx[8+6*rain] = abs(AY)/100; BufferTx[9+6*rain] = (AZ < 0) ? 1 : 0; BufferTx[10+6*rain] = abs(AZ)/100; printf("%d ",AX); printf("%d ",AY); printf("%d\n",AZ); /* ´«¸ÐÆ÷У׼ */ Vector1 raw_data = {AX, AY, AZ}; Vector1 calib_data = solve(raw_data); /* ´æ´¢µ½»º³åÇø */ if(buffer_index < SAMPLE_BUFFER_SIZE) { sample_buffer[buffer_index] = calib_data; buffer_index++; } printf("Sample %d: %.2f, %.2f, %.2f\n", buffer_index, calib_data.x, calib_data.y, calib_data.z); } /* µ±ÊÕ¼¯Âú10×éÊý¾Ýʱ½øÐÐÔ¤²â */ if(buffer_index >= SAMPLE_BUFFER_SIZE) { /* ת»»Êý¾Ý¸ñʽ¹©KNNʹÓà */ int16_t knn_input[SAMPLE_BUFFER_SIZE][3]; for(int i=0; i<SAMPLE_BUFFER_SIZE; i++) { knn_input[i][0] = (int16_t)(sample_buffer[i].x); knn_input[i][1] = (int16_t)(sample_buffer[i].y); knn_input[i][2] = (int16_t)(sample_buffer[i].z); } /* Ö´ÐÐÔ¤²â²¢Í³¼ÆÆ±Êý */ uint8_t votes[4] = {0}; // [0]:±êÇ©0µÄƱÊý£¬[1]:±êÇ©1... 
for(int i=0; i<SAMPLE_BUFFER_SIZE; i++) { uint8_t label = predict_class(knn_input[i]); BufferTx[75] = label; votes[label]++; } /* аæÍ¶Æ±Âß¼­ */ uint8_t final_label = 0; const uint8_t zero_votes = votes[0]; // ±êÇ©0µÄƱÊý // Ìõ¼þ1£º0µÄƱÊý¡Ý9ʱֱ½Ó·µ»Ø0 if(zero_votes >= 9) { final_label = 0; } // Ìõ¼þ2£ºÅųý0ºó±È½ÏÆäËû±êÇ© else { // ³õʼ»¯ÎªµÚÒ»¸ö·ÇÁã±êÇ© final_label = 1; uint8_t max_votes = votes[1]; // ±éÀú±êÇ©1-3ÕÒ×î´óÖµ for(uint8_t label=2; label<=3; label++){ if(votes[label] > max_votes){ final_label = label; max_votes = votes[label]; } // ƽƱʱѡÔñ±àºÅСµÄ±êÇ© else if(votes[label] == max_votes && label < final_label){ final_label = label; } } // ´¦ÀíÈ«ÁãµÄÌØÊâÇé¿ö£¨ÀíÂÛÉϲ»»á³öÏÖ£© if(max_votes == 0){ final_label = 0; // ½µ¼¶·µ»Ø0 } } printf("Final Prediction: %d\n", final_label); /* ÖØÖûº³åÇø */ buffer_index = 0; memset(sample_buffer, 0, sizeof(sample_buffer)); } /* ·¢ËÍԭʼÊý¾Ý£¨±£³ÖÔ­ÓÐÂß¼­£© */ BufferTx[0] = 5;//¶ÔÓ¦ÐòºÅ MX_SPI2_Init(); uint16_t ADCXX=0x00; uint8_t i=0; int ADCX=0; ADS1120reset(); HAL_Delay(20); ADCXX = ADS1120readonce(); if(ADCXX>32767) {ADCXX=3.15*(65534-ADCXX)/100; ADCX=ADCX-ADCXX; HAL_Delay(10); } if(ADCXX<32767) {ADCXX=3.15*ADCXX/100; ADCX=ADCX+ADCXX; HAL_Delay(10); } if(ADCXX==32767) { HAL_Delay(10); } ADCX=ADCX/50; printf("%d\r\n",ADCX); float a; a=change(ADCX); printf("%1.1f\r\n",a); SHT3x_Init(); float Temp,Hum; // if(ReadSHT3x(&Hum,&Temp)) // { if(Temp>=0) { printf("%.2f\n", Temp);// } else { printf("%.2f\n", Temp);// } } else {printf("error");} int zhengshu,xiaoshu; zhengshu=Temp*100/100; xiaoshu=Temp*100; xiaoshu=xiaoshu%100; BufferTx[1]=zhengshu; BufferTx[2]=xiaoshu; a=a*10; int b,c; b=(int)a/10; c=(int)a%10; c=c+1; BufferTx[3]=b; BufferTx[4]=c; Radio.Send(BufferTx, 80); State = S_TXing; break; } break; default: break; } } static void OnSendDataEvent(void *context) { // BSP_LED_On(LED_GREEN); // BSP_LED_On(LED_RED); UTIL_TIMER_Start(&timerSendData); if(State == S_IDLE) { State = S_SEND_DATA; UTIL_SEQ_SetTask((1 << CFG_SEQ_Task_SubGHz_Phy_App_Process), CFG_SEQ_Prio_0); } else { APP_PRINTF("OnSendDataEvent Error State(%d)\n\r",State); } } static char ReadSHT3x(float *Hum,float *Temp) { uint16_t HumData,TempData,HumCRC,TempCRC;//??????????? // SHT3x_WriteByte(0XE0,0X00); //????,????,????????? SHT3x_WriteByte(0X2C,0X0D); //????????(??????,Medium Repeatability) SHT3x_I2C_START(); //?????? SHT3x_I2C_SendByte(SHT3x_ADDRESS|0x01);//??????? WaitAck(); SHT3x_W_SCL(1); //??SCL?? while (!SHT3x_R_SCL()) ; //??????SCL?? TempData = SHT3x_ReadData(0); //?????8??? TempData=TempData<<8; //??8? // int c; // c=SHT3x_ReadData(0); TempData |= SHT3x_ReadData(0); //?????8??? TempCRC = SHT3x_ReadData(0); //????CRC???? HumData = SHT3x_ReadData(0); //?????8??? HumData=HumData<<8; //??8? HumData |= SHT3x_ReadData(0); //?????8??? HumCRC = SHT3x_ReadData(1); //????CRC???? SHT3x_I2C_STOP(); //?????? if( SHT3x_CRC_CAL(HumData)==HumCRC && SHT3x_CRC_CAL(TempData)==TempCRC ){ //????????CRC?? *Hum = (float)HumData*100/(65536-1); //????16?????????10?????? // *Temp=(-45+(175.0*((TempData<<8)+c)/65535.0))*100; // *Temp = (float)TempData*175/(65536-1)-45; //????16?????????10?????? return 1; } return 0; } static uint8_t SHT3x_CRC_CAL(uint16_t DAT) { uint8_t i,t,temp; uint8_t CRC_BYTE; CRC_BYTE = 0xFF; temp = (DAT>>8) & 0xFF; for(t = 0; t < 2; t++) { CRC_BYTE ^= temp; for(i = 0;i < 8;i ++) { if(CRC_BYTE & 0x80) { CRC_BYTE <<= 1; CRC_BYTE ^= 0x31; } else { CRC_BYTE <<= 1; } } if(t == 0) { temp = DAT & 0xFF; } } return CRC_BYTE; } 将knn改为支持向量机算法
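The request above asks to replace the KNN classifier with a support vector machine. Below is a minimal, inference-only sketch of how a linear one-vs-rest SVM could slot into the same interface as `predict_class()`. The weight and bias arrays are placeholders: they would have to come from training the SVM offline (for example with scikit-learn's LinearSVC on the same normalized features) and pasting the learned parameters into the firmware. The function name `svm_predict_class` and the parameter arrays are hypothetical; only `train_mean`/`train_std` come from the existing code.

```c
/* Minimal sketch: linear one-vs-rest SVM inference in place of the KNN.
 * Assumptions: 4 classes (labels 0-3), 3 input features, and the
 * weights/biases below are PLACEHOLDERS that must be produced by
 * offline training on the same normalized features. */
#include <stdint.h>

#define SVM_NUM_CLASSES  4
#define SVM_NUM_FEATURES 3

/* Placeholder parameters - replace with values learned offline. */
static const float svm_weights[SVM_NUM_CLASSES][SVM_NUM_FEATURES] = {
    { 0.0f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 0.0f },
};
static const float svm_bias[SVM_NUM_CLASSES] = { 0.0f, 0.0f, 0.0f, 0.0f };

/* Reuse the normalization statistics already computed by compute_train_stats(). */
extern float train_mean[3];
extern float train_std[3];

uint8_t svm_predict_class(const int16_t *features)
{
    float x[SVM_NUM_FEATURES];

    /* Normalize the input exactly as the KNN version did. */
    for (int j = 0; j < SVM_NUM_FEATURES; j++) {
        x[j] = (features[j] - train_mean[j]) / train_std[j];
    }

    /* One-vs-rest: pick the class whose decision value w.x + b is largest. */
    uint8_t best = 0;
    float best_score = -1e30f;
    for (int c = 0; c < SVM_NUM_CLASSES; c++) {
        float score = svm_bias[c];
        for (int j = 0; j < SVM_NUM_FEATURES; j++) {
            score += svm_weights[c][j] * x[j];
        }
        if (score > best_score) {
            best_score = score;
            best = c;
        }
    }
    return best;
}
```

With this in place, the calls to `predict_class(knn_input[i])` in `SendData_Process()` would simply become `svm_predict_class(knn_input[i])`, and the `dataset[]` table plus the distance/voting loop are no longer needed at run time, which frees RAM and removes the O(N) distance computation per sample. Whether a linear decision boundary is good enough for these four behaviors would have to be checked against the collected data.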

/content Mounted at /content/drive env: PYTHONDONTWRITEBYTECODE=1 env: TF_CPP_MIN_LOG_LEVEL=1 41 packages can be upgraded. Run 'apt list --upgradable' to see them. W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does not seem to provide it (sources.list entry misspelt?) --2025-07-10 10:19:00-- https://github.com/camenduru/gperftools/releases/download/v1.0/libtcmalloc_minimal.so.4 Resolving github.com (github.com)... 140.82.113.4 Connecting to github.com (github.com)|140.82.113.4|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/669786276/620e2e64-be9f-4599-904f-18ee3811e159?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20250710%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250710T101901Z&X-Amz-Expires=1800&X-Amz-Signature=b5c26ca3d158bb5b88d2b09125b067e065003244102eeb42a1d86a5fc1b6c62e&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dlibtcmalloc_minimal.so.4&response-content-type=application%2Foctet-stream [following] --2025-07-10 10:19:01-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/669786276/620e2e64-be9f-4599-904f-18ee3811e159?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20250710%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250710T101901Z&X-Amz-Expires=1800&X-Amz-Signature=b5c26ca3d158bb5b88d2b09125b067e065003244102eeb42a1d86a5fc1b6c62e&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dlibtcmalloc_minimal.so.4&response-content-type=application%2Foctet-stream Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 373960 (365K) [application/octet-stream] Saving to: ‘/content/libtcmalloc_minimal.so.4’ /content/libtcmallo 100%[===================>] 365.20K --.-KB/s in 0.02s 2025-07-10 10:19:01 (15.2 MB/s) - ‘/content/libtcmalloc_minimal.so.4’ saved [373960/373960] env: LD_PRELOAD=/content/libtcmalloc_minimal.so.4 libcairo2-dev is already the newest version (1.16.0-5ubuntu2). python3-dev is already the newest version (3.10.6-1~22.04.1). python3-dev set to manually installed. The following packages were automatically installed and are no longer required: libbz2-dev libpkgconf3 libreadline-dev Use 'apt autoremove' to remove them. The following packages will be REMOVED: pkgconf r-base-dev The following NEW packages will be installed: aria2 libaria2-0 libc-ares2 pkg-config 0 upgraded, 4 newly installed, 2 to remove and 41 not upgraded. Need to get 1,561 kB of archives. After this operation, 5,430 kB of additional disk space will be used. (Reading database ... 126281 files and directories currently installed.) Removing r-base-dev (4.5.1-1.2204.0) ... dpkg: pkgconf: dependency problems, but removing anyway as you requested: libsndfile1-dev:amd64 depends on pkg-config; however: Package pkg-config is not installed. Package pkgconf which provides pkg-config is to be removed. libopencv-dev depends on pkg-config; however: Package pkg-config is not installed. 
Package pkgconf which provides pkg-config is to be removed. libmkl-dev:amd64 depends on pkg-config; however: Package pkg-config is not installed. Package pkgconf which provides pkg-config is to be removed. libjack-dev depends on pkg-config; however: Package pkg-config is not installed. Package pkgconf which provides pkg-config is to be removed. libgphoto2-dev:amd64 depends on pkg-config; however: Package pkg-config is not installed. Package pkgconf which provides pkg-config is to be removed. libglib2.0-dev:amd64 depends on pkg-config; however: Package pkg-config is not installed. Package pkgconf which provides pkg-config is to be removed. libfontconfig-dev:amd64 depends on pkg-config; however: Package pkg-config is not installed. Package pkgconf which provides pkg-config is to be removed. Removing pkgconf (1.8.0-1) ... Removing 'diversion of /usr/bin/pkg-config to /usr/bin/pkg-config.real by pkgconf' Removing 'diversion of /usr/share/aclocal/pkg.m4 to /usr/share/aclocal/pkg.real.m4 by pkgconf' Removing 'diversion of /usr/share/man/man1/pkg-config.1.gz to /usr/share/man/man1/pkg-config.real.1.gz by pkgconf' Removing 'diversion of /usr/share/pkg-config-crosswrapper to /usr/share/pkg-config-crosswrapper.real by pkgconf' Selecting previously unselected package pkg-config. (Reading database ... 126257 files and directories currently installed.) Preparing to unpack .../pkg-config_0.29.2-1ubuntu3_amd64.deb ... Unpacking pkg-config (0.29.2-1ubuntu3) ... Selecting previously unselected package libc-ares2:amd64. Preparing to unpack .../libc-ares2_1.18.1-1ubuntu0.22.04.3_amd64.deb ... Unpacking libc-ares2:amd64 (1.18.1-1ubuntu0.22.04.3) ... Selecting previously unselected package libaria2-0:amd64. Preparing to unpack .../libaria2-0_1.36.0-1_amd64.deb ... Unpacking libaria2-0:amd64 (1.36.0-1) ... Selecting previously unselected package aria2. Preparing to unpack .../aria2_1.36.0-1_amd64.deb ... Unpacking aria2 (1.36.0-1) ... Setting up libc-ares2:amd64 (1.18.1-1ubuntu0.22.04.3) ... Setting up pkg-config (0.29.2-1ubuntu3) ... Setting up libaria2-0:amd64 (1.36.0-1) ... Setting up aria2 (1.36.0-1) ... Processing triggers for man-db (2.10.2-1) ... Processing triggers for libc-bin (2.35-0ubuntu3.8) ... /sbin/ldconfig.real: /usr/local/lib/libtcm_debug.so.1 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_0.so.3 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libur_adapter_opencl.so.0 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtbbmalloc.so.2 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtbb.so.12 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtbbbind.so.3 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libur_loader.so.0 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtbbbind_2_5.so.3 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libhwloc.so.15 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libur_adapter_level_zero.so.0 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libumf.so.0 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtbbmalloc_proxy.so.2 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libur_adapter_level_zero_v2.so.0 is not a symbolic link /sbin/ldconfig.real: /usr/local/lib/libtcm.so.1 is not a symbolic link The following packages were automatically installed and are no longer required: libbz2-dev libpkgconf3 libreadline-dev Use 'apt autoremove' to remove them. 
The following NEW packages will be installed: unionfs-fuse 0 upgraded, 1 newly installed, 0 to remove and 41 not upgraded. Need to get 48.7 kB of archives. After this operation, 146 kB of additional disk space will be used. Selecting previously unselected package unionfs-fuse. (Reading database ... 126346 files and directories currently installed.) Preparing to unpack .../unionfs-fuse_1.0-1ubuntu2_amd64.deb ... Unpacking unionfs-fuse (1.0-1ubuntu2) ... Setting up unionfs-fuse (1.0-1ubuntu2) ... Processing triggers for man-db (2.10.2-1) ... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 GB 444.7 kB/s eta 0:00:00 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.1/6.1 MB 89.3 MB/s eta 0:00:00 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 76.9 MB/s eta 0:00:00 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 16.6 MB/s eta 0:00:00 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 78.7 MB/s eta 0:00:00 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.3/63.3 MB 11.3 MB/s eta 0:00:00 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.4/96.4 kB 3.0 MB/s eta 0:00:00 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. torchtune 0.6.1 requires torchdata==0.11.0, but you have torchdata 0.6.0 which is incompatible. ERROR: Could not find a version that satisfies the requirement xformers==0.0.19 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.16rc424, 0.0.16rc425, 0.0.16, 0.0.20, 0.0.21, 0.0.22, 0.0.22.post7, 0.0.23, 0.0.23.post1, 0.0.24, 0.0.25, 0.0.25.post1, 0.0.26.post1, 0.0.27, 0.0.27.post1, 0.0.27.post2, 0.0.28, 0.0.28.post1, 0.0.28.post2, 0.0.28.post3, 0.0.29, 0.0.29.post1, 0.0.29.post2, 0.0.29.post3, 0.0.30, 0.0.31, 0.0.31.post1, 0.0.32.dev1062, 0.0.32.dev1064) ERROR: No matching distribution found for xformers==0.0.19 Failed to open /content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/lora/: No such file or directory. Aborting! /content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui Updating files: 100% (203/203), done. HEAD is now at 22bcc7be attempted fix for infinite loading for settings that some people experience Updating files: 100% (135/135), done. HEAD is now at cf1d67a Update modelcard.md HEAD is now at c202932 GRADIO_TUNNEL online Already up to date. Download Results: gid |stat|avg speed |path/URI ======+====+===========+======================================================= 20d2be|OK | 0B/s|/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/models/CLIP/ViT-L-14.pt Status Legend: (OK):download completed. 
Python 3.11.13 (main, Jun  4 2025, 08:57:29) [GCC 11.4.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing gfpgan
Installing clip
Installing open_clip
Installing xformers
Traceback (most recent call last):
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/launch.py", line 355, in <module>
    prepare_environment()
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/launch.py", line 281, in prepare_environment
    run_pip(f"install {xformers_package}", "xformers")
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/launch.py", line 129, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/launch.py", line 97, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install xformers.
Command: "/usr/bin/python3" -m pip install xformers==0.0.16rc425 --prefer-binary
Error code: 1
stdout: Collecting xformers==0.0.16rc425
  Downloading xformers-0.0.16rc425.tar.gz (7.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 81.4 MB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: torch>=1.12 in /usr/local/lib/python3.11/dist-packages (from xformers==0.0.16rc425) (2.0.0+cu118)
Requirement already satisfied: numpy in /usr/local/lib/python3.11/dist-packages (from xformers==0.0.16rc425) (2.0.2)
Collecting pyre-extensions==0.0.23 (from xformers==0.0.16rc425)
  Downloading pyre_extensions-0.0.23-py3-none-any.whl.metadata (4.0 kB)
Collecting typing-inspect (from pyre-extensions==0.0.23->xformers==0.0.16rc425)
  Downloading typing_inspect-0.9.0-py3-none-any.whl.metadata (1.5 kB)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.11/dist-packages (from pyre-extensions==0.0.23->xformers==0.0.16rc425) (4.14.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from torch>=1.12->xformers==0.0.16rc425) (3.18.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.11/dist-packages (from torch>=1.12->xformers==0.0.16rc425) (1.13.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch>=1.12->xformers==0.0.16rc425) (3.5)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.11/dist-packages (from torch>=1.12->xformers==0.0.16rc425) (3.1.6)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.11/dist-packages (from torch>=1.12->xformers==0.0.16rc425) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.11/dist-packages (from triton==2.0.0->torch>=1.12->xformers==0.0.16rc425) (3.31.6)
Requirement already satisfied: lit in /usr/local/lib/python3.11/dist-packages (from triton==2.0.0->torch>=1.12->xformers==0.0.16rc425) (18.1.8)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.11/dist-packages (from jinja2->torch>=1.12->xformers==0.0.16rc425) (3.0.2)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy->torch>=1.12->xformers==0.0.16rc425) (1.3.0)
Collecting mypy-extensions>=0.3.0 (from typing-inspect->pyre-extensions==0.0.23->xformers==0.0.16rc425)
  Downloading mypy_extensions-1.1.0-py3-none-any.whl.metadata (1.1 kB)
Downloading pyre_extensions-0.0.23-py3-none-any.whl (11 kB)
Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Downloading mypy_extensions-1.1.0-py3-none-any.whl (5.0 kB)
Building wheels for collected packages: xformers
  Building wheel for xformers (setup.py): started
  Building wheel for xformers (setup.py): finished with status 'error'
  Running setup.py clean for xformers
Failed to build xformers
stderr: error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for xformers
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (xformers)
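What the trace shows is that pip only found a source tarball for xformers==0.0.16rc425 (no prebuilt wheel exists for this Python 3.11 / torch 2.0.0+cu118 environment) and the source build then failed; the earlier pin to 0.0.19 fails for the same reason, since the resolver's list of available versions jumps from 0.0.16 straight to 0.0.20. Below is a minimal sketch of one workaround, assuming a release the resolver does list (0.0.20 is the oldest such release) is acceptable for this notebook; the xformers/torch pairing is an assumption to verify, and installing it may pull in a different torch build.

```python
# Minimal sketch (not the notebook's own fix): pin an xformers release that the
# resolver actually reports as available for this Python 3.11 environment,
# instead of one that only ships a source tarball here.
import subprocess
import sys

def pip_install(*packages: str) -> None:
    """Install packages into the current interpreter's environment."""
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--prefer-binary", *packages]
    )

# 0.0.16rc425 and 0.0.19 have no usable wheel here, so the build from source fails.
# 0.0.20 is the oldest release the error message lists as available; whether it is
# compatible with this notebook's torch build is an assumption -- it may upgrade torch.
pip_install("xformers==0.0.20")
```

If this copy of launch.py reads an XFORMERS_PACKAGE environment variable (recent versions of the webui launcher do, but check the script first), exporting that variable before running launch.py achieves the same effect without editing any code.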

404 - Not Found
Could not reach Prometheus's API on http://ceph-node1:9095/api/v1
7/20/25 10:49:50 PM

[root@ceph-node1 ~]# curl http://localhost:9095/api/v1/
404 page not found
[root@ceph-node1 ~]# curl http://localhost:9095/api/v1/
404 page not found
[root@ceph-node1 ~]# curl -v http://localhost:9095/api/v1/labels
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9095 (#0)
> GET /api/v1/labels HTTP/1.1
> Host: localhost:9095
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sun, 20 Jul 2025 14:43:55 GMT
< Content-Length: 824
<
* Connection #0 to host localhost left intact
{"status":"success","data":["__name__","address","alertname","alertstate","bios_date","bios_vendor","bios_version","board_name","board_vendor","board_version","branch","broadcast","cause","ceph_daemon","ceph_version","chassis_asset_tag","chassis_vendor","chassis_version","clocksource","cluster_addr","code","collector","compression_mode","cpu","description","device","device_class","device_ids","devices","domainname","duplex","fstype","goversion","hostname","id","id_like","instance","ip","job","machine","major","method","minor","mode","mountpoint","name","nodename","objectstore","oid","operstate","pool_id","poolid","power_supply","pretty_name","product_name","product_version","public_addr","quantile","queue","rank","release","revision","severity","sysname","system_vendor","time_zone","type","version","version_id"]}
[root@ceph-node1 ~]# curl -v http://localhost:9095/api/v1/query?query=up
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9095 (#0)
> GET /api/v1/query?query=up HTTP/1.1
> Host: localhost:9095
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sun, 20 Jul 2025 14:44:21 GMT
< Content-Length: 656
<
* Connection #0 to host localhost left intact
{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up","instance":"192.168.1.120:9283","job":"ceph"},"value":[1753022661.748,"1"]},{"metric":{"__name__":"up","instance":"192.168.1.121:9283","job":"ceph"},"value":[1753022661.748,"1"]},{"metric":{"__name__":"up","instance":"192.168.1.122:9283","job":"ceph"},"value":[1753022661.748,"1"]},{"metric":{"__name__":"up","instance":"ceph-node1","job":"node"},"value":[1753022661.748,"1"]},{"metric":{"__name__":"up","instance":"ceph-node2","job":"node"},"value":[1753022661.748,"1"]},{"metric":{"__name__":"up","instance":"ceph-node3","job":"node"},"value":[1753022661.748,"1"]}]}}
[root@ceph-node1 ~]# curl http://ceph-node1:9095/api/v1/labels
{"status":"success","data":["__name__","address","alertname","alertstate","bios_date","bios_vendor","bios_version","board_name","board_vendor","board_version","branch","broadcast","cause","ceph_daemon","ceph_version","chassis_asset_tag","chassis_vendor","chassis_version","clocksource","cluster_addr","code","collector","compression_mode","cpu","description","device","device_class","device_ids","devices","domainname","duplex","fstype","goversion","hostname","id","id_like","instance","ip","job","machine","major","method","minor","mode","mountpoint","name","nodename","objectstore","oid","operstate","pool_id","poolid","power_supply","pretty_name","product_name","product_version","public_addr","quantile","queue","rank","release","revision","severity","sysname","system_vendor","time_zone","type","version","version_id"]}
[root@ceph-node1 ~]# curl http://ceph-node1:9095/api/v1/query?query=up
{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up","instance":"192.168.1.120:9283","job":"ceph"},"value":[1753022801.488,"1"]},{"metric":{"__name__":"up","instance":"192.168.1.121:9283","job":"ceph"},"value":[1753022801.488,"1"]},{"metric":{"__name__":"up","instance":"192.168.1.122:9283","job":"ceph"},"value":[1753022801.488,"1"]},{"metric":{"__name__":"up","instance":"ceph-node1","job":"node"},"value":[1753022801.488,"1"]},{"metric":{"__name__":"up","instance":"ceph-node2","job":"node"},"value":[1753022801.488,"1"]},{"metric":{"__name__":"up","instance":"ceph-node3","job":"node"},"value":[1753022801.488,"1"]}]}}
[root@ceph-node1 ~]# podman logs 4b86647c61ac
ts=2025-07-20T12:20:19.199Z caller=compact.go:519 level=info component=tsdb msg="write block" mint=1753003219021 maxt=1753005600000 ulid=01K0KWE92SF5BJ1R7R93QA4B3A duration=38.265327ms
ts=2025-07-20T12:20:19.204Z caller=head.go:827 level=info component=tsdb msg="Head GC completed" duration=4.361588ms
ts=2025-07-20T13:00:02.770Z caller=compact.go:519 level=info component=tsdb msg="write block" mint=1753005601290 maxt=1753012800000 ulid=01K0KYQ0QQ6NSPJ01V9NC4H2G6 duration=90.476581ms
ts=2025-07-20T13:00:02.774Z caller=head.go:827 level=info component=tsdb msg="Head GC completed" duration=2.258296ms
[root@ceph-node1 ~]# podman exec -it 4b86647c61ac sh
/prometheus $ ss -tulnp | grep 9095
sh: ss: not found
/prometheus $ cat /etc/prometheus/prometheus.yml
# This file is generated by cephadm.
global:
  scrape_interval: 10s
  evaluation_interval: 10s

rule_files:
  - /etc/prometheus/alerting/*

alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets: ['ceph-node1:9093']

scrape_configs:
  - job_name: 'ceph'
    honor_labels: true
    static_configs:
      - targets:
          - '192.168.1.120:9283'
          - '192.168.1.121:9283'
          - '192.168.1.122:9283'

  - job_name: 'node'
    static_configs:
      - targets: ['ceph-node1:9100']
        labels:
          instance: 'ceph-node1'
      - targets: ['192.168.1.121:9100']
        labels:
          instance: 'ceph-node2'
      - targets: ['192.168.1.122:9100']
        labels:
          instance: 'ceph-node3'
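The session above actually narrows the problem down: the bare /api/v1/ path returning "404 page not found" is normal (Prometheus only serves concrete endpoints such as /api/v1/labels and /api/v1/query, and both answer with status "success" here), so Prometheus itself is healthy and the dashboard banner points at a reachability or configuration issue on the mgr side. Below is a small sketch for re-running the same checks from whichever host runs the active mgr/dashboard; it assumes the `requests` package is installed, and the host and port are taken from the log.

```python
# Re-run the same probes the curl session did, e.g. from the host running the
# Ceph mgr/dashboard. Host and port come from the log above; `requests` is assumed.
import requests

BASE = "http://ceph-node1:9095"

# The bare /api/v1/ path has no handler in Prometheus, so a 404 here is expected.
print("GET /api/v1/ ->", requests.get(f"{BASE}/api/v1/", timeout=5).status_code)

# Real API endpoints should return 200 with {"status": "success", ...}.
labels = requests.get(f"{BASE}/api/v1/labels", timeout=5).json()
print("labels:", labels["status"], len(labels["data"]), "label names")

up = requests.get(f"{BASE}/api/v1/query", params={"query": "up"}, timeout=5).json()
for series in up["data"]["result"]:
    print(series["metric"]["instance"], "up =", series["value"][1])
```

If these checks also pass from the mgr host, the next thing to compare is the URL the dashboard is configured with, e.g. via `ceph dashboard get-prometheus-api-host` (and `set-prometheus-api-host` to correct it); check the dashboard documentation for your Ceph release for the exact command names.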

Latest recommendations


PLC-controlled variable-frequency drives: forward/reverse and speed control with Mitsubishi and Inovance PLCs over a 485 communication board

Overview: this article shows how Mitsubishi and Inovance PLCs, working through a 485 communication board, can run a variable-frequency drive forward, in reverse, and at an adjustable speed. It covers hardware configuration, software programming, the concrete control logic, and on-machine testing, describing the procedure and caveats for each step, including which key registers to set and what they mean. The program is commented in detail, which makes it easier to understand and maintain, and a final on-machine test verifies that the system is stable and reliable. Audience: engineers and technicians in industrial automation, especially those familiar with PLC programming and VFD control. Use cases and goals: industrial applications that need precise motor control, such as production lines and machinery, with the aim of making the control system more flexible and efficient while keeping it dependable. Notes: the article pairs the theory with hands-on experience, helping readers apply the technique in practice. A protocol-level sketch of the same idea in Python follows below.
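The article itself implements the control in Mitsubishi/Inovance ladder logic, but the underlying idea, writing a frequency setpoint and a control word to drive registers over RS-485 (Modbus RTU), can be illustrated in Python with pymodbus. This is only a sketch of the protocol-level idea, not the article's program: the serial port, station number, register addresses, and command values are hypothetical placeholders, and the real ones come from the specific drive's manual.

```python
# Sketch of RS-485 / Modbus-RTU register writes to a VFD using pymodbus 3.x
# (older pymodbus imports from pymodbus.client.sync and uses unit= instead of slave=).
# All addresses and values below are hypothetical placeholders.
from pymodbus.client import ModbusSerialClient

client = ModbusSerialClient(
    port="/dev/ttyUSB0",   # assumed RS-485 adapter; use the actual serial port
    baudrate=9600,
    parity="E",
    stopbits=1,
    bytesize=8,
    timeout=1,
)
client.connect()

SLAVE_ID = 1           # hypothetical drive station number
FREQ_REG = 0x2001      # hypothetical frequency-setpoint register
CONTROL_REG = 0x2000   # hypothetical control-word register

# Set the target frequency first, then issue the run command.
client.write_register(FREQ_REG, 2500, slave=SLAVE_ID)       # e.g. 25.00 Hz if scaled by 0.01 Hz
client.write_register(CONTROL_REG, 0x0001, slave=SLAVE_ID)  # hypothetical "run forward" value
# A different control-word value would select reverse or stop -- check the drive manual.

client.close()
```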

Web front-end development: a deep dive into CSS and HTML design patterns

Pro CSS and HTML Design Patterns is a book on front-end design patterns that digs into advanced uses of CSS (Cascading Style Sheets) and HTML (HyperText Markup Language). As part of the Pro series, it aims to give professional web developers practical patterns and guidance for building efficient, attractive, and maintainable sites and applications.

Before looking at the book, it helps to recall what CSS and HTML are and why they matter. HTML is the standard markup language for web pages and web applications: a set of tags defines a page's structure and content, such as paragraphs, headings, links, and images. HTML5, the current version, adds expressive power and new capabilities such as built-in video and audio, drawing APIs, and offline storage. CSS is the stylesheet language that describes how an HTML document is presented (layout, colour, fonts, and so on); it separates presentation from structure, making page design more modular and maintainable. CSS has likewise gone through several revisions, adding Flexbox, Grid layout, transitions, animations, and preprocessor ecosystems such as Sass and Less.

The topics the book likely covers include:
1. CSS fundamentals and selectors: the box model, margins, padding, borders, backgrounds, and positioning, plus advanced selector usage such as attribute selectors, pseudo-classes, pseudo-elements, and selector combinations.
2. CSS layout techniques: traditional float and positioning layouts as well as the newer Flexbox and CSS Grid models, along with responsive design via media queries and viewport units.
3. Advanced CSS techniques: animation and transition effects and how to optimise for performance and compatibility, e.g. CSS3 animations, keyframes, transforms, filters, and blend modes.
4. HTML5 features: the new semantic elements such as `<article>`, `<section>`, and `<nav>`, how to use them to build more standard, semantic page structures, and new form capabilities such as validation and new input types.
5. Accessibility: how HTML and CSS can improve accessibility, for example by using ARIA attributes to improve the screen-reader experience.
6. Front-end performance: optimising CSS and HTML for load speed and runtime efficiency, covering minification and bundling, avoiding repaints and reflows, and best practices for web fonts.
7. JavaScript working with CSS and HTML: dynamically changing styles, manipulating DOM elements, and listening and responding to user interaction.
8. Web frameworks and preprocessors: how popular tools such as Bootstrap, Foundation, Sass, and Less simplify and speed up development.
9. Testing and maintenance: how to test pages and continuously optimise and maintain CSS and HTML, e.g. with assertion tests, automated tests, and performance-analysis tools.

Finally, the "压缩包子文件" wording in the file list is most likely a slip for "压缩包" (an archive): it refers to the compressed file (ZIP or RAR) containing the book, and "Pro CSS and HTML Design Patterns.pdf" is the PDF edition itself. Taken together, these are the book's core topics; studying them helps front-end developers adopt more efficient and elegant design patterns and solve day-to-day problems more effectively.

Zotero 7 data syncing: installing and configuring the Attanger plugin to build a powerful reference-management setup

1. An introduction to Zotero 7 data syncing: in the digital age, managing research data is becoming ever more important, and Zot…

The basic theory of convolutional neural networks, in about 200 words

<think>We are answering the user's question about the basic theory of convolutional neural networks (CNNs). From the cited references we can outline a CNN's basic structure and how it works. Reference [1] notes that a CNN is a feed-forward network in which information flows one way, from input to output. Reference [2] points out that CNNs are widely used in image processing and similar fields, recognising patterns through a layered structure and learned parameters. Reference [3] explains that a CNN contains convolutional layers whose kernels extract local features, reducing the number of parameters and improving efficiency. Reference [4] covers training: forward propagation, error computation, and backpropagation (gradient descent). We will therefore summarise a CNN along these lines: 1. the basic structure (convolutional, pooling, and fully connected layers); 2. how the convolution operation works (local receptive fields, weight sharing); 3. the training process (forward pass, loss function, back…
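The structure described above maps directly onto a few lines of PyTorch. Here is a minimal sketch (not taken from any of the cited references) showing convolution and pooling layers for local feature extraction, a fully connected head, and one training step of forward pass, loss, and backpropagation:

```python
# Minimal CNN sketch: convolution -> pooling -> fully connected head,
# trained by forward pass, loss computation, and backpropagation.
# Shapes assume 1x28x28 inputs (e.g. MNIST-like images).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local receptive fields, shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step: forward pass, loss, backprop, parameter update.
images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```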

An introduction to a lightweight, practical Java library class lookup tool

The title "java2库类查询" and the description indicate that this is an application dedicated to looking up Java library classes. It is meant to help developers quickly find and reference every API class shipped with the standard Java SDK, saving the time otherwise spent hunting for class definitions and usage in the official documentation or a search engine. It is described as lightweight and convenient: it uses relatively few system resources while offering an intuitive interface that keeps lookups quick and simple.

Several points follow from the description:
1. Java SDK: the Java software development kit is part of the Java platform and provides the packages and libraries used to build Java applications; these packages, usually called APIs, are the programming surface developers code against.
2. Library class lookup: a fast way to find a specific class together with its methods, fields, and usage examples is essential; a good lookup tool raises productivity and reduces the interruptions caused by documentation searches.
3. Lightweight: low resource requirements matter on constrained systems, such as older machines or embedded devices, or when a developer wants to keep the development environment's footprint small.
4. Convenient: convenience usually comes down to UI design; an intuitive interface lets users get started quickly with few obstacles.
5. Full API coverage: a good Java class lookup tool should cover the entire standard API, including the core packages such as java.lang, java.util, and java.io, as well as the other standard Java SE packages.

The tags "java 库 查询 类" tie the tool to the management and lookup of Java's core libraries: Java, the language behind enterprise systems, Android apps, web back ends, and many other platforms; libraries, the pre-packaged classes and interfaces that applications reuse; and querying, the act of locating a specific API entry by class name, method signature, or another identifier.

Finally, the archive contains two files, "java.dit" and "Java.exe". "Java.exe" is most likely the program's executable, while "java.dit" may be a data file holding the class index; "dit" is not a common extension, so it could be a tool-specific format or simply a typo. In short, "java2库类查询" is a practical utility for Java developers: a lightweight, easy-to-use way to look up every class and API in the standard library, which streamlines development and cuts the time spent consulting Java documentation, especially for developers who refer to the API frequently.

[The ultimate Zotero 7 guide]: a must for newcomers — a complete walkthrough of the Attanger plugin and data-sync tricks

1. An introduction to Zotero 7 and the Attanger plugin: in today's academic research and knowledge management, an efficient reference manager is essential. Zotero 7, a popular reference-management tool, is favoured by professionals for its powerful features and user-friendly interface, and the Attanger plugin adds further customisation and advanced features that greatly enhance…

MATLAB shortcut for commenting out a whole block

<think>First, the user's question is "MATLAB整段注释快捷键", i.e. the keyboard shortcut for commenting out a whole block in MATLAB. I need to answer in Chinese. Per the rules: the answer must be in Chinese; after the answer, "§§相关问题§§" must be printed, followed by 3-5 related questions; "§§相关问题§§" must appear complete on its own first line, not split across lines; the related questions must be based on the user's question and my answer. Now, recalling MATLAB's shortcuts: in MATLAB, block comments are normally applied with Ctrl + R to comment the selected lines and Ctrl + T to uncomment them. The user asked about commenting a whole block, so these are the shortcuts to confirm. In MATLAB: comment the selected lines: Ctrl + R -

Eclipse Jad decompiler plugin: making .class files easier to inspect

The decompiler plugin for Eclipse is a tool designed for decompiling Java inside the Eclipse IDE. With it, developers can read source-like code for compiled .class files without having access to the original sources, which matters for development, maintenance, and learning.

Eclipse is a cross-platform open-source IDE used mainly for Java but also, through plugins, for languages such as C, C++, and PHP. Plugins, whether community or official, extend its features; jadclipse is a community plugin that drives the third-party command-line tool jad.exe to do the actual decompilation.

jad.exe decompiles Java bytecode from the command line, turning a compiled .class file back into something close to the original Java source. It is popular because it is fast and produces fairly readable code. As a standalone command-line tool it is flexible, but users who are not comfortable with the command line gain a great deal from having it integrated into Eclipse.

With jadclipse, any .class file can be opened in Eclipse and the decompiled result shown in the editor, where it can be read, debugged, and studied. This helps developers quickly understand how third-party libraries work, and allows emergency fixes when the source of a .class file has been lost.

Installing jadclipse in Eclipse is straightforward:
1. Download and unpack the plugin archive.
2. In Eclipse, open the "Help" menu and choose "Install New Software".
3. Click "Add" and enter the plugin update URL (normally the jadclipse update site).
4. Select the plugin (typically named "JadClipse") and install it.
5. Restart Eclipse; the plugin then starts working.

Once installed, double-clicking a .class file, or right-clicking it and choosing "Open With Jadclipse", shows the corresponding Java source. If decompilation is inaccurate or fails, the path to jad.exe can be configured directly in Eclipse, or jadclipse's advanced settings tuned to improve the result.

Note that decompilation can raise copyright issues. In most jurisdictions decompiling code is legal only for purposes such as learning, research, security testing, or interoperability; using it to obtain trade secrets or make illegal copies may violate the law. Overall, the plugin greatly simplifies Java decompilation in Eclipse and raises productivity, letting developers maintain and study Java programs even without the sources, provided it is used within legal and ethical bounds.

[Advanced Python plotting]: master advanced control of matplotlib axis tick intervals and make your charts stand out

Abstract: this article systematically explores techniques for customising and optimising axis tick intervals in the matplotlib library. It first outlines the basic concept of tick intervals in matplotlib and why they matter in a chart, then…
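The abstract is cut off above, but the mechanism it refers to is matplotlib's ticker module. A short sketch of explicit tick-interval control follows; the interval values are arbitrary examples.

```python
# Explicit tick-interval control with matplotlib's ticker locators.
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import MaxNLocator, MultipleLocator

x = np.linspace(0, 10, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x))

ax.xaxis.set_major_locator(MultipleLocator(2.0))   # a major tick every 2 units
ax.xaxis.set_minor_locator(MultipleLocator(0.5))   # minor ticks every 0.5 units
ax.yaxis.set_major_locator(MaxNLocator(5))         # at most 5 "nice" major ticks on y

ax.grid(which="major", alpha=0.6)
ax.grid(which="minor", alpha=0.2)
plt.show()
```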

What does dropping frames (降帧) mean?

<think>First, the user's question is: "I'd like to understand what 降帧 means — what is it?" I need to answer in Chinese, with a clear structure that helps the user work through it step by step. The key term is 降帧. In the context of video processing, animation, or games it usually means the frame rate dropping, or frames being dropped (frame drops). The references provided cover the related concepts. Reviewing them: reference [1] discusses frame-rate and dropped-frame analysis, noting that dropped frames lower the frame rate and cause stutter; for example, at a 60 Hz refresh rate each frame must be finished within 16.7 ms or it may be dropped. Reference [2] mentions frame drops in relation to the CPU and GPU: if either cannot finish its work in time, frames cannot be rendered on schedule. Reference [3]:
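The 16.7 ms figure is simply the frame budget at 60 Hz (1000 ms / 60 ≈ 16.7 ms per frame). Here is a self-contained Python sketch (not taken from the cited references) that times simulated per-frame work against that budget and counts the frames that exceed it:

```python
# Illustration of the 60 Hz frame budget: 1000 ms / 60 ≈ 16.7 ms per frame.
# Frames whose work takes longer than the budget are counted as "dropped".
# The sleep below is a stand-in for real rendering or processing work.
import random
import time

REFRESH_HZ = 60
FRAME_BUDGET = 1.0 / REFRESH_HZ   # ≈ 0.0167 s

dropped = 0
for frame in range(120):
    start = time.perf_counter()
    time.sleep(random.uniform(0.005, 0.025))   # fake per-frame work, sometimes too slow
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET:
        dropped += 1

print(f"budget per frame: {FRAME_BUDGET * 1000:.1f} ms, dropped {dropped}/120 frames")
```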