ffmpeg libx264 encoding: fixing the "use an encoding preset (e.g. -vpre medium)" error

This post explains how to resolve an error hit during H.264 encoding by adjusting a few parameters in AVCodecContext before opening the codec, and lists the libx264 default values involved.


I have been working on H.264 encoding recently and kept hitting this error:

broken ffmpeg default settings detected
use an encoding preset (e.g. -vpre medium)
preset usage: -vpre <speed> -vpre <profile>
speed presets are listed in x264 --help
profile is optional; x264 defaults to high

After several days of searching online and digging through the libx264 source, I found it was a parameter-configuration problem. The check lives in libx264's encoder/encoder.c:

    /* Detect default ffmpeg settings and terminate with an error. */
    if( b_open )
    {    
        int score = 0; 
        score += h->param.analyse.i_me_range == 0;
        score += h->param.rc.i_qp_step == 3;
        score += h->param.i_keyint_max == 12;
        score += h->param.rc.i_qp_min == 2;
        score += h->param.rc.i_qp_max == 31;
        score += h->param.rc.f_qcompress == 0.5; 
        score += fabs(h->param.rc.f_ip_factor - 1.25) < 0.01;
        score += fabs(h->param.rc.f_pb_factor - 1.25) < 0.01;
        score += h->param.analyse.inter == 0 && h->param.analyse.i_subpel_refine == 8;
        if( score >= 5 )
        {    
            x264_log( h, X264_LOG_ERROR, "broken ffmpeg default settings detected\n" );
            x264_log( h, X264_LOG_ERROR, "use an encoding preset (e.g. -vpre medium)\n" );
            x264_log( h, X264_LOG_ERROR, "preset usage: -vpre <speed> -vpre <profile>\n" );
            x264_log( h, X264_LOG_ERROR, "speed presets are listed in x264 --help\n" );
            x264_log( h, X264_LOG_ERROR, "profile is optional; x264 defaults to high\n" );
            return -1;
        }    
    } 
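To see why the check fires, the scoring heuristic above can be sketched in Python. The values below are the historical ffmpeg/libavcodec defaults that used to reach libx264 when no preset was applied (assumed here for illustration; the dict keys mirror the x264 struct fields and are not a real API):

```python
# Sketch of x264's "broken ffmpeg default settings" heuristic
# (mirrors the score += ... checks in encoder/encoder.c).
old_ffmpeg_defaults = {
    "me_range": 0,      # ffmpeg left the motion-estimation range unset
    "qp_step": 3,       # from max_qdiff
    "keyint_max": 12,   # from gop_size
    "qp_min": 2,        # from qmin
    "qp_max": 31,       # from qmax
    "qcompress": 0.5,
    "ip_factor": 1.25,
    "pb_factor": 1.25,
}

def broken_defaults_score(p):
    # Each matching "suspicious default" adds one point, just like the C code.
    score = 0
    score += p["me_range"] == 0
    score += p["qp_step"] == 3
    score += p["keyint_max"] == 12
    score += p["qp_min"] == 2
    score += p["qp_max"] == 31
    score += p["qcompress"] == 0.5
    score += abs(p["ip_factor"] - 1.25) < 0.01
    score += abs(p["pb_factor"] - 1.25) < 0.01
    return score

score = broken_defaults_score(old_ffmpeg_defaults)
print(score, "-> error" if score >= 5 else "-> ok")  # → 8 -> error
```

With the old defaults all eight checks match, so the score is well past the threshold of 5 and x264 refuses to encode; changing even a few of these fields (as shown next) drops the score below 5.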

The fix is to set a few fields on the AVCodecContext before calling avcodec_open2():

    ctx->me_range = 16;     /* motion estimation search range */
    ctx->max_qdiff = 4;     /* maps to x264's i_qp_step */
    ctx->qmin = 0;          /* minimum quantizer */
    ctx->qmax = 69;         /* maximum quantizer */
    ctx->qcompress = 0.6;   /* QP curve compression factor */

These values match libx264's own defaults.

For details on x264 encoding parameters, see http://mewiki.project357.com/wiki/X264_Settings

 

Reposted from: https://www.cnblogs.com/hojor/archive/2013/04/23/3038009.html

### Implementation

To capture video from a camera in real time, run YOLO object detection on each frame, and push the results to an RTMP server, the program can be built as follows: OpenCV captures frames, the YOLO model performs detection, and FFmpeg encodes the annotated images and streams them.

#### Install dependencies

Make sure the required Python packages are installed:

```bash
pip install opencv-python-headless numpy ultralytics
```

FFmpeg itself must be downloaded separately and added to the PATH so it can be invoked from the command line.

#### Code

The complete Python script below integrates all of the components:

```python
import cv2
from ultralytics import YOLO
import subprocess as sp

def init_camera(camera_id=0):
    cap = cv2.VideoCapture(camera_id)
    if not cap.isOpened():
        raise IOError(f"Cannot open camera {camera_id}")
    return cap

def get_frame_shape(cap):
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25  # some cameras report 0; fall back to 25
    return width, height, fps

def initialize_ffmpeg_pipe(rtmp_url, frame_width, frame_height, fps):
    command = ['ffmpeg',
               '-y',                    # overwrite output if it exists
               '-f', 'rawvideo',
               '-vcodec', 'rawvideo',
               '-pix_fmt', 'bgr24',
               '-s', f"{frame_width}x{frame_height}",
               '-r', str(fps),
               '-i', '-',               # read raw frames from stdin
               '-c:v', 'libx264',
               '-pix_fmt', 'yuv420p',
               '-preset', 'ultrafast',
               '-f', 'flv',
               rtmp_url]
    return sp.Popen(command, stdin=sp.PIPE)

if __name__ == "__main__":
    model = YOLO('yolov8n.pt')   # load a pre-trained YOLOv8n model
    cam = init_camera()
    w, h, fps = get_frame_shape(cam)
    ffmpeg_process = initialize_ffmpeg_pipe(
        "rtmp://your_rtmp_server/live/stream_key", w, h, fps)

    try:
        while True:
            ret, frame = cam.read()
            if not ret:
                break
            annotated = model(frame)[0].plot()               # run detection, draw boxes
            ffmpeg_process.stdin.write(annotated.tobytes())  # feed the frame to FFmpeg
    finally:
        cam.release()
        ffmpeg_process.stdin.close()
        ffmpeg_process.terminate()
```

This covers the whole pipeline, from initializing the camera and loading the YOLO model to starting the FFmpeg pipe. Inside the loop, each new frame is read, run through the model, and the annotated result is written to FFmpeg's stdin for encoding and transmission.
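One common failure mode in this kind of rawvideo pipe is a size mismatch: FFmpeg reads exactly width × height × 3 bytes per bgr24 frame, and writing a frame of any other size silently desynchronizes the stream. A small helper like the one below (not part of the original script; a hedged sketch) can validate each frame before it is written:

```python
import numpy as np

def validate_frame(frame, width, height):
    """Ensure a frame matches what the rawvideo/bgr24 pipe expects:
    height x width x 3, uint8; returns the contiguous byte payload."""
    expected = (height, width, 3)
    if frame.shape != expected:
        raise ValueError(f"frame shape {frame.shape} != {expected}")
    if frame.dtype != np.uint8:
        raise TypeError(f"frame dtype {frame.dtype} != uint8")
    return np.ascontiguousarray(frame).tobytes()

# A synthetic 4x2 BGR frame: each pixel is 3 bytes, so the pipe
# expects exactly width * height * 3 = 24 bytes.
frame = np.zeros((2, 4, 3), dtype=np.uint8)
payload = validate_frame(frame, width=4, height=2)
print(len(payload))  # → 24
```

In the streaming loop, `ffmpeg_process.stdin.write(validate_frame(annotated, w, h))` would then fail loudly on a bad frame instead of corrupting the stream.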