Index: Tutorial Series Index
Previous: FFmpeg5 Tutorial 10.20: Adding Filters to Video
This article shows how to extract the compressed video data (H.264) and audio data (MP3) from a video file (MP4).
In 10.04: Decoding a Video Stream we covered the basic flow of decoding, but decoded only the video stream. Likewise, FFmpeg5 Tutorial 10.14: Decoding Audio to PCM covered decoding only the audio stream. A video file, however, contains at least two streams (audio and video); in those examples we deliberately discarded the streams we did not need.
Pick any video and inspect its stream information:
```
$ ffprobe Sample.mkv
...
[flv @ 0x55894ec85440] Missing AMF_END_OF_OBJECT in AMF_DATA_TYPE_MIXEDARRAY, found 0
Input Metadata:
    creator         : www.qiyi.com
    metadatacreator : Yet Another Metadata Injector for FLV - Version 1.2
    hasKeyframes    : true
    hasVideo        : true
    hasAudio        : true
    hasMetadata     : true
    canSeekToEnd    : false
    datasize        : 40600924
    videosize       : 36538352
    audiosize       : 3942540
    lasttimestamp   : 645
    lastkeyframetimestamp: 637
    lastkeyframelocation: 40109260
  Duration: 00:10:44.87, start: 0.000000, bitrate: 503 kb/s
    Stream #0:0: Video: h264 ...
    Stream #0:1: Audio: aac ...
```
As you can see, there are two streams: Stream #0:0 is the H.264 video stream and Stream #0:1 is the AAC audio stream. Next we will extract them and save each to its own file.
The flow is:

In practice, we read the two streams from the input, create two output streams, rescale the timestamps for each packet, and write the data out.
First, open the input:
```cpp
ret = avformat_open_input(&ifmtCtx, inFilename, 0, 0);
if (ret < 0) {
    printf("can't open input file\n");
    break;
}

ret = avformat_find_stream_info(ifmtCtx, 0);
if (ret < 0) {
    printf("can't retrieve input stream information\n");
    break;
}
```
Create the output context:
```cpp
avformat_alloc_output_context2(&ofmtCtxVideo, NULL, NULL, outFilenameVideo);
if (!ofmtCtxVideo) {
    printf("can't create video output context\n");
    break;
}
```
Then, for each output context, find the best stream of that type in the input and create a matching output stream:
```cpp
int open_codec_context(int *streamIndex, AVFormatContext *&ofmtCtx,
                       AVFormatContext *ifmtCtx, AVMediaType type)
{
    AVStream *outStream = NULL, *inStream = NULL;
    int ret = -1, index = -1;

    index = av_find_best_stream(ifmtCtx, type, -1, -1, NULL, 0);
    if (index < 0) {
        printf("can't find %s stream in input file\n",
               av_get_media_type_string(type));
        return ret;
    }

    inStream = ifmtCtx->streams[index];

    outStream = avformat_new_stream(ofmtCtx, NULL);
    if (!outStream) {
        printf("failed to allocate output stream\n");
        return ret;
    }

    ret = avcodec_parameters_copy(outStream->codecpar, inStream->codecpar);
    if (ret < 0) {
        printf("failed to copy codec parameters\n");
        return ret;
    }

    outStream->codecpar->codec_tag = 0;
    *streamIndex = index;

    return 0;
}
```
Open the output file:
```cpp
if (!(ofmtCtxVideo->oformat->flags & AVFMT_NOFILE)) {
    if (avio_open(&ofmtCtxVideo->pb, outFilenameVideo, AVIO_FLAG_WRITE) < 0) {
        printf("can't open output file: %s\n", outFilenameVideo);
        break;
    }
}
```
Write the file header:
```cpp
if (avformat_write_header(ofmtCtxVideo, NULL) < 0) {
    printf("Error occurred when opening video output file\n");
    break;
}
```
Then comes the most important part: converting the timestamps.
```cpp
while (1) {
    AVFormatContext *ofmtCtx;
    AVStream *inStream, *outStream;

    if (av_read_frame(ifmtCtx, &packet) < 0) {
        break;
    }

    inStream = ifmtCtx->streams[packet.stream_index];

    if (packet.stream_index == videoIndex) {
        outStream = ofmtCtxVideo->streams[0];
        ofmtCtx = ofmtCtxVideo;
    } else if (packet.stream_index == audioIndex) {
        outStream = ofmtCtxAudio->streams[0];
        ofmtCtx = ofmtCtxAudio;
    } else {
        av_packet_unref(&packet); /* not a stream we keep; release it */
        continue;
    }

    /* rescale timestamps from the input stream's time base to the output's */
    packet.pts = av_rescale_q_rnd(packet.pts, inStream->time_base, outStream->time_base,
                                  (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    packet.dts = av_rescale_q_rnd(packet.dts, inStream->time_base, outStream->time_base,
                                  (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    packet.duration = av_rescale_q(packet.duration, inStream->time_base, outStream->time_base);
    packet.pos = -1;
    packet.stream_index = 0; /* each output file has exactly one stream */

    if (av_interleaved_write_frame(ofmtCtx, &packet) < 0) {
        printf("Error muxing packet\n");
        break;
    }

    av_packet_unref(&packet);
}
```
The result:

You can play the output files from the command line to check the result.
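For example, with ffplay (the output file names below are placeholders; use whatever names the program was given):

```shell
# Play the raw H.264 elementary stream
ffplay output.h264

# Play the extracted MP3 audio
ffplay output.mp3
```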
Audio result:

Video result:

The complete code is in 22.video_demuxer_mp42h264mp3 in the ffmpeg_beginner repository.
Next: FFmpeg5 Tutorial 10.22: Demuxing Audio/Video to PCM and YUV420P