【FFmpeg】The av_read_frame function

Contents

  • 1.av_read_frame
    • 1.2 Reading a frame from the packet buffer (avpriv_packet_list_get)
    • 1.3 Reading a frame from the stream (read_frame_internal)
      • 1.3.1 Reading a packet (ff_read_packet)
      • 1.3.2 Parsing the packet (parse_packet)
        • 1.3.2.1 Parsing (av_parser_parse2)
    • 1.4 Appending a frame to the packet buffer (avpriv_packet_list_put)

Reference:
ffmpeg source code analysis: av_read_frame()

Related FFmpeg notes:

Sample projects:
【FFmpeg】Using the ffmpeg libraries for H.264 software encoding
【FFmpeg】Using the ffmpeg libraries for H.264 software decoding
【FFmpeg】Using the ffmpeg libraries for RTMP publishing and playback
【FFmpeg】Using the ffmpeg libraries with SDL2 to render decoded output

Workflow analysis:
【FFmpeg】A brief analysis of the main functions in the encoding path
【FFmpeg】A brief analysis of the main functions in the decoding path

Structure analysis:
【FFmpeg】The AVCodec structure
【FFmpeg】The AVCodecContext structure
【FFmpeg】The AVStream structure
【FFmpeg】The AVFormatContext structure
【FFmpeg】The AVIOContext structure
【FFmpeg】The AVPacket structure

Function analysis:
【FFmpeg】The avformat_open_input function
【FFmpeg】The avformat_find_stream_info function
【FFmpeg】The avformat_alloc_output_context2 function
【FFmpeg】The avio_open2 function
【FFmpeg】The avformat_write_header function

The call relationships inside av_read_frame are shown below:
(figure: call graph of av_read_frame)

1.av_read_frame

av_read_frame reads the next frame from the input stream. Its implementation is as follows:

/**
 * Return the next frame of a stream.
 * This function returns what is stored in the file, and does not validate
 * that what is there are valid frames for the decoder. It will split what is
 * stored in the file into frames and return one for each call. It will not
 * omit invalid data between valid frames so as to give the decoder the maximum
 * information possible for decoding.
 *
 * On success, the returned packet is reference-counted (pkt->buf is set) and
 * valid indefinitely. The packet must be freed with av_packet_unref() when
 * it is no longer needed. For video, the packet contains exactly one frame.
 * For audio, it contains an integer number of frames if each frame has
 * a known fixed size (e.g. PCM or ADPCM data). If the audio frames have
 * a variable size (e.g. MPEG audio), then it contains one frame.
 *
 * pkt->pts, pkt->dts and pkt->duration are always set to correct
 * values in AVStream.time_base units (and guessed if the format cannot
 * provide them). pkt->pts can be AV_NOPTS_VALUE if the video format
 * has B-frames, so it is better to rely on pkt->dts if you do not
 * decompress the payload.
 *
 * @return 0 if OK, < 0 on error or end of file. On error, pkt will be blank
 *         (as if it came from av_packet_alloc()).
 *
 * @note pkt will be initialized, so it may be uninitialized, but it must not
 *       contain data that needs to be freed.
 */
int av_read_frame(AVFormatContext *s, AVPacket *pkt)
{
    FFFormatContext *const si = ffformatcontext(s);
    // AVFMT_FLAG_GENPTS: generate missing pts values, even if it requires parsing future frames
    const int genpts = s->flags & AVFMT_FLAG_GENPTS;
    int eof = 0;
    int ret;
    AVStream *st;

    // 1. Packets may already have been buffered but not yet decoded, for example
    //    while probing codec parameters in MPEG streams.
    //    genpts indicates whether missing pts values must be generated; normally it is
    //    not set, so by default the branch below is taken.
    if (!genpts) {
        ret = si->packet_buffer.head
              ? avpriv_packet_list_get(&si->packet_buffer, pkt) // take one packet from the packet buffer
              : read_frame_internal(s, pkt);                    // read one frame from the stream into pkt
        if (ret < 0)
            return ret;
        goto return_packet;
    }

    // Missing pts values have to be generated
    for (;;) {
        PacketListEntry *pktl = si->packet_buffer.head;

        if (pktl) {
            AVPacket *next_pkt = &pktl->pkt;

            if (next_pkt->dts != AV_NOPTS_VALUE) {
                // pts_wrap_bits handles timestamp wrap-around; it defines how many bits a
                // timestamp may use before wrapping. For example, with pts_wrap_bits = 24,
                // timestamps wrap when they reach 2^24.
                int wrap_bits = s->streams[next_pkt->stream_index]->pts_wrap_bits;
                // last dts seen for this stream. if any of packets following
                // current one had no dts, we will set this to AV_NOPTS_VALUE.
                int64_t last_dts = next_pkt->dts;

                av_assert2(wrap_bits <= 64);

                while (pktl && next_pkt->pts == AV_NOPTS_VALUE) {
                    // Only consider packets of the same stream as next_pkt.
                    // av_compare_mod compares two integers a and b modulo mod;
                    // a return value < 0 means a is smaller than b modulo mod.
                    if (pktl->pkt.stream_index == next_pkt->stream_index &&
                        av_compare_mod(next_pkt->dts, pktl->pkt.dts, 2ULL << (wrap_bits - 1)) < 0) {
                        if (av_compare_mod(pktl->pkt.pts, pktl->pkt.dts, 2ULL << (wrap_bits - 1))) {
                            // not B-frame: fix up the pts
                            next_pkt->pts = pktl->pkt.dts;
                        }
                        if (last_dts != AV_NOPTS_VALUE) {
                            // Once last dts was set to AV_NOPTS_VALUE, we don't change it.
                            last_dts = pktl->pkt.dts;
                        }
                    }
                    pktl = pktl->next;
                }

                if (eof && next_pkt->pts == AV_NOPTS_VALUE && last_dts != AV_NOPTS_VALUE) {
                    // Fixing the last reference frame had none pts issue (For MXF etc).
                    // We only do this when
                    // 1. eof.
                    // 2. we are not able to resolve a pts value for current packet.
                    // 3. the packets for this stream at the end of the files had valid dts.
                    next_pkt->pts = last_dts + next_pkt->duration;
                }
                pktl = si->packet_buffer.head;
            }

            /* read packet from packet buffer, if there is data */
            st = s->streams[next_pkt->stream_index];
            if (!(next_pkt->pts == AV_NOPTS_VALUE && st->discard < AVDISCARD_ALL &&
                  next_pkt->dts != AV_NOPTS_VALUE && !eof)) {
                ret = avpriv_packet_list_get(&si->packet_buffer, pkt);
                goto return_packet;
            }
        }

        // Read a frame from the stream
        ret = read_frame_internal(s, pkt);
        if (ret < 0) {
            if (pktl && ret != AVERROR(EAGAIN)) {
                eof = 1;
                continue;
            } else
                return ret;
        }

        // Put the pkt into the packet buffer
        ret = avpriv_packet_list_put(&si->packet_buffer,
                                     pkt, NULL, 0);
        if (ret < 0) {
            av_packet_unref(pkt);
            return ret;
        }
    }

return_packet:
    st = s->streams[pkt->stream_index];
    if ((s->iformat->flags & AVFMT_GENERIC_INDEX) && pkt->flags & AV_PKT_FLAG_KEY) {
        ff_reduce_index(s, st->index);
        av_add_index_entry(st, pkt->pos, pkt->dts, 0, 0, AVINDEX_KEYFRAME);
    }

    if (is_relative(pkt->dts))
        pkt->dts -= RELATIVE_TS_BASE;
    if (is_relative(pkt->pts))
        pkt->pts -= RELATIVE_TS_BASE;

    return ret;
}
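To put the function in context, here is a minimal sketch of the calling pattern an application typically uses around av_read_frame (the helper name demo_read_loop and the error handling are illustrative, not from the FFmpeg sources):

#include <libavformat/avformat.h>

int demo_read_loop(const char *url)
{
    AVFormatContext *fmt_ctx = NULL;
    AVPacket *pkt = av_packet_alloc();
    int ret;

    if (!pkt)
        return AVERROR(ENOMEM);

    if ((ret = avformat_open_input(&fmt_ctx, url, NULL, NULL)) < 0)
        goto end;
    /* Optional: ask the demuxing layer to generate missing pts values,
     * which selects the genpts branch discussed above. */
    fmt_ctx->flags |= AVFMT_FLAG_GENPTS;
    if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0)
        goto end;

    /* Each successful call returns one packet that must be unreferenced. */
    while ((ret = av_read_frame(fmt_ctx, pkt)) >= 0) {
        /* pkt->stream_index tells which stream the packet belongs to;
         * pts/dts are in fmt_ctx->streams[pkt->stream_index]->time_base. */
        av_packet_unref(pkt);
    }
    if (ret == AVERROR_EOF)
        ret = 0;

end:
    av_packet_free(&pkt);
    avformat_close_input(&fmt_ctx);
    return ret;
}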

1.2 Reading a frame from the packet buffer (avpriv_packet_list_get)

This function takes one packet out of the packet buffer; it is implemented in libavcodec\avpacket.c.

/**
 * Remove the oldest AVPacket in the list and return it.
 *
 * @note The pkt will be overwritten completely on success. The caller
 *       owns the packet and must unref it by itself.
 *
 * @param head A pointer to a PacketList struct
 * @param pkt  Pointer to an AVPacket struct
 * @return 0 on success, and a packet is returned. AVERROR(EAGAIN) if
 *         the list was empty.
 */
int avpriv_packet_list_get(PacketList *pkt_buffer, AVPacket *pkt)
{
    PacketListEntry *pktl = pkt_buffer->head;

    if (!pktl)
        return AVERROR(EAGAIN);

    *pkt             = pktl->pkt;
    pkt_buffer->head = pktl->next;
    if (!pkt_buffer->head)
        pkt_buffer->tail = NULL;
    av_freep(&pktl);
    return 0;
}
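For context, the buffer behind this function is a simple singly linked FIFO. The sketch below mirrors the shape of FFmpeg's internal PacketList / PacketListEntry types from libavcodec's private headers; treat it as illustrative rather than the exact definition:

typedef struct PacketListEntry {
    struct PacketListEntry *next;  /* next queued packet, NULL at the tail */
    AVPacket pkt;                  /* the buffered packet itself */
} PacketListEntry;

typedef struct PacketList {
    PacketListEntry *head, *tail;  /* FIFO: get pops from head, put appends at tail */
} PacketList;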

1.3 Reading a frame from the stream (read_frame_internal)

read_frame_internal is defined in libavformat\demux.c. Its purpose is to read one frame from the stream, and the function can roughly be divided into the following parts:
(1) Read a packet and, if necessary, parse it
(2) Update the codec context if required
(3) Check timestamps and related information
(4) Handle the discard flag and side data

Among these steps, the core work is reading the packet (ff_read_packet) and parsing it (parse_packet); parsing is only performed when necessary, which is checked beforehand.

static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
{FFFormatContext *const si = ffformatcontext(s);int ret, got_packet = 0;AVDictionary *metadata = NULL;// 没有获取packet则持续读取while (!got_packet && !si->parse_queue.head) {AVStream *st;FFStream *sti;// 1.读取帧并且考虑解析/* read next packet */// 读取packetret = ff_read_packet(s, pkt);if (ret < 0) {if (ret == AVERROR(EAGAIN))return ret;/* flush the parsers */for (unsigned i = 0; i < s->nb_streams; i++) {AVStream *const st  = s->streams[i];FFStream *const sti = ffstream(st);if (sti->parser && sti->need_parsing)// 解析packetparse_packet(s, pkt, st->index, 1);}/* all remaining packets are now in parse_queue =>* really terminate parsing */break;}ret = 0;st  = s->streams[pkt->stream_index];sti = ffstream(st);st->event_flags |= AVSTREAM_EVENT_FLAG_NEW_PACKETS;/* update context if required */// 2.如果有必要则更新contextif (sti->need_context_update) {if (avcodec_is_open(sti->avctx)) {av_log(s, AV_LOG_DEBUG, "Demuxer context update while decoder is open, closing and trying to re-open\n");ret = codec_close(sti);sti->info->found_decoder = 0;if (ret < 0)return ret;}/* close parser, because it depends on the codec */if (sti->parser && sti->avctx->codec_id != st->codecpar->codec_id) {av_parser_close(sti->parser);sti->parser = NULL;}ret = avcodec_parameters_to_context(sti->avctx, st->codecpar);if (ret < 0) {av_packet_unref(pkt);return ret;}sti->codec_desc = avcodec_descriptor_get(sti->avctx->codec_id);sti->need_context_update = 0;}// 3.时间戳等信息的检查if (pkt->pts != AV_NOPTS_VALUE &&pkt->dts != AV_NOPTS_VALUE &&pkt->pts < pkt->dts) {av_log(s, AV_LOG_WARNING,"Invalid timestamps stream=%d, pts=%s, dts=%s, size=%d\n",pkt->stream_index,av_ts2str(pkt->pts),av_ts2str(pkt->dts),pkt->size);}if (s->debug & FF_FDEBUG_TS)av_log(s, AV_LOG_DEBUG,"ff_read_packet stream=%d, pts=%s, dts=%s, size=%d, duration=%"PRId64", flags=%d\n",pkt->stream_index,av_ts2str(pkt->pts),av_ts2str(pkt->dts),pkt->size, pkt->duration, pkt->flags);if (sti->need_parsing && !sti->parser && !(s->flags & AVFMT_FLAG_NOPARSE)) {sti->parser = av_parser_init(st->codecpar->codec_id);if (!sti->parser) {av_log(s, AV_LOG_VERBOSE, "parser not found for codec ""%s, packets or times may be invalid.\n",avcodec_get_name(st->codecpar->codec_id));/* no parser available: just output the raw packets */sti->need_parsing = AVSTREAM_PARSE_NONE;} else if (sti->need_parsing == AVSTREAM_PARSE_HEADERS)sti->parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;else if (sti->need_parsing == AVSTREAM_PARSE_FULL_ONCE)sti->parser->flags |= PARSER_FLAG_ONCE;else if (sti->need_parsing == AVSTREAM_PARSE_FULL_RAW)sti->parser->flags |= PARSER_FLAG_USE_CODEC_TS;}if (!sti->need_parsing || !sti->parser) {/* no parsing needed: we just output the packet as is */compute_pkt_fields(s, st, NULL, pkt, AV_NOPTS_VALUE, AV_NOPTS_VALUE);if ((s->iformat->flags & AVFMT_GENERIC_INDEX) &&(pkt->flags & AV_PKT_FLAG_KEY) && pkt->dts != AV_NOPTS_VALUE) {ff_reduce_index(s, st->index);av_add_index_entry(st, pkt->pos, pkt->dts,0, 0, AVINDEX_KEYFRAME);}got_packet = 1;} else if (st->discard < AVDISCARD_ALL) {if ((ret = parse_packet(s, pkt, pkt->stream_index, 0)) < 0)return ret;st->codecpar->sample_rate = sti->avctx->sample_rate;st->codecpar->bit_rate = sti->avctx->bit_rate;ret = av_channel_layout_copy(&st->codecpar->ch_layout, &sti->avctx->ch_layout);if (ret < 0)return ret;st->codecpar->codec_id = sti->avctx->codec_id;} else {/* free packet */av_packet_unref(pkt);}if (pkt->flags & AV_PKT_FLAG_KEY)sti->skip_to_keyframe = 0;if (sti->skip_to_keyframe) {av_packet_unref(pkt);got_packet = 0;}}if (!got_packet && si->parse_queue.head)ret = 
avpriv_packet_list_get(&si->parse_queue, pkt);// 4.检查discard和side dataif (ret >= 0) {AVStream *const st  = s->streams[pkt->stream_index];FFStream *const sti = ffstream(st);int discard_padding = 0;if (sti->first_discard_sample && pkt->pts != AV_NOPTS_VALUE) {int64_t pts = pkt->pts - (is_relative(pkt->pts) ? RELATIVE_TS_BASE : 0);int64_t sample = ts_to_samples(st, pts);int64_t duration = ts_to_samples(st, pkt->duration);int64_t end_sample = sample + duration;if (duration > 0 && end_sample >= sti->first_discard_sample &&sample < sti->last_discard_sample)discard_padding = FFMIN(end_sample - sti->first_discard_sample, duration);}if (sti->start_skip_samples && (pkt->pts == 0 || pkt->pts == RELATIVE_TS_BASE))sti->skip_samples = sti->start_skip_samples;sti->skip_samples = FFMAX(0, sti->skip_samples);if (sti->skip_samples || discard_padding) {uint8_t *p = av_packet_new_side_data(pkt, AV_PKT_DATA_SKIP_SAMPLES, 10);if (p) {AV_WL32(p, sti->skip_samples);AV_WL32(p + 4, discard_padding);av_log(s, AV_LOG_DEBUG, "demuxer injecting skip %u / discard %u\n",(unsigned)sti->skip_samples, (unsigned)discard_padding);}sti->skip_samples = 0;}#if FF_API_AVSTREAM_SIDE_DATAif (sti->inject_global_side_data) {for (int i = 0; i < st->codecpar->nb_coded_side_data; i++) {const AVPacketSideData *const src_sd = &st->codecpar->coded_side_data[i];uint8_t *dst_data;if (av_packet_get_side_data(pkt, src_sd->type, NULL))continue;dst_data = av_packet_new_side_data(pkt, src_sd->type, src_sd->size);if (!dst_data) {av_log(s, AV_LOG_WARNING, "Could not inject global side data\n");continue;}memcpy(dst_data, src_sd->data, src_sd->size);}sti->inject_global_side_data = 0;}
#endif}if (!si->metafree) {int metaret = av_opt_get_dict_val(s, "metadata", AV_OPT_SEARCH_CHILDREN, &metadata);if (metadata) {s->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED;av_dict_copy(&s->metadata, metadata, 0);av_dict_free(&metadata);av_opt_set_dict_val(s, "metadata", NULL, AV_OPT_SEARCH_CHILDREN);}si->metafree = metaret == AVERROR_OPTION_NOT_FOUND;}if (s->debug & FF_FDEBUG_TS)av_log(s, AV_LOG_DEBUG,"read_frame_internal stream=%d, pts=%s, dts=%s, ""size=%d, duration=%"PRId64", flags=%d\n",pkt->stream_index,av_ts2str(pkt->pts),av_ts2str(pkt->dts),pkt->size, pkt->duration, pkt->flags);/* A demuxer might have returned EOF because of an IO error, let's* propagate this back to the user. */if (ret == AVERROR_EOF && s->pb && s->pb->error < 0 && s->pb->error != AVERROR(EAGAIN))ret = s->pb->error;return ret;
}
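Note that the discard handling in step (4) honours the per-stream discard level an application may set before reading; a minimal sketch (the helper name keep_only_stream is illustrative):

#include <libavformat/avformat.h>

/* Keep only one stream and ask the demuxer/generic layer to drop the rest. */
static void keep_only_stream(AVFormatContext *fmt_ctx, int keep_index)
{
    for (unsigned i = 0; i < fmt_ctx->nb_streams; i++)
        fmt_ctx->streams[i]->discard =
            ((int)i == keep_index) ? AVDISCARD_DEFAULT : AVDISCARD_ALL;
}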

1.3.1 Reading a packet (ff_read_packet)

ff_read_packet reads a packet from the media file. Depending on the input format, it dispatches to the demuxer's own implementation; the core call is the .read_packet callback.

/**
 * Read a transport packet from a media file.
 *
 * @param s media file handle
 * @param pkt is filled
 * @return 0 if OK, AVERROR_xxx on error
 */
int ff_read_packet(AVFormatContext *s, AVPacket *pkt)
{FFFormatContext *const si = ffformatcontext(s);int err;#if FF_API_INIT_PACKET
FF_DISABLE_DEPRECATION_WARNINGS // 禁用FFmpeg中的弃用警告pkt->data = NULL;pkt->size = 0;av_init_packet(pkt);
FF_ENABLE_DEPRECATION_WARNINGS // 启用FFmpeg中的弃用警告
#elseav_packet_unref(pkt);
#endiffor (;;) {PacketListEntry *pktl = si->raw_packet_buffer.head;if (pktl) {AVStream *const st = s->streams[pktl->pkt.stream_index];if (si->raw_packet_buffer_size >= s->probesize)if ((err = probe_codec(s, st, NULL)) < 0)return err;if (ffstream(st)->request_probe <= 0) {avpriv_packet_list_get(&si->raw_packet_buffer, pkt);si->raw_packet_buffer_size -= pkt->size;return 0;}}// 具体读取packet的函数err = ffifmt(s->iformat)->read_packet(s, pkt);if (err < 0) {av_packet_unref(pkt);/* Some demuxers return FFERROR_REDO when they consumedata and discard it (ignored streams, junk, extradata).We must re-call the demuxer to get the real packet. */if (err == FFERROR_REDO)continue;if (!pktl || err == AVERROR(EAGAIN))return err;for (unsigned i = 0; i < s->nb_streams; i++) {AVStream *const st  = s->streams[i];FFStream *const sti = ffstream(st);if (sti->probe_packets || sti->request_probe > 0)if ((err = probe_codec(s, st, NULL)) < 0)return err;av_assert0(sti->request_probe <= 0);}continue;}err = av_packet_make_refcounted(pkt);if (err < 0) {av_packet_unref(pkt);return err;}err = handle_new_packet(s, pkt, 1);if (err <= 0) /* Error or passthrough */return err;}
}

The key call in the function above is read_packet, which dispatches to a different implementation depending on the FFInputFormat. For FLV, for example, flv_read_packet is called; the FLV demuxer is defined as follows:

const FFInputFormat ff_flv_demuxer = {
    .p.name         = "flv",
    .p.long_name    = NULL_IF_CONFIG_SMALL("FLV (Flash Video)"),
    .p.extensions   = "flv",
    .p.priv_class   = &flv_kux_class,
    .priv_data_size = sizeof(FLVContext),
    .read_probe     = flv_probe,
    .read_header    = flv_read_header,
    .read_packet    = flv_read_packet,
    .read_seek      = flv_read_seek,
    .read_close     = flv_read_close,
};
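As a side note, an application can force this demuxer explicitly instead of relying on format probing; its read_packet callback (flv_read_packet) then serves av_read_frame. A minimal sketch (the helper name open_as_flv is illustrative):

#include <libavformat/avformat.h>

static int open_as_flv(AVFormatContext **fmt_ctx, const char *url)
{
    /* Look up the FLV demuxer by name and pass it to avformat_open_input(). */
    const AVInputFormat *flv = av_find_input_format("flv");
    if (!flv)
        return AVERROR_DEMUXER_NOT_FOUND;
    return avformat_open_input(fmt_ctx, url, flv, NULL);
}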

flv_read_packet is defined below. Its main job is to walk the FLV structure layer by layer, parsing each Tag and its TagData. The flow is:
(1) Parse the tag header
(2) Parse the tag data
(3) Configure other information

static int flv_read_packet(AVFormatContext *s, AVPacket *pkt)
{FLVContext *flv = s->priv_data;int ret, i, size, flags;enum FlvTagType type;int stream_type=-1;int64_t next, pos, meta_pos;int64_t dts, pts = AV_NOPTS_VALUE;int av_uninit(channels);int av_uninit(sample_rate);AVStream *st    = NULL;int last = -1;int orig_size;int enhanced_flv = 0;uint32_t video_codec_id = 0;retry:/* pkt size is repeated at end. skip it */// 1.解析tag headerpos  = avio_tell(s->pb);// tag的类型type = (avio_r8(s->pb) & 0x1F);// datasize的大小orig_size =size = avio_rb24(s->pb);flv->sum_flv_tag_size += size + 11LL;// 时间戳dts  = avio_rb24(s->pb);dts |= (unsigned)avio_r8(s->pb) << 24;av_log(s, AV_LOG_TRACE, "type:%d, size:%d, last:%d, dts:%"PRId64" pos:%"PRId64"\n", type, size, last, dts, avio_tell(s->pb));if (avio_feof(s->pb))return AVERROR_EOF;// 流idavio_skip(s->pb, 3); /* stream id, always 0 */// 解析tag header结束flags = 0;if (flv->validate_next < flv->validate_count) {int64_t validate_pos = flv->validate_index[flv->validate_next].pos;if (pos == validate_pos) {if (FFABS(dts - flv->validate_index[flv->validate_next].dts) <=VALIDATE_INDEX_TS_THRESH) {flv->validate_next++;} else {clear_index_entries(s, validate_pos);flv->validate_count = 0;}} else if (pos > validate_pos) {clear_index_entries(s, validate_pos);flv->validate_count = 0;}}if (size == 0) {ret = FFERROR_REDO;goto leave;}next = size + avio_tell(s->pb);if (type == FLV_TAG_TYPE_AUDIO) { // 音频stream_type = FLV_STREAM_TYPE_AUDIO;// tag data的第一个字节flags    = avio_r8(s->pb);size--;} else if (type == FLV_TAG_TYPE_VIDEO) { // 视频stream_type = FLV_STREAM_TYPE_VIDEO;// tag data的第一个字节flags    = avio_r8(s->pb);video_codec_id = flags & FLV_VIDEO_CODECID_MASK;/** Reference Enhancing FLV 2023-03-v1.0.0-B.8* https://github.com/veovera/enhanced-rtmp/blob/main/enhanced-rtmp-v1.pdf* */enhanced_flv = (flags >> 7) & 1;size--;if (enhanced_flv) {video_codec_id = avio_rb32(s->pb);size -= 4;}// 如果是增强的flv格式if (enhanced_flv && stream_type == FLV_STREAM_TYPE_VIDEO && (flags & FLV_VIDEO_FRAMETYPE_MASK) == FLV_FRAME_VIDEO_INFO_CMD) 				{int pkt_type = flags & 0x0F;if (pkt_type == PacketTypeMetadata) {int ret = flv_parse_video_color_info(s, st, next);av_log(s, AV_LOG_DEBUG, "enhanced flv parse metadata ret %d and skip\n", ret);}goto skip;} else if ((flags & FLV_VIDEO_FRAMETYPE_MASK) == FLV_FRAME_VIDEO_INFO_CMD) {goto skip;}} else if (type == FLV_TAG_TYPE_META) { // 如果是元数据stream_type=FLV_STREAM_TYPE_SUBTITLE; // 字幕if (size > 13 + 1 + 4) { // Header-type metadata stuffint type;meta_pos = avio_tell(s->pb);type = flv_read_metabody(s, next);if (type == 0 && dts == 0 || type < 0) {if (type < 0 && flv->validate_count &&flv->validate_index[0].pos     > next &&flv->validate_index[0].pos - 4 < next) {av_log(s, AV_LOG_WARNING, "Adjusting next position due to index mismatch\n");next = flv->validate_index[0].pos - 4;}goto skip;} else if (type == TYPE_ONTEXTDATA) {avpriv_request_sample(s, "OnTextData packet");return flv_data_packet(s, pkt, dts, next);} else if (type == TYPE_ONCAPTION) {return flv_data_packet(s, pkt, dts, next);} else if (type == TYPE_UNKNOWN) {stream_type = FLV_STREAM_TYPE_DATA;}avio_seek(s->pb, meta_pos, SEEK_SET);}} else { // 跳过av_log(s, AV_LOG_DEBUG,"Skipping flv packet: type %d, size %d, flags %d.\n",type, size, flags);
skip:if (avio_seek(s->pb, next, SEEK_SET) != next) {// This can happen if flv_read_metabody above read past// next, on a non-seekable input, and the preceding data has// been flushed out from the IO buffer.av_log(s, AV_LOG_ERROR, "Unable to seek to the next packet\n");return AVERROR_INVALIDDATA;}ret = FFERROR_REDO;goto leave;}/* skip empty data packets */if (!size) {ret = FFERROR_REDO;goto leave;}/* now find stream */// 寻找流for (i = 0; i < s->nb_streams; i++) {st = s->streams[i];if (stream_type == FLV_STREAM_TYPE_AUDIO) {// flv_same_audio_codec用于确保在转换为FLV格式时音频编码格式不变if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&(s->audio_codec_id || flv_same_audio_codec(st->codecpar, flags)))break;} else if (stream_type == FLV_STREAM_TYPE_VIDEO) {// flv_same_viedo_codec用于确保在转换为FLV格式时保持视频编码格式不变if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO &&(s->video_codec_id || flv_same_video_codec(st->codecpar, video_codec_id)))break;} else if (stream_type == FLV_STREAM_TYPE_SUBTITLE) {if (st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE)break;} else if (stream_type == FLV_STREAM_TYPE_DATA) {if (st->codecpar->codec_type == AVMEDIA_TYPE_DATA)break;}}// 2.获取TagData// 根据读取的信息,创建一条流,将信息填充到流当中if (i == s->nb_streams) {static const enum AVMediaType stream_types[] = {AVMEDIA_TYPE_VIDEO, AVMEDIA_TYPE_AUDIO, AVMEDIA_TYPE_SUBTITLE, AVMEDIA_TYPE_DATA};st = create_stream(s, stream_types[stream_type]);if (!st)return AVERROR(ENOMEM);}av_log(s, AV_LOG_TRACE, "%d %X %d \n", stream_type, flags, st->discard);if (flv->time_pos <= pos) {dts += flv->time_offset;}if ((s->pb->seekable & AVIO_SEEKABLE_NORMAL) &&((flags & FLV_VIDEO_FRAMETYPE_MASK) == FLV_FRAME_KEY ||stream_type == FLV_STREAM_TYPE_AUDIO))av_add_index_entry(st, pos, dts, size, 0, AVINDEX_KEYFRAME);if ((st->discard >= AVDISCARD_NONKEY && !((flags & FLV_VIDEO_FRAMETYPE_MASK) == FLV_FRAME_KEY || stream_type == FLV_STREAM_TYPE_AUDIO)) ||(st->discard >= AVDISCARD_BIDIR && ((flags & FLV_VIDEO_FRAMETYPE_MASK) == FLV_FRAME_DISP_INTER && stream_type == FLV_STREAM_TYPE_VIDEO)) ||st->discard >= AVDISCARD_ALL) {avio_seek(s->pb, next, SEEK_SET);ret = FFERROR_REDO;goto leave;}// if not streamed and no duration from metadata then seek to end to find// the duration from the timestamps// 如果没有流化并且没有从元数据中获取持续时间,则寻求结束以从时间戳中查找持续时间if ((s->pb->seekable & AVIO_SEEKABLE_NORMAL) &&(!s->duration || s->duration == AV_NOPTS_VALUE) &&!flv->searched_for_end) {int size;const int64_t pos   = avio_tell(s->pb);// Read the last 4 bytes of the file, this should be the size of the// previous FLV tag. Use the timestamp of its payload as duration.// 读取文件的最后4个字节,这应该是前一个FLV标记的大小。使用其有效负载的时间戳作为持续时间int64_t fsize       = avio_size(s->pb);
retry_duration:avio_seek(s->pb, fsize - 4, SEEK_SET);size = avio_rb32(s->pb);if (size > 0 && size < fsize) {// Seek to the start of the last FLV tag at position (fsize - 4 - size)// but skip the byte indicating the type.avio_seek(s->pb, fsize - 3 - size, SEEK_SET);if (size == avio_rb24(s->pb) + 11) {uint32_t ts = avio_rb24(s->pb);ts         |= (unsigned)avio_r8(s->pb) << 24;if (ts)s->duration = ts * (int64_t)AV_TIME_BASE / 1000;else if (fsize >= 8 && fsize - 8 >= size) {fsize -= size+4;goto retry_duration;}}}avio_seek(s->pb, pos, SEEK_SET);flv->searched_for_end = 1;}// 3.其他信息配置//	如果流类型是音频,进行一些检查和配置if (stream_type == FLV_STREAM_TYPE_AUDIO) {int bits_per_coded_sample;channels = (flags & FLV_AUDIO_CHANNEL_MASK) == FLV_STEREO ? 2 : 1;sample_rate = 44100 << ((flags & FLV_AUDIO_SAMPLERATE_MASK) >>FLV_AUDIO_SAMPLERATE_OFFSET) >> 3;bits_per_coded_sample = (flags & FLV_AUDIO_SAMPLESIZE_MASK) ? 16 : 8;if (!av_channel_layout_check(&st->codecpar->ch_layout) ||!st->codecpar->sample_rate ||!st->codecpar->bits_per_coded_sample) {av_channel_layout_default(&st->codecpar->ch_layout, channels);st->codecpar->sample_rate           = sample_rate;st->codecpar->bits_per_coded_sample = bits_per_coded_sample;}if (!st->codecpar->codec_id) {flv_set_audio_codec(s, st, st->codecpar,flags & FLV_AUDIO_CODECID_MASK);flv->last_sample_rate =sample_rate           = st->codecpar->sample_rate;flv->last_channels    =channels              = st->codecpar->ch_layout.nb_channels;} else {AVCodecParameters *par = avcodec_parameters_alloc();if (!par) {ret = AVERROR(ENOMEM);goto leave;}par->sample_rate = sample_rate;par->bits_per_coded_sample = bits_per_coded_sample;flv_set_audio_codec(s, st, par, flags & FLV_AUDIO_CODECID_MASK);sample_rate = par->sample_rate;avcodec_parameters_free(&par);}} else if (stream_type == FLV_STREAM_TYPE_VIDEO) { // 如果是视频,则进行video coedc的配置int ret = flv_set_video_codec(s, st, video_codec_id, 1);if (ret < 0)return ret;size -= ret;} else if (stream_type == FLV_STREAM_TYPE_SUBTITLE) {st->codecpar->codec_id = AV_CODEC_ID_TEXT;} else if (stream_type == FLV_STREAM_TYPE_DATA) {st->codecpar->codec_id = AV_CODEC_ID_NONE; // Opaque AMF data}// 相对比于雷博记录的版本,这里新增了HEVC、AV1和VP9if (st->codecpar->codec_id == AV_CODEC_ID_AAC ||st->codecpar->codec_id == AV_CODEC_ID_H264 ||st->codecpar->codec_id == AV_CODEC_ID_MPEG4 ||st->codecpar->codec_id == AV_CODEC_ID_HEVC ||st->codecpar->codec_id == AV_CODEC_ID_AV1 ||st->codecpar->codec_id == AV_CODEC_ID_VP9) {int type = 0;if (enhanced_flv && stream_type == FLV_STREAM_TYPE_VIDEO) {type = flags & 0x0F;} else {type = avio_r8(s->pb);size--;}if (size < 0) {ret = AVERROR_INVALIDDATA;goto leave;}if (enhanced_flv && stream_type == FLV_STREAM_TYPE_VIDEO && flv->meta_color_info_flag) {// 更新packet的side dataflv_update_video_color_info(s, st); // update av packet side dataflv->meta_color_info_flag = 0;}// H264 或 H265if (st->codecpar->codec_id == AV_CODEC_ID_H264 || st->codecpar->codec_id == AV_CODEC_ID_MPEG4 ||(st->codecpar->codec_id == AV_CODEC_ID_HEVC && type == PacketTypeCodedFrames)) {// sign extension// 对应的composition timeint32_t cts = (avio_rb24(s->pb) + 0xff800000) ^ 0xff800000;pts = av_sat_add64(dts, cts);if (cts < 0) { // dts might be wrongif (!flv->wrong_dts)av_log(s, AV_LOG_WARNING,"Negative cts, previous timestamps might be wrong.\n");flv->wrong_dts = 1;} else if (FFABS(dts - pts) > 1000*60*15) {av_log(s, AV_LOG_WARNING,"invalid timestamps %"PRId64" %"PRId64"\n", dts, pts);dts = pts = AV_NOPTS_VALUE;}size -= 3;}if (type == 0 && (!st->codecpar->extradata || st->codecpar->codec_id == 
AV_CODEC_ID_AAC ||st->codecpar->codec_id == AV_CODEC_ID_H264 || st->codecpar->codec_id == AV_CODEC_ID_HEVC ||st->codecpar->codec_id == AV_CODEC_ID_AV1 || st->codecpar->codec_id == AV_CODEC_ID_VP9)) {AVDictionaryEntry *t;if (st->codecpar->extradata) {if ((ret = flv_queue_extradata(flv, s->pb, stream_type, size)) < 0)return ret;ret = FFERROR_REDO;goto leave;}if ((ret = flv_get_extradata(s, st, size)) < 0)return ret;/* Workaround for buggy Omnia A/XE encoder */t = av_dict_get(s->metadata, "Encoder", NULL, 0);if (st->codecpar->codec_id == AV_CODEC_ID_AAC && t && !strcmp(t->value, "Omnia A/XE"))st->codecpar->extradata_size = 2;ret = FFERROR_REDO;goto leave;}}/* skip empty data packets */if (!size) {ret = FFERROR_REDO;goto leave;}// 获取pktret = av_get_packet(s->pb, pkt, size);if (ret < 0)return ret;// 配置dts和pts等信息pkt->dts          = dts;pkt->pts          = pts == AV_NOPTS_VALUE ? dts : pts;pkt->stream_index = st->index;pkt->pos          = pos;if (flv->new_extradata[stream_type]) {int ret = av_packet_add_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,flv->new_extradata[stream_type],flv->new_extradata_size[stream_type]);if (ret >= 0) {flv->new_extradata[stream_type]      = NULL;flv->new_extradata_size[stream_type] = 0;}}if (stream_type == FLV_STREAM_TYPE_AUDIO &&(sample_rate != flv->last_sample_rate ||channels    != flv->last_channels)) {flv->last_sample_rate = sample_rate;flv->last_channels    = channels;ff_add_param_change(pkt, channels, 0, sample_rate, 0, 0);}if (stream_type == FLV_STREAM_TYPE_AUDIO ||(flags & FLV_VIDEO_FRAMETYPE_MASK) == FLV_FRAME_KEY ||stream_type == FLV_STREAM_TYPE_SUBTITLE ||stream_type == FLV_STREAM_TYPE_DATA)pkt->flags |= AV_PKT_FLAG_KEY; // 关键帧的配置leave:last = avio_rb32(s->pb);if (!flv->trust_datasize) {if (last != orig_size + 11 && last != orig_size + 10 &&!avio_feof(s->pb) &&(last != orig_size || !last) && last != flv->sum_flv_tag_size &&!flv->broken_sizes) {av_log(s, AV_LOG_ERROR, "Packet mismatch %d %d %"PRId64"\n", last, orig_size + 11, flv->sum_flv_tag_size);avio_seek(s->pb, pos + 1, SEEK_SET);ret = resync(s);av_packet_unref(pkt);if (ret >= 0) {goto retry;}}}if (ret >= 0)flv->last_ts = pkt->dts;return ret;
}
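One detail in the code above worth unpacking is the sign extension of the 24-bit composition time offset (cts), computed as (avio_rb24(...) + 0xff800000) ^ 0xff800000, which then gives pts = dts + cts. A small self-contained sketch of the same trick (the values in main are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 24-bit two's-complement value, as flv_read_packet does for
 * the composition time offset. */
static int32_t sign_extend_24(uint32_t v24)
{
    return (int32_t)((v24 + 0xff800000u) ^ 0xff800000u);
}

int main(void)
{
    printf("%d\n", sign_extend_24(0x000005));  /* prints  5 */
    printf("%d\n", sign_extend_24(0xfffffb));  /* prints -5 */
    return 0;
}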

According to Lei Xiaohua's article, the FLV container is laid out as follows:
(figure: FLV file structure)
An FLV file consists of an FLV Header followed by an FLV Body. The Header contains the Signature, Version, Flags and Header Size fields; the Body is a sequence of Previous Tag Size, Tag Header and Tag Data entries, and each Tag Header contains Type, Datasize, Timestamp, Timestamp_ex and StreamID. A sketch of decoding this 11-byte Tag Header is shown next, followed by the layout of the Video Tag Data.
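As a concrete illustration of the 11-byte Tag Header layout described above, here is a hedged sketch that decodes it from a raw buffer (the struct and function names are my own, not FFmpeg API):

#include <stdint.h>

typedef struct FlvTagHeader {
    uint8_t  type;       /* 8 = audio, 9 = video, 18 = script data */
    uint32_t data_size;  /* 24-bit size of the tag data that follows */
    uint32_t timestamp;  /* 24-bit timestamp plus 8-bit extension, in ms */
    uint32_t stream_id;  /* 24 bits, always 0 */
} FlvTagHeader;

/* Decode the 11-byte FLV tag header from buf (all fields big-endian). */
static void parse_flv_tag_header(const uint8_t *buf, FlvTagHeader *h)
{
    h->type       = buf[0] & 0x1F;
    h->data_size  = ((uint32_t)buf[1] << 16) | ((uint32_t)buf[2] << 8) | buf[3];
    h->timestamp  = ((uint32_t)buf[4] << 16) | ((uint32_t)buf[5] << 8) | buf[6];
    h->timestamp |= (uint32_t)buf[7] << 24;  /* TimestampExtended */
    h->stream_id  = ((uint32_t)buf[8] << 16) | ((uint32_t)buf[9] << 8) | buf[10];
}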
(figure: layout of the Video Tag Data)
The first byte records the FrameType and CodecID, and the VideoData starts from the second byte. The upper 4 bits of the first byte give the frame type:

    1: keyframe (for AVC, a seekable frame)
    2: inter frame (for AVC, a non-seekable frame)
    3: disposable inter frame (H.263 only)
    4: generated keyframe (reserved for server use only)
    5: video info/command frame

The lower 4 bits of the first byte give the CodecID:

    1: JPEG (currently unused)
    2: Sorenson H.263
    3: Screen video
    4: On2 VP6
    5: On2 VP6 with alpha channel
    6: Screen video version 2
    7: AVC
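A small sketch of decoding this first byte into the two fields, mirroring the FLV_VIDEO_FRAMETYPE_MASK / FLV_VIDEO_CODECID_MASK handling in flv_read_packet for classic (non-enhanced) FLV; the helper name is illustrative:

#include <stdint.h>

/* First byte of a video tag body: high nibble = frame type, low nibble = codec id. */
static void parse_video_tag_flags(uint8_t flags, int *frame_type, int *codec_id)
{
    *frame_type = (flags >> 4) & 0x0F;  /* 1 = keyframe, 2 = inter frame, ... */
    *codec_id   = flags & 0x0F;         /* 2 = Sorenson H.263, 7 = AVC, ... */
}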

1.3.2 Parsing the packet (parse_packet)

parse_packet is defined in libavformat\demux.c. It parses a packet and appends the parsed parts to the parse_queue. The core of the function is the call to av_parser_parse2; the remaining code then fills in various fields based on the parsing result.

/**
 * Parse a packet, add all split parts to parse_queue.
 *
 * @param pkt   Packet to parse; must not be NULL.
 * @param flush Indicates whether to flush. If set, pkt must be blank.
 */
static int parse_packet(AVFormatContext *s, AVPacket *pkt,int stream_index, int flush)
{FFFormatContext *const si = ffformatcontext(s);AVPacket *out_pkt = si->parse_pkt;AVStream *st = s->streams[stream_index];FFStream *const sti = ffstream(st);const uint8_t *data = pkt->data;int size = pkt->size;int ret = 0, got_output = flush;if (!size && !flush && sti->parser->flags & PARSER_FLAG_COMPLETE_FRAMES) {// preserve 0-size sync packets// 计算和设置AVPakcet中的属性值compute_pkt_fields(s, st, sti->parser, pkt, AV_NOPTS_VALUE, AV_NOPTS_VALUE);}while (size > 0 || (flush && got_output)) {int64_t next_pts = pkt->pts;int64_t next_dts = pkt->dts;int len;// 解析packetlen = av_parser_parse2(sti->parser, sti->avctx,&out_pkt->data, &out_pkt->size, data, size,pkt->pts, pkt->dts, pkt->pos);pkt->pts = pkt->dts = AV_NOPTS_VALUE;pkt->pos = -1;/* increment read pointer */av_assert1(data || !len);data  = len ? data + len : data;size -= len;got_output = !!out_pkt->size;if (!out_pkt->size)continue;if (pkt->buf && out_pkt->data == pkt->data) {/* reference pkt->buf only when out_pkt->data is guaranteed to point* to data in it and not in the parser's internal buffer. *//* XXX: Ensure this is the case with all parsers when sti->parser->flags* is PARSER_FLAG_COMPLETE_FRAMES and check for that instead? */out_pkt->buf = av_buffer_ref(pkt->buf);if (!out_pkt->buf) {ret = AVERROR(ENOMEM);goto fail;}} else {ret = av_packet_make_refcounted(out_pkt);if (ret < 0)goto fail;}if (pkt->side_data) {out_pkt->side_data       = pkt->side_data;out_pkt->side_data_elems = pkt->side_data_elems;pkt->side_data          = NULL;pkt->side_data_elems    = 0;}/* set the duration */// 设置durationout_pkt->duration = (sti->parser->flags & PARSER_FLAG_COMPLETE_FRAMES) ? pkt->duration : 0;if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {if (sti->avctx->sample_rate > 0) {out_pkt->duration =av_rescale_q_rnd(sti->parser->duration,(AVRational) { 1, sti->avctx->sample_rate },st->time_base,AV_ROUND_DOWN);}} else if (st->codecpar->codec_id == AV_CODEC_ID_GIF) {if (st->time_base.num > 0 && st->time_base.den > 0 &&sti->parser->duration) {out_pkt->duration = sti->parser->duration;}}// 设置pkt的一些属性值out_pkt->stream_index = st->index;out_pkt->pts          = sti->parser->pts;out_pkt->dts          = sti->parser->dts;out_pkt->pos          = sti->parser->pos;out_pkt->flags       |= pkt->flags & (AV_PKT_FLAG_DISCARD | AV_PKT_FLAG_CORRUPT);if (sti->need_parsing == AVSTREAM_PARSE_FULL_RAW)out_pkt->pos = sti->parser->frame_offset;if (sti->parser->key_frame == 1 ||(sti->parser->key_frame == -1 &&sti->parser->pict_type == AV_PICTURE_TYPE_I))out_pkt->flags |= AV_PKT_FLAG_KEY;if (sti->parser->key_frame == -1 && sti->parser->pict_type ==AV_PICTURE_TYPE_NONE && (pkt->flags&AV_PKT_FLAG_KEY))out_pkt->flags |= AV_PKT_FLAG_KEY;compute_pkt_fields(s, st, sti->parser, out_pkt, next_dts, next_pts);// 将packet添加到parse_queue之中ret = avpriv_packet_list_put(&si->parse_queue,out_pkt, NULL, 0);if (ret < 0)goto fail;}/* end of the stream => close and free the parser */if (flush) {av_parser_close(sti->parser);sti->parser = NULL;}fail:if (ret < 0)av_packet_unref(out_pkt);av_packet_unref(pkt);return ret;
}
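One detail from the code above: for audio, the parser reports the duration as a number of samples, which is rescaled into the stream time base. A standalone sketch of that conversion (the helper name and example values are mine):

#include <libavutil/mathematics.h>
#include <libavutil/rational.h>

/* Convert a duration given in audio samples into stream time-base units,
 * as parse_packet does with sti->parser->duration. */
static int64_t samples_to_tb(int64_t nb_samples, int sample_rate, AVRational stream_tb)
{
    return av_rescale_q_rnd(nb_samples,
                            (AVRational){ 1, sample_rate },  /* samples live in a 1/sample_rate time base */
                            stream_tb,
                            AV_ROUND_DOWN);
}
/* Example: 1024 samples at 44100 Hz into a 1/1000 time base -> 23 (ms). */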

The core call in the code above is av_parser_parse2, described next.

1.3.2.1 Parsing (av_parser_parse2)

av_parser_parse2 is defined in libavcodec\parser.c as follows:

// Parse data to obtain one packet: split the input byte stream into individual frames of compressed data.
// poutbuf: the parsed (output) compressed data frame
// buf: the compressed input data to be parsed
// If the output is empty after the call (poutbuf_size is 0), parsing is not finished yet; av_parser_parse2()
//     must be called again with more data before a complete parsed frame is available.
// When the output is non-empty, parsing of one frame is complete and the data in poutbuf can be taken
//     out for further processing.
int av_parser_parse2(AVCodecParserContext *s, AVCodecContext *avctx,uint8_t **poutbuf, int *poutbuf_size,const uint8_t *buf, int buf_size,int64_t pts, int64_t dts, int64_t pos)
{int index, i;uint8_t dummy_buf[AV_INPUT_BUFFER_PADDING_SIZE];// 检查当前的codec id是否不为空av_assert1(avctx->codec_id != AV_CODEC_ID_NONE);/* Parsers only work for the specified codec ids. */// 检查当前的parser的codec类型av_assert1(avctx->codec_id == s->parser->codec_ids[0] ||avctx->codec_id == s->parser->codec_ids[1] ||avctx->codec_id == s->parser->codec_ids[2] ||avctx->codec_id == s->parser->codec_ids[3] ||avctx->codec_id == s->parser->codec_ids[4] ||avctx->codec_id == s->parser->codec_ids[5] ||avctx->codec_id == s->parser->codec_ids[6]);/* 第一次进入时,flags为0,会进入if将offset设置成当前pkt的pos */if (!(s->flags & PARSER_FLAG_FETCHED_OFFSET)) {s->next_frame_offset =s->cur_offset        = pos;s->flags            |= PARSER_FLAG_FETCHED_OFFSET;}if (buf_size == 0) {/* padding is always necessary even if EOF, so we add it here */memset(dummy_buf, 0, sizeof(dummy_buf));buf = dummy_buf;} else if (s->cur_offset + buf_size != s->cur_frame_end[s->cur_frame_start_index]) { /* skip remainder packets *//* add a new packet descriptor */// 保留一下cur_offset, frame_end 信息, 有4个槽位供使用, 方便知道数据偏移量i = (s->cur_frame_start_index + 1) & (AV_PARSER_PTS_NB - 1);s->cur_frame_start_index = i;s->cur_frame_offset[i]   = s->cur_offset;s->cur_frame_end[i]      = s->cur_offset + buf_size;s->cur_frame_pts[i]      = pts;s->cur_frame_dts[i]      = dts;s->cur_frame_pos[i]      = pos;}if (s->fetch_timestamp) {s->fetch_timestamp = 0;s->last_pts        = s->pts;s->last_dts        = s->dts;s->last_pos        = s->pos;ff_fetch_timestamp(s, 0, 0, 0);}/* WARNING: the returned index can be negative */index = s->parser->parser_parse(s, avctx, (const uint8_t **) poutbuf,poutbuf_size, buf, buf_size);av_assert0(index > -0x20000000); // The API does not allow returning AVERROR codes
#define FILL(name) if(s->name > 0 && avctx->name <= 0) avctx->name = s->nameif (avctx->codec_type == AVMEDIA_TYPE_VIDEO) {FILL(field_order);FILL(coded_width);FILL(coded_height);FILL(width);FILL(height);}/* update the file pointer */if (*poutbuf_size) {/* fill the data for the current frame */s->frame_offset = s->next_frame_offset;/* offset of the next frame */s->next_frame_offset = s->cur_offset + index;s->fetch_timestamp   = 1;} else {/* Don't return a pointer to dummy_buf. */*poutbuf = NULL;}if (index < 0)index = 0;s->cur_offset += index;return index;
}
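For completeness, the usual application-side calling pattern looks like the following, essentially what FFmpeg's decode_video example does for a raw elementary stream; the helper name is illustrative and buffer refilling is omitted:

#include <libavcodec/avcodec.h>

/* Feed a raw byte buffer through the parser and hand complete packets
 * to the decoder. Returns 0 on success, <0 on error. */
static int parse_and_decode(AVCodecParserContext *parser, AVCodecContext *dec_ctx,
                            AVPacket *pkt, const uint8_t *data, int data_size)
{
    while (data_size > 0) {
        int len = av_parser_parse2(parser, dec_ctx, &pkt->data, &pkt->size,
                                   data, data_size,
                                   AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (len < 0)
            return len;
        data      += len;   /* advance past the bytes the parser consumed */
        data_size -= len;

        if (pkt->size) {    /* a complete frame was assembled */
            int ret = avcodec_send_packet(dec_ctx, pkt);
            if (ret < 0)
                return ret;
            /* ... drain frames with avcodec_receive_frame() here ... */
        }
    }
    return 0;
}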

The core of the function is the parser_parse callback. If the parser was initialized for H.264, h264_parse is invoked; the H.264 parser is defined as follows:

const AVCodecParser ff_h264_parser = {
    .codec_ids      = { AV_CODEC_ID_H264 },
    .priv_data_size = sizeof(H264ParseContext),
    .parser_init    = init,
    .parser_parse   = h264_parse,
    .parser_close   = h264_close,
};

h264_parse is defined in libavcodec\h264_parser.c, as shown below:

static int h264_parse(AVCodecParserContext *s,AVCodecContext *avctx,const uint8_t **poutbuf, int *poutbuf_size,const uint8_t *buf, int buf_size)
{H264ParseContext *p = s->priv_data;ParseContext *pc = &p->pc;AVRational time_base = { 0, 1 };int next;if (!p->got_first) {p->got_first = 1;if (avctx->extradata_size) {// h264解码额外数据// 用于解析AVCodecContext的extradata(里面实际上存储了H.264的SPS、PPS)ff_h264_decode_extradata(avctx->extradata, avctx->extradata_size,&p->ps, &p->is_avc, &p->nal_length_size,avctx->err_recognition, avctx);}}if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) { // 解析成功一帧next = buf_size;} else {next = h264_find_frame_end(p, buf, buf_size, avctx); // 找到一帧的结尾if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {*poutbuf      = NULL;*poutbuf_size = 0;return buf_size;}if (next < 0 && next != END_NOT_FOUND) {av_assert1(pc->last_index + next >= 0);h264_find_frame_end(p, &pc->buffer[pc->last_index + next], -next, avctx); // update state}}// 解析nal数据parse_nal_units(s, avctx, buf, buf_size);if (avctx->framerate.num)time_base = av_inv_q(av_mul_q(avctx->framerate, (AVRational){2, 1}));if (p->sei.picture_timing.cpb_removal_delay >= 0) {s->dts_sync_point    = p->sei.buffering_period.present;s->dts_ref_dts_delta = p->sei.picture_timing.cpb_removal_delay;s->pts_dts_delta     = p->sei.picture_timing.dpb_output_delay;} else {s->dts_sync_point    = INT_MIN;s->dts_ref_dts_delta = INT_MIN;s->pts_dts_delta     = INT_MIN;}if (s->flags & PARSER_FLAG_ONCE) {s->flags &= PARSER_FLAG_COMPLETE_FRAMES;}if (s->dts_sync_point >= 0) {int64_t den = time_base.den * (int64_t)avctx->pkt_timebase.num;if (den > 0) {int64_t num = time_base.num * (int64_t)avctx->pkt_timebase.den;if (s->dts != AV_NOPTS_VALUE) {// got DTS from the stream, update reference timestampp->reference_dts = av_sat_sub64(s->dts, av_rescale(s->dts_ref_dts_delta, num, den));} else if (p->reference_dts != AV_NOPTS_VALUE) {// compute DTS based on reference timestamps->dts = av_sat_add64(p->reference_dts, av_rescale(s->dts_ref_dts_delta, num, den));}if (p->reference_dts != AV_NOPTS_VALUE && s->pts == AV_NOPTS_VALUE)s->pts = s->dts + av_rescale(s->pts_dts_delta, num, den);if (s->dts_sync_point > 0)p->reference_dts = s->dts; // new reference}}*poutbuf      = buf;*poutbuf_size = buf_size;return next;
}
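h264_find_frame_end locates frame boundaries by scanning for Annex-B start codes and NAL unit boundaries. As a simplified illustration of the start-code search only (not the actual state machine the parser uses):

#include <stddef.h>
#include <stdint.h>

/* Return the offset of the next 00 00 01 start code in buf, or -1 if none. */
static ptrdiff_t find_annexb_start_code(const uint8_t *buf, size_t size)
{
    for (size_t i = 0; i + 2 < size; i++)
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1)
            return (ptrdiff_t)i;
    return -1;
}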

1.4 Appending a frame to the packet buffer (avpriv_packet_list_put)

This function appends a packet to the packet buffer; it is implemented in libavcodec\avpacket.c.

int avpriv_packet_list_put(PacketList *packet_buffer,
                           AVPacket *pkt,
                           int (*copy)(AVPacket *dst, const AVPacket *src),
                           int flags)
{
    PacketListEntry *pktl = av_malloc(sizeof(*pktl));
    int ret;

    if (!pktl)
        return AVERROR(ENOMEM);

    if (copy) {
        get_packet_defaults(&pktl->pkt);
        ret = copy(&pktl->pkt, pkt);
        if (ret < 0) {
            av_free(pktl);
            return ret;
        }
    } else {
        ret = av_packet_make_refcounted(pkt);
        if (ret < 0) {
            av_free(pktl);
            return ret;
        }
        av_packet_move_ref(&pktl->pkt, pkt);
    }

    pktl->next = NULL;

    if (packet_buffer->head)
        packet_buffer->tail->next = pktl;
    else
        packet_buffer->head = pktl;

    /* Add the packet in the buffered packet list. */
    packet_buffer->tail = pktl;
    return 0;
}

CSDN : https://blog.csdn.net/weixin_42877471
Github : https://github.com/DoFulangChen
