Self-Rendering Video Data on HarmonyOS
In this article, we'll walk through how to render video data yourself on HarmonyOS. The steps we'll implement include creating a native window, setting buffer options, requesting a buffer, handling video frame data, and flushing the buffer.
Environment Setup
Before starting, make sure you have the HarmonyOS development environment installed and can build and run C++ code.
Warning ⚠️
None of the APIs used below are thread safe! In my code they are protected by other means; please add your own locking.
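As a minimal sketch of what "add your own locking" might look like: a small wrapper that serializes every call into the native-window APIs behind a `std::mutex`. The class and method names here are mine, purely for illustration, not anything from the OHOS SDK:

```cpp
#include <mutex>

// Hypothetical guard: serializes all access to the (non-thread-safe)
// native-window APIs. Every window operation runs under one mutex.
class SerializedRenderer {
public:
    // Run any window operation while holding the lock and return its result.
    template <typename Fn>
    auto WithLock(Fn &&fn) {
        std::lock_guard<std::mutex> lock(mutex_);
        return fn();
    }

private:
    std::mutex mutex_;
};
```

In practice you would route both the per-frame callback and any teardown path through the same lock, since destroying the window while a frame is in flight is exactly the kind of race these APIs don't protect against.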
Creating a Native Window
First, we need to create a native window from a surfaceId. There are two ways to obtain this surfaceId:
- In native ArkTS UI development, get it from an XComponent;
- In a cross-platform framework such as Flutter, register a Texture through the TextureRegistry and get it from there.
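One small detail worth noting: the surfaceId that the ArkTS side hands over (e.g. from `XComponentController.getXComponentSurfaceId()`) arrives in native code as a decimal string, while `OH_NativeWindow_CreateNativeWindowFromSurfaceId` expects a `uint64_t`. A tiny conversion helper (the function name is mine, not from the SDK) bridges the two, assuming a well-formed decimal string:

```cpp
#include <cstdint>
#include <string>

// Convert the surfaceId string received from ArkTS/Flutter into the
// uint64_t that OH_NativeWindow_CreateNativeWindowFromSurfaceId expects.
inline uint64_t ParseSurfaceId(const std::string &id) {
    return std::stoull(id);
}
```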
The constructor of the NativeWindowRender class implements this:
NativeWindowRender::NativeWindowRender(const RenderConfig &config) : render_config_(config) {
    int ret = OH_NativeWindow_CreateNativeWindowFromSurfaceId(config.surfaceId, &window_);
    if (window_ == nullptr || ret != 0) {
        LOG_ERROR("create native window failed: {}", ret);
        return;
    }
}
Here we use OH_NativeWindow_CreateNativeWindowFromSurfaceId to create a native window from the surfaceId. If creation fails, an error is logged.
Setting Buffer Options
After creating the native window, we need to set some buffer options, such as buffer usage, swap interval, request timeout, color range, and transform; see NativeWindowOperation for the full list. For now I've only set the ones below, to see how the rendering looks and performs:
bool NativeWindowRender::UpdateNativeBufferOptionsByVideoFrame(const VideoFrame *frame) {
#if 0
    // Get the current buffer geometry; reset it only if it differs.
    int32_t stride = 0;
    int32_t height = 0;
    // Beware: GET_BUFFER_GEOMETRY returns (height, width),
    // but SET_BUFFER_GEOMETRY takes (width, height).
    int ret = OH_NativeWindow_NativeWindowHandleOpt(window_, GET_BUFFER_GEOMETRY, &height, &stride);
    if (ret != 0) {
        LOG_ERROR("get buffer geometry failed: {}", ret);
        return false;
    }
    // Set the buffer geometry if it differs.
    if (stride != frame->yStride || height != frame->height) {
        ret = OH_NativeWindow_NativeWindowHandleOpt(window_, SET_BUFFER_GEOMETRY, frame->yStride, frame->height);
        if (ret != 0) {
            LOG_ERROR("set buffer geometry failed: {}", ret);
            return false;
        }
    }
#else
    int ret = OH_NativeWindow_NativeWindowHandleOpt(window_, SET_BUFFER_GEOMETRY, frame->yStride, frame->height);
    if (ret != 0) {
        LOG_ERROR("set buffer geometry failed: {}", ret);
        return false;
    }
#endif
    // Set the buffer format.
    ret = OH_NativeWindow_NativeWindowHandleOpt(window_, SET_FORMAT, NATIVEBUFFER_PIXEL_FMT_YCBCR_420_P);
    if (ret != 0) {
        LOG_ERROR("set buffer format failed: {}", ret);
        return false;
    }
    // Set the buffer stride.
    ret = OH_NativeWindow_NativeWindowHandleOpt(window_, SET_STRIDE, 4);
    if (ret != 0) {
        LOG_ERROR("set buffer stride failed: {}", ret);
        return false;
    }
    // Set the native source type to OH_SURFACE_SOURCE_VIDEO.
    ret = OH_NativeWindow_NativeWindowHandleOpt(window_, SET_SOURCE_TYPE, OH_SURFACE_SOURCE_VIDEO);
    if (ret != 0) {
        LOG_ERROR("set source type failed: {}", ret);
        return false;
    }
    // Set the app framework type.
    ret = OH_NativeWindow_NativeWindowHandleOpt(window_, SET_APP_FRAMEWORK_TYPE, "unknown");
    if (ret != 0) {
        LOG_ERROR("set app framework type failed: {}", ret);
        return false;
    }
    return true;
}
Requesting a Buffer and Handling Video Frame Data
When a video frame arrives, we need to request a buffer and write the frame data into it. Here comes the pitfall: you must call UpdateNativeBufferOptionsByVideoFrame before every single request. I couldn't find this written anywhere in the docs (or maybe I just failed to understand them), and it cost me an hour or two. Debugging with lldb finally showed that if the options are only set in the constructor, only the first requested BufferHandle has the expected width, height, stride, format, and so on; after that they're wrong. The result: the first frame draws fine, then nothing appears, with no errors at all. So remember: call it before every request (right? corrections welcome).
void NativeWindowRender::OnVideoFrameReceived(const void *videoFrame, const VideoFrameConfig &config, bool resize) {
    const VideoFrame *frame = reinterpret_cast<const VideoFrame *>(videoFrame);
    // Must be called every time: after each request, the properties of the
    // BufferHandle may fall back to their default values.
    if (!UpdateNativeBufferOptionsByVideoFrame(frame)) {
        return;
    }
    // Request a buffer from the native window.
    OHNativeWindowBuffer *buffer = nullptr;
    int fence_fd = -1;
    int ret = OH_NativeWindow_NativeWindowRequestBuffer(window_, &buffer, &fence_fd);
    if (ret != 0 || buffer == nullptr) {
        LOG_ERROR("request buffer failed: {}", ret);
        return;
    }
    // Get the buffer handle from the native buffer.
    BufferHandle *handle = OH_NativeWindow_GetBufferHandleFromNative(buffer);
    // mmap the buffer handle so we can write into it.
    void *data = mmap(handle->virAddr, handle->size, PROT_READ | PROT_WRITE, MAP_SHARED, handle->fd, 0);
    if (data == MAP_FAILED) {
        LOG_ERROR("mmap buffer failed");
        // Don't leak the fence fd on the error path.
        if (fence_fd != -1) {
            close(fence_fd);
        }
        return;
    }
    // Wait for the fence fd to be signaled before writing.
    uint32_t timeout = 3000;
    if (fence_fd != -1) {
        struct pollfd fds = {.fd = fence_fd, .events = POLLIN};
        do {
            ret = poll(&fds, 1, timeout);
        } while (ret == -1 && (errno == EINTR || errno == EAGAIN));
        close(fence_fd);
    }
    // Copy the YUV420 planes into the buffer.
    uint8_t *y = (uint8_t *)data;
    uint8_t *u = y + frame->yStride * frame->height;
    uint8_t *v = u + frame->uStride * frame->height / 2;
    memcpy(y, frame->yBuffer, frame->yStride * frame->height);
    memcpy(u, frame->uBuffer, frame->uStride * frame->height / 2);
    memcpy(v, frame->vBuffer, frame->vStride * frame->height / 2);
    // Flush the buffer so the consumer can display it.
    Region region{.rects = nullptr, .rectNumber = 0};
    int acquire_fence_fd = -1;
    ret = OH_NativeWindow_NativeWindowFlushBuffer(window_, buffer, acquire_fence_fd, region);
    if (ret != 0) {
        LOG_ERROR("flush buffer failed: {}", ret);
    }
    // Unmap the buffer handle.
    ret = munmap(data, handle->size);
    if (ret != 0) {
        LOG_ERROR("munmap buffer failed: {}", ret);
    }
}
Here we first call UpdateNativeBufferOptionsByVideoFrame to refresh the buffer options, then request a buffer and write the video frame data into it. Finally, we flush the buffer and unmap it.
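The plane arithmetic in the copy step deserves a second look, since an off-by-one in the offsets produces garbage chroma rather than an error. For planar YUV 4:2:0 (the NATIVEBUFFER_PIXEL_FMT_YCBCR_420_P layout used above), the Y plane occupies yStride × height bytes, followed by U and V planes of uStride × height / 2 and vStride × height / 2 bytes. A small helper (mine, for illustration only) that computes the same offsets the handler uses:

```cpp
#include <cstddef>
#include <cstdint>

// Byte offsets of the Y, U, and V planes inside a planar YUV 4:2:0
// buffer, given per-plane strides and the frame height. Matches the
// pointer arithmetic in OnVideoFrameReceived.
struct PlaneLayout {
    size_t yOffset, uOffset, vOffset, totalSize;
};

inline PlaneLayout ComputeI420Layout(int32_t yStride, int32_t uStride,
                                     int32_t vStride, int32_t height) {
    PlaneLayout l{};
    l.yOffset = 0;                                                 // Y starts at the buffer base
    l.uOffset = static_cast<size_t>(yStride) * height;             // U follows the full Y plane
    l.vOffset = l.uOffset + static_cast<size_t>(uStride) * height / 2;  // V follows the half-height U plane
    l.totalSize = l.vOffset + static_cast<size_t>(vStride) * height / 2;
    return l;
}
```

For a 640×480 frame with yStride = 640 and uStride = vStride = 320, this gives a U offset of 307,200 bytes, a V offset of 384,000 bytes, and a total of 460,800 bytes, which is also a quick sanity check against handle->size before writing.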
Conclusion
With the steps above, we can render video data ourselves on HarmonyOS. I hope this article helps; if you have any questions or suggestions, feel free to contact me.
PS
Another option is to take the NativeWindow and draw into it yourself with OpenGL. But you know how OpenGL is: one context per thread. If I'm rendering a dozen or several dozen video streams, do I need that many threads? Who on a mobile platform could accept that? As for sharing contexts, as an OpenGL newcomer I have no confidence I could pull that off without mistakes. See the official doc 自定义渲染 (XComponent). So this approach, which hands the drawing over to the HarmonyOS UI framework, or rather to the system itself, call it the lazy person's approach?