# Offscreen Rendering

## Shared Texture Mode

This section provides a brief summary of how an offscreen frame is generated and how to handle it in native code. It only applies to the GPU-accelerated mode with shared texture, i.e. when `webPreferences.offscreen.useSharedTexture` is set to `true`.
### Life of an Offscreen Frame

This is written at the time of Chromium 134 / Electron 35. The code may change in the future. The following description may not completely reflect the procedure, but it generally describes the process.

#### Initialization
1. Electron JS creates a `BrowserWindow` with `webPreferences.offscreen` set to `true`.
2. Electron C++ creates an `OffScreenRenderWidgetHostView`, a subclass of `RenderWidgetHostViewBase`.
3. It instantiates an `OffScreenVideoConsumer`, passing itself as a reference as `view_`.
4. The `OffScreenVideoConsumer` calls `view_->CreateVideoCapturer()`, which makes Chromium code use `HostFrameSinkManager` to communicate with `FrameSinkManagerImpl` (in the Renderer Process) to create a `ClientFrameSinkVideoCapturer` and a `FrameSinkVideoCapturerImpl` (in the Renderer Process). It stores the `ClientFrameSinkVideoCapturer` in `video_capturer_`.
5. The `OffScreenVideoConsumer` registers the capture callback to `OffScreenRenderWidgetHostView::OnPaint`.
6. It sets the target FPS and size constraints for the capture, and calls `video_capturer_->Start` with the parameter `viz::mojom::BufferFormatPreference::kPreferGpuMemoryBuffer` to enable shared texture mode and start capturing.
7. The `FrameSinkVideoCapturerImpl` accepts `kPreferGpuMemoryBuffer` and creates a `GpuMemoryBufferVideoFramePool` to copy the captured frame. The capacity is `kFramePoolCapacity`, currently `10`, meaning it can hold at most `10` captured frames if the consumer doesn't consume them in time. The pool is stored in `frame_pool_`.
8. The `GpuMemoryBufferVideoFramePool` creates a `RenderableGpuMemoryBufferVideoFramePool` using `GmbVideoFramePoolContext` as the context provider, which is responsible for creating `GpuMemoryBuffer`s (or `MappableSharedImage`s, as the Chromium team is removing the concept of `GpuMemoryBuffer` and may replace it with `MappableSI` in the future).
9. The `GmbVideoFramePoolContext` initializes itself in both the Renderer Process and the GPU Process.
#### Capturing
1. The `FrameSinkVideoCapturerImpl` starts a capture when it receives an `OnFrameDamaged` event or is explicitly requested to refresh the frame. All event sources are evaluated by the `VideoCaptureOracle` to check that the capture frequency stays within the limit.
2. If a frame is determined to be captured, `FrameSinkVideoCapturerImpl` calls `frame_pool_->ReserveVideoFrame()` to make the pool allocate a frame. The `GmbVideoFramePoolContext` then communicates with the GPU Process to create an actual platform-dependent texture (e.g., `ID3D11Texture2D` or `IOSurface`) that supports being shared across processes.
3. The GPU Process wraps it into a `GpuMemoryBuffer` and sends it back to the Renderer Process, and the pool stores it for further usage.
4. The `FrameSinkVideoCapturerImpl` then uses this allocated (or reused) frame to create a `CopyOutputRequest` and calls `resolved_target_->RequestCopyOfOutput` to copy the frame to the target texture. The `resolved_target_` is a `CapturableFrameSink` that was previously resolved, when calling `CreateVideoCapturer`, using the `OffScreenRenderWidgetHostView`.
5. The GPU Process receives the request and renders the frame to the target texture in the requested format (e.g., `RGBA`). It then sends a completed event to the `FrameSinkVideoCapturerImpl` in the Renderer Process.
6. The `FrameSinkVideoCapturerImpl` receives the completed event, provides feedback to the `VideoCaptureOracle`, and then calls `frame_pool_->CloneHandleForDelivery` with the captured frame to get a serializable handle to the frame (a `HANDLE` or `IOSurfaceRef`). On Windows, it calls `DuplicateHandle` to create a new handle.
7. It then creates a `VideoFrameInfo` with the frame info and the handle, and calls `consumer_->OnFrameCaptured` to deliver the frame to the consumer.
#### Consuming
1. `OffScreenVideoConsumer::OnFrameCaptured` is called when the frame is captured. It creates an Electron C++ struct `OffscreenSharedTextureValue` to extract the required info and handle from the callback. It then creates an `OffscreenReleaserHolder` to take ownership of the handle and the mojom remote releaser, preventing them from being released.
2. It calls the `callback_` with the `OffscreenSharedTextureValue`, which goes to `OffScreenRenderWidgetHostView::OnPaint`. When shared texture mode is enabled, it directly redirects the callback to an `OnPaintCallback` target set during the initialization of `OffScreenRenderWidgetHostView`, currently set by `OffScreenWebContentsView`, whose `callback_` is also set during initialization. Finally, it goes to `WebContents::OnPaint`.
3. `WebContents::OnPaint` uses `gin_converter` (`osr_converter.cc`) to convert the `OffscreenSharedTextureValue` to a `v8::Object`. It converts most of the values to corresponding `v8::Value`s, and the handle is converted to a `Buffer`. It also creates a `release` function to destroy the releaser and free the frame we previously took ownership of, so the frame can be reused for further capturing. Finally, it creates a release monitor to detect whether the `release` function is called before the garbage collector destroys the JS object; if not, it prints a warning.
4. The data is then emitted with the `paint` event of `webContents`. You can now grab the data and pass it to your native code for further processing. You can pass the `textureInfo` to other processes using Electron IPC, but you can only `release` it in the main process. Do not keep it for long, or it will drain the buffer pool.
### Native Handling

You now have the texture info for the frame. Here's how you should handle it in native code, assuming you are writing a Node.js native addon to handle the shared texture.

Retrieve the handle from `textureInfo.sharedTextureHandle`. (You can also read the buffer in JS and pass it to your addon in another way.)
```cpp
// Get the shared texture handle buffer from the textureInfo object
// passed in from the JS side.
auto textureInfo = args[0];
auto sharedTextureHandle =
    NAPI_GET_PROPERTY_VALUE(textureInfo, "sharedTextureHandle");

// Extract the raw handle bytes from the Node.js Buffer.
size_t handleBufferSize;
uint8_t* handleBufferData;
napi_get_buffer_info(env, sharedTextureHandle,
                     reinterpret_cast<void**>(&handleBufferData),
                     &handleBufferSize);
```
Import the handle into your rendering program.
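The Windows snippet below uses a D3D11 device (`device`), an `ID3D11Device1` (`device1`, which provides `OpenSharedResource1`), and an immediate context (`context`). These are assumed to already exist in your program; a minimal sketch of creating them might look like this:

```cpp
// Create a hardware D3D11 device plus immediate context, then query the
// ID3D11Device1 interface needed for OpenSharedResource1.
Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0, nullptr, 0,
                  D3D11_SDK_VERSION, &device, nullptr, &context);

Microsoft::WRL::ComPtr<ID3D11Device1> device1;
device->QueryInterface(IID_PPV_ARGS(&device1));
```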
```cpp
// Windows
HANDLE handle = *reinterpret_cast<HANDLE*>(handleBufferData);

Microsoft::WRL::ComPtr<ID3D11Texture2D> shared_texture = nullptr;
HRESULT hr =
    device1->OpenSharedResource1(handle, IID_PPV_ARGS(&shared_texture));

// Extract the texture description
D3D11_TEXTURE2D_DESC desc;
shared_texture->GetDesc(&desc);

// Cache the staging texture, recreating it if it does not exist yet or the
// size has changed. Reset() releases any previously cached texture.
if (!cached_staging_texture || cached_width != desc.Width ||
    cached_height != desc.Height) {
  cached_staging_texture.Reset();

  desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
  desc.Usage = D3D11_USAGE_STAGING;
  desc.BindFlags = 0;
  desc.MiscFlags = 0;

  std::cout << "Create staging Texture2D width=" << desc.Width
            << " height=" << desc.Height << std::endl;
  hr = device->CreateTexture2D(&desc, nullptr, &cached_staging_texture);

  cached_width = desc.Width;
  cached_height = desc.Height;
}

// Copy to an intermediate texture
context->CopyResource(cached_staging_texture.Get(), shared_texture.Get());
```
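If you need CPU access to the pixels, you can map the staging texture after the copy. This step is not part of the original snippet; a minimal sketch:

```cpp
// Map the staging texture for CPU reads. Rows are padded to
// mapped.RowPitch bytes, so copy row by row into a tightly packed buffer.
D3D11_MAPPED_SUBRESOURCE mapped = {};
hr = context->Map(cached_staging_texture.Get(), 0, D3D11_MAP_READ, 0, &mapped);
if (SUCCEEDED(hr)) {
  std::vector<uint8_t> pixels(size_t{desc.Width} * desc.Height * 4);
  auto* src = static_cast<const uint8_t*>(mapped.pData);
  for (UINT row = 0; row < desc.Height; ++row) {
    memcpy(pixels.data() + size_t{row} * desc.Width * 4,
           src + size_t{row} * mapped.RowPitch, size_t{desc.Width} * 4);
  }
  context->Unmap(cached_staging_texture.Get(), 0);
}
```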
```cpp
// macOS
IOSurfaceRef io_surface = *reinterpret_cast<IOSurfaceRef*>(handleBufferData);

// Assume you have created a GL context.
GLuint io_surface_tex;
glGenTextures(1, &io_surface_tex);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, io_surface_tex);

CGLContextObj cgl_context = CGLGetCurrentContext();

GLsizei width = (GLsizei)IOSurfaceGetWidth(io_surface);
GLsizei height = (GLsizei)IOSurfaceGetHeight(io_surface);

CGLTexImageIOSurface2D(cgl_context, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA8, width,
                       height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                       io_surface, 0);

glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);

// Copy to an intermediate texture from io_surface_tex
// ...
```
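One way to do that copy, sketched here under the assumption that `owned_tex` is a `GL_TEXTURE_2D` of the same size that you created and manage yourself, is to attach the IOSurface-backed rectangle texture to a framebuffer and copy out of it:

```cpp
// Attach the rectangle texture to a read framebuffer so that
// glCopyTexSubImage2D can copy from it into our own texture.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_RECTANGLE_ARB, io_surface_tex, 0);

// owned_tex is a GL_TEXTURE_2D you allocated elsewhere with the same size.
glBindTexture(GL_TEXTURE_2D, owned_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
```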
As introduced above, the shared texture is not a single fixed texture that Chromium draws to on every frame (CEF prior to Chromium 103 worked that way; note this significant difference). It is a pool of textures, so Chromium may pass you a different texture for every frame. As soon as you call `release`, the texture can be reused by Chromium, and reading from it after that may produce corrupted pictures. It is also wrong to open the handle only once and keep reading from that opened texture on later events. Be very careful if you want to cache the opened texture(s): on Windows, the duplicated handle's value is not a reliable mapping to the actual underlying texture.

The suggested approach is to open the handle on every event, copy the shared texture to an intermediate texture that you own, and release the shared texture as soon as possible. You can then use the copied texture for further rendering whenever you want. Opening a shared texture should only take a couple of microseconds, and you can use the damage rect to copy only a portion of the texture to speed things up.
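Putting this together, the per-event flow in an addon might look like the sketch below. `CopySharedTexture` is a hypothetical helper wrapping the platform-specific code above; `release` is the function that Electron puts on the `textureInfo` object:

```cpp
// Called from JS with the textureInfo object of a `paint` event.
// 1. Open the shared texture handle and copy it into a texture we own
//    (hypothetical helper wrapping the platform code above).
CopySharedTexture(env, textureInfo);

// 2. Release the frame back to Chromium as soon as the copy is done,
//    so the frame pool does not drain.
napi_value release_fn;
napi_get_named_property(env, textureInfo, "release", &release_fn);
napi_value result;
napi_call_function(env, textureInfo, release_fn, 0, nullptr, &result);
```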
You can also refer to these examples: