# Offscreen Rendering

## Shared Texture Mode
This section provides a brief overview of how an offscreen frame is generated and how to handle it in native code. It applies only to the GPU-accelerated mode with a shared texture, i.e. when `webPreferences.offscreen.useSharedTexture` is set to `true`.
### Life of an Offscreen Frame
This is written as of Chromium 134 / Electron 35; the code may change in the future. The following description may not reflect every detail of the procedure, but it generally describes the process.
#### Initialization
- Electron JS creates a `BrowserWindow` with `webPreferences.offscreen` set to `true`.
- Electron C++ creates an `OffScreenRenderWidgetHostView`, a subclass of `RenderWidgetHostViewBase`.
- It instantiates an `OffScreenVideoConsumer`, passing itself as a reference as `view_`.
- The `OffScreenVideoConsumer` calls `view_->CreateVideoCapturer()`, which makes Chromium code use `HostFrameSinkManager` to communicate with `FrameSinkManagerImpl` (in the Renderer Process) to create a `ClientFrameSinkVideoCapturer` and a `FrameSinkVideoCapturerImpl` (in the Renderer Process). It stores the `ClientFrameSinkVideoCapturer` in `video_capturer_`.
- The `OffScreenVideoConsumer` registers the capture callback to `OffScreenRenderWidgetHostView::OnPaint`.
- It sets the target FPS and size constraints for capture, and calls `video_capturer_->Start` with the parameter `viz::mojom::BufferFormatPreference::kPreferGpuMemoryBuffer` to enable shared texture mode and start capturing.
- The `FrameSinkVideoCapturerImpl` accepts `kPreferGpuMemoryBuffer` and creates a `GpuMemoryBufferVideoFramePool` to copy the captured frames. The capacity is `kFramePoolCapacity`, currently `10`, meaning it can hold at most `10` frames in flight if the consumer doesn't consume them in time (see the toy model after this list). The pool is stored in `frame_pool_`.
- The `GpuMemoryBufferVideoFramePool` creates a `RenderableGpuMemoryBufferVideoFramePool` using `GmbVideoFramePoolContext` as the context provider, which is responsible for creating each `GpuMemoryBuffer` (or `MappableSharedImage`, as the Chromium team is removing the concept of `GpuMemoryBuffer` and may replace it with `MappableSI` in the future).
- The `GmbVideoFramePoolContext` initializes itself in both the Renderer Process and the GPU Process.
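To make the pool's behavior concrete, here is a runnable toy model of a bounded frame pool. All names here are illustrative stand-ins, not the real Chromium classes:

```cpp
#include <cstdio>
#include <vector>

// Toy stand-in for GpuMemoryBufferVideoFramePool: a frame stays unavailable
// from the moment it is reserved until the consumer releases it.
struct ToyFrame {
  int id;
  bool in_use = false;
};

class ToyFramePool {
 public:
  explicit ToyFramePool(int capacity) {
    for (int i = 0; i < capacity; ++i)
      frames_.push_back(ToyFrame{i});
  }

  // Mirrors ReserveVideoFrame(): fails when every frame is still held.
  ToyFrame* Reserve() {
    for (ToyFrame& f : frames_) {
      if (!f.in_use) {
        f.in_use = true;
        return &f;
      }
    }
    return nullptr;
  }

  // Mirrors what release() on the JS side eventually triggers.
  void Release(ToyFrame* f) { f->in_use = false; }

 private:
  std::vector<ToyFrame> frames_;
};

int main() {
  ToyFramePool pool(10);  // kFramePoolCapacity is currently 10

  // A consumer that never releases frames drains the pool: the 11th
  // reservation fails, and capturing stalls until a frame is released.
  for (int i = 0; i < 11; ++i) {
    ToyFrame* frame = pool.Reserve();
    std::printf("capture %d -> %s\n", i, frame ? "ok" : "pool exhausted");
  }
  return 0;
}
```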
#### Capturing
- The `FrameSinkVideoCapturerImpl` starts a capture when it receives an `OnFrameDamaged` event or is explicitly requested to refresh the frame. All event sources are evaluated by `VideoCaptureOracle` to see if the capture frequency meets the limit.
- If a frame is determined to be captured, `FrameSinkVideoCapturerImpl` calls `frame_pool_->ReserveVideoFrame()` to make the pool allocate a frame. The `GmbVideoFramePoolContext` then communicates with the GPU Process to create an actual platform-dependent texture (e.g., `ID3D11Texture2D` or `IOSurface`) that supports being shared across processes.
- The GPU Process wraps it into a `GpuMemoryBuffer` and sends it back to the Renderer Process, and the pool stores it for further usage.
- The `FrameSinkVideoCapturerImpl` then uses this allocated (or reused) frame to create a `CopyOutputRequest` and calls `resolved_target_->RequestCopyOfOutput` to copy the frame to the target texture. The `resolved_target_` is a `CapturableFrameSink` that was previously resolved when calling `CreateVideoCapturer` using `OffScreenRenderWidgetHostView`.
- The GPU Process receives the request and renders the frame to the target texture using the requested format (e.g., `RGBA`). It then sends a completed event to the Renderer Process `FrameSinkVideoCapturerImpl`.
- The `FrameSinkVideoCapturerImpl` receives the completed event, provides feedback to the `VideoCaptureOracle`, and then calls `frame_pool_->CloneHandleForDelivery` with the captured frame to get a serializable handle to the frame (`HANDLE` or `IOSurfaceRef`). On Windows, it calls `DuplicateHandle` to create a new handle (see the sketch after this list).
- It then creates a `VideoFrameInfo` with the frame info and the handle, and calls `consumer_->OnFrameCaptured` to deliver the frame to the consumer.
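On Windows, the clone-for-delivery step boils down to a `DuplicateHandle` call. A minimal illustrative sketch (not the actual Chromium code); note that every duplication yields a new handle value referring to the same underlying kernel object, which is why a handle's value cannot identify the texture (see the caveats under Native Handling below):

```cpp
#include <windows.h>

// Illustrative stand-in for the clone step: duplicate a shareable texture
// handle within the current process for delivery to the consumer.
HANDLE CloneHandleForDelivery(HANDLE source) {
  HANDLE duplicated = nullptr;
  ::DuplicateHandle(::GetCurrentProcess(), source,
                    ::GetCurrentProcess(), &duplicated,
                    /*dwDesiredAccess=*/0, /*bInheritHandle=*/FALSE,
                    DUPLICATE_SAME_ACCESS);
  return duplicated;
}
```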
#### Consuming
- `OffScreenVideoConsumer::OnFrameCaptured` is called when the frame is captured. It creates an Electron C++ struct `OffscreenSharedTextureValue` to extract the required info and handle from the callback. It then creates an `OffscreenReleaserHolder` to take ownership of the handle and the mojom remote releaser, preventing the frame from being released prematurely.
- It calls the `callback_` with the `OffscreenSharedTextureValue`, which goes to `OffScreenRenderWidgetHostView::OnPaint`. When shared texture mode is enabled, it directly redirects the callback to an `OnPaintCallback` target set during the initialization of `OffScreenRenderWidgetHostView`, currently set by `OffScreenWebContentsView`, whose `callback_` is also set during initialization. Finally, it goes to `WebContents::OnPaint`.
- `WebContents::OnPaint` uses a `gin_converter` (`osr_converter.cc`) to convert the `OffscreenSharedTextureValue` to a `v8::Object`. It converts most of the value to corresponding `v8::Value`s, and the handle is converted to a `Buffer`. It also creates a `release` function to destroy the releaser and free the frame we previously took ownership of, after which the frame can be reused for further capturing. Finally, it creates a release monitor to detect whether the `release` function is called before the garbage collector destroys the JS object; if not, it prints a warning.
- The data is then emitted to the `paint` event of `webContents`. You can now grab the data and pass it to your native code for further processing. You can pass the `textureInfo` to other processes using Electron IPC, but you can only `release` it in the main process. Do not keep it for long, or it will drain the buffer pool.
### Native Handling
You now have the texture info for the frame. Here's how to handle it in native code, assuming you are writing a Node.js native addon to process the shared texture.

Retrieve the handle from `textureInfo.sharedTextureHandle`. You can also read the buffer in JS and use other methods to pass it.
```cpp
// Retrieve the shared texture handle from the object passed to the addon.
// NAPI_GET_PROPERTY_VALUE is assumed to be a small helper macro wrapping
// napi_get_named_property.
auto textureInfo = args[0];
auto sharedTextureHandle =
    NAPI_GET_PROPERTY_VALUE(textureInfo, "sharedTextureHandle");

size_t handleBufferSize;
uint8_t* handleBufferData;
napi_get_buffer_info(env, sharedTextureHandle,
                     reinterpret_cast<void**>(&handleBufferData),
                     &handleBufferSize);
```
Import the handle into your rendering program.
```cpp
// Windows
// Assumes `device` (ID3D11Device), `device1` (ID3D11Device1), and `context`
// (ID3D11DeviceContext) already exist, and that `cached_staging_texture`
// (Microsoft::WRL::ComPtr<ID3D11Texture2D>), `cached_width`, and
// `cached_height` are long-lived members.
HANDLE handle = *reinterpret_cast<HANDLE*>(handleBufferData);

Microsoft::WRL::ComPtr<ID3D11Texture2D> shared_texture;
HRESULT hr =
    device1->OpenSharedResource1(handle, IID_PPV_ARGS(&shared_texture));

// Extract the texture description
D3D11_TEXTURE2D_DESC desc;
shared_texture->GetDesc(&desc);

// (Re)create the staging texture if it does not exist or the size has changed
if (!cached_staging_texture || cached_width != desc.Width ||
    cached_height != desc.Height) {
  // Let ComPtr release the old texture; do not call Release() manually here.
  cached_staging_texture.Reset();

  desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
  desc.Usage = D3D11_USAGE_STAGING;
  desc.BindFlags = 0;
  desc.MiscFlags = 0;

  std::cout << "Create staging Texture2D width=" << desc.Width
            << " height=" << desc.Height << std::endl;
  hr = device->CreateTexture2D(&desc, nullptr, &cached_staging_texture);

  cached_width = desc.Width;
  cached_height = desc.Height;
}

// Copy to an intermediate texture that you own
context->CopyResource(cached_staging_texture.Get(), shared_texture.Get());
```
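If your goal is CPU access to the pixels (the usual reason for the staging copy above), you can then map the staging texture. A minimal sketch, where `dst` is a hypothetical destination buffer of at least `desc.Width * desc.Height * 4` bytes:

```cpp
// Map the staging texture and copy the pixels out row by row. RowPitch may
// be larger than Width * 4, so copying the whole image with one memcpy is
// incorrect. `dst` is your own (hypothetical) pixel buffer.
D3D11_MAPPED_SUBRESOURCE mapped = {};
hr = context->Map(cached_staging_texture.Get(), 0, D3D11_MAP_READ, 0, &mapped);
if (SUCCEEDED(hr)) {
  const uint8_t* src = static_cast<const uint8_t*>(mapped.pData);
  for (UINT row = 0; row < desc.Height; ++row) {
    memcpy(dst + row * desc.Width * 4, src + row * mapped.RowPitch,
           desc.Width * 4);
  }
  context->Unmap(cached_staging_texture.Get(), 0);
}
```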
```cpp
// macOS
// Assumes a CGL (OpenGL) context has already been created and made current
// on this thread.
IOSurfaceRef io_surface = *reinterpret_cast<IOSurfaceRef*>(handleBufferData);

GLuint io_surface_tex;
glGenTextures(1, &io_surface_tex);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, io_surface_tex);

CGLContextObj cgl_context = CGLGetCurrentContext();

GLsizei width = (GLsizei)IOSurfaceGetWidth(io_surface);
GLsizei height = (GLsizei)IOSurfaceGetHeight(io_surface);

// Bind the IOSurface to the rectangle texture
CGLTexImageIOSurface2D(cgl_context, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA8, width,
                       height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                       io_surface, 0);

glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);

// Copy from io_surface_tex to an intermediate texture that you own
// ...
```
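If you only need the pixels on the CPU on macOS, you can skip OpenGL entirely and lock the `IOSurface` directly. A minimal sketch using the IOSurface C API:

```cpp
// Lock the IOSurface for read-only CPU access and read the pixels out.
// As with D3D11's RowPitch, the per-row stride may exceed width * 4 bytes.
IOSurfaceLock(io_surface, kIOSurfaceLockReadOnly, nullptr);

const uint8_t* base =
    static_cast<const uint8_t*>(IOSurfaceGetBaseAddress(io_surface));
size_t stride = IOSurfaceGetBytesPerRow(io_surface);
size_t surface_height = IOSurfaceGetHeight(io_surface);

// ... copy `surface_height` rows of pixels out of `base`, honoring `stride` ...

IOSurfaceUnlock(io_surface, kIOSurfaceLockReadOnly, nullptr);
```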
As described above, the shared texture is not a single fixed texture that Chromium draws to on every frame (CEF before Chromium 103 worked that way; note this significant difference). It is a pool of textures, so Chromium may pass you a different texture on every frame. As soon as you call `release`, the texture may be reused by Chromium, and continuing to read from it can cause picture corruption. It is also wrong to open the handle only once and keep reading from that same opened texture on later events. Be very careful if you want to cache the opened texture(s): on Windows, the duplicated handle's value is not a reliable mapping to the actual underlying texture.
The suggested approach is to open the handle on every event, copy the shared texture to an intermediate texture that you own, and `release` it as soon as possible. You can then use the copied texture for further rendering whenever you want. Opening a shared texture should take only a couple of microseconds, and you can use the damage rect to copy only a portion of the texture to speed things up, as in the sketch below.
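On Windows, for example, the partial copy can be done with `CopySubresourceRegion`. A minimal sketch, where `dirty` is a hypothetical struct carrying the damage rect (x, y, width, height) delivered alongside the frame:

```cpp
// Copy only the damaged region of the shared texture into the staging
// texture. `dirty` is a hypothetical damage-rect struct; the real field
// names depend on how you pass the rect into your addon.
D3D11_BOX box = {};
box.left = dirty.x;
box.top = dirty.y;
box.right = dirty.x + dirty.width;
box.bottom = dirty.y + dirty.height;
box.front = 0;
box.back = 1;
context->CopySubresourceRegion(cached_staging_texture.Get(), 0,
                               box.left, box.top, 0,  // destination x, y, z
                               shared_texture.Get(), 0, &box);
```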
You can also refer to these examples: