feat: GPU shared texture offscreen rendering (#42953)

* feat: GPU shared texture offscreen rendering

* docs: clarify texture info passed by the paint event.

* feat: make gpu osr spec test optional

* fix: osr image compare

* fix: remove duplicate test

* fix: update patch file

* fix: code review

* feat: expose more metadata

* feat: use better switch design

* feat: add warning when user forgets to release the texture.

* fix: typo

* chore: update patch

* fix: update patch

* fix: update patch description

* fix: update docs

* fix: apply suggestions from code review

Co-authored-by: Charles Kerr <charles@charleskerr.com>

* fix: apply suggested fixes

---------

Co-authored-by: Charles Kerr <charles@charleskerr.com>
reito 2024-08-23 08:23:13 +08:00 committed by GitHub
parent b481966f02
commit 1aeca6fd0e
34 changed files with 1009 additions and 102 deletions


@ -0,0 +1,24 @@
# OffscreenSharedTexture Object
* `textureInfo` Object - The shared texture info.
* `widgetType` string - The widget type of the texture. Can be `popup` or `frame`.
* `pixelFormat` string - The pixel format of the texture. Can be `rgba` or `bgra`.
* `codedSize` [Size](size.md) - The full dimensions of the video frame.
* `visibleRect` [Rectangle](rectangle.md) - A subsection of [0, 0, codedSize.width(), codedSize.height()]. In the OSR case, it is expected to cover the full coded area.
* `contentRect` [Rectangle](rectangle.md) - The region of the video frame that the capturer would like to populate. In the OSR case, it is the same as `dirtyRect`, the area that needs to be painted.
* `timestamp` number - The time in microseconds since the capture start.
* `metadata` Object - Extra metadata. See the comments in `media/base/video_frame_metadata.h` for details.
* `captureUpdateRect` [Rectangle](rectangle.md) (optional) - Updated area of frame, can be considered as the `dirty` area.
* `regionCaptureRect` [Rectangle](rectangle.md) (optional) - May reflect the frame's contents origin if region capture is used internally.
* `sourceSize` [Rectangle](rectangle.md) (optional) - Full size of the source frame.
* `frameCount` number (optional) - A monotonically increasing count of captured frames. May contain gaps if frames are dropped between two consecutively received frames.
* `sharedTextureHandle` Buffer _Windows_ _macOS_ - The handle to the shared texture.
* `planes` Object[] _Linux_ - Each plane's info of the shared texture.
* `stride` number - The stride in bytes to be used when accessing the buffer via a memory mapping. One per plane.
* `offset` number - The offset in bytes to be used when accessing the buffer via a memory mapping. One per plane.
* `size` number - Size in bytes of the plane. This is necessary to map the buffers.
* `fd` number - File descriptor for the underlying memory object (usually dmabuf).
* `modifier` string _Linux_ - The modifier is retrieved from GBM library and passed to EGL driver.
* `release` Function - Release the resources. The `texture` object cannot be passed directly to another process; you need to maintain texture lifecycles in the
main process, but it is safe to pass the `textureInfo` to another process. Only a limited number of textures can exist at the same time, so it's important
that you call `texture.release()` as soon as you're done with the texture.
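For orientation, the fields above can be exercised with plain data. A minimal sketch; the `describeTextureInfo` helper and the sample object are illustrative, not part of the Electron API:

```javascript
// Summarize a textureInfo object for logging. `describeTextureInfo` and the
// sample object below are illustrative, not part of the Electron API.
function describeTextureInfo (info) {
  const { width, height } = info.codedSize
  return `${info.widgetType} ${info.pixelFormat} ${width}x${height} @ ${info.timestamp}us`
}

// Example shape of a textureInfo, as delivered on Windows/macOS.
const sample = {
  widgetType: 'frame',
  pixelFormat: 'bgra',
  codedSize: { width: 800, height: 600 },
  visibleRect: { x: 0, y: 0, width: 800, height: 600 },
  contentRect: { x: 0, y: 0, width: 800, height: 600 },
  timestamp: 16666,
  metadata: {}
}

console.log(describeTextureInfo(sample)) // frame bgra 800x600 @ 16666us
```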


@ -79,10 +79,14 @@
[browserWindow](../browser-window.md) has disabled `backgroundThrottling` then
frames will be drawn and swapped for the whole window and other
[webContents](../web-contents.md) displayed by it. Defaults to `true`.
* `offscreen` boolean (optional) - Whether to enable offscreen rendering for the browser
* `offscreen` Object | boolean (optional) - Whether to enable offscreen rendering for the browser
window. Defaults to `false`. See the
[offscreen rendering tutorial](../../tutorial/offscreen-rendering.md) for
more details.
* `useSharedTexture` boolean (optional) _Experimental_ - Whether to use GPU shared texture for accelerated
paint event. Defaults to `false`. See the
[offscreen rendering tutorial](../../tutorial/offscreen-rendering.md) for
more details.
* `contextIsolation` boolean (optional) - Whether to run Electron APIs and
the specified `preload` script in a separate JavaScript context. Defaults
to `true`. The context that the `preload` script runs in will only have
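Putting the options above together, a sketch of building these web preferences; the `buildOsrPreferences` helper is illustrative, only the `offscreen` shape comes from the option descriptions:

```javascript
// Build webPreferences for offscreen rendering. `buildOsrPreferences` is a
// hypothetical helper; only the `offscreen` option shape is from the docs.
function buildOsrPreferences (useSharedTexture) {
  return {
    // The boolean form still works; the object form enables extra options.
    offscreen: useSharedTexture ? { useSharedTexture: true } : true
  }
}

// In an Electron app, this would be passed to the BrowserWindow constructor:
if (process.versions.electron) {
  const { BrowserWindow } = require('electron')
  const win = new BrowserWindow({ webPreferences: buildOsrPreferences(true) })
  win.loadURL('about:blank')
}
```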


@ -869,12 +869,12 @@ app.whenReady().then(() => {
Returns:
* `event` Event
* `details` Event\<\>
* `texture` [OffscreenSharedTexture](structures/offscreen-shared-texture.md) (optional) _Experimental_ - The GPU shared texture of the frame, when `webPreferences.offscreen.useSharedTexture` is `true`.
* `dirtyRect` [Rectangle](structures/rectangle.md)
* `image` [NativeImage](native-image.md) - The image data of the whole frame.
Emitted when a new frame is generated. Only the dirty area is passed in the
buffer.
Emitted when a new frame is generated. Only the dirty area is passed in the buffer.
```js
const { BrowserWindow } = require('electron')
@ -886,6 +886,33 @@ win.webContents.on('paint', (event, dirty, image) => {
win.loadURL('https://github.com')
```
When using the shared texture feature (set `webPreferences.offscreen.useSharedTexture` to `true`), you can pass the texture handle to an external rendering pipeline without the overhead of
copying data between CPU and GPU memory, using Chromium's hardware acceleration support. This feature is helpful for high-performance rendering scenarios.
Only a limited number of textures can exist at the same time, so it's important that you call `texture.release()` as soon as you're done with the texture.
By managing the texture lifecycle yourself, you can safely pass the `texture.textureInfo` to other processes through IPC.
```js
const { BrowserWindow } = require('electron')
const win = new BrowserWindow({ webPreferences: { offscreen: { useSharedTexture: true } } })
win.webContents.on('paint', async (e, dirty, image) => {
if (e.texture) {
    // By managing the lifecycle yourself, you can handle the event in an async handler or pass `e.texture.textureInfo`
    // to other processes (not `e.texture`; the `e.texture.release` function is not passable through IPC).
await new Promise(resolve => setTimeout(resolve, 50))
// You can send the native texture handle to native code for importing into your rendering pipeline.
// For example: https://github.com/electron/electron/tree/main/spec/fixtures/native-addon/osr-gpu
// importTextureHandle(dirty, e.texture.textureInfo)
// You must call `e.texture.release()` as soon as possible, before the underlying frame pool is drained.
e.texture.release()
}
})
win.loadURL('https://github.com')
```
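To forward a frame to another process, strip the non-serializable part first. A minimal sketch, assuming a second `BrowserWindow` as the receiver and a hypothetical `osr-frame` IPC channel:

```javascript
// Sketch: forward a frame's textureInfo to another process over IPC.
// The receiving window and the 'osr-frame' channel name are hypothetical.
function extractSendableTexture (texture) {
  // `texture.release` is a function and cannot cross IPC;
  // only the plain-data `textureInfo` is safe to send.
  return texture.textureInfo
}

if (process.versions.electron) {
  const { BrowserWindow } = require('electron')
  const source = new BrowserWindow({
    webPreferences: { offscreen: { useSharedTexture: true } }
  })
  const viewer = new BrowserWindow()
  source.webContents.on('paint', (e, dirty, image) => {
    if (e.texture) {
      viewer.webContents.send('osr-frame', extractSendableTexture(e.texture))
      // Release in the main process once the frame is no longer needed;
      // shown immediately here for brevity.
      e.texture.release()
    }
  })
  source.loadURL('https://github.com')
}
```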
#### Event: 'devtools-reload-page'
Emitted when the devtools window instructs the webContents to reload


@ -3,7 +3,8 @@
## Overview
Offscreen rendering lets you obtain the content of a `BrowserWindow` in a
bitmap, so it can be rendered anywhere, for example, on texture in a 3D scene.
bitmap or a shared GPU texture, so it can be rendered anywhere, for example,
on a texture in a 3D scene.
The offscreen rendering in Electron uses a similar approach to that of the
[Chromium Embedded Framework](https://bitbucket.org/chromiumembedded/cef)
project.
@ -17,22 +18,39 @@ the dirty area is passed to the `paint` event to be more efficient.
losses with no benefits.
* When nothing is happening on a webpage, no frames are generated.
* An offscreen window is always created as a
[Frameless Window](../tutorial/window-customization.md)..
[Frameless Window](../tutorial/window-customization.md).
### Rendering Modes
#### GPU accelerated
GPU accelerated rendering means that the GPU is used for composition. Because of
that, the frame has to be copied from the GPU which requires more resources,
thus this mode is slower than the Software output device. The benefit of this
mode is that WebGL and 3D CSS animations are supported.
GPU accelerated rendering means that the GPU is used for composition. The benefit
of this mode is that WebGL and 3D CSS animations are supported. There are two
different approaches depending on the `webPreferences.offscreen.useSharedTexture`
setting.
1. Use GPU shared texture
Used when `webPreferences.offscreen.useSharedTexture` is set to `true`.
This is an advanced feature that requires a native node module to integrate with your own code.
The frames are directly copied into GPU textures, so this mode is very fast because
there is no CPU-GPU memory copy overhead, and you can directly import the shared
texture into your own rendering program.
2. Use CPU shared memory bitmap
Used when `webPreferences.offscreen.useSharedTexture` is set to `false` (default behavior).
The texture is accessible using the `NativeImage` API at the cost of performance.
The frame has to be copied from the GPU to a CPU bitmap, which requires more system
resources, so this mode is slower than the software output device mode. But it supports
GPU-related functionality.
#### Software output device
This mode uses a software output device for rendering in the CPU, so the frame
generation is much faster. As a result, this mode is preferred over the GPU
accelerated one.
generation is faster than the shared memory bitmap GPU accelerated mode.
To enable this mode, GPU acceleration has to be disabled by calling the
[`app.disableHardwareAcceleration()`][disablehardwareacceleration] API.
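A minimal sketch of the software output device mode; the URL and frame rate are arbitrary, and the `OSR_MODES` list simply summarizes the three modes described above:

```javascript
// The three OSR modes described above, fastest hardware path first.
const OSR_MODES = ['gpu-shared-texture', 'gpu-shared-memory-bitmap', 'software']

// Sketch: software output device mode. GPU acceleration must be disabled
// before the app's 'ready' event fires.
if (process.versions.electron) {
  const { app, BrowserWindow } = require('electron')
  app.disableHardwareAcceleration()
  app.whenReady().then(() => {
    const win = new BrowserWindow({ webPreferences: { offscreen: true } })
    win.webContents.on('paint', (event, dirty, image) => {
      // `image` is a NativeImage holding the pixels of the repainted area.
    })
    win.webContents.setFrameRate(30)
    win.loadURL('https://electronjs.org')
  })
}
```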


@ -108,6 +108,7 @@ auto_filenames = {
"docs/api/structures/navigation-entry.md",
"docs/api/structures/notification-action.md",
"docs/api/structures/notification-response.md",
"docs/api/structures/offscreen-shared-texture.md",
"docs/api/structures/open-external-permission-request.md",
"docs/api/structures/payment-discount.md",
"docs/api/structures/permission-request.md",


@ -469,6 +469,8 @@ filenames = {
"shell/browser/notifications/platform_notification_service.h",
"shell/browser/osr/osr_host_display_client.cc",
"shell/browser/osr/osr_host_display_client.h",
"shell/browser/osr/osr_paint_event.cc",
"shell/browser/osr/osr_paint_event.h",
"shell/browser/osr/osr_render_widget_host_view.cc",
"shell/browser/osr/osr_render_widget_host_view.h",
"shell/browser/osr/osr_video_consumer.cc",
@ -612,6 +614,8 @@ filenames = {
"shell/common/gin_converters/net_converter.cc",
"shell/common/gin_converters/net_converter.h",
"shell/common/gin_converters/optional_converter.h",
"shell/common/gin_converters/osr_converter.cc",
"shell/common/gin_converters/osr_converter.h",
"shell/common/gin_converters/serial_port_info_converter.h",
"shell/common/gin_converters/std_converter.h",
"shell/common/gin_converters/time_converter.cc",


@ -129,3 +129,4 @@ feat_enable_passing_exit_code_on_service_process_crash.patch
chore_remove_reference_to_chrome_browser_themes.patch
feat_enable_customizing_symbol_color_in_framecaptionbutton.patch
build_expose_webplugininfo_interface_to_electron.patch
osr_shared_texture_remove_keyed_mutex_on_win_dxgi.patch


@ -0,0 +1,97 @@
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: reito <cnschwarzer@qq.com>
Date: Thu, 15 Aug 2024 14:05:52 +0800
Subject: Remove DXGI GMB keyed-mutex
This patch removes the keyed mutex of the D3D11 texture, but only when the texture is requested by offscreen rendering on Windows.
The keyed mutex introduces extra performance cost and spikes. However, in the offscreen rendering scenario the shared resource is not simultaneously read from and written to; there is typically just one reader, so it does not need such an exclusive guarantee, and it is safe to remove the mutex for an extra performance gain.
For resolving complex conflicts, please ping @reitowo.
For more background, see: https://crrev.com/c/5465148
diff --git a/gpu/ipc/service/gpu_memory_buffer_factory_dxgi.cc b/gpu/ipc/service/gpu_memory_buffer_factory_dxgi.cc
index 2096591596a26464ab8f71a399ccb16a04edfd59..9eb966b3ddc3551d6beeff123071b2c99a576620 100644
--- a/gpu/ipc/service/gpu_memory_buffer_factory_dxgi.cc
+++ b/gpu/ipc/service/gpu_memory_buffer_factory_dxgi.cc
@@ -179,7 +179,8 @@ gfx::GpuMemoryBufferHandle GpuMemoryBufferFactoryDXGI::CreateGpuMemoryBuffer(
// so make sure that the usage is one that we support.
DCHECK(usage == gfx::BufferUsage::GPU_READ ||
usage == gfx::BufferUsage::SCANOUT ||
- usage == gfx::BufferUsage::SCANOUT_CPU_READ_WRITE)
+ usage == gfx::BufferUsage::SCANOUT_CPU_READ_WRITE ||
+ usage == gfx::BufferUsage::SCANOUT_VEA_CPU_READ)
<< "Incorrect usage, usage=" << gfx::BufferUsageToString(usage);
D3D11_TEXTURE2D_DESC desc = {
@@ -193,7 +194,9 @@ gfx::GpuMemoryBufferHandle GpuMemoryBufferFactoryDXGI::CreateGpuMemoryBuffer(
D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET,
0,
D3D11_RESOURCE_MISC_SHARED_NTHANDLE |
- D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX};
+ static_cast<UINT>(usage == gfx::BufferUsage::SCANOUT_VEA_CPU_READ
+ ? D3D11_RESOURCE_MISC_SHARED
+ : D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX)};
Microsoft::WRL::ComPtr<ID3D11Texture2D> d3d11_texture;
diff --git a/media/video/renderable_gpu_memory_buffer_video_frame_pool.cc b/media/video/renderable_gpu_memory_buffer_video_frame_pool.cc
index 208d048ee68fd92d1fa7b5e8ad79e02e29b8be40..c8c8c32cd44a96dc6a476b8bc02bb13b02f86300 100644
--- a/media/video/renderable_gpu_memory_buffer_video_frame_pool.cc
+++ b/media/video/renderable_gpu_memory_buffer_video_frame_pool.cc
@@ -205,7 +205,7 @@ gfx::Size GetBufferSizeInPixelsForVideoPixelFormat(
bool FrameResources::Initialize() {
auto* context = pool_->GetContext();
- constexpr gfx::BufferUsage kBufferUsage =
+ gfx::BufferUsage buffer_usage =
#if BUILDFLAG(IS_MAC) || BUILDFLAG(IS_CHROMEOS)
gfx::BufferUsage::SCANOUT_VEA_CPU_READ
#else
@@ -219,6 +219,23 @@ bool FrameResources::Initialize() {
const gfx::Size buffer_size_in_pixels =
GetBufferSizeInPixelsForVideoPixelFormat(format_, coded_size_);
+#if BUILDFLAG(IS_WIN)
+ // For the CEF OSR feature, there is currently no other place in chromium
+ // that uses RGBA. If the format is RGBA, CEF currently does not write to the
+ // texture anymore once the GMB is returned from a CopyRequest, so there will
+ // be no race condition on that texture. We can request a GMB without a keyed
+ // mutex to accelerate and probably prevent some driver deadlocks.
+ if (format_ == PIXEL_FORMAT_ARGB || format_ == PIXEL_FORMAT_ABGR) {
+ // This value is 'borrowed'; SCANOUT_VEA_CPU_READ is probably invalid
+ // because there's no real SCANOUT on Windows. We simply use this enum as a
+ // flag to disable the mutex in the GMBFactoryDXGI, because this enum is
+ // also used on macOS and CrOS for a similar purpose (to claim that no one
+ // else will concurrently use the resource).
+ // https://chromium-review.googlesource.com/c/chromium/src/+/5302103
+ buffer_usage = gfx::BufferUsage::SCANOUT_VEA_CPU_READ;
+ }
+#endif
+
// Create the GpuMemoryBuffer if MappableSharedImages is not enabled. When its
// enabled, clients only create a mappable shared image directly without
// needing to create a GMB.
@@ -226,11 +243,11 @@ bool FrameResources::Initialize() {
kUseMappableSIForRenderableGpuMemoryBufferVideoFramePool);
if (!is_mappable_si_enabled) {
gpu_memory_buffer_ = context->CreateGpuMemoryBuffer(
- buffer_size_in_pixels, buffer_format, kBufferUsage);
+ buffer_size_in_pixels, buffer_format, buffer_usage);
if (!gpu_memory_buffer_) {
LOG(ERROR) << "Failed to allocate GpuMemoryBuffer for frame: coded_size="
<< coded_size_.ToString()
- << ", usage=" << static_cast<int>(kBufferUsage);
+ << ", usage=" << static_cast<int>(buffer_usage);
return false;
}
@@ -264,7 +281,7 @@ bool FrameResources::Initialize() {
if (is_mappable_si_enabled) {
shared_image_ = context->CreateSharedImage(
- buffer_size_in_pixels, kBufferUsage, si_format, color_space_,
+ buffer_size_in_pixels, buffer_usage, si_format, color_space_,
kTopLeft_GrSurfaceOrigin, kPremul_SkAlphaType, kSharedImageUsage,
sync_token_);
} else {


@ -121,6 +121,7 @@
#include "shell/common/gin_converters/image_converter.h"
#include "shell/common/gin_converters/net_converter.h"
#include "shell/common/gin_converters/optional_converter.h"
#include "shell/common/gin_converters/osr_converter.h"
#include "shell/common/gin_converters/value_converter.h"
#include "shell/common/gin_helper/dictionary.h"
#include "shell/common/gin_helper/error_thrower.h"
@ -760,9 +761,23 @@ WebContents::WebContents(v8::Isolate* isolate,
// Get transparent for guest view
options.Get("transparent", &guest_transparent_);
bool b = false;
if (options.Get(options::kOffscreen, &b) && b)
type_ = Type::kOffScreen;
// Offscreen rendering
v8::Local<v8::Value> use_offscreen;
if (options.Get(options::kOffscreen, &use_offscreen)) {
if (use_offscreen->IsBoolean()) {
bool b = false;
if (options.Get(options::kOffscreen, &b) && b) {
type_ = Type::kOffScreen;
}
} else if (use_offscreen->IsObject()) {
type_ = Type::kOffScreen;
auto use_offscreen_dict =
gin_helper::Dictionary::CreateEmpty(options.isolate());
options.Get(options::kOffscreen, &use_offscreen_dict);
use_offscreen_dict.Get(options::kUseSharedTexture,
&offscreen_use_shared_texture_);
}
}
// Init embedder earlier
options.Get("embedder", &embedder_);
@ -798,7 +813,7 @@ WebContents::WebContents(v8::Isolate* isolate,
if (embedder_ && embedder_->IsOffScreen()) {
auto* view = new OffScreenWebContentsView(
false,
false, offscreen_use_shared_texture_,
base::BindRepeating(&WebContents::OnPaint, base::Unretained(this)));
params.view = view;
params.delegate_view = view;
@ -818,7 +833,7 @@ WebContents::WebContents(v8::Isolate* isolate,
content::WebContents::CreateParams params(session->browser_context());
auto* view = new OffScreenWebContentsView(
transparent,
transparent, offscreen_use_shared_texture_,
base::BindRepeating(&WebContents::OnPaint, base::Unretained(this)));
params.view = view;
params.delegate_view = view;
@ -3535,8 +3550,23 @@ bool WebContents::IsOffScreen() const {
return type_ == Type::kOffScreen;
}
void WebContents::OnPaint(const gfx::Rect& dirty_rect, const SkBitmap& bitmap) {
Emit("paint", dirty_rect, gfx::Image::CreateFrom1xBitmap(bitmap));
void WebContents::OnPaint(const gfx::Rect& dirty_rect,
const SkBitmap& bitmap,
const OffscreenSharedTexture& tex) {
v8::Isolate* isolate = JavascriptEnvironment::GetIsolate();
v8::HandleScope handle_scope(isolate);
gin::Handle<gin_helper::internal::Event> event =
gin_helper::internal::Event::New(isolate);
v8::Local<v8::Object> event_object = event.ToV8().As<v8::Object>();
gin_helper::Dictionary dict(isolate, event_object);
if (offscreen_use_shared_texture_) {
dict.Set("texture", tex);
}
EmitWithoutEvent("paint", event, dirty_rect,
gfx::Image::CreateFrom1xBitmap(bitmap));
}
void WebContents::StartPainting() {


@ -37,6 +37,7 @@
#include "shell/browser/background_throttling_source.h"
#include "shell/browser/event_emitter_mixin.h"
#include "shell/browser/extended_web_contents_observer.h"
#include "shell/browser/osr/osr_paint_event.h"
#include "shell/browser/ui/inspectable_web_contents_delegate.h"
#include "shell/browser/ui/inspectable_web_contents_view_delegate.h"
#include "shell/common/gin_helper/cleaned_up_at_exit.h"
@ -310,7 +311,9 @@ class WebContents : public ExclusiveAccessContext,
// Methods for offscreen rendering
bool IsOffScreen() const;
void OnPaint(const gfx::Rect& dirty_rect, const SkBitmap& bitmap);
void OnPaint(const gfx::Rect& dirty_rect,
const SkBitmap& bitmap,
const OffscreenSharedTexture& info);
void StartPainting();
void StopPainting();
bool IsPainting() const;
@ -840,6 +843,9 @@ class WebContents : public ExclusiveAccessContext,
bool offscreen_ = false;
// Whether offscreen rendering use gpu shared texture
bool offscreen_use_shared_texture_ = false;
// Whether window is fullscreened by HTML5 api.
bool html_fullscreen_ = false;


@ -69,7 +69,7 @@ void LayeredWindowUpdater::Draw(const gfx::Rect& damage_rect,
if (active_ && canvas_->peekPixels(&pixmap)) {
bitmap.installPixels(pixmap);
callback_.Run(damage_rect, bitmap);
callback_.Run(damage_rect, bitmap, {});
}
std::move(draw_callback).Run();


@ -11,15 +11,16 @@
#include "base/memory/shared_memory_mapping.h"
#include "components/viz/host/host_display_client.h"
#include "services/viz/privileged/mojom/compositing/layered_window_updater.mojom.h"
#include "shell/browser/osr/osr_paint_event.h"
#include "third_party/skia/include/core/SkBitmap.h"
#include "third_party/skia/include/core/SkCanvas.h"
#include "ui/gfx/native_widget_types.h"
class SkBitmap;
class SkCanvas;
namespace electron {
typedef base::RepeatingCallback<void(const gfx::Rect&, const SkBitmap&)>
OnPaintCallback;
class LayeredWindowUpdater : public viz::mojom::LayeredWindowUpdater {
public:
explicit LayeredWindowUpdater(


@ -32,7 +32,7 @@ void OffScreenHostDisplayClient::OnDisplayReceivedCALayerParams(
kPremul_SkAlphaType),
pixels, stride);
bitmap.setImmutable();
callback_.Run(ca_layer_params.damage, bitmap);
callback_.Run(ca_layer_params.damage, bitmap, {});
}
}


@ -0,0 +1,31 @@
// Copyright (c) 2024 GitHub, Inc.
// Use of this source code is governed by the MIT license that can be found in
// the LICENSE file.
#include "shell/browser/osr/osr_paint_event.h"
namespace electron {
OffscreenNativePixmapPlaneInfo::~OffscreenNativePixmapPlaneInfo() = default;
OffscreenNativePixmapPlaneInfo::OffscreenNativePixmapPlaneInfo(
const OffscreenNativePixmapPlaneInfo& other) = default;
OffscreenNativePixmapPlaneInfo::OffscreenNativePixmapPlaneInfo(uint32_t stride,
uint64_t offset,
uint64_t size,
int fd)
: stride(stride), offset(offset), size(size), fd(fd) {}
OffscreenReleaserHolder::~OffscreenReleaserHolder() = default;
OffscreenReleaserHolder::OffscreenReleaserHolder(
gfx::GpuMemoryBufferHandle gmb_handle,
mojo::PendingRemote<viz::mojom::FrameSinkVideoConsumerFrameCallbacks>
releaser)
: gmb_handle(std::move(gmb_handle)), releaser(std::move(releaser)) {}
OffscreenSharedTextureValue::OffscreenSharedTextureValue() = default;
OffscreenSharedTextureValue::~OffscreenSharedTextureValue() = default;
OffscreenSharedTextureValue::OffscreenSharedTextureValue(
const OffscreenSharedTextureValue& other) = default;
} // namespace electron


@ -0,0 +1,113 @@

// Copyright (c) 2024 GitHub, Inc.
// Use of this source code is governed by the MIT license that can be found in
// the LICENSE file.
#ifndef ELECTRON_SHELL_BROWSER_OSR_OSR_PAINT_EVENT_H
#define ELECTRON_SHELL_BROWSER_OSR_OSR_PAINT_EVENT_H
#include "base/functional/callback_helpers.h"
#include "content/public/common/widget_type.h"
#include "media/base/video_types.h"
#include "mojo/public/cpp/bindings/pending_remote.h"
#include "services/viz/privileged/mojom/compositing/frame_sink_video_capture.mojom.h"
#include "third_party/skia/include/core/SkCanvas.h"
#include "ui/gfx/canvas.h"
#include "ui/gfx/native_widget_types.h"
#include <cstdint>
namespace electron {
struct OffscreenNativePixmapPlaneInfo {
// The strides and offsets in bytes to be used when accessing the buffers
// via a memory mapping. One per plane per entry. Size in bytes of the
// plane is necessary to map the buffers.
uint32_t stride;
uint64_t offset;
uint64_t size;
// File descriptor for the underlying memory object (usually dmabuf).
int fd;
OffscreenNativePixmapPlaneInfo() = delete;
~OffscreenNativePixmapPlaneInfo();
OffscreenNativePixmapPlaneInfo(const OffscreenNativePixmapPlaneInfo& other);
OffscreenNativePixmapPlaneInfo(uint32_t stride,
uint64_t offset,
uint64_t size,
int fd);
};
struct OffscreenReleaserHolder {
OffscreenReleaserHolder() = delete;
~OffscreenReleaserHolder();
OffscreenReleaserHolder(
gfx::GpuMemoryBufferHandle gmb_handle,
mojo::PendingRemote<viz::mojom::FrameSinkVideoConsumerFrameCallbacks>
releaser);
// GpuMemoryBufferHandle, keep the scoped handle alive
gfx::GpuMemoryBufferHandle gmb_handle;
// Releaser, hold this to prevent FrameSinkVideoCapturer recycle frame
mojo::PendingRemote<viz::mojom::FrameSinkVideoConsumerFrameCallbacks>
releaser;
};
struct OffscreenSharedTextureValue {
OffscreenSharedTextureValue();
~OffscreenSharedTextureValue();
OffscreenSharedTextureValue(const OffscreenSharedTextureValue& other);
// It is the user's responsibility to compose popup widget textures.
content::WidgetType widget_type;
// The pixel format of the shared texture, RGBA or BGRA depends on platform.
media::VideoPixelFormat pixel_format;
// The full dimensions of the video frame data.
gfx::Size coded_size;
// A subsection of [0, 0, coded_size.width(), coded_size.height()].
// In the OSR case, it is expected to cover the full coded area.
gfx::Rect visible_rect;
// The region of the video frame that the capturer would like to populate.
// In the OSR case, it is the same as `dirtyRect`, the area to be painted.
gfx::Rect content_rect;
// Extra metadata for the video frame.
// See the comments in media/base/video_frame_metadata.h for more details.
std::optional<gfx::Rect> capture_update_rect;
std::optional<gfx::Size> source_size;
std::optional<gfx::Rect> region_capture_rect;
// The capture timestamp, microseconds since capture start
int64_t timestamp;
// The frame count
int64_t frame_count;
// Releaser holder
raw_ptr<OffscreenReleaserHolder> releaser_holder;
#if BUILDFLAG(IS_WIN) || BUILDFLAG(IS_MAC)
// On Windows it is a HANDLE to the shared D3D11 texture.
// On macOS it is a IOSurface* to the shared IOSurface.
uintptr_t shared_texture_handle;
#elif BUILDFLAG(IS_LINUX)
std::vector<OffscreenNativePixmapPlaneInfo> planes;
uint64_t modifier;
#endif
};
typedef std::optional<OffscreenSharedTextureValue> OffscreenSharedTexture;
typedef base::RepeatingCallback<
void(const gfx::Rect&, const SkBitmap&, const OffscreenSharedTexture&)>
OnPaintCallback;
} // namespace electron
#endif // ELECTRON_SHELL_BROWSER_OSR_OSR_PAINT_EVENT_H


@ -177,6 +177,7 @@ class ElectronDelegatedFrameHostClient
OffScreenRenderWidgetHostView::OffScreenRenderWidgetHostView(
bool transparent,
bool offscreen_use_shared_texture,
bool painting,
int frame_rate,
const OnPaintCallback& callback,
@ -187,6 +188,7 @@ OffScreenRenderWidgetHostView::OffScreenRenderWidgetHostView(
render_widget_host_(content::RenderWidgetHostImpl::From(host)),
parent_host_view_(parent_host_view),
transparent_(transparent),
offscreen_use_shared_texture_(offscreen_use_shared_texture),
callback_(callback),
frame_rate_(frame_rate),
size_(initial_size),
@ -544,8 +546,9 @@ OffScreenRenderWidgetHostView::CreateViewForWidget(
}
return new OffScreenRenderWidgetHostView(
transparent_, true, embedder_host_view->frame_rate(), callback_,
render_widget_host, embedder_host_view, size());
transparent_, offscreen_use_shared_texture_, true,
embedder_host_view->frame_rate(), callback_, render_widget_host,
embedder_host_view, size());
}
const viz::FrameSinkId& OffScreenRenderWidgetHostView::GetFrameSinkId() const {
@ -654,8 +657,15 @@ uint64_t OffScreenRenderWidgetHostView::GetNSViewId() const {
}
#endif
void OffScreenRenderWidgetHostView::OnPaint(const gfx::Rect& damage_rect,
const SkBitmap& bitmap) {
void OffScreenRenderWidgetHostView::OnPaint(
const gfx::Rect& damage_rect,
const SkBitmap& bitmap,
const OffscreenSharedTexture& texture) {
if (texture.has_value()) {
callback_.Run(damage_rect, {}, texture);
return;
}
backing_ = std::make_unique<SkBitmap>();
backing_->allocN32Pixels(bitmap.width(), bitmap.height(), !transparent_);
bitmap.readPixels(backing_->pixmap());
@ -711,7 +721,7 @@ void OffScreenRenderWidgetHostView::CompositeFrame(
}
callback_.Run(gfx::IntersectRects(gfx::Rect(size_in_pixels), damage_rect),
frame);
frame, {});
ReleaseResize();
}


@ -25,14 +25,17 @@
#include "content/browser/renderer_host/render_widget_host_impl.h" // nogncheck
#include "content/browser/renderer_host/render_widget_host_view_base.h" // nogncheck
#include "content/browser/web_contents/web_contents_view.h" // nogncheck
#include "shell/browser/osr/osr_host_display_client.h"
#include "shell/browser/osr/osr_video_consumer.h"
#include "shell/browser/osr/osr_view_proxy.h"
#include "third_party/blink/public/mojom/widget/record_content_to_visible_time_request.mojom-forward.h"
#include "third_party/blink/public/platform/web_vector.h"
#include "third_party/skia/include/core/SkBitmap.h"
#include "ui/base/ime/text_input_client.h"
#include "ui/compositor/compositor.h"
#include "ui/compositor/layer_delegate.h"
#include "ui/compositor/layer_owner.h"
#include "ui/gfx/geometry/point.h"
#include "components/viz/host/host_display_client.h"
@ -59,8 +62,6 @@ class ElectronCopyFrameGenerator;
class ElectronDelegatedFrameHostClient;
class OffScreenHostDisplayClient;
using OnPaintCallback =
base::RepeatingCallback<void(const gfx::Rect&, const SkBitmap&)>;
using OnPopupPaintCallback = base::RepeatingCallback<void(const gfx::Rect&)>;
class OffScreenRenderWidgetHostView
@ -70,6 +71,7 @@ class OffScreenRenderWidgetHostView
private OffscreenViewProxyObserver {
public:
OffScreenRenderWidgetHostView(bool transparent,
bool offscreen_use_shared_texture,
bool painting,
int frame_rate,
const OnPaintCallback& callback,
@ -204,7 +206,9 @@ class OffScreenRenderWidgetHostView
void RemoveViewProxy(OffscreenViewProxy* proxy);
void ProxyViewDestroyed(OffscreenViewProxy* proxy) override;
void OnPaint(const gfx::Rect& damage_rect, const SkBitmap& bitmap);
void OnPaint(const gfx::Rect& damage_rect,
const SkBitmap& bitmap,
const OffscreenSharedTexture& texture);
void OnPopupPaint(const gfx::Rect& damage_rect);
void OnProxyViewPaint(const gfx::Rect& damage_rect) override;
@ -231,6 +235,10 @@ class OffScreenRenderWidgetHostView
void SetFrameRate(int frame_rate);
int frame_rate() const { return frame_rate_; }
bool offscreen_use_shared_texture() const {
return offscreen_use_shared_texture_;
}
ui::Layer* root_layer() const { return root_layer_.get(); }
content::DelegatedFrameHost* delegated_frame_host() const {
@ -274,6 +282,7 @@ class OffScreenRenderWidgetHostView
std::set<OffscreenViewProxy*> proxy_views_;
const bool transparent_;
const bool offscreen_use_shared_texture_;
OnPaintCallback callback_;
OnPopupPaintCallback parent_callback_;


@ -16,21 +16,6 @@
#include "third_party/skia/include/core/SkRegion.h"
#include "ui/gfx/skbitmap_operations.h"
namespace {
bool IsValidMinAndMaxFrameSize(gfx::Size min_frame_size,
gfx::Size max_frame_size) {
// Returns true if
// 0 < |min_frame_size| <= |max_frame_size| <= media::limits::kMaxDimension.
return 0 < min_frame_size.width() && 0 < min_frame_size.height() &&
min_frame_size.width() <= max_frame_size.width() &&
min_frame_size.height() <= max_frame_size.height() &&
max_frame_size.width() <= media::limits::kMaxDimension &&
max_frame_size.height() <= media::limits::kMaxDimension;
}
} // namespace
namespace electron {
OffScreenVideoConsumer::OffScreenVideoConsumer(
@ -43,7 +28,23 @@ OffScreenVideoConsumer::OffScreenVideoConsumer(
video_capturer_->SetMinSizeChangePeriod(base::TimeDelta());
video_capturer_->SetFormat(media::PIXEL_FORMAT_ARGB);
SizeChanged(view_->SizeInPixels());
  // The previous OSR design tried to set the resolution constraint to match
  // the view's size. That is actually not necessary and creates faulty
  // textures when the window/view's size changes frequently. The constraint
  // may not take effect before the internal frame size changes, which makes
  // the capturer try to resize the new frame to the old constraint size,
  // producing a blurry output image. (For example, when a small window
  // suddenly expands to its maximum size, this actually produces a small
  // output image with the maximized window resized to fit it, whereas the
  // expected output is a maximized image without resizing.)
  // So we just set the constraint to no limit (1x1 to max). When the window
  // size changes, a new frame with the new size is generated automatically;
  // there is no need to manually set the constraint and request a new frame.
video_capturer_->SetResolutionConstraints(
gfx::Size(1, 1),
gfx::Size(media::limits::kMaxDimension, media::limits::kMaxDimension),
false);
SetFrameRate(view_->frame_rate());
}
@ -51,7 +52,10 @@ OffScreenVideoConsumer::~OffScreenVideoConsumer() = default;
void OffScreenVideoConsumer::SetActive(bool active) {
if (active) {
video_capturer_->Start(this, viz::mojom::BufferFormatPreference::kDefault);
video_capturer_->Start(
this, view_->offscreen_use_shared_texture()
? viz::mojom::BufferFormatPreference::kPreferGpuMemoryBuffer
: viz::mojom::BufferFormatPreference::kDefault);
} else {
video_capturer_->Stop();
}
@ -61,43 +65,77 @@ void OffScreenVideoConsumer::SetFrameRate(int frame_rate) {
video_capturer_->SetMinCapturePeriod(base::Seconds(1) / frame_rate);
}
void OffScreenVideoConsumer::SizeChanged(const gfx::Size& size_in_pixels) {
DCHECK(IsValidMinAndMaxFrameSize(size_in_pixels, size_in_pixels));
video_capturer_->SetResolutionConstraints(size_in_pixels, size_in_pixels,
true);
video_capturer_->RequestRefreshFrame();
}
void OffScreenVideoConsumer::OnFrameCaptured(
::media::mojom::VideoBufferHandlePtr data,
::media::mojom::VideoFrameInfoPtr info,
const gfx::Rect& content_rect,
mojo::PendingRemote<viz::mojom::FrameSinkVideoConsumerFrameCallbacks>
callbacks) {
auto& data_region = data->get_read_only_shmem_region();
  // We never call ProvideFeedback and only need Done to release the frame,
  // so there is no need to invoke the callbacks directly; see
  // in_flight_frame_delivery.cc, whose destructor calls Done for us once
  // the pipe is closed.
if (!CheckContentRect(content_rect)) {
SizeChanged(view_->SizeInPixels());
  // Offscreen rendering using a GPU shared texture.
if (view_->offscreen_use_shared_texture()) {
CHECK(data->is_gpu_memory_buffer_handle());
auto& orig_handle = data->get_gpu_memory_buffer_handle();
CHECK(!orig_handle.is_null());
    // Clone the handle so the texture stays alive after the callback returns.
auto gmb_handle = orig_handle.Clone();
OffscreenSharedTextureValue texture;
texture.pixel_format = info->pixel_format;
texture.coded_size = info->coded_size;
texture.visible_rect = info->visible_rect;
texture.content_rect = content_rect;
texture.timestamp = info->timestamp.InMicroseconds();
texture.frame_count = info->metadata.capture_counter.value_or(0);
texture.capture_update_rect = info->metadata.capture_update_rect;
texture.source_size = info->metadata.source_size;
texture.region_capture_rect = info->metadata.region_capture_rect;
texture.widget_type = view_->GetWidgetType();
#if BUILDFLAG(IS_WIN)
texture.shared_texture_handle =
reinterpret_cast<uintptr_t>(gmb_handle.dxgi_handle.Get());
#elif BUILDFLAG(IS_APPLE)
texture.shared_texture_handle =
reinterpret_cast<uintptr_t>(gmb_handle.io_surface.get());
#elif BUILDFLAG(IS_LINUX)
const auto& native_pixmap = gmb_handle.native_pixmap_handle;
texture.modifier = native_pixmap.modifier;
for (const auto& plane : native_pixmap.planes) {
texture.planes.emplace_back(plane.stride, plane.offset, plane.size,
plane.fd.get());
}
#endif
    // The releaser holder is freed from the JS side when `release` is called.
texture.releaser_holder = new OffscreenReleaserHolder(std::move(gmb_handle),
std::move(callbacks));
callback_.Run(content_rect, {}, std::move(texture));
return;
}
mojo::Remote<viz::mojom::FrameSinkVideoConsumerFrameCallbacks>
callbacks_remote(std::move(callbacks));
  // Regular capture path using shared memory.
const auto& data_region = data->get_read_only_shmem_region();
if (!data_region.IsValid()) {
callbacks_remote->Done();
return;
}
base::ReadOnlySharedMemoryMapping mapping = data_region.Map();
if (!mapping.IsValid()) {
DLOG(ERROR) << "Shared memory mapping failed.";
callbacks_remote->Done();
return;
}
if (mapping.size() <
media::VideoFrame::AllocationSize(info->pixel_format, info->coded_size)) {
DLOG(ERROR) << "Shared memory size was less than expected.";
callbacks_remote->Done();
return;
}
@ -127,30 +165,17 @@ void OffScreenVideoConsumer::OnFrameCaptured(
[](void* addr, void* context) {
delete static_cast<FramePinner*>(context);
},
new FramePinner{std::move(mapping), callbacks_remote.Unbind()});
new FramePinner{std::move(mapping), std::move(callbacks)});
bitmap.setImmutable();
  // update_rect is already offset to the same origin as content_rect,
  // so there is nothing more to do with the imported bitmap.
std::optional<gfx::Rect> update_rect = info->metadata.capture_update_rect;
if (!update_rect.has_value() || update_rect->IsEmpty()) {
update_rect = content_rect;
}
callback_.Run(*update_rect, bitmap);
}
bool OffScreenVideoConsumer::CheckContentRect(const gfx::Rect& content_rect) {
gfx::Size view_size = view_->SizeInPixels();
gfx::Size content_size = content_rect.size();
if (std::abs(view_size.width() - content_size.width()) > 2) {
return false;
}
if (std::abs(view_size.height() - content_size.height()) > 2) {
return false;
}
return true;
callback_.Run(*update_rect, bitmap, {});
}
} // namespace electron
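
The `update_rect` fallback above (an absent or empty `capture_update_rect` means the whole `content_rect` is dirty) can be sketched standalone; a hedged TypeScript sketch with illustrative types, not Electron's actual ones:

```typescript
type Rect = { x: number; y: number; width: number; height: number };

// Mirrors the update_rect fallback in OnFrameCaptured: an absent or empty
// capture_update_rect means the whole content_rect needs repainting.
function effectiveDirtyRect(updateRect: Rect | undefined,
                            contentRect: Rect): Rect {
  if (!updateRect || updateRect.width <= 0 || updateRect.height <= 0) {
    return contentRect;
  }
  return updateRect;
}
```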

View file

@ -14,14 +14,12 @@
#include "components/viz/host/client_frame_sink_video_capturer.h"
#include "media/capture/mojom/video_capture_buffer.mojom-forward.h"
#include "media/capture/mojom/video_capture_types.mojom.h"
#include "shell/browser/osr/osr_paint_event.h"
namespace electron {
class OffScreenRenderWidgetHostView;
typedef base::RepeatingCallback<void(const gfx::Rect&, const SkBitmap&)>
OnPaintCallback;
class OffScreenVideoConsumer : public viz::mojom::FrameSinkVideoConsumer {
public:
OffScreenVideoConsumer(OffScreenRenderWidgetHostView* view,
@ -34,7 +32,6 @@ class OffScreenVideoConsumer : public viz::mojom::FrameSinkVideoConsumer {
void SetActive(bool active);
void SetFrameRate(int frame_rate);
void SizeChanged(const gfx::Size& size_in_pixels);
private:
// viz::mojom::FrameSinkVideoConsumer implementation.
@ -49,8 +46,6 @@ class OffScreenVideoConsumer : public viz::mojom::FrameSinkVideoConsumer {
void OnStopped() override {}
void OnLog(const std::string& message) override {}
bool CheckContentRect(const gfx::Rect& content_rect);
OnPaintCallback callback_;
raw_ptr<OffScreenRenderWidgetHostView> view_;

View file

@ -15,8 +15,11 @@ namespace electron {
OffScreenWebContentsView::OffScreenWebContentsView(
bool transparent,
bool offscreen_use_shared_texture,
const OnPaintCallback& callback)
: transparent_(transparent), callback_(callback) {
: transparent_(transparent),
offscreen_use_shared_texture_(offscreen_use_shared_texture),
callback_(callback) {
#if BUILDFLAG(IS_MAC)
PlatformCreate();
#endif
@ -109,8 +112,8 @@ OffScreenWebContentsView::CreateViewForWidget(
return static_cast<content::RenderWidgetHostViewBase*>(rwhv);
return new OffScreenRenderWidgetHostView(
transparent_, painting_, GetFrameRate(), callback_, render_widget_host,
nullptr, GetSize());
transparent_, offscreen_use_shared_texture_, painting_, GetFrameRate(),
callback_, render_widget_host, nullptr, GetSize());
}
content::RenderWidgetHostViewBase*
@ -124,9 +127,9 @@ OffScreenWebContentsView::CreateViewForChildWidget(
? web_contents_impl->GetOuterWebContents()->GetRenderWidgetHostView()
: web_contents_impl->GetRenderWidgetHostView());
return new OffScreenRenderWidgetHostView(transparent_, painting_,
view->frame_rate(), callback_,
render_widget_host, view, GetSize());
return new OffScreenRenderWidgetHostView(
transparent_, offscreen_use_shared_texture_, painting_,
view->frame_rate(), callback_, render_widget_host, view, GetSize());
}
void OffScreenWebContentsView::RenderViewReady() {

View file

@ -34,7 +34,9 @@ class OffScreenWebContentsView : public content::WebContentsView,
public content::RenderViewHostDelegateView,
private NativeWindowObserver {
public:
OffScreenWebContentsView(bool transparent, const OnPaintCallback& callback);
OffScreenWebContentsView(bool transparent,
bool offscreen_use_shared_texture,
const OnPaintCallback& callback);
~OffScreenWebContentsView() override;
void SetWebContents(content::WebContents*);
@ -105,6 +107,7 @@ class OffScreenWebContentsView : public content::WebContentsView,
raw_ptr<NativeWindow> native_window_ = nullptr;
const bool transparent_;
const bool offscreen_use_shared_texture_;
bool painting_ = true;
int frame_rate_ = 60;
OnPaintCallback callback_;

View file

@ -0,0 +1,176 @@

// Copyright (c) 2024 GitHub, Inc.
// Use of this source code is governed by the MIT license that can be found in
// the LICENSE file.
#include "shell/common/gin_converters/osr_converter.h"
#include "gin/dictionary.h"
#include "v8-external.h"
#include "v8-function.h"
#include <string>
#include "base/containers/to_vector.h"
#include "shell/common/gin_converters/gfx_converter.h"
#include "shell/common/gin_converters/optional_converter.h"
#include "shell/common/node_includes.h"
#include "shell/common/process_util.h"
namespace gin {
namespace {
std::string OsrVideoPixelFormatToString(media::VideoPixelFormat format) {
switch (format) {
case media::PIXEL_FORMAT_ARGB:
return "bgra";
case media::PIXEL_FORMAT_ABGR:
return "rgba";
default:
NOTREACHED_NORETURN();
}
}
std::string OsrWidgetTypeToString(content::WidgetType type) {
switch (type) {
case content::WidgetType::kPopup:
return "popup";
case content::WidgetType::kFrame:
return "frame";
default:
NOTREACHED_NORETURN();
}
}
struct OffscreenReleaseHolderMonitor {
explicit OffscreenReleaseHolderMonitor(
electron::OffscreenReleaserHolder* holder)
: holder_(holder) {
CHECK(holder);
}
void ReleaseTexture() {
delete holder_;
holder_ = nullptr;
}
bool IsTextureReleased() const { return holder_ == nullptr; }
v8::Persistent<v8::Value>* CreatePersistent(v8::Isolate* isolate,
v8::Local<v8::Value> value) {
persistent_ = std::make_unique<v8::Persistent<v8::Value>>(isolate, value);
return persistent_.get();
}
void ResetPersistent() const { persistent_->Reset(); }
private:
raw_ptr<electron::OffscreenReleaserHolder> holder_;
std::unique_ptr<v8::Persistent<v8::Value>> persistent_;
};
} // namespace
// static
v8::Local<v8::Value> Converter<electron::OffscreenSharedTextureValue>::ToV8(
v8::Isolate* isolate,
const electron::OffscreenSharedTextureValue& val) {
gin::Dictionary root(isolate, v8::Object::New(isolate));
  // Create a monitor to hold the releaser holder. This lets us check
  // whether the user explicitly released the texture before GC collects
  // the wrapper object.
auto* monitor = new OffscreenReleaseHolderMonitor(val.releaser_holder);
auto releaserHolder = v8::External::New(isolate, monitor);
auto releaserFunc = [](const v8::FunctionCallbackInfo<v8::Value>& info) {
auto* holder = static_cast<OffscreenReleaseHolderMonitor*>(
info.Data().As<v8::External>()->Value());
// Release the shared texture, so that future frames can be generated.
holder->ReleaseTexture();
};
auto releaser = v8::Function::New(isolate->GetCurrentContext(), releaserFunc,
releaserHolder)
.ToLocalChecked();
root.Set("release", releaser);
gin::Dictionary dict(isolate, v8::Object::New(isolate));
dict.Set("pixelFormat", OsrVideoPixelFormatToString(val.pixel_format));
dict.Set("codedSize", val.coded_size);
dict.Set("visibleRect", val.visible_rect);
dict.Set("contentRect", val.content_rect);
dict.Set("timestamp", val.timestamp);
dict.Set("widgetType", OsrWidgetTypeToString(val.widget_type));
gin::Dictionary metadata(isolate, v8::Object::New(isolate));
metadata.Set("captureUpdateRect", val.capture_update_rect);
metadata.Set("regionCaptureRect", val.region_capture_rect);
metadata.Set("sourceSize", val.source_size);
metadata.Set("frameCount", val.frame_count);
dict.Set("metadata", ConvertToV8(isolate, metadata));
#if BUILDFLAG(IS_WIN) || BUILDFLAG(IS_MAC)
auto handle_buf = node::Buffer::Copy(
isolate,
reinterpret_cast<char*>(
const_cast<uintptr_t*>(&val.shared_texture_handle)),
sizeof(val.shared_texture_handle));
dict.Set("sharedTextureHandle", handle_buf.ToLocalChecked());
#elif BUILDFLAG(IS_LINUX)
auto v8_planes = base::ToVector(val.planes, [isolate](const auto& plane) {
gin::Dictionary v8_plane(isolate, v8::Object::New(isolate));
v8_plane.Set("stride", plane.stride);
v8_plane.Set("offset", plane.offset);
v8_plane.Set("size", plane.size);
v8_plane.Set("fd", plane.fd);
return v8_plane;
});
dict.Set("planes", v8_planes);
dict.Set("modifier", base::NumberToString(val.modifier));
#endif
root.Set("textureInfo", ConvertToV8(isolate, dict));
auto root_local = ConvertToV8(isolate, root);
  // Create a persistent reference to the object so that the monitor can be
  // checked again when GC collects it.
auto* tex_persistent = monitor->CreatePersistent(isolate, root_local);
tex_persistent->SetWeak(
monitor,
[](const v8::WeakCallbackInfo<OffscreenReleaseHolderMonitor>& data) {
auto* monitor = data.GetParameter();
if (!monitor->IsTextureReleased()) {
          // The user did not explicitly release the texture; emit the
          // warning from the second-pass callback.
data.SetSecondPassCallback([](const v8::WeakCallbackInfo<
OffscreenReleaseHolderMonitor>& data) {
auto* iso = data.GetIsolate();
node::Environment* env = node::Environment::GetCurrent(iso);
// Emit warning only once
static std::once_flag flag;
std::call_once(flag, [=] {
electron::EmitWarning(
env,
"[OSR TEXTURE LEAKED] When using OSR with "
"`useSharedTexture`, `texture.release()` "
"must be called explicitly as soon as the texture is "
"copied to your rendering system. "
"Otherwise, it will soon drain the underlying "
"framebuffer and prevent future frames from being generated.",
"SharedTextureOSRNotReleased");
});
});
}
// We are responsible for resetting the persistent handle.
monitor->ResetPersistent();
// Finally, release the holder monitor.
delete monitor;
},
v8::WeakCallbackType::kParameter);
return root_local;
}
} // namespace gin
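
From the JS side, the converter above yields a `texture` object carrying `textureInfo` and a `release()` function. A minimal TypeScript sketch of the intended consumer pattern; the mock types here are illustrative, not Electron's actual typings:

```typescript
// Illustrative mock of the converter's output shape; the real object is
// produced natively by Converter<OffscreenSharedTextureValue>::ToV8.
type TextureInfo = { pixelFormat: string; widgetType: string };
type OsrTexture = { textureInfo: TextureInfo; release: () => void };

function makeMockTexture(): { tex: OsrTexture; released: () => boolean } {
  let released = false;
  const tex: OsrTexture = {
    textureInfo: { pixelFormat: 'bgra', widgetType: 'frame' },
    release: () => { released = true; }
  };
  return { tex, released: () => released };
}

// Consumer pattern: copy the texture first, then release in `finally` so
// the capturer's frame pool is never drained, even if the copy throws.
function consume(tex: OsrTexture, copy: (info: TextureInfo) => void): void {
  try {
    copy(tex.textureInfo);
  } finally {
    tex.release();
  }
}
```

Releasing in `finally` mirrors the leak warning emitted above: once the frame pool is exhausted, no further frames are generated.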

View file

@ -0,0 +1,23 @@

// Copyright (c) 2024 GitHub, Inc.
// Use of this source code is governed by the MIT license that can be found in
// the LICENSE file.
#ifndef ELECTRON_SHELL_COMMON_GIN_CONVERTERS_OSR_CONVERTER_H_
#define ELECTRON_SHELL_COMMON_GIN_CONVERTERS_OSR_CONVERTER_H_
#include "gin/converter.h"
#include "shell/browser/osr/osr_paint_event.h"
namespace gin {
template <>
struct Converter<electron::OffscreenSharedTextureValue> {
static v8::Local<v8::Value> ToV8(
v8::Isolate* isolate,
const electron::OffscreenSharedTextureValue& val);
};
} // namespace gin
#endif // ELECTRON_SHELL_COMMON_GIN_CONVERTERS_OSR_CONVERTER_H_

View file

@ -153,6 +153,8 @@ const char kAllowRunningInsecureContent[] = "allowRunningInsecureContent";
const char kOffscreen[] = "offscreen";
const char kUseSharedTexture[] = "useSharedTexture";
const char kNodeIntegrationInSubFrames[] = "nodeIntegrationInSubFrames";
// Disable window resizing when HTML Fullscreen API is activated.

View file

@ -79,6 +79,7 @@ extern const char kSandbox[];
extern const char kWebSecurity[];
extern const char kAllowRunningInsecureContent[];
extern const char kOffscreen[];
extern const char kUseSharedTexture[];
extern const char kNodeIntegrationInSubFrames[];
extern const char kDisableHtmlFullscreenWindowResize[];
extern const char kJavaScript[];

View file

@ -15,6 +15,7 @@ import { HexColors, hasCapturableScreen, ScreenCapture } from './lib/screen-help
import { once } from 'node:events';
import { setTimeout } from 'node:timers/promises';
import { setTimeout as syncSetTimeout } from 'node:timers';
import { nativeImage } from 'electron';
const fixtures = path.resolve(__dirname, 'fixtures');
const mainFixtures = path.resolve(__dirname, 'fixtures');
@ -374,7 +375,7 @@ describe('BrowserWindow module', () => {
it('should emit did-fail-load event for files that do not exist', async () => {
const didFailLoad = once(w.webContents, 'did-fail-load');
w.loadURL('file://a.txt');
const [, code, desc,, isMainFrame] = await didFailLoad;
const [, code, desc, , isMainFrame] = await didFailLoad;
expect(code).to.equal(-6);
expect(desc).to.equal('ERR_FILE_NOT_FOUND');
expect(isMainFrame).to.equal(true);
@ -382,7 +383,7 @@ describe('BrowserWindow module', () => {
it('should emit did-fail-load event for invalid URL', async () => {
const didFailLoad = once(w.webContents, 'did-fail-load');
w.loadURL('http://example:port');
const [, code, desc,, isMainFrame] = await didFailLoad;
const [, code, desc, , isMainFrame] = await didFailLoad;
expect(desc).to.equal('ERR_INVALID_URL');
expect(code).to.equal(-300);
expect(isMainFrame).to.equal(true);
@ -399,7 +400,7 @@ describe('BrowserWindow module', () => {
it('should set `mainFrame = false` on did-fail-load events in iframes', async () => {
const didFailLoad = once(w.webContents, 'did-fail-load');
w.loadFile(path.join(fixtures, 'api', 'did-fail-load-iframe.html'));
const [,,,, isMainFrame] = await didFailLoad;
const [, , , , isMainFrame] = await didFailLoad;
expect(isMainFrame).to.equal(false);
});
it('does not crash in did-fail-provisional-load handler', (done) => {
@ -413,7 +414,7 @@ describe('BrowserWindow module', () => {
const data = Buffer.alloc(2 * 1024 * 1024).toString('base64');
const didFailLoad = once(w.webContents, 'did-fail-load');
w.loadURL(`data:image/png;base64,${data}`);
const [, code, desc,, isMainFrame] = await didFailLoad;
const [, code, desc, , isMainFrame] = await didFailLoad;
expect(desc).to.equal('ERR_INVALID_URL');
expect(code).to.equal(-300);
expect(isMainFrame).to.equal(true);
@ -4542,7 +4543,7 @@ describe('BrowserWindow module', () => {
fs.unlinkSync(savePageHtmlPath);
fs.rmdirSync(path.join(savePageDir, 'save_page_files'));
fs.rmdirSync(savePageDir);
} catch {}
} catch { }
});
it('should throw when passing relative paths', async () => {
@ -4590,7 +4591,7 @@ describe('BrowserWindow module', () => {
try {
await fs.promises.unlink(savePageMHTMLPath);
await fs.promises.rmdir(tmpDir);
} catch {}
} catch { }
});
it('should save page to disk with HTMLComplete', async () => {
@ -6367,7 +6368,7 @@ describe('BrowserWindow module', () => {
it('creates offscreen window with correct size', async () => {
const paint = once(w.webContents, 'paint') as Promise<[any, Electron.Rectangle, Electron.NativeImage]>;
w.loadFile(path.join(fixtures, 'api', 'offscreen-rendering.html'));
const [,, data] = await paint;
const [, , data] = await paint;
expect(data.constructor.name).to.equal('NativeImage');
expect(data.isEmpty()).to.be.false('data is empty');
const size = data.getSize();
@ -6465,6 +6466,52 @@ describe('BrowserWindow module', () => {
});
});
describe('offscreen rendering image', () => {
afterEach(closeAllWindows);
const imagePath = path.join(fixtures, 'assets', 'osr.png');
const targetImage = nativeImage.createFromPath(imagePath);
const nativeModulesEnabled = !process.env.ELECTRON_SKIP_NATIVE_MODULE_TESTS;
ifit(nativeModulesEnabled && ['win32'].includes(process.platform))('use shared texture, hardware acceleration enabled', (done) => {
const { ExtractPixels, InitializeGpu } = require('@electron-ci/osr-gpu');
try {
InitializeGpu();
} catch (e) {
console.log('Failed to initialize GPU, this spec needs a valid GPU device. Skipping...');
console.error(e);
done();
return;
}
const w = new BrowserWindow({
show: false,
webPreferences: {
offscreen: {
useSharedTexture: true
}
},
transparent: true,
frame: false,
width: 128,
height: 128
});
w.webContents.once('paint', async (e, dirtyRect) => {
try {
expect(e.texture).to.be.not.null();
const pixels = ExtractPixels(e.texture!.textureInfo);
const img = nativeImage.createFromBitmap(pixels, { width: dirtyRect.width, height: dirtyRect.height, scaleFactor: 1 });
expect(img.toBitmap().equals(targetImage.toBitmap())).to.equal(true);
done();
} catch (e) {
done(e);
}
});
w.loadFile(imagePath);
});
});
describe('"transparent" option', () => {
afterEach(closeAllWindows);

BIN  spec/fixtures/assets/osr.png (vendored, new file; 9 KiB, binary not shown)

View file

@ -0,0 +1,16 @@
{
"targets": [
{
"target_name": "osr-gpu",
"sources": ['napi_utils.h'],
"conditions": [
['OS=="win"', {
'sources': ['binding_win.cc'],
'link_settings': {
'libraries': ['dxgi.lib', 'd3d11.lib', 'dxguid.lib'],
}
}],
],
}
]
}

View file

@ -0,0 +1,197 @@
#include <d3d11_1.h>
#include <dxgi1_2.h>
#include <js_native_api.h>
#include <node_api.h>
#include <wrl/client.h>
#include <iostream>
#include <string>
#include "napi_utils.h"
namespace {
Microsoft::WRL::ComPtr<ID3D11Device> device = nullptr;
Microsoft::WRL::ComPtr<ID3D11Device1> device1 = nullptr;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context = nullptr;
UINT cached_width = 0;
UINT cached_height = 0;
Microsoft::WRL::ComPtr<ID3D11Texture2D> cached_staging_texture = nullptr;
napi_value ExtractPixels(napi_env env, napi_callback_info info) {
size_t argc = 1;
napi_value args[1];
napi_status status;
status = napi_get_cb_info(env, info, &argc, args, NULL, NULL);
if (status != napi_ok)
return nullptr;
  if (argc != 1) {
    napi_throw_error(env, nullptr,
                     "Wrong number of arguments, expected textureInfo");
    return nullptr;
  }
auto textureInfo = args[0];
auto widgetType = NAPI_GET_PROPERTY_VALUE_STRING(textureInfo, "widgetType");
auto pixelFormat = NAPI_GET_PROPERTY_VALUE_STRING(textureInfo, "pixelFormat");
auto sharedTextureHandle =
NAPI_GET_PROPERTY_VALUE(textureInfo, "sharedTextureHandle");
size_t handleBufferSize;
uint8_t* handleBufferData;
napi_get_buffer_info(env, sharedTextureHandle,
reinterpret_cast<void**>(&handleBufferData),
&handleBufferSize);
auto handle = *reinterpret_cast<HANDLE*>(handleBufferData);
std::cout << "ExtractPixels widgetType=" << widgetType
<< " pixelFormat=" << pixelFormat
<< " sharedTextureHandle=" << handle << std::endl;
Microsoft::WRL::ComPtr<ID3D11Texture2D> shared_texture = nullptr;
HRESULT hr =
device1->OpenSharedResource1(handle, IID_PPV_ARGS(&shared_texture));
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "Failed to open shared texture resource");
return nullptr;
}
// Extract the texture description
D3D11_TEXTURE2D_DESC desc;
shared_texture->GetDesc(&desc);
// Cache the staging texture if it does not exist or size has changed
if (!cached_staging_texture || cached_width != desc.Width ||
cached_height != desc.Height) {
    // Drop the ComPtr's reference via Reset(); operator& in CreateTexture2D
    // below rebinds it, and a raw Release() here would double-release.
    cached_staging_texture.Reset();
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.MiscFlags = 0;
std::cout << "Create staging Texture2D width=" << desc.Width
<< " height=" << desc.Height << std::endl;
hr = device->CreateTexture2D(&desc, nullptr, &cached_staging_texture);
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "Failed to create staging texture");
return nullptr;
}
cached_width = desc.Width;
cached_height = desc.Height;
}
// Copy the shared texture to the staging texture
context->CopyResource(cached_staging_texture.Get(), shared_texture.Get());
// Calculate the size of the buffer needed to hold the pixel data
// 4 bytes per pixel
size_t bufferSize = desc.Width * desc.Height * 4;
// Create a NAPI buffer to hold the pixel data
napi_value result;
void* resultData;
status = napi_create_buffer(env, bufferSize, &resultData, &result);
if (status != napi_ok) {
napi_throw_error(env, "osr-gpu", "Failed to create buffer");
return nullptr;
}
// Map the staging texture to read the pixel data
D3D11_MAPPED_SUBRESOURCE mappedResource;
hr = context->Map(cached_staging_texture.Get(), 0, D3D11_MAP_READ, 0,
&mappedResource);
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "Failed to map the staging texture");
return nullptr;
}
// Copy the pixel data from the mapped resource to the NAPI buffer
const uint8_t* srcData = static_cast<const uint8_t*>(mappedResource.pData);
uint8_t* destData = static_cast<uint8_t*>(resultData);
for (UINT row = 0; row < desc.Height; ++row) {
memcpy(destData + row * desc.Width * 4,
srcData + row * mappedResource.RowPitch, desc.Width * 4);
}
// Unmap the staging texture
context->Unmap(cached_staging_texture.Get(), 0);
return result;
}
napi_value InitializeGpu(napi_env env, napi_callback_info info) {
HRESULT hr;
// Feature levels supported
D3D_FEATURE_LEVEL feature_levels[] = {D3D_FEATURE_LEVEL_11_1};
UINT num_feature_levels = ARRAYSIZE(feature_levels);
D3D_FEATURE_LEVEL feature_level;
// This flag adds support for surfaces with a different color channel ordering
// than the default. It is required for compatibility with Direct2D.
UINT creation_flags =
D3D11_CREATE_DEVICE_BGRA_SUPPORT | D3D11_CREATE_DEVICE_DEBUG;
  // We need DXGI to open the shared texture.
Microsoft::WRL::ComPtr<IDXGIFactory2> dxgi_factory = nullptr;
Microsoft::WRL::ComPtr<IDXGIAdapter> adapter = nullptr;
hr = CreateDXGIFactory(IID_IDXGIFactory2, (void**)&dxgi_factory);
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "CreateDXGIFactory failed");
return nullptr;
}
hr = dxgi_factory->EnumAdapters(0, &adapter);
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "EnumAdapters failed");
return nullptr;
}
DXGI_ADAPTER_DESC adapter_desc;
adapter->GetDesc(&adapter_desc);
std::wcout << "Initializing DirectX with adapter: "
<< adapter_desc.Description << std::endl;
hr = D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr,
creation_flags, feature_levels, num_feature_levels,
D3D11_SDK_VERSION, &device, &feature_level, &context);
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "D3D11CreateDevice failed");
return nullptr;
}
hr = device->QueryInterface(IID_PPV_ARGS(&device1));
if (FAILED(hr)) {
napi_throw_error(env, "osr-gpu", "Failed to open d3d11_1 device");
return nullptr;
}
return nullptr;
}
napi_value Init(napi_env env, napi_value exports) {
napi_status status;
napi_property_descriptor descriptors[] = {
{"ExtractPixels", NULL, ExtractPixels, NULL, NULL, NULL, napi_default,
NULL},
{"InitializeGpu", NULL, InitializeGpu, NULL, NULL, NULL, napi_default,
NULL}};
status = napi_define_properties(
env, exports, sizeof(descriptors) / sizeof(*descriptors), descriptors);
if (status != napi_ok)
return NULL;
std::cout << "Initialized osr-gpu native module" << std::endl;
return exports;
}
} // namespace
NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
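
The row-by-row copy in `ExtractPixels` exists because D3D11 staging textures pad each row to `RowPitch` bytes. The same de-pitching step as a standalone TypeScript sketch (names are illustrative):

```typescript
// Copy a pitched source image (rowPitch bytes per row, with
// rowPitch >= width * bpp) into a tightly packed destination buffer,
// mirroring the memcpy loop in ExtractPixels above.
function unpitch(src: Uint8Array, width: number, height: number,
                 rowPitch: number, bpp: number = 4): Uint8Array {
  const dst = new Uint8Array(width * bpp * height);
  for (let row = 0; row < height; row++) {
    const start = row * rowPitch;
    // Only the first width * bpp bytes of each row carry pixel data.
    dst.set(src.subarray(start, start + width * bpp), row * width * bpp);
  }
  return dst;
}
```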

View file

@ -0,0 +1 @@
module.exports = require('../build/Release/osr-gpu.node');

View file

@ -0,0 +1,33 @@
#define NAPI_CREATE_STRING(str) \
[&]() { \
napi_value value; \
napi_create_string_utf8(env, str, NAPI_AUTO_LENGTH, &value); \
return value; \
}()
#define NAPI_GET_PROPERTY_VALUE(obj, field) \
[&]() { \
napi_value value; \
napi_get_property(env, obj, NAPI_CREATE_STRING(field), &value); \
return value; \
}()
#define NAPI_GET_PROPERTY_VALUE_STRING(obj, field)                    \
  [&]() {                                                             \
    auto val = NAPI_GET_PROPERTY_VALUE(obj, field);                   \
    size_t size;                                                      \
    napi_get_value_string_utf8(env, val, nullptr, 0, &size);          \
    std::string result(size, '\0');                                   \
    napi_get_value_string_utf8(env, val, result.data(), size + 1,     \
                               &size);                                \
    return result;                                                    \
  }()
#define NAPI_GET_PROPERTY_VALUE_BUFFER(obj, field)                    \
  [&]() {                                                             \
    auto val = NAPI_GET_PROPERTY_VALUE(obj, field);                   \
    void* data;                                                       \
    size_t size;                                                      \
    napi_get_buffer_info(env, val, &data, &size);                     \
    return std::string(static_cast<char*>(data), size);               \
  }()

View file

@ -0,0 +1,5 @@
{
"main": "./lib/osr-gpu.js",
"name": "@electron-ci/osr-gpu",
"version": "0.0.1"
}

View file

@ -10,6 +10,7 @@
"@electron-ci/echo": "file:./fixtures/native-addon/echo",
"@electron-ci/is-valid-window": "file:./is-valid-window",
"@electron-ci/uv-dlopen": "file:./fixtures/native-addon/uv-dlopen/",
"@electron-ci/osr-gpu": "file:./fixtures/native-addon/osr-gpu/",
"@electron/fuses": "^1.8.0",
"@electron/packager": "^18.3.2",
"@marshallofsound/mocha-appveyor-reporter": "^0.4.3",

View file

@ -10,6 +10,9 @@
dependencies:
nan "2.x"
"@electron-ci/osr-gpu@file:./fixtures/native-addon/osr-gpu":
version "0.0.1"
"@electron-ci/uv-dlopen@file:./fixtures/native-addon/uv-dlopen":
version "0.0.1"