Previously we loaded both fs and child_process and then hooked into
the returned value, relying on the module cache to keep our modifications
and give them to everyone.
Loading child_process took in excess of 20ms, though, so instead of
loading it eagerly and then hooking in, we now intercept all Module load
requests; when the first one for `child_process` comes in, we wrap the
appropriate methods and then never touch it again.
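A minimal sketch of the lazy-wrapping idea, assuming a hypothetical `wrapChildProcessMethods` patch; the real hook lives in Electron's init code, but the shape is roughly:

```typescript
import * as Module from 'module';

// Hypothetical stand-in for whatever patching Electron applies to child_process.
function wrapChildProcessMethods (cp: typeof import('child_process')) {
  const originalFork = cp.fork;
  cp.fork = ((...args: Parameters<typeof originalFork>) => {
    // ...inject Electron-specific environment / arguments here...
    return originalFork(...args);
  }) as typeof cp.fork;
}

const originalLoad = (Module as any)._load;
let patched = false;

// Intercept every require(); only the first load of child_process gets wrapped,
// so nothing is paid up front and the module cache serves the patched copy.
(Module as any)._load = function (request: string, ...args: unknown[]) {
  const exports = originalLoad.call(this, request, ...args);
  if (!patched && request === 'child_process') {
    patched = true;
    wrapChildProcessMethods(exports);
  }
  return exports;
};
```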
* refactor: bundle the browser and renderer process electron code
* Bundles browser/init and renderer/init
* Improves load performance of main process by ~40%
* Improves load performance of renderer process by ~30%
* Prevents users from importing or "requiring" our internal logic such
as ipc-main-internal. This makes those message buses safer as they are
less accessible, though there is still some more work to be done to
lock down those buses completely.
* The electron.asar file now only contains 2 files; as a future
improvement maybe we can use atom_natives to ship these two files
embedded in the binary
* This also removes our dependency on browserify, which had some strange
edge cases that forced us to hack around require order and stopped us
from using certain ES6/7 features we should have been able to use
(async / await in some files in the sandboxed renderer init script)
TLDR: Things are faster and better :)
* fix: I really do not want to talk about it
* chore: add performance improvements from debugging
* fix: resolve the provided path so webpack thinks it is absolute
* chore: fixup per PR review
* fix: use webpack's ProvidePlugin to keep global, process and Buffer alive after deletion from the global scope for use in internal code
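A rough sketch of the ProvidePlugin approach; the provider module path below is a hypothetical stand-in for wherever the saved references are re-exported:

```typescript
import * as webpack from 'webpack';

const config: webpack.Configuration = {
  plugins: [
    new webpack.ProvidePlugin({
      // Any reference to these identifiers in bundled internal code is rewritten
      // to import the saved originals, so that code keeps working even after the
      // real globals have been deleted from the global scope.
      global: ['@electron/internal/common/webpack-provider', 'global'],
      process: ['@electron/internal/common/webpack-provider', 'process'],
      Buffer: ['@electron/internal/common/webpack-provider', 'Buffer']
    })
  ]
};

export default config;
```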
* fix: bundle worker/init as well to make node-in-workers work
* chore: update wording as per feedback
* chore: make the timers hack work when yarn is not used
We now create a new instance of atom::api::DesktopCapturer for every
request instead of weirdly re-using the same instance and queuing
requests. This means there is now a 1:1 relationship between a request
and its DesktopCapturer, so there is no race condition where the
observer for one request calls back before the observer of another.
This has been an issue ever since the backing APIs moved to worker
threads.
This also does a few things to ensure memory is managed correctly:
* Only ever listen to one event per-request, after that we wipe the emit
function to ignore all future events
* Ensures we clean up the window_capturer_, screen_capturer_ and
captured_sources_ in native land once the request is over.
This _in theory_ fixes a flake we've been seeing on CI where we try to
resolve the promise for a request that no longer exists.
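The real change is in the native atom::api::DesktopCapturer, but the per-request pattern looks roughly like this TypeScript sketch; `createCapturer` and the `Capturer` shape are simplified stand-ins:

```typescript
interface Source { id: string; name: string; }

interface Capturer {
  emit: ((event: string, ...args: any[]) => void) | null;
  startHandling (captureWindow: boolean, captureScreen: boolean): void;
}

// Stand-in factory: the real code constructs a fresh native capturer per request.
function createCapturer (): Capturer {
  const capturer: Capturer = {
    emit: null,
    startHandling (captureWindow, captureScreen) {
      // Placeholder: the native implementation enumerates sources on worker
      // threads and emits 'finished' (or 'error') for this capturer only.
      setImmediate(() => capturer.emit?.('finished', [] as Source[]));
    }
  };
  return capturer;
}

export function getSources (captureWindow: boolean, captureScreen: boolean): Promise<Source[]> {
  return new Promise((resolve, reject) => {
    const capturer = createCapturer();            // 1:1 request-to-capturer
    capturer.emit = (event, ...args) => {
      capturer.emit = null;                       // wipe emit: ignore all future events
      if (event === 'finished') resolve(args[0] as Source[]);
      else reject(new Error(String(args[0])));
    };
    capturer.startHandling(captureWindow, captureScreen);
  });
}
```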
* fix: add boringssl backport to support node upgrade
* fix: Update node_includes.h, add DCHECK macros
* fix: Update node Debug Options parser usage
* fix: Fix asar setup
* fix: using v8Util in isolated context
* fix: make "process" available in preload scripts
* fix: use proper options parser and remove setting of _breakFirstLine
_breakFirstLine was being set on the process, but that has changed in Node 12 and so is no longer needed; Node will handle it properly when --inspect-brk is provided.
* chore: update node dep sha
* fix: process.binding => _linkedBinding in sandboxed isolated preload
* fix: make original-fs work with streams
* build: override node module version
* fix: use _linkedBinding in content_script/init.js
* chore: update node ref in DEPS
* build: node_module_version should be 73
* spec: clean up after a failed window count assertion
Previously, when this assertion failed, all tests that ran after the
failed assertion also failed. This ensures that the assertion still
fails for the test that actually caused the issue, but cleans up the
left-over windows so that future tests do not fail.
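A hedged sketch of the cleanup idea, assuming a Mocha-style afterEach hook in the spec runner (the window-count assertion helper itself isn't shown):

```typescript
import { BrowserWindow } from 'electron';

afterEach(() => {
  // If the window-count assertion failed, the offending test may have left
  // windows open; destroy them so later tests start from a clean slate.
  for (const win of BrowserWindow.getAllWindows()) {
    if (!win.isDestroyed()) win.destroy();
  }
});
```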
* fix: maintain a ref count for objects sent over remote
Previously there was a race condition where a GC could occur in the
renderer process between the main process sending a meta.id and the
renderer pulling the proxy out of its WeakMap to stop it being GC'ed.
This fixes that race condition by maintaining a "sent" ref count in the
object registry and a "received" ref count in the object cache on the
renderer side. The deref request now sends the number of refs the
renderer thinks it owns; if the number does not match the value in the
object registry, it is assumed that there is an IPC message containing a
new reference in flight and this race condition was hit.
The browser-side ref count is then reduced and we wait for the new deref
message. This guarantees that an object will only be removed from the
registry once every reference we sent is known to have been unreffed.
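A minimal sketch of the browser-side check described above; `ObjectRegistry` and its fields are simplified stand-ins for Electron's internal registry:

```typescript
// Tracks how many references to each remote object have been sent to a renderer.
class ObjectRegistry {
  private records = new Map<number, { object: unknown, sentRefs: number }>();

  // Called for every object sent over remote; bumps the "sent" ref count.
  add (id: number, object: unknown) {
    const record = this.records.get(id) ?? { object, sentRefs: 0 };
    record.sentRefs += 1;
    this.records.set(id, record);
  }

  // Called when the renderer reports its proxy was GC'ed, passing how many
  // references the renderer believes it received.
  deref (id: number, rendererRefs: number) {
    const record = this.records.get(id);
    if (!record) return;
    if (record.sentRefs !== rendererRefs) {
      // A message carrying a new reference is still in flight: drop only the
      // refs the renderer knows about and wait for a later deref message.
      record.sentRefs -= rendererRefs;
      return;
    }
    this.records.delete(id);  // every sent reference has been accounted for
  }
}
```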
By default, Chromedriver sends remote-debugging-port=0 to let the
browser choose a free port to listen on. The chosen port is written to
a known file in the user data dir, which is passed to the app through
the CLI.
This PR does two things (sketched below):
1. Correctly passes the USER_DATA_DIR to the remote debugging server so
it knows where to write the file
2. Adds support for --user-data-dir, as we did not support that CLI
argument and Chromedriver relies on being able to tell the "browser"
where to write this file.
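A rough sketch of the flow this relies on, with a hypothetical binary path and profile dir; the port file name is assumed here to be Chromium's DevToolsActivePort:

```typescript
import { spawn } from 'child_process';
import { readFileSync } from 'fs';
import { join } from 'path';

const userDataDir = '/tmp/my-app-profile';            // hypothetical profile dir

// Let the browser pick a free debugging port and tell it where its profile lives.
const child = spawn('/path/to/electron-app', [        // hypothetical binary
  '--remote-debugging-port=0',
  `--user-data-dir=${userDataDir}`
]);

// Once the browser is up it writes the chosen port into the user data dir
// (assumed here to be the DevToolsActivePort file); the first line is the port.
function readDevToolsPort (): number {
  const contents = readFileSync(join(userDataDir, 'DevToolsActivePort'), 'utf8');
  return parseInt(contents.split('\n')[0], 10);
}

child.on('exit', () => { /* clean up the profile dir if needed */ });
```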
Fixes #17354