b5ed025267
* build: use github actions for windows (#44136)
* build: test windows runner
* build: try build windows on windows?
* build: take win/cross changes
* build: use bash as default shell always
* build: configure git for windows build tools
* build: bash as default
* build: configure windows correctly
* build: use sha1sum
* build: force windows cipd init and python3 existence
* just pain
* build: restore cache on windows
* build: use build-tools gclient
* build: sync gclient vars to build windows job
* build: output depshash for debugging
* build: past sam was a silly goose
* build: depshash logging
* build: force lf endings for lock and DEPS
* build: platform strings are hard
* build: checkout on windows host
* sup
* no check
* idk
* sigh
* ...
* no double checkout
* build: yolo some stuff
* build: run gn-check for windows on linux hosts for speed
* use container...
* cry ?
* build: e d
* e d
* no log
* fix toolchain on windows cross check
* build: use powershell to add mksnapshot_args
* build: enable x86 and arm64 windows builds too
* clean up
* maybe not needed
* build: keep action around for post step
* build: configure git global on win
* build: ia32 zip manifest
* build: no patch depot_tools for tests
* build: get arm64 windows closer to working
* build: windows tar is ass
* 32 bit on 32 bit
* maybe bash
* build: set up nodejs
* correct windows sharding
* fix some spec runner stuff
* fix windows tests
* overwrite -Force
* sigh
* screen res
* wat
* logs
* ... more logs
* line endings will be the death of me
* remove 1080p force thing
* vsctools + logging
* disable some fullscreen tests on GHA
* no progress
* run all CI
* install visual studio on arm64
* windows hax for non windows
* maybe arm sdk
* clean up depshash logic
* build: use single check per platform
* ensure clean args
* fix loop
* remove debug
* update default build image sha for dispatch
* plzzzz
* one more try
* arm64 vctools
* sad
* build: fix non-dispatch windows gn check
* chore: debug datadog-ci location
* chore: update build-tools for newer toolchain
* chore: set path for datadog-ci
* try this
* chore: fixup gn check
* fixup gn-check some more
* fixup windows gn check
* chore: fixup windows gn check
* test: use cmd for Windows testing
* fixup use cmd for testing on Windows
* fixup windows GN check
* fixup npm config arch for x86
* Can we set test files via powershell
* fixup to set test files via powershell
* fixup set test files via powershell
* Don't check cross instance cache disk space on Windows
* Use separate step to set env variables for testing
* fixup Use separate step to set env variables for testing
* fixup Use separate step to set env variables for testing
* fixup Use separate step to set env variables for testing (AGAIN)
* use powershell if in powershell
* fixup use powershell if in powershell
* chore: remove no longer needed changes to depot_tools
xref: https://chromium-review.googlesource.com/c/chromium/tools/depot_tools/+/5669094
and https://chromium-review.googlesource.com/c/chromium/src/+/5844046
* chore: try using 7zip on Windows to extract tarball
* Revert "chore: try using 7zip on Windows to extract tarball"
This reverts commit c7432b6a37857fd0746b8f1776fbd1103dba0b85.
* test: debug failing tests on GHA windows
* fix: ftbfs when including simdjson in Node.js
(cherry picked from commit 48e44c40d61b7aa843a990d4e0c8dec676b4ce8f)
* chore: try to track down Windows testing hang
* use correct timeout
* try this
* see if this helps
* try to figure out why node is running
* shard tests to try to narrow down WOA lockup
* try to narrow down problem test
* Narrow down blocking test more
* do we need a combo to repro
* see if this cleans up the tests
* fixup navigator.usb test
* remove logging from problematic tests
* Revert "shard tests to try to narrow down WOA lockup"
This reverts commit a1806583769678491814cb8b008131c32be4e8fb.
* remove logging
* debug keyboard test
* add timeout for Windows since arm64 sometimes hangs
* see if this helps
* put back original timeout
* try to use screenCapture to get screenshots of what is going on on WOA
* try using electron screencapture to debug WOA hang
* chore: turn off privacy experience
* run screenshot on both shards
* fixup screencap
* try to narrow down hanging spec
* chore: cleanup servers left open
* cleanup tests
* Revert "try to narrow down hanging spec"
This reverts commit a0f959f5382f4012a9919ac535d42c5333eb7d5f.
* cleanup test debugging
* fixup extensions spec
* cleanup unneeded items
* run wtf with 2 shards instead of 6
* Revert "run wtf with 2 shards instead of 6"
This reverts commit ca2d282129ee42c535d80f9876d6fa0dc6c08344.
* debug windows version on woa
* dump more info
* Get detailed CPU info
* revert debugging
* use same args as AppVeyor WOA for GHA WOA
* fixup use same args as AppVeyor WOA for GHA WOA
* fixup use same args as AppVeyor WOA for GHA WOA
* try to track down which tests trigger hang
* one or more of these combinations should hang
* break up web contents spec to find hang
* further break down api-web-contents to find hang
* test: ensure all webContents are closed
* test: fix require is not defined error
* see if api-web-contents spec is now good
* test: ensure all webContents are closed
* Revert "try to track down which tests trigger hang"
This reverts commit 07298d6ffeb4873ef7615a8ec3d1a6696e354ff4.
* chore: use alternate location for windows toolchain
* Reapply "try to track down which tests trigger hang"
This reverts commit 0321f76d01069ef325339b6fe6ed39700eae2b6b.
* try to narrow down problem test
* fix TEST_SHARD env var
* no, really fix TEST_SHARD env var
* see if this fixes it
* test: cleanup any remaining windows and webcontents
* see if new cleanup helps
* dont destroy webcontents for now
* fixup dont destroy webcontents for now
* Only cleanup right before process.exit
* see if this fixes the hang
* actually destroy webcontents
* Revert "Reapply "try to track down which tests trigger hang""
This reverts commit cdee7de049ce6bb5f67bbcc64882c56aa2c73027.
* see if this helps
* Revert "see if this helps"
This reverts commit 9a15a69cf7dbc456db7a61efa5b6870535bae993.
* Is it all about the web contents?
* it is all about the webcontents
but which one?
* Narrow down problem webcontents test
* try to speed up git install on WOA
* disable problematic test on WOA
* remove debugging
* remove debugging from choco installs
* Revert "disable problematic test on WOA"
This reverts commit e060fb0839b73d53cfde1f8acdca634f8e267937.
* Revert "remove debugging"
This reverts commit f18dd8b1a555f56bb06d0ea996a6eff31b424bf1.
* run against all the tests in the failing shard
* don't run visibility tests first
* remove debugging
* 3 is a magic number
* Revert "3 is a magic number"
This reverts commit 36b91ccf9f03a4b34230cd69ceca482f7d8428c1.
* match what Appveyor runs exactly
* Revert "match what Appveyor runs exactly"
This reverts commit 7260dd432216c62696e4bc864930f17c857eabbe.
* chore: sort files alphabetically
* find out what spec is leaving stuff open
* chore: Checkout PR HEAD commit
instead of merge commit
* try using app.exit instead of process.exit
* test: cleanup BrowserWindows and webContents
* Revert "chore: sort files alphabetically"
This reverts commit d9e217ffb1522076e150fce9e43a31bf56716acb.
* chore: use win32 to match process.platform
Needed for build-tools to download from PRs
* chore: cache yarn dir
* fixup cache yarn
* fixup use win32 to match process.platform
* fixup use win32 to match process.platform
* fixup cache yarn
* Add debugging for WOA hang
* Add debugging for failing keyboard lock test
* Revert "Add debugging for WOA hang"
This reverts commit 8df03d568d15a269e4026140d1158e8cdf551dec.
* try using process.kill
* add more debugging to keyboard.lock test
* Revert "Add debugging for failing keyboard lock test"
* remove debugging
* test: disable keyboard.lock on Windows
* test: disable fullscreen tests on Windows
* test: only force test suite exit on WOA
* fixup test: only force test suite exit on WOA
* cleanup tests
* extract yarn caching/install to action
* try using bash to run windows tests
* remove left over debugging
* standardize on 'win' for Windows builds
* use 'x86' for arch for manifest files
* fixup try using bash to run windows tests
* fixup use 'x86' for arch for manifest files
* standardize on 'win' for Windows builds
* fixup use 'x86' for arch for manifest files
* fixup try using bash to run windows tests
---------
Co-authored-by: John Kleinschmidt <jkleinsc@electronjs.org>
Co-authored-by: Charles Kerr <charles@charleskerr.com>
(cherry picked from commit be1a3dce83)
* chore: update build tools to correct sha
---------
Co-authored-by: Samuel Attard <sam@electronjs.org>
#!/usr/bin/env node

const { ElectronVersions, Installer } = require('@electron/fiddle-core');

const chalk = require('chalk');
const { hashElement } = require('folder-hash');
const minimist = require('minimist');

const childProcess = require('node:child_process');
const crypto = require('node:crypto');
const fs = require('node:fs');
const os = require('node:os');
const path = require('node:path');

const unknownFlags = [];

const pass = chalk.green('✓');
const fail = chalk.red('✗');

const FAILURE_STATUS_KEY = 'Electron_Spec_Runner_Failures';

const args = minimist(process.argv, {
  string: ['runners', 'target', 'electronVersion'],
  unknown: arg => unknownFlags.push(arg)
});

const unknownArgs = [];
for (const flag of unknownFlags) {
  unknownArgs.push(flag);
  const onlyFlag = flag.replace(/^-+/, '');
  if (args[onlyFlag]) {
    unknownArgs.push(args[onlyFlag]);
  }
}

const utils = require('./lib/utils');
const { YARN_VERSION } = require('./yarn');

const BASE = path.resolve(__dirname, '../..');
const NPX_CMD = process.platform === 'win32' ? 'npx.cmd' : 'npx';

const runners = new Map([
  ['main', { description: 'Main process specs', run: runMainProcessElectronTests }]
]);

const specHashPath = path.resolve(__dirname, '../spec/.hash');

if (args.electronVersion) {
  if (args.runners && args.runners !== 'main') {
    console.log(`${fail} only 'main' runner can be used with --electronVersion`);
    process.exit(1);
  }

  args.runners = 'main';
}

let runnersToRun = null;
if (args.runners !== undefined) {
  runnersToRun = args.runners.split(',').filter(value => value);
  if (!runnersToRun.every(r => [...runners.keys()].includes(r))) {
    console.log(`${fail} ${runnersToRun} must be a subset of [${[...runners.keys()].join(' | ')}]`);
    process.exit(1);
  }
  console.log('Only running:', runnersToRun);
} else {
  console.log(`Triggering runners: ${[...runners.keys()].join(', ')}`);
}

async function main () {
  if (args.electronVersion) {
    const versions = await ElectronVersions.create();
    if (args.electronVersion === 'latest') {
      args.electronVersion = versions.latest.version;
    } else if (args.electronVersion.startsWith('latest@')) {
      const majorVersion = parseInt(args.electronVersion.slice('latest@'.length));
      const ver = versions.inMajor(majorVersion).slice(-1)[0];
      if (ver) {
        args.electronVersion = ver.version;
      } else {
        console.log(`${fail} '${majorVersion}' is not a recognized Electron major version`);
        process.exit(1);
      }
    } else if (!versions.isVersion(args.electronVersion)) {
      console.log(`${fail} '${args.electronVersion}' is not a recognized Electron version`);
      process.exit(1);
    }

    const versionString = `v${args.electronVersion}`;
    console.log(`Running against Electron ${chalk.green(versionString)}`);
  }

  const [lastSpecHash, lastSpecInstallHash] = loadLastSpecHash();
  const [currentSpecHash, currentSpecInstallHash] = await getSpecHash();
  const somethingChanged = (currentSpecHash !== lastSpecHash) ||
      (lastSpecInstallHash !== currentSpecInstallHash);

  if (somethingChanged) {
    await installSpecModules(path.resolve(__dirname, '..', 'spec'));
    await getSpecHash().then(saveSpecHash);
  }

  if (!fs.existsSync(path.resolve(__dirname, '../electron.d.ts'))) {
    console.log('Generating electron.d.ts as it is missing');
    generateTypeDefinitions();
  }

  await runElectronTests();
}

function generateTypeDefinitions () {
  const { status } = childProcess.spawnSync('npm', ['run', 'create-typescript-definitions'], {
    cwd: path.resolve(__dirname, '..'),
    stdio: 'inherit',
    shell: true
  });
  if (status !== 0) {
    throw new Error(`Electron typescript definition generation failed with exit code: ${status}.`);
  }
}

function loadLastSpecHash () {
  return fs.existsSync(specHashPath)
    ? fs.readFileSync(specHashPath, 'utf8').split('\n')
    : [null, null];
}

function saveSpecHash ([newSpecHash, newSpecInstallHash]) {
  fs.writeFileSync(specHashPath, `${newSpecHash}\n${newSpecInstallHash}`);
}

async function runElectronTests () {
  const errors = [];

  const testResultsDir = process.env.ELECTRON_TEST_RESULTS_DIR;
  for (const [runnerId, { description, run }] of runners) {
    if (runnersToRun && !runnersToRun.includes(runnerId)) {
      console.info('\nSkipping:', description);
      continue;
    }
    try {
      console.info('\nRunning:', description);
      if (testResultsDir) {
        process.env.MOCHA_FILE = path.join(testResultsDir, `test-results-${runnerId}.xml`);
      }
      await run();
    } catch (err) {
      errors.push([runnerId, err]);
    }
  }

  if (errors.length !== 0) {
    for (const err of errors) {
      console.error('\n\nRunner Failed:', err[0]);
      console.error(err[1]);
    }
    console.log(`${fail} Electron test runners have failed`);
    process.exit(1);
  }
}

async function asyncSpawn (exe, runnerArgs) {
  return new Promise((resolve, reject) => {
    let forceExitResult = 0;
    const child = childProcess.spawn(exe, runnerArgs, {
      cwd: path.resolve(__dirname, '../..')
    });
    child.stdout.pipe(process.stdout);
    child.stderr.pipe(process.stderr);
    if (process.env.ELECTRON_FORCE_TEST_SUITE_EXIT) {
      child.stdout.on('data', data => {
        const failureRE = RegExp(`${FAILURE_STATUS_KEY}: (\\d.*)`);
        const failures = data.toString().match(failureRE);
        if (failures) {
          forceExitResult = parseInt(failures[1], 10);
        }
      });
    }
    child.on('error', error => reject(error));
    child.on('close', (status, signal) => {
      let returnStatus = 0;
      if (process.env.ELECTRON_FORCE_TEST_SUITE_EXIT) {
        returnStatus = forceExitResult;
      } else {
        returnStatus = status;
      }
      resolve({ status: returnStatus, signal });
    });
  });
}

async function runTestUsingElectron (specDir, testName) {
  let exe;
  if (args.electronVersion) {
    const installer = new Installer();
    exe = await installer.install(args.electronVersion);
  } else {
    exe = path.resolve(BASE, utils.getElectronExec());
  }
  const runnerArgs = [`electron/${specDir}`, ...unknownArgs.slice(2)];
  if (process.platform === 'linux') {
    runnerArgs.unshift(path.resolve(__dirname, 'dbus_mock.py'), exe);
    exe = 'python3';
  }
  const { status, signal } = await asyncSpawn(exe, runnerArgs);
  if (status !== 0) {
    if (status) {
      const textStatus = process.platform === 'win32' ? `0x${status.toString(16)}` : status.toString();
      console.log(`${fail} Electron tests failed with code ${textStatus}.`);
    } else {
      console.log(`${fail} Electron tests failed with kill signal ${signal}.`);
    }
    process.exit(1);
  }
  console.log(`${pass} Electron ${testName} process tests passed.`);
}

async function runMainProcessElectronTests () {
  await runTestUsingElectron('spec', 'main');
}

async function installSpecModules (dir) {
  const env = {
    npm_config_msvs_version: '2022',
    ...process.env,
    CXXFLAGS: process.env.CXXFLAGS,
    npm_config_yes: 'true'
  };
  if (args.electronVersion) {
    env.npm_config_target = args.electronVersion;
    env.npm_config_disturl = 'https://electronjs.org/headers';
    env.npm_config_runtime = 'electron';
    env.npm_config_devdir = path.join(os.homedir(), '.electron-gyp');
    env.npm_config_build_from_source = 'true';
    const { status } = childProcess.spawnSync('npm', ['run', 'node-gyp-install', '--ensure'], {
      env,
      cwd: dir,
      stdio: 'inherit',
      shell: true
    });
    if (status !== 0) {
      console.log(`${fail} Failed to "npm run node-gyp-install" install in '${dir}'`);
      process.exit(1);
    }
  } else {
    env.npm_config_nodedir = path.resolve(BASE, `out/${utils.getOutDir({ shouldLog: true })}/gen/node_headers`);
  }
  if (fs.existsSync(path.resolve(dir, 'node_modules'))) {
    await fs.promises.rm(path.resolve(dir, 'node_modules'), { force: true, recursive: true });
  }
  const { status } = childProcess.spawnSync(NPX_CMD, [`yarn@${YARN_VERSION}`, 'install', '--frozen-lockfile'], {
    env,
    cwd: dir,
    stdio: 'inherit',
    shell: process.platform === 'win32'
  });
  if (status !== 0 && !process.env.IGNORE_YARN_INSTALL_ERROR) {
    console.log(`${fail} Failed to yarn install in '${dir}'`);
    process.exit(1);
  }
}

function getSpecHash () {
  return Promise.all([
    (async () => {
      const hasher = crypto.createHash('SHA256');
      hasher.update(fs.readFileSync(path.resolve(__dirname, '../spec/package.json')));
      hasher.update(fs.readFileSync(path.resolve(__dirname, '../spec/yarn.lock')));
      hasher.update(fs.readFileSync(path.resolve(__dirname, '../script/spec-runner.js')));
      return hasher.digest('hex');
    })(),
    (async () => {
      const specNodeModulesPath = path.resolve(__dirname, '../spec/node_modules');
      if (!fs.existsSync(specNodeModulesPath)) {
        return null;
      }
      const { hash } = await hashElement(specNodeModulesPath, {
        folders: {
          exclude: ['.bin']
        }
      });
      return hash;
    })()
  ]);
}

main().catch((error) => {
  console.error('An error occurred inside the spec runner:', error);
  process.exit(1);
});
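For reference, a few illustrative invocations, inferred only from the flags the script parses above (a sketch run from the repository root, not documented commands; unrecognized flags are simply forwarded to the spawned spec process):

  node script/spec-runner.js                            # run every registered runner against the local build
  node script/spec-runner.js --runners=main             # run only the main-process specs
  node script/spec-runner.js --electronVersion=latest   # install and test against the latest published Electron
  node script/spec-runner.js --electronVersion=latest@33 -g webContents   # '33' and '-g' are illustrative; extra args pass through to the spec process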