Port rchiodo's fixes (#199)
* Fix two problems with escaping (#14228)

* Remove unneeded cell keys when exporting (#14241)

* Remove transient output when exporting from the interactive window

* Add news entry

* Test was failing with true jupyter (#14261)

* Potential fix for ipywidget flakiness (#14281)

* Try running tests with space in root path (#14113)

* Add test with a space (only works on flake)

* Push to insiders.yml only

* Remove test that doesn't really do anything

* Remove unused bits

* Change path to have unicode too

* Get test to run

* Set root path differently

* Valid dir

* A different way

* Another way

* Try creating the directory first

* Another try

* Only one env

* Pass parameters correctly

* Try without unicode

* Set working directory directly on xvfb actions

* Working-directory not workingDirectory

* Cached ts files output

* Remove test with space branch for insiders

* Update vscode-python-pr-validation.yaml (#14285)

Remove missing branch? Might make it work again

* Get rid of AZDO yamls. Not used anymore

* Don't run on push (#14307)

* Fix random failures on functional tests (#14331)

* Splitting test log

* Fix problem with kernels ports being reused

* Make kernel launcher port round robin only for testing

* Make formatters change only apply during testing

* Add news entry

* Apply black formatting

* Code review feedback and skip flakey remote password test

* Another flakey test

* More CR feedback

* Missed a spot

* More of the functional tests are failing (#14346)

* Splitting test log

* Fix problem with kernels ports being reused

* Make kernel launcher port round robin only for testing

* Make formatters change only apply during testing

* Add news entry

* Apply black formatting

* Code review feedback and skip flakey remote password test

* Another flakey test

* More CR feedback

* Missed a spot

* Some more log parser changes and try to get interrupt to be less flakey

* Fix interrupt killing kernel and add more logging for export

* More logging

* See if updating fixes the problem

* Don't delete temp files

* Upload webview output to figure out trust failure

* Add name to step

* Try another way to upload

* Upload doesn't seem to work

* Try a different way to upload

* Try without webview logging as this makes the test pass

* Try fixing test another way. Any logging is making the test pass

* Compile error

* Add more logging to figure out why raw kernel did not start (#14374)

* Some more logging

* Some more logging

* Move PR changes into pr.yml

* Fix multiprocessing problems with setting __file__ (#14376)

* Fix multiprocessing problems with setting __file__

* Update news entry

* Problem with wait for idle not propagating outwards

* Fix unnecessary ask for python extension install

* Don't error on warning for kernel install
rchiodo authored Oct 12, 2020
1 parent b993b34 commit 60fbc79
Showing 53 changed files with 651 additions and 1,381 deletions.
57 changes: 12 additions & 45 deletions .github/workflows/pr.yml
@@ -21,7 +21,7 @@ env:
COVERAGE_REPORTS: tests-coverage-reports
CI_PYTHON_PATH: python
TEST_RESULTS_DIRECTORY: .
TEST_RESULTS_GLOB: '**/test-results.xml'
TEST_RESULTS_GLOB: '**/test-results*.xml'

jobs:
build-vsix:
@@ -201,7 +201,7 @@ jobs:
check_name: Ts-Unit Test Report

tests:
name: Tests (with Python)
name: Functional Jupyter Tests
runs-on: ${{ matrix.os }}
if: github.repository == 'microsoft/vscode-jupyter'
strategy:
@@ -210,13 +210,8 @@
# We're not running CI on macOS for now because it's one less matrix entry to lower the number of runners used,
# macOS runners are expensive, and we assume that Ubuntu is enough to cover the UNIX case.
os: [ubuntu-latest]
python: [3.8] # Use flaky tests to run against more versions of Python.
# python-unit: Python tests
# functional: Tests with mocked VS Code, & mocked Python)
# functional-with-jupyter: Tests with mocked VS Code & real jupyter
# single-workspace: Tests with VS Code
# vscode: Tests with VS Code, Python extension & real Jupyter
test-suite: [python-unit, functional, test-based-on-pr-body]
python: [3.8]
test-suite: [group1, group2, group3, group4]
steps:
- name: Checkout
uses: actions/checkout@v2
@@ -240,7 +235,7 @@
# Caching (https://github.com/actions/cache/blob/main/examples.md#python---pip
- name: Cache pip on linux
uses: actions/cache@v2
if: startsWith(matrix.test-suite, 'functional') && matrix.os == 'ubuntu-latest'
if: matrix.os == 'ubuntu-latest'
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{env.PYTHON_VERSION}}-${{ hashFiles('requirements.txt') }}-${{hashFiles('build/debugger-install-requirements.txt')}}-${{hashFiles('test-requirements.txt')}}-${{hashFiles('ipython-test-requirements.txt')}}-${{hashFiles('functional-test-requirements.txt')}}-${{hashFiles('conda-functional-requirements.txt')}}
@@ -249,7 +244,7 @@
- name: Cache pip on mac
uses: actions/cache@v2
if: startsWith(matrix.test-suite, 'functional') && matrix.os == 'macos-latest'
if: matrix.os == 'macos-latest'
with:
path: ~/Library/Caches/pip
key: ${{ runner.os }}-pip-${{env.PYTHON_VERSION}}-${{ hashFiles('requirements.txt') }}-${{hashFiles('build/debugger-install-requirements.txt')}}-${{hashFiles('test-requirements.txt')}}-${{hashFiles('ipython-test-requirements.txt')}}-${{hashFiles('functional-test-requirements.txt')}}-${{hashFiles('conda-functional-requirements.txt')}}
@@ -258,7 +253,7 @@
- name: Cache pip on windows
uses: actions/cache@v2
if: startsWith(matrix.test-suite, 'functional') && matrix.os == 'windows-latest'
if: matrix.os == 'windows-latest'
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{env.PYTHON_VERSION}}-${{ hashFiles('requirements.txt') }}-${{hashFiles('build/debugger-install-requirements.txt')}}-${{hashFiles('test-requirements.txt')}}-${{hashFiles('ipython-test-requirements.txt')}}-${{hashFiles('functional-test-requirements.txt')}}-${{hashFiles('conda-functional-requirements.txt')}}
@@ -299,12 +294,12 @@ jobs:
# For faster/better builds of sdists.
- run: python -m pip install wheel
shell: bash
if: matrix.test-suite != 'test-based-on-pr-body' || contains(github.event.pull_request.body, '[x] Run ')

# debugpy is not shipped, only installed for local tests.
# In production, we get debugpy from python extension.
- name: Install functional test requirements
run: |
python -m pip --disable-pip-version-check install -t ./pythonFiles/lib/python --no-cache-dir --implementation py --no-deps --upgrade -r ./requirements.txt
python -m pip --disable-pip-version-check install -r build/debugger-install-requirements.txt
python ./pythonFiles/install_debugpy.py
python -m pip install numpy
@@ -314,56 +309,28 @@
python -m pip install --upgrade -r ./build/conda-functional-requirements.txt
python -m ipykernel install --user
# This step is slow.
# If running the placeholder suite & not required to run any tests, then don't install python dependencies.
if: matrix.test-suite != 'test-based-on-pr-body' || contains(github.event.pull_request.body, '[x] Run ')

- name: Install dependencies (npm ci)
run: npm ci --prefer-offline
# This step is slow.
# If running the placeholder suite & not required to run any tests, then don't install python dependencies.
if: matrix.test-suite != 'test-based-on-pr-body' || contains(github.event.pull_request.body, '[x] Run ')

# Run the Python and IPython tests in our codebase.
- name: Run Python and IPython unit tests
run: |
python pythonFiles/tests/run_all.py
python -m IPython pythonFiles/tests/run_all.py
if: matrix.test-suite == 'python-unit'

- name: Compile if not cached
run: npx gulp prePublishNonBundle
# If running the placeholder suite & not required to run any tests, then don't compile.
# if: steps.out-cache.outputs.cache-hit == false && (matrix.test-suite != 'test-based-on-pr-body' || contains(github.event.pull_request.body, '[x] Run '))
if: matrix.test-suite != 'test-based-on-pr-body' || contains(github.event.pull_request.body, '[x] Run ')

- name: Run functional tests
run: npm run test:functional
id: test_functional
if: matrix.test-suite == 'functional'

- name: Run single-workspace tests
env:
CI_PYTHON_VERSION: ${{matrix.python}}
uses: GabrielBB/xvfb-action@v1.4
with:
run: npm run testSingleWorkspace
if: matrix.test-suite == 'single-workspace' || (matrix.test-suite == 'test-based-on-pr-body' && contains(github.event.pull_request.body, '[x] Run single-workspace test'))

- name: Run functional tests with Jupyter
run: npm run test:functional -- --grep="MimeTypes"
id: test_functional_jupyter
run: npm run test:functional:parallel -- --${{matrix.test-suite}}
env:
VSCODE_PYTHON_ROLLING: 1
VSC_PYTHON_FORCE_LOGGING: 1
if: matrix.test-suite == 'functional-with-jupyter' || (matrix.test-suite == 'test-based-on-pr-body' && contains(github.event.pull_request.body, '[x] Run functional-with-jupyter test'))
id: test_functional_group

- name: Publish Test Report
uses: scacap/action-surefire-report@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
report_paths: ${{ env.TEST_RESULTS_GLOB }}
check_name: Functional Test Report
if: (steps.test_functional.outcome == 'failure' || steps.test_functional_jupyter.outcome == 'failure') && failure()
check_name: Functional Test Report ${{matrix.test-suite}}
if: steps.test_functional_group.outcome == 'failure' && failure()

- name: Run DataScience tests with VSCode & Jupyter
uses: GabrielBB/xvfb-action@v1.4
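Each matrix entry (`group1` … `group4`) is forwarded to the runner as a CLI flag via `npm run test:functional:parallel -- --${{matrix.test-suite}}`. A minimal sketch of how such a flag can be mapped to a zero-based group index, using a hypothetical `parseGroupIndex` helper (the real parsing lives in `build/ci/scripts/runFunctionalTests.js`, part of this same commit):

```javascript
// Hypothetical helper: map a "--groupN" flag to a zero-based group index.
// Returns -1 when no group flag is present, meaning "run every test file".
function parseGroupIndex(argv) {
    const groupArg = argv.find((a) => a.startsWith('--group'));
    return groupArg ? parseInt(groupArg.slice('--group'.length), 10) - 1 : -1;
}

console.log(parseGroupIndex(['--group2', '--grep=Foo'])); // 1
console.log(parseGroupIndex(['--grep=Foo'])); // -1
```

The `-1` sentinel mirrors the runner's fallback: when no group flag is given, the whole file list is run rather than a single bucket.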
4 changes: 3 additions & 1 deletion .vscode/launch.json
@@ -296,7 +296,9 @@
// Some tests require multiple python interpreters (do not rely on discovery for functional tests, be explicit).
"XCI_PYTHON_PATH2": "<Python Path>",
// Remove 'X' prefix to dump output for debugger. Directory has to exist prior to launch
"XDEBUGPY_LOG_DIR": "${workspaceRoot}/tmp/Debug_Output"
"XDEBUGPY_LOG_DIR": "${workspaceRoot}/tmp/Debug_Output",
// Remove 'X' prefix to dump webview redux action log
"XVSC_PYTHON_WEBVIEW_LOG_FILE": "${workspaceRoot}/test-webview.log"
},
"outFiles": [
"${workspaceFolder}/out/**/*.js",
32 changes: 31 additions & 1 deletion ThirdPartyNotices-Repository.txt
@@ -20,6 +20,7 @@ Microsoft Python extension for Visual Studio Code incorporates third party materi
16. ipywidgets (https://github.com/jupyter-widgets)
17. vscode-cpptools (https://github.com/microsoft/vscode-cpptools)
18. font-awesome (https://github.com/FortAwesome/Font-Awesome)
19. mocha (https://github.com/mochajs/mocha)

%%
Go for Visual Studio Code NOTICES, INFORMATION, AND LICENSE BEGIN HERE
@@ -1167,4 +1168,33 @@ trademarks does not indicate endorsement of the trademark holder by Font
Awesome, nor vice versa. **Please do not use brand logos for any purpose except
to represent the company, product, or service to which they refer.**
=========================================
END OF font-awesome NOTICES, INFORMATION, AND LICENSE
END OF font-awesome NOTICES, INFORMATION, AND LICENSE

%% mocha NOTICES, INFORMATION, AND LICENSE BEGIN HERE
=========================================

(The MIT License)

Copyright (c) 2011-2020 OpenJS Foundation and contributors, https://openjsf.org

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

=========================================
END OF mocha NOTICES, INFORMATION, AND LICENSE
2 changes: 1 addition & 1 deletion build/.mocha-multi-reporters.config
@@ -1,5 +1,5 @@
{
"reporterEnabled": "spec,mocha-junit-reporter",
"reporterEnabled": "./build/ci/scripts/spec_with_pid,mocha-junit-reporter",
"mochaJunitReporterReporterOptions": {
"includePending": true
}
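The reporter swap above replaces mocha's stock `spec` console reporter with the repo's `./build/ci/scripts/spec_with_pid`, so output from parallel test processes can be told apart. That reporter itself is not shown in this diff; the sketch below illustrates only the underlying idea (the names `withPid` and `splitByPid` are made up for illustration): tag each output line with `process.pid`, then regroup interleaved lines by pid afterwards.

```javascript
// Tag a log line with this process's pid so a log parser can later
// attribute interleaved output from parallel mocha processes.
function withPid(line) {
    return `${process.pid}> ${line}`;
}

// Regroup pid-prefixed lines: returns a Map of pid -> lines from that process.
function splitByPid(lines) {
    const byPid = new Map();
    for (const line of lines) {
        const match = /^(\d+)> (.*)$/.exec(line);
        if (!match) continue; // skip unprefixed lines
        const [, pid, rest] = match;
        if (!byPid.has(pid)) byPid.set(pid, []);
        byPid.get(pid).push(rest);
    }
    return byPid;
}
```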
139 changes: 109 additions & 30 deletions build/ci/scripts/runFunctionalTests.js
@@ -8,20 +8,93 @@
var path = require('path');
var glob = require('glob');
var child_process = require('child_process');
var fs = require('fs-extra');

// Create a base for the output file
var originalMochaFile = process.env['MOCHA_FILE'];
var mochaFile = originalMochaFile || './test-results.xml';
var mochaBaseFile = path.join(path.dirname(mochaFile), path.basename(mochaFile, '.xml'));
var mochaFileExt = '.xml';
var groupCount = 4;

function gatherArgs(extraArgs, file) {
return [
file,
'--require=out/test/unittests.js',
'--exclude=out/**/*.jsx',
'--reporter=mocha-multi-reporters',
'--reporter-option=configFile=build/.mocha-multi-reporters.config',
'--ui=tdd',
'--recursive',
'--colors',
'--exit',
'--timeout=180000',
...extraArgs
];
}

async function generateGroups(files) {
// Go through each file putting it into a bucket. Each bucket will attempt to
// have equal size

// Start with largest files first (sort by size)
var stats = await Promise.all(files.map((f) => fs.stat(f)));
var filesWithSize = files.map((f, i) => {
return {
file: f,
size: stats[i].size
};
});
var sorted = filesWithSize.sort((a, b) => b.size - a.size);

// Generate buckets that try to hold the largest file first
var buckets = new Array(groupCount).fill().map((_, i) => {
return {
index: i,
totalSize: 0,
files: []
};
});
var lowestBucket = buckets[0];
sorted.forEach((fs) => {
buckets[lowestBucket.index].totalSize += fs.size;
buckets[lowestBucket.index].files.push(fs.file);
lowestBucket = buckets.find((b) => b.totalSize < lowestBucket.totalSize) || lowestBucket;
});

// Return these groups of files
return buckets.map((b) => b.files);
}

async function runIndividualTest(extraArgs, file, index) {
var subMochaFile = `${mochaBaseFile}_${index}_${path.basename(file)}${mochaFileExt}`;
var args = gatherArgs(extraArgs, file);
console.log(`Running functional test for file ${file} ...`);
var exitCode = await new Promise((resolve) => {
// Spawn the sub node process
var proc = child_process.fork('./node_modules/mocha/bin/_mocha', args, {
env: { ...process.env, MOCHA_FILE: subMochaFile }
});
proc.on('exit', resolve);
});

// If failed keep track
if (exitCode !== 0) {
console.log(`Functional tests for ${file} failed.`);
} else {
console.log(`Functional test for ${file} succeeded`);
}

return exitCode;
}

// Wrap async code in a function so can wait till done
async function main() {
console.log('Globbing files for functional tests');

// Glob all of the files that we usually send to mocha as a group (see mocha.functional.opts.xml)
var files = await new Promise((resolve, reject) => {
glob('./out/test/**/*.functional.test.js', (ex, res) => {
glob('./out/test/datascience/**/*.functional.test.js', (ex, res) => {
if (ex) {
reject(ex);
} else {
@@ -30,38 +103,42 @@ async function main() {
});
});

// Figure out what group is running (should be something like --group1, --group2 etc.)
var groupArgIndex = process.argv.findIndex((a) => a.includes('--group'));
var groupIndex = groupArgIndex >= 0 ? parseInt(process.argv[groupArgIndex].slice(7), 10) - 1 : -1;

// Generate 4 groups based on sorting by size
var groups = await generateGroups(files);
files = groupIndex >= 0 ? groups[groupIndex] : files;
console.log(`Running for group ${groupIndex}`);

// Extract any extra args for the individual mocha processes
var extraArgs =
groupIndex >= 0 && process.argv.length > 3
? process.argv.slice(3)
: process.argv.length > 2
? process.argv.slice(2)
: [];

// Iterate over them, running mocha on each
var returnCode = 0;

// Go through each one at a time
// Start timing now (don't care about glob time)
var startTime = Date.now();

// Run all of the tests (in parallel or sync based on env)
try {
for (var index = 0; index < files.length; index += 1) {
// Each run with a file will expect a $MOCHA_FILE$ variable. Generate one for each
// Note: this index is used as a pattern when setting mocha file in the test_phases.yml
var subMochaFile = `${mochaBaseFile}_${index}_${path.basename(files[index])}${mochaFileExt}`;
process.env['MOCHA_FILE'] = subMochaFile;
var exitCode = await new Promise((resolve) => {
// Spawn the sub node process
var proc = child_process.fork('./node_modules/mocha/bin/_mocha', [
files[index],
'--require=out/test/unittests.js',
'--exclude=out/**/*.jsx',
'--reporter=mocha-multi-reporters',
'--reporter-option=configFile=build/.mocha-multi-reporters.config',
'--ui=tdd',
'--recursive',
'--colors',
'--exit',
'--timeout=180000'
]);
proc.on('exit', resolve);
});

// If failed keep track
if (exitCode !== 0) {
console.log(`Functional tests for ${files[index]} failed.`);
returnCode = exitCode;
if (process.env.VSCODE_PYTHON_FORCE_TEST_SYNC) {
for (var i = 0; i < files.length; i += 1) {
// Synchronous, one at a time
returnCode = returnCode | (await runIndividualTest(extraArgs, files[i], i));
}
} else {
// Parallel, all at once
const returnCodes = await Promise.all(files.map(runIndividualTest.bind(undefined, extraArgs)));

// Or all of the codes together
returnCode = returnCodes.reduce((p, c) => p | c);
}
} catch (ex) {
console.log(`Functional tests run failure: ${ex}.`);
@@ -73,8 +150,10 @@ async function main() {
process.env['MOCHA_FILE'] = originalMochaFile;
}

// Indicate error code
console.log(`Functional test run result: ${returnCode}`);
var endTime = Date.now();

// Indicate error code and total time of the run
console.log(`Functional test run result: ${returnCode} after ${(endTime - startTime) / 1_000} seconds`);
process.exit(returnCode);
}

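The `generateGroups` function added above is a greedy size-balancing pass: sort files largest-first, then repeatedly drop the next file into the currently lightest bucket, so the four CI groups finish in roughly equal time. A self-contained sketch of the same strategy (simplified: sizes are passed in rather than read via `fs.stat`, and the lightest bucket is recomputed each step instead of tracked incrementally as the committed code does):

```javascript
// Greedy size-balanced grouping: largest files first, each into the
// bucket with the smallest running total.
function generateGroups(filesWithSize, groupCount) {
    const sorted = [...filesWithSize].sort((a, b) => b.size - a.size);
    const buckets = Array.from({ length: groupCount }, () => ({ totalSize: 0, files: [] }));
    for (const { file, size } of sorted) {
        // Find the bucket with the smallest total so far.
        const lightest = buckets.reduce((min, b) => (b.totalSize < min.totalSize ? b : min));
        lightest.totalSize += size;
        lightest.files.push(file);
    }
    return buckets.map((b) => b.files);
}
```

This largest-first greedy heuristic doesn't guarantee optimal balance, but it is a standard cheap approximation for partitioning work of known cost across a fixed number of workers.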
