Please see the Integration Testing Framework document for more specifics.
Run

```
chrome/test/webapps/generate_framework_tests_and_coverage.py
```

and verify nothing is outputted to the console. Build the `browser_tests` and `sync_integration_tests` targets:

```
autoninja -C out/Release browser_tests sync_integration_tests
```

Then run both test suites (with `--gtest_filter=WebAppIntegration*`):

```
testing/run_with_dummy_home.py testing/xvfb.py out/Release/browser_tests --gtest_filter=WebAppIntegration*
testing/run_with_dummy_home.py testing/xvfb.py out/Release/sync_integration_tests --gtest_filter=WebAppIntegration*
```
The goal of this step is to add all critical user journeys for the feature to the critical user journeys file, and any new actions or enums to their respective files (actions, enums).
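The relationship between journeys and actions can be sketched loosely in Python. The action names, data structure, and validation helper below are invented for illustration; the real framework stores journeys and actions in its own data files under `chrome/test/webapps/`.

```python
# Hypothetical model: a critical user journey is an ordered list of action
# names, and every action referenced by a journey must exist in the
# registered action set. These names are placeholders, not real actions.
KNOWN_ACTIONS = {"install_windowed", "launch", "check_window_created",
                 "uninstall_from_list"}

def validate_journey(journey):
    """Return the actions in `journey` that are not registered."""
    return [action for action in journey if action not in KNOWN_ACTIONS]

# A journey composed only of known actions validates cleanly.
assert validate_journey(
    ["install_windowed", "launch", "check_window_created"]) == []

# An unregistered action is flagged, mirroring how the generator script
# complains when a journey references an action missing from the actions file.
assert validate_journey(["install_windowed", "new_action"]) == ["new_action"]
```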
Steps:
See the example below.
The browsertest files are split into two sections: manual tests and script-generated tests. Before generating all of the new tests, the next step is to write a few manual tests that exercise the new actions. See the example browsertest file, which has the manual tests at the top, written by the action authors.
For details about how to implement actions, see Creating Actions in the `WebAppIntegrationTestDriver`. Implementing or changing actions is usually done in `WebAppIntegrationTestDriver`. If the action only works with the sync system, then it may have to be implemented in the `TestDelegate` interface and then in the `WebAppIntegrationTestBase`. The dPWA team should have informed you if there was anything specific you need to do here.
If, in Step 2 above, the team concluded that a “Site” must be modified or created, these are located in the test data directory.
See the example below.
Implementing an action often surfaces flakiness and uncovers bugs. If all of the tests are in the first CL and it has problems, this causes large reverts or (even worse) sheriffs manually disabling many tests over the next few days. To avoid this, the first CL should include only a few manual tests to make sure everything is working correctly before more tests are generated.
Finally, now that the changes are implemented and tested, they can be used in generated critical user journey tests.
To have the script actually generate tests using the new actions, they must be marked as supported in the supported actions file. The support is specified by a symbol per platform. Add to (or modify) this file, marking the new actions as supported; if an action you have implemented is not present in the file, please add it.
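As a sketch of the idea, assuming a simple tabular layout with one column per platform (the column names, platform list, and Y/N symbols below are invented; consult the actual supported actions file for the real format and symbols):

```python
import csv
import io

# Hypothetical supported-actions table: one row per action, one support
# symbol per platform. The action names, platforms, and "Y"/"N" markers are
# placeholders for illustration only.
DATA = """action,mac,win,linux,chromeos
install_windowed,Y,Y,Y,Y
uninstall_from_list,Y,Y,Y,N
"""

def supported_on(action, platform):
    """Return True if `action` is marked supported on `platform`."""
    for row in csv.DictReader(io.StringIO(DATA)):
        if row["action"] == action:
            return row[platform] == "Y"
    raise KeyError(f"unknown action: {action}")

# The generator would only emit tests using an action on platforms where
# the action is marked supported.
assert supported_on("uninstall_from_list", "mac")
assert not supported_on("uninstall_from_list", "chromeos")
```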
This command will output all changes that need to happen to the critical user journeys:

```
chrome/test/webapps/generate_framework_tests_and_coverage.py
```
The output should:
Note: `--delete-in-place` can be used to remove all tests that aren't disabled by sheriffs. `--add-to-file` can be used to add new tests to existing test files. If a test file does not exist, the expected file names and tests will be printed to the console; you will have to manually create the file, copy the tests into it, and add the file to the BUILD file.

After you make changes to the integration browsertests, re-run the above command to verify that all of the changes were performed and no mistakes were made. If all looks right, the script will output nothing to the console when run a second time.
Possible issues / Things to know:
After all tests are added, `git cl format` is often required. It's a good idea to run all of the new tests locally if you can; after local verification, a patch can be uploaded, the trybots can be run, and a review can be requested from the team.
Before submitting, make sure to also run the trybots on mac, as these are sometimes disabled on the CQ.
It is recommended to run the new tests locally before testing them on trybots.
This command will generate the gtest_filter for all the new and modified tests:
chrome/test/webapps/generate_gtest_filter_for_added_tests.py --diff-strategy <upstream|committed|staged|unstaged>
By default, this script uses a diff strategy that includes uncommitted, staged, and committed changes relative to the upstream branch. See the `--diff-strategy` option for alternatives.
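As a rough guide, the four strategy names can be read as shorthand for familiar git diffs. This mapping is an assumption for orientation only, not a reading of the script's source:

```python
# Hypothetical correspondence between --diff-strategy values and the git
# diffs they roughly describe. Treat this as a mental model, not as the
# script's actual implementation.
DIFF_STRATEGY = {
    "unstaged": "git diff",                     # working tree vs. index
    "staged": "git diff --cached",              # index vs. HEAD
    "committed": "git diff @{upstream}..HEAD",  # local commits vs. upstream
    # Working tree vs. upstream: covers committed + staged + unstaged,
    # matching the default behavior described above.
    "upstream": "git diff @{upstream}",
}

assert DIFF_STRATEGY["staged"] == "git diff --cached"
```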
The output should print the gtest_filter for any new (or modified) tests in `browser_tests` and `sync_integration_tests`. The output format will be:

```
browser_tests --gtest_filter=<test_name>
sync_integration_tests --gtest_filter=<test_name>
```

You can run the tests by prepending the path to the `browser_tests` or `sync_integration_tests` binaries.
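For orientation, assembling such invocations from a set of test names can be sketched as follows (the test names are invented; multiple tests in one filter are joined with `:`, per the standard gtest filter syntax):

```python
# Build per-suite command lines from new or modified test names.
# Suite names are real; the test names below are invented examples.
def build_invocations(tests_by_suite):
    """Return one '<suite> --gtest_filter=...' line per non-empty suite."""
    lines = []
    for suite, tests in tests_by_suite.items():
        if tests:
            # gtest joins multiple test patterns with ':'.
            lines.append(f"{suite} --gtest_filter={':'.join(tests)}")
    return lines

cmds = build_invocations({
    "browser_tests": ["WebAppIntegration.WAI_ExampleInstall"],
    "sync_integration_tests": ["WebAppIntegration.WAI_ExampleSync"],
})
assert cmds == [
    "browser_tests --gtest_filter=WebAppIntegration.WAI_ExampleInstall",
    "sync_integration_tests --gtest_filter=WebAppIntegration.WAI_ExampleSync",
]
```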
If the “manual” browsertest didn't catch a bug that is now causing the generated tests to fail, and there is no obvious fix, it is OK to submit the new tests as disabled. To do this:
Run

```
chrome/test/webapps/generate_framework_tests_and_coverage.py
```

again to update the coverage percentage.

Why is this OK? Adding the generated tests can be a big pain, especially if others are modifying the tests as well. It is often better to get them compiling and submitted quickly with a few tests disabled than to wait until everything works.
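Disabling follows the standard gtest convention: a test whose name starts with `DISABLED_` is still compiled but skipped by default (it can be run explicitly with `--gtest_also_run_disabled_tests`). A small sketch of how a test list splits under that convention (test names invented):

```python
# gtest convention: a test is disabled when the test-name part (after the
# suite name and '.') starts with DISABLED_. Test names are invented.
def partition_tests(test_names):
    """Split full test names into (enabled, disabled) lists."""
    def is_disabled(name):
        return name.split(".")[-1].startswith("DISABLED_")
    enabled = [t for t in test_names if not is_disabled(t)]
    disabled = [t for t in test_names if is_disabled(t)]
    return enabled, disabled

enabled, disabled = partition_tests([
    "WebAppIntegration.WAI_InstallAndLaunch",
    "WebAppIntegration.DISABLED_WAI_UninstallFromList",
])
assert enabled == ["WebAppIntegration.WAI_InstallAndLaunch"]
assert disabled == ["WebAppIntegration.DISABLED_WAI_UninstallFromList"]
```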
`UninstallFromList`

This is an example CL of implementing an action in the `WebAppIntegrationTestDriver`. Here is an example CL of adding generated tests for the `UninstallFromList` action addition. During the development of this action, it was discovered that some of the critical user journeys were incorrect and needed updating; you can see this in the downloaded file changes.
The file handlers feature:
What critical user journeys will be needed? Generally:
The existing actions already have a lot of support for installing, launching, checking if a window was created, etc. The following changes will have to happen:
To contact the team for help, send an email to [email protected] and/or post on #pwas on Chromium Slack.