# Layout Tests

Layout tests are used by Blink to test many components, including but not
limited to layout and rendering. In general, layout tests involve loading pages
in a test renderer (`content_shell`) and comparing the rendered output or
JavaScript output against an expected output file.

This document covers running and debugging existing layout tests. See the
[Writing Layout Tests documentation](./writing_layout_tests.md) if you find
yourself writing layout tests.

Note that we're in the process of renaming "layout tests" to "web tests".
Treat the two terms as synonymous; the same tests are also sometimes referred
to as "WebKit tests" or "WebKit layout tests".

[TOC]

## Running Layout Tests

### Initial Setup

Before you can run the layout tests, you need to build the `blink_tests` target
to get `content_shell` and all of the other needed binaries.

```bash
autoninja -C out/Release blink_tests
```

On **Android** (layout test support
[currently limited to KitKat and earlier](https://crbug.com/567947)) you need to
build and install `content_shell_apk` instead. See also:
[Android Build Instructions](../android_build_instructions.md).

```bash
autoninja -C out/Default content_shell_apk
adb install -r out/Default/apks/ContentShell.apk
```

On **Mac**, you probably want to strip the `content_shell` binary before starting
the tests. If you don't, you'll have 5-10 `content_shell` instances running
concurrently, all stuck being examined by the OS crash reporter. This may cause
failures, such as timeouts, that don't normally occur.

```bash
strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell
```

### Running the Tests

TODO: mention `testing/xvfb.py`
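
On Linux, the tests need an X server; in a headless environment you can wrap
the run in `testing/xvfb.py`, which starts a virtual X server for the wrapped
command. A minimal sketch — the exact invocation is an assumption, so check the
script if it doesn't work as written:

```bash
# xvfb.py runs the command given to it under a virtual X server (Xvfb).
python testing/xvfb.py python third_party/blink/tools/run_web_tests.py -t Default
```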

The test runner script is in
`third_party/blink/tools/run_web_tests.py`.

To specify which build directory to use (e.g. out/Default, out/Release,
out/Debug) you should pass the `-t` or `--target` parameter. For example, to
use the build in `out/Default`, use:

```bash
python third_party/blink/tools/run_web_tests.py -t Default
```

For Android (if your build directory is `out/android`):

```bash
python third_party/blink/tools/run_web_tests.py -t android --android
```

Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations)
won't be run at all, generally because they cause some intractable tool error.
To force one of them to be run, either rename that file or specify the skipped
test as the only one on the command line (see below). Read the
[Layout Test Expectations documentation](./layout_test_expectations.md) to learn
more about TestExpectations and related files.
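
For illustration, a `[ Skip ]` entry in TestExpectations is a single line; a
hypothetical example (made-up bug number and test path) would look like
`crbug.com/12345 fast/forms/example.html [ Skip ]`.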

*** promo
Currently only the tests listed in
[SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests)
are run on the Android bots, since running all layout tests takes too long on
Android (and may still have some infrastructure issues). Most developers focus
their Blink testing on Linux. We rely on the fact that the Linux and Android
behavior is nearly identical for scenarios outside those covered by the smoke
tests.
***

To run only some of the tests, specify their directories or filenames as
arguments to `run_web_tests.py` relative to the layout test directory
(`src/third_party/WebKit/LayoutTests`). For example, to run the fast form tests,
use:

```bash
python third_party/blink/tools/run_web_tests.py fast/forms
```

Or you could use the following shorthand:

```bash
python third_party/blink/tools/run_web_tests.py fast/fo\*
```

*** promo
Example: To run the layout tests with the build in `out/Default`, but only run
the SVG tests (including their pixel tests), you would run:

```bash
python third_party/blink/tools/run_web_tests.py -t Default svg
```
***

As a final quick-but-less-robust alternative, you can also just use the
content_shell executable to run specific tests by using (for Windows):

```bash
out/Default/content_shell.exe --run-web-tests --no-sandbox full_test_source_path
```

as in:

```bash
out/Default/content_shell.exe --run-web-tests --no-sandbox \
    c:/chrome/src/third_party/WebKit/LayoutTests/fast/forms/001.html
```

but this requires a manual diff against expected results, because the shell
doesn't do it for you.
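
If you do want a quick manual comparison, you can capture the shell's text dump
and diff it against the checked-in baseline yourself. This is a sketch, not an
exact recipe: the dump includes harness framing (such as a trailing `#EOF`), so
expect some noise in the diff:

```bash
# Capture the actual text output, then compare against the -expected.txt baseline.
out/Default/content_shell --run-web-tests --no-sandbox \
    third_party/WebKit/LayoutTests/fast/forms/001.html > /tmp/actual.txt
diff third_party/WebKit/LayoutTests/fast/forms/001-expected.txt /tmp/actual.txt
```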

To see a complete list of arguments supported, run:

```bash
python third_party/blink/tools/run_web_tests.py --help
```

*** note
**Linux Note:** We try to match the Windows render tree output exactly by
matching font metrics and widget metrics. If there's a difference in the render
tree output, we should see if we can avoid rebaselining by improving our font
metrics. For additional information on Linux Layout Tests, please see
[docs/layout_tests_linux.md](../layout_tests_linux.md).
***

*** note
**Mac Note:** While the tests are running, a bunch of Appearance settings are
overridden for you so the right type of scroll bars, colors, etc. are used.
Your main display's "Color Profile" is also changed to make sure color
correction by ColorSync matches what is expected in the pixel tests. The change
is noticeable; how much depends on the normal level of correction for your
display. The tests do their best to restore your settings when done, but if
you're left in the wrong state, you can manually reset it by going to
System Preferences → Displays → Color and selecting the "right" value.
***

### Test Harness Options

This script has a lot of command line flags. You can pass `--help` to the script
to see a full list of options. A few of the most useful options are below:

| Option | Meaning |
|:----------------------------|:--------------------------------------------------|
| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug` |
| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
| `--verbose` | Produce more verbose output, including a list of tests that pass. |
| `--no-pixel-tests` | Disable the pixel-to-pixel PNG comparisons and image checksums for tests that don't call `testRunner.dumpAsText()` |
| `--reset-results` | Overwrite the current baselines (`-expected.{png|txt|wav}` files) with actual results, or create new baselines if there are no existing baselines. |
| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
| `--driver-logging` | Print C++ logs (LOG(WARNING), etc). |
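
These options can be combined. For example, to run the fast/forms tests against
a debug build, fully in parallel, with C++ logging enabled:

```bash
python third_party/blink/tools/run_web_tests.py -t Debug \
    --fully-parallel --driver-logging fast/forms
```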

## Success and Failure

A test succeeds when its output matches the pre-defined expected results. If any
tests fail, the test script will place the actual generated results, along with
a diff of the actual and expected results, into
`src/out/Default/layout_test_results/`, and by default launch a browser with a
summary and link to the results/diffs.

The expected results for tests are in
`src/third_party/WebKit/LayoutTests/platform` or alongside their respective
tests.

*** note
Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
do not have expected result files if all test cases pass.
***

A test that runs but produces the wrong output is marked as "failed", one that
causes the test shell to crash is marked as "crashed", and one that takes longer
than a certain amount of time to complete is aborted and marked as "timed out".
A row of dots in the script's output indicates one or more tests that passed.

## Test expectations

The
[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) file (and related
files) contains the list of all known layout test failures. See the
[Layout Test Expectations documentation](./layout_test_expectations.md) for more
on this.

## Testing Runtime Flags

There are two ways to run layout tests with additional command-line arguments:

* Using `--additional-driver-flag`:

  ```bash
  python run_web_tests.py --additional-driver-flag=--blocking-repaint
  ```

  This tells the test harness to pass `--blocking-repaint` to the
  content_shell binary.

  It will also look for flag-specific expectations in
  `LayoutTests/FlagExpectations/blocking-repaint`, if this file exists. The
  suppressions in this file override the main TestExpectations file.

* Using a *virtual test suite* defined in
  [LayoutTests/VirtualTestSuites](../../third_party/WebKit/LayoutTests/VirtualTestSuites).
  A virtual test suite runs a subset of layout tests under a specific path with
  additional flags. For example, you could test a (hypothetical) new mode for
  repainting using the following virtual test suite:

  ```json
  {
    "prefix": "blocking_repaint",
    "base": "fast/repaint",
    "args": ["--blocking-repaint"]
  }
  ```

  This will create new "virtual" tests of the form
  `virtual/blocking_repaint/fast/repaint/...` which correspond to the files
  under `LayoutTests/fast/repaint` and pass `--blocking-repaint` to
  content_shell when they are run (see the example run below).

  These virtual tests exist in addition to the original `fast/repaint/...`
  tests. They can have their own expectations in TestExpectations, and their own
  baselines. The test harness will use the non-virtual baselines as a fallback.
  However, the non-virtual expectations are not inherited: if
  `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects
  `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the
  virtual test to also fail, it needs its own suppression.

  The "prefix" value does not have to be unique. This is useful if you want to
  run multiple directories with the same flags (but see the notes below about
  performance). Using the same prefix for different sets of flags is not
  recommended.
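
Virtual tests are run like any other tests, by passing their virtual path. For
the hypothetical suite above, that would be:

```bash
python third_party/blink/tools/run_web_tests.py virtual/blocking_repaint/fast/repaint
```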

For flags whose implementation is still in progress, virtual test suites and
flag-specific expectations represent two alternative strategies for testing.
Consider the following when choosing between them:

* The
  [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
  and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
  will run all virtual test suites in addition to the non-virtual tests.
  Conversely, a flag-specific expectations file won't automatically cause the
  bots to test your flag - if you want bot coverage without virtual test suites,
  you will need to set up a dedicated bot for your flag.

* Due to the above, virtual test suites incur a performance penalty for the
  commit queue and the continuous build infrastructure. This is exacerbated by
  the need to restart `content_shell` whenever flags change, which limits
  parallelism. Therefore, you should avoid adding large numbers of virtual test
  suites. They are well suited to running a subset of tests that are directly
  related to the feature, but they don't scale to flags that make deep
  architectural changes that potentially impact all of the tests.

* Note that using wildcards in virtual test path names (e.g.
  `virtual/blocking_repaint/fast/repaint/*`) is not supported.

## Tracking Test Failures

All bugs associated with layout test failures must have the
[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how
much you know about the bug, assign the status accordingly:

* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible
  duplicate of an existing bug, or a real failure.
* **Untriaged** -- Confirmed, but unsure of priority or root cause.
* **Available** -- You know the root cause of the issue.
* **Assigned** or **Started** -- You will fix this issue.

When creating a new layout test bug, please set the following properties:

* Components: a sub-component of Blink
* OS: **All** (or whichever OS the failure is on)
* Priority: 2 (1 if it's a crash)
* Type: **Bug**
* Labels: **Test-Layout**

You can also use the _Layout Test Failure_ template, which pre-sets these
labels for you.

## Debugging Layout Tests

After the layout tests run, you should get a summary of tests that pass or
fail. If something fails unexpectedly (a new regression), you will get a
`content_shell` window with a summary of the unexpected failures. Or you might
have a failing test in mind to investigate. In any case, here are some steps and
tips for finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
* Load the test into a trunk Chrome or content_shell build and look at its
  result. (For tests in the http/ directory, start the http server first; see
  "Debugging HTTP Tests" below. Navigate to `http://localhost:8000/` and
  proceed from there.)
  The best tests describe what they're looking for, but not all do, and
  sometimes things they're not explicitly testing are still broken. Compare
  it to Safari, Firefox, and IE if necessary to see if it's correct. If
  you're still not sure, find the person who knows the most about it and
  ask.
* Some tests only work properly in content_shell, not Chrome, because they
  rely on extra APIs exposed there.
* Some tests only work properly when they're run in the layout-test
  framework, not when they're loaded into content_shell directly. The test
  should mention that in its visible text, but not all do. So try that too.
  See "Running the Tests", above.
* If you think the test is correct, confirm your suspicion by looking at the
  diffs between the expected result and the actual one.
* Make sure that the diffs reported aren't important. Small differences in
  spacing or box sizes are often unimportant, especially around fonts and
  form controls. Differences in wording of JS error messages are also
  usually acceptable.
* `python run_web_tests.py path/to/your/test.html --full-results-html`
  produces a page including links to the expected result, actual result,
  and diff.
* Add the `--sources` option to `run_web_tests.py` to see exactly which
  expected result it's comparing to (a file next to the test, something in
  platform/mac/, something in platform/chromium-win/, etc.)
* If you're lucky, your test is one that runs properly when you navigate to it
  in content_shell normally. In that case, build the Debug content_shell
  project, fire it up in your favorite debugger, and load the test file from a
  `file:` URL.
  * You'll probably be starting and stopping the content_shell a lot. In VS,
    to save navigating to the test every time, you can set the URL to your
    test (`file:` or `http:`) as the command argument in the Debugging section
    of the content_shell project Properties.
  * If your test contains a JS call, DOM manipulation, or other distinctive
    piece of code that you think is failing, search for that in the Chrome
    solution. That's a good place to put a starting breakpoint to start
    tracking down the issue.
  * Otherwise, you're running in a standard message loop just like in Chrome.
    If you have no other information, set a breakpoint on page load.
* If your test only works in full layout-test mode, or if you find it simpler to
  debug without all the overhead of an interactive session, start the
  content_shell with the command-line flag `--run-web-tests`, followed by the
  URL (`file:` or `http:`) to your test. More information about running layout
  tests in content_shell can be found [here](./layout_tests_in_content_shell.md).
  * In VS, you can do this in the Debugging section of the content_shell
    project Properties.
  * Now you're running with exactly the same API, theme, and other setup that
    the layout tests use.
  * Again, if your test contains a JS call, DOM manipulation, or other
    distinctive piece of code that you think is failing, search for that in
    the Chrome solution. That's a good place to put a starting breakpoint to
    start tracking down the issue.
  * If you can't find any better place to set a breakpoint, start at the
    `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
    `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
* Debug as usual. Once you've gotten this far, the failing layout test is just a
  (hopefully) reduced test case that exposes a problem.
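
For example, on Linux you might run a single test in the harness under gdb. A
sketch, assuming a Debug build; note that Blink code runs in the renderer
process, so you may need `--single-process` or to attach to the renderer
separately:

```bash
# Start content_shell in layout-test mode under the debugger.
gdb --args out/Debug/content_shell --run-web-tests --no-sandbox \
    third_party/WebKit/LayoutTests/fast/forms/001.html
```

Alternatively, pass `--renderer-startup-dialog` through the test harness and
attach a debugger to the renderer process when the dialog appears.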

### Debugging HTTP Tests

To run the server manually to reproduce/debug a failure:

```bash
cd src/third_party/blink/tools
python run_blink_httpd.py
```

The layout tests are served from `http://127.0.0.1:8000/`. For example, to
run the test
`LayoutTests/http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests behave differently if you go to `127.0.0.1` vs. `localhost`, so use
`127.0.0.1`.

To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
running, use `taskkill` or the Task Manager on Windows, or `killall` or
Activity Monitor on macOS.

The test server sets up an alias to the `LayoutTests/resources` directory. For
example, in HTTP tests, you can access the testing framework using
`src="/js-test-resources/js-test.js"`.

### Tips

Check https://test-results.appspot.com/ to see how a test did in the most recent
~100 builds on each builder (as long as the page is being updated regularly).

A timeout will often also be a text mismatch, since the wrapper script kills the
content_shell before it has a chance to finish. The exception is if the test
finishes loading properly, but somehow hangs before it outputs the bit of text
that tells the wrapper it's done.

Why might a test fail (or crash, or time out) on buildbot, but pass on your local
machine?

* If the test finishes locally but is slow, more than 10 seconds or so, that
  would be why it's called a timeout on the bot.
* Otherwise, try running it as part of a set of tests; it's possible that a test
  one or two (or ten) before this one is corrupting something that makes this
  one fail.
* If it consistently works locally, make sure your environment looks like the
  one on the bot (look at the top of the stdio for the webkit_tests step to see
  all the environment variables and so on).
* If none of that helps, and you have access to the bot itself, you may have to
  log in there and see if you can reproduce the problem manually.

### Debugging DevTools Tests

* Add `debug_devtools=true` to `args.gn` and compile:
  `autoninja -C out/Default devtools_frontend_resources`
  > Debug DevTools lets you avoid having to recompile after every change to the
  > DevTools front-end.
* Do one of the following:
  * Option A) Run from the `chromium/src` folder:
    `third_party/blink/tools/run_web_tests.sh
    --additional-driver-flag='--debug-devtools'
    --additional-driver-flag='--remote-debugging-port=9222'
    --time-out-ms=6000000`
  * Option B) If you need to debug an http/tests/inspector test, start httpd
    as described above. Then, run content_shell:
    `out/Default/content_shell --debug-devtools --remote-debugging-port=9222 --run-web-tests
    http://127.0.0.1:8000/path/to/test.html`
* Open `http://localhost:9222` in a stable/beta/canary Chrome, and click the
  single link to open the devtools with the test loaded.
* In the loaded devtools, set any required breakpoints and execute `test()` in
  the console to actually start the test.

NOTE: If the test is an html file, it is a legacy test, so you need to add
`window.debugTest = true;` to your test code as follows:

```javascript
window.debugTest = true;
function test() {
  /* TEST CODE */
}
```

## Bisecting Regressions

You can use [`git bisect`](https://git-scm.com/docs/git-bisect) to find which
commit broke (or fixed!) a layout test in a fully automated way. Unlike
[bisect-builds.py](http://dev.chromium.org/developers/bisect-builds-py), which
downloads pre-built Chromium binaries, `git bisect` operates on your local
checkout, so it can run tests with `content_shell`.

Bisecting can take several hours, but since it is fully automated you can leave
it running overnight and view the results the next day.

To set up an automated bisect of a layout test regression, create a script like
this:

```bash
#!/bin/bash

# Exit code 125 tells git bisect to skip the revision.
gclient sync || exit 125
autoninja -C out/Debug -j100 blink_tests || exit 125

third_party/blink/tools/run_web_tests.py -t Debug \
  --no-show-results --no-retry-failures \
  path/to/layout/test.html
```

Modify the `out` directory, ninja args, and test name as appropriate, and save
the script in `~/checkrev.sh`. Then run:

```bash
chmod u+x ~/checkrev.sh  # mark script as executable
git bisect start <badrev> <goodrev>
git bisect run ~/checkrev.sh
git bisect reset  # quit the bisect session
```

## Rebaselining Layout Tests

*** promo
To automatically re-baseline tests across all Chromium platforms, using the
buildbot results, see [How to rebaseline](./layout_test_expectations.md#How-to-rebaseline).
Alternatively, to manually run a test and rebaseline it on your workstation,
read on.
***

```bash
cd src/third_party/blink
python tools/run_web_tests.py --reset-results foo/bar/test.html
```

If there are current expectation files for `LayoutTests/foo/bar/test.html`,
the above command will overwrite the current baselines at their original
locations with the actual results. The current baseline means the `-expected.*`
file used to compare the actual result when the test is run locally, i.e. the
first file found in the
[baseline search path](https://cs.chromium.org/search/?q=port/base.py+baseline_search_path).

If there are no current baselines, the above command will create new baselines
in the platform-independent directory, e.g.
`LayoutTests/foo/bar/test-expected.{txt,png}`.

When you rebaseline a test, make sure your commit description explains why the
test is being re-baselined.

### Rebaselining flag-specific expectations

Though we prefer the Rebaseline Tool to local rebaselining, the Rebaseline Tool
doesn't support rebaselining flag-specific expectations.

```bash
cd src/third_party/blink
python tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
```

New baselines will be created in the flag-specific baselines directory, e.g.
`LayoutTests/flag-specific/enable-flag/foo/bar/test-expected.{txt,png}`.

Then you can commit the new baselines and upload the patch for review.

However, it's difficult for reviewers to review a patch containing only new
files. You can follow the steps below for easier review.

1. Copy existing baselines to the flag-specific baselines directory for the
   tests to be rebaselined:
   ```bash
   third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --copy-baselines foo/bar/test.html
   ```
   Then add the newly created baseline files, commit, and upload the patch.
   Note that the above command won't copy baselines for passing tests.

2. Rebaseline the test locally:
   ```bash
   third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
   ```
   Commit the changes and upload the patch.

3. Request review of the CL and tell the reviewer to compare the patch sets that
   were uploaded in step 1 and step 2 to see the differences of the rebaselines.

## web-platform-tests

In addition to layout tests developed and run just by the Blink team, there is
also a shared test suite; see [web-platform-tests](./web_platform_tests.md).

## Known Issues

See
[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra)
for issues related to Blink tools, including the layout test runner.

* If QuickTime is not installed, the plugin tests
  `fast/dom/object-embed-plugin-scripting.html` and
  `plugins/embed-attributes-setting.html` are expected to fail.