# Web Tests (formerly known as "Layout Tests" or "LayoutTests")

Web tests are used by Blink to test many components, including but not
limited to layout and rendering. In general, web tests involve loading pages
in a test renderer (`content_shell`) and comparing the rendered output or
JavaScript output against an expected output file.

This document covers running and debugging existing web tests. See the
[Writing Web Tests documentation](./writing_web_tests.md) if you find
yourself writing web tests.

Note that the term "layout tests" was renamed to "web tests"; the two terms
refer to the same thing. These tests are also sometimes called "WebKit tests"
or "WebKit layout tests".

[TOC]

## Running Web Tests

### Initial Setup

Before you can run the web tests, you need to build the `blink_tests` target
to get `content_shell` and all of the other needed binaries.

```bash
autoninja -C out/Default blink_tests
```

On **Android** (web test support
[currently limited to KitKat and earlier](https://crbug.com/567947)) you need to
build and install `content_shell_apk` instead. See also:
[Android Build Instructions](../android_build_instructions.md).

```bash
autoninja -C out/Default content_shell_apk
adb install -r out/Default/apks/ContentShell.apk
```

On **Mac**, you probably want to strip the `content_shell` binary before starting
the tests. If you don't, you'll have 5-10 copies running concurrently, all stuck
being examined by the OS crash reporter. This may cause other failures like
timeouts where they normally don't occur.

```bash
strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell
```

### Running the Tests

TODO: mention `testing/xvfb.py`

The test runner script is in
`third_party/blink/tools/run_web_tests.py`.

To specify which build directory to use (e.g. out/Default, out/Release,
out/Debug) you should pass the `-t` or `--target` parameter. For example, to
use the build in `out/Default`, use:

```bash
python third_party/blink/tools/run_web_tests.py -t Default
```

For Android (if your build directory is `out/android`):

```bash
python third_party/blink/tools/run_web_tests.py -t android --android
```

Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/blink/web_tests/TestExpectations)
won't be run at all, generally because they cause some intractable tool error.
To force one of them to be run, either rename that file or specify the skipped
test as the only one on the command line (see below). Read the
[Web Test Expectations documentation](./web_test_expectations.md) to learn
more about TestExpectations and related files.

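
For reference, an entry in TestExpectations has the following shape (the bug
number and test path here are made up for illustration): an optional bug link,
optional platform tags in brackets, the test path, and the expected result(s)
in brackets.

```
crbug.com/123456 [ Mac ] fast/forms/example.html [ Skip ]
```
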
*** promo
Currently only the tests listed in
[SmokeTests](../../third_party/blink/web_tests/SmokeTests)
are run on the Android bots, since running all web tests takes too long on
Android (and may still have some infrastructure issues). Most developers focus
their Blink testing on Linux. We rely on the fact that the Linux and Android
behavior is nearly identical for scenarios outside those covered by the smoke
tests.
***

To run only some of the tests, specify their directories or filenames as
arguments to `run_web_tests.py` relative to the web test directory
(`src/third_party/blink/web_tests`). For example, to run the fast form tests,
use:

```bash
python third_party/blink/tools/run_web_tests.py fast/forms
```

Or you could use the following shorthand:

```bash
python third_party/blink/tools/run_web_tests.py fast/fo\*
```

*** promo
Example: To run the web tests with a debug build of `content_shell`, but only
test the SVG tests and run pixel tests, you would run:

```bash
python third_party/blink/tools/run_web_tests.py -t Debug svg
```
***

As a final quick-but-less-robust alternative, you can also just use the
`content_shell` executable to run specific tests directly (example on Windows):

```bash
out/Default/content_shell.exe --run-web-tests <url>|<full_test_source_path>|<relative_test_path>
```

as in:

```bash
out/Default/content_shell.exe --run-web-tests \
    c:/chrome/src/third_party/blink/web_tests/fast/forms/001.html
```

or

```bash
out/Default/content_shell.exe --run-web-tests fast/forms/001.html
```

but this requires a manual diff against expected results, because the shell
doesn't do it for you. It also dumps only the text result (the dumps of pixel
and audio binary data are not human-readable). See
[Running Web Tests Using the Content Shell](./web_tests_in_content_shell.md)
for more details of running `content_shell`.

To see a complete list of arguments supported, run:

```bash
python third_party/blink/tools/run_web_tests.py --help
```

*** note
**Linux Note:** We try to match the Windows render tree output exactly by
matching font metrics and widget metrics. If there's a difference in the render
tree output, we should see if we can avoid rebaselining by improving our font
metrics. For additional information on Linux web tests, please see
[docs/web_tests_linux.md](../web_tests_linux.md).
***

*** note
**Mac Note:** While the tests are running, a number of Appearance settings are
overridden for you so the right type of scroll bars, colors, etc. are used.
Your main display's "Color Profile" is also changed to make sure color
correction by ColorSync matches what is expected in the pixel tests. The change
is noticeable; how much depends on the normal level of correction for your
display. The tests do their best to restore your settings when done, but if
you're left in the wrong state, you can manually reset it by going to
System Preferences → Displays → Color and selecting the "right" value.
***

### Test Harness Options

This script has a lot of command line flags. You can pass `--help` to the script
to see a full list of options. A few of the most useful options are below:

| Option | Meaning |
|:----------------------------|:--------------------------------------------------|
| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug`. |
| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
| `--verbose` | Produce more verbose output, including a list of tests that pass. |
| `--reset-results` | Overwrite the current baselines (`-expected.{png|txt|wav}` files) with actual results, or create new baselines if there are no existing baselines. |
| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
| `--driver-logging` | Print C++ logs (`LOG(WARNING)`, etc.). |

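
Several of these options can be combined in a single invocation; a sketch (the
test directory here is illustrative):

```bash
python third_party/blink/tools/run_web_tests.py -t Default \
    --fully-parallel --verbose fast/forms
```
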
## Success and Failure

A test succeeds when its output matches the pre-defined expected results. If any
tests fail, the test script will place the actual generated results, along with
a diff of the actual and expected results, into
`src/out/Default/layout_test_results/`, and by default launch a browser with a
summary and link to the results/diffs.

The expected results for tests are in the
`src/third_party/blink/web_tests/platform` directory or alongside their
respective tests.

*** note
Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
do not have expected result files if all test cases pass.
***

A test that runs but produces the wrong output is marked as "failed", one that
causes the test shell to crash is marked as "crashed", and one that takes longer
than a certain amount of time to complete is aborted and marked as "timed out".
A row of dots in the script's output indicates one or more tests that passed.

## Test expectations

The
[TestExpectations](../../third_party/blink/web_tests/TestExpectations) file (and related
files) contains the list of all known web test failures. See the
[Web Test Expectations documentation](./web_test_expectations.md) for more
on this.

## Testing Runtime Flags

There are two ways to run web tests with additional command-line arguments:

* Using `--additional-driver-flag`:

  ```bash
  python run_web_tests.py --additional-driver-flag=--blocking-repaint
  ```

  This tells the test harness to pass `--blocking-repaint` to the
  `content_shell` binary.

  It will also look for flag-specific expectations in
  `web_tests/FlagExpectations/blocking-repaint`, if this file exists. The
  suppressions in this file override the main TestExpectations file.

* Using a *virtual test suite* defined in
  [web_tests/VirtualTestSuites](../../third_party/blink/web_tests/VirtualTestSuites).
  A virtual test suite runs a subset of web tests under a specific path with
  additional flags. For example, you could test a (hypothetical) new mode for
  repainting using the following virtual test suite:

  ```json
  {
    "prefix": "blocking_repaint",
    "base": "fast/repaint",
    "args": ["--blocking-repaint"]
  }
  ```

  This will create new "virtual" tests of the form
  `virtual/blocking_repaint/fast/repaint/...` which correspond to the files
  under `web_tests/fast/repaint` and pass `--blocking-repaint` to
  `content_shell` when they are run.

  These virtual tests exist in addition to the original `fast/repaint/...`
  tests. They can have their own expectations in TestExpectations, and their own
  baselines. The test harness will use the non-virtual baselines as a fallback.
  However, the non-virtual expectations are not inherited: if
  `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects
  `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the
  virtual test to also fail, it needs its own suppression.

  The "prefix" value does not have to be unique. This is useful if you want to
  run multiple directories with the same flags (but see the notes below about
  performance). Using the same prefix for different sets of flags is not
  recommended.

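
To illustrate the inheritance rule above: suppressing both a test and its
virtual counterpart requires two separate TestExpectations entries (the bug
number and paths here are hypothetical):

```
crbug.com/123456 fast/repaint/foo.html [ Failure ]
crbug.com/123456 virtual/blocking_repaint/fast/repaint/foo.html [ Failure ]
```
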
For flags whose implementation is still in progress, virtual test suites and
flag-specific expectations represent two alternative strategies for testing.
Consider the following when choosing between them:

* The
  [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
  and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
  will run all virtual test suites in addition to the non-virtual tests.
  Conversely, a flag-specific expectations file won't automatically cause the
  bots to test your flag - if you want bot coverage without virtual test suites,
  you will need to set up a dedicated bot for your flag.

* Due to the above, virtual test suites incur a performance penalty for the
  commit queue and the continuous build infrastructure. This is exacerbated by
  the need to restart `content_shell` whenever flags change, which limits
  parallelism. Therefore, you should avoid adding large numbers of virtual test
  suites. They are well suited to running a subset of tests that are directly
  related to the feature, but they don't scale to flags that make deep
  architectural changes that potentially impact all of the tests.

* Note that using wildcards in virtual test path names (e.g.
  `virtual/blocking_repaint/fast/repaint/*`) is not supported.

## Tracking Test Failures

All bugs associated with web test failures must have the
[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how
much you know about the bug, assign the status accordingly:

* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible
  duplicate of an existing bug, or a real failure.
* **Untriaged** -- Confirmed, but unsure of priority or root cause.
* **Available** -- You know the root cause of the issue.
* **Assigned** or **Started** -- You will fix this issue.

When creating a new web test bug, please set the following properties:

* Components: a sub-component of Blink
* OS: **All** (or whichever OS the failure is on)
* Priority: 2 (1 if it's a crash)
* Type: **Bug**
* Labels: **Test-Layout**

You can also use the _Layout Test Failure_ template, which pre-sets these
labels for you.

## Debugging Web Tests

After the web tests run, you should get a summary of tests that pass or
fail. If something fails unexpectedly (a new regression), you will get a
`content_shell` window with a summary of the unexpected failures. Or you might
have a failing test in mind to investigate. In any case, here are some steps and
tips for finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
  * Load the test into a trunk Chrome or content_shell build and look at its
    result. (For tests in the http/ directory, start the http server first.
    See above. Navigate to `http://localhost:8000/` and proceed from there.)
    The best tests describe what they're looking for, but not all do, and
    sometimes things they're not explicitly testing are still broken. Compare
    it to Safari, Firefox, and IE if necessary to see if it's correct. If
    you're still not sure, find the person who knows the most about it and
    ask.
  * Some tests only work properly in content_shell, not Chrome, because they
    rely on extra APIs exposed there.
  * Some tests only work properly when they're run in the web-test
    framework, not when they're loaded into content_shell directly. The test
    should mention that in its visible text, but not all do. So try that too.
    See "Running the Tests", above.
* If you think the test is correct, confirm your suspicion by looking at the
  diffs between the expected result and the actual one.
  * Make sure that the diffs reported aren't important. Small differences in
    spacing or box sizes are often unimportant, especially around fonts and
    form controls. Differences in wording of JS error messages are also
    usually acceptable.
  * `python run_web_tests.py path/to/your/test.html` produces a page listing
    all test results. Those which fail their expectations will include links
    to the expected result, actual result, and diff. These results are saved
    to `$root_build_dir/layout-test-results`.
  * Alternatively, the `--results-directory=path/for/output/` option allows
    you to specify an alternative directory for the output to be saved to.
  * If you're still sure it's correct, rebaseline the test (see below).
    Otherwise...
* If you're lucky, your test is one that runs properly when you navigate to it
  in content_shell normally. In that case, build the Debug content_shell
  project, fire it up in your favorite debugger, and load the test file from a
  `file:` URL.
  * You'll probably be starting and stopping the content_shell a lot. In VS,
    to save navigating to the test every time, you can set the URL to your
    test (`file:` or `http:`) as the command argument in the Debugging section of
    the content_shell project Properties.
  * If your test contains a JS call, DOM manipulation, or other distinctive
    piece of code that you think is failing, search for that in the Chrome
    solution. That's a good place to put a starting breakpoint to start
    tracking down the issue.
  * Otherwise, you're running in a standard message loop just like in Chrome.
    If you have no other information, set a breakpoint on page load.
* If your test only works in full web-test mode, or if you find it simpler to
  debug without all the overhead of an interactive session, start the
  content_shell with the command-line flag `--run-web-tests`, followed by the
  URL (`file:` or `http:`) to your test. More information about running web tests
  in content_shell can be found [here](./web_tests_in_content_shell.md).
  * In VS, you can do this in the Debugging section of the content_shell
    project Properties.
  * Now you're running with exactly the same API, theme, and other setup that
    the web tests use.
  * Again, if your test contains a JS call, DOM manipulation, or other
    distinctive piece of code that you think is failing, search for that in
    the Chrome solution. That's a good place to put a starting breakpoint to
    start tracking down the issue.
  * If you can't find any better place to set a breakpoint, start at the
    `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
    `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
* Debug as usual. Once you've gotten this far, the failing web test is just a
  (hopefully) reduced test case that exposes a problem.

### Debugging HTTP Tests

To run the server manually to reproduce/debug a failure:

```bash
cd src/third_party/blink/tools
python run_blink_httpd.py
```

The web tests are served from `http://127.0.0.1:8000/`. For example, to
run the test
`web_tests/http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests behave differently if you go to `127.0.0.1` vs. `localhost`, so use
`127.0.0.1`.

To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
running, use `taskkill` or the Task Manager on Windows, or `killall` or
Activity Monitor on macOS.

The test server sets up an alias to the `web_tests/resources` directory. For
example, in HTTP tests, you can access the testing framework using
`src="/js-test-resources/js-test.js"`.

### Tips

Check https://test-results.appspot.com/ to see how a test did in the most recent
~100 builds on each builder (as long as the page is being updated regularly).

A timeout will often also be a text mismatch, since the wrapper script kills the
content_shell before it has a chance to finish. The exception is if the test
finishes loading properly, but somehow hangs before it outputs the bit of text
that tells the wrapper it's done.

Why might a test fail (or crash, or time out) on the buildbot, but pass on your
local machine?

* If the test finishes locally but is slow, more than 10 seconds or so, that
  would be why it's called a timeout on the bot.
* Otherwise, try running it as part of a set of tests; it's possible that a test
  one or two (or ten) before this one is corrupting something that makes this
  one fail.
* If it consistently works locally, make sure your environment looks like the
  one on the bot (look at the top of the stdio for the webkit_tests step to see
  all the environment variables and so on).
* If none of that helps, and you have access to the bot itself, you may have to
  log in there and see if you can reproduce the problem manually.

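
When chasing ordering-dependent or flaky failures like these, it can help to
run the surrounding directory, or to repeat the suspect test many times; a
sketch, assuming the harness's `--repeat-each` option and illustrative test
paths:

```bash
# Run the whole directory so earlier tests can interfere, as on the bot.
python third_party/blink/tools/run_web_tests.py fast/forms
# Repeat one test many times to shake out flakiness.
python third_party/blink/tools/run_web_tests.py --repeat-each=20 fast/forms/001.html
```
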
### Debugging DevTools Tests

* Add `debug_devtools=true` to `args.gn` and compile: `autoninja -C out/Default devtools_frontend_resources`
  > Debug DevTools lets you avoid having to recompile after every change to the DevTools front-end.
* Do one of the following:
  * Option A) Run from the `chromium/src` folder:
    `third_party/blink/tools/run_web_tests.sh
    --additional-driver-flag='--debug-devtools'
    --additional-driver-flag='--remote-debugging-port=9222'
    --time-out-ms=6000000`
  * Option B) If you need to debug an http/tests/inspector test, start httpd
    as described above. Then, run content_shell:
    `out/Default/content_shell --debug-devtools --remote-debugging-port=9222 --run-web-tests
    http://127.0.0.1:8000/path/to/test.html`
* Open `http://localhost:9222` in a stable/beta/canary Chrome, then click the
  single link to open the DevTools with the test loaded.
* In the loaded DevTools, set any required breakpoints and execute `test()` in
  the console to actually start the test.

NOTE: If the test is an HTML file, it is a legacy test, so you need to add
`window.debugTest = true;` to your test code as follows:

```javascript
window.debugTest = true;
function test() {
  /* TEST CODE */
}
```

## Bisecting Regressions

You can use [`git bisect`](https://git-scm.com/docs/git-bisect) to find which
commit broke (or fixed!) a web test in a fully automated way. Unlike
[bisect-builds.py](http://dev.chromium.org/developers/bisect-builds-py), which
downloads pre-built Chromium binaries, `git bisect` operates on your local
checkout, so it can run tests with `content_shell`.

Bisecting can take several hours, but since it is fully automated you can leave
it running overnight and view the results the next day.

To set up an automated bisect of a web test regression, create a script like
this:

```bash
#!/bin/bash

# Exit code 125 tells git bisect to skip the revision.
gclient sync || exit 125
autoninja -C out/Debug -j100 blink_tests || exit 125

third_party/blink/tools/run_web_tests.py -t Debug \
  --no-show-results --no-retry-failures \
  path/to/web/test.html
```

Modify the `out` directory, ninja args, and test name as appropriate, and save
the script in `~/checkrev.sh`. Then run:

```bash
chmod u+x ~/checkrev.sh  # mark script as executable
git bisect start <badrev> <goodrev>
git bisect run ~/checkrev.sh
git bisect reset  # quit the bisect session
```

## Rebaselining Web Tests

*** promo
To automatically re-baseline tests across all Chromium platforms, using the
buildbot results, see [How to rebaseline](./web_test_expectations.md#How-to-rebaseline).
Alternatively, to manually run a test and rebaseline it on your workstation,
read on.
***

```bash
cd src/third_party/blink
python tools/run_web_tests.py --reset-results foo/bar/test.html
```

If there are current expectation files for `web_tests/foo/bar/test.html`,
the above command will overwrite the current baselines at their original
locations with the actual results. The current baseline means the `-expected.*`
file used to compare the actual result when the test is run locally, i.e. the
first file found in the [baseline search path](https://cs.chromium.org/search/?q=port/base.py+baseline_search_path).

If there are no current baselines, the above command will create new baselines
in the platform-independent directory, e.g.
`web_tests/foo/bar/test-expected.{txt,png}`.

When you rebaseline a test, make sure your commit description explains why the
test is being re-baselined.

### Rebaselining flag-specific expectations

Though we prefer the Rebaseline Tool to local rebaselining, the Rebaseline Tool
doesn't support rebaselining flag-specific expectations.

```bash
cd src/third_party/blink
python tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
```

New baselines will be created in the flag-specific baselines directory, e.g.
`web_tests/flag-specific/enable-flag/foo/bar/test-expected.{txt,png}`.

Then you can commit the new baselines and upload the patch for review.

However, it's difficult for reviewers to review a patch containing only new
files. You can follow the steps below for easier review.

1. Copy existing baselines to the flag-specific baselines directory for the
   tests to be rebaselined:
   ```bash
   third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --copy-baselines foo/bar/test.html
   ```
   Then add the newly created baseline files, commit, and upload the patch.
   Note that the above command won't copy baselines for passing tests.

2. Rebaseline the test locally:
   ```bash
   third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
   ```
   Commit the changes and upload the patch.

3. Request review of the CL and tell the reviewer to compare the patch sets that
   were uploaded in step 1 and step 2 to see the differences of the rebaselines.

## web-platform-tests

In addition to web tests developed and run just by the Blink team, there is
also a shared test suite; see [web-platform-tests](./web_platform_tests.md).

## Known Issues

See
[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra)
for issues related to Blink tools, including the web test runner.

* If QuickTime is not installed, the plugin tests
  `fast/dom/object-embed-plugin-scripting.html` and
  `plugins/embed-attributes-setting.html` are expected to fail.
560 `plugins/embed-attributes-setting.html` are expected to fail.