# Web Tests (formerly known as "Layout Tests" or "LayoutTests")

Web tests are used by Blink to test many components, including but not
limited to layout and rendering. In general, web tests involve loading pages
in a test renderer (`content_shell`) and comparing the rendered output or
JavaScript output against an expected output file.

This document covers running and debugging existing web tests. See the
[Writing Web Tests documentation](./writing_web_tests.md) if you find
yourself writing web tests.

Note that the term "layout tests" was renamed to "web tests"; the two terms
mean the same thing. These tests are also sometimes called "WebKit tests" or
"WebKit layout tests".

["Web platform tests"](./web_platform_tests.md) (WPT) are the preferred form of
web tests and are located at
[web_tests/external/wpt](/third_party/blink/web_tests/external/wpt).
Tests that should work across browsers go there. Other directories are for
Chrome-specific tests only.

[TOC]

## Running Web Tests

### Initial Setup

Before you can run the web tests, you need to build the `blink_tests` target
to get `content_shell` and all of the other needed binaries.

```bash
autoninja -C out/Default blink_tests
```

On **Android** (web test support
[currently limited to KitKat and earlier](https://crbug.com/567947)) you need to
build and install `content_shell_apk` instead. See also:
[Android Build Instructions](../android_build_instructions.md).

```bash
autoninja -C out/Default content_shell_apk
adb install -r out/Default/apks/ContentShell.apk
```

On **Mac**, you probably want to strip the `content_shell` binary before
starting the tests. If you don't, you'll have 5-10 copies running concurrently,
all stuck being examined by the OS crash reporter. This may cause other
failures, like timeouts, where they normally don't occur.

```bash
strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell
```

### Running the Tests

On Linux, you can run the tests under a virtual X server, e.g. via
`testing/xvfb.py`, so that test windows don't appear on your display.
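
A minimal sketch, assuming `testing/xvfb.py` takes the command to wrap as its
arguments (as it does for other Chromium test launchers):

```bash
# Run the web tests under a virtual X server so no windows appear.
python testing/xvfb.py third_party/blink/tools/run_web_tests.py -t Default
```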

The test runner script is in `third_party/blink/tools/run_web_tests.py`.

To specify which build directory to use (e.g. out/Default, out/Release,
out/Debug) you should pass the `-t` or `--target` parameter. For example, to
use the build in `out/Default`, use:

```bash
third_party/blink/tools/run_web_tests.py -t Default
```

For Android (if your build directory is `out/android`):

```bash
third_party/blink/tools/run_web_tests.py -t android --android
```

*** promo
Windows users need to use `third_party/blink/tools/run_web_tests.bat` instead.
***

Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/blink/web_tests/TestExpectations)
won't be run by default, generally because they cause some intractable tool
error. To force one of them to be run, rename that file, specify the skipped
test on the command line (see the example below), or list it in a file passed
via `--test-list` (note, however, that `--skip=always` makes tests marked as
`[ Skip ]` always skipped). Read the
[Web Test Expectations documentation](./web_test_expectations.md) to learn
more about TestExpectations and related files.
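
For example, naming a skipped test directly on the command line forces it to
run (the test path here is hypothetical):

```bash
# Runs the test even though it is marked [ Skip ] in TestExpectations.
third_party/blink/tools/run_web_tests.py -t Default fast/forms/some-skipped-test.html
```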

*** promo
Currently only the tests listed in
[SmokeTests](../../third_party/blink/web_tests/SmokeTests)
are run on the Android bots, since running all web tests takes too long on
Android (and may still have some infrastructure issues). Most developers focus
their Blink testing on Linux. We rely on the fact that the Linux and Android
behavior is nearly identical for scenarios outside those covered by the smoke
tests.
***

To run only some of the tests, specify their directories or filenames as
arguments to `run_web_tests.py` relative to the web test directory
(`src/third_party/blink/web_tests`). For example, to run the fast form tests,
use:

```bash
third_party/blink/tools/run_web_tests.py fast/forms
```

Or you could use the following shorthand:

```bash
third_party/blink/tools/run_web_tests.py fast/fo\*
```

*** promo
Example: To run the web tests with a debug build of `content_shell`, but only
test the SVG tests and run pixel tests, you would run:

```bash
third_party/blink/tools/run_web_tests.py -t Debug svg
```
***

As a final quick-but-less-robust alternative, you can also just use the
content_shell executable to run specific tests by using (example on Windows):

```bash
out/Default/content_shell.exe --run-web-tests <url>|<full_test_source_path>|<relative_test_path>
```

as in:

```bash
out/Default/content_shell.exe --run-web-tests \
    c:/chrome/src/third_party/blink/web_tests/fast/forms/001.html
```
or

```bash
out/Default/content_shell.exe --run-web-tests fast/forms/001.html
```

but this requires a manual diff against expected results, because the shell
doesn't do it for you. It also dumps only the text result (the pixel and audio
results are binary and thus not human readable).
See [Running Web Tests Using the Content Shell](./web_tests_in_content_shell.md)
for more details of running `content_shell`.

To see a complete list of arguments supported, run:

```bash
third_party/blink/tools/run_web_tests.py --help
```

*** note
**Linux Note:** We try to match the Windows render tree output exactly by
matching font metrics and widget metrics. If there's a difference in the render
tree output, we should see if we can avoid rebaselining by improving our font
metrics. For additional information on Linux web tests, please see
[docs/web_tests_linux.md](./web_tests_linux.md).
***

*** note
**Mac Note:** While the tests are running, a bunch of Appearance settings are
overridden for you so the right type of scroll bars, colors, etc. are used.
Your main display's "Color Profile" is also changed to make sure color
correction by ColorSync matches what is expected in the pixel tests. The change
is noticeable; how much depends on the normal level of correction for your
display. The tests do their best to restore your settings when done, but if
you're left in the wrong state, you can manually reset it by going to
System Preferences → Displays → Color and selecting the "right" value.
***

### Test Harness Options

This script has a lot of command line flags. You can pass `--help` to the script
to see a full list of options. A few of the most useful options are below:

| Option | Meaning |
|:----------------------------|:--------------------------------------------------|
| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug` |
| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
| `--verbose` | Produce more verbose output, including a list of tests that pass. |
| `--reset-results` | Overwrite the current baselines (`-expected.{png|txt|wav}` files) with actual results, or create new baselines if there are no existing baselines. |
| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
| `--driver-logging` | Print C++ logs (LOG(WARNING), etc). |
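
These options can be combined like any other flags. A sketch that uses the
debug build, full parallelism, and verbose output (the test directory is just
an example):

```bash
# Debug build, one child process per core, verbose output.
third_party/blink/tools/run_web_tests.py --debug --verbose --fully-parallel fast/forms
```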

## Success and Failure

A test succeeds when its output matches the pre-defined expected results. If any
tests fail, the test script will place the actual generated results, along with
a diff of the actual and expected results, into
`src/out/Default/layout-test-results/`, and by default launch a browser with a
summary and link to the results/diffs.

The expected results for tests are in the
`src/third_party/blink/web_tests/platform` directory or alongside their
respective tests.

*** note
Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
do not have expected result files if all test cases pass.
***

A test that runs but produces the wrong output is marked as "failed", one that
causes the test shell to crash is marked as "crashed", and one that takes longer
than a certain amount of time to complete is aborted and marked as "timed out".
A row of dots in the script's output indicates one or more tests that passed.

## Test expectations

The
[TestExpectations](../../third_party/blink/web_tests/TestExpectations) file (and related
files) contains the list of all known web test failures. See the
[Web Test Expectations documentation](./web_test_expectations.md) for more
on this.

## Testing Runtime Flags

There are two ways to run web tests with additional command-line arguments:

* Using `--additional-driver-flag` or `--flag-specific`:

  ```bash
  third_party/blink/tools/run_web_tests.py --additional-driver-flag=--blocking-repaint
  ```

  This tells the test harness to pass `--blocking-repaint` to the
  content_shell binary.

  It will also look for flag-specific expectations in
  `web_tests/FlagExpectations/blocking-repaint`, if this file exists. The
  suppressions in this file override the main TestExpectations files.
  However, `[ Slow ]` entries in either the flag-specific or the base
  expectations are always merged into the effective expectations.

  It will also look for baselines in `web_tests/flag-specific/blocking-repaint`.
  The baselines in this directory override the fallback baselines.

  By default, the name of the expectations file under
  `web_tests/FlagExpectations` and the name of the baseline directory under
  `web_tests/flag-specific` are derived from the first
  `--additional-driver-flag`, with the leading '-'s stripped.

  You can also customize the name in `web_tests/FlagSpecificConfig` when
  the name is too long or when we need to match multiple additional args:

  ```json
  {
    "name": "short-name",
    "args": ["--blocking-repaint", "--another-flag"]
  }
  ```

  When at least `--additional-driver-flag=--blocking-repaint` and
  `--additional-driver-flag=--another-flag` are specified, `short-name` will
  be used as the name of the flag-specific expectations file and the baseline
  directory.

  With the config, you can also use `--flag-specific=short-name` as a shortcut
  for `--additional-driver-flag=--blocking-repaint --additional-driver-flag=--another-flag`.
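
  For example, a sketch using the `short-name` config defined above:

  ```bash
  # Equivalent to passing both --blocking-repaint and --another-flag,
  # with short-name's flag-specific expectations and baselines applied.
  third_party/blink/tools/run_web_tests.py --flag-specific=short-name
  ```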
259
pwnallae101a5f2016-11-08 00:24:38260* Using a *virtual test suite* defined in
Kent Tamura59ffb022018-11-27 05:30:56261 [web_tests/VirtualTestSuites](../../third_party/blink/web_tests/VirtualTestSuites).
Xianzhu Wang5d682c82019-10-29 05:08:19262 A virtual test suite runs a subset of web tests with additional flags, with
263 `virtual/<prefix>/...` in their paths. The tests can be virtual tests that
264 map to real base tests (directories or files) whose paths match any of the
265 specified bases, or any real tests under `web_tests/virtual/<prefix>/`
266 directory. For example, you could test a (hypothetical) new mode for
pwnallae101a5f2016-11-08 00:24:38267 repainting using the following virtual test suite:
268
269 ```json
270 {
271 "prefix": "blocking_repaint",
Xianzhu Wang5d682c82019-10-29 05:08:19272 "bases": ["compositing", "fast/repaint"],
273 "args": ["--blocking-repaint"]
pwnallae101a5f2016-11-08 00:24:38274 }
275 ```
276
277 This will create new "virtual" tests of the form
Xianzhu Wang5d682c82019-10-29 05:08:19278 `virtual/blocking_repaint/compositing/...` and
Robert Ma89dd91d832017-08-02 18:08:44279 `virtual/blocking_repaint/fast/repaint/...` which correspond to the files
Xianzhu Wang5d682c82019-10-29 05:08:19280 under `web_tests/compositing` and `web_tests/fast/repaint`, respectively,
281 and pass `--blocking-repaint` to `content_shell` when they are run.

  These virtual tests exist in addition to the original `compositing/...` and
  `fast/repaint/...` tests. They can have their own expectations in
  `web_tests/TestExpectations`, and their own baselines. The test harness will
  use the non-virtual expectations and baselines as a fallback. If a virtual
  test has its own expectations, they override all non-virtual expectations;
  otherwise the non-virtual expectations are used. However, `[ Slow ]` in
  either virtual or non-virtual expectations is always merged into the
  effective expectations. If a virtual test is expected to pass while the
  non-virtual test is expected to fail, you need to add an explicit `[ Pass ]`
  entry for the virtual test.

  This will also let any real tests under the
  `web_tests/virtual/blocking_repaint` directory run with the
  `--blocking-repaint` flag.

  The "prefix" value should be unique. Multiple directories with the same flags
  should be listed in the same "bases" list. The "bases" list can be empty,
  in case we just want to run the real tests under `virtual/<prefix>` with the
  flags without creating any virtual tests.
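
  Virtual tests are run by naming their virtual paths, just like any other
  tests; a sketch using the suite above:

  ```bash
  # Runs the fast/repaint tests with --blocking-repaint applied.
  third_party/blink/tools/run_web_tests.py virtual/blocking_repaint/fast/repaint
  ```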

For flags whose implementation is still in progress, virtual test suites and
flag-specific expectations represent two alternative strategies for testing
both the enabled and the not-enabled code paths. They are preferred to only
setting a [runtime enabled feature](../../third_party/blink/renderer/platform/RuntimeEnabledFeatures.md)
to `status: "test"` if the feature has a substantially different code path from
production, because the latter would cause loss of test coverage of the
production code path.

Consider the following when choosing between virtual test suites and
flag-specific expectations:

* The
  [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
  and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
  will run all virtual test suites in addition to the non-virtual tests.
  Conversely, a flag-specific expectations file won't automatically cause the
  bots to test your flag - if you want bot coverage without virtual test suites,
  you will need to set up a dedicated bot ([example](https://chromium-review.googlesource.com/c/chromium/src/+/1850255))
  for your flag.

* Due to the above, virtual test suites incur a performance penalty for the
  commit queue and the continuous build infrastructure. This is exacerbated by
  the need to restart `content_shell` whenever flags change, which limits
  parallelism. Therefore, you should avoid adding large numbers of virtual test
  suites. They are well suited to running a subset of tests that are directly
  related to the feature, but they don't scale to flags that make deep
  architectural changes that potentially impact all of the tests.

* Note that using wildcards in virtual test path names (e.g.
  `virtual/blocking_repaint/fast/repaint/*`) is not supported, but you can
  still use `virtual/blocking_repaint` to run all real and virtual tests
  in the suite or `virtual/blocking_repaint/fast/repaint/dir` to run real
  or virtual tests in the suite under a specific directory.

*** note
We can run a virtual test with additional flags. Both the virtual args and the
additional flags will be applied. The fallback order of baselines and
expectations will be: 1) flag-specific virtual, 2) non-flag-specific virtual,
3) flag-specific base, 4) non-flag-specific base. See the sketch below.
***
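
A sketch combining the two mechanisms, reusing the `short-name` config and the
`blocking_repaint` suite from the examples above:

```bash
# The virtual args and the flag-specific args are both applied.
third_party/blink/tools/run_web_tests.py --flag-specific=short-name \
  virtual/blocking_repaint/fast/repaint
```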

## Tracking Test Failures

All bugs associated with web test failures must have the
[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how
much you know about the bug, assign the status accordingly:

* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible
  duplicate of an existing bug, or a real failure.
* **Untriaged** -- Confirmed but unsure of priority or root cause.
* **Available** -- You know the root cause of the issue.
* **Assigned** or **Started** -- You will fix this issue.

When creating a new web test bug, please set the following properties:

* Components: a sub-component of Blink
* OS: **All** (or whichever OS the failure is on)
* Priority: 2 (1 if it's a crash)
* Type: **Bug**
* Labels: **Test-Layout**

You can also use the _Layout Test Failure_ template, which pre-sets these
labels for you.

## Debugging Web Tests

After the web tests run, you should get a summary of tests that pass or
fail. If something fails unexpectedly (a new regression), you will get a
`content_shell` window with a summary of the unexpected failures. Or you might
have a failing test in mind to investigate. In any case, here are some steps
and tips for finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
  * Load the test into a trunk Chrome or content_shell build and look at its
    result. (For tests in the http/ directory, start the http server first;
    see "Debugging HTTP Tests" below. Navigate to `http://localhost:8000/` and
    proceed from there.) The best tests describe what they're looking for, but
    not all do, and sometimes things they're not explicitly testing are still
    broken. Compare it to Safari, Firefox, and IE if necessary to see if it's
    correct. If you're still not sure, find the person who knows the most
    about it and ask.
  * Some tests only work properly in content_shell, not Chrome, because they
    rely on extra APIs exposed there.
  * Some tests only work properly when they're run in the web-test
    framework, not when they're loaded into content_shell directly. The test
    should mention that in its visible text, but not all do. So try that too.
    See "Running the Tests", above.
* If you think the test is correct, confirm your suspicion by looking at the
  diffs between the expected result and the actual one.
  * Make sure that the diffs reported aren't important. Small differences in
    spacing or box sizes are often unimportant, especially around fonts and
    form controls. Differences in wording of JS error messages are also
    usually acceptable.
  * `third_party/blink/tools/run_web_tests.py path/to/your/test.html` produces
    a page listing all test results. Those which fail their expectations will
    include links to the expected result, actual result, and diff. These
    results are saved to `$root_build_dir/layout-test-results`.
    * Alternatively, the `--results-directory=path/for/output/` option allows
      you to specify an alternative directory for the output to be saved to.
  * If you're still sure it's correct, rebaseline the test (see below).
    Otherwise...
* If you're lucky, your test is one that runs properly when you navigate to it
  in content_shell normally. In that case, build the Debug content_shell
  project, fire it up in your favorite debugger, and load the test file from a
  `file:` URL.
  * You'll probably be starting and stopping the content_shell a lot. In VS,
    to save navigating to the test every time, you can set the URL to your
    test (`file:` or `http:`) as the command argument in the Debugging section
    of the content_shell project Properties.
  * If your test contains a JS call, DOM manipulation, or other distinctive
    piece of code that you think is failing, search for that in the Chrome
    solution. That's a good place to put a starting breakpoint to start
    tracking down the issue.
  * Otherwise, you're running in a standard message loop just like in Chrome.
    If you have no other information, set a breakpoint on page load.
* If your test only works in full web-test mode, or if you find it simpler to
  debug without all the overhead of an interactive session, start the
  content_shell with the command-line flag `--run-web-tests`, followed by the
  URL (`file:` or `http:`) to your test. More information about running web
  tests in content_shell can be found [here](./web_tests_in_content_shell.md).
  * In VS, you can do this in the Debugging section of the content_shell
    project Properties.
  * Now you're running with exactly the same API, theme, and other setup that
    the web tests use.
  * Again, if your test contains a JS call, DOM manipulation, or other
    distinctive piece of code that you think is failing, search for that in
    the Chrome solution. That's a good place to put a starting breakpoint to
    start tracking down the issue.
  * If you can't find any better place to set a breakpoint, start at the
    `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
    `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
* Debug as usual. Once you've gotten this far, the failing web test is just a
  (hopefully) reduced test case that exposes a problem.

### Debugging HTTP Tests

To run the server manually to reproduce/debug a failure:

```bash
third_party/blink/tools/run_blink_httpd.py
```

The web tests are served from `http://127.0.0.1:8000/`. For example, to
run the test
`web_tests/http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests behave differently if you go to `127.0.0.1` vs. `localhost`, so use
`127.0.0.1`.

To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
running, use `taskkill` or the Task Manager on Windows, or `killall` or
Activity Monitor on macOS.

The test server sets up an alias to the `web_tests/resources` directory. For
example, in HTTP tests, you can access the testing framework using
`src="/js-test-resources/js-test.js"`.

### Tips

Check https://test-results.appspot.com/ to see how a test did in the most recent
~100 builds on each builder (as long as the page is being updated regularly).

A timeout will often also be a text mismatch, since the wrapper script kills the
content_shell before it has a chance to finish. The exception is if the test
finishes loading properly, but somehow hangs before it outputs the bit of text
that tells the wrapper it's done.

Why might a test fail (or crash, or time out) on the buildbot, but pass on your
local machine?
* If the test finishes locally but is slow, more than 10 seconds or so, that
  would be why it's called a timeout on the bot.
* Otherwise, try running it as part of a set of tests; it's possible that a test
  one or two (or ten) before this one is corrupting something that makes this
  one fail. See the sketch after this list.
* If it consistently works locally, make sure your environment looks like the
  one on the bot (look at the top of the stdio for the webkit_tests step to see
  all the environment variables and so on).
* If none of that helps, and you have access to the bot itself, you may have to
  log in there and see if you can reproduce the problem manually.
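
A sketch for chasing order-dependent failures, assuming the failing test lives
under `fast/forms` (the directory here is just an example):

```bash
# Run the surrounding directory serially in one child process, so earlier
# tests can affect later ones the same way they might on a bot.
third_party/blink/tools/run_web_tests.py --child-processes=1 fast/forms
```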

### Debugging DevTools Tests

* Do one of the following:
  * Option A) Run from the `chromium/src` folder:
    `third_party/blink/tools/run_web_tests.py --additional-driver-flag='--remote-debugging-port=9222' --additional-driver-flag='--debug-devtools' --time-out-ms=6000000`
  * Option B) If you need to debug an http/tests/inspector test, start httpd
    as described above. Then, run content_shell:
    `out/Default/content_shell --remote-debugging-port=9222 --additional-driver-flag='--debug-devtools' --run-web-tests http://127.0.0.1:8000/path/to/test.html`
* Open `http://localhost:9222` in a stable/beta/canary Chrome, and click the
  single link to open the devtools with the test loaded.
* In the loaded devtools, set any required breakpoints and execute `test()` in
  the console to actually start the test.

NOTE: If the test is an html file, it is a legacy test, so you need one extra
step:
* Add `window.debugTest = true;` to your test code as follows:

  ```javascript
  window.debugTest = true;
  function test() {
    /* TEST CODE */
  }
  ```

## Bisecting Regressions

You can use [`git bisect`](https://git-scm.com/docs/git-bisect) to find which
commit broke (or fixed!) a web test in a fully automated way. Unlike
[bisect-builds.py](http://dev.chromium.org/developers/bisect-builds-py), which
downloads pre-built Chromium binaries, `git bisect` operates on your local
checkout, so it can run tests with `content_shell`.

Bisecting can take several hours, but since it is fully automated you can leave
it running overnight and view the results the next day.

To set up an automated bisect of a web test regression, create a script like
this:

```bash
#!/bin/bash

# Exit code 125 tells git bisect to skip the revision.
gclient sync || exit 125
autoninja -C out/Debug -j100 blink_tests || exit 125

third_party/blink/tools/run_web_tests.py -t Debug \
  --no-show-results --no-retry-failures \
  path/to/web/test.html
```

Modify the `out` directory, ninja args, and test name as appropriate, and save
the script in `~/checkrev.sh`. Then run:

```bash
chmod u+x ~/checkrev.sh # mark script as executable
git bisect start <badrev> <goodrev>
git bisect run ~/checkrev.sh
git bisect reset # quit the bisect session
```

## Rebaselining Web Tests

*** promo
To automatically re-baseline tests across all Chromium platforms, using the
buildbot results, see [How to rebaseline](./web_test_expectations.md#How-to-rebaseline).
Alternatively, to manually run a test and rebaseline it on your workstation,
read on.
***

```bash
third_party/blink/tools/run_web_tests.py --reset-results foo/bar/test.html
```

If there are current expectation files for `web_tests/foo/bar/test.html`,
the above command will overwrite the current baselines at their original
locations with the actual results. The current baseline means the `-expected.*`
file that the actual result is compared against when the test is run locally,
i.e. the first file found in the [baseline search path](https://cs.chromium.org/search/?q=port/base.py+baseline_search_path).

If there are no current baselines, the above command will create new baselines
in the platform-independent directory, e.g.
`web_tests/foo/bar/test-expected.{txt,png}`.

When you rebaseline a test, make sure your commit description explains why the
test is being re-baselined.

### Rebaselining flag-specific expectations

Though we prefer the Rebaseline Tool to local rebaselining, the Rebaseline Tool
doesn't support rebaselining flag-specific expectations.

```bash
third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
```
*** promo
You can use `--flag-specific=config` as a shorthand for
`--additional-driver-flag=--enable-flag` if `config` is defined in
`web_tests/FlagSpecificConfig`.
***

New baselines will be created in the flag-specific baselines directory, e.g.
`web_tests/flag-specific/enable-flag/foo/bar/test-expected.{txt,png}`
or
`web_tests/flag-specific/config/foo/bar/test-expected.{txt,png}`.
588Then you can commit the new baselines and upload the patch for review.
589
590However, it's difficult for reviewers to review the patch containing only new
Xianzhu Wangd063968e2017-10-16 16:47:44591files. You can follow the steps below for easier review.
Xianzhu Wang95d0bac32017-06-05 21:09:39592
Xianzhu Wangd063968e2017-10-16 16:47:445931. Copy existing baselines to the flag-specific baselines directory for the
594 tests to be rebaselined:
595 ```bash
Kent Tamuraa045a7f2018-04-25 05:08:11596 third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --copy-baselines foo/bar/test.html
Xianzhu Wangd063968e2017-10-16 16:47:44597 ```
598 Then add the newly created baseline files, commit and upload the patch.
599 Note that the above command won't copy baselines for passing tests.
Xianzhu Wang95d0bac32017-06-05 21:09:39600
Xianzhu Wangd063968e2017-10-16 16:47:446012. Rebaseline the test locally:
602 ```bash
Kent Tamuraa045a7f2018-04-25 05:08:11603 third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
Xianzhu Wangd063968e2017-10-16 16:47:44604 ```
605 Commit the changes and upload the patch.
Xianzhu Wang95d0bac32017-06-05 21:09:39606
Xianzhu Wangd063968e2017-10-16 16:47:446073. Request review of the CL and tell the reviewer to compare the patch sets that
608 were uploaded in step 1 and step 2 to see the differences of the rebaselines.

## Known Issues

See
[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra)
for issues related to Blink tools, including the web test runner.

* If QuickTime is not installed, the plugin tests
  `fast/dom/object-embed-plugin-scripting.html` and
  `plugins/embed-attributes-setting.html` are expected to fail.