# Layout Tests

Layout tests are used by Blink to test many components, including but not
limited to layout and rendering. In general, layout tests involve loading pages
in a test renderer (`content_shell`) and comparing the rendered output or
JavaScript output against an expected output file.

This document covers running and debugging existing layout tests. See the
[Writing Layout Tests documentation](./writing_layout_tests.md) if you find
yourself writing layout tests.

[TOC]

## Running Layout Tests

### Initial Setup

Before you can run the layout tests, you need to build the `blink_tests` target
to get `content_shell` and all of the other needed binaries.

```bash
ninja -C out/Release blink_tests
```

On **Android** (layout test support
[currently limited to KitKat and earlier](https://crbug.com/567947)) you need to
build and install `content_shell_apk` instead. See also:
[Android Build Instructions](../android_build_instructions.md).

```bash
ninja -C out/Default content_shell_apk
adb install -r out/Default/apks/ContentShell.apk
```

On **Mac**, you probably want to strip the `content_shell` binary before starting
the tests. If you don't, you'll have 5-10 content_shell processes running
concurrently, all stuck being examined by the OS crash reporter. This may cause
other failures, like timeouts, where they normally don't occur.

```bash
strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell
```

### Running the Tests

TODO: mention `testing/xvfb.py`

The test runner script is in
`third_party/WebKit/Tools/Scripts/run-webkit-tests`.

To specify which build directory to use (e.g. out/Default, out/Release,
out/Debug) you should pass the `-t` or `--target` parameter. For example, to
use the build in `out/Default`, use:

```bash
python third_party/WebKit/Tools/Scripts/run-webkit-tests -t Default
```

For Android (if your build directory is `out/android`):

```bash
python third_party/WebKit/Tools/Scripts/run-webkit-tests -t android --android
```

Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations)
won't be run at all, generally because they cause some intractable tool error.
To force one of them to be run, either rename that file or specify the skipped
test as the only one on the command line (see below). Read the
[Layout Test Expectations documentation](./layout_test_expectations.md) to learn
more about TestExpectations and related files.

*** promo
Currently only the tests listed in
[SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests)
are run on the Android bots, since running all layout tests takes too long on
Android (and may still have some infrastructure issues). Most developers focus
their Blink testing on Linux. We rely on the fact that the Linux and Android
behavior is nearly identical for scenarios outside those covered by the smoke
tests.
***

To run only some of the tests, specify their directories or filenames as
arguments to `run_webkit_tests.py` relative to the layout test directory
(`src/third_party/WebKit/LayoutTests`). For example, to run the fast form tests,
use:

```bash
Tools/Scripts/run-webkit-tests fast/forms
```

Or you could use the following shorthand:

```bash
Tools/Scripts/run-webkit-tests fast/fo\*
```

*** promo
Example: To run the layout tests with a debug build of `content_shell`, but only
test the SVG tests and run pixel tests, you would run:

```bash
Tools/Scripts/run-webkit-tests -t Debug svg
```
***

As a final quick-but-less-robust alternative, you can also just use the
content_shell executable to run specific tests by using (for Windows):

```bash
out/Default/content_shell.exe --run-layout-test --no-sandbox full_test_source_path
```

as in:

```bash
out/Default/content_shell.exe --run-layout-test --no-sandbox \
  c:/chrome/src/third_party/WebKit/LayoutTests/fast/forms/001.html
```

but this requires a manual diff against expected results, because the shell
doesn't do it for you.
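
In practice that manual diff is just a text comparison of the shell's output
with the test's `-expected.txt` baseline. A minimal sketch of the idea, using
stand-in files and hypothetical paths in place of a real content_shell run:

```bash
# Stand-ins for a real run; in practice actual.txt would be produced by
# redirecting the content_shell --run-layout-test invocation above to a file.
printf 'PASS form submits\n' > /tmp/expected.txt   # the checked-in baseline
printf 'PASS form submits\n' > /tmp/actual.txt     # what content_shell printed
if diff -u /tmp/expected.txt /tmp/actual.txt; then
  echo "MATCH"      # the test would be reported as passing
else
  echo "MISMATCH"   # the test would be reported as failing
fi
```

In a real session, `/tmp/actual.txt` would instead be the redirected output of
the content_shell invocation, and `/tmp/expected.txt` the `-expected.txt` file
next to the test.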

To see a complete list of arguments supported, run: `run-webkit-tests --help`

*** note
**Linux Note:** We try to match the Windows render tree output exactly by
matching font metrics and widget metrics. If there's a difference in the render
tree output, we should see if we can avoid rebaselining by improving our font
metrics. For additional information on Linux Layout Tests, please see
[docs/layout_tests_linux.md](../layout_tests_linux.md).
***

*** note
**Mac Note:** While the tests are running, a number of Appearance settings are
overridden for you so that the right type of scroll bars, colors, etc. are used.
Your main display's "Color Profile" is also changed to make sure color
correction by ColorSync matches what is expected in the pixel tests. The change
is noticeable; how much so depends on the normal level of correction for your
display. The tests do their best to restore your settings when done, but if
you're left in the wrong state, you can manually reset it by going to
System Preferences → Displays → Color and selecting the "right" value.
***

### Test Harness Options

This script has a lot of command line flags. You can pass `--help` to the script
to see a full list of options. A few of the most useful options are below:

| Option | Meaning |
|:----------------------------|:--------------------------------------------------|
| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug`. |
| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
| `--verbose` | Produce more verbose output, including a list of tests that pass. |
| `--no-pixel-tests` | Disable the pixel-to-pixel PNG comparisons and image checksums for tests that don't call `testRunner.dumpAsText()`. |
| `--reset-results` | Overwrite the current baselines (`-expected.{png\|txt\|wav}` files) with actual results, or create new baselines if there are no existing baselines. |
| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
| `--driver-logging` | Print C++ logs (`LOG(WARNING)`, etc.). |

## Success and Failure

A test succeeds when its output matches the pre-defined expected results. If any
tests fail, the test script will place the actual generated results, along with
a diff of the actual and expected results, into
`src/out/Default/layout_test_results/`, and by default launch a browser with a
summary and link to the results/diffs.

The expected results for tests are in
`src/third_party/WebKit/LayoutTests/platform` or alongside their respective
tests.

*** note
Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
do not have expected result files if all test cases pass.
***

A test that runs but produces the wrong output is marked as "failed", one that
causes the test shell to crash is marked as "crashed", and one that takes longer
than a certain amount of time to complete is aborted and marked as "timed out".
A row of dots in the script's output indicates one or more tests that passed.

## Test expectations

The
[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) file (and related
files) contains the list of all known layout test failures. See the
[Layout Test Expectations documentation](./layout_test_expectations.md) for more
on this.
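
For orientation, entries in these files are one line per test: an optional bug
link, optional platform modifiers, the test path, and the expected outcomes.
A hypothetical entry (the bug number and path are made up):

```
crbug.com/123456 [ Linux ] fast/forms/example.html [ Failure Timeout ]
```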

## Testing Runtime Flags

There are two ways to run layout tests with additional command-line arguments:

* Using `--additional-driver-flag`:

  ```bash
  run-webkit-tests --additional-driver-flag=--blocking-repaint
  ```

  This tells the test harness to pass `--blocking-repaint` to the
  content_shell binary.

  It will also look for flag-specific expectations in
  `LayoutTests/FlagExpectations/blocking-repaint`, if this file exists. The
  suppressions in this file override the main TestExpectations file.

* Using a *virtual test suite* defined in
  [LayoutTests/VirtualTestSuites](../../third_party/WebKit/LayoutTests/VirtualTestSuites).
  A virtual test suite runs a subset of layout tests under a specific path with
  additional flags. For example, you could test a (hypothetical) new mode for
  repainting using the following virtual test suite:

  ```json
  {
    "prefix": "blocking_repaint",
    "base": "fast/repaint",
    "args": ["--blocking-repaint"]
  }
  ```

  This will create new "virtual" tests of the form
  `virtual/blocking_repaint/fast/repaint/...` which correspond to the files
  under `LayoutTests/fast/repaint` and pass `--blocking-repaint` to
  content_shell when they are run.

  These virtual tests exist in addition to the original `fast/repaint/...`
  tests. They can have their own expectations in TestExpectations, and their own
  baselines. The test harness will use the non-virtual baselines as a fallback.
  However, the non-virtual expectations are not inherited: if
  `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects
  `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the
  virtual test to also fail, it needs its own suppression.

  The "prefix" value does not have to be unique. This is useful if you want to
  run multiple directories with the same flags (but see the notes below about
  performance). Using the same prefix for different sets of flags is not
  recommended.
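
The flag-specific expectations mentioned above use the same one-line syntax as
the main TestExpectations file. A hypothetical
`LayoutTests/FlagExpectations/blocking-repaint` might contain entries like:

```
crbug.com/654321 fast/repaint/example.html [ Failure ]
crbug.com/654321 fast/repaint/other-example.html [ Timeout ]
```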

For flags whose implementation is still in progress, virtual test suites and
flag-specific expectations represent two alternative strategies for testing.
Consider the following when choosing between them:

* The
  [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
  and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
  will run all virtual test suites in addition to the non-virtual tests.
  Conversely, a flag-specific expectations file won't automatically cause the
  bots to test your flag - if you want bot coverage without virtual test suites,
  you will need to set up a dedicated bot for your flag.

* Due to the above, virtual test suites incur a performance penalty for the
  commit queue and the continuous build infrastructure. This is exacerbated by
  the need to restart `content_shell` whenever flags change, which limits
  parallelism. Therefore, you should avoid adding large numbers of virtual test
  suites. They are well suited to running a subset of tests that are directly
  related to the feature, but they don't scale to flags that make deep
  architectural changes that potentially impact all of the tests.

## Tracking Test Failures

All bugs associated with layout test failures must have the
[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how
much you know about the bug, assign the status accordingly:

* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible
  duplicate of an existing bug, or a real failure.
* **Untriaged** -- Confirmed but unsure of priority or root cause.
* **Available** -- You know the root cause of the issue.
* **Assigned** or **Started** -- You will fix this issue.

When creating a new layout test bug, please set the following properties:

* Components: a sub-component of Blink
* OS: **All** (or whichever OS the failure is on)
* Priority: 2 (1 if it's a crash)
* Type: **Bug**
* Labels: **Test-Layout**

You can also use the _Layout Test Failure_ template, which will pre-set these
labels for you.

## Debugging Layout Tests

After the layout tests run, you should get a summary of tests that pass or fail.
If something fails unexpectedly (a new regression), you will get a content_shell
window with a summary of the unexpected failures. Or you might have a failing
test in mind to investigate. In any case, here are some steps and tips for
finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
  * Load the test into a trunk Chrome or content_shell build and look at its
    result. (For tests in the http/ directory, start the http server first;
    see "Debugging HTTP Tests" below. Navigate to `http://localhost:8000/`
    and proceed from there.)
    The best tests describe what they're looking for, but not all do, and
    sometimes things they're not explicitly testing are still broken. Compare
    it to Safari, Firefox, and IE if necessary to see if it's correct. If
    you're still not sure, find the person who knows the most about it and
    ask.
  * Some tests only work properly in content_shell, not Chrome, because they
    rely on extra APIs exposed there.
  * Some tests only work properly when they're run in the layout-test
    framework, not when they're loaded into content_shell directly. The test
    should mention that in its visible text, but not all do. So try that too.
    See "Running the Tests", above.
* If you think the test is correct, confirm your suspicion by looking at the
  diffs between the expected result and the actual one.
  * Make sure that the diffs reported aren't important. Small differences in
    spacing or box sizes are often unimportant, especially around fonts and
    form controls. Differences in wording of JS error messages are also
    usually acceptable.
  * `./run_webkit_tests.py path/to/your/test.html --full-results-html` will
    produce a page including links to the expected result, actual result, and
    diff.
  * Add the `--sources` option to `run_webkit_tests.py` to see exactly which
    expected result it's comparing to (a file next to the test, something in
    platform/mac/, something in platform/chromium-win/, etc.)
  * If you're still sure it's correct, rebaseline the test (see below).
    Otherwise...
* If you're lucky, your test is one that runs properly when you navigate to it
  in content_shell normally. In that case, build the Debug content_shell
  project, fire it up in your favorite debugger, and load the test file from a
  `file:` URL.
  * You'll probably be starting and stopping the content_shell a lot. In VS,
    to save navigating to the test every time, you can set the URL to your
    test (`file:` or `http:`) as the command argument in the Debugging section of
    the content_shell project Properties.
  * If your test contains a JS call, DOM manipulation, or other distinctive
    piece of code that you think is failing, search for that in the Chrome
    solution. That's a good place to put a starting breakpoint to start
    tracking down the issue.
  * Otherwise, you're running in a standard message loop just like in Chrome.
    If you have no other information, set a breakpoint on page load.
* If your test only works in full layout-test mode, or if you find it simpler to
  debug without all the overhead of an interactive session, start the
  content_shell with the command-line flag `--run-layout-test`, followed by the
  URL (`file:` or `http:`) to your test. More information about running layout
  tests in content_shell can be found
  [here](./layout_tests_in_content_shell.md).
  * In VS, you can do this in the Debugging section of the content_shell
    project Properties.
  * Now you're running with exactly the same API, theme, and other setup that
    the layout tests use.
  * Again, if your test contains a JS call, DOM manipulation, or other
    distinctive piece of code that you think is failing, search for that in
    the Chrome solution. That's a good place to put a starting breakpoint to
    start tracking down the issue.
  * If you can't find any better place to set a breakpoint, start at the
    `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
    `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
* Debug as usual. Once you've gotten this far, the failing layout test is just a
  (hopefully) reduced test case that exposes a problem.

### Debugging HTTP Tests

To run the server manually to reproduce/debug a failure:

```bash
cd src/third_party/blink/tools
./run_blink_httpd.py
```

The layout tests will be served from `http://127.0.0.1:8000`. For example, to
run the test
`LayoutTests/http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests will behave differently if you go to 127.0.0.1 vs localhost, so use
127.0.0.1.

To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
running, or just use `taskkill` or the Task Manager on Windows, and `killall` or
Activity Monitor on macOS.

The test server sets up an alias to the `LayoutTests/resources` directory. In
HTTP tests, you can access the testing framework at e.g.
`src="/js-test-resources/js-test.js"`.

### Tips

Check https://test-results.appspot.com/ to see how a test did in the most recent
~100 builds on each builder (as long as the page is being updated regularly).

A timeout will often also be a text mismatch, since the wrapper script kills the
content_shell before it has a chance to finish. The exception is if the test
finishes loading properly, but somehow hangs before it outputs the bit of text
that tells the wrapper it's done.

Why might a test fail (or crash, or time out) on buildbot, but pass on your
local machine?

* If the test finishes locally but is slow (more than 10 seconds or so), that
  would be why it's reported as a timeout on the bot.
* Otherwise, try running it as part of a set of tests; it's possible that a test
  one or two (or ten) before this one is corrupting something that makes this
  one fail.
* If it consistently works locally, make sure your environment looks like the
  one on the bot (look at the top of the stdio for the webkit_tests step to see
  all the environment variables and so on).
* If none of that helps, and you have access to the bot itself, you may have to
  log in there and see if you can reproduce the problem manually.

### Debugging DevTools Tests

* Add `debug_devtools=true` to args.gn and compile: `ninja -C out/Default devtools_frontend_resources`
  > Debug DevTools lets you avoid having to recompile after every change to the DevTools front-end.
* Do one of the following:
  * Option A) Run from the chromium/src folder:
    `blink/tools/run_layout_tests.sh
    --additional-driver-flag='--debug-devtools'
    --additional-driver-flag='--remote-debugging-port=9222'
    --time-out-ms=6000000`
  * Option B) If you need to debug an http/tests/inspector test, start httpd
    as described above. Then, run content_shell:
    `out/Default/content_shell --debug-devtools --remote-debugging-port=9222 --run-layout-test
    http://127.0.0.1:8000/path/to/test.html`
* Open `http://localhost:9222` in a stable/beta/canary Chrome, click the single
  link to open the devtools with the test loaded.
* In the loaded devtools, set any required breakpoints and execute `test()` in
  the console to actually start the test.

NOTE: If the test is an html file, it is a legacy test, so you also need to add
`window.debugTest = true;` to your test code as follows:

```javascript
window.debugTest = true;
function test() {
  /* TEST CODE */
}
```

## Bisecting Regressions

You can use [`git bisect`](https://git-scm.com/docs/git-bisect) to find which
commit broke (or fixed!) a layout test in a fully automated way. Unlike
[bisect-builds.py](http://dev.chromium.org/developers/bisect-builds-py), which
downloads pre-built Chromium binaries, `git bisect` operates on your local
checkout, so it can run tests with `content_shell`.

Bisecting can take several hours, but since it is fully automated you can leave
it running overnight and view the results the next day.

To set up an automated bisect of a layout test regression, create a script like
this:

```bash
#!/bin/bash

# Exit code 125 tells git bisect to skip the revision.
gclient sync || exit 125
ninja -C out/Debug -j100 blink_tests || exit 125

blink/tools/run_layout_tests.sh -t Debug \
  --no-show-results --no-retry-failures \
  path/to/layout/test.html
```

Modify the `out` directory, ninja args, and test name as appropriate, and save
the script in `~/checkrev.sh`. Then run:

```bash
chmod u+x ~/checkrev.sh  # mark script as executable
git bisect start <badrev> <goodrev>
git bisect run ~/checkrev.sh
git bisect reset  # quit the bisect session
```
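
The `|| exit 125` lines matter because `git bisect run` interprets exit
statuses specially: 0 means the revision is good, 125 means it cannot be tested
and should be skipped, and other values from 1 through 127 mean it is bad. The
pattern can be checked in isolation, using `false` as a stand-in for a failing
`gclient sync` or build step:

```bash
# `false` stands in for a failing sync/build step; the script exits with 125,
# which `git bisect run` treats as "cannot test this revision, skip it".
sh -c 'false || exit 125; echo "would run tests here"'
echo "exit status: $?"   # prints: exit status: 125
```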

## Rebaselining Layout Tests

*** promo
To automatically re-baseline tests across all Chromium platforms, using the
buildbot results, see [How to rebaseline](./layout_test_expectations.md#How-to-rebaseline).
Alternatively, to manually run a test and rebaseline it on your workstation,
read on.
***

```bash
cd src/third_party/WebKit
Tools/Scripts/run-webkit-tests --reset-results foo/bar/test.html
```

If there are current expectation files for `LayoutTests/foo/bar/test.html`,
the above command will overwrite the current baselines at their original
locations with the actual results. The current baseline means the `-expected.*`
file used to compare the actual result when the test is run locally, i.e. the
first file found in the
[baseline search path](https://cs.chromium.org/search/?q=port/base.py+baseline_search_path).

If there are no current baselines, the above command will create new baselines
in the platform-independent directory, e.g.
`LayoutTests/foo/bar/test-expected.{txt,png}`.

When you rebaseline a test, make sure your commit description explains why the
test is being re-baselined.

pwnallae101a5f2016-11-08 00:24:38494
Xianzhu Wang95d0bac32017-06-05 21:09:39495### Rebaselining flag-specific expectations
496
497Though we prefer the Rebaseline Tool to local rebaselining, the Rebaseline Tool
498doesn't support rebaselining flag-specific expectations.
499
500```bash
501cd src/third_party/WebKit
Kim Paulhamus61d60c32018-02-09 18:03:49502Tools/Scripts/run-webkit-tests --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
Xianzhu Wang95d0bac32017-06-05 21:09:39503```
504
505New baselines will be created in the flag-specific baselines directory, e.g.
506`LayoutTests/flag-specific/enable-flag/foo/bar/test-expected.{txt,png}`.
507
508Then you can commit the new baselines and upload the patch for review.
509
However, it's difficult for reviewers to review a patch containing only new
files. You can follow the steps below for easier review.

1. Copy existing baselines to the flag-specific baselines directory for the
   tests to be rebaselined:
   ```bash
   Tools/Scripts/run-webkit-tests --additional-driver-flag=--enable-flag --copy-baselines foo/bar/test.html
   ```
   Then add the newly created baseline files, commit and upload the patch.
   Note that the above command won't copy baselines for passing tests.

2. Rebaseline the test locally:
   ```bash
   Tools/Scripts/run-webkit-tests --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
   ```
   Commit the changes and upload the patch.

3. Request review of the CL and tell the reviewer to compare the patch sets that
   were uploaded in step 1 and step 2 to see the differences of the rebaselines.

## web-platform-tests

In addition to layout tests developed and run just by the Blink team, there is
also a shared test suite; see [web-platform-tests](./web_platform_tests.md).

## Known Issues

See
[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra)
for issues related to Blink tools, including the layout test runner.

* If QuickTime is not installed, the plugin tests
  `fast/dom/object-embed-plugin-scripting.html` and
  `plugins/embed-attributes-setting.html` are expected to fail.