HBBTV Test Spec v2023-1 v1.0
Version 1.0
March, 2023
Contents
1 General
1.1 Scope
1.2 Conformance and reference
2 Conditions of publication
2.1 Copyright
2.2 Disclaimer
2.3 Classification
2.4 Notice
3 References
3.1 Normative references
3.2 Informative references
4 Definitions and abbreviations
4.1 Definitions
4.2 Abbreviations
4.3 Conventions
5 Test system
5.1 Test Suite
5.2 Test Environment
5.2.1 Standard Test Equipment
5.2.2 Test Harness
5.2.3 Base Test Stream
5.2.4 Secondary Base Test Stream
6 Test Case specification and creation process
6.1 Test Case creation process
6.1.1 Details of the process for creating Test Cases
6.2 Test Case section generation and handling
6.3 Test Case Template
6.3.1 General Attributes
6.3.2 References
6.3.3 Preconditions
6.3.4 Adaptive Tests
6.3.5 Testing
6.3.6 Others
6.4 Test Case Result
6.4.1 Overview (informative)
6.4.2 Pass criteria
7 Test API and Playout Set definition
7.1 Introduction
7.1.1 JavaScript strings
7.2 APIs communicating with the Test Environment
7.2.1 Starting the test
7.2.2 Pre-defined state
7.2.3 Callbacks
7.2.4 JS-Function getPlayoutInformation()
7.2.5 JS-Function endTest()
7.2.6 JS-Function reportStepResult()
7.2.7 JS-Function reportMessage()
7.2.8 JS-Function waitForCommunicationCompleted()
7.2.9 JS-Function manualAction()
7.2.10 JS-Function getDUTOptions()
7.2.11 JS-Function endTestApp()
7.2.12 JS-Functions for multiple-client communication
7.2.13 Extended Settings
This document targets HbbTV Testing Centers and Quality Assurance Departments of HbbTV Licensees. It consists
of three parts:
1) A part that describes the HbbTV Test System and its components.
2) A part that describes the Test Case creation process.
3) A part that describes the HbbTV JavaScript APIs and playout definitions used for Test Case implementation.
The process that is used to define the HbbTV Test Specification, Test Cases and implementation of Test Tools
from the HbbTV Specification is depicted in Figure 1.
2) From the total set of Test Cases, the HbbTV Test Specification can be generated.
3) From the HbbTV Test Specification, the Test Tools are designed and implemented.
The grey highlighted elements in Figure 1 are provided by the HbbTV Testing Group.
2.2 Disclaimer
The information contained herein is believed to be accurate as of the date of publication; however, none of the
copyright holders will be liable for any damages, including indirect or consequential damages, arising from use of
the TEST SPECIFICATION for HbbTV or reliance on the accuracy of this document.
2.3 Classification
The information contained in this document is marked confidential and shall be treated as confidential according
to the provisions of the agreement through which the document has been obtained.
2.4 Notice
For any further explanation of the contents of this document, or in case of any perceived inconsistency or
ambiguity of interpretation, contact:
HbbTV Association
Contact details: Nguyen Thi Thanh Van (HbbTV Testing Group Chair) [email protected]
E-mail: [email protected]
[1] ETSI TS 102 796 “Hybrid Broadcast Broadband TV”; V1.4.1, including all approved
errata.
[2] VOID
[3] VOID
[4] ETSI TS 102 809 “Digital Video Broadcasting (DVB); Signalling and carriage of
interactive applications and services in hybrid broadcast / broadband environments",
V1.2.1
[5] VOID
[6] VOID
[7] VOID
[9] VOID
[10] VOID
[11] VOID
[12] VOID
[13] VOID
[14] CI Plus Forum, CI Plus Specification, “Content Security Extensions to the Common
Interface”, V1.3 (2011-01).
[15] VOID
[16] VOID
[17] VOID
[18] ETSI EN 300 468 “Specification for Service Information (SI) in DVB systems”, V1.10.1
[19] VOID
[20] VOID
[21] ISO/IEC 23009-1 (2012): “Information technology – Dynamic adaptive streaming over HTTP
(DASH) – Part 1: Media presentation description and segment formats”
[24] VOID
[28] W3C Recommendation (16 January 2014): “Cross-Origin Resource Sharing”. Available at:
http://www.w3.org/TR/2013/PR-cors-20131205/
[29] Open IPTV Forum Release 1 specification, “CSP Specification V1.2 Errata 1”, March
2013. Available from http://www.oipf.tv/specifications
[32] VOID
[34] ETSI TS 103 286-2 (V1.1.1): “Companion Screens and Streams; Part 2: Content
Identification and Media Synchronisation”
[39] Julian Aubourg et al. XMLHttpRequest. 6 December 2012. W3C Working Draft. URL:
http://www.w3.org/TR/2012/WD-XMLHttpRequest-20121206/
[41] IP-Delivered Broadcast Channels and Related Signalling of HbbTV Applications 2017-
04-07
[i.2] ETSI ES 202 130 “Human Factors (HF); User Interfaces; Character repertoires, orderings
and assignments to the 12-key telephone keypad (for European languages and other
languages used in Europe)”, V2.1.2
[i.3] ETSI TS 102 757 “Digital Video Broadcasting (DVB); Content Purchasing API”, V1.1.1
[i.6] I. Hickson. The WebSocket API. 20 September 2012. W3C Candidate Recommendation.
[i.9] W3C Candidate Recommendation (11 December 2008): “Web Content Accessibility
Guidelines (WCAG) 2.0”
Application data: set of files comprising an application, including HTML, JavaScript, CSS and non-streamed
multimedia files
Assertion: Testable statement derived from a conformance requirement that leads to a single test result
Broadband: An always-on bi-directional IP connection with sufficient bandwidth for streaming or downloading
A/V content
Broadcast: classical uni-directional MPEG-2 transport stream based broadcast such as DVB-T, DVB-S or
DVB-C
Broadcast-independent application: Interactive application not related to any broadcast channel or other
broadcast data
Broadcast-related application: Interactive application associated with a broadcast television, radio or data
channel, or content within such a channel
Broadcast-related autostart application: A broadcast-related application intended to be offered to the end user
immediately after changing to the channel or after it is newly signalled on the current channel. These
applications are often referred to as “red button” applications in the industry, regardless of how they are actually
started by the end user
Conformance requirement: An unambiguous statement in the HbbTV specification, which mandates a specific
feature or behaviour of the terminal implementation
Digital teletext application: A broadcast-related application which is intended to replace classical analogue
teletext services
HbbTV application: An application conformant to the present document that is intended to be presented on a
terminal conformant with the present document
HbbTV test case XML description: The HbbTV XML document to store for a single test case information
such as test assertion, test procedure, specification references and history. There exists a defined XSD schema
for this document format.
HbbTV Technical Specification: Refers to [1] with the changes as detailed in [20]
Hybrid terminal: A terminal supporting delivery of A/V content both via broadband and via broadcast
Linear A/V content: Broadcast A/V content intended to be viewed in real time by the user
Non-linear A/V content: A/V content that does not have to be consumed linearly from beginning to end,
for example, A/V content streamed on demand
Persistent download1: The non-real time downloading of an entire content item to the terminal for later
playback
Playout: Refers to the modulation and playback of broadcast transport streams over an RF output.
Test Assertion: A high level description of the test purpose, consisting of a testable statement derived from a
conformance requirement that leads to a single test result.
1 Persistent download and streaming are different even where both use the same protocol - HTTP. See clause 10.2.3.2 of the HbbTV
specifications, ref [1]
Test Case: The complete set of documents and assets (assertion, procedure, preconditions, pass criteria and test
material) required to verify the derived conformance requirement.
NOTE 1: This definition does not include any test infrastructure (e.g. web server, DVB-Playout…) required
to execute the test case.
NOTE 2: For the avoidance of doubt, Test Material must be implemented in a way that it produces
deterministic and comparable test results to be stored in the final Test Report of this Test Material.
Test Material must adhere to the HbbTV Test Specification as defined by the Testing Group.
Test framework/Test tool/Test harness: The mechanism (automated or manually operated) by which the test
cases are executed and results are gathered. This might consist of DVB-Playout, IR blaster, Database- and
Webservers.
Test material: All the documents (e.g. HTML, JavaScript, CSS) and additional files (DVB-TS, VoD files,
static images, XMLs) needed to execute the test case.
Test Procedure: A high level textual description of the necessary steps (including their expected behaviour) to
follow in order to verify the test assertion.
Terminal specific applications: Applications provided by the terminal manufacturer, for example device
navigation, set-up or Internet TV portal.
Test Report: A single file containing the results of executing one or more Test Suites in the format defined in
section 9. This is equivalent to the “Test Report” referred to in the HbbTV Test Suite License Agreement and
HbbTV Full Logo License Agreement.
Test Repository: Online storage container for HbbTV test assertions and Test Cases. Access is governed by the
TRAA. Link: https://www.hbbtv.org/pages/about_hbbtv/hbbtv_test_repository.php
Test Suite: This means the collection of test cases developed against the HbbTV Specifications.
NOTE 1: The Testing Group specifies and approves which test cases are parts of the HbbTV Test Suite.
NOTE 2: Only Approved HbbTV Test Material shall be a part of the HbbTV Test Suite used for the
Authorized Purpose.
NOTE 3: Upon approval by HbbTV, new HbbTV Test Material may be added from time to time at regular
intervals.
NOTE 4: The HbbTV Test Suite does not include any physical equipment such as a Stream
Player/Modulator or IP Server.
Test Harness: The Test Harness is a system which orchestrates the selection and execution of Test Cases on the
DUT, and the gathering of the Test Case Results for the Test Report.
Standard Test Equipment: The Standard Test Equipment is the collection of all “off the shelf” tools, which
are needed to store, serve, generate, and play out the Test Cases on the DUT.
4.2 Abbreviations
For the purposes of the present document, the following abbreviations apply:
4.3 Conventions
MUST This, or the terms "REQUIRED" or "SHALL", shall mean that the definition is an absolute
requirement of the specification.
MUST NOT This phrase, or the phrase "SHALL NOT", shall mean that the definition is an absolute
prohibition of the specification.
SHOULD This word, or the adjective "RECOMMENDED", mean that there may be valid reasons in
particular circumstances to ignore a particular item, but the full implications must be
understood and carefully weighed before choosing a different course.
SHOULD NOT This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid
reasons in particular circumstances when the particular behaviour is acceptable or even
useful, but the full implications should be understood and the case carefully weighed
before implementing any behaviour described with this label.
MAY This word, or the adjective "OPTIONAL", mean that an item is truly optional. One vendor
may choose to include the item because a particular marketplace requires it or because the
vendor feels that it enhances the product while another vendor may omit the same item. An
implementation which does not include a particular option MUST be prepared to
interoperate with another implementation which does include the option, though perhaps
with reduced functionality. In the same manner an implementation which does include a
particular option MUST be prepared to interoperate with another implementation which
does not include the option (except, of course, for the feature the option provides.)
1) A suite of Approved Test Cases, obtained from and maintained by the HbbTV Association,
2) A Test Environment used to execute the test cases and record the results of testing in a standard Test
Report format. The Environment consists of:
b. A Test Harness (TH), which acts as test operation coordinator and manages Test Cases and
Execution Results
c. Standard Test Equipment (STE) required for all Test Environments, as described in 5.2.1
d. Optional Test Equipment (OTE) which may be used to simplify or allow automation of some
Test System tasks, perhaps through proprietary terminal interfaces.
e. RF and IP connections between the DUT and other elements of the Test Environment
f. TLS test server used to provide a server-side environment for execution of some TLS related
test cases. This is provided on behalf of the HbbTV Association and does not need to be
supplied separately.
[Figure: Test Environment overview, showing the Internet, the network interface, the Test Harness, DVB Playout, a
DNS Server and a Web Server, together with the stored artefacts: test case XML files, test implementations, assertion
text, test report XML files and test evidence files (screen shots, video/audio recordings).]
The Test Cases in a Test Suite can have one of two statuses:
• Approved Test Cases are approved by the HbbTV Testing Group and form part of the requirements for
HbbTV compliance
• Additional Test Cases are not approved by the HbbTV Testing Group for compliance. They are
distributed because they may be of use to developers. Test Cases may be changed from Additional to
Approved if they are valid for HbbTV requirements and have been successfully reviewed for approval
by the Testing Group.
• Standard Test Equipment: The Standard Test Equipment is the collection of all “off the shelf” tools,
which are needed to store, serve, generate, and play out the Test Cases on the DUT. This includes web
servers, and the DVB Playout System. The components of the Standard Test Equipment are not
included in the Test Suite delivery, but may be provided by commercially available test tools.
• Test Harness: The Test Harness is a system which orchestrates the selection and execution of Test
Cases on the DUT, and the gathering of the Test Case Results for the Test Report.
1) Uses the information in the HbbTV test case XML description and in the Test Material to initiate all
necessary steps prior to execution (e.g. final test stream generation),
2) Initiates the execution of the Test Case on the DUT, causing changes in the environment during
execution based on timing points defined in the Test Material and in response to API calls from the
Test Case,
3) Collects the test results from the Test Case running on the DUT.
The implementation of these components is a proprietary decision. For example, each of these components may
or may not be physically installed on a single piece of hardware for each DUT, they may or may not be installed
on a commonly accessible machine, or they may be virtualized. They may also be already integrated into
commercially available tools.
NOTE 1: None of the components defined in this section are delivered with the HbbTV Test Suite.
NOTE 2: There may be more than one DVB Playout device and DUT attached to the Test Harness to allow
concurrent testing of multiple devices. There may also be one or more devices (such as a WLAN
access point, an Ethernet switch or an IP router) connected between the web server and the DUT.
These are implementation decisions which are beyond the scope of this document.
NOTE: There may be test cases in the future that will require the ability to control this operation.
http://hbbtv1.test/_TESTSUITE/TESTS/00000010/index.html
The server shall not use port 81; connections to port 81 shall be refused. This port is used in test cases requiring
a refused TCP/IP connection error.
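As an illustration, a test application might exercise this refused-connection behaviour roughly as follows. This is a minimal sketch only: the HbbTVTestAPI constructor name, the init() step and the reportStepResult() argument order follow section 7 and should be treated as assumptions here rather than a normative restatement.

// Sketch: expect the connection to port 81 to be refused (assumed Test API usage, see section 7).
var testApi = new HbbTVTestAPI();
testApi.init();
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://hbbtv1.test:81/anything", true);
xhr.onerror = function () {
    // A refused TCP/IP connection surfaces as a network error: the expected outcome.
    testApi.reportStepResult(1, true, "connection to port 81 was refused");
    testApi.endTest();
};
xhr.onload = function () {
    // Any response at all means the Test Harness is mis-configured.
    testApi.reportStepResult(1, false, "port 81 unexpectedly accepted the connection");
    testApi.endTest();
};
xhr.send();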
The network connection between the web server and the DUT is controlled by the Test Harness as defined in the
playout sets of the Test Cases executed, either directly or via instructions delivered to the operator, to allow
disconnection of the network from the DUT, see section 7 for more details.
5.2.1.1.1 PHP
The server shall include a valid installation of the PHP Hypertext Pre-processor to allow dynamic generation of
content. Any version of PHP 5.2 or newer may be installed on the server; no special modules are required. Only
files with the extension .php shall be processed by the PHP Hypertext Pre-processor.
The Test Harness shall set the PHP Hypertext Pre-processor’s “default_mimetype” option to
“application/vnd.hbbtv.xhtml+xml; charset=UTF-8”.
NOTE: PHP scripts can control the MIME type used for their output, by using a command such as
“header('Content-type: video/mp4');”. If a PHP script does not specify a MIME type for its output,
then PHP uses the MIME type specified by the “default_mimetype” option.
The Test Harness shall set the PHP Hypertext Pre-processor’s “short_open_tag” option to “Off”.
NOTE: This is so that the test cases can use PHP in combination with XML.
The Test Harness shall set the PHP Hypertext Pre-processor’s “allow_url_fopen” option to “On”.
NOTE: This is so that the test cases can use PHP to fetch JSON log files from the secure TLS server
(specified in section 5.2.1.14 TLS test server) via file_get_contents() call.
NOTE: These PHP options are usually set in php.ini, but there are often web-server-specific ways to set
them too, such as php_admin_value in Apache’s mod_php.
Test authors shall ensure that content is served with cache control headers when needed. To set headers, a test
will need to use a PHP script to serve the content.
EXAMPLE: when serving a dynamic MPD file for DASH streaming, the PHP script should set the
"Cache-Control: no-cache" header.
The $_SERVER array shall contain the variables required by the CGI/1.1 Specification [40].
NOTE: That does not necessarily mean that PHP has to be launched via CGI. The mechanism used to
run PHP code is a harness implementation detail.
For each request, PHP shall be allowed to use at least 160MB of memory. Normally this means that the PHP
memory_limit option shall be 160M or greater.
Additionally3, the harness web server shall serve files with any of the following filenames with the “image/png”
Content-Type:
• dvb.icon.0001
• dvb.icon.0002
• dvb.icon.0004
• dvb.icon.0008
• dvb.icon.0010
• dvb.icon.0020
• dvb.icon.0040
• dvb.icon.0080
• dvb.icon.0100
• dvb.icon.0200
• dvb.icon.0400
• dvb.icon.0800
NOTE: These are application icons as defined in TS 102 809 [4] v1.3.1 section 5.2.8. The filenames are
mandated by that specification. The OpApps spec notes that the bilateral agreement may require
icons.
2 Only required for harnesses that support Operator Application testing as defined in Appendix D.
3 Only required for harnesses that support Operator Application testing as defined in Appendix D.
If the Path section [22] of HTTP requests matches the POSIX basic regular expression 4 [i.5]:
^/_TESTSUITE/TESTS/\([a-z0-9][a-z0-9\-]*\(\.[a-z0-9][a-z0-9\-]*\)\{1,\}_[A-Z0-9][A-Z0-9_\-]*\)/\(.*\.fmp4\)/\(.*\)$
then the first capture group matches the ID of a test case in the test suite, the third capture group:
.*\.fmp4
matches the path of a file in that test case’s directory (referred to as the container file), and the fourth capture
group:
.*
(referred to as the segment path) shall be used to extract a subsection of the container file, as follows:
1) The web server shall look in the directory holding the container file for a file named ‘seglist.xml’. If
this file is not found the server shall return an HTTP response to the client with status 404. This file, if
it exists, shall conform to the XSD in SCHEMAS/seglist.xsd. The contents of the seglist.xml file shall
not vary during execution of a test. The server shall parse the seglist.xml file, and locate the ‘file’
element with a ‘ref’ attribute matching the segment path and a ‘video’ element matching the container
file. If no such element is found the server shall return an HTTP response to the client with status 404.
The server shall then read the number of bytes given by the ‘size’ element from the container file,
starting at the offset given by the ‘start’ element (where an offset of 0 is the first byte in the file.) The
HTTP response containing the data shall have a status of 200, and shall have the ‘content-type’ header
[8] set to the value specified by the ‘mimetype’ element in the XML, or ‘video/mp4’ if that element is
not present [23].
2) If an error occurs reading the file, the server shall return an HTTP response with status 500. If there is
not enough data in the file to service the request, the server shall return an HTTP response with status
404.
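The behaviour described above can be sketched for a Node.js based harness web server as follows. This is illustrative only (the harness web server may be implemented with any technology), and lookupSegment() is a hypothetical helper standing in for harness-specific parsing of seglist.xml.

// Illustrative sketch of the segment-extraction behaviour (not part of the Test Suite).
const fs = require("fs");
const path = require("path");

// The section's regular expression, rewritten in JavaScript (extended) regex syntax.
const SEGMENT_RE = /^\/_TESTSUITE\/TESTS\/([a-z0-9][a-z0-9\-]*(\.[a-z0-9][a-z0-9\-]*)+_[A-Z0-9][A-Z0-9_\-]*)\/(.*\.fmp4)\/(.*)$/;

// Hypothetical helper: parse seglist.xml (schema SCHEMAS/seglist.xsd) and return
// { start, size, mimetype } for the entry matching the container file and segment
// path, or null if no matching entry exists. Its implementation is harness-specific.
function lookupSegment(seglistFile, containerFile, segmentPath) { return null; }

function handleSegmentRequest(urlPath, testsRoot, res) {
  const m = SEGMENT_RE.exec(urlPath);
  if (!m) return false;                                   // not a segment request
  const testId = m[1], containerRel = m[3], segmentPath = m[4];
  const containerFile = path.join(testsRoot, testId, containerRel);
  const seglistFile = path.join(path.dirname(containerFile), "seglist.xml");
  if (!fs.existsSync(seglistFile)) { res.writeHead(404); res.end(); return true; }
  const seg = lookupSegment(seglistFile, containerFile, segmentPath);
  if (!seg) { res.writeHead(404); res.end(); return true; }
  try {
    const buf = Buffer.alloc(seg.size);
    const fd = fs.openSync(containerFile, "r");
    const bytesRead = fs.readSync(fd, buf, 0, seg.size, seg.start);
    fs.closeSync(fd);
    if (bytesRead < seg.size) { res.writeHead(404); res.end(); return true; }  // not enough data
    res.writeHead(200, { "Content-Type": seg.mimetype || "video/mp4" });
    res.end(buf);
  } catch (e) {
    res.writeHead(500); res.end();                        // error while reading the file
  }
  return true;
}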
5.2.1.1.4 HTTPS
The web server shall also serve HTTP over TLS (HTTPS). The domain name used for this can be chosen by the
tester. The tester must acquire a TLS certificate and corresponding private key, and the certificate must be
trusted by the terminal. For example, the tester might buy an Internet domain name, and acquire a
corresponding TLS certificate from a Certificate Authority on the HbbTV Root Certificate list.
The HTTPS server shall support HTTP and TLS versions as follows:
4 The same expression in POSIX extended regular expression syntax: ^/_TESTSUITE/TESTS/([a-z0-9][a-z0-9\-]*(\.[a-z0-9][a-z0-9\-]*)+_[A-Z0-9][A-Z0-9_\-]*)/(.*\.fmp4)/(.*)$
5 The section of this test specification dealing with OpApp HTTPS client certificates was written before TLS 1.3 or HTTP 2 were
standardized, and is not compatible with those protocols. A future version of this test specification may correct that.
Test cases that use the harness HTTPS server must indicate that by either:
• including this tag in their implementation.xml file, directly inside the <testImplementation> element
(NOT in the <opAppDiscovery> element):
<httpsServerConfig/>
• OR, for OpApp tests only, including either the <opAppDiscovery> or <runOpAppPage> tags
NOTE: If the test does not use any of those tags, then:
• The test harness does not need to start the HTTPS server, and
• If the harness starts the HTTPS server anyway, then that HTTPS server does not need to be
correctly configured.
• For test cases that indicate the use of the harness HTTPS server in their implementation.xml file, the
domain name of the Standard Test Equipment HTTPS server shall be resolved to the IP address of that
server.
• dns-failure.hbbtv1.test shall not be resolved; the DNS server shall reply with an NXDOMAIN error.
• The external (on Internet) hbbtvtest.org domain and all its sub-domains must be reachable by the DUT and the
PHP interpreter via both HTTP (port 80) and HTTPS (port 443) protocols
• The DUT must be able to access the hbbtvtest.org domain directly via its IP address, using the HTTP protocol
over TLS
• if a lookup succeeds, the DNS server response shall indicate a 30 second TTL.
• if a lookup fails with NXDOMAIN, it shall indicate a 30 second negative-cache TTL.
NOTE: This functionality may be integrated into a Test Harness; however this is outside of the scope of this
document.
DVB Playout shall implement TDT/TOT adaptation such that the first TDT/TOT in the base stream is used
unaltered the first time that the base stream is played out, and all subsequent TDT/TOTs (including in any
repetitions of the base stream) shall be replaced such that the encoded time advances monotonically. For non-
integrated playout solutions this means that a playout mechanism which supports TDT/TOT adaptation must be
used, and this feature must be activated.
The DVB multiplexes are typically transmitted using multiple modulators (one for each multiplex) and an RF
mixer, but may also be transmitted using a single advanced modulator that supports generating multiple signals
at once on different RF channels. This specification uses the phrase “modulator channel” to refer to either a
modulator outputting a single multiplex, or one channel on an advanced modulator capable of generating
multiple signals at once.
When the Test Harness has two modulator channels configured and the Test Harness attempts to execute a test
case that specifies two DVB multiplexes, then:
- the first multiplex shall be delivered by the first configured modulator channel;
- the second multiplex shall be delivered by the second configured modulator channel.
When the Test Harness has two modulator channels configured and the Test Harness attempts to execute a test
case that specifies a single DVB multiplex, then:
- the multiplex shall be delivered using the first modulator channel configured in the Test Harness;
- the second modulator channel shall completely stop outputting a signal.
Any two supported modulator settings may be configured in the Test Harness – there is no restriction on, for
example, configuring a DVB-S modulator and a DVB-T modulator.
The mechanism for this image capture is proprietary and outside of the scope of this specification. It may be
automated by the Test Harness, or may be implemented manually by testers using image capture technology,
webcam, digital camera or by some other capture mechanism. The format of the image stored is defined in
9.1.3.4.
This ECMAScript environment shall also provide the XMLHttpRequest [39] API, with the following
modifications:
NOTE: The referenced version of the XMLHttpRequest specification, and the first modification listed above, are
the same as the APIs defined in the OIPF Web Standards TV Profile [38] section A.3.1.
The file containing the application or the harness-based test shall be defined using the harnessTestCode element
of the implementation.xml file. If a server side test is defined then the transport stream may include an
application, but the application shall not instantiate the test API object or make calls to the test API.
Harness-based test applications shall be defined as an ECMAScript application, conforming to the ECMAScript
profile defined above. The application shall be contained in a text file encoded in UTF-8. The application
environment shall include the test API object as a built-in constructor, which may be instantiated by applications
in order to communicate with the test harness. All API calls shall behave as defined for running in the DAE.
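As a concrete illustration, a minimal harness-based test application could look like the sketch below. The HbbTVTestAPI constructor name and the reportStepResult()/endTest() call signatures follow section 7 and are assumed here rather than restated normatively.

// Minimal harness-based test sketch; stored as a UTF-8 text file and referenced
// by the harnessTestCode element of implementation.xml (assumed Test API usage).
var testApi = new HbbTVTestAPI();
testApi.init();
try {
    // Perform whatever checks the test procedure requires; a trivial one is shown here.
    var ok = (typeof JSON.parse === "function");
    testApi.reportStepResult(1, ok, "JSON.parse is available in the harness ECMAScript environment");
} catch (e) {
    testApi.reportStepResult(1, false, "unexpected exception: " + e);
}
// End the test; the harness derives the overall result from the reported step results.
testApi.endTest();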
When the test harness executes a test with the harnessTestCode element defined, it shall perform the following
steps:
The Test Harness may terminate the test at any time without notifying the test application.
NOTE: This section focuses on additional functions required of the test CSPG-CI+ module over a normal
CSPG-CI+ module.
The CSPG-CI+ module will have to operate with test certificates (not production certificates) because it
implements custom test behaviour. This implies that DUTs will need to be configured with test certificates also,
during testing.
Several CI+-related tests require a mechanism for the test harness to:
• configure CAM behaviour (for example, put it into a mode where it will respond in a certain way to
future DUT/CAM interactions),
• trigger the CSPG-CI+ CAM to perform an action now (for example, send a message informing the
DUT of a parental rating change event), and
• indicate to the test harness that, according to its configuration, the host has not behaved as
expected/required and hence the test has failed.
1) Do/do not perform CSPG-CI+ discovery on next CAM insertion (i.e. switch between CSPG-CI+ and
standard CI+ modes) (accept session establishment to the CI+ SAS resource and OIPF private
application)
3) Assert that certain DRM messages were indeed received (in a specific order) by the CAM (since some
configurable recent time/event)
6) Assert content is/is not currently being received from the host for descrambling
1) Correct implementation of CI plus [14] and 4.2.3 of [29], except where behaviour needs to be modified
to implement other functionality described in this document.
NOTE 1: These requirements apply to the combination of test harness implementation and CICAM, this
document does not specify which component within the harness implements the requirements in
this document.
NOTE 2: The CICAM may be equipped with either development or production certificates.
5.2.1.7.3 CA System
The CICAM shall support descrambling of content scrambled using DVB CSA when the CA_system_ID signalled
in the CAT has a value of 4096, 4097 or 4098.
NOTE: The DVB CSA specification is only available to licensees. Information on licensing it is available
at [i.7]. Non-licensees may find the Wikipedia page [i.8] helpful.
To descramble a selected service, the CAM shall parse the CA_descriptor from the CAPMT and parse the PID
identified by the CA_PID. These PIDs identify the ECMs for the selected service. The format of the ECM
(excluding transport stream packet and section headers and stuffing bytes) is shown in the table below:
control_word_indicator – 0x80 indicates that encrypted_control_word is the ‘odd’ key, 0x81 indicates
that encrypted_control_word is the ‘even’ key
encrypted_control_word – value to be used as either the odd or even control word, ‘encrypted’ by
being XORed with the value 0xAB CD EF 01 02 03 04 05.
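In other words, the ‘encryption’ is a plain byte-wise XOR with a fixed 8-byte value; a test CAM implementation could recover the clear control word as in the following illustrative sketch (not a normative algorithm description beyond the XOR itself).

// Recover the clear 8-byte control word by XORing with 0xAB CD EF 01 02 03 04 05.
var CW_MASK = [0xAB, 0xCD, 0xEF, 0x01, 0x02, 0x03, 0x04, 0x05];
function decryptControlWord(encryptedControlWord) {
    // encryptedControlWord: array of 8 bytes taken from the ECM
    return encryptedControlWord.map(function (b, i) { return b ^ CW_MASK[i]; });
}
// Example: an all-zero "encrypted" control word decrypts to the mask value itself.
// decryptControlWord([0,0,0,0,0,0,0,0]) -> [0xAB,0xCD,0xEF,0x01,0x02,0x03,0x04,0x05]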
1) CI+ CAM with OIPF application conforming to 4.2.3 of [29] profiled as in 11.4.1 [1]
A harness may do this either using multiple CAMs or by using a single reconfigurable CAM (or by any other
suitable mechanism.)
The test’s requirements (if any) for use of a CAM shall be declared using an extension to the existing
implementation XML syntax (see below).
If the XML element is not present then the test is assumed not to require the use of a CAM and test cases shall
not use the functionality described in this section.
• cspgcip (default)
• cip
• ci
• none
• cip – The CAM shall respond to SAS_connect_rqst APDUs specifying the OIPF application ID
(4.2.3.4.1.1 of [29]) with session_status 0x01 (Connection denied - no associated vendor-specific Card
application found)
• ci – The CAM shall be configured such that whenever the CAM is powered up or inserted the CAM
behaves as a CI, rather than CI plus, device for the purposes of host shunning behaviour (§10 in [14].)
There are multiple mechanisms by which this may be achieved, including removal of the ‘ciplus’
compatibility indicator from the CIS information string, disabling the CA resource, etc.
• none – The test requires that no CAM is inserted in the terminal at the start of the test. This is distinct
from the case where the test is not concerned with the presence or absence of a CAM (in which case
the ‘CAM’ element should be omitted altogether.) See also note below.
<playoutSets>
<playoutSet id="1" definition="playoutset.xml"/>
</playoutSets>
<CAM type="cip" />
</testImplementation>
NOTE: If a test case requires a CAM to be inserted after the start of the test then the implementation
XML shall specify a required CAM configuration using this syntax, and the test shall then request
removal and eventual reinsertion of the CAM by calling the ‘manualAction’ JS-Function (or
equivalent) once test execution has commenced. (If the CAM type is ‘none’ and CAM insertion is
requested then the CAM configuration would be indeterminate.)
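For example, a test relying on this rule might prompt the tester as in the sketch below. The manualAction() parameters (an instruction text plus a completion callback) are assumed for illustration; section 7.2.9 contains the normative definition.

// Sketch: ask the tester to remove and re-insert the CAM during test execution
// (assumed manualAction() parameters; see section 7.2.9 for the normative form).
var testApi = new HbbTVTestAPI();
testApi.init();
testApi.manualAction("Remove the CAM, wait 5 seconds, then re-insert it", function () {
    // Continue once the tester confirms the action has been performed.
    testApi.reportStepResult(2, true, "CAM was removed and re-inserted");
    testApi.endTest();
});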
A ‘companionScreen’ element is defined, with one mandatory ‘type’ attribute and an optional
‘initiallyConnected’ attribute.
The ‘type’ attribute indicates the type of the Companion Screen required to run the test. It shall have one of the
following values:
• any – a Companion Screen compatible with the DUT is connected to the test network. It does not
matter if the Companion Screen is implemented on iOS, Android, or some other OS,
• iOS - an iOS Companion Screen compatible with the DUT is connected to the test network, or
• Android - an Android Companion Screen compatible with the DUT is connected to the test network.
The "initiallyConnected" attribute indicates if the Companion Screen should be connected to the DUT at the
start of the test. It shall have one of the following values:
• true (default) - the Companion Screen shall be connected to the DUT at the start of the test, or
• false - the Companion Screen shall not be connected to the DUT at the start of the test; instead the test
will instruct the tester when to connect the Companion Screen.
The launcher application is developed by the manufacturer and is part of the system under test. It is not possible
for it to be provided as part of the test harness or test environment because the interface between the launcher
application running on the companion screen and the device is proprietary to the device manufacturer, as
described in section 14.3 of the HbbTV Specification. Therefore it does not make sense to consider testing this
feature of an HbbTV device in isolation from the companion application(s) developed with it.
Note that if this feature is supported, then the TV/STB manufacturer should make the launcher application
available to all customers who have purchased the HbbTV TV or STB, so that they can use this application
launching feature. E.g. by making it available in the Apple App Store / Google Play app store, and by
documenting it in the TV/STB’s user manual. It’s not just for testing!
Those Companion Screen test cases that relate to the terminal launching an application on a companion screen
have a CS_APP_LAUNCH optional feature precondition. Where they launch an application, they launch a
specific widely-available application, or one usually pre-installed with the OS (e.g., Google Maps on Android
devices). The test application running on the terminal is provided in the test case and discovers the CS launcher
with HbbTVCSManager.discoverCSLaunchers() and invokes HbbTVCSManager.launchCSApp() to launch the
application.
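For illustration, such a terminal-side test application might drive the discovery and launch roughly as sketched below. The page is assumed to embed an object of type application/hbbtvCSManager, and the callback parameter shapes and payload value are illustrative assumptions; the HbbTV Specification [1] is normative for HbbTVCSManager.

// Sketch: discover companion screen launchers and launch a CS application
// (assumed Test API usage per section 7; assumed callback shapes and payload).
var testApi = new HbbTVTestAPI();
testApi.init();
var csManager = document.getElementById("csmanager");   // <object type="application/hbbtvCSManager">
var payload = "https://maps.google.com/";                // test-case specific, purely indicative
csManager.discoverCSLaunchers(function (launchers) {
    if (!launchers || launchers.length === 0) {
        testApi.reportStepResult(1, false, "no CS launcher discovered");
        testApi.endTest();
        return;
    }
    testApi.reportStepResult(1, true, "CS launcher discovered");
    // Ask the first discovered launcher to launch the application.
    csManager.launchCSApp(launchers[0].enum_id, payload, function (id, launched) {
        testApi.reportStepResult(2, !!launched, "launchCSApp completion reported: " + launched);
        testApi.endTest();
    });
});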
If the DUT does not have an integrated display, then the tester should select a suitable display, following any
instructions provided by the DUT manufacturer. (E.g. if the DUT’s instructions say that, for optimum
performance, it should be used with a HDMI 2.0 compliant display, then the tester should ensure that they
choose a HDMI 2.0 compliant display. Or if the DUT’s instructions include a list of tested TV models then the
tester should choose a TV from that list).
The Test Harness shall include sensors to detect and time light emissions from two different places on the
display, and to detect and time audio emissions. These sensors shall be synchronised together and each sensor
shall have an accuracy of +/-1ms.
NOTE: There are several ways the test harness might accomplish this, including:
▪ a high-speed camera set to 1000Hz and pointed at the TV with a genlocked audio recorder,
and then manual or automatic analysis of the recordings
▪ a person looking at a storage oscilloscope with light sensors and audio inputs connected.
1) Confirm that a series of tones and flashes across a 15 second period are synchronised (within a
tolerance specified by the test case, typically 10ms or 20ms). In this case, it is acceptable to check
exact synchronisation (e.g. using an oscilloscope) for only some of those tones and flashes, including at
least one near the start of the 15sec period and one near the end of that 15sec period. The rest of the
tones and flashes can be checked by an unaided human. (Rationale: if the first and last flashes are
synchronized, then the intermediate flashes are likely to either be synchronised or to be wrong by
0.5sec or more).
2) Confirm that the difference between the start times of 2 flashes is a given number of milliseconds,
within a tolerance specified by the test case. This is used where the test case cannot precisely
synchronize the flashes but it can precisely measure what the offset between the flashes should be.
▪ CSS-WC - Wall Clock. This is a simple UDP-based protocol that allows the Companion
Screen to read a clock that ticks at “wall clock” rate. In so doing, the client (the Companion
Screen) can measure and compensate for the network latency. It is a bit like NTP, but much
simpler.
The Test Harness shall include clients for the CSS-WC and CSS-TS services [34]. These clients shall be linked
together and linked into the light and audio sensors described in the previous section, so that the DUT’s
emission of light and audio can be timed against the timeline being reported by the DUT.
NOTE 3: There are several ways the test harness might accomplish this, including:
▪ Having a dedicated device with light sensors and audio inputs that synchronises (in some
proprietary way) to a CSS-WC and CSS-TS client on a PC. (Note: BBC R&D have an
example of this).
▪ Having a dedicated device with light sensors and audio inputs that also implements the
CSS-WC client, and communicates with a CSS-TS client on a PC
▪ Having a dedicated device with light sensors and audio inputs that also implements the
CSS-WC and CSS-TS clients
▪ Having software implementations of CSS-WC and CSS-TS on a PC, which flashes the PC
screen in a way that is synchronised to the timeline. That flashing is detected by an extra
light sensor, which is connected to a storage oscilloscope. The light sensors and audio
inputs from the DUT are connected to the same oscilloscope.
NOTE 4: Since the CSS-CII service is based on Websockets and is not timing-critical, it can be tested by
having a test application connect to it via Websockets using the API defined in section 7.7. There
is no need for explicit harness support for testing CSS-CII. The CSS-TS service is also based on
Websockets, and is tested using a combination of Websocket-based testing and the Test Harness’s
client described above.
NOTE 2: The Test Harness includes a partial implementation of the HbbTV inter-device media
synchronisation master server. The part that is needed is specified in the normative parts of this
section and section 7.8.4.
The Test Harness shall include servers for the CSS-WC, CSS-TS and CSS-CII protocols [34], with
configurability of those servers provided via the Test API function startFakeSyncMaster(). The reported
timeline will just advance at a fixed rate.
The Test Harness will also provide a server accessible via a Websocket connection that can be used to retrieve
the current timeline time, at the point the request is received from the DUT.
NOTE 3: This uses a Websocket so that the test can ensure the slow set-up of the TCP/IP connection is
done in advance. It is expected that once the Websocket connection is set up, sending a
Websocket message from the DUT to the Test Harness will be quick. The reply may be slower
due to the behaviour of the DUT’s task scheduler, or because the DUT happens to be busy
processing another JavaScript event at the time the reply is received, but the speed of the reply
doesn’t affect the accuracy of the test.
For the tests currently implemented, there is no reason for this to be linked to the light and audio sensors
described previously.
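A test application could query that Websocket service along the lines of the sketch below. The endpoint URL and the request/reply message format are not defined by this section and are purely illustrative assumptions; in practice the harness-specific details would be obtained via the Test API.

// Sketch: read the current timeline time from the harness over a Websocket
// (hypothetical endpoint URL and message format; assumed Test API usage).
var testApi = new HbbTVTestAPI();
testApi.init();
var ws = new WebSocket("ws://harness.example:9000/timeline");   // hypothetical endpoint
ws.onopen = function () {
    // The slow TCP/Websocket set-up has already been paid for at this point.
    ws.send("getCurrentTime");                                   // hypothetical request
};
ws.onmessage = function (event) {
    var timelineTime = parseFloat(event.data);                   // hypothetical reply format
    testApi.reportMessage("timeline time reported by harness: " + timelineTime);
    ws.close();
};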
The positions of these detection areas are measured on the display that emits light, i.e. after all scaling has been
applied. The positions are shown in the following diagram, which shows the whole TV screen (i.e. the entire
panel excluding bezel):
[Diagram: the whole TV screen, with detection areas 1 and 2 marked on a dashed grid.]
(The dashed grid lines are spaced equally, to make it possible to read the position off this diagram).
In numbers, if (0, 0) is the top left of the display and (1, 1) the bottom right, then the co-ordinates of the two
defined detection areas are:
Test cases using the light sensors must ensure that these detection areas are usually fully black, and a "flash"
turns the entire detection area white, then black again. The detection area should be white for approximately
120ms. For measuring synchronization using the analyzeAvSync and analyzeAvNetSync APIs, the time of the
"flash" is the mid-point of the period the frame is white. For measuring synchronization with the
analyzeStartVideoGraphicsSync API, the time of the "flash" is the instant it changes from black to white.
The Test Harness may choose to monitor any point or area inside the detection area. It does not have to check
the entire detection area.
Test cases may display anything they like outside of these detection areas. (E.g.: countdown timer; debug
messages; etc).
The Test Harness shall ensure that the display outside the detection area, and ambient or external light sources,
do not interfere with its sensors. (For example, by placing the sensors in the middle of the detection area and
ensuring the sensitivity is set correctly, or shrouding them with black tape).
For health and safety reasons, test cases using this feature with a single light sensor should not have more than 3
flashes per second, and test cases using this feature with both light sensors at once should not have more than 1
flash per second. Note that some of the APIs defined here impose lower limits. (Rationale: it is desirable to not
exceed the photosensitive epilepsy threshold of 3 flashes per second from [i.9]. It is predictable that test cases
using two light sensors will sometimes fail due to the content not being synchronised, and in that case the tester
will see the two flashes separately, hence the lower limit for such tests. In practice, these limits are not expected
to cause any problems for test authors, and the sequence suggested in section 5.2.1.13 is well below these
limits).
Then assume the largest overscan possible (20% horizontal overscan and 10% vertical overscan), and calculate
the detection area positions in that case:
Then take the min/max of those two tables to get a flash area which is guaranteed to cover the detection area
regardless of the amount of overscan:
The audio must be silent, except for short -6dB Full Scale bursts of tone between 1 and 5 kHz. For measuring
synchronization, the time of the tone burst is the mid-point of the period the tone burst is audible.
The duration of each flash and tone should be short, but not so short that it is missed. It is recommended that, for
50Hz terminals, the flash duration is 120ms. This is chosen to be 3 frames of 25fps progressive video, or 3 full
frames (6 fields) of interlaced video. It is 6 frames of 50fps progressive video. Choosing a constant duration,
instead of a constant number of frames, allows tests to mix 50p and 25p video, and also means that the same
audio track can be shared by test cases with different video framerates.
For some tests, there will be a single flash and a single, synchronised tone burst.
For other tests, there will be a repeating pattern of flashes and synchronised tone bursts. When choosing the time
between consecutive flashes or consecutive tone bursts, the test case author must consider the analysis criteria
documented for the analyzeAvSync() or analyzeAvNetSync() API as appropriate.
For this pattern, the flashes and synchronised tone bursts should be 120ms in length and start at
the following times, where time T is the start of the pattern (which may not be the start of the
media):
The irregular intervals are chosen so that medium-sized offsets between audio and video can be
detected. Seeing 3 consecutive synchronised flashes/tones tells you that the audio and video are
synchronised, and that check is guaranteed to detect synchronisation errors of less than 11
seconds. It will detect any synchronisation error that is not a multiple of 11 seconds long, which
makes a “false pass” error very unlikely.
A separate server is used for test cases that require that the HbbTV application or some other resource be loaded
from a https URI, hosted on a server with a valid certificate.
The name server for hbbtvtest.org contains multiple NS records that are sub-domain specific. Requests related to
TLS client-server communication will target specific subdomains, and the name server will return an IPv4
address of the virtual server configured for verifying TLS client-server communication. For requests targeting
the subdomain ste.hbbtvtest.org (where ‘ste’ stands for standard test environment), the name server shall return an
IPv4 address of the general-purpose secure web server that can serve static resources over HTTPS. This
environment is illustrated in the image below.
● *.w.hbbtvtest.org - used for verifying parts of the TLS handshake, presents a valid wild-card certificate
● a.hbbtvtest.org - used for creating an exact match on the certificate
● *.host-mismatch.hbbtvtest.org - used for creating a host mismatch
● *.expired.hbbtvtest.org - used for presenting an expired certificate
● *.sha1.hbbtvtest.org - used for presenting a SHA1 certificate
● *.1024.hbbtvtest.org - used for presenting a certificate that has an RSA key with only 1024 bits
● access via IP - used for testing Subject Alternative Name
● http2.hbbtvtest.org - server that supports HTTP/2 but not HTTP/1.1
URLs targeting wildcard subdomain on this server shall be in the following format:
https://LogId-TestId.w.hbbtvtest.org.
The LogId part is used as the name of the file in which the communication log is stored as JSON (described in Annex
F). The TestId corresponds to the ID of the test case, and the server uses it to set up the proper communication context
(i.e. to determine which certificate to use or how to handle the TLS handshake).
All subdomains used for TLS client-server verification resolve to the same IPv4 address.
The communication log contains low-level details of the communication between server and client and, depending on
whether communication is established or rejected, it is delivered to the client in one of two ways:
1. If communication is established, the script delivers the information to the test page using a mechanism similar to
JSONP, where a script element’s ‘src’ attribute is dynamically set to a URL targeting the TLS test server via HTTPS.
After the script is successfully loaded and evaluated, the test script can use the information it carries to determine the
verdict.
2. If communication has failed (the script was not loaded), the test script uses a PHP proxy to get the contents of the
communication log from the remote server via HTTP, using the known LogId (generated by the test itself as a random
32-character sequence).
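Both delivery paths can be pictured with the following sketch. The TESTID placeholder, the PHP proxy URL and the way the loaded script exposes the log to the page are illustrative assumptions; Annex F defines the actual log format.

// Sketch of the two log-delivery paths described above.
function makeLogId() {
    // Random 32-character LogId generated by the test itself.
    var chars = "abcdefghijklmnopqrstuvwxyz0123456789", id = "";
    for (var i = 0; i < 32; i++) { id += chars.charAt(Math.floor(Math.random() * chars.length)); }
    return id;
}
var logId = makeLogId();
var script = document.createElement("script");
script.src = "https://" + logId + "-TESTID.w.hbbtvtest.org/";     // URL format from this section
script.onload = function () {
    // Case 1: the TLS handshake succeeded; the evaluated script is assumed to have
    // exposed the communication log to the page, which the test now inspects.
};
script.onerror = function () {
    // Case 2: the handshake failed; fetch the log via the PHP proxy over HTTP,
    // using the known logId (the proxy URL below is a hypothetical example).
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "getlog.php?logId=" + logId, true);
    xhr.onload = function () { /* evaluate xhr.responseText to determine the verdict */ };
    xhr.send();
};
document.head.appendChild(script);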
Communication logs older than a certain time (by default 60 seconds) are automatically deleted from the server.
The test case evaluates the provided information and determines the outcome of the test.
There are test cases which don’t require detailed information about the client-server handshake, where the test case
outcome can be determined simply by evaluating whether the script tag was successfully loaded or not.
A block diagram of the system illustrating both approaches is shown in the image below.
For test cases where an application has to be loaded from an https:URI, or any other resource fetched via
https:URI, this web server provides an environment similar to that of a regular Standard Test Equipment HTTP
web server, described in section 5.2.1.1. The domain used for testing purposes is ste.hbbtvtest.org.
The web server shall accept incoming HTTP requests on port 80 and HTTPS requests on port 443.
In this pattern <test ID> is a test case ID and <version number> is an integer version number that starts at 1 for
each test case. The web server shall be set up in a way that allows reaching files in those directories under the
directory “/_TESTSUITE”. For example, the “EME0010.html5” file of the test case org.hbbtv_EME0010 shall
be reachable via the following URLs:
http://ste.hbbtvtest.org/_TESTSUITE/TESTS/org.hbbtv_EME0010/on_ste_hbbtvtest_org/v1/EME0010.html5
https://ste.hbbtvtest.org/_TESTSUITE/TESTS/org.hbbtv_EME0010/on_ste_hbbtvtest_org/v1/EME0010.html5
The first time a Test Case that uses this mechanism is included in the HbbTV Test Suite, the <version number>
will be 1. When HbbTV has released a HbbTV Test Suite containing a file matching the pattern "TESTS/<test
ID>/on_ste_hbbtvtest_org/v<version number>/", that file will be included unmodified in all future HbbTV Test
Suite releases. (This means that the server will continue to work for people running older versions of the
HbbTV Test Suite). If HbbTV needs to make a change to such a file, it will be copied to a new "v<version
number>" directory using the next higher version number (i.e. 2 for the first time a change is needed, etc).
The server shall include a valid installation of the PHP Hypertext Pre-processor to allow dynamic generation of
content. Only files with the extension ‘.php’ shall be processed by the PHP Hypertext Pre-processor.
NOTE: The HTML5 file on the secure server shall not include the testsuite.js file due to unsecured mixed content;
the JS file may be blocked by the browser.
To make that kind of analysis feasible, the test environment must have the Network analysis tool as a part of the
Standard Test Equipment.
Tests communicate with the Network analysis tool using JS-Function analyzeNetworkLog (see 7.9.1).
• it has access to the network traffic between the DUT and the network interface (see Figure 1 in: 5 Test system)
• it is able to record network traffic into a human-readable log file on demand
Some examples of off-the-shelf network analysis tools are Wireshark [i.11] and Netcat [i.12].
The implementation of these components is a proprietary decision. For example, each of these components may
or may not be physically installed on a single piece of hardware for each DUT, they may or may not be installed
on a commonly accessible machine, or they may be virtualized. They may also be already integrated into
commercially available tools.
The Test Harness shall implement a mechanism for generation of broadcast transport streams specified by the
HbbTV Playout Set as defined in section 7.4.3 (Playout set definition). This mechanism may be offline
(pre-generation of transport streams) or real-time (multiplexing on-the-fly). This is an implementation decision,
and is outside the scope of this specification.
The Test Harness shall implement the Test API as defined in section 7 for communication with test applications.
The Test Harness shall store all reportStepResult, analyze[] (e.g. analyzeScreenExtended,
analyzeScreenPixel) and endTest calls received from the test application(s) during execution. The Test
Harness shall use these calls to determine the result of the test case according to the requirements set out in
section 6.4, and shall store all of this information in the result xml format defined in section 9.
• Where streams are required to be synchronised with the HTTP server time
The OpenCaster Glue Code is a python script released under the Creative Commons license CC BY-SA. It is
based on the free toolset OpenCaster from the company Avalpa (see references below). It reads the
information from the Test Case definition XML files of the Test Material and creates a transport stream which
can later be played out by the Test Harness and the playout equipment used.
OpenCaster is a free and open source MPEG2 transport stream data generator and packet manipulator developed
by Avalpa (www.avalpa.com). It can generate the different tables and convert the table section data to transport
streams, filter out PIDs from already multiplexed streams and create object carousels from folders etc.
OpenCaster Glue Code files are located in the Test Suite delivery at TOOLS/OpenCasterGlueCode.
The base test stream shall be present in the test suite at the path:
RES/BROADCAST/TS/generic-HbbTV-Teststream.ts.
Table 3 below shows the elementary stream allocations in the Transport Stream. Some PID allocations are
referenced in the PAT/PMT only – there is no content with these PIDs contained in the stream as distributed.
Table 3: Base test stream PID allocations
PID   Contents
0     PAT
16    NIT
17    SDT
18    EIT (contains both actual present/following and actual schedule EIT tables)
20    TDT/TOT
100   PMT for service 10
101   Video
102   Audio
200   PMT for service 11
201   PMT for service 11 (DSM-CC signalled)
205   AIT for service 11 (see note)
300   PMT for service 12
305   AIT for service 12 (see note)
400   PMT for service 13
405   AIT for service 13 (see note)
500   PMT for service 14
505   AIT for service 14 (see note)
NOTE: PID values marked as AIT are populated in the distributed stream and are referenced in the included PAT/PMT. These shall be used as the insertion points of additional data required by the test implementation (e.g. the AIT for the test case).
Any of the SI tables above which contain a version number shall use the version number of 1. Streams used by
the test harness or by test cases shall not use the version number of 1 unless the SI table is an exact match to the
SI table in the base stream. This will ensure that cached SI is not used and tests run as expected.
A test harness may play a stream in between test case runs to reset any cached SI from a test case. SI tables used
by this stream which differ from the base stream as defined here shall use a version number of 0. Test cases
shall therefore also avoid using a version number of 0 for any SI tables.
This Transport stream has original network ID 99, network ID 99, and transport stream ID 1.
The secondary base test stream shall be present in the test suite at the path
RES/BROADCAST/TS/generic-HbbTV-Teststream_b.ts.
Table 4 below shows the elementary stream allocations in the Transport Stream. Some PID allocations are
referenced in the PAT/PMT only – there is no content with these PIDs contained in the stream as distributed.
Table 4: Secondary Base test stream PID allocations
PID   Contents
0     PAT
16    NIT
17    SDT
18    EIT (contains both actual present/following and actual schedule EIT tables)
20    TDT/TOT
100   PMT for service 15
101   Video
102   Audio
105   AIT for service 15 (see note)
200   PMT for service 16
205   AIT for service 16 (see note)
300   PMT for service 17
305   AIT for service 17 (see note)
400   PMT for service 18
405   AIT for service 18 (see note)
500   PMT for service 19
505   AIT for service 19 (see note)
NOTE: PID values marked as AIT are populated in the distributed stream and are referenced in the included PAT/PMT. These shall be used as the insertion points of additional data required by the test implementation (e.g. the AIT for the test case).
Any of the SI tables above which contain a version number shall use the version number of 1. Streams used by
the test harness or by test cases shall not use the version number of 1 unless the SI table is an exact match to the
SI table in the base stream. This will ensure that cached SI is not used and tests run as expected.
A test harness may play a stream in between test case runs to reset any cached SI from a test case. SI tables used
by this stream which differ from the base stream as defined here shall use a version number of 0. Test cases
shall therefore also avoid using a version number of 0 for any SI tables.
This Transport stream has original network ID 99, network ID 65281, and transport stream ID 2.
The process for defining, implementing and accepting HbbTV test cases consists of the steps as depicted in
Figure 2.
Figure 2: Process for defining, implementing and accepting HbbTV Test Cases: a Test Case with assertion is proposed and reviewed; once accepted, the Test Procedure and Implementation are written and reviewed; once accepted, the Test Case is approved for inclusion (the reviews involve the Specification Group and the Technical Group).
• Each relevant specification item has been translated into one or more Test Cases.
• Each Test Case is described by a defined set of attributes as listed in section 6.3.
• The Test Case attributes shall be stored in the corresponding XML file which is validated by Schema
SCHEMAS/testCase.xsd
NOTE: The use of XML for Test Case definition enables automated processing, e.g. scripting that generates
overviews of existing Test Cases, applies filtering, and allows for flexible generation of various output
file formats.
For official HbbTV tests, the Test Case IDs will usually be allocated by the testing group. In this case, the
namespace shall be “org.hbbtv”. E.g. “org.hbbtv_0000123F”. The testing group must ensure those test IDs are
unique.
It is important that every test has a different Test Case ID. If another organization wants to generate Test Case
IDs for its own tests, then it must not use the “org.hbbtv” namespace. Instead, it must take a domain name it
controls, reverse it, and use that for the namespace part of the Test Case ID. In this case, the Local ID can be
anything permitted by the schema. Organizations should have some internal procedure to allocate Local IDs so
that they don’t generate duplicate Test Case IDs.
For example, a company that controls the “example.com” domain could use Test Case IDs like
“com.example_FOO”, or “com.example_BAR_BAZ_9876-42”
(To be clear: The domain name used in Test Case IDs must be a real domain name, and must be registered on
the Internet in the usual way, using the normal ICANN roots. There is no need for there to be a website there).
See 8.2 for further details on how the Test Case ID is used.
The “procedure” part is deprecated; everything that does not belong to the “assertion” part is now considered
part of the “implementation”. New test material shall not use the “procedure” part. Older test material may still
include it.
6.3.1.3.2 Company
The author’s company name (mandatory). If the author is an individual who is not associated with a company,
this shall be the author's full name.
• "TMPA Commercial"
• "TMPA CC BY-NC-ND"
• “HbbTV Commercial”
6.3.1.4 Title
A short title to identify this specific Test Case (mandatory).
6.3.1.5 Description
A longer description of what the test does if the title is not sufficient (optional). The format of the Test Case
Description is a text field (no limit). Whitespace is not significant.
6.3.2 References
6.3.2.1 Test Applicability
The test specifies which specifications it applies to. Official HbbTV tests shall specify this element and shall use
a name of "HBBTV" (case sensitive) and a version of "1.1.1", "1.2.1", "1.3.1" or "1.4.1". Tests that are valid for
more than one version of HbbTV shall include tags for all applicable versions. Tests may include additional tags
to indicate non-HbbTV specifications that are tested (e.g. OIPF). The master list of specification names is kept
on the Wiki page “AppliesToSpecNames” [26]. For each spec element in the <appliesTo> tag, the test case
shall:
For example, a test which tests an OIPF feature which is also required for both HbbTV 1.1.1 and 1.2.1 would
use the following appliesTo element:
<appliesTo>
  <spec name="HBBTV" version="1.1.1"/>
  <spec name="HBBTV" version="1.2.1"/>
  <spec name="HBBTV" version="1.5.1"/>
  <spec name="HBBTV" version="1.6.1"/>
</appliesTo>
It is not necessary to include a spec element for every potential regime that could reference HbbTV to use this
test. For instance, a country-specific testing regime may require support for HbbTV and seek to use a particular
test case - it is not required to include a spec element for that regime.
Every test case shall have an appliesTo element with at least one spec element. It must include a spec element
for at least one HbbTV version. It may also include spec elements for other specifications as listed on the
“AppliesToSpecNames” Wiki page [26].
6.3.2.4 Chapter
The chapter number within the specified document (a dot separated list of integers or characters without spaces,
e.g. 9.3.1)
For HbbTV and OIPF tests, there shall be at most one assertion. (Other testing groups that reuse the HbbTV
Test Case XML format may relax this limit).
The clause is optional. The master list of object names is kept on the Wiki page “TestObjectNames” [26].
6.3.3 Preconditions
Lists preconditions on the DUT before this test can be run.
You can also use "-" prefixes to indicate the test should only run on devices that do not support the feature. For
example, a test with required terminal options set to "-PVR" will not be run on PVRs, but will be run on
terminals without PVR functionality.
Feature Description
+2DECODER Terminal supports decoding and presenting two video streams at the same time when running HbbTV applications (one broadcast and one broadband, or two broadband). Only one audio is presented.
+AC4 Terminal supports AC-4 audio as defined in ETSI TS 103 190
+AC4_DE Terminal implements the dialogue enhancement API as per ETSI TS 102 796 v1.7.1 clause 15.3.3, when AC-4 audio as defined in ETSI TS 103 190 is played
+AC4_DE_OVERRIDE Terminal implements the dialogue enhancement override API as per ETSI TS 102 796 v1.7.1 clause 15.3.3.4, when AC-4 audio as defined in ETSI TS 103 190 is played
+APPSTORE Terminal supports starting HbbTV apps from a (proprietary) app store
+AUDIO_LANGUAGE_UI Terminal has a UI for selecting audio stream based on language
+AVC_LEVEL_42 Terminal supports 1080p50 AVC video
+BROADCAST_BAT The terminal supports BAT in DVB broadcast services. This includes the use case where the terminal can be configured for a country profile where it expects to receive a BAT in DVB broadcast services
+BROADCAST_HFR Terminal supports high frame rate video for broadcast channels in accordance with ETSI TS 101 154
+CHANNEL_CHANGE_BY_NUMBER Terminal supports channel changing using number keys from remote control
+CHANNEL_CHANGE_BY_P_PLUS Terminal supports channel changing by P+/P- or equivalent
+CHANNEL_DELIVERY_BY_DASH Terminal supports DASH delivery of TV channels
+CI_PLUS Terminal supports CI Plus
+CICAM_PLAYER_MODE Terminal supports "IP delivery CICAM player mode" as defined in the DVB Extensions to CI Plus ETSI TS 103 205 (CI Plus 1.4). Note: this is the same as the terminalOption 'IPC'
+CS_APP_LAUNCH Terminal supports ‘launching a CS application from an HbbTV application’
+CSS_WC_RESPONSE_TYPE_2 The terminal's first response to Wall Clock protocol request messages is a response with message_type 2
+DASH_HFR Terminal supports high frame rate video for MPEG DASH content
+DASH_MPD_CI_ANCILLARY_KEY The terminal provides the mpd-ci-ancillary key=value
Some features specific to testing Operator Applications are defined in D.3.5.1 Operator Application Optional
Features.
Multiple required features are concatenated to a single string without spaces in between. They must be listed in
alphabetical order.
You can also use "-" prefixes to indicate the test should only run on boxes that do not support the feature. For
example, a test with optional features set to "-EAC3" will not be run on terminals with E-AC-3 support, but will
be run on terminals without E-AC-3 support.
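For example, a test that requires both the AC4 and AVC_LEVEL_42 optional features from the table above could declare them as a single concatenated, alphabetically ordered string (an informative sketch; the <optionalFeatures> tag is shown in context in the examples below):
<preconditions>
  <optionalFeatures>+AC4+AVC_LEVEL_42</optionalFeatures>
</preconditions>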
You can also use the <or> tags to specify a requirement for logical combinations of features. For example, “this
test requires either feature VK_PLAY_PAUSE, or both features VK_PLAY and VK_PAUSE” can be written
as:
<preconditions>
  <or>
    <optionalFeatures>+VK_PLAY_PAUSE</optionalFeatures>
    <optionalFeatures>+VK_PLAY+VK_PAUSE</optionalFeatures>
  </or>
</preconditions>
You can also use the <and> tag within an <or> tag to combine requirements for Optional Features with
requirements for Required Terminal Options within a logical combination of features. For example, “this test
requires either optional feature OF_A, or both required terminal option TO_B and optional feature OF_C” can
be written as:
<preconditions>
  <or>
    <optionalFeatures>+OF_A</optionalFeatures>
    <and>
      <requiredTerminalOptions>+TO_B</requiredTerminalOptions>
      <optionalFeatures>+OF_C</optionalFeatures>
    </and>
  </or>
</preconditions>
6.3.3.3.1 Informative
This description is optional and considered informational for reviewers or implementers (it cannot be assumed
that a test operator will see this pre-condition). The format of the textual precondition is a text field (no limit).
6.3.3.4 TestRun
A reference to zero or more Test Cases that must be passed successfully before this Test Case can run. The Test
Cases are referenced by their Test Case ID (see above for format description).
This lists the terminal options and optional features and settings that can affect the test case. For terminal
options and optional features, only +OPTION can be specified, not –OPTION. For settings, the valid values
are:
Identifier Meaning
If the test case does not adapt to any terminal options, then this <terminalOptions> tag must be omitted. If the
test case does not adapt to any optional features, then this <optionalFeatures> tag must be omitted. The test case
must include one of the <setting> tags if and only if it adapts to that setting. If the test case does not adapt to
anything then the entire <adaptsTo> tag must be omitted.
Each key must refer to a setting in one of the RES/META/ExtraInfoRequired/*.xml files. If it does not, then the
test case is invalid.
The specified settings must be configured by the tester (or implicitly to their default values, if default values are
specified and the tester has not explicitly configured them) before the test is run. If they are not configured, the
harness may refuse to run the test case or immediately fail the test case, or it may run the test case anyway (in
which case it will fail if getExtendedSetting() is called).
Test case authors must only list settings if the test case may actually call getExtendedSetting() for that setting
and use the returned value.
NOTE: This XML makes it clear what tests have to be re-run if an extended setting is changed.
6.3.5 Testing
6.3.5.1 Test Procedure
Steps to perform in the test. The steps mentioned here should describe the major steps that can be found in the
implementation code. The actual implementation may consist of more (and probably smaller) steps, but the
general layout of the implementation should be described here. Each step has the following elements:
6.3.5.1.1 Index
The numerical sequence index of the test step (describing the order of steps to perform). This starts with index 1
for the first step and increments by 1 for each subsequent step. This attribute is optional.
6.3.5.1.2 Procedure
Contains the textual description of a single step. The format of the description is a text field (no limit).
Whitespace is not significant.
If a test uses multiple media files, this element can be repeated to define each file.
6.3.5.3.1 Name
The path (preferred) or file name for the media file. The procedure can use this name to refer to the media
file. The value shall be either:
• a path relative to the directory containing the test case XML file, OR
• a file name only, in which case the location of the file is not specified (not recommended for new tests)
Where this attribute is a path, it shall use “/” as a directory separator. Paths must contain at least one “/” to
distinguish them from file names. If the file is in the same directory as the test case XML file, the path shall start
with “./”, for example “./file.mp4”. The use of “.” or “..” is not allowed, except that the path may start with “./”
or “../../RES/”. If the path does not contain “/”, it is treated as a file name.
In all cases, the attribute shall not contain “\” or any character that is forbidden in Windows paths.
6.3.5.3.2 Description
A human-readable description of the media file. The format of the description value is a text field (no limit).
Whitespace is not significant.
6.3.5.4.1 Id
This attribute of the derivedFrom tag gives the id of the test case that this test case is derived from (the parent).
6.3.5.4.2 Difference
Definition of files that differ from the parent. There must be at least one difference tag within a derivedFrom
tag. The filename attribute gives the name of a file in this test case that is different from the parent, relative to
the directory containing the XML file for this test case. The difference element can optionally contain an
explanation of what differs from the parent and why – particularly important for binary files where this may not
be readily apparent or easy to understand with difference tools. If the file that is different from the parent is
renamed, the original name should be stated in the difference element.
6.3.5.6 History
Contains a history of additions and modifications to this test.
6.3.5.6.1 Date
The date of change/update/proposal. The format of the date is YYYY-MM-DD (e.g. 2010-06-10).
6.3.5.6.2 Part
The part of the test that was or is to be updated. Contains one of the following values:
• implementation: for the actual implementation of the test (not included in this document)
The “procedure” part is deprecated; everything that does not belong to the “assertion” part is now considered
part of the “implementation”. New test material shall not use the “procedure” part. Older test material may still
include it.
6.3.5.6.3 Type
The type of update. Contains one of the following values:
• rejected: If the update was rejected and probably should be modified and resubmitted.
• edited: The update provides only editorial changes and has no impact on the test logic. Any changes of
this type do not require review.
6.3.6 Others
6.3.6.1 Remarks
Remarks and comments to this test (optional). The format of the remarks is a text field (no limit).
A test harness may generate and store other data about tests and use this to present alternative reports to
operators. Such information may include indications that although a test would currently be considered as failed
by the criteria in 6.4.2, the test may be considered as ‘incomplete’, or passed after further events have taken
place (e.g. further test steps or a call to the endTest API method). Such indications and reporting are outside the
scope of this specification.
2) All step results stored by the test harness have the value ‘true’ for the ‘result’ parameter
3) All calls to analyze API method have been evaluated to give a result and that result is ‘true’
4) All calls to the test API methods that interact with the test environment (e.g. sendKeyCode,
changePlayoutSet) succeeded
If, at a given time, any of the above criteria are not met the result at that time shall be FAIL. The result may
change to PASS if these criteria are met at a later time (e.g. an analyze API method is evaluated, or a call to the
endTest API method is made.) As step results and analysis results may not be changed, if at a given time the
result is PASS then at a later time the result shall not change to FAIL.
• A JavaScript API that defines the interface between the Test Harness and DUT.
• A set of XML files that define how the Test Harness should interpret a test case. This allows definition
and control of DVB playout required to initiate a test.
By defining these interfaces it is possible for a test case contributor to author test cases that can be run on any
HbbTV compatible Test Harness. Similarly, the interface definition allows multiple Test Harnesses to be
implemented, with different levels of automation, but still all compatible with test cases that adhere to the
interface.
The layout of the APIs described in this document is designed in a way that allows for a high percentage of
automation.
There is no necessity for an HbbTV Test Environment to be fully automated. The HbbTV JS APIs are therefore
designed in such a way that any required test may be operated manually. The implementer of an HbbTV Test
Harness may choose to have an operator interact with the HbbTV Test Environment manually.
The HbbTV JS APIs allow for multiple implementations from different implementers and are designed in such a
way that they allow a potential test implementer to implement compatible Test Cases and/or Test Harnesses. This
allows for combining test cases created by one or more implementers of test cases to create a complete HbbTV
(automated) Test Environment.
1) APIs communicating with the Test Environment (see section 7.2). This part of the API informs the Test
Environment about the current test's status.
2) APIs communicating with the Device under Test (see section 7.3). This part of the API communicates
with the DUT (e.g. send key codes, make screenshots). This can either be implemented directly by
the DUT manufacturer (send commands directly via Ethernet to the DUT), or it can be implemented by
someone else (e.g. send commands to an IR sender or a frame grabber).
3) APIs communicating with the Playout Environment (see section 7.4). This part of the API
communicates with the Playout Environment (which generates and transmits the DVB-S/-C/-T signal
to the DUT). For example, the Playout Environment is responsible for sending the correct AIT to the
DUT, such that the test is started on the DUT.
4) APIs for testing specific further areas of functionality (DIAL, Websockets, Media Synchronization,
Network Testing, etc.). See sections 7.6+
I.e. the JavaScript string can only contain the Unicode code points U+0009, U+000A, U+0020-U+D7FF,
U+E000-U+FFFD, and U+10000-U+10FFFF.
If a test breaks the rules in the previous paragraph, the test harness shall fail the test.
The test harness must handle any XML escaping necessary when writing characters such as "<" in the test report
XML.
• The test is started automatically by the DUT as soon as the AIT is parsed and the test application
(which is signalled as “AUTOSTART”) is detected. This will open the entry URL of the initial test
page in the browser on the DUT.
• The test is started by the test harness, executing an application using its resident ECMAScript
environment. When the test is started is up to the test harness implementation. This mechanism is used
only when the test cannot be practically implemented on the DUT.
NOTE: All JS functions defined in this document are defined within the HbbTVTestAPI prototype. This
means that e.g. for reportStepResult, you would call “testapi.reportStepResult(...);”
• Include the JavaScript file “testsuite.js” from “RES” directory (either by relative reference
“../../RES/testsuite.js” or by absolute reference “http://hbbtv1.test/_TESTSUITE/RES/testsuite.js”),
e.g.:
<script type="text/javascript" src="../../RES/testsuite.js"></script>
• Initialize the test suite by creating a new instance of the HbbTVTestAPI class and calling init() on it.
The init() function needs to be called by the initial test page to initialize the connection between the
DUT and the server. e.g.:
<script type="text/javascript">
  var testapi;
  window.onload = function() {
    testapi = new HbbTVTestAPI();
    testapi.init();
  };
</script>
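Putting the two steps above together, the following is a minimal, informative sketch of an initial test page that initializes the API, reports a passing step and then ends the test. The step IDs and comments are examples only, and the reportStepResult argument order shown (stepId, result, comment) follows the argument descriptions later in section 7:
<script type="text/javascript" src="../../RES/testsuite.js"></script>
<script type="text/javascript">
  var testapi;
  window.onload = function() {
    testapi = new HbbTVTestAPI();
    testapi.init();
    // stepId 0 indicates that the test application has just started
    testapi.reportStepResult(0, true, "test application started");
    // report a further passing step, then end the test
    testapi.reportStepResult(1, true, "example check performed");
    testapi.endTest();
  };
</script>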
The operator application shall only be supported if Annex D is supported. If Annex D is not supported, the test
harness shall support having up to two “clients” running at the same time: harness-based test code and a regular
application.
If more than one client uses HbbTVTestAPI at the same time then:
• The openWebsocket() API must only be used by one client in a test case.
• The following functions are classified as “multi-client-safe” and can be used by all clients at any time
without worrying about what other clients are doing:
o reportMessage, reportStepResult, waitForCommunicationCompleted, multiClientConnect,
MultiClientComms.send, getPlayoutInformation, getPlayoutStartTime
o HbbTVTestAPI() constructor and init() method, although each client may only create one
HbbTVTestAPI instance.
o endTestApp, although each client may only call it once
• The testing JS APIs not explicitly listed in the previous 2 bullet points are classified as “primary-client-
only”. The test case author shall choose at most one client at a time as the “primary” client. There is no
special API to identify the “primary” client, it is just a concept (and should be documented in
comments!). Only the “primary” client may make calls to any of the primary-client-only APIs. The test
case may change which client is the “primary” client as often as it likes, but it must ensure that no
primary-client-only API calls are in progress at that time. The multi-client messaging APIs defined in
7.2.12 JS-Functions for multiple-client communication can be used to co-ordinate a change of
primary client.
• endTest() has an extra restriction, as well as being primary-client-only: The test case must ensure that
any other clients that made reportMessage / reportStepResult calls have subsequently called
waitForCommunicationCompleted and got the callback from that before endTest() is called, else
messages will be lost. The client that is calling endTest does not need to call
waitForCommunicationCompleted, it is implicit in the endTest() call.
• Instead of calling endTest() to end the test, consider using endTestApp() instead. It is designed to be
simpler in multiple-client scenarios.
• If there are simultaneous calls to changePlayoutSet and either getPlayoutInformation or
getPlayoutStartTime, then it is undefined whether getPlayoutInformation and getPlayoutStartTime
return information about the new or old playoutsets. This is similar to the situation where the playoutset
changes due to a timeout set in implementation.xml while those APIs are being called. It is
recommended to design test cases to avoid these situations.
• using an external solution: resetting the power of the DUT and tuning to a pre-defined channel via the
remote control (or automated IR sender)
• services scanned and stored in the service list of the DUT, as defined in section 7.2.2.1.
NOTE: This is typically used by harness-based test cases that are going to do a service scan
early in the test case.
b. Else the service list on the DUT must be the one that would be created by playing out the
XML playoutset file specified by the definition=”” attribute on that tag, and doing a full
rescan on the DUT.
2. Else if any playoutset in the test case uses two modulators, then the initial service list shall be the one
that would be created by playing out the two standard base streams on different modulator channels,
and doing a full rescan on the DUT. It is acceptable for the tester or test harness to use streams that
have equivalent SI to the standard base streams, instead of the actual base streams. The standard base
streams are:
• modulator channel 1: the file RES/BROADCAST/TS/generic-HbbTV-Teststream.ts or
equivalent. This has network id=99, original network id=99, transport stream id=1, and
contains the following services:
▪ service name “ATE test10”, TV service, id=10
▪ service name “ATE test11”, TV service, id=11
▪ service name “ATE test12”, TV service, id=12
▪ service name “ATE test13”, TV service, id=13
▪ service name “ATE test14”, Radio service, id=14
• modulator channel 2: the file RES/BROADCAST/TS/generic-HbbTV-Teststream_b.ts or
equivalent. This has network id=65281, original network id=99, transport stream id=2, and
contains the following services:
▪ service name “ATE test15”, TV service, id=15
▪ service name “ATE test16”, TV service, id=16
▪ service name “ATE test17”, TV service, id=17
▪ service name “ATE test18”, TV service, id=18
▪ service name “ATE test19”, Radio service, id=19
3. Else the initial service list shall be as described in the previous option, except that playing out the
“modulator channel 2” stream during the rescan is optional.
NOTE: People sometimes incorrectly think that “if this test case puts the channel list back how it was,
then it doesn’t need forceInstallationAfterThisTest=”true””. That is WRONG. In the case where
the test fails, it is impossible for the test case to “put the channel list back how it was”. So such a
test case needs forceInstallationAfterThisTest="true".
If a test case sets skipInstallationBeforeThisTest=”true” then the test case must also set
forceInstallationAfterThisTest="true".
The value of the mandatory “definition” attribute is a relative path to a playoutset XML file. The path is relative
to the directory containing the Test Case XML file, and uses “/” as a directory separator. The path must be
within the Test Suite, but it can be within the test case directory (or a subdirectory) or a shared file in RES/.
When moving from one test case to another, the test harness needs to know whether the new “Install Playoutset”
is the same or different, to decide whether or not the DUT needs rescanning. This decision may be based purely
on whether they have the same (absolute) path. So if two Test Cases require the same DVB environment then
they should both refer to the same “Install Playoutset” XML in the RES/ directory.
The playoutset XML specified here must specify a playoutset containing no applications. (It may contain AITs,
but those AITs must either not signal any applications, or signal applications with the KILL code).
The meaning of these attributes is documented in section 7.2.2.1. If not specified, these attributes default to
false.
NOTE: Coding style guideline: Don’t specify these attributes with the value false, just omit the
attribute(s) completely and let them default to false.
NOTE: When a test is not running, the test harness may play out a stream that has the same identification,
services, and SI/PSI as the standard HbbTV testing stream. This allows the user to do a channel
scan before running the tests, and ensures the DUT doesn’t go into “no signal” mode when the test
ends. This XML tag allows each test case to specify an “Install Playoutset”, which is the stream
that the tester should use to do the channel scan, instead of the standard stream. When starting a
new test, if it specifies the same “Install Playoutset” as the previously-run test, then the test may
just run without the user having to rescan the DUT. If the new test specifies a different “Install
Playoutset” from the previously-run test, then the test harness may play out the new “Install
Playoutset” and prompt the user to do a rescan on the DUT before running the test case.
7.2.3 Callbacks
All API calls defined in this document are asynchronous, as the API may have to interact with the test harness.
To know when the API call actually completes, the caller needs to provide a callback function. This function is
called as soon as the API call completes (either locally on the DUT or on the server). The first argument of the
callback function is always the callback object passed to the API function. This allows the callback function to
determine which API call was processed (if the same callback function is used for multiple API calls). The
callback function might be called with additional arguments. These additional arguments are defined in the
respective API function definition.
The implementation of a function can either be synchronous (e.g., via OIPF debug function, chapter 7.15.5 of
OIPF DAE) or asynchronous (e.g. via XMLHttpRequest to the server). The callback function is called in both
cases. In a synchronous implementation, this would be done before the function has returned.
Specifying callback functions and callback objects is not required. The callback arguments may be null if the
caller is not interested in the callback and the callbackObject may be null if no state is required.
ARGS: callback/callbackObject: a callback function to invoke when the information is available (also
see chapter 7.2.3 Callbacks). The callback function will be called with the following parameters:
callback(callbackObject, playoutInformationObject) where the playoutInformationObject is a
JavaScript object which has a ‘length’ property that returns the integer number of multiplexes in
the current playout set. It can be indexed using the integers 0 to `length-1` to obtain JavaScript
objects giving information about each multiplex in the current playout set. The order shall match
the order that the multiplexes were defined in the playout set XML file. For each multiplex object,
the following properties shall be available:
transponderDeliveryType: the idType (integer) for the DVB delivery type (DVB-S(2), DVB-
T(2), DVB-C(2)), as specified in OIPF DAE, chapter 7.13.11.1 to be used for
createChannelObject() function calls.
transponderDsd: the delivery system descriptor (tuning parameter) as specified in OIPF DAE,
chapter 7.13.1.3 to be used for createChannelObject() function calls, with the corrections from
section A.2.4.4 of the HbbTV specification [1] with errata 1 applied
eitOffset: a non-negative integer that is the ‘EIT Offset’ for this multiplex as defined in section
7.4.4.7. If the <adjustEit/> tag was not specified for this multiplex then this shall be zero.
signalLevelCurrent: the current output signal level in dBm, as a floating-point number. Note
that this will normally be negative. Note that the actual signal level received by the terminal will
usually be lower due to losses in the cabling connecting the harness to the terminal, especially if a
signal combiner is in use to support multiple multiplexes. Also note that there may be some error
in this value, modulator manufacturers typically quote an accuracy of +/-3dBm. But it is expected
that relative changes in this value will result in similar relative changes of the signal level at the
input to the terminal.
NOTE: We do not require this value to be accurate and calibrated because that would require
everyone running the tests to have calibrated RF analysis hardware to calibrate the
signal level. For the purposes it is currently used for, this is good enough.
signalLevelDefault: the harness’s default output signal level in dBm, as a floating-point number.
This is the signal level used if the test case does not explicitly change the signal level.
signalLevelMax: the maximum signal level supported by the harness, in dBm, as a floating-point
number. E.g. for one example modulator this is 0dBm
signalLevelMin: the minimum signal level supported by the harness, in dBm, as a floating-point
number. For harnesses that support OpApps, this must be at least 9dBm less than
signalLevelMax. E.g. for one example modulator this is -60dBm
signalLevelStep: the granularity that the signal level can be adjusted, in dBm, as a nonnegative
floating-point number. Note that it is guaranteed that both signalLevelMax and signalLevelMin
are achievable, so this cannot exceed (signalLevelMax – signalLevelMin). May not be exact due
to floating-point rounding. This may be low (e.g. 0.5) if the modulator has a built-in output power
control, but may be much larger (e.g. 9) if output power control is being done by manually
connecting an attenuator.
NOTE: The supported signal levels are likely to be different for different modulation types (e.g.
satellite vs terrestrial) and may be different for different multiplexes if the harness is
using two modulators that are different models.
If ‘length’ is 1, then the properties above shall also be available on the playoutInformationObject directly. (I.e. if
`length` is 1, then playoutInformationObject.transponderDeliveryType shall be the same as
playoutInformationObject[0].transponderDeliveryType, but if ‘length’ is not 1 then
playoutInformationObject.transponderDeliveryType should be undefined).
If this method call fails, an empty object with no key/value pairs is returned.
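As an informative sketch of the callback convention from section 7.2.3 applied to this function (the step ID and the reported comment are examples only; only properties defined above are read):
function onPlayoutInfo(callbackObject, playoutInformationObject) {
  // the callbackObject passed to the API call is returned as the first argument
  var deliveryTypes = [];
  for (var i = 0; i < playoutInformationObject.length; i++) {
    // transponderDeliveryType/transponderDsd could be passed to createChannelObject();
    // signalLevelCurrent is the current output level of this multiplex in dBm
    deliveryTypes.push(playoutInformationObject[i].transponderDeliveryType);
  }
  testapi.reportStepResult(callbackObject.stepId, true,
    "playout set delivery types: " + deliveryTypes.join(","));
}
testapi.getPlayoutInformation(onPlayoutInfo, { stepId: 2 });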
ARGS: None
This will end the test. No further calls to any API functions after this call shall have any effect on the test result.
The result of the test case is “PASSED” only in the following case:
• there were no calls to reportStepResult with result being false during the complete run of the test, and
• the analysis of all analyze function calls succeeded (or there were no analyze calls at all). There may
not be any network connection at the time of the function call (or the Test Harness cannot be reached
for other reasons), so this function may be implemented asynchronously. Therefore a test implementer
must make sure that the network connection is available at the end of the test. (see reportStepResult
documentation).
NOTE: This function could be internally implemented by calling reportStepResult with a stepId below 0,
but this is not a requirement.
ARGS: stepId: the step number that has been performed. Shall be called with an integer value >= 0.
stepId = 0 indicates that the test application has just started. stepId values shall be unique
throughout the execution of each test case. They shall not be repeated. If a Test Harness receives a
repeated value of stepId for a test case then the test shall fail.
There is no need for the stepId in the implementation to match the step values in the test
specification procedure.
result: true if the step has completed successfully, false if the step has failed. To send a failure, no
endTest() call is required. In this case, the stepId references the failing step. If the result is false,
the server may not stop the application immediately (this may take a few seconds or even minutes,
depending on the server). Therefore the testing application should ensure that no more
reportStepResult calls are executed after this. Furthermore, the Test Harness shall ignore any
reportStepResult calls received after a call with false result while executing a test.
comment: a comment from the test developer describing what the step actually does (e.g. a
reference to the test procedure)
NOTE 1: The step number is usually in ascending order, but this is not a requirement. A test may skip test
numbers and/or not report steps in ascending order. This is especially the case if multiple steps are
executed in parallel.
EXAMPLE 1: synchronous communication to the server via a non-network communication path (e.g. a
serial line connection)
EXAMPLE 2: asynchronous communication with the server (e.g. XmlHttpRequest via network) where all
calls are stored in a FIFO queue, that is, one by one, reported to the server. If
communication is not possible (e.g. network not available), the JS API implementation will
continually try to report the step results in the queue in the background. This is why a test
implementer must make sure that the network connection is available at the end of the test.
The following diagram shows the communication (and queuing) with the server in the case of an asynchronous
communication with the server:
Figure 5: Communication between DUT and Test Harness when network is temporarily
unavailable
NOTE: There may not be any network connection at the time of the function call (or the Test Harness
cannot be reached for other reasons), so this function might be implemented asynchronously.
NOTE: An implementation of this API which is based on manual interaction may immediately call the
callback function when this function is called, as it does not have any communication queue. If a
queue is present but empty, the callback function may also be called immediately (but may also be
called asynchronously).
ARGS: check: a textual description of the action required which will be presented to the test operator
callback/callbackObject: a callback function to invoke when the action has been completed by
the test operator (also see chapter 7.2.3 Callbacks).
NOTE: where possible, this API should not be used. Instead, use the preconditions mechanism to arrange
for a whole Test Case to be run or not based on the DUT’s features.
To use this API, the test case must specify what options and settings it adapts to as described in 6.3.4.1
adaptsTo.
bool optionsData.hasFeature(
optionName : string);
string optionsData.getSetting(
settingName : string);
For the first two of these methods the parameter should be a single option name of the relevant type. Only
+OPTION can be specified, not –OPTION.
NOTE: If you want –OPTION, call the function with +OPTION and negate the result in Javascript.
• If the option is not a single option name or was not listed in the relevant sub-tag of the <adaptsTo> tag,
this is a test case bug, and the harness will cause the test case to fail automatically. For
hasTerminalOption() the relevant sub-tag is <terminalOptions>. For hasFeature() the relevant sub-tag
is <optionalFeatures>.
• If the option is supported, return true
• If the option is not supported, return false
The optionsData.getSetting() method returns immediately, and has two possible results:
• If the value of settingName is not a single value listed in the table in section 6.3.4.1 adaptsTo, or
if the value was not listed in a <setting> sub-tag of the <adaptsTo> tag, this is a test case bug, and the
harness will cause the test case to fail automatically.
• Otherwise, return the value corresponding to the setting as configured in the harness.
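A brief informative sketch of how a test case might use these methods, assuming an optionsData object has already been obtained as described for this API, that the features used are declared in the test case's <adaptsTo> tag, and that "com.example.someSetting" is a purely hypothetical key:
// to adapt to the absence of a feature, query +OPTION and negate the result
var noEac3 = !optionsData.hasFeature("+EAC3");
if (optionsData.hasFeature("+AC4")) {
  // run the AC-4 specific branch of this test case
}
// hypothetical key; real keys must be listed in RES/META/ExtraInfoRequired/*.xml
// and in a <setting> sub-tag of the <adaptsTo> tag
var configuredValue = optionsData.getSetting("com.example.someSetting");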
Reports to the harness that, as far as the currently running application is concerned, the test case is complete and
can pass, but the harness should wait for calls to this function from other applications before actually marking
the test as passed. For the purposes of this function, each “application” could be a harness application, regular
HbbTV app, OpApp, or Launcher OpApp page (the latter two only if Appendix D is implemented in the Test
Harness).
numberOfApps indicates the total number of applications that should call this API. Must be an integer no less
than 2. When that number of calls have been made to this API, the test ends (possibly passing) as if endTest()
was called.
After calling this, the currently running JS execution environment (i.e. web page or harness-based test code)
should not make any more HbbTVTestAPI calls, and should not make any calls to MultiClientComms.send.
Other JS execution environments may continue to use those APIs.
It is a test case bug if a test case calls endTestApp() with different numberOfApps.
NOTE: The harness is likely to implement the multi-client messaging system via a websocket connection from
each client to a server that is part of the harness. This call opens that websocket.
myClientId identifies this client. Recommended values are “OpApp” for OpApps, “BroadcastApp” for regular
HbbTV apps running on the DUT, or “HarnessBased” for harness-based test code, but any string matching the
Javascript regex /^[-a-zA-Z0-9_]{1,25}$/ is legal. Within a test case it should be unique.
listOfOtherClients: a list of clients that are currently connected to the multi-client messaging
system, identified by the client ID values they passed as the myClientId argument to the
multiClientConnect function.
ARGS: message: The message that was sent. May be any JSON-compatible value (i.e. anything that
JSON.parse() returns).
sourceClientId: the client ID that the sender of the message passed to multiClientConnect()
If onFail is non-null it is called if the connection cannot be established (in which case onConnect is never called)
or (after a call to onConnect) if the connection is lost. This should not normally happen, unless the test case
disables the terminal’s network connection. Call is:
void onFail(
callbackObject : object,
reason : string);
ARGS: reason: A human-readable string describing the error. E.g. the test case may choose to fail the test
and use this string as part of the failure reason.
Each client (i.e. each Javascript execution context) must not try to open more than one connection to the multi-
client messaging system at any time. Once multiClientConnect() has been called it must not be called again by
that client unless or until the onFail callback has been called. If this rule is broken then the harness should fail
the test case automatically with an error.
void MultiClientComms.send(
toClientId : string,
message);
Sends a message to another client. Note that this function is on the object returned via the onConnect callback,
so you must first call multiClientConnect() and wait for the onConnect() callback.
ARGS: toClientId: identifies the client that should receive the message. Same legal values as the
myClientId argument to multiClientConnect()
message: may be any JSON-compatible value (i.e. anything that JSON.stringify() accepts).
The message is sent to every client with a client ID that matches toClientId. If there are multiple clients
connected using the same client ID, then it gets sent to all of them. If there are no clients with that client ID then
that is a test case bug and the harness should fail the test case automatically with an error. You can send
messages to a client that has onMessage set to null, even though the client ignores the message.
onClientJoinOrLeave: A callback function to invoke when another client joins the multi-client messaging
system (by calling multiClientConnect()) or disconnects from the multi-client messaging system (e.g. due to the
application that called multiClientConnect() being terminated, or the network connection going down).
Prototype:
void onClientJoinOrLeave(
callbackObject : object,
clientId : string,
join : boolean);
ARGS: clientId: the identity of the client that connected or disconnected. This is the string that client
passed as the myClientId argument to the multiClientConnect function.
join: true if the client joined the network (i.e. connected to the server), false if the client left the
network (i.e. disconnected from the server).
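The following is an informative sketch of how two clients might co-ordinate using these APIs. The client IDs, message contents and the argument order of the onConnect callback are assumptions for illustration only and should be checked against the prototypes in this clause; the call to multiClientConnect() itself is not reproduced here:
var comms = null;
// assumed argument order: callback object first (see 7.2.3), then the
// MultiClientComms object that send() lives on, then the list of other clients
function onConnect(callbackObject, multiClientComms, listOfOtherClients) {
  comms = multiClientComms;
  // tell the other client (assumed to use the ID "OpApp") that this client is ready
  comms.send("OpApp", { ready: true });
}
function onMessage(callbackObject, message, sourceClientId) {
  if (sourceClientId === "OpApp" && message.done === true) {
    // this client has finished its part; two applications in total are
    // expected to call endTestApp in this test case
    testapi.endTestApp(2);
  }
}
// multiClientConnect() is then called with myClientId "BroadcastApp" and the
// callbacks above; see its prototype earlier in this clause for the exact
// argument order.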
The harness reads all these files and merges all the data together to determine what settings to show the user.
The user then configures the settings in the harness.
NOTE: Storing this metadata in files in the test suite allows the list of bilateral agreement settings to be
changed without having to update this specification and release a new test harness.
NOTE: Having this per-testsuite rather than per-test allows changes to the descriptions etc in a single
central place, and avoids worrying about “what if one test has a description that’s different from
the others”.
NOTE: the filename, schema and JS API deliberately do not have “bilateral agreement” in the name
because we expect this to be more generally useful for extra settings that other test suites require.
NOTE: The file should be named after the test package that uses the settings.
The key is used to uniquely identify the setting, and must be unique (across ALL these XML files). The title
and description are mandatory. The title is used to identify the setting to the user, so it should be unique, and
should be kept short – generally 1-4 words. The description is used to explain the setting to the user, so it can be
as long as necessary (although good UI design is to keep things as short as possible, but no shorter).
Settings can be a string or a number. For numbers, the “min”, “max”, and “step” attributes can be set – these
have the same values and meanings as in the HTML5 number input tag. For strings, the “minlength”,
“maxlength” and “pattern” attributes can be set, and have the same values and meanings as in the HTML5 text
input tag.
A default value can optionally be specified. If specified, the default must validate as a valid value.
NOTE: This is because the harness will typically use an empty string as “not specified” internally. But if
there is no minimum length then an empty string is a valid value.
callbackObject: Optional object to be passed back to the invoker as the first parameter of the
callback function.
If the requested key is not listed in ExtraInfoRequired.xml, or is not listed in <usesExtendedSettings> in the
testcase XML for the running test, or if the setting is not set and does not have a default value, then the test
harness shall fail the entire test automatically.
If the test case implementation.xml file includes the <httpsServerConfig/> XML tag, then this immediately
returns the URI to the TLS server used to serve OpApps. This shall be of the form “https://<domainname>/” or
“https://<domainname>:<port>/” – note that the HTTPS protocol and trailing slash are always included.
The return value of this function is a constant, its value shall not be changed by the test harness across the scope
of a single test run.
This function returns immediately, without waiting for a network request. It is even available if the terminal
does not have network access and has never had network access (and has presumably loaded the application
from DSMCC).
NOTE: Harness authors, please note: The above rules mean that the return value must be part of
testsuite.js, either hardcoded or substituted in.
If the test case does not specify the <httpsServerConfig/> XML tag, then it must not call this function.
ARGS: delaySeconds: the number of seconds after which the device is switched off. These seconds start
counting as soon as the callback function is called; if it is set to 0, the reset may occur while the
application is still handling the callback function. If delaySeconds < 0 then the test shall fail.
▪ “STANDBY”: the device will go to standby mode – if the device does not support standby
mode, POWERCYCLE will apply.
▪ “POWERCYCLE”: the device will be physically disconnected from power supply – if the
device has a built-in power supply (e.g. battery), the power cycle is defined by the device.
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
Calling this function will cause the current application to be stopped. After reboot, the device needs to be
transferred to the pre-defined state (see chapter 7.2.2), so the AUTOSTART application signalled on the initial
service will be started.
ARGS: keyCode: a string describing the key to send: any VK_* code in the table in OIPF 2.3 volume 5a
Web Standards TV Profile section 6.2, except for VK_UNDEFINED (because that’s not actually
a key). It will also be legal to use this API to send the other VK_* codes defined in table 17 in the
OpApps specification. If any other keyCode string is requested then the test shall fail.
durationSeconds: the number of seconds for which the key is held down continuously, or a value
of 0 (zero) for a single key press event. If durationSeconds < 0 then the test shall fail.
callback/callbackObject: a callback function to invoke when the key code was actually sent to
the receiver (also see chapter 7.2.3 Callbacks).
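For illustration, an informative sketch of a single key press using this function; the key code, step number and comment are examples, and the argument order follows the ARGS above:
function onKeySent(callbackObject) {
  // called once the key code has actually been sent to the DUT
  testapi.reportStepResult(callbackObject.stepId, true, "VK_OK sent to the DUT");
}
// durationSeconds = 0 requests a single key press event
testapi.sendKeyCode("VK_OK", 0, onKeySent, { stepId: 3 });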
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
posx: the horizontal position of the pixel to analyze within safe border (128-1152)
posy: the vertical position of the pixel to analyze within safe border (36-683)
referenceColor: the reference colour that the pixel should match. The String must start with a
hash (#) followed by a 2-digit case insensitive hexadecimal representation of each of the red, green,
and blue colour components (e.g. #ff0000 for red).
callback/callbackObject: a callback function to invoke when the call completes (also see chapter
7.2.3 Callbacks). The analysis may not have taken place at the time the callback is invoked. The
analysis result is not passed back to the application. A failed analysis must cause the complete test
to fail.
The method for pixel colour matching is not defined. The following tolerances are defined for analysis:
• The area of pixels analyzed may be up to +/- 10 pixels in each direction of the specified pixel position
• The pixel colour value analyzed may be up to +/- 80/255 of the specified colour value for each colour
component
EXAMPLE: This function may be implemented either for manual interaction or for automated
processing. During analysis, the implementation may modify the HTML DOM (e.g. add a
black layer and a cross hair on top of the screen and removing it after analysis is finished).
This might trigger an Application.show() call. If only manual processing is desired, an API
application could call analyzeScreenExtended("Is colour of Pixel at Pos.
"+posx+"/"+posy+" similar to reference colour "+referenceColor+"?", callback,
callbackObject).
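A further informative sketch of an automated use of this function; the coordinates, colour, step number and comment are examples, and the argument order (stepId, comment, posx, posy, referenceColor, callback, callbackObject) follows the ARGS above:
function onPixelChecked(callbackObject) {
  // the analysis result is not passed back; a failed analysis fails the
  // complete test regardless of what is reported here
  testapi.reportStepResult(callbackObject.stepId, true, "pixel analysis requested");
}
// expect red at (640, 360), which lies within the safe border (128-1152, 36-683)
testapi.analyzeScreenPixel(4, "test pattern shows red at the screen centre",
  640, 360, "#ff0000", onPixelChecked, { stepId: 4 });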
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
check: a textual description detailing which test to perform. This is the only criterion that shall be
used for the assessment of this analysis call.
callback/callbackObject: a callback function to invoke when the analysis was made (also see
chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis must cause the complete test to fail, independent of the test result reported back by the
reportStepResult function. This is due to the fact that the analysis can also be performed off-line
on a screenshot taken earlier.
image_relative_url: specifies a path, relative to the testcase XML file, to a reference image file
for the tester to refer to when performing the analysis. This is optional, if it is not specified or
undefined or null then there is no image to display. If specified, the reference image file shall
have one of the file extensions from the table below.
The same pixel and colour tolerances apply as specified in the analyzeScreenPixel function.
EXAMPLE: This function may be implemented by first taking the screenshot and then performing an
offline analysis at a later point in time.
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
channelId: the channel in the referenced audio that is being analyzed. The following channel IDs
are supported:
▪ 1: left channel
▪ 2: right channel
▪ 3: centre channel
If the channelId is not implemented (channelId < 1 or channelId > the number of audio channels
in the sample) then the test shall fail.
referenceFrequency: the reference frequency in Hz. Allowed values are 500, 630, 800, 1000,
1250, 1600, 2000, 2500, 3150, 4000. Other values shall cause the test to fail.
callback/callbackObject: a callback function to invoke when the analysis was made (also see
chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis must cause the complete test to fail, independent of the test result reported back by the
reportStepResult function. This is due to the fact that the analysis can also be performed off-line
on a recording made earlier.
Frequencies should be detected that have a minimum signal level between -50dB and 0dB, assuming no
attenuation by the DUT. The tolerance on frequency detected shall be +/- 10%.
EXAMPLE 2: This function may be implemented either for manual interaction or for automated
processing. If only manual processing is desired, an API application could call
analyzeAudioExtended("Do you hear a tone with a frequency of about
"+referenceFrequency+" Hz on channel "+channelId+"?", callback, callbackObject).
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
NOTE: This function may be implemented by first making an audio capture and performing the analysis
at a later point in time.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
check: a textual description detailing which test to perform on the audio data. This is the only
criterion that shall be used for the assessment of this analysis call. An empty or null string shall
cause the test to fail.
callback/callbackObject: a callback function to invoke when the analysis has been made (also see chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed analysis must cause the complete test to fail, independent of the test result reported back by the reportStepResult function. This is because the analysis can also be performed off-line on a captured audio recording.
duration: duration of recording in seconds. The argument is optional; if it is not present then the
default is 10 seconds.
delay: delay before recording in seconds. The argument is optional; if it is not present then the
default is 0 seconds.
audio_relative_url: specifies a path, relative to the testcase XML file, to a reference audio file for the tester to refer to when performing the analysis. This parameter is optional; if it is not specified, or is undefined or null, then there is no audio file to refer to.
audio_mime_type: specifies the MIME type of the audio file. This parameter is optional; if a string is specified for audio_relative_url then audio_mime_type must be a string specifying the MIME type, otherwise this parameter must be either not specified or undefined or null.
This function analyzes (or records for later analysis) the audio over the following period:
• Starting between (delay + 0) and (delay + 2) seconds after the API is called (although this may be delayed if other Test API calls are in progress when this API is called).
• Ending (duration) seconds after the analysis (or recording) started.
The described check shall not require audio data from outside the specified period. This allows the audio to be
recorded and then processed later on.
The callback may be called much later than the end of the analysis period, e.g. if the harness is waiting for a user
to enter the result, or if the harness is doing non-real-time audio compression or some automated analysis.
Test implementers should consider the audio content to be analyzed, assuming there will be a human tester. For example, if the implementation uses a microphone for audio capture, the results might not be very exact. The inaccuracy of the audio capture may be up to +/- 1000 Hz. Only frequencies with a loudness of at least 50% of the maximum allowed loudness should be expected to be detected.
NOTE: This function may be implemented by first making an audio capture and performing the analysis
at a later point in time.
Container format: MP4 container containing only AAC audio (no video)
File extensions: .m4a, .mp4a or .aac
MIME type: audio/mp4, optionally with the codecs parameter (RFC4281)
NOTE: These formats have been chosen to be compatible with the <audio> tag in all major desktop web
browsers. The file extensions are chosen to match the list in section 5.2.1.1 Web Server
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
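A minimal sketch of an analyzeAudioExtended() call, assuming the arguments are passed in the order listed above and that testApi is the HbbTVTestAPI instance created by the test; the step number, check text, timings and reference file are illustrative:

// Sketch only: records (delay = 2 s, duration = 10 s) and asks for the described audio check.
testApi.analyzeAudioExtended(
    5,                                       // stepId
    "Verify the broadcast audio after channel change",                   // comment
    "A continuous 1 kHz test tone is audible for the whole recording",   // check
    function(callbackObject) {
        // analysis may be performed off-line; no result is passed back here
    },
    null,                                    // callbackObject
    10,                                      // duration in seconds (optional, default 10)
    2,                                       // delay before recording in seconds (optional, default 0)
    "audio/reference_tone.m4a",              // audio_relative_url (optional, hypothetical path)
    "audio/mp4");                            // audio_mime_type (required when a reference file is given)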
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
check: a textual description detailing which test to perform on the video data. This is the only criterion that shall be used for the assessment of this analysis call. An empty or null string shall cause the test to fail.
callback/callbackObject: a callback function to invoke when the analysis has been made (also see chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed analysis must cause the complete test to fail, independent of the test result reported back by the reportStepResult function. This is because the analysis can also be performed off-line on a captured recording.
duration: duration of recording in seconds. The argument is optional; if it is not present then the
default is 10 seconds.
delay: delay before recording in seconds. The argument is optional; if it is not present then the
default is 0 seconds.
image_or_video_mime_type: specifies the MIME type of the image or video file. This parameter is optional; if a string is specified for image_or_video_relative_url then image_or_video_mime_type must be a string specifying the MIME type, otherwise this parameter must be either not specified or undefined or null.
This function analyzes (or records for later analysis) the video over the following period:
• Starting between (delay + 0) and (delay + 2) seconds after the API is called (although this may be
delayed if other Test API calls are in progress when this API is called).
• Ending (duration) seconds after the analysis (or recording) started.
The described check shall not require video data from outside the specified period. This allows the video to be
recorded and then processed later on.
The callback may be called much later than the end of the analysis period, e.g. if the harness is waiting for a user
to enter the result, or if the harness is doing non-real-time video compression or some automated analysis.
Container format: MP4 container containing H.264 video only (no audio)
File extension: .mp4
MIME type: video/mp4, optionally with the codecs parameter (RFC4281)
NOTE: These formats have been chosen to be compatible with the <img> and <video> tags in all major
desktop web browsers. The file extensions are chosen to match the list in section 5.2.1.1 Web
Server.
If specifying a reference video, it may optionally have sound, which should be ignored for the analysis. (This
allows reusing existing video assets).
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
NOTE: This function may be implemented by first making a video capture and performing the analysis at
a later point in time.
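A minimal sketch of an analyzeVideoExtended() call on the same assumptions (argument order as listed above, testApi created by the test; values are illustrative; image_or_video_relative_url is the reference-file parameter referred to by image_or_video_mime_type above):

// Sketch only: records 10 seconds of video and asks for the described check.
testApi.analyzeVideoExtended(
    7,                                       // stepId
    "Record the first seconds of broadband video playback",              // comment
    "The colour bar pattern scrolls smoothly from left to right without stalling",  // check
    function(callbackObject) {
        // the recording may be analyzed later (manually or automatically)
    },
    null,                                    // callbackObject
    10,                                      // duration in seconds (optional, default 10)
    0,                                       // delay in seconds (optional, default 0)
    "video/reference_clip.mp4",              // image_or_video_relative_url (hypothetical path)
    "video/mp4");                            // image_or_video_mime_type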
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing the purpose of the step
check: a textual description of the analysis that must be done which will be presented to the test
operator
callback/callbackObject: a callback function to invoke when the power cycle request was
received (also see chapter 7.2.3 Callbacks).
ARGS: serviceName: the name of the service to select, e.g. "ATE Test11". The name shall be the service
name as defined in the service descriptor of the target service. If the service is not signalled in the
current broadcast then the function shall fail.
callback/callbackObject: a callback function to invoke when the key codes were actually sent to
the receiver (also see chapter 7.2.3 Callbacks).
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
This function should not be used to select a radio service when a TV service is selected or vice-versa, as this
could require a change of service lists and could involve intermediate service selections (e.g. to the last active
service of the new list).
Before calling this API, the test case should ensure no HbbTV applications are requesting the NUMERIC
KeySet.
NOTE: Although the naming of this function indicates that the service change must be performed by
emulating a remote control (i.e. sending IR commands to an IR receiver), there is no specific
requirement to do this. If alternative methods of service selection are available then these are
valid.
pointerCode: a string describing the code to send once the pointer is in position: “P_CLICK”,
“P_DBLCLICK”, “P_DOWN”, “P_UP”, or “NONE” if no code is to be sent. If any other
pointerCode string is requested then the test shall fail. For touchpad-type devices, code
“P_CLICK” shall be sent by the harness if pointerCode "NONE" is specified, to better simulate
real user interaction with a touchpad where pointing and clicking are both represented by a single
tap.
callback/callbackObject: a callback function to invoke when the pointer has been moved and
any requested pointer code was sent to the DUT (also see chapter 7.2.3 Callbacks).
callback/callbackObject: a callback function to invoke when the key code was actually sent to
the receiver (also see chapter 7.2.3 Callbacks).
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment describing to the tester the content which is expected to appear in the specified area. The area of interest is specified as a separate parameter.
areaOfInterest: JS object which contains pixel coordinates of the screen region that should be
compared (top, left, bottom, right). Pixel coordinates are zero-based.
callback/callbackObject: a callback function to invoke when the call completes (also see chapter
7.2.3 Callbacks). The comparison may not have taken place at the time the callback is invoked.
The comparison result is not passed back to the application. A failed comparison must cause the complete test to fail, independent of the test result reported back by the reportStepResult function. This is because the analysis can also be performed off-line on a captured screenshot.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing to the tester (if the test is performed manually) the text which is expected to appear in the specified area. The area of interest is specified as a separate parameter.
expectedText: the text content expected to be found in the given region. If the text content is found, the test passes.
areaOfInterest: JS object which contains pixel coordinates of the screen region in which text
should appear (top, left, bottom, right). Pixel coordinates are zero-based.
referenceImage: path to the image, relative to the test folder. The reference image should be a complete capture of the screen, although the function will consider only the area of interest. The image shall be in PNG format and the filename shall end in ‘.png’. This image may be presented to the tester by the test harness in order to provide further clarification of the expected text appearance on the screen.
callback/callbackObject: a callback function to invoke when the call completes (also see chapter 7.2.3 Callbacks). The analysis may not have taken place at the time the callback is invoked. The analysis result is not passed back to the application. A failed analysis must cause the complete test to fail, independent of the test result reported back by the reportStepResult function. This is because the analysis can also be performed off-line on a captured screenshot.
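For illustration, the areaOfInterest argument is a plain JavaScript object of the following shape (the coordinate values are hypothetical; they describe a zero-based pixel region of the screen):

// Example region: a strip near the top of the screen.
var areaOfInterest = {
    top: 100,       // top edge of the region, in pixels (zero-based)
    left: 0,        // left edge
    bottom: 200,    // bottom edge
    right: 1280     // right edge
};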
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
check: a textual description detailing which test to perform on the video data. This is the only criterion that shall be used for the assessment of this analysis call. An empty or null string shall cause the test to fail.
callback/callbackObject: a callback function to invoke when the analysis has been made (also see chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed analysis must cause the complete test to fail, independent of the test result reported back by the reportStepResult function. This is because the analysis can also be performed off-line on a captured recording.
duration: duration of recording in seconds. The argument is optional; if it is not present then the
default is 10 seconds.
delay: delay before recording in seconds. The argument is optional; if it is not present then the
default is 0 seconds.
image_or_video_mime_type: specifies the MIME type of the image or video file. This parameter is optional; if a string is specified for image_or_video_relative_url then image_or_video_mime_type must be a string specifying the MIME type, otherwise this parameter must be either not specified or undefined or null.
audio_relative_url: specifies a path, relative to the testcase XML file, to a reference audio file for the tester to refer to when performing the analysis. This parameter is optional; if it is not specified, or is undefined or null, then there is no audio file to refer to.
audio_mime_type: specifies the MIME type of the audio file. This parameter is optional; if a string is specified for audio_relative_url then audio_mime_type must be a string specifying the MIME type, otherwise this parameter must be either not specified or undefined or null.
This function analyzes (or records for later analysis) the video over the following period:
• Starting between (delay + 0) and (delay + 2) seconds after the API is called (although this may be
delayed if other Test API calls are in progress when this API is called).
• Ending (duration) seconds after the analysis (or recording) started.
The described check shall not require video data from outside the specified period. This allows the video to be
recorded and then processed later on.
The callback may be called much later than the end of the analysis period, e.g. if the harness is waiting for a user
to enter the result, or if the harness is doing non-real-time video compression or some automated analysis.
Test implementers should consider the audio and video content to be analyzed, assuming there will be a human
tester.
For the supported reference audio formats, see 7.3.6 JS-Function analyzeAudioExtended().
For the supported reference image and video formats, see 7.3.7 JS-Function analyzeVideoExtended().
Test implementers should not call this function when network connection is down, as it may fail when network
connection is not available. In this case, this will cause the complete test to fail automatically.
NOTE: This function may be implemented by first making a video capture and performing the analysis at a
later point in time.
• Transport streams to play out (independent of delivery type: e.g. DVB-S, DVB-C, DVB-T)
• Definition for AIT tables (optional, if not defined, they must be inside transport streams)
The playout set definition with id “1” is the initial playout set and shall be active when the test starts. The
referenced AIT should start the actual test application. Some tests may require switching between multiple
playout sets. The switching is either done after a specified amount of time (timeout attribute in a playout set
specifies after how many seconds to automatically switch to the next playout set) or after an API call from the
test (see changePlayoutSet function below).
Each test must have an XML definition file called “implementation.xml” residing in the test directory defining all the requirements of the test:
<testimplementation id="<testcaseid>">
  <playoutsets>
    <playoutset id="<number>" definition="<rel_filename>" [timeout="<seconds>"] />+
  </playoutsets>
</testimplementation>
The XML file “implementation.xml” must validate against the “testImplementation.xsd” XML schema as
defined in /SCHEMAS/testImplementation.xsd of the Test Suite.
When the timeout occurs (playout is played out for the specified time in seconds and no changePlayoutSet() call
was made), the next playout set to be played out is determined as follows: the id of the current playout set is
incremented by one, and the playout set with that new id is played out. If no such playout set exists, the playout
set with id 1 is played out.
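The rotation rule can be sketched as follows (harness-internal behaviour, shown only to illustrate the rule; playout set ids are the integers defined in implementation.xml):

// Returns the id of the playout set to play next when a timeout occurs.
function nextPlayoutSetId(currentId, definedIds) {
    var candidate = currentId + 1;
    return definedIds.indexOf(candidate) !== -1 ? candidate : 1;
}
// Example: with playout sets 1, 2 and 3 defined, the rotation is 1 -> 2 -> 3 -> 1.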
When changing playout sets (especially the ones containing audio/video), no seamless switching is required: continuity counter errors, PCR continuity problems, incomplete sections and/or tables may occur shortly after the switch.
For the purposes of this section, a reportXxx or analyzeXxx API call counts as test case activity that resets the
watchdog timer.
The watchdog timer is temporarily disabled during the following states:
• the DUT's network connection being disabled due to the current playoutset specifying <networkConnection available="NO"/>.
• the DUT rebooting, when triggered via initiatePowerCycle(). (The test harness may use a separate
timer to fail the test if the reboot doesn't work).
• analyzeXxx API calls that take a significant amount of time. E.g. video capture, manual analysis steps.
• other API calls that require a human to do a manual action. E.g. sendKeyCode if the tester is required
to manually press a key on the remote and indicate they have done that.
When the above states no longer apply, the watchdog timer is automatically reset and re-enabled. Ending those
states counts as test case activity that resets the watchdog timer.
By default, the watchdog timer shall have a timeout of at least 2 minutes, although harnesses may use a higher
value and may make this configurable.
A tag is defined that configures the harness with a minimum timeout that shall be respected. If this tag is used,
the watchdog timer's timeout shall be at least as long as the value specified.
<minimumTimeout duration="PT5M"/>
The required "duration" attribute is a duration in ISO 8601 format. Only seconds, minutes, hours and days are
supported. The seconds field may specify up to 3 digits after the decimal point for millisecond precision. The
harness may round this value up to whatever granularity the harness supports (e.g. whole seconds or whole
minutes). This value must be positive and nonzero.
Examples include:
• "PT5S" = 5 seconds
• "PT5.234S" = 5.234 seconds
• "PT5M" = 5 minutes
• "PT5M6S" = 5 minutes and 6 seconds
• "PT4H" = 4 hours
• "PT4H5M5.234S" = 4 hours 5 minutes and 5.234 seconds
• "P1D" = 1 day
• "P1DT5H5M5.234S" = 1 day and 5 hours 5 minutes and 5.234 seconds
For the purpose of this tag, a "day" always means 24 hours, even if the clocks go forward or back due to DST
changes while the test is running.
Test cases that need to run for more than 2 minutes between reportXxx or analyzeXxx API calls must use this
tag, unless the only time the test case does that is when it triggers one of the states listed above that temporarily
disables the watchdog.
Note that this is a minimum timeout, so a harness may choose to use a longer timeout or no timeout, and the
harness may choose to reset the watchdog timer upon other activity not listed here. Test cases that actually depend on a timeout should use a JavaScript timer; they should not depend on the watchdog.
The playout set definition XML files must validate against the “playoutsetDefinition.xsd” XML schema as
defined in /SCHEMAS/playoutsetDefinition.xsd of the Test Suite. The referenced NIT XML file(s) must
validate against the “nit.xsd” XML schema as defined in /SCHEMAS/nit.xsd of the Test Suite, and must use the
<nit> element from that schema as the root element. The syntax of the NIT XML file is defined in 7.4.4.3.3.
Any referenced SDT XML file must validate against the “nit.xsd” XML schema as defined in /SCHEMAS/nit.xsd of the Test Suite, and must use the <sdt> element from that schema as the root element. The syntax of the SDT XML file is defined in 7.4.4.3.5. Any referenced BAT XML file must validate against the
“nit.xsd” XML schema as defined in /SCHEMAS/nit.xsd of the Test Suite, and must use the <bat> element from
that schema as the root element. The syntax of the BAT XML file is defined in 7.4.4.3.4. The referenced AIT
table must validate against the AIT XML format as described in TS 102 809 [4], chapter 5.4. The referenced
StreamEvent description file must validate against the StreamEvent XML format as described in TS 102 809
[4], chapter with amendments in HbbTV Technical Specification section 9.3.1 [1] [20]. All other referenced data
is in binary format.
If a <nit> tag is being used, and no <nitOther>, then the <nitPid> tag is optional – the <nit> tag can go directly
inside <generatedData> in this (common) case. If <nitPid> is omitted, then the bitrate attribute can be specified
on the <nit> tag directly.
The <bat> element shall only use the ‘enabled’ attribute if the test case is testing Operator Applications under Annex D, as described in D.4.1 Playoutset extensions.
When creating an AIT, you can assume that the test is available via a pre-defined URL. For more information, see chapter 5.2 Test Environment of this document.
NOTE: The played out AIT version number is not necessarily identical to the version number specified in
the playout set definition XML file, as the test harness may add an offset (0, 8, 16, or 24) to that
version number, depending on the test run. However, the offset is constant during the run of a
single test case.
All bitrates are specified in bits per second (as integer values). Data should be played out with continuous
bitrates (no bursts). Due to packaging reasons, the bitrate may vary +/- 1504 bits on a specific second (1504 bits
per second is one TS packet per second). The overall specified bitrate should be achieved as closely as possible.
A multiplex within a playoutset shall not use any output PID more than once; if it does then this is an error and
no transport stream shall be generated. For the purpose of this rule, an output PID is used when:
A playout set with network connection set to NO must have a timeout value set to ensure that the Test Harness
will terminate this playout set and start a new one with network connection set to YES.
As reportStepResult might require a network connection, the test implementer must make sure that the network
connection is available at the end of the test, so the last playout set in a playout set definition shall always have
network connection set to YES.
To support the OpenCaster stream generator, StreamEvent objects need the extension ".event" in order to be interpreted as, and to create, a proper StreamEvent object in the DSM-CC. Without the ".event" extension the behaviour is undefined. So, for example, the following is required in the playout set XML streamEvent element:
<streamEvent dst="eventObject.event" src="ste.xml"/>
Each transport stream packet from the original transport stream shall only be multiplexed into the generated
transport stream if the PID of the packet is equal to the src attribute of one of the pid elements contained in the
transportStream element. In this case, the PID in the packet shall be replaced with the value in the dst attribute
of the pid element with a matching src attribute.
In the final generated transport stream, packets originating from the original transport stream shall occur at the same frequency at which they occurred in the original transport stream. The frequency of packets in the original transport stream shall be determined by reference to the location of the packets in the transport stream and the bitrate attribute, in bits per second, of the transportStream element.
The version field of the generated transport stream packets containing the AIT shall be set to the value of the
version attribute of the ait element, or 0 if no version attribute is defined. The harness may optionally implement
the behaviour described in 7.4.4.5.1.
The test harness shall read the file identified by the src attribute of the ait element. The harness shall encode the
data read from the XML definition into the generated transport stream packets as defined in clause 7.2.3.1 of
[1]. If the file identified by the src attribute does not contain a well-formed XML encoding of an AIT, as defined
in 5.4 of [4] (including the <ParentalRating> extension defined in clause 7.2.3.2 of [1]), then the test harness
shall not generate the transport stream.
If the XML definition includes a <ParentalRating> extension with ‘region’ attribute equal to ‘{auto}’, then the test harness shall check if the value ‘PARENTAL_REGION_DVB_SI’ was listed in a <setting> sub-tag of the <adaptsTo> tag, as described in Section 6.3.4.1 adaptsTo. If so, the harness will use the region code configured by the tester. If not, this is a test case bug, and the harness will cause the test case to fail automatically.
The harness shall generate the application_icons_descriptor from the <icon> tags in the XML AIT. Note that the
Operator Applications specification [37] includes a correction to the XML schema in TS 102 809 [4]
specification that allows an XML AIT to specify multiple icons, like the broadcast AIT; it also changes the
definition of the version field.
The broadcast AIT may need to refer to the URL of the configured HTTPS server. To do this, inside
<mhp:URLBase> and <mhp:BoundaryExtension> tags the string “{https}” shall expand to the URL of the
HTTPS server without a trailing slash, for example “https://hbbtv1.test” or
“https://operator.example.com:1234”.
Typical usage:
<mhp:BoundaryExtension>{https}</mhp:BoundaryExtension>
Or:
<mhp:URLBase>{https}/_TESTSUITE/TESTS/org.hbbtv_123/</mhp:URLBase>
Tests that do not configure the HTTPS server in their implementation XML file must not use the “{https}” extension described above.
7.4.4.3.2 DSM-CC
For each dsmcc element contained in the generatedData element of the playout set definition the harness shall
generate transport stream packets and multiplex them into the final generated transport stream such that the
generated bits occur at the rate, in bits per second, given by the bitrate attribute of the dsmcc element. If no
bitrate attribute is specified then the default value of 100000 bits per second shall be used. The generated
transport stream packets shall have their PID set to the value defined by the pid attribute of the dsmcc element.
The test harness shall generate a DSM-CC object carousel as defined in clause 7 of [4] containing a file system.
The contents of the directory identified by the source_folder attribute of the dsmcc element shall be included at
the root of the generated file system, i.e. files contained directly in the indicated directory shall be at the root
level of the carousel, subdirectories of the indicated directory shall be subdirectories of the carousel, etc. The
directory identified by the source_folder attribute may be empty.
For each directory element contained in the dsmcc element, the contents of the directory indicated by the src
attribute shall be available in the generated file system at the location indicated by the dst attribute. The location
specified by dst is a path relative to the root of the generated file system.
For each file element contained in the dsmcc element, the file indicated by the src attribute shall be available in
the generated file system at the location given by the dst attribute.
For each streamEvent element contained in the dsmcc element, the stream event described by the contents of the
file identified by the src attribute shall be available in the generated file system at the location given by the dst
attribute. If the file identified by the src attribute is not a valid XML document according to the schema defined
in 8.2 of [4] then the transport stream shall not be generated.
The location specified by the dst attribute of the directory, file and streamEvent elements is a path relative to the
root of the generated file system. All the parent directories of a location specified in a dst attribute must have
been defined by either the directory structure identified by the source_folder attribute of the parent dsmcc
element, or by the directory structure resulting from a sibling directory element.
For all paths in the generated file system referred to in file and directory elements the test harness shall treat the
character ‘/’ (ASCII 47 / Unicode U+002F) as a path separator. The test harness may also treat the character ‘\’
(ASCII 92 / Unicode U+005C) as a path separator. The test harness shall support relative paths, and shall
support the referencing of files from any location within the test suite associated with the playout set definition.
If the test harness is unable to locate any one of the indicated file system assets then the test harness shall not
generate the transport stream. If the contents of the dsmcc element result in an ambiguous definition for the
structure of the constructed carousel (e.g. the dst attribute of a file element refers to a path already defined by
the contents of the source_folder attribute) then the transport stream shall not be generated. The test harness may
choose not to generate the transport stream if src or source_folder attributes reference file system locations
outside the test suite associated with the playout set definition.
The generated DSM-CC carousel shall use the following attributes of the dsmcc element as specified:
• association_tag: this value shall be used for the DSM-CC elementary stream’s association tag. If the
value is outside the legal range then the transport stream shall not be generated.
• carousel_id: this value shall be used for the DSM-CC carousel ID. If the value is outside the legal
range then the transport stream shall not be generated.
• version: this value shall be used for the version number of the module/DII. If not specified, 0 shall be used. The harness may optionally implement the behaviour described in 7.4.4.5.1. If the value is outside the legal range then the transport stream shall not be generated.
7.4.4.3.3 NIT
By default, the Test Harness shall generate a NIT Actual and insert it into the DVB broadcast that is being
played out, as described in 7.4.4.4. This default NIT Actual may not be suitable for all tests, so the test case can
specify the NIT Actual in XML format, and the Test Harness will generate the specified NIT Actual. The test
case may also specify NIT Other(s) in XML format, and the Test Harness will generate them in addition to the
(default or specified-in-XML) NIT Actual.
Only a single NIT Actual is supported per generated TS, but an unlimited number of NIT Others are supported.
The “src” attribute specifies the path to the NIT XML file. It is relative to the directory containing the
playoutset XML file.
The generated NIT PID shall repeat all the NIT(s) specified in rotation. The optional “bitrate” attribute on
<nitPid> (or, if <nitPid> is not used, on <nit>) specifies the total TS bitrate for the NIT PID. This includes all
the TS packet overhead. If the bitrate is not specified, then the harness shall calculate the bitrate needed to
repeat all the NITs once per second.
NOTE: For a NIT that fits in one 188-byte TS packet and is repeated at the DVB specified minimum rate of every 10 seconds, the required bitrate = 188 bytes * 8 bits/byte / 10 seconds = 150.4 bits per second. The bitrate has to be specified as an integer, so you can round that, making sure to round up to stay inside the 10 second limit, giving 151 bits/sec.
The NIT XML defines a customized NIT to be generated by the Test Harness. The root element of NIT XML files must be the <nit> element defined in nit.xsd.
<nit nid="<0-65535>" version="<0-31>">
  <network>
    <networkNameDescriptor><text></networkNameDescriptor>
  </network>
  <transportStream onid="<0-65535>" tsid="<0-65535>">+
    <autoDeliverySystemDescriptor multiplex="<multiplex>"/>
    <serviceListDescriptor>
      <service sid="<0-65535>" type="<serviceType>"/>+
    </serviceListDescriptor>
    <privateDataSpecifierDescriptor value="<value>"/>
    <rawDescriptor tag="<0-255>">
      <hex_bytes>
    </rawDescriptor>
    <linkageDescriptor type="<0-7><32-255>" onid="<0-65535>" tsid="<0-65535>" sid="<0-65535>"/>
  </transportStream>
</nit>
All numbers in this file are decimal, except where specifically noted, because that is the standard XML Schema
method of representing a number.
The <autoDeliverySystemDescriptor> tag inserts the correct delivery system descriptor for the current multiplex
in the playoutset (i.e. the multiplex we’re inserting the NIT into), taking into account the configured DVB type
and the configured DVB modulation settings. This allows the same test to work in DVB-T, C, and S.
The delivery system used, and hence the content of the delivery system descriptors is implementation
dependent. The inserted delivery system descriptors shall be the same as those returned by the
getPlayoutInformation test API defined in 7.2.4.
Modulation    Descriptor(s)
DVB-T         terrestrial_delivery_system_descriptor
DVB-T2        T2_delivery_system_descriptor
DVB-C         cable_delivery_system_descriptor
DVB-S         satellite_delivery_system_descriptor
DVB-S2        satellite_delivery_system_descriptor and S2_satellite_delivery_system_descriptor
<network> and <transportStream> are both descriptor loops, so they can include any of the following descriptor
tags an unlimited number of times in any order. The generated NIT will contain the relevant descriptors in the
order that they were specified in the XML.
It is an error to specify any descriptor with a descriptor payload larger than 255 bytes.
Each generated NIT is limited to a single section. It is an error to specify a NIT larger than that.
The NIT will be signalled as a “NIT actual” or “NIT other” depending on which XML tag was used in the
playoutset to include it. The “current_next_indicator” will always indicate that the NIT is current. “Next” NIT
is not supported.
If the network ID is not set using the “nid” attribute on the NIT, then the harness shall fill it in automatically,
using the Network ID configured in the harness if running OpApp tests as described in Annex D section D.2.2
Discovery settings or the standard Network ID used for the base stream as described in 5.2.3
Base Test Stream if running regular HbbTV tests.
The NIT used by OpApp test cases shall not use version number 6. For details, see section D.4.5 OpApp Launcher Stream.
7.4.4.3.4 BAT
The NIT XML schema has been extended to support generation of BATs and SDTs as well as NITs. The
majority of the schema is shared. All the rules about descriptors etc in NITs also apply to BATs.
The root element of BAT XML files must be the <bat> element defined in nit.xsd.
If the Bouquet ID is not set using the “bouquetId” attribute on the BAT, then the harness shall fill it in
automatically, using the Bouquet ID configured in the harness. It is an error if there is no Bouquet ID
configured in the harness. For an OpApps test case testing under Annex D, if the bilateral agreement includes a
Bouquet ID, then that must be the one configured in the harness. (see D.2.2 Discovery settings)
7.4.4.3.5 SDT
The NIT XML schema has been extended to support generation of SDTs and BATs as well as NITs. The
majority of the schema is shared. All the rules about descriptors etc in NITs also apply to SDTs.
The SDT XML file defines a SDT Actual to be generated by the Test Harness. The root element of SDT XML
files must be the <sdt> element defined in nit.xsd.
The following example SDT XML file will generate the same SDT Actual as the one in the base stream:
<?xml version="1.0" encoding="utf-8"?>
<sdt xmlns="http://www.hbbtv.org/2016/nit"
tsid="1" onid="99" version="1">
<service sid="10" eitSchedule="true"
eitPresentFollowing="true"
runningStatus="4" ca="false">
<serviceDescriptor type="mpeg2-sd-tv"
provider="HbbTV.org"
name="ATE Test 10"/>
</service>
<service sid="11" eitSchedule="true"
eitPresentFollowing="true"
runningStatus="4" ca="false">
<serviceDescriptor type="mpeg2-sd-tv"
provider="HbbTV.org"
name="ATE Test 11"/>
</service>
<service sid="12" eitSchedule="true"
eitPresentFollowing="true"
runningStatus="4" ca="false">
<serviceDescriptor type="mpeg2-sd-tv"
provider="HbbTV.org"
name="ATE Test 12"/>
</service>
<service sid="13" eitSchedule="true"
eitPresentFollowing="true"
runningStatus="4" ca="false">
<serviceDescriptor type="mpeg2-sd-tv"
provider="HbbTV.org"
name="ATE Test 13"/>
</service>
<service sid="14" eitSchedule="true"
eitPresentFollowing="true"
runningStatus="4" ca="false">
<serviceDescriptor type="radio"
provider="HbbTV.org"
name="ATE Test 14"/>
</service>
</sdt>
All numbers in this file are decimal, because that is the standard XML Schema method of representing a
number, except where specifically noted. The <service> tags describe the services to encode into the SDT
Actual. They will be encoded in the order they are specified in the XML SDT. Each <service> tag has:
• “sid” attribute for the service ID
• “eitSchedule” for flag to indicate that EIT schedule information for the service is present in the current
TS
• “eitPresentFollowing” for indicating that EIT_present_following information for the service is present
in the current TS
• “runningStatus” for indicating the status of the service.
• “ca” for indicating whether service is scrambled or not. If set to true, service is scrambled. Default is
false.
If the SDT contains a <hbbtvUriLinkageDescriptor> with enabled="auto", then the SDT XML file is invalid. If
the SDT contains a <hbbtvUriLinkageDescriptor> where the “uri” attribute contains “{opapp-fqdn}” or
“{opapp-fqdn-2}” then the SDT XML file is invalid.
If the first (or only) DVB multiplex doesn't specify an XML NIT Actual then it behaves as if the following
XML NIT Actual was specified:
<?xml version="1.0" encoding="utf-8"?>
<nit xmlns="http://www.hbbtv.org/2016/nit"
nid="99" version="0">
<network>
<networkNameDescriptor>HBBTV_A</networkNameDescriptor>
</network>
<transportStream onid="99" tsid="1">
<autoDeliverySystemDescriptor />
<serviceListDescriptor>
<service sid="10" type="mpeg2-sd-tv"/>
<service sid="11" type="mpeg2-sd-tv"/>
<service sid="12" type="mpeg2-sd-tv"/>
<service sid="13" type="mpeg2-sd-tv"/>
<service sid="14" type="radio"/>
</serviceListDescriptor>
<privateDataSpecifierDescriptor value="40"/>
</transportStream>
</nit>
If there is a second DVB multiplex and it doesn't specify an XML NIT Actual then it behaves as if the following
XML NIT Actual was specified:
<?xml version="1.0" encoding="utf-8"?>
<nit xmlns="http://www.hbbtv.org/2016/nit"
nid="65281" version="0">
<network>
<networkNameDescriptor>HBBTV_B</networkNameDescriptor>
</network>
<transportStream onid="99" tsid="2">
<autoDeliverySystemDescriptor />
<serviceListDescriptor>
<service sid="15" type="mpeg2-sd-tv"/>
<service sid="16" type="mpeg2-sd-tv"/>
<service sid="17" type="mpeg2-sd-tv"/>
<service sid="18" type="mpeg2-sd-tv"/>
<service sid="19" type="radio"/>
</serviceListDescriptor>
<privateDataSpecifierDescriptor value="40"/>
</transportStream>
</nit>
The generated transport stream should have a data rate matching that required by the parameters of the inserted
delivery descriptors.
EXAMPLE: Some DUTs may implement an application caching model which relies on the AIT version
numbering scheme. If this is not used then the DUT may need to be power cycled or
service changed between tests.
• test_run is a positive integer incremented by 1 every time the harness starts a test
• base_version is the value of the version attribute of the dsmcc element, or 0 if no version attribute is
defined
• Given a stream which was recorded at a specific time, the “baseStreamStartTime”, if we’re playing the
stream out at a later date then the test harness has to adjust the EIT appropriately so that events that
were “one hour after baseStreamStartTime” become “one hour after the test started”.
• It’s usually better if the event start times are a multiple of 1min (or even 5min), so there is a
configurable “granularity” which controls the granularity of the adjustment. This defaults to 1min.
• This mechanism also allows the EIT section version numbers to be adjusted, so the RUT detects the
change
• The test harness ensures that the generated EIT-schedule data complies with the DVB SI Guidelines.
To do this, the EIT-schedule has to be pulled apart and regrouped into sections.
Test authors shall ensure that each playout set using <adjustEit/> uses the same section version number on all
EIT-schedule sections relating to a single service. (I.e. a change in EIT-schedule data can only happen when the
playout set is changed; but a change in EIT-pf can happen at any point in the stream).
The test harness shall update the EIT using the following process:
1) First calculate the ‘EIT Offset’, which is a positive number of seconds, by:
a) Calculate the number of seconds between the date/time given by the baseStreamStartTime
attribute on the <adjustEit/> tag, and the actual time the harness starts running the test. (For tests
with more than one playout set, note that this is the time the test is started, not the time the
playout set is started).
b) Round that down to a multiple of the value given by the granularity attribute on the <adjustEit/> tag. (A sketch of this calculation is given after this procedure.)
2) Generate updated EIT-schedule sections as follows:
a) Parse all EIT-schedule sections in the generated TS, and extract the events and section version number for each service. Also remember the maximum number of sections seen in a single TS packet on the EIT PID.
b) For every event extracted in step 1, adjust the start_time value in every event loop inside the EIT
by adding ‘EIT Offset’ seconds to the encoded date/time. If the start_time does not encode a valid
date/time (e.g. if all bits of the field are set to "1", as explicitly allowed by [18]) then it is not
adjusted
c) For each service, increment the version_number by the value given by the incrementVersion
attribute on the <adjustEit/> tag. The addition shall be done modulo 32.
d) For each service, generate EIT-schedule sections containing the events, following the rules in
ETSI TS 101 211 to sort events and to assign events to sections. Note that calculation of “last
midnight” must be based on the time the test is started
3) Rebuild the EIT PID as follows:
a) TS packets that are not on the EIT PID are not modified by this algorithm
b) EIT-schedule sections are replaced with the updated EIT-schedule sections from step 2
a. Adjust the start_time value in every event loop inside the EIT by adding ‘EIT Offset’
seconds to the encoded date/time. If the start_time does not encode a valid date/time (e.g.
if all bits of the field are set to "1", as explicitly allowed by [18]) then it is not adjusted
c. Adjust the section CRCs accordingly. If the CRC was correct before the changes above,
then the CRC shall be correct on the modified section. If the CRC did not match before
the changes above, then the CRC shall not match on the modified section.
e) The repacking of sections into the EIT PID shall respect the maximum number of sections in a
single TS packet as measured in step 2a.
f) Note that due to the repacking of sections on the EIT PID, some EIT-pf sections and non-EIT
sections will slightly change position in the stream. This is expected to be negligible unless you
are using extremely low bitrates for the EIT PID. (In the worst-case, the delay is approximately
the size of 2 EIT sections).
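A minimal sketch of the ‘EIT Offset’ calculation from step 1 above (times in seconds; granularity is the value of the granularity attribute converted to seconds; all names are illustrative):

// Step 1a: seconds between baseStreamStartTime and the actual test start time.
// Step 1b: rounded down to a multiple of the granularity.
function calculateEitOffset(testStartTimeSeconds, baseStreamStartTimeSeconds, granularitySeconds) {
    var delta = testStartTimeSeconds - baseStreamStartTimeSeconds;
    return Math.floor(delta / granularitySeconds) * granularitySeconds;
}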
ARGS: playoutSetId: the id of the playout set to start playing out. If the playout set with this id is not
defined for the test then the test shall fail.
callback/callbackObject: a callback function to invoke after the new playout has started (also see
chapter 7.2.3).
As audio/video might show some artefacts, test case implementers should not perform any broadcast audio/video checks for up to 5 seconds after a switch of playout set occurs.
NOTE: To switch from no network to an available network connection, use the timeout parameter in the
playout set.
ARGS: bitrate: the maximum “nominal bitrate” (see below) into the device under test, in units of bits per second (bps). Values less than 1 bps are not supported, and shall cause the test to fail.
callback/callbackObject: a callback function to invoke when the restriction has been applied (also see chapter 7.2.3 Callbacks).
The maximum throughput is calculated by multiplying the specified maximum nominal bitrate by 1.4.
Throughput is defined as the average number of bits per second passed in to the transmitting network interface
at the Ethernet frame layer. This count includes all HTTP, TCP/UDP and IP overheads, and includes Ethernet
destination MAC address, source MAC address and length fields. But it does not include the other Ethernet
overheads such as the preamble, Start Frame Delimiter, the padding to the minimum payload size if necessary,
the frame check sequence (CRC) field and the interpacket gap.
Note: For example, the calculated maximum throughput may be used directly as the "rate" parameter to the
Linux TC TBF qdisc. In this case, the "overhead" and "mpu" parameters should not be used or should be zero.
Note: This value of 1.4 was chosen in an attempt to account for HTTP, TCP/UDP, IP etc. overheads. It is not a perfect conversion factor, but cannot be changed for historical reasons: changing it would require revalidating all the test cases that use this API. The restriction only applies to data received by the device under test (i.e. data sent by the test harness). Transmission of data from the device under test shall not be restricted.
Calls to this function set an upper limit on the permitted throughput. The maximum throughput achievable from
the test harness may be limited to a lower value by other factors (e.g. bus or CPU saturation.) Test authors
should take this into account when deciding appropriate values for bitrate.
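As a worked illustration of the rule above (the factor of 1.4 is defined by this specification; the function name is illustrative):

// Converts the "bitrate" argument (nominal bitrate, bps) into the throughput limit
// applied at the Ethernet frame layer, as defined above.
function maxThroughputBps(nominalBitrateBps) {
    return nominalBitrateBps * 1.4;
}
// Example: a nominal bitrate of 500000 bps corresponds to a throughput limit of 700000 bps.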
The table below shows which functions may be used for each configurable state of the CICAM (these functions
are marked ●). Behaviour of functions if they are called other than as permitted by Table 5 is undefined, and the
harness may abort the test.
• Use of functions that communicate with the CICAM when the CICAM is not inserted in the host
• Use of functions that communicate with the CICAM when the ‘CAM’ XML element (see 5.2.1.7.5) is
not included in the implementation XML.
If values are not valid then error handling behaviour shall be as defined in 7.4.7.1.
If a harness does not support use of a CICAM then the ‘cicam’ property may have the value ‘undefined’. If the
‘cicam’ property does not have the value ‘undefined’ then this shall not be taken to mean that the harness
supports the use of CICAM or that a CICAM is available. Tests shall request the use of a CICAM by the use of
the XML syntax defined in 5.2.1.7.5.
The ‘cicam’ prefix is shown in the title of the following sections, but not in the function definitions.
ARGS: callback / callbackObject: callback function invoked if the function call succeeds
In addition to the above behaviour, calling this function is equivalent to calling the following methods:
• cspgcipSetScramblingEnabled(true)
• cspgcipSendURI(copy freely)
ARGS: enabled: If ‘true’ then transport stream descrambling will be enabled, subject to the other
requirements of the CI+ specification (e.g. even if this has been set to true, if host authentication
fails then descrambling shall not take place). If ‘false’ then transport stream descrambling shall
cease if it is currently being carried out, and not restart.
The state set by this function shall not be preserved across CICAM removal / re-insertion and CICAM reboots.
ARGS: callback / callbackObject: a callback function to invoke when the information is available.
where the cspgcipInformationObject is an associative array containing the following key/value pairs, where
each value is a boolean:
tsDetected: true if a transport stream is being input to the CICAM, otherwise false
The value returned for ‘descrambling’ indicates that the CICAM has authenticated the host, has a transport
stream input and has retrieved sufficient CA system information to configure and start its descrambler. It does
not guarantee that descrambling is taking place correctly or that the output video is correctly descrambled: if this
information is required then it must be confirmed with an analyzeScreenExtended or analyzeVideoExtended, as
appropriate.
ARGS: oipf_status: value to be used for the oipf_status of the reply_msg as defined in 4.2.3.4.1.1.4 of
[29]
ca_system_id: value to be used for the ca_system_id of the SAS_async_msg APDU encoding the
reply_msg.
ca_system_id: value to be used for the ca_system_id of the SAS_async_msg APDU encoding the
reply_msg.
ca_system_id: value to be used for the ca_system_id of the SAS_async_msg APDU encoding the
reply_msg.
ca_system_id: value to be used for the ca_system_id of the SAS_async_msg APDU encoding the
reply_msg.
ARGS: timeout: a time (in milliseconds) after which the CICAM should stop waiting. Values of timeout
must be in the range 1-300000. Values outside this range shall cause the test to fail.
oipf_status: If null then no reply_msg shall be sent. Otherwise a value to be used for the
oipf_status of the reply_msg as defined in 4.2.3.4.1.1.4 of [29]
• the CICAM has begun to wait for a message to be received. In this case the waitState parameter will
have the value 1, and the receivedInformationObject parameter shall be null.
• a message has been received by the CICAM from the host. In this case the waitState parameter will
have the value 2, and receivedInformationObject will be an associative array containing the following
key/value pairs;
• the timeout expires. In this case the waitState parameter will have the value 3, and
receivedInformationObject shall be null.
The callback shall be called a maximum of twice in response to any single call to waitForMessage, with each
value of waitState passed at most once. The first call to the callback shall always be with a waitState value of 1.
NOTE: This function only waits for a single SAS_async_msg and only sends a single reply_msg.
ARGS: uri: a URI as defined in 5.7.5.2 of [14]. The URI specified shall be in v2 format.
ARGS: data: Data about the multiplexes for which to set the signal level. Must be an array, with a length
that matches the length of the array retrieved by getPlayoutInformation(). Each entry in the
“data” array corresponds to the multiplex in the same position in the array retrieved by
getPlayoutInformation(). If a multiplex is not a DVB multiplex, then the corresponding entry in
“data” must be null. Otherwise the corresponding entry in “data” must be a number (fractions
allowed) and is the signal level in dBm. See the getPlayoutInformation() API for the valid range.
callback/callbackObject: a callback function to invoke after the new playout has started (also see
chapter 7.2.3). The callback function will be called with the same parameters as for
getPlayoutInformation().
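For illustration, a sketch of the “data” argument, assuming getPlayoutInformation() reported two multiplexes where only the first is a DVB multiplex (the dBm value is illustrative):

// One entry per multiplex, in the same order as the array returned by getPlayoutInformation().
var data = [
    -60.5,   // DVB multiplex: signal level in dBm
    null     // non-DVB multiplex: entry must be null
];
// "data" is then passed as the first argument of the signal-level call described above.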
In this case, the application shall contain a simple_application_boundary_descriptor giving access to the web server (in the above case to http://hbbtv1.test/).
• Have an AIT with an AUTOSTART app that stores a cookie which counts the times that the
application was started - doing this will make it possible to verify that the application was started
twice, and therefore it must have been killed in between. This solution will only work if cookies are
supported (only for broadband connection).
• Have an AIT with an AUTOSTART app that, as soon as it is suitable for the test to give the correct
result, changes AIT on the current service (by changing the playout set). The new AIT then carries the
following applications:
- an AUTOSTART app which will be the app that is fired up once the currently running app has
been killed to verify that the previous app got killed.
• By using multiple services (this avoids changing the playout set): service A contains the application 1
as AUTOSTART, which is responsible for performing the test. The application tunes to service B,
which contains application 1 as PRESENT and application 2 as AUTOSTART, which will do the
endTest() call. The application 1 will keep on running until it is killed, which then will start application
2.
For manual analysis, always make sure that the changes to the screen/audio/video only happen after the callback
is received, which tells you that the analysis is done.
Calls to the analyze functions of the JS API shall be made only when the network connection is enabled, to allow the Testing API to trigger the capturing of the screen or audio signal. The test might fail otherwise.
As analysis may happen offline, the analysis result is not known to the test implementation and will not be
reported back in the callback. If analysis fails, the complete test will fail. So if you perform an analyze call (e.g.
check whether a specific pixel has a red colour) that fails (pixel is not red), this will make sure that the complete
test case fails.
In most other cases, a playout set should not have a timeout in order to allow slow implementations to still pass
the test.
Except for very special test cases (with a good reason not to do so), all playout sets shall signal and include all services ATE test 10 – ATE test 14 (see chapter 7.2.2).
The playout set definition should NOT include a NIT, which is inserted by the test harness for the tested
delivery type (e.g. DVB-S, DVB-C, or DVB-T).
In addition to that, the playout set shall contain the PMTs and referenced elementary streams for the various
services that are used by your test.
7.5.1.7 Choose the correct application and organization ID for your application
Test applications should stick to the following ID guidelines:
• The organization ID should be the ID assigned to the HbbTV consortium. This is: 0x70.
g) If it cannot be interpreted as a hex number, take the SHA-1 hex digest of the local part
2) When a test case requires more than one application ID, add multiples of 0x100 (256 decimal) for each
additional application ID required.
3) While the resulting ID is greater than 0x3fff (16383 decimal), subtract 0x3fff from the ID.
4) If the application is intended to be in the signed range (for trusted API calls), add the offset 0x4000 to the resulting application ID. (A sketch of steps 2 to 4 is given below.)
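A sketch of steps 2) to 4) above, assuming a base application ID has already been derived from the test case identifier by the earlier steps (which are not reproduced in full here); the function name is illustrative:

// baseId: ID derived from the test case identifier by the earlier steps.
// extraAppIndex: 0 for the first application ID, 1 for the second, and so on.
// signed: true if the application ID should be in the signed (trusted) range.
function deriveApplicationId(baseId, extraAppIndex, signed) {
    var id = baseId + extraAppIndex * 0x100;   // step 2: add 0x100 per additional application ID
    while (id > 0x3fff) {                      // step 3: fold back into the allowed range
        id -= 0x3fff;
    }
    if (signed) {
        id += 0x4000;                          // step 4: move into the signed range
    }
    return id;
}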
1) Must use the OIPF DOCTYPE. OIPF DAE 1.2 only permits the 'strict' or 'transitional' XHTML
doctypes, as defined in CEA-2014-A (Annex G). OIPF DAE 2.3 permits HTML5 doctypes as defined
in 8.1.1 of HTML5 (W3C Candidate Recommendation 6 August 2013).
2) Must EITHER have an initial page called index.html or index.cehtml in the test directory, OR must use
the OIPF XML extensions in implementation.xml to name the initial page
• Analyze calls may be performed in offline mode. You don’t know the analyze result when the callback is performed; you only know that you may continue with your test.
• Wait for the callback when using an analyze call before changing the output (e.g. when calling
analyzeScreenPixel wait until the callback has returned before changing the screen) to make sure that
the analysis can be performed on the correct output.
• Make sure network connection is available when calling any API call except endTest(),
reportStepResult(), sendMessage() and waitForCommunicationCompleted().
• If an analyze call fails, this means that the test fails. The test environment may then terminate the test even though it is still running, or it may wait until the test terminates itself.
• HbbTV does not require monitoring of the AIT while broadband video is played back. A good test
should not change the AIT during playback (only when testing very special cases).
• Before calling endTest(), the test should try to unregister all listeners and stop all broadband video
playback. This makes it easier to run the following test.
(Diagram: a Test Case running on the DUT, testsuite.js, and the DIAL Client service within the Test Harness on the Test Harness PC.)
The diagram shows how a Test Case running on the DUT can make calls to testsuite.js to ask for DIAL
operations to perform. The Test Harness uses some proprietary protocol (likely an XmlHttpRequest) to pass
those requests from testsuite.js to the DIAL Client service that is part of the Test Harness running on the Test
Harness PC. The DIAL Client service then actually does the requested DIAL operations (such as DIAL searches
and sending DIAL requests). The Test Harness then communicates the result to the Test Case.
• Getting the UPnP Device Description file from a DIAL server, which gets the URL for DIAL
applications
• Checking that starting an HbbTV Application via DIAL is allowed from a different Origin
These APIs are accessed via the ‘dial’ property on the HbbTVTestAPI object created by the test. The ‘dial’
property is an object with (at least) the methods specified here.
This function returns immediately. On success, it calls the provided callback function. On failure, the test
harness shall cause the complete test to fail automatically and shall not call the callback. Failure can be caused
by:
• no SSDP responses,
• multiple different SSDP responses when the harness has not been configured to use a specific one (see
next two paragraphs), or
The test harness shall accept SSDP M-SEARCH responses conformant to either UPnP Device Architecture 1.0
[35] or UPnP Device Architecture 1.1 [36].
Optionally, a test harness may support filtering SSDP responses, so only the correct one is returned to the test.
E.g. this may be useful if the Companion Screen happens to implement a DIAL server, and the Companion
Screen is left on the network when tests using this API are run. If supported, the mechanism for configuring the
specific SSDP response to use is harness-dependent and out of scope for this specification. Test harnesses are
not required to implement filtering SSDP responses, because there are no tests that both use this API and require
a real Companion Screen, so the tester could always turn the Companion Screen off while running tests that use
this API.
void HbbTVTestAPI.dial.doMSearch(
callback : function,
callbackObject : object);
ARGS: callback / callbackObject: callback function invoked when the harness has a SSDP response as:
void callback(
callbackObject : object,
locationHeader : string);
where:
locationHeader: value of the LOCATION header of the M-SEARCH response (i.e. the URL).
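A minimal usage sketch (this assumes the test has already created its HbbTVTestAPI object as testApi; the
contents of the callback object are purely illustrative):

var ctx = { purpose: "discover DIAL server" };
testApi.dial.doMSearch(function(callbackObject, locationHeader) {
    // Called only on success. locationHeader is the value of the LOCATION header
    // from the M-SEARCH response, i.e. the URL of the UPnP device description.
    // callbackObject is the same object passed in below (ctx).
}, ctx);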
This function returns immediately. When resolving the address is finished, whether successful or not, it calls the
provided callback function.
This function shall handle all the hostname formats that are allowed in URIs (see section 3.2.2 of RFC 3986
[22]). This means dotted decimal IPv4 addresses, IPv6 addresses enclosed in square brackets, and DNS names.
void HbbTVTestAPI.dial.resolveIpV4Address(
hostname : string,
callback : function,
callbackObject : object);
callback / callbackObject: callback function invoked when the harness has finished resolving the
IP address as:
void callback(
callbackObject : object,
dottedIpv4 : string or null);
where,
dottedIpv4: string containing a dotted IPv4 address in the usual format (e.g.
“123.245.67.189”) if the specified hostname could be resolved, or null if the hostname was
not resolvable. This address shall not have leading zeros on any component (i.e. it can be
“1.2.0.3” but not “001.002.000.003”).
Some examples that are not resolvable and shall call the callback with null include: IPv6 addresses enclosed in
square brackets, DNS names that do not exist (i.e. NXDOMAIN response), DNS names that cannot be resolved
due to DNS timeouts, and any string that cannot be interpreted as a valid hostname according to RFC 3986 [22].
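A usage sketch (testApi is assumed to be the test's HbbTVTestAPI object; the hostname value is illustrative):

testApi.dial.resolveIpV4Address("device.hbbtv1.test", function(callbackObject, dottedIpv4) {
    if (dottedIpv4 === null) {
        // The hostname could not be resolved to an IPv4 address.
    } else {
        // e.g. "192.168.1.23" – dotted decimal with no leading zeros in any component.
    }
}, {});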
This function shall not do any Same-Origin or CORS [28] security checks. However, the parameters and results
of this function are sufficient for a test case to test the DUT’s Cross-Origin support (see HBBTV [1], Section
14.8).
void HbbTVTestAPI.dial.getDeviceDescription(
url : string,
origin : string or null,
callback: function,
callbackObject : object);
ARGS: url: string defining the DIAL LOCATION URL. Typically, this URL will have been obtained
from an earlier call to the dial.doMSearch() function.
origin: either null, in which case the request shall not include an Origin header, or else a string, in
which case the request shall include an Origin header with the value specified in that string.
callback / callbackObject: callback function invoked when the harness has successfully
completed the HTTP request as:
void callback(
callbackObject : object,
applicationUrl : string or null,
didRedirect : bool,
allowOrigin : string or null);
where,
applicationUrl: string containing the value of the Application-URL header returned with the
UPNP device description file, or null if the DUT did not return that header.
didRedirect: true if the DUT did a HTTP redirect (in violation of the DIAL specification
[30]), or false otherwise.
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
This function shall not do any Same-Origin or CORS [28] security checks. However, the parameters and results
of this function are sufficient for a test case to test the DUT’s Cross-Origin support (see HBBTV 1.4.1 [1],
Section 14.8).
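A sketch of a cross-origin check using this function (testApi, the locationHeader variable obtained from an
earlier doMSearch() call, and the Origin value are illustrative; the comment on allowOrigin reflects the usual
CORS meaning and is an assumption here):

var ctx = {};
testApi.dial.getDeviceDescription(locationHeader, "http://other-origin.test",
    function(callbackObject, applicationUrl, didRedirect, allowOrigin) {
        // applicationUrl: value of the Application-URL header, or null.
        // didRedirect: true if the DUT performed an HTTP redirect (a DIAL violation).
        // allowOrigin: the Access-Control-Allow-Origin value returned by the DUT, or null;
        // the test case can compare it with the Origin it sent.
    }, ctx);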
void HbbTVTestAPI.dial.getHbbtvAppDescription(
applicationUrl : string,
schemaValidation : boolean,
origin : string or null,
callback : function,
callbackObject : object);
ARGS: applicationUrl: string defining the DIAL Application Resource URL. Typically, this URL will be
based on the DIAL Application URL obtained from an earlier call to the
dial.getDeviceDescription() function, plus a trailing slash (‘/’) character if not already present, and
followed by the Application Name ‘HbbTV’ (see [1], Section 14.7.2).
schemaValidation: if true, the harness shall do XML Schema Validation on the returned XML
document, using the HbbTV DIAL Application resource schema (see [1], Section 14.7.2). In that
case, if schema validation fails, then the harness shall fail the test automatically and shall not call
the callback. Else if false the harness shall not do XML Schema Validation.
origin: either null, in which case the request shall not include an Origin header, or else a string, in
which case the request shall include an Origin header with the value specified in that string.
callback / callbackObject: callback function invoked when the harness has successfully
completed the HTTP request as:
void callback(
callbackObject : object,
app2appUrl : string or null,
interdevSyncUrl : string or null,
userAgent : string or null,
allowOrigin : string or null);
where,
app2appUrl: string containing the value of the X_HbbTV_App2AppURL element from the
HbbTV namespace in the DIAL HbbTV Application description, or null if the DUT did not
return that element.
userAgent: string containing the value of the X_HbbTV_UserAgent element from the HbbTV
namespace in the DIAL HbbTV Application description, or null if the DUT did not return that
element.
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
This function shall not do any Same-Origin or CORS [28] security checks. However, the parameters and results
of this function are sufficient for a test case to test the DUT’s Cross-Origin support (see HBBTV 1.4.1 [1],
Section 14.8).
void HbbTVTestAPI.dial.startHbbtvApp(
applicationUrl : string,
pathToAitXml : string,
origin : string or null,
callback : function,
callbackObject : object);
ARGS: applicationUrl: string defining the DIAL Application Resource URL. Typically, this URL will be
based on the DIAL Application URL obtained from an earlier call to the
dial.getDeviceDescription() function, plus a trailing slash (‘/’) character if not already present, and
followed by the Application Name ‘HbbTV’ (see [1], Section 14.7.2).
pathToAitXml: string defining path to AIT XML relative to the directory containing the testcase
XML file, using a trailing slash (‘/’) as a path separator. This XML file is sent to the DUT as the
body of this HTTP POST.
origin: either null, in which case the request shall not include an Origin header, or else a string, in
which case the request shall include an Origin header with the value specified in that string,
callback / callbackObject: callback function invoked when the harness has successfully
completed the HTTP request as:
void callback(
callbackObject : object,
returnCode : int,
contentType : string or null,
body : string or null,
allowOrigin : string or null);
where,
contentType: string containing the value of the content type header in the HTTP response, or
null if a HTTP response is not received.
body: string containing the value of the body in the HTTP response, or null if a HTTP
response is not received.
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
This function shall not do any Same-Origin or CORS [28] security checks. However, the parameters and results
of this function are sufficient for a test case to test the DUT’s Cross-Origin support (see HBBTV 1.4.1 [1],
Section 14.8).
void HbbTVTestAPI.dial.sendOptionsRequest(
applicationUrl : string,
origin : string or null,
callback : function,
callbackObject : object);
ARGS: applicationUrl: string defining the DIAL Application Resource URL. Typically, this URL will be
based on the DIAL Application URL obtained from an earlier call to the
dial.getDeviceDescription() function, plus a trailing slash (‘/’) character if not already present, and
followed by the Application Name ‘HbbTV’ (see [1], Section 14.7.2).
origin: either null, in which case the request shall not include an Origin header, or else a string, in
which case the request shall include an Origin header with the value specified in that string.
maxAge: string containing the value of the Access-Control-Max-Age header returned, or null
if the DUT did not return that header.
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
There are 3 reasons why test cases cannot just use the standard W3C Websockets API [i.6], which browsers
provide and which is required by the HbbTV specification [1].
First, the W3C Websockets API is insufficient for certain HbbTV Test Cases. There are some extra low-level
details that Test Cases need to be able to control and test:
• CORS headers (setting the ‘Origin’ header to arbitrary values and checking the ‘Access-Control-
Allow-Origin’ header)
• Access to the HTTP status code when the WebSocket connection is refused
Second, if the Test Case is running as a harness-based test, the harness-based test environment is not required to
provide the W3C Websockets API. Harness-based tests must use this API instead.
Third, if the Test Case is running on the DUT, it’s not sufficient to merely open a Websocket directly from the
DUT to the media synchronization or app2app endpoints. We need to test that those endpoints are accessible via
the network port on the DUT. So this API allows a Test Case to command the Test Harness to open Websockets
and forward data to the test case:
(Diagram: Test Case and testsuite.js on the DUT, connected over the LAN to the Websocket Testing service in
the Test Harness, which in turn connects to the Websocket Server being tested.)
The diagram shows how a Test Case running on the DUT can make calls to testsuite.js to ask it to open
Websockets. The Test Harness uses some proprietary protocol to pass those requests from testsuite.js to the
Websocket Testing service that is running on the Test Harness PC. The Websocket Testing service then actually
opens the Websockets. The Test Harness then forwards messages in both directions between the Test Case and
the Websocket Server being tested.
Where this Test API specifies the details of Websocket messages, it refers to the Websocket between the
Websocket Testing service and the Websocket Server being tested. It does NOT refer to the “proprietary
protocol” part of the system, even if that happens to be implemented using a Websocket.
NOTE 1: The “proprietary protocol” is likely to be a Websocket connection. This should be a single
Websocket connection, with all the requests multiplexed down it. It should NOT be one
Websocket for each call to openWebsocket(). Using a single Websocket here ensures that, when
the Test Case tries to open 400 Websockets at once, the only DUT restrictions that are hit are
restrictions in the Websocket Server that’s being tested, not restrictions on how many Websockets
you can open in the browser.
NOTE 2: The “proprietary protocol” is just an RPC mechanism – i.e. when the Test Case requests a
particular Websocket message be sent with certain options, testsuite.js may just take the message
and all the options, and send them as JSON down a Websocket to the Websocket Testing service.
This API can also be used by a test case running as a Harness Based Test:
(Diagram: a harness-based test running in the Test Harness JavaScript environment uses the Test API
(testsuite.js); the Websocket Testing service connects over the LAN to the Websocket Server being tested on
the DUT.)
For security reasons, if the harness follows the design described here, then when there is a new “proprietary
protocol” Websocket connection to the Websocket Testing Service in the Test Harness, that service should
check that either the Origin header of the HTTP request is one of the test domains listed in section 5.2.1.2 (i.e.
hbbtv1.test etc), or the connection is coming from a harness-based test case. See RFC 6454 and the HbbTV
specification [1] for details of how a web browser determines the Origin of a page.
NOTE 3: This means that test pages loaded from DSM-CC or from a server on the Internet cannot use the
openWebsocket() API. Tests that use the openWebsocket() API must be HTML pages loaded
from http://hbbtv1.test/ or from the other DNS names in section 5.2.1.2.
NOTE 4: The Websocket Testing service is a Websocket proxy that allows spoofing the Origin header. This
is a powerful tool that we do not want to be used by attackers. The Origin check ensures that only
Test Cases can use it.
Without the Origin check, there is the following security vulnerability: Suppose an organization
happens to have a Test Harness installed somewhere. Suppose an attacker who wishes to attack
that organization knows that they have a Test Harness installed, and knows either the IP address
of the Test Harness or a range of IP addresses that will include the Test Harness, and knows the
proprietary protocol used by that Test Harness. Further, suppose that the organization has an
intranet server inside their firewall that uses Websockets. The attacker first tricks someone at the
organization into visiting a malicious webpage (which is trivial for the attacker to arrange via a
malicious link in an email). The malicious webpage instructs their desktop web browser to make a
Websocket connection to the Test Harness, and then sends commands over that Websocket
instructing the Test Harness to connect to a secure internal server using a spoofed Origin header.
That allows the attacker to get Websocket access to an internal server (inside the corporate
firewall). To prevent this vulnerability, the Test Harness has to reject the initial Websocket
connection from the malicious page, based on the Origin header supplied by the browser.
Of course, an Origin check does not preclude connections made by non-browser based clients.
At the start of a testcase execution, there shall not be any Websockets open via this mechanism.
NOTE 5: When a test case stops running, and the terminal destroys the test application, the terminal will
close any Websockets open via the normal W3C Websocket API. (See the W3C Websocket API
spec for details). If the test harness uses a W3C Websocket in its implementation of this API, then
the Websocket Testing service can detect that W3C Websocket being closed and close any related
Websocket connections it's created.
For harness-based tests, a similar process can occur but it will be triggered by the harness-based
test execution environment being torn down.
• Bytes 0x20 to 0x24 inclusive are mapped to the corresponding characters U+0020 to U+0024
inclusive.
• Bytes 0x26 to 0x7E inclusive are mapped to the corresponding characters U+0026 to U+007E
inclusive.
• Other bytes are mapped to the three character sequence consisting of a percent character followed by
two uppercase hex characters, e.g. byte 0 maps to “%00” and byte 0xAB maps to “%AB”.
Where binary data is passed into this API, it is passed as a string which is decoded as follows:
• Characters U+0020 to U+0024 inclusive are mapped to the corresponding bytes 0x20 to 0x24
inclusive.
• Characters U+0026 to U+007E inclusive are mapped to the corresponding bytes 0x26 to 0x7E
inclusive.
• The three character sequence consisting of a percent character followed by two uppercase hex
characters maps to the corresponding byte, e.g. “%00” maps to byte 0 and “%AB” maps to byte 0xAB.
• A percent character that is not followed by two uppercase hex characters means the string is
malformed.
• If the string contains a character outside the range U+0020 to U+007E inclusive then the string is
malformed
Test Cases shall not pass malformed strings (as defined above) to APIs that are expecting to be able to convert
the string to binary data. The handling of malformed strings is test-harness-dependent (but a reasonable choice
would be to fail the test with an error).
NOTE 1: This uses a different string encoding from getPlayoutInformation. Experience has shown that test
case authors try to print the strings and pass them to test API calls, which does not work for the
binary strings returned by getPlayoutInformation. A human-readable string representation is
easier to use. Additionally, the string returned by getPlayoutInformation is intended to be passed
to an OIPF API that defines the encoding; that restriction does not apply here.
NOTE 2: These APIs use a string encoding rather than an ArrayBuffer because that is simpler and easier
for test case authors to use.
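The encoding and decoding rules above can be sketched in JavaScript as follows (the function names are
illustrative and are not part of the Test API):

// Encode an array of byte values (0-255) into the string form described above.
function encodeBinary(bytes) {
    var out = "";
    for (var i = 0; i < bytes.length; i++) {
        var b = bytes[i];
        if ((b >= 0x20 && b <= 0x24) || (b >= 0x26 && b <= 0x7E)) {
            out += String.fromCharCode(b);               // printable, except '%' (0x25)
        } else {
            var hex = b.toString(16).toUpperCase();
            out += "%" + (hex.length < 2 ? "0" + hex : hex);   // e.g. byte 0xAB -> "%AB"
        }
    }
    return out;
}

// Decode the string form back into an array of byte values; throws on malformed input.
function decodeBinary(str) {
    var bytes = [];
    for (var i = 0; i < str.length; i++) {
        var c = str.charCodeAt(i);
        if (c === 0x25) {                                // '%' must introduce two uppercase hex chars
            var hex = str.substr(i + 1, 2);
            if (!/^[0-9A-F]{2}$/.test(hex)) {
                throw new Error("malformed binary string");
            }
            bytes.push(parseInt(hex, 16));
            i += 2;
        } else if ((c >= 0x20 && c <= 0x24) || (c >= 0x26 && c <= 0x7E)) {
            bytes.push(c);
        } else {
            throw new Error("malformed binary string");  // character outside U+0020..U+007E
        }
    }
    return bytes;
}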
onConnect: function called when “The WebSocket Connection is Established” as defined in RFC
6455 [31]. If the connection fails then this function will not be called – onFail will be called
instead. If the connection succeeds, this is always the first callback that is called.
void onConnect(
callbackObject : object,
websocketExtensionHeader : string or null,
websocket : WebsocketClient);
where,
onMessage: function called when “A WebSocket Message Has Been Received” as defined in
RFC 6455 [31]:
void onMessage(
callbackObject : object,
data : string,
binary : boolean,
websocket : WebsocketClient);
where,
data: if the message is a text message, ‘data’ is the message which has been decoded from
UTF-8 into a JavaScript string. If the message is a binary message, then “data” is the message
encoded as per section 7.7.1.
binary: true if the message is a binary message, or false if the message is a text message.
onPong: Called when a Pong frame is received. This may be an unsolicited Pong frame, or it may
be a reply to a Ping frame that was sent via WebsocketClient.sendPing().
void onPong(
callbackObject : object,
data : string,
websocket : WebsocketClient);
where,
data: the data from the Pong frame encoded as per section 7.7.1.
onClose: function called when “The WebSocket Connection is Closed” as defined in RFC 6455
[31], except this is not called if onFail has already been called. Once this method is called, the test
must not make any further method calls on this WebsocketClient object, and the harness must not
invoke any other callbacks on this WebsocketClient object.
void onClose(
callbackObject : object,
statusCode : integer,
reason : string,
websocket : WebsocketClient);
where,
statusCode: “The WebSocket Connection Close Code” as defined in RFC 6455 [31]
reason: “The WebSocket Connection Close Reason” as defined in RFC 6455 [31]
statusCode: If the Websocket handshake fails due to the HTTP status code not being 101,
then this shall equal the HTTP status code that was received (e.g. 503). In all other failure
cases it shall equal null.
reason: A human-readable string describing the error. E.g. the test case may choose to fail the
test and use this string as part of the failure reason.
originHeader: string to be sent as the value of the WebSockets “Origin” header field. If it is null,
or not specified, then that header is not sent.
The WebsocketClient object passed to the onConnect() and other callbacks has the methods defined in the
following subsections.
Tests that use this openWebsocket() API must either be a web page with an Origin that is one of the test
domains listed in section 5.2.1.2 (i.e. http://hbbtv1.test/ etc), or be a harness-based test case. (See section 7.7 for
rationale).
If the test harness receives a ping frame on a Websocket opened with this API, it shall automatically respond
with a pong frame as defined in RFC 6455 [31]. Those ping requests and pong responses are not exposed via
this API.
The test harness shall not send unsolicited pong frames. The test harness shall not send ping frames unless
instructed to do so via WebsocketClient.sendPing().
This API shall support having at least 500 simultaneous open Websockets.
NOTE 2: For tests running on the DUT, if the Test Harness provided implementation uses Websockets
internally, this implies some sort of multiplexing, because the DUT is only required to support 20
Websocket connections (see HbbTV [1] section 10.2.1).
A test case may call this function multiple times without waiting for a response, e.g. if testing what happens
with 400 rapid Websocket connection attempts.
Test implementers should not call this function from a test case running on the DUT when the network
connection is configured to be down. Test implementers should not take the network connection down when a
test case running on the DUT still has a Websocket open via this API. Those actions may cause the complete
test to fail automatically. These restrictions do not apply to test cases running as a Harness Based Test.
ARGS: data: If ‘binary’ is false, the message as a string, which will be UTF-8 encoded for transmission.
If ‘binary’ is true, the message encoded as specified in section 7.7.1.
binary: true if the message should be sent as a binary message, or false if the message should be
sent as a text message.
fragments: if null or undefined or not specified, guarantees not to fragment the message. If
specified, it shall be an array of strictly positive integers that add up to the size of the message in
bytes. The message is fragmented into the requested size fragments.
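As an illustration of the ‘fragments’ parameter, the sketch below splits a message into fragments of at most 4
bytes each. The method name websocket.sendMessage() is an assumption used only for illustration; use the
actual send method of the WebsocketClient object as defined in this section.

// Build a fragments array whose strictly positive entries add up to totalBytes.
function makeFragments(totalBytes, fragmentSize) {
    var fragments = [];
    for (var remaining = totalBytes; remaining > 0; remaining -= fragmentSize) {
        fragments.push(Math.min(fragmentSize, remaining));
    }
    return fragments;
}

var message = "hello websocket";               // ASCII, so byte length equals string length
// Hypothetical send call: text message, fragmented into pieces of up to 4 bytes.
websocket.sendMessage(message, false, makeFragments(message.length, 4));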
ARGS: data: string containing the binary data to use as the payload of the Ping frame, encoded as
specified in section 7.7.1.
NOTE: It is always possible to close a Websocket connection by closing the underlying socket, so a close
cannot fail as such. However, there is a ‘graceful close’ protocol defined by RFC 6455, that
should be followed unless there is an error. In a ‘graceful close’, both sides send a Close frame
and shut down the sending side of their side of their socket, then the socket is closed and onClose
is called. If either side resorts to closing the underlying socket without sending a Close frame then
onFail is called.
WebsocketClient.close();
After calling WebsocketClient.close() the test must not make any further method calls on this WebsocketClient
object. However, the harness may invoke further callbacks on this WebsocketClient object (e.g. if a message is
received before the server’s close frame, then the onMessage callback will be called).
ARGS: callback / callbackObject: callback function invoked when the Test Harness has successfully
closed the Websocket’s underlying TCP/IP socket.
callback(callbackObject : object, websocket : WebsocketClient)
This function shall return immediately, and arrange for the harness to close the Websocket’s underlying TCP/IP
socket as soon as possible. When the underlying TCP/IP socket is closed, a callback will be called (see below
for which callback).
After calling WebsocketClient.tcpClose() the test must not make any further method calls on this
WebsocketClient object. However, the harness may invoke further callbacks on this WebsocketClient object to
report events that happened before the socket was actually closed (e.g. if a message is received before the
underlying socket was closed, then the onMessage callback will be called).
• If the Websocket connection fails or is gracefully closed before the harness could close the TCP/IP
socket, then the onFail or onClose callback shall be called as normal. In that case, the callback passed
to tcpClose() shall not be called.
• Otherwise, the harness closes the Websocket’s underlying TCP/IP socket in response to the tcpClose()
call, then calls the callback that was passed to tcpClose(). In this case, the onFail and onClose callbacks
shall never be called.
Once the harness has called onFail, onClose, or the callback passed to tcpClose(), then the harness shall not
make any further callbacks for this WebsocketClient object.
This function is deprecated and must not be used in new test cases.
if (checkLight1AgainstAudio) {
temp.checkLight1AgainstAudio(maxDifference, maxDifference);
}
if (checkLight2AgainstAudio) {
temp.checkLight2AgainstAudio(maxDifference, maxDifference);
}
if (checkLight1AgainstLight2) {
temp.checkLight1AgainstLight2(maxDifference, maxDifference);
}
temp.analyze(callback, callbackObject);
If updating a test case to not use this API, you are recommended to use chained calls rather than a temporary
variable, e.g.:
testApi.createAnalyzeAvSync(step_number, comment).checkLight1AgainstAudio(maxDifference,
maxDifference).analyze(callback, callbackObject);
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing the purpose of the step
callback/callbackObject: a callback function to invoke when the analysis was started (also see
chapter 7.2.3 Callbacks). The callback function will be called with the following parameters:
callback(callbackObject, analyzeFinishedVideoGraphicsSync) where
analyzeFinishedVideoGraphicsSync is a function that can be called as defined below.
The analyzeFinishedVideoGraphicsSync function is provided via the callback, it is NOT a function on the
HbbTVTestAPI class. (This design allows Test Harness implementers to return a closure that links the ‘finished’
call back to the corresponding ‘start’ call).
void analyzeFinishedVideoGraphicsSync(
expectedDeltaMillis : integer,
maxDifferenceMillis : integer,
callback : function or null,
callbackObject : object)
ARGS: expectedDeltaMillis: the number of milliseconds that the flash on light sensor 1 is expected to be
ahead of the flash on light sensor 2. May be negative if light sensor 2 is expected to detect a flash
first.
1) The Test Case arranges for a video player to be positioned so it completely covers light detection area
1, and a black graphics rectangle to be positioned so it completely covers light detection area 2. The
video player may be larger than the light detection area, e.g. if the video contains other information to
help debugging as well as the flash area.
2) The Test Case starts playback of a video with a single video flash at a known timeline position. (In this
section 7.8.2, “timeline” should be read generally as referring to any way that the Test Case can know
the video playback position; it does not refer to any particular way of doing that).
4) That causes the Test Harness to start analysing the two light sensors
5) The Test Harness then calls the callback passed to analyzeStartVideoGraphicsSync(), to indicate that
analysis has started.
7) The Test Case regularly monitors the video’s timeline time. When the video’s timeline time gets
‘close’ to the video flash time, the Test Case records the current video timeline time, initiates a
graphics flash by turning the graphics rectangle white, and then records the current video timeline time
again. (Note: It’s not possible for an HbbTV application to reliably change the graphics at a particular
timeline time, so we cannot use analyzeAvSync. The best we can do is ‘close’, but this procedure
carefully measures and compensates for this error).
8) A short time later (typically 2-5 video frames later), the Test Case clears the graphics flash by turning
the graphics rectangle black.
9) The Test Case waits for sufficient time for both the following to happen:
a. the graphics changes to get through all output buffering and actually be displayed as light
b. the video flash to happen, (remembering to add the acceptable synchronization inaccuracy to
the time it’s expected to happen), and that flash to get through all output buffering and
actually be displayed as light.
10) The Test Case calculates the average of the two timeline times recorded around the moment it actually
initiated the graphics flash, subtracts that average from the timeline time at which the video flash is
encoded, and converts the result to milliseconds. It may then need to apply some corrections (e.g. see the
Note below). This is the expectedDeltaMillis.
11) The Test Case converts to milliseconds and adds up: the allowed tolerance as specified in the assertion,
half the difference between the two timeline times measured around when it actually initiated the
graphics flash, and any other corrections (e.g. see the Note below). This is the maxDifferenceMillis. (A
sketch of this calculation appears after this list.)
12) The Test Case calls the analyzeFinishedVideoGraphicsSync function provided by the Test Harness.
The Test Case passes the calculated expectedDeltaMillis and maxDifferenceMillis values.
13) That causes the Test Harness to stop analysing the two light sensors and then do one of the following:
a. Determines that each light sensor detected a single flash, and the difference between the start
times of the flashes was the expected value (within the specified tolerance). In this case, the
analysis has passed, and the Test Harness calls the callback provided by the Test Case.
b. Determines that the conditions in (a) do not hold. In this case, the analysis has failed, and the
Test Harness shall fail the entire Test Case immediately and shall not call the callback.
c. Records the information needed to make that determination for later off-line analysis (e.g. if
using a video camera to capture the screen and doing manual analysis later), and calls the
callback provided by the Test Case. In this case, the Test Case may be failed when the
analysis is carried out, which may be long after the Test Case has finished executing.
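A sketch of the expectedDeltaMillis / maxDifferenceMillis calculation from steps 10 and 11, assuming the
timeline is read in seconds (variable names are illustrative; flashTimeSeconds is the timeline position at which
the video flash is encoded and toleranceMillis is the tolerance from the assertion):

// timeBefore / timeAfter: the two timeline times recorded immediately before and
// after initiating the graphics flash in step 7, in seconds.
var initiatedAt = (timeBefore + timeAfter) / 2;                       // step 10: average
var expectedDeltaMillis = (flashTimeSeconds - initiatedAt) * 1000;    // plus any corrections (see the NOTE below)

// step 11: assertion tolerance + half the measurement uncertainty (+ any other corrections)
var maxDifferenceMillis = toleranceMillis + ((timeAfter - timeBefore) / 2) * 1000;

analyzeFinishedVideoGraphicsSync(Math.round(expectedDeltaMillis),
    Math.ceil(maxDifferenceMillis), callback, callbackObject);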
The Test Case shall ensure that the time that the Test Harness is observing the light sensors (between step 4 and
step 13 in the above procedure) is less than 20 seconds. If this limit is exceeded (because the Test Case does not
call analyzeFinishedVideoGraphicsSync quickly enough), the Test Harness may fail the entire test case. In that
case, if/when the Test Case does eventually call analyzeFinishedVideoGraphicsSync, the callback shall not be
called. (Rationale: this places a limit on the memory the Test Harness needs to do the capture).
The Test Harness shall ensure that, given reasonable assumptions about the performance of the DUT, a call to
analyzeStartVideoGraphicsSync() takes less than 5 seconds, measured from the start of the call until the
callback function is called. The time the Test Harness starts analysing the light sensors shall be somewhere
between those two points.
The Test Harness shall ensure that, given reasonable assumptions about the performance of the DUT, a call to
analyzeFinishedVideoGraphicsSync takes less than 2 seconds, measured from the start of the call until the Test
Harness stops observing the light sensors.
(Informative Rationale: the guarantees provided by these 2 paragraphs are needed to allow the Test Case author
to choose where in the video file the flash should occur, and so the Test Case author can ensure they meet the 20
second limit. E.g. the Test Case author may place the flash at 12 seconds into the video file, and call
analyzeFinishedVideoGraphicsSync at approximately 15 seconds into the video file. If after starting the video
file they get a callback immediately to say playback has started, and then analyzeStartVideoGraphicsSync runs
instantly, but analyzeFinishedVideoGraphicsSync takes 2 seconds to stop the observation, then they are still
well inside the 20 second limit. Conversely, if after starting the video file it takes 5 seconds for the DUT to
make the callback to say playback has started, and then 5sec for analyzeStartVideoGraphicsSync to run, then
those steps are still complete 2 seconds before the flash).
If either light sensor detects no flashes, or if either light sensor detects more than 1 flash, then the test harness
shall treat that as an analysis failure.
See section 5.2.1.12 for details of the light sensor positions. See section 5.2.1.13 for details of media
requirements – of the two alternatives for media described in that section, this API requires media with a single
flash.
NOTE: When calculating the tolerance to allow for this function, the Test Case should bear in mind that
some of the HbbTV APIs give the time of the last video frame that was composited with graphics,
and by the time the video/graphics compositor runs again it may or may not be onto the next video
frame. (E.g. with 50fps video, a DUT with a 50fps display will always go onto the next video
frame, and a DUT with a 200Hz display that chooses to composite graphics with video at 200Hz
will only have a 25% chance of going onto the next video frame). If you are particularly unlucky,
the video/graphics compositor may be running in parallel with your JavaScript that changes the
graphics, so your graphics change may have to wait 2 video frames. To allow for this potential
error of -0 / +2 video frames, you may wish to adjust the expected difference by 1 frame and
increase the tolerance by 1 frame.
The entry/entries in the Test Report XML for this step shall include the measured difference in milliseconds
between observed and expected times.
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
NOTE: This function checks the start time of the flash, unlike analyzeAvSync/analyzeAvNetSync.
Experimental results show that typical TV “picture improvement” processing can affect the
detection of the flash start and end times, and when testing audio/video sync, choosing the mid-
point makes the tests more robust. However, that does not apply to this API because it is not
testing audio; instead it is testing the video and graphics planes which will probably be equally
affected. Choosing to test the start of the flash makes the test cases simpler, as the end of the
graphics flash does not need to be timed accurately.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing the purpose of the step
wcUrl: the URL of the CSS-WC service that the Test Harness shall connect to. The Test Case
should obtain this value from the DUT’s CSS-CII service using the Websockets API in section
7.7. This must be a “udp://” URL.
tsUrl: the URL of the CSS-TS service that the Test Harness shall connect to. The Test Case
should obtain this value from the DUT’s CSS-CII service. This must be a “ws://” URL.
contentIdStem: The "contentIdStem" the Test Harness shall specify in the Setup Data when it
connects to the CSS-TS service. The Test Case may have this hardcoded, or it may retrieve it
from the DUT’s CSS-CII service. See [34] section 5.7.3 for the meaning of this parameter.
timelineSelector: The "timelineSelector" the Test Harness shall specify in the Setup Data when it
connects to the CSS-TS service. The Test Case may have this hardcoded, or it may retrieve it
from the DUT’s CSS-CII service. See [34] section 5.7.3 for the meaning of this parameter.
unitsPerTick: The "unitsPerTick" that the Test Harness shall use to interpret the timeline it
receives from the CSS-TS service. The Test Case may retrieve this from the DUT’s CSS-CII
service. See [34] section 5.5.9.5 for the meaning of this parameter.
unitsPerSecond: The "unitsPerSecond" that the Test Harness shall use to interpret the timeline it
receives from the CSS-TS service. The Test Case may retrieve this from the DUT’s CSS-CII
service. See [34] section 5.5.9.5 for the meaning of this parameter.
patternStart: the time the repeating pattern of flashes and tones starts, in timeline ticks. This
refers to the mid-point of the first flash and tone in the pattern.
patternRepeatLength: the length of the repeating pattern of flashes and tones, in timeline ticks
pattern: the mid-point times of the repeating pattern of flashes and tones, in timeline ticks
checkLight1AgainstTimeline: if true, then light sensor 1 is checked against the timeline. (See
section 5.2.1.12 for details of the light sensor positions).
checkAudioAgainstTimeline: if true, then the audio sensor is checked against the timeline.
The Test Case must set at least one of the 3 ‘check…’ options, and may set 1, 2, or all 3 of them.
The Test Harness shall analyze the light and/or audio sensors for a period of 15 seconds.
The media playing on the DUT shall contain a repeating pattern of tones and flashes. This may be the pattern
described in section 5.2.1.13, or it may be a different pattern. The Test Case describes the pattern to the Test
Harness by specifying the ‘patternStart’, ‘patternRepeatLength’, and ‘pattern’ parameters. If there are N entries
in the ‘pattern’ array, then the mid-points of the tones and flashes are expected at the following times (measured
in timeline ticks):
• patternStart + pattern[0]
• patternStart + pattern[1]
• patternStart + pattern[2]
• …
• patternStart + pattern[N-2]
• patternStart + pattern[N-1]
• … (and so on, repeating with period patternRepeatLength)
The pattern keeps repeating until at least 20 seconds after this function was called. (The 20 seconds here is
calculated from the 5 second limit to start observing the sensors plus the 15 seconds of observation).
All entries in the pattern[] array must be non-negative and strictly less than patternRepeatLength. The pattern[]
array must be strictly increasing. I.e.:
0 <= pattern[0] < pattern[1] < pattern[2] < … < pattern[N-1] < patternRepeatLength.
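For illustration, the expected mid-point times implied by these parameters can be enumerated as follows (a
sketch only; all values are in timeline ticks):

// List the expected flash/tone mid-points from patternStart up to endTick.
function expectedPatternTimes(patternStart, patternRepeatLength, pattern, endTick) {
    var times = [];
    for (var base = patternStart; base <= endTick; base += patternRepeatLength) {
        for (var i = 0; i < pattern.length; i++) {
            if (base + pattern[i] <= endTick) {
                times.push(base + pattern[i]);
            }
        }
    }
    return times;
}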
The decision about whether this analysis passes or fails is made as follows:
• If the Test Harness could not obtain the time from the CSS-WC service on the DUT, the test fails.
• If the calculated CSS-WC dispersion, in milliseconds, exceeds maxDispersionMillis the test fails.
• If the Test Harness could not connect to the CSS-TS service or could not obtain the timeline from that
service, then the test fails.
• If there is a flash or tone that is expected, and the time it is expected is more than the calculated
tolerance plus 100ms from both the start and end of the 15 second observation period, and there is no
flash or tone detected at that time (within the calculated tolerance), then the test fails. (Note: The
special handling of the start and end of the observation period accounts for cases where the
synchronization is not perfect but is within the specified tolerance, and the Test Harness starts
monitoring just before a flash is expected but just after the flash has happened, so the flash is missed).
• If two flashes or two tones are detected from the same source within a 400ms window, that also causes
the test to fail. (Note: that means spurious flashes have been detected, either due to a failure of the
DUT or a failure in the test harness).
The Test Harness shall not send any Actual, Earliest or Latest Presentation Timestamp to the DUT’s CSS-TS
service.
The Test Harness shall close the CSS-TS connection when capture is complete, and before calling the callback.
The entry/entries in the Test Report XML for this step shall include the calculated CSS-WC dispersion, and the
maximum observed difference in milliseconds between observed and expected times for a flash or beep.
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
NOTE: This function checks the mid-point of the flash. Experimental results show that typical TV
“picture improvement” processing can affect the detection of the flash start and end times.
Experiments suggest that the start time of the flash is affected more than the mid-point is.
Particularly when testing audio/video sync, choosing the mid-point makes the tests more robust.
ARGS: ciiData: a JavaScript dictionary containing the data to be returned by the CSS-CII service, except
the timeline service and wallclock service URLs do not need to be specified (and will be ignored
if they are specified). The implementation of this API must convert the ciiData to JSON using the
standard JavaScript JSON.stringify() method, with no replacer specified.
timelineStartValue: the initial value of the timeline time reported by the fake CSS-TS service.
• First, we have to determine the wallclock time when the timeline started. If the timelineStartSeconds
and timelineStartMicroseconds parameters are non-null, then they specify this time directly. Otherwise,
the test harness chooses the wallclock time when the timeline starts; this start time shall be after this
function is called and before the harness calls the callback.
• Next, we have to choose a timeline. The possible timelines are listed in the ‘timelines’ array in the
ciiData passed to startFakeSyncMaster(). For a connection to the CSS-TS service, the timeline will be
chosen by comparing the ‘timelineSelector’ in the setup data against the ‘timelineSelector’ from each
of the entries in the ‘timelines’ array; the matching entry is used. For the syncTestUrl service, the first
timeline in the ‘timelines’ array is always used. A Test Case that does not list any timelines in the
‘timelines’ array must not use the syncTestUrl.
• Then the timeline time is calculated as follows: measure the time since the timeline started, in
milliseconds; multiply that by ‘unitsPerSecond’, divide by ‘unitsPerTick’ and divide by 1000; round
the result down to an integer (rounding towards negative infinity); add ‘timelineStartValue’. For this
calculation, ‘unitsPerTick’ and ‘unitsPerSecond’ are taken from the matched entry in the ‘timelines’
array.
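A sketch of this timeline calculation (illustrative only; elapsedMillis is the time since the timeline started, in
milliseconds, and the other parameters come from the matched ‘timelines’ entry and from startFakeSyncMaster()):

function fakeTimelineTime(elapsedMillis, unitsPerTick, unitsPerSecond, timelineStartValue) {
    // elapsedMillis * unitsPerSecond / unitsPerTick / 1000, rounded towards negative infinity
    return Math.floor((elapsedMillis * unitsPerSecond) / (unitsPerTick * 1000)) + timelineStartValue;
}

// Example: a 90 kHz timeline (unitsPerTick = 1, unitsPerSecond = 90000) that started
// 2500 ms ago with timelineStartValue = 1000 gives 2500 * 90 + 1000 = 226000 ticks.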
The ciiUrl parameter passed to the callback is the URL of the CSS-CII service provided by the Test Harness.
The Test Case can use this URL to configure the DUT as a slave.
The Test Case can make a Websocket connection to the Test Harness using the URL passed as the syncTestUrl
parameter to the callback. The protocol on this Websocket is as follows: The only valid message from the Test
Case to the Test Harness is a text message where the payload is the string “X”. Sending that message causes the
Test Harness to respond with the timeline time that the X was received, calculated as described above, as a
single decimal integer encoded as text in a Websocket message. The Test Harness will not send any other
messages over the Websocket. If the Test Case sends any other message over the Websocket then the behaviour
of the Test Harness is not defined by this specification. Future versions of this specification may define other
messages.
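A sketch of querying the syncTestUrl service (this assumes the test case can use the W3C WebSocket API to
reach the Test Harness; syncTestUrl is the value passed to the startFakeSyncMaster() callback):

var ws = new WebSocket(syncTestUrl);
ws.onopen = function() {
    ws.send("X");                               // the only valid request message
};
ws.onmessage = function(event) {
    // The harness replies with the timeline time at which the "X" was received,
    // as a single decimal integer encoded as text.
    var timelineTime = parseInt(event.data, 10);
};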
The fake master services provided by the Test Harness behave as follows:
• CSS-CII: When a client connects, sends a single notification containing the ciiData specified in the call
to startFakeSyncMaster(), with the addition of the correct URLs for the fake CSS-TS and CSS-WC
services. Never reports any changes (i.e. does not send any other notifications).
• CSS-WC: Reports a wallclock time advancing at the correct rate. This service shall meet the
requirements of HbbTV [1] section 13.7.2 and 13.7.3, with the word “terminal” replaced by “Test
Harness”.
2. It checks if the ‘contentIdStem’ in the setup data matches or is a prefix of the ‘contentId’
specified in the ciiData passed to startFakeSyncMaster(). It also checks if the ‘timelineSelector’
in the setup data matches the ‘timelineSelector’ from one of the entries in the ‘timelines’ array
specified in the ciiData passed to startFakeSyncMaster().
3. If both the conditions from step 2 hold, then: Within 500ms of receiving the setup data it sends a
Control Timestamp to the client. This shall contain the current ‘wallClockTime’. It shall contain
‘timelineSpeedMultiplier’ set to 1. It shall also contain a ‘contentTime’ that is a timeline time
calculated as described above. The harness shall ensure that the wallClockTime and contentTime
refer to the same instant in time.
4. If the conditions from step 2 do not both hold, then: Within 500ms of receiving the setup data it
sends a Control Timestamp to the client. This shall contain the current ‘wallClockTime’. It shall
contain a ‘contentTime’ and ‘timelineSpeedMultiplier’ which are both null.
5. Any Actual, Earliest and Latest Presentation Timestamp sent by the client are read and ignored
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
ARGS: multiplexIndex: the index of the multiplex to query. The Test Case must pass zero for this
parameter to refer to the first multiplex and must pass one to refer to the second multiplex. If the
Test Case specifies an invalid value, the Test Harness may fail the complete test case.
callback/callbackObject: a callback function to invoke with the playout start time (also see
chapter 7.2.3 Callbacks). The callback function will be called with the following parameters:
callback(callbackObject : object, seconds : integer, microseconds : integer,
maxErrorMicroseconds : integer)
This function returns immediately and arranges for the callback to be called as soon as possible.
NOTE 1: Some implementations may involve asking the user to type in the value, so “as soon as possible”
may be in a minute or two. If the Test Harness automates TS playout then it may be able to
respond in significantly less than a second.
The timestamp passed to the callback specifies a UNIX timestamp, split into an integer number of seconds and 0
to 999999 microseconds. The Test Harness must ensure this time is accurate within +/- 250 milliseconds. The
error bound shall be passed to the test as maxErrorMicroseconds, which must be between 0 and 250000
inclusive. This allows the Test Case to make allowance for the error.
NOTE 2: Although this call allows microsecond precision, the value is not expected to be that accurate. If a
human is starting the playout, an accuracy of +/- 250 millisecond is achievable. If the Test
Harness automates TS playout then it may be able to report the value more accurately.
PHP scripts in the Test Case can access a clock via the PHP time() or microtime() functions. The Test Harness
must ensure that time(), microtime(), and getPlayoutStartTime() use the same clock or synchronized clocks. Any
potential error due to the clocks not being perfectly synchronized must be accounted for in the value of
maxErrorMicroseconds reported by the Test Harness.
NOTE 3: A Test Case may use this to synchronise the availability of DASH segments with the broadcast
timeline. To do that:
1. The Test Case calls this JavaScript function, then when the callback is called it passes the
timestamp to a first PHP script via XmlHttpRequest, and that first PHP script saves the
timestamp in the PHP session.
2. Then the Test Case plays the DASH video, with the MPD URL pointing to a second PHP script.
The second PHP script can calculate the segment availability start and end times based on the
timestamp in the PHP session and the Test Case’s chosen constraints on segment availability, and
can include those values in the returned MPD data.
3. The MPD data can contain appropriate timing elements to ensure the DASH player requests the
time from the Test Harness via a HTTP call to a third PHP script, which can use time() or
microtime() to get the current time. This ensures that the DASH availability start/end times are
compared against the correct clock.
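A sketch of step 1 of this approach (the PHP script name is hypothetical; the getPlayoutStartTime() parameter
order follows the ARGS list above):

testApi.getPlayoutStartTime(0, function(callbackObject, seconds, microseconds, maxErrorMicroseconds) {
    // Pass the playout start time to a first PHP script, which stores it in the PHP session.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "store_playout_time.php" +
        "?seconds=" + seconds +
        "&microseconds=" + microseconds +
        "&maxErrorMicroseconds=" + maxErrorMicroseconds, true);
    xhr.send();
}, {});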
Test implementers should not call this function when network connection is configured to be down, as it may
fail when network connection is not available. In this case, this will cause the complete test to fail automatically.
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing the purpose of the step
cssWcUrl: URL to the CSS-WC server to be tested. This must be a udp:// URL as used in
HbbTV.
numberOfClients: Number of distinct clients to create. Each IP address and port combination is
considered a single distinct client. Must be positive and nonzero.
transmitDurationMillis: The duration of the "transmit window" when CSS-WC requests are
being sent by the Test Harness. Note that the responses to the last few requests will be received
after the end of the transmit window.
maxResponseTimeMillis: If the time taken to receive a CSS-WC response is greater than this
value, the analysis step fails. This is measured from the time the CSS-WC request is sent on the
network until the CSS-WC response is received on the network. In the case of a 2 part response
where both parts are received in the correct order, the arrival time of the second part of the
response is measured.
callback/callbackObject: a callback function to invoke when the analysis is complete (also see
chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis causes the test harness to fail the complete test, independent of the test result reported
back by the reportStepResult function. If the analysis step fails, then the callback shall not be
called.
The test harness shall send numberOfMessages CSS-WC requests to the specified CSS-WC server.
If numberOfMessages is 1, then the test harness shall immediately send a single CSS-WC request and
transmitDurationMillis is ignored; in this case the transmit window has a duration of zero.
Otherwise, the test harness shall send a CSS-WC request immediately, and shall send new CSS-WC requests
every (transmitDurationMillis / (numberOfMessages - 1)) milliseconds. The first numberOfClients requests
All CSS-WC request messages sent by the Test Harness shall use a 64-bit random number for the originate_timevalue field.
This shall be generated by a good-quality random number generator. The test harness shall ensure that all the
originate_timevalue values used during a single call to analyzeCssWcPerformance() are unique.
After the end of the transmit window, the Test Harness shall continue to listen for responses for (2 *
maxResponseTimeMillis) milliseconds.
The Test Harness shall consider the analysis step to have failed if any of the following conditions occur:
• Any UDP packet, received for the IP address and port of any of the CSS-WC clients created by this
call, does not match the syntax defined for CSS-WC responses, or does not have a message_type of 1,
2, or 3.
• Any CSS-WC response has an originate_timevalue field that does not match a CSS-WC request sent
by that client. This includes the case where the response is received on the wrong IP address or port.
• More than two CSS-WC responses have an originate_timevalue field that match the same CSS-WC
request sent by that client.
• Two CSS-WC responses have an originate_timevalue field that match the same CSS-WC request sent
by that client, and the first one received by the Harness is not message_type 2 (“response that will be
followed by a follow-up response”).
• Two CSS-WC responses have an originate_timevalue field that match the same CSS-WC request by
that client, and the second one received by the Harness is not message_type 3 (“follow-up response”).
• If the number of CSS-WC requests that are "dropped" is greater than maxDroppedRequests. A
request is considered to have been "dropped" if the Test Harness does not receive a CSS-WC response
with an ID matching that request, or if the Test Harness receives a response with message_type 2 but
does not receive the corresponding follow-up response with message type 3, or if the Test Harness
receives a response with message_type 3 but does not receive the corresponding response with
message type 2.
• If the "response time" for any CSS-WC request exceeds maxResponseTimeMillis milliseconds.
Requests that are dropped are not counted. The “response time” for a CSS-WC request is measured
from the time the CSS-WC request is sent on the network until the corresponding CSS-WC response is
received on the network. In the case of a 2 part response where both parts are received in the correct
order, the arrival time of the second part of the response is measured.
The Test Harness shall consider the analysis step to have passed if none of the above conditions occur.
The entry/entries in the Test Report XML for this step shall include the number of dropped requests, and the
maximum CSS-WC response time that was observed during the analysis.
Test harnesses are only required to support the following values for the parameters:
numberOfClients = 5
numberOfMessages = 25
transmitDurationMillis = 1000
maxDroppedRequests = 0
maxResponseTimeMillis = 200
Support for other values for these parameters is optional in a Test Harness. (It may be helpful when debugging
test failures, or for future tests).
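As a worked example using the required values above (a simple calculation, not part of the Test API):

// 25 messages spread over a 1000 ms transmit window:
var intervalMillis = 1000 / (25 - 1);          // one request approximately every 41.7 ms
// After the transmit window the harness keeps listening for responses for a further
// 2 * maxResponseTimeMillis = 2 * 200 = 400 ms.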
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing the purpose of the step
cssWcUrl: URL to the CSS-WC server to be tested. This must be a udp:// URL as used in
HbbTV.
callback/callbackObject: a callback function to invoke when the analysis is complete (also see
chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis causes the test harness to fail the complete test, independent of the test result reported
back by the reportStepResult function. If the analysis step fails, then the callback shall not be
called.
Where the results object has the following two sub-objects defined:
requests : array
responses : array
requests: a JavaScript array containing the information for each CSS-WC request message sent
under this API call.
responses: a JavaScript array containing the information for each CSS-WC response message
received under this API call.
harnessTimestamp: For a CSS-WC request message, it is the timestamp when the Test Harness
sent the message. For a CSS-WC response message, it is the timestamp when the Test Harness
received the message. This timestamp is in “Unix time” format – i.e. it is the number of seconds
and nanoseconds since 1 Jan 1970 00:00 UTC, not counting leap seconds.
The test harness shall immediately send a CSS-WC request to the specified CSS-WC server. The information of
the CSS-WC request shall be available in the requests object. After the request is sent, the test harness shall
listen on the same IP address and port for responses for 500 milliseconds. The information from the CSS-WC
responses received by the test harness shall be available in the responses object
All CSS-WC request messages sent by the test harness shall use the Test Harness’s current system time for
the originate_timevalue field, unless a different behaviour is specified by the requestMessageConfig field. The
time value used here is in “Unix time” format – i.e. it is the number of seconds and nanoseconds since 1 Jan
1970 00:00 UTC, not counting leap seconds.
The Test Harness shall consider the analysis step to have failed if any of the following conditions occur:
• Any UDP packet, received for the IP address and port used to send any of the CSS-WC requests sent
by this call, does not match the syntax and length defined for CSS-WC responses, or does not have a
message_type of 1, 2, or 3.
• Any CSS-WC response has an originate_timevalue field that does not match a CSS-WC request sent
by this function.
• For one of the CSS-WC requests sent by this call, there are no CSS-WC responses that have an
originate_timevalue field that match.
• More than two CSS-WC responses have an originate_timevalue field that match the same CSS-WC
request sent by this call.
• Only one CSS-WC response has an originate_timevalue field that matches the CSS-WC request sent
by this call, and it is not message_type 1 (“response that will not be followed by a follow-up
response”).
• Two CSS-WC responses have an originate_timevalue field that match the same CSS-WC request sent
by this call, and the first one received by the Harness is not message_type 2 (“response that will be
followed by a follow-up response”).
• Two CSS-WC responses have an originate_timevalue field that match the same CSS-WC request by
this call, and the second one received by the Harness is not message_type 3 (“follow-up response”).
The Test Harness shall consider the analysis step to have passed if none of the above conditions occur.
The checkXxx functions modify the object they are called on, then return the same object. This allows chained
calls, e.g.:
testApi.createAnalyzeAvSync(3, “check”).checkLight1AgainstAudio(50, 35).analyze(callback)
For each AnalyzeAvSync object returned by a call to createAnalyzeAvSync(), test cases must call at least one of
the checkXxx functions before calling analyze(). Test cases must not call a checkXxx function to request a
check between a particular pair of sensors, if the test has already requested a check between the same pair of
sensors. For the purposes of the previous sentence, sensor order does not matter, e.g. “light 1 and audio” is the
same pair as “audio and light 1”. (E.g. you can call checkLight1AgainstAudio() and
checkLight2AgainstAudio(), but you cannot call checkLight1AgainstAudio() twice).
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
comment: a comment from the test developer describing the purpose of the step.
Requests that, when analyze() is called, then light sensor 1 is checked against the audio (See section 5.2.1.12 for
details of the light sensor positions). Modifies the object it is called on, then return the same object. Once this
function has been called on an AnalyzeAvSync object, test cases must not call this function again on the same
object.
Requests that, when analyze() is called, then light sensor 2 is checked against the audio (See section 5.2.1.12 for
details of the light sensor positions). Modifies the object it is called on, then return the same object. Once this
function has been called on an AnalyzeAvSync object, test cases must not call this function again on the same
object.
Requests that, when analyze() is called, then the specified sensor A is checked against the specified sensor B.
Modifies the object it is called on, then returns the same object.
ARGS: sensorA: First sensor to check. Must be one of the AVSYNC_SENSOR_xxx constants as
defined in section 7.8.9.
minAOn: minimum allowed duration of a continuous “on” state (white or tone) on sensor A, in
milliseconds. If a shorter “on” pulse is detected, then either media playback has failed or the
sensor is not working properly, so the test case will fail. Must be >=20 and <14799.
maxAOn: maximum allowed duration of a continuous “on” state (white or tone) on sensor A, in
milliseconds. If a longer “on” pulse is detected, then either media playback has failed or the
sensor is not working properly, so the test case will fail. Must be >20 and <=14799.
maxAOff: maximum allowed duration of a continuous “off” state (black or silence) on sensor A,
in milliseconds. If a longer “off” pulse is detected, then either media playback has failed or the
sensor is not working properly, so the test case will fail. Must be >20 and <=14799.
ARGS: callback/callbackObject: a callback function to invoke when the observation is complete and, if
the Test Harness chooses to do the analysis immediately, when the analysis is also complete (also
see chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis causes the test harness to fail the complete test, independently of the test result reported
back by the reportStepResult function. This is because the analysis can also be performed off-line
on a recording.
Before calling analyze() the test case must call at least one of the ‘check...()’ functions.
NOTE: "Check light sensor 1 against audio, and light sensor 2 against audio, with a 10ms tolerance" is
NOT the same as "check all 3 with a 10ms tolerance", since if light sensor 1 flashes 8ms earlier
than the audio tone, and light sensor 2 flashes 8ms later than the audio tone, then there is 16ms
between light sensor 1 and light sensor 2, so the first example passes and the second example
fails.
For a period of 15 seconds, starting within 5 seconds of the time this API is called, this monitors the selected
light and audio sensors. The harness performs all the requested checks on the captured data.
Test implementers must not call this function when the network connection is configured to be down, as it may
fail when the network connection is not available. In that case, the complete test will fail automatically.
For tests using this analysis method, the media should be mostly black/silence, with short flashes/tones.
The harness shall check that all the detected flashes and tones are synchronised within the specified tolerance.
In this section, the time of a flash or tone is the time of the mid-point of the flash or tone. (There is no
requirement for flashes or tones to have the same length).
• If there is a flash or tone that does not have a corresponding flash or tone with a centre within the
specified tolerance, and the centre of that flash or tone is more than the specified tolerance plus 100ms
from both the start and end of the 15 second observation period, then the test fails. (Note: The special
handling of the start and end of the observation period accounts for cases where the synchronization is
not perfect but is within the specified tolerance, and the Test Harness starts monitoring just after a flash
and just before the corresponding tone, so the tone is detected but the flash is missed).
• If two flashes or two tones are detected from the same source within a 400ms window, that also causes
the test to fail. (Note: that means spurious flashes have been detected, either due to a failure of the
DUT or a failure in the test harness).
NOTE: The algorithm above is designed to work with media files containing the repeating pattern of
flashes and tones described in section 5.2.1.13. However, other patterns of flashes and tones can
also be used.
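The following is a non-normative sketch of the mid-point matching rule above; it omits the 400ms spurious-pulse check, and the pulse representation ({start, end} times in milliseconds within the observation window) is an assumption made for illustration.
// Sketch: pulsesA/pulsesB are arrays of {start, end} pulses (ms) detected on two
// sensors during an observation period of periodMs (15000); tol is the tolerance in ms.
function midpoint(p) { return (p.start + p.end) / 2; }
function sensorsInSync(pulsesA, pulsesB, tol, periodMs) {
    function hasPartner(p, others) {
        return others.some(function (o) { return Math.abs(midpoint(p) - midpoint(o)) <= tol; });
    }
    function unmatchedIsFatal(p) {
        var m = midpoint(p);
        // Unmatched pulses whose centre is within (tol + 100) ms of the start or
        // end of the observation period are ignored.
        return m > tol + 100 && m < periodMs - (tol + 100);
    }
    var failA = pulsesA.some(function (p) { return !hasPartner(p, pulsesB) && unmatchedIsFatal(p); });
    var failB = pulsesB.some(function (p) { return !hasPartner(p, pulsesA) && unmatchedIsFatal(p); });
    return !failA && !failB;
}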
The entry/entries in the Test Report XML for this step shall include the maximum measured difference in
milliseconds between the matched flash/tone pairs.
NOTE: The format of the entry in the Test Report XML is not defined. It is for visual reference only.
Possible locations for the entry are the testStepComment or testStepData tags.
NOTE: This section checks the mid-point of the flash. Experimental results show that typical TV “picture
improvement” processing can affect the detection of the flash start and end times. Experiments
suggest that the start time of the flash is affected more than the mid-point is. Particularly when
testing audio/video sync, choosing the mid-point makes the tests more robust.
For tests using this analysis method, the media should be a mix of black/silence and white/tones. The times of
the transitions are analysed. The transitions are black to white, white to black, silence to tone, and tone to
silence.
NOTE: This checks that the transitions found by each sensor can be matched to transitions in the other
sensor, within the specified limits. At the start or end, unmatched transition(s) are allowed so
long as they could plausibly match a transition that was outside the time period that was analyzed.
The analysis period is 15 seconds long. In the following description, the “trimmed analysis period” refers to the
analysis period except for the first 100 milliseconds and the last 100 milliseconds; i.e. the “trimmed analysis
period” is 14.8 seconds long. (This accounts for the fact that detecting a transition at the very start or end of the
analysis period may not be possible, so we don’t try to do that).
The decision about whether this analysis passes or fails is made as follows:
Note that the harness is not required to implement exactly the algorithm above; it may use any implementation
that provides exactly the same results as the algorithm above.
• HbbTVTestAPI.AVSYNC_SENSOR_AUDIO
• HbbTVTestAPI.AVSYNC_SENSOR_LIGHT_1
• HbbTVTestAPI.AVSYNC_SENSOR_LIGHT_2
These shall be defined both on the HbbTVTestAPI constructor, and on all instances of that class.
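As a small non-normative illustration, both of the following forms refer to the same constant (“testApi” is assumed to be an existing HbbTVTestAPI instance):
var sensorFromConstructor = HbbTVTestAPI.AVSYNC_SENSOR_AUDIO;   // via the constructor
var sensorFromInstance = testApi.AVSYNC_SENSOR_AUDIO;           // via an instance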
ARGS: stepId: the step number that has been performed (same as stepId in reportStepResult).
expectedBehaviour: The type of behaviour that is expected when analysing the network log
(an integer that represents the type of behaviour; see the table below):
NOTE: In case of low-level TCP errors, such as a refused TCP/IP connection, the full URL will
not be available in the recorded traffic log. The host part of the URL will be resolved to an
IP address and a port number. The IP address is determined by parsing the DNS request,
which occurs when the DUT resolves the host name to its local network IP.
timeout: Time in milliseconds to log network traffic. Logging of network traffic starts sometime
after analyzeNetworkLog() was called but before the callback function analysisStartedCB() is
invoked, and runs for this length of time.
comment: a comment from the test developer describing what the analysis actually does (same as
reportStepResult)
check: a textual description detailing which checks to perform on the network log. This is the
only criterion that shall be used for the assessment of this analysis call. This allows the network log
to be recorded (duration specified by timeout parameter) and then processed later on. An empty or
null string shall cause the test to fail.
callback/callbackObject: a callback function to invoke when the analysis has been made (also see
chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis must cause the complete test to fail, independently of the test result reported back by the
reportStepResult function. This is because the analysis can also be performed off-line on a
recorded network traffic log.
Network log recording/analysis starts just before the callback function analysisStartedCB is invoked, and it
should end after timeout milliseconds. After the specified timeout, the test harness should stop recording/analysis
of the network log. Once the analysis is completed, the callback function is invoked.
Invocation of the callback does not mean that the analysis was successful (it may be postponed to a later stage).
If the specified network problem occurs fewer times than specified by minLogCount or more times than
specified by maxLogCount, the test harness should automatically fail the test.
7.10.1 Introduction
This section defines a test API that can be used to measure a switch from one AV stream to another, i.e.
changing from a broadcast audio and video to a broadband audio and video, or vice-versa. This API requires the
streams to have special video and audio contents, which are also defined in this section. Note that the API does
not impose any restrictions on stream codecs, frame rates, resolutions etc.; the stream requirements only apply
to the video shown on the TV screen and the sound emitted from the TV’s audio out (typically its headphone
jack).
7.10.2 Overview
Each video contains QR codes giving the timestamp (i.e. the playback position in milliseconds). A camera is
used to capture this, and the video is analysed to detect which timestamps were played. Due to the nature of PC
webcam capture, including the low frame rate (50fps == 20ms per frame), this cannot reliably detect exactly
when playback stopped and started.
Each video also contains two rectangles, one white and one black. In one video rectangle 1 is white; in the
other, rectangle 2 is white. A light sensor is positioned over each rectangle, being careful that the
light sensor cable does not cover the QR codes. The light sensors are used to determine when (according to
UTC time) the first video ceases to be displayed (rectangle goes black) and when the second video starts to be
displayed (other rectangle goes white).
Each audio contains a continuous tone, with a changing volume that encodes the audio timestamps. The two
audios use different frequencies, so the switch from one audio to the other can be detected by just looking at the
audio frequency. The TV audio is both captured by the PC’s audio in, which provides a good recording that is
not synchronized to anything else, and is to some extent measured by the same device that is measuring the light
sensors. The precise timing information from the device is used to synchronize the recording with the UTC
time, and that recording will then be analysed to determine when the first audio stopped and when the second
started, both as UTC times and as timeline times.
Xv_timeline The time the first video stopped, measured on its own timeline, in milliseconds.
Xv_UTC The time the first video stopped, measured on an independent clock (set to UTC).
Sv_timeline The time the second video started, measured on its own timeline, in milliseconds.
Sv_UTC The time the second video started, measured on an independent clock (set to UTC).
Xa_timeline The time the first audio stopped, measured on its own timeline, in milliseconds.
Xa_UTC The time the first audio stopped, measured on an independent clock (set to UTC).
Sa_timeline The time the second audio started, measured on its own timeline, in milliseconds.
Sa_UTC The time the second audio started, measured on an independent clock (set to UTC).
The measured values listed above can be used to verify the switch happened correctly, given the desired switch
time, the expected playback media timeline time, and the values of A, Dmin, Dmax. Here “A”, “Dmin” and
“Dmax” have the same meaning as the values in the HbbTV TA specification called either A1, D1min and
D1max or A2, D2min and D2max, depending on whether the switch being analysed is broadcast-to-broadband
or broadband-to-broadcast.
The frequency indicates whether the stream is broadcast or broadband. The frequencies are:
• Broadcast: 2500 Hz
• Broadband: 1500 Hz
The two volume levels are used to encode a sequence of binary data:
The data shall be encoded with an error correcting code (FEC) consisting of a Hamming(31,26) code truncated
to 21 bits, followed by an even parity bit. This contains 16 bits of data, and allows 1-bit errors to be corrected
and 2-bit errors to be detected.
To put that definition of the FEC another way, the bits transmitted are calculated as follows:
Bit position   Bit   Value
3 d1 d1
5 d2 d2
6 d3 d3
7 d4 d4
8 p8 d5 ^ d6 ^ d7 ^ d8 ^ d9 ^ d10 ^ d11
9 d5 d5
10 d6 d6
11 d7 d7
12 d8 d8
13 d9 d9
14 d10 d10
15 d11 d11
18 d13 d13
19 d14 d14
20 d15 d15
21 d16 d16
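A non-normative sketch of that calculation is shown below. It assumes the standard Hamming layout implied by the table (data bits at the non-power-of-two positions 3 to 21, parity bits at positions 1, 2, 4, 8 and 16, each covering the positions whose index includes it, so the position-8 parity is d5^…^d11 as above); whether the final even parity bit covers all 21 preceding bits is also an assumption.
// Sketch: encode 16 data bits (d1 first) into the 22 transmitted bits.
function encodeFec(dataBits) {
    var code = [];                                     // code[1]..code[21]; index 0 unused
    for (var pos = 1, d = 0; pos <= 21; pos++) {
        var isParityPos = (pos & (pos - 1)) === 0;     // positions 1, 2, 4, 8, 16
        code[pos] = isParityPos ? 0 : dataBits[d++];
    }
    [1, 2, 4, 8, 16].forEach(function (p) {
        for (var pos = p + 1; pos <= 21; pos++) {
            if (pos & p) code[p] ^= code[pos];         // e.g. code[8] = d5 ^ d6 ^ ... ^ d11
        }
    });
    var overallParity = 0;
    for (var i = 1; i <= 21; i++) overallParity ^= code[i];
    return code.slice(1).concat([overallParity]);      // 21 code bits followed by the even parity bit
}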
From the decoded 16 bits of data, the most significant 4 bits are unused and shall be set to zero. The remaining
12 bits are the "embedded timestamp".
The "embedded timestamp" is measured in seconds, and indicates the media timeline time precisely at the start
of the 80ms high volume tone that preceded it, modulo 4096.
If an underlying timeline (e.g. PTS or TEMI) wraps, the media timeline time keeps counting up.
Note: For the switchMediaPresentation() API, the Targeted Advertising specification allows an app to specify
the switch point using this “keeps counting up” representation of the media timeline, or a representation where
the media timeline time goes to 0 at the wrap point. For analyzeAvSwitchPerformance(), we only support the
“keeps counting up” representation. This applies to the timestamp encoded in the audio, the timestamp encoded
in the video, and the API parameters. If necessary, it is fine to pass different representations of the same
timestamp to these functions.
• A solid box at the bottom left of the screen, where Light Sensor 1 is positioned (see the
Synchronization part of the specification). This shall be solid black for broadcast content, and solid
white for broadband content.
• A solid box at the bottom right of the screen, where Light Sensor 2 is positioned (see the
Synchronization part of the specification). This shall be solid black for broadband content, and solid
white for broadcast content.
• Elsewhere on the screen, QR codes indicating whether the video is broadcast or broadband, and the
current timeline time.
• Elsewhere on the screen, to aid debugging, human-readable text saying whether it is broadcast or
broadband, and giving the current timeline time.
The QR codes shall be placed in a grid that is 6 wide and 2 high. There shall not be any visible grid lines. The
space between QR codes shall be exactly 6 QR modules in size. The space between a QR code and the edge of
the QR area shall be at least 6 QR modules in size. The background of the QR area, including the Quiet Zones
and wherever a QR code is not displayed, shall be black. When a QR code is displayed, it shall be displayed in
white-on-black (i.e. inverted). QR codes shall be displayed the normal way up, i.e. with the three main
alignment marks (the nested squares) at the top left, top right, and bottom left. The QR module size shall be an
integer number of pixels, and modules shall be pixel-aligned. The QR modules should be as large as possible.
The QR codes must be in the title-safe area of the screen, although the Quiet Zone may extend outside that area.
It is expected that the QR grid will be displayed at different sizes for different video resolutions.
The QR codes are on different phases, so one new QR code is drawn and one old QR code is removed each
frame. There will always be 4 QR codes drawn in each frame, with the other 4 missing.
The order they are drawn is as follows. These numbers are also the QR code “position ID” which is encoded
into the QR code:
Corner 1 2 3 4 Corner
Note that the first frame requires special handling – you have to pretend that you drew the preceding 3 frames,
and generate 4 QR codes with 3 of them being “in the past” by different amounts.
The timecode is a number from 0000000 to 4095999 inclusive. It is always 7 digits long, padded with zeroes on
the left if needed. It gives the media timeline time when the frame starts being displayed, in milliseconds,
modulo 4096000 milliseconds. (This number is chosen so the video and audio embedded timestamps wrap at
the same time).
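For illustration only, the timecode for a frame could be derived as follows:
// Sketch: 7-digit, zero-padded timecode, modulo 4096000 ms.
function frameTimecode(mediaTimelineMillis) {
    var t = Math.floor(mediaTimelineMillis) % 4096000;
    return ("0000000" + t).slice(-7);                  // e.g. 83250 -> "0083250"
}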
If an underlying timeline (e.g. PTS or TEMI) wraps, the media timeline time keeps counting up.
Note: For the switchMediaPresentation() API, the Targeted Advertising specification allows an app to specify
the switch point using this “keeps counting up” representation of the media timeline, or a representation where
the media timeline time goes to 0 at the wrap point. For analyzeAvSwitchPerformance(), we only support the
“keeps counting up” representation. This applies to the timestamp encoded in the audio, the timestamp encoded
in the video, and the API parameters. If necessary, it is fine to pass different representations of the same
timestamp to these functions.
Frame 0
2 7
3 8
4 9
5 10
The timestamp passed to the callback specifies a UNIX timestamp, split into an integer number of seconds and 0
to 999999 microseconds.
This function shall be accurate to within +0/-1000ms. I.e. when the callback is called, the passed timestamp
may be up to 1 second earlier than the actual time, but will not be later than the actual time.
NOTE: This function is expected to be accurate to within about +0/-100ms normally, if the terminal has
reasonably good performance.
The network connection to the harness must be available when this function is called, otherwise it fails the entire
test case automatically.
These shall be defined both on the HbbTVTestAPI constructor, and on all instances of that class.
This analyses a switch from one AV to another AV. The first AV is referred to as the “from” AV and the
second is referred to as the “to” AV.
ARGS: stepId: the step number for this analysis (same as stepId in reportStepResult).
from_timeline_time_millis: The requested switch time, as a timeline time on the “from” video
and audio, or null if the switch is being done based on the “to” timeline time. Note that the
timeline time repeats every 4,096,000 milliseconds (4096 seconds), so this is interpreted modulo
4,096,000. As a convenience for test case authors, non-integer values are accepted but may be
rounded to the nearest integer.
to_timeline_time_millis: Where the “to” video and audio should start playing, as a timeline time
on the “to” video and audio, or null if this is not to be tested. This has two different meanings: If
from_timeline_time_millis is null then this is the switch time, if from_timeline_time_millis is not
null then this is the playback start time. Those two values have different tolerances. Note that the
timeline time repeats every 4,096,000 milliseconds (4096 seconds), so this is interpreted modulo
4,096,000. As a convenience for test case authors, non-integer values are accepted but may be
rounded to the nearest integer.
from_fps: The “from” video frame rate, in frames per second. Used to calculate some tolerances.
Values greater than 1000 are not allowed. This does not imply that the API will function correctly
with values up to 1000.
to_fps: The “to” video frame rate, in frames per second. Used to calculate some tolerances.
Values greater than 1000 are not allowed. This does not imply that the API will function correctly
with values up to 1000.
optional: Whether the switch is allowed not to happen. If set to true, it checks that either the switch
has happened (within the accuracy for the requested profile) or it has not happened at all. If set to
false, then it checks that the switch has happened (within the accuracy for the requested profile).
callback/callbackObject: a callback function to invoke when the analysis was made (also see
chapter 7.2.3 Callbacks). The analysis result is not passed back to the application. A failed
analysis must cause the complete test to fail. This is because the analysis may be performed after
the test has completed.
to_timeline_time_early_millis: If not null and not undefined, this is the allowed inaccuracy of
the playback start time. The actual playback start time may be this number of milliseconds before
the time given by to_timeline_time_millis.
NOTE: Test cases can calculate expected_switch_millis in several ways, including:
1. At an appropriate time at least (expected switch UTC accuracy + minimum audio duration to guarantee
decoding) == (5 + 2) == 7 seconds before the expected switch UTC time, start monitoring the audio
and video outputs of the terminal
2. Continue monitoring until either at least 2 seconds after the switch has completed, or at least (expected
switch UTC accuracy + max switch duration + minimum audio duration to guarantee decoding) == (5
+ 5 + 2) = 12 seconds after the expected switch UTC time.
NOTE: The max switch duration of 5 seconds here is fairly arbitrary, it is chosen to be “longer
than any reasonable terminal could take”. If (required_accuracy_millis +
required_duration_max_millis) exceed this value, then the analysis becomes unreliable.
3. Either immediately, or at a later time after the test has finished running:
a. Analyse the captured data as described below.
b. Report the step as passed or failed.
c. If failed, mark the complete test case as failed.
4. Unless the step is known to have failed, call the JS callback function.
1. Analyse the QR codes. (Note: This analysis step can be done e.g. on a webcam capture):
a. Run QR recognition on the video.
b. Check that the QR codes in the video at the start of the capture (the "from" video) indicate
broadcast video if HbbTVTestAPI.SWITCH_BROADCAST_TO_BROADBAND was
passed, or broadband video if HbbTVTestAPI.SWITCH_BROADBAND_TO_BROADCAST
was passed
c. Check that the "from" video ceased being displayed.
d. Check that the last frame of "from" video was not displayed for an excessive duration (based
on the video frame rate passed as a parameter).
NOTE: The HbbTV TA spec says 1 frame duration, but the harness is not required to
measure that accurately.
e. Check the "from" video was playing at a normal frame rate without discontinuities or rate
changes.
f. If A is not null, and from_timeline_time_millis is not null, check that all the following are
true:
i. from_timeline_time_millis <= Xa_timeline <= from_timeline_time_millis + A
ii. from_timeline_time_millis <= Xv_timeline <= from_timeline_time_millis + A
g. If Dmin is not null, check that all the following are true:
i. Dmin < D
h. If Dmax is not null, check that all the following are true:
i. D <= Dmax
i. If A is not null, and from_timeline_time_millis is null, and to_timeline_time_millis is not null,
and to_timeline_time_early_millis is null or undefined, check that all the following are true:
i. to_timeline_time_millis <= Sa_timeline - (Sa_UTC – X_UTC)
ii. to_timeline_time_millis <= Sv_timeline - (Sv_UTC – X_UTC)
iii. min(Sa_timeline - (Sa_UTC – X_UTC), Sv_timeline - (Sv_UTC – X_UTC)) <=
to_timeline_time_millis + S_tolerance + A
j. If from_timeline_time_millis is not null, and to_timeline_time_millis is not null, and
to_timeline_time_early_millis is null or undefined, check that all the following are true:
i. to_timeline_time_millis <= Sa_timeline
ii. to_timeline_time_millis <= Sv_timeline
iii. min(Sa_timeline, Sv_timeline) <= to_timeline_time_millis + S_tolerance
k. If to_timeline_time_early_millis is not null and not undefined, check that all the following are
true:
i. to_timeline_time_millis - to_timeline_time_early_millis <= Sa_timeline
ii. to_timeline_time_millis - to_timeline_time_early_millis <= Sv_timeline
iii. min(Sa_timeline, Sv_timeline) <= to_timeline_time_millis +
to_timeline_time_late_millis
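For example, check (j) could be implemented as in the following non-normative sketch, where sTolerance stands for S_tolerance (its derivation is not reproduced here):
// Sketch of check (j): from_timeline_time_millis and to_timeline_time_millis are
// both non-null, and to_timeline_time_early_millis is null or undefined.
function checkJ(toTimelineTimeMillis, saTimeline, svTimeline, sTolerance) {
    return toTimelineTimeMillis <= saTimeline &&
        toTimelineTimeMillis <= svTimeline &&
        Math.min(saTimeline, svTimeline) <= toTimelineTimeMillis + sTolerance;
}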
A harness is free to use any implementation it likes, so long as the results are the same as the procedures above.
NOTE: A fully automated implementation of this test API is possible, where the harness determines pass
or fail without any user input.
Test Specifications under development shall indicate their intermediate status by a draft number.
When a new version of the HbbTV Technical Specification has been created and approved, a new version of the
Test Specification document shall be created.
The Test Specification document also includes references to the several parts of the Test Suite. This includes a
list containing all necessary Test Cases (stored in the HbbTV Test Repository) which must be successfully
performed by the DUT, in order to finish the technical certification process. This list is
“List_of_approved_Test_Material__x_xx.txt”, where “x_xx” in the filename refers to the Test Suite Version.
During the incremental process of HbbTV testing, above mentioned events 1, 2 or 3 might happen. This means
that either:
• Existing Test Cases are challenged and therefore excluded from the list of approved Test Material
documents.
All this will lead to a new version of the Test Specification document including the list of approved Test
Material which refers to the applicable Test Cases stored in the Test Repository.
In this example the HbbTV technical specification has a version 1.1.1, which is bound to a version 1 HbbTV test
specification which makes use of Test Cases that shall have a version 1.
Test Specification and Test Case version numbers shall only have integer numbers.
• Test Cases may be improved, upgraded or corrected due to a challenge. In this case the ID number will
be kept identical and the version number increases.
• Test Cases may be added to further increase the coverage of testing. In this case a new ID number will
be assigned and the version number starts with 1.
In any of the above cases a new version of the Test Specification shall be created and the existing version shall
be made obsolete once the new version Test Specification has been formally released.
Figure 8: Example of creating a new version Test Specification while keeping the existing
version of the Technical Specification
• Test Cases may be improved, upgraded or corrected due to a change in the new Technical
Specification. In this case the ID number will be kept identical and the version number increases.
• Test Cases may be added due to new technical requirements in the Technical Specification. In this case
a new ID number will be assigned and the version number starts with 1.
In any of the above cases a new version of the Test Specification shall be created. The new HbbTV Test
Specification shall contain the associated HbbTV Technical Specification Version in its title.
Figure 9: Example of creating a new version Test Specification after creating a new version of
the Technical Specification
• Test Case ID: If the test is changed (e.g. because a problem is identified, or an implementation is
added), the test case ID stays the same. If the complete test case must be changed to reflect a modified
section of a new HbbTV specification version, a new test case (with a new ID) is created.
• Test Case Version: The version attribute is deprecated. The Test Case ID shall be unique, different
from any other HbbTV Test Case.
For Test Case IDs starting “org.hbbtv_”, control of the use of these IDs is administrated by the Test Group.
• Test Case ID: Identifier for the test case being reported (as specified in 6.3.1.1).
• Test Case Version: Version of the test case executed (as specified in 6.3.1.2).
The remainder of the Test Case Result consists of a sequence of the following major sections:
• Test Performed by
• Remarks
• Verdict
9.1.1.1 Model
Text string providing a name for the DUT referring to a specific model or a family name of derivative models
(e.g. the same platform with different screen sizes).
9.1.1.4 Company
Name of the company for whom the test is being executed. Typically this would be the manufacturer of the
DUT.
NOTE: The HbbTV Version given here doesn’t necessarily match the version of the Test suite used. (E.g.
a DUT supporting HbbTV 1.2.1 could try to run the tests from the HbbTV 1.1.1 Test Suite).
This field is mandatory if a Test Report is going to be used for HbbTV compliance purposes. (If the XML
schema is being reused by another testing group, and they do not require HbbTV compliance, then this field is
optional).
Available options are +DL for download functionality, +DRM for DRM functionality, +PVR for PVR
functionality, +SYNC_SLAVE for slave operation in inter-device synchronisation. Multiple requirements are
concatenated to a single string without spaces in between. Example: +DL+PVR
NOTE: This should list the options tested, not necessarily those claimed supported.
9.1.1.7 Settings
The value of any settings configured in the harness and adapted to by the test case as described in Section
6.3.4.1 adaptsTo.
9.1.2.1 Name
This should be the test operator’s full name.
9.1.2.2 Company
The test operator’s company name.
9.1.2.3 Email
The test operator’s email address or another contact email address for issues regarding the test result.
9.1.3.3.1 Index
Index number of the executed test step as defined in the Test Case.
Specific results from the execution of the test step. This includes the following mandatory attributes:
9.1.3.4.1 id
Index (integer) of the result being reported.
9.1.3.4.2 type
The MIME type of the result being reported. E.g. image/png or text/plain.
9.1.3.4.3 href
Relative hyperlink to the result data in a sub-directory. E.g. "./images/screenshot1.png". This path shall:
• use only characters that are valid in filename paths on Windows, Unix/Linux and Apple machines,
• start with either a valid folder name, filename or the “.” character, and not with “/”, “//” or any OS
dependent drive or machine specifier. I.e. it must be a relative path.
9.1.3.5.1 Timestamp
UTC timestamp for time at which Test Harness output was started for this test
• PASSED: Test met the pass criteria specified in the Test Case.
• The root directory shall contain one or more Test Suite Results directories, where each directory
corresponds to results from a single Test Suite. A typical Test Report for the official HbbTV Test Suite
shall contain a single directory.
• Each Test Suite Results directory contains only the results of tests from the corresponding Test Suite.
Note that the HbbTV official tests are only contained in one Test Suite, therefore there will only be one
Test Suite Results directory needed. However, other standards or trademark licensors may require
results from multiple Test Suites in their Test Report.
• The name of the Test Suite Results directory is unspecified, but an informative name that contains the
version of the Test Suite is recommended e.g. HbbTV-1_2_1-TestSuite-v1_0_0.
• Each Test Suite Results directory shall only contain a Test Case Result directory for each test case that
has been executed.
• The name of each Test Case Result directory shall be in the following format, where {test_case_id} is
the Test Case ID of the test case that the results pertain to:
- {test_case_id}
• The Test Case Result directory shall contain one Test Case Result XML document in the format
specified in section 9.2.
• The filename of the Test Case Result XML document shall be in the following format:
- {test_case_id}.result.xml
• All files referenced by Test Step Data shall be stored in the Test Case Result directory where the Test
Case Result XML document is located or a subdirectory therein.
Example:
HbbTV-1_2_1-TestSuite-v1_0_0
    org.hbbtv_TEST1
        org.hbbtv_TEST1.result.xml
    org.hbbtv_TEST2
        org.hbbtv_TEST2.result.xml
        screenshot1.jpg
    […]
The purpose of this activity is to check the validity of files that are part of the Test Material in an automatic and
objective fashion wherever possible. E.g. is the HTML valid according to the W3C validator, has a Transport
Stream been run through a TS analyzer, etc.
The following table lists the file types that are expected to be validated. The right hand column indicates tools
that are used. The Test Coordination Group reserves the right to use alternative tools.
B.1 Redmine
Redmine (https://hbbtv.org/redmine) is an online project management tool used by HbbTV for many of its
specification and testing project management needs. There are currently four projects used by the Testing Group
for test suite management:
2) HbbTV Test Review is used to track faults identified during test case development and review
4) Test Material Support is used by test suite customers to request help in use of the released test suites.
Members of the Testing Group are eligible to use all of these projects. If you require access please contact
[email protected].
B.2 Subversion
Subversion is a well-known version control system. It is used to store Test Cases and tools for their
development management. This is also the location of schema files for the various standard file formats used for
test case creation and test harness configuration. The repository is located at https://hbbtv.org/testing-repo.
https://valid.sfsg2.catest.starfieldtech.com
https://aaacertificateservices.comodoca.com
https://addtrustclass1caroot.comodoca.com
https://addtrustexternalcaroot-ev.comodoca.com
https://addtrustpubliccaroot.comodoca.com
https://addtrustqualifiedcaroot.comodoca.com
https://comodocertificationauthority-ev.comodoca.com
https://comodorsacertificationauthority-ev.comodoca.com
https://securecertificateservices.comodoca.com
https://trustedcertificateservices.comodoca.com
https://usertrustrsacertificationauthority-ev.comodoca.com
https://utnuserfirsthardware-ev.comodoca.com
https://baltimore.omniroot.com
https://ev.omniroot.com
https://assured-id-root.digicert.com/testroot
https://assured-id-root-g2.digicert.com
https://global-root.digicert.com/testroot
https://global-root-g2.digicert.com
https://ev-root.digicert.com/testroot
https://trusted-root-g4.digicert.com
https://2021.globalsign.com
https://2029.globalsign.com
https://2028.globalsign.com
https://valid.gdi.catest.godaddy.com
https://valid.gdig2.catest.godaddy.com
https://valid.sfi.catest.starfieldtech.com
https://valid.sfig2.catest.starfieldtech.com
https://ssltest14.bbtest.net
https://ssltest19.bbtest.net
https://ssltest22.bbtest.net
https://ssltest21.bbtest.net
https://ssltest20.bbtest.net
https://ssltest34.bbtest.net
https://ssltest6.bbtest.net
https://ssltest8.bbtest.net
https://ssltest3.bbtest.net
https://ssltest1.bbtest.net
https://ssltest26.bbtest.net
https://comodoecccertificationauthority-ev.comodoca.com
https://usertrustecccertificationauthority-ev.comodoca.com
https://assured-id-root-g3.digicert.com
https://global-root-g3.digicert.com
https://2038r4.globalsign.com
https://2038r5.globalsign.com
https://ssltest42.ssl.symclab.com
https://ssltest40.ssl.symclab.com
https://ssltest48.ssl.symclab.com
Initial testing will usually be carried out by the manufacturer (or a 3rd party test house on their behalf), because
they are the people who can fix any problems found. Operators may also run tests themselves (or get a 3rd party
test house to run the tests on their behalf) to verify compliance.
A lot of the OpApps specification depends on the choices that the operator and/or manufacturer have made. The
operator and manufacturer agree a “bilateral agreement” which specifies many of the details that affect testing,
such as keys and domain names and which options the manufacturer implements. So testing has to take place in
the context of a specific bilateral agreement.
There are two approaches to certification for a specific operator: Testing can be carried out with real encryption
keys and certificates, or with special test keys and certificates. Using real keys is more realistic, but using test
keys may be more secure. It is up to the operator which approach is used. If the operator chooses to test with
real keys and certificates, then the operator must arrange to provide such. If the operator chooses to test with test
certificates, then the operator should provide them to ensure they are consistent with the structure of the live
certificates.
A manufacturer may want to test their functionality before a bilateral agreement with a real operator exists. To
do this, they can invent a fake operator just for testing purposes. A manufacturer may choose to do that to test
their in-development terminal, or to test discovery methods that no currently-supported operator uses. This
allows them to prove their software with the intent of updating their software later to add a real operator (or
more real operators). In this case, since the operator is invented, the manufacturer can invent the corresponding
“bilateral agreement” too, to ensure the features they want tested get tested.
A manufacturer can make a device that supports multiple operators. In this case, they may wish to repeat testing
for each bilateral agreement they have.
Alternatively, a manufacturer may wish to do a single test run that exercises every feature used by every
operator they wish to support, which may be much quicker. To do that they can invent a fake operator with an
invented “bilateral agreement”, and test against that fake operator. The invented “bilateral agreement” needs to
include every feature needed by any real operator that they want to support.
NOTE: A tester must be able to reset the device under test to its as-shipped condition in order to be able to run
the test case org.hbbtv_OPAPP_INSTALL12.
• The ability to configure the test harness’s TLS Server, which requires:
o Knowing the domain name for the Test TLS Server (may be specified by the bilateral
agreement)
o the Test TLS Server certificate, corresponding private key, and corresponding certificate chain
o Other TLS certificates, corresponding private keys, and corresponding certificate chains
required for specific test cases:
▪ Revoked Certificate (for org.hbbtv_OPAPP_INSTALL66)
▪ Expired Certificate (for org.hbbtv_OPAPP_INSTALL65)
▪ Potentially more to test additional failure conditions
o If client certificates are specified in the bilateral agreement, then Client CA root certificate(s)
• The ability to create signed OpApps, which requires:
o Operator Signing Certificate, corresponding private key, and corresponding certificate chain
o Alternatively, if using real keys then signing the test OpApps will normally be done by the
operator, so they can keep control of their private key.
o The OpApps are:
▪ Launcher OpApp
▪ Stub OpApps, listed in section D.3.3.1 List of Stub OpApps
• Terminal Packaging Certificate (specific to the terminal being tested, and specified by the bilateral
agreement)
o Note that the OpApps are signed first, then encrypted with this certificate, so one set of signed
OpApps can be reused for all terminals that support that operator.
• A terminal that has been loaded with the relevant operator’s:
o discovery information
o if applicable: operator-specific TLS server root CA certificate(s)
o OpApp signing root CA certificate(s)
o terminal packaging private key
o if applicable: client certificate and private key, and any intermediate certificates
• The ability to configure the Test Harness with some of the information from the Bilateral Agreement in
key/value pairs. These values shall be available to the test case through the JS APIs defined in “D.2.3
JS API to query discovery settings” and “D.2.5 Void”.
NOTE: It may be useful to define the set of test cases to be executed based on the preconditions in the test
cases and the bilateral agreement at the time that the bilateral agreement is developed.
• The “optional features” category is for things that can be handled with the existing optional features
preconditions mechanism described in “6.3.3.2 Optional Features” and with the additional
features defined in “D.3.5.1 Operator Application Optional Features”.
• The “discovery” category covers everything needed to configure the test harness’s TLS server and
install and run the launcher application.
• The “extended” information is everything else.
• Default OpApp discovery method. Mandatory. The tester must choose 1 from the following 8 options:
NOTE 2: A particular test harness may not support all of these options. E.g. a test harness may
decide not to support method 5. Such a harness is not suitable for certifying terminals
that claim to support method 5, but can be used to test and certify terminals that do not
support method 5.
• TLS server domain name. Mandatory. If the terminal supports discovery mode 3, this must match the
domain name from the hardcoded URL.
• TLS server port number A (for most HTTPS usage). If the terminal supports discovery mode 3, this
must match the port number from the hardcoded URL. Mandatory. Default is 443.
• TLS server port numbers B, C, D, E. (Only used by a small number of test cases). Mandatory.
Defaults may be provided by the harness.
• The normal “TLS certificate bundle”. (This is referred to in this appendix as the “good”
configuration). For the purposes of this appendix, a “TLS certificate bundle” consists of:
o TLS server certificate file. Mandatory.
o TLS server private key file. Mandatory.
o TLS server intermediate certificates file. Optional.
• The “TLS certificate bundle” (as described above) for the TLS server which will cause a TLS error
because the host name does not match the certificate. (This is referred to in this appendix as the “host-
name-mismatch” configuration).
• The “TLS certificate bundle” (as described above) for the TLS server which will cause a TLS error
because the certificate is self signed. (This is referred to in this appendix as the “self-signed”
configuration).
• The “TLS certificate bundle” (as described above) for the TLS server which will cause a TLS error
because the certificate has a server IP address that is not the same as the IP address the harness is using
for this server. (This is referred to in this appendix as the “ip-address-mismatch” configuration).
• The “TLS certificate bundle” (as described above) for the TLS server which will cause a TLS error
because the certificate has expired. (This is referred to in this appendix as the “expired-certificate”
configuration).
• The “TLS certificate bundle” (as described above) for the TLS server which will cause a TLS error
because the certificate is on the CRL (i.e. it is revoked). (This is referred to in this appendix as the
“revoked-certificate” configuration).
• Client CA(s) for TLS server client certificates (a single file that may contain multiple CA certificates).
Optional. If not specified, then client certs are not expected to be used.
• Number of intermediate certificates allowed for TLS server client certificates. Cannot be set if Client
CA(s) is not set, mandatory if Client CA(s) is set.
• Network ID (a default value [matching the pre-existing base stream] will be used if not explicitly
configured)
• Organization ID (a default value of 0x400 == decimal 1024 will be used if not explicitly configured).
To be used for OpApps and for regular HbbTV applications that need to share the same organization
ID as the OpApps. Must not match the normal Organization ID used by HbbTV tests as defined in
“7.5.1.7 Choose the correct application and organization ID for your application”.
NOTE: Another reason this should not match the normal Organization ID used by HbbTV tests
is because some test cases need a normal HbbTV app with a different orgid, so it is
natural to use the existing HbbTV testing Organization ID for the normal HbbTV apps
and a different Organization ID for the operator apps.
• Application ID to be used for the stub OpApp var_app_id for use in test cases where the stub OpApp
has to have a specific application ID to allow the terminal to launch it itself without direct input from
the testcase. Legal values are any integer in the range 0-65535. Optional, defaulting to 16. If a
bilateral agreement requires different values to be used for this app in different test cases, the tester
should re-configure the harness before running the different test cases.
• Bouquet ID (optional unless discovery method 1b or 6b is supported by the terminal, in which case it
is mandatory)
• Hardcoded operator FQDN for discovery method 2. (Mandatory if discovery method 2 is supported by
the terminal, otherwise not used. Note that a terminal may support multiple discovery methods, if the
terminal supports discovery method 2 then this must be set even if a different discovery method is
being used by default)
• Second hardcoded operator FQDN for discovery method 2. (Optional if discovery method 2 is
supported by the terminal, otherwise not used)
• Hardcoded URL suffix for XML AIT for discovery method 3. This is a URL suffix, i.e. it does not
include the “https://” bit, or the host name or port which will be taken from the harness’s TLS server
configuration. (Mandatory if discovery method 3 is supported by the terminal, otherwise not used.
Note that a terminal may support multiple discovery methods, if the terminal supports discovery
method 3 then this must be set even if a different discovery method is being used by default).
• Second hardcoded URL suffix for XML AIT for discovery method 3. (Optional if discovery method 3
is supported by the terminal, otherwise not used).
• Whether the operator apps should be “privileged” or “operator-specific” by default. For terminals
supporting only one type of operator app, this shall be the supported type. For terminals supporting
both types of operator app, this shall be “operator-specific”. This information is configured by
configuring the harness with the DUT optional feature +OPAPP_OPERATOR or -
OPAPP_OPERATOR (see section D.3.5.1).
• Maximum uncompressed OpApp size in bytes.
This can only be called when the network is available. It calls the callback like this:
callback(callbackObject, opAppDiscoverySettings)
opAppDiscoverySettings is an object with the following methods, which all return immediately:
string OpAppDiscoverySettings.getTlsServerUri()
returns the URI to the TLS server used to serve OpApps. This shall be of the form “https://<domainname>/” or
“https://<domainname>:<port>/” – note that the HTTPS protocol and trailing slash are always included.
String OpAppDiscoverySettings.getOrgIdHex()
Returns the organization ID for operator apps, as a string in hex format with lowercase letters and no prefix.
(This matches the format used in DVB URIs).
int OpAppDiscoverySettings.getOrgIdInt()
Returns the organization ID for operator apps, as a number.
int OpAppDiscoverySettings.getVariableAppIdInt()
Returns the configured application ID used for the stub OpApp var_app_id, as a number.
String OpAppDiscoverySettings.getNidHex()
Returns the network ID used, as a string in hex format with lowercase letters and no prefix. (This matches the
format used in DVB URIs).
int OpAppDiscoverySettings.getNidInt()
Returns the network ID used, as a number.
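As a non-normative illustration of how a test case might use these getters inside the callback (the callback name is illustrative; the package URL follows the stub OpApp URL format shown later for “gen_a”):
// Sketch: using the opAppDiscoverySettings object passed to the callback.
function onDiscoverySettings(callbackObject, opAppDiscoverySettings) {
    var serverUri = opAppDiscoverySettings.getTlsServerUri();    // e.g. "https://test.operator.com/"
    var orgIdHex = opAppDiscoverySettings.getOrgIdHex();         // hex string, e.g. "400"
    var stubAppId = opAppDiscoverySettings.getVariableAppIdInt();
    // Build the URL of a stub OpApp package served by the harness TLS server:
    var pkgUrl = serverUri + "StubOpApps/packages/gen_a.pkg";
}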
D.2.4 Void
The previous content of this section has been moved to 7.2.13 Extended Settings
D.2.5 Void
The previous content of this section has been moved to 7.2.14 JS-Function getExtendedSetting
D.2.6 Void
The previous content of this section relating to getDutOptions has been moved to sections 6.3.4, 7.2.10,
7.4.4.3.1, and 9.1.1.7.
“Regular HbbTV Tests” are launched using the AIT, or are harness-based tests, as defined in sections “5.2.1.6
ECMAScript Environment” and “7.4.4 Transport stream requirements” of the current document.
All HbbTV tests that pre-date the Operator Applications specification are Regular HbbTV Tests.
“Launcher-based OpApp Tests” are launched using a special launcher OpApp as described in the “Starting
Launcher-based OpApp Tests” section below. A test case is a “Launcher-based OpApp Test” if and only if the
implementation.xml file has the <runOpAppPage> tag (defined below).
“OpApp Discovery tests” are test cases that test how an operator application is discovered and run. They have
special rules regarding launching, as described in the “Starting OpApp Discovery Tests” section below. They
may or may not include a test Operator Application (therefore some test cases may be harness-based only).
The Test Harness shall provide a “Launcher OpApp”. The Test Harness shall also provide appropriate OpApp
discovery signalling to enable the terminal to find the Launcher OpApp. The details of this OpApp discovery
signalling depends on the bilateral agreement. The tester shall install the Launcher OpApp on the terminal
before running the Launcher-based OpApp Test, the details of how this is done depend on the bilateral
agreement.
Each Launcher-based OpApp Test shall include this element in its implementation.xml file:
<runOpAppPage path="index.html5"/>
Where the specified path is relative to the directory containing the implementation.xml file. A test case is
classified as a “Launcher-based OpApp Test” if and only if it specifies such an HTML page.
When starting the test case, the Test Harness shall arrange for the Launcher OpApp to replace the HTML page
with the one specified by the runOpAppPage tag (e.g. using “window.location = …”). This means that as far as
the terminal is concerned there is a single OpApp that keeps running all the time, it just switches from the
Launcher HTML page to the test case’s HTML page (and back at the end of the test case). At the page switch,
the state of the OpApp shall be:
• Foreground state
• Visible
NOTE: It is expected that the Launcher will display a UI on screen that indicates that the
launcher is running. So the Launcher will be in Foreground state and visible. It is
expected that many test cases will want to show progress information on screen, like
other HbbTV test cases do, so will want to be in Foreground state and visible. Other
test cases will need to switch to a different state, and they can do that. But Foreground
& visible seems like the best state to default to, and it avoids unnecessary state
transitions.
The HTML page specified must load testsuite.js and create an HbbTVTestAPI object in the usual way.
However, it shall pass an argument to the HbbTVTestAPI constructor, which shall be the string “OpAppPage”.
I.e.:
var testApi = new HbbTVTestAPI("OpAppPage");
When a test case completes (whether pass, fail or error), or if a user cancels execution of a test case, then the test
harness provided code in testsuite.js may replace the HTML page with another one. The test harness may do that
to move back to the Launcher OpApp page.
There is a mechanism for a test case to specify steps to take after the test completes to undo any changes that
would otherwise be preserved across page navigation, as follows:
A test case may optionally provide a global function called tearDown(). If tearDown() is provided then the test
harness code in testsuite.js should call tearDown() immediately before replacing the HTML page. tearDown()
If the harness chooses to implement this redirection back to the Launcher HTML page, it should make a good-
faith effort to put the OpApp into the Foreground+Visible state, if it is not already in that state.
NOTE: A test case may be able to leave the terminal in a state such that the harness won’t be able to
recover to the Launcher page in the normal state. This is especially true on buggy terminals that
don’t comply with the specification. If getting back to the Launcher fails, the tester will have to
manually relaunch the Launcher OpApp, perhaps by power cycling the terminal.
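A non-normative sketch of the optional tearDown() hook described above (the timer is a hypothetical resource created by the test case):
// Sketch: undo changes before the harness navigates back to the Launcher page.
var pollTimer = setInterval(function () { /* hypothetical polling done by the test */ }, 1000);
function tearDown() {
    clearInterval(pollTimer);   // release the hypothetical timer
    // Undo any other changes that would otherwise persist across page navigation.
}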
A launcher-based test may launch a different OpApp with the createApplication(…, …, true) API. In this case,
the OpApp that is launched shall initialise the HbbTV Test API with a special argument, like this:
var testApi = new HbbTVTestAPI("OpAppReplacedLauncher");
In this case, when the test case ends the harness may optionally use createApplication(<harness-specific URL to
launcher app>, false, true) to return to the launcher application.
The launcher application behaviour is implementation-specific and the current document does not specify what
will happen if it is killed and then relaunched during the test case. Test cases should not do that. If a test case
needs an installed OpApp to be relaunched during the test case, the test case should use a stub application.
The stub OpApps will be served from the harness’s test TLS server. It is also possible to deliver the stub
OpApps via DSMCC.
All of the discovery tests will involve harness-based test code as well as OpApp(s). Most of those tests will
have a manual step at or near the beginning instructing the tester to start OpApp discovery.
Between tests, the test application should uninstall itself after running, as on some terminals only a single
operator application may be installed.
The test harness may be required to perform a factory reset after each test run.
If the test case has OPAPP_DISCOVERY_* preconditions (as defined in D.3.5.1 Operator Application
Optional Features) and the default discovery mechanism configured in the harness is not one which matches
those preconditions, then this is an error which the test harness shall detect and handle in a harness-dependent
way.
Identity   App ID   Version   Description
gen_b 2 0 Generic stub app for tests that need two or more operator apps. Also
supports alternate entry point, see below for details.
gen_c 3 0 Generic stub app for tests that need many operator apps.
gen_e 5 0 Generic stub app for tests that need many operator apps
gen_f 6 0 Generic stub app for tests that need many operator apps
gen_g 7 0 Generic stub app for tests that need many operator apps
gen_h 8 0 Generic stub app for tests that need many operator apps
huge 13 2 Same as “upg_v2”, except that the application package ZIP file
contains additional data (as well as that normally included in the stub
application) so that the total uncompressed size of the OpApp exceeds
the configured maximum uncompressed OpApp size by 10 kibibytes
var_app_id 16* 65535 For use in test cases where the stub OpApp has to have a specific
application ID to allow the terminal to launch it directly. AppId will be
replaced with the value configured in the harness discovery settings
according to the bilateral agreement (see D.2.2 Discovery
settings). Version is set as high as the highest possible
MinimumApplicationVersion so that field never prevents this
application from being installed.
E.g. for OpApp “gen_a” if the harness is configured with the domain name “test.operator.com”:
https://test.operator.com/StubOpApps/packages/gen_a.pkg
To embed the OpApp into DSMCC, use the <file> tag with a special path. To embed the OpApp’s icons into
DSMCC, use the <directory> tag with a special path. Typically the playoutset XML will contain something like
this:
<generatedData>
…
<dsmcc …>
<file dst="opapps/<name>.pkg"
src="HARNESS:/StubOpApps/packages/<name>.pkg"/>
<directory dst="icons/<name>"
When loaded, in “html” mode the stub OpApp will replace the HTML page with the one specified as the
“target” in the “<stubOpApp>” tag. Any URL parameters passed to the stub OpApp will be passed to the target
page in the same way. The OpApp should load the script /_TESTSUITE/RES/testsuite.js in the usual way to
use the “HbbTVTestAPI” class.
In “html-visible” mode the stub OpApp will replace the HTML page as described for “html” mode but tries to
make the OpApp visible first by checking the current state at startup. If the current state is Background or
Transient or Overlaid Transient, then the stub OpApp shall call opAppRequestForeground() from the load event
of its initial document and fail the test if that call fails.
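The following informative sketch (not part of the normative stub OpApp definition) illustrates this startup
behaviour. It assumes an OIPF application manager embedded object with the id “appmgr”, the OpApp-specific
Application members opAppState and opAppRequestForeground() from the OpApp specification, and illustrative
state string values; how the failure is reported to the harness is deliberately left as a comment.
// Informative sketch of the "html-visible" startup behaviour (assumptions as noted above).
window.addEventListener("load", function () {
    var target = "index.html5"; // example value of the <stubOpApp> "target" attribute
    var appMgr = document.getElementById("appmgr"); // <object type="application/oipfApplicationManager">
    var app = appMgr.getOwnerApplication(document);
    var state = app.opAppState; // assumed values, e.g. "background", "transient", "overlaid-transient"
    if (state === "background" || state === "transient" || state === "overlaid-transient") {
        var ok;
        try {
            ok = app.opAppRequestForeground();
        } catch (e) {
            ok = false;
        }
        if (!ok) {
            // Fail the test via the HbbTV Test API here (exact call omitted in this sketch).
            return;
        }
    }
    window.location = target; // then continue as in "html" mode
});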
In “script” mode, the specified JavaScript script is executed after parsing of the stub OpApp’s HTML is
complete but before the DOMContentLoaded event has fired (i.e. in a <script defer src=””> block). The
“HbbTVTestAPI” class will already be defined before the specified JavaScript script is executed, so the test code
must not try to load /_TESTSUITE/RES/testsuite.js. The stub app does not define any CSS rules, so the test
code can insert its own CSS rules. The <body> contains a single <div> with ID “stub-loading”, which the test
code will probably want to remove before it inserts its own HTML elements. Note that the test code cannot use
document.write; it should use the DOM or innerHTML.
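The following is an informative sketch of what test code delivered in “script” mode might look like; the element
id and text it inserts are purely illustrative, and the HbbTVTestAPI constructor arguments (if any) are as defined
for the Test API in clause 7.
// Informative sketch of "script"-mode test code. HbbTVTestAPI is already defined
// by the stub page, so /_TESTSUITE/RES/testsuite.js must not be loaded again.
var testApi = new HbbTVTestAPI();
// Remove the placeholder provided by the stub page before inserting test content.
var loading = document.getElementById("stub-loading");
if (loading) {
    loading.parentNode.removeChild(loading);
}
// Insert test-specific elements using the DOM (document.write cannot be used).
var status = document.createElement("div");
status.id = "test-status"; // illustrative element used only in this sketch
status.textContent = "Test running";
document.body.appendChild(status);
// ... the actual test steps using testApi would follow here ...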
The behaviour described in this section shall apply each time the stub OpApp is launched during a test case.
The contents of the standardised XML AIT, broadcast AIT, and HTML page for the stub app with identity
“gen_a” are defined below. For the other stub apps, replace “gen_a” throughout with the identity of the other
stub app, “GEN_A” with the identity of the other stub app converted to upper case, and the app ID, app
version, and MinimumApplicationVersion with the ones defined in section D.3.3.1 List of Stub OpApps.
For the stub app with identity “priv” replace {auto} with “urn:hbbtv:opapp:privileged:2017”.
The standardised XML AIT included in a stub app shall have Unix-style LF line terminators, shall end with a
single LF, and shall not have a UTF-8 BOM.
NOTE: Some terminals may do a binary comparison of the XML AIT used to discover the OpApp against
the XML AIT inside the OpApp, so it is essential that they match precisely (i.e., the files are byte-
for-byte identical).
The stub app with identity “gen_a” includes a file opapp.aitx which is the following XML AIT, with “{https}”
replaced with the URL of the HTTPS server, the orgId replaced by the configured organization ID, and the
ApplicationUsage replaced with the configured application kind:
<?xml version="1.0" encoding="UTF-8"?>
<mhp:ServiceDiscovery
xmlns:mhp="urn:dvb:mhp:2009"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<mhp:ApplicationDiscovery DomainName="hbbtv.de">
<mhp:ApplicationList>
<mhp:Application>
<mhp:appName Language="eng">Stub OpApp GEN_A</mhp:appName>
Some stub applications require a minimum application version to be signalled in the broadband XML AIT. This is
done as shown in the following example opapp.aitx template, which is for stub OpApp "mav_a_v3". The "hbb"
namespace declaration, the hbb:HbbTVOpAppApplicationDescriptor type on the application descriptor and the
<hbb:MinimumApplicationVersion> element are the differences required to signal the minimum application version:
<?xml version="1.0" encoding="UTF-8"?>
<mhp:ServiceDiscovery
xmlns:mhp="urn:dvb:mhp:2009"
xmlns:hbb="urn:hbbtv:opapp_application_descriptor:2017"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<mhp:ApplicationDiscovery DomainName="hbbtv.de">
<mhp:ApplicationList>
<mhp:Application>
<mhp:appName Language="eng">Stub OpApp MAV_A_V3</mhp:appName>
<mhp:applicationIdentifier>
<mhp:orgId>0</mhp:orgId>
<mhp:appId>14</mhp:appId>
</mhp:applicationIdentifier>
<mhp:applicationDescriptor xsi:type="hbb:HbbTVOpAppApplicationDescriptor">
<mhp:type>
<mhp:OtherApp>application/vnd.hbbtv.opapp.pkg</mhp:OtherApp>
</mhp:type>
<mhp:controlCode>AUTOSTART</mhp:controlCode>
<mhp:visibility>VISIBLE_ALL</mhp:visibility>
<mhp:serviceBound>false</mhp:serviceBound>
<mhp:priority>1</mhp:priority>
<mhp:version>03</mhp:version>
<mhp:mhpVersion>
<mhp:profile>0</mhp:profile>
<mhp:versionMajor>1</mhp:versionMajor>
<mhp:versionMinor>4</mhp:versionMinor>
<mhp:versionMicro>1</mhp:versionMicro>
</mhp:mhpVersion>
<mhp:icon mhp:filename="icons/mav_a_v3/dvb.icon.0001"/>
<mhp:icon mhp:filename="icons/mav_a_v3/dvb.icon.0008"/>
<mhp:icon mhp:filename="icons/mav_a_v3/dvb.icon.0040"/>
<mhp:icon mhp:filename="icons/mav_a_v3/dvb.icon.0200"/>
<hbb:MinimumApplicationVersion>2</hbb:MinimumApplicationVersion>
</mhp:applicationDescriptor>
<mhp:applicationUsageDescriptor>
<mhp:ApplicationUsage>{auto}</mhp:ApplicationUsage>
The HTML for the stub app with identity “gen_a”, which will be in “index.html” inside the app package, is
generated by taking the following and replacing “https://hbbtv1.test” with the URL of the HTTPS server:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Stub OpApp GEN_A</title>
<script>
(function () {
This allows tests to use the “hbbtv-package://2.71/alternate/entry_point.html” URI syntax to launch the app, and
check that the app launches and the alternate entry point is used.
If not using this feature, there is no need to configure it (even if you are using OpApp “gen_b” in the normal
way).
Values 101-199 inclusive are reserved for use by the test harness for other purposes.
Precondition Meaning
The path is relative to the test case XML file. It must refer to a HTML or XHTML page, or to a PHP script that
generates a HTML or XHTML page. This must be a path that exists. Note it is a path, not a URL, so URL
parameters & fragment references are not allowed. The specified page will be loaded from the harness’s HTTPS
server.
If you use the <runOpAppPage> tag then you must not specify an <opAppDiscovery> block in the
implementation.xml file.
For certain test cases it is necessary to intercept requests to download the OpApp package file to enable special
handling. Those test cases specify an extra attribute:
<stubOpApp ident="gen_a" target="index.html5" mode="html"
interceptDownload="get_package.php"/>
“ident” identifies which stub OpApp is used, and must be unique in a test case. Valid values are listed in the
table in section D.3.3.1 List of Stub OpApps.
“mode” configures what the OpApp does when it’s loaded. In “html” mode, it replaces the HTML page with
the one specified, e.g. using “window.location=…”. In “html-visible” mode the stub OpApp will replace the
HTML page but tries to make the OpApp visible first as described in D.3.3.3 Stub OpApp behaviour. In
“script” mode, it loads the specified JavaScript script into the stub OpApp’s initial HTML page.
The “target” is a path relative to the test case XML file. If “mode” is “html”, then “target” must refer to a
HTML or XHTML page, or to a PHP script that generates a HTML or XHTML page. If “mode” is
“script”, then “target” must refer to a JavaScript script, or to a PHP script that generates a JavaScript script.
This must be a path that exists. Note it is a path, not a URL, so URL parameters & fragment references are not
allowed. The specified page or script will be loaded from the harness’s HTTPS server.
If “target” ends “.html”, “.html5” or “.cehtml” then “mode” must be “html” and may be omitted. If “target” ends
“.js” then “mode” must be “script” and may be omitted. If “target” ends “.php” then “mode” must be specified
explicitly.
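These defaulting rules can be summarised by the following informative, harness-side sketch (the function name
is purely illustrative):
// Informative sketch of the "mode" defaulting rules described above.
function effectiveMode(target, declaredMode) {
    if (/\.(html|html5|cehtml)$/.test(target)) {
        return declaredMode || "html";   // may be omitted; defaults to "html"
    }
    if (/\.js$/.test(target)) {
        return declaredMode || "script"; // may be omitted; defaults to "script"
    }
    if (/\.php$/.test(target)) {
        if (!declaredMode) {
            throw new Error('mode must be specified explicitly for ".php" targets');
        }
        return declaredMode;
    }
    return declaredMode;
}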
If interceptDownload is used, then the harness shall configure the HTTPS server so that when the stub OpApp
package is requested from https://<server name>/StubOpApps/packages/<name>.pkg, the HTTPS server shall
do an internal redirect (i.e. not a HTTP redirect, so not visible to the HTTP client) and return the results of the
PHP script. This allows the test case to control the package returned, including simulating errors and capturing
details of the request. The value specified for the interceptDownload attribute must be a relative URI. It is
relative to the location of the testcase XML file. It may include query strings but not fragment identifiers. With
the query string (if any) removed, it must refer to a path that exists.
This XML tag goes inside the <opAppDiscovery> block in the implementation.xml file. Tests that use stub
apps must also specify the following tags: <opAppDnsDiscovery>, <opAppAitDiscovery>, and
<cicamOperatorProfile>.
NOTE: This requirement ensures test case authors explicitly specify discovery for their stub apps, or
explicitly choose to turn off the discovery features.
There are XML tag(s) for configuring the DNS server to respond to a SRV request. Many test cases will find
this sufficient:
<opAppDnsDiscovery enabled="auto"/>
• “enabled” controls whether non-empty DNS SRV responses are generated at all; it can be “true”,
“false” or “auto”. “auto” means it will be enabled if the harness is configured to use default discovery
mode 1n, 1b, 2 or 4, i.e. the discovery modes that need a DNS response, and disabled if the default
discovery mode is set to something else. If “false”, no <dnsRecord> tags may be specified.
• "fqdn” is the domain name that the terminal will look up. The harness’s DNS server will respond to
SRV requests for “_hbbtv-ait._tcp.<fqdn>”. For international domain names, this must be specified in
Punycode-encoded format. There are some special values allowed:
o Specify the special value “!hardcoded” to use the first FQDN for discovery mode 2 as
configured in the harness. It is an error if the terminal does not support discovery mode 2.
o Specify the special value “!hardcoded-optional” to use the first FQDN for discovery mode 2
as configured in the harness. If the terminal does not support discovery mode 2 then this
<dnsRecord> is ignored.
o Specify the special value “!hardcoded-2” to use the second FQDN for discovery mode 2 as
configured in the harness. It is an error if the terminal does not support discovery mode 2 or
does not have a second FQDN.
Each <aitServer> tag specifies a host to respond with in the DNS SRV response. Zero or more hosts are
allowed.
If no <dnsRecord> tags are specified, then a default configuration is used which is the same as:
<dnsRecord>
<aitServer/>
</dnsRecord>
The harness will always respond to DNS SRV requests for the FQDN hbbtvopapps.org, for the FQDN(s) for
mode 2 as configured in the harness (if any), for the FQDN that the harness will use by default when generating
the URI_linkage_descriptor in the NIT, and for the FQDNs that the harness will use by default when generating
the URI_linkage_descriptor in the BAT. For each of those FQDNs, if no dnsRecord entry is specified for it or if
DNS SRV responses are disabled via the “enabled” attribute then the harness will respond with an empty SRV
response.
This XML tag goes inside the <opAppDiscovery> block in the implementation.xml file.
This XML tag must not be used if you use <runOpAppPage>, in that case the harness will configure the DNS
server automatically.
To support these modes, it shall be possible to configure the HTTPS server to respond to these AIT requests by
either:
• Returning the contents of a specified XML AIT file, with certain fields optionally filled in by the
harness; or
• Doing an internal redirect (i.e. not a HTTP redirect, so not visible to the HTTP client) to a PHP script and
returning the results of that script. This allows the test case to control the AIT returned, including
generating dynamic AIT using PHP, and to capture details of the request using a PHP script.
To do this, two XML tags are defined. Most test cases will be able to use this simple tag:
<opAppAitDiscovery enabled="auto"
target="my_opapp_ait.aitx"/>
Some test case(s) will need to use this more complex form, to specify multiple different redirects:
<opAppAitDiscoveryCustom enabled="true">
<redirect
from="dns-a"
target="generate_ait.php?source=a"/>
<redirect
from="dns-b"
target="generate_ait.php?source=b"/>
<redirect
from="dns-c"
target="ait3.aitx"/>
<redirect
from="hardcoded"
target="generate_ait.php?source=hardcoded"/>
</opAppAitDiscoveryCustom>
• “enabled” is required, and controls whether AIT redirection happens at all; it can be “true”, “false” or
“auto”. “auto” means it will be enabled if the harness is configured to use default discovery mode 1n,
1b, 2, 3, 4 or 5, i.e. the discovery modes that need AIT redirection, and disabled if the default discovery
mode is set to something else. If “false”, none of the other attributes may be specified.
<opAppAitDiscoveryCustom enabled="false"> is not allowed.
• “from” is optional on <opAppAitDiscovery> and mandatory on <redirect>. It can be “hardcoded”,
“hardcoded-2”, “dns”, “dns-a”, “dns-b”, “dns-c”, “dns-d”, “dns-e”, “cicam”, “cicam-2”, “auto” or
“auto-2”. If not specified on <opAppAitDiscovery> it defaults to “auto”. It sets which URL is
redirected. For discovery mode 3, it should be “hardcoded” or “hardcoded-2” to redirect from the first
or second URL (respectively) that is hardcoded in the terminal and configured in the harness, on the
TLS server on port A. For modes 1n, 1b, 2 or 4 it should be one of “dns”, “dns-a”, “dns-b”, “dns-c”,
“dns-d” or “dns-e” to redirect from the standard URI suffix “/opapp.aitx” on the TLS server on the
specified port. (“dns” is an alias for “dns-a”, provided for convenience since most tests will only use
TLS server port A). For discovery mode 5 it should be “cicam” or “cicam-2” to redirect from the first
or second AIT URLs used by the CICAM. If set to “auto” then it uses the value appropriate for the
default discovery mode configured in the harness, either “hardcoded”, “dns-a” or “cicam”, and it is an
error if that default discovery mode is not 1n, 1b, 2, 3, 4 or 5. If set to “auto-2” then it uses the second
value appropriate for the default discovery mode configured in the harness, either “hardcoded-2”, “dns-
b” or “cicam-2”, and it is an error if that default discovery mode is not 1n, 1b, 2, 3 or 5 (an informative
sketch of this “auto” resolution follows this list).
• “target” is mandatory and must be a relative URI. It is relative to the location of the testcase XML file.
It may include query strings but not fragment identifiers. With the query string (if any) removed, it must refer
to a path that exists.
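The resolution of “auto” and “auto-2” against the default discovery mode configured in the harness can be
summarised by the following informative sketch (the function name and mode strings are illustrative):
// Informative sketch of resolving from="auto" (second == false) and
// from="auto-2" (second == true) as described in the "from" bullet above.
function resolveAutoFrom(defaultDiscoveryMode, second) {
    switch (defaultDiscoveryMode) {
        case "3":
            return second ? "hardcoded-2" : "hardcoded";
        case "1n":
        case "1b":
        case "2":
            return second ? "dns-b" : "dns-a";
        case "4":
            if (second) {
                throw new Error('"auto-2" is an error when the default discovery mode is 4');
            }
            return "dns-a";
        case "5":
            return second ? "cicam-2" : "cicam";
        default:
            throw new Error('"auto" cannot be used with default discovery mode ' + defaultDiscoveryMode);
    }
}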
If the target is an “.aitx” or “.xml” file then it must be well-formed XML. It will be served with the XML
AIT Content-Type, with the following changes made automatically:
• If the value of an <orgId> element is “0” then it will be set to the configured organisation_id (in
decimal)
• If the value of an <orgId> element is “0” and the corresponding <appId> element is “16” then the
<appId> value will be set to the configured application_id (in decimal) for the stub app var_app_id (see
D.2.2 Discovery settings).
• If the value of an <ApplicationUsage> element is “{auto}” then it will be set to the configured default
OpApp kind
• If the value of an <URLBase> or <BoundaryExtension> element contains the string “{https}” then
“{https}” will be replaced with the configured URL of the HTTPS server without a trailing slash, for
example “https://hbbtv1.test” or “https://operator.example.com:1234”.
The above changes are made without making any other changes to the XML document, not even whitespace
changes. The harness shall not do XML schema validation. The harness shall make the above changes whether
the specified elements have XML namespace prefixes or not, regardless of the namespace the element is in.
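As an informative illustration only (harness implementations are free to do this differently), the substitutions
could be applied as plain text replacements along the following lines. The function and configuration names are
illustrative, and two details are omitted for brevity: the var_app_id rule for <appId> elements, and the
restriction of “{https}” replacement to <URLBase> and <BoundaryExtension> content.
// Informative sketch: applying the substitutions as plain text replacement so that
// nothing else in the document changes (no re-serialisation, no schema validation).
function substituteAitx(text, config) {
    // <orgId>0</orgId> -> configured organisation_id (decimal), with or without a namespace prefix.
    text = text.replace(/(<([\w.-]+:)?orgId>)0(<\/([\w.-]+:)?orgId>)/g,
                        "$1" + config.orgId + "$3");
    // {auto} inside <ApplicationUsage> -> configured default OpApp kind.
    text = text.replace(/(<([\w.-]+:)?ApplicationUsage>)\{auto\}(<\/([\w.-]+:)?ApplicationUsage>)/g,
                        "$1" + config.defaultOpAppKind + "$3");
    // {https} -> configured URL of the HTTPS server without a trailing slash
    // (simplified here; see the URLBase/BoundaryExtension restriction noted above).
    text = text.replace(/\{https\}/g, config.httpsServerUrl);
    return text;
}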
If the target is a “.php” file then the harness shall do an internal redirect to that PHP script, with the following
parameters in the query string (or added to the end of any query string specified in the “target” attribute):
These XML tags go inside the <opAppDiscovery> block in the implementation.xml file.
These XML tags must not be used if you use <runOpAppPage>, in that case the harness will configure
redirection automatically.
• “enabled” controls whether the CICAM NIT is used at all; it can be “true”, “false” or “auto”. “auto”
means it will be enabled if the harness is configured to use default discovery mode 5, i.e. the discovery
mode that uses the CICAM, and disabled if the default discovery mode is set to something else. If
“false”, the other attribute must be omitted.
• “numOpApps” is the number of OpApps to signal (i.e. the number of AITs to signal). Valid values are
1 (the default) or 2.
The harness will use a built-in CICAM NIT, which will list all the services from the standard HbbTV base
stream. The harness will configure the AIT URL(s) to point at the configured HTTPS server, with harness-
chosen AIT path(s). The test case may use the XML AIT serving mechanism from the previous section of this
appendix to configure the contents of the AIT(s).
This XML tag goes inside the <opAppDiscovery> block in the implementation.xml file.
This XML tag must not be used if you use <runOpAppPage>, in that case the harness will configure the CICAM
NIT automatically if it is necessary.
or:
<httpsServerConfig mode="good" clientCertForOpAppAit="true" clientCertForOpAppPackage="true"/>
The “mode” attribute allows the harness to be configured with special invalid certificates for testing. The valid
modes are: “good” (the default), “host-name-mismatch”, “self-signed”, “ip-address-mismatch”, “expired-
certificate” or “revoked-certificate”. See section 0 for details of the certificates.
When the clientCertForOpAppAit attribute is specified as true, then the OpApp AIT URL(s) shall request but
not require a client certificate as described in section D.5. For the purposes of this paragraph, the OpApp AIT
URL(s) for a particular test case are the URLs that are used as "from" URLs on an
<opAppAitDiscoveryCustom> or <opAppAitDiscoveryCustom><redirect> tag in that test case. When this flag
is true, a test harness may choose to request client certificates for all possible OpApp AIT URLs, even URLs not
used by that particular test case, if that makes it easier for the test harness implementer.
When the clientCertForOpAppPackage attribute is specified as true, then the stub OpApp package URLs shall
request but not require a client certificate as described in section D.5. For the purposes of this paragraph, the
stub OpApp package URLs for a test case are the URLs of the form https://<server
name>/StubOpApps/packages/<name>.pkg, as described in section D.3.3.2 Stub OpApp package locations, for
the stub OpApps that are configured using <stubOpApp> tags in that particular test case. When this flag is true,
a test harness may choose to request client certificates for all possible OpApp package URLs, even URLs not
used by that particular test case, if that makes it easier for the test harness implementer.
When this tag is used and specifies any non-default values, TLS server port A is configured as specified, and
test cases shall not use the other ports. The configuration of TLS on those other ports (or even whether there’s a
server running there at all) is deliberately not specified.
This XML tag goes inside the <opAppDiscovery> block in the implementation.xml file.
This is a complete <opAppDiscovery> block for a test with two OpApps that uses the default discovery mode
and supports all discovery modes except mode 4:
<opAppDiscovery>
<stubOpApp ident="gen_a" mode="html"
target="index_opApp_a.html5"/>
<stubOpApp ident="gen_b" mode="html"
target="index_opApp_b.html5"/>
(Note that a terminal that only supports discovery mode 4 can only discover a single OpApp).
• Include an XML syntax to indicate that a BAT should be conditionally included, i.e. only include it if
using an OpApp discovery mechanism that uses it, or only include it if the terminal supports an
OpApp discovery mechanism that uses it. If included, the contents of the BAT are specified by a
separate XML file. This is the “enabled” attribute on the <bat> element, which can be “true” (default)
to unconditionally include the BAT or “auto” to include it if the selected default discovery mode uses
it, or “if-possible” to include it if any supported discovery mode uses it.
In the uri attribute of the <hbbtvUriLinkageDescriptor> element the following substitutions are made:
• “{nid}” is replaced by the network ID, as a lowercase hex number with no leading zeroes (a small
example follows this list). This is the Network ID configured in the harness if running OpApp tests or the
standard Network ID used for the base stream if running regular HbbTV tests.
• “{opapp-fqdn}” is replaced by the harness-chosen default FQDN as used for discovery mode 1n.
(Note: this is NOT the FQDN configured in the harness by the tester for discovery mode 2.)
• “{opapp-fqdn-2}” is replaced by the second harness-chosen default FQDN as used for discovery mode
1n. (Note: this is NOT the FQDN configured in the harness by the tester for discovery mode 2.)
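As a small informative example of the “{nid}” formatting referenced in the list above (the function name and
values are illustrative):
// "{nid}" is the network ID as a lowercase hex number with no leading zeroes,
// e.g. a network_id of 0x0085 becomes "85" and 0x3001 becomes "3001".
function formatNid(networkId) {
    return networkId.toString(16);
}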
If the <hbbtvUriLinkageDescriptor> element has enabled=”auto” then the URI_linkage_descriptor will only be
included if the selected default OpApp discovery mechanism uses it, as follows:
• If “enabled” is “auto” and the “uri” attribute starts “dns:”, then the URI_linkage_descriptor will only be
included if the selected default OpApp discovery mechanism is 1n.
• If “enabled” is “auto” and the “uri” attribute starts “dvb:”, then the URI_linkage_descriptor will only
be included if the selected default OpApp discovery mechanism is 6n.
• If “enabled” is “auto” and the “uri” attribute does not start “dns:” or “dvb:”, then the NIT XML file is
invalid.
Launcher OpApp tests should not specify an explicit URI_linkage_descriptor in the broadcast NIT or BAT.
For the BAT, in the <hbbtvUriLinkageDescriptor>, “{opapp-fqdn}” and “{opapp-fqdn-2}” shall expand to the
harness-chosen default FQDNs as used for discovery mode 1b, which will be different from the value it expands
to in the NIT.
Note: this does not require an XML schema change; the existing XML AIT schema allows 0 even though that
value is invalid according to the specification.
D.4.4.2 Application ID
If orgId is specified as 0 and appId is specified as 16 in the AIT XML file, then the harness shall use the tester-
selected application_id for the stub app var_app_id (as specified in the bilateral agreement). See D.2.2
Discovery settings.
D.4.4.4 Void
The previous content of this section has been moved to 7.4.4.3.1 AIT.
In addition, it is possible to generate a DVB AIT that does not list any applications. This will be necessary
when testing this descriptor. If the root element of the XML AIT file is <testait:ServiceDiscovery> then the
ApplicationList is allowed to be empty.
Note that the interpretation of the version field in the XML AIT was modified by errata to the OpApp
specification. See HbbTV OpApp Spec Redmine #9749. For TS generation, test cases and test harnesses shall
use the new interpretation.
Note that this means that whenever an OpApp is signalled by a broadcast AIT, then the test case's XML AIT
used to generate the broadcast AIT must include the <MinimumApplicationVersion> tag to comply with the
OpApp specification. If you have no particular minimum application version to signal, then the
<MinimumApplicationVersion> tag should specify the same version as the application version.
D.4.4.8 Void
The previous content of this section has been moved to 7.4.4.3.1 AIT.
Test cases that use such URLs must contain appropriate preconditions so that they are only run if the bilateral
agreement requires the terminal to have a client certificate.
If the terminal sends a client certificate, then the server shall automatically verify that the certificate is valid
using the usual TLS rules (e.g. certificate chains back to a known client CA, client and any intermediate
certificates are not revoked).
The test harness’s test TLS server shall make details of the client certificate available to PHP via environment
variables. The set of environment variables to be made available are:
HTTPS: If set, HTTPS is being used.
SSL_CLIENT_CERT_CHAIN_0, SSL_CLIENT_CERT_CHAIN_1, SSL_CLIENT_CERT_CHAIN_2, …: The client
certificate chain, PEM-encoded, with one certificate in each environment variable and as many environment
variables as needed.
Normal HbbTV apps, and harness-based test code, shall not pass this parameter.
D.6.2 Void
The previous content of this section has been moved to 7.2.1.3 Multiple clients.
D.6.3 Void
The previous content of this section has been moved to 7.2.11 JS-Function endTestApp().
D.6.4 Void
The previous content of this section has been moved to 7.2.12 JS-Functions for multiple-client communication.
D.7 Void
D.7.1 Void
The previous content of this section has been moved to 7.4.8 JS-Function setSignalLevel().
A test case will have a set of specification versions to which it is applicable. See section 6.3.2.1 Test
Applicability.
A test case may have preconditions that restrict the devices for which it is mandatory to run the test. The current
section gives guidance on understanding and interpreting these preconditions. Section 6.3.3 Preconditions
in the current document gives the definition of preconditions.
The following describes the decision rules for whether a particular test case is mandatory on a device:
• The test case is mandatory if the <appliesTo> tag matches as defined below, AND the <preconditions>
tag is satisfied as defined below. Otherwise the test case is not mandatory.
• The <appliesTo> tag matches if any of the listed specifications are supported by the terminal. Note
that a terminal can only support one version of the HbbTV specification, since it has to report a single
HbbTV version number in its user agent string, but might also support OpApps and/or one or more
regional or operator-defined specifications.
• The <preconditions> tag is satisfied if every XML tag that is directly inside it (i.e. if there are nested
tags, only check the outer one here) is satisfied, except for any <textCondition type=“procedural”> tags
and <testRun> tags which shall be ignored.
• A <requiredTerminalOptions> tag is satisfied if every feature it lists with a “+” sign is supported by
the terminal and every feature it lists with a “-” sign is not supported by the terminal.
• An <optionalFeatures> tag is satisfied if every feature it lists with a “+” sign is supported by the
terminal and every feature it lists with a “-” sign is not supported by the terminal.
• A <textCondition type=”informative”> tag is satisfied if the described condition is true.
• An <or> tag is satisfied if at least one of the XML tags that are directly inside it (i.e. if there are nested
tags, only check the outer one here) is satisfied. Note that this tag can contain
<requiredTerminalOptions>, <optionalFeatures> or <and> tags.
• An <and> tag is satisfied if both the <requiredTerminalOptions> tag and the <optionalFeatures> tag
inside the <and> are satisfied. Note that this tag can only appear inside an <or> tag.
Note that <textCondition type=“procedural”> tags, <testRun> tags and the contents of <adaptsTo> tags are
ignored when deciding if a test case is mandatory or not.
• <textCondition type=“procedural”> tags are things you should do before running the test case, they do
not affect the decision about whether or not to run it.
• <testRun> specifies the order the tests have to be run in, it doesn’t affect the decision about whether or
not to run a test, and since HbbTV requires test cases to be self-contained any use of <testRun> is
always a test case bug.
• <adaptsTo> does not affect whether a test is mandatory or not, it identifies that a test behaves
differently based on certain capabilities of the terminal.
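The decision rules above can be summarised by the following informative sketch. The object shapes (testCase,
terminal) are purely illustrative and simply mirror the XML structure; they are not part of any defined API.
// Informative sketch of the "is this test case mandatory?" decision rules.
function isMandatory(testCase, terminal) {
    // <appliesTo> matches if any listed specification is supported by the terminal.
    var appliesToMatches = testCase.appliesTo.some(function (spec) {
        return terminal.supportedSpecs.indexOf(spec) !== -1;
    });
    if (!appliesToMatches) {
        return false;
    }
    // <preconditions> is satisfied if every tag directly inside it is satisfied.
    return testCase.preconditions.every(function (tag) {
        return isSatisfied(tag, terminal);
    });
}

function isSatisfied(tag, terminal) {
    switch (tag.name) {
        case "requiredTerminalOptions":
        case "optionalFeatures":
            // Every "+feature" must be supported and every "-feature" must not be.
            return tag.features.every(function (f) {
                var supported = terminal.supports(f.substring(1));
                return f.charAt(0) === "+" ? supported : !supported;
            });
        case "textCondition":
            // Procedural conditions are ignored; informative ones must hold.
            return tag.conditionType === "procedural" ? true : tag.conditionHolds;
        case "testRun":
            return true; // ignored for the mandatory/not-mandatory decision
        case "or":
            return tag.children.some(function (c) { return isSatisfied(c, terminal); });
        case "and":
            // Both the <requiredTerminalOptions> and <optionalFeatures> inside must be satisfied.
            return tag.children.every(function (c) { return isSatisfied(c, terminal); });
        default:
            return true;
    }
}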
{
"length": 16,
"client_version": "3.3",
"server version": "3.3",
"supported_versions": ["3.3","3.4"] //supported_versions extension
"random": {
"gmt_unix_time": "A65891BB",
"bytes": "17B760AA285FB652D27D2D6BED0CF8D7657932EA36A6FF46E032B6DE"
},
"session_id": "896C5DD3AFB8230D01919A99A80A38C0BD08454E63B81DD9F2D1D50B00D33161",
"renegotiation_info": {
"renegotiated_connection": ""
},
"handshake_integrity_failure": {
"decrypt_error alert_sent": false
},
"server_name_indication": {
"serverNameList": [
"158kcdz704k0qq24cln0f4p1z5t6qfz4-tls1010.w.hbbtvtest.org"
]
},
"supported_groups": [ "0x0017", "0x0018"], // NamedGroupList from supported_groups extension
,
"application_layer_protocol_negotiation":["h2","http/1.1"], // the order on the list is important
"elliptic_curves": {
"EllipticCurveList": [
"001D",
"0017",
"0018",
"0019"
]
},
"signature_algorithms": {
"supported_signature_algorithms": [
"ecdsa",
"UNKNOWN"
]
},
"signature_algorithms_certificate": {
"supported_signature_algorithms": [
"ecdsa",
"UNKNOWN"
]
},
"compression_methods": [
"00"
],
"cipher_suites_count": 4,
"cipher_suites": {
"1301": { // cipher suite value {0x13,0x01}
"cipher": "UNKNOWN",
"protocol": "",
"keyExchange": "",