Automated NLIDB Testing Solutions

The document discusses automated testing of natural language interfaces to databases (NLIDBs). NLIDBs allow users to query databases using natural language instead of structured query languages like SQL. However, current testing of NLIDBs is ad-hoc and benchmarks lack natural language variation, query coverage, and exploitation of system translation choices. The goal is to automatically generate natural language question and structured query pairs for a given domain to test NLIDBs more comprehensively, define metrics for good test cases, and experimentally evaluate existing NLIDB systems using both static and dynamic test suites.

Uploaded by

Anant Chhajwani

Automated Testing of Natural Language Interface to Database

■ NLIDB: A natural language interface to a database takes as input a natural language question and outputs a structured query, e.g. SQL
e.g. PRECISE, NALIR, ATHENA
– the user does not need to know SQL or the exact schema
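The interface described above can be sketched minimally as a function from a natural language question to a SQL string. The schema and templates below are invented for illustration; real systems such as PRECISE, NALIR, and ATHENA use far richer parsing and schema mapping than this toy pattern matcher.

```python
import re

# Hypothetical toy schema: employee(name, salary, dept).
# Each template pairs a question pattern with a SQL skeleton.
TEMPLATES = [
    (re.compile(r"who works in (\w+)", re.I),
     "SELECT name FROM employee WHERE dept = '{0}'"),
    (re.compile(r"what is the salary of (\w+)", re.I),
     "SELECT salary FROM employee WHERE name = '{0}'"),
]

def translate(nlq: str) -> str:
    """Return a SQL query for a natural language question, or raise."""
    for pattern, sql in TEMPLATES:
        m = pattern.search(nlq)
        if m:
            # Fill the SQL skeleton with the captured entity.
            return sql.format(*m.groups())
    raise ValueError(f"cannot translate: {nlq!r}")

print(translate("Who works in sales?"))
# SELECT name FROM employee WHERE dept = 'sales'
```

A test case for an NLIDB is then simply a (question, expected SQL) pair fed through this function.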
■ Problems with current testing/evaluation of NLIDB systems
– Testing of NLIDB systems has been performed in an ad-hoc fashion, which hampers their production use
– The benchmarks used contain a set of natural language questions and their gold-standard SQL queries. They lack:
■ Natural language variation
■ Coverage of query syntax and semantics
■ They do not exploit the choices an NLIDB system makes during translation
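The missing natural language variation can be generated automatically: pair one gold SQL query with many surface forms of the same question. The synonym table below is a hypothetical stand-in for real linguistic resources (a lexicon or paraphrase corpus).

```python
from itertools import product

# Hypothetical synonym sets; in practice these come from linguistic resources.
SYNONYMS = {
    "show": ["show", "list", "display", "give me"],
    "employees": ["employees", "workers", "staff"],
}

def variants(template: str) -> list[str]:
    """Expand a question template into all synonym combinations."""
    slots = [(k, SYNONYMS[k]) for k in SYNONYMS if "{" + k + "}" in template]
    keys = [k for k, _ in slots]
    out = []
    for combo in product(*(opts for _, opts in slots)):
        out.append(template.format(**dict(zip(keys, combo))))
    return out

gold_sql = "SELECT * FROM employee WHERE dept = 'sales'"
for nlq in variants("{show} all {employees} in sales"):
    print((nlq, gold_sql))  # each pair is one generated test case
```

Here one gold query yields 12 test cases instead of one, directly probing a system's robustness to paraphrasing.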
■ Goals of the internship
– To build automated test case generation (NLQ, OQL pairs) for NLIDB systems in a given domain, using an ontology and other linguistic resources
– To define metrics of a good test case for NLIDB systems
– To experimentally demonstrate that existing benchmarks fall short on these metrics
– To create a static test suite (with a limit on the number of test cases)
– To create a dynamic test suite (where each test question depends on the response to the previous question)
– To evaluate the existing NLIDB systems PRECISE, NALIR, ATHENA using both static and dynamic test suite generation techniques
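The dynamic test suite idea above can be sketched as a driver loop in which each result determines the follow-up questions. Everything here is an assumed harness, not the actual internship implementation: `nlidb` is the system-under-test callable (NLQ → SQL) and `refine` is a hypothetical strategy that yields follow-up cases probing the same translation choice more deeply.

```python
def run_dynamic_suite(nlidb, seed_cases, refine):
    """Run a dynamic test suite against an NLIDB.

    seed_cases: list of (nlq, gold_sql) pairs to start from.
    refine(case, passed): returns follow-up cases chosen based on
    whether the previous case passed.
    """
    queue = list(seed_cases)
    results = []
    while queue:
        nlq, gold = queue.pop(0)
        try:
            # Naive equivalence check; real evaluation would compare
            # query results or normalized ASTs, not strings.
            passed = nlidb(nlq).strip().lower() == gold.strip().lower()
        except Exception:
            passed = False
        results.append((nlq, passed))
        # The next questions depend on the response to this one.
        queue.extend(refine((nlq, gold), passed))
    return results
```

A static suite is the special case where `refine` always returns an empty list, so the fixed seed cases are the whole suite.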
