RAG Notes
Retrieval-augmented generation (RAG) combines large language models (LLMs) with retrieval of relevant information from external knowledge sources.
Q.1 Explain the main parts of a RAG system and how they work.
Ans. A RAG (retrieval-augmented generation) system has two main components: the retriever and the generator.
The retriever searches for and collects relevant information from external sources, like databases, documents, or websites.
The generator, usually an advanced language model, uses this information to create clear and accurate text.
The retriever makes sure the system gets the most up-to-date information, while the generator combines this with its own knowledge to produce better answers. Together, they provide more accurate responses than the generator could on its own.
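Below is a minimal Python sketch of this retriever-plus-generator flow. The tiny in-memory corpus, the keyword-overlap retriever, and the placeholder generate() function are all illustrative assumptions; a real system would use a proper search index and an actual LLM call.

```python
# Minimal sketch of the retriever + generator flow described above.
# Corpus, retriever, and generator are illustrative stand-ins, not a real library's API.

CORPUS = [
    "RAG combines a retriever with a language-model generator.",
    "The retriever pulls relevant passages from external sources.",
    "The generator writes the final answer using the retrieved context.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank passages by how many query words they share."""
    words = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda p: len(words & set(p.lower().split())), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder generator: a real system would call an LLM here."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

def rag_answer(query: str) -> str:
    # 1. Retriever gathers external context relevant to the query.
    context = "\n".join(retrieve(query))
    # 2. Generator combines that context with its own knowledge.
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

print(rag_answer("What does the retriever do in RAG?"))
```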
Q.2 What are the main benefits of using RAG instead of just relying on an LLM's internal
knowledge?
Ans. If you rely only on an LLM's built-in knowledge, the system is limited to what it was trained on, which could be outdated or lacking detail.
RAG addresses this by retrieving up-to-date, relevant information from external sources at query time.
This approach also reduces "hallucinations" (errors where the model makes up facts) because the answers are based on real data. RAG is especially helpful for specific fields like law, medicine, or tech, where up-to-date, specialized knowledge is needed.
Q.3 What types of external knowledge sources can RAG use?
Ans. RAG systems can gather information from both structured and unstructured external sources:
• Structured sources include databases, APIs, or knowledge graphs, where data is organized and easy to search.
• Unstructured sources consist of large collections of text, such as documents, websites, or archives, where the information needs to be processed using natural language understanding.
This flexibility allows RAG systems to be tailored to different fields, such as legal or medical use, by pulling from case law databases, research journals, or clinical trial data.
Q.4 Does prompt engineering matter in RAG?
Ans. Prompt engineering helps language models provide high-quality responses using the retrieved information. How you design a prompt can affect the relevance and clarity of the output.
• Specific system prompt templates help guide the model. For example, instead of having a simple out-of-the-box system prompt like "Answer the question," you might have, "Answer the question based only on the context provided." This gives the model explicit instructions to only use the context provided to answer the question, which can reduce the probability of hallucinations (see the sketch after this list).
• Few-shot prompting involves giving the model a few example responses before asking it to generate its own, so it knows the type of response you're looking for.
• Chain-of-thought prompting helps break down complex questions by encouraging the model to explain its reasoning step-by-step before answering.
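As a rough illustration of the first point, here is a small Python sketch that assembles a grounded RAG prompt from a system instruction, a few-shot example, and the retrieved context. The template wording and the build_prompt() helper are assumptions for illustration, not a fixed standard.

```python
# Sketch of a grounded RAG prompt template with a few-shot example.
SYSTEM_PROMPT = (
    "Answer the question based only on the context provided. "
    "If the context does not contain the answer, say you don't know."
)

FEW_SHOT_EXAMPLE = (
    "Q: What is BM25?\n"
    "A: BM25 is a keyword-based ranking function used in sparse retrieval.\n"
)

def build_prompt(context: str, question: str) -> str:
    """Assemble a RAG prompt: system instructions, example, retrieved context, question."""
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Example:\n{FEW_SHOT_EXAMPLE}\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("RAG has a retriever and a generator.", "What are the parts of RAG?"))
```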
Q.5 How does the retriever work in a RAG system? What are common retrieval methods?
Ans. In a RAG system, the retriever gathers relevant information from external sources for the generator to use. There are different ways to retrieve information.
One method is sparse retrieval, which matches keywords (e.g., TF-IDF or BM25). This is simple but may not capture the deeper meaning behind the words.
Another approach is dense retrieval, which uses neural embeddings to understand the meaning of documents and queries. Methods like BERT or Dense Passage Retrieval (DPR) represent documents as vectors in a shared space, making retrieval more accurate.
The choice between these methods can greatly affect how well the RAG system works.
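A short sketch of sparse retrieval, assuming scikit-learn is available: documents and the query are turned into TF-IDF vectors and ranked by cosine similarity. A dense retriever would keep the same ranking step but swap the TF-IDF vectors for neural embeddings from a BERT-style encoder.

```python
# Sparse retrieval sketch using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "BM25 and TF-IDF are sparse, keyword-based retrieval methods.",
    "Dense Passage Retrieval encodes queries and documents as vectors.",
    "RAG feeds retrieved passages to a language model.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)      # one sparse vector per document

def retrieve_sparse(query: str, k: int = 2) -> list[str]:
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]              # indices of the best-scoring documents
    return [docs[i] for i in top]

print(retrieve_sparse("keyword based retrieval"))
```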
Q.6 What are the challenges of combining retrieved information with LLM generation?
Ans. Combining retrieved information with an LLM's generation presents some challenges. For instance, the retrieved data must be highly relevant to the query, as irrelevant data can confuse the model and reduce the quality of the response.
Additionally, if the retrieved information conflicts with the model's internal knowledge, it can create confusing or inaccurate answers. As such, resolving these conflicts without confusing the user is crucial.
Finally, the style and format of the retrieved data may not always match the model's usual writing or formatting, making it hard for the model to integrate the information smoothly.
Q.7 What is the role of a vector database in RAG?
Ans. In a RAG system, a vector database helps manage and store dense embeddings of text. These embeddings are numerical representations that capture the meaning of words and sentences, created by models like BERT or OpenAI embedding models.
When a query is made, its embedding is compared to the stored ones in the database to find similar documents. This helps the system quickly locate and pull up the most relevant information, improving both the speed and accuracy of retrieval.
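The toy in-memory store below illustrates what a vector database does: keep one embedding per document and return nearest neighbours by cosine similarity. The embed() function here is a stand-in (a hash-seeded pseudo-random vector), not a real embedding model, so the ranking only illustrates the shape of the workflow.

```python
# Toy "vector store": store embeddings, search by cosine similarity.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: pseudo-random unit vector seeded by the text's hash."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class ToyVectorStore:
    def __init__(self):
        self.texts, self.vectors = [], []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scores = np.array(self.vectors) @ q       # cosine similarity (unit vectors)
        return [self.texts[i] for i in scores.argsort()[::-1][:k]]

store = ToyVectorStore()
for doc in ["Vector databases store dense embeddings.",
            "Sparse retrieval matches keywords.",
            "Embeddings capture semantic meaning."]:
    store.add(doc)
print(store.search("semantic embedding storage"))
```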
Q.8 What are some common ways to evaluate RAG systems?
Ans. To evaluate a RAG system, you need to look at both the retrieval and generation components.
• Metrics like precision (how many retrieved documents are relevant) and recall (how many of the total relevant documents were found) can be used here (see the sketch after this list).
• For the generator, metrics like BLEU and ROUGE can be used to compare the generated text to human-written examples to gauge quality.
• For downstream tasks like question answering, metrics like F1 score, precision, and recall can also be used to evaluate the overall RAG system.
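A small sketch of the retrieval metrics mentioned above, assuming you know which documents the retriever returned and which were actually relevant; the document IDs are made up for illustration.

```python
# Precision, recall, and F1 for a single retrieval result.
def precision_recall_f1(retrieved: set[str], relevant: set[str]) -> tuple[float, float, float]:
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0   # relevant share of what was retrieved
    recall = hits / len(relevant) if relevant else 0.0        # retrieved share of all relevant docs
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

retrieved = {"doc1", "doc2", "doc4"}
relevant = {"doc1", "doc3", "doc4", "doc5"}
print(precision_recall_f1(retrieved, relevant))   # roughly (0.67, 0.5, 0.57)
```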
Q.9 How do you handle ambiguous or incomplete queries in a RAG system to ensure relevant results?
Ans. Handling ambiguous or incomplete queries in a RAG system requires strategies to ensure that relevant and accurate information is retrieved despite the lack of clarity in the user's input.
One approach is to implement query refinement techniques, where the system automatically suggests clarifications or reformulates the ambiguous query into a more precise one based on known patterns or previous interactions. This can involve asking follow-up questions or presenting the user with multiple options to narrow down their intent.
Another method is to retrieve a diverse set of documents that cover multiple possible interpretations of the query. By retrieving a range of results, the system ensures that even if the query is vague, some relevant information is likely to be included.
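A sketch of the "diverse set of documents" idea, using a simple maximal-marginal-relevance style selection over precomputed embeddings; the lambda weighting and the random example vectors are assumptions for illustration.

```python
# Diversity-aware selection: balance relevance to the query against redundancy.
import numpy as np

def diverse_select(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3,
                   lambda_weight: float = 0.7) -> list[int]:
    """Pick k document indices, trading off query relevance against similarity to already-picked docs."""
    selected: list[int] = []
    candidates = list(range(len(doc_vecs)))
    relevance = doc_vecs @ query_vec                   # cosine similarity (unit vectors)
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            redundancy = max((doc_vecs[i] @ doc_vecs[j] for j in selected), default=0.0)
            return lambda_weight * relevance[i] - (1 - lambda_weight) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
docs = rng.standard_normal((5, 8))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)    # unit-length example embeddings
query = docs[0]                                        # pretend the query resembles doc 0
print(diverse_select(query, docs))
```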
Intermediate RAG Interview Questions
Q.10 How do you choose the right retriever for a RAG application?
Ans. Choosing the right retriever depends on the type of data you're working with, the nature of
the queries, and how much computing power you have.
For complex queries that need a deep understanding of the meaning behind words, dense
retrieval methods like BERT or DPR are better. These methods capture context and are ideal for
tasks like customer support or research, where understanding the underlying meaning matters.
If the task is simpler and revolves around keyword matching, or if you have limited
computational resources, sparse retrieval methods such as BM25 or TF-IDF might be more
suitable. These methods are quicker and easier to set up but might not find documents that
don't match exact keywords.
The main trade-off between dense and sparse retrieval methods is accuracy versus
computational cost. Sometimes, combining both approaches in a hybrid retrieval system can
help balance accuracy with computational efficiency. This way, you get the benefits of both
dense and sparse methods depending on your needs.
Q.11 Describe what a hybrid search is.
Ans. Hybrid search combines the strengths of both dense and sparse retrieval methods.
For instance, you can start with a sparse method like BM25 to quickly find documents based on
keywords. Then, a dense method like BERT re-ranks those documents by understanding their
context and meaning. This gives you the speed of sparse search with the accuracy of dense
methods, which is great for complex queries and large datasets.
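A sketch of that two-stage flow: a cheap keyword stage shortlists candidates (standing in for BM25), then an embedding stage re-ranks them. Both keyword_score() and the hash-seeded embed() here are illustrative placeholders rather than real BM25 or BERT implementations.

```python
# Two-stage hybrid search: sparse shortlist, then dense re-ranking.
import numpy as np

def keyword_score(query: str, doc: str) -> int:
    """Stand-in for BM25: count shared words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def embed(text: str, dim: int = 32) -> np.ndarray:
    """Placeholder embedding: pseudo-random unit vector seeded by the text's hash."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def hybrid_search(query: str, docs: list[str], candidates: int = 10, k: int = 3) -> list[str]:
    # Stage 1: cheap sparse scoring over the whole corpus.
    shortlist = sorted(docs, key=lambda d: keyword_score(query, d), reverse=True)[:candidates]
    # Stage 2: dense re-ranking of the shortlist by embedding similarity.
    q = embed(query)
    return sorted(shortlist, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

corpus = ["sparse retrieval uses keywords",
          "dense retrieval uses embeddings",
          "hybrid search combines both stages"]
print(hybrid_search("combine sparse and dense retrieval", corpus, candidates=3, k=2))
```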
Q.12 Do you need a vector database to implement RAG? If not, what are the alternatives?
Ans. A vector database is great for managing dense embeddings, but it's not always necessary.
Alternatives include:
• Traditional databases: If you're using sparse methods or structured data, regular relational or NoSQL databases can be enough. They work well for keyword searches. Databases like MongoDB or Elasticsearch are good for handling unstructured data and full-text searches, but they lack deep semantic search.
• Inverted indices: These map keywords to documents for fast searches, but they don't capture the meaning behind the words (see the sketch below).
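For the inverted-index alternative, a minimal sketch: a dictionary mapping each keyword to the set of documents that contain it, with no semantic matching at all. The document IDs and texts are made up for illustration.

```python
# Minimal inverted index: keyword -> set of documents containing it.
from collections import defaultdict

docs = {
    "d1": "vector databases store dense embeddings",
    "d2": "elasticsearch supports full text search",
    "d3": "inverted indices map keywords to documents",
}

index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)                 # record which documents contain each word

def keyword_lookup(query: str) -> set[str]:
    """Return documents containing every query keyword (no semantic matching)."""
    hits = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(keyword_lookup("dense embeddings"))       # {'d1'}
```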