Generative AI Interview Questions and Answers
Interview Questions
Q1. What are Transformers?
Transformers are a neural network architecture, introduced in the 2017 paper "Attention Is All You Need" (Vaswani et al.), that uses self-attention instead of recurrence to model relationships between all positions in a sequence. Its key components are listed below, followed by a minimal sketch in code.
Key components:
Encoder-Decoder structure
Multi-head attention layers
Feed-forward neural networks
Positional encodings
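The components above can be illustrated in a few lines of PyTorch using the library's stock encoder modules. This is a minimal sketch; the sizes (d_model=64, 4 heads, 2 layers) are arbitrary illustration values, not anything prescribed:

```python
import torch
import torch.nn as nn

# Illustrative sizes only.
d_model, n_heads, seq_len, batch = 64, 4, 10, 2

# One encoder layer bundles multi-head self-attention, a feed-forward
# network, residual connections, and layer normalization.
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

# Token embeddings; in a real model, positional encodings are added
# here so the attention layers can see token order.
x = torch.randn(batch, seq_len, d_model)
out = encoder(x)
print(out.shape)  # torch.Size([2, 10, 64])
```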
Q2. What are the different types of attention mechanisms?
1. Self-Attention:
Also referred to as intra-attention, self-attention enables a model
to focus on various points within an input sequence. It plays a
crucial role in transformer architectures.
2. Multi-Head Attention:
This technique lets the model attend to information from multiple
representation subspaces by running several attention heads in
parallel.
3. Cross-Attention:
This technique enables the model to process one sequence while
attending to information from another and is frequently utilised in
encoder-decoder systems (see the sketch after this list).
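A minimal sketch of these variants, assuming PyTorch; scaled dot-product attention is written out by hand, and all shapes are illustrative only:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(2, 10, 64)  # (batch, seq_len, d_model)

# Self-attention: queries, keys, and values all come from the same sequence.
self_out = scaled_dot_product_attention(x, x, x)

# Cross-attention: queries come from one sequence (e.g. decoder states),
# keys and values from another (e.g. encoder output).
ctx = torch.randn(2, 7, 64)
cross_out = scaled_dot_product_attention(x, ctx, ctx)

# Multi-head attention runs several such attention computations in
# parallel over different learned projections; PyTorch bundles this:
mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
mha_out, _ = mha(x, x, x)
```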
Q3. How and why are transformers better than RNN architectures?
How: Transformers process entire sequences in parallel, whereas RNNs must process tokens one at a time (the snippet below contrasts the two).
Why better:
Parallel processing makes training far faster on modern accelerators.
Self-attention gives a direct path between any two positions, so long-range dependencies are easier to learn.
They avoid the vanishing- and exploding-gradient problems that affect RNNs on long sequences.
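A short sketch of the contrast, assuming PyTorch; the shapes are arbitrary:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 100, 64)  # (batch, seq_len, features)

# RNN: each hidden state depends on the previous one, so the layer
# must step through the 100 positions sequentially.
rnn = nn.RNN(input_size=64, hidden_size=64, batch_first=True)
rnn_out, _ = rnn(x)

# Self-attention: all 100 positions attend to each other in a single
# batched matrix multiplication, so the sequence is processed in
# parallel and any two positions are connected by a path of length one.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
attn_out, _ = attn(x, x, x)
```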
Q4. What are some notable transformer-based language models?
XLNet:
Architecture: Based on Transformer-XL.
Key feature: Permutation language modeling, which captures bidirectional context without [MASK] tokens (sketched below).
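A conceptual sketch of this objective follows; the token ids are made up, and it shows only the sampled factorization order, not the two-stream attention XLNet uses to implement it:

```python
import torch

token_ids = torch.tensor([101, 7, 42, 9, 55])  # hypothetical token ids
order = torch.randperm(len(token_ids))         # a sampled factorization order

# Predict each token conditioned on the tokens that precede it in the
# sampled order rather than in left-to-right order. Averaged over many
# sampled orders, every token is predicted from context on both sides,
# without ever corrupting the input with [MASK] tokens.
for step, pos in enumerate(order):
    visible = order[:step].tolist()
    print(f"predict position {pos.item()} given positions {visible}")
```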
Q5. What is a Large Language Model (LLM)?