
GEN AI (PART 1) - Intro

https://www.youtube.com/watch?v=G2fqAlgmoPo (watch this 20 mins video - Gen AI summary)

1. What is Generative AI?

Generative AI refers to algorithms and models designed to generate new content, such as
text, images, or music, based on the patterns they have learned from existing data. It doesn't
just recognize or classify data; it can create new data that mirrors the characteristics of the
input it was trained on.

 Examples:

o Text generation: GPT-3 can generate paragraphs of coherent text based on a prompt (see the sketch after this list).

o Image generation: DALL-E can create images from text descriptions (e.g., "A cat wearing a suit").

o Music generation: AI models like Jukedeck can create original music tracks.
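
As a quick illustration of the text-generation example above, here is a minimal sketch using the Hugging Face transformers library (GPT-3 itself is only available through OpenAI's paid API, so the small open GPT-2 model serves as a stand-in here):

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # downloads the model on first run
    result = generator("Generative AI is", max_length=30, num_return_sequences=1)
    print(result[0]["generated_text"])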

--------------------------------------------------------------------------------------------------------------------------

2. Overview of Generative AI and Its Applications

Generative AI is widely used across multiple fields for creating new content or data. Some of
the key applications include:

 Natural Language Processing (NLP): Text generation, summarization, translation, and chatbots (e.g., GPT-3/ChatGPT).

 Image and Video Creation: Models like DALL-E and GANs generate realistic images and videos from text descriptions or random noise.

 Content Creation: Writing articles, generating code, producing music, or even designing new fashion patterns.

 Healthcare: Drug discovery, medical imaging, and personalized medicine.

 Entertainment and Art: AI-generated art, music, and games.

3. Difference Between Traditional AI and Generative AI

Traditional AI generally focuses on predicting or classifying data. It learns to make decisions based on labeled data but doesn't create new content. In contrast, Generative AI creates new data or content that resembles the input data it has learned from.

Traditional AI:

 Goal: Classify or predict based on existing data.

 Example: Predicting if an email is spam or not.

Generative AI:

 Goal: Create new data or content based on learned patterns.

 Example: Creating a new email from scratch or generating a new image from a
description.

4. Generative vs. Discriminative Models: Key Differences

 Discriminative Models: These models classify or predict data based on input features. They learn to differentiate between different classes of data.

o Goal: Classify or predict given data.

o Example: A spam filter that classifies whether an email is spam or not (based
on features like the subject line, sender, etc.).

 Generative Models: These models generate new data that follows the patterns of
the training data. They try to learn the distribution of the data and can create new
instances of similar data.

o Goal: Generate new content/data.

o Example: GPT-3 generates new text based on learned patterns from large text
datasets.
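
To make the contrast concrete, here is a toy Python sketch (illustrative only; the feature values are made up). The discriminative model learns a boundary between classes, while the crude "generative" step estimates the data distribution and samples a brand-new point from it:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    spam = rng.normal(loc=2.0, scale=1.0, size=(100, 2))   # made-up "spam" feature vectors
    ham = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))   # made-up "not spam" features
    X, y = np.vstack([spam, ham]), np.array([1] * 100 + [0] * 100)

    # Discriminative: learn a decision boundary, then classify a new email's features.
    clf = LogisticRegression().fit(X, y)
    print("spam?", clf.predict([[1.5, 2.5]]))

    # Generative (very crude): estimate the spam feature distribution, then sample a
    # brand-new "spam-like" point from it -- data that never existed in the training set.
    mean, cov = spam.mean(axis=0), np.cov(spam.T)
    print("new sample:", rng.multivariate_normal(mean, cov))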

5. Branches and Hierarchy of Models

Artificial Intelligence (AI)
└── Machine Learning (ML)
    ├── Supervised Learning
    ├── Unsupervised Learning
    ├── Reinforcement Learning
    └── Deep Learning (DL)
        └── Neural Networks
            ├── Convolutional Neural Networks (CNNs)
            ├── Recurrent Neural Networks (RNNs)
            └── Transformer Models

Within these, models split by objective (**Generative AI** builds on the generative branch):

├── **Generative Models**
│   ├── **Generative Adversarial Networks (GANs)**
│   ├── **Variational Autoencoders (VAEs)**
│   └── **Large Language Models (LLMs)**
└── **Discriminative Models**
    ├── Logistic Regression
    ├── Support Vector Machines (SVMs)
    └── Random Forests

GEN AI (PART 2) – History


https://www.youtube.com/watch?v=_6R7Ym6Vy_I (watch first 12 minutes of this video only)

Note: You can watch the entire video too if you're left with some extra time; otherwise the first 12 mins is enough for context (how it came into the picture).

1. Early AI (1950s - 1980s): Rule-Based Systems

 AI was based on hard-coded rules.

 Examples: Early expert systems and machine translation (Google Translate’s first
versions).

2. Machine Learning Emerges (1990s - 2000s)

 AI shifted to data-driven learning.

 Models like Naive Bayes and Decision Trees started classifying data, but they
couldn’t generate anything new.

3. Deep Learning Breakthrough (2010s)

 Deep Learning (neural networks) enabled AI to handle complex tasks like image
recognition and speech processing.

 AI was still focused on classification and prediction, not generation.

4. The GANs Era (2014)

 Generative Adversarial Networks (GANs) were introduced by Ian Goodfellow.

 GANs introduced generation of new data, like realistic images and videos by having
two models (Generator & Discriminator) compete.

5. Transformer & NLP Revolution (2017 - 2020s)

 Transformers (introduced by Vaswani et al.) revolutionized Natural Language Processing (NLP).

 GPT-2 and GPT-3 (by OpenAI) used transformers to generate coherent text and
perform a wide range of tasks, such as writing essays and coding.

6. Generative AI Today (2020s+)


 Models like GPT-3/4, DALL-E, Stable Diffusion dominate fields like text generation,
image creation, and music generation.

 Applications: Content creation, code writing, customer service, and more.

GEN AI (PART 3) – Gen AI models

1. Generative Adversarial Networks (GANs)

 How it works:

o Two Models: A Generator creates fake data, and a Discriminator evaluates whether
the data is real or fake.

o The Generator improves over time to produce more realistic data, while the
Discriminator gets better at detecting fakes.

 Key Feature: Competitive learning between the Generator and Discriminator.

 Example: Used for image generation, like creating fake photos or videos (DeepFakes).
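
A minimal sketch of this adversarial loop in PyTorch (an illustrative toy on made-up 2-D data, not a real image GAN; real GANs use convolutional networks and many stabilization tricks):

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2   # toy sizes, invented for this sketch

    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(64, data_dim) * 0.5 + 2.0    # stand-in "real" data distribution
        fake = generator(torch.randn(64, latent_dim))   # Generator's fake data

        # 1) Train the Discriminator to label real as 1 and fake as 0.
        d_opt.zero_grad()
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # 2) Train the Generator to make the Discriminator answer "real" (1).
        g_opt.zero_grad()
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()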

2. Variational Autoencoders (VAEs)

 How it works:

o Encoder compresses input data into a smaller representation (latent space).

o Decoder reconstructs data from this compressed form.

 Key Feature: Data generation by sampling from the latent space to create new data points.

 Example: Used to generate new images based on a dataset of faces or create variations of
existing images.
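
A minimal VAE sketch in PyTorch (illustrative; the class name and layer sizes are invented for the example, and the training loop with its reconstruction + KL loss is omitted):

    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):                        # name invented for this sketch
        def __init__(self, data_dim=784, latent_dim=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU())
            self.to_mu = nn.Linear(128, latent_dim)      # mean of the latent code
            self.to_logvar = nn.Linear(128, latent_dim)  # log-variance of the latent code
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                         nn.Linear(128, data_dim), nn.Sigmoid())

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
            return self.decoder(z), mu, logvar

    vae = TinyVAE()
    # Generation = sample a point in latent space and decode it into a new data point
    # (meaningful only after training; shown here just to illustrate the mechanics).
    new_sample = vae.decoder(torch.randn(1, 8))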

3. Large Language Models (LLMs)

 How it works:

o Trained on massive amounts of text to predict the next word or sentence, helping
the model generate text based on input.

 Key Feature: Able to generate coherent, contextually relevant text.

 Example: GPT-3, ChatGPT for answering questions, writing articles, or creating code.

4. Transformers

 How it works:

o Uses an attention mechanism to focus on relevant parts of input data (like words in a
sentence) to process sequences more effectively.

o Particularly useful in Natural Language Processing (NLP).

 Key Feature: Contextual understanding and ability to handle long-range dependencies in data.

 Example: BERT for understanding language and GPT for generating text.

5. Diffusion Models

 How it works:

o Start with random noise and use learned patterns to gradually refine it into a
coherent output (like an image).

 Key Feature: Generate high-quality content by reversing the "noising" process.

 Example: Stable Diffusion, which generates images from text prompts.
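
A toy sketch of that reverse "noising" process (purely illustrative: the denoiser below is a hypothetical stand-in for a trained neural network, and real diffusion models use carefully derived noise schedules, not a fixed shrink factor):

    import torch

    def denoiser(noisy, t):
        # Hypothetical stand-in for a trained network that, at timestep t,
        # predicts a slightly less noisy version of its input.
        return 0.95 * noisy

    x = torch.randn(3, 64, 64)          # start from pure random noise
    for t in reversed(range(1000)):     # run the learned denoising steps in reverse
        x = denoiser(x, t)              # each step refines the output a little
    # With a real trained denoiser, x would now be a coherent image.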

Summary:

 GANs: Two models (Generator vs. Discriminator) to generate realistic data.

 VAEs: Encode and decode to create new data from compressed representations.

 LLMs: Trained on large text datasets to generate text (e.g., GPT-3).

 Transformers: Attention-based models for language understanding (e.g., GPT, BERT).

 Diffusion Models: Start from random noise and generate high-quality data (e.g., images).

GEN AI (PART 4) – Understanding Transformers
https://www.youtube.com/watch?v=ZXiruGOCn9s (5 mins video)

https://www.youtube.com/watch?v=SZorAJ4I-sA (9 mins video)

Note – Both are good, you could watch either; my suggestion watch both of them.

Then give this a read (it will make more sense). You can also skip the reading; the videos are more than enough.

What are Transformers?

Transformers are a type of deep learning architecture used mainly in Natural Language Processing (NLP) tasks
like language translation, text generation, and text understanding. They were introduced in 2017 by Vaswani et
al. in the paper "Attention is All You Need."

They are specialized models designed to process sequential data (like sentences) in a more efficient way
compared to older models, like RNNs (Recurrent Neural Networks) and LSTMs (Long Short-Term Memory).

Why are Transformers needed in Generative AI?

1. Improved Contextual Understanding:

o Transformers excel at understanding the context of a word in a sentence by looking at all other words at once, not just the nearby ones. This global context helps them generate or understand more coherent and meaningful text.

2. Parallelization for Efficiency:

o Unlike previous models that processed words one by one (sequentially), Transformers can
process all words in a sentence simultaneously. This speeds up training and allows them to
handle larger datasets efficiently.

3. Handling Long Sequences:

o Transformers can handle long-range dependencies better. This means that they can
understand relationships between words that are far apart in a sentence or document.

How do Transformers work?

Transformers rely on an innovation called Attention Mechanism (specifically Self-Attention) to process the
input data:

Self-Attention Mechanism:

 What it does: It calculates the importance of each word in a sentence relative to the other words.

o For example, in the sentence "The cat sat on the mat," the model needs to know that "cat"
and "sat" are more related than "the" and "mat."
o Self-attention allows the model to weigh each word's importance in relation to every other
word in the sentence.

 Why it matters: This helps the model understand context and relationships between words regardless
of their position in the sentence.
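
To make this concrete, here is a minimal single-head self-attention sketch in NumPy (simplified: it uses the raw word embeddings directly as queries, keys, and values, whereas real Transformers learn separate projection matrices for each):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def self_attention(X):
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)   # how strongly each word relates to every other word
        weights = softmax(scores)       # each row is one word's attention distribution
        return weights @ X              # each output is a context-aware mix of all words

    # Six "words" ("The cat sat on the mat"), each a random 4-dim embedding for the demo.
    X = np.random.rand(6, 4)
    print(self_attention(X).shape)      # (6, 4): one context-aware vector per word

Each output row mixes information from the whole sentence, which is exactly the "global context" described above.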

Key Components of Transformers:

1. Encoder:

o The encoder processes the input data (e.g., a sentence) and converts it into a series of
vectors (numerical representations).

2. Decoder:

o The decoder uses the encoded vectors to generate the output (e.g., the translated sentence).

o In some models, like GPT, only the decoder is used for generating text, while in others like
BERT, only the encoder is used for understanding text.

Role of Transformers in Generative AI:

Transformers are the backbone of most recent Generative AI models, especially for tasks involving language
generation or understanding. Without Transformers, modern AI systems like GPT-3, ChatGPT, or BERT
wouldn't be possible.

Examples of Transformer Models:

1. GPT-3 (Generative Pre-trained Transformer):

o Uses only the decoder part of the Transformer to generate human-like text.

o It’s pre-trained on a massive amount of data and can perform tasks like text completion,
answering questions, and even coding.

2. BERT (Bidirectional Encoder Representations from Transformers):

o Uses the encoder to understand the meaning of text (e.g., sentiment analysis, question
answering).

o It reads text in both directions (left-to-right and right-to-left) to capture better context.

In Summary:

 Transformers are a deep learning architecture used for understanding and generating sequential data
like text.

 Their key innovation is the self-attention mechanism, which allows them to process and understand
relationships between words, regardless of their position in a sentence.

 Why they’re important in Generative AI: Transformers allow models to generate coherent,
contextually relevant text and handle long sequences efficiently.
 Applications: Text generation (GPT-3), language understanding (BERT), and machine translation.

GEN AI (PART 5) – LLMs


https://www.youtube.com/watch?v=5sLYAQS9sWQ (5 mins video – brief)

https://www.youtube.com/watch?v=xU_MFS_ACrU&t=42s (8 mins video – detailed explanation)

Note: Give this a read (feel free to skip a topic you don't feel is important).

What are Large Language Models (LLMs)?

 Definition:
LLMs are AI models designed to understand, generate, and predict human language. They are typically
trained on massive amounts of text data from books, websites, articles, etc., to learn the patterns,
grammar, and structure of language.

 Examples:

o GPT-3 (by OpenAI)

o BERT (by Google)

How do LLMs work?

LLMs are typically built using the Transformer architecture, which uses the self-attention mechanism to process
input (text) and generate output. Here’s how they work:

1. Training:

o LLMs are trained by exposing them to huge amounts of text data (e.g., books, websites).
During training, they learn to predict the next word or sequence of words in a sentence.

o This process allows them to understand language, learn context, and make predictions about
the text they generate.

2. Generating Text:

o When you input a prompt (e.g., a question or statement), the model processes it,
understands the context, and generates a coherent response.

o LLMs like GPT-3 are autoregressive, meaning they generate text word by word, using the
previous words to predict the next one.
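
A minimal sketch of this autoregressive loop, using the open GPT-2 model from the Hugging Face transformers library as a freely downloadable stand-in for GPT-3 (greedy decoding is shown for simplicity; real systems usually sample):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    ids = tok("The cat sat on", return_tensors="pt").input_ids
    for _ in range(10):                          # generate 10 tokens, one at a time
        logits = model(ids).logits               # scores for every possible next token
        next_id = logits[0, -1].argmax()         # greedy: pick the single most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))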

Why are LLMs important in Generative AI?

1. Coherent Text Generation:
LLMs are incredibly powerful at generating natural-sounding text that can be indistinguishable from human writing. This ability is crucial for many applications, like chatbots (e.g., ChatGPT), creative writing, and customer support.

2. Wide Range of Applications:
LLMs can perform a variety of tasks:

o Text generation: Write essays, blogs, stories, etc.

o Translation: Translate text between languages (e.g., Google Translate).

o Summarization: Summarize articles or documents.

o Question answering: Answer questions based on knowledge learned from the training data.

3. Contextual Understanding:
LLMs understand the context of a sentence or prompt, which allows them to produce relevant and
coherent responses even in complex scenarios.

4. Versatility:
They can be fine-tuned for specific tasks (e.g., customer service, legal text analysis) while also being
capable of general-purpose tasks.

Types of LLMs

1. Autoregressive Models (e.g., GPT-3):

o These models generate text one word at a time, using the previous words to predict the next
one.

o Example: If you prompt GPT-3 with "Write a story about a cat," it will generate a story word
by word based on the pattern it learned.

2. Encoder-Decoder Models (e.g., T5):

o These models use an encoder-decoder architecture. The encoder reads the input and
converts it into a set of vectors (representations), and the decoder generates the output.

o Example: Used in tasks like machine translation (e.g., translating a sentence from English to
French).

3. Masked Language Models (e.g., BERT):

o These models are trained by masking parts of the input text and having the model predict the
missing words.

o Example: BERT is often used for text classification or sentiment analysis because it
understands the meaning of words in context.
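
Minimal sketches of types 2 and 3 above, using Hugging Face transformers pipelines (the small public checkpoints t5-small and bert-base-uncased are chosen here purely for illustration):

    from transformers import pipeline

    # Encoder-decoder (T5): the encoder reads English, the decoder writes French.
    translate = pipeline("translation_en_to_fr", model="t5-small")
    print(translate("The house is wonderful.")[0]["translation_text"])

    # Masked language model (BERT): predict the word hidden behind [MASK].
    fill = pipeline("fill-mask", model="bert-base-uncased")
    for guess in fill("The cat sat on the [MASK]."):
        print(guess["token_str"], round(guess["score"], 3))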

Why are LLMs Crucial for Generative AI?

1. Human-Like Responses:

o LLMs are capable of generating human-like text, making them indispensable for applications
like conversational agents (e.g., ChatGPT) and virtual assistants.

2. Handling Complex Tasks:

o LLMs excel at performing complex tasks that require contextual understanding, such as generating creative content, summarizing long articles, and answering diverse questions.

3. Learning from Massive Data:

o LLMs are trained on massive datasets, making them knowledge-rich. They can generate text
based on what they’ve learned, even on topics they weren’t explicitly trained on.

GEN AI (PART 6) – Q & A


(Give a quick read) – most of it is covered in the sections above

1. Key Differences Between LLMs and Transformers

1. Transformers are the architecture, LLMs are the models:

o Transformers refer to the underlying architecture used to process language.

o LLMs are models that are built using the Transformer architecture. So, LLMs are a specific
application of Transformers designed for language tasks like text generation, summarization,
and translation.

2. Function:

o Transformers: Focus on processing sequential data using the self-attention mechanism. They
handle things like word relationships in a sentence or long-range dependencies within text.

o LLMs: Utilize the Transformer architecture to understand and generate language. They are
trained on large text datasets to predict the next word or generate coherent responses based
on input text.

---------------------------------------------------------------------------------------------------------------------------------------------------

2. Do We Need LLMs If We Have Transformers? (same answer as above, a different way of putting the question)

 Transformers are just the architecture:
A Transformer alone is a model design or blueprint. It's a framework that enables effective learning from data but doesn't make any predictions or perform tasks by itself. It's like a car chassis: it needs an engine (which would be the model's parameters and training data) to actually move.

 LLMs are the application of Transformers:
LLMs are specific models that use the Transformer architecture but are trained on massive text data to solve specific language tasks (text generation, question answering, summarization, etc.). Without the training and fine-tuning on large datasets, a Transformer architecture wouldn't be able to perform complex language tasks. LLMs are trained versions of Transformers designed to generate language and understand context in text.

---------------------------------------------------------------------------------------------------------------------------------------------------

3. Explain the working of a Generative AI model?


(We'll talk about LLMs here and the use of Transformers, since LLMs are built using the Transformer architecture.)

Start with (Generative AI models in general):

 Generative AI refers to models that can generate new content (text, images, etc.) based on patterns learned from existing data.
 In the case of language generation, LLMs like GPT-3 are trained to generate human-like text by understanding patterns in the data.
Explain the Role of Transformers in LLMs:
 LLMs are built using Transformers, which are a type of deep learning architecture
designed to handle sequential data like text.
 Transformers use a mechanism called self-attention, which allows them to focus on
different parts of the input data simultaneously, capturing relationships between all
the words in a sentence.
Describe the Working of LLMs:
 Training: LLMs like GPT-3 are trained on massive datasets (e.g., books, websites) to
learn how language works—like grammar, word associations, and sentence structure.
 Generative Process: When you input a prompt or question, the model processes the
input, understands the context, and generates a response word by word, using what
it has learned from its training data.
 Autoregressive Nature: LLMs like GPT-3 are autoregressive, meaning they generate
the next word based on the previous ones, producing coherent text over time.

4. Use Cases of Generative AI


1. Text Generation:
o Applications: Writing articles, blogs, summaries, email responses, or even
code.
o Example: ChatGPT can help you write professional emails or assist in creating
software code snippets based on a prompt.
2. Image Generation:
o Applications: Design, advertising, video game assets, digital art.
o Example: DALL-E is a model that can generate images from text descriptions. For example, "a two-story house made of candy" could generate a photo-realistic image of such a house.
3. Code Generation:
o Applications: Assisting programmers by generating code from natural
language descriptions.
o Example: GitHub Copilot, powered by OpenAI’s Codex (similar to GPT), can
assist developers by suggesting code completions.
4. Voice and Music Generation:
o Applications: Creating songs, voice synthesis, podcast generation.
o Example: Jukedeck generates music, and WaveNet (by DeepMind) generates human-like voices.
5. Customer Support:
o Applications: Chatbots for 24/7 customer service.
o Example: ChatGPT is often integrated into customer service systems to
answer common queries and improve user experience.

5. Versions and Upcoming Models

 GPT-3 (2020): Launched with 175 billion parameters. It’s the model behind ChatGPT
and is used for text generation tasks.

 GPT-4 (2023): The more advanced version of GPT-3, providing better accuracy,
context understanding, and more.

 GPT-5 (Possibly in Development): Expected to be even more capable in understanding and generating text, though it has not yet been officially released.

6. Questions Related to Tech and Innovation (dummy questions)

What are the latest trends in artificial intelligence or machine learning? How do you see
them impacting business?

Can you explain Generative AI and its potential applications in the business world?

What do you think the next big disruption in technology will be?

How would you use data analytics to solve a business problem?

Explain a complex technical concept (such as blockchain, AI, or Gen AI) in simple terms.

7. Discussing models like GPT-3, ChatGPT, and GANs effectively

Explain their core differences:

 GPT-3: General-purpose text generation.

 ChatGPT: Optimized for conversational AI and interactive tasks.

 GANs: Focused on generating new data (mainly images, but also other media) rather
than text.

8. Popular Generative AI Tools:

1. GPT-3 (OpenAI):
o Description: One of the most powerful language models that can generate
human-like text. GPT-3 is used in applications like content generation,
chatbots, and creative writing.

o Use Case: Text generation, customer service chatbots, content writing, translation, and summarization.

2. ChatGPT (OpenAI):

o Description: A conversational AI powered by GPT-3, optimized for engaging in dialogues and performing tasks based on user input.

o Use Case: Virtual assistants, customer support, tutoring, and interactive agents.

3. DALL·E (OpenAI):

o Description: A model that can generate images from textual descriptions. It combines language understanding with image generation.

o Use Case: Creative design, product ideation, and advertising.

4. MidJourney:

o Description: Another tool for generating artistic images from text prompts,
often used by digital artists and creators for inspiration.

o Use Case: Art generation, graphic design, marketing materials, and visual
storytelling.

5. Runway ML:

o Description: A platform that provides various generative models for tasks like
video editing, image generation, and text-to-image transformation.

o Use Case: Creative professionals using generative AI for media creation, including videos, music, and animations.

6. DeepArt and DeepDream (DeepDream by Google):

o Description: DeepArt is focused on transforming photos into artworks in the style of famous artists. DeepDream is a neural-network technique that creates visually intricate, dream-like imagery.

o Use Case: Artistic transformation of images, creative expression.

7. StyleGAN (NVIDIA):

o Description: StyleGAN is used for generating high-quality synthetic images, often used in art and entertainment, including deepfakes and virtual characters.
o Use Case: Creating synthetic faces, avatars, and design elements.

8. Jukedeck and OpenAI’s MuseNet:

o Description: These models generate music based on text or prompts. MuseNet, for instance, can generate compositions in various genres.

o Use Case: Music production, content creation for media, and game
development.

9. Codex (OpenAI):

o Description: Codex is a model trained to generate code from natural language, allowing developers to describe the code they want, and Codex will write it.

o Use Case: Assisting with coding, automating code generation, and software
development.

9. How is LangChain linked or associated with LLMs?

LangChain is a framework designed to help integrate Large Language Models (LLMs) like
GPT-3 with external data sources and systems to create more complex applications. It
allows you to link multiple steps and tools, making LLMs more powerful and adaptable to
business needs.
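
A minimal sketch of the idea (treat this as hypothetical: LangChain's API changes frequently between releases, so the imports and class names below follow an early-version style and may not match the current library; an OpenAI API key is also assumed):

    from langchain.llms import OpenAI            # early-version import paths (assumption)
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # One small "chain": fill a prompt template, then send it to the LLM.
    prompt = PromptTemplate(
        input_variables=["product"],
        template="Write a one-line marketing slogan for {product}.",
    )
    chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)  # needs OPENAI_API_KEY set
    print(chain.run("a solar-powered backpack"))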

###### Q&A Continuation (Short Answers) – If the above part feels too much read this it’s
the same

1. What is Generative AI?

Answer:
Generative AI refers to models that can create new content like text, images, or music,
based on the patterns they've learned from existing data. It differs from traditional AI, which
focuses on classification or prediction. Examples include GPT-3 (text generation) and GANs
(image generation).

2. How does GPT-3 work, and what are its use cases?

Answer:
GPT-3 is a transformer-based language model with 175 billion parameters. It generates text
by predicting the next word in a sequence. Its use cases include content creation, chatbots,
translation, and summarization.

3. What’s the difference between GPT-3 and ChatGPT?

Answer:
While both are based on GPT-3, ChatGPT is fine-tuned specifically for conversational
interactions, allowing it to handle back-and-forth dialogues more effectively, whereas GPT-3
is a general-purpose text generator.

4. What are GANs and how are they used?

Answer:
Generative Adversarial Networks (GANs) consist of two neural networks: a generator
(creates new data) and a discriminator (evaluates the data). They are widely used for
creating synthetic images, deepfakes, and data augmentation.

5. What is LangChain and how does it enhance LLMs?

Answer:
LangChain is a framework that integrates LLMs with external data sources and tools. It
enables complex, multi-step workflows by combining LLMs with databases, APIs, and other
systems, allowing for more advanced use cases like real-time data processing and
personalized responses.

6. What is the role of Transformers in Generative AI?

Answer:
Transformers are the foundation for many Generative AI models, including GPT-3. They use
self-attention mechanisms to process and generate text more effectively, allowing the
model to understand context and relationships between words over long sequences.

7. How do LLMs like GPT-3 differ from transformers?

Answer:
LLMs (Large Language Models) like GPT-3 are specific implementations of the Transformer
architecture. Transformers are a general model used for processing sequences of data, while
LLMs are trained on vast amounts of text data to generate human-like language.

8. What are some challenges of Generative AI?

Answer:
Key challenges include:

 Bias in training data, leading to biased or harmful outputs.

 Ethical concerns like the creation of deepfakes.

 Data privacy issues, as models often require large datasets.

 High resource consumption for training models like GPT-3.

9. How does LangChain connect external data with LLMs?

Answer:
LangChain allows LLMs to interact with external data sources like APIs, databases, or web
scraping tools. It uses chains to structure multi-step processes, enabling LLMs to pull in
relevant data and generate more dynamic, data-aware outputs.

10. Can you explain how Transformers work in simple terms?

Answer:
Transformers work by using self-attention mechanisms to look at all parts of a sentence
simultaneously, rather than word-by-word. This allows the model to understand
relationships between words even if they’re far apart in the sentence, improving
performance in tasks like translation or text generation.

11. What do you think is the next big disruption in technology?

Answer:
The next big disruption is likely to come from Quantum Computing. Quantum computers
have the potential to revolutionize industries like healthcare, finance, and logistics by solving
complex problems exponentially faster than classical computers.

12. How do you see AI impacting business in the near future?

Answer:
AI, especially Generative AI, will help businesses automate repetitive tasks, generate content
more efficiently, and provide better customer experiences through chatbots and
personalized marketing. It will also enable data-driven decision-making and more intelligent
products and services.

13. How would you explain a complex concept like AI or blockchain to a non-technical person?

Answer:
AI is like teaching a computer to learn from data and make decisions based on patterns. Just
like how we learn from experience, AI models improve over time by practicing tasks like
recognizing faces or generating text. Blockchain is a digital ledger that records transactions
securely and transparently, making it hard to change or tamper with past records.

You can read parts of these articles for a business perspective (e.g., how will generative AI impact the future of work?):

https://www.gartner.com/en/topics/generative-ai

https://www.nvidia.com/en-in/glossary/generative-ai/

https://www.torryharris.com/knowledge-zone/generative-ai

https://www.dotsquares.com/press-and-events/generative-ai-explained-2024
