Artificial 2
Q-3: Reinforcement Learning: Reinforcement Learning is a feedback-based machine learning technique in which an agent (a computer program) interacts with an environment and learns to act within it by performing actions and observing their results. For each good action, the agent receives positive feedback (a reward), and for each bad action, it receives negative feedback (a penalty). The agent learns automatically from this feedback without any labeled data. Key components include: Agent: The learner or decision-maker that takes actions in the environment. Environment: The external system with which the agent interacts, where actions lead to changes and produce rewards. State: A representation of the environment’s current situation, crucial for decision-making. Reward: Feedback returned to the agent by the environment to evaluate its action. Policy: The strategy the agent applies to choose actions. Value: The expected long-term return with discounting, as opposed to the immediate short-term reward. Q-value: Similar to the value, but it takes one additional parameter, the current action.
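The feedback loop described above can be sketched as a tiny tabular Q-learning example. The 5-state chain environment, the reward scheme, and all parameters below are illustrative assumptions, not part of the notes:

```python
import random

# Minimal tabular Q-learning sketch on a toy 5-state chain:
# the agent starts at state 0 and earns a reward of +1 only
# when it reaches state 4.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: returns (next_state, reward)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # update Q toward reward + discounted best future value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned policy: in every state, moving right should look best.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Note how the agent never sees labeled data: it only observes the rewards its own actions produce, matching the description above.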
Q-4: Reactive Agents: Reactive agents are a type of AI agent designed to make decisions based solely on the current state of the environment, without considering the history of actions or anticipating future consequences. These agents react to immediate sensory input and execute pre-defined actions based on fixed rules or mappings. They are often used in real-time, dynamic environments where quick decision-making is crucial. ∎These agents continuously perceive the environment through sensors or input channels and use this input to make decisions in real time. ∎They are typically programmed with a set of rules or conditional statements that map sensory input to actions. ∎Each decision is made independently based on the current input; the agent has no memory of past actions or states. ∎Reactive agents can respond quickly to changes in the environment, making them suitable for tasks that require rapid decision-making and adaptation.
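The fixed rule-to-action mapping above can be sketched very directly. The vacuum-world percepts and rules below are an illustrative assumption, chosen only because the mapping is easy to read:

```python
# A reactive agent is just a fixed mapping from the current percept
# to an action -- no memory, no lookahead.
RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def reactive_agent(percept):
    """Each decision depends only on the current percept."""
    return RULES[percept]

print(reactive_agent(("A", "dirty")))
print(reactive_agent(("B", "clean")))
```

Because the table is fixed, every call with the same percept yields the same action, which is exactly what "no memory of past actions or states" means.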
Q-5: Bidirectional search: Bidirectional search is a graph search algorithm used to find the shortest path between
two nodes in a graph. Unlike traditional search algorithms that start from the initial node and explore the graph in
a single direction, bidirectional search starts from both the source node and the destination node simultaneously.
The search progresses towards each other until they meet at a common node, indicating the discovery of a path.
Algorithm: (1) Initialize two frontiers, one from the source node and another from the destination node. (2) Explore the graph from both frontiers simultaneously, expanding neighboring nodes toward the goal in one direction and toward the source in the other. (3) If a node is discovered in both frontiers, a path has been found and the search terminates. (4) Continue exploring until the frontiers intersect or all reachable nodes have been visited. (5) If no intersection is found, there is no path between the source and destination nodes.
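The steps above can be sketched as a bidirectional BFS that expands the two frontiers level by level and stops when they meet. The adjacency-dict representation and the example chain graph are illustrative assumptions:

```python
from collections import deque

def bidirectional_search(graph, src, dst):
    """Sketch: returns the number of edges on a path from src to dst,
    or None if the two frontiers never meet."""
    if src == dst:
        return 0
    frontier = {src: deque([src]), dst: deque([dst])}
    visited = {src: {src}, dst: {dst}}
    dist = {src: {src: 0}, dst: {dst: 0}}
    while frontier[src] and frontier[dst]:
        for side, other in ((src, dst), (dst, src)):
            for _ in range(len(frontier[side])):   # expand one BFS level
                node = frontier[side].popleft()
                for nbr in graph.get(node, []):
                    if nbr in visited[other]:       # frontiers have met
                        return dist[side][node] + 1 + dist[other][nbr]
                    if nbr not in visited[side]:
                        visited[side].add(nbr)
                        dist[side][nbr] = dist[side][node] + 1
                        frontier[side].append(nbr)
    return None  # frontiers exhausted: no path

chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(chain, "A", "E"))
```

Each frontier only has to cover roughly half the distance, which is why bidirectional search can explore far fewer nodes than a single BFS from the source.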
Q-6: Genetic algorithm: Genetic Algorithms (GAs) are a type of heuristic search and optimization technique inspired
by the principles of natural selection and genetics. They are used to find approximate solutions to optimization and
search problems. Here's how it works: 1) Randomly initialize a population P. 2) Determine the fitness of the population. 3) Repeat until convergence: [ a) Select parents from the population, b) Crossover to generate a new population, c) Perform mutation on the new population, d) Calculate fitness for the new population.] These algorithms are used in
computer programs to solve big problems by finding the best
solution, even when there are lots of possibilities. Genetic
Operators: There are three operators in GA: Selection: This operator
picks the fittest individuals (solutions) from the current population
to become parents for the next generation. Common selection
methods include roulette wheel selection, tournament selection, or
rank-based selection. Crossover (Recombination): This operator
mimics genetic recombination by combining genetic material from
two parent solutions to create offspring. It's like mixing and matching
parts of two parent recipes to create a new one. Mutation: This
operator introduces random changes in individual solutions. It's like
occasionally swapping an ingredient in a recipe to see if it makes the
cake taste better. Mutation helps explore new possibilities and
prevents the algorithm from getting stuck in a local optimum.
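The loop and the three operators can be sketched on the classic OneMax toy problem (evolve a bit string toward all ones). The problem choice, population size, and rates below are illustrative assumptions:

```python
import random

random.seed(1)
BITS, POP, GENS, MUT = 12, 30, 60, 0.02

def fitness(ind):
    """Fitness = number of 1 bits (OneMax)."""
    return sum(ind)

def tournament(pop):
    """Selection: keep the fitter of two random individuals."""
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

# 1) Randomly initialize the population.
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

# 3) Repeat: select, crossover, mutate, re-evaluate.
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, BITS)                 # single-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < MUT else g  # bit-flip mutation
                 for g in child]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))
```

Selection here is tournament selection (one of the methods named above); swapping in roulette-wheel or rank-based selection would only change the `tournament` function.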
Q-7: Differences between Depth-Limited Search (DLS) and Iterative Deepening Search (IDS): ∎Approach: DLS:
Performs a depth-first search within a set depth limit from the root before backtracking. IDS: Conducts multiple
depth-limited searches, gradually increasing the depth limit with each iteration. ∎ Completeness: DLS: Not
complete if the solution lies beyond the depth limit. IDS: Complete - ensures finding a solution if one exists,
regardless of the depth, by incrementally increasing the depth limit. ∎Memory Usage: DLS: Requires relatively little memory since it explores nodes only up to the depth limit. IDS: Memory usage also stays low, growing only linearly with the current depth limit, although it re-expands shallow nodes on every iteration. ∎Optimality: DLS: Not guaranteed to find the optimal solution unless it lies within the depth limit. IDS: Finds the shallowest (optimal) solution when step costs are uniform, by progressively increasing the depth limit. ∎Termination Condition: DLS: Terminates when
the depth limit is reached without finding a solution. IDS: Terminates when a solution is found or after exhausting
all depths.
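The relationship between the two is easiest to see in code: IDS is just DLS called repeatedly with a growing limit. The tree below (an adjacency dict) is an illustrative assumption:

```python
def dls(tree, node, goal, limit):
    """Depth-limited search: explore at most `limit` edges deep.
    Returns a path as a list of nodes, or None if not found."""
    if node == goal:
        return [node]
    if limit == 0:                      # depth limit reached: back up
        return None
    for child in tree.get(node, []):
        path = dls(tree, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def ids(tree, start, goal, max_depth=20):
    """Iterative deepening: re-run DLS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        path = dls(tree, start, goal, limit)
        if path is not None:
            return path                 # shallowest solution found first
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(ids(tree, "A", "G"))
```

Note that `dls` with limit 2 misses the goal at depth 3, while `ids` finds it by raising the limit, which is exactly the completeness difference described above.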
Q-8: Differences between Depth-First Search (DFS) and Depth-Limited Search (DLS): ∎Search Strategy: DFS:
Explores as far as possible along a branch before backtracking. DLS: Similar to DFS but with a depth limit; it stops
exploring a branch when it reaches a certain depth. ∎Completeness: DFS: Complete if the search space is finite;
might not terminate in infinite graphs with cycles. DLS: Not complete if the solution lies beyond the depth limit.
∎Memory Usage: DFS: Can use less memory as it doesn't store all nodes. DLS: Utilizes memory similarly to DFS until
the depth limit is reached. ∎Time Complexity: DFS: Potentially has higher time complexity if the goal node is deep
in the search space. DLS: Has a fixed depth limit, so the time complexity is more predictable. ∎Termination: DFS:
Terminates when the goal node is found or when the entire graph/tree is traversed. DLS: Stops exploring a branch
when the depth limit is reached, regardless of whether the solution is found.
Q-9: Differences between Breadth First Search (BFS) and Depth First Search (DFS): ∎Strategy: BFS: Explores all
neighbors of a node before moving to the next level of nodes. DFS: Explores as far as possible along a branch before
backtracking. ∎Implementation: BFS: Typically implemented using a queue data structure. DFS: Often implemented
using recursion or a stack data structure. ∎Search Order: BFS: Searches breadth-wise, visiting nodes closer to the
starting node before exploring farther nodes. DFS: Searches deeply, exploring as far as possible along a branch
before moving to neighboring branches. ∎Order of Exploration: BFS: Explores nodes level by level, moving outward
from the starting node. DFS: Descends along a path until it reaches the deepest node, then backtracks. ∎Memory
Usage: BFS: Uses more memory as it needs to store all nodes at a given level. DFS: Uses comparatively less memory
as it doesn’t need to store all nodes at a level. ∎Completeness: BFS: Complete if the search space is finite and the
branching factor is finite. DFS: Complete if the graph/tree is finite; might not terminate in infinite graphs with cycles.
∎Time Complexity: BFS: Higher time complexity compared to DFS if the goal node is deeper in the tree/graph. DFS:
Can potentially reach the goal faster if it's located deeper in the search space.
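The queue-versus-stack distinction above is the whole difference in code: running both on the same graph reaches the same nodes in a different order. The small example graph is an illustrative assumption:

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": [], "F": []}

def bfs(start):
    order, seen, q = [], {start}, deque([start])
    while q:
        node = q.popleft()                 # FIFO: shallow nodes first
        order.append(node)
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                q.append(nbr)
    return order

def dfs(start):
    order, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()                 # LIFO: dive deep first
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        for nbr in reversed(graph[node]):  # reversed keeps left-to-right order
            stack.append(nbr)
    return order

print(bfs("A"))   # level by level
print(dfs("A"))   # branch by branch
```

BFS visits A's whole level before descending, while DFS finishes the B-branch before ever touching C, illustrating the memory trade-off: BFS holds a whole level, DFS holds one path plus siblings.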
Q-10: Differences between informed (heuristic) search and uninformed (blind) search: ∎Utilizing Knowledge:
Informed Search: It uses knowledge during the process of searching. Uninformed Search: It does not require using
any knowledge during the process of searching. ∎ Speed: Informed Search: Finding the solution is quicker.
Uninformed Search: Finding the solution is much slower comparatively. ∎ Completion: Informed Search: Can be complete or incomplete, depending on the heuristic. Uninformed Search: Systematic strategies such as BFS are complete, though DFS may fail to terminate in infinite spaces. ∎ Consumption of Time:
Informed Search: Due to a quicker search, it consumes much less time. Uninformed Search: Due to slow searches,
it consumes comparatively more time. ∎ Efficiency: Informed Search: It costs less and generates quicker results.
Thus, it is comparatively more efficient. Uninformed Search: It costs more and generates slower results. Thus, it is
comparatively less efficient.
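Greedy best-first search is a standard informed search that makes the "utilizing knowledge" point concrete: it always expands the node whose heuristic estimate h(n) is smallest. The graph and h-values below are illustrative assumptions:

```python
import heapq

graph = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "C": 1, "G": 0}   # estimated distance to G

def greedy_best_first(start, goal):
    """Informed search: expand the frontier node with the lowest h(n)."""
    frontier = [(h[start], start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # lowest heuristic first
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("S", "G"))
```

The heuristic steers the search straight through A to G without ever expanding B, whereas a blind search has no basis for preferring either branch, which is the speed and efficiency difference listed above.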
Q-11: Differences between propositional logic and predicate logic: ∎ Scope: Propositional Logic: Deals with
propositions or simple statements that are either true or false, represented by variables connected by logical
operators. Predicate Logic: Expands on propositional logic by incorporating variables, quantifiers, and predicates to
express relationships between objects and properties. ∎ Expressiveness: Propositional Logic: Limited to dealing
with propositions without considering the internal structure or properties of the statements. Predicate Logic: Allows
for the representation of relationships between objects, properties, and quantified statements. ∎ Variables:
Propositional Logic: Uses propositional variables (p, q, r, etc.) to represent atomic statements and combines them
using logical connectives like AND (∧), OR (∨), NOT (¬), IMPLIES (→), and IF AND ONLY IF (↔). Predicate Logic: Uses
variables ranging over domains of objects, predicates to express properties or relationships, and quantifiers like ∀
(for all) and ∃ (there exists). ∎ Example: Propositional Logic: "It's sunny" (p) AND "It's raining" (q), represented as p
∧ q. Predicate Logic: "All cats are mammals," represented as ∀x (Cat(x) → Mammal(x)).
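Over a finite domain, the quantified example above can be checked mechanically, which highlights what propositional logic cannot express. The domain and predicate facts below are illustrative assumptions:

```python
# "All cats are mammals": forall x (Cat(x) -> Mammal(x)),
# evaluated over a small finite domain by representing each
# predicate as the set of objects it holds for.
domain = {"tom", "rex", "tweety"}
cat = {"tom"}
mammal = {"tom", "rex"}

# forall x (Cat(x) -> Mammal(x)); note A -> B is (not A) or B
all_cats_are_mammals = all((x not in cat) or (x in mammal) for x in domain)

# exists x (Mammal(x) and not Cat(x))
some_mammal_is_not_a_cat = any((x in mammal) and (x not in cat) for x in domain)

print(all_cats_are_mammals)
print(some_mammal_is_not_a_cat)
```

Propositional logic could only assign one truth value to the whole sentence; predicate logic lets the quantifier range over each object in the domain individually.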
Q-12: Distinguish between knowledge and intelligence: ∎ Definition: Knowledge: Knowledge refers to the
information, facts, and skills acquired through experience, education, or learning. Intelligence: Intelligence is a
broader cognitive ability that involves the capacity for learning, reasoning, problem-solving, and adapting to new
situations. ∎ Nature: Knowledge: It is about having information and understanding about a particular subject or a
range of topics. Intelligence: It's the ability to understand complex ideas, adapt effectively to the environment, learn
from experiences, and apply knowledge to solve problems. ∎ Acquisition: Knowledge: Knowledge is gained through
learning, study, experience, and exposure to information. Intelligence: Intelligence is considered an inherent trait,
although it can be developed and enhanced through various experiences and learning opportunities. ∎ Examples:
Knowledge: Knowing historical events, scientific theories, mathematical formulas, languages, etc., constitutes
knowledge. Intelligence: Intelligence involves logical reasoning, critical thinking, creativity, adaptability, and the
ability to learn and apply knowledge effectively.
Q-13: Differences between supervised and unsupervised learning: ∎Training Data: Supervised learning algorithms are trained using labeled data. Unsupervised learning algorithms are trained using unlabeled data. ∎Feedback: Supervised learning takes direct feedback to check whether it is predicting the correct output. Unsupervised learning does not take any feedback. ∎Output: Supervised learning predicts the output. Unsupervised learning finds hidden patterns in the data. ∎Input: In supervised learning, input data is provided to the model along with the output. In unsupervised learning, only input data is provided to the model. ∎Goal: Supervised learning aims to predict or classify based on known outcomes. Unsupervised learning aims to find inherent patterns or structures within the data. ∎Challenges: Supervised learning may face issues related to the availability and quality of labeled data. Unsupervised learning may encounter difficulties in assessing the quality of discovered patterns.
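The contrast can be sketched on the same kind of data: a nearest-centroid classifier built from labeled pairs versus a tiny 1-D 2-means clustering that must discover the groups itself. The data values and group names are illustrative assumptions:

```python
# Supervised: (input, label) pairs are given.
labeled = [(1.0, "low"), (1.2, "low"), (8.0, "high"), (8.5, "high")]
# Unsupervised: inputs only, no labels.
unlabeled = [1.0, 1.1, 1.3, 7.9, 8.2, 8.6]

# Supervised learning: build a nearest-centroid classifier from labels.
centroids = {}
for lbl in {"low", "high"}:
    vals = [x for x, l in labeled if l == lbl]
    centroids[lbl] = sum(vals) / len(vals)

def classify(x):
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# Unsupervised learning: 1-D 2-means; the groups are discovered, not given.
c1, c2 = min(unlabeled), max(unlabeled)
for _ in range(10):
    g1 = [x for x in unlabeled if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in unlabeled if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(classify(2.0))          # prediction uses the known labels
print(sorted(g1), sorted(g2)) # clusters found without any labels
```

The classifier can name its answer ("low") because labels were supplied; the clustering only produces anonymous groups, which is why assessing the quality of discovered patterns is harder.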
Q-14: Differences between conventional computation and neural network computation: ∎Processing Paradigm: Conventional computation follows explicit instructions, while neural network computation involves learning from data through interconnected layers. ∎Learning Approach: Conventional computation relies on predefined algorithms, while neural networks learn patterns and features from data. ∎Flexibility and Adaptability: Neural networks can adapt and learn from new data, while conventional computation often requires explicit changes to the code or algorithm to accommodate new information. ∎Problem Complexity: Neural networks are powerful for complex tasks with high-dimensional data and non-linear relationships, whereas conventional computation is suitable for tasks with well-defined rules and algorithms.
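The two paradigms can be contrasted on the same tiny task: computing logical AND with an explicit rule versus having a single perceptron learn it from examples. The learning rate and epoch count are illustrative assumptions:

```python
# Conventional computation: the rule is written explicitly.
def and_rule(a, b):
    return 1 if a == 1 and b == 1 else 0

# Neural-network computation: a single perceptron learns the same
# function from labeled examples using the classic perceptron rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias, lr = 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                  # a few passes over the training data
    for x, target in data:
        err = target - predict(x)    # nudge weights toward correct output
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        bias += lr * err

print([predict(x) for x, _ in data])
```

The explicit rule never changes, while the perceptron would adapt automatically if retrained on different examples, which is the flexibility difference described above.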