
ICSC Qualification Round 2025 — Solutions

Problem A: Neural Network Components

Task: Identify and label the components in the provided MLP diagram.

Direct labels:

 ω^(1)_21 — the weight on the edge connecting the 2nd input feature (“Tip received”) to the 1st hidden neuron in the first hidden layer. A weight scales the input before summation.
 Σ (sigma, the summation node) — takes all weighted inputs plus the bias and computes the pre-activation value: z = Σ_j w_j x_j + b.
 f (activation function) — maps the pre-activation value z to the neuron output a = f(z); this is where nonlinearity is applied (examples: sigmoid, ReLU, tanh).
Colored circles from the figure:

 Green circle → a weight on an edge. In the figure this corresponds to ω^(1)_21, the weight from the second input feature to a particular hidden neuron.
 Orange circle → Σ, the summation node inside a neuron that computes the
weighted sum plus bias.
 Red circle → f, the activation function applied to the sum to produce the
neuron’s output.
Box definitions:

 Box A — the hidden layer (neurons that process input signals and pass
transformed activations forward).

 Box B — the output layer (final neuron(s) producing y, the model’s prediction).
 y — the predicted output (for this example, whether the customer was satisfied).
I see a neuron as a small machine: each incoming signal is multiplied by a weight (green circles), the contributions are added up with a bias (orange circle), and then a small function (red circle) decides how much of the combined signal passes forward. The hidden layer (Box A) transforms raw inputs into features that the output layer (Box B) uses to make the final decision y.
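A tiny sketch of that “small machine” view (illustrative weights and a sigmoid activation; these numbers are made up, not taken from the figure):

```python
import math

def neuron(inputs, weights, bias):
    # The Sigma node: weighted sum of inputs plus bias (pre-activation z)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The f node: a sigmoid activation squashing z into (0, 1)
    return 1 / (1 + math.exp(-z))

# Two input features with made-up weights and bias
a = neuron([1.0, 0.0], [0.8, -0.5], bias=0.1)  # z = 0.9
```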

Problem B: Cake Calculator (programming problem)

Problem recap: A cake requires 100 units of flour and 50 units of sugar. Given
available flour and sugar (positive integers), return a list/array of three integers: [cakes,
remaining_flour, remaining_sugar].
Reasoning and approach:

1. The number of cakes you can bake is limited by the scarcest resource relative to
recipe needs. Compute how many full recipe units fit in flour and sugar
independently: flour//100 and sugar//50.
2. The actual number of cakes is the minimum of those two values. That ensures
neither ingredient goes negative.
3. Leftovers are what remain after producing that many cakes.
This is direct, deterministic, and optimal — there’s no benefit to any other strategy (for
example trying to make fractional cakes is not allowed, and using ingredients
differently won’t increase the count since the recipe is fixed).
Python implementation

# Cake calculator: returns [cakes, remaining_flour, remaining_sugar]
def cake_calculator(flour: int, sugar: int):
    if flour <= 0 or sugar <= 0:
        raise ValueError("flour and sugar must be positive integers")
    cakes = min(flour // 100, sugar // 50)
    return [cakes, flour - 100 * cakes, sugar - 50 * cakes]

# Example checks:
# cake_calculator(310, 120) -> [2, 110, 20]
# cake_calculator(500, 250) -> [5, 0, 0]

Explanation of correctness:
 // ensures integer division (full recipes only).
 min(...) enforces the bottleneck constraint.
 Subtraction calculates exact leftovers; nothing rounds or approximates.
 Complexity: O(1) time, O(1) memory — trivial and robust.

Problem C: The School Messaging App (information theory)

Problem recap: There are 12 characters with given probabilities. You must (1) explain
why variable-length codes help, (2) compute entropy, and (3) compute average code
length for a given Fano code and compare.

Given probability table:

A: 0.20, B: 0.15, C: 0.12, D: 0.10, E: 0.08, F: 0.06, G: 0.05, H: 0.05, I: 0.04, J: 0.03, K: 0.02, L: 0.10

(These sum to 1.00 — check: 0.20+0.15+0.12+0.10+0.08+0.06+0.05+0.05+0.04+0.03+0.02+0.10 = 1.00.)

Q1: Why variable-length codes help (plain reasoning):

Fixed-length coding treats every symbol equally, so frequent symbols waste space. By
assigning shorter codes to symbols that appear often and longer codes to rare
symbols, the weighted average number of bits per symbol drops. Over many messages,
that reduces total transmitted bits and lets you fit more messages within the same
daily quota. This is the basic idea behind Huffman and Fano coding.
Simple example: If A occurs 20% of the time and had a 1-bit code while the rare K (2%)
had 6 bits, the expected contribution of A to average length drops massively compared
to a fixed 4-bit scheme. When summed over all characters the average falls.
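That saving can be made concrete with a quick back-of-the-envelope check (illustrative code lengths for this comparison, not the actual Fano code from Q3):

```python
# Fixed-length coding of 12 symbols needs ceil(log2(12)) = 4 bits per symbol.
# Suppose A (p = 0.20) gets a 1-bit code and the rare K (p = 0.02) gets 6 bits.
fixed_A = 0.20 * 4       # A's contribution under fixed-length: 0.80 bits
variable_A = 0.20 * 1    # A's contribution with a short code: 0.20 bits
variable_K = 0.02 * 6    # K's contribution with a long code: 0.12 bits
# A's saving far outweighs K's penalty relative to the 4-bit scheme
net_saving = (fixed_A - variable_A) - (variable_K - 0.02 * 4)
```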

Q2: Calculate the entropy H

Entropy formula:
H = -Σ_{i=1}^{n} p_i log2(p_i)
Compute term-by-term (rounded to 6 decimal places for clarity):
 A: -0.20 log2(0.20) = 0.464386
 B: -0.15 log2(0.15) = 0.410545
 C: -0.12 log2(0.12) = 0.367067
 D: -0.10 log2(0.10) = 0.332193
 E: -0.08 log2(0.08) = 0.291508
 F: -0.06 log2(0.06) = 0.243534
 G: -0.05 log2(0.05) = 0.216096
 H: -0.05 log2(0.05) = 0.216096
 I: -0.04 log2(0.04) = 0.185754
 J: -0.03 log2(0.03) = 0.151767
 K: -0.02 log2(0.02) = 0.112877
 L: -0.10 log2(0.10) = 0.332193
Sum these contributions:
H ≈ 3.324 bits per character
Interpretation: on average, no lossless scheme can compress this source below about 3.324 bits per character; that is the theoretical lower bound (for long sequences, ignoring overheads).
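This sum is easy to double-check in plain Python:

```python
import math

probs = {'A': 0.20, 'B': 0.15, 'C': 0.12, 'D': 0.10, 'E': 0.08, 'F': 0.06,
         'G': 0.05, 'H': 0.05, 'I': 0.04, 'J': 0.03, 'K': 0.02, 'L': 0.10}

# H = -sum over symbols of p * log2(p)
H = -sum(p * math.log2(p) for p in probs.values())
print(round(H, 3))  # -> 3.324
```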

Q3: Average length of the provided Fano code

Fano code listed (symbol: code length in parentheses):


 A: 000 (3)
 B: 100 (3)
 C: 010 (3)
 D: 1100 (4)
 E: 0110 (4)
 F: 1010 (4)
 G: 001 (3)
 H: 1011 (4)
 I: 0111 (4)
 J: 1101 (4)
 K: 1111 (4)
 L: 1110 (4)
Compute the average length L = Σ_i p_i l_i:
 A: 0.20 * 3 = 0.60
 B: 0.15 * 3 = 0.45
 C: 0.12 * 3 = 0.36
 D: 0.10 * 4 = 0.40
 E: 0.08 * 4 = 0.32
 F: 0.06 * 4 = 0.24
 G: 0.05 * 3 = 0.15
 H: 0.05 * 4 = 0.20
 I: 0.04 * 4 = 0.16
 J: 0.03 * 4 = 0.12
 K: 0.02 * 4 = 0.08
 L: 0.10 * 4 = 0.40
Sum: L = 3.48 bits/symbol.

Comparison and efficiency:
 Entropy: H ≈ 3.324 bits/symbol.
 Code average length: L = 3.48 bits/symbol.
 Overhead above entropy: 3.48 - 3.324 = 0.156 bits/symbol.
 Efficiency: H / L = 3.324 / 3.48 ≈ 95.5%.
Conclusion: The Fano code is fairly efficient and reasonably close to the entropy
bound. It is 95.5% efficient, meaning there’s a small but non-negligible overhead above
the theoretical minimum; that overhead is expected because prefix codes must use
whole bits and practical constructions (Fano/Huffman) are optimal or near-optimal
given integer-length constraints.
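The Q3 figures can be reproduced the same way, together with a check that the listed code is prefix-free (required for unambiguous decoding):

```python
code = {'A': '000', 'B': '100', 'C': '010', 'D': '1100', 'E': '0110',
        'F': '1010', 'G': '001', 'H': '1011', 'I': '0111', 'J': '1101',
        'K': '1111', 'L': '1110'}
probs = {'A': 0.20, 'B': 0.15, 'C': 0.12, 'D': 0.10, 'E': 0.08, 'F': 0.06,
         'G': 0.05, 'H': 0.05, 'I': 0.04, 'J': 0.03, 'K': 0.02, 'L': 0.10}

# Prefix property: no codeword may be a prefix of another codeword
words = list(code.values())
assert not any(a != b and b.startswith(a) for a in words for b in words)

L_avg = sum(probs[s] * len(code[s]) for s in code)  # average bits per symbol
efficiency = 3.324 / L_avg                          # H / L
```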

Problem D: Word Search Puzzle (programming problem)

Problem recap: Implement create_crossword(words), which returns a 10×10 grid (a list of lists of characters) in which each input word appears as a contiguous sequence in some direction (horizontal, vertical, or diagonal). No special positioning rules are required. I reasoned it out this way:
1. Put the longest words first. Longer words are harder to place; placing them first
reduces backtracking.
2. Try every possible start cell and every direction (8 in total). A position is valid if
the word stays inside the 10×10 bounds and it only overlaps existing letters
where the letters match.
3. Use backtracking: place a word, recurse to place the next. If later we fail, undo
the placement and try different choices.
4. After all words placed, fill remaining empty cells with random letters (A–Z) to
produce a valid puzzle.
This approach is guaranteed to work for reasonable word lists in 10×10 grids or to fail
gracefully (prompting grid enlargement or word trimming).
Python implementation

import random
random.seed(0)  # optional, for reproducible puzzles

DIRS = [
    (0, 1), (0, -1), (1, 0), (-1, 0),
    (1, 1), (1, -1), (-1, 1), (-1, -1)
]

def create_crossword(words):
    n = 10
    grid = [['.' for _ in range(n)] for _ in range(n)]
    words = [w.strip().upper() for w in words if w.strip()]
    # Sort longest first
    words.sort(key=len, reverse=True)

    def fits(word, r, c, dr, dc):
        for k, ch in enumerate(word):
            rr, cc = r + dr * k, c + dc * k
            if not (0 <= rr < n and 0 <= cc < n):
                return False
            cell = grid[rr][cc]
            if cell != '.' and cell != ch:
                return False
        return True

    def place(word, r, c, dr, dc):
        placed = []
        for k, ch in enumerate(word):
            rr, cc = r + dr * k, c + dc * k
            if grid[rr][cc] == '.':
                grid[rr][cc] = ch
                placed.append((rr, cc))
        return placed

    def unplace(placed):
        for rr, cc in placed:
            grid[rr][cc] = '.'

    def backtrack(i):
        if i == len(words):
            return True
        word = words[i]
        # Randomized order of starts/directions helps avoid poor
        # deterministic failures
        starts = [(r, c) for r in range(n) for c in range(n)]
        random.shuffle(starts)
        dirs = DIRS[:]
        random.shuffle(dirs)
        for r, c in starts:
            for dr, dc in dirs:
                if fits(word, r, c, dr, dc):
                    placed = place(word, r, c, dr, dc)
                    if backtrack(i + 1):
                        return True
                    unplace(placed)
        return False

    if not backtrack(0):
        raise ValueError("Could not place all words in 10x10 grid.")

    # Fill remaining cells with random letters
    for r in range(n):
        for c in range(n):
            if grid[r][c] == '.':
                grid[r][c] = chr(ord('A') + random.randint(0, 25))

    return grid

# Example usage:
# grid = create_crossword(["learning", "science", "fun"])
# for row in grid:
#     print(''.join(row))

Notes on the implementation:

 Time complexity: worst-case exponential due to backtracking, but in practice with a 10×10 grid and modest word lists it’s fast.
 Correctness: The algorithm enforces bounds and matching overlaps, and
backtracking ensures completeness of the search among feasible placements.
 Extensibility: if a larger word list fails, increasing grid size or implementing
heuristics (smarter cell ordering, constraint propagation) will help.

Problem E: Functional Completeness of NAND (proof)

Claim: The NAND gate alone is functionally complete.

Reminder: NAND truth table

 NAND(a, b) outputs 0 only when a = 1 and b = 1. For all other input combinations
it outputs 1.
Proof by construction (build NOT, AND, OR):
 NOT using NAND: NOT x = NAND(x, x).
o Reason: NAND(x, x) is 0 only when both inputs are 1, which happens exactly when x = 1; otherwise it is 1. So the output equals NOT x.
 AND using NAND: x AND y = NOT(NAND(x, y)). Using NAND only:
o x AND y = NAND(NAND(x, y), NAND(x, y)).
 OR using NAND (via De Morgan, x OR y = NOT(NOT x AND NOT y)):
o x OR y = NAND(NAND(x, x), NAND(y, y)).

Because we can create NOT, AND, and OR with only NAND, and since {NOT, AND, OR} is
functionally complete (any Boolean expression can be formed using them), NAND
alone is functionally complete.
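These constructions are easy to verify exhaustively, since there are only four input combinations (a small Python check, using 0/1 for false/true):

```python
def nand(a, b):
    # NAND outputs 0 only when both inputs are 1
    return 0 if (a == 1 and b == 1) else 1

# NOT, AND, OR built from NAND alone
def not_(x):
    return nand(x, x)

def and_(x, y):
    return nand(nand(x, y), nand(x, y))

def or_(x, y):
    return nand(nand(x, x), nand(y, y))

# Exhaustive check over the full truth table
for x in (0, 1):
    assert not_(x) == 1 - x
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
```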
