iOS Interview Prep
Interview
Handbook
A comprehensive guide to cracking an iOS
interview, with top interview questions and
answers and a roadmap for mastering your
interview preparation.
Curated By Swiftable
Third Edition
Table of Contents
Introduction
Chapter 01: Swift Fundamentals Roadmap
Chapter 02: UIKit Fundamentals Roadmap
Chapter 03: Intermediate Roadmap
Chapter 04: Product-Based Roadmap
Chapter 05: Experience Level
Chapter 06: Class, Structure, Actors & Enumeration
Chapter 07: Properties & Initializers
Chapter 08: Functions, Methods & Closures
Chapter 09: Protocol & Delegation
Chapter 10: SOLID Principles
Chapter 11: Generics & Error Handling
Chapter 12: Memory Management
Chapter 13: Networking
Chapter 14: Combine Framework
Chapter 15: App Security
Chapter 16: UIViewController Life-Cycle
Chapter 17: App Performance
Chapter 18: Concurrency
Chapter 19: UIKit Framework
Chapter 20: SwiftUI Framework
Chapter 21: Miscellaneous
End of Content
Copyright © Swiftable, 2024
Introduction
iOS Interview Handbook (Your key to unlocking a new career)
In today's competitive job market, having access to quality questions and a well-defined
roadmap can give you a significant advantage over your peers. It equips you with the tools and
knowledge needed to stand out during the interview process, increasing your chances of
securing a good job.
Interview Questions: Dive into an extensive collection of 270+ curated iOS interview questions,
meticulously selected to cover important topics and difficulty levels.
Preparation Roadmap: Navigate your journey to interview success with our comprehensive
roadmap, meticulously crafted to provide you with a clear and structured path for interview
preparation.
Personalized Session: Gain the exclusive opportunity to discuss your doubts in a personalized
one-to-one session, where you'll receive tailored guidance, feedback, and strategies from an
experienced iOS expert.
In future updates, the goal is to transform this book into the ultimate guide for iOS developers of
all levels, from junior to senior. It will offer comprehensive guidance tailored specifically for
interview preparation.
If you have any doubts or queries, please don't hesitate to reach out to us via email:
[email protected]
struct MediaAssetStruct {
var name: String
var type: String
}
jpgMediaDuplicate.name = "Profile_123"
In the above example, jpgMedia and jpgMediaDuplicate are references to the same instance.
When you modify the name property of jpgMediaDuplicate, it also changes the name property of
the original jpgMedia instance.
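The class-based code that this explanation refers to is not fully reproduced in this excerpt. A minimal sketch of what it presumably looks like, assuming a MediaAssetClass reference type with the same properties:
class MediaAssetClass {
    var name: String
    var type: String

    init(name: String, type: String) {
        self.name = name
        self.type = type
    }
}

let jpgMedia = MediaAssetClass(name: "Image_123", type: "JPG")
let jpgMediaDuplicate = jpgMedia   // copies the reference, not the value
jpgMediaDuplicate.name = "Profile_123"
print(jpgMedia.name)               // "Profile_123" – both variables point at the same object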
When you assign an instance of a structure to a variable or pass it as an argument to a function,
you're working with a copy of the original instance. Changes made to the copy never affect the
original instance; the mutating keyword is only needed when one of the struct's own methods modifies its properties. For example:
var movMedia = MediaAssetStruct(name: "Video_123", type: "MOV")
var movMediaDuplicate = movMedia // creates a copy of the original instance
movMediaDuplicate.name = "VideoFile_123"
In the above example, movMedia and movMediaDuplicate are separate instances. When you
modify the name property of movMediaDuplicate, it does not affect the
original movMedia instance.
Inheritance
Classes support inheritance, allowing one class to inherit properties and methods from another
class. Structures, on the other hand, do not support inheritance; you cannot subclass a structure. For
example:
// Attempting to define a struct that inherits from another struct
// results in a compilation error.
struct PhotoAssetStruct: MediaAssetStruct { } // ❌ compile-time error: structs cannot inherit from other structs
If you need to achieve similar behavior to inheritance with structs, you can use protocols and
protocol extensions, but this would not be true inheritance.
Identity Checking
Classes have identity, and you can check if two references point to the same instance using the
=== operator. For example:
print(jpgMedia === jpgMediaDuplicate) // true – both variables refer to the same instance
Structures do not have identity checks like classes; you compare struct instances by comparing
their properties. For example:
The == operator is not available for a struct unless it conforms to Equatable, so we need to define
how instances of our custom struct should be compared (or let the compiler synthesize the comparison).
var movMedia = MediaAssetStruct(name: "Video_123", type: "MOV")
var movMediaDuplicate = movMedia
movMediaDuplicate.name = "VideoFile_123"
// Run the above example now and you will see the output like:
// Both struct objects do not have the same properties.
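The comparison itself is not reproduced above. A minimal sketch of what it presumably looks like, assuming MediaAssetStruct is made Equatable:
extension MediaAssetStruct: Equatable {}

if movMedia == movMediaDuplicate {
    print("Both struct objects have the same properties.")
} else {
    print("Both struct objects do not have the same properties.")
}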
By conforming to Equatable, the compiler synthesizes an == implementation that compares all stored
properties of the two instances. If you need custom comparison logic, you can implement the static == function yourself.
Immutability
Instances of classes can have mutable properties, and you can modify these properties even if
the class instance is declared as a constant (using let ).
A struct instance declared with let is fully immutable. Even when declared with var, any method that
modifies the struct's own properties must be marked with the mutating keyword. For example:
struct MediaAssetStruct: Equatable {
    var name: String
    var type: String
    // a method that modifies properties must be marked as mutating
    mutating func rename(to newName: String) { name = newName }
}
Deinitializers
In classes, deinitializers are called immediately before an instance of the class is deallocated.
Deinitializers can also access properties and other members of the class instance and can
perform any cleanup necessary for those members. For example:
class MediaAssetClass {
deinit {
print("class instance is deallocated.")
}
}
Because structs are value types and are copied when passed around, there is no concept of
deinitializing a struct instance in the way there is for classes; structs cannot declare a deinit.
In summary, you should choose between the two based on reference vs. value
semantics, inheritance, immutability, deinitialization, and so on.
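The value-type example discussed next is not included in this excerpt. A minimal sketch, assuming a modifyMedia function that takes a MediaAssetStruct:
func modifyMedia(_ media: MediaAssetStruct) {
    var modifiedMedia = media            // a copy of the argument
    modifiedMedia.name = "Renamed_123"   // only the copy changes
}

let originalMedia = MediaAssetStruct(name: "Image_123", type: "JPG")
modifyMedia(originalMedia)
print(originalMedia.name)                // "Image_123" – the original is untouched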
In the above example, when originalMedia is passed to the modifyMedia() function, a new copy
of MediaAssetStruct is created, and modifications made to modifiedMedia inside the function do
not affect the original originalMedia.
Reference Types
They are not copied when they are assigned to a variable or passed as an argument to a
function.
When you pass a reference type to a function or assign it to another variable, you're working
with the same underlying instance, and changes made to that instance are reflected across
all references to it.
All classes are reference types.
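Again, the class-based example being described is not reproduced here. A minimal sketch, assuming the same function written against MediaAssetClass:
func modifyMedia(_ media: MediaAssetClass) {
    media.name = "Renamed_123"   // mutates the shared instance
}

let originalMedia = MediaAssetClass(name: "Image_123", type: "JPG")
modifyMedia(originalMedia)
print(originalMedia.name)        // "Renamed_123" – the original changed too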
In the above example, originalMedia is passed to the modifyMedia() function, and changes made
to media inside the function affect the original instance of originalMedia. This is because classes
are reference types, and media is a reference to the same object in memory.
Q. What are the actors and how do they help write concurrent code?
Actors are similar to classes and are compatible with concurrent environments. This is possible
because Swift automatically ensures that two pieces of code are never attempting to access an
actor's data at the same time.
We use the actor keyword to define an actor, which is a concrete nominal type.
Unlike classes, actors do not support inheritance, so they have no need for convenience initializers
and are incompatible with the final and override keywords.
Similar to classes, actors are reference types.
Actors conform automatically to the Actor protocol, which no other type can use. This
allows you to write code tailored to actors only.
To eliminate the issues like data races and deadlocks, actors provide a safe concurrency model
by encapsulating state and ensuring that access to that state is serialized.
Here's how actors help write concurrent code:
They encapsulate their state, meaning that no external code can access or modify the
actor's state directly. Instead, other code communicates with the actor through
asynchronous messages.
They ensure that only one message is processed at a time. This means that access to the
actor's state is inherently serialized, eliminating the need for explicit locking mechanisms.
Communication with actors is asynchronous, meaning that you can send a message to an
actor and continue with other work without waiting for a response.
Actors provide a safe and efficient way to manage shared mutable state in multi-threaded
applications. Actors provide a clear separation of concerns between threads and help to avoid
many of the pitfalls associated with traditional concurrency mechanisms. For example:
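The Account actor used in the example below is not shown in this excerpt. A minimal sketch, assuming deposit and withdraw methods that print the new balance:
actor Account {
    private var balance: Double = 0

    func deposit(amount: Double) {
        balance += amount
        print("Deposited \(amount). New balance: \(balance)")
    }

    func withdraw(amount: Double) {
        balance -= amount
        print("Withdrawn \(amount). New balance: \(balance)")
    }
}

let account = Account()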
Task {
await account.deposit(amount: 100.0)
}
Task {
await account.withdraw(amount: 50.0)
}
// Print:
// Deposited 100.0. New balance: 100.0
// Withdrawn 50.0. New balance: 50.0
In the above example, the Account actor ensures that deposit and withdrawal operations are
executed safely, preventing potential conflicts or inconsistencies in the balance.
Q. How do actors help in preventing data races and ensuring thread safety?
Actors protect their internal state through data isolation, ensuring that only a single task has
access to the underlying data structure at a given time. All actors implicitly conform to the
Actor protocol, which no other concrete type can adopt. Actors solve the data race problem
by introducing actor isolation. They help prevent data races and ensure thread safety through
a combination of mechanisms and constraints:
Exclusive Access to State
Only one task can access an actor's mutable state at a time. This ensures that there are no
concurrent modifications to the shared data, eliminating the possibility of data races.
Isolated Execution
Actors encapsulate their state and behavior, ensuring that the internal state is accessed and
modified only through defined methods. This isolation prevents external code from directly
accessing or modifying the actor's state, maintaining consistency and integrity.
Asynchronous Messaging
Actors communicate with each other asynchronously through message passing. When one actor
wants to access or modify another actor's state, it sends a message and waits for a response.
This asynchronous communication eliminates the need for locks or manual synchronization,
reducing the complexity that comes with traditional concurrent programming.
Structured Concurrency
Swift's structured concurrency model ensures that tasks associated with actors are well-defined
and managed. Tasks are structured in a way that makes it easier to reason about their execution
order and dependencies, reducing the likelihood of race conditions or deadlocks.
Error Handling
Actors have built-in error handling mechanisms that allow for graceful recovery from failures or
unexpected conditions. This ensures that the system remains stable and responsive even when
faced with exceptions or errors during concurrent execution.
By combining these features, actors provide a safer and more intuitive way to handle concurrent
programming, reducing the complexity that comes with traditional thread-based approaches while
ensuring data integrity and consistency.
Q. How does memory management work for classes and structs? How can
you optimize memory while using them?
Swift uses the Automatic Reference Counting (ARC) technique to keep track of how many
references or pointers exist to a certain instance of a class. ARC automatically frees up the
memory used by an instance when there are no more references to it, preventing memory leaks
and wasted resources.
Consider these things to optimize memory for classes:
Use value types if possible: If your data structure doesn't require reference semantics or
inheritance, consider using structs instead of classes. Structs are stack-allocated and don't incur
the overhead of reference counting.
Take care of retain cycles: Be mindful of strong reference cycles (retain cycles) that can prevent
objects from being deallocated, leading to memory leaks. Use weak or unowned references, or
break strong reference cycles.
Use lazy initialization: Use lazy initialization for properties that are computationally expensive or
not always needed immediately after object creation. This ensures that resources are allocated
only when required, thus conserving memory.
Use weak references in capture lists: When capturing self in closures, especially in long-lived
closures like completion handlers, use weak or unowned references to prevent strong reference
cycles. This allows the object to be deallocated when it's no longer needed.
Object pooling: Implement object pooling for frequently used objects that are expensive to
create and destroy. Reusing objects from a pool can reduce memory fragmentation and overhead
associated with object creation.
Consider these things to optimize memory for structs:
Immutable data: Prefer immutability for struct properties whenever possible. Immutable data
allows for safer concurrency and enables more aggressive compiler optimizations, potentially
reducing memory usage.
Avoid excessive nesting: Avoid deeply nested structs, especially if they contain large amounts
of data. Deeply nested structs can increase memory usage and hinder performance due to
frequent copying.
Use lazy initialization: Just like with classes, employ lazy initialization for properties in structs
when appropriate. This defers property initialization until the first access, which can save
memory if the property is rarely accessed.
Use Copy-On-Write (CoW): Implement copy-on-write semantics for structs containing large or
mutable data. This optimization ensures that data is shared until it's modified, minimizing
unnecessary copying and conserving memory.
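To illustrate the last point, here is a minimal copy-on-write sketch (not from the book, names assumed) that uses isKnownUniquelyReferenced to share a class-backed buffer until a mutation occurs:
final class Storage {
    var data: [Int]
    init(data: [Int]) { self.data = data }
}

struct CoWBuffer {
    private var storage: Storage
    init(data: [Int]) { storage = Storage(data: data) }

    var data: [Int] { storage.data }

    mutating func append(_ value: Int) {
        // copy the shared storage only when it is not uniquely referenced
        if !isKnownUniquelyReferenced(&storage) {
            storage = Storage(data: storage.data)
        }
        storage.data.append(value)
    }
}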
class MediaAssetClass {
    var name: String

    init(name: String) {
        self.name = name
    }

    func displayInfo() {
        print("MediaAssetClass's Name: \(name)")
    }
}
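The Movie subclass discussed below is not reproduced in this excerpt. A minimal sketch, assuming a duration property and an overridden displayInfo():
class Movie: MediaAssetClass {
    var duration: TimeInterval

    init(name: String, duration: TimeInterval) {
        self.duration = duration
        super.init(name: name)
    }

    override func displayInfo() {
        print("Movie's Name: \(name), Duration: \(duration) seconds")
    }
}

let movie: MediaAssetClass = Movie(name: "Trailer", duration: 120)
movie.displayInfo()   // Movie's implementation is chosen at run-time (dynamic dispatch)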
When we call the displayInfo() method on the movie object, it prints out the details of the movie,
including its name and duration. Since displayInfo() is overridden in the Movie subclass, it prints
out the details with the duration included that is decided on run-time by dynamic dispatch.
Static Dispatch:
In static dispatch, also known as compile-time dispatch, the compiler determines which method
or function implementation to call based on the declared type of the variable or constant at
compile-time. This type of dispatch is used for value types such as structures and enums, where
the method implementation is known at compile-time.
struct MediaAssetStruct {
var name: String
func displayInfo() {
print("Name: \(name)")
}
}
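A usage line matching the explanation below (assumed, since it is not shown in the excerpt):
let mediaAsset = MediaAssetStruct(name: "Image_123")
mediaAsset.displayInfo()   // resolved at compile-time (static dispatch)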
The method dispatch for displayInfo() is static, meaning the method to be called is determined at
compile-time based on the type of the variable (mediaAsset), and there's no concept of
inheritance involved.
Q. Differentiate between raw values and associated values in an enum.
In Swift, enums allow you to define a group of related values. They can have associated values
and raw values, which serve different purposes. Let’s understand them.
Raw values:
These are predefined default values, one per case, that are all of the same type and must be unique
within the enum. Raw values are useful when you want to represent a set of related values with a
simple underlying type, such as an integer or a string. For example:
enum Weekday: Int {
case sunday = 1, monday, tuesday, wednesday, thursday, friday, saturday
}
Associated values:
These values allow you to store extra information for each case of an enum. This additional data
is provided when you create an instance of the enum and can differ for each case, making
associated values a powerful tool for representing complex data sets. For example:
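The measurement example referred to below is not included in this excerpt. A minimal sketch, assuming a Measurement enum with associated values:
enum Measurement {
    case weight(kilograms: Double)
    case distance(meters: Double)
    case temperature(celsius: Double)
}

let delivery = Measurement.weight(kilograms: 2.5)

switch delivery {
case .weight(let kilograms):
    print("Weight: \(kilograms) kg")
case .distance(let meters):
    print("Distance: \(meters) m")
case .temperature(let celsius):
    print("Temperature: \(celsius) °C")
}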
Using associated values, you can easily access and manipulate the specific data associated with
each measurement.
So, raw values are predefined and shared among all instances of the enum, whereas associated
values are dynamic and specific to each instance. They serve different purposes and are used
according to the requirements.
enum NetworkError {
    case noConnection
    case serverError(statusCode: Int)
    case parsingError(description: String)

    // custom initializer that maps a response code onto an error case
    init(responseCode: Int) {
        if responseCode == 0 {
            self = .noConnection
        } else if responseCode >= 500 {
            self = .serverError(statusCode: responseCode)
        } else {
            self = .parsingError(description: "Failed to parse response")
        }
    }
}
We have defined an enum called NetworkError which represents various networking errors. Each
case of the enum has associated values. We've also added a custom initializer
init(responseCode:) that takes a response code as a parameter.
// assuming an error value created via the custom initializer, for example:
let error = NetworkError(responseCode: 503)

switch error {
case .noConnection:
    print("No internet connection.")
case .serverError(let statusCode):
    print("Server error with status code: \(statusCode).")
case .parsingError(let description):
    print("Parsing error: \(description)")
}
This custom initializer simplifies the process of creating instances of the NetworkError enum by
allowing you to pass the relevant information directly to the initializer, making your code cleaner
and more expressive.
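The CaseIterable example referred to next is not shown in this excerpt. A minimal sketch of a simplified variant (automatic CaseIterable synthesis requires cases without associated values):
enum NetworkError: CaseIterable {
    case noConnection, timeout, invalidResponse
}

print(NetworkError.allCases.count)   // 3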
In the above example, NetworkError is defined as an enum that conforms to CaseIterable. This
means that you can access an array of all cases using the allCases property.
We define an enum FileSystemItem with two cases: file and folder . The file case represents
a file with a name, and the folder case represents a folder with a name and an array of children.
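The definitions just described are not reproduced in this excerpt. A minimal sketch of FileSystemItem and an enumerate helper that would produce the output shown below:
indirect enum FileSystemItem {
    case file(name: String)
    case folder(name: String, children: [FileSystemItem])
}

func enumerateFileSystemItem(_ item: FileSystemItem) {
    switch item {
    case .file(let name):
        print(name)
    case .folder(let name, let children):
        print(name)
        children.forEach(enumerateFileSystemItem)
    }
}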
let rootFolder: FileSystemItem = .folder(name: "Root", children: [
.folder(name: "Folder1", children: [
.file(name: "File1.txt"),
.folder(name: "Subfolder", children: [
.file(name: "File2.txt")
])
]),
.folder(name: "Folder2", children: [
.file(name: "File3.txt")
]),
.file(name: "File4.txt")
])
enumerateFileSystemItem(rootFolder)
// Print:
// Root
// Folder1
// File1.txt
// Subfolder
// File2.txt
// Folder2
// File3.txt
// File4.txt
Enums help improve code clarity, type safety, and maintainability. They make your code more
expressive and less error-prone, especially when dealing with a finite set of related values or
states.
Q. Explain the role of the indirect keyword in enums, and where such enums are
stored.
The indirect keyword is used when defining recursive enums. Recursive enums are enums
that have associated values of the same type as the enum itself. This means that the enum can
contain instances of itself, either directly or indirectly through associated values. For example:
indirect enum BinaryTree {
case leaf(Int)
case node(BinaryTree, BinaryTree)
}
In this example, we have defined a binary tree using a recursive enum. Each node in the binary
tree can either be a leaf with an integer value or a node containing two subtrees. We're creating a
binary tree with a root node, two leaf nodes, and a subtree under the right child of the root node.
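The construction code described above is not shown. One plausible sketch matching that description:
let tree: BinaryTree = .node(
    .leaf(1),
    .node(.leaf(2), .leaf(3))   // subtree under the right child of the root
)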
When a value of an indirect enum is created, it is stored on the heap rather than the stack
because the size of the enum can vary, and it may contain references to other objects.
struct MediaAssetStruct {
var name: String
}
As you can see, when we assign one instance to another, the underlying storage is initially shared
(only the reference to it is copied); a new copy is generated only when the data is subsequently modified.
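The demonstration this refers to is not reproduced. A minimal sketch using an array, which implements copy-on-write, and its buffer address (values assumed for illustration):
let original = [1, 2, 3]
var duplicate = original

// both values share the same buffer until one of them is mutated
original.withUnsafeBufferPointer { print("original buffer:", $0.baseAddress!) }
duplicate.withUnsafeBufferPointer { print("duplicate buffer:", $0.baseAddress!) }   // same address

duplicate.append(4)   // the write triggers a copy
duplicate.withUnsafeBufferPointer { print("duplicate buffer:", $0.baseAddress!) }   // new address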
Performance Overhead:
While copy-on-write optimizes memory usage by avoiding unnecessary copies, it can introduce
performance overhead in certain scenarios, particularly when:
Frequent modifications: If a value type is frequently copied and modified, the overhead of
checking and potentially duplicating data can impact performance.
Large data structures: Copying large data structures can be costly in terms of memory and CPU
time, especially if most copies eventually lead to writes.
Multithreaded access: In concurrent programming, copy-on-write introduces synchronization
overhead to ensure thread safety when modifying shared data.
Practical Considerations:
Use structs wisely: Use structs for small, simple data types where copy-on-write overhead is
negligible or beneficial.
Beware of large data: If dealing with large data structures, consider using classes or optimizing
your algorithms to minimize unnecessary copying.
Profile performance: Profile your code to identify performance bottlenecks related to copy-on-
write and optimize accordingly. Techniques like lazy loading or caching can help mitigate
overhead.
Thread safety: Be cautious when using copy-on-write in multithreaded environments to avoid
race conditions and ensure data consistency.
In Swift, the copy-on-write optimization is built into standard collections such as arrays and
dictionaries because they are used so widely. These types apply it implicitly; custom value types do not get this behavior automatically.
Understanding copy-on-write is important to write efficient and performant code, especially
when dealing with value types. By leveraging its benefits while mitigating potential overhead, you
can write code that is both elegant and efficient.
Q. Explain the differences between deep copying and shallow copying, and
how they apply to classes, structs, and enums.
Deep copying and shallow copying are two common techniques used to duplicate objects, but
they differ in how they handle the copying process and the resulting copied objects.
Deep Copying:
Deep copying creates a new copy of an object along with all the objects contained within it,
recursively. This means that if the original object contains references to other objects, the copied
object will have duplicates of those referenced objects as well.
Deep copies duplicate everything.
They are less prone to race-condition issues, since the copies are independent and behave well in a multithreaded environment.
Deep copying is the default behavior for value types.
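The value-type example this refers to is not included here. A minimal sketch, assuming a MediaAssetStruct value:
let originalCopy = MediaAssetStruct(name: "Image_123", type: "JPG")
var deepCopy = originalCopy      // an independent copy of the value

deepCopy.name = "Image_456"
print(originalCopy.name)         // "Image_123" – unaffected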
In this example, deepCopy is a deep copy of originalCopy . Any changes made to deepCopy
will not affect originalCopy , and vice versa.
Shallow Copying:
Shallow copying creates a new object but retains references to the same objects contained
within the original object. This means that if the original object contains references to other
objects, the copied object will also have references to those same objects.
Race conditions may occur, since the copies share references in a multithreaded
environment.
Shallow copying is the default behavior when assigning reference types.
class MediaAssetClass {
var name: String
init(name: String) {
self.name = name
}
}
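The assignment itself is not shown in the excerpt. A minimal sketch:
let originalCopy = MediaAssetClass(name: "Image_123")
let shallowCopy = originalCopy   // copies the reference, not the object

shallowCopy.name = "Image_456"
print(originalCopy.name)         // "Image_456" – both point to the same instance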
In this example, modifying shallowCopy also affects originalCopy because they share the
same underlying data due to shallow copying.
Differences between Deep and Shallow Copying:
Memory Allocation: Shallow copying just copies references, while deep copying creates
new memory allocations.
The zone parameter is an optional NSZone object representing a memory zone, which is
typically ignored in modern usage. For example:
class Metadata: NSObject, NSCopying {   // NSObject conformance assumed, not shown in the excerpt
    var info: String
    init(info: String) { self.info = info; super.init() }
    // deep copy: returns a brand-new Metadata instance
    func copy(with zone: NSZone? = nil) -> Any { Metadata(info: info) }
}
In the above example, the MediaAssetClass and Metadata classes conform to the NSCopying
protocol. The copy(with:) method is implemented in each class to perform deep copying: it
recursively creates new instances of the Metadata objects. When copying MediaAssetClass, it
ensures that a new instance of Metadata is created as well, preventing changes in one from
affecting the other.
When you perform deep copying, the copy() method is invoked on originalAsset . Since
originalAsset conforms to the NSCopying protocol, this internally calls the copy(with:) method
implemented in MediaAssetClass, which performs a deep copy of originalAsset .
This example shows how deep copying ensures that changes made to the copied object
do not affect the original object.
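A minimal sketch of the MediaAssetClass side of this example and its usage; since the original code is not fully reproduced here, the property names and sample values are assumptions:
import Foundation

final class MediaAssetClass: NSObject, NSCopying {
    var name: String
    var metadata: Metadata

    init(name: String, metadata: Metadata) {
        self.name = name
        self.metadata = metadata
        super.init()
    }

    func copy(with zone: NSZone? = nil) -> Any {
        // deep copy: the nested Metadata object is copied as well
        MediaAssetClass(name: name, metadata: metadata.copy() as! Metadata)
    }
}

let originalAsset = MediaAssetClass(name: "Image_123", metadata: Metadata(info: "JPG"))
let copiedAsset = originalAsset.copy() as! MediaAssetClass

copiedAsset.metadata.info = "PNG"
print(originalAsset.metadata.info)   // "JPG" – the original's metadata is unaffected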
Type properties are shared among all instances of the type and can be accessed directly on the
type itself without needing an instance. For example:
// type properties in value type
struct MediaAssetStruct {
static var maxFileSizeInMB = 100
static var supportedFormats = ["mp4", "mov", "avi"]
}
They are accessed and modified using the type’s name and provide a way to encapsulate global
constants or values that are specific to a particular type.
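For instance (usage assumed, not shown in the excerpt):
print(MediaAssetStruct.maxFileSizeInMB)          // 100
MediaAssetStruct.maxFileSizeInMB = 200           // modified on the type itself
MediaAssetStruct.supportedFormats.append("mkv")  // no instance required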
Swift manages the memory lifecycle of type properties in a way that ensures they are initialized
before they are accessed and deallocated when they are no longer needed. The initialization and
deallocation of type properties follow similar rules to instance properties but with some
differences:
Initialization
Stored type properties, for both value types and classes, are lazily initialized the first time they are
accessed, and Swift guarantees this happens exactly once even when multiple threads access them
simultaneously. This means type properties are ready for use as soon as their type is first used,
without requiring any instance to exist.
Deallocation
Once initialized, type properties live for the lifetime of the program. They are shared among all
instances of the type and are released only when the program exits. Swift handles this automatically
as part of its memory management.
In short, Swift manages type-property memory by initializing them lazily on first access, keeping
them alive for the rest of the program, and guaranteeing thread safety during initialization.
Computed Properties:
They do not store values directly but provide a getter and an optional setter to compute (or
calculate) the value dynamically.
They are declared with a type, but they do not store any value themselves. Instead, they
provide a mechanism to retrieve and set values based on computations.
They are always declared with var , as they are inherently variable.
Usage:
Stored properties are suitable for storing and accessing values that are directly associated
with instances of a type.
Computed properties are useful when you want to perform some computation or validation
before returning a value, or when you want to provide a different interface for accessing the
property.
Computed properties can be used to provide read-only access to a property whose value is
derived from other properties or data.
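A brief illustrative sketch (not from the book) contrasting the two kinds of property:
struct MediaAssetStruct {
    var name: String                // stored property
    var fileSizeInBytes: Int        // stored property

    var fileSizeInMB: Double {      // read-only computed property
        Double(fileSizeInBytes) / 1_048_576
    }
}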
Q. What is lazy initialization and discuss the pros and cons of using it?
Lazy initialization is used to defer the initialization of a property until it is accessed for the first
time. Lazy initialization is achieved by declaring a property with the lazy keyword. When a
property is marked as lazy, its initialization is postponed until the first time it is accessed, and
after that, its value is cached for future accesses. For example:
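The lazy example itself is not included in this excerpt; a minimal sketch, with the type and property names assumed:
class MediaAssetManager {
    lazy var cachedAssets: [String] = {
        print("Loading assets…")     // runs only once, on first access
        return ["Image_123", "Video_456"]
    }()
}

let manager = MediaAssetManager()
// nothing has been printed yet
print(manager.cachedAssets.count)    // "Loading assets…" then 2
print(manager.cachedAssets.count)    // 2 – cached, the closure does not run again

The excerpt then jumps to a property-observer example whose definition is not shown. A minimal sketch, assuming name and size properties with willSet/didSet observers:
class MediaAsset {
    var name: String = "Image_123" {
        willSet { print("About to change name to \(newValue)") }
        didSet { print("Changed name from \(oldValue) to \(name)") }
    }

    var size: Int = 0 {
        didSet {
            if size > 100 { print("Warning: File size is large!") }
        }
    }
}

let mediaAsset = MediaAsset()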
mediaAsset.size = 120
// Print: Warning: File size is large!
In the example, property observers are used to print messages before and after changing the
property name , and to print a warning message if the size exceeds a certain threshold. These
observers help in maintaining the integrity of the properties and executing additional logic when
they are modified.
@propertyWrapper
struct Capitalized {
    var wrappedValue: String {
        didSet { wrappedValue = wrappedValue.capitalized }
    }
    init(wrappedValue: String) { self.wrappedValue = wrappedValue.capitalized }
}
In the above example, we've defined a property wrapper Capitalized that ensures any
assigned value is capitalized. The wrappedValue property is where the actual value is stored
and manipulated. We then use the @Capitalized property wrapper on the firstName and
lastName properties of the User struct.
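A sketch of the usage just described, assuming a simple User struct:
struct User {
    @Capitalized var firstName: String
    @Capitalized var lastName: String
}

let user = User(firstName: "john", lastName: "appleseed")
print(user.firstName, user.lastName)   // John Appleseed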
Q. What are the designated initializers? Can a class or struct have multiple
designated initializers?
Designated initializers are the primary initializers for a class or struct. They are responsible for
initializing all properties introduced by that class or struct and ensuring that the instance is fully
initialized before it's used.
A designated initializer is marked with the init keyword, and it must initialize all properties
introduced by that class or struct, either by assigning initial values directly or by calling other
initializers.
A class or struct can have multiple designated initializers, each of which initializes a subset of
properties or provides different initialization paths. These multiple designated initializers can
have distinct parameter lists and initialization logic, but they all must ensure that all properties are
initialized before the instance is considered fully initialized. For example:
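The struct itself is not reproduced in the excerpt. A minimal sketch matching the description that follows:
struct MediaAssetStruct {
    var name: String
    var type: String

    init(name: String, type: String) {
        self.name = name
        self.type = type
    }

    init(name: String) {
        self.name = name
        self.type = "Unknown"
    }
}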
In this example, MediaAssetStruct has two designated initializers. The first one initializes both
name and type , while the second one initializes only name , setting type to a default value of
"Unknown".
class MediaAssetClass {
    var name: String
    var type: String
    init(name: String, type: String) { self.name = name; self.type = type }
    init(name: String) { self.name = name; self.type = "Unknown" }
}
Similarly, MediaAssetClass also has two designated initializers. The first one initializes both
name and type , while the second one initializes only name , setting type to a default value of
"Unknown".
If you don't assign all properties in its initializer, you'll get a compiler error: Swift requires that
every stored property has a value before the initializer finishes executing. This ensures that an
instance of the struct is always in a valid state.
struct MediaAssetStruct {
    var name: String
    var type: String
    // ❌ error: return from initializer without initializing all stored properties
    init(name: String) { self.name = name }
}
Since type is not assigned in the initializer, the struct instance would be in an invalid state if it
were allowed to be created.
class MediaAssetClass {
    var title: String
    var fileSize: Int

    // designated initializer
    init(title: String, fileSize: Int) {
        self.title = title
        self.fileSize = fileSize
    }

    // convenience initializer
    convenience init(title: String) {
        // calls the designated initializer with a default fileSize
        self.init(title: title, fileSize: 0)
    }
}
Q. Explain the concept of initializer delegation. What are the rules Swift
applies for delegation calls between initializers?
Initializer delegation is the concept of one initializer in a class or struct calling another initializer in
the same class or struct to perform part of its initialization. This allows for code reuse and
ensures that all properties are properly initialized, regardless of which initializer is used to create
an instance.
Initializer delegation follows a set of rules to ensure that initialization proceeds in a safe and
consistent manner:
Designated Initializer Must Initialize All Properties
The designated initializer of a class or struct is responsible for initializing all properties introduced
by that class or struct. It must ensure that all properties have valid initial values before the
instance is considered fully initialized.
Convenience Initializers Must Call a Designated Initializer
Convenience initializers must call another initializer in the same class or struct before they can
assign a value to any property. This ensures that all properties are initialized properly according to
the rules defined by the designated initializer.
Initializer Delegation Chain
Initializer delegation can form a chain, where one initializer calls another initializer, which in turn
calls another, and so on, until eventually, a designated initializer is called. Each step in the chain
must obey the rules of initializer delegation.
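A minimal sketch (not from the book) of such a delegation chain, with the initializers assumed:
class MediaAssetClass {
    var name: String
    var type: String

    // designated initializer – initializes every stored property
    init(name: String, type: String) {
        self.name = name
        self.type = type
    }

    // convenience initializer delegating to another convenience initializer…
    convenience init() {
        self.init(name: "Untitled")
    }

    // …which in turn delegates to the designated initializer
    convenience init(name: String) {
        self.init(name: name, type: "Unknown")
    }
}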
struct MediaAssetStruct {
    var name: String
    var type: String

    // failable initializer
    init?(name: String, type: String) {
        // return nil when the input data is invalid
        guard !name.isEmpty, !type.isEmpty else { return nil }
        self.name = name
        self.type = type
    }
}
Inside the initializer, it checks if either the name or type is empty. If either condition is true,
indicating invalid input data, the initializer returns nil , signifying the failure of initialization.
Otherwise, it initializes the MediaAssetStruct instance with the provided values and returns it.
Failable initializers provide flexibility and safety in initializing instances by allowing you to handle
potential initialization failures in a controlled manner. They are particularly useful when working
with external data sources, user input, or other unpredictable conditions where initialization may
not always succeed.
class MediaAssetClass {
    var name: String

    // required initializer – every subclass must provide it
    required init(name: String) {
        self.name = name
    }
}

// hypothetical subclass name, for illustration
class VideoAssetClass: MediaAssetClass {
    // the 'required' modifier must be repeated (no 'override' keyword needed)
    required init(name: String) {
        super.init(name: name)
    }
}
The compiler error message typically indicates that the subclass does not conform to the
requirement of providing an implementation for the required initializer. This error prevents the
program from compiling until the subclass implements the required initializer.
Q. How do you add computed properties using extensions? Can you give
an example?
You can add computed properties to a type (class, struct, or enum) using extensions. Extensions
allow you to add new functionality to existing types, including computed properties, without
modifying their original implementation.
extension MediaAssetClass {
var durationInMinutes: Double {
return duration / 60.0
}
}
Extensions are a powerful feature that allow you to enhance existing types with new functionality,
including computed properties, methods, initializers, and more, without modifying their original
implementation. This promotes code organization, modularity, and reusability.
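The map example that produces the output below is not reproduced in the excerpt. A minimal sketch, assuming durations expressed in seconds:
let durations = [120, 180, 60, 240]                 // seconds
let durationsInMinutes = durations.map { $0 / 60 }
print(durationsInMinutes)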
// Print: [2, 3, 1, 4]
In the above example, map iterates over each element of the durations array. For each
element (denoted by $0 ), it applies the closure { $0 / 60 } , which divides each duration by
60 to convert it from seconds to minutes. The result is a new array durationsInMinutes
containing the transformed values.
This is a simple example, but the map function can be used with more complex transformations
and on different types of collections, providing a powerful tool for data manipulation.
Q. Write a custom higher order function wrt. a function that takes a closure
and an array of integers and returns the sum of squares of those integers.
Here's a custom higher-order function that takes a closure and an array of integers, and returns
the sum of squares of those integers:
func sumOfSquares(_ numbers: [Int], handler: (Int) -> Int) -> Int {
var sum = 0
for number in numbers {
sum += handler(number)
}
return sum
}
The sumOfSquares function takes an array of integers ( numbers ) and a closure ( handler ) that
takes an integer and returns an integer. Inside the function, it iterates over each number in the
array, applies the closure to it, and accumulates the results into sum.
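A usage call is not included in the excerpt; assuming a squaring closure is passed in:
let total = sumOfSquares([1, 2, 3, 4]) { $0 * $0 }
print(total)   // 30

The excerpt then moves on to non-escaping closures. The execute function called below is not shown; a minimal sketch consistent with the printed output:
func execute(_ closure: () -> Void) {
    print("Executing non-escaping closure")
    closure()
    print("Finished executing non-escaping closure")
}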
execute {
print("This is a non-escaping closure")
}
// Print:
// Executing non-escaping closure
// This is a non-escaping closure
// Finished executing non-escaping closure
In this example, the closure is called synchronously within the execute function.
Escaping Closures
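The addEscapingClosureToQueue function, and whatever later runs the stored closures, is not shown in this excerpt. A minimal sketch consistent with the output below, with the names and print statements assumed:
var queuedClosures: [() -> Void] = []

func addEscapingClosureToQueue(_ closure: @escaping () -> Void) {
    print("Adding escaping closure to queue")
    // the closure is stored and outlives this call, so it must be @escaping
    queuedClosures.append(closure)
}

// called at some later point to run everything that was queued up
func runQueuedClosures() {
    print("Before closure execution")
    queuedClosures.forEach { $0() }
    print("After closure execution")
}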
addEscapingClosureToQueue {
print("This is an escaping closure - 1")
}
addEscapingClosureToQueue {
print("This is an escaping closure - 2")
}
// Print:
// Adding escaping closure to queue
// Before closure execution
// This is an escaping closure - 1
// This is an escaping closure - 2
// After closure execution
The output shows that the closures added to the array retain their intended order of
execution, reflecting the order in which they were added.
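The fetchData function used below is not reproduced in this excerpt. A minimal sketch, assuming it simulates a request on a background queue:
import Foundation

func fetchData(completion: @escaping (Result<String, Error>) -> Void) {
    DispatchQueue.global().async {
        // simulate a network request finishing on a background queue
        completion(.success("Data from server"))
    }
}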
fetchData { result in
switch result {
case .success(let data):
print("Received data:", data)
case .failure(let error):
print("Error:", error)
}
}
print("Fetching data...")
// Print:
// Fetching data...
// Received data: Data from server
In this example:
The fetchData function simulates a network request that is executed asynchronously on a
background queue.
The completion handler is an escaping closure that is passed to the fetchData function.
It's marked with @escaping because it's stored for later execution when the network
request completes.
After calling fetchData , the program continues to execute immediately without waiting for
the network request to complete.
Once the network request finishes, the completion handler is called with the result, and the
appropriate action is taken based on the success or failure of the request.
Q. How can you prevent retain cycles when using escaping closures?
Retain cycles can occur when closures capture references to objects strongly, creating a situation
where objects reference each other, preventing them from being deallocated even when they're
no longer needed. This can lead to memory leaks.
To prevent retain cycles when using escaping closures, you can use either a capture list with a
weak reference ( [weak self] ) or an unowned reference ( [unowned self] ) inside the closure.
Both methods ensure that the closure does not create a strong reference cycle with the captured
instance.
When you use a weak reference in the closure capture list ( [weak self] ), the reference to the
captured instance will be automatically set to nil if the instance is deallocated. This means you
need to handle the possibility that the weak reference might be nil when accessed inside the
closure.
Suppose you want to create a Timer wrapper class that allows you to schedule a repeating timer
while avoiding retain cycles. We’ll use escaping closures to handle the timer’s callback. To
prevent a retain cycle, we’ll use a weak reference in the closure capture list. For example:
class TimerWrapper {
    private var timer: Timer?
    func startTimer(interval: TimeInterval, handler: @escaping () -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { _ in handler() }
    }
    func stopTimer() {
        timer?.invalidate()
        timer = nil
    }
}
This class ( RepeatedTask ) represents a task that repeats at a certain interval. The
startRepeatingTask() method initializes a TimerWrapper instance and starts the timer to
execute a repeating task every 5 seconds. The removeTimeWrapper() method stops and
deallocates the TimerWrapper instance. For example:
class RepeatedTask {
var timerWrapper: TimerWrapper?
func startRepeatingTask() {
timerWrapper = TimerWrapper()
timerWrapper?.startTimer(interval: 5.0) { [weak self] in
// this closure captures 'self' weakly to avoid retain cycle
guard let self = self else {
print("self (RepeatedTask) does not exists.")
return
}
print("Timer fired. Performing the repeating task.")
}
}
func removeTimeWrapper() {
timerWrapper?.stopTimer()
timerWrapper = nil
}
}
// Print:
// deallocating task object...
// self (TimerWrapper) does not exists.
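The trailing-closure example discussed next is not reproduced. A minimal sketch:
let numbers = [1, 2, 3, 4]
let squares = numbers.map { $0 * $0 }
print(squares)   // [1, 4, 9, 16]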
In this example, the closure { $0 * $0 } is provided as a trailing closure to the map function,
making it clear that it's transforming each element of the array by squaring it. This enhances the
readability and maintainability of the code.
Placing the closure outside the function call can make the code more readable, especially for
longer closures. It separates the closure's implementation from the function call, making it easier
to distinguish between the two.
Trailing closure syntax is particularly useful in APIs where the closure serves as a completion
handler or a callback, as it allows the code to read more fluently.
func loadMediaAsset(withID id: String, completion: @escaping (MediaAsset) -> Void) {
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        // assume we fetched the media asset with the given ID
        let mediaAsset = MediaAsset(id: id, url: "https://example.com/\(id)")
        completion(mediaAsset)
    }
}
In this example, the closure { asset in ... } is provided as a trailing closure to the
loadMediaAsset function. This makes the code more readable, especially when dealing with
asynchronous operations and completion handlers.
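A call site matching that description (assumed, since it is not shown in the excerpt):
loadMediaAsset(withID: "asset_42") { asset in
    print("Loaded media asset from \(asset.url)")
}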
let number = 10
let conditions: [(Int) -> Bool] = [
{ $0 > 0 }, // Condition 1: Greater than 0
{ $0 % 2 == 0 } // Condition 2: Even number
]
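The generic function described below is not included in the excerpt. A minimal sketch:
func satisfyAllConditions<T>(_ value: T, conditions: [(T) -> Bool]) -> Bool {
    for condition in conditions where !condition(value) {
        return false
    }
    return true
}

print(satisfyAllConditions(number, conditions: conditions))   // true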
In the above example, we define a generic function satisfyAllConditions that takes a value of type T
and an array of closure conditions as parameters. It iterates through each condition in the array
and checks whether the value satisfies all of them by applying each condition to the value.
A number 10 is defined, and an array of conditions for integers is created. These conditions are
expressed as closures: one checks if the number is greater than 0, and the other checks if the
number is even.
The equality comparison is often considered a type-level ( Task in this case) operation rather
than an instance-level operation. This means that it makes sense for the equality operator to be
associated with the type itself rather than with individual instances of the type.
By defining it as a static method, you indicate that it’s a function of the type ( Task in this case) ,
not of any specific instance.
Inside the == method, you define the logic for comparing the two instances ( lhs and rhs ).
Typically, you compare the properties of the instances that determine their equality.
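A minimal sketch of the kind of Task type being described (the actual definition is not included in this excerpt):
struct Task: Equatable {
    let id: Int
    let title: String

    static func == (lhs: Task, rhs: Task) -> Bool {
        lhs.id == rhs.id && lhs.title == rhs.title
    }
}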
Q. What are autoclosures and how do they differ from regular closures?
Autoclosures are a special type of closure that automatically wraps an expression into a closure
without needing to write explicit closure syntax. They are often used as a way to delay evaluation
of an expression until it's needed.
Autoclosures are particularly useful when you want to pass a simple expression as a parameter to
a function that expects a closure.
Autoclosure:
func evaluate(condition: @autoclosure () -> Bool) {
if condition() {
print("Condition is true")
} else {
print("Condition is false")
}
}
evaluate(condition: 2 > 1)
// Print: Condition is true
Regular Closure:
func performOperation(closure: () -> Void) {
print("Performing operation...")
closure()
}
performOperation {
print("Operation executed")
}
// Autoclosure
func autoclosureExample(_ closure: @autoclosure () -> Int) {
print("Before evaluating closure")
let result = closure()
print("After evaluating closure, result is \(result)")
}
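A usage call (assumed) showing that the expression 5 + 3 is wrapped into a closure automatically:
autoclosureExample(5 + 3)
// Print:
// Before evaluating closure
// After evaluating closure, result is 8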
Autoclosures are commonly used in scenarios like lazy initialization or control flow
statements like if and guard where you want to delay execution of certain expressions.
Named Closures
Named closures have an explicit name assigned to them. They are defined using the func
keyword and can be referenced by their name throughout the codebase.
They have a signature similar to regular functions, including a name, parameters, and a body
enclosed within curly braces { } .
They are useful when you need to reuse the same block of code multiple times, provide clarity
and readability to the code, or when defining complex functionality that requires separate
declaration.
func square(_ x: Int) -> Int { // Named closure
return x * x
}
Q. How would you handle a case where you need to capture an immutable
state in a closure to ensure thread safety?
Capturing immutable state in a closure to ensure thread safety involves ensuring that the state
remains unchanged and consistent during the execution of the closure. This is particularly
important when dealing with concurrent operations or asynchronous code where multiple threads
may access the same state simultaneously.
Use Capture Lists with let Constants:
Declare immutable state as constants ( let ) outside the closure.
Capture the constants in the closure's capture list to ensure that the state remains
immutable and consistent within the closure.
By capturing immutable state using let constants, you prevent accidental modifications to
the state from within the closure.
class MediaAssetManager {
func fetchMediaAssets(completion: @escaping ([MediaAsset]) -> Void) {
// asynchronous operation to fetch media assets
}
}
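The call site that the explanation below refers to is not shown. A minimal sketch, assuming a view controller that stores the fetched assets:
final class MediaLibraryViewController {
    private let manager = MediaAssetManager()
    private var assets: [MediaAsset] = []

    func reload() {
        manager.fetchMediaAssets { [weak self] fetchedAssets in
            guard let self = self else { return }
            self.assets = fetchedAssets
        }
    }
}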
In the above example, self is captured weakly in the closure to avoid a strong reference cycle.
This ensures that the closure doesn't keep a strong reference to self , preventing memory leaks.
Use Value Types for Immutable State:
If possible, use value types for immutable state instead of reference types.
Value types are inherently thread-safe because each instance has its own independent copy
of the state, preventing concurrent access issues.
Capture immutable value types in the closure as constants to ensure thread safety.
class MediaAssetManager {
func fetchMediaAssets(completion: @escaping ([MediaAsset]) -> Void) {
// asynchronous operation to fetch media assets
}
}
In the above example, instead of capturing self , we capture the updateUI method as a value
type. This ensures that no reference to self is retained within the closure, thus avoiding
memory leaks.
protocol MediaAssetProtocol {
var title: String { get }
var duration: TimeInterval { get }
func play()
}
We extend MediaAssetProtocol with a default implementation for the play() method. This
default implementation simply prints a message indicating that the media asset is being played.
class MediaAssetClass: MediaAssetProtocol {
    var title: String
    var duration: TimeInterval
    init(title: String, duration: TimeInterval) { self.title = title; self.duration = duration }
    // play() is intentionally not implemented – the protocol extension's default is used
}
protocol Editable {
func edit()
}
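The Printable protocol referenced below is not shown in this excerpt; a minimal placeholder, with its requirement assumed:
protocol Printable {
    func printContent()
}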
// protocol composition
typealias PrintableAndEditable = Printable & Editable
Now, any type that conforms to PrintableAndEditable must also conform to both Printable
and Editable .
Protocol extensions allow you to provide default implementations for protocol methods. When
combined with protocol composition, you can provide default implementations for methods
required by multiple protocols.
extension Editable {
func edit() {
print("Editable content edited")
}
}
extension MediaAssetProtocol {
func play() {
print("Playing \(title) for \(duration) seconds.")
}
}
When we call the play() method on the mediaAsset instance, the method dispatch process
ensures that the correct implementation is called based on the actual type of the object.
Method Dispatch with Class Inheritance
Method dispatch with class inheritance involves dynamic dispatch, also known as late binding.
This means that the method to be called is determined at runtime based on the dynamic type of
the object. For example:
class MediaAssetClass {
    var name: String

    init(name: String) {
        self.name = name
    }

    func play() {
        print("Playing \(name)")
    }
}
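The VideoClass and AudioClass subclasses discussed next are not reproduced. A minimal sketch:
class VideoClass: MediaAssetClass {
    // no override – inherits play() from MediaAssetClass
}

class AudioClass: MediaAssetClass {
    override func play() {
        print("Playing audio \(name)")
    }
}

let assets: [MediaAssetClass] = [VideoClass(name: "Clip"), AudioClass(name: "Song")]
assets.forEach { $0.play() }   // the implementation is chosen at run-time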
When we create instances of VideoClass and AudioClass and call the play() method on
them, the method dispatch mechanism resolves the method calls at run-time. Since
VideoClass doesn't override the play() method, it uses the implementation inherited from
MediaAssetClass. However, AudioClass overrides the play() method, so its implementation
is used instead.
So, method dispatch with protocol extensions is determined at compile-time based on the static
type, whereas method dispatch with class inheritance involves dynamic dispatch, determined at
runtime based on the dynamic type of the object.
Q. Can you explain the difference between type aliases and associated
types in protocols?
Associated types allow you to define placeholder types that are associated with the protocol.
These types are not specified until the protocol is adopted. They are declared using the
associatedtype keyword. They are powerful because they enable protocol authors to define
protocols in a way that can work with any data type.
Type aliases allow you to provide an alternate name for an existing data type. They are declared
using the typealias keyword. Type aliases are particularly useful when you want to refer to a
complex type with a simpler, more descriptive name.
Let's create an example using a protocol with an associated type and typealias to implement a
stack:
protocol Stack {
    associatedtype Element
    var isEmpty: Bool { get }
    mutating func push(_ element: Element)
    mutating func pop() -> Element?
}
We define a protocol named Stack for implementing a generic stack. This protocol declares an
associated type Element , representing the type of elements stored in the stack.
Implementations of this protocol must provide concrete types for the associated type Element
and define functionality for the required properties and methods.
struct IntStack: Stack {
typealias Element = Int
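    // the rest of the conformance is not shown in this excerpt; a minimal sketch:
    private var elements: [Int] = []
    var isEmpty: Bool { elements.isEmpty }
    mutating func push(_ element: Int) { elements.append(element) }
    mutating func pop() -> Int? { elements.popLast() }
}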
The IntStack above works on integers and conforms to the Stack protocol. In this
implementation, the associated type Element is typealiased to Int , meaning that this stack
specifically deals with integers.
var stack = IntStack()
stack.push(1)
stack.push(2)
stack.push(3)
IntStack works only with integers because it is explicitly defined to hold integers: it
conforms to the Stack protocol and specifies its associated type Element as Int via the
typealias.
The object that needs to communicate with another object declares a delegate property; the
receiving object will later assign itself as the delegate:
protocol MediaAssetDelegate: AnyObject {
    func didFinishLoading(asset: MediaAsset)
}

class MediaLoader {
    weak var delegate: MediaAssetDelegate?
    func loadAsset() {
        // load the asset
        // once the asset is loaded, notify the delegate
        delegate?.didFinishLoading(asset: loadedAsset)
    }
}
The delegate, which conforms to the protocol, implements the required methods:
class ViewController: UIViewController, MediaAssetDelegate {
func didFinishLoading(asset: MediaAsset) {
// handle the loaded asset
}
}
Closures
Closures are self-contained blocks of functionality that can be passed around and used in your
code. They are often used for handling asynchronous operations or as callback mechanisms.
class MediaLoader {
var completionHandler: ((MediaAsset) -> Void)?
func loadAsset() {
// load the asset
// once the asset is loaded, call the completion handler
completionHandler?(loadedAsset)
}
}
Q. What is the @objc attribute, and why might you need to use it when
working with protocols?
The @objc attribute is used to expose Swift declarations to Objective-C code. It's primarily used
when interoperating between Swift and Objective-C, allowing Swift code to be used in
Objective-C contexts.
When working with protocols, you might need to use @objc for a few reasons:
Objective-C Interoperability If you have a Swift protocol that needs to be used in Objective-C
code, you'll need to mark it with @objc to make it accessible and usable from Objective-C.
Objective-C doesn't inherently understand Swift protocols, so this annotation bridges the gap
between both languages.
Optional Protocol Requirements Swift protocols can define optional requirements using the
@objc attribute. This is particularly useful when interoperating with Objective-C, as Objective-C
protocols often have optional methods. In Swift, you mark such methods with @objc optional .
class VideoPlayer: NSObject, MediaAsset {
    // the @objc MediaAsset protocol (not shown here) is assumed to require play()
    var mediaName: String = "Sample video"
    func play() {
        print("Playing video \(mediaName)")
    }
}
In this example, VideoPlayer class adopts the MediaAsset protocol and implements its required
methods. With @objc , this protocol can be seamlessly used with class VideoPlayer.
Now, let's say we want to create more specific types of media assets, such as ImageAsset and
VideoAsset , each with additional properties and methods specific to their type.
We can create protocols for each of these specific types, inheriting from the MediaAsset
protocol:
protocol ImageAsset: MediaAsset {
var resolution: (width: Int, height: Int) { get }
}
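The companion VideoAsset protocol mentioned above is not reproduced; a plausible sketch:
protocol VideoAsset: MediaAsset {
    var duration: TimeInterval { get }
}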
With protocol inheritance, any type that conforms to ImageAsset or VideoAsset automatically
conforms to MediaAsset as well. This ensures that they implement the basic properties and
behaviors required by MediaAsset, while also providing additional functionality specific to their
type.
Let's create structs that conform to ImageAsset and VideoAsset:
// stored properties inferred from the print statements
struct Image: ImageAsset {
    var title: String
    var author: String
    var resolution: (width: Int, height: Int)
    func play() {
        print("Displaying image \(title) by \(author)")
    }
}

struct Video: VideoAsset {
    var title: String
    var author: String
    var duration: TimeInterval
    func play() {
        print("Playing video \(title) by \(author)")
    }
}
Now, any Image or Video instance can be treated as a MediaAsset, allowing for code reuse
and maintainability.
We can rely on the common interface provided by the MediaAsset protocol while still leveraging
the specific functionalities of ImageAsset and VideoAsset.
Now, you want to create VideoAsset and AudioAsset types that represent video and audio
media assets and extend MediaAsset:
struct VideoAsset: MediaAsset {
var title: String
var duration: TimeInterval
var videoURL: URL
func play() {
// write logic here to play
}
}
struct AudioAsset: MediaAsset {
    var title: String
    var duration: TimeInterval
    var audioURL: URL
    func play() {
        // write logic here to play
    }
}
You can see that both VideoAsset and AudioAsset share common properties and methods
defined in MediaAsset, leading to code duplication and potential maintenance issues.
Instead of using inheritance, you can use protocol composition to address this problem more
efficiently. First, define separate protocols for Playable and Displayable behaviors:
protocol Playable {
func play()
}
protocol Displayable {
func display()
}
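The composed MediaAsset definition itself is not shown in the excerpt; presumably something like:
typealias MediaAsset = Playable & Displayable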
With MediaAsset defined using protocol composition, you can implement VideoAsset and
AudioAsset conforming to this protocol without inheritance:
struct VideoAsset: MediaAsset {
var title: String
var duration: TimeInterval
var videoURL: URL
func play() {
// write logic here to play
}
func display() {
// write logic here to display
}
}
struct AudioAsset: MediaAsset {
    var title: String
    var duration: TimeInterval
    var audioURL: URL
    func play() {
        // write logic here to play
    }
    func display() {
        // write logic here to display
    }
}
By using protocol composition, you eliminate code duplication, promote code reuse, and
maintain a cleaner, more modular architecture compared to inheritance. This approach also
allows for greater flexibility in defining types with specific combinations of behaviors.
Protocol Inheritance: It is indicated by listing the inherited protocols separated by commas after
the protocol declaration.
protocol ProtocolB: ProtocolA { /* protocol definition */ }
Multiple Inheritance:
Class Inheritance: Swift does not support multiple inheritance for classes. A class can inherit
from only one superclass, leading to a linear inheritance hierarchy.
class Subclass: Superclass1, Superclass2 {
// This is not allowed in Swift
}
Protocol Inheritance: Swift allows for multiple inheritance for protocols. A protocol can inherit
from one or more protocols, enabling the combination of behaviors and requirements from
multiple sources. This promotes flexibility and modularity in protocol-oriented programming.
protocol Metadata {
var duration: TimeInterval { get }
var fileSize: Int { get }
}
func play() { }
}
func play()
extension MediaAssetProtocol {
func duration() -> TimeInterval {
// default implementation returns zero seconds
return 0
}
}
func play() {}
}
extension MediaAssetProtocol {
func assetUrlString() -> String {
baseUrl + "/" + fileName
}
}
Q. How would you refactor a legacy iOS codebase that doesn't adhere to
SOLID principles?
Refactoring a legacy iOS codebase that doesn't follow SOLID principles can be a challenging
but rewarding process. Here's a general approach you can follow:
Identify Areas for Improvement
Start by analyzing the codebase to identify areas where SOLID principles are violated. Look for
classes that are doing too much (violating SRP), tight coupling between classes (violating DIP),
large interfaces with unnecessary methods (violating ISP), etc.
Prioritize Refactoring Targets
Not all parts of the codebase may need immediate attention. Prioritize refactoring targets based
on factors like frequency of change, impact on the system, and ease of refactoring.
Break Down Responsibilities
For classes that violate the Single Responsibility Principle, identify the distinct responsibilities
they have and extract each responsibility into its own class. This may involve creating new
classes, extracting methods, or splitting existing classes.
Introduce Abstractions
Wherever there is tight coupling between classes, introduce abstractions to decouple them. This
might involve defining protocols to represent common behaviors and having classes depend on
these abstractions rather than concrete implementations.
Apply Dependency Injection
Implement Dependency Injection to break dependencies between classes and stick to the
Dependency Inversion Principle. This allows you to inject dependencies into classes rather than
having them create their dependencies directly.
Refactor Large Interfaces
If you have protocols that are too large and violate the Interface Segregation Principle, consider
breaking them down into smaller, more focused interfaces. This allows clients to depend only on
the methods they need.
Refactor Gradually
Refactoring a large codebase all at once can be risky and time-consuming. Instead, aim to
refactor gradually, focusing on one area at a time while ensuring that the application remains
functional and stable.
Review and Iterate
After each refactoring step, review the changes and iterate as needed. Solicit feedback from
team members to ensure that the refactored codebase meets quality and performance standards.
Document Changes
Finally, document the changes made during the refactoring process to help other developers
understand the updated codebase and ensure consistency in future development efforts.
Remember that refactoring a legacy codebase is an ongoing process, and it may take time to fully
align with SOLID principles. Be patient and persistent, and focus on making incremental
improvements that bring tangible benefits to the codebase and the development process.
We added a processFile function to process the parsed file content. In this example, the
FileHandler class violates SRP because it has multiple responsibilities: reading from a file,
writing to a file, parsing file content, and processing file content.
We can refactor it by splitting these responsibilities into separate classes:
// FileReader class responsible for reading from a file
class FileReader {
func readFile(fileName: String) -> String? {
// code to read a file
return nil
}
}
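The remaining refactored classes are not reproduced in the excerpt. A minimal sketch, with method names assumed:
// FileWriter class responsible only for writing to a file
class FileWriter {
    func write(_ content: String, to fileName: String) {
        // code to write a file
    }
}

// FileParser class responsible only for parsing file content
class FileParser {
    func parse(_ content: String) -> [String] {
        return content.split(separator: "\n").map(String.init)
    }
}

// FileProcessor class responsible only for processing parsed content
class FileProcessor {
    func process(_ lines: [String]) {
        // code to process the parsed content
    }
}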
In this refactored code, each class has a single responsibility: reading from a file, writing to a file,
parsing file content, or processing file content. This makes the codebase easier to understand,
maintain, and extend, in keeping with the Single Responsibility Principle.
Q. How can you design iOS classes/modules to be open for extension but
closed for modification?
To design iOS classes/modules to be open for extension but closed for modification, you can
apply the Open-Closed Principle. It states that classes or modules should be open for extension
but closed for modification, meaning you should be able to extend the behavior of a class
without modifying its source code.
Let's consider an example where we will design a type that will represent various types of media
assets such as photos, videos, and audio files. We want to design it in a way that allows for
adding new types of media assets without modifying the existing code. For example:
protocol MediaAsset {
    var id: String { get }
    var name: String { get }
    var type: MediaType { get }
    func display()
}

enum MediaType {
    case photo
    case video
}
We define different structs (PhotoAsset, VideoAsset) that conform to the MediaAsset protocol for
each specific type of media asset.
Now, if we want to add a new type of media asset, say audio, we can simply create a new struct that conforms to the MediaAsset protocol without modifying the existing code.
First, add the new case to the MediaType enum:
enum MediaType {
case photo
case video
case audio
}
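Then the new asset type simply conforms to the existing protocol (a sketch):
struct AudioAsset: MediaAsset {
    let id: String
    let name: String
    let type: MediaType = .audio

    func display() {
        print("Playing audio: \(name)")
    }
}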
This design allows us to extend the functionality by adding new types of media assets without modifying the existing codebase, thus adhering to the Open/Closed Principle.
struct VideoAsset { // enclosing type assumed; only the method bodies appeared in the original snippet
    let name: String
    let type: MediaType

    func display() {
        print("Displaying \(name)")
    }

    func play() {
        // write logic here to play media
    }

    func pause() {
        // write logic here to pause media
    }
}

enum MediaType {
    case image
    case video
    case audio
}
Create a service or manager class that operates on media assets using the protocol:
class MediaAssetService {
    func filterAssetsByType(assets: [MediaAsset], type: MediaType) -> [MediaAsset] {
        return assets.filter { $0.type == type }
    }

    // other methods for working with media assets
}
The MediaAssetService class depends on the MediaAsset protocol rather than specific
implementations. This makes it easier to extend and maintain because it's not tightly coupled to
concrete types.
Adding new types of media assets (e.g., adding support for PDF files) is straightforward. You just
need to create a new struct conforming to the MediaAsset protocol.
Unit testing becomes easier as you can use mock objects or stubs conforming to the MediaAsset
protocol.
Following the Dependency Inversion Principle leads to more maintainable and scalable apps by promoting decoupling, abstraction, testability, flexibility, and modular design. This helps you build robust, adaptable, and high-quality iOS apps.
With Dependency Inversion Principle, we're using an abstraction ( MediaAsset ) to define the
interface for different types of media assets.
By using the protocol MediaAsset, we're decoupling the high-level modules (e.g., classes that
use media assets) from the low-level details (specific implementations of media assets). This
abstraction allows us to switch between different types of media assets easily without affecting
the high-level modules.
Dependency Injection
Dependency Injection is a technique in which an object receives the objects it depends on from an external source rather than creating them itself. This allows for easier testing, as dependencies can be mocked or replaced with stubs during testing, and it promotes reusability and modularity.
Dependency injection can be achieved through constructor injection, property injection, or
method injection. For example:
class MediaPlayer {
    let mediaAsset: MediaAsset

    init(mediaAsset: MediaAsset) {
        self.mediaAsset = mediaAsset
    }

    func playMedia() {
        mediaAsset.play()
    }
}
This approach makes MediaPlayer more flexible because it can work with any type of
MediaAsset, as long as it conforms to the MediaAsset protocol. It also makes testing easier since
we can inject mock or stub MediaAsset objects during testing.
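Minimal sketches of the other two styles mentioned above, property injection and method injection (the class names are assumptions):
// property injection: the dependency is assigned after initialization
class MediaGallery {
    var mediaAsset: MediaAsset?
}

// method injection: the dependency is passed to the method that needs it
class MediaExporter {
    func export(_ asset: MediaAsset) {
        // use the injected asset here
    }
}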
Dependency Inversion is a design principle, while Dependency Injection is a technique used to
implement that principle. Dependency Injection allows us to adhere to Dependency Inversion by
providing dependencies externally, making our code more flexible, testable, and adherent to
SOLID principles.
enum MediaType {
case image
case video
case audio
}
Now, let's say we want to perform some operations on these media assets, such as fetching
metadata or processing them in some way. We might be tempted to adhere strictly to SOLID
principles by introducing interfaces and dependency injection.
class AudioManager {
    let processor: MediaAssetProcessor

    init(processor: MediaAssetProcessor) {
        self.processor = processor
    }
}

protocol MediaAssetConvertible {
    var asset: MediaAssetStruct { get }
}
We define a MediaAssetStruct struct to represent a generic media asset with a URL and a type.
We then define a protocol MediaAssetConvertible that requires conforming types to provide a
property asset of type MediaAssetStruct.
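A sketch of that struct (the property names follow the initializers below):
struct MediaAssetStruct {
    let assetURL: URL
    let assetType: String
}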
Define specific types that conform to the MediaAssetConvertible protocol:
// ImageAsset struct conforming to MediaAssetConvertible
struct ImageAsset: MediaAssetConvertible {
    let asset: MediaAssetStruct

    init(assetURL: URL) {
        self.asset = MediaAssetStruct(assetURL: assetURL, assetType: "Image")
    }
}
// VideoAsset struct conforming to MediaAssetConvertible
struct VideoAsset: MediaAssetConvertible {
    let asset: MediaAssetStruct

    init(assetURL: URL) {
        self.asset = MediaAssetStruct(assetURL: assetURL, assetType: "Video")
    }
}
func findMaximum<T: Comparable>(in array: [T]) -> T? {
    guard var maxElement = array.first else { return nil }
    for element in array where element > maxElement {
        maxElement = element
    }
    return maxElement
}
The <T: Comparable> syntax indicates that the type T must conform to the Comparable
protocol. This ensures that the elements in the array can be compared using the > operator.
In this example, findMaximum function works with both Int and String arrays because these
types conform to the Comparable protocol, allowing comparison of elements using the >
operator.
Let's try to call findMaximum() with a type that doesn't conform to Comparable , such as a
custom type Person :
struct Person {
let name: String
let age: Int
}
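For instance, this hypothetical call would fail to compile:
let people = [Person(name: "Alice", age: 30), Person(name: "Bob", age: 25)]
// let oldest = findMaximum(in: people) // error: Person does not conform to Comparable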
The compiler will generate an error indicating that the type Person does not conform to the
Comparable protocol.
Q. What is type erasure? How can it be useful when working with protocols
and generics?
Type erasure is used to hide the underlying types of objects that conform to a certain protocol. It
allows you to work with instances of different types in a uniform way, abstracting away their
actual types. This can be particularly useful when working with protocols and generics because it
enables you to work with heterogeneous collections of objects that share a common behavior.
protocol MediaValidator {
    var isSupported: Bool { get }
}

struct ValidMedia: MediaValidator {
    var isSupported: Bool
}

// a heterogeneous collection typed by the protocol, hiding the concrete types
let mediaArray: [MediaValidator] = [ValidMedia(isSupported: true), ValidMedia(isSupported: false)]

mediaArray.forEach { media in
    print(media.isSupported)
}
Upon iteration through the mediaArray, it prints the isSupported property for each ValidMedia
instance, indicating whether the respective media type is supported or not.
By using generics, you can write more versatile and robust code that adapts to various data
types, enhances type safety, and improves code readability, ultimately leading to more
maintainable and scalable codebase.
Code Reusability: Generics allow you to write flexible and reusable code components. You can
create functions, methods, and data structures that can work with any type, rather than being
tied to specific data types.
Type Safety: Generics help in catching type-related errors at compile-time rather than runtime.
By specifying constraints on generic types, you ensure that the code operates only on the
expected types, reducing the chance of runtime errors.
Performance: The compiler can generate specialized implementations of generic code for the concrete types it is used with, so generics typically add little or no runtime overhead compared to writing the same code for each type by hand.
Abstraction: Generics enable you to write abstract algorithms and data structures that can
operate on different types without specifying those types beforehand. This promotes cleaner,
more modular code architecture.
Reduced Code Duplication: By using generics, you can avoid writing redundant code for similar
functionalities with different types. This leads to more concise and maintainable codebases.
Future-proofing: Generics make your code more adaptable to changes and additions in the
future. As your project evolves and new requirements arise, generic components can easily
accommodate new types without requiring extensive modifications.
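The next example builds on a Container protocol with an associated type and a generic Stack type that conforms to it; a minimal sketch of such declarations (assumed, for reference):
protocol Container {
    associatedtype Item
}

struct Stack<T>: Container {
    typealias Item = T
    private var elements: [T] = []

    mutating func push(item: T) {
        elements.append(item)
    }

    mutating func pop() -> T? {
        elements.popLast()
    }
}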
extension Stack {
    mutating func addItem(item: T) {
        self.push(item: item)
    }
}
In this example, Container protocol defines an associated type Item . When a type adopts this
protocol, it must provide a concrete type for Item . Stack is a generic struct conforming to
Container protocol, where Item is associated with the type T . This allows Stack to be used
with any type.
When to use associatedtype:
Common Protocols
When designing protocols that need to work with a variety of types, especially in cases where the
exact type is not known upfront, associated types are very useful.
Frameworks and Libraries
They are particularly valuable when designing frameworks and libraries intended for use by
others. They allow users of the framework/library to customize behavior by providing their own
implementations for associated types, making the framework/library more flexible and adaptable
to different use cases.
Code Abstraction
If you find yourself writing code that needs to work with multiple types but you don't want to commit to any specific implementation, associated types can help abstract away concrete type details, making your code more generic and reusable.
Q. Can you explain the difference between using generics and protocols
with associated types?
Both generics and protocols with associated types offer flexibility. They serve different purposes
and are applied in different contexts based on the requirements of your code:
Generics are more suitable for writing code that operates uniformly on a range of types,
whereas protocols with associated types are more suitable for defining interfaces that
require type-specific behavior.
Generics provide flexibility in implementation, allowing you to define generic functions,
structures, and classes, while protocols with associated types provide flexibility in
interface, allowing you to define protocols with placeholders for types or properties.
Generics are resolved at compile time, while protocols with associated types are resolved
dynamically at runtime when the concrete type is known.
The AssetProtocol protocol specifies two properties, name and type, which are common attributes of different types of media assets. Conforming the MediaAssetStruct type to this protocol guarantees that it provides these properties.
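Their declarations might look like this (using String for type is an assumption):
protocol AssetProtocol {
    var name: String { get }
    var type: String { get }
}

struct MediaAssetStruct: AssetProtocol {
    let name: String
    let type: String
}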
func filterAssets<T: AssetProtocol>(_ assets: [T], where condition: (T) -> Bool) -> [T] {
    var filteredAssets = [T]()
    for asset in assets {
        if condition(asset) {
            filteredAssets.append(asset)
        }
    }
    return filteredAssets
}
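A usage sketch (the sample assets and the "Image" type string are assumptions):
let assets = [
    MediaAssetStruct(name: "Profile_123", type: "Image"),
    MediaAssetStruct(name: "Video_123", type: "Video")
]
let filteredImages = filterAssets(assets) { $0.type == "Image" }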
print(filteredImages.count) // 1
You can use where clauses to specify associated types and other requirements for
protocols.
where clauses can also be used in protocol extensions to add additional constraints to
associated types.
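For instance, a constrained protocol extension using a where clause (a small sketch with the standard library's Collection protocol):
extension Collection where Element: Numeric {
    // available only when the associated Element type is Numeric
    func total() -> Element {
        reduce(0, +)
    }
}

let sum = [1, 2, 3].total() // 6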
Q. Can you explain how generics are used in the Swift standard library?
Provide examples of standard library types and functions that make use of
generics.
Generics allow you to write flexible and reusable functions and data types that can work with any
type. They enable you to write code that avoids duplication and promotes type safety. The Swift
standard library extensively uses generics to provide powerful and flexible functionalities.
Here's an explanation along with examples:
Collection Types
The Swift standard library provides generic collection types such as Array, Dictionary, and Set, along with the generic Optional enumeration. These types can hold values of any type while ensuring type safety:
var numberArray: Array<Int> = [1, 2, 3, 4, 5]
var dictionary: Dictionary<String, Int> = ["grade": 5, "age": 12]
var scores: Set<Double> = [3.14, 2.71, 1.618]
var optionalString: Optional<String> = "Swiftable"
Functions
Functions in the Swift standard library are often generic, allowing them to work with various data
types. For instance, the map , filter , and reduce functions on collection types are
implemented using generics, enabling you to apply operations to elements of any type:
let numbers = [1, 2, 3, 4, 5]
let doubled = numbers.map { $0 * 2 }
let filtered = numbers.filter { $0 % 2 == 0 }
let sum = numbers.reduce(0, +)
Optionals
The Optional type is a generic enumeration used to represent either a wrapped value or nil . It's
declared as Optional<Wrapped> , where Wrapped is a placeholder for the wrapped value's type.
var optionalInt: Optional<Int> = 10
var optionalString: Optional<String> = "Swiftable"
By using generics, the Swift standard library provides a robust foundation for building type-safe
and reusable components, making it easier to write concise and efficient code. Generics enable
you to write code that is more adaptable to changes and promotes better code organization and
readability.
func convertObjectToJSON<T: Encodable>(_ objects: [T]) -> String? {
    let encoder = JSONEncoder()
    encoder.outputFormatting = .prettyPrinted
    do {
        let jsonData = try encoder.encode(objects)
        return String(data: jsonData, encoding: .utf8)
    } catch {
        print("Error encoding objects to JSON: \(error)")
        return nil
    }
}
We have two overloaded functions convertObjectToJSON . One takes a single object ( T ) and
another takes an array of objects ( [T] ). Both functions utilize generics ( <T> ) to accept any
type that conforms to the Encodable protocol, ensuring type safety. The functions use
JSONEncoder to encode the object(s) into JSON data and return the JSON string representation.
We define a MediaAsset struct that conforms to the Encodable protocol, which allows it to be converted into JSON format.
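A sketch of the single-object overload, the struct, and the sample value used below (assumed from the description and the printed output):
func convertObjectToJSON<T: Encodable>(_ object: T) -> String? {
    let encoder = JSONEncoder()
    encoder.outputFormatting = .prettyPrinted
    do {
        let jsonData = try encoder.encode(object)
        return String(data: jsonData, encoding: .utf8)
    } catch {
        print("Error encoding object to JSON: \(error)")
        return nil
    }
}

struct MediaAsset: Encodable {
    let name: String
    let type: String
}

let audio = MediaAsset(name: "SampleAudio", type: "mp3")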
if let json = convertObjectToJSON(audio) {
print("JSON for single object: \(json)")
}
/*
JSON for single object: {
"name" : "SampleAudio",
"type" : "mp3"
}
*/
We call the convertObjectToJSON function with the audio object as an argument, and then print the resulting JSON string if the conversion is successful.
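The array call referenced below might be set up like this (the extra sample values match the printed output):
let video = MediaAsset(name: "SampleVideo", type: "mov")
let image = MediaAsset(name: "SampleImage", type: "png")
let jsonArray = [audio, video, image]

if let json = convertObjectToJSON(jsonArray) {
    print("JSON for array of objects: \(json)")
}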
/*
JSON for array of objects: [
{
"name" : "SampleAudio",
"type" : "mp3"
},
{
"name" : "SampleVideo",
"type" : "mov"
},
{
"name" : "SampleImage",
"type" : "png"
}
]
*/
We call the convertObjectToJSON function with the jsonArray array as an argument, and then print the resulting JSON string if the conversion is successful.
This approach allows us to handle both single objects and arrays of objects seamlessly,
improving code readability and maintainability.
struct NetworkValidator {
    func validateURL(_ urlString: String) throws -> URL {
        guard urlString.hasPrefix("https"), let url = URL(string: urlString) else {
            throw URLError.invalidURL
        }
        return url
    }
}
In the above code, NetworkValidator contains a method validateURL() which takes a string
representation of a URL as input and throws an error of type URLError if the URL is invalid.
try
This keyword is used when calling a function that can throw an error. When you use try , you're
indicating that you're aware that the function might throw an error and you're handling it
appropriately using do-catch blocks or propagating it up the calling chain.
let networkValidator = NetworkValidator()
let urlString = "https://example.com"
do {
let validURL = try networkValidator.validateURL(urlString)
print("Valid URL: \(validURL)")
} catch {
print("Invalid URL: \(error)")
}
try?
This keyword is used when calling a function that can throw an error, but you want to handle
errors gracefully by converting them into an optional value. If the function throws an error, the
result will be nil .
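For example (a sketch reusing the NetworkValidator above; the non-https URL is chosen so the call fails):
let validator = NetworkValidator()
let result = try? validator.validateURL("http://example.com")
print(result as Any) // prints: nil, because the call threw and try? converted the error to nil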
try!
This keyword is used when calling a function that can throw an error, and you're certain that the
function will not throw an error in your specific use case. If the function does throw an error, it will
result in a runtime error.
let networkValidator = NetworkValidator()
let urlString = "https://example.com"
// safe here only because we know the URL above is valid and uses https
let validURL = try! networkValidator.validateURL(urlString)
It's crucial to use these keywords appropriately based on your requirements and the certainty of
whether an error will be thrown. Misuse of try! can lead to runtime crashes if the function
unexpectedly throws an error. Use it only when you're absolutely sure that the function will not
throw an error in your specific context.
The load() method attempts to load the media asset. If any errors occur during the loading
process, such as an invalid URL, file not found, or unsupported format, it throws the appropriate
MediaAssetError .
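The MediaAssetError type and the throwing load() method might be declared like this (a sketch; the validation details are assumptions):
enum MediaAssetError: Error {
    case invalidURL
    case fileNotFound
    case unsupportedFormat
}

struct MediaAsset {
    let urlString: String

    func load() throws -> Data {
        guard let url = URL(string: urlString) else {
            throw MediaAssetError.invalidURL
        }
        guard FileManager.default.fileExists(atPath: url.path) else {
            throw MediaAssetError.fileNotFound
        }
        guard ["jpg", "png", "mp3", "mp4"].contains(url.pathExtension.lowercased()) else {
            throw MediaAssetError.unsupportedFormat
        }
        return try Data(contentsOf: url)
    }
}

let mediaAsset = MediaAsset(urlString: "file:///path/to/asset.mp4")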
do {
    let assetData = try mediaAsset.load()
    print("Media asset loaded successfully: \(assetData)")
} catch let error as MediaAssetError {
    switch error {
    case .invalidURL:
        print("Invalid URL provided.")
    case .fileNotFound:
        print("File not found at the specified URL.")
    case .unsupportedFormat:
        print("Unsupported format.")
    }
} catch {
    print("An unknown error occurred: \(error)")
}
This code attempts to load a media asset using the load() method of the MediaAsset struct. If
an error occurs, it catches the specific MediaAssetError and handles it accordingly.
This approach provides a clear and structured way to handle errors in your codebase, making it
easier to debug and maintain.
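Calling fatalError
A fatalError call unconditionally stops program execution, so it is reserved for unrecoverable conditions. The divide function used in the next line might be declared like this (a sketch; the implementation is an assumption):
func divide(_ a: Int, by b: Int) -> Int {
    guard b != 0 else {
        fatalError("Division by zero is not allowed")
    }
    return a / b
}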
let result = divide(10, by: 0) // this will cause a fatal error and terminate the program
Throwing an Error
Throwing an error is a process for signaling that an exceptional condition has occurred
during the execution of a function or method, but it doesn't immediately terminate the code.
The caller of the function or method has the responsibility to handle the error by using do-
catch blocks or propagating it up the call stack.
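The safeDivide function and DivisionError type used below might be declared like this (a sketch inferred from the catch clauses):
enum DivisionError: Error {
    case divisionByZero
}

func safeDivide(_ a: Int, by b: Int) throws -> Int {
    guard b != 0 else {
        throw DivisionError.divisionByZero
    }
    return a / b
}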
do {
    let result = try safeDivide(10, by: 0)
    print("Result: \(result)")
} catch DivisionError.divisionByZero {
    print("Cannot divide by zero.")
} catch {
    print("An unexpected error occurred: \(error)")
}
In summary, both fatalError and throwing an error are ways of handling exceptional situations: fatalError is used for unrecoverable errors that necessitate immediate termination of the app, while throwing an error is used for recoverable errors that can be handled by the caller.
Q. Explain how you would handle asynchronous errors, such as those occurring in asynchronous operations or networking tasks.
Handling asynchronous errors is crucial for building robust and reliable applications. Here's how
you can handle such errors effectively:
Using Completion Handlers
One common approach is to use completion handlers to propagate errors. You can define a
completion handler with a Result type that encapsulates either a success value or an error. For
example:
enum NetworkError: Error {
    case invalidURL
    case noInternetConnection
}

func fetchData(from urlString: String, completion: @escaping (Result<Data, NetworkError>) -> Void) {
    guard let url = URL(string: urlString) else { return completion(.failure(.invalidURL)) }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data else { return completion(.failure(.noInternetConnection)) }
        completion(.success(data))
    }.resume()
}
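Using async/await
With Swift concurrency, the same idea can be expressed with a throwing async function and handled with do-catch. A sketch of the fetchMediaAsset function used below (the endpoint and a Decodable MediaAsset type are assumptions):
func fetchMediaAsset(withID id: String) async throws -> MediaAsset {
    guard let url = URL(string: "https://api.example.com/assets/\(id)") else {
        throw NetworkError.invalidURL
    }
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(MediaAsset.self, from: data)
}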
do {
let mediaAsset = try await fetchMediaAsset(withID: "123")
print("Media asset loaded: \(mediaAsset)")
} catch {
print("Error loading media asset: \(error)")
}
In these examples, errors are appropriately handled, providing feedback to the user or taking
corrective actions as necessary.
Q. Explain how you would localize error messages for different languages.
Localizing error messages for different languages involves translating error messages into the
target language while ensuring that the translated messages convey the same meaning and
context as the original messages. Here's a guide on how you can localize error messages
effectively:
Prepare Localizable Strings Files
Create separate strings files for each language you want to support. These files should contain
key-value pairs where the key is a unique identifier for the error message and the value is the
localized message in the corresponding language. For example:
// Localizable.strings (English):
"MEDIA_ASSET_NOT_FOUND" = "Media asset not found.";
// Localizable.strings (Spanish):
"MEDIA_ASSET_NOT_FOUND" = "Recurso multimedia no encontrado.";
Use NSLocalizedString
In your code, replace direct string literals with calls to NSLocalizedString . This function looks
up the localized string for the provided key in the appropriate strings file based on the user's
language preferences. For example:
let errorMessage = NSLocalizedString("MEDIA_ASSET_NOT_FOUND", comment: "Media
asset not found.")
func displayMediaAsset() {
    let assetID = "12345"
    if let mediaAsset = fetchMediaAsset(withID: assetID) {
        // display media asset
    } else {
        let errorMessage = NSLocalizedString("MEDIA_ASSET_NOT_FOUND", comment: "Media asset not found.")
        // show error message to the user
        print(errorMessage)
    }
}
In this example, if the media asset with the specified ID is not found, the localized error message
will be displayed to the user, based on their language preference.
By following this approach, you can ensure that your app provides a seamless and localized
experience for users across different languages.
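Closing Files
When reading from a file, defer guarantees the file is closed no matter how the function exits. A minimal sketch (the FileHandle-based implementation and the error case are assumptions):
func readContents(atPath path: String) throws -> String {
    let fileHandle = FileHandle(forReadingAtPath: path)
    defer {
        fileHandle?.closeFile() // runs whether we return normally or throw below
    }
    guard let data = fileHandle?.readDataToEndOfFile(),
          let contents = String(data: data, encoding: .utf8) else {
        throw MediaAssetError.fileNotFound
    }
    return contents
}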
In the above example, the defer is used to ensure that the file is closed after it has been read,
even if an error occurs.
Transaction Rollback
In database operations, you might need to rollback a transaction if an error occurs. defer can
help ensure that the rollback code is executed regardless of the outcome. Here's a simplified
example using CoreData:
func saveDataToDatabase() {
    let context = persistentContainer.viewContext
    context.perform {
        defer {
            if context.hasChanges {
                context.rollback() // rollback changes if an error occurred
            }
        }
        do {
            try context.save()
        } catch {
            // handle error
        }
    }
}
// process image
// example: Upload image to a server
return "https://example.com/uploadedImages/tempImage.jpg"
}
In these scenarios, using defer ensures that cleanup tasks are performed in a structured and
predictable manner, improving code readability and maintainability, especially in error-prone
situations.
struct MediaAssetStruct {
    func process(data: Data) throws {
        guard isValid(data) else {
            throw MediaAssetError.invalidData
        }
        // process data here
    }

    private func isValid(_ data: Data) -> Bool {
        // placeholder validation; real checks would go here
        !data.isEmpty
    }
}
rethrows
It is used when a function itself doesn't throw an error but it accepts a throwing function as a
parameter and can potentially throw an error based on the outcome of that parameter function. It
allows the function to rethrow the error thrown by its closure parameter. It is used in functions
that take throwing functions as parameters and are responsible for propagating the error thrown
by those functions.
func processFunction(_ function: () throws -> Void) rethrows {
    // this function takes a throwing function as a parameter
    // and can rethrow any error it throws.
    try function()
}
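The throwingFunction passed to processFunction could be any throwing function, for example (a sketch):
enum SampleError: Error {
    case failed
}

func throwingFunction() throws {
    throw SampleError.failed
}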
do {
    try processFunction(throwingFunction)
} catch {
    print(error)
}
In the above example, processFunction doesn't throw any error itself, but it can propagate
errors thrown by the function it accepts as a parameter ( throwingFunction ). So, it's marked
with rethrows .
class User {
    var name: String
    var favoriteAsset: MediaAsset?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) is deallocated")
    }
}
class MediaAsset {
    var name: String
    var owner: User?

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) is deallocated")
    }
}
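The two objects themselves might be created like this (the names match the deallocation logs shown later):
var user: User? = User(name: "Swiftable")
var asset: MediaAsset? = MediaAsset(name: "ProfilePhoto")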
user?.favoriteAsset = asset
asset?.owner = user
// now, both user and asset have strong references to each other
user = nil
asset = nil
Even after setting both user and asset to nil, neither object will be deallocated because
they're still holding strong references to each other. This leads to a memory leak.
Notice that no deinit logs are printed even after both objects are set to nil. Why? Because they form a strong reference cycle, and that cycle prevents either object from being deallocated.
How to solve it?
To prevent a retain cycle, you can use weak references. In the example above, you can make the
owner property in MediaAsset weak:
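A sketch of the change (only the owner property differs from the class shown earlier):
class MediaAsset {
    var name: String
    weak var owner: User? // weak reference breaks the cycle

    init(name: String) {
        self.name = name
    }

    deinit {
        print("\(name) is deallocated")
    }
}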
This way, the MediaAsset won't keep a strong reference to the User , breaking the retain cycle
and allowing both objects to be deallocated properly when there are no other strong references
to them.
Run the code again and see the logs after making the weak reference of owner:
Swiftable is deallocated
ProfilePhoto is deallocated
A weak reference does not keep a strong reference on the instance it refers to and so does
not stop ARC from deallocating the referenced instance. This behavior prevents the
reference from becoming part of a strong reference cycle.
class MediaAsset {
    let url: URL
    var data: Data

    lazy var dataLoadHandler: () -> Void = {
        print("Data loaded for \(self.url)") // captures self strongly
    }

    init(url: URL) {
        self.url = url
        self.data = Data()
    }

    deinit {
        print("\(url) is being deallocated.")
    }
}
In the above example, the dataLoadHandler captures self (which is MediaAsset) strongly,
because it accesses self.url . Since dataLoadHandler keeps a reference to self , a strong
reference cycle is formed. Even when media is set to nil, the reference count of MediaAsset
instance is not decremented to zero, preventing deallocation. This leads to a memory leak.
To break this strong reference cycle, you can use a capture list in the closure to capture self
weakly like this:
lazy var dataLoadHandler: () -> Void = { [weak self] in
    guard let self = self else { return }
    print("Data loaded for \(self.url)")
}
If self gets deallocated before the closure is executed, the weak reference will become nil, and
the closure won't execute. This prevents the strong reference cycle and potential memory leak.
Q. What are the best practices for managing memory in iOS applications?
Managing memory is important for performance and stability. If you don't manage memory in the app, it may result in memory leaks, degraded performance, unpredictable crashes, and more.
Here are some best practices for memory management:
Use structs for lightweight data: Utilize structs instead of classes for lightweight data structures to avoid unnecessary heap allocations and reference-counting overhead.
Avoid overuse of Singletons: Be careful when using singletons as they can lead to strong
references throughout the application's lifecycle. Consider using dependency injection or other
design patterns when appropriate.
Handle memory warnings: Implement didReceiveMemoryWarning method in view controllers to
handle memory warnings gracefully by releasing non-essential resources.
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// release non-essential resources
}
Optimize image handling: Use the appropriate image formats and sizes to reduce memory
consumption. Utilize techniques like image caching and resizing.
Avoid loading large datasets at once in lists: Reuse cells and avoid rendering unnecessary or large content in table views and collection views to conserve memory.
Use lazy loading for heavy resources: Load resources such as images, data, or views lazily,
especially when dealing with large datasets or complex views. This helps in conserving memory
by loading resources only when they are needed.
lazy var heavyResource: HeavyResource = {
return HeavyResource()
}()
Proper View Controller lifecycle: Ensure proper handling of view controller lifecycle methods
such as viewDidLoad , viewWillAppear , viewWillDisappear , etc. Release resources that are
no longer needed in appropriate lifecycle methods.
Use Weak or Unowned references: A weak reference does not keep a strong reference on the
instance it refers to and so does not stop ARC from deallocating the referenced instance. This
behavior prevents the reference from becoming part of a strong reference cycle.
closure = { [weak self] in
self?.doSomething()
}
class MediaMetadata {
weak var asset: MediaAsset?
}
Check for Retain Cycles: Use the "Debug Memory Graph" tool in Xcode to visualize object
relationships and identify retain cycles.
Review Code: Regularly review your code, especially closures, delegate relationships, and
block-based APIs, as they can lead to retain cycles if not managed properly.
Use Unowned References Carefully: Unowned references are similar to weak references but
assume that the object being referred to will never be deallocated while the reference is in use.
They can lead to crashes if the referenced object is deallocated.
Avoid Strong Reference Cycles in Closures: Use capture lists ( [weak self] or [unowned
self] ) when capturing self in closures to avoid strong reference cycles. For example:
func fetchData() {
NetworkManager.fetchData { [weak self] data in
self?.processData(data)
}
}
By following these steps and incorporating them into your debugging process, you can
effectively identify and resolve memory-related issues in your apps, ensuring better performance
and stability.
class MediaAsset {
    let url: URL

    init(url: URL) {
        self.url = url
        print("Instance for \(url.absoluteString) is being created.")
    }

    deinit {
        print("Instance for \(url.absoluteString) is being deallocated.")
    }
}
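The three references discussed below might be created like this (the URL is a placeholder):
var reference1: MediaAsset? = MediaAsset(url: URL(string: "https://example.com/media.mp4")!)
var reference2: MediaAsset? = reference1
var reference3: MediaAsset? = reference1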
reference1 = nil
reference2 = nil
reference3 = nil
We set reference1 , reference2 , and reference3 to nil . Since all three references were
pointing to the same MediaAsset instance, setting them to nil means there are no more
strong references to the instance. As a result, the instance becomes eligible for deallocation.
Despite ARC's automatic memory management, it's possible to create strong reference cycles
between class instances where each instance has a strong hold on the other, causing them to not
get deallocated. This is where weak and unowned references come in handy.
Weak references are used when the other instance has a shorter lifetime. On the other hand,
unowned references are used when the other instance has the same or a longer lifetime.
ARC makes memory management more convenient and less error-prone by automating the
process of memory management, reducing the likelihood of memory leaks and dangling pointers.
Closing Connections: If your class establishes any connections, such as network connections or
file streams, you should close them in the deinit method to ensure resources are released
properly.
deinit {
socket?.disconnect()
}
Cleanup of Strong References: If your class holds strong references to other objects, the
deinit method provides an opportunity to break these strong reference cycles by setting these
references to nil .
class MediaAsset {
    var metadata: MediaMetadata?

    init() {
        metadata = MediaMetadata(asset: self)
    }

    deinit {
        metadata?.asset = nil
    }
}

class MediaMetadata {
    weak var asset: MediaAsset?

    init(asset: MediaAsset) {
        self.asset = asset
    }
}
The asset property of MediaMetadata is declared as weak to avoid creating a strong reference
cycle. Since the MediaMetadata object only needs a weak reference to its associated
MediaAsset , using a weak reference prevents a retain cycle between the two objects.
Remember, while deinit is powerful, it's essential to use it judiciously and not rely solely on it
for resource cleanup. It's good practice to couple it with other cleanup mechanisms like weak or
unowned references, and to perform manual cleanup when dealing with non-memory resources
like file handles or network connections.
Q. Explain the differences between stack and heap memory allocation and
how they work?
Stack and heap memory allocation are two different methods used to manage memory during
runtime.
Stack Memory Allocation:
It is used for static memory allocation, where memory is allocated and deallocated in a last-
in-first-out (LIFO) manner.
It is typically used for storing local variables, function parameters, and function return
addresses.
It is fast to allocate and deallocate since it follows a strict order.
Memory allocation and deallocation are handled automatically by the compiler.
The size of stack memory is limited and usually fixed.
It is thread-safe, making it suitable for multithreaded applications.
func calculateSum(a: Int, b: Int) -> Int {
    let sum = a + b // local variables like 'sum' are typically stored in stack memory
    return sum
}
// a class like this (the name is assumed) has its instances allocated on the heap
class MediaAsset {
    let url: URL

    init(url: URL) {
        self.url = url
    }
}
Understanding these memory allocation concepts is essential to write efficient and optimized
code while avoiding memory-related issues.
Q. What are the impacts that may occur of using third-party libraries and
frameworks in terms of memory management?
Using external libraries and frameworks can enhance productivity and functionality, but it's
essential to be careful of their impacts on memory management. Here are some potential
impacts:
Retain Cycles: External libraries might create retain cycles if they hold strong references to
objects that also have strong references back to them. This prevents the objects involved from
being deallocated, even when they're no longer needed.
Memory Leaks: External libraries may contain memory leaks, where objects are allocated but not deallocated properly. This can lead to an increase in memory usage over time, eventually causing the app to crash due to excessive memory consumption.
Compatibility Issues: External libraries may not always be optimized for the latest iOS versions
or device architectures. This can lead to compatibility issues, memory leaks, or crashes on
specific iOS versions or devices. It's crucial to regularly update libraries to their latest versions to
mitigate these risks.
Overhead of Unused Resources: External libraries often include features and functionalities that
your application may not need. Including these unused resources can increase the memory
overhead of your application without providing any tangible benefits.
Q. What are the things you should consider to prevent memory leaks and
improve performance in singleton implementations?
You can consider the following to prevent memory leaks and improve performance in singleton
implementations:
Avoid Strong Reference Cycles: Be careful with closures and delegate relationships within your
singleton. Use weak or unowned references when appropriate to prevent retain cycles. For
example:
enum MediaType {
    case photo
    case video
    case audio
}

// MediaAssetManager is an assumed name for the singleton described below
final class MediaAssetManager {
    static let shared = MediaAssetManager()

    private var mediaAssets: [MediaType] = []
    private let queue = DispatchQueue(label: "com.app.mediaAssetsQueue")

    private init() {}

    func removeAllMediaAssets() {
        queue.async {
            self.mediaAssets.removeAll()
        }
    }
}
Access to the mediaAssets array is synchronized using a private serial dispatch queue
( queue ), ensuring thread safety. Also, access to the mediaAssets array is encapsulated within
the singleton methods, preventing direct modification from external sources.
Q. How would you optimize memory usage when working with large
amounts of datasets?
Memory management is a very important aspect of working with large datasets in an app. If you do not handle memory usage carefully, it reduces app performance and degrades the user experience.
Here are some strategies you can follow to deal with large datasets:
Use Lazy Loading: Load data into memory only when needed. For example, if you're displaying a
list of items, load data for visible items only, and fetch more as the user scrolls.
Implement Pagination: Instead of loading all data at once, fetch data in chunks or pages. This
reduces the memory footprint by loading only a subset of the dataset at any given time.
Use Data Compression: Compressing data, especially images, can significantly reduce memory usage. iOS provides APIs for image compression; for instance, use UIImage's jpegData(compressionQuality:) or pngData() methods (the older UIImageJPEGRepresentation and UIImagePNGRepresentation functions are deprecated).
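For example, a small helper (a sketch) that compresses a UIImage before caching or uploading it:
import UIKit

func compressedData(for image: UIImage, quality: CGFloat = 0.7) -> Data? {
    // jpegData trades a little quality for a much smaller memory and disk footprint
    image.jpegData(compressionQuality: quality)
}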
Ephemeral Configuration
This configuration doesn't cache any data to disk, making it suitable for private browsing or
temporary data fetching.
let ephemeralConfiguration = URLSessionConfiguration.ephemeral
let ephemeralSession = URLSession(configuration: ephemeralConfiguration)
Background Configuration
This configuration allows the session to continue even if the app is suspended or terminated,
enabling tasks to complete in the background. You need to specify a unique identifier for the
background session.
let backgroundIdentifier = "com.app.backgroundSession"
let backgroundConfiguration =
URLSessionConfiguration.background(withIdentifier: backgroundIdentifier)
let backgroundSession = URLSession(configuration: backgroundConfiguration,
delegate: nil, delegateQueue: nil)
Custom Configuration
This configuration allows you to customize various aspects such as timeout intervals, caching
policies, and additional headers for HTTP requests. For example, setting a timeout interval
ensures that requests are automatically canceled if they take too long to complete.
let customConfiguration = URLSessionConfiguration.default
customConfiguration.timeoutIntervalForRequest = 30 // set timeout interval for requests (in seconds)
customConfiguration.requestCachePolicy = .reloadIgnoringLocalCacheData // set cache policy
customConfiguration.httpAdditionalHeaders = ["Authorization": "Bearer YOUR_ACCESS_TOKEN"] // set additional headers
let customSession = URLSession(configuration: customConfiguration)
By utilizing different configurations, you can tailor URLSession behavior to suit your app's
specific requirements, whether it's for regular network requests, background transfers, or custom
settings for specific tasks. Always remember to choose the configuration that best fits your app's
needs to optimize performance and user experience.
It includes cases for different types of errors that might occur during network operations:
invalidData , invalidJSON , and invalidResponse . These cases help to categorize and
handle errors more effectively. Also, you can add more error cases according to your requirement.
If you need error messages, you can define enum cases with associated values and attach a message to the relevant case.
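A sketch of such an enum (the serverError case with an associated value is an assumption):
enum NetworkError: Error {
    case invalidData
    case invalidJSON
    case invalidResponse
    // a case with an associated value carrying a server-provided message
    case serverError(message: String)
}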
Execute a request
Assume a function that takes a URLRequest and an optional completion handler as parameters.
It creates a data task using URLSession to perform the network request. For example:
func executeRequest(request: URLRequest, completion: ((Result<[String: Any], NetworkError>) -> ())?) {
    let dataTask = URLSession.shared.dataTask(with: request) { (data, response, error) in
        guard let data = data else {
            completion?(.failure(.invalidData))
            return
        }
        do {
            let responseJSON = try JSONSerialization.jsonObject(with: data, options: .allowFragments)
            if let responseData = responseJSON as? [String: Any] {
                completion?(.success(responseData))
            } else {
                completion?(.failure(.invalidResponse))
            }
        } catch {
            completion?(.failure(.invalidJSON))
        }
    }
    dataTask.resume()
}
Inside the data task's completion handler, it checks for potential errors:
If there is no data received ( data is nil), it calls the completion handler with a failure result
containing the .invalidData error case.
If there is data, it attempts to serialize the JSON using JSONSerialization .
If serialization is successful and the JSON data is in the expected format ( [String: Any] ),
it calls the completion handler with a success result containing the JSON data.
If the JSON data is not in the expected format, it calls the completion handler with a failure
result containing the .invalidResponse error case.
If an error occurs during JSON serialization, it calls the completion handler with a failure
result containing the .invalidJSON error case.
Note: inside the completion handler, error and response handling may vary according to the API's structure.
In order to call this function to execute a request, you can call it like this:
if let url = URL(string: "https://www.example.com/sample_data") {
    let urlRequest = URLRequest(url: url)
    executeRequest(request: urlRequest) { result in
        switch result {
        case .success(let response): print("success response")
        case .failure(let error): print("something is wrong: \(error)")
        }
    }
}
Using URLSessionConfiguration
You can set caching behavior at the session level using URLSessionConfiguration :
let configuration = URLSessionConfiguration.default
configuration.requestCachePolicy = .useProtocolCachePolicy // default caching
policy
let session = URLSession(configuration: configuration)
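You can also set a cache policy on an individual URLRequest, which takes precedence over the session default for that request (the URL is a placeholder):
var request = URLRequest(url: URL(string: "https://api.example.com/data")!)
request.cachePolicy = .returnCacheDataElseLoad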
This allows you to override the caching policy defined at the session level for specific requests.
By configuring caching behavior, you can control how URLSession handles caching of
responses, ensuring that your app behaves as expected in terms of network data retrieval and
caching. Adjusting caching behavior can help optimize network performance and improve user
experience, especially in scenarios where data freshness is critical.
In the above example, the data is encoded into a Base64 string using base64EncodedString()
method. This is a common way to encode credentials for HTTP Basic Authentication. The
Authorization header is set with the value "Basic " followed by the Base64 encoded
credentials.
Token-Based Authentication
Token-based authentication involves sending an authentication token in the Authorization header
of the HTTP request.
let authToken = "your_auth_token"
In this example, replace "your_auth_token" with the actual token obtained from the
authentication server. This token is typically obtained during the authentication process and
represents the user's identity or session.
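Attaching the token to a request might look like this (the endpoint is a placeholder):
var request = URLRequest(url: URL(string: "https://api.example.com/protected")!)
request.setValue("Bearer \(authToken)", forHTTPHeaderField: "Authorization")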
By setting the appropriate Authorization header with either the Basic or Bearer scheme, you can authenticate URLSession requests using basic authentication or token-based authentication, respectively.
Q. Where you might need to cancel an ongoing network request and how
URLSessionDataTask cancellation is implemented?
Canceling an ongoing network request is necessary in various scenarios to optimize network
usage, manage resources efficiently, and provide a better user experience. Here are some
situations where you might need to cancel a network request:
User-initiated Cancelation
When a user initiates an action that renders a network request unnecessary or undesirable, such
as navigating away from a view or closing an app, canceling ongoing network requests can
prevent unnecessary network traffic.
Response Timeouts
If a network request takes longer than expected to receive a response, canceling the request can
prevent potential performance issues or delays in your app. Setting appropriate timeout intervals
for network requests is essential, and canceling requests that exceed these intervals can help
manage network traffic effectively.
Batch Operations
When performing batch operations or bulk data transfers, canceling individual network requests
within the batch can help manage the overall workload and prioritize critical tasks. For example, if
a user cancels a multi-file download operation, canceling ongoing requests associated with the
remaining files can prevent unnecessary data consumption.
Connection Changes
In cases where the device's network connectivity changes frequently, such as switching from Wi-
Fi to cellular or entering a low-connectivity area, canceling ongoing network requests can
prevent network errors or interruptions and improve the reliability of your app.
Implementing URLSessionDataTask cancellation involves calling the cancel() method on the
data task object. Here's how you can implement URLSessionDataTask cancellation:
func cancelRequest() {
print("\(#function)")
dataTask?.cancel()
}
}
The fetchData() method is called to initiate the network request. It starts fetching data
from the URL. Upon completion or cancellation of the request, the provided closure is
executed.
After some delay, the cancelRequest() method of the NetworkManager is called. This
cancels the ongoing data task.
If the data task is cancelled before completion, the error code .cancelled is checked
inside the completion handler of the data task.
Calling the cancel() method on the data task cancels the ongoing network request associated
with that task. It's essential to handle the cancellation appropriately in your completion handler to
ensure that resources are cleaned up correctly and any necessary cleanup tasks are performed.
The fetchDataFromMultipleURLs function is called with the array of URLs. In the completion closure provided to fetchDataFromMultipleURLs, the results are processed. For each URL, if data is received the result contains a valid response; otherwise the value is nil.
This approach allows multiple network requests to be executed concurrently without blocking the
main thread, ensuring optimal performance and stability.
Q. Can you explain the difference between data tasks, download tasks,
and upload tasks in URLSession?
URLSession provides three types of tasks for handling different types of network operations: data
tasks, download tasks, and upload tasks.
Data Tasks
They are used to send and receive data over the network. They are ideal for making requests that
expect to receive small to medium-sized data payloads, such as JSON or XML responses. Data
tasks return the response body as Data objects in their completion handlers.
let url = URL(string: "https://api.example.com/data")!
let task = URLSession.shared.dataTask(with: url) { data, response, error in
// handle response and data
}
task.resume()
Download Tasks
They are used to download files from a remote server to the local device. They are suitable for
downloading large files such as images, videos, or documents. Download tasks write the
response data directly to a file on disk, allowing you to monitor the download progress and
manage file storage efficiently.
let url = URL(string: "https://example.com/large_file.zip")!
let task = URLSession.shared.downloadTask(with: url) { location, response,
error in
// handle downloaded file location
}
task.resume()
Upload Tasks
They are used to upload data from the local device to a remote server. They allow you to send
data in the request body, such as files, form data, or JSON payloads. Upload tasks provide
flexibility for uploading various types of data and support monitoring the upload progress.
let url = URL(string: "https://api.example.com/upload")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
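Completing the request above with an actual upload task might look like this (the file URL is a placeholder):
let fileURL = URL(fileURLWithPath: "/path/to/file.zip")
let task = URLSession.shared.uploadTask(with: request, fromFile: fileURL) { data, response, error in
    // handle server response
}
task.resume()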
Each type of task serves a specific purpose and offers distinct features and capabilities.
Understanding the differences between data tasks, download tasks, and upload tasks allows you
to choose the most appropriate type of task for your networking requirements.
// this request is built with a sample URL; replace it with the actual endpoint
var request = URLRequest(url: URL(string: "https://api.example.com/refresh_token")!)
request.httpMethod = "POST"
request.httpBody = "refresh_token=\(refreshToken)".data(using: .utf8)
The refreshAccessToken method constructs a request to refresh the access token using the
provided refresh token. It then performs the request using dataTask method, and upon
receiving a response, it extracts the new token and passes it to the completion handler.
An extension adds a method dataTaskWithAuthHandling to URLSession for handling
authentication automatically like this:
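A sketch of what such an extension might look like (TokenManager.shared and the refreshAccessToken completion signature are assumptions based on the description below):
extension URLSession {
    func dataTaskWithAuthHandling(
        with request: URLRequest,
        completion: @escaping (Data?, URLResponse?, Error?) -> Void
    ) -> URLSessionDataTask {
        return dataTask(with: request) { data, response, error in
            guard let httpResponse = response as? HTTPURLResponse,
                  httpResponse.statusCode == 401 else {
                // no authentication failure: pass the result straight through
                completion(data, response, error)
                return
            }
            // access token rejected: refresh it and retry the original request once
            TokenManager.shared.refreshAccessToken { newToken in
                guard let newToken = newToken else {
                    completion(data, response, error)
                    return
                }
                var retryRequest = request
                retryRequest.setValue("Bearer \(newToken)", forHTTPHeaderField: "Authorization")
                self.dataTask(with: retryRequest, completionHandler: completion).resume()
            }
        }
    }
}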
In this example:
TokenManager manages the refresh token and provides a method ( refreshAccessToken )
to refresh the access token.
An extension on URLSession adds a custom method ( dataTaskWithAuthHandling ) that
intercepts data tasks and handles authentication failures (HTTP status code 401) by
automatically refreshing the access token and retrying the original request with the new
token.
If the token refresh is successful, the original request is retried with the new access token.
Otherwise, an error is propagated to the original completion handler.
You can use dataTaskWithAuthHandling just like a regular data task, and it handles token
refreshing seamlessly in the background.
Q. How would you handle interruptions such as network errors during the
download process of a large file?
Handling interruptions such as network errors during the download process of a large file
involves implementing error handling mechanisms and ensuring robustness in your URLSession
download task. Here's how you can handle interruptions effectively:
Implement Error Handling
Handle potential network errors, timeouts, and connectivity issues gracefully in the completion
handler of your download task. Check for specific error conditions and provide informative error
messages to users.
Retry Policy
Implement a retry policy to automatically retry failed download tasks in case of transient network
errors. You can use exponential backoff or other retry strategies to gradually increase the interval
between retries.
var retryCount = 0
let maxRetries = 3 // maximum number of retry attempts

func downloadFile() {
    let url = URL(string: "https://example.com/large_file.zip")!
    let task = URLSession.shared.downloadTask(with: url) { location, response, error in
        if let error = error {
            if retryCount < maxRetries {
                // retry the download task
                retryCount += 1
                print("Download failed, retrying...")
                downloadFile()
            } else {
                print("Download failed after maximum retries: \(error.localizedDescription)")
            }
            return
        }
        // handle successful download
    }
    task.resume()
}
downloadFile()
Resume Data
When a download fails or is cancelled, URLSession can provide resume data that lets you continue the transfer from where it stopped instead of starting over; the next question walks through a full implementation.
By implementing these error handling strategies, you can ensure that interruptions such as
network errors during the download process of a large file are handled effectively, providing a
more reliable and resilient experience for users.
Q. How would you implement resumable downloads for large files using
URLSession to allow users to pause and resume the download process?
Implementing resumable downloads for large files using URLSession involves utilizing the
resumeData provided in the completion handler of URLSessionDownloadTask to save the
partially downloaded data. This allows users to pause and resume the download process
seamlessly.
Let's implement a DownloadManager class that can be used to efficiently manage file downloads using URLSession. It provides functionality to initiate, pause, and cancel download tasks seamlessly, ensuring reliability when handling large file transfers.
Here's how you can implement it:
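A sketch of the core DownloadManager type that the extension below builds on (property names follow the description; the exact implementation details are assumptions):
class DownloadManager {
    var downloadTask: URLSessionDownloadTask?
    var resumeData: Data?
    var isDownloading = false

    private let session = URLSession(configuration: .default)

    func startDownload(from url: URL) {
        if let resumeData = resumeData {
            // continue from previously saved partial data
            downloadTask = session.downloadTask(withResumeData: resumeData) { location, response, error in
                // handle the downloaded file location
            }
        } else {
            // start a fresh download
            downloadTask = session.downloadTask(with: url) { location, response, error in
                // handle the downloaded file location
            }
        }
        downloadTask?.resume()
        isDownloading = true
    }
}

let downloadManager = DownloadManager()
downloadManager.startDownload(from: URL(string: "https://example.com/large_file.zip")!)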
extension DownloadManager {
    func pauseDownload() {
        downloadTask?.cancel(byProducingResumeData: { resumeData in
            if let resumeData = resumeData {
                self.resumeData = resumeData
            }
        })
        isDownloading = false
    }

    func cancelDownload() {
        downloadTask?.cancel()
        resumeData = nil
        isDownloading = false
    }
}
// pause download
downloadManager.pauseDownload()
// cancel download
downloadManager.cancelDownload()
In this example:
DownloadManager class encapsulates the logic for starting, pausing, and canceling the
download process.
The startDownload method initiates a download task with or without resume data,
depending on whether the download is new or resumed from interruption.
The pauseDownload method cancels the download task and saves the resumeData
provided in the completion handler for resuming later.
The cancelDownload method cancels the download task and resets the resumeData .
Users can start, pause, resume, or cancel downloads as needed, and the download manager
handles the state and manages the download process accordingly.
By implementing resumable downloads with URLSession and managing the resumeData , users
can pause and resume large file downloads seamlessly, providing a more flexible and user-
friendly experience.
Publishers
A publisher is an object that sends values to its subscribers. It's a source of values, such as a
network request, a database query, or a user interface event. Publishers can send multiple values
over time, and they can also send errors or completion signals to indicate that no more values will
be sent. The Publisher protocol is declared like this:
protocol Publisher<Output, Failure>
Publishers conform to the Publisher protocol, which defines the interface for sending values to
subscribers. Publishers can be created using various methods, such as:
Creating a Just publisher, which sends a publisher that emits a single value and then finishes
immediately. It is ideal for scenarios where you have a known, constant value that you want to
publish. For example:
let justPublisher = Just("Hello, Swiftable!")
let subscriber = Subscribers.Sink<String, Never>(
receiveCompletion: { print("Completed: \($0)") },
receiveValue: { print("Received value: \($0)") }
)
justPublisher.subscribe(subscriber)
// prints:
// Received value: Hello, Swiftable!
// Completed: finished
Creating a Future publisher, that eventually produces a single value or an error. It is useful for
representing asynchronous operations that may complete in the future, such as network requests
or long-running computations. For example:
func performAsyncTask() -> Future<String, Error> {
return Future { promise in
// simulate an asynchronous task like network call
DispatchQueue.global().asyncAfter(deadline: .now() + 2) {
promise(.success("Async Task Result"))
}
}
}
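The futurePublisher and its subscriber used below might be created like this (a sketch):
let futurePublisher = performAsyncTask()
let subscriber = Subscribers.Sink<String, Error>(
    receiveCompletion: { print("Completed: \($0)") },
    receiveValue: { print("Received value: \($0)") }
)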
futurePublisher.subscribe(subscriber)
// prints:
// Received value: Async Task Result
// Completed: finished
Creating a PassthroughSubject publisher, which allows you to manually send values to its
subscribers. Using this, you can explicitly control by sending values or completion events to it. It
is useful for bridging imperative code with the reactive Combine world. For example:
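The subject and subscriber used below might be set up like this (a sketch):
let subject = PassthroughSubject<String, Never>()
let subscriber = Subscribers.Sink<String, Never>(
    receiveCompletion: { print("Completed: \($0)") },
    receiveValue: { print("Received value: \($0)") }
)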
subject.subscribe(subscriber)
subject.send("First event")
subject.send("Second event")
subject.send(completion: .finished)
// prints:
// Received value: First event
// Received value: Second event
// Completed: finished
Subscribers
A subscriber is an object that receives values from a publisher. It's a consumer of values, such as
a view model, a view controller, or a data processing pipeline. Subscribers can request values
from a publisher, and they can also cancel their subscription to stop receiving values.
Subscribers conform to the Subscriber protocol, which defines the interface for receiving values
from publishers. Subscribers can be created using various methods, such as:
Creating a Sink subscriber, which receives values and errors from a publisher.
Creating an Assign subscriber, which assigns received values to a property.
For example:
// define a User class with a name property
class User {
    var name: String {
        didSet {
            print("Name changed to \(name)")
        }
    }

    init(name: String) {
        self.name = name
    }
}
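The user instance and namePublisher used below might be created like this (a sketch; the sample names are placeholders):
let user = User(name: "Swiftable")

// a simple publisher of names; in a real app this might come from a text field or a network response
let namePublisher = ["Alice", "Bob"].publisher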
// use the Assign subscriber to bind the publisher to the User's name property
let subscription = namePublisher
.assign(to: \.name, on: user)
In the above example, assign(to:on:) is used to bind the namePublisher to the name property of the user instance. This means any values sent by namePublisher will automatically be assigned to user.name. Each time a new name is sent, the name property of the user instance is updated, triggering the didSet observer to print the updated name.
Here's how publishers and subscribers interact:
Creating a Publisher: Publishers can be created from various sources, such as user interface
events (e.g., button taps, text field changes), network requests, timers, or even custom data
sources.
Subscribing to a Publisher: Subscribers express their interest in receiving values from a
publisher by subscribing to it. This is typically done using one of Combine's operators, such as
sink or assign .
Emitting Values: When a publisher has new data to share, it emits a value through its stream.
This value propagates downstream to any subscribed subscribers.
Receiving Values: Subscribed subscribers receive the emitted values from the publisher. They
can then perform operations on these values, such as transforming, filtering, or combining them
with other streams.
Chaining Operators: Combine provides a rich set of operators that allow subscribers to
manipulate the received data in various ways. These operators can be chained together to create
complex data processing pipelines.
Handling Events: In addition to emitting values, publishers can also emit events, such as
completion events (indicating that the stream has finished) or failure events (indicating an error
occurred). Subscribers can handle these events appropriately.
Canceling Subscriptions: When a subscriber is no longer interested in receiving values from a
publisher, it can cancel its subscription. This prevents unnecessary memory usage and potential
resource leaks.
By decoupling publishers and subscribers, Combine enables a flexible and reactive programming
model that allows you to create complex data flows, handle errors and asynchronous events,
perform transformations, and react to changes in real-time in a robust way.
Using compactMap, you can transform the output of a publisher by applying a closure that returns an optional value, dropping any nil results. For example:
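A small sketch (the sample values are chosen for illustration):
let strings = ["1", "2", "three", "4"]
let parsedNumbers = strings.publisher
    .compactMap { Int($0) } // "three" produces nil and is dropped
    .sink { print($0) } // prints: 1, 2, 4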
Filtering Operators
Using filter, you can selectively pass through elements that satisfy a predicate. For example:
let numbers = [1, 2, 3, 4, 5]
let evenNumbers = numbers.publisher
    .filter { $0 % 2 == 0 }
    .sink { print($0) } // prints: 2, 4
Using removeDuplicates, you can remove consecutive duplicate elements from a publisher. For example:
let numbers = [1, 2, 2, 3, 3, 3, 4, 5]
let uniqueNumbers = numbers.publisher
    .removeDuplicates()
    .sink { print($0) } // prints: 1, 2, 3, 4, 5
These are just a few examples of the many operators available in Combine. Operators can be
chained together to create complex data processing pipelines. Combine operators provide a
powerful way to process and manage data streams. By using operators, you can create complex
data pipelines that filter, transform, combine, and handle errors in a declarative and concise
manner. Understanding and utilizing these operators allows you to handle asynchronous data
streams effectively and write more readable and maintainable code.
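As a concrete illustration, consider a debounced search pipeline like the following (the publisher name, timings, and sample inputs are assumptions matching the description below):
let searchBarPublisher = PassthroughSubject<String, Never>()
let userInputs = ["s", "sw", "swi", "swift", "swift", "swiftable"]

let searchSubscription = searchBarPublisher
    .debounce(for: .milliseconds(500), scheduler: DispatchQueue.main)
    .removeDuplicates()
    .sink { query in
        print("Sending search request for: \(query)")
    }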
DispatchQueue.global().async {
    for input in userInputs {
        searchBarPublisher.send(input)
        Thread.sleep(forTimeInterval: 0.1) // short pause to mimic typing
    }
}
In the loop, we send user input strings to searchBarPublisher with a short delay (0.1 seconds) to mimic typing. The input array simulates the user typing "swiftable" with small pauses between some keystrokes. The output shows the search query being sent to the server only after the user pauses typing for 500 milliseconds, avoiding unnecessary requests.
By using debouncing and removing duplicates, we ensure that the server is not overwhelmed
with too many requests and only receives meaningful, distinct search queries. This leads to a
more efficient and responsive application.
Q. What are subjects in Combine? When would you use them in your code?
In Combine, subjects are a type of publisher that you can explicitly control. They act as a bridge
between imperative and declarative code, allowing you to send values to subscribers manually.
Subjects can both publish new values and subscribe to other publishers. There are two main
types of subjects in Combine:
PassthroughSubject has no initial value; it simply relays values to its subscribers as they are sent.
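A minimal sketch of the button-press scenario described next:
// a PassthroughSubject that relays button taps; Never means it cannot fail
let buttonPressSubject = PassthroughSubject<Void, Never>()

let cancellable = buttonPressSubject
    .sink { print("Button was pressed") }

// each call to send() simulates a button press
buttonPressSubject.send()
buttonPressSubject.send()
// prints: "Button was pressed" twice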
In this example, buttonPressSubject acts as a bridge between the button press events and the
subscriber. Each call to send() simulates a button press.
CurrentValueSubject holds a current value and sends it to new subscribers. It's useful when you
need to provide an initial value to subscribers. For example:
// a CurrentValueSubject to hold the current text of a text field
let textFieldSubject = CurrentValueSubject<String, Never>("")
let cancellable = textFieldSubject
    .sink { print("Text field value: \($0)") }
// the subscriber immediately receives the current value ("")...
textFieldSubject.send("Hello, Swiftable!")
// ...and then prints: "Text field value: Hello, Swiftable!"
An enum CustomError that conforms to the Error protocol will be used to represent custom errors in the Combine pipeline. In real-world code, you can define more specific error cases.
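A minimal definition consistent with the cases used in the examples below:
enum CustomError: Error {
    case someError
    case unknown
}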
Using mapError
The mapError operator allows you to transform an error into a new error or a custom error type.
This can be useful when you want to provide a more user-friendly error message or handle
specific error cases differently. For example:
let numbers = [10, 20, 30, 40, 50].publisher
let subscription = numbers
.tryMap { value -> Int in
if value / 10 == 3 { throw CustomError.someError }
return value
}
.mapError { error -> CustomError in
if let customError = error as? CustomError {
return customError
} else {
return .unknown
}
}
.sink(receiveCompletion: { completion in
switch completion {
case .failure(let error):
print("Error: \(error)")
case .finished:
print("Finished")
}
}, receiveValue: { value in
print("Received value: \(value)")
})
The pipeline emits the values 10 and 20, and then throws a CustomError.someError error when
it encounters the value 30. The mapError operator transforms this error into
a CustomError enum value, and the sink operator prints an error message.
Using retry
The retry operator allows you to retry a failed publisher a specified number of times before
propagating the error to the subscriber. For example:
let numbers = [10, 20, 30, 40, 50].publisher
let retryPublisher = numbers
.tryMap { value -> Int in
if value / 10 == 3 { throw CustomError.someError }
return value
}
.retry(2) // retry up to 2 times
.sink(receiveCompletion: { completion in
switch completion {
case .failure(let error):
print("Error: \(error)")
case .finished:
print("Finished")
}
}, receiveValue: { value in
print("Received value: \(value)")
})
In the above code, if the tryMap operator throws an error, the pipeline will retry up to 2 times
before propagating the error to the subscriber.
When you run this code, the pipeline emits the values 10 and 20 and then throws a CustomError.someError error when it encounters the value 30. Because retry resubscribes to the upstream publisher, the values 10 and 20 are delivered again on each retry attempt. After the second retry also fails, the error is propagated to the subscriber and the sink operator prints the error message.
Using catch
The catch operator allows you to catch errors and return a default value or a new publisher that
continues the pipeline. For example:
let numbers = [10, 20, 30, 40, 50].publisher
let catchPublisher = numbers
.tryMap { value -> Int in
if value / 10 == 3 { throw CustomError.someError }
return value
}
.catch { error -> Just<Int> in
print("Error found: \(error)")
return Just(0) // return a default value
}
.sink(receiveValue: { value in
print("Received value: \(value)")
})
In the above example, the catch operator is used to catch and handle errors that occur in the
pipeline. In this case, the catch operator takes a closure that returns a new publisher that emits a
default value (in this case, 0) when an error occurs.
When you run this code, you'll see the following output:
Received value: 10
Received value: 20
Error found: someError
Received value: 0
Q. How would you integrate Combine with SwiftUI to build reactive user
interfaces?
Integrating Combine with SwiftUI allows you to build reactive user interfaces where the UI
updates automatically in response to changes in your data. Combine works seamlessly with
SwiftUI by leveraging the @State , @ObservedObject , and @Published property wrappers to
bind your data models to the UI. Let's build a reactive counter with Combine and SwiftUI as an example.
Define a model that uses Combine to publish changes:
class CounterModel: ObservableObject {
@Published var count: Int = 0
func increment() {
count += 1
}
func decrement() {
count -= 1
}
}
Then, define a view that observes the model:
struct CounterView: View {
    @ObservedObject var counterModel = CounterModel()

    var body: some View {
        VStack {
            Text("Count: \(counterModel.count)")
                .font(.largeTitle)
                .padding()
            HStack {
                Button(action: {
                    counterModel.increment()
                }) {
                    Text("Increment")
                }
                .padding()
                Button(action: {
                    counterModel.decrement()
                }) {
                    Text("Decrement")
                }
                .padding()
            }
        }
    }
}
In the above view, the @ObservedObject is used to observe the CounterModel instance. When
the count property in the model changes, the view automatically updates. The Text view
displays the current count and Button views call the increment() and decrement() methods of the
model to update the count.
The buttons in the view call the increment() and decrement() methods on the counterModel ,
which change the value of count . Since count is a @Published property, these changes are
automatically published to any subscribers, causing the view to update reactively.
By combining Combine with SwiftUI in this way, you can build user interfaces that automatically
respond to changes in your data model, leading to a more declarative and reactive programming
style.
Q. What is the purpose of the sink method in Combine, and when would
you use it?
The sink method is used to handle the output of a publisher and perform side effects or actions
based on the received values or completion events. It's primarily used to terminate a Combine
pipeline and execute specific logic in response to the publisher's emissions. The sink method
takes two closures as arguments:
The receiveCompletion (of type (Subscribers.Completion<Failure>) -> Void ) closure is
called when the publisher completes, either successfully or with an error. You can handle the
completion event and perform any necessary cleanup or error handling within this closure. For
example:
let publisher = somePublisher()
let cancellable = publisher
    .sink(receiveCompletion: { completion in
        switch completion {
        case .finished:
            print("Finished")           // handle successful completion here
        case .failure(let error):
            print("Error: \(error)")    // handle the error here
        }
    }, receiveValue: { data in
        print("Received data: \(data)")
    })
The receiveValue (of type (Output) -> Void ) closure is called for each value emitted by the
publisher. You can perform side effects, update UI elements, or execute any other logic based on
the received value within this closure. For example:
let publisher = somePublisher()
publisher.sink { value in
print("Received value: \(value)")
}
Some common scenarios where you would use the sink method:
UI Updates: When working with UIKit or AppKit, you can use sink to update UI elements in
response to changes in data streams or published properties. For example, you can bind a
published property to a label's text or an image view's image using sink.
Side Effects: sink is often used to perform side effects, such as logging, network requests, or
persisting data, in response to values emitted by a publisher.
Error Handling: The receiveCompletion closure in sink allows you to handle errors or
successful completion events from the publisher.
class TimerExample {
    private var cancellable: AnyCancellable?

    func startTimerAndCancelAfterCount(_ count: Int) {
        let timer = Timer.publish(every: 1.0, on: .main, in: .common)
            .autoconnect()
            .prefix(count)
        cancellable = timer
            .sink { _ in
                print("Timer emitted")
            }
    }

    func cancelTimer() {
        cancellable?.cancel()
        print("Timer canceled")
    }
}
In the above example, the startTimerAndCancelAfterCount(_:) method starts the timer and
specifies the number of emissions before cancellation. The timer emits values every second, and
the .prefix(count) operator limits the number of emissions to the specified count.
let timerExample = TimerExample()
Then, we create an instance of TimerExample , start the timer, and then perform cancellation
after 3 seconds.
When you run this code, you'll see the following output:
Timer emitted
Timer emitted
Timer emitted
Timer canceled
Q. What is the purpose of the assign operator in Combine, and when would
you use it?
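The assign operator subscribes to a publisher and writes every value it emits directly into a property of an object, identified by a key path. It requires a publisher whose failure type is Never and returns an AnyCancellable that keeps the binding alive. A minimal sketch (the CounterLabelModel type here is illustrative):
import Combine

final class CounterLabelModel {
    var displayText: String = ""
}

let model = CounterLabelModel()
let cancellable = [1, 2, 3].publisher
    .map { "Count: \($0)" }
    // assign writes each emitted value into model.displayText via its key path
    .assign(to: \.displayText, on: model)

print(model.displayText) // "Count: 3" - the last value assigned
You would typically reach for assign instead of sink when all you need is to mirror a publisher's output into a property, for example binding a published model value to a label's text. The sink-based fragment that follows achieves a similar binding manually, which is useful when you need to do more than a simple property assignment.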
init() {
// subscribe to changes in celsiusTemperature
// then, update fahrenheitTemperature accordingly
$celsiusTemperature
.sink { [weak self] _ in
self?.updateFahrenheitTemperature()
}
.store(in: &cancellables)
}
In the above code, the throttle operator ensures that the latest value is emitted at most once
every 500 milliseconds. The latest: true parameter ensures that the latest value is emitted
when the throttling interval elapses.
DispatchQueue.global().async {
    for input in userInputs {
        searchBarPublisher.send(input)
        Thread.sleep(forTimeInterval: 0.1) // 100 milliseconds between keystrokes
    }
}
The user inputs ["s", "sw", "swi", "swif", "swift", "swiftable"] with a delay of 0.1 seconds (100
milliseconds) between each character.
When you run this code, you'll see the following output:
prints:
"Searching for: s" (immediately emitted)
"Searching for: swiftable" (emitted as the latest value in the throttling
period)
The output reflects that "s" is emitted immediately and "swiftable" is emitted at the end of the 500-millisecond throttling window, capturing the latest value typed within that period.
Both debounce and throttle operators take a DispatchQueue scheduler as an argument. This
allows you to specify the queue on which the debouncing or throttling logic should be executed,
typically the main queue for UI-related operations.
By using debouncing and throttling in your Combine pipelines, you can optimize performance,
reduce unnecessary computations, and improve the overall responsiveness of your apps.
In the above function, we create a dictionary containing the password data, service, account, and a key indicating that the data should be stored as a generic password in the Keychain. The SecItemAdd function is called to add the password data to the Keychain. If the operation fails with an errSecDuplicateItem error code, it means that an item with the same service and account already exists in the Keychain.
let password = "password@12345"
let service = "com.swiftable.app"
let account = "[email protected]"
func getPassword(service: String, account: String) -> String? {
    let query: [String: Any] = [kSecClass as String: kSecClassGenericPassword,
                                kSecAttrService as String: service,
                                kSecAttrAccount as String: account,
                                kSecReturnData as String: true,
                                kSecMatchLimit as String: kSecMatchLimitOne]
    var item: CFTypeRef?
    let status = SecItemCopyMatching(query as CFDictionary, &item)
    guard status == errSecSuccess, let data = item as? Data,
          let password = String(data: data, encoding: .utf8) else { return nil }
    return password
}
When you call the getPassword function, you'll receive either the password as a String or
nil if no password is found for the specified service and account. It's important to handle the
nil case appropriately in your code, such as prompting the user to enter their password or
taking appropriate action based on your app’s requirements.
Note that when working with Keychain Services, it's important to follow best practices, such as
handling errors properly, using appropriate accessibility constraints, and securely storing and
retrieving sensitive data.
Q. Explain the role of App Transport Security (ATS) in iOS app security.
App Transport Security (ATS) is a security feature introduced by Apple in iOS 9 to improve the
security of data transmitted between an iOS app and a web server. ATS ensures that all network
requests made by an app use secure protocols, such as HTTPS, to encrypt data in transit.
ATS Role in iOS App Security
ATS enforces modern security requirements on every connection the app makes: connections must use HTTPS with TLS 1.2 or later, strong ciphers, and valid server certificates. Connections that don't meet these requirements are blocked by default, and any exceptions must be declared explicitly in the app's Info.plist under the NSAppTransportSecurity key.
Best Practices
Use HTTPS: Ensure that your server uses HTTPS and a valid SSL/TLS certificate.
Configure ATS Correctly: Configure ATS settings in your Info.plist file to ensure secure
networking.
Test Your App: Test your app to ensure that ATS is working correctly and that all network
requests are secure.
By enabling ATS and configuring it correctly, you can ensure that your iOS app provides a secure
connection between the app and the server, protecting user data and preventing MITM attacks.
Q. Explain the concept of Secure Sockets Layer (SSL) and Transport Layer
Security (TLS) in the context of app security.
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that
provide secure communication over a computer network. They are essential for app security,
especially when transmitting sensitive data such as user credentials, personal information, or
financial data over the internet.
Secure Sockets Layer (SSL)
SSL was developed by Netscape in the 1990s as a protocol for establishing secure connections between clients and servers. It operates on top of the transport layer of the network stack and
provides:
Encryption: SSL encrypts the data being transmitted between the client and server, preventing
eavesdropping and data theft.
Authentication: SSL enables the client to verify the identity of the server using digital certificates
issued by trusted Certificate Authorities (CAs).
Data Integrity: SSL ensures that the data transmitted between the client and server is not
modified or tampered with during transit.
SSL has been superseded by TLS, but the term "SSL" is still commonly used to refer to the
secure communication protocol.
Transport Layer Security (TLS)
TLS is the successor to SSL and is the current standard for secure communication over the
internet. It operates at the transport layer of the network stack and provides the same security
features as SSL:
Encryption: TLS uses advanced encryption algorithms like AES (Advanced Encryption Standard)
to encrypt the data being transmitted.
Authentication: TLS uses digital certificates and public-key cryptography to authenticate the
server (and optionally the client) to prevent man-in-the-middle attacks.
Data Integrity: TLS uses message authentication codes (MACs) to ensure the integrity of the
transmitted data.
TLS has gone through several versions (TLS 1.0, TLS 1.1, TLS 1.2, and TLS 1.3), with each new
version introducing improved security features and addressing vulnerabilities in previous
versions.
Secure Communication in an app
In an app, SSL/TLS is typically used to secure network communication with servers, APIs, or web
services. Here's an example of how you can establish a secure connection using TLS:
func fetchSecureData() {
    // an https URL causes URLSession to negotiate TLS automatically
    let url = URL(string: "https://api.example.com/data")! // illustrative endpoint
    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        if let error = error {
            print("Request failed: \(error)")
            return
        }
        // the response travelled over an encrypted TLS connection
        print("Received \(data?.count ?? 0) bytes securely")
    }
    task.resume()
}
Q. How would you handle user sessions securely within an app to prevent
unauthorized access and protect user data?
When a user successfully authenticates with a server (e.g., logging in with credentials), the
server generates a unique session token and sends it back to the client (iOS app). This session
token serves as proof of authentication and authorization for subsequent requests made by the
client.
Session tokens are included in the headers or request bodies of API requests made by the client.
This allows the server to securely identify and authenticate the user without exposing sensitive
information like passwords or user credentials over the network.
Handling user sessions securely in an app involves several key practices to ensure that sensitive
data is protected and unauthorized access is prevented. Here, we will go through these
practices.
Use Keychain to Store Session Token
The keychain provides a secure way to store sensitive information such as session tokens. Unlike
storing tokens in UserDefaults or plain text, keychain storage is encrypted and protected by the
iOS system. This is how you can save session token in keychain:
func saveSessionToken(service: String, account: String, token: String) -> Bool {
    let data = token.data(using: .utf8)!
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecAttrAccount as String: account,
        kSecValueData as String: data
    ]
    // remove any existing item so the save doesn't fail with errSecDuplicateItem
    SecItemDelete(query as CFDictionary)
    let status = SecItemAdd(query as CFDictionary, nil)
    return status == errSecSuccess
}
And this is how you can get the stored token from keychain:
func getSessionToken(service: String, account: String) -> String? {
    let query: [String: Any] = [kSecClass as String: kSecClassGenericPassword,
                                kSecAttrService as String: service,
                                kSecAttrAccount as String: account,
                                kSecReturnData as String: true,
                                kSecMatchLimit as String: kSecMatchLimitOne]
    var item: CFTypeRef?
    let status = SecItemCopyMatching(query as CFDictionary, &item)
    guard status == errSecSuccess, let data = item as? Data,
          let token = String(data: data, encoding: .utf8) else { return nil }
    return token
}
Handle Session Token Expiry and Renewal
Session tokens often have an expiration time to enhance security. You should implement mechanisms to handle token expiration and renewal to maintain an active session. When the token is about to expire, or has expired, you will need to request a new token from the server. This is typically done by sending a request to the server with the expired token (or a refresh token) and receiving a new token in response.
When making a request to the server using the current session token, the server can respond with a specific error or status code indicating that the token has expired. For example, the server may return an HTTP 401 Unauthorized status, which the app can treat as a signal to refresh the token and retry the request.
Invalidate Sessions on Logout
When a user logs out of the app, it's essential to invalidate the session on both the client and server sides to prevent unauthorized access. It's good practice to clear all local session data only after a successful response from the logout endpoint, and then reset the UI or app state back to the logged-out state.
Ensure that sessions are invalidated both locally and on the server when a user logs out by sending a logout request to the server and removing the login information stored on the device.
By leveraging session tokens, iOS apps can securely manage user sessions, authenticate and
authorize requests, handle session expiration and renewal, and seamlessly integrate with server-
side session management mechanisms. This approach provides a robust and secure foundation
for managing user sessions, protecting user data, and ensuring only authorized access to
sensitive resources and functionalities within the app.
For example, you can log unauthorized access attempts to specific resources and implement additional security measures like blocking user access if needed:
func handleResourceAccess(user: User, resource: Resource) {
    if !isAuthorizedToAccess(user: user, resource: resource) {
        // log unauthorized access attempt
        let logMessage = "Unauthorized access attempt by user: \(user.username) for resource: \(resource.name)"
        logSecurityEvent(message: logMessage)
        return
    }
    // authorized: continue with normal access to the resource
}
You can monitor for suspicious user activities, log them, and raise a real-time notification or alert
for further investigation and response. For example:
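A minimal sketch, reusing the logSecurityEvent helper above; the failed-login threshold and notification name are illustrative:
func monitorFailedLogins(for user: User, failedAttempts: Int) {
    let threshold = 5
    guard failedAttempts >= threshold else { return }
    // log the suspicious activity
    logSecurityEvent(message: "Suspicious activity: \(failedAttempts) failed logins for \(user.username)")
    // raise a real-time alert so it can be investigated immediately
    NotificationCenter.default.post(name: Notification.Name("SuspiciousActivityDetected"),
                                    object: nil,
                                    userInfo: ["username": user.username])
}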
By implementing comprehensive logging and real-time monitoring, you can enhance the security
posture of your iOS app and quickly detect and respond to security incidents, minimizing the
potential impact and ensuring the protection of user data and the integrity of your app.
viewWillLayoutSubviews()
This method is called just before the view's layout process begins. It is an opportunity for
you to perform any necessary setup or calculations related to the layout of the view's
subviews.
It is commonly used to update the frame or constraints of subviews based on the current
state of the view or any external data.
Changes made to the subviews' frames or constraints in this method will be reflected in the
subsequent layout pass.
viewDidLayoutSubviews()
This method is called immediately after the view and its subviews have been laid out and
positioned on the screen.
It is useful for performing additional layout adjustments or calculations that depend on the
final layout of the subviews.
Since the layout process has completed, you can safely access the frame properties of the
subviews and make any necessary adjustments or perform additional layout-related tasks.
Both of these methods are particularly useful when you need to perform custom layout logic or
make adjustments to the layout of subviews based on specific conditions or data. Here are some
common use cases:
Managing Custom Layouts
If you are implementing a custom layout for your view and its subviews, you can use these
methods to calculate and adjust the frames or constraints of the subviews based on the available
space or other factors.
Animating Layout Changes
When you need to animate changes to the layout of subviews, you can use these methods to
capture the initial state (in viewWillLayoutSubviews() ) and the final state (in
viewDidLayoutSubviews() ) and then perform the necessary animations.
In viewDidAppear(_:) , we will fetch the current weather data for the user's current location and
update the UI to show the temperature and weather conditions. For example:
// fetch the current weather data for the user's current location
LocationManager.shared.fetchWeatherData(for:
LocationManager.shared.currentLocation) { (weather) in
// update the UI to show the temperature and weather conditions
self.temperatureLabel.text = "\(weather.temperature)°"
self.weatherConditionLabel.text = weather.condition
}
}
In this example, we use viewWillAppear(_:) to show the loading state and fetch the user's
current location. We use viewDidAppear(_:) to fetch the current weather data and update the
UI to show the temperature and weather conditions. This ensures that the user sees the latest
data as soon as the ViewController appears on the screen.
It's important to note that viewWillAppear(_:) is called before the view appears on the screen,
while viewDidAppear(_:) is called after the view has appeared on the screen. This allows us to
show the loading state in viewWillAppear(_:) and update the UI with the latest data
in viewDidAppear(_:) .
Additionally, it's important to
call super.viewWillAppear(_:) and super.viewDidAppear(_:) to ensure that the parent
class's implementation of these methods is executed. This is important for ensuring that the view
controller's view is properly displayed and that any necessary setup is performed.
coordinator.animate(alongsideTransition: { _ in
self.updateConstraintsForSize(size)
}, completion: nil)
}
By handling this method correctly, you can ensure that your app's UI adapts smoothly and
consistently to orientation changes and other size transitions, providing a better user experience.
Note that viewWillTransition(to:with:) is not specific to handling device orientation
changes, but is called whenever a view controller's view is about to transition to a new size. This
can happen for reasons other than device orientation changes, such as when a view controller is
presented or dismissed, or when the size of the view changes due to other factors.
Q. How does iOS handle memory warnings, and how does it affect view
controllers?
iOS handles memory warnings by notifying apps when the system is running low on available
memory. When an app receives a memory warning, it should immediately release any non-critical
resources, such as cached data or images, to free up memory for the system. If the app fails to
release enough memory after receiving the warning, the system may terminate the app to reclaim
the resources it needs.
This memory management process affects view controllers in the following way:
Notification
When the system sends a memory warning, the didReceiveMemoryWarning method is called on
the app's root view controller and any presented view controllers. This method is part of the
UIViewController class, so any custom view controllers you create can override this method to
handle memory warnings appropriately.
Resource Cleanup
Within the didReceiveMemoryWarning method, you should release any non-critical resources
held by the view controller or its associated views. This may include:
Releasing cached data or images
Removing strong references to objects that are no longer needed
Invalidating and releasing expensive data structures or collections
View Unloading
On older iOS versions (prior to iOS 6), the system could also unload a view controller's view from memory after a warning. On modern iOS this no longer happens automatically, so freeing memory is your responsibility: respond to didReceiveMemoryWarning (or observe UIApplication.didReceiveMemoryWarningNotification) by discarding caches and any other data you can recreate later.
Memory warnings can affect view controllers in several ways. For example, if a view controller
does not release any unnecessary resources in response to a memory warning, the system may
terminate the app to reclaim memory. Additionally, if a view controller is not prepared to handle
memory warnings, it may cause the app to crash or behave unexpectedly.
To avoid these issues, it's important to properly handle memory warnings in your view controllers
and release any resources that are not essential to their functioning. This will help ensure that
your app remains responsive and stable, even when the system is under memory pressure.
Q. How does the View Controller Lifecycle change when it becomes a child
of another view controller?
When a view controller becomes a child of another view controller, its lifecycle is managed by the
parent view controller. This means that the parent view controller is responsible for adding and
removing the child view controller's view from the view hierarchy.
However, there are a few key differences in terms of the order and timing of certain lifecycle
methods being called. Additionally, some lifecycle methods have different implications when
dealing with child view controllers.
When a view controller becomes a child of another view controller, the following lifecycle events
occur:
The parent calls addChild(_:), which automatically calls the child's willMove(toParent:) with the parent as the argument.
The loadView() and viewDidLoad() are called on the child view controller when its view is first accessed, if they haven't been called already.
The child view controller's view is added to the parent view controller's view hierarchy by the parent.
The child view controller's didMove(toParent:) method is called with the parent as the argument to signal that the transition is complete.
Note that adding a child does not automatically add its view to the parent's view hierarchy: the parent is responsible for adding (and removing) the child's view, for calling the child's didMove(toParent:) after adding the view, and for calling willMove(toParent: nil) before removing the child.
Suppose you have a parent view controller called ParentViewController and a child view
controller called ChildViewController . Here's an example of how you might handle the
lifecycle events when adding the child view controller:
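A minimal sketch of one common pattern for this flow:
class ParentViewController: UIViewController {
    func embedChild() {
        let child = ChildViewController()

        // 1. Establish the parent-child relationship
        //    (this automatically calls child.willMove(toParent: self))
        addChild(child)

        // 2. Add and lay out the child's view
        child.view.frame = view.bounds
        view.addSubview(child.view)

        // 3. Notify the child that the move is complete
        child.didMove(toParent: self)
    }

    func removeChild(_ child: UIViewController) {
        child.willMove(toParent: nil)
        child.view.removeFromSuperview()
        child.removeFromParent()
    }
}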
However, if you want the subview to be in its new position immediately, you can
call layoutIfNeeded() on the superview after changing the frame of the subview. This will force
the layout of the superview and the subview will be in its new position immediately.
Q. What is the role of the viewDidLoad method and what tasks are typically
performed in this method?
The viewDidLoad method is a crucial part of a view controller's lifecycle. It is called after the view controller's view hierarchy has been loaded into memory, either from a storyboard or a nib file, or after being created programmatically in code. For example:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // configure subviews
        configureSubviews()
        // load data
        loadData()
    }

    deinit {
        // clean up observers or delegates
        NotificationCenter.default.removeObserver(self)
    }
}
The viewDidLoad() is called only once during the lifetime of a view controller instance.
Subsequent presentations or dismissals of the view controller will not trigger this method again. If
you need to perform setup tasks every time the view controller's view appears, you should use
the viewWillAppear() or viewDidAppear() methods instead.
You can call the addDetailView() function whenever you need to add detailView to the
super view.
Lazy Loading of Data Models
You can lazily load data models or other resources when they are actually needed, instead of
loading them during the initial setup of the view controller. This can be done whenever required
or even in response to user interactions. For example:
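A minimal sketch, assuming a hypothetical ProductDataModel type:
class ProductViewController: UIViewController {
    private var dataModel: ProductDataModel?

    // create and load the model only when it is actually needed,
    // for example in response to a user interaction
    func showProductDetails() {
        if dataModel == nil {
            let model = ProductDataModel()
            model.loadData()
            dataModel = model
        }
        // present the details using dataModel...
    }
}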
In this example, by lazily loading the dataModel , we defer its creation and data loading until it is
required, potentially reducing the initial load time and memory footprint.
Lazy Loading with Closures
Swift also provides a lazy loading mechanism using closures. This can be useful when you need
to perform lazy initialisation of properties or other objects. For example:
class ViewController: UIViewController {
    lazy var dataManager: DataManager = {
        // this closure runs only on first access; the result is cached for reuse
        let manager = DataManager()
        return manager
    }()
}
In the above example, the closure is executed only when the dataManager is accessed for the
first time, and the resulting instance is stored and reused for subsequent accesses. This lazy
initialization approach can be useful when you have properties or objects that are expensive to
create or require complex setup.
Lazy loading can help improve the performance and memory efficiency of your app by deferring
the creation or initialization of objects until they are actually needed. However, it's important to
carefully consider when and where to apply lazy loading, as it can add complexity and make the
code harder to reason about if not used judiciously.
Q. How can you ensure that a view controller properly removes its
notification observers to prevent memory leaks?
Ensuring that a view controller properly removes its notification observers is crucial to prevent
memory leaks. If a view controller registers for notifications but fails to unregister or remove the
observers when it's no longer needed, it can lead to strong reference cycles, causing the view
controller and its associated objects to remain in memory even after they are no longer needed.
Here are some best practices to ensure that a view controller properly removes its notification
observers:
Use the deinit() Method
The deinit() method is called automatically when an instance of a class is about to be
deallocated. You can use this method to remove any notification observers registered by the view
controller. This ensures that observers are automatically removed when the view controller is
deallocated, preventing potential memory leaks. For example:
class ViewController: UIViewController {
deinit {
// remove the notification observer
NotificationCenter.default.removeObserver(self)
}
}
If you use the block-based API addObserver(forName:object:queue:using:), store the returned token and remove that specific observer when the view controller is deallocated. For example:
class ViewController: UIViewController {
    private var observerToken: NSObjectProtocol?

    deinit {
        // invalidate the observation
        if let token = observerToken {
            NotificationCenter.default.removeObserver(token)
        }
    }
}
By following these best practices, you can ensure that notification observers are properly
removed when they are no longer needed, preventing potential memory leaks and improving the
overall memory management of your app.
These techniques should be applied not only to view controllers but also to any objects that
register for notifications. Proper cleanup of observers is crucial for maintaining a healthy memory
footprint and preventing unexpected behavior in your app.
These are supporting functions that perform different subtasks to play and cache a video:
func loadVideoFromCache(_ videoURL: URL) -> Data? {
// load video from cache
// implement logic to retrieve video from local cache
return cachedVideo
}
Q. Have you encountered any challenges with image loading and rendering
performance in iOS apps? How did you address them?
Image loading and rendering performance are common challenges in iOS apps, especially when
dealing with large images or numerous images in a collection view or table view.
There are several best practices to address these challenges:
Lazy Loading
Load images asynchronously as they are needed rather than all at once. This prevents the app
from being overwhelmed with image processing tasks at startup or when loading large datasets.
Choose Image Format
Using the WebP image format can be beneficial for improving image loading and rendering
performance, as WebP offers better compression and smaller file sizes compared to formats like
JPEG or PNG.
Image Caching
Implement image caching mechanisms to store images in memory or on disk after they are
loaded once. This reduces the need to fetch the same image repeatedly, improving performance
Image Processing
Performing image manipulation tasks such as resizing, cropping, or applying filters. These tasks
can be CPU-intensive and may cause stuttering in the UI if performed on the main thread. Use
background threads or operation queues to process images asynchronously. For example:
DispatchQueue.global().async {
    // perform image processing off the main thread
    let processedImage = processImage(image)
    // hop back to the main queue to update the UI
    DispatchQueue.main.async { imageView.image = processedImage }
}
Data Synchronization
Synchronizing data between local and remote data stores, such as databases or cloud services.
Performing synchronization tasks on the main thread can lead to UI freezes, especially when
dealing with large datasets or slow network connections. Use background threads or operation
queues to handle data synchronization asynchronously. For example:
DispatchQueue.global().async {
    // perform data synchronization off the main thread
    synchronizeData()
}
Long-Running Tasks
Performing tasks that take a significant amount of time to complete, such as data processing or
calculations. Executing long-running tasks on the main thread can cause the app to appear
unresponsive. Use background threads or operation queues to execute these tasks
asynchronously. For example:
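A minimal sketch; processLargeDataSet() and updateUI(with:) are illustrative helpers:
DispatchQueue.global(qos: .utility).async {
    // heavy work stays off the main thread
    let result = processLargeDataSet()
    DispatchQueue.main.async {
        // report the result back on the main queue
        updateUI(with: result)
    }
}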
Q. Can you explain the importance of tools like Instruments, Xcode Profiler,
and other performance monitoring tools in iOS development?
The Xcode Profiler is highly adjustable, allowing you to zero in on the most relevant data and conduct analysis specific to your work. It enables you to inspect and analyze your code for inefficiencies, resulting in a more stable and smooth-running app for users.
Instruments, Xcode Profiler, and other performance monitoring tools play an important role in iOS
development as they help developers identify and optimize performance bottlenecks in their
apps. These tools provide valuable insights into the app's behavior, allowing developers to solve
many problems like:
Track down problems in source code: Identify memory leaks, crashes, and other issues that
can negatively impact the user experience.
Q. How do you manage and optimize the loading and rendering of large
data sets in table views or collection views?
Working with large data sets in iOS apps can be challenging, especially when it comes to
displaying and rendering the data efficiently in table views or collection views. As the amount of
data increases, the performance of your application can suffer, leading to sluggish scrolling, slow
loading times, and a poor overall user experience.
Fortunately, you can use some best practices to optimize the loading and rendering process,
ensuring smooth performance even with massive data sets.
Use pagination
Load data in chunks or pages instead of loading all data at once. This can reduce the memory
footprint and improve the loading time. You can implement pagination by loading a fixed number
of items at a time, or by loading more items as the user scrolls down. You can also provide a way
for the user to load more items manually, such as by tapping a "Load More" button.
func loadNextPage() {
// load the next page of data
// update the table view or collection view with the new data
}
Use dequeueReusableCell
Use the dequeueReusableCell(withIdentifier:for:) method to reuse table view cells instead
of creating new ones. This can reduce the memory footprint and improve the performance. When
a cell is scrolled off the screen, it is added to a reuse queue. When a new cell is needed,
the dequeueReusableCell(withIdentifier:for:) method returns a reusable cell from the
queue if one is available, or creates a new one if none is available.
These practices can significantly improve the loading and rendering performance of your app, resulting in a smoother and more responsive user experience. By following these best practices, you can ensure that your app handles large data sets efficiently and effectively.
Q. Can you explain the role of lazy loading and prefetching in optimizing
the performance of list-based UI components?
Lazy loading and prefetching are techniques used to optimize the performance of list-based UI
components.
Lazy loading is a pattern that defers the loading of non-critical resources at runtime. In the
context of list-based UI components, lazy loading can be used to defer the loading of data for list
items that are not currently visible to the user. This can help to reduce the initial load time of the
list and improve the overall performance of the app.
Prefetching is a related technique that can be used to improve the performance of lazy loading. Prefetching involves loading data for list items that are likely to become visible to the user in the near future, typically by adopting the UITableViewDataSourcePrefetching or UICollectionViewDataSourcePrefetching protocol, so the data is ready by the time the user scrolls to those items.
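A minimal sketch of table view prefetching; the ImageLoader helper and the items array are illustrative:
class FeedViewController: UITableViewController, UITableViewDataSourcePrefetching {
    var items: [URL] = []
    let imageLoader = ImageLoader()

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.prefetchDataSource = self
    }

    // called for rows that are likely to become visible soon
    func tableView(_ tableView: UITableView, prefetchRowsAt indexPaths: [IndexPath]) {
        for indexPath in indexPaths {
            imageLoader.startLoading(items[indexPath.row])
        }
    }

    // called when prefetched rows are no longer expected to appear
    func tableView(_ tableView: UITableView, cancelPrefetchingForRowsAt indexPaths: [IndexPath]) {
        for indexPath in indexPaths {
            imageLoader.cancelLoading(items[indexPath.row])
        }
    }
}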
func fetchData(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        // simulate fetching data on a background queue
        // once data is fetched, call the completion handler on the main thread
        DispatchQueue.main.async {
            completion("Fetched Data")
        }
    }
}

fetchData { fetchedData in
    // update UI with the latest data
    print("Fetched data: \(fetchedData)")
}
DispatchQueue.global().async {
    // show the loading indicator first; sync is safe here because
    // this closure is not running on the main queue
    DispatchQueue.main.sync { showLoadingIndicator() }
    let data = fetchDataFromNetwork()
    DispatchQueue.main.async {
        // update UI with loaded data
        hideLoadingIndicator()
        displayData(data)
    }
}
In this example, sync is used to update the UI and show a loading indicator on the main queue
before fetching data from the network. Since the UI update is expected to be quick, using sync
here is acceptable.
Here are the key differences between async and sync:
Blocking: async doesn't block the calling thread, while sync blocks the calling thread until the
task is complete.
Execution order: With async, the task is executed in the background, and the calling thread
continues executing without waiting. With sync, the task is executed synchronously, and the
calling thread waits for the task to complete.
Use cases: Use async when you need to perform a task in the background without blocking the
UI or other tasks. Use sync when you need to ensure that a task is completed before continuing
with other tasks.
It's recommended to use DispatchQueue.main.async for most UI updates and tasks that need
to be executed on the main queue. This ensures that the main queue remains responsive and can
handle user interactions. Use DispatchQueue.main.sync only when necessary and for short-
lived tasks that need to be executed immediately on the main queue.
Synchronous Tasks
In synchronous tasks, the program waits for the current task to complete before moving on to the next line of code. For example:
var words = ["swiftable", "developer", "community"]

func sortWords() {
    // sorts the array in place, synchronously on the current thread
    words.sort()
}

sortWords()
print("words: \(words)")
// words: ["community", "developer", "swiftable"]
When we call the sortWords() function, it executes the words.sort() line synchronously on
the current thread. This means that the current thread (in this case, the main thread) will block
and wait until the sorting operation is completed before moving to the next line of code.
In this case, using a synchronous operation for sorting the in-memory array is acceptable
because the operation is likely to be fast and won't cause any noticeable delay or freezing of the
user interface.
Asynchronous Tasks
In asynchronous tasks, the program does not wait for a task to complete. Instead, it continues
executing other tasks while waiting for the asynchronous task to finish. Asynchronous tasks are
commonly used for operations that may take some time to complete, such as network requests,
file I/O, or animations. For example:
let imageURL = URL(string: "https://example.com/image.jpg")!

func fetchData() {
    // fetching synchronously like this blocks the main queue (the problem described below)
    if let data = try? Data(contentsOf: imageURL) {
        // let's assume you are receiving data in string format after encoding
        let strings = String(data: data, encoding: .utf8)?.components(separatedBy: "\n")
        items = strings ?? []   // items: the table view's data source array (assumed)
        // reload list
        tableView.reloadData()
    } else {
        // update UI for error case here
    }
}
In this example, the fetchData() function performs a network request by fetching data from a
URL on the main queue using Data(contentsOf:) . This operation can take a significant amount
of time, depending on the network conditions and the size of the data being fetched.
When you run this app and attempt to interact with the table view or other UI elements while the
data is being fetched, you'll notice that the app becomes unresponsive. This is because the main
queue is blocked by the long-running network request, preventing it from handling user
interactions and updating the UI.
To prevent this issue, you can use concurrency by performing the network request on a
background queue or a separate thread. This allows the main queue to remain responsive,
enabling users to interact with your app while the data is being fetched in the background. For
example:
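A sketch of that change, assuming the same imageURL and a hypothetical items data source property on the view controller:
let queue = DispatchQueue(label: "com.swiftable.networking")

func fetchDataAsync() {
    queue.async {
        let data = try? Data(contentsOf: imageURL)
        let strings = data.flatMap { String(data: $0, encoding: .utf8)?.components(separatedBy: "\n") }
        DispatchQueue.main.async {
            // back on the main queue: update the data source and reload the list
            self.items = strings ?? []
            self.tableView.reloadData()
        }
    }
}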
We dispatch the network request to the background queue using queue.async { ... } . This
block of code will execute concurrently on the background queue, allowing the main queue to
remain responsive.
With concurrency, the app's user interface will remain responsive even during the network
request, providing a better user experience. Users can scroll, tap, or interact with other UI
elements without any noticeable freezing or unresponsiveness.
It's important to note that while this example uses GCD for concurrency, you can also achieve
similar results using other concurrency mechanisms such as operations or async/await.
func performTasks() {
    // creating a serial queue
    let serialQueue = DispatchQueue(label: "com.swiftable.serial")
    // adding a long-running task
    serialQueue.async {
        sleep(5)
        print("Task 1 executed")
    }
    // adding a short task
    serialQueue.async {
        print("Task 2 executed")
    }
}
// Print:
// Task 1 executed
// Task 2 executed
Since the queue is serial, Task 2 has to wait for Task 1 to finish its execution, even though
Task 2 is much shorter. This means that Task 2 executed will be printed after a delay of 5
seconds, as it has to wait for the first task to complete.
This shows a scenario where serial execution may not be ideal, as tasks are executed strictly in the order they were added to the queue, regardless of their duration or priority. Even though Task 2 could have completed quickly, it has to wait for the longer Task 1 to finish first.
In situations like this, it might make more sense to use a concurrent queue or to prioritize tasks based on their duration or importance, allowing shorter or more important tasks to be executed sooner rather than forcing them to wait behind longer tasks. This would be analogous to allowing Task 2 to go ahead of Task 1 in the queue, since its work can be completed much faster without causing a significant delay for the other task.
Concurrent Queues
Concurrent queues can execute multiple tasks simultaneously. Tasks are started in the order they
are added, but they may finish in any order, depending on system conditions and available
resources. Useful when you have independent tasks that can run concurrently without
dependencies on each other. For example:
func performTasks() {
// creating a queue
let concurrentQueue = DispatchQueue(label: "com.swiftable.queue",
attributes: .concurrent)
// adding a task
concurrentQueue.async {
sleep(5)
print("Task 1 executed")
}
// adding a task
concurrentQueue.async { print("Task 2 executed") }
// adding a task
concurrentQueue.async { print("Task 3 executed") }
}
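As a follow-up, consider updating the UI after an asynchronous fetch. A minimal sketch of such a method inside a view controller, where NetworkManager, the labels, and showErrorAlert are illustrative:
func fetchProductDetails(productID: String) {
    NetworkManager.shared.fetchProductDetails(id: productID) { [weak self] result in
        DispatchQueue.main.async {
            switch result {
            case .success(let product):
                // update the UI with the fetched product on the main queue
                self?.nameLabel.text = product.name
                self?.priceLabel.text = "\(product.price)"
            case .failure(let error):
                // surface the error to the user
                self?.showErrorAlert(message: error.localizedDescription)
            }
        }
    }
}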
In this example, the fetchProductDetails function fetches the product details from the server
asynchronously using NetworkManager.
When the fetch operation completes, either successfully or with an error, the appropriate UI
updates are performed on the main queue using DispatchQueue.main.async . By dispatching
the UI updates to the main queue, we ensure that the changes are applied correctly and avoid
potential race conditions or crashes that could occur if the UI is updated from a background
thread.
These QoS classes are used to specify the priority and importance of tasks executed on dispatch
queues.
.userInteractive : This is the highest priority service recommended for tasks that must be
done immediately in order to keep the user interface responsive. Examples include handling
user input, animations, and other time-sensitive operations that directly affect the user
experience.
.userInitiated : It is recommended for tasks that were initiated by the user and should be
executed as soon as possible. Examples include processing data after a user action, such as
applying a filter to an image or sending a network request after the user taps a button. It has
higher priority than the default QoS class.
.default : It is used for tasks that are important but don't require special prioritization.
Examples include loading data from disk, processing data in the background, and other
general-purpose tasks.
.utility : It is recommended for long-running tasks that should be executed at a lower
priority. Examples include performing calculations, processing large amounts of data, and
other computationally intensive tasks that are not user-facing. It has lower priority than the
default QoS class.
.background : It has the lowest priority and is recommended for tasks that should be executed only
when the system has available resources. Examples include prefetching data, performing
backups, and other tasks that can be deferred or run in the background without affecting the
user experience.
func applyFilter(_ filter: Filter, to image: UIImage) {
let filterQueue = DispatchQueue.global(qos: .userInitiated)
filterQueue.async {
// process the image with the selected filter
guard let filteredImage = self.applyFilterToImage(filter, image: image)
else {
return
}
DispatchQueue.main.async {
// update the UI with the filtered image
self.imageView.image = filteredImage
}
}
}
In the above example, we offload the computationally intensive image filtering task to a
background thread, allowing the app to remain responsive while the filtering operation is being
performed. The main queue is only used for updating the UI after the filtering operation is
complete, ensuring a smooth and responsive user experience.
Custom Serial Queues
You can create custom serial queues for executing tasks in a specific order. These queues are
useful when you need to ensure that certain tasks are executed sequentially, such as writing data
to a file or updating a shared resource. You can see the example explained in the previous
question for your reference.
Custom Concurrent Queues
You can also create your own concurrent queues for executing tasks concurrently. These queues are
useful when you have tasks that can be executed in parallel, such as downloading files or
processing images. You can see the example explained in the previous question for your
reference.
Q. Discuss the use of DispatchGroup in GCD. Can you provide an example
of how you would use it to manage asynchronous tasks?
Grand Central Dispatch (GCD) provides a powerful and efficient way to manage concurrent
operations and asynchronous tasks. One of the useful constructs in GCD is DispatchGroup,
which allows you to track and coordinate a group of tasks, ensuring that all tasks in the group
complete before moving on to the next step.
The DispatchGroup is particularly useful when you have a set of asynchronous tasks that need to
be completed before proceeding with some subsequent operation or updating the user interface.
It helps you avoid complex callback nesting or timing issues that can arise when dealing with
multiple asynchronous tasks.
For example, suppose you're building an app that displays information from multiple web
services. Specifically, your app needs to fetch data from three different APIs and display the
combined data to the user. Here's how you can use DispatchGroup to manage a group of
asynchronous network requests:
// create a dispatch group
let group = DispatchGroup()
Iterate over the URLs and make network requests like this:
var results: [Data] = []
for url in urls { // urls: the API endpoints to call
    group.enter()
    URLSession.shared.dataTask(with: url) { data, response, error in
        // append the data to the results array if the request was successful
        if let data = data {
            results.append(data)
        }
        group.leave()
    }.resume()
}
For each URL, we enter the dispatch group using group.enter() and make an asynchronous
network request. When the network request completes, we append the data to the results
array and leave the dispatch group using group.leave() .
After entering the dispatch group for all tasks, we use group.notify(queue:) to specify a
closure that will be executed once all tasks in the group have completed. In this closure, we can
safely access and process the results array, as all network requests have finished.
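A sketch of that notify step, assuming the combined results should be handled on the main queue:
group.notify(queue: .main) {
    // all requests have completed, so results is complete here
    print("Fetched \(results.count) responses")
    // process or display the combined data...
}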
The DispatchGroup ensures that the notify closure is not called until all tasks have left the group,
guaranteeing that all network requests have completed before proceeding with the subsequent
operations.
Using DispatchGroup in this way simplifies the management of multiple asynchronous tasks and
eliminates the need for complex callback nesting or timing issues. It provides a clean and
structured way to coordinate the completion of a set of asynchronous operations.
func increment() {
count += 1
}
When you run the above example, you can see the output is varying because of race conditions.
To fix this, you can use a serial queue or a lock to ensure thread-safety. For example:
func increment() {
    serialQueue.async {
        // all writes go through the same serial queue, so they cannot overlap
        self.count += 1
    }
}
By using a serial dispatch queue and synchronizing access to the shared count property, we
have effectively eliminated the race condition and ensured thread-safety.
Deadlocks
Deadlocks occur when two or more threads are waiting for each other to release resources that
they need, resulting in a situation where none of the threads can proceed. Deadlocks can cause
your app to freeze or become unresponsive. To avoid deadlocks, you should be careful when
acquiring and releasing locks, and follow best practices for lock ordering and avoiding circular
dependencies.
Let’s create a deadlock with an example:
queue1.async {
print("Task ID: 1")
queue2.sync { print("Task ID: 2") }
print("Task ID: 3")
}
queue2.async {
print("Task ID: 4")
queue1.sync { print("Task ID: 5") }
print("Task ID: 6")
}
// Print:
// Task ID: 1
// Task ID: 4
You can see the incomplete output in the above example. The Task ID: 1 runs on queue1 and
attempts to acquire a lock on queue2 using queue2.sync . In the same way, Task ID: 4 runs
on queue2 and attempts to acquire a lock on queue1 using queue1.sync . Since both tasks are
waiting for each other to release the lock, a deadlock occurs.
To prevent the deadlocks, we can use various techniques such as avoiding nested locks, using
timeouts, and breaking circular dependencies. One solution is to
use queue1.async and queue2.async instead of queue1.sync and queue2.sync . This
change will allow the tasks to run concurrently without waiting for each other to release the lock,
avoiding the deadlock. For example:
queue1.async {
print("Task ID: 1")
queue2.async { print("Task ID: 2") }
print("Task ID: 3")
}
queue2.async {
print("Task ID: 4")
queue1.async { print("Task ID: 5") }
print("Task ID: 6")
}
// Print (exact interleaving may vary between runs):
// Task ID: 1
// Task ID: 4
// Task ID: 3
// Task ID: 6
// Task ID: 5
// Task ID: 2
Thread Safety
Not all data structures and APIs in Swift are thread-safe by default. When working with shared
resources across multiple threads, you need to ensure that the data structures and APIs you're
using are either thread-safe or that you're using proper synchronization techniques to make them
thread-safe. For example, we have used DispatchQueue in the first point (i.e. Race Conditions) to
enable thread safety.
Concurrency issues can be difficult to reproduce and debug, so it's important to thoroughly test
your concurrent code under various scenarios and with different workloads.
In this example, fetchDataSync() is a synchronous function that blocks the current thread until
the data is fetched. The queue.async method is used to run this function on a background
thread, so that it doesn't block the main thread.
The key difference between these two approaches is that async/await makes it easier to write
asynchronous code that looks like synchronous code, while GCD is a lower-level API that gives
you more control over how and where your code is executed. Here are some specific differences
between the two:
Async/await is easier to read and write than GCD. The syntax is more concise and the code
flow is more intuitive.
Async/await automatically handles the allocation and deallocation of threads, while GCD
requires you to manually manage dispatch queues and threads.
Async/await is built on top of Swift's concurrency model, which provides more efficient and
scalable concurrency than GCD.
GCD provides more control over how and where your code is executed. For example, you
can use GCD to create custom dispatch queues with specific quality of service (QoS)
attributes.
Swift's async/await model is a higher-level and easier-to-use API for writing asynchronous code,
while GCD is a lower-level and more flexible API for managing concurrency. Which one you
choose to use depends on your specific use case and the requirements of your project.
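A brief sketch contrasting the two styles; the endpoint URL is illustrative:
// GCD: completion-handler style
func loadUserGCD(completion: @escaping (Data?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let data = try? Data(contentsOf: URL(string: "https://example.com/user")!)
        DispatchQueue.main.async { completion(data) }
    }
}

// async/await: reads top to bottom like synchronous code (iOS 15+ URLSession API)
func loadUser() async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: URL(string: "https://example.com/user")!)
    return data
}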
// sample operation
for i in 1...10 {
    // stop early if the operation has been cancelled
    if isCancelled { return }
    print("Task \(i)")
    Thread.sleep(forTimeInterval: 1)
}
In the above code, you can see how to use the Operation class to perform long-running tasks
that can be cancelled safely and efficiently. By checking the isCancelled property regularly, the
operation can be cancelled at any point during its execution, saving resources and improving the
user experience.
Above code creates an OperationQueue, adds a VideoOperation to the queue, and then cancels
the operation after 5 seconds. This is useful when you want to execute a task concurrently and
have the ability to cancel it if needed.
// output
Task 1
Task 2
Task 3
Task 4
Task 5
Task 6
Task cancelled
Note that the VideoOperation class is responsible for checking if it's been cancelled and
stopping its task accordingly. In a practical scenario, you would need to implement this logic in
custom Operation subclass.
Q. Can you explain how error handling works with async/await, and what
are the best practices for handling errors in asynchronous tasks?
Error handling in asynchronous tasks using async/await is similar to synchronous error
handling, but with a few key differences. Here's how it works:
When an asynchronous task encounters an error, it throws that error. You can catch and handle
that error using a do-catch statement, just like in synchronous code. Here's an example:
do {
let result = try await someAsyncFunction()
// handle the result
} catch {
// handle the error
}
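Use defer to clean up resources
If an asynchronous task acquires a resource, a defer block guarantees the cleanup runs whether or not an error is thrown. A minimal sketch, with readContents(of:) as an illustrative function:
func readContents(of url: URL) async throws -> String {
    let fileHandle = try FileHandle(forReadingFrom: url)
    defer {
        // always executed, even if an error is thrown while reading
        try? fileHandle.close()
    }
    let data = try fileHandle.readToEnd() ?? Data()
    return String(decoding: data, as: UTF8.self)
}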
In this example, the defer statement ensures that the file handle is closed, even if an error occurs
while reading from the file.
Use try? or try! for non-critical errors
If an asynchronous task might throw an error that isn't critical (such as a network timeout), you
can use try? or try! to ignore or suppress the error. However, be careful when using these
operators, as they can make your code more difficult to debug if an error does occur.
Use throws to propagate errors
If you can't handle an error in an asynchronous task, you can propagate the error to the caller by
declaring the function as throws. Here's an example:
func someAsyncFunction() async throws -> String {
// asynchronous code that might throw an error
}
By following these best practices, you can ensure that your asynchronous code is robust,
reliable, and easy to maintain.
Structured concurrency is a way to write asynchronous code that is easier to read, write, and
maintain. It's based on the concept of tasks, which are units of asynchronous work that can be
composed together to create more complex asynchronous operations. Let’s understand it with an
example.
func fetchUser() async throws -> User {
async let userData = URLSession.shared.data(from: URL(string:
"https://example.com/user")!)
async let userProfile = URLSession.shared.data(from: URL(string:
"https://example.com/user/profile")!)
do {
let user = User(
data: try await userData,
profile: try await userProfile
)
return user
} catch {
throw error
}
}
In this example, we define a function fetchUser() that returns a User object. The function
uses two asynchronous operations to fetch the user's data and profile from two different URLs.
The async let syntax is used to declare two tasks, userData and userProfile , which are
executed concurrently.
The try await syntax is used to wait for the completion of each task and retrieve the result.
The User object is created by combining the results of the two tasks.
One of the key benefits of structured concurrency is that it allows you to write asynchronous
code that is more readable and maintainable. By using tasks and async let , you can break
down complex asynchronous operations into smaller, more manageable pieces.
Another benefit is that structured concurrency provides better error handling. If an error occurs in
one of the tasks, it will be propagated to the caller of the fetchUser() function. This makes it
easier to handle errors in a centralised way.
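A sketch of the cancellation scenario described next, using the fetchUser() function above:
let task = Task {
    do {
        let user = try await fetchUser()
        print("Fetched user: \(user)")
    } catch {
        // also reached if the task is cancelled while awaiting
        print("Fetch failed or was cancelled: \(error)")
    }
}

// cancel the task after 2 seconds; its child tasks are cancelled as well
DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    task.cancel()
}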
In the above example, we use the DispatchQueue.main.asyncAfter function to cancel the task
after 2 seconds. If the task is cancelled, all of its child tasks will also be cancelled.
Structured concurrency provides a powerful way to write asynchronous code. It allows you to
break down complex asynchronous operations into smaller, more manageable pieces, and
provides better error handling and cancellation mechanisms.
Q. What are Swift actors, and how do they differ from traditional
concurrency mechanisms like locks and semaphores?
Swift actors are a concurrency mechanism introduced in Swift 5.5 that allows for thread-safe
access to shared resources. They differ from traditional concurrency mechanisms like locks and
semaphores in that they provide a higher-level abstraction and are more expressive.
Actors are essentially a way to encapsulate shared state and provide a thread-safe interface to
access that state. They achieve this by serializing access to the shared state, ensuring that only
one task can access the state at a time.
Let’s see an example of how to use actor:
actor Article {
    let id: Int
    private(set) var viewCount: Int

    init(id: Int) {
        self.id = id
        self.viewCount = 0
    }

    func incrementViewCount() {
        viewCount += 1
    }
}

let article = Article(id: 1)

DispatchQueue.concurrentPerform(iterations: 10) { _ in
    Task {
        await article.incrementViewCount()
    }
}

Task {
    let finalViewCount = await article.viewCount
    print("Total view count: \(finalViewCount)") // prints: 10 once all increments have finished
}
In this example, the Article actor encapsulates the viewCount state and provides a thread-
safe interface to increment it. The incrementViewCount method is serialized, ensuring that only
one task can increment the view count at a time. While, traditional concurrency mechanisms like
locks and semaphores require manual synchronization and can be error-prone.
Swift actors and traditional concurrency mechanisms like locks and semaphores are both used to
synchronize access to shared resources in concurrent programming. However, they differ in their
approach, complexity, and usage.
Locks
They are a low-level synchronization primitive that allows only one thread to access a shared
resource at a time. They work by locking the resource, allowing one thread to access it, and
blocking other threads until the lock is released.
Semaphores
They are a more general form of locks that allow a limited number of threads to access a shared
resource. They work by maintaining a count of available slots, and threads can acquire a slot
(decrement the count) or release a slot (increment the count).
Actors
They are a high-level concurrency mechanism that provides a thread-safe interface to shared
state. They encapsulate the shared state and provide a serialized access to it, ensuring that only
one task can access the state at a time.
Key differences:
Abstraction level
Actors provide a higher-level abstraction than locks and semaphores. They encapsulate the
shared state and provide a thread-safe interface, whereas locks and semaphores require manual
synchronization and error-prone code.
Concurrency model
Actors are designed for asynchronous, non-blocking concurrency, whereas locks and
semaphores are typically used for synchronous, blocking concurrency.
Serialization
Actors serialize access to shared state, ensuring that only one task can access the state at a
time. Locks and semaphores also provide serialization, but they require manual synchronization
and can be more error-prone.
Error handling
Actors provide built-in error handling and cancellation, whereas locks and semaphores require
manual error handling and cancellation.
Complexity
Actors are generally easier to use and less error-prone than locks and semaphores, which require
manual synchronization and can be more complex to use correctly.
When to use each:
Swift actors
Use Swift actors when you need to encapsulate shared state and provide a thread-safe interface
to it. They are well-suited for asynchronous, non-blocking concurrency and provide a high-level
abstraction.
Locks
Use locks when you need low-level, mutually exclusive access to a small critical section and you are comfortable managing the lock and unlock calls yourself.
Semaphores
Use semaphores when you need to limit how many threads can access a shared resource at the same time, or to coordinate work between queues. For example, a semaphore with a value of 1 can serialize access to a shared resource from two queues with different priorities:
let semaphore = DispatchSemaphore(value: 1)

func printNumbers(symbol: String, on queue: DispatchQueue) {
    queue.async {
        semaphore.wait() // acquiring the resource
        for i in 0...2 {
            print(symbol, i)
        }
        print("\(symbol) signal")
        semaphore.signal() // releasing the resource
    }
}

printNumbers(symbol: "🔴", on: .global(qos: .userInteractive))
printNumbers(symbol: "🔵", on: .global(qos: .background))
As you can see, the higher priority queue (🔴) starts printing the sequence of numbers first, and
the lower priority queue waits until the higher priority queue is done before it starts printing. This
is because the semaphore only allows one thread to access the shared resource at a time.
If we had not used the semaphore, both queues could have printed the sequence of numbers
concurrently, which could lead to race conditions and other synchronization issues. By using the
semaphore, we can ensure that the shared resource is accessed in a thread-safe manner.
let semaphore = DispatchSemaphore(value: 3) // up to 3 concurrent operations

for i in 1...6 {
    semaphore.wait() // decrement the semaphore
    DispatchQueue.global().async {
        print("Start access to the shared resource: \(i)")
        sleep(2)
        semaphore.signal() // increment the semaphore
    }
}
In this example, we create a DispatchSemaphore with an initial value of 3, which means that up to
3 concurrent operations are allowed. We then create a loop that runs 6 times, and in each
iteration, we:
Decrement the semaphore using semaphore.wait() . This will block the thread if the
semaphore's value is 0.
Create an asynchronous block using DispatchQueue.global().async that accesses a
shared resource (in this case, just printing a message).
Sleep for 2 seconds to simulate some work being done.
Increment the semaphore using semaphore.signal() when the work is done.
// prints:
Start access to the shared resource: 1
Start access to the shared resource: 2
Start access to the shared resource: 3
// after 2 seconds...
Start access to the shared resource: 4
Start access to the shared resource: 5
Start access to the shared resource: 6
The key point here is that the semaphore ensures that only 3 concurrent operations are allowed
at any given time. If the 4th iteration tries to access the shared resource, it will be blocked until
one of the previous 3 operations completes and signals the semaphore.
This approach is useful when you need to limit the number of concurrent operations to prevent
resource starvation or to control the rate of access to a shared resource.
let group = DispatchGroup()

group.enter()
someAsyncTask {        // a placeholder function that finishes asynchronously
    // do work
    group.leave()
}
group.enter()
anotherAsyncTask {
// do more work
group.leave()
}
group.notify(queue: .main) {
print("all tasks completed")
}
In this example, two tasks are started and tracked using a dispatch group. The final action
(printing "all tasks completed") is performed only after both tasks have finished.
Dispatch Semaphore
It controls access to a limited resource across multiple execution contexts and is used to cap concurrent access at a specified number of resources. It is mainly useful when you need to restrict how many tasks can run concurrently, implement a producer-consumer scenario with a fixed buffer size, or synchronize access to a shared resource in a multi-threaded environment.
Key Methods:
wait(): Decrements the semaphore count or blocks if the count is zero.
signal(): Increments the semaphore count.
For example:
let semaphore = DispatchSemaphore(value: 3)

for i in 1...10 {
    DispatchQueue.global().async {
        semaphore.wait() // wait for a free slot
        // do some work that should be limited to 3 concurrent operations
        print("Task \(i) started")
        sleep(2)
        print("Task \(i) finished")
        semaphore.signal() // release the slot
    }
}
In this example, a semaphore with an initial value of 3 is created, allowing only 3 tasks to run concurrently. Additional tasks wait until the semaphore is signaled by one of the running tasks.
Key Differences
Purpose:
Dispatch Group: Synchronizes completion of multiple tasks.
Dispatch Semaphore: Controls concurrent access to resources.
Usage:
Dispatch Group: Used when you need to know when a set of tasks completes.
Dispatch Semaphore: Used to limit concurrent execution or protect shared resources.
Counting:
Dispatch Group: Counts down to zero (tasks remaining).
Dispatch Semaphore: Counts available resources, blocking when zero.
Blocking:
Dispatch Group: Typically doesn't block unless you explicitly call wait().
Dispatch Semaphore: Can block threads when resources are unavailable.
Notification:
Dispatch Group: Can notify when all tasks are complete without blocking.
Dispatch Semaphore: Doesn't have a built-in notification mechanism.
Flexibility:
Dispatch Group: More flexible for managing groups of related asynchronous tasks.
Dispatch Semaphore: More suited for resource management and synchronization.
Use Dispatch Groups when you need to track completion of a set of tasks, and use Dispatch
Semaphores when you need to control access to limited resources or restrict concurrent
execution. Each serves a distinct purpose in concurrent programming and can be powerful when
used appropriately.
Core Animation also provides other classes for creating more complex animations, such as
CAKeyframeAnimation, CATransition, and CAAnimationGroup. These classes can be used to
create custom animations with multiple stages, transitions, and grouped animations.
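For instance, a minimal sketch (the layer it is added to is assumed) of a keyframe animation that moves a layer through several positions:
let animation = CAKeyframeAnimation(keyPath: "position")
animation.values = [CGPoint(x: 50, y: 50),
                    CGPoint(x: 250, y: 50),
                    CGPoint(x: 250, y: 250)].map { NSValue(cgPoint: $0) }
animation.keyTimes = [0, 0.5, 1]
animation.duration = 1.5
someView.layer.add(animation, forKey: "movePosition") // someView is an assumed UIView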
Core Animation manages the rendering pipeline to ensure smooth animations. It uses a technique
called double-buffering, where it draws the current and next frames in separate buffers. This
allows it to display the next frame without any visual artifacts or flickering. Core Animation also
uses hardware acceleration to take advantage of the GPU's capabilities, which results in faster
and smoother animations.
By managing the rendering pipeline and providing various animation classes, Core Animation
enables you to create smooth and visually appealing animations in the apps.
Q. How does UIKit optimize the rendering process of UIView and CALayer
for performance?
UIKit optimizes the rendering process of UIView and CALayer for performance in several ways:
Layer Hierarchy
UIKit uses a layer hierarchy to efficiently manage the rendering of views. Each UIView has a
corresponding CALayer, which handles the view's rendering. By default, UIKit manages the layer
hierarchy automatically, but you can also create and manage standalone CALayers for custom
rendering.
Hardware Acceleration
Core Animation, which manages CALayers, uses hardware acceleration to take advantage of the
GPU's capabilities. This results in faster and smoother animations and rendering.
Double Buffering
Core Animation uses double buffering, where it draws the current and next frames in separate
buffers. This allows it to display the next frame without any visual artifacts or flickering.
Optimized Drawing
UIKit and Core Animation provide various optimized drawing techniques and classes, such as
CATiledLayer for tiled rendering of large data, CAEmitterLayer for particle emitters, and other
CALayer subclasses with built-in optimizations for high performance.
Layer Composition
Core Animation composites and renders layers efficiently, reducing the load on the CPU. When
using UIView and CALayer together, UIView properties often forward directly to the CALayer
without adding any overhead.
Asynchronous Drawing
CALayer supports asynchronous drawing, allowing layers to be drawn in a background thread.
This can help improve performance, especially when dealing with complex or resource-intensive
drawing operations.
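For example, a layer can opt into asynchronous drawing with a single property (a sketch; customLayer is an assumed CALayer):
customLayer.drawsAsynchronously = true // drawing commands are queued and executed off the main thread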
Dirty Region Management
Instead of re-rendering the entire view hierarchy on every update, Core Animation tracks the
"dirty regions" of each layer – the areas that have changed and need to be redrawn. This
optimization minimizes the amount of work required for each render cycle, improving
performance.
By utilizing these techniques and features, UIKit and Core Animation enable you to create high-performance and visually appealing apps.
Q. How does the responder chain work, and what role does it play in event
handling?
The responder chain allows events to be propagated through a series of objects, known as
responders, until one of them handles the event. This chain is used to handle events such as
touch events, motion events, and remote control events.
Here's how it works:
Responder Objects
In UIKit, UIResponder is the base class for objects that can participate in the responder chain.
The main responder objects are UIView, UIViewController, UIWindow, and UIApplication. Each
responder object has a reference to its next responder in the chain.
Event Delivery
When an event occurs, such as a touch on the screen or a keyboard input, UIKit delivers the
event to the appropriate responder object based on the view hierarchy and the responder chain.
Responder Chain Path
The responder chain follows a specific path:
When an event occurs, iOS creates an instance of UIEvent that represents the event.
The event is then sent to the first responder in the chain, which is usually the view that was
touched or the view controller that is currently active.
If the first responder cannot handle the event, it passes the event to the next responder in
the chain, which is usually its parent view or view controller.
This process continues until an object in the chain can handle the event or until the event
reaches the top of the chain, which is the UIApplication instance.
Event Handling
At any point in the responder chain, if a responder object can handle the event, it overrides the appropriate event handling method (e.g., touchesBegan(_:with:), touchesMoved(_:with:), touchesEnded(_:with:) for touch events) and performs the necessary actions. If the responder object doesn't handle the event, it can pass it along to the next responder in the chain.
Responder Chain Customization
You can customize the responder chain by overriding the next property of UIResponder in your custom classes. This allows you to bypass certain responders or insert your own responders into the chain.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { }
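For instance, a hypothetical custom view can handle the touch itself and still forward it up the chain by calling super:
class HighlightableView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        backgroundColor = .systemYellow          // handle the event locally
        super.touchesBegan(touches, with: event) // forward it to the next responder
    }
}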
The responder chain plays an important role in event handling because it allows events to be
handled in a decentralized manner. Instead of having a single object responsible for handling all
events, the responder chain allows multiple objects to participate in event handling. Some key
benefits of the responder chain include:
Event Delegation: The responder chain allows for event delegation, where a view can choose to
handle an event or pass it along to its parent view or view controller for handling.
Modular Event Handling: By separating event handling into individual responder objects, the
code becomes more modular and easier to maintain.
Custom Event Handling: Developers can override event handling methods at various levels of
the responder chain, enabling custom event handling behaviors for specific views, view
controllers, or even the entire application.
Shared Event Handling: The responder chain allows for shared event handling, where multiple
responder objects can respond to the same event if necessary.
Overall, the responder chain is a powerful mechanism in UIKit that provides efficient and
structured event handling, while also providing flexibility for customization and modular design in
iOS apps.
func doSomething() {
// when an event occurs, call the delegate method
delegate?.testMethod()
}
}
func presentChildController() {
let childController = ChildViewController()
// pass a closure to the child controller
childController.buttonClickHandler = { [weak self] data in
guard let self = self else { return }
// handle the data passed back from the child controller
}
present(childController, animated: true, completion: nil)
}
}
In this example, we use a closure to pass data back from the child view controller to its parent
without relying on delegates. The parent view controller creates an instance of the child view
controller and passes a closure to it. When the child view controller needs to send data back to
its parent, it simply calls the closure with the data.
This approach is useful when you want to pass data back from a child view controller to its parent
in a simple and direct manner, without the overhead of setting up delegates or handling segues.
Using Delegate Pattern
The delegate pattern is a widely used method for passing data back from a child view controller
to its parent. The parent view controller acts as the delegate for the child view controller, and the
child view controller communicates with the parent through a protocol. For example:
func presentChildViewController() {
let childVC = ChildViewController()
childVC.delegate = self
present(childVC, animated: true, completion: nil)
}
}
We define a protocol ChildViewControllerDelegate with a method that will be used to pass data
from the child view controller to its delegate (the parent view controller).
extension ParentViewController: ChildViewControllerDelegate {
The ParentViewController conforms to the protocol and implements its method. This method will
be called by the child view controller when it needs to pass data back to the parent.
After the button is tapped, the child view controller calls the delegate method on the delegate object, passing the data to it.
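The protocol and child controller described above are not shown in the excerpt; a minimal sketch (method names assumed) might be:
protocol ChildViewControllerDelegate: AnyObject {
    func childViewController(_ controller: ChildViewController, didSend data: String)
}

class ChildViewController: UIViewController {
    weak var delegate: ChildViewControllerDelegate?

    @objc func doneTapped() {
        delegate?.childViewController(self, didSend: "Some data") // notify the parent
        dismiss(animated: true)
    }
}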
In this example, we use the delegate pattern to pass data back from the child view controller to its
parent. The parent view controller sets itself as the delegate of the child view controller. When
the child view controller needs to send data back to its parent, it calls the appropriate delegate
method on the delegate object, passing the data as a parameter.
This approach is useful when you want to establish a communication channel between a child
view controller and its parent, allowing the child to pass data back to the parent in a structured
and decoupled manner.
Using Notification Center
It is a central broadcast system that allows objects to send notifications and other objects to
observe and receive those notifications. It provides a way for different parts of an app to
communicate with each other without having direct dependencies or knowledge of each other's
implementations. This approach follows the Observer design pattern, where the broadcasting
object (the sender) doesn't need to know anything about the receiving objects (the observers).
Let’s see an example.
After the button is tapped, the broadcasting controller creates a dictionary data with some sample data and posts a
notification named "DataBroadcast" to the default NotificationCenter using the
post(name:object:userInfo:) method. The userInfo parameter contains the data dictionary
that needs to be broadcast.
class ObserverViewController: UIViewController {
deinit {
NotificationCenter.default.removeObserver(self)
}
}
In the above controller, we add an observer for the "DataBroadcast" notification using the
addObserver(self:selector:name:object:) method. This means that whenever a
"DataBroadcast" notification is posted, the handleDataBroadcast(_:) method will be called.
Without weak : If loginButton were declared as a strong reference (strong is default), the view
controller would hold a strong reference to loginButton , and loginButton would hold a
strong reference back to its superview, which holds a strong reference back to the view
controller, potentially creating a reference cycle.
With weak : Declaring loginButton as weak means that the view controller holds a weak
reference to the button. The button is already strongly referenced by its superview, so it won't be
deallocated as long as its superview exists. When the view controller and its view hierarchy are
no longer needed, they can all be deallocated properly.
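The outlet being discussed is typically declared like this:
@IBOutlet weak var loginButton: UIButton!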
Using weak for IBOutlet properties is a common best practice to avoid strong reference cycles
and potential memory leaks in iOS apps. It ensures that the view controller does not
unnecessarily hold onto views, allowing the system to manage memory efficiently and keep the
app performant.
Q. What is intrinsic content size, and how does it affect Auto Layout?
Intrinsic content size is a key concept in Auto Layout, particularly when dealing with user
interface elements. It refers to the natural size a view wants to be, based on its content. This size
acts as a constraint that Auto Layout can use to determine the final size of a view.
Here are some examples of views that have an intrinsic content size:
UILabel: The intrinsic content size of a UILabel is based on the text it contains, including the font
size, style, and number of lines.
UIButton: A UIButton has an intrinsic content size based on its title, image, and content insets.
UIImageView: An UIImageView has an intrinsic content size based on the size of the image it
displays.
UITextView: A UITextView has an intrinsic content size based on the text it contains, including
the font size, style, and number of lines.
How Does Intrinsic Content Size Affect Auto Layout?
Intrinsic Size as Implicit Constraints: Views with an intrinsic content size automatically provide
their width and height constraints based on their content. These constraints help Auto Layout
determine the size of these views without needing explicit size constraints.
Content-Driven Layout: When designing interfaces, the content often dictates the size of the
view. Using intrinsic content size ensures that views expand or contract based on their content,
leading to dynamic and adaptable layouts.
Fewer Explicit Constraints: Since views like labels and buttons already have intrinsic sizes, you
don’t need to explicitly define width and height constraints for them. This reduces the complexity
of the layout and the number of constraints you need to manage.
For example, a UILabel with text "Hello, Swiftable!" and a specific font size has an intrinsic
content size based on the text length and font. You can place this label in a view without setting
explicit width and height constraints because Auto Layout will use the label's intrinsic content
size to determine its dimensions.
let label = UILabel()
label.text = "Hello, Swiftable!"
// no need to set width and height constraints explicitly
Properly setting the intrinsic content size is essential for Auto Layout to work correctly. If a view's
intrinsic content size is not set correctly, Auto Layout may produce unexpected results or fail to
satisfy constraints.
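For a custom view, you can supply an intrinsic size yourself; a hypothetical sketch:
class BadgeView: UIView {
    override var intrinsicContentSize: CGSize {
        CGSize(width: 24, height: 24) // the badge always wants to be 24x24 points
    }
}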
Consider two views, View A and View B, placed side by side in a container that is too narrow to show both at their intrinsic widths. Both views have a horizontal spacing constraint between them, set to 10 points.
In this scenario, there's a conflict between the width constraints of View A and View B . To
resolve this conflict, Auto Layout considers the content hugging and compression resistance
priorities of both views.
If View A has a higher content hugging priority than View B , View A will maintain its intrinsic
width of 100 points, and View B will be compressed to fit the available space. If View B has a
higher compression resistance priority than View A , View B will resist compression, and View
A will be shrunk to fit the available space.
By adjusting the content hugging and compression resistance priorities of your views, you can
influence how Auto Layout resolves conflicts and determines the final layout of your user
interface.
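In code, these priorities can be adjusted like this (viewA and viewB are assumed references to the two views):
viewA.setContentHuggingPriority(.defaultHigh, for: .horizontal)
viewB.setContentCompressionResistancePriority(.required, for: .horizontal)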
Asynchronous Content: If the content of your cells (e.g., images) is loaded asynchronously,
ensure that you reload the cell or update its height after the content has been loaded. You can
use a completion handler to reload the cell once the content is ready.
Avoid Forced Layout Passes: Avoid calling layoutIfNeeded or layoutSubviews unnecessarily as
it can lead to performance issues. Auto Layout should be able to handle most layout calculations
without manual intervention.
By following these steps, you can efficiently implement dynamic cell heights in a UITableView,
ensuring that the table view adjusts the height of its cells based on their varying content sizes.
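For reference, the typical self-sizing setup looks like this (a sketch, assuming the cell's constraints fully define its height):
tableView.rowHeight = UITableView.automaticDimension
tableView.estimatedRowHeight = 80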
return true
}
}
Using makeKeyAndVisible() is required for displaying content in a new UIWindow and ensuring
that it can interact with the user. Without calling this method, the window would not be shown to
the user, and it would not receive input events, rendering it effectively invisible and non-
interactive.
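In a scene-based app, the setup typically lives in the scene delegate; a minimal sketch (RootViewController and the delegate's window property are assumed):
func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
           options connectionOptions: UIScene.ConnectionOptions) {
    guard let windowScene = scene as? UIWindowScene else { return }
    let window = UIWindow(windowScene: windowScene)
    window.rootViewController = RootViewController()
    window.makeKeyAndVisible() // shows the window and lets it receive input events
    self.window = window       // the scene delegate's UIWindow? property
}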
💡 Note:
In this example, newVC is pushed onto the stack, and the navigation controller updates its
navigation bar and toolbar accordingly.
When the user navigates back, the UINavigationController pops the top view controller off the
stack using the popViewController(animated:) method. This method is called automatically
when the user taps the back button in the navigation bar.
Here's an example of how the navigation controller's stack might look:
// initial stack
[RootViewController]
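// a sketch of how the stack evolves (newVC is any view controller you present)
// navigationController?.pushViewController(newVC, animated: true)
// stack after push: [RootViewController, newVC]
// navigationController?.popViewController(animated: true)
// stack after pop: [RootViewController]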
As you can see, the UINavigationController manages its stack of view controllers by pushing and
popping view controllers onto and off the stack, respectively. This allows the user to navigate
through a hierarchical interface, with the navigation controller handling the navigation logic and
updating the navigation bar and toolbar accordingly.
init(dataSource: DataSource) {
self.dataSource = dataSource
super.init(frame: .zero)
// setup view
}
}
By following these practices, you can create complex UIs that are modular, flexible, and easier to
maintain and extend over time. Additionally, reusable UI components can improve development
efficiency, consistency, and collaboration within your team.
Next, you need to set the prefetchDataSource property of your UITableView to your view
controller:
override func viewDidLoad() {
    super.viewDidLoad()
    tableView.prefetchDataSource = self
}
Then, you can implement the tableView(_:prefetchRowsAt:) method to start loading data for
the rows that are about to be displayed:
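A sketch of that method (the actual loading call depends on your data layer):
func tableView(_ tableView: UITableView, prefetchRowsAt indexPaths: [IndexPath]) {
    for indexPath in indexPaths {
        // start loading data for this row in the background
        // e.g. dataLoader.startLoading(row: indexPath.row) (a hypothetical loader)
    }
}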
In this method, you can use the indexPaths parameter to determine which rows are about to be
displayed. You can then start loading data for those rows in the background.
Finally, you can implement the tableView(_:cancelPrefetchingForRowsAt:) method to cancel
any ongoing data loading operations for the rows that are no longer needed:
func tableView(_ tableView: UITableView, cancelPrefetchingForRowsAt indexPaths:
[IndexPath]) {
for indexPath in indexPaths {
let row = indexPath.row
// cancel any ongoing data loading operations for the row here
// ...
}
}
In this method, you can use the indexPaths parameter to determine which rows are no longer
needed. You can then cancel any ongoing data loading operations for those rows.
By implementing prefetching in UITableView, you can improve the performance of your app by
loading data in the background before it's needed. This can help to reduce the latency of your
app and provide a smoother user experience.
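The CustomFont helper used below is not shown in the excerpt; a minimal sketch using UIFontMetrics (font names assumed) might be:
enum CustomFont {
    static var largeTitle: UIFont {
        let base = UIFont(name: "AvenirNext-Bold", size: 34) ?? .systemFont(ofSize: 34, weight: .bold)
        return UIFontMetrics(forTextStyle: .largeTitle).scaledFont(for: base) // scales with Dynamic Type
    }
    static var headline: UIFont {
        let base = UIFont(name: "AvenirNext-DemiBold", size: 17) ?? .systemFont(ofSize: 17, weight: .semibold)
        return UIFontMetrics(forTextStyle: .headline).scaledFont(for: base)
    }
}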
Then, in your view controller, you can set the font for your labels like this:
largeTitleLabel.font = CustomFont.largeTitle
headlineLabel.font = CustomFont.headline
By following these steps, you can ensure that your app's custom fonts will automatically adjust
their sizes based on the user's preferred content size settings, and you can handle different font
traits as needed.
Q. Explain the different states of an iOS app with the use cases.
Every app goes through different states during its lifecycle. Understanding these states and their
use cases is essential for proper app management and resource handling. Here are the different
states of an iOS app:
Active
The app enters this state when it is running in the foreground and receiving events from the
system. This is the state where the app is fully functional and can execute any tasks or update its
user interface based on user interactions.
Background
When the user leaves the app or presses the Home button, the app transitions to the background
state. In this state, the app is still running but with limited execution time and restricted access to
certain resources. Apps in the background can perform specific tasks, such as playing audio,
tracking location, handling push notifications, or finishing up tasks that were started in the
foreground.
func applicationDidEnterBackground(_ application: UIApplication) {
// called when the app has entered the background
}
Suspended
If an app in the background is not performing any tasks or if the system needs to free up memory,
the app may transition to the suspended state. In this state, the app remains in memory but does
not execute any code. When the app needs to run again, it must transition back to the active or
background state.
Terminated
The system may terminate an app due to various reasons, such as low memory conditions or if
the app has been in the background for an extended period. When an app is terminated, it is
completely removed from memory, and upon its next launch, it needs to restart from the
beginning.
func applicationWillTerminate(_ application: UIApplication) {
// called when the app is about to be terminated
}
By understanding these states and their use cases, you can manage the app's lifecycle
effectively, handle transitions between states properly, and ensure optimal performance and
resource utilization.
Q. How you can save and restore an app's state when it transitions to the
background and back to the foreground?
Many times you need to save and restore an app's state when it transitions between the
foreground and background states. This ensures that users can resume their tasks seamlessly
when they return to the app.
Consider a note-taking app where users can create, edit, and delete notes. When the user
switches to another app or receives a phone call, the note-taking app transitions to the
background state.
Saving the app's state:
Implement the AppDelegate Methods:
In AppDelegate, there are some methods that are called when the app transitions between
different states. Specifically, the applicationDidEnterBackground(_:) method is called when
the app is about to move to the background.
Save App Data:
In the applicationDidEnterBackground(_:) method, you should save any unsaved data or the
app's current state to persistent storage (e.g., file system, Core Data, or a database). In the note-
taking app, you would save any unsaved notes or the current state of the note editor.
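For instance, a sketch of persisting a draft when entering the background (the draft value and key are assumed):
func applicationDidEnterBackground(_ application: UIApplication) {
    // noteEditorText is an assumed property holding the unsaved draft
    UserDefaults.standard.set(noteEditorText, forKey: "unsavedNoteDraft")
}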
Suspend Ongoing Tasks:
If the app has any ongoing tasks, such as network requests or background operations, you
should suspend or cancel them before the app enters the background state. This helps conserve
system resources and ensures that the app doesn't continue running tasks that may drain the
device's battery or consume excessive data.
Restoring the app's state:
Implement the AppDelegate Methods:
The applicationWillEnterForeground(_:) method is called when the app is about to move
from the background to the foreground state.
Restore App Data:
In the applicationWillEnterForeground(_:) method, you should restore the app's state from
the persistent storage. For the note-taking app, you would load any previously saved notes or the
last state of the note editor.
Resume Suspended Tasks:
If any tasks were suspended when the app went to the background, you can resume them in this
method. However, it's important to consider the user's experience and avoid resuming tasks that
may no longer be relevant or desired.
Update the User Interface:
After restoring the app's state, you should update the user interface to reflect the restored data or
state. In the note-taking app, you would display the previously saved notes or the last state of the
note editor.
By following this approach, you can ensure that your app's state is preserved when it transitions
to the background and restored when it returns to the foreground, providing a seamless user
experience.
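The view being described next is not included in the excerpt; a minimal reconstruction might be:
struct SearchView: View {
    @State private var searchText = ""

    var body: some View {
        VStack {
            TextField("Search", text: $searchText)
            Text("Searching for: \(searchText)")
        }
        .padding()
    }
}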
In the above example, text: $searchText binds the TextField to the searchText state
variable. The $ prefix creates a binding to the state variable, allowing the TextField to both read
from and write to searchText .
The string interpolation "\(searchText)" inserts the current value of searchText into the
string. As searchText changes (when the user types in the TextField), this Text view updates
to reflect the new search query.
Form Inputs
@State is excellent for managing form inputs because it automatically triggers view updates
when the value changes. For example:
struct ContactForm: View {
    @State private var name = ""
    @State private var email = ""
    @State private var agreeToTerms = false

    var body: some View {
        Form {
            TextField("Name", text: $name)
            TextField("Email", text: $email)
            Toggle("I agree to the terms", isOn: $agreeToTerms)

            Button("Submit") {
                submitForm()
            }
            .disabled(!agreeToTerms || name.isEmpty || email.isEmpty)
        }
    }

    func submitForm() {
        // handle form submission
    }
}
In this example, @State properties manage the text field contents and toggle state. As the user
interacts with these controls, the view automatically updates. The submit button's disabled state
also updates based on these properties.
Local View State
@State is ideal for managing UI state that's specific to a single view, such as whether a sheet is
presented or a menu is expanded. For example:
Menu("Options") {
Button("Option 1") { selectedOption = "Option 1" }
Button("Option 2") { selectedOption = "Option 2" }
Button("Option 3") { selectedOption = "Option 3" }
}
Here, @State properties control the presentation of a sheet, the selected option in a menu, and
the expansion state of a disclosure group. These states are local to this view and don't need to be
shared with other parts of the app.
Temporary Storage
@State is useful for storing temporary data that doesn't need to persist beyond the lifetime of
the view, such as intermediate results or user selections. For example:
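A hypothetical sketch: @State holding a temporary selection that is discarded when the view goes away:
struct ColorPickerView: View {
    @State private var selectedColor: Color = .blue

    var body: some View {
        VStack {
            ColorPicker("Pick a color", selection: $selectedColor)
            Rectangle()
                .fill(selectedColor)
                .frame(height: 100)
        }
        .padding()
    }
}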
Q. How do you handle data flow in SwiftUI? Discuss the roles of @Binding,
@ObservedObject, and @EnvironmentObject.
Managing data flow between views is important for building dynamic and responsive user
interfaces. SwiftUI provides several property wrappers to facilitate this like
@State , @Binding , @ObservedObject , and @EnvironmentObject . Let's understand them.
@Binding
It creates a two-way connection between a property in a parent view and a property in a child view. When the property changes in the child view, it also updates in the parent view, and vice versa. Use it when you need to pass a state property down to a child view and allow the child view to modify it. For example:
struct ParentView: View {
@State private var isDarkMode: Bool = false
var body: some View {
VStack {
Toggle("Dark Mode", isOn: $isDarkMode)
.padding()
ChildView(isDarkMode: $isDarkMode)
}
}
}
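The ChildView used above is not shown in the excerpt; a minimal sketch might be:
struct ChildView: View {
    @Binding var isDarkMode: Bool

    var body: some View {
        Text(isDarkMode ? "Dark mode is on" : "Dark mode is off")
            .padding()
            .background(isDarkMode ? Color.black : Color.white)
            .foregroundColor(isDarkMode ? .white : .black)
    }
}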
@ObservedObject
It is used to observe an external object that conforms to the ObservableObject protocol. This object can be shared across multiple views, and when any property marked with @Published inside it changes, the observing views update. Use it when you have a data model that multiple views need to observe and react to. For example:
class Settings: ObservableObject {
    @Published var isDarkMode: Bool = false
}
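@EnvironmentObject
It injects a shared ObservableObject into the view hierarchy through the environment, so any descendant view can read it without it being passed through every initializer. Use it when many views across the hierarchy need access to the same shared model. For example (a minimal sketch reusing the Settings class above):
struct RootView: View {
    @StateObject private var settings = Settings()

    var body: some View {
        PreferencesView()
            .environmentObject(settings) // makes settings available to descendants
    }
}

struct PreferencesView: View {
    @EnvironmentObject var settings: Settings

    var body: some View {
        Toggle("Dark Mode", isOn: $settings.isDarkMode)
    }
}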
These property wrappers allow SwiftUI to efficiently manage and update the UI in response to
state changes, ensuring that the user interface remains in sync with the underlying data.
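The listing referred to below is not included in this excerpt; a minimal reconstruction of the declarative toggle example might be:
struct ToggleView: View {
    @State private var isToggled = false

    var body: some View {
        VStack {
            Toggle("Enable feature", isOn: $isToggled)
            Text(isToggled ? "Feature enabled" : "Feature disabled")
        }
        .padding()
    }
}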
In this example, we're declaring what the UI should look like based on the isToggled state, not
how to update it.
Here is how the declarative approach impacts the development process compared to an imperative approach:
Simplified Code:
Declarative: UI structure is more readable and concise.
Imperative: Often requires more boilerplate code to set up and update UI elements.
State Management:
Declarative: State drives the UI automatically.
Imperative: You must manually update the UI when state changes.
Maintainability:
Declarative: Easier to understand and modify UI structure.
Imperative: Can become complex with nested views and multiple state changes.
Debugging:
Declarative: Often easier to debug as the UI structure is clearly defined.
Imperative: Can be challenging to track down UI update issues.
Performance:
Declarative: Framework optimizes updates and rendering.
Imperative: You must carefully manage performance, especially with frequent updates.
Learning Curve:
Declarative: The new paradigm may require adjustment if you are used to imperative approaches.
Imperative: Familiar to many developers from UIKit and other frameworks.
Consistency:
Declarative: Encourages consistent patterns across the app.
Imperative: More prone to inconsistencies in how UI updates are handled.
Testing:
Declarative: Often easier to test as UI is a function of state.
Imperative: May require more setup to test UI in different states.
The declarative approach generally leads to more robust, maintainable, and efficient code,
although it does require a shift in thinking if you are accustomed to imperative UI programming.
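The listing this passage refers to is not included in the excerpt; a minimal sketch of a view reading @Environment values might look like this:
struct ContentView: View {
    @Environment(\.colorScheme) private var colorScheme
    @Environment(\.sizeCategory) private var sizeCategory

    var body: some View {
        Text("Hello, Swiftable!")
            .padding()
            .foregroundColor(colorScheme == .dark ? .white : .black)
            .background(colorScheme == .dark ? Color.black : Color.white)
            .font(sizeCategory.isAccessibilityCategory ? .title : .body)
    }
}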
In this example, the ContentView adapts to both the system's color scheme and the user's
preferred text size. The colorScheme environment value is used to change the background and
text color based on whether the device is in light or dark mode. The sizeCategory value reflects
the user's preferred text size, which can be used to adjust the layout or font sizes accordingly.
This shows how @Environment allows your views to respond dynamically to system-wide
settings without needing to manually pass this information through your view hierarchy.
Example of using dependency injection:
It provides a clean way to inject dependencies into views without explicitly passing them through
initializers or as properties. This is particularly useful for providing services or shared resources to
multiple views. For example:
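The Logger service and its environment key are not shown in the excerpt; assumed definitions might be:
struct Logger {
    func log(_ message: String) {
        print("[LOG] \(message)")
    }
}

struct LoggerKey: EnvironmentKey {
    static let defaultValue = Logger()
}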
extension EnvironmentValues {
var logger: Logger {
get { self[LoggerKey.self] }
set { self[LoggerKey.self] = newValue }
}
}
In this example, we are injecting a simple Logger service. The ContentView accesses the
logger through @Environment and uses it to log a message when the button is tapped. This
allows the logging functionality to be easily provided and potentially customized from a parent
view, without explicitly passing it to ContentView .
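The navigation listing described below is not shown in the excerpt; a minimal sketch (view names assumed) might be:
struct HomeView: View {
    var body: some View {
        NavigationView {
            List {
                NavigationLink("Show Details", destination: DetailView())
            }
            .navigationTitle("Home")
        }
    }
}

struct DetailView: View {
    var body: some View {
        Text("Detail Screen")
            .navigationTitle("Details")
    }
}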
In this example:
NavigationView wraps the content, enabling navigation functionalities.
The navigationTitle modifier sets the title of the navigation bar for each view.
Comparing NavigationView & NavigationLink
NavigationView
Purpose: Acts as a container for managing a navigation-based hierarchy.
Functionality: Provides the navigation context and navigation bar.
Usage: Wraps around the entire view hierarchy that requires navigation capabilities.
NavigationLink
Purpose: Triggers navigation to a new view.
Functionality: Specifies the destination view and optionally customizes the appearance of
the link.
Usage: Placed within a NavigationView to create navigable items.
SwiftUI's navigation system leverages NavigationView and NavigationLink to create a
seamless and intuitive navigation experience. NavigationView serves as the container and
navigation context, while NavigationLink acts as the trigger for navigation actions. Together,
they enable developers to build complex navigation hierarchies in a declarative and easy-to-
maintain manner.
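The CustomTextModifier used below is not shown in the excerpt; based on the description that follows, it presumably resembles:
struct CustomTextModifier: ViewModifier {
    func body(content: Content) -> some View {
        content
            .font(.headline)
            .foregroundColor(.white)
            .padding()
            .background(Color.blue)
            .cornerRadius(10)
    }
}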
extension View {
func customTextStyle() -> some View {
self.modifier(CustomTextModifier())
}
}
You can now apply .customTextStyle() to any text in your app to have consistency.
struct ContentView: View {
var body: some View {
VStack {
Text("Hello, Swiftable!")
.customTextStyle()
Text("A community for iOS developers!")
.customTextStyle()
}
}
}
In this example:
CustomTextModifier conforms to the ViewModifier protocol.
The body method describes the modifications: changing the font, color, padding,
background, and corner radius.
An extension on View adds a convenience method customTextStyle() to apply the
modifier easily.
ViewModifiers are a key feature that promote code reuse and maintainability. By encapsulating
view transformations and styling, they allow for consistent and centralized management of view
modifications, making it easier to create and maintain complex applications.
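The @StateObject listing referred to below is not included here; a minimal sketch (names assumed) might be:
final class CounterViewModel: ObservableObject {
    @Published var count = 0
}

struct CounterView: View {
    @StateObject private var viewModel = CounterViewModel() // created once, owned by the view

    var body: some View {
        Button("Count: \(viewModel.count)") {
            viewModel.count += 1
        }
    }
}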
In this example, @StateObject ensures viewModel is created once and managed by the view.
The ObservableObject protocol plays an important role in managing and observing state changes in a reactive and declarative manner. By
using @Published , @ObservedObject , @StateObject , and @EnvironmentObject , you can
effectively bind your data models to your views, ensuring that your UI stays in sync with your
underlying data. This approach promotes clean, maintainable, and responsive apps.
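The GeometryReader listing referred to below is not shown; a minimal reconstruction might be:
struct SizeReportingView: View {
    var body: some View {
        GeometryReader { geometry in
            VStack {
                Text("Width: \(geometry.size.width, specifier: "%.0f")")
                Text("Height: \(geometry.size.height, specifier: "%.0f")")
            }
        }
    }
}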
In this example, the GeometryReader provides the full size of its container, and we display the
width and height within a VStack .
Positioning Elements
You can use GeometryProxy to position elements dynamically. For instance, centering a view
within its parent:
struct CenteredView: View {
    var body: some View {
        GeometryReader { geometry in
            Text("Centered Text")
                .frame(width: geometry.size.width, height: geometry.size.height)
                .background(Color.yellow)
                .position(x: geometry.size.width / 2, y: geometry.size.height / 2)
        }
    }
}
Key benefits:
Ease of Use: Automatically handles reading from and writing to UserDefaults .
Persistence Across Launches: Data is saved even when the app is closed and reopened.
Consistency: Ensures that data is consistent across all scenes and views in your app.
Similar to UserDefaults, the keys in @AppStorage are string-based. To ensure consistency and
avoid issues due to spelling errors in different views, it’s recommended to adopt a unified
management approach or define keys uniformly. This practice not only reduces the risk of errors
but also makes the code easier to maintain and understand.
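For reference, a minimal @AppStorage sketch (key name assumed):
struct SettingsView: View {
    @AppStorage("isDarkMode") private var isDarkMode = false

    var body: some View {
        Toggle("Dark Mode", isOn: $isDarkMode) // persisted to UserDefaults automatically
    }
}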
SceneStorage
It is a property wrapper that stores data specific to a scene. This is useful for saving and restoring
state when the scene moves to the background or is closed and reopened. Unlike @AppStorage ,
which is app-wide, @SceneStorage is scoped to a specific scene. To use @SceneStorage , you
declare a property with the @SceneStorage attribute, providing a key for the stored value.
The working principle of @SceneStorage is similar to that of @State , with the latter being used
to save the private state of a view, while @SceneStorage is for saving the private state of a
scene. In a sense, @SceneStorage can be seen as a convenient way to share data between
views within a scene, eliminating the need to inject models separately for each scene.
Here’s an example:
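The original listing is not included here; a sketch matching the description below (key and view names assumed):
struct MainTabView: View {
    @SceneStorage("currentTab") private var currentTab = 0

    var body: some View {
        TabView(selection: $currentTab) {
            Text("Home")
                .tabItem { Label("Home", systemImage: "house") }
                .tag(0)
            Text("Settings")
                .tabItem { Label("Settings", systemImage: "gear") }
                .tag(1)
        }
    }
}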
In this example, the currentTab property is backed by scene-specific storage and will
remember the selected tab for each scene independently. Each window or scene will have its
own currentTab value.
Key benefits:
Scene-Specific State: Maintains independent state for each scene or window, useful in
multi-window environments.
Automatic State Restoration: Automatically saves and restores state when the scene is
recreated.
Ease of Use: Simple to implement and requires minimal code to manage scene-specific
state.
Key points to note down:
@AppStorage:
App-wide scope, suitable for settings and preferences that need to be consistent across the
entire app.
User settings such as dark mode, volume level, preferred language.
Flags or states that are relevant across the entire app.
Data persists across app launches and reboots.
@SceneStorage:
Scene-specific scope, suitable for state that needs to be restored when a scene is
reactivated.
Draft text in a text editor.
Scroll position in a long list.
Temporarily unsaved form data in a multi-scene app.
Data persists while the scene is in memory, which includes backgrounded state but not
necessarily after the app is completely terminated and restarted.
Important Considerations:
Security: Neither @AppStorage nor @SceneStorage is suitable for storing sensitive data.
Use Keychain for sensitive information.
Performance: These wrappers are designed for small amounts of data. For larger datasets,
consider using Core Data or other persistence solutions.
Data Consistency: Be cautious when using @AppStorage in multiple places. Changes in one
view will affect all views using the same key.
Testing: When unit testing, you might need to reset UserDefaults to ensure consistent test
results.
Both @AppStorage and @SceneStorage are optimized for small amounts of data. For larger
datasets or more complex data structures, consider other persistent storage solutions such
as Core Data or local databases.
@AppStorage and @SceneStorage offer powerful yet simple mechanisms for persisting data.
They help streamline state management by reducing boilerplate code and ensuring data
persistence across app launches or within specific scenes. Understanding when to use each
property wrapper allows you to effectively manage state and provide a seamless user experience
in your SwiftUI apps.
Live Previews
Xcode provides a live preview of the UI as you code, which allows for real-time feedback and
faster iteration. Also, you can interact with the previews, providing a better sense of how the app
will behave without needing to run it on a simulator or device. You can see the changes instantly
in the canvas as you modify the SwiftUI code like this:
struct ContentView: View {
var body: some View {
Text("Hello, Swiftable!")
.padding()
}
}
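The canvas is driven by a preview declaration; for example, a PreviewProvider for the view above:
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}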
UIKit does not provide a direct approach to enable live preview. To enable live preview of
UIKit components within SwiftUI, you can use UIViewRepresentable or
UIViewControllerRepresentable to wrap UIKit components and display them in the
SwiftUI canvas. This allows you to leverage the live preview functionality of SwiftUI while
still using UIKit components.
Cross-Platform Compatibility
SwiftUI supports building UIs for iOS, macOS, watchOS, and tvOS using a single codebase,
promoting code reuse and reducing the need for platform-specific code. This means you can
write one piece of code and run it on different devices with minimal adjustments. However, UIKit
is primarily designed for iOS and tvOS.
Modern Features
SwiftUI is designed to work seamlessly with modern Apple APIs, such as Combine for reactive
programming, and is better suited for integrating new features as Apple continues to update the
framework.
Using declarative syntax, you can describe what the UI should look like and how it should
behave. This contrasts with the imperative style of UIKit, where you must write explicit
instructions to manage the UI state and updates.
SwiftUI integrates seamlessly with Swift’s concurrency model, including async/await and
actors. This makes it easier to write modern, asynchronous code for tasks such as network
calls and background processing, leading to more responsive UIs.
SwiftUI is designed to work well with Combine, Apple’s framework for handling
asynchronous events by combining event-processing operators. This allows for
sophisticated data handling and reactive programming patterns within SwiftUI apps.
See the below example to implement reactive components using Combine:
class ViewModel: ObservableObject {
@Published var text: String = ""
private var cancellable: AnyCancellable?
init() {
cancellable = Timer.publish(every: 1.0, on: .main, in: .common)
.autoconnect()
.map { _ in "Updated: \(Date())" }
.assign(to: \.text, on: self)
}
}
Automatic Adaptations
SwiftUI provides automatic support for features like dark mode and dynamic type, making it
easier to build apps that adapt to user preferences and accessibility settings.
You have a flexible layout system that automatically adapts to different screen sizes and
orientations. This includes components like HStack , VStack , ZStack , and LazyVStack ,
which make it easy to create layouts that work well on any device.
Using environment modifiers, you can allow views to adapt to changes in the environment,
such as size classes, color schemes, and layout directions. This ensures that the UI looks
good in various contexts without requiring manual adjustments.
You can support Dynamic Type, allowing the text to automatically adjust its size based on
the user’s settings. This ensures better readability and accessibility.
SwiftUI automatically respects safe area insets, ensuring that content is not obscured by
device-specific elements like the notch or home indicator. This is particularly useful for
creating layouts that work well on devices with different screen shapes and sizes.
There may be additional points you find valuable when working with SwiftUI compared to the UIKit framework; feel free to bring them up during interviews.
Points to note:
Limited Features: As a relatively new framework, SwiftUI lacks some of the advanced and fine-
grained controls available in UIKit, which can be a limitation for complex or highly customized
interfaces.
Bugs and Instability: Being newer, SwiftUI can have more bugs and stability issues compared to
the well-established UIKit.
Learning Curve: Developers accustomed to the imperative style of UIKit might find the declarative approach of SwiftUI initially challenging to learn and adapt to.
Performance Concerns: In certain cases, UIKit might offer better performance optimizations,
especially for apps that require highly customized and performant interfaces.
Both SwiftUI and UIKit have their unique advantages and disadvantages. SwiftUI excels with its
modern, declarative syntax, cross-platform capabilities, and real-time previews, making it ideal
for new projects, rapid prototyping, and simpler apps. On the other hand, UIKit's maturity,
stability, comprehensive feature set, and performance optimizations make it a better choice for
complex, performance-critical applications and projects requiring extensive backward
compatibility. The choice between the two largely depends on the specific needs of the project,
the target platform, and the development team's familiarity with the frameworks.
func fetchUsername() {
// write logic here...
}
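The surrounding view body and the async counterpart are not shown in the excerpt; inside the same view they might look like this (fetchUsernameAsync is assumed):
func fetchUsernameAsync() async {
    // async logic here...
}

var body: some View {
    Text("Username")
        .onAppear {
            fetchUsername()            // runs every time the view appears, never auto-cancelled
        }
        .task {
            await fetchUsernameAsync() // cancelled automatically if the view disappears first
        }
}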
In this example, we use both .onAppear() and .task() to fetch the username. The .onAppear() modifier calls the fetchUsername method, which doesn't use Swift's structured concurrency and isn't automatically cancelled if the view disappears. In contrast, the .task() modifier calls the fetchUsernameAsync method, which leverages Swift's concurrency features and will automatically cancel if the view disappears before completion.
Key Differences:
Concurrency: .task() is designed for async/await operations, while .onAppear() is not.
Cancellation: .task() automatically cancels its operation if the view disappears, .onAppear()
does not.
Timing: .task() may start slightly after .onAppear() in the view lifecycle.
In general, the .onAppear() is a general-purpose modifier suitable for a wide range of tasks,
both synchronous and asynchronous, and is called every time the view appears. The .task() is
specifically designed for asynchronous operations, providing a more concise and robust way to
handle tasks that may run long or need cancellation when the view disappears. Understanding
these differences helps in choosing the right approach for your SwiftUI views, ensuring better
performance and cleaner code.
In practice, you would typically use either .onAppear() or .task(), not both. The choice depends
on whether you're using Swift's structured concurrency and if you need automatic cancellation of
the task when the view disappears.
init(value: Binding<Double>) {
self.value = value
}
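The fragment above belongs to the wrapper described next; a fuller sketch (details assumed, not the book's exact listing) might look like this:
struct UISliderView: UIViewRepresentable {
    @Binding var value: Double

    func makeUIView(context: Context) -> UISlider {
        let slider = UISlider()
        slider.minimumValue = 0
        slider.maximumValue = 1
        slider.addTarget(context.coordinator,
                         action: #selector(Coordinator.valueChanged(_:)),
                         for: .valueChanged)
        return slider
    }

    func updateUIView(_ uiView: UISlider, context: Context) {
        uiView.value = Float(value) // push SwiftUI state into the UIKit control
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(value: $value)
    }

    final class Coordinator: NSObject {
        var value: Binding<Double>

        init(value: Binding<Double>) {
            self.value = value
        }

        @objc func valueChanged(_ sender: UISlider) {
            value.wrappedValue = Double(sender.value) // push UIKit changes back to SwiftUI
        }
    }
}

struct ContentView: View {
    @State private var sliderValue: Double = 0.5

    var body: some View {
        VStack {
            UISliderView(value: $sliderValue)
            Text("Value: \(sliderValue, specifier: "%.2f")")
        }
        .padding()
    }
}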
In the above example, the UISliderView is a custom SwiftUI view that wraps a UIKit UISlider
component. It conforms to the UIViewRepresentable protocol, which allows UIKit views to be
used within SwiftUI's declarative structure. This view takes a binding to a Double value, which
represents the current value of the slider. The binding creates a two-way connection, allowing
changes in the slider to update SwiftUI state and vice versa.
In the above example, you can see how to use the custom UISliderView within a SwiftUI
interface. It serves as the main view of the application. The view contains a @State property
called sliderValue , which is a Double initialized with a default value. This state variable will
store and manage the current value of the slider.
This integration process is valuable when SwiftUI lacks a native equivalent for a UIKit component,
or when specific UIKit functionality is required. It allows you to gradually transition to SwiftUI or to
continue using familiar UIKit components while taking advantage of SwiftUI's modern,
declarative approach to UI development. By using this approach, you can create more flexible
and powerful iOS apps, combining the best of both UIKit and SwiftUI framework in projects.
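The UserModel referred to below is not shown here; a minimal reconstruction:
class UserModel: ObservableObject {
    @Published var name: String = "Swiftable"
    @Published var age: Int = 25
}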
In the above code, we are using the @Published property wrapper for properties that you want
to announce changes for. When these properties change, SwiftUI will automatically update any
views that are observing this object. So, name and age are marked with @Published , meaning
any changes to these properties will trigger the objectWillChange publisher, which in turn
notifies the SwiftUI views to update.
Create an Instance of the ObservableObject with @StateObject
Use @StateObject to create and own an instance of UserModel . @StateObject should be
used when you create a new instance of an observable object within a view. This ensures the
instance is managed correctly by SwiftUI and persists across view updates. For example:
struct ContentView: View {
@StateObject private var userModel = UserModel()
var body: some View {
VStack {
Text("User's Name: \(userModel.name)")
Text("User's Age: \(userModel.age)")
Button("Increase Age") {
userModel.age += 1
}
ChildView(userModel: userModel)
}
.padding()
}
}
In the above example, both Text views automatically update when name or age changes,
thanks to the @Published properties in UserModel . The button updates userModel.age .
When the button is pressed, age is incremented, which triggers the @Published property to
notify SwiftUI to update the relevant views.
Use @ObservedObject to Observe an Existing ObservableObject
Use @ObservedObject when you want a view to observe an existing instance of an
ObservableObject that is passed to it. This allows child views to react to changes in the shared
state without owning the state. For example:
struct ChildView: View {
@ObservedObject var userModel: UserModel
var body: some View {
Text("Child View - User's Name: \(userModel.name)")
}
}
In ChildView, the Text will automatically update whenever name changes. This shows how
@ObservedObject allows the child view to observe and react to state changes.
When properties marked with @Published change, SwiftUI automatically updates any views
that depend on these properties. In ContentView , when userModel.age is incremented by the
button, both Text("User's Age: \(userModel.age)") and any other view that depends on
userModel.age will re-render.
If you need more control over when changes are announced, you can manually trigger change
announcements by calling objectWillChange.send() . This is less common but can be useful
for complex state management scenarios. For example:
class UserModel: ObservableObject {
let objectWillChange = ObservableObjectPublisher()
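    // A possible continuation (not in the original excerpt): announce changes manually
    // from a plain stored property instead of using @Published.
    var name: String = "" {
        willSet {
            objectWillChange.send()
        }
    }
}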
Using LocalizedStringKey
Define strings that need localization using the LocalizedStringKey struct. This provides a type-
safe way to reference localized strings in your code. Within each localization file, provide
translations for the corresponding LocalizedStringKey values. Each key-value pair goes on its own line, with the key and its translation separated by an equal sign ( = ) and terminated by a semicolon. Open the Localizable.strings file and add key-value pairs for each string you want to localize. For example:
"greeting" = "Hello";
Add separate Localizable.strings files for each language you want to support. For instance,
for Spanish, create Localizable.strings (Spanish) and add:
"greeting" = "Hola";
You can use string interpolation with LocalizedStringKey to dynamically insert values into the
localized string. SwiftUI automatically infers format specifiers for variables passed
to LocalizedStringKey, ensuring proper formatting (e.g., dates, numbers).
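For instance, a sketch of the SwiftUI side (keys assumed to exist in Localizable.strings):
struct GreetingView: View {
    var name = "Swiftable"

    var body: some View {
        VStack {
            Text("greeting")           // resolves the "greeting" key automatically
            Text("Welcome, \(name)!")  // SwiftUI infers a format specifier for the interpolated value
        }
    }
}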
Benefits of SwiftUI's L10n Approach:
Type Safety: LocalizedStringKey helps prevent typos and ensures you're referencing the correct
string for localization.
Code Readability: Separating keys and translations keeps code clean and easier to maintain.
Automatic Updates: The UI automatically updates based on the device's language setting.
If your app needs to support dynamic language switching within the app (not just based on
the system language), you need to manually reload the views when the language changes.
This can be complex and might require additional setup, such as using a custom
environment key to manage the current language and update views accordingly.
By following these practices, you can effectively localize your SwiftUI application to reach a wider
audience and provide a user-friendly experience for users with different languages and cultural
preferences.
Q. Can you explain how SwiftUI manages view updates and rendering
optimizations?
SwiftUI's approach to view updates and rendering optimizations is one of its key strengths. It
uses a declarative paradigm and employs several strategies to efficiently manage view updates
and optimize rendering. Here's how SwiftUI handles this:
Dependency Tracking
When you define a view in SwiftUI, SwiftUI establishes a dependency graph. This graph tracks
how views depend on each other and the data they use. It essentially maps out how changes in
one part of your UI might affect other parts.
Dirty Marking
When a view or its underlying data changes, SwiftUI marks that view and any dependent views
as "dirty." This flag indicates that these views need to be re-evaluated to reflect the
modifications.
Efficient Re-evaluation
SwiftUI doesn't blindly rerender the entire UI hierarchy on every change. It intelligently
determines the minimal set of dirty views that need to be updated based on the dependency
graph. Only the affected views and their subviews are re-rendered, minimizing unnecessary
work.
Memoization
SwiftUI can leverage memoization for views that are expensive to create or render. Memoization
essentially caches the results of view creation or modification, so subsequent calls with the same
parameters can retrieve the cached version instead of recalculating everything. This can
significantly improve performance, especially for complex views.
Efficient Layout
SwiftUI utilizes a declarative layout system based on constraints and modifiers. This allows it to
calculate the layout of your views efficiently, avoiding redundant layout passes and improving
rendering performance.
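A hedged reconstruction of the kind of example discussed just below (the font name is an assumption):

Text("Hello, Swiftable!")
    .font(.custom("AvenirNext-Regular", size: 20, relativeTo: .title2))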
In the above example, the relativeTo parameter tells SwiftUI to scale your custom font relative
to a system text style (in this case, .title2). When the user changes their preferred text size in
system settings, your custom font will scale proportionally, similar to how the system font would.
Using this, you can maintain your app's unique visual identity while still adhering to iOS
accessibility best practices and respecting user preferences for text size.
Device Size Adaptability
Different iPhone and iPad models have varying screen sizes and resolutions. Your app's UI needs
to adapt to these differences to maintain a visually appealing and usable experience.
SwiftUI's Layout System: SwiftUI utilizes a declarative layout system based on stacks
(HStack, VStack) and modifiers. These layout systems automatically adjust the positioning and
sizing of your views based on the available space on the device.
GeometryReader: This view allows you to access the size and geometry of its
container, enabling you to adjust your view's layout dynamically based on the available space.
For example:
GeometryReader { geometry in
Text("Hello, Swiftable!")
.frame(width: geometry.size.width * 0.8)
}
Usage:
zIndex() takes a Double value as an argument.
Higher values bring views to the front, lower values send them to the back.
The default zIndex for all views is 0.
Behavior:
Views with higher zIndex values appear in front of views with lower values.
If two views have the same zIndex, their relative order in the code determines their front-to-
back positioning.
Example:
ZStack {
    Rectangle()
        .fill(Color.red)
        .frame(width: 100, height: 100)
        // the red rectangle keeps the default zIndex of 0

    Rectangle()
        .fill(Color.blue)
        .frame(width: 100, height: 100)
        .offset(x: 40, y: 40)
        .zIndex(1) // this will appear in front of the red rectangle

    Rectangle()
        .fill(Color.green)
        .frame(width: 100, height: 100)
        .offset(x: -40, y: -40)
        .zIndex(-1) // this will appear behind the red rectangle
}
In this example:
The red rectangle has the default zIndex of 0.
The blue rectangle has a zIndex of 1, so it appears in front of the red one.
The green rectangle has a zIndex of -1, so it appears behind the red one.
It has two properties: spacing - a value to set the spacing between items in the stack and
content - a closure that returns Content , which is constrained to be a View .
In the body , it creates a VStack with the specified spacing.
The content() closure is called inside the VStack , placing the child views.
The @ViewBuilder attribute on the content parameter is key here. It allows the user of
CardStack to pass multiple views as if they were writing normal SwiftUI view code.
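A sketch of the CardStack container just described (reconstructed from the prose):

import SwiftUI

struct CardStack<Content: View>: View {
    let spacing: CGFloat
    let content: () -> Content

    init(spacing: CGFloat, @ViewBuilder content: @escaping () -> Content) {
        self.spacing = spacing
        self.content = content
    }

    var body: some View {
        VStack(spacing: spacing) {
            content()
        }
    }
}

// Usage:
// CardStack(spacing: 8) {
//     Text("First card")
//     Text("Second card")
// }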
extension Point {
// error: extensions must not contain stored properties
var z: Double
}
Overriding Limitations
Extensions cannot override existing methods or properties of a type. They can provide an
alternative implementation, but the original method or property will still be available. Additionally,
extensions cannot add or override designated initializers of a class. For example:
class BaseClass {
    func someMethod() {
        print("base method")
    }

    init() {
        // designated initializer
    }
}

extension BaseClass {
    // error: cannot override existing methods or properties
    override func someMethod() {
        print("extension method")
    }
}

Similarly, extensions cannot contain deinitializers. For example:

class MediaAsset {
    let fileName: String

    init(name: String) {
        self.fileName = name
    }
}

extension MediaAsset {
    // error: deinitializers may only be declared within a class
    deinit {
        print("\(fileName) is being deallocated")
    }
}
It's important to understand these limitations when working with extensions. Extensions are
designed to add functionality to existing types in a non-invasive way, without modifying their
underlying structure or breaking encapsulation principles.
Q. How do you declare a type alias? What are some common scenarios
where type aliases are particularly useful?
You can declare a type alias using the typealias keyword followed by the new name you want
to give to an existing type. Here's the basic syntax:
typealias NewTypeName = ExistingType
For example, completion handlers are commonly used to handle asynchronous operations, such
as making network requests. Here's an example of how you might define a completion handler in
a network manager class:
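A sketch of such a manager (type and method names are illustrative), together with the tuple alias discussed next:

import Foundation

// A descriptive alias for a completion handler used by asynchronous requests
typealias NetworkCompletion = (Result<Data, Error>) -> Void

class NetworkManager {
    func fetchData(from url: URL, completion: @escaping NetworkCompletion) {
        URLSession.shared.dataTask(with: url) { data, _, error in
            if let error = error {
                completion(.failure(error))
            } else {
                completion(.success(data ?? Data()))
            }
        }.resume()
    }
}

// An alias that gives an anonymous tuple a readable name
typealias UserLocation = (latitude: Double, longitude: Double)

let office: UserLocation = (latitude: 37.33, longitude: -122.03)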
In this example, UserLocation is a type alias for a tuple with latitude and longitude. Using the
type alias makes the code more readable and easier to understand.
Some common scenarios where type aliases are particularly useful:
When you have a complex type, such as a nested generic type or a closure type, a type alias
can make it more readable and easier to use throughout your code.
Type aliases can give more semantic meaning to types, making your code more self-
documenting and easier to understand.
Since tuples have an anonymous type, you can create type aliases for tuples to make them
more explicit and reusable.
Type aliases can help abstract away implementation details, making your code more flexible
and easier to maintain. For example, you could use a type alias to represent a data structure,
and then change the underlying implementation without affecting the rest of your code.
Type aliases allow you to create custom names for existing data types, closures, or complex
types, thereby simplifying complex type declarations and making code more concise. By
providing descriptive aliases, typealias aids in documenting the intent and purpose of specific
types, making code easier to understand for both the original author and future maintainers.
Additionally, it facilitates code reuse and promotes abstraction by enabling you to abstract away
implementation details behind more expressive names.
Buffer Sharing
When creating a new collection from an existing one using operations like map, filter,
or compactMap, the Swift compiler can optimize these operations by sharing the underlying
buffer between the original and the new collection. This sharing is possible because the original
collection is immutable, so the new collection can safely refer to the same buffer without causing
mutations. For example:
let originalArray = [1, 2, 3, 4, 5]
let mappedArray = originalArray.map { $0 * 2 }
// mappedArray shares the same buffer as originalArray
Loop Unrolling
For small collections, the compiler can unroll loops that iterate over the collection's elements.
Instead of using a loop construct, the compiler generates inline code for each iteration,
potentially eliminating the loop overhead and enabling further optimizations. For example:
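A small illustration (whether the optimizer actually unrolls the loop depends on the build settings):

let smallArray = [1, 2, 3, 4]
var total = 0
for value in smallArray {
    total += value
}
// With optimizations enabled, the compiler may expand this short, fixed-count loop
// into straight-line additions, removing the loop-control overhead.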
Vectorization
For certain operations on collections, the compiler can generate vectorized code that takes
advantage of SIMD (Single Instruction, Multiple Data) instructions available on modern CPUs.
This can significantly improve performance for operations that can be parallelized. For example:
let array1 = [1, 2, 3, 4, 5, 6, 7, 8]
let array2 = [9, 10, 11, 12, 13, 14, 15, 16]
let arraySum = zip(array1, array2).map { $0 + $1 }
// the element-wise additions produced by zip and map can be vectorized
insert()
It is used to insert a new element at a specified position in an array or mutable ordered set. It has
the following impact:
It modifies the original collection by inserting the new element at the specified index.
For arrays, it has an average time complexity of O(n), where n is the number of elements in
the array. This is because inserting an element in the middle of an array requires shifting all
the subsequent elements to make room for the new element.
When you insert an element into an array at a specific index, the new element is added at
that position, and the indices of existing elements after the insertion point are shifted to
accommodate the new element.
For mutable ordered sets (NSMutableOrderedSet), inserting at a specific index likewise requires
shifting the later elements, so it is O(n) in the worst case; note that an ordered set preserves
insertion order (it does not keep elements sorted) while also guaranteeing uniqueness.
var numbers = [1, 2, 3]
numbers.insert(4, at: 1)
print(numbers) // [1, 4, 2, 3]
In terms of performance, append() is generally faster than insert() because it doesn't require
shifting elements. However, if you need to insert an element at a specific position, insert() is the
way to go.
numbers.forEach { num in
    if num / 20 == 2 {
        break // compile error because 'break' or 'continue' are not allowed here
    }
    sum += num
}
Return Statement
In a for-in loop, the return statement exits the entire loop or function scope. In a forEach loop, the
return statement only exits the current iteration's closure, allowing the loop to continue with the
remaining elements. For example:
Using return statement in for-in loop:
let numbers = [10, 20, 30, 40, 50, 60]
var sum = 0
func testForInLoop() {
for number in numbers {
if number / 20 == 2 {
return
}
sum += number
}
}
testForInLoop() // sum: 60
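Using return statement in forEach loop (a reconstruction of the missing counterpart example; the divisibility print and resetting sum are assumptions chosen to match the output shown below):

sum = 0
numbers.forEach { number in
    if number % 20 == 0 {
        print("Number \(number) is divisible by 20")
    }
    if number / 20 == 2 {
        return // exits only this iteration's closure; the loop continues
    }
    sum += number
}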
// sum: 120
// Number 20 is divisible by 20
// Number 40 is divisible by 20
// Number 60 is divisible by 20
Iteration Style
The for-in loop is a traditional loop construct that iterates over sequences directly, while
the forEach loop is a higher-order function that takes a closure as an argument.
Mutability
In a for-in loop, you can modify the elements of the collection being iterated over, while in
a forEach loop, you cannot modify the collection itself within the closure.
Accessing Indices
In a for-in loop, you can access the indices of the elements using the enumerated() method, but
in a forEach loop, you don't have direct access to the indices.
Performance
There is generally no significant difference between for-in and forEach loops for simple iterations.
However, for complex operations or large collections, the for-in loop can sometimes be more
efficient because it avoids the overhead of creating and invoking closures for each iteration.
Memory Management
Both for-in and forEach loops are memory-efficient when working with value types (e.g., structs,
enums) because they don't create additional copies of the elements. However, when working
with reference types (e.g., classes), the forEach loop can be slightly more memory-efficient
because it captures the elements by reference, whereas the for-in loop may create temporary
copies of the elements.
The choice between for-in and forEach often comes down to personal preference, coding style,
and the specific requirements of your code. The for-in loop is more traditional and may be
preferred in cases where you need to modify the collection or access the index of the elements.
The forEach loop is more functional in nature and is often used when you want to perform an
operation on each element without modifying the collection itself.
Q. How can you customize the encoding and decoding behavior when
working with JSON?
You can customize the encoding and decoding behavior when working with JSON by adopting
the Codable protocol and implementing custom init(from:) and encode(to:) methods. Here are
some common scenarios to use:
Renaming Keys
If your JSON keys don't match the property names in your struct or class, you can use the
CodingKey protocol to provide a mapping. For example:
struct User: Codable {
    let name: String
    let age: Int
    enum CodingKeys: String, CodingKey {
        case name = "user_name", age
    }
}
In this example, the JSON keys are user_name and age . To handle this key mismatch, we
adopt the Codable protocol and provide a custom CodingKeys enumeration that maps the
property names to the JSON key names. The CodingKeys enum conforms to the CodingKey
protocol, which requires a stringValue property representing the JSON key name. In this case,
we map name to "user_name" and use the default age key.
Encoding/Decoding Nested Objects
For nested objects, you can use the Codable protocol recursively. Suppose we have an app that
displays information about restaurants, including their menus. Here's how we can model this data
using nested objects:
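A sketch of the nested models (the property names are assumptions; the type names come from the surrounding text):

struct MenuItem: Codable {
    let name: String
    let price: Double
}

struct Menu: Codable {
    let title: String
    let items: [MenuItem]
}

struct Restaurant: Codable {
    let name: String
    let menu: Menu
}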
During encoding, Swift will encode the Restaurant object, including its nested Menu and
MenuItem objects, into the appropriate JSON structure. During decoding, Swift will create the
Restaurant instance and automatically decode the nested objects from the JSON data.
However, during decoding, we still want to decode the authToken if it's present in the JSON
data. To achieve this, we use the try? operator when decoding the authToken . If the decoding
is successful, authToken will be assigned the decoded value. If the decoding fails (e.g., the
authToken key is missing in the JSON data), authToken will be set to nil .
By providing a custom init(from:) implementation and defining a CodingKeys enum that excludes
the authToken key, we can selectively ignore properties during encoding while still allowing
them to be decoded when present in the JSON data.
When encoding or decoding a Date object, the default implementation of Codable expects the
date to be represented as a Unix timestamp. However, we might want to encode and decode the
date in a different format, such as "yyyy-MM-dd" .
To achieve this, we can create a custom DateFormatter and use it in custom encoding and
decoding strategies:
let formatter: DateFormatter = {
let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd"
return formatter
}()
extension User {
    enum CodingKeys: String, CodingKey {
        case id, name, birthDate
    }
}
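The custom decoding and encoding described next might look roughly like this (a sketch assuming User is a struct whose stored properties are id: Int, name: String, and birthDate: Date):

extension User {
    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        id = try container.decode(Int.self, forKey: .id)
        name = try container.decode(String.self, forKey: .name)
        let dateString = try container.decode(String.self, forKey: .birthDate)
        guard let date = formatter.date(from: dateString) else {
            throw DecodingError.dataCorruptedError(forKey: .birthDate, in: container,
                                                   debugDescription: "Date is not in yyyy-MM-dd format")
        }
        birthDate = date
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(id, forKey: .id)
        try container.encode(name, forKey: .name)
        try container.encode(formatter.string(from: birthDate), forKey: .birthDate)
    }
}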
In the above code, we implement a custom init(from:) method for decoding. Inside this method,
we decode the id and name properties as usual. For the birthDate property, we first decode
it as a String, then use the DateFormatter to convert it to a Date object.
Further, we implement a custom encode(to:) method for encoding. Inside this method, we
encode the id and name properties as usual. For the birthDate property, we use
the DateFormatter to convert the Date object to a String before encoding it.
With these custom encoding and decoding strategies in place, when you encode a User object,
the birthDate property will be represented as a string in the "yyyy-MM-dd" format. Similarly,
when decoding, the birthDate property will be decoded from a string in the same format.
For example, if you have the following JSON data:
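Something like the following (values are illustrative):

{
    "id": 1,
    "name": "Swiftable",
    "birthDate": "1994-06-02"
}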
By conforming your date types to Codable and implementing custom encoding and decoding
logic, you can handle date formatting according to your specific requirements. This approach
allows you to seamlessly integrate date handling with the Codable protocol and work with various
date formats used in APIs or data formats. Note that, there are other approaches also to handle
the date format in encoding and decoding.
Q. How would you deal with cases where JSON keys don't match your
property names?
When working with JSON data, it's common to encounter situations where the JSON keys don't
match the property names in your structs or classes. In such cases, you can use the CodingKeys
protocol to provide a mapping between the JSON keys and your property names.
Suppose you have the following JSON data representing a user:
{
"user_name": "Swiftable",
"user_email": "[email protected]",
"user_age": 30
}
And you want to map this JSON data to a struct like this:
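For instance, a model like this (the property names are taken from the explanation below):

struct User {
    let name: String
    let email: String
    let age: Int
}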
As you can see, the JSON keys ( user_name , user_email , user_age ) don't match the property
names ( name , email , age ) in the User struct.
To handle this mismatch, you can adopt the Codable protocol and provide a custom CodingKeys
enumeration that maps the JSON keys to your property names like below:
struct User: Codable {
    let name: String
    let email: String
    let age: Int
    enum CodingKeys: String, CodingKey {
        case name = "user_name", email = "user_email", age = "user_age"
    }
}
During encoding and decoding, Swift will use the CodingKeys enumeration to map between the
JSON keys and the property names.
By providing the CodingKeys enumeration, you can seamlessly handle the mismatch between
JSON keys and property names, ensuring that your code can work with various JSON
representations without needing to modify the property names in your model types.
In this example, the urlString property is marked as @objc to make it visible to Objective-C
code, and dynamic to enable dynamic dispatch and allow KVO to work with this property. By
using both @objc and dynamic together, you ensure that the property can be observed using
KVO from Swift or Objective-C code.
class MediaObserver: NSObject {
    let mediaAsset: MediaAsset

    init(mediaAsset: MediaAsset) {
        self.mediaAsset = mediaAsset
        super.init()
        mediaAsset.addObserver(self, forKeyPath: #keyPath(MediaAsset.urlString), options: [.old, .new], context: nil)
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        guard let oldValue = change?[.oldKey], let newValue = change?[.newKey] else { return }
        print("Property 'urlString' changed from '\(oldValue)' to '\(newValue)'")
    }

    deinit { mediaAsset.removeObserver(self, forKeyPath: #keyPath(MediaAsset.urlString)) }
}
In this example, the MediaObserver class registers itself as an observer for the urlString
property of the MediaAsset class using the string-based key path #keyPath(MediaAsset.urlString).
The observeValue(forKeyPath:of:change:context:) method is called whenever this property changes. For example:
// assume `videoAsset` is a MediaAsset instance whose urlString starts as "sample_url"
// and that a MediaObserver is observing it
videoAsset.urlString = "sample_video.mp4"
videoAsset.urlString = "www.example.com/sample_video.mp4"
// Prints:
// Property 'urlString' changed from 'sample_url' to 'sample_video.mp4'
// Property 'urlString' changed from 'sample_video.mp4' to 'www.example.com/sample_video.mp4'
You can see, the urlString property of the MediaAsset instance is changed twice, triggering
the observeValue(forKeyPath:of:change:context:) method in the MediaObserver instance,
which prints the old and new values of the property.
Using keyPath Expressions (new)
Swift 4 introduced key path expressions, which provide a more type-safe way of observing
properties using KVO. For example:
class MediaObserver {
    let mediaAsset: MediaAsset
    var observer: NSKeyValueObservation?

    init(mediaAsset: MediaAsset) {
        self.mediaAsset = mediaAsset
        observer = mediaAsset.observe(\.urlString, options: [.old, .new]) { asset, change in
            guard let oldValue = change.oldValue,
                  let newValue = change.newValue else { return }
            print("Property 'urlString' changed from '\(oldValue)' to '\(newValue)'")
        }
    }

    deinit { observer?.invalidate() }
}
Q. Explain how you can unregister KVO observers and why it's important to
do so. Provide examples of scenarios where failure to unregister observers
can lead to issues.
When using the new approach with key path expressions to observe property changes via Key-
Value Observing (KVO), it's important to properly unregister the observers when they are no
longer needed. Failure to do so can lead to memory leaks and potential crashes in your app.
The observe(_:options:changeHandler:) method returns an NSKeyValueObservation
instance, which represents the observation between the observer and the observed object. To
unregister the observer, you need to call the invalidate() method on this NSKeyValueObservation
instance.
As you can see in the previous example, how we can unregister the observer:
deinit {
print("observer removed")
observer?.invalidate()
}
}
In this example, we store the NSKeyValueObservation instance returned by the observe() method
in the observer property. When the MediaObserver instance is about to be deallocated
(e.g., when the reference holding it is set to nil), its deinit is called, and we call invalidate() on the
stored observation. This ensures that the observation is properly unregistered before the
MediaObserver instance is deallocated.
SceneDelegate:
It is a new class introduced in iOS 13 and iPadOS 13 to support multiple scenes and windows
within an app.
A scene represents a window or a group of windows that display content for a particular task
or mode of operation within the app.
It is responsible for managing the lifecycle events of individual scenes, such as scene
creation, activation, deactivation, and destruction.
It handles scene-specific tasks like configuring the initial user interface, responding to
environment changes (e.g., light/dark mode), and managing state restoration for scenes.
An app can have multiple SceneDelegate instances, one for each active scene, while there is
only one AppDelegate instance for the entire application.
It is responsible for handling scene-based multitasking on iPad, which allows users to have
multiple scenes open simultaneously.
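To make the scene lifecycle concrete, here is a minimal SceneDelegate sketch (the placeholder root view controller is an assumption):

import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    // Called when a new scene is created and connected to the app
    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIViewController() // placeholder root
        window.makeKeyAndVisible()
        self.window = window
    }

    // Called when the scene becomes active and starts receiving events
    func sceneDidBecomeActive(_ scene: UIScene) { }

    // Called when the system releases the scene
    func sceneDidDisconnect(_ scene: UIScene) { }
}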
So, the AppDelegate handles app-level tasks and events, while the SceneDelegate manages the
lifecycle and state of individual scenes or windows within the app. This separation of concerns
allows for better support for multi-window and multi-scene apps, as well as more efficient
management of resources and state for each scene.
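A sketch of the Equatable type described next (reconstructed from the prose):

struct MediaAsset: Equatable {
    let name: String
    let duration: Double

    static func == (lhs: MediaAsset, rhs: MediaAsset) -> Bool {
        return lhs.name == rhs.name && lhs.duration == rhs.duration
    }
}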
In this example, the MediaAsset struct conforms to the Equatable protocol by implementing the
== operator. Two MediaAsset instances are considered equal if they have the same name and
duration . You can customize the comparison of values in this static method as per
requirement.
Comparable Protocol
It is used for defining an order between instances of a particular type. It requires the
implementation of the < operator, which takes two instances of the same type and returns a
boolean value indicating whether the first instance is less than the second. The <= , > , and >=
operators are also provided by default for types conforming to Comparable. For example:
struct MediaAsset: Comparable {
    let name: String
    let duration: Double
    // ordering by duration is an illustrative choice
    static func < (lhs: MediaAsset, rhs: MediaAsset) -> Bool { lhs.duration < rhs.duration }
}
The sorted() method is available for types conforming to Comparable, allowing the array of
MediaAsset instances to be sorted in ascending order based on the < implementation.
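For instance (values are illustrative):

let assets = [MediaAsset(name: "Intro", duration: 12.5),
              MediaAsset(name: "Outro", duration: 4.0)]
let sortedAssets = assets.sorted() // [Outro (4.0), Intro (12.5)]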
The key difference between Equatable and Comparable is that Equatable is used to check
for equality between instances, while Comparable is used to define an order or sorting
criteria between instances.
Both protocols serve different purposes and can be used together if needed. For example, a type
can conform to both Equatable and Comparable protocols to allow for equality checks and
sorting operations on instances of that type.
Q: How does the use of the final keyword impact method dispatch?
The final keyword is used to prevent a class, method, or property from being overridden or
subclassed. When you mark a method as final, it means that subclasses cannot override that
method. This can impact method dispatch, which is the process of selecting the appropriate
implementation of a method to be called at runtime.
Method dispatch is based on the dynamic type of the instance, which is determined at runtime.
This means that when you call a method on an instance, the implementation that is executed is
the one defined in the class of the actual instance, not the class of the variable or constant
holding that instance.
Let’s see an example:
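A sketch of what such an example might look like (type names are illustrative):

class MediaPlayer {
    func play() { print("playing") }
    final func stop() { print("stopped") } // cannot be overridden
}

class StreamingPlayer: MediaPlayer {
    override func play() { print("streaming") }
    // override func stop() { }  // error: instance method overrides a 'final' instance method
}

let player: MediaPlayer = StreamingPlayer()
player.play() // "streaming" – resolved via dynamic dispatch on the runtime type
player.stop() // "stopped" – no override can exist, so the call can be dispatched statically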
Using a static constant
final class Singleton {
    static let shared = Singleton() // created lazily and thread-safely by the runtime

    private init() {}

    func testFunction() {
        // write code here
    }
}
Using a semaphore
Semaphores can be used to synchronize access to the Singleton instance. This approach is
useful when you need to perform some asynchronous initialization before the Singleton instance
is ready to use. For example:
final class Singleton {
    static let shared = Singleton()
    private let semaphore = DispatchSemaphore(value: 0)
    private var initialized = false

    private init() {
        // perform some asynchronous initialization
        DispatchQueue.global().async {
            // initialize the Singleton instance
            self.initialized = true
            self.semaphore.signal()
        }
    }

    func doSomething() {
        semaphore.wait() // blocks until the asynchronous setup has signalled
        if initialized {
            // write code here
        }
    }
}
Using a serial dispatch queue
final class Singleton {
    static let shared = Singleton()
    private let queue = DispatchQueue(label: "com.example.singleton") // label is illustrative

    private init() {}

    func doSomething() {
        queue.sync {
            // write code here
        }
    }
}
These approaches ensure that the Singleton instance is created only once, even in a multi-
threaded environment, and that all access to the instance is synchronized to prevent race
conditions and data corruption. The choice of approach depends on your specific requirements,
performance considerations, and code complexity.
Q. Explain the difference between an array and a set. When using a set, is it
a good choice?
An array and a set are both collection types, but they differ in their structure and behavior.
Order: Arrays maintain the order of elements, while sets are unordered collections. In an array,
elements are stored in a specific sequence, and you can access them by their index. In a set,
however, the order of elements is not guaranteed, and you cannot rely on a specific order when
iterating over the set.
Duplicate Values: Arrays allow duplicate values, meaning you can have multiple occurrences of
the same value within an array. Sets, on the other hand, enforce uniqueness, ensuring that each
value appears only once within the set.
Access and Retrieval: In arrays, you can access elements by their index using subscript notation
(e.g., array[0] ). Sets don't have a concept of indexing, as they are unordered collections.
Instead, you can check if a value is a member of a set using the contains(_:) method.
Performance: Sets are optimized for fast membership testing and uniqueness operations.
Checking if a value is present in a set or adding a new value to a set is generally faster than
performing the same operations on an array, especially for large collections.
Operations: Sets support set operations like union, intersection, subtraction, and symmetric
difference, which allow you to combine or manipulate sets in various ways. Arrays don't have
built-in support for these kinds of operations out of the box.
Use Cases: Arrays are commonly used when you need to maintain the order of elements, access
elements by index, or allow duplicates. Sets are preferred when you need to ensure uniqueness,
perform membership testing efficiently, or work with set operations.
Imagine you have a text file containing a large text, and you want to find all the unique words
present in the file. This can be useful for tasks like text analysis, word frequency calculations, or
spell-checking. For example:
do {
    let fileContent = try String(contentsOfFile: filePath, encoding: .utf8) // `filePath` points to the text file
    let uniqueWords = Set(fileContent.lowercased().components(separatedBy: .whitespacesAndNewlines).filter { !$0.isEmpty })
    print("Unique word count: \(uniqueWords.count)")
} catch {
    print("Failed to read file: \(error)")
}
By using a set, we can efficiently find and store the unique words from the text file without
worrying about duplicates. Sets provide a convenient way to handle unique values and perform
operations like counting or iterating over them.
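A sketch of the struct-based version the next paragraph refers to (property and function names follow the prose):

struct MediaInfo {
    let fileName: String
    let type: String
    let duration: Double
}

func getMediaInfo(fileName: String, type: String, duration: Double) -> MediaInfo {
    return MediaInfo(fileName: fileName, type: type, duration: duration)
}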
In the above example, instead of using a tuple, we create a MediaInfo struct that encapsulates
the media's fileName, type, and duration. The getMediaInfo function now returns an instance
of MediaInfo type.
Now, let’s return all the values together using tuple. For example:
func getMediaInfo(fileName: String, type: String, duration: Double) -> (name:
String, type: String, duration: Double) {
return (fileName, type, duration)
}
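Calling the tuple-returning version and reading the named elements (values are illustrative):

let info = getMediaInfo(fileName: "Video_123", type: "MOV", duration: 120.0)
print(info.name)     // "Video_123"
print(info.duration) // 120.0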
In this refactored example, we use a tuple with named elements (name, type, and duration). This
makes it much easier to understand what each value represents when we access them later.
Using named tuples makes the code more self-documenting and easier to read and maintain,
especially when dealing with multiple related values. It's a perfect choice when you need to
return or group multiple values together, and you want to make it clear what each value
represents.
Another advantage of using tuples is that they can contain values of different data types, which
can be useful when you need to group heterogeneous data together.
Overall, tuples are a concise and convenient way to group related values together, and using
named tuples can significantly improve code readability and maintainability.
Q. How do you use API availability and handle the fallback condition?
Both #available and @available are used to handle API availability and provide fallback
implementations when specific APIs or language features are not available on certain platforms
or versions.
#available
It is a compilation condition that is used to conditionally include or exclude code based on the
availability of a specific API or language feature. It is typically used in combination
with #if , #else , and #endif statements.
Suppose you want to format a Date object into a string representation. Let's define an extension
on Date that adds a new method returning the date as a string in an abbreviated format, omitting
the time. For example:
extension Date {
func toString() -> String {
if #available(iOS 15.0, *) {
return self.formatted(date: .abbreviated, time: .omitted)
} else {
let formatter = DateFormatter()
formatter.dateStyle = .short
formatter.timeStyle = .none
return formatter.string(from: self)
}
}
}
By using the #available condition and providing a fallback implementation, this code ensures
that the toString() method works on both newer and older versions of iOS. It takes advantage
of the new formatted API when available, and falls back to the older DateFormatter approach
when the new API is not supported.
The @available attribute can also be used with other conditions, such as specific macOS
versions, watchOS versions, or even custom conditions using #if statements.
@available:
The @available attribute is used to mark declarations (such as functions, classes, or
properties) as available or unavailable based on specific platform versions or other conditions.
For example:
@available(iOS 16.0, *)
func useNewAPI() {
    // relies on APIs introduced in iOS 16
}

func makeAPICall() {
    if #available(iOS 16.0, *) {
        useNewAPI()
    } else {
        // fallback for older iOS versions
    }
}
These features allow you to write code that is compatible with multiple platforms and versions,
while still taking advantage of new APIs and language features as they become available. They
help ensure that your code doesn't crash or exhibit unexpected behavior on older platforms or
versions due to the use of unsupported APIs or features.
Q. What are the in-out parameters and when are they useful?
The inout parameters are used to pass arguments by reference instead of by value. This means
that any modifications made to the argument inside the function will persist after the function
call, effectively changing the original value.
An inout parameter is defined by prefixing the parameter with the inout keyword, like this:
func updateValue(_ value: inout Int) {
value += 10
}
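The swapValues function used below could be defined with two inout parameters, for example:

func swapValues(_ a: inout Int, _ b: inout Int) {
    let temp = a
    a = b
    b = temp
}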
var x = 10
var y = 20
swapValues(&x, &y)
// x is now 20, and y is now 10
Note that inout parameters cannot be marked as let constants or be part of a function's return
type. Also, when passing an inout argument, you must pass a variable (not a literal or constant),
as it requires a memory address to modify the value.
While inout parameters can be useful in certain situations, they should be used judiciously, as
they can make code harder to reason about and potentially introduce side effects. In many cases,
it's preferable to return a new value from a function instead of modifying an existing one.
When you force-unwrap an optional, you're telling the compiler that you know for certain that the
optional contains a value, and you want to directly access that value. If the optional is nil
(contains no value), force-unwrapping it will cause a runtime error.
It's generally recommended to avoid forced unwrapping as much as possible because it can lead
to crashes if the optional is nil. Forced unwrapping should only be used in situations where you
are absolutely certain that the optional contains a value, and it's safe to force-unwrap it.
Situations where forced unwrapping might be appropriate:
During initialization: When you're initializing a constant or variable, and you know for sure that
the initial value is non-nil, you can force-unwrap it. This is common when dealing with values that
are required for the instance to be created.
After checking for nil: If you've already checked that an optional is not nil using an if
let statement or other ways, you can force-unwrap it safely within the scope where it's known
to be non-nil.
In staging environments: When you're working on a project and you know that certain optionals
will always have values during development or testing, you can force-unwrap them to simplify
your code. However, this should be avoided in production code.
Situations where forced unwrapping should be avoided:
External data sources: User input and data from external sources can be unreliable, and force-
unwrapping optionals in these cases can lead to crashes.
In long-running tasks: Force-unwrapping in code that runs frequently or is critical to your app’s
functionality can increase the risk of crashes and should be avoided.
Alternative way: Swift provides safer techniques for working with optionals, such as optional
binding ( if let ), optional chaining, and nil-coalescing operators. Using these techniques is
generally preferred over force-unwrapping.
So, forced unwrapping should be used sparingly and only in situations where you have absolute
certainty that the optional contains a value. In most cases, it's better to use safer techniques for
handling optionals to avoid runtime errors and crashes.
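A small illustration of the difference (values are illustrative):

let ageText: String? = "30"
let forcedAge = Int(ageText!)                 // crashes at runtime if ageText is nil
if let text = ageText, let safeAge = Int(text) {
    print(safeAge)                            // runs only when both values exist
}
let fallbackAge = Int(ageText ?? "") ?? 0     // nil-coalescing alternative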
Any can represent instances of any type, including value types and reference types,
while AnyObject can only represent instances of class types (reference types). For example:
class TestClass { }
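Building on that class, a quick sketch of the difference (TestStruct is added for illustration):

struct TestStruct { }

let anything: [Any] = [TestClass(), TestStruct(), 42, "text"] // any value or reference type
let objectsOnly: [AnyObject] = [TestClass()]                  // class instances only
// let invalid: [AnyObject] = [TestStruct()]                  // error: TestStruct is not a class type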
Generics allow you to write code that can work with any type, subject to constraints,
while Any and AnyObject are used to represent unknown or dynamically-typed values. For
example:
class TestClass { }
When working with Any and AnyObject, you need to perform typecasting to access the
underlying value, while Generics allow you to work with the actual type directly. For example:
var value: Any = 42
if let intValue = value as? Int {
print("The value is \(intValue)") // type casting required with `Any`
}
struct Stack<T> {
    var items: [T] = []
    mutating func push(_ item: T) { items.append(item) } // works with T directly, no casting
}
Q. How equality (==) is different from identity (===) when using the
Equatable protocol?
The Equatable protocol is used to provide a way to compare two instances of a type for equality.
It requires you to implement the == operator, which defines how instances of your type
(including custom types) should be compared for equality. This is particularly useful when you
have custom types and you need to compare them in the code.
When we talk about equality, we are comparing the values or contents of two instances to
determine if they are the same. When a type conforms to the Equatable protocol, it provides a
way to compare two instances of that type to see if their values are equal. This comparison is
typically done using the == operator, which checks if the properties of the instances are equal.
Identity refers to the memory address or location of an instance in memory. It determines whether
two references point to the same instance, rather than comparing the values contained within
those instances. Identity comparison is done using the === operator.
For example:
class Point: Equatable {
    let x: Int
    let y: Int
    init(x: Int, y: Int) { self.x = x; self.y = y }
    static func == (lhs: Point, rhs: Point) -> Bool { lhs.x == rhs.x && lhs.y == rhs.y }
}
Usage:
let point1 = Point(x: 3, y: 4)
let point2 = Point(x: 3, y: 4)
let point3 = point1
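Comparing them (the results follow from the implementation above):

print(point1 == point2)  // true  – equal values
print(point1 === point2) // false – different instances
print(point1 === point3) // true  – same instance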
In the example above, point1 and point2 are different instances of the Point class, but their
values are equal according to the custom implementation of the == operator. However, point1
and point3 refer to the same instance in memory, so they are equal using both the == and
=== operators.
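A minimal sketch of the kind of singleton being described (the request method is illustrative):

import Foundation

final class NetworkManager {
    static let shared = NetworkManager() // the shared instance
    private init() {}                    // prevents creating additional instances

    func request(_ url: URL) {
        // perform the network call
    }
}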
We define a static property called shared that holds the shared instance of
the NetworkManager class. This property is initialized with an instance of the class when the
class is first accessed.
However, it's important to note that singletons should be used with caution, as they can
introduce global state and potential thread-safety issues if not implemented correctly.
Additionally, overusing singletons can lead to tight coupling and make it harder to test and
maintain your code.
init(radius: Double) {
self.radius = radius // using 'self' to disambiguate between the
property and parameter with the same name
}
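A sketch of the ProgrammingBook factory the next paragraph describes (the method body and the subclass are reconstructed for illustration):

class ProgrammingBook {
    required init() {}

    static func create() -> Self {
        return Self() // returns the concrete type the method is called on
    }
}

class SwiftBook: ProgrammingBook {}

let book = SwiftBook.create() // inferred as SwiftBook, not ProgrammingBook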
In the above example, the use of Self as the create() function's return type makes it return an
instance of the concrete type the method is called on, rather than always a ProgrammingBook. This
makes the code more flexible and reusable, because create() works for ProgrammingBook and for
every subclass without being rewritten.
@discardableResult
func validateLength(_ string: String, minLength: Int, maxLength: Int) ->
Bool {
let length = string.count
return length >= minLength && length <= maxLength
}
}
In the above code, you're calling validateLength() to validate the length of the password
string. However, you're not assigning the return value to a variable or constant because you
might only care about the side effects of the validation (e.g., displaying an error message or
updating the UI).
Without the @discardableResult attribute, the compiler would generate a warning because
you're not using the return value of the function. By applying this attribute, you're explicitly telling
the compiler that it's okay to discard the result in cases where you're only interested in the side
effects of the function call.
Q. How does the open access level differ from the public?
The open and public access levels both allow an entity (class, struct, protocol, property, method,
etc.) to be accessible from anywhere, including external modules. However, there's an important
difference between the both:
public access level:
Entities marked as public can be accessed and used within the defining module as well as
from any other module that imports the defining module.
However, public entities cannot be subclassed or overridden outside of the defining module.
open access level:
Like public, open entities can be accessed and used from within the defining module and
from any other module that imports the defining module.
Additionally, open classes can be subclassed, and open class members (properties and
methods) can be overridden by subclasses in other modules.
We can say that open access goes one step further than public by allowing code outside the
defining module to subclass and override the functionality of a class or class members.
The open access level is primarily used when you want to create a public API that can be
extended and customized by client code in other modules. It's commonly used in frameworks
and libraries that are intended to be inherited from or overridden by client applications.
Also, public access is used when you want to create a public API that can be used by other
modules, but without allowing subclassing or overriding of the functionality outside the defining
module.
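For example (module and type names are illustrative):

// Inside a framework module named UIToolkit
open class ThemedButton {
    open func applyTheme() { }
}

public class AnalyticsLogger {
    public func log(_ event: String) { }
}

// Inside an app that imports UIToolkit
class RoundedButton: ThemedButton {        // allowed: open classes can be subclassed
    override func applyTheme() { }         // allowed: open members can be overridden
}
// class CustomLogger: AnalyticsLogger { } // error: public (non-open) classes cannot be subclassed outside their module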
Q. What is the default access level for properties, methods, and classes?
The default access level for properties, methods, and classes is internal. The internal access level
means that the entity (property, method, or class) is accessible within the same source file and
also from any other source file that belongs to the same module (target/framework/app).
What’s meaning of module?
A module is a single unit of code distribution, such as a framework or application. When you
create a new Xcode project, the default target (like an app or framework target) is considered a
module.
So when we say an internal entity is accessible within the same module, it means:
It can be used by any source file within that same target/framework/app.
But it cannot be accessed from outside that target/framework/app, like from another app or
framework that you may have in your project.
However, if you want to make an entity accessible from other modules or publicly, you need to
explicitly specify a different access level:
public: The entity is accessible from anywhere, including other modules that import the defining module.
private: The entity is accessible only within the enclosing declaration and its extensions in the same
source file.
fileprivate: The entity is accessible from anywhere within the same source file.
For example:
// by default, this class is `internal`
class TestClass {
    var value = 0 // members default to `internal` as well
}
If you don't explicitly specify an access level, Swift uses the default internal access level. This
helps to encapsulate the implementation details and control the visibility of your code.
In general, it's a good practice to use the most restrictive access level that meets your
requirements. This promotes encapsulation and helps prevent accidental access or modification
of your code from other parts of the codebase.
Preventing Overrides
If you want to prevent a method or property from being overridden in subclasses, you can mark
them as final . This effectively blocks inheritance for that particular member. For example:
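A minimal illustration of the rule (type and method names are illustrative):

class Document {
    final func save() {
        // subclasses cannot override save()
    }
}

class PDFDocument: Document {
    // override func save() { }  // error: instance method overrides a 'final' instance method
}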
// File: PersonExtension.swift
extension Person {
func canVote() -> Bool {
// can access the fileprivate 'isAdult' property
return isAdult
}
}
Shared Code
If several related types and extensions live together in one source file (as is common inside a
framework or library), fileprivate can be useful. It lets those declarations share implementation
details with one another while still hiding them from the rest of the module. For example:
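Assuming a manager type like the following, declared in the same file as the extension shown below (the User placeholder type is an assumption):

// File: UserManager.swift
struct User { let name: String }

class UserManager {
    fileprivate var users: [User] = []
}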
// you can still access 'users' array in UserManager without changing access
level
extension UserManager {
func addUser(_ user: User) {
users.append(user)
}
}
Nested Types
In general, private should be preferred when you want to strictly limit the visibility of an entity to
the current file or type. However, fileprivate can be a useful alternative when you need to share
implementation details within the same module or when you anticipate the need for future code
refactoring or evolution.
It's important to note that both private and fileprivate are used to encapsulate implementation
details and promote code modularity. The choice between them depends on the specific
requirements and considerations of your project.
Q. Why and when switch is better than if-else? Explain with a use case.
The choice between using a switch statement or an if-else statement depends on the specific
use case and the nature of the conditions being evaluated. However, there are certain situations
where using a switch statement is generally considered better than using an if-else
statement.
The switch statement can often make the code more readable and easier to maintain,
especially when dealing with multiple conditions or cases.
The cases in a switch statement are clearly separated, making it easier to understand the
logic and add or modify cases in the future.
With if-else statements, the logic can become nested and harder to follow as the number
of conditions increases.
If you miss a case in a switch statement, the compiler will produce an error, prompting you to
handle the missing case.
The switch statement support powerful pattern matching capabilities, allowing you to match
values based on complex patterns and conditions.
In some cases, switch statements can be more efficient than nested if-else statements,
especially when dealing with a large number of conditions.
The compiler can optimize switch statements more effectively, leading to better performance
in certain scenarios.
It's recommended to use switch statements when:
You have multiple, distinct cases to handle.
You're working with enumerations, tuples, or complex data structures.
You want to ensure exhaustiveness and catch potential logic errors during compilation.
You want to take advantage of pattern matching capabilities.
Imagine you have a function that calculates the area of different geometric shapes based on the
provided shape type and dimensions.
Using nested if-else statements:
func calculateArea(shapeType: String, dimension1: Double, dimension2: Double? = nil) -> Double {
    if shapeType == "circle" {
        return Double.pi * dimension1 * dimension1
    } else if shapeType == "rectangle" {
        if let width = dimension2 {
            return dimension1 * width
        }
    } else if shapeType == "triangle" {
        if let height = dimension2 {
            return 0.5 * dimension1 * height
        }
    } else {
        // handle invalid shape type
        return 0.0
    }
    // handle missing dimensions
    return 0.0
}
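For comparison, a switch-based version (a sketch; modeling the shapes as an enum is an illustrative choice) keeps every case explicit and exhaustive:

enum Shape {
    case circle(radius: Double)
    case rectangle(length: Double, width: Double)
    case triangle(base: Double, height: Double)
}

func calculateArea(of shape: Shape) -> Double {
    switch shape {
    case .circle(let radius):
        return Double.pi * radius * radius
    case .rectangle(let length, let width):
        return length * width
    case .triangle(let base, let height):
        return 0.5 * base * height
    }
}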
Using Leading Constraint
view.addSubview(label)
view.addSubview(button)
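The constraints described in the points below might be set up like this (the vertical constraints are assumptions):

NSLayoutConstraint.activate([
    label.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 20),
    label.topAnchor.constraint(equalTo: view.topAnchor, constant: 20),
    button.leadingAnchor.constraint(equalTo: label.trailingAnchor, constant: 10),
    button.centerYAnchor.constraint(equalTo: label.centerYAnchor)
])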
In this example:
The label is positioned 20 points from the leading edge of the view.
The button is positioned 10 points from the trailing edge of the label.
This layout will automatically adapt to RTL languages, positioning the label and button
correctly.
Using Left Constraint
NSLayoutConstraint.activate([
    label.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 20),
    label.topAnchor.constraint(equalTo: view.topAnchor, constant: 20),
    button.leftAnchor.constraint(equalTo: label.rightAnchor, constant: 10),
    button.centerYAnchor.constraint(equalTo: label.centerYAnchor)
])
In this example:
The label is positioned 20 points from the left edge of the view.
The button is positioned 10 points from the right edge of the label.
This layout will not adapt to RTL languages. The label and button will remain fixed on the left
side of the view.
Use leading constraint for adaptive layouts that support both LTR and RTL languages and use
left constraint for fixed positioning that does not need to adapt to different reading directions.
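Without reuseIdentifier
A sketch of the non-reusing data source method that the next paragraph describes:

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // a brand-new cell is allocated for every row request
    let cell = UITableViewCell(style: .default, reuseIdentifier: nil)
    cell.textLabel?.text = "Item \(indexPath.row)"
    return cell
}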
In this example, each time a cell is needed, a new instance of UITableViewCell is created. As
you scroll through the table view, new cell objects are continuously created, leading to high
memory usage and poor performance.
With reuseIdentifier
When you use a reuseIdentifier , the table view maintains a queue of reusable cells. As cells
scroll off-screen, they are placed in this queue and reused for new cells that scroll into view. This
minimizes the number of cell objects in memory and enhances performance. For example:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // reuse a queued cell if one is available; otherwise create a new one
    let cell = tableView.dequeueReusableCell(withIdentifier: "CellIdentifier")
        ?? UITableViewCell(style: .default, reuseIdentifier: "CellIdentifier")
    cell.textLabel?.text = "Item \(indexPath.row)"
    return cell
}
In this example, the table view first tries to dequeue a reusable cell from the queue using the
reuseIdentifier . This approach significantly reduces the number of cell objects created, as
cells are reused when they scroll off-screen, leading to lower memory usage and smoother
scrolling.
Impacts of using reuseIdentifier
Memory usage is significantly reduced as cells are reused.
Scrolling performance is improved because cell creation is minimized.
Resource management (like image loading) becomes more efficient.
By using reuse identifiers, we create a more efficient and performant table view that can handle
large amounts of data smoothly, providing a better user experience.
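A sketch of the Hashable type described below (hashing and comparing by id are assumptions):

struct Person: Hashable {
    let id: Int
    let name: String
    let age: Int

    func hash(into hasher: inout Hasher) {
        hasher.combine(id) // id uniquely identifies a person
    }

    static func == (lhs: Person, rhs: Person) -> Bool {
        return lhs.id == rhs.id
    }
}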
In this example:
We conform to Hashable , which implicitly conforms to Equatable .
We implement hash(into:) to create a unique hash value.
We implement == to define equality.
let person1 = Person(id: 1, name: "Alex Murphy", age: 30)
let person2 = Person(id: 2, name: "Tina Martin", age: 25)
// immutable array
let numbers: [Int] = [1, 2, 3, 4, 5]

// mutable array
var mutableNumbers: [Int] = [1, 2, 3, 4, 5]
mutableNumbers.append(6)

// type safety
// mutableNumbers.append("seven") // compile-time error

// accessing elements
let firstNumber = numbers[0]
print(firstNumber) // Prints: 1
NSArray
NSArray is a class provided by the Foundation framework for managing ordered collections of
objects. It is a reference type and is part of Objective-C’s collection classes.
Type Safety: NSArray is not type-safe. It can store any type of object, and type checks are
performed at runtime.
Mutability: NSArray is immutable. Its mutable counterpart is NSMutableArray.
Interoperability: NSArray can be used in Swift through bridging, but lacks Swift’s type
safety and generics.
Syntax: Uses Objective-C syntax and APIs.
Example:
let courseNames: NSArray = ["iOS", "Swift", "Combine"]
// accessing an element
let courseName = courseNames[2] as? String
print(courseName) // Prints: Optional("Combine")
Use Cases
Use Array in Swift: When working primarily in Swift, use Array for its type safety,
performance, and integration with Swift’s language features.
Use NSArray in Objective-C: When working in Objective-C or interfacing with Objective-C
APIs, use NSArray and NSMutableArray.
Bridging: Swift’s Array can be seamlessly bridged to NSArray when interoperating with
Objective-C code, but be mindful of type safety issues.
In modern development, it's generally recommended to use Swift's Array unless you specifically
need NSArray for Objective-C interoperability or when working with APIs that require it. Swift's
Array provides better type safety, performance, and a more idiomatic Swift experience.
Optional Chaining
Purpose: To safely access properties, methods, and subscripts on an optional that might be nil.
Syntax: Uses a question mark (?) after the optional value.
Usage: Allows you to call properties or methods on an optional without unwrapping.
Propagation: If any part of the chain is nil, the entire expression returns nil.
Return type: The return type of an optional chain is always an optional.
Example:
let streetName = person?.address?.street?.name
Key Differences
Unwrapping
Optional binding explicitly unwraps the optional.
Optional chaining does not unwrap the optional.
Scope
Optional binding creates a new scope where the unwrapped value is available.
Optional chaining doesn't create a new scope.
Usage context
Optional binding is typically used when you need to perform multiple operations with the
unwrapped value.
Optional chaining is used for navigating through a series of optional properties or methods.
Nil handling
In optional binding, you can provide an else clause for nil cases.
In optional chaining, if any part is nil, the entire expression quietly returns nil.
Return value
Optional binding doesn't change the type of the unwrapped value.
Optional chaining always returns an optional, even if the final property is non-optional.
Example:
struct Address { var street: String? }
struct Person { var address: Address? }

let person: Person? = Person(address: Address(street: "General Street Road"))

// optional chaining
let streetName = person?.address?.street
print(streetName) // Optional("General Street Road")

// optional binding used together with chaining
if let street = person?.address?.street {
    print(street) // "General Street Road"
}
Optional binding is generally used when you need to perform operations with the unwrapped
value, while optional chaining is more for safely navigating through a chain of optional values.
Often, you'll use them together, as shown in the last part of the above example.
End of Content