Clean Code
Principles and
Patterns
A Software Practitioner’s
Handbook
PETRI SILÉN
Highly-Available Microservices Principle 57
Observable Microservices Principle 58
Software Versioning Principles 59
Use Semantic Versioning Principle 59
Avoid Using 0.x Versions Principle 59
Don't Increase Major Version Principle 60
Implement Security Patches and Bug Corrections to All Major Versions Principle 60
Avoid Using Non-LTS Versions in Production Principle 61
Git Version Control Principle 61
Feature Branch 61
Feature Toggle 62
Architectural Patterns 62
Event Sourcing Pattern 62
Command Query Responsibility Segregation (CQRS) Pattern 63
Distributed Transaction Patterns 64
Saga Orchestration Pattern 65
Saga Choreography Pattern 67
Preferred Technology Stacks Principle 69
Object-Oriented Design Principles 73
SOLID Principles 73
Single Responsibility Principle 74
Open-Closed Principle 78
Liskov's Substitution Principle 82
Interface Segregation and Multiple Inheritance Principle 85
Program Against Interfaces Principle (Generalized Dependency Inversion Principle) 91
Clean Microservice Design Principle 99
Uniform Naming Principle 104
Naming Interfaces and Classes 104
Naming Functions 105
Preposition in Function Name 108
Example 1: Renaming JavaScript Array Methods 109
Example 2: Renaming C++ Casting Expressions 110
Naming Method Pairs 111
Naming Boolean Functions (Predicates) 112
Naming Builder Methods 115
Naming Methods with Implicit Verbs 116
Naming Property Getter Functions 116
Naming Lifecycle Methods 117
Naming Function Parameters 117
Encapsulation Principle 118
Immutable Objects 118
Don't Leak Modifiable Internal State Outside an Object Principle 119
Don't Assign From a Method Parameter to a Modifiable Field 120
Real-life Example of Encapsulation Violation: React Class Component's State 121
Object Composition Principle 123
Domain-Driven Design Principle 130
Domain-Driven Design Example 1: Data Exporter Microservice 131
Domain-Driven Design Example 2: Anomaly Detection Microservice 137
Design Patterns 139
Design Patterns for Creating Objects 139
Factory Pattern 139
Abstract Factory Pattern 140
Factory Method Pattern 142
Builder Pattern 144
Singleton Pattern 146
Prototype Pattern 148
Object Pool Pattern 150
Structural Design Patterns 154
Composite Pattern 154
Facade Pattern 156
Bridge Pattern 158
Strategy Pattern 162
Adapter Pattern 162
Proxy Pattern 166
Decorator Pattern 167
Flyweight Pattern 169
Behavioral Design Patterns 170
Chain of Responsibility Pattern 171
Observer Pattern 175
Command/Action Pattern 177
Iterator Pattern 184
State Pattern 185
Mediator Pattern 187
Template Method Pattern 204
Memento Pattern 206
Visitor Pattern 207
Null Object Pattern 211
Don't Ask, Tell Principle 211
Law of Demeter 214
Avoid Primitive Type Obsession Principle 215
Dependency Injection (DI) Principle 223
Avoid Code Duplication Principle 229
Inheritance in Cascading Style Sheets (CSS) 232
Coding Principles 235
Uniform Variable Naming Principle 235
Naming Integer Variables 236
Naming Floating-Point Number Variables 236
Naming Boolean Variables 237
Naming String Variables 238
Naming Enum Variables 238
Naming Collection (Array, List, and Set) Variables 239
Naming Map Variables 239
Naming Pair and Tuple Variables 240
Naming Object Variables 240
Naming Optional Variables 241
Naming Function Variables (Callbacks) 241
Naming Class Properties 243
General Naming Rules 243
Use Short, Common Names 243
Pick One Name And Use It Consistently 243
Avoid Obscure Abbreviations 244
Avoid Too Short Or Meaningless Names 244
Uniform Source Code Repository Structure Principle 244
Java Source Code Repository Structure 245
C++ Source Code Repository Structure 245
JavaScript/TypeScript Source Code Repository Structure 246
Domain-Based Source Code Structure Principle 247
Avoid Comments Principle 255
Name Things Properly 255
Single Return Of Named Value At The End Of Function 257
Return Type Aliasing 258
Extract Constant for Boolean Expression 260
Extract Named Constant or Enumerated Type 261
Extract Function 261
Name Anonymous Function 263
Avoiding Comments in Bash Shell Scripts 264
Function Single Return Principle 265
Prefer a Statically Typed Language for Production Code Principle 269
Function Arguments Might Be Given in Wrong Order 269
Function Argument Might Be Given with Wrong Type 269
Not All Function Arguments Are Given 269
Function Return Value Type Might Be Misunderstood 270
Forced to Write Public API Comments 270
Type Errors Are Not Found in Testing 270
Refactoring Principle 270
Rename 271
Extract Method 272
Extract Constant 272
Replace Conditionals with Polymorphism 274
Introduce Parameter Object 275
Invert If Statement 276
Static Code Analysis Principle 277
Common Static Code Analysis Issues 278
Error/Exception Handling Principle 280
Handling Checked Exceptions in Java 287
Returning Errors 288
Returning Failure Indicator 288
Returning an Optional Value 289
Returning an Error Object 289
Adapt to Wanted Error Handling Mechanism 291
Asynchronous Function Error Handling 293
Functional Exception Handling 294
Stream Error Handling 297
Don't Pass or Return Null Principle 298
Avoid Off-By-One Errors Principle 299
Be Critical When Googling Principle 300
Optimization Principle 300
Optimization Patterns 301
Optimize Busy Loops Only Pattern 301
Remove Unnecessary Functionality Pattern 302
Copy Memory in Chunks Pattern (C++) 302
Object Pool Pattern 302
Replace Virtual Methods with Non-Virtual Methods Pattern (C++) 303
Inline Methods Pattern (C++) 303
Use Unique Pointer Pattern (C++) 303
Share Identical Objects a.k.a Flyweight Pattern 304
Testing Principles 305
Functional Testing Principles 305
Unit Testing Principle 306
Test-Driven Development (TDD) 308
Naming Conventions 312
Mocking 313
UI Component Unit Testing 329
Software Component Integration Testing Principle 330
UI Integration Testing 339
Setting Up Integration Testing Environment 340
End-to-End (E2E) Testing Principle 343
Non-Functional Testing Principle 346
Performance Testing 346
Data Volume Testing 347
Stability Testing 348
Reliability Testing 348
Stress and Scalability Testing 349
Security Testing 350
Other Non-Functional Testing 350
Visual Testing 351
Security First Principle 353
Threat Modelling 353
Decompose Application 353
Determine and Rank Threats 354
Determine Countermeasures and Mitigation 355
Security Features 355
Authentication and Authorization 355
OpenID Connect Authentication and Authorization in Frontend 355
OAuth2 Authorization in Backend 367
Password Policy 371
Cryptography 372
Denial-of-service (DoS) Prevention 373
SQL Injection Prevention 373
Security Configuration 373
Automatic Vulnerability Scanning 374
Integrity 374
Error Handling 374
Audit Logging 374
Input Validation 374
Validating Numbers 375
Validating Strings 375
Validating Arrays 375
Validating Objects 375
Validation Library Example 376
API Design Principles 377
Frontend Facing API Design Principles 377
JSON-RPC API Design Principle 377
REST API Design Principle 379
Creating a Resource 379
Reading Resources 381
Updating Resources 384
Deleting Resources 385
Executing Non-CRUD Actions on Resources 386
Resource Composition 386
HTTP Status Codes 387
HATEOAS and HAL 388
Versioning 389
Documentation 389
Implementation Example 390
GraphQL API Design 392
Subscription-Based API Design 400
Server-Sent Events (SSE) 400
GraphQL Subscriptions 403
WebSocket Example 404
Inter-Microservice API Design Principles 416
Synchronous API Design Principle 416
gRPC-Based API Design Example 416
Asynchronous API Design Principle 419
Request-Only Asynchronous API Design 419
Request-Response Asynchronous API Design 420
Databases And Database Principles 423
Relational Databases 423
Structure of Relational Database 424
Use Object Relational Mapper (ORM) Principle 424
Entity/Table Relationships 427
One-To-One/Many Relationships 427
Many-To-Many Relationships 429
Use Parameterized SQL Statements Principle 430
Normalization Rules 432
First Normal Form (1NF) 433
Second Normal Form (2NF) 433
Third Normal Form (3NF) 433
Document Database Principle 434
Key-Value Database Principle 436
Wide-Column Database Principle 437
Search Engine Principle 442
Concurrent Programming Principles 443
Threading Principle 443
Parallel Algorithms 444
Thread Safety Principle 445
Synchronization Directive 446
Atomic Variables 446
Concurrent Collections 447
Mutexes 448
Spinlocks 449
Teamwork Principles 453
Use Agile Framework Principle 453
Define the Done Principle 454
You Write Code for Other People Principle 455
Avoid Technical Debt Principle 455
Software Component Documentation Principle 457
Code Review Principle 458
Focus on Object-Oriented Design 458
Focus on Proper and Uniform Naming 458
Don't Focus on Premature Optimization 458
Detect Possible Malicious Code 458
Uniform Code Formatting Principle 459
Highly Concurrent Development Principle 459
Dedicated Microservices and Microlibraries 459
Dedicated Domains 459
Follow Open-Closed Principle 460
Pair Programming Principle 460
Well-Defined Development Team Roles Principle 461
Product Owner 461
Scrum Master 461
Software Developer 462
Test Automation Developer 462
DevOps Engineer 463
UI Designer 463
DevSecOps 465
SecOps Lifecycle 466
DevOps Lifecycle 466
Plan 467
Code 467
Build and Test 467
Release 468
Example Dockerfile 468
Example Kubernetes Deployment 469
Example CI/CD Pipeline 473
Deploy 477
Operate 478
Monitor 478
Logging 480
OpenTelemetry Log Data Model 480
PrometheusRule Example 482
Appendix A 483
About the Author
Petri Silén is a seasoned software developer working at Nokia Networks in Finland, with almost 30 years of industry experience. He has done both frontend and backend development and has solid competence in multiple programming languages, including C++, Java, and JavaScript/TypeScript.
He started his career at Nokia Telecommunications in 1995. During his first years, he developed a
real-time mobile networks analytics product called "Traffica" in C++ for major telecom customers
worldwide, including companies like T-Mobile, Orange, Vodafone, and Claro. The initial product
was for monitoring a 2G circuit-switched core network and GPRS packet-switched core network.
Later, functionality was added to Traffica to cover new network technologies, such as 3G circuit-switched and packet core networks, 3G radio networks, and 4G/LTE. He later developed new
functionality for Traffica using Java and web technologies, including jQuery and React. During the
last few years, he has developed cloud-native containerized microservices with Java and C++ for
the next-generation Customer and Networks Insights (CNI) product used by major
communications service providers like Verizon, AT&T, USCC, and KDDI. The main application areas he has contributed to in recent years include KPI-based real-time alerting, anomaly detection for KPIs, and configurable real-time data exporting.
During his free time, he has developed a data visualization application using React, Redux,
TypeScript, and Jakarta EE. He has also developed a security-first cloud-native microservice
framework for Node.js in TypeScript. He likes to take care of his cat Kaapo, take walks, play tennis and badminton, ski in the winter, and watch soccer and ice hockey on TV.
Introduction
This book teaches you how to write clean code. It presents software design and development
principles and patterns in a very practical manner. This book is suitable for both junior and senior
developers. A basic understanding of object-oriented programming in one language, such as C++, Java, JavaScript/TypeScript, Python, or C#, is required. Examples in this book are
presented in Java, JavaScript/TypeScript, or C++. Most examples are in Java or JavaScript/
TypeScript and are adaptable to other programming languages, too. The content of this book is
divided into eleven chapters.
The second chapter is about architectural design principles that enable the development of true
cloud-native microservices. The first architectural design principle described is the single
responsibility principle which defines that a piece of software should be responsible for one thing at
its abstraction level. Then a uniform naming principle for microservices, clients, APIs, and libraries
is presented. The encapsulation principle defines how each software component should hide the
internal state behind a public API. The service aggregation principle is introduced with a detailed
explanation of how a higher-level microservice can aggregate lower-level microservices.
Architectural patterns, like event sourcing, command query responsibility segregation (CQRS), and
distributed transactions, are discussed. Distributed transactions are covered with examples using
both the saga orchestration pattern and the saga choreography pattern. You get answers on how to
avoid code duplication at the architectural level. The externalized configuration principle describes how service configuration should be handled in modern environments. We discuss the service substitution principle, which states that the dependent services a microservice uses should be easily substitutable. The importance of autopilot microservices is discussed from the statelessness, resiliency, high availability, observability, and automatic scaling points of view. Towards the end of the chapter,
there is a discussion about different ways microservices can communicate with each other. Several
rules are presented on how to version software components, and the chapter ends with a discussion of why it is helpful to limit the number of technologies used in a software system.
The third chapter presents object-oriented design principles. We start the chapter by describing all
the SOLID principles: Single responsibility principle, open-closed principle, Liskov's substitution
principle, interface segregation principle, and dependency inversion principle. Each SOLID
principle is presented with realistic but simple examples. The uniform naming principle defines a
uniform way to name interfaces, classes, functions, function pairs, boolean functions (predicates),
builder, factory, conversion, and lifecycle methods. The encapsulation principle states that a class should encapsulate its internal state and explains how immutability helps ensure state encapsulation.
The encapsulation principle also discusses the importance of not leaking an object's internal state
out. The object composition principle defines that composition should be preferred over
inheritance. Domain-driven design (DDD) is presented with two real-world examples. All the
design patterns from GoF's Design Patterns book are presented with realistic yet simple examples.
The don't ask, tell principle is presented as a way to avoid the feature envy design smell. The
chapter also discusses avoiding primitive-type obsession and the benefits of using semantically
validated function arguments. The chapter ends by presenting the dependency injection principle
and the avoid code duplication principle, also known as the don't repeat yourself (DRY) principle.
The fourth chapter is about coding principles. The chapter starts with a principle for uniformly
naming variables in code. A uniform naming convention is presented for integer, floating-point,
boolean, string, enum, and collection variables. Also, a naming convention is defined for maps,
pairs, tuples, objects, optionals, and callback functions. The uniform source code repository
structure principle is presented with examples for C++, Java, and JavaScript/TypeScript. Next, the
avoid comments principle defines concrete ways to remove unnecessary comments from the code.
The following concrete actions are presented: naming things correctly, returning a named value,
return-type aliasing, extracting a constant for a boolean expression, extracting a constant for a
complex expression, extracting enumerated values, and extracting a function. The chapter discusses
the benefits of using a statically typed language. We discuss the most common refactoring
techniques: renaming, extracting a method, extracting a variable, replacing conditionals with
polymorphism, and introducing a parameter object. The importance of static code analysis is
described, and the most popular static code analysis tools for C++, Java, and JavaScript/TypeScript
are listed. The most common static code analysis issues are listed with the preferred way to correct
them. Handling errors and exceptions correctly in code is fundamental and can be easily forgotten
or done wrong. This chapter instructs how to handle errors and exceptions, how to handle Java's
checked exceptions, and how to return errors by returning a boolean failure indicator, an optional
value, or an error object. The chapter instructs how to adapt code to a wanted error-handling
mechanism, handle errors in asynchronous code, handle stream errors, and handle errors
functionally. Null value handling is discussed. Ways to avoid off-by-one errors are
presented. Readers are instructed on handling situations where some code is copied from a web
page found by googling. The chapter ends with a discussion about code optimization: when and
how to optimize.
The fifth chapter is dedicated to testing principles. We start with the introduction of the functional
testing pyramid. Then we present unit testing and instruct how to use test-driven development
(TDD). We give unit test examples with mocking in Java, JavaScript, and C++. When introducing
software component integration testing, we discuss behavior-driven development (BDD) and the
Gherkin language to describe features. Integration test examples are given using Cucumber for Java
and the Postman API development platform. The chapter also discusses the integration testing of UI
software components. We end the integration testing section with an example of setting up an
integration testing environment using Docker Compose. Lastly, the purpose of end-to-end (E2E)
testing is discussed with some examples. The chapter ends with a discussion about non-functional
testing. The following categories of non-functional testing are covered in more detail: performance
testing, stability testing, reliability testing, security testing, and stress and scalability testing.
The sixth chapter covers security principles. The threat modeling process is introduced. A full-blown OpenID Connect/OAuth 2.0 authentication and authorization example with TypeScript,
Vue.js, and Keycloak is implemented. Then we discuss how authorization by validating a JWT
should be handled in the backend. Examples are presented with Node.js, Express, Java, and Spring
Boot. The chapter ends with a discussion of the most important security features: password policy,
cryptography, denial-of-service prevention, SQL injection prevention, security configuration,
automatic vulnerability scanning, integrity, error handling, audit logging, and input validation.
The seventh chapter is about API design principles. First, we tackle design principles for frontend
facing APIs. We discuss how to design JSON-RPC, REST, and GraphQL APIs. Also, subscription-
based and real-time APIs are presented with realistic examples using Server-Sent Events (SSE) and
the WebSocket protocol. The last part of the chapter discusses inter-microservice API design and
event-driven architecture. gRPC is introduced as a synchronous inter-microservice communication
method, and examples of request-only and request-response asynchronous APIs are presented.
The eighth chapter discusses databases and related principles. We cover the following types of
databases: relational databases, document databases (MongoDB), key-value databases (Redis),
wide-column databases (Cassandra), and search engines. For relational databases, we present how
to use object-relational mapping (ORM), one-to-one, one-to-many and many-to-many
relationships, and parameterized SQL queries. Finally, we present three normalization rules for
relational databases.
The ninth chapter presents concurrent programming principles regarding threading, parallel
algorithms, and thread safety. For thread safety, we present several ways to achieve thread
synchronization: synchronization directives, atomic variables, mutexes, and spinlocks.
The tenth chapter discusses teamwork principles. We explain the importance of using an agile framework and discuss the fact that a developer rarely works alone and what that entails.
We discuss how to document a software component so that onboarding new developers is easy and
quick. Technical debt in software is something that each team should avoid. Some concrete actions
to prevent technical debt are presented. Code reviews are something teams should do, and this
chapter gives guidance on what to focus on in code reviews. The chapter ends with a discussion of
developer roles each team should have and provides hints on enabling a team to develop software
as concurrently as possible.
The eleventh chapter is dedicated to DevSecOps. DevOps describes practices that integrate software
development (Dev) and software operations (Ops). It aims to shorten the software development life
cycle through parallelization and automation and provides continuous delivery with high software
quality. DevSecOps is a DevOps augmentation where security practices are integrated into the
DevOps practices. This chapter presents the phases of the DevOps lifecycle: plan, code, build and
test, release, deploy, operate and monitor. The chapter gives an example of creating a microservice
container image and how to specify the deployment of a microservice to a Kubernetes cluster. Also,
a complete example of a CI/CD pipeline using GitHub Actions is provided.
Architectural Principles
This chapter describes architectural principles for designing clean, modern cloud-native software
systems and applications. Cloud-native software is built of loosely coupled, scalable, resilient, and observable services that can run in public, private, or hybrid clouds. Cloud-native software utilizes
technologies like containers (e.g., Docker), microservices, serverless functions, and container
orchestration (e.g., Kubernetes), and it can be automatically deployed using declarative code.
• Architectural patterns
• Preferred technology stacks principle
Software Hierarchy
A software system consists of multiple computer programs and anything related to those programs
to make them operable, including but not limited to configuration, deployment code, and
documentation. A software system is divided into two parts: the backend and the frontend.
Backend software runs on servers, and frontend software runs on client devices like PCs, tablets,
and phones. Backend software consists of services. Frontend software consists of clients that use
backend services and standalone applications that do not use any backend services. An example of
a standalone application is a calculator or a simple text editor.
The term application is often used to describe a single program designated for a specific purpose.
In general, a software application is some software applied to solve a specific problem. From an end
user's point of view, all clients are applications. But from a developer’s point of view, an application
needs both a client and backend service(s) to be functional unless the application is a standalone
application. In this book, I will use the term application to designate a logical grouping of
program(s) and related artifacts, like configuration, to form a functional piece of the software
system dedicated to a specific purpose. In my definition, a non-standalone application consists of
one or more services and possibly a client or clients to fulfill an end user's need. Let's say we have a
software system for telecom network analytics. That system provides data visualization
functionality. We can call the data visualization part of the software system a data visualization
application. That application consists of, for example, a web client and two services, one for
fetching data and one for configuration. Suppose we also have a generic data ingester microservice in the system. That generic data ingester alone is not an application; only with some configuration that makes it a specific service does it become an application. For example, the generic data ingester
can have a configuration to ingest raw data from a radio network. The generic data ingester and the
configuration together form an application: a radio network data ingester.
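As a sketch, such a configuration could look like the following. The format and all field names here are hypothetical, invented purely for illustration:

```yaml
# Hypothetical configuration that, together with the generic data ingester
# microservice, forms the radio network data ingester application
dataIngester:
  sourceDataFormat: radio-network-raw-data
  inputTopic: radio-network-measurements
  output:
    databaseHost: ingestion-db
    tableName: radio_network_data
```

The point is that the microservice binary stays generic; only this deployed configuration turns it into a specific application.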
Computer programs and libraries are software components. A software component is something
that can be individually packaged, tested, and delivered. It consists of one or more classes, and a
class consists of one or more functions (class methods). (Purely functional languages have no traditional classes; there, software components consist only of functions.) A computer program can
also be composed of one or more libraries, and a library can be composed of other libraries.
A software system is at the highest level in the software hierarchy and should have a single
dedicated purpose. For example, there can be an e-commerce or payroll software system. But there
should not be a software system that handles both e-commerce and payroll-related activities. If you
were a software vendor and had made an e-commerce software system, selling that to clients
wanting an e-commerce solution would be easy. But if you had made a software system that
encompasses both e-commerce and payroll functionality, it would be hard to sell that to customers
wanting only an e-commerce solution because they might already have a payroll software system
and, of course, don't want another one.
Let's consider the application level in the software hierarchy. Suppose we have designed a software
system for telecom network analytics. This software system is divided into four different
applications: Radio network data ingestion, core network data ingestion, data aggregation, and data
visualization. Each of these applications has a single dedicated purpose. Suppose we had coupled
the data aggregation and visualization applications into a single application. In that case, replacing
the data visualization part with a third-party application could be difficult. But when they are separate applications with a well-defined interface, it would be much easier to replace the data visualization application with a third-party application if needed.
A software component should also have a single dedicated purpose. A service type of software
component with a single responsibility is called a microservice. For example, one microservice
could be responsible for handling orders and another for handling sales items. Both of those
microservices are responsible for one thing only. We should not have a microservice responsible for
both orders and sales items. That would be against the single responsibility principle because order
and sales item handling are two different functionalities at the same level of abstraction.
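As a minimal sketch of this separation (with invented names, and in-memory maps standing in for each microservice's real database and API layer), the two single-responsibility services could look like this:

```typescript
// Hypothetical sketch: two single-responsibility microservices, each owning
// exactly one domain concept. Names and shapes are illustrative only.

interface Order {
  id: number;
  salesItemIds: number[];
}

interface SalesItem {
  id: number;
  name: string;
  priceInCents: number;
}

// order-service: responsible for orders only
class OrderService {
  private readonly orders = new Map<number, Order>();

  createOrder(order: Order): void {
    this.orders.set(order.id, order);
  }

  getOrder(id: number): Order | undefined {
    return this.orders.get(id);
  }
}

// sales-item-service: responsible for sales items only
class SalesItemService {
  private readonly salesItems = new Map<number, SalesItem>();

  createSalesItem(item: SalesItem): void {
    this.salesItems.set(item.id, item);
  }

  getSalesItem(id: number): SalesItem | undefined {
    return this.salesItems.get(id);
  }
}
```

Note that the order service refers to sales items only by id; it does not reach into the other service's data.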
Single-responsibility microservices provide several benefits compared to a monolith:
• Improved productivity
  • You can choose the best-suited programming language and technology stack
  • Microservices are easy to develop in parallel because there will be fewer merge conflicts
  • Developing a monolith can result in more frequent merge conflicts
• Improved resiliency and fault isolation
  • A fault in a single microservice does not bring other microservices down
  • A bug in a monolith can bring the whole monolith down
• Better scalability
  • Stateless microservices can be automatically horizontally scaled
  • Horizontal scaling of a monolith is complicated or impossible
• Better data security and compliance
  • Each microservice encapsulates its data, which can be accessed only via a public API
• Faster and easier upgrades
  • Upgrading only the changed microservice(s) is enough; there is no need to upgrade the whole monolith every time
• Faster release cycle
  • Only the changed microservice needs to be built; there is no need to build the whole monolith when something changes
• Fewer dependencies
  • Lower probability of dependency conflicts
• Enables an open-closed architecture, meaning an architecture that is open for extension and closed for modification
  • New functionality not related to any existing microservice can be put into a new microservice instead of modifying the current codebase
The main drawback of microservices is the complexity that a distributed architecture brings.
Operating and monitoring a microservice-based software system is complicated. Also, testing a
distributed system is more challenging than testing a monolith. Development teams should address these areas by hiring DevOps and test automation specialists.
A library type of software component should also have a single responsibility. Like calling single-
responsibility services microservices, we can call a single-responsibility library a microlibrary. For
example, there could be a library for handling YAML-format content and another for handling
XML-format content. We shouldn't try to bundle the handling of both formats into a single library.
If we did and needed only the YAML-related functionality, we would always get the XML-related functionality as well: our code would ship with the XML-related code even if it is never used. This introduces unnecessary code bloat. We would also have to take any security patch for
the library into use, even if the patch was only for the XML-related functionality we don't use.
When developing software, you should establish a naming convention for microservices, clients,
and libraries. The preferred naming convention for microservices is <service's purpose>-service.
For example: data-aggregation-service or email-sending-service. Use the microservice name
systematically in different places. For example, use it as the Kubernetes Deployment name and the
source code repository name (or directory name in case of a monorepo). It is enough to name your
microservices with the service postfix instead of a microservice postfix because each service should
be a microservice by default. So, there would not be any real benefit in naming microservices with
the microservice postfix. That would just make the microservice name longer without any added
value.
If you want to be more specific in naming microservices, you can name API microservices with an
api postfix instead of the more generic service postfix, for example, sales-item-api. In this book, I
do not use the api postfix but always use the service postfix.
The preferred naming convention for clients is <client's purpose>-<client type>-client. For
example: data-visualization-web-client, data-visualization-mobile-client, data-visualization-
android-client or data-visualization-ios-client.
The preferred naming convention for libraries is <library's purpose>-library. For example:
common-utils-library or common-ui-components-library.
When using these naming conventions, a clear distinction between microservice, client, and
library-type software components can be made just by looking at the name. It is also easy to
recognize whether a source code repository contains a microservice, a client, or a library.
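As an illustration, the conventions above are regular enough to be checked mechanically. Below is a hypothetical helper (not from the book) that classifies a component name by the *-service, *-client, and *-library postfixes:

```javascript
// Classify a software component by its name, following the naming
// conventions described above (kebab-case plus a type postfix).
const NAMING_PATTERNS = {
  microservice: /^[a-z][a-z0-9]*(-[a-z0-9]+)*-service$/,
  client: /^[a-z][a-z0-9]*(-[a-z0-9]+)*-client$/,
  library: /^[a-z][a-z0-9]*(-[a-z0-9]+)*-library$/
};

function getSoftwareComponentType(name) {
  const entry = Object.entries(NAMING_PATTERNS)
    .find(([, pattern]) => pattern.test(name));
  return entry ? entry[0] : null;
}
```

For example, getSoftwareComponentType('email-sending-service') returns 'microservice', and getSoftwareComponentType('common-utils-library') returns 'library'.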
Encapsulation Principle
A microservice must encapsulate its internal state behind a public API. Anything behind the
public API is considered private to the microservice and cannot be accessed directly by other
microservices.
Microservices should define a public API that other microservices use for interfacing. Anything
behind the public API is private and inaccessible from other microservices.
While microservices should be made stateless (the stateless services principle is discussed later in
this chapter), a stateless microservice needs a place to store its state outside the microservice.
Typically, the state is stored in a database. The database is the microservice's internal dependency
and should be made private to the microservice, meaning that no other microservice can directly
access the database. Access to the database happens indirectly using the microservice's public API.
It is discouraged to allow multiple microservices to share a single database, because then there is no
control over how each microservice uses the database or what requirements each microservice has
for the database.
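As a minimal sketch of this principle (the service name and default URL are hypothetical), another microservice addresses only the sales-item-service's public API endpoints; nothing in its code refers to the sales-item-service's private database:

```javascript
// Other microservices know only the public API of the sales-item-service.
// The service URL would normally come from configuration.
const SALES_ITEM_SERVICE_URL =
  process.env.SALES_ITEM_SERVICE_URL || 'http://sales-item-service:8080';

// Build a request URL against the public API. The database host, schema,
// and credentials stay private to the sales-item-service.
function getSalesItemsByUserAccountIdUrl(userAccountId) {
  return `${SALES_ITEM_SERVICE_URL}/sales-items?userAccountId=${userAccountId}`;
}
```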
Service Aggregation Principle
Service on a higher level of abstraction aggregates services on a lower level of abstraction.
Service aggregation happens when one service on a higher level of abstraction aggregates services
on a lower level of abstraction.
Let's look at a service aggregation example: an e-commerce software system that allows people to
sell second-hand products online.
The problem domain of the e-commerce service consists of the following subdomains:
• Send order confirmation by email
• View orders with sales item details
• Update and delete orders
We should not implement all the subdomains in a single ecommerce-service microservice because
then we would not be following the single responsibility principle. We should use service
aggregation. We create a separate lower-level microservice for each subdomain. Then we create a
higher-level ecommerce-service microservice that aggregates those lower-level microservices.
We can define that our ecommerce-service aggregates the following lower-level microservices:
• user-account-service
• Create/Read/Update/Delete user accounts
• sales-item-service
• Create/Read/Update/Delete sales items
• shopping-cart-service
• View a shopping cart, add/remove sales items from a shopping cart or empty a shopping
cart
• order-service
• Create/Read/Update/Delete orders
• email-notification-service
• Send email notifications
Most of the microservices described above can be implemented as REST APIs because they mainly
contain basic CRUD (create, read, update and delete) operations for which a REST API is a good
match. We will handle API design in more detail in a later chapter. Let's implement the sales-item-
22
service as a REST API using Java and Spring Boot. We will implement the
SalesItemController class first. It defines API endpoints for creating, getting, updating, and
deleting sales items:
SalesItemController.java
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping(SalesItemController.API_ENDPOINT)
@Tag(
  name = "Sales item API",
  description = "Manages sales items"
)
public class SalesItemController {
  public static final String API_ENDPOINT = "/sales-items";

  @Autowired
  private SalesItemService salesItemService;

  @PostMapping
  @ResponseStatus(HttpStatus.CREATED)
  @Operation(summary = "Creates new sales item")
  public final SalesItem createSalesItem(
    @RequestBody final SalesItemArg salesItemArg
  ) {
    return salesItemService.createSalesItem(salesItemArg);
  }

  @GetMapping
  @ResponseStatus(HttpStatus.OK)
  @Operation(summary = "Gets sales items")
  public final Iterable<SalesItem> getSalesItems() {
    return salesItemService.getSalesItems();
  }

  @GetMapping("/{id}")
  @ResponseStatus(HttpStatus.OK)
  @Operation(summary = "Gets sales item by id")
  public final SalesItem getSalesItemById(
    @PathVariable("id") final Long id
  ) {
    return salesItemService.getSalesItemById(id);
  }

  @GetMapping(params = "userAccountId")
  @ResponseStatus(HttpStatus.OK)
  @Operation(summary = "Gets sales items by user account id")
  public final Iterable<SalesItem> getSalesItemsByUserAccountId(
    @RequestParam("userAccountId") final Long userAccountId
  ) {
    return salesItemService
      .getSalesItemsByUserAccountId(userAccountId);
  }

  @PutMapping("/{id}")
  @ResponseStatus(HttpStatus.NO_CONTENT)
  @Operation(summary = "Updates a sales item")
  public final void updateSalesItem(
    @PathVariable final Long id,
    @RequestBody final SalesItemArg salesItemArg
  ) {
    salesItemService.updateSalesItem(id, salesItemArg);
  }

  @DeleteMapping("/{id}")
  @ResponseStatus(HttpStatus.NO_CONTENT)
  @Operation(summary = "Deletes a sales item by id")
  public final void deleteSalesItemById(
    @PathVariable final Long id
  ) {
    salesItemService.deleteSalesItemById(id);
  }

  @DeleteMapping
  @ResponseStatus(HttpStatus.NO_CONTENT)
  @Operation(summary = "Deletes all sales items")
  public final void deleteSalesItems() {
    salesItemService.deleteSalesItems();
  }
}
As we can see from the above code, the SalesItemController class delegates the actual work
to an instance of a class that implements the SalesItemService interface. This is an example of
the bridge pattern, which is discussed, along with other design patterns, in the next chapter.
In the bridge pattern, the controller is just an abstraction of the service, and a class implementing
the SalesItemService interface provides a concrete implementation. We can change the service
implementation without changing the controller, or introduce a different controller, e.g., a GraphQL
controller, using the same SalesItemService interface. Just by changing the controller
class, we could change the API from a REST API to a GraphQL API. Below is the definition of the
SalesItemService interface:
SalesItemService.java
public interface SalesItemService {
  SalesItem createSalesItem(SalesItemArg salesItemArg);
  SalesItem getSalesItemById(Long id);
  Iterable<SalesItem> getSalesItemsByUserAccountId(
    Long userAccountId
  );
  Iterable<SalesItem> getSalesItems();
  void updateSalesItem(Long id, SalesItemArg salesItemArg);
  void deleteSalesItemById(Long id);
  void deleteSalesItems();
}
The below SalesItemServiceImpl class implements the SalesItemService interface. It
interacts with a sales item repository to persist, fetch, and delete data in a database.
SalesItemServiceImpl.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@Service
public class SalesItemServiceImpl implements SalesItemService {
  private static final String SALES_ITEM = "Sales item";

  @Autowired
  private SalesItemRepository salesItemRepository;

  @Override
  public final SalesItem createSalesItem(
    final SalesItemArg salesItemArg
  ) {
    final var salesItem = SalesItem.from(salesItemArg);
    return salesItemRepository.save(salesItem);
  }

  @Override
  public final SalesItem getSalesItemById(final Long id) {
    return salesItemRepository.findById(id)
      .orElseThrow(() ->
        new EntityNotFoundError(SALES_ITEM, id));
  }

  @Override
  public final Iterable<SalesItem> getSalesItemsByUserAccountId(
    final Long userAccountId
  ) {
    return salesItemRepository
      .findByUserAccountId(userAccountId);
  }

  @Override
  public final Iterable<SalesItem> getSalesItems() {
    return salesItemRepository.findAll();
  }

  @Override
  public final void updateSalesItem(
    final Long id,
    final SalesItemArg salesItemArg
  ) {
    if (salesItemRepository.existsById(id)) {
      final var salesItem =
        SalesItem.from(salesItemArg, id);
      salesItemRepository.save(salesItem);
    } else {
      throw new EntityNotFoundError(SALES_ITEM, id);
    }
  }

  @Override
  public final void deleteSalesItemById(final Long id) {
    if (salesItemRepository.existsById(id)) {
      salesItemRepository.deleteById(id);
    }
  }

  @Override
  public final void deleteSalesItems() {
    salesItemRepository.deleteAll();
  }
}
EntityNotFoundError.java
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;
@ResponseStatus(HttpStatus.NOT_FOUND)
public class EntityNotFoundError extends RuntimeException {
  EntityNotFoundError(final String entityType, final long id) {
    super(entityType +
      " entity not found with id " +
      String.valueOf(id));
  }
}
The SalesItemRepository interface is defined below. Spring will create an instance of a class
implementing that interface and inject it into an instance of the SalesItemServiceImpl class.
The SalesItemRepository interface extends Spring's CrudRepository interface, which
provides many database access methods by default, including findAll, findById, save,
existsById, deleteAll, and deleteById. We need to add only one method to the
SalesItemRepository interface: findByUserAccountId. We don't have to provide an
implementation for it, because the method name follows the conventions of the Spring Data
(https://docs.spring.io/spring-data/jpa/docs/current/reference/html) framework, and Spring
will automatically generate an implementation for us.
SalesItemRepository.java
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
@Repository
public interface SalesItemRepository extends
  CrudRepository<SalesItem, Long> {
  Iterable<SalesItem> findByUserAccountId(Long userAccountId);
}
Next, we define the SalesItem entity class, which contains properties like name and price. It
also includes two methods to convert an instance of the SalesItemArg Data Transfer Object
(DTO) class to an instance of the SalesItem class. A DTO is an object that transfers data between
a server and a client. I have used the class name SalesItemArg instead of SalesItemDto to
describe that a SalesItemArg DTO is an argument for an API endpoint. If some API endpoint
returned a special sales item DTO instead of a sales item entity, I would name that DTO class
SalesItemResponse instead of SalesItemDto. The terms Arg and Response better describe the
direction in which a DTO transfers data. You could also use the following DTO names:
InputSalesItem and OutputSalesItem to describe an incoming and outgoing DTO (from the
server's point of view).
SalesItem.java
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.modelmapper.ModelMapper;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
public class SalesItem {
  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  @NotNull
  private Long userAccountId;

  @NotNull
  private String name;

  @NotNull
  @Min(0)
  @Max(Integer.MAX_VALUE)
  private Integer price;

  public static SalesItem from(final SalesItemArg salesItemArg) {
    return new ModelMapper().map(salesItemArg, SalesItem.class);
  }

  public static SalesItem from(
    final SalesItemArg salesItemArg,
    final Long id
  ) {
    final var salesItem = from(salesItemArg);
    salesItem.setId(id);
    return salesItem;
  }
}
The below SalesItemArg class contains the same properties as the SalesItem entity class,
except the id property. The SalesItemArg DTO class is used when creating a new sales item or
updating an existing sales item. When creating a new sales item, the id property should not be
given by the client because the microservice will automatically generate it (or the database will,
actually, in this case).
SalesItemArg.java
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class SalesItemArg {
  private Long userAccountId;
  private String name;
  private Integer price;
}
The ecommerce-service orchestrates the use of the aggregated lower-level
microservices.
The ecommerce-service is meant to be used by frontend clients, like a web client, for example.
The term Backend for Frontend (BFF) is often used to describe a microservice designed to provide an
API for frontend clients. Compared to the BFF term, service aggregation is a generic term, and
there need not be a frontend involved. You can use service aggregation to create an aggregated
microservice used by one or more other microservices. There can even be multiple levels of
service aggregation if you have a large and complex software system.
Clients can have different needs regarding what information they want from an API. For example, a
mobile client might display only a subset of all the information available from an API, whereas a
web client can fetch all of it. It should also be possible to customize what information a client
retrieves from the API.
All of the above requirements are something that a GraphQL-based API can fulfill. For that reason,
it would be wise to implement the ecommerce-service using GraphQL. I have chosen JavaScript,
Node.js, and Express as technologies to implement a single GraphQL query in the ecommerce-
service. Below is the implementation of a user query, which fetches data from three microservices.
It fetches user account information from the user-account-service, the user's sales items from the
sales-item-service, and finally, the user's orders from the order-service.
server.js
const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema, GraphQLError } = require('graphql');
const axios = require('axios').default;

const schema = buildSchema(`
  type UserAccount {
    id: ID!,
    userName: String!
    # Define additional properties...
  }

  type SalesItem {
    id: ID!,
    name: String!
    # Define additional properties...
  }

  type Order {
    id: ID!,
    userId: ID!
    # Define additional properties...
  }

  type User {
    userAccount: UserAccount!
    salesItems: [SalesItem!]!
    orders: [Order!]!
  }

  type Query {
    user(id: ID!): User!
  }
`);
const {
  ORDER_SERVICE_URL,
  SALES_ITEM_SERVICE_URL,
  USER_ACCOUNT_SERVICE_URL
} = process.env;
const rootValue = {
  user: async ({ id }) => {
    try {
      const [
        { data: userAccount },
        { data: salesItems },
        { data: orders }
      ] = await Promise.all([
        axios.get(`${USER_ACCOUNT_SERVICE_URL}/user-accounts/${id}`),
        axios.get(
          `${SALES_ITEM_SERVICE_URL}/sales-items?userAccountId=${id}`
        ),
        axios.get(`${ORDER_SERVICE_URL}/orders?userAccountId=${id}`)
      ]);

      return {
        userAccount,
        salesItems,
        orders
      };
    } catch (error) {
      throw new GraphQLError(error.message);
    }
  },
};

const app = express();

app.use('/graphql', graphqlHTTP({
  schema,
  rootValue,
  graphiql: true,
}));

app.listen(4000);
After you have started the above program with the node server.js command, you can access
the GraphiQL endpoint with a browser at http://localhost:4000/graphql.
On the left-hand side pane, you can specify a GraphQL query. For example, to query the user
identified with id 2:
{
  user(id: 2) {
    userAccount {
      id
      userName
    }
    salesItems {
      id
      name
    }
    orders {
      id
      userId
    }
  }
}
Because we haven't implemented the lower-level microservices, let's modify the part of server.js
where the lower-level microservices are accessed to return dummy static results instead of accessing
the real microservices:
const [
  { data: userAccount },
  { data: salesItems },
  { data: orders }
] = await Promise.all([
  Promise.resolve({
    data: {
      id,
      userName: 'pksilen'
    }
  }),
  Promise.resolve({
    data: [
      {
        id: 1,
        name: 'sales item 1'
      }
    ]
  }),
  Promise.resolve({
    data: [
      {
        id: 1,
        userId: id
      }
    ]
  })
]);
If we now execute the previously specified query, we should see the following query result:
{
  "data": {
    "user": {
      "userAccount": {
        "id": "2",
        "userName": "pksilen"
      },
      "salesItems": [
        {
          "id": "1",
          "name": "sales item 1"
        }
      ],
      "orders": [
        {
          "id": "1",
          "userId": "2"
        }
      ]
    }
  }
}
We can simulate a failure by modifying the server.js to contain the following code:
const [
  { data: userAccount },
  { data: salesItems },
  { data: orders }
] = await Promise.all([
  axios.get(`http://localhost:3000/user-accounts/${id}`),
  Promise.resolve({
    data: [
      {
        id: 1,
        name: 'sales item 1'
      }
    ]
  }),
  Promise.resolve({
    data: [
      {
        id: 1,
        userId: id
      }
    ]
  })
]);
Now, if we execute the query again, we will get the below error response because the server cannot
connect to localhost on port 3000, where no service is running.
{
  "errors": [
    {
      "message": "connect ECONNREFUSED 127.0.0.1:3000",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "user"
      ],
      "extensions": {}
    }
  ],
  "data": null
}
You can also query a user and specify the query to return only a subset of fields. The below query
does not return ids or orders. The server-side GraphQL library automatically includes only the
requested fields in the response. You, as a developer, do not have to do anything. You can, of
course, optimize your microservice to fetch only the requested fields from the database if you
desire.
{
  user(id: 2) {
    userAccount {
      userName
    }
    salesItems {
      name
    }
  }
}
{
  "data": {
    "user": {
      "userAccount": {
        "userName": "pksilen"
      },
      "salesItems": [
        {
          "name": "sales item 1"
        }
      ]
    }
  }
}
The above example lacks some features, like authorization, that are needed for production.
Authorization should check that a user can only execute the user query to fetch his/her own
resources. The authorization should fail if a user tries to execute the user query using someone
else's id. Security is discussed more in the coming security principles chapter.
The user query in the previous example spanned multiple lower-level microservices: user-
account-service, sales-item-service, and order-service. Because the query does not mutate anything,
it can be executed without a distributed transaction. A distributed transaction is similar to a regular
(database) transaction, with the difference that it spans multiple remote services.
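One way to sketch the idea is a sequence of remote actions, each paired with a compensating action that undoes it if a later action fails. The framework-free sketch below uses hypothetical step objects and is not the book's implementation:

```javascript
// Execute remote actions in order. If one fails, run the compensating
// actions of the already-completed steps in reverse order, then rethrow.
async function executeDistributedTransaction(steps) {
  const completedSteps = [];
  try {
    for (const step of steps) {
      await step.action();
      completedSteps.push(step);
    }
  } catch (error) {
    for (const step of completedSteps.reverse()) {
      await step.compensate();
    }
    throw error;
  }
}
```

Each step would wrap a call to a lower-level microservice, e.g., an action that creates an order in the order-service paired with a compensation that deletes that order.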
The API endpoint for placing an order in the ecommerce-service needs to create a new order using
the order-service, mark purchased sales items as bought using the sales-item-service, empty the
shopping cart using the shopping-cart-service, and finally send an order confirmation email using
the email-notification-service. These actions need to be wrapped inside a distributed transaction
because we want to be able to roll back the transaction if any of these operations fail. Guidance on
how to implement a distributed transaction is given later in this chapter.
Service aggregation utilizes the facade pattern. The facade pattern allows hiding individual lower-
level microservices behind a facade (the higher-level microservice). The clients of the software
system access the system through the facade. They don't directly contact the individual lower-level
microservices behind the facade, because that would break the encapsulation of the lower-level
microservices inside the higher-level microservice. A client accessing the lower-level microservices
directly creates unwanted coupling between the client and the lower-level microservices, which
makes it hard to change the lower-level microservices without affecting the client.
Think of a post office counter as an example of a real-world facade. It serves as a facade for the
post office, and when you need to receive a package, you communicate with that facade (the post
office clerk at the counter). You have a simple interface: just tell the package code, and the
clerk will find the package on the correct shelf and bring it to you. Without that facade, you
would have to do the lower-level work yourself. Instead of just telling the package code, you
must walk to the shelves, find the proper shelf where your package is located, make sure that
you pick the correct package, and then carry the package yourself. In addition to requiring
more work, this approach is more error-prone: you can accidentally pick someone else's package
if you are not careful enough. And think about the case when you go to the post office next
time and find out that all the shelves have been rearranged. This wouldn't be a problem if you
used the facade.
Service aggregation allows using more design patterns from the object-oriented design world. The
most useful design patterns in the context of service aggregation are:
• Decorator pattern
• Proxy pattern
• Adapter pattern
The decorator pattern can be used to add functionality for lower-level microservices in a higher-
level microservice. One example is adding audit logging in a higher-level microservice. For example,
you can add audit logging to be performed for requests in the ecommerce-service. You don't need to
implement the audit logging separately in all the lower-level microservices.
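The idea can be sketched without any framework: wrap a function that calls a lower-level microservice in a decorator that adds audit logging. The names below are hypothetical; in the Express-based ecommerce-service this would typically be implemented as middleware:

```javascript
// Decorator pattern sketch: add audit logging around calls to lower-level
// microservices without modifying the wrapped functions themselves.
function withAuditLogging(serviceFunction, auditLog) {
  return async (...args) => {
    auditLog.push(`${serviceFunction.name}(${JSON.stringify(args)})`);
    return serviceFunction(...args);
  };
}

// Stand-in for a call to a lower-level microservice
async function getOrderById(id) {
  return { id };
}

const auditLog = [];
const auditedGetOrderById = withAuditLogging(getOrderById, auditLog);
```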
The proxy pattern can be used to control access from a higher-level microservice to lower-level
microservices. Typical examples of the proxy pattern are authorization and caching. For example,
you can add authorization and caching to be performed for requests in the ecommerce-service.
Only after successful authorization will the requests be delivered to the lower-level microservices.
And if a request's response is not found in the cache, the request will be forwarded to the
appropriate lower-level microservice. You don't need to implement authorization and caching
separately in all the lower-level microservices.
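A caching proxy can be sketched the same way: the higher-level microservice consults a cache before forwarding the request to the lower-level microservice. The names are hypothetical, and a real implementation would also handle cache expiry:

```javascript
// Proxy pattern sketch: forward the request to the lower-level microservice
// only when the response is not found in the cache.
function withCaching(serviceFunction) {
  const responseCache = new Map();
  return async (id) => {
    if (responseCache.has(id)) {
      return responseCache.get(id);
    }
    const response = await serviceFunction(id);
    responseCache.set(id, response);
    return response;
  };
}

// Stand-in for a call to a lower-level microservice
let requestCount = 0;
async function getSalesItemById(id) {
  requestCount += 1;
  return { id };
}

const cachedGetSalesItemById = withCaching(getSalesItemById);
```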
The adapter pattern allows a higher-level microservice to adapt to different versions of the lower-
level microservices while keeping the API towards clients unchanged.
Cohesion refers to the degree to which classes inside a service belong together. Coupling refers to
how many other services a service is interacting with.
High cohesion and low coupling mean that the development of services can be highly parallelized.
In the e-commerce example, the five lower-level microservices don't have coupling with each other.
The development of each of those microservices can be isolated and assigned to a single team
member or a group of team members. The development of the lower-level microservices can
proceed in parallel, and the development of the higher-level microservice can start when the APIs of
the lower-level microservices become stable enough. The target is to design the lower-level
microservices APIs early on to enable the development of the higher-level microservice.
Suppose you need a library for parsing configuration files (in a particular syntax) in YAML or JSON
format. In that case, you can first create the needed YAML and JSON parsing libraries (or use
existing ones). Then you can create the configuration file parsing library, composed of the YAML
and JSON parsing libraries. You would then have three different libraries: one higher-level library
and two lower-level libraries. Each library has a single responsibility: one for parsing JSON, one for
parsing YAML, and one for parsing configuration files with a specific syntax, either in JSON or
YAML. Software components can now use the higher-level library for parsing configuration files
without needing to be aware of the JSON/YAML parsing libraries at all.
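The composition can be sketched as follows. The lower-level parsers are injected, so this example wires in only the built-in JSON parser and leaves the YAML parser as a commented-out placeholder; all names are hypothetical:

```javascript
// Higher-level configuration file parsing library composed of lower-level
// parsing libraries, chosen by file extension.
function createConfigFileParser(parserByExtension) {
  return (fileName, content) => {
    const extension = fileName.split('.').pop();
    const parse = parserByExtension[extension];
    if (!parse) {
      throw new Error(`Unsupported configuration file format: ${extension}`);
    }
    return parse(content);
  };
}

const parseConfigFile = createConfigFileParser({
  json: (content) => JSON.parse(content)
  // yaml: (content) => yamlParsingLibrary.parse(content)
});
```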
Duplication at the software system level happens when two or more software systems use the same
services. For example, two different software systems can both have a message broker, API
gateway, identity and access management (IAM) application, and log and metrics collection
services. You could continue this list even further. The goal of duplication-free architecture is to
have only one deployment of these services. Public cloud providers offer these services for your use.
If you have a Kubernetes cluster, an alternative solution is to deploy your software systems in
different Kubernetes namespaces and deploy the common services to a shared Kubernetes
namespace, which can be called the platform or common-services, for example.
Duplication at the service level happens when two or more services have common functionality that
could be extracted to a separate new microservice. For example, consider a case where both a user-
account-service and order-service have the functionality to send notification messages by email to
a user. This email-sending functionality is duplicated in both microservices. Duplication can be
avoided by extracting the email-sending functionality to a separate new microservice. The single
responsibility of the microservices becomes more evident when the email-sending functionality is
extracted to its own microservice. One might think another alternative is extracting the common
functionality to a library. This is not as good a solution, because the microservices become
dependent on the library. When changes to the library are needed (e.g., security updates), you must
change the library version in all the microservices using the library and then test all the affected
microservices.
When a company develops multiple software systems in several departments, the software
development typically happens in silos. The departments are not necessarily aware of what the
other departments are doing. For example, two departments might both have developed a
microservice for sending emails. There is now software duplication that no one is aware of. This
is not an optimal situation. A software development company should do something to enable
collaboration between the departments and break the silos. One good way to share software is to
establish shared folders or organizations in the source code repository hosting service that the
company uses. For example, in GitHub, you could create an organization for sharing source code
repositories for common libraries and another for sharing common services. Each software
development department has access to those common organizations and can still develop its
software inside its own GitHub organization. In this way, the company can enforce proper access
control for the source code of different departments, if needed. When a team needs to develop
something new, it can first consult the common source code repositories to find out if something is
already available that can be reused as such or extended.
The following are typical places where externalized configuration can be stored when software is
running in a Kubernetes cluster:
• Environment variables
• Kubernetes ConfigMaps
• Kubernetes Secrets
In the following sections, let's discuss these three configuration storage options.
Environment Variables
Environment variables can be used to store configuration as simple key-value pairs. They are
typically used to store information like how to connect to dependent services, like a database or a
message broker, or a microservice's logging level. Environment variables are available for the
running process of a microservice, which can access the environment variable values by their names
(keys).
You should not hardcode the default values for environment variables in the source code. This is
because the default values are typically not for a production environment but for a development
environment. Suppose you deploy a service to a production environment and forget to set all the
needed environment variables. In that case, your service will have some environment variables with
default values unsuitable for a production environment.
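A simple way to follow this advice is to fail fast when a required environment variable is missing, instead of falling back to a hardcoded development-time default. A sketch (the function name is hypothetical):

```javascript
// Read a required environment variable; throw instead of silently
// falling back to a hardcoded default value.
function getRequiredEnvironmentVariable(name) {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Environment variable '${name}' is not set`);
  }
  return value;
}
```

For example, `getRequiredEnvironmentVariable('MONGODB_HOST')` makes a missing database host a visible startup error rather than a silent misconfiguration.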
You can supply environment variables for a microservice in environment-specific .env files. For
example, you can have a .env.dev file for storing environment variable values for a development
environment and a .env.ci file for storing environment variable values used in the microservice's
continuous integration (CI) pipeline. The syntax of .env files is straightforward. There is one
environment variable defined per line:
.env.dev
NODE_ENV=development
HTTP_SERVER_PORT=3001
LOG_LEVEL=INFO
MONGODB_HOST=localhost
MONGODB_PORT=27017
MONGODB_USER=
MONGODB_PASSWORD=
.env.ci
NODE_ENV=integration
HTTP_SERVER_PORT=3001
LOG_LEVEL=INFO
MONGODB_HOST=localhost
MONGODB_PORT=27017
MONGODB_USER=
MONGODB_PASSWORD=
When a software component is deployed to a Kubernetes cluster using Helm, environment variable
values should be defined in the Helm chart's values.yaml file:
values.yaml
nodeEnv: production
httpServer:
  port: 8080
database:
  mongoDb:
    host: my-service-mongodb
    port: 27017
The values in the above values.yaml file can be used to define environment variables in a
Kubernetes Deployment using the following Helm chart template:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - name: my-service
          env:
            - name: NODE_ENV
              value: {{ .Values.nodeEnv }}
            - name: HTTP_SERVER_PORT
              value: "{{ .Values.httpServer.port }}"
            - name: MONGODB_HOST
              value: {{ .Values.database.mongoDb.host }}
            - name: MONGODB_PORT
              value: "{{ .Values.database.mongoDb.port }}"
When Kubernetes starts a microservice pod, the following environment variables will be made
available for the running container:
NODE_ENV=production
HTTP_SERVER_PORT=8080
MONGODB_HOST=my-service-mongodb
MONGODB_PORT=27017
Kubernetes ConfigMaps
A Kubernetes ConfigMap can store a configuration file or files in various formats, like JSON or
YAML. These files can be mounted to the filesystem of a microservice's running container. The
container can then read the configuration files from the mounted directory in its filesystem.
For example, you can have a ConfigMap for defining the logging level of a my-service microservice:
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service
data:
  LOG_LEVEL: INFO
The below Kubernetes Deployment descriptor defines that the content of the my-service
ConfigMap's key LOG_LEVEL will be stored in a volume named config-volume, and the value of
the LOG_LEVEL key will be stored in a file named LOG_LEVEL. After mounting the
config-volume to the /etc/config directory in a my-service container, it is possible to read the
contents of the /etc/config/LOG_LEVEL file, which contains the text INFO.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - name: my-service
          volumeMounts:
            - name: config-volume
              mountPath: "/etc/config"
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: my-service
            items:
              - key: "LOG_LEVEL"
                path: "LOG_LEVEL"
In Kubernetes, editing a ConfigMap is reflected in the respective mounted file. This means that you
can listen to changes in the /etc/config/LOG_LEVEL file. Below is shown how to do it in
JavaScript:
const fs = require('fs');

// fs.watchFile polls the file's stats, which also catches the
// symbolic-link swap Kubernetes performs when a ConfigMap changes
fs.watchFile('/etc/config/LOG_LEVEL', () => {
  try {
    const newLogLevel = fs.readFileSync(
      '/etc/config/LOG_LEVEL', 'utf-8'
    ).trim();
    updateLogLevel(newLogLevel);
  } catch (error) {
    // Handle error
  }
});
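The updateLogLevel function called by the watcher is not defined in the text; a minimal sketch of what it might look like (the level names and the ignore-invalid-value behavior are assumptions):

```javascript
// Minimal sketch of the updateLogLevel function used by the
// ConfigMap watcher. The level names and the ignore-invalid-value
// behavior are assumptions; adapt them to your logging library.
const LOG_LEVELS = ['ERROR', 'WARN', 'INFO', 'DEBUG', 'TRACE'];
let currentLogLevel = 'INFO';

function updateLogLevel(newLogLevel) {
  // Ignore invalid values so a malformed ConfigMap edit
  // cannot break logging
  if (LOG_LEVELS.includes(newLogLevel)) {
    currentLogLevel = newLogLevel;
  }
}

updateLogLevel('DEBUG');
updateLogLevel('BOGUS'); // ignored
console.log(currentLogLevel); // DEBUG
```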
Kubernetes Secrets
Kubernetes Secrets are similar to ConfigMaps except that they are used to store sensitive
information, like passwords and encryption keys.
Below is an example of a values.yaml file and a Helm chart template for creating a Kubernetes
Secret. The Secret will contain two key-value pairs: the database username and password. The
Secret’s data needs to be Base64-encoded. In the below example, the Base64 encoding is done using
the Helm template function b64enc.
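Helm's b64enc performs standard Base64 encoding of the value's UTF-8 bytes. The equivalent in Node.js, using the username value from the example below:

```javascript
// What Helm's b64enc does, expressed in Node.js: standard Base64
// encoding of the value's UTF-8 bytes.
const user = 'my-service-user';
const encoded = Buffer.from(user, 'utf-8').toString('base64');
console.log(encoded); // bXktc2VydmljZS11c2Vy

// Kubernetes decodes Secret data before exposing it to containers:
const decoded = Buffer.from(encoded, 'base64').toString('utf-8');
console.log(decoded); // my-service-user
```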
values.yaml
database:
  mongoDb:
    host: my-service-mongodb
    port: 27017
    user: my-service-user
    password: Ak9(lKt41uF==%lLO&21mA#gL0!"Dps2
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-service
type: Opaque
data:
  mongoDbUser: {{ .Values.database.mongoDb.user | b64enc }}
  mongoDbPassword: {{ .Values.database.mongoDb.password | b64enc }}
After being created, secrets can be mapped to environment variables in a Deployment descriptor for
a microservice. In the below example, we map the value of the secret key mongoDbUser from the
my-service secret to an environment variable named MONGODB_USER and the value of the secret
key mongoDbPassword to an environment variable named MONGODB_PASSWORD.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - name: my-service
          env:
            - name: MONGODB_USER
              valueFrom:
                secretKeyRef:
                  name: my-service
                  key: mongoDbUser
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-service
                  key: mongoDbPassword
When a my-service pod is started, the following environment variables are made available for the
running container:
MONGODB_USER=my-service-user
MONGODB_PASSWORD=Ak9(lKt41uF==%lLO&21mA#gL0!"Dps2
Service Substitution Principle
Make it easy to substitute a service's service dependency with another service by making the
dependencies transparent. A transparent service is exposed to other services by defining a
host and port. Use the externalized service configuration principle (e.g., environment variables)
in your microservice to define the host and port (and possibly other needed parameters, like a
database username and password) for a dependent service.
Consider an example where a microservice depends on a MongoDB service. The MongoDB service
should expose itself by defining a host and port combination. For local development, you can specify
the following environment variables for connecting to a MongoDB service running on localhost:
.env.dev
MONGODB_HOST=localhost
MONGODB_PORT=27017
Suppose that in a Kubernetes-based production environment, you have a MongoDB service in the
cluster accessible via a Kubernetes Service named my-service-mongodb. In that case, you should
have the environment variables for the MongoDB service defined as follows:
MONGODB_HOST=my-service-mongodb.default.svc.cluster.local
MONGODB_PORT=8080
Alternatively, a MongoDB service can run in the MongoDB Atlas cloud. Then the MongoDB service
could be connected to using the following kind of environment variable values:
MONGODB_HOST=my-service.tjdze.mongodb.net
MONGODB_PORT=27017
As shown with the above examples, you can easily substitute a different MongoDB service
depending on your microservice's environment. If you want to use a different MongoDB service,
you don't need to modify the microservice's source code but only change the configuration.
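To make the substitution concrete, here is a sketch of composing a MongoDB connection URI from the externalized configuration; the getMongoDbUri helper is an illustrative assumption, and encodeURIComponent guards special characters in the credentials:

```javascript
// Sketch: composing a MongoDB connection URI from externalized
// configuration, so the same code works against localhost, an
// in-cluster service, or MongoDB Atlas. The helper name is an
// assumption; encodeURIComponent guards special characters in
// the credentials.
function getMongoDbUri(env = process.env) {
  const user = encodeURIComponent(env.MONGODB_USER);
  const password = encodeURIComponent(env.MONGODB_PASSWORD);
  return `mongodb://${user}:${password}@${env.MONGODB_HOST}:${env.MONGODB_PORT}`;
}

const uri = getMongoDbUri({
  MONGODB_USER: 'my-service-user',
  MONGODB_PASSWORD: 'secret',
  MONGODB_HOST: 'localhost',
  MONGODB_PORT: '27017',
});
console.log(uri); // mongodb://my-service-user:secret@localhost:27017
```

Changing the environment variables is then enough to point the microservice at a different MongoDB service; the code itself never changes.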
service and wants an immediate response. Synchronous communication can be implemented using
protocols like HTTP or gRPC (which uses HTTP under the hood).
In case of a failure when processing a request, the request processing microservice sends an error
response to the requestor microservice. The requestor microservice can cascade the error up in the
synchronous request stack until the initial request maker is reached. Often, that initial request
maker is a client, like a web or mobile client. The initial request maker can then decide what to do.
Usually, it will attempt to send the request again after a while (we are assuming here that the error
is a transient server error, not a client error, like a bad request, for example).
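A minimal sketch of that retry behavior, assuming a hypothetical requestWithRetry helper that retries only transient (HTTP 5xx) errors with a growing delay:

```javascript
// Sketch: retrying a synchronous request after a transient server
// error. The helper name, delay policy, and statusCode property
// are illustrative assumptions.
async function requestWithRetry(doRequest, maxAttempts = 3, delayMs = 1000) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await doRequest();
    } catch (error) {
      // Retry only transient server errors (HTTP 5xx); rethrow
      // client errors (4xx) and the final failed attempt
      const isTransient = error.statusCode >= 500;
      if (!isTransient || attempt === maxAttempts) {
        throw error;
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
    }
  }
}
```

A client would wrap its HTTP call in doRequest; on a 503 it waits and retries, while a 400 (bad request) fails immediately, matching the distinction drawn above.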
communication method because no response for the operations is expected.
Asynchronous communication can be implemented using a message broker. Services can produce
messages to the message broker and consume messages from the message broker. There are several
message broker implementations available like Apache Kafka, RabbitMQ, Apache ActiveMQ and
Redis. When a microservice produces a request to a message broker's topic, the producing
microservice must wait for an acknowledgment from the message broker indicating that the request
was successfully stored in multiple, or preferably all, replicas of the topic. Otherwise, there is no
100% guarantee that the request was delivered in certain message broker failure scenarios.
When an asynchronous request is of type fire-and-forget (i.e., no response is expected), the request
processing microservice must ensure that the request will eventually get processed. If the request
processing fails, the request processing microservice must reattempt the processing after a while. If
a termination signal is received, the request processing microservice instance must produce the
request back to the message broker and allow some other instance of the microservice to fulfill the
request. The rare possibility exists that producing the request back to the message broker
fails. You could then try to save the request to a persistent volume, for instance, but that can also
fail. The likelihood of such a situation is very low.
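The retry-and-requeue behavior described above can be sketched as follows; the broker object and option names are illustrative stand-ins for a real message broker client:

```javascript
// Sketch of at-least-once processing for a fire-and-forget request:
// retry on failure, and on termination produce the request back to
// the broker so another instance can process it. The broker object
// and option names are stand-ins for a real message broker client.
async function processUntilDone(request, handle, broker, options) {
  const { retryDelayMs, isTerminating } = options;
  for (;;) {
    if (isTerminating()) {
      // Give the request back to the broker for another instance
      await broker.produce(request);
      return 'requeued';
    }
    try {
      await handle(request);
      return 'processed';
    } catch {
      // Processing failed; reattempt after a while
      await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
    }
  }
}
```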
Designing APIs for inter-service communication is described in more detail in the API design
principles chapter.
microservice(s) consume that data. The interface between these microservices is defined by the
schema of the shared data, e.g., by the schemas of database tables. To secure the shared data, only
the producing microservice(s) should have write access to the shared data, and the consuming
microservice(s) should only have read access to the shared data.
I often compare software system architectural design to the architectural design of a house. The
house represents a software system. The facade of the house represents the external interfaces of
the software system. The rooms in the house are the microservices of the software system. Like a
microservice, a single room usually has a dedicated purpose. The architectural design of a software
system encompasses the definition of external interfaces, microservices, and their interfaces to
other microservices.
The result of the architectural design phase is a ground plan for the software system. After the
architectural design, you have the facade designed, and all the rooms are specified: the purpose of
each room and how rooms interface with other rooms.
Designing an individual microservice is no longer architectural design; it is like the interior design of
a single room. The design of individual microservices is handled using object-oriented design
principles, presented in the next chapter.
Domain-driven design (DDD) is a software design approach where software is modeled to match a
problem/business domain according to input from the domain experts. Usually, these experts come
from the business and specifically from product management. The idea of DDD is to transfer the
domain knowledge from the domain experts to individual software developers so that everyone
participating in software development can share a common language that describes the domain.
The idea of the common language is that people can understand each other and that multiple terms
are not used to describe a single thing. This common language is also called the ubiquitous
language.
The domain knowledge is transferred from product managers and architects to lead developers and
product owners (POs) in development teams. The team's lead developer and PO share the domain
knowledge with the rest of the team. This usually happens when the team processes epics and
features and splits them into user stories in planning sessions. A software development team can
also have a dedicated domain expert or experts.
DDD starts from the top business/problem domain. The top domain is split into multiple
subdomains at the same abstraction level, one level below the top domain. A domain should be
divided into subdomains so that there is minimal overlap between them. Subdomains interface
with other subdomains using well-defined interfaces. Subdomains are also called bounded
contexts, and technically they represent an application or a microservice. For example, a
banking software system can have a subdomain or bounded context for loan applications and
another for making payments.
1. Ingesting raw data from various sources of the mobile telecom network
2. Transforming the ingested raw data into meaningful insights
3. Proper ways of presenting the insights to software system users
Let's pick up some keywords from the above definitions and formulate short names for the
subdomains:
We know that a mobile telecom network is divided into core and radio networks. From that, we can
conclude that the Ingesting raw data domain can be divided into further subdomains: Ingesting radio
network raw data and Ingesting core network raw data (see Figure 2.11, Subdomains). We can turn
these two subdomains into
applications for our software system: Radio network data ingester and Core network data
ingester.
The Transforming raw data to insights domain should at least consist of an application
aggregating the received raw data to counters and key performance indicators (KPIs). We can call
that application Data aggregator.
The Presenting insights domain should contain a web application that can present insights in
various ways, like using dashboards containing charts presenting aggregated counters and
calculated KPIs. We can call this application Insights visualizer.
Now we have the following applications for the software system defined:
Next, we continue architectural design by splitting each application into one or more software
components (services, clients, and libraries). When defining the software components, we must
remember to follow the single responsibility principle, the avoid duplication principle, and the
externalized service configuration principle.
When considering the Radio network data ingester and Core network data ingester applications,
we can notice that both can be implemented using a single microservice, data-ingester-service,
with different configurations for the radio and core networks. This is because the protocol for
ingesting the data is the same for both networks; they differ only in the schema of the ingested
data. Using a single configurable microservice with externalized configuration, we avoid code
duplication.
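One way such a configurable ingester could select its schema is via an environment variable; the variable name INGESTED_DATA_SCHEMA and the schema contents are illustrative assumptions:

```javascript
// Sketch: one data-ingester-service selecting the ingested data
// schema from an environment variable. The variable name
// INGESTED_DATA_SCHEMA and the schema contents are illustrative
// assumptions.
const schemas = {
  radio: { source: 'radio-network' /* field definitions ... */ },
  core: { source: 'core-network' /* field definitions ... */ },
};

function getIngestionSchema(env = process.env) {
  const schema = schemas[env.INGESTED_DATA_SCHEMA];
  if (!schema) {
    throw new Error(`Unknown schema: ${env.INGESTED_DATA_SCHEMA}`);
  }
  return schema;
}

// A radio network deployment would set INGESTED_DATA_SCHEMA=radio:
console.log(getIngestionSchema({ INGESTED_DATA_SCHEMA: 'radio' }).source);
// radio-network
```

The two deployments would differ only in this environment variable, set for instance via the Helm values.yaml files of the two applications.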
The Insights visualizer application consists of:
• A web client
• A service for fetching aggregated and calculated data (counters and KPIs)
• A service for storing the dynamic configuration of the web client
The dynamic configuration service stores information about what insights to visualize, and how, in
the web client. The microservices in the Insights visualizer application are:
• insights-visualizer-web-client
• insights-visualizer-data-service
• insights-visualizer-configuration-service
Now we are ready with the microservice-level architectural design for the software system.
Random documents with unrelated
content Scribd suggests to you:
STOCK TURNOVER
A New England merchant operating a country general store made it
one of his business rules that he would never sell an article at less
than cost. His way of figuring was that if he never made a sale at a
loss he could never lose money and consequently his business was
bound to prosper. And so he went on year after year faithfully
following out his original idea, which later became as a law to him.
Each year contributed to pile up more stock, and each year he found
himself with more dead stock, that steadily decreased in value the
longer he kept it. There could be only one result following from such
a short-sighted policy—the business died of dry-rot. It was then for
the creditors to sell the goods at whatever they would bring, and it
was an actual fact that some of the goods were found to have been
in stock for twenty-five and thirty years.
For our purpose we are interested in this experience only as it shows
the importance of keeping the stock moving. The old country
merchant knew nothing of the meaning or importance of stock
turnover. Today most merchants understand that a great measure of
their success in trading is dependent upon the ability of the
salesmen to sell out the stock promptly. A profit is made only when
the goods are sold, and therefore the store’s success is measured by
the number of times a line of goods can be sold out or turned over
during the course of a year. To say that a certain line of shoes has a
turnover of four, means that the line is sold out and replaced with
fresh stock four times during the year.
The following is a word of good advice given to merchants on this
important matter of turnover:
Did you ever think of shoes as so many dollar bills lying on
your shelves? Picture this thought in your mind. As long as
they repose on your shelves they do not work for you. In
fact, converted into shoes, they cost money and depreciate
in value the longer they stay there. It would be better to
have real dollar bills tucked away in your stocking; you would
then receive no interest but they would not cost you money.
Keep your stock moving!
Clean out slow sellers!
Stock turnover is the secret of success in conducting a store.
The salesman’s work has a very direct relation to the matter of stock
turnover, for, after all, he is closest to the customer and upon his
knowledge of the stock and selling ability depends a great deal of
the success in keeping the stock moving and of keeping it clean of
short lines and dead stock. This is no small responsibility. A
knowledge of the stock is essential. On the part of the salesman it
requires that he know what goods he has to offer, where he may
find them quickly, their particular merits and special advantages
from the customer’s point of view.
CHAPTER XI
MONEY VALUE OF IDEAS
GETTING “UNDER HIS SKIN”
It is for the salesman, if he is to get results, to talk to his customer
in terms of facts and ideas—not simply “words.” Sometimes we hear
of a person who “talks a great deal but says nothing,” and we
understand by this that his statements are without facts—that there
is no point to what he says. Personal selling is a matter of presenting
the story to the customer in such a way that he realizes he is getting
information. It is for the salesman to tell his story so that it will “get
under the customer’s skin.” This requires a certain amount of
originality, a knowledge of what is being sold, an understanding of
the customer.
In reading footwear advertisements, which are simply printed selling
talks, it is interesting to notice how well the selling points are
presented to appeal to different classes of customers. The following
one, for instance, is directed to men. It is brief, but in a few words
brings out the story by emphasizing the qualities of comfort and
convenience, which are of greatest importance to most men:
Low shoes give your ankles a holiday every day.
Perhaps russet is a bit cooler—it’s easier to care for anyhow.
Other people think more of exact shoe fitting, especially if they are
having trouble with their feet. The main selling point in this case is
that of offering a shoe to do away with further troubles. The
following ad. shows how this was done. The shoe salesman has the
same problem, except that he has the advantage of meeting the
buyer face to face and can tell his story in a little different way.
Ever have trouble with your feet? “Blank” wearers never do.
That’s because the “Blank” fits perfectly—no pinching, nor
pain for the grown-ups—no deformities for growing feet. The
“Blank” shoe starts the foot right and keeps it so.
But, as every shoe salesman will know, different people have
different ideas concerning what is the feature most desirable in a
shoe. To impress the person who considers as uppermost the matter
of appearance and style, the selling talk is directed along a different
line so as to “get under the skin” of such a customer.
If you have a pretty foot and ankle, wear a shoe that does
them justice. If you haven’t, wear a shoe that makes them
look as if the pretty foot and ankle were yours. “Blank” shoes
for women emphasize the pretty foot, add grace and
shapeliness to any foot. “Blank” shoes fit all over—not in
spots. They fit around the ankle and they fit around the foot,
and fit both with the smoothness of a stocking and the
firmness of a glove. The fit of the ankle is for something
more than looks. That graceful custom-made “curve” at the
back holds the shoe firmly but gently in place. No up-and-
down slide—heel-hurting and peace-impairing—to the
“Blank” shoe.
These selling appeals are all made with the express purpose of
meeting the individual desires of different classes of people. The
man who tells the printed story realizes that he cannot get results in
talking style to the person who is suffering from foot trouble, or vice
versa. He realizes that there are many classes of customers and he
plans his selling talk so that it will be accepted by the people to
whom he is talking. The salesman will realize at once that he must
meet the same condition.
MAKING TWO SALES OUT OF ONE
Just as it is possible for a man, by mixing brains with his effort, to
make two blades of grass grow where one grew before, or to grow
two bushels of wheat on the plot that formerly produced but one, so
also the salesman may increase his production of sales. With him it
is a matter of seasoning his effort with ideas and suggestions that
will appeal to the customer and stir-up the desire to buy. To
illustrate: The manager of one of the finest shoe departments in the
United States has built up a big business in patent low-cut shoes.
The growth has come about largely through the application of an
original but simple idea that has as its basis a positive suggestion to
the customer. The plan may be described briefly by mentioning the
case of a woman who enters the department to purchase a pair of
spats. The salesman, working on the idea, gets the spats, removes
the customer’s shoes and puts on her feet a pair of patent leather
pumps. He had, of course, previously taken notice of the size of the
customer’s foot. Having put on the patent leather shoes the
salesman then adjusts the spats, dropping just a word of explanation
to the effect that spats can be judged to better advantage when
fitted over patent low-cuts. The result in a large percentage of cases
is the sale of the patent leather shoes as well as the spats.
Illustrations without number might be mentioned to show the
generous response, in the way of increased business, that follows in
the path of intelligent effort. Some of these the salesman might well
use, without variation, in his daily work; others he might improve to
meet more closely the demands of his own trade. However, the
greatest good will come to the salesman who uses these illustrations
as a guide rather than as a model to be copied line for line.
An incident worth mentioning is that of a gentleman accompanied by
his wife and two children who entered a shoe department to
purchase a pair of canvas shoes for the lady. It was in the early
spring and the family was starting off to spend some time in the
country. While serving the woman the salesman noticed that the
husband was wearing heavy winter shoes, and after completing the
first sale he suggested a “pair of comfortable canvas shoes for all-
around country use,” and mentioned that a new line had recently
been received. He was then quickly on his way to select a desirable
shoe, and by the time he returned the customer had half decided
that he probably would be much more comfortable with a pair of
light shoes. The feel of the shoe upon his foot served to complete
his decision—and the sale followed. A bright remark on the
salesman’s part to the effect that he could furnish “two pairs of
shoes for the price of the one just bought” was an original way of
suggesting shoes for the two children. It appealed to the customer
and another sale was made. Furthermore, the customer was more
pleased with having purchased the four pairs than he would have
been with only the one he had first planned to buy.
It is out of the question to suggest that this plan or any other would
produce results in every instance—every salesman knows that it
would not. On the other hand, it does very clearly point out how
intelligent effort on the salesman’s part can be turned into sales
when properly directed to meet the needs of the individual customer.
ADVANTAGES OF AN EXTRA PAIR
There is probably not one customer in fifty who understands why it
is to his advantage to be provided with an extra pair of shoes. Most
customers would agree that, for the sake of variety, it would be well
to have another pair so that they might alternate in wearing
different shoes. But they do not realize that there is actually an
advantage of money saving to be gained.
It is for the salesman to offer a definite reason for the purchase of a
second pair. If the shoes are allowed to “rest” every other day or
perhaps for two days after each time they are worn the wearing life
will be much greater. By regularly changing off in this way,
opportunity is given for the foot perspiration to dry out before it is
able to cause any damaging effect upon the leather and fabric,
especially that on the inside of the shoe. In addition, there is the
sanitary advantage. Most people live in their shoes about sixteen
hours a day and during that time subject them to a variety of
conditions of cold, heat and dampness. From the standpoint of
sanitation, it is as important to provide sufficient ventilation for the
shoe as it is to do so for the rooms in which we live.
CLOSING THE SALE IN THE STORE
Satisfaction on the part of the customer is the basis of successful
merchandising. Every wide-awake salesman and dealer realizes this
fact, and makes it a part of his selling policy to insure the customer’s
entire satisfaction, as far as it is humanly possible to do so. The
mistake sometimes is made, in trying to please a customer, to leave
an unnecessary opening for dissatisfaction. For instance, the
salesman might make the remark to an undecided customer, “Take
them home and if they are not just what you want, bring them
back.” The suggestion is made with the best intention to serve well.
But there is in it the germ of indecision which may later develop into
dissatisfaction and cause the customer to return the goods when
there may be no occasion for it.
The time for the salesman to complete the sale is when he has the
customer before him, face to face. There are exceptions to the rule,
but in general if the customer cannot decide favorably when he has
the benefit of the salesman’s advice and suggestion, it is not likely
that he will be able to do so later. To suggest a decision later is the
salesman’s admission that he has not completed the sale. What the
buyer requires is more selling effort, rather than more time so that
he may think it out for himself.
Closing the sale in the store means to learn just what the objections
are that are holding up the decision, and then to present selling
facts so that the objections will be overcome and the sale will follow
naturally. If the customer is told to work out his own salvation by
deciding later, it is likely that his objections will take on greater
proportions, while the advantages must fade into the background.
The result then is that the goods will be returned, and either the
business is lost altogether or else the effort to sell must be
commenced all over again. A sale that is completed when the
customer first calls is good business for the salesman. To the
customer it is even more satisfying, for the reason that he is put to
no inconvenience in returning the pair first bought and in selecting
some other. He is also more favorably impressed with the salesman’s
ability to sell and his understanding of the goods being offered.
GETTING BUSINESS FROM OUTSIDE FRIENDS
When a salesman encourages business with outside friends he is
justified in his feeling that he is offering a higher quality of personal
service than the friend would receive at any other store where he is
unknown to the salesman. To begin with, there is a better basis of
understanding between the buyer and the seller. The salesman
knows quite definitely what his friend desires in style, fit, quality, and
he may know his price limitations. Furthermore, there is a natural
personal interest in the customer that must surely result in his
receiving the maximum of service. These are advantages to be
gained by the friend. The salesman has the advantage of an
enlarged list of regular customers as a result of a simple
announcement that he is in the shoe business and that he would like
to have a call from his friends.
Along the same line may be considered the suggestion sometimes
made by the salesman to the effect that “I wear this style myself.” A
point such as this would carry weight with a close personal
acquaintance of the salesman and would be well worth bringing out
whenever necessary. However, to customers who are not personally
acquainted with the salesman it would probably seem out of place,
and would carry no weight in bringing about a decision. Rather than
run the risk of being misunderstood it would be better for the
salesman to omit, as much as possible, personal reference from his
sales talk.
TELEPHONE SALESMANSHIP
More and more the advantage of the telephone as a means of
getting business is coming to be realized by shoe salesmen who are
alive to ideas. With a list of his customers’ telephone numbers the
salesman is in a position to place himself and his story before any
one of them within a moment’s notice. He may have an
announcement of the receipt of a new line of styles which he knows
will especially appeal to the customer, or perhaps the salesman may
have in stock a special-value shoe of the customer’s size that he will
be interested to see. It may be an advance announcement of a sale,
or any one of a dozen items of special interest to a buyer. The
telephone is at the salesman’s elbow. It is as easy for him to tell his
story to the customer as it is for him to “talk about the weather” to
the man standing alongside of him.
“Good-morning, Mrs. Brown, this is the Progressive Shoe Store—Mr.
Smith talking. You will be interested to know that we have today
received our complete line of spring styles. There are two or three of
the models I know will appeal to you especially.” ... “Wednesday?
Very well, I’ll have them ready to show when you call.”
The customer appreciates genuine service of this kind. It requires
just a moment of the salesman’s time, but produces big results in
the form of increased business, and it establishes the good-will of
the customer, both in the store and the salesman.
PERSONAL LETTER
It requires somewhat more time and a little extra effort on the
salesman’s part to write a short, personal letter to his customers to
accompany the season’s announcement. The telephone can be
employed, perhaps, with less effort, but it is not always possible to
make use of this means of getting in touch with customers. There
are some buyers who live out of town, and others who cannot be
reached by telephone—but the mails go everywhere.
The personal letter has its advantage in that it makes a more lasting
impression on the customer’s mind. It is of a more permanent
nature and is consequently less easily forgotten. Also it serves to get
the salesman’s name before the customer in such a way that it will
be remembered. It is a known fact that people remember what they
read for a longer time than they do the things they hear. This is no
small matter from the standpoint of the salesman, because he is
continually working to single himself out from all other shoe
salesmen in the mind of the customer and thus to build up a
personal following of his own. A short, business-like letter will go a
long way toward establishing such a relationship.
SALESMANSHIP AND DISPLAY FIXTURES
The inside display case is the shoe store’s open picture book. Almost
everyone enjoys looking at pictures, which is proved by the success
of the moving-picture show. Were the salesman merely to say, in
suggesting an additional purchase, that he has a pretty suede pump
of a new model, he could not do more than arouse a mild interest.
On the other hand, if, with the aid of the display case, he is able to
bring the shoe directly to the customer’s notice he at once has
interest and his statements then are not mere words, but facts.
Very often the tendency is to let the show case tell its own story; to
take it for granted that if the customer sees what he wants he will
say so and buy it. But that, generally, is not what happens. Most
people are inclined to hold back in making a decision to spend
money, even though they realize their need for the goods. A word
from the salesman to bridge over the gap many times is all that is
required to complete the sale. Display fixtures are mechanical and
have their purpose to reduce the salesman’s physical effort in
showing the goods. They do not take the place of the salesman but
serve as his convenience to show more and to sell more goods. It
does not take a great deal of extra effort to finish off the sale of a
pair of shoes with an additional sale of shoe trees, hosiery, shoe
dressing or some other findings, but the business amounts to a
substantial figure in the course of a month.
EXAGGERATION
Just as it is important for the salesman to develop positive, money-
making ideas, it is necessary for him to guard against anything in his
selling talk that will result to deaden the customer’s confidence.
Lincoln very wisely said, with his original knack of expressing the
point so that no one could miss it, that “you can fool all the people
some of the time, you can fool some of the people all the time, but
you can’t fool all the people all the time.” Ninety-nine per cent of the
customers are in the class of people who may be fooled once but
who make it their special business to guard against it the second
time.
Exaggeration is one way of fooling the customer. There are times
when a sale might be closed more quickly by stretching the truth,
but the advantage to the salesman and the store cannot be lasting
on such a basis. When the customer learns that he has been fooled,
and in most cases he will find it out, his further business will very
likely be lost forever. The customer has been given a just cause for
grievance and it will be necessary to overcome his strong prejudice
before he can be brought into the store again. He will never entirely
forget the occurrence even though he might overlook it for the time
being. Moreover, it will surely be revived in his mind at a later time
upon any slight indication of what might seem to be an attempt at
unfair treatment.
Exaggeration is largely a matter of habit. If the salesman allows
himself to stretch a point today and he finds that it works, the
chances are that he will try the same trick a second and a third time,
until finally the exaggeration comes to him so naturally that he does
not realize he is fooling the customer. On the other hand, it is a
matter of habit also to cultivate honesty and square dealing. If the
customer is given the true facts in the first place it means that there
can be no come-back—that he will know what to expect of the
goods he has bought and that he will respect the man who sold
them, when he finds that they come up to his expectations.
FORCED SALES
Another point of importance along this general line of thought is that
of guarding against forced sales. Once in a great while it may
happen that a salesman does not have in stock the shoe he knows
the customer should have. Perhaps the customer may have a foot of
such unusual shape that it requires either a custom-made shoe or
some special model not carried in stock. Even though the salesman
were to force on such a man a pair of shoes that would not give him
service, there could be no permanent advantage. If the customer did
not later return the shoes for a claim he would probably pocket his
loss with the feeling that he had been beaten.
C. A. Reynolds, president of the Keystone Leather Company,
Camden, New Jersey, who, as a young man, was a retail shoe
salesman, tells of an experience of his that illustrates this point. A
customer entered the store, asked to be fitted, and explained that
he was having considerable trouble with his feet. Upon examining
the foot the young salesman (who was Mr. Reynolds) noticed that it
was of such a shape and in such a condition as to require a special
type of shoe that was not kept in stock. The salesman frankly
explained the facts and then advised the customer where he could
get the shoe he needed. The sale had been lost, but the customer
was pleased because he found what he wanted in the store to which
he had been directed. He returned to thank the young man for his
advice. And he did more; he later brought his wife and three
children to be fitted where he knew they would receive service.
It was a matter of losing one customer to gain four. The experience
illustrates the difference between the short-sighted policy of “a sale
at any cost,” and the true basis of selling on the foundation of
service.
CHAPTER XII
THE SALESMAN’S RESPONSIBILITY
SELLING P.M. GOODS
ADVANTAGES
In favor of the premium system may be mentioned the fact that it is
an effective means of keeping the shelves clean, at all times, of dead
stock. To the house it means a smaller profit on the sale as a result
of the extra commission paid the salesman, but this is more than
overbalanced by the fact that goods are kept steadily moving and
that an even greater loss would result if they were allowed to
remain in stock indefinitely.
The particular advantage to the salesman is that he is encouraged to
sell goods that require on his part a higher degree of salesmanship
than that called for in selling the popular lines. Then, of course,
there is the evident advantage of increasing his earnings to the
extent of the premium.
DISADVANTAGES
It is not to be expected that the P.M. system has all advantages in its
favor, and none of the disadvantages to offset them. Indeed, there
are many retailers today who are very strongly opposed to the
premium system and who will not introduce it into their own
organizations, on the ground that it works against the best interests
of the customer. The opposition is based on the claim that the
tendency to earn the reward is so great on the part of the salesman
that there is the likelihood that the customer will be prevailed upon
to buy goods that are not best suited to his needs. In other words,
the inexperienced salesman will have foremost in his mind the fact
that a certain shoe bears a P.M., and in order to earn this for himself
he will adopt the short-sighted policy of selling the shoe to the
customer, even though he may know it to be the one not best
suited.
If the salesman should allow himself to be influenced in this way in
order to earn a small commission, it is certainly true that the
premium system would be a failure. It would be a great deal better
to have the dead stock on the shelves than to allow the customers
to be badly served. The result would be to lose the customer, and
this, of course, would be fatal to the business if the system were
allowed to continue. It is from “repeat” business that the store
makes its soundest profits, and it is also from “repeat” business that
the salesman establishes himself as a big sales producer. He cannot
afford to allow a small temporary gain in the form of a premium to
stand in the way of his future development and success as a
salesman.