Introduction to Business Analytics
LECTURE NOTES
UNIT 1
INTRODUCTION TO BUSINESS ANALYTICS
Analytics and data science- Analytics life cycle-Types of Analytics-Business Problem
definition- Data collection- Data preparation-Hypothesis
generation-Modeling-Validation and Evaluation-Interpretation-Deployment and
iteration.
Introduction:
Every organization across the world uses performance measures such as market share,
profitability, sales growth, return on investments (ROI), customer satisfaction, and so on for
quantifying, monitoring, and improving its performance.
Organisations should understand their KPIs (Key Performance Indicators) and the
factors that have an impact on those KPIs.
1. Analytics:
Data Science is nothing short of magic, and a Data Scientist is a magician who
performs tricks with the data in his hat. Now, as magic is composed of different elements,
similarly, Data Science is an interdisciplinary field. We can consider it to be an amalgamation
of different fields such as data manipulation, data visualization, statistical analysis, and
Machine Learning. Each of these sub-domains has equal importance.
Data Manipulation:
With the help of data manipulation techniques, you can find interesting insights from
the raw data with minimal effort. Data manipulation is the process of organizing information
to make it readable and understandable. Engineers perform data manipulation using data
manipulation language (DML) capable of adding, deleting, or altering data. Data comes from
various sources.
While working with disparate data, you need to organize, clean, and transform it to use it in
your decision-making process. This is where data manipulation fits in. Data manipulation
allows you to manage and integrate data helping drive actionable insights.
Data manipulation, also known as data preparation, enables users to turn static data into fuel
for business intelligence and analytics. Many data scientists use data preparation software to
organize data and generate reports, so non-analysts and other stakeholders can derive
valuable information and make informed decisions.
Data manipulation makes it easier for organizations to organize and analyse data as
needed. It helps them perform vital business functions such as analyzing trends and buyer
behaviour, and drawing insights from their financial data.
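To make this concrete, here is a minimal sketch in Python of cleaning and organizing raw data to drive an insight; the records, field names, and the aggregation are all invented for illustration:

```python
# Hypothetical example: organizing raw, messy sales records so they are
# readable and ready for analysis. All field names and values are invented.

raw_records = [
    {"region": " north ", "sales": "1200"},
    {"region": "SOUTH",   "sales": "950"},
    {"region": "north",   "sales": "1100"},
]

def clean(record):
    """Standardize text fields and convert numeric strings to numbers."""
    return {
        "region": record["region"].strip().lower(),
        "sales": int(record["sales"]),
    }

cleaned = [clean(r) for r in raw_records]

# Aggregate: total sales per region, a typical "actionable insight" step.
totals = {}
for r in cleaned:
    totals[r["region"]] = totals.get(r["region"], 0) + r["sales"]

print(totals)  # {'north': 2300, 'south': 950}
```

In real work the same idea is applied at scale with SQL (a data manipulation language) or data preparation software, but the principle is the same: standardize, convert, then aggregate.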
Data visualization:
1. First things first, you should define data sources and data types that will be used. Then
transformation methods and database qualities are determined.
2. Following that, the data is sourced from its initial storages, for example, Google
Analytics, ERP, CRM, or SCM system.
3. Using API channels, the data is moved to a staging area where it is transformed.
Transformation assumes data cleaning, mapping, and standardizing to a unified
format.
4. Further, cleaned data can be moved into a storage: a usual database or data
warehouse. To make it possible for the tools to read data, the original base language
of datasets can also be rewritten.
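The four steps above can be sketched in miniature with Python's standard library. The source data, the transformation, and the table layout below are assumptions for illustration; a real pipeline would pull from systems such as Google Analytics or a CRM through their APIs:

```python
import sqlite3

# 1-2. Define and source the data. Here the "initial storage" is just an
#      in-memory list standing in for an ERP/CRM export.
sourced = [("2023-01", "1,000"), ("2023-02", "1,250")]

# 3. Staging/transformation: clean, map, and standardize to a unified format.
def transform(row):
    month, revenue = row
    return (month, int(revenue.replace(",", "")))  # "1,250" -> 1250

staged = [transform(r) for r in sourced]

# 4. Load the cleaned data into a storage (an SQLite database here) so that
#    BI and visualization tools can read it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (month TEXT, amount INTEGER)")
conn.executemany("INSERT INTO revenue VALUES (?, ?)", staged)

total = conn.execute("SELECT SUM(amount) FROM revenue").fetchone()[0]
print(total)  # 2250
```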
Bar chart
A bar chart is one of the basic ways to compare data units to each other. Because of
its simple graphic form, a bar chart is often used in Business Analytics as an interactive page
element.
Bar charts are versatile enough to be modified and show more complex data models. The bars
can be structured in clusters or be stacked, to depict distribution across market segments, or
subcategories of items. The same goes for horizontal bar charts, which fit better when long
data labels need to be placed on the bars.
When to use: comparing objects, numeric information. Use horizontal charts to fit long data
labels. Place stacks in bars to break each object into segments for a more detailed
comparison.
Pie chart
This type of chart is used in any marketing or sales department, because it makes it easy to
demonstrate the composition of objects or unit-to-unit comparison.
Line Graph
This type of visual utilizes a horizontal axis and a vertical axis to depict the value of a unit
over time.
Line graphs can also be combined with bar charts to represent data from multiple dimensions.
When to use: object value on the timeline, depicting tendencies in behavior over time.
Box plot
At first glance, a box plot looks pretty complicated. But if we look closer at the example, it
becomes evident that it depicts quarters in a horizontal fashion.
Our main elements here are minimum, maximum, and the median placed in between
the first and third quartile. What a box shows is the distribution of objects and their
deviation from the median.
When to use: distribution of a complex object, deviation from the median value.
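The elements a box plot depicts can be computed directly; a small sketch using Python's standard library on invented sample data:

```python
import statistics

# Invented sample: e.g., delivery times in days.
data = [2, 3, 3, 4, 5, 6, 7, 8, 12]

# The five elements a box plot is built from:
minimum = min(data)
maximum = max(data)
q1, median, q3 = statistics.quantiles(data, n=4)  # quartile boundaries

print(minimum, q1, median, q3, maximum)  # 2 3.0 5.0 7.5 12
```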
Scatter plot
This type of visualization is built on X and Y axes, with dots placed between them to
represent objects. The position of a dot on the graph denotes which qualities the object has.
As in the case of line graphs, dots placed between the axes are noticed in a split second. The
only limitation of this type of visualization is the number of axes.
When to use: showing distribution of objects, defining the quality of each object on the
graph.
Radar chart
This type of chart is basically a line chart drawn in radial fashion. It has a spider web form
that is created by multiple axes and variables.
Its purpose is the same as for a line chart. But because of the number of axes, you can
compare units from various angles and show the inclinations graphically.
When to use: describing data qualities, comparing multiple objects to each other through
different dimensions.
Map chart
Superimposing a visualization over the map works for data’s geographical domain. Density
maps are built with the help of dots placed on the map, marking the location of each unit.
Funnel chart
Funnel charts are perfect for showing how a set of items narrows from one stage of a process
to the next. In most cases, funnels will utilize both geometric form and colour coding to
differentiate items. A typical example shows conversion results, starting from the total traffic
number down to the number of subscribers.
This type of chart is also handy when there are multiple stages in a process. In the
conversion example, you could see that after the “Contacted Support” stage, the number of
subscribers has been reduced.
When to use: depicting processual stages with the narrowing percentage of value/objects
When choosing the type of visualization, make sure you clearly understand what you want the chart to communicate.
Statistical analysis:
Statistical analysis is another core component of Data Science. Among other things, it involves:
● Summarizing and presenting the data in a graph or chart to present key findings
● Discovering crucial measures within the data, like the mean
● Determining whether the data is tightly clustered or spread out, which also reveals
similarities.
● Making future predictions based on past behavior
● Testing a hypothesis from an experiment
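A couple of these tasks can be sketched with Python's standard library; the regional sales figures below are invented:

```python
import statistics

# Invented monthly sales figures for two regions.
east = [100, 102, 98, 101, 99]
west = [80, 120, 60, 130, 110]

# Crucial measures within the data: the mean of each region.
mean_east = statistics.mean(east)   # 100
mean_west = statistics.mean(west)   # 100

# Whether the data is tightly clustered or spread out: standard deviation.
spread_east = statistics.stdev(east)
spread_west = statistics.stdev(west)

# Same average, but very different spread: east is far more consistent.
print(mean_east, mean_west, spread_east < spread_west)
```

The two regions have identical means, so looking at spread as well as centre is what reveals the real difference between them.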
There are several ways that businesses can use statistical analysis to their advantage. Some of
these ways include identifying who on your sales staff is performing poorly, finding trends in
customer data, narrowing down the top operating product lines, conducting financial audits,
and getting a better understanding of how sales performance can vary in different regions of
the country.
Just like any other thing in business, there is a process involved in business analytics
as well. Business analytics needs to be systematic, organized, and include step-by-step
actions to have the most optimized result at the end with the least amount of discrepancies.
● Business Problem Framing: In this step, we basically find out what business
problem we are trying to solve, e.g., when we are looking to find out why the supply
chain isn’t as effective as it should be or why we are losing sales. This discussion
generally happens with stakeholders when they realize inefficiency in any part of the
business.
● Analytics Problem Framing: Once we have the problem statement, what we need to
think of next is how analytics can be done for that business analytics problem. Here,
we look for metrics and specific points that we need to analyze.
● Data: The moment we identify the problem in terms of what needs to be analyzed, the
next thing that we need is data, which needs to be analyzed. In this step, not only do
we obtain data from various data sources but we also clean the data; if the raw data is
corrupted or has false values, we remove those problems and convert the data into
usable form.
● Methodology selection and model building: Once the data gets ready, the tricky part
begins. At this stage, we need to determine what methods have to be used and what
metrics are the crucial ones. If required, the team has to build custom models to find
out the specific methods that are suited to respective operations. Many times, the kind
of data we possess also dictates the methodology that can be used to do business
analytics. Most organizations make multiple models and compare them based on the
decided-upon crucial metrics.
● Deployment: After selecting the model and the statistical methods for analyzing the
data, the next thing we need to do is test the solution in a real-time
scenario. For that, we deploy the models on the data and look for different kinds of
insights. Based on the metrics and data highlights, we need to decide the optimum
strategy to solve our problem and implement a solution effectively. Even in this phase
of business analytics, we will compare the expected output with the real-time output.
Later, based on this, we will decide if there is a need to reiterate and modify the
solution or if we can go on with the implementation of the same.
2. Business Analytics Process
The Business Analytics process involves asking questions, looking at data, and
manipulating it to find the required answers. Now, every organization has different ways to
execute this process as all of these organizations work in different sectors and value different
metrics more than the others based on their specific business model.
Since the approach to business is different for different organizations, their solutions and their
ways to reach the solutions are also different. Nonetheless, all of the actions that they do can
be classified and generalized to understand their approach. The following are the steps in
the Business Analytics process of a firm:
Step 1: Business Problem Identification
The first step of the process is identifying the business problem. The problem could be an
actual crisis; it could be something related to recognizing business needs or optimizing
current processes. This is a crucial stage in Business Analytics as it is important to clearly
understand what the expected outcome should be. When the desired outcome is determined, it
is further broken down into smaller goals. Then, business stakeholders decide the relevant
data required to solve the problem. Some important questions must be answered in this stage,
such as: What kind of data is available? Is there sufficient data? And so on.
Step 2: Data Collection and Cleansing
Once the problem statement is defined, the next step is to gather data (if required) and, more
importantly, cleanse the data—most organizations would have plenty of data, but not all data
points would be accurate or useful. Organizations collect huge amounts of data through
different methods, but at times, junk data or empty data points would be present in the
dataset. These faulty pieces of data can hamper the analysis. Hence, it is very important to
clean the data that has to be analyzed.
To do this, you must impute the missing data, remove outliers, and derive new
variables as a combination of other variables. You may also need to plot time series graphs as
they generally indicate patterns and outliers. It is very important to remove outliers as they
can have a heavy impact on the accuracy of the model that you create. Moreover, cleaning the
data helps you get a better sense of the dataset.
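One common way to remove outliers is the interquartile-range (IQR) rule; a small sketch on invented daily sales data:

```python
import statistics

# Invented daily sales with one faulty data point (9999).
sales = [210, 215, 208, 220, 212, 9999, 218, 209]

q1, _, q3 = statistics.quantiles(sales, n=4)
iqr = q3 - q1

# The usual rule of thumb: keep values within 1.5 * IQR of the quartiles.
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
cleaned = [x for x in sales if low <= x <= high]

print(cleaned)  # the 9999 outlier is removed
```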
Step 3: Analysis
Once the data is ready, the next thing to do is analyze it. Now to execute the same, there are
various kinds of statistical methods (such as hypothesis testing, correlation, etc.) involved to
find out the insights that you are looking for. You can use all of the methods for which you
have the data.
The prime way of analyzing is pivoting around the target variable, so you need to take into
account whatever factors that affect the target variable. In addition to that, a lot of
assumptions are also considered to find out what the outcomes can be. Generally, at this step,
the data is sliced, and the comparisons are made. Through these methods, you are looking to
get actionable insights.
Step 4: Prediction and Optimization
Gone are the days when analytics was used to react. In today’s era, Business Analytics is all
about being proactive. In this step, you will use prediction techniques, such as neural
networks or decision trees, to model the data. These prediction techniques will help you find
out hidden insights and relationships between variables, which will further help you uncover
patterns on the most important metrics. By principle, a lot of models are used simultaneously,
and the models with the most accuracy are chosen. In this stage, a lot of conditions are also
checked as parameters, and answers to a lot of ‘what if…?’ questions are provided.
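The idea of building several models and keeping the most accurate can be sketched very simply. The two hypothetical threshold "models" and the labeled data below are invented for illustration; real modeling would use techniques such as decision trees:

```python
# Invented labeled data: (monthly_usage_hours, churned?)
history = [(2, True), (3, True), (10, False), (1, True),
           (12, False), (8, False), (4, True), (9, False)]

# Two hypothetical "models": simple threshold rules on usage.
def model_a(usage):
    return usage < 5      # predict churn if usage is below 5 hours

def model_b(usage):
    return usage < 10     # predict churn if usage is below 10 hours

def accuracy(model):
    hits = sum(model(usage) == churned for usage, churned in history)
    return hits / len(history)

# Keep whichever model scores best on the decided-upon metric.
best = max([model_a, model_b], key=accuracy)
print(best.__name__, accuracy(best))  # model_a 1.0
```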
Step 5: Implementation
From the insights that you receive from your model built on target variables, a viable plan of
action will be established in this step to meet the organization’s goals and expectations. The
said plan of action is then put to work, and the waiting period begins. You will have to wait to
see the actual outcomes of your predictions and find out how successful you were in your
endeavors. Once you get the outcomes, you will have to measure and evaluate them.
Step 6: Evaluation and Optimization
After the implementation of the solution, the outcomes are measured as mentioned above. If
you find some methods through which the plan of action can be optimized, then those can be
implemented. If that is not the case, then you can move on with registering the outcomes of
the entire process. This step is crucial for any analytics in the future because you will have an
ever-improving database. Through this database, you can get closer and closer to maximum
optimization. In this step, it is also important to evaluate the ROI (return on investment).
2.3 Types of Analytics
At different stages of business analytics, a huge amount of data is processed. Depending on
the stage of the workflow and the requirement of data analysis, there are
four main kinds of analytics – descriptive, diagnostic, predictive and prescriptive. These four
types together answer everything a company needs to know- from what’s going on in the
company to what solutions to be adopted for optimising the functions.
The four types of analytics are usually implemented in stages and no one type of analytics is
said to be better than the other. They are interrelated and each of these offers a different
insight. With data being important to so many diverse sectors- from manufacturing to energy
grids, most of the companies rely on one or all of these types of analytics. With the right
choice of analytical techniques, big data can deliver richer insights for the companies.
Before diving deeper into each of these, let’s define the four types of analytics:
2.3.1 Descriptive Analytics
This can be termed the simplest form of analytics. The mighty size of big data is beyond
human comprehension and the first stage hence involves crunching the data into
understandable chunks. The purpose of this analytics type is just to summarise the findings
and understand what is going on.
Among some frequently used terms, what people call as advanced analytics or business
intelligence is basically usage of descriptive statistics (arithmetic operations, mean, median,
max, percentage, etc.) on existing data. It is said that 80% of business analytics mainly
involves descriptions based on aggregations of past performance. It is an important step to
make raw data understandable to investors, shareholders and managers. This way it gets easy
to identify and address the areas of strength and weakness so that it can help in improving
the business. The two main techniques involved are data aggregation and data mining; note
that this method is purely used for understanding the underlying behavior and not for making
any estimations. By mining historical data, companies can analyze the consumer behaviors
and engagements with their businesses, which could be helpful in targeted marketing, service
improvement, etc. The tools used in this phase are MS Excel, MATLAB, SPSS, STATA, etc.
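Such descriptive aggregations (mean, median, max, percentage) are simple computations over past data; a small sketch on invented quarterly revenue:

```python
import statistics

# Invented revenue per quarter (in thousands).
revenue = {"Q1": 400, "Q2": 450, "Q3": 380, "Q4": 520}
values = list(revenue.values())

total = sum(values)
summary = {
    "mean": statistics.mean(values),
    "median": statistics.median(values),
    "max": max(values),
    # Percentage contribution of each quarter to the yearly total.
    "share": {q: round(100 * v / total, 1) for q, v in revenue.items()},
}

print(summary)
```

Note that nothing here estimates the future; descriptive analytics only summarises what already happened.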
2.3.2 Diagnostic Analytics
Diagnostic analytics digs into past data to understand why something happened. In a time
series of sales data, it would help you understand why the sales have decreased or increased
in a specific year. However, this type of analytics has a
limited ability to give actionable insights. It just provides an understanding of causal
relationships and sequences while looking backward.
A few techniques that use diagnostic analytics include attribute importance, principal
component analysis, sensitivity analysis, and conjoint analysis. Training algorithms for
classification and regression also fall in this type of analytics.
2.3.3 Predictive Analytics
Predictive analytics uses historical data to forecast what is likely to happen. For instance, it is found in sentiment analysis, where all the opinions posted on social media are
collected and analyzed (existing text data) to predict the person’s sentiment on a particular
subject as being- positive, negative or neutral (future prediction).
Hence, predictive analytics includes building and validation of models that provide accurate
predictions. Predictive analytics relies on machine learning algorithms like random forests,
SVM, etc. and statistics for learning and testing the data. Usually, companies need trained
data scientists and machine learning experts for building these models. The most popular
tools for predictive analytics include Python, R, RapidMiner, etc.
The prediction of future data relies on the existing data as it cannot be obtained otherwise. If
the model is properly tuned, it can be used to support complex forecasts in sales and
marketing. It goes a step ahead of the standard BI in giving accurate predictions.
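As a toy illustration of the idea behind trend forecasting, the sketch below fits a straight line by ordinary least squares (implemented by hand) to invented monthly sales and extrapolates one month ahead; real predictive models are of course far richer:

```python
# Invented monthly sales; months are numbered 1..6.
months = [1, 2, 3, 4, 5, 6]
sales = [100, 110, 120, 130, 140, 150]

# Ordinary least squares by hand: slope and intercept of the best-fit line.
n = len(months)
mx = sum(months) / n
my = sum(sales) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, sales)) \
        / sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

# Forecast the next month (month 7) from the fitted trend.
forecast = intercept + slope * 7
print(round(forecast))  # 160
```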
2.3.4 Prescriptive Analytics
The basis of this analytics is predictive analytics, but it goes beyond the three types
mentioned above to suggest future solutions. It can suggest all favorable outcomes according to a
specified course of action and also suggest various courses of action to get to a particular
outcome. Hence, it uses a strong feedback system that constantly learns and updates the
relationship between the action and the outcome.
The computations include optimisation of some functions that are related to the desired
outcome. For example, while calling for a cab online, the application uses GPS to connect
you to the correct driver from among a number of drivers found nearby. Hence, it optimises
the distance for faster arrival time. Recommendation engines also use prescriptive analytics.
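The cab example is, at its core, a small optimization: choose the course of action that minimizes a cost function. Here is a sketch with invented coordinates; a real system would optimize road travel time rather than straight-line distance:

```python
import math

rider = (0.0, 0.0)
# Invented driver positions (x, y).
drivers = {"A": (2.0, 3.0), "B": (1.0, 1.0), "C": (5.0, 0.5)}

def distance(p, q):
    return math.dist(p, q)  # Euclidean distance between two points

# Prescriptive step: choose the action (driver) that optimizes the
# outcome (minimal distance, hence faster arrival time).
best = min(drivers, key=lambda d: distance(rider, drivers[d]))
print(best)  # B
```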
The other approach includes simulation where all the key performance areas are combined to
design the correct solutions. It checks whether the key performance metrics are included
in the solution. The optimisation model will further work on the impact of the previously
made forecasts. Because of its power to suggest favorable solutions, prescriptive analytics is
the final frontier of advanced analytics, or data science in today’s terms.
The four techniques in analytics may make it seem as if they need to be implemented
sequentially. However, in most scenarios, companies can jump directly to prescriptive
analytics. Most of the companies are aware of, or are already implementing, descriptive
analytics; but if one has identified the key area that needs to be optimised and worked upon,
they can employ prescriptive analytics directly to reach the desired outcome.
According to research, prescriptive analytics is still at the budding stage and not many firms
have completely used its power. However, the advancements in predictive analytics will
surely pave the way for its development.
3. Business Problem Definition
In business, a problem is a situation that creates a gap between the desired and actual
outcomes. In addition, a true problem typically does not have an immediately obvious
resolution.
Understanding the importance of problem-solving skills in the workplace will help you
develop as a leader. Problem-solving is a valued skill in the workplace because it helps you
resolve the critical issues and conflicts that you come across.
There are many different problem-solving skills, but most can be broken into general steps.
Here is a four-step method for business problem solving:
1) Identify the Details of the Problem: Gather enough information to accurately define the
problem. This can include data on procedures being used, employee actions, relevant
workplace rules, and so on. Write down the specific outcome that is needed, but don’t assume
what the solution should be.
2) Creatively Brainstorm Solutions: Alone or with a team, state every solution you can
think of. You’ll often need to write them down. To get more solutions, brainstorm with the
employees who have the greatest knowledge of the issue.
3) Evaluate Solutions and Make a Decision: Compare and contrast alternative solutions
based on the feasibility of each one, including the resources needed to implement it and the
return on investment of each one. Finally, make a firm decision on one solution that clearly
addresses the root cause of the problem.
4) Take Action: Write up a detailed plan for implementing the solution, get the necessary
approvals, and put it into action.
4. Data Collection
Although there are use cases for second- and third-party data, first-party data (data you’ve
collected yourself) is more valuable because you receive information about how your
audience behaves, thinks, and feels—all from a trusted source.
Data can be qualitative (meaning contextual in nature) or quantitative (meaning numeric in
nature). Many data collection methods apply to either type, but some are better suited to one
over the other.
In the data life cycle, data collection is the second step. After data is generated, it must be
collected to be of use to your team. After that, it can be processed, stored, managed, analyzed,
and visualized to aid in your organization’s decision-making.
Before collecting data, there are several factors you need to define: the question you want to
answer, the type of data you need, your timeframe, and your company’s budget. Your data
collection method should be based on these factors. Explore the options in the next section to
see which one is the best fit.
4.1 SEVEN DATA COLLECTION METHODS USED IN BUSINESS ANALYTICS
1. Surveys
Surveys are physical or digital questionnaires that gather both qualitative and quantitative
data from subjects. One situation in which you might conduct a survey is gathering attendee
feedback after an event. This can provide a sense of what attendees enjoyed, what they wish
was different, and areas you can improve or save money on during your next event for a
similar audience.
Because they can be sent out physically or digitally, surveys present the opportunity for
distribution at scale. They can also be inexpensive; running a survey can cost nothing if you
use a free tool. If you wish to target a specific group of people, partnering with a market
research firm to get the survey in the hands of that demographic may be worth the money.
Something to watch out for when crafting and running surveys is the effect of bias, including:
● Collection bias: It can be easy to accidentally write survey questions with a biased lean.
Watch out for this when creating questions to ensure your subjects answer honestly and
aren’t swayed by your wording.
● Subject bias: Because your subjects know their responses will be read by you, their
answers may be biased toward what seems socially acceptable. For this reason, consider
pairing survey data with behavioral data from other collection methods to get the full
picture.
2. Transactional Tracking
Each time your customers make a purchase, tracking that data can allow you to make
decisions about targeted marketing efforts and understand your customer base better.
Often, e-commerce and point-of-sale platforms allow you to store data as soon as it’s
generated, making this a seamless data collection method that can pay off in the form of
customer insights.
3. Interviews and Focus Groups
Interviews and focus groups consist of talking to subjects face-to-face about a specific topic
or issue. Interviews tend to be one-on-one, and focus groups are typically made up of several
people. You can use both to gather qualitative and quantitative data.
Through interviews and focus groups, you can gather feedback from people in your target
audience about new product features. Seeing them interact with your product in real-time and
recording their reactions and responses to questions can provide valuable data about which
product features to pursue.
As is the case with surveys, these collection methods allow you to ask subjects anything you
want about their opinions, motivations, and feelings regarding your product or brand. It also
introduces the potential for bias. Aim to craft questions that don’t lead them in one particular
direction.
One downside of interviewing and conducting focus groups is they can be time-consuming
and expensive. If you plan to conduct them yourself, it can be a lengthy process. To avoid
this, you can hire a market research facilitator to organize and conduct interviews on your
behalf.
4. Observation
Observing people interacting with your website or product can be useful for data collection
because of the candour it offers. If your user experience is confusing or difficult, you can
witness it in real-time.
Yet, setting up observation sessions can be difficult. You can use a third-party tool to record
users’ journeys through your site or observe a user’s interaction with a beta version of your
site or product.
While less accessible than other data collection methods, observations enable you to see first
hand how users interact with your product or site. You can leverage the qualitative and
quantitative data gleaned from this to make improvements and double down on points of
success.
5. Online Tracking
To gather behavioural data, you can implement pixels and cookies. These are both tools that
track users’ online behaviour across websites and provide insight into what content they’re
interested in and typically engage with.
You can also track users’ behavior on your company’s website, including which parts are of
the highest interest, whether users are confused when using it, and how long they spend on
product pages. This can enable you to improve the website’s design and help users navigate
to their destination.
Inserting a pixel is often free and relatively easy to set up. Implementing cookies may come
with a fee but could be worth it for the quality of data you’ll receive. Once pixels and cookies
are set, they gather data on their own and don’t need much maintenance, if any.
It’s important to note: Tracking online behavior can have legal and ethical privacy
implications. Before tracking users’ online behavior, ensure you’re in compliance with local
and industry data privacy standards.
6. Forms
Online forms are beneficial for gathering qualitative data about users, specifically
demographic data or contact information. They’re relatively inexpensive and simple to set up,
and you can use them to gate content or registrations, such as webinars and email newsletters.
You can then use this data to contact people who may be interested in your product, build out
demographic profiles of existing customers, and in remarketing efforts, such as email
workflows and content recommendations.
7. Social Media Monitoring
Monitoring your company’s social media channels for follower engagement is an accessible
way to track data about your audience’s interests and motivations. Many social media
platforms have analytics built in, but there are also third-party social platforms that give more
detailed, organized insights pulled from multiple channels.
You can use data collected from social media to determine which issues are most important to
your followers. For instance, you may notice that the number of engagements dramatically
increases when your company posts about its sustainability efforts.
5. Data Preparation
Data preparation, also sometimes called “pre-processing,” is the act of cleaning and
consolidating raw data prior to using it for business analysis. It might not be the most
celebrated of tasks, but careful data preparation is a key component of successful data
analysis.
Doing the work to properly validate, clean, and augment raw data is essential to draw
accurate, meaningful insights from it. The validity and power of any business analysis
produced is only as good as the data preparation done in the early stages.
To drive the deepest level of analysis and insight, successful teams and organizations must
implement a sound data preparation strategy.
With the right solution in hand, analysts and teams can streamline the data preparation
process, and instead, spend more time getting to valuable business insights and outcomes,
faster.
The data preparation process can vary depending on industry or need, but typically consists
of the following steps:
● Acquiring data: Determining what data is needed, gathering it, and establishing
consistent access to build powerful, trusted analysis
● Exploring data: Determining the data’s quality, examining its distribution, and
analyzing the relationship between each variable to better understand how to compose
an analysis
● Cleansing data: Improving data quality and overall productivity to craft error-proof
insights
● Transforming data: Formatting, orienting, aggregating, and enriching the datasets
used in an analysis to produce more meaningful insights
While data preparation processes build upon each other in a serialized fashion, it’s not always
linear. The order of these steps might shift depending on the data and questions being asked.
It’s common to revisit a previous step as new insights are uncovered or new data sources are
integrated into the process.
The entire data preparation process can be notoriously time-intensive, iterative, and
repetitive. That’s why it’s important to ensure the individual steps taken can be easily
understood, repeated, revisited, and revised so analysts can spend less time prepping and
more time analyzing.
5.2.1 Acquire Data
The first step in any data preparation process is acquiring the data that an analyst will use for
their analysis. It’s likely that analysts rely on others (like IT) to obtain data for their analysis,
likely from an enterprise software system or data management system. IT will usually deliver
this data in an accessible format like an Excel document or CSV.
Modern analytic software can remove the dependency on a data-wrangling middleman to tap
right into trusted sources like SQL, Oracle, SPSS, AWS, Snowflake, Salesforce, and
Marketo. This means analysts can acquire the critical data for their regularly-scheduled
reports as well as novel analytic projects on their own.
5.2.2 Explore Data
Examining and profiling data helps analysts understand how their analysis will begin to take
shape. Analysts can utilize visual analytics and summary statistics like range, mean, and
standard deviation to get an initial picture of their data. If data is too large to work with
easily, segmenting it can help.
During this phase, analysts should also evaluate the quality of their dataset. Is the data
complete? Are the patterns what was expected? If not, why? Analysts should discuss what
they’re seeing with the owners of the data, dig into any surprises or anomalies, and consider
if it’s even possible to improve the quality. While it can feel disappointing to disqualify a
dataset based on poor quality, it is a wise move in the long run. Poor quality is only amplified
as one moves through the data analytics process.
5.2.3 Cleanse Data
During the exploration phase, analysts may notice that their data is poorly structured and in
need of tidying up to improve its quality. This is where data cleansing comes into play.
Cleansing data includes correcting or removing inaccurate records, handling missing values,
removing duplicates, and standardizing formats.
Data comes in many shapes, sizes, and structures. Some of it is analysis-ready, while other
datasets may look like a foreign language.
Transforming data to ensure that it’s in a format or structure that can answer the questions
being asked of it is an essential step to creating meaningful outcomes. This will vary based on
the software or language that an analyst uses for their data analysis.
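As a small illustration of the cleanse-and-transform step, the sketch below standardizes an invented set of store records (the field names and cleaning rules are assumptions for illustration, not from the text):

```python
raw_rows = [
    {"store": " Halifax ", "province": "ns", "sales": "1,200"},
    {"store": "Toronto", "province": "ON", "sales": "950"},
    {"store": "Montreal", "province": "QC", "sales": None},  # missing value
]

def clean_row(row):
    """Trim whitespace, standardize province codes, and convert sales to a number."""
    sales = row["sales"]
    return {
        "store": row["store"].strip(),
        "province": row["province"].strip().upper(),
        "sales": float(sales.replace(",", "")) if sales else None,
    }

cleaned = [clean_row(r) for r in raw_rows]
print(cleaned[0])  # {'store': 'Halifax', 'province': 'NS', 'sales': 1200.0}
```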
In order to assess the plausibility of this hypothesis, the researcher will have to test it
using hypothesis testing methods. Unlike a hypothesis, which is ‘supposed’ to stand
true on the basis of little or no evidence, hypothesis testing requires plausible
evidence in order to establish that a statistical hypothesis is true.
It implies that the two variables are related to each other and the relationship that exists
between them is not due to chance or coincidence.
6.1.1 Alternative Hypothesis
When the process of hypothesis testing is carried out, the alternative hypothesis (H1) is the main
subject of the testing process. The analyst intends to test the alternative hypothesis and
verify its plausibility.
6.1.2 Null Hypothesis
The Null Hypothesis (H0) aims to nullify the alternative hypothesis by implying that there
exists no relation between the two variables. It states that the effect of one variable on
the other is solely due to chance and that no empirical cause lies behind it.
The null hypothesis is established alongside the alternative hypothesis and is recognized as
important as the latter. In hypothesis testing, the null hypothesis has a major role to play as it
influences the testing against the alternative hypothesis.
6.1.3 Non-Directional Hypothesis
The Non-directional hypothesis states that the relation between two variables has no
direction. Simply put, it asserts that there exists a relation between two variables, but does not
recognize the direction of effect, whether variable A affects variable B or vice versa.
6.1.4 Directional Hypothesis
The Directional hypothesis, on the other hand, asserts the direction of effect of the
relationship that exists between two variables.
Herein, the hypothesis clearly states that variable A affects variable B, or vice versa.
6.1.5 Statistical Hypothesis
A statistical hypothesis is a hypothesis that can be verified to be plausible on the basis of
statistics. By using data sampling and statistical knowledge, one can determine the
plausibility of a statistical hypothesis and find out if it stands true or not.
To establish these two hypotheses, one is required to study data samples, find a plausible
pattern among the samples, and pen down a statistical hypothesis that they wish to test.
A random population of samples can be drawn to begin hypothesis testing. Of the
two hypotheses, alternative and null, only one can be verified to be true. However, the presence
of both hypotheses is required to make the process successful.
At the end of the hypothesis testing procedure, either of the hypotheses will be rejected and
the other one will be supported. Even though one of the two hypotheses turns out to be true,
no hypothesis can ever be verified 100%.
Let us perform hypothesis testing through the following 7 steps of the procedure:
Assume a food laboratory analyzed a certified reference freeze-dried food material with a
stated sodium (Na) content of 250 mg/kg. It carried out 7 repeated analyses and obtained a
mean value of 274 mg/kg of sodium with a sample standard deviation of 21 mg/kg. Now we
want to know if the mean value of 274 mg/kg is significantly larger than the stated amount of
250 mg/kg. If so, we will conclude that the reported results of this batch of analysis were
biased and had consistently given higher values than expected.
The null hypothesis Ho is the statement that we are interested in testing. In this case, the null
condition is that the mean value is 250 mg/kg of sodium.
The alternative hypothesis H1 is the statement that we accept if our sample outcome leads
us to reject the null hypothesis. In our case, the alternative hypothesis is that the mean value
is not equal to 250 mg/kg of sodium. In other words, it can be significantly larger or smaller
than the value of 250 mg/kg.
So, our formal statement of the hypotheses for this example is as follows:
H0: μ = 250 mg/kg    H1: μ ≠ 250 mg/kg
The level of significance is the probability of rejecting the null hypothesis by chance alone.
This could happen from sub-sampling error, methodology, analyst’s technical competence,
instrument drift, etc. So, we have to decide on the level of significance to reject the null
hypothesis if the sample result was unlikely given the null hypothesis was true.
Traditionally, we define the unlikely (given by the symbol α) as 0.05 (5%) or less. However,
there is nothing to stop you from using α = 0.1 (10%) or α = 0.01 (1%) with your own
justification or reasoning.
In fact, the significance level sometimes is referred to as the probability of a Type I error. A
Type I error occurs when you falsely reject the null hypothesis on the basis of the
above-mentioned errors. A Type II error occurs when you fail to reject the null hypothesis
when it is false.
The test statistic is the value calculated from the sample to determine whether to reject the
null hypothesis. In this case, we use Student’s t-test statistic in the following manner:
μ = x̄ ± t(α = 0.05, v = n − 1) · s/√n
or t(α = 0.05, v = n − 1) = |x̄ − μ| · √n / s
By calculation, we get a t-value of 3.024 at the significance level of α = 0.05 and v = (7 − 1)
or 6 degrees of freedom for n = 7 replicates.
Reject Ho if …..
We reject the null hypothesis if the test statistic is larger than a critical value corresponding to
the significance level in step 2.
There is now a question in H1 on either one-tailed (> or <) or two-tailed (≠, not equal) tests to
be addressed. If we are talking about either “greater than” or “smaller than”, we take the
significance level at α = 0.05, whilst for the unequal case (meaning the result can be either
larger or smaller than the certified value), a significance level of α = 0.025 on either side
of the normal curve is to be studied.
As our H1 is for the mean value to be larger or smaller than the certified value, we use the
2-tailed t-test for α = 0.05 with 6 degrees of freedom. In this case, the t-critical value at α =
0.05 and 6 degrees of freedom is 2.447 from the Student’s t-table, or from using the Excel
function “=T.INV.2T(0.05,6)” or “=TINV(0.05,6)” in older Excel versions.
Upon calculation on the sample data, we have got a t-value of 3.024 at the significance level
of α = 0.05 and v = (7 − 1) or 6 degrees of freedom for n = 7 replicates.
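The calculation above can be reproduced directly from the t-statistic formula (a sketch; the critical value 2.447 is taken from the text rather than computed):

```python
import math

x_bar = 274.0   # sample mean, mg/kg
mu = 250.0      # certified value, mg/kg
s = 21.0        # sample standard deviation, mg/kg
n = 7           # number of replicates

# t = |x̄ − μ| · √n / s
t = abs(x_bar - mu) * math.sqrt(n) / s
print(round(t, 3))  # 3.024

t_critical = 2.447  # two-tailed, alpha = 0.05, 6 degrees of freedom (from the t-table)
print(t > t_critical)  # True, so the null hypothesis is rejected
```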
When we compare the result of step 5 to the decision rule in step 4, it is obvious that 3.024 is
greater than the t-critical value of 2.447, and so we reject the null hypothesis. In other words,
the mean value of 274 mg/kg is significantly different from the certified value of 250 mg/kg.
Is it really so? We must go to step 7.
Since hypothesis testing involves some kind of probability under the disguise of significance
level, we must interpret the final decision with caution. To say that a result is “statistically
significant” sounds remarkable, but all it really means is that it is more than by chance alone.
To do justice, it would be useful to look at the actual data to see if there are one or more high
outliers pulling up the mean value. Perhaps increasing the number of replicates might show
up any undesirable data. Furthermore, we might have to take a closer look at the test
procedure and the technical competence of the analyst to see if there were any lapses in the
analytical process. A repeated series of experiments should be able to confirm these findings.
7.1 Decision Models
Decision models generally take three types of input:
1. Data, which are assumed to be constant for purposes of the model. Some examples would
be costs, machine capacities, and intercity distances.
2. Uncontrollable variables, which are quantities that can change but cannot be directly
controlled by the decision maker. Some examples would be customer demand, inflation rates,
and investment returns. Often, these variables are uncertain.
3. Decision variables, which are controllable and can be selected at the discretion of the
decision maker. Some examples would be production quantities, staffing levels, and
investment allocations. Decision models characterize the relationships among the data,
uncontrollable variables, and decision variables, and the outputs of interest to the decision
maker.
Decision models can be represented in various ways, most typically with mathematical
functions and spreadsheets. Spreadsheets are ideal vehicles for implementing decision models
because of their versatility in managing data, evaluating different scenarios, and presenting
results in a meaningful fashion. Using these relationships, we may develop a mathematical
representation by defining symbols for each of these quantities:
TC = total cost
F = fixed cost
V = unit (variable) cost
Q = quantity produced
These combine into the simple decision model TC = F + VQ.
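This decision model can be sketched as a one-line function, assuming the standard total-cost form TC = F + VQ with V the unit variable cost (the example figures are invented):

```python
def total_cost(fixed_cost, unit_variable_cost, quantity):
    """Total cost decision model: TC = F + V * Q."""
    return fixed_cost + unit_variable_cost * quantity

# Example: $50,000 fixed cost, $125 per unit, 1,000 units produced
print(total_cost(50_000, 125, 1_000))  # 175000
```

Such a function is exactly what a spreadsheet cell formula encodes, which is why spreadsheets are natural vehicles for decision models.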
All models are based on assumptions that reflect the modeler’s view of the “real world.”
Some assumptions are made to simplify the model and make it more tractable; that is, able to
be easily analyzed or solved. Other assumptions might be made to better characterize
historical data or past observations. The task of the modeler is to select or build an
appropriate model that best represents the behavior of the real situation. For example,
economic theory tells us that demand for a product is negatively related to its price. Thus, as
prices increase, demand falls, and vice versa (a phenomenon that you may recognize as price
elasticity—the ratio of the percentage change in demand to the percentage change in price).
Different mathematical models can describe this phenomenon.
7.2 Prescriptive Decision Models
A prescriptive decision model helps decision makers to identify the best solution to a decision
problem. Optimization is the process of finding a set of values for decision variables that
minimize or maximize some quantity of interest—profit, revenue, cost, time, and so
on—called the objective function. Any set of decision variables that optimizes the objective
function is called an optimal solution. In a highly competitive world where one percentage
point can mean a difference of hundreds of thousands of dollars or more, knowing the best
solution can mean the difference between success and failure.
As we all know, the future is always uncertain. Thus, many predictive models incorporate
uncertainty and help decision makers analyze the risks associated with their decisions.
Uncertainty is imperfect knowledge of what will happen; risk is associated with the
consequences and likelihood of what might happen.
For example, the change in the stock price of Apple on the next day of trading is uncertain.
However, if you own Apple stock, then you face the risk of losing money if the stock price
falls. If you don’t own any stock, the price is still uncertain although you would not have any
risk. Risk is evaluated by the magnitude of the consequences and the likelihood that they
would occur. For example, a 10% drop in the stock price would incur a higher risk if you own
$1 million than if you only owned $1,000. Similarly, if the chances of a 10% drop were 1 in
5, the risk would be higher than if the chances were only 1 in 100. The importance of risk in
business has long been recognized.
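The stock example can be reduced to a toy expected-loss calculation (a sketch only; real risk measures are richer than probability times magnitude):

```python
def expected_loss(holding, drop_fraction, probability):
    """Risk sketch: likelihood of the event times the magnitude of its consequence."""
    return probability * holding * drop_fraction

# A 10% drop with a 1-in-5 chance, on $1 million vs $1,000 holdings
print(expected_loss(1_000_000, 0.10, 0.2))  # roughly 20000: much higher risk
print(expected_loss(1_000, 0.10, 0.2))      # roughly 20: same event, far lower risk
print(expected_loss(1_000_000, 0.10, 0.01)) # lower likelihood, lower risk
```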
Model Validation
Model validation is defined within regulatory guidance as “the set of processes and activities
intended to verify that models are performing as expected, in line with their design
objectives, and business uses.” It also identifies “potential limitations and assumptions, and
assesses their possible impact.”
In statistics, model validation is the task of confirming that the outputs of a statistical model
are acceptable with respect to the real data-generating process. In other words, model
validation is the task of confirming that the outputs of a statistical model have enough fidelity
to the outputs of the data-generating process that the objectives of the investigation can be
achieved.
1. Conceptual Design
The foundation of any model validation is its conceptual design, which requires a documented
coverage assessment supporting the model’s ability to meet business and regulatory needs
and to address the unique risks facing a bank.
The design and capabilities of a model can have a profound effect on the overall effectiveness
of a bank’s ability to identify and respond to risks. For example, a poorly designed risk
assessment model may result in a bank establishing relationships with clients that present a
risk that is greater than its risk appetite, thus exposing the bank to regulatory scrutiny and
reputation damage.
A validation should independently challenge the underlying conceptual design and ensure
that documentation is appropriate to support the model’s logic and the model’s ability to
achieve desired regulatory and business outcomes for which it is designed.
2. System Validation
All technology and automated systems implemented to support models have limitations. An
effective validation includes: firstly, evaluating the processes used to integrate the model’s
conceptual design and functionality into the organisation’s business setting; and, secondly,
examining the processes implemented to execute the model’s overall design. Where gaps or
limitations are observed, controls should be evaluated to enable the model to function
effectively.
3. Data Validation
Data errors or irregularities impair results and might lead to an organisation’s failure to
identify and respond to risks. Best practice indicates that institutions should apply a
risk-based data validation, which enables the reviewer to consider risks unique to the
organisation and the model.
To establish a robust framework for data validation, guidance indicates that the accuracy of
source data be assessed. This is a vital step because data can be derived from a variety of
sources, some of which might lack controls on data integrity, so the data might be incomplete
or inaccurate.
4. Process Validation
To verify that a model is operating effectively, it is important to prove that the established
processes for the model’s ongoing administration, including governance policies and
procedures, support the model’s sustainability. A review of the processes also determines
whether the models are producing output that is accurate, managed effectively, and subject to
the appropriate controls.
If done effectively, model validation will enable your bank to have every confidence in its
various models’ accuracy, as well as aligning them with the bank’s business and regulatory
expectations. By failing to validate models, banks increase the risk of regulatory criticism,
fines, and penalties.
The following example is an introduction to data validation in Excel. The data validation
button under the data tab provides the user with different types of data validation checks
based on the data type in the cell. It also allows the user to define custom validation checks
using Excel formulas. The data validation can be found in the Data Tools section of the Data
tab in the ribbon of Excel:
The example below illustrates a case of data entry, where the province must be entered for
every store location. Since stores are only located in certain provinces, any incorrect entry
should be caught.
It is accomplished in Excel using a two-fold data validation. First, the relevant provinces are
incorporated into a drop-down menu that allows the user to select from a list of valid
provinces.
Second, if the user inputs a wrong province by mistake, such as “NY” instead of “NS,” the
system warns the user of the incorrect input.
Fig. 3: Second level of data validation
Further, if the user ignores the warning, an analysis can be conducted using the data
validation feature in Excel that identifies incorrect inputs.
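The same two-fold check can be expressed outside Excel as a simple membership test (the province list here is illustrative, not complete):

```python
VALID_PROVINCES = {"NS", "NB", "PE", "NL", "ON", "QC"}  # illustrative list of valid entries

def validate_entries(entries):
    """Return (position, value) pairs for entries that fail the province check."""
    return [(i, value) for i, value in enumerate(entries)
            if value not in VALID_PROVINCES]

entries = ["NS", "ON", "NY", "QC"]   # "NY" is an invalid input, as in the example
print(validate_entries(entries))     # [(2, 'NY')]
```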
Model Evaluation is an integral part of the model development process. It helps to find the
best model that represents our data and how well the chosen model will work in the future.
Evaluating model performance with the data used for training is not acceptable in data
science because it can easily generate overoptimistic and overfitted models. There are two
methods of evaluating models in data science, Hold-Out and Cross-Validation. To avoid
overfitting, both methods use a test set (not seen by the model) to evaluate model
performance.
● Hold-Out: In this method, the (usually large) dataset is randomly divided into three subsets:
1. Training set: a subset of the dataset used to fit the model.
2. Validation set: a subset used to tune parameters and select among candidate models.
3. Test set (unseen examples): a subset of the dataset used to assess the likely future
performance of a model. If a model fits the training set much better than it fits the test
set, overfitting is probably the cause.
Depending on the task, evaluation metrics fall into two groups:
● Classification Evaluation
● Regression Evaluation
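The Hold-Out method can be sketched as a simple random partition (the 70/30 split and the seed are arbitrary choices for illustration):

```python
import random

def holdout_split(rows, test_fraction=0.3, seed=42):
    """Randomly partition rows into train and test subsets (hold-out method)."""
    rng = random.Random(seed)
    shuffled = rows[:]           # copy so the original order is preserved
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

rows = list(range(100))
train, test = holdout_split(rows)
print(len(train), len(test))  # 70 30
```

The model is then fit on the training subset only, so its performance on the held-out test subset estimates future performance on unseen data.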
Interpretation:
Data interpretation is the process of reviewing data and drawing meaningful conclusions
using a variety of analytical approaches. Data interpretation aids researchers in categorizing,
manipulating, and summarising data in order to make sound business decisions. The end goal
of a data interpretation project is often to develop a better marketing strategy or to expand the
client user base.
To evaluate qualitative data, also known as categorical data, the qualitative data interpretation
approach is utilized. Words, instead of numbers or patterns, are used to describe data in this
technique. Unlike quantitative data, which can be studied immediately after collecting and
sorting it, qualitative data must first be converted into numbers before being analyzed. This is
due to the fact that analyzing texts in their original condition is frequently time-consuming
and results in a high number of mistakes. The analyst’s coding should also be defined so that
it may be reused and evaluated by others.
Observations: a description of the behavioral patterns seen in a group of people. The length
of time spent on an activity, the sort of activity, and the form of communication used might
all be examples of these patterns.
Focus groups: To develop a collaborative discussion about a study issue, people are grouped
together and asked pertinent questions.
Interviews are one of the most effective ways to get narrative data. Themes, topics, and
categories can be used to group inquiry replies. The interview method enables extremely
targeted data segmentation.
● Transcripts of interviews
● Questionnaires with open-ended answers
● Transcripts from call centers
● Documents and texts
● Audio and video recordings
● Notes from the field
Now the second step is to interpret the data that is produced. This is done by the following
methods:
Content Analysis
This is a popular method for analyzing qualitative data. Other approaches to analysis may fall
under the general category of content analysis. An aspect of the content analysis is thematic
analysis. By classifying material into words, concepts, and themes, content analysis is used to
uncover patterns that arise from the text.
Narrative Analysis
The focus of narrative analysis is on people’s experiences and the language they use to make
sense of them. It’s especially effective for acquiring a thorough insight into customers’
viewpoints on a certain topic. We might be able to describe the results of a targeted case
study using narrative analysis.
Discourse Analysis
Discourse analysis examines how language is used in real-life social contexts. To get the most
out of the analytical process, it’s critical to be very clear on the type and scope of the study
topic. This will assist you in determining which research collection routes are most likely to
help answer your query.
Your approach to qualitative data analysis will differ depending on whether you are a
corporation attempting to understand consumer sentiment or an academic surveying a school.
Quantitative data, often known as numerical data, is analyzed using the quantitative data
interpretation approach. Because this data type contains numbers, it is examined using
numbers rather than words. Quantitative analysis is a collection of procedures for analyzing
numerical data. It frequently requires the application of statistical modeling techniques such
as standard deviation, mean, and median. Let’s try and understand these;
Median: The median is the middle value in a list of numbers that have been sorted ascending
or descending, and it might be more descriptive of the data set than the average.
Mean: The basic mathematical average of two or more values is called a mean. The
arithmetic mean method, which uses the sum of the values in the series, and the
geometric mean method, which uses the nth root of the product of the values, are two ways to
determine the mean for a given collection of numbers.
Standard deviation: The positive square root of the variance is the standard deviation. One
of the most fundamental approaches to statistical analysis is the standard deviation. A low
standard deviation indicates that the values are near to the mean, whereas a large standard
deviation indicates that the values are significantly different from the mean.
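These three measures can be computed directly with Python's statistics module (the sample values are invented for illustration):

```python
import statistics

values = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(values))    # 5
print(statistics.median(values))  # 4.5
print(statistics.pstdev(values))  # 2.0 (population standard deviation)
```

Here the mean and median differ, which already hints that the data is skewed toward the larger values.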
● For starters, it’s used to compare and contrast groupings. For instance, consider the
popularity of certain car brands with different colors.
● It’s also used to evaluate relationships between variables.
● Third, it’s used to put scientifically sound theories to the test. Consider a
hypothesis concerning the effect of a certain vaccination.
Regression analysis
Cohort Analysis
Cohort analysis is a technique for determining how engaged users are over time. It’s useful for
determining whether user engagement is genuinely improving over time or only appears to
improve because of growth. Cohort analysis is useful because it helps to distinguish between
growth and engagement measures. Cohort analysis involves watching how individuals’ behavior
develops over time within groups of people.
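A toy sketch of the cohort computation described above, with invented (user, signup month, active month) events:

```python
from collections import defaultdict

# Invented activity events for illustration
events = [
    ("a", "Jan", "Jan"), ("a", "Jan", "Feb"),
    ("b", "Jan", "Jan"),
    ("c", "Feb", "Feb"), ("c", "Feb", "Mar"),
    ("d", "Feb", "Feb"),
]

cohorts = defaultdict(set)   # signup month -> users in that cohort
active = defaultdict(set)    # (signup month, active month) -> users active then
for user, signup, month in events:
    cohorts[signup].add(user)
    active[(signup, month)].add(user)

# Retention of the January cohort in February: 1 of its 2 users is still active
jan_retention = len(active[("Jan", "Feb")]) / len(cohorts["Jan"])
print(jan_retention)  # 0.5
```

Repeating this per cohort and per period yields the familiar retention matrix, which separates engagement from raw growth.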
Predictive Analysis
By examining historical and present data, the predictive analytic approach seeks to forecast
future trends. Predictive analytics approaches, which are powered by machine learning and
deep learning, allow firms to notice patterns or possible challenges ahead of time and prepare
educated initiatives. Predictive analytics is being used by businesses to address issues and
identify new possibilities.
Prescriptive Analysis
Conjoint Analysis
Conjoint analysis is the best market research method for determining how much customers
appreciate a product’s or service’s qualities. This widely utilized method mixes real-life
scenarios and statistical tools with market decision models.
Cluster analysis
Any organization that wants to identify distinct groupings of consumers, sales transactions, or
other sorts of behaviors and items may use cluster analysis as a valuable data-mining
technique.
The goal of cluster analysis is to uncover groupings of subjects that are similar, where
“similarity” between each pair of subjects refers to a global assessment of the entire
collection of features. Cluster analysis, similar to factor analysis, deals with data matrices in
which the variables haven’t been partitioned into criteria and predictor subsets previously.
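A bare-bones illustration of the clustering idea, using a tiny hand-rolled one-dimensional k-means (a sketch only; real cluster analysis assesses similarity over the entire collection of features, not a single value):

```python
def kmeans_1d(points, k=2, iterations=10):
    """Assign each point to its nearest centre, then re-average the centres."""
    # Crude initial centres: spread across the sorted values
    centres = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iterations):
        clusters = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) for c in clusters if c]
    return sorted(centres)

# Two obvious groupings of invented transaction amounts
amounts = [1, 2, 2, 3, 40, 41, 42]
print(kmeans_1d(amounts))  # [2.0, 41.0]
```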
Iterative processes are a fundamental part of lean methodologies and Agile project
management—but these processes can be implemented by any team, not just Agile ones.
During the iterative process, you will continually improve your design, product, or project
until you and your team are satisfied with the final project deliverable.
Pros:
● Increased efficiency. Because the iterative process embraces trial and error, it can often help
you achieve your desired result faster than a non-iterative process.
● Increased collaboration. Instead of working from predetermined plans and specs (which also
take a lot of time to create), your team is actively working together.
● Increased adaptability. As you learn new things during the implementation and testing phases,
you can tweak your iteration to best hit your goals—even if that means doing something you
didn’t expect to be doing at the start of the iterative process.
● More cost effective. If you need to change the scope of the project, you’ll only have invested
the minimum time and effort into the process.
● Ability to work in parallel. Unlike other, non-iterative methodologies like the waterfall
method, iterations aren’t necessarily dependent on the work that comes before them. Team
members can work on several elements of the project in parallel, which can shorten your
overall timeline.
● Reduced project-level risk. In the iterative process, risks are identified and addressed during
each iteration. Instead of solving for large risks at the beginning and end of the project,
you’re consistently working to resolve low-level risks.
● More reliable user feedback. When you have an iteration that users can interact with or see,
they’re able to give you incremental feedback about what works or doesn’t work for them.
Cons:
● Increased risk of scope creep. Because of the trial-and-error nature of the iterative process,
your project could develop in ways you didn’t expect and exceed your original project scope.
● Inflexible planning and requirements. The first step of the iterative process is to define your
project requirements. Changing these requirements during the iterative process can break the
flow of your work, and cause you to create iterations that don’t serve your project’s purpose.
● Vague timelines. Because team members will create, test, and revise iterations until they get
to a satisfying solution, the iterative timeline isn’t clearly defined. Additionally, testing for
different increments can vary in length, which also impacts the overall iterative process
timeline.
UNIT II BUSINESS INTELLIGENCE
Data Warehousing (DW) is a process for collecting and managing data from varied
sources to provide meaningful business insights. A Data warehouse is typically used to
connect and analyze business data from heterogeneous sources. The data warehouse is
the core of the BI system which is built for data analysis and reporting. It is a blend of
technologies and components which aids the strategic use of data. It is electronic storage
of a large amount of information by a business which is designed for query and analysis
instead of transaction processing. It is a process of transforming data into information
and making it available to users in a timely manner to make a difference.
A Data Warehouse works as a central repository where information arrives from one or
more data sources. Data flows into a data warehouse from the transactional system and
other relational databases.
The incoming data may be:
1. Structured
2. Semi-structured
3. Unstructured data
The data is processed, transformed, and ingested so that users can access the processed
data in the Data Warehouse through Business Intelligence tools, SQL clients, and
spreadsheets. A data warehouse merges information coming from different sources into
one comprehensive database.
By merging all of this information in one place, an organization can analyze its customers
more holistically. This helps to ensure that it has considered all the information available.
Data warehousing makes data mining possible. Data mining is looking for patterns in the
data that may lead to higher sales and profits.
2. Operational Data Store:
An Operational Data Store, also called an ODS, is a data store required when neither a Data
warehouse nor OLTP systems support an organization's reporting needs. In an ODS, the data
is refreshed in near real time. Hence, it is widely preferred for routine activities like storing
records of employees.
3. Data Mart:
A data mart is a subset of the data warehouse. It is specially designed for a particular line of
business, such as sales or finance. In an independent data mart, data is collected directly
from sources.
Load manager: The load manager is also called the front component. It performs all the
operations associated with the extraction and loading of data into the warehouse. These
operations include transformations to prepare the data for entry into the Data
warehouse.
Query Manager: The query manager is also known as the backend component. It performs all
the operations related to the management of user queries. This Data warehouse component
directs queries to the appropriate tables and schedules the execution of queries.
These are categorized into five different groups:
1. Data Reporting tools
2. Query tools
3. Application development tools
4. EIS tools
5. OLAP and data mining tools
Airline:
In the Airline system, it is used for operational purposes like crew assignment, analysis of
route profitability, frequent flyer program promotions, etc.
Banking:
It is widely used in the banking sector to manage the resources available on desk
effectively. A few banks also use it for market research and performance analysis of
products and operations.
Healthcare:
The healthcare sector also uses the Data warehouse to strategize and predict outcomes,
generate patients' treatment reports, share data with tie-in insurance companies, medical aid
services, etc.
Public sector:
In the public sector, the data warehouse is used for intelligence gathering. It helps
government agencies to maintain and analyze tax records and health policy records for every
individual.
Investment and Insurance sector:
In this sector, the warehouses are primarily used to analyze data patterns, customer
trends, and to track market movements.
Retail chain:
In retail chains, the Data warehouse is widely used for distribution and marketing. It also
helps to track items, customer buying patterns, and promotions, and is used for determining
pricing policy.
Telecommunication:
A data warehouse is used in this sector for product promotions, sales decisions and to
make distribution decisions.
Hospitality Industry:
This industry utilizes warehouse services to design and estimate advertising and promotion
campaigns targeting clients based on their feedback and travel patterns.
The best way to address the business risk associated with a Data warehouse
implementation is to employ a three-pronged strategy as below.
Here are key steps in Data warehouse implementation along with their deliverables, for
example:
Step 7: Map the Operational Data Store to the Data Warehouse (deliverable: D/W Data Integration Map)
Step 9: Extract Data from the Operational Data Store (deliverable: Integrated D/W Data Extracts)
Decide a plan to test the consistency, accuracy, and integrity of the data.
The data warehouse must be well integrated, well defined and time stamped.
While designing the Data warehouse, make sure you use the right tool, stick to the life cycle,
take care of data conflicts, and be ready to learn from your mistakes.
Never replace operational systems and reports
Don't spend too much time on extracting, cleaning and loading data.
Ensure to involve all stakeholders including business personnel in Datawarehouse
implementation process. Establish that Data warehousing is a joint/ team project.
You don't want to create Data warehouse that is not useful to the end users.
Prepare a training plan for the end users.
The Data warehouse allows business users to quickly access critical data from multiple
sources all in one place.
Data warehouse provides consistent information on various cross-functional
activities. It is also supporting ad-hoc reporting and query.
Data Warehouse helps to integrate many sources of data to reduce stress on the
production system.
Data warehouse helps to reduce total turnaround time for analysis and reporting.
Restructuring and Integration make it easier for the user to use for reporting and
analysis.
Data warehouse allows users to access critical data from the number of sources in
a single place. Therefore, it saves user's time of retrieving data from multiple
sources.
Data warehouse stores a large amount of historical data. This helps users to
analyze different time periods and trends to make future predictions.
Many Data Warehousing tools are available in the market. Here are some of the most
prominent ones:
1. MarkLogic:
MarkLogic is a useful data warehousing solution that makes data integration easier and
faster using an array of enterprise features. This tool helps to perform very complex
search operations. It can query different types of data like documents, relationships, and
metadata.
2. Oracle:
Oracle is the industry-leading database. It offers a wide range of data warehouse
solutions both on-premises and in the cloud. It helps to optimize customer
experiences by increasing operational efficiency.
3. Amazon RedShift:
Amazon Redshift is a data warehouse tool. It is a simple and cost-effective tool to analyze
all types of data using standard SQL and existing BI tools. It also allows running complex
queries against petabytes of structured data, using the technique of query optimization.
Data Warehouse vs. Data Mart:
1. Definition: A Data Warehouse is a large repository of data collected from different
organizations or departments within a corporation, whereas a data mart is a subtype of a
Data Warehouse designed to meet the needs of a certain user group.
2. Usage: A Data Warehouse helps to take strategic decisions, whereas a data mart helps
to take tactical decisions for the business.
3. Dimensional modeling: A Data Warehouse may or may not use a dimensional model,
though it can feed dimensional models, whereas a data mart is built around a dimensional
model using a star schema.
4. Data type: In a Data Warehouse, time variance and non-volatile design are strictly
enforced, whereas a data mart mostly includes consolidated data structures to meet the
subject area's query and reporting needs.
Concept of KM:
(ii) Knowledge is of two types – explicit and implicit. Explicit knowledge is visible
information available in literature, reports, patents, technical specifications,
communication with customers, suppliers, competitors etc. It can be embedded in rules,
systems, policies and procedures etc. of the organisation.
(iii) KM is a continuous process; as the world economy is dynamic and full of challenges.
It requires constant creation of new skills and capabilities and improvement of existing
ones.
(iv) KM requires whole-hearted support of top management, to provide cultural and
technical foundation for the origination and implementation of KM practices.
KM is not an outgrowth of IT. Rather, KM requires human skills, creativity, and the
innovative capabilities of people, which are the base of KM. In fact, there are tools of IT,
like Intranets, Lotus Notes, MS-Exchange, etc., which provide an infrastructure for the free
play of human creativity and innovative powers in the formulation of corporate
strategy in a competitive globalized environment.
The above ideas are illustrated with the help of the following diagram:
The first step in KM is an identification of what type of knowledge is required for the
successful designing and implementation of corporate strategy.
The management must identify the knowledge assets of the organisation, which
basically are knowledge about competitors, suppliers, governmental agencies, products
and processes, technology, etc. Management must plan to get maximum returns out of
these knowledge assets.
(a) Acquisition of knowledge through knowledge assets, e.g. knowledge about new
products (from competitors), new technologies, and social, economic, and political
changes. It also requires the transformation of raw information into knowledge useful
for solving business problems.
(i) Sharpening of Competitive Edge:
KM enables a corporation to build and sharpen its competitive edge for survival and
growth in the competitive globalized economy. In fact, KM aided by IT tools enables a
corporation to design and implement the most appropriate corporate strategies.
(ii) Betterment of Human Relations:
KM is basically built on the knowledge generated, shared, and utilized through a learning
organisation. There is no doubt that a learning organisation provides the foundation on
which the building of KM can be built. A learning organisation, through facilitating
interaction among the people of the organisation, leads to the betterment of human
relations, which is a very big permanent asset for an organisation to possess.
KM, its concept and practices, motivates people to enhance their intellectual capabilities,
resulting in new skills, improvement of existing skills, etc. Thus KM not only enhances
the intellectual capabilities of people but also indirectly prevents depreciation of human
capital.
Initiation and practices of KM help an enterprise enhance its goodwill in the global
market; enabling it to acquire more success and prosperity.
The characteristics of decisions faced by managers at different levels are quite different.
Decisions can be classified as structured, semi structured, and unstructured.
Unstructured decisions are those in which the decision maker must provide judgment,
evaluation, and insights into the problem definition. Each of these decisions is novel,
important, and nonroutine, and there is no well-understood or agreed-on procedure for
making them.
Structured decisions, by contrast, are repetitive and routine, and decision makers
can follow a definite procedure for handling them to be efficient. Many decisions have
elements of both and are considered semi structured decisions, in which only part of the
problem has a clear-cut answer provided by an accepted procedure. In general,
structured decisions are made more prevalently at lower organizational levels, whereas
unstructured decision making is more common at higher levels of the firm.
Decision-support systems (DSS) are targeted systems that combine analytical models
with operational data and supportive interactive queries and analysis for middle
managers who face semistructured decision situations.
Executive support systems (ESS) are specialized systems that provide senior
management, making primarily unstructured decisions, with a broad array of both
external information (news, stock analyses, industry trends) and high-level summaries
of firm performance. The purpose of ESS is to help C-level managers focus on the
information that really affects the overall profitability and success of the firm. The leading
methodology for understanding the really important information needed by the firm's
executives is called the Balanced Scorecard Method, a framework for operationalizing
the firm's strategic plan by focusing on measurable outcomes on four dimensions of firm
performance: financial, business process, customer, and learning and growth. Performance
on each dimension is measured using KPIs.
Group decision-support systems (GDSS) are specialized systems that provide a group
electronic environment in which managers and teams can collectively make decisions
and design solutions for unstructured and semistructured problems. GDSS-guided
meetings take place in conference rooms with special software and hardware tools to
facilitate group decision making. GDSS makes it possible to increase the meeting size
and increase productivity, because individuals contribute simultaneously rather than
one at a time.
STAGES IN THE DECISION-MAKING PROCESS
Making decisions consists of several different activities. Simon (1960) describes four
different stages in decision making: intelligence, design, choice, and implementation
The decision-making process can be described in four steps that follow one another in a
logical order. In reality, decision makers frequently circle back to reconsider the previous
stages and through a process of iteration eventually arrive at a solution that is workable.
Choice consists of choosing among solution alternatives. Here, DSS with access to
extensive firm data can help managers choose the optimal solution. Group
decision-support systems can also be used to bring groups of managers together in an
electronic online environment to discuss different solutions and make a choice.
In the real world, the stages of decision making described here do not necessarily
follow a linear path. You can be in the process of implementing a decision, only to
discover that your solution is not working. In such cases, you will be forced to repeat the
design, choice, or perhaps even the intelligence stage.
For instance, in the face of declining sales, a sales management team may strongly
support a new sales incentive system to spur the sales force on to greater effort. If
paying the sales force a higher commission for making more sales does not produce
sales increases, managers would need to investigate whether the problem stems from
poor product design, inadequate customer support, or a host of other causes, none of
which would be “solved” by a new incentive system.
Systems supporting management decision making originated in the early 1960s as early
MIS that created fixed, inflexible paper-based reports and distributed them to managers
on a routine schedule. In the 1970s, the first DSS emerged as standalone applications
with limited data and a few analytic models. ESS emerged during the 1980s to give
senior managers an overview of corporate operations. Early ESS were expensive, based
on custom technology, and suffered from limited data and flexibility.
The rise of client/server computing, the Internet, and Web technologies has made
a major impact on systems that support decision making. Many decision-support
applications are now delivered over corporate intranets. We see six major trends:
4. Business Intelligence
Business intelligence combines business analytics, data mining, data visualization, data
tools and infrastructure, and best practices to help organizations make more data-driven
decisions. In practice, you know you’ve got modern business intelligence when you have
a comprehensive view of your organization’s data and use that data to drive change,
eliminate inefficiencies, and quickly adapt to market or supply changes. Modern BI
solutions prioritize flexible self-service analysis, governed data on trusted platforms,
empowered business users, and speed to insight
BI tools perform data analysis and create reports, summaries, dashboards, maps, graphs,
and charts to provide users with detailed intelligence about the nature of the business.
Step 2) The data is cleaned and transformed into the data warehouse. Tables can be
linked, and data cubes are formed.
Step 3) Using the BI system, the user can ask queries, request ad-hoc reports, or conduct
any other analysis.
Example 1:
In an Online Transaction Processing (OLTP) system, the information that could be fed into
the product database could be
Correspondingly, in a Business Intelligence system, a query that would be executed for the
product subject area could be: did the addition of a new product line or a change in
product price increase revenues?
Correspondingly, in the BI system, a query that could be executed would be: how many
new clients were added due to the change in radio budget?
In an OLTP system dealing with customer demographic databases, the data that could be
fed would be
Correspondingly, in the OLAP system, a query that could be executed would be: can
customer profile changes support a higher product price?
Example 2:
It also collects statistics on market share and data from customer surveys from each hotel
to decide its competitive position in various markets.
Analyzing these trends year by year, month by month, and day by day helps
management to offer discounts on room rentals.
Example 3:
The use of BI tools frees information technology staff from the task of generating
analytical reports for the departments. It also gives department personnel access to a
richer data source.
The data analyst is a statistician who always needs to drill deep down into data. BI system
helps them to get fresh insights to develop unique business strategies.
2. The IT users:
CEO or CXO can increase the profit of their business by improving operational efficiency
in their business.
Business intelligence users can be found from across the organization. There are mainly
two types of business users
1. Boost productivity
With a BI program, it is possible for businesses to create reports with a single click, thus
saving lots of time and resources. It also allows employees to be more productive on their
tasks.
2. To improve visibility
BI also helps to improve the visibility of these processes and make it possible to identify
any areas which need attention.
3. Fix Accountability
BI system also helps organizations as decision makers get an overall bird’s eye view
through typical BI features like dashboards and scorecards.
BI takes out all complexity associated with business processes. It also automates analytics
by offering predictive analysis, computer modeling, benchmarking and other
methodologies.
BI software has democratized its usage, allowing even nontechnical or non-analyst users
to collect and process data quickly. This also puts the power of analytics into the hands of
many people.
4.6 BI System Disadvantages
1. Cost:
Business intelligence can prove costly for small as well as for medium-sized enterprises.
The use of such type of system may be expensive for routine business transactions.
2. Complexity:
3. Limited use
Like all improved technologies, BI was first established keeping in consideration the
buying competence of rich firms. Therefore, BI system is yet not affordable for many small
and medium size companies.
4. Time-consuming implementation
It takes almost one and a half years for a data warehousing system to be completely
implemented. Therefore, it is a time-consuming process.
[Link] is OLAP?
Most business data have multiple dimensions—multiple categories into which the data
are broken down for presentation, tracking, or analysis. For example, sales figures might
have several dimensions related to location (region, country, state/province, store), time
(year, month, week, day), product (clothing, men/women/children, brand, type), and
more.
But in a data warehouse, data sets are stored in tables, each of which can organize data
into just two of these dimensions at a time. OLAP extracts data from multiple relational
data sets and reorganizes it into a multidimensional format that enables very fast
processing and very insightful analysis.
5.1 What is an OLAP cube?
The core of most OLAP systems, the OLAP cube is an array-based multidimensional
database that makes it possible to process and analyze multiple data dimensions much
more quickly and efficiently than a traditional relational database.
SQL and relational database reporting tools can certainly query, report on, and analyze
multidimensional data stored in tables, but performance slows down as the data volumes
increase. And it requires a lot of work to reorganize the results to focus on different
dimensions.
This is where the OLAP cube comes in. The OLAP cube extends the single table with
additional layers, each adding additional dimensions—usually the next level in the
“concept hierarchy” of the dimension. For example, the top layer of the cube might
organize sales by region; additional layers could be country, state/province, city and even
specific store.
In theory, a cube can contain an infinite number of layers. (An OLAP cube representing
more than three dimensions is sometimes called a hypercube.) And smaller cubes can
exist within layers—for example, each store layer could contain cubes arranging sales by
salesperson and product. In practice, data analysts will create OLAP cubes containing just
the layers they need, for optimal analysis and performance.
OLAP cubes enable four basic types of multidimensional data analysis:
Drill-down
The drill-down operation converts less-detailed data into more-detailed data through
one of two methods—moving down in the concept hierarchy or adding a new dimension
to the cube. For example, if you view sales data for an organization’s calendar or fiscal
quarter, you can drill-down to see sales for each month, moving down in the concept
hierarchy of the “time” dimension.
Roll up
Roll up is the opposite of the drill-down function—it aggregates data on an OLAP cube by
moving up in the concept hierarchy or by reducing the number of dimensions. For
example, you could move up in the concept hierarchy of the “location” dimension by
viewing each country's data, rather than each city.
Slice
The slice operation creates a sub-cube by selecting a single dimension from the main
OLAP cube. For example, you can perform a slice by highlighting all data for the
organization's first fiscal or calendar quarter (time dimension).
Dice
The dice operation isolates a sub-cube by selecting several dimensions within the main
OLAP cube. For example, you could perform a dice operation by highlighting all data by
an organization’s calendar or fiscal quarters (time dimension) and within the U.S. and
Canada (location dimension).
Pivot
The pivot function rotates the current cube view to display a new representation of the
data—enabling dynamic multidimensional views of data. The OLAP pivot function is
comparable to the pivot table feature in spreadsheet software, such as Microsoft Excel,
but while pivot tables in Excel can be challenging, OLAP pivots are relatively easier to use
(less expertise is required) and have a faster response time and query performance.
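These OLAP-style operations can be sketched with pandas group-by and pivot operations; the sales table, column names, and values below are invented for illustration and are not from the notes.

```python
import pandas as pd

# Hypothetical sales data with three dimensions (region, quarter, product)
# and one measure (revenue); all values are made up.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "product": ["A", "B", "A", "B"],
    "revenue": [100, 150, 200, 250],
})

# Roll up: aggregate upward in the hierarchy (total revenue per region).
rollup = sales.groupby("region")["revenue"].sum()

# Drill down: add the "quarter" level for more detail.
drilldown = sales.groupby(["region", "quarter"])["revenue"].sum()

# Slice: fix a single dimension value (time = Q1) to get a sub-cube.
q1_slice = sales[sales["quarter"] == "Q1"]

# Pivot: rotate the view so quarters become columns.
pivot = sales.pivot_table(index="region", columns="quarter",
                          values="revenue", aggfunc="sum")
print(pivot)
```

In a real OLAP server these operations run against a precomputed cube; the pandas version simply mimics the same reshaping on a flat table.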
However, there are two other types of OLAP which may be preferable in certain cases:
ROLAP
HOLAP
HOLAP, or hybrid OLAP, attempts to create the optimal division of labor between
relational and multidimensional databases within a single OLAP architecture. The
relational tables contain larger quantities of data, and OLAP cubes are used for
aggregations and speculative processing. HOLAP requires an OLAP server that supports
both MOLAP and ROLAP.
A HOLAP tool can "drill through" the data cube to the relational tables, which paves the
way for quick data processing and flexible access. This hybrid system can offer better
scalability but can't escape the inevitable slow-down when accessing relational data
sources. Also, its complex architecture typically requires more frequent updates and
maintenance, as it must store and process all the data from relational databases and
multidimensional databases. For this reason, HOLAP can end up being more expensive.
The main difference between OLAP and OLTP is in the name: OLAP is analytical in nature,
and OLTP is transactional.
OLAP tools are designed for multidimensional analysis of data in a data warehouse, which
contains both transactional and historical data. In fact, an OLAP server is typically the
middle, analytical tier of a data warehousing solution. Common uses of OLAP include data
mining and other business intelligence applications, complex analytical calculations, and
predictive scenarios, as well as business reporting functions like financial analysis,
budgeting, and forecast planning.
Time series may also exhibit short-term seasonal effects (over a year, month, week,
or even a day) as well as longer-term cyclical effects, or nonlinear trends. A seasonal
effect is one that repeats at fixed intervals of time, typically a year, month, week, or day.
At a neighborhood grocery store, for instance, short-term seasonal patterns may occur
over a week, with the heaviest volume of customers on weekends; seasonal patterns may
also be evident during the course of a day, with higher volumes in the mornings and late
afternoons. The figure shows seasonal changes in natural gas usage for a homeowner over
the course of a year (Excel file Gas & Electric). Cyclical effects describe ups and downs
over a much longer time frame, such as several years. The figure shows a chart of the
data in the Excel file Federal Funds Rates.
The mean absolute deviation (MAD) is the absolute difference between the
actual value and the forecast, averaged over a range of forecasted values:

MAD = ( Σ | At - Ft | ) / n

where At is the actual value of the time series at time t, Ft is the forecast value for time t,
and n is the number of forecast values (not the number of data points, since we do not
have a forecast value associated with the first k data points). MAD provides a robust
measure of error and is less affected by extreme observations.
2. Mean square error (MSE):
Mean square error (MSE) is probably the most commonly used error metric.
It penalizes larger errors because squaring larger numbers has a greater impact than
squaring smaller numbers. The formula for MSE is

MSE = ( Σ ( At - Ft )² ) / n

Simple Exponential Smoothing:
The basic simple exponential smoothing model is

Ft+1 = aAt + (1 - a)Ft = Ft + a(At - Ft)

where Ft+1 is the forecast for time period t + 1, Ft is the forecast for period t, At is the
observed value in period t, and a is a constant between 0 and 1 called the smoothing
constant.
To begin, set F1 and F2 equal to the actual observation in period 1, A1.
Using the two forms of the forecast equation just given, we can interpret the simple
exponential smoothing model in two ways. In the first model, the forecast for the next
period, Ft+1, is a weighted average of the forecast made for period t, Ft, and the actual
observation in period t, At. The second form of the model, obtained by simply rearranging
terms, states that the forecast for the next period, Ft+1, equals the forecast for the last
period, Ft, plus a fraction a of the forecast error made in period t, At - Ft. Thus, to make a
forecast once we have selected the smoothing constant, we need to know only the
previous forecast and the actual value. By repeated substitution for Ft in the equation, it
is easy to demonstrate that Ft+1 is a decreasingly weighted average of all past time-series
data. Thus,the forecast actually reflects all the data, provided that a is strictly between 0
and 1.
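A minimal sketch of simple exponential smoothing together with the MAD and MSE error measures; the demand series and smoothing constant below are illustrative assumptions, and for simplicity the error measures here average over all periods rather than excluding the initial ones.

```python
def exponential_smoothing(series, a):
    """Return forecasts F1..Fn+1, with F1 = A1 (so F2 = A1 as well) and
    Ft+1 = a*At + (1 - a)*Ft."""
    forecasts = [series[0]]            # F1 = A1
    for actual in series:
        prev = forecasts[-1]
        forecasts.append(a * actual + (1 - a) * prev)
    return forecasts

def mad(actuals, forecasts):
    """Mean absolute deviation of forecasts from actuals."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

def mse(actuals, forecasts):
    """Mean square error of forecasts from actuals."""
    errors = [(a - f) ** 2 for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

demand = [100, 110, 104, 120, 115]     # illustrative demand series
f = exponential_smoothing(demand, a=0.3)
print("Next-period forecast:", round(f[-1], 2))
print("MAD:", round(mad(demand, f[:-1]), 2))
print("MSE:", round(mse(demand, f[:-1]), 2))
```

A smaller smoothing constant a reacts more slowly to recent observations; comparing MAD and MSE across values of a is a simple way to pick one.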
Double Exponential Smoothing
In double exponential smoothing, the estimates of at and bt are obtained from
the following equations:

at = αAt + (1 - α)(at-1 + bt-1)
bt = β(at - at-1) + (1 - β)bt-1

In essence, we are smoothing both parameters of the linear trend model. From the first
equation, the estimate of the level in period t is a weighted average of the observed value
at time t and the predicted value at time t, at-1 + bt-1, based on simple exponential
smoothing. For large values of α, more weight is placed on the observed value. Lower
values of α put more weight on the smoothed predicted value. Similarly, from the second
equation, the estimate of the trend in period t is a weighted average of the difference
between the estimated levels in periods t and t - 1 and the estimate of the trend in
period t - 1.
Forecasting Time Series with Seasonality:
When time series exhibit seasonality, different techniques provide better forecasts.
Regression-Based Seasonal Forecasting Models
One approach is to use linear regression. Multiple linear regression models
with categorical variables can be used for time series with seasonality.
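As a sketch of this regression approach, the model below fits a linear trend plus dummy (categorical) variables for the quarters using NumPy least squares; the quarterly sales figures and variable names are invented for illustration.

```python
import numpy as np

# Model: sales = b0 + b1*t + b2*Q2 + b3*Q3 + b4*Q4 (Q1 is the base category).
sales = np.array([10, 14, 18, 12, 11, 15, 19, 13])   # two years of quarterly data
t = np.arange(1, 9)                                  # time index 1..8
quarter = np.tile([1, 2, 3, 4], 2)                   # quarter label per period

# Design matrix: intercept, trend, and dummy variables for Q2..Q4.
X = np.column_stack([np.ones(8), t] +
                    [(quarter == q).astype(float) for q in (2, 3, 4)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Forecast for the next period (t = 9, a Q1 observation, so all dummies are 0).
x_new = np.array([1, 9, 0, 0, 0])
forecast = float(x_new @ coef)
print(round(forecast, 2))
```

The dummy coefficients capture the seasonal lift of each quarter relative to Q1, while the trend coefficient captures the period-to-period growth.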
Holt-Winters Forecasting for Seasonal Time Series
Holt-Winters models are similar to exponential smoothing models in that
smoothing constants are used to smooth out variations in the level and seasonal patterns
over time. For time series with seasonality but no trend, XLMiner supports a Holt-Winters
method but does not have the ability to optimize the parameters
Holt-Winters Models for Forecasting Time Series with seasonality and
Trend
Many time series exhibit both trend and seasonality. Such might be the case for
growing sales of a seasonal product. These models combine elements of both the trend
and seasonal models. Two types of Holt-Winters smoothing models are often used.
The Holt-Winters additive model is based on the equation

Ft+1 = at + bt + St-s+1

and the multiplicative model on the equation

Ft+1 = (at + bt)St-s+1

where St is the seasonal factor for period t and s is the number of periods in a season.
The additive model applies to time series with relatively stable seasonality, whereas the
multiplicative model applies to time series whose amplitude increases or decreases over
time. Therefore, a chart of the time series should be viewed first to identify the
appropriate type of model to use. Three parameters, α, β, and γ, are used to smooth the
level, trend, and seasonal factors in the time series. XLMiner supports both models.
Predictive modeling is often performed using curve and surface fitting, time series
regression, or machine learning approaches. Regardless of the approach used, the
process of creating a predictive model is the same across methods.
Supervised machine learning involves two steps: learning a model from a dataset of
historical instances and using that model to make predictions. Table 1.1 lists a set of
historical instances, or dataset, of mortgages that a bank has granted in the past. This
dataset includes descriptive features that describe the mortgage and a target feature that
indicates whether the mortgage applicant ultimately defaulted on the loan or paid it back
in full. The descriptive features tell us three pieces of information about the mortgage: the
OCCUPATION (which can be professional or industrial) and AGE of the applicant and the
ratio between the amount borrowed and the applicant's salary (LOAN-SALARY RATIO).
The target feature, OUTCOME, is set to either default or repay. In machine learning terms,
each row in the dataset is referred to as a training instance, and the overall dataset is
referred to as a training dataset.
Table 1.1
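The two-step idea can be sketched with a toy mortgage-style dataset; the instances, their values, and the simple threshold model below are invented for illustration and are not the book's Table 1.1 or its learning algorithm.

```python
# Invented training instances shaped like the mortgage dataset described above.
train = [
    {"occupation": "industrial",   "age": 34, "loan_salary_ratio": 2.96, "outcome": "repay"},
    {"occupation": "professional", "age": 41, "loan_salary_ratio": 4.64, "outcome": "default"},
    {"occupation": "professional", "age": 36, "loan_salary_ratio": 3.22, "outcome": "default"},
    {"occupation": "industrial",   "age": 44, "loan_salary_ratio": 1.80, "outcome": "repay"},
]

def learn_threshold(dataset):
    """Step 1 (training): induce a model, here the midpoint ratio that
    separates default from repay in the training instances."""
    defaults = [d["loan_salary_ratio"] for d in dataset if d["outcome"] == "default"]
    repays = [d["loan_salary_ratio"] for d in dataset if d["outcome"] == "repay"]
    return (min(defaults) + max(repays)) / 2

def predict(threshold, instance):
    """Step 2 (prediction): apply the induced model to a query instance."""
    return "default" if instance["loan_salary_ratio"] > threshold else "repay"

threshold = learn_threshold(train)
query = {"occupation": "professional", "age": 30, "loan_salary_ratio": 3.8}
print(predict(threshold, query))
```

Real algorithms consider all descriptive features and many candidate models; the point of the sketch is only the separation between inducing a model from training instances and applying it to unseen queries.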
Table 1.4(b) also illustrates the fact that the training dataset does not contain an instance
for every possible descriptive feature value combination and that there is still a large
number of potential prediction models that remain consistent with the training dataset
after the inconsistent models have been excluded. Specifically, there are three remaining
descriptive feature value combinations for which the correct target feature value is not
known, and therefore there are 3³ = 27 potential models that remain consistent with the
training data. Three of these, M2, M4, and M5, are shown in Table 1.4(b). Because a single
consistent model cannot be found based on the sample training dataset alone, we say that
machine learning is fundamentally an ill-posed problem.
We might be tempted to think that having multiple models that are consistent with the
data is a good thing. The problem is, however, that although these models agree on what
predictions should be made for the instances in the training dataset, they disagree with
regard to what predictions should be returned for instances that are not in the training
dataset. For example, if a new customer starts shopping at the supermarket and buys
baby food, alcohol, and organic vegetables, our set of consistent models will contradict
each other with respect to what prediction should be returned for this customer, for
example, M2 will return GRP = single, M4 will return GRP = family, and M5 will return GRP
= couple.
The criterion of consistency with the training data doesn’t provide any guidance with
regard to which of the consistent models to prefer when dealing with queries that are
outside the training dataset. As a result, we cannot use the set of consistent models to
make predictions for these queries. In fact, searching for predictive models that are
consistent with the dataset is equivalent to just memorizing the dataset. As a result, no
learning is taking place because the set of consistent models tells us nothing about the
underlying relationship between the descriptive and target features beyond what a
simple look-up of the training dataset would provide.
If a predictive model is to be useful, it must be able to make predictions for queries that
are not present in the data. A prediction model that makes the correct predictions for
these queries captures the underlying relationship between the descriptive and target
features and is said to generalize well. Indeed, the goal of machine learning is to find the
predictive model that generalizes best. In order to find this single best model, a machine
learning algorithm must use some criteria for choosing among the candidate models it
considers during its search.
Given that consistency with the dataset is not an adequate criterion to select the best
prediction model, what criteria should we use? There are a lot of potential answers to this
question, and that is why there are a lot of different machine learning algorithms. Each
machine learning algorithm uses different model selection criteria to drive its search for
the best predictive model. So, when we choose to use one machine learning algorithm
instead of another, we are, in effect, choosing to use one model selection criterion instead
of another.
All the different model selection criteria consist of a set of assumptions about the
characteristics of the model that we would like the algorithm to induce. The set of
assumptions that defines the model selection criteria of a machine learning algorithm is
known as the inductive bias of the machine learning algorithm.
There are two types of inductive bias that a machine learning algorithm can use, a
restriction bias and a preference bias. A restriction bias constrains the set of models that
the algorithm will consider during the learning process. A preference bias guides the
learning algorithm to prefer certain models over others.
For example, we introduce a machine learning algorithm called multivariable linear
regression with gradient descent, which implements the restriction bias of only
considering prediction models that produce predictions based on a linear combination of
the descriptive feature values and applies a preference bias over the order of the linear
models it considers in terms of a gradient descent approach through a weight space. As a
second example, we introduce the Iterative Dichotomizer 3 (ID3) machine learning
algorithm, which uses a restriction bias of only considering tree prediction models where
each branch encodes a sequence of checks on individual descriptive features but also
utilizes a preference bias by considering shallower (less complex) trees over larger trees.
It is important to recognize that using an inductive bias is a necessary prerequisite for
learning to occur; without inductive bias, a machine learning algorithm cannot learn
anything beyond what is in the data.
In summary, machine learning works by searching through a set of potential models to
find the prediction model that best generalizes beyond the dataset. Machine learning
algorithms use two sources of information to guide this search, the training dataset and
the inductive bias assumed by the algorithm.
UNIT IV
HR & SUPPLY CHAIN ANALYTICS
Human Resources – Planning and Recruitment – Training and Development - Supply
chain network - Planning Demand, Inventory and Supply – Logistics – Analytics
applications in HR & Supply Chain
1. Societal Objectives:
Since an organisation is a part of society, the main objective of HRM is to
be responsive to the needs and challenges of society.
HRM’s societal objectives include:
i) To provide more employment opportunities.
ii) To provide maximum productivity.
iii) To provide material and mental satisfaction to workforce.
iv) To control the wastage of effort.
v) To help maintain ethical policies and socially responsible
behaviour.
vi) To encourage healthy human relations and social welfare.
vii) To manage change to the mutual advantage of individuals, groups, the
enterprise, and the public.
2. Organisational Objectives :
These objectives of HRM are based on the fact that human resource
management exists to contribute to organisational effectiveness.
HRM’s organisational objectives include:
i) To help the organisation to reach its goals.
ii) To efficiently employ the skills and abilities of the workforce.
iii) To provide well-trained and well-motivated employees to the
organisation.
iv) To develop and maintain a quality of work life that makes employment
in the organisation desirable.
v) To communicate HRM policies to all employees.
3. Personal (or employees) Objectives :
Another important objective of HRM is to assist employees in
achieving their personal goals.
HRM’s personal objectives include:
i) To provide adequate remuneration to the employees.
ii) To provide job security
iii) To provide Facilities for proper Training and Development.
iv) To increase employees’ job satisfaction and self-actualisation.
v) To provide congenial working environment.
4. Labour Union Objectives :
The HRM is also concerned with labour unions and related issues.
HRM’s labour union related objectives include:
i) To recognise the labour unions.
ii) To establish the personnel policies in consultation with unions.
iii) To create a congenial atmosphere with unions so as to maintain the spirit
of self-discipline and co-operation with the management.
1. HRM Activities/Functions:
a)Organisational Planning and Development:
Determination of needs of the organisation based on long and short
term objectives, technology selected, product feature and external
environment.
Design of organisational structure.
Establishing a healthy organisational climate of mutual co-
operation, trust and confidence.
b) Strategic Human Resource Planning
Assessing current human resources.
Assessing future human resources needs
Developing a program to meet the future needs.
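The three planning steps above reduce to a simple gap computation: forecast demand minus projected supply. A minimal sketch, using made-up headcount figures (the function name and numbers are illustrative, not from any standard HR system):

```python
def hr_gap(current_staff, expected_attrition, forecast_demand):
    """Projected shortfall (positive) or surplus (negative) of employees.

    current_staff: headcount today
    expected_attrition: employees expected to leave over the planning horizon
    forecast_demand: headcount needed at the end of the horizon
    """
    projected_supply = current_staff - expected_attrition
    return forecast_demand - projected_supply

# Hypothetical figures: 120 employees, 15 expected to leave, 130 needed next year
gap = hr_gap(120, 15, 130)
print(gap)  # a positive gap signals the need for recruitment or training programs
```

A positive gap would feed step (iii), developing a program to meet the future need.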
c) Job Analysis
It is an assessment that defines jobs and the behaviour necessary
to perform them.
Preparation of Job descriptions and job specifications.
d) Staffing
It concerns the recruitment and selection of human resources
for an organisation.
It includes: manpower planning, recruitment, selection and placement, induction, promotion and transfer, and separation.
e) Training and Development
It includes:
1. Orientation of new employees
2. Training of employees to perform their jobs.
3. Retraining of employees as their job requirements change .
4. Encouraging the development and growth of employees.
f) Performance Appraisal:
It assesses how well employees are doing their jobs.
Appraisals are useful:
i) In making compensation decisions
ii) In specifying areas in which additional development of
employees is needed.
iii) In making placement decisions.
g) Compensation Benefits:
Compensation rewards people through pay, incentives
and benefits for performing work within the
organisation.
Organisations must develop and refine their basic wage
and salary to ensure that pay-for-performance policies
are followed.
h) Health and Safety:
Organisations should be more responsive to the concerns about the
physical and mental health and safety of employees.
Organisation should provide safer and healthy workshop conditions
for employees.
i)Employee Relations:
The formal relationship between employees and their employers
must be managed for the benefit of both.
To facilitate good employee relations, it is important to develop
and communicate HR policies and rules.
j) Union Relations:
Union-related activities are important because they affect
employees, managers, and the performance of many HR
activities.
At the formal organisation level, the union is the agent
representing a group of employees in an organisation.
The other activities of union include collective bargaining and
grievance management.
1. Collecting Information:
The first step in any form of HR planning is to collect information. A plan or a forecast
cannot be any better than the data on which it is based.
Figure 1.3 Human Resource Planning Process Model
2. Statistical Techniques:
Sources of Recruitment
A. Internal Sources:
1. Present Employees:
Promotions and transfers from among the present employees can be a good source of recruitment. Promotion involves upgrading an employee to a position carrying higher status, pay and responsibilities. Promotion from among the present employees is advantageous because the employees promoted are well acquainted with the organisation. Promotion from among present employees also reduces the requirement for job training. However, the disadvantage lies in limiting the choice to a few people and denying hiring of outsiders who may be better qualified and skilled. Furthermore, promotion from among present employees also results in inbreeding, which creates frustration among those not promoted. Transfer refers to shifting an employee from one job to another without any change in the position/post, status and responsibilities. The need for transfer is felt to provide employees a broader and varied base, which is considered necessary for promotions. Job rotation involves the transfer of employees from one job to another on a planned basis.
2. Former Employees:
Former employees are another internal source of applicants. Some retired employees may be willing to come back to work in the company on a part-time basis. Similarly, some former employees who left the organisation for any reason may again be interested to come back to work. This source has the advantage of hiring people whose performance is already known to the organisation.
3. Employee Referrals:
This is yet another internal source of recruitment. The existing employ-ees refer their
family members, friends and relatives to the company as potential candidates for the
vacancies to be filled up in the organisation. This source serves as one of the most effective methods of recruitment, because existing employees refer only those potential candidates who meet the company requirements known to them from
their own experience. The referred individuals are expected to be similar in type in terms
of race and sex, for example, to those who are already working in the organisation.
4. Previous Applicants:
This is considered as internal source in the sense that applications from the potential
candidates are already lying with the organisation. Sometimes, the organisation contacts these applicants through mail or messenger to fill up the vacancies.
B. External Sources:
External sources of recruitment lie outside the organisation. These outnumber internal
sources.
1. Employment Exchanges:
The National Commission on Labour (1969) observed in its report that in the pre-
Independence era, the main source of labour was rural areas surrounding the industries.
In response to it, the Employment Exchanges (Compulsory Notification of Vacancies) Act, 1959 was instituted, which became operative in 1960. Under Section 4 of the Act, it is obligatory for all industrial establishments having 25 workers or more to notify the nearest employment exchange of vacancies (with certain exceptions).
The main functions of these employment exchanges with their branches in most cities are
registration of job seekers and their placement in the notified vacancies. It is obligatory
for the employer to inform the outcome of selection within 15 days to the employment
exchange.
Studies conducted in the country also revealed that recruitment through employment exchanges was among the most common methods.
2. Employment Agencies:
Generally, these agencies select personnel for supervisory and higher levels. The main
function of these agencies is to invite applications and short list the suitable candidates
for the organisation. Of course, the final decision on selection is taken by the
representatives of the organisation. At best, the representatives of the employment
agencies may also sit on the panel for final selection of the candidates.
3. Advertisement:
Advertisement is perhaps the most widely used method for generating many
applications. This is because its reach is very high. This method of recruitment can be
used for jobs like clerical, technical and managerial. The higher the position in the
organisation, the more specialized the skills, or the shorter the supply of that resource in the labour market, the more widely dispersed the advertisement is likely to be.
For example, the search for a top executive might include advertisements in national dailies and professional journals.
4. Professional Associations:
Very often, recruitment for certain professional and technical positions is made through professional associations, which provide placement services for their members. For this, the professional associations prepare lists of jobseekers from among their members. The professional associations are particularly useful for attracting highly skilled and professional personnel. However, in India, this is not a very common practice, and those few associations that provide such a service have not been able to generate a large
number of applications.
5. Campus Recruitment:
Although campus recruitment is a common phenomenon, particularly in American organisations, it has made its mark rather recently. Of late, some organisations such as HLL, HCL, L&T, Citi Bank, ANZ Grindlays, Motorola, Reliance, etc., in India have started visiting educational and training institutes for recruitment. Examples of such campuses are the Indian Institutes of Management and the Indian Institutes of Technology. For this purpose, many institutes have regular placement cells/offices to serve as liaison between the employers and the students. Tezpur Central University has, for example, one Deputy Director (Training and Placement) for the purposes of campus recruitment and placement.
Campus recruitment offers certain advantages to the organisations. First, most of the candidates are available at one place; second, the interviews are arranged at short notice; third, the teaching faculty is also met; and fourth, it gives the recruiters an opportunity to sell the organisation to a large student body who would be graduating subsequently. However, the disadvantages of this type of recruitment are that organisations have to limit their selection to only "entry" positions and that they interview only candidates without prior work experience.
6. Deputation:
Deputation means sending an employee to another organisation for a short duration of two to three years. This method of recruitment is advantageous because the borrowing organisation does not have to incur the initial cost of induction and training.
However, the disadvantage associated with deputation is that the deputation period of two/three years is not long enough for the deputed employee to prove his/her mettle, on
the one hand, and develop commitment with the organisation to become part of it, on the
other.
7. Word-of-Mouth:
In this method, the word is passed around about the possible vacancies or openings in the organisation. A variant of this method is "employee pinching", i.e., the employees working in another organisation are made an attractive offer by the rival organisations. This method is economical, both in terms of time and money.
Some organisations maintain a file of the applications and bio-data sent by job-seekers.
These files serve as very handy as and when there is vacancy in the organisation. The
advantage of this method is that no cost is involved in recruitment. However, the drawbacks of this method of recruitment are the non-availability of candidates when needed and the possibility that their particulars have become outdated.
8. Raiding or Poaching:
Raiding or poaching is another method of recruitment whereby the rival firms by offering
better terms and conditions, try to attract qualified employees to join them. A well-known example is the raiding of pilots from the Indian Airlines by private air taxi operators. Whatever may be the
means used to raid rival firms for potential candidates, it is often seen as an unethical
practice and not openly talked about. In fact, raiding has become a challenge for the
human resource manager. Besides these, walk-ins, contractors, radio and television,
acquisitions and mergers, etc., are some other sources of recruitment used by
organisations.
3. Social Media Analytics
The social media platform has become the most accessible and diverse tool from the perspective of marketing. This strategy can help you understand the sentiments of people and how they are responding and engaging with you, enabling you to take decisive action and approach the right audience in the target market. To implement a successful analytics marketing strategy, you have to reach more people and engage with your followers to understand the improvements they are looking for.
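The sentiment analysis mentioned above can be illustrated with a deliberately naive word-count sketch. Real social media tools use trained models; the word lists and sample posts below are made up for illustration:

```python
# Tiny, hypothetical sentiment lexicons (real tools use far richer models)
POSITIVE = {"love", "great", "happy", "excellent"}
NEGATIVE = {"hate", "bad", "slow", "broken"}

def sentiment_score(post):
    """Crude sentiment: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "Love the new app, great update",
    "Support is slow and the app feels broken",
]
scores = [sentiment_score(p) for p in posts]
print(scores)  # positive score = favourable post, negative = complaint
```

Aggregating such scores over time gives a rough view of how followers are responding to a brand.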
4. Campaign Analytics
This strategy helps you track your campaigns: how they are performing and whether they are generating leads. First, understand the lead conversion rates from multiple channels and sources. Then identify the opportunities by product category and source of lead. Finally, identify the content and platform that resonate most with your audience. This will enable you to optimize the messaging and targeting of your content strategy.
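The channel-level conversion analysis described above is a simple ratio per channel. A sketch with hypothetical lead counts (channel names and figures are invented):

```python
# Hypothetical lead counts per channel: (leads generated, leads converted)
channels = {
    "email":   (400, 48),
    "social":  (900, 45),
    "webinar": (150, 30),
}

def conversion_rates(data):
    """Lead-to-customer conversion rate per channel."""
    return {ch: converted / leads for ch, (leads, converted) in data.items()}

rates = conversion_rates(channels)
best = max(rates, key=rates.get)
print(best, round(rates[best], 2))  # the highest-converting channel
```

A high-volume channel with a low rate (social here) may matter less than a small channel that converts well, which is exactly the opportunity-spotting the section describes.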
5. Link Analytics
• Links are among the most crucial aspects of search algorithms. With the assistance of link analytics you can view the links to a site, the domain and page authority of referring domains, the total number of inbound links, the top pages by links, the anchor texts, and much more.
• Thus, having transparency is the prime motive of marketing managers. For this, they have to set a common agreement on different KPIs. In today's competitive age it is
essential to opt for effective marketing strategies by learning the art of positioning your
brand, as it can help to win over the competitors. In addition to this, another important
element of a reliable marketing analytics framework is to build an effective analytics
dashboard. This dashboard should represent KPIs by unifying data strategies from
different marketing data sources.
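The link metrics listed above (inbound links, top pages by links, anchor texts) are simple tallies once link data is exported. A sketch over made-up link records (domains and pages are hypothetical):

```python
from collections import Counter

# Hypothetical inbound links: (referring domain, anchor text, target page)
inbound_links = [
    ("blog.example.com", "pricing guide", "/pricing"),
    ("news.example.org", "pricing guide", "/pricing"),
    ("forum.example.net", "homepage", "/"),
    ("blog.example.com", "case study", "/customers"),
]

referring_domains = Counter(domain for domain, _, _ in inbound_links)
top_pages = Counter(page for _, _, page in inbound_links)
anchor_texts = Counter(anchor for _, anchor, _ in inbound_links)

print(len(inbound_links))        # total inbound links
print(top_pages.most_common(1))  # top page by link count
```

Domain and page authority would come from an external SEO tool; the counting itself is just aggregation like this.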
6. Keyword Research –
With keyword research, you can obtain very detailed insights into how your
business is appealing to your potential customers and if there are areas that you can
optimise. View how competitive your target keywords are, the average monthly search
volume for that particular keyword, the estimated CPCs if you decide to bid on those
keywords, the number of clicks that you are getting for that keyword and the click-
through rates.
Thus the marketing analytics strategies are necessary for any business to obtain
timely, reliable, complete and operative information.
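The keyword metrics above (search volume, CPC, clicks, click-through rate) can be combined as in the sketch below. The keyword rows are hypothetical, and CTR is computed as clicks divided by impressions:

```python
# Hypothetical keyword stats: monthly searches, impressions, clicks, avg CPC bid
keywords = [
    {"kw": "running shoes", "searches": 12000, "impressions": 8000, "clicks": 240, "cpc": 1.20},
    {"kw": "trail shoes",   "searches": 3000,  "impressions": 2500, "clicks": 125, "cpc": 0.80},
]

for k in keywords:
    k["ctr"] = k["clicks"] / k["impressions"]  # click-through rate
    k["est_cost"] = k["clicks"] * k["cpc"]     # estimated spend if bidding

best = max(keywords, key=lambda k: k["ctr"])
print(best["kw"], round(best["ctr"], 3))
```

Note that the lower-volume keyword can have the better CTR, which is why raw search volume alone is a poor guide for bidding decisions.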
Tools
• Marketing analytics is the practice of studying the data from marketing efforts across various channels and campaigns, and forming models to report metrics like ROI, channel performance, etc., in order to identify parameters for improvement. Marketers will be able to provide answers
to the analytics questions that are most important to their stakeholders by monitoring
and reporting on business performance results, diagnostic metrics, and leading indicator
metrics
• The intelligence derived from marketing analytics allows you to spend each dollar
as effectively as possible.
• However, despite the emergence of several platforms and technologies that can
streamline the marketing analytics workflows, it remains a challenge for companies to
build concrete, actionable data analytics solutions for marketing efforts. According to a
survey of senior marketing executives published in the Harvard Business Review, “more
than 80% of respondents were dissatisfied with their ability to measure marketing ROI.”
• To set up a practical marketing analytics framework within your organisation,
you must have the right processes along with investing in the right technology platforms
to capture data-driven strategy and deliver unified and consistent information on your
measurement metrics.
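The idea of spending each dollar as effectively as possible rests on the basic ROI formula, ROI = (revenue − cost) / cost. A minimal sketch with a hypothetical campaign:

```python
def roi(revenue, cost):
    """Return on investment: net return per dollar spent."""
    return (revenue - cost) / cost

# Hypothetical campaign: cost $5,000, drove $12,000 in attributed revenue
print(roi(12000, 5000))  # 1.4 -> $1.40 of net return per dollar spent
```

The hard part the survey alludes to is not this arithmetic but attributing the revenue figure reliably to the campaign in the first place.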
1.3 Marketing Analytics Strategies Process
• With marketing analytics, you can gather intelligence into several different areas
of your marketing strategy. It will help you understand how your programs are
performing against the cost and which programs are delivering the best ROI. It will help
you to segregate your efforts and identify the area that you need to focus on the most.
• Analytics strategy will help you to realise how your programs are working in
conjunction to nurture your leads. With this, you can build a solid base upon which you
can qualify them and pass the leads on to your sales reps as opportunities.
• With marketing analytics, you can also identify laggards, i.e. the programs that are not providing adequate return on the efforts invested in them. You can then choose to redefine your data-driven strategy for them or remove them from your focus altogether.
• Market and competitor analysis will give you crucial insights into your competitor
data-driven strategy and which channels/ programs are working for them. Learning from
your competitors is an old business principle and marketing analytics can give you a
powerful arsenal to use and base your actions on the digital platforms.
• Even better! Advanced analytics can provide insights into trends, make forecasts
and capitalise on opportunities before anyone else.
• This will help grow your bottom line and avoid wastage on marketing spending,
optimising the dollar spend and viewing campaign performance in real-time. It helps you
to measure the impact of your strategies and compare it against the cost.
• Marketing strategies and tactics are normally based on explicit and implicit beliefs
about consumer behavior. Decisions based on explicit assumptions and sound theory and
research are more likely to be successful than the decisions based solely on implicit
intuition.
• Knowledge of consumer behavior can be an important competitive advantage
while formulating marketing strategies. It can greatly reduce the odds of bad decisions
and market failures. The principles of consumer behavior are useful in many areas of
marketing, some of which are listed below –
Analyzing Market Opportunity
Consumer behavior helps in identifying the unfulfilled needs and wants of
consumers. This requires scanning the trends and conditions operating in the market
area, customer’s lifestyles, income levels and growing influences.
Selecting Target Market
The scanning and evaluating of market opportunities helps in identifying different
consumer segments with different and exceptional wants and needs. Identifying these
groups, learning how to make buying decisions enables the marketer to design products
or services as per the requirements.
Example − Consumer studies show that many existing and potential shampoo
users did not want to buy shampoo packs priced at Rs 60 or more. They would rather
prefer a low price packet/sachet containing sufficient quantity for one or two washes.
This resulted in companies introducing shampoo sachets at a minimal price, which provided remarkable returns.
Marketing-Mix Decisions
Once the unfulfilled needs and wants are identified, the marketer has to determine
the precise mix of four P’s, i.e., Product, Price, Place, and Promotion.
Product
A marketer needs to design products or services that would satisfy the
unsatisfied needs or wants of consumers. Decisions taken for the product are related to
size, shape, and features. The marketer also has to decide about packaging, important
aspects of service, warranties, conditions, and accessories.
Example − Nestle first introduced Maggi noodles in masala and capsicum flavours. Later, keeping consumer preferences in other regions in mind, the company introduced Garlic, Sambar, Atta Maggi, Soupy noodles, and other flavours.
Price
The second important component of marketing mix is price. Marketers must
decide what price to be charged for a product or service, to stay competitive in a tough
market. These decisions influence the flow of returns to the company.
Place
The next decision is related to the distribution channel, i.e., where and how to offer
the products and services at the final stage. The following decisions are taken regarding
the distribution mix −
• Are the products to be sold through all the retail outlets or only through the
selected ones?
• Should the marketer use only the existing outlets that sell the competing brands?
Or, should they indulge in new elite outlets selling only the marketer’s brands?
• Is the location of the retail outlets important from the customers’ point of view?
• Should the company think of direct marketing and selling?
Promotion
• Promotion deals with building a relationship with the consumers through the
channels of marketing communication. Some of the popular promotion techniques
include advertising, personal selling, sales promotion, publicity, and direct marketing and
selling.
• The marketer has to decide which method would be most suitable to effectively
reach the consumers. Should it be advertising alone or should it be combined with sales
promotion techniques? The company has to know its target consumers, their location,
their taste and preferences, which media do they have access to, lifestyles, etc.
2. Marketing Mix
• Marketing mix modeling is a marketing analytics strategy that can help your brand maximize return and get a deeper understanding of how your business actually functions. Let’s look into the benefits that this strategy can provide for your brand. As the
world of digital marketing has exploded, the rise of big data and incredibly technical and
complex data sets has been both a blessing and a curse to brands big and small.
• While it’s true that detailed data can help businesses understand their consumers
and grow their businesses, it’s often the case that the data is overwhelming.
• With technology platforms and analytics tools being able to collect enormous
amounts of data, brands are often left struggling to get through it all and understand what
it is that they’ve gathered.
• In order to address the issue of how to manage incoming data and then use that
information to make impactful decisions, a clear analytics strategy is necessary for all
brands.
• Picking the right strategy for your business is the key to making sure you are
getting the most out of your planning and marketing activity.
• Marketing mix modeling is one example of a marketing analytics strategy that can
really help your brand manage data and learn the best places to invest your budget and
time on.
2.1 What is Marketing Mix Modeling?
• Marketing mix modeling is a statistical marketing method that attempts to
determine the effectiveness of marketing campaigns and initiatives by taking apart data
and attributing contributions to different marketing tactics and factors to better predict
future success.
• Put another way, marketing mix modeling looks at different pre-determined
factors and the data that has been gathered from marketing campaigns to see which
factors have had the biggest impact on return and which factors have contributed the
most to success.
• Once this data has been collected and organized, the marketing mix modeling
system will use the past and historical data to predict or forecast future marketing and
sales success.
• By looking at the trends that have worked before, the marketing mix modeling will
theoretically be able to forecast with more accuracy than other analytical methods.
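As a sketch of the statistical idea behind marketing mix modeling, the following fits an ordinary least squares model relating sales to channel spends. The data is synthetic and the implementation deliberately minimal; real mix models add effects like seasonality, adstock, and saturation:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. Each row of X starts with 1 (intercept)."""
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)] for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    for i in range(n):  # forward elimination with partial pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, n))) / A[i][i]
    return coef

# Synthetic weekly data where sales = 100 + 3*tv_spend + 5*digital_spend
tv      = [10, 20, 15, 30, 25, 12]
digital = [5, 8, 12, 6, 10, 14]
sales   = [100 + 3 * t + 5 * d for t, d in zip(tv, digital)]

X = [[1, t, d] for t, d in zip(tv, digital)]
base, b_tv, b_digital = ols(X, sales)
print(round(base), round(b_tv), round(b_digital))  # recovers 100, 3, 5
```

The fitted coefficients are the "contributions" the section describes: baseline sales plus an incremental effect per unit of spend in each channel, which can then be extrapolated for forecasting.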
2.2 The 5 P’s of Marketing Mix Modeling
As stated above, marketing mix modeling distributes success from data to
different pre-determined factors.
Those factors are often referred to as the 5 P’s of marketing, which are derived
from other marketing research and studies. Let’s look at those 5 P’s now.
Product : Product refers to the actual products or services that are created and
offered to customers by a brand.
Price :The price takes into consideration any deals, sales, pricing models, and
methods of payment involved in a sale.
Place : Place refers to the channels through which products are available to
consumers and how consumers are able to find the offers that the brand has.
Promotion : Promotion is the method by which products or services are marketed and shared among audiences.
People: People is the final P, and is sometimes left off of marketing mix modeling.
People refers to both the internal staff and the customers that drive sales in a
brand.
2.3 Marketing Mix Modeling vs. Attribution Modeling
• Marketing mix modeling is often compared to another popular model of marketing
analytics, attribution modeling.
• Attribution modeling is the process of setting up different touchpoints that trigger
events on the customer’s journey.
• Each touchpoint is assigned a value to help determine which points in the customer’s journey are responsible for bringing in revenue.
• While attribution
modeling can be helpful to understand data and provide context for ROI, it also has a few
major drawbacks.
• The biggest problem is that not every touchpoint in a customer’s journey can
possibly be tracked and analyzed through collected data.
• Another drawback of attribution modeling is that it functions mainly through
clicks and clicks alone — other potential data points are put aside in favor of clicks that
can “prove” a conversion has taken place at a touchpoint.
• Attribution modeling also doesn’t prove the effectiveness of a campaign. After all,
a customer will have to pass through the same touchpoints whether they were convinced
through an advertisement to make a conversion or not.
• That makes it difficult to assign return to specific touchpoints.
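For contrast, the touchpoint-value idea behind attribution modeling can be sketched with its simplest rule, linear attribution, which splits each sale's revenue equally across the touchpoints in the journey. The journey data below is hypothetical:

```python
from collections import defaultdict

def linear_attribution(journeys):
    """Split each conversion's revenue equally across its touchpoints
    (the simplest rule; first-touch or position-based rules weight differently)."""
    credit = defaultdict(float)
    for touchpoints, revenue in journeys:
        share = revenue / len(touchpoints)
        for tp in touchpoints:
            credit[tp] += share
    return dict(credit)

# Hypothetical customer journeys: (touchpoints clicked, revenue of the sale)
journeys = [
    (["ad", "email", "checkout"], 90),
    (["search", "checkout"], 60),
]
print(linear_attribution(journeys))
```

The drawbacks listed above show up directly here: only tracked clicks appear in the journey lists, so any untracked influence earns no credit.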
2.4 Benefits of the Marketing Mix Modeling
Let’s take a deeper dive into the benefits that it can provide to your brand’s
analytics and reporting models.
Prove the ROI of Marketing Initiatives
Marketing mix modeling allows marketers to really prove the ROI of their
initiatives. By relating data insights back to the factors in each campaign that provided
success, it can help brands understand the full impact of their efforts.
Gather Insights
Marketing mix modeling is also great for understanding key insights from business
initiatives. Those insights can be used to drive effective budget allocations within
marketing and sales departments and convince stakeholders of the benefits of the model.
Create Better Sales Forecasting
Sales forecasting refers to the practice of estimating how much revenue can be
generated in the future based on the impact that your sales and marketing efforts have
had in the past.
By allocating success to key factors, marketing mix modeling allows brands to have more
accurate forecasting.
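The forecasting idea above can be sketched with the simplest possible model: a straight-line trend fitted to past sales by least squares and extrapolated forward. Real sales forecasts would also account for seasonality and marketing factors; the quarterly figures here are invented:

```python
def trend_forecast(history, periods_ahead):
    """Fit a straight-line trend (least squares) to past sales and extrapolate."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Hypothetical quarterly sales showing steady growth
sales = [100, 110, 120, 130]
print(trend_forecast(sales, 1))  # projected next quarter
```

Because the trend is estimated from historical data, the projection inherits the section's caveat: it is only as reliable as the past pattern it extrapolates.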
Understand Historical Data and Trends
Marketing mix modeling is based on understanding the past data that has been collected during initiatives and campaigns. Many other analytics models will ignore this valuable data or only look at parts of it. The marketing mix system ensures historical data and trends are examined closely for value.
Account for Negative Impacts
Just as marketing mix modeling allows brands to see the positive impacts that
their efforts have created, it can also be used to see negative impacts on different marketing channels. This helps brands know which areas of the business need work and where serious corrections need to take place.
2.5 What are the Limitations of Marketing Mix Modeling?
Like all marketing analytics methods, there are drawbacks and limitations to this approach. The sheer amount of data collected means that there isn’t one method of analytics that can address every data set. Here are some of the major drawbacks of marketing mix modeling:
• Infrequent reporting, meaning no real-time data analytics.
• Does not analyze the customer’s experience or journey.
• Doesn’t provide the 1:1 analysis of attribution modeling.
• Doesn’t examine relationships between channels.
• Doesn’t look into brand awareness, messaging, or reach.
• Requires a large marketing analysis budget.
• Harder to implement in B2B businesses than B2C brands.
2.6 How to Build the Marketing Mix Modeling
While there are some setbacks, marketing mix modeling can provide major
benefits to your brand. Let’s take a look at how you can go about building this system in
your own organization.
1. Establish Your Goals
• The end goal of any marketing analytics strategy is to parse through and gather
insights from your data sets.
• That means that marketing mix modeling is meant to help organize your data and
your analytics methods.
• Therefore, it makes sense that the first step is to establish the specific goals you
want to attain through your strategy.
• Your goals might center around budgets, marketing campaigns, product pricing,
or your brand in comparison to competitors.
2. Create Internal Alignment
• In order to succeed, you need to have clear alignment across your organization.
• As with most data analytics, marketing mix modeling requires you to pull data
from many different systems from different departments.
• That requires compliance across different teams and with the key stakeholders in
your organization, such as:
CMO, media agencies, marketing agencies, marketing executives and managers, CRM managers, and sales leads.
3.0 Consumer Behavior
Consumer behavior is about how people buy and use merchandise and services. Understanding consumer behavior will assist business entities to be more practical at selling, designing, and developing products or services, and at every other initiative that impacts their customers. In this tutorial, it has been our endeavor to cover the multidimensional aspects of consumer behavior in an easy-to-understand manner.
• Audience
This tutorial will help management students as well as industry professionals who
work in a product development environment, or in packaging, or for that matter, any part
of a company that has an interface with the customers.
• Prerequisites
To understand this tutorial, it is advisable to have a foundation level knowledge of
basic business and management studies. However, general students and entrepreneurs
who wish to get an understanding about consumer behavior may find it quite useful.
Consumer Behavior - Consumerism
Consumerism is the organized form of efforts from different individuals, groups,
governments and various related organizations which helps to protect the consumer
from unfair practices and to safeguard their rights. The growth of consumerism has led to many organizations improving their services to the customer.
Consumerism
Consumer is regarded as the king in modern marketing. In a market economy, the
concept of consumer is given the highest priority, and every effort is made to
encourage consumer satisfaction. However, there might be instances where
consumers are generally ignored and sometimes they are being exploited as well.
Therefore, consumers come together for protecting their individual interests. It is a
peaceful and democratic movement for self-protection against their exploitation.
Consumer movement is also referred as consumerism.
3.1 Features of Consumerism
Highlighted here are some of the notable features of consumerism −
Protection of Rights − Consumerism helps in building business communities and
institutions to protect their rights from unfair practices.
Prevention of Malpractices − Consumerism prevents unfair practices within the
business community, such as hoarding, adulteration, black marketing, profiteering, etc.
Unity among Consumers − Consumerism aims at creating knowledge and
harmony among consumers and to take group measures on issues like consumer laws,
supply of information about marketing malpractices, misleading and restrictive trade
practices.
Enforcing Consumer Rights − Consumerism aims at applying the four basic
rights of consumers which are Right to Safety, Right to be Informed, Right to Choose, and
Right to Redress.
Advertising and technology are the two driving forces of consumerism −
• The first driving force of consumerism is advertising. Here, it is connected
with the ideas and thoughts through which the product is made and the consumer buys
the product. Through advertising, we get the necessary information about the product we
have to buy.
• Technology is upgrading very fast. It is necessary to check the environment on a
daily basis as the environment is dynamic in nature. Product should be manufactured
using new technology to satisfy the consumers. Old and outdated technology won’t help
product manufacturers to sustain their business in the long run.
3.2 Consumer Behavior – Significance
• Consumer behavior covers a broad variety of consumers based on diversity in age,
sex, culture, taste, preference, educational level, income level, etc. Consumer behavior can
be defined as “the decision process and physical activity engaged in evaluating, acquiring,
using or disposing of goods and services.”
• With all of the diversity in the surplus of goods and services offered to us, and the freedom of choices, one may wonder how individual marketers actually reach us with their highly specific marketing messages. Understanding consumer behavior helps in
identifying whom to target, how to target, when to reach them, and what message is to be
given to them to reach the target audience to buy the product.
The following illustration shows the determinants of consumer behavior.
• The study of Consumer Behavior helps in understanding how individuals make
decisions to spend their available resources like time, money, and effort while purchasing
goods and services. It is a subject that explains the basic questions that a normal
consumer faces − what to buy, why to buy, when to buy, where to buy from, how often to
buy, and how they use it.
• Consumer behavior is a complex and multidimensional process that reflects the
totality of consumer decisions with respect to acquisition, consumption, and disposal of
goods and services.
3.3 Dimensions of Consumer Behavior
Consumer behavior is multidimensional in nature and it is influenced by the
following subjects −
• Psychology is a discipline that deals with the study of mind and behavior. It helps
in understanding individuals and groups by establishing general principles and
researching specific cases. Psychology plays a vital role in understanding how consumers
behave while making a purchase.
• Sociology is the study of groups. When individuals form groups, their actions
sometimes differ from the actions they would take when operating individually.
• Social Psychology is a combination of sociology and psychology. It explains how
an individual operates in a group. Group dynamics play an important role in purchasing
decisions. Opinions of peers, reference groups, their families and opinion leaders
influence individuals in their behavior.
• Cultural Anthropology is the study of human beings in society. It explores the
development of central beliefs, values and customs that individuals inherit from their
parents, which influence their purchasing patterns.
3.4 How Consumer Behavior Affects Marketing Strategy?
• Business organizations across the globe try to influence consumers by encouraging
them to buy products and services. This is done by studying the needs of the
consumer and creating appropriate strategies so that consumers buy products. There are
several marketing strategies used for influencing consumer behavior that affect the
buying decision.
• The first thing to be kept in mind while building strategies for marketing products
is communicating with consumers emotionally. This can be done through promotional
material designed to get the attention of consumers. It has been found that consumers are
attracted to products that create emotions in the form of joy and surprise.
• All businesses throughout the world are seeking solutions to ensure long-term
sales and profitability, as well as market sustainability. To do so, companies must pay
close attention to their source of profit – consumers – and, more crucially, their
behaviour.
• Consumer behaviour is the study of consumer demands and how consumers
(customers and organizations) meet these needs, as well as their motivation for using and
purchasing a certain product or service.
• This is an exceptionally helpful study for corporations seeking strategies
to stay relevant in the market, since it assists them in determining the best marketing plan
for their items.
Studying consumer behaviour also helps businesses avoid product/service failure.
Businesses are expected to research all relevant criteria to effectively analyze their
customers.
• A successful marketing strategy is critical to a company’s success, since it assists
the company in developing a product or service that has the potential to sell and provide
high levels of profit. A marketing strategy is a company’s plan for selling its
product, which includes considering the four variables of the marketing mix: product,
price, place, and promotion.
Consumer behaviour and marketing strategy are inextricably linked:
• Consumer behaviour assists firms in determining whether what they are selling
will be lucrative, as well as in tailoring their marketing plan to the appropriate target
population for their product/service.
• Catering a product/service to the wrong audience may be detrimental to a
business, whereas catering the appropriate product/service to the right consumers, by
observing their behaviour, can be invaluable to a company.
• Many organizations look for the most cost-effective way to do consumer research.
By using technologies like Google Analytics, Google Surveys, CRM systems, and social
networking sites, businesses may keep track of their customers’ web activity,
making it easier to determine client preferences. Keeping track of consumer behaviour is
critical for ensuring profitability.
With the recent Covid-19 crisis, businesses must monitor customer behaviour more now
than ever. Covid-19 has brought drastic changes in consumer behaviour. Consumers are
also less likely to make large purchases during an economic and financial crisis such as
a recession; therefore, businesses must study and analyze consumer behaviour to ensure
sustainability by having the right marketing strategy catered to the consumer’s financial
and emotional preferences. Failure to do so may result in the suspension of operations
or bankruptcy.
In conclusion, consumer behaviour has a significant influence on marketing
strategy and is important to the success of a product, so the marketing strategy must be
determined by analyzing consumer behaviour to understand what customers want.
Meeting consumer demand is the quickest method to make profits – the ultimate
objective of any firm.
4. Selling process
The sales process – also known as a sales cycle – is the method your company follows to
sell your product or service to customers. It involves a series of steps, from initial contact
with a lead to the final sale.
The sales process is similar to developing a relationship with someone new. When you
first meet, you get to know each other, learn what they like, and determine their goals.
Along the way, you decide if you can work together and whether you are a match. If this
is the case, the relationship can proceed and grow.
4.1 Importance of building a sales process
These are some benefits of building a sales process for your business:
You can optimize the structure of your sales team to support the sales process and
identify the main challenges in the sales cycle.
It will be easier to onboard new sales personnel.
It helps you identify short-term and long-term goals and how each step in the sales
process supports the next one.
It highlights where time and resources are being wasted, so you can remove activities
with low return on investment and focus your efforts on activities with more positive
returns.
It identifies the steps that need to be improved. This allows you to invest in training,
education, and practice to get better in areas of weakness, which will help match your
success in other parts of the sales process.
4.2 The 7-step sales process
Prospecting
Preparation
Approach
Presentation
Handling objections
Closing
Follow-up
If you are one of the 2.5 million employees in the United States working in sales, you know
that even for the most natural salesperson, it can sometimes be difficult to turn potential
leads into closed sales. Across industries, you need different skills and knowledge to
prove to your potential customers that your solution is best for their particular problem.
The seven-step sales process outlined in business textbooks is a good start, especially
since leading sales ops teams attribute 60% or more of their total pipeline in any
quarter to actively designed and deployed sales plays. The seven-step sales process is
a good starting point for customizing it to your particular business and, more
importantly, to your target customers as you move them through the sales funnel.
As the old adage goes, “Learn the rules like a pro so you can break them like an artist.”
Once you’ve mastered the seven steps of the sales process you might learn in a business
class or sales seminar, then you can break the rules where necessary to create a sales
process that may not necessarily follow procedure but gets results.
The textbook 7-step sales process
What are the seven steps of the sales process according to most sales masters? The
following steps provide a good outline for what you should be doing to find potential
customers, close the sale, and retain your clients for repeat business and referrals in the
future.
1. Prospecting
The first step in the sales process is prospecting. In this stage, you find potential
customers and determine whether they have a need for your product or service— and
whether they can afford what you offer. Evaluating whether the customers need your
product or service and can afford it is known as qualifying.
Keep in mind that, in modern sales, it's not enough to find one prospect at a company:
There are an average of 6.8 customer stakeholders involved in a typical purchase, so
you'll want to practice multi-threading, or connecting with multiple decision-makers on
the purchasing side. Account maps are an effective way to do this.
2. Preparation
The next step is preparing for initial contact with a potential customer, researching the
market and collecting all relevant information regarding your product or service. Develop
your sales presentation and tailor it to your potential client’s particular needs.
Preparation is key to setting you up for success. The better you understand your prospect
and their needs, the better you can address their objections and set yourself apart from
the competition.
3. Approach
Next, make first contact with your client. This is called the approach. Sometimes this is a
face-to-face meeting, sometimes it’s over the phone. There are three common approach
methods.
Premium approach: Presenting your potential client with a gift at the beginning of your
interaction
Question approach: Asking a question to get the prospect interested.
Product approach: Giving the prospect a sample or a free trial to review and evaluate
your service
4. Presentation
In the presentation phase, you actively demonstrate how your product or service meets
the needs of your potential customer. The word presentation implies using PowerPoint
and giving a salesy spiel, but it doesn’t always have to be that way—you should actively
listen to your customer’s needs and then act and respond accordingly.
5. Handling objections
Perhaps the most underrated step of the sales process is handling objections. This is
where you listen to your prospect’s concerns and address them. It’s also where many
unsuccessful salespeople drop out of the process—44% of salespeople abandon
pursuit after one rejection, 22% after two rejections, 14% after three, and 12% after four,
even though 80% of sales require at least five follow-ups to convert. Successfully handling
objections and alleviating concerns separates good salespeople from bad and great from
good.
6. Closing
In the closing stage, you get the decision from the client to move forward. Depending on
your business, you might try one of these three closing techniques.
Alternative choice close: Assuming the sale and offering the prospect a choice, where both
options close the sale—for example, “Will you be paying the whole fee up front or in
installments?” or “Will that be cash or charge?”
Extra inducement close: Offering something extra to get the prospect to close, such as a
free month of service or a discount.
Standing room only close: Creating urgency by expressing that time is of the essence—
for example, “The price will be going up after this month” or “We only have six spots left”
7. Follow-up
Once you have closed the sale, your job is not done. The follow-up stage keeps you in
contact with customers you have closed, not only for potential repeat business but for
referrals as well. And since retaining current customers is six to seven times less costly
than acquiring new ones, maintaining relationships is key.
4.3 Prospect for potential customers
The first step is to prospect for customers, which requires some research. This stage has
three components.
1. Create an ideal customer profile (ICP). The goal is to identify and understand your
ideal customers. This helps you determine whom to contact and why you are contacting
them as potential customers. The ICP uses real data to create a fictional characterization
of a client who:
Can provide your company with value (e.g., revenue, influence)
Your company can provide value to (e.g., return on investment, better service)
2. Generate potential leads. Use the ICP to create a list of potential leads that fit this
profile. Use a variety of sources (e.g., online databases, social media) to develop a list of
ideal client companies. Then create a list of prospects from these companies that your
sales team can contact and qualify.
3. Conduct initial qualification. First, qualify the company by conducting research to see
if it meets the criteria that matter to you (e.g., company size, geography, industry, growth
phase). Then qualify the prospects with an interview to determine if they are a good fit
as a customer. Determine if the prospect has:
A need for your product or service.
The budget to purchase your product or service.
The authority to make the purchasing decision.
The timing to make the purchase
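The four qualification criteria above (need, budget, authority, timing) can be sketched as a simple check. The `Prospect` fields and the 90-day timing window below are illustrative assumptions, not part of any real CRM schema:

```python
# Minimal sketch of the need/budget/authority/timing qualification
# check described above. Field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Prospect:
    has_need: bool           # needs the product or service
    budget: float            # budget available for the purchase
    is_decision_maker: bool  # authority to make the purchasing decision
    buys_within_days: int    # timing of the planned purchase

def is_qualified(p: Prospect, price: float, max_days: int = 90) -> bool:
    """A prospect qualifies only when all four criteria hold."""
    return (p.has_need
            and p.budget >= price
            and p.is_decision_maker
            and p.buys_within_days <= max_days)

lead = Prospect(has_need=True, budget=12000, is_decision_maker=True,
                buys_within_days=30)
print(is_qualified(lead, price=10000))  # True
```

A prospect failing any single criterion (for example, insufficient budget) is not qualified, which mirrors the interview-based qualification step described above.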
4. Make contact with prospects
After identifying the ideal prospect, reach out to contact them. This step
has two parts:
Determine the best way to contact the prospect (e.g., telephone, email,
social media).
Reach out to the prospect. Make sure you are prepared (e.g., with a script,
introduction and questions) before making contact. Introduce yourself
and work on building trust, not making a sale.
5. Qualify prospects.
Although you have already done your research to qualify the prospect before
making contact, you still need to determine if they would make an ideal
customer. This can only happen in a direct conversation with the prospect
(either over the phone or in person).
To qualify the prospect, learn more about them. Ask about their goals, budget,
challenges and other issues that will help you to make your decision. Make sure
that the person you are speaking with has the power to make decisions on doing
business with you. When speaking with the prospect, identify opportunities to
provide value.
Qualifying the prospect involves confirming whether they meet the criteria of a
good customer. If they are not a good fit, tell the prospect why. If they are still
interested, determine why.
6. Nurture prospects.
Once you have qualified the prospect, demonstrate the relevance of your solution
to them. This typically involves answering questions about your unique offer, the
benefits you provide, and the problems you solve.
When answering the prospect’s questions and learning about their needs, you
have to nurture them along the process of making a decision. This involves:
Moving the prospect along the stages of awareness
Educating the prospect about the product, service or industry
Personalizing your communications
Responding to common challenges
Common objections at this stage include:
Content of offer (e.g., the offer does not provide enough detail)
Contract terms (e.g., the term is too long)
Ideally, you addressed the common objections during the nurturing phase or
when creating the offer. However, you cannot always address every objection
before the prospect makes it.
To overcome or address objections:
Be patient and measured in your response. Listen to the prospect’s concerns
objectively. Do not rush or pressure the prospect to move forward. Address
objections that are related to each other. For example, if the prospect questions
the value and price, go over everything you’ve included in the offer to show how
the value you provide exceeds the price.
When you have explained your reasoning, ask the prospect if you have properly
addressed their objection.
Read between the lines of generic objections (e.g., “We are not interested”).
Ask more questions to determine the real reasons behind each objection. Listen
carefully to the answers before responding.
First, work on sealing the deal. The goal is to confirm the prospect’s
engagement and work toward the next steps. The key is to make it easier for the
prospect to say yes to the deal. Prime the prospect by reminding them how they
will achieve a specific goal in purchasing your product or service.
To close the deal:
Ask a direct question or make a direct statement (e.g., “Would you like to sign the
deal now?”).
Ask an indirect question (e.g., “Are you satisfied with what is included in the
offer?”).
Provide an incentive to close the deal (e.g., add a sign-up bonus). Offer a free trial
period (e.g., “Try it for one week”).
Emphasize the urgency or scarcity of the offer (e.g., “This is a limited-time offer”).
Ask what else the prospect requires to make a decision. When the prospect has
committed to the purchase, answer any additional questions they have and give
them details on the next steps. Provide a written agreement and summary of the
conversation so that their supervisor or other stakeholders can review it for
accuracy.
If the prospect still responds with “not yet” or “not now” for reasons beyond your
control (or theirs), then return the prospect to the nurturing stage. Stay in touch
and follow up with prospects who are not ready to purchase.
The sales process begins with the buyer. To implement an effective sales process,
you must understand the buyer and then design your sales process to address
their goals, motivations, and needs. This requires identifying and then answering
their “why” question. For instance, why is the buyer looking for a solution? Why
are they looking to you for the solution?
Build a sales process to help your salespeople find the answer to the key
question. Conduct interviews with buyers and salespeople and perform industry
research to find the answers to include in the process.
Set milestones.
Once you’ve defined the stages of your sales process, establish the key steps and
milestones within those stages. A milestone could be identifying where the buyer
is in the sales process or engaging with stakeholders within a certain time
period. Score each milestone to determine how many resources to invest into
that part of the sales process. When you set a milestone for each stage, train
salespeople to meet that milestone at the assigned stage. This will prevent them
from skipping steps or taking the wrong approach at the wrong time (such as
talking about the price too soon). Instructing salespeople on when and how to do
handoffs will also help correct problems in the sales process. This simplifies the
process of helping buyers move from one stage to the next.
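Scoring milestones to decide where to invest resources, as suggested above, might look like the following sketch. The stage names, scores, and prioritization rule are illustrative assumptions:

```python
# Illustrative milestone scoring: invest first in high-scoring
# milestones whose sales training is still incomplete.
milestones = {
    "prospecting":         {"score": 3, "reps_trained": True},
    "presentation":        {"score": 5, "reps_trained": False},
    "handling_objections": {"score": 8, "reps_trained": False},
}

# Rank untrained milestones from highest score to lowest.
priorities = sorted(
    (name for name, m in milestones.items() if not m["reps_trained"]),
    key=lambda name: -milestones[name]["score"],
)
print(priorities)  # ['handling_objections', 'presentation']
```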
Build skills, resources and activities into the sales process to help your
salespeople move to the next milestone. Resources could include brochures, case
studies and whitepapers for a salesperson to share with customers. Provide your
salespeople with specific training for particular milestones or have them engage
in activities for other milestones.
A sales process is not static; it should be refined and improved over time. Get
feedback from salespeople, measure buyer behavior, and track and analyze sales
data to evaluate the effectiveness of your sales process. Use the results to solidify
the successful activities and resources within the sales process, implement
activities and processes to prevent negative outcomes, and remove activities and
resources that do not advance the sales process. This will keep the sales process
relevant, actionable and efficient.
By constantly iterating and improving your sales process, you will: Reduce the
time it takes to onboard new salespeople.
Minimize costly mistakes.
Improve sales forecasting.
Reach sales targets on a more consistent basis.
Align your technology and systems with the sales process.
It’s important to equip your salespeople with technology (such as CRM software)
that enables them to perform each step of the sales process efficiently. However,
software tools alone won’t make salespeople more effective or encourage them
to follow best practices. You need to combine the technology with supportive
systems, guidance and resources.
Provide technology that streamlines the sales process, collects and organizes
information on customers, and lists the required activities for salespeople to
follow.
Create systems and resources to support the sales team’s use of the technology
during the sales process, such as these:
Checklists to make sure all steps are performed in order
Content and video to demonstrate the importance of the stages and milestones
Buyer-focused content tied to where they are in the sales process
Reminders to prevent salespeople from skipping steps
Training content for each step in the sales process
5. Sales Planning
Sales planning is a set of strategies that are designed to help sales teams reach
their target sales quotas and help the company reach its overall sales goals. Sales
planning helps to forecast the level of sales you want to achieve and outlines a
plan to help you accomplish your goals. A sales plan covers past sales, risks,
market conditions, your target personas, and plans for prospecting and selling.
Sales planning occurs at various stages of the sales cycle. Generally, businesses
set monthly or quarterly sales goals. Sales don’t happen all on their own just
because your sales manager sets goals. By defining the steps in a sales plan, sales
managers can help their teams reach their targets and enjoy the rewards that
come with collective success.
Another important part of the sales planning process is evaluating the company
and understanding its position in the marketplace. Market conditions are ever-
changing, so it’s important to study them and to adjust your sales plan
accordingly.
Sales plans typically account for short- and long-term planning. Goals without
rewards aren’t sufficient to incentivize each salesperson to reach for the sky. The
right tools and sales strategies go a long way toward motivating salespeople to
reach their targets.
As salespeople reach their goals, you’ll want to set new ones. Every time you set
new targets, it’s appropriate to amend your sales plan. Changes to your sales
plan may also mean that you need to change how your company allocates
resources to ensure that your salespeople have the resources they need. If you
haven’t already invested in a cloud-based phone system and VoIP integrations,
you might consider how setting up a sales call center, complete with call center
software, could help streamline your sales activities and help you reach your
goals more easily.
In case there’s any doubt about the important role that your sales plan plays in
your business, you may be interested to know that a little more than half of sales
professionals annually miss their sales quotas. Sales experts attribute this
underwhelming percentage to the lack of strategic planning and failure to align
sales goals in accordance with conditions in the marketplace.
Top sales performances only come about after proper planning and preparation.
A well-thought-out plan streamlines sales tasks, which increases the efficiency
and productivity of your sales teams.
For the best results, develop your sales plan well in advance. The best plans
account for multiple levels. A common approach is to start with annual targets
and break them down by the quarter, month, and week. Also, you’ll need to
pre-plan your resources, logistics, and activities for every part of your sales plan.
These activities will give you a road map that leads to sales success.
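The annual-to-weekly breakdown described above can be sketched as a short calculation. An even split is assumed here; real plans often weight quarters for seasonality:

```python
# Break an annual sales target into quarterly, monthly, and weekly
# goals (even split assumed for illustration).
def break_down_target(annual_target: float) -> dict:
    return {
        "annual": annual_target,
        "quarterly": annual_target / 4,
        "monthly": annual_target / 12,
        "weekly": annual_target / 52,
    }

goals = break_down_target(1_200_000)
print(goals["monthly"])  # 100000.0
```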
Short-term planning and monitoring are important activities because they
give you the opportunity to make changes to your sales plan based on weekly or
monthly sales results. If your salespeople are way ahead of – or way behind on –
your projections, short-term planning will ensure that sales goals are reasonable
and attainable.
A good sales plan means that your sales teams can function as efficiently as
possible. Inside sales reps and call center agents can easily use call center
software for sales call planning, freeing up outside salespeople to focus on
making in-person calls and closing sales.
Alignment between sales and marketing improves the customer experience
because it helps to improve customer service and to create a single customer
journey.
Salespeople don’t always use marketing content when there’s no alignment.
A proven sales plan template should be part of your brand strategy because it
will guide your business growth every step of the way. You could think of it as
telling your sales story. Every story tells the who, what, why, where, when, and
how from beginning to end.
Let’s break the strategic process down into five parts:
1. Goal setting
2. Sales forecasting
3. Marketing and customer research
4. Prospecting
5. Sales
One process seamlessly dovetails with the next. Start with your high-level goals
and then factor in the various market factors. Set realistic goals as a benchmark
for forecasting reasonable goals in the future. You’ll need to base your goals on
several things, including the size of the market, your annual company goals, your
sales teams’ experience, and the resources that you have available.
A cloud-based phone system offers dashboard analytics that give you metrics
such as the number of inbound calls and outbound calls and the average call
length. This will allow you to set standards for your call agents. Also, it will help
you to scale your contact center so that it’s not over- or understaffed.
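The dashboard metrics mentioned above can be computed from a call log. The log structure used here is an illustrative assumption, not a real phone-system API:

```python
# Compute inbound/outbound call counts and average call length
# from a small illustrative call log.
calls = [
    {"direction": "inbound",  "seconds": 240},
    {"direction": "outbound", "seconds": 180},
    {"direction": "inbound",  "seconds": 300},
]

inbound = sum(1 for c in calls if c["direction"] == "inbound")
outbound = sum(1 for c in calls if c["direction"] == "outbound")
avg_length = sum(c["seconds"] for c in calls) / len(calls)

print(inbound, outbound, avg_length)  # 2 1 240.0
```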
Marketing and customer research is an important activity that helps you position
your company properly for business growth. The right data will determine your niche
markets so you can start building traction with a receptive audience. Your niche
encompasses your products, content, culture, and branding.
The next step is to identify the most likely sources for finding high-quality leads
so that you can start building a quality prospect list. It’s also a great idea to
leverage current client relationships as you build your prospecting plan.
A sales plan typically:
Defines targets
Creates strategies
Identifies tactics
Motivates teams
Sets budgets to achieve targets
Reviews goals and suggests improvements
The most basic form of marketing analytics is to provide marketers with the
tools to understand what business impact their marketing campaigns have. This
task can range from something as straightforward as providing standard metrics
(click-through rate, ROI, etc.) at the campaign level to an analysis as complex as
developing a marketing mix model to come up with the optimal marketing strategy
to maximize profit.
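The standard campaign metrics named above follow simple formulas: click-through rate is clicks divided by impressions, and ROI is net return divided by cost. A minimal sketch:

```python
# Standard campaign-level marketing metrics.
def click_through_rate(clicks: int, impressions: int) -> float:
    # CTR = clicks / impressions
    return clicks / impressions

def roi(revenue: float, cost: float) -> float:
    # ROI = (revenue - cost) / cost
    return (revenue - cost) / cost

print(click_through_rate(50, 10_000))    # 0.005
print(roi(revenue=15_000, cost=10_000))  # 0.5
```

So a campaign with 50 clicks on 10,000 impressions has a 0.5% CTR, and one that returns 15,000 on a 10,000 spend yields a 50% ROI.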
Find Opportunities in Marketing Performance
While marketing performance analytics will let you know how a campaign performs
on the whole, it is only when someone digs into many cuts of the data that you uncover
whether certain types of users respond better to a particular marketing
treatment - perhaps some campaigns work better in certain markets
or on mobile. Marketing analysts mine and model your data to uncover nuggets
that can be acted on by marketers.
Understand Your Customers
Diving deep into customer demographics and behaviors can help you understand
which customers are more likely to be successful. This information can then be used by
marketers when selecting their target audiences. Through data mining and
statistical modeling, marketing analysts can provide a rich understanding of your
customers and what drives success.
Understand Your Competition
Market research is often within the domain of marketing analytics and it can help
marketers understand the competition better and adjust their strategy
accordingly.
According to most research on the topic, sales reps spend more time on non-sales
activities than on selling. These include making sales forecasts, prioritizing leads, and
deciding how to approach leads, all of which can be automated with sales analytics
applications. To perform such tasks, sales reps can use behavioural analytics.
Improved prioritization