
Chapter 2

Data Science

Contents covered in this chapter:


1. More about data science,
2. Data vs. Information,
3. Data types and representation,
4. Data value chain, and
5. Basic concepts of big data.
An Overview of Data Science

• Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured, semi-structured, and unstructured data.
• Data science is much more than simply analyzing data.
• It offers a range of roles and requires a range of skills.
 Why is data science described as a multi-disciplinary field?
Data vs. Information
Data
• Defined as a representation of facts, concepts, or instructions in a formalized manner, suitable for communication, interpretation, or processing by humans or electronic machines.
• It can be described as unprocessed facts and figures.
• It is represented with the help of characters such as letters (A-Z, a-z), digits (0-9), or special characters (+, -, /, *, <, >, =, etc.).
Information
• Information is processed data on which decisions and actions are based.
• It is data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in the current or prospective action or decision of the recipient.
• Furthermore, information is interpreted data: created from organized, structured, and processed data in a particular context.
Data Processing Cycle

 Data processing is the re-structuring or re-ordering of data by people or machines to increase its usefulness and add value for a particular purpose.
 Data processing consists of the following three basic steps:
 Input − in this step, the input data is prepared in some convenient form for processing. The form will depend on the processing machine.
 Processing − in this step, the input data is changed to produce data in a more useful form.
 Output − at this stage, the result of the preceding processing step is collected. The particular form of the output data depends on the use of the data. For example, the output data may be payroll for employees.
Data types and their representation

Data types can be determined from two perspectives: computer programming and data analytics.
1. Data types from the Computer Programming perspective
There are five common data types in computer programming:
i. Integers (int) - used to store whole numbers, mathematically known as integers
ii. Booleans (bool) - used to represent a value restricted to one of two values: true or false
iii. Characters (char) - used to store a single character
iv. Floating-point numbers (float) - used to store real numbers
v. Alphanumeric strings (string) - used to store a combination of characters and numbers
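A minimal Python sketch of these five data types (Python has no separate char type, so a "char" is simply a one-character string; the variable names and values are illustrative only):

# Five common programming data types, shown with Python's built-in types.
age = 25               # integer (int): a whole number
is_enrolled = True     # boolean (bool): restricted to one of two values
grade = "A"            # character (char): a single character (a 1-character string in Python)
gpa = 3.75             # floating-point number (float): a real number
student_id = "WU2023"  # alphanumeric string (string): a mix of letters and digits

print(type(age), type(is_enrolled), type(grade), type(gpa), type(student_id))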
2. Data types from the Data Analytics perspective

From a data analytics point of view, there are three common data types:
Structured,
Semi-structured, and
Unstructured data types.


A. Structured Data

• Structured data is data that adheres to a pre-defined data model and is therefore straightforward to analyze.
• It conforms to a tabular format with relationships between the different rows and columns.
• Common examples of structured data are Excel files or SQL databases.
• Each of these has structured rows and columns that can be sorted.
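As a hedged illustration, the small table below is built with the pandas library; the column names and values are hypothetical.

# A tiny structured (tabular) dataset: every row follows the same schema.
import pandas as pd

students = pd.DataFrame({
    "id":         [1, 2, 3],
    "name":       ["Abebe", "Chaltu", "Sara"],
    "department": ["IT", "CS", "SE"],
})

# Because the data conforms to a tabular model, it can be sorted and filtered directly.
print(students.sort_values("name"))
print(students[students["department"] == "IT"])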
B. Semi-structured Data

Semi-structured data is a form of structured data that does not conform to the formal structure of data models associated with relational databases or other forms of data tables, but nonetheless contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data.
 It is also known as a self-describing structure. JSON and XML are common forms of semi-structured data.
• For instance, the following hierarchy:
– Wallaga University
– College of Engineering and Technology
– Department of Information Technology, …
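The same hierarchy can be expressed as JSON, one of the semi-structured formats named above; the keys chosen here are only illustrative.

# A self-describing record: the tags (keys) carry the structure.
import json

record = {
    "university": "Wallaga University",
    "college": "College of Engineering and Technology",
    "department": "Department of Information Technology",
}

print(json.dumps(record, indent=2))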
C. Unstructured Data

 Unstructured data is information that either does not have a predefined data model or is not organized in a pre-defined manner.
 It is typically text-heavy but may also contain data such as dates, numbers, and facts.
 This results in irregularities and ambiguities that make it difficult to understand using traditional programs, as compared to data stored in structured databases.
 Common examples of unstructured data include audio, video files, or NoSQL databases.
 Metadata
– Metadata is data about data; it is the last category of data type considered here.
– It provides additional information about a specific set of data.
– For instance, a photograph's metadata may record when the image was taken, its size, and the device used.
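A hypothetical metadata record for a photograph, written as a plain Python dictionary; the field names and values are assumptions made for illustration.

# Metadata: information that describes another piece of data (the photo itself).
photo_metadata = {
    "filename": "graduation.jpg",
    "date_taken": "2023-07-01",
    "resolution": "4032x3024",
    "size_mb": 3.2,
    "device": "smartphone camera",
}

print(photo_metadata)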
Data Value Chain

• The data value chain was introduced to describe the information flow within a big data system as a series of steps needed to generate value and useful insights from data.
• The Big Data Value Chain identifies the following five key high-level activities:
1. Data Acquisition
2. Data Analysis
3. Data Curation
4. Data Storage
5. Data Usage
1. Data Acquisition

 Data acquisition is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any other storage solution on which data analysis can be carried out.
• Data acquisition is one of the major big data challenges in terms of infrastructure requirements.
• The infrastructure required to support the acquisition of big data must deliver low, predictable latency in both capturing data and in executing queries; be able to handle very high transaction volumes, often in a distributed environment; and support flexible and dynamic data structures.
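A minimal sketch of the gather-filter-clean idea using pandas; the file name, column names, and filtering rule are assumptions for illustration, not part of the chapter.

# Acquisition sketch: gather raw records, clean them, filter them, then load into storage.
import pandas as pd

raw = pd.read_csv("sensor_readings.csv")                 # gather raw data
clean = raw.dropna(subset=["temperature"])               # clean: drop incomplete records
filtered = clean[clean["temperature"] > -40]             # filter: discard implausible values
filtered.to_csv("warehouse/readings.csv", index=False)   # load into a storage layer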
2. Data Analysis

• Data analysis is concerned with making the raw data acquired amenable to use in decision-making, as well as domain-specific usage.
• Data analysis involves exploring, transforming, and modeling data with the goal of highlighting relevant data, synthesizing and extracting useful hidden information with high potential from a business point of view.
• Related areas include data mining, business intelligence, and machine learning.
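A small analysis sketch that continues the hypothetical sensor example from the acquisition step; the column names are assumptions.

# Analysis sketch: explore the cleaned data and summarize it for decision-making.
import pandas as pd

readings = pd.read_csv("warehouse/readings.csv")

# Explore: basic descriptive statistics
print(readings["temperature"].describe())

# Transform and summarize: average temperature per station, highest first
summary = readings.groupby("station")["temperature"].mean()
print(summary.sort_values(ascending=False))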
3. Data Curation
• It is the active management of data over its life cycle to ensure it meets
the necessary data quality requirements for its effective usage.
• Data curation processes can be categorized into different activities such as content creation, selection, classification, transformation, validation, and preservation.
• Data curation is performed by expert curators.
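As a hedged sketch of one curation activity, validation, the checks below apply simple quality rules to the same hypothetical readings file; the rules themselves are illustrative.

# Validation sketch: verify that records meet basic quality rules before preservation.
import pandas as pd

readings = pd.read_csv("warehouse/readings.csv")

required_columns = {"station", "temperature", "timestamp"}
problems = []
if not required_columns.issubset(readings.columns):
    problems.append("missing required columns")
if readings["temperature"].isna().any():
    problems.append("missing temperature values")
if (readings["temperature"] > 60).any():
    problems.append("implausibly high temperatures")

print("validation issues:", problems or "none")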
4. Data Storage
• It is the persistence and management of data in a scalable way that
satisfies the needs of applications that require fast access to the
data.
• Relational Database Management Systems (RDBMS) have been the main, and almost unique, solution to the storage paradigm for nearly 40 years.
• NoSQL technologies have been designed with the scalability goal in
mind and present a wide range of solutions based on alternative
data models.
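A minimal relational-storage sketch using Python's built-in sqlite3 module; the table and column names are illustrative, and a production system would use a full RDBMS or a NoSQL store.

# Storage sketch: persist rows in a relational table and read them back.
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (station TEXT, temperature REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("A1", 21.5), ("B2", 19.8)],
)
conn.commit()

for row in conn.execute("SELECT station, temperature FROM readings"):
    print(row)
conn.close()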
5. Data Usage
• Data usage covers the data-driven business activities that need access to data, its analysis, and the tools needed to integrate the data analysis within the business activity.
• Data usage in business decision-making can enhance competitiveness through the reduction of costs, increased added value, or any other parameter that can be measured against existing performance criteria.
Basic concepts of big data

 What is Big Data?

• Big data is the term for a collection of data sets so large and complex that it becomes difficult to process them using on-hand database management tools or traditional data processing applications.
• Put simply, it means very large datasets.
 Big data is characterized by 3 Vs and more:
– Volume: large amounts of data (zettabytes / massive datasets)
– Velocity: data is live, streaming, or in motion
– Variety: data comes in many different forms from diverse sources
– Veracity: can we trust the data? How accurate is it? etc.
[Diagram: the characteristic Vs of Big Data]
Clustered Computing and Hadoop Ecosystem
Clustered Computing
 Why has cluster computing become necessary?
 Because of the huge size of big data, individual computers are often inadequate for handling it.
 To better address the high storage and computational needs of big data, computer clusters are a better fit.
Benefits of Cluster Computing
 Resource Pooling: Combining the available storage space to hold data.
– CPU and memory pooling are extremely important here.
 High Availability: Clusters provide fault tolerance and availability to
prevent hardware or software failures from affecting access to data and
processing.
 Easy Scalability: Clusters make it easy to scale horizontally by adding
additional machines to the group.
– This means the system can react to changes in resource requirements
without expanding the physical resources on a machine.
 Using clusters requires a solution for managing cluster membership,
coordinating resource sharing, and scheduling actual work on individual
nodes. Cluster membership and resource allocation can be handled by
software like Hadoop’s YARN (which stands for Yet Another Resource
Negotiator).
Hadoop and its Ecosystem

 Hadoop is an open-source framework intended to make interaction with big data easier.
 It allows the distributed processing of large datasets across clusters of computers using simple programming models.
 It is inspired by a technical document published by Google.
The four key characteristics of Hadoop are:
– Economical: enables ordinary computers to process big data
– Reliable: stores copies of the data on different machines and resists hardware failure
– Scalable: easily scalable both horizontally and vertically
– Flexible: can store as much structured and unstructured data as needed
Hadoop Ecosystem
Hadoop has an ecosystem that has evolved around four core components:
 data management,
 access,
 processing, and
 storage.
 It is continuously growing to meet the needs of Big Data.
 It comprises the following components and many others:
• HDFS: Hadoop Distributed File System
• YARN: Yet Another Resource Negotiator
• MapReduce: Programming based Data Processing
• Spark: In-Memory data processing
• PIG, HIVE: Query-based processing of data services
• HBase: NoSQL Database
• Mahout, Spark MLLib: Machine Learning algorithm libraries
• Solr, Lucene: Searching and Indexing
• Zookeeper: Managing cluster
• Oozie: Job Scheduling
[Diagram: the Hadoop ecosystem]
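To make the MapReduce entry above concrete, here is a conceptual word-count sketch in plain Python. Real Hadoop MapReduce jobs are written against the Hadoop API and run across a cluster; this only illustrates the map and reduce ideas.

# MapReduce sketch: count words across a collection of documents.
from collections import defaultdict

documents = ["big data needs big tools", "data tools process data"]

# Map: emit a (word, 1) pair for every word in every document
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle and reduce: group pairs by word and sum the counts
counts = defaultdict(int)
for word, one in mapped:
    counts[word] += one

print(dict(counts))  # e.g. {'big': 2, 'data': 3, ...}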
Big Data Life Cycle with Hadoop

1. Ingesting data into the system
 The first stage of Big Data processing is Ingest. The data is ingested or transferred to Hadoop from various sources such as relational databases, systems, or local files. Sqoop transfers data from RDBMS to HDFS, whereas Flume transfers event data.
2. Processing the data in storage
 The second stage is Processing. In this stage, the data is stored and processed. The data is stored in the distributed file system, HDFS, and in the NoSQL distributed database, HBase. Spark and MapReduce perform the data processing.
3. Computing and analyzing data
 The third stage is Analyze. Here, the data is analyzed by processing frameworks such as Pig, Hive, and Impala. Pig converts the data using map and reduce operations and then analyzes it. Hive is also based on the map-reduce programming model and is most suitable for structured data.
4. Visualizing the results
 The fourth stage is Access, which is performed by tools such as Hue and
Cloudera Search. In this stage, the analyzed data can be accessed by
users.
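As a hedged end-to-end illustration of the processing and analysis stages, the sketch below uses PySpark; it assumes a working Spark installation and an HDFS file at the given path, and the file and column names are inventions for the example.

# Life-cycle sketch: process data stored in HDFS and run a simple analysis with Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("big-data-lifecycle").getOrCreate()

# Process: load data that was previously ingested into HDFS
sales = spark.read.csv("hdfs:///data/sales.csv", header=True, inferSchema=True)

# Analyze: a simple aggregation, comparable to what a Hive or Pig query would express
sales.groupBy("region").sum("amount").show()

spark.stop()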
End of Chapter Two

Thank You!
