Information Management

The document provides an overview of data, information systems, and the evolution of various types of information systems, including Transaction Processing Systems (TPS), Management Information Systems (MIS), Decision Support Systems (DSS), and Executive Information Systems (EIS). It highlights the importance of information technology in education and business, emphasizing the need for technological literacy, access to information, and cost reduction in education. Additionally, it discusses the role of Enterprise Information Systems, including Customer Relationship Management (CRM), Supply Chain Management (SCM), and Enterprise Resource Planning (ERP), in enhancing organizational efficiency and decision-making.


UNIT I INTRODUCTION

Data, Information, Information System, evolution, types based on functions and hierarchy, Enterprise and functional information systems.
Data
Data is defined as facts, figures, or values that are stored in or used by a computer; an example of data is the raw material collected for a research paper. More formally, data are the quantities, characters, or symbols on which operations are performed by a computer, which may be stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or mechanical recording media.
Information
Information is data that has been processed into a form that is meaningful to the recipient and has real or perceived value in current or prospective actions or decisions.
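The distinction can be illustrated with a minimal sketch (all figures hypothetical): the raw daily sales numbers are data; the totals derived from them, which a manager can act on, are information.

```python
# Raw data: unprocessed daily sales figures (hypothetical values)
daily_sales = [1200, 950, 1430, 1100, 1275]

# Processing turns the data into information a manager can act on
total_sales = sum(daily_sales)
average_sales = total_sales / len(daily_sales)

print(f"Total weekly sales: {total_sales}")     # Total weekly sales: 5955
print(f"Average daily sales: {average_sales}")  # Average daily sales: 1191.0
```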
Information Technology (IT)
Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit, and manipulate data, often in the context of a business or other enterprise.
Need
• Education is a lifelong process; therefore anytime, anywhere access to it is needed
• Information explosion is an ever-increasing phenomenon; therefore there is a need to get access to this information
• Education should meet the needs of a variety of learners, and IT is important in meeting this need
• Society requires that individuals possess technological literacy
• We need to increase access and bring down the cost of education to meet the challenges of illiteracy and poverty; IT is the answer
Importance
• access to a variety of learning resources
• immediacy to information
• anytime learning
• anywhere learning
• collaborative learning
• multimedia approach to education
Information system
An information system (IS) is a system composed of people and computers that processes or interprets information. The term is also sometimes used in a more restricted sense to refer only to the software used to run a computerized database, or only to a computer system.
Importance of Information system
1. To control the creation and growth of records
Despite decades of using various non-paper storage media, the amount of paper in our offices
continues to escalate. An effective records information system addresses both creation control
(limits the generation of records or copies not required to operate the business) and records
retention (a system for destroying useless records or retiring inactive records), thus stabilizing
the growth of records in all formats.
2. To reduce operating costs
Recordkeeping requires administrative dollars for filing equipment, space in offices, and staffing to maintain an organized filing system (or to search for lost records when there is no organized system). It costs considerably less per linear foot to store inactive records in a data records center than in the office, and a records management program offers opportunities for cost savings in space and equipment and for more productive use of staff.
3. To improve efficiency and productivity
Time spent searching for missing or misfiled records is non-productive. A good records
management program (e.g. a document system) can help any organization upgrade its
recordkeeping systems so that information retrieval is enhanced, with corresponding
improvements in office efficiency and productivity. A well designed and operated filing
system with an effective index can facilitate retrieval and deliver information to users as
quickly as they need it. Moreover, a well managed information system acting as a corporate
asset enables organizations to objectively evaluate their use of information and accurately lay
out a roadmap for improvements that optimize business returns
4. To assimilate new records management technologies
A good records management program provides an organization with the capability to
assimilate new technologies and take advantage of their many benefits. Investments in new computer systems, whether financial, business, or otherwise, don't solve filing problems unless current manual recordkeeping or bookkeeping systems are analyzed (and occasionally overhauled) before automation is applied.
5. To ensure regulatory compliance
In terms of recordkeeping requirements, China is a heavily regulated country. These laws can
create major compliance problems for businesses and government agencies since they can be
difficult to locate, interpret and apply. The only way an organization can be reasonably sure that it is in full compliance with laws and regulations is by operating a good management information system that takes responsibility for regulatory compliance while working closely with the local authorities. Failure to comply with laws and regulations could result in severe fines, penalties, or other legal consequences.
Evolution of information system
The first business applications of computers (in the mid-1950s) performed repetitive, high-volume, transaction-computing tasks. The computers crunched numbers, summarizing and organizing transactions and data in the accounting, finance, and human resources areas. Such systems are generally called transaction processing systems (TPSs).
Management Information Systems (MISs): these systems access, organize, summarize and
display information for supporting routine decision making in the functional areas.
Office Automation Systems (OASs): such as word processing systems were developed to
support office and clerical workers.
Decision Support Systems (DSSs): were developed to provide computer-based support for complex, non-routine decisions.
End-user computing: the use or development of information systems by the principal users of the systems' outputs, such as analysts, managers, and other professionals.
Intelligent Support Systems (ISSs): include expert systems, which provide the stored knowledge of experts to non-experts, and a new type of intelligent system with machine-learning capabilities that can learn from historical cases.
Knowledge Management Systems: Support the creating, gathering, organizing, integrating
and disseminating of organizational knowledge.
Data Warehousing: a data warehouse is a database designed to support DSS, ESS and other analytical and end-user activities.
Mobile Computing: information systems that support employees who are working with customers or business partners outside the physical boundaries of their company; can be done over wired or wireless networks.
Kinds of Information Systems
 Operational-level systems
Support operational managers by monitoring the day-to-day elementary activities and transactions of the organization, e.g. TPS.
 Knowledge-level systems
Support knowledge and data workers in designing products, distributing information, and
coping with paperwork in an organization.
 Management-level-systems
Support the monitoring, controlling, decision-making, and administrative activities of
middle managers, e.g. MIS.
 Strategic-level systems
Support long-range planning activities of senior management.

Information system based on hierarchy

TRANSACTION PROCESSING SYSTEM


Transaction Processing Systems are operational-level systems at the bottom of the pyramid. They are usually operated directly by shop-floor workers or front-line staff, and they provide the key data required to support the management of operations. This data is usually obtained through the automated or semi-automated tracking of low-level activities and basic transactions.
• Transaction = an event that generates or modifies data
• Used at the operational level of the organization
• Processes business events and transactions to produce reports
• Goal: to automate repetitive information processing activities within organizations
• Increases speed
• Supports the monitoring, collection, storage, processing, and dissemination of the organization's basic business transactions
• Mainly includes accounting and financial transactions
• Mainly used for providing other information systems with data.
Role of TPS
• Produce information for other systems
• Cross boundaries (internal and external)
• Used by operational personnel + supervisory levels
• Efficiency oriented
Examples
• Payroll processing
• Sales and order processing
• Inventory management
• Accounts payable and receivable
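The examples above can be sketched as a minimal transaction processing routine (all product names and quantities are hypothetical): each event updates the stored records and is logged so that other systems, such as an MIS, can draw on the data.

```python
# Minimal TPS sketch: a sales-order transaction either updates
# inventory and is logged, or is rejected. Hypothetical data.
inventory = {"widget": 100, "gadget": 50}
transaction_log = []

def process_order(product, quantity):
    """Process one sales transaction: validate, update records, log it."""
    if inventory.get(product, 0) < quantity:
        return False  # reject: insufficient stock on hand
    inventory[product] -= quantity
    transaction_log.append({"product": product, "quantity": quantity})
    return True

process_order("widget", 30)   # accepted: widget stock falls to 70
process_order("gadget", 80)   # rejected: exceeds stock on hand
print(inventory)              # {'widget': 70, 'gadget': 50}
print(len(transaction_log))   # 1
```

The transaction log is the raw material that the reporting systems higher in the pyramid summarize.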

MANAGEMENT INFORMATION SYSTEM


For historical reasons, many of the different types of Information Systems found in
commercial organizations are referred to as "Management Information Systems". However,
within our pyramid model, Management Information Systems are management-level systems
that are used by middle managers to help ensure the smooth running of the organization in the
short to medium term. The highly structured information provided by these systems allows
managers to evaluate an organization's performance by comparing current with previous
outputs.
• Refers to the data, equipment and computer programs that are used to develop information
for managerial use
• Converts raw data from transaction processing system into meaningful form
• Focus on the information requirements of low to middle level managers
Role of MIS
• Based on internal information flows
• Support relatively structured decisions
• Inflexible and have little analytical capacity
• Used by lower and middle managerial levels
• Deals with the past and present rather than the future
• Efficiency oriented
Some examples of MIS
• Sales management systems
• Inventory control systems
• Budgeting systems
• Management Reporting Systems (MRS)
• Personnel (HRM) systems
Functions
INPUTS: internal transactions, internal files, structured data
PROCESSING: sorting, merging, summarizing
OUTPUT: summary reports, action reports, detailed reports
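The conversion of raw TPS records into meaningful summary information can be sketched as follows (regions and amounts are hypothetical):

```python
from collections import defaultdict

# Hypothetical transaction records produced by a TPS
transactions = [
    {"region": "North", "amount": 500},
    {"region": "South", "amount": 300},
    {"region": "North", "amount": 200},
    {"region": "South", "amount": 450},
]

# MIS processing: sort and summarize the raw transactions
# into a summary report by region
summary = defaultdict(int)
for t in transactions:
    summary[t["region"]] += t["amount"]

for region in sorted(summary):
    print(f"{region}: {summary[region]}")
# North: 700
# South: 750
```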

DECISION SUPPORT SYSTEM


A Decision Support System can be seen as a knowledge-based system, used by senior managers, which facilitates the creation of knowledge and allows its integration into the organization. These systems are often used to analyze existing structured information and allow managers to project the potential effects of their decisions into the future. Such systems are usually interactive and are used to solve ill-structured problems. They offer access to databases and analytical tools, allow "what if" simulations, and may support the exchange of information within the organization.
FUNCTIONS OF A DSS

INPUTS: internal transactions, internal files, external information
PROCESSING: modelling, simulation, analysis, summarizing
OUTPUT: summary reports, forecasts, graphs / plots

Role of DSS
• Support ill-structured or semi-structured decisions
• Have analytical and/or modelling capacity
• Used by more senior managerial levels
• Are concerned with predicting the future
• Are effectiveness oriented
Some examples of DSS
Group Decision Support Systems (GDSS)
Computer Supported Co-operative work (CSCW)
Logistics systems
Financial Planning systems
Spreadsheet Models
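A "what if" simulation of the kind described above can be sketched with a toy pricing model (the demand formula, prices, and costs are all hypothetical, not any particular DSS product):

```python
def projected_profit(price, unit_cost, base_demand, elasticity):
    """Toy demand model: demand falls linearly as price rises."""
    demand = max(0, base_demand - elasticity * price)
    return (price - unit_cost) * demand

# "What if" analysis: compare candidate prices (hypothetical figures)
for price in (10, 12, 14):
    profit = projected_profit(price, unit_cost=6, base_demand=1000, elasticity=40)
    print(price, profit)
# 10 2400
# 12 3120
# 14 3520
```

A manager uses such a model interactively, varying the inputs to see the projected effect of each decision before committing to it.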
EXECUTIVE INFORMATION SYSTEM
Executive Information Systems are strategic-level information systems found at the top of the pyramid. They help executives and senior managers analyze the environment in which the organization operates, identify long-term trends, and plan appropriate courses of action. The information in such systems is often weakly structured and comes from both internal and external sources. Executive Information Systems are designed to be operated directly by executives without the need for intermediaries, and to be easily tailored to the preferences of the individual using them.
Specialized form of DSS
• Used by top-level managers
• Reduce the information overload on executives
• Makes use of internal and external information
• Provides managers and executives flexible access to information for monitoring operational
results and general business conditions
• Provides a comprehensive picture of business performance by analysing key performance
indicators for growth
• Meets strategic information needs of the top management
• Also known as Executive Support System
Role of EIS
• Are concerned with ease of use
• Are concerned with predicting the future
• Are effectiveness oriented
• Are highly flexible
• Support unstructured decisions
• Use internal and external data sources
• Used only at the most senior management levels
Functions of EIS

INPUTS: external data, internal files, pre-defined models
PROCESSING: summarizing, simulation, "drilling down"
OUTPUT: summary reports, forecasts, graphs / plots
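The "drilling down" mentioned above means moving from a summary figure to the detail behind it; a minimal sketch with hypothetical revenue data:

```python
# Hypothetical KPI data: revenue (in millions) by division and region
revenue = {
    "Consumer": {"North": 4.2, "South": 3.1},
    "Enterprise": {"North": 6.5, "South": 2.2},
}

# Top level: one summary figure per division (what the executive sees first)
for division, regions in revenue.items():
    print(division, round(sum(regions.values()), 1))
# Consumer 7.3
# Enterprise 8.7

# "Drilling down": the detail behind the Consumer summary
print(revenue["Consumer"])   # {'North': 4.2, 'South': 3.1}
```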

ESS- Executive support systems


Executive support systems are intended to be used directly by senior managers to support non-programmed decisions in strategic management. This information is often external, unstructured, and even uncertain. The exact scope and context of such information is often not known beforehand. This information is intelligence-based:
• Market intelligence
• Investment intelligence
• Technology intelligence
Following are some examples of such intelligence information, which often serves as a source for an ESS:
• External databases
• Technology reports like patent records etc.
• Technical reports from consultants
• Market reports
• Confidential information about competitors
• Speculative information like market conditions
• Government policies
• Financial reports and information
Advantages of ESS
• Easy for upper-level executives to use
• Ability to analyze trends
• Augmentation of managers' leadership capabilities
• Enhance personal thinking and decision making
• Contribution to strategic control flexibility
• Enhance organizational competitiveness in the market place
• Instruments of change
• Increased executive time horizons.
• Better reporting system
• Improved mental model of business executive
• Help improve consensus building and communication
• Improve office automation
• Reduce time for finding information
Disadvantages of ESS
• Functions are limited
• Hard to quantify benefits
• Executive may encounter information overload
• System may become slow
• Difficult to keep current data
• May lead to less reliable and insecure data
• Excessive cost for small company
Enterprise Information Systems

Types of Enterprise Systems


1. Customer relationship management (CRM)
2. Enterprise resource planning (ERP)
3. Supply chain management (SCM)
Customer relationship management (CRM)
The CRM system is designed to collect customer data and forecast sales and market opportunities. It tracks all communications with clients, assists with lead management, and can enhance customer service and boost sales. CRM usually integrates closely with the sales and marketing modules.
The sales module handles workflows like inquiries, quotations, orders, and invoices. It helps boost leads, speed up the sales cycle, and increase revenue.
The marketing module helps build highly personalized marketing campaigns and automate communications via social media, email, and advertisements based on customer segmentation features.
Both modules provide detailed reports, whether on sales pipelines, lead sources' effectiveness, activity, forecasts, case logs, profitability, or marketing campaign performance, to measure the effectiveness of efforts and shape plans and spending.

Key benefits of CRM


Enhanced customer service.
Facilitates better customer service and enhances the effectiveness of marketing efforts through the
centralized storage and use of customer data and history.
Sales automation.
Automating the sales process helps improve its efficiency, increase SDR (sales development representative) capacity, and eliminate bottlenecks in lead acquisition.
Customer segmentation.
Allows you to identify and segment customers based on a myriad of attributes, preferences,
behaviors, and buying patterns.
Customer retention.
Helps in retaining customers by tracking their interactions with the company and offering
personalized services.
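Customer segmentation of the kind listed above can be sketched as simple rules over buying behaviour (the thresholds, segment names, and customer records are all hypothetical):

```python
# Hypothetical customer records from a CRM database
customers = [
    {"name": "A", "orders": 12, "total_spend": 5400},
    {"name": "B", "orders": 2,  "total_spend": 300},
    {"name": "C", "orders": 6,  "total_spend": 800},
]

def segment(customer):
    """Assign a segment from buying behaviour (thresholds are illustrative)."""
    if customer["total_spend"] >= 2000 and customer["orders"] >= 5:
        return "loyal"
    if customer["orders"] >= 5:
        return "frequent"
    return "occasional"

print({c["name"]: segment(c) for c in customers})
# {'A': 'loyal', 'B': 'occasional', 'C': 'frequent'}
```

Real CRM products apply far richer attributes (preferences, behaviors, buying patterns), but the principle is the same: rules or models partition the customer base so campaigns and service can be personalized.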
Supply chain management (SCM)
The SCM software streamlines your entire supply chain, ensures a smooth flow of goods from
supplier to customer, and makes these processes adjustable to market shifts. This module helps
employees, such as purchasing agents, inventory planners, warehouse managers and senior supply
chain leaders, to get detailed information and optimize inventory levels, prioritize orders,
maximize on-time shipments, avoid supply chain disruptions and identify inefficient processes.

Procurement module.
Helps assess the needs of an organization in terms of goods consumption. It provides automation,
tracking, and quotes analysis, along with invoice management, contracts, and billing.
Inventory management module.
Enables inventory control by tracking item quantities and location, offering a complete picture of
current and incoming inventory and preventing stock-outs and delays. The module can also
compare sales trends with the available products to help a company make informed decisions,
boosting margins and increasing inventory turn.
Warehouse management system (WMS).
Primarily aims to control the movement and storage of materials within a warehouse, including the
receipt, storage and movement of goods to intermediate storage locations or to the final customer.
Transport management system (TMS).
Assists with the logistics within the SCM by optimizing loads and delivery routes, tracking freight
across local and global routes, along with automating previously time-consuming tasks, such as
trade compliance documentation and freight billing.
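The stock-out prevention handled by the inventory management module above commonly rests on a reorder-point rule; a minimal sketch (all figures hypothetical):

```python
def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Reorder when stock falls to expected lead-time demand plus a safety buffer."""
    return daily_demand * lead_time_days + safety_stock

def needs_reorder(on_hand, incoming, daily_demand, lead_time_days, safety_stock):
    """Compare current plus incoming inventory against the reorder point."""
    return on_hand + incoming <= reorder_point(daily_demand, lead_time_days, safety_stock)

# 20 units/day demand, 5-day supplier lead time, 30 units safety stock
print(reorder_point(20, 5, 30))            # 130
print(needs_reorder(100, 0, 20, 5, 30))    # True  -> place an order
print(needs_reorder(150, 40, 20, 5, 30))   # False -> enough coverage
```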
Benefits of supply chain management
Product life cycle management.
Supports effective management of the product life cycle, from development to disposal, ensuring
coordination and efficiency in various phases.
Improved supply chain visibility.
Offers visibility into the supply chain, allowing for better planning and management of resources.
Demand forecasting.
Facilitates demand forecasting, helping to optimize inventory levels and reduce carrying costs.
Supplier management.
Helps in managing supplier relationships more effectively, ensuring timely delivery and quality of
products.
Logistics optimization.
Assists in optimizing logistics and distribution processes, planning better routes and finding
carriers.
Risk management.
Aids in identifying and mitigating supply chain risks, such as logistics risks, product, and raw
materials shortages or demand volatility.
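The demand forecasting benefit above is, at its simplest, a moving average over past sales; a sketch with hypothetical monthly figures:

```python
def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_demand = [120, 135, 128, 140, 150]        # hypothetical units sold
print(moving_average_forecast(monthly_demand))    # (128 + 140 + 150) / 3
```

Production SCM suites use far more sophisticated models (seasonality, causal factors), but even this simple forecast is enough to drive the reorder-point calculations that keep inventory levels optimized.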
Enterprise resource planning (ERP)
ERP software helps support organizational goals by providing a cross-functional, company-wide communication system. It allows efficient collection, storage, interpretation, and management of information.
The core ERP modules include customer relationship management (CRM), supply chain management (SCM), finance and accounting, human resources management (HRM), manufacturing, and business intelligence (BI). We have described the CRM and SCM modules above and are going to take a closer look at the remaining ones.
Human resources management (HRM)
HRM software allows HR specialists to automate administrative tasks and speed up internal processes. The functions provided are integrated into a single module that makes general management and decision-making easier. It features standard HRM tools such as timesheets, a database for employee records, recruitment, and employee evaluations.
The module may also include performance reviews and payroll systems; the latter is usually integrated with the financial module to manage wages, compensation, and travel expenses.
Manufacturing
The key functionalities of this module are developed to help businesses make manufacturing more
efficient through product planning, materials sourcing, daily production monitoring, and product
forecasting. The module is tightly integrated with SCM, especially in areas like product planning
and inventory control.
Finance and accounting
This module keeps track of the organization's finances and helps automate tasks related to billing, account reconciliation, vendor payments, and others. Its key features include tracking
accounts payable (AP) and accounts receivable (AR) and managing the general ledger. Financial
planning and analysis data help prepare key reports such as Profit and Loss (P&L) statements.
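The AP/AR tracking and general-ledger management described above reduce to posting balanced journal entries; a minimal double-entry sketch (account names and amounts are hypothetical):

```python
from collections import defaultdict

# General ledger: account name -> running balance
ledger = defaultdict(float)

def post(debit_account, credit_account, amount):
    """Post one balanced journal entry: every debit has a matching credit."""
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

# Record a vendor invoice (creates a payable), then pay it
post("Expenses", "Accounts Payable", 1000.0)
post("Accounts Payable", "Cash", 1000.0)

print(dict(ledger))
# {'Expenses': 1000.0, 'Accounts Payable': 0.0, 'Cash': -1000.0}
```

Because every entry is balanced, the ledger always sums to zero, which is what makes automated account reconciliation and P&L reporting possible.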

Advantages of ERP systems


Centralized database.
Helps in consolidating data from different departments, offering a unified, consistent view of
business information.
Improved decision making.
Facilitates data-driven decision-making by providing comprehensive insights and analytics.
Operational efficiency.
Streamlines and automates business processes, reducing manual efforts and minimizing errors.
Cost reduction.
Helps reduce operational and administrative costs over the long term through process optimization.
Regulatory compliance.
Assists in meeting regulatory requirements through built-in tools that ensure adherence to various
laws and regulations.
Resource management.
Enables effective management of resources, including workforce, finance, and assets.

Functional information system (FIS)


A functional information system is defined as an information system that can offer complete information for a group of related activities or operations.
Functional Area Of FIS
• Finance (FIN): provide internal and external professional access to stock, investment and
capital spending information.
• Accounting (ACC): similar to financial MIS more related to invoicing, payroll, receivables.
• Marketing (MKT): pricing, distribution, promotional, and information by customer and
salesperson.
• Operations (OPS): regular reports on production, yield, quality, inventory levels. These
systems typically deal with manufacturing, sourcing, and supply chain management.
• Human Resources Management (HR): employees, benefits, hiring, etc.
CHARACTERISTICS OF FIS
➢ Many small changes in a large database

➢ Systematic records (mostly numerical)

➢ Routine actions & updating.

➢ Data preparation is a large & important effort

EQUIPMENT REQUIREMENTS OF FUNCTIONAL INFORMATION SYSTEMS


➢ Large auxiliary storage

➢ Dual use files

➢ Moderate input / output requirements

➢ Flexible printing capacity

➢ Offline data entry

➢ Often difficult to define the problem

➢ Needs fast random access to large storage capacity

➢ Organization of computer storage is difficult

➢ Versatile inquiry stations desirable

A summary of the capabilities of a FIS is organized by functional area in the following chart:
• In the pyramid, each vertical section represents a functional area of the organization, and thus a vertical view can be compared to a functional view of the organization
• Information systems can be designed to support the functional areas or traditional
departments such as, accounting, finance, marketing, human resources, and manufacturing, of
an organization
• Such systems are classified as functional information systems and typically follow the organizational structure
• Functional information systems are typically focused on increasing the efficiency of a
particular department or a functional area.
• One disadvantage of functional systems is that although they may support a particular functional area effectively, they may be incompatible with each other (no interaction between internal systems).
• Such systems, rather than aiding organizational performance, will act as inhibitors to an organization's development and change.
• Organizations have realized that in order to be agile and efficient they need to focus on
organizational processes
• A process may involve more than one functional area.
• Some Information Systems are cross-functional
• Example: A TPS can affect several different business areas: Accounting, Human Resources,
Production, etc.
• Some Information Systems concentrate on one particular business area (Accounting, for example). These systems are:
• Marketing Systems
• Manufacturing Systems
• Human Resource Systems
• Accounting Systems
• Financial Management Systems

UNIT -2 SYSTEM ANALYSIS AND DESIGN


SYSTEM DEVELOPMENT METHODOLOGY
1. Waterfall model
1) Requirements - The Discovery Phase
The first phase of the Waterfall model is to gather all the requirements for the project, which
are usually outlined by the client. The team will conduct interviews, research and review
existing documentation to determine what needs to be done. This phase is often called "the
discovery phase."
To understand what a business organisation needs, you must first listen to its stakeholders and
collect as much information as possible. Make sure you are not rushing into planning or
design without a clear understanding of your client's business goals, target users, and any
potential obstacles that may arise later in the process.
2) Design
In the second phase, the project begins with a design process that outlines the end result and
how it will be achieved. This is typically a very detailed plan and is highly unlikely to change
throughout the project since there are no opportunities for re-work later on in the process.
Once this step is complete, it moves to implementation.
3) Implementation
During the implementation phase, each element of the design is put into place one at a time, with each team member completing their assigned tasks in sequence before passing on their work to the next individual or group in line. There is not a lot of overlap or communication between teams during these phases: each team stays focused on their own piece of the puzzle until they finish and pass it on (hence the waterfall name).
4) System Testing
In this phase, all system components are tested. This includes an integration test, which makes sure that each part works properly with the others; a functional test, which guarantees that all functionality meets requirements; and a performance test, which ensures that the system can handle peak loads without crashing or slowing down significantly.
5) Verification/Integration
Once every element has been completed according to plan, the pieces are put together through integration and verification testing. These processes ensure that all elements fit together seamlessly as intended, validating that each feature works properly and meets its requirements before the team moves forward to build further features on top of it.
6) Maintenance
After the project is completed, any bugs that are found are squashed, and customers get to actually use the finished product. Maintenance also covers adding new features or functionality. This phase comes after the product has been completed and is in use by customers, and it ends once you are satisfied with the finished product.

2.Prototyping model
The prototyping model is a systems development method in which a prototype is built, tested
and then reworked as necessary until an acceptable outcome is achieved from which the
complete system or product can be developed.
This model works best in scenarios where not all the project requirements are known in detail
ahead of time. It is an iterative, trial-and-error process that takes place between the
developers and the users.
Steps of the prototyping model
1. The new system requirements are defined in as much detail as possible. This usually
involves interviewing a number of users representing all the departments or aspects of
the existing system.
2. A preliminary, simple design is created for the new system.
3. The first prototype of the new system is constructed from the preliminary design. This
is usually a scaled-down system and represents an approximation of the
characteristics of the final product.
4. The users thoroughly evaluate the first prototype and note its strengths and
weaknesses, what needs to be added and what should be removed. The developer
collects and analyzes the remarks from the users.
5. The first prototype is modified, based on the comments supplied by the users and a
second prototype of the new system is constructed.
6. The second prototype is evaluated in the same manner as the first prototype.
7. The preceding steps are iterated as many times as necessary, until the users are
satisfied that the prototype represents the final product desired.
8. The final system is constructed, based on the final prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried
out on a continuing basis to prevent large-scale failures and to minimize downtime.
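The numbered steps above form a build, evaluate, rework loop that repeats until users accept the prototype; sketched abstractly (the build and evaluate functions below are hypothetical stand-ins, not real development activities):

```python
def develop_by_prototyping(build, evaluate, max_iterations=10):
    """Iterate build -> user evaluation -> rework until users accept."""
    prototype = build(feedback=None)              # step 3: first prototype
    for _ in range(max_iterations):
        accepted, feedback = evaluate(prototype)  # steps 4 and 6: user review
        if accepted:
            return prototype                      # step 8: build final system
        prototype = build(feedback)               # steps 5 and 7: rework

    raise RuntimeError("users never accepted a prototype")

# Toy stand-ins: each round of feedback raises "completeness" by 1,
# and users accept once completeness reaches 3.
build = lambda feedback: (feedback or 0) + 1
evaluate = lambda p: (p >= 3, p)
print(develop_by_prototyping(build, evaluate))   # 3
```

The `max_iterations` guard mirrors the model's main risk noted below: without a stopping rule, the evaluate-and-rework cycle can repeat indefinitely.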
Advantages of the prototyping model
 Customers get a say in the product early on, increasing customer satisfaction.
 Missing functionality and errors are detected easily.
 Prototypes can be reused in the future for more complicated projects.
 It emphasizes team communication and flexible design practices.
 Users have a better understanding of how the product works.
 Quicker customer feedback provides a better idea of customer needs.
Disadvantages of the prototyping model
The main disadvantage of this methodology is that it is more costly in terms of time and
money when compared to alternative development methods, such as the spiral or Waterfall
model. Since in most cases the prototype is discarded, some companies may not see the value
in taking this approach.

Additionally, inviting customer feedback so early in the development lifecycle may cause
problems. One problem is that there may be an excessive amount of change requests that may
be hard to accommodate. Another issue could arise if after seeing the prototype, the customer
demands a quicker final release or becomes uninterested in the product.

Spiral Model

Identification

This phase starts with gathering the business requirements in the baseline spiral. In the subsequent spirals, as the product matures, identification of system requirements, subsystem requirements and unit requirements is done in this phase.

This phase also includes understanding the system requirements through continuous communication between the customer and the system analyst. At the end of the spiral, the product is deployed in the identified market.
Design

The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final design
in the subsequent spirals.

Construct or Build

The Construct phase refers to production of the actual software product at every spiral. In the
baseline spiral, when the product is just thought of and the design is being developed a POC
(Proof of Concept) is developed in this phase to get customer feedback.

Then in the subsequent spirals with higher clarity on requirements and design details a
working model of the software called build is produced with a version number. These builds
are sent to the customer for feedback.

Evaluation and Risk Analysis

Risk Analysis includes identifying, estimating and monitoring the technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build, at the
end of first iteration, the customer evaluates the software and provides feedback.

Advantages

 Changing requirements can be accommodated.
 Allows extensive use of prototypes.
 Requirements can be captured more accurately.
 Users see the system early.
 Development can be divided into smaller parts, and the risky parts can be developed earlier, which helps in better risk management.

Disadvantages

 Management is more complex.
 The end of the project may not be known early.
 Not suitable for small or low-risk projects; could be expensive for small projects.
 The process is complex.
 The spiral may go on indefinitely.
 The large number of intermediate stages requires excessive documentation.

RAD- Rapid application development

A prototype is a working model that is functionally equivalent to a component of the product.

In the RAD model, the functional modules are developed in parallel as prototypes and are
integrated to make the complete product for faster product delivery. Since there is no detailed
preplanning, it makes it easier to incorporate the changes within the development process.

RAD projects follow an iterative and incremental model and have small teams comprising developers, domain experts, customer representatives and other IT resources working progressively on their component or prototype.

RAD Model Design

Business Modelling

The business model for the product under development is designed in terms of the flow of information and the distribution of information between various business channels. A complete business analysis is performed to find the information vital to the business, how it can be obtained, how and when the information is processed, and what factors drive the successful flow of information.

Data Modelling

The information gathered in the Business Modelling phase is reviewed and analyzed to form sets of data objects vital for the business. The attributes of all data sets are identified and defined. The relationships between these data objects are established and defined in detail in relevance to the business model.

Process Modelling

The data object sets defined in the Data Modelling phase are converted to establish the
business information flow needed to achieve specific business objectives as per the business
model. The process model for any changes or enhancements to the data object sets is defined
in this phase. Process descriptions for adding, deleting, retrieving or modifying a data object
are given.

Application Generation

The actual system is built and coding is done by using automation tools to convert process
and data models into actual prototypes.

Testing and Turnover

The overall testing time is reduced in the RAD model as the prototypes are independently
tested during every iteration. However, the data flow and the interfaces between all the
components need to be thoroughly tested with complete test coverage. Since most of the
programming components have already been tested, it reduces the risk of any major issues.

Advantages of the RAD Model

 Changing requirements can be accommodated.
 Progress can be measured.

 Iteration time can be short with use of powerful RAD tools.

 Productivity with fewer people in a short time.

 Reduced development time.

 Increases reusability of components.

 Quick initial reviews occur.

 Encourages customer feedback.

 Integration from the very beginning solves many integration issues.

Disadvantages of the RAD Model

 Dependency on technically strong team members for identifying business requirements.

 Only system that can be modularized can be built using RAD.

 Requires highly skilled developers/designers.

 High dependency on Modelling skills.

 Inapplicable to cheaper projects, as the cost of modelling and automated code generation is very high.

 Management complexity is higher.

 Only suitable for systems that are component-based and scalable.

 Requires user involvement throughout the life cycle.

 Only suitable for projects requiring shorter development times.

Systems Analysis
It is the process of collecting and interpreting facts, identifying problems, and decomposing a system into its components.
System analysis is conducted for the purpose of studying a system or its parts in order to
identify its objectives. It is a problem solving technique that improves the system and ensures
that all the components of the system work efficiently to accomplish their purpose.
Types of System
 Physical System – These are tangible entities that may be static or dynamic in operation. For example, the desks and chairs of a computer center are static parts that facilitate operation of the computer, while a programmed computer is dynamic.
 Abstract System – These are conceptual or non physical entities. For example- the
abstract conceptualization of physical situations. A model is a representation of a real
or planned system. A model is used to visualize relationships.
 Deterministic System – It operates in a predictable manner and the interaction
between parts is known with certainty. For example: Two molecules of hydrogen and
one molecule of oxygen make water.
 Probabilistic System – It shows probable behavior. The exact output is not known.
For example: weather forecasting, mail delivery.
 Social System- It is made up of people. For example: social clubs, societies
 Human-Machine System – One in which both humans and machines are involved in performing a particular task to achieve a target. For example: a computer.
 Machine System – One in which human interference is absent; all the tasks are performed by the machine.
 Natural System- The system which is natural. For example- Solar system, Seasonal
System.
 Manufactured System- System made by man is called manufactured system. For
example- Rockets, Dams, and Trains.
 Permanent System – One which persists for a long time. For example: the policies of a business.
Temporary System – Made for a specified time and dissolved afterwards. For example: setting up a DJ system.
 Adaptive System – Responds to changes in the environment in a way that improves its performance and ensures survival. For example: human beings, animals. Non-Adaptive System – A system which doesn't respond to the environment. For example: machines.
 Open System – It has many interfaces with its environment. It interacts across its
boundaries, it receives inputs from and delivers outputs to the outside world. It must
adapt to the changing demands of the user. Closed System – It is isolated from the
environmental influences. A completely closed system is rare.
System design
This is the phase that bridges the gap between the problem domain and the existing system in a manageable way. It focuses on the solution domain, i.e. “how to implement?”
Types of System Design
Logical Design
Logical design pertains to an abstract representation of the data flow, inputs, and outputs of
the system. It describes the inputs (sources), outputs (destinations), databases (data stores),
procedures (data flows) all in a format that meets the user requirements.
While preparing the logical design of a system, the system analyst specifies the user needs at a level of detail that virtually determines the information flow into and out of the system and the required data sources. Data flow diagrams and E-R diagram modeling are used.
Physical Design
Physical design relates to the actual input and output processes of the system. It focuses on
how data is entered into a system, verified, processed, and displayed as output.
It produces the working system by defining the design specification that specifies exactly
what the candidate system does. It is concerned with user interface design, process design,
and data design.
Architectural Design
It is also known as high level design that focuses on the design of system architecture. It
describes the structure and behavior of the system. It defines the structure and relationship
between various modules of system development process.
Detailed Design
It follows Architectural design and focuses on development of each module.
Conceptual Data Modeling
It is representation of organizational data which includes all the major entities and
relationship. System analysts develop a conceptual data model for the current system that
supports the scope and requirement for the proposed system.
The main aim of conceptual data modeling is to capture as much meaning of data as possible.
Most organization today use conceptual data modeling using E-R model which uses special
notation to represent as much meaning about data as possible.
Data Flow Diagram
A data flow diagram (DFD) maps out the flow of information for any process or system. It
uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data
inputs, outputs, storage points and the routes between each destination. Data flowcharts can
range from simple, even hand-drawn process overviews, to in-depth, multi-level DFDs that
dig progressively deeper into how the data is handled. They can be used to analyze an
existing system or model a new one. Like all the best diagrams and charts, a DFD can often
visually “say” things that would be hard to explain in words, and they work for both technical
and nontechnical audiences, from developer to CEO. That’s why DFDs remain so popular
after all these years. While they work well for data flow software and systems, they are less
applicable nowadays to visualizing interactive, real-time or database-oriented software or
systems.
Logical Data Flow Diagram
Logical data flow diagram mainly focuses on the system process. It illustrates how data flows
in the system. In the Logical Data Flow Diagram (DFD), we focus on the high-level
processes and data flow without delving into the specific implementation details. Logical DFDs are used in various organizations for the smooth running of systems. In a banking software system, for example, they describe how data is moved from one entity to another.
When to use Logical Data Flow Diagram
Logical Data Flow Diagrams are mostly used during the requirement analysis phase, user communication, and high-level system design.
1. Requirement Analysis: Logical Data Flow Diagrams play an important role during requirement analysis. They provide a clear understanding of the data flow between end users, processes and data stores without getting deep into technical terminology. They help in identifying what the system needs to do and how data moves within the system.
2. User Communication: Logical Data Flow Diagrams are very useful communication tools between system analysts and end users. They provide a clear understanding of system requirements and functionalities to non-technical stakeholders.
3. High-Level System Design: Logical Data Flow Diagrams help in creating a high-level design of the system’s architecture. They focus on processes and data flow, which supports further software development.
Physical Data Flow Diagram (DFD)
Physical data flow diagram shows how the data flow is actually implemented in the system.
In the Physical Data Flow Diagram (DFD), we include additional details such as data storage,
data transmission, and specific technology or system components. Physical DFD is more
specific and close to implementation.
When to use Physical Data Flow Diagram
Physical Data Flow Diagrams (DFDs) are used in the following situations:
1. Detailed Design Phase: Physical DFDs are helpful during the detailed design phase
of a system. Logical DFD provide system processes and data flow at higher level of
abstraction while physical DFD provides a more detailed view of data flow and
processes within the information system.
2. Implementation Planning: Physical DFDs help developers during implementation planning, as they provide a detailed view of how data flows within the system’s physical components, such as hardware devices, databases, and software modules. They help developers identify the correct technologies and resources required to implement the system.
3. Integration with Existing Systems: Physical DFDs are important for understanding data flow when integrating a new system with existing systems or external entities.
4. Documentation and Maintenance: Physical DFDs can serve as documentation for system architecture and data flow patterns, which helps in system maintenance and troubleshooting.
Levels in Data Flow Diagram (DFD)

 Level 0 Data Flow Diagram (DFD)
Level 0 is the highest-level Data Flow Diagram (DFD), which provides an overview of the
entire system. It shows the major processes, data flows, and data stores in the system, without
providing any details about the internal workings of these processes.
It is also known as a context diagram. It’s designed to be an abstraction view, showing the
system as a single process with its relationship to external entities. It represents the entire
system as a single bubble with input and output data indicated by incoming/outgoing arrows.
 1-Level Data Flow Diagram (DFD)
1-Level provides a more detailed view of the system by breaking down the major processes
identified in the level 0 Data Flow Diagram (DFD) into sub-processes. Each sub-process is
depicted as a separate process on the level 1 Data Flow Diagram (DFD). The data flows and
data stores associated with each sub-process are also shown.
In 1-level Data Flow Diagram (DFD), the context diagram is decomposed into multiple
bubbles/processes. In this level, we highlight the main functions of the system and
breakdown the high-level process of 0-level Data Flow Diagram (DFD) into subprocesses.
 2-Level Data Flow Diagram (DFD)
2-Level provides an even more detailed view of the system by breaking down the sub-
processes identified in the level 1 Data Flow Diagram (DFD) into further sub-processes. Each
sub-process is depicted as a separate process on the level 2 DFD. The data flows and data
stores associated with each sub-process are also shown.
2-Level Data Flow Diagram (DFD) goes one step deeper into parts of 1-level DFD. It can be
used to plan or record the specific/necessary detail about the system’s functioning.
 3-Level Data Flow Diagram (DFD)
3-Level is the most detailed level of Data Flow Diagram (DFDs), which provides a detailed
view of the processes, data flows, and data stores in the system. This level is typically used
for complex systems, where a high level of detail is required to understand the system. Each
process on the level 3 DFD is depicted with a detailed description of its input, processing, and
output. The data flows and data stores associated with each process are also shown.

Advantages of DFD
1. Easy to understand: DFDs are graphical representations that are easy to understand
and communicate, making them useful for non-technical stakeholders and team
members.
2. Improves system analysis: DFDs are useful for analyzing a system’s processes and
data flow, which can help identify inefficiencies, redundancies, and other problems
that may exist in the system.
3. Supports system design: DFDs can be used to design a system’s architecture and
structure, which can help ensure that the system is designed to meet the requirements
of the stakeholders.
4. Enables testing and verification: DFDs can be used to identify the inputs and outputs
of a system, which can help in the testing and verification of the system’s
functionality.
5. Facilitates documentation: DFDs provide a visual representation of a system, making
it easier to document and maintain the system over time.
Disadvantages of DFD
1. Can be time-consuming: Creating DFDs can be a time-consuming process, especially
for complex systems.
2. Limited focus: DFDs focus primarily on the flow of data in a system, and may not
capture other important aspects of the system, such as user interface design, system
security, or system performance.
3. Can be difficult to keep up-to-date: DFDs may become out-of-date over time as the
system evolves and changes.
4. Requires technical expertise: While DFDs are easy to understand, creating them
requires a certain level of technical expertise and familiarity with the system being
analyzed.

DECISION TABLE
A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a ‘cause-effect’ table. The reason for this is that there is an associated logic diagramming technique called ‘cause-effect graphing’ which was sometimes used to help derive the decision table.
STEPS
1. Identifying conditions and actions. We start by identifying the relevant conditions and actions for testing software behavior. For example, if we’re testing a login feature, the conditions could include a valid username and password, while the action is a successful login.
2. Constructing the decision table matrix. Once we identify the conditions and actions,
we construct the decision table matrix. The matrix is made up of rows and columns.
Each row represents a particular scenario based on the conditions, and each column
represents the potential actions. The intersections between the rows and columns
define the rules for each scenario.
3. Defining rules for each scenario. We need to set the rules that specify the expected
actions for each combination of conditions. These rules help us determine the
software’s behavior under various conditions, making it easier to understand and
validate its responses.
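The three steps above can be sketched in code. The following is a minimal, illustrative decision table for the login example; the condition names, action strings and rules are assumptions for demonstration, not taken from any particular system.

```python
# Decision table for a hypothetical login feature (illustrative only).
# Conditions: valid_username, valid_password; Action: expected outcome.
RULES = {
    # (valid_username, valid_password): expected action
    (True,  True):  "login_success",
    (True,  False): "show_error",
    (False, True):  "show_error",
    (False, False): "show_error",
}

def decide(valid_username: bool, valid_password: bool) -> str:
    """Look up the expected action for one combination of conditions."""
    return RULES[(valid_username, valid_password)]

# Each rule doubles as a test case: iterating the table exercises
# every combination of conditions, giving complete scenario coverage.
for conditions, expected in RULES.items():
    assert decide(*conditions) == expected
```

Because every combination of conditions appears as a row, deriving test cases is a mechanical walk over the table rather than an ad-hoc exercise.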

Advantages

Decision tables provide a clear and structured representation of complex scenarios. When software behavior involves multiple conditions and possible actions, understanding it can be overwhelming. Decision tables can be incredibly useful in such cases. They break down the information into manageable tables, making it easier to analyze and validate the expected outcomes.
In software testing, we need comprehensive test case coverage to identify potential
defects. Decision tables are an effective tool for managing multiple combinations of
conditions and actions, allowing for a comprehensive analysis of all possible scenarios. This
inclusive approach minimizes the risk of overlooking critical test cases.
Software applications encompass various possible inputs and outputs. Identifying all
necessary test cases manually can be daunting, and missing test cases may lead to undetected
defects. Decision tables are roadmaps for testers, aiding in visualizing and ensuring no test
cases are unintentionally left behind.
Moreover, decision tables simplify the test case design process. Creating test cases directly
from requirements documents can be time-consuming and error-prone. They provide a
systematic approach, allowing testers to seamlessly translate conditions and actions into
specific test cases, thus streamlining the overall testing process.

Disadvantages
While they offer valuable insights, they may not be suitable for all testing scenarios,
particularly when software behavior is straightforward and involves only a few conditions
and actions. In such cases, other testing techniques like equivalence partitioning
and boundary value analysis may be more efficient.
Additionally, constructing decision tables can become time-consuming when dealing with
large and complex software systems. The substantial number of conditions and actions
involved in such systems can make creating and maintaining decision tables a time-intensive
process. While decision tables offer a structured approach, their effectiveness can be
influenced by the scale and complexity of the software being tested.

Entity Relationship Diagram


Entity Relationship Diagram, also known as ERD, ER Diagram or ER model, is a type of structural diagram for use in database design. An ERD contains different symbols and connectors that visualize two important pieces of information: the major entities within the system scope, and the inter-relationships among these entities.
ERD notations guide
Entity
An ERD entity is a definable thing or concept within a system, such as a person/role (e.g.
Student), object (e.g. Invoice), concept (e.g. Profile) or event (e.g. Transaction) (note: In
ERD, the term "entity" is often used instead of "table", but they are the same). When
determining entities, think of them as nouns. In ER models, an entity is shown as a rounded
rectangle, with its name on top and its attributes listed in the body of the entity shape. The
ERD example below shows an example of an ER entity.

Entity Attributes
Also known as a column, an attribute is a property or characteristic of the entity that holds it.
An attribute has a name that describes the property and a type that describes the kind of
attribute it is, such as varchar for a string, and int for integer. When an ERD is drawn for
physical database development, it is important to ensure the use of types that are supported by
the target RDBMS.
The ER diagram example below shows an entity with some attributes in it.

Primary Key
Also known as PK, a primary key is a special kind of entity attribute that uniquely defines a
record in a database table. In other words, there must not be two (or more) records that share
the same value for the primary key attribute. The ERD example below shows an entity
'Product' with a primary key attribute 'ID', and a preview of table records in the database. The
third record is invalid because the value of ID 'PDT-0002' is already used by another record.
Foreign Key
Also known as FK, a foreign key is a reference to a primary key in a table. It is used to
identify the relationships between entities. Note that foreign keys need not be unique.
Multiple records can share the same values. The ER Diagram example below shows an entity
with some columns, among which a foreign key is used in referencing another entity.
Relationship
A relationship between two entities signifies that the two entities are associated with each
other somehow. For example, a student might enroll in a course. The entity Student is
therefore related to Course, and a relationship is presented as a connector connecting between
them.
Cardinality
Cardinality defines the possible number of occurrences in one entity which is associated
with the number of occurrences in another. For example, ONE team has MANY players.
When present in an ERD, the entity Team and Player are inter-connected with a one-to-many
relationship.
1. One-to-One cardinality example
A one-to-one relationship is mostly used to split an entity in two to provide information
concisely and make it more understandable. The figure below shows an example of a one-to-
one relationship.
2. One-to-Many cardinality example
A one-to-many relationship refers to the relationship between two entities X and Y in which
an instance of X may be linked to many instances of Y, but an instance of Y is linked to only
one instance of X. The figure below shows an example of a one-to-many relationship
3. Many-to-Many cardinality example
A many-to-many relationship refers to the relationship between two entities X and Y in
which X may be linked to many instances of Y and vice versa. The figure below shows an
example of a many-to-many relationship. Note that a many-to-many relationship is split into
a pair of one-to-many relationships in a physical ERD. You will know what a physical ERD
is in the next section.
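The primary key, foreign key and one-to-many cardinality described above can be made concrete with a small relational schema. Below is a sketch using Python’s built-in sqlite3 module; the Team/Player tables and sample rows are illustrative assumptions chosen to mirror the “ONE team has MANY players” example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# ONE team ...
conn.execute("""
    CREATE TABLE Team (
        id   INTEGER PRIMARY KEY,   -- PK: no two records may share this value
        name VARCHAR(50) NOT NULL
    )""")

# ... has MANY players: team_id is a FK and need not be unique.
conn.execute("""
    CREATE TABLE Player (
        id      INTEGER PRIMARY KEY,
        name    VARCHAR(50) NOT NULL,
        team_id INTEGER NOT NULL,
        FOREIGN KEY (team_id) REFERENCES Team(id)
    )""")

conn.execute("INSERT INTO Team VALUES (1, 'Eagles')")
conn.executemany("INSERT INTO Player VALUES (?, ?, ?)",
                 [(1, 'Ann', 1), (2, 'Ben', 1)])  # two players, one team

rows = conn.execute(
    "SELECT Team.name, COUNT(*) FROM Player "
    "JOIN Team ON Player.team_id = Team.id "
    "GROUP BY Team.name").fetchall()
print(rows)  # [('Eagles', 2)]
```

Inserting a second Team with id 1 would violate the primary key, and inserting a Player whose team_id matches no Team would be rejected by the foreign key, which is exactly how these keys enforce the relationship between the entities.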
DATA MODELS
Conceptual data model
Conceptual ERD models the business objects that should exist in a system and the
relationships between them. A conceptual model is developed to present an overall picture of
the system by recognizing the business objects involved. It defines what entities exist, NOT
which tables. For example, 'many to many' tables may exist in a logical or physical data
model but they are just shown as a relationship with no cardinality under the conceptual data
model.
Logical data model
Logical ERD is a detailed version of a Conceptual ERD. A logical ER model is developed
to enrich a conceptual model by defining explicitly the columns in each entity and
introducing operational and transactional entities. Although a logical data model is still
independent of the actual database system in which the database will be created, you can still
take that into consideration if it affects the design.
Physical data model
Physical ERD represents the actual design blueprint of a relational database. A physical
data model elaborates on the logical data model by assigning each column with type, length,
nullable, etc. Since a physical ERD represents how data should be structured and related in a specific DBMS, it is important to consider the conventions and restrictions of the actual database system in which the database will be created. Make sure the column types are supported by the DBMS and that reserved words are not used in naming entities and columns.
Object-Oriented Analysis and Design (OOAD)
Object-Oriented Analysis and Design (OOAD) is a software engineering methodology that
employs object-oriented principles to model and design complex systems. It involves
analyzing the problem domain, representing it using objects and their interactions, and then
designing a modular and scalable solution. It helps create systems that are easier to
understand, maintain, and extend by organizing functionality into reusable and interconnected
components.
Object-Oriented Analysis
Object-Oriented Analysis (OOA) is the first technical activity performed as part of object-
oriented software engineering. OOA introduces new concepts to investigate a problem. It is
based on a set of basic principles, which are as follows:
 The information domain is modeled:
Lets say you’re building a game. OOA helps you figure out all the things you need to
know about the game world – the characters, their features, and how they interact. It’s
like making a map of everything important.
 Behavior is represented:
OOA also helps you understand what your game characters will do. If a character jumps
when you press a button, OOA helps describe that action. It’s like writing down a script
for each character.
 The function is described:
Every program has specific tasks or jobs it needs to do. OOA helps you list and describe
these jobs. In our game, it could be tasks like moving characters or keeping score. It’s like
making a to-do list for your software.
 Data, functional, and behavioral models are divided to uncover greater detail:
OOA is smart about breaking things into different parts. It splits the job into three
categories: things your game knows (like scores), things your game does (like jumping),
and how things in your game behave (like characters moving around). This makes it
easier to understand.
 Starting Simple, Getting Detailed:
OOA knows that at first, you just want to understand the big picture. So, it starts with a
simple version of your game or program. Later on, you add more details to make it work
perfectly. It’s like sketching a quick drawing before adding all the colors and details.
Object-Oriented Design
In the object-oriented software development process, the analysis model, which is initially
formed through object-oriented analysis (OOA), undergoes a transformation during object-
oriented design (OOD). This evolution is crucial because it shapes the analysis model into a
detailed design model, essentially serving as a blueprint for constructing the software.
The outcome of object-oriented design, or OOD, manifests in a design model characterized
by multiple levels of modularity. This modularity is expressed in two key ways:
 Subsystem Partitioning:
At a higher level, major components of the system are organized into subsystems.
This practice is similar to creating modules at the system level, providing a structured
and organized approach to managing the complexity of the software.
 Object Encapsulation:
A more granular form of modularity is achieved through the encapsulation of data manipulation operations into objects. It’s like putting specific tasks (or operations) and the data they need into little boxes called “objects.”
Each object does its job neatly and keeps things organized. So, if our game has a character jumping, we put all the jumping stuff neatly inside an object. It’s like having a box for each task, making everything easier to handle and understand.
Furthermore, as part of the object-oriented design process, it is essential to define specific
aspects:
 Data Organization of Attributes:
OOD involves specifying how data attributes are organized within the objects. This
includes determining the types of data each object will hold and how they relate to
one another, ensuring a coherent and efficient data structure.
 Procedural Description of Operations:
OOD requires a procedural description for each operation that an object can perform.
This involves detailing the steps or processes involved in carrying out specific tasks,
ensuring clarity and precision in the implementation of functionality.
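A minimal sketch of these two aspects, using the game example from the text: the attributes are organized inside the object, and each operation carries a procedural description of its steps. The class name, attributes and jump height are illustrative assumptions.

```python
class Character:
    """All the 'jumping stuff' lives inside one object."""

    def __init__(self, name: str):
        # Data organization of attributes: each object holds its own state.
        self.name = name
        self.y = 0               # vertical position
        self.is_jumping = False

    def jump(self, height: int = 2) -> None:
        # Procedural description of the operation: the steps of one jump.
        if not self.is_jumping:
            self.is_jumping = True
            self.y += height

    def land(self) -> None:
        # Returning to the ground resets the jump state.
        self.is_jumping = False
        self.y = 0

hero = Character("hero")
hero.jump()
print(hero.y, hero.is_jumping)  # 2 True
```

Encapsulating the state and the operations together means no other part of the program needs to know how a jump works internally, which is the modularity benefit discussed in this section.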

Benefits of Object-Oriented Analysis and Design (OOAD)
 Improved modularity: OOAD encourages the creation of small, reusable objects that
can be combined to create more complex systems, improving the modularity and
maintainability of the software.
 Better abstraction: OOAD provides a high-level, abstract representation of a
software system, making it easier to understand and maintain.
 Improved reuse: OOAD encourages the reuse of objects and object-oriented design
patterns, reducing the amount of code that needs to be written and improving the
quality and consistency of the software.
 Improved communication: OOAD provides a common vocabulary and methodology
for software developers, improving communication and collaboration within teams.
 Reusability: OOAD emphasizes the use of reusable components and design patterns,
which can save time and effort in software development by reducing the need to
create new code from scratch.
 Scalability: OOAD can help developers design software systems that are scalable and
can handle changes in user demand and business requirements over time.
 Maintainability: OOAD emphasizes modular design and can help developers create
software systems that are easier to maintain and update over time.
 Flexibility: OOAD can help developers design software systems that are flexible and
can adapt to changing business requirements over time.
 Improved software quality: OOAD emphasizes the use of encapsulation,
inheritance, and polymorphism, which can lead to software systems that are more
reliable, secure, and efficient.
Challenges of Object-Oriented Analysis and Design (OOAD)
 Complexity: OOAD can add complexity to a software system, as objects and their
relationships must be carefully modeled and managed.
 Overhead: OOAD can result in additional overhead, as objects must be instantiated,
managed, and interacted with, which can slow down the performance of the software.
 Steep learning curve: OOAD can have a steep learning curve for new software
developers, as it requires a strong understanding of OOP concepts and techniques and
significant expertise to implement effectively; novice developers may find its
principles difficult to understand and apply.
 Time-consuming: OOAD can be a time-consuming process that involves significant
upfront planning and documentation. This can lead to longer development times and
higher costs.
 Rigidity: Once a software system has been designed using OOAD, it can be difficult
to make changes without significant time and expense. This can be a disadvantage in
rapidly changing environments where new technologies or business requirements may
require frequent changes to the system.
 Cost: OOAD can be more expensive than other software engineering methodologies
due to the upfront planning and documentation required.
Unified Modeling Language (UML) Diagrams

Unified Modeling Language (UML) is a general-purpose modeling language. The main aim
of UML is to define a standard way to visualize how a system has been designed. It is
quite similar to the blueprints used in other fields of engineering. UML is not a
programming language; rather, it is a visual language.
STRUCTURAL UML DIAGRAMS
1. Class Diagram
The most widely used UML diagram is the class diagram. It is the building block of all
object-oriented software systems. We use class diagrams to depict the static structure of a
system by showing the system’s classes, their methods and attributes. Class diagrams also
help us identify relationships between different classes or objects.
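As a sketch (the class names are hypothetical), the Python below shows the kind of static structure a class diagram captures: two classes, their attributes and methods, and an association between them:

```python
# A Library "has" Books: the association a class diagram would draw as a
# line between the two class boxes (Library 1 --- * Book).

class Book:
    def __init__(self, title, author):
        self.title = title     # attributes fill the middle compartment of a class box
        self.author = author

class Library:
    def __init__(self):
        self.books = []        # the association end: a Library holds many Books

    def add(self, book):       # methods fill the bottom compartment of a class box
        self.books.append(book)

    def titles(self):
        return [b.title for b in self.books]

lib = Library()
lib.add(Book("SICP", "Abelson"))
print(lib.titles())
```

A class diagram records exactly this information (names, attributes, operations, multiplicities) without any of the implementation detail.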
2. Composite Structure Diagram
We use composite structure diagrams to represent the internal structure of a class and its
interaction points with other parts of the system.
 A composite structure diagram represents relationship between parts and their
configuration which determine how the classifier (class, a component, or a
deployment node) behaves.
 They represent internal structure of a structured classifier making the use of parts,
ports, and connectors.
 We can also model collaborations using composite structure diagrams.
 They are similar to class diagrams except they represent individual parts in detail as
compared to the entire class.
3. Object Diagram
An Object Diagram can be thought of as a snapshot of the instances in a system and the
relationships that exist between them. Since object diagrams depict behaviour once objects
have been instantiated, we are able to study the behaviour of the system at a particular instant.
 An object diagram is similar to a class diagram except that it shows the instances of
classes in the system.
 We depict actual classifiers and their relationships using class diagrams.
 An Object Diagram, on the other hand, represents specific instances of classes and the
relationships between them at a point in time.
4. Component Diagram
Component diagrams are used to represent how the physical components in a system have
been organized. We use them for modelling implementation details.
 Component Diagrams depict the structural relationship between software system
elements and help us in understanding if functional requirements have been covered
by planned development.
 Component Diagrams become essential to use when we design and build complex
systems.
 Interfaces are used by components of the system to communicate with each other.
5. Deployment Diagram
Deployment Diagrams are used to represent a system’s hardware and its software. They tell
us what hardware components exist and what software components run on them.
 We illustrate system architecture as distribution of software artifacts over distributed
targets.
 An artifact is the information that is generated by system software.
 They are primarily used when a software is being used, distributed or deployed over
multiple machines with different configurations.
6. Package Diagram
We use Package Diagrams to depict how packages and their elements have been organized. A
package diagram simply shows us the dependencies between different packages and internal
composition of packages.
 Packages help us to organise UML diagrams into meaningful groups and make the
diagram easy to understand.
 They are primarily used to organise class and use case diagrams.

BEHAVIORAL UML DIAGRAMS


1. State Machine Diagrams
A state diagram is used to represent the condition of the system or part of the system at finite
instances of time. It’s a behavioral diagram and it represents the behavior using finite state
transitions.
 State diagrams are also referred to as state machines and state-chart diagrams; these
terms are often used interchangeably.
 Simply put, a state diagram is used to model the dynamic behavior of a class in
response to time and changing external stimuli.
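The states and transitions a state diagram draws can be sketched directly as a transition table. The `Door` class and its events below are hypothetical, chosen only to show the idea:

```python
# Each (state, event) pair maps to a next state: exactly the arrows a
# state diagram would draw between its state boxes.
TRANSITIONS = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

class Door:
    def __init__(self):
        self.state = "closed"      # initial state (the filled circle in the diagram)

    def handle(self, event):
        # events with no outgoing transition from the current state are ignored
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

d = Door()
print(d.handle("open"))    # opened
print(d.handle("close"))   # closed
print(d.handle("lock"))    # locked
```

Note that sending `"open"` to a locked door leaves it locked, because the diagram has no such arrow, which is how a state machine constrains dynamic behavior.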
2. Activity Diagrams
We use Activity Diagrams to illustrate the flow of control in a system. We can also use an
activity diagram to refer to the steps involved in the execution of a use case.
 We model sequential and concurrent activities using activity diagrams. So, we
basically depict workflows visually using an activity diagram.
 An activity diagram focuses on the conditions of flow and the sequence in which it
happens.
 We describe or depict what causes a particular event using an activity diagram.
3. Use Case Diagrams
Use Case Diagrams are used to depict the functionality of a system or a part of a system.
They are widely used to illustrate the functional requirements of the system and its interaction
with external agents (actors).
 A use case is basically a diagram representing different scenarios where the system
can be used.
 A use case diagram gives us a high level view of what the system or a part of the
system does without going into implementation details.
4. Sequence Diagram
A sequence diagram simply depicts interaction between objects in a sequential order i.e. the
order in which these interactions take place.
 We can also use the terms event diagrams or event scenarios to refer to a sequence
diagram.
 Sequence diagrams describe how and in what order the objects in a system function.
 These diagrams are widely used by business analysts and software developers to
document and understand requirements for new and existing systems.
5. Communication Diagram
A Communication Diagram (known as Collaboration Diagram in UML 1.x) is used to show
sequenced messages exchanged between objects.
 A communication diagram focuses primarily on objects and their relationships.
 We can represent similar information using Sequence diagrams, however
communication diagrams represent objects and links in a free form.
6. Timing Diagram
Timing diagrams are a special form of sequence diagrams which are used to depict the
behavior of objects over a time frame. We use them to show time and duration constraints
which govern changes in states and behavior of objects.
7. Interaction Overview Diagram
An Interaction Overview Diagram models a sequence of actions and helps us simplify
complex interactions into simpler occurrences. It is a mixture of activity and sequence
diagrams.

UNIT -3 DATABASE MANAGEMENT SYSTEM

DBMS- Data Base Management System


A database is a collection of related data. Data is a collection of facts and figures which can
be processed to produce information. The name, age, class and subjects of a student can be
counted as data for recording purposes. Mostly, data represents recordable facts, and data
aids in producing information based on those facts. For example, if we have data about the
marks obtained by all students, we can then draw conclusions about toppers, average marks,
and so on. A database management system stores data in such a way that it is easier to
retrieve and manipulate, and helps to produce information.
EVOLUTION OF DBMS
 1950s and early 1960s: Magnetic tapes were developed for data storage. Data
processing tasks such as payroll were automated, with data stored on tapes.
Processing of data consisted of reading data from one or more tapes and writing data
to a new tape. Data could also be input from punched card decks, and output to
printers. For example, salary raises were processed by entering the raises on punched
cards and reading the punched card deck in synchronization with a tape containing the
master salary details. The records had to be in the same sorted order. The salary raises
would be added to the salary read from the master tape, and written to a new tape; the
new tape would become the new master tape. Tapes (and card decks) could be read
only sequentially, and data sizes were much larger than main memory; thus, data
processing programs were forced to process data in a particular order, by reading and
merging data from tapes and card decks.
 Late 1960s and 1970s: Widespread use of hard disks in the late 1960s changed the
scenario for data processing greatly, since hard disks allowed direct access to data.
The position of data on disk was immaterial, since any location on disk could be
accessed in just tens of milliseconds. Data were thus freed from the tyranny of
sequentiality. With disks, network and hierarchical databases could be created that
allowed data structures such as lists and trees to be stored on disk. Programmers could
construct and manipulate these data structures. A landmark paper by Codd [1970]
defined the relational model and nonprocedural ways of querying data in the relational
model, and relational databases were born. The simplicity of the relational model and
the possibility of hiding implementation details completely from the programmer
were enticing indeed. Codd later won the prestigious Association of Computing
Machinery Turing Award for his work.
 1980s: Although academically interesting, the relational model was not used in
practice initially, because of its perceived performance disadvantages; relational
databases could not match the performance of existing network and hierarchical
databases. That changed with System R, a groundbreaking project at IBM Research
that developed techniques for the construction of an efficient relational database
system. Excellent overviews of System R are provided by Astrahan et al. [1976] and
Chamberlin et al. [1981]. The fully functional System R prototype led to IBM’s first
relational database product, SQL/DS. At the same time, the Ingres system was being
developed at the University of California at Berkeley. It led to a commercial product
of the same name. Initial commercial relational database systems, such as IBM DB2,
Oracle, Ingres, and DEC Rdb, played a major role in advancing techniques for
efficient processing of declarative queries. By the early 1980s, relational databases
had become competitive with network and hierarchical database systems even in the
area of performance. Relational databases were so easy to use that they eventually
replaced network and hierarchical databases; programmers using such databases were
forced to deal with many low-level implementation details, and had to code their
queries in a procedural fashion. Most importantly, they had to keep efficiency in mind
when designing their programs, which involved a lot of effort. In contrast, in a
relational database, almost all these low-level tasks are carried out automatically by
the database, leaving the programmer free to work at a logical level. Since attaining
dominance in the 1980s, the relational model has reigned supreme among data
models. The 1980s also saw much research on parallel and distributed databases, as
well as initial work on object-oriented databases.
 Early 1990s: The SQL language was designed primarily for decision support
applications, which are query-intensive, yet the mainstay of databases in the 1980s
was transaction-processing applications, which are update-intensive. Decision support
and querying re-emerged as a major application area for databases. Tools for
analyzing large amounts of data saw large growths in usage. Many database vendors
introduced parallel database products in this period. Database vendors also began to
add object-relational support to their databases.
 1990s: The major event of the 1990s was the explosive growth of the World Wide
Web. Databases were deployed much more extensively than ever before. Database
systems now had to support very high transaction-processing rates, as well as very
high reliability and 24 × 7 availability (availability 24 hours a day, 7 days a week,
meaning no downtime for scheduled maintenance activities). Database systems also
had to support Web interfaces to data.
 2000s: The first half of the 2000s saw the emergence of XML and the associated query
language XQuery as a new database technology. Although XML is widely used for
data exchange, as well as for storing certain complex data types, relational databases
still form the core of a vast majority of large-scale database applications. In this time
period we have also witnessed the growth in “autonomic-computing/auto-admin”
techniques for minimizing system administration effort. This period also saw a
significant growth in use of open-source database systems, particularly PostgreSQL
and MySQL. The latter part of the decade has seen growth in specialized databases
for data analysis, in particular column-stores, which in effect store each column of a
table as a separate array, and highly parallel database systems designed for analysis of
very large data sets. Several novel distributed data-storage systems have been built to
handle the data management requirements of very large Web sites such as Amazon,
Facebook, Google, Microsoft and Yahoo!, and some of these are now offered as Web
services that can be used by application developers. There has also been substantial
work on management and analysis of streaming data, such as stock-market ticker data
or computer network monitoring data. Data-mining techniques are now widely
deployed; example applications include Web-based product-recommendation systems
and automatic placement of relevant advertisements on Web pages.
Application of DBMS

1. Railway Reservation System


In a railway reservation system, the database is needed to store records of ticket bookings
and the status of train arrivals and departures. Additionally, if a train runs late, people come
to know of it through database updates.
2. Library Management System
There are many books in a library, so it is difficult to keep a record of all of them in a
register or ledger. A database management system (DBMS) is therefore used to maintain all
the information related to each book’s name, issue date, availability, and author.
3. Banking
A database management system is used to store customers’ transaction information in the
database.
4. Education Sector
Nowadays, examinations are conducted online by many schools and colleges, which manage
all examination data through a database management system (DBMS). In addition, students’
enrollment details, grades, courses, fees, attendance, results, and so on are all stored in the
database.
5. Credit Card Transactions
A database management system is used for recording credit card purchases and generating
monthly statements.
6. Social Media Sites
We all use social media sites to connect with friends and share our views with the world.
Every day, many people sign up for social media accounts on sites such as Pinterest,
Facebook, Twitter, and Google Plus. Through the use of a database management system, all
user information is stored in the database, which is what enables us to connect with others.
7. Telecommunications
No telecommunication company can operate without a DBMS. A database management
system is essential for these companies to store call details and monthly postpaid bills in the
database.
8. Accounting and Finance
A database management system is used for storing information about sales and the holding
and purchase of financial instruments, such as stocks and bonds, in a database.
9. E-Commerce Websites
These days, online shopping has become a major trend. No one wants to visit a shop and
spend their time there; everyone prefers to shop through online shopping websites (for
example, Amazon, Flipkart, or Snapdeal) from home. All products are listed and sold with
the help of a database management system (DBMS), which also handles invoices, payments,
and purchase information.
10. Human Resource Management
Big firms or organizations have many workers or employees. They store information about
employees’ salaries, taxes, and work assignments with the help of a database management
system (DBMS).
ADVANTAGES
1. Data Security: The more accessible and usable a database is, the more prone it is to
security issues. As the number of users increases, so does the rate of data transfer and
data sharing, increasing the risk to data security. In the corporate world, companies
invest large amounts of money, time, and effort to ensure their data is secure and used
properly. A Database Management System (DBMS) provides a better platform for
enforcing data privacy and security policies, thus helping companies improve data
security.
2. Data integration: A database management system gives us access to well-managed
and synchronized data. This makes data handling easy, gives an integrated view of
how a particular organization is working, and helps keep track of how one segment of
the company affects another.
3. Data abstraction: The major purpose of a database system is to provide users with an
abstract view of the data. The complex algorithms developers use to increase the
efficiency of the database are hidden from users behind various levels of data
abstraction, allowing users to interact with the system easily.
4. Reduction in data redundancy: When working with a structured database, a DBMS
provides the means to prevent the input of duplicate items. For example, if the same
student appears in two different rows, one of the duplicates will be rejected or
removed.
5. Data sharing: A DBMS provides a platform for sharing data across multiple
applications and users, which can increase productivity and collaboration.
Disadvantages
Performance: The traditional file system is written for small organizations and for specific
applications, so its performance is generally very good. For small-scale firms, a DBMS
may not give comparable performance, as its speed can be slow; as a result, some
applications will not run as fast as they could. Since the speed of the overall system
depends on the DBMS’s performance, a full DBMS may therefore be a poor fit for small
firms.
Damaged Part: If one part of the database is corrupted or damaged, the entire database
may be affected.
Compatibility: DBMS software may not be compatible with other software systems or
platforms, making it difficult to integrate with other applications.
Security: A DBMS can be vulnerable to security breaches if not properly configured and
managed. This can lead to data loss or theft.
Data Isolation: Because data are scattered in various files, and files may be in different
formats, writing new application programs to retrieve the appropriate data is difficult.
RDBMS-Relational Database Management System
In relational databases, the relationships between data files are relational, whereas
hierarchical and network databases require the user to traverse a hierarchy in order to
access needed data. Relational databases connect the data in different files by using
common data elements, or key fields. Data in a relational database is stored in different
tables, each having a key field that uniquely identifies each row; as a result, relational
databases are more reliable than either hierarchical or network database structures. In
relational databases, the tables or files filled with data are called relations, a row is called
a record or tuple, and columns are referred to as attributes or fields. Relational databases
work on the principle that each table has a key field that uniquely identifies each row, and
that these key fields can be used to connect one table of data to another.
Features of RDBMS
 Data must be stored in tabular form in the database file; that is, it should be organized
in the form of rows and columns.
 Each row of a table is called a record or tuple. The collection of such records is
known as the cardinality of the table.
 Each column of a table is called an attribute or field. The collection of such columns
is called the arity of the table.
 No two records of a table can be the same. Data duplication is therefore avoided by
using a candidate key: a minimal set of attributes required to identify each record
uniquely.
 Tables are related to each other with the help of foreign keys.
 Database tables also allow NULL values; if the value of any element of the table is
not filled or is missing, it becomes a NULL value, which is not equivalent to zero.
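These features can be sketched with Python’s built-in `sqlite3` module. The table and column names below are hypothetical; the sketch shows rows and columns, a primary (candidate) key, a foreign key linking two tables, and a NULL value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,   -- candidate key: identifies each record uniquely
    name    TEXT NOT NULL)""")
con.execute("""CREATE TABLE student (
    roll_no INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    email   TEXT,                  -- nullable: a missing value is NULL, not zero
    dept_id INTEGER REFERENCES department(dept_id))""")  # foreign key

con.execute("INSERT INTO department VALUES (1, 'CS')")
con.execute("INSERT INTO student VALUES (101, 'Asha', NULL, 1)")

# the foreign key lets us connect one table of data to another
row = con.execute("""SELECT s.name, d.name FROM student s
                     JOIN department d ON s.dept_id = d.dept_id""").fetchone()
print(row)   # ('Asha', 'CS')
```

With `PRAGMA foreign_keys = ON`, SQLite would also reject a student row whose `dept_id` has no matching department, which is how the key fields keep related tables consistent.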
Advantages of RDBMS
 Easy to Manage: Each table can be independently manipulated without affecting
others.
 Security: It is more secure consisting of multiple levels of security. Access of data
shared can be limited.
 Flexible: Updating of data can be done at a single point without making amendments
at multiple files. Databases can easily be extended to incorporate more records, thus
providing greater scalability. Also, facilitates easy application of SQL queries.
 Users: RDBMS supports client-server architecture, allowing multiple users to work
with the data together.
 Facilitates storage and retrieval of large amounts of data.
 Easy Data Handling:
o Data fetching is faster because of relational architecture.
o Data redundancy or duplicity is avoided due to keys, indexes, and
normalization principles.
o Data consistency is ensured because RDBMS is based on ACID properties for
data transactions(Atomicity Consistency Isolation Durability).
 Fault Tolerance: Replication of databases provides simultaneous access and helps
the system recover in case of disasters, such as power failures or sudden shutdowns.
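The ACID guarantee mentioned above can be sketched with `sqlite3`. The account table and the transfer rule are hypothetical; the point is atomicity: either every statement in the transaction commits, or a rollback undoes all of them:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
con.commit()

try:
    con.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")
    # suppose a business rule forbids negative balances
    (bal,) = con.execute("SELECT balance FROM account WHERE id = 1").fetchone()
    if bal < 0:
        raise ValueError("insufficient funds")
    con.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
    con.commit()
except ValueError:
    con.rollback()                 # the partial debit is undone

balances = [r[0] for r in con.execute("SELECT balance FROM account ORDER BY id")]
print(balances)   # [100, 50] -- unchanged: the failed transfer left no trace
```

Because the debit and credit sit in one transaction, the failed transfer never leaves the database in a half-updated state, which is the consistency the bullet above refers to.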
Disadvantages of RDBMS
 High Cost and Extensive Hardware and Software Support: Huge costs and setups
are required to make these systems functional.
 Scalability: As more data is added, additional servers, power, and memory are
required.
 Complexity: Voluminous data creates complexity in understanding the relations and
may lower performance.
 Structured Limits: The fields or columns of a relational database are enclosed
within various limits, which may lead to loss of data.
OODBMS – Object oriented Database Management System
In this model we apply the functionality of object-oriented programming, which involves
more than the storage of programming language objects. Object DBMSs extend the
semantics of C++ and Java to provide full-featured database programming capability, while
retaining native language compatibility: they add database functionality to object
programming languages. This approach unifies application and database development into a
consistent data model and language environment. Applications require less code, use more
natural data modeling, and their code bases are easier to maintain; object developers can
write complete database applications with a modest amount of additional effort. The
object-oriented database is an integration of object-oriented programming language systems
and persistent systems. The power of object-oriented databases comes from the uniform
treatment of both persistent data, as found in databases, and transient data, as found in
executing programs. Object-oriented databases use small, reusable pieces of software called
objects.
The objects themselves are stored in the object-oriented database. Each object consists of
two elements:
1. A piece of data (e.g., sound, video, text, or graphics).
2. Instructions, or software programs called methods, for what to do with the data.
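As a loose analogy (not a real ODBMS, just a sketch), the Python below shows an object bundling both elements, data and methods, and being stored and retrieved whole with `pickle`, without ever being mapped onto relational tables. The `AudioClip` class is hypothetical:

```python
import pickle

class AudioClip:
    def __init__(self, title, samples):
        self.title = title         # element 1: the data
        self.samples = samples

    def duration(self, rate=4):    # element 2: a method acting on the data
        return len(self.samples) / rate

clip = AudioClip("beep", [0, 1, 0, -1, 0, 1, 0, -1])
stored = pickle.dumps(clip)        # the object is stored as-is, state and identity
restored = pickle.loads(stored)
print(restored.title, restored.duration())
```

An ODBMS does this persistently and transactionally, but the essential idea is the same: the application works with whole objects, not with rows that must be translated back into objects.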
Advantages:
Supports Complex Data Structures: ODBMS is designed to handle complex data
structures, such as inheritance, polymorphism, and encapsulation. This makes it easier to
work with complex data models in an object-oriented programming environment.
Improved Performance: ODBMS provides improved performance compared to traditional
relational databases for complex data models. ODBMS can reduce the amount of mapping
and translation required between the programming language and the database, which can
improve performance.
Reduced Development Time: ODBMS can reduce development time since it eliminates the
need to map objects to tables and allows developers to work directly with objects in the
database.
Supports Rich Data Types: ODBMS supports rich data types, such as audio, video, images,
and spatial data, which can be challenging to store and retrieve in traditional relational
databases.
Scalability: ODBMS can scale horizontally and vertically, which means it can handle larger
volumes of data and can support more users.
Disadvantages:
Limited Adoption: ODBMS is not as widely adopted as traditional relational databases,
which means it may be more challenging to find developers with experience working with
ODBMS.
Lack of Standardization: ODBMS lacks standardization, which means that different
vendors may implement different features and functionality.
Cost: ODBMS can be more expensive than traditional relational databases since it requires
specialized software and hardware.
Integration with Other Systems: ODBMS can be challenging to integrate with other
systems, such as business intelligence tools and reporting software.
Scalability Challenges: ODBMS may face scalability challenges due to the complexity of
the data models it supports, which can make it challenging to partition data across multiple
nodes.
Object-Relational Database Management System
An Object-Relational Database Management System (ORDBMS) is a database management
system that incorporates elements of both relational and object-oriented database models.
Features of ORDBMS
1. Object-Oriented Capabilities:
o Support for Complex Data Types: ORDBMS can handle complex data
types, including user-defined types, which allow for the representation of real-
world entities more accurately.
o Inheritance: Objects in an ORDBMS can inherit properties and methods from
other objects, enabling code reuse and simplification of data modeling.
o Encapsulation: Data and operations on data can be encapsulated in objects,
promoting modularity and reusability.
o Polymorphism: ORDBMS supports polymorphism, allowing objects to be
treated as instances of their parent class, enhancing flexibility in data
manipulation.
2. Relational Database Capabilities:
o SQL Support: ORDBMS fully supports SQL for querying and manipulating
data, ensuring compatibility with existing applications and tools.
o Transactional Integrity: ORDBMS provides robust support for ACID
(Atomicity, Consistency, Isolation, Durability) properties, ensuring data
integrity and reliability.
o Scalability and Performance: ORDBMS are designed to handle large
volumes of data and complex queries efficiently, making them suitable for
enterprise-level applications.
3. Enhanced Data Modeling:
o Support for Multimedia and Spatial Data: ORDBMS can store and manage
multimedia data (images, videos, audio) and spatial data (geographic
information), which are often required in modern applications.
o Extensibility: Users can extend the database with custom functions and
procedures, tailored to specific application needs.
4. Interoperability:
o Integration with Programming Languages: ORDBMS often provide
integration with popular programming languages (e.g., Java, Python, C++),
allowing developers to use familiar tools and frameworks.
Challenges and Considerations
While Object-Relational Database Management Systems (ORDBMS) offer numerous
benefits, they also present certain challenges and considerations that organizations should be
aware of:
Complexity
1. Learning Curve: The combination of relational and object-oriented paradigms can be
complex, requiring database administrators and developers to have a solid
understanding of both models.
2. Complex Schema Design: Designing a schema that leverages both relational and
object-oriented features can be intricate and time-consuming.
Performance Overhead
1. Processing Overhead: The additional features and capabilities of an ORDBMS can
introduce processing overhead, potentially impacting performance compared to
simpler relational databases.
2. Optimization Challenges: Optimizing queries and data structures in an ORDBMS
can be more challenging due to the complexity of the object-relational model.
Compatibility and Integration
1. Compatibility Issues: Integrating an ORDBMS with existing systems and
applications that rely on traditional relational databases may require significant
adjustments.
2. Tool Support: Not all database management tools and applications fully support the
advanced features of ORDBMS, which can limit their utility in certain environments.
Cost
1. Higher Costs: ORDBMS can be more expensive to implement and maintain due to
the need for specialized skills and more sophisticated hardware and software
infrastructure.
2. Licensing Fees: Commercial ORDBMS products may come with higher licensing
fees compared to traditional relational databases.
Data Migration
1. Migration Efforts: Migrating data from a traditional relational database to an
ORDBMS can be complex and require significant effort to ensure data integrity and
compatibility.
2. Data Transformation: The process of transforming data to fit the object-relational
model can introduce risks and require thorough testing and validation.
DATA WAREHOUSING
A data warehouse is a collection of data marts representing historical data from different
operations in the company. This data is stored in a structure optimized for querying and data
analysis. Table design, dimensions, and organization should be consistent throughout a data
warehouse so that reports or queries across the data warehouse are consistent.
Integrated: Data is gathered into the data warehouse from a variety of sources and
merged into a coherent whole.
Time-variant: All data in the data warehouse is identified with a particular time period.
Non-volatile: Data in a data warehouse is stable; more data is added, but data is never
removed. This enables management to gain a consistent picture of the business. A data
warehouse is thus a single, complete, and consistent store of data obtained from a variety of
different sources, made available to end users in a form they can understand and use in a
business context.
Benefits of data warehousing
• Data warehouses are designed to perform well with aggregate queries running on large
amounts of data.
• The structure of data warehouses is easier for end users to navigate, understand, and query
against, unlike relational databases, which are primarily designed to handle many transactions.
• Data warehouses enable queries that cut across different segments of a company's operation.
E.g. production data could be compared against inventory data even if they were originally
stored in different databases with different structures.
• Queries that would be complex in highly normalized databases can be easier to build and
maintain in data warehouses, decreasing the workload on transaction systems.
• Data warehousing is an efficient way to manage and report on data that comes from a
variety of sources, is non-uniform, and is scattered throughout a company.
• Data warehousing is an efficient way to manage demand for lots of information from lots of
users.
• Data warehousing provides the capability to analyze large amounts of historical data for
nuggets of wisdom that can provide an organization with competitive advantage.
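As a sketch of the cross-segment benefit above, the following builds a tiny in-memory star schema and runs one aggregate query across production and inventory facts. All table and column names are hypothetical, invented for illustration.

```python
import sqlite3

# Hypothetical star schema: two fact tables share a conformed date dimension.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_production (date_key INTEGER, units_built INTEGER);
CREATE TABLE fact_inventory (date_key INTEGER, units_on_hand INTEGER);
INSERT INTO dim_date VALUES (1, '2024-01'), (2, '2024-02');
INSERT INTO fact_production VALUES (1, 100), (2, 120);
INSERT INTO fact_inventory VALUES (1, 40), (2, 55);
""")

# One aggregate query cutting across both operational segments,
# joined through the shared dimension.
rows = cur.execute("""
    SELECT d.month,
           SUM(p.units_built)   AS built,
           SUM(i.units_on_hand) AS on_hand
    FROM dim_date d
    JOIN fact_production p ON p.date_key = d.date_key
    JOIN fact_inventory  i ON i.date_key = d.date_key
    GROUP BY d.month
    ORDER BY d.month
""").fetchall()
print(rows)
```

Because both fact tables conform to the same date dimension, production can be compared against inventory in a single query, even though the data may have originated in different systems.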
DESIGN OF DATA WAREHOUSE
The following nine-step method is followed in the design of a data warehouse:
1. Choosing the subject matter
2. Deciding what a fact table represents
3. Identifying and conforming the dimensions
4. Choosing the facts
5. Storing pre-calculations in the fact table
6. Rounding out the dimension table
7. Choosing the duration of the DB
8. The need to track slowly changing dimensions
9. Deciding the query priorities and query model
Technical considerations
A number of technical issues are to be considered when designing a data warehouse
environment.
These issues include:
• The hardware platform that would house the data warehouse
• The DBMS that supports the warehouse data
• The communication infrastructure that connects the data marts, operational systems and end
users
• The hardware and software to support the metadata repository
• The systems management framework that enables administration of the entire environment
Implementation considerations
The following logical steps are needed to implement a data warehouse:
• Collect and analyze business requirements
• Create a data model and a physical design
• Define data sources
• Choose the database technology and platform
• Extract the data from the operational databases, transform it, clean it up and load it into the
warehouse
• Choose database access and reporting tools
• Choose database connectivity software
• Choose data analysis and presentation software
• Update the data warehouse
ARCHITECTURE OF DATA WAREHOUSING
The data in a data warehouse comes from operational systems of the organization as well as
from other external sources. These are collectively referred to as source systems. The data
extracted from source systems is stored in an area called the data staging area, where the data
is cleaned, transformed, combined and de-duplicated to prepare it for use in the data
warehouse. The data staging area is generally a collection of machines where simple
activities like sorting and sequential processing take place. The data staging area does not
provide any query or presentation services. As soon as a system provides query or presentation services, it
is categorized as a presentation server. A presentation server is the target machine on which
the data is loaded from the data staging area organized and stored for direct querying by end
users, report writers and other applications. The three different kinds of systems that are
required for a data warehouse are:
1. Source Systems
2. Data Staging Area
3. Presentation servers
1. OPERATIONAL DATA
The data warehouse is supplied with data from:
(i) The data from the mainframe systems in the traditional network and
hierarchical format
(ii) Data can also come from the relational DBMS like Oracle, Informix.
(iii) In addition to these internal data, operational data also includes external data obtained
from commercial databases and databases associated with supplier and customers.
2. LOAD MANAGER The load manager performs all the operations associated with
extraction and loading data into the data warehouse. These operations include simple
transformations of the data to prepare the data for entry into the warehouse. The size and
complexity of this component will vary between data warehouses and may be constructed
using a combination of vendor data loading tools and custom built programs.
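A minimal sketch of the load manager's work, with invented source field names and an assumed staging table: rows are extracted from an operational source, given simple transformations (trimming, type casting), and loaded.

```python
import sqlite3

# Hypothetical operational extract: raw strings straight from the source system.
source_rows = [
    {"cust": " Alice ", "amount": "120.50", "ts": "2024-01-05"},
    {"cust": "Bob",     "amount": "80.00",  "ts": "2024-01-06"},
]

def transform(row):
    # Simple transformations to prepare data for entry into the warehouse:
    # trim stray whitespace and cast the amount to a number.
    return (row["cust"].strip(), float(row["amount"]), row["ts"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage_sales (customer TEXT, amount REAL, sale_date TEXT)")
conn.executemany("INSERT INTO stage_sales VALUES (?, ?, ?)",
                 [transform(r) for r in source_rows])
total = conn.execute("SELECT SUM(amount) FROM stage_sales").fetchone()[0]
print(total)
```

A real load manager would combine vendor loading tools with custom programs like this, but the extract, transform, load shape is the same.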
WAREHOUSE MANAGER
The warehouse manager performs all the operations associated with the management of data
in the warehouse. This component is built using vendor data management tools and custom
built programs. The operations performed by the warehouse manager include:
(i) Analysis of data to ensure consistency
(ii) Transformation and merging of the source data from temporary storage into the data
warehouse tables
(iii) Creation of indexes and views on the base tables
(iv) Denormalization
(v) Generation of aggregations
(vi) Backing up and archiving of data
3. QUERY MANAGER The query manager performs all operations associated with
management of user queries. This component is usually constructed using vendor end-
user access tools, data warehousing monitoring tools, database facilities and custom
built programs. The complexity of a query manager is determined by facilities
provided by the end-user access tools and database.
4. DETAILED DATA This area of the warehouse stores all the detailed data in the
database schema. In most cases detailed data is not stored online but is aggregated to
the next level of detail. However, detailed data is added regularly to the warehouse to
supplement the aggregated data.
5. LIGHTLY AND HIGHLY SUMMARIZED DATA This area of the data warehouse
stores all the predefined lightly and highly summarized (aggregated) data generated
by the warehouse manager. This area of the warehouse is transient as it will be subject
to change on an ongoing basis in order to respond to the changing query profiles. The
purpose of the summarized information is to speed up the query performance. The
summarized data is updated continuously as new data is loaded into the warehouse
6. ARCHIVE AND BACK UP DATA This area of the warehouse stores detailed and
summarized data for the purpose of archiving and back up. The data is transferred to
storage archives such as magnetic tapes or optical disks
7. META DATA The data warehouse also stores all the metadata (data about data)
definitions used by all processes in the warehouse. It is used for a variety of purposes,
including:
(i) The extraction and loading process – metadata is used to map data sources to a
common view of information within the warehouse.
(ii) The warehouse management process – metadata is used to automate the production
of summary tables.
(iii) As part of the query management process, metadata is used to direct a query to the
most appropriate data source
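The mapping use of metadata in point (i) can be sketched as a lookup from source-system fields to the warehouse's common view. Every source, field and column name below is invented for illustration.

```python
# Hypothetical metadata: source-system fields mapped to common warehouse columns.
metadata = {
    "crm.cust_nm":   "customer_name",
    "erp.cust_name": "customer_name",
    "erp.ord_amt":   "order_amount",
}

def to_common_view(source, record):
    # Rename each source field to its warehouse column via the metadata.
    return {metadata[f"{source}.{field}"]: value
            for field, value in record.items()}

# Two differently named source fields land in the same warehouse column.
print(to_common_view("crm", {"cust_nm": "Alice"}))
print(to_common_view("erp", {"cust_name": "Bob"}))
```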
8. END-USER ACCESS TOOLS The principal purpose of a data warehouse is to provide
information to business managers for strategic decision-making. These users interact
with the warehouse using end-user access tools. Examples of end-user access tools
include: (i) Reporting and Query Tools (ii) Application Development Tools
(iii) Executive Information Systems Tools (iv) Online Analytical Processing Tools (v)
Data Mining Tools
Data Mart
A data mart is a simple form of a data warehouse that is focused on a single subject (or
functional area), such as sales, finance or marketing. Data marts are often built and
controlled by a single department within an organization. Given their single-subject
focus, data marts usually draw data from only a few sources. The sources could be
internal operational systems, a central data warehouse, or external data
Dependent and Independent Data Marts
There are two basic types of data marts: dependent and independent. The categorization
is based primarily on the data source that feeds the data mart. Dependent data marts draw
data from a central data warehouse that has already been created. Independent data marts,
in contrast, are standalone systems built by drawing data directly from operational or
external sources of data, or both. The main difference between independent and
dependent data marts is how you populate the data mart; that is, how you get data out of
the sources and into the data mart.
This step, called the Extraction, Transformation and Loading (ETL) process, involves
moving data from operational systems, filtering it, and loading it into the data mart. With
dependent data marts, this process is somewhat simplified because formatted and
summarized (clean) data has already been loaded into the central data warehouse. The
ETL process for dependent data marts is mostly a process of identifying the right subset
of data relevant to the chosen data mart subject and moving a copy of it, perhaps in a
summarized form.
With independent data marts, however, you must deal with all aspects of the ETL
process, much as you do with a central data warehouse. The number of sources is likely to
be fewer and the amount of data associated with the data mart is less than for the warehouse,
given the focus on a single subject. The motivations behind the creation of these two
types of data marts are also typically different.
Dependent data marts are usually built to achieve improved performance and availability,
better control, and lower telecommunication costs resulting from local access of data
relevant to a specific department. The creation of independent data marts is often driven
by the need to have a solution within a shorter time.
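The simplified ETL of a dependent data mart can be sketched as below: the clean central warehouse already exists, so building the mart is mostly selecting the subject-relevant subset and summarizing it. The data and field names are illustrative.

```python
# Hypothetical central warehouse rows, already cleaned and formatted.
warehouse = [
    {"dept": "sales",   "region": "N", "amount": 100},
    {"dept": "sales",   "region": "S", "amount": 150},
    {"dept": "finance", "region": "N", "amount": 999},
]

def build_sales_mart(rows):
    # Identify the right subset for the chosen subject (sales) ...
    subset = [r for r in rows if r["dept"] == "sales"]
    # ... and move a copy of it in summarized form (totals by region).
    summary = {}
    for r in subset:
        summary[r["region"]] = summary.get(r["region"], 0) + r["amount"]
    return summary

mart = build_sales_mart(warehouse)
print(mart)  # {'N': 100, 'S': 150}
```

An independent data mart would instead have to perform the full cleansing and transformation itself before this step.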
Steps in Implementing a Data Mart
1. Designing The design step is first in the data mart process.
This step covers all of the tasks from initiating the request for a data mart through
gathering information about the requirements, and developing the logical and physical
design of the data mart. The design step involves the following tasks:
• Gathering the business and technical requirements
• Identifying data sources
• Selecting the appropriate subset of data
• Designing the logical and physical structure of the data mart
2. Constructing
This step includes creating the physical database and the logical structures associated with
the data mart to provide fast and efficient access to the data. This step involves the
following tasks:
• Creating the physical database and storage structures, such as tablespaces, associated
with the data mart
• Creating the schema objects, such as tables and indexes defined in the design step
• Determining how best to set up the tables and the access structures
3. Populating
The populating step covers all of the tasks related to getting the data from the source,
cleaning it up, modifying it to the right format and level of detail, and moving it into the
data mart. More formally stated, the populating step involves the following tasks:
• Mapping data sources to target data structures
• Extracting data
• Cleansing and transforming the data
• Loading data into the data mart
• Creating and storing metadata
4. Accessing
The accessing step involves putting the data to use: querying the data, analyzing it, creating
reports, charts, and graphs, and publishing these. Typically, the end user uses a graphical
front-end tool to submit queries to the database and display the results of the queries. The
accessing step requires that you perform the following tasks:
• Set up an intermediate layer for the front-end tool to use. This layer, the meta layer,
translates database structures and object names into business terms, so that the end user can
interact with the data mart using terms that relate to the business function.
• Maintain and manage these business interfaces.
• Set up and manage database structures, like summarized tables, that help queries submitted
through the front-end tool execute quickly and efficiently.
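The meta layer in the first task can be sketched as a simple name translation; the physical column names and business terms below are hypothetical.

```python
# Hypothetical meta layer: physical column names -> business terms.
meta_layer = {
    "tbl_sls.amt_usd": "Sales Amount (USD)",
    "tbl_sls.rgn_cd":  "Sales Region",
}

def business_view(columns):
    # Show the business term where one is defined, else fall back to the raw name.
    return [meta_layer.get(col, col) for col in columns]

labels = business_view(["tbl_sls.amt_usd", "tbl_sls.rgn_cd"])
print(labels)
```

The end user then queries by "Sales Amount (USD)" rather than by the cryptic physical column name.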
5.Managing
This step involves managing the data mart over its lifetime. In this step, you perform
management tasks such as the following:
• Providing secure access to the data
• Managing the growth of the data
• Optimizing the system for better performance
• Ensuring the availability of data even with system failures
DATA MINING
Data Mining is defined as extracting information from huge sets of data. In other words, we
can say that data mining is the procedure of mining knowledge from data.
Stages of data mining
1. Data gathering. Identify and assemble relevant data for an analytics application. The
data might be located in different source systems, a data warehouse or a data lake, an
increasingly common repository in big data environments that contains a mix of
structured and unstructured data. External data sources can also be used. Wherever the
data comes from, a data scientist often moves it to a data lake for the remaining steps
in the process.
2. Data preparation. This stage includes a set of steps to get the data ready to be mined.
Data preparation starts with data exploration, profiling and pre-processing, followed
by data cleansing work to fix errors and other data quality issues, such as duplicate or
missing values. Data transformation is also done to make data sets consistent, unless a
data scientist wants to analyze unfiltered raw data for a particular application.
3. Data mining. Once the data is prepared, a data scientist chooses the appropriate data
mining technique and then implements one or more algorithms to do the mining.
These techniques, for example, could analyze data relationships and detect patterns,
associations and correlations. In machine learning applications, the algorithms
typically must be trained on sample data sets to look for the information being sought
before they're run against the full set of data.
4. Data analysis and interpretation. The data mining results are used to create
analytical models that can help drive decision-making and other business actions. The
data scientist or another member of a data science team must also communicate the
findings to business executives and users, often through data visualization and the use
of data storytelling techniques.
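The data preparation stage can be illustrated with a minimal cleansing pass over invented records: exact duplicates are dropped and a missing value is imputed with the mean, two of the data quality fixes mentioned above.

```python
# Hypothetical raw records with quality issues.
raw = [
    {"id": 1, "age": 34},
    {"id": 1, "age": 34},      # duplicate record
    {"id": 2, "age": None},    # missing value
    {"id": 3, "age": 40},
]

def prepare(rows):
    # Drop duplicate records by id.
    seen, clean = set(), []
    for r in rows:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        clean.append(dict(r))
    # Impute missing ages with the mean of the known values.
    known = [r["age"] for r in clean if r["age"] is not None]
    mean_age = sum(known) / len(known)
    for r in clean:
        if r["age"] is None:
            r["age"] = mean_age
    return clean

prepared = prepare(raw)
print(prepared)
```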
Benefits of data mining
 More effective marketing and sales. Data mining helps marketers better understand
customer behavior and preferences, which helps them create targeted marketing and
advertising campaigns. Similarly, sales teams can use data mining results to improve
lead conversion rates and sell additional products and services to existing customers.
 Better customer service. Data mining helps companies identify potential customer
service issues more promptly and give contact center agents up-to-date information to
use in calls and online chats with customers.
 Improved SCM. Organizations can spot market trends and forecast product demand
more accurately, enabling them to better manage inventories of goods and supplies.
Supply chain managers can also use information from data mining to optimize
warehousing, distribution and other logistics operations.
 Increased production uptime. Mining operational data from sensors on
manufacturing machines and other industrial equipment supports predictive
maintenance applications to identify potential problems before they occur, helping to
avoid unscheduled downtime.
 Stronger risk management. Risk managers and business executives can better assess
financial, legal, cybersecurity and other risks to a company and develop plans for
managing them.
 Lower costs. Data mining helps improve cost savings through operational efficiencies
in business processes and reduces redundancy and waste in corporate spending.
Types of data mining techniques
 Association rule mining. In data mining, association rules are if-then statements that
identify relationships between data elements. Support and confidence criteria are used
to assess the relationships. Support measures how frequently the related elements
appear in a data set, while confidence reflects the number of times an if-then
statement is accurate.
 Classification. This approach assigns the elements in data sets to different categories
defined as part of the data mining process. Decision trees, Naive Bayes classifiers, k-
nearest neighbors (KNN) and logistic regression are examples of classification
methods.
 Clustering. In this case, data elements that share particular characteristics are
grouped together into clusters as part of data mining applications. Examples include
k-means clustering, hierarchical clustering and Gaussian mixture models.
 Regression. This method finds relationships in data sets by calculating predicted data
values based on a set of variables. Linear regression and multivariate regression are
examples. Decision trees and other classification methods can also be used to do
regressions.
 Sequence and path analysis. Data can also be mined to look for patterns in which a
particular set of events or values leads to later ones.
 Neural networks. A neural network is a set of algorithms that simulates the activity
of the human brain, where data is processed using nodes. Neural networks are
particularly useful in complex pattern recognition applications involving deep
learning, a more advanced offshoot of machine learning.
 Decision trees. This process classifies or predicts potential results using either
classification or regression methods. Treelike structures are used to represent the
potential decision outcomes.
 KNN. This data mining method classifies data based on its proximity to other data
points. Assuming nearby data points are more similar to each other than other data
points, KNN is used to predict group features.
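The support and confidence criteria used in association rule mining can be computed directly from their definitions; the market-basket transactions below are invented for illustration.

```python
# Hypothetical transaction set: each basket is a set of purchased items.
transactions = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread"},
    {"milk"},
]

def support(itemset):
    # Fraction of transactions containing every item in the set.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # How often the "then" part holds when the "if" part appears.
    return support(antecedent | consequent) / support(antecedent)

s = support({"bread", "butter"})       # bread and butter together in 2 of 4 baskets
c = confidence({"bread"}, {"butter"})  # butter appears in 2 of the 3 bread baskets
print(s, c)
```

A mining run would compute these measures for many candidate rules and keep only those above chosen support and confidence thresholds.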
Industry examples of data mining
 Retail. Online retailers mine customer data and internet clickstream records to help
them target marketing campaigns, ads and promotional offers to individual shoppers.
Data mining and predictive modeling also power the recommendation engines that
suggest possible purchases to website visitors, as well as inventory and SCM
activities.
 Financial services. Banks and credit card companies use data mining tools to build
financial risk models, detect fraudulent transactions, and vet loan and credit
applications. Data mining also plays a key role in marketing and identifying potential
upselling opportunities with existing customers.
 Insurance. Insurers rely on data mining to aid in pricing insurance policies and
deciding whether to approve policy applications, as well as for risk modeling and
managing prospective customers.
 Manufacturing. Data mining applications for manufacturers include efforts to
improve uptime and operational efficiency in production plants, supply chain
performance and product safety.
 Entertainment. Streaming services analyze what users are watching or listening to
and make personalized recommendations based on their viewing and listening habits.
Likewise, individuals might data mine software to learn more about it.
 Healthcare. Data mining helps doctors diagnose medical conditions, treat patients,
and analyze X-rays and other medical imaging results. Medical research also depends
heavily on data mining, machine learning and other forms of analytics.
 HR. HR departments typically work with large amounts of data. This includes
retention, promotion, salary and benefit data. Data mining analyzes this data to improve
HR processes.
 Social media. Social media companies use data mining to gather large amounts of
data about users and their online activities. Controversially, this data may be used for
targeted advertising or sold to third parties.
UNIT-4 INTEGRATED SYSTEMS, SECURITY AND CONTROL
KNOWLEDGE BASED DECISION SUPPORT SYSTEM
A system which supports the process of decision-making is known as a Decision Support
System (DSS). According to Scott Morton, “Decision Support Systems (DSS) are interactive
computer-based systems, which help decision-makers utilize data and models to solve
unstructured problems.”
Characteristics of Decision Support System
1. Provides Rapid Access to Information
2. Handles Large Amount of Data from Different Sources
3. Provides Report and Presentation Flexibility
4. Offers both Textual and Graphical Orientation
5. Supports Drill-Down Analysis
6. Performs Complex, Sophisticated Analysis and Comparisons Using advanced Software
Packages
Classification of Decision Support System
1. Representational Models:
- Estimate the consequences of decisions
- These models help users understand complex data and relationships, facilitating better
analysis and decision-making.
2. Optimization Models:
- Generate optimal solutions
- Represented in mathematical form
- Example: arranging training classes and material usage under a set of constraints.
3. Suggestion Models:
- Used where decisions are repetitive in nature
- Provide a specific suggested decision, bypassing other decision steps.
Example: an insurance renewal rate calculator
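A suggestion model such as the insurance renewal rate calculator can be sketched as one function that returns a specific suggested decision. The rating rules below are purely illustrative assumptions, not an actual rating method.

```python
def renewal_rate(base_premium, claims_last_year, loyalty_years):
    # Illustrative rules only: surcharge after a claim, otherwise a
    # no-claims bonus, plus a small loyalty discount capped at 10%.
    rate = base_premium
    rate *= 1.10 if claims_last_year > 0 else 0.95
    rate *= max(0.90, 1 - 0.01 * loyalty_years)
    return round(rate, 2)

# The model outputs one suggested decision directly from its inputs,
# the repetitive-decision pattern described above.
quote = renewal_rate(base_premium=500.0, claims_last_year=0, loyalty_years=5)
print(quote)
```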
STEPS IN KBDSS
1. Choose the Problem to be Solved: The concerned departments are made to participate in
the process of identifying the problem.
2. Select Software and Hardware: Suitable software and hardware platforms for building the
DSS are selected.
3. Data Acquisition and Management: Data must be acquired and maintained by forming a
knowledge base to solve the problem.
4. Model Base and its Management: A model base is identified, acquired or developed, and
relevant models are subsequently added to it.
5. Dialogue Subsystem and its Management: A user interface needs to be developed for the
DSS.
6. Knowledge Component: Knowledge engineering is performed using the knowledge base
to create the DSS.
7. Packaging: All required software components are put together to make a system. Each
component must be tested in isolation as well as integrated in the system.
8. Testing, Evaluation and Improvement: The DSS undergoes integration testing as well as
system testing with suitable test cases. After evaluation of the system, further improvement
requirements are identified.
9. User Training: Users must be properly trained so that efficient utilization of the DSS is
possible.
10. Documentation and Maintenance: Proper documentation must be done for future
maintenance of the DSS.
11. Adaptation: Due to the dynamic needs of users, the DSS must be adaptable to cater to
them.
SOCIAL MEDIA
Social media refers to online platforms and technologies that enable users to create, share,
and exchange content in virtual communities and networks. Social media has become an
integral part of modern communication, connecting people worldwide and providing a
platform for various interactions. Social media refers to new forms of media that involve
interactive participation. Challenges in defining social media arise from the variety of
stand-alone and built-in social media services currently available. Social media now has an
impact on both media and non-media domains.
Characteristics of Social Media:
1. User-Generated Content (UGC): Social media relies on content created and shared by
users. This includes text, images, videos, and other multimedia content.
2. Two-Way Communication: Social media facilitates interactive communication. Users can
engage in conversations, share feedback, and respond to content in real-time.
3. Networking and Connections: Social media platforms enable users to connect with others,
build networks, and establish relationships. This can include friends, family, colleagues, and
even strangers with shared interests.
4. Real-Time Updates: Information on social media is often updated in real time, providing
users with immediate access to news, events, and personal updates.
5. Multimedia Sharing: Users can share a variety of media, including photos, videos, GIFs,
and audio clips, allowing for diverse and engaging content.
6. Profile and Identity: Users create profiles that represent their identity on social media.
Profiles typically include personal information, interests, and a timeline of activity.
7. Privacy Settings: Social media platforms offer privacy settings that allow users to control
the visibility of their content and manage who can access their information.
8. Hashtags and Trends: Hashtags are used to categorize and organize content. Users can
follow trends and discover content related to specific topics through popular hashtags.
9. Notifications and Alerts: Users receive notifications for activities such as likes, comments,
and mentions, keeping them informed about interactions on their content.
10. Content Discovery Algorithms: Social media platforms use algorithms to curate and
display content based on user preferences, engagement history, and trends.
Social Media Tools:
Blogs: A blog (a truncation of "weblog") is an informational website consisting of discrete,
often informal diary-style text entries (posts). Posts are typically displayed in reverse
chronological order so that the most recent post appears first, at the top of the web page. A
blog post is an individual web page on your website that dives into a particular subtopic of
your blog. In addition, many blogs provide a forum to allow visitors to leave comments and
interact with the publisher. “To blog” is the act of composing material for a blog. Materials
are largely written, but pictures, audio, and videos are important elements of many blogs. The
“blogosphere” is the online universe of blogs.
Twitter:
Twitter, Inc. was an American social media company based in San Francisco, California. The
company operated the social networking service Twitter and previously the Vine short video
app and Periscope livestreaming service. X, formerly (and still colloquially) known as
Twitter, is a social media website based in the United States. With over 500 million users, it
is one of the world's largest social networks. Users can share and post text messages, images,
and videos known historically as "tweets”. It is a microblogging service -- a combination of
blogging and instant messaging -- for registered users to post, share, like and reply to tweets
with short messages. Nonregistered users can only read tweets. People use Twitter to get the
latest updates and promotions from brands; communicate with friends; and follow business
leaders, politicians and celebrities. Businesses use Twitter for brand awareness and public
relations -- part of their social media marketing strategy.
Using Twitter helps businesses:
 interact with customers;
 provide timely customer service;
 monitor the competition; and
 announce new products, sales and events.
Businesses can also purchase promoted tweets -- or ads -- to help marketers reach more users
or engage with followers. These tweets appear just like other posts but are labelled
"promoted."
Facebook:
Facebook is a social media and social networking service owned by American technology
conglomerate Meta Platforms. Created in 2004 by Mark Zuckerberg with four other Harvard
College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz,
and Chris Hughes, its name derives from the face book directories often given to American
university students. Membership was initially limited to Harvard students, gradually
expanding to other North American universities. Since 2006, Facebook allows everyone to
register from 13 years old (or older), except in the case of a handful of nations, where the age
limit is 14 years. Facebook allows you to send messages and post status updates to keep in
touch with friends and family. You can also share different types of content, like photos and
links. But sharing something on Facebook is a bit different from other types of online
communication. Unlike email or instant messaging, which are relatively private, the things
you share on Facebook are more public, which means they'll usually be seen by many other
people. While Facebook offers privacy tools to help you limit who can see the things you
share, it's important to understand that Facebook is designed to be more open and social than
traditional communication tools.
Whatsapp:
WhatsApp is a free-to-download messenger app for smartphones. WhatsApp uses the internet
to send messages, images, audio or video. The service is very similar to text messaging
services; however, because WhatsApp uses the internet to send messages, the cost of using
WhatsApp is significantly less than texting. You can also use WhatsApp on your desktop:
simply go to the WhatsApp website and download it for Mac or Windows. It is popular with
teenagers because of features like group chatting, voice messages and location sharing. The
service was created by WhatsApp Inc. of Mountain View, California, which was acquired by
Facebook in February 2014.
Instagram:
Instagram is an American photo and video sharing social networking service owned by Meta
Platforms. It allows users to upload media that can be edited with filters, be organized by
hashtags, and be associated with a location via geographical tagging. It also added messaging
features, the ability to include multiple images or videos in a single post, and a Stories
feature. Instagram began development in San Francisco as Burbn, a mobile check-in app
created by Kevin Systrom and Mike Krieger, and launched in October 2010. In March 2020, Instagram
launched a new feature called "Co-Watching". The new feature allows users to share posts
with each other over video calls. In May 2021, Instagram began allowing users in some
regions to add pronouns to their profile page. In April 2022, Instagram began testing the
removal of the ability to see "recent" posts from various hashtags.
Forums:
A forum is an online discussion board where people can ask questions, share their
experiences, and discuss topics of mutual interest. Forums are an excellent way to create
social connections and a sense of community. They can also help you to cultivate an interest
group about a particular subject.
YouTube:
YouTube is an American online video sharing and social media platform owned by Google.
Accessible worldwide, it was launched on February 14, 2005, by Steve Chen, Chad Hurley,
and Jawed Karim, three former employees of PayPal. Private individuals and large
production corporations have used YouTube to grow their audiences. Indie creators have
built grassroots followings numbering in the thousands at very little cost or effort, while mass
retail and radio promotion proved problematic. Concurrently, old media celebrities moved
into the website at the invitation of a YouTube management that witnessed early content
creators accruing substantial followings and perceived audience sizes potentially larger than
that attainable by television. The major features include Audio/video file upload, Live
Captioning, Reporting/Analytics, Social Sharing, Speech Recognition, Subtitles/Closed
Captions, Text Overlay, Time Stamps, etc.
Integrating Social media and mobile technologies in Information System:
Impact of Mobile technology on Social Media
 Mobile advancement has become a giant in the technology field, since apps were invented
and Steve Jobs launched the iPhone, iPad and iPod series.
 Now most of the world prefers using the internet on their mobile devices rather than sitting
at a computer.
 However, mobile technology has only just started; there is so much more ahead of us.
 The key benefits of integrating social media and mobile technologies include:
o Higher conversions
o Higher engagement
o More visibility
o Lower marketing costs
Use of smart phones and Mobile Technology
1.Communication: Voice Calls and Messaging: Smartphones allow traditional voice calls
and text messaging, serving as basic communication tools. Instant Messaging and Social
Media: Apps like WhatsApp, Facebook Messenger, and others provide real-time
communication, multimedia sharing, and social interactions.
2. Information Access: Internet Browsing: Smartphones enable users to access the internet
on-the-go, providing instant access to information, news, and online resources. Search
Engines: Mobile devices facilitate quick searches through search engines like Google,
allowing users to find information rapidly.
3. Entertainment: Streaming Services: Mobile technology supports video and music
streaming services, such as Netflix, YouTube, Spotify, and others. Mobile Games:
Smartphones host a wide variety of games, from casual to high-end graphics, catering to
diverse gaming preferences. Podcasts and Audiobooks: Users can access and consume a vast
array of podcasts and audiobooks on their mobile devices.
4. Productivity and Work: Email and Calendar: Smartphones provide access to email and
calendar applications, allowing users to stay organized and respond to work-related
communications. Document Editing and Cloud Storage: Mobile apps enable document
editing, collaboration, and access to files stored in the cloud (e.g., Google Drive, Dropbox).
Video Conferencing: Mobile devices support video conferencing tools like Zoom and
Microsoft Teams, facilitating remote work and virtual meetings.
5. Navigation and Location Services: GPS and Maps: Smartphones offer GPS navigation
and mapping services, helping users navigate, find locations, and get real-time directions.
Location-Based Services: Apps utilize location data for services such as local
recommendations, weather updates, and personalized content.
6. Health and Fitness: Fitness Apps: Mobile technology supports fitness and health-
tracking apps, monitoring physical activity, sleep, and nutrition. Healthcare Apps: Users can
access healthcare information, schedule appointments, and receive telemedicine services
through mobile apps.
7. E-commerce and Mobile Payments: Online Shopping: Users can browse and make
purchases through e-commerce apps, such as Amazon, eBay, and various retail platforms.
Mobile Wallets: Mobile payment services like Apple Pay, Google Pay, and others allow users
to make secure transactions using their smartphones.
8. Social Networking: Social Media Apps: Platforms like Facebook, Instagram, Twitter,
and LinkedIn are primarily accessed through mobile devices for social interactions, content
sharing, and networking.
9. Photography and Multimedia: Camera and Photo Editing: Smartphones come equipped
with high-quality cameras, and users can edit and share photos instantly. Video Recording
and Editing: Users can capture and edit videos directly on their smartphones, sharing content
on social media platforms.
10.Smart Home Integration: Smart Home Control: Mobile apps allow users to control and
monitor smart home devices, such as thermostats, lights, security cameras, and appliances.
The widespread adoption of smartphones and mobile technology has transformed the way
individuals and societies operate, offering convenience, connectivity, and access to a wealth
of information and services at our fingertips.
Advantages of Social Media:
1. Global Connectivity
2. Information Sharing and Awareness
3. Communication and Networking
4. Personal Expression and Creativity
5. Business and Marketing Opportunities
Disadvantages of Social Media:
1. Privacy Concerns
2. Misinformation and Fake News:
3. Cyberbullying and Harassment
4. Addiction and Time-Wasting
5. Comparative Social Pressure
Security
Security means the measures, policies, and technical procedures that are used to prevent
alteration, physical damage, theft, and unauthorized access to information systems.

Basic principles

1.Integrity

In information security, data integrity means maintaining and assuring the accuracy and
consistency of data over its entire life-cycle. This means that data cannot be modified in an
unauthorized or undetected manner. This is not the same thing as referential integrity in
databases, although it can be viewed as a special case of consistency as understood in the
classic ACID model of transaction processing. Integrity is violated when a message is
actively modified in transit. Information security systems typically provide message integrity
in addition to data confidentiality.
2.Availability

For any information system to serve its purpose, the information must be available when it is
needed. This means that the computing systems used to store and process the information, the
security controls used to protect it, and the communication channels used to access it must be
functioning correctly. High availability systems aim to remain available at all times,
preventing service disruptions due to power outages, hardware failures, and system upgrades.
Ensuring availability also involves preventing denial-of-service attacks, such as a flood of
incoming messages to the target system essentially forcing it to shut down.

3.Authenticity

In computing, e-Business, and information security, it is necessary to ensure that the data,
transactions, communications or documents (electronic or physical) are genuine. It is also
important for authenticity to validate that both parties involved are who they claim to be.
Some information security systems incorporate authentication features such as "digital
signatures", which give evidence that the message data is genuine and was sent by someone
possessing the proper signing key.
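Message integrity and authenticity, as described above, are commonly provided by message authentication codes. A minimal sketch using Python's standard hmac module (the key and message here are illustrative, not from the text):

```python
import hmac
import hashlib

def sign(message: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag proving integrity and authenticity."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message, key), tag)

key = b"shared-secret-key"           # hypothetical key shared by both parties
msg = b"transfer 100 to account 42"  # hypothetical message

tag = sign(msg, key)
assert verify(msg, key, tag)                               # untouched message verifies
assert not verify(b"transfer 900 to account 7", key, tag)  # tampering is detected
```

Only someone holding the shared key can produce a valid tag, which is the same idea a digital signature provides with a private signing key.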

Computer crimes

1. Phishing and Scam:

Phishing is a type of social engineering attack that targets the user and tricks them by sending
fake messages and emails to get sensitive information about the user or trying to download
malicious software and exploit it on the target system.

2. Identity Theft

Identity theft occurs when a cybercriminal uses another person’s personal data like credit
card numbers or personal pictures without their permission to commit a fraud or a crime.

3. Ransomware Attack

Ransomware attacks are a very common type of cybercrime. It is a type of malware that has
the capability to prevent users from accessing all of their personal data on the system by
encrypting them and then asking for a ransom in order to give access to the encrypted data.

4. Hacking/Misusing Computer Networks


This term refers to the crime of unauthorized access to private computers or networks and
misuse of it either by shutting it down or tampering with the data stored or other illegal
approaches.

5. Internet Fraud

Internet fraud is a type of cybercrime that makes use of the internet; it can be considered
a general term that groups all of the crimes that happen over the internet, such as spam,
banking fraud, theft of service, etc.

6.Cyber Bullying

It is also known as online or internet bullying. It includes sending or sharing harmful and
humiliating content about someone else which causes embarrassment and can be a reason for
the occurrence of psychological problems. It became very common lately, especially among
teenagers.

7. Cyber Stalking

Cyberstalking can be defined as persistent, unwanted contact from someone targeting other
individuals online with the aim of controlling and intimidating them, such as unwanted
continued calls and messages.

8. Software Piracy

Software piracy is the illegal use or copy of paid software with violation of copyrights or
license restrictions.

An example of software piracy is when you download a fresh, non-activated copy of
Windows and use what are known as "cracks" to obtain a valid license for Windows activation.
This is considered software piracy.

9.Social Media Frauds

The use of fake social media accounts to perform any kind of harmful activity, such as
impersonating other users or sending intimidating or threatening messages. One of the
easiest and most common social media frauds is email spam.

10. Online Drug Trafficking


With the rise of cryptocurrency technology, it became easy to transfer money in a secure,
private way and complete drug deals without drawing the attention of law enforcement. This
led to a rise in drug marketing on the internet.

Illegal drugs such as cocaine, heroin, or marijuana are commonly sold and traded online,
especially on what is known as the "Dark Web".

11. Electronic Money Laundering

Also known as transaction laundering. It is based on unknown companies or online businesses
that process approvable payment methods and credit card transactions, but with incomplete or
inconsistent payment information, for buying unknown products.

It is by far one of the most common and easiest money laundering methods.

12. Cyber Extortion

Cyber extortion is the demand for money by cybercriminals to give back some important data
they've stolen or stop doing malicious activities such as denial of service attacks.

13. Intellectual Property Infringement

It is the violation or breach of any protected intellectual property rights, such as copyrights
and industrial designs.

14. Online Recruitment Fraud

One of the less common cybercrimes, though one growing in popularity, is the release of
fake job opportunities by fake companies for the purpose of obtaining a financial
benefit from applicants or even making use of their personal data.

IS Vulnerability

In computer security, a vulnerability is a weakness which allows an attacker to reduce a
system's information assurance. Vulnerability is the intersection of three elements: a system
susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.
To exploit a vulnerability, an attacker must have at least one applicable tool or technique that
can connect to a system weakness. In this frame, vulnerability is also known as the attack
surface.
The National Information Assurance Training and Education Center defines vulnerability as:

1. A weakness in automated system security procedures, administrative controls, internal
controls, and so forth that could be exploited by a threat to gain unauthorized access to
information or disrupt critical processing.

2. A weakness in system security procedures, hardware design, internal controls, etc., which
could be exploited to gain unauthorized access to classified or sensitive information.

3. A weakness in the physical layout, organization, procedures, personnel, management,
administration, hardware, or software that may be exploited to cause harm to the ADP system
or activity. The presence of a vulnerability does not in itself cause harm; a vulnerability is
merely a condition or set of conditions that may allow the ADP system or activity to be
harmed by an attack.

4. An assertion primarily concerning entities of the internal environment (assets); we say that
an asset (or class of assets) is vulnerable (in some way, possibly involving an agent or
collection of agents); we write: V (i,e) where: e may be an empty set.

5. Susceptibility to various threats.

6. A set of properties of a specific internal entity that, in union with a set of properties of a
specific external entity, implies a risk.

7. The characteristics of a system which cause it to suffer a definite degradation (incapability
to perform the designated mission) as a result of having been subjected to a certain level of
effects in an unnatural (manmade) hostile environment.

Vulnerability and risk factor models

A resource (either physical or logical) may have one or more vulnerabilities that can be
exploited by a threat agent in a threat action. The result can potentially compromise the
confidentiality, integrity or availability of resources (not necessarily the vulnerable one)
belonging to an organization and/or other parties involved (customers, suppliers). The so-
called CIA triad is the basis of Information Security. An attack can be active when it attempts
to alter system resources or affect their operation, compromising integrity or availability. A
"passive attack" attempts to learn or make use of information from the system but does not
affect system resources, compromising confidentiality.

Disaster Management Information System (DMIS)

Disaster management process

1. Identify the critical business processes.

2. Assess the business risk: the probability of risk occurrence and the risk exposure.

3. Enlist the impact targets of damage for attention, management, and recovery.

4. Identify the life-saving sensitive data, files, software, and databases linked to these processes.

5. Prepare a plan for bridging the pre- and post-disaster scenarios so that continuity of data and
information is maintained.

6. Ensure all risks are suitably covered by appropriate insurance policies.

7. Authority and rights for decisions and actions in the event of a disaster should be clear in the DMP.

8. Test the DMP plan once a year in a simulated live model event.

Threats and control

1. Threats to facilities and IT infrastructure

-power failure and power related problem

-theft

-damage by disgruntled employees

-unauthorized use of IT structure

Controls

-Place critical hardware on a high floor

-provide security training to employees

-install closed-circuit cameras

2.Threats to communication system

-incorrect input due to communication breakdown


-insertion of viruses

-defective network operation

Controls

-firewalls

-error detection and correction method

-access logs

-log of system failure

3.Threats to database and DBMS

-corruption of data

-theft of data

-data inconsistency

Controls

-use of antivirus software

-backup copies

-restart and recovery procedure
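The backup and recovery controls listed above can be illustrated with a small script that copies a file and then verifies the copy with a checksum before trusting it. A minimal sketch using only Python's standard library (the file names and contents are hypothetical):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, used to detect a corrupted copy."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(source: Path, dest: Path) -> bool:
    """Copy `source` to `dest`, then verify the copy via checksum comparison."""
    shutil.copy2(source, dest)           # copy data and metadata
    return checksum(source) == checksum(dest)

# Stand-in for a critical database file that must survive a disaster.
workdir = Path(tempfile.mkdtemp())
src = workdir / "customers.db"
src.write_text("id,name\n1,Ada\n")

ok = backup(src, workdir / "customers.db.bak")
print("backup verified:", ok)
```

A real disaster-management plan would also store the verified copy off-site and rehearse the restore procedure, per step 8 of the process above.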

ADVANTAGES

1. Forecasting

2. Provides responsive measures

3. Provides recovery measures

4. Provides a sense of ownership

5. Helpful for particular communities


DISADVANTAGES

1. Reluctance to expose vulnerabilities

2. Unavailability of resources

3. Improper public awareness

Securing the Web

Web servers are one of the many public faces of an organization and one of the most easily
targeted. Web servers represent an interesting paradox namely, how do you share information
about your organization without giving away the so-called store? Solving this dilemma can
be a tough and thankless job; but it's also one of the most important. Before I get too far,
though, let's take a look at some of the threats that your server faces by virtue of being one of
the "troops" on the front line. Now, there are a tremendous number of threats facing a Web
server, and many depend on the applications, operating system, and environment you have
configured on the system itself. What I have assembled in this section are some of the more
generic attacks that your poor server may face.

Denial of service

The denial of service (DoS) attack is one of the real "old-school" attacks that a server can
face. The attack is very simple, and nowadays it's carried out by those individuals commonly
known as script kiddies, who basically have a low skill level. In a nutshell, a DoS attack is an
attack in which one system attacks another with the intent of consuming all the resources on
the system (such as bandwidth or processor cycles), leaving nothing behind for legitimate
requests. Generally, these attacks have been relegated to the category of annoyance, but don't
let that be a reason to lower your guard, because there are plenty of other things to keep you
up at night.
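A common first-line defence against resource-consumption attacks of this kind is rate limiting. The sketch below is a minimal token-bucket limiter in Python; the rate and capacity values are illustrative, and a production server would apply such a limit per client address:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request dropped: bucket exhausted

bucket = TokenBucket(rate=10, capacity=5)
burst = [bucket.allow() for _ in range(20)]  # a flood of near-instant requests
print(burst.count(True))  # only roughly the burst capacity gets through
```

Legitimate traffic arriving at a modest pace keeps being served, while a flood exhausts its tokens and is rejected rather than exhausting the server.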

Distributed denial of service

The distributed DoS (DDoS) attack is the big brother of the DoS attack and as such is meaner
and nastier. The goal of the DDoS attack is to do the same thing as the DoS, but on a much
grander and more complex scale. In a DDoS attack, instead of one system attacking another,
an attacker uses multiple systems to target a server, and by multiple systems I mean not
hundreds or thousands, but more on the order of hundreds of thousands. Where DoS is just an
annoyance, a DDoS attack can be downright deadly, as it can take a server offline quickly. The
good news is that the skill level required to pull a DDoS attack off is fairly high. Some of the
more common DDoS attacks include:
• FTP bounce attacks. A File Transfer Protocol (FTP) bounce attack is enacted when an
attacker uploads a specially constructed file to a vulnerable FTP server, which in turn
forwards it to another location, which generally is another server inside the organization. The
file that is forwarded typically contains some sort of payload designed to make the final
server do something that the attacker wants it to do.

• Port scanning attack. A port scanning attack is performed through the structured and
systematic scanning of a host. For example, someone may scan your Web server with the
intention of finding exposed services or other vulnerabilities that can be exploited. This attack
can be fairly easily performed with any one of a number of port scanners available freely on
the Internet. It also is one of the more common types of attacks, as it is so simple to pull off
that script kiddies attempt it just by dropping the host name or IP address of your server
(however, they typically don't know how to interpret the results). Keep in mind that a more
advanced attacker will use port scanning to uncover information for a later effort.
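The structured scanning described above can be approximated in a few lines of Python using `socket.connect_ex`, which reports whether a TCP port accepts connections. This is only a sketch for understanding the technique; scan only hosts you are authorized to test:

```python
import socket

def scan_ports(host: str, ports) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # don't hang on filtered ports
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Probing your own machine is harmless; scanning other hosts may be illegal.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real scanners such as Nmap add stealthier probe types and service fingerprinting, but the core idea is the same: each responding port reveals an exposed service.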

• Ping flooding attack. A ping flooding attack is a simple DDoS attack in which a computer
sends a packet (ping) to another system with the intention of uncovering information about
services or systems that are up or down. At the low end, a ping flood can be used to uncover
information covertly, but throttle up the packets being sent to a target or victim so that now,
the system will go offline or suffer slowdowns. This attack is "old school" but still very
effective, as a number of modern operating systems are still susceptible to this attack and can
be taken down.

• Smurf attack. This attack is similar to the ping flood attack but with a clever modification to
the process. In a Smurf attack, a ping command is sent to an intermediate network, where it is
amplified and forwarded to the victim. What was once a single "drop" now becomes a virtual
tsunami of traffic. Luckily, this type of attack is somewhat rare.

• SYN flooding. This attack requires some knowledge of the TCP/IP protocol suite—namely,
how the whole communication process works. The easiest way to explain this attack is
through an analogy. This attack is the networking equivalent of sending a letter to someone
that requires a response, but the letter uses a bogus return address. That individual sends your
letter back and waits for your response, but the response never comes, because it went into a
black hole some place. Enough SYN requests to the system and an attacker can use all the
connections on a system so that nothing else can get through.

• IP fragmentation attack. In this attack, an attacker uses advanced knowledge
of the TCP/IP protocol to break packets up into smaller pieces, or "fragments", that bypass
most intrusion-detection systems. In extreme cases, this type of attack can cause hangs, lock-
ups, reboots, blue screens, and other mischief. Luckily, this attack is a tough one to pull off.

• Simple Network Management Protocol (SNMP) attack. SNMP attacks are specifically
designed to exploit the SNMP service, which is used to manage the network and devices on
it. Because SNMP is used to manage network devices, exploiting this service can result in an
attacker getting detailed intelligence on the structure of the network that he or she can use to
attack you later.

Web page defacement

Web page defacement is seen from time to time around the Internet. As the name implies, a
Web page defacement results when a Web server is improperly configured, and an attacker
uses this flawed configuration to modify Web pages for any number of reasons, such as for
fun or to push a political cause.

UNIT 5 NEW IT INITIATIVES


Introduction to deep learning
Deep learning is the branch of machine learning that is based on artificial neural
network architectures. An artificial neural network (ANN) uses layers of
interconnected nodes, called neurons, that work together to process and learn from the input
data.
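The layered-neuron idea can be made concrete with a toy forward pass. This sketch uses plain Python with arbitrary illustrative weights; it is not a trainable network, just the computation a single layer of neurons performs on its inputs:

```python
import math

def sigmoid(x: float) -> float:
    """Activation function: squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, then activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

def layer(inputs, weight_rows, biases):
    """A layer is just several neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two input values feeding a layer of three neurons (weights are made up).
hidden = layer([0.5, -1.2], [[0.1, 0.4], [-0.3, 0.8], [0.5, 0.5]], [0.0, 0.1, -0.1])
print(len(hidden))  # one output per neuron
```

Deep networks stack many such layers, and training consists of adjusting the weights and biases so the final layer's outputs match the desired answers.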

Deep Learning Applications:


1. Computer vision
The first deep learning application is computer vision. In computer vision, deep learning
models enable machines to identify and understand visual data. Some of the main
applications of deep learning in computer vision include:
 Object detection and recognition: Deep learning models can be used to identify and
locate objects within images and videos, enabling applications such as self-driving
cars, surveillance, and robotics.
 Image classification: Deep learning models can be used to classify images into
categories such as animals, plants, and buildings. This is used in applications such as
medical imaging, quality control, and image retrieval.
 Image segmentation: Deep learning models can be used to segment images into
different regions, making it possible to identify specific features within images.
2. Natural language processing (NLP):
The second application of deep learning is NLP. In NLP, deep learning models
enable machines to understand and generate human language. Some of the main applications
of deep learning in NLP include:
 Automatic text generation – Deep learning models can learn from a corpus of text,
and new text such as summaries and essays can be generated automatically using these
trained models.
 Language translation: Deep learning models can translate text from one language to
another, making it possible to communicate with people from different linguistic
backgrounds.
 Sentiment analysis: Deep learning models can analyze the sentiment of a piece of
text, making it possible to determine whether the text is positive, negative, or neutral.
This is used in applications such as customer service, social media monitoring, and
political analysis.
 Speech recognition: Deep learning models can recognize and transcribe spoken
words, making it possible to perform tasks such as speech-to-text conversion, voice
search, and voice-controlled devices.
3. Reinforcement learning:
In reinforcement learning, deep learning is used to train agents to take actions in an
environment to maximize a reward. Some of the main applications of deep learning in
reinforcement learning include:
 Game playing: Deep reinforcement learning models have been able to beat human
experts at games such as Go, Chess, and Atari.
 Robotics: Deep reinforcement learning models can be used to train robots to perform
complex tasks such as grasping objects, navigation, and manipulation.
 Control systems: Deep reinforcement learning models can be used to control
complex systems such as power grids, traffic management, and supply chain
optimization.
Challenges in Deep Learning
1. Data availability: Deep learning requires large amounts of data to learn from, so
gathering enough data for training is a major concern.
2. Computational resources: Training deep learning models is computationally
expensive because it requires specialized hardware like GPUs and TPUs.
3. Time-consuming: Training on sequential data can take a very long time, sometimes
days or even months, depending on the computational resources.
4. Interpretability: Deep learning models are complex and work like a black box, so it is
very difficult to interpret their results.
5. Overfitting: When a model is trained again and again, it becomes too specialized for
the training data, leading to overfitting and poor performance on new data.
Advantages of Deep Learning:
1. High accuracy: Deep Learning algorithms can achieve state-of-the-art performance in
various tasks, such as image recognition and natural language processing.
2. Automated feature engineering: Deep Learning algorithms can automatically discover
and learn relevant features from data without the need for manual feature engineering.
3. Scalability: Deep Learning models can scale to handle large and complex datasets,
and can learn from massive amounts of data.
4. Flexibility: Deep Learning models can be applied to a wide range of tasks and can
handle various types of data, such as images, text, and speech.
5. Continual improvement: Deep Learning models can continually improve their
performance as more data becomes available.
Disadvantages of Deep Learning:
1. High computational requirements: Deep Learning AI models require large amounts of
data and computational resources to train and optimize.
2. Requires large amounts of labeled data: Deep Learning models often require a large
amount of labeled data for training, which can be expensive and time- consuming to
acquire.
3. Interpretability: Deep Learning models can be challenging to interpret, making it
difficult to understand how they make decisions.
4. Overfitting: Deep Learning models can sometimes overfit to the training data,
resulting in poor performance on new and unseen data.
5. Black-box nature: Deep Learning models are often treated as black boxes, making it
difficult to understand how they work and how they arrived at their predictions.

BIG DATA
Big data refers to extremely large and diverse collections of structured, unstructured, and
semi-structured data that continues to grow exponentially over time. These datasets are so
huge and complex in volume, velocity, and variety that traditional data management systems
cannot store, process, or analyze them.
Three V’s of Big Data
 Volume: The volume of data is of utmost importance. With big data, you will have to
process huge amounts of low-density, unstructured data. It can be data of undefined
value, including Twitter data feeds, clickstreams on a mobile app, clickstreams on a
web page, time-series data, or sensor-enabled equipment. For some companies, it can
be tens of terabytes of data. For other organizations, it can be hundreds of petabytes.
 Velocity: The rate at which data is collected and acted on is termed velocity.
Generally, the highest velocity of data runs straight into memory as compared to being
written to disk. Some internet-supported smart products perform in real-time or close
to real-time and will need real-time assessment and action.
 Variety: Variety refers to the various kinds of data that are available.
Traditional data types were structured and neatly fit in a relational database. Data
comes in new unstructured data types with the advance of big data. Unstructured and
semi-structured data types, including video, text, and audio need extra preprocessing
to determine meaning and support metadata which is the context around that data.
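Volume and velocity together imply that data often cannot be held in memory at once; it must be processed incrementally as a stream. A minimal Python sketch with a simulated sensor feed (the data values are invented for illustration):

```python
def readings():
    """Simulate a high-velocity stream of sensor values, one at a time.

    A generator yields values lazily, so memory use stays constant no matter
    how large the stream is - the essence of handling big-data volume.
    """
    for i in range(100_000):
        yield (i % 100) / 10.0

def running_mean(stream):
    """Aggregate the stream incrementally instead of materialising it as a list."""
    total, count = 0.0, 0
    for value in stream:
        total += value
        count += 1
    return total / count

print(running_mean(readings()))
```

Frameworks such as Hadoop and Spark apply the same principle at cluster scale: the data flows past the computation in pieces, rather than being loaded whole.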
Applications of Big Data

1. Banking
Be it financial management or cash collection, big data has made banks more efficient for
each industry. The technology's application has eased users' struggles, helping banks
to generate more revenue, and their insights are more transparent and comprehensible than
before. From distinguishing fraud, analyzing and streamlining transaction processing,
improving understanding of users, and perfecting trade execution, to promoting an
exceptional user experience, Big Data extends a range of applications.
2. Education
When talking about the Education industry, the data garnered from the courses, students,
faculty, and results is huge, the interpretation of which can bring forth insights useful for
improving the operations and functioning of educational institutes. From promoting efficient
learning, improving International recruiting for universities, supporting students in
establishing career goals, decreasing university dropouts, promoting definite student
evaluation, enhancing the decision-making process, and improving student results, Big Data
has an indispensable role in this sector.
3. Media
The buzz for the conventional methods of consuming media is gradually fading away because
the current strategies of consuming online content with the help of gadgets have become the
latest trend. Since an immense amount of data is generated, big data has triumphantly made
its way into this industry. Ranging from assisting to predicting what the audience needs, in
the genre, music, and content as per their age group, to proposing them insights regarding
customer churn, Big Data has made the lives of media houses much easier.
4. Healthcare
Big Data has an essential role to play in improving modern healthcare operations. The
technology has fully remodelled the healthcare sector: by decreasing the cost of treatment,
predicting epidemic outbreaks, dodging preventable diseases, improving quality of life,
forecasting daily patient volume to adjust staffing, adopting Electronic Health Records
(EHRs), using real-time alerts to promote immediate care, utilizing health data for more
efficient strategic planning, and decreasing fraud and flaws.
5. Agriculture
Big data analytics drives smart farming and accurate agriculture operations, saving costs and
unleashing new business possibilities. Some important areas where big data work involve
meeting the food demand by providing farmers with information regarding the changes in
weather, rainfall, and factors affecting crop yield, propelling smart and correct application of
pesticides, management of equipment, guaranteeing supply chain productivity, etc.
6. Travel
Big Data plays an intrinsic role in shaping transportation in a more perfect and effective
manner. Be it managing the revenue earned, maintaining the reputation gained, or following
strategic marketing, Big Data has influenced this sector. It also helps in mapping out the route
as per the requirements of the user, assisting in efficiently managing wait time, and
identifying accident-prone areas to increase the safety level of traffic.
7. Manufacturing
With Big Data, manufacturing is no longer an arduous manual process. Technology and data
analytics have succeeded in completely revolutionizing the manufacturing process. Big Data
improves manufacturing by personalizing product design, guaranteeing accurate quality
maintenance, overseeing the supply chain, and keeping track of potential risks.
8. Government
Governments come across a huge level of data on an everyday basis, irrespective of the
nation as they have to maintain various records and databases of their citizens, growth,
geographical surveys, energy resources, etc. This data is needed to be reviewed and analyzed,
thereby becoming an ally for the government in its operations. Primarily, the government
utilizes this data in two areas, in its developmental plans and in the case of cybersecurity.
9. Retail
Big data plays an important part in foretelling rising trends, targeting fitting customers at the
relevant time, reducing marketing expenses, and improving the quality of customer service.
From keeping a detailed view of each user and promoting personal engagement, enhancing
pricing to acquire the best value from forthcoming trends, systemizing back-office operations,
and improving customer services, Big Data gives a wide array of applications when it comes
to Retail.
Big Data examples:
 Learning about consumer shopping habits
 Customized marketing
 Discovering new customer leads
 Fuel optimization devices for the industry of transportation
 Prediction of user demand for ridesharing businesses
 Observing health conditions via data from wearables
 Live road mapping for independent vehicles
Pervasive computing
Pervasive computing, also known as ubiquitous computing, integrates connectivity
functionalities into all of the objects in our environment so they can interact with one another,
automate routine tasks, and require minimal human effort to complete tasks and follow
machine instructions.

APPLICATION OF PERVASIVE COMPUTING


 Healthcare: Smart wearable sensors can monitor a patient's vital statistics, such as
heart rate, blood pressure, and body temperature.
 Transportation: Electronic toll systems use pervasive computing to allow vehicles to
pass through barriers using QR or bar codes.
 Smart homes and workplaces: Pervasive computing can be used for simple tasks like
switching lights, as well as more complicated tasks like booking plane tickets and
managing banking accounts.
 Sales force automation: Pervasive computing can be used for sales force automation.
 Mobile workers: Mobile workers can use portable computers and wireless
connectivity to access enterprise data.
 Goods transportation monitoring: Pervasive computing can be used to monitor the
transportation of goods.
 Voice assistants: Smart speakers like Google Home, Amazon Echo, and Apple
HomePod are examples of pervasive computing applications
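At its core, a pervasive-computing node combines a sensor reading, a rule, and an actuator, with no human in the loop. A toy Python sketch of a smart-home lighting rule (all thresholds and sensor readings are hypothetical):

```python
def should_switch_on(motion_detected: bool, lux: float, threshold: float = 50.0) -> bool:
    """Rule: turn the light on only when the room is dark and someone is present."""
    return motion_detected and lux < threshold

# Simulated (motion, ambient-light) sensor events mapped to actuator decisions.
events = [
    (True, 20.0),    # motion in a dark room  -> light on
    (True, 300.0),   # motion in daylight     -> stay off
    (False, 10.0),   # dark but empty room    -> stay off
]
decisions = [should_switch_on(motion, lux) for motion, lux in events]
print(decisions)
```

Real deployments wrap the same pattern in a network protocol (e.g. MQTT or Zigbee) so that many such nodes can coordinate, but the sense-decide-act loop is the defining structure.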

Characteristics
1. The human component is taken into account, and the paradigm is placed in a human,
rather than a computational, environment.
2. Use of low-cost processors, resulting in lower memory and storage requirements.
3. Real-time properties are captured.
4. Computer gadgets that are always linked and available.
5. In the environment, focus on many-to-many relationships rather than one-to-one,
many-to-one, or one-to-many, as well as the idea of technology, which is always
present.
6. Takes into account local/global, social/personal, public/private, and invisible/visible
characteristics, as well as knowledge creation and distribution.

ADVANTAGES
Lower service costs through smart networks, increased industrial scheduling and productivity,
and faster response times in health care settings. More precise targeted advertising and more
convenient personal financial transactions are two further advantages.

People profit from ubiquitous computing because it combines sensors, networking technology, and data analytics to monitor and report on a variety of things, such as purchase preferences, manufacturing processes, and traffic patterns.
Anomalies, errors, and pollutants in the workplace can be detected by these computing systems, allowing for early intervention or preventing a workplace disaster. Ubiquitous computing can also measure resource utilization, inputs, and outputs, enabling better resource management during peak loads or better resource distribution over time.

The implementation of ubiquitous computing sensors and networks in rural areas can also aid
service delivery to remote locations. Doctors can monitor patient vital signs from
considerable distances, allowing medical treatments to be provided outside of the confines of
a hospital or clinic.

Rural education can also be delivered via interactive media delivery technology, which
allows students and instructors to communicate in a personal setting without having to be in
the same classroom.
DISADVANTAGES
One of the most serious issues that ubiquitous computing faces is privacy. Protecting system
security, privacy, and safety is critical in ubiquitous computing.

It's also worth noting that, despite progress in ubiquitous computing, the sector continues to
confront challenges in areas like human-machine interfaces and data security, as well as
technical impediments that cause concerns with availability and reliability.

Despite the rapid proliferation of smart devices today, making ubiquitous computing
available to everyone with a comprehensive infrastructure and ease of use is a difficult
undertaking. Senior persons and individuals living in rural areas are still at a disadvantage,
which must be addressed if ubiquitous computing is to be adopted in a healthy way.

Cloud Computing
Cloud computing means storing and accessing data and programs on remote servers hosted on the internet instead of the computer's hard drive or a local server. Cloud computing is also referred to as Internet-based computing: it is a technology where a resource is provided as a service through the Internet to the user. The stored data can be files, images, documents, or any other storable content.
The following are some of the operations that can be performed with cloud computing:
 Storage, backup, and recovery of data
 Delivery of software on demand
 Development of new applications and services
 Streaming videos and audio
Characteristics of Cloud Computing

1. On-demand self-service: Cloud computing services do not require human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
2. Broad network access: The Computing services are generally provided over standard
networks and heterogeneous devices.
3. Rapid elasticity: Computing services should have IT resources that can scale out and in quickly and on an as-needed basis. Resources are provisioned whenever the user requires them and released as soon as the requirement is over.
4. Resource pooling: IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in a non-dedicated manner. Multiple clients are served from the same physical resource.
5. Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for reasons such as billing and effective use of resources.
6. Multi-tenancy: Cloud computing providers can support multiple tenants (users or
organizations) on a single set of shared resources.
7. Virtualization: Cloud computing providers use virtualization technology to abstract
underlying hardware resources and present them as logical resources to users.
8. Resilient computing: Cloud computing services are typically designed with
redundancy and fault tolerance in mind, which ensures high availability and
reliability.
9. Flexible pricing models: Cloud providers offer a variety of pricing models, including
pay-per-use, subscription-based, and spot pricing, allowing users to choose the option
that best suits their needs.
10. Security: Cloud providers invest heavily in security measures to protect their users’
data and ensure the privacy of sensitive information.
11. Automation: Cloud computing services are often highly automated, allowing users to
deploy and manage resources with minimal manual intervention.
12. Sustainability: Cloud providers are increasingly focused on sustainable practices, such
as energy-efficient data centers and the use of renewable energy sources, to reduce
their environmental impact.
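The measured-service and flexible-pricing characteristics above amount to simple metered billing. A minimal Python sketch of pay-per-use pricing; all rates and usage figures here are hypothetical, invented for illustration and not taken from any real provider:

```python
# Hypothetical pay-per-use billing sketch: rates and usage are illustrative only.
RATES = {
    "vm_hours": 0.05,      # $ per VM-hour
    "storage_gb": 0.02,    # $ per GB-month
    "egress_gb": 0.09,     # $ per GB transferred out
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered resource by its rate and sum (measured service)."""
    return round(sum(RATES[res] * qty for res, qty in usage.items()), 2)

bill = monthly_bill({"vm_hours": 720, "storage_gb": 100, "egress_gb": 50})
print(bill)  # 720*0.05 + 100*0.02 + 50*0.09 = 42.5
```

The same metering record drives both the user's bill and the provider's capacity planning, which is why "measured service" is listed as an essential cloud characteristic.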

Cloud Computing Architecture


1. Frontend
The frontend of the cloud architecture refers to the client side of the cloud computing system. It contains all the user interfaces and applications that the client uses to access cloud computing services/resources; for example, a web browser used to access the cloud platform.
2. Backend
Backend refers to the cloud itself which is used by the service provider. It contains the
resources as well as manages the resources and provides security mechanisms. Along with
this, it includes huge storage, virtual applications, virtual machines, traffic control
mechanisms, deployment models, etc.
Types of Cloud Computing
Software as a Service (SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the
Internet. Instead of installing and maintaining software, we simply access it via the Internet,
freeing ourselves from the complex software and hardware management. It removes the need
to install and run applications on our own computers or in the data centers eliminating the
expenses of hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from
a cloud service provider. Most SaaS applications can be run directly from a web browser
without any downloads or installations required. The SaaS applications are sometimes
called Web-based software, on-demand software, or hosted software.
Platform as a Service
PaaS is a category of cloud computing that provides a platform and environment to allow
developers to build applications and services over the internet. PaaS services are hosted in the
cloud and accessed by users simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS
frees users from having to install in-house hardware and software to develop or run a new
application. Thus, the development and deployment of the application take
place independent of hardware.
The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage, but has control over the deployed
applications and possibly configuration settings for the application-hosting environment. To make it simple, take the example of an annual day function: you have two options, either to build a venue or to rent one, but the function remains the same.
Infrastructure as a Service
Infrastructure as a service (IaaS) is a service model that delivers computer infrastructure on
an outsourced basis to support various operations. Typically, IaaS is a service where infrastructure such as networking equipment, devices, databases, and web servers is provided to enterprises on an outsourced basis.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-user basis,
typically by the hour, week, or month. Some providers also charge customers based on the
amount of virtual machine space they use.
It simply provides the underlying operating systems, security, networking, and servers for developing applications and services, and for deploying development tools, databases, etc.

Artificial Intelligence (AI)


Artificial intelligence (AI) technology allows computers and machines to simulate human
intelligence and problem-solving tasks. The ideal characteristic of artificial intelligence is its
ability to rationalize and take action to achieve a specific goal. AI research began in the 1950s
and was used in the 1960s by the United States Department of Defense when it trained
computers to mimic human reasoning.
Types of Artificial Intelligence
Narrow AI: Also known as Weak AI, this type of system is designed to carry out one particular job. Weak AI systems include video game opponents and personal assistants like Amazon's Alexa and Apple's Siri: users ask the assistant a question, and it answers.
General AI: This type includes strong artificial intelligence systems that carry on the tasks
considered to be human-like. They tend to be more complex and complicated and can be
found in applications like self-driving cars or hospital operating rooms.
Artificial Intelligence Applications
Healthcare
AI is used in healthcare to improve the accuracy of medical diagnoses, facilitate drug
research and development, manage sensitive healthcare data and automate online patient
experiences. It is also a driving factor behind medical robots, which work to provide assisted
therapy or guide surgeons during surgical procedures.
Retail
AI in retail amplifies the customer experience by powering user personalization, product
recommendations, shopping assistants and facial recognition for payments. For retailers and
suppliers, AI helps automate retail marketing, identify counterfeit products on marketplaces,
manage product inventories and pull online data to identify product trends.
Customer Service
In the customer service industry, AI enables faster and more personalized support. AI-
powered chatbots and virtual assistants can handle routine customer inquiries, provide
product recommendations and troubleshoot common issues in real-time. And through NLP,
AI systems can understand and respond to customer inquiries in a more human-like way,
improving overall satisfaction and reducing response times.
Manufacturing
AI in manufacturing can reduce assembly errors and production times while increasing
worker safety. Factory floors may be monitored by AI systems to help identify incidents,
track quality control and predict potential equipment failure. AI also drives factory and
warehouse robots, which can automate manufacturing workflows and handle dangerous
tasks.
Finance
The finance industry utilizes AI to detect fraud in banking activities, assess financial credit
standings, predict financial risk for businesses plus manage stock and bond trading based on
market patterns. AI is also implemented across fintech and banking apps, working to
personalize banking and provide 24/7 customer service support.
Marketing
In the marketing industry, AI plays a crucial role in enhancing customer engagement and
driving more targeted advertising campaigns. Advanced data analytics allows marketers to
gain deeper insights into customer behavior, preferences and trends, while AI content
generators help them create more personalized content and recommendations at scale. AI can
also be used to automate repetitive tasks such as email marketing and social media
management.
Gaming
Video game developers apply AI to make gaming experiences more immersive. Non-playable
characters (NPCs) in video games use AI to respond accordingly to player interactions and
the surrounding environment, creating game scenarios that can be more realistic, enjoyable
and unique to each player.
Military
AI assists militaries on and off the battlefield, whether it's to help process military
intelligence data faster, detect cyberwarfare attacks or automate military weaponry, defense
systems and vehicles. Drones and robots in particular may be imbued with AI, making them
applicable for autonomous combat or search and rescue operations.
Benefits of AI
Automating Repetitive Tasks
Repetitive tasks such as data entry and factory work, as well as customer service
conversations, can all be automated using AI technology. This lets humans focus on other
priorities.
Solving Complex Problems
AI’s ability to process large amounts of data at once allows it to quickly find patterns and
solve complex problems that may be too difficult for humans, such as predicting financial
outlooks or optimizing energy solutions.
Improving Customer Experience
AI can be applied through user personalization, chatbots and automated self-service
technologies, making the customer experience more seamless and increasing customer
retention for businesses.
Advancing Healthcare and Medicine
AI works to advance healthcare by accelerating medical diagnoses, drug discovery and
development and medical robot implementation throughout hospitals and care centers.
Reducing Human Error
The ability to quickly identify relationships in data makes AI effective for catching mistakes
or anomalies among mounds of digital information, overall reducing human error and
ensuring accuracy.
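The anomaly-catching ability described above can be illustrated with a toy statistical check. Real systems use trained models rather than a fixed rule, and the data and threshold here are invented for illustration:

```python
# Toy anomaly check in the spirit described above: flag values far from the mean.
# Data and z-score threshold are hypothetical; real systems use learned models.
import statistics

def find_anomalies(values, z_threshold=2.0):
    """Return values whose distance from the mean exceeds z_threshold stdevs."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 55.0, 10.0, 10.1]
print(find_anomalies(readings))  # [55.0]
```

Scanning "mounds of digital information" this way is tedious for humans but trivial for a machine, which is the point of the benefit above.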
Disadvantages of AI
Job Displacement
AI’s abilities to automate processes, generate rapid content and work for long periods of time
can mean job displacement for human workers.
Bias and Discrimination
AI models may be trained on data that reflects biased human decisions, leading to outputs
that are biased or discriminatory against certain demographics.
Hallucinations
AI systems may inadvertently “hallucinate” or produce inaccurate outputs when trained on
insufficient or biased data, leading to the generation of false information.
Privacy Concerns
The data collected and stored by AI systems may be done so without user consent or
knowledge, and may even be accessed by unauthorized individuals in the case of a data
breach.
Ethical Concerns
AI systems may be developed in a manner that isn’t transparent, inclusive or sustainable,
resulting in a lack of explanation for potentially harmful AI decisions as well as a negative
impact on users and businesses.
Environmental Costs
Large-scale AI systems can require a substantial amount of energy to operate and process
data, which increases carbon emissions and water consumption.

Advances in AI
Artificial Intelligence (AI) is a branch of science concerned with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way. A more or less flexible or efficient approach can be taken depending on the requirements established, which influences how artificial the intelligent behavior appears. Artificial intelligence can be viewed from a variety of perspectives. From the perspective of intelligence, artificial intelligence is making machines "intelligent" -- acting as we would expect people to act. The inability to distinguish computer responses from human responses is called the Turing test.

Internet of things
The Internet of Things (IoT) describes the network of physical objects—“things”—that are
embedded with sensors, software, and other technologies for the purpose of connecting and
exchanging data with other devices and systems over the internet.
Technologies of IoT
 Access to low-cost, low-power sensor technology. Affordable and reliable sensors
are making IoT technology possible for more manufacturers.
 Connectivity. A host of network protocols for the internet has made it easy to connect
sensors to the cloud and to other “things” for efficient data transfer.
 Cloud computing platforms. The increase in the availability of cloud platforms
enables both businesses and consumers to access the infrastructure they need to scale
up without actually having to manage it all.
 Machine learning and analytics. With advances in machine learning and analytics,
along with access to varied and vast amounts of data stored in the cloud, businesses
can gather insights faster and more easily. The emergence of these allied technologies
continues to push the boundaries of IoT and the data produced by IoT also feeds these
technologies.
 Conversational artificial intelligence (AI). Advances in neural networks have
brought natural-language processing (NLP) to IoT devices (such as digital personal
assistants Alexa, Cortana, and Siri) and made them appealing, affordable, and viable
for home use.
Characteristics of IoT
 Massively scalable and efficient.
 IP-based addressing alone will not be suitable in the upcoming future; an abundance of physical objects that do not use IP is present, and IoT makes connecting them possible.
 Devices typically consume less power. When not in use, they should be automatically programmed to sleep.
 A device that is connected to another device right now may not be connected at another instant of time.
 Intermittent connectivity – IoT devices aren't always connected. In order to save bandwidth and battery consumption, devices are powered off periodically when not in use; otherwise, connections might turn unreliable and thus prove to be inefficient.
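The intermittent-connectivity point can be quantified with a small duty-cycle calculation. All current and capacity figures below are hypothetical, chosen only to illustrate why sleeping between transmissions matters:

```python
# Duty-cycling sketch: a sensor that sleeps between transmissions lasts far longer.
# All current/capacity figures are hypothetical, for illustration only.

def battery_life_hours(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Average the current draw over the duty cycle, then divide capacity by it."""
    avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    return capacity_mah / avg_ma

always_on = battery_life_hours(2000, active_ma=20, sleep_ma=0.01, duty_cycle=1.0)
one_percent = battery_life_hours(2000, active_ma=20, sleep_ma=0.01, duty_cycle=0.01)
print(round(always_on), round(one_percent))  # duty cycling turns ~4 days into ~1 year
```

A device transmitting 1% of the time lasts roughly two orders of magnitude longer than one that is always connected, which is why IoT protocols accept unreliable, intermittent links.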

Applications of IoT
1. Consumer applications
2. Smart Home
3. Elder Care
4. Medical and Health Care
5. Transportation
6. Manufacturing
7. Agriculture
8. Maritime
Advantages of IoT
 Improved efficiency and automation of tasks.
 Increased convenience and accessibility of information.
 Better monitoring and control of devices and systems.
 Greater ability to gather and analyze data.
 Improved decision-making.
 Cost savings.
Disadvantages of IoT
 Security concerns and potential for hacking or data breaches.
 Privacy issues related to the collection and use of personal data.
 Dependence on technology and potential for system failures.
 Limited standardization and interoperability among devices.
 Complexity and increased maintenance requirements.
 High initial investment costs.
 Limited battery life on some devices.

Blockchain
Blockchain is a system of recording information in a way that makes it difficult or impossible
to change, hack, or cheat the system. A blockchain is essentially a digital ledger of
transactions that is duplicated and distributed across the entire network of computer systems
on the blockchain
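Why is a blockchain "difficult or impossible to change"? Each block carries the hash of the previous block, so editing history breaks every later link. A toy Python sketch, illustrative only and far from a production blockchain:

```python
# Toy hash-linked ledger: each block stores the SHA-256 hash of its predecessor,
# so tampering with any earlier block invalidates the whole chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Every block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(is_valid(ledger))                   # True
ledger[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(is_valid(ledger))                   # False: the hash link breaks
```

Because every participant holds a copy of the ledger, an attacker would have to rewrite the chain on most of the network's computers at once, which is what makes cheating the system impractical.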
Types of Blockchain
1. Public Blockchain
These blockchains are completely open, following the idea of decentralization. They don't have any restrictions; anyone with a computer and an internet connection can participate in the network.
 As the name suggests, this blockchain is open to the public, which means it is not owned by anyone.
 Anyone with an internet connection and a computer with good hardware can participate in this public blockchain.
 All the computers in the network hold copies of the other nodes or blocks present in the network.
 In a public blockchain, anyone can also perform verification of transactions or records.
2. Private Blockchain
These blockchains are not as decentralized as the public blockchain; only selected nodes can participate in the process, making it more secure than the others.
 These are not as open as a public blockchain.
 They are open to some authorized users only.
 These blockchains are operated in a closed network.
In these, only a few people are allowed to participate in the network within a company/organization.

3. Hybrid Blockchain
It mixes the private and the public blockchain: some part is controlled by an organization and the rest is made visible as a public blockchain.
 It is a combination of both public and private blockchains.
 Both permission-based and permissionless systems are used.
 Users access information via smart contracts.
 Even if a primary entity owns a hybrid blockchain, it cannot alter the transactions.

Applications and Uses of Blockchain


 Cryptocurrencies: A cryptocurrency is a digital currency, basically designed to be
used as a medium of exchange wherein each coin ownership record is stored in a
decentralized ledger. Cryptocurrencies use ‘decentralized control’, which suggests
that they are not controlled by one person or government. When Bitcoin launched in
2008, it allowed people to directly transact with each other without having to trust
third parties like banks. Since then 4000 different cryptocurrencies have been created.
Some examples are Bitcoin, Ethereum, Dogecoin, Fantom, etc. The blockchain is the
technology behind cryptos where all the exchange or transaction information is stored
which cannot be hacked or changed and a copy of the ledger is distributed among all
the participants of the network. It records every single transaction. Each and every
person can buy/sell or deal in cryptos and be a part of the network. Nowadays, several
financial applications provide a user with the luxury of doing so.
 Cars: Let us see how blockchain can be used in cars. Ever heard of odometer fraud? By tampering with the odometer, someone can make a car appear to be newer and less worn out, resulting in customers paying more than what the car is actually worth. The government tries to counter this by collecting the mileage of cars when they get a safety inspection, but that's not enough. So, instead, we could replace regular odometers with smart ones that are connected to the internet and frequently write the car's mileage to the blockchain. This would create a secure digital certificate for every car. And because we use a blockchain, nobody can tamper with the data, and everyone can look up a vehicle's history to ensure it's correct. In fact, such a system has already been developed by Bosch's IoT lab, and they are currently testing it on a fleet of 100 cars in Germany and Switzerland.
 Legal Documents: Blockchains are great at keeping track of data over time. So, besides odometers, you can keep track of things like intellectual property or patents, or a blockchain can even function as a notary. A notary is someone (for example, the Central Government) who can confirm and verify signatures on legal documents. But we can just as well use blockchain for it. The online website stampd.io, as an example, allows you to add documents to the Bitcoin or Ethereum blockchain. Once a document has been added, you can always prove that you created that document at a particular point in time, very much like a notary, although right now blockchains are not on the same level as notaries from a legal perspective.
 Digital Voting: Another interesting application is digital voting. Right now voting
happens either on paper or EVM (electronic voting machines) which are special
computers running proprietary software. Voting on paper costs a lot of money and causes wastage, and electronic voting has security issues. In recent years we have seen
countries move away from digital voting and adopting paper again because they fear
that electronic votes can be tampered with and influenced by hackers. In our country
as well, we have seen politicians fight over “EVM hack” things. But, in place of paper
ballots or EVMs, we could use blockchains to cast and store votes. Such a system
would be very transparent and everyone could verify the voting count for themselves
and it would make tampering with it very difficult. The Swiss company Agora is
already working on such a system and it is going to be completely open-source. But
there are many challenges. First, you have to be able to verify voters without
compromising their privacy. Secondly, if you allow people to vote with their own
computers or phones, you have to take care of the situation that those devices might
be infected with malware designed to tamper with the voting process. And a final
example: a system like this also has to be able to withstand denial-of-service attacks
that could render the whole thing unusable. Definitely, a very tough nut to crack but if
it becomes reality it could make for a more transparent and practical voting system.
 Food and Medical Industry: Companies could use blockchain technology to track their
food products from the moment they are harvested or made, to when they end up in
the hands of the customers. See, every year almost half a million people die because
of food-borne diseases and that’s partly because it takes too long to isolate the food
that is causing harm. Blockchains could help us to create a digital certificate for each
package of food, proving where it came from and where it has been. So, if
contamination has been detected, i.e., the manufacturer wants to recall a batch of food
because of certain quality issues, we can trace it back to its root and instantly notify
other people who bought the same batch of bad food. Walmart and IBM are the two
big giants currently working on such a system. It allowed them to trace the origin of a
box of mangoes in just 2 seconds, compared to days or even weeks with a traditional
system. A system like this could be applied to other similar industries as well. We
could use it to track medicines, and other regular products and battle counterfeit goods
by allowing anyone (the officials in general) to verify whether or not the product
comes from the original and authentic manufacturer.
 Logistics and Supply-chain: Another idea would be to track packages and shipments using blockchain. IBM and container shipping giant Maersk Line are working on a decentralized ledger to help make the global trade of goods more efficient. Many blockchain hackathons set this topic for college students to build projects on. It is still in the development phase, and companies are trying to come up with such a system to track their packages precisely.
 Smart Contracts: So far, we have looked at ways blockchains can be used to keep
track of information and verify its integrity. But blockchains are even more powerful
when we use them as smart contracts as one of its applications. These contracts live
on the blockchain and can perform actions when various conditions are met.
Insurance companies could use smart contracts to validate claims and keep a record of
all the people who are buying insurance and paying their premiums on time so as to
continue the terms of the policy. Or they could allow us to only pay for car insurance
when we are driving. But it goes even further: with smart contracts, we can store our own data on a blockchain. In the same fashion, you could store your personal identity there and choose what data you want to reveal.
 Original Content Creation and Royalties tracking: Think about collecting royalties
for artists. A beautiful idea for the use case would be streaming platforms could set up
two smart contracts: one where users send a monthly subscription to and one that
keeps track of what song or video a particular user is consuming and how many times
the song has been played or a video being watched. At the end of each month, the
smart contract that holds the subscription fee can automatically distribute the money
to artists, based on how many times their songs have been played. People can have smart contracts for their content and have proof that they were the creators and no one else. Mediachain is one of the companies working in the music industry using blockchain and smart contracts. Similarly, blockchain can be used in other places as well; some of them are:
 Real Estate: Propy, a California-based company is using blockchain as a title registry
system for property ownership with distributed and decentralized systems.
 IoT Devices: Filament, a Nevada-based company, creates IoT microchip hardware
and software that lets the connected devices run on blockchain technology. The
product’s encrypted and secured ledger data distribute information to other
blockchain-connected devices and allow monetization of machines based on the usage
of time stamps and others. A cybersecurity company, HYPR, is using this technology
to secure IoT devices with a decentralized credential system, taking passwords away from a centralized system to make devices even more secure and harder to hack.
 Documents: Many countries are adopting blockchain technology to store their data
and people’s data, this way they are bringing transparency and security to the
documents like birth certificates, social security numbers, and voter registration cards,
and much more.
 Non-Fungible Tokens (NFTs): These are the new trend in the world after cryptocurrencies, and they too rely on blockchain technology. The years 2020-2021 gave rise to digital items being traded on a par with physical ones. NFTs are digital items that include videos, photos, art, GIFs, and other media that are sold over a blockchain such that the creator of the media can claim his/her full rights. The Nyan Cat meme that was a trend in 2011 was sold as a GIF for $600,000.
 Gambling: The gambling industry can use blockchain to provide several benefits to players. With the help of the blockchain, there is transparency between potential gamblers. Since transactions are recorded in the blockchain network, games can be played fairly.
 Big Data: Every computer in the network verifies the information stored in it, making blockchain an excellent tool for storing data thanks to its immutable nature.
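The smart-contract idea above ("perform actions when various conditions are met") can be sketched as ordinary code. This royalty contract is hypothetical and purely illustrative; real smart contracts are usually written in on-chain languages such as Solidity:

```python
# Hypothetical royalty "smart contract" sketch: distributes a subscription pool
# to artists in proportion to play counts, as described in the text.
class RoyaltyContract:
    def __init__(self):
        self.pool = 0.0    # collected subscription fees
        self.plays = {}    # artist -> play count

    def pay_subscription(self, amount: float) -> None:
        self.pool += amount

    def record_play(self, artist: str) -> None:
        self.plays[artist] = self.plays.get(artist, 0) + 1

    def distribute(self) -> dict:
        """Condition met (end of month): split the pool by play share."""
        total = sum(self.plays.values())
        payouts = {a: self.pool * n / total for a, n in self.plays.items()}
        self.pool, self.plays = 0.0, {}   # reset for the next period
        return payouts

c = RoyaltyContract()
c.pay_subscription(10.0)
for artist in ["ann", "ann", "ann", "bob"]:
    c.record_play(artist)
print(c.distribute())  # {'ann': 7.5, 'bob': 2.5}
```

On a blockchain, this logic would execute automatically and its state would be publicly verifiable, so neither the platform nor the artists could dispute the payout.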

Cryptocurrency
A cryptocurrency is a digital or virtual currency secured by cryptography, which makes it
nearly impossible to counterfeit or double-spend. Most cryptocurrencies exist on
decentralized networks using blockchain technology—a distributed ledger enforced by a
disparate network of computers.
Types of cryptocurrency

Bitcoin- Bitcoin—the first and largest cryptocurrency—remains the leading player in terms
of volumes and economic value. Bitcoin continues to lead the crypto market in terms of
market capitalization, user base, and popularity. Launched in 2009 by Satoshi Nakamoto,
Bitcoin touched its highest peak of $68,000 in 2021. Bitcoin supports both smart contracts and DApps because of major updates like the Lightning Network and the Taproot upgrade.
Ethereum - Ethereum is an open-source decentralized blockchain with smart contract
functionality. Ethereum is the second largest cryptocurrency which holds a very strong and
dominant position in the crypto market after Bitcoin. Ethereum operates on its blockchain and
supports smart contracts which run on its own blockchain and are executed automatically
when certain conditions are met. Ether is the cryptocurrency which runs on the Ethereum
blockchain.
Tether- USDT is a stablecoin which is pegged against the U.S. dollar, issued by the Hong Kong-based company Tether. It is to be noted that Tether is backed by an equivalent number of U.S. dollars, which means it avoids the kind of pricing volatility other cryptocurrencies experience. As other cryptocurrencies fluctuate in value, Tether's price usually stays equivalent to $1.

It does not have its own blockchain; rather it runs as a second-layer token on top of other multiple
blockchains such as Bitcoin, Ethereum, Tron, Algorand, Bitcoin Cash and OMG, and thus it is
secured by their respective hashing algorithms. Tether is not minable because of its asset-backed
nature thus new Tether is issued to verified users who make fiat currency deposits.
Polkadot- Polkadot is a token which can be bought or sold via exchanges easily. It uses a nominated proof-of-stake mechanism for network security, verification of transactions, and distribution of new DOT. DOT is the native cryptocurrency of Polkadot, which was launched in 2016. It is a sharded blockchain and thus connects several different chains together under a single network. This allows them to process transactions in parallel and transfer data between chains without sacrificing security. Polkadot is highly scalable as it is able to connect several blockchains in a way that was not possible before.
Litecoin-Litecoin, launched in 2011, is known for its simplicity and utility benefits. It is
known as a light version of Bitcoin but works on an entirely different algorithm known as
Scrypt. Litecoin is minable and also has a faster transaction processing time compared to
Bitcoin. Litecoin was launched with 150 pre-mined coins and has a maximum supply of 84
million coins. Like Bitcoin, the Litecoin supply is also designed to reduce over time to
preserve the coin’s value.
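The 84 million coin cap follows directly from Litecoin’s halving schedule. A small sketch, assuming the widely published Litecoin parameters (an initial block reward of 50 LTC that halves every 840,000 blocks), shows how the geometric series converges to the cap:

```python
# Sketch of how Litecoin's ~84 million coin cap arises from its halving
# schedule. Assumed parameters: initial block reward of 50 LTC, halving
# every 840,000 blocks (Litecoin's published values; figures approximate).

def total_supply(initial_reward=50, blocks_per_era=840_000, eras=64):
    supply = 0.0
    reward = float(initial_reward)
    for _ in range(eras):
        supply += reward * blocks_per_era  # coins minted in this era
        reward /= 2                        # reward halves each era
    return supply

print(f"Approximate maximum supply: {total_supply():,.0f} LTC")
```

Because each era mints half as many coins as the previous one, the series 50 + 25 + 12.5 + … converges to twice the first era’s output: 840,000 × 50 × 2 = 84,000,000 LTC.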

Advantages
 Removes single points of failure
 Easier to transfer funds between parties
 Removes third parties
 Can be used to generate returns
 Remittances are streamlined
Disadvantages
 Transactions are pseudonymous
 Pseudonymity allows for criminal uses
 Have become highly centralized
 Expensive to participate in a network and earn
 Off-chain security issues
 Prices are very volatile

Quantum computing
Quantum computing is a multidisciplinary field comprising aspects of computer science,
physics, and mathematics that uses quantum mechanics to solve certain complex problems
faster than classical computers.
Applications Of Quantum Computing
Finance
Companies would further optimise their investment portfolios and improve fraud detection
and simulation systems.
Healthcare
This sector would benefit from the development of new drugs and genetically customised
treatments, as well as DNA research.
Cybersecurity
Quantum computing poses risks to current encryption, but it also enables advances in data
encryption such as Quantum Key Distribution (QKD), a technique for sending sensitive
information that uses light signals to detect eavesdroppers on the channel.
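The eavesdropper-detection idea behind QKD can be sketched with a toy simulation in the style of the BB84 protocol: sender and receiver encode and measure bits in randomly chosen bases, and an eavesdropper who measures in the wrong basis disturbs the photons, which shows up as an elevated error rate when the two sides compare part of their key. This is purely illustrative; real QKD works on actual photon polarisation states:

```python
import random

random.seed(1)  # reproducible toy run

def measure(bit, send_basis, recv_basis):
    # Same basis: the bit is read correctly; different basis: random outcome.
    return bit if send_basis == recv_basis else random.randint(0, 1)

def bb84(n_bits, eavesdrop=False):
    """Return the observed error rate on the sifted (matching-basis) bits."""
    errors = sifted = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        basis_a = random.choice("XZ")          # sender's basis
        basis_b = random.choice("XZ")          # receiver's basis
        photon_bit, photon_basis = bit, basis_a
        if eavesdrop:                          # Eve measures and re-sends
            basis_e = random.choice("XZ")
            photon_bit = measure(photon_bit, photon_basis, basis_e)
            photon_basis = basis_e
        received = measure(photon_bit, photon_basis, basis_b)
        if basis_a == basis_b:                 # keep only matching-basis bits
            sifted += 1
            if received != bit:
                errors += 1
    return errors / sifted

print(f"error rate, no eavesdropper:   {bb84(4000):.2f}")
print(f"error rate, with eavesdropper: {bb84(4000, eavesdrop=True):.2f}")
```

Without an eavesdropper the sifted bits always agree; with one, roughly a quarter of them disagree, which is how the intrusion is detected.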
Mobility and transport
Companies like Airbus use quantum computing to design more efficient aircraft. Qubits will
also enable significant progress in traffic planning systems and route optimisation.
Pros of Quantum Computing
1. Speed: Quantum computers are substantially quicker than conventional computers at
certain kinds of computations, particularly factoring large numbers and modelling
quantum processes.
2. Parallelism: Due to the simultaneous processing of many calculations by quantum
computers, certain types of problems can be solved much more quickly.
3. Large-scale optimization: Compared to conventional algorithms, quantum
algorithms are faster and more accurate at solving complex optimization issues.
4. Simulating quantum systems: A quantum computer can be used to simulate quantum
systems more effectively and precisely than conventional computers since it is based
on the ideas of quantum physics.
5. Cryptography: Quantum computers have the ability to crack some of the encryption
used by conventional computers, but they also present fresh possibilities for private
communication.
Cons of Quantum Computing
1. Hardware: The size and stability of existing quantum computers are constrained, and
developing a large-scale, dependable quantum computer is a big engineering problem.
2. Software: The field of creating quantum algorithms and software is still developing,
and qualified professionals are in short supply.
3. Cost: Building and maintaining quantum computers is currently relatively expensive,
and this may prevent widespread deployment.
4. Noise and errors: Compared to conventional computers, quantum computers are
more prone to noise and faults, and correcting these errors is a difficult task.
5. Scalability: At the moment, quantum computers are only partially scalable, and it is
as yet unclear how to construct a robust, large-scale quantum computer capable of
solving complex problems.
6. Interoperability: Due to the lack of standards in the realm of quantum computing, it
might be challenging to compare and combine various quantum computers.
IMPORTANT QUESTIONS

UNIT I – INTRODUCTION
Data, Information, Information System, evolution, types based on functions and hierarchy,
Enterprise and functional information systems.
Q.NO UNIT -1 PART A BT CO
1 Differentiate between data and information. U CO1
2 What is Data? R CO1
3 Compare Data and information? U CO1
4 How would you use subsystems of GIS? R CO1
5 What are the applications of information technology? R CO1
6 What is the function of an Information system? R CO1
7 Why are the components of an information system important? R CO1
8 Define Information Technology. R CO1
9 Can you explain how Information systems are transforming Business? R CO1
10 How would you show your understanding of International R CO1
Information systems?
11 Can you list out the types of system development R CO1
methodology?
12 Why do you think the KMS is essential? R CO1
13 What is EIS? R CO1
14 Define information system. R CO1
15 What is meant by Marketing Information system? R CO1
16 How would you show your understanding of Global R CO1
Information systems?
17 Can you list out the components of Knowledge Management R CO1
system?
18 What is DSS? R CO1
19 How would you summarize the applications of DSS? R CO1
20 Define Executive Information system. R CO1

Q.NO UNIT 1 PART-B BT CO

1 Demonstrate with examples about the types of information U CO1


system based on functions.

2 Briefly describe information technology. U CO1

3 Write about the Evolution of Information System. U CO1

4 Describe the various Information system types based U CO1


on Function and hierarchy.

5 What is System development methodology and list down the U CO1


various types? Discuss the various phases of SDLC in brief.

6 Explain in detail about functional information system. E CO1

7 Discuss Decision support system with its U CO1


Advantages and Disadvantages.

8 Explain about Knowledge information system and Executive E CO1


information system.

9 Explain about Geographic information system E CO1

10 Explain about Prototype and Spiral model with its Principles, E CO1
merits and demerits

Q.NO UNIT 1 PART-C BT CO

1 Explain about international information system E CO1

2 Discuss the various phases of SDLC in brief U CO1

3 GlobalTech Solutions is a multinational IT services provider C CO1


with offices in several countries. Over the years, the company
has amassed a large amount of data across different
departments—ranging from customer contracts, project
documentation, employee records, financial reports, and
client interactions. Managing and accessing this information
became increasingly difficult as the company expanded globally.

What are the risks of poor information management, and how


did GlobalTech mitigate these risks with its new system?

UNIT II SYSTEM ANALYSIS AND DESIGN


System development methodologies, Systems Analysis and Design, Data flow Diagram
(DFD), Decision table, Entity Relationship (ER), Object Oriented Analysis and Design(OOAD),
UML diagram
Q.NO UNIT 2 PART-A BT CO
1 Define SDLC? R CO2
2 List out the various SDLC Models. R CO2
3 Define Software Engineering. R CO2
4 What is Data Dictionary? R CO2
5 What is the need for DFD? R CO2
6 What is system flowchart? R CO2
7 What is system flow chart? R CO2
8 What is system analysis and design? R CO2
9 What is feasibility Study? Discuss its types. R CO2
10 What is structured analysis? R CO2
11 What is prototyping? R CO2
12 What is end user computing? List its advantages. R CO2
13 What are the possible risks associated with end user R CO2
computing?
14 Define system design. List its objectives. R CO2
15 What are the major advantages of structured programs? R CO2

Q.NO UNIT 2 PART-B BT CO


1 Explain the traditional information systems development E CO2
cycle.
2 Discuss the waterfall system development model. U CO2
3 Explain the system lifecycle. E CO2
4 Explain the phases of SDLC? E CO2
5 Discuss the system flow chart in detail. U CO2
6 Explain the uses of decision table. E CO2
7 Explain the way to use decision tables in test designing. E CO2
8 Discuss the dataflow diagram. U CO2
9 Explain the types of attributes. E CO2
10 Explain the concepts of ER model. E CO2

Q.NO UNIT 2 PART-C BT CO


1 Discuss the high level conceptual data models for database U CO2
design.
2 Discuss the structured systems analysis and design. U CO2
3 Explain the UML diagram in detail. E CO2

UNIT III DATABASE MANAGEMENT SYSTEMS


DBMS – types and evolution, RDBMS, OODBMS, RODBMS, Data warehousing, Data Mart,
Data mining.
Q.NO UNIT 3 PART-A BT CO
1 What is operating system? R CO3
2 Define database. R CO3
3 What is DBMS? R CO3
4 What are the objectives of DBS? R CO3
5 What are the features of DBMS? R CO3
6 Differentiate between DBMS and RDBMS. U CO3
7 What are the advantages and disadvantages of DBMS? R CO3
8 What do you mean by DDL? R CO3
9 Mention different types of database model. U CO3
10 State the merits of hierarchy data model. U CO3
11 What is RDBMS? R CO3
12 Define OODBMS. R CO3
13 What is data warehousing? R CO3
14 What is Data Mart? R CO3
15 What is data mining? R CO3

Q.NO UNIT 3 PART-B BT CO


1 Explain different types of database. E CO3
2 Explain the components of DBMS. E CO3
3 Explain hierarchy model. E CO3
4 Discuss the network model. U CO3
5 Briefly describe the Relational model. U CO3
6 Describe RDBMS in detail? U CO3
7 Explain OODBMS in detail. E CO3
8 Explain RODBMS. E CO3
9 Explain Data ware housing in detail. E CO3
10 Describe Data Mart and Data Mining in detail? U CO3

Q.NO UNIT 3 PART-C BT CO


1 Demonstrate the significance of different data models with pros U CO3
and cons.
2 Explain database management system in detail E CO3
3 LMN Electronics is a multinational company that manufactures and CO3
sells consumer electronics such as smartphones, laptops, and home
appliances. The company has a central data warehouse that stores
information from various business units such as sales, marketing,
inventory, and finance. The sales department, however, faced
challenges when trying to access relevant data for their specific
needs.
What considerations should be made when scaling a data mart, and
how can the company ensure that it continues to meet the evolving
needs of the sales department?

UNIT IV INTEGRATED SYSTEMS, SECURITY AND CONTROL


Knowledge based decision support systems, Integrating social media and mobile
technologies in Information system, Security, IS Vulnerability, Disaster Management,
Computer Crimes, Securing the Web
Q.NO UNIT 4 PART-A BT CO
1 What is knowledge-based decision support system? R CO4
2 Why social media in mobile app development? R CO4
3 What is Information security? R CO4
4 Define information security. R CO4
5 What do you mean by Software Security? R CO4
6 What is Denial of Service? R CO4
7 What is security governance? R CO4
8 List down the types of system testing. R CO4
9 What is Cyclic redundancy checks? R CO4
10 State the difference between Application security and Software U CO4
Security.
11 What do you mean by Software Testing? R CO4
12 What do you mean by Vulnerability? R CO4
13 State the Classification of Vulnerability? R CO4
14 State the Causes of Vulnerability? R CO4
15 What is disaster management information system? R CO4

Q.NO UNIT 4 PART-B BT CO


1 Describe knowledge-based decision support system in detail? U CO4
2 Explain the importance of integrating social media into mobile E CO4
app development.
3 Describe information security? U CO4
4 Explain IS Vulnerability. E CO4
5 Describe disaster management information system in detail? U CO4
6 Explain the types of computer crime. E CO4
7 Explain possibilities to secure the Web in detail. E CO4
8 Explain computer crimes E CO4
9 Explain integrating social media in information systems E CO4
10 Explain the security system in information system E CO4
Q.NO UNIT 4 PART-C BT CO
1 Explain the integrated system in information system E CO4
2 Discuss the information security system U CO4
3 What are the unique advantages of each platform for a fashion C CO4
brand targeting millennials and Gen Z?

UNIT V NEW IT INITIATIVES


Introduction to Deep learning, Big data, Pervasive Computing, Cloud computing,
Advancements in AI, IoT, Block chain, Crypto currency, Quantum computing
Q.NO UNIT 5 PART-A BT CO
1 What is deep learning? R CO5
2 How deep learning works? R CO5
3 List down the examples of deep learning. R CO5
4 Mention some of the limitations of deep learning. R CO5
5 Distinguish between deep learning and machine learning. U CO5
6 What do you mean by big data? R CO5
7 List down the types of big data. R CO5
8 What are the big data applications in management? R CO5
9 What are the examples of big data? R CO5
10 What are the benefits of big data? R CO5
11 What are the five uses of big data? R CO5
12 State the 5 V’s of Big Data. U CO5
13 What are the 5 characteristics of big data? R CO5
14 List down the advantages of big data processing. R CO5
15 What is pervasive computing? R CO5
16 What do you mean by cloud computing? R CO5
17 What is IaaS? R CO5
18 What is PaaS? R CO5
19 What is SaaS? R CO5
20 What is private cloud? R CO5

Q.NO UNIT 5 PART-B BT CO


1 Explain the deep learning process in detail. E CO5
2 Describe Big Data in detail. U CO5
3 Explain Pervasive Computing. E CO5
4 Explain cloud computing with advantages and disadvantages. E CO5
5 Describe advancement in AI? U CO5
6 Discuss IoT in detail U CO5
7 Describe Block Chain. U CO5
8 Explain Cryptocurrency in detail. E CO5
9 Describe Quantum Computing in detail U CO5
10 Explain advancement in AI E CO5

Q.NO UNIT 5 PART-C BT CO


1 Explain Pervasive Computing. E CO5
2 Discuss deep learning in detail U CO5
3 A company is developing a self-driving truck. How can they C CO5
ensure the safety and reliability of the truck's deep learning-
based perception and control systems?
