Major Project Report On
E-Commerce
By
Student Name
(2020)
Professor
HOD
College Name
College Address
Department of
Certificate
External Examiner
Signature
Internal Examiner
Signature
ACKNOWLEDGEMENT
Student Name
Roll Number
ABSTRACT
CHAPTER 1 : INTRODUCTION
1.1 INTRODUCTION
Customers get many benefits from online shopping, which helps e-commerce companies build long-lasting and profitable relationships with them. To build a strong relationship with these users it is important to focus on the customer as a whole and to make sense of a flood of real-time information that goes well beyond demographics or shopping behavior. Two entities have access to the system: the admin and the registered user. The admin can add product details, view all order details, and view the sales of the products. The user registers with basic details to generate a valid username and password. After logging in, the user can view all the products recommended on the homepage, compiled by the system from the user's information. From the recommended products the user can view further details and, if interested in buying, use the add-to-cart option the system provides for purchasing the product. The system also has an AI bot through which the user can get answers to queries about product details such as features, warranty and price; this bot can also convert text to speech. After selecting a product, the user can pay for it online, and users can view the order history of their purchased products.
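The browse, add-to-cart, checkout and order-history flow described above can be sketched as a minimal model. The class names, product fields and prices below are illustrative assumptions for the sketch, not the report's actual schema, and the "payment" step is simulated rather than wired to a real gateway.

```javascript
// Minimal sketch of the cart/order flow described above.
// Product fields, names and prices are illustrative assumptions.
class Cart {
  constructor() { this.items = []; }
  add(product, qty = 1) { this.items.push({ product, qty }); }
  total() {
    return this.items.reduce((sum, i) => sum + i.product.price * i.qty, 0);
  }
}

class Store {
  constructor() { this.orders = []; }
  // Payment is simulated; a real site would call a payment gateway here.
  checkout(user, cart) {
    const order = { user, items: cart.items, total: cart.total() };
    this.orders.push(order);
    return order;
  }
  orderHistory(user) { return this.orders.filter(o => o.user === user); }
}

const cart = new Cart();
cart.add({ name: "Phone", price: 12000 });
cart.add({ name: "Cover", price: 300 }, 2);
const store = new Store();
store.checkout("user1", cart);
console.log(cart.total());                       // 12600
console.log(store.orderHistory("user1").length); // 1
```

In a real deployment the cart would live in the user's session on the server and the orders table in the database, but the shape of the data is the same.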
1.2 AIM
The main aim of e-commerce website development is to sell products to users. The most successful websites are carefully optimized to achieve a high percentage of purchases. To succeed, an e-commerce website needs to integrate the latest online closing and upselling techniques that have been proven to increase the chance that a visitor will purchase. Many important elements go into building a successful e-commerce website: removing friction from the purchasing process, making checkout smooth and easy, making the website fast and attractive, upselling users on related products, incentivizing buyers, reducing cart abandonment, nurturing past buyers to buy again, remarketing to past visitors who have not yet purchased, offering the proper payment options, and having a mobile-ready design, among other things.
1.3 EXISTING SYSTEM
The existing system of buying goods has several disadvantages. It requires a lot of time to travel to a particular shop to buy goods, and it involves a lot of manual work. Since everyone leads a busy life nowadays, time matters to everyone, and travelling from house to shop also costs money. The current system is less user-friendly: the user must go to the shop to order products and to view product descriptions, and it is difficult to identify the required product. Moreover, the shop we would like to buy from may not be open 24x7x365, so we have to adjust to the shopkeeper's or vendor's hours. The existing system is also unable to generate different kinds of reports.
1.4 PROPOSED SYSTEM
The proposed system is a website for buying and selling products or goods online over an internet connection. Unlike traditional commerce, which requires a person to physically go and get products, e-commerce reduces physical work and saves time. The basic concept of the application is to allow the customer to shop virtually using the Internet and to buy the items and articles of their desire from the store. E-commerce is fast gaining ground as an accepted and widely used business paradigm.
1.5 FEASIBILITY STUDY
A feasibility study is a high-level capsule version of the entire system analysis and design process. The study begins by clarifying the problem definition; feasibility is about determining whether the system is worth building. Once an acceptable problem definition has been generated, the analyst develops a logical model of the system, and a search for alternatives is analyzed carefully. A feasibility study has three parts:
1) Operational Feasibility
2) Technical Feasibility
3) Economical Feasibility
1.5.1 OPERATIONAL FEASIBILITY
Operational feasibility is the measure of how well a proposed system solves the problems, takes advantage of the opportunities identified during scope definition, and satisfies the requirements identified in the requirements analysis phase of system development. The operational feasibility assessment focuses on the degree to which the proposed development project fits in with the existing business environment and objectives with regard to development schedule, delivery date, corporate culture and existing business processes. To ensure success, desired operational outcomes must be imparted during design and development. These include design-dependent parameters such as reliability, maintainability, supportability, usability, producibility, disposability, sustainability, affordability and others. These parameters must be considered at the early stages of design if the desired operational behaviours are to be realised. System design and development require appropriate and timely application of engineering and management effort to meet these parameters. A system serves its intended purpose most effectively when its technical and operating characteristics are engineered into the design; therefore, operational feasibility is a critical aspect of systems engineering that needs to be an integral part of the early design phases.
1.5.2 TECHNICAL FEASIBILITY
Technical feasibility involves questions such as whether the technology needed for the system exists, how difficult it will be to build, and whether the firm has enough experience using that technology. The assessment is based on an outline design of the system requirements in terms of input, processes, output, fields, programs and procedures, and can be quantified in terms of volume of data, trends and frequency of updating in order to introduce the technical system. This application was developed on the Windows XP platform with a configuration of 1 GB RAM on an Intel Pentium Dual Core processor, which is technically feasible. The technical feasibility assessment focuses on gaining an understanding of the organization's present technical resources and their applicability to the expected needs of the proposed system; it is an evaluation of how well the hardware and software meet the needs of the proposed system.
1.5.3 ECONOMICAL FEASIBILITY
Economical feasibility establishes the cost-effectiveness of the proposed system: if the benefits do not outweigh the costs, it is not worth going ahead. In today's fast-paced world there is great demand for online shopping facilities, so the benefits of this project in the current scenario make it economically feasible. The purpose of the economic feasibility assessment is to determine the positive economic benefits that the proposed system will provide to the organization. It includes identification and quantification of all the expected benefits, and typically involves a cost/benefit analysis.
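As a toy illustration of such a cost/benefit analysis: if the yearly benefits eventually exceed the one-time and recurring costs, the system is economically feasible. All figures below are invented assumptions, not estimates from this project.

```javascript
// Toy cost/benefit sketch for an economic feasibility assessment.
// All figures are invented assumptions, not project estimates.
const costs = {
  development: 50000,     // one-time build cost
  hostingPerYear: 6000,   // recurring hosting
  maintenancePerYear: 4000
};
const benefitsPerYear = 40000; // assumed extra revenue from online sales

function netBenefit(years) {
  const totalCost =
    costs.development + years * (costs.hostingPerYear + costs.maintenancePerYear);
  return years * benefitsPerYear - totalCost;
}

console.log(netBenefit(1)); // -20000: costs not yet recovered
console.log(netBenefit(3)); // 40000: benefits outweigh costs
```

The break-even point is where the net benefit crosses zero; a real assessment would also discount future cash flows.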
1.7.1 INTRODUCTION
This section includes the overall view of the project i.e. the basic problem definition
and the general overview of the problem which describes the problem in layman
terms. It also specifies the software used and the proposed solution strategy.
This section includes the Software and hardware requirements for the smooth running
of the application.
This section consists of the Software Development Life Cycle model. It also contains
technical diagrams like the Data Flow Diagram and the Entity Relationship diagram.
This section describes the different technologies used for the entire development
process of the Front-end as well as the Back-end development of the application.
This section contains screenshots of the implementation, i.e. the user interface, with their descriptions.
2.1 Hardware Requirements
Number  Description
1       PC with 250 GB or more hard disk.
2       PC with 2 GB RAM.
3       PC with Pentium 1 or above.
2.2 Software Requirements
1) Windows operating system.
2) PHP with a web server.
3) MySQL database.
4) A web browser.
The waterfall model was selected as the SDLC model due to the following reasons:
3.4 ER Diagram
4.1.1 HTML
Hypertext Markup Language (HTML) is the standard markup language for documents
designed to be displayed in a web browser. It can be assisted by technologies such as
Cascading Style Sheets (CSS) and scripting languages such as JavaScript. Web
browsers receive HTML documents from a web server or from local storage and
render the documents into multimedia web pages. HTML describes the structure of a
web page semantically and originally included cues for the appearance of the
document.
HTML elements are the building blocks of HTML pages. With HTML constructs,
images and other objects such as interactive forms may be embedded into the
rendered page. HTML provides a means to create structured documents by denoting
structural semantics for text such as headings, paragraphs, lists, links, quotes and other
items. HTML elements are delineated by tags, written using angle brackets. Tags such
as <img /> and <input /> directly introduce content into the page. Other tags such as
<p> surround and provide information about document text and may include other
tags as sub-elements. Browsers do not display the HTML tags, but use them to
interpret the content of the page.
HTML can embed programs written in a scripting language such as JavaScript, which
affects the behavior and content of web pages. Inclusion of CSS defines the look and
layout of content. The World Wide Web Consortium (W3C), former maintainer of the
HTML and current maintainer of the CSS standards, has encouraged the use of CSS
over explicit presentational HTML since 1997.
4.1.2 CSS
Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation of a document written in a markup language like HTML. CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript. CSS is designed to enable the separation of presentation and content, including layout, colors, and fonts. This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple web pages to share formatting by specifying the relevant CSS in a separate .css file, and reduce complexity and repetition in the structural content.
CSS information can be provided from various sources. These sources can be the web
browser, the user and the author. The information from the author can be further
classified into inline, media type, importance, selector specificity, rule order,
inheritance and property definition. CSS style information can be in a separate
document or it can be embedded into an HTML document. Multiple style sheets can
be imported. Different styles can be applied depending on the output device being
used; for example, the screen version can be quite different from the printed version,
so that authors can tailor the presentation appropriately for each medium. The style sheet with the highest priority controls the content display. Declarations not set in the highest-priority source are passed on to a source of lower priority, such as the user agent style. This process is called cascading.
One of the goals of CSS is to allow users greater control over presentation. Someone
who finds red italic headings difficult to read may apply a different style sheet.
Depending on the browser and the web site, a user may choose from various style
sheets provided by the designers, or may remove all added styles and view the site
using the browser's default styling, or may override just the red italic heading style
without altering other attributes.
4.1.3 JavaScript
JavaScript is a high-level, interpreted scripting language that conforms to the ECMAScript specification. Initially implemented only client-side in web browsers, JavaScript engines are now embedded in many other types of host software, including server-side in web servers and databases, in non-web programs such as word processors and PDF software, and in runtime environments that make JavaScript available for writing mobile and desktop applications, including desktop widgets.
The terms Vanilla JavaScript and Vanilla JS refer to JavaScript not extended by any frameworks or additional libraries; scripts written in Vanilla JS are plain JavaScript code. Google's Chrome extensions, Opera's extensions, Apple's Safari 5 extensions, Apple's Dashboard Widgets, Microsoft's Gadgets, Yahoo! Widgets, Google Desktop Gadgets, and Serence Klipfolio are implemented using JavaScript.
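As a small illustration, the kind of plain ("vanilla") JavaScript this project uses for client-side validation needs no framework at all. The field rules below (minimum lengths, a simple email pattern) are assumptions for the sketch, not the project's actual validation rules.

```javascript
// Vanilla JS: a client-side validator for a registration form,
// written without any framework or library. Rules are illustrative.
function validateRegistration(form) {
  const errors = [];
  if (!form.username || form.username.length < 4) {
    errors.push("username must be at least 4 characters");
  }
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email || "")) {
    errors.push("email is not valid");
  }
  if (!form.password || form.password.length < 8) {
    errors.push("password must be at least 8 characters");
  }
  return errors;
}

console.log(validateRegistration({ username: "ab", email: "x", password: "123" }).length);          // 3
console.log(validateRegistration({ username: "alice", email: "a@b.com", password: "secret123" }).length); // 0
```

In the browser this function would be wired to the form's submit event; server-side validation in PHP would still be required, since client-side checks can be bypassed.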
4.2.1 PHP
PHP is a server-side scripting language used to develop static websites, dynamic websites and web applications. PHP stands for Hypertext Preprocessor; it earlier stood for Personal Home Pages. PHP scripts can only be interpreted on a server that has PHP installed; client computers accessing the PHP scripts require only a web browser. A PHP file contains PHP tags and ends with the extension ".php".
PHP is designed specifically for web development. PHP can be easily embedded in HTML files, and HTML code can also be written in a PHP file. What differentiates PHP from a client-side language like HTML is that PHP code is executed on the server, whereas HTML code is rendered directly in the browser.
4.2.2 MySQL
MySQL is an open-source relational database management system (RDBMS) based on Structured Query Language (SQL). It is one part of the very popular LAMP platform, consisting of Linux, Apache, MySQL, and PHP. MySQL is currently owned by Oracle. The MySQL database is available on most important OS platforms: it runs on BSD Unix, Linux, Windows, and macOS. Wikipedia and YouTube use MySQL; these sites manage millions of queries each day. MySQL comes in two versions: the MySQL server system and the MySQL embedded system.
RDBMS TERMINOLOGY
Before we proceed to explain the MySQL database system, let's review a few definitions related to databases.
The term implementation has different meanings, ranging from the conversion of a basic application to a complete replacement of a computer system; the procedures, however, are virtually the same. Implementation includes all the activities that take place to convert from the old system to the new one. The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification of an existing system. The method of implementation and the time scale to be adopted are determined initially. Proper implementation is essential to provide a reliable system that meets the organization's requirements.
5.1.1 Introduction
5.1.2 Benefits
The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct. A unit test provides a strict, written contract that the piece
of code must satisfy. As a result, it affords several benefits.
1) Find problems early: Unit testing finds problems early in the development cycle.
In test-driven development (TDD), which is frequently used in both extreme
programming and scrum, unit tests are created before the code itself is written. When
the tests pass, that code is considered complete. The same unit tests are run against
that function frequently as the larger code base is developed either as the code is
changed or via an automated process with the build. If the unit tests fail, it is
considered to be a bug either in the changed code or the tests themselves. The unit
tests then allow the location of the fault or failure to be easily traced. Since the unit
tests alert the development team of the problem before handing the code off to testers
or clients, it is still early in the development process.
Integration testing (sometimes called integration and testing, abbreviated I&T) is the
phase in software testing in which individual software modules are combined and
tested as a group. It occurs after unit testing and before validation testing. Integration
testing takes as its input modules that have been unit tested, groups them in larger
aggregates, applies tests defined in an integration test plan to those aggregates, and
delivers as its output the integrated system ready for system testing.
5.2.1 Purpose
The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design items", i.e.,
assemblages (or groups of units), are exercised through their interfaces using black-
box testing, success and error cases being simulated via appropriate parameter and
data inputs. Simulated usage of shared data areas and inter-process communication is
tested and individual subsystems are exercised through their input interface. Test
cases are constructed to test whether all the components within assemblages interact
correctly, for example across procedure calls or process activations, and this is done
after testing individual modules, i.e., unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. Software integration testing is performed according to the software development life cycle (SDLC) after module and functional tests. The cross-dependencies for software integration testing are: the schedule for integration testing, the strategy and selection of the tools used for integration, the cyclomatic complexity of the software and its architecture, the reusability of modules, and life-cycle and versioning management. Some types of integration testing are big-bang, top-down, bottom-up, mixed (sandwich) and risky-hardest. Other integration patterns are collaboration integration, backbone integration, layer integration, client-server integration, distributed services integration and high-frequency integration.
In the big-bang approach, most of the developed modules are coupled together to form
a complete software system or major part of the system and then used for integration
testing. This method is very effective for saving time in the integration testing
process. However, if the test cases and their results are not recorded properly, the
entire integration process will be more complicated and may prevent the testing team
from achieving the goal of integration testing. A type of big-bang integration testing is
called "usage model testing" which can be used in both software and hardware
integration testing. The basis behind this type of integration testing is to run user-like
workloads in integrated user-like environments. In doing the testing in this manner,
the environment is proofed, while the individual components are proofed indirectly
through their use. Usage Model testing takes an optimistic approach to testing,
because it expects to have few problems with the individual components. The strategy
relies heavily on the component developers to do the isolated unit testing for their
product. The goal of the strategy is to avoid redoing the testing done by the
developers, and instead flesh-out problems caused by the interaction of the
components in the environment. For integration testing, Usage Model testing can be
more efficient and provides better test coverage than traditional focused functional
integration testing. To be more efficient and accurate, care must be used in defining
the user-like workloads for creating realistic scenarios in exercising the environment.
This gives confidence that the integrated environment will work as expected for the
target customers.
5.3.1 Introduction
Software Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Software Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.
In other words, software verification ensures that the product has been built according to the requirements and design specifications, while software validation ensures that the product meets the user's needs and that the specifications were correct in the first place. Software verification ensures that "you built it right"; software validation ensures that "you built the right thing" and confirms that the product, as provided, will fulfill its intended use.
Both verification and validation are related to the concepts of quality and of software quality assurance. By themselves, verification and validation do not guarantee software quality; planning, traceability, configuration management and other aspects of software engineering are also required. Within the modeling and simulation (M&S) community, the definitions of verification, validation and accreditation are similar.
A test case is a tool used in the process. Test cases may be prepared for software
verification and software validation to determine if the product was built according to
the requirements of the user. Other methods, such as reviews, may be used early in the
life cycle to provide for software validation.
5.4 Black-Box Testing
Test cases are built around specifications and requirements, i.e., what the application
is supposed to do. Test cases are generally derived from external descriptions of the
software, including specifications, requirements and design parameters. Although the
tests used are primarily functional in nature, non-functional tests may also be used.
The test designer selects both valid and invalid inputs and determines the correct
output, often with the help of an oracle or a previous result that is known to be good,
without any knowledge of the test object's internal structure.
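A test designer working this way would, for example, exercise a quantity field with both valid and invalid inputs chosen from the specification alone, comparing each result against the expected output. The function below is a hypothetical stand-in for the unit under test, and the rule "quantity must be an integer from 1 to 10" is an assumed specification.

```javascript
// Black-box test cases: chosen from the specification
// ("quantity must be an integer from 1 to 10"), not from the code.
// isValidQuantity is a hypothetical stand-in for the tested unit.
function isValidQuantity(qty) {
  return Number.isInteger(qty) && qty >= 1 && qty <= 10;
}

const cases = [
  { input: 1,   expected: true },  // lower boundary, valid
  { input: 10,  expected: true },  // upper boundary, valid
  { input: 0,   expected: false }, // just below range, invalid
  { input: 11,  expected: false }, // just above range, invalid
  { input: 2.5, expected: false }  // non-integer, invalid
];

const failures = cases.filter(c => isValidQuantity(c.input) !== c.expected);
console.log(failures.length); // 0
```

Note that the test designer never reads the implementation; the expected column is the oracle.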
5.5 White-Box Testing
White-box testing (also known as clear box testing, glass box testing, transparent box
testing, and structural testing) is a method of testing software that tests internal
structures or workings of an application, as opposed to its functionality (i.e. black-box
testing). In white-box testing an internal perspective of the system, as well as
programming skills, are used to design test cases. The tester chooses inputs to exercise
paths through the code and determine the appropriate outputs. This is analogous to
testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be
applied at the unit, integration and system levels of the software testing process.
Although traditional testers tended to think of white-box testing as being done at the
unit level, it is used for integration and system testing more frequently today. It can
test paths within a unit, paths between units during integration, and between
subsystems during a system–level test. Though this method of test design can uncover
many errors or problems, it has the potential to miss unimplemented parts of the
specification or missing requirements.
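To make the path idea concrete: a white-box tester who can read the branch below picks one input to exercise each path through the code. The shipping rule, threshold and fee are invented for illustration.

```javascript
// White-box path coverage sketch: shippingFee has two paths
// (free above a threshold, flat fee otherwise), so a tester who
// reads the code picks one input per path. Figures are invented.
function shippingFee(orderTotal) {
  if (orderTotal >= 500) {
    return 0;  // path 1: free shipping
  }
  return 50;   // path 2: flat fee
}

console.log(shippingFee(600)); // exercises path 1 -> 0
console.log(shippingFee(100)); // exercises path 2 -> 50
```

A black-box tester might never notice the 500 threshold; the white-box tester sees it in the source and tests on both sides of it.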
5.5.1 Levels
1) Unit testing: White-box testing is done during unit testing to ensure that the code works as intended, before any integration with previously tested code. White-box testing during unit testing catches defects early and prevents them from surfacing later, after the code is integrated with the rest of the application.
2) Integration testing: White-box tests at this level are written to test the interactions of the interfaces with each other. Unit-level testing ensured that each unit was tested and working in isolation; integration testing examines the correctness of the behaviour in an open environment, using white-box testing for any interactions of interfaces that are known to the programmer.
5.5.2 Procedures
White-box testing's basic procedure involves the tester having a deep level of understanding of the source code being tested. The programmer must have a deep understanding of the application to know what kinds of test cases to create, so that every visible path is exercised for testing. Once the source code is understood, it can be analyzed for test cases to be created. These are the three basic steps that white-box testing takes in order to create test cases:
1) Input: gather the requirements, functional specifications, design documents and source code.
2) Processing: perform risk analysis to guide the testing process, prepare the test plan, and execute the test cases.
3) Output: prepare a final report that covers all of the above preparations and results.
5.5.3 Advantages
White-box testing is one of the two biggest testing methodologies used today. It has several major advantages:
1) Knowledge of the source code is beneficial to thorough testing.
2) Optimization of code becomes easier, as inconspicuous bottlenecks are exposed.
3) It gives the programmer introspection, because developers carefully describe any new implementation.
5.5.4 Disadvantages
Although white-box testing has great advantages, it is not perfect and contains some
disadvantages:
1) White-box testing brings complexity to testing because the tester must have knowledge of the program, including being a programmer. White-box testing requires a programmer with a high level of knowledge due to the complexity of the level of testing that needs to be done.
2) On some occasions, it is not realistic to be able to test every single existing condition of the application, and some conditions will be untested.
3) The tests focus on the software as it exists, and missing functionality may not be discovered.
CHAPTER 6 : ADVANTAGES
CHAPTER 7 : CONCLUSION
In general, today's businesses must always strive to create the next best thing, because consumers continuously want their products and services to be better, faster, and cheaper. In this world of new technology, businesses need to accommodate the new types of consumer needs and trends, because doing so will prove vital to their success and survival. E-commerce is continuously progressing and becoming more and more important to businesses as technology advances, and it is something that should be taken advantage of and implemented. From the inception of the Internet and e-commerce, the possibilities have become endless for both businesses and consumers, creating more opportunities for profit and advancement for businesses while giving consumers more options. However, just like anything else, e-commerce has its disadvantages, including consumer uncertainties, but nothing that cannot be resolved or avoided by good decision-making and business practices.
CHAPTER 8 : REFERENCES
https://www.tutorialspoint.com/index.htm
https://www.javatpoint.com
https://www.w3schools.com
https://html.com