
Applied Sciences | Review

Big Data and AI-Driven Product Design: A Survey
Huafeng Quan 1, Shaobo Li 2, Changchang Zeng 3, Hongjing Wei 4 and Jianjun Hu 5,*

1 College of Big Data and Statistics, Guizhou University of Finance and Economics, Guiyang 550050, China;
[email protected]
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550050, China; [email protected]
3 School of Computer Science, Civil Aviation Flight University of China, Guanghan 618307, China;
[email protected]
4 School of Mechanical Engineering, Guizhou Institute of Technology, Guiyang 550050, China;
[email protected]
5 Department of Computer Science and Engineering, University of South Carolina, Columbia, SC 29201, USA
* Correspondence: [email protected]

Abstract: As living standards improve, modern products need to meet increasingly diversified and
personalized user requirements. Traditional product design methods fall short due to their strong
subjectivity, limited survey scope, lack of real-time data, and poor visual display. However, recent
progress in big data and artificial intelligence (AI) is bringing a transformative big data and AI-driven
product design methodology with a significant impact on many industries. Big data in the
product lifecycle contains valuable information, such as customer preferences, market demands,
product evaluation, and visual display: online product reviews reflect customer evaluations and
requirements, while product images contain shape, color, and texture information that can inspire
designers to quickly generate initial design schemes or even new product images. This survey
provides a comprehensive review of big data and AI-driven product design, focusing on how big
data of various modalities can be processed, analyzed, and exploited to aid product design using
AI algorithms. It identifies the limitations of traditional product design methods and shows how
textual, image, audio, and video data in product design cycles can be utilized to achieve much more
intelligent product design. We finally discuss the major deficiencies of existing data-driven product
design studies and outline promising future research directions and opportunities, aiming to draw
increasing attention to modern AI-driven product design.
Keywords: product design; big data; AI algorithm; AI-generated content; Kansei engineering; generative design

Citation: Quan, H.; Li, S.; Zeng, C.; Wei, H.; Hu, J. Big Data and AI-Driven Product Design: A Survey. Appl. Sci. 2023, 13, 9433. https://doi.org/10.3390/app13169433

Academic Editors: Chaogang Tang and Dong Zeng

Received: 24 July 2023; Revised: 10 August 2023; Accepted: 15 August 2023; Published: 20 August 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Developing competitive products that exceed consumers’ expectations plays a central role in enterprise activities. Successful products can enhance user satisfaction, stimulate purchase desire, increase sales, and promote the completion of the ‘product–money–new product’ cycle, enabling enterprises to thrive in the competitive market [1]. Over the past few decades, product design has undergone significant changes in design concepts and methods, driven by advancements in manufacturing and technology. Design concepts have evolved from ‘form follows function’, ‘technology first’, and ‘product-centric’ to ‘form follows emotion’ and ‘user-centric’ [2]. Design methods have shifted from experiential and fuzzy design to intelligent design, computer-aided design, and multi-domain joint design [3–6]. Despite this progress, the fierce market competition calls for next-generation product innovation and design methodologies based on the significant progress of big data, deep learning, and other AI algorithms, which have brought a transformative revolution in the understanding, pattern recognition, and generative design and synthesis of text, image, video, audio, etc. [7,8].

Appl. Sci. 2023, 13, 9433. https://doi.org/10.3390/app13169433 https://www.mdpi.com/journal/applsci



Currently, customers are increasingly focused on their diversified and personalized spir-
itual and emotional needs. They pay more and more attention to product appearance [9–13]
in addition to its functions. Traditional product design methods, such as the theory of in-
ventive problem solving (TRIZ), quality function deployment (QFD), KANO model, Kansei
engineering, and axiomatic design (AD), have been well-established [2,14–16]. However,
capturing user requirements, ensuring visibility, and evaluating products remain significant
challenges in the field of product design. The fundamental aspect of capturing user require-
ments and evaluating products is data acquisition. Traditional methods rely on manual
surveys [10,17–19], which are time-consuming, labor-intensive, and often result in uneven
data distribution. Additionally, these data are one-time and cannot be updated, which is
a major obstacle in the fast-paced era we live in [20]. Alternatively, relying on designers’ or
experts’ experience and intuition is highly subjective, uninterpretable, and risky [21]. As we
will discuss in detail in Section 3, traditional methods also have several other drawbacks.
These deficiencies lead to lengthy product development cycles, poor predictability, and low success rates, all of which are crucial to enterprises. However, recent studies have found that big data and AI algorithms can alleviate these problems in an automated and intelligent manner [20,22]. In particular, AI-Generated Content (AIGC) can bring disruptive breakthroughs to product design, and big data and AI algorithms have been increasingly applied in the field of product design [18,23].
With the rapid development of the Internet, IoT, and communication technologies,
a large amount of data has accumulated in the product lifecycle (Figure 1), which is expand-
ing exponentially every day [24–26]. The product lifecycle contains a lot of product feedback
information, such as user preferences, market demands, and visual displays [27–29]. This
information is valuable for guiding product design and has sparked increasing interest in
both product design and big data fields [30,31]. On one hand, processing and analyzing
such a massive amount of data presents new challenges that need to be appropriately
addressed. On the other hand, successful analysis can lead to better products. Thus, how
to extract valuable information from big data and apply it to design remains the primary
difficulty and focal point of current research [22,27,31].
Big data in the product lifecycle are characterized by multiple data types, a large
volume, low-value density, and fast update, making them challenging to handle with
conventional techniques and algorithms. However, AI algorithms have strong capabilities
in processing big data, including convolutional neural networks (CNNs), generative adver-
sarial networks (GANs), natural language processing (NLP), neural style transfer, motion
detection, speech recognition, video summarization, and emotion recognition. Figure 2 pro-
vides an overview of mining product-related information from big data. This paper focuses
on analyzing big data with AI algorithms and applying the findings to product design.
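To make the pre-processing stage concrete, the following sketch (with invented record fields and ratings) illustrates two typical steps from the pipeline above: cleaning of missing and repeating values, and min-max normalization as a transformation step.

```python
# Minimal sketch of the pre-processing stage: cleaning (missing/repeating
# values) and min-max normalization. The record fields are hypothetical.

def clean(records):
    """Drop records with missing ratings and exact duplicates."""
    seen, cleaned = set(), []
    for rec in records:
        key = (rec.get("review"), rec.get("rating"))
        if rec.get("rating") is None or key in seen:
            continue  # skip missing value or repeating value
        seen.add(key)
        cleaned.append(rec)
    return cleaned

def min_max_normalize(values):
    """Scale numeric values to [0, 1] (a common transformation step)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

raw = [
    {"review": "great shape", "rating": 5},
    {"review": "poor texture", "rating": 2},
    {"review": "great shape", "rating": 5},   # repeating value
    {"review": "no rating", "rating": None},  # missing value
]
cleaned = clean(raw)
scores = min_max_normalize([r["rating"] for r in cleaned])
print(len(cleaned), scores)  # 2 [1.0, 0.0]
```

Real pipelines would add integration, discretization, and dimension reduction, but the shape of the stage is the same: raw records in, standardized records out.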
In recent years, big data and AI-driven product design have become a significant
research hotspot. However, to the best of our knowledge, before this work, there has
been almost no comprehensive summary of the applications of big data in product design.
Therefore, a thorough literature review is necessary to provide theoretical foundations that
can be utilized to develop scientific insights in this area. Furthermore, it can help enterprises
to reduce the time and cost of product development, enhance user satisfaction, and promote
the advancement of product design towards automation and intelligence. This paper
introduces both the traditional product design method and the big data-based method,
highlights their limitations and challenges, and focuses on the application, algorithm, and
process flow of structured, textual, image, audio, and video data. The research framework
is shown in Figure 3.

[Figure 1: diagram of big data sources across the product lifecycle, including e-commerce and public platforms, customer relationship management, computer-aided design, enterprise resource planning, supply chain management, manufacturing execution systems, machine tools, and smart devices.]

Figure 1. Big data in the product lifecycle. The product lifecycle is the entire process from product development to scrap, including design, manufacture, transportation, sale, usage, repair, and recycle.

[Figure 2: layered diagram from data sources (smart sensors, e-commerce platforms, information management systems, public datasets) through data acquisition (web crawlers, APIs, downloads), pre-processing (cleaning, integration, transformation, reduction), and AI-based analysis (CNN, GAN, NLP, emotion recognition, object detection, etc.) to applications (product design, user portraits, product improvement, development trends, satisfaction).]

Figure 2. A framework for mining product-related information from big data. The framework includes data acquisition, pre-processing, analysis, and application. Data acquisition involves collecting data from various data sources; pre-processing aims to clean and standardize the acquired data; data analysis reveals hidden knowledge and information; application directly reflects the value of big data.

[Figure 3: taxonomy of product design methods: traditional methods (Kansei engineering, QFD, KANO model, TRIZ) and big data and AI-driven methods operating on structured data and on unstructured textual, image, audio, and video data.]

Figure 3. The research framework. Through Kansei engineering, QFD, and the KANO model, we revealed the shortcomings of traditional product design methods and showed that big data could improve them.

Figure 3 shows the structure of the paper. Section 2 discusses the fundamental tasks in
product design, while Section 3 provides an overview of several widely used traditional
product design methods along with their limitations. Section 4 reviews the application of
big data and AI algorithms in product design, including structured, textual, image, audio,
and video data. Moreover, potential future studies and current limitations are discussed in
this section. In Section 5, we summarize the paper and outline the future direction of the
product design field.

2. Key Tasks in Product Design


Figure 4 summarizes the process of product design. It includes nine key tasks: product
vision definition, market research, competitive research, user research, idea generation,
feasibility analysis, sketching, prototyping, and scheme evaluation. The product vision,
which clarifies the overall goal of the product, is typically defined before the design process
begins. Market research is carried out to understand the development trend, user demand,
and purchasing power. Competitive analysis compares existing products from multiple
dimensions and derives their advantages and deficiencies.
User research is supported by data. Traditional methods of data collection, such
as observation, experiment, interview, and survey, are often insufficient in terms of the
quality and quantity of data, when compared to big data-driven methods. To extract user
requirements, preferences, and satisfaction, traditional methods typically utilize Kansei
engineering, QFD, the KANO model, AD, and affective design [14,16,32,33]. In contrast,
big data-driven methods widely use technologies such as NLP, speech recognition, emotion
recognition, and intelligent video analysis. Depending on their orientation, product design methods can be broadly categorized into product-centric and user-centric. The product-centric approach focuses on improving performance and expects users to passively adapt to the product. In contrast, the user-centric approach prioritizes satisfying users’ spiritual requirements and emphasizes that products must actively adapt to users.
As users’ requirements have become more and more diversified and personalized, the
user-centric approach has become mainstream. Kansei engineering and the KANO model
are typical user-centric methods, and QFD also involves user requirements [14,16,33].
Idea generation can be classified into two types based on different innovative thinking:
logical thinking and intuitive thinking [34]. Logical thinking focuses on detailed analysis
and decomposition of problems, such as TRIZ, universal design, and AD [15,32,35]. In
contrast, intuitive thinking aims to inspire designers and includes brainstorming, bionics,
analogy, combination, and deformation [1,2,36]. For instance, Nathalie and John [37] uti-
lized brainstorming to stimulate inspiration, while Youn et al. [38] found that combining
ideas was a significant driver of invention by analyzing a vast number of patents. Similarly,
Lai et al. [9] created new products by merging various product forms and colors, and
Zarraonandia et al. [39] used the combinatorial creativity in digital game design. After

generating ideas, the next task is feasibility analysis. This involves technical, economic,
security, infringement, and environmental analysis. Notably, technical analysis examines
whether there are any inconsistencies among various technical attributes and resolves them.
Product display is important for enabling users, designers, and experts to intuitively
comprehend the designed product. Sketches and prototypes serve as visualizations of
ideas. However, traditional methods often require designers to possess drawing skills, such
as hand drawing or 3D modeling. In contrast, big data and AI algorithm driven methods
offer simpler operations and do not require drawing skills. Furthermore, traditional
display methods often involve images or 3D models, whereas big data and AI algorithm
driven methods can incorporate videos or interactions, resulting in more intuitive and
engaging displays.
Evaluation is the final step. Once the product is designed, it will be presented to users,
experts, designers, or decision makers for feedback. Based on this feedback, the product can
be improved and verified to ensure it aligns with the original vision. For instance, Mattias
and Tomohiko [40] proposed a novel evaluation method that considers the importance
of customer values, the contribution of each offering to the value, and the customer’s
budget. They successfully applied this method to a real-life case at an investment machine
manufacturer. User evaluation can also be regarded as a part of the user research task.

[Figure 4: flowchart of the product design process: project startup and product vision definition; market research (development trend, demand estimation, sales forecast, purchasing power); competitive research (positioning, dimensions, advantages, deficiencies); user research (observation, experiment, interview, survey; explicit and implicit requirements, user preferences, functional and affective requirements, customer satisfaction); idea generation (logical thinking, e.g., TRIZ; intuitive thinking, e.g., brainstorming, combination, analogy, deformation); feasibility analysis (technical, economical, environmental, security, infringement); sketching (hand-drawn, wireframe, scribble, low-fidelity prototype); prototyping (floor plan, 3D model, high-fidelity prototype, product video, virtual reality model); and evaluation (expert assessment, user evaluation, experimental tests, multi-criteria decision making), with revise/test loops between stages.]

Figure 4. The product design process. The dotted line represents the affiliation and the solid line represents the product design process. The user research, idea generation, and product display (sketching and prototyping) are the key tasks in product design.

3. Traditional Product Design Methods


3.1. Kansei Engineering
Kansei engineering is a widely used user-centric method that is commonly utilized
in the user research task [41]. In the 1970s, Nagamachi noted that, in Japan, where material wealth was already abundant, the consumption trend had shifted from functionality to sensibility, with sensibility becoming the core of product design [42]. Nissan, Mitsubishi, Honda, and Mazda used Kansei engineering to improve car positioning, shape, color, dashboard, and more, resulting in significant success for the Japanese automobile industry [43–45]. In the late 1990s, Kansei engineering expanded to Europe. Schütte proposed a modification strategy to simplify the approach for European culture [46–49]. In addition, Schütte discussed sample selection [50] and visualized the Kansei engineering steps [51]. Nagamachi laid the foundation of Kansei engineering and continued to explore it further; in his latest research [52], he suggested introducing AI into Kansei engineering. The framework of Kansei engineering is shown in Figure 5.

[Figure 5: flowchart: choice of domain; spanning the semantic space (Kansei word collection and selection) and the space of properties (sample and property selection); Kansei evaluation; relationship model construction; and a test of validity, updating words and properties until the model passes.]

Figure 5. The framework of Kansei engineering.

The Kansei engineering process comprises four key stages: (i) Kansei words collection
involves gathering words from various sources, such as magazines, documents, manuals,
experts, interviewees, product catalogs, and e-mails. However, collected words may lack
pertinence and applicability due to their diverse fields of origin; (ii) Kansei words selec-
tion can be done manually, but this may lead to subjective results. To address this issue,
some researchers have used techniques like principal component analysis (PCA), factor
analysis, hierarchical clustering analysis, and K-means for word selection. For instance,
Djatna et al. [53] adopted the Term Frequency-Inverse Document Frequency (TF-IDF) ap-
proach to select high-frequency Kansei words for their tea powder packaging design, while
Shieh and Yeh [10] used cluster analysis to select four sets of Kansei words out of a hundred;
(iii) For Kansei evaluation, most studies use semantic differential (SD) scales or Likert scales
to design questionnaires and obtain Kansei evaluations from survey results. However,
some researchers have paid attention to physiological responses [11,54] as they believe
these signals are more reliable than self-reported scores. For example, Kemal et al. [12] used an eye tracker to obtain objective data on a ship’s appearance, including the area of
interest (AOI), scan path, and heat maps. Nevertheless, relying on a single physiological
signal can lead to one-sided results, and thus, Xiao and Cheng [55] designed several experi-
ments involving eye-tracking, skin conductance, heart rate, and electroencephalography.
We notice that physiological signals are more related to the intensity of Kansei than the
content; (iv) The final step is constructing the relationship model that links user affections
to product parameters. Various methods like quantification theory, support vector machine,
neural networks, etc., can be employed to achieve this.
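Stage (ii) can be illustrated with a minimal TF-IDF ranking in the spirit of Djatna et al.’s approach. This is a sketch, not their implementation; the review corpus and candidate Kansei words below are invented for illustration.

```python
import math
from collections import Counter

# Sketch of TF-IDF-based Kansei word ranking. Each document is a tokenized
# review; a word's score is its best TF-IDF over the corpus.

def tf_idf_scores(docs):
    """Return {word: max TF-IDF over docs} for every word in the corpus."""
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    scores = {}
    for doc in docs:
        tf = Counter(doc)
        for w, c in tf.items():
            idf = math.log(n / df[w]) + 1.0  # smoothed IDF
            scores[w] = max(scores.get(w, 0.0), (c / len(doc)) * idf)
    return scores

reviews = [
    "elegant simple elegant modern".split(),
    "rugged heavy rugged durable".split(),
    "simple modern simple soft".split(),
]
scores = tf_idf_scores(reviews)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:3])  # the top-ranked candidate Kansei words
```

Distinctive, frequent words ("elegant", "rugged") outrank words spread evenly across reviews ("modern"), which is exactly why TF-IDF suits pertinence-oriented word selection.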
Although Kansei engineering has achieved significant success in product design, several deficiencies still need to be addressed: (i) the collected Kansei words are often inadequate in pertinence and number; (ii) the numbers of samples and subjects are small; (iii) both questionnaires and physiological signals are susceptible to the effects of time, environment, and the subjects’ concentration, which can reduce the authenticity and objectivity of the results; (iv) limited by their life backgrounds and work experience, subjects’ questionnaire evaluations vary greatly; (v) the survey scope is often small, leading to uneven data distribution. These deficiencies mainly arise from data acquisition, especially the collection of Kansei words and Kansei evaluations. The quantity and quality of survey data directly affect the construction of the relational model, making data acquisition an urgent issue.
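To make stage (iv), relationship model construction, concrete, the sketch below fits a one-parameter least-squares model linking a design parameter to mean Kansei scores. The curvature values and "soft" scores are invented; real studies typically use richer models such as quantification theory, support vector machines, or neural networks.

```python
# Sketch of relationship-model construction: an ordinary least-squares fit
# linking one product parameter to a mean Kansei (SD-scale) score.
# All data points are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

curvature = [0.1, 0.3, 0.5, 0.7]   # normalized shape parameter (assumed)
softness = [1.2, 2.1, 3.0, 3.9]    # mean SD-scale score for the word "soft"
a, b = fit_line(curvature, softness)
print(round(a, 2), round(b, 2))    # 4.5 0.75
```

The fitted slope quantifies how strongly the parameter drives the affective response, which is the information designers extract from the relationship model.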

3.2. KANO Model


The KANO model is utilized in user research tasks to classify user requirements by
revealing the nonlinear relationship between user satisfaction and product performance.
For a long time, it was believed that user satisfaction is directly proportional to product
performance. However, it has been shown that fulfilling individual product performance
to a great extent does not necessarily lead to high user satisfaction, and not all performance
factors are equally important.
In the 1980s, Noriaki Kano conducted a detailed study on user satisfaction and pro-
posed the KANO model [56], which divides user requirements into five types (Figure 6).
Must-be quality refers to requirements that users take for granted. When fulfilled, users
just feel normal; otherwise, they will be incredibly dissatisfied, such as the call function for
mobile phones. One-dimensional quality is proportional to user satisfaction, such as the
battery life for mobile phones. Attractive quality increases satisfaction when fulfilled but
does not cause dissatisfaction when unfulfilled, such as the temperature display for cups.
Indifferent quality has no impact on user satisfaction, such as the built-in pocket for coats.
Reverse quality results in dissatisfaction when fulfilled, such as pearls on men’s wear. Researchers have applied the KANO model widely: Yao et al. [19] categorized twelve features of mobile security applications through a structured KANO questionnaire, and Avikal et al. [13] examined customer satisfaction based on aesthetic sentiments by integrating the KANO model with QFD.

[Figure 6: satisfaction curves for the five quality types: attractive, one-dimensional, indifferent, must-be, and reverse quality, plotted from insufficient to sufficient fulfillment.]

Figure 6. The KANO model for user requirements classification [56]. The horizontal axis indicates how fulfilled the requirement is and the vertical axis indicates how satisfied the user is.
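In practice, these five quality types are assigned from paired questionnaire answers via the standard KANO evaluation table: each feature gets a functional question ("How do you feel if the feature is present?") and a dysfunctional one ("…if it is absent?"). The sketch below encodes that table; the example answers are hypothetical.

```python
# The standard KANO evaluation table. Categories: A = attractive,
# O = one-dimensional, M = must-be, I = indifferent, R = reverse,
# Q = questionable (contradictory answers).

ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
TABLE = {  # rows: functional answer; columns: dysfunctional answer
    "like":      ["Q", "A", "A", "A", "O"],
    "must-be":   ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live-with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}

def classify(functional, dysfunctional):
    """Map a (functional, dysfunctional) answer pair to a KANO category."""
    return TABLE[functional][ANSWERS.index(dysfunctional)]

# e.g., a phone's call function: taken for granted when present,
# intolerable when absent -> must-be quality
print(classify("neutral", "dislike"))  # M
print(classify("like", "dislike"))     # O
print(classify("like", "neutral"))     # A
```

Aggregating these per-respondent categories (usually by majority vote) yields the classification shown in Figure 6.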

The KANO model is an essential tool for understanding user satisfaction and prioritizing product development efforts [16,57]. According to the priorities it assigns, enterprises must ensure that must-be quality reaches the threshold, beyond which any additional investment is wasted; invest in one-dimensional quality as much as possible; prioritize attractive quality when the budget permits; avoid reverse quality; and never waste resources on indifferent quality. However, not all customer requirements are equal, even within the same category. As the KANO model cannot distinguish differences among requirements within the same category [58], Lina et al. [59] proposed the IF-KANO model, which adopts logical KANO classification criteria to categorize requirements. Taking elevators as an example, their results showed that although load capacity and operation stability both belong to one-dimensional quality, operation stability has a slightly higher priority than load capacity.
The KANO model relies on data acquired through the KANO questionnaire, which traditionally offers a single option or options within a given range, failing to fully capture the ambiguity and complexity of customers’ preferences. As a result, numerous scholars have attempted to enhance the KANO questionnaire’s options [60,61]. For instance, Mazaher et al. [62] proposed the fuzzy KANO questionnaire, which uses percentages instead of fixed options and allows participants to make multiple choices. Chen and Chuang [63] introduced the Likert scale to gauge the degree of satisfaction or dissatisfaction. Cigdem and Amitava [64] combined the Servqual scale and the KANO model in a complementary way. Some researchers have recognized that free text can express opinions better than scales but note that text is challenging for traditional statistical methods [65]. In
addition to improving questionnaire options, some studies have focused on questioning
skills to enhance the KANO questionnaire. For example, Bellandi et al. [66] suggested that
questions should avoid polar wording.
While the KANO questionnaire has been improved, it shares the data acquisition limitations discussed in Section 3.1. In addition, the KANO model itself has drawbacks: (i) it relies on pre-existing requirements and lacks research on how to acquire them; (ii) it focuses only on categorizing and rating customer requirements, so its scope of application is relatively narrow.

3.3. QFD
QFD was first proposed by Japanese scholars Yoji Akao and Shigeru Mizuno in
the 1970s, and aims to translate customer requirements into technical attributes, parts
characteristics, key process operations, and production requirements [15,33,67]. The house
of quality (HoQ) is the core of QFD, as depicted in Figure 7. QFD can be used for multiple
tasks, such as generating ideas, analyzing competition, and assessing technical feasibility.

[Figure 7: the house of quality: customer requirements and their importance ratings on the left; engineering requirements (attributes) across the top under the correlation-matrix “roof”; the relationship matrix (impact of design attributes on customer requirements) in the center; comparative benchmarking and customer perception on the right; and the technical matrix (importance and targets) at the bottom.]

Figure 7. The structure of HoQ [13]. HoQ consists of customer requirements, importance rating, engineering requirements, correlation matrix, relationship matrix, competitive benchmarking, customer perception, and technical matrix.
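The technical matrix at the bottom of the HoQ is commonly computed by weighting relationship-matrix entries with customer importance ratings. The sketch below illustrates this with invented hair dryer requirements and the common 9/3/1 (strong/medium/weak) scale; it is an illustration of the computation, not a prescribed QFD procedure.

```python
# Sketch of the HoQ technical matrix: the absolute importance of each
# engineering requirement is the customer-importance-weighted sum of its
# relationship-matrix strengths. All requirements and numbers are invented.

customer_importance = {"easy to hold": 5, "dries quickly": 4, "quiet": 2}
relationship = {  # customer requirement -> {engineering requirement: strength}
    "easy to hold":  {"handle diameter": 9, "weight": 3},
    "dries quickly": {"airflow rate": 9, "motor power": 3},
    "quiet":         {"motor power": 9, "airflow rate": 3},
}

technical_importance = {}
for cr, weight in customer_importance.items():
    for er, strength in relationship[cr].items():
        technical_importance[er] = technical_importance.get(er, 0) + weight * strength

ranked = sorted(technical_importance.items(), key=lambda kv: -kv[1])
print(ranked)  # engineering requirements ordered by design priority
```

The ranking tells the design team which engineering requirements deserve the most attention, which is exactly the translation from customer language to technical language that QFD is built for.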

Starting from meeting market demands and driven by customer requirements, QFD explicitly translates requirement information into specific information that is directly used by design, production, and sales departments, ensuring that the resulting products meet customer needs and expectations [68]. For instance, Tomohiko [21] applied QFD to hair dryer design, using it to decompose customer requirements into characteristics and building a HoQ to guide the design process. Yan et al. [69] applied QFD to competitive analysis, generating insights into product improvement strategies. Noting the inherently vague and ambiguous nature of customer requirements [17,70], Cengiz et al. [71] proposed fuzzy QFD. Additionally, QFD can translate requirements from various stakeholders such as recyclers, production engineers, and customers. Considering that the technical characteristics in the HoQ may be contradictory, Wang et al. [72] combined QFD with TRIZ and used the contradiction matrix to derive a solution. However, QFD’s reliance on experts to rank customer requirements introduces strong subjectivity. To mitigate this issue, scholars have introduced Multi-Criteria Decision-Making (MCDM) methods [73–76]. Moreover, QFD, like Kansei engineering, uses definite scores to express evaluations, ignoring the uncertainty of user and expert scoring; some studies have introduced rough theory, interval-valued fuzzy-rough sets, and grey relational analysis to address this limitation [77–80].
Although QFD has advanced product design, it still has limitations: (i) the relationship between customer requirements and technical characteristics is determined manually by experts, which relies heavily on their expertise; (ii) satisfaction evaluations are often completed by a small number of subjects, leading to potential bias; (iii) customer requirements are given directly by experts, customers, and designers, or summarized by scholars, with no systematic data acquisition.

3.4. TRIZ
TRIZ is an approach that helps generate inventive solutions by identifying and
resolving contradictions. It can be utilized for both idea generation and technical analysis
tasks. Genrich Altshuller, a Soviet inventor, and his colleagues began developing TRIZ in
1946 [81]. By analyzing over 400,000 invention patents, Altshuller developed the technical
contradiction, the concept of ideality of a system, contradiction matrix, 40 principles of
invention (Table 1), and 39 engineering parameters (Table 2). In addition, Altshuller ob-
served smart and creative individuals, discovered patterns in their thinking, and developed
thinking tools and techniques to model this “talented thinking”.

Table 1. The 40 inventive principles of TRIZ [2].

No. Inventive Principle No. Inventive Principle


1 Segmentation 21 Skipping
2 Taking out 22 Blessing in disguise
3 Local quality 23 Feedback
4 Asymmetry 24 Intermediary
5 Merging 25 Self-service
6 Universabrationlity 26 Copying
7 Nested dol1 27 Cheap short-living
8 Anti-weight 28 Mechanics substitution
9 Preliminary anti-action 29 Pneumatics and hydraulics
10 Preliminary action 30 Flexible shells and thin films
11 Beforehand cushioning 31 Porous materials
12 Equipotentiality 32 Colours changes
13 The other way around 33 Homogeneity
14 Spheroidality 34 Discarding and recovering
15 Dynamics 35 Parameter changes
16 Partial or excessive actions 36 Phase transitions
17 Another dimension 37 Thermal expansion
Appl. Sci. 2023, 13, 9433 10 of 33

Table 1. Cont.

No. Inventive Principle No. Inventive Principle


18 Mechanical vibration 38 Strong oxidants
19 Periodic action 39 Inert atmosphere
20 Continuity of useful action 40 Composite materials

Table 2. The 39 engineering parameters [2].

No. Engineering Parameter No. Engineering Parameter


1 Weight of moving object 21 Power
2 Weight of nonmoving object 22 Waste of energy
3 Length of moving object 23 Waste of substance
4 Length of nonmoving object 24 Loss of information
5 Area of moving object 25 Waste of time
6 Area of nonmoving object 26 Amount of substance
7 Volume of moving object 27 Reliability
8 Volume of nonmoving object 28 Accuracy of measurement
9 Speed 29 Accuracy of manufacturing
10 Force 30 Harmful factors acting on object
11 Tension, pressure 31 Harmful side effects
12 Shape 32 Manufacturability
13 Stability of object 33 Convenience of use
14 Strength 34 Reparability
15 Durability of moving object 35 Adaptability
16 Durability of nonmoving object 36 Complexity of device
17 Temperature 37 Complexity of control
18 Brightness 38 Level of automation
19 Energy spent by moving object 39 Productivity
20 Energy spent by nonmoving object

The TRIZ methodology involves four main steps: (i) defining the specific problem;
(ii) abstracting the problem to a more general level; (iii) mapping potential solutions to
address the general problem; and (iv) concretizing the general solution to fit the spe-
cific problem. For instance, Wang [2] employed principles 1 (segmentation), 5 (merging),
and 28 (mechanics substitution) from Table 1 to design phone cameras.
Yamashina et al. [15] combined QFD and TRIZ to perform washing machine design. Addi-
tionally, Ai et al. [82] designed low-carbon products by considering both technical system
and human use, and they used TRIZ to identify measures for improving energy efficiency.
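The contradiction-matrix lookup at the heart of the TRIZ workflow can be sketched in a few lines of code. The matrix entries below are a tiny hypothetical subset chosen for illustration, not the real 39 × 39 matrix; a working tool would encode the full matrix from standard TRIZ references.

```python
# Illustrative TRIZ contradiction-matrix lookup. The matrix entries below
# are a small invented subset for demonstration only; a real implementation
# would encode the full 39x39 matrix from TRIZ reference material.

INVENTIVE_PRINCIPLES = {
    1: "Segmentation",
    5: "Merging",
    28: "Mechanics substitution",
    35: "Parameter changes",
}

# (improving parameter, worsening parameter) -> suggested principle numbers.
# Parameter numbers follow Table 2 (e.g., 9 = Speed, 14 = Strength).
CONTRADICTION_MATRIX = {
    (9, 14): [1, 35],   # hypothetical entry: improve Speed, Strength worsens
    (12, 1): [5, 28],   # hypothetical entry: improve Shape, Weight worsens
}

def suggest_principles(improving: int, worsening: int) -> list[str]:
    """Map a technical contradiction to candidate inventive principles."""
    numbers = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [INVENTIVE_PRINCIPLES[n] for n in numbers]

print(suggest_principles(9, 14))
```

This mirrors steps (ii) and (iii) of the methodology: the specific problem is abstracted to a pair of engineering parameters, and the matrix returns general principles to be concretized by the designer.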

3.5. Summary of Limitations


Among the traditional methods mentioned above, QFD focuses on constructing the
HoQ to map customer requirements to product parameters. The KANO model mainly
emphasizes the classification and rating of customer requirements. Both QFD and KANO
adopt the pre-given requirements and lack research on acquisition. Kansei engineering
is a comprehensive method that involves the acquisition, expression, and mapping of
customer requirements. However, Kansei engineering still has limitations in the data
collection process. To capture customers’ affective responses towards different products,
various traditional methods are widely used, such as user interviews, questionnaires,
focus groups, experiments, etc. [10,11,41,55]. We have summarized their advantages and
disadvantages in Table 3.

Table 3. Comparison of user requirement mining methods.

Methods                  Description                                    Advantages                         Disadvantages

User interview           The interviewer talks directly with            Detailed; easy to implement        Time-consuming; one-time; subjective;
                         the subject                                                                       labor-intensive; small survey scope
Questionnaire            Record subjects' opinions on                   Easy to implement                  Subjective; one-time; time-consuming;
                         specific questions                                                                labor-intensive; centralized in time
                                                                                                           and place; small survey scope
Web-based questionnaire  Distribute questionnaires on                   Decentralized in time and          Subjective; one-time
                         the Internet                                   place; large survey scope
Focus group              Observe the opinions and behaviors             Real data; low cost; in-depth      Time-consuming; labor-intensive;
                         of a group on the subject                      questions                          one-time; complex; small survey scope
Usability testing        Subjects test the product and                  Detailed; high reliability         One-time; labor-intensive; small
                         give feedback                                                                     survey scope; time-consuming
Experience               Analyze the data generated by                  Easy to implement                  Time-consuming; slow; inefficient;
                         consumers during usage                                                            subjective; small survey scope
Experimental             Record the psychological and                   Detailed; high reliability         Expensive; one-time; time-consuming;
                         physiological data of the subject                                                 labor-intensive; complex; small
                                                                                                           survey scope

Overall, traditional product design methods suffer from several significant disadvan-
tages. (i) It is difficult to capture accurate customer requirements amid increasingly
diverse and complex customer needs, making it hard for enterprises to determine product
positioning; (ii) Data acquisition is manual, time-consuming, labor-intensive, hard to
update, limited in scope, and constrained by time and place, resulting in small sample
sizes, poor real-time performance, and limited quality and quantity of data; (iii) Survey
results are susceptible to the time and environment of the survey, and the subjects may
differ from real users, which can compromise the validity of the data; (iv) Heavy reliance
on experts increases workload and prolongs the product development cycle, while experts'
varying backgrounds and experiences introduce uncertainty and subjectivity into designs;
(v) Traditional methods lack an intuitive way to inspire designers, even though visualiza-
tion plays a crucial role in ideation; (vi) The absence of visual displays in early design
schemes makes it difficult for enterprises to judge the product development direction, as
decision-makers must rely on imagination alone and cannot see designs. This increases the
probability of failure, wasting resources and potentially leading to enterprise bankruptcy.
These deficiencies have significantly hindered the development of product design.
The advent of the big data era has brought innovative ideas and technologies that
can overcome the shortcomings of traditional product design methods and enhance
innovation capabilities. By leveraging big data and AI algorithms, we can reduce subjectiv-
ity, expand the survey scope, and automate data processing (including data acquisition,
updating, and analysis) to accurately acquire user requirements and present them in an
intuitive, visual way. Product evaluation illustrates this point. We can collect customer
reviews from e-commerce platforms, social media, and review websites worldwide using
web crawlers. With the help of NLP technology, we can automatically extract information
from customer evaluations, such as product attributes, opinion words, and sentiment
orientations. Furthermore, as customer evaluations are dynamic, we can keep them up to
date by periodically re-running the collection code. In the next section, we will delve
deeper into the new generation of data- and AI-algorithm-driven product design methods,
which are the primary focus of this article.
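The review-mining pipeline described above can be sketched end to end in miniature. The reviews, attribute set, and opinion word lists below are all invented toy data; a real system would crawl live sites with a web crawler and use trained NLP models rather than hand-made keyword lists.

```python
# Toy sketch of the review-mining pipeline: extract (attribute, sentiment)
# pairs from customer reviews. All data and word lists are invented; a real
# system would use a crawler plus trained NLP models.

reviews = [
    "The battery is durable but the screen is dim.",
    "Great camera, terrible battery.",
]

ATTRIBUTES = {"battery", "screen", "camera"}
POSITIVE = {"durable", "great"}
NEGATIVE = {"dim", "terrible"}

def mine(review: str):
    """Return (attribute, sentiment) pairs found in one review."""
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in ATTRIBUTES:
            # look at nearby words (window of 2) for the first opinion term
            for w in tokens[max(0, i - 2): i + 3]:
                if w in POSITIVE:
                    pairs.append((tok, "positive"))
                    break
                if w in NEGATIVE:
                    pairs.append((tok, "negative"))
                    break
    return pairs

for r in reviews:
    print(mine(r))
```

Even this crude window-based matching shows how attribute-level sentiment can be harvested automatically and re-run whenever new reviews arrive.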

4. Product Design Based on Big Data and AI


4.1. Product Design Based on Structured Data
Big data in the product lifecycle can be categorized into structured, semi-structured,
and unstructured [30]. Structured data separate the structure and content, whereas semi-
structured data mix them. Unstructured data, such as audio, video, images, text, and
location, have no fixed structure.
Structured data refer to information that has been formatted and transformed into
a predefined data model, which provides regularity and strict formatting. However,
structured data can suffer from poor scalability and limited flexibility. Semi-structured
data can be considered an extension of structured data that offers greater flexibility and
extensibility. For this reason, this paper treats structured and semi-structured data as the
same and introduces their design applications together.
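To make the distinction concrete, here is a small illustrative sketch; all field names and values are invented.

```python
# Sketch contrasting structured and semi-structured product records.
# Field names and values are invented for illustration.
import json

# Structured: every record follows the same fixed schema (like a table row).
structured = [
    {"id": 1, "name": "Phone A", "price": 599.0},
    {"id": 2, "name": "Phone B", "price": 399.0},
]

# Semi-structured: records share a loose shape, but fields vary per item.
semi_structured_json = """
[
  {"id": 1, "name": "Phone A", "specs": {"screen": "6.1in", "battery": "4000mAh"}},
  {"id": 2, "name": "Phone B", "reviews": ["durable", "cheap"]}
]
"""
records = json.loads(semi_structured_json)

# Flexible access: fields may be missing, so defaults are needed.
for r in records:
    print(r["name"], r.get("specs", {}), r.get("reviews", []))
```

The defaults in the access loop are exactly the extra flexibility (and extra handling burden) that semi-structured data introduces relative to a fixed schema.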
To address the limitations of traditional questionnaire surveys, including small survey
scope, difficulty in updating, time-consuming, and labor-intensive, Li et al. [83] proposed
a machine learning-based affective design dynamic mapping approach (MLADM). The
approach involves collecting Kansei words from literature, manually clustering them,
obtaining product features and images from shopping websites, and generating online
questionnaires. Four machine learning algorithms are used to construct the relationship
model. However, although MLADM can predict product feeling, it still relies heavily
on experts, and the online questionnaire data are also highly subjective. To
overcome these limitations, some studies explore objective data for design knowledge. For
instance, Jiao et al. [84] established a database to record affective needs, design elements,
and Kansei words from past sales records and previous product specifications. They
applied association rule mining to construct the relationship model and used it as the
product design inference pattern.
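Association rule mining of the kind Jiao et al. applied can be sketched as follows. The transactions and tags below are invented for illustration, and a real system would mine far larger sales databases, possibly with a library Apriori implementation.

```python
# Minimal association-rule sketch: mine "affective tag -> design element"
# rules from hypothetical past sales records by counting support and
# confidence. All transactions and tags are invented.
from itertools import combinations
from collections import Counter

# Each transaction: affective tags plus design elements of one sold product.
transactions = [
    {"sporty", "red", "rounded-edge"},
    {"sporty", "red", "sharp-edge"},
    {"elegant", "black", "rounded-edge"},
    {"sporty", "red", "rounded-edge"},
]

def rules(transactions, min_support=0.5, min_confidence=0.8):
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        for item in t:
            item_counts[item] += 1
        for a, b in combinations(sorted(t), 2):
            pair_counts[(a, b)] += 1
    out = []
    for (a, b), c in pair_counts.items():
        if c / n < min_support:
            continue
        for head, body in ((a, b), (b, a)):
            conf = c / item_counts[head]   # confidence of head -> body
            if conf >= min_confidence:
                out.append((head, body, c / n, conf))
    return out

for head, body, sup, conf in rules(transactions):
    print(f"{head} -> {body} (support={sup:.2f}, confidence={conf:.2f})")
```

Rules that clear both thresholds (here, "red" and "sporty" implying each other) become the inference patterns that map affective needs to design elements.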
Because many data sources are not open, it is not easy for designers to obtain structured
data, which can impede their product design research. In contrast, unstructured data
holds advantages in terms of volume and accessibility, accounting for nearly 95% of big
data [85]. Despite its amorphous and complex nature, unstructured data are still of great
interest to scholars because of their implicit information and value. In Sections 4.3–4.6,
we will, respectively, introduce the application of textual, image, audio, and video data in
product design, as well as the AI algorithms involved.

4.2. Product Design Based on Textual Data


The text exists in every phase of a product’s lifecycle. Among them, user experience
feedback after purchasing the product is particularly valuable for product design. In recent
years, more and more users have shared their opinions on Twitter, Facebook, microblogs,
blogs, and e-commerce websites [86–92]. As online reviews on e-commerce websites are
provided by consumers who have purchased the product, they are considered highly
reliable and authentic [93]. Research shows that 90% of customers use online reviews as a
reference, with 81% finding them helpful for purchase decisions, and 45% consulting them
even while shopping in physical stores [94,95]. Consequently, online reviews have become
an important influence factor in consumer behavior and a reference for enterprises to use
to improve their products [22,96].
Online reviews offer many benefits, including understanding customer satisfaction [97–101],
capturing user requirements [102–104], finding product deficiencies [105–107], propos-
ing improvement strategies [65], comparing competitive products [108], and providing
product recommendations [109–114]. For instance, in order to identify innovation sen-
tences from online reviews, Zhang et al. [4] proposed a deep learning-based approach.
Jin et al. [22,108,115,116] focused on filtering out helpful reviews. Wang et al. [117] pre-
sented a heuristic deep learning method to extract opinions and classified them into seven
pairs of affective attributes, namely “like-dislike”, “aesthetic-inaesthetic”, “soft-hard”,
“small-big”, “useful-useless”, “reliable-unreliable”, and “recommended-not recommended”.
Kumar et al. [118] combined reviews and electroencephalogram signals to predict product
ratings, and Xiao et al. [94] proposed a marginal effect-based KANO model (MEKM) to
categorize customer requirements. Simon et al. [119] explored product customization based
on online reviews. Figure 8 summarizes the typical processing flow of textual data.

[Figure 8 flowchart: data resources (web, social media, e-commerce platforms, review sites)
→ data acquisition (web crawler, download, API) → database → data processing (NLP
technology: GPT, BERT, text clustering, NER, LSTM, POS, topic models, lexicons, word2vec)
→ information extraction (feature extraction, opinion mining, topic extraction, sentiment
analysis) → applications (product comparison, ranking, recommendation, customer
satisfaction, development trends, user requirements, product deficiency, product
improvement, review summarization).]

Figure 8. The typical processing flow of textual data. It includes five parts, namely data resource,
data acquisition, data processing, information extraction, and application.

The rapid development of NLP is the key to product design based on textual data,
which includes topic extraction, opinion mining, text classification, sentiment analysis,
and text clustering.
Product attributes extraction. In the field of product design, topic extraction can
be used to extract product attributes (e.g., the screen, battery, weight, and camera of
a smartphone) and opinions [120]. As each product has multiple attributes, and consumers
have varying preferences and evaluations for each attribute, extracting attributes from
online reviews becomes essential [110]. Typically, product attributes are nouns or noun
phrases in online review sentences [121]. To extract product attributes, most studies utilize
the part-of-speech (POS) tag to generate tags identifying whether a word is a noun, adjective,
adverb, etc., and consider all nouns and noun phrases as attribute candidates. These
candidates are then pruned using techniques such as term-frequency (TF) [116,122], TF-
IDF [123,124], dictionary [125], manual definition [99], and clustering [96,126]. Moreover,
since product attributes are domain-sensitive, some studies consider it a domain-specific
entity recognition problem. For instance, Putthividhya and Hu [127] used named entity
recognition (NER) to extract product attributes.
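The POS-plus-pruning recipe can be sketched as follows. To keep the example self-contained, the reviews are already tokenized and POS-tagged by hand; a real pipeline would run an NLP tagger, and the tags and frequency threshold here are toy choices.

```python
# Sketch of POS-based attribute-candidate extraction with term-frequency
# pruning. Reviews are hand-tagged toy data; a real pipeline would use an
# NLP part-of-speech tagger.
from collections import Counter

tagged_reviews = [
    [("the", "DET"), ("battery", "NOUN"), ("lasts", "VERB"), ("long", "ADJ")],
    [("battery", "NOUN"), ("and", "CONJ"), ("screen", "NOUN"),
     ("are", "VERB"), ("great", "ADJ")],
    [("nice", "ADJ"), ("screen", "NOUN")],
    [("fast", "ADJ"), ("delivery", "NOUN")],
]

def extract_attributes(tagged_reviews, min_freq=2):
    """Collect nouns as attribute candidates, then prune rare ones by
    term frequency."""
    counts = Counter(
        word
        for review in tagged_reviews
        for word, tag in review
        if tag == "NOUN"
    )
    return {w for w, c in counts.items() if c >= min_freq}

print(extract_attributes(tagged_reviews))
```

Frequency pruning discards one-off nouns such as "delivery" that are unlikely to be genuine product attributes, which is exactly the role TF-based pruning plays in the studies cited above.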
The methods discussed above are limited to extracting explicit attributes and are not
suitable for implicit attributes. The distinction between explicit and implicit attributes
lies in whether they are explicitly mentioned. For example, in the sentence "the laptop
battery is durable, and the laptop is expensive", "battery" is explicitly mentioned and is
an explicit attribute, while "expensive" relates to price and is an implicit attribute.
Explicit attributes can be easily identified using rule-based or machine learning-based
methods, whereas implicit attributes require more sophisticated techniques [126].
pervised, semi-supervised, and supervised methods have been developed for implicit
attribute extraction [128]. For instance, Xu et al. [129] used an implicit topic model that
incorporated pre-existing knowledge to select training attributes and developed an implicit
attribute classifier based on SVM. To overcome the lack of a training corpus, they annotated
a large number of online reviews. Meanwhile, Kang et al. [130] proposed an unsupervised
rule-based method that can extract both subjective and objective features, including im-
plicit attributes, from customer reviews. Additionally, employing synonym dictionaries is
a viable method. For example, Jin et al. [108] combined WordNet and manually defined
Appl. Sci. 2023, 13, 9433 14 of 33

synonyms to extract attributes of mobile phones. WordNet is an English lexical database
that organizes words based on synonyms and antonyms. By analyzing online comments,
designers can gain a more detailed understanding of users’ attention to each product
attribute at a finer granularity, enabling a more precise analysis.
Opinion mining. In product design, opinion mining aims to extract descriptive words
that express subjective opinions, commonly known as opinion words. There are two primary
approaches to this task: co-occurrence analysis and syntactic analysis. The co-occurrence
approach identifies opinion words by analyzing the adjectives that appear in proximity
to product attributes. Meanwhile, the syntactic approach relies on analyzing the structure
and dependency of review sentences. For example, Hu and Liu [131] categorized product
attributes as either frequent or infrequent, with the former identified through POS and
association mining, and nearby adjectives deemed opinion words. The latter were identified
through a reverse search for opinion words.
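The co-occurrence idea of pairing each attribute with a nearby adjective can be sketched as below. The sentence is hand-tagged toy data, and the nearest-adjective heuristic is a simplification of the frequency- and association-based procedure described above.

```python
# Sketch of co-occurrence opinion mining: for each known product attribute,
# take the nearest adjective in the sentence as its opinion word. Toy
# hand-tagged data; a real system would use a POS tagger.

tagged = [
    ("the", "DET"), ("camera", "NOUN"), ("is", "VERB"), ("excellent", "ADJ"),
    ("but", "CONJ"), ("the", "DET"), ("heavy", "ADJ"), ("body", "NOUN"),
]
ATTRIBUTES = {"camera", "body"}

def nearest_adjective(tagged, attr_index):
    """Return the adjective closest (by token distance) to the attribute."""
    best, best_dist = None, len(tagged)
    for i, (word, tag) in enumerate(tagged):
        if tag == "ADJ" and abs(i - attr_index) < best_dist:
            best, best_dist = word, abs(i - attr_index)
    return best

pairs = [
    (word, nearest_adjective(tagged, i))
    for i, (word, tag) in enumerate(tagged)
    if tag == "NOUN" and word in ATTRIBUTES
]
print(pairs)
```

Proximity alone already recovers sensible attribute-opinion pairs here; the syntactic approach replaces raw distance with dependency structure to handle harder sentences.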
In addition to the two types of approaches discussed above, topic modeling has also
shown promising results in opinion word extraction. For instance, Bi et al. [97] used latent
Dirichlet allocation (LDA) to extract customer satisfaction dimensions from online reviews
and proposed the effect-based KANO model (EKM). LDA is a topic model that can group
synonyms into the same topic and obtain its probability distribution. Wang et al. [132]
applied the long short-term memory (LSTM) model to extract opinion words from raw
online reviews and mapped customer opinions to design parameters through deep learning.
However, it is worth noting that the topic model ignores the fine-grained aspects.
Several scholars have studied the expression of opinion words since different reviewers
may share the same opinions but use different words to express them. In our previous
research [133], we clustered similar opinion words based on word2vec. Additionally,
Wang et al. [134] restructured raw sentences to extract attribute-opinions from online
reviews, following grammatical rules. However, different languages have different sentence
structure rules, so the approach is hard to extend to other languages.
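Grouping synonymous opinion words by embedding similarity can be sketched as follows. The 3-dimensional "embeddings" are invented stand-ins; a real system would use pretrained word2vec vectors.

```python
# Sketch of grouping synonymous opinion words via embedding similarity.
# The 3-d vectors are invented; a real system would use word2vec or
# similar pretrained embeddings.
import numpy as np

embeddings = {
    "great":     np.array([0.9, 0.1, 0.0]),
    "excellent": np.array([0.85, 0.15, 0.05]),
    "awful":     np.array([-0.8, 0.1, 0.1]),
    "terrible":  np.array([-0.9, 0.05, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster(embeddings, threshold=0.9):
    """Greedy single-pass clustering: join a word to the first cluster
    whose representative is similar enough, else start a new cluster."""
    clusters = []  # list of (representative_vector, [words])
    for word, vec in embeddings.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(word)
                break
        else:
            clusters.append((vec, [word]))
    return [members for _, members in clusters]

print(cluster(embeddings))
```

Because clustering operates on vector geometry rather than sentence structure, it transfers to any language for which embeddings exist, unlike grammar-rule approaches.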
Sentiment analysis. In product design, sentiment analysis plays a crucial role in
identifying customers’ emotional attitudes towards a product and classifying them as
positive, negative, or neutral. Sentiment analysis can be conducted at three different
levels [135–137]: (i) document-level, which is a coarse-grained analysis of all reviews
in a document; (ii) sentence-level, which is a medium-grained analysis of individual
sentences; and (iii) aspect (attribute)-level, which is a fine-grained analysis of specific
product attributes.
In addition, sentiment analysis methods are categorized into three categories: machine
learning methods, lexicon-based methods, and hybrid methods [117,118,138,139]. Machine
learning methods regard sentiment analysis as a classification problem, using sentiment
polarities as labels and employing various techniques, such as Recurrent Neural Network
(RNN) [4], support vector machines [140,141], conditional random field (CRF) [135], and
neural networks [136], to construct classifiers with text features. In contrast, lexicon-based
methods identify sentiment orientations by referring to pre-defined lexicons like LIWC,
HowNet, and WordNet [141,142]. These lexicons contain sentiment-related terms and their
corresponding polarity. However, the quality of these lexicons is critical, and scholars are
working towards improving the coverage, domain adaptation, and continuous updating
of these resources. For instance, Cho et al. [143] constructed a comprehensive lexicon by
merging ten lexicons. Araque et al. [144] proposed a sentiment classification model that
uses semantic similarity measures and embedding representations, rather than keyword
matching, to compute the semantic distance between input words and lexicon words.
Marquez et al. [145] analyzed several lexicons and how they complement each other.
Lexicon collection can be divided into three categories: manual, dictionary-based, and
corpus-based. The manual method is time-consuming and labor-intensive. The dictionary-
based method uses a set of opinion words as seeds to search in existing dictionaries. The
corpus-based method expands lexicons based on information from the corpus, resulting in
domain-specific lexicons [146].
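A minimal lexicon-based scorer illustrates the mechanics described above. The tiny lexicon and one-token negation rule are invented for demonstration; real systems draw on resources like LIWC, HowNet, or WordNet.

```python
# Minimal lexicon-based sentiment scoring sketch. The lexicon and the
# single-token negation rule are illustrative, not a real resource.

LEXICON = {"good": 1, "durable": 1, "love": 2, "bad": -1, "broken": -2}
NEGATIONS = {"not", "never", "no"}

def score(sentence: str) -> int:
    tokens = sentence.lower().split()
    total = 0
    for i, tok in enumerate(tokens):
        polarity = LEXICON.get(tok, 0)
        # flip polarity when the previous token is a negation word
        if polarity and i > 0 and tokens[i - 1] in NEGATIONS:
            polarity = -polarity
        total += polarity
    return total

print(score("the battery is durable"))   # positive score
print(score("the screen is not good"))   # negated positive -> negative
```

The lexicon's coverage directly caps what such a scorer can detect, which is why the coverage, domain adaptation, and updating efforts discussed above matter so much in practice.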

Machine learning methods offer higher accuracy, while lexicon-based methods are
more general. Hybrid methods combine both. For instance, Dang et al. [147] proposed
a lexicon-enhanced approach for sentiment classification that combines machine learn-
ing and semantic-orientation methods. Their findings indicate that the hybrid method
significantly improves sentiment classification performance.
Reviewing the literature on product design based on textual data shows that it is an
important research topic addressed by many scholars in recent years. NLP technology is
used to extract product attributes and sentiment features from text, which helps designers
understand user requirements, evaluate products, and grasp product development trends.
While requirements and evaluations are the core foundation, the rest of the design process
is equally crucial. Unfortunately, most existing research has overlooked this point,
resulting in designs that are incremental improvements rather than original innovations.
In other words, text-based product design still has vast potential for development.
Undoubtedly, the emergence of textual data has great significance for product design.
Compared to traditional methods, textual data provide richer, higher-quality information
and can be acquired and processed much more quickly.

4.3. Product Design Based on Image Data


Product images serve as intuitive representations of products that convey essential
information on color, texture, and shape, playing a crucial role in the design process. Images
are an example of “what you see is what you get”, as our eyes can easily interpret the infor-
mation, constituting a significant advantage. As image data keep growing, it has evolved
into a vital information carrier alongside textual data. In recent years, CNN has made
significant breakthroughs in image recognition, classification, and segmentation [148–150].
Its powerful ability to learn robust features has attracted attention across various fields.
Figure 9 shows the structure of CNN. In product design, image data play two key roles:
inspiration and generation. The former inspires designers with existing product images,
while the latter directly generates new product images based on large collections of
existing images.

[Figure 9 diagram: an input pixel matrix passes through repeated convolution and pooling
layers, is flattened, and feeds a fully connected layer and the output layer.]
Figure 9. The structure of CNN. CNN consists of the input layer, convolution layer, pooling layer,
fully connected layer, and output layer. The input image will be converted into the pixel matrix in the
input layer. The convolution layer involves some filters, and different filters get triggered by different
features (e.g., semicircle, triangle, quadrilateral, red, and green) in the input image. Each filter will
output a feature map, and the pooling layer will reduce its dimensionality. The fully connected layer
and the output layer are the same as the artificial neural network.

Spark creative inspiration. Existing product images are helpful to inspire designers to
come up with new ideas and initial design schemes more efficiently [151,152]. To achieve
this goal, image retrieval, matching, and recommendation play crucial roles [3,150,153,154].
Designers and users can express their requirements through images and text and search
for related product images from databases or e-commerce websites. The matched images
can be recommended as design references. The retrieval input can be either text, images,
or both [155–158].
However, product retrieval is more complicated than simple image retrieval due to the
different shooting angles, conditions, backgrounds, or postures of the images [159–163]. For
instance, clothing images taken on the street or in a store with a phone may differ from those
in databases and e-commerce websites. Liu et al. [164] used a human detector to locate
30 human parts and then utilized a sparsely coded transfer matrix to establish a mapping
so that the discrepancy between the two image distributions does not compromise the
retrieval quality. On the other
hand, free-hand sketches are even more abstract. Yu et al. [165] introduced two new
datasets with dense annotation and built a deep network trained with triplet annotations to
enable retrieval across the sketch/image gap. Ullah et al. [166] used a 16-layer CNN model
(VGG16) to extract product features from images and measure their similarities using the
Euclidean distance.
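The final ranking step of such a retrieval pipeline can be sketched as below. The 4-dimensional feature vectors and catalog names are invented; a real system would extract high-dimensional features with a CNN such as VGG16 before this distance computation.

```python
# Sketch of similarity search over precomputed image features, in the
# spirit of the VGG16-based retrieval described above. The 4-d vectors and
# item names are invented; real CNN features are far higher-dimensional.
import numpy as np

catalog = {
    "shoe_a": np.array([0.9, 0.1, 0.3, 0.0]),
    "shoe_b": np.array([0.7, 0.3, 0.5, 0.2]),
    "bag_c":  np.array([0.1, 0.9, 0.0, 0.7]),
}

def nearest(query: np.ndarray, catalog: dict, k: int = 2):
    """Rank catalog items by Euclidean distance to the query feature."""
    dists = {name: float(np.linalg.norm(query - feat))
             for name, feat in catalog.items()}
    return sorted(dists, key=dists.get)[:k]

query = np.array([0.85, 0.15, 0.35, 0.05])  # feature of the query image
print(nearest(query, catalog))
```

Because distance is computed in feature space rather than pixel space, visually similar products rank close even when raw images differ in angle or background.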
In addition, some studies have explored the use of both image and text features for
product matching and categorization. For instance, Ristoski et al. [125] used CNN to extract
image features and match them with text embedding features to improve product matching
and categorization performance. Liang et al. [167] proposed a joint image segmentation and
labeling framework to retrieve clothing. They grouped superpixels into regions, trained
an E-SVM classifier using confident foreground regions, and propagated segmentations by
applying the E-SVM template to the entire image.
In addition to serving as a reference, image data can be leveraged to track fashion
trends [168,169], identify user preferences [168,170], provide product matching recom-
mendations [171–174], and prevent infringement [151]. For instance, Hu et al. [169]
established a furniture visual classification model that contains 16 styles, such as Gothic,
Modernist, and Rococo style. The model combines image features extracted by CNN
with handcrafted features and can help users understand their preferred styles. Most
products do not stand alone, and their compatibility with other products must be con-
sidered during the design process. To address this, Aggarwal et al. [175] used Siamese
networks to assess the style compatibility between pairs of furniture images. They also
proposed a joint visual-text embedding model for recommendation, based on furniture
type, color, and material. Laura et al. [176] built a graph neural network (GNN) model
to account for multiple items’ interactions, instead of just pairwise compatibility. Their
GNN model comprised a deep CNN to extract image features and enabled evaluating
the compatibility of multiple furniture items based on style, color, material, and overall
appearance. Moreover, they applied the GNN model to solve the fill-in-the-blank task,
for example, recommending the most suitable bed from multiple alternatives based on
a given desk, cabinet, chair, and mirror.
Generate new product images. In the field of product design, two popular models
are used to generate new product images: GAN and neural style transfer. Both of these
models take images as input and produce output images. GAN is an unsupervised
model that consists of two neural networks, namely the generator and discriminator [177].
The generator creates images, while the discriminator evaluates the generated images
against real images. Both networks are trained simultaneously and improve through
competition. Figure 10 shows the structure of GAN. We summarize the contribution
of GAN to product design as follows: schemes generation [151,178–180], text-to-image
synthesis [181], generative transformation [182], collocation generation [183–185], sketch
acquisition [152,186], colorization [187–190], and virtual display [28,191,192]. Some exam-
ples of new product generation based on image data are shown in Figure 11. In Figure 11a,
a new handbag image is generated from a shoe image. In Figure 11b, an edge image is the
input; new shoes are generated in the second and fourth columns, and the rest are ground
truth. In Figure 11c, a product image and a style image are input, and a new product
image is generated.
[Figure 10 diagram: random noise z feeds the generator G to produce G(z); real data x and
G(z) feed the discriminator D, whose real/fake predictions propagate errors back to G and D.]

Figure 10. The structure of GAN. Let x be a real product and z random noise; G(z) is the synthetic
data generated by the generator G (an image can also serve as the input distribution z). Both x and
G(z) are fed to the discriminator D, which predicts whether the data are real or fake. If the prediction
is correct, the error is propagated to G for improvement; otherwise, it is propagated to D. Eventually,
G captures the statistical distribution of x, and G(z) can deceive D. G(z) is the generated design
scheme, which contains product features yet differs from the real product.
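The adversarial objective sketched in Figure 10 can be made concrete numerically. The discriminator scores below are invented fixed numbers standing in for a trained network; the point is only to show the binary cross-entropy losses that drive G and D in opposite directions.

```python
# Numeric sketch of the GAN objective: a frozen "discriminator" scores real
# samples x and generated samples G(z), and we compute the binary
# cross-entropy losses that drive adversarial training. The scores are
# invented; real GANs learn them with neural networks.
import math

def bce(prediction: float, label: float) -> float:
    """Binary cross-entropy for a single probability prediction."""
    eps = 1e-12
    return -(label * math.log(prediction + eps)
             + (1 - label) * math.log(1 - prediction + eps))

# Hypothetical discriminator outputs D(x) for real and D(G(z)) for fake.
d_real = [0.9, 0.8]    # discriminator is fairly sure these are real
d_fake = [0.3, 0.2]    # and fairly sure these are fake

# Discriminator loss: real samples labeled 1, fake samples labeled 0.
d_loss = (sum(bce(p, 1.0) for p in d_real)
          + sum(bce(p, 0.0) for p in d_fake)) / (len(d_real) + len(d_fake))

# Generator loss (non-saturating form): wants fakes classified as real.
g_loss = sum(bce(p, 1.0) for p in d_fake) / len(d_fake)

print(f"D loss: {d_loss:.3f}, G loss: {g_loss:.3f}")
```

With these scores the discriminator's loss is low and the generator's is high, so gradient updates would push G to raise D(G(z)), which is exactly the competitive improvement loop the caption describes.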
[Figure 11: three panels (a), (b), (c) showing input and output image pairs.]


Figure 11. Cases of generating new products based on image big data: (a) generative
transformation [182]; (b) edges-to-shoes translation [187]; (c) neural style transfer [193].

The automatic generation of product schemes is made possible by using a large number
of existing product images. GAN can learn the distribution of product features from these
images and generate new designs through the gradual optimization of a generator by
a discriminator. For example, Li et al. [151] used GAN to create new smartwatch
schemes by training their model with 5459 smartwatch images. They also compared the
results of GAN with various extensions, including deep convolution GAN (DCGAN), least-
squares GAN (LSGAN), and Wasserstein GAN (WGAN). In another example, to produce
collocation clothing images, Liu et al. [194] proposed Attribute-GAN, which comprises
a generator and two discriminators. However, GAN generates only one view of the product,
which is insufficient for design purposes. To address this limitation, Chen et al. [191]
proposed conditional variational GAN (CVGAN), which can synthesize arbitrary product
views. This is useful for virtual display, especially for designs that require high shape and
appearance standards, such as clothing design where it’s necessary to show the effect of
try-on. Moreover, GAN and its extended models are also applicable to layout generation,
such as interior, websites, and advertisements [6,195,196].
Text-to-image synthesis is a promising technique for product design, although cur-
rent research in this field is scarce. One notable study by Kenan et al. [181] proposed
an enhanced attentional GAN (e-AttnGAN) for generating fashion images from text de-
scriptions, such as “long sleeve shirt with red check pattern”. Most existing studies aim to
improve the synthesis technology [197–200], and the potential of text-to-image synthesis
has yet to be fully realized by designers. It is worth noting that a lot of commercial software
has been developed for text-to-image synthesis, including Midjourney, DALL-E2, Imagen,
Stable Diffusion, Novel AI, Mimic, Dream by WOMBO, wenxinyige, Tiamat, and 6pen
art. These platforms allow users to express their requirements in text and their desired
image will be automatically generated. To demonstrate the effectiveness of this technique,
we also made some attempts, and the resulting product images are displayed in Figure 12.
We generated plates, vases, mugs, butterfly necklaces, crystal balls, and phone cases with
wenxinyige. We believe that text-to-image will become an important direction for prod-
uct design in the future. Users will only need to type out their requirements, without
having to master complex skills such as hand-drawing, sketching, or 3D modeling. In
addition to text-to-image, progress has also been made in text-to-3D [201,202]. For instance,
Wang et al. [203] proposed the Variational Score Distillation (VSD), which models 3D pa-
rameters as a probability distribution and optimizes the distance between the distribution
of rendered 2D images and the distribution of a pre-trained 2D diffusion model. This
approach generates high-quality textured 3D meshes based on the given text.

Figure 12. Designing new products with text-to-image synthesis.

Generative transformation and colorization can be seen as a type of image-to-image
problem, where an input image is transformed into a modified output image [190,204,205].
Generative transformation aims to convert one product image into another. This pro-
cess generates a sequence of intermediate images that may be used as new design ideas.
Zhu et al. [206–208] developed a product design assistance system that leverages GAN to
achieve three applications: (i) change product shape and color by manipulating an underly-
ing generative model; (ii) generate new images from user scribbles; and (iii) perform
generative transformation of one product picture into another. For example, a short
black boot can be converted into a long brown boot, and intermediate shoe images are
displayed. Nikolay et al. [192] proposed the conditional analogy GAN (CAGAN) to swap
clothing on models automatically. Pan et al. [209] proposed DragGAN to control the pose,
expression, and layout of products in the image.
Since the sketch already contains the structural and functional features of the product,
once it is colored, a preliminary design scheme is completed. Compared to other product
features, color has an intuitive influence, and users have different preferences for it. GAN
can automatically color the sketch and quickly generate multiple alternatives. For example,
Liu et al. [190] established an end-to-end GAN model for ethnic costume sketch colorization,
demonstrating its excellent ability to learn the color rules of ethnic costumes. Additionally,
Sreedhar et al. [180] proposed a car design system based on GAN that supports single-color,
dual-color, and multiple-color coloring from a single sketch.

GAN and its extended models have made significant contributions to product design,
but the generated images often suffer from blurriness, lack of detail, and low quality.
To address these problems, Lang et al. [184] proposed Design-GAN, which introduces
a texture similarity constraint mechanism. Similarly, Oh et al. [5] combined GAN with
topology optimization to enhance the quality of two-dimensional wheel images. By limiting
the wheel image to a non-design domain, a pre-defined domain, and a design domain
for topology optimization, they achieved results that were significantly different from the
initial design.
Neural style transfer is a deep generative model that enables the generation of images
by separating and reconstructing their content and style. Nevertheless, neural style transfer
is not simply a matter of overlapping a content image with a style image. Its implementation
relies on the features learned by CNN. Figure 13 shows the idea of neural style transfer.

Figure 13. Neural style transfer. Setting z as the random noise, G(z) is the synthetic image generated
by the generator (G). The pre-trained VGG-19 is used to calculate the style loss between G(z) and the
style image (S) and the content loss between G(z) and the content image (C). The total loss, consisting
of the style and content losses, is minimized to optimize G. Once the loss goal is reached, G(z) is
output and marked as O. O preserves the content features of C and the style features of S.
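The two losses in Figure 13 are easy to sketch: the content loss compares feature maps directly, while the style loss compares their Gram matrices (channel-by-channel correlations), following Gatys et al.'s formulation. The NumPy sketch below uses plain arrays as stand-ins for the features a pre-trained VGG-19 would actually produce, and the weighting constants are illustrative:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-by-channel correlations that capture style, not spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(gen_feat, content_feat):
    """Mean squared difference between feature maps of G(z) and C."""
    return np.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    """Mean squared difference between Gram matrices of G(z) and S."""
    return np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)

def total_loss(gen_feat, content_feat, style_feat, alpha=1.0, beta=100.0):
    """Weighted sum that is minimized to optimize the generated image."""
    return (alpha * content_loss(gen_feat, content_feat)
            + beta * style_loss(gen_feat, style_feat))
```

In practice these losses are evaluated on several VGG-19 layers and their gradients are back-propagated to the generated image G(z) until the loss goal is reached.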

In 2015, Gatys et al. [210] found that the image style and content could be separated
in CNN and manipulated independently. Building upon this finding, they proposed
a neural style transfer algorithm to transfer the style of famous artworks. Later, Huang
and Serge [211] introduced an adaptive instance normalization (AdaIN) layer, which
aligns the mean and variance of content features with style features to realize arbitrary
style transfer. Inspired by their method, our previous research [212] combined Kansei
engineering with neural style transfer to generate product design schemes (Figure 11c),
resulting in enhanced semantics of the generated product. Additionally, we developed
a real-time design system that allowed users to input images and receive product schemes
as output [193]. In a similar vein, Wu et al. [213] combined GAN with neural style transfer
to generate fashionable Dunhuang clothes, using GAN to generate clothing shapes and
neural style transfer to add Dunhuang elements. Neural style transfer is performed over
the entire image, limiting its flexibility. By using masks, it is possible to transfer different
styles to different parts of a product [211]. For example, using masks, three style images
can be transferred to the collar, pocket, and sleeve of a coat, respectively. Overall, neural
style transfer still offers tremendous untapped potential for product design.
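The AdaIN layer mentioned above is itself only a few lines: it normalizes each channel of the content features and rescales it with the style features' mean and standard deviation. Below is a minimal NumPy sketch (real systems apply this to CNN feature maps inside an encoder-decoder, not to raw images):

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: align the per-channel mean and
    variance of the content features with those of the style features.
    Inputs are (channels, height, width) feature maps."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mean) / (c_std + eps)  # zero mean, unit variance
    return normalized * s_std + s_mean                    # adopt style statistics
```

Because the output keeps the spatial layout of the content while adopting the style statistics, any style image can be applied without retraining, which is what enables arbitrary style transfer.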
Although GAN and its extended models show some limitations in generating
high-quality and controllable designs due to their strong randomness, recent work by
Sohn et al. [214] has demonstrated that AI and GAN can improve customer satisfaction
and product design freshness, indicating their potential in generating innovative and at-
tractive design proposals. On the other hand, neural style transfer is a technique used for

generating product schemes with good image quality, strong interpretability, and controlla-
bility. While it may lack the innovation ability of GAN, it can be regarded as a combination
of design elements such as product shape, texture, and color. Generation using these mod-
els promises to be a cost-effective way to generate product designs and support designers
in creating design schemes quickly. However, further research is necessary to enhance the
controllability and quality of GAN-based design generation and explore the potential of
combining these techniques to generate innovative and high-quality design schemes.
Generally, image data and AIGC have broken the old thinking of traditional product
design. Despite the emergence of generative models in product design, the literature on this
topic remains relatively sparse. Additionally, most of the existing literature primarily fo-
cuses on technical aspects and lacks product design knowledge [215,216]. Simply replacing
different training image data is not a viable solution for achieving design goals [217,218],
as product design requires specific knowledge and considerations. Traditional product
design methods and knowledge cannot be abandoned, but rather should be combined with
big data and AI algorithms. It is possible that this integration is the future direction of
product design.

4.4. Product Design Based on Audio Data

Big audio data [219] refer to information in the form of sound or voice. The customer
center is a valuable resource for collecting users’ complaints, inquiries, and suggestions
throughout the product service cycle [220]. Audio feedback can be utilized to collect user
requirements, provide recommendations, evaluate products, and improve product design.
However, there is currently a lack of literature on product design based on big audio data.
Therefore, we will explore several key AI technologies that could be employed in the future,
such as speech recognition, speaker identification, and emotion recognition.
Some studies have opted to manually record telephone complaints as text to avoid
direct processing of audio signals [221,222]. However, this approach heavily relies on the
recorder and is a labor-intensive and time-consuming process. In comparison, speech
recognition can accomplish this task automatically [223]. Speech recognition, also referred
to as automatic speech recognition (ASR), computer speech recognition, and speech-to-
text, involves converting human speech into computer-readable information, typically
in the form of text, although it may also be binary codes or character sequences [224].
The process of speech recognition is shown in Figure 14. Speech recognition is widely
used in mobile communication, search engines, human-computer interaction, among
other applications [225].

Figure 14. The process of speech recognition. The input speech is divided into frames, and feature
vectors are extracted from each frame's waveform. The acoustic model then converts the features into
phonemes, which are matched to words through the pronunciation dictionary. Finally, the language
model eliminates the confusion of homophones.
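The front end of this pipeline (framing plus per-frame feature extraction) can be sketched in NumPy. The 25 ms/10 ms framing and the log-energy and zero-crossing-rate features below are illustrative stand-ins for the LPCC/MFCC features an actual recognizer would use:

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Slice a 1-D speech signal into overlapping frames
    (e.g., 25 ms frames with a 10 ms hop at 16 kHz)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])

def frame_features(frames):
    """Toy per-frame feature vector: log energy and zero-crossing rate.
    Real systems use MFCCs, LPCCs, or learned features instead."""
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.column_stack([energy, zcr])

# A one-second synthetic "utterance" at 16 kHz
sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 220 * t) * np.exp(-t)
feats = frame_features(frame_signal(speech))
```

The resulting sequence of feature vectors is what the acoustic model in Figure 14 consumes, frame by frame.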

Speaker identification, also referred to as voiceprint recognition, is a technology that
identifies individuals based on their speech. Each person’s voice has unique characteristics,
which are determined by two factors: the size of the sound cavity and how the vocal
organs are manipulated. Like fingerprints, voiceprints need to be collected and stored in
a database prior to analysis. The spectrogram is generated from the amplitude of the
short-time Fourier transform (STFT) of the audio signal (Figure 15) [226]. Voiceprint recognition
is achieved by extracting speaker parameters, such as pitch frequency and formant, and
using machine learning methods and AI algorithms. Voiceprint recognition has been
widely applied in various fields, including biometric authentication, crime forensics, mobile
payment, and social security. However, the variability of voiceprint, the differences in audio
acquisition equipment, and environmental noise interference pose significant challenges
for the voiceprint recognition task.
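As a toy illustration of the enrollment-and-matching idea (not any production voiceprint system), each speaker can be stored as an averaged magnitude spectrum and a query matched by cosine similarity; the synthetic sine "voices" below stand in for real speech with different pitch characteristics:

```python
import numpy as np

def spectral_signature(signal, n_fft=512):
    """Average magnitude spectrum of a signal: a crude stand-in for
    speaker parameters such as pitch frequency and formants."""
    frames = signal[:len(signal) // n_fft * n_fft].reshape(-1, n_fft)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def identify(query, enrolled):
    """Return the enrolled speaker whose stored signature has the
    highest cosine similarity with the query's signature."""
    q = spectral_signature(query)
    best, best_sim = None, -1.0
    for name, sig in enrolled.items():
        sim = np.dot(q, sig) / (np.linalg.norm(q) * np.linalg.norm(sig))
        if sim > best_sim:
            best, best_sim = name, sim
    return best

# Enroll two synthetic "voices" with different dominant frequencies
sr = 16000
t = np.arange(sr) / sr
voice_a = np.sin(2 * np.pi * 120 * t)   # lower-pitched speaker
voice_b = np.sin(2 * np.pi * 240 * t)   # higher-pitched speaker
enrolled = {"A": spectral_signature(voice_a), "B": spectral_signature(voice_b)}
```

Real systems replace the averaged spectrum with pitch, formant, or learned embeddings, but the enroll-then-match structure is the same.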

Figure 15. The spectrogram. The horizontal axis represents time, and the vertical axis represents
frequency. The amplitude of speech at each frequency point is distinguished by color.
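Such a spectrogram is straightforward to compute with a short STFT routine; the window length and hop size below are illustrative choices:

```python
import numpy as np

def spectrogram(signal, sr, n_fft=512, hop=256):
    """Magnitude STFT: Hann-windowed frames -> FFT -> |amplitude|.
    Returns (freqs_Hz, times_s, magnitude[freq, time])."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1)).T  # rows = frequency, cols = time
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    times = (np.arange(n_frames) * hop + n_fft / 2) / sr
    return freqs, times, mag

# A 440 Hz tone: the spectrogram should peak near 440 Hz in every frame
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
freqs, times, mag = spectrogram(tone, sr)
peak_hz = freqs[np.argmax(mag.mean(axis=1))]
```

Plotting `mag` (usually on a log scale) against `times` and `freqs` yields an image like Figure 15, which is also why CNNs can be applied to it directly.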

As a vital division of audio processing and emotion computing, speech emotion
recognition aims to identify the emotions expressed by the speaker [227], including but not
limited to, anger, sadness, surprise, pleasure, and panic. This process can be regarded as
a classification problem, and selecting the appropriate emotion feature is essential for its
success [228,229]. We have summarized commonly used acoustic parameters in Table 4.
Moreover, since spectrograms are images, CNN can be used to automatically learn features
for speech recognition, speaker identification, and emotion recognition [230].

Table 4. Acoustic parameters.

Category                     Parameter
Prosody parameter            Duration
                             Pitch
                             Energy
                             Intensity
Spectral parameter           Linear predictor coefficient (LPC)
                             One-sided autocorrelation linear predictor coefficient (OSALPC)
                             Log-frequency power coefficient (LFPC)
                             Linear predictor cepstral coefficient (LPCC)
                             Cepstral-based OSALPC (OSALPCC)
                             Mel-frequency cepstral coefficient (MFCC)
Sound quality parameter      Formant frequency and bandwidth
                             Jitter and shimmer
                             Glottal parameter
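As a rough sketch of how two prosody parameters from Table 4 might be computed, the snippet below estimates short-time energy and pitch via a textbook autocorrelation peak (production systems add voicing detection and smoothing on top of this):

```python
import numpy as np

def frame_energy(frame):
    """Short-time energy of one frame."""
    return float(np.sum(frame ** 2))

def pitch_autocorr(frame, sr, f_lo=60, f_hi=400):
    """Estimate pitch (Hz) as the autocorrelation peak within a plausible
    speech range. A textbook method, not any specific system's."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / f_hi), int(sr / f_lo)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.04 * sr)) / sr          # one 40 ms frame
frame = np.sin(2 * np.pi * 150 * t)         # a 150 Hz "voiced" frame
pitch = pitch_autocorr(frame, sr)
```

Features like these, computed per frame and pooled over an utterance, form the input vectors that an emotion classifier is trained on.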

As AI products gain popularity, there has been increasing focus on audio signal
processing algorithms, particularly in the areas of speech recognition and emotion
recognition. These algorithms directly impact the user experience and play a crucial role in
customer decision-making. Despite the enormous potential of audio data in product design,

practical applications remain limited by processing algorithms and data quality, creating
a substantial gap between theory and practical application.

4.5. Product Design Based on Video Data

Videos showcasing the usage of products are also a valuable source of information for
product design. They can be viewed repeatedly without restriction, making it easier to
obtain hard-to-find but essential information. For instance, by watching the cooking videos
of homemakers, Nagamachi found that standing up while taking food made them feel
more comfortable than bending over. Additionally, Nagamachi found that the refrigerator
compartment was used more frequently than the freezer compartment. Based on these
insights, he improved the traditional refrigerator structure by changing the upper layer to
a refrigerator compartment and the lower layer to a freezer compartment [231], which is
still in use today.
Although video data are useful for product design, manual viewing can be time-consuming
and is only suitable for scenarios with small data volumes. In comparison, intelligent
video analysis supports motion detection, video summarization, video retrieval, color
detection, object detection, emotion recognition, and more [219,232–234]. This capability
makes it possible to apply big video data to user requirement acquisition, user behavior
observation, experience improvement, and product virtual display. Intelligent video anal-
ysis establishes a mapping relationship between images and their descriptions, allowing
computers to understand video through digital image analysis.
Product detection. Compared to a single image, object detection in video has
temporal context, which helps address redundancy between consecutive frames,
motion blur, unfocused images, partial occlusion, singular postures, etc. Li et al. [235]
proposed a novel method for annotating products in videos by identifying keyframes and
extracting SIFT features to generate BOVW histograms, which were then compared with
the visual signature of products for annotation. Meanwhile, Zhang et al. [236] developed
a framework for identifying clothes worn by celebrities in videos, which involved utilizing
DCNN for tasks such as human body detection, human posture selection, human pose
estimation, face verification, and clothing detection. Additionally, Zhang et al. [237] linked
clothes worn by stars with online shops to provide clothing recommendations. To improve
the matching results, Chen et al. [238] used an image feature network (IFN) and a video
feature network (VFN) to generate deep visual features for shopping images and clothing
trajectories in videos.
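The bag-of-visual-words (BOVW) step used in [235] is conceptually simple: each local descriptor is assigned to its nearest codeword in a learned visual vocabulary, and the keyframe is summarized as a normalized histogram of assignments. A minimal NumPy sketch, with random arrays standing in for real SIFT descriptors and a k-means codebook:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words: assign each descriptor (row) to its nearest
    codeword and return an L1-normalized histogram of assignments."""
    # Squared Euclidean distance between every descriptor and every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(50, 128))        # stand-in for a k-means vocabulary
descriptors = rng.normal(size=(300, 128))    # stand-in for one keyframe's SIFT descriptors
hist = bovw_histogram(descriptors, codebook)
```

A keyframe's histogram can then be compared against a product's stored visual signature with a histogram distance (cosine, chi-squared, etc.) to decide the annotation.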
User behavior observation. Understanding user behavior is crucial in designing prod-
ucts and improving user experience. To explore user behaviors in VR spherical video
streaming, Wu et al. [239] collected a head tracking dataset. Additionally, Babak et al. [240]
evaluated product usability through video analysis. They performed temporal segmenta-
tion of video featuring human–product interaction, automatically identifying time segments
where users encountered difficulties. They took water faucet design as an example and
used optical flow for motion detection. Optical flow, temporal difference, and background
subtraction are commonly used for motion detection in videos [241].
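Of these, temporal differencing is the simplest to sketch: a pixel is flagged as moving when its intensity changes by more than a threshold between consecutive frames. A toy NumPy version (real systems add filtering, morphological cleanup, and adaptive background models):

```python
import numpy as np

def temporal_difference(prev_frame, curr_frame, threshold=25):
    """Binary motion mask: pixels whose grayscale intensity changed
    by more than `threshold` between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two synthetic 64x64 grayscale frames: a bright 8x8 block moves right
prev_frame = np.zeros((64, 64), dtype=np.uint8)
curr_frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[20:28, 10:18] = 200
curr_frame[20:28, 14:22] = 200
mask = temporal_difference(prev_frame, curr_frame)
```

The mask highlights both the vacated and the newly occupied regions, which is why temporal differencing is cheap but coarse compared to optical flow.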
Product virtual display. Compared to images, video-based virtual displays demand
higher spatiotemporal consistency, but they offer a more comprehensive display for cus-
tomers. Various AI algorithms have been proposed to enhance the customer experience.
For instance, Liu et al. [242] presented a parsing model to predict human poses in the
video, while Dong et al. [243] proposed a flow-navigated warping GAN (FW-GAN) to
generate try-on videos conditioned on person, clothing images, and a series of target poses.
Interactive displays not only promote product understanding but also evoke enjoyment
during the experience [244]. To this end, An et al. [245] designed a video composition
system that displays products on mobile phones in an interactive and 3D-like manner. This
system can automatically perform rotation direction estimation, video object segmentation,
motion adjustment, and color adjustment.

Current intelligent video analysis technology has limitations in analyzing complex
activities, as it can only identify simple ones. Structuralization, which identifies features
in the video through background modeling, video segmentation, and target tracking
operations, remains a vital obstacle to video analysis. In addition, complete video data acquisition
requires the cooperation of multiple cameras, but achieving consistent installation condi-
tions for each camera is difficult. It is also challenging to perform continuous and consistent
visual analysis of moving targets in multiple videos. Moreover, the existing motion detec-
tion technology is not yet mature, as it can only detect target entry, departure, appearance,
disappearance, wandering, trailing, etc. The product use process is complicated, and the
information extracted at present is of little help to product design. Despite audio data
holding great potential in product design, its development and application are restricted
by processing technology and data quality.

4.6. Summary of Opportunities

Big data and AI algorithms hold enormous potential in modern product design. There
are two types of data: structured and unstructured. Structured data offer strong pertinence
and high-value density, but they are limited in openness and volume. Obtaining large-scale
structured data is challenging for individuals, and even enterprises—it requires substantial
workforce and resources for manual collection, sorting, summarizing, and storage in
standard databases. On the other hand, unstructured data enjoy high openness and volume
compared to structured data. It has great advantages in terms of veracity, velocity, value,
and variety, all of which may strongly contribute to modern product design.
We have detailed the common data types in the product lifecycle, including text,
image, audio, and video. Among these, text and image data are widely used in product
design due to their ease of acquisition, mature processing algorithms, and high data
quality. Textual data can overcome the limitations of traditional survey methods, such as
small sample size, limited survey scope, and high labor intensity, while also guaranteeing
data authenticity, reliability, and timeliness. Image data, on the other hand, are useful for
surveys and excel in visual inspiration and scheme display. The color, texture, and shape
information in images can inspire designers to create preliminary schemes. Furthermore,
generative models can generate new product designs from image data with varying colors,
structures, textures, etc. AIGC is a revolutionary advancement in product design, which
was previously unattainable through traditional methods.
Video data are also a visual type of information, which have high value for user
behavior observation and product virtual display. Compared to images, video can provide
customers with a comprehensive understanding and allow them to interact with products
virtually. However, both video and audio data are still in their early stages, with many
technological and data quality challenges to overcome before they can be fully utilized in
product design. Despite the development of technologies for audio and video processing,
few researchers have employed them in product design. However, these limited studies
warrant a discussion to motivate further research in the field.
Big data have brought about numerous benefits for product design; however, current
research still has its limitations and requires further investigation. There are two research
directions that are especially promising: (i) the fusion of different types of data. Existing
research is confined to only one type of data (i.e., text, image, video, audio, and location),
despite these data types co-existing at various stages of the product lifecycle. Furthermore,
multi-modal data fusion could enhance the persuasiveness and efficacy of the extracted
information; (ii) The synergistic exploitation of big data techniques and design domain
knowledge. While previous efforts have focused on technological breakthroughs, espe-
cially for data lacking advanced processing technologies (e.g., audio and video), domain
knowledge in the field of product design should not be ignored. The knowledge accumulated
in traditional product design is precious and helpful. Big data and traditional product
design methods are not contradictory but complementary. Only by combining them can
we better capture user requirements and improve the success rate of design. By addressing

these research directions, we can further unlock the potential of big data and AI-driven
product design, leading to more intelligent, personalized, and successful products across
various industries.

5. Conclusions
In the era of the knowledge-driven economy, customer demands are more diversified
and personalized, and the lifecycles of products are becoming shorter, especially in the
usage phase. Therefore, successful product innovation now requires the assistance of
interdisciplinary knowledge, the support of powerful techniques, and the guidance of
innovation theories that break conventional wisdom.
Currently, big data are one of the resources with the most potential for promoting
innovation throughout the whole product lifecycle. In this survey, we presented a compre-
hensive overview of existing studies on big data and AI-driven product design, aiming to
help researchers and practitioners understand the latest developments and opportunities in
this exciting field. Firstly, we visualized the product design process and introduced the key
tasks. Secondly, we introduced several representative traditional product design methods,
including their functionalities, applications, advantages, and disadvantages. Based on
this, we summarized seven common shortcomings of traditional methods. Thirdly, we
illustrated how big data and related AI algorithms can help solve challenges in modern
product design, especially in user requirements acquisition, product evaluation, and visual
display. We offered a detailed analysis of the current and potential application of AI tech-
niques in product design, utilizing the textual, image, audio, and video data. For textual
data, NLP techniques can be used to extract product attributes and sentiment features,
aiding in understanding user requirements, evaluating products, and grasping product
development trends. For images, neural style transfer, CNN, and GAN and its extended
models can be used to spark creative inspiration and generate new product images. Audio
can be used to capture user requirements, provide recommendations, evaluate products,
and improve product design. Video can be used to observe user behavior and display
product. Since audio data are still in an early stage, we focused on possible processing
technologies and workflow that may be applied in the future. Finally, we summarized the
deficiencies of existing data-driven product design studies and provided future research
directions, especially for synergistic methods that combine big data-driven approaches and
traditional methods for product design.
Product design based on big data and AI is typically performed with vast amounts
of real-world data. This approach provides unprecedented opportunities to harness the
collective intelligence of consumers, suppliers, designers, enterprises, etc. With the aid of
advanced big data processing and AI technologies, it is possible to more accurately acquire
user requirements, product evaluations, and virtual displays, leading to higher success
rates in developing competitive products while saving time, effort, and development costs.
We hope this survey will bring greater attention to the role of big data and cutting-edge
AI technologies in the modern product design field. By exploiting the power of deep
learning-based NLP, speech recognition, and generative AI techniques (such as GAN) in
product design, it is believed that product innovation can reach an unprecedented level
with high intelligence and automation.

Author Contributions: H.Q. and J.H. conceived the conception; H.Q. conducted literature collection
and manuscript writing; J.H., S.L., C.Z. and H.W. revised and polished the manuscript. All authors
have read and agreed to the published version of the manuscript.
Funding: This work was supported by Guizhou Provincial Basic Research Program (Natural Science)
under grant No. ZK[2023]029 and Guizhou Provincial Department of Education Youth Science and
Technology Talents Growth Project under grant No. KY[2022]209.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Keshwani, S.; Lena, T.A.; Ahmed-Kristense, S.; Chakrabart, A. Comparing novelty of designs from biological-inspiration with
those from brainstorming. J. Eng. Des. 2017, 28, 654–680. [CrossRef]
2. Wang, C.H. Using the theory of inventive problem solving to brainstorm innovative ideas for assessing varieties of phone-cameras.
Comput. Ind. Eng. 2015, 85, 227–234. [CrossRef]
3. Gu, X.; Gao, F.; Tan, M.; Peng, P. Fashion analysis and understanding with artificial intelligence. Inf. Process. Manag. 2020, 57,
102276–102292. [CrossRef]
4. Zhang, M.; Fan, B.; Zhang, N.; Wang, W.; Fan, W. Mining product innovation ideas from online reviews. Inf. Process. Manag. 2021,
58, 102389–102402. [CrossRef]
5. Oh, S.; Jung, Y.; Lee, I.; Kang, N. Design automation by integrating generative adversarial networks and topology optimization.
In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering
Conference, Quebec, QC, Canada, 26–29 August 2018; American Society of Mechanical Engineers: New York, NY, USA, 2018;
p. 02-03008.
6. Li, J.; Yang, J.; Zhang, J.; Liu, C.; Wang, C.; Xu, T. Attribute-conditioned layout gan for automatic graphic design. IEEE Trans. Vis.
Comput. Graph. 2020, 27, 4039–4048. [CrossRef]
7. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A comprehensive survey of ai-generated content (aigc): A history of
generative ai from gan to chatgpt. arXiv 2023, arXiv:2303.04226.
8. Wu, J.; Gan, W.; Chen, Z.; Wan, S.; Lin, H. Ai-generated content (aigc): A survey. arXiv 2023, arXiv:2304.06632.
9. Lai, H.-H.; Lin, Y.-C.; Yeh, C.-H.; Wei, C.-H. User-oriented design for the optimal combination on product design. Int. J. Prod. Econ.
2006, 100, 253–267. [CrossRef]
10. Shieh, M.D.; Yeh, Y.E. Developing a design support system for the exterior form of running shoes using partial least squares and
neural networks. Comput. Ind. Eng. 2013, 65, 704–718. [CrossRef]
11. Qu, Q.X.; Guo, F. Can eye movements be effectively measured to assess product design? Gender differences should be considered.
Int. J. Ind. Ergon. 2019, 72, 281–289. [CrossRef]
12. Dogan, K.M.; Suzuki, H.; Gunpinar, E. Eye tracking for screening design parameters in adjective-based design of yacht hull.
Ocean. Eng. 2018, 166, 262–277. [CrossRef]
13. Avikal, S.; Singh, R.; Rashmi, R. Qfd and fuzzy kano model based approach for classification of aesthetic attributes of suv car
profile. J. Intell. Manuf. 2020, 31, 271–284. [CrossRef]
14. Mistarihi, M.Z.; Okour, R.A.; Mumani, A.A. An integration of a qfd model with fuzzy-anp approach for determining the
importance weights for engineering characteristics of the proposed wheelchair design. Appl. Soft Comput. 2020, 90, 106136–106148.
[CrossRef]
15. Yamashina, H.; Ito, T.; Kawada, H. Innovative product development process by integrating qfd and triz. Int. J. Prod. Res. 2002,
40, 1031–1050. [CrossRef]
16. Dou, R.; Zhang, Y.; Nan, G. Application of combined kano model and interactive genetic algorithm for product customization.
J. Intell. Manuf. 2019, 30, 2587–2602. [CrossRef]
17. Wu, Y.-H.; Ho, C.C. Integration of green quality function deployment and fuzzy theory: A case study on green mobile phone
design. J. Clean. Prod. 2015, 108, 271–280. [CrossRef]
18. Chen, Z.-S.; Liu, X.-L.; Chin, K.-S.; Pedrycz, W.; Tsui, K.-L.; Skibniewski, M.J. Online-review analysis based large-scale group
decision-making for determining passenger demands and evaluating passenger satisfaction: Case study of high-speed rail system
in china. Inf. Fusion 2020, 69, 22–39. [CrossRef]
19. Yao, M.-L.; Chuang, M.-C.; Hsu, C.-C. The kano model analysis of features for mobile security applications. Comput. Secur. 2018,
78, 336–346. [CrossRef]
20. Dong, M.; Zeng, X.; Koehl, L.; Zhang, J. An interactive knowledge-based recommender system for fashion product design in the
big data environment. Inf. Sci. 2020, 540, 469–488. [CrossRef]
21. Sakao, T. A qfd-centred design methodology for environmentally conscious product design. Int. J. Prod. Res. 2007, 45, 4143–4162.
[CrossRef]
22. Jin, J.; Liu, Y.; Ji, P.; Kwong, C. Review on recent advances in information mining from big consumer opinion data for product
design. J. Comput. Inf. Sci. Eng. 2019, 19, 010801. [CrossRef]
23. Chen, Z.-S.; Liu, X.-L.; Rodríguez, R.M.; Wang, X.-J.; Chin, K.-S.; Tsui, K.-L.; Martínez, L. Identifying and prioritizing factors
affecting in-cabin passenger comfort on high-speed rail in china: A fuzzy-based linguistic approach. Appl. Soft Comput. 2020,
95, 106558–106577. [CrossRef]
24. Iosifidis, A.; Tefas, A.; Pitas, I.; Gabbouj, M. Big media data analysis. Signal Process. Image Commun. 2017, 59, 105–108. [CrossRef]
25. Wang, L.; Liu, Z. Data-driven product design evaluation method based on multi-stage artificial neural network. Appl. Soft Comput.
2021, 103, 107117. [CrossRef]
26. Shoumy, N.J.; Ang, L.-M.; Seng, K.P.; Rahaman, D.M.; Zia, T. Multimodal big data affective analytics: A comprehensive survey
using text, audio, visual and physiological signals. J. Netw. Comput. Appl. 2020, 149, 102447–1024482. [CrossRef]
27. Zhang, X.; Ming, X.; Yin, D. Application of industrial big data for smart manufacturing in product service system based on system
engineering using fuzzy dematel. J. Clean. Prod. 2020, 265, 121863–121888. [CrossRef]
28. Pandey, N.; Savakis, A. Poly-gan: Multi-conditioned gan for fashion synthesis. Neurocomputing 2020, 414, 356–364. [CrossRef]

29. Zhang, Y.; Ren, S.; Liu, Y.; Si, S. A big data analytics architecture for cleaner manufacturing and maintenance processes of complex
products. J. Clean. Prod. 2017, 142, 626–641. [CrossRef]
30. Zhang, Y.; Ren, S.; Liu, Y.; Sakao, T.; Huisingh, D. A framework for big data driven product lifecycle management. J. Clean. Prod.
2017, 159, 229–240. [CrossRef]
31. Ren, S.; Zhang, Y.; Liu, Y.; Sakao, T.; Huisingh, D.; Almeida, C.M.V.B. A comprehensive review of big data analytics throughout
product lifecycle to support sustainable smart manufacturing: A framework, challenges and future research directions. J. Clean.
Prod. 2019, 210, 1343–1365. [CrossRef]
32. Büyükzkan, G.; Ger, F. Application of a new combined intuitionistic fuzzy mcdm approach based on axiomatic design methodol-
ogy for the supplier selection problem. Appl. Soft Comput. 2017, 52, 1222–1238. [CrossRef]
33. Carnevalli, J.A.; Miguel, P.C. Review, analysis and classification of the literature on qfd—Types of research, difficulties and
benefits. Int. J. Prod. Econ. 2008, 114, 737–754. [CrossRef]
34. Li, Y.; Wang, J.; Li, X. L.; Zhao, W.; Hu, W. Creative thinking and computer aided product innovation. Comput. Integr. Manuf. Syst.
2003, 9, 1092–1096.
35. Li, X.; Qiu, S.; Ming, H.X.G. An integrated module-based reasoning and axiomatic design approach for new product design
under incomplete information environment. Comput. Ind. Eng. 2019, 127, 63–73. [CrossRef]
36. Kudrowitz, B.M.; Wallace, D. Assessing the quality of ideas from prolific, early-stage product ideation. J. Eng. Des. 2013,
24, 120–139. [CrossRef]
37. Bonnardel, N.; Didier, J. Brainstorming variants to favor creative design. Appl. Ergon. 2020, 83, 102987. [CrossRef] [PubMed]
38. Youn, H.; Strumsky, D.; Bettencourt, L.M.; Lobo, J. Invention as a combinatorial process: Evidence from us patents. J. R.
Soc. Interface 2015, 12, 20150272. [CrossRef]
39. Zarraonandia, T.; Diaz, P.; Aedo, I. Using combinatorial creativity to support end-user design of digital games. Multimed. Tools
Appl. 2017, 76, 9073–9098. [CrossRef]
40. Sakao, T.; Lindahl, M. A value based evaluation method for product/service system using design information. CIRP Ann.
Manuf. Technol. 2012, 61, 51–54. [CrossRef]
41. Vieira, J.; Osório, J.M.A.; Mouta, S.; Delgado, P.; Portinha, A.; Meireles, J.F.; Santos, J.A. Kansei engineering as a tool for the design
of in-vehicle rubber keypads. Appl. Ergon. 2017, 61, 1–11. [CrossRef] [PubMed]
42. Nagamachi, M. Kansei engineering in consumer product design. Ergon. Des. Q. Hum. Factors Appl. 2002, 10, 5–9. [CrossRef]
43. Nagamachi, M. Kansei engineering: A new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon.
1995, 15, 3–11. [CrossRef]
44. Nagamachi, M. Successful points of kansei product development. In Proceedings of the 7th International Conference on Kansei
Engineering & Emotion Research, Kuching, Malaysia, 19–22 March 2018; Linköping University Electronic Press: Linköping,
Sweden, 2018; pp. 177–187.
45. Nagamachi, M.; Lokman, A.M. Innovations of Kansei Engineering; CRC Press: Boca Raton, FL, USA, 2016.
46. Ishihara, S.; Nagamachi, M.; Schütte, S.; Eklund, J. Affective Meaning: The Kansei Engineering Approach; Elsevier: Amsterdam,
The Netherlands, 2008; pp. 477–496.
47. Schütte, S. Designing Feelings into Products: Integrating Kansei Engineering Methodology in Product Development. Master’s
Thesis, Linköping University, Linköping, Sweden, 2002.
48. Schütte, S. Engineering Emotional Values in Product Design: Kansei Engineering in Development. Ph.D. Thesis, Institutionen för Konstruktions- och Produktionsteknik, Linköping, Sweden, 2005.
49. Schütte, S.; Eklund, J. Design of rocker switches for work-vehicles—An application of kansei engineering. Appl. Ergon. 2005,
36, 557–567. [CrossRef] [PubMed]
50. Marco Almagro, L.; Tort-Martorell Llabrés, X.; Schütte, S. A Discussion on the Selection of Prototypes for Kansei Engineering Study;
Universitat Politècnica de Catalunya: Barcelona, Spain, 2016.
51. Schütte, S.T.; Eklund, J.; Axelsson, J.R.; Nagamachi, M. Concepts, methods and tools in Kansei engineering. Theor. Issues Ergon. Sci.
2004, 5, 214–231. [CrossRef]
52. Ishihara, S.; Nagamachi, M.; Tsuchiya, T. Development of a Kansei engineering artificial intelligence sightseeing application. In
Proceedings of the International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 21–25 July 2018;
Springer: Berlin/Heidelberg, Germany, 2018; pp. 312–322.
53. Djatna, T.; Kurniati, W.D. A system analysis and design for packaging design of powder shaped fresheners based on Kansei
engineering. Procedia Manuf. 2015, 4, 115–123. [CrossRef]
54. Shi, F.; Dey, N.; Ashour, A.S.; Sifaki-Pistolla, D.; Sherratt, R.S. Meta-kansei modeling with valence-arousal fmri dataset of brain.
Cogn. Comput. 2019, 11, 227–240. [CrossRef]
55. Xiao, W.; Cheng, J. Perceptual design method for smart industrial robots based on virtual reality and synchronous quantitative
physiological signals. Int. J. Distrib. Sens. Netw. 2020, 16, 1–15. [CrossRef]
56. Kano, N.; Seraku, N.; Takahashi, F.; Tsuji, S.-I. Attractive quality and must-be quality. J. Jpn. Soc. Qual. Control. 1984, 14, 147–156.
57. Avikal, S.; Jain, R.; Mishra, P. A kano model, ahp and m-topsis method-based technique for disassembly line balancing under
fuzzy environment. Appl. Soft Comput. 2014, 25, 519–529. [CrossRef]
58. Violante, M.G.; Vezzetti, E. Kano qualitative vs quantitative approaches: An assessment framework for products attributes
analysis. Comput. Ind. 2017, 86, 15–25. [CrossRef]
Appl. Sci. 2023, 13, 9433 27 of 33
59. He, L.; Song, W.; Wu, Z.; Xu, Z.; Zheng, M.; Ming, X. Quantification and integration of an improved kano model into qfd based
on multi-population adaptive genetic algorithm. Comput. Ind. Eng. 2017, 114, 183–194. [CrossRef]
60. Geng, X.; Chu, X. A new importance–performance analysis approach for customer satisfaction evaluation supporting pss design.
Expert Syst. Appl. 2012, 39, 1492–1502. [CrossRef]
61. Lee, Y.-C.; Sheu, L.-C.; Tsou, Y.-G. Quality function deployment implementation based on fuzzy kano model: An application in
plm system. Comput. Ind. Eng. 2008, 55, 48–63. [CrossRef]
62. Ghorbani, M.; Mohammad Arabzad, S.; Shahin, A. A novel approach for supplier selection based on the kano model and fuzzy
mcdm. Int. J. Prod. Res. 2013, 51, 5469–5484. [CrossRef]
63. Chen, C.-C.; Chuang, M.-C. Integrating the kano model into a robust design approach to enhance customer satisfaction with
product design. Int. J. Prod. Econ. 2008, 114, 667–681. [CrossRef]
64. Basfirinci, C.; Mitra, A. A cross cultural investigation of airlines service quality through integration of servqual and the kano
model. J. Air Transp. Manag. 2015, 42, 239–248. [CrossRef]
65. Qi, J.; Zhang, Z.; Jeon, S.; Zhou, Y. Mining customer requirements from online reviews: A product improvement perspective.
Inf. Manag. 2016, 53, 951–963. [CrossRef]
66. Bellandi, V.; Ceravolo, P.; Ehsanpour, M. A case study in smart healthcare platform design. In Proceedings of the IEEE World
Congress on Services, Beijing, China, 18–24 October 2020; pp. 7–12.
67. Almannai, B.; Greenough, R.; Kay, J. A decision support tool based on qfd and fmea for the selection of manufacturing automation technologies. Robot. Comput.-Integr. Manuf. 2008, 24, 501–507. [CrossRef]
68. Lee, C.H.; Chen, C.H.; Lee, Y.C. Customer requirement-driven design method and computer-aided design system for supporting
service innovation conceptualization handling. Adv. Eng. Inform. 2020, 45, 1–16. [CrossRef]
69. Yan, H.B.; Meng, X.S.; Ma, T.; Huynh, V.N. An uncertain target-oriented qfd approach to service design based on service
standardization with an application to bank window service. IISE Trans. 2019, 51, 1167–1189. [CrossRef]
70. Kim, K.-J.; Moskowitz, H.; Dhingra, A.; Evans, G. Fuzzy multicriteria models for quality function deployment. Eur. J. Oper. Res.
2000, 121, 504–518. [CrossRef]
71. Kahraman, C.; Ertay, T.; Büyüközkan, G. A fuzzy optimization model for qfd planning process using analytic network approach.
Eur. J. Oper. Res. 2006, 171, 390–411. [CrossRef]
72. Wang, Y.-H.; Lee, C.-H.; Trappey, A.J. Service design blueprint approach incorporating triz and service qfd for a meal ordering
system: A case study. Comput. Ind. Eng. 2017, 107, 388–400. [CrossRef]
73. Dursun, M.; Karsak, E.E. A qfd-based fuzzy mcdm approach for supplier selection. Appl. Math. Model. 2013, 37, 5864–5875. [CrossRef]
74. Li, M.; Jin, L.; Wang, J. A new mcdm method combining qfd with topsis for knowledge management system selection from the
user’s perspective in intuitionistic fuzzy environment. Appl. Soft Comput. 2014, 21, 28–37. [CrossRef]
75. Liu, H.-T. Product design and selection using fuzzy qfd and fuzzy mcdm approaches. Appl. Math. Model. 2011, 35, 482–496.
[CrossRef]
76. Yazdani, M.; Chatterjee, P.; Zavadskas, E.K.; Zolfani, S.H. Integrated qfd-mcdm framework for green supplier selection. J. Clean.
Prod. 2017, 142, 3728–3740. [CrossRef]
77. Wang, X.; Fang, H.; Song, W. Technical attribute prioritisation in qfd based on cloud model and grey relational analysis. Int. J.
Prod. Res. 2020, 58, 5751–5768. [CrossRef]
78. Yazdani, M.; Kahraman, C.; Zarate, P.; Onar, S.C. A fuzzy multi attribute decision framework with integration of qfd and grey
relational analysis. Expert Syst. Appl. 2019, 115, 474–485. [CrossRef]
79. Zhai, L.-Y.; Khoo, L.-P.; Zhong, Z.-W. A rough set based qfd approach to the management of imprecise design information in
product development. Adv. Eng. Inform. 2009, 23, 222–228. [CrossRef]
80. Zhai, L.-Y.; Khoo, L.P.; Zhong, Z.-W. Towards a qfd-based expert system: A novel extension to fuzzy qfd methodology using
rough set theory. Expert Syst. Appl. 2010, 37, 8888–8896. [CrossRef]
81. Moussa, F.Z.B.; Rasovska, I.; Dubois, S.; De Guio, R.; Benmoussa, R. Reviewing the use of the theory of inventive problem solving
(triz) in green supply chain problems. J. Clean. Prod. 2017, 142, 2677–2692. [CrossRef]
82. Ai, X.; Jiang, Z.; Zhang, H.; Wang, Y. Low-carbon product conceptual design from the perspectives of technical system and human
use. J. Clean. Prod. 2020, 244, 118819. [CrossRef]
83. Li, Z.; Tian, Z.; Wang, J.; Wang, W.; Huang, G. Dynamic mapping of design elements and affective responses: A machine learning based method for affective design. J. Eng. Des. 2018, 29, 358–380. [CrossRef]
84. Jiao, J.R.; Zhang, Y.; Helander, M. A kansei mining system for affective design. Expert Syst. Appl. 2006, 30, 658–673. [CrossRef]
85. Gandomi, A.; Haider, M. Beyond the hype: Big data concepts, methods, and analytics. Int. J. Inf. Manag. 2015, 35, 137–144.
[CrossRef]
86. Carvalho, J.P.; Rosa, H.; Brogueira, G.; Batista, F. Misnis: An intelligent platform for twitter topic mining. Expert Syst. Appl. 2017,
89, 374–388. [CrossRef]
87. Lau, R.Y.; Li, C.; Liao, S.S. Social analytics: Learning fuzzy product ontologies for aspect-oriented sentiment analysis. Decis. Support
Syst. 2014, 65, 80–94. [CrossRef]
88. Liu, Y.; Jiang, C.; Zhao, H. Using contextual features and multi-view ensemble learning in product defect identification from
online discussion forums. Decis. Support Syst. 2018, 105, 1–12. [CrossRef]
89. Park, Y.; Lee, S. How to design and utilize online customer center to support new product concept generation. Expert Syst. Appl.
2011, 38, 10638–10647. [CrossRef]
90. Ren, L.; Zhu, B.; Xu, Z. Data-driven fuzzy preference analysis from an optimization perspective. Fuzzy Sets Syst. 2019, 377, 85–101.
[CrossRef]
91. Hong, H.; Xu, D.; Wang, G.A.; Fan, W. Understanding the determinants of online review helpfulness: A meta-analytic investiga-
tion. Decis. Support Syst. 2017, 102, 1–11. [CrossRef]
92. Min, H.-J.; Park, J.C. Identifying helpful reviews based on customer’s mentions about experiences. Expert Syst. Appl. 2012,
39, 11830–11838. [CrossRef]
93. Choi, J.; Yoon, J.; Chung, J.; Coh, B.-Y.; Lee, J.-M. Social media analytics and business intelligence research: A systematic review.
Inf. Process. Manag. 2020, 57, 102279–102298. [CrossRef]
94. Xiao, S.; Wei, C.-P.; Dong, M. Crowd intelligence: Analyzing online product reviews for preference measurement. Inf. Manag.
2016, 53, 169–182. [CrossRef]
95. Zhao, K.; Stylianou, A.C.; Zheng, Y. Sources and impacts of social influence from online anonymous user reviews. Inf. Manag.
2018, 55, 16–30. [CrossRef]
96. Lee, A.J.; Yang, F.-C.; Chen, C.-H.; Wang, C.-S.; Sun, C.-Y. Mining perceptual maps from consumer reviews. Decis. Support Syst.
2016, 82, 12–25. [CrossRef]
97. Bi, J.-W.; Liu, Y.; Fan, Z.-P.; Cambria, E. Modelling customer satisfaction from online reviews using ensemble neural network and
effect-based kano model. Int. J. Prod. Res. 2019, 57, 7068–7088. [CrossRef]
98. Hu, M.; Liu, B. Mining opinion features in customer reviews. AAAI 2004, 4, 755–760.
99. Kang, D.; Park, Y. Review-based measurement of customer satisfaction in mobile service: Sentiment analysis and vikor approach.
Expert Syst. Appl. 2014, 41, 1041–1050. [CrossRef]
100. Kangale, A.; Kumar, S.K.; Naeem, M.A.; Williams, M.; Tiwari, M.K. Mining consumer reviews to generate ratings of different
product attributes while producing feature-based review-summary. Int. J. Syst. Sci. 2016, 47, 3272–3286. [CrossRef]
101. Wang, Y.; Lu, X.; Tan, Y. Impact of product attributes on customer satisfaction: An analysis of online reviews for washing
machines. Electron. Commer. Res. Appl. 2018, 29, 1–11. [CrossRef]
102. Aguwa, C.; Olya, M.H.; Monplaisir, L. Modeling of fuzzy-based voice of customer for business decision analytics. Knowl.-Based
Syst. 2017, 125, 136–145. [CrossRef]
103. Zhan, J.; Loh, H.T.; Liu, Y. Gather customer concerns from online product reviews–a text summarization approach. Expert Syst.
Appl. 2009, 36, 2107–2115. [CrossRef]
104. Archak, N.; Ghose, A.; Ipeirotis, P.G. Deriving the pricing power of product features by mining consumer reviews. Manag. Sci.
2011, 57, 1485–1509. [CrossRef]
105. Law, D.; Gruss, R.; Abrahams, A.S. Automated defect discovery for dishwasher appliances from online consumer reviews.
Expert Syst. Appl. 2017, 67, 84–94. [CrossRef]
106. Winkler, M.; Abrahams, A.S.; Gruss, R.; Ehsani, J.P. Toy safety surveillance from online reviews. Decis. Support Syst. 2016,
90, 23–32. [CrossRef] [PubMed]
107. Zhang, W.; Xu, H.; Wan, W. Weakness finder: Find product weakness from chinese reviews by using aspects based sentiment
analysis. Expert Syst. Appl. 2012, 39, 10283–10291. [CrossRef]
108. Jin, J.; Ji, P.; Gu, R. Identifying comparative customer requirements from product online reviews for competitor analysis. Eng. Appl.
Artif. Intell. 2016, 49, 61–73. [CrossRef]
109. Chatterjee, S. Explaining customer ratings and recommendations by combining qualitative and quantitative user generated
contents. Decis. Support Syst. 2019, 119, 14–22. [CrossRef]
110. Fan, Z.-P.; Li, G.-M.; Liu, Y. Processes and methods of information fusion for ranking products based on online reviews:
An overview. Inf. Fusion 2020, 60, 87–97. [CrossRef]
111. Liu, P.; Teng, F. Probabilistic linguistic todim method for selecting products through online product reviews. Inf. Sci. 2019,
485, 441–455. [CrossRef]
112. Liu, Y.; Bi, J.-W.; Fan, Z.-P. Ranking products through online reviews: A method based on sentiment analysis technique and
intuitionistic fuzzy set theory. Inf. Fusion 2017, 36, 149–161. [CrossRef]
113. Siering, M.; Deokar, A.V.; Janze, C. Disentangling consumer recommendations: Explaining and predicting airline recommenda-
tions based on online reviews. Decis. Support Syst. 2018, 107, 52–63. [CrossRef]
114. Zhang, J.; Chen, D.; Lu, M. Combining sentiment analysis with a fuzzy kano model for product aspect preference recommendation.
IEEE Access 2018, 6, 59163–59172. [CrossRef]
115. Jin, J.; Ji, P.; Kwong, C.K. What makes consumers unsatisfied with your products: Review analysis at a fine-grained level.
Eng. Appl. Artif. Intell. 2016, 47, 38–48. [CrossRef]
116. Jin, J.; Liu, Y.; Ji, P.; Liu, H. Understanding big consumer opinion data for market-driven product design. Int. J. Prod. Res. 2016,
54, 3019–3041. [CrossRef]
117. Wang, W.M.; Wang, J.; Li, Z.; Tian, Z.; Tsui, E. Multiple affective attribute classification of online customer product reviews:
A heuristic deep learning method for supporting Kansei engineering. Eng. Appl. Artif. Intell. 2019, 85, 33–45. [CrossRef]
118. Kumar, S.; Yadava, M.; Roy, P.P. Fusion of eeg response and sentiment analysis of products review to predict customer satisfaction.
Inf. Fusion 2019, 52, 41–52. [CrossRef]
119. Li, S.; Nahar, K.; Fung, B.C. Product customization of tablet computers based on the information of online reviews by customers.
J. Intell. Manuf. 2015, 26, 97–110. [CrossRef]
120. Sun, J.-T.; Zhang, Q.-Y. Product typicality attribute mining method based on a topic clustering ensemble. Artif. Intell. Rev. 2022,
55, 6629–6654. [CrossRef]
121. Zhang, H.; Sekhari, A.; Ouzrout, Y.; Bouras, A. Jointly identifying opinion mining elements and fuzzy measurement of opinion
intensity to analyze product features. Eng. Appl. Artif. Intell. 2016, 47, 122–139. [CrossRef]
122. Tubishat, M.; Idris, N.; Abushariah, M. Explicit aspects extraction in sentiment analysis using optimal rules combination.
Future Gener. Comput. Syst. 2021, 114, 448–480. [CrossRef]
123. Quan, C.; Ren, F. Unsupervised product feature extraction for feature-oriented opinion determination. Inf. Sci. 2014, 272, 16–28.
[CrossRef]
124. Sun, H.; Guo, W.; Shao, H.; Rong, B. Dynamical mining of ever-changing user requirements: A product design and improvement
perspective. Adv. Eng. Inform. 2020, 46, 101174–101186. [CrossRef]
125. Ristoski, P.; Petrovski, P.; Mika, P.; Paulheim, H. A machine learning approach for product matching and categorization.
Semant. Web 2018, 9, 707–728. [CrossRef]
126. Fang, Z.; Zhang, Q.; Tang, X.; Wang, A.; Baron, C. An implicit opinion analysis model based on feature-based implicit opinion patterns. Artif. Intell. Rev. 2020, 53, 4547–4574. [CrossRef]
127. Putthividhya, D.; Hu, J. Bootstrapped named entity recognition for product attribute extraction. In Proceedings of the Conference
on Empirical Methods in Natural Language Processing, Edinburgh, UK, 27–31 July 2011; pp. 1557–1567.
128. Tubishat, M.; Idris, N.; Abushariah, M.A. Implicit aspect extraction in sentiment analysis: Review, taxonomy, opportunities, and open challenges. Inf. Process. Manag. 2018, 54, 545–563. [CrossRef]
129. Xu, H.; Zhang, F.; Wang, W. Implicit feature identification in chinese reviews using explicit topic mining model. Knowl.-Based Syst.
2015, 76, 166–175. [CrossRef]
130. Kang, Y.; Zhou, L. Rube: Rule-based methods for extracting product features from online consumer reviews. Inf. Manag. 2017,
54, 166–176. [CrossRef]
131. Hu, M.; Liu, B. Mining and summarizing customer reviews. In Proceedings of the Tenth International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 22–25 August 2004; ACM: New York, NY, USA, 2004; pp. 168–177. [CrossRef]
132. Wang, Y.; Mo, D.Y.; Tseng, M.M. Mapping customer needs to design parameters in the front end of product design by applying
deep learning. CIRP Ann. 2018, 67, 145–148. [CrossRef]
133. Li, S.B.; Quan, H.F.; Hu, J.J.; Wu, Y.; Zhang, A. Perceptual evaluation method of products based on online reviews data driven.
Comput. Integr. Manuf. Syst. 2018, 24, 752–762.
134. Wang, W.M.; Li, Z.; Tian, Z.; Wang, J.; Cheng, M. Extracting and summarizing affective features and responses from online
product descriptions and reviews: A kansei text mining approach. Eng. Appl. Artif. Intell. 2018, 73, 149–162. [CrossRef]
135. Chen, L.; Qi, L.; Wang, F. Comparison of feature-level learning methods for mining online consumer reviews. Expert Syst. Appl.
2012, 39, 9588–9601. [CrossRef]
136. Moraes, R.; Valiati, J.F.; Neto, W.P.G. Document-level sentiment classification: An empirical comparison between svm and ann.
Expert Syst. Appl. 2013, 40, 621–633. [CrossRef]
137. Bordoloi, M.; Biswas, S.K. Sentiment analysis: A survey on design framework, applications and future scopes. Artif. Intell. Rev. 2023, 1–56. [CrossRef]
138. Do, H.H.; Prasad, P.; Maag, A.; Alsadoon, A. Deep learning for aspect-based sentiment analysis: A comparative review. Expert Syst.
Appl. 2019, 118, 272–299. [CrossRef]
139. Liu, Y.; Bi, J.-W.; Fan, Z.-P. Multi-class sentiment classification: The experimental comparisons of feature selection and machine
learning algorithms. Expert Syst. Appl. 2017, 80, 323–339. [CrossRef]
140. Liu, Y.; Bi, J.-W.; Fan, Z.-P. A method for multi-class sentiment classification based on an improved one-vs-one (ovo) strategy and the support vector machine (svm) algorithm. Inf. Sci. 2017, 394, 38–52. [CrossRef]
141. Su, Z.; Xu, Y.; Zhang, D. Chinese comments sentiment classification based on word2vec and svmperf. Expert Syst. Appl. 2015, 42, 1857–1863.
142. Dehdarbehbahani, I.; Shakery, A.; Faili, H. Semi-supervised word polarity identification in resource-lean languages. Neural Netw.
2014, 58, 50–59. [CrossRef]
143. Cho, H.; Kim, S.; Lee, J.; Lee, J.-S. Data-driven integration of multiple sentiment dictionaries for lexicon-based sentiment
classification of product reviews. Knowl.-Based Syst. 2014, 71, 61–71. [CrossRef]
144. Araque, O.; Zhu, G.; Iglesias, C.A. A semantic similarity-based perspective of affect lexicons for sentiment analysis. Knowl.-Based
Syst. 2019, 165, 346–359. [CrossRef]
145. Bravo-Marquez, F.; Mendoza, M.; Poblete, B. Meta-level sentiment models for big social data analysis. Knowl.-Based Syst. 2014,
69, 86–99. [CrossRef]
146. Yadollahi, A.; Shahraki, A.G.; Zaiane, O.R. Current state of text sentiment analysis from opinion to emotion mining. ACM Comput.
Surv. 2017, 50, 1–33. [CrossRef]
147. Dang, Y.; Zhang, Y.; Chen, H. A lexicon-enhanced method for sentiment classification: An experiment on online product reviews.
IEEE Intell. Syst. 2009, 25, 46–53. [CrossRef]
148. Chen, Z.; Ai, S.; Jia, C. Structure-aware deep learning for product image classification. ACM Trans. Multimed. Comput. Commun.
Appl. (TOMM) 2019, 15, 1–20. [CrossRef]
149. Li, Q.; Peng, X.; Cao, L.; Du, W.; Xing, H.; Qiao, Y.; Peng, Q. Product image recognition with guidance learning and noisy
supervision. Comput. Vis. Image Underst. 2020, 196, 102963–102971. [CrossRef]
150. Liu, S.; Feng, J.; Domokos, C.; Xu, H.; Huang, J.; Hu, Z.; Yan, S. Fashion parsing with weak color-category labels. IEEE Trans.
Multimed. 2013, 16, 253–265. [CrossRef]
151. Li, Y.; Dai, Y.; Liu, L.-J.; Tan, H. Advanced designing assistant system for smart design based on product image dataset. In Proceedings of the International Conference on Human-Computer Interaction, Orlando, FL, USA, 26–31 July 2019; Springer: Cham, Switzerland, 2019; pp. 18–33.
152. Dai, Y.; Li, Y.; Liu, L.-J. New product design with automatic scheme generation. Sens. Imaging 2019, 20, 1–16. [CrossRef]
153. Kovacs, B.; O’Donovan, P.; Bala, K.; Hertzmann, A. Context-aware asset search for graphic design. IEEE Trans. Vis. Comput. Graph.
2018, 25, 2419–2429. [CrossRef] [PubMed]
154. Yamaguchi, K.; Kiapour, M.H.; Ortiz, L.E.; Berg, T.L. Retrieving similar styles to parse clothing. IEEE Trans. Pattern Anal.
Mach. Intell. 2014, 37, 1028–1040. [CrossRef] [PubMed]
155. Bell, S.; Bala, K. Learning visual similarity for product design with convolutional neural networks. ACM Trans. Graph. (TOG)
2015, 34, 1–10. [CrossRef]
156. Liu, X.; Zhang, S.; Huang, T.; Tian, Q. E2bows: An end-to-end bag-of-words model via deep convolutional neural network for
image retrieval. Neurocomputing 2020, 395, 188–198. [CrossRef]
157. Rubio, A.; Yu, L.; Simo-Serra, E.; Moreno-Noguer, F. Multi-modal joint embedding for fashion product retrieval. In Proceedings
of the IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 400–404.
158. Tautkute, I.; Trzciński, T.; Skorupa, A.P.; Brocki, Ł.; Marasek, K. Deepstyle: Multimodal search engine for fashion and interior
design. IEEE Access 2019, 7, 84613–84628. [CrossRef]
159. Andreeva, E.; Ignatov, D.I.; Grachev, A.; Savchenko, A.V. Extraction of visual features for recommendation of products via deep
learning. In Proceedings of the International Conference on Analysis of Images, Social Networks and Texts, Moscow, Russia, 5–7
July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 201–210.
160. Wang, X.; Sun, Z.; Zhang, W.; Zhou, Y.; Jiang, Y.-G. Matching user photos to online products with robust deep features. In
Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, New York, NY, USA, 6–9 June 2016; pp. 7–14.
161. Zhan, H.; Shi, B.; Duan, L.-Y.; Kot, A.C. Deepshoe: An improved multi-task view-invariant cnn for street-to-shop shoe retrieval.
Comput. Vis. Image Underst. 2019, 180, 23–33. [CrossRef]
162. Jiang, S.; Wu, Y.; Fu, Y. Deep bidirectional cross-triplet embedding for online clothing shopping. ACM Trans. Multimed. Comput.
Commun. Appl. (TOMM) 2018, 14, 1–22. [CrossRef]
163. Jiang, Y.-G.; Li, M.; Wang, X.; Liu, W.; Hua, X.-S. Deepproduct: Mobile product search with portable deep features. ACM Trans.
Multimed. Comput. Commun. Appl. (TOMM) 2018, 14, 1–18. [CrossRef]
164. Liu, S.; Song, Z.; Liu, G.; Xu, C.; Lu, H.; Yan, S. Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3330–3337.
165. Yu, Q.; Liu, F.; Song, Y.-Z.; Xiang, T.; Hospedales, T.M.; Loy, C.-C. Sketch me that shoe. In Proceedings of the Conference on
Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 799–807.
166. Ullah, F.; Zhang, B.; Khan, R.U.; Ullah, I.; Khan, A.; Qamar, A.M. Visual-based items recommendation using deep neural network.
In Proceedings of the International Conference on Computing, Networks and Internet of Things, Sanya, China, 24–26 April 2020;
pp. 122–126.
167. Liang, X.; Lin, L.; Yang, W.; Luo, P.; Huang, J.; Yan, S. Clothes co-parsing via joint image segmentation and labeling with
application to clothing retrieval. IEEE Trans. Multimed. 2016, 18, 1175–1186. [CrossRef]
168. Chu, W.-T.; Wu, Y.-L. Image style classification based on learnt deep correlation features. IEEE Trans. Multimed. 2018, 20, 2491–2502.
[CrossRef]
169. Hu, Z.; Wen, Y.; Liu, L.; Jiang, J.; Hong, R.; Wang, M.; Yan, S. Visual classification of furniture styles. ACM Trans. Intell. Syst. Technol.
2017, 8, 1–20. [CrossRef]
170. Poursaeed, O.; Matera, T.; Belongie, S. Vision-based real estate price estimation. Mach. Vis. Appl. 2018, 29, 667–676. [CrossRef]
171. Pan, T.-Y.; Dai, Y.-Z.; Hu, M.-C.; Cheng, W.-H. Furniture style compatibility recommendation with cross-class triplet loss. Multimed.
Tools Appl. 2019, 78, 2645–2665. [CrossRef]
172. Shin, Y.-G.; Yeo, Y.-J.; Sagong, M.-C.; Ji, S.-W.; Ko, S.-J. Deep fashion recommendation system with style feature decomposition. In
Proceedings of the International Conference on Consumer Electronics, Las Vegas, NV, USA, 11–13 January 2019; pp. 301–305.
173. Zhan, H.; Shi, B.; Chen, J.; Zheng, Q.; Duan, L.-Y.; Kot, A.C. Fashion recommendation on street images. In Proceedings of the
International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; pp. 280–284.
174. Zhang, H.; Huang, W.; Liu, L.; Chow, T.W. Learning to match clothing from textual feature-based compatible relationships. IEEE
Trans. Ind. Inform. 2019, 16, 6750–6759. [CrossRef]
175. Aggarwal, D.; Valiyev, E.; Sener, F.; Yao, A. Learning style compatibility for furniture. In Proceedings of the German Conference
on Pattern Recognition, Stuttgart, Germany, 9–12 October 2018; Springer: Cham, Switzerland, 2018; pp. 552–566.
176. Polania, L.F.; Flores, M.; Nokleby, M.; Li, Y. Learning furniture compatibility with graph neural networks. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 366–367.
177. Dan, Y.; Zhao, Y.; Li, X.; Li, S.; Hu, M.; Hu, J. Generative adversarial networks (gan) based efficient sampling of chemical
composition space for inverse design of inorganic materials. npj Comput. Mater. 2020, 6, 1–7. [CrossRef]
178. Kang, W.-C.; Fang, C.; Wang, Z.; McAuley, J. Visually-aware fashion recommendation and design with generative image models.
In Proceedings of the IEEE International Conference on Data Mining, New Orleans, LA, USA, 18–21 November 2017; pp. 207–216.
179. Zhang, H.; Sun, Y.; Liu, L.; Xu, X. Cascadegan: A category-supervised cascading generative adversarial network for clothes
translation from the human body to tiled images. Neurocomputing 2020, 382, 148–161. [CrossRef]
180. Radhakrishnan, S.; Bharadwaj, V.; Manjunath, V.; Srinath, R. Creative intelligence–automating car design studio with generative
adversarial networks (gan). In International Cross-Domain Conference for Machine Learning and Knowledge Extraction; Springer:
Berlin/Heidelberg, Germany, 2018; pp. 160–175.
181. Ak, K.E.; Lim, J.H.; Tham, J.Y.; Kassim, A.A. Semantically consistent text to fashion image synthesis with an enhanced attentional
generative adversarial network. Pattern Recognit. Lett. 2020, 135, 22–29. [CrossRef]
182. Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to discover cross-domain relations with generative adversarial networks. In
Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1857–1865.
183. Hsiao, W.-L.; Katsman, I.; Wu, C.-Y.; Parikh, D.; Grauman, K. Fashion++: Minimal edits for outfit improvement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5047–5056.
184. Lang, Y.; He, Y.; Dong, J.; Yang, F.; Xue, H. Design-gan: Cross-category fashion translation driven by landmark attention. In
Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
Barcelona, Spain, 4–8 May 2020; pp. 1968–1972.
185. Liu, J.; Song, X.; Chen, Z.; Ma, J. Mgcm: Multi-modal generative compatibility modeling for clothing matching. Neurocomputing
2020, 414, 215–224. [CrossRef]
186. Lu, Q.; Tao, Q.; Zhao, Y. Sketch simplification using generative adversarial networks. Acta Autom. Sin. 2018, 44, 75–89.
187. Chai, C.; Liao, J.; Zou, N.; Sun, L. A one-to-many conditional generative adversarial network framework for multiple image-to-
image translations. Multimed. Tools Appl. 2018, 77, 22339–22366. [CrossRef]
188. Lee, Y.; Cho, S. Design of semantic-based colorization of graphical user interface through conditional generative adversarial nets.
Int. J. Hum. Comput. Interact. 2020, 36, 699–708. [CrossRef]
189. Liu, Y.; Qin, Z.; Wan, T.; Luo, Z. Auto-painter: Cartoon image generation from sketch by using conditional wasserstein generative
adversarial networks. Neurocomputing 2018, 311, 78–87. [CrossRef]
190. Liu, B.; Gan, J.; Wen, B.; LiuFu, Y.; Gao, W. An automatic coloring method for ethnic costume sketches based on generative
adversarial networks. Appl. Soft Comput. 2021, 98, 106786–106797. [CrossRef]
191. Chen, Y.; Xia, S.; Zhao, J.; Zhou, Y.; Niu, Q.; Yao, R.; Zhu, D. Appearance and shape based image synthesis by conditional
variational generative adversarial network. Knowl.-Based Syst. 2020, 193, 105450–105477. [CrossRef]
192. Jetchev, N.; Bergmann, U. The conditional analogy gan: Swapping fashion articles on people images. In Proceedings of the IEEE
International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2287–2292.
193. Quan, H. Product Design Based on Big Data. Ph.D. Thesis, Guizhou University, Guiyang, China, 2019.
194. Liu, L.; Zhang, H.; Ji, Y.; Wu, Q.J. Toward ai fashion design: An attribute-gan model for clothing match. Neurocomputing 2019, 341, 156–167. [CrossRef]
195. Rahbar, M.; Mahdavinejad, M.; Bemanian, M.; Davaie Markazi, A.H.; Hovestadt, L. Generating synthetic space allocation
probability layouts based on trained conditional-gans. Appl. Artif. Intell. 2019, 33, 689–705. [CrossRef]
196. Wang, X.; Gupta, A. Generative image modeling using style and structure adversarial networks. In Proceedings of the European
Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016;
pp. 318–335.
197. Cheng, Q.; Gu, X. Cross-modal feature alignment based hybrid attentional generative adversarial networks for text-to-image
synthesis. Digit. Signal Process. 2020, 107, 102866–102884. [CrossRef]
198. Reed, S.; Akata, Z.; Yan, X.; Logeswaran, L.; Schiele, B.; Lee, H. Generative adversarial text to image synthesis. In Proceedings of
the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 1060–1069.
199. Tan, H.; Liu, X.; Liu, M.; Yin, B.; Li, X. Kt-gan: Knowledge-transfer generative adversarial network for text-to-image synthesis.
IEEE Trans. Image Process. 2020, 30, 1275–1290. [CrossRef]
200. Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; He, X. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1316–1324.
201. Poole, B.; Jain, A.; Barron, J.T.; Mildenhall, B. Dreamfusion: Text-to-3d using 2d diffusion. arXiv 2022, arXiv:2209.14988.
202. Jun, H.; Nichol, A. Shap-e: Generating conditional 3d implicit functions. arXiv 2023, arXiv:2305.02463.
203. Wang, Z.; Lu, C.; Wang, Y.; Bao, F.; Li, C.; Su, H.; Zhu, J. Prolificdreamer: High-fidelity and diverse text-to-3d generation with
variational score distillation. arXiv 2023, arXiv:2305.16213.
204. Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Let there be color! joint end-to-end learning of global and local image priors for automatic
image colorization with simultaneous classification. ACM Trans. Graph. 2016, 35, 1–11. [CrossRef]
205. Lei, Y.; Du, W.; Hu, Q. Face sketch-to-photo transformation with multi-scale self-attention gan. Neurocomputing 2020, 396, 13–23.
[CrossRef]
206. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
207. Zhu, J.-Y.; Krähenbühl, P.; Shechtman, E.; Efros, A.A. Generative visual manipulation on the natural image manifold. In
Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer:
Cham, Switzerland, 2016; pp. 597–613.
208. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
209. Pan, X.; Tewari, A.; Leimkühler, T.; Liu, L.; Meka, A.; Theobalt, C. Drag your gan: Interactive point-based manipulation on the
generative image manifold. arXiv 2023, arXiv:2305.10973.
210. Gatys, L.A.; Ecker, A.S.; Bethge, M. A neural algorithm of artistic style. arXiv 2015, arXiv:1508.06576.
211. Huang, X.; Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE
International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
212. Quan, H.; Li, S.; Hu, J. Product innovation design based on deep learning and kansei engineering. Appl. Sci. 2018, 8, 2397–2415.
[CrossRef]
213. Wu, Q.; Zhu, B.; Yong, B.; Wei, Y.; Jiang, X.; Zhou, R.; Zhou, Q. Clothgan: Generation of fashionable Dunhuang clothes using
generative adversarial networks. Connect. Sci. 2021, 33, 341–358. [CrossRef]
214. Sohn, K.; Sung, C.E.; Koo, G.; Kwon, O. Artificial intelligence in the fashion industry: Consumer responses to generative
adversarial network (GAN) technology. Int. J. Retail. Distrib. Manag. 2020, 49, 1–20. [CrossRef]
215. Sun, Y.; Chen, J.; Liu, Q.; Liu, G. Learning image compressed sensing with sub-pixel convolutional generative adversarial network.
Pattern Recognit. 2020, 98, 107051. [CrossRef]
216. Wang, C.; Chen, Z.; Shang, K.; Wu, H. Label-removed generative adversarial networks incorporating with k-means. Neurocomputing
2019, 361, 126–136. [CrossRef]
217. Faezi, M.H.; Bijani, S.; Dolati, A. Degan: Decentralized generative adversarial networks. Neurocomputing 2021, 419, 335–343.
[CrossRef]
218. Sun, G.; Ding, S.; Sun, T.; Zhang, C. Sa-capsgan: Using capsule networks with embedded self-attention for generative adversarial
network. Neurocomputing 2021, 423, 399–406. [CrossRef]
219. Yao, S.; Wang, Y.; Niu, B. An efficient cascaded filtering retrieval method for big audio data. IEEE Trans. Multimed. 2015,
17, 1450–1459. [CrossRef]
220. Yang, Z.; Xu, M.; Liu, Z.; Qin, D.; Yao, X. Study of audio frequency big data processing architecture and key technology.
Telecommun. Sci. 2013, 29, 1–5.
221. Lee, C.-H.; Wang, Y.-H.; Trappey, A.J. Ontology-based reasoning for the intelligent handling of customer complaints. Comput. Ind.
Eng. 2015, 84, 144–155. [CrossRef]
222. Yang, Y.; Xu, D.-L.; Yang, J.-B.; Chen, Y.-W. An evidential reasoning-based decision support system for handling customer
complaints in mobile telecommunications. Knowl.-Based Syst. 2018, 162, 202–210. [CrossRef]
223. Bingol, M.C.; Aydogmus, O. Performing predefined tasks using the human–robot interaction on speech recognition for
an industrial robot. Eng. Appl. Artif. Intell. 2020, 95, 103903–103917. [CrossRef]
224. Tanaka, T.; Masumura, R.; Oba, T. Neural candidate-aware language models for speech recognition. Comput. Speech Lang. 2021,
66, 101157–101170. [CrossRef]
225. Dokuz, Y.; Tufekci, Z. Mini-batch sample selection strategies for deep learning based speech recognition. Appl. Acoust. 2021, 171,
107573–107583. [CrossRef]
226. Mulimani, M.; Koolagudi, S.G. Extraction of mapreduce-based features from spectrograms for audio-based surveillance.
Digit. Signal Process. 2019, 87, 1–9. [CrossRef]
227. El Ayadi, M.; Kamel, M.S.; Karray, F. Survey on speech emotion recognition: Features, classification schemes, and databases.
Pattern Recognit. 2011, 44, 572–587. [CrossRef]
228. Fayek, H.M.; Lech, M.; Cavedon, L. Evaluating deep learning architectures for speech emotion recognition. Neural Netw. 2017,
92, 60–68. [CrossRef] [PubMed]
229. Li, D.; Zhou, Y.; Wang, Z.; Gao, D. Exploiting the potentialities of features for speech emotion recognition. Inf. Sci. 2021,
548, 328–343. [CrossRef]
230. Badshah, A.M.; Ahmad, J.; Rahim, N.; Baik, S.W. Speech emotion recognition from spectrograms with deep convolutional
neural network. In Proceedings of the International Conference on Platform Technology and Service, Busan, Republic of Korea,
13–15 February 2017.
231. Nagamachi, M.; Lokman, A.M. Kansei Innovation: Practical Design Applications for Product and Service Development; CRC Press:
Boca Raton, FL, USA, 2015.
232. Hossain, M.S.; Muhammad, G. Emotion recognition using deep learning approach from audio–visual emotional big data.
Inf. Fusion 2019, 49, 69–78. [CrossRef]
233. Jiang, Y.; Li, W.; Hossain, M.S.; Chen, M.; Alelaiwi, A.; Al-Hammadi, M. A snapshot research and implementation of multimodal
information fusion for data-driven emotion recognition. Inf. Fusion 2020, 53, 209–221. [CrossRef]
234. Zhang, J.; Yin, Z.; Chen, P.; Nichele, S. Emotion recognition using multi-modal data and machine learning techniques: A tutorial
and review. Inf. Fusion 2020, 59, 103–126. [CrossRef]
235. Li, G.; Wang, M.; Lu, Z.; Hong, R.; Chua, T.-S. In-video product annotation with web information mining. ACM Trans. Multimed.
Comput. Commun. Appl. 2012, 8, 1–19. [CrossRef]
236. Zhang, H.; Guo, H.; Wang, X.; Ji, Y.; Wu, Q.J. Clothescounter: A framework for star-oriented clothes mining from videos.
Neurocomputing 2020, 377, 38–48. [CrossRef]
237. Zhang, H.; Ji, Y.; Huang, W.; Liu, L. Sitcom-star-based clothing retrieval for video advertising: A deep learning framework. Neural
Comput. Appl. 2019, 31, 7361–7380. [CrossRef]
238. Cheng, Z.-Q.; Wu, X.; Liu, Y.; Hua, X.-S. Video2shop: Exact matching clothes in videos to online shopping images. In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4048–4056.
239. Wu, C.; Tan, Z.; Wang, Z.; Yang, S. A dataset for exploring user behaviors in vr spherical video streaming. In Proceedings of the
ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 193–198.
240. Taati, B.; Snoek, J.; Mihailidis, A. Video analysis for identifying human operation difficulties and faucet usability assessment.
Neurocomputing 2013, 100, 163–169. [CrossRef]
241. Chen, B.-H.; Huang, S.-C.; Yen, J.-Y. Counter-propagation artificial neural network-based motion detection algorithm for
static-camera surveillance scenarios. Neurocomputing 2018, 273, 481–493. [CrossRef]
242. Liu, S.; Liang, X.; Liu, L.; Lu, K.; Lin, L.; Cao, X.; Yan, S. Fashion parsing with video context. IEEE Trans. Multimed. 2015,
17, 1347–1358. [CrossRef]
243. Dong, H.; Liang, X.; Shen, X.; Wu, B.; Chen, B.-C.; Yin, J. Fw-gan: Flow-navigated warping gan for video virtual try-on.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019;
pp. 1161–1170.
244. Yi, C.; Jiang, Z.; Benbasat, I. Enticing and engaging consumers via online product presentations: The effects of restricted interaction
design. J. Manag. Inf. Syst. 2015, 31, 213–242. [CrossRef]
245. An, S.; Liu, S.; Huang, Z.; Che, G.; Bao, Q.; Zhu, Z.; Chen, Y.; Weng, D.Z. Rotateview: A video composition system for interactive
product display. IEEE Trans. Multimed. 2019, 21, 3095–3105. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.