Article
Biometric User Identification Based on Human Activity
Recognition Using Wearable Sensors: An Experiment Using
Deep Learning Models
Sakorn Mekruksavanich 1 and Anuchit Jitpattanakul 2, *
Abstract: Currently, a significant amount of interest is focused on research in the field of Human Activity Recognition (HAR) as a result of the wide variety of its practical uses in real-world applications, such as biometric user identification, health monitoring of the elderly, and surveillance by authorities.
The widespread use of wearable sensor devices and the Internet of Things (IoT) has led the topic of
HAR to become a significant subject in areas of mobile and ubiquitous computing. In recent years,
the most widely-used inference and problem-solving approach in the HAR system has been deep
learning. Nevertheless, major challenges exist with regard to the application of HAR for problems in
biometric user identification in which various human behaviors can be regarded as types of biometric
qualities and used for identifying people. In this research study, a novel framework for multi-class wearable user identification, based on the recognition of human behavior through the use of deep learning models, is presented. In order to obtain advanced information regarding users during the performance of various activities, sensory data from the tri-axial gyroscopes and tri-axial
accelerometers of the wearable devices are applied. Additionally, a set of experiments was conducted to validate this work, and the proposed framework's effectiveness was demonstrated. The results for the two basic models, namely, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) deep learning models, showed that the highest accuracy for all users was 91.77% and 92.43%, respectively. With regard to biometric user identification, these are both acceptable levels.

Keywords: human activity recognition (HAR); biometric user identification; wearable sensor devices; mobile and ubiquitous computing; deep learning; human behaviors; convolutional neural network (CNN); long short-term memory (LSTM)
Received: 28 December 2020; Accepted: 25 January 2021; Published: 27 January 2021
1. Introduction
Among researchers in both academia and industry whose goal is the advancement of ubiquitous computing and human-computer interaction, one of the most widely discussed research topics has become Human Activity Recognition (HAR) [1]. Presently, the number of research studies conducted on HAR is rapidly increasing because sensors are more widely available, costs and power consumption have decreased, and because of advances in machine learning algorithms, Artificial Intelligence (AI), and the Internet of Things (IoT), through which data can now be live-streamed [2,3]. The progress in HAR has facilitated practical applications in various real-world fields, including the healthcare industry, the detection of crime and violence, sports science, and tactical military applications. It is clear that the wide range of situations to which HAR is applicable provides proof that the field has strong potential to improve our quality of life [4]. Mathematical models, based on human activity data, allow the recognition of a variety of human activities, for example, running, sitting, sleeping, standing, and walking. HAR systems can be classified into two main categories.
2. Related Research
This study is primarily concerned with HAR and deep learning. Therefore, recent advances in these two areas are briefly reviewed first.
2.1. Human Activity Recognition via Machine Learning and Deep Learning
Time series classification is the main challenge in HAR, in which a person's movements are predicted from sensory data. This normally involves carefully engineering features from the raw data by employing deep domain expertise and signal processing methods, with the aim of fitting one of the models of machine learning. Recent studies have shown the capacity of deep learning models, including CNN and LSTM neural networks, to automatically extract meaningful attributes from raw sensor data and achieve state-of-the-art results.
Research on activity recognition in the field of HAR has been conducted since the 1990s [10,11]. The focus of HAR is the collection and detection of real-life activities, performed by a group or a single person, in order to understand the environmental context surrounding humans. Nowadays, due to its potential to revolutionize the ways that people interact with computers, HAR is regarded as a promising area in the field of human-computer interaction [12,13].
There are five main tasks performed by HAR, as shown in Figure 1a, namely recognition of basic activities [14], recognition of daily activities [15], recognition of unusual events [16], identification of biometric subjects [17], and prediction of energy expenditures [18]. As illustrated in Figure 1b, various sensors are employed for the performance of these tasks, such as video cameras; circumstantial (ambient) sensors that measure temperature, relative humidity, light, and pressure; and wearable sensors. In general, built-in smartphone sensors, or sensors embedded in wearable devices, are the main types of wearable sensors.
Figure 1. Human Activity Recognition (HAR): (a) Tasks of HAR and (b) classification of HAR.
Rich and unique sets of information, unable to be obtained through the use of other
kinds of sensors, can be provided by cameras. However, continuous monitoring of a
subject’s activities is required by camera-based methods, which means that huge amounts
of computational resources and storage space are needed. Moreover, being continuously
observed by cameras may make some people feel uncomfortable [19]. One example
of this type of indoor camera-based system for monitoring human activity is described
in [20], which allows for continuous monitoring and intelligent processing of the video.
An additional use of camera sensors is to provide human activity recognition systems with the "ground truth", i.e., checking the results of machine learning for accuracy against the real world.
It is possible to track and record the interaction of a user with the environment by
using environmental sensors. One example of this is in the experimental context of [21],
in which the objects employed in the test environment were fitted with wireless Bluetooth
acceleration and gyroscope sensors that record the use of these objects. In addition, arrays
of wired microphones were placed within the room for the recording of ambient sound.
Moreover, reed switches were installed on drawers, doors, and shelves to detect usage
and provide ground truth. In contrast, the disadvantage of circumstantial sensors is their limited applicability to specific situations and building designs, which prevents the HAR system from generalizing. Thus, even a well-designed and well-built HAR system may not be easily implemented in a different ambient environment. Finally, the cost of deploying these sensors is relatively high.
The sensors that can be worn on a user's body can identify the physical states and characteristics of that person's activities; they include Inertial Measurement Unit (IMU) sensors (accelerometers and gyroscopes), GPS, and magnetic field sensors, all of which are commonly employed in applications for activity recognition. In some previous
research, one or more accelerometers were attached to various positions of the subjects’
bodies for recognition of human activity. A wearable sensor network designed for detection
of human activity was presented by Dong and Biswas [22]. In a similar study, wearable tri-
axial accelerometers were used for activity detection by Curone et al. [23].
Since recent breakthroughs in deep learning have been achieved in numerous areas of machine learning applications, and due to the inherently multi-class nature of deep learning models, the review of the literature begins by briefly summarizing deep learning for human activity recognition. Fifty-six papers using deep learning models were surveyed by Wang et al. [24].
These papers included deep neural networks, convolutional and recurrent neural
networks, autoencoders, and restricted Boltzmann machines, and were used for conducting
sensor-based HAR. The results indicated that no single model surpasses all others in all situations, and model selection based on the application scheme was recommended. Four papers [25–28] were identified as representing modern deep learning for HAR, based on comparisons across three HAR benchmark datasets, namely Opportunity [29], Skoda [30], and the UCI (University of California, Irvine) HAR smartphone dataset [31], all of which comprise data obtained from participants wearing several IMUs.
A user identity verification method based on the probability density function (PDF) of the derived attributes was proposed by Damaševičius et al. [17], and offline data from the USC HAR dataset were used for testing. The grand mean accuracy was 72.02%; however, if only walking-related activities, such as forward, right, and left walking, were considered, the mean accuracy was 94.44%.
3. Proposed Methodology
3.1. Proposed Framework
The proposed framework for the biometric user identification with activity data,
extracted from wearable sensors, is discussed in this section. Figure 2 presents the overall identification process. The framework has four main stages, namely data collection, data pre-processing, model training by deep learning, and user identification, which together form the classification system. The following are the details of each stage of the biometric user identification process proposed in the present study.
Figure 2. The proposed framework of biometric user identification using activity data.
3.1.1. Datasets
The data collection utilized two open human activity datasets commonly used in human activity research, as follows:
• The UCI Human Activity Recognition Dataset (UCI HAR) [31] is the first dataset, recorded using the embedded tri-axial accelerometer and gyroscope of a smartphone (Samsung Galaxy S II) worn on the waist by 30 subjects performing six daily activities;
• The USC Human Activity Dataset (USC HAD) [36] is the second dataset, recorded using a MotionNode device with embedded tri-axial magnetometer, accelerometer, and gyroscope sensors at a sampling rate of 100 Hz. The dataset comprises activity data recorded from 14 subjects (seven male and seven female, aged between 21 and 49) performing 12 activities.
Samples of the activity data from the UCI HAR and USC HAD datasets are presented in Figure 3.
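The pre-processing stage segments the continuous tri-axial signals into fixed-length windows before they are fed to the deep learning models. The sketch below is a minimal illustration of this step, assuming 128-sample windows with 50% overlap (the convention used by the UCI HAR dataset); the function name and the majority-vote labeling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sliding_windows(signal, labels, window_size=128, overlap=0.5):
    """Segment a multi-channel signal into fixed-length overlapping windows.

    signal: array of shape (n_samples, n_channels), e.g. six channels for
            tri-axial accelerometer plus tri-axial gyroscope readings.
    labels: per-sample labels of shape (n_samples,); each window receives
            the majority label of the samples it covers.
    """
    step = int(window_size * (1.0 - overlap))
    windows, window_labels = [], []
    for start in range(0, len(signal) - window_size + 1, step):
        window = signal[start:start + window_size]
        values, counts = np.unique(labels[start:start + window_size],
                                   return_counts=True)
        windows.append(window)
        window_labels.append(values[np.argmax(counts)])
    return np.stack(windows), np.array(window_labels)

# Example: 10 s of synthetic 6-channel data at 100 Hz (the USC HAD rate).
raw = np.random.randn(1000, 6)
lab = np.zeros(1000, dtype=int)
X, y = sliding_windows(raw, lab)
print(X.shape, y.shape)  # (14, 128, 6) (14,)
```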
Figure 3. Samples of activity data from UCI HAR and USC HAD: (a) UCI HAR dataset, (b) USC
HAD (1), and (c) USC HAD (2).
The convolutional layers extract features over the course of the time dimension for each sensor channel. This process can be observed in Figure 4, and the summary of hyperparameters for the CNN networks is shown in Table 1.
Table 1. The summary of hyperparameters for CNN networks proposed in this work.
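Since the values in Table 1 are not reproduced here, the following PyTorch sketch only illustrates the general shape of such a 1D CNN; the filter counts, kernel sizes, and layer widths are assumptions for illustration, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class CNN1D(nn.Module):
    """Minimal 1D CNN for windowed sensor data.

    Input shape: (batch, channels, time), e.g. (N, 6, 128) for six
    accelerometer/gyroscope channels over a 128-sample window.
    """
    def __init__(self, n_channels=6, n_classes=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3), nn.ReLU(),
            nn.Dropout(0.5),
            nn.MaxPool1d(2),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(64 * 62, 100), nn.ReLU(),   # 128 -> 126 -> 124 -> 62
            nn.Linear(100, n_classes),            # softmax applied in the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CNN1D()
out = model(torch.randn(8, 6, 128))
print(out.shape)  # torch.Size([8, 30])
```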
The gates are weighted functions which control the flow of information inside the cells. Three types of gates exist, defined as follows:

f_t = σ(U_f x_t + W_f h_{t−1} + b_f) (1)
i_t = σ(U_i x_t + W_i h_{t−1} + b_i) (2)
g_t = tanh(U_g x_t + W_g h_{t−1} + b_g) (3)
o_t = σ(U_o x_t + W_o h_{t−1} + b_o) (4)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t (5)
h_t = o_t ⊙ tanh(c_t) (6)
where:
• Forget gate (f_t): Selects the information to be eliminated from the cell;
• Input gate (i_t): Selects the input values to be used in updating the memory state;
• Input modulation gate (g_t): Modulates the candidate input to the memory cell;
• Output gate (o_t): Selects the output on the basis of the input and the cell memory;
• Internal state (c_t): Maintains the internal recurrence of the cell;
• Hidden state (h_t): Carries information from the preceding time step within the context window.
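As a minimal numerical sketch of Equations (1)–(6), the step function below implements one LSTM cell update in NumPy; the parameter layout and the random initialization are illustrative assumptions, not trained weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, params):
    """One LSTM step implementing Equations (1)-(6).

    params holds input weights U_*, recurrent weights W_*, and biases b_*
    for the forget (f), input (i), modulation (g), and output (o) gates.
    '⊙' in the text denotes element-wise multiplication.
    """
    U, W, b = params["U"], params["W"], params["b"]
    f_t = sigmoid(U["f"] @ x_t + W["f"] @ h_prev + b["f"])    # Eq. (1)
    i_t = sigmoid(U["i"] @ x_t + W["i"] @ h_prev + b["i"])    # Eq. (2)
    g_t = np.tanh(U["g"] @ x_t + W["g"] @ h_prev + b["g"])    # Eq. (3)
    o_t = sigmoid(U["o"] @ x_t + W["o"] @ h_prev + b["o"])    # Eq. (4)
    c_t = f_t * c_prev + i_t * g_t                            # Eq. (5)
    h_t = o_t * np.tanh(c_t)                                  # Eq. (6)
    return h_t, c_t

# Tiny demo: 6 input features, hidden size 4, random weights.
rng = np.random.default_rng(0)
n_in, n_hid = 6, 4
params = {
    "U": {k: rng.standard_normal((n_hid, n_in)) for k in "figo"},
    "W": {k: rng.standard_normal((n_hid, n_hid)) for k in "figo"},
    "b": {k: np.zeros(n_hid) for k in "figo"},
}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_cell_step(rng.standard_normal(n_in), h, c, params)
print(h.shape, c.shape)  # (4,) (4,)
```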
LSTM cells are similar to neurons in that they are arranged in layers, as can be observed in Figure 5, whereby the output from each cell is passed on to the next cell within the layer, and then onwards to the next network layer. When the final layer is reached, the output is passed to the dense and softmax layers in order to address the classification problem. Hyperparameters for the LSTM networks are detailed in Table 2.
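A compact PyTorch sketch of this layered arrangement is shown below; the hidden size, layer count, and class count are illustrative assumptions rather than the Table 2 values.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Stacked LSTM whose final hidden state feeds a dense + softmax head.

    Input shape: (batch, time, channels), e.g. (N, 128, 6).
    """
    def __init__(self, n_channels=6, hidden=100, n_layers=2, n_classes=30):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=n_layers,
                            batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, n_classes),
            nn.Softmax(dim=-1),  # for training, drop this layer and apply
        )                        # CrossEntropyLoss to the raw logits

    def forward(self, x):
        out, _ = self.lstm(x)         # out: (batch, time, hidden)
        return self.head(out[:, -1])  # use the last time step

model = LSTMClassifier()
probs = model(torch.randn(8, 128, 6))
print(probs.shape, float(probs[0].sum()))  # torch.Size([8, 30]) ~1.0
```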
Figure 5. The LSTM architecture: (a) The overall architecture of LSTM and (b) LSTM Unit.
Table 2. The summary of hyperparameters for LSTM networks proposed in this work.
Table 3. The summary of hyperparameters for CNN-LSTM networks proposed in this work.
Table 4. The summary of hyperparameters for ConvLSTM networks proposed in this work.
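As with the earlier models, the Table 3 and Table 4 values are not reproduced here, so the sketch below only illustrates the general CNN-LSTM pattern: convolutional layers extract short-term features, and an LSTM then models their temporal order. A ConvLSTM differs in that the convolution is embedded inside the recurrent cell itself. All sizes here are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """CNN-LSTM hybrid: a 1D convolution extracts local features from the
    raw window, then an LSTM models their temporal order.

    Input shape: (batch, channels, time), e.g. (N, 6, 128).
    """
    def __init__(self, n_channels=6, n_classes=30):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, 100, batch_first=True)
        self.fc = nn.Linear(100, n_classes)

    def forward(self, x):
        z = self.conv(x)        # (batch, 64, time/2)
        z = z.transpose(1, 2)   # LSTM expects (batch, time, features)
        out, _ = self.lstm(z)
        return self.fc(out[:, -1])

model = CNNLSTM()
print(model(torch.randn(8, 6, 128)).shape)  # torch.Size([8, 30])
```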
Table 5 shows a summary of these two datasets. The activities of both datasets and their descriptions are shown in Table 6. An abbreviation for each activity is defined in Table 7, which also presents the proportion of activity samples in each dataset.
Table 5. Activities and their summary.
In Table 8, these activities are divided into dynamic and static activities.
Table 8. Two categories of activities from the UCI HAR and USC HAD dataset.
To achieve our research goal, two different classifiers (AC and UC) are first trained as a pre-processing step for the hierarchical ensemble classifier proposed as the main architecture and illustrated in Figure 8. The ensemble classifier (EC) consists of two layers: in the first layer, activity identification is performed by the AC classifier; the UC classifier is then used to perform biometric user identification.
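A minimal sketch of this two-layer inference path is given below; the objects activity_clf and user_clfs are hypothetical stand-ins for trained models (e.g., the networks sketched above), not the authors' implementation.

```python
import torch

def ensemble_identify(window, activity_clf, user_clfs):
    """Two-layer ensemble classifier (EC).

    Layer 1: the Activity Classifier (AC) predicts the activity of the
    window. Layer 2: the User Classifier (UC) trained for that activity
    predicts the user identity. `user_clfs` is assumed to be a dict
    mapping activity id -> per-activity user model.
    """
    with torch.no_grad():
        activity = int(activity_clf(window).argmax(dim=-1))      # AC: layer 1
        user = int(user_clfs[activity](window).argmax(dim=-1))   # UC: layer 2
    return activity, user

# Usage: window is a single pre-processed segment, shape (1, channels, time).
```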
In the pre-processing step of training the AC classifier, the accuracy is about 91.235% with ConvLSTM on the UCI HAR dataset and about 87.773% with CNN-LSTM on the USC HAD dataset. The related metrics are presented in Table 9.
Table 9. Experimental results of activity identification by the Activity Classifier (AC).
From Table 9, the activity identification results show good classification performance with high average accuracy. In particular, it is interesting to observe that both activity datasets (UCI HAR and USC HAD) are well classified by the CNN-LSTM deep learning model. However, these experimental results are not state-of-the-art [38,39].
Once the activity of a user has been determined, a separate user classifier for each activity can be applied for user identification with both datasets. After that, the data from a single activity are used for training and testing only. In this case, the data are split into 70% and 30% for training and testing purposes, respectively.
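For illustration only, such a per-activity 70%/30% split can be written with scikit-learn; the arrays below are hypothetical stand-ins for the windowed data and labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical windowed data: X holds segments of shape (128, 6); y_act
# holds activity ids and y_user the user identities for each segment.
X = np.random.randn(500, 128, 6)
y_act = np.random.randint(0, 6, size=500)
y_user = np.random.randint(0, 30, size=500)

# Keep the segments of a single activity, then split them 70%/30%
# for training and testing the per-activity User Classifier (UC).
mask = y_act == 0  # e.g. the id assumed for "walking forward"
X_train, X_test, y_train, y_test = train_test_split(
    X[mask], y_user[mask], test_size=0.3, random_state=42)
print(len(X_train), len(X_test))  # roughly 70% / 30% of the segments
```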
The mean accuracy for dynamic activities is 92.444% with the CNN-LSTM deep learning model, using the UCI HAR dataset. The static activities (sitting, standing, and sleeping) provided the worst results; the highest mean accuracy among them was only 62.785%, obtained by the LSTM model. Nonetheless, the highest mean accuracy of 92.444% is acceptable if only the top three walking-related activities (walking forward, walking upstairs, and walking downstairs) are considered. The related results are shown in Table 10 and Figure 9.
Figure 9. Percentages of testing accuracy values of the UCI HAR dataset by each deep learning model: (a) CNN, (b) LSTM, (c) CNN-LSTM, and (d) ConvLSTM.
Table 10. Experimental results of user identification by the User Classifier (UC) using the UCI HAR dataset.
For the USC HAD dataset, the ConvLSTM model provides the highest average accuracy, 87.178%, for dynamic activities, and the CNN-LSTM model provides the highest average accuracy, 78.698%, for static activities. However, if one considers only the top three walking activities (walking forward, walking left, and walking right), the highest mean accuracy is 95.858%, by CNN-LSTM. Other evaluated metrics of the UC are presented in Table 11 and Figure 10.
Figure 10. Percentages of testing accuracy values of the USC HAD dataset by each deep learning model: (a) CNN, (b) LSTM, (c) CNN-LSTM, and (d) ConvLSTM.
Table 11. Experimental results of user identification by the UC using the USC HAD dataset.
Figure 11. The architecture of the ensemble classifier EC for (a) UCI HAR and (b) USC HAD.
The results show that the proposed ensemble method provides high accuracy values, as shown in Table 12. Using walking-related activity data (walking forward, walking upstairs, and walking downstairs) from the UCI HAR dataset, the proposed ensemble method gives an accuracy of 91.776%. Similarly, the proposed ensemble method gives an accuracy of 92.432% using the USC HAD dataset, selecting only walking-related activity data (walking forward, walking left, walking right, walking upstairs, and walking downstairs).
Table 12. Experimental results of user identification using walking-related activity data.
5. Conclusions
Biometric technology provides advanced security techniques, which are highly difficult to duplicate, by which a person's individual identity can be confirmed. In this article, an ensemble method for biometric user identification, based on the recognition of human activity using wearable sensors, was presented. As a result of the continuous utilization of accelerometers and gyroscopes by users of wearable devices, there is remarkable potential for improving user identification through the analysis of human activity.
The proposed ensemble method was developed through experiments involving four specific deep learning models, selected to enhance user identification efficiency. From the four deep learning models (CNN, LSTM, CNN-LSTM, and ConvLSTM), two basic models, namely, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) neural network, were adopted. Offline data from the UCI HAR and USC HAD datasets were used in testing the proposed method. With regard to the results concerning user identification, the findings for the two models indicated high accuracy levels for all users, at 91.78% and 92.43%, respectively. Moreover, the model for the USC HAD dataset demonstrated acceptable levels, with the highest accuracy for walking-related activities across all users at 95.86%, when compared with previous research work.
Implementing biometric user identification on mobile platforms and conducting real-time experiments with subjects will be included in future research work.
Author Contributions: Conceptualization and model analysis, S.M.; resource and data curation, A.J.;
methodology and validation, S.M.; data visualization and graphic improvement, A.J.; discussion and
final editing, S.M.; writing-review and editing, S.M.; funding acquisition, S.M. and A.J. All authors
have read and agreed to the published version of the manuscript.
Funding: This research was funded by the University of Phayao (grant number: FF64-UoE008) and King Mongkut's University of Technology North Bangkok (grant number: KMUTNB-BasicR-64-33-2).
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Slim, S.O.; Atia, A.; Elfattah, M.M.; Mostafa, M.S.M. Survey on Human Activity Recognition based on Acceleration Data. Int. J.
Adv. Comput. Sci. Appl. 2019, 10, 84–98. [CrossRef]
2. Issarny, V.; Sacchetti, D.; Tartanoglu, F.; Sailhan, F.; Chibout, R.; Levy, N.; Talamona, A. Developing Ambient Intelligence Systems:
A Solution based on Web Services. Autom. Softw. Eng. 2005, 12, 101–137. [CrossRef]
3. Mekruksavanich, S.; Jitpattanakul, A.; Youplao, P.; Yupapin, P. Enhanced Hand-Oriented Activity Recognition Based on
Smartwatch Sensor Data Using LSTMs. Symmetry 2020, 12, 1570. [CrossRef]
4. Osmani, V.; Balasubramaniam, S.; Botvich, D. Human Activity Recognition in Pervasive Health-Care: Supporting Efficient
Remote Collaboration. J. Netw. Comput. Appl. 2008, 31, 628–655. [CrossRef]
5. Ehatisham-ul Haq, M.; Azam, M.A.; Loo, J.; Shuang, K.; Islam, S.; Naeem, U.; Amin, Y. Authentication of Smartphone Users
Based on Activity Recognition and Mobile Sensing. Sensors 2017, 17, 2043. [CrossRef] [PubMed]
6. Mekruksavanich, S.; Jitpattanakul, A. Smartwatch-based Human Activity Recognition Using Hybrid LSTM Network. In Proceed-
ings of the 2020 IEEE Sensors, Rotterdam, The Netherlands, 25–28 October 2020; pp. 1–4. [CrossRef]
7. Drosou, A.; Ioannidis, D.; Moustakas, K.; Tzovaras, D. Spatiotemporal analysis of human activities for biometric authentication.
Comput. Vis. Image Underst. 2012, 116, 411–421. [CrossRef]
8. Mahfouz, A.; Mahmoud, T.M.; Eldin, A.S. A Survey on Behavioral Biometric Authentication on Smartphones. arXiv 2018,
arXiv:1801.09308.
9. Mekruksavanich, S.; Jitpattanakul, A. Convolutional Neural Network and Data Augmentation for Behavioral-Based Biometric
User Identification. In ICT Systems and Sustainability; Tuba, M., Akashe, S., Joshi, A., Eds.; Springer: Singapore, 2021; pp. 753–761.
10. Lara, O.; Labrador, M. A Survey on Human Activity Recognition Using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013,
15, 1192–1209. [CrossRef]
11. Hnoohom, N.; Mekruksavanich, S.; Jitpattanakul, A. Human Activity Recognition Using Triaxial Acceleration Data from
Smartphone and Ensemble Learning. In Proceedings of the 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Jaipur, India, 4–7 December 2017; pp. 408–412. [CrossRef]
12. Chrungoo, A.; Manimaran, S.S.; Ravindran, B. Activity Recognition for Natural Human Robot Interaction. In Social Robotics;
Beetz, M., Johnston, B., Williams, M.A., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 84–94.
13. Gehrig, D.; Krauthausen, P.; Rybok, L.; Kuehne, H.; Hanebeck, U.D.; Schultz, T.; Stiefelhagen, R. Combined Intention, Activity,
and Motion Recognition for a Humanoid Household Robot. In Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2011), San Francisco, CA, USA, 25–30 September 2011.
14. Yousefi, B.; Loo, C.K. Biologically-Inspired Computational Neural Mechanism for Human Action/activity Recognition: A Review.
Electronics 2019, 8, 1169. [CrossRef]
15. Mekruksavanich, S.; Jitpattanakul, A. Exercise Activity Recognition with Surface Electromyography Sensor using Machine
Learning Approach. In Proceedings of the 2020 Joint International Conference on Digital Arts, Media and Technology with
ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT and
NCON), Pattaya, Thailand, 11–14 March 2020; pp. 75–78. [CrossRef]
16. Tripathi, R.K.; Jalal, A.S.; Agrawal, S.C. Suspicious Human Activity Recognition: A Review. Artif. Intell. Rev. 2018, 50, 283–339.
[CrossRef]
17. Damaševičius, R.; Maskeliūnas, R.; Venčkauskas, A.; Woźniak, M. Smartphone User Identity Verification Using Gait Characteris-
tics. Symmetry 2016, 8, 100. [CrossRef]
18. Rault, T.; Bouabdallah, A.; Challal, Y.; Marin, F. A Survey of Energy-Efficient Context Recognition Systems Using Wearable
Sensors for Healthcare Applications. Pervasive Mob. Comput. 2017, 37, 23–44. [CrossRef]
19. Fookes, C.; Denman, S.; Lakemond, R.; Ryan, D.; Sridharan, S.; Piccardi, M. Semi-supervised intelligent surveillance system for
secure environments. In Proceedings of the 2010 IEEE International Symposium on Industrial Electronics, Bari, Italy, 4–7 July
2010; pp. 2815–2820.
20. Zhou, Z.; Chen, X.; Chung, Y.C.; He, Z.; Han, T.; Keller, J. Activity Analysis, Summarization, and Visualization for Indoor Human
Activity Monitoring. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1489–1498. [CrossRef]
21. Zhan, Y.; Miura, S.; Nishimura, J.; Kuroda, T. Human Activity Recognition from Environmental Background Sounds for Wireless
Sensor Networks. In Proceedings of the 2007 IEEE International Conference on Networking, Sensing and Control, London, UK,
15–17 April 2007; pp. 307–312.
22. Dong, B.; Biswas, S. Wearable Networked Sensing for Human Mobility and Activity Analytics: A Systems Study. In Proceedings
of the 2012 Fourth International Conference on Communication Systems and Networks (COMSNETS 2012), Bangalore, India, 3–7
January 2012; Volume 2012, pp. 1–6. [CrossRef]
23. Curone, D.; Bertolotti, G.M.; Cristiani, A.; Secco, E.L.; Magenes, G. A Real-Time and Self-Calibrating Algorithm Based on Triaxial
Accelerometer Signals for the Detection of Human Posture and Activity. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 1098–1105.
[CrossRef] [PubMed]
24. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett.
2019, 119, 3–11.
25. Jiang, W.; Yin, Z. Human Activity Recognition Using Wearable Sensors by Deep Convolutional Neural Networks. In Proceedings
of the 23rd ACM International Conference on Multimedia (MM ’15), Brisbane, Australia, 26–30 October 2015; Association for
Computing Machinery: New York, NY, USA, 2015; pp. 1307–1310. [CrossRef]
26. Zhang, L.; Wu, X.; Luo, D. Recognizing Human Activities from Raw Accelerometer Data Using Deep Neural Networks.
In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA,
9–11 December 2015; pp. 865–870.
27. Ordóñez, F.; Roggen, D. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity
Recognition. Sensors 2016, 16, 115. [CrossRef]
28. Hammerla, N.Y.; Halloran, S.; Plötz, T. Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using
Wearables. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI’16), New York, NY,
USA, 9–15 July 2016; AAAI Press: Menlo Park, CA, USA, 2016; pp. 1533–1540.
29. Chavarriaga, R.; Sagha, H.; Calatroni, A.; Digumarti, S.T.; Tröster, G.; Millán, J.d.R.; Roggen, D. The Opportunity challenge: A
benchmark database for on-body sensor-based activity recognition. Pattern Recognit. Lett. 2013, 34, 2033–2042.
30. Plötz, T.; Hammerla, N.Y.; Olivier, P. Feature Learning for Activity Recognition in Ubiquitous Computing. In Proceedings of the
Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI’11), Barcelona, Spain, 16–22 July 2011; AAAI Press:
Menlo Park, CA, USA, 2011; Volume 2, pp. 1729–1734.
31. Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J. A Public Domain Dataset for Human Activity Recognition us-
ing Smartphones. In Proceedings of the ESANN 2013 Proceedings, European Symposium on Artificial Neural Networks,
Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2013.
32. Kataria, A.N.; Adhyaru, D.M.; Sharma, A.K.; Zaveri, T.H. A survey of automated biometric authentication techniques. In Proceed-
ings of the 2013 Nirma University International Conference on Engineering (NUiCONE), Ahmedabad, India, 28–30 November
2013; pp. 1–6.
33. Ailisto, H.J.; Lindholm, M.; Mantyjarvi, J.; Vildjiounaite, E.; Makela, S.M. Identifying people from gait pattern with accelerometers.
In Biometric Technology for Human Identification II; Jain, A.K., Ratha, N.K., Eds.; International Society for Optics and Photonics,
SPIE: Bellingham, WA, USA, 2005; Volume 5779, pp. 7–14. [CrossRef]
34. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Cell phone-based biometric identification. In Proceedings of the 2010 Fourth IEEE
International Conference on Biometrics: Theory, Applications and Systems (BTAS), Washington, DC, USA, 27–29 September 2010;
pp. 1–7.
35. Juefei-Xu, F.; Bhagavatula, C.; Jaech, A.; Prasad, U.; Savvides, M. Gait-ID on the move: Pace independent human identification
using cell phone accelerometer dynamics. In Proceedings of the 2012 IEEE Fifth International Conference on Biometrics: Theory,
Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; pp. 8–15.
36. Zhang, M.; Sawchuk, A. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 1036–1043. [CrossRef]
37. Pires, I.M.; Hussain, F.; Garcia, N.M.; Zdravevski, E. Improving Human Activity Monitoring by Imputation of Missing Sensory
Data: Experimental Study. Future Internet 2020, 12, 155. [CrossRef]
38. Xia, K.; Huang, J.; Wang, H. LSTM-CNN Architecture for Human Activity Recognition. IEEE Access 2020, 8, 56855–56866.
[CrossRef]
39. Cho, H.; Yoon, S.M. Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening. Sensors 2018,
18, 1055. [CrossRef]