Comparative Analysis of Sensor-Based Human Activity Recognition Using Artificial Intelligence
Abstract
Human Activity Recognition (HAR) has become one of the most prominent research topics in ubiquitous computing and pattern recognition over the last decade. In this paper, a comparative analysis of 17 different algorithms is performed on a public HAR dataset built from smartphone accelerometer and gyroscope (inertial sensor) data, using a 4-core machine with an NVIDIA GeForce 940MX GPU and a 16-core g4dn.4xlarge Amazon Elastic Compute Cloud (EC2) instance. The results are evaluated in terms of accuracy, F1-score, precision, recall, training time, and testing time. The Machine Learning (ML) models implemented include Logistic Regression (LR), Support Vector Classifier (SVC), Random Forest (RF), Decision Trees (DT), Gradient Boosted Decision Trees (GBDT), linear and Radial Basis Function (RBF) kernel Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB). The Deep Learning (DL) models implemented include Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), a hybrid CNN-LSTM, and a CNN combined with a Bidirectional LSTM (CNN-BLSTM). Neural Structured Learning was also applied on top of a CNN-LSTM model, and a Deep Belief Network (DBN) was implemented as well. It is found that the deep learning models (CNN, LSTM, CNN-LSTM, and CNN-BLSTM) consistently confuse the dynamic activities with one another, while the machine learning models confuse the static activities. A divide-and-conquer approach was therefore applied to the dataset: the CNN achieved an accuracy of 99.92% on the dynamic activities, whereas the CNN-LSTM model achieved an accuracy of 96.73%, eliminating the confusion between static and dynamic activities. The maximum classification accuracy on the full dataset, 99.02%, was achieved by the DBN after Gaussian standardization. The proposed DBN model is more efficient, lightweight, accurate, and faster at classification than the existing models.