Recognition of human walking and running actions using temporal foot-lift features
DOI: https://doi.org/10.58712/ie.v1i1.1

Keywords: Moving action recognition, Skeleton joint data, Foot-lift feature, Smart surveillance system, Human action

Abstract
The recognition of human walking and running actions has become an essential part of many practical applications, such as smart video surveillance, patient and elderly monitoring, health care, and human-robot interaction. However, the requirement of large spatial information and a large number of frames for each recognition phase remains an open challenge. Aiming to reduce the number of frames and the joint information required, temporal foot-lift features were introduced in this study. The temporal foot-lift features and a weighted KNN classifier were used to recognize “Walking” and “Running” actions from four different human action datasets. Half of each dataset was used for training and the other half for experimental performance evaluation. The experimental results are presented and explained with justifications. An overall recognition accuracy of 88.6% was achieved using 5 frames, rising to 90.7% when using 7 frames. The performance of the proposed method was compared with that of existing methods. Skeleton joint information and temporal foot-lift features are promising features for real-time human moving action recognition.
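The abstract names the pipeline (skeleton joint data → temporal foot-lift features over a 5- or 7-frame window → weighted KNN) but not its implementation. The sketch below is a minimal illustration, assuming the foot-lift feature is the per-frame vertical displacement of the two ankle joints above a standing baseline, stacked over the window, and that "weighted KNN" means a distance-weighted k-nearest-neighbour classifier as in scikit-learn; the feature definition, the training data, and all names here are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

WINDOW = 5  # frames per recognition phase (the abstract reports 5 and 7)

def foot_lift_features(ankle_y, baseline_y):
    """Temporal foot-lift feature vector for one clip (assumed definition).

    ankle_y    : (WINDOW, 2) array of left/right ankle heights per frame
    baseline_y : standing-pose ankle height used as the lift reference
    Returns a flat (WINDOW * 2,) vector of per-frame lift amounts.
    """
    lift = ankle_y - baseline_y          # vertical displacement per frame
    return lift.reshape(-1)

# Hypothetical training set: one feature vector per clip, 0 = Walking, 1 = Running
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, WINDOW * 2))
y_train = rng.integers(0, 2, size=100)

# Distance-weighted KNN, standing in for the paper's "weighted KNN classifier"
clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
clf.fit(X_train, y_train)

# Classify a new 5-frame window of ankle heights
ankle_y = rng.normal(loc=0.12, scale=0.05, size=(WINDOW, 2))
x = foot_lift_features(ankle_y, baseline_y=0.08)
print("predicted action:", ["Walking", "Running"][clf.predict([x])[0]])
```

Distance weighting lets the nearest neighbours dominate the vote, which suits short windows where only a few frames separate the modest ankle lift of walking from the pronounced flight-phase lift of running.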
License
Copyright (c) 2024 Khin Cho Tun, Hla Myo Tun, Lei Lei Yin Win, Khin Kyu Kyu Win

This work is licensed under a Creative Commons Attribution 4.0 International License.