Deep ChaosNet for Action Recognition in Videos

Complexity 2021:1-5 (2021)

Abstract

Current chaos-based methods for action recognition in videos rely on hand-crafted features, which limits recognition accuracy. In this paper, we extend ChaosNet into a deep neural network and apply it to action recognition. First, we extend ChaosNet to a deep ChaosNet for extracting action features. Then, we feed the features into a low-level LSTM encoder and a high-level LSTM encoder to obtain low-level coding output and high-level coding results, respectively. The agent is a behavior recognizer that produces the recognition results, and the manager is a hidden layer responsible for providing behavioral segmentation targets at the high level. Experiments are conducted on two standard action datasets, UCF101 and HMDB51, and the results show that the proposed algorithm outperforms the state of the art.
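
The abstract only names the components, so the following is a minimal sketch of how such a pipeline might be wired together in PyTorch. The deep ChaosNet feature extractor is replaced by a placeholder convolutional stack (the actual chaotic-neuron dynamics are not described in the abstract), and all module names, dimensions, and the agent/manager heads are illustrative assumptions, not the authors' implementation.

# Hedged sketch of the pipeline described in the abstract: a per-frame feature
# extractor (stand-in for deep ChaosNet), a low-level and a high-level LSTM
# encoder, an "agent" head that outputs action-class logits, and a "manager"
# head that proposes high-level segmentation targets. All names and sizes are
# assumptions for illustration only.
import torch
import torch.nn as nn


class FrameFeatureExtractor(nn.Module):
    """Placeholder for the deep ChaosNet feature extractor (applied per frame)."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):           # x: (B*T, 3, H, W)
        return self.net(x)          # -> (B*T, feat_dim)


class DeepChaosNetRecognizer(nn.Module):
    def __init__(self, feat_dim=256, hidden=512, num_classes=101, num_segments=8):
        super().__init__()
        self.extractor = FrameFeatureExtractor(feat_dim)
        self.low_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)   # low-level encoder
        self.high_lstm = nn.LSTM(hidden, hidden, batch_first=True)    # high-level encoder
        self.agent = nn.Linear(hidden, num_classes)     # behavior recognizer
        self.manager = nn.Linear(hidden, num_segments)  # high-level segmentation targets

    def forward(self, clip):                 # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.extractor(clip.flatten(0, 1)).view(b, t, -1)
        low_out, _ = self.low_lstm(feats)          # low-level coding output per frame
        high_out, _ = self.high_lstm(low_out)      # high-level coding results
        logits = self.agent(high_out[:, -1])       # class scores from the last step
        seg_targets = self.manager(high_out)       # per-step segmentation scores
        return logits, seg_targets


if __name__ == "__main__":
    model = DeepChaosNetRecognizer()
    dummy = torch.randn(2, 16, 3, 112, 112)        # 2 clips of 16 frames
    logits, segs = model(dummy)
    print(logits.shape, segs.shape)                # (2, 101) and (2, 16, 8)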


Similar books and articles

Perceptual Kinds as Supervening Sortals. Błażej Skrzypulec - 2018 - Pacific Philosophical Quarterly 100 (1):174-201.
Theorizing Recognition in Education. Charles Wayne Bingham - 1999 - Dissertation, University of Washington.
