HAR Deep Learning Comparison

Research study comparing CNN-LSTM, BiLSTM, Attention, and Transformer architectures for Human Activity Recognition (HAR) on the UCI-HAR dataset.

Python · PyTorch · Deep Learning · Transformers
Key Highlights

  • CNN-Transformer Ultimate achieved 93.48% accuracy — the best of the four architectures tested
  • Key finding: attention mechanisms don't universally improve CNN-LSTM; proper Transformer optimization is critical
  • BiLSTM offers the best accuracy-to-parameters ratio (93.38% with 12× fewer parameters than the Transformer)
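To make the architecture family concrete, here is a minimal PyTorch sketch of a CNN-LSTM for UCI-HAR-shaped input (9 inertial signal channels × 128 timesteps, 6 activity classes). The layer sizes and hyperparameters are illustrative assumptions, not the study's exact configuration:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """CNN front-end extracts local motion features; LSTM models temporal order.
    Layer widths here are illustrative, not the study's tuned values."""
    def __init__(self, in_channels=9, num_classes=6, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),  # 128 -> 64 timesteps
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):          # x: (batch, 9, 128)
        x = self.conv(x)           # (batch, 64, 64)
        x = x.transpose(1, 2)      # (batch, 64 timesteps, 64 features)
        _, (h, _) = self.lstm(x)   # h: (1, batch, hidden)
        return self.fc(h[-1])      # (batch, num_classes)

model = CNNLSTM()
logits = model(torch.randn(2, 9, 128))
print(logits.shape)  # torch.Size([2, 6])
```

Swapping the LSTM for an `nn.LSTM(..., bidirectional=True)` gives the BiLSTM variant, which is one reason it stays so parameter-efficient relative to a full Transformer encoder.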