Yoga Pose Detection Using AI
Presented by:
Parthivi malik
2102220100129
Nupur kumari
2102220100128
MD Asharaf khan
2202220109010
Guided by: Mr Ashish Shrivastava
Department of CSE, ITS Engineering College, Greater Noida
Introduction
• Yoga Pose Detection using AI is aimed
at automating the process of
analyzing yoga postures to provide
real-time feedback for improved
posture accuracy and fitness tracking.
Contd.
• By leveraging computer vision and AI
models, the system is designed to help
users perform yoga poses more
effectively, reducing risks of incorrect
posture and injuries. The system can be
used in personal and commercial settings
like gyms, yoga studios, and home
practice.
Background
• Relevant research and technologies include:
• The PoseNet model for human pose
estimation using computer vision.
• TensorFlow-based deep learning models
for human body detection.
• OpenCV library for image processing and
video capture.
Contd.
• Additional technologies and concepts
applied in this project:
• Usage of datasets for training AI
models for pose classification.
• 5. Real-time feedback systems using
graphical interfaces and voice
commands.
Problem Statement
• Yoga practitioners often lack accurate
and immediate feedback on their
poses, which may lead to improper
form and increase the risk of injury.
There is a need for a system that
provides real-time pose detection
and correction feedback.
Proposed Work
• The proposed system detects yoga
poses using a camera feed and a
trained AI model to analyze poses in
real-time. It includes:
• Camera-based image capture.
• Image preprocessing and key point
detection.
Contd.
• The final output provides users with
feedback through a graphical
interface and voice-based
instructions, making the system more
intuitive and user-friendly.
System Design
• The system is designed with three main
components:
• 1. Input: Camera capturing live video feed.
• 2. Processing: AI-based analysis of the
yoga poses to ensure proper form.
• 3. Output: Real-time feedback for the user
on their posture.
Diagram
Contd
• Each of these components work
together to create a seamless
experience, where the user can
correct their poses based on real-
time AI feedback.
Implementation
• The system was implemented using
the following tools and technologies:
• Python for backend development.
• OpenCV for video capture and image
processing.
• TensorFlow for AI model integration.
Contd
• Further implementation details
include:
• 1. Training data from yoga datasets
for pose classification.
• 2. Use of pre-trained models like
PoseNet for key point detection.
Conclusion
• The project successfully automates yoga pose detection
using AI, offering real-time feedback to help users improve
posture accuracy. By detecting key body joints and
comparing them with ideal pose structures, the system
provides instant corrections, reducing the risk of injury.
This makes yoga practice more effective and accessible for
all levels, whether for personal use or in commercial
settings like gyms and yoga studios. The automation also
minimizes the need for human supervision, providing a
scalable and efficient tool for yoga practice.
Limitations
• Pose Detection Accuracy: While the AI model can detect
poses, its accuracy may vary depending on lighting
conditions, camera quality, and user position. The
model may not always detect minor misalignments in
poses.
• Limited Dataset: The project relies on available datasets
for training the model. If the dataset does not cover all
possible yoga poses or variations, the system might not
perform optimally for some less common poses.
Contd
nal resources and camera processing speed, t
static environment with a single user in the f
imultaneously.
Reference
Papandreou, G., Zhu, T., Chen, L. C., et al. (2018). ”PersonLab:
Person Pose Estimation and Instance Segmentation with a Bottom-
Up, Part-Based, Geometric Embedding Model.” ECCV.
* Toshev, A., Szegedy, C. (2014). ”DeepPose: Human Pose Estimation
via Deep Neural Networks.” CVPR.
* Simon, T., Joo, H., Matthews, I., Sheikh, Y. (2017). ”Hand Keypoint
Detection in Single Images Using Multiview Bootstrapping.” CVPR.
* Xiao, B., Wu, H., Wei, Y. (2018). ”Simple Baselines for Human Pose
Estimation and Tracking.” ECCV.
* He, K., Zhang, X., Ren, S., Sun, J. (2016). ”Deep Residual Learning
for Image Recognition.” CVPR.
* Cao, Z., Simon, T., Wei, S., Sheikh, Y. (2017). ”Realtime Multi-
Person 2D Pose Estimation Using Part Affinity Fields.” CVPR.
* Iqbal, U., Gall, J. (2017). ”Multi-Person Pose Estimation with Local
Joint-to-Joint Dependencies.” IEEE Transactions on Pattern Analysis
and Machine Intelligence.
* Chang, W., Kothari, M., Pitkow, X. (2019). ”AI-Driven Fitness
Applications for Real-Time Posture Correction: An Empirical Study.”
Journal of Applied AI in Fitness Technology.
Future Scope
• Model Optimization: Improving the AI model's accuracy
by expanding the training dataset and including a wider
variety of yoga poses and body types.
• Multi-User Support: Enhancing the system to support
group yoga sessions, detecting multiple users’ poses in a
single frame, and providing feedback to each individual.
• Integration with Wearable Devices: Incorporating data
from wearable fitness devices (e.g., smartwatches or
fitness bands) to provide more comprehensive feedback
on heart rate, balance, and muscle engagement during
poses.
Contd
• Augmented Reality (AR) Feedback:
Developing an AR-based interface where
the system overlays real-time
corrections or improvements on the
user's video feed to guide them during
yoga practice
• Mobile Application: Expanding the
project into a mobile app version that
allows users to use their phone cameras
for pose detection and feedback while
Thank You
• Contact: [Your Email or Contact Information]