IAES International Journal of Robotics and Automation (IJRA)
Vol. 14, No. 1, March 2025, pp. 11~18
ISSN: 2722-2586, DOI: 10.11591/ijra.v14i1.pp11-18
Design and development of humanoid robotic arm
Shripad Bhatlawande1, Sakshi Kulkarni1, Shajjad Shaikh1, Sachi Kurian2, Swati Shilaskar1
1 Electronics and Telecommunication Engineering, Vishwakarma Institute of Technology, Pune, India
2 Department of Biomedical Engineering, Rutgers University School of Engineering, New Brunswick, United States
Article Info

Article history:
Received Apr 6, 2024
Revised Oct 22, 2024
Accepted Nov 19, 2024

Keywords:
Assistive aid
Deep learning
End effectors
Hand amputee
Humanoid robotic arm

ABSTRACT

This paper presents the design, development, and evaluation of a 5-degrees-of-freedom (5-DoF) humanoid robotic arm featuring a sophisticated 5-finger gripper. The five degrees of freedom include the base, shoulder, elbow, wrist, and gripper, all controlled by MG996R servo motors to enhance grasping, positioning, flexibility, and mobility. The arm is constructed from laser-cut aluminum sheets. It effectively picks and places objects such as bottles and bags. A high-speed portable computing system is used to control robotic hand operations. A webcam is used for object detection and to acquire information about the surroundings. The system uses a convolutional neural network-based MobileNet architecture for object detection. The robotic hand is used as an assistive aid for amputees. It mimics finger movements based on detected objects. The system achieved a precision of 0.97 for bags and 0.93 for bottles, with accuracies of 96.83% and 92.42%, respectively. The system employs advanced computer vision algorithms and real-time strategies, ensuring adaptability across various tasks. It integrates advanced visual systems and improved feedback to enhance user interaction and overall usability. It addresses trade-offs between detection precision and processing time.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Swati Shilaskar
Electronics and Telecommunication Engineering, Vishwakarma Institute of Technology
Pune, India
Email: [email protected]
1. INTRODUCTION
This paper presents the design, development, and evaluation of a 5-degrees-of-freedom (5-DoF)
humanoid robotic arm with a sophisticated 5-finger gripper. The five DoFs are the base, shoulder, elbow, wrist,
and gripper. These joints are controlled with servo motors, enhancing the arm’s capabilities in grasping,
positioning, flexibility, and mobility. Serving as the central control unit is the Raspberry Pi, directing the
movements of the five servo motors through advanced algorithms to ensure seamless and precise arm
operations. This robotic arm is developed for the rehabilitation of hand amputees and, with certain
modifications, can be used for industrial applications.
Factors affecting limb movement and grasp are analyzed by measuring joint angles using Kinect
depth sensors and the MediaPipe framework [1]. This work emphasized integrated approaches to real-time
challenges. To operate such systems [2], handheld input devices like joysticks, keyboards, computer mice, and
touch screens are commonly employed. The constraint of limited DoF presents a hurdle, particularly when
managing robots with numerous degrees of freedom, such as robotic arms. Additionally, joystick
manipulation of a robotic arm necessitates non-intuitive transitions between position, orientation, and
gripping control modes. Nodes [3] can control the arm, plan safe movements, and execute actions to reach
initial and final positions. Neural network-based [4] learning offers accurate continuous mapping and handles
multiple object shapes, enhancing object detection capabilities essential for robotic interaction in varied
environments [5]. It enables continuous estimation and addresses non-linearities. The strategy suggested in
[6] seeks to overcome this drawback and improve semantic representation. Robotic fingers with force sensors,
designed to hold objects for daily activities such as eating and drinking, were controlled using a
proportional-integral-derivative (PID) algorithm; this arm was designed for children [7]. A robotic arm for
pick-and-place operation using a greedy algorithm to prioritize actions is described in [8]. Defined
sequences of operation were used along with image-based object recognition for industrial applications.
A decision-tree-based approach is integrated with rotation of the robotic arm along the trajectory defined by
the sequence of operations [9]. Robotic hand grasping of unseen objects is described in [10], which
implemented a pick, sense, and place strategy using adaptive techniques. Robotic arms that grasp objects from
the outside are more common; an arm that holds the object from the inside is implemented in [11], where
symmetrical movement is used to hold the object stably. Water bottle identification was carried out using
the YOLOv5 algorithm, achieving 85% accuracy in grasping the bottle along the planned path trajectory [12]. For
deformable objects, grip strength is important [13] to form a stable grip without damaging the object. Point
cloud scanned output is used to define the gripper coordinates to hold the object. Tactile sensor-based
identification of object hardness is carried out using machine learning [14]. The Cartesian robot is trained to
detect five hardness levels using a machine learning algorithm. Twisted strings control the robotic hand in
[15], where an intelligent sensing-based grip is achieved and a humanoid 3D-printed hand with five fingers
and a virtual reality-based monitoring system is implemented. Exoskeleton robots are pivotal in stroke rehabilitation,
utilizing innovative inverse kinematics and robust nonlinear control approaches. These advancements
enhance trajectory accuracy and ensure stability during passive therapy. Future directions involve integrating
visual systems like Kinect for further refinement and effectiveness in rehabilitation protocols [16]. Bilateral
haptic collaboration [17] is proposed for human-robot cooperation, featuring a CoGripper and wearable
interface. Three user studies confirm efficacy, showing reduced task time and improved grasp control. The
system integrates sonar sensing and vibrotactile feedback for enhanced communication. Future enhancements
include automated gripper reconfiguration and improved tactile cues for user recognition. Grasping and lifting
objects [18] with suitable control remains a challenge in robotics. Neurophysiology sheds light on human
hand dynamics, inspiring robotic solutions. Real-time processing of tactile data poses computational hurdles.
A bio-inspired approach utilizing cellular nonlinear/neural networks is proposed, enhancing robotic grasping
capabilities. Successful grasps of diverse objects validate the system’s efficacy. A review of state-of-the-art
manipulator control using machine learning algorithms [19] indicates progress toward cognitive skill development in robots.
Recent studies [20] have focused on the evolution of robotic arms over the past two decades, delineating
various parameters influencing their performance. These include accuracy, repeatability, kinematics, and
working envelope. Commercially available arms exhibit diverse capabilities, yet research highlights gaps in
optimization and suggests avenues for future algorithmic and simulation-driven enhancements. The design of
a three-finger gripper robotic arm with low-cost components to enable various object-picking tasks is
discussed in [21]. Incorporating the three-finger gripper and precise control algorithms, the prototype demonstrates
effective functionality through comprehensive testing. Humanoid motion planning for robotic arms [22]
integrates human arm physics and reinforcement learning, promising safer interaction for aged individuals.
Press, grasp, and flip operations of a robotic arm were controlled using image inputs to a dual-arm robot
[23]. Experimental results show successful implementation, enabling object manipulation via hand gestures.
A pick-and-place algorithm [24] using a multirate event-triggered sliding mode controller for a robotic arm in
3D space is proposed. Control updates occur when triggering rules are violated, optimizing resource use.
Validated on a human arm system, it demonstrates efficiency in object manipulation with minimum control
updates. A remotely operated 6 degrees of freedom (6-DoF) robotic manipulator was designed for the swab
collection from COVID-19 patients [25]. Robotic manipulators need precise operation, and more work needs to be
done to provide assistive solutions to humans.
One of the primary limitations of current robotic arms is the challenge of robust grasping. The
surface curvature of objects poses significant difficulties, necessitating integrated approaches to manage real-
time grasping effectively. Additionally, the limited DoF of common input devices restricts precise control,
particularly for arms with multiple DoFs. The control and interface mechanisms of robotic arms also present
substantial limitations. Common input devices such as joysticks, keyboards, mice, and touch screens are non-
intuitive for managing the position, orientation, and grip of robotic arms. Joystick manipulation, in particular,
requires complex transitions between different control modes, complicating user operation and reducing
efficiency. Real-time processing of tactile data and the need for computational efficiency pose significant
hurdles for robotic arms. Balancing computational resources while maintaining performance is an ongoing
challenge that affects the overall effectiveness of robotic systems. Current robotic systems lack intuitive
interfaces for effective human-robot cooperation, making tasks such as grasping and lifting complex and
inefficient. The proposed system addresses the majority of these challenges. It integrates advanced visual systems
and improved feedback to enhance user interaction and overall usability. It uses a lightweight MobileNet
convolutional neural network (CNN) architecture, which offers high accuracy and addresses trade-offs between
detection precision and processing time.
2. METHOD
This work presents the design and development of an assistive robotic arm for hand amputees, with
a particular focus on mimicking finger movements. Unlike a biological hand that receives instructions from
the brain, this robotic arm detects objects through a computer vision system and forms the grip accordingly.
The workflow of the proposed system in Figure 1 involves a sequential process starting with a camera
capturing video frames. These frames are processed by the MobileNet architecture for object detection,
followed by grip formation. The frames undergo normalization and resizing to a standardized 224×224-pixel
format. The current system uses a camera mounted on spectacles to acquire information about the surroundings.
Figure 1. Humanoid robotic arm
The MobileNet CNN extracts relevant features for efficient and lightweight object recognition.
Once the system recognizes an object, it uses the identified class information, such as “Bottle” or “Bag,” to
trigger specific servo actions for grip formation. The servo motors then control the robotic arm to interact
with the recognized objects in real-time, establishing a seamless integration between vision-based recognition
and robotic manipulation. The comprehensive approach details the collection and preprocessing of the dataset
as the initial phase, followed by an in-depth exploration of the system design and implementation processes.
2.1. Dataset and preprocessing
The dataset used for training the robotic arm’s computer vision system comprises 1,943 custom
images sourced from various open-source datasets. These images depict bags and bottles of different colors,
shapes, and sizes. The dataset is essential for training the CNN architecture, specifically MobileNet, to
classify objects into two distinct categories: bags and bottles. Preprocessing steps include normalization and
resizing of images to a standardized 224×224-pixel format. This optimization ensures the dataset is suitable
for effective model training. The chosen CNN architecture excels at discerning patterns unique to bottles and
bags, enabling accurate real-time predictions for live input images.
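The binary classifier itself can be realized, for example, with a Keras transfer-learning setup such as the sketch below. The data/ directory layout (data/bottle, data/bag), the 80/20 split, and the hyperparameters are illustrative assumptions rather than the paper's exact training configuration.

import tensorflow as tf

IMG_SIZE = (224, 224)

# Hypothetical directory layout: data/bottle/*.jpg and data/bag/*.jpg.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    class_names=["bottle", "bag"],  # bottle = 0, bag = 1, as in Algorithm 1
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    class_names=["bottle", "bag"],
    image_size=IMG_SIZE, batch_size=32)

# Normalize pixel values to [0, 1], matching the preprocessing applied
# to live frames at inference time.
train_ds = train_ds.map(lambda x, y: (x / 255.0, y))
val_ds = val_ds.map(lambda x, y: (x / 255.0, y))

# ImageNet-pretrained MobileNet backbone, frozen for transfer learning.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: bottle vs. bag
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save("model.h5")  # hypothetical filename, loaded by the control loop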
2.2. Mechanical design of the robotic hand
The robotic arm is crafted from high-quality aluminum sheets of 2 mm and 4 mm thickness,
processed through laser cutting, and designed using SolidWorks, as shown in Figure 2. This design promises a dynamic
range of applications due to its structural integrity and precision. The components undergo welding, drilling,
and lathe work to ensure robust construction. The SolidWorks 3D model serves as a blueprint for seamless
component integration, resulting in a balanced system where each DoF operates in harmony.
The gripper in Figure 3 features five fingers, four of which are connected with shafts. Gears are used
to control the precise and synchronized movement of the fingers, enhancing the efficiency of the grip. This
coordinated design ensures smooth operation and improves the overall performance of the gripper.
Figure 2. Robotic arm design
Figure 3. Gripper design
2.3. Electronic design of the robotic hand
The electronic design focuses on the control and actuation of the robotic arm. MG996R servo
motors are selected based on torque requirements. The torque calculation is given in (1) and (2), where the
torque (T) is in kg·cm, the load weight (W) is in kg, and the distance (D) is in cm. Let W = 0.5 kg and
D = 20 cm.

T = W × D (1)

T_end effector = 0.5 × 20 = 10 kg·cm (2)

The selected MG996R servo motor, with a torque capacity of 11 kg·cm, accommodates the placement of a
0.5 kg object at a distance of 20 cm from the robotic arm’s base. This choice ensures sufficient torque for the
specified load and distance requirements. The workflow of the proposed system is presented in this section as
Algorithm 1. It begins with capturing video frames from a specified camera using OpenCV. These frames
undergo normalization and resizing to a standardized 224×224-pixel format. The resized images are
processed by the MobileNet CNN to extract relevant features for efficient and lightweight object recognition.
Algorithm 1. Robotic arm control
while true do
    ret, frame = cap.read()
    if not ret then
        PRINT "Failed to grab frame"
        BREAK
    end if
    input_image = normalize and resize frame to 224×224
    predictions = model.predict(input_image)
    predicted_class = int(predictions[0][0] > 0.5)
    if predicted_class == 0 then
        label = "bottle"
        call bottle()
    else
        label = "bag"
        call bag()
    end if
    display frame with label
end while
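As a concrete illustration, the following Python sketch implements the same loop, assuming a trained Keras model saved as model.h5 (a hypothetical filename consistent with the training sketch in section 2.1) and the grip routines bottle() and bag() sketched at the end of this section; it illustrates the control flow rather than reproducing the exact deployed script.

import cv2
import numpy as np
import tensorflow as tf

from grips import bottle, bag  # hypothetical module with the grip routines

model = tf.keras.models.load_model("model.h5")  # hypothetical model file
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ret, frame = cap.read()
    if not ret:
        print("Failed to grab frame")
        break

    # Normalize and resize to the 224x224 input expected by MobileNet.
    resized = cv2.resize(frame, (224, 224))
    input_image = np.expand_dims(resized.astype("float32") / 255.0, axis=0)

    # Sigmoid output with the class convention bottle = 0, bag = 1.
    predictions = model.predict(input_image, verbose=0)
    predicted_class = int(predictions[0][0] > 0.5)
    label = "bottle" if predicted_class == 0 else "bag"

    if predicted_class == 0:
        bottle()  # form the bottle grip
    else:
        bag()  # form the bag grip

    # Update the displayed video frame with the predicted class.
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0,
                (0, 255, 0), 2)
    cv2.imshow("Robotic arm view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()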
Following object detection, the system utilizes the identified class information (e.g., “Bottle” or
“Bag”) to trigger specific servo actions for grip formation. The script calls corresponding functions—either
bottle() or bag()—to execute actions related to servo control based on the classification. This process is
looped, continuously updating the displayed video frame with the predicted class and responding to detected
objects in real time. The comprehensive approach ensures seamless integration between vision-based
recognition and robotic manipulation, providing an assistive aid for amputees. By mimicking finger
movements based on detected objects, the robotic arm demonstrates significant potential in various
applications, including manufacturing, logistics, and healthcare.
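The grip routines bottle() and bag() called above can be realized on the Raspberry Pi, for example, with the gpiozero library, as in the minimal sketch below; the GPIO pin numbers and joint angles are illustrative assumptions and would need to be matched to the prototype's actual wiring and grip postures.

from time import sleep

from gpiozero import AngularServo

# Hypothetical GPIO pins for three of the five MG996R servos; the
# prototype's actual wiring may differ.
elbow = AngularServo(22, min_angle=0, max_angle=180)
wrist = AngularServo(17, min_angle=0, max_angle=180)
fingers = AngularServo(27, min_angle=0, max_angle=180)

def bottle():
    # Form a cylindrical grip around a bottle (illustrative angles).
    elbow.angle = 45  # lower the forearm toward the object
    sleep(0.5)
    fingers.angle = 120  # close the fingers around the bottle body
    sleep(0.5)

def bag():
    # Rotate the wrist, then curl the fingers through a bag handle.
    wrist.angle = 90  # orient the hand toward the handle
    sleep(0.5)
    fingers.angle = 150  # hook grip for the handle
    sleep(0.5)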
3. RESULTS AND DISCUSSION
The study presents a thorough analysis of the implemented humanoid robotic arm system, enabling
successful object detection and ensuing robotic manipulation. Figures 4 and 5 illustrate the system’s
adeptness in detecting binary classes and executing precise grabbing functions. The system initiates specific
grabbing functions designed for each class. This action then prompts the robotic arm to execute pick-and-
place tasks in a manner that ensures the efficient and precise handling of objects within the given
environment. The robotic arm (Figure 5) detects the bottle and grabs it precisely: following the elbow motor
movement, the fingers form the desired grip to pick up the bottle. For a bag, the wrist first rotates to a
specific angle, after which the fingers form the desired grip to pick it up.
Figure 4. Bottle detection
Figure 5. Grip formation
This graphical representation of the training and validation accuracy on the y-axis with respect to
the number of epochs on the x-axis in Figure 6 provides a compelling narrative of the model’s learning
behavior. The curve illustrates a positive correlation between the number of epochs and the training accuracy,
showcasing a steady increase over time. However, the validation accuracy curve exhibits an interesting trend.
Initially aligning with the training accuracy, it experiences a decrease around the midway point of epochs
before resuming an upward trajectory. This phenomenon suggests that the model, while excelling in learning
from the training data, encounters challenges in generalization, leading to a temporary dip in performance on
unseen validation data. The observed training accuracy of 98% signifies a high level of model proficiency,
but the validation accuracy hovering around 93% highlights the need for further exploration into techniques
for mitigating overfitting and enhancing the model’s ability to generalize to new, unseen data.
Figure 6. Training and validation accuracy
The graphical representation of the training and validation loss on the y-axis with respect to the
number of epochs on the x-axis in Figure 7 reveals a noticeable pattern. Initially, the training loss demonstrates
a gradual decrease as the number of epochs increases, indicative of the model learning from the training data.
However, the validation loss exhibits a distinctive behavior by increasing up to the midpoint before later
decreasing. This divergence suggests that, while the model is effectively learning from the training data, there
may be a point where it begins overfitting and does not generalize well to unseen validation data. The increase
in validation loss could be attributed to the model capturing noise or specific patterns unique to the training set,
which may not necessarily apply to new data. This behavior underscores the importance of monitoring both
training and validation loss to strike a balance between learning from the data and avoiding overfitting. Table 1
shows that the model was subjected to a total of 192 tests, comprising 126 instances of the ‘Bag’ class and 66
instances of the ‘Bottle’ class. Impressively, the model achieved a high overall accuracy of 95.31%, with
96.83% accuracy and 0.97 precision for ‘Bag’ instances, and 92.42% accuracy for ‘Bottle’ instances.
Figure 7. Training and validation loss
Table 1. Model performance
Metric                 Bag class    Bottle class
Tests                  126          66
Correct predictions    122          61
False predictions      4            5
Accuracy (%)           96.83        92.42
Recall                 0.91         0.89
Precision              0.97         0.93
F1 score               0.95         0.80
For the ‘Bag’ class, precision stands at 0.97, emphasizing the model’s ability to accurately identify ‘Bag’
instances when predicted, and a recall of 0.91 signifies the model’s proficiency in capturing the majority of
‘Bag’ instances. For the ‘Bottle’ class, precision remains high at 0.93 with a recall of 0.89, although the
F1 score is lower at 0.80. The model encountered four false predictions for the ‘Bag’ class, likely
influenced by variations in lighting, diverse bag shapes, and occlusions in the real-world environment.
Instances, where bags were partially obscured or positioned at unusual angles, could contribute to
misclassifications. Similarly, in the ‘Bottle’ class, five false predictions may be attributed to variations in
bottle shapes, sizes, and orientations, as well as challenges like label presence, translucency, and the presence
of other objects. Factors such as different backgrounds and reflections could impact accurate ‘Bottle’
classification. Overall, these misclassifications are complex, influenced by diverse dataset conditions and real-
world complexities. Addressing them involves refining the dataset to enhance its quality, augmenting the
training data with diverse scenarios to ensure robust learning, and exploring advanced techniques such as
transfer learning for improved adaptability in varied conditions.
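For reference, per-class metrics of this kind can be computed from raw prediction counts with scikit-learn, as in the sketch below; the label arrays are a hypothetical reconstruction from Table 1's counts (assuming all false predictions fall into the opposite class), so the derived figures approximate rather than exactly reproduce the reported values.

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# 1 = bag, 0 = bottle; counts taken from Table 1.
y_true = np.array([1] * 126 + [0] * 66)
y_pred = np.concatenate([
    np.array([1] * 122 + [0] * 4),  # 122 bags correct, 4 misclassified
    np.array([0] * 61 + [1] * 5),   # 61 bottles correct, 5 misclassified
])

print("Overall accuracy:", accuracy_score(y_true, y_pred))  # 183/192 = 0.9531
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[1, 0])
for name, p, r, f in zip(["bag", "bottle"], precision, recall, f1):
    print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")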
4. CONCLUSION
This paper presented the design, development, and evaluation of a 5-DoF humanoid robotic arm
featuring a sophisticated 5-finger gripper. The arm is operated by a high-speed portable computing system,
utilizing a webcam for object detection and environmental awareness. The robotic hand is designed as an
assistive aid for amputees, mimicking finger movements based on detected objects. Unlike a biological hand
that receives instructions from the brain, this robotic arm detects objects through a computer vision system and
forms the grip accordingly. The CNN-based MobileNet architecture is employed for object detection,
achieving precision scores of 0.97 for bags and 0.93 for bottles, with accuracies of 96.83% and 92.42%,
respectively. The results demonstrate the potential of integrating advanced computer vision algorithms and
real-time strategies to develop assistive technologies. The novel approach of using computer vision to guide
robotic manipulation sets a precedent for future developments in the field of assistive and industrial robotics.
The high accuracy and efficiency of the system are highlighted by its performance metrics. The model’s
precision, recall, and F1 scores demonstrate its ability to handle diverse real-world scenarios. However, the
project also identifies the complex challenges that arise in real-world situations, especially regarding
misclassification. Factors such as variations in object shapes, sizes, and lighting conditions contribute to false
predictions. Addressing these challenges involves refining the dataset, augmenting training data with diverse
scenarios, and exploring advanced techniques like transfer learning for improved adaptability.
ACKNOWLEDGEMENTS
The authors would like to thank all who have helped directly or indirectly to complete this work
successfully.
REFERENCES
[1] P. Jha et al., “Human–machine interaction and implementation on the upper extremities of a humanoid robot,” Discover Applied
Sciences, vol. 6, no. 4, Mar. 2024, doi: 10.1007/s42452-024-05734-3.
[2] S. Li, R. Rameshwar, A. M. Votta, and C. D. Onal, “Intuitive control of a robotic arm and hand system with pneumatic haptic
feedback,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 4424–4430, Oct. 2019, doi: 10.1109/LRA.2019.2937483.
[3] W. S. Barbosa et al., “Industry 4.0: examples of the use of the robotic arm for digital manufacturing processes,” International
Journal on Interactive Design and Manufacturing, vol. 14, no. 4, pp. 1569–1575, Sep. 2020, doi: 10.1007/s12008-020-00714-4.
[4] Z. Xu, Y. Zheng, and S. A. Rawashdeh, “A simple robotic fingertip sensor using imaging and shallow neural networks,” IEEE
Sensors Journal, vol. 19, no. 19, pp. 8878–8886, Oct. 2019, doi: 10.1109/JSEN.2019.2919492.
[5] Y. Lv, Y. Fang, W. Chi, G. Chen, and L. Sun, “Object detection for sweeping robots in home scenes (ODSR-IHS): a novel
benchmark dataset,” IEEE Access, vol. 9, pp. 17820–17828, 2021, doi: 10.1109/ACCESS.2021.3053546.
[6] C.-W. Chen, A.-C. Tsai, Y.-H. Zhang, and J.-F. Wang, “3D object detection combined with inverse kinematics to achieve robotic
arm grasping,” in 2022 10th International Conference on Orange Technology (ICOT), Nov. 2022, pp. 1–4. doi:
10.1109/ICOT56925.2022.10008135.
[7] D. Babu, A. Nasir, Ravindran, M. Farag, and W. A. Jabbar, “3D printed prosthetic robot arm with grasping detection system for
children,” International Journal on Advanced Science, Engineering and Information Technology, vol. 13, no. 1, pp. 226–234,
Feb. 2023, doi: 10.18517/ijaseit.13.1.16547.
[8] S. D. Han, S. W. Feng, and J. Yu, “Toward fast and optimal robotic pick-and-place on a moving conveyor,” IEEE Robotics and
Automation Letters, vol. 5, no. 2, pp. 446–453, Apr. 2020, doi: 10.1109/LRA.2019.2961605.
[9] J. Borrell Mendez, C. Perez-Vidal, J. V. Segura Heras, and J. J. Perez-Hernandez, “Robotic pick-and-place time optimization:
application to footwear production,” IEEE Access, vol. 8, pp. 209428–209440, 2020, doi: 10.1109/ACCESS.2020.3037145.
[10] C. Mitash, R. Shome, B. Wen, A. Boularias, and K. Bekris, “Task-driven perception and manipulation for constrained placement
of unknown objects,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5605–5612, Oct. 2020, doi:
10.1109/LRA.2020.3006816.
[11] H. Hua, Z. Liao, and Y. J. Chen, “A 1-Dof bidirectional graspable finger mechanism for robotic gripper,” Journal of Mechanical
Science and Technology, vol. 34, no. 11, pp. 4735–4741, Nov. 2020, doi: 10.1007/s12206-020-1030-6.
[12] M. Sualeh and G. W. Kim, “Visual-LiDAR based 3d object detection and tracking for embedded systems,” IEEE Access, vol. 8,
pp. 156285–156298, 2020, doi: 10.1109/ACCESS.2020.3019187.
[13] T. B. Jørgensen, S. H. N. Jensen, H. Aanæs, N. W. Hansen, and N. Krüger, “An adaptive robotic system for doing pick and place
operations with deformable objects,” Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 94, no. 1, pp. 81–
100, Dec. 2019, doi: 10.1007/s10846-018-0958-6.
[14] K. Kleeberger, R. Bormann, W. Kraus, and M. F. Huber, “A survey on learning-based robotic grasping,” Current Robotics
Reports, vol. 1, no. 4, pp. 239–249, 2020, doi: 10.1007/s43154-020-00021-6.
[15] M. Levins and H. Lang, “A tactile sensor for an anthropomorphic robotic fingertip based on pressure sensing and machine
learning,” IEEE Sensors Journal, vol. 20, no. 22, pp. 13284–13290, Nov. 2020, doi: 10.1109/JSEN.2020.3003920.
[16] B. Brahmi, M. Saad, M. H. Rahman, and C. Ochoa-Luna, “Cartesian trajectory tracking of a 7-dof exoskeleton robot based on
human inverse kinematics,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 3, pp. 600–611, Mar.
2019, doi: 10.1109/TSMC.2017.2695003.
[17] G. Salvietti, M. Z. Iqbal, and D. Prattichizzo, “Bilateral haptic collaboration for human-robot cooperative tasks,” IEEE Robotics
and Automation Letters, vol. 5, no. 2, pp. 3517–3524, Apr. 2020, doi: 10.1109/LRA.2020.2975715.
[18] L. Ascari, U. Bertocchi, P. Corradi, C. Laschi, and P. Dario, “Bio-inspired grasp control in a robotic hand with massive sensorial
input,” Biological Cybernetics, vol. 100, no. 2, pp. 109–128, Dec. 2009, doi: 10.1007/s00422-008-0279-0.
[19] S. Nahavandi, R. Alizadehsani, D. Nahavandi, C. P. Lim, K. Kelly, and F. Bello, “Machine learning meets advanced robotic
manipulation,” Information Fusion, vol. 105, May 2024, doi: 10.1016/j.inffus.2023.102221.
[20] V. Patidar and R. Tiwari, “Survey of robotic arm and parameters,” in 2016 International Conference on Computer
Communication and Informatics, ICCCI 2016, Jan. 2016, pp. 1–6. doi: 10.1109/ICCCI.2016.7479938.
[21] S. Bhatlawande, M. Ambekar, S. Amilkanthwar, and S. Shilaskar, “Three-finger robotic gripper for irregular-shaped objects,” in
Smart Innovation, Systems and Technologies, vol. 364, Springer Nature Singapore, 2024, pp. 63–75. doi: 10.1007/978-981-99-
5180-2_6.
[22] A. Yang, Y. Chen, W. Naeem, M. Fei, and L. Chen, “Humanoid motion planning of robotic arm based on human arm action
feature and reinforcement learning,” Mechatronics, vol. 78, Oct. 2021, doi: 10.1016/j.mechatronics.2021.102630.
[23] D. Kijdech and S. Vongbunyong, “Manipulation of a complex object using dual-arm robot with mask R-CNN and grasping
strategy,” Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 110, no. 3, Jul. 2024, doi: 10.1007/s10846-
024-02132-0.
[24] M. Philip and P. S. Lal Priya, “Pick and place operation of a robotic arm using multirate event triggered sliding mode control,” in
2021 IEEE Second International Conference on Control, Measurement and Instrumentation (CMI), Jan. 2021, pp. 61–66. doi:
10.1109/CMI50323.2021.9362877.
[25] M. Leung, R. Ortiz, and B. W. Jo, “Semi-automated mid-turbinate swab sampling using a six degrees of freedom collaborative
robot and cameras,” IAES International Journal of Robotics and Automation (IJRA), vol. 12, no. 3, pp. 240–247, Sep. 2023, doi:
10.11591/ijra.v12i3.pp240-247.
BIOGRAPHIES OF AUTHORS
Shripad Bhatlawande received the B.E. degree in electronics engineering from
the Shri Guru Gobind Singhji College of Engineering and Technology, Nanded, India, in
2000, the M.E. degree in electronics engineering (digital systems) from the Government
College of Engineering, Pune, India, in 2008, and the Ph.D. degree from the Indian Institute
of Technology Kharagpur, India, in 2015. His research interests include embedded systems,
machine intelligence, and robotics. He can be contacted at
[email protected].
Sakshi Kulkarni is pursuing B.Tech. in Electronics and Telecommunication
Engineering from Vishwakarma Institute of Technology, Pune, India. Her research interests
include embedded systems and IoT. She can be contacted at
[email protected].
Shajjad Shaikh is pursuing B.Tech. in Electronics and Telecommunication
Engineering from Vishwakarma Institute of Technology, Pune, India. His research interests
include artificial intelligence and machine learning. He can be contacted at
[email protected] or
[email protected].
Sachi Kurian is pursuing B.S. in Biomedical Engineering from Rutgers
University School of Engineering in New Jersey, United States of America. Her research
interests include artificial intelligence and machine learning in biomedical applications. She
can be contacted at
[email protected].
Swati Shilaskar received a B.E. degree in electronics engineering, an M.E.
degree in digital electronics, and a Ph.D. degree from Sant Gadge Baba Amravati University,
India. Her research interests include computer-based diagnostic support systems, brain-
computer interfaces, machine learning, and automation. She can be contacted at
[email protected].