Human-Centric Physical AI: Collaborative Robots and Beyond Training Course
Human-Centric Physical AI focuses on fostering collaboration between humans and AI-driven physical systems to boost productivity and safety across various environments.
This instructor-led live training, available online or onsite, is designed for intermediate-level participants eager to investigate the role of collaborative robots (cobots) and other human-centric AI systems in modern work settings.
Upon completion of this training, participants will be able to:
- Grasp the fundamental principles of Human-Centric Physical AI and their practical applications.
- Examine how collaborative robots contribute to enhanced workplace productivity.
- Recognize and resolve challenges associated with human-machine interactions.
- Create workflows that maximize collaboration between humans and AI-driven systems.
- Foster a culture of innovation and adaptability within AI-integrated workplaces.
Format of the Course
- Interactive lectures and discussions.
- Extensive exercises and practice sessions.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- For customized training requests, please reach out to us to arrange details.
Course Outline
Introduction to Human-Centric Physical AI
- Overview of Physical AI and its human-centric approach
- The evolution of collaborative robots (cobots)
- Applications in industrial, healthcare, and service sectors
Collaborative Robots in Action
- Understanding cobot capabilities and limitations
- Key features: Safety, adaptability, and user-friendliness
- Hands-on demonstration of cobot interactions
Human-Machine Interaction
- Principles of effective collaboration between humans and AI
- Designing intuitive interfaces and workflows
- Addressing cognitive and ergonomic factors
Workplace Integration Strategies
- Assessing organizational readiness for AI adoption
- Creating AI-friendly work environments
- Training and upskilling employees for AI collaboration
Overcoming Challenges
- Resistance to AI adoption: Strategies and solutions
- Ethical considerations in AI-enabled workplaces
- Ensuring inclusivity and accessibility in AI design
Future Trends in Human-Centric Physical AI
- Emerging technologies in collaborative robotics
- Innovations in human-centered AI design
- Envisioning the future of AI-human collaboration
Summary and Next Steps
Requirements
- Fundamental understanding of AI concepts and automation
- Familiarity with workplace dynamics and team collaboration
Audience
- Workforce trainers
- HR professionals
- Managers integrating AI systems
Open Training Courses require 5+ participants.
NobleProg offers professional training programs designed specifically for companies and organizations. These trainings are not intended for individuals.
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for Robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Upcoming Courses
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Artificial Intelligence (AI) for Robotics merges machine learning, control systems, and sensor fusion to develop intelligent machines capable of autonomous perception, reasoning, and action. Leveraging modern tools such as ROS 2, TensorFlow, and OpenCV, engineers can now design robots that intelligently navigate, plan, and interact with real-world environments.
This instructor-led live training (available online or onsite) targets intermediate-level engineers looking to develop, train, and deploy AI-driven robotic systems using contemporary open-source technologies and frameworks.
Upon completion of this training, participants will be able to:
- Utilize Python and ROS 2 to build and simulate robotic behaviors.
- Implement Kalman and Particle Filters for localization and tracking purposes.
- Apply computer vision techniques using OpenCV for perception and object detection.
- Use TensorFlow for motion prediction and learning-based control.
- Integrate SLAM (Simultaneous Localization and Mapping) for autonomous navigation.
- Develop reinforcement learning models to enhance robotic decision-making.
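The Kalman-filtering objective above can be illustrated with a minimal sketch: a 1-D Kalman filter smoothing noisy position readings. The readings and noise variances here are made up for illustration, not drawn from the course materials.

```python
# Minimal 1-D Kalman filter: estimate a position from noisy readings.
# Noise variances and measurements are illustrative values only.

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Return the filtered estimate after each noisy measurement."""
    x, p = 0.0, 1.0             # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var        # predict: uncertainty grows with process noise
        k = p / (p + meas_var)  # update: gain weighs prediction vs. measurement
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

readings = [0.9, 1.1, 1.0, 0.95, 1.05]
print(kalman_1d(readings)[-1])  # converges toward the true value near 1.0
```

The same predict/update structure generalizes to the multivariate filters covered in the course, where the gain becomes a matrix computed from state and measurement covariances.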
Format of the Course
- Interactive lecture and discussion.
- Hands-on implementation using ROS 2 and Python.
- Practical exercises with simulated and real robotic environments.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led, live training in France (online or onsite), participants will learn the different technologies, frameworks and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The 6-week course is held 5 days a week. Each day is 4 hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
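As a taste of the PID-control topic listed above, here is a minimal sketch: a discrete PID loop steering a toy 1-D robot toward a setpoint. The velocity-commanded plant model and the gains are illustrative assumptions, not tuned for any real hardware.

```python
# Minimal discrete PID controller steering a 1-D "robot" toward a setpoint.
# The plant is a toy velocity-commanded integrator; gains are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

target, position = 1.0, 0.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
for _ in range(200):                # simulate 20 seconds of control
    velocity_cmd = pid.step(target - position)
    position += velocity_cmd * 0.1  # plant: velocity command moves the robot
print(round(position, 3))           # settles near the 1.0 setpoint
```

Real robot regulators add the refinements the course covers, such as integral anti-windup and derivative filtering against sensor noise.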
AI and Robotics for Nuclear
80 Hours
In this instructor-led, live training in France (online or onsite), participants will learn the different technologies, frameworks and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The 4-week course is held 5 days a week. Each day is 4 hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework intended to facilitate the creation of complex and scalable robotic applications.
This instructor-led live training, available online or on-site, targets intermediate robotics engineers and developers seeking to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) via ROS 2.
Upon completion of this training, participants will be capable of:
- Configuring and setting up ROS 2 for autonomous navigation use cases.
- Implementing SLAM algorithms for mapping and localization purposes.
- Integrating sensors such as cameras and LiDAR with ROS 2.
- Testing and simulating autonomous navigation within Gazebo.
- Deploying navigation stacks onto physical robots.
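The mapping half of SLAM can be sketched in a few lines: a log-odds occupancy grid updated from one simulated range reading. The grid size, sensor model, and log-odds increments are illustrative assumptions; production ROS 2 stacks such as slam_toolbox implement the full pipeline.

```python
import math

# Sketch of occupancy-grid mapping: cells along a range-sensor ray are marked
# free, and the cell at the measured range is marked occupied, in log-odds form.
# All values are illustrative.

GRID = 10
log_odds = [[0.0] * GRID for _ in range(GRID)]  # 0.0 = unknown (p = 0.5)
L_OCC, L_FREE = 0.85, -0.4                      # log-odds increments

def integrate_ray(x0, y0, angle, rng, cell=1.0):
    """Mark cells along a ray as free and the endpoint cell as occupied."""
    steps = int(rng / cell)
    for i in range(steps + 1):
        cx = int(x0 + i * cell * math.cos(angle))
        cy = int(y0 + i * cell * math.sin(angle))
        if not (0 <= cx < GRID and 0 <= cy < GRID):
            return
        log_odds[cy][cx] += L_OCC if i == steps else L_FREE

integrate_ray(0, 0, 0.0, 5.0)  # robot at (0, 0) sees a wall 5 cells ahead
prob = lambda l: 1 - 1 / (1 + math.exp(l))
print(round(prob(log_odds[0][5]), 2), round(prob(log_odds[0][2]), 2))
# 0.7 0.4: endpoint likely occupied, mid-ray likely free
```

Repeating this update over many scans, together with the localization half that estimates where each scan was taken, is the essence of the SLAM systems the course deploys.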
Course Format
- Interactive lectures and discussions.
- Practical exercises utilizing ROS 2 tools and simulation environments.
- Live laboratory implementation and testing on virtual or physical robots.
Course Customization Options
- For requests regarding customized training for this course, please contact us to make arrangements.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service integrates the Microsoft Bot Framework with Azure Functions, offering a robust platform for the rapid development of intelligent bots.
During this instructor-led live training, participants will discover efficient methods for developing intelligent bots using Microsoft Azure.
Upon completing the training, participants will be able to:
- Grasp the fundamental concepts underlying intelligent bots.
- Develop intelligent bots leveraging cloud-based applications.
- Acquire practical expertise in the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Implement established bot design patterns in real-world contexts.
- Create and deploy their first intelligent bot utilizing Microsoft Azure.
Target Audience
This course is tailored for developers, enthusiasts, engineers, and IT professionals with an interest in bot development.
Course Format
The training blends lectures and discussions with exercises, placing a strong emphasis on hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV is an open-source computer vision library that enables real-time image processing, while deep learning frameworks such as TensorFlow provide the tools for intelligent perception and decision-making in robotic systems.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers, computer vision practitioners, and machine learning engineers who wish to apply computer vision and deep learning techniques for robotic perception and autonomy.
By the end of this training, participants will be able to:
- Implement computer vision pipelines using OpenCV.
- Integrate deep learning models for object detection and recognition.
- Use vision-based data for robotic control and navigation.
- Combine classical vision algorithms with deep neural networks.
- Deploy computer vision systems on embedded and robotic platforms.
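To give a flavor of the pipeline work above, here is a pure-Python sketch of the 2-D convolution underlying classic edge detection. In practice OpenCV's cv2.Sobel and cv2.filter2D do this far more efficiently; the tiny image here is made up.

```python
# Pure-Python 2-D convolution with a Sobel kernel, the core operation behind
# classic edge detection. Illustrative only; use OpenCV for real images.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Valid-mode 2-D convolution of a 2-D list with a square kernel."""
    h, w, k = len(image), len(image[0]), len(kernel)
    out = [[0] * (w - k + 1) for _ in range(h - k + 1)]
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            out[y][x] = sum(image[y + i][x + j] * kernel[i][j]
                            for i in range(k) for j in range(k))
    return out

# A tiny image with a vertical edge between columns 1 and 2.
img = [[0, 0, 255, 255]] * 4
edges = convolve(img, SOBEL_X)
print(edges[0])  # [1020, 1020]: strong response where intensity jumps
```

Deep networks covered later in the course learn such kernels from data instead of hand-specifying them, but the underlying sliding-window computation is the same.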
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using OpenCV and TensorFlow.
- Live-lab implementation on simulated or physical robotic systems.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Developing a Bot
14 Hours
A bot, or chatbot, functions as a digital assistant designed to automate user interactions across various messaging platforms. It enables tasks to be completed more efficiently, eliminating the need for direct human conversation.
In this instructor-led live training, participants will learn how to begin developing bots by creating sample chatbots using specialized development tools and frameworks.
Upon completing this training, participants will be able to:
- Identify the various use cases and applications of bots
- Grasp the entire bot development lifecycle
- Examine the tools and platforms utilized in bot construction
- Construct a sample chatbot for Facebook Messenger
- Develop a sample chatbot using the Microsoft Bot Framework
Audience
- Developers interested in creating their own bots
Course Format
- A blend of lectures, discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI allows artificial intelligence models to operate directly on embedded or resource-limited devices, thereby reducing latency and power usage while enhancing autonomy and privacy within robotic systems.
This instructor-led live training (available online or onsite) is designed for intermediate-level embedded developers and robotics engineers seeking to implement machine learning inference and optimization techniques directly on robotic hardware using TinyML and edge AI frameworks.
Upon completion of this training, participants will be able to:
- Grasp the core principles of TinyML and edge AI for robotics.
- Convert and deploy AI models for on-device inference.
- Optimize models to improve speed, reduce size, and enhance energy efficiency.
- Integrate edge AI systems into robotic control architectures.
- Evaluate performance and accuracy in real-world scenarios.
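The model-optimization objective can be sketched with affine int8 quantization, the core trick TinyML toolchains such as TensorFlow Lite apply after training to shrink float32 weights. The weight values below are made up; a real converter also handles per-channel scales and operator fusion.

```python
# Sketch of post-training affine quantization: map float weights to int8
# with a scale and zero-point, then reconstruct to see the rounding error.
# Weight values are illustrative.

def quantize(weights, num_bits=8):
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128..127
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

weights = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
print(max(abs(a - b) for a, b in zip(weights, restored)))  # small rounding error
```

The payoff on a microcontroller is a 4x smaller weight tensor and integer-only arithmetic, at the cost of the bounded rounding error printed above.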
Course Format
- Interactive lectures and discussions.
- Hands-on practice utilizing TinyML and edge AI toolchains.
- Practical exercises conducted on embedded and robotic hardware platforms.
Customization Options
- To request customized training for this course, please contact us to arrange.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical course designed to equip participants with the skills to design and implement intuitive interfaces for human–robot communication. This training blends theoretical concepts, design principles, and programming practice to create natural and responsive interaction systems utilizing speech, gesture, and shared control techniques. Participants will learn to integrate perception modules, develop multimodal input systems, and design robots that collaborate safely with humans.
This instructor-led, live training (available online or onsite) targets beginner to intermediate-level participants looking to design and implement human–robot interaction systems that enhance usability, safety, and user experience.
By the end of this training, participants will be able to:
- Comprehend the foundations and design principles of human–robot interaction.
- Develop voice-based control and response mechanisms for robots.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems for safe and shared autonomy.
- Evaluate HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments in simulation or real robotic environments.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
This practical course, "Industrial Robotics Automation: ROS-PLC Integration & Digital Twins," is designed to bridge the gap between industrial automation and contemporary robotics frameworks. Participants will learn how to synchronize robotic systems based on ROS (Robot Operating System) with PLCs (Programmable Logic Controllers) and utilize digital twin environments to simulate, monitor, and optimize production processes. The curriculum emphasizes interoperability, real-time control, and predictive analysis through the use of digital replicas of physical systems.
This instructor-led, live training, available online or onsite, is tailored for intermediate-level professionals seeking to develop practical skills in connecting ROS-controlled robots with PLC environments and implementing digital twins to enhance automation and manufacturing efficiency.
Upon completion of this training, participants will be able to:
- Grasp the communication protocols facilitating interaction between ROS and PLC systems.
- Establish real-time data exchange mechanisms between robots and industrial controllers.
- Create digital twins for monitoring, testing, and simulating processes.
- Seamlessly integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Course Format
- Interactive lectures and architectural walkthroughs.
- Practical exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Customization Options
- To request customized training for this course, please contact us to arrange your session.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led live training in France (online or onsite) is designed for engineers interested in learning how to apply artificial intelligence to mechatronic systems.
Upon completing this training, participants will be able to:
- Obtain a comprehensive overview of artificial intelligence, machine learning, and computational intelligence.
- Grasp the core concepts of neural networks and various learning methodologies.
- Select appropriate artificial intelligence approaches to address real-world challenges.
- Deploy AI solutions within mechatronic engineering contexts.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training program that delves into the design, coordination, and control of robotic teams inspired by biological swarm behaviors. Participants will learn how to model interactions, implement distributed decision-making, and optimize collaboration across multiple agents. The course combines theory with hands-on simulation to prepare learners for applications in logistics, defense, search and rescue, and autonomous exploration.
This instructor-led, live training (online or onsite) is aimed at advanced-level professionals who wish to design, simulate, and implement multi-robot and swarm-based systems using open-source frameworks and algorithms.
By the end of this training, participants will be able to:
- Understand the principles and dynamics of swarm intelligence and cooperative robotics.
- Design communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviors such as formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimization problems.
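A core idea behind the consensus bullet above can be sketched in plain Python: robots on a line topology repeatedly average their value with their neighbors' until the team agrees. The topology and readings are made up; real swarms run such updates over local communication links.

```python
# Sketch of distributed averaging consensus on a 4-robot line topology.
# Each robot replaces its value with the mean of its own and its neighbors'.
# Topology and readings are illustrative.

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a line of 4 robots
values = {0: 0.0, 1: 4.0, 2: 8.0, 3: 12.0}          # e.g. local altitude readings

for _ in range(100):
    values = {
        i: (v + sum(values[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
        for i, v in values.items()
    }

print({i: round(v, 2) for i, v in values.items()})  # all converge near 6.0
```

No robot ever sees the whole team, yet every value converges to a common agreement point; the same primitive underlies the formation-control and coverage behaviors the course simulates.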
Format of the Course
- Advanced lectures with algorithmic deep dives.
- Hands-on coding and simulation in ROS 2 and Gazebo.
- Collaborative project applying swarm intelligence principles.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in France (online or onsite) targets advanced robotics engineers and AI researchers who wish to utilize Multimodal AI. The goal is to integrate various sensory data streams to create more autonomous and efficient robots capable of visual, auditory, and tactile interaction.
Upon completion, participants will be able to:
- Implement multimodal sensing within robotic systems.
- Develop AI algorithms for sensor fusion and decision-making.
- Build robots capable of executing complex tasks in dynamic environments.
- Address challenges related to real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A Smart Robot is an Artificial Intelligence (AI) system that can learn from its environment and its experience and build on its capabilities based on that knowledge. Smart Robots can collaborate with humans, working alongside them and learning from their behavior. Furthermore, they have the capacity for not only manual labor but cognitive tasks as well. In addition to physical robots, Smart Robots can also be purely software based, residing in a computer as a software application with no moving parts or physical interaction with the world.
In this instructor-led, live training, participants will learn the different technologies, frameworks and techniques for programming different types of mechanical Smart Robots, then apply this knowledge to complete their own Smart Robot projects.
The course is divided into 4 sections, each consisting of three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section will conclude with a practical hands-on project to allow participants to practice and demonstrate their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies
- Understand and manage the interaction between software and hardware in a robotic system
- Understand and implement the software components that underpin Smart Robots
- Build and operate a simulated mechanical Smart Robot that can see, sense, process, grasp, navigate, and interact with humans through voice
- Extend a Smart Robot's ability to perform complex tasks through Deep Learning
- Test and troubleshoot a Smart Robot in realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To customize any part of this course (programming language, robot model, etc.) please contact us to arrange.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves integrating artificial intelligence into robotic systems to enhance perception, decision-making, and autonomous control capabilities.
This instructor-led live training, available online or onsite, is designed for advanced robotics engineers, systems integrators, and automation leads looking to implement AI-driven perception, planning, and control in smart manufacturing settings.
Upon completion of this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lecture and discussion.
- Plenty of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.