In this hands-on workshop, you’ll learn the fundamentals of integrating object detection into a mobile robot running on a ROS/Jetson framework. After developing your code in a Gazebo simulation environment, you’ll deploy it to the physical robot for testing.
You’ll start with an overview of the Robot Operating System (ROS) and its associated architecture. Then, you’ll build a node for simple movement of the robot using the development workflow: simulate, develop, and deploy. You’ll proceed to integrate image recognition and object detection deep neural network (DNN) models, including an exploration of how to build your own models with DIGITS. You’ll verify the robot’s behavior in simulation and finally deploy the project to a Jetson/ROS robot.
Throughout the workshop, you’ll get hands-on simulation and coding experience using a live GPU-accelerated environment. At the end, you’ll have access to additional resources to design and deploy Jetson-based applications on your own.
NVIDIA Deep Learning for Robotics (DL-R)
Course Summary
Learning Objectives
- Learn the general ROS paradigm of messages passing between nodes
- Learn to work with the robotic development workflow by taking a hands-on approach to simulation, development, and deployment using a Gazebo simulator
- Learn to integrate an object detection inference model, trained with DIGITS, into a ROS network to build autonomous behavior for a Jetson-based robot
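The first objective, ROS’s paradigm of messages passing between nodes over named topics, can be sketched without ROS itself. The toy `TopicBus` below is a hypothetical stand-in for the ROS graph (real nodes use `rospy.Publisher` and `rospy.Subscriber` and communicate via the ROS master); it only illustrates the publish/subscribe idea the course builds on.

```python
from collections import defaultdict

class TopicBus:
    """Toy stand-in for the ROS graph: routes messages by topic name.
    Illustration only; not how ROS transports messages."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A "node" registers a callback for a topic, like rospy.Subscriber.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A "node" publishes; every subscriber's callback fires.
        for callback in self._subscribers[topic]:
            callback(message)

# Two "nodes": one publishes camera metadata, one collects what it receives.
bus = TopicBus()
received = []
bus.subscribe("/camera/image_info", received.append)
bus.publish("/camera/image_info", {"width": 640, "height": 480})
print(received)  # [{'width': 640, 'height': 480}]
```

The decoupling shown here is the point: the publisher never names its subscribers, which is what lets ROS nodes be developed, simulated, and replaced independently.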
Prerequisites
Basic familiarity with deep neural networks; basic coding experience in Python or a similar language
Outline
Introduction to ROS Robot Control (120 minutes)
- System overview
- ROS and Gazebo
- Coding & testing in simulation
Work with ROS nodes and topics on a cloud desktop to code and run robot movement in a Gazebo simulation.
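As a taste of the movement code, a ROS node typically drives the robot by publishing `geometry_msgs/Twist` velocity messages. The sketch below is self-contained plain Python (rospy omitted so it runs anywhere); the `Twist` dataclass mirrors the two fields a differential-drive robot uses, and the open-loop square path, speeds, and rates are illustrative assumptions, not the course’s actual exercise.

```python
from dataclasses import dataclass

@dataclass
class Twist:
    # Mirrors the two fields of geometry_msgs/Twist that a
    # differential-drive base uses: forward speed (m/s), yaw rate (rad/s).
    linear_x: float = 0.0
    angular_z: float = 0.0

def square_path_commands(side_time=2.0, turn_time=1.57, rate_hz=10):
    """Open-loop command sequence: drive one side, turn ~90 degrees,
    repeat four times, then stop. Timings are illustrative."""
    commands = []
    for _ in range(4):
        commands += [Twist(linear_x=0.2)] * int(side_time * rate_hz)   # forward
        commands += [Twist(angular_z=1.0)] * int(turn_time * rate_hz)  # turn left
    commands.append(Twist())  # zero Twist = stop
    return commands

# In a real node, each Twist would be published to a topic such as
# /cmd_vel at rate_hz; here we just build the sequence.
cmds = square_path_commands()
print(len(cmds))  # 141
```

In the actual exercise the publishing loop runs under `rospy.Rate`, and Gazebo’s simulated base consumes the same messages the physical robot will, which is what makes the simulate-develop-deploy workflow possible.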
Deploy to the Robot (60 minutes)
- Deploy and test

Deploy your code to the physical robot and test it in the real world.
ROS Integration of Image Recognition (90 minutes)
- Inference on the robot
- Training with DIGITS
- Coding & testing with ROS bags
Learn to integrate inference with ROS nodes. You’ll write code to parse classification messages and test with ROS bags on the desktop.
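The parsing step might look like the following. The `"class_id:confidence"` string format here is an assumed simplification for illustration, not a fixed ROS message type; in the workshop the message layout comes from the inference node you integrate.

```python
def parse_classification(msg: str):
    """Parse a 'class_id:confidence' list, e.g. '281:0.62, 285:0.21',
    into (class_id, confidence) pairs sorted by descending confidence.
    The string format is an assumed simplification, not a ROS standard."""
    pairs = []
    for entry in msg.split(","):
        class_id, conf = entry.strip().split(":")
        pairs.append((int(class_id), float(conf)))
    return sorted(pairs, key=lambda p: p[1], reverse=True)

# Take the top-scoring class from a sample message.
top_class, top_conf = parse_classification("281:0.62, 285:0.21")[0]
print(top_class, top_conf)  # 281 0.62
```

Testing this against recorded ROS bags, rather than the live camera, is what lets you iterate on the parsing logic entirely on the desktop.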
ROS Integration of Object Detection (60 minutes)
- Inference on the robot
- Training with DIGITS
- Coding & testing with ROS bags and Gazebo simulation
Combine what you’ve learned about control and inference integration to build a ROS node that moves toward an object it identifies autonomously.
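The core of such a node is a control law mapping a detection’s position in the image to a velocity command. A minimal proportional sketch (the gain, speed, and tolerance values are illustrative assumptions, not the course’s):

```python
def steer_toward(bbox_center_x, image_width, k_turn=1.5,
                 forward_speed=0.15, center_tolerance=0.05):
    """Map a detection's horizontal position to a (linear, angular)
    velocity pair: turn toward the object until it is roughly centered
    in the frame, then drive forward. Gains are illustrative."""
    # Normalized offset in [-0.5, 0.5]; negative = object left of center.
    offset = bbox_center_x / image_width - 0.5
    angular = -k_turn * offset  # proportional turn toward the object
    linear = forward_speed if abs(offset) < center_tolerance else 0.0
    return linear, angular

# Object centered at x=320 in a 640-px image: drive forward.
print(steer_toward(320, 640))
# Object at x=480 (right of center): stop and turn right (negative yaw).
print(steer_toward(480, 640))
```

In the full node this function would sit in the detection callback, with its output published as a Twist message, closing the loop between the object detection DNN and the motion control you built earlier.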
Deploy the Object Detection Control Node to the Robot (60 minutes)
- Deploy and test

Deploy your code to the physical robot to autonomously find objects.
Next Steps and Q&A (15 minutes)
- Discuss next steps and questions.
Use this time to discuss any questions about the assessment or course material.