The ROS 2 Course

Overview

This is a 6-part course designed to teach you all about ROS 2 and how to use it. The course is designed to be completed in simulation, so you will need access to a ROS 2 installation, which can either be installed on your own machine or accessed on a range of managed computers across the University of Sheffield campus. See here for more information on how to access or install ROS 2.

Each part of the course comprises a series of step-by-step instructions and exercises to teach you how ROS 2 works and introduce you to the core principles of the framework. The exercises give you the opportunity to apply these principles to practical robotic applications. Completing this course (either in part or in full) will provide you with the necessary skills for working with our real TurtleBot3 Waffle robots in the Diamond.

The Course

  • Part 1: Getting Started with ROS 2

    Learn the basics of ROS 2 and become familiar with some key tools and principles, allowing you to program robots and work with ROS 2 applications effectively.

  • Part 2: Odometry & Navigation

    Learn about Odometry, which informs us of a robot's position and orientation in an environment. Apply both open- and closed-loop velocity control methods to a Waffle.

  • Part 3: Beyond the Basics

    Execute ROS 2 applications more efficiently using launch files, and learn how to change the behaviour of nodes at runtime using parameters. Learn about the LiDAR sensor and the data that it generates, and see how tools like SLAM make use of it.

  • Part 4: Services

    Learn about an alternative way for ROS 2 nodes to communicate across a ROS network, and the situations where this might be useful.

  • Part 5: Actions

    Learn about another key ROS 2 communication method, which is similar to a ROS Service but offers a few key benefits and suits alternative use cases.

  • Part 6: Cameras, Machine Vision & OpenCV

    Learn how to work with images from a robot's camera. Learn techniques to detect features within these images, and use this to inform robot decision-making.