Program of the Virtual RoboCup Humanoid Open Workshops


All times are in GMT+2. Break on all days: 11:00 – 13:00.

Thursday, June 25
  • 13:00 Welcome
  • 13:30 Application of Inertial Measurement Unit For Humanoid Robot Stability
  • 14:15 A Black-Box Approach to Sim-to-Real Transfer
  • 15:00 Welcome by Peter Stone
  • 15:15 Documentary: “RoboCup – robots can’t jump”

Friday, June 26
  • 09:00 NUsight: Debugging Tool
  • 09:35 Lightning Talk Session
  • 10:00 “Gretchen” – a Humanoid Open Hardware Platform for Education and Research
  • 13:00 Roadmap: current status and upcoming changes

Saturday, June 27
  • 09:00 Servo Interface Semantics, Taylor Expansions in Inverse Kinematics Solving and Why You Should Care
  • 09:45 Using generative adversarial networks to enhance simulation images in the context of the RoboCup Standard Platform League
  • 10:30 Introducing the Humanoid Research Demonstration
  • 13:00 Hands-on with ROS 2

Sunday, June 28
  • 09:00 Computer Vision is a 3D Problem
  • 09:30 The Humanoid Open Competition Track
  • 13:00 The human in the loop: perspectives and challenges for robots’ behaviours in RoboCup 2050
  • 15:00 Farewell & Closing

The schedule can also be found in Google Calendar here.

Thursday, June 25

Afternoon Session

13:00: Welcome

13:00 – 13:10 Welcome and Introduction by the Organizing Committee
13:10 – 13:15 Welcome by Daniel Polani, Trustee and Past President of the RoboCup Federation
13:15 – 13:20 Welcome by Minoru Asada, Founding Trustee of the RoboCup Federation
13:20 – 13:30 Overview of the program, technical remarks

Presenter(s): Organizing & Technical Committee, Daniel Polani and Minoru Asada


  • Organizer slides: here
  • Input from Daniel Polani: here
  • Input from Minoru Asada: here

13:30 – 14:00: Application of Inertial Measurement Unit For Humanoid Robot Stability

Abstract: The presentation will give a rudimentary background in the use of IMUs for humanoid robotics, specifically in the context of RoboCup. We will cover a short background of IMU sensors, including the types and grades of available sensors and the benefits and limitations of each. Next, we will give an example with a specific sensor, the InvenSense MPU6050, processing its raw IMU data and converting it to roll, pitch and yaw. Finally, we will use the IMU-derived Euler angles to stabilize the robot, with examples. Participants of this presentation should gain a basic knowledge of using IMUs to produce data that can be used for stabilization.
Presenter(s): Humayun Khan
Team name: Electric Sheep
Activity type: Presentation (30 min)
Slides: Here
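The kind of processing the abstract describes can be illustrated with a complementary filter, one common way to fuse raw gyroscope rates and accelerometer readings into roll and pitch. This is a generic sketch of ours, not the presenter's code; the function names, axis conventions and the alpha weight are assumptions:

```python
import math

def accel_angles(ax, ay, az):
    """Roll and pitch (radians) estimated from accelerometer readings alone
    (assumes the only measured acceleration is gravity)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def complementary_filter(roll, pitch, gx, gy, ax, ay, az, dt, alpha=0.98):
    """Fuse gyro rates gx, gy (rad/s) with accelerometer angles.
    alpha weights the integrated gyro; (1 - alpha) weights the accelerometer."""
    acc_roll, acc_pitch = accel_angles(ax, ay, az)
    roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
    pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
    return roll, pitch
```

During fast motion the integrated gyro term dominates; the accelerometer term slowly corrects the gyro's drift, so with zero gyro rates the estimate converges to the accelerometer angle.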

14:15 – 14:45: A Black-Box Approach to Sim-to-Real Transfer

Abstract: Learning robot behaviors in simulation is faster, cheaper, and safer than learning in the real world. However, small inaccuracies in the simulator can lead to drastically different performance when simulator-learned policies are transferred to the real world. To close this reality gap, we present Grounded Action Transformation (GAT), an approach that improves the accuracy of the simulator without the need to modify the simulator directly. We show successes of GAT and related approaches. GAT successfully transfers a policy from the SimSpark simulator (used in the 3D Simulation League) to the Nao robot (used in the Standard Platform League), even outperforming policies hand-tuned for the Nao.

Presenter(s): Siddharth Desai, Ishan Durugkar, Haresh Karnan
Team name: UT Austin Villa
Activity type: Presentation (30 min)
Material: Slides; IROS papers the talk was based on can be found here and here.
Recording: Available on YouTube
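The grounding idea can be sketched in a few lines. This is our illustrative reduction of GAT to invented 1-D toy dynamics, not the authors' implementation; in the actual approach the forward and inverse models are learned from data rather than written in closed form:

```python
def real_forward(s, a):
    # Stand-in forward model of the real robot (hypothetical 1-D dynamics:
    # friction attenuates the commanded action).
    return s + 0.8 * a

def sim_forward(s, a):
    # The (inaccurate) simulator dynamics.
    return s + a

def sim_inverse(s, s_next):
    # Inverse dynamics of the simulator s' = s + a:
    # recover the action that produces a desired transition.
    return s_next - s

def grounded_action(s, a):
    # GAT's grounding step: replace the policy's action with the action
    # that makes the simulator reproduce the transition the real robot
    # would produce.
    return sim_inverse(s, real_forward(s, a))
```

Stepping the simulator with the grounded action then matches the real-world transition, so policies trained against the grounded simulator see realistic dynamics.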

15:00 – 15:15: Welcome by Peter Stone

Peter Stone, the current president of the RoboCup Federation, will welcome participants to the Virtual RoboCup Humanoid Open Workshops.

Presenter(s): Peter Stone

15:15 – 15:45: Documentary “RoboCup – Robots can’t jump”

Abstract: In 2018, filmmaker Silvia Schmidt followed the Bold Hearts to the RoboCup in Montreal. Knowing nothing about the world of robot football before, she was amazed to discover a whole new universe. Most importantly, though: are the Bold Hearts going to win?

Silvia will give a short introduction to her documentary before we will live-stream the movie.

Presenter(s): Silvia Schmidt
Activity type: Presentation (30 min)

Friday, June 26

Morning Session

09:00 – 09:30 NUsight: Debugging Tool

Abstract: NUsight is a visual, real-time and open source robot debugging utility that provides the user with comprehensive information regarding low-level functionality. The system implementation facilitates simple and rapid extension or modification, making it a useful utility for debugging any similar complex robotic framework.
Presenter(s): Josephus Paye II
Team name: NUbots
Activity type: Presentation (30 min)
Slides: Here
Recording: Available on YouTube

09:35 – 09:50 Lightning Talk Session

1. Development of an Open Platform Humanoid Robot GankenKun
Abstract: We introduce the open humanoid robot GankenKun developed by CIT Brains.
Presenter(s): Takaharu Nakajima
Team name: CIT Brains
Activity type: Lightning Talk (5 min)
Recording: Available on YouTube
2. Pose Estimation of the Humanoid Robot Using Deep Learning
Abstract: We are developing a pose estimation system for the humanoid robot based on OpenPose. It will be used to estimate the state of the robot, such as walking and kicking. We introduce the system and the experimental results.
Presenter(s): Satoshi Shimada
Team name: CIT Brains
Activity type: Lightning Talk (5 min)

10:00 – 11:00 “Gretchen” – a Humanoid Open Hardware Platform for Education and Research

Abstract: “Gretchen” is an experimental open humanoid robot aiming to be a versatile platform for education and research. The main focus of the project is to keep the robot as open and as low-cost as possible, making it accessible for students and researchers at many levels. At the same time, the robot is intended to be capable enough to provide a rich platform for research. One of our development goals for Gretchen is for the robot to be able to participate in RoboCup Humanoid League competitions. This is achieved through the modular design of the robot, the use of widely accessible components and manufacturing techniques, as well as extensive documentation and teaching materials. All hardware, software and documentation are developed as open source.

The robot Gretchen is currently in a prototype stage and resembles the lower part of the human body with 10 degrees of freedom. A complete robot is planned to have a height of about 110 cm. Only widely accessible materials, electronic components and manufacturing methods are used. Joints and drive parts are 3D-printed; limbs and the torso are laser-cut from plywood. The joints are actuated indirectly through toothed belts, which relieve stress on the motors. As one of the highlights, the robot is powered by low-cost servo motors controlled by custom-developed control boards, so-called Sensorimotor boards. The boards are completely open source, can be used to drive a wide range of brushed DC motors, and provide smart-servo features such as direct PWM control, various sensory feedback (position, current, voltage, temperature), RS485 bus communication and customizable firmware. A first version of the extensive documentation can be found here.

Several prototypes of the robot “Gretchen” have already been produced and are under further development. The robot is also already being deployed in teaching, where students work and experiment with its design, actuators, 3D-printed components, firmware, power supply, communication bus, software libraries and the API.

In this talk, we will present the current state of the Gretchen project and some of the robot’s most interesting features in more detail. A part of the talk will be specifically dedicated to the Sensorimotor boards.

Presenter(s): Heinrich Mellmann, Anastasia Prisacaru, Matthias Kubisch
Team name: Berlin United (SPL)
Activity type: Presentation (60 min)
Slides: Here
Recording: Available on YouTube

Afternoon Session

13:00 – 15:00 Roadmap: current status and upcoming changes

Abstract: A new roadmap has been presented to the league here.

During this workshop, we will present and discuss the upcoming changes for the league.
This discussion will cover both short-term rule changes and the long-term development of the league.

Some of the changes foreseen in the RoadMap draft have already been implemented in the 2020 rulebook. One of the most important among those was the reduction of size classes to two (Kid and Adult), without an overlap in height. The new maximum height for KidSize and minimum height for AdultSize robots is, however, still under discussion and in this workshop we would like to give an open space to gather arguments from the league about the new size division.

Teams have expressed the wish to see the frequency of rule updates reduced, which has been reflected in the RoadMap. Given that RoboCup 2020 has been postponed, we will also discuss whether to postpone any major rule changes until RoboCup 2024 or to agree on significant changes for RoboCup 2023.

Presenter(s): Ludovic Hofer, Maike Paetzel
Team name: Technical Committee
Activity type: Workshop (2 h)
Slides: Here
Recording: Available on YouTube

Saturday, June 27

Morning Session

09:00 – 09:30 Servo Interface Semantics, Taylor Expansions in Inverse Kinematics Solving and Why You Should Care

Abstract: Motion generation and planning is tricky. There is a plethora of approaches to enable smart or intuitive motion generation for robots but a one-size-fits-all solution has yet to be found. This talk will not propose such a one-size-fits-all solution but will show how unusual(ish) interface semantics of smart actuators influence the motion generation semantics in my control software.

The off-the-shelf smart actuator usually wants to be controlled like so: “Tell me how far I shall move and how fast I should get there”. This always felt a bit odd as it renders generation of certain movements very cumbersome: Think about a trajectory starting at 0, going to 1 and then returning to 0. You’d tell the actuator to go to the final target value of 0 (as it happens, this is where it already is) but with a nonzero velocity. This apparent contradiction leads some “smart” actuators to not move at all – under certain conditions.

A nicer interface for smart actuators would be: “Give me a function with respect to time that I shall follow”. This function could be expressed as a polynomial, and the above contradiction would be resolved. However, the semantics of this interface have great ramifications for motion planning and generation: the “classic” smart actuators require information about a future pose, whereas my “unusual(ish)” actuators require information about the desired pose at the current point in time. As it turns out, these different semantics allow for a great deal of simplification and generalization in least-squares inverse kinematics solvers. Furthermore, the polynomials required by the actuators can be generated surprisingly easily by performing a Taylor expansion of the task functions at the current configuration.

My talk will cover the idea behind it all, the math involved and what it looks like on a robot.

Presenter(s): Lutz Freitag
Team name: 01. RFC Berlin (to be founded soonish)
Activity type: Presentation (30 min)
Slides: Here
Recording: Available on YouTube
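The connection the abstract draws can be made concrete with a small least-squares IK sketch of ours (not the speaker's implementation): linearizing the task function at the current configuration is a first-order Taylor expansion, and the resulting joint increment doubles as the first-order polynomial q(t) = q + Δq·t that a trajectory-following actuator could be handed. The 2-link planar arm and all numbers are assumptions:

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return (x, y)

def jacobian(q, eps=1e-6):
    """Numerical Jacobian of fk: the first-order Taylor coefficient of the task function."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    f0 = fk(q)
    for j in range(2):
        qp = list(q)
        qp[j] += eps
        fp = fk(qp)
        for i in range(2):
            J[i][j] = (fp[i] - f0[i]) / eps
    return J

def ik_step(q, target, gain=0.5):
    """One damped least-squares IK step: linearize fk at q and solve J dq = error.
    q + dq * t is exactly the first-order polynomial a trajectory-following
    actuator interface could consume."""
    f0 = fk(q)
    e = [target[0] - f0[0], target[1] - f0[1]]
    J = jacobian(q)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    dq0 = (J[1][1] * e[0] - J[0][1] * e[1]) / det    # 2x2 inverse, row 1
    dq1 = (-J[1][0] * e[0] + J[0][0] * e[1]) / det   # 2x2 inverse, row 2
    return [q[0] + gain * dq0, q[1] + gain * dq1]
```

Iterating ik_step drives the end-effector to the target; each iteration re-expands the task function at the new configuration, which is the per-cycle Taylor expansion the talk title refers to.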

09:45 – 10:15 Using generative adversarial networks to enhance simulation images in the context of the RoboCup Standard Platform League

Abstract: In this bachelor thesis, the problem of generating realistic images from simulated environments is addressed, specifically in the Standard Platform League of the RoboCup. Automatic generation of realistic images would be very beneficial for the development of machine learning algorithms. For the Standard Platform League, these developments are needed to increase the performance of ball or opponent detection. Our approach enables researchers to enhance the realism of an existing simulator simply by collecting images from the real world and using those as training examples for the generated simulated views. To do this, a transformation between simulated images and real images needs to be found. This transformation can be found by deploying generative adversarial networks. For this approach to work, the objective function has to be clearly specified; otherwise the transformed images are not as diverse or accurate as real images. Architectures like CycleGAN and MUNIT have been applied in previous work to make these kinds of transformations. Here, these architectures are adjusted to what is needed in the Standard Platform League. After tuning the loss functions and modifying the architectures, better results are obtained compared to the base implementations. Our approach significantly reduces the perceptual distance between a real image and a simulated one. Additionally, the impact of the approach is demonstrated by the fact that a machine learning algorithm, such as an image instance segmenter, performs notably better when trained on the transformed images compared to the simulated ones.
Figure: sample image with the simulation on the left and the image generated from the simulation on the right, with randomly (uniformly) sampled style parameters.
Requirements: Basic knowledge of artificial neural networks
Presenter(s): Hidde Lekanne gezegd Deprez
Team name: Dutch Nao Team (SPL)
Activity type: Presentation (30 min)
Slides: Here
Recording: Available on YouTube
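The CycleGAN-style objective mentioned in the abstract hinges on a cycle-consistency term, which can be sketched as follows. This is our illustration using flat lists as "images" and placeholder generators; the L1 metric and the weight lam follow the common CycleGAN formulation and are not taken from the thesis:

```python
def l1(a, b):
    """Mean absolute difference between two equally sized 'images' (flat lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(g_sim2real, g_real2sim, sim_img, real_img, lam=10.0):
    """CycleGAN-style cycle term: translating sim -> real -> sim (and
    real -> sim -> real) should reproduce the input image."""
    sim_cycle = g_real2sim(g_sim2real(sim_img))
    real_cycle = g_sim2real(g_real2sim(real_img))
    return lam * (l1(sim_img, sim_cycle) + l1(real_img, real_cycle))
```

The cycle term is zero exactly when the two generators invert each other, which is what keeps the learned transformation faithful to the input content while the adversarial term pushes the style toward the real domain.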

10:30 – 11:00 Introducing the Humanoid Research Demonstration

Abstract: The RoboCup Humanoid Research Demonstration is an initiative for showcasing the latest research relevant for humanoid robotics. It is meant to give room for presenting and discussing research initiatives that are not (yet) ready to be applied in a RoboCup competition. Any research is welcome to be presented, as long as it is related to humanoid robotics and has the potential of a practical demonstration. This includes both hardware and software advancements, as well as education with and for humanoid robots.

During this short presentation, the Technical Committee will give an overview of the ideas behind the Humanoid Research Demonstration and the current plan for establishing this league for RoboCup 2021 in Bordeaux. There will then be time for questions, discussions and feedback on how we can successfully organize the first version of the Humanoid Research Demonstration next year.

Presenter(s): Maike Paetzel, Ludovic Hofer
Team name: Technical Committee
Activity type: Presentation (30 min)
Slides: Here
Recording: Available on YouTube

Afternoon Session

13:00 – 16:00 Hands-on with ROS 2

Abstract: The next generation of the Robot Operating System (ROS 2) has matured over the last few years. The central framework and tools have converged into a stable system, reflected by the fact that by the time of the Virtual RoHOW 2020 there will have been two long-term support releases. More and more important packages are becoming available, making ROS 2 ready to be used as a platform for robotics projects. Team Bold Hearts has been using it effectively to implement their software framework.

In this workshop we aim to present ROS 2, its design and inner workings, and what makes it appealing as a platform for RoboCup and other robotic applications. The workshop does not expect participants to know anything about ROS 1 or 2. The main part of the workshop consists of several hands-on exercises for participants to become familiar with ROS 2 and learn how to implement their own packages. These exercises will involve working with a full 3-D simulation of a humanoid robot with sensors and actuators, as well as a computer vision pipeline.
To get the most out of the hands-on exercises, it is crucial to set up the Docker-based environment at least one day in advance.
Preliminary Schedule:
  1. Presentation: Introduction to ROS 2 (20 min)
  2. Set-up Humanoid 3-D simulation environment (10 min, Docker based environment prepared and installed by participants beforehand)
  3. Hands-on demonstration: ROS 2 basics – topics / services / parameters / visualization (30 min)
  4. Exercise 1: create a sense-act package (60 min)
  5. Exercise 2: advanced ROS 2 features, e.g. node composition, quality-of-service, node lifecycle (60 min)

Knowledge/experience level required:

  • Linux command line basics
  • Exercise 1: Python 3
  • Exercise 2: C++ / CMake (enough to follow provided steps)
  • No knowledge of or experience with ROS 1 or 2 required

Hardware/software required:

  • Mid-level PC capable of running 3-D physics simulation
  • Linux
  • Git
  • Docker
  • ~5–6 GB of free disk space

All requirements will be provided in advance as a Docker environment with ROS 2 and demo packages installed. The setup for participants involves installing Docker and downloading the development environment, estimated to take 10–60 minutes of preparation time, depending on download speed. The development environment must be set up at least a day before the workshop so there is enough time to assist with any issues.

Instructions and bootstrap/configuration files for downloading and test-running the environment will be provided at this Git repository. The repository’s issue tracker can be used to report and resolve any problems with the setup before the workshop.
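To give a flavour of the topic mechanism the hands-on exercises build on, here is a pure-Python toy of publish/subscribe decoupling. This is an illustration of the pattern only; it deliberately does not use the rclpy API, and all names and values are invented:

```python
class Topic:
    """Toy stand-in for a ROS 2 topic: a named channel that decouples
    publishers from subscribers, who never reference each other directly."""
    def __init__(self, name):
        self.name = name
        self._callbacks = []

    def subscribe(self, callback):
        # Register a subscriber callback on this topic.
        self._callbacks.append(callback)

    def publish(self, msg):
        # Deliver the message to every registered subscriber.
        for callback in self._callbacks:
            callback(msg)

# A minimal sense-act pair in the spirit of Exercise 1: a "sensor node"
# publishes range readings, an "actuator node" reacts via its callback.
scan = Topic('/scan')
commands = []
scan.subscribe(lambda distance: commands.append('stop' if distance < 0.5 else 'go'))
scan.publish(0.3)   # obstacle close
scan.publish(2.0)   # path clear
```

In ROS 2 the same decoupling happens across processes and machines via DDS, with typed messages and quality-of-service settings; the workshop exercises cover the real API.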

Presenter(s): Sander van Dijk, Marcus Scheunemann
Team name: Bold Hearts
Activity type: Workshop (3 h)
Recording: Available on YouTube

Sunday, June 28

Morning Session

09:00 – 09:30 Computer Vision is a 3D Problem

Abstract: This presentation will consider the importance of approaching computer vision as a geometrically 3D problem rather than a 2D problem. We will explore the influence of lens optics on projection and distortion and how this breaks 2D models. We will present both classical and machine learning-based techniques that can be employed to improve the simplicity and accuracy of these systems.
Requirements: A basic understanding of geometry is required. A basic understanding of computer vision and machine learning is recommended but not essential.
Presenter(s): Trent Houliston
Team name: NUbots
Activity type: Presentation (30 min)
Slides: Here
Recording: Available on YouTube
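A minimal sketch (ours, not the presenter's code) of why projection is inherently a 3-D problem: a pinhole camera model with a single radial distortion coefficient. All intrinsic values below are made-up examples:

```python
def project(point, fx=600.0, fy=600.0, cx=320.0, cy=240.0, k1=-0.2):
    """Project a 3-D point in the camera frame to pixel coordinates using a
    pinhole model plus one radial distortion term (Brown-Conrady k1)."""
    X, Y, Z = point
    x, y = X / Z, Y / Z              # perspective division -> normalized coords
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                # radial distortion factor
    return (fx * x * d + cx, fy * y * d + cy)
```

A purely 2-D treatment implicitly sets k1 = 0 and ignores the perspective division; for wide-angle lenses the distortion factor deviates strongly toward the image edges, which is one concrete way 2-D models break.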

09:30 – 11:00 The Humanoid Open Competition Track

Abstract: The RoboCup Open Humanoid League has been proposed to attract new teams: both young teams interested in humanoid soccer robots and possibly mature teams interested in mastering specific, very narrow aspects of soccer robots. Though the general idea was widely welcomed in the community, the details have been discussed intensively. There have been talks with Junior league representatives, who are working on extending Junior activities towards a humanoid competition, as well as with mature researchers who may be interested in relaxed restrictions on robot design and rules. The objective of this session is to briefly present the current state and gather input on potential approaches. A discussion aimed at reaching agreement on a practical implementation as part of upcoming RoboCup events will follow.
Presenter(s): Reinhard Gerndt, Jacky Baltes
Team name: Technical Committee
Activity type: Workshop (1.5 h)

Afternoon Session

13:00 – 15:00 The human in the loop: perspectives and challenges for robots’ behaviours in RoboCup 2050

Abstract: Current literature in Human-Robot Interaction (HRI) shows that a robot aware of human social conventions increases people’s acceptance and positively affects the outcome of the interaction. For example, we know that people do not like to be approached too fast or from behind by a robot, and they might not want a robot to enter their personal or intimate space. However, human soccer players will have close physical contact with robots when they play against them in RoboCup 2050. Will players’ and the audience’s perception of robots change? Will human players actually be comfortable playing with robots even when the thrill of the challenge wears off? How will they perceive their own and the robots’ safety? Moreover, robots will not only need to plan, navigate and play soccer according to the FIFA rules; they will also need to understand and infer the human players’ intentions and multi-modal communication signals.
This workshop aims to highlight the future challenges that will be posed for the robotics community to allow human players to play against robots in RoboCup in 2050. We will particularly focus on the factors of human-robot interaction that can affect and enhance people’s acceptance, sense of safety and comfort in close physical interactions with robots.
Attendees are invited, but not compelled, to present a two-page input paper to be used as a discussion point for a panel discussion held during the workshop with the invited speakers and attendees. The input papers can be entirely theoretical, e.g. on a current open challenge in a human-robot RoboCup game or an interaction goal.
Authors should submit their papers formatted according to the IEEE two-column format. Use the following templates to create the paper and generate or export a PDF file: LaTeX or MS-Word. Organisers will look into the possibility of a joint publication of selected input papers, e.g. at the next RoboCup Symposium. Authors can email their papers to Alessandra Rossi <>.
Invited Speakers: Luca Iocchi (University of Rome “La Sapienza”, Rome, Italy) and Justin Hart (University of Texas, Austin, US)
Organizers: Alessandra Rossi, Merel Keijsers, Maike Paetzel
Team name: Bold Hearts, Electric Sheep, Hamburg Bit-Bots
Activity type: Workshop (2 h)
Recording: Available on YouTube

15:00 Farewell & Closing

Presenter(s): Organizing & Technical Committee

Recording: Available on YouTube