Inspired by machine learning, ROS 2, Turtlebot3, and Rulex

29th November 2019
Posted By : Anna Flockett

One of the greatest challenges of a machine learning (ML) solution is being able to explain why a trained model gave the output it did for a given input. One could track down the network of neurons that activated, or the value of the discriminant function, but it would still be difficult to explain the decision to a human.

Guest blog written by Mihai Dragusu, Wind River.

I was introduced to Wind River partner Rulex, a company whose ML solutions focus on solving the interpretability problem of an ML model. They do this with a Logic Learning Machine (LLM) algorithm, which outputs rules describing the input-output relationship rather than a more traditional non-linear mathematical equation; these rules can then be turned directly into if-then-else statements.
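To make that concrete, here is a purely hypothetical example (my own illustration, not actual Rulex output) of how a small set of such rules maps straight onto an if-then-else statement:

```python
# Hypothetical illustration of rule-based output (not actual Rulex rules).
# Instead of a trained weight matrix, the model is a handful of readable conditions.
def predict(x1, x2):
    if x1 > 0.7 and x2 <= 0.2:   # rule 1
        return "class_A"
    elif x1 <= 0.7:              # rule 2
        return "class_B"
    else:
        return "class_A"         # default when no rule fires
```

Each rule can be read, audited, and, if necessary, edited by hand, which is exactly what makes the approach interpretable.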

The Idea
I had a small robot platform (Turtlebot3) sitting on my shelf and zero inspiration for what to do with it, so when Rulex reached out to test their solution, it dawned on me to use the rule generation to make the robot avoid obstacles using the readings from the LIDAR sensor it is equipped with. So the inspiration came, and I wanted to experiment with ROS 2, Turtlebot3, LIDAR, and Rulex.

Robot and ROS 2           
The Turtlebot3 is a small robot that is regularly sold as a ROS and ROS 2 experimentation platform. From previous experience I already knew how the original ROS operated, so for the sake of experimentation I chose the second iteration, ROS 2.

The Experiment
The first thing that needed to be done was to boot and run the Turtlebot, and that proved to be unnecessarily difficult.

After the robot was assembled I expected the following:

1. Go online and download a Raspbian image with ROS 2 set up for the ROS node on the robot

2. Set up another ROS 2 installation (the server side) on a host machine

3. Have or install the scripts needed to drive the robot with a keyboard

4. Drive the robot remotely over the network using ROS 2 messages

However, none of that proved to be the case. I learned that after assembling the Turtlebot you have to follow a readme to install and/or compile everything manually… a minor but manageable inconvenience.

There was only one problem: the readme does not fully reflect present reality. It has you clone a repository without a commit id pinning the state from when the readme was written and the entire system was working. I ended up learning that some ROS 2 components moved faster than others, and that it was almost impossible to align them.

I eventually got a working ROS 2 install on Wind River Linux and was finally able to move the robot around and read LIDAR data.
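For reference, driving the robot over the network comes down to publishing geometry_msgs/Twist messages on the cmd_vel topic that the Turtlebot3 base subscribes to. A minimal sketch, simplified from the standard teleop approach (node name and speeds are placeholders of my own):

```python
# Minimal sketch: drive the Turtlebot3 over ROS 2 by publishing velocity commands.
# Assumes the robot's base node is subscribed to cmd_vel (the Turtlebot3 default).
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class SimpleDriver(Node):
    def __init__(self):
        super().__init__('simple_driver')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)

    def send(self, linear, angular):
        msg = Twist()
        msg.linear.x = linear    # forward speed in m/s
        msg.angular.z = angular  # turn rate in rad/s
        self.pub.publish(msg)


def main():
    rclpy.init()
    driver = SimpleDriver()
    driver.send(0.1, 0.0)  # creep forward
    rclpy.spin_once(driver, timeout_sec=0.1)
    driver.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```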

The Dataset
In order to provide a dataset for Rulex to train their ML with, I had to drive the robot around and record the LIDAR readings together with the decision I made while driving, correlated with each reading. The agreed-upon format was CSV (Angle1, Angle2, ... Angle360, Decision).

After I was able to drive the robot around, I had to make some minor changes to the ROS Python scripts to record the LIDAR readings and the decision (the key pressed). Just by looking at the LIDAR readings I could see some inconsistencies in the data; it turned out that the LIDAR is prone to noise and the data you get from the driver is very raw.
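The recording change itself was small. A simplified sketch (the file name is my own choice, and the way the key press arrives is glossed over here, since it lives in the teleop keyboard handler):

```python
# Simplified sketch: log each LIDAR scan together with the current driving key.
# Assumes a 360-sample LaserScan on the scan topic (Turtlebot3 default); the key
# pressed is stored in self.last_key by the (omitted) teleop keyboard handler.
import csv
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanLogger(Node):
    def __init__(self, path='lidar_dataset.csv'):
        super().__init__('scan_logger')
        self.last_key = 'w'  # decision taken while driving (e.g. w/a/d)
        self.csv_file = open(path, 'a', newline='')
        self.writer = csv.writer(self.csv_file)
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, msg):
        # One row per scan: Angle1..Angle360 followed by the decision.
        self.writer.writerow(list(msg.ranges) + [self.last_key])
        self.csv_file.flush()


def main():
    rclpy.init()
    node = ScanLogger()
    rclpy.spin(node)


if __name__ == '__main__':
    main()
```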

The LIDAR is supposed to have a range of 3.5 meters, but after testing, 1 meter seemed to be the limit of good accuracy. The other issue was that if you held something in front of the LIDAR, not all measurements would return appropriate values; for example, some of them would randomly drop to 0.

In order to have a clean and accurate dataset, I decided to reduce the range to 80 cm, since that was good enough for my test. The 0 drops I solved by replacing any value that did not resemble its neighbouring values with the mean of those neighbouring values.
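A sketch of that filtering step (the 80 cm limit comes from the text above; the neighbour-repair logic is my own simplified reconstruction, covering the zero-drop case):

```python
import numpy as np

MAX_RANGE = 0.8  # limit readings to 80 cm, where the LIDAR was reliably accurate


def clean_scan(ranges):
    """Clip a 360-value scan to 80 cm and repair spurious zero drops."""
    r = np.clip(np.asarray(ranges, dtype=float), 0.0, MAX_RANGE)
    n = len(r)
    for i in range(n):
        left, right = r[(i - 1) % n], r[(i + 1) % n]
        # A reading that dropped to 0 while its neighbours did not is noise:
        # replace it with the mean of the neighbouring values.
        if r[i] == 0.0 and left > 0.0 and right > 0.0:
            r[i] = (left + right) / 2.0
    return r
```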

My initial thoughts on the Turtlebot are that it is a good recreational/hobbyist experimentation platform, but you can't expect much more from it than that.

Rulex and Final Testing
With the dataset built, I contacted Rulex to build the rule set that would drive the robot in a scenario similar to the one used during training.

The dataset I built was smaller than I estimated would be needed; I wanted to test out the algorithm (stress it a bit), but the logistics of running the robot were somewhat cumbersome. So I sent ~2400 filtered and curated entries, and after a short while I got some results back with insights into the data.

It turns out Rulex has a GUI that can both build the ML model/rules and analyse the input data, so that the data scientist can understand what creates noise and what is redundant, as well as identify which factors contribute most to a decision.

In a follow-up meeting I was shown some of these features:

Drag-and-drop model training interface
Rule viewer, showing the dependency between each input and rule, and its importance
Feature ranking, describing which input value is most relevant for a given action (action "a" means "go left", "d" means "go right")

But more importantly, the output:

Using regular expressions, I converted the generated code into Python, since that was what I was using to control the robot in the first place.

While a typical ML algorithm would output a configuration file describing a big equation and a binary file with the equation's parameters, Rulex outputs readable C code describing the behavior. From my perspective this is the most important feature, and it is what makes this solution more relevant than a conventional one.
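The rules themselves are just conditions on individual angle readings. As a purely hypothetical illustration of the shape of that output (the angle indices and thresholds below are invented, not the actual generated rules), the converted Python looked something like this:

```python
# Hypothetical illustration of the generated rule code after conversion to Python.
# The angle indices and thresholds are invented for the example; the real rules
# were produced by Rulex from the recorded dataset.
def decide(angles):
    """Map a cleaned 360-value scan to a driving decision ('w', 'a' or 'd')."""
    if angles[0] > 0.6 and angles[15] > 0.6 and angles[345] > 0.6:
        return 'w'  # nothing close ahead: keep going straight
    elif angles[345] <= 0.4:
        return 'a'  # obstacle on the right-front: go left
    elif angles[15] <= 0.4:
        return 'd'  # obstacle on the left-front: go right
    else:
        return 'w'  # default when no rule fires
```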

A format like this exposes the logic of an ML algorithm to the developer, which is especially relevant in safety domains where the availability and interpretability of the system are critical. Having control over the ML logic and being able to explain why an output has a specific value is very refreshing for an ML developer, and in some domains it is mandatory.

Another fun fact: during my testing I noticed that when the floor was too slippery or had imperfections, the turning was affected, so from time to time the robot would spin out of control. I was able to add my own simple rule to stop the robot if it started doing that.
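The extra rule was nothing sophisticated. A sketch of the idea, based on the angular velocity reported by odometry (the threshold below is invented for the example):

```python
# Sketch of the hand-written safety rule: if odometry reports the robot spinning
# faster than it should ever turn, stop it. The 1.5 rad/s threshold is invented.
SPIN_LIMIT = 1.5  # rad/s


def safety_override(odom_angular_z, planned_decision):
    if abs(odom_angular_z) > SPIN_LIMIT:
        return 'stop'  # overrides whatever the learned rules decided
    return planned_decision
```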

Because I had a small dataset, the testing had to be simple; otherwise the robot would not behave as desired. However, in the videos you can see the robot avoiding the box and, once the coast is clear, turning back straight as it enters a narrow corridor.

Conclusions                                                                                                
Because of the way the relation between input and output is expressed, this approach cannot be an alternative to deep learning or image recognition algorithms: the input is simply too large, and the number of if statements would kill performance.

However, it does provide value as an alternative to classical ML algorithms (decision trees, support vector machines, etc.). In a professional, safety-oriented, industrial or medical environment, it might provide one of the best ways to retain both control and interpretability of a model.

In our specific case it also provided a software-dependency-free solution for doing ML on our VxWorks real-time operating system. Since the inference part of the model is done with if statements alone, there is no need to port an ML library. It also aligns with VxWorks' core values of safety and small footprint, and with the domains it targets (industrial, medical, etc.).

While the experiment of letting Rulex drive the decision making of an autonomous Turtlebot was a success, it doesn't even scratch the surface of the wide array of capabilities that could be leveraged, or challenges that could be solved, with a complex set of rules. Stay tuned for my next experiment!

Courtesy of Wind River. 

