Ryan Stonebraker

Software Developer

Raspberry Kicker Bot

The Raspberry Kicker Bot started out as an idea for an autonomous path-planning robot. However, as I delved further into the idea, I realized that many aspects of path planning could not be easily represented by a single algorithm. For example, I wanted the robot not only to perform localized path planning, but also to record its path history and fully utilize all of the sensory information available to it. In addition, I wanted it to be modular enough to easily support new sensors and telemetry overlays. Out of these key ideas came the decision to develop a platform rather than just a standalone robot.

There has been much work done in autonomous path planning in recent years. Nearly every large tech company, from Uber to Google to Apple, has some sort of stake in autonomous vehicles. These companies have vast resources and equipment to throw at the problem, so they can rely on brute-force methods involving large convolutional neural networks that process vast amounts of image data. In less well-funded corners of the tech world, however, the problem takes on a different meaning: what level of path planning can be achieved with rudimentary sensors and equipment? This project aims to be a testing ground for exactly that question.

In a perfect world with perfect sensors and telemetry, path planning is easy. Given a start point and an endpoint, a simple A* search can calculate an optimal path between the two, quickly rendering the problem solved. However, when dealing with sensors that occasionally spit out anomalous data or have a fixed range, perfect data cannot be relied on. With this in mind, I started the project by building an extensible GUI that could support a variety of telemetry overlays and be used to quickly debug problems caused by imperfect data.
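To make the perfect-world case concrete, here is a minimal A* sketch over a 2D occupancy grid. This is an illustration of how simple the problem becomes with clean obstacle data, not code from the project; the grid format and Manhattan-distance heuristic are assumptions.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* over a 2D occupancy grid (0 = free, 1 = obstacle).

    `grid` is a list of rows; `start` and `goal` are (row, col) tuples.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    def h(cell):  # Manhattan-distance heuristic to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:  # Reconstruct the path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        if g > g_cost[cell]:
            continue  # Stale heap entry; a cheaper route was already found
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = ng
                    came_from[nbr] = cell
                    heapq.heappush(open_set, (ng + h(nbr), ng, nbr))
    return None
```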

A simple demonstration of the GUI. The white points represent location history and the red points represent detected obstacles. Using this same scheme, new colored points can easily be overlaid to represent things such as optimal paths or areas of interest.
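The overlay scheme itself can be as simple as a mapping from layer names to colored point lists. The sketch below is hypothetical (the names and colors are placeholders, not the project's actual viewer code), but it shows why adding a new overlay type requires no changes to the drawing loop:

```python
# Hypothetical overlay registry: each layer is a named list of (x, y)
# points drawn in a single color.
overlays = {
    "history":   {"color": "white", "points": []},  # where the robot has been
    "obstacles": {"color": "red",   "points": []},  # detected obstacles
}

def add_layer(name, color):
    """Register a new telemetry overlay, e.g. a planned path."""
    overlays[name] = {"color": color, "points": []}

def add_point(layer, x, y):
    overlays[layer]["points"].append((x, y))

# A new overlay type slots straight into the existing scheme:
add_layer("planned_path", "green")
add_point("planned_path", 0.0, 0.5)
```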

When approaching the backend, I realized that its features needed to be built separately from the GUI to keep each one modular. This led me to a scheme with three critical parts: a local viewer, a robot controller, and a telemetry monitor. The local viewer simply represents everything displayed in the GUI. The robot controller and telemetry monitor, on the other hand, bridge the gap between what is seen on the screen and what the robot does. I decided that the simplest approach would be to set up a RESTful API using Node.js and Express, which acts as a gateway between the robot and the controller. The robot controller posts commands to the API; the robot's backend firmware (written in Python) listens for and receives the posted commands, performs the action, and responds with the appropriate telemetry data. The telemetry monitor then receives the telemetry data posted to the API and updates the local viewer.
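A rough sketch of what the firmware side of that loop might look like in Python, using the requests library. The server address, the /command and /telemetry endpoints, and the telemetry fields are all assumptions for illustration, not the project's actual API:

```python
import time
import requests

API = "http://192.168.1.10:3000"  # hypothetical address of the Express gateway

def execute(command):
    """Drive the motors for the given command and return telemetry.

    In the real firmware this would toggle GPIO pins; here it just
    echoes back a made-up relative displacement.
    """
    return {"command": command["action"], "dx": 0.0, "dy": 0.1}

while True:
    # Poll the API for the next command posted by the robot controller.
    resp = requests.get(f"{API}/command")
    command = resp.json() if resp.ok else None  # assume JSON null when idle
    if command:
        telemetry = execute(command)
        # Post telemetry back so the telemetry monitor can update the viewer.
        requests.post(f"{API}/telemetry", json=telemetry)
    time.sleep(0.1)  # avoid hammering the server
```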

A visual representation of the communication model between the robot and GUI.

After I set up the GUI and communication link, it was time to actually build the robot. My goal was to make it as simple and extensible a platform as possible. For this reason, I chose the cheap and simple L293D motor controller, some motors I had lying around, a Raspberry Pi, and a custom platform that I 3D printed.
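Driving an L293D from a Pi comes down to toggling GPIO pins: one enable pin and two direction inputs per motor. A minimal sketch using the RPi.GPIO library is below; the pin numbers are placeholders, not the project's actual wiring, and only one motor is shown (the second follows the same pattern):

```python
import RPi.GPIO as GPIO

# Placeholder BCM pin numbers for the left motor channel of the L293D:
# one enable pin plus two direction inputs. Actual wiring will differ.
LEFT_EN, LEFT_IN1, LEFT_IN2 = 18, 23, 24

GPIO.setmode(GPIO.BCM)
for pin in (LEFT_EN, LEFT_IN1, LEFT_IN2):
    GPIO.setup(pin, GPIO.OUT)

def left_forward():
    # IN1 high / IN2 low spins the motor one way; swapping them reverses it.
    GPIO.output(LEFT_IN1, GPIO.HIGH)
    GPIO.output(LEFT_IN2, GPIO.LOW)
    GPIO.output(LEFT_EN, GPIO.HIGH)

def left_stop():
    # Dropping the enable pin cuts power to the channel.
    GPIO.output(LEFT_EN, GPIO.LOW)
```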

The finished, hacked-together Raspberry Kicker Bot.

There is still quite a bit of work that could be done on the platform going forward. First and most importantly, I ran out of time to actually add sensors to the robot. As a result, it is currently little more than a drivable robot that can be controlled wirelessly and keeps rudimentary track of its position based on the driving commands sent to it. My plan is to integrate both an ultrasonic sensor and a basic camera, and to use that data to start developing path-planning algorithms that work on the platform.

Another design flaw lies in the way I store position data. When approaching the problem at the start, I knew there would be a large potential for error. To deal with this preemptively, I decided to make each location relative to the last known point, my thinking being that I could then use additional sensor data to correct local errors as needed. While this made some things easier (like drawing history points onto the screen), it made getting the location relative to the starting point much more tedious. Going forward, I plan to redesign this system to store position relative to the starting point and deal with position inaccuracies as I come across them.
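The trade-off is easy to see in code. With relative storage, recovering the absolute position means summing every delta in the history; with absolute storage, the current pose is just the last entry. A hypothetical sketch of the two representations (not the project's actual data model):

```python
# Relative scheme: each entry is a displacement from the previous point.
relative_history = [(0.0, 0.1), (0.05, 0.1), (-0.02, 0.1)]

def absolute_position(history):
    """Recover the pose relative to the start by summing every delta."""
    x = y = 0.0
    for dx, dy in history:
        x += dx
        y += dy
    return x, y

print(absolute_position(relative_history))  # (0.03, 0.3)

# Absolute scheme: store the running position directly, so the current
# pose is simply the last entry and no summation is needed.
absolute_history = [(0.0, 0.1), (0.05, 0.2), (0.03, 0.3)]
current = absolute_history[-1]
```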