
I am building a self-balancing, two-wheeled robot. I had been planning to implement a simple algorithm for the balancing part and then spend days tweaking it, but now I have the idea that I could use a neural network instead.

As input I want to give it the current velocity of the wheels, the gyro and accelerometer data in the dimensions relevant for balancing, and perhaps the input from the remote controller.

As output I want a direction and thrust for each motor.

Error situations include falling over and not moving according to the remote control.

The trouble I am having is how to train it. Ideally it will learn over time, but I don't see how the network can learn when, say, it does something and then falls over two seconds later.

So there is no way I can tell the network instantly that a certain output was wrong. One idea I have is to "roll back" the entire network state a few seconds every time the robot falls. What is the proper way to do this?
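The delayed-feedback situation described above is known in reinforcement learning as the credit assignment problem. Rather than rolling the network back, one common approach is to log the recent trajectory and, when a failure occurs, spread the blame over the last few seconds of decisions with a geometric discount. A minimal sketch (all names and the discount value are illustrative, not part of any library):

```python
# Hypothetical sketch: when the robot falls, blame the last few logged
# actions with exponentially discounted weights, instead of trying to
# "roll back" the network state.

def credit_weights(n_steps, gamma=0.9):
    """Weight for each of the last n_steps actions before a failure.

    The most recent action (index n_steps - 1) gets full weight 1.0;
    earlier actions get geometrically less blame.
    """
    return [gamma ** (n_steps - 1 - i) for i in range(n_steps)]

def training_targets(logged_outputs, fell, gamma=0.9):
    """Turn a logged trajectory into per-step error weights.

    logged_outputs: list of (left_thrust, right_thrust) tuples
    fell: True if the episode ended with the robot falling
    Returns (output, weight) pairs; the weight scales how strongly
    each step is penalised when the network is updated.
    """
    if not fell:
        return [(o, 0.0) for o in logged_outputs]
    weights = credit_weights(len(logged_outputs), gamma)
    return list(zip(logged_outputs, weights))
```

With this, an episode that ends in a fall produces stronger corrections for the most recent actions and weaker ones further back, which is the usual substitute for an instant error signal.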

I would also like the network to try to conserve energy: using power is negative, but necessary.

I hope to be able to use libfann on a 1 GHz BeagleBone Black computer.

Extra info: I will not allow the robot to fall over, so a manual algorithm will take over control if certain threshold values are reached, bring the robot back to a neutral position, and then hand control back to the network.
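The takeover logic described here can be sketched as a small supervisor that picks which controller runs each cycle. This is only an illustration; the threshold values and names are made up:

```python
# Hypothetical supervisor: hand control to a hard-coded recovery routine
# when the tilt crosses a threshold, and give it back to the network
# only once the robot is close to upright again (hysteresis).

TILT_LIMIT = 0.35   # radians; beyond this the manual algorithm takes over
SAFE_ZONE = 0.05    # radians; inside this the network regains control

def select_controller(tilt, current):
    """Return which controller ("manual" or "network") runs this cycle."""
    if abs(tilt) > TILT_LIMIT:
        return "manual"
    if current == "manual" and abs(tilt) > SAFE_ZONE:
        return "manual"          # keep recovering until near upright
    return "network"
```

The hysteresis (two thresholds instead of one) avoids rapid switching back and forth right at the limit.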

frodeborli
  • I like this idea. For your fallback algo, you should do what the segway does (try and keep the shaft at a right angle at all times) – phyrrus9 May 18 '14 at 17:34
  • Check out recurrent neural networks, such as http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5336158&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5336158 – Sean Barbeau May 18 '14 at 19:17
  • Do you know about [Reinforcement Learning](http://en.wikipedia.org/wiki/Reinforcement_learning) (see also ["the book"](http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html))? – lmjohns3 Oct 11 '14 at 00:56
  • Possible duplicate of [Clarification on a Neural Net that plays Snake](http://stackoverflow.com/questions/42099814/clarification-on-a-neural-net-that-plays-snake) – devinbost Feb 15 '17 at 20:33
  • @devinbost this is like the 10th question I have seen in the past 5 minutes on neural networks that you have flagged as a duplicate of the *Clarification on a Neural Net that plays Snake* question, which is a question with a -5 score but one which you happen to have written an answer for. I find it difficult to believe that all these questions are variants of a question with such a specific title. I don't pretend to understand your motives for doing this but please explain, or stop. – tom redfern Feb 15 '17 at 21:10

2 Answers


To let it learn, you need to log all of the inputs and outputs and then feed that data into the ANN. I have done this in the process control field for water treatment. The software can be expensive and I don't know of open-source alternatives, but the way you 'train' it is by giving it historical data: when you did x on an output, y came back on an input. You can then run a number of experiments while logging the data and feed that data into the ANN.
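Since the question mentions libfann: its `train_on_file` routines read a plain-text file whose first line gives the number of pairs, inputs, and outputs, followed by alternating input and output lines. A sketch of writing logged samples into that format (the function name is my own, and you should double-check the format against the FANN documentation):

```python
# Sketch: turn logged (sensor_inputs, motor_outputs) pairs into the
# plain-text training-file layout that libfann's train-on-file
# functions expect: a "pairs inputs outputs" header line, then
# alternating input and output lines.

def write_fann_training_file(path, samples):
    """samples: list of (inputs, outputs) tuples of floats."""
    n_in = len(samples[0][0])
    n_out = len(samples[0][1])
    with open(path, "w") as f:
        f.write(f"{len(samples)} {n_in} {n_out}\n")
        for inputs, outputs in samples:
            f.write(" ".join(f"{v:.6f}" for v in inputs) + "\n")
            f.write(" ".join(f"{v:.6f}" for v in outputs) + "\n")
```

You would append one sample per control cycle while driving the robot, then train offline on the accumulated file.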

  • This solution requires somebody to control the robot initially. I want it to try stuff until it understands it by itself. As I understand it I need a recurrent neural network. – frodeborli May 22 '14 at 17:53
  • that's exactly what my solution is except it doesn't learn on the fly, it needs historical data. This way you can try different weights on parameters and fine tune your ANN. what do you mean by try stuff until it understands? – user3666086 May 22 '14 at 18:13
  • I want the robot to try to teach itself how to balance, by trial and error. With your solution, I could just as well use a Kalman filter and a PID controller. – frodeborli May 23 '14 at 10:35

You could start with a simulator to avoid having to pick up the robot or resort to using a backup controller. You can find one here, built for the T-Bot, a self-balancing robot produced by KLiK Robotics. Look for the T-BotSimulator_KB_HD.py file in the python folder. The simulator is currently set up with a cascading PID loop, but the framework is simple and clear, and you could easily substitute an ANN for the existing controller. Dependencies are Numpy and Pygame. You will also need TBotTools, which is also in the Python folder. Good luck.
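As an illustration of the simulator idea (this is a stand-in, not the T-Bot code), even a one-dimensional inverted pendulum integrated with Euler steps is enough to exercise a balancing controller before touching hardware. All constants here are made up:

```python
# Illustrative toy simulator: an inverted pendulum on a moving base.
# Accelerating the base in the direction of the lean rights the robot.

import math

G = 9.81      # gravity, m/s^2
L = 0.2       # effective pendulum length, m (arbitrary)
DT = 0.01     # timestep, s

def step(theta, omega, wheel_accel):
    """Advance tilt angle theta (rad) and rate omega (rad/s) one step.

    wheel_accel is the horizontal acceleration produced by the motors.
    """
    alpha = (G * math.sin(theta) - wheel_accel * math.cos(theta)) / L
    omega += alpha * DT
    theta += omega * DT
    return theta, omega
```

Running a controller in a loop against `step` lets it fall over thousands of times per second at no cost, which is exactly what trial-and-error training needs.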