
How can one design a line-following bot using only a camera sensor, but for high speeds? I am currently using the OpenCV library to process frames and calculate the steering angle from them. But at higher speeds, since the path changes rapidly, this approach does not work.

P.S. Is there a particular camera that works well with this application?

  • It depends on what sort of line it is, and on the speed as well. 1. How fast is your robot, in meters per second? 2. What is the line thickness? What is the minimum line turn radius? Surface color and line color also matter. How noisy might the background (surface) be? – Stepan Dyatkovskiy Jun 14 '22 at 16:30
  • The thickness of the line is variable; the color is black and can be adjusted. Speed of the robot: 25 mph, and as for the turn radius, the desired trajectory is a square (with minimal curvature). The testing is done on a test course with pebbles, potholes, etc. – n_m Jun 14 '22 at 20:03
  • Line following is a control loop. The loop has to respond quicker if you go faster; make sure your loop does that. -- OK, so you want a 60 fps camera? Or more? Consider that you don't need much resolution at all. A RasPi cam can be configured for high frame rates and low resolutions. – Christoph Rackwitz Jun 15 '22 at 01:14

1 Answer


It is supposed to be a complicated system, and line following is worth a good scientific publication in its own right. So here's a very simplified answer.

When you think about the algorithm, it's important to understand that it is always a trade-off. You can get a camera with a low FPS and poor resolution, but then you can only follow smoother lines with large turn radii.

Or, if you need to follow some crazy curve with sharp turns, you should get a good camera, or maybe even a few cameras.

First of all, let's assume the following conditions:

  1. You need a solution with a single camera or line sensor.
  2. The line is dark coloured and the background is light coloured.
  3. The robot speed is 25 mph.
  4. We also have two parameters:
    • thickness - we want to tolerate as much thickness variation as possible.
    • turn radius - we want to handle as small a turn radius as possible.

There we go.

At the top level, your system looks like a block with negative feedback.

In your case:

  • the feedback is the line's declination from the center... and the robot velocity.
  • the output signals are the steering and the desired robot velocity. I assume it is better to slow the robot down than to fail and lose the line.

Usually, in order to achieve a smooth reaction, you should use PID controllers and sometimes (in the space industry) Bellman equations.
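As a reference, here is a minimal PID controller sketch in Python (purely illustrative; the gains have to be tuned on your robot):

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def step(self, error, dt):
            # P: react to the current error.
            p = self.kp * error
            # I: accumulate error to remove steady-state offset.
            self.integral += error * dt
            i = self.ki * self.integral
            # D: damp fast changes (zero on the very first call).
            d = 0.0 if self.prev_error is None else self.kd * (error - self.prev_error) / dt
            self.prev_error = error
            return p + i + d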

Robot

Schematically, your robot might work like this (a rough code sketch of the loop follows the list):

  1. It accepts the line declination.
  2. It calculates the desired steering via the Steering PID.
  3. It might happen that you should reduce velocity; this is why you might need something like a Max velocity calculator.
  4. Then, knowing the max velocity and the current velocity, you calculate the velocity error and send it to the Thrust PID.
  5. The Thrust PID outputs a positive or negative signal which tells your motors (or rather the hardware motor controller) to accelerate or brake, respectively.
  6. Also, the steering signal goes to the steering servos.
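
Here is a rough sketch of how those blocks could be wired together, reusing the PID class above. MAX_SPEED, SLOWDOWN_GAIN, set_steering and set_thrust are placeholders for your own constants and hardware interface, not real APIs:

    MAX_SPEED = 11.0      # roughly 25 mph in m/s; adjust to your robot
    SLOWDOWN_GAIN = 2.0   # how aggressively to slow down in turns (made-up value, tune it)

    steering_pid = PID(kp=0.8, ki=0.0, kd=0.15)   # placeholder gains
    thrust_pid = PID(kp=0.5, ki=0.1, kd=0.05)     # placeholder gains

    def set_steering(angle):
        pass  # stub: send the angle to your steering servos

    def set_thrust(signal):
        pass  # stub: send the accelerate/brake signal to your motor controller

    def control_step(declination, current_velocity, dt):
        # declination is the line offset from the frame center, here assumed
        # normalized to [-1, 1] (pixel offset divided by half the frame width).
        # 1-2. Declination drives the Steering PID.
        steering = steering_pid.step(declination, dt)
        # 3. Max velocity calculator: the larger the declination, the lower the allowed speed.
        max_velocity = MAX_SPEED / (1.0 + SLOWDOWN_GAIN * abs(declination))
        # 4-5. Velocity error drives the Thrust PID (positive -> accelerate, negative -> brake).
        thrust = thrust_pid.step(max_velocity - current_velocity, dt)
        # 6. Outputs go to the servos and the motor controller.
        set_steering(steering)
        set_thrust(thrust)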

Now, having this schema at hand, we can talk a little bit about the algorithm.

As was said, a fast camera will allow you to follow sharper turns. BUT the turn radius also depends on other physical properties of your robot: mass, wheels, tires.

Camera sensor

If my understanding of your robot is correct, then the camera sensor is just a few filters:

  1. Thresholding. It might be Otsu thresholding or maybe even adaptive thresholding. Both methods allow you to work in different lighting conditions.

  2. Find the center of the line with a moments filter. As long as the line is black, you should invert the frame. That can be achieved by passing THRESH_BINARY_INV instead of THRESH_BINARY in the previous step.

  3. Pick the x coordinate of the moment centroid and compare it with the frame's mid-line:

    declination = x - frame_width/2

That's it.
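
Put together, a minimal OpenCV/Python sketch of those three filters might look like this (assuming a grayscale frame; the Otsu flag combined with THRESH_BINARY_INV takes care of the inversion from step 2):

    import cv2

    def declination_from_frame(frame_gray):
        # 1. Otsu thresholding, inverted so the dark line becomes the white (non-zero) region.
        _, mask = cv2.threshold(frame_gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        # 2. Image moments give the centroid of the white pixels, i.e. the line.
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None                  # no line visible in this frame
        cx = m["m10"] / m["m00"]         # x coordinate of the line centroid
        # 3. Declination: distance of the line from the frame mid-line, in pixels.
        return cx - frame_gray.shape[1] / 2

You would call this once per captured frame and feed the result (normalized if you like) into the Steering PID.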

This set of filters uses very efficient convolution algorithms and should work with minimal delay even on the oldest RPi versions.

If you need to improve the FPS, you can crop your frame from the top and the bottom.
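
For example (the 40%/80% band is arbitrary; pick whatever matches your camera mounting, and reuse declination_from_frame from the sketch above):

    h = frame_gray.shape[0]
    roi = frame_gray[int(0.4 * h):int(0.8 * h), :]   # keep only a horizontal band of the frame
    declination = declination_from_frame(roi)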

Very sharp turns

This solution can deal even with right-angled turns like this:

    ------
   |
   |
   ^
<robot>

All you need is to adjust the Steering PID parameters.

But it might not work well with U-turns, where the camera captures both directions:

    --
   |  |
   |  | 
   ^
<robot>

In this case, you should reduce the camera's angle of view.

Further improvements

In fact, the latter case, when your camera can see the whole U-turn, might be an advantage. The more curves you can recognize, the better you can plan the robot's movements. But it requires more robust and expensive algorithms.

You can also embed a simple LSTM model to recognize some dangerous cases.

  • Thank you for the detailed answer. I had one doubt: the declination value we are sending to the PID will be obtained from the captured image, right? And to tune the PID, will we also need a ground-truth value? If so, how can we calculate that? (I am new to PID control, hence was a bit confused.) Again, thanks for the detailed answer. – n_m Jun 15 '22 at 14:18
  • In the current case, on each step you should trust the previous one 100%. So you should trust the declination value you obtained from the image. Otherwise you would have to replace the PID with a more complicated block that accepts not only an 'error' but also some sort of 'confidence' signal, whereas a PID doesn't assume any sort of 'confidence'. – Stepan Dyatkovskiy Jun 15 '22 at 14:24
  • Thank you for the clarification. So I would not need any additional sensor? Currently I am working on this problem, and the system I have is not able to trace tighter curves, since the image-capturing window adds lag, and by the time the system captures a new image it has moved ahead. Wondering if changing the control strategy will have any impact on that, or if it is completely a hardware issue. – n_m Jun 15 '22 at 15:44
  • Ideally you should use Linux without an X server launched, and use OpenCV without the graphical interface. But I think most of the lag comes from other places. 1. You can try to profile your app. Here's an [article](http://euccas.github.io/blog/20170827/cpu-profiling-tools-on-linux.html) about profiling. 2. Lag might come from bad camera exposure settings: when it's too dark, the camera shutter tries to get more light and thus slows down the FPS. – Stepan Dyatkovskiy Jun 15 '22 at 16:06