[ROS Tutorials] ROS Perception Unit 2 # Follow Line with OpenCV (Python)


Welcome to this unit of Vision Basics in ROS, part two: follow a line. In the previous unit you learned the very basics of blob tracking, so in this unit you will do similar blob tracking, but now using OpenCV, and you will learn the basics of using OpenCV in ROS, because it has some special quirks. But first, two things. At the end of each unit there is a project part, where we give a brief intro of what you will have to do in the part of the project that refers to this unit. For the previous unit, in the project you have to build a blob follower, a blob tracker for a pink ball of the kind that AIBO robots traditionally have: you have to track that ball and drive towards it. That is the main project of unit one. We will talk about the project in a dedicated video, so don't worry about it; I only mention it here because I didn't mention it in the unit one video. In the case of this unit, the project asks you to make the robot follow a white line on the floor until it reaches a green star. After this unit you will know how to do that, so I just wanted to point it out.
So, let's use OpenCV in ROS. Using OpenCV in ROS has something very peculiar: you don't use OpenCV directly, you use cv_bridge. If you go to this link, it takes you to the package that connects OpenCV's image types and conventions with ROS. You may ask: why? OpenCV is a fantastic library, the reference library for computer vision, and it's used for many things, not only computer vision. On its main page you have a tutorials section that goes from basic functionality to machine learning, deep neural networks and GPU-accelerated computer vision, lots of stuff, so I highly recommend you take a look, it's really interesting. The peculiar thing is that OpenCV doesn't work with RGB images: it uses BGR. That would make it awkward to work with, because you would have to take all the images coming from your simulator or your real robot, which are normally RGB, and convert them to BGR. cv_bridge makes that conversion seamless, without any problems, but I mention it because there are some aspects of OpenCV where this detail comes up and it may cause problems.
Okay, so you are going to learn all this through the simulation you have here. You have a TurtleBot with a Kobuki base, and it's inside an environment that has a yellow line painted on the floor with stars of different colors, so that's where we are going to work. In essence, what we want is to make the Kobuki robot navigate through this environment using only an RGB camera: no point clouds, no lasers, nothing, just vision, one camera. What it will do is follow the yellow line and drive around the circuit, and we can imagine that at each star of each color there is something different: for example, a charging station at the green one, a place where you have to retrieve some data, a red star where you have to wait for a certain amount of time; it's just an example. This kind of color scheme is also used for navigation in industrial environments, so it's not purely theoretical and academic; you can use it in many other places.
So, we have this environment, and the first step is to get the images from a ROS topic and show them in OpenCV. What we have to do is retrieve the image data from the Kobuki robot and transform it into OpenCV image objects. Here you have an example, this Python script called line_follower_basics, and we are going to comment on the code because I think it's quite interesting. First the imports: we import cv2 (bear in mind that there are OpenCV versions 2 and 3; this module is called cv2 in both), then we import numpy, which lets us work with arrays and is really useful and used everywhere nowadays, and finally, from the cv_bridge module, we import CvBridge and CvBridgeError. Then the main: we execute it, it initializes the class, and that's about it; when we finish, it destroys all the OpenCV windows. In the LineFollower class, we initialize the CvBridge object and a subscriber to our image topic, which in this case is /camera/rgb/image_raw of type sensor_msgs/Image, with a callback: every time an image is received, this callback is called automatically. Inside it we take the data and convert it to our desired encoding, in this case bgr8 (we'll talk about that in a minute). Essentially, the idea is that you take this data and transform it into a cv_image: we go from the RGB ROS/Gazebo image to an OpenCV-type image, and from then on we work with OpenCV, not with ROS. We process the image, do whatever we need to do, and at the end we show that image.
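If you don't have the course script in front of you, a minimal sketch of the same idea looks like this (the node name and class name are my own; the topic name is the one from this simulation):

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the camera topic, convert the ROS image to an
# OpenCV image with cv_bridge, and display it. Not the course's exact file.
import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image


class LineFollower(object):

    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber(
            "/camera/rgb/image_raw", Image, self.camera_callback)

    def camera_callback(self, data):
        try:
            # Ask cv_bridge for a BGR image, which is what OpenCV expects.
            cv_image = self.bridge.imgmsg_to_cv2(data, desired_encoding="bgr8")
        except CvBridgeError as e:
            rospy.logerr(e)
            return
        # ...image processing goes here...
        cv2.imshow("Original", cv_image)
        cv2.waitKey(1)


def main():
    rospy.init_node('line_follower_node', anonymous=True)
    LineFollower()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        pass
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
```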
Okay, before executing this program, let's have a look at the images themselves. We have this topic, /camera/rgb/image_raw, of type sensor_msgs/Image. As you can see, the Image message, like nearly all ROS messages, has a header with a frame_id and a timestamp, then a height and a width (the dimensions of the image, say 800 by 600, or 4K, whatever), then the encoding, whether the data is big endian or not, the step, and the data, which contains all the image values. Let's do a rostopic echo of each field separately. Why? Because otherwise we would only get the image matrix, which is humongous, and we wouldn't see the other fields. So, the height: as you can see it's a bit slow, because this image topic normally waits until someone asks for the information, so it takes a little longer than ordinary topics. We get a height of 480 and a width of 640, so it's quite small, but bigger than the Mira robot from before, which was 400 by 400. The bigger the images, the slower the algorithm, so you have to keep the image size as small as you can without losing the definition you need for your recognition. Then the encoding, which is also very important: in this case the images are encoded as rgb8, 8-bit RGB. And finally the data, the matrix with the image: be aware that you get a humongous matrix here, and this is only one image. I highly recommend that you don't do a rostopic echo of the whole message, because you won't be able to stop it. So basically it's a matrix of numbers; that's an image in RGB encoding, where each number is part of the RGB value of one of the 640 by 480 pixels. Okay, that's the basics; this part we already commented on, then we show the image, and this last line destroys all the OpenCV windows, and you should get something like this.
So let's see what happens; let's execute it. I've already created my package, my_following_line_package. As dependencies I didn't add anything, because it isn't strictly necessary if you don't need to compile anything, although it is highly advisable to add cv_bridge and the OpenCV-related packages; it's just not required to make this work. Inside it I made a launch folder and a scripts folder with all the examples we're going to talk about, and some launch files that we will use. In this case I'm going to launch the line follower basics launch, so: roslaunch my_following_line_package and the launch file. Okay, now I go to my graphical tools, close this, and there we have it: this is the image stream of the RGB camera we have, and more or less it's seeing a line and a blue star. You have to do this test, because this is the basics: if this doesn't work, nothing else will work, so it's good to check it, and you should get something like that.
Okay, now, what is this by itself? Not very useful, because normally what we want is to make the image data more accessible, less noisy, and easier for our algorithms to detect what we want to detect. In this case we are going to apply filters to the image, and there are basically three steps: we will crop the image, we will convert it to HSV (I'll talk about that in a moment), and we will apply a mask. Let's go step by step through what each one is and what it does. The first step is getting the image as small as we can. Why? You can have, for example, an image of 1200 by 1024, and that's a lot of pixels; your algorithms or your computer may not be powerful enough to process all that data, and, more importantly, you may be processing rubbish that you don't need, which is not efficient. So what we want is to crop the image as small as we can so that our algorithm goes fast.
How do we do that? Remember, this cv_image is the object we got through the conversion, from the Gazebo camera image to an OpenCV image. First we access its shape, which returns height, width and channels; the important ones are height and width. Then we define some parameters; we don't have to do it this way, but it makes it easier when you want to make the changes you'll need in the exercises we'll do in a minute. Then we crop it. Cropping an OpenCV image in Python is really easy, because images are matrices, so we just slice along the rows (the height) and the columns (the width). Here we are keeping the entire width of the image, from here to here, but for the height we only keep a little band: we start from the height divided by 2, so from the middle of the image, which means everything above it is thrown away, and then we descentre it (if we only took the half we would start at the middle; by descentring we start lower down), and finally we keep only the number of rows we want to watch, from here to here. So we would only keep something like the blue stars, the yellow line and this part of the floor right in front of the robot.
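As a small sketch, the cropping step described above looks roughly like this, assuming cv_image is the BGR image returned by cv_bridge (the numbers are just the example values used in this unit):

```python
# Crop a thin horizontal band below the middle of the image.
height, width, channels = cv_image.shape
descentre = 160        # how far below the middle row to start
rows_to_watch = 20     # how many rows to keep

# Keep the full width, but only rows_to_watch rows starting below the centre.
crop_img = cv_image[height // 2 + descentre:
                    height // 2 + descentre + rows_to_watch,
                    0:width]
```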
So why these particular values, a descentre of 160 and 20 rows to watch, and not something else? Well, it's not obvious: it depends on the robot, on your algorithms, on how you want the robot to behave, and on your environment, a lot of variables. That's why we are going to do an exercise about it.
Okay, next step; this one is also important. One of the problems of working with color in robotics, or with vision in general, is that it depends a lot on the light you have, and RGB and BGR are really sensitive to the saturation of the colors: with a lot of light, or with hardly any light, the values change a lot. So we want to move to a color encoding that is less prone to errors, less sensitive to changes in lighting conditions, and more robust, and that is HSV. Here you have a picture of the idea behind HSV: you have a cone, each slice of it is a circle of colors, and if I select one point on that circle I am selecting a color; all the points beneath it are the same color, just with more or less saturation and brightness. That means that if I select, for example, this red, I will get all the reds beneath it: it doesn't matter whether there is more or less light, I will still get the same red, and that is really important. In a simulated environment like this one, which is evenly well lit, it's not a big deal, but in realistic simulations, and in the real world, you do have this problem because you don't have the same light everywhere. And how do you convert from one to the other? It's very simple: you take your cropped image (or any OpenCV image variable), convert it with the BGR-to-HSV color conversion, and you have the same image, but in its HSV version.
Perfect. The next thing is that we are going to apply a filter: given the HSV image, we are going to filter the colors between a lower value and an upper value. Let's say I tell you: I want yellow, I want to track this yellow, and I consider yellow in HSV to be anything from this value to this value. As you can see, it looks like RGB or BGR, but the values are completely different. So the question is: how do I find the RGB, BGR or HSV values that I need? It depends on your situation, but in our case it's very simple, because we have a web tool called a color picker (I'm using the Chrome one, but for Firefox there is an equivalent with a different name; it's essentially the same). What it does is pick a color from a page: I go here, I select, and you can see I'm sampling the colors of everything on the web page. I pick a point and I get all the information about that color: the RGB values, the HSV values and so on, so I can copy and paste them.
So what do I do? For example, I copy this value somewhere. Then I'm going to use a very small program I've written that converts a value you give it into HSV, so that you can take that HSV value and create your lower and upper limits for the filter. You can execute this code: you give it the BGR value (very important, BGR, not RGB, so you will have to swap the order), the one that you consider yellow, so this is my yellow, and then I use the same conversion function to change it to HSV; if I print it, I get the HSV value of that yellow. For the lower limit I use values of the same kind as we had here. Why 100, why 160, why 20 and not something else? Well, it's experience: you have to test and see which ranges work better. But the lower limit has to be lower than the value you obtained, and the upper one has to be higher or, in this case, the same, because this is basically the maximum value.
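A quick sketch of what that little helper program does: convert a single BGR color to HSV with OpenCV and build the limits around it (the yellow value and the margins below are just example numbers, not necessarily the ones used in the course):

```python
import cv2
import numpy as np

# A single pixel with the BGR colour we consider "yellow" (note the order: B, G, R).
yellow_bgr = np.uint8([[[0, 255, 255]]])
yellow_hsv = cv2.cvtColor(yellow_bgr, cv2.COLOR_BGR2HSV)
print(yellow_hsv)        # e.g. [[[ 30 255 255]]]

# Build lower/upper limits around that value: widen the hue a little and
# relax saturation/value so slightly different yellows still pass the filter.
h = int(yellow_hsv[0][0][0])
lower_yellow = np.array([h - 10, 100, 100])
upper_yellow = np.array([h + 10, 255, 255])
```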
So let's have a look. I have this script, so if I do a rosrun of my following line package and the color conversion script, with, in this case, red 123, green 255 and blue 1, this program does the same job as the color picker; it's just easier to show here, and you get the HSV version. As you can see, you have to be careful with the conversions, because the scales are not the same: the web tool shows percentages, while OpenCV uses values up to 255 (and 0 to 179 for the hue channel), so you have to convert, but this is the value on which you will base the lower and upper filters. Understood? There we go. Here you also have an image of how it looks when you convert to HSV: it's more or less the same scene, but slightly different, because it doesn't care so much about the saturation; it's much less sensitive to it.
And then the third part is applying masks. You may ask: why do I have to apply masks, and what is a mask? A mask is a filter that removes everything you don't need and only leaves the things you are interested in. In this case we are interested in the line, and for that we use the lower_yellow and upper_yellow variables that we set up before. Here we generate the mask from the HSV image and the lower and upper yellow limits, and if we show that mask image we see white where we consider the pixel to be yellow and black where we don't. As you can see, in the HSV image there were regions that were yellowish but not perfectly yellow, and here they either fall inside what you consider yellow or they are simply removed. This is useful for several reasons. Once you have the mask, we also compute a result image: it takes the cropped image, the region we cut out earlier, in its real colors, and applies the mask, leaving only the yellow part, so you get something like this: black everywhere except the yellow line. The mask itself is binary: it's just 1s and 0s. The reason we work with it is speed: because the values are only 0 or 1, not continuous, it lets us calculate the centroid of the blobs much faster. To calculate the centroid, the centre of this blob, you would in theory use integrals; in practice you obviously use discrete algorithms, and if inside them you only handle zeros and ones, it's much easier and faster.
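Putting the last two steps together, a sketch of the filtering pipeline could look like this, assuming crop_img is the cropped BGR image from before (the yellow limits are example values to tune for your scene):

```python
import cv2
import numpy as np

# Convert the cropped BGR image to HSV so the filter is robust to lighting.
hsv = cv2.cvtColor(crop_img, cv2.COLOR_BGR2HSV)

lower_yellow = np.array([20, 100, 100])   # example limits, tune for your scene
upper_yellow = np.array([50, 255, 255])

# Binary mask: 255 where the pixel falls inside the yellow range, 0 elsewhere.
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

# Result image: the original colours, but only where the mask is non-zero.
res = cv2.bitwise_and(crop_img, crop_img, mask=mask)
```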
That's the main reason we use masks and filters, and everything we did was to reach this point: we did the cropping to go faster with a smaller image, we did the HSV conversion to get robust color detection, and we used the mask to get this black-and-white image. That's it. The final step is to calculate the centroid, the centre of the blob that you detect; in this case we are only going to calculate one centroid, for one blob. We take the mask, the binary image (not the result), and calculate the moments, which is what performs those discrete integrals; it's more complicated than that, and you have these links, especially the one on image moments, which give you a good theoretical idea of how moments are calculated, but we won't go into it here. Once you have the moments M, you compute the x and y coordinates of the centre from them, and if for some reason you get a division by zero, which means you didn't detect anything, then you simply say that the centroid is in the centre of the image.
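As a sketch of that centroid step, the moments of the binary mask give the centre of the detected blob, and when nothing is detected (all zeros) m00 is zero, so we fall back to the centre of the cropped image:

```python
height, width = mask.shape
m = cv2.moments(mask, False)
try:
    cx = m['m10'] / m['m00']
    cy = m['m01'] / m['m00']
except ZeroDivisionError:
    # Nothing detected: pretend the blob is in the middle of the crop.
    cx, cy = width / 2, height / 2
```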
Okay, so now we know where the blob of the color we are interested in is. Now we have to draw it. OpenCV gives you a lot of tools to paint on your images and draw the contours of the objects you detect, and there is very good documentation on the drawing functions, with loads of things you can do: circles, ellipses, lines and so on. I highly recommend you check it out. In our case we are just going to draw a simple colored circle. How do you do that? You give it the image you want to draw on, the centre of the circle, the radius, the color and the thickness. You can leave most of it as it is; the interesting parts are the color and the thickness. This one is red, and why is it red? Because it's not RGB, it's BGR. And the thickness: I set it so that it draws a filled dot and not just a circle; if you set it to 1 you only get the circumference. Then what we do here is show all the images: the result, the mask, the HSV and the original cv_image. And this is how it looks: the original image, then the HSV image, which is the result of cropping and converting to HSV, the mask, and the result with the centroid drawn as a dot.
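A sketch of the drawing and display step described above (the radius of 10 and the window names are my own choices):

```python
# Draw the centroid on the result image. In cv2.circle the colour is BGR,
# so (0, 0, 255) is red; a thickness of -1 draws a filled dot, while 1 would
# draw only the circumference.
cv2.circle(res, (int(cx), int(cy)), 10, (0, 0, 255), -1)

cv2.imshow("Original", cv_image)
cv2.imshow("HSV", hsv)
cv2.imshow("MASK", mask)
cv2.imshow("RES", res)
cv2.waitKey(1)
```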
And finally we're really getting there: we are going to use these centroid values to move the robot, to make it turn. Here we use a very simple proportional control, the most basic control there is; it's imprecise and has many problems, as you'll see when you make it work, but it's a first step. Here you have an example of how to move the robot; I won't go into details, it's just publishing velocity commands, nothing very fancy. Then we have the follow_line_step_hsv script, which is all the code we commented on, part by part, put together. So, exercise number two: create a Python script inside your package with the provided code, see how it works, and then test different speeds for the robot. In the camera callback function (which runs every time you get an image and executes all of this), you have the linear speed and the angular speed, which is the turning: try different values. Then change the behaviour of the robot as needed; for example, sometimes the robot will get lost, so maybe add a recovery system of some kind.
So let's try it now: pause the video and have a go. Okay, let's see more or less how this works. I'm going to execute this follow line script; for my test I've set a descentre of 0 and rows_to_watch of 200, so you'll get something slightly different from me because I'm taking the lower part of the image. Let's have a look: there we go. As you can see, here is the mask and here is the result, and it's detecting the centroid more or less around here, which makes sense because it's around the centre, and then it tries to follow the yellow line. Now it starts... there we go, and now it has left the line here, because I didn't tell it what else to do: it only follows the line, and that's it. Now that the yellow line is gone, it will just keep going forever, because I didn't add any kind of recovery system. So let's stop it. The next one, the third exercise, is to track blobs of different colors: you can select the red star, the green star and the blue star. Do all three, and what you should see is something like this: in the mask, if you are tracking the blue star for example, you only see the blue star in white, and the result will be all black except the blue star. So pause the video and try it yourself.
Okay, so back again, let's do it. What we have to do is go here and change the yellow limits to blue ones: you take the RGB value of the blue star, turn it into HSV with the program we talked about, and create the filter limits from it. I execute it, and as you can see I'm now getting only the blue values. There we go; you can already see some of the problems this control has, it's quite difficult to control, but it is heading towards the blue star... and there we go, perfect.
So that was changing the filter colors. The next exercise is to also try changing the filter thresholds. I give you some options here, but basically the idea is, using the yellow, to be very strict about what is yellow and what is not, or not that strict, or even so loose that it catches all the colors. If you use a loose color detection of the yellow, like around here, you see that it catches a lot of colors, so it will pick up the green star and the yellow path and everything, and that's why it detects all of this. If you change it to being very strict, then even yellow that is slightly different won't be detected: for example, you can see in this image that this is yellow, but slightly different, and it doesn't catch it. How do you do this? The same as before, playing with the filter limits, for example putting these values or other values. Try it with the yellow and try it with other colors.
Fantastic. The fourth exercise is to change the descentre: make the cropped region bigger or smaller. We give you three options here, and of course try more if you want. The first option is centred: you take the middle of the image and keep only a very few rows, a very small band, so you are basically watching only that region. The second one starts from the same place but keeps more rows, more of the image; that is this one here, and you can see how it evolves as the robot moves around. With the small band, because it's so small, you lose the line very fast: you are only looking at that region, so when the line goes past it you lose it for good; with the bigger one you don't lose it, but you have to process more image. And the third one is descentred, like around here, and keeps a very small part, so we only see the yellow line when it's very close. Each option causes different problems, and you have to see which one works better for you. So let's have a try.
you so let’s let’s have a try so for example I’m going to pick again
the yellow and I’m going to select and we’ll select the filter yellow and then
let’s crop it so for example I’m going to select this the center 200 and rows
20 so it’s really really near to the robot the detection so I’m going to save
save and I’m going to reset the simulation and here and now it’s follow
the line there we go as you can see it won’t see the line
until it’s really there and maybe it will be too late and it won’t be able to
turn there you go we start turning turning turning turning returning yeah
there we go we’re getting the path you can see that it has problems but it’s
getting the path okay perfect there we go until thick now we have to kill it if it doesn’t
close for any reason then we have to lift again plus node kill and we kill
our node okay and we see that it’s stopped
Okay, fantastic. I hope you got to this point: we now have a robot that can follow lines or look for colors of any kind. Now, an additional step. In our simulation we want the robot to follow this line, and we don't want it to go down this other part; we want to control that. How do we do it? By getting different centroids, multiple centroids. As you can see in this image, up to now we were getting a single red one, which is, let's say, the centre of mass of all the yellow blobs in the image: there is more yellow here than here, so the centroid sits near the centre but slightly towards this side. But as you can also see, now we are painting a green centroid for each blob. The code is almost exactly the same, but we use the mask (remember, the binary image, not the yellow-on-black result) and the findContours function, which is slightly different from what we used before: it finds the contours, then we calculate the moments for each contour, for this one and for this one, and we paint a green circle at each one. And if for any reason there is a division by zero, we simply don't paint anything for that contour, which makes sense.
So here you have the example, the code is here, and here you have the final exercise of this topic of blob tracking, which uses this multiple-centroid step. The objective of the exercise is: when the robot faces a dilemma like this one here, turn left or turn right, make it turn in such a way that it always stays on this circuit, forever and ever (for the moment). For example, if it drives like this, comes around here, and says "hey, I have these two lines, where do I go?", then it has to take this line. So have a try, and when you have finished, or if you get stuck, come back and we'll look at a possible solution.
Hi, so you're back, let's continue. One possible solution is this: we get the contours, as we've discussed, loop over all the contours found and store their centroids in this centers array, and once we have it, we look at the x value of all the detected centroids and select the highest one. That means the robot always turns to the right: in this case it will start here, then go to the right, and it will keep working all the time. In case you don't get any centroids, you have to handle it: if the centers list is empty, just put the centroid in the middle, so that the robot keeps going straight. It's one possible solution; there are many others, but this is one.
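A sketch of that multi-centroid solution, assuming mask, res, width and height exist as in the earlier snippets (the contour-retrieval flags are just one reasonable choice, and findContours returns a different number of values depending on your OpenCV version, which is handled below):

```python
# findContours returns 2 values in OpenCV 2/4 and 3 values in OpenCV 3,
# so take the contours from the second-to-last position of the result.
ret = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
contours = ret[-2]

centres = []
for c in contours:
    m = cv2.moments(c)
    if m['m00'] == 0:
        continue                                  # skip degenerate contours
    cx = int(m['m10'] / m['m00'])
    cy = int(m['m01'] / m['m00'])
    centres.append((cx, cy))
    cv2.circle(res, (cx, cy), 5, (0, 255, 0), -1)  # one green dot per blob

if centres:
    cx, cy = max(centres, key=lambda c: c[0])      # rightmost centroid: turn right
else:
    cx, cy = width / 2, height / 2                 # nothing seen: keep going straight
```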
So let's see how it should work. We execute follow line multiple, and you see that it goes forward. Let's put this window here and this one here; as you can see, I'm only showing the result and the final image, which makes the algorithm go faster. You see it selected the first centroid and off it goes; okay, it caught the path, and now we have to wait until it reaches the dilemma, so I'll fast-forward the video. See you in a second. So, we're back: as you can see, it's already going through the dilemma. Notice that, because of the proportional control, it oscillates a lot, but in a moment you should see two green dots there... yes, it was really fast, but it took the right-hand branch. It probably would have taken that one anyway, because there is more yellow in this region, but it took it; let's watch the second dilemma. And you can see that the control is really basic, so it gives a lot of problems, it oscillates a lot because it's purely proportional, but we'll see how to solve that in a moment.
There we go... whoa, and it got lost; that happens sometimes. Okay, so why do we bring this up now? Because we need some kind of control that lets us move around in a smoother way; not perfect, because it still needs tuning, but better. And this is where the PID controller with perception comes in. You've seen that the proportional control oscillates a lot, so we need a more sophisticated control that lets us move smoothly, and that is PID. PIDs are a really deep topic and can be a lot of work; fortunately, ROS has our back, because there is a pid ROS package for this. Essentially, it gives you an infrastructure that lets you control things with a PID in a more user-friendly way. Here you have a demo of how it works. This is the PID test launch, which launches the PID controller node, then also an rqt_plot so we can see the values of the signals, and an rqt_reconfigure, which lets us change the PID values on the fly, within the limits we set up here, of course. Here we are setting the values of Kp, Ki and Kd, also the upper and lower limits of the signal we can send, the wind-up limit, and the maximum and minimum frequency at which we try to run the control loop.
So let's have a look. Here we are setting all this up for the PID controller node, which is the basic one, and then we launch this pid_control script, which represents our own system, the one we want to control. So we launch the basic PID controller node from the package, and then our system, plus some extra tools to graph the values and so on. Let's look at this pid_control. If you don't know anything about PIDs, you should stop the video and read up a little; I think we include a guide here, a practical rather than theoretical one, that will give you some idea of what we're doing and how to use PIDs. Essentially you need three things: a setpoint, a state and an effort. The setpoint is where you want to go; in this test we want to keep the signal at the value zero. The state is how the system really is: our system can have a value of two, of three, of minus one, and that is its state; in the case of a robot it would be, for example, how much it has actually turned, while the setpoint is where you want it to be. And then we have the control effort: the effort is what the controller applies to try to change the state so that it reaches the setpoint. That's the main idea; it's much more complex and much deeper than that, but that's the very basic idea. This program is our system, and in it we created two functions, one for the sine test and one for the step test; these are very basic control tests. In the step test we input a step, a constant value, and the controller tries to drive the system to the setpoint, but because this is just a test, not a real system, it keeps trying to move there continuously until we change it. The sine test is more or less the same: we make the state of the system oscillate as a sine wave, I think between 1 and minus 1, and the effort tries to counteract it so that the value goes to zero; the effort ends up being, more or less, the inverse function.
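For reference, a small sketch of how a "system" node talks to the pid package's controller node. I'm assuming the package's conventional topic names (setpoint, state and control_effort, all std_msgs/Float64); check the remappings in your launch file, since they may differ:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float64


def effort_callback(msg):
    # The controller's output: in a real system you would apply it here.
    rospy.loginfo("control effort: %f", msg.data)


rospy.init_node('pid_system_sketch')

setpoint_pub = rospy.Publisher('/setpoint', Float64, queue_size=1)
state_pub = rospy.Publisher('/state', Float64, queue_size=1)
rospy.Subscriber('/control_effort', Float64, effort_callback)

rate = rospy.Rate(20)
while not rospy.is_shutdown():
    setpoint_pub.publish(Float64(0.0))   # where we want the state to be
    state_pub.publish(Float64(1.0))      # where the (fake) system actually is
    rate.sleep()
```

This only illustrates the topic plumbing, not realistic system dynamics: the effort you receive should feed back into whatever produces the state.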
So let's execute it. I already have the PID test here, so we do a roslaunch of my following line package with the PID test launch. This doesn't have anything to do with the robot yet; we just wait until the graphs appear, and there we have it. In this case we are running the sine test: you can see that the setpoint data is 0, the state data is the red line, and the control effort data is the blue line. As you can see, while our state changes, the effort changes and tries to counteract it, so that the final value ends up at 0. With the dynamic reconfigure we can change the gains on the fly. For example, we lower the proportional gain and set it to 0.02: you see the effort now reacts much more slowly and no longer reaches the maximum value. We change it back and it works properly again. Then we change the integral gain, and you can see some differences, especially at the start; depending on the input it has more or less effect. The same with Kd: you can see, in particular, that the signals are no longer synchronized, so it reaches the values slower or faster. Now let's try the step test: we go to the pid_control script, switch it to the step test, wait until the system starts, and there we go. You see the state data oscillates from minus 1 to 1, and this is the effort; depending on the values we set here it oscillates more or less. For example, we can lower the proportional Kp and it reacts more slowly; we can add more integral gain and you see it doesn't quite settle at the target; and if we set a higher Kd it gets to the value faster but also oscillates more. It doesn't run away to huge values only because we set a maximum of 2 and a minimum of minus 2. So there we have it. Here you have the corresponding exercise, which is basically what we've just done, changing the values in the test, nothing very special, so try it.
Put this script in your package and try exactly the same things I did. Then you have exercise 7, which is to use all the knowledge you've gained from this and apply it to your robot: control it in a way that uses the PID, and plot the values, why not. The objective is to make the robot follow the lines smoothly. It doesn't have to be perfectly smooth; if with the PID it oscillates little enough, that's fine, because afterwards you can fine-tune it, so don't worry about that. Try this, and when you finish, or if you get stuck, come back.
Okay, done. This is one possible solution of how you could do it. As I told you, PIDs are a deep topic, so it's not easy to get these values right, and in fact these values are not optimized: the robot might not go exactly where we want, it will still have problems, and for that you also need recovery behaviours and so on. But essentially what you have to do is create a PID movement launch like this one, with the proportional, integral and derivative values that you think work best. How do you know them? Basically trial and error, or you run some experiments and calibrations, so it's not easy. Then you can add the plotting and also the rqt_reconfigure, in case you want to tweak some values and test. Then you have the follow line with PID launch, which launches this PID movement, the PID system, and then launches our line-following node that uses the PID for the movement.
And finally the PID Python script, which is almost exactly the same as before, with this difference: we are setting a setpoint, the place where we want to be. In this case, what we want is for the centroid we detect to be in the middle of the image; that's our desired value. If we don't detect anything, we say the centroid is in the middle, so the robot goes straight; if it's not in the middle, our desire is that it gets back to the middle, so we are trying to keep this red circle in the middle all the time. That is what our control wants. Then we have to set the state, the real value, which is where the centroid actually is, in this case cx. We also set the linear speed here, which you can change: if you go slower it can be easier to control, because the values change more slowly and you see the line for more seconds, which makes it easier. Essentially we publish that state value and then retrieve the effort, the value we need to feed into our system, in this case the robot, so that the centroid moves towards the centre, which is our main objective. We have some prints here just to be able to see it properly, and then we use the effort for the angular velocity. Because the inputs are screen positions, in pixels, we have to divide the effort by some value so that it becomes a sensible angular velocity; together with the Kp, that lets us tune it to the value we need.
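A sketch of what the camera callback might look like once the PID controller is in the loop. Everything here is an assumption to adapt: the helper get_centroid (crop + HSV + mask + moments), the publishers and the effort attribute would be set up in __init__ as in the earlier snippets, and the forward speed and scaling constant are values to tune, not the course's exact ones:

```python
from std_msgs.msg import Float64
from geometry_msgs.msg import Twist

def camera_callback(self, data):
    cv_image = self.bridge.imgmsg_to_cv2(data, desired_encoding="bgr8")
    height, width, _ = cv_image.shape

    # Hypothetical helper: crop + HSV + mask + moments, returning the centroid.
    cx, cy = self.get_centroid(cv_image)

    # Setpoint: we want the centroid in the middle of the image.
    self.setpoint_pub.publish(Float64(width / 2.0))
    # State: where the centroid actually is, in pixels.
    self.state_pub.publish(Float64(float(cx)))

    twist = Twist()
    twist.linear.x = 0.1                 # slow forward speed, easier to control
    # self.effort is updated by the /control_effort subscriber; divide by a
    # constant so a pixel-scale error becomes a reasonable angular velocity.
    twist.angular.z = self.effort / 100.0
    self.cmd_vel_pub.publish(twist)
```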
As I said, if this sounds strange, wait until we execute it; and if you still don't understand it, you should go over your control theory to get more of a feel for how this works. I know that even people who know control theory find it different when they apply it to a robot. So, basically this is what we talked about; let's execute it so you can see more or less what's going on. I'll arrange the windows so we can see everything: this one here, this one here, so we have a bit more room. Now we launch it. There we go: we have the reconfigure in case we want to change something, and we'll change it just for testing. At the start everything is flat, because it's not detecting anything; when it starts detecting, you see that the blue line is the effort and the red line is the position of the centroid, and you can see the difference between them. Notice it moves much more smoothly than with the plain proportional control, and here we can see more or less the values we are getting. Because the centroid is in the centre, the effort is zero and the position is around three hundred and twenty-something, which is the desired value; then, when the position changes a bit, the effort changes a bit too.
One thing you will have noticed when you tried it is that this system is really slow. What does that mean? That even if you run the control really fast, the robot can't move that fast, and not only that: the image data also arrives slowly, so you have to choose the PID values bearing in mind that, no matter how fast the controller reacts, the system won't update that fast. There we go: now it's deviating, you see it oscillates, and then it starts to control a bit better. Why does it oscillate so much? Because the integral and derivative gains are really small; we'll talk about that now. You see we are losing the track here; that's okay, it's not seeing anything, so it puts the centroid in the centre and just keeps going straight.
opposes independent okay let’s restart again and this time we’re going to
change values so this time I’m going to execute this and and you’ll see firstly
you’ll see that even without changing values the system won’t do exactly the
same thing okay so now let’s see what it does now it’s accepting it’s oscillating
it’s in the center now tries to turn again okay and get the time so more or
less it did more or less thing now what we’re going to do is change values so
for example I wanted to go faster to the solution but if I put this higher what
it does is it tends to oscillate more now no because you’re just in the center okay so you see now it’s oscillating bits
with the bit okay but as you can see well it’s okay it was okay you see that
you we don’t have value in the integrator if we put it higher one of
the effects is that you’ll get error so let’s let’s put it higher example this
this you’ll see it especially when it’s it starts to oscillate and then you have
very big changes then you’ll get values that aren’t very nice there you go it also lights well more or less it worked very nice
okay so it depends on how it works and so on so try different values and and
see if you can break it because I couldn’t break it now but basically
that’s the idea the main idea and let’s see what we have to do next so the next
Let's see what comes next. The next exercise is the extra one: create a definitive script that follows the correct path, so with multiple centroids, and that uses the PID to move smoothly, because what the PID gives you is control over how the robot moves in a more intelligent way. That's the first extra exercise. The next one is adding new functionality: create a topic, an action, or whatever you want, that lets you control more of what the robot does. For example, make your new following script subscribe to a topic: if you publish that the objective is yellow, the robot follows the yellow line; if you publish red, it follows the path until it finds the red star (bear in mind that means tracking multiple blobs of different colors); the same with green and with blue; and if you don't publish anything, or you publish something like stop, then it stops. This is just to practice adding extra functionality to your robot.
So that's it, we have finally finished unit two. Just a comment: for the project, you can now go to the project and get started. If you didn't do the first part yet you can do it now, and then the second part. In this one you will have to make the AIBO robot follow a white line; it's the same kind of thing, but the objective is that it follows the white line and finds a green star. The green star symbolizes a wireless charging area, so the robot has to find it, go there and stop, and then, supposedly, it would leave the charging station afterwards. Fantastic. So give a thumbs up if you liked this unit, subscribe if you didn't do it at the start, and see you in the next unit, where we will completely change topics: we will do surface and object recognition, and later people detection, things like face recognition, but the next one is object and surface recognition. See you there.
