I presented at the "3 Minute Thesis" Competition hosted by the Office of Graduate Studies at Saint Louis University on November 30, 2018. Unfortunately, my presentation exceeded the 3 minute time limit by 1 second, and I was disqualified! Here is the transcript of my presentation.
Have you ever watched a YouTube video of a cat hitching a ride on a Roomba robot vacuum cleaner and wondered if the Roomba is actually able to perform its job properly while it has a cat sitting on its back? I blame my thesis research for the fact that I've thought about these things...
Robots find their way around a room by using sensors to perceive the world in front of them and reconciling that view against a map to estimate their current position in the environment. The Roomba needs this estimate of its current location so that it can plan a driving route toward its goal of vacuuming under the couch. The location estimate is like a "You Are Here" sticker on the map at a shopping mall. Abnormal driving events - like a cat sitting on a Roomba, or a person picking up a robot and putting it down somewhere else - have the same effect as moving the "You Are Here" sticker to some completely wrong spot on the mall map. With an incorrect location estimate, the robot becomes unable to plan a path that will allow it to reach its goal. These abnormal driving conditions are also known as Robot Kidnapping Scenarios, and they can be detected and corrected to ensure the robot's success even under abnormal situations.
Existing methods of Kidnap Detection have two flaws. Some methods are built to detect only certain types of Kidnapping scenarios: a method might be built to detect situations in which a person picks up the robot, but be unable to detect that a cat is sitting on it. Other methods require specific sensors or large amounts of processing power, which prevents them from being easily implemented on just any robot configuration.
For my thesis research, I developed a method to detect Robot Kidnapping events that is more flexible than existing methods. My detection scheme is designed to combat those weaknesses by detecting any form of Robot Kidnapping on any robot. To accomplish this, it re-purposes some of the data that is generated in the normal course of the robot's localization process, which means there are no additional resource or configuration requirements to run the detection method.
To study this problem, I ran 40 simulations of a robot driving around a room. In the top example of the slide, the robot was driving under normal circumstances. As you can see, the algorithm tracking the robot's location represents the robot's possible true location as a cloud of points. Each point is associated with a confidence measure to represent the likelihood that this position is the robot's true current location. On the slide, the orange values are the most likely, while the blue values are the least likely. The main takeaway is that under normal circumstances, the confidences are spread across a spectrum of values.
The lower section of the slide shows the data immediately following a Kidnapping Event, when I picked up the robot and moved it to a new location. The localization algorithm naturally flattened out the variation in the confidence values, and this is represented in the plain green cloud on the graph.
This insight can be translated into the 10-bin histograms as seen on the right side of the slide. The skewed plot of the Kidnapped example was used to train an artificial neural network function to recognize the skew as an indicator that the robot has been Kidnapped. This method is 92% accurate and can identify Kidnapping Events within 3 update cycles of the localization algorithm.
Thanks to this system, cats can continue to enjoy Roombas, and now Roombas can get all of their work done.
Thesis Blog
This is a blog of my M.S. Electrical and Computer Engineering Thesis research. I've started to post here as a way to combine links (for code, datasets), pictures, and text as I try to get a neural network to model the data collected for my Kidnapped Robot Problem research.
Sunday, December 2, 2018
Sunday, February 11, 2018
ROS Addition: Piece de Resistance
I learned that ROS_INFO and ROS_DEBUG route to a ROS message topic called /rosout. Knowing that would have saved me so much time!!!!! Way back at the beginning, when I was trying to make an overlay of the AMCL package, I put a lot of debug statements in the AMCL code and could never figure out where they were being printed. I later discovered the "rqt" tool, but /rosout is much easier!
The greatest part is that I can log messages to ROS_INFO and record them into my bag file by adding the /rosout topic to the list that my bag file is recording with each data run. The list of topics being recorded is pretty long now!
This allows me to log the exact time that the robot has been kidnapped, as well as the locations the robot is moved from and to. Now I have EVERYTHING I need in the bag file - no more need for a separate .txt file to record the kidnap event, like I used to have.
I'm really happy with my apparatus for creating trial data - bag data for the win!
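Since the kidnap details now live in the bag file as /rosout text, a small post-processing script can pull them back out. Here's a minimal Python sketch, assuming a hypothetical log format for the ROS_INFO line (the actual text my code writes may differ):

```python
import re

# Hypothetical log format -- example line as it might appear when
# echoing the /rosout topic from a recorded bag:
#   "KIDNAP t=45.20 from=(1.0,-2.0) to=(3.0,4.0)"
KIDNAP_RE = re.compile(
    r"KIDNAP t=(?P<t>[\d.]+) "
    r"from=\((?P<fx>-?[\d.]+),(?P<fy>-?[\d.]+)\) "
    r"to=\((?P<tx>-?[\d.]+),(?P<ty>-?[\d.]+)\)"
)

def parse_kidnap_events(log_lines):
    """Extract (time, from_xy, to_xy) tuples from /rosout message strings."""
    events = []
    for line in log_lines:
        m = KIDNAP_RE.search(line)
        if m:
            events.append((
                float(m.group("t")),
                (float(m.group("fx")), float(m.group("fy"))),
                (float(m.group("tx")), float(m.group("ty"))),
            ))
    return events
```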
Adjusting the Neural Network Design
Even though my histogram ratio attribute was a good classifier for kidnap activity, I struggled to train my neural net to learn the classification from it. I wasn't doing much with the neural net's internal structure, though - I was letting Matlab handle most of it and setting the initial layer to 15 neurons. I probably should have been more strategic about the network's layers if I wanted to improve my results.
I learned a valuable lesson by discussing my problem with my research advisor. The fact that I could perform a calculation on the histogram of the particle filter particle weights and label the results as 1 or 0 means that I'm imposing my own (unnecessary) limits on the problem (that the ratio of the tallest to 2nd tallest histograms is the solution). If I instead give the neural network the raw histogram bin data, the neural network will have more flexibility to find its own version of the relationship between the histogram bin heights that are assigned to "Kidnap" and "Normal" events.
Now, my dataset consists of 10 variables - the heights of each of the 10 histogram bins that correspond to the weights of the particles in the particle filter.
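The feature extraction step can be sketched like this - a minimal Python version of the always-10-bins rule, not the actual Matlab pipeline:

```python
def histogram_features(weights, n_bins=10):
    """Bin particle weights into n_bins equal-width bins spanning
    [min(weights), max(weights)] and return the raw bin counts.
    These 10 counts are the inputs handed to the neural network."""
    lo, hi = min(weights), max(weights)
    if hi == lo:                       # degenerate case: all weights equal
        return [len(weights)] + [0] * (n_bins - 1)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for w in weights:
        idx = min(int((w - lo) / width), n_bins - 1)  # clamp the top edge
        counts[idx] += 1
    return counts
```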
Wednesday, January 24, 2018
From Colors to Histograms to Neural Networks - Part 2
In my previous blog post, I explained the method I chose for representing the big, uneven histograms that appear when the particle cloud weights at a timestep are divided into 10 bins: I use the ratio of the tallest histogram bin to the 2nd tallest histogram bin.
I calculated this "Histogram Bin Ratio" (HBR) for each timestep and set up my datasets for an artificial neural network in Matlab.
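For reference, the HBR calculation itself is tiny. A Python sketch (the single-occupied-bin fallback is my own assumption, not something from the Matlab code):

```python
def histogram_bin_ratio(counts):
    """Histogram Bin Ratio: tallest bin count divided by the 2nd tallest.
    Near 1 when weights are spread across bins (normal driving);
    large when one bin dominates (post-kidnap)."""
    top_two = sorted(counts, reverse=True)[:2]
    if top_two[1] == 0:          # only one occupied bin: treat as maximal skew
        return float(top_two[0])
    return top_two[0] / top_two[1]
```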
Experiment 1
My first trial used the current timestep's HBR as the input, and as outputs the differences between the robot's ground-truth pose in the Gazebo simulator and the AMCL localization algorithm's estimate, for X, Y, and Heading.
Experiment 1 Dataset:
Inputs:
Ratio (current)
Outputs:
|AMCL_X - Gazebo_X|
|AMCL_Y - Gazebo_Y|
|AMCL_Heading - Gazebo_Heading|
The neural network's outputs were hopelessly off, with MSEs on the order of 300.
Side Study
I wrote a test for my neural network setup to see if it could be trained to learn a sine function. I found a working example that used a different Matlab NN library, and ran the same datasets through my code. My neural network function was able to match the expected results, so I'm confident that my algorithm should be able to learn the kidnapping pattern.
Experiment 2
I scrapped the idea of using the difference between AMCL and Gazebo as the NN outputs. Instead, I used a 0/1 coding scheme of "not post-kidnapping event" or "post-kidnapping event". Including all the data after the kidnapping event resulted in MSEs in the 200s.
Experiment 2 Dataset:
Inputs:
Ratio (current)
Outputs:
0 or 1 (kidnapped or not)
Experiment 3
My next change was to cut off the datasets immediately following the kidnapping event's visible effects on the HBR. After the kidnapping, the HBR jumps to 5 or 18 or 40 and hovers there for a few more timesteps; once the HBR gets back to normal, I truncate the dataset. This worked marginally better - MSEs in the 100s.
Experiment 3 Dataset:
Inputs:
Ratio(current)
Outputs:
0 or 1 (kidnapped or not), with the dataset truncated after the kidnapping event.
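The truncation rule can be sketched in Python; the "back to normal" ceiling of 2.0 is an assumed stand-in for however I eyeball it in practice:

```python
def truncate_after_recovery(hbr_series, kidnap_idx, normal_max=2.0):
    """Label each timestep 0 (pre-kidnap) / 1 (post-kidnap), then cut the
    series once the HBR has fallen back under a 'normal' ceiling after the
    kidnap spike, so the NN only sees the visible effect of the event."""
    labels = [0] * kidnap_idx + [1] * (len(hbr_series) - kidnap_idx)
    end = len(hbr_series)
    for i in range(kidnap_idx, len(hbr_series)):
        if hbr_series[i] <= normal_max:   # spike is over -- truncate here
            end = i
            break
    return hbr_series[:end], labels[:end]
```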
Experiment 4
Next, I added a history of 2 timesteps to the inputs, so that the HBRs of the current, previous, and 2nd-previous timesteps were all represented. This drove my NN's MSEs down to 25!
Experiment 4 Dataset:
Inputs:
Ratio(current)
Prev_Ratio
Prev_Prev_Ratio
Outputs:
0 or 1 (kidnapped or not), with the dataset truncated after the kidnapping event.
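Building the lagged inputs is a simple sliding window. A Python sketch (the first two timesteps are dropped for lack of history):

```python
def lagged_features(ratios):
    """Build [current, previous, 2nd-previous] HBR rows for the NN inputs.
    The first two timesteps have no full history and are dropped."""
    return [
        [ratios[i], ratios[i - 1], ratios[i - 2]]
        for i in range(2, len(ratios))
    ]
```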
HOWEVER. Now my NN output matches the HBR ratios (current timestep's ratio), when I expected it - and theoretically, trained it - to give me a 0 or a 1!
The goofy part is that I could threshold the NN output around 4, and say anything over that indicates a Kidnapping. At this rate, I don't need a neural network!
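That baseline really is one line of code. A Python sketch, with the threshold of 4 taken from the observation above:

```python
def kidnap_detected(hbr, threshold=4.0):
    """The no-neural-network baseline: flag a kidnap whenever the
    histogram bin ratio exceeds the threshold (~4, per my plots)."""
    return hbr > threshold
```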
From Colors to Histograms to Neural Networks - Part 1
My previous blog post was about noticing the unique teal coloring of the particle cloud weights following a Kidnap event. In the time since, I have:
-Used histograms for representing the color change numerically
-Experimented with arranging the datasets to train the neural network
-Tested that my neural network *should* work by training it on a standard sine function
-Found a combination for the neural network that "works" (kinda).
Histograms:
The algorithm behind coloring the particles by weight is this:
1. Come up with a range of color representations. There should be as many different colors as there are unique weights.
2. Sort the colors from dark to light, and sort the list of unique weights least to greatest.
3. Match each unique weight to a unique color.
4. Apply this mapping to assign a color to each weight in the dataset.
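The four steps above boil down to a rank mapping from weights to color indices. A Python sketch (index 0 = darkest color; the actual plotting happens in Matlab):

```python
def weight_to_color_index(weights):
    """One color slot per unique weight: colors sorted dark->light are
    matched to weights sorted small->large, and the mapping is applied
    to every weight in the dataset."""
    unique_sorted = sorted(set(weights))                    # steps 1-2
    index_of = {w: i for i, w in enumerate(unique_sorted)}  # step 3
    return [index_of[w] for w in weights]                   # step 4
```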
I needed a way to numerically recreate the message that the teal color conveys. I applied a rule and printed out the resulting histograms:
*At each timestep, divide up the data into 10 bins. Always 10 bins.*
The histograms at each timestep during a "normal" trial looked like this, with some variation:
The histograms following a Kidnapping event and corresponding to the teal particle cloud looked like this:
The next step was preparing the data for the neural network. I represented the dramatic histogram of the Kidnap event with the ratio of the tallest histogram bin at the timestep to the 2nd tallest bin. Ratios that indicate a Kidnap event are high (5, 18, 40), while normal ratios are around 1 or 2.
Here's a plot of the ratios for the whole duration of the driving trials. It's bad resolution, but take my word for it: the "sprinkle" outliers at ratios of 20 or above are from the kidnapping trials. There are even 3 "normal" trials that appear among the outliers, which helped me identify unintentional kidnapping occurrences during those trials.
Note: after explaining the histogram method and showing that it works...I'm still not sure why it works, and why the method appears to work exclusively for kidnapping instances.
Wednesday, December 20, 2017
The Mysterious Teal Cloud!
Found something interesting yesterday: after color-coding the particle weights, it appears that the timesteps following kidnapping events and select actions like turning a corner result in a particle cloud in which all the weights are assigned to a bright lime green.
SIDEBAR:
One thing I'm wondering about: do the particle weights/weight distributions vary depending on things like the map or the route? I would want whatever threshold I come up with to be based on ...a specific rule or calculation, not a specific number.
SIDEBAR OVER.
Anyway, back to the Mysterious Teal Cloud:
There's a beautiful shade of teal that ONLY surfaces in the particle clouds in the 5-8 seconds after a kidnapping incident. It occurs in 12/13 of my datasets. Hallelujah!
Observe: Kidnapping event at 45 seconds. At 52 seconds, there's this glorious swath of teal in the particle cloud. In the timestep following it (53 seconds), we see the lime green:
In the "Normal", non-kidnapping datasets, the green cloud is preceded by a rainbow cloud:
Another variable I'd like to track is the time difference between AMCL message arrivals.
TODO List:
-See if there's a noticeable difference in the AMCL message frequency in kidnapping or non-kidnapping examples
-Fix colormap scale
P.S. Part of my "teaching philosophy" that I developed in the SLU CUTS program is the benefits of including reflection as part of the learning process. The experience of writing this blog post forced me to go through the process of explaining what I think is going on, and caused me to re-think a couple things and come to a better understanding of the data.
Sunday, December 17, 2017
Plot Comparison
Finally learned how to plot (x, y, angle) data in Matlab with the quiver() function, while also setting the arrows' colors individually to correspond with the particle weight. I re-visited the joys of For Loop vs. matrix operations performance, learned about the num2cell() and cellfun() functions, and questioned my sanity over figure holds and resets.
Turns out, Matlab has built-in color schemes, and I used these to denote the particle's weight.
Dark Blue = low weight
Green = high weight
Here is what a "normal" drive looks like over time.
Here is what the drive looks like when there is a "major" kidnapping event as the robot approaches (0, -2), moving right to left from (1, -2):
Wednesday, November 29, 2017
aggregate()
Now that I'm getting back into R, the things it can do remind me of what I used to do with SAS as a supply chain intern - for example, grouping data by a timestamp and taking the first or last element of each group with a keyword like "first" or "last". Fun stuff!
I'm looking at particle cloud weights today. I've plotted the cloud weights for each timestamp in Excel by creating a scatter plot with an X axis of each timestamp and a Y axis of the individual particles' weight at that timestamp. The "normal" plots and the "kidnapped" plots are easily differentiated:
Kidnapped:
Not Kidnapped:
Next, I looked at using R to group the particles by timestep and analyze the timesteps. Here's a script for getting the mean of each timestep's particle weights:
1. Import the CSV:
mytable<-read.csv(<CSVpath>,header=TRUE,sep="\t")
2. Convert to a data frame:
values<-data.frame(mytable)
The piece de resistance: using the aggregate() function and specifying Time as the column by which I want the data to be grouped
3. xyz<-aggregate(x=values, by=list(unique.values=values$Time), FUN=mean)
Voila! Though, I don't know what the 4th column from the left is about.
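For anyone following along without R, here's a pandas analogue of the aggregate() call above, on a made-up four-row table (the actual CSVs are far larger):

```python
import pandas as pd

# Group the particle table by Time and take the mean of every column,
# so each timestep collapses to one row of mean ParticleNumber and
# mean ParticleWeight -- the same shape as the R aggregate() output.
values = pd.DataFrame({
    "Time":           [26.15, 26.15, 26.39, 26.39],
    "ParticleNumber": [1, 500, 1, 500],
    "ParticleWeight": [0.002, 0.002, 0.001, 0.003],
})
xyz = values.groupby("Time", as_index=False).mean()
```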
Normal:
unique.values Time ParticleNumber ParticleWeight
1 18.87 18.87 999.5 0.000500000
2 26.15 26.15 250.0 0.001996008
3 26.39 26.39 250.0 0.001996008
...
128 135.60 135.60 250.0 0.001996008
(Rows 2 through 128 are identical apart from the timestamp: every one shows a mean ParticleNumber of 250.0 and a mean ParticleWeight of 0.001996008.)
Unfortunately, the mean of the cloud weights from the major kidnapping incident looks identical:
unique.values Time ParticleNumber ParticleWeight
1 19.57 19.57 999.5 0.000500000
2 28.58 28.58 250.0 0.001996008
3 28.76 28.76 250.0 0.001996008
...
132 139.72 139.72 250.0 0.001996008
(Again, rows 2 through 132 all show a mean ParticleNumber of 250.0 and a mean ParticleWeight of 0.001996008; only the timestamps change.)
I'm looking at particle cloud weights today. I've plotted the cloud weights for each timestamp in Excel by creating a scatter plot with an X axis of each timestamp and a Y axis of the individual particles' weight at that timestamp. The "normal" plots and the "kidnapped" plots are easily differentiated:
Kidnapped:
Not Kidnapped:
1. Import the CSV:
mytable<-read.csv(<CSVpath>,header=TRUE,sep="\t")
2. Convert to a data frame:
values<-data.frame(mytable)
The piece de resistance: using the aggregate() function and specifying Time as the column by which I want the data to be grouped
3. xyz<-aggregate(x=values, by=list(unique.values=values$Time), FUN=mean)
Voila! Though, I don't know what the 4th column from the left is about.
Normal:
unique.values Time ParticleNumber ParticleWeight
1 18.87 18.87 999.5 0.000500000
2 26.15 26.15 250.0 0.001996008
3 26.39 26.39 250.0 0.001996008
... (rows 4-127 omitted - every row after the first has ParticleNumber 250.0 and ParticleWeight 0.001996008) ...
128 135.60 135.60 250.0 0.001996008
Unfortunately, the mean of the cloud weights from the major kidnapping incident looks identical:
unique.values Time ParticleNumber ParticleWeight
1 19.57 19.57 999.5 0.000500000
2 28.58 28.58 250.0 0.001996008
3 28.76 28.76 250.0 0.001996008
... (rows 4-131 omitted - every row after the first has ParticleNumber 250.0 and ParticleWeight 0.001996008) ...
132 139.72 139.72 250.0 0.001996008
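For comparison (my own sketch, not part of the original workflow, and with made-up toy numbers rather than the real particle data), the same group-by-Time-and-average step can be done in Python with pandas:

```python
import pandas as pd

# Toy stand-in for the particle-weight CSV (hypothetical values, not real AMCL data):
# two timestamps, four particles each.
values = pd.DataFrame({
    "Time":           [18.87] * 4 + [26.15] * 4,
    "ParticleNumber": [0, 1, 2, 3] * 2,
    "ParticleWeight": [0.25, 0.25, 0.25, 0.25, 0.1, 0.2, 0.3, 0.2],
})

# Equivalent of R's aggregate(x=values, by=list(unique.values=values$Time), FUN=mean):
# group by timestamp and average every remaining column.
xyz = values.groupby("Time", as_index=False).mean()
print(xyz)
```

Just like aggregate(), this averages every non-key column, so ParticleNumber also comes out as a (mostly meaningless) mean of particle indices.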
R is not magic
Realization:
As cool as R is, you still have to do the work of choosing the best ways to arrange your data and the statistical tests to use on it.
Sunday, November 19, 2017
I'm Becoming A Fan...
...A fan of R! I have all these datasets for different AMCL/Gazebo variables, and in the past, I'd written little scripts for merging them based on their closest timestamp matches. My method was cumbersome and broke easily. Turns out, there's a variety of ways to do the same thing with R. Woot woot!
Getting Rolling With R
Today, I've written a little script that parses bag files into CSVs of the topics' messages. Now I have to match them up and put them together into a combined dataset. The way I described this situation earlier was,
"A bag file stores message traffic from recorded ROS topics. Messages arrive regularly, but at different frequencies."
As I tried to think of the best way to organize my datasets, I considered putting together a database. I balked when I realized I'd have to define tables, and declare columns with names and the appropriate datatypes (yuck). I needed SQL functionality with the simplicity of a CSV file. Lo and behold, I googled for 'r join datasets', and found this page:
"We often encounter situations where we have data in multiple files, at different frequencies and on different subsets of observations, but we would like to match them to one another as completely and systematically as possible. In R, the merge() command is a great way to match two data frames together."
https://www.r-bloggers.com/merging-multiple-data-files-into-one-data-frame/
Sounds like what I need!
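The same nearest-timestamp matching also exists outside R; here's a sketch in Python with pandas merge_asof() (my own illustration with invented toy topics and values, not the actual bag data):

```python
import pandas as pd

# Hypothetical example: two ROS topics recorded at different frequencies.
amcl = pd.DataFrame({"Time": [0.0, 0.5, 1.0, 1.5],
                     "amcl_x": [0.0, 0.1, 0.2, 0.3]})
odom = pd.DataFrame({"Time": [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4],
                     "odom_x": [0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35]})

# merge_asof matches each amcl row to the most recent odom row at or before its
# timestamp; both frames must already be sorted by the key column.
merged = pd.merge_asof(amcl, odom, on="Time")
print(merged)
```

This is exactly the "closest timestamp match" that my old hand-rolled scripts did, minus the breakage.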
Mistakes Were Made, and Corrections Were, Too.
I did some extra work yesterday to get the robot's Gazebo poses publishing as a ROS message, and I'm glad I did. I hadn't been sure it would be a good use of time, since I already have a script that prints the poses to a text file, but at the time, I did it just to make things ship-shape. It led to an important discovery, though:
In all of my simulations, I've had the starting AMCL pose adjusted for the wrong map. Yikes! I suspect this was going on in the previous round of simulations, too.
I've corrected this by forcing the AMCL pose to (0,0) through a command line parameter added to my Bash script (another reason I'm loving the Bash scripts for starting ROS nodes). Another addition to the start_driving_robot Bash script is a couple commands that execute after the playback of the driving route is finished:
rosnode kill -a
Purpose: Kills everything - specifically, the rosbag record node - so that I can walk away from the simulation after I start it up, and I won't get data from the robot sitting still at the end.
killall -9 gzserver
Purpose: rosnode kill apparently wasn't shutting down Gazebo cleanly, and when I would immediately start a new instance of Gazebo, it would fail because there was already a gzserver process running. This step seems to prevent that.
Really feeling the benefits of my improvements to the research infrastructure.
Saturday, November 18, 2017
Plugin for Publishing Robot Pose to ROS from Gazebo
This morning, I wanted to have a Gazebo plugin that publishes the robot's pose in Gazebo. Right now, I've got a python script that prints to a text file, but it would be easier for analysis to just have all the data in the same bag file.
It took an hour, but now I've got the robot's pose published as a ROS topic.
******
NOTE! The pose messages published this new way match the Gazebo print statements from the get_model_state call on mobile_base in the world frame to within 0.01 meters.
******
Here's how I did it:
https://answers.ros.org/question/222033/how-do-i-publish-gazebo-position-of-a-robot-model-on-odometry-topic/
It recommends adding something like this to the URDF (the robot specification):
<plugin name="p3d_base_controller" filename="libgazebo_ros_p3d.so">
  <rpyOffsets>0 0 0</rpyOffsets>
  <xyzOffsets>0 0 0</xyzOffsets>
  <frameName>world</frameName>
  <gaussianNoise>0.01</gaussianNoise>
  <topicName>gazebo_robot_pose</topicName>
  <bodyName>base_link</bodyName>
  <updateRate>50.0</updateRate>
  <alwaysOn>true</alwaysOn>
</plugin>
Notes:
The difficult part was finding which file to use. Here's what I did:
1. I went to the turtlebot_gazebo package and looked at the turtlebot_world.launch file to figure out where the turtlebot URDF files are.
2. Turns out, there's a turtlebot_description package with relevant files. I opened turtlebot_description\urdf and found a file called turtlebot_gazebo.urdf.xacro
3. I added a new <gazebo> tag and put the plugin section within the new <gazebo> section.
**Originally, I used "mobile_base" for the <bodyName>, but I got an error that mobile_base wasn't a valid bodyName value. I switched back to using base_link, and the results are working fine.
This was a helpful example of where to put the plugin code - note that the plugin is within its own <gazebo> tag in the xacro:
https://github.com/tu-darmstadt-ros-pkg/taurob_tracker_common/blob/master/taurob_tracker_description/urdf/tracker_chassis.gazebo.xacro.xml#L44
The source of that .so file is described here:
##what is libgazebo_ros_p3d.so? You can find source code from gazebo_ros_pkgs repository.
Go to
gazebo_plugins/include/gazebo_plugins/gazebo_ros_p3d.h
/*
BriefNote:
Compute relative position of [bodyName] frame with respect to [frameName] frame.
And publish it as [topicName] with [updateRate].
It seems that it does not have physical control of the model.
*/
https://github.com/jaejunlee0538/ua_ros_p3dx
Simulations
I'm back to collecting data, now that I've got the AMCL particle cloud weights printed out.
See my ROS Answers forum question: https://answers.ros.org/question/275574/amcl-particle-weights-are-always-equal-why/
In the past, I had a couple different driving methods and they each had their own problems:
1. turtlebot_teleop keyboard_teleop - driving the robot around the room on roughly the same route with keyboard controls.
Problem: Time-consuming, and didn't follow the exact route each time.
2. Wrote a Python script to drive the robot around a series of locations in the Gazebo world
Problem: Changing direction relied on rotating the robot in place (sometimes a full circle or two). AMCL really didn't like this.
3. Using the ROS Navigation path planner to specify a route
Problem: I must have been using a different path planner than the one that RViz uses (which is GREAT), because this one had a lot of rotation in it, too.
This time around, I decided to record a BAG file of the output from the turtlebot_teleop keyboard_teleop node, on the /cmd_vel_mux/input/teleop topic. This topic has the translational and rotational changes specified by the keyboard commands, so I drove around the room once and recorded these messages so that I could replay them for every simulation.
The difficulty this week was getting a solid driving route - apparently, there's a lot of variation in what the robot actually executes compared to what you told it to do. I'd record a clean driving route, and when I replayed it, the robot would over- or under-shoot the specified motion at some timestep along the way, and start bumping into the furniture in the simulation. I finally learned to steer WAY clear of the furniture and was able to record a replayable route.
I had been aiming for a 5-minute route, but due to the difficulty in getting a driving route in place, I ended up sticking to a 2:20 driving sample. As long as I schedule kidnapping events within the first 45 seconds, I think it will still give us enough information about AMCL's long-term post-kidnapping behavior.
Anyway, here are the scripts I've been using to record trials:
bash start_gazebo.sh
bash start_amcl.sh
bash start_recording_messages.sh
bash start_recording_amcl_gazebo.sh
bash start_driving_robot.sh
I LOVE that I adopted Bash scripts instead of having to remember what to type in each time - I'd highly recommend this step, as it's saved me a lot of time and prevented frustration. I also made them foolproof by saving the data files within the script and printing a usage error message when I don't supply a filename. It's been working great.
Here are the specifics of each:
bash start_gazebo.sh
roslaunch turtlebot_gazebo turtlebot_world.launch world_file:=/opt/ros/indigo/share/turtlebot_gazebo/worlds/playground.world
bash start_amcl.sh
source /home/ebrenna8/amcl_overlay/devel/setup.bash;
roslaunch turtlebot_gazebo amcl_demo.launch map_file:=/opt/ros/indigo/share/turtlebot_gazebo/maps/playground.yaml
bash start_recording_messages.sh - this one takes a file name
rosbag record -O $1 /cluster_count /delta_pose /filter_covariance /max_weight_hyp /numsamples /particlecloud /test_pub /amcl_pose /cloud_weight /particlecloudPreResample /odom /cmd_vel_mux/input/teleop
bash start_driving_robot.sh
rosbag play /home/ebrenna8/Turtlebot_World_Driving.bag
bash start_recording_amcl_gazebo.sh - this one takes a file name
python ./print_amcl_gazebo_poses.py > $1
Monday, November 6, 2017
Debugging in ROS
In the past few days, I have been investigating ways to better understand what's going on with the AMCL code and I've turned to improving my debugging capabilities.
Turns out, I've been missing out on the ROS_DEBUG messages already being printed by the AMCL node. I have used rqt before to view bag files' contents, but it turns out that if you open the tool with some special options, it lets you view the message channels, too.
More info here:
http://wiki.ros.org/ROS/Tutorials/UsingRqtconsoleRoslaunch
Another thing I started reading about was getting an IDE around ROS code so that I can put in breakpoints and step through the code. There are some tools that are new to me, and tools that I recognized, like Visual Studio Code or Eclipse. Unfortunately, they ALL look really complicated.
More info here:
http://wiki.ros.org/IDEs
Saturday, October 28, 2017
Publishing Point Cloud Weights
After some confusion with ROS messages and C++ float/double and array/vector properties, I've finally put the code together that publishes the weights of each particle in the particle filter as the robot drives around. I had to use a custom ROS message type I had originally created for the filter's covariance, because for some reason the ROS weights message I made was not being copied to the proper dependency folder in the build process. I suspect this is related to the CMakeLists file getting blown away.
The code that I wrote is just a few lines added to amcl_node.cpp:
cloud_weight_pub_ = nh_.advertise<filter_covariance_msg>("cloud_weight",2,true);
...
if (!m_force_update)
{
  geometry_msgs::PoseArray cloud_msg;
  filter_covariance_msg weights_msg;
  std::vector<float> weights(set->sample_count);
  cloud_msg.header.stamp = ros::Time::now();
  cloud_msg.header.frame_id = global_frame_id_;
  cloud_msg.poses.resize(set->sample_count);
  for (int i = 0; i < set->sample_count; i++)
  {
    tf::poseTFToMsg(tf::Pose(tf::createQuaternionFromYaw(set->samples[i].pose.v[2]),
                             tf::Vector3(set->samples[i].pose.v[0],
                                         set->samples[i].pose.v[1], 0)),
                    cloud_msg.poses[i]);
    weights[i] = set->samples[i].weight;
  }
  particlecloud_pub_.publish(cloud_msg);
  weights_msg.cov = weights;
  cloud_weight_pub_.publish(weights_msg);
}
And the result is that there's now a /cloud_weight topic published by my AMCL overlay with data that looks like this:
From here, I'm going to run trials and collect data. There are a few new details I'm going to incorporate:
-Data routes should be 5 minutes long
-Drive with teleop and record/playback the teleop commands to replicate the exact route
-Script anything I can (for reproducibility)
-For every route, try the same route but with kidnapping occurrences at different times (locations) to see what happens.
The ultimate direction is that I'm going to add this particle weight data into my Matlab print-outs and color the particles by weight. I'm crossing my fingers that this will reveal something.
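As a sketch of that weight-coloring idea (my own Python/matplotlib version with random stand-in data, since the Matlab scripts and real bag data aren't shown here):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Hypothetical particle cloud: random poses and weights standing in for AMCL output.
rng = np.random.default_rng(42)
x = rng.normal(0.0, 0.5, 500)
y = rng.normal(0.0, 0.5, 500)
weights = rng.random(500)
weights /= weights.sum()  # particle-filter weights sum to 1

# Color each particle by its weight so high-weight particles stand out.
fig, ax = plt.subplots()
scatter = ax.scatter(x, y, c=weights, cmap="viridis", s=8)
fig.colorbar(scatter, label="particle weight")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
fig.savefig("particles_by_weight.png")
```

If the weights carry any signal about kidnapping, a plot like this should show it as a visible shift in where the bright particles sit.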
Monday, October 16, 2017
Small Victories
It looks like there's a problem with the cloud_weight_msg ROS message that I made a while back in order to publish the particle cloud's particle weights: even though the file declares the message type as float64[], rosmsg show cloud_weight_msg prints out the message type as int32[]. Not sure what the deal is.
Fortunately, I had an existing message of type float32[], so I'm using that one for the cloud_weights message declaration. The only difference is that my cloud weights are stored as doubles, and this float32[] will require them to be converted to floats. I don't think it will cause a problem, so I should be good to go.
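To reassure myself about the double-to-float narrowing, here's a quick numpy sanity check (my own snippet, not code from the AMCL overlay) on weights at the magnitudes seen in my data:

```python
import numpy as np

# Particle weights at the scales from my trials: 1/2000 (initial cloud) and
# 1/501 (after resampling). Stored as C++ doubles, narrowed to float32.
weights64 = np.array([1.0 / 2000, 1.0 / 501], dtype=np.float64)
weights32 = weights64.astype(np.float32)

# Absolute error introduced by the narrowing conversion.
error = np.abs(weights64 - weights32.astype(np.float64))
print(error.max())  # well below 1e-9 -- negligible for weights near 1e-3
```

So the float32[] message should be plenty of precision for comparing particle weights.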
Work Plan:
- Put code in for publishing particle cloud weights tonight.
- Tomorrow: collect a bunch of data, over 5-minute trials and kidnapping events at different timesteps.
Wednesday, October 11, 2017
Maybe ROS Isn't So Bad
I'm still waiting for the other shoe to drop, but I ran catkin_make in my amcl_overlay workspace, and I was able to successfully compile the code and start the amcl_demo code with the overlaid version. Then, I went over to the turtlebot plugin workspace, and ran catkin_make, and everything compiled.
I may have been gunshy about changing the code in order to get the particle cloud weights for no reason at all. Whoops.
Graduate Research Seminar Speaker
Yesterday I had the opportunity to be the speaker for the weekly Graduate Research Seminar at Parks College. There were about 15 students in attendance, spaced out among all levels and corners of the auditorium of 200 seats. I spoke for about 40 minutes about my research, and in particular, the steps we had to take to simulate robot fault. While I was preparing the presentation, I was thinking, "You know, this robot fault thing could really be a paper or something." Unfortunately, a RosCon conference proposal was not accepted over the summer, and it's made me think either the topic isn't as original as I think, or I didn't write a good proposal.
The most enjoyable part of the talk was the last 2 minutes, when I took questions. I got questions about automating the tests, whether I'll be able to differentiate between major and minor kidnapping event classes, and how to improve the Roomba's ability to make it around the whole room like it's supposed to. Getting to free-style and apply my knowledge was fun.
Sunday, September 24, 2017
Daily Goals
To add the Gazebo poses, here's what I've found:
*Major Kidnapping Trial 4 is missing a print_amcl_gazebo_poses.txt file. Booo.
*The angle for Gazebo is a quaternion. Need to write a python script to convert that to radians.
I'm in instant-gratification mode now, though, so I'm going to just see if I can get X,Y plotting for the Gazebo pose to work on Major Kidnapping Trial 5, and then I'll add angle.
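For the angle, here's a sketch of the python script I have in mind, using the standard quaternion-to-yaw formula (assuming the usual ROS x, y, z, w component ordering; for a ground robot only the yaw matters):

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Extract the yaw (rotation about Z) from a quaternion, in radians."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# Identity quaternion -> no rotation
print(quaternion_to_yaw(0.0, 0.0, 0.0, 1.0))  # 0.0

# 90-degree turn about Z: z = sin(45 deg), w = cos(45 deg) -> yaw of ~pi/2
print(quaternion_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
```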
*****************
Update! Got the Gazebo poses added to the graph. It's enlightening - the blue dot is the robot's ground-truth pose in Gazebo:
Simulation starts at (0,0) and AMCL is started at (0,0), but the particle cloud is pretty big:
Cloud shrinks down as robot moves. (Pose estimates are off by a quarter of a meter in each direction...)
BOOM! Kidnapped.
Plotting Mean and AMCL Poses
Plotted particle cloud + AMCL Pose + Mean X/Y/Theta all together! Here are a couple samples:
Robot's not lost:
Robot's lost:
Do you notice a difference? (I don't...)
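Since eyeballing the plots isn't cutting it, maybe a number would work better than a picture: a summary statistic of how spread out the cloud is at each timestep. A hypothetical sketch (the `cloud_spread` name and toy coordinates are mine, not from the data):

```python
import math

def cloud_spread(points):
    """Standard deviation of particle (x, y) positions about the cloud
    mean, pooled into a single scalar."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    var = sum((p[0] - mean_x) ** 2 + (p[1] - mean_y) ** 2 for p in points) / n
    return math.sqrt(var)

# A tight cloud vs. a dispersed one
tight = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.05)]
spread_out = [(0.0, 0.0), (4.0, 1.0), (-3.0, 2.0), (1.0, -4.0)]
print(cloud_spread(tight) < cloud_spread(spread_out))  # True
```

If a kidnapping makes the cloud re-disperse, this statistic over time might show the difference my eyes can't.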
Structure
One of the biggest takeaways from Paul Silvia's "How to Write a Lot: A Practical Guide to Productive Academic Writing" is that you HAVE to make a schedule for when you're going to do your research writing. I actually found it helpful that he's aiming at PhD students and professors, because besides writing up their research, they have a lot of other responsibilities - teaching, administrative work, grading homework - and research can get pushed aside by all of it. It's akin to my situation working full-time, so his advice to JUST MAKE TIME FOR YOUR WORK - you already make time every Thursday night to watch a half-hour of Donnybrook - hit home.
And, I have good news: now that I'm making time - an hour after I get off work everyday, getting up early on the weekend - I'm actually getting stuff done (see the recent blog posts!).
Another principle from the book is goal-setting: identifying project goals, as well as the daily steps to implementing them. I haven't really been doing that. Therefore, here goes:
My current project goal is to find a method of arranging data so that the Neural Network can find a really-tight function approximation for the data.
Daily goals:
TODAY: I need visualizations of the following:
* Particle Cloud Poses + AMCL Pose + Gazebo Pose + Mean X/MeanY/MeanTheta Pose
* Revisit the mean/stdev dataset and see whether there's a function to be found in there.
Another method from the book is keeping track of your productivity with a spreadsheet of dates and words written (or lines of code, etc.), then using it like a fitness tracker: Are you on track for this week? Can you beat last week's total? Might be good to implement.
Friday, September 22, 2017
Woot woot! Got My Arrows
Turns out it was that I wasn't converting the AMCL Theta into [u,v] coordinates before plotting the arrow. Once I added that step, my plots have the AMCL pose mapped!
[u,v] = pol2cart(points(1,6),0.1);
hold on, fig = quiver(points(1,4),points(1,5),u,v,'color',[0 0 1]);
Isn't my little AMCL arrow cute?
Wish List
As I'm formatting my data for review, here's what I want:
-Particle cloud arrows plotted at each timestep (got this)
-AMCL's position estimate plotted at each timestep as a big red arrow (having trouble getting this)
-Gazebo position plotted at each timestep as a big green arrow (don't have any infrastructure for this)
-Plot the meanX and meanY positions on the particle cloud as dots, too.
Also, format the mean/stddev files for review.
Wednesday, September 20, 2017
Generating images of particle clouds
%import data file:
InputFileName = <>
SaveFilesToFolder = <>
TimesParticleCloudPoints = readtable(InputFileName, 'Delimiter', '\t');
TimesParticleCloudPoints = table2array(TimesParticleCloudPoints);
times = unique(TimesParticleCloudPoints(:,1)); %get distinct time periods
counter = 0;
for t = 1:numel(times)
    if times(t) > 70
        disp('KIDNAPPING: ')
        disp(t)
    end
    counter = counter + 1;
    %pull the X,Y particle positions recorded at this timestep
    rows = TimesParticleCloudPoints(:,1) == times(t);
    points = TimesParticleCloudPoints(rows, 3:4);
    fig = scatter(points(:,1), points(:,2));
    if times(t) >= 70
        header = sprintf('Post-Kidnapped. T = %s', num2str(times(t)));
    else
        header = sprintf('T = %s', num2str(times(t)));
    end
    title(header);
    saveas(fig, strcat(SaveFilesToFolder, int2str(counter), '.png'));
end
Expanding to draw arrows. This link seems helpful. My angles are in the dataset as quaternions...
https://stackoverflow.com/questions/1803043/how-do-i-display-an-arrow-positioned-at-a-specific-angle-in-matlab
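Per that link, drawing an arrow means turning each heading angle into (u,v) components for quiver - the same job pol2cart does in MATLAB. A quick python sketch of that step (the `yaw_to_arrow` name and the 0.1 arrow length are my own choices):

```python
import math

def yaw_to_arrow(theta, length=0.1):
    """Convert a heading angle (radians) into quiver-style (u, v)
    components, like MATLAB's pol2cart(theta, length)."""
    return length * math.cos(theta), length * math.sin(theta)

u, v = yaw_to_arrow(0.0)            # arrow points along +X: (0.1, 0.0)
u2, v2 = yaw_to_arrow(math.pi / 2)  # arrow points along +Y
```

So the full pipeline per particle would be: quaternion -> yaw -> (u, v) -> quiver.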