Wednesday, January 24, 2018

From Colors to Histograms to Neural Networks - Part 2

In my previous blog post, I explained the method I chose for summarizing the large, uneven histograms that appear when the particle cloud weights at a timestep are divided into 10 bins: I take the count of the tallest histogram bin divided by the count of the 2nd-tallest bin.
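For reference, here is a minimal Matlab sketch of that computation, assuming the particle weights for one timestep are in a vector called weights (a name I'm inventing for illustration):

% Histogram Bin Ratio (HBR) for one timestep.
counts = histcounts(weights, 10);           % split the weights into 10 bins
sorted_counts = sort(counts, 'descend');    % tallest bin first
hbr = sorted_counts(1) / sorted_counts(2);  % tallest bin / 2nd-tallest bin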


I calculated this "Histogram Bin Ratio" (HBR) for each timestep and set up my datasets for an artificial neural network in Matlab. 

Experiment 1
My first trial used the current timestep's HBR as the input, and as outputs the absolute differences between the robot's ground-truth pose in the Gazebo simulator and the AMCL localization algorithm's estimate, for X, Y (in meters), and heading.

Experiment 1 Dataset:
Inputs:
Ratio (current)

Outputs:
|AMCL_X - Gazebo_X|
|AMCL_Y - Gazebo_Y|
|AMCL_Heading - Gazebo_Heading|
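In Matlab, that setup amounts to roughly this sketch with fitnet - assuming hbr is a 1-by-N vector of ratios and pose_err is a 3-by-N matrix of the absolute pose errors (names are mine, for illustration):

net = fitnet(10);                    % function-fitting net, 10 hidden neurons
net = train(net, hbr, pose_err);     % inputs: HBR; targets: |AMCL - Gazebo|
pred = net(hbr);                     % network estimates at each timestep
err = perform(net, pose_err, pred);  % mean squared error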

The neural network's outputs were hopelessly off, with MSEs on the order of 300.


Side Study
I wrote a test of my neural network setup to see whether it could be trained to learn a sine function. I found a working example that used a different Matlab NN library, and ran the same dataset through my code. My neural network function matched the expected results, so I'm confident that my setup should be able to learn the kidnapping pattern.
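That sanity check boils down to something like this (same fitnet-style setup as above; 10 hidden neurons is just a placeholder):

x = 0 : 0.1 : 2*pi;                  % training inputs
t = sin(x);                          % targets: the sine function
net = fitnet(10);
net = train(net, x, t);
plot(x, t, 'b-', x, net(x), 'r--');  % target vs. network output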


Experiment 2
I scrapped the idea of using the difference between AMCL and Gazebo as the NN outputs. Instead, I used a 0/1 coding scheme: 0 for timesteps before the kidnapping event and 1 for timesteps after it. Including all the data after the kidnapping event resulted in MSEs in the 200s.

Experiment 2 Dataset:
Inputs:
Ratio (current)

Outputs:
0 or 1 (not kidnapped / kidnapped)
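Building those labels is trivial; roughly (kidnap_step and num_steps are assumed names for the kidnapping timestep and the dataset length):

labels = zeros(1, num_steps);     % 0 = not kidnapped
labels(kidnap_step:end) = 1;      % 1 = every timestep after the kidnapping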


Experiment 3
My next change was to cut off the datasets immediately after the kidnapping event's visible effects on the HBR fade. After the kidnapping, the HBR jumps to a value like 5, 18, or 40, and hovers there for a few more timesteps. Once the HBR gets back to normal, I truncate the dataset. This worked marginally better - NN MSEs in the 100s.

Experiment 3 Dataset:
Inputs:
Ratio (current)

Outputs:
0 or 1 (not kidnapped / kidnapped), with the dataset truncated once the HBR returns to normal after the kidnapping event.
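The truncation might look like this sketch (normal_thresh is an assumed cutoff for "the HBR is back to normal"; all names are mine):

post = hbr(kidnap_step:end);             % ratios after the kidnapping
settle = find(post < normal_thresh, 1);  % first timestep back to normal
cutoff = kidnap_step + settle - 1;
hbr = hbr(1:cutoff);                     % drop everything after settling
labels = labels(1:cutoff);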


Experiment 4
Next, I added a history of 2 timesteps to the inputs, so that the HBRs of the current, previous, and 2nd-previous timesteps were all represented. This drove my NN's MSEs down to 25!


Experiment 4 Dataset:
Inputs:
Ratio (current)
Prev_Ratio
Prev_Prev_Ratio

Outputs:
0 or 1 (not kidnapped / kidnapped), with the dataset truncated after the kidnapping event, as in Experiment 3.
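Assembling the history inputs is just a matter of stacking shifted copies of the HBR series, roughly like this (again with my assumed variable names):

N = numel(hbr);
inputs = [hbr(3:N);      % Ratio (current)
          hbr(2:N-1);    % Prev_Ratio
          hbr(1:N-2)];   % Prev_Prev_Ratio
targets = labels(3:N);   % labels aligned with the current timestep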




HOWEVER. Now my NN output just tracks the current timestep's HBR, when I expected it - and, in theory, trained it - to give me a 0 or a 1!


The goofy part is that I could simply threshold the NN output at around 4 and say anything above that indicates a kidnapping. At this rate, I don't need a neural network!
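In other words, the whole detector collapses to a one-line check (net_output standing in for whatever the trained network returns):

is_kidnapped = net_output > 4;    % threshold the NN output directly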
