The Rocket’s Brain Pt. 3 — Laser Vision!
In the previous parts of this series I developed a neural network to control a rocket trying to hit a target. In part 1 the target’s position was fed directly into the network. In part 2 the rocket was given “sensors”, so it wouldn’t know where the target was until it “saw” it. As a further step towards more interesting behavior I wanted to add obstacles to the world, so that the rocket would, for example, need to fly around an asteroid to hit the target. The sensor setup from part 2 isn’t well suited to that kind of behavior, so I decided to first improve the way the rocket sees its world. In the new version the rocket has a number of “laser rangefinders” that emit from its nose and let it measure the distance to an obstacle that the laser hits, if any. The rocket can then distinguish the kind of object a laser hits by the sign of the value it receives: negative for an obstacle, positive for the target — a very simplistic approximation of color vision.
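One laser reading can be computed with a standard ray–circle intersection test. The sketch below is my own illustration of the idea, not code from the series: circular objects, the `max_range` value, and the `1 - t / max_range` scaling are all assumptions.

```python
import math

def ray_circle_distance(origin, direction, center, radius):
    """Distance t along the ray origin + t*direction to a circle, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for the
    smallest non-negative t (for a unit-length direction, t is the
    actual distance).
    """
    fx, fy = origin[0] - center[0], origin[1] - center[1]
    a = direction[0] ** 2 + direction[1] ** 2
    b = 2 * (fx * direction[0] + fy * direction[1])
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray's line misses the circle entirely
    s = math.sqrt(disc)
    for t in ((-b - s) / (2 * a), (-b + s) / (2 * a)):
        if t >= 0:
            return t  # first hit in front of the rocket's nose
    return None  # circle lies behind the ray origin

def sense(origin, direction, objects, max_range=100.0):
    """Signed reading for one laser: positive for the target,
    negative for an obstacle, 0.0 if nothing is in range.

    objects: list of (center, radius, kind) with kind in
    {"target", "obstacle"}; the nearest hit wins.
    """
    best = None
    for center, radius, kind in objects:
        t = ray_circle_distance(origin, direction, center, radius)
        if t is not None and t <= max_range and (best is None or t < best[0]):
            best = (t, kind)
    if best is None:
        return 0.0
    t, kind = best
    reading = 1.0 - t / max_range  # closer hits give a stronger signal
    return reading if kind == "target" else -reading
```

Each rangefinder would then be one such ray, rotated along with the rocket’s nose, with its reading wired to one input of the network.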
I experimented with feedback connections, which made a substantial difference to the resulting behavior this time. The feedback connections are simply outputs of the neural network that are looped back and connected to some of its inputs, with clamping to keep overly large values in check. This lets the rocket pass information from one time step to the next; otherwise it has essentially no “memory” of what has happened so far. See the resulting behavior in the animation below:
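The loop-back can be sketched as follows. This is a minimal illustration under assumptions of mine: the tiny stand-in “network” is just a fixed function, and the two feedback channels and the [-1, 1] clamping range are invented — the real controller in this series is a trained neural network.

```python
def clamp(x, lo=-1.0, hi=1.0):
    """Filter that keeps looped-back values from blowing up."""
    return max(lo, min(hi, x))

def step(net, sensors, feedback):
    """One control step: sensors plus previous feedback go in,
    control outputs and new (clamped) feedback come out."""
    outputs = net(sensors + feedback)
    controls = outputs[:2]                       # e.g. thrust and torque
    new_feedback = [clamp(o) for o in outputs[2:]]
    return controls, new_feedback

def run(net, sensor_seq, n_feedback=2):
    """Run the loop over a sequence of sensor readings."""
    feedback = [0.0] * n_feedback
    history = []
    for sensors in sensor_seq:
        controls, feedback = step(net, sensors, feedback)
        history.append(controls)
    return history, feedback

# Stand-in "network": expects one sensor input plus two feedback inputs.
def dummy_net(x):
    return [x[0], x[0] + x[1], 2 * x[0] + x[2], 10 * x[0]]
```

With `sensor_seq = [[1.0], [0.0]]` the second step still produces a non-zero control even though its sensor reading is zero, because the first step’s output survives in the feedback — exactly the kind of “memory” described above.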
I found it interesting that this time I got a behavior I would have tried to program “manually” to control the rocket: braking from whatever speed the rocket starts with, then looking around, and then flying towards the target in a straight line.
You can try it live here.