
Asynchronous Neural Networks in JavaScript

For the past few months I’ve been trying to get some of my robots to roam around the house with a mind of their own, learning as they go. It turns out that the task is harder than I thought.

The main issue is parallel signal processing.

To illustrate: a single robot will usually have 3-5 sensors that collect information about the world around it, typically a combination of IR sensors used for collision detection, a magnetometer to measure direction, and microphones with audio spectrum analysis, as well as feedback from its own actuators, such as the speed of the motors that determines how fast the robot is travelling.

First attempt, enter the matrix

I originally attempted a weighted relational matrix of inputs to outputs. Like a chequerboard where each square starts off with one piece that represents the strength of the relationship between an input and an output. To simulate learning, I would stack extra pieces on a square, for example to reinforce past choices, making it more likely to get picked in subsequent iterations. I called the engine fusspot, which I thought was a suitable name given that this algorithm could be trained but, in the end, choices were always fuzzy and random.

[Image: chequers board]

This worked, in a fashion. I could train the engine by modifying the strength of each relationship, making subsequent choices more likely to be biased by training, but it was a terrible solution when I tried to implement it in practice.
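To make the idea concrete, here is a minimal sketch of how such an engine could work. This is my reconstruction of the approach described above, not the actual fusspot source:

```javascript
// Rows are inputs, columns are outputs; each square starts with one
// "piece", and training stacks extra pieces to strengthen a relationship.
function createMatrix(numInputs, numOutputs) {
  const weights = Array.from({ length: numInputs }, () =>
    new Array(numOutputs).fill(1)
  );
  return {
    // Pick an output for an input, biased by the pieces stacked so far,
    // but always fuzzy and random.
    choose(input) {
      const row = weights[input];
      const total = row.reduce((sum, w) => sum + w, 0);
      let pick = Math.random() * total;
      for (let output = 0; output < row.length; output++) {
        pick -= row[output];
        if (pick < 0) return output;
      }
      return row.length - 1; // floating-point guard
    },
    // Reinforce a past choice by stacking an extra piece on its square.
    reinforce(input, output) {
      weights[input][output] += 1;
    },
  };
}

// For example: one IR sensor input, three outputs (stop, turn, keep going).
const engine = createMatrix(1, 3);
const action = engine.choose(0);
engine.reinforce(0, action); // makes this choice more likely next time
```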

The main problem is that this solution does not take context into account, and it turns out context is everything: you can't rely on a single input channel (such as an IR sensor) to determine the choices a robot can make (stop, turn, keep going). It takes a holistic approach that integrates data from all the sensors at the same time.


I got to a point where trying to break down the problem further wasn’t getting me anywhere. So I decided to look around and see what other people had done.

Video games and artificial gardening

By this point I had done a bit of reading on Artificial Neural Networks and decided that this was roughly where I was heading.

This led me into the world of video games. I’ve played plenty of games myself and know that game developers are at the leading edge of AI.

The Wikipedia article inspired me to look a bit more into a game called Creatures, written by a guy called Steve Grand. Steve dropped off the face of the earth a few years ago after a self-inflicted Kickstarter project gone wrong.

But before he did so, he wrote an excellent book called Creation: Life and How to Make It, where he details some of the techniques he used, as well as a Zen-like philosophical outlook on artificial life powerful enough to blow my socks off and turn all my notions of what AI is and should be about upside down:

Life emerges, even the artificial kind; it cannot be commanded to do so, only encouraged.

… There is no spoon.

I stopped trying to model the behaviour of my robot, trying to find the optimal learning algorithm, trying to work out the mathematics behind back-propagation, and instead simply sought to create the conditions that would allow artificial life to emerge.

Think of it as artificial gardening.


Neural Networks, 2.0

So. I started modelling a neural network in JavaScript.

It's not hard. Just an array of nodes, each containing another array of links to other nodes. With the added catch that each link to another node has a weight between 0 and 1, which determines whether the onward connection should fire during a chain-reaction event. My first commit was 100 lines long. 60 if you remove comments and whitespace.
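A minimal sketch of that structure might look like the following; the node count and the random wiring strategy are my assumptions, not a quote from that first commit:

```javascript
// An array of nodes, each containing an array of weighted links
// to other nodes.
function createNetwork(size, linksPerNode) {
  const nodes = Array.from({ length: size }, (_, id) => ({ id, links: [] }));
  for (const node of nodes) {
    for (let i = 0; i < linksPerNode; i++) {
      node.links.push({
        target: nodes[Math.floor(Math.random() * size)], // random onward node
        weight: Math.random(), // between 0 and 1
      });
    }
  }
  return nodes;
}

const network = createNetwork(100, 4); // 100 neurons, 4 links each
```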

Since I was trying to accurately model the biological system in our own brains (rather than a mathematical abstraction), I noticed my own gut telling me to add a slight delay between firing neurons. It seems natural really, given that signals in the brain travel at anywhere between walking speed and the speed of a racing car, so a signal is never going to travel all the way across the brain instantly.
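Here is one way to sketch that asynchronous firing. The delay per hop, and the rule that a link fires when its weight beats a random draw, are both my assumptions:

```javascript
// Fire a neuron: each outgoing link may trigger its target after a
// short delay, so a signal ripples across the network over time
// instead of propagating instantly.
function fire(node) {
  for (const link of node.links) {
    if (Math.random() < link.weight) {
      setTimeout(() => fire(link.target), 1); // ~1ms per hop
    }
  }
}

fire(network[0]); // kick off a chain-reaction event from one neuron
```

When the expected number of onward firings per neuron is below one, a chain reaction eventually dies out; above one, waves keep travelling around the network.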


Besides, a traditional neural network with a back-propagation algorithm seemed similar in nature to the relational matrix I'd already attempted, though certainly more evolved.

Input => Output, Input => Output, Input => Output.

In other words: An optimised sausage factory.

I had several input streams, one from each sensor, that needed to be integrated, and I needed something that could be trained to react to deltas (changes) in a particular input while taking all the other inputs into account, or even a changing set of inputs. Something that could behave more like a real brain: have a delayed reaction to an input, or integrate an input with itself. Hence it felt right to try a different angle and throw some asynchronicity into the mix.

Chaos and the game of life

I needed to visualize my network in order to understand what it was doing.

Thankfully, JavaScript is a great language for that: there are tons of libraries out there for visualization, and thanks to V8 and the NodeJS ecosystem you can run the same code in the browser as in a command-line executable. I used browserify to require my modules on a browser page and viz.js to perform visualizations using an HTML5 <canvas/>.
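As a rough illustration of the setup (the file name, module shape and layout are placeholders, not the project's actual code), the same module can be required under Node or bundled for a browser page with browserify main.js -o bundle.js:

```javascript
// main.js — bundle for the browser with: browserify main.js -o bundle.js
const createNetwork = require('./network'); // same module runs under Node

const network = createNetwork(100, 4);
const ctx = document.querySelector('canvas').getContext('2d');

// Draw each neuron as a dot, using a naive random layout.
for (const node of network) {
  node.x = node.x || Math.random() * 500;
  node.y = node.y || Math.random() * 500;
  ctx.fillRect(node.x, node.y, 3, 3);
}
```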

The result was intriguing. I noticed all sorts of patterns, travelling waves and oscillations, taking place when a neuron was fired. It was utter chaos, but it was also mesmerizing.


I felt I was on to something. After all, electrical wave patterns are an inherent part of the human brain, so why do prevailing models of neural networks ignore them?

I tried different network shapes: a ball, a sausage and a doughnut. All produced interesting variations. Jonathan, a colleague at work, said it reminded him of Conway's Game of Life. I had to agree with him.

Now all I needed was a way to plug my robots into it.