Blog: Logic with Neural Networks (2018-07-10)

Artificial neural networks are a fascinating topic, as they make it possible to parametrise complex target functions in a very flexible way. Recurrent neural networks are even more powerful, as their "execution" can contain arbitrary feedback loops.

In recent years, neural networks have established themselves as a key technology for advancements in machine learning and AI. But they were known long before the hype of recent years. In fact, I have been interested in them for a long time as well, although I never did any serious research and only ever played around for fun.

One thing that fascinates me particularly about R-ANNs with a Heaviside activation function is that they can quite easily be used to build logical operations. Combined with the recurrent nature of the network, this quickly leads to something like Turing completeness. In other words, R-ANNs can be used to "encode" or "execute" arbitrary computations.
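To make this concrete, here is a minimal sketch of how single Heaviside-activated neurons can act as logic gates. The particular weights and thresholds below are one standard choice for illustration, not necessarily the ones my compiler uses:

```python
def heaviside(x):
    # Step activation: the neuron fires (outputs 1) iff its
    # weighted input reaches the threshold, else it outputs 0.
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    # A single threshold unit: fires iff sum(w_i * x_i) + b >= 0.
    return heaviside(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Each gate is just one neuron with suitable weights and bias:
def AND(a, b):
    return neuron([a, b], [1, 1], -2)   # fires only if both inputs are 1

def OR(a, b):
    return neuron([a, b], [1, 1], -1)   # fires if at least one input is 1

def NOT(a):
    return neuron([a], [-1], 0)         # fires exactly when the input is 0
```

Since AND, OR and NOT together are functionally complete, any Boolean circuit can be assembled from such units.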

A long time ago, I designed a simple programming language and a corresponding compiler to build neural networks from programs. I then applied genetic algorithms to those programs, trying to optimise them further. (The idea being that one could construct good starting points for further optimisation by writing an imperfect program for the task.) My results at the time seemed quite positive, although I did not work in anything close to a proper scientific manner. Also, the programming language was never really complete, and lacks, for instance, pointer arithmetic (using variables as indices into arrays)—not because that is impossible with neural networks, but because I simply never implemented it.

Many years later, I became interested in this topic again. But since the old project was quite complex, I decided to approach the task from a different angle: Instead of implementing a full-fledged programming language, I built a compiler for Brainfuck that outputs R-ANNs. This made the task much simpler—but since Brainfuck is Turing complete, it did not sacrifice any computational power (at the expense of the sanity of any poor human who decides to write programs for my compiler).
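Compiling Brainfuck also requires persistent state (the tape cells), which is where the recurrence comes in. As a purely illustrative sketch—not necessarily how my compiler encodes it—a single Heaviside unit with a feedback connection to itself can act as a one-bit latch:

```python
def heaviside(x):
    # Step activation, as before: fire iff the input reaches the threshold.
    return 1 if x >= 0 else 0

def latch_step(state, set_bit, reset_bit):
    # One recurrent unit whose own output feeds back as an input.
    # Once set, the feedback keeps it firing on every step until a
    # reset input drives it back below the threshold.
    return heaviside(state + set_bit - 2 * reset_bit - 1)

# Run the latch through a few time steps:
s = 0
s = latch_step(s, 1, 0)   # set the bit
s = latch_step(s, 0, 0)   # holds its value via the feedback loop
s = latch_step(s, 0, 1)   # reset the bit
```

Chaining such memory units with the Boolean gates from above is, in essence, what makes the networks powerful enough to execute arbitrary programs.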

Due to the much simpler task, I was now actually able to finish the project: A first version of NeuralBF is now officially released—supporting all of the few features of Brainfuck. Take a look at the source repository on GitLab for more information, including how it all works.

One final remark: There has been quite a bit of proper research lately (the linked papers being only two randomly selected examples) that is related to my work here. All the results I've seen, however, have a different twist to them: They try to train a neural network from scratch, which then performs algorithmic tasks (or writes programs). My project, on the other hand, is about constructing the neural networks explicitly. (Presumably, though, that is because learning the network is simply more useful in practice. If my work could be useful as well, then my guess is in the way discussed above: for generating starting points for further optimisation. But who knows!)

Copyright © 2011–2019 by Daniel Kraft