Decoding Gibberish using Neural Networks


April 16, 2017

Training a neural network to predict word embeddings from spelling, and using nearest neighbor search to decode meaning.


Use a Neural Network to Infer Meaning from Gibberish


This is a simple experiment in predicting a word’s meaning purely from its spelling. A recurrent neural network is first trained to predict word embeddings from spelling, using a subset of 2,000 word-embedding pairs trained with GloVe on a Wikipedia crawl. The subset is filtered for common words that are at least five letters long, so many words won’t appear in the search results. Next, a new word is fed to the network to get its word embedding. The nearest neighbors to that embedding are then shown; these are the words the network thinks are closest in meaning to the one presented. All of this happens in your browser using Keras.js, with nearest neighbor decoding done in NumJS.
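The vocabulary filter described above can be sketched in a few lines. This is a hedged illustration, not the original preprocessing script: `filter_vocab` and the toy embedding dictionary are made-up names, and the vectors are stand-ins for real GloVe rows. It assumes the source list is already ordered by word frequency, so truncating at 2,000 keeps the common words.

```python
# Sketch of the training-set filter: from GloVe-style {word: vector} pairs,
# keep common words that are at least five letters long, up to 2,000 total.
# The toy vectors below are stand-ins, not real GloVe embeddings.
def filter_vocab(embeddings, min_len=5, max_words=2000):
    """Keep purely alphabetic words of at least `min_len` letters."""
    kept = {}
    for word, vec in embeddings.items():
        if word.isalpha() and len(word) >= min_len:
            kept[word] = vec
        if len(kept) == max_words:
            break
    return kept

toy = {"king": [0.1], "queen": [0.2], "a": [0.0], "where": [0.3], "x1": [0.4]}
subset = filter_vocab(toy)
print(sorted(subset))  # ['queen', 'where'] -- only these pass the filter
```

"king" is dropped here only because it is four letters long; with a real frequency-sorted GloVe vocabulary the same filter yields the 2,000-word training subset.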

Neural Networks in your Browser

Because the model is deployed with Keras.js, you can run a full neural network in your browser, even though this site is hosted statically on GitHub Pages (the network files are fairly large, though, so it can be buggy, especially on slow connections). In the future I hope to build more interactive examples like this, for language modeling and other tasks.

The model architecture used to map words to their corresponding embeddings. This is a relatively simple recurrent neural network architecture. This model is trained on a subset of existing word embeddings.
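To make the spelling-to-embedding idea concrete, here is a minimal NumPy sketch of what such a recurrent architecture computes: a simple recurrent cell reads a word one character at a time, and the final hidden state is projected to the embedding dimension. The layer sizes, random weights, and the 50-dimensional output are illustrative assumptions; the actual model is a trained Keras network, not this hand-rolled cell.

```python
import numpy as np

# Illustrative sketch only: an untrained vanilla RNN cell that maps a word's
# character sequence to a 50-d vector (50-d chosen to mirror small GloVe).
rng = np.random.default_rng(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
CHAR_DIM, HIDDEN, EMBED = len(ALPHABET), 32, 50

W_xh = rng.standard_normal((CHAR_DIM, HIDDEN)) * 0.1  # input -> hidden
W_hh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1    # hidden -> hidden
W_hy = rng.standard_normal((HIDDEN, EMBED)) * 0.1     # hidden -> embedding

def word_to_embedding(word):
    """Run the recurrent cell over the word's characters, left to right."""
    h = np.zeros(HIDDEN)
    for ch in word.lower():
        x = np.zeros(CHAR_DIM)
        x[ALPHABET.index(ch)] = 1.0          # one-hot encode the character
        h = np.tanh(x @ W_xh + h @ W_hh)     # recurrent state update
    return h @ W_hy                          # project to embedding space

vec = word_to_embedding("plageus")
print(vec.shape)  # (50,)
```

Training replaces the random matrices with weights that minimize the distance between the predicted vector and the word’s true GloVe embedding.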

Nearest Neighbor Decoding

Word embeddings are a ubiquitous idea in language modeling: represent each word as a vector, where the vector’s location in vector space encodes the word’s meaning. There are some cool examples of what this looks like in practice; the most common is that if you take the vector for “king”, subtract the vector for “man”, and add the vector for “woman”, you land near the vector for “queen”. Our neural network, then, is essentially trying to predict a word’s meaning purely from its spelling, and we can decode that meaning by finding words with similar vectors. As with a lot of neural applications, interpreting the results is a bit like finding images in clouds.
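Nearest neighbor decoding itself is just a similarity ranking over the vocabulary. The sketch below uses cosine similarity, a common choice for comparing word vectors (the post doesn’t specify which metric NumJS is used for, so treat that as an assumption); the tiny 2-d “embeddings” are made up to keep the example self-contained.

```python
import numpy as np

# Toy 2-d stand-ins for real word embeddings (real GloVe vectors are 50d+).
vocab = {
    "king":  np.array([0.9, 0.1]),
    "queen": np.array([0.8, 0.3]),
    "apple": np.array([0.1, 0.9]),
}

def nearest(query, vocab, k=2):
    """Return the k vocabulary words closest to `query` by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(vocab, key=lambda w: cos(query, vocab[w]), reverse=True)[:k]

print(nearest(np.array([0.85, 0.2]), vocab))  # ['king', 'queen']
```

In the demo, the query vector is the network’s predicted embedding for the typed word, and the ranked neighbors are what appear in the search results.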

Did you ever hear the tragedy of Darth Plagueis The Wise? If you type in "plageus", you end up with a vector embedding that is close (in vector space) to the vector embeddings for "where", "opened" and "places".