
Adversarial Neural Networks

Adversarial neural networks use an architecture consisting of two separate neural networks: one network attempts to learn how to accomplish a task, while the other attempts to differentiate between the output of the first network and the "real" output.
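
To make the setup concrete, here is a minimal sketch of that two-network arrangement, written against TensorFlow 2/Keras on a toy one-dimensional task. None of this code comes from the models repository; the network sizes, data, and learning rates are arbitrary choices for illustration.

import tensorflow as tf

# Network 1: attempts the task (here, generating samples that resemble the real data).
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Network 2: attempts to tell the first network's output apart from the "real" output.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),   # single logit: real vs. generated
])

generator(tf.zeros([1, 4]))        # build the variables before training
discriminator(tf.zeros([1, 1]))

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

for step in range(1000):
    real = tf.random.normal([64, 1], mean=3.0, stddev=0.5)   # toy "real" data
    noise = tf.random.normal([64, 4])

    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(noise)
        real_logits = discriminator(real)
        fake_logits = discriminator(fake)
        # The discriminator wants real samples labeled 1 and generated samples labeled 0 ...
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # ... while the generator wants its samples mistaken for real ones.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)

    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))

The important point is the opposing objectives: the discriminator's loss rewards separating real samples from generated ones, while the generator's loss rewards fooling the discriminator.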

TensorFlow Adversarial Examples

Adversarial Crypto

This model applies the adversarial architecture to cryptography: a pair of networks learns how to protect its communications from an eavesdropping network.

Paper: "Learning to Protect Communications with Adversarial Neural Cryptography"

Link to paper: https://arxiv.org/abs/1610.06918

Link to code: https://github.com/tensorflow/models/tree/master/research/adversarial_crypto

Part of the tensorflow models repository (https://github.com/tensorflow/models/tree/master/research).

Running

To train the network:

$ python train_eval.py

The training approach is to train the "defender" network (representing the Alice-Bob channel) until it is sufficiently well-trained, then reset the "attacker" network (representing the eavesdropper Eve) and retrain it from scratch. Resetting Eve repeatedly gives the eavesdropper multiple fresh opportunities to find weaknesses in the cryptosystem.

The Model

Here's the link to the code: https://github.com/tensorflow/models/blob/master/research/adversarial_crypto/train_eval.py

A full line-by-line walkthrough of the model is on the TensorFlow/Adversarial Crypto page.

The rundown is:

  • Create an AdversarialCrypto class that holds a training optimizer object for the Bob and Alice networks
  • Define a method that evaluates the networks as-is and prints the percent losses
  • Define a method that trains the network for a specified number of iterations, stopping early if the network reaches its target losses
  • Define a method that calls the training function (above), then re-trains Eve several more times from scratch (see the sketch below)
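
Putting that rundown together, here is a rough structural sketch of those pieces. This is not the repository's code: the actual train_eval.py is written against the older TensorFlow 1.x API, and the real network architecture, loss targets, and hyperparameters differ; all names and numbers below are illustrative assumptions.

import tensorflow as tf

MSG_LEN = 16      # plaintext/key length in bits (illustrative)
BATCH = 512

def make_net(out_dim):
    """Small dense network; the real model uses a different architecture."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(2 * MSG_LEN, activation="relu"),
        tf.keras.layers.Dense(out_dim, activation="tanh"),   # outputs in [-1, 1]
    ])

class AdversarialCrypto:
    """Holds the Alice/Bob (defender) and Eve (attacker) networks and their optimizers."""

    def __init__(self):
        self.alice = make_net(MSG_LEN)                 # (plaintext, key) -> ciphertext
        self.bob = make_net(MSG_LEN)                   # (ciphertext, key) -> plaintext
        self.eve = make_net(MSG_LEN)                   # ciphertext only -> plaintext guess
        self.opt_ab = tf.keras.optimizers.Adam(1e-3)   # trains Alice and Bob together
        self.opt_eve = tf.keras.optimizers.Adam(1e-3)  # trains Eve

    def losses(self, plaintext, key):
        ciphertext = self.alice(tf.concat([plaintext, key], axis=1))
        bob_out = self.bob(tf.concat([ciphertext, key], axis=1))
        eve_out = self.eve(ciphertext)
        # Average number of bits Bob/Eve get wrong (bits are +/-1, so chance = MSG_LEN/2).
        bob_bits_wrong = tf.reduce_mean(tf.abs(plaintext - bob_out)) / 2.0 * MSG_LEN
        eve_bits_wrong = tf.reduce_mean(tf.abs(plaintext - eve_out)) / 2.0 * MSG_LEN
        # Alice/Bob objective: reconstruct well AND push Eve toward chance.
        ab_loss = bob_bits_wrong + tf.square(MSG_LEN / 2.0 - eve_bits_wrong) / (MSG_LEN / 2.0) ** 2
        return ab_loss, bob_bits_wrong, eve_bits_wrong

def random_bits(n):
    return tf.sign(tf.random.uniform([BATCH, n], -1.0, 1.0))

def evaluate(model):
    """Evaluate the networks as-is and print how many bits Bob and Eve get wrong."""
    _, bob_err, eve_err = model.losses(random_bits(MSG_LEN), random_bits(MSG_LEN))
    print("bob bits wrong: %.3f   eve bits wrong: %.3f" % (float(bob_err), float(eve_err)))

def train(model, steps=5000, target_bob=0.05, target_eve=7.0):
    """Train for a fixed number of iterations, stopping early once targets are reached."""
    for step in range(steps):
        plaintext, key = random_bits(MSG_LEN), random_bits(MSG_LEN)
        with tf.GradientTape() as ab_tape, tf.GradientTape() as eve_tape:
            ab_loss, bob_err, eve_err = model.losses(plaintext, key)
        ab_vars = model.alice.trainable_variables + model.bob.trainable_variables
        eve_vars = model.eve.trainable_variables
        model.opt_ab.apply_gradients(zip(ab_tape.gradient(ab_loss, ab_vars), ab_vars))
        model.opt_eve.apply_gradients(zip(eve_tape.gradient(eve_err, eve_vars), eve_vars))
        if bob_err < target_bob and eve_err > target_eve:
            break
    evaluate(model)

def train_and_reattack(num_attacks=5):
    """Train the defender, then repeatedly reset Eve so she can attack from scratch."""
    model = AdversarialCrypto()
    train(model)
    for _ in range(num_attacks):
        model.eve = make_net(MSG_LEN)                  # fresh eavesdropper
        model.opt_eve = tf.keras.optimizers.Adam(1e-3)
        train(model)

The Alice/Bob loss combines Bob's reconstruction error with a term that pushes Eve's error toward chance (half the bits wrong), roughly along the lines of the loss in the paper; the real script also schedules the defender and attacker updates differently than this simple simultaneous update.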

Adversarial Text

This model trains a neural network to classify the sentiment of IMDB movie reviews, illustrating semi-supervised learning.

Link to code: https://github.com/tensorflow/models/tree/master/research/adversarial_text
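
The "adversarial" part refers to adversarial training on the word embeddings: the model perturbs the embedded input in the direction that most increases the loss and trains on the perturbed input as well, with a "virtual" adversarial variant supplying the semi-supervised signal on unlabeled reviews. The sketch below shows only the supervised perturbation idea; it is not the repository's code, and the classifier, sizes, and epsilon are placeholder assumptions.

import tensorflow as tf

VOCAB, EMBED = 10000, 64                       # placeholder vocabulary/embedding sizes
embedding = tf.keras.layers.Embedding(VOCAB, EMBED)
classifier = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling1D(),  # stand-in for the repo's recurrent model
    tf.keras.layers.Dense(1),                  # sentiment logit
])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(token_ids, labels, epsilon=5.0):
    """One step of adversarial training: clean loss plus loss on perturbed embeddings."""
    with tf.GradientTape() as tape:
        embedded = embedding(token_ids)
        # Gradient of the clean loss with respect to the embedded input.
        with tf.GradientTape() as inner:
            inner.watch(embedded)
            clean_loss = bce(labels, classifier(embedded))
        grad = tf.stop_gradient(inner.gradient(clean_loss, embedded))
        # Perturb the embeddings a small distance in the direction that increases the loss.
        perturbation = epsilon * tf.math.l2_normalize(grad, axis=[1, 2])
        adv_loss = bce(labels, classifier(embedded + perturbation))
        loss = clean_loss + adv_loss
    variables = embedding.trainable_variables + classifier.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

# Example usage on a random batch (shapes: [batch, sequence] token ids, [batch, 1] labels).
ids = tf.random.uniform([8, 100], maxval=VOCAB, dtype=tf.int32)
labels = tf.cast(tf.random.uniform([8, 1], maxval=2, dtype=tf.int32), tf.float32)
train_step(ids, labels)

In the full model, a second "virtual" adversarial loss applies the same kind of perturbation to unlabeled reviews, penalizing changes in the model's own predictions rather than in a labeled loss; that is where the semi-supervised learning comes in.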

Running

Running this model is slightly more complicated than running the adversarial crypto network.

The adversarial text network steps are as follows:

  • fetch data
  • generate vocab
  • generate training/validation/test data
  • pretrain language model
  • train classifier
  • evaluate classifier on test data

Get Vocabulary Data

Start by obtaining the data, which is an 80 MB tar file, and decompressing it:

$ wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz -O /tmp/imdb.tar.gz

$ tar -xf /tmp/imdb.tar.gz -C /tmp

$ du -hs /tmp/aclImdb
487M	/tmp/aclImdb

Build the Vocabulary

Use a Bazel job to build the vocabulary from the data:

$ IMDB_DATA_DIR=/tmp/imdb

$ bazel run data:gen_vocab -- \
    --output_dir=$IMDB_DATA_DIR \
    --dataset=imdb \
    --imdb_input_dir=/tmp/aclImdb \
    --lowercase=False

This uses a build rule called gen_vocab located in data/BUILD:

py_binary(
    name = "gen_vocab",
    srcs = ["gen_vocab.py"],
    deps = [
        ":data_utils",
        ":document_generators",
        # tensorflow dep,
    ],
)

Unfortunately, this vocabulary-building step is failing. See GitHub issue 1917: https://github.com/tensorflow/models/issues/1917

Adversarial Image Network

Flags