Keras
Neural network package in Python.
Code
GitHub Repos
charlesreid1
- https://github.com/charlesreid1/in-your-face - examples of fitting Keras neural networks to the LFW (Labeled Faces in the Wild) dataset.
- https://github.com/charlesreid1/lfw_fuel - fork of the lfw_fuel repo, which uses the kerosene and fuel Python libraries to organize and package the LFW data in a format that is easy to load and hand off to Keras.
- https://github.com/charlesreid1/circe - specifically the examples using the NIST handwritten digit classification dataset. These show how to use Keras to train a neural network to perform dimensionality reduction, then explore the manifold the network identified for the different digits to better understand the model.
Keras
- https://github.com/fchollet/keras - the main Keras repo
- https://github.com/farizrahman4u/keras-contrib - Keras community contributions
- https://github.com/fchollet/deep-learning-models - pre-trained models for Keras, now available in the main Keras repo/package
- https://github.com/fchollet/hualos - Keras total visualization project (Keras RemoteMonitor callback → JSON → Flask → C3)
Keras Forks
- https://github.com/MarcBS/keras - Keras fork with additional functionality
- https://github.com/MarcBS/multimodal_keras_wrapper - wrapper for MarcBS' Keras fork (see prior entry)
- https://github.com/dmlc/keras - Fork of Keras that supports an MXNet backend
Paper/Network Implementations
- https://github.com/titu1994/Neural-Style-Transfer - neural style transfer (implementation of the paper) in Keras
- https://github.com/ellisvalentiner/credit-card-fraud - analysis of credit card fraud (uses custom Keras layer)
- https://github.com/maciejkula/triplet_recommendations_keras - movie recommendation with triplet loss function in Keras
- https://github.com/bstriner/keras-adversarial - GANs (generative adversarial networks) using Keras
- https://github.com/farizrahman4u/recurrentshop - Keras framework for building complex RNNs
- https://github.com/usernaamee/keras-wavenet - Keras implementation of DeepMind's WaveNet paper
- https://github.com/kentsommer/keras-inceptionV4 - Keras implementation of Inception V4 architecture
- https://github.com/flyyufelix/DenseNet-Keras - Keras implementation of DenseNet with ImageNet weights
- https://github.com/pengpaiSH/Kaggle_NCFM - Keras solution for the Kaggle Nature Conservancy Fisheries Monitoring competition
- https://github.com/kylemcdonald/SmileCNN - CNN to detect smiles with Keras
- https://github.com/titu1994/Super-Resolution-using-Generative-Adversarial-Networks - super resolution with generative adversarial networks
- https://github.com/snf/keras-fractalnet - Keras implementation of FractalNet (ultra-deep neural networks without residuals)
- https://github.com/gustavla/fractalnet - original implementation by the paper's author
- https://github.com/alexander-rakhlin/CNN-for-Sentence-Classification-in-Keras - CNN for sentence classification in Keras
- https://github.com/udibr/headlines - generation of short headlines for articles using RNNs and NLP in Keras
- https://github.com/jinfagang/LSTM_learn - Keras for LSTM and time series prediction
- https://github.com/kengz/openai_lab - reinforcement learning with Keras
- https://github.com/jisungk/deepjazz - deep learning model for generating jazz music
- https://github.com/bstriner/keras-tqdm - Keras plus TQDM for nice progress bars
Related Utilities
- https://github.com/keplr-io/quiver - convolutional neural net visualization for Keras
- https://github.com/yusugomori/deeplearning-tensorflow-keras - extensive notes/notebooks on different architectures, divided into chapters/sections
- https://github.com/bstriner/bayesian_dense - Bayesian weight uncertainty dense layer for Keras
- https://github.com/bstriner/dense_tensor - dense Tensor layer for Keras
- https://github.com/sandeep-krishnamurthy/keras-mxnet-benchmarks - examples for profiling performance of Keras on the MXNet backend
- https://github.com/joeddav/devol - automated hyperparameter tuning and network design with a genetic algorithm
Really Cool Stuff
LSTMetallica:
- https://soundcloud.com/kchoi-research/sets/lstmetallica-drums
- https://github.com/keunwoochoi/LSTMetallica
Notes
I think my in-your-face repository (https://github.com/charlesreid1/in-your-face) does a pretty good job of making notes on how to use Keras as we go.
Check out the IPython notebooks in that repository.
Errors
Notes on errors with Keras
Convolutional Neural Networks
Max Pooling 2D Applied to Wrong Dimensions
In a convolutional neural network, the network architecture generally looks like the following stack (a minimal Keras sketch follows the list):
- Convolution
- Convolution
- Pool
- Dropout
- Flatten
- Dense
- Dropout
- Dense
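As a concrete sketch of that stack (the layer sizes, dropout rates, and the 10-class softmax output here are hypothetical, just to make the architecture concrete):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# two convolution layers, then pool and regularize
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# flatten the feature maps and classify with dense layers
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))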
The problem was with how the Pool layer was working, and which dimensions it was pooling. I was constructing the neural network as follows, starting with two Convolution2D layers. The input data had shape (6, 32, 32): each sample consisted of TWO 3-channel images (hence, 6 channels) at a 32 x 32 pixel resolution. Thus, the Convolution2D layers looked like this:
modelA.add(Conv2D(32, (3, 3), input_shape=(6, 32, 32), padding='same', activation='relu'))
modelA.add(Conv2D(32, (3, 3), input_shape=(6, 32, 32), padding='same', activation='relu'))
modelA.summary()
This resulted in the following model summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 6, 32, 32)         9248
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 6, 32, 32)         9248
=================================================================
Total params: 18,496
Trainable params: 18,496
Non-trainable params: 0
But then, when I added a MaxPooling2D layer, it applied the pooling operation to the wrong dimensions - it was pooling the channels!
modelA.add(MaxPooling2D(pool_size=(4, 4)))
This resulted in the MaxPooling2D layer having an output shape of (1, 8, 32) - the pooling layer was pooling the channels, plus the first dimension of each photo, while leaving the second dimension alone (it treated the second dimension as the channels).
I eventually figured out how to fix this by studying the MaxPooling2D documentation page: https://keras.io/layers/pooling/#maxpooling2d
This revealed a data_format option ('channels_first' or 'channels_last') that I had not seen or set. Once I added it, I got what I was after:
modelA.add(MaxPooling2D(pool_size=(4, 4), data_format='channels_first'))
modelA.summary()
This led to an output shape of (6, 8, 8), as desired.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 6, 32, 32)         9248
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 6, 32, 32)         9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 8, 8)           0
=================================================================
Total params: 18,496
Trainable params: 18,496
Non-trainable params: 0
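An alternative to passing data_format to each layer is to set the image data format once, globally, through the Keras backend; this should make channels-first the default for every layer, though I have not verified it against this exact model:

from keras import backend as K

# make Conv2D / MaxPooling2D layers default to channels-first
K.set_image_data_format('channels_first')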
Binary Categorization
Accuracy stays at exactly 50% and does not change
I was running into an issue with Keras where, when training a binary categorization model, I was seeing an accuracy of exactly 50% that never changed at all.
This turned out to be due to a couple of reasons:
- Poor choice of activation function
- Poor choice of optimizer
Check out the first example here: https://keras.io/getting-started/sequential-model-guide/#training
This gives an example of a simple binary categorization network:
# For a single-input model with 2 classes (binary classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))

# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
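For completeness, predictions from this model come back as one probability per sample, which you threshold yourself (hypothetical usage lines, not part of the docs example):

probs = model.predict(data[:3])      # values in [0, 1], one per sample
yes_no = (probs > 0.5).astype(int)   # threshold at 0.5 for a yes/no label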
Note a few things being done here:
- Optimizer being used is "rmsprop"
- Activation function of last layer, Dense(1), is sigmoid
- The last layer is a dense layer with a single neuron, which (when on) represents yes, and (when off) represents no
What I was doing wrong:
- I had set the activation function of the last Dense layer as "softmax", which meant that I was ALWAYS predicting "yes": softmax normalizes over a layer's units, so a single-neuron softmax layer always outputs exactly 1 (see the sketch after this list). Because my training set had a 50/50 mix of yes/no cases, I was getting an accuracy of exactly 50% because I was always guessing "yes".
- (Even earlier) I had initially been using TWO Dense neurons, e.g., Dense(2), to get yes/no. This is not correct! Dense(2) with softmax pairs with categorical_crossentropy and one-hot labels; for binary_crossentropy you want a single sigmoid neuron.
- I was trying to use an SGD optimizer... something fancy-pants that I should not have been using
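The softmax mistake is easy to demonstrate outside of Keras entirely: softmax over a single unit is always exactly 1.0, regardless of the input. A quick standalone numpy sketch:

import numpy as np

def softmax(x):
    # subtract the max for numerical stability, then normalize
    e = np.exp(x - np.max(x))
    return e / e.sum()

print(softmax(np.array([-3.0])))   # [ 1.] - a one-unit softmax always outputs 1
print(softmax(np.array([42.0])))   # [ 1.] - i.e., it always predicts "yes"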
Resources
Ugh, a big ugly giant hairy list of links. I'll do my best to keep this maintained.
Learning Keras
Building a very, very simple neural network with Keras - https://www.pyimagesearch.com/2016/09/26/a-simple-neural-network-with-python-and-keras/