The goal of the image recognition system explained here is to find parameter values that result in the model’s output being correct as often as possible. This kind of training, in which the correct solution is used together with the input data, is called supervised learning. (Unsupervised learning is out of the scope of this article.)
After the training phase is complete, the model's parameter values no longer change, and the model can be used in the testing phase to classify images that were not part of its training dataset.
Preparation
[1] Python
https://www.python.org/downloads/
Download 3.6.4 (or later) and install it.
[2] TensorFlow
https://www.tensorflow.org/versions/r0.12/get_started/os_setup
Pip installation
Access
http://pip.readthedocs.org/en/stable/installing/
and download get-pip.py, saving it in your directory.
Run the following command in your directory (e.g., in Terminal on macOS):
python get-pip.py
https://www.tensorflow.org/versions/r0.12/get_started/os_setup#pip_installation
pip install tensorflow
[3] The CIFAR-10 python version dataset:
https://www.cs.toronto.edu/~kriz/cifar.html
Download "CIFAR-10 python version"
Then extract the downloaded file and save the cifar-10-batches-py folder in your directory.
It consists of 60,000 images (10 different categories * 6,000 images per category). Each image has a size of 32 by 32 pixels. Each pixel is described by three floating point numbers representing the red (R), green (G) and blue (B) values for this pixel. This results in 32 x 32 x 3 = 3,072 values for each image.
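To see how the 3,072 values per image map back to a 32 x 32 color picture, here is a minimal numpy sketch. It uses a synthetic batch instead of a real batch file (loading a real one would use pickle on a file from cifar-10-batches-py); in the python version of the dataset each row stores the 1,024 red values first, then green, then blue.

```python
import numpy as np

# Synthetic stand-in for one CIFAR-10 python batch: 5 images,
# each a row of 3,072 uint8 values (1,024 per color channel).
batch = np.random.randint(0, 256, size=(5, 3072), dtype=np.uint8)

# Reshape each row into (channels, height, width), then move the
# channel axis last to get the usual (height, width, channels) layout.
images = batch.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)

print(images.shape)  # (5, 32, 32, 3)
```

The same reshape works on a real batch loaded with pickle, since each batch stores its images as a (10000, 3072) array.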
[4] Python scripts
Download all the .py scripts from
https://github.com/wolfib/image-classification-CIFAR10-tf
and save them in your directory.
1. Softmax (not a neural network)
Run the following command in your directory (e.g., in Terminal on macOS):
python softmax.py
Step 0: training accuracy 0.08
Step 100: training accuracy 0.3
Step 200: training accuracy 0.26
Step 300: training accuracy 0.24
Step 400: training accuracy 0.34
Step 500: training accuracy 0.28
Step 600: training accuracy 0.35
Step 700: training accuracy 0.27
Step 800: training accuracy 0.37
Step 900: training accuracy 0.37
Test accuracy 0.266
Total time: 3.95s
The accuracy of the trained model on the test set is about 27% (this may vary in your environment). So our model picks the correct label for an image it has never seen before about 27% of the time. There are 10 different labels, so random guessing would give an accuracy of only 10%.
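A softmax classifier is just a single linear layer whose scores are turned into class probabilities, trained with cross-entropy and gradient descent. As a rough sketch of the idea (not the author's softmax.py, and using random data in place of CIFAR-10):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for CIFAR-10: 500 random "images" with 3,072 features
# each, and 10 classes. Real training would use the actual dataset.
n, d, k = 500, 3072, 10
x = rng.standard_normal((n, d)).astype(np.float32)
y = rng.integers(0, k, size=n)

w = np.zeros((d, k), dtype=np.float32)  # weights
b = np.zeros(k, dtype=np.float32)       # biases

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.005
for step in range(100):
    p = softmax(x @ w + b)            # class probabilities per image
    grad = p.copy()
    grad[np.arange(n), y] -= 1.0      # gradient of cross-entropy w.r.t. logits
    w -= lr * (x.T @ grad) / n        # gradient descent step
    b -= lr * grad.mean(axis=0)

acc = (np.argmax(x @ w + b, axis=1) == y).mean()
print(f"training accuracy {acc:.2f}")
```

The hyperparameters (learning rate, number of steps) are illustrative only; softmax.py uses its own settings and the real dataset.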
2. Neural Network
Let's build a neural network that performs the same task.
two_layer_fc.py
This defines the model.
run_fc_model.py
This runs the model (‘fc’ stands for fully connected).
Run the following command in your directory (e.g., in Terminal on macOS):
python run_fc_model.py
Parameters:
batch_size = 400
hidden1 = 120
learning_rate = 0.001
max_steps = 2000
reg_constant = 0.1
train_dir = tf_logs
2017-12-29 12:50:00.619290: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Step 0, training accuracy 0.1
Step 100, training accuracy 0.3225
Step 200, training accuracy 0.365
Step 300, training accuracy 0.3925
Step 400, training accuracy 0.4125
Step 500, training accuracy 0.4575
Step 600, training accuracy 0.3925
Step 700, training accuracy 0.4475
Step 800, training accuracy 0.4875
Step 900, training accuracy 0.475
Saved checkpoint
Step 1000, training accuracy 0.5025
Step 1100, training accuracy 0.5125
Step 1200, training accuracy 0.5025
Step 1300, training accuracy 0.45
Step 1400, training accuracy 0.515
Step 1500, training accuracy 0.515
Step 1600, training accuracy 0.5625
Step 1700, training accuracy 0.5575
Step 1800, training accuracy 0.5525
Step 1900, training accuracy 0.5375
Saved checkpoint
Test accuracy 0.4628
Total time: 34.08s
The test accuracy (about 46%) is not far below the final training accuracy (roughly 54%), which indicates that the model is not severely overfitting. The softmax classifier's test accuracy was about 27%, so 46% is a relative improvement of roughly 75%.
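The shapes in the two-layer fully connected network follow the run parameters above: 3,072 inputs, a hidden layer of 120 units (hidden1), and 10 output classes, with batches of 400 images. As a forward-pass sketch only (not the author's two_layer_fc.py, with random weights and data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes taken from the run parameters printed above.
batch_size, n_in, n_hidden, n_out = 400, 3072, 120, 10

x = rng.standard_normal((batch_size, n_in)).astype(np.float32)
w1 = rng.standard_normal((n_in, n_hidden)).astype(np.float32) * 0.01
b1 = np.zeros(n_hidden, dtype=np.float32)
w2 = rng.standard_normal((n_hidden, n_out)).astype(np.float32) * 0.01
b2 = np.zeros(n_out, dtype=np.float32)

hidden = np.maximum(0.0, x @ w1 + b1)  # hidden layer with ReLU activation
logits = hidden @ w2 + b2              # one score per class per image
pred = logits.argmax(axis=1)           # predicted label per image

print(pred.shape)  # (400,)
```

Training this network (the part run_fc_model.py performs) would add a cross-entropy loss on the logits plus the L2 regularization term controlled by reg_constant, and update all four parameter arrays by gradient descent.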
Run the following command in your directory (e.g., in Terminal on macOS):
tensorboard --logdir=tf_logs
Then open the following URL in your browser:
(your hostname).local:6006
Source:
http://www.wolfib.com