By starting your biases at 0, y will be close to 0, because the positive and negative terms of the weighted sum roughly cancel:
y = w * x + b = sum(.1 * -1, .9 * -.9, ..., .1 * 1, .9 * .9) + 0 = 0
So your biases should be:
biases = {
'b1': tf.Variable(tf.zeros([n_hidden_1])),
'b2': tf.Variable(tf.zeros([n_hidden_2])),
'output': tf.Variable(tf.zeros([n_output]))
}
This is sufficient to output numbers smaller than 0.5:
[1. 0.4492423 0.4492423...0.4492423 0.4492423 1.]
predictions mean: 0.7023628
confusion matrix: [
[4370 1727]
[1932 3971]
]
accuracy: 0.6950833333333334
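As a quick sanity check, that accuracy is just the diagonal of the confusion matrix divided by the total number of samples; here is a minimal NumPy sketch using the values printed above (assuming the usual rows-are-true, columns-are-predicted convention):
import numpy as np

# confusion matrix values copied from the output above
cm = np.array([[4370, 1727],
               [1932, 3971]])

# accuracy = correctly classified samples (diagonal) / all samples
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 0.6950833333333334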
It's very uncommon to use MSE with sigmoid output. Use binary cross-entropy instead:
logits = tf.add(tf.matmul(layer_2, weights['output']), biases['output'])
output = tf.nn.sigmoid(logits)
# average the per-example cross-entropy losses into a scalar cost
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels = y, logits = logits))
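To actually train on this cost in the same graph-style (TF1) code, you would hand it to an optimizer; a minimal sketch, with the learning rate as an assumed placeholder value:
learning_rate = 0.001  # assumed value, tune for your data
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)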
It's more reliable to initialize the weights from a normal distribution:
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'output': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
}
You should not fit the scaler on the test data: you would throw away the statistics learned from the training set, and it violates the principle that the test set is purely observational, held-out data. Do this instead:
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
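This way fit_transform learns the mean and standard deviation from the training split only, and transform reuses those same statistics on the test split, so no information from the test set leaks into preprocessing.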
For example, we may define a simple sequential neural network as:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(8, input_shape = (10, ), activation = "relu"))
model.add(Dense(4, activation = "relu"))
model.add(Dense(1, activation = "linear"))
We can define the same neural network using the functional API:
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape = (10, ))
x = Dense(8, activation = "relu")(inputs)
x = Dense(4, activation = "relu")(x)
x = Dense(1, activation = "linear")(x)
model = Model(inputs, x)
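Both definitions build the same architecture (the functional version just makes the input an explicit Input layer); the payoff of the functional API is that it also supports branching graphs such as the multi-input model below.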
To see the power of Keras’ functional API, consider the following code where we create a model that accepts multiple inputs:
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# define two sets of inputs
inputA = Input(shape = (32, ))
inputB = Input(shape = (128, ))

# the first branch operates on the first input
x = Dense(8, activation = "relu")(inputA)
x = Dense(4, activation = "relu")(x)
x = Model(inputs = inputA, outputs = x)

# the second branch operates on the second input
y = Dense(64, activation = "relu")(inputB)
y = Dense(32, activation = "relu")(y)
y = Dense(4, activation = "relu")(y)
y = Model(inputs = inputB, outputs = y)

# combine the output of the two branches
combined = concatenate([x.output, y.output])

# apply a FC layer and then a regression prediction on the
# combined outputs
z = Dense(2, activation = "relu")(combined)
z = Dense(1, activation = "linear")(z)

# our model will accept the inputs of the two branches and
# then output a single value
model = Model(inputs = [x.input, y.input], outputs = z)
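Training such a multi-input model just means passing one array per input branch. Here is a hedged usage sketch; the dummy arrays, loss, and optimizer are assumptions for illustration, not part of the original code:
import numpy as np

# dummy data matching the two input shapes above (assumed for illustration)
dataA = np.random.rand(100, 32)
dataB = np.random.rand(100, 128)
targets = np.random.rand(100, 1)

model.compile(optimizer = "adam", loss = "mse")
model.fit([dataA, dataB], targets, epochs = 5, batch_size = 8)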
And from there you can download the House Prices dataset via:
$ git clone https://github.com/emanhamed/Houses-dataset
Let’s take a look at how today’s project is organized:
$ tree --dirsfirst --filelimit 10
.
├── Houses-dataset
│   ├── Houses\ Dataset [2141 entries]
│   └── README.md
├── pyimagesearch
│   ├── __init__.py
│   ├── datasets.py
│   └── models.py
└── mixed_training.py
3 directories, 5 files
Install the tfds-nightly package for the penguins dataset. The tfds-nightly package is the nightly released version of the TensorFlow Datasets (TFDS). For more information on TFDS, see the TensorFlow Datasets overview.
pip install -q tfds-nightly
Import TensorFlow and the other required Python modules.
import os
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
print("TensorFlow version: {}".format(tf.__version__))
print("TensorFlow Datasets version: ", tfds.__version__)
TensorFlow version: 2.9.0-rc1
TensorFlow Datasets version:  4.5.2+nightly
ds_preview, info = tfds.load('penguins/simple', split='train', with_info=True)
df = tfds.as_dataframe(ds_preview.take(5), info)
print(df)
print(info.features)
body_mass_g culmen_depth_mm culmen_length_mm flipper_length_mm island\
0 4200.0 13.9 45.500000 210.0 0
1 4650.0 13.7 40.900002 214.0 0
2 5300.0 14.2 51.299999 218.0 0
3 5650.0 15.0 47.799999 215.0 0
4 5050.0 15.8 46.299999 215.0 0
sex species
0 0 2
1 0 2
2 1 2
3 1 2
4 1 2
FeaturesDict({
'body_mass_g': tf.float32,
'culmen_depth_mm': tf.float32,
'culmen_length_mm': tf.float32,
'flipper_length_mm': tf.float32,
'island': ClassLabel(shape = (), dtype = tf.int64, num_classes = 3),
'sex': ClassLabel(shape = (), dtype = tf.int64, num_classes = 3),
'species': ClassLabel(shape = (), dtype = tf.int64, num_classes = 3),
})
2022-04-27 01:32:48.776548: W tensorflow/core/kernels/data/cache_dataset_ops.cc:856] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
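The warning is harmless here; it just reflects that the preview read only five examples of a dataset TFDS caches. If you do want to cache a truncated dataset for repeated reads, the message spells out the recommended ordering; a minimal sketch reusing ds_preview from above:
# take(k) first, then cache(), then repeat(), per the warning above
ds_small = ds_preview.take(5).cache().repeat()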
Create a list containing the penguin species names in this order. You will use this list to interpret the output of the classification model:
class_names = ['Adélie', 'Chinstrap', 'Gentoo']
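As a hedged illustration of how this list is used later, the predicted class index simply indexes into class_names. The predictions tensor below is a hypothetical batch of model outputs, not something defined above:
# hypothetical logits for a batch of 2 examples over the 3 classes
predictions = tf.constant([[10.0, 1.0, 2.0],
                           [0.5, 0.3, 8.0]])

# argmax over the class axis gives the integer label; class_names maps it back
predicted_indices = tf.argmax(predictions, axis=1)
for idx in predicted_indices.numpy():
    print(class_names[idx])  # e.g. "Adélie", then "Gentoo"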