Google Colab taking too much time to train a classifier. How to fix this?

Suggestion : 1

Check to make sure you're actually using a GPU; sometimes even after I set the runtime to GPU, it still does not use it.

# an empty string '' means CPU, whereas '/device:GPU:0' means GPU
import tensorflow as tf
tf.test.gpu_device_name()
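Note that tf.test.gpu_device_name() is deprecated in recent TensorFlow releases; the equivalent check there (a minimal sketch of the same idea) is:

import tensorflow as tf
# an empty list means TensorFlow cannot see any GPU
print(tf.config.list_physical_devices('GPU'))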

Try running by first copying your files to the local Colab filesystem; reading many small files straight from a mounted Drive folder is much slower than reading from local disk.
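If Drive is not already mounted in the session, mount it first so the source path exists (standard google.colab API):

# make Google Drive visible under /content/gdrive
from google.colab import drive
drive.mount('/content/gdrive')

Then copy the dataset folders (the -r flag is needed because they are directories):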

!cp -r '/content/gdrive/My Drive/Colab Notebooks/dataset/training_set' 'training_set'
!cp -r '/content/gdrive/My Drive/Colab Notebooks/dataset/test_set' 'test_set'

and then:

training_set = train_datagen.flow_from_directory('training_set',
   target_size = (64, 64),
   batch_size = 32,
   class_mode = 'binary')

test_set = test_datagen.flow_from_directory('test_set',
   target_size = (64, 64),
   batch_size = 32,
   class_mode = 'binary')

Suggestion : 2

I have tried changing the runtime to GPU as well as TPU, but neither runtime helps. There are many deprecation warnings while executing this code, and after calling classifier.fit_generator() it shows 12 hrs remaining for 1 epoch.

Here's my code:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

classifier = Sequential()

classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation =
   'relu'))

classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Flatten())

classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy',
   metrics = ['accuracy'])

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1. / 255,
   shear_range = 0.2,
   zoom_range = 0.2,
   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1. / 255)

training_set = train_datagen.flow_from_directory('/content/gdrive/My Drive/Colab Notebooks/dataset/training_set',
   target_size = (64, 64),
   batch_size = 32,
   class_mode = 'binary')

test_set = test_datagen.flow_from_directory('/content/gdrive/My Drive/Colab Notebooks/dataset/test_set',
   target_size = (64, 64),
   batch_size = 32,
   class_mode = 'binary')

classifier.fit_generator(training_set,
   steps_per_epoch = 8000,
   epochs = 1,
   validation_data = test_set,
   validation_steps = 2000)
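A likely contributor to the 12-hour estimate is steps_per_epoch = 8000: the parameter counts batches, not images, so 8000 steps at batch_size 32 means 256,000 images per epoch. A minimal sketch of the usual correction, assuming the generators defined above (newer Keras accepts generators directly in fit, which replaces the deprecated fit_generator):

# derive steps from the dataset size so one epoch sees each image roughly once
classifier.fit(training_set,
   steps_per_epoch = training_set.samples // training_set.batch_size,
   epochs = 1,
   validation_data = test_set,
   validation_steps = test_set.samples // test_set.batch_size)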


Suggestion : 3

Asritha Bodepudi | Posted December 18, 2020

Setup:

#import necessary libraries
import tensorflow as tf

#load training data and split into train and test sets
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

The output for this code snippet will look like this:

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

Next, we define the model using Python:

#define model
model = tf.keras.models.Sequential([
   tf.keras.layers.Flatten(input_shape = (28, 28)),
   tf.keras.layers.Dense(128, activation = 'relu'),
   tf.keras.layers.Dropout(0.2),
   tf.keras.layers.Dense(10)
])

#define loss function variable
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True)

#define optimizer, loss function and evaluation metric
model.compile(optimizer = 'adam',
   loss = loss_fn,
   metrics = ['accuracy'])

#train the model
model.fit(x_train, y_train, epochs = 5)

The training output will look like this:

Epoch 1/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3006 - accuracy: 0.9125
Epoch 2/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1461 - accuracy: 0.9570
Epoch 3/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1098 - accuracy: 0.9673
Epoch 4/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0887 - accuracy: 0.9729
Epoch 5/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0763 - accuracy: 0.9754
<tensorflow.python.keras.callbacks.History at 0x7f2abd968fd0>

Finally, evaluate the model's accuracy on the test set:

#test model accuracy on test set
model.evaluate(x_test, y_test, verbose = 2)

Expected output:

313/313 - 0s - loss: 0.0786 - accuracy: 0.9761
[0.07860152423381805, 0.9761000275611877]