Please note that Google only allows one active session for the free service. If you need a faster GPU, more RAM, or more concurrent sessions, consider subscribing to Colab Pro.
- keraslearn.ipynb
# Step 1: mount Google Drive if the data is stored there
import os
from google.colab import drive
drive.mount('/content/drive')
# Step 2: check the TensorFlow GPU backend (uncomment if using a GPU runtime)
#%tensorflow_version 2.x
#import tensorflow as tf
#print('TensorFlow: {}'.format(tf.__version__))
#tf.test.gpu_device_name()
# Step 3
from keras.models import Sequential
from keras.layers import Dense
import numpy
import time
# fix random seed for reproducibility
numpy.random.seed(7)
# Step 4
# download the Pima Indians diabetes dataset to Google Drive
!curl -L https://tinyurl.com/tensorflowwin | grep -A768 pima-indians-diabetes.data.nbsp | sed '1d' > 'drive/MyDrive/Colab Notebooks/pima-indians-diabetes.data'
# or download it to a local data directory
!mkdir -p ./data
!curl -L https://tinyurl.com/tensorflowwin | grep -A768 pima-indians-diabetes.data.nbsp | sed '1d' > './data/pima-indians-diabetes.data'
# Step 5: load the dataset from Google Drive
dataset = numpy.loadtxt("drive/MyDrive/Colab Notebooks/pima-indians-diabetes.data", delimiter=",")
# or load it from the local data directory
dataset = numpy.loadtxt("./data/pima-indians-diabetes.data", delimiter=",")
# Step 6
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# Step 7
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Step 8
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Step 9
start_time = time.time()
# Fit the model
model.fit(X, Y, batch_size=10, epochs=1500) # the `epochs` keyword requires Keras 2.x (Keras 1.x used `nb_epoch`)
# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
print("\nTraining took %.2f seconds\n" %(time.time()-start_time))
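The compiled model above minimizes binary cross-entropy on sigmoid outputs. As a minimal pure-Python sketch, independent of Keras and using an assumed clipping epsilon of 1e-7 for numerical stability, the activation and loss work like this:

```python
import math

def sigmoid(z):
    # logistic activation, as used by the Dense(1, activation='sigmoid') output layer
    return 1.0 / (1.0 + math.exp(-z))

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # mean of -[y*log(p) + (1-y)*log(1-p)] over the batch;
    # predictions are clipped away from 0 and 1 so log() stays finite
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(y_true)

# a confident correct prediction yields a small loss,
# a confident wrong one a large loss
print(binary_crossentropy([1.0], [sigmoid(4.0)]))
print(binary_crossentropy([0.0], [sigmoid(4.0)]))
```

Training nudges the weights so the first case (low loss) becomes typical across the dataset.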
(2) For a large training data set, consider zipping it and uploading the archive to Google Drive. Mount the drive, then unzip it in the local session, e.g.
!mkdir -p ./data
!unzip -o './drive/MyDrive/Colab Notebooks/mydata.zip' -d ./data/
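The same unzip step can be done from Python with the standard library, which is handy inside a notebook cell. A minimal sketch (the archive path is the hypothetical one from the shell example above):

```python
import os
import zipfile

def unzip_to(zip_path, dest):
    # equivalent of `mkdir -p dest` followed by `unzip -o zip_path -d dest`
    os.makedirs(dest, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)

# e.g. unzip_to('./drive/MyDrive/Colab Notebooks/mydata.zip', './data')
```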
(3) To stop a running cell in Google Colab, use Ctrl-M I (interrupt execution).
(4) How do you quickly run an .ipynb example from GitHub?
4.1) Go to https://colab.research.google.com/, sign in with your Google account, choose the GitHub tab, and search for, say, "clareyan/From-Linear-to-Logistic-Regression-Explained-Step-by-Step"
4.2) In the Step 2 cell, change the dataset import to
df = pd.read_csv('https://raw.githubusercontent.com/clareyan/From-Linear-to-Logistic-Regression-Explained-Step-by-Step/master/Social_Network_Ads.csv')
4.3) Then choose menu -> Runtime -> Run all. Afterwards, use menu -> File -> Save a copy in Drive.