TensorFlow Input Pipeline

Classify Structured Data

Import TensorFlow and Other Libraries

import pandas as pd
import tensorflow as tf

from tensorflow.keras import layers
from tensorflow import feature_column

from os import getcwd
from sklearn.model_selection import train_test_split

Use Pandas to Create a Dataframe

Pandas is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to read the dataset from a CSV file and load it into a dataframe.

filePath = f"{getcwd()}/../tmp2/heart.csv"
dataframe = pd.read_csv(filePath)
dataframe.head()
age sex cp trestbps chol fbs restecg thalach exang oldpeak slope ca thal target
0 63 1 1 145 233 1 2 150 0 2.3 3 0 fixed 0
1 67 1 4 160 286 0 2 108 1 1.5 2 3 normal 1
2 67 1 4 120 229 0 2 129 1 2.6 2 2 reversible 0
3 37 1 3 130 250 0 0 187 0 3.5 3 0 normal 0
4 41 0 2 130 204 0 2 172 0 1.4 1 0 normal 0

Split the Dataframe Into Train, Validation, and Test Sets

The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.

train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
193 train examples
49 validation examples
61 test examples

Create an Input Pipeline Using tf.data

Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly.
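
For reference, here is a minimal sketch (not needed for this small dataset) of how such a file could be streamed straight from disk with tf.data.experimental.make_csv_dataset, which reads and batches records lazily instead of loading everything into memory. The batch size below is purely illustrative.

csv_ds = tf.data.experimental.make_csv_dataset(
    filePath,              # same CSV path as above
    batch_size=5,          # records are read and batched lazily, not all at once
    label_name='target',   # yields (features_dict, label) pairs
    num_epochs=1,
    shuffle=True)

for feature_batch, label_batch in csv_ds.take(1):
    print(list(feature_batch.keys()))
    print(label_batch)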

# EXERCISE: A utility method to create a tf.data dataset from a Pandas Dataframe.

def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()

    # Separate the targets from the features and drop the target column
    # from the dataframe.
    labels = dataframe["target"].values
    dataframe.drop("target", axis=1, inplace=True)

    # Create a tf.data.Dataset from the dataframe and labels.
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))

    if shuffle:
        # Shuffle the dataset. A larger buffer (e.g. len(dataframe))
        # gives a more thorough shuffle.
        ds = ds.shuffle(3)

    # Batch the dataset with the specified batch_size parameter.
    ds = ds.batch(batch_size)

    return ds
batch_size = 5 # A small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)

Understand the Input Pipeline

Now that we have created the input pipeline, let’s call it to see the format of the data it returns. We have used a small batch size to keep the output readable.

for feature_batch, label_batch in train_ds.take(1):
    print('Every feature:', list(feature_batch.keys()))
    print('A batch of ages:', feature_batch['age'])
    print('A batch of targets:', label_batch)
Every feature: ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal']
A batch of ages: tf.Tensor([51 63 64 58 57], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 1 0 0 0], shape=(5,), dtype=int64)

We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.

Create Several Types of Feature Columns

TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.

# Get an example batch that we will use to demonstrate several types of feature columns.
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column and to transform a batch of data.
def demo(feature_column):
    feature_layer = layers.DenseFeatures(feature_column, dtype='float64')
    print(feature_layer(example_batch).numpy())

Numeric Columns

The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A numeric column is the simplest type of column. It is used to represent real valued features.

# EXERCISE: Create a numeric feature column out of 'age' and demo it.
age = tf.feature_column.numeric_column("age")

demo(age)
[[51.]
 [58.]
 [63.]
 [64.]
 [60.]]

In the heart disease dataset, most columns from the dataframe are numeric.

Bucketized Columns

Often, you don’t want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person’s age. Instead of representing age as a numeric column, we could split the age into several buckets using a bucketized column.

# EXERCISE: Create a bucketized feature column out of 'age' with
# the following boundaries and demo it.
boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]

age_buckets = tf.feature_column.bucketized_column(age, boundaries = boundaries)

demo(age_buckets)
[[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]

Notice the one-hot values above describe which age range each row matches.

Categorical Columns

In this dataset, thal is represented as a string (e.g. ‘fixed’, ‘normal’, or ‘reversible’). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets).

Note: You will probably see some warning messages when running some of the code cells below. These warnings have to do with software updates and should not cause any errors or prevent your code from running.

# EXERCISE: Create a categorical vocabulary column out of the
# above mentioned categories with the key specified as 'thal'.
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])

# EXERCISE: Create an indicator column out of the created categorical column.
thal_one_hot = tf.feature_column.indicator_column(thal)

demo(thal_one_hot)
[[0. 1. 0.]
 [0. 1. 0.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 1. 0.]]

The vocabulary can be passed as a list using categorical_column_with_vocabulary_list, or loaded from a file using categorical_column_with_vocabulary_file.
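
For example, a minimal sketch of the file-based variant might look like the following, assuming a hypothetical text file thal_vocab.txt that lists one category per line (fixed, normal, reversible):

thal_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    key='thal',
    vocabulary_file='thal_vocab.txt',  # hypothetical file, one category per line
    vocabulary_size=3)

Once wrapped in an indicator column, it behaves the same as the list-based thal column above.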

Embedding Columns

Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents the data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. You can tune the size of the embedding with the dimension parameter.

# EXERCISE: Create an embedding column out of the categorical
# vocabulary you just created (thal). Set the size of the
# embedding to 8, by using the dimension parameter.

thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)


demo(thal_embedding)
[[-1.4254066e-01 -1.0374661e-01  3.4352791e-01 -3.3996427e-01
  -3.2193713e-02 -1.8381193e-01 -1.8051244e-01  3.2638407e-01]
 [-1.4254066e-01 -1.0374661e-01  3.4352791e-01 -3.3996427e-01
  -3.2193713e-02 -1.8381193e-01 -1.8051244e-01  3.2638407e-01]
 [-6.5549983e-05  2.7680036e-01  4.1849682e-01  5.3418136e-01
  -1.6281548e-01  2.5406811e-01  8.8969752e-02  1.8004593e-01]
 [-6.5549983e-05  2.7680036e-01  4.1849682e-01  5.3418136e-01
  -1.6281548e-01  2.5406811e-01  8.8969752e-02  1.8004593e-01]
 [-1.4254066e-01 -1.0374661e-01  3.4352791e-01 -3.3996427e-01
  -3.2193713e-02 -1.8381193e-01 -1.8051244e-01  3.2638407e-01]]

Hashed Feature Columns

Another way to represent a categorical column with a large number of values is to use a categorical_column_with_hash_bucket. This feature column calculates a hash value of the input, then selects one of the hash_bucket_size buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.

# EXERCISE: Create a hashed feature column with 'thal' as the key and 
# 1000 hash buckets.
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
    'thal', hash_bucket_size=1000)

demo(feature_column.indicator_column(thal_hashed))
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
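
An important downside of this technique is that different strings can be hashed into the same bucket (a collision), so unrelated categories may end up sharing a representation; in practice, this can still work well for some datasets.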

Crossed Feature Columns

Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that crossed_column does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a hashed_column, so you can choose how large the table is.

# EXERCISE: Create a crossed column using the bucketized column (age_buckets),
# the categorical vocabulary column (thal) previously created, and 1000 hash buckets.
crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)

demo(feature_column.indicator_column(crossed_feature))
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]

Choose Which Columns to Use

We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this exercise is to show you the complete code needed to work with feature columns. Below, we have arbitrarily selected a few columns to train our model.

If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.

dataframe.dtypes
age           int64
sex           int64
cp            int64
trestbps      int64
chol          int64
fbs           int64
restecg       int64
thalach       int64
exang         int64
oldpeak     float64
slope         int64
ca            int64
thal         object
target        int64
dtype: object

You can use the above list of column datatypes to map the appropriate feature column to every column in the dataframe.

# EXERCISE: Fill in the missing code below
feature_columns = []

# Numeric Cols.
# Create a list of numeric columns. Use the following list of columns
# that have a numeric datatype: ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca'].
numeric_columns = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']

for header in numeric_columns:
    # Create a numeric feature column out of the header.
    numeric_feature_column = tf.feature_column.numeric_column(header)

    feature_columns.append(numeric_feature_column)

# Bucketized Cols.
# Create a bucketized feature column out of the age column (numeric column)
# that you've already created. Use the following boundaries:
# [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])

feature_columns.append(age_buckets)

# Indicator Cols.
# Create a categorical vocabulary column out of the categories
# ['fixed', 'normal', 'reversible'] with the key specified as 'thal'.
thal = feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])

# Create an indicator column out of the created thal categorical column.
thal_one_hot = feature_column.indicator_column(thal)

feature_columns.append(thal_one_hot)

# Embedding Cols.
# Create an embedding column out of the categorical vocabulary you
# just created (thal). Set the size of the embedding to 8, by using
# the dimension parameter.
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)

feature_columns.append(thal_embedding)

# Crossed Cols.
# Create a crossed column using the bucketized column (age_buckets),
# the categorical vocabulary column (thal) previously created, and 1000 hash buckets.
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)

# Create an indicator column out of the crossed column created above to one-hot encode it.
crossed_feature = feature_column.indicator_column(crossed_feature)

feature_columns.append(crossed_feature)

Create a Feature Layer

Now that we have defined our feature columns, we will use a DenseFeatures layer to input them to our Keras model.

# EXERCISE: Create a Keras DenseFeatures layer and pass the feature_columns you just created.
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)

Earlier, we used a small batch size to demonstrate how feature columns work. Now we create a new input pipeline with a larger batch size.

batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)

Create, Compile, and Train the Model

model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_ds,
          validation_data=val_ds,
          epochs=100)
Epoch 1/100
7/7 [==============================] - 4s 609ms/step - loss: 1.5455 - accuracy: 0.6321 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/100
7/7 [==============================] - 0s 45ms/step - loss: 1.6424 - accuracy: 0.5803 - val_loss: 1.7392 - val_accuracy: 0.7143
Epoch 3/100
7/7 [==============================] - 0s 44ms/step - loss: 1.2255 - accuracy: 0.6995 - val_loss: 0.7653 - val_accuracy: 0.5714
Epoch 4/100
7/7 [==============================] - 0s 44ms/step - loss: 0.7326 - accuracy: 0.6891 - val_loss: 0.5689 - val_accuracy: 0.6939
Epoch 5/100
7/7 [==============================] - 0s 43ms/step - loss: 0.5230 - accuracy: 0.7358 - val_loss: 0.5406 - val_accuracy: 0.7143
Epoch 6/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4348 - accuracy: 0.8083 - val_loss: 0.5609 - val_accuracy: 0.7143
Epoch 7/100
7/7 [==============================] - 0s 56ms/step - loss: 0.4592 - accuracy: 0.7824 - val_loss: 0.5710 - val_accuracy: 0.7347
Epoch 8/100
7/7 [==============================] - 0s 45ms/step - loss: 0.4996 - accuracy: 0.7461 - val_loss: 0.5585 - val_accuracy: 0.7143
Epoch 9/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4389 - accuracy: 0.7927 - val_loss: 0.5297 - val_accuracy: 0.6735
Epoch 10/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3914 - accuracy: 0.8446 - val_loss: 0.5216 - val_accuracy: 0.6531
Epoch 11/100
7/7 [==============================] - 0s 45ms/step - loss: 0.4022 - accuracy: 0.7979 - val_loss: 0.5331 - val_accuracy: 0.7347
Epoch 12/100
7/7 [==============================] - 0s 54ms/step - loss: 0.3811 - accuracy: 0.8238 - val_loss: 0.6522 - val_accuracy: 0.6735
Epoch 13/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4173 - accuracy: 0.7927 - val_loss: 0.5219 - val_accuracy: 0.7347
Epoch 14/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4235 - accuracy: 0.7513 - val_loss: 0.5027 - val_accuracy: 0.6531
Epoch 15/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3789 - accuracy: 0.7979 - val_loss: 0.7249 - val_accuracy: 0.6531
Epoch 16/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3972 - accuracy: 0.8342 - val_loss: 0.4830 - val_accuracy: 0.6939
Epoch 17/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3339 - accuracy: 0.8601 - val_loss: 0.4912 - val_accuracy: 0.6531
Epoch 18/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3555 - accuracy: 0.7927 - val_loss: 0.6399 - val_accuracy: 0.6939
Epoch 19/100
7/7 [==============================] - 0s 43ms/step - loss: 0.3531 - accuracy: 0.8601 - val_loss: 0.5526 - val_accuracy: 0.6735
Epoch 20/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3810 - accuracy: 0.7876 - val_loss: 0.5751 - val_accuracy: 0.7143
Epoch 21/100
7/7 [==============================] - 0s 56ms/step - loss: 0.3409 - accuracy: 0.8549 - val_loss: 0.5524 - val_accuracy: 0.7551
Epoch 22/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3167 - accuracy: 0.8756 - val_loss: 0.6607 - val_accuracy: 0.7143
Epoch 23/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3732 - accuracy: 0.8601 - val_loss: 0.5993 - val_accuracy: 0.6939
Epoch 24/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3918 - accuracy: 0.7979 - val_loss: 0.5646 - val_accuracy: 0.6735
Epoch 25/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3624 - accuracy: 0.8187 - val_loss: 0.7324 - val_accuracy: 0.6735
Epoch 26/100
7/7 [==============================] - 0s 56ms/step - loss: 0.3531 - accuracy: 0.8446 - val_loss: 0.4501 - val_accuracy: 0.6939
Epoch 27/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3164 - accuracy: 0.8653 - val_loss: 0.4770 - val_accuracy: 0.6735
Epoch 28/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3557 - accuracy: 0.8290 - val_loss: 0.5188 - val_accuracy: 0.7551
Epoch 29/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3193 - accuracy: 0.8446 - val_loss: 0.5949 - val_accuracy: 0.7347
Epoch 30/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3049 - accuracy: 0.8601 - val_loss: 0.5904 - val_accuracy: 0.7347
Epoch 31/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3150 - accuracy: 0.8705 - val_loss: 0.4901 - val_accuracy: 0.6531
Epoch 32/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3223 - accuracy: 0.8446 - val_loss: 0.5034 - val_accuracy: 0.6939
Epoch 33/100
7/7 [==============================] - 0s 43ms/step - loss: 0.3178 - accuracy: 0.8394 - val_loss: 0.6359 - val_accuracy: 0.7347
Epoch 34/100
7/7 [==============================] - 0s 43ms/step - loss: 0.3041 - accuracy: 0.8549 - val_loss: 0.5558 - val_accuracy: 0.7551
Epoch 35/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2853 - accuracy: 0.8808 - val_loss: 0.5089 - val_accuracy: 0.6939
Epoch 36/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2905 - accuracy: 0.8653 - val_loss: 0.5989 - val_accuracy: 0.7347
Epoch 37/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2885 - accuracy: 0.8705 - val_loss: 0.5644 - val_accuracy: 0.7551
Epoch 38/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2890 - accuracy: 0.8601 - val_loss: 0.5590 - val_accuracy: 0.7755
Epoch 39/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2792 - accuracy: 0.8808 - val_loss: 0.4820 - val_accuracy: 0.7551
Epoch 40/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2781 - accuracy: 0.8653 - val_loss: 0.4974 - val_accuracy: 0.7551
Epoch 41/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2873 - accuracy: 0.8705 - val_loss: 0.5550 - val_accuracy: 0.7551
Epoch 42/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2737 - accuracy: 0.8808 - val_loss: 0.5356 - val_accuracy: 0.7347
Epoch 43/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2677 - accuracy: 0.8860 - val_loss: 0.5071 - val_accuracy: 0.7551
Epoch 44/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2794 - accuracy: 0.8756 - val_loss: 0.5320 - val_accuracy: 0.6939
Epoch 45/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2932 - accuracy: 0.8394 - val_loss: 0.5533 - val_accuracy: 0.7755
Epoch 46/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2750 - accuracy: 0.8705 - val_loss: 0.5723 - val_accuracy: 0.7347
Epoch 47/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2694 - accuracy: 0.8808 - val_loss: 0.5347 - val_accuracy: 0.7551
Epoch 48/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2632 - accuracy: 0.8912 - val_loss: 0.5369 - val_accuracy: 0.7755
Epoch 49/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2677 - accuracy: 0.8860 - val_loss: 0.5837 - val_accuracy: 0.7143
Epoch 50/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2635 - accuracy: 0.8808 - val_loss: 0.5337 - val_accuracy: 0.7755
Epoch 51/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2592 - accuracy: 0.8912 - val_loss: 0.5533 - val_accuracy: 0.7755
Epoch 52/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2536 - accuracy: 0.8912 - val_loss: 0.5743 - val_accuracy: 0.7347
Epoch 53/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2511 - accuracy: 0.9016 - val_loss: 0.5451 - val_accuracy: 0.7551
Epoch 54/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2650 - accuracy: 0.8860 - val_loss: 0.5864 - val_accuracy: 0.6531
Epoch 55/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3354 - accuracy: 0.8290 - val_loss: 0.5772 - val_accuracy: 0.7347
Epoch 56/100
7/7 [==============================] - 0s 54ms/step - loss: 0.2759 - accuracy: 0.8653 - val_loss: 0.5857 - val_accuracy: 0.7347
Epoch 57/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2522 - accuracy: 0.8860 - val_loss: 0.5930 - val_accuracy: 0.7347
Epoch 58/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2488 - accuracy: 0.8808 - val_loss: 0.5814 - val_accuracy: 0.7551
Epoch 59/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2428 - accuracy: 0.8964 - val_loss: 0.5805 - val_accuracy: 0.7551
Epoch 60/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2555 - accuracy: 0.8912 - val_loss: 0.5903 - val_accuracy: 0.7347
Epoch 61/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2391 - accuracy: 0.9016 - val_loss: 0.5721 - val_accuracy: 0.7755
Epoch 62/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2423 - accuracy: 0.8860 - val_loss: 0.5911 - val_accuracy: 0.7551
Epoch 63/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2450 - accuracy: 0.8756 - val_loss: 0.5845 - val_accuracy: 0.7755
Epoch 64/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2447 - accuracy: 0.8912 - val_loss: 0.5883 - val_accuracy: 0.7551
Epoch 65/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2386 - accuracy: 0.8964 - val_loss: 0.6093 - val_accuracy: 0.7551
Epoch 66/100
7/7 [==============================] - 0s 56ms/step - loss: 0.2278 - accuracy: 0.9067 - val_loss: 0.6654 - val_accuracy: 0.7347
Epoch 67/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2474 - accuracy: 0.8912 - val_loss: 0.6545 - val_accuracy: 0.7143
Epoch 68/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2509 - accuracy: 0.8808 - val_loss: 0.6298 - val_accuracy: 0.6735
Epoch 69/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2931 - accuracy: 0.8549 - val_loss: 0.6237 - val_accuracy: 0.7347
Epoch 70/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2653 - accuracy: 0.8808 - val_loss: 0.6296 - val_accuracy: 0.7143
Epoch 71/100
7/7 [==============================] - 0s 43ms/step - loss: 0.2649 - accuracy: 0.8549 - val_loss: 0.5915 - val_accuracy: 0.6531
Epoch 72/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3141 - accuracy: 0.8394 - val_loss: 0.6017 - val_accuracy: 0.7755
Epoch 73/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2557 - accuracy: 0.8756 - val_loss: 0.6444 - val_accuracy: 0.7347
Epoch 74/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2220 - accuracy: 0.9067 - val_loss: 0.6380 - val_accuracy: 0.7347
Epoch 75/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2209 - accuracy: 0.9016 - val_loss: 0.6977 - val_accuracy: 0.7347
Epoch 76/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2318 - accuracy: 0.8964 - val_loss: 0.6422 - val_accuracy: 0.7347
Epoch 77/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2183 - accuracy: 0.9067 - val_loss: 0.6183 - val_accuracy: 0.7143
Epoch 78/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2304 - accuracy: 0.8860 - val_loss: 0.6522 - val_accuracy: 0.7143
Epoch 79/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2338 - accuracy: 0.8756 - val_loss: 0.5959 - val_accuracy: 0.7551
Epoch 80/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2250 - accuracy: 0.8964 - val_loss: 0.6232 - val_accuracy: 0.7551
Epoch 81/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2275 - accuracy: 0.8912 - val_loss: 0.6500 - val_accuracy: 0.7551
Epoch 82/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2053 - accuracy: 0.9016 - val_loss: 0.6249 - val_accuracy: 0.7347
Epoch 83/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2250 - accuracy: 0.8964 - val_loss: 0.6744 - val_accuracy: 0.7347
Epoch 84/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2109 - accuracy: 0.9067 - val_loss: 0.7039 - val_accuracy: 0.7347
Epoch 85/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2171 - accuracy: 0.9016 - val_loss: 0.6693 - val_accuracy: 0.7347
Epoch 86/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2187 - accuracy: 0.9067 - val_loss: 0.6765 - val_accuracy: 0.7143
Epoch 87/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2225 - accuracy: 0.9067 - val_loss: 0.6637 - val_accuracy: 0.6939
Epoch 88/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2193 - accuracy: 0.8808 - val_loss: 0.7029 - val_accuracy: 0.6735
Epoch 89/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2644 - accuracy: 0.8653 - val_loss: 0.6829 - val_accuracy: 0.6939
Epoch 90/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2625 - accuracy: 0.8601 - val_loss: 0.6617 - val_accuracy: 0.7347
Epoch 91/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2206 - accuracy: 0.8860 - val_loss: 0.6889 - val_accuracy: 0.7551
Epoch 92/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2090 - accuracy: 0.8964 - val_loss: 0.7322 - val_accuracy: 0.7347
Epoch 93/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2016 - accuracy: 0.9119 - val_loss: 0.7244 - val_accuracy: 0.7347
Epoch 94/100
7/7 [==============================] - 0s 55ms/step - loss: 0.1933 - accuracy: 0.9067 - val_loss: 0.6788 - val_accuracy: 0.7347
Epoch 95/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2002 - accuracy: 0.9171 - val_loss: 0.6849 - val_accuracy: 0.7143
Epoch 96/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2138 - accuracy: 0.8964 - val_loss: 0.7610 - val_accuracy: 0.6939
Epoch 97/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2225 - accuracy: 0.8912 - val_loss: 0.6998 - val_accuracy: 0.6939
Epoch 98/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2089 - accuracy: 0.9067 - val_loss: 0.6846 - val_accuracy: 0.7143
Epoch 99/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2043 - accuracy: 0.8964 - val_loss: 0.7292 - val_accuracy: 0.7347
Epoch 100/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2008 - accuracy: 0.9016 - val_loss: 0.7064 - val_accuracy: 0.7143





<tensorflow.python.keras.callbacks.History at 0x7f33184937b8>
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
2/2 [==============================] - 1s 329ms/step - loss: 0.5511 - accuracy: 0.8197
Accuracy 0.8196721
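
As an optional check (a minimal sketch, using only the model trained above), the sigmoid output can be turned into class predictions by thresholding the probabilities at 0.5:

predictions = model.predict(test_ds)                  # probabilities in [0, 1]
predicted_classes = (predictions > 0.5).astype(int)   # threshold to get 0/1 class labels
print(predicted_classes[:5])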