# Classify Structured Data
## Import TensorFlow and Other Libraries
```python
import pandas as pd
```
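Only the pandas import survives in this cell. Later cells also use `getcwd`, `train_test_split`, and TensorFlow itself, so the full import cell likely resembled the following sketch (the exact contents are an assumption):

```python
from os import getcwd  # used below to build the path to heart.csv

import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
```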
## Use Pandas to Create a Dataframe

Pandas is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to read the dataset and load it into a dataframe.
```python
filePath = f"{getcwd()}/../tmp2/heart.csv"
```
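The rest of the cell is missing; it presumably read the CSV into a dataframe and displayed the first rows, which would produce the table below (assuming `pd.read_csv`):

```python
# Load the heart disease dataset into a Pandas dataframe and peek at it.
dataframe = pd.read_csv(filePath)
dataframe.head()
```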
|   | age | sex | cp | trestbps | chol | fbs | restecg | thalach | exang | oldpeak | slope | ca | thal | target |
|---|-----|-----|----|----------|------|-----|---------|---------|-------|---------|-------|----|------|--------|
| 0 | 63 | 1 | 1 | 145 | 233 | 1 | 2 | 150 | 0 | 2.3 | 3 | 0 | fixed | 0 |
| 1 | 67 | 1 | 4 | 160 | 286 | 0 | 2 | 108 | 1 | 1.5 | 2 | 3 | normal | 1 |
| 2 | 67 | 1 | 4 | 120 | 229 | 0 | 2 | 129 | 1 | 2.6 | 2 | 2 | reversible | 0 |
| 3 | 37 | 1 | 3 | 130 | 250 | 0 | 0 | 187 | 0 | 3.5 | 3 | 0 | normal | 0 |
| 4 | 41 | 0 | 2 | 130 | 204 | 0 | 2 | 172 | 0 | 1.4 | 1 | 0 | normal | 0 |
## Split the Dataframe Into Train, Validation, and Test Sets
The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
```python
train, test = train_test_split(dataframe, test_size=0.2)
```
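The cell is truncated. Carving a validation set out of the training split and printing the sizes, as sketched here, matches the counts shown below:

```python
# Split the training data again to obtain a validation set.
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
```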
```
193 train examples
49 validation examples
61 test examples
```
## Create an Input Pipeline Using tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly.
```python
# EXERCISE: A utility method to create a tf.data dataset from a Pandas Dataframe.
```
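One possible solution, following the standard pattern for wrapping a dataframe with tf.data (the function name `df_to_dataset` and its signature are assumptions, though later cells clearly use a helper like this):

```python
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    # Work on a copy so we do not mutate the caller's dataframe.
    dataframe = dataframe.copy()
    # 'target' is the label column in this dataset.
    labels = dataframe.pop('target')
    # Build a dataset of (features-dict, label) pairs.
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
```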
```python
batch_size = 5  # A small batch size is used for demonstration purposes
```
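The remainder of the cell presumably built the three pipelines with the helper above; a sketch (the names `train_ds`, `val_ds`, and `test_ds` match their usage later in the notebook):

```python
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```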
## Understand the Input Pipeline
Now that we have created the input pipeline, let’s call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
```python
for feature_batch, label_batch in train_ds.take(1):
```
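Only the first line of the cell survives; the printed output below suggests a body along these lines:

```python
for feature_batch, label_batch in train_ds.take(1):
    # Each batch is a dict of column-name -> tensor, plus a label tensor.
    print('Every feature:', list(feature_batch.keys()))
    print('A batch of ages:', feature_batch['age'])
    print('A batch of targets:', label_batch)
```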
```
Every feature: ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal']
A batch of ages: tf.Tensor([51 63 64 58 57], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 1 0 0 0], shape=(5,), dtype=int64)
```
We can see that the dataset returns a dictionary that maps column names (from the dataframe) to column values from rows in the dataframe.
## Create Several Types of Feature Columns
TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
```python
# Try to demonstrate several types of feature columns by getting an example.
```
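A plausible completion: grab one feature batch to use as the example input (the name `example_batch` is an assumption):

```python
# Take one batch of features from the training pipeline to use as
# example input for the feature-column demos below.
example_batch = next(iter(train_ds))[0]
```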
```python
# A utility method to create a feature column and to transform a batch of data.
```
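A sketch of the `demo` utility referenced in the next paragraph; the exact body is an assumption, but it follows the standard pattern of applying a feature column to the example batch:

```python
def demo(feature_column):
    # Wrap the feature column in a DenseFeatures layer and apply it to the
    # example batch to see how it transforms the raw data.
    feature_layer = tf.keras.layers.DenseFeatures(feature_column)
    print(feature_layer(example_batch).numpy())
```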
### Numeric Columns
The output of a feature column becomes the input to the model (using the `demo` function defined above, we will be able to see exactly how each column from the dataframe is transformed). A numeric column is the simplest type of column. It is used to represent real-valued features.
```python
# EXERCISE: Create a numeric feature column out of 'age' and demo it.
```
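A straightforward solution, which would produce the output below:

```python
age = tf.feature_column.numeric_column('age')
demo(age)
```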
```
[[51.]
 [58.]
 [63.]
 [64.]
 [60.]]
```
In the heart disease dataset, most columns from the dataframe are numeric.
### Bucketized Columns
Often, you don’t want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person’s age. Instead of representing age as a numeric column, we could split the age into several buckets using a bucketized column.
```python
# EXERCISE: Create a bucketized feature column out of 'age' with
```
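The comment is cut off, but the 11-column one-hot output below implies ten bucket boundaries. These particular boundaries are an assumption consistent with that output:

```python
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
```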
```
[[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]
```
Notice the one-hot values above describe which age range each row matches.
### Categorical Columns
In this dataset, `thal` is represented as a string (e.g. ‘fixed’, ‘normal’, or ‘reversible’). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with the age buckets).

Note: You will probably see some warning messages when running some of the code cells below. These warnings have to do with software updates and should not cause any errors or prevent your code from running.
```python
# EXERCISE: Create a categorical vocabulary column out of the
```
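A solution consistent with the three-value vocabulary named above and the three-column one-hot output below:

```python
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])
# Wrap the categorical column in an indicator column to one-hot encode it.
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
```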
```
[[0. 1. 0.]
 [0. 1. 0.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 1. 0.]]
```
The vocabulary can be passed as a list using `categorical_column_with_vocabulary_list`, or loaded from a file using `categorical_column_with_vocabulary_file`.
### Embedding Columns
Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. You can tune the size of the embedding with the `dimension` parameter.
```python
# EXERCISE: Create an embedding column out of the categorical
```
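A solution; `dimension=8` matches the eight-column output below:

```python
# Embed the 'thal' categorical column into an 8-dimensional dense vector.
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
```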
```
[[-1.4254066e-01 -1.0374661e-01  3.4352791e-01 -3.3996427e-01
  -3.2193713e-02 -1.8381193e-01 -1.8051244e-01  3.2638407e-01]
 [-1.4254066e-01 -1.0374661e-01  3.4352791e-01 -3.3996427e-01
  -3.2193713e-02 -1.8381193e-01 -1.8051244e-01  3.2638407e-01]
 [-6.5549983e-05  2.7680036e-01  4.1849682e-01  5.3418136e-01
  -1.6281548e-01  2.5406811e-01  8.8969752e-02  1.8004593e-01]
 [-6.5549983e-05  2.7680036e-01  4.1849682e-01  5.3418136e-01
  -1.6281548e-01  2.5406811e-01  8.8969752e-02  1.8004593e-01]
 [-1.4254066e-01 -1.0374661e-01  3.4352791e-01 -3.3996427e-01
  -3.2193713e-02 -1.8381193e-01 -1.8051244e-01  3.2638407e-01]]
```
### Hashed Feature Columns
Another way to represent a categorical column with a large number of values is to use a `categorical_column_with_hash_bucket`. This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
```python
# EXERCISE: Create a hashed feature column with 'thal' as the key and
```
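A sketch; the `hash_bucket_size` value is an assumption (the truncated output below shows only that the encoding is wide and sparse):

```python
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
    'thal', hash_bucket_size=1000)
# One-hot encode the hashed column so demo() can display it.
demo(tf.feature_column.indicator_column(thal_hashed))
```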
```
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
```
### Crossed Feature Columns
Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
```python
# EXERCISE: Create a crossed column using the bucketized column (age_buckets),
```
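A sketch along the same lines; again the `hash_bucket_size` is an assumption:

```python
# Cross the age buckets with the 'thal' vocabulary column.
crossed_feature = tf.feature_column.crossed_column(
    [age_buckets, thal], hash_bucket_size=1000)
demo(tf.feature_column.indicator_column(crossed_feature))
```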
```
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
```
## Choose Which Columns to Use
We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this exercise is to show you the complete code needed to work with feature columns. We have somewhat arbitrarily selected a few columns to train our model below.
If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
```python
dataframe.dtypes
```
```
age           int64
sex           int64
cp            int64
trestbps      int64
chol          int64
fbs           int64
restecg       int64
thalach       int64
exang         int64
oldpeak     float64
slope         int64
ca            int64
thal         object
target        int64
dtype: object
```
You can use the above list of column data types to map the appropriate feature column to every column in the dataframe.
```python
# EXERCISE: Fill in the missing code below
```
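One way to fill in the missing code, using the column types shown above. The particular selection of columns is an assumption that mirrors the examples demonstrated earlier:

```python
feature_columns = []

# Numeric columns for the integer- and float-valued features.
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
    feature_columns.append(tf.feature_column.numeric_column(header))

# Bucketized age column.
age_buckets = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('age'),
    boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)

# Indicator (one-hot) column for the string-valued 'thal'.
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])
feature_columns.append(tf.feature_column.indicator_column(thal))

# Embedding column for 'thal'.
feature_columns.append(tf.feature_column.embedding_column(thal, dimension=8))

# Crossed column of age buckets and 'thal'.
feature_columns.append(tf.feature_column.indicator_column(
    tf.feature_column.crossed_column([age_buckets, thal],
                                     hash_bucket_size=1000)))
```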
## Create a Feature Layer

Now that we have defined our feature columns, we will use a `DenseFeatures` layer to input them to our Keras model.
```python
# EXERCISE: Create a Keras DenseFeatures layer and pass the feature_columns you just created.
```
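A minimal solution (the variable name `feature_layer` is assumed and is reused in the model below):

```python
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
```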
Earlier, we used a small batch size to demonstrate how feature columns worked. Now we create a new input pipeline with a larger batch size.
```python
batch_size = 32
```
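The rest of the cell presumably rebuilt the three pipelines with the larger batch size, a sketch of which follows:

```python
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```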
## Create, Compile, and Train the Model
```python
model = tf.keras.Sequential([
```
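The cell is truncated after its first line. A plausible completion, consistent with the 100-epoch binary-classification training log below; the layer sizes, optimizer, and loss are assumptions:

```python
model = tf.keras.Sequential([
    feature_layer,                                # maps feature columns to a dense input
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')  # binary target
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_ds,
          validation_data=val_ds,
          epochs=100)
```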
```
Epoch 1/100
7/7 [==============================] - 4s 609ms/step - loss: 1.5455 - accuracy: 0.6321 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/100
7/7 [==============================] - 0s 45ms/step - loss: 1.6424 - accuracy: 0.5803 - val_loss: 1.7392 - val_accuracy: 0.7143
Epoch 3/100
7/7 [==============================] - 0s 44ms/step - loss: 1.2255 - accuracy: 0.6995 - val_loss: 0.7653 - val_accuracy: 0.5714
Epoch 4/100
7/7 [==============================] - 0s 44ms/step - loss: 0.7326 - accuracy: 0.6891 - val_loss: 0.5689 - val_accuracy: 0.6939
Epoch 5/100
7/7 [==============================] - 0s 43ms/step - loss: 0.5230 - accuracy: 0.7358 - val_loss: 0.5406 - val_accuracy: 0.7143
Epoch 6/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4348 - accuracy: 0.8083 - val_loss: 0.5609 - val_accuracy: 0.7143
Epoch 7/100
7/7 [==============================] - 0s 56ms/step - loss: 0.4592 - accuracy: 0.7824 - val_loss: 0.5710 - val_accuracy: 0.7347
Epoch 8/100
7/7 [==============================] - 0s 45ms/step - loss: 0.4996 - accuracy: 0.7461 - val_loss: 0.5585 - val_accuracy: 0.7143
Epoch 9/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4389 - accuracy: 0.7927 - val_loss: 0.5297 - val_accuracy: 0.6735
Epoch 10/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3914 - accuracy: 0.8446 - val_loss: 0.5216 - val_accuracy: 0.6531
Epoch 11/100
7/7 [==============================] - 0s 45ms/step - loss: 0.4022 - accuracy: 0.7979 - val_loss: 0.5331 - val_accuracy: 0.7347
Epoch 12/100
7/7 [==============================] - 0s 54ms/step - loss: 0.3811 - accuracy: 0.8238 - val_loss: 0.6522 - val_accuracy: 0.6735
Epoch 13/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4173 - accuracy: 0.7927 - val_loss: 0.5219 - val_accuracy: 0.7347
Epoch 14/100
7/7 [==============================] - 0s 44ms/step - loss: 0.4235 - accuracy: 0.7513 - val_loss: 0.5027 - val_accuracy: 0.6531
Epoch 15/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3789 - accuracy: 0.7979 - val_loss: 0.7249 - val_accuracy: 0.6531
Epoch 16/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3972 - accuracy: 0.8342 - val_loss: 0.4830 - val_accuracy: 0.6939
Epoch 17/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3339 - accuracy: 0.8601 - val_loss: 0.4912 - val_accuracy: 0.6531
Epoch 18/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3555 - accuracy: 0.7927 - val_loss: 0.6399 - val_accuracy: 0.6939
Epoch 19/100
7/7 [==============================] - 0s 43ms/step - loss: 0.3531 - accuracy: 0.8601 - val_loss: 0.5526 - val_accuracy: 0.6735
Epoch 20/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3810 - accuracy: 0.7876 - val_loss: 0.5751 - val_accuracy: 0.7143
Epoch 21/100
7/7 [==============================] - 0s 56ms/step - loss: 0.3409 - accuracy: 0.8549 - val_loss: 0.5524 - val_accuracy: 0.7551
Epoch 22/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3167 - accuracy: 0.8756 - val_loss: 0.6607 - val_accuracy: 0.7143
Epoch 23/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3732 - accuracy: 0.8601 - val_loss: 0.5993 - val_accuracy: 0.6939
Epoch 24/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3918 - accuracy: 0.7979 - val_loss: 0.5646 - val_accuracy: 0.6735
Epoch 25/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3624 - accuracy: 0.8187 - val_loss: 0.7324 - val_accuracy: 0.6735
Epoch 26/100
7/7 [==============================] - 0s 56ms/step - loss: 0.3531 - accuracy: 0.8446 - val_loss: 0.4501 - val_accuracy: 0.6939
Epoch 27/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3164 - accuracy: 0.8653 - val_loss: 0.4770 - val_accuracy: 0.6735
Epoch 28/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3557 - accuracy: 0.8290 - val_loss: 0.5188 - val_accuracy: 0.7551
Epoch 29/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3193 - accuracy: 0.8446 - val_loss: 0.5949 - val_accuracy: 0.7347
Epoch 30/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3049 - accuracy: 0.8601 - val_loss: 0.5904 - val_accuracy: 0.7347
Epoch 31/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3150 - accuracy: 0.8705 - val_loss: 0.4901 - val_accuracy: 0.6531
Epoch 32/100
7/7 [==============================] - 0s 55ms/step - loss: 0.3223 - accuracy: 0.8446 - val_loss: 0.5034 - val_accuracy: 0.6939
Epoch 33/100
7/7 [==============================] - 0s 43ms/step - loss: 0.3178 - accuracy: 0.8394 - val_loss: 0.6359 - val_accuracy: 0.7347
Epoch 34/100
7/7 [==============================] - 0s 43ms/step - loss: 0.3041 - accuracy: 0.8549 - val_loss: 0.5558 - val_accuracy: 0.7551
Epoch 35/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2853 - accuracy: 0.8808 - val_loss: 0.5089 - val_accuracy: 0.6939
Epoch 36/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2905 - accuracy: 0.8653 - val_loss: 0.5989 - val_accuracy: 0.7347
Epoch 37/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2885 - accuracy: 0.8705 - val_loss: 0.5644 - val_accuracy: 0.7551
Epoch 38/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2890 - accuracy: 0.8601 - val_loss: 0.5590 - val_accuracy: 0.7755
Epoch 39/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2792 - accuracy: 0.8808 - val_loss: 0.4820 - val_accuracy: 0.7551
Epoch 40/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2781 - accuracy: 0.8653 - val_loss: 0.4974 - val_accuracy: 0.7551
Epoch 41/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2873 - accuracy: 0.8705 - val_loss: 0.5550 - val_accuracy: 0.7551
Epoch 42/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2737 - accuracy: 0.8808 - val_loss: 0.5356 - val_accuracy: 0.7347
Epoch 43/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2677 - accuracy: 0.8860 - val_loss: 0.5071 - val_accuracy: 0.7551
Epoch 44/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2794 - accuracy: 0.8756 - val_loss: 0.5320 - val_accuracy: 0.6939
Epoch 45/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2932 - accuracy: 0.8394 - val_loss: 0.5533 - val_accuracy: 0.7755
Epoch 46/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2750 - accuracy: 0.8705 - val_loss: 0.5723 - val_accuracy: 0.7347
Epoch 47/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2694 - accuracy: 0.8808 - val_loss: 0.5347 - val_accuracy: 0.7551
Epoch 48/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2632 - accuracy: 0.8912 - val_loss: 0.5369 - val_accuracy: 0.7755
Epoch 49/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2677 - accuracy: 0.8860 - val_loss: 0.5837 - val_accuracy: 0.7143
Epoch 50/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2635 - accuracy: 0.8808 - val_loss: 0.5337 - val_accuracy: 0.7755
Epoch 51/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2592 - accuracy: 0.8912 - val_loss: 0.5533 - val_accuracy: 0.7755
Epoch 52/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2536 - accuracy: 0.8912 - val_loss: 0.5743 - val_accuracy: 0.7347
Epoch 53/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2511 - accuracy: 0.9016 - val_loss: 0.5451 - val_accuracy: 0.7551
Epoch 54/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2650 - accuracy: 0.8860 - val_loss: 0.5864 - val_accuracy: 0.6531
Epoch 55/100
7/7 [==============================] - 0s 45ms/step - loss: 0.3354 - accuracy: 0.8290 - val_loss: 0.5772 - val_accuracy: 0.7347
Epoch 56/100
7/7 [==============================] - 0s 54ms/step - loss: 0.2759 - accuracy: 0.8653 - val_loss: 0.5857 - val_accuracy: 0.7347
Epoch 57/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2522 - accuracy: 0.8860 - val_loss: 0.5930 - val_accuracy: 0.7347
Epoch 58/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2488 - accuracy: 0.8808 - val_loss: 0.5814 - val_accuracy: 0.7551
Epoch 59/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2428 - accuracy: 0.8964 - val_loss: 0.5805 - val_accuracy: 0.7551
Epoch 60/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2555 - accuracy: 0.8912 - val_loss: 0.5903 - val_accuracy: 0.7347
Epoch 61/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2391 - accuracy: 0.9016 - val_loss: 0.5721 - val_accuracy: 0.7755
Epoch 62/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2423 - accuracy: 0.8860 - val_loss: 0.5911 - val_accuracy: 0.7551
Epoch 63/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2450 - accuracy: 0.8756 - val_loss: 0.5845 - val_accuracy: 0.7755
Epoch 64/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2447 - accuracy: 0.8912 - val_loss: 0.5883 - val_accuracy: 0.7551
Epoch 65/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2386 - accuracy: 0.8964 - val_loss: 0.6093 - val_accuracy: 0.7551
Epoch 66/100
7/7 [==============================] - 0s 56ms/step - loss: 0.2278 - accuracy: 0.9067 - val_loss: 0.6654 - val_accuracy: 0.7347
Epoch 67/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2474 - accuracy: 0.8912 - val_loss: 0.6545 - val_accuracy: 0.7143
Epoch 68/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2509 - accuracy: 0.8808 - val_loss: 0.6298 - val_accuracy: 0.6735
Epoch 69/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2931 - accuracy: 0.8549 - val_loss: 0.6237 - val_accuracy: 0.7347
Epoch 70/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2653 - accuracy: 0.8808 - val_loss: 0.6296 - val_accuracy: 0.7143
Epoch 71/100
7/7 [==============================] - 0s 43ms/step - loss: 0.2649 - accuracy: 0.8549 - val_loss: 0.5915 - val_accuracy: 0.6531
Epoch 72/100
7/7 [==============================] - 0s 44ms/step - loss: 0.3141 - accuracy: 0.8394 - val_loss: 0.6017 - val_accuracy: 0.7755
Epoch 73/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2557 - accuracy: 0.8756 - val_loss: 0.6444 - val_accuracy: 0.7347
Epoch 74/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2220 - accuracy: 0.9067 - val_loss: 0.6380 - val_accuracy: 0.7347
Epoch 75/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2209 - accuracy: 0.9016 - val_loss: 0.6977 - val_accuracy: 0.7347
Epoch 76/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2318 - accuracy: 0.8964 - val_loss: 0.6422 - val_accuracy: 0.7347
Epoch 77/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2183 - accuracy: 0.9067 - val_loss: 0.6183 - val_accuracy: 0.7143
Epoch 78/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2304 - accuracy: 0.8860 - val_loss: 0.6522 - val_accuracy: 0.7143
Epoch 79/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2338 - accuracy: 0.8756 - val_loss: 0.5959 - val_accuracy: 0.7551
Epoch 80/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2250 - accuracy: 0.8964 - val_loss: 0.6232 - val_accuracy: 0.7551
Epoch 81/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2275 - accuracy: 0.8912 - val_loss: 0.6500 - val_accuracy: 0.7551
Epoch 82/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2053 - accuracy: 0.9016 - val_loss: 0.6249 - val_accuracy: 0.7347
Epoch 83/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2250 - accuracy: 0.8964 - val_loss: 0.6744 - val_accuracy: 0.7347
Epoch 84/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2109 - accuracy: 0.9067 - val_loss: 0.7039 - val_accuracy: 0.7347
Epoch 85/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2171 - accuracy: 0.9016 - val_loss: 0.6693 - val_accuracy: 0.7347
Epoch 86/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2187 - accuracy: 0.9067 - val_loss: 0.6765 - val_accuracy: 0.7143
Epoch 87/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2225 - accuracy: 0.9067 - val_loss: 0.6637 - val_accuracy: 0.6939
Epoch 88/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2193 - accuracy: 0.8808 - val_loss: 0.7029 - val_accuracy: 0.6735
Epoch 89/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2644 - accuracy: 0.8653 - val_loss: 0.6829 - val_accuracy: 0.6939
Epoch 90/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2625 - accuracy: 0.8601 - val_loss: 0.6617 - val_accuracy: 0.7347
Epoch 91/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2206 - accuracy: 0.8860 - val_loss: 0.6889 - val_accuracy: 0.7551
Epoch 92/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2090 - accuracy: 0.8964 - val_loss: 0.7322 - val_accuracy: 0.7347
Epoch 93/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2016 - accuracy: 0.9119 - val_loss: 0.7244 - val_accuracy: 0.7347
Epoch 94/100
7/7 [==============================] - 0s 55ms/step - loss: 0.1933 - accuracy: 0.9067 - val_loss: 0.6788 - val_accuracy: 0.7347
Epoch 95/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2002 - accuracy: 0.9171 - val_loss: 0.6849 - val_accuracy: 0.7143
Epoch 96/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2138 - accuracy: 0.8964 - val_loss: 0.7610 - val_accuracy: 0.6939
Epoch 97/100
7/7 [==============================] - 0s 45ms/step - loss: 0.2225 - accuracy: 0.8912 - val_loss: 0.6998 - val_accuracy: 0.6939
Epoch 98/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2089 - accuracy: 0.9067 - val_loss: 0.6846 - val_accuracy: 0.7143
Epoch 99/100
7/7 [==============================] - 0s 44ms/step - loss: 0.2043 - accuracy: 0.8964 - val_loss: 0.7292 - val_accuracy: 0.7347
Epoch 100/100
7/7 [==============================] - 0s 55ms/step - loss: 0.2008 - accuracy: 0.9016 - val_loss: 0.7064 - val_accuracy: 0.7143
<tensorflow.python.keras.callbacks.History at 0x7f33184937b8>
```
```python
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)  # produces the "Accuracy ..." line below (assumed)
```
```
2/2 [==============================] - 1s 329ms/step - loss: 0.5511 - accuracy: 0.8197
Accuracy 0.8196721
```