Model: "model"
________________________________________________________________________________________________________________________________
Layer (type)                                        Output Shape            Param #   Connected to
================================================================================================================================
input_1 (InputLayer)                                [(None, 180, 180, 3)]   0         []
sequential (Sequential)                             (None, 180, 180, 3)     0         ['input_1[0][0]']
rescaling (Rescaling)                               (None, 180, 180, 3)     0         ['sequential[0][0]']
conv2d (Conv2D)                                     (None, 90, 90, 32)      896       ['rescaling[0][0]']
batch_normalization (BatchNormalization)            (None, 90, 90, 32)      128       ['conv2d[0][0]']
activation (Activation)                             (None, 90, 90, 32)      0         ['batch_normalization[0][0]']
conv2d_1 (Conv2D)                                   (None, 90, 90, 64)      18496     ['activation[0][0]']
batch_normalization_1 (BatchNormalization)          (None, 90, 90, 64)      256       ['conv2d_1[0][0]']
activation_1 (Activation)                           (None, 90, 90, 64)      0         ['batch_normalization_1[0][0]']
activation_2 (Activation)                           (None, 90, 90, 64)      0         ['activation_1[0][0]']
separable_conv2d (SeparableConv2D)                  (None, 90, 90, 128)     8896      ['activation_2[0][0]']
batch_normalization_2 (BatchNormalization)          (None, 90, 90, 128)     512       ['separable_conv2d[0][0]']
activation_3 (Activation)                           (None, 90, 90, 128)     0         ['batch_normalization_2[0][0]']
separable_conv2d_1 (SeparableConv2D)                (None, 90, 90, 128)     17664     ['activation_3[0][0]']
batch_normalization_3 (BatchNormalization)          (None, 90, 90, 128)     512       ['separable_conv2d_1[0][0]']
max_pooling2d (MaxPooling2D)                        (None, 45, 45, 128)     0         ['batch_normalization_3[0][0]']
conv2d_2 (Conv2D)                                   (None, 45, 45, 128)     8320      ['activation_1[0][0]']
add (Add)                                           (None, 45, 45, 128)     0         ['max_pooling2d[0][0]', 'conv2d_2[0][0]']
activation_4 (Activation)                           (None, 45, 45, 128)     0         ['add[0][0]']
separable_conv2d_2 (SeparableConv2D)                (None, 45, 45, 256)     34176     ['activation_4[0][0]']
batch_normalization_4 (BatchNormalization)          (None, 45, 45, 256)     1024      ['separable_conv2d_2[0][0]']
activation_5 (Activation)                           (None, 45, 45, 256)     0         ['batch_normalization_4[0][0]']
separable_conv2d_3 (SeparableConv2D)                (None, 45, 45, 256)     68096     ['activation_5[0][0]']
batch_normalization_5 (BatchNormalization)          (None, 45, 45, 256)     1024      ['separable_conv2d_3[0][0]']
max_pooling2d_1 (MaxPooling2D)                      (None, 23, 23, 256)     0         ['batch_normalization_5[0][0]']
conv2d_3 (Conv2D)                                   (None, 23, 23, 256)     33024     ['add[0][0]']
add_1 (Add)                                         (None, 23, 23, 256)     0         ['max_pooling2d_1[0][0]', 'conv2d_3[0][0]']
activation_6 (Activation)                           (None, 23, 23, 256)     0         ['add_1[0][0]']
separable_conv2d_4 (SeparableConv2D)                (None, 23, 23, 512)     133888    ['activation_6[0][0]']
batch_normalization_6 (BatchNormalization)          (None, 23, 23, 512)     2048      ['separable_conv2d_4[0][0]']
activation_7 (Activation)                           (None, 23, 23, 512)     0         ['batch_normalization_6[0][0]']
separable_conv2d_5 (SeparableConv2D)                (None, 23, 23, 512)     267264    ['activation_7[0][0]']
batch_normalization_7 (BatchNormalization)          (None, 23, 23, 512)     2048      ['separable_conv2d_5[0][0]']
max_pooling2d_2 (MaxPooling2D)                      (None, 12, 12, 512)     0         ['batch_normalization_7[0][0]']
conv2d_4 (Conv2D)                                   (None, 12, 12, 512)     131584    ['add_1[0][0]']
add_2 (Add)                                         (None, 12, 12, 512)     0         ['max_pooling2d_2[0][0]', 'conv2d_4[0][0]']
activation_8 (Activation)                           (None, 12, 12, 512)     0         ['add_2[0][0]']
separable_conv2d_6 (SeparableConv2D)                (None, 12, 12, 728)     378072    ['activation_8[0][0]']
batch_normalization_8 (BatchNormalization)          (None, 12, 12, 728)     2912      ['separable_conv2d_6[0][0]']
activation_9 (Activation)                           (None, 12, 12, 728)     0         ['batch_normalization_8[0][0]']
separable_conv2d_7 (SeparableConv2D)                (None, 12, 12, 728)     537264    ['activation_9[0][0]']
batch_normalization_9 (BatchNormalization)          (None, 12, 12, 728)     2912      ['separable_conv2d_7[0][0]']
max_pooling2d_3 (MaxPooling2D)                      (None, 6, 6, 728)       0         ['batch_normalization_9[0][0]']
conv2d_5 (Conv2D)                                   (None, 6, 6, 728)       373464    ['add_2[0][0]']
add_3 (Add)                                         (None, 6, 6, 728)       0         ['max_pooling2d_3[0][0]', 'conv2d_5[0][0]']
separable_conv2d_8 (SeparableConv2D)                (None, 6, 6, 1024)      753048    ['add_3[0][0]']
batch_normalization_10 (BatchNormalization)         (None, 6, 6, 1024)      4096      ['separable_conv2d_8[0][0]']
activation_10 (Activation)                          (None, 6, 6, 1024)      0         ['batch_normalization_10[0][0]']
global_average_pooling2d (GlobalAveragePooling2D)   (None, 1024)            0         ['activation_10[0][0]']
dropout (Dropout)                                   (None, 1024)            0         ['global_average_pooling2d[0][0]']
dense (Dense)                                       (None, 36)              36900     ['dropout[0][0]']
================================================================================================================================
Total params: 2,818,524
Trainable params: 2,809,788
Non-trainable params: 8,736
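For reference, the summary above is consistent with the following model definition: a small Xception-style network of separable-convolution blocks with 1x1 strided residual projections, in the pattern of the Keras "image classification from scratch" example. This is a sketch reconstructed from the layer shapes and parameter counts; the contents of the `sequential` augmentation block, the ReLU activations, the rescaling factor, the dropout rate, and the softmax output are assumptions the summary does not record.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed augmentation pipeline; the summary only shows a `sequential` block
# whose output shape is unchanged at (180, 180, 3), so its contents are a guess.
data_augmentation = keras.Sequential(
    [layers.RandomFlip("horizontal"), layers.RandomRotation(0.1)]
)

def make_model(input_shape=(180, 180, 3), num_classes=36):
    inputs = keras.Input(shape=input_shape)
    x = data_augmentation(inputs)                              # sequential
    x = layers.Rescaling(1.0 / 255)(x)                         # rescaling (factor assumed)
    x = layers.Conv2D(32, 3, strides=2, padding="same")(x)     # conv2d: 180 -> 90
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(64, 3, padding="same")(x)                # conv2d_1
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    previous_block_activation = x  # shortcut branch starts at activation_1

    for size in [128, 256, 512, 728]:
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(size, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(size, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        # 1x1 strided projection so the shortcut matches the block output shape
        residual = layers.Conv2D(size, 1, strides=2, padding="same")(
            previous_block_activation
        )
        x = layers.add([x, residual])
        previous_block_activation = x

    x = layers.SeparableConv2D(1024, 3, padding="same")(x)     # separable_conv2d_8
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.25)(x)                                # rate assumed, not in the summary
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # dense: 36 classes
    return keras.Model(inputs, outputs)

model = make_model()
model.summary()
```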
Epoch 50/50
78/78 [==============================] - 50s 591ms/step - loss: 0.0165 - accuracy: 0.9049 - val_loss: 0.0195 - val_accuracy: 0.9007
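The single "Epoch 50/50" line above is the last line of a standard compile/fit run. A hedged sketch of that call follows; the optimizer, loss function, and the `train_ds`/`val_ds` dataset objects are assumptions (only the epoch count and the reported metric names appear in the log).

```python
from tensorflow import keras

# Sketch of the training call; loss and optimizer are assumed, not shown in the log.
model.compile(
    optimizer=keras.optimizers.Adam(1e-3),
    loss="categorical_crossentropy",   # assumed; requires one-hot labels
    metrics=["accuracy"],
)
history = model.fit(
    train_ds,                 # hypothetical tf.data.Dataset of (image, label) batches
    validation_data=val_ds,   # hypothetical validation dataset
    epochs=50,
)
```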
Classification Report
precision recall f1-score support
apple 0.78 0.70 0.74 10
banana 0.80 0.89 0.84 9
beetroot 1.00 0.78 0.88 9
bell pepper 1.00 0.86 0.92 7
cabbage 0.70 1.00 0.82 7
capsicum 1.00 0.78 0.88 9
carrot 1.00 1.00 1.00 7
cauliflower 1.00 1.00 1.00 8
chilli pepper 1.00 1.00 1.00 7
corn 1.00 0.25 0.40 8
cucumber 0.78 1.00 0.88 7
eggplant 1.00 1.00 1.00 8
garlic 0.89 1.00 0.94 8
ginger 1.00 1.00 1.00 10
grapes 1.00 1.00 1.00 9
jalepeno 0.88 0.78 0.82 9
kiwi 1.00 1.00 1.00 8
lemon 0.75 1.00 0.86 6
lettuce 1.00 0.83 0.91 6
mango 1.00 1.00 1.00 9
onion 1.00 0.89 0.94 9
orange 1.00 0.60 0.75 5
paprika 1.00 0.89 0.94 9
pear 1.00 1.00 1.00 8
peas 1.00 1.00 1.00 6
pineapple 1.00 1.00 1.00 8
pomegranate 0.89 1.00 0.94 8
potato 0.80 0.80 0.80 10
raddish 1.00 0.83 0.91 6
soy beans 1.00 1.00 1.00 7
spinach 1.00 1.00 1.00 7
sweetcorn 0.57 1.00 0.73 8
sweetpotato 1.00 0.89 0.94 9
tomato 0.62 1.00 0.76 8
turnip 1.00 1.00 1.00 7
watermelon 1.00 0.88 0.93 8
accuracy 0.90 284
macro avg 0.93 0.91 0.90 284
weighted avg 0.93 0.90 0.90 284
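A report in this format is produced by scikit-learn's classification_report applied to the validation predictions. The sketch below assumes one-hot labels, an unshuffled `val_ds`, and a `class_names` list taken from the dataset loader; none of these appear in the output above.

```python
import numpy as np
from sklearn.metrics import classification_report

# Labels are assumed one-hot; val_ds must be built with shuffle=False so that
# the collected labels stay aligned with model.predict's output order.
y_true = np.concatenate([np.argmax(y, axis=-1) for _, y in val_ds])
y_pred = np.argmax(model.predict(val_ds), axis=-1)
print("Classification Report")
print(classification_report(y_true, y_pred, target_names=class_names))
```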
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
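The dict_keys(...) line is the output of history.history.keys(); those four series can be plotted directly, for example:

```python
import matplotlib.pyplot as plt

# Minimal sketch: plot the four curves recorded in history.history.
print(history.history.keys())
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="val accuracy")
plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```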
Model: "sequential_2"
_________________________________________________________________
Layer (type)                     Output Shape          Param #
=================================================================
conv2d_8 (Conv2D)                (None, 178, 178, 32)  896
max_pooling2d_6 (MaxPooling2D)   (None, 89, 89, 32)    0
conv2d_9 (Conv2D)                (None, 87, 87, 64)    18496
max_pooling2d_7 (MaxPooling2D)   (None, 43, 43, 64)    0
flatten_1 (Flatten)              (None, 118336)        0
dense_4 (Dense)                  (None, 128)           15147136
dense_5 (Dense)                  (None, 256)           33024
dense_6 (Dense)                  (None, 36)            9252
=================================================================
Total params: 15,208,804
Trainable params: 15,208,804
Non-trainable params: 0
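A Sequential definition consistent with the "sequential_2" summary is sketched below; the ReLU activations and the softmax output are assumptions, since the summary only fixes layer types, output shapes, and parameter counts.

```python
from tensorflow import keras
from tensorflow.keras import layers

model2 = keras.Sequential(
    [
        keras.Input(shape=(180, 180, 3)),
        layers.Conv2D(32, 3, activation="relu"),   # conv2d_8 -> (178, 178, 32)
        layers.MaxPooling2D(),                     # -> (89, 89, 32)
        layers.Conv2D(64, 3, activation="relu"),   # conv2d_9 -> (87, 87, 64)
        layers.MaxPooling2D(),                     # -> (43, 43, 64)
        layers.Flatten(),                          # -> 118336
        layers.Dense(128, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(36, activation="softmax"),    # 36 classes
    ],
    name="sequential_2",
)
model2.summary()
```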
Epoch 10/10
78/78 [==============================] - 44s 519ms/step - loss: 0.2938 - accuracy: 0.9478 - val_loss: 0.5521 - val_accuracy: 0.9178
Classification Report
precision recall f1-score support
apple 1.00 0.60 0.75 10
banana 1.00 0.78 0.88 9
beetroot 1.00 1.00 1.00 9
bell pepper 0.78 1.00 0.88 7
cabbage 0.88 1.00 0.93 7
capsicum 0.89 0.89 0.89 9
carrot 1.00 1.00 1.00 7
cauliflower 0.88 0.88 0.88 8
chilli pepper 0.78 1.00 0.88 7
corn 1.00 0.75 0.86 8
cucumber 0.88 1.00 0.93 7
eggplant 1.00 0.88 0.93 8
garlic 1.00 1.00 1.00 8
ginger 1.00 1.00 1.00 10
grapes 0.75 1.00 0.86 9
jalepeno 1.00 1.00 1.00 9
kiwi 1.00 1.00 1.00 8
lemon 0.55 1.00 0.71 6
lettuce 1.00 0.83 0.91 6
mango 1.00 0.89 0.94 9
onion 0.82 1.00 0.90 9
orange 1.00 0.80 0.89 5
paprika 1.00 0.89 0.94 9
pear 1.00 1.00 1.00 8
peas 1.00 1.00 1.00 6
pineapple 1.00 1.00 1.00 8
pomegranate 0.89 1.00 0.94 8
potato 0.88 0.70 0.78 10
raddish 0.83 0.83 0.83 6
soy beans 0.88 1.00 0.93 7
spinach 0.78 1.00 0.88 7
sweetcorn 1.00 0.88 0.93 8
sweetpotato 1.00 0.78 0.88 9
tomato 1.00 0.75 0.86 8
turnip 1.00 1.00 1.00 7
watermelon 1.00 1.00 1.00 8
accuracy 0.92 284
macro avg 0.93 0.92 0.92 284
weighted avg 0.93 0.92 0.92 284
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
Model: "sequential_4"
_________________________________________________________________
Layer (type)                      Output Shape          Param #
=================================================================
conv2d_12 (Conv2D)                (None, 178, 178, 32)  896
max_pooling2d_10 (MaxPooling2D)   (None, 89, 89, 32)    0
dropout_3 (Dropout)               (None, 89, 89, 32)    0
conv2d_13 (Conv2D)                (None, 87, 87, 64)    18496
max_pooling2d_11 (MaxPooling2D)   (None, 43, 43, 64)    0
dropout_4 (Dropout)               (None, 43, 43, 64)    0
flatten_3 (Flatten)               (None, 118336)        0
dense_8 (Dense)                   (None, 36)            4260132
=================================================================
Total params: 4,279,524
Trainable params: 4,279,524
Non-trainable params: 0
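Likewise, a definition consistent with the "sequential_4" summary; the dropout rates and the activations are assumptions not recorded in the summary.

```python
from tensorflow import keras
from tensorflow.keras import layers

model3 = keras.Sequential(
    [
        keras.Input(shape=(180, 180, 3)),
        layers.Conv2D(32, 3, activation="relu"),   # conv2d_12 -> (178, 178, 32)
        layers.MaxPooling2D(),                     # -> (89, 89, 32)
        layers.Dropout(0.25),                      # dropout_3, rate assumed
        layers.Conv2D(64, 3, activation="relu"),   # conv2d_13 -> (87, 87, 64)
        layers.MaxPooling2D(),                     # -> (43, 43, 64)
        layers.Dropout(0.25),                      # dropout_4, rate assumed
        layers.Flatten(),                          # -> 118336
        layers.Dense(36, activation="softmax"),    # dense_8
    ],
    name="sequential_4",
)
model3.summary()
```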
Epoch 10/10
78/78 [==============================] - 59s 677ms/step - loss: 0.5996 - accuracy: 0.8758 - val_loss: 1.1097 - val_accuracy: 0.9110