Training

Building and compiling a model

[1]:
%%capture
import h5py
from chimeranet.models import ChimeraPPModel

# probe the target shape (time, frequency, channels) from the dataset and set the embedding dimension D
dataset_path = 'example-dataset-train.hdf5'
with h5py.File(dataset_path, 'r') as f:
    _, T, F, C = f['y/embedding'].shape
D = 20
cm = ChimeraPPModel(T, F, C, D)

# build_model returns a Keras Model object
model = cm.build_model()
model.compile(
    'rmsprop',
    loss={
        'embedding': cm.loss_deepclustering(),
        'mask': cm.loss_mask()
    },
    loss_weights={
        'embedding': 0.9,
        'mask': 0.1
    }
)
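
With these loss_weights, Keras minimizes a weighted sum of the per-output losses, so the loss column reported during training is 0.9 × embedding_loss + 0.1 × mask_loss. A minimal check against the epoch-1 figures from the training log below:

# total loss is the weighted sum of the per-output losses
embedding_loss = 18457.8914   # epoch-1 embedding_loss from the log
mask_loss = 17691.7186        # epoch-1 mask_loss from the log
total = 0.9 * embedding_loss + 0.1 * mask_loss
print(total)  # ~18381.27, matching the reported 'loss'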

Training a model

[2]:
# load training and validation data
dataset_validation_path = 'example-dataset-validation.hdf5'
with h5py.File(dataset_path, 'r') as f:
    x_train = f['x'][()]
    y_train = {'mask': f['y/mask'][()], 'embedding': f['y/embedding'][()]}
with h5py.File(dataset_validation_path, 'r') as f:
    x_validation = f['x'][()]
    y_validation = {'mask': f['y/mask'][()], 'embedding': f['y/embedding'][()]}

# train the model with model.fit
model.fit(
    x=x_train,
    y=y_train,
    validation_data=(x_validation, y_validation),
    batch_size=32,
    epochs=10
)
# save the model
model_path = 'example-model.hdf5'
model.save(model_path)
Train on 160 samples, validate on 160 samples
Epoch 1/10
160/160 [==============================] - 261s 2s/step - loss: 18381.2736 - embedding_loss: 18457.8914 - mask_loss: 17691.7186 - val_loss: 12940.3148 - val_embedding_loss: 12466.7975 - val_mask_loss: 17201.9725
Epoch 2/10
160/160 [==============================] - 258s 2s/step - loss: 12468.0131 - embedding_loss: 12140.8994 - mask_loss: 15412.0373 - val_loss: 11921.1551 - val_embedding_loss: 11286.8416 - val_mask_loss: 17629.9813
Epoch 3/10
160/160 [==============================] - 266s 2s/step - loss: 12008.5932 - embedding_loss: 11784.1303 - mask_loss: 14028.7604 - val_loss: 11863.6486 - val_embedding_loss: 11170.0996 - val_mask_loss: 18105.5883
Epoch 4/10
160/160 [==============================] - 266s 2s/step - loss: 12022.4951 - embedding_loss: 11823.9797 - mask_loss: 13809.1371 - val_loss: 11875.8588 - val_embedding_loss: 11263.6596 - val_mask_loss: 17385.6521
Epoch 5/10
160/160 [==============================] - 266s 2s/step - loss: 11840.2252 - embedding_loss: 11682.9516 - mask_loss: 13255.6879 - val_loss: 11862.4658 - val_embedding_loss: 11327.9975 - val_mask_loss: 16672.6859
Epoch 6/10
160/160 [==============================] - 273s 2s/step - loss: 11768.4652 - embedding_loss: 11728.4332 - mask_loss: 12128.7551 - val_loss: 11538.6459 - val_embedding_loss: 11066.1010 - val_mask_loss: 15791.5525
Epoch 7/10
160/160 [==============================] - 270s 2s/step - loss: 11755.5100 - embedding_loss: 11649.7219 - mask_loss: 12707.6035 - val_loss: 11632.5807 - val_embedding_loss: 11140.6088 - val_mask_loss: 16060.3297
Epoch 8/10
160/160 [==============================] - 276s 2s/step - loss: 11816.1187 - embedding_loss: 11642.1570 - mask_loss: 13381.7742 - val_loss: 11814.3582 - val_embedding_loss: 11381.6398 - val_mask_loss: 15708.8264
Epoch 9/10
160/160 [==============================] - 270s 2s/step - loss: 11801.6576 - embedding_loss: 11660.4234 - mask_loss: 13072.7674 - val_loss: 11853.9027 - val_embedding_loss: 11215.5012 - val_mask_loss: 17599.5209
Epoch 10/10
160/160 [==============================] - 283s 2s/step - loss: 11742.7662 - embedding_loss: 11698.4328 - mask_loss: 12141.7742 - val_loss: 11385.7367 - val_embedding_loss: 11052.1635 - val_mask_loss: 14387.8969
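
To reuse the trained model later, the saved HDF5 file can be loaded back with Keras. The snippet below is a minimal sketch, assuming a standard Keras installation; since the Chimera losses are custom functions, loading with compile=False skips restoring them, and the model only needs to be recompiled (as in the cell above) if training is to continue.

from keras.models import load_model

# restore architecture and weights; skip the custom loss functions
restored = load_model(model_path, compile=False)

# recompile with the Chimera losses only if further training is needed, e.g.:
# restored.compile('rmsprop',
#                  loss={'embedding': cm.loss_deepclustering(), 'mask': cm.loss_mask()},
#                  loss_weights={'embedding': 0.9, 'mask': 0.1})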