There are several ways to load a model from a ckpt file and run inference.

Method #1: Build the model instance from source code, just as you would when preparing to train from scratch, then load the weights into it.
model = build_model_function()
model.load_weights(ckpt_path)
model.predict(X)
Method #2: When the ckpt file bundles both the model architecture and the weights, simply use
model = tf.keras.models.load_model(ckpt_path)
model.predict(X)
Method #3: When the model architecture and weights are saved in separate files, use
with open("model_arch.json", 'r') as fd:
    archjson = fd.read()
model = tf.keras.models.model_from_json(archjson)
model.load_weights(ckpt_path)
model.predict(X)
Usually, any of these methods should work, but I have run into cases where only Method #1 works and the others fail. Methods #2 and #3 seemed to have no problem constructing the model and loading the weights, but their predictions were incorrect compared to the results from Method #1.
The cause of this problem was in fact the use of Lambda layers in the model architecture. A Lambda layer wraps a custom function defined by the user, and when the model architecture is saved to the ckpt file (or JSON file), that function itself is not saved. Instead, it is stored as a strange string that is unreadable to humans. This is why, for a model that uses Lambda layers, reconstruction via the original model-building function from source works, while reconstruction from an external save file fails.
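To make the "unreadable string" concrete, here is a minimal stdlib-only sketch of the kind of serialization involved (an illustration of the mechanism, not Keras's exact code): the Lambda's Python function is reduced to its bytecode, which is marshalled and base64-encoded into the JSON. The function name `custom_op` is a hypothetical example.

```python
import base64
import marshal
import types

def custom_op(x):
    # A user-defined function like one you might wrap in a Lambda layer.
    return x * 2 + 1

# What lands in the saved architecture: opaque base64 of the function's
# bytecode, unreadable to humans and fragile across Python versions.
encoded = base64.b64encode(marshal.dumps(custom_op.__code__)).decode("ascii")
print(encoded[:40])

# Restoring it means rebuilding a function object from raw bytecode,
# which only works reliably in a matching Python environment.
code = marshal.loads(base64.b64decode(encoded))
restored = types.FunctionType(code, globals())
print(restored(10))
```

This fragility is why rebuilding the model from source (where the real function definition lives) is the robust path.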
To solve this problem, the user is forced to use Method #1, or to find an alternative approach that avoids using Lambda layers in the first place (for example, implementing the custom logic as a subclassed layer).