There are several ways to load a model from a checkpoint (ckpt) file and run inference.

#### Method 1

Build the model instance from source code, just as you would when preparing to train from scratch, and then load the saved weights into it.

```python
model = build_model_function()   # same function used when training
model.load_weights(ckpt_path)    # restore the trained weights
model.predict(X)
```

#### Method 2

When the ckpt file bundles both the model architecture and the weights, simply use the `load_model` function.

```python
model = tf.keras.models.load_model(ckpt_path)
model.predict(X)
```

#### Method 3

If the model architecture and weights are saved in separate files, use `model_from_json` (or `model_from_config`) together with `load_weights`.

```python
with open("model_arch.json", "r") as fd:
    model = tf.keras.models.model_from_json(fd.read())
model.load_weights(weights_path)  # path to the separately saved weights
model.predict(X)
```


## Symptom

Usually, any of these methods should work, but I have run into cases where only Method 1 works and the others fail. Methods 2 and 3 appeared to construct the model and load the weights without any problem, but their predictions were incorrect compared to the results from Method 1.

## Solution

The cause of this problem was in fact the use of Lambda layers in the model architecture. A Lambda layer wraps a custom function defined by the user, and when the model architecture is saved to the ckpt (or JSON) file, the function's source code is not saved. Instead, it is stored as an opaque, unreadable string (a serialized byte encoding of the function). This is why rebuilding a Lambda-based model from the original model-building function in source works, while reconstructing it from an external save file fails.
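To see why the saved representation is unreadable, note that Keras serializes a Lambda's Python function by marshaling its code object and encoding the bytes. The following is a standard-library sketch of that idea, not the exact Keras code path:

```python
import base64
import marshal
import types

# A custom function like one you might wrap in a Lambda layer.
scale = lambda x: x * 0.5

# Marshal the function's compiled code object and encode it as text,
# similar to what Keras stores in the saved architecture config.
raw = marshal.dumps(scale.__code__)
encoded = base64.b64encode(raw).decode("ascii")

# The encoded string is an opaque blob, not readable source code.
print(encoded[:40])

# Restoring it requires unmarshaling, which is fragile: marshal output
# is not guaranteed to be portable across Python versions.
code = marshal.loads(base64.b64decode(encoded))
restored = types.FunctionType(code, globals())
print(restored(4))  # 2.0
```

Because the restore step depends on the exact Python bytecode format, a model saved in one environment may fail to reproduce the Lambda's behavior in another.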

To solve this problem, you will either have to use Method 1 or find an alternative design that avoids Lambda layers altogether.
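As a sketch of the alternative, a Lambda layer can usually be replaced by a small `tf.keras.layers.Layer` subclass that implements `get_config`, which makes it properly serializable (the `Scale` layer and `factor` parameter below are illustrative, not from the original post):

```python
import tensorflow as tf

class Scale(tf.keras.layers.Layer):
    """Replaces Lambda(lambda x: x * factor) with a serializable layer."""

    def __init__(self, factor=0.5, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        # Saving the constructor arguments lets Keras rebuild the layer
        # from the architecture file instead of a marshaled function.
        config = super().get_config()
        config.update({"factor": self.factor})
        return config

# When loading, pass the class via custom_objects, e.g.:
# model = tf.keras.models.load_model(ckpt_path, custom_objects={"Scale": Scale})
```

With this approach, Methods 2 and 3 work again because the layer's configuration, rather than an opaque function dump, is what gets written to the save file.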

Categories: tensorflow

### 1 Comment

#### Anonymous · September 28, 2020 at 6:10 am

Thanks very much. I spent endless time trying to find an answer to this issue.