
I am using this repo as my reference.

After training for 78 epochs, I saved the model as done in utils.py. However, when I run the following code:

import torch
import torch.optim as optim

# Generator, weights_init, and config come from the reference repo
CHECKPOINT_GEN = "output/vanilla/checkpoints/gen-78.pth.tar"
gen = Generator(in_channels=3, features=64).to("cpu")
gen.apply(weights_init)
opt_gen = optim.Adam(gen.parameters(), lr=config.LEARNING_RATE, betas=(0.5, 0.999))
scheduler_gen = optim.lr_scheduler.StepLR(opt_gen, step_size=100, gamma=0.1)
gen_checkpoint = torch.load(CHECKPOINT_GEN, map_location="cpu")

# keys stored in the checkpoint
for k, v in gen_checkpoint["state_dict"].items():
    print(k)

# keys of the freshly constructed model
for k, v in gen.state_dict().items():
    print(k)

The first loop prints keys named like these:

initial_down.0.weight,
initial_down.0.bias,
down1.conv.0.weight,
down1.conv.1.weight,
down1.conv.1.bias...

However, the second loop prints only the keys with index 0 in their names (i.e. keys like initial_down.0.weight, initial_down.0.bias, and down1.conv.0.weight, but not down1.conv.1.weight or down1.conv.1.bias). Because of this mismatch, I am unable to load the generator checkpoint for evaluation. How can I fix this issue?
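For completeness, this is how the two key sets can be compared directly, and what a non-strict load reports (a minimal sketch, assuming gen and gen_checkpoint from the snippet above):

ckpt_keys = set(gen_checkpoint["state_dict"].keys())
model_keys = set(gen.state_dict().keys())
print("in checkpoint but not in model:", sorted(ckpt_keys - model_keys))
print("in model but not in checkpoint:", sorted(model_keys - ckpt_keys))

# strict=False skips mismatched keys instead of raising an error and
# returns a named tuple listing them
result = gen.load_state_dict(gen_checkpoint["state_dict"], strict=False)
print("missing:", result.missing_keys)        # expected by the model, absent from the checkpoint
print("unexpected:", result.unexpected_keys)  # present in the checkpoint, absent from the model

missing_keys lists entries the current module expects but the checkpoint lacks; unexpected_keys is the reverse.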

jayant98
  • Can you give an example of a few of those layer names containing "0"? – Ivan Jul 05 '22 at 06:05
  • @Ivan updated the description. – jayant98 Jul 05 '22 at 22:42
  • If you are missing layers then I guess there is really nothing we can do. Can you print what `gen.load_state_dict(gen_checkpoint["state_dict"], strict=False)` returns? – Ivan Jul 06 '22 at 06:37

0 Answers