I am training an RL model using the DQN algorithm. At every iteration, I save the model as follows:
agent = dqn.DQNTrainer(env=CustomEnv, config=config)
for n in range(100):
    result = agent.train()
    agent.save()
I want to evaluate the trained RL model on a different environment. I am not sure how to load the checkpoint and evaluate it in a different environment.
I tried to load the trained model (the last checkpoint), but it throws an error. I do the following:
agent.restore('./RL_saved/checkpoint-100.tune_metadata')
It throws an error saying
unsupported pickle protocol: 5
and when I do
agent.restore('./RL_saved/checkpoint-100.tune_metadata')
it throws a different error saying
Invalid magic number; corrupt file?
Am I loading the model in the right way? And how do I pass the environment to the loaded model?
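For reference, the "unsupported pickle protocol: 5" part seems unrelated to RLlib itself: protocol 5 was introduced in Python 3.8, so a checkpoint pickled under Python 3.8+ cannot be unpickled by an older interpreter. A minimal stdlib-only illustration of the idea (no RLlib involved):

```python
import pickle

# Pickle streams written with protocol >= 2 start with the PROTO
# opcode (0x80) followed by the protocol number. A checkpoint saved
# with protocol 5 (Python 3.8+ default) makes an older interpreter
# raise "unsupported pickle protocol: 5" on load.
data = {"weights": [0.1, 0.2, 0.3]}
blob = pickle.dumps(data, protocol=5)

print(blob[:2])            # b'\x80\x05' -> protocol 5 stream
print(pickle.loads(blob))  # round-trips fine on the same interpreter
```

So one thing to check is that the Python version used to restore matches (or is newer than) the one used to train.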
CodePudding user response:
I found the answer to this, in case it helps anyone.
We first create a DQNTrainer object and then load the checkpoint without the .tune_metadata extension:
agent = dqn.DQNTrainer(env=CustomEnv, config=config)
agent.restore('./RL_saved/checkpoint-100')
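As for evaluating on a different environment: pass the evaluation environment to the trainer's constructor before calling restore(), since restore() only reloads the trained state, not the environment. You can then roll out episodes yourself with agent.compute_action(obs). The sketch below shows that rollout loop; StubEnv and stub_policy are hypothetical stand-ins (so it runs without Ray installed) for your real evaluation environment and the restored agent's compute_action:

```python
# Evaluation-loop sketch. In real code you would build the agent around
# the *evaluation* env and restore the checkpoint first, e.g.:
#   eval_agent = dqn.DQNTrainer(env=EvalEnv, config=config)
#   eval_agent.restore('./RL_saved/checkpoint-100')
# and replace stub_policy(obs) with eval_agent.compute_action(obs).

class StubEnv:
    """Tiny gym-style environment: reach state 3 to earn a reward."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action                  # action is 0 or 1
        done = self.state >= 3
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

def stub_policy(obs):
    # Placeholder for the restored agent's compute_action(obs).
    return 1

def evaluate(env, policy, episodes=5):
    """Average episode return of `policy` on `env`."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)

print(evaluate(StubEnv(), stub_policy))  # each episode earns a return of 1.0
```

The key point is that the environment is bound at trainer construction time via env=, so using a different environment just means constructing the trainer with that environment before restoring the checkpoint.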