I'm using Ray's RLlib library to train a multi-agent Trainer on the 5-in-a-row (Gomoku) game. Since this is a zero-sum environment, I have a problem of degenerate agent behavior (the first agent always wins, in 5 moves). My idea is to alternate the agents' learning rates: first train the first agent while leaving the second one random with a learning rate of zero. Once the first agent wins more than 90% of games, switch the roles, and repeat. But I can't change the learning rate after it is set in the constructor. Is this possible?
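For concreteness, the switching rule I have in mind looks like this (a plain-Python sketch; `win_rates` stands for whatever per-policy statistic I collect from the environment, and the 90% threshold is the one mentioned above):

```python
def next_trainable(win_rates, current, threshold=0.9):
    """Keep training `current` until it wins more than `threshold`
    of games, then hand control to the other policy."""
    other = "policy_1" if current == "policy_0" else "policy_0"
    return other if win_rates[current] > threshold else current

print(next_trainable({"policy_0": 0.92, "policy_1": 0.08}, "policy_0"))  # policy_1
print(next_trainable({"policy_0": 0.55, "policy_1": 0.45}, "policy_0"))  # policy_0
```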
def gen_policy(GENV, lr=0.001):
    config = {
        "model": {
            "custom_model": 'GomokuModel',
            "custom_options": {"use_symmetry": True, "reg_loss": 0},
        },
        "custom_action_dist": Categorical,
        "lr": lr
    }
    return (None, GENV.observation_space, GENV.action_space, config)
def map_fn(agent_id):
    if agent_id == 'agent_0':
        return "policy_0"
    else:
        return "policy_1"
trainer = ray.rllib.agents.a3c.A3CTrainer(env="GomokuEnv", config={
    "multiagent": {
        "policies": {"policy_0": gen_policy(GENV, lr=0.001), "policy_1": gen_policy(GENV, lr=0)},
        "policy_mapping_fn": map_fn,
    },
    "callbacks": {"on_episode_end": clb_episode_end},
})

while True:
    result = trainer.train()
    # here I want to change the learning rate of my policies based on environment statistics
I've tried adding these lines inside the while True loop:
new_config = trainer.get_config()
new_config["multiagent"]["policies"]["policy_0"] = gen_policy(GENV, lr=0.00321)
new_config["multiagent"]["policies"]["policy_1"] = gen_policy(GENV, lr=0.00175)
trainer.raw_user_config = new_config
trainer.config = new_config
It didn't help.
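One direction I'm considering instead of zeroing the learning rate: RLlib's multiagent config also accepts a `policies_to_train` list, and a policy left out of that list is simply not updated. Alternating could then be done by rebuilding the trainer with a different `policies_to_train` and carrying the learned weights over with `get_weights()`/`set_weights()`. This is only a config sketch under that assumption (`win_rate_of()` is a placeholder for my own statistic from `clb_episode_end`), not code I've verified:

```python
def build_trainer(train_ids, weights=None):
    trainer = ray.rllib.agents.a3c.A3CTrainer(env="GomokuEnv", config={
        "multiagent": {
            "policies": {"policy_0": gen_policy(GENV), "policy_1": gen_policy(GENV)},
            "policy_mapping_fn": map_fn,
            "policies_to_train": train_ids,  # only these policies get gradient updates
        },
        "callbacks": {"on_episode_end": clb_episode_end},
    })
    if weights is not None:
        trainer.set_weights(weights)  # carry learned weights across rebuilds
    return trainer

trainer = build_trainer(["policy_0"])
current = "policy_0"
while True:
    result = trainer.train()
    if win_rate_of(current, result) > 0.9:  # placeholder: my own win statistic
        weights = trainer.get_weights()
        current = "policy_1" if current == "policy_0" else "policy_0"
        trainer = build_trainer([current], weights)
```

If all I needed were a predetermined decay rather than a statistic-driven switch, I understand RLlib also accepts an `lr_schedule` list of `[timestep, value]` pairs in the policy config.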