
I am running into a small error using the DQN agent from keras-rl. I have created my own OpenAI Gym environment that outputs a NumPy array of size 1 as the observation. Yet when I call fit with my environment, I get the error:

ValueError: Error when checking input: expected flatten_3_input to have shape (1, 1) but got array with shape (1, 4)

I have used the same code (only changing the input shape to (1, 4)) on the CartPole environment with no error, so I am very confused about what the problem is here. At each step, my environment returns a tuple of the form (numpy array, float, bool, dict), the same format as CartPole; a stripped-down sketch of the environment is included after the model code below. My policy and target networks have the form:

from keras.models import Sequential
from keras.layers import Dense, Flatten

def agent(shape, actions):
    model = Sequential()
    # window_length=1 in SequentialMemory, hence the leading 1 in the input shape
    model.add(Flatten(input_shape=(1, shape)))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(actions, activation='linear'))  # one Q-value per action
    return model
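
For context, here is a stripped-down sketch of how my environment is structured. Only the observation side is shown; the real action, reward, and termination logic is omitted and the values below are placeholders:

import numpy as np
import gym
from gym import spaces

class MyEnv(gym.Env):
    # Simplified sketch: only the observation handling is shown.
    def __init__(self):
        # The observation is a single float, so the space has shape (1,)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(1,), dtype=np.float32)
        # self.action_space is defined in my real environment but omitted here

    def reset(self):
        self._obs = np.zeros(1, dtype=np.float32)
        return self._obs  # numpy array of shape (1,)

    def step(self, action):
        self._obs = np.random.rand(1).astype(np.float32)  # placeholder dynamics
        reward = 0.0
        done = False
        info = {}
        return self._obs, reward, done, info  # (numpy array, float, bool, dict)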

The following throws the error at the fit function:

from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannGumbelQPolicy

model = agent(1, len(env.action_space))
memory = SequentialMemory(limit=50000, window_length=1)
policy = BoltzmannGumbelQPolicy()

dqn = DQNAgent(model=model, policy=policy, nb_actions=len(env.action_space), memory=memory)
dqn.compile('adam', metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)
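
For comparison, this is roughly the CartPole version I mentioned above, which trains without the shape error. The driver differs slightly from my custom-environment code in that I take the sizes from CartPole's observation_space and action_space (4 observations, 2 actions) and reuse the same agent() function:

import gym

env = gym.make('CartPole-v1')
n_obs = env.observation_space.shape[0]  # 4
n_actions = env.action_space.n          # 2

model = agent(n_obs, n_actions)         # Flatten input_shape becomes (1, 4)
memory = SequentialMemory(limit=50000, window_length=1)
policy = BoltzmannGumbelQPolicy()

dqn = DQNAgent(model=model, policy=policy, nb_actions=n_actions, memory=memory)
dqn.compile('adam', metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)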

I read the answers to the similar question "Keras model: Input shape dimension error for RL agent", but I have not been able to overcome this issue. Any suggestions? Thanks!

