Questions tagged [keras-rl]

keras-rl is a Reinforcement Learning library based on Keras.

The code can be found at github.com/matthiasplappert/keras-rl.

81 questions
14 votes, 2 answers

TypeError: len is not well defined for symbolic Tensors. (activation_3/Identity:0) Please call `x.shape` rather than `len(x)` for shape information

I am trying to implement a DQL model on an OpenAI Gym game, but it's giving me the following error: TypeError: len is not well defined for symbolic Tensors. (activation_3/Identity:0) Please call x.shape rather than len(x) for shape…
vivekpadia70
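This error typically appears when the original keras-rl is run against TensorFlow 2.x, whose symbolic tensors no longer support len(). A minimal sketch of one workaround, assuming the keras-rl2 fork (which targets tf.keras) and the standard CartPole-v1 environment rather than the asker's game:

    import gym
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten
    from tensorflow.keras.optimizers import Adam
    # keras-rl2 exposes the same rl.* modules as keras-rl but is built for tf.keras
    from rl.agents.dqn import DQNAgent
    from rl.policy import EpsGreedyQPolicy
    from rl.memory import SequentialMemory

    env = gym.make("CartPole-v1")
    nb_actions = env.action_space.n

    # The leading 1 matches keras-rl's window_length=1 batching of observations.
    model = Sequential([
        Flatten(input_shape=(1,) + env.observation_space.shape),
        Dense(16, activation="relu"),
        Dense(16, activation="relu"),
        Dense(nb_actions, activation="linear"),
    ])

    agent = DQNAgent(model=model, nb_actions=nb_actions,
                     memory=SequentialMemory(limit=50000, window_length=1),
                     nb_steps_warmup=100, target_model_update=1e-2,
                     policy=EpsGreedyQPolicy())
    agent.compile(Adam(learning_rate=1e-3), metrics=["mae"])
    agent.fit(env, nb_steps=10000, visualize=False, verbose=1)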
13 votes, 1 answer

Why can't my DQN agent find the optimal policy in a non-deterministic environment?

Edit: the following also seems to be the case for FrozenLake-v0. Please note that I'm not interested in simple Q-learning, as I want to see solutions that work with continuous observation spaces. I recently created the banana_gym OpenAI environment.…
Martin Thoma
12 votes, 1 answer

How to implement custom environment in keras-rl / OpenAI GYM?

I'm a complete newbie to Reinforcement Learning and have been searching for a framework/module to easily navigate this treacherous terrain. In my search I've come across two modules, keras-rl & OpenAI Gym. I can get both of them to work on the…
Manipal King
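keras-rl can train on any environment that follows the Gym interface, so a custom task usually starts as a gym.Env subclass with an observation_space, an action_space, and reset()/step() methods. A minimal sketch, using a made-up one-dimensional task purely for illustration:

    import gym
    import numpy as np
    from gym import spaces

    class MyCustomEnv(gym.Env):
        """Toy example: drive a counter back towards zero (hypothetical task)."""

        def __init__(self):
            super().__init__()
            self.action_space = spaces.Discrete(2)  # 0 = decrement, 1 = increment
            self.observation_space = spaces.Box(low=-10.0, high=10.0,
                                                shape=(1,), dtype=np.float32)
            self.state = 0.0

        def reset(self):
            self.state = float(np.random.uniform(-5.0, 5.0))
            return np.array([self.state], dtype=np.float32)

        def step(self, action):
            self.state += 1.0 if action == 1 else -1.0
            reward = -abs(self.state)                  # closer to zero is better
            done = abs(self.state) < 0.5 or abs(self.state) > 10.0
            return np.array([self.state], dtype=np.float32), reward, done, {}

An instance of this class can then be passed to DQNAgent.fit() exactly like a built-in Gym environment.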
7 votes, 1 answer

Gym (openAI) environment actions space depends from actual state

I'm using the gym toolkit to create my own env and keras-rl to use my env within an agent. The problem is that my action space changes: it depends on the actual state. For example, I have 46 possible actions, but given a certain state only 7 are…
davide
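keras-rl has no built-in notion of a state-dependent action set, so a common workaround is to keep the full Discrete(46) space and handle invalid choices inside step(), either by penalising them or by treating them as no-ops. A rough sketch of the penalty approach; valid_actions(), transition(), compute_reward(), is_terminal() and _observe() are hypothetical helpers standing in for the asker's own logic:

    INVALID_ACTION_PENALTY = -10.0   # assumed value; scale it to your rewards

    def step(self, action):
        if action not in self.valid_actions(self.state):   # hypothetical helper
            # Invalid for this state: do nothing and return a strong negative
            # reward so the agent learns to avoid picking it here.
            return self._observe(), INVALID_ACTION_PENALTY, False, {"invalid": True}
        self.state = self.transition(self.state, action)    # hypothetical dynamics
        reward = self.compute_reward(self.state)             # hypothetical reward
        done = self.is_terminal(self.state)
        return self._observe(), reward, done, {}

An alternative is a custom keras-rl Policy that masks the Q-values of invalid actions before selecting one, but that requires the environment to expose the mask alongside the observation.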
6 votes, 1 answer

Define action values in keras-rl

I have a custom environment in keras-rl with the following configuration in the constructor: def __init__(self, data): #Declare the episode as the first episode self.episode=1 #Initialize data self.data=data #Declare low…
mad
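In keras-rl the agent only ever sees integer indices from the environment's Discrete action_space, so the mapping from an index to a concrete action value has to live inside the environment. A minimal sketch, assuming a hypothetical list of action values and placeholder reward/observation logic:

    import gym
    import numpy as np
    from gym import spaces

    class MyDataEnv(gym.Env):
        """Hypothetical environment whose discrete actions map to real values."""

        def __init__(self, data):
            super().__init__()
            self.data = data
            self.episode = 1
            # The agent picks an index 0..4; the env translates it to a value.
            self.action_values = [-1.0, -0.5, 0.0, 0.5, 1.0]   # assumed values
            self.action_space = spaces.Discrete(len(self.action_values))
            self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                                shape=(1,), dtype=np.float32)

        def reset(self):
            return np.zeros(1, dtype=np.float32)

        def step(self, action):
            value = self.action_values[action]     # index -> actual action value
            reward = -abs(value)                   # placeholder reward
            return np.zeros(1, dtype=np.float32), reward, False, {}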
6 votes, 1 answer

Python Reinforcement Learning - Tuple Observation Space

I've created a custom OpenAI Gym environment with a discrete action space and a somewhat complicated state space. The state space has been defined as a Tuple because it combines some dimensions which are continuous and others which are…
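keras-rl's stock processors and the usual Flatten-based models expect a single array per observation, so a Tuple space is normally flattened into one Box before training. A rough sketch of an observation wrapper, assuming every component of the tuple can be cast to a float array (Discrete components are passed through as raw indices here; one-hot encoding them is often a better choice for a network input):

    import gym
    import numpy as np
    from gym import spaces

    class FlattenTupleObservation(gym.ObservationWrapper):
        """Concatenate the parts of a Tuple observation into one flat Box."""

        def __init__(self, env):
            super().__init__(env)
            sizes = [int(np.prod(s.shape)) if s.shape else 1
                     for s in env.observation_space.spaces]
            self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                                shape=(sum(sizes),),
                                                dtype=np.float32)

        def observation(self, obs):
            parts = [np.asarray(part, dtype=np.float32).ravel() for part in obs]
            return np.concatenate(parts)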
6 votes, 0 answers

DQNAgent can't put batch size more than 1

When I try to train an agent with a batch_size greater than 1 it gives me an exception. Where is my issue? lr = 1e-3 window_length = 1 emb_size = 10 look_back = 6 # "Expert" (regular dqn) model architecture inp = Input(shape=(look_back,)) emb =…
Angelo
5 votes, 3 answers

TensorFlow's Print is not printing

I am trying to understand some code from a reinforcement learning algorithm. In order to do that I am trying to print the value of a tensor. I made a simple piece of code to show what I mean. import tensorflow as tf from keras import backend as K x…
JorDik
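tf.Print (TF 1.x) is an identity op with a printing side effect, so nothing is printed unless the tensor it returns is actually evaluated as part of the graph; the return value must be used, not discarded. A minimal sketch of the difference, written against the tf.compat.v1 API (in TF 2.x, tf.print runs eagerly and is usually the better choice):

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()   # emulate TF 1.x graph mode

    x = tf.constant([1.0, 2.0, 3.0])

    # Wrong: the returned tensor is thrown away, so the print op never runs.
    tf.compat.v1.Print(x, [x], message="this never shows: ")

    # Right: keep using the tensor returned by Print so it stays in the
    # computation path that actually gets executed.
    y = tf.compat.v1.Print(x, [x], message="x is: ")
    z = y * 2.0

    with tf.compat.v1.Session() as sess:
        sess.run(z)   # evaluating z forces the Print node to fire (on stderr)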
5 votes, 2 answers

Keras with Tensorflow backend - Run predict on CPU but fit on GPU

I am using keras-rl to train my network with the D-DQN algorithm. I am running my training on the GPU with the model.fit_generator() function to allow data to be sent to the GPU while it is doing backprops. I suspect the generation of data to be too…
Raphael Royer-Rivard
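Keras has no per-call device argument, but TensorFlow device scopes can pin individual operations, so a common pattern is to run the prediction under /cpu:0 while fit() keeps the GPU. A rough sketch assuming a TF 2.x-style tf.device context; whether the weights actually stay off the GPU depends on how and where the model was built, so treat this as a starting point rather than a guaranteed fix:

    import tensorflow as tf

    def predict_on_cpu(model, batch):
        # Pin this forward pass to the CPU so it does not compete for the GPU
        # that is busy running fit()/fit_generator() on the training model.
        with tf.device("/cpu:0"):
            return model.predict(batch, verbose=0)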
4 votes, 3 answers

Keras data generator predict same number of values

I have implemented a CNN-based regression model that uses a data generator to handle the huge amount of data I have. Training and evaluation work well, but there's an issue with the prediction. If for example I want to predict values from a test…
Derriese
3 votes, 2 answers

Keras-rl2 error AttributeError: 'Sequential' object has no attribute '_compile_time_distribution_strategy'

I am getting this error AttributeError: 'Sequential' object has no attribute '_compile_time_distribution_strategy' with keras-rl2, when using the below code. I have searched the whole internet but could not find a solution. import gym import…
Parv Jain
3 votes, 0 answers

MineRL Keras-RL2: Error when checking input: expected conv2d_input to have 5 dimensions, but got array with shape (1, 3)

I'm trying to train an Agent in the MineRL environment using Keras. This is my code so far: import gym import random import numpy as np from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten,…
3 votes, 2 answers

How does the dimensions work when training a keras model?

Getting: assert q_values.shape == (len(state_batch), self.nb_actions) AssertionError q_values.shape : (1, 1, 10) (len(state_batch), self.nb_actions) : (1, 10) which is from the keras-rl library of the sarsa…
Tjorriemorrie
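The extra axis in (1, 1, 10) usually means the model's output still carries keras-rl's window_length dimension: the agent feeds observations shaped (window_length, *obs_shape), and the standard keras-rl examples flatten that away at the input so the network ends in a plain (batch, nb_actions) output. A hedged sketch of that pattern, with an assumed observation shape:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten

    nb_actions = 10      # from the assertion message above
    obs_shape = (4,)     # assumed; use env.observation_space.shape in practice

    # Flatten collapses the (window_length, *obs_shape) input so the Dense
    # layers produce a 2-D (batch, nb_actions) output, which is what the
    # keras-rl assertion expects.
    model = Sequential([
        Flatten(input_shape=(1,) + obs_shape),
        Dense(32, activation="relu"),
        Dense(nb_actions, activation="linear"),
    ])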
3 votes, 1 answer

How to use keras-rl for multi agent training

I am trying to use keras-rl, but in a multi-agent environment. So I found this GitHub issue of keras-rl with an idea of using a shared environment for all agents. Unfortunately, I haven't managed to get it working. It seems that using a gym environment in…
cserpell
2 votes, 0 answers

cannot import name 'CallbackList' from 'keras.callbacks'

I am trying to implement DQN to solve an RL problem with Keras. I use a Mac and am running this code in an Anaconda Jupyter environment. I have tried searching for this error but have got no results. Code: from rl.agents.dqn import DQNAgent from…