#Given the following two example tables (both tables always have the same dimensions)
tabla_ganancias = [[5, 6, 3, 5, 5],
                   [2, 5, 3, 1, 0],
                   [2, 4, 0, 1, 1]]
tabla_asignaciones = [[140, 220, None, 80, 60],
                      [None, None, 330, None, None],
                      [None, None, 30, None, 90]]
tabla_resultado = []
rows = len(tabla_ganancias)
columns = len(tabla_ganancias[0])
#Code that creates tabla_resultado and decomposes the profits
#HERE IS THE PROBLEM
#Print the results obtained
print(tabla_resultado)
for lista in tabla_resultado: print(lista)
I want to decompose the values of tabla_ganancias that correspond to the non-empty cells (that is, the cells that are not None) of tabla_asignaciones, placing the decomposition of the profit values in the last row and the last column.
In this case, the profit values would be decomposed as follows (by decomposing I mean finding 2 integers that add up to the value):
The profit value 5 becomes 2 in the row and 3 in the column ---> 2 + 3 = 5
The profit value 6 becomes 2 in the row and 4 in the column ---> 2 + 4 = 6
The profit value 5 becomes 2 in the row and 3 in the column ---> 2 + 3 = 5
The profit value 5 becomes 2 in the row and 3 in the column ---> 2 + 3 = 5
The profit value 3 becomes 1 in the row and 2 in the column ---> 1 + 2 = 3
The profit value 0 becomes -2 in the row and 2 in the column ---> (-2) + 2 = 0
The profit value 1 becomes -2 in the row and 3 in the column ---> (-2) + 3 = 1
The goal is a resulting table that has the decomposition of the profits in its last row and its last column. Note that tabla_resultado is tabla_asignaciones extended with the row and the column of the profit decompositions.
#The decomposed values will be stored in the last row and last column of tabla_resultado.
#System of equations that must be solved to find the values needed to build the last row and the last column of tabla_resultado
# X1 + Y1 = 5
# X2 + Y1 = 6
# X4 + Y1 = 5
# X5 + Y1 = 5
# X3 + Y2 = 3
# X3 + Y3 = 0
# X5 + Y3 = 1
#tabla_resultado = [[ 140, 220, None, 80, 60, Y1],
# [None, None, 330, None, None, Y2],
# [None, None, 30, None, 90, Y3],
# [ X1, X2, X3, X4, X5, None]]
With these example values, the expected result is:
tabla_resultado = [[140, 220, None, 80, 60, 2],
                   [None, None, 330, None, None, 1],
                   [None, None, 30, None, 90, -2],
                   [3, 4, 2, 3, 3, None]]
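The system of equations above can be solved directly instead of guessing how to split each profit: every assigned cell links one X to one Y, so fixing a single unknown and propagating through the assigned cells determines all the others. A minimal sketch (the system has one degree of freedom, so Y1 = 2 is an arbitrary anchor, chosen here because it reproduces the expected table above):

```python
tabla_ganancias = [[5, 6, 3, 5, 5],
                   [2, 5, 3, 1, 0],
                   [2, 4, 0, 1, 1]]
tabla_asignaciones = [[140, 220, None, 80, 60],
                      [None, None, 330, None, None],
                      [None, None, 30, None, 90]]

rows = len(tabla_ganancias)
columns = len(tabla_ganancias[0])

Y = [None] * rows     # one value per row (goes in the last column)
X = [None] * columns  # one value per column (goes in the last row)

# Each assigned cell gives the equation X[j] + Y[i] = tabla_ganancias[i][j].
# Fix one unknown, then sweep the assigned cells until nothing changes.
Y[0] = 2  # arbitrary anchor; 2 matches the expected table above

changed = True
while changed:
    changed = False
    for i in range(rows):
        for j in range(columns):
            if tabla_asignaciones[i][j] is None:
                continue
            g = tabla_ganancias[i][j]
            if Y[i] is not None and X[j] is None:
                X[j] = g - Y[i]
                changed = True
            elif X[j] is not None and Y[i] is None:
                Y[i] = g - X[j]
                changed = True

# tabla_resultado = tabla_asignaciones plus last column Y and last row X
tabla_resultado = [fila + [Y[i]] for i, fila in enumerate(tabla_asignaciones)]
tabla_resultado.append(X + [None])

for lista in tabla_resultado:
    print(lista)
# [140, 220, None, 80, 60, 2]
# [None, None, 330, None, None, 1]
# [None, None, 30, None, 90, -2]
# [3, 4, 2, 3, 3, None]
```

If the assigned cells do not connect every row and column, some entries of X or Y would remain None; this sketch simply leaves them unset.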
That code does not work, so I tried the following instead:
#Given the following two example tables (both tables always have the same dimensions)
tabla_ganancias = [[5, 6, 3, 5, 5],
                   [2, 5, 3, 1, 0],
                   [2, 4, 0, 1, 1]]
tabla_asignaciones = [[140, 220, None, 80, 60],
                      [None, None, 330, None, None],
                      [None, None, 30, None, 90]]
tabla_resultado = []
rows = len(tabla_ganancias)
columns = len(tabla_ganancias[0])
for i in range(rows):
    tabla_resultado.append([])
    for j in range(columns):
        if tabla_asignaciones[i][j] is not None:
            value = tabla_ganancias[i][j]
            x = value // 2
            y = value - x
            tabla_resultado[i].append(x)
            if len(tabla_resultado) <= j + rows:
                tabla_resultado.append([None] * (rows + 1))
            tabla_resultado[i][j + rows] = y
        else:
            tabla_resultado[i].append(None)
# Print the results obtained
print(tabla_resultado)
for lista in tabla_resultado: print(lista)
But I get this error:
Traceback (most recent call last):
tabla_resultado[i][j+rows] = y
IndexError: list assignment index out of range
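For reference, the IndexError can be reproduced in isolation: the rows appended as [None] * (rows + 1) have only rows + 1 = 4 slots, while the index j + rows can reach columns - 1 + rows = 7 (this is my reading of the traceback; the variable names below are only illustrative):

```python
rows, columns = 3, 5
fila = [None] * (rows + 1)  # 4 slots, like the rows appended in the loop
j = columns - 1             # on the last column, j + rows == 7
try:
    fila[j + rows] = 99     # index 7 into a 4-element list
except IndexError as e:
    print(e)                # list assignment index out of range
```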