I have a Python process that does some heavy computation with Pandas and similar libraries. It isn't my code, so I don't have much insight into what it does internally.
The situation is this: the code used to run perfectly fine on a server with 8 GB of RAM, maxing out all the resources available.
We moved this code to Kubernetes and we can't get it to run: even with the memory limit raised to 40 GB, the process greedily consumes whatever it can until it exceeds the container limit and gets killed by Kubernetes.
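For reference, the container's `resources` block looks roughly like this (a simplified sketch with illustrative values, not the exact manifest; the 40Gi figure is the highest limit we tried):

```yaml
# Illustrative sketch of the current pod resources, not the exact manifest.
# The process keeps growing until it crosses the memory limit and is OOM-killed.
resources:
  requests:
    memory: "8Gi"    # roughly what the old server had
    cpu: "2"         # illustrative value
  limits:
    memory: "40Gi"   # raised step by step up to 40Gi, still gets killed
    cpu: "4"         # illustrative value
```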
I know this code is probably suboptimal and needs rework in its own right.
However, my question is how to get Docker on Kubernetes to mimic what Linux did on the server: give the process as many resources as it needs without killing it?