
Simulating a random walk in pandas is easy enough (this link gives a guide on one way to do so).
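
For reference, here is a minimal sketch of generating a set of walks to optimize over (the helper name, shapes, and parameters are assumptions for illustration only; any collection of 1-D walks would work):

import numpy as np

def SimulateRandomWalks(NumWalks=100, NumSteps=250, Seed=0):
    #Illustrative helper, not part of the original setup: each walk is the
    #cumulative sum of i.i.d. standard normal steps, returned as a plain
    #1-D numpy array so that R[-1] indexing works in the code below
    rng = np.random.default_rng(Seed)
    return [np.cumsum(rng.standard_normal(NumSteps)) for _ in range(NumWalks)]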

I am interested in taking things a step further, finding optimal entry/exit points for a set of random walks. The easiest way to explain this is to think of it like buying stocks. For a provided random walk, our "Entry Point" is our buy price - that is, if the random walk reaches a value less than or equal to our entry, we will "Buy" there. (This value is denoted B, for ease).

We are also able to select 2 exit points that work fairly similarly (denoted E0 and E1, for ease, with E0 < B < E1). If we have already bought, and the random walk reaches a value less than or equal to E0, we will "Sell" there. If we have bought and the random walk reaches a value greater than or equal to E1, we will "Sell" there.

The benefit gained from each random walk is:

  1. 0 if we never buy
  2. (The last value in the walk) - B if we buy but do not sell
  3. E0 - B if we sell at E0
  4. E1 - B if we sell at E1
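
For example, with E0 = -1, B = 0 and E1 = 2, a walk that dips to -0.3 (so we buy, since -0.3 <= B) and later rises to 2.4 (so we sell, since 2.4 >= E1) gives a benefit of E1 - B = 2.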

I am looking to determine the values E0, B, and E1 that maximize the total benefit gained from a set of random walks. My current plan for approaching this with scipy is as follows:

from scipy.optimize import minimize

def BenefitFromOneRandomWalk(R, E0, B, E1):
    #Returns the benefit gained with the entry/exit points provided
    #This works by going through the series one value at a time
    #I question whether it could be made more efficient
    Bought = False
    for price in R:
        if (not Bought) and price <= B:
            Bought = True
        elif Bought and price <= E0:
            return E0 - B
        elif Bought and price >= E1:
            return E1 - B
    LastValue = R[-1]
    if Bought:
        return LastValue - B
    return 0

def TotalBenefits(RandomWalks, ExitPoints):
    #Return the total benefit of all random walks, using the same entry/exits
    E0, B, E1 = ExitPoints
    return sum([BenefitFromOneRandomWalk(R, E0, B, E1) for R in RandomWalks])

def FindOptimalEntryAndExitPoints(RandomWalks):
    #Use scipy's minimize feature to estimate optimal entry/exit points
    x0 = [1, 1, 1]
    
    def ScoreExitPoints(e):
        return -1*TotalBenefits(RandomWalks, e)
    
    res = minimize(ScoreExitPoints, x0, method='Nelder-Mead', tol=1e-2)
    return res.x
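
For completeness, a minimal usage sketch, assuming the SimulateRandomWalks helper sketched above (the specific numbers are arbitrary):

if __name__ == "__main__":
    #Hypothetical usage: generate some walks, fit the points, report the result
    Walks = SimulateRandomWalks(NumWalks=100, NumSteps=250, Seed=0)
    E0, B, E1 = FindOptimalEntryAndExitPoints(Walks)
    print("Entry/exit points:", E0, B, E1)
    print("Total benefit:", TotalBenefits(Walks, [E0, B, E1]))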

Is there a more efficient way to go about this, or is this the optimal solution?

EDIT: For the sake of simplicity, we can buy and sell only once each per random walk.

  • Are you sure that `BenefitFromOneRandomWalk` works as intended? Suppose that the random walk is [1, 2, 1, 2, 1, 2] and `ExitPoints = [0, 1, 2]`. The function returns 1, as expected based on the code. However, let's say that you start with one dollar and execute `[buy, sell, buy, sell, buy, sell]`. The amount of money you have after each step is `[0, 2, 0, 4, 0, 8]`, so I would expect the answer to be 8. – hilberts_drinking_problem Jan 02 '22 at 07:55
  • Ah! Forgot to mention - for simplicity, we assume that we can only buy and sell a maximum of once each per random walk. I have edited the original question to note this – Asher Silverglade Jan 03 '22 at 04:36
