
Note: I originally had chi = 1, but I've changed it to chi = 0 (this is the simpler case).

My equations f(x,y) and g(x,y) come from the following code:

import numpy as np
from pylab import *

def stress(X,Y):
    chi = 0
    F = 1
    a = 1
    c = (1.0*a)/(np.sqrt(np.power(X,2)+np.power(Y,2))*1.0)
    A = 0.5*(1 - c**2. + (1 - 4*c**2 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    B = 0.5*(1 - c**2. - (1 - 4*c**2 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    C = 0.5*(1 + c**2. - (1 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    D = 0.5*(1 + c**2. + (1 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    E = 0.5*((1 + 2*c**2. - 3*c**4.)*np.sin(2*np.arctan(Y/X)))

    f = 1.0*F*c**2 + (A-1.0*chi*B) # Radial stress
    g = -1.*F*c**2 + (C - 1.0*chi*D) # Tangential stress
    return f,g

def f(X,Y):
    return stress(X,Y)[0]
def g(X,Y):
    return stress(X,Y)[1]

def find_points(X_max,Y_max,X_steps,Y_steps):
    Xs = np.linspace(-X_max,X_max,X_steps)
    Ys = np.linspace(-Y_max,Y_max,Y_steps)

    radials = f(Xs,Ys)
    tangentials = g(Xs,Ys)

    return radials, tangentials

find_points(10,10,100,100)

This returns arrays of values for f and g.

I would like to find all of the (X,Y) ordered pairs where both f(X,Y) = 0 and g(X,Y) = 0. I was looking at different scipy packages and I couldn't find anything that seem to work for a multi-variable function like this. Also, my answers right now are being returned in arrays, so could I use something like np.where()? The problem with this is that because I'm storing exact values, I don't necessarily ever see f(x,y) or g(x,y) explicitly equal zero. My end goal is to plot these points. Also, does what I did so far make sense with the Xs and Ys as linspaces over these ranges?
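To make the grid idea concrete: exact zeros essentially never show up in floating point, so instead of `np.where(f == 0)` one can test against a tolerance with `np.isclose`, and the linspaces should be combined into a 2-D grid with `np.meshgrid` (1-D linspaces passed directly only sample pairs along a line). A minimal sketch of this, reusing the stress equations above with chi = 0; the tolerance `tol` is an arbitrary choice:

```python
import numpy as np

def stress(X, Y):
    # same equations as in the question, with chi = 0, F = a = 1
    chi, F, a = 0, 1, 1
    c = a / np.sqrt(X**2 + Y**2)
    t2 = 2 * np.arctan(Y / X)
    A = 0.5 * (1 - c**2 + (1 - 4*c**2 + 3*c**4) * np.cos(t2))
    B = 0.5 * (1 - c**2 - (1 - 4*c**2 + 3*c**4) * np.cos(t2))
    C = 0.5 * (1 + c**2 - (1 + 3*c**4) * np.cos(t2))
    D = 0.5 * (1 + c**2 + (1 + 3*c**4) * np.cos(t2))
    f = F * c**2 + (A - chi * B)        # radial stress
    g = -F * c**2 + (C - chi * D)       # tangential stress
    return f, g

v = np.linspace(-10, 10, 100)           # 100 points: 0 itself is not on the grid
X, Y = np.meshgrid(v, v)                # full 2-D grid, not just diagonal pairs
f, g = stress(X, Y)

tol = 1e-2                              # arbitrary; shrink it with a finer grid
mask = np.isclose(f, 0, atol=tol) & np.isclose(g, 0, atol=tol)
pts = np.column_stack([X[mask], Y[mask]])   # (X, Y) pairs where both are ~0
```

`pts` can then go straight into a scatter plot. Keep in mind this only flags grid points *near* a zero, not the zeros themselves.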

Thank you

Update: I went back and wrote up a little script using a guideline I found in a somewhat similar question (see this link). I used scipy.optimize.

import numpy as np
from scipy import optimize

def equations(p):
    X, Y = p
    a = 1
    F = 1
    chi = 0
    c = (1.0*a)/(np.sqrt(np.power(X,2)+np.power(Y,2))*1.0)
    A = 0.5*(1 - c**2. + (1 - 4*c**2 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    B = 0.5*(1 - c**2. - (1 - 4*c**2 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    C = 0.5*(1 + c**2. - (1 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    D = 0.5*(1 + c**2. + (1 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))

    f = 1.0*F*c**2 + (A-1.0*chi*B) # Radial stress
    g = -1.*F*c**2 + (C - 1.0*chi*D) # Tangential stress
    return (f,g)

X, Y =  optimize.fsolve(equations, (1, 1))

print(equations((X, Y)))

This requires me to put in different initial guesses to get different (X,Y) roots. It would be awesome if somehow I could solve all of the solutions. Also, the answers I'm getting seem kind of off. Thanks again.
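On the "answers seem kind of off" point: `fsolve` can report whether it actually converged if you ask for diagnostics with `full_output=True`, which helps separate a genuine root from a failed search. A small sketch, where the toy system at the bottom is only a hypothetical stand-in for `equations()`:

```python
import numpy as np
from scipy import optimize

def solve_checked(func, guess):
    # full_output=True returns (x, infodict, ier, message); ier == 1 means converged
    x, info, ier, msg = optimize.fsolve(func, guess, full_output=True)
    residual = np.abs(info['fvec']).max()   # how close to zero we actually got
    return x, ier == 1, residual

# toy system: x**2 = 2 and y = x, with a root at (sqrt(2), sqrt(2))
x, converged, residual = solve_checked(lambda p: [p[0]**2 - 2, p[1] - p[0]], (1, 1))
```

Only solutions with `converged` True and a tiny `residual` are worth keeping.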

Note: The original equations, before I converted them to Cartesian coordinates, were as follows:

def stress(R,theta):
    chi = 0
    F = 1
    a = 1
    c = (1.0*a)/(R*1.0)
    A = 0.5*(1 - c**2. + (1 - 4*c**2 + 3*c**4.)*np.cos(2*theta))
    B = 0.5*(1 - c**2. - (1 - 4*c**2 + 3*c**4.)*np.cos(2*theta))
    C = 0.5*(1 + c**2. - (1 + 3*c**4.)*np.cos(2*theta))
    D = 0.5*(1 + c**2. + (1 + 3*c**4.)*np.cos(2*theta))
    E = 0.5*((1 + 2*c**2. - 3*c**4.)*np.sin(2*theta))

    f = 1.0*F*c**2. + (A-1.0*chi*B) # Radial stress
    g = -1.0*F*c**2. + (C-1.0*chi*D) # Tangential stress
    return f,g

Maybe this will help sort out some of the confusion with the arctan(Y/X) aspect of the equation.


1 Answer

As @Azad has already noted in a comment, you probably need scipy.optimize to do the bulk of the work for you. Specifically, either scipy.optimize.fsolve or scipy.optimize.root. As the latter seems more general, I'll demonstrate that. Since it can use multiple methods, have a look at the help.

Both of these functions are capable of finding roots for functions mapping from R^n to R^m, i.e. multivariate vector-valued functions. If you consider your stress function, that's exactly what you have: it maps from R^2 to R^2. For clarity, you could even define it as

def stress2(Rvec):
    X,Y=Rvec
    chi = 1
    F = 1
    a = 1
    c = (1.0*a)/(np.sqrt(np.power(X,2)+np.power(Y,2))*1.0)
    A = 0.5*(1 - c**2. + (1 - 4*c**2 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    B = 0.5*(1 - c**2. - (1 - 4*c**2 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    C = 0.5*(1 + c**2. - (1 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    D = 0.5*(1 + c**2. + (1 + 3*c**4.)*np.cos(2*np.arctan(Y/X)))
    E = 0.5*((1 + 2*c**2. - 3*c**4.)*np.sin(2*np.arctan(Y/X)))
    f = 1.0*F*c**2 + (A-1.0*chi*B) # Radial stress
    g = -1.*F*c**2 + (C - 1.0*chi*D) # Tangential stress
    return f,g

Now, with this definition you can simply call

import scipy.optimize as opt
sol=opt.root(stress2,[0.5,0.5])

which will try to find a zero starting from [0.5,0.5]. Note that a root of the vector-valued function is exactly where both of its components are zero, which is what you're after.

The returned OptimizeResult looks like this:

In [224]: sol
Out[224]: 
  status: 1
 success: True
     qtf: array([  2.94481987e-09,   4.76366933e-25])
    nfev: 47
       r: array([ -7.62669534e-06,   7.62669532e-06,   2.16965211e-21])
     fun: array([  2.25125258e-10,  -2.25125258e-10])
       x: array([ 167337.87789902,  167337.87786433])
 message: 'The solution converged.'
    fjac: array([[-0.70710678,  0.70710678],
       [ 0.70710678,  0.70710678]])

It has a bunch of information. First of all, sol.status will tell you whether it converged successfully. This is the most important output: your root, and the possibility of finding it, depend very sensitively on your starting point. If you try a starting point where X=0 or Y=0 in your example, you'll see that it has difficulty finding a root.

If you do have a root, sol.x will tell you the coordinates, and sol.fun will tell you the value of your function (near 0 if sol.status==1).

Now, as you also noticed, each call will find at most a single root. To find multiple roots, you can't avoid searching for them. You can do this by going over an X,Y grid of your choice, starting root/fsolve from there, and checking whether the search succeeded. If it did, store the values for post-processing.
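That grid-of-starting-points loop can be sketched roughly as follows; the tolerances and the de-duplication radius are arbitrary choices, and the toy system at the bottom is only a hypothetical stand-in for stress2:

```python
import numpy as np
from scipy import optimize

def find_roots(func, grid_vals, dedup_tol=1e-3):
    """Start optimize.root from every grid point; keep unique converged roots."""
    roots = []
    for x0 in grid_vals:
        for y0 in grid_vals:
            sol = optimize.root(func, [x0, y0])
            if sol.success and np.allclose(sol.fun, 0, atol=1e-8):
                # skip near-duplicates of roots we already found
                if not any(np.allclose(sol.x, r, atol=dedup_tol) for r in roots):
                    roots.append(sol.x)
    return np.array(roots)

# toy demo: unit circle intersected with the line y = x -> two isolated roots
roots = find_roots(lambda p: [p[0]**2 + p[1]**2 - 1, p[0] - p[1]],
                   np.linspace(-2, 2, 5))
```

This brute-force sweep only works well when the roots are isolated; as discussed below, that assumption is exactly what breaks down for this function.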

Unfortunately, finding the zeros of a nonlinear multivariate function is far from easy, so you have to get your hands dirty sooner or later.

Update

You're in for some trouble. Consider:

v=np.linspace(-10,10,100)
X,Y=np.meshgrid(v,v)

import matplotlib.pyplot as plt

fig = plt.figure()
hc=plt.contourf(X,Y,stress2([X,Y])[0].clip(-1,1),levels=np.linspace(-1,1,20))
plt.contour(X,Y,stress2([X,Y])[0].clip(-1,1),levels=[0],colors='r')
plt.colorbar(hc)

and the same for the other function. Here's what the x and y components of your function look like, respectively:

[figure: x component] [figure: y component]

They both have zeros along hyperbola-like curves, which seem to be identical. This figure strongly suggests that there is a line of points where your function is zero: both components at once. This is as bad as it gets for numerical root-finding algorithms, as there are no clear-cut (isolated) zeroes.

I suggest checking your function on paper for the case of X==Y, you might indeed see that your function disappears there, at least asymptotically.
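A quick numerical check of that asymptotic behaviour along the diagonal, using the chi = 1 version of stress2 from above: on the line X == Y we have cos(2*theta) = cos(pi/2) = 0, so both components reduce to approximately ±c² = ±1/(2t²), which decays to zero:

```python
import numpy as np

def stress2(Rvec):
    # chi = 1 version, as in the answer above
    X, Y = Rvec
    chi, F, a = 1, 1, 1
    c = a / np.sqrt(X**2 + Y**2)
    t2 = 2 * np.arctan(Y / X)
    A = 0.5 * (1 - c**2 + (1 - 4*c**2 + 3*c**4) * np.cos(t2))
    B = 0.5 * (1 - c**2 - (1 - 4*c**2 + 3*c**4) * np.cos(t2))
    C = 0.5 * (1 + c**2 - (1 + 3*c**4) * np.cos(t2))
    D = 0.5 * (1 + c**2 + (1 + 3*c**4) * np.cos(t2))
    return F*c**2 + (A - chi*B), -F*c**2 + (C - chi*D)

for t in (1.0, 3.0, 10.0, 100.0):
    f, g = stress2((t, t))
    print(t, f, g)   # f ~ 1/(2*t**2), g ~ -1/(2*t**2): both decay to zero
```

This reproduces the values from the comments (f = 0.5 at X = Y = 1, f ≈ 0.056 at X = Y = 3) and shows why the root-finder wanders off to huge X = Y coordinates.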

Update2

You added the original, polar form of your function. While I can't see where you went wrong (apart from using np.arctan instead of np.arctan2, but that doesn't seem to fix the problem), I tried plotting your polar function:

def stress_polar(Rvec):
    R,theta=Rvec
    chi = 0
    F = 1
    a = 1
    c = (1.0*a)/(R*1.0)                                   
    A = 0.5*(1 - c**2. + (1 - 4*c**2 + 3*c**4.)*np.cos(2*theta))
    B = 0.5*(1 - c**2. - (1 - 4*c**2 + 3*c**4.)*np.cos(2*theta))
    C = 0.5*(1 + c**2. - (1 + 3*c**4.)*np.cos(2*theta))
    D = 0.5*(1 + c**2. + (1 + 3*c**4.)*np.cos(2*theta))
    E = 0.5*((1 + 2*c**2. - 3*c**4.)*np.sin(2*theta))
    f = 1.0*F*c**2. + (A-1.0*chi*B)
    g = -1.0*F*c**2. + (C-1.0*chi*D)
    return f,g

v1=np.linspace(0.01,10,100)
v2=np.linspace(-np.pi,np.pi,100)
R,theta=np.meshgrid(v1,v2)

fig = plt.figure()
ax=plt.subplot(111, polar=True)
hc=plt.contourf(theta,R,stress_polar([R,theta])[0].clip(-1,1),levels=np.linspace(-1,1,20))
plt.contour(theta,R,stress_polar([R,theta])[0].clip(-1,1),levels=[0],colors='r')
plt.colorbar(hc)

and the same for the tangential component. Note that the polar plot needs to get theta first, then R. The result:

[figure: radial component] [figure: tangential component]

This shows a quite different picture, with finite support for the zeros of the radial component. Now, I've never used polar plots in matplotlib before, so it's equally possible that I messed something up during plotting. But it might be worth looking at the A,B,C,D parameters that are computed by your polar and Cartesian functions, to make sure they compute the same thing.
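One way to run that consistency check numerically: evaluate both forms at the same physical point with the same chi. This is a sketch with chi = 0 and np.arctan2 for the angle (as suggested above); the function names are mine, not from the original code:

```python
import numpy as np

def stress_cart(X, Y, chi=0):
    c = 1.0 / np.sqrt(X**2 + Y**2)                 # a = 1
    t2 = 2 * np.arctan2(Y, X)                      # quadrant-safe angle
    A = 0.5*(1 - c**2 + (1 - 4*c**2 + 3*c**4)*np.cos(t2))
    B = 0.5*(1 - c**2 - (1 - 4*c**2 + 3*c**4)*np.cos(t2))
    C = 0.5*(1 + c**2 - (1 + 3*c**4)*np.cos(t2))
    D = 0.5*(1 + c**2 + (1 + 3*c**4)*np.cos(t2))
    return c**2 + (A - chi*B), -c**2 + (C - chi*D)

def stress_pol(R, theta, chi=0):
    c = 1.0 / R
    A = 0.5*(1 - c**2 + (1 - 4*c**2 + 3*c**4)*np.cos(2*theta))
    B = 0.5*(1 - c**2 - (1 - 4*c**2 + 3*c**4)*np.cos(2*theta))
    C = 0.5*(1 + c**2 - (1 + 3*c**4)*np.cos(2*theta))
    D = 0.5*(1 + c**2 + (1 + 3*c**4)*np.cos(2*theta))
    return c**2 + (A - chi*B), -c**2 + (C - chi*D)

R, theta = 3.0, 2.5                                # second quadrant on purpose
X, Y = R*np.cos(theta), R*np.sin(theta)
print(stress_cart(X, Y), stress_pol(R, theta))     # should agree
```

If the two print the same numbers over a range of (R, theta) points, the coordinate transformation is fine and any remaining difference between the plots comes from the parameters (e.g. the changed chi).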

  • Thanks! This is a handy little bit of code. My only concern is sol.x. Does that not seem a little suspect that the coordinate pair for this example is approx. (167337.8779,167337.8779)? These values seem way too large also. Maybe my intuition is wrong though. @AndrasDeak – Ned Oct 28 '15 at 22:28
  • @Ned Well, yeah, that's weird, but it also depends on your function. Check out a lot of starting points. That `arctan(Y/X)` by itself seems a bit difficult for me. Plus all those trigonometric functions are periodic, possibly creating a bunch of roots. – Andras Deak -- Слава Україні Oct 28 '15 at 22:30
  • That is really bad news. I included the original forms of the equations as functions of (R,theta). Hopefully this will clarify that arctan(Y/X) part as well as some other aspects of it. Did I convert to Cartesian coordinates correctly? @AndrasDeak – Ned Oct 28 '15 at 23:52
  • I ran some numbers and you seem to be correct. After a certain point, when X==Y, f = g = 0. At X = 1 Y =1, f = 0.5 and g = -0.5. By X = 3, Y = 3; f = 0.056 and g = -0.056. They begin to converge very quickly to zero after this. @Andras – Ned Oct 29 '15 at 00:31
  • @Ned I updated my answer. 1. I screwed up the original plots, so the second component has a bit different zeros as I thought. But the asymptotic behaviour you found is still the same. 2. I added polar plots with your polar function. If I didn't screw *this* up, then there is indeed a problem with your transformation, but I can't find what it is... Anyway, use `arctan2(y,x)` instead of `arctan(y/x)` in such cases, that's more safe and general. – Andras Deak -- Слава Україні Oct 29 '15 at 09:24
  • @Ned I just noticed that you changed `chi` along the way. Bad idea. This probably explains the difference between the two sets of plots, but I don't have time right now to make sure. – Andras Deak -- Слава Україні Oct 29 '15 at 12:28
  • I apologize for not making the switch on chi more explicit. This change definitely accounts for the difference between the two plots that you produced. Your contour plots are amazing. I really appreciate all of your help. You've helped me out a ton. I wish that I could upvote your answer but apparently I need a higher reputation before my upvote goes into effect. Thanks again @Andras – Ned Oct 29 '15 at 17:17
  • @Ned It's OK, neither of us were careful enough:) I'm glad I could help. You can still mark my answer as accepted if you feel that it solved your problem. – Andras Deak -- Слава Україні Oct 29 '15 at 17:24