
I'm doing some research on avalanche photodiodes and I need to define the breakdown voltage.

The data: the Serial_number column identifies a particular device.

Serial_number Amplification Voltage Dark_current
912009913 1.00252 24.9681 5.9e-05
912009913 1.00452 29.9591 -0.002713
912009913 1.00537 34.9494 -0.018948
912009913 1.0056 44.9372 -0.023865
912009913 1.00683 49.9329 -0.032359
912009913 1.0069 54.9625 -0.039507
912009913 1.00847 59.9639 -0.050374
912009913 1.00935 64.9641 -0.052635
912009913 1.01157 69.965 -0.061749
912009913 1.01418 74.9663 -0.062348
912009913 1.01914 79.969 -0.072459
912009913 1.0247 84.9719 -0.082766
912009913 1.02919 89.974 -0.076758
912009913 1.03511 94.9752 -0.064063
912009913 1.04545 99.9759 -0.071779
912009913 1.07362 109.974 -0.090566
912009913 1.11549 119.969 -0.10307
912009913 1.17123 129.96 -0.104257
912009913 1.25019 139.96 -0.10766
912009913 1.36276 149.963 -0.140744
912009913 1.5104 159.958 -0.155636
912009913 1.69862 169.959 -0.140121
912009913 1.9518 179.963 -0.158292
912009913 2.26756 189.957 -0.168324
912009913 2.66278 199.97 -0.185998
912009913 3.14247 209.971 -0.195933
912009913 3.73163 219.971 -0.205238
912009913 4.46152 229.973 -0.221267
912009913 5.36262 239.966 -0.209433
912009913 6.49514 249.962 -0.227462
912009913 7.9227 259.971 -0.231201
912009913 9.73803 269.971 -0.255691
912009913 12.0663 279.97 -0.269587
912009913 15.0943 289.968 -0.275706
912009913 19.1004 299.959 -0.26576
912009913 20.0563 301.968 -0.280931
912009913 21.0672 303.967 -0.28487
912009913 22.142 305.966 -0.278539
912009913 23.2867 307.965 -0.279225
912009913 24.5037 309.955 -0.302595
912009913 25.8102 311.963 -0.28918
912009913 27.2024 313.963 -0.279895
912009913 28.6916 315.961 -0.280352
912009913 30.2968 317.962 -0.288305
912009913 32.0181 319.956 -0.27916
912009913 33.8775 321.951 -0.29779
912009913 35.8937 323.961 -0.294303
912009913 38.0569 325.962 -0.291265
912009913 40.4069 327.963 -0.297522
912009913 42.9713 329.965 -0.2964
912009913 45.7766 331.966 -0.296188
912009913 48.8312 333.959 -0.284309
912009913 52.2068 335.97 -0.281586
912009913 55.916 337.972 -0.279796
912009913 60.0356 339.973 -0.294777
912009913 64.6109 341.974 -0.267339
912009913 69.7152 343.967 -0.272938
912009913 75.4698 345.97 -0.26193
912009913 82.0003 347.978 -0.271638
912009913 89.4222 349.98 -0.265017
912009913 97.9493 351.98 -0.240706
912009913 107.807 353.971 -0.218123
912009913 119.441 355.983 -0.227814
912009913 133.2 357.983 -0.19964
912009913 149.796 359.983 -0.168064
912009913 170.113 361.975 -0.136797
912009913 195.89 363.989 -0.113058
912009913 229.058 365.989 -0.060668
912009913 273.481 367.984 0.016261
912009913 335.96 369.979 0.115103
912009913 431.682 371.992 0.248178
912009913 593.091 373.994 0.531031
912009913 918.112 375.985 1.06569
912009913 1903.74 377.999 2.94136

914010064 1.00316 19.9797 -0.054451
914010064 1.00512 24.9712 -0.075451
914010064 1.0053 44.9491 -0.068426
914010064 1.00544 49.9373 -0.074604
914010064 1.00545 29.9664 -0.074174
914010064 1.00593 34.9645 -0.07002
914010064 1.00644 54.96 -0.086103
914010064 1.00818 59.9692 -0.091332
914010064 1.01147 64.9675 -0.118275
914010064 1.0121 69.9659 -0.102073
914010064 1.01506 74.9644 -0.11037
914010064 1.01839 79.9636 -0.119529
914010064 1.02517 84.9637 -0.129388
914010064 1.02982 89.9563 -0.143007
914010064 1.03963 94.9591 -0.146575
914010064 1.04867 99.9682 -0.137368
914010064 1.07856 109.972 -0.15596
914010064 1.12018 119.975 -0.172676
914010064 1.17905 129.974 -0.183283
914010064 1.25973 139.961 -0.205805
914010064 1.36789 149.951 -0.208709
914010064 1.51643 159.947 -0.217745
914010064 1.71406 169.936 -0.235312
914010064 1.97106 179.931 -0.248335
914010064 2.29885 189.934 -0.252967
914010064 2.70789 199.941 -0.271414
914010064 3.20953 209.943 -0.286104
914010064 3.83129 219.962 -0.289541
914010064 4.60241 229.968 -0.303555
914010064 5.56253 239.972 -0.314163
914010064 6.76895 249.975 -0.323764
914010064 8.29853 259.979 -0.335809
914010064 10.2513 269.98 -0.339116
914010064 12.7751 279.971 -0.367378
914010064 16.0853 289.982 -0.362098
914010064 20.509 299.983 -0.367966
914010064 21.5682 301.983 -0.374602
914010064 22.6875 303.974 -0.383174
914010064 23.8892 305.984 -0.36848
914010064 25.1634 307.984 -0.370073
914010064 26.5296 309.981 -0.389497
914010064 27.9885 311.974 -0.390006
914010064 29.5618 313.984 -0.370569
914010064 31.2473 315.984 -0.369184
914010064 33.0625 317.979 -0.388763
914010064 35.0332 319.973 -0.373506
914010064 37.1546 321.982 -0.365102
914010064 39.4563 323.982 -0.361706
914010064 41.9586 325.978 -0.367868
914010064 44.6709 327.97 -0.36304
914010064 47.6658 329.979 -0.352983
914010064 50.937 331.979 -0.342905
914010064 54.535 333.978 -0.34633
914010064 58.5001 335.968 -0.336943
914010064 62.9498 337.976 -0.316943
914010064 67.8918 339.976 -0.301211
914010064 73.4357 341.975 -0.300109
914010064 79.6783 343.966 -0.294855
914010064 86.7887 345.966 -0.275604
914010064 94.9703 347.973 -0.249706
914010064 104.443 349.972 -0.227924
914010064 115.548 351.971 -0.219019
914010064 128.732 353.964 -0.174815
914010064 144.598 355.962 -0.134898
914010064 164.121 357.971 -0.093419
914010064 188.535 359.971 -0.063305
914010064 219.921 361.972 0.00031
914010064 261.853 363.97 0.096817
914010064 320.45 365.964 0.22806
914010064 408.338 367.976 0.43456
914010064 551.935 369.977 0.802936
914010064 829.197 371.979 1.59034
914010064 1579.26 373.975 4.387
914010064 2808.82 375.981 33267.2

1609017458 1.00013 19.9997 -0.008105
1609017458 1.00121 44.9994 0.000364
1609017458 1.00135 24.9994 -0.014093
1609017458 1.00577 65.001 -0.007834
1609017458 1.00769 70.001 -8.2e-05
1609017458 1.01063 75.0009 0.005057
1609017458 1.01143 80.0019 0.041253
1609017458 1.01985 85 0.008452
1609017458 1.02303 90.0023 0.029542
1609017458 1.03362 95.0003 0.019085
1609017458 1.04082 100.004 0.034151
1609017458 1.06696 110 0.048276
1609017458 1.10689 120.002 0.048982
1609017458 1.16101 130 0.060167
1609017458 1.23641 140.002 0.081061
1609017458 1.33861 150 0.081057
1609017458 1.47412 160 0.087024
1609017458 1.65754 169.999 0.093757
1609017458 1.89164 180 0.096188
1609017458 2.18485 190 0.113831
1609017458 2.54851 200 0.140305
1609017458 2.98959 209.999 0.14579
1609017458 3.53436 220 0.157321
1609017458 4.19954 229.999 0.178926
1609017458 5.02997 240 0.17251
1609017458 6.05716 250 0.199333
1609017458 7.34896 260 0.193716
1609017458 8.97473 270 0.2079
1609017458 11.0445 279.999 0.238844
1609017458 13.7228 289.999 0.234042
1609017458 17.225 300 0.235281
1609017458 18.0497 301.999 0.234139
1609017458 18.9231 303.999 0.254458
1609017458 19.8577 306 0.241071
1609017458 20.843 308.001 0.237839
1609017458 21.8913 310 0.247501
1609017458 23.0061 311.999 0.25374
1609017458 24.2006 314 0.248418
1609017458 25.4699 316 0.256976
1609017458 26.8169 318 0.267824
1609017458 28.2716 320 0.270051
1609017458 29.8326 322 0.267358
1609017458 31.5056 324 0.26475
1609017458 33.3076 326 0.275358
1609017458 35.2411 328.001 0.284069
1609017458 37.3363 330 0.280811
1609017458 39.5941 332 0.294164
1609017458 42.0531 334 0.282169
1609017458 44.73 335.999 0.288543
1609017458 47.6448 338.001 0.279577
1609017458 50.8315 340 0.317149
1609017458 54.3507 342 0.30047
1609017458 58.2129 344 0.311864
1609017458 62.4813 346 0.325131
1609017458 67.243 348 0.312699
1609017458 72.5607 350 0.325698
1609017458 78.5519 352 0.328251
1609017458 85.3285 354 0.343606
1609017458 93.0406 356 0.330998
1609017458 101.932 358 0.344761
1609017458 112.235 360 0.371229
1609017458 124.325 362 0.374378
1609017458 138.755 364 0.397073
1609017458 156.171 366 0.409378
1609017458 177.628 368 0.430561
1609017458 204.796 370 0.430657
1609017458 240.12 372 0.483052
1609017458 287.956 374.001 0.520612
1609017458 356.342 376 0.565771
1609017458 461.913 378 0.63563
1609017458 646.269 380.001 0.775906
1609017458 1039.49 382 1.07535
1609017458 2324.88 384.001 2.13433

Here is a double-logarithmic plot of the data:

[Figure: double-logarithmic plot of the data with a log fit]

Does the inflection point on the steepest rising edge indicate the breakdown voltage, i.e. the point where the curve becomes infinite?

update:

I'm trying to implement mikuszefski's advice: I now keep only the data points with Amplification > 130:

[Figure: log-log plot of the data points with Amplification > 130]

Now I need the best fit function, right? Should this be (e^c)^(x0-x)^(-b)?
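As a sanity check of that idea, here is a minimal sketch (not the actual analysis) of fitting the power-law divergence form `exp(c) * (x0 - x)**(-b)` suggested in the comments, using `scipy.optimize.curve_fit` on a log scale. The data below are synthetic and all parameter values are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_divergence(x, c, b, x0):
    # log of exp(c) * (x0 - x)**(-b); fitting on a log scale is better conditioned
    return c - b * np.log(x0 - x)

# synthetic stand-in for the Amplification > 130 tail (made-up parameters)
x = np.linspace(358.0, 376.0, 12)
y = np.exp(6.0) * (378.0 - x) ** (-1.5)

# keep x0 above the largest measured voltage so (x0 - x) stays positive
popt, _ = curve_fit(log_divergence, x, np.log(y),
                    p0=(1.0, 1.0, x.max() + 2.0),
                    bounds=([-10.0, 0.1, x.max() + 1e-6], [20.0, 5.0, 400.0]))
c_fit, b_fit, x0_fit = popt
```

With noise-free synthetic data the fit recovers the generating parameters; on real data the quality of the recovered x0 depends on how well the power-law assumption holds near the divergence.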

Ben
  • Are these values from a simulation or measurements? Is the point at 15 a value or part of a rudimentary legend? – Lutz Lehmann Oct 05 '17 at 08:08
  • @LutzL These are real measurements. The point at 15 probably just means that there should have been some measurements between that and the former points.. the measurement wasn't done by me but I have thousands of such subsets. This one here was only chosen arbitrarily. – Ben Oct 05 '17 at 12:58
  • 1
    If you don't have the theoretical function at hand, using a divergent function is problematic. I'd transform the data to `y'=1/(a+y)` (the `a` to avoid division by zero) and fit a simple polynomial. The extrapolated zero-crossing should be your point of divergence. – mikuszefski Oct 05 '17 at 15:04
  • @mikuszefski Thank you! Is "a" arbitrary? – Ben Oct 05 '17 at 15:10
  • 1
    Well, large enough to make everything positive. If you still have a zero crossing you get another divergence. In this case probably something larger than1 (and maybe smaller than 5). I'd play a little. Also depends if all the other data looks similar. If you put some example data I might post an answer, if you want. – mikuszefski Oct 06 '17 at 06:22
  • @mikuszefski Thanks! I will also figure it out on the weekend but nevertheless I attached some data. – Ben Oct 06 '17 at 12:18
  • OK...cool. Seems it doesn't get too negative. Looking forward to your results. – mikuszefski Oct 06 '17 at 12:26
  • At first thanks a lot! I'll update my question. Unfortunately I had not much time today but nevertheless I'll provide my current status. – Ben Oct 09 '17 at 16:42
  • Typo? ...should be `( e**c ) * ( x0 - x )**( -b )` and not `( e**c )**( x0 - x )**( -b )` – mikuszefski Nov 02 '17 at 07:50
  • The simplest answer to your last question is "Yes". You'd try to fit `exp( c ) * ( x0 - x )**( -b )`. But going to a loglog-scale is still a good idea. Note, this allowed me to detect measurement problems (saturation maybe?) for very large amplifications. Detecting this kind of problem automatically might be difficult, but should be possible. It all depends on how many data sets you have to analyse. – mikuszefski Nov 02 '17 at 08:02
  • Additionally, if the result of my second answer, where all rescaled data has the same slope (and intercept), is generally true, you might fit all data at once with a common slope and intercept and only leaving `x0` free for each data set. – mikuszefski Nov 02 '17 at 08:05
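The transform-and-extrapolate idea from the comments above can be sketched as follows: transform to y' = 1/(a + y) so the divergence becomes a zero crossing, fit a low-order polynomial to the tail, and root-find from the last data point. The data, the divergence location, and the choice a = 0.1 below are all made up for illustration:

```python
import numpy as np
from scipy.optimize import newton

# synthetic diverging signal 1/(x0 - x) with a made-up divergence at x0 = 380
x = np.linspace(360.0, 378.0, 40)
x0_true = 380.0
y = 1.0 / (x0_true - x)

a = 0.1                    # small offset keeping a + y positive everywhere
y_inv = 1.0 / (a + y)      # the divergence becomes a zero crossing at x0

# cubic fit to the transformed tail, then root-find starting at the last point
coeffs = np.polynomial.polynomial.polyfit(x[-10:], y_inv[-10:], 3)
x0_est = newton(np.polynomial.polynomial.polyval, x[-1], args=(coeffs,))
```

The extrapolated zero crossing `x0_est` lands close to the true divergence here; on real data the polynomial degree, the number of tail points, and `a` would all need some playing around, as the comments say.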

2 Answers


I am not using R, but here is my Python 2.7 solution. First I remove the varying slope and intercept of the data near zero. For nicer transformation and fitting I add a small slope myself, which is then identical for all data sets. Finally I transform according to y'=1/(a+y), so the divergence becomes a zero crossing. This is fitted by a polynomial of order 5 using the last 8 data points. I additionally apply a weighting of type x**2, as the points closer to the divergence, i.e. with larger x, are somewhat more important for getting the divergence right.

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import leastsq, newton
from scipy.stats import linregress
np.set_printoptions(linewidth=190)


if __name__=="__main__":

    testX=np.linspace(300,400,250)

    nameList=["data_00.dat","data_01.dat","data_02.dat"]
    xDict=dict()
    yDict=dict()

    for name in nameList:
        _,_,xDict[name],yDict[name]=np.loadtxt(name, unpack=True)

    aaa=0.25
    n0=50 # to correct for slope in the beginning
    manslope=.002 #some manual slope to make the fits nicer
    yDictCorrected=dict()
    yDictInv=dict()
    nn=-8 # number of points for fit
    fitDict=dict()
    slopeDict=dict()
    interDict=dict()
    for name in nameList:
        slopeDict[name], interDict[name], rVal, pVal, err=linregress( xDict[name][:n0],yDict[name][:n0])
        yDictCorrected[name]=[y-slopeDict[name]*x+manslope*x-interDict[name] for x,y in zip(xDict[name],yDict[name])]
        yDictInv[name]=[1./(aaa+y) for y in yDictCorrected[name]]
        fitDict[name]=np.polynomial.polynomial.polyfit(xDict[name][nn:],yDictInv[name][nn:],5, w=[x**2 for x in xDict[name][nn:]] )


    ####find "infinity"
    infDict=dict()
    apprxDict=dict()
    appryDict=dict()
    for name in nameList:
        infDict[name]=newton( np.polynomial.polynomial.polyval, xDict[name][-1] , args=(fitDict[name], ) )
        print infDict[name],xDict[name][-1],yDictInv[name][-1]
        if infDict[name] > xDict[name][-1]:
            apprxDict[name]=np.linspace(xDict[name][-1],infDict[name]-1e-7,20)
            appryDict[name]=[ 1./np.polynomial.polynomial.polyval(x,fitDict[name]) - aaa  + (slopeDict[name]-manslope)*x + interDict[name] for x in apprxDict[name] ]

        else:
            print("Warning: in dataset {} the fitted divergence is smaller than the largest measured x-value".format(name))

    fig1 = plt.figure(1)
    ax=fig1.add_subplot(2,2,1)
    bx=fig1.add_subplot(2,2,2)

    for name in nameList:
        ax.plot(xDict[name], yDictCorrected[name] )    
        bx.plot(xDict[name], yDictInv[name], linestyle='', marker='o',fillstyle='none')

    plt.gca().set_prop_cycle(None)#https://stackoverflow.com/questions/24193174/reset-color-cycle-in-matplotlib
    for name in nameList:
        bx.plot(testX,  [np.polynomial.polynomial.polyval(x,fitDict[name]) for x in testX], linestyle='-')

    cx=fig1.add_subplot(2,1,2)
    for name in nameList:
        ppp=cx.plot(xDict[name], [ np.arcsinh(y) for y in yDict[name] ])  
        cc=ppp[-1].get_color() 
        cx.axvline(infDict[name],color=cc,linestyle=':')

    plt.gca().set_prop_cycle(None)
    for name in nameList:
        if infDict[name] > xDict[name][-1]:
            cx.plot(apprxDict[name], [ np.arcsinh(y) for y in appryDict[name] ], linestyle='--') # sort of logarithmic plot
        else:
            cx.plot([],[])
    cx.set_ylabel(r"$\mathrm{arcsinh}(y)$")


    ax.set_ylim([-1,2])
    bx.set_ylim([0,2])
    cx.set_ylim([-1.1,15])
    cx.set_xlim([370,390])
    bx.set_xlim([360,390])
    plt.show()

The result is:

379.923614394 377.999 0.232709117518
375.97892795 375.981 3.00583273388e-05
Warning: in dataset data_01.dat the fitted divergence is smaller than the largest measured x-value
385.888655048 384.001 0.354032659754

Fitting procedure. Upper left: "slope corrected" data. Upper right: inverse data with polynomial fit. Lower: original data and the extrapolated part on an arcsinh scale; the divergence is marked by a vertical dotted line.

The second data set already has such large y-values that the position of the divergence lies within the fit errors. In other words: the zero crossing of the fitted polynomial is smaller than the largest measured x-value. One might fiddle with the parameters to get this "right".

mikuszefski
  • Nice plots! Looks like the divergence is calculated very well. Can you please tell me what exactly the upper right plot is? I wonder why the polynomials fall; in my updated example they are rising (I guess they should be, because the curve is increasing?). Hence, in the lower plot: do I understand correctly that you transformed the data to an arcsinh scale? And the breakdown is then a simple vertical straight-line fit? – Ben Oct 09 '17 at 16:54
  • The upper right is `1/(0.25+y')`, where `y'` is what is seen in the upper left. Note that I plot column 4 against column three, i.e. dark current vs voltage like in your first graph and I show only a small portion. I am a little surprised that the `log` of `log` works, but I guess that is due to the fact that the amplification is always bigger than 1 making the log always bigger than 0. – mikuszefski Oct 10 '17 at 05:51
  • 1
    In the lower plot I just put the original data, the estimated divergence, and the limit where it goes to infinity. I'd like to make a logarithmic plot, but the data of the dark current has a zero crossing. A log-plot is, hence, not an option. So instead of `log y` I plot `arcsinh y`. This is linear for small `y`, allows for negative values, as it is point symmetric, and behaves like `log` for large `y`. So it is somewhat like [symlog](https://matplotlib.org/examples/scales/scales.html) – mikuszefski Oct 10 '17 at 05:58
  • 1
    Note, the estimate divergence is plotted from back transforming the polynomials of the upper right plot. The vertical lines are placed at the position where the according polynomial has a zero crossing. This is where the original data should be infinite/diverge. – mikuszefski Oct 10 '17 at 06:00
  • Thanks again for the detailed answers! I'll try to implement these; especially the arcsinh looks usable. Nevertheless, for clarification, as the so-called breakdown voltage rapidly induces a very large amplification: can I relate the inflection point in the log-log plot to that? It looks like it, but can this be proven somehow? In case of the dark current I'll probably use your exp-method, so I'll have two possibilities to determine the voltage. – Ben Oct 10 '17 at 07:39
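The arcsinh trick from the comments rests on the identity arcsinh(y) = log(y + sqrt(y^2 + 1)): the function is roughly the identity near zero, is defined for negative values (it is point symmetric), and behaves like log(2y) for large y. A quick numeric check with arbitrarily chosen values:

```python
import numpy as np

y_small = np.array([-0.01, 0.0, 0.02])
small = np.arcsinh(y_small)      # approximately equal to y itself near zero

y_large = 1.0e4
large = np.arcsinh(y_large)      # approximately log(2 * y_large) for large y
ref = np.log(2.0 * y_large)
```

This is why it works as a drop-in replacement for a log scale on data that crosses zero, much like matplotlib's symlog scale.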

Again not in R but Python 2.7. A second solution assumes that the divergence is of the type exp(c)*(x0-x)**(-b). For this I rescale the data similarly to the other answer, except for the artificial slope. Then I scale logarithmically. If the choice of x0 is correct and the assumption of algebraic behaviour is true, this should give a straight line. Hence, I fit for the x0 that best gives a straight line. I additionally get the values of c and b. The accuracy depends a little on the choice of a in y'=1/(a+y): the smaller a the better, but avoid division by zero.

This looks as follows:

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import leastsq, newton, minimize
from scipy.stats import linregress
np.set_printoptions(linewidth=190)


def divergence_scale(value,dataX,dataY, precision=1e-7):
    minScale = dataX[-1]
    trueScale=minScale+value**2
    newDataX= np.fromiter((np.log(precision+trueScale-x) for x in dataX),np.float)
    newDataY= np.fromiter((np.log(y) for y in dataY),np.float)
    return newDataX,newDataY


def measure(value,dataX,dataY,passParams=False):
    newDataX,newDataY= divergence_scale(value,dataX,dataY)
    x2,x1,x0=newDataX[-3:]
    y2,y1,y0=newDataY[-3:]
    b=(y2-y1)/(x2-x1)
    c=y1-x1*b
    delta=y0-(c+b*x0)
    if passParams:
        return delta**2,c,b
    else:
        return delta**2


def best_divergence(dataX,dataY):
    sol=minimize(measure,0.5,args=(dataX,dataY))
    return sol.x[0]


if __name__=="__main__":
    allName,_,allX,allY=np.loadtxt("allData.dat",unpack=True)
    nameList=list(set(allName))

    xDict=dict()
    yDict=dict()
    for name in nameList:
        xDict[name]=[x for x,n in zip(allX,allName) if n==name ]
        yDict[name]=[y for y,n in zip(allY,allName) if n==name ]

    aaa=0.1 ## should be small compared to three largest y-values
    n0=50 # to correct for slope in the beginning
    yDictCorrected=dict()
    yDictInv=dict()
    xDictInv=dict()
    nn=-8 # number of points for fit
    fitDict=dict()
    slopeDict=dict()
    interDict=dict()
    paramsDict=dict()
    infinityDict=dict()
    for name in nameList:
        slopeDict[name], interDict[name], rVal, pVal, err=linregress( xDict[name][:n0],yDict[name][:n0])
        yDictCorrected[name]=[y-slopeDict[name]*x-interDict[name] for x,y in zip(xDict[name],yDict[name])]
        yDictInv[name]=[1./(aaa+abs(y)) for y in yDictCorrected[name]]
        fitDict[name]=best_divergence(xDict[name],yDictInv[name])
        infinityDict[name]=xDict[name][-1]+fitDict[name]**2
        xDictInv[name]=[infinityDict[name]-x for x in xDict[name]]
        paramsDict[name]=measure(fitDict[name],xDict[name],yDictInv[name],passParams=True)
        print infinityDict[name]

    apprxDict=dict()
    appryDict=dict()
    for name in nameList:
        apprxDict[name]=np.linspace(xDict[name][-2],infinityDict[name]-1e-7,55)
        appryDict[name]=[ 1./( np.exp(paramsDict[name][1])*(xDict[name][-1]+fitDict[name]**2-x )**(paramsDict[name][2]) ) - aaa  + slopeDict[name]*x + interDict[name] for x in apprxDict[name] ]

    fig1 = plt.figure(1)
    ax=fig1.add_subplot(2,2,1)
    bx=fig1.add_subplot(2,2,2)

    for name in nameList:
        ax.plot(xDict[name], yDictCorrected[name] )    
        bx.plot(xDictInv[name], yDictInv[name] ,marker='+')  

    cx=fig1.add_subplot(2,1,2)
    for name in nameList:
        ppp=cx.plot(xDict[name], [ np.arcsinh(y) for y in yDict[name] ])  
        cc=ppp[-1].get_color() 
        cx.axvline(infinityDict[name],color=cc,linestyle=':') 

    plt.gca().set_prop_cycle(None)
    for name in nameList:
        cx.plot(apprxDict[name], [ np.arcsinh(y) for y in appryDict[name] ], linestyle='--')  
    cx.set_ylabel(r"$\mathrm{arcsinh}(y)$")    

    ax.set_ylim([-1,2])
    cx.set_xlim([370,390])
    cx.set_ylim([-1,15])
    bx.set_xscale('log')
    bx.set_yscale('log')
    plt.show()

Output is:

375.982403898
379.636216525
385.502010366

[Figure: best-fit result, rescaled data with fitted straight lines on a log-log scale]

I'd say the result looks good; the linear behaviour seems reasonable. The fact that the exponents b differ, however, is interesting from a physics point of view.

Update: I am not sure what the OP means by inflection point (I do know what an inflection point is). The log(log(...)) graph has two inflection points, one at about 120 and one near 300. I do not see how these are related to the divergence. You can, however, do the same thing with the amplitude, of course. The corresponding code variation is:

if __name__=="__main__":
    allName,allY,allX,_=np.loadtxt("allData.dat",unpack=True)
    nameList=list(set(allName))

    xDict=dict()
    yDict=dict()
    for name in nameList:
        xDict[name]=[x for x,n in zip(allX,allName) if n==name ]
        yDict[name]=[y for y,n in zip(allY,allName) if n==name ]

    aaa=0.01 ## should be small compared to three largest y-values
    yDictInv=dict()
    xDictInv=dict()
    fitDict=dict()
    paramsDict=dict()
    infinityDict=dict()
    for name in nameList:
        yDictInv[name]=[1./(aaa+abs(y)) for y in yDict[name]]
        fitDict[name]=best_divergence(xDict[name],yDictInv[name])
        infinityDict[name]=xDict[name][-1]+fitDict[name]**2
        xDictInv[name]=[infinityDict[name]-x for x in xDict[name]]
        paramsDict[name]=measure(fitDict[name],xDict[name],yDictInv[name],passParams=True)
        print infinityDict[name]

    apprxDict=dict()
    appryDict=dict()
    for name in nameList:
        apprxDict[name]=np.linspace(xDict[name][-2],infinityDict[name]-1e-7,55)
        appryDict[name]=[ 1./( np.exp(paramsDict[name][1])*(xDict[name][-1]+fitDict[name]**2-x )**(paramsDict[name][2]) ) - aaa  for x in apprxDict[name] ]

    fig1 = plt.figure(1)
    ax=fig1.add_subplot(2,2,1)
    bx=fig1.add_subplot(2,2,2)

    for name in nameList:
        ax.plot(xDict[name], yDict[name] )    
        bx.plot(xDictInv[name], yDictInv[name] ,marker='+')  
    ax.set_ylabel(r"amplitude") 

    cx=fig1.add_subplot(2,1,2)
    for name in nameList:
        ppp=cx.plot(xDict[name], [ np.arcsinh(y) for y in yDict[name] ])  
        cc=ppp[-1].get_color() 
        cx.axvline(infinityDict[name],color=cc,linestyle=':') 

    plt.gca().set_prop_cycle(None)
    for name in nameList:
        cx.plot(apprxDict[name], [ np.arcsinh(y) for y in appryDict[name] ], linestyle='--')  
    cx.set_ylabel(r"$\mathrm{arcsinh}(y)$")    

    ax.set_ylim([10,4000])
    ax.set_xlim([200,400])
    ax.set_yscale('log')
    cx.set_xlim([370,390])
    cx.set_ylim([5,15])
    bx.set_xscale('log')
    bx.set_yscale('log')
    plt.show()

providing:

376.533671414
380.307952245
385.91681625

[Figure: amplitude fit] (Looking carefully at the upper left graph and at the result of the upper right shows that the last point in the first measurement cannot be correct.)

So the results are slightly different. Moreover, two points need to be mentioned here:

  1. Graph one has a sort of saturation measurement error, such that the last point must be skipped. I therefore changed the code to use the points [-4,-3,-2]
  2. After rescaling, all lines lie basically on top of each other. This means that there is a general, scalable physical behaviour behind it, supporting the approach.
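The common-slope observation suggests the global fit mentioned in the comments: shared c and b for all data sets, with only x0 free per set. A sketch of that idea with `scipy.optimize.least_squares` on log-scaled residuals; the two data sets and all parameter values below are synthetic and made up:

```python
import numpy as np
from scipy.optimize import least_squares

def model(x, c, b, x0):
    # shared power-law divergence exp(c) * (x0 - x)**(-b)
    return np.exp(c) * (x0 - x) ** (-b)

# two synthetic data sets sharing c and b but with individual x0 (made up)
x1 = np.linspace(358.0, 374.0, 10)
x2 = np.linspace(362.0, 378.0, 10)
y1 = model(x1, 2.0, 1.5, 376.0)
y2 = model(x2, 2.0, 1.5, 380.0)

def residuals(p):
    # p = (c, b, x0 of set 1, x0 of set 2); residuals taken on a log scale
    c, b, x01, x02 = p
    return np.concatenate([np.log(y1) - np.log(model(x1, c, b, x01)),
                           np.log(y2) - np.log(model(x2, c, b, x02))])

# bounds keep each x0 above that set's largest measured voltage
sol = least_squares(residuals, [1.0, 1.0, 375.0, 379.0],
                    bounds=([-10.0, 0.1, 374.1, 378.1],
                            [10.0, 5.0, 390.0, 395.0]))
c_fit, b_fit, x01_fit, x02_fit = sol.x
```

Fitting all sets jointly constrains c and b much more strongly than per-set fits, which should stabilise the per-set x0 estimates if the common-slope behaviour is generally true.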
mikuszefski
  • I'm stuck at the phrase "the divergence is of type exp(c)*(x0-x)**(-b)". What exactly is the divergence in this context? – Ben Oct 09 '17 at 16:58
  • 1
    Your data is expected to go to infinity, i.e. it is diverging at a specific voltage and you want to figure out, what voltage that is. Therefore, you have to extrapolate. How good this extrapolation is, depends on how good your knowledge about the divergence is. In the other post I assume no knowledge whatsoever. In general, there are many ways how a function may diverge. It could be e.g. `-log(x0-x)` or something evil like `exp(1/(x0-x)**4)` or something as simple as `1/(x0-x)`. In physics you often have a divergence that has behaviour like `(x0-x)**(-b)` – mikuszefski Oct 10 '17 at 06:08
  • 1
    @Ben The function to fit then would be, `d*(x0-x)**(-b)` and you fit `(d,b,x0)`. As I know that I am going to make a log-transformation I directly write `d=exp(c)`, defining a new `c` and fitting `c` instead of `d`. If the way your data goes to infinity can be described by such a function, it must be possible to find an `x0` on log-scaled data such that the data is on a straight line. That is all that I am doing. And as I find a straight line, the assumption was right...self consistency proof. – mikuszefski Oct 10 '17 at 06:13
  • Thanks for your help! I was very busy with some other stuff but I have returned to this case just now. I'm sorry, I cannot really understand the procedure you advise. I updated my question to show my current status. – Ben Oct 30 '17 at 19:02