
I have to evaluate a function (3x - 2) on the interval [-10, 10] with increments of increasing precision: 1, 0.1, 0.01, 0.001, 0.0001, etc. I tried this:

a = 1.0
for i in range(40):
    for x in range(-10,10,a):
        c = 3*x - 2
        print(c)
    a = a/10

(The outer loop is there because I need 40 levels of decimal precision.) But I got this error:

for x in range(-10,10,a):
TypeError: 'float' object cannot be interpreted as an integer

Thanks for the help.

juanpa.arrivillaga

1 Answer


It looks like you are overthinking the problem. Here is a much simpler approach that uses numpy and matplotlib to build the grid of points and visualize the result:

import numpy as np
import matplotlib.pyplot as plt

desired_increment = 0.1
upper_bound = 10
lower_bound = -10

# Number of points that puts consecutive values desired_increment apart.
# round() is used rather than int() because the float division can land just
# below the exact integer (20/0.1 == 199.99999999999997), which int() would
# truncate to the wrong count.
pts = round((upper_bound - lower_bound)/desired_increment) + 1

# linspace takes a point count instead of a step, so it never hits the
# TypeError that range() raises for float steps.
x = np.linspace(lower_bound, upper_bound, pts)

y = 3*x - 2

plt.figure(1)
plt.plot(x, y, '*')
plt.show()

[Plot of y = 3x - 2 sampled at 0.1 increments over [-10, 10]]
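
As a quick sanity check (a minimal sketch, assuming the variables from the snippet above are still in scope), you can confirm that the grid spacing matches the requested increment:

print(x[:3])        # first grid points, e.g. [-10.  -9.9  -9.8]
print(x[1] - x[0])  # approximately 0.1, up to floating-point rounding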

You can reuse np.linspace() to get each of the increment sizes you mention: just change desired_increment and recompute pts.
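
If you want to sweep through all of the increments from your original loop (1, 0.1, 0.01, ...), here is a minimal sketch along the same lines; the range(5) cap and the print format are my own choices, not part of the answer above:

import numpy as np

lower_bound, upper_bound = -10, 10

# One pass per precision level: increments 1, 0.1, 0.01, ...
# The point count grows tenfold per level, so the 40 levels from the question
# would need about 2e40 points -- far more than memory allows. Only the first
# few levels are evaluated here.
for level in range(5):
    increment = 10.0 ** -level
    pts = round((upper_bound - lower_bound) / increment) + 1
    x = np.linspace(lower_bound, upper_bound, pts)
    y = 3*x - 2
    print(f"increment {increment}: {pts} points, y starts at {y[0]}")

np.arange(lower_bound, upper_bound + increment, increment) would also accept the float step directly, but the NumPy documentation advises against float steps with arange because the endpoint is then subject to rounding, which is why linspace is used here.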

rahlf23