I recently wrote a simple Python program to find prime numbers. The code is as follows:
import math

searchLimit = 20
n = 3

def findPrimes(primes):
    global n
    global searchLimit
    if n == searchLimit:
        print(primes)
        return primes
    temp = [c for c in primes if c <= round(math.sqrt(n), 0)]
    isPrime = True
    for i in temp:
        if n % i == 0:
            isPrime = False
    if isPrime:
        primes.append(n)
    n += 1
    findPrimes(primes)

foundPrimes = findPrimes([2])
print(foundPrimes)
(Don't bother berating me about my use of global variables - I know it's bad practice, but for this simple test I cannot imagine it making a difference.)
When findPrimes is called, the program first checks whether the search limit - in this case 20 - has been reached. If not, it does some arithmetic to check whether the number has any prime factors: only the primes up to sqrt(n) are tested as divisors. If the number is prime, it appends n to the list of primes, then increments n and recurses. Ultimately the details of this part of the function should be irrelevant, as we will come to see.
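For a single candidate, the divisibility check boils down to something like this (a standalone sketch of the same trial-division idea, not the exact code above; the name is_prime_given is mine):

```python
import math

def is_prime_given(n, smaller_primes):
    # Only primes up to sqrt(n) can be the smallest factor of n,
    # so larger primes in the list need not be tested.
    limit = math.isqrt(n)
    return all(n % p != 0 for p in smaller_primes if p <= limit)

# For example: 9 is divisible by 3, while 11 has no prime factor <= 3.
print(is_prime_given(9, [2, 3]))   # False
print(is_prime_given(11, [2, 3]))  # True
```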
When I run the program, the output is as follows:
[2, 3, 5, 7, 11, 13, 17, 19]
None
The output "[2, 3, 5, 7, 11, 13, 17, 19]" comes from the print(primes) call inside findPrimes, which of course implies that primes == [2, 3, 5, 7, 11, 13, 17, 19]. However, when I return primes, assign the result of findPrimes to foundPrimes, and print that, Python outputs None - implying that foundPrimes, and hence the return value of findPrimes, and therefore primes, is None.
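The same symptom can be reproduced in isolation. In Python, a function call that finishes without hitting a return statement evaluates to None, even when some other path through the function does return a value (this toy outer function is mine, for illustration):

```python
def outer(x):
    if x == 0:
        return "done"  # only this branch returns a value
    outer(x - 1)       # recursive call; whatever it returns is discarded

print(outer(0))  # done
print(outer(3))  # None
```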
So, my question is: what is the source of these conflicting implications, and how do I resolve this issue so that findPrimes returns a non-None value? Thanks in advance.