The first step is to find the maximum precision in your list of points. To do that, you need a way to find the precision of a single number. Here is a function that can do that (but see the note at the bottom of this post):
def decimal_precision(x):
    if isinstance(x, int):  # an integer has no decimal places
        return 0
    # if it is a float, count the digits after the decimal point
    x_str = str(x)  # start by converting x to a string
    x_split = x_str.split('.')  # split() creates a list, dividing the string at '.'
    n_decimals = len(x_split[1])  # the length of element 1 of x_split is the number of decimals
    return n_decimals
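As a quick sanity check (these calls are purely illustrative):

print(decimal_precision(7))        # 0, since it's an integer
print(decimal_precision(12.4523))  # 4
print(decimal_precision(23.40))    # 1, because str(23.40) is '23.4' and the trailing zero is lost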
Now we need to find which entry in your list has the most decimals:
my_list = [23.40, 12.4523, 87.123]
max_precision = 0
for entry in my_list:
    prec = decimal_precision(entry)
    if prec > max_precision:
        max_precision = prec
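If you prefer something more compact, the same result can be computed in one line with max() and a generator expression; this is just an equivalent sketch of the loop above:

max_precision = max(decimal_precision(entry) for entry in my_list)  # 4 for this list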
Finally, print the values, using a nested f-string so that max_precision sets the number of digits:
for entry in my_list:
    print(f'{entry:.{max_precision}f}')
# 23.4000
# 12.4523
# 87.1230
NOTE: This can actually be a trickier question than it looks at first glance, because of how floating point arithmetic works. See, for example, this post. That post is about a decade old, though, and current versions of Python (I'm using 3.8.8) seem to do something under the hood to improve this. Above, I used a simpler approach to estimating precision than the one suggested in the accepted answer of that post. If you run into issues due to floating point arithmetic, you might want to consider a more elaborate function.
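For instance, one option (just a sketch, not necessarily the approach from that answer) is to lean on the decimal module, which makes a number's exponent explicit:

from decimal import Decimal

def decimal_precision_strict(x):
    # Decimal(str(x)) parses the float's repr, so it shares the rounding
    # caveats of the string-splitting approach above; Decimal(x) would
    # instead expose the float's full binary expansion.
    exponent = Decimal(str(x)).as_tuple().exponent
    return max(0, -exponent)  # e.g. 12.4523 has exponent -4, so 4 decimals

Unlike the split on '.', this also copes with values whose string form uses scientific notation (e.g. str(1e-10) is '1e-10').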