24

I've defined a Vector class with three attributes: x, y and z. Coordinates have to be real numbers, but there's nothing to stop one from doing the following:

>>> v = Vector(8, 7.3, -1)
>>> v.x = "foo"
>>> v.x
"foo"

I could implement "type safety" like this:

import numbers

class Vector:
    def __init__(self, x, y, z):
        self.setposition(x, y, z)

    def setposition(self, x, y, z):
        for i in (x, y, z):
            if not isinstance(i, numbers.Real):
                raise TypeError("Real coordinates only")

        self.__x = x
        self.__y = y
        self.__z = z

    @property
    def x(self):
        return self.__x

    @property
    def y(self):
        return self.__y

    @property
    def z(self):
        return self.__z

...but that seems un-Pythonic.

Suggestions?

  • But why? Integers totally work. – S.Lott Sep 20 '10 at 14:07
  • I understand these sorts of concerns usually come up with large projects/teams. If type safety is something you feel strongly about, then I suggest you have a look at Scala instead. – Khoa Jan 14 '17 at 10:59

5 Answers

18

You have to ask yourself why you want to test the type when setting these values. Just raise a TypeError from any calculation that happens to stumble over the wrong value type. Bonus: the standard operations already do this.

>>> 3.0 / 'abc'
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for /: 'float' and 'str'
knitti
  • And then write a comprehensive set of unit tests that exercise all branches of your code, to catch your TypeErrors before your users do. Right? Anyway, regarding when the error should be thrown: if you know that a certain combination is invalid and will cause problems, wouldn't it be better/clearer to fail as early as possible? Doesn't failing early make finding the root problem much easier? (I suppose the "why it's invalid" is less clear, though.) – Jon Coombs Mar 15 '14 at 06:39
  • I think the question was about LBYL vs. EAFP, so in a "pythonic" way it makes sense to not only "allow" for the types you originally intended. For a "known bad" outcome it makes perfect sense to cut its execution path (raise). – knitti Mar 15 '14 at 21:53
12

Duck typing is the usual way in Python. Your class should work with anything that behaves like a number, not only with actual real numbers.

In most cases in Python one should not explicitly check for types. You gain flexibility because your code can be used with custom datatypes, as long as they behave correctly.
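To illustrate, here is a sketch with a hypothetical minimal Vector (not the asker's class): a custom numeric type such as fractions.Fraction works without a single isinstance check.

```python
from fractions import Fraction

class Vector:
    """Hypothetical minimal vector; no type checks anywhere."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def scaled(self, k):
        # Works for any k that supports multiplication with the coordinates
        return Vector(k * self.x, k * self.y, k * self.z)

v = Vector(Fraction(1, 3), 2, 0.5)
w = v.scaled(3)
print(w.x, w.y, w.z)  # 1 6 1.5
```

Nothing in the class mentions Fraction, int or float; any type with the right arithmetic behavior just works.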

Mad Scientist
5

The other answers already pointed out that it doesn't make much sense to check for the type here. Furthermore, your class won't be very fast if it's written in pure Python.

If you want a more Pythonic solution, you could use a property setter:

# inside the class, after the @property getter for x
@x.setter
def x(self, value):
    assert isinstance(value, numbers.Real)
    self.__x = value

The assert statement is removed when Python runs in optimized mode (python -O), so the check costs nothing in production, but it also silently disappears there.
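For context, here is one way the setter fragment above could fit into the full class (a sketch; only x is shown, y and z would repeat the same pattern):

```python
import numbers

class Vector:
    def __init__(self, x, y, z):
        # Assigning through the property runs the setter check below
        self.x = x
        self.y = y
        self.z = z

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, value):
        assert isinstance(value, numbers.Real), "Real coordinates only"
        self.__x = value

    # y and z are plain attributes here for brevity; real code would
    # repeat (or generate) the property/setter pair for them too

v = Vector(1, 2.5, -3)
v.x = 7       # fine
# v.x = "foo" would raise AssertionError (unless run with -O)
```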

Alternatively, you could force value to floating-point in the setter. That will raise an exception if the type/value is not convertible:

@x.setter
def x(self, value):
    self.__x = float(value)
AndiDog
4

But there's nothing to stop one from doing the following:

I believe trying to stop someone from doing something like that is un-Pythonic. If you must, then you should check for type safety during any operations you might do using Vector, in my opinion.

To quote Guido van Rossum, "we're all consenting adults here", after all. See this question and its answers for more information.

I am sure more experienced Pythonistas here can give you better answers.

Manoj Govindan
2

You are not supposed to provide type safety this way. Yes, someone can deliberately break your code by supplying values your container won't work with, but that is just the same in other languages. And even if someone puts a value of the right type into a method, that does not necessarily mean the call isn't broken: if a program expects an IP address but you pass a hostname, it still won't work, although both may be strings.

What I am saying is: the mindset of Python is inherently different. Duck typing basically says: hey, I'm not limited to certain types, only to the interface, or behavior, of objects. If an object acts like the kind of object I'd expect, I don't care what it actually is: just go for it.

If you try to introduce type checking, you are basically limiting one of most useful features of the language.

That being said, you really need to get into test-driven development, or unit testing at least. There really is no excuse not to with a dynamic language: it just moves the detection of (type) errors from compile time to a test suite you run many times a day. While this seems like added effort, it will actually reduce the time spent debugging and fixing code, because it's an inherently more powerful way to detect errors.
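As a sketch of what that looks like in practice (a hypothetical minimal Vector; all names are illustrative):

```python
import unittest

class Vector:
    """Hypothetical minimal vector for illustration."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __add__(self, other):
        # No type check: anything with x/y/z attributes will do,
        # and anything else fails loudly in the tests below
        return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

class TestVector(unittest.TestCase):
    def test_addition(self):
        v = Vector(1, 2, 3) + Vector(4, 5, 6)
        self.assertEqual((v.x, v.y, v.z), (5, 7, 9))

    def test_addition_rejects_non_vectors(self):
        # The error surfaces without any explicit isinstance check
        with self.assertRaises(AttributeError):
            Vector(1, 2, 3) + "foo"

# Run the suite programmatically (or via `python -m unittest`)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestVector)
unittest.TextTestRunner(verbosity=2).run(suite)
```

Run this after every change and type errors show up within minutes rather than in production.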

But enough of that, I'm already rambling.

Jim Brissom
  • Although "interface" is a bit of a misnomer here, because no interface is ever explicitly defined (except hopefully in comments). Duck typing means that you have actually look through or run the code to see what the "interface" is. In my (limited) experience in Python, this is the biggest drawback to duck typing. Writing my own code is faster, but making use of someone else's undocumented API (e.g. writing a plugin) is much harder, so comments become more crucial. – Jon Coombs Mar 15 '14 at 06:35
  • Unit tests are great, but stepping through tests in a debugger isn't the most optimal way to learn what kind of ducks each method expects to be passed, that is, how to use the API properly. And it's hard to guarantee that your unit tests always cover every edge case. – Jon Coombs Mar 15 '14 at 06:36