34

I have gone through the first couple of sections in PEP 3107, but I still don't get what good they do for the language. It seems to me that you can add metadata to functions using decorators. e.g.

def returns(return_type):
  def decorator(f):
    f.return_type = return_type  # <- adding metadata here
    return f
  return decorator

@returns(Foo)
def make_foo(): ...

You can add metadata to arguments too, and it can look pretty if you take advantage of default arguments, like so:

import inspect

def defaults_are_actually_metadata(f):
  names, args_name, kwargs_name, defaults = inspect.getargspec(f)
  f.parameter_metadata = dict(zip(names[-len(defaults):], defaults))
  f.__defaults__ = ()
  return f

@defaults_are_actually_metadata
def haul(load="Stuff to be carried.",
         speed_mph="How fast to move the load (in miles per hour)."): ...
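For reference, here is a self-contained, runnable version of that sketch. It uses `inspect.getfullargspec` rather than `getargspec` (which was removed in Python 3.11); the behavior is otherwise the same as above:

```python
import inspect

def defaults_are_actually_metadata(f):
    # Pair each defaulted parameter name with its "default" (really a
    # description string), stash the mapping on the function, then strip
    # the real defaults so callers must pass the arguments explicitly.
    spec = inspect.getfullargspec(f)
    f.parameter_metadata = dict(zip(spec.args[-len(spec.defaults):],
                                    spec.defaults))
    f.__defaults__ = ()
    return f

@defaults_are_actually_metadata
def haul(load="Stuff to be carried.",
         speed_mph="How fast to move the load (in miles per hour)."):
    pass

print(haul.parameter_metadata["load"])  # Stuff to be carried.
```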

At least my initial impression is that annotations are superfluous: decorators can do everything that annotations can (and more). Why are annotations better than decorators when it comes to adding metadata to functions?

allyourcode
    For now, annotations are kind of an experiment, and kind of a work in progress. There is actually a recent thread in the [python-ideas mailing list](http://mail.python.org/pipermail/python-ideas/2012-December/thread.html) on the topic which may be helpful. – John Y Dec 09 '12 at 05:52
  • @JohnY I want to mark your answer as "correct", but it's not an "official" answer! – allyourcode Dec 09 '12 at 09:50

3 Answers

33

As you mentioned, the relevant PEP is 3107 (linked for easy reference in case others encountering this question haven't read it yet).

For now, annotations are kind of an experiment, and kind of a work in progress. There is actually a recent thread in the python-ideas mailing list on the topic which may be helpful. (The link provided is just for the monthly archive; I find that the URL for specific posts tends to change periodically. The thread in question is near the beginning of December, and titled "[Python-ideas] Conventions for function annotations". The first post is from Thomas Kluyver on Dec 1.)

Here's a bit from one of Guido van Rossum's posts in that thread:

On 12/4/2012 11:43 AM, Jasper St. Pierre wrote:

Indeed. I've looked at annotations before, but I never understood the purpose. It seemed like a feature that was designed and implemented without some goal in mind, and where the community was supposed to discover the goal themselves.

Guido's response:

To the contrary. There were too many use cases that immediately looked important, and we couldn't figure out which ones would be the most important or how to combine them, so we decided to take a two-step approach: in step 1, we designed the syntax, whereas in step 2, we would design the semantics. The idea was very clear that once the syntax was settled people would be free to experiment with different semantics -- just not in the stdlib. The idea was also that eventually, from all those experiments, one would emerge that would be fit for the stdlib.

Jasper St. Pierre:

So, if I may ask, what was the original goal of annotations? The PEP gives some suggestions, but doesn't leave anything concrete. Was it designed to be an aid to IDEs, or static analysis tools that inspect source code? Something for applications themselves to munge through to provide special behaviors, like a command line parser, or runtime static checker?

Guido's response:

Pretty much all of the above to some extent. But for me personally, the main goal was always to arrive at a notation to specify type constraints (and maybe other constraints) for arguments and return values. I've toyed at various times with specific ways of combining types. E.g. list[int] might mean a list of integers, and dict[str, tuple[float, float, float, bool]] might mean a dict mapping strings to tuples of three floats and a bool. But I felt it was much harder to get consensus about such a notation than about the syntax for argument annotations (think about how many objections you can bring in to these two examples :-) -- I've always had a strong desire to use "var: type = default" and to make the type a runtime expression to be evaluated at the same time as the default.
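For concreteness, the `var: type = default` shape Guido mentions is exactly what PEP 3107 provides: the annotation is an ordinary expression evaluated at definition time, and the results are collected in the function's `__annotations__` dict (with the return annotation under the key `'return'`):

```python
def haul(load: str = "bricks", speed_mph: float = 5.0) -> bool:
    return speed_mph > 0

# Annotations are evaluated once, when the def statement runs,
# and stored on the function object:
print(haul.__annotations__)
# {'load': <class 'str'>, 'speed_mph': <class 'float'>, 'return': <class 'bool'>}
```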

And a tiny bit of humor from Ned Batchelder:

A telling moment for me was during an early Py3k keynote at PyCon (perhaps it was in Dallas or Chicago?), Guido couldn't remember the word "annotation," and said, "you know, those things that aren't type declarations?" :-)

John Y
8

I think the first paragraph of the PEP says it all:

Because Python's 2.x series lacks a standard way of annotating a function's parameters and return values

(emphasis mine)

Having a standard way to do this has the advantage that you know exactly where the annotations will be located.

As for your argument that there's another way to do it: you could make the same argument against list comprehensions, since any comprehension can be written as an explicit loop:

out = []
for x in my_iterable:
    out.append(x)
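The comprehension expresses that same loop as one standard, recognizable line, which is the point: a common convention beats many ad-hoc alternatives. (The input here is a made-up example.)

```python
my_iterable = range(3)  # hypothetical input

# Explicit loop:
out = []
for x in my_iterable:
    out.append(x)

# Equivalent list comprehension:
out2 = [x for x in my_iterable]

print(out == out2)  # True
```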
mgilson
6

They have two different roles.

Annotations are documentation / comments for arguments, while decorators transform the function.

By itself, Python does not attach any particular meaning or significance to annotations.
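A quick sketch of that point: an annotation can be any expression at all, and Python simply stores the result without ever acting on it. Calls succeed no matter what the annotations say:

```python
def greet(name: "anything, even a string") -> 42:
    return "hi " + name

# No checking or enforcement happens -- the annotations are inert metadata:
print(greet("world"))         # hi world
print(greet.__annotations__)  # {'name': 'anything, even a string', 'return': 42}
```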

Python Decorators page

Decorators dynamically alter the functionality of a function, method, or class without having to directly use subclasses or change the source code of the function being decorated.
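A minimal contrast between the two roles (the `shout` decorator here is a made-up example): the decorator changes what the function actually does, while the annotations on the decorated function remain inert metadata.

```python
import functools

def shout(f):
    # Transform behavior: uppercase whatever f returns.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs).upper()
    return wrapper

@shout
def describe(load: str) -> str:  # annotations document; shout transforms
    return "hauling " + load

print(describe("bricks"))  # HAULING BRICKS
```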