
I am currently reading the TensorFlow tutorial from a Stanford lecture. It teaches a concept called "lazy loading", and the slide explains that we can avoid it by exploiting a property of Python.

I am relatively new to Python. What exactly is the property he is talking about?

The slide says: "Use Python property to ensure function is also loaded once the first time it is called".


user3222184
  • Did you read the last sentence at the bottom of the slide? – Patwie May 23 '18 at 05:11
  • Possible duplicate of [How does the @property decorator work?](https://stackoverflow.com/questions/17330160/how-does-the-property-decorator-work) – Kode May 22 '19 at 20:18

2 Answers


There's a pretty good description of using the @property decorator for lazy evaluation here. Basically, the @property decorator lets you define an attribute that will:

  1. Be evaluated only upon the first access (i.e. lazy loading)
  2. Have its result stored in an instance variable so it can be reused on later accesses

To see this, consider the following code (note: it was taken and adapted from the link above, so credit should go to Steven Loria):

class Node_normal:
    def __init__(self):
        # Normal loading: the value is computed as soon as the object is created
        print("nn")
        self.val = 2

class Node_lazy:
    def val(self):
        # Lazy loading without caching: the value is recomputed on every call
        print("nl")
        return 2

class Node_property:
    def __init__(self):
        self._val = False   # sentinel meaning "not computed yet"

    @property
    def val(self):
        # Lazy loading with caching: compute once, then reuse self._val
        if not self._val:
            print("np")
            self._val = 2
        return self._val

When a Node_normal object is instantiated, self.val is assigned the value 2 immediately (normal loading). Node_lazy assigns nothing at instantiation; instead it computes and returns the value every time val() is called (lazy loading). Finally, when Node_property is instantiated, self._val starts out with a sentinel value (False); the real value (2) is computed the first time val is accessed, stored in self._val, and simply returned on every later access.
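As a quick check (assuming the classes above are already defined in your session), accessing val twice on a single Node_property instance shows that the body only runs once:

npr = Node_property()
print(npr.val)   # first access: prints "np", computes 2 and caches it in _val
print(npr.val)   # second access: no "np" is printed, the cached value is returned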

You can test all three classes together with the following (the exec lines assume each class has been saved to the corresponding file):

exec(open("node_normal.py").read())
exec(open("node_lazy.py").read())
exec(open("node_property.py").read())

class Graph:
    def __init__(self):
        nn = Node_normal()                                 # prints "nn" once, at construction
        self.nodes1 = [nn.val for x in range(1, 10)]       # plain attribute reads
        nl = Node_lazy()
        self.nodes2 = [nl.val() for x in range(1, 10)]     # prints "nl" on every call
        np = Node_property()
        self.nodes3 = [np.val for x in range(1, 10)]       # prints "np" only on the first access

g = Graph()
print(g.nodes1)
print(g.nodes2)
print(g.nodes3)

which gives the following results:

nn
nl
nl
nl
nl
nl
nl
nl
nl
nl
np
[2, 2, 2, 2, 2, 2, 2, 2, 2]
[2, 2, 2, 2, 2, 2, 2, 2, 2]
[2, 2, 2, 2, 2, 2, 2, 2, 2]

Note that the values in g.nodes1, g.nodes2 and g.nodes3 are the same for all three approaches, but Node_normal and Node_property only computed the value once, whereas Node_lazy recomputed it on each of the nine calls. Additionally, Node_property did not assign 2 to self._val until np.val was first accessed. This matters most when computing the value is expensive (which is not the case here).
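To make that last point concrete, here is a sketch of the same pattern with a deliberately slow computation; the Node_expensive class and the time.sleep call are stand-ins of my own, not part of the example above:

import time

class Node_expensive:
    def __init__(self):
        self._val = None

    @property
    def val(self):
        if self._val is None:
            time.sleep(2)        # stand-in for an expensive computation
            self._val = 2
        return self._val

n = Node_expensive()
print(n.val)   # slow: takes about 2 seconds on the first access
print(n.val)   # fast: the cached result is returned immediately

On Python 3.8+ the standard library also offers functools.cached_property, which implements essentially this compute-once-then-cache behaviour for you.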

There are other reasons why the @property decorator is used; see here.

user387832

If you check the associated lecture notes, they provide a link for this: http://danijar.com/structuring-your-tensorflow-models/.
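That article wraps the compute-once-and-cache logic from the answer above into a reusable decorator. Roughly, the pattern looks like this (a sketch in the spirit of the article rather than a verbatim copy; the Model class and its prediction value are placeholders of my own):

import functools

def lazy_property(function):
    # Cache the result under a private attribute named after the wrapped function.
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return wrapper

class Model:
    @lazy_property
    def prediction(self):
        print("building prediction op")   # runs only on the first access
        return 42                         # stand-in for real graph construction

m = Model()
m.prediction   # prints the message, builds the value, caches it
m.prediction   # silently returns the cached value

This way each piece of the graph is built exactly once, no matter how often the property is read, which is the behaviour the slide is asking for.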