124

Does anyone here have any useful code that uses the reduce() function in Python? Is there any use other than the usual + and * that we see in the examples?

See Fate of reduce() in Python 3000 by GvR.

martineau
cnu
  • 1
    `from functools import reduce` allows the same code to work on both Python 2 and 3. – jfs Jan 11 '15 at 20:49

24 Answers

66

The other uses I've found for it besides + and * were with and and or, but now we have any and all to replace those cases.
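For instance, a sketch of my own showing the old reduce spellings next to their modern replacements:

```python
from functools import reduce
import operator

flags = [True, True, False, True]

# old reduce spellings (note: these evaluate every element, no short-circuit)
reduce(operator.and_, flags)  # False
reduce(operator.or_, flags)   # True

# modern, short-circuiting equivalents
all(flags)  # False
any(flags)  # True
```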

foldl and foldr do come up in Scheme a lot...

Here are some cute usages:

Flatten a list

Goal: turn [[1, 2, 3], [4, 5], [6, 7, 8]] into [1, 2, 3, 4, 5, 6, 7, 8].

reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], [])

List of digits to a number

Goal: turn [1, 2, 3, 4, 5, 6, 7, 8] into 12345678.

Ugly, slow way:

int("".join(map(str, [1,2,3,4,5,6,7,8])))

Pretty reduce way:

reduce(lambda a,d: 10*a+d, [1,2,3,4,5,6,7,8], 0)
Claudiu
  • Why did you append the ", 0" in the last example? To be able to handle the eventuality of an empty list, or are there other reasons? – conny Jul 28 '09 at 09:48
  • 24
    For flattening a list, I prefer list(itertools.chain(*nested_list)) – Roberto Bonvallet Jul 28 '09 at 19:59
  • @conny: I think you always need an initial element. In this case, it's 0. Otherwise, what would 'd' be when looking at the first element? – Claudiu Jul 29 '09 at 07:09
  • when no initial value is given, reduce just applies the reduction to the first *two* elements. if no initial value is given and there is only one element, then no reduction occurs.... I think... – SingleNegationElimination Sep 13 '09 at 23:18
  • @Roberto: That doesn't handle nested lists. – Daenyth Jul 17 '10 at 17:48
  • @Daenyth: The ``reduce`` solution doesn't either! – Roberto Bonvallet Jul 18 '10 at 00:01
  • 13
    sum([[1, 2, 3], [4, 5], [6, 7, 8]], []) – Gordon Wrigley Sep 27 '10 at 12:47
  • That's an awful piece of code... The best thing GvR did was remove reduce from builtins. – JBernardo Sep 01 '11 at 04:03
  • @Roberto Bonvallet: `list(itertools.chain.from_iterable(nested_list))` – jfs Oct 13 '11 at 17:14
  • 3
    It's also useful for bitwise operations. What if you want to take the bitwise or of a bunch of numbers, for example if you need to convert flags from a list to a bitmask? – Antimony Oct 15 '12 at 21:55
  • 2
    You are calling the last reduce example pretty? IMHO, it's anything but pretty. To me int("".join(str(i) for i in range(1, 9))) is much more readable. – Victor Yan Oct 30 '12 at 08:12
  • 1
    @VictorYan: at some point it's a matter of taste. but the `reduce` way is much more efficient, at least algorithmically (maybe the overhead of doing the calcs via a lambda is greater than the other way. one would need to benchmark.) – Claudiu Oct 30 '12 at 15:26
  • 6
    Doing some benchmarks, the 'ugly' way is faster for large lists. `timeit.repeat('int("".join(map(str, digit_list)))', setup = 'digit_list = list(d%10 for d in xrange(1,1000))', number=1000)` takes ~0.09 seconds while `timeit.repeat('reduce(lambda a,d: 10*a+d, digit_list)', setup = 'digit_list = list(d%10 for d in xrange(1,1000))', number=1000)` takes 0.36 seconds (about 4 times slower). Basically multiplication by 10 becomes expensive when the list gets big, while int to str and concatenation stays cheap. – dr jimbob Aug 21 '13 at 17:18
  • 3
    Granted, yes for small lists (size 10) then the reduce method is 1.3 times faster. However, even in this case, avoiding reduce and doing a simple loop is even faster `timeit.repeat('convert_digit_list_to_int(digit_list)', setup = 'digit_list = [d%10 for d in xrange(1,10)]\ndef convert_digit_list_to_int(digits):\n i = 0\n for d in digits:\n i = 10*i + d\n return i', number=100000)` takes 0.06 s, `timeit.repeat('reduce(lambda a,d: 10*a+d, digit_list)', setup = 'digit_list = list(d%10 for d in xrange(1,10))', number=100000)` takes 0.12 s and converting digits to str method takes 0.16 s. – dr jimbob Aug 21 '13 at 17:20
  • For #2, list of digits to a number. I would do something like this: `int(''.join([str(i) for i in l]))`. A little awkward since I have to cast it back and force but it's fast (list comprehension and str.join). Ok, just saw Victor Yan posted it earlier. same thing. – Devy Mar 14 '14 at 03:46
  • @Devy: `map(str, l)` is faster than a list comprehension, but [the reduce method is faster](http://pastebin.com/TdMTUtj3) – Claudiu Mar 14 '14 at 04:04
  • @drjimbob: [I get similar results on pypy](https://gist.github.com/zed/ac0f5df365dffdb94f97476a89d38c8f): `int(''.join` is faster than `reduce()` for N=1000 and `reduce()` is faster than `int(''.join` for N=10. – jfs Dec 21 '16 at 14:44
  • The "list of digits" solution generalizes to [base](https://simple.wikipedia.org/wiki/Base_%28mathematics%29) conversion (given a list of digit values), and to polynomial evaluation using [Horner's method](https://en.wikipedia.org/wiki/Horner's_method) (given a list of coefficients from high to low). – leewz Jun 29 '19 at 22:21
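The generalizations mentioned in the last comment can be sketched like this (my illustration; `from_digits` and `horner` are hypothetical helper names):

```python
from functools import reduce

# base conversion: digit values, most significant first
def from_digits(digits, base):
    return reduce(lambda acc, d: acc * base + d, digits, 0)

# polynomial evaluation via Horner's method: coefficients from high to low
def horner(coeffs, x):
    return reduce(lambda acc, c: acc * x + c, coeffs, 0)

from_digits([1, 0, 1, 1], 2)  # 11 (0b1011)
horner([2, 0, 1], 3)          # 2*3**2 + 0*3 + 1 == 19
```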
51

reduce() can be used to find Least common multiple for 3 or more numbers:

#!/usr/bin/env python
from math import gcd
from functools import reduce

def lcm(*args):
    return reduce(lambda a,b: a * b // gcd(a, b), args)

Example:

>>> lcm(100, 23, 98)
112700
>>> lcm(*range(1, 20))
232792560
Mabadai
jfs
  • 1
    What is `lcm` in the second line? – beardc May 24 '12 at 03:01
  • 1
    @BirdJaguarIV: follow [the link](http://stackoverflow.com/questions/147515/least-common-multiple-for-3-or-more-numbers#147539) in the answer. `lcm()` returns least common multiple of two numbers. – jfs May 24 '12 at 11:51
40

reduce() could be used to resolve dotted names (where eval() is too unsafe to use):

>>> import __main__
>>> reduce(getattr, "os.path.abspath".split('.'), __main__)
<function abspath at 0x009AB530>
jfs
  • Please, could you elaborate more about "dotted names"? Why does it work on __main__? Your example is not generic and fails on 'string' attributes. – Mabadai Nov 16 '22 at 15:02
  • @Mabadai: `__main__` is a stand-in for an object you want to get your attributes from. It is a module object (the current module where the repl executes) in the example. It can be any object you like as long as it has desired attributes. – jfs Nov 16 '22 at 18:06
24

Find the intersection of N given lists:

input_list = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]

result = reduce(set.intersection, map(set, input_list))

returns:

result = set([3, 4, 5])

via: Python - Intersection of two lists

ssoler
  • 1
    [see my comment to the corresponding answer](http://stackoverflow.com/questions/642763/python-intersection-of-two-lists/1404146#comment18842466_1404146) – jfs Dec 05 '12 at 06:55
13

I think reduce is a silly command. Hence:

reduce(lambda hold, next: hold + chr(((ord(next.upper()) - 65) + 13) % 26 + 65), 'znlorabggbbhfrshy', '')  # ROT13-decodes to 'MAYBENOTTOOUSEFUL'
Chris X
11

Function composition: If you already have a list of functions that you'd like to apply in succession, such as:

color = lambda x: x.replace('brown', 'blue')
speed = lambda x: x.replace('quick', 'slow')
work = lambda x: x.replace('lazy', 'industrious')
fs = [str.lower, color, speed, work, str.title]

Then you can apply them all consecutively with:

>>> call = lambda s, func: func(s)
>>> s = "The Quick Brown Fox Jumps Over the Lazy Dog"
>>> reduce(call, fs, s)
'The Slow Blue Fox Jumps Over The Industrious Dog'

In this case, method chaining may be more readable. But sometimes it isn't possible, and this kind of composition may be more readable and maintainable than a f1(f2(f3(f4(x)))) kind of syntax.

beardc
11

The usage of reduce that I found in my code involved the situation where I had some class structure for logic expression and I needed to convert a list of these expression objects to a conjunction of the expressions. I already had a function make_and to create a conjunction given two expressions, so I wrote reduce(make_and,l). (I knew the list wasn't empty; otherwise it would have been something like reduce(make_and,l,make_true).)

This is exactly the reason that (some) functional programmers like reduce (or fold functions, as such functions are typically called). There are often already many binary functions like +, *, min, max, concatenation and, in my case, make_and and make_or. Having a reduce makes it trivial to lift these operations to lists (or trees or whatever you got, for fold functions in general).

Of course, if certain instantiations (such as sum) are often used, then you don't want to keep writing reduce. However, instead of defining the sum with some for-loop, you can just as easily define it with reduce.
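For example, a hypothetical `my_sum` defined via reduce rather than a for-loop:

```python
from functools import reduce
import operator

def my_sum(xs):
    # the initializer 0 makes the empty list return 0, just like the builtin sum()
    return reduce(operator.add, xs, 0)

my_sum([1, 2, 3, 4])  # 10
my_sum([])            # 0
```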

Readability, as mentioned by others, is indeed an issue. You could argue, however, that the only reason people find reduce less "clear" is that it is not a function that many people know and/or use.

mweerden
  • to guard against empty list you could exploit short-circuit behavior of `and` operator: `L and reduce(make_and, L)` if returning empty list is appropriate in this case – jfs Dec 05 '12 at 07:14
8

You could replace value = json_obj['a']['b']['c']['d']['e'] with:

value = reduce(dict.__getitem__, 'abcde', json_obj)

This assumes you already have the path a/b/c/.. as a list; see, for example, Change values in dict of nested dicts using items in a list.

martineau
jfs
7

@Blair Conrad: You could also implement your glob/reduce using sum, like so:

files = sum([glob.glob(f) for f in args], [])

This is less verbose than either of your two examples, is perfectly Pythonic, and is still only one line of code.

So to answer the original question, I personally try to avoid using reduce because it's never really necessary and I find it to be less clear than other approaches. However, some people get used to reduce and come to prefer it to list comprehensions (especially Haskell programmers). But if you're not already thinking about a problem in terms of reduce, you probably don't need to worry about using it.

Eli Courtwright
  • 2
    Both `sum` and `reduce` lead to quadratic behavior. It can be done in linear time: [`files = chain.from_iterable(imap(iglob, args))`](http://stackoverflow.com/questions/15995/useful-code-which-uses-reduce-in-python/282678#comment18841937_16091). Though it probably doesn't matter in this case due to time it takes for glob() to access a disk. – jfs Dec 05 '12 at 07:04
6

reduce can be used to support chained attribute lookups:

reduce(getattr, ('request', 'user', 'email'), self)

Of course, this is equivalent to

self.request.user.email

but it's useful when your code needs to accept an arbitrary list of attributes.

(Chained attributes of arbitrary length are common when dealing with Django models.)
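A sketch of how this is often wrapped up (the helper name `rgetattr` and the default handling are my own additions, not from the answer):

```python
from functools import reduce

class Node:
    pass

def rgetattr(obj, dotted, default=None):
    # resolve 'a.b.c' one attribute at a time; fall back to default if missing
    try:
        return reduce(getattr, dotted.split('.'), obj)
    except AttributeError:
        return default

# tiny demo object graph
root = Node()
root.child = Node()
root.child.name = 'leaf'

rgetattr(root, 'child.name')           # 'leaf'
rgetattr(root, 'child.missing', None)  # None
```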

Jian
5

reduce is useful when you need to find the union or intersection of a sequence of set-like objects.

>>> reduce(operator.or_, ({1}, {1, 2}, {1, 3}))  # union
{1, 2, 3}
>>> reduce(operator.and_, ({1}, {1, 2}, {1, 3}))  # intersection
{1}

(Apart from actual sets, an example of these are Django's Q objects.)

On the other hand, if you're dealing with bools, you should use any and all:

>>> any((True, False, True))
True
Jian
3

Reduce isn't limited to scalar operations; it can also be used to sort things into buckets. (This is what I use reduce for most often).

Imagine a case in which you have a list of objects, and you want to re-organize it hierarchically based on properties stored flatly in the object. In the following example, I produce a list of metadata objects related to articles in an XML-encoded newspaper with the articles function. articles generates a list of XML elements, and then maps through them one by one, producing objects that hold some interesting info about them. On the front end, I'm going to want to let the user browse the articles by section/subsection/headline. So I use reduce to take the list of articles and return a single dictionary that reflects the section/subsection/article hierarchy.

from lxml import etree
from Reader import Reader

class IssueReader(Reader):
    def articles(self):
        arts = self.q('//div3')  # inherited ... runs an xpath query against the issue
        subsection = etree.XPath('./ancestor::div2/@type')
        section = etree.XPath('./ancestor::div1/@type')
        header_text = etree.XPath('./head//text()')
        return map(lambda art: {
            'text_id': self.id,
            'path': self.getpath(art)[0],
            # guard against empty XPath results before indexing
            'subsection': (subsection(art) or ['[none]'])[0],
            'section': (section(art) or ['[none]'])[0],
            'headline': (''.join(header_text(art)) or '[none]')
        }, arts)

    def by_section(self):
        arts = self.articles()

        def extract(acc, art):  # acc for accumulator
            section = acc.get(art['section'], False)
            if section:
                subsection = section.get(art['subsection'], False)
                if subsection:
                    subsection.append(art)
                else:
                    section[art['subsection']] = [art]
            else:
                acc[art['section']] = {art['subsection']: [art]}
            return acc

        return reduce(extract, arts, {})

I give both functions here because I think it shows how map and reduce can complement each other nicely when dealing with objects. The same thing could have been accomplished with a for loop, but spending some serious time with a functional language has tended to make me think in terms of map and reduce.

By the way, if anybody has a better way to set properties like I'm doing in extract, where the parents of the property you want to set might not exist yet, please let me know.
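In answer to that aside: one option is dict.setdefault, which creates the missing parents on demand. This is my own suggestion, not part of the original answer, and it uses a simplified section → subsection → list structure rather than the ({}, []) tuples above:

```python
from functools import reduce

def extract(acc, art):
    # setdefault returns the existing value or inserts the default first,
    # so missing section/subsection levels appear automatically
    acc.setdefault(art['section'], {}) \
       .setdefault(art['subsection'], []).append(art)
    return acc

# hypothetical article records for illustration
articles = [
    {'section': 'news', 'subsection': 'local', 'headline': 'A'},
    {'section': 'news', 'subsection': 'world', 'headline': 'B'},
    {'section': 'news', 'subsection': 'local', 'headline': 'C'},
]
tree = reduce(extract, articles, {})
# tree['news']['local'] holds the two 'local' articles, in input order
```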

Zoran Pavlovic
tborg
3

reduce can be used to get the list with the maximum nth element:

reduce(lambda x,y: x if x[2] > y[2] else y,[[1,2,3,4],[5,2,5,7],[1,6,0,2]])

This returns [5, 2, 5, 7], as it is the list with the largest third element.

Sidharth C. Nadhan
3

Not sure if this is what you are after but you can search source code on Google.

Follow the link for a search on 'function:reduce() lang:python' on Google Code search

At first glance the following projects use reduce()

  • MoinMoin
  • Zope
  • Numeric
  • ScientificPython

etc., but these are hardly surprising, since they are huge projects.

The functionality of reduce can be done using function recursion, which I guess Guido thought was more explicit.
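As a sketch of that equivalence (my illustration), a recursive reduce might look like:

```python
def my_reduce(func, seq, init):
    # recursive equivalent of functools.reduce(func, seq, init);
    # Python has no tail-call optimization, so prefer the builtin for long inputs
    if not seq:
        return init
    return my_reduce(func, seq[1:], func(init, seq[0]))

my_reduce(lambda a, b: a + b, [1, 2, 3, 4], 0)  # 10
```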

Update:

Since Google's Code Search was discontinued on 15-Jan-2012, besides reverting to regular Google searches, there's something called the Code Snippets Collection that looks promising. A number of other resources are mentioned in answers to this (closed) question: Replacement for Google Code Search?.

Update 2 (29-May-2017):

A good source for Python examples (in open-source code) is the Nullege search engine.

martineau
Brendan
  • 1
    "The functionality of reduce can be done using function recursion" ...Or a `for` loop. – Jason Orendorff Dec 10 '09 at 23:13
  • 2
    Also, searching for reduce() yields projects that define reduce functions within their code. You should search for lang:python "reduce(" to find actual usages of the built-in function. – Seun Osewa Mar 12 '10 at 18:08
  • @Seun Osewa: Even searching for `lang:python "reduce("` will find definitions of `reduce` depending on the source code coding style. – martineau Apr 21 '11 at 18:32
3

I'm writing a compose function for a language, so I construct the composed function using reduce along with my apply operator.

In a nutshell, compose takes a list of functions to compose into a single function. If I have a complex operation that is applied in stages, I want to put it all together like so:

complexop = compose(stage4, stage3, stage2, stage1)

This way, I can then apply it to an expression like so:

complexop(expression)

And I want it to be equivalent to:

stage4(stage3(stage2(stage1(expression))))

Now, to build my internal objects, I want it to say:

Lambda([Symbol('x')], Apply(stage4, Apply(stage3, Apply(stage2, Apply(stage1, Symbol('x'))))))

(The Lambda class builds a user-defined function, and Apply builds a function application.)

Now, reduce, unfortunately, folds the wrong way, so I wound up using, roughly:

reduce(lambda x,y: Apply(y, x), reversed(args + [Symbol('x')]))

To figure out what reduce produces, try these in the REPL:

reduce(lambda x, y: (x, y), range(1, 11))            # left-nested:  (((...(1, 2), 3)...), 10)
reduce(lambda x, y: (y, x), reversed(range(1, 11)))  # right-nested: (1, (2, (...(9, 10)...)))
ben
  • I've used `compose = lambda *funcs: lambda arg: reduce(lambda x, f: f(x), reversed(funcs), arg)` to [generate all possible combinations of functions for performance testing.](http://stackoverflow.com/a/13656047/4279) – jfs Dec 05 '12 at 06:00
2

After grepping my code, it seems the only thing I've used reduce for is calculating the factorial:

reduce(operator.mul, xrange(1, x+1) or (1,))
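A Python 3 version of the same trick (my sketch; `range`, like `xrange`, is falsy when empty):

```python
from functools import reduce
import operator

def fact(x):
    # `or (1,)` handles x == 0: range(1, 1) is empty and falsy, so 0! == 1
    return reduce(operator.mul, range(1, x + 1) or (1,))

fact(0)  # 1
fact(5)  # 120
```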
Tomi Kyöstilä
2
import os

files = [
    # full filenames
    "var/log/apache/errors.log",
    "home/kane/images/avatars/crusader.png",
    "home/jane/documents/diary.txt",
    "home/kane/images/selfie.jpg",
    "var/log/abc.txt",
    "home/kane/.vimrc",
    "home/kane/images/avatars/paladin.png",
]

# unfolding of a plain filename list into a file tree
fs_tree = ({}, # dict of folders
           []) # list of files
for full_name in files:
    path, fn = os.path.split(full_name)
    reduce(
        # this function walks deep into the path
        # and creates placeholders for subfolders
        lambda d, k: d[0].setdefault(k,         # walk deep
                                     ({}, [])), # or create subfolder storage
        path.split(os.path.sep),
        fs_tree
    )[1].append(fn)

print fs_tree
#({'home': (
#    {'jane': (
#        {'documents': (
#           {},
#           ['diary.txt']
#        )},
#        []
#    ),
#    'kane': (
#       {'images': (
#          {'avatars': (
#             {},
#             ['crusader.png',
#             'paladin.png']
#          )},
#          ['selfie.jpg']
#       )},
#       ['.vimrc']
#    )},
#    []
#  ),
#  'var': (
#     {'log': (
#         {'apache': (
#            {},
#            ['errors.log']
#         )},
#         ['abc.txt']
#     )},
#     [])
#},
#[])
  • 2
    Could you perhaps add a little explanation as to what's going on here? Otherwise, the usefulness is really not obvious at all. – Zoran Pavlovic Sep 25 '14 at 08:36
2

I just found a useful use of reduce: splitting a string without removing the delimiter. The code is entirely from the Programatically Speaking blog. Here's the code:

reduce(lambda acc, elem: acc[:-1] + [acc[-1] + elem] if elem == "\n" else acc + [elem], re.split("(\n)", "a\nb\nc\n"), [])

Here's the result:

['a\n', 'b\n', 'c\n', '']

Note that it handles edge cases that the popular answer on SO doesn't. For a more in-depth explanation, see the original blog post.

MatthewRock
2

I used reduce to concatenate a list of PostgreSQL search vectors with the || operator in sqlalchemy-searchable:

vectors = (self.column_vector(getattr(self.table.c, column_name))
           for column_name in self.indexed_columns)
concatenated = reduce(lambda x, y: x.op('||')(y), vectors)
compiled = concatenated.compile(self.conn)
bjmc
1

I have an old Python implementation of pipegrep that uses reduce and the glob module to build a list of files to process:

files = []
files.extend(reduce(lambda x, y: x + y, map(glob.glob, args)))

I found it handy at the time, but it's really not necessary, as something similar is just as good, and probably more readable:

files = []
for f in args:
    files.extend(glob.glob(f))
Blair Conrad
  • How about a list comprehension? This seems like a perfect application for it: `files = [glob.glob(f) for f in args]` – steveha Mar 11 '10 at 07:28
  • Actually, @steveha, your example will result in a list of lists of expanded globs, rather than a flat list of all items that match the globs, but you could use a list comprehension + sum, as @Eli Courtwright points out. – Blair Conrad Mar 12 '10 at 02:14
  • 1
    Okay, you are correct, sorry about that. I still don't like the combination of extend/reduce/lambda/map very much! I would recommend importing `itertools`, using the `flatten()` recipe from http://docs.python.org/library/itertools.html, and then writing: `files = flatten(glob.glob(f) for f in args)` (And this time, I tested the code before posting it, and I know this works correctly.) – steveha Mar 12 '10 at 20:04
  • `files = chain.from_iterable(imap(iglob, args))` where `chain`, `imap` are from `itertools` module and `glob.iglob` is useful if a pattern from `args` may yield files from several directories. – jfs Dec 05 '12 at 06:11
1

Let's say there is some yearly statistical data stored in a list of Counters. We want to find the MIN/MAX value for each month across the different years. For example, for January it would be 10, and for February it would be 15. We need to store the results in a new Counter.

from collections import Counter

stat2011 = Counter({"January": 12, "February": 20, "March": 50, "April": 70, "May": 15,
           "June": 35, "July": 30, "August": 15, "September": 20, "October": 60,
           "November": 13, "December": 50})

stat2012 = Counter({"January": 36, "February": 15, "March": 50, "April": 10, "May": 90,
           "June": 25, "July": 35, "August": 15, "September": 20, "October": 30,
           "November": 10, "December": 25})

stat2013 = Counter({"January": 10, "February": 60, "March": 90, "April": 10, "May": 80,
           "June": 50, "July": 30, "August": 15, "September": 20, "October": 75,
           "November": 60, "December": 15})

stat_list = [stat2011, stat2012, stat2013]

print reduce(lambda x, y: x & y, stat_list)     # MIN
print reduce(lambda x, y: x | y, stat_list)     # MAX
lessthanl0l
1

I have objects representing some kind of overlapping intervals (genomic exons), and redefined their intersection using __and__:

class Exon:
    def __init__(self):
        ...
    def __and__(self,other):
        ...
        length = self.length + other.length  # (e.g.)
        return self.__class__(...length,...)

Then when I have a collection of them (for instance, in the same gene), I use

intersection = reduce(lambda x,y: x&y, exons)
JulienD
1
def dump(fname,iterable):
  with open(fname,'w') as f:
    reduce(lambda x, y: f.write(unicode(y,'utf-8')), iterable)
deddu
  • This discards `x` and writes `y`? What if `x` was important? – Stef Oct 07 '20 at 19:16
  • `x` is the result of the previous iteration. In this case it's the output of `f.write`, i.e. the number of characters written. Unlikely to be useful. That said, this code is just a way to show how you can use reduce to iterate through an iterable and save to a file. It's like `for y in iterable: x = f.write(y)` – deddu Oct 08 '20 at 11:55
  • I see! But then the very first `x` encountered is an element of the initial iterable, not the result of a `write` operator? Also, this code relies on the order in which `reduce` encounters its arguments, so it wouldn't work if we substituted `pyspark.reduce` for `functools.reduce`? – Stef Oct 08 '20 at 12:36
0

Using reduce() to find out whether a list of dates is consecutive:

from datetime import date, timedelta


def checked(d1, d2):
    """
    We assume the date list is sorted.
    If d2 & d1 are different by 1, everything up to d2 is consecutive, so d2
    can advance to the next reduction.
    If d2 & d1 are not different by 1, returning d1 - 1 for the next reduction
    will guarantee the result produced by reduce() to be something other than
    the last date in the sorted date list.

    Definition 1: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is considered consecutive
    Definition 2: 1/1/14, 1/2/14, 1/2/14, 1/3/14 is considered not consecutive

    """
    #if (d2 - d1).days == 1 or (d2 - d1).days == 0:  # for Definition 1
    if (d2 - d1).days == 1:                          # for Definition 2
        return d2
    else:
        return d1 + timedelta(days=-1)

# datelist = [date(2014, 1, 1), date(2014, 1, 3),
#             date(2013, 12, 31), date(2013, 12, 30)]

# datelist = [date(2014, 2, 19), date(2014, 2, 19), date(2014, 2, 20),
#             date(2014, 2, 21), date(2014, 2, 22)]

datelist = [date(2014, 2, 19), date(2014, 2, 21),
            date(2014, 2, 22), date(2014, 2, 20)]

datelist.sort()

if datelist[-1] == reduce(checked, datelist):
    print "dates are consecutive"
else:
    print "dates are not consecutive"
lessthanl0l