87

I have two lists of objects. Each list is already sorted by a property of the object that is of the datetime type. I would like to combine the two lists into one sorted list. Is the best way just to do a sort or is there a smarter way to do this in Python?

Bjorn
  • 69,215
  • 39
  • 136
  • 164

22 Answers

129

is there a smarter way to do this in Python

This hasn't been mentioned, so I'll go ahead: there is a `merge` function in the `heapq` module of the standard library in Python 2.6+. If all you want is to get things done, this is likely the better option. Of course, if you want to implement your own, the merge step of merge sort is the way to go.

>>> list1 = [1, 5, 8, 10, 50]
>>> list2 = [3, 4, 29, 41, 45, 49]
>>> from heapq import merge
>>> list(merge(list1, list2))
[1, 3, 4, 5, 8, 10, 29, 41, 45, 49, 50]

Here's the documentation.

OrangeDog
  • 36,653
  • 12
  • 122
  • 207
sykora
  • 96,888
  • 11
  • 64
  • 71
  • 5
I've added a link to heapq.py. `merge()` is implemented as a pure Python function, so it is easy to port to older Python versions. – jfs Jan 21 '09 at 20:36
  • 2
Though correct, this solution seems to be an order of magnitude slower than the `sorted(l1+l2)` solution. – sono Sep 02 '17 at 10:43
  • 2
    @Ale: That's not wholly surprising. `list.sort` (which `sorted` is implemented in terms of) uses [TimSort](https://en.wikipedia.org/wiki/Timsort), which is optimized to take advantage of existing ordering (or reverse ordering) in the underlying sequence, so even though it's theoretically `O(n log n)`, in this case, it's much closer to `O(n)` to perform the sort. Beyond that, CPython's `list.sort` is implemented in C (avoiding interpreter overhead), while `heapq.merge` is mostly implemented in Python, and optimizes for the "many iterables" case in a way that slows the "two iterables" case. – ShadowRanger Mar 24 '18 at 00:43
  • 8
    The selling point for `heapq.merge` is that it doesn't require either the inputs or the outputs to be `list`; it can consume iterators/generators and produces a generator, so huge inputs/outputs (not stored in RAM at once) can be combined without swap thrashing. It also handles merging an arbitrary number of input iterables with lower overhead than might be expected (it uses a heap to coordinate the merging, so the overhead scales with the log of the number of iterables, not linearly, but as noted, that doesn't matter for the "two iterable" case). – ShadowRanger Mar 24 '18 at 00:51
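A short sketch of that lazy behaviour: three sorted generators are merged without ever being materialized, and only the first few results are pulled.

```python
from heapq import merge
from itertools import islice

# Three already-sorted, lazily produced streams
evens = (i for i in range(0, 1_000_000, 2))
odds = (i for i in range(1, 1_000_000, 2))
tens = (i for i in range(0, 1_000_000, 10))

# merge() returns a generator; islice pulls only the first ten items,
# so almost none of the million-element inputs are ever consumed
first_ten = list(islice(merge(evens, odds, tens), 10))
print(first_ten)  # [0, 0, 1, 2, 3, 4, 5, 6, 7, 8]
```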
119

People seem to be overcomplicating this. Just combine the two lists, then sort them:

>>> l1 = [1, 3, 4, 7]
>>> l2 = [0, 2, 5, 6, 8, 9]
>>> l1.extend(l2)
>>> sorted(l1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

..or shorter (and without modifying l1):

>>> sorted(l1 + l2)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

..easy! Plus, it uses only two built-in functions, so assuming the lists are of a reasonable size, it should be quicker than implementing the sorting/merging in a loop. More importantly, the above is much less code, and very readable.

If your lists are large (over a few hundred thousand items, I would guess), it may be quicker to use an alternative/custom sorting method, but there are likely other optimisations to be made first (e.g. not storing millions of datetime objects)
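Applied to the question's objects sorted by a datetime attribute, the one-liner only needs a `key` (the `Event` class and `when` attribute below are made up for illustration):

```python
from datetime import datetime
from operator import attrgetter

class Event:
    """Hypothetical stand-in for the question's objects."""
    def __init__(self, when):
        self.when = when

a = [Event(datetime(2009, 1, 1)), Event(datetime(2009, 1, 3))]
b = [Event(datetime(2009, 1, 2)), Event(datetime(2009, 1, 4))]

# Same idea as sorted(l1 + l2), with the comparison driven by the attribute
merged = sorted(a + b, key=attrgetter('when'))
print([e.when.day for e in merged])  # [1, 2, 3, 4]
```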

Using `timeit.Timer().repeat()` (which runs each function 1,000,000 times), I loosely benchmarked it against ghoseb's solution, and `sorted(l1+l2)` is substantially quicker:

merge_sorted_lists took..

[9.7439379692077637, 9.8844599723815918, 9.552299976348877]

sorted(l1+l2) took..

[2.860386848449707, 2.7589840888977051, 2.7682540416717529]
dbr
  • 165,801
  • 69
  • 278
  • 343
  • This is mostly due to a flaw in ghoseb's solution - it's actually O(n**2), so will perform worse than O(n lg(n)) sort. An O(n) merge will probably be faster than sorting, at least for sufficiently large input list (sort may well beat it for short lists). – Brian Jan 21 '09 at 09:57
  • 6
    Finally a sane answer, taking actual *benchmarking* into account. :-) --- Also, 1 line to maintain instead of 15-20 is much to be preferred. – Deestan Jan 21 '09 at 10:10
  • 27
    Sorting a very short list created by appending two lists will indeed be very fast, as the constant overheads will dominate. Try doing this for lists with several million items, or files on disk with several billion items, and you'll soon find out why merging is preferable. – Barry Kelly Jan 21 '09 at 10:16
  • 6
    @Barry: If you have "several billion items" and a speed requisite, *anything* in Python is the wrong answer. – Deestan Jan 21 '09 at 10:24
  • Actually, I've done some timings (see my answer). sorted() looks like it is indeed faster for most common cases (under a million items). This will obviously change with a more efficient implementation (eg written in C), or where operations are more time consuming (eg. disk based IO) – Brian Jan 21 '09 at 10:39
  • 11
    @Deestan: I disagree - there are times when speed will be dominated by other factors. Eg. if you're sorting data on-disk (merge 2 files), IO times will likely dominate and python's speed won't matter much, just the number of operations you do (and hence the algorithm). – Brian Jan 21 '09 at 10:41
The original question states that "each list [contains items] of the datetime type".. I'd worry about that long before I reimplement the sorting algorithm! – dbr Jan 21 '09 at 11:34
  • 1
    Remember to either pass a function that can compare dates within an object or override the _cmp_ method in the object. This is the same solution I gave below (though your benchmarks are very very welcome). – Josh Smeaton Jan 21 '09 at 11:53
  • 8
    Seriously? Benchmarking a sort function with a 10 entry list?? – Seun Osewa Jan 12 '10 at 00:21
  • 1
Seun: The answer doesn't specify how large the list is - it's entirely plausible the questioner (or a random Googler) might want to sort fairly small lists. The benchmark isn't exactly the point of my answer; I included it because it's silly to argue "x is faster than y" without providing any benchmarks at all. – dbr Jan 12 '10 at 20:47
Since Python uses timsort, I would expect this to _always_ be faster, since it does the merging like you would manually do, but in C instead. – Chris Cain Apr 15 '12 at 17:26
  • For useful benchmarks, compare `sorted` with `heapq.merge` for larger iterables (and larger numbers of iterables), and with custom `key` functions. – OrangeDog Jun 20 '18 at 14:11
  • @Deestan if you have *iterables* totaling several billion items then there's nothing wrong with using Python - though using list concatenation and `sorted` is a big mistake. – OrangeDog Jun 20 '18 at 14:13
54

Long story short, unless len(l1 + l2) ~ 1000000 use:

L = l1 + l2
L.sort()

merge vs. sort comparison

Description of the figure and source code can be found here.

The figure was generated by the following command:

$ python make-figures.py --nsublists 2 --maxn=0x100000 -s merge_funcs.merge_26 -s merge_funcs.sort_builtin
jfs
  • 399,953
  • 195
  • 994
  • 1,670
  • You're comparing it against a golfed solution, not one that's actually trying to be efficient. – OrangeDog Jun 20 '18 at 13:47
@OrangeDog I don't understand what you are talking about. The point of the answer is that adding two lists and sorting them may be faster for small input than heapq.merge() from Python 2.6 (despite `merge()` being O(n) in time and O(1) in space, while the sorting is O(n log n) in time; the whole algorithm is O(n) in space here). The comparison has only historical value now. – jfs Jun 20 '18 at 14:30
  • this answer has nothing to do with `heapq.merge`, you're comparing `sort` against somebody's code-golf submission. – OrangeDog Jun 20 '18 at 14:32
  • @OrangeDog **wrong**. [`merge_26()`](https://gist.github.com/zed/51074#file-merge_funcs-py) is from Python 2.6 heapq module. – jfs Jun 20 '18 at 14:39
  • you're the one who said "the source code can be found here" and linked to a code-golf answer. Don't blame people for thinking that the code that can be found there is what you tested. – OrangeDog Jun 20 '18 at 14:49
  • @OrangeDog: read the linked answer carefully. `merge_26()` and `merge_alabaster()` are different functions. – jfs Jun 20 '18 at 14:55
33

This is simply merging. Treat each list as if it were a stack, and continuously pop the smaller of the two stack heads, adding the item to the result list, until one of the stacks is empty. Then add all remaining items to the resulting list.

res = []
while l1 and l2:
    if l1[0] < l2[0]:
        res.append(l1.pop(0))
    else:
        res.append(l2.pop(0))

res += l1
res += l2
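One note on this sketch: `list.pop(0)` is O(n), so as written the loop is quadratic overall. The same stack-popping shape with `collections.deque` makes each pop O(1), and copying into deques leaves the input lists intact (a sketch):

```python
from collections import deque

def merge_deque(l1, l2):
    # deques give O(1) popleft(), unlike list.pop(0)
    d1, d2 = deque(l1), deque(l2)
    res = []
    while d1 and d2:
        res.append(d1.popleft() if d1[0] < d2[0] else d2.popleft())
    res.extend(d1)  # at most one of these
    res.extend(d2)  # is non-empty
    return res

print(merge_deque([1, 3, 4, 7], [0, 2, 5, 6, 8, 9]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```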
iamtodor
  • 734
  • 8
  • 21
Barry Kelly
  • 41,404
  • 5
  • 117
  • 189
18

There is a slight flaw in ghoseb's solution, making it O(n**2), rather than O(n).
The problem is that this is performing:

item = l1.pop(0)

With linked lists or deques this would be an O(1) operation, so wouldn't affect complexity, but since python lists are implemented as vectors, this copies the rest of the elements of l1 one space left, an O(n) operation. Since this is done each pass through the list, it turns an O(n) algorithm into an O(n**2) one. This can be corrected by using a method that doesn't alter the source lists, but just keeps track of the current position.

I've tried out benchmarking a corrected algorithm vs a simple sorted(l1+l2) as suggested by dbr

def merge(l1,l2):
    if not l1:  return list(l2)
    if not l2:  return list(l1)

    # l2 will contain last element.
    if l1[-1] > l2[-1]:
        l1,l2 = l2,l1

    it = iter(l2)
    y = next(it)
    result = []

    for x in l1:
        while y < x:
            result.append(y)
            y = next(it)
        result.append(x)
    result.append(y)
    result.extend(it)
    return result

I've tested these with lists generated with

l1 = sorted([random.random() for i in range(NITEMS)])
l2 = sorted([random.random() for i in range(NITEMS)])

For various sizes of list, I get the following timings (repeating 100 times):

# items:  1000   10000 100000 1000000
merge  :  0.079  0.798 9.763  109.044 
sort   :  0.020  0.217 5.948  106.882

So in fact, it looks like dbr is right: just using sorted() is preferable unless you're expecting very large lists, though it does have worse algorithmic complexity. The break-even point is at around a million items in each source list (2 million total).

One advantage of the merge approach though is that it is trivial to rewrite as a generator, which will use substantially less memory (no need for an intermediate list).
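Such a generator might look like the following sketch, using a sentinel object to mark an exhausted iterator so it also works on arbitrary iterables:

```python
def merge_gen(l1, l2):
    _done = object()  # sentinel: safe even if the inputs contain None
    it1, it2 = iter(l1), iter(l2)
    x, y = next(it1, _done), next(it2, _done)
    while x is not _done and y is not _done:
        if y < x:
            yield y
            y = next(it2, _done)
        else:  # take from l1 on ties, keeping the merge stable
            yield x
            x = next(it1, _done)
    while x is not _done:
        yield x
        x = next(it1, _done)
    while y is not _done:
        yield y
        y = next(it2, _done)

print(list(merge_gen([1, 3, 5], [2, 4, 6, 8])))  # [1, 2, 3, 4, 5, 6, 8]
```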

[Edit] I've retried this with a situation closer to the question - using a list of objects containing a field "date" which is a datetime object. The above algorithm was changed to compare against .date instead, and the sort method was changed to:

return sorted(l1 + l2, key=operator.attrgetter('date'))

This does change things a bit. The comparison being more expensive means that the number we perform becomes more important, relative to the constant-time speed of the implementation. This means merge makes up lost ground, surpassing the sort() method at 100,000 items instead. Comparing based on an even more complex object (large strings or lists for instance) would likely shift this balance even more.

# items:  1000   10000 100000  1000000[1]
merge  :  0.161  2.034 23.370  253.68
sort   :  0.111  1.523 25.223  313.20

[1]: Note: I actually only did 10 repeats for 1,000,000 items and scaled up accordingly as it was pretty slow.

Brian
  • 116,865
  • 28
  • 107
  • 112
  • Thanks for the fix. Would be great if you can exactly point out the flaw and your fix :) – Baishampayan Ghose Jan 21 '09 at 11:24
  • @ghoseb: I gave a brief description as a comment on your post, but I've now updated the answer to give more details - essentially l.pop() is an O(n) operation for lists. It's fixable by tracking position in some other manner (alternatively by popping from the tail instead, and reversing at the end) – Brian Jan 21 '09 at 11:42
  • Can you bench mark these same tests but comparing the dates like the question is requiring? I'm guessing this extra method will take up a fair amount of time relatively. – Josh Smeaton Jan 21 '09 at 11:57
  • I'd say the difference is due to the fact that sort() is implemented in c/c++ and compiled vs our merge() that is being interpreted. merge() should be faster on equal terms. – Drakosha Jan 21 '09 at 12:09
  • Good point Drakosha. Show's that benchmarking is indeed the only way to know for certain. – Josh Smeaton Jan 21 '09 at 12:13
  • @Josh: Good point - I've rerun the timings using code assuming an object with a datetime attribute. This does affect the point at which merge beats out sort(). – Brian Jan 21 '09 at 13:02
6

This is simple merging of two sorted lists. Take a look at the sample code below which merges two sorted lists of integers.

#!/usr/bin/env python
## merge.py -- Merge two sorted lists -*- Python -*-
## Time-stamp: "2009-01-21 14:02:57 ghoseb"

l1 = [1, 3, 4, 7]
l2 = [0, 2, 5, 6, 8, 9]

def merge_sorted_lists(l1, l2):
    """Merge sort two sorted lists

    Arguments:
    - `l1`: First sorted list
    - `l2`: Second sorted list
    """
    sorted_list = []

    # Copy both the args to make sure the original lists are not
    # modified
    l1 = l1[:]
    l2 = l2[:]

    while (l1 and l2):
        if (l1[0] <= l2[0]): # Compare both heads
            item = l1.pop(0) # Pop from the head
            sorted_list.append(item)
        else:
            item = l2.pop(0)
            sorted_list.append(item)

    # Add the remaining of the lists
    sorted_list.extend(l1 if l1 else l2)

    return sorted_list

if __name__ == '__main__':
    print(merge_sorted_lists(l1, l2))

This should work fine with datetime objects. Hope this helps.

Baishampayan Ghose
  • 19,928
  • 10
  • 56
  • 60
  • 3
    Unfortunately this is counterproductive - normally merge would be O(n), but because you're popping from the left of each list (an O(n) operation), you're actually making it an O(n**2) process - worse than the naive sorted(l1+l2) – Brian Jan 21 '09 at 09:54
  • @Brian I actually think that this solution is the cleanest of all and I believe you're right on O(n) complexity of popping the first element from the list. You can eliminate that problem by using deque from collections, which gives you O(1) when popping an item from either side. https://docs.python.org/2/library/collections.html#collections.deque – mohi666 May 08 '15 at 00:59
  • @Brian, `head, tail = l[0], l[1:]` also will have O(n**2) complexity? – Nikolay Fominyh Feb 12 '16 at 17:14
  • 2
    @Brian: As an alternative to `collections.deque`, it could also be solved by creating `l1` and `l2` in reverse order (`l1 = l1[::-1]`, `l2 = l2[::-1]`), then working from the right-hand side rather than the left, replacing `if l1[0] <= l2[0]:` with `if l1[-1] <= l2[-1]:`, replacing `pop(0)` with `pop()` and changing `sorted_list.extend(l1 if l1 else l2)` to `sorted_list.extend(reversed(l1 if l1 else l2))` – ShadowRanger Mar 24 '18 at 00:57
4
from datetime import datetime
from itertools import chain
from operator import attrgetter

class DT:
    def __init__(self, dt):
        self.dt = dt

list1 = [DT(datetime(2008, 12, 5, 2)),
         DT(datetime(2009, 1, 1, 13)),
         DT(datetime(2009, 1, 3, 5))]

list2 = [DT(datetime(2008, 12, 31, 23)),
         DT(datetime(2009, 1, 2, 12)),
         DT(datetime(2009, 1, 4, 15))]

list3 = sorted(chain(list1, list2), key=attrgetter('dt'))
for item in list3:
    print(item.dt)

The output:

2008-12-05 02:00:00
2008-12-31 23:00:00
2009-01-01 13:00:00
2009-01-02 12:00:00
2009-01-03 05:00:00
2009-01-04 15:00:00

I bet this is faster than any of the fancy pure-Python merge algorithms, even for large data. Python 2.6's heapq.merge is another story entirely.
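For comparison, on Python 3.5+ `heapq.merge` accepts the same kind of key function, so the example above can be merged lazily instead (a sketch reusing the `DT` class from this answer):

```python
from datetime import datetime
from heapq import merge
from operator import attrgetter

class DT:
    def __init__(self, dt):
        self.dt = dt

list1 = [DT(datetime(2008, 12, 5, 2)), DT(datetime(2009, 1, 1, 13))]
list2 = [DT(datetime(2008, 12, 31, 23)), DT(datetime(2009, 1, 2, 12))]

# key= was added to heapq.merge in Python 3.5
list3 = list(merge(list1, list2, key=attrgetter('dt')))
for item in list3:
    print(item.dt)
```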

akaihola
  • 26,309
  • 7
  • 59
  • 69
4
def merge_sort(a, b):
    pa = 0
    pb = 0
    result = []

    while pa < len(a) and pb < len(b):
        if a[pa] <= b[pb]:
            result.append(a[pa])
            pa += 1
        else:
            result.append(b[pb])
            pb += 1

    # Append whatever remains of either list
    remained = a[pa:] + b[pb:]
    result.extend(remained)

    return result
krezaeim
  • 170
  • 1
  • 7
3

Python's sort implementation "timsort" is specifically optimized for lists that contain ordered sections. Plus, it's written in C.

http://bugs.python.org/file4451/timsort.txt
http://en.wikipedia.org/wiki/Timsort

As people have mentioned, it may call the comparison function more times by some constant factor (but may still make those calls in a shorter overall time in many cases!).

I would never rely on this, however. – Daniel Nadasi

I believe the Python developers are committed to keeping timsort, or at least keeping a sort that's O(n) in this case.

Generalized sorting (i.e. leaving apart radix sorts from limited value domains)
cannot be done in less than O(n log n) on a serial machine. – Barry Kelly

Right, sorting in the general case can't be faster than that. But since O() is an upper bound, timsort being O(n log n) on arbitrary input doesn't contradict its being O(n) given sorted(L1) + sorted(L2).
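One way to check this claim without wall-clock noise is to count comparisons directly. The wrapper class below is illustrative; on two concatenated sorted runs, Timsort performs far fewer comparisons than the n log n worst case would suggest:

```python
class Counted:
    """Wraps a value and counts every < comparison the sort makes."""
    calls = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.calls += 1
        return self.v < other.v

n = 10_000
l1 = [Counted(i) for i in range(0, 2 * n, 2)]  # sorted evens
l2 = [Counted(i) for i in range(1, 2 * n, 2)]  # sorted odds
merged = sorted(l1 + l2)

assert [c.v for c in merged] == list(range(2 * n))
# n log n for 20,000 items is roughly 286,000 comparisons;
# Timsort detects the two pre-sorted runs and merges them in far fewer
print(Counted.calls)
```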

FutureNerd
  • 769
  • 7
  • 9
3

An implementation of the merging step in Merge Sort that iterates through both lists:

def merge_lists(L1, L2):
    """
    L1, L2: sorted lists of numbers, one of them could be empty.

    returns a merged and sorted list of L1 and L2.
    """

    # When one of them is an empty list, returns the other list
    if not L1:
        return L2
    elif not L2:
        return L1

    result = []
    i = 0
    j = 0

    for k in range(len(L1) + len(L2)):
        if L1[i] <= L2[j]:
            result.append(L1[i])
            if i < len(L1) - 1:
                i += 1
            else:
                result += L2[j:]  # When the last element in L1 is reached,
                break             # append the rest of L2 to result.
        else:
            result.append(L2[j])
            if j < len(L2) - 1:
                j += 1
            else:
                result += L1[i:]  # When the last element in L2 is reached,
                break             # append the rest of L1 to result.

    return result

L1 = [1, 3, 5]
L2 = [2, 4, 6, 8]
merge_lists(L1, L2)               # Should return [1, 2, 3, 4, 5, 6, 8]
merge_lists([], L1)               # Should return [1, 3, 5]

I'm still learning about algorithms, please let me know if the code could be improved in any aspect, your feedback is appreciated, thanks!

timokratia
  • 85
  • 1
  • 8
1

A recursive implementation is below. Average performance is O(n).

def merge_sorted_lists(A, B, sorted_list=None):
    if sorted_list is None:
        sorted_list = []
    if not B:  # guard: an empty B would make B[0] fail below
        return sorted_list + list(A)

    slice_index = 0
    for element in A:
        if element <= B[0]:
            sorted_list.append(element)
            slice_index += 1
        else:
            return merge_sorted_lists(B, A[slice_index:], sorted_list)

    return sorted_list + B

or generator with improved space complexity:

def merge_sorted_lists_as_generator(A, B):
    if not B:  # guard: an empty B would make B[0] fail below
        for element in A:
            yield element
        return

    slice_index = 0
    for element in A:
        if element <= B[0]:
            slice_index += 1
            yield element
        else:
            for sorted_element in merge_sorted_lists_as_generator(B, A[slice_index:]):
                yield sorted_element
            return

    for element in B:
        yield element
Pavel Paulau
  • 786
  • 2
  • 7
  • 19
1

Use the 'merge' step of merge sort, it runs in O(n) time.

From wikipedia (pseudo-code):

function merge(left,right)
    var list result
    while length(left) > 0 and length(right) > 0
        if first(left) ≤ first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    end while
    while length(left) > 0 
        append left to result
    while length(right) > 0 
        append right to result
    return result
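A direct Python transcription of that pseudocode, with `rest(...)` replaced by an advancing index so the merge stays O(n) (the function name is just illustrative):

```python
def merge(left, right):
    result = []
    i = j = 0
    # Main loop: take the smaller head, preferring `left` on ties
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # at most one of these slices
    result.extend(right[j:])  # is non-empty
    return result

print(merge([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```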
Mongoose
  • 4,555
  • 1
  • 16
  • 7
1

This is my solution in linear time without editing l1 and l2:

def merge(l1, l2):
    m, m2 = len(l1), len(l2)
    newList = []
    l, r = 0, 0
    while l < m and r < m2:
        if l1[l] < l2[r]:
            newList.append(l1[l])
            l += 1
        else:
            newList.append(l2[r])
            r += 1
    return newList + l1[l:] + l2[r:]
hardfork
  • 2,470
  • 1
  • 23
  • 43
1

I'd go with the following answer.

def merge_sort(l):
    if len(l) < 2:
        return l
    mid = len(l) // 2  # integer division; no need for math.floor
    left = merge_sort(l[:mid])
    right = merge_sort(l[mid:])
    return merge(left, right)

def merge(a, b):
    i, j = 0, 0
    a_len, b_len = len(a), len(b)
    output_length = a_len + b_len
    out = list()
    for _ in range(output_length):
        if i < a_len and j < b_len and a[i] < b[j]:
            out.append(a[i])
            i = i + 1
        elif j < b_len:
            out.append(b[j])
            j = j + 1
    
    while (i < a_len):
        out.append(a[i])
        i += 1
    
    while (j < b_len):
        out.append(b[j])
        j += 1
        
    return out


if __name__ == '__main__':
    print(merge_sort([7, 8, 9, 4, 5, 6]))
0

If you want to do it in a manner more consistent with learning what goes on in the iteration, try this:

def merge_arrays(a, b):
    l = []

    while len(a) > 0 and len(b) > 0:
        if a[0] < b[0]:
            l.append(a.pop(0))
        else:
            l.append(b.pop(0))

    l.extend(a + b)
    print(l)
Leon
  • 5,701
  • 3
  • 38
  • 38
0
import random

n = int(input("Enter size of table 1"))  # size of list 1
m = int(input("Enter size of table 2"))  # size of list 2
tb1 = [random.randrange(1, 101) for _ in range(n)]  # fill the lists with random
tb2 = [random.randrange(1, 101) for _ in range(m)]  # numbers between 1 and 100
tb1.sort()  # sort list 1
tb2.sort()  # sort list 2
fus = []    # create an empty list
print(tb1)  # print list 1
print('------------------------------------')
print(tb2)  # print list 2
print('------------------------------------')
i = 0; j = 0  # indices to traverse the lists
while i < n and j < m:
    if tb1[i] < tb2[j]:
        fus.append(tb1[i])
        i += 1
    else:
        fus.append(tb2[j])
        j += 1

if i < n:
    fus += tb1[i:n]
if j < m:
    fus += tb2[j:m]

print(fus)

# this code merges two sorted lists into one sorted list (fus) without
# sorting fus itself
  • 1
    It's not clear whether this is an answer to the question, let alone whether it actually does? Can you provide some sort of explanation? – Ben Aug 30 '14 at 12:00
Sorry, but I didn't understand what you want! – OUSSAMA EL ISSAOUI Aug 30 '14 at 12:10
  • 1
    You'll note that the higher voted answers (and most of the others) have some text that explain what's happening in the answer and _why_ that answer is an answer to the question.. – Ben Aug 30 '14 at 12:11
because it merges two lists into one sorted list, and that is the answer to the question – OUSSAMA EL ISSAOUI Aug 30 '14 at 12:19
0

This uses the merge step of merge sort, but with generators. Time complexity is O(n).

def merge(lst1, lst2):
    len1 = len(lst1)
    len2 = len(lst2)
    i, j = 0, 0
    while i < len1 and j < len2:
        if lst1[i] < lst2[j]:
            yield lst1[i]
            i += 1
        else:
            yield lst2[j]
            j += 1
    if i == len1:
        while j < len2:
            yield lst2[j]
            j += 1
    elif j == len2:
        while i < len1:
            yield lst1[i]
            i += 1

l1 = [1, 3, 5, 7]
l2 = [2, 4, 6, 8, 9]
mergelst = merge(l1, l2)
print(*mergelst)
Atul
  • 61
  • 2
  • 3
0

Well, the naive approach (combining the two lists into one large list and sorting it) has O(N*log(N)) complexity. On the other hand, if you implement the merge manually (I do not know of any ready-made code in the Python libraries for this, but I'm no expert), the complexity will be O(N), which is clearly faster. The idea is described very well in the post by Barry Kelly.

Drakosha
  • 11,925
  • 4
  • 39
  • 52
  • As a point of interest, the python sorting algorithm is very good, so the performance would likely be better than O(n log n), since the algorithm often takes advantage of regularities in the input data. I would never rely on this, however. – Daniel Nadasi Jan 21 '09 at 07:45
  • Generalized sorting (i.e. leaving apart radix sorts from limited value domains) cannot be done in less than O(n log n) on a serial machine. – Barry Kelly Jan 21 '09 at 10:17
0

This code has time complexity O(n) and can merge lists of any data type, given a quantifying function as the parameter func. It produces a new merged list and does not modify either of the lists passed as arguments.

def merge_sorted_lists(listA,listB,func):
    merged = list()
    iA = 0
    iB = 0
    while True:
        hasA = iA < len(listA)
        hasB = iB < len(listB)
        if not hasA and not hasB:
            break
        valA = None if not hasA else listA[iA]
        valB = None if not hasB else listB[iB]
        a = None if not hasA else func(valA)
        b = None if not hasB else func(valB)
        if (not hasB or a<b) and hasA:
            merged.append(valA)
            iA += 1
        elif hasB:
            merged.append(valB)
            iB += 1
    return merged
Daniel
  • 1,599
  • 1
  • 16
  • 19
0

in O(m+n) complexity

def merge_sorted_list(nums1: list, nums2: list) -> list:
    m = len(nums1)
    n = len(nums2)

    nums1 = nums1.copy()
    nums2 = nums2.copy()
    nums1.extend([0 for i in range(n)])
    while m > 0 and n > 0:
        if nums1[m-1] >= nums2[n-1]:
            nums1[m+n-1] = nums1[m-1]
            m -= 1
        else:
            nums1[m+n-1] = nums2[n-1]
            n -= 1
    if n > 0:
        nums1[:n] = nums2[:n]
    return nums1

l1 = [1, 3, 4, 7]    
l2 =  [0, 2, 5, 6, 8, 9]    
print(merge_sorted_list(l1, l2))

output

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
sahasrara62
  • 10,069
  • 3
  • 29
  • 44
-1
def compareDate(obj1, obj2):
    if obj1.getDate() < obj2.getDate():
        return -1
    elif obj1.getDate() > obj2.getDate():
        return 1
    else:
        return 0



combined = list1 + list2
combined.sort(compareDate)

Will sort the combined list in place (avoid naming the variable `list`, which shadows the built-in). Define your own function for comparing two objects, and pass that function into the built-in sort function.

Do NOT use bubble sort, it has horrible performance.
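One caveat for later readers: Python 3 removed the comparator argument to `sort()`. A sketch of both workarounds, with a made-up `Item` class standing in for the question's objects:

```python
from datetime import date
from functools import cmp_to_key

class Item:
    """Hypothetical stand-in for the question's objects."""
    def __init__(self, d):
        self._d = d
    def getDate(self):
        return self._d

def compareDate(obj1, obj2):
    if obj1.getDate() < obj2.getDate():
        return -1
    elif obj1.getDate() > obj2.getDate():
        return 1
    return 0

list1 = [Item(date(2009, 1, 3)), Item(date(2009, 1, 4))]
list2 = [Item(date(2009, 1, 1)), Item(date(2009, 1, 2))]

combined = list1 + list2
combined.sort(key=cmp_to_key(compareDate))  # wrap the old-style comparator
# ...or, simpler and usually faster, skip the comparator entirely:
# combined.sort(key=lambda obj: obj.getDate())
print([i.getDate().day for i in combined])  # [1, 2, 3, 4]
```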

Josh Smeaton
  • 47,939
  • 24
  • 129
  • 164
Merge sort will definitely be faster, but a little more complicated if you have to implement it yourself. I *think* Python uses quicksort. – Josh Smeaton Jan 21 '09 at 07:46
-2

Hope this helps. Pretty simple and straightforward:

l1 = [1, 3, 4, 7]

l2 = [0, 2, 5, 6, 8, 9]

l3 = l1 + l2

l3.sort()

print (l3)

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

J_man
  • 1
  • 1
OP didn't ask how to add and sort lists; they asked whether there was a better or more "Pythonic" way of doing it in their context. – ForeverZer0 Jun 27 '18 at 02:22