
For the following problem I am looking for a solution that is both correct and fast over a very large set of data. I have access to Databricks infrastructure and can write SQL and Python (PySpark) interchangeably. The problem is as follows: given a dataset of roughly 1M rows in the format below.

duplicate  parent
2          1
3          1
4          1
5          2
3          2
2          6
2          7
3          7
8          9
10         12
11         8
15         14
13         15
14         10

I am getting this data from spark tables. I am now trying to get to two results:

1 - Find the root parents for each family member

member    parent
1         []
2         [1,7]
3         [1,7]
4         [1]
5         [1,7]
6         [1,7]
7         []
8         [9]
9         []
10        [12]
11        [9]
12        []
13        [12]
14        [12]
15        [12]

2 - Pool all the parent-child relationships into 'families'

family
[1,2,3,4,5,6,7]
[8,9,11]
[10,12,13,14,15]

Below is a Python dict that represents the relationships, along with an attempt I made at result 1. It works, but it is incredibly slow, probably because the recursive function rescans the whole edge list on every call. My problem is that this approach does not scale to this amount of data, and I am not sure which of the tools at my disposal is best poised to solve it. Pandas? Scala? Pure Python?

test = {
  'duplicate':[2,3,4,5,3,2,6,3,8,10,11,14,15,14],
  'parent':[1,1,1,2,2,6,7,7,9,12,8,15,13,10]
}

result = {
  'root_parent': [],
  'duplicate': []
}

parents = test['parent']
duplicates = test['duplicate']

def find_parents(root_duplicate, duplicate, result):
  # Linear scan over the full edge list on every call -- this is the bottleneck.
  parents_of_duplicate = [parents[i] for i, x in enumerate(duplicates) if x == duplicate]
  if not parents_of_duplicate:
    # No parents found: `duplicate` is a root, so record the pair.
    result['root_parent'].append(duplicate)
    result['duplicate'].append(root_duplicate)
  else:
    for parent_of_duplicate in parents_of_duplicate:
      find_parents(root_duplicate, parent_of_duplicate, result)

for duplicate in set(duplicates):
  find_parents(duplicate, duplicate, result)
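
For comparison, the same traversal becomes much faster if the direct parents are indexed once up front and the results are memoized, instead of rescanning the whole edge list on every recursive call. A minimal sketch of that idea, assuming the relationships contain no cycles (names like `parents_of` and `roots` are my own):

```python
from collections import defaultdict

test = {
  'duplicate': [2, 3, 4, 5, 3, 2, 6, 3, 8, 10, 11, 14, 15, 14],
  'parent':    [1, 1, 1, 2, 2, 6, 7, 7, 9, 12, 8, 15, 13, 10]
}

# Build a duplicate -> direct parents index once, in O(n).
parents_of = defaultdict(list)
for dup, par in zip(test['duplicate'], test['parent']):
    parents_of[dup].append(par)

memo = {}

def roots(node):
    # Set of root parents reachable from `node`;
    # a node with no parents is its own root.
    if node in memo:
        return memo[node]
    direct = parents_of.get(node)
    if not direct:
        found = {node}
    else:
        found = set()
        for p in direct:
            found |= roots(p)
    memo[node] = found
    return found

root_parents = {dup: sorted(roots(dup)) for dup in set(test['duplicate'])}
```

Each node's root set is computed at most once, so the whole pass is roughly linear in the number of edges.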
Julius
  • 1,451
  • 1
  • 15
  • 19

1 Answer


I found my answer in this Stack Overflow response. It seems to be a common graph problem:

Merge lists that share common elements

import networkx as nx

test = {
  'duplicate':[2,3,4,5,3,2,6,3,8,10,11,14,15,14],
  'parent':[1,1,1,2,2,6,7,7,9,12,8,15,13,10]
}

# Treat every duplicate-parent pair as an undirected edge; each
# connected component of the resulting graph is one 'family'.
relations = zip(test['duplicate'], test['parent'])
G = nx.Graph()
G.add_edges_from(relations)
list(nx.connected_components(G))

Out:

[{1, 2, 3, 4, 5, 6, 7}, {8, 9, 11}, {10, 12, 13, 14, 15}]
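
The same graph also gets at result 1 if the edges keep their direction: build a `DiGraph` with duplicate -> parent edges, treat any node without outgoing edges as a root, and collect the roots reachable from each member. A sketch using the edge list exactly as in the dict above (a root itself maps to an empty list):

```python
import networkx as nx

test = {
  'duplicate': [2, 3, 4, 5, 3, 2, 6, 3, 8, 10, 11, 14, 15, 14],
  'parent':    [1, 1, 1, 2, 2, 6, 7, 7, 9, 12, 8, 15, 13, 10]
}

# Directed edge: duplicate -> parent.
D = nx.DiGraph()
D.add_edges_from(zip(test['duplicate'], test['parent']))

# A node with no outgoing edge has no parent of its own, i.e. it is a root.
root_parents = {
    node: sorted(n for n in nx.descendants(D, node) if D.out_degree(n) == 0)
    for node in D.nodes
}
```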