I am trying to optimize the node data structure of a graph/network by avoiding duplicate edges. The graph is undirected and has no self-loops (no node has an edge to itself). Each node data structure contains a list of edges. To this end, I have the following function for adding an edge between two nodes of the graph:
"""
Creates an edge between `node1` and `node2`.
If i < j and there is an edge between node i and node j, then
the edge between node i and node j will be stored in node i.
"""
def create_edge(self, node1, node2):
if node1.id < node2.id:
node1.edges.append(node2.id)
else:
node2.edges.append(node1.id)
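For context, here is a minimal standalone version of that rule (the `Node` class below is an assumed sketch, since I haven't shown the real class in full), showing that the edge always lands on the node with the smaller id:

```python
class Node:
    """Assumed sketch: an integer id and a list of edges."""
    def __init__(self, node_id):
        self.id = node_id
        self.edges = []  # ids of neighbours with a larger id


def create_edge(node1, node2):
    # Same rule as above, as a free function for easy testing:
    # the edge is stored on the node with the smaller id.
    if node1.id < node2.id:
        node1.edges.append(node2.id)
    else:
        node2.edges.append(node1.id)


a, b = Node(1), Node(2)
create_edge(a, b)
create_edge(b, a)  # the same undirected edge, arguments swapped
# Both calls append to a.edges (node 1), and b.edges stays empty,
# so the storage side is symmetric in the argument order.
```

Note that calling it twice for the same pair still appends twice, so the de-duplication has to come from never visiting a pair more than once in the iteration.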
And then, as part of the instantiation of the graph/network object, the following code is executed to create edges between nodes, based on some probability:
for node1 in self.green_network:
    for node2 in range(node1 + 1, self.green_network):
        if random.random() < self.probability_of_an_edge:
            node1.create_edge(node2)
Now, I realise this second segment of code is incorrect, but I think it roughly illustrates what I am trying to do. Given my stated goal, for every node in the outer loop, the inner loop should run from the node after it to the last node. So, for instance, we first pair node.id = 1 with node.id = 2, node.id = 3, node.id = 4, ..., then node.id = 2 with node.id = 3, node.id = 4, node.id = 5, ..., then node.id = 3 with node.id = 4, node.id = 5, node.id = 6, ..., and so on. That way, given how the create_edge function behaves, we avoid duplicates.
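To make the intent concrete, here is a self-contained sketch of the pairing scheme I described, assuming `green_network` is a plain list of node objects (the `Node` class and `create_edge` free function below are stand-ins for my real code). `itertools.combinations` yields each unordered pair of objects exactly once, which seems to be exactly the "every node with every later node" iteration:

```python
import itertools
import random


class Node:
    """Assumed sketch of the node data structure."""
    def __init__(self, node_id):
        self.id = node_id
        self.edges = []


def create_edge(node1, node2):
    # Store the edge on the node with the smaller id, as in the question.
    if node1.id < node2.id:
        node1.edges.append(node2.id)
    else:
        node2.edges.append(node1.id)


nodes = [Node(i) for i in range(5)]
probability_of_an_edge = 0.5

# Each unordered pair of node objects is visited exactly once,
# so no duplicate edges are ever considered.
for node1, node2 in itertools.combinations(nodes, 2):
    if random.random() < probability_of_an_edge:
        create_edge(node1, node2)
```

The equivalent explicit-index form, iterating over the objects themselves, would be `for i, node1 in enumerate(nodes): for node2 in nodes[i + 1:]: ...`.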
This is related, but it doesn't seem to answer the question of iterating over objects, which is what I'm trying to do here.