Belief propagation, also known as sum-product message passing, is a message passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields.
Belief networks are structured, graphical representations of probabilistic relationships between several random variables. They explicitly represent the conditional independencies among those variables. The graph encodes a factorization of the joint distribution into factors that each depend only on a local subset of the random variables. For example, a chain-structured network A → B → C encodes the factorization P(A, B, C) = P(A) P(B|A) P(C|B).
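As a concrete numerical illustration of this factorization idea, the joint table of a tiny two-variable chain can be assembled from its local tables; the numbers below are purely hypothetical:

```python
import numpy as np

# Local tables of a two-node chain A -> B (numbers are made up for illustration).
p_A = np.array([0.6, 0.4])                  # P(A)
p_B_given_A = np.array([[0.9, 0.1],
                        [0.3, 0.7]])        # P(B | A), rows indexed by A

# The joint is the product of the local factors: P(A, B) = P(A) * P(B | A).
joint = p_A[:, None] * p_B_given_A

print(joint)        # each entry is P(A = a, B = b)
print(joint.sum())  # sums to 1 (up to floating point), so it is a valid joint
```

The point is that the full joint never has to be written down by hand; it is recovered by multiplying small local pieces, which is exactly the structure belief propagation exploits.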
The belief propagation algorithm works by passing real-valued functions called messages along the edges between the nodes. A message is essentially a belief about the probability of a variable being in each of its states (which is why we call it belief propagation); for a binary variable, that means its probability of being in the 0 or 1 state. These messages carry the "influence" that one variable exerts on another. Each message is updated iteratively from the previous values of the neighbouring messages.
It is easiest to understand belief propagation on factor graphs (any belief network can be converted into a factor graph). A factor graph is an undirected bipartite graph with variable nodes on one side and factor nodes on the other; there is an edge between variable node A and factor node F if variable A appears in factor F. Hereafter, I will call variable nodes simply nodes, and factor nodes features.
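The conversion is mechanical: each conditional probability table P(X | parents(X)) in the belief network becomes one factor whose scope is X together with its parents. A minimal sketch, using a hypothetical four-variable network:

```python
# Converting a belief network into a factor graph. The network structure
# here (A with children B and C, which share child D) is a made-up example.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# One factor node per CPD; its scope is the variable plus its parents.
factor_scope = {f"f_{x}": [x] + ps for x, ps in parents.items()}

# Edges of the bipartite factor graph: a factor connects to every
# variable in its scope.
edges = sorted((f, v) for f, vs in factor_scope.items() for v in vs)
print(edges)
```

Each edge in this bipartite structure is a channel along which messages will flow in both directions during belief propagation.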
In belief propagation, a factor f sends a message to a node n by multiplying together all the messages f receives from its neighbouring nodes except n, multiplying the resulting product by its own potential table, and finally summing out all variables except n. Symmetrically, a node n sends to a factor f the product of the messages it receives from all of its neighbouring factors except f. We stop this message passing when the messages no longer change across iterations. Then, multiplying together all the messages arriving at a node gives that node's marginal (the marginal distribution of a variable X is simply the probability distribution of X obtained by summing out all the other variables).
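The procedure above can be sketched as a small sum-product implementation. This is a minimal illustration, assuming tabular factors stored as NumPy arrays and a simple flooding schedule with a fixed iteration count rather than a convergence check; all variable names and numbers are hypothetical:

```python
import numpy as np

def sum_product(variables, factors, iters=20):
    """variables: {name: cardinality}; factors: list of (scope, table)."""
    # Initialize every message to a uniform (all-ones) vector.
    msg_vf = {(v, i): np.ones(variables[v])
              for i, (vs, _) in enumerate(factors) for v in vs}
    msg_fv = {(i, v): np.ones(variables[v])
              for i, (vs, _) in enumerate(factors) for v in vs}
    for _ in range(iters):
        # Factor -> node: multiply incoming messages from the other
        # neighbouring nodes into the potential table, then sum out
        # every variable except the target n.
        for i, (vs, table) in enumerate(factors):
            for j, v in enumerate(vs):
                t = table.copy()
                for k, u in enumerate(vs):
                    if u != v:
                        shape = [1] * len(vs)
                        shape[k] = variables[u]
                        t = t * msg_vf[(u, i)].reshape(shape)
                axes = tuple(k for k in range(len(vs)) if k != j)
                m = t.sum(axis=axes)
                msg_fv[(i, v)] = m / m.sum()   # normalize for stability
        # Node -> factor: product of messages from all other factors.
        for i, (vs, _) in enumerate(factors):
            for v in vs:
                m = np.ones(variables[v])
                for k, (vs2, _) in enumerate(factors):
                    if k != i and v in vs2:
                        m = m * msg_fv[(k, v)]
                msg_vf[(v, i)] = m / m.sum()
    # Marginal of a node: product of all incoming factor messages.
    marginals = {}
    for v in variables:
        m = np.ones(variables[v])
        for i, (vs, _) in enumerate(factors):
            if v in vs:
                m = m * msg_fv[(i, v)]
        marginals[v] = m / m.sum()
    return marginals

# A two-node chain: a unary prior on A and a pairwise potential on (A, B).
variables = {"A": 2, "B": 2}
factors = [
    (["A"], np.array([0.7, 0.3])),              # prior on A
    (["A", "B"], np.array([[0.9, 0.1],          # pairwise potential
                           [0.2, 0.8]])),
]
print(sum_product(variables, factors))  # ≈ A: [0.7, 0.3], B: [0.69, 0.31]
```

Because this factor graph is a tree, the marginals computed by message passing agree exactly with what brute-force summation over the joint would give; on graphs with loops the same updates yield only approximate ("loopy") marginals.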