
I have been reading many papers recently about Byzantine fault tolerance. There is a common proof that 3m+1 computers are needed to handle m Byzantine faults. The general proof goes something like this:

There are three "generals": A, B, and C. Suppose the generals communicate like this, where C is a "traitor":

A --> B "Attack", A --> C "Attack"
B --> A "Attack", B --> C "Attack"
C --> A "Attack", C --> B "Retreat"

A receives "Attack" from both sources, and will attack.
B receives "Attack" from A but "Retreat" from C and doesn't know what to do.
C is a traitor, so his action could be anything.

Therefore, we can't guarantee that a majority of the actors will reach consensus.
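The exchange above can be sketched as a small simulation (the function and variable names here are hypothetical, just to illustrate the tie B faces):

```python
# Each loyal general tallies the messages it received and follows
# the majority; a tie leaves it undecided.

def decide(received):
    """Decide by majority of the received messages."""
    attack = sum(1 for m in received if m == "Attack")
    retreat = len(received) - attack
    if attack > retreat:
        return "Attack"
    if retreat > attack:
        return "Retreat"
    return "Undecided"  # tie: no safe way to break it

# From the message table above:
a_inbox = ["Attack", "Attack"]    # from B and from traitor C
b_inbox = ["Attack", "Retreat"]   # from A and from traitor C

print(decide(a_inbox))  # Attack
print(decide(b_inbox))  # Undecided
```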

I sort of understand that proof, but it seems to miss a major point. Don't A, B, and C also do their own internal calculation of what to do? Since A & B are the "loyal" generals here, it would seem that the "correct" action is to attack. Isn't B allowed to factor in his own calculation in deciding what to do? In that case, he could easily break the tie between the conflicting A&C inputs and decide to attack. Then, both A & B attack, and we solve the problem. Is this a different problem than the classic Byzantine Generals problem?

Chris Brown

3 Answers


What is "their own internal calculation"? Does it mean that when a general receives conflicting messages, he falls back to a default option (e.g., attack)? And what does "(B's) own calculation in deciding what to do" mean? Under the problem's assumptions, B only decides what to do when he receives a majority of matching messages. There could be a default option for the conflicting case, but a default option doesn't guarantee a consistent decision among the loyal generals, because they don't trust each other.

The important thing in the Byzantine generals problem is that the generals don't trust each other and don't know who is loyal. Anyone can be a traitor, so even though A and B are both loyal generals, neither one knows that the other is truly loyal. In that case, even if B performs his own internal calculation when he receives conflicting messages from A and C, he cannot be 100% sure it leads to the right decision (A and B taking the same action).

yongrae

It is common to assume that loyal generals will give you the same answer to the same question, i.e., that A and B will both return either "attack" or "retreat". But that's not the case in BFT scenarios. In a BFT system, each loyal general is seeing a different part of the problem and thus can give a different answer. So one loyal general can say "attack" while another loyal general says "retreat".

A good use case is the altitude sensors of an airplane. Each one can give you a different answer because they "see" different data (they are located in different places and influenced by different factors).

To quote the original paper (Lamport, 1982):

The use of majority voting to achieve reliability is based upon the assumption that all the nonfaulty processors will produce the same output. This is true so long as they all use the same input. However, any single input datum comes from a single physical component -- for example, from some other circuit in the reliable computer, or from some radar site in the missile defense system -- and a malfunctioning component can give different values to different processors.

A voting system doesn't work here because a faulty component can trick the loyal generals by sending conflicting information to them. In other words, C (malicious) can send "attack" to B and "retreat" to A.

Let's say B (loyal) says "retreat" (everything else is the same):

A --> B "Attack",  A --> C "Attack"
B --> A "Retreat", B --> C "Retreat"
C --> A "Attack",  C --> B "Retreat"

In this example, they shouldn't do anything (because they disagree), but A will attack and B will retreat. The honest nodes think they reached agreement, but they didn't. In this case, the traitor C successfully tricked the honest generals A and B.
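A quick sketch of this split (hypothetical names): each general tallies its own value plus the two messages it received and follows the majority, yet the two loyal generals diverge.

```python
# Each general votes on its own value plus the two received messages.

def majority(votes):
    """Return the value held by the majority of the votes."""
    return max(set(votes), key=votes.count)

a_view = ["Attack", "Retreat", "Attack"]   # own, from B, from C
b_view = ["Retreat", "Attack", "Retreat"]  # own, from A, from C

print(majority(a_view))  # Attack
print(majority(b_view))  # Retreat -- the loyal generals disagree
```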

On a side note, if you are in a scenario where the honest components are expected to give you the same answer, then a voting system can be used (as Lamport himself suggests in his paper). For example, you can use it on a RAID system, where each node has the same data - all you need to do is to use what the majority returns as the actual data.
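For the RAID-style case, where every honest replica is expected to hold identical data, plain majority voting is enough. A minimal sketch (hypothetical function name):

```python
from collections import Counter

def read_with_voting(replicas):
    """Return the value stored by the majority of the replicas."""
    value, _count = Counter(replicas).most_common(1)[0]
    return value

# Two good copies outvote one corrupted copy.
print(read_with_voting([b"data", b"data", b"corrupt"]))  # b'data'
```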

Daniel Loureiro

What you are describing is 3-way consensus, where all participants can have an opinion of their own. The byzantine generals problem consists of a single general sending orders to the other generals. All loyal generals must then, as a group, either obey or disobey the command. It's a matter of making sure everyone agrees on what the commanding general said.

Here's an example:

First off, being the commander or the Byzantine general is an easy case; you don't care what anyone else thinks. The hard part is being a loyal general getting a command from someone else.

For 3 generals trying to decide if they should attack or not, we have two possible cases:

  • If the commander is the Byzantine general, it can send different commands to the two loyal generals. They then cannot agree, since they have received different information from the commander, and each ends up with an equal number of votes for and against.
  • If the Byzantine general is not the commander, it can lie about what order it got from the commander. Once again, the loyal general has one vote for (from the commander) and one against (since the Byzantine general lied).

Since you, the loyal general, have no idea what the commander actually said to the other general, you have no idea if the commander lied to you, or if the other general did.
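The two cases above can be sketched from a loyal lieutenant's point of view (hypothetical names): it hears the commander's order plus what the other general claims the commander said, and the two cases are indistinguishable.

```python
# A loyal lieutenant has two votes: the commander's order and the
# other general's relay of that order.

def lieutenant_view(order_from_commander, relayed_by_other):
    votes = [order_from_commander, relayed_by_other]
    if votes.count("Attack") == votes.count("Retreat"):
        return "Undecided"  # one vote each way; cannot tell who lied
    return votes[0]

# Case 1: traitor commander sends "Attack" to me, "Retreat" to the other.
print(lieutenant_view("Attack", "Retreat"))  # Undecided
# Case 2: loyal commander says "Attack"; traitor general relays "Retreat".
print(lieutenant_view("Attack", "Retreat"))  # Undecided -- same deadlock
```

Note that the loyal lieutenant's inputs are identical in both cases, which is exactly why it cannot tell whether the commander or the other general lied.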

Filip Haglund
  • Well, I always see the problem reduced to this "commanders and lieutenants" problems, but I don't see how that's obvious. In the original Byzantine Generals problem, each general "observes" the city and decides whether to attack. So aren't they formulating their own independent idea of whether to attack? In that case, they just need one other general to agree with them. The reduction to the "commanding general" problem does not make sense to me. – Chris Brown Apr 05 '16 at 20:23
  • That way you can make sure a majority of the generals attack, but not that every loyal general attacks. It's more obvious for bigger examples. – Filip Haglund Apr 05 '16 at 20:54