Byzantine fault

A Byzantine fault is a condition of a system, particularly a distributed computing system, where a fault occurs such that different symptoms are presented to different observers, including imperfect information on whether a system component has failed. The term takes its name from an allegory, the "Byzantine generals problem",[1] developed to describe a situation in which, to avoid catastrophic failure of a system, the system's actors must agree on a strategy, but some of these actors are unreliable in such a way as to cause other (good) actors to disagree on the strategy and they may be unaware of the disagreement.

A Byzantine fault is also known as a Byzantine generals problem, a Byzantine agreement problem, or a Byzantine failure.

Byzantine fault tolerance (BFT) is the resilience of a fault-tolerant computer system or similar system to such conditions.

Definition

A Byzantine fault is any fault presenting different symptoms to different observers.[2] A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus among multiple components.[3]

The Byzantine allegory considers a number of generals who are attacking a fortress. The generals must decide as a group whether to attack or retreat; some may prefer to attack, while others prefer to retreat. The important thing is that all generals agree on a common decision, for a halfhearted attack by a few generals would become a rout, and would be worse than either a coordinated attack or a coordinated retreat.

The problem is complicated by the presence of treacherous generals who may not only cast a vote for a suboptimal strategy; they may do so selectively. For instance, if nine generals are voting, four of whom support attacking while four others are in favor of retreat, the ninth general may send a vote of retreat to those generals in favor of retreat, and a vote of attack to the rest. Those who received a retreat vote from the ninth general will retreat, while the rest will attack (which may not go well for the attackers). The problem is complicated further by the generals being physically separated and having to send their votes via messengers who may fail to deliver votes or may forge false votes.
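The split vote is easy to make concrete. The sketch below (illustrative Python, not from the cited paper; the function and variable names are invented for illustration) tallies the orders each camp receives in the nine-general scenario: eight loyal generals split four against four, and the treacherous ninth general tells each camp what it wants to hear.

    # Illustrative sketch of the nine-general split vote described above.
    def tally(votes):
        # Majority decision as computed by one general from the votes it received.
        return "attack" if votes.count("attack") > votes.count("retreat") else "retreat"

    loyal_votes = ["attack"] * 4 + ["retreat"] * 4   # eight loyal generals, tied 4-4

    # The treacherous ninth general sends different votes to different observers.
    pro_attack_view  = loyal_votes + ["attack"]      # what the pro-attack camp receives
    pro_retreat_view = loyal_votes + ["retreat"]     # what the pro-retreat camp receives

    print(tally(pro_attack_view))    # attack
    print(tally(pro_retreat_view))   # retreat -- the two camps now act differently

Without exchanging what each general received, neither camp can detect that the ninth general voted both ways.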

Byzantine fault tolerance can be achieved if the total number of generals is more than three times the number of disloyal (faulty) generals; that is, at least 3f+1 generals are needed to tolerate f traitors. There can be a default vote value given to missing messages; for example, missing messages can be given a "null" value. Further, if the agreement is that the null votes are in the majority, a pre-assigned default strategy can be used (e.g., retreat).[4]
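The defaulting rule can be sketched as follows (again hypothetical Python; the vote values and the "null" handling are assumptions made for illustration):

    # Decide a strategy from received votes, treating missing messages as "null".
    def decide(received, default="retreat"):
        votes = ["null" if v is None else v for v in received]
        if votes.count("null") * 2 > len(votes):    # null votes hold a majority,
            return default                          # so use the default strategy
        return "attack" if votes.count("attack") > votes.count("retreat") else "retreat"

    print(decide(["attack", None, None, "retreat", None]))   # retreat (nulls dominate)
    print(decide(["attack", "attack", None, "retreat"]))     # attack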

The typical mapping of this allegory onto computer systems is that the computers are the generals and their digital communication system links are the messengers. Although the problem is formulated in the allegory as a decision-making and security problem, in electronics it cannot be solved by cryptographic digital signatures alone, because failures such as incorrect voltages can propagate through the encryption process. Thus, a faulty message could be sent such that some recipients detect the message as faulty (bad signature), others see it as having a good signature, and a third group also sees a good signature but with different message contents than the second group.[5]

History

The problem of obtaining Byzantine consensus was conceived and formalized by Robert Shostak, who dubbed it the interactive consistency problem. This work was done in 1978 in the context of the NASA-sponsored SIFT project in the Computer Science Lab at SRI International. SIFT (for Software Implemented Fault Tolerance) was the brainchild of John Wensley, and was based on the idea of using multiple general-purpose computers that would communicate through pairwise messaging in order to reach a consensus, even if some of the computers were faulty.

At the beginning of the project, it was not clear how many computers in total were needed to guarantee that a conspiracy of n faulty computers could not "thwart" the efforts of the correctly operating ones to reach consensus. Shostak showed that a minimum of 3n+1 computers is needed, and devised a two-round 3n+1 messaging protocol that would work for n = 1. His colleague Marshall Pease generalized the algorithm for any n > 0, proving that 3n+1 is both necessary and sufficient. These results, together with a later proof by Leslie Lamport of the sufficiency of 3n using digital signatures, were published in the seminal paper, Reaching Agreement in the Presence of Faults.[6] The authors were awarded the 2005 Edsger W. Dijkstra Prize for this paper.

To make the interactive consistency problem easier to understand, Lamport devised a colorful allegory in which a group of army generals formulate a plan for attacking a city. In its original version, the story cast the generals as commanders of the Albanian army. The name was changed, eventually settling on "Byzantine", at the suggestion of Jack Goldberg, to forestall any potential offense.[7] This formulation of the problem, together with some additional results, was presented by the same authors in their 1982 paper, "The Byzantine Generals Problem".

Mitigation

The objective of Byzantine fault tolerance is to be able to defend against failures of system components with or without symptoms that prevent other components of the system from reaching an agreement among themselves, where such an agreement is needed for the correct operation of the system.

The remaining operationally correct components of a Byzantine fault tolerant system will be able to continue providing the system's service as originally intended, assuming there are a sufficient number of accurately-operating components to maintain the service.

When considering failure propagation only via errors, Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas the fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions on what errors can be created, which means that a failed node can generate arbitrary data, including data that makes it appear like a functioning node to a subset of other nodes. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult. Despite the allegory, a Byzantine failure is not necessarily a security problem involving hostile human interference: it can arise purely from physical or software faults.

The terms fault and failure are used here according to the standard definitions[8] originally created by a joint committee on "Fundamental Concepts and Terminology" formed by the IEEE Computer Society's Technical Committee on Dependable Computing and Fault-Tolerance and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance.[9] See also dependability.

Byzantine fault tolerance is only concerned with broadcast consistency, that is, the property that when a component broadcasts a value to all the other components, they all receive exactly this same value, or, if the broadcaster is inconsistent, the other components agree on a common value themselves. This kind of fault tolerance does not encompass the correctness of the value itself; for example, an adversarial component that deliberately sends an incorrect value, but sends that same value consistently to all components, will not be caught by a Byzantine fault tolerance scheme.
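A toy illustration of this limitation (hypothetical Python; the values and names are invented): a consistency check over what the receivers report can flag a two-faced sender, but it cannot flag a sender that lies to everyone identically.

    # Each dict maps receiver -> the value it got from one broadcaster.
    honest    = {"A": 42, "B": 42, "C": 42}   # correct value, sent consistently
    liar      = {"A": 99, "B": 99, "C": 99}   # wrong value, but sent consistently
    byzantine = {"A": 1,  "B": 2,  "C": 1}    # different values to different receivers

    def broadcast_consistent(views):
        # True when every receiver got the same value from the broadcaster.
        return len(set(views.values())) == 1

    print(broadcast_consistent(liar))        # True  -- the consistent lie is not caught
    print(broadcast_consistent(byzantine))   # False -- only inconsistency is detectable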

Solutions

Several early solutions were described by Lamport, Shostak, and Pease in 1982.[4] They began by noting that the Generals' Problem can be reduced to a "Commander and Lieutenants" problem, in which loyal Lieutenants must all act in unison and, if the Commander is loyal, their action must correspond to what the Commander ordered.

All BFT solutions require multiple rounds of communication, in which each recipient repeats to all other recipients what it received. To tolerate f Byzantine failures, there must be at least 3f+1 participants (fault containment zones), 2f+1 independent communication paths, and f+1 rounds of communication. There are also hybrid fault models, in which benign (non-Byzantine) faults and Byzantine faults may exist simultaneously; for each additional benign fault that must be tolerated, the above numbers must be incremented.
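The f = 1 case can be sketched in a few lines. The following is an illustrative Python rendering of the oral-messages idea for four generals (one commander plus three lieutenants, satisfying 3f+1 = 4); the message model, function names, and traitor behavior are simplifying assumptions, not the 1982 paper's full OM(m) algorithm.

    from collections import Counter

    def majority(values, default="retreat"):
        # Strict majority of the received values, else a pre-assigned default.
        (value, count), = Counter(values).most_common(1)
        return value if count * 2 > len(values) else default

    def om1(orders, traitor=None):
        # orders[i]: the order the commander sends lieutenant i (round 1).
        # traitor: index of a lieutenant who flips every order it relays.
        decisions = {}
        for i in orders:
            received = [orders[i]]              # round 1: the direct order
            for j in orders:                    # round 2: orders relayed by peers
                if j == i:
                    continue
                relay = orders[j]
                if j == traitor:
                    relay = "retreat" if relay == "attack" else "attack"
                received.append(relay)
            decisions[i] = majority(received)
        return decisions

    # A treacherous commander sends conflicting orders, yet the loyal
    # lieutenants still reach the same decision after exchanging messages:
    print(om1({0: "attack", 1: "attack", 2: "retreat"}))   # all decide "attack"

    # A loyal commander with one treacherous lieutenant: the loyal
    # lieutenants (0 and 1) still follow the commander's order:
    print(om1({0: "attack", 1: "attack", 2: "attack"}, traitor=2))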

Without these rounds of communication, Byzantine failures can occur even when no hardware is faulty.

Several system architectures were designed c. 1980 that implemented Byzantine fault tolerance. These include: Draper's FTMP,[13] Honeywell's MMFCS, and SRI's SIFT.[14]

In 1999, Miguel Castro and Barbara Liskov introduced the "Practical Byzantine Fault Tolerance" (PBFT) algorithm,[15] which provides high-performance Byzantine state machine replication, processing thousands of requests per second with sub-millisecond increases in latency.
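The quorum arithmetic behind PBFT-style protocols is easy to state. The following is a hedged sketch, not Castro and Liskov's implementation: with n = 3f + 1 replicas, any two quorums of 2f + 1 replicas intersect in at least f + 1 replicas, at least one of which is honest, which prevents two conflicting decisions from both gathering a quorum.

    def pbft_sizes(f):
        # Replica count and quorum size needed to tolerate f Byzantine replicas.
        n = 3 * f + 1
        quorum = 2 * f + 1
        return n, quorum

    for f in (1, 2, 3):
        n, q = pbft_sizes(f)
        overlap = 2 * q - n   # minimum intersection of any two quorums
        print(f"f={f}: n={n}, quorum={q}, any two quorums share >= {overlap} replicas")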

After PBFT, several BFT protocols were introduced to improve its robustness and performance. For instance, Q/U,[16] HQ,[17] Zyzzyva,[18] and ABsTRACTs[19] addressed performance and cost issues, whereas other protocols, like Aardvark[20] and RBFT,[21] addressed robustness issues. Adapt[22] tried to make use of existing BFT protocols, switching between them adaptively to improve system robustness and performance as the underlying conditions change. In addition, BFT protocols were introduced that leverage trusted components to reduce the number of replicas, e.g., A2M-PBFT-EA[23] and MinBFT.[24]

Applications

Several examples of Byzantine failures that have occurred are given in two equivalent journal papers.[2] [3] These and other examples are described on the NASA DASHlink web pages.[25]

Applications in computing

Byzantine fault tolerance mechanisms use components that repeat an incoming message (or just its signature, which can be reduced to just a single bit of information if self-checking pairs are used for nodes) to other recipients of that incoming message. All these mechanisms make the assumption that the act of repeating a message blocks the propagation of Byzantine symptoms. For systems that have a high degree of safety or security criticality, these assumptions must be proven to be true to an acceptable level of fault coverage. When providing proof through testing, one difficulty is creating a sufficiently wide range of signals with Byzantine symptoms.[26] Such testing will likely require specialized fault injectors.[27] [28]

Military applications

Byzantine errors were observed infrequently and at irregular points during endurance testing for the newly constructed Virginia class submarines, at least through 2005 (when the issues were publicly reported).[29]

Cryptocurrency applications

The Bitcoin network works in parallel to generate a blockchain with proof-of-work, allowing the system to overcome Byzantine failures and reach a coherent global view of the system's state.[30] Some proof-of-stake blockchains also use BFT algorithms.
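A minimal proof-of-work sketch (illustrative Python; real Bitcoin hashes 80-byte block headers with double SHA-256 against a dynamically adjusted difficulty target) shows the asymmetry the scheme relies on: the work is expensive to produce but trivial for every node to verify, so no single party can cheaply rewrite the agreed-upon history.

    import hashlib

    def mine(block_data, difficulty=4):
        # Search for a nonce whose SHA-256 digest starts with
        # `difficulty` zero hex digits.
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("prev-hash|transactions")
    print(nonce, digest)   # any node re-checks this with a single hash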

Blockchain technology

Byzantine Fault Tolerance (BFT) is a crucial concept in blockchain technology, ensuring that a network can continue to function even when some nodes (participants)[31] fail or act maliciously. This tolerance is necessary because blockchains are decentralized systems with no central authority, making it essential to achieve consensus among nodes, even if some try to disrupt the process.

Applications and examples of Byzantine fault tolerance in blockchain

Safety Mechanisms: Different blockchains use various BFT-based consensus mechanisms, such as Practical Byzantine Fault Tolerance (PBFT), Tendermint, and Delegated Proof of Stake (DPoS), to handle Byzantine faults. These protocols ensure that the majority of honest nodes can agree on the next block in the chain, securing the network against attacks and preventing double-spending and other types of fraud. Practical examples include Hyperledger Fabric (PBFT), Cosmos (Tendermint), and Klever (DPoS), respectively.

51% Attack Mitigation: While traditional blockchains like Bitcoin use Proof of Work (PoW), which is susceptible to a 51% attack, BFT-based systems are designed to tolerate faulty or malicious nodes so long as they comprise fewer than one-third of the network, without compromising the network's integrity (see the sketch after this list).

Decentralized Trust: Byzantine Fault Tolerance underpins the trust model in decentralized networks. Instead of relying on a central authority, the network's security depends on the ability of honest nodes to outnumber and outmaneuver malicious ones.

Private and Permissioned Blockchains: BFT is especially important in private or permissioned blockchains, where a limited number of known participants need to reach a consensus quickly and securely. These networks often use BFT protocols to enhance performance and security.
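The one-third bound referenced above follows directly from the n >= 3f + 1 requirement; a short illustrative Python sketch (the function name is invented):

    def max_byzantine(n):
        # Largest number of Byzantine nodes an n-node BFT network can tolerate.
        return (n - 1) // 3

    for n in (4, 7, 10, 100):
        print(n, max_byzantine(n))   # 4->1, 7->2, 10->3, 100->33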

Applications in aviation

Some aircraft systems, such as the Boeing 777 Aircraft Information Management System (via its ARINC 659 SAFEbus network), the Boeing 777 flight control system, and the Boeing 787 flight control systems, use Byzantine fault tolerance; because these are real-time systems, their Byzantine fault tolerance solutions must have very low latency. For example, SAFEbus can achieve Byzantine fault tolerance with on the order of a microsecond of added latency.[32] [33] [34] The SpaceX Dragon considers Byzantine fault tolerance in its design.[35]

Notes and References

  1. Lamport, L.; Shostak, R.; Pease, M. (1982). "The Byzantine Generals Problem". ACM Transactions on Programming Languages and Systems 4 (3): 382–401. doi:10.1145/357172.357176. Archived 13 June 2018 at https://web.archive.org/web/20180613015025/https://www.microsoft.com/en-us/research/publication/byzantine-generals-problem/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Flamport%2Fpubs%2Fbyz.pdf
  2. Driscoll, K.; Hall, B.; Paulitsch, M.; Zumsteg, P.; Sivencrona, H. (2004). "The Real Byzantine Generals". The 23rd Digital Avionics Systems Conference. pp. 6.D.4–61–11. doi:10.1109/DASC.2004.1390734. ISBN 978-0-7803-8539-9.
  3. Driscoll, Kevin; Hall, Brendan; Sivencrona, Håkan; Zumsteg, Phil (2003). "Byzantine Fault Tolerance, from Theory to Reality". Computer Safety, Reliability, and Security. Lecture Notes in Computer Science 2788. pp. 235–248. doi:10.1007/978-3-540-39878-3_19. ISBN 978-3-540-20126-7.
  4. Lamport, L.; Shostak, R.; Pease, M. (1982). "The Byzantine Generals Problem". ACM Transactions on Programming Languages and Systems 4 (3): 387–389. doi:10.1145/357172.357176. Archived 7 February 2017 at https://web.archive.org/web/20170207104645/http://research.microsoft.com/en-us/um/people/lamport/pubs/byz.pdf
  5. Driscoll, K.; Hall, B.; Paulitsch, M.; Zumsteg, P.; Sivencrona, H. (2004). "The Real Byzantine Generals". The 23rd Digital Avionics Systems Conference. pp. 6.D.4–61–11. doi:10.1109/DASC.2004.1390734. ISBN 978-0-7803-8539-9.
  6. Pease, Marshall; Shostak, Robert; Lamport, Leslie (April 1980). "Reaching Agreement in the Presence of Faults". Journal of the Association for Computing Machinery 27 (2): 228–234. doi:10.1145/322186.322188.
  7. Lamport, Leslie (2016-12-19). "The Byzantine Generals Problem". ACM Transactions on Programming Languages and Systems. SRI International. Retrieved 18 March 2019.
  8. Avizienis, A.; Laprie, J.-C.; Randell, Brian; Landwehr, C. (2004). "Basic concepts and taxonomy of dependable and secure computing". IEEE Transactions on Dependable and Secure Computing 1 (1): 11–33. doi:10.1109/TDSC.2004.2.
  9. "Dependable Computing and Fault Tolerance". Archived 2015-04-02 at https://web.archive.org/web/20150402141319/http://www.dependability.org/. Retrieved 2015-03-02.
  10. Feldman, P.; Micali, S. (1997). "An optimal probabilistic protocol for synchronous Byzantine agreement". SIAM Journal on Computing 26 (4): 873–933. doi:10.1137/s0097539790187084. Archived 2016-03-05 at https://web.archive.org/web/20160305012505/http://people.csail.mit.edu/silvio/Selected%20Scientific%20Papers/Distributed%20Computation/An%20Optimal%20Probabilistic%20Algorithm%20for%20Byzantine%20Agreement.pdf
  11. Koopman, Philip; Driscoll, Kevin; Hall, Brendan (March 2015). "Cyclic Redundancy Code and Checksum Algorithms to Ensure Critical Data Integrity" (PDF). Federal Aviation Administration. DOT/FAA/TC-14/49. Archived from the original on 18 May 2015. Retrieved 9 May 2015.
  12. Paulitsch, M.; Morris, J.; Hall, B.; Driscoll, K.; Latronico, E.; Koopman, P. (2005). "Coverage and the Use of Cyclic Redundancy Codes in Ultra-Dependable Systems". 2005 International Conference on Dependable Systems and Networks (DSN'05). pp. 346–355. doi:10.1109/DSN.2005.31. ISBN 978-0-7695-2282-1.
  13. Hopkins, Albert L.; Lala, Jaynarayan H.; Smith, T. Basil (1987). "The Evolution of Fault Tolerant Computing at the Charles Stark Draper Laboratory, 1955–85". The Evolution of Fault-Tolerant Computing. Dependable Computing and Fault-Tolerant Systems 1. pp. 121–140. doi:10.1007/978-3-7091-8871-2_6. ISBN 978-3-7091-8873-6.
  14. "SIFT: design and analysis of a fault-tolerant computer for aircraft control" (1979). Microelectronics Reliability 19 (3): 190. doi:10.1016/0026-2714(79)90211-7.
  15. Castro, M.; Liskov, B. (2002). "Practical Byzantine Fault Tolerance and Proactive Recovery". ACM Transactions on Computer Systems 20 (4): 398–461. doi:10.1145/571637.571640.
  16. Abd-El-Malek, M.; Ganger, G.; Goodson, G.; Reiter, M.; Wylie, J. (2005). "Fault-scalable Byzantine Fault-Tolerant Services". ACM SIGOPS Operating Systems Review 39 (5): 59. doi:10.1145/1095809.1095817.
  17. Cowling, James; Myers, Daniel; Liskov, Barbara; Rodrigues, Rodrigo; Shrira, Liuba (2006). "HQ Replication: A Hybrid Quorum Protocol for Byzantine Fault Tolerance". Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation. pp. 177–190. ISBN 1-931971-47-1.
  18. Kotla, Ramakrishna; Alvisi, Lorenzo; Dahlin, Mike; Clement, Allen; Wong, Edmund (December 2009). "Zyzzyva: Speculative Byzantine Fault Tolerance". ACM Transactions on Computer Systems 27 (4): 1–39. doi:10.1145/1658357.1658358.
  19. Guerraoui, Rachid; Kneževic, Nikola; Vukolic, Marko; Quéma, Vivien (2010). "The Next 700 BFT Protocols". Proceedings of the 5th European Conference on Computer Systems (EuroSys). Archived 2011-10-02 at https://web.archive.org/web/20111002225957/http://infoscience.epfl.ch/record/144158
  20. Clement, A.; Wong, E.; Alvisi, L.; Dahlin, M.; Marchetti, M. (April 22–24, 2009). "Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults". USENIX Symposium on Networked Systems Design and Implementation. Archived 2010-12-25 at https://web.archive.org/web/20101225155052/https://www.usenix.org/events/nsdi09/tech/full_papers/clement/clement.pdf
  21. Aublin, P.-L.; Ben Mokhtar, S.; Quéma, V. (July 8–11, 2013). "RBFT: Redundant Byzantine Fault Tolerance". 33rd IEEE International Conference on Distributed Computing Systems. Archived 2013-08-05 at https://web.archive.org/web/20130805115252/http://www.temple.edu/cis/icdcs2013/program.html
  22. Bahsoun, J. P.; Guerraoui, R.; Shoker, A. (2015). "Making BFT Protocols Really Adaptive". 2015 IEEE International Parallel and Distributed Processing Symposium. pp. 904–913. doi:10.1109/IPDPS.2015.21. ISBN 978-1-4799-8649-1. http://repositorio.inesctec.pt/handle/123456789/4107
  23. Chun, Byung-Gon; Maniatis, Petros; Shenker, Scott; Kubiatowicz, John (2007). "Attested append-only memory". Proceedings of Twenty-first ACM SIGOPS Symposium on Operating Systems Principles (SOSP '07). New York, NY, USA: ACM. pp. 189–204. doi:10.1145/1294261.1294280. ISBN 9781595935915.
  24. Veronese, G. S.; Correia, M.; Bessani, A. N.; Lung, L. C.; Verissimo, P. (2013). "Efficient Byzantine Fault-Tolerance". IEEE Transactions on Computers 62 (1): 16–30. doi:10.1109/TC.2011.221.
  25. Driscoll, Kevin (2012-12-11). "Real System Failures". DASHlink. NASA. Archived 2015-04-02 at https://web.archive.org/web/20150402190610/https://c3.nasa.gov/dashlink/resources/624/. Retrieved 2015-03-02.
  26. Nanya, T.; Goosen, H.A. (1989). "The Byzantine hardware fault model". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 8 (11): 1226–1231. doi:10.1109/43.41508.
  27. Martins, Rolando; Gandhi, Rajeev; Narasimhan, Priya; Pertet, Soila; Casimiro, António; Kreutz, Diego; Veríssimo, Paulo (2013). "Experiences with Fault-Injection in a Byzantine Fault-Tolerant Protocol". Middleware 2013. Lecture Notes in Computer Science 8275. pp. 41–61. doi:10.1007/978-3-642-45065-5_3. ISBN 978-3-642-45064-8.
  28. US patent 7475318, Kevin R. Driscoll, "Method for testing the sensitive input range of Byzantine filters", filed 2005-01-28, issued 2009-01-06, assigned to Honeywell International Inc.
  29. Walter, C.; Ellis, P.; LaValley, B. (2005). "The Reliable Platform Service: A Property-Based Fault Tolerant Service Architecture". Ninth IEEE International Symposium on High-Assurance Systems Engineering (HASE'05). pp. 34–43. doi:10.1109/HASE.2005.23. ISBN 978-0-7695-2377-4.
  30. Rubby, Matt (20 January 2024). "How Byzantine Generals Problem Relates to You in 2024". Swan Bitcoin. Retrieved 2024-01-27.
  31. "Node Operations".
  32. Paulitsch, M.; Driscoll, K. (2015). "Chapter 48: SAFEbus". In Zurawski, Richard (ed.), Industrial Communication Technology Handbook, Second Edition. CRC Press. pp. 48-1–48-26. ISBN 978-1-4822-0733-0. https://books.google.com/books?id=ppzNBQAAQBAJ
  33. Henzinger, Thomas A.; Kirsch, Christoph M., eds. (2001). Embedded Software: First International Workshop, EMSOFT 2001, Tahoe City, CA, USA, October 8–10, 2001. Proceedings. Springer. pp. 307–. ISBN 978-3-540-42673-8. Archived 2015-09-22 at https://web.archive.org/web/20150922114036/http://www.csl.sri.com/papers/emsoft01/emsoft01.pdf
  34. Yeh, Y.C. (2001). "Safety critical avionics for the 777 primary flight controls system". 20th Digital Avionics Systems Conference (DASC), vol. 1. pp. 1C2/1–1C2/11. doi:10.1109/DASC.2001.963311. ISBN 978-0-7803-7034-0.
  35. "ELC: SpaceX lessons learned". LWN.net. Archived 2016-08-05 at https://web.archive.org/web/20160805064218/http://lwn.net/Articles/540368/. Retrieved 2016-07-21.