Ethical regulator theorem

Mick Ashby's ethical regulator theorem builds upon the Conant–Ashby good regulator theorem,[1] in which "good" is ambiguous because being good at regulating does not imply being ethically good.

Theorem

The ethical regulator theorem claims that the following nine requisites are necessary and sufficient for a cybernetic regulator to be both effective and ethical:[2]

  1. Purpose expressed as unambiguously prioritized goals.
  2. Truth about the past and present.
  3. Variety of possible actions.
  4. Predictability of the future effects of actions.
  5. Intelligence to choose the best actions.
  6. Influence on the regulated system.
  7. Ethics expressed as unambiguously prioritized rules.
  8. Integrity of all subsystems.
  9. Transparency of ethical behavior.

Of these requisites, only the first six are necessary for a regulator to be effective; the three requisites of ethics, integrity, and transparency are optional if a system only needs to be effective. This gives rise to the Law of Inevitable Ethical Inadequacy, which states: "If you do not specify that you require a secure ethical system, what you get is an insecure unethical system." The reason is that unless ethical adequacy is stated as a requirement, a system design will tend to optimize for effectiveness and therefore ignore the optional ethics, integrity, and transparency dimensions as much as possible. The result is inevitably a design, and a subsequent implementation, that is ethically inadequate and vulnerable to manipulation.
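The distinction between effectiveness and ethical adequacy can be made concrete with a small sketch. The following Python fragment is purely illustrative and not part of Ashby's formulation; the Regulator class, the 0-to-1 scores, and the threshold are assumptions chosen for exposition.

    from dataclasses import dataclass

    @dataclass
    class Regulator:
        # The six requisites needed for effectiveness (scored 0.0 to 1.0 here).
        purpose: float
        truth: float
        variety: float
        predictability: float
        intelligence: float
        influence: float
        # The three requisites that are optional for mere effectiveness.
        ethics: float = 0.0
        integrity: float = 0.0
        transparency: float = 0.0

        def is_effective(self, threshold: float = 0.5) -> bool:
            # Only the first six requisites are required for effectiveness.
            return all(score >= threshold for score in (
                self.purpose, self.truth, self.variety,
                self.predictability, self.intelligence, self.influence))

        def is_ethically_adequate(self, threshold: float = 0.5) -> bool:
            # An ethical regulator must satisfy all nine requisites.
            return self.is_effective(threshold) and all(
                score >= threshold
                for score in (self.ethics, self.integrity, self.transparency))

A system specified only against the first six requisites passes is_effective while failing is_ethically_adequate, which is the situation the Law of Inevitable Ethical Inadequacy warns about.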

Effectiveness of a regulator

The ethical regulator theorem shows that the effectiveness of a cybernetic regulator depends on seven requisites. The effectiveness of a regulator, R, can be expressed as the function:

Effectiveness_R = Purpose_R × Truth_R × (Variety_R − Ethics_R) × Predictability_R × Intelligence_R × Influence_R

If two systems, A and B, are competing for control of a third system, C, and Effectiveness_A "is greater than Effectiveness_B, then A is more likely than B to win control of C".

The effectiveness function reflects how the variety of actions available to an ethical regulator is reduced by excluding all actions that are considered unethical, which puts an ethical regulator at a disadvantage when competing against an unethical competitor.
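As a worked illustration of the effectiveness function and the competition claim, the sketch below scores each requisite between 0 and 1; the function name and the particular numbers are assumptions for exposition, not values from the theorem.

    def effectiveness(purpose, truth, variety, ethics,
                      predictability, intelligence, influence):
        # Effectiveness_R as the product of the requisite scores, with the
        # ethics term subtracting from the variety that is actually usable.
        return (purpose * truth * (variety - ethics)
                * predictability * intelligence * influence)

    # Regulator A applies no ethical constraints; regulator B gives up some
    # of its variety to ethics, with all other requisites held equal.
    effectiveness_a = effectiveness(0.9, 0.8, 0.7, 0.0, 0.8, 0.9, 0.8)
    effectiveness_b = effectiveness(0.9, 0.8, 0.7, 0.3, 0.8, 0.9, 0.8)

    # A's effectiveness is higher, so A is more likely than B to win
    # control of the contested system C.
    assert effectiveness_a > effectiveness_b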

Model-centric cybernetics paradigm

According to the model-centric cybernetics paradigm,[3] an ethical regulator is the product of third-order regulation. The Conant–Ashby good regulator theorem, which established that "every good regulator of a system must be a model of that system", is a key tenet of the model-centric cybernetics paradigm. The paradigm defines a cybernetic regulator as consisting of a purpose, a model, a well-defined observer that observes only what the model requires as input parameters, some form of decision-making intelligence, and a control channel that transmits the selected actions or communications to the regulated system.

Thus, in order to be effective, a first-order (simple) cybernetic regulator requires a model of the system that is being regulated. A second-order (reflexive) regulator can only achieve reflexivity by also having a model of itself, which encodes a real-time representation of the variety that is available to the regulator. Finally, an ethical regulator is realized by using a third regulator to regulate a reflexive regulator, constraining it to exhibit only behavior that does not violate the ethical schema encoded in a third model. If the third model encodes what is illegal or socially unacceptable, then the concept of a third-order regulator has utility for making law-abiding robots and ethical AI.
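One way to picture the three orders of regulation is as three nested filters over a regulator's candidate actions. The following Python sketch is an interpretive illustration under that reading; the class names and the candidate_actions, is_available, and permits methods are hypothetical, not Ashby's notation.

    class FirstOrderRegulator:
        # Regulates a system using a model of that system (good regulator theorem).
        def __init__(self, system_model):
            self.system_model = system_model

        def propose_actions(self, observation):
            # Use the system model to generate candidate actions for the observation.
            return self.system_model.candidate_actions(observation)

    class SecondOrderRegulator(FirstOrderRegulator):
        # Adds a model of itself that tracks the variety currently available to it.
        def __init__(self, system_model, self_model):
            super().__init__(system_model)
            self.self_model = self_model

        def propose_actions(self, observation):
            candidates = super().propose_actions(observation)
            # Keep only actions the regulator is actually able to take right now.
            return [a for a in candidates if self.self_model.is_available(a)]

    class ThirdOrderRegulator(SecondOrderRegulator):
        # Constrains the reflexive regulator with an ethical schema (the third model).
        def __init__(self, system_model, self_model, ethical_schema):
            super().__init__(system_model, self_model)
            self.ethical_schema = ethical_schema

        def propose_actions(self, observation):
            candidates = super().propose_actions(observation)
            # Filter out any action that violates the ethical schema.
            return [a for a in candidates if self.ethical_schema.permits(a)]

Each order adds one model: the system model enables prediction, the self-model tracks available variety, and the ethical schema removes unethical actions before they can be selected.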

Notes and References

  1. R. C. Conant and W. R. Ashby, "Every good regulator of a system must be a model of that system", Int. J. Systems Sci., 1970, vol. 1, no. 2, pp. 89–97.
  2. M. Ashby, "Ethical Regulators and Super-Ethical Systems", Systems, 2020, vol. 8, no. 4, p. 53. doi:10.3390/systems8040053.
  3. M. Ashby, "Problems with Abstract Observers and Advantages of a Model-Centric Cybernetics Paradigm", Systems, 2022, vol. 10, no. 3, p. 53.