The Language of A.I.: Reframing the Issue of Bias in Society
by Robert James Horne and Daniel Maley / Published March 13th, 2024
Key Takeaways
The integration of human bias into A.I. represents a danger to American society as highlighted by recent examples including Google DeepMind's Gemini model.
A.I. operating systems that remain objective when encountering instances of human bias are the right approach to supporting effective reasoning and decision-making.
A historical understanding of bias and related issues can provide developers and regulators with a useful framework for regulating bias.
Introduction
The integration of human bias into A.I. presents a clear danger to American society as highlighted by recent examples including Google DeepMind's Gemini model.
Our vision for A.I. is a technology able to distinguish and account for bias in all forms without adopting human bias into its own internal deliberations and operations. We believe the goal should be an A.I. operating system capable of objectivity when encountering instances of human bias.
A historical understanding of bias and related issues, beginning with the word's origins, can provide a useful framework for regulating bias. Managing bias is one of the internal controls important for A.I. use. This article seeks to reframe understandings of bias in society as a means of introducing new concepts for regulating it in A.I.
A.I. & Societal Bias: Historical Perspectives Are Key
It is hard to understand the full meaning of bias without considering its origins. The word entered English through the Old French biais, meaning a slant or a leaning toward one side, in effect, to favor. It is important to note that this root definition does not denote whether favoring is positive or negative. That judgment comes later, when we consider context. The word in its original sense therefore represents an objective perspective from which to further consider whether a particular bias is positive or negative.
The absence of bias is also possible. To be unbiased is to be impartial or unprejudiced with regard to a person or thing. One example of this concept can be found in how federal, state, and local courts are supposed to operate as sources of impartial findings on questions related to standing laws and regulations.
There are advantages to allowing A.I. to consider issues of bias using this root definition. Understanding the historical evolution of bias is also important, as it sheds light on present-day implications. Tracing bias back to its origins reveals shifting perceptions and applications over time, providing valuable insights into its role within both society and A.I.
This is the context upon which we want A.I. to consider and work with human bias.
Negative & Positive Bias in Federal Statutes
There are numerous examples of both negative and positive bias in federal statutes dating back centuries. A few examples to consider:
Negative Bias: Many states and the federal government did not start out ‘favoring’ every citizen with the right to vote. In some instances, it took Amendments to the U.S. Constitution to remedy these forms of negative bias against groups including minorities and women. The 15th Amendment, which bars the federal government and the states from denying citizens the right to vote on account of race or color, is one example.
Positive Bias: The Medicare program, established in 1965, ‘favors’ seniors and the disabled as beneficiaries of the program. These ‘favors’ are not extended to people under a certain age who do not have a disability. This favoritism allows the federal government to exclude millions of Americans from benefits as one means of financing the program.
The role of bias in public policy exemplifies its potential benefits when properly managed. Social programs that prioritize resources for the needy rely on positive biases to function effectively. This demonstrates the utility of bias in fostering societal welfare under certain conditions.
The 15th Amendment is an example of this principle as well. It ‘favors’ citizenship as the bias around which voting rights now rest. The fact that this form of ‘favoritism’ did away with the negative biases governments used to restrict voting rights is one of many examples of the importance of bias in our lives.
Context Matters When Determining Bias
Context, the perceived meaning behind something, is how people and society determine the value of a particular ‘favor.’
American society uses context to determine whether a bias is good or bad, just as individuals do. Context in this instance can be explained by considering divisions in society over the Medicare program. Some see Medicare as a benefit for those in the U.S. living in retirement. Others view the program as unconstitutional and an example of government overreach. These two differing ways of valuing the Medicare program are examples of context.
For purposes of regulating A.I. use in society, the question of context is also applicable. The most fundamental contextual question is the following:
Should A.I. adopt a contextual lens from society to understand the totality of bias? Or should it instead operate from an unbiased system from which to consider issues of bias in all forms?
We feel that an effective approach to considering bias in A.I. is to ensure that the technology can consider the subjective aspects of bias without validating them: an objective operating system with controls for how best to work with bias.
Conclusion
Our next article in the series will feature some considerations for developing effective A.I. bias controls.