A blurred person in a suit points to a transparent touchscreen with various icons around a central box labeled "AI ETHICS." The icons depict scales, a gear, a checklist, a brain, a robotic hand, and a judge's gavel, highlighting different ethical considerations in AI.

NIST researchers suggest a historical precedent for ethical AI research.

Credit: Thirdtimeluckystudio/Shutterstock

If artificial intelligence (AI) systems are trained on biased data, they can make biased decisions that affect hiring, loan applications, welfare benefits and much more, with real-world consequences. Given how rapidly the technology is developing and how life-changing its effects can be, how can we ensure that the humans who train AI systems use data that reflects sound ethical principles?

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) believes a workable answer to this question already exists: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles, summarized as "respect for persons, beneficence and justice," come from the Belmont Report, a seminal 1979 document that has shaped U.S. government policy on conducting research on human subjects.

The team published its work in February in IEEE's peer-reviewed Computer magazine. While the paper is the authors' own work and does not constitute official NIST guidance, it dovetails with NIST's larger effort to support the development of trustworthy and responsible AI.

"We looked at existing principles of human subjects research and explored how they could apply to AI," said Kristen Greene, a NIST social scientist and one of the paper's authors. "There's no need to reinvent the wheel. Because research participants' data may be used to train AI, we can apply an established paradigm to make sure we are being transparent with them."

The Belmont Report arose from infamous research studies that treated human subjects unethically, such as the Tuskegee syphilis study. In 1974, the United States created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which identified the basic ethical principles for protecting people in research studies. In 1991, these principles were codified into U.S. federal regulation as the Common Rule, which requires researchers to obtain informed consent from participants. Adopted by many federal departments and agencies, the Common Rule was revised in 2017 to account for changes and developments in research.

The Belmont Report and Common Rule have a limitation, though: The regulations requiring the application of their principles apply only to government research. Industry is not bound by them.

The NIST authors suggest that the concepts be applied more broadly to all research involving human subjects. Databases used to train AI can contain information scraped from the web, yet the people that information describes may never have consented to its use, a potential violation of the "respect for persons" principle.

According to Greene, it is up to the private sector to decide whether to adopt ethical review principles.

Where the Belmont Report was concerned with the inappropriate inclusion of certain people in research, the NIST authors note that a major concern in AI research is inappropriate exclusion, which can create bias in a dataset against particular demographics. Prior research has shown that face recognition algorithms trained primarily on one demographic are worse at distinguishing individuals in other demographics.

The authors suggest that applying the report's three principles to AI research could be straightforward. Respect for persons would require subjects to give informed consent to what happens to them and their data. Beneficence would imply that studies be designed to minimize risk to participants. Justice would require that subjects be selected fairly, with care taken to avoid inappropriate exclusion.

Greene said the paper is best understood as a springboard for a discussion about AI and our data, one that would benefit both companies and consumers.

"We're not advocating for increased government regulation. We're advocating for thoughtfulness," she said. "We should do this because it's the right thing to do."


Paper: K.K. Greene, M.F. Theofanos, C. Watson, A. Andrews and E. Barron. Avoiding Past Mistakes in Unethical Human Subjects Research: Moving From AI Principles to Practice. Computer. February 2024. DOI: 10.1109/MC.2023.3327653
