Researchers develop a framework to mitigate the potential dangers of AI

Researchers from Texas A&M University School of Public Health are developing a new governance model to mitigate the potential dangers of AI without hindering its advancement.

The model, known as Copyleft AI with Trusted Enforcement (CAITE), aims to provide ethical guidance to the rapidly advancing field of Artificial Intelligence (AI) while guarding against its potential harms.

The article, ‘Leveraging IP for AI governance,’ which details the new model, is published in the journal Science.

The potential dangers of AI stem from misuse

Artificial Intelligence has the potential to revolutionise almost every aspect of our lives, but misuse of AI-based tools can cause harm, especially to communities already facing discrimination. Although AI is often assumed to be objective, the human-generated data it is trained on can carry human biases.

An AI system trained on text scraped from websites has no understanding with which to separate useful information from harmful stereotypes. Numerous studies have documented the dangers that result from these biases; in one widely reported example, algorithms used to predict an offender's likelihood to re-offend produced racially biased results.

This potential for harm underscores the urgent need for ethical guidance in the form of regulation and policy. Creating such guidance is difficult, however, because AI advances rapidly while government regulation tends to be rigid and slow to adapt.

The CAITE model was developed by combining two existing methods

To combat the potential dangers of AI, Cason Schmit, JD, assistant professor at the School of Public Health and director of the Program in Health Law and Policy, Megan Doerr of Sage Bionetworks, and Jennifer Wagner, JD, of Penn State, developed the CAITE model. The model combines two methods of managing intellectual property rights: aspects of copyleft licensing and the patent-troll model.

Traditionally, these two approaches have been seen as opposites. Copyleft licensing allows intellectual property to be shared under conditions such as attributing the original creator or restricting use to non-commercial purposes, but such schemes usually have little enforcement power. The patent-troll approach, by contrast, relies on enforcement rights to compel compliance. By combining the two, the CAITE framework would ensure that AI users can report biases they discover in a model, helping to counter some of the dangers that flawed AI models pose.

The CAITE model would restrict unethical AI uses

The model is built on an ethical use license that requires users to abide by a code of conduct. Following the copyleft approach, developers who create derivative data or models must apply the same license terms as the parent work. The license then assigns its enforcement rights to a designated third party, the CAITE host. In this way, the enforcement rights for all of these ethical use licenses pool in a single organisation, allowing the CAITE host to act as an AI regulator.


“This approach combines the best of two worlds: a model that is as fast and flexible as industry, but with enforcement teeth and power of a traditional government regulator,” Schmit said.

Relying on a nongovernment party designated by the AI developer community could allow greater flexibility in enforcement and build trust in oversight. For example, the host can set the consequences for unethical conduct while promoting lenient policies that encourage self-reporting. By combining the two models, the framework helps prevent some of the dangers of AI by ensuring the ethical use of AI training datasets and models.

More work is needed before the CAITE approach can be implemented

Although the CAITE approach is flexible, it will require broad participation from the AI community. Further research and funding will also be needed to pilot the ethical policies built using the framework, and members of the AI community will have to help resolve whatever challenges implementation raises.

Despite the work that needs to be done, the researchers believe that industry will prefer the flexible CAITE framework to the stringent and slow-to-adapt regulations that governments could eventually impose.

“Efforts to promote ethical and trustworthy AI must go beyond what is legally mandated as the baseline for acceptable conduct,” Wagner said. “We can and should strive to do better than what is minimally acceptable.”

If implemented, CAITE could guard against the potential dangers of AI without hindering technological progress. The researchers argue that as AI spreads further into daily life, a responsive ethical framework of this kind will become increasingly valuable.
