[hpc-announce] CFP TEML23 Trustworthy and Ethical Machine Learning at TransAI

Lofstead, Gerald F II gflofst at sandia.gov
Mon Jun 19 12:01:31 CDT 2023

Call for Submissions for the Trustworthy and Ethical Machine Learning Workshop, part of the Transdisciplinary AI (www.transai.org) Conference, September 25-27, 2023, Laguna Hills, CA

Submission site: https://easychair.org/my/conference?conf=teml23

Workshop Webpage: https://sites.google.com/view/teml23/home

Submission Deadline: 11 August 2023
Submissions must use IEEE format, with a limit of 5 pages plus references.

Machine learning is often seen as a "magical" technique to solve data-related pattern recognition and generation problems. This general utility has led to ML's incorporation into a wide variety of fields. However, simply incorporating ML into a new field is not sufficient.

The first hurdle that should be addressed is whether or not the ML results can be trusted. Seemingly minor differences, such as the pseudo-random number seed or the train/test split ratio, can radically affect model accuracy. Other considerations, such as overfitting, properly selected training data, and proper model validation techniques, need to be employed to better understand how well the ML model corresponds to the system being modeled and what the limitations of that model might be. The Trustworthy ML Initiative's existence is evidence of the importance and broad reach of this topic.
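As a minimal illustration of this sensitivity, the hypothetical sketch below (standard library only; the synthetic data, nearest-centroid classifier, and seed range are all illustrative assumptions, not part of the call) measures how test accuracy varies when only the random seed used for the train/test split changes:

```python
import random

def make_data(n=200, seed=0):
    # Synthetic 1-D data: two noisy, overlapping classes
    rng = random.Random(seed)
    return [(rng.gauss(mu, 1.5), label)
            for label, mu in ((0, 0.0), (1, 2.0))
            for _ in range(n // 2)]

def split(data, test_ratio, seed):
    # Shuffle with a given seed, then hold out a test fraction
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def nearest_centroid_accuracy(train, test):
    # "Train": compute the mean feature value per class
    centroids = {label: sum(x for x, y in train if y == label) /
                        sum(1 for _, y in train if y == label)
                 for label in (0, 1)}
    # "Test": predict the class whose centroid is closest
    correct = sum(1 for x, y in test
                  if min(centroids, key=lambda c: abs(x - centroids[c])) == y)
    return correct / len(test)

data = make_data()
accs = [nearest_centroid_accuracy(*split(data, 0.3, seed)) for seed in range(20)]
print(f"min={min(accs):.2f} max={max(accs):.2f} spread={max(accs) - min(accs):.2f}")
```

Even with the model and data held fixed, the reported accuracy moves noticeably across split seeds, which is exactly why single-number accuracy claims warrant scrutiny.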

The second hurdle is the ethical consideration of the model's output and the impacts using ML may have on society and the world. Data considerations, such as covering a full spectrum of skin tones, hair types and styles, and other physical characteristics, are crucial to building a person-recognition model that does not discriminate against people not fully represented in the training data set. Further, training a model on data that is inherently discriminatory, such as house loan approvals that do not properly account for redlining, can end up reinforcing that practice even though the goal was to remove human bias. Other examples, such as predictive policing, can become self-reinforcing systems targeting particular groups and areas rather than truly helping to reduce crime.

With the rise of Large Language Models, the ethical considerations for their use and their societal impacts have become an important conversation topic. Ethical principles are needed that can help guide the use of these and other AI tools.

This workshop seeks to explore how to trust ML both from an "are the results something I can rely on?" perspective and from an "are these results fair and legal for everyone?" perspective. The solutions for both problems can be technological and/or social, with a broad-based solution using tools to identify human-affected features that could reduce trust or raise ethical questions about how ML was incorporated into an application or process.

This workshop contributes by sharing experiences and exploring the extent and boundaries of the problem spaces, as well as solutions and experiences that work within those bounds. The problem domain is not limited by the type of system nor by the data or application domain. Instead, this workshop focuses on how best to ensure that ML is working as intended and that it is not reinforcing bias or raising ethical questions about its results.

Topics of Interest:

- Definitions of ethical principles, with arguments justifying them
- Position, research, and experience papers related to trustworthy ML and ethics in ML (particularly the topics listed below)
- Explainable ML
- FAIR data principles for ML
- Ethical uses for ML
- Ethical data uses for ML model generation
- Evaluating the ethical standard for an ML model
- Privacy-preserving ML
- And other topics related to trustworthy ML and ethical ML

We are also investigating interest in a journal special issue related to ethical principles for AI/ML and/or HPC. If you are interested in this additional opportunity, please contact the organizers.

Jay Lofstead (gflofst at sandia.gov)
Randy Rannow
Roselyne Tchoua
