[hpc-announce] CFP deadline 27 August Workshop on Trustworthy and Ethical Machine Learning
Lofstead, Gerald F II
gflofst at sandia.gov
Mon Aug 22 08:28:06 CDT 2022
Deadline: 27 August 2022
Response to authors: 7 September 2022
Workshop held at TransAI 2022 the week of 15 September 2022
https://easychair.org/conferences/?conf=teml22
Machine learning is often seen as a "magical" technique to solve data-related pattern recognition and generation problems. This general utility has led to ML's incorporation into a wide variety of fields. However, simply incorporating ML into a new field is not sufficient.
The first hurdle is whether the ML results can be trusted. Differences as small as the pseudo-random number seed or the train/test ratio can radically affect model accuracy [1]. Other considerations, such as avoiding overfitting, properly selecting training data, and applying proper model validation techniques, are needed to understand how well the ML model corresponds to the system being modeled and what the model's limitations might be. The Trustworthy ML Initiative's existence is evidence of the importance and broad reach of this topic.
The second hurdle is the ethical considerations of the model's output and the impacts using ML may have on society and the world. Data considerations, such as covering a full spectrum of skin tones, hair types and styles, and other physical characteristics, are crucial to building a person recognition model that does not discriminate against people underrepresented in the training data set. Further, training on inherently discriminatory data, such as house loan approvals shaped by redlining, can end up reinforcing that practice even when the goal was to remove human bias. Other examples, such as predictive policing, can become self-reinforcing systems that target particular groups and areas rather than genuinely helping to reduce crime.
This workshop seeks to explore how to trust ML from both the "are the results something I can rely on?" perspective and the "are these results fair and legal for everyone?" perspective. Solutions to both problems can be technological and/or social, including broad-based approaches that use tools to identify human-introduced features that could reduce trust or raise ethical questions about how ML was incorporated into an application or process.
This workshop contributes by sharing experiences and exploring the extent and boundaries of these problem spaces, as well as solutions and practices that work within those bounds. The problem domain is not limited by the type of system nor by the data or application domain. Instead, this workshop focuses on how to best ensure that ML is working as intended and that it is not reinforcing bias or raising ethical questions about its results.
Topics of Interest:
* Position, research, and experience papers related to trustworthy ML and ethics in ML (particularly the topics listed below)
* Explainable ML
* FAIR data principles for ML
* Ethical uses for ML
* Ethical data uses for ML model generation
* Evaluating the ethical standard for an ML model
* Privacy preserving ML
* And other topics related to trustworthy ML and ethical ML
Organizers:
Jay Lofstead (Sandia)
Jakob Luettgau (UTK)
Margaret Lawson (Google)