19 October 2024
Due to many requests, we decided to extend the submission deadline to June 14 (AOE).
With the development and spread of AI techniques, ensuring that the behaviour of AI systems adheres to legal and ethical principles has become a major concern. Widespread fear of the unintended effects of AI systems, through their actions and their use of personal data, has led to a strong demand for trustworthy AI. This concern has become prominent both in public opinion and on policy makers' agendas. The EU High-Level Expert Group on AI, convened by the European Commission in 2018, published the report "Ethics Guidelines for Trustworthy AI", which says that AI systems should be:
* lawful, complying with all applicable laws and regulations
* ethical, ensuring adherence to ethical principles and values
To achieve "TrustWorthy AI", there is a need to develop software systems that reason about human values and legal/ethical norms, implement these values through legal/ethical norms, and ensure the alignment of behaviour with those values and legal/ethical norms.
This workshop focuses on human values and compliance mechanisms for legal/ethical norms. The workshop has two tracks:
* Value Engineering and Value-Aware AI (VALE track)
* AI Compliance Mechanisms for Legal/Ethical Norms (AICOM track)
Authors should go to the page of the track to which they would like to submit their paper and follow the instructions there.
Submission due: June 14, 2024 (AOE) (extended)
VALE track
Nardine Osman, Artificial Intelligence Research Institute (IIIA-CSIC), Spain
Luc Steels, Barcelona Supercomputing Center, Spain
AICOM track
Gauvain Bourgne, Sorbonne University, France
Jean-Gabriel Ganascia, Sorbonne University, France
Adrian Paschke, Freie Universität Berlin and Fraunhofer FOKUS, Germany
Ken Satoh, National Institute of Informatics, Japan