Executive Summary:

AI-based systems are increasingly being integrated into domains such as healthcare, finance, job recruitment, cyber-security, criminal justice, and intrusion detection. In high-stakes applications like medicine and security, however, people may not fully rely on AI due to a lack of trust. To harness the power of AI in these areas, adoption is moving towards AI-assisted decision making. This project aims to develop teams of multiple humans and AI models in different settings, each satisfying different goals.

The first objective is to propose a model that combines hard labels from humans with probabilistic outputs from AI models, intelligently selecting which humans to consult in order to improve overall accuracy. The second objective accounts for the cost of seeking expert advice and incurs it only when required: a deferral module can be designed to decide whether to seek expert advice, providing a trade-off between cost and accuracy.

Fairness is another important consideration in decision-making processes, and researchers have raised concerns about biased training of machine learning models used in societal applications. The third objective therefore focuses on fairness in the context of human-AI teams. The project will investigate what happens when both the AI and the humans are biased, and whether the team can still produce fair decisions. Some work has been done in this direction, but it frames the task as a reject classifier problem, in which the AI defers an example to a human when it is not confident about it, leading to better accuracy and less biased decisions.
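As a rough illustration of the first two objectives, the sketch below shows one possible way to fuse hard human labels with an AI model's probability vector, together with a simple threshold-style deferral rule that weighs the AI's expected error against the cost of consulting an expert. It is only an illustrative sketch under simplistic assumptions (e.g. conditional independence of annotators); the function names, weighting scheme, and cost values are hypothetical and are not the project's proposed method.

```python
import numpy as np

def fuse_predictions(ai_probs, human_labels, human_accuracies, n_classes):
    """Combine an AI probability vector with hard labels from selected humans.

    ai_probs         : array (n_classes,) of AI class probabilities
    human_labels     : hard labels (class indices) from consulted humans
    human_accuracies : estimated reliability of each human, used as a weight
    """
    scores = np.array(ai_probs, dtype=float)
    for label, acc in zip(human_labels, human_accuracies):
        # Turn each hard label into a soft vote weighted by that human's
        # estimated accuracy, spreading the remaining mass over other classes.
        vote = np.full(n_classes, (1.0 - acc) / (n_classes - 1))
        vote[label] = acc
        scores = scores * vote  # naive product rule (independence assumption)
    return scores / scores.sum()

def should_defer(ai_probs, expert_cost, error_cost=1.0):
    """Defer to a human expert only when the expected gain outweighs the cost.

    The AI's expected error when acting alone is (1 - max probability);
    deferring is worthwhile when that expected error cost exceeds the
    fixed cost of querying the expert.
    """
    expected_error = 1.0 - float(np.max(ai_probs))
    return expected_error * error_cost > expert_cost

if __name__ == "__main__":
    ai_probs = np.array([0.55, 0.35, 0.10])  # AI is not very confident here
    if should_defer(ai_probs, expert_cost=0.2):
        # Two hypothetical humans with different reliabilities both vote class 0.
        fused = fuse_predictions(ai_probs, human_labels=[0, 0],
                                 human_accuracies=[0.9, 0.7], n_classes=3)
        print("fused prediction:", int(np.argmax(fused)), fused.round(3))
    else:
        print("AI decides alone:", int(np.argmax(ai_probs)))
```

In this toy setting the deferral rule is a fixed cost threshold; the project's deferral module would instead be learned, and the human-selection step would choose which experts to query rather than assuming they are given.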