Research

Research by TRAILS faculty, postdocs and students will build upon the combined strengths of the three primary institutions:

  • University of Maryland’s expertise in machine learning, artificial intelligence and the understanding of sociotechnical systems

  • George Washington University’s expertise in law, policy, governance, human-computer interaction and sociotechnical systems engineering

  • Morgan State University’s expertise in developing forward-looking education modules and working with diverse communities

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem:

  • Participatory Design

    We will create new AI technology and advocate for its deployment and use in ways that align with the values and interests of diverse groups of people, particularly those who have previously been marginalized in the development process.

    Lead: Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems, particularly as they relate to big data.

  • Methods and Metrics

    We will develop novel methods, metrics and advanced machine learning algorithms that reflect the values and interests of relevant stakeholders, allowing them to understand the behavior and best uses of AI-infused systems.

    Lead: Tom Goldstein, a UMD associate professor of computer science who leverages mathematical foundations and efficient hardware for high-performance systems.

  • Sense-Making

    We will evaluate how people make sense of AI systems, and the degree to which a system’s reliability, fairness, transparency and accountability can lead to an appropriate level of trust.

    Lead: David Broniatowski, a GW associate professor of engineering whose research focuses on decision-making under risk, group decision-making, system architecture, and behavioral epidemiology.

  • Participatory Governance

    We will explore how policymakers at all levels in the U.S. and abroad can foster trust in AI systems, as well as how policymakers can incentivize broader participation, accountability, and inclusiveness in the design and deployment of these systems.

    Lead: Susan Ariel Aaronson, a research professor of international affairs at GW who is an expert in data-driven change and international data governance.

Morgan State faculty and students—led by Virginia Byrne, an assistant professor of higher education and student affairs—will interact with all four research thrusts, focusing on community-driven projects related to the interplay between AI and education.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS to develop meaningful measures, benchmarks, test beds and certification methods, particularly as they apply to topics essential to trust and trustworthiness, such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.