Contribution to the PDC 2020 Interactive Workshop "Computing Professionals for Social Responsibility: The Past, Present and Future Values of Participatory Design".
By Kenneth R. Fleischmann, Ph.D.
Professor, School of Information; Founding Chair, Good Systems, a UT Grand Challenge; The University of Texas at Austin
Contact: [email protected]
Designing Artificial Intelligence (AI)-based systems that are compatible with human values requires more than just socially responsible computing professionals. Because AI is already being deployed across a wide range of domains and has the potential to transform our lives in fundamental ways, the right and responsibility to ensure that AI is compatible with human values should not belong to computing professionals alone. Within the academy, many fields have expertise relevant to socially responsible AI, including not only the STEM disciplines but also the social sciences and humanities. Many academic efforts to promote ethical and socially responsible AI are led by computer scientists who are brilliant in their own domains and have good intentions, but who have less expertise in understanding the societal implications of technology. Ethicists have been debating what is “good” and how to measure and achieve it for thousands of years; anthropologists have been observing how values shape and are shared via culture for hundreds of years; and social psychologists have been measuring how individuals and societies prioritize values for decades. Efforts in academia therefore need to engage diverse fields, not just computer science. Achieving this goal requires significant effort both from computer scientists, to recognize and value other types of expertise, and from humanists and social scientists, to communicate their ideas in ways that make them easier to understand and apply. Further, such efforts should not take place in the academy alone – it is critical for many stakeholders to be engaged, including corporations, governments, and nonprofits. Ensuring that AI is designed and used in socially responsible ways requires a significant effort across disciplines and sectors.
Good Systems, a UT Grand Challenge, is a campus-wide research effort at The University of Texas at Austin involving convergent research collaborations among humanists, social scientists, and technologists, and engaging corporate, government, and nonprofit partners. Good Systems formally launched in fall 2019 with two events: a campus-wide launch featuring academic presentations by many members of our 150+ member interdisciplinary Good Systems Network, and an industry-facing launch at Capital Factory, the downtown Austin technology incubator that serves as the home base for entrepreneurs and startups in the Silicon Hills, featuring panelists from five major technology companies and fast-growing startups. During its first year,
Good Systems provided seed grants of $100,000 each to ten one-year research projects, each of which includes a mix of humanists, social scientists, and technologists conducting research on how to define, evaluate, and build socially responsible AI-enhanced socio-technical systems.
Speaking from personal experience, the development and launch of Good Systems has allowed me to build connections across campus and with a wide range of stakeholder organizations. Good Systems is led by an Executive Team that includes eight faculty members from six colleges or schools, and the broader Good Systems Network includes faculty, researchers, staff, and students from 25+ departments and research units, ranging from the iSchool to Philosophy to Computer Science to Journalism to the Dell Medical School to Architecture to Engineering to the UT Libraries to the Texas Advanced Computing Center. Good Systems has also enabled collaborations with industry, government, and nonprofits. For example, concurrently with planning Good Systems, I have been funded on research projects by and with Microsoft Research (led by Danna Gurari from the iSchool) and the Cisco Research Center (with Sherri Greenberg from the LBJ School of Public Affairs), involving biweekly or monthly meetings with our industry collaborators. Good Systems recently launched a new partnership with the City of Austin, including a highly successful workshop that brought together 30+ researchers from across UT-Austin and 30+ public servants from the City of Austin and the State of Texas. I have also been funded by nonprofits such as the Micron Foundation (led by Matt Lease from the iSchool, with Sharon Strover and Talia Stroud from the Moody College of Communication and Sam Baker from the Department of English) and the Public Interest Technology University Network (with Amelia Acker and Eric T. Meyer from the UT-Austin iSchool and Patricia Garcia, Casey Pierce, and Kentaro Toyama from the University of Michigan iSchool), which is convened by the Ford Foundation, the Hewlett Foundation, and New America. The latter network, of which UT-Austin is one of the 21 founding members (represented by iSchool Dean Eric T. Meyer), funded a conference that we held in Austin in March, Informatics Education 2020. The conference featured keynote speeches by New America VP for Public Interest Technology Cecilia Muñoz, City of Austin Chief Information Officer Stephen Elkins, Texas Workforce Commission Chair Bryan Daniel, and Microsoft Research Senior Principal Researcher and Research Manager Meredith Ringel Morris, and brought together participants from 40+ organizations, including major public research universities, minority-serving institutions, polytechnics, community colleges, Ivy League schools, nonprofits, industry, and government. Developing socially responsible AI will require these kinds of new opportunities for stakeholders across academia, industry, government, and nonprofits to meet and collaborate.