Faculty Perspectives – May 2025

We posed the same question to a faculty member from each of our three departments: Computer Sciences, Statistics, and the Information School. Read their full responses below.

Question: What does ethical decision-making look like in your discipline, and how do you prepare students to respond to the challenges it raises?

Frederic Sala

Assistant Professor in Computer Sciences

The promise (and potential peril) of AI is that it lets us engage with so many more situations and scenarios. Previously, we could automate a process or task only when we could implement an algorithm for it by hand; now we are often much more flexible! Much of AI comes down to “showing” models what to do, often through plain-language descriptions. This has led to an explosion in AI use cases, and we’re still just scratching the surface.

This ability to show up and play a role in so many more situations means that AI technologies also encounter ethical challenges and dilemmas with increasing frequency. Sometimes these are straightforward: if a model assisting with diagnosis in a medical setting is confused, it makes sense to demand that it raise a flag and call for the help of a human expert, such as a doctor, rather than make a potentially dangerous decision unilaterally. In other cases, things are less clear and complex tradeoffs are involved.
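To make the diagnosis example concrete, here is a minimal sketch of confidence-based deferral in Python; the threshold and probabilities are illustrative assumptions, not any particular system’s settings.

```python
import numpy as np

def predict_or_defer(probs: np.ndarray, threshold: float = 0.9):
    """Return the predicted class index, or None to flag the case for a human."""
    top = int(np.argmax(probs))          # most likely class
    if probs[top] >= threshold:
        return top                       # confident enough: act on the prediction
    return None                          # "confused": raise a flag, defer to the expert

# A model that is only 55% sure defers rather than deciding unilaterally.
print(predict_or_defer(np.array([0.55, 0.45])))  # -> None (defer to a human)
print(predict_or_defer(np.array([0.97, 0.03])))  # -> 0    (confident prediction)
```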

The advice I give my students is that when we are building and deploying AI, we should be as aware as possible of where ethical challenges lie, what these may be, how our tools and technologies interact with them, and what is necessary to address them. At the same time, like the model doing diagnosis, we should not try to unilaterally “resolve” these issues. We are trained researchers, scientists, and engineers—that does not mean we know everything about moral philosophy, ethical standards, or the law. Instead, our goal is to open channels of communication with those who are experts in these areas, and to be able to convey, stripped of the technical jargon, what our technology can and cannot do. I believe the most productive way to address ethical challenges comes down to these conversations. 

 
Clinton Castro

Assistant Professor in the iSchool
 
My discipline is philosophy. Mary Midgley once compared this to plumbing, noting that “both activities … arise because elaborate cultures like ours have, beneath their surface, a fairly complex system which is usually unnoticed, but which sometimes goes wrong.” 
 
In keeping with this image, philosophers strive for certain kinds of precision and depth. In the service of precision, the philosopher might pause to analyze terms—such as “bias”—before discussing whether, for example, an algorithm is biased. And in the service of depth, they might question assumptions. It might be assumed, for instance, that biases are themselves bad. But this likely isn’t so. Instead, it’s likely the case that biases are bad when they violate certain norms, such as norms of fairness. And once we make this observation—once we’re invoking concepts like fairness—we’ve waded into one of the complex systems Midgley alerts us to.
 
There are many things that can be done to prepare students for plumbing these depths. First and foremost, it’s helpful to show them that it’s doable, rewarding, and even fun. Well-structured, in-class discussions of open-ended questions about real-world cases are great for this. It’s also helpful to give students tools that help them make real progress in these discussions. To that end, it’s useful to get acquainted with writing on some of the mainstays of ethical thinking, such as well-being, autonomy, and fairness. It’s also useful to introduce key ideas from formal logic that are vital for testing (and producing) arguments for or against proposed answers to ethical questions.
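As one small illustration of that logical toolkit, the sketch below tests an argument form (modus ponens) for validity by brute-force truth-table search; the encoding here is an assumption made for this example, not material from any particular course.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# An argument is valid when no truth assignment makes every premise true
# while the conclusion is false. Premises: "if p then q" and "p". Conclusion: "q".
valid = True
for p, q in product([True, False], repeat=2):
    premises_hold = implies(p, q) and p
    if premises_hold and not q:
        valid = False                    # found a counterexample
print("Modus ponens is valid:", valid)   # -> True
```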

 

Cécile Ané

Professor in Statistics and Botany
 
In statistical practice, technical skills are important, but ethics is foundational. The first issues that come to mind are data fabrication and data falsification. Beyond these, integrity enters all kinds of day-to-day decisions for statisticians: respecting the privacy and confidentiality of human subjects, advocating for blinding and adequate power when designing a study, presenting data without bias, and being mindful to avoid “fishing and cooking” when making the myriad decisions that arise during data analysis. When is it justified to exclude data points? Should a variable be transformed? Discretized? Can I pick the transformation that gives me the result I would like to get? There might be pressure from collaborators to find some expected outcome. There is also the temptation of “HARKing” (hypothesizing after the results are known). Data dredging is especially relevant for large data sets with many variables, where there is a garden of forking paths: many possible models and many options to choose from.
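To see why the garden of forking paths is dangerous, consider the small simulation below: it tests 100 pure-noise variables against a pure-noise outcome, and by chance alone roughly five of them clear the conventional p < 0.05 bar. The sample sizes and cutoff are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 50, 100
outcome = rng.normal(size=n_subjects)        # pure noise: no real signal anywhere

false_hits = 0
for _ in range(n_variables):
    noise_variable = rng.normal(size=n_subjects)
    r, p = stats.pearsonr(outcome, noise_variable)
    if p < 0.05:
        false_hits += 1                      # "significant" by chance alone

print(f"'Significant' correlations among pure noise: {false_hits}/{n_variables}")
```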
 
Unethical statistical practice can lead to a replication crisis or, worse, to wrong downstream decisions. In medical research, this could harm patients. In our courses, we promote the “Ethical Guidelines for Statistical Practice” developed by the American Statistical Association. When I teach statistical consulting, ethics is a major topic. We were fortunate to have a guest lecture by Professor David L. DeMets, founding chair of the Department of Biostatistics and Medical Informatics. He quoted William R. Hiatt: “If you have integrity, nothing else matters. If you don’t have integrity, nothing else matters.”