The hype makes sense — robots can assist with delicate medical procedures, text can be pulled from pictures, translated into different languages and placed back onto the image, and your entire online experience can be curated into a perfect fit for you. Hell, when I wrote this piece I used a service built with artificial intelligence (AI) to transcribe some of the interviews I conducted.
But for every example where machine learning (ML) is going to revolutionize the way we live, you can find another in which the potential consequences don’t seem worth the risk. At best, neural networks trained on vast amounts of data may produce correlations that are useless; at worst, their findings may magnify the biases present in humans and become incredibly harmful. Over the past year, even students have been dealing with concerns surrounding privacy and the collection of sensitive data throughout UBC’s Proctorio saga.
Anouk Ruhaak is a current Mozilla Fellow who researches data governance models. They’re also the founder of Radical Engineers, an initiative that connects organizations challenging the status quo and working toward justice with volunteer software developers and designers, providing the technical resources those organizations need to effect positive change.
Ruhaak has experienced the fervour surrounding the potential of machine learning firsthand. Sometimes, they think, the excitement is justified. Very often, they find, it’s not.
“People tend to think that the machine becomes increasingly magical, because we don’t fully understand how we arrived at the answer,” they said. “But we’re still thinking it has some magical power, and it can tell us something that we don’t know.
“In those contexts, what I often find is that there’s very, very little awareness of all the different biases that creates.”
This is not to say that work isn’t being done to discover and catalogue the ways applications of machine learning — and artificial intelligence more generally — can go wrong. In fact, it’s quite the opposite. The AI ethics field is enormous, with countless academic papers already written on how simply correlating a bunch of data — letting patterns in one data set drive predictions about another — can perpetuate biases already present in that data (a classic example is Amazon’s resume screener, which taught itself to downgrade applications from women).
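To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch — the resumes, words and scoring rule are all invented for illustration, and this is not Amazon’s actual system. A naive scorer trained only on past hiring decisions ends up penalizing a word that merely signals group membership, because that word happened to correlate with rejection in the historical data:

```python
# Toy illustration of bias-by-correlation (invented data, not a real system).
from collections import Counter

# Hypothetical historical data: past decisions skewed against resumes
# containing the word "women's" (e.g. "women's chess club captain").
# 1 = hired, 0 = rejected.
history = [
    ("python java leadership", 1),
    ("java databases leadership", 1),
    ("python women's chess captain", 0),
    ("women's coding club python", 0),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(text):
    """Score a resume: +1 for each time a word appeared in past hires,
    -1 for each time it appeared in past rejections.
    Pure correlation, with no notion of merit."""
    return sum(hired_words[w] - rejected_words[w] for w in text.split())

# Two otherwise identical resumes; one mentions a women's organization.
print(score("python java leadership"))          # 3
print(score("python java leadership women's"))  # 1: the bias is reproduced
```

The scorer never sees gender as a feature — it simply inherits the historical pattern, which is exactly how correlational models launder bias in the training data into “objective-looking” output.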
But according to Ruhaak, just because the work is being done doesn’t mean the engineers and scientists working on AI are aware of it. The idea that the people developing and managing this technology might not consider the potential impacts of what they’re doing was something Ruhaak found deeply concerning.
“If you talk to the engineers who are doing the actual statistics … they’re often not that aware of it,” they said. “When you talk to them about it, they’re willing to gain the awareness. It’s just that no one’s ever actually brought it up.”
Artificial intelligence and machine learning aren’t the only aspects of computer science that raise ethical quandaries — though they certainly are prominent. The products created by software engineers, developers, statisticians and data scientists can be found in just about every aspect of society. As such, the concerns surrounding the products of computer science span issues such as personal data collection, intellectual property and even individual freedom.
Ruhaak said that Slack employees took to Twitter to explain they had spoken up about the implications of a feature before its launch, but that “no one was listening for it,” including product managers.
In late 2020, and again in early 2021, Google made headlines for firing two of its top researchers in AI ethics. One of the researchers fired, Dr. Timnit Gebru, was a co-author of the pioneering Gender Shades project, which found that some of the industry’s leading gender classification services were worse at accurately identifying women with dark skin compared to people with lighter skin.
According to Gebru, she was fired after refusing to retract or remove employee names from a then-unpublished paper on the harmful biases in the AI systems that Google’s search engine is built upon. After defending Gebru amid the controversy, the creator of Google’s ethical AI team, Dr. Margaret Mitchell, was fired as well.
The COVID-19 pandemic seemed to provide a “shock doctrine”-esque opportunity for Big Tech companies to push for the digitization of state functions. While tech corporations were selling their products as solutions to COVID-19-induced chaos, employees at companies like Amazon were being forced to work in what The Guardian called “unsafe, grueling conditions.”
In May 2020, Amazon posted a minimalist tweet over a black background to show solidarity for the Black Lives Matter movement. At the same time, it was selling its facial recognition technology — which has been found to misidentify portraits of Black Americans as mugshots — to police departments that already disproportionately incarcerate Black people.
In another example, Google employees collectively decided in 2018 that they would no longer work on AI for drones, which led the company to pull out of a Pentagon contract. Ruhaak detailed the event in a blog post, arguing that while employee activism isn’t the final solution to combating the negative impacts of tech, a software developer often wields greater power in their company’s proceedings than a warehouse worker does, and their voice complements other tools like “pressuring politicians to regulate, raising consumer awareness and engaging in protests.”
They also acknowledged that bringing up the consequences of your work is not an easy thing to do.
“[The people speaking up are] not heard because no one actually wants to hear it, because you’re gonna have this problem where you’re the one criticizing the thing everyone else is super excited about,” said Ruhaak in our conversation.
However, there can be more malicious reasons why certain viewpoints aren’t given the time of day in tech.
“Occasionally, if you’re unheard just because of who you are, like because of your gender or your ethnicity or whatever other thing, that is just frustrating and infuriating in its own right,” said Ruhaak.
UBC’s computer science department offers an elective course, CPSC 430, which dives into how computational technology interacts with society. Some other CS courses include modules on ethics, and faculty often highlight examples in their classes where the technical skills being covered have been misused.
At an individual level, CS students can seek out courses in other departments that cover ethical reasoning and communication skills. Outside of the classroom, students form clubs and organize events that prioritize inclusivity, diversity and accessibility. The remaining question isn’t whether these sorts of skills fit into a computer science education, but whether what’s being done to develop them is enough, and whether there is a responsibility to do more.
As part of this story, Ubyssey Science ran an ethics and tech survey to garner a better understanding of how computer science students at UBC thought about their work, and the department’s responsibilities in the face of the potential impacts of computational technology.
One survey respondent wrote that computer science students “are incredibly bad at this stuff and the current curriculum is both woefully insufficient and also a better one may not even help, you can’t change people's politics unfortunately.
“I saw student proposals for an AI project in a second year CPEN class and one person wanted to build a thing to detect criminals based on surveillance footage (uh because that went so well on the news last time), others wanted to rebuild the attention sensing pieces of Proctorio. If people consider it fine to build these things as a toy, what will they do at work?”
Dr. David Silver is an associate professor at the Sauder School of Business and director of the W. Maurice Young Centre for Applied Ethics. He’s also responsible for conducting internal-facing ethics audits of faculties at UBC. These audits interrogate what a successful graduate would look like to each department, in both values and technical skills, and then examine systematic ways the faculty can ensure it’s accomplishing those goals.
Silver has conducted audits for Sauder and the faculty of forestry, and is currently conducting audits on the faculty of land and food systems and parts of the engineering faculty. Silver said he recalled discussions around conducting an audit of the CS department, but they were halted by the pandemic.
Ruhaak believed that for university-level approaches to instill ethical reasoning skills that address tech industry problems, the lessons need to be accompanied by the historical context behind the technologies and their real-world consequences.
“Looking at the history of these technologies and how they’ve historically created inequalities or inequities … Who do they empower and who they do not empower?”
They also stressed the importance of “[having] some historical understanding of what has come before you.”
According to Silver, the university has a level of responsibility in educating students about the potential impacts of their labour.
“A profession, as I see it is, is two things,” he said. “It’s a mastery of a certain knowledge or skill and it’s a knowledge of how to ethically use that skill. That’s what makes a profession.
“… Universities over the last several hundred years, they’ve become more or less cognizant of that part of their role: Are we just about technical knowledge, or are we also about empowering our students to go out and use that responsibly?”
One such program started when a group of undergraduate students approached Harvard professors Dr. Barbara Grosz, known for her groundbreaking contributions to natural language processing and the advancement of women in science, and Dr. Alison Simmons, who specializes in philosophy of mind, about involving ethics in the CS curriculum. The two then put their heads together to develop a framework that wove ethical issues into the curricula without placing an undue burden on CS faculty.
From there, the Embedded EthiCS program was born — an initiative that pairs philosophy graduate students with computer science faculty members to develop modules on ethical issues that could emerge naturally from the technical content covered in class.
“So that in the best-case scenario, it doesn’t feel like something that's being added on in some ad hoc fashion, but it’s something that is actually really relevant to one’s course of study in computer science,” said Dr. Jeff Behrends, lecturer of philosophy at Harvard and co-director of Embedded EthiCS.
The developed course modules are all open source. Some topics include ethical tradeoffs in system design for an operating systems course, tackling censorship by compromising privacy in an advanced computer networks course and the ethics of natural language representation in a systems programming course.
According to Behrends, both student and faculty response to the initiative has been “tremendously positive.” Adopting Embedded EthiCS modules into a course is entirely up to the discretion of the course’s teaching staff and as of this academic year, the initiative is operating in roughly half of the undergraduate computer science offerings in any term.
“So [there’s] definitely tremendous enthusiasm from the computer science faculty, who are inviting philosophers into their classroom. There’s just no way we could do this without their enthusiasm,” he said.
Because the program arose from student awareness of the issues covered, and because the reasoning and communication skills are so intertwined with the technical aspects of the class, Behrends says students are finding the modules relevant to their educational experience as future computer scientists.
With any pedagogy — but especially within a field that evolves so rapidly — there is the concern that what students learn within the modules may no longer be timely or applicable by the time they finish their degrees. To combat this, Embedded EthiCS modules are designed with skills-oriented objectives. Instead of simply teaching students how an algorithm should be designed, the program focuses on empowering students to recognize a broad range of ethical issues and to evaluate them in different contexts.
“Those kinds of skills, especially the reasoning skills, aren’t topic specific,” said Behrends.
“Our hope is that for someone who’s gone through the computer science curriculum … [and] has encountered many Embedded EthiCS modules and has been given many opportunities to practice developing new skills, they’ll just be able to do some things in professional settings, or in political settings or in whatever social settings they find themselves in which testing methods are relevant.”
Furthermore, at the beginning of each term a module’s course is offered, the course instructor and a philosophy graduate fellow update the module to reflect changes in technology and current events. The end result is a “custom-built ethics module” every term, said Behrends.
It’s important to note that the Embedded EthiCS pedagogy isn’t necessarily the silver bullet for intertwining ethics with a CS education at every institution. Harvard and UBC are different schools with different populations and different approaches to education. Despite this, Behrends noted that the Harvard team is always interested in working with other institutions to “get the goods that we’re aiming for” in whatever form works best.
Dr. Sharon Stein, an assistant professor in the department of educational studies whose research focuses on ongoing colonial patterns of higher education, explained that if greater coverage of ethics in CS curricula doesn’t also explain why that coverage wasn’t there before, students won’t learn from those historical mistakes.
“There’s often this move to say, ‘Okay, we’ve excluded X knowledge, let’s bring it in,’” said Stein. “… When we do that without understanding this history and context of why and how those things have been excluded, generally, when we include them we do it in very conditional, instrumental ways that [don’t] really interrupt the continuity of business as usual.”
Certain schools of thought have been devalued in computer science for reasons often rooted in Anglo- and Euro-centrism, racism, sexism and just about every other -ism you can think of. Without reckoning with the field’s past problems, an initiative to introduce ethics to the curricula may be built on top of what allowed these inequities to occur in the first place.
Stein highlighted a few ways to help ensure incorporated equity and ethics in curricula aren’t just surface level.
First is to contextualize computer science at UBC within a broader ecology of knowledge, so that it’s understood that there are many different ways of relating to the world. This teaches that while knowledge systems are indispensable, each has its own limitations.
Second is to expand our idea of intellectual rigour so that it includes not just technical knowledge, but also an understanding of the impacts on, and accountabilities to, the different communities affected by the knowledge a discipline produces.
And third is to facilitate a culture of humility so that when new approaches to knowledge are presented, they’re not met with hostility. According to Stein, this humility ensures that those who feel a “commitment to understanding the systemic and historical ways that our institutions, our disciplines and us personally have been complicit in social and ecological harm … can do that without … falling into a spiral of shame or guilt that doesn’t necessarily go anywhere.”
Ruhaak pointed to the potential benefit of providing more opportunities within a CS education for students to interact with people from different disciplines and diverse backgrounds.
“I think, if you look at the tech sector right now, a lot of the front-end developers don’t come from CS backgrounds, but from majors in history or some other social sciences,” they said. “But that’s not true on the very deep, infrastructural levels [of software]. But that would be interesting to explore.”
However, this is much more easily said than done.
“We have sort of been conditioned to think that our worth is premised on what we know,” said Stein. “… Especially [for] people whose positions have been universalized, it can be extremely uncomfortable, disorienting to have that [universal application] questioned.”
Beyond the difficulties that can arise when students might be forced to confront topics they disagree with, other concerns arise regarding the limitations of even including ethical reasoning and communication skills in computer science curricula.
For one, even if an individual were to receive the greatest education on ethics in the universe, there is only so much that they can control about the outcomes of their work.
“I don’t think there’s anything that anyone can do to ensure that they’re not doing something that won’t be misused by others,” said Dr. Karon MacLean, a professor in the department of computer science who researches human–computer and human–robot interaction. “A lot of us in the university environment are working on stuff that's quite fundamental and is very far from being applied. It’s going to branch and go so many different directions … so it’s just impossible to really predict that.”
Dr. Cinda Heeren, a professor in the department of computer science, also pointed out that technology developed through ethically sound mechanisms could just as easily turn sour in the wrong hands.
“You can think of it as the same technology underlies the atom bomb as nuclear energy. I don’t even know if that’s technically true, but it’s in the wielding of it, not the not the technology itself, necessarily,” she said.
(Nuclear reactors and atom bombs both utilize nuclear fission, but they differ in how the fission is controlled and how much their fuel is enriched with fissile material.)
There are also those who think that university is too late to ask people to begin thinking critically about how their work and actions impact other people. In response to The Ubyssey’s ethics and tech survey post, one Reddit user wrote that “Social studies in high school [has] already covered the part of how to be a decent citizen.”
Ideally, high school graduates would arrive at post-secondary with infallible ethical reasoning skills, but until then, Stein said, the only place educators can meet people on these topics is where they already are.
“It would be great if we start from birth to learn to be more sober, mature, discerning, accountable people,” she said. “… But we start where we’re at, and that’s all we can do, right?”
It’s also important to keep in mind that while computer science graduates start at the bottom of the corporate ladder, over time they often work their way up into leadership positions. It’s in these roles that the value of instilling ethical considerations shows itself.
“When I look at the leadership of the tech sector now, there’s a wide variation of some that are actually showing some leadership and some, you’re just like, ‘I can't believe how clueless they are,’ in terms of what values are at stake and how to manage them,” said Sauder’s Dr. David Silver.
Because of the influence of these roles, Silver thinks that resistance to integrating ethical understanding and reasoning skills into the curriculum should be faced head-on.
“I think it’s resistance that can and should be overcome.”
Today, almost every facet of society uses and benefits from computing technology, whether it be governance or communication or education. The invention of the world wide web means you can connect with someone just about anywhere on the planet. At the same time, “a handful of Big Tech corporations now wield more power than most national governments.”
As for who might ensure that the weight of these issues is expressed throughout a computer science degree — well, it’s really up to everyone.
“It’s the job of the youth to push institutions, to challenge them, to not be caught up in the momentum of the way things have been done,” said Silver. “… Students have a responsibility because they’re the ones experiencing their education, they can see when it’s not working.
“… And it’s our job [as faculty] sometimes to say, ‘That’s a little nuts.’ Or to respond like, ‘Oh, okay, you got it.’”
“We need to be inculcating our students with the idea of being a global citizen, a part of society,” said MacLean. “We shouldn’t be training you just to be a technologist and thinking about great research topics, or how to be a really good computer scientist and code really well. We also have to be teaching you to be alert to ethics issues, so that when you go out and are practicing your trade … that you're watching out for this stuff as it develops.
“Because it’s our students who are the ones who are going to be developing applications, putting this into technology that is directly affecting people’s lives. … You’re on the frontlines. You’re the ones who can say ‘No, this is not right. I shouldn’t be doing this, this will have big societal consequences which I do not want to be part of.’
“And that’s where the line needs to get drawn.”