Research and the liminal spaces
The following text was presented at the 4th Radical Research Summit in Vancouver, B.C., on September 27, 2019.
My background is in Design, Social Sciences and Humanities, which makes me lean towards qualitative studies or studies that involve visualization as a method of analysis. I have been doing research in the private sector for a bit more than a couple of years now, after a reluctant transition from academia. Reluctant, because like most humanists and social scientists coming out of doctoral studies, I was expected to find a job within academia. Instead, I ended up in a liminal space, a space between academia and the private sector. Certainly a nice space, but not what I was supposed to consider a win.
The company that I work for, Charitable Impact, is a fintech operating in the charitable giving space. Among other things, we provide people with a platform on which they can search for, choose and give money and other kinds of assets to charities across Canada. Charitable Impact is not a big company, and although I am not really sure about this, I assume most fintechs of similar size do not have a devoted research team. The core research team has three members (Bella Margolles, Lise Owens and me), a part-time research assistant (Jason Proulx) and Tanmay Deshpande, a close collaborator and advisor for the project.
Being a research team for a small company comes with budget constraints, but it has worked pretty well for us: the company is immensely supportive of our roles, and the constraints have forced us to become more resourceful and confident as researchers. In my particular case, I think that being a designer has been crucial in defining how I approach research in general: I tend to see research as no different from any other design product I have dealt with before. It always starts with a question.
Today I would like to focus on one specific project (a case, if you will) that might not come up frequently in your own practice or might not be perceived as a typical task for a research team, one that brought qualitative research into a data-driven project: our charity search algorithm.
Let me give you some context; bear with me:
Context
- In Canada, charities obtain their status from the Canada Revenue Agency.
- The CRA then categorizes the charities according to their own system and keeps an updated record of new and revoked charities.
- This information is public, updated weekly and monthly, and made available in spreadsheet format to whoever requests it.
There are more than 85,000 charities in Canada. You can see how the prospect of a searchable database of charities is appealing to potential donors.
Searching for charities online has been possible since at least late 2000, but pretty much all the portals that have enabled this (including ours) are based on lexical search, the kind of search that requires you to type the exact word to get a result, just like the Find function in a word document. Now, although many people have very clear preferences about when, where and how to give to charity, many others don’t. The latter group has it really hard, because they often have to wait for charities to approach them in order to give. Searchable databases are meant to help everyone identify charities and donate, but given their technical limitations, they mostly benefit those who already know what they want to give to.
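To make that limitation concrete, here is a minimal sketch of lexical search in Python. The charity names and queries are invented for illustration; they are not records from our database.

```python
# A minimal sketch of lexical (exact-match) search.
# Charity names and queries are hypothetical examples.
charities = [
    "Vancouver Food Bank Society",
    "Downtown Eastside Meal Program",
    "BC Hunger Relief Foundation",
]

def lexical_search(query, records):
    """Return only the records that contain the query as a literal substring."""
    q = query.lower()
    return [r for r in records if q in r.lower()]

print(lexical_search("food bank", charities))       # ['Vancouver Food Bank Society']
print(lexical_search("hunger", charities))          # ['BC Hunger Relief Foundation']
print(lexical_search("feeding people", charities))  # [] -- nothing, despite three relevant charities
```

A donor who does not already know the exact words in a charity’s record gets nothing back, which is precisely why this kind of search mostly serves people who already know what they are looking for.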
Late last year, we decided to improve the experience of searching for Canadian charities by expanding the functionality of search and providing donor-centric tools for discovery and recommendation. We decided to do this by recategorizing charities from a tax-oriented classification to a more human-friendly one, using machine learning. This required a training set, which the research team provided, along with the categorization model itself.
The dataset
A training dataset is the set of examples that a machine learning model learns from before performing its intended task. In this case, it is a sample of charities already categorized under the new parameters. Those familiar with this kind of work know that, technically speaking, the process is pretty straightforward and does not require the intervention of a research team.
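For illustration, a slice of such a training set might look like the sketch below. The charities, descriptions and category labels are hypothetical, not actual records from our dataset.

```python
# A hypothetical slice of a training set: each entry pairs a charity's
# public description with the human-assigned categories it should fall
# under. All names, descriptions and labels here are invented.
training_set = [
    {
        "name": "Example Harm Reduction Society",
        "description": "Supports people recovering from opioid addiction.",
        "categories": ["Health", "Social services"],
    },
    {
        "name": "Example First Nations Radio Society",
        "description": "Community radio broadcasting in Indigenous languages.",
        "categories": ["Arts & culture", "Indigenous communities"],
    },
]
```

The model’s job is then to learn the mapping from descriptions to categories well enough to label the remaining tens of thousands of charities.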
But charity is a complex topic. The charitable sector represents roughly 13% of Canada’s GDP, about 239 billion dollars. In relatively recent times, philanthropy has incorporated business practices (a development known as philanthrocapitalism), which has been extensively denounced for the implicit power dynamics (e.g., lobbying) associated with it.
Online giving has been seen as a way of democratizing philanthropy, but of course it comes with the limitations and particularities of the medium. We tried to address these particularities by taking the creation of this training dataset upon ourselves. For starters, like any other data coming from a fairly dated source, there was a high chance that this data would carry some sort of bias, and if we were going to harvest anything from the already existing categorization, we needed to be perfectly aware of those biases.
During our analyses of the data, we discovered several categorization issues. I’ll give you just three examples:
- When we started the process, the CRA had a category called “Temperance associations”, under which charities related to addiction treatment were classified. Addiction is a health issue that requires treatment; temperance is a concept that refers to self-restraint, an artifact of dated views on addiction. If you want to support a charity that deals with, let’s say, a current health crisis such as opioid addiction, you won’t find those charities under health, but under temperance.
- Most (if not all) causes led by or related to Indigenous people were classified as outreach causes. I am talking about radio stations, museums and environmental advocacy groups, among many other institutions with charitable status. This is not only inaccurate from a categorization point of view, but also harmful to Indigenous people, as it perpetuates historical stereotypes.
- Many of the same terms are used by both pro-life and pro-choice non-profits across Canada, so the two are hard to tell apart in the data. This differentiation might be crucial for those who identify with either of these two camps.
Now, I want to be clear about something: although each member of the research team has opinions, and in most cases those opinions are biased, our task was not to make moral judgements about the charities but to be as objective as possible in terms of the outcome for donors. These personal biases are managed through acknowledgement and discussion as a team. In the end, helping people find the causes they care about was more important than those personal biases.
The reference data had so many identifiable issues that we ended up ignoring the CRA categorization completely. We decided to define a statistically representative sample of all the charities in Canada and categorize it from scratch. Understandably, discarding the existing model implied putting another one in place. This process entailed making decisions, and each one of those decisions had to be well informed, which required a lot of research.
Some decisions were easy, based mostly on common sense, such as allowing charities to be classified in more than one category. Others, such as deciding which categories to consider, required studying and comparing existing models and determining what would work best for our own internal purposes. Some required interviewing donors to establish how much information they needed in order to make decisions. Others required testing, like establishing the interplay between categories and keywords in the minds of donors. Some required analysis of public discourse and sentiment, such as determining whether or not to associate the word genocide with causes that support missing and murdered Indigenous women. These were human decisions, decisions that involved taking a critical approach to the problem and engaging in philosophical conversations, always as a team.
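To give a sense of how the first of those decisions (multiple categories per charity) translates into modelling terms, here is a minimal multi-label classification sketch in Python with scikit-learn. The descriptions, labels and model choice are assumptions made for illustration; they are not our production pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Invented training examples: each charity can carry several categories.
descriptions = [
    "Food bank serving families in East Vancouver",
    "Community radio station run by a First Nation",
    "Clinic offering treatment for opioid addiction",
    "After-school arts program for inner-city youth",
]
labels = [
    ["Social services"],
    ["Arts & culture", "Indigenous communities"],
    ["Health"],
    ["Arts & culture", "Education"],
]

# Turn the label sets into a binary indicator matrix, one column per category.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# One binary classifier per category over TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(descriptions, y)

# With this little data the output is illustrative only.
predicted = model.predict(["Radio workshops for Indigenous teens"])
print(mlb.inverse_transform(predicted))
```

The mechanics scale to the full taxonomy; the substantive work is in the labels themselves, which is exactly where the research happened.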
The algorithm
Eventually, these decisions had to be expressed in mathematical terms to build the categorization algorithm. When that process started, the research team shifted into an advisory and support role. We started having weekly meetings with the development team in order to fine-tune, tweak and account for the inevitable exceptions that come with this kind of project. We reviewed and tested the already categorized charities to make sure the algorithm was not picking up on biases that had not been accounted for in the original sample. In the end, what could have been a data science and development task ended up being a collaborative endeavour between a predominantly qualitative research team and developers.
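As an example of the kind of review just described, the sketch below counts per-category disagreements between the team’s labels and the algorithm’s output, so that systematic drift on sensitive categories stands out from random noise. All identifiers and labels are invented for illustration.

```python
from collections import Counter

# Hypothetical human labels vs. hypothetical model output.
human_labels = {
    "charity-001": {"Health"},
    "charity-002": {"Arts & culture", "Indigenous communities"},
    "charity-003": {"Social services"},
}
model_labels = {
    "charity-001": {"Health"},
    "charity-002": {"Community outreach"},  # the kind of slippage we watched for
    "charity-003": {"Social services"},
}

# Count every category that appears on one side but not the other.
disagreements = Counter()
for charity, expected in human_labels.items():
    predicted = model_labels[charity]
    for category in expected.symmetric_difference(predicted):
        disagreements[category] += 1

for category, count in disagreements.most_common():
    print(f"{category}: {count} disagreement(s)")
```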
But what is the point of all this? Was it necessary? Was it worth it? you might ask. Yes, it was, and not only for our own purposes. You might have heard comments about how algorithms are ruining everything, but how often do you hear ideas about how to solve the issue? Well, we did a bit of research into this. When asked about her views on how to deal with biased algorithms, Dr. Shannon Vallor, professor at Santa Clara University and AI Ethicist and Visiting Scholar at Google, offered a potential solution in an interview with David McRaney on the You Are Not So Smart podcast. Dr. Vallor said the following:
“We can’t have designers who are simply resting on the knowledge they have as computer scientists or engineers, we need technologists who also have an understanding of history, of human social dynamics, of ethics and politics, because those are the only forms of knowledge that would help them make a distinction between the kinds […] outputs that are unethical and we don’t want, and the kinds that are useful and we do want.”
— Dr. Shannon Vallor
Finding these all-knowing technologists might be hard, but we found a solution by getting the research team involved in the process from a very early stage. In fact, by providing the training dataset for the algorithm, we found a very practical way to address Dr. Vallor’s proposal.
I realize that not every researcher or research team will have the opportunity to engage in a project like this, but it also makes me think, how many other predominantly data-driven projects could benefit from the involvement of an interdisciplinary, socially-driven research team? What would that look like?
In the end, virtually every project brings a valuable lesson. If I had to summarize what we learned from this one, it is that addressing certain issues required us to push research into the liminal spaces: between conceptualization and development, between idea and execution, between the expectations of our positions and our individual capabilities, and among our personal biases. It seems to me that it is precisely in these liminal spaces that research thrives and where it is most needed.
The result
So, did it work? In a few days, Charitable Impact will release the new search and discovery experience, wrapped in a new brand, a new web platform and a new mobile app, all grounded in the work that the research team has done over the last couple of years. We will then have a chance to assess the impact that our work has had, and so will you, if you want to. It is also worth mentioning that quite recently (last month, to be exact) the CRA updated its own categorization, so we will have the opportunity to compare what we have done with the new standards. In the meantime, I can tell you that this process has already brought really interesting outcomes. First, it highlighted the positive implications of collaboration between traditionally distant units in data-driven companies. It also highlighted the benefits of bringing different research paradigms to the same problem. And finally, it was challenging enough to dispel the reluctance I felt when I took this job. It’s not only a win but a win-win.