Human-centered Design in Data Science
Peter is a co-founder at DrivenData, whose mission is to bring the power of data science to the social sector. DrivenData builds software that uses data and artificial intelligence for non-profits, NGOs, and governments. DrivenData also engages a global community of data scientists in online competitions that leverage data for the greater good. Recently Peter has worked on projects in lung cancer prediction, anti-human-trafficking, crop yield modeling, and digital financial services for rural populations. He also maintains the Cookiecutter Data Science and Deon open source projects. Peter earned his master's in Computational Science and Engineering from Harvard. Previously he worked as a software engineer at Microsoft and earned a BA in philosophy from Yale University.
Hugo is a data scientist, educator, writer, and podcaster at DataCamp. His main interests are promoting data & AI literacy, helping to spread data skills through organizations and society, and doing amateur stand-up comedy in NYC.
Transcript
Hugo: Hi there, Peter, and welcome to DataFramed.
Peter: Thanks, Hugo. I'm happy to be here.
Hugo: I'm happy to have you here. I'm really excited to be talking about human centered design and data science, the role of design in data science, and what data science can tell us about human centered design as well. Before we get into this, I want to find out a bit about you. What are you known for in the data community?
Peter: Primarily I'm known for my work at DrivenData. DrivenData is an organization that runs machine learning competitions that have a social impact. We work with non-profits, NGOs, and government groups to help them figure out a way that machine learning can help them be more effective. Then we put that online so that our global community of data scientists can come up with the best solution to this particular problem. Then after the competition, we help the organization to use that going forward.
Peter: That's probably one of the things, is just that work at DrivenData that we've been doing for the last five years. Outside of that, there are two particular areas of data science that I often talk about and am very interested in. The first one is engineering best practices for data science. I'm one of the maintainers of the Cookiecutter Data Science project, which is a project template that I hope we get some time to talk about, because it's one of my pet projects and I think it makes a big difference in our own work, and I hope it makes a difference for other people.
Peter: The other one is thinking ...
Hugo: I would love to jump into all of these things. To recap, we have machine learning competitions with social impact; engineering best practices, which I think are incredibly important, particularly because there's an argument that the idea of best practices in data science is in a woeful state and something a lot of people are working to correct, and bringing engineering principles into that will be essential; and then of course the data ethics aspect of your work. Very recently, Mike Loukides, Hilary Mason, and DJ Patil have started writing their series on data ethics for O'Reilly, where they've actually got a post on checklists versus oaths versus codes of conduct. I think all of these things are incredibly topical.
Machine Learning Competitions with Social Impact
Hugo: Let's just spend a bit of time to go through each of these. In terms of your machine learning competitions with social impact, could you just give us a couple of examples?
Peter: Yeah, sure. I'll start with one of my favorites, and that was the first competition we ever ran. There's a nonprofit organization called Education Resource Strategies. Really what they want to do is help school districts to spend their money more effectively. Schools are spending money on things like student transportation, teacher salaries, and textbooks. They have a wide range of operational costs. Right now, it's very difficult for a school district to say, "Am I paying a lot more for textbooks than neighboring districts and getting the same outcomes? Are my investments being effective?" The biggest barrier to doing that kind of benchmarking or comparison is that school budgets come in wildly different formats. There's no standard for reporting the categories of a budget expenditure that lets me say, "We're spending more on textbooks than a neighboring school, and we need to look at this."
Peter: Education Resource Strategies gathers all of this budget information from the districts they work with. Ultimately, their output is a recommendation for how to think about the school district's budgeting after they go through this process of benchmarking that district against other districts. The big problem is that they spend hundreds of person-hours a year looking at Excel spreadsheets, reading a budget line item, and trying to assign it a standard set of categories.
Peter: As I'm sure your audience will have picked up on from the description, they have a lot of labeled data that they've generated through the course of their operations. That labeled data is a budget line item, a description of it, the cost of that budget item, and then what category it belongs to, whether that is transportation, textbooks, extracurricular activities, or administrative salaries. All of those things, they've captured over their history of working with school districts.
Peter: So our first competition was: how do we help this organization that really cares about the output report and not about this taxonomy process? How do we help them to automate that? So we ran a competition where people built algorithms to parse the natural language in these budgets, to look at these budget costs, and to automatically assign categories to those line items. We took the output of that competition and turned it into a tool that fits into the workflow that this organization already had. It's saving their analysts tons of time just reading through these spreadsheets, so that they can focus that time where they can really add value, which is in making recommendations for how those budgets can be changed.
Hugo: That's great. It seems like it would reduce so much of the manual labor involved.
Peter: Yeah, really it's ... Last time we checked in, it was saving them about 300 person-hours a year to automate that process. For a relatively small nonprofit organization, that's actually a huge amount of labor savings. Really, their goal is to employ those savings more effectively, where their employees actually add value, rather than in the labeling of spreadsheets, where it's just this task that had to happen anyway.
Hugo: Yeah, absolutely. We should mention that if people find this type of stuff pretty exciting and interesting, they can check out all of the competitions at DrivenData, but if they find this particular competition interesting, they can even take the DataCamp course that you've built and that I collaborated on, which is learning from the experts: you get to build the winning solution in the end.
Peter: Yeah, that's right. That course will walk through not only what a baseline solution to a problem like this is, but also how the person who won the competition combined a number of interesting techniques to get to that best-performing solution.
Hugo: I'm not going to spoil the punch line. I don't want you to either, but I will say that it's not an LSTM or any crazy deep learning architecture that wins the competition.
Engineering Best Practices for Data Science
Hugo: Now we've talked about the types of machine learning competitions you run at DrivenData. Maybe you can tell us a bit about your thoughts on engineering best practices for data science, and in particular the Cookiecutter Data Science project that you maintain?
Peter: Great. Yeah, so my background is in software engineering. One of the things that I think about while I'm working on data science projects is how software and data science go together. I think there are some important differences between the processes, in that data science tends to be more open-ended and tends to be more of a research and development effort. But it's still the case that a lot of what we do in data science is, at its core, building software. Even if that software exists in a Jupyter notebook or in an R Markdown file, it's still a piece of software. A lot of the best practices that come out of software engineering can be employed to improve those products.
Peter: The Cookiecutter Data Science project is what we think of as the first pass at standardizing our process to make ourselves more effective. That's to have a consistent layout of what a data science project looks like. If you were to look at a web application in Django, which is a Python web framework, or in Ruby on Rails, or in Node.js, these are different languages and frameworks, but any time you build a web application in one of them, you have more or less the same structure. What that means is that anyone who's a web developer can go into a project like that and have some expectation about where they would find certain kinds of code. The code that talks to the database usually lives in one place. The code that talks to the front end usually lives in another place. Those expectations make it very easy to work together and collaborate on projects.
Peter: The Cookiecutter Data Science idea is to bring that kind of standardization to our own data science work. We have a defined folder structure where our data lives. We have a set of folders for raw data, data that has had some processing but is in an interim state, and then processed data. We have a particular folder structure for where we keep our Jupyter notebooks or R Markdown files that are built in the literate programming style. We have another set of folders for data processing pipelines that may exist as scripts, and then ultimately may get refactored into something like a Python package.
Peter: Because of this consistency, it's very easy for us to move from project to project, and pick up something and remember where we are, how to reproduce something, and because we work with lots of different clients on lots of different projects, this means that anyone who works on the team can jump into any project without having to spend a lot of time figuring out what happens where.
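For readers who want a concrete picture, here is a rough sketch of the kind of folder layout the Cookiecutter Data Science template produces. The names below are paraphrased from Peter's description rather than copied from the template, so check the project's README for the exact, current structure:

    data/
        raw/          <- the original, immutable data dump
        interim/      <- data that has had some processing but is not final
        processed/    <- the final datasets used for modeling
    notebooks/        <- Jupyter notebooks or R Markdown files, literate-programming style
    src/              <- data processing pipelines as scripts, later refactored into a package
    models/           <- trained models and model outputs
    reports/          <- generated analysis and figures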
Building a Package for Data Ethics and Ethical Checklists
Hugo: That's great. I find all the details very interesting, but as you've hammered home, the idea of actually having an overarching, consistent layout for data science projects and a system of best practices is incredibly important. I presume this actually plays a role in your approach to building the package for data ethics and ethical checklists, of having something that you can carry across projects. If there are biases or challenges involved, they're systematic, in the sense that they won't be introduced by a particular human working on the project. They'll be in this structure as a whole, so you'll become aware of them.
Peter: Yeah, so I think the two are related in that we really spend a lot of our time building tools for people who are data scientists. A lot of times they start out as tools that we're using ourselves, and then we open source those tools so that other people who are working in data science can use them. That's how the Cookiecutter Data Science project started, and that's really how this ethics checklist package, DEON, started as well.
Peter: The idea there was that there are a lot of conversations around data ethics that we found very compelling, from a standpoint of really seeing where things had gone wrong in the process, and feeling that we ourselves were vulnerable to some of these things that could go wrong. It's very, very hard to have perfect foresight and to understand exactly what can go wrong in any given circumstance. Because we kept seeing these examples and feeling like, "Would we necessarily have caught this in our own kind of work?", we wanted to have a more actionable way of engaging with that data ethics conversation, to make sure that we didn't fall into some of these traps that just exist in the work. If you're focused on methods, if you're thinking about data, you can get into these really technical aspects and not have a chance to pull up and look at the ethical implications.
Peter: We wanted to have a really process-driven way of engaging that conversation for our own projects. What this package we built does, is it generates a data ethics checklist that's really framed around the data science process. It starts with the data collection phase. There are a set of checklist items that ask you questions like, do the participants in your data collection give informed consent for the data that is collected? That's one example of an item on the checklist.
Peter: What we've done is we've taken each of these checklist items at different parts of the data science process, and we've mapped them to real-world examples where something has gone wrong. We've got this collection of news articles and academic papers that explain where data science projects have run into ethical problems and the issues that have arisen.
Peter: There was actually just a really great article about Amazon trying to build a resume-filtering algorithm. They get an unbelievable volume of resumes for any position that they open up. They had the belief that they could use all of their historical data to train an algorithm to identify the top candidates applying for a given position. Now, as a framing, that may seem from a data science perspective like it makes a lot of sense. We've got a long history of training data, and we want to be able to replicate this with an algorithm.
Peter: They just actually shut down the team that had been working on this project for years, because they found that the algorithm was biased against women. In particular, it wasn't saying, "Oh, is this person a woman and they've indicated that on their application and now I want to use that and discount their application," but it was using things that were a little more implicit than that. In particular, if the applicant had attended a women's college, then their score went down. They discovered these problems with their algorithm and disbanded the team that was working on this project entirely, because they couldn't get it over this bump in the process. They're still having humans review all of these resumes that come in.
Peter: This is just sort of a classic example of something that seems like a great setup for a machine learning problem in the abstract, but if you don't think about how it's affecting outputs that aren't just some measure of accuracy, it can go really, really wrong.
DEON
Hugo: These are the types of things that DEON would ask you to check for?
Peter: That's exactly right. Yeah. It goes through the data science process from data collection to data cleaning to exploratory data analysis to actually building models and that's where this would come in. Then actually it's got a section on deploying models. When the model is deployed, what are the questions that you should ask? Really, the goal is not to have all of the answers about what's right and what's wrong in a checklist. Given all of the different domains people work in, that's really an impossible task.
Peter: The goal is to take people who are already conscientious actors that want to be doing the right thing and make sure they're having the conversations about the ways that things can be misused. Really the workflow we see for the tool is you generate a checklist using the tool, and then you submit a PR to your project that says, "Hey, here's our ethics checklist, let's make sure we talk about each of these items as a team." It's really about spurring that team discussion to make sure you've considered the particular implications of your project and you've made the right decision about it.
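To make that workflow concrete, here is roughly what it looks like from the command line. The flag names are paraphrased from the project's documentation as best I recall, so treat this as a sketch and check the DEON docs for the exact interface:

    # install the package
    pip install deon

    # generate a data ethics checklist as a Markdown file in your project
    deon --output ETHICS.md

    # commit it and open a pull request so the team discusses each item together
    git add ETHICS.md
    git commit -m "Add data ethics checklist"

From there, the checklist lives in the repository alongside the code, so the conversation about each item happens in the same place as the rest of the code review.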
Hugo: We'll include links to the cookie cutter project and the DEON package in the show notes. I think you discussing data ethics in terms of thinking about the variety of stakeholders really dovetails nicely into our conversation about human centered design in data science and why it's important. As a prelude to human centered design though, I'd just like to ask you a quick question about the role of empathy in data science as a whole. I'm wondering what is the role of empathy in data science for you?
Peter: That's a great question. I actually feel like empathy is a term that has started to pop up in the data science conversation as a core skill of a data scientist. In my mind, empathy is just one way to get at a particular kind of approach. That approach is to be problem focused rather than method focused. What I mean by that is that as data scientists we're really in a professional service role. We're providing a service to different parts of a business or different parts of an organization. We should start with what the problem is that we're solving, and understand the context for that problem, rather than saying, "Hey, who's got an NLP problem that I can solve with an LSTM?" Or, "Who's got a computer vision problem where I can try out the latest neural network methods and get a really cool result?"
Peter: If you start methods first, a lot of times you end up with a solution that's not going to be really useful in the context in which it operates. When we talk about data science and empathy, what we're really saying is that you should empathize with how your data science output will be used. You should empathize with the problem we're solving. When we talk about empathy, I think that's one way of getting to a perspective that's problem first rather than method first.
Hugo: And do any concrete examples spring to mind?
Peter: Yeah, so I think that for me, a good example is we worked on a project that is trying to automatically identify species in videos. Species of animals, that is. There's a research organization that has these motion-detecting cameras that they set up in the jungle and they try to record videos of chimpanzees, but they get a lot of videos of other animals as well. Instead of sitting there and watching all of these videos and saying which one has a chimpanzee and which one doesn't, we were helping them to build an algorithm to automatically identify the animals that appear. We actually ran a competition around this last year. If you're curious, you can look at DrivenData.org and see the results of that competition.
Hugo: What was the name of the competition?
Peter: The name of the competition was "Pri-matrix Factorization."
Hugo: Stop it.
Peter: Thanks for asking.
Hugo: That's brilliant.
Peter: We really care about our data science puns, and that's one of my absolutely favorites.
Naïve Bees
Hugo: When talking about the data science puns, we have to mention Naïve Bees, as well.
Peter: Yeah. To be honest Hugo, with your accent I wasn't even sure if you were saying Bees or Bayes, so it works even better in Australia.
Hugo: Naïve Bees of course, is currently being turned into a series of DataCamp projects as well.
Peter: That's right, yeah. The Naïve Bees competition was to build an algorithm to distinguish honey bees from bumblebees. We're working on a set of DataCamp projects to help people work through that problem, to give them that first exposure to a computer vision task that fits into a classification framework, and to look at the more traditional methods and then move on to deep learning, neural networks, and convolutional neural networks.
Role of Empathy
Hugo: After that pun-tastic interlude, back to the role of empathy in identifying primates?
Peter: Back to the role of empathy. Really, this is about going back to the context and understanding the context in which you operate. We were working with this team of ecologists and biologists. They spent a lot of time in the field setting up these cameras, capturing data, watching videos, and then writing papers about what they see in the videos. The output that we ended up working on after the competition was an open source package that let you run a predict command from the command line on your new videos, and it would output a CSV with each of the videos and a probability that a certain kind of animal appeared in them. We were pretty pleased with this output. It'd be super useful for us.
Peter: The first thing we heard from the team we were working with is, "We can't even get this tool installed. I can't get XGBoost to install on my machine. I'm having trouble getting the right version of TensorFlow installed. I'm having trouble getting GPU drivers installed." All of this stuff that feels like second nature to us as data scientists sort of blinded us to the context in which this tool was actually going to be used: by ecologists who aren't used to all of this complex machinery around the packaging of data science tools, which can make it really challenging to use the latest methods. That's just a really concrete example of a place where we weren't doing the right thing upfront to really understand that context and make sure we built something useful. We were building something that we knew would be useful for us.
Hugo: I've got to make clear that I'm sure a non-trivial proportion of working expert data scientists have a lot of problems getting XGBoost installed occasionally, as well.
Peter: Yeah, if someone wants to take on the initiative to improve the XGBoost installation experience, that's a really valuable project that someone could do for the open source community.
Human Centered Design
Hugo: Let's jump into human centered design and why it's important in data science. I think probably a lot of our listeners wouldn't necessarily think of design principles as being something which would play a huge role in the data science process. Maybe you can tell us about human centered design and why it's important in data science?
Peter: Great, yeah. Human centered design is a way of framing the design process. It's really related to other terms that are in this field that you may have heard of, like design thinking, the double diamond method, and design sprints. Those are all sort of popular terms that people may talk about. It's really referring to the same set of ideas, which is about having a design process.
Peter: Human centered design in particular is the one that we're most familiar with through our work with an organization called IDEO.org. IDEO is one of the leading human centered design firms. They helped design the first Apple mouse. They have a long history of being designers, first from an industrial design perspective, but also from a digital design and then eventually a service design perspective. They have a really long history and track record of working with and using these design tools to spur innovation.
Peter: They spun out a nonprofit arm, IDEO.org, that works with NGOs to achieve the same sort of results. We partnered with that organization to look at digital financial services in Tanzania. Just to take a step back, that's the context that we're working in, with this team of human centered designers. As for what that human centered design process looks like, I'll give you the overview first and after that we can dig into the details of that particular project, which I think are pretty enlightening for how data science and design work together ... The big picture is that human centered design is about starting with what's desirable. There's a perspective that the best solutions to a problem are desirable in that someone wants to use them, they're feasible from a technological perspective, and they're viable from a business perspective. The best solutions sit at the intersection of that Venn diagram. You know you're not having a good data science conversation until you're talking about a Venn diagram.
Hugo: I was waiting for the Venn Diagram.
Peter: Right? This particular Venn diagram is desirable, feasible, viable. We want the intersection of all three of those things.
Hugo: Of course a lot of data science work maybe will start with feasible, like the newest cutting edge technology and the coolest most efficient algorithms and that type of stuff, right?
Peter: That's exactly right. I think that that is one of our tendencies as data scientists that I see in myself all the time, where I'm getting excited about lots of these new technologies and I want to find ways to use them. The trick is just to find the balance for really solving a problem where using that is appropriate. The human centered design process starts from this perspective of what's desirable to a user. It gets there by moving through these three phases of the design process.
Inspiration
Peter: The first is inspiration. Inspiration is about going and observing the people who will be users of your end product. In the case where you're a data scientist, let's say that your job is to create this report that's emailed out to your executive team once a month. What you would actually do is go and talk to the people who get that email and say, hey, when you get this email, what do you use it for? What does it go into? Do you say, "Okay, I need some top-line numbers here that I put into slides"? Or is it something you read to get context, so that you then say to the people that you manage, "We need to change things X, Y, and Z"?
Peter: You would go and you would actually talk to the consumers of your data science process to see how does it fit into the bigger picture. The inspiration part of the phase is really about going broad, brainstorming, and trying to get inspired by everything you might see around you. It's not about, "Let me see the data," and get inspired by what's in my data. It's "let's get inspired by everything before we even think about the data". That's the first phase.
Ideation
Peter: The second phase is ideation. What this means is trying to come up with particular solutions to a problem and then testing those really quickly. One of the core concepts here is having the lowest fidelity prototypes possible, and getting real user feedback on them. It might be the case that you're working on a model to do some classification and ultimately it's going to be this big, complex machine learning system that's deployed in the Cloud. What you might do first is ... Let's say we're working on this honeybee/bumblebee problem. You might just say, "Okay, here's a spreadsheet of probabilities for each of these species, what would you do with this?" That's sort of my lowest fidelity one. Take the most basic method, take the most basic output and say, "Is this useful?" Then you take that and you learn from that.
Peter: The ideation phase is about these iterative cycles of learning from low fidelity prototypes that gradually build up fidelity, and it helps to keep your project going in a direction that ensures the output is going to be targeted at a real problem. That it's actually going to be useful, and that as you come up with new ideas throughout that process, you can see which of those are good ideas and which ones aren't.
Hugo: It keeps you honest, right, in the sense that you're not going to end up building something which is useless, or going down the entirely wrong path.
Peter: That's exactly right. Yeah. I mean, I've seen so many times ... even work that we've done, where you build a dashboard that no one ever looks at. Everyone thought the dashboard was what they wanted, but that's not the right tool for the job. People really cared about answering one specific question, and if they could just go to one page with a thumbs up for a yes and a thumbs down for a no, that would have been better than the dashboard that you created.
Hugo: In this dashboard case, instead of building the dashboard, perhaps the inspiration or ideation phase would involve drawing the type of figure on a whiteboard or on a piece of paper that the dashboard would show and say to the stakeholders, "Hey, is this actually what you want?"
Peter: Yeah, that's exactly right. Or even mocking it up in PowerPoint or using Microsoft Paint to make a little prototype and say, "Hey, is this graph something that you would need? How would you actually use this in your process?" Trying to get at, not just saying, "Hey, do you like this, is this something you want to see?" More, "How would you use it then?" That question of how you actually use something will change people's answers.
Peter: I think in a lot of data science work, it's very clear that if you ask, "Hey, do you want to see this, hey do you want to see that, hey do you want to see something else?" people say yes to all of those questions. How could you not want to have all of the information that you could have? Really, the question of how would you use it helps you to narrow it down on the things that are going to be valuable, and not create this information overload.
Implementation
Hugo: Tell us about the third phase. We've got inspiration, ideation, and the third "I" is ...?
Peter: The third "I" is implementation. Implementation here is not just, okay, you finished it, now go build it. Implementation is actually a continuation of this process, to go from prototyping to actually piloting a solution. We think of prototypes as being small scale, low fidelity, something we do with a couple of people to get some feedback. We think of implementation as, okay, how do we become more data driven about this decision that we're making now? Implementation is about picking a pilot cohort, a set of users that will actually consume this and then saying, "Okay, here's the version we're working with now. Here's a higher fidelity prototype that we have. Let's put it out there for a particular user group, and let's do some real testing of if this solution is working and solving the problem that we want to solve." Implementation is this piloting phase to get to the point where not only do you have a lot of great anecdotal and qualitative evidence that you've built up from these discussions, but you're also starting to get this quantitative evidence for how what you built is changing the metrics that you care about.
Digital Financial Services in Tanzania Project
Hugo: You and I have discussed a really interesting project you worked on previously, which I think illuminates all of these steps really wonderfully. It's a project where you were looking at digital financial services in Tanzania. Maybe you could tell us a bit about this project and how human centered design actually played a pivotal role for you.
Peter: Great, yeah. This project, I'm going to step back and sort of give you the context that this project is done in. For a lot of people, your money has always been digital. What I mean by that is that when I opened my first bank account, my parents brought me to a bank in middle school and said, "You're going to have a bank account. You have to be responsible for your finances now." I gave that bank some money and they wrote down on a piece of paper how much money I had. That amount of money wasn't physical cash that I was holding, that was actually in a database that the bank had. That was purely digital. That's my native experience of what it's like to interact with money, is to have what is really a digital representation of that currency.
Peter: Whereas in lots of places, bank infrastructure is not very good. It's expensive to build banks, and there are security risks to moving physical money around. In many countries, people do most of their interaction with money in pure cash. That means when they want to save money, they hold cash. When they want to spend money, they have to get enough cash to buy something. That can be very limiting in terms of your ability to save money over time, your ability to get loans that you might need to buy particular things. There's a belief in the international development space that one of the ways in which we can help lift people out of poverty is to provide them access to digital financial services. This same digital experience that people have had growing up where they put their money in a bank: how can we provide that without having to build all of that physical banking infrastructure?
Peter: One of the approaches to this is to have a digital wallet on your phone that's associated with your mobile phone account. Mobile phones have been one of these leapfrogging technologies where people who didn't even have a landline now have access to a mobile phone. The mobile phone providers are now starting to offer services where you can actually save cash on your phone. You can say, "Okay, I've got a wallet with $10 in it," and that's just associated with my phone.
Peter: There's this transition from a world in which you just work with cash to a world in which you have some digital representation. For people who haven't had the experience of always having a digital representation, you have to build trust in that system. You have to make sure that the digital financial services actually fit the needs that this community has. That's the big project background and context: how do we increase uptake of these digital financial services that provide people in these environments, who have very volatile incomes, some sense of stability for their money, some access to loan infrastructure, some access to savings infrastructure? That's the context.
Peter: This is a project that's funded by the Gates Foundation to try to understand what levers we can pull to help engage people and give them access to these tools. One of the particular things about this digital financial services system is that you need a way to exchange your digital currency for cash. There's always going to be some sort of transaction that's like working with an ATM. However, ATMs are relatively expensive and there's a lot of physical infrastructure you'd need for an ATM to work.
Peter: What happens in places where they have digital financial services, and this also goes by the name "Mobile Money", so I'll use those interchangeably, you have what are called Mobile Money agents. These are usually people that have a small shop somewhere, that are selling home goods, snacks, drinks. They also become Mobile Money agents. What that means is, you can go to them and say, "I want to trade five digital dollars for five dollars in cash," or, "I want to take five dollars in cash and put that in my digital wallet." They're the interface between physical cash and digital cash.
Peter: The focus of our project was, how do we make that interaction between agents and customers one that can help build trust in this Mobile Money system, and that direction was driven by this human centered design process. When we went out, the first thing we did was talk to people who used Mobile Money and people who didn't. We went and sat in a number of markets. We watched people buy things, and we asked them, "Why did you buy that with cash? Why did you buy that with Mobile Money? Do you have a Mobile Money account? What's your experience been like?"
Hugo: It sounds like in this process as well then, you're getting qualitative data as well as quantitative data.
Peter: That's right. Really, a big part of what you do is try to gather that qualitative data as well. We've been gathering this qualitative data through these interviews, but we also got Mobile Money transactions from one of the mobile network operators for a full nine months. It was tens of gigabytes of data, all of the transactions. Hundreds of millions of Mobile Money transactions in the country. Our goal was to combine that very rich data source about real behaviors with behaviors that we heard about, with these qualitative experiences that people actually had.
Peter: One really great example that I think just highlights the value of human centered design is the fact that we kept talking to agents and saying, "What are your biggest struggles with Mobile Money? How do you see it fitting into the larger context of your business where Mobile Money is one of your revenue streams, but you also sell other goods?" What we kept hearing from these Mobile Money agents is, "Well, it can be really tricky to predict how much money I'm going to make on Mobile Money transactions, because I earn some commission on each transaction, but it's really opaque what those commissions are and it's very hard for me to predict on any aggregated timeframe for some given week or some given month how much money I'm going to get in commissions."
Peter: This was particularly interesting to us because we as data scientists had been digging into this huge treasure trove of data thinking, "Oh, this has all the answers in it, this is going to be amazing. We can find so many insights in this huge range of transactions that we have." One of the things that we realized after doing these interviews is, we didn't know what the commissions for an agent were. That was not data that was in the dataset that we had. We had the amount of the transaction, but the portion that then the agent got as a commission was calculated totally separately by some business logic that existed in another application.
Hugo: Even if you did know how much they got, you may not know whether that was a lot or a little or how that affected them on the ground.
Peter: Oh, absolutely. Yeah. Just the thought that we could have known what was valuable to agents by looking at the dataset and figuring out these patterns, that dataset didn't even have the most important variable to the agents that we were working with. We wouldn't have learned that if we didn't talk to them and just try to learn things from the data alone.
Peter: One of the big things that we try to do in all of our work, and I really encourage all data scientists to do, is to go out and observe how your data gets collected. If you're working with IoT data in a factory, actually go to the factory floor, watch how things happen. If you're like us working with digital financial services, go watch people make transactions. See what actions in the real world correspond with something in your data, because that perspective changes how you think about the data itself. It changes where you trust the data and where you don't trust the data. I really encourage people to get away from their screen, step away from their desk, and go watch the data collection in action. I think in nearly every case, you can go do this and it will have a transformative effect on how you think about that data.
Hugo: I think that's really important, particularly as we live in an age when people often first get experience with data science through online competitions such as yours, or platforms like DataCamp, getting data online and not actually thinking a lot about the data-generating process.
Peter: Yeah. It's very easy to just start with the data and say, "Okay, what's in here?" Until you really understand that data-generating process, you won't know to ask, "What's not in this data that I might care about?" Or, "What in this data is not reliable?" For example, we saw a lot of these Mobile Money transactions fail because of network connectivity. For some of those transactions, we wouldn't have seen that in the data: if the network failed, the transaction never went through, and it doesn't get recorded in the database. Understanding the limitations of the connectivity, how that affects the experience, and how we might even measure those failed transactions is something we could only start asking about once we'd actually observed them.
Other Aspects of the Project in Tanzania
Hugo: We're going to need to wrap up in a few minutes Peter. I was just wondering, are there any other aspects of the project in Tanzania that you wanted to discuss?
Peter: Yeah, so just to share one other example of where the human centered design approach I think really made a difference. We were looking at the times of day at which a Mobile Money agent was busiest: when were people coming to trade cash for digital currency, or to do the opposite? We took the data and looked at it by day of the week and time of day, and we built this really beautiful heat map visualization that you can think of as a checkerboard, where each of the squares is lighter or darker based on how many transactions you have. We made it interactive so that you could hover and see how busy an agent was for a given region, time of day, and day of the week, and really get a great sense of the patterns of Mobile Money use.
Hugo: It's also colorblind human friendly.
Peter: It absolutely is. We did build it using viridis, which is colorblind friendly. If you're not thinking about that for your visualizations, you should be, because that's a little bit of human centered design.
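As a rough illustration of the kind of heat map Peter is describing, here is a minimal matplotlib sketch in Python. The transaction counts here are made up for illustration; the real analysis used the actual Mobile Money transaction data:

    import numpy as np
    import matplotlib.pyplot as plt

    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    # made-up transaction counts: one row per day of the week, one column per hour of the day
    counts = np.random.poisson(lam=20, size=(len(days), 24))

    fig, ax = plt.subplots(figsize=(10, 3))
    im = ax.imshow(counts, cmap="viridis", aspect="auto")  # viridis: the colorblind-friendly colormap
    ax.set_yticks(range(len(days)))
    ax.set_yticklabels(days)
    ax.set_xlabel("Hour of day")
    fig.colorbar(im, ax=ax, label="Transactions")
    plt.show()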
Hugo: Speaking as a colorblind individual, I'm red-green colorblind, as you know about 8% of human males are.
Peter: You appreciate viridis even more than the rest of us that think it's a beautiful color map.
Hugo: I can't get enough of it.
Peter: Well, I think it's a really compelling and beautiful color map, which is one of the reasons that we loved this visualization, and one of the reasons the people we worked with loved this visualization and how interactive it was ... but none of the agents that we were working with had access to a computer. They weren't sitting at a laptop, wanting to look at a dashboard that had this beautiful visualization on it. That wasn't going to be useful to them.
Peter: What we ended up actually building was a text-based visualization that was essentially just a bar chart where it would say "M" for Monday and then it would have three capital "I's" in a row. Then it would have "T" for Tuesday, and it would have eight capital "I's" in a row. By building these text-based visualizations, essentially bar charts built out of characters, we could actually give a data visualization experience to these agents that were working on feature phones. That process of taking something that we think of as an amazing data science output, this really compelling interactive visualization, and putting that visualization into a context where it can actually get used, is one of the transformative experiences of that project for us where we started to think about, "Okay, what's the context for all of our output?" Not, "How do we make the most amazing data visualization?"
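Here is a minimal sketch of that kind of character-based bar chart in Python, with made-up daily transaction counts standing in for the real data:

    # made-up counts of Mobile Money transactions per day of the week
    transactions_per_day = {"M": 3, "T": 8, "W": 5, "Th": 6, "F": 9, "Sa": 12, "Su": 4}

    # render each day as a row of "I" characters, so the chart reads fine on a feature phone
    for day, count in transactions_per_day.items():
        print(f"{day:<2} {'I' * count}")

The output is plain text, for example "M  III" followed by "T  IIIIIIII", which is exactly the kind of visualization that survives being sent to a basic phone.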
Hugo: Yeah, and it's so reliant on the knowledge of what actual technology, what phones humans have on the ground.
Peter: Yeah, that's absolutely right.
Call to Action
Hugo: Peter, as a final question, I'm wondering if you have a final call to action to all our listeners out there?
Peter: Yeah. I think there are, in my mind, four core activities of a human centered data scientist, and I think we should all be human centered data scientists. The first one is, go to the field and observe the data being generated. Without understanding what a row in your dataset means, without actually observing that happening, without knowing what gets captured, what doesn't, what happens when something goes wrong, you'll be very limited in the output that you can have. Also, if you do go and do that, you'll be so much better positioned to ask questions that matter of your data. Without talking to those agents, we wouldn't have asked that commission question of the dataset. Going to the field, observing data being generated, is item number one.
Peter: Item two is design with, not for, by iterating on prototypes. This process of constant iteration, conversation with people who will actually be using the output, and getting their buy-in on the decisions that you're making means that it's going to be something that's useful when you finish the project. Not, "I worked for three months, is this good for you?" "Oh, no, it's not," or it requires some major changes. It's: how do we keep that process tightly in sync so that we're actually building things that are useful? We do that with really low fidelity prototypes that we're constantly testing.
Peter: The third is to put outcomes, not methods or outputs, first. That's really saying, what is the outcome we care about? In our case, it was the increase in the adoption of digital financial services. That's what we cared about, and in particular we thought we could do that by improving the tools that Mobile Money agents had. Our goal was to say, "Okay, the best outcome is for Mobile Money agents to be making more transactions." That's what we want to measure. It wasn't, how do we do the most interesting dimensionality reduction on this huge dataset that we have.
Peter: The fourth item is to build consensus on metrics for success. I think this is one of the most difficult but most important ones: you need to define upfront what success means, and you need to get buy-in from everyone on that definition. I think people assume they've got the same goals from the get-go if they're working on the same project, but until you have that explicit discussion about what success means and what those metrics are, you won't be optimizing for exactly the same thing.
Peter: Those, I think ... My call to action really is to take those and try to build them into your process as a data scientist. Other than becoming a human centered data scientist, thinking about your users, and using a more collaborative process, I would encourage people to come check out a competition on DrivenData.org. We've got a lot of interesting social impact projects happening there. Or, check out one of the open source projects that came up as part of this discussion, that's Cookiecutter Data Science or DEON, the ethics checklist package.
Hugo: Of course, both of those projects, and engaging in competitions and all the other great stuff you do at DrivenData, will help any budding data scientist, or established data scientist, to do more human centered data work as well.
Peter: Yeah, that's the goal.
Hugo: Peter, it's been such a pleasure having you on the show.
Peter: Hugo, thanks for having me. I loved chatting, as always.