Experiencing Data with Brian T. O'Neill
38 minutes | Jun 15, 2021
Why Roche Diagnostics’ BI and Data Science Teams Are Adopting Human-Centered Design and UX featuring Omar Khawaja
On today’s episode of Experiencing Data, I’m so excited to have Omar Khawaja on to talk about how his team is integrating user-centered design into data science, BI and analytics at Roche Diagnostics. In this episode, Omar and I have a great discussion about techniques for creating more user-centered data products that produce value — as well as how taking such an approach can lead to needed change management on how data is used and interpreted. In our chat, we covered: What Omar is responsible for in his role as Head of BI & Analytics at Roche Diagnostics — and why a human-centered design approach to data analytics is important to him. (0:57) Understanding the end-user's needs: Techniques for creating more user-centric products — and the challenges of taking on such an approach. (6:10) Dissecting 'data culture': Why Omar believes greater implementation of data-driven decision-making begins with IT 'demonstrating' the approach's benefits. (9:31) Understanding user personas: How Roche is delivering better outcomes for medical patients by bringing analytical insights to life. (15:19) How human-centered design yields early 'actionable insights' that can lead to needed change management on how data is used and interpreted. (22:12) The journey of learning: Why 'it's everybody's job' to be focused on user experience — and how field research can help determine an end-user's needs. (27:26) Omar's love of cricket and the statistics collected about the sport! (31:23) Resources and Links: Roche Diagnostics: https://www.roche.com/ LinkedIn: https://www.linkedin.com/in/kmaomar/ Twitter: https://twitter.com/kmaomar Quotes from Today’s Episode “I’ve been in the area of data and analytics since two decades ago, and out of my own learning — and I’ve learned it the hard way — at the end of the day, whether we are doing these projects or products, they have to be used by the people. 
The human factor naturally comes in.” - Omar (2:27) “Especially when we’re talking about enterprise software, and some of these more complex solutions, we don’t really want people noticing the design to begin with. We just want it to feel valuable, and intuitive, and useful right out of the box, right from the start.” - Brian (4:08) “When we are doing interviews with [end-users] as part of the whole user experience [process], you learn to understand what’s being said in between the lines, and then you learn how to ask the right questions. Those exploratory questions really help you understand: What is the real need?” - Omar (8:46) “People are talking about data-driven [cultures], data-informed [cultures] — but at the end of the day, it has to start by demonstrating what change we want. ... Can we practice what we are trying to preach? Am I demonstrating that with my team when I’m making decisions in my day-to-day life? How do I use the data? IT is very good at asking our business colleagues and sometimes fellow IT colleagues to use various enterprise IT and business tools. Are we using, ourselves, those tools nicely?” - Omar (11:33) “We focus a lot on what’s technically possible, but to me, there’s often a gap between the human need and what the data can actually support. And the bigger that gap is, the less chance things get used. The more we can try to close that gap when we get into the implementation stage, the more successful we probably will be with getting people to care and to actually use these solutions.” - Brian (22:20) “When we are working in the area of data and analytics, I think it’s super important to know how this data and insights will be used — which requires an element of putting yourself in the user’s shoes. In the case of an enterprise setup, it’s important for me to understand the end-user in different roles and personas: What they are doing and how their job is. 
[This involves] sitting with them, visiting them, visiting the labs, visiting the factory floors, sitting with the finance team, and learning what they do in the system. These are the places where you have your learning.” - Omar (29:09)
34 minutes | Jun 1, 2021
How Alison Magyari Used Design Thinking to Transform Eaton’s Business Analytics Approach to Creating Data Products
Earlier this year, the always informative Women in Analytics Conference took place online. I didn’t go — but a blog post about one of the conference’s presentations on the International Institute of Analytics’ website caught my attention. The post highlighted key points from a talk called Design Thinking in Analytics that was given at the conference by Alison Magyari, an IT Manager at Eaton. In her presentation, Alison explains the four design steps she utilizes when starting a new project — as well as what “design thinking” means to her. Human-centered design is one of the main themes of Experiencing Data, so given Alison’s talk about tapping into the emotional state of customers to create better designed data products, I knew she would be a great guest. In this episode, Alison and I have a great discussion about building a “design thinking mindset” — as well as the importance of keeping the design process flexible. In our chat, we covered: How Alison employs design thinking in her role at Eaton to better understand the 'voice of the customer.' (0:28) Same frustrations, no excitement, little use: The factors that led to Alison's pursuit of a design thinking mindset when building data products at Eaton. (3:35) Alleviating the 'pain points' with design thinking: The importance of understanding how a data tool makes users feel. (10:24) How Eaton's business analysts (and end users) take ownership of the design process — and the challenges Alison faced in building a team of business analysts committed to design thinking. (15:51) 'It's not one size fits all': The benefits of keeping the design process flexible — and why curiosity and empathy are traits of successful designers. (21:06) 'Pay me now or pay me later': How Alison dealt with pushback to spending more time and resources on design — and how she dealt with skepticism from business users. 
(24:09) Resources and Links: Blog post on International Institute for Analytics: https://www.iianalytics.com/blog/2021/2/25/utilizing-human-centered-design-to-inform-products-and-reach-communities Eaton: https://www.eaton.com/ LinkedIn: https://www.linkedin.com/in/alisonmagyari/ Email: email@example.com Quotes from Today’s Episode “In IT, it’s really interesting how sometimes we get caught up in just looking at the technology for what it is, and we forget that the technology is there to serve our business partners.” - Alison (2:00) “You can give people exactly what they asked for, but if you’re designing solutions and data-driven products with someone, and they’re really for somebody else, you actually have to dig in to figure out the unarticulated needs. And they may not know how to invite you in to do that. They may not even know how they’re going to make a decision with data about something. You could say, “Well, you’re not prepared to talk to us yet.” Or, you can be part of helping them work it out: how will you make a decision with this information? Let us be part of that problem-finding exercise with you, not just the solution part. Because you can fail if you just give people what they asked for, so it’s best to be part of the problem finding, not just the solving.” - Brian (8:42) “During our design process, we noted down what the sentiment of our users was while they were using our data product. … Our users so appreciated when we would mirror back to them our observations about what they were feeling, and we were right about it. I mean, they were much more open to talking to us. They were much more open and they shared exactly what they were feeling.” - Alison (12:51) “In our case, we did have the business analyst team really own the design process. Towards the end, we were the champions for it, but then our business users really took ownership, which I was proud of. 
They realized that if they didn’t embrace this, they were going to have to deal with the same pain points for years to come. They didn’t want to deal with that, so they were really good partners in taking ownership at the end of the day.” - Alison (16:56) “The way you learn how to do design is by doing it. … The second thing is that you don’t have to do all of it to get some value out of it. You could just do prototyping, you could do usability evaluation, you could do ‘what if’ analyses. You can do a little of one thing and probably get some value out of that fairly early, and it’s fairly safe. And then over time, you can learn other techniques. Eventually, you will have a library of techniques that you can apply. It’s a mindset, it’s really about changing the mind. It’s heads not hands, as I sometimes say: It’s not really about hands. It’s about how we think and approach problem-solving.” - Brian (20:16) “I think everybody can do design, but I think the ones that have been incredibly successful at it have a natural curiosity. They don’t just stop with the first answer that they get. They want to know, “If I were doing this job, would I be satisfied with compiling a 50-column spreadsheet every single day in my life?” Probably not. It’s curiosity and empathy — if you have those traits, naturally, then design is just kind of a better fit.” - Alison (23:15)
33 minutes | May 18, 2021
Balancing Human Intuition and Machine Intelligence with Salesforce Director of Product Management Pavan Tuvu
I once saw a discussion on LinkedIn about a fraud detection model that had been built but never used. The model worked, and it was expensive, but it simply didn’t get used because the humans in the loop were not incentivized to use it. It was on this very thread that I first met Salesforce Director of Product Management Pavan Tuvu, who chimed in about a similar experience he went through. When I heard about his experience, I asked him if he would share it with you and he agreed. So, today on the Experiencing Data podcast, I’m excited to have Pavan on to talk about some lessons he learned while designing ad-spend software that utilized advanced analytics — and the role of the humans in the loop. We discussed: Pavan's role as Director of Product Management at Salesforce and how he works to make data easier to use for teams. (0:40) Pavan's work protecting large-dollar advertising accounts from bad actors by designing an ML system that predicts and caps ad spending. (6:10) 'Human override of the machine': How Pavan addressed concerns that the advertising security system would incorrectly police legitimate large-dollar ad spends. (12:22) How the advertising security model Pavan worked on learned from human feedback. (24:49) How leading with "why" when designing data products will lead to a better understanding of what customers need to solve. (29:05)
31 minutes | May 4, 2021
064 - How AI Shapes the Products of Startups in MIT’s “Tough Tech” Venture Fund, The Engine feat. General Partner, Reed Sturtevant
Reed Sturtevant sees a lot of untapped potential in “tough tech.” As a General Partner at The Engine, a venture capital firm launched by MIT, Reed and his colleagues invest in companies with breakthrough technology that, if successful, could positively transform the world. It’s been about 15 years since I last caught up with Reed — who was CTO at a startup where we worked together — so I’m so excited to welcome him on this episode of Experiencing Data! Reed and I talked about AI and how some of the portfolio companies in his fund are using data to produce better products, solutions, and inventions to tackle some of the world’s toughest challenges. In our chat, we covered: How Reed's venture capital firm, The Engine, is investing in technology-driven businesses focused on making positive social impacts. (0:28) The challenges that technical PhDs and postdocs face when transitioning from academia to entrepreneurship. (2:22) Focusing on a greater mission: The importance of self-examining whether an invention would be a good business. (5:16) How one technology business invested in by The Engine, The Routing Company, is leveraging AI and data to optimize public transportation and bridge service gaps. (9:05) Understanding and solving a problem: Using ‘design exercises’ to find successful market fits for existing technological solutions. (16:53) Solutions first, problems second: Why asking the right questions is key to mapping a technological solution back to a problem in the market. (19:31) Understanding and articulating a product’s value to potential buyers. (22:54) How the go-to-market strategies of software companies have changed over the last few decades. (26:16) Resources and Links: The Engine: https://www.engine.xyz/ Quotes from Today’s Episode There have been a couple of times while working at The Engine when I’ve taken it as a sign of maturity when a team self-examines whether their invention is actually the right way to build a business. 
- Reed (5:59) For some of the data scientists I know, particularly with AI, executive teams can mandate AI without really understanding the problem they want to solve. It actually pushes the problem discovery onto the solution people — but they’re not always the ones trained to go find the problems. - Brian (19:42) You can keep hitting people over the head with a product, or you can go figure out what people care about and determine how you can slide your solution into something they care about. ... You don’t know that until you go out and talk to them, listen, and get into their world. And I think that’s still something that’s not happening a lot with data teams. - Brian (24:45) I think there really is a maturity among even the early stage teams now, where they can have a shelf full of techniques that they can just pick and choose from in terms of how to build a product, how to put it in front of people, and how to have the [user] experience be a gentle on-ramp. - Reed, on startups (27:29)
38 minutes | Apr 20, 2021
063 - Beyond Compliance: Designing Data Products With Data Privacy As a UX Benefit with The Data Diva (Debbie Reynolds)
Debbie Reynolds is known as “The Data Diva” — and for good reason. In addition to being founder, CEO and chief data privacy officer of her own successful consulting firm, Debbie has been named to the Global Top 20 CyberRisk Communicators by The European Risk Policy Institute in 2020. She’s also written a few books, such as The GDPR Challenge: Privacy, Technology, and Compliance In An Age of Accelerating Change; as well as articles for other publications. If you are building data products, especially customer-facing software, you’ll want to tune into this episode. Debbie and I had an awesome discussion about data privacy from the lens of user experience instead of the typical angle we are all used to: legal compliance. While collecting user data can enable better user experiences, we can also break a customer’s trust if we don’t request access properly. In our chat, we covered: 'Humans are using your product': What it means to be a 'data steward' when building software. (0:27) 'Privacy by design': The importance for software creators to think about privacy throughout the entire product creation process. (4:32) The different laws (and lack thereof) regarding data privacy — and the importance of thinking about a product's potential harm during the design process. (6:58) The importance of having 'diversity at all levels' when building data products. (16:41) The role of transparency in data collection. (19:41) Fostering a positive and collaborative relationship between a product or service’s designers, product owners, and legal compliance experts. (24:55) The future of data monetization and how it relates to privacy. (29:18) Resources and Links: Debbie’s Website. Twitter: @DebbieDataDiva Debbie’s LinkedIn Quotes from Today’s Episode When it comes to your product, humans are using it. 
Regardless of whether the users are internal or external — what I tell people is to put themselves in the shoes of someone who’s using this and think about what you would want to have done with your information or with your rights. Putting it in that context, I think, helps people think and get out of their head about it. Obviously there’s a lot of skill and a lot of experience that it takes to build these products and think about them in technical ways. But I also try to tell people that when you’re dealing with data and you’re building products, you’re a data steward. The data belongs to someone else, and you’re holding it for them, or you’re allowing them to either have access to it or leverage it in some way. So, think about yourself and what you would think you would want done with your information. - Debbie (3:28) Privacy by design is looking at the fundamental levels of how people are creating things, and having them think about privacy as they’re doing that creation. When that happens, then privacy is not a difficult thing at the end. Privacy really isn’t something you could tack on at the end of something; it’s something that becomes harder if it’s not baked in. So, being able to think about those things throughout the process makes it easier. We’re seeing situations now where consumers are starting to vote with their feet — if they feel like a tool or a process isn’t respecting their privacy rights, they want to be able to choose other things. So, I think that’s just the way of the world. .... It may be a situation where you’re going to lose customers or market share if you’re not thinking about the rights of individuals. - Debbie (5:20) I think diversity at all levels is important when it comes to data privacy, such as diversity in skill sets, points of view, and regional differences. 
… I think people in the EU — because privacy is a fundamental human right — feel about it differently than we do here in the US where our privacy rights don’t really kick in unless it’s a transaction. ... The parallel I say is that people in Europe feel about privacy like we feel about freedom of speech here — it’s just very deeply ingrained in the way that they do things. And a lot of the time, when we’re building products, we don’t want to be collecting data or doing something in ways that would harm the way people feel about your product. So, you definitely have to be respectful of those different kinds of regimes and the way they handle data. … I’ll give you an example of bias that someone showed me, which was really interesting. There was a soap dispenser that was created where you put your hand underneath and then the soap comes out. It’s supposed to be a motion detection thing. And this particular one would not work on people of color. I guess whatever sensor they created, it didn’t have that color in the spectrum of what they thought would be used for detection or whatever. And so those are problems that happen a lot if you don’t have diverse people looking at these products. Because you — as a person that is creating products — you really want the most people possible to be able to use your products. I think there is an imperative on the economic side to make sure these products can work for everyone. - Debbie (17:31) Transparency is the wave of the future, I think, because so many privacy laws have it. Almost any privacy law you think of has transparency in it, some way, shape, or form. So, if you’re not trying to be transparent with the people that you’re dealing with, or potential customers, you’re going to end up in trouble. 
- Debbie (24:35) In my experience, while I worked with lawyers in the digital product design space — and it was heaviest when I worked at a financial institution — I watched how the legal and risk department basically crippled stuff constantly. And I say “cripple” because the feeling that I got was there’s a line between adhering to the law and then also—some of this is a gray area, like disclosure. Or, if we show this chart that has this information, is that construed as advice? I understand there’s a lot of legal regulation there. My feeling was, there’s got to be a better way for compliance departments and lawyers that genuinely want to do the right thing in their work to understand how to work with product design, digital design teams, especially ones using data in interesting ways. How do you work with compliance and legal when we’re designing digital products that use data so that it’s a team effort, and it’s not just like, “I’m going to cover every last edge because that’s what I’m here to do is to stop anything that could potentially get us sued.” There is a cost to that. There’s an innovation cost to that. It’s easier, though, to look at the lawyer and say, “Well, I guess they know the law better, so they’re always going to win that argument.” I think there’s a potential risk there. - Brian (25:01) Trust is so important. A lot of times in our space, we think about it with machine learning, and AI, and trusting the model predictions and all this kind of stuff, but trust is a brand attribute as well and it’s part of the reason I think design is important because the designers tend to be the most empathetic and user-centered of the bunch. That’s what we’re often there to do is to keep that part in check because we can do almost anything these days with the tech and the data, and some of it’s like, “Should we do this?” And if we do do it, how do we do it so we’re on brand, and the trust is built, and all these other factors go into that user experience. 
- Brian (34:21)
41 minutes | Apr 6, 2021
062 - Why Ben Shneiderman is Writing a Book on the Importance of Designing Human-Centered AI
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence. I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems. In our chat, we covered: Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI systems and why XAI matters. (30:34) Ben's upcoming book on human-centered AI. 
(35:55) Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc Quotes from Today’s Episode The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. 
We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05) The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10) Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. 
You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04) There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41) Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. 
They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36) Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—this is like the Amazon checkout process, a seven-step process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. 
It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
37 minutes | Mar 23, 2021
061 - Applying a Product Mindset to Internal Data Products with Silicon Valley Product Group Partner Marty Cagan
Marty Cagan has had a storied career working as a product executive. With a resume that includes Vice President of Product at Netscape and eBay, Marty is an expert in product management and strategy. This week, Marty joins me on Experiencing Data to talk more about what a successful data product team looks like, as well as the characteristics of an effective product manager. We also explored the idea of product management applied to internal data teams. Marty and I didn’t necessarily agree on everything in this conversation, but I loved his relentless focus on companies’ customers. Marty and I also talked a bit about his new book, Empowered: Ordinary People, Extraordinary Products. I also spoke with Marty about: The responsibilities of a data product team. (0:59) Whether an internally-facing software solution can be considered a 'product.' (5:02) Customer-facing vs. customer-enabling: Why Marty tries hard not to label internal employees as customers. (7:50) The common personality characteristics and skill sets of effective product managers. (12:53) The importance of 'customer exposure time.' (17:56) The role of product managers in upholding ethical standards. (24:57) The value of a good designer on a product team. (28:07) Why Marty decided to write his latest book, Empowered, about leadership. (30:52) Quotes from Today’s Episode We try hard not to confuse customers with internal employees — for example, a sales organization, or customer service organization. They are important partners, but when a company starts to confuse these internal organizations with real customers, all kinds of bad things happen — especially to the real customer. [...] A lot of data reporting teams are, in most companies, being crushed with requests. So, how do you decide what to prioritize? Well, a product strategy should help with that and leadership should help with that. But, fundamentally, the actual true customers are going to drive a lot of what we need to do. 
It’s important that we keep that in mind. - Marty (9:13) I come out of the technology space, and, for me, the worlds of product design and product management are two overlapping circles. Some people fall in the middle, some people are a little bit heavier to one side or the other. There’s a lot of focus on empathy, and on understanding how to frame the problem correctly — it’s about not jumping to a solution immediately without really understanding the customer pain point. - Brian (10:47) One thing I’ve seen frequently throughout my career is that designers often have no idea how the business sustains itself. They don’t understand how it makes money, they don’t understand how it’s even sold or marketed. They are relentlessly focused on user experience, but the other half of it is making a business viable. - Brian (14:57) Ethical issues really do, in almost all cases I see, originate with the leaders. However, it’s also true that they can first manifest themselves in the product teams. The product manager is often the first one to see that this could be a problem, even when it’s totally unintentional. - Marty (26:45) My interest has always been product teams because every good product I know came from a product team. Literally — it is a combination of product design and engineering that generates great products. I’m interested in the nature of that collaboration and in nurturing the dynamics of a healthy team. To me, having strong engineering that’s all engaged with direct customer access is fundamental. Similarly, a professional designer is important — somebody who really understands service design, interaction design, visual design, and the user research behind it. The designer role is responsible for getting inside the heads of the users. This is hard. And it’s one of those things where, when it’s done well, nobody even notices it. 
- Marty (28:54) Links Referenced Silicon Valley Product Group: https://svpg.com/ Empowered: https://svpg.com/empowered-ordinary-people-extraordinary-products/ Inspired: https://svpg.com/inspired-how-to-create-products-customers-love/ Twitter: https://twitter.com/cagan LinkedIn: https://www.linkedin.com/in/cagan/
38 minutes | Mar 9, 2021
060 - How NPR Uses Data to Drive Editorial Decisions in the Newsroom with Sr. Dir. of Audience Insights Steve Mulder
Journalism is one of the keystones of American democracy. For centuries, reporters and editors have held those in power accountable by seeking out the truth and reporting it. However, the art of newsgathering has changed dramatically in the digital age. Just take it from NPR Senior Director of Audience Insights Steve Mulder — whose team is helping change the way NPR makes editorial decisions by introducing a streamlined and accessible platform for data analytics and insights. Steve and I go way, way back (Lycos anyone!?) — and I’m so excited to welcome him on this episode of Experiencing Data! We talked a lot about the Story Analytics and Data Insights (SANDI) dashboard for NPR content creators that Steve’s team just recently launched, and dove into: How Steve’s design and UX background influences his approach to building analytical tools and insights. (1:04) Why data teams at NPR embrace qualitative UX research when building analytics and insights solutions for the editorial team. (6:03) What the Story Analytics and Data Insights (SANDI) dashboard for NPR’s newsroom is, the goals it is supporting, and the data silos that had to be broken down. (10:52) How the NPR newsroom uses SANDI to measure audience reach and engagement. (14:40) 'It's our job to be translators': The role of moving from ‘what’ to ‘so what’ to ‘now what.’ (22:57) Quotes from Today’s Episode People with backgrounds in UX and design end up everywhere. And I think it's because we have a couple of things going for us. We are user-centered in our hearts. Our goal is to understand people and what they need — regardless of what space we're talking about. We are grounded in research and getting to the underlying motivations of people and what they need. We're focused on good communication and interpretation and putting knowledge into action — we're generalists. - Steve (1:44) The familiar trope is that quantitative research tells you what is going on, and qualitative research tells you why. 
Qualitative research gets underneath the surface to answer why people feel the way they do. Why are they motivated? Why are they describing their needs in a certain way? - Steve (6:32) The more we work with people and develop relationships — and build that deeper sense of trust as an organization with each other — the more openness there is to having a real conversation. - Steve (9:06) I’ve been reading a book by Nancy Duarte called DataStory (see Episode 32 of this show), and in the book she talks about this model of the career growth [...] that is really in sync with how I've been thinking about it. [...] you begin as an explorer of data — you're swimming in the data and finding insights from the data-first perspective. Over time in your career, you become an explainer. And an explainer is all about creating meaning: what is the context and interpretation that I can bring to this insight that makes it important, that answers the question, “So what?” And then the final step is to inspire, to actually inspire action and inspire new ways of looking at business problems or whatever you're looking at. - Steve (25:50) I think that carving things down to what's the simplest is always a big challenge, just because those of us drowning in data are always tempted to expose more of it than we should. - Steve (29:30) There's a healthy skepticism in some parts of NPR around data and around the fact that ‘I don't want data to limit what I do with my job. I don't want it to tell me what to do.’ We spend a lot of time reassuring people that data is never going to make decisions for you — it's just the foundation that you can stand on to better make your own decision. … We don't use data-driven decisions. At NPR, we talk about data-informed decisions because that better reflects the fact that it is data and expertise together that make things magic. - Steve (34:34) Resources and Links: Twitter: https://twitter.com/muldermedia
43 minutes | Feb 23, 2021
059 - How Design Thinking Helps Organizations and Data Science Teams Create Economic Value with Machine Learning and Analytics feat. Bill Schmarzo
With a 30+ year career in data warehousing, BI and advanced analytics under his belt, Bill has become a leader in the field of big data and data science – and, not to mention, a popular social media influencer. Having previously worked in senior leadership at Dell EMC and Yahoo!, Bill is now an executive fellow and professor at the University of San Francisco School of Management as well as an honorary professor at the National University of Ireland-Galway. I’m so excited to welcome Bill as my guest on this week’s episode of Experiencing Data. When I first began specializing my consulting in the area of data products, Bill was one of the first leaders that I specifically noticed was leveraging design thinking on a regular basis in his work. In this long overdue episode, we dug into some examples of how he’s using it with teams today. Bill sees design as a process of empowering humans to collaborate with one another, and he also shares insights from his new book, “The Economics of Data, Analytics and Digital Transformation.” In total, we covered: Why it’s crucial to understand a customer’s needs when building a data product and how design helps uncover this. (2:04) How running an “envisioning workshop” with a customer before starting a project can help uncover important information that might otherwise be overlooked. (5:09) How to approach the human/machine interaction when using machine learning and AI to guide customers in making decisions – and why it’s necessary at times to allow a human to override the software. (11:15) How teams that embrace design-thinking can create “organizational improvisation” and drive greater value. (14:49) Bill’s take on how to properly prioritize use cases. (17:40) How to identify a data product’s problems ahead of time. (21:36) The trait that Bill sees in the best data scientists and design thinkers. (25:41) How Bill helps transition the practice of data science from being a focus on analytic outputs to operational and business outcomes. 
(28:40) Bill’s new book, “The Economics of Data, Analytics, and Digital Transformation.” (31:34) Brian and Bill’s take on the need for organizations to create a technological and cultural environment of continuous learning and adapting if they seek to innovate. (38:22) Quotes from Today’s Episode There’s certainly a UI aspect of design, which is to build products that are more conducive for the user to interact with – products that are more natural, more intuitive … But I also think about design from an empowerment perspective. When I consider design-thinking techniques, I think about how I can empower the wide variety of stakeholders that I need to service with my data science. I’m looking to identify and uncover those variables and metrics that might be better predictors of performance. To me, at the very beginning of the design process, it’s about empowering everybody to have ideas. – Bill (2:25) Envisioning workshops are designed to let people realize that there are people all across the organization who bring very different perspectives to a problem. When you combine those perspectives, you have an illuminating thing. Now let’s be honest: many large organizations don’t do this well at all. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking isn’t empowering the senior executives. In many cases, it’s about empowering those frontline employees … If you have a culture where the senior executives have to be the smartest people in the room, design is doomed. – Bill (10:15) Organizational charts are the great destroyer of creativity because you put people in boxes. We talk about data silos, but we create these human silos where people can’t go out … Screw boxes. We want to create swirls – we want to create empowered teams. In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. 
Meaning, you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play … These players have been trained and conditioned, they make their own decisions on the field, and they interact with each other. Watching a good soccer team is like ballet because they’ve all been empowered to make decisions. – Bill (15:30) I tend to feel like design thinkers can be born from any job title, not just “creatives” – even certain types of very technically gifted people can be really good at it. A lot of it is focused around the types of questions they ask and their ability to be empathetic. – Brian (25:55) The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer…and here’s the beauty of design thinking: anybody can do it. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. – Bill (26:34) From an economic perspective … The value of data isn’t in having it. The value in data is how you use it to generate more value … In the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. – Bill (36:03) Links Transcript Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill and today, Bill Maher-Schmarzo, sorry. We were just talking about your name, and it was like, “It could be an adjective. It could be a verb. And it’s a noun, too.” [laugh]. 
Bill Schmarzo, thanks for coming on Experiencing Data with me. How are you? Bill: Yeah, Schmarzo as a verb, what would that mean? You Schmarzo-ed something. Would that be- Brian: I don’t- Bill: -would that be-that’d be a great success or a horrible failure? Brian: It kind of falls into that Yiddish vibe, immediately. I don’t know, like- Bill: [laugh]. Brian: You’re an [crosstalk 00:00:56]- Bill: [singing]. Schmarzo. Brian: [laugh]. Bill: [laugh]. Brian: Excellent. No, no. It’s great to have you on here. And I’ve had you on my list for some time. Part of the reason I had you on here is I think when I decided to focus my work on trying to bring design into the world of data products and helping teams get better outcomes from their work with design, you were one of the few people where I even saw that word anywhere relevantly close to your job title, your work description, the way you think about things. And I’m like, okay, I got to get him on the show at some point to see why. So, maybe we can just start there. I think design is really hand-wavy for a lot of people. It’s this kind of like fluffy extra stuff. If you have some extra money, maybe you throw some of that little magic dust in at the end. It’s not a normal way that, especially non-digital native companies tend to operate. I think it’s changing slowly; I would say it’s pretty normal now for most tech companies. Like, when I consult, designing user experiences is an early hire along with your technology people, as well. But talk to me about why does design matter at all in data science and analytics? You’ve had some experiences with this and I want you to tell us about it, but help someone who hasn’t really bought into this or kind of hears it as this creative-y, hand-wavy thing, why do we need that? Bill: Well, Brian, it’s a loaded question. And the good news is, there’s actually not a simple answer to this. I think it’s actually a complex answer because it all starts by what you mean by design. 
Now, there’s certainly a UI aspect of design, to build products that are more conducive for the user to interact with, more natural, more intuitive. And product companies, for the most part, over the last 10, 15 years, have really embraced design from a UI perspective. But I also think about design from an empowerment perspective. So, when I think about design, and particularly things like design thinking techniques, I’m thinking about how do I empower the wide variety of stakeholders that I need to service with my data science, to identify and uncover those variables and metrics that might be better predictors of performance. And so to me, at the very beginning of the design process, it’s really about empowering everybody to have ideas. All ideas are worthy of consideration-which by the way, doesn’t mean all ideas are good-it’s about having that ability to diverge your broad thinking in order to converge. So, to me, it’s an empowerment process where everybody has a chance to have a voice because you never know who might have the best idea. The second part about design that I think is very critical is that good design-whether it be customer journey maps, or personas, or service design templates-forces you to speak the language of your customer. Way too many product companies and way too many service companies are internally out-focused. That is, they think about their products and their services first, and then how does the customer fit into what I have to offer? Wrong. You need to understand what your customer is trying to accomplish. That’s why I love a customer journey map. I think a customer journey map is an illuminating process, to understand what the customer is trying to accomplish. And then being able to figure out how do my products and services help support that journey map? So it’s very much-design is a pivot in how people think. That is you stop thinking internally out, and you start focusing externally back in. 
And the reason why this is so important is because the only real source of value creation is around the customer. Customer is the only person with ink in their pen. They’re the only ones who are buying things. Even if you think about [unintelligible 00:04:52] in a B2B market, well, you’re probably B2B2C at some point. And so you need to be able to speak the language of the customer and walk in their shoes in order to be able to identify, validate, verify, and prioritize the sources of customer value creation. Brian: So, you’re preaching to the choir here on this, but tell me, for someone who hasn’t experienced this, can you give me a specific example where-or maybe you’ve seen a team where there was like a before-after. So, in the old way, they were doing things this way, and this other way, when we tried doing this project, or this product, or whatever the thing was that they were making, we went through some of this process, what light bulbs went on? Was there a particular moment where things clicked for you, either personally, or just seeing a team go through this? Help make it concrete. Bill: Okay. This is a long story; I apologize upfront, but it’s an illuminating story to me. It’s around what’s something we call an envisioning workshop. That is, before we ever do our data science work before we really start putting [unintelligible 00:05:52] the data, we bring in all the key business stakeholders and we run an envisioning workshop to identify the variables and metrics that might be better predictors of performance. And of course, the key phrase there is the word ‘Might.’ Because if you as an organization don’t have enough ‘Might’ moments, you’ll never have any breakthrough moments. And so we go through this process; we’re doing a project for a casino, a large casino. And they’re trying to figure out how to optimize the comps they give. 
Most casinos have a twenty percent rule that if you lose, like, $1,000, they’re going to give you $200 back in comps because they don’t want you not to come back. So, generally what casinos typically did is they gave everybody twenty percent of what you lost. And that was a total waste because some people, you were wasting your money, they were never going to come back anyway. Some people, maybe, needed more, some people needed less. And so they wanted to create a very focused calculation on not only understanding what the customer’s lifetime value was, but they wanted to create a prediction of what that lifetime value could be. So we’re running this visioning workshop and we’re trying to brainstorm these variables and metrics, and there’s this woman in the back-I don’t remember her name, I’m going to call her Mary-Mary’s in the back, and Mary in the casino, her team sits behind-she’s the cashier-sits behind the bars. And they’re the ones extending credit to different players. And she shares-so, we had interviewed her, and I knew she had some good insights so I said, “Hey, Mary, tell us what information and metrics you might know of, or some data you might know of that might really help us figure out who our highest-potential customers are?” And she says, “Well, every night, the thirty casinos down in Southern California, they fax each other what everybody-all the players who got a line of credit. And the reason why they do, the reason why they fax this is, they don’t want somebody bouncing from casino to casino, running up a line of credit, and then bolting down to Mexico.” Now, it makes total sense. So, every night, they fax all this information. And she says this, and there’s a guy in the front of the room-let me call him, Buddy. He runs the slots. And in these casinos, the guy who runs-or gal who runs slots, they’re the king of the casino. And I remember, Buddy hears this, he pivots and he looks, he goes, “Wait, Mary. 
Are you telling me that every night we tell every other casino in our area, who our biggest players are from a line-of-credit perspective?” She goes, “Yes.” He goes, “And,” he says, “every other casino is telling us who their biggest players are from a line-of-credit perspective.” And she said, “Yes,” And then she goes, “but it’s in a PDF, so, you know, you can’t get to it.” And of course, our data science team is all drooling. “PDF, [drooling sound] let me at that.” And it was at that moment that Buddy realized that, oh, my gosh, this is a very valuable piece of information, especially in combination with all the other data brought together. So, in that moment, you could literally see it in this room of 25 casino executives: the light bulb went off all at once when they realized, oh my gosh, we’re all sitting on these little pieces of data that individually may not sound important, but when you bring them all together, it gives you invaluable insights into who your most important customers might be. So, that was one of those moments when that happened, you could literally see the whole room look at each other and go, “Oh, my gosh, I didn’t know we had that.” And then all [unintelligible 00:09:00] talking, “Well, what other data do we have? What else do we know about our players?” And such. So, it was-anyway, long story. Brian: Do you know how that turned out- Bill: Oh, yeah. Brian: -or what they ended up- Bill: Oh, yeah. [laugh]. I can’t go into details, but let me tell you, the payback, the ROI on that project was measured in weeks, not years. It worked out very well. Brian: That’s great. I love it. 
So, getting to the point, though, where some people that come from math, or statistics, engineering, some of these technology upbringings and they end up in management positions, how does someone decide that, “Oh, we need a design thinking workshop,” or, “We need”-that all of a sudden this matters, that we should spend the time to go and do something like this before we start deploying machine learning or data science at our problem? Is this something where you’re like, “Well, hold the phone here. If we’re going to do this, we’re going to do it my way,” and you kind of have your thing, or was this something where they knew that they were going to go through this process and they wanted that? What was that experience like, and what had to change to allow that to happen? Because this doesn’t happen in a lot of places, in my experience. Bill: You’ve got to allow them to experience the power of empowerment. You have to help them-the envisioning workshop is all designed to let people realize that there are valuable assets-people-all across the organization who bring very different perspectives to a problem, and when you combine those perspectives, you have an illuminating thing. Now, let’s be real honest: many large organizations don’t do this well at all. Horribly. And the reason why is not because they’re not smart, it’s because in many cases, senior executives aren’t willing to let go. Design thinking-you think about the empowerment-isn’t empowering the senior executives. In many cases, it’s empowering those frontline employees. Think about the COVID situation. Who knows the conditions around COVID better than a nurse or a doctor? It isn’t the chief hospital administrator; he doesn’t know anything. I mean, it’s the people at the front line who really know what’s going on. So, if you have a culture where the senior executives have to be the smartest people in the room, design is doomed. 
Brian: This last mile, you know, we talked about all the amazing things we can do with the technology, and it always comes back to this last mile. There’s going to be some human touch-point at the end of this that’s ultimately going to determine what’s going to happen, how much-someone is going to hand a wad of cash or tokens to that gambler: that is the last mile. How much is that wad of cash? Well, she’s the last person, or whoever the woman, Mary behind bars or whatever it was, how do we help Mary know how much to give out? What is she empowered to do? Does she have any personal judgment over that? If you don’t understand that last mile there, or if you bury it in some application, she’s like, “I deal with the cash. And I talk to the customers. I’m not going to go, open up a 500-page PDF to look up someone’s name to see what their-you know, how much cash they’re allowed to take out on a credit line, or whatever it is.” I’m guessing. You speak for-tell me if I’m wrong here. But it’s really important to talk about this last mile stuff at the beginning of the project, and then cover ‘How might we’ questions to get what you mentioned. Bill: Well, there are a couple of points here, Brian. Number one is that last mile is where you really are turning your analytic outputs into business outcomes. And so we didn’t expect Mary to go and pull up a record that showed everything this person hit. We created a score, a series of scores that said how important is this person to us? How much are we willing to give them? And within the guidelines-it isn’t like there was an AI robot saying, “You get $2,000.” She had the ability to make a judgment. Now, what we also did is when she made a judgment as far as how much to give somebody, we recorded it because we wanted to learn if that was a good decision or not. 
So, we want to use AI machine learning to give recommendations, and scores, and guidance to the frontline employees, and technicians, and people giving out cash, the cashiers, but we also want to empower the humans to override that. Let me give you a really cool story. So, we know that most automobile loans today are driven by AI models. And they’ve got a model that works, and if somebody walks in, and their past payment and credit history doesn’t look worthy, they’re out the door. They’re automatically rejected. Well, they did this once, and one of the women who was at the bank who was giving out the loans, she said, “Well, tell me why do you want the money?” And the person said, “Well I’ve had a rough life. I’ve been in and out of prison, and I’ve had some problems and I’ve decided I’m going to become an Uber driver, and I need to buy a car.” Now, think about it. Now, all of a sudden, you realize that this person would have been denied a car loan, but this person is going to get a car loan in order to make money, they’re going to use it. And she asks a few more questions to understand what his plans were, his process, kind of did a sanity check on the business model and gave him the loan. Now, what she wants to do is she wants to immediately tag that and said, “Was that a good loan? Did that person pay it back, or did the person not pay it back?” Well, in this case, the person bought the car, became an Uber driver, made lots of money, bought another car, right? Pretty soon had a series of cars of people he was working with that were doing this. So, you need to empower the human at this last mile to override what the model might say, but you want to measure how effective it was because if you don’t do that, what happens is these AI models suffer from what’s called confirmation bias. 
They keep making the same decisions over and over again, and they don’t look at the outliers that if the-false positives and the false negatives [unintelligible 00:14:39] could dramatically not only improve the quality of the AI model’s decision but also could improve dramatically the quality of an organization’s total addressable market. Brian: Do you find that this kind of squishy human part of it and allowing some of this human touchpoint, is this an uncomfortable thing that an organization needs to get past, whether it’s the data scientists themselves who are thinking, often, about model accuracy as being the ultimate determinant of their worth in the organization, or the management to understand, wait a second, we’re really going to let Mary or John decide how much money to be handing out at the front gate? Is that a tough thing to swallow, or do you think by the time they’ve gone through a design-driven method for building a solution like this, it’s not a difficult thing to get to? Bill: Oh, it’s very difficult. It’s very difficult because we have senior executives who went to school, learn certain management techniques, have been in business-I always get very frustrated when I see an organization-organizational charts are the great destroyer of creativity because you put people in boxes and then you almost forbid the people in marketing to talk to the people in sales, talk to-I mean, you put them in boxes. And you-they’re like silos. We talk about data silos, but we create these human silos where people can’t go out. And so, most organizations operate around boxes, organizational charts, and whenever they bring in, by the way, a senior expensive management consulting firm to do some analysis, the management consulting firm always comes back with a new set of boxes; here’s the box you need to be in. You know, screw boxes. We want to create swirls, we want to create empowered teams. 
In fact, the most powerful teams are the ones who can embrace design thinking to create what I call organizational improvisation. That is you have the ability to mix and match people across the organization based on their skill sets for the problem at hand, dissipate them when the problem is gone, and reconstitute them around a different problem. It’s like watching a great soccer team play. It isn’t like the coaches standing up there and yelling, “Bill, you go there.” “Max, you go there.” And, “Alec, you kick the ball over here.” No. These players have been trained and conditioned, they make their own decisions on the field, they interact with each other. It’s like ballet, watching a good soccer team because they’ve all been empowered to make decisions. But the minute we get into a business world, it’s like, the senior guy at the top knows all the answers, and everybody else is a friggin’ bunch of robots, and just nod your head. That way of running an organization is friggin’ dead. It is dead, and there are up and coming organizations that are going to knock those people on their butts because they’re going to empower all the creativity, they’re going to unleash the greatness in each of their employees by employing not only design thinking, but integrating design thinking with data science to really help to identify, codify, and operationalize all those sources of value. Brian: Sure. Yeah, I’m totally with you. I think that the idea of owning the problem and not-I’m going to call it “solution” in quotes, but what often is the output, not the outcome. 
But if you can allow the team to own the problem, it really changes that dynamic now because it opens up other possibilities for doing things and not every solution that we come out with needs to be hit with this hammer-the machine learning hammer is my favorite one-you know, “No matter what it is, let’s hit it with that.” And so part of what I do in my seminars is to really-by understanding the-framing the problem correctly, we might find out we don’t need machine learning for it, and if we can let go that yes, that is your core technical skill, but maybe you find out what, you don’t need me on this project. You need a more basic analytical technique here, we don’t need to build out a giant infrastructure to do this. We can get something done in three weeks, using a more elementary technique here, and that’s going to solve the problem, now that we know what it is. How does a team though, get-you talked about the-I understand the management and some of these larger changes that aren’t going to change on a dime, but if someone was feeling like, “I get this. I’ve been through the pain. This makes sense to me.” What’s the zero to one step? We’re not doing any of this now. What’s the first step to getting into this world? Is it to deploy design on a small project? Or, like how would you recommend an organization take a first step into doing things a different way like this? Where does it begin? Bill: Where I’ve seen this be successful is-you said here, Brian, you pick a use case that has meaningful business value. And in fact, what we do is we go through-in our visioning process, we go through the prioritization process, we identify use cases, and we go through a process of prioritizing based on value, and feasibility over a nine-month period. We’re not going to cure world hunger. That’s a project that’s doomed from the start. 
So, we find a use case we can go after, we find a friendly on the business side who wants to engage with us, who sees either a growth opportunity of how using data science can help them change a thing, or somebody who’s in trouble mode, who knows, “I got to change.” So, we find a friendly, we do a proof of value. And a proof of value can be four to six months. Pretty straightforward. And we did one of these when I was at Hitachi Vantara. Our CIO, she realized that their data lake was basically a data swamp, wasn’t getting any value. So, she wanted to try something different. She partnered with our chief marketing officer, they picked a use case, we applied this process. The proof of value generated $28 million in additional revenue. $28 million in the proof of value. Guess- Brian: Drain the swamp. Bill: Guess who our biggest supporter was going forward? The CIO now understood. She goes, “Oh, my gosh, I see-this is like printing money.” So, use case by use case, here she is, the data lake no longer has 25 data sources, it has the three most important and then we add one more. She saw the light and became our strongest proponent. It delivered value. So, what happens-and I know this is a hard concept to grok, but you have to basically build it brick-by-brick, and you have to, as a design team, as a data science team, prove yourself every step. Because the minute you make a misstep, you go back to zero. And so you may have gotten to the 20, 30, 40-yard line, but you screw up, you’re back at the goal line again. Off you go again. Brian: How do you handle that, though, because I see a lot of the point of design is to move quickly and to accelerate learning as fast as possible, working in low fidelity, getting ideas going visually, fast.
And this includes things like journey mapping, and not just the final-as you said, you can apply this to things where there’s not a heavy user interface element, a lot of this is about the problem, the way we approach problem-solving. But to me, that failure part is very much a part of this. Not everything is going to have a Eureka moment, there’s not always going to be a success, and not everything is quantifiable. So, how do you handle that? I don’t think every time we do this, it’s necessarily going to yield a substantially different type of outcome. I think overall on the trend, it’s a better way of designing solutions. It’s human-centered in every way and you’ll get the business value if you get the people part. But tell me about that. Maybe I misunderstood you, but it sounded like you’re saying if you don’t show a success every time, you don’t get another swing at it. That sounds like a really risky-especially if a team is trying to do this, and maybe they don’t have internal knowledge on how to do it. It sounds really risky. Bill: It is. But here’s what you do. You cheat. You cheat by-you identify the problems ahead of time, through this envisioning process that you can be successful at. When we do an envisioning workshop, the data science team is right there with us. It’s led by a design thinker. We have data engineers here, we have business subject matter experts. And so we cheat by making sure that we’ve invested enough time upfront to minimize the chances of failure. Because what you don’t want to have happen, what can easily happen, you do one of these things, you have success. Everybody, the organization sees it right? Now, everybody wants it. The worst thing you can do is to open the floodgates and let everybody do it. Because then you’ll have everybody fail. So, what you have to do-you said a very important word here, Brian. Learning. It’s a learning process, and so how does an organization exploit the economies of learning? 
Because in knowledge-based industries, the economies of learning are more powerful than economies of scale. So, you have to put in a governance process-a governance process with teeth-that says, “We’re going to review the use cases, we’re going to go use case by use case. And oh, by the way, if you do this right”-like my latest book says-“You can exploit the economics of data and analytics to drive material impact in the organization.” It’s around the economies of learning. And it isn’t necessarily human-centered as much as it’s human empowered to help you figure out how you learn more quickly. So, you will have success. Every time we’ve done one of these things-and I’ve been doing this for 20-some years-every time we do this, we have a success. The challenge isn’t the first one. The challenge is the second and third one. So, you don’t basically open the floodgates. And I’m serious about this governance process; you need to have a very rigid governance process that both blesses the use cases you go after, but ensures that the organization is leveraging the learning from the data and analytics, use case by use case. Brian: Mm-hm. Does that require a lot of either specialized roles or a lot of additional, not necessarily technology that needs to be enabled, but a responsibility that someone has to own in the business? It’s this cost of doing design that has to be built in along this to keep the thing on the right track? Is that kind of what you’re saying? Bill: I think the reason why most companies are really poor at monetizing their data is because nobody in the organization owns it. It’s spread across chief data officer, chief information officer, chief analytics officer. You have all these people who own pieces of it. Unless you have one throat to choke who is that point of governance by collaborating with other executives to make sure you have this approach, this thing can go-well, one thing, it probably never gets off the ground. 
But if you don’t have that, I call it the chief data monetization officer role, who is basically responsible for trying to drive the governance and the re-application of data analytics across the organization, then this thing can-yeah, you’ll have your first success, but it’s not the first one that matters. It’s the third, and the fifth, and the seventh. It’s all these other ones that come on after that where you get a really big impact. Brian: Who are the right types of people to do this kind of work in an organization, especially if they don’t have designers who are trained in doing this type of facilitation, or work? Is there a personality type or something? A trend you’ve seen? I tend to feel like design thinkers can come out of almost anywhere, even certain types of very, very technical people can be really good at it; a lot of it is around the types of questions they ask. I find it’s people who have built enough bad stuff, and they’re tired of doing it that way, and they’re just like, “I’m at the point in my career where I want to work on good stuff. My job prospects are good. I’m tired of building stuff I don’t like that doesn’t go anywhere.” And those people sometimes can become really good at this. But can you tell me about your experience? If I was to tap four people, and run some small squads, and try to bring some of this into my organization, who would I look for as a leader in the data space? Bill: The best design thinkers and the best data scientists share one common trait: they’re humble. They have the ability to ask questions, to learn. They don’t walk in with an answer, they walk in trying to seek an answer. And that’s a very different process. And here’s the beauty of this design thinking kind of approach. Anybody can do it. Anybody. But you have to be humble. If you already know the answer, then you’re never going to be a good designer. Never. 
Once you have put together this team of people who are intrinsically humble, and willing to ask questions, and learn from each other, it creates this synergy around creativity. And I would argue that we’re all born naturally with creativity. As little kids, you know, we took things apart, and we put them together. And I took things apart and put them back together and there were always extra parts left over, which always drove my dad nuts. “Oh, there goes that radio. That one’s no good anymore.” But what happens is-and it starts in school, through things like standardized testing, and such, where we work really hard in schools, starting in grade school, and middle school, et cetera, to wipe creativity out. “Oh, that kid, he’s a troublemaker. He’s an outlier. He can’t sit there and be still.” And then we get standardized testing in college, and everything else, and standardized curriculums. And so we wipe creativity out. The best design thinkers and the best data scientists are, by their very nature, very creative. They have a strong curiosity about what variables and metrics might be important. They leverage that curiosity to explore, and exploring is about failing. You don’t learn if you’re not failing. So, you have to embrace that-and the data science development methodology is full of failures. You are failing all the time. You’re constantly trying different combinations of variables, and metrics, and different algorithms, and different transformations and enrichments. You’re trying all these things, just to see if you can get a little bit better. It’s built on failure. Brian: I would agree, that ability to ask questions and putting aside some of our assumptions about being the smartest one in the room, or “I know the domain the best,” or whatever-I would agree those are critical skills. Do you find that for the people who end up being really good at this, does this take away from or complement the data science work they do?
Do they tend to then kind of migrate out of the hands-on data science work if they do come from that area? Or is this just a different way for them to do their technical work-it just becomes part of the way they do that technical work as well? Do they kind of evolve into a different role within the organization? How do you see that? Because-just devil’s advocate-I could see some people saying, “Well, I really need those kinds of people on that really hard modeling stuff that we pay them really well to do, and we don’t have a lot of that resource, so I don’t want to have them doing this other stuff, this design stuff.” Play the other side of that argument for me. Bill: So, imagine you’re a data scientist, and you’ve got an infinite number of ways to solve a problem. Truly infinite. There’s almost an infinite number of data combinations, of data enrichment techniques and transformations and algorithms. It’s almost an impossible job. How do you take the impossible job and make it manageable? Well, you put guardrails around it. What we do in this envisioning process, using these design techniques, is to really understand: what are the variables and metrics we’re trying to optimize against? What are the decisions we’re trying to make? What metrics and variables might help us be better predictors of those decisions? And so, we automatically start putting some guardrails in place that help the data scientist. They’re still going to bounce around between those guardrails, but they’re not off in Etherland. They now have a concrete idea of what they need to do. It’s really this key question of how we transition data science-modern data science, data science 2.0-from outputs to outcomes. How do we transition that discipline, that practice, from focusing on analytic outputs using ML and AI techniques to delivering business and operational outcomes that have meaningful financial value attached to them?
I think it’s all about the maturation of data science as a discipline. We’re not focused on the activities; we’re focused on the outcomes. And so I think what you’re seeing is that data scientists, they love this because now they know that their work has meaning, they know who their customers are-if you do the design part right, they can even envision the customer, they can walk in their shoes, they can go to the store, and see what the customer is going through, and experience it firsthand. So, my experience with data scientists-and by the way, it’s not all data scientists; I’ve had a couple of data scientists who couldn’t get this. I had to let them go. But I needed them to be able to think, and act, and talk to the customers. They needed to be a part of this process in order to be an effective data scientist. Brian: Amen. I’m totally with you there. Jared Spool, in the design [unintelligible 00:31:37], talks a lot about exposure hours. How much exposure are the people that are making the decisions getting-and if the data scientist is doing the model, and the model is part of the solution, then they’re effectively one of the designers of the solution-we’ve got to get more exposure time for all of the people, not just the researchers, and designers, or whoever. It’s the team that’s responsible to make the decisions; it’s really important to have that exposure: you develop the empathy; you start to foresee solutions; instead of just being reactive, you can start to be proactive and say, “Wait a second. Why aren’t we doing this here? We could so easily do X, Y, and Z over here to help this, and maybe no one’s asking about it because they didn’t know it was possible. But I know it’s possible because I’ve been trained in this, and this is a very easy thing that we could get a win over here.” I’m with you on all of that. Where did someone like you-you have a computer science background, a math background as I understand-how the heck did you get into this?
Where did you get exposed to this as someone at your level? Bill: So, I’ve always been fascinated with data and analytics because of what I can do with it. And it started probably at an early age when I was in middle school and we played this board game called Strat-O-Matic Baseball, which was a kind of precursor to baseball sabermetrics. And I quickly realized, because I knew more about math and stats, that I knew which players were more valuable than other players, and I had an unfair advantage in trades and amassing a team that was pretty-you know, Murderers’ Row had a whole new definition. So, I’ve always had this fascination; I knew that if you could leverage data and analytics, you could drive outcomes. The real place of indoctrination for me, though, was when I was working with Procter & Gamble in the 1980s. Yeah, I am that old. And Procter & Gamble was moving towards a data-based decision-making culture. And we built, in ’87 and ’88, one of the very first-and maybe it was the first-data warehouse and BI environments, with Procter & Gamble’s data combined with Walmart’s point-of-sale data. And the kind of insights we were able to gain on marketing programs, and pricing, and promotions, and all these other things were illuminating. We were printing money. It was staggering. And I remember walking out of there-and we’d all been trained in six sigma as a methodology-I remember walking out of there thinking, “There’s a better way to do this.” Procter & Gamble sort of got my appetite whetted on this. And so all through my life, Brian, I’ve been on this goal to really try to understand: what is the value of data, and how do I help organizations leverage data to make better decisions? And so it just came on and on. I’ve had a lot of Forrest Gump moments in my life.
When I went to become vice president of advertising analytics at Yahoo, that was one of those Forrest Gump moments where everything I’d learned about BI and data warehousing, I had to unlearn, because the way that we did analytics at Yahoo was very different from how we were doing analytics at other places I’d been. So, I’ve always sort of been on this journey. And not to bore you, but it was a research project-I teach at the University of San Francisco-it was a research project we did. I’d always been fascinated with trying to understand the value of data, and so when I was at USF-I’m the executive fellow there-I was able to do a research project. I had lots of really bright, really motivated research assistants who were free, and I turned them loose on this problem. And the epiphany moment in that-when I went into this, I was thinking about, “How do I show data on the books? If data is truly an asset, you’ve got to find a way to put it onto a company’s balance sheet.” And so we’re doing this project, and I asked my team to go out. I said, “Find me an asset that sits on the balance sheet that looks like data.” And so off they go. They do their work and do their brainstorming, and one of the research assistants, she comes back to me and says, “Professor Schmarzo, I got to be honest, I can’t find anything. Data isn’t like anything we have on the balance sheet. Think about it. It never wears out. It never depletes. The same data set can be used across an unlimited number-an infinite number of use cases at zero marginal cost.” And that’s when I realized, “Oh, my gosh, I’ve been thinking about this entirely wrong.” That zero marginal cost comment reflected back to the marginal propensity to consume, or the economic multiplier effect, and I realized that my approach all along had been wrong in how I viewed data as a standalone asset.
But when I took a look at it from an economic perspective, from this economic multiplier effect, I realized the value of data isn’t in having it. The value in data is how you use it to generate more value. And that’s just launched everything I’ve been doing on economics. I’m now working on a concept around nanoeconomics. Like I said, I mentioned it in my book-the book’s called The Economics of Data, Analytics, and Digital Transformation, probably the most boring title one could ever think of. But it speaks to the heart of the opportunity: this whole conversation is around economics. And I would argue, in the same way that design thinking is learning how to speak the language of the customer, economics is about learning how to speak the language of the business. And when you bring those concepts together around data science, that’s a blend that is truly a game-changer. Brian: Who would get the most out of your new text? Bill: I think anybody: students, professionals, retirees-anybody who’s trying to understand, “How do I advance my career by understanding more about how one exploits the value of data and analytics?” would benefit from this. I did a keynote at a large industrial company about two weeks ago, and after the keynote, one of the vice presidents said, “Your book is going to be mandatory reading for all of our leaders because our leaders need to transform how they think about data and analytics.” And it’s not just a technology conversation; it’s how do we leverage design and human empowerment in order to create a culture and a company of continuous learning and adapting? So, I think anybody can benefit from it. But it’s not a fun read. It’s a horrible read; it’s a boring read because it was written as a textbook. It’s deep. It makes you do homework assignments at the end of each chapter.
But if you’re really serious about understanding why data and analytics are such a unique asset, and how you personally and professionally can advance your career with them-I’ve gotten a lot of very positive feedback on the book as far as changing how people think about their careers. Whether they’re a nurse, a data scientist, a teacher, or a technician, anybody whose career can benefit from data and analytics and making better decisions, I think, will enjoy the-well, honestly, enjoy is the wrong term; I think they’ll get value out of the book. Brian: Yeah, that’s great. That’s great. This has been a great conversation. I really appreciate you sharing all these insights. Just kind of in closing, is there one particular message that you would send out to the leadership community in the data science and analytics and product space here, about all the things that you’ve learned, putting together the economic side of data, your use of design as a strategic way of problem-solving within businesses to create better solutions? Is there one message you’d like to leave them with? Bill: Yeah. Here’s the message I leave them with. I believe in knowledge-based industries, economies of learning are more powerful than economies of scale. And organizations need to work hard to create both a technology and a cultural environment of continuous learning and adapting. In my book, chapter nine-which I think is the most powerful chapter in my book-has nothing to do with technology or economics; it has everything to do with team empowerment. If organizations are going to truly create a culture of continuous learning and adapting, to learn faster than the competition and to adapt more quickly, then you have to empower your frontline people. You have to empower the frontline people because that’s the point where machine learning and human collaboration is going to drive new sources of customer, product, and operational value.
Brian: Love it, love it. Love it. This is so good. Where can people follow you or get more insights? Do you have a mailing list or something like that? What’s the best place to follow? Bill: On LinkedIn. I try to post about one blog a week. My goal is to continue to write new chapters for the book. I mentioned this concept around nanoeconomics, I think is very much a game-changer. It’s a new concept, and I’ll create the equivalent of a chapter that would go in the book [unintelligible 00:40:02] actually won’t. I’m also working a lot right now on ethical AI and how do organizations create a culture that enables you to overcome the confirmation bias that drives AI to do unethical things? So, LinkedIn is the place to find me. Come hang out on LinkedIn. Brian: Awesome, awesome. Well, we’ll definitely link that up. The Economics of Data, Analytics, and Digital Transformation if you’re interested. Bill Schmarzo, this was such an awesome conversation. Thank you so much for coming on the show. Bill: Thanks, Brian, for having me. It was a lot of fun.
35 minutes | Feb 9, 2021
058 - IoT Spotlight: 8 UI / UX Strategies for Designing Indispensable Monitoring Applications
On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications. Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis, analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Oftentimes, product or technical teams see the game as, “How do we display all the telemetry from the system in a way the user can understand?” The problem with this approach is that it is completely decoupled from the business objectives the customers likely have-and it is a recipe for a very hard-to-use application. The reality is that a successful application may require little to no human interaction at all-and that may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight. So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences. In total, I covered: Why it’s important to consider that a monitoring application’s user experience may happen across multiple screens, interfaces, departments or people. (2:32) Design considerations and benefits when building a forecasting or predictive application that allows customers to change parameters and explore “what-if” scenarios. (6:09) Designing for seasonality: What it means to have a monitoring application that understands and adapts to periodicity in the real world. (11:03) How the best user experiences for monitoring and maintenance applications using analytics seamlessly integrate people, processes and related technology. (16:03) The role of alerting and notifications in these systems … and where things can go wrong if they aren’t well designed from a UX perspective. (19:49) How to keep the customer (user’s) business top of mind within the application UX.
(23:19) One secret to making time-series charts in particular more powerful and valuable to users. (25:24) Some of the common features and use cases I see monitoring applications needing to support on out-of-the-box dashboards. (27:15) Quotes from Today’s Episode Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58) When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15) Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which this application exists. These applications learn and take into consideration new information as it comes in. (11:03) The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00) With tons of IoT telemetry comes a lot of discussion of stats and metrics that are visualized on charts and tables. But at the end of the day, your customer probably may not really care about the objects themselves. Ultimately, the devices being monitored are there to provide business value to your customer. Working backwards from the business value perspective helps guide solid UX design choices. – Brian (23:18) Links Referenced Transcript Hi, it’s Brian T. O’Neill here again, and I’m back with Experiencing Data and an episode featuring me again, [laugh]. This is going to be another solo episode. 
And I wanted to share my ideas around IoT. We haven’t talked about IoT too much on the show. But this is actually more about monitoring applications, and designing really useful tools for monitoring the health of systems-whether there are physical IoT objects or software objects, or whatever it may be. I thought I would give you some strategies that you could take back to your work, to the applications or products that you create in the space, to try to make them more valuable for customers. So, I’ve titled this episode, “IoT Spotlight: Eight UI and UX Strategies for Designing Indispensable Monitoring Applications.” So, what do I mean by that? I’m talking about, again, health monitoring, predictive maintenance, predictive utilization-you know, services that are intended to provide business continuity. So, any type of system where you’re monitoring activity and the goal is basically to keep a normal operating state and business state at all times. Also, tools that do things like root cause analysis, so this could be for cybersecurity or analyzing network traffic, this kind of thing. So, I’ve done a fair amount of work in this space, both for data center products-software tools used in the data center to monitor traffic and analytics on your I/O and all that-as well as, I mean, even stuff in the refrigeration space, looking at software and hardware systems used inside places like grocery stores and food transport, and looking at predictive maintenance in that space. So, I thought I would share some of the insights from my experience working with clients in this space that I think can be extrapolated for a lot of different industries. So, without further ado, I’ll jump into these eight items for you, and hopefully, you can take them back to your work. The first one is to consider that your data product’s user experience may happen across multiple applications, screens, interfaces, departments, people, et cetera.
Sometimes there’s a thing called CX, or sometimes it’s called service design; you might hear these words. I think part of this is really about understanding that the customer experience or the user experience may go beyond a single application and this is especially true if say, you have people whose job it is to monitor the health of your IoT devices, and so part of the time they’re online doing things such as looking at this application. Other times, they might be out servicing a physical piece of hardware or something like that. So, I want you to be thinking outside of the scope of just a specific software application that you might be doing the data science part on, or the predictive modeling, or whatever it may be, the user experience may need to look beyond that. So, if you work at a company that produces physical products that are data-enabled somehow, they have some type of telemetry connected to the web, or even a local network or something like that, and your goal is to have some type of predictive maintenance capability on the devices so that you can minimize business disruption to your customer when you think about the user experience here, and all the people that need to interact with the service, you may be designing something that involves the end-users of the devices themselves; the people who manage the hardware, or the devices, or the objects, or whatever it is that the system is monitoring; you have the managers of the people who are the main users of the software, again the monitoring software, et cetera. And the managers may be the purchasers if you’re working with commercial software, so they have a business interest in this even if they’re not the ones who perhaps do the daily monitoring stuff. You may have employees at your company who may service the devices, so technicians or something like that, they may need telemetry as well, or even a third-party service provider may be doing that type of work. 
You may have account managers and sales reps who want to understand-again, if this is a commercial offering, or something like that, or even if it’s not-those people may have an interest in knowing what’s going on at the client site and being kept in the loop about what’s happening with the accounts that they manage. And then you have, like, CSRs and support reps-the people who might field inbound calls when a customer doesn’t know how to read a chart, or doesn’t know what the telemetry is saying, or doesn’t think it’s collecting the data properly. So, the point here is that all of these different user types may have different interests and different needs, different tasks and activities that they do. So again, we need to think beyond a singular user here and understand that there are groups of people, potentially, working together, or maybe pockets or groups-like your employees and your team, and then the customer has their employees and their team, et cetera. So, be aware of that. Make sure you’re designing with all those people in mind, or I should say, the people who matter most. Not everyone necessarily needs a separate user experience, but you should be aware that there could be other people in the loop here with distinct needs. So that’s the first one: consider your data product across multiple applications, screens, interfaces, departments, and people. Be aware, the experience may go beyond the walls of your application sitting in front of you. Okay? Next one: when building forecasting or predictive applications, you want to allow customers to change the parameters, explore what-if scenarios, and really begin to understand the relationships that may be present, if possible.
I know with some forms of machine learning, it can be very difficult if there are a lot of different variables contributing to the way forecasts are made, but this can be really helpful, too, if you’re showing something to senior management: rolling up these recommendations from your machine learning efforts into a small little prototype or application that can be played with can be really powerful in communicating the value of the work that you’ve done. It probably goes without saying that XAI-explainable AI, or model interpretability-is going to be really important here, especially if the audience is perhaps not used to… they’re not used to seeing that Feature A or Variable A actually has a lot to do with the predictive power of the model, because no one’s ever known that before. So, I’m hearing more and more that model accuracy is second priority to interpretability of the model. Because if you don’t have that, you don’t have the trust. And if you don’t have the trust, then no one pays attention. If no one pays attention, then none of the data science part matters. So, most of you probably know that, but just keep it in mind. The second point here, talking again about forecasting and letting people play with the parameters: if you need a visual example of this, think of something like a retirement planning calculator or tool. These run forecasts of your portfolio under certain conditions-certain stock market conditions, your rates of savings, and all this kind of stuff. And while they may not use machine learning or whatever, the customers don’t care about that. They’re trying to set themselves up for retirement. So, the reason I use this is that it’s a good example that we can all probably relate to in terms of thinking about what are the variables that we allow the user to set. Savings rates? Which accounts should be included? Which accounts are being drawn down on versus fully allocated for retirement purposes?
What about a lump sum payment? Maybe you’re expecting an inheritance or something like this, and so you want to be able to plan for some of these variables in the forecasts that are given. So, you need to do the same kind of work to sit down and really understand. Your CEO or your executive team probably isn’t planning for retirement, but they are doing some type of work, and if you’re going to build a tool like this, you need to be aware of the variables that they want to potentially lock in or control for if they’re going to be playing around with your forecasts and that kind of thing. So that’s that one. And then the third sub-bullet on this is to try to help the customer visualize how far apart the map is from the territory. Territory is reality; a map is just a model of the territory. The map is not reality. So, when they’re using this application or tool, how far apart is the map-your forecasting application-from the territory, the actual reality? One way this might manifest itself in the design is, if there’s going to be a human decision made based on these tools, is there anything excluded in the modeling that may be worth explicitly telling the customer about? So, this is not just about showing the features that were used to model the predictions; it’s also about what was not used, and being clear about that if there is an expectation that it might have been included. A simple-and admittedly lame-example with forecasting something to do with money over time would be not factoring in inflation. So, if for some reason you were unable to factor in inflation-which probably would not be the case, but say it was technically very difficult-and it was excluded, you might know that, hey, this is actually a really important aspect if we’re going to be projecting something out over 20 years.
We need to go beyond putting that in a tiny, gray footnote of legal text at the bottom of the application-which is more about risk and compliance and nothing to do with user experience. We might want to surface that and make sure it’s very clear to the user, at the moment they’re making a decision, that this has not been factored in. The map is always incomplete, but it may be really missing something that they might just assume is there. So that’s also part of the design choices. It might just amount to a blob of text or a little alert or something like that, but that is also part of the user experience. So, number three: well-designed applications understand the real world. What do I mean by that? Well, I think they know about things like seasonality. They know what normalcy means in the world that this application is monitoring. They learn and take into consideration new information as it comes in. So, the goal here is not to just display the statistics from the hardware and the telemetry and expect users to come up with meaningful comparisons, and insights, and action items. Nobody wants to come and play with your metrics toilet. That’s not what they’re there for. So, how can you model what normal operation means? What is the definition of unusual operation? What is the definition of unusual operation that is actually worth flagging for the user’s attention in the actual interface-a warning, an alert, a cause for concern, an abnormality, stuff like this? Modeling these things out and understanding either the hardware itself or what the customer’s tolerance levels are for certain things can help us design a better experience, because the system has knowledge about the human element of this data. Knowing that 30 is really bad and 40 is terrible on a scale of 1 to 100-the software is dumb; 30 just looks like a 30 to it. It doesn’t know anything, so we want to teach it that, no, 30 is a really significant number.
And when this chart, or this metric, goes beyond that, that’s a really important thing to know. And I sometimes see teams kind of run from this because everyone’s different, and we don’t want to be telling them what it should be and all this kind of stuff, when in reality, the customers may already have an operating model in mind. They may have a sense of what normalcy is, and they’re already operating in the world without your system. They already monitor and manage the world in that way. That is their mental model of things. And so you may want to go with that, or at least begin with that. And if you need to adjust their expectations because they’re actually not looking at it the right way, you may need to go with it a little bit, and then adjust it over time. And that gets to my other point, which is, again, ideally, these ranges need to adjust to seasonality. And that could be literal seasonality, like the weather or something outside, or business seasonality-there’s more use of the system during the summer, or during accounting season or whatever; we expect to see a lot more activity in these systems, and we want the system to be smart enough to understand that so it doesn’t throw out false flags to us about what’s going on. And finally, why do these qualitative ranges for the numbers matter? Well, understanding the real world-the ranges and the numbers that the human users of these tools are already keeping in their heads-helps us draw better charts. And I’m talking particularly about things like time-series charts. So, what do I mean? Well, a simple example would be: how do you dynamically determine how to render the Y-axis minimum and maximum values when you’re dealing with dynamic data? Yes, you could just say, well, we’ll take the min and the max, and we’ll pad it by 20 percent, and then those will become the values that we print on every single chart.
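As a rough sketch of that trade-off (the function and parameter names here are my own, not from any particular charting library): padding the observed min and max works until the data collapses into a tiny range, at which point pinning the axis to a known qualitative operating range tells a more honest story.

```python
def y_axis_bounds(values, qual_range=None, pad=0.20):
    """Pick (min, max) for a time-series chart's Y-axis.

    Naive approach: pad the observed min/max by 20 percent.
    If a known qualitative operating range is supplied (say,
    (0, 100) for a metric that normally runs 30-100), pin the
    axis to that range instead, so a near-dead series renders
    as a near-flat line rather than zoomed-in visual noise.
    """
    if qual_range is not None:
        return qual_range  # pin to the real-world range
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid a zero-height axis
    return (lo - span * pad, hi + span * pad)

# A barely-alive device emitting tiny values between 0 and 1:
telemetry = [0.2, 0.9, 0.1, 0.7]
naive = y_axis_bounds(telemetry)                        # zooms way in
pinned = y_axis_bounds(telemetry, qual_range=(0, 100))  # flatline story
```

With the naive bounds, the chart zigzags dramatically over a span of less than one unit; with the pinned range, the same data reads as the flatline it really is.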
Well, the problem with that is, let’s say that normal use is between 30 and 100. On a scale of 0 to 100, the typical range would be 30 to 100. Well, for some reason, something’s not working, and so you end up printing a range of 0 to 1. So literally, 1 percent of the total range is now what the plot covers. The chart itself takes up the same amount of visual space, but the bottom of the Y-axis is 0 and the top is 1, and you’ve got this wild plot zigzagging all over the place. And in reality, there’s no story. It’s like, go home, nothing to see here. But your chart visually suggests that there’s all kinds of activity going on when in reality there’s just nothing to see, because the chart was rendered dynamically and did not consider the real world. So, you might want to pin those charts to a 0 to 100 range, or last month’s average, or whatever it is, but the point is, simply taking the min-max values and using those may not be the best choice, and you may end up creating noise in the interface. So that’s part of the reason why these what I call qualitative ranges really matter and actually help us render a better story. In that case, I might just want to see basically a flatline-a chart that’s empty. Why? The thing is offline. There’s nothing to see there. Maybe it’s got a pulse, but it’s basically dead. I don’t want to see a bunch of visual noise that suggests, hey, why is this thing going crazy, when it’s not going crazy. Right? Okay. Number four: when dealing with things like service and maintenance, the best user experiences around these devices integrate the people, the processes, and the technology. So, a simple example here might be a system that monitors the health of a bunch of similar objects, all of which are actually interconnected, and they all actually have some kind of impact on each other. So, what happens if one of these objects is taken out of service?
This may have implications for your customer’s business, as well as for the actual technical system and the way the monitoring application reacts. So, if one device is taken out of service, what happens to the other devices, and how does the software react to a situation like that? Does it understand the difference between being out of service and being broken or disconnected? Can a customer go in and say, “I’m actually removing this from service,” instead of the telemetry just disappearing all of a sudden and the system starting to generate alerts and notifications-“Oh, my God, something’s wrong”-when in reality, it’s a planned outage? So, this means we have to factor in how alerting and notification will work when the environment changes. And the last sub-bullet on this is: how do multiple users on the customer side coordinate a change to the system? A good example here would be something like integration with a ticketing system. So, widget number 1A, out of the 100 widgets that are out there, is being taken out of service. Well, since there are actually five technicians who manage these 100 widgets, maybe you integrate with that ticketing system or something, so that when User B logs in, they can see that User A, already aware of the situation with this object, has taken it out of service; there’s some kind of log or record there. And again, this gets into some product scope questions: what’s the real value we provide? Is this becoming a ticketing system? It’s so much easier now to integrate software and tools and share data with APIs and such that I think it’s really worth exploring that, or at least creating bridges. I’ve talked on this show about avoiding creating these islands with no bridges and no transportation. You don’t want to create an island where you have this kick-ass experience or vacation hub, but no one ever thought to build a port, or a bridge, or an airport.
There’s no way to get to it. So, if you can figure out a way to, say, link off to a ticketing system that logically links to this particular item that’s being taken out of service, you’re really helping out with the overall user experience. And you’re not letting the walls of your software application necessarily define the real-world user experience since, again, the real world is different. And the real world spans people, technology, application software, all those kinds of things. All right, number five: the greatest IoT UI and UX may be one where you never have to use the service to begin with. What do I mean here? I mean some services may deliver their best value when you just don’t have to log in almost ever, unless you’re curious, and you just get really helpful, powerful alerts and notifications at the right time, with the right amount of information, with some kind of actionable next steps in them. This really gets to the point: don’t underestimate old technologies like email and messaging-they’re critical, not some kind of feature add-on. Sometimes I see this as, “Well, that’s a feature we’re going to add to the product,” instead of, “Actually, no-we can’t treat email as separate just because it’s technically a different feature and not part of the application user interface. It’s inherently part of the product, the product strategy, and the user experience. We should not be looking at it as a bolt-on feature; it could be integral.” So, notification strategy: really, really important. You may end up building something where the dashboards rarely even get looked at, because a customer gets an event notification that links to an event detail screen or something like that, and they don’t ever really look at the dashboard because they manage the whole thing through their inbox, or Slack, or whatever the heck it may be. If there aren’t many of these events, that might be a totally fine thing.
Maybe they pop by the dashboard on their way out to make sure there’s nothing else that they missed, but that could be a great experience right there. So don’t assume that everyone’s going to come through the front door: they may be knocking on your back door, they may come in from the pool house. You’ve got to think about these different entry points when you’re designing this. So, make notifications really clear, and I usually recommend that they have some type of supporting evidence in them. You don’t want to send out a little scare bomb like, “System offline!” or, “Object whatever offline!” It’s like, “Why is it offline and what do I need to do about it?” Maybe you could pack a little more information density into that email-“Taken offline by User X,” or, “It’s normally offline at this time”-some kind of supporting data evidence to provide context, so that the user knows whether they need to get off the couch and deal with this right now, or whether it’s something they could maybe leave for tomorrow. The final thing I’m going to close out on with notifications is that you’ve got to watch out for alert noise. Consider batching alerts and notifications so that you’re not piling them on. This is especially true when the objects being monitored are interconnected: you can end up creating a lot of noise for the customer, and now they just have a different problem, which is, which alerts should I pay attention to? And at some point, they just start to ignore it. I’ve literally heard users tell me this: “I just wait for the phone to ring because there are so many alerts and devices sending me telemetry and crap all the time. I wait until someone calls me and they’re mad, because I can’t possibly keep up with all this.” Do not contribute to that noise. I would say err on the side of less notification.
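One simple way to sketch the batching idea (names and window size are illustrative; a real system would deliver these digests by email, Slack, or whatever channel fits): collect raw alerts into time windows and collapse repeats per device, so fifty messages from one flapping device become one line of context.

```python
from collections import defaultdict

def batch_alerts(alerts, window_secs=300):
    """Group raw (timestamp, device_id, message) alerts into one
    digest per time window, with repeats collapsed per device,
    instead of firing a separate notification for every event."""
    windows = defaultdict(lambda: defaultdict(list))
    for ts, device, msg in alerts:
        windows[int(ts // window_secs)][device].append(msg)
    digests = []
    for w in sorted(windows):
        lines = [f"{dev}: {len(msgs)} alert(s), latest: {msgs[-1]}"
                 for dev, msgs in sorted(windows[w].items())]
        digests.append("; ".join(lines))
    return digests

events = [(0, "pump-1", "offline"), (10, "pump-1", "still offline"),
          (20, "pump-2", "temp high"), (400, "pump-1", "back online")]
digests = batch_alerts(events)  # two digests instead of four pings
```

Four raw events collapse into two digest messages here; tuning the window length is the knob that trades timeliness against noise.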
Don’t go crazy with too many preferences and parameters, but if you only send out stuff when it’s really important, people will pay attention to that. If your system is designed by default to generate a ton of noise out of the box, they’re just going to tune you out, and you’ve kind of lost the battle there. Okay, the next one here: with tons of IoT telemetry comes a lot of discussion of stats and metrics and letting people look at everything on charts and tables. But at the end of the day, your customer probably doesn’t really care about the objects or the hardware or the things that are being maintained. The direct end-user might, because their job might simply be to maintain all this equipment and make sure that it’s running correctly and all that. But ultimately, the devices and the things being monitored are probably there to provide some type of business outcome or value to your customer, to your user. So, this idea of business continuity and disruption-or whatever it is that they’re really interested in-is about the downstream value provided by the system. Let’s say it’s thermostats controlling buildings. They really care about their cost spend, energy use, and all of that. They don’t really care about the thermostats themselves and the fans and the AC equipment. They care about the spend, their green footprint, whether they’re wasting money, and all this kind of stuff. That’s really what it’s about. And that should be part of the user experience; it should be part of the strategy and the way we think about it. So, focusing on this business value is not just good for your user-it can be good for you. And this is really true for commercial software people. If you’re running a commercial software product in this space, a simple example: can you quantify the business impact that the system is having? How many dollars and cents are you saving every day, or how many incidents did you prevent from happening?
And is there a way to quantify what the business impact may have been from those? Maybe there’s a little part of the dashboard that just sits quietly in the corner, but it’s keeping track of this stuff, and it’s gently reminding the customer, what is the value I get from this? Oh, yeah-this thing is literally helping save me money. It’s saving on labor, it’s keeping business continuity going, and all of this. So, make sure you don’t get too lost in just the objects and the data; also be thinking about the business impact. Number seven-two more to go here. If your system enables troubleshooting, do not underestimate the value of letting customers enter custom events or text descriptions, particularly on time-based charts. What do I mean by that? Again, map and territory aren’t the same thing. Territory is reality; the map is your application, and it’s not going to have the entire real world in it. A simple example of this might be: we relocated equipment, so we had to disconnect it, reattach it to the network somehow, and now it’s back. Well, it might be good to let the user actually type in an event and say, “Equipment moved on January 7, 2021. We moved this equipment.” Then, seeing this on the time-series charts, when we see an abnormal drop in the telemetry that would normally be coming in, we have some context for what happened in the real world. So, it’s a super basic feature from an engineering standpoint. It doesn’t really have anything to do with the data science or the predictive capability or any of that, but it could be very powerful in the user experience, because we now have some additional real-world context built into the software experience.
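A minimal sketch of what such a user-entered annotation could look like in code (the class and function names are hypothetical, not from the episode; the "Equipment moved" date is the one from the example above):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChartAnnotation:
    """A user-entered real-world event pinned to the time axis,
    e.g. 'Equipment moved' on 2021-01-07, so that an abnormal
    drop in telemetry around that date carries its own context."""
    when: date
    label: str
    entered_by: str

def annotations_in_range(annotations, start, end):
    """Return the annotations inside a chart's visible date range,
    ready to render as markers along the x-axis."""
    return [a for a in annotations if start <= a.when <= end]

notes = [
    ChartAnnotation(date(2021, 1, 7), "Equipment moved", "tech-a"),
    ChartAnnotation(date(2021, 3, 2), "Reconnected to network", "tech-b"),
]
visible = annotations_in_range(notes, date(2021, 1, 1), date(2021, 1, 31))
# only the January 7 "Equipment moved" note falls in this window
```

Rendering each returned annotation as a labeled marker on the chart is all it takes to put that real-world context next to the telemetry drop.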
So, I imagine this could be taken further, where maybe you provide a dropdown-what kind of event is it? An out-of-service event, a fault event, or whatever-and you let the user manually include these, and then maybe the models actually, logically incorporate something about that data in the way they react. That’s possible, too, but I would say don’t get too carried away initially; just consider the value of letting people add notes of their own, and especially dates, when we’re talking about time-series charts and things like that, since a lot of times with IoT stuff, we’re looking at time-series data. And then the last one here: primary dashboards and these types of tools probably need to focus on doing a handful of things very well. So, what are those things? First, you probably need a small snapshot of the landscape-the overall health of the entire system, potentially with some meaningful comparisons. This gets back to, “As compared to what?” Whenever we show data, or KPIs, or things like this, typically speaking, we want to have some type of comparison. So, whether that’s comparing it over time, comparing it to siblings, comparing it to a target-you know, maybe there’s a service level agreement and there’s a way to quantify that-whatever it is, the point is, usually you’re going to need some comparison there. So, first one, again: a small snapshot of the landscape. This is probably not the major thing, though. The major thing, I would say, on most of these dashboards is going to be action items. What needs my attention right now? Ideally, probably ranked by business importance, but this could also be ranked by things like ownership-maybe I only manage odd-numbered devices, or things that are in California, but I don’t touch the ones in Massachusetts. Whatever it may be, there are different ways to think about that.
But this is especially true if there’s a large environment being monitored: be really clear about how you rank these things in a smart way, so that the users really know, what should I pay attention to right now? So, action items, again, could be addressing broken or suspicious activity, or accepting a recommendation which actually then changes the environment. And if you run with that model further, this could be something like, “We recommend that you auto-power down these devices between this time period. Would you like us to do that?” And you click the button, “Yes.” Well, you would want your models to also be smart enough to adjust to that parameter and say, okay, these devices will now be off at this time; we need to make sure the models don’t flag that as some kind of an unusual event. So, constant learning here, dynamic environments-be thinking about those things. Again, why does that matter to the user? To avoid noise. That’s what a lot of it comes back to. Okay. And then another thing here would be, again-we talked about this business value-seeing if there’s a way to roll up the business value that the customer has been getting from the system. I wouldn’t go overboard with this, but salespeople will love it. [laugh]. The managers may also love it, but I have literally seen it where an end-user who’s not the buyer of a piece of software, but is the person that lives with it every day, wants to be able to prove its worth to their boss-who may be the fiscal sponsor, the one that makes the purchasing decisions. They may want to be able to advocate for it and say, “This is how much this is really helping me do my job. It’s saving us a ton of money. We should definitely renew our contract,” or whatever. There can be a lot of value if you can actually show that in there.
And they may want to actually go in and look at a detailed report about the business value, so there could be an opportunity there as well. And then the final thing on these dashboards that I would be thinking about is not to underestimate the importance of letting users have something akin to a watch list. This is something like, “I just did maintenance on these objects; they should be working fine, but I still just want to keep an eye on them. For the next week or so, I want to make sure that they don’t go offline; I just want to feel good about it.” This is one of those great squishy design things. It’s not a complicated engineering feature. It doesn’t require any special modeling. It’s a rather simple thing to build in if we really get to know our customers. And of course, all these decisions should be informed by research that you’re doing with your users, because they may not care about some of this stuff. I am giving you generalizations. But you might find out that it’s just simple: I just want to be able to pin these items here, and I want to make sure that they’re always right on the dashboard when I log in, at least for a while. So don’t make them dig for that kind of stuff if it’s not necessary. Okay? And that’s it. Those are the eight strategies for IoT monitoring applications. I hope they were useful. If you want some further reading on this topic, I’ve got a couple of links to share with you. I have a free “Designing for Analytics Self Assessment Guide.” It’s not specific to IoT applications and cloud data products in that kind of health monitoring space, but there are some things in there that might be helpful, so you can get that at designingforanalytics.com/theguide-one word. I also have what I call my “CED UX Framework” for designing advanced analytics applications.
This really gets into how I like to think about presenting insights from predictions or even traditional analytics: how do we figure out how much data and evidence to show? When do we show it? How do we enable drill-downs? All these kinds of design decisions. If that sounds like it would be helpful to you, you can go read about this framework. The CED stands for conclusions, evidence, and data, and it’ll go more into what that means. That is available at designingforanalytics.com/CED-just the letters CED. And then finally, if you need personalized help, two quick things: I do run my public seminar twice a year. It’s called “Designing Human-Centered Data Products.” At any time, you can just head over to designingforanalytics.com/theseminar and get on the early access list there, and I’ll shoot you an email when registration opens. And the final thing here is, if you already have an IoT product or analytics application and you’re finding that it’s really complicated for users-they’re not seeing the value in it that you think is there, maybe it’s hard to sell, it’s just not getting the adoption you want-you can hire me to come in and do an audit and actually assess what’s going on with the design and come up with a remediation plan. So, this would be a design remediation plan: what features need to change? How would the user experience change? Dashboard, visualizations, all that kind of stuff. It only takes about a week or two to get done, and then oftentimes my clients can run with their own team and implement the recommendations themselves, or they can hire a contractor, or we can keep working together, or whatever.
But my goal there is to really rapidly turn around some changes for you, so that you know how to make this design better beyond just lightweight aesthetic changes that really don’t move the needle in terms of having the business impact that we want from these types of IoT cloud applications and things like that. So that’s guaranteed, and again, it only takes a couple of weeks. If you’re interested in that, the link for that service is designingforanalytics.com/theaudit-just T-H-E-A-U-D-I-T. Okay, so again, this is Brian O’Neill, Experiencing Data. Thanks for hanging out with me. Stay safe and healthy, and if you have a question for me, a suggestion for the podcast, or a comment, you can always leave an audio question right through your browser. Just head over to the podcast homepage, designingforanalytics.com/podcast. I’m always interested in hearing from you and trying to make these episodes as useful as possible. So, until next time, see you soon.
57 minutes | Jan 26, 2021
057 - How to Design Successful Enterprise Data Products When You Have Multiple User Types to Satisfy
Designing a data product from the ground up is a daunting task, and it is complicated further when you have several different user types who all have different expectations for the service. Whether an application offers a wealth of traditional historical analytics or leverages predictive capabilities using machine learning, for example, you may find that different users have different expectations. As a leader, you may be forced to make choices about how and what data you’ll present, and how you will allow these different user types to interact with it. These choices can be difficult when domain knowledge, time availability, job responsibility, and a need for control vary greatly across these personas. So what should you do? To answer that, today I’m going solo on Experiencing Data to highlight some strategies I think about when designing multi-user enterprise data products so that in the end, something truly innovative, useful, and valuable emerges. In total, I covered: Why UX research is imperative and the types of research I think are important (4:43) The importance for teams to have a single understanding of how a product’s success will be measured before it is built and launched (and how research helps clarify this). (8:28) The pros and cons of using the design tool called “personas” to help guide design decision-making for multiple different user types. (19:44) The idea of ‘minimum valuable product’ and how you balance this with multiple user types (24:26) The strategy I use to reduce complexity and find opportunities to solve multiple users’ needs with a single solution (29:26) Declaratory vs. exploratory analytics and why the distinction matters. (32:48) My take on offering customization as a means to satisfy multiple customer types. (35:15) Expectations leaders should have-particularly if you do not have trained product designers or UX professionals on your team.
(43:56) Resources and Links My training seminar, Designing Human-Centered Data Products: http://designingforanalytics.com/theseminar Designing for Analytics Self-Assessment Guide: http://designingforanalytics.com/guide (Book) The User Is Always Right: A Practical Guide to Creating and Using Personas for the Web by Steve Mulder https://www.amazon.com/User-Always-Right-Practical-Creating/dp/0321434536 My C-E-D Design Framework for Integrating Advanced Analytics into Decision Support Software: https://designingforanalytics.com/resources/c-e-d-ux-framework-for-advanced-analytics/ Homepage for all of my free resources on designing innovative machine learning and analytics solutions: designingforanalytics.com/resources
40 minutes | Jan 12, 2021
056-How Design Helps Drive Adoption of Data Products Used for Social Work with Chief Data Officer Dr. Besa Bauta of MercyFirst
40 minutes | Dec 29, 2020
055 - What Can Carol Smith’s Ethical AI Work at the DoD Teach Us About Designing Human-Machine Experiences?
It’s not just science fiction: As AI becomes more complex and prevalent, so do the ethical implications of this new technology. But don’t just take it from me – take it from Carol Smith, a leading voice in the field of UX and AI. Carol is a senior research scientist in human-machine interaction at Carnegie Mellon University’s Emerging Tech Center, a division of the school’s Software Engineering Institute. Formerly a senior researcher for Uber’s self-driving vehicle experience, Carol-who also works as an adjunct professor at the university’s Human-Computer Interaction Institute-does research on Ethical AI in her work with the US Department of Defense. Throughout her 20 years in the UX field, Carol has studied how focusing on ethics can improve user experience with AI. On today’s episode, Carol and I talked about exactly that: the intersection of user experience and artificial intelligence, what Carol’s work with the DoD has taught her, and why design matters when using machine learning and automation. Better yet, Carol gives us some specific, actionable guidance and her four principles for designing ethical AI systems.
In total, we covered: “Human-machine teaming”: what Carol learned while researching how passengers would interact with autonomous cars at Uber (2:17) Why Carol focuses on the ethical implications of the user experience research she is doing (4:20) Why designing for AI is both a new endeavor and an extension of existing human-centered design principles (6:24) How knowing a user’s information needs can drive immense value in AI products (9:14) Carol explains how teams can improve their AI product by considering ethics (11:45) “Thinking through the worst-case scenarios”: Why ethics matters in AI development (14:35) and methods to include ethics early in the process (17:11) The intersection between soldiers and artificial intelligence (19:34) Making AI flexible to human oddities and complexities (25:11) How exactly diverse teams help us design better AI solutions (29:00) Carol’s four principles of designing ethical AI systems and “abusability testing” (32:01) Quotes from Today’s Episode “The craft of design-particularly for #analytics and #AI solutions-is figuring out who this customer is-your user-and exactly what amount of evidence do they need, and at what time do they need it, and the format they need it in.” – Brian “From a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful … And then beyond that, as you start to think about ethics, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through – what is the worst thing that could happen with the system?” – Carol “[For AI, I recommend] ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario because it really helps you to think about the people who could be the most impacted.
And particularly people who are marginalized in society, we really want to be careful that we’re not adding to the already bad situations that they’re already facing.” – Carol, on ways to think about the ethical implications of an AI system “I think people need to be more open to doing slightly slower work […] the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things. We can make them better, slightly slower.” – Carol “The four principles of designing ethical AI systems are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide.” – Carol “Keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work for a lot of people. They’re just not used to having these types of ethical conversations, but it’s really important that we become more comfortable with them, and keep asking those questions. Because if we’re not asking the questions, no one else may ask them.” – Carol Links Designing Ethical AI Experiences (Agreement and Checklist) (PDF) Transcript Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill. Today we’re going to talk about design user experience, and particularly in the context of artificial intelligence. And I have Carol Smith on the line today. Carol is formerly of Uber’s self-driving car unit, and she is now a senior research scientist in human-machine interaction and adjunct professor in the HCI Institute at Carnegie Mellon University. So, Carol, that’s a lot of words in your title. Tell us exactly what all that means, what you’re up to today, and why does UX matter for AI? Carol: Yeah, thank you for having me. 
So, the work I do really crosses between the people and the problems we’re trying to solve and these technologies that are emerging, and new, and complex, and creating new and interesting problems that we need to pay attention to and try to solve. None of this is necessarily new, but because of the ways that AI can be extended beyond where typical software programs can be extended, it becomes even more important. It’s just more and more people are affected by the decisions that we make with these systems. And we need to be more aware of the effects of the systems on people.

Brian: Yeah, I mean, this is actually one of the reasons I don’t use ‘UX’ too much in the language when I talk about this, because now we have third parties who are not stakeholders, and they’re not users, that are also impacted. And so the human-centered design framework, to me, is a better positioning of the work that we all need to be doing, I think, if we care about the systems in the world that we want to live in, and all this jazz. So, the other area of focus that I thought was really interesting here is you’re doing a lot of work in ethics, and you’re also doing work in warfighting with artificial intelligence. And I love this yin and yang here. And so I want to dive into some of that as well. But maybe you can kind of start us out with the work that you were doing at Uber, and I’m assuming that seeded some things in the work that you’re doing today at Carnegie Mellon and with the government clients. So, tell me about that kind of process, and what did you take away from Uber, and all that?

Carol: Yeah, yeah. So, I joined Uber with the expectation that self-driving vehicles were right around the corner. I was somewhat ignorant, going in, of the status of that technology.
The vehicles driving around my neighborhood seemed to be doing it themselves, and seemed to be maturing quite quickly from the early prototypes that we saw to the more formalized frameworks that we were seeing, literally driving right down the street. So, I assumed it was near, and coming, and really was hoping my kids would be able to use self-driving cars instead of learning to drive. But that is not going to happen, unfortunately. And so that was back in 2017. And when I joined, I was doing some really interesting work on potential passengers, and what would their expectations be and how do you gain trust? As well as the operators, and dealing with the information that they were getting from the system, and how would they understand what the system’s needs were, and what the next movement would be of the vehicle, and just having that relationship, the human-machine teaming there. And then also the actual developers, working on that developer experience. People who are actually making these systems work, they needed information about the systems in easy-to-understand formats at the appropriate level. And so trying to keep the complexity there, but also make sure that the meaning was available to them, and that they had this high-level view. So, those were some of the problems I was looking at there. And it was just really interesting. I’ve always enjoyed working with machines and people. When I was at Goodyear Tire, we worked with these massive vehicles that did mining. The tires that we were mostly concerned with were 15 feet high, and the people were 30 feet up in these vehicles, and so just understanding how they were managing these systems, how they understood what was going on, that was very important then, as well. And then the ethics has actually been something that has kind of flowed throughout my career.
The work that I do has always involved consent, and making sure that the people that I was involving in the research understood what I was doing, and why, and that they could stop the work at any time, and that sort of thing. So, a lot of dealing with just the ethical implications of the work I was doing has always been part of that. And then as well worrying about, am I doing work that is actually helpful to people? Am I doing work that can make society a little better, even if it is just a financial application or something like that? Still, ideally, I want to be doing something that’s somewhat meaningful. And so the opportunity at Carnegie Mellon was an opportunity to work, as you mentioned, to help to keep our warfighters safe, and that intersection between soldiers and artificial intelligence. And many of them don’t have a background in computer science or in artificial intelligence, or don’t necessarily understand the systems. And so helping to break down what the system is doing for them, and helping them to be able to have the proper control. Thankfully, our Department of Defense does have a set of ethical principles. And they are very strongly pushing those for artificial intelligence, so maintaining the control with the humans, and making sure that these systems are within those sets of ethics. And so how do you implement that for a soldier? And make it relatively easy to use: these are very complex systems, but thinking through that process is the work that I do.

Brian: Mm-hm. And do you self-identify as a designer as well, or is your work primarily just in the research, and feeding that into the makers, so to speak, the teams that are going to produce?

Carol: Yeah, I’m definitely not a designer, in a visual sense anyway. I do wireframing, and prototyping, and things like that, early ideation and early troubleshooting mostly. And then ideally, I partner with someone.
Alex, on our team, is a wonderful designer, so I partner with her frequently. Working with people who have that skill and are able to really bring things to life is really important as well; it’s just outside of my skill set.

Brian: Got it, got it. One thing I wanted to talk about, I think maybe you touched on this, and this is something that I’ve given a fair amount of thought to: I feel like there’s a lot of young energy in the design community, and the audience on this show is primarily people coming from the data science, product management, analytics background. The young energy is treating a lot of this designing for AI and machine learning as something entirely new. I see this as an extension of what we do with human-centered design with anything. At the end of the day, we’re still building software, and we talk about data products and machine learning and all this, and it’s like, “Wait a second. There’s still just a software application. Maybe there’s a hardware component. Let’s not turn it too much into something different,” because to me, a lot of the core material, the core activities still need to happen. It’s just, it’s almost like a bigger problem now. You have probabilistic outcomes, so many more use cases to test for, learning systems, and all of this. Do you agree with that, or do you think, no, we need a fundamental shift in mindset about how we approach all of this?

Carol: Yeah, so I think it’s both. It is definitely building on what we’ve always done as far as best practices in design, as far as user experience, clarity, and understanding of the next steps, and things like that. Like the basic heuristics that we’ve always learned. Those are ever so important, and even more so now, because when you’re working with a more complex system, you need to know even more. So, that hasn’t changed.
But what has changed, and what I see to be more of a challenge for people coming into this type of work, is the complexity of the systems, and understanding dynamic systems, and systems where the data and the information that’s presented may change over time and may change relatively quickly, and the interaction needs to adjust as well. And so if you’ve mostly worked on websites and relatively simple applications, it’s a huge change to go from those types of interactions to these. So, that’s where I see the biggest challenge. And then with autonomy, that is new work. And that is something that there just isn’t a lot of visual design effort around. Most of the interfaces for people who need to understand what the autonomous vehicle is doing are either completely simplified, to the point that there’s really almost no information gained through them; that’s usually the consumer view, so if you look at some of the transportation vehicles that are used in parking lots and things like that, and even Tesla’s interactions, they’re very scaled down. They’re very basic. It’s your speed and direction and a little bit of information beyond that, and that’s about it. Whereas the expert using a system where they really need to understand much more detail, and they need more information about what’s in the scene, and what’s around them, there’s a lot of work to be done in that area to provide an expert view that is consumable visually.

Brian: Sure, sure. And I think our audience is probably always riding this line, because as people producing decision and intelligence products, and things like this, it’s always a question of how much evidence do you throw at people? How many conclusions do you draw? Model interpretability is a big question. I’m seeing a lot more activity around, you know, “Any amount of interpretability is always better.
More of this all the time.” And I’m kind of like, “Well, wait until that becomes a problem, because you can also just overwhelm people as well.” And this is the craft of design: figuring out who this customer is (your user) and exactly what amount of evidence do they need, and at what time do they need it, and the format they need it in. I don’t know if you can just build a recipe around that, except to say, “If you’re not doing research, you can’t find the answer.” Like, [laugh] would you agree with that? It’s not a-

Carol: Definitely.

Brian: It’s not a checklist item, you know?

Carol: Yeah, yeah. And there’s no one right answer for these systems. It’s got to be for the context that the person is in, and it needs to be for that individual. So, a technician who is trying to troubleshoot a system is going to need very different information than the person who’s just using the system, just for an example. And I always think of the Bloomberg Terminal and how much data is there, and if you’re not familiar with those kinds of financial pieces of data, it just looks like a huge mess. [laugh]. “Why would anyone do that?” is usually the reaction. And yet, the financial analysts and the others who use and appreciate that find it to be enormously helpful and very meaningful information, and the right information at the right time. And so, there’s that difference. You also find this in any complex system: airplanes, and things like that, there’s a huge amount of information that needs to be given to an expert, and they need that information, but they need the right information at the right time, and so figuring out that balance is only something you can get from working with those individuals and understanding their work deeply.
Brian: Yeah, you hammered on something I talk about a lot, and this is that context, and we see sometimes it’s, “Let’s copy this template,” or, “Let’s copy this design from someone else.” And it’s like, well, unless you’re copying all the research and the customer base, you have no idea whether or not this template makes sense, and the Bloomberg Terminal is a great example. So, especially with enterprise, I think you have to take it with a grain of salt. When I go do an audit, I’m very careful about making assumptions about stuff that looks really bad on the surface. Not to mention the disruption you can cause by going in and changing things when you don’t understand the legacy of why they’re there, and all this kind of stuff. So, you make some really good points there about the right amount of volume, and information, and all of this. So, if you don’t have designers on your team-there are still a lot of data science groups and analytics groups that are now being tasked with coming up with machine learning and AI strategies, and that’s a different kind of work, especially when the work is not well-defined upfront by the business. Now, we’re into innovation, and discovery, and problem framing, and all this other kind of stuff. What are some things for someone who doesn’t have designers on staff, but they know, “I want to build better stuff, because people aren’t using our stuff a lot of the time. We want to get more adoption, we want to drive more value”? What are some of the activities that quote “anybody” can start doing to get better at this? What would you recommend?

Carol: Yeah, yeah. So, certainly from a user experience, or human-centered design aspect, just trying to learn as much as you can about the individuals who are going to use the system is really helpful. So, even looking on LinkedIn, and websites that they might frequent, and just trying to glean as much information about those individuals as you can.
Minimally, understanding the terminology that’s appropriate is extremely important. And then beyond that, as you start to think about, like, ethics and things like that, there are a lot of activities you can do, just speculation activities that you can do on the couch, so to speak, and think through, what is the worst thing that could happen with the system? Something I’ve been working on is a [00:12:59 checklist], which we can share with your audience, to help people just kind of frame those conversations that they need to have with their team and start to think through just the implications of their work. How are we controlling the system? When are we passing control from the human to the machine and vice versa? How are we going to represent the data and the source of the data? How are we going to show individuals the biases that are inherent in the data, and how are we going to convey that in a way that shows them the strengths of the system, as well as the weaknesses, or the limitations, of the system? So, just really having those conversations can help you to begin to understand better how to do that work. And then there are lots of resources online, certainly for user experience and human-centered design research type activities, that people can start doing. It’s one of those things that you only get better at by doing it a lot. And so it’s tough, if you’re just doing it once in a while, to be skillful. Much like if you put me in front of a terminal and asked me to start coding, it would really not go well. [laugh].

Brian: [laugh]. I understand. You make good points there. I do want to drive into the ethics material, because you’ve published a fair amount of stuff. I did see the checklist and I’ve already pre-linked it up in my [00:14:15 show notes] because I think it’s great. Is part of that exercise-and I feel like it should be; I don’t know if it is, because I didn’t have this question in my head when I read it-should we be caring about ethics?
And I know there are business leaders probably saying, like, “Okay. It’s, like, another tax on my project, and we need to kind of check the ethics box.” They’re not really excited about it, like maybe the way a UX person would be, because we’re driven by empathy and all this other stuff. They want to do the right thing, but they also are going to say we don’t have time to, like, blow up the whole world and spend tons of time on this. Is part of the work figuring out first whether there is a potential ethics issue, and then if you say, “Oh, there’s not,” you pass the level one checks, you really can just proceed, because this is, like, some internal operations thing that’s shuffling paperwork around using machine learning, and it’s not really going to affect anybody? Is that part of it? Is it figuring out if there actually is an ethics question, and then proceeding with a level two diagnostic if there is? How do you frame that? Is that the wrong framing?

Carol: Yeah, no. I think that’s a reasonable way to go. It’s just initially just really being exhaustive about it, to some extent, really thinking through the worst-case scenarios. So, particularly if you know that there’s going to be private identifiable information about individuals, or, you know, if you know that you’re dealing already with a particularly risky area. So, in the US, like, with mortgages, there’s been a lot of work where people have tried to create these systems, and because of the inherent bias in the data that they’re trying to use to build the system, that data just carries on through into the system. And so that would be a situation where clearly there are already issues; we know there are existing issues in the human system. Thinking that the AI system is going to get rid of those is nonsense, really. It just can’t, it can’t take away those types of issues.
So, I think the first thing is, yeah, looking just subjectively: is this an area where we need to be on high alert, or is this more of a situation where we just need to make sure that we’re building really good software, and that we’re not leaving open doors where someone could easily hack into the system, and that sort of thing. But then, when you do know that you’re dealing with the public in any way, or dealing with particularly those higher-risk areas, then there’s a lot more work that needs to be done to both protect the data, protect the people, and also to do mitigation planning and communication type work, just to think through ahead of time, how are you going to shut off the system if it comes to that? How are you going to-

Brian: You can’t turn it off.

Carol: Right.

Brian: [laugh].

Carol: Right. That’s every sci-fi movie, right?

Brian: Right, I know. [laugh].

Carol: Ahh. You didn’t plan for that? How can you not plan for that?

Brian: Yeah. There’s no off switch. It wasn’t a requirement.

Carol: Right. Right. Right. Really? Like, yeah, just, I think doing good work is the key to preventing a lot of this. My kids were watching Cloudy With a Chance of Meatballs the other day, and he built this machine and launches it, and it storms food items, and people are getting hurt by food items, and turning it off is unfortunately not an option.

Brian: Right. [laugh].

Carol: Yeah. [laugh].

Brian: Something that I can see from the engineers who I know and love, from all my clients in the past-super smart people, but sometimes it’s like, “Well, we can add that in later. We can provide a method to give feedback, so now it’s learning from the feedback that it gets.” These are, like, features that get added on. That’s maybe a downside of the traditional software engineering approach, or is it? I’m sure with some of this, if you’re training a model on a bad data set, and there’s no discussion about that, then the nuts and bolts of the whole system already have a problem, right?
Or at least if it’s not explained to the user that that’s what it has been trained on. But do you see some of this as improvements that are added along to an MVP or a first version? Or is it, no, you just don’t ship any of this without some minimum level of all of these different kinds of special requirements for AI solutions? Like, how do you think about that?

Carol: I think it depends. It depends, like, is that MVP literally just a click-through site to gather knowledge and interest? Then that’s probably minimally a problem. But if you’re already building and you know you’re building an AI system, a much more complex system, you must start this work super early, because if you wait, you’re going to find that some of the decisions cannot be done, at least not easily. So, it really is important to do it at the inception of the project, at least from those high-level speculative type activities. So, some of the ones that I recommend are ‘abusability testing,’ or ‘black mirror episode testing,’ where you’re really thinking through the absolute worst-case scenario, because it really helps you to think about the people who could be the most impacted. And particularly people who are marginalized in society; we really want to be careful that we’re not adding to the already bad situations that they’re already facing. So, it’s really important to do it upfront. And much like accessibility, people feel like, “Oh, we’ll fix that later.” You can’t really effectively fix those types of things later; you really do need to build it into the system. And it’s really important to do the right thing, in this sense. And to your earlier point, businesses aren’t always interested in that, and it’s a hard sell, unfortunately. It will probably end up being lawsuits before many individuals will really understand the importance of it. But I’m hoping that we can get enough people at the ground level doing this work that it will already be baked in.
Brian: Yeah, I mean, ultimately, a lot of it’s going to come down to appetite for risk, taking chances, and all that kind of stuff. So, this kind of is a good bridge, I think, to talk about some of the warfighting and military work that you’re doing. Talking about ethics in that context is really interesting, as I’m sure you can imagine. It’s kind of like, I’m here to pull everyone to the left, while the technology wants to go to the right, and you find some middle ground here. The first thing that jumped to mind when I saw this is, “We’re putting out a code of ethics about this.” Isn’t that exactly what quote “the enemy” is not going to follow, to help them level up? And then you have that natural pull to, like, bend the rules and, like, “Well, we’ll just automate this too, and we’ll automate that.” And the next thing you know, it’s machines against machines. So, talk to me about this yin and yang, and finding that middle ground. What is that like? It’s a fascinating area.

Carol: It is. It is. And it’s not something, unfortunately, that’s new for the US Department of Defense. We’re constantly, unfortunately, working against organizations that will go to any lengths to make sure that they win. And that’s just not the way we do the work that we do. That’s not the way the United States wants to present itself; at least, that’s not the United States I believe in. So, trying to figure out where that balance is, is really challenging, particularly right now, in the cyber world. Now, I don’t do as much of that work-cybersecurity type work-but there are so many people using various techniques to get into these systems and to break through security protocols. In some cases, some of my colleagues have to think about other, less ethical ways of doing that work, too. And thinking about, do we make what they call a honeypot and attract them in, in order to prevent further damage? And at what point are you crossing that line?
And just really thinking through all those types of implications. And the same way with the warfighters: we need to give them enough control and protection to keep them safe, and at the same time, there’s always that point of, well-if, for example, someone lost control of the vehicle and there was a crash, and people got hurt-you know, starting to think about, how do we prevent that? Is there a way to prevent that? Is there a way to put in an automatic stop, and what are the implications of that in the system? If it automatically stops, then what risk are we potentially incurring? And just thinking through those types of things is really hard. I was teaching a workshop a few days ago, and talking about how most of this isn’t about the trolley problem the way people think about the trolley problem, which is-the idea is that there’s a trolley person and they are managing the trolley, and there’s a fork, and the decision is to go left or to go right, but in both directions, people get hurt, get killed most likely, by the trolley. That’s an in-the-moment decision. What we really are doing with this work is thinking through that long, long before it ever is going to happen. So, long before that incident, how can we prevent it? And where does the trolley stop? And what are the implications of that? Who do we need to notify that it’s stopped, because it prevented either of those tragedies, because we built a safe system? And that doesn’t necessarily mean-safety is always relative. Another example is passing other vehicles. As human drivers, we want a good three, five feet-I don’t know what it typically is, but there’s a distance that we want between us, with that yellow line and some distance-and to a self-driving vehicle, it doesn’t care, right? As long as it’s passing by the merest centimeter, it’s safe, technically. But how many people are going to feel safe in that vehicle at that time, when it’s that tight?
It’s like driving in-I’ve been in some countries where people pass each other that closely, going slower, but [laugh] it still feels very risky. And where’s that balance between protecting and doing all we can to keep people safe, and also not creating a worse problem in some way?

Brian: Some of the stuff you were talking about-I thought it’s really important to hammer this, I think, into the community of makers working on these things-is, for example, you can’t teach a machine justice; you can simulate justice-like decisions, but it doesn’t feel. Safety is also a feeling, right? It’s not quantitatively decided. If I’m in a jumbo jet going 900 miles an hour, then that three to five feet of spacing does not feel as good as it does if I’m in a 10-mile-per-hour tank, where it’s like, oh, we’ve got plenty of space, I could reach my hand out. There’s something going on there that’s very emotional, and it’s human, and you can’t teach that to the machine. I mean, maybe you can. Maybe there’s an algorithm or a formula for size of craft, velocity, this much space feels right. I don’t know, maybe there’s some way to quantify that. But is that kind of the work you’re doing-helping to quantify some of these things, and make them into requirements or parameters for the system, and say this is what we learned through research? Is that part of what you’re doing?

Carol: To some extent, yeah. I’m not doing as much of that right now, but I have in the past, and really just trying to figure out how do you get that context into the system. But the other thing is, it’s just as subjective. So, what you think is safe and what I think is safe are different. In the time of COVID, we’re learning a lot about how what that person thinks is safe and what I think is safe are two very different things sometimes. And we both use the word ‘safe,’ and we-you know. So, it is, it’s very subjective.
And part of my job is also just figuring out, okay, since this is very subjective, how do we make a system that is somewhat flexible for that situation, so that for this more aggressive person, the system is appropriate and helpful to them, but also for this more conservative, more careful person, they are going to feel that the system works? Or is that two different systems? Do we need to build two different systems? And then how do you easily switch? And what happens if they have to share? There are lots of those kinds of-really, the aspects of the humans and our complexities and oddities, and the machine, and figuring out how do we get that partnership really working, is mostly where I focus.

Brian: I think a lot of product designers have probably felt this before at some point in their career: that we create work and we slow shit down. Like, we add tax for long-term value, you know, long-term usability, long-term investments that pay off, but in the short term, it feels like tax, more requirements, more stuff, slowing it down. How does it feel when the work is warfighting? In the context of the work you’re doing, do they think, yes, I know it’s a tax, and kind of acknowledge that? Or do they not see it that way, the teams you work with? How do they see the work that you do with user experience?

Carol: Yeah. Well, I’m very lucky; I mostly just work on prototypes. I work for what’s called an FFRDC, so the work we’re doing is super early, just thinking through ideas and trying to help people. So, we actually don’t have that constraint. I have worked a lot in Agile, though, and in those instances, it can feel like the user experience work, the ethics work, is slowing the systems down. And I think people need to be more open to doing slightly slower work. I do think that the move fast and break things time is over. It just, it doesn’t work. Too many people do get hurt, and it’s not a good way to make things.
We can make them better, slightly slower. I’m a huge fan of getting things released, and getting feedback, and getting things out, but if it takes three weeks instead of two weeks because we spent a little bit more time thinking through, and we protected people, I think that’s a win.

Brian: Yeah, I think it’s always-the risks have to be understood. I think part of the work is asking the questions, having the scenarios that you talked about. I want to get into the abusability testing, too. That’s a great term; I hadn’t heard of it before. But before that, you had written down-I’m going to quote you here; I think this was on the ethics checklist-and this is talking about diversity of teams in this process, and why this is important, and you said, “Talented individuals with a variety of life experiences will be better prepared to create systems for our diverse warfighters and the range of challenges they face.” And I was like, “Wow. Okay, so I was a touring musician, toured in a van, whatever. What the heck does my life experience have to do with contributing to something like that?” I found that fascinating. So, talk to me about diversity, not just in skin color and race in the teams, but of experience. How does an artist have something to do with this? Tell me about that.

Carol: Actually, that’s an excellent example, because you have worked on cramming huge amounts of equipment into probably not-large-enough vehicles, and you probably were traveling with more people than you maybe should have been at the time. And that’s an excellent way to think about soldiers in a vehicle who are being transported: they usually are in very tight spaces and have a lot of equipment, and they need to make sure that the equipment is cared for, and fed, but also themselves. And being able to appreciate that and understand that is actually important. So, in that example, I think it’s a great comparator.
And generally, the thing that people with different backgrounds bring is just those different life experiences. So, an example I don’t enjoy using, but that’s very illustrative, is when you think about smart thermostats and smart speakers in people’s homes, and the way that we tend to share passwords and things like that with other humans that we’re in close relationships with. If those relationships become violent, and a person then leaves-the violent offender leaves-then they still have access to the home. They still are able to potentially make the home temperature uncomfortable, they can raise and lower the volume of speakers, they can do all kinds of things to make that person’s life unbearable, even though they’ve physically left the home. So, those are the types of things I try to think about. Just, how can we keep people safe and prevent those things from happening? And abusability testing is an excellent way to do that, because if you think about how a system can be abused, you might get there. Someone who has, unfortunately, been in that type of relationship, or been stalked, or been in other situations where they felt threatened, is going to be much more imaginative about the ways a system can be abused than someone who has never felt concerned for their own safety. So, that’s where that diversity matters as well. It’s just having people who have those different life experiences and can say, “You know, I can imagine an ex of mine really misusing that, or abusing me with this. If they had access to that, I’d still be having trouble.” That type of thing. I think that with people who have those different experiences, you’re just more likely to have those kinds of conversations, or at least I hope that you’re in a safe enough organization that you can have those conversations.
Brian: Talk to me about, then, these four principles for designing ethical AI systems, and if you could go into the abusability testing thing, I think it’s an actual tactic; it sounds like a very fun activity people can actually do, but it has real purpose as well. So, can you break down these four principles? Carol: Yeah, sure. Yeah. So, the four principles are: accountable to humans, cognizant of speculative risks and benefits, respectful and secure, and honest and usable. And so with these four aspects, we can start to really query the systems and think about different types of protections that we want to provide. So, with accountable to humans, for example, we can start to think about who is making decisions. How are we making sure that humans can appeal or somehow undo an action or an important decision made by the system? How are we protecting the quality of life and human lives in general? How are we making sure that the system is not making decisions that we don’t want it to make? And with cognizant of speculative risks and benefits, this is where we get into abusability testing. So, this is really thinking through those worst-case scenarios. And with the abusability testing-this was made popular by Dan Brown, and the idea is that you really take the time to think about the work that you’re doing, to think about the scenario, and go through the steps of thinking about the good things and the benefits that the system can provide, but also the negative aspects and what those are, and what could happen potentially, if the system was hacked, or if the system was turned off inadvertently, or if someone wants to hurt someone else with the system, using the data, using whatever aspects they might have access to. This is particularly important for systems that are using camera data, or facial recognition, or anything like that.
For things where human lives or any important information, again, are being determined, we want to make sure that we’re being as speculative as possible so that we can prepare, ideally prevent, but at least mitigate, and then communicate about how we’re mitigating those issues, and make sure that people are aware of them. And then with respectful and secure, this has to do with people’s data. For example, just making sure that we’re not collecting more information than we need, and that we’re taking responsibility for all the data that we collect, and making sure that we’re keeping it as safe as possible. Also, making sure that the system is easy enough to use, easy enough to be secure, that we don’t have to worry about people writing information down on a post-it note where it might be accessed by someone else. And then with honest and usable, that’s with regard to the system actually identifying itself: being clear about when it’s an AI system. So, particularly with smart speakers, and chatbots, and things like that, we want to make sure that humans always know when they’re dealing with another human versus a machine. And so making that really clear to them. And being honest about, again, the weaknesses, the limitations of the system, how it was built, who built it, where the data came from, why they should trust the data-or why the data might be questionable-and providing all that information upfront so that people can determine how best to use the system. Because there may be-there was an example that-I’m not going to remember her name-but I was at a presentation a week or two ago, and she was talking about how they found with certain candidates using the system that they had built, they shouldn’t use the AI system.
The AI system had significant bias for certain individuals, and so in some cases, they would go ahead and use the system because it was faster for the decisions that they’re making, and in some cases, they would specifically not use the system because they knew the system would not make the best decision. Even though it was going to be faster, it was going to provide a very biased answer, and so they just made that determination about when to use the system. Brian: Got it. Got it. The second principle, I wanted to ask you about this because I thought this is really interesting, and I wrote down, “Risk to humans’ personal information and decisions that affect their life, quality of life, health, or reputation must be anticipated as much as possible, and then evaluated long before humans are using or affected by the AI system.” And so I immediately thought about your warfighting [00:36:00 unintelligible]; I was like, wow, that’s kind of at odds in some ways, right? How do you balance that? And did this actually come out of some of the work you were doing in the defense space, where you’re like, “It’s this yin and yang,” I just found that kind of fascinating because the opposite seems to be what you’d want the tech to do if it was an offensive solution, you know? [laugh]. Carol: Yeah. Yeah, yeah. Well, ideally, we will always have humans making those final types of decisions, regardless. But it’s still-yeah, it’s a really difficult area to work in. And the Department of Defense has always had standards of ethics and standards of action with regard to the soldiers. Unfortunately, those aren’t always followed, but for the most part, they are, and they’re very important to the Armed Forces. And so making sure that that’s also in the AI systems is really important. Making sure that we’re still standing for the things that we believe in, and protecting life as much as possible. 
Certainly, no soldier wants to be responsible for the death of anyone that they don’t intend to injure, so that’s part of it; it’s just making sure that the systems truly are safe, and truly are protecting life as much as possible. And where it’s up to the individual soldier to make that determination, then they’ll make that determination. And they’ll have that responsibility on them, not on an AI system because an AI system doesn’t have rights and responsibilities. It’s just a computer. So, making sure that that responsibility stays with the individual who is making that decision, who’s always had to make that decision. Brian: Yeah, yeah. Good stuff. Any closing thoughts for our audience? Where do you see things going? Or a message you’d like to convey strongly about this work? Carol: Yeah. I’d say just keep asking tough questions. Have these tough conversations. This is really hard work. It’s very uncomfortable work for a lot of people. They’re just not used to having these types of ethical conversations, but it’s really important that we become more comfortable with them, and keep asking those questions. Because if we’re not asking the questions, no one else may ask them. And we need to make sure that the work that we’re doing is protecting people, and is the right work to be doing, and is going to be helpful, and hopefully be really useful and usable, and all the wonderful things that we want our users’ experiences to be. Brian: I think it’s a great place to finish. So, Carol Smith from Carnegie Mellon, thank you so much for coming and talking about this. Carol: Thank you. This was a pleasure.
43 minutes | Dec 15, 2020
054-Jared Spool on Designing Innovative ML/AI and Analytics User Experiences
40 minutes | Dec 1, 2020
053-Creating (and Debugging) Successful Data Product Teams with Jesse Anderson
In this episode of Experiencing Data, I speak with Jesse Anderson, who is Managing Director of the Big Data Institute and author of a new book titled, Data Teams: A Unified Management Model for Successful Data-Focused Teams. Jesse opens up about why teams often run into trouble in their efforts to build data products, and what can be done to drive better outcomes. In our chat, we covered: Jesse’s concept of debugging teams How Jesse defines a data product and how he distinguishes them from software products What users care about in useful data products Why your tech leads need to be involved with frontline customers, users, and business leaders Brian’s take on Jesse’s definition of a “data team” and the roles involved-especially around two particular disciplines The role that product owners tend to play in highly productive teams What conditions lead teams to building the wrong product How data teams are challenged to bring together parts of the company that never talk to each other – like business, analytics, and engineering teams The differences in how tech companies create software and data products, versus how non-digital natives often go about the process Quotes from Today’s Episode “I have a sneaking suspicion that leads and even individual contributors will want to read this book, but it’s more [to provide] suggestions for middle, upper management, and executive management.” – Jesse “With data engineering, we can’t make v1 and v2 of data products. We actually have to make sure that our data products can be changed and evolve, otherwise we will be constantly shooting ourselves in the foot.
And this is where the experience or the difference between a data engineer and software engineer comes into place.” – Jesse “I think there’s high value in lots of interfacing between the tech leads and whoever the frontline customers are…” – Brian “In my opinion-and this is what I talked about in some of the chapters-the business should be directly interacting with the data teams.” – Jesse “[The reason] I advocate so strongly for having skilled product management in [a product design] group is because they need to be shielding teams that are doing implementation from the thrashing that may be going on upstairs.” – Brian “One of the most difficult things of data teams is actually bringing together parts of the company that never talk to each other.” – Jesse Links Big Data Institute Data Teams: A Unified Management Model for Successful Data-Focused Teams Follow Jesse on Twitter Connect with Jesse on LinkedIn Transcript Brian: Hello, everyone, welcome back to Experiencing Data. This is Brian T. O’Neill, and today I have Jesse Anderson on the line, the managing director of the Big Data Institute. Jesse, what’s going on? Jesse: Not much. Great to be here. Thank you for inviting me. Brian: Yeah. So, you have a new book out, this is not your first text. So, first of all, congratulations on that. I know, it’s always a slog getting through a book, at least that’s what everyone who writes books says. So, why do we need a book about data teams? Jesse: You need a book about data teams because I wanted to bring the other teams into the picture. And instead of just focusing on the data science team, I saw the need to bring in the data engineering team and the operations team and really educate managers on saying, “Well, you need more than just data science to do this right, and here’s why.” Brian: So, there was a premise that being successful with big data means go get a data science team; done. Is that kind of the premise? Jesse: That’s kind of the premise right now.
And it isn’t just a, “Here’s what I think is happening,” I’ve actually dealt with the companies where they went out and hired that data science team and started scratching their heads: “Why are we having problems? Why are we not getting the value that we saw when we sat in that audience at the conference talk, and the person said, ‘Oh, my goodness, we just [00:01:49 unintelligible] a checkbook,’ or, ‘we were just bringing in the cash.'” This is why. Brian: Got it. Got it. So, the text, by the way, is called Data Teams: A Unified Management Model for Successful Data-Focused Teams. So, why did you feel like this needed to come out now? Why not two years ago? Why not, you know, a couple of years from now? What prompted the need for this right now? Jesse: To be honest, it would have come out last year. [laugh]. Brian: [laugh]. Jesse: Besides logistical reasons, it’s really time for us to understand from a manager’s point of view what we needed to do. I think this is timely information. I think it-some companies will obviously have a team in place, and it will help them understand why it’s either underperforming or maybe if they’re really lucky, why it’s performing so well. Maybe they don’t have the words or the experience to say, “Hey, this is it.” In general, what I’ve always tried to do is share my experience and knowledge. I think for me, I brought a different level of experience than most people have. Instead of writing about my one company, since I’m a consultant and I’ve done this for so long, I have all that experience to bring to bear. And I went a step further. I went and started talking to other people. In the book, there’s interviews with other companies, so that you just don’t hear my point of view; I want to bring other people’s points of view, what problems they had, what successes they had, and really create a compendium of knowledge. Brian: Yeah.
I did see that. I wanted to compliment you on something else, format-wise, which I thought was really compelling in the text. The text is not in long-read format. You’ve broken it up; there are a lot of section headings. So, imagine chapters with lots of subtitles in them. You’ve collected some of these around problem statements, like, “My team shipped something and nobody liked or used it,” and then you have a section about that. And so I thought that was very problem-oriented because it’s very easy to scan the table of contents and say, “Well, that stands out to me. I know that. I feel that. Let me jump into that.” So, I thought that was an interesting approach. How did you come up with that? Or where did that inspiration come from? Jesse: It came from seeing the same problems over and over again. And as I’ve both talked to people and done some of these interviews, it’s more of a therapy session than it is a sharing of knowledge sometimes, where people want to know, “Were the problems that I was having similar to other people’s?” And this is what I’m trying to show in the book is, “Hey, if you’re having this problem, not only are you not unique, so many other people have had this problem that I wrote a subheading in the chapter because it’s all experience-based. I hit this several times. I hit it enough to go through and write a section about it so that you can know about it.” Let me share what I did when I wrote the table of contents. I spent about a month just kind of brainstorming and going back through experiences at all these companies and just writing down what I saw, and then bringing that together into what I wanted in each chapter. Brian: Got it. So, I want to step back one more time to who this is for. Jesse: This is primarily for management. I have a sneaking suspicion that leads and even individual contributors will want to read this book, but it’s more to be able to say, “Hey, look.
This is why-you know, give this to my boss. Give some suggestions to, kind of, manage upwards.” But this is definitely primarily written for middle and upper management, executive management. Brian: We’re talking about analytics, data science, on the technical side, software engineering, these disciplines? Jesse: Those sorts of disciplines, then, where they may be at the point they’re going to start a team, or they’re trying to fix an existing team. Various possibilities there. Brian: Got it. Got it. So, first of all, let’s talk about data products for a second. So, what is a data product to you? I want to set that stage, and then I want to dig into this a little bit. What is a data product? And you make a distinction here about what a software product is, and a data product. So, help me understand your distinction there. Jesse: Sure. And you bring up a very valid distinction that, I think, is important for management to understand. A software product is usually, “Here, I’m deploying a piece of software out there, a REST API, something like that, where the end product is putting something into production that serves whatever we need.” An API, for example. Data products, when we’re in a company, and we have a data product, we have a product that is data. That is what we actually get out the door. It isn’t software. Software, obviously, creates that data product, but the data product is what we deliver. And that’s a different way of thinking when we compare that to, let’s say, software engineering. If we have a REST API, for example, and we want to change it, we can just do v1, v2, v-whatever. Well, with data engineering, we can’t make v1 and v2 of data products. We actually have to make sure that our data products can be changed and evolve, otherwise, we will be constantly shooting ourselves in the foot. And this is where the experience or the difference between a data engineer and software engineer comes into place.
The data engineer needs to have the experience and knowledge to be able to say, “This is how things will change.” Or, for example, for a data scientist: as a data scientist consumes that data product, how is that data product going to be exposed in such a way that it’s useful to not just data scientists, but to analysts, and to the rest of the organization? It’s a key metric that we look at to make sure that we are using the right things, we’re exposing the right things. Brian: So, the distinction you’re making here, I want to understand if this is important primarily to a technical audience. Like, who is it important to make this distinction to? Because I’m going to argue that it’s a distinction without a difference to someone who’s on the receiving end of where the value is supposed to happen, because no one interfaces directly with data; we interface with data through an interface of some kind, whether it’s Excel, or it’s an application, or a Tableau, or whatever the heck it is, there’s an interface at the very end that happens. And I understand that there are sometimes non-interface-facing deliverables within engineering, but who cares about that distinction, I guess I would say-why is that important to make? Jesse: It’s important to make, and you’re completely right. If you are making the business consumer care about this distinction, you’ve lost. Brian: [laugh]. Okay, good. So, we’re on the same-because I don’t think they care, to be honest. Jesse: No, and they shouldn’t. What they do care about is manifestations of problems in your data product. For example, let’s say you use the wrong technologies, or you can’t scale, or you don’t have operational excellence, then the business cares that your data product is not usable. That’s kind of a binary thing for the business people; “I can either use this or I can’t.” And the various reasons I can’t use this usually stem back to a problem in the data teams-the three examples I just gave.
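Jesse’s point above-that a data product can’t simply ship as v1 and v2 the way a REST API can, and instead has to evolve in place without breaking downstream consumers-can be sketched in code. This is a hypothetical illustration, not from the episode or the book; the record shape, field names, and defaults are all invented for the sake of the example. The core idea is additive, backward-compatible change: new fields get safe defaults, and existing fields are never renamed or removed.

```python
# Hypothetical sketch of backward-compatible data product evolution.
# All field names here are invented for illustration.

def upgrade_record(record: dict) -> dict:
    """Evolve an old-shape record without breaking existing consumers.

    New fields are added with safe defaults; existing fields are never
    renamed or removed, so a consumer written against the old shape
    still finds everything it expects.
    """
    evolved = dict(record)                   # copy; never mutate the source
    evolved.setdefault("schema_version", 1)  # tag untagged legacy records
    evolved.setdefault("region", "unknown")  # new field, safe default
    return evolved

# What a consumer written against the original shape expects to find.
old_consumer_fields = {"order_id", "amount"}

# A legacy record produced before the new fields existed.
record = {"order_id": 42, "amount": 19.99}
evolved = upgrade_record(record)

# The old consumer's fields all survive the evolution untouched.
assert old_consumer_fields <= evolved.keys()
print(evolved["region"])  # -> unknown
```

In practice this discipline is usually enforced by a serialization format with explicit schema-resolution rules (Avro, Protocol Buffers, and similar) rather than hand-written upgrade functions, but the contract is the same: evolve the one data product in place rather than forking it into incompatible versions.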
One is, if there’s no operational excellence, the business cannot rely on that. If there isn’t sound infrastructure behind it and sound usage of technologies, then we’re missing our data engineering team, and for data science, if we aren’t using the right models, then they can’t use that either because we have a problem of, let’s say, prediction or clustering-whatever is happening, it’s not using the right thing. Brian: So, I think there’s high value in lots of interfacing between the tech leads-maybe not the entire team, and all the data engineers, software engineers, but especially the tech leads-being involved with whoever the frontline customers are, whether it’s an internal customer or external customer. That interfacing is so important to know: what needs to be built? What does a small value look like? How do we ship something that’s useful, as opposed to boiling the ocean and creating these giant data projects that put out large-scale infrastructure that often doesn’t produce any value in the near term? And I’m not saying that infrastructure work can’t happen in tandem with what I’m going to just call product development work, which is the solution piece that people interface with in the last mile. There’s not a strong product management component to the book or a design component. So, I guess my question is, whose job is it to figure out what needs to be built and what needs to be plumbed, such that we know what all this work is that matters, and it’s not just guessing, and it’s not just saying, “Well, let’s just assemble all these different pipelines here, in case somebody needs this.” And I understand there’s some more laboratory-oriented, research-oriented work where you don’t always know-especially in the data science field, you may not know exactly what you need to bake your cake. You might have to bake 20 cakes before you figure out which recipe is going to work. I get that there’s some variability there.
I think sometimes, though, we’re building a lot of infrastructure without focusing a lot on the last mile. So, when is it time to define the last mile at the beginning of the project, such that we’re focused on the minimum amount of stuff to make in the data team? And then they feel like it’s a win because someone said, “Wow, this is great. Wow, I want to use this. Give me more.” That was something I didn’t feel was dug into as much in the text. Jesse: Yeah, and you’re completely right on that; I didn’t dig into who should be creating the actual, let’s say, specs, and who should be that interface to the business. In my opinion-and this is what I talked about in some of the chapters-the business should be directly interacting with the data teams. You could go so far as having a business person, a representative, in the daily Scrum-maybe not every day, but on a regular basis. You could get to that level of interaction with the business-and by the business, we’re talking about the end consumer of the product. And you’re completely right; one of the actual problems I’ve seen, too, is that the data engineering team will haul off and create massive amounts of infrastructure, with no end in sight, but more importantly, no business objective in sight. And this is a failure of the data engineering team to interface with the business. So, what I talk about in the book is being very close to the business, that the business’s interaction does not stop at the point where we have a data strategy in place. That’s just the beginning of the project. We’ve created a strategy, but we have not actually done something, we have not actually created something that’s ready to go. So, this is a key thing. So, back to your original question of who: you could say it’s the team leads. In the book, I talk about how you could go so far as to say that there are four teams-that there’s a BizOps team that’s needed.
And that BizOps team is needed to make sure that what the data teams are creating is business-appropriate and solving business problems. And you may have on that team your project managers, your product managers. One other thing that I came out of the book thinking about, both from the interviews I did as well as the extensive reading I did, is DataOps. If you’ve read those case studies, one of the, I think, highest and best usages-the most productive teams-is actually having a cross-functional team where the product owner is embedded on that team. And this way, there is no resource allocation issue; there is no mismatch of requirements; the product owner is actually running that team, and every piece-data scientists, operations, data engineering-is right there to be able to create that. There is now no excuse to have something that doesn’t meet business needs. Brian: This latter thing you just talked about with this product owner being involved, are you saying that’s an alternative way to do it? I guess that the universe that I come- Jesse: [00:14:24 crosstalk]- Brian: What’s that? Jesse: It’s an advanced way of doing it. So, I see that as if you’re just starting out with data teams and trying to do this, I don’t believe that you can go to DataOps yet. I think what you have to do is get good practices, get the right teams in place, and then-once you get to a level of friction, and usually that’s the word I use in the book-once you get to a level of friction where the problems that you’re having are not team-related, they’re usually resource-related and communication-of-goals-related. That’s when your friction is a different story. By bringing those together, you’ll either eliminate or dramatically reduce those levels of friction.
And there’s two case studies that are in the book; one is from Moneysupermarket with Harvinder Atwal, and the other one is from [00:15:10 Mickey O’Braun], where we talked about if you’re familiar with the Spotify model with guilds, and chapters, and that, they actually brought that to data teams, and as a direct result, were able to remove a great deal of friction. Brian: So, I want to hear if I got this right. Are you saying that having a dedicated, what I’m going to use the term ‘data product manager,’ involved to help these teams be successful delivering value in the last mile, are you saying that’s more of an advanced thing that you don’t need at the beginning of the journey? Bring that role in later? Is that what you said, or did I misinterpret that? Jesse: It’s not that you don’t need it, it’s that if you make too many changes all at once, you could find yourself just crawling consistently if you’ve made so many organizational changes and so many technical changes, you just can’t get anywhere. So, it’s more about getting you some focus initially, that you have a north star that you’re getting to. Brian: So, without that person, and again, I don’t care whether or not it’s a separate body, a separate human, I’m talking about the role, the hat that’s worn, so if they’re not there then, what is the chance that we deliver something that’s great? Whose job is it to be the product owner because it feels fairly unmanaged to me if we don’t have a central hub with all the spokes- Jesse: Let me clarify something, I think that I’m-I think you’re misunderstanding. It’s not that that person, that product manager doesn’t exist initially. It’s that the team structure, they’re not actually embedded on a team, initially, for structure. Yeah. Definitely from the beginning, if you’re an enterprise company, and you have your data engineering team, your data science, and operations, you definitely need some kind of either product manager or project manager in place. 
Another piece that I neglected to mention is the data architect, and I have an entire section on what that data architect is, where that data architect’s role is to translate the business need into something that’s actually usable while making sure that we don’t spend three years, four years on nothing. And then, at the fourth year, we have something. We need to be creating value initially. Does that make more sense? Brian: Well, were you talking about the reporting structure? Like, they don’t need to be on the team? Are you saying it’s like, well, they’re a resource that doesn’t report literally into the product? I guess I didn’t understand the distinction. Jesse: Yeah, the reporting structure, they may be part of, let’s say, data engineering; they may not. I haven’t seen any specific one to point to and say, “This is how you be successful initially, and this is where I’ve seen the best way to put those project managers.” They may be part of the PM organization, I’ve seen that before. I’ve seen them be part of a data science org, the data science team. But suffice it to say they are within that organization. Brian: Mm-hm. I want to be clear, too: I’m talking about product and not project, and I think those are very different roles; one’s about looking down the pipeline of what the value is that needs to be created, what are the increments and iterations of work to get there? What should be worked on first, second, third, et cetera? The other one is about hitting the milestones and making sure that the ship is cruising at the right speed the whole time, delivering on expectation. Often that can be one role, but it’s a very distinctly different one to me. And I think almost the product side is more critical right now because a good data architect, or a good-at least in my experience-a good tech lead can kind of be the project manager if they have disciplined engineering practices in place. 
They may not need a separate project manager overseeing everything because they have rigor. I don’t-maybe you have a different take. I’m not technical, but I just know who I’ve worked with and seen, and the really good architects kind of act as a cop on the project side as well. There’s a tech lead that’s aware of the milestones, and if they’re doing agile, then they’re aware of how many points they can eat in a sprint, stuff like that. What’s your take? Jesse: I would agree with you on that. Definitely your definitions of product versus project. Sometimes companies are trying to start up a team, and if you tell them you have to start up a team with 20 people, or 10 people, they may not be able to initially. And so what you have to do is you’ll have to have, perhaps initially, your manager wearing several hats, and one of those would be a product manager hat, at least initially. And so, they’re going out and doing some of that interfacing with the business to make sure that they’re working on the right thing. As the team grows, then that can be a separate person. That’s actually a progression that we made at one of my clients: there was no product manager initially. And what we realized as we tried to triage and deal with all the-we had an initial set of what the data engineering and data science needed to do, but the issue was, what order do we do them in? How do we start getting some requirements in place so that once they’re ready, we’re ready to rock on that? That’s what that product manager was doing for us initially, but he wasn’t an initial hire. He was a later hire, but it wasn’t super late. It was probably about a year in from establishing the team. I’m not making a specific recommendation there; it was more to understand that initial progression-in that case, the director of data science was acting as product manager.
That then got passed to the data engineering manager that we hired, and then that got passed to an actual product manager once we hired that person. Brian: Mm-hm. Jesse: Does that make more sense? Brian: Yeah. No, I understand. And I’m not saying there’s a fixed recipe either. I think you can either look at it as an insurance hire, where it’s like, this role is going to help provide insurance for your team that they don’t spend a lot of time building the wrong stuff, and at some point the business loses trust in their ability to deliver value because everything takes forever, no one’s clear about what they get at the end, and you end up with-apparently, a lot of stuff was done, but someone at the end, at the last mile, is not feeling like a lot of stuff got done. They’re just like, “What? This isn’t the right thing.” So, whoever does the work, regardless of how many human bodies there are, I think someone needs to own that process, just like I think design matters, too, even if it’s not a designer-with an E-R at the end of it. The design of the final last mile of experiences has a lot to do with informing the technical work that needs to happen. Not entirely-there are always security, and audit trails, and all this kind of stuff; you need to have all that in there, and it’s not necessarily a user-facing requirement most of the time until it becomes a problem, and then you do need to go back and audit. So, there’s definitely engineering work that’s not part of the last mile product piece. But I want to talk about this thing. So, talk to me about building the wrong product. And you say in your text, in the book, there’s a shared blame going on here. So, whose fault is this? Not that this is a fault, but where does this problem stem from? What are we supposed to do about building the wrong thing? Jesse: Usually, building the wrong thing comes from two potential issues.
And one of those starts with the business side, where it’s a political issue, where maybe the teams don’t like each other; they’ve never worked well together; the two managers hate each other. I’ve seen those firsthand. That can be just a complete non-starter, and as a direct result, they will just never really work well enough together to create that correct data product. Brian: Wait, which sides hate each other? And where did that come from, though? Jesse: One of the most difficult things of data teams is actually bringing together parts of the company that never talk to each other. That you’re having to bring in analytics, and often analytics has never dealt with the engineering people. And then you bring in software engineering-who should have been dealing with the business people all along, but often don’t, even though it’s part of agile; you’re supposed to do this-and then you have operations who just receive the problems of everybody, and so they’re holed up in this prisoner’s [laugh] prisoner mentality. Sometimes it’s just the sheer effort of bringing those three teams together because usually, they’re part of a different-a separate part of the company that you’re having now to bring together, and have them work consistently together. And then latch business on to that, it can be this recipe for, “Oh, I always hated that person. I never had to deal with them. I’d see them in a meeting once every so often, but I hated them.” So, I’ve seen that before. Does that make more sense? Brian: Yeah. I mean, there’s always a reason where this stuff comes from. I generally don’t think it’s just personality or something like that. There’s probably been some professional reason why something didn’t work out, a project failed, or somebody let somebody down, something like this. And part of the reason I’m asking this question is a big role of product design is facilitating getting the right people in the room to get aligned around a customer outcome that we want.
This is nuts and bolts product design; it doesn’t happen in isolation, and so you need to pull in these teams. And part of that is to drive empathy for the last mile, to drive a relentless focus on whatever the definition of value is, and making that uber clear to the entire team, even if your part is just to work on this one aspect of some pipeline or whatever it may be, you know how your piece fits into the big picture, as opposed to, like, “I have no idea where this ship is going, but you know what? At this point, I don’t care. I got my repo, I’m checking in my stuff, on to the next JIRA ticket.” To me, that checkout mentality, it’s like, that’s when you better be worried. When everyone’s kind of like, at this point, no one has a vision, the strategy is unclear, I worry there. [laugh]. Do you agree that this is where stuff comes from? Jesse: I mean, I’ve seen that before. In fact, what I’ve seen is that the company strategy changes so often, that they don’t even bother implementing it. They’ll sit in the room, they’ll talk, but when it goes to implementing, they know that it’s going to change in a month, so they don’t even bother doing it. And yes, I’ve seen those companies. It’s an unfortunate thing that could put your product manager in the unenviable position of trying to herd cats that don’t want to be herded, or that you’re herding them to Objective A, now over to Objective B, and then back over to Objective C, relatively shortly after that. This usually happens-I’ve seen-at financial institutions, or companies without strong leadership, where they’re, kind of, on to the next, shall we say, the next silver bullet. Brian: Sure. And I mean, this is, again, why I advocate so strongly for having skilled product management in this group is because they need to be shielding teams that are doing implementation from the thrashing that may be going on upstairs.
And not only that, they’re also helping the executive team figure out what the strategy should be, and then they manage down. So, I kind of see them as this, like, defensive and offensive layer that sits and manages both up and down to try to keep the team insulated from the nonsense that can happen upstairs, when the strategy is unclear, or it’s shifting, or whatever may be going on. I think they can really help with that role. And then the practice of getting the right people in the room, that is a design activity: when we’re talking about delivering the value in the last mile and getting people aligned around that aspect, that is a design activity. It doesn’t have to be a designer that does it, but the practice, the skill, the activities there, it matters because if there’s no empathy for the people using this stuff in the last mile, you’re going to check out and go native because you just don’t really know how your work fits into anything. You don’t care, either-[laugh] or you just-it turns into a job. It’s like, check in, check out. And that’s where I think teams go to die. [laugh]. Where value goes to die with all this stuff. Because these are really complicated systems, and if you don’t know how your stuff fits in, you don’t know how it’s going to provide value. I don’t know, I don’t think people like working on stuff that doesn’t get used. But maybe I’m wrong. [laugh]. I mean what do you-don’t you feel like people get a lot more satisfaction out of their work when they’re like, “We hit a home run.” Or even a base hit. Like, “Accounting was really happy with this thing that we did, the reporting that came out the other end, it’s in real-time now. They’re stoked.” Like, that feels better; it energizes the team. I don’t know what-tell me about-I mean, what are your-what do you think? Jesse: I talk about that in several different ways. 
I talked about the team getting velocity, and the way that you get velocity is not just with experience of I now understand Spark-let’s say-better, it’s also getting velocity of, “Oh, wow. My product went into production, and it was used by accounting.” That’s a win for the team. So, it’s structuring your team around wins. And maybe that’s the engineering manager initially, that does that, maybe that’s the product manager that does that, but you’re structuring things around, “How do I get a win for the team?” One of the things you don’t want to do is create that project I was mentioning of, four years from now we get something into people’s hands. That’s another way for it to go nowhere because inevitably the strategy will change. We also want to make sure that we’re doing the right thing for the business. So, in that sense, if we have the connection with the business that we should-maybe that’s the BizOps, maybe that’s the product owner being in the scrum, or making sure that that data product gets in front of them consistently for feedback-that makes sure that we are actually creating something that is usable, that once we go live, they continue to use it. Brian: On this topic of usable, or usability in the noun form, designers and user experience aren’t mentioned a ton in the text, so talk to me about whose job it is to own usability of the final result. And have you seen teams working with designers and user experience professionals? How do they evaluate usability, if it’s even evaluated at all? Is it, kind of just, put it out and hope it gets used, and then on to the next thing? Or is there a cycle of iteration improvement that’s not just like, “I hate this; go back”? And it’s like, “I don’t know. Well, let’s try this instead.” It’s a guess. What’s your experience there with formal designers, or untrained design, or no design at all? Does this factor in at all? Jesse: It depends on what the structure of the data teams is.
Are they creating something that is used, or has a UI that’s being interacted with, and what is the nature of that UI? So, if you’re doing something that’s customer-facing, hey, one of the things I talk about in the book is that your data engineering team is cross-functional. You may actually have a front-end engineer, you may have a front-end designer, and you may have an actual UI acceptance team, for example, as part of that. Usually, the data engineers aren’t the ones creating that sort of user interface. Usually they’re more focused on the data product. So, how do we ensure the usability of our data product? We may have those designers on the team, we may have data modelers on the team, that’s a possibility, though, in my experience, the data modeling is usually not the most difficult part of the data engineering roadmap; a data engineer should be able to do that data modeling. Once we go to the data science side, I’ve actually seen some of those data science teams actually have front end engineers embedded on them, so that they can write something that is prettier, more usable, that goes through, maybe, some kind of user acceptance testing. Brian: Mm-hm. If I read this book, what do you hope someone gets out of it in terms of activities that they can go put into play immediately? Are there a few things that you can pull out-and I know this is kind of a crappy question, right? “Could you distill your whole book down into three easy bullets for me?” But are there a couple things you’re like, “Repeatedly over, and over, and over this happens. I see this all the time: this kind of team with this kind of problem, if you would just go do X, Y, and Z, you’re going to make a dent?” Are there some takeaways like that, or are there too many different small problems that it’s not that simple? Jesse: There’s one chapter I’m especially proud of, in that respect of, I wrote this chapter, it was basically debugging teams.
I can’t remember the name of the exact chapter. But it was this troubleshooting guide of here’s various troubles, where it’s a problem statement of, “We’re not getting enough value. We’re not doing this, we’re not doing that.” And so, I go through those sorts of statements and say, “Here’s the possible reasons.” And it isn’t, “Here’s the hypothetical reasons that might happen,” it was, “No, this is my experience on when this happens, this is usually the culprit here.” So, that’s one of the things that if you are having problems with your current team, it’s kind of a debugging guide, and you can go through and figure out, “Oh, if I’m always talking about technical debt and that, I’m probably missing a data engineering team; probably missing the right data engineers,” for example, there’s a listing there that shows that. Other actionable things, I’d say, are the case studies. I tried to hit people at various stages of their journey. I have an entire chapter for starting a team; that’s one set of people. And then I have that chapter on debugging, and then I have the chapters on case studies where these aren’t just, “I’ve been running a team for a year, and here’s my thoughts.” Some of those people have been running teams for 10-plus years. For example, I interviewed Dmitriy Ryaboy from Twitter. He started Twitter’s data teams and data engineering. There aren’t many companies doing data engineering for as long as that. So, I was able to get a really long point of view from him that should be really helpful to people to see what are the sorts of problems that happen over the long term? Brian: Cool, cool. I want to give you the last word on the book, which again, is called Data Teams: A Unified Management Model for Successful Data-Focused Teams.
Jesse, it’s been great getting your stuff, but before I give you the last word on whatever you would like to close with, I’m curious-and you just mentioned Twitter here-is there any meaningful distinction in the text here, or in your mindset, about the way software companies make software and data products versus traditional companies because, as you may know, there’s-[laugh] the non-digital natives are sometimes worried about the tech companies coming in and eating their lunch, but what I find is that a lot of them don’t copy the way software teams-and I come more out of the software-native industry-they don’t copy everything about how they make products. [laugh]. They leave roles out, they don’t have the same processes in place, they don’t interface with the right cross-departmental teams, and things like this. Is there any meaningful distinction there, or is it like, that’s an unnecessary distinction in the world of data science and engineering from your perspective? Jesse: No, it’s a completely valid point. In fact, I go so far as to say, “Is this your core competency?” Let’s start even there. If you’re, let’s say, a manufacturing company, and you have no software engineering pedigree, and you have no data science, and maybe you have rudimentary analytics, you need to ask yourself the question as management, is this our core competency? Should we be using some kind of commercial off-the-shelf product? Should we be even farming this out to a consultancy? Those are very valid questions that I’ve seen the end result of the company not asking that question of themselves, and then coming out the other side of this really half-assed product, or not just product, but team, where they hodgepodge the team together to not 100 percent of what they need, but what that person thought that they’d need. They never really went anywhere relative to the spend. So, their ROI, pretty terrible, pretty in the weeds, not going to go anywhere.
And what they didn’t do is they didn’t copy what companies with a good, solid, strong software engineering background do of, we have the right tools, and we do this, and we do that. And that sets us up with a solid foundation. Brian: Cool. Well, thanks for coming and sharing all this. Where can people find the text? And do you have any closing thoughts for our listeners that you would like to share about your book? Jesse: I guess one of the reasons I wanted to be on this podcast is I know you have this more product manager oriented focus. So, when I originally pitched this to you, that was what I wanted to really talk about, and think through. Where do product managers fit on this? Because you’re right, I don’t talk about it in the book. And I wanted companies who really want to know about this to have a resource that I can point them to and say, “Go listen to this product-podcast. Brian and I talk about it.” Brian: ‘Prodcast?’ Jesse: [laugh]. Brian: I think you just came up with so-[laugh]. Jesse: Okay, well, I said it, so I’m copyrighting it. Brian: There’s something there. You, like, get the domain right now. Jesse: It’s my, it’s my copyright. Brian: [laugh]. That’s awesome. Jesse: The other thing I’d say, is the book’s website is datateams.io, and I recorded a series of videos on it; there’s about 40 minutes of videos that cover things that either I wanted to complement the book with, or were easier in a video than with a book. So, I talk about that debugging teams in one of the videos. I talk about hiring: how do you actually hire? I’ve seen it for individuals where we say here are the buzzwords you need to put in your resume, but I haven’t seen anybody say, from a management point of view, what does this process of hiring look like? Another one I talk about is setting goals. It’s probably something after your own heart, of maybe that’s what your product designer would have done-or your product manager would have done.
But what I try to do, if you take just a few things out of this podcast, one is, be laser-focused. You don’t go after ten things, you don’t go after twenty things, you go after one to three things. And if you get that focus, then you can execute, but if you don’t have that focus, it’d be like some of the other teams that I’ve seen, where yeah, you can execute things at once, but the problem is that you’ve taken your resources, you don’t divide them evenly amongst twenty things; you divide them against twenty things with some overhead. So, now you execute twenty things, but it takes, now, five years because you’re executing so slowly, and you never get any velocity there. So, I encourage people to watch those videos, it’s up on datateams.io, and I’d have to say that’s the end of it. I really appreciate it, Brian. Thank you. Brian: Yeah, Jesse. It’s been great having you on, and good luck with the continued launch of the text. And I’m sure there’ll be a good one coming out-number four, right? That’ll be your-this is three, right? Number three? Jesse: This is number three. Brian: All right. So, maybe we’ll have you back on number four and see where that goes. Jesse: Excellent. Thank you again, Brian. Brian: All right, cool. Cheers.
45 minutes | Nov 16, 2020
052-Reasons Automated Decision Making with Machine Learning Can Fail with James Taylor
In this episode of Experiencing Data, I sat down with James Taylor, the CEO of Decision Management Solutions. This discussion centers around how enterprises build ML-driven software to make decisions faster, more precise, and more consistent-and why this pursuit may fail. We covered: The role that decision management plays in business, especially when making decisions quickly, reliably, consistently, transparently and at scale. The concept of the "last mile," and why many companies fail to get their data products across it. James' take on the operationalization of ML models-and why Brian dislikes this term. Why James thinks it is important to distinguish between technology problems and organizational change problems when leveraging ML. Why machine learning is not a substitute for hard work. What happens when human-centered design is combined with decision management. James' book, Digital Decisioning: How to Use Decision Management to Get Business Value from AI, which lays out a methodology for automating decision making. Quotes from Today's Episode "If you're a large company, and you have a high volume transaction where it's not immediately obvious what you should do in response to that transaction, then you have to make a decision - quickly, at scale, reliably, consistently, transparently. We specialize in helping people build solutions to that problem." - James "Machine learning is not a substitute for hard work, for thinking about the problem, understanding your business, or doing things. It's a way of adding value. It doesn't substitute for things." - James "One thing that I kind of have a distaste for in the data science space when we're talking about models and deploying models is thinking about 'operationalization' as something that's distinct from the technology-building process." - Brian "People tend to define an analytical solution, frankly, that will never work because[…] they're solving the wrong problem.
Or they build a solution that in theory would work, but they can't get it across the last mile. Our experience is that you can't get it across the last mile if you don't begin by thinking about the last mile." - James "When I look at a problem, I'm looking at how I use analytics to make that better. I come in as an analytics person." - James "We often joke that you have to work backwards. Instead of saying, 'here's my data, here's the analytics I can build from my data […],' you have to say, 'what's a better decision look like? How do I make the decision today? What analytics will help me improve that decision? How do I find the data I need to build those analytics?' Because those are the ones that will actually change my business." - James "We talk about [the last mile] a lot ... which is ensuring that when the human beings come in and touch, use, and interface with the systems and interfaces that you've created, that this is the make or break point-where technology goes to succeed or die." - Brian Links Decision Management Solutions Digital Decisioning: How to Use Decision Management to Get Business Value from AI James' Personal Blog Connect with James on Twitter Connect with James on LinkedIn Transcript Brian: All right, everybody. Welcome back to Experiencing Data. This is Brian T. O'Neill, your host, and today I have James Taylor on the line-wait, not the guitarist. Not the-and I'm sure you hear this all the time-I really like James Taylor, actually. You are the CEO of Decision Management Solutions, a little bit different. Tell us what is decision management and what does it mean to have solutions in the decision management space? James: Sure. So, it's great to be here. So, Decision Management Solutions.
The bottom line here is if you're a large company, and you have a high volume transaction, where it's not immediately obvious what you should do in response to that transaction, then you have to make a decision, and you have to make a decision quickly, at scale, reliably, consistently, transparently, all those good things, and we specialize in helping people build solutions to that problem. How do you build software solutions, primarily, that address those issues and let companies handle their decision making more quickly, more precisely, more consistently? Brian: Got it. Got it. And I know, one of the things you talk about in your work which appealed to me and made me want to reach out to you is, I use the framing 'the last mile.' We talk about this a lot in the work, which is ensuring that when the human beings come in and touch, use, interface with the systems and interfaces that you've created, that's kind of the make or break point where technology goes to succeed or die. So, talk to me about this starting at the endpoint from your perspective. I want to hear how you frame this in your own words. James: Sure. So, in our experience-and this is backed up by various surveys-the typical analytical project has problems at both ends. People tend to define an analytical solution, frankly, that will never work because it's the wrong solution; they're solving the wrong problem. Or they build a solution that in theory would work, but they can't get it across the last mile. And our experience is that you can't get it across the last mile if you don't begin by thinking about the last mile. And so, both problems are in fact indicative of the same problem, which is that I need to understand the business problem I'm going to solve before I develop the analytics, and I need to make sure that I deploy the analytics so that it solves that business problem.
And for us, what we would say is, "Look-" ask people why they want to use data, why do they want to use analytics, and they will always tell you, "Well, I want to improve decision making." "Well, just decision making in general, or a specific decision?" "Well, normally a specific decision or a set of specific decisions." And we're like, "Okay, so do you understand how that decision is made today? Who makes it? Where do they make it? How do they tell good ones from bad ones? What are the constraints and regulatory requirements for that decision?" If you don't understand those things, how are you going to build an analytic that will improve that? And so we like to begin-we often joke that you have to work backwards. Instead of saying, "Here's my data, here's the analytics I can build from my data. Now I'm deploying these analytics to see if I can improve decision making." You have to say, "What's a better decision look like? How do I make the decision today? What analytics will help me improve that decision?" Now, how do I find the data I need to build those analytics? Because those are the ones that will actually change my business. So, we often talk about working backwards, or the other phrase I use a lot is I misquote Stephen Covey; "You have to begin with the decision in mind." Brian: Yeah. No, I'm completely in agreement because that's-the decision-making forces you to also really focus on the problem and to get clarity around what does a positive outcome look like for the people who care, the ones that are sponsoring the project or are creating the application, or whatever the thing is. It forces you to get really concrete about that and to get everyone bought in on what success looks like. And it just makes the whole technology initiative usually a lot easier because there's not going to be a giant surprise at the end about like, "What is this?" [laugh]. James: Yes, exactly. Yes. Brian: This doesn't help me do anything. You know? [laugh]. James: Exactly.
No, I have lots of stories about this, as I'm sure you do. I think one of my favorites is this guy who called me up and said, "Does your company build churn models?" And I'm like, "Well, yeah, we can help you build a churn model. I'm curious, why do you need a churn model? What is it for?" And he said, "Well, we have a real customer retention problem and a churn model will help me solve this problem." And I said, "Okay, so humor me. What decisions are you responsible for? What are you responsible for in the organization?" And it turned out, he ran the save queue in a telco. I don't know if you've heard the expression save queue, but it basically means the people you get transferred to in the call center when you say you want to cancel your service. And so these are the people whose job it is to persuade you not to cancel your service. So, he's telling me this. I said, "That's it? That's your scope?" He said, "Yes. That's the only bits I can change." And I said, "Okay, in that case, I can give you a free churn model." "You can?" "Yes, churn equals one." Brian: [laugh]. James: Because absolutely everyone you speak to has said they want to cancel their service. Therefore, they are at a 100 percent risk of churn. [laugh]. So, yes, you have a churn problem, but no, a churn model will not help you. [laugh]. Other models might, but that's not going to help you because it's way too late for a churn model. You know? Brian: Yeah, yeah, yeah. No, I [laugh] totally understand. So, one thing I wanted to ask you about, and I know that design thinking is part of your work, you have a flavor of that that I want you to go into, and maybe this question will force you to do that. But, one thing that I kind of have a distaste for in this space, in the data science space, when we're talking about models and deploying models is thinking about operationalization as something that's distinct from the technology building process.
I tend-when we think about a system's design, we would say, "Well, that's integral to the success of the product." And it may not be the job of the literal data scientists who built a model to be responsible for all of that, but to not even be considering it or participating in that, or for whoever-what I would call the product manager, the data product manager, whoever's in charge of this-this is a real problem if you're just working in isolation here. So, do you think operationalization should really be a second and distinct step from the technology, or should that be integral to thinking about it holistically, as a system? This is a whole system, there's multiple human beings, departments, technologies, engineering, all kinds of stuff involved with it. What do you think about that? I don't-am I crazy? [laugh]. James: Yeah, I mean, we talk a lot about operationalization, but we would very much, as you do, regard it as part of the project. If you haven't operationalized it, you're not done. One of my pet peeves is where data scientists say, "Well, I'm done." And I'm like, "Well, no, you're not." I had this great call-a journalist called me the other day and she was asking about an interview, and I was being my usual cynical self about AI and machine learning, and she said, "Well, how do you explain notable AI successes?" And I'm like, "Well, give me an example of one." And she said, "Well, the AI that successfully identified tumors in radiology scans that humans had missed." And I'm like, "I'm going to challenge your definition of success." She said, "How can that not be a success?" I said, "Because as far as I know, they haven't treated one patient differently because of it. No patient is healthier today because of that AI. Therefore, they're not done yet. They may well yet make it successful, and I'm intrigued by the potential, but it is not yet a successful AI because it has not yet improved anybody's health outcome."
And that was its [00:07:52 unintelligible]. Because I'm with you: it's not operationalized. We're not finished. You can't declare victory [laugh] at that point. You have to be finishing it. And we often use the CRISP-DM framework-you know, the business understanding all the way around-and one of our key things is, when you get to the evaluation stage, one of the reasons you need to understand the decision making is that you should be evaluating the model in terms of its impact on the decision making, not just on its match to the training data, or its lift, or the theoretical sort of mathematical ROC and all these things. Those are all important, but you should then also say, "And when used in the decision making-" which we understand well enough to describe how it would change thanks to our model because we started by understanding that-"this will be the business impact of deploying the model." And if you can't do that, then why did you bother building the model? It's not anybody else's job but yours to explain the impact of your model on the business problem that your model is designed to solve. So, I'm with you. It is a separate set of tasks that need to be included, but the idea that you're going to have a separate organization do it, I disagree completely with that. I think it's got to be part of the machine learning team, and I think machine learning teams need to hold their members accountable for whether models are successful. And you get a tick, "Very good," on your resume, or your internal job description, when the model is deployed and not before. I don't care how talented you are, if I can't deploy the models you build, well, then there's a problem. I need you to be engaged in that process. And I think that if you read the stuff about MLOps, you know, MLOps and DataOps, AnalyticOps and stuff like that, and I read these descriptions, and I'm like, this is all stuff the IT department already does.
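James's point about the evaluation stage-score the model by the decision it drives, not just by its fit-can be made concrete with a toy sketch. Everything below is invented for illustration (the retention policy, the customer values, the save rate, and both models); none of it comes from the episode or from CRISP-DM itself. It contrasts James's free "churn equals one" model from earlier with a model that actually ranks risk, under a policy that sends a costly retention offer to the three customers the model scores as riskiest:

```python
# Toy sketch: evaluate churn models by decision impact, not fit.
# All numbers (customer value, offer cost, save rate) are invented.

def decision_impact(scores, churned, value=100, offer_cost=10,
                    save_rate=0.4, top_k=3):
    """Expected net value of a policy that offers a retention
    discount to the top_k customers the model ranks as riskiest."""
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])[:top_k]
    # Revenue we expect to keep: offers that reach actual churners
    # succeed save_rate of the time.
    saved = sum(save_rate * value for i in ranked if churned[i])
    return saved - offer_cost * top_k  # minus the cost of every offer sent

churned = [1, 1, 0, 0, 1, 0]                   # ground truth: who churned
constant = [1.0] * 6                           # "churn equals one" for everyone
ranked_model = [0.9, 0.8, 0.1, 0.2, 0.7, 0.1]  # a model that discriminates

print(decision_impact(constant, churned))      # offers go out blindly
print(decision_impact(ranked_model, churned))  # offers reach the churners
```

The constant model has perfect recall, yet under the policy it spends offers indiscriminately; the discriminating model earns a higher net impact even if its headline metrics were worse. That gap, not the ROC curve, is the number James argues should decide whether the model ships.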
There's not a single task in this list of MLOps-y things that isn't done already by the IT department. So, the reason you want to add this to your ML tool is because you just don't want to talk to the IT guys. If you were willing to actually go talk to the IT guys, they've got data pipelines. They've got ways to do any of these things. Now, some of them aren't scaled for the kinds of things that analytics people need. I'm not saying it's a complete thing, but just this idea that somehow the machine learning team has to create all this stuff from scratch, I think it's just because they don't want to talk to the IT guys [laugh]. It's, "I want to be able to stay in my little bubble and do my little thing, and not have to interact with people." I think that's a-in a big company, in the end, you're going to have to talk to the IT guys so you might as well get over it and go talk to them now. Brian: Do you think that's a-I mean, to me, that sounds like a management problem. It's either hiring the wrong people, or not providing the right training, or not explaining like, "This is what the gig is." [laugh]. The gig is really-it's this story. It's this change that we want to push out. It's about making the change. It's not about exercising technical proficiency alone. That is not what the game is that we're playing on this team; if you want that, there's a place to go for it, but I feel like this is just a skill that's not natural because there's a lot of other skills that come into play into doing that well. And I don't know if that's your experience as well, and I wonder if that's just a management change that needs to happen. If it's coming out of people individually that want to do this.
I mean, I have students in my seminar, sometimes, that come in, and I would say they're curious, but oftentimes, it might be a leader wants them to develop this skill outright and they're a little bit resistant to it because they think they're there to do something else, which is more academic in nature. From my perspective, that's how-I was like, "This sounds like academic research work that you want to do with perfecting a model or-" and then you end up with these, what I call, 'technically right, effectively wrong.' Right? It's 92 percent accurate with 1 percent use? You know? [laugh]. James: Yes, exactly. I would agree with you; I think it is a management problem. I think that the big challenge with this often is that the senior management have thrown up their hands. "It's all too complicated, I'm not going to engage with it." And so they say, "Well, here's our data. Tell us something interesting." Or they hire people who think it is their job to use machine learning, and AI, and do these cool things, and they go off and do them. I remember talking to one big company that had hired this big group of AI and machine learning folks. They were spending a ton of money on AI and machine learning. And we had a conversation within a group where we were proposing a slightly different approach. And they were like, "Well, we're not going to do that. We're going to use the AI and machine learning group to do it." And I'm like, "Okay, well, help me understand how they're going to help solve this problem." And so, I'm pushing on them, "Well, how are you going to solve this problem?" "Well, we're going to use AI and machine learning." "I know. How are you going to solve-this person, she's sitting right here. She runs this claims group. How are you going to solve the problem she just described? What kinds of machine learning and AI? Apply it how?" "Well, we're going to use AI and machine-" I mean, really, this person had no idea, no interest in the business problem.
"We're going to use AI and machine learning. We got a big budget, big team, rah, rah us." And I'm like, "And your job here is, apparently, to spend money on machine learning and AI." [laugh]- Brian: Yeah, and this- James: -not, in fact, to make any. You know? Brian: [laugh]. Well, that's the thing. It's like-and I think this especially for leaders in this space-do you want to be associated with a cost center? Or do you want to be a center of excellence and innovation? And I think the C-level team is thinking that AI is a strategic thing, we have all this data, everyone else is doing it, I need to be in the race. And some of this is hype cycle stuff. Some of this is legitimate, that they should be thinking about this, but the assumption is, "Well, hire this team, and then I'm going to get all this magic dust falling from the sky." And it doesn't happen that way. Like, with any technology, it really doesn't happen that way. You have to think about operationalizing it, you have to think about the experience of the people using it. And what's the story? How do you change people's mindset? How do you change existing behaviors? And what's people's fear and trust? There's all these aspects that are human things. James: Many years ago, when I was a young consultant, I was working, writing a methodology for IT projects. And then we merged it with an organizational change methodology. And this old organizational change consultant-he seemed really old at the time, but he's probably my age now. But at the time, I was young, so he seemed really old-and he said, "James, you technology people are so funny." He said, "You think it's all a technology problem. It's always an organizational change problem." Brian: Yeah. James: And now I'm old, I have to say that he had a certain point. It's always going to be an organizational change problem. And that is, in fact, the number one issue. And we lovingly refer to our customers as big boring customers. 
They're big boring companies. And if you're a big boring company-and most people work for big, boring companies-you're constrained by all sorts of things. So, I was just talking to an analytics head, a very, very smart guy, very interested in machine learning, works at a big bank. And one of the teams that he's working with, one of the platform providers, said, "Well, when it comes to making offers to people based on responding to events with an offer, why don't you just let the machine learning learn what works best?" And he's like, "So, if you're looking at, say, the mortgage page, to begin with, it would just randomly pick an offer: a car loan, a savings account, right? It's got nothing to do with the fact that you've spent the last 30 minutes looking at the mortgage, that's not going to give that customer a good sense of the bank." I need to constrain those offers to at least the ones that make some kind of sense. And if I know which customer it is, to at least eliminate the ones that they're not allowed to buy. Brian: Or a product they have already. James: A product they have already. Or a product that they can't have because they don't have another product that it relies on, or a product they can't have because they've already got another product that's considered by the regulator to be a comparative product and you can't own both. And on and on. And so, "No, I'm not just going to let the machine learning model pick. I want to decide some structure for this. And then I want the machine learning to help me get better at it." And so that's a persistent thing with us: machine learning is not a substitute for hard work, for thinking about the problem, understanding your business, doing things; it's a way of adding value. It doesn't substitute for things. 
It adds more insight, more precision, new opportunities, and so on, but this idea for most big companies that they can throw out what they already know and just replace it with machine learning, I think is, as you say, part of the hype cycle. They're just going to spend a lot of money and get nowhere. Brian: Yeah, yeah. Talk to me about how you implement design thinking-or whatever you want to call it-this process of human-centered design into your work, and why does it matter? Have you seen results from it? How does it connect? Are clients resistant to that? James: Sure. So, one of the things we find is that, obviously, with the principles of design thinking, there are several things. You want to be very focused on the people involved, and you want to prototype things, and you want to show people, not tell them. All this kind of standard design thinking stuff. And what we found is when it comes to decision making-because if you're trying to do analytics, you're trying to improve decision making. Well, you can prototype UIs if there's going to be a UI involved and stuff, but what you really want to do is you want to understand the decision making because that's the thing you're trying to redesign. And so we use decision modeling. So, decision modeling is a graphical notation for drawing out how you make a decision. So, sort of a way of sketching out what are the pieces of your decisions and the sub-decisions and sub-sub-decisions, and so on. Just like a process model describes a process, and a data model describes a database, a decision model describes a decision. And so you lay out this decision model and we always begin by asking people, how do you decide today? Don't tell me what you'd like to do or what you think you should, but what do you do today? What should you do today? And obviously, we say 'should' because sometimes there are inconsistencies or whatever. But we really try and start with that and define that. 
And what we find is that no one's ever asked them this before. People have said, "What data do you need?" Or, "What kind of analytics could we give you?" Or, "How would you like the UI to look?" Or, "What kind of report do you need?" But no one's actually said, "So, how do you decide to pay this claim and not pay that claim? How do you decide to lend Brian money for a car and not lend James money for a car? How do you decide?" And it turns out, like in any design thinking, people really like to tell you. [laugh]. So, they tell us, and then we build a decision model. So, now we have a model that's a visual representation that they can say, "Yes, that looks like how we decide, or how we ought to decide today." And this gives us a couple of things. It means we can actually prototype the decision making because we can say, "Well, okay, let's take a real example of a real customer, how would you decide their lifetime value? How would you decide their credit risk? How would you decide which products they've already got? How would you-" we can work our way through the model, essentially prototyping how that decision would really work for a real example. So, you can really get a very robust understanding of, "Okay, this is really where you are today, how you decide today." Well, now you can ideate. We have this game we play called the 'If only' game. And we'll say, "Well, okay, fill in the blanks. If only I knew blank, I would decide differently." And then people will go, "Oh, hm. Well, if only I knew who had an undisclosed medical condition. If only I knew who wouldn't pay me back. If only I knew who had life insurance with another company." Okay, well, now the machine learning team can go, "Okay, so what if we can predict that-and how accurately might we have to predict it?" 
And then you can ask all sorts of interesting questions because-I'll give you a concrete example, we did the exercise with some folks here in disability claims, and they were trying to decide if they could fast-track the claims. Fast-tracking just means we just sort of send you a few emails, and then we pay you right? We don't go through a big interview, have a nurse come visit, the whole production. And they're just trying to fast-track these claims. And so we asked this 'if only' question. She said, "Well, if only we knew whether your claim matched your medical report." You're claiming for something-yeah, you've broken your leg or whatever it is-does your claim match the medical report that you attach to the claim? In other words, does the medical report say you broke your leg? And the analytics team were like, "Well, we'd started looking at text analytics to analyze the medical reports, but we assumed you'd want to know all the things that the medical report said. You know, 'what are the things wrong with you that are in this medical report?'" And she said, "No, no, no, no. I just need to know the one you're claiming for is in the medical report." And the analytics team is like, "Oh, well, that's a lot simpler. That's a much easier problem." Because you don't care if you have diabetes, or you're overweight, or you're [00:20:24 unintelligible]. I actually don't care. I just care that you have, in fact, broken your leg. Okay. And they said, "All right. So, if we did that, how accurate would it have to be?" And she famously said, "Better than 50/50." And they practically fell off their chairs. And I kid you not, they made her say it again while they recorded her- Brian: Wow. James: -because they didn't really believe her. And she's like, no. I mean, it's a fast-track process. We've already got other bits of the decision that eliminate people who've got mental health issues, or long term care issues, or-we never fast-track those. 
So, we're just looking at the ones which we might fast-track. And for the ones we might fast-track, if the medical report probably says you have the same thing you're claiming for, that's good enough to fast-track you because we've got steps later to double-check all this stuff. We're not going to pay you because of this decision. We're just trying to fast-track it and avoid cost in the process. And, frankly, it's a sniff test. As long as it's better than 50/50, we're good. So, 60/40, something like that would be great. And of course, the analytics team, I talked to them afterwards. I'm like, "So, you originally had this plan to build a minimum viable product to come up with-" I forget if it was 85 or 95-"Percent accurate assessment of all the conditions in the medical report that was going to take you the rest of the year-" This was like in February or something. "How long is this going to take you?" They're like, "You know, we'll probably be done in a couple of weeks-" [laugh], "-because we just need to do something very rough and ready. You say this is what's wrong with you, it's got an ICD-10 code, how likely is it that this medical report includes that?" And so we collapsed the minimum viable product from nine months to a few weeks because that's what she actually needed to improve that decision. And that to me was like, "Okay, this is why we do it this way." [laugh]. Brian: Did you get a sense of, like, skin crawling that, like, "50/50? But that's just guessing, practically?" Was there a sense that- James: Oh, yeah. Brian: -how could you possibly accept a double F-minus? Like if we're in school, right, that would be like, you're so far from failing? And it's like, no, it's just 51 is good. [laugh]. James: Exactly. Exactly. No, it was- Brian: Did that make people's skin crawl? James: -difficult. Yes. For sure. 
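The "rough and ready" check James describes might be sketched as follows. This is purely an editorial illustration of the idea, not the team's actual implementation: every name, the keyword table, and the match rule are assumptions. The point is that checking whether the report mentions the one claimed condition is a far smaller problem than extracting every condition.

```python
# Hypothetical sketch: does the medical report plausibly mention the one
# condition being claimed? Deliberately crude - the bar was "better than 50/50."

# Illustrative keyword table mapping (truncated) ICD-10 categories to words
# that suggest the report mentions that condition.
CONDITION_KEYWORDS = {
    "S82": {"tibia", "fibula", "leg", "fracture"},    # lower-leg fracture
    "S52": {"radius", "ulna", "forearm", "fracture"}, # forearm fracture
}

def claim_matches_report(claimed_icd10: str, report_text: str) -> bool:
    """True if at least two keywords for the claimed condition appear."""
    keywords = CONDITION_KEYWORDS.get(claimed_icd10[:3], set())
    words = set(report_text.lower().split())
    return len(keywords & words) >= 2

def fast_track(claimed_icd10: str, report_text: str, red_flagged: bool) -> bool:
    # Claims with mental-health or long-term-care conditions never reach this
    # point (filtered upstream), and later steps re-check before any payout.
    return not red_flagged and claim_matches_report(claimed_icd10, report_text)
```

A two-keyword overlap is obviously not 95 percent accurate, but in this decision it only has to beat a coin flip, which is exactly why the nine-month plan collapsed to weeks.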
The group split into two, there's definitely the ones who are like, "Cool, that means we'll be able to get some value out of this quickly, get a minimum viable product-" more agile thinking types-"And we'll be able to go on to something else." And there were other people who were clearly extraordinarily uncomfortable with the whole notion, with the idea that this would be useful, and were like, "Well, surely a more accurate model would be better?" I'm like, "Well, probably, but probably not a lot better because she's only got two choices. She can fast-track it or not fast-track it." And at some point, the extra work you spend to make it more accurate is just not worth the payoff. Brian: Tell me about prototyping and design. You talked about decision-centric dashboarding as well, so talk to me about when you get down into the interfaces and how do you prototype and test this stuff to know that it's going to work before you commit? James: So, one of the things we focus on is we're typically focused on high volume decisions. So, what we have found is that when you've got a transactional kind of decision, a decision about a customer, about an order, about a transaction, you're much better off applying analytics to an automated part of the decision than to the human part of the decision. So, what we tend to do is we'll build these decision models, and then we'll identify an automation boundary: which bits of this model can we automate? And then we'll try and capture the logic for that decision making, as it stands today. So, now we're not making the decision necessarily any better than the way we used to, but now we're making it repeatedly and repeatably-you know, consistently-and we can start to generate data. So, we start to save off how we made that decision. 
So, we'll say, "Well, we made this decision to pay this claim because we decided your policy was in force, we decided the claim is valid, we decided there wasn't a fraud risk, and we decided it wasn't wastage. And we decided your policy was in force by deciding these things. And we decided that your claim was valid by deciding these things." And we have that whole structure for every transaction. Now at that point, we haven't done any analytics, but we have got control of the decision. Then we start to say, okay, which bits of this decision model could be made more accurate by applying machine learning, a prediction, or so on. And then what we're trying to do is then tweak the rules, take advantage of that new prediction. So, a rule might have said, I've got a set of red flags: if you've ever lied to me before, if you went to a doctor who's lied to me before [laugh], if you went for a service that I have lots of issues with, then it got red-flagged, and so it's going to get reviewed. But if it didn't get a red flag, I'm interested in whether you could predict that it should have got a red flag. So, this is the first time this doctor's lied to us, but he's got the characteristics of a doctor who's going to lie. This is the first time you've lied to us, but you have the characteristics of someone who might lie to us. This is the first time we've had this treatment, but it smells like the kind of treatment that gets red-flagged a lot. So, I don't have an explicit reason to reject this claim. But perhaps you can predict that I probably ought to at least look at it. And then we change the rules slightly. So, now the rules say, "Well, if you had a red flag, it goes to review and if you didn't have a red flag, but this predictive model says, 'smells bad. Looks too much like an outlier. Looks too different from the usual run of the mill stuff,' then we'll review it anyway." 
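That rules-then-prediction routing could look something like this. Everything here is an illustrative assumption (field names, the stub score, the 0.8 threshold); the structure is what matters: explicit red-flag rules run first, and a predictive "smells bad" score catches the first-time cases the rules can't see yet.

```python
# Sketch of the routing logic described above; all names are hypothetical.

def has_red_flag(claim: dict) -> bool:
    # Explicit, human-written rules: known-bad history always gets reviewed.
    return bool(claim.get("claimant_lied_before")
                or claim.get("doctor_lied_before")
                or claim.get("problem_service"))

def suspicion_score(claim: dict) -> float:
    # Stand-in for a trained model scoring "characteristics of someone who
    # might lie to us"; here it just reads a precomputed score.
    return claim.get("model_score", 0.0)

def route(claim: dict, threshold: float = 0.8) -> str:
    if has_red_flag(claim):
        return "review"        # the rules already say so
    if suspicion_score(claim) >= threshold:
        return "review"        # no explicit reason, but it smells bad
    return "auto-pay"
```

Because every decision is logged with its sub-decisions, the model can later be trained and evaluated against how the explicit rules actually behaved.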
So, you start to add value by identifying things that were missed in the current explicit version of it. So, we very much focus on automation first and improvement second. So, to your point, we will prototype the models, but then generally the next step is to automate a chunk of it, so that we can start to run simulations and automate the decision making, and then start applying analytics to it. And the dashboards we build are mostly about how we made decisions, not about making a decision. It's like, "Here's a dashboard that shows you how the decisions were made so you can improve your decision-making process." Rather than, "Here's a dashboard to make a decision." We do those occasionally, but we often find that what people think mentally is if there are humans and automation involved in a problem, that they think, "Well, the machine is going to make a bunch of decisions, and then I'm going to make the final choice." And we often find the reverse is true. I need you to make some key judgments about this customer, or this transaction, or this building. Once you've told me what those are, I can wrap those into an automated decision and do the 'so what' because the 'so what' is pretty well defined. We have one example with a logistics effort where they had to pick a ship for a given logistics shipment. And there's all this stuff in there about predicting arrival times and departure times, whether it's-obviously, there's a bunch of rules about is it the right kind of ship? There's some analytics. Predicting its arrival time: is it going to be available? Is it likely to need repair and all those things? But then there was one last bit which is, is it seaworthy? Well, someone has to go look at the ship. That's not the final decision, that's one of the inputs, but someone's got to go do that. And so we see that a lot more often. So, often, we're taking predictions, taking some human judgment, and then wrapping that into an automated 'so what' framework. 
Brian: In a solution like that, are you also tasked with sometimes figuring out that whole end-to-end process? Let's say there actually is a guy with a camera-or gal-who goes to the dock and takes pictures of-maybe this is a bad example, but do you ever think about that holistic entire experience and clients are kind of realizing this is ultimately part of a decision mindset- James: Yeah, for sure. Brian: -is that literally we have to include that part of this thing into it and we have to make that whole experience work? And- James: And it has to work. Exactly. Yes. Very much so. We often see, for instance, what people will start with is their first mental model of automation is, "I'm going to automate it and you're going to override it when it's a bad idea." The problem with that is I don't really know why you override it and no matter how much work I've done on the automation, you still have to do all the work again to decide if you're going to override it. And one of my pet peeves is people will say, "Well, the AI will tell you whether you should pay the claim or not and it'll be transparent about why it came up with that, and then you can decide if you're going to pay the claim." And I'm like, "Well, but then I have to read the claim. I thought the whole point was, I didn't want to read the claim." [laugh]. It was kind of the point. Brian: Well, it depends on what the success criteria was, right? In that case, if the real goal was to prioritize what stuff needs human review because there are so many claims coming in- James: Oh, sure. Right, yeah. Brian: -then the AI is providing decision support, it's saying, "Likely, likely, likely, likely, likely." And you're like, "Check, check, check, check, check, check, check," really fast because you've accelerated that claims processing thing if that was the goal. James: If you trust the AI, right. Yeah. And the problem is, well, how do you develop trust in the AI? Well, you have to go look at the claim. 
And so [00:30:57 unintelligible] example, what we had was there was a bunch of things that could be-like underwriting that are very rules-based: there's a life underwriting manual, certain things have to be true. We ask you these questions. And then what we found is there were places where we needed to make an assessment of your risk in certain areas. And so what would happen is we would try and decide with the data you'd submitted. And then if we couldn't decide, it would basically use a process to reach out to the underwriters and say, "Hey, look, we got this customer coming. We don't need you to underwrite them so much as we need you to assess how crazy a scuba diver are they?" Because they said they're a scuba diver, but we don't really know how crazy a scuba diver they are. And we know that the algorithm requires us to know if they are a casual scuba diver, a serious but safe scuba diver, or a nutjob. You know, scuba diving alone, at night, in a dark underwater cave. Okay, so we're totally going to charge you extra for your life insurance. Brian: Right. James: And so, it's easier for you to do that, or we can't do that, whatever. So, then what the process is doing is it's reaching out to people to say, the decision can't be made unless we get your inputs here. So, instead of saying I can't make the decision, here's all the data. [vomit noise], you decide. It says, I can't decide because I need you to decide these two or three things. So, you're being asked to make these very specific judgments in the context of this application, but you're not just being dumped back into the process. We're not just throwing it over the wall and saying we've given up, right? Brian: Right. Abort. And-[laugh]. James: Yeah, exactly. And then the other thing we found is that once you do that, you start to identify-well, you can start to say, "Well, what are we asking Brian to do in this circumstance? Well, we're asking him to go look at this historical data, and draw a conclusion about trends." 
"Okay, well, trend analysis is something analytics are pretty good at, so maybe we could, in fact, use a machine learning model there at least some of the time." Versus, "No, it's much more of a conversation with this very qualitative kind of stuff." Then we might go, "Okay. That seems like that's too hard to do with analytics right now. We'll continue to ask a person." And so by breaking out the role very specifically, you can be much clearer. My favorite one was a medical one, where one of the key decisions for treatment selection was, "Does the patient look like they will survive surgery?" And people were like, "Well, how are you going to automate that?" And I'm like, "Well, I'm not going to automate that. I'm going to ask the surgeon." [laugh]. And they said, "Well, why wouldn't you just let the surgeon override a bad answer if they didn't think you were going to survive surgery?" And the surgeon themselves who was in the meeting said, "Well, because sometimes it's still the right answer. You're so sick, you're going to die if I don't do surgery. So, I would like the engine to say, even though I put in, 'I don't think Brian's going to survive the surgery,' then it might come back and say, 'Well, too bad. Nothing else has any chance of working except surgery; you're going to have to go ahead and try.' Whereas if I say, well, he probably will survive surgery, maybe it would suggest surgery more often; if I say he's not going to, just less often." But that's part of the decision. It's not an override because once I override it, then I'm ignoring all the other bits of it. And so that for us has been the key thing: once you focus on the decision at the end, you're bringing people in to provide their expertise as part of the decision making. And so we tend to design everything from that perspective-"Which bits of the decision do we need you to make?"-rather than, "How can the computer help you make a decision?" 
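The surgery example can be made concrete with a tiny sketch. All names and numbers here are hypothetical assumptions; the structural point from the conversation is that the surgeon's judgment is an *input* to the decision model, not an override of its output, so "won't survive surgery" can still yield "surgery" when nothing else has any chance of working.

```python
# Hypothetical sketch: human judgment feeds into the decision rather than
# overriding it. Treatment options map to an estimated chance of success.

def select_treatment(options: dict, surgeon_says_survivable: bool) -> str:
    """Pick the treatment with the best estimated chance of success,
    weighing in the surgeon's survivability judgment."""
    scores = dict(options)
    if not surgeon_says_survivable and "surgery" in scores:
        scores["surgery"] *= 0.2  # heavily discount, but don't eliminate
    # Surgery can still win if no alternative has a realistic chance.
    return max(scores, key=scores.get)
```

With an override model, "won't survive" would simply veto surgery; here it only changes the weights, so the engine can still answer "too bad, nothing else can work."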
And then, okay, obviously, we still have to ask, how can we help you make a good one, but what we're really trying to do is embed your decision-making into an overall, effective decision-making approach. Brian: Yeah, sure. I think you've laid that out clearly, too, especially in a case like this with medicine, where the experience and participation of humans in the loop-that are part of the decision making-we're not talking about complete automation, and there's these squishy subjective areas that you get into, and what is that process? Do you feed data back into the model? Do you kind of go offline at that point, and you take the machine's best guess plus the human, and you go to some manual process? There are lots of different ways to think about that, but you have to think about how they're going to provide their insight and their thinking into the final decision to see if we're actually doing anything of value with any of this stuff. We have to holistically think about that operationalization to be successful. James: Absolutely, yes. I mean, and medicine is a good one because it's the physical interaction with the patient. We're doing some in manufacturing and some other places. And then there's still visual inspections. There's still a sense, right. We talked to an independent system operator, and one of the things they were like, people tell us which plants are going to be producing power next month, we don't always believe them. Okay, so I need a way to say, I know technically the data feed says, "These are the plants that are available for power generation next week." I don't think they'll be ready because I've talked to the head engineer over there and I think they've got a more serious maintenance problem than they think they've got. So, I think it's going to be at least another week. So, I don't want to build an optimization that assumes this plant will be ready on Monday because I don't think it's going to be ready on Monday. 
Well, you need to understand where that fits in the decision-making, otherwise you can't really take advantage of it. You just have people who have this nagging sense that the automated system is going to be wrong. [laugh]. And that doesn't help anybody. Then they have the problem you said, which is, it works; no one uses it. Brian: Yeah, exactly. James: It's 95 percent accurate, and it's used 1 percent of the time. "Okay. Well." Brian: Yeah [laugh]. James: Knock yourselves out. Brian: No one's applauding? [laugh]. James: Yeah, exactly, exactly. In one session, I came up with this sort of dictionary definition of a valuable analytic. And I said, "A valuable analytic is one where the organization that paid for it can identify a business metric that has been-" note use of past tense-"Improved because of the analytic." That means you have to have deployed it, and it has to have had an impact on a metric that I track as a business metric, and I can say, here's my metric before the analytic, here's my metric after the analytic. That metric has improved, and my competitors have not seen the same improvement from environmental factors, therefore, the analytic is why my results are better. Once you apply that, it's like crickets. "Okay, I'm listening. I'm listening. Anyone got one?" And it's really quiet. "Well, when we get it deployed, it will." "Yeah, when it's fully rolled out, it will." "When we apply it to the other 99 percent of the portfolio, it's definitely going to have a big impact." "Yeah, okay. Well, call me when that happens." [laugh]. Brian: Yeah, this is the outcomes over outputs mentality, you know? James: Yes, yes. Brian: That's a big move for-it's a big leap for a lot of-in my experience-very tactically talented people to make that change with a mindset that I'm really here to help the business organization achieve outcomes. James: Outcomes. Exactly. Brian: And it's a big change. James: It is a big change, and it's very hard for people. 
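James's "valuable analytic" definition reduces to a simple test, which this toy function captures. The function and its inputs are an editorial illustration, not something from the episode: it assumes a higher metric is better and that a competitor's change over the same period proxies for environmental factors.

```python
# Toy version of the "valuable analytic" test: the metric must have
# *already* improved (past tense), and by more than what environmental
# factors gave competitors over the same period.

def analytic_was_valuable(metric_before: float, metric_after: float,
                          competitor_change: float) -> bool:
    our_change = metric_after - metric_before
    # A projected future improvement ("when it's fully rolled out, it will")
    # is our_change == 0 today, which fails the test.
    return our_change > max(competitor_change, 0.0)
```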
They struggle with the fact that the right answer might be a very dull model, built with a very dull technique, using very small amounts of data, and that may be enough to move the needle. Brian: That's not what I got this PhD for. [laugh]. James: Exactly. Brian: Do you know how much that cost? James: Exactly. Yeah. And so it depends, and I'm with you: if you really want to do research, then there are organizations out there that want to hire researchers, so go work for one. Down at Microsoft, Google, these people have huge research departments working on techniques and everything else. Like Jim Goodnight used to say at SAS, he says, "You know, I need to hire PhDs," he said, "Because I need to figure out how to make this stuff work for everybody. But you guys, you need it to work." And so, I think that's a big shift for people. And I think not just for the individuals because I think your point earlier is very valid. The way these groups are structured, the way they're motivated, the way they're paid, the way they're led, all of those things have to change, too, because far too many companies just say, "You guys are smart. Figure it out. Tell me what I should do." And it's just like, well, that's basically a waste of time. Brian: Yeah. No, I understand. I think that's partly why learning how to frame problems, and sometimes this may-your job may be to go and help the room-the stakeholders, the people-extract a problem everyone can agree on and help them define that. And it may not feel like that's what I went to school for; that's not what I was trained to do, but that is what is going to allow you to do work that matters. James: Yes. Exactly. Brian: Work that people care about. And someone needs to do that. And that's a big part of what I train in my work, because otherwise you're just taking a guess, and you're going to end up not being happy, probably. Most people like to work on stuff people care about. "Oh, we shipped this. It got used. It made a difference." 
It's so much more fulfilling, at least for me it is, to work on things people care about. So. [laugh]. James: I would agree with you completely. And I think we have to think about how we motivate, and train, and encourage folks. And I also think my observation is that the companies I work with that do the best job of this have much more of a mix of internal people that they've trained and external people that they've hired. They've got people who come in saying, I know that this is important to my company, you know, the way we interact with our partners, the way we interact with customers, never missing a customer order delivery date. These things matter-I've been here a long time; we talk about them all the time. So, when I look at a problem, I'm looking at how I use analytics to make that better. If I come in as an analytics person, I don't know your business, particularly. Well, it is much easier for me to focus on the analytic as an outcome. That's what my job is, right? So, I think companies also need to stop trying to just hire everybody fresh from outside and set up a separate group and think much more about how do I infuse this overall analytic effort with people who really have a feel for what matters to the company because then they'll be focused on it. Brian: Yeah, absolutely. James, it's been a great conversation, and I know you have a book. And so I assume that's a great way to kind of get deeper into your head. So, tell me about the book. Who is it for? And where can my audience get it? James: The newest book is called Digital Decisioning: How to Use Decision Management to Get Business Value from AI. And it really is based on, like, 20 years of experience-mine and others-of trying to say, how do you effectively automate decision making so that you can take advantage of machine learning, and AI, and these technologies? And really lays out a methodology. It's only a couple hundred pages. 
So, it's more high-level, aimed at a line-of-business head, or someone who's running an analytics group or running an IT department who's interested in this stuff, lays out how do you go about discovering what kind of decisions you should focus on and then how do you build these kinds of automated decisioning systems and figure out how machine learning and AI fits in there. So, that's the intent of the book. It doesn't tell you how to build models, it doesn't tell you how to write rules, it doesn't even really tell you how to build a decision model; it just tells you when you should do those things and how they fit together. So, it's much more of an overview book to help get people started. So, that's a place to start. It's available on Amazon and Kindle. There's a Chinese version and a Japanese version being worked on, but they're not available quite yet. But they will be and so- Brian: Excellent. James: -my publisher said, "You have to write it in less than 200 pages, James." And it is 200… pages. Brian: 199 pages. [laugh]. James: More or less, yeah. It's more or less exactly 199 pages. There's a problem with doing it-as you know, a problem with doing something a lot is that you can talk about it for hours, right? [laugh]. Brian: Yeah, yeah. Sure, sure. Awesome. James: But anyway, so yeah, that's the book. Brian: So, are you active on social media? I know you have a blog. Is that the best place to kind of watch your work or a mailing list? What's the- James: Yeah, you can watch the blog. The company has a website with a blog and I have a blog at JT on EDM. I'm also on Twitter, @jamet123. No S. And that's pretty good. I'm on LinkedIn, people can find me. It can be hard to find me because my name is James Taylor and there are lots of James Taylors, but I've been on LinkedIn so long that my profile is actually slash jamestaylor. Brian: Oh, sweet. All right. [laugh]. Congratulations. James: That's the advantage of living in Silicon Valley. Is that [laugh] I got in there early. 
So, you know. Brian: That's great. Apparently, there's a big mar-by the way, just to end this, but did you-there's a whole market I guess for people who get the one letter handles and then they sell them. James: Oh, yeah. Brian: Do you know about them? James: This kind of stuff goes on all the time. If you ever can buy a domain name that's got real words in it, right- Brian: Oh, right. Yeah. James: And I get all these complaints sometimes, that people are typing my email address because it's at decision management solutions dot com. And someone was complaining, and he worked for one of these companies where in order to get the domain name, they basically misspell the name. Right. Brian: Right. [laugh]. James: Yeah. And I'm like, "Really? It's like you're complaining about the length of my-at least they're all real words." Brian: Right. Exactly. James: I haven't had to make something up, you know? Brian: It's management with an X. Oh. Okay. Excellent. James: Yes. Yes, decision, but D-E-X-I-S-O-N. Brian: [laugh]. Well, James, it was great to have you on Experiencing Data. Thanks for sharing all this great information about decision-making. It's been fantastic. James: You're most welcome. It was fun to be here, and stay safe. Brian: All right, you too.
30 minutes | Nov 2, 2020
051 - Methods for Designing Ethical, Human-Centered AI with Undock Head of Machine Learning, Chenda Bunkasem
Chenda Bunkasem is head of machine learning at Undock, where she is focusing on using quantitative methods to influence ethical design. In this episode of Experiencing Data, Chenda and I explore her methods for designing ethical AI solutions, as well as how she works with UX and product teams on ML solutions. We covered: How data teams can actually design ethical ML models, after understanding if ML is the right approach to begin with How Chenda aligns her data science work with the desired UX, so that technical choices are always in support of the product and user instead of “what’s cool” An overview of Chenda’s role at Undock, where she works very closely with product and marketing teams, advising them on uses for machine learning How Chenda’s approaches to using AI may change when there are humans in the loop What NASA’s Technology Readiness Level (TRL) evaluation is, and how Chenda uses it in her machine learning work What ethical pillars are and how they relate to building AI solutions What the Delphi method is and how it relates to creating and user-testing ethical machine learning solutions Quotes From Today’s Episode “There's places where machine learning should be used and places where it doesn't necessarily have to be.” - Chenda “The more interpretability, the better off you always are.” - Chenda “The most advanced AI doesn't always have to be implemented. People usually skip past this, and they're looking for the best transformer or the most complex neural network. It's not the case. It’s about whether or not the product sticks and the product works alongside the user to aid whatever their endeavor is, or whatever the purpose of that product is.
It can be very minimalist in that sense.” - Chenda “First we bring domain experts together, and then we analyze the use case at hand, and whatever goes in the middle — the meat, between that — is usually decided through many iterations after meetings, and then after going out and doing some sort of user testing, or user research, coming back, etc.” - Chenda, explaining the Delphi method. “First you're taking answers on someone's ethical pillars or a company's ethical pillars based off of their intuition, and then you're finding how that solution can work in a more engineering or systems-design fashion.” - Chenda “I'm kind of very curious about this area of prototyping, and figuring out how fast can we learn something about what the problem space is, and what is needed, prior to doing too much implementation work that we or the business don't want to rewind and throw out.” - Brian “There are a lot of data projects that get created that end up not getting used at all.” - Brian Links Undock website Chenda's personal website Substack Twitter Instagram Connect with Chenda on LinkedIn Transcript Brian: Hi, everyone. Welcome back to Experiencing Data. This is Brian O'Neill, and today I have Chenda Bunkasem on the line, an AI research scientist in question, right? You're not quite sure is that what I just heard? Chenda: [laugh]. Yeah, there's debate within the scientific community about titles. So, you know, you always have to be skeptical. Brian: Exactly. So, maybe we could jump into whether or not you're a scientist, and what the heck you're doing with machine learning and AI. I saw Chenda on a webinar that was about ethics in this area, and as listeners to the show will know, I tend to think about ethics as kind of Human-Centered Design at scale when we're kind of looking beyond just the end-user or our immediate stakeholders, and we're kind of looking to the outer circles of how the technology affects other people.
So, Chenda, tell us a little bit about you, your background, whether or not you think you're a scientist, and what you're doing these days. Chenda: Yeah, yeah. So, I've been conducting machine learning, artificial intelligence research for quite some time now. I'd say about three years. Happened to accidentally write my computer science dissertation on AI, with just the intention to make a really cool video game script, which is awesome. And it led me into conducting machine learning research on the Google Stadia project, which is a cloud-based gaming platform, and then eventually, in an AI startup at the World Trade Center, focusing on simulations and synthetic data, which I'll go more into later on. So, all of this has just kind of accumulated into what I'm working on now, which is using a lot of these applied strategies for data-driven ethics and data-driven algorithms with very many different applications. Brian: Tell me about quantitative ethics. I think that's what I first wrote down when I was like thinking about talking to you and having you come on the show. So, can you dig into what this is? And I'm also curious to see how you might relate this to your early user experience work that you did as well. We talk a lot about qualitative research when we do user experience design, so I'm curious about these two things: quantitative ethics and research there, and then also the qualitative piece. Can you talk about that a little bit? Chenda: Right. So, a lot of the times, especially with regards to ethics and data, there's often an approach that's very, very abstract; it's qualitative, as you said before. And while I was on that panel at Hypergiant, I spoke about using quantitative methods to come up with better decisions for how we ethically design systems. And it's funny because in a way, you’re outputting very data-driven decisions with these systems, and it would only make sense intuitively that you'd make data-driven decisions in how they're designed.
And so this was the seed for how I started to think about more ethical artificial intelligence research in both the UX sense and in a deeper technical sense. Brian: So, what's different about your approach? Or how do you go about doing it ethically versus not? Walk us through what the process of doing a project, or working on a product, or models, et cetera, et cetera, how do you approach the ethics piece here? Chenda: So, there are a couple of ways we do this. One of them is to definitely align any sort of research and development along a scale that we call a ‘technology readiness level.’ NASA actually uses it. But what's interesting is that only recently has there risen what we call a TRL level—technology readiness level—label for machine learning systems, and so there's a lot of space for people to come in and add pieces where it might be better to make a decision on TRL seven versus nine and design a system that takes use case into account. So, TRL stands for technology readiness level, and labs such as NASA use it—I’m sure SpaceX uses it as well—and it's the meter by which we as researchers can determine whether or not something in the lab or something within research and development is ready to either be productionized, or what stage it's at. And oftentimes, it's here that these ideas needed to be honed already. They need to be honed and sharpened in a sense where, is what we're researching ethical? Brian: So, how does one go about getting a score? For example, what's the process of that? And I'm curious, is there any involvement with the people that are in the loop that are going to use this or be affected by it? Is there some type of research activity or something that goes into scoring these things? Chenda: Right, definitely. So, that's a great question.
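For readers unfamiliar with the scale Chenda references: NASA's TRL runs from 1 (basic principles observed) to 9 (actual system proven in operation). Neither NASA nor Chenda prescribes code for this, but the kind of "is this ready to productionize?" gate she describes could be sketched like so (the labels are paraphrased from the standard NASA scale, and the threshold of 7 is purely an illustrative assumption):

```python
# Hypothetical TRL gate for ML R&D. Labels paraphrase NASA's
# standard 1-9 Technology Readiness Level scale; the production
# threshold below is an assumed example, not a NASA rule.
TRL_LABELS = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in lab",
    5: "technology validated in relevant environment",
    6: "technology demonstrated in relevant environment",
    7: "system prototype demonstrated in operational environment",
    8: "system complete and qualified",
    9: "actual system proven in operational environment",
}

def ready_for_production(trl: int, threshold: int = 7) -> bool:
    """Return True if the assessed TRL meets the (assumed) gate."""
    if trl not in TRL_LABELS:
        raise ValueError(f"TRL must be 1-9, got {trl}")
    return trl >= threshold
```

A team might log `TRL_LABELS[trl]` alongside each model review so the "what stage is it at" question Chenda raises is answered explicitly rather than implicitly.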
I actually introduced something called the Delphi method during that panel with Hypergiant, and I find it very interesting because there are very many different sort of combinatorial approaches to decision-making where you either have a blind vote, or you have people contributing with—as we said before—quantitative data. So, exactly how an algorithm works, the exact law or the exact policy around, let's say, user privacy rights. It's about designing communities and curating discussions with domain experts who understand the inner workings of their fields, and then these people coming together to help seed how you would create these systems, let's say, taking use case into account from the get-go. And it's extremely complex. The quantitative feature comes from the fact that the method itself, the Delphi method, isn't just discussion, it's not just debate. Brian: What else do you guys go into with that? Chenda: So, what we'll do, and it's interesting because I'm actually taking up a research project in 2021, to submit to an AI ethics conference in spring on this topic, and so, we're actually still very, very early in its stages, so I'm kind of disclosing or describing how we're starting to form this. But first, we bring domain experts together, and then we analyze the use case at hand, and whatever goes in the middle, the meat between that, is usually decided through very, very many iterations, after meetings, and then after going out and doing some sort of user testing, or user research, coming back, et cetera, et cetera. Brian: Mm-hm. Can you talk to me about what a test looks like? Like a user test? Or what that field research looks like? What do you actually do, like, step by step with somebody? Chenda: So, the first thing is, whatever client that we decided to work with, or at least in this sense, the company that is conducting this research, we actually start with their ethical pillars first.
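The quantitative side of the Delphi method Chenda mentions is usually implemented as rounds of anonymous expert estimates that are fed back to the panel until the group converges. She doesn't specify an algorithm, so the following is only a minimal numeric sketch of one common variant (median feedback with a fixed revision weight, a convergence tolerance, and all function names being assumptions of mine):

```python
import statistics

def delphi_round(estimates, weight=0.5):
    """One Delphi iteration: each expert sees the group median
    and revises their estimate partway toward it."""
    median = statistics.median(estimates)
    return [e + weight * (median - e) for e in estimates]

def run_delphi(estimates, tolerance=0.5, max_rounds=10):
    """Iterate rounds until the spread of estimates falls within
    the tolerance (or we hit max_rounds), then report the median."""
    for _ in range(max_rounds):
        if max(estimates) - min(estimates) <= tolerance:
            break
        estimates = delphi_round(estimates)
    return statistics.median(estimates)
```

For example, a panel whose initial ratings of an ethical-risk score are 2, 4, and 9 converges on 4 after a few feedback rounds. Real Delphi studies add the qualitative layer Chenda emphasizes: written rationales accompany each round, not just the numbers.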
So, we start with an abstract concept because more often than not, you want to take the inclination you have first on whether or not a use case is this or that, from your feelings. And this is where it gets interesting because, first you're taking answers on someone's ethical pillars or a company's ethical pillars based off of their intuition, and then you're finding how that solution can work in a more engineering or systems design fashion. Then you have your solution to how you can approach this problem ethically, or at least how the client ethically wants it, and when you go out and do your user testing, you're usually making sure that the way that the user is interacting with the product—or the end goal—one, has a human in the loop, which AI researchers talk about very often. It's allowing the user to understand the systems that they're working with. And the other thing is, that ensures user privacy. Brian: So, this human-in-the-loop concept is something I talk about on this show all the time. This is kind of a foundational element of designing effective decision support applications and things of this nature. So, can you give an example of how… like, how would you take feedback from one of these sessions and make a change? Like, if we saw—or maybe you've done this before, and you can talk about an adjustment that you might have made to the system based on getting this feedback from the field. So, whether it's interpretability, or some other—or privacy or something like that, do you have an anecdote you could recount to kind of make it concrete? Chenda: One thing I would definitely add is that the most advanced AI doesn't always have to be implemented. And people usually—they skip past this, and they're looking for the best transform or the most complex neural network. It's not the case: it's about whether or not the product sticks and the product works alongside the user to aid whatever their endeavor is, or whatever the purpose of that product is. 
It can be very minimalist in that sense. Brian: Sure, sure. No, I a hundred percent agree with that. So, do you have an example where maybe the data science approach, the implementation approach for something changed? Perhaps it changed from a more complex model that was more black box, and moved to one that was perhaps less accurate but had some interpretability because the stickiness factor was there. Is that the type of thing that you would hope to get from this kind of research, or is that information coming too late to be useful? That would be one thing I would think of if I put my—a lot of the audience for this show tend to be software leaders, technology, data leaders, et cetera, and I'm guessing many of them wouldn't want to find out that late in the process that, oh, we have to do all this rework of the modeling and the engineering because we found out no one will use this if they don't understand how the system came up with this recommendation or whatever. Can you unpack that a little bit for me, your perspective on that? Chenda: Yeah, really, really good question, actually. So, I described this at the panel as well, but you want to design dynamic systems, and especially within machine learning production-ready systems—there's a difference between this concept of static systems and dynamic systems because models have to also retrain themselves if they want to optimize, let's say, object recognition or whatever the outcome is for that model. You definitely want there to be a bit of a loop as you were speaking about before, a feedback loop. And this can happen early on in the process because you don't want to wait too long even before it gets to the user to see this. And you want systems that have features of automation as well—if we're talking about AI systems—automation features that allow for, let's say, the tuning of a hyperparameter. So, there are systems designed like this that will iterate on themselves, and then be optimal for the user.
Brian: One of the interesting things for me here in this particular question was more around the feedback loop between whether or not the solution is actually, as you call it sticking, like, is the product going to stick and get used for the desired use case? Is this something where you feel like you have to go through the process of actually creating the full automation, you have to pick a model and push that out before you can get that feedback? Or is it possible—and part of the reason I'm asking this is because I know that it sounded like you had some work in the BitChat, you did some user experience research. I'm kind of very curious about this area of prototyping, and figuring out how fast can we learn something about what the problem space is, what is needed, prior to doing too much implementation work that we don't want to rewind, or that the business doesn't necessarily want to rewind and throw out? Do you think it's possible to get informed about which models and which technology approaches we may want to use with machine learning earlier, or do we have to get into some kind of working prototype before we can establish that? Chenda: That's a really, really good question. The first thing I would lead with my answer on that is the more interpretability, the better off you always are. If there is a definitive method right now to determine whether or not you should or shouldn't prototype a specific type of model, I'd be very curious to explore. But the higher interpretability, the better, and the more data is always better. Brian: Always. I mean, I guess I can think of places where you may not care about how a low-risk scenario, a recommender or something like that, like exactly knowing how, so I can see people playing the other side of that argument. 
I mean, I would generally agree with you that it feels like if the human-in-the-loop is a heavy factor in whether or not the solution is going to be adopted and used, then the right amount of control from their perspective is really important, the right amount of transparency, et cetera, so do you think it's possible to tease that out early in the technology-building part? Or do you just play it safe and say, “You know what? I'm not going to go—we're not going to use any of these techniques.” It's just more of a gut read. Like, “This will not work; we have enough gut read on the situation from our customers to know that a black box implementation is going to be a showstopper,” so to speak. Chenda: So, you usually want to know, you don't want to go with intuition in this sense. As I said before, when there's more interpretability, it's easier to determine those things, especially with regards to that human-in-the-loop feature, and as we were talking about before, if you have more data to play around with, that's more data you split into either training, testing, validation phases with your machine learning, right? And you can switch that data out, you can also design systems solely for the sorting and rearranging of the data required to make the best model as well. And these are all preemptive kind of systems design steps that one can take. Brian: Mm-hm. So, it sounds like you're at a new startup, correct? You're working at a tech startup building some kind of software SaaS or some kind of software product. Is that correct? Chenda: Yeah, definitely. Brian: Cool. So, I definitely want to hear about that. I'm curious, how involved are you in that kind of design phase, so to speak, the product itself, the interfacing with customers. Do you work tightly with the product management and your product designers, or are you staying a step away from that? What's that relationship like, and how do you guys work it? Chenda: So, it's really interesting.
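The training/testing/validation split Chenda mentions in passing is a standard ML practice. As a reader's sketch only (Chenda doesn't describe her implementation, and the 70/15/15 ratios and seed below are assumed defaults for illustration), it can be done in a few lines with the standard library:

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle the data reproducibly, then partition it into
    train / validation / test sets. Whatever remains after the
    train and validation fractions becomes the test set."""
    items = list(data)
    rng = random.Random(seed)  # fixed seed so the split is repeatable
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Her point about being able to "switch that data out" corresponds to varying the seed or re-partitioning: the split function is deterministic per seed, so different seeds give different, but reproducible, rearrangements of the same data.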
At this company, Undock, I work very, very closely actually, with the product teams and the marketing teams because, you know, machine learning for product management is very, very particular. And it's very, very interesting because, as we were kind of saying before, this preemptive systems design needs to be noted very, very early on from the get-go. And how a person is experiencing the product is very, very relative to what models are chosen and how the models are architected. Brian: How do you inform those technology choices? Chenda: So, I really like this question because the user journey—or if you want to refer to it as the user experience—is kind of one of the few pieces—especially with Undock—that actually has to very, very closely align with not just the model architecture, but the data generation and the data collection. We're actually generating data as well, synthetic data, for a lot of this model training as well, and the reason is because the user experience, or the product experience that we would like the users to feel, needs to be seamless. And so if you're working in a very, very close loop with people who are designing the user journey and designing how the product looks, you have a better idea of what data needs to be collected where and when to give the person that experience, especially with regards to machine learning. Brian: Mm-hm. Was there a particular anecdote or something you might recount from your journey so far with Undock where you're like, “I'm glad I heard this now.” [laugh]. Or something along those lines, or, “Wow, this was really informative to my work or prevents us going down the wrong path.” Is there anything like that that came up so far? Chenda: [laugh]. Actually, it's funny; kind of. So, Undock has this, we have this phrase that we use, which is time travel, right?
We are an AI-powered smart scheduling application that is sort of all-encompassing, with being able to automatically detect whether or not you're free during a certain timeframe, and free with the rest of your coworkers as well. It really, really is kind of revolutionizing how you think about scheduling, especially with regards to remote work because all of your colleagues are now in different time zones. And there are so many pieces to determining whether or not—it's almost like having an EA within your laptop, you know? It's almost as if every individual person has one of those as well, and the different pieces with a user journey like that, you got to think to yourself that you're getting smart suggestions all the time and the data that has to be collected and aggregated for the optimal experience that is seamless within how you communicate with your coworkers, and how you’re communicating with the application is quite extensive, actually; it's very complex. Brian: Tell me about this Undock. What I'm picturing here is it's scheduling facilitation. I use Calendly extensively in my own work, so I'm always curious about reducing time spent on this kind of stuff. So, how is it different than just booking a room and your people, especially if this is inside the corporate walls where everyone's on Exchange or whatever, and you know when everyone's available, et cetera? Is this intended for cross-company where you don't have that visibility into people's schedules? Is that the difference or—like, how does it work? Chenda: That's actually a good question. Um, you don't actually have complete visibility into your coworker’s schedules, which is quite different from how you might see Google Calendar. But that's what's nice about it because the more personal features of what your schedule looks like, and how you’re planning your days is actually occluded. 
Well, you can still really, really easily and seamlessly figure out when's the best time to meet and when's the best time to talk. You can schedule meetings in person, you can schedule meetings face-to-face as well, and there's a feature to it that actually makes it kind of social: you have a profile, you can state whether or not you're online, and it's really blending a lot of features from—there's an application called Discord, which a lot of remote workers are using these days, and Calendly, Slack, too. As I said before, we have this thing where we're talking about time travel or controlling time, where you don't actually have to know all the details to get the best outcome. Brian: Sure, sure. So, what do you want to change about how we develop machine-learning-driven products and services? Is there even a problem, I guess, is a good question. Chenda: I don't think there's a problem per se. I think we're very clear now that there's places where machine learning should be used and places where it doesn't necessarily have to be. There's obviously that very stark difference between machine learning engineering—which is really software engineering using elements of machine learning—and machine learning research and development, which is massively different. It has a place in science that is very abstract. It's all-encompassing, you can use machine learning in your research in biology, you can use machine learning in research in physics, the list goes on, and on, and on. So, it's broad, but it's not. And it's applied in engineering, but it's also applied in research. And it's very all-encompassing, so what I'm really curious to see is just how it changes in the years to come. But I don't think there's anything that should be changed specifically. Brian: Mm-hm. There's a lot of solutions that end up not—I use “solutions” in quotes—there's a lot of projects that get created that end up not getting used at all. 
I'm not sure the large majority of people are necessarily thinking about the last mile and kind of that, as you called it, the stickiness of the product, or the solution, or whatever it's going to be, and kind of working backwards from that, and I'm just curious, do you think something needs to change there to get a higher success rate here? And I'm speaking specifically about not the technical blockers: not, like, the data sets are too small, or we can't train the model, or the accuracy is slightly too low for various technical reasons and whatnot. More the human piece, right? It doesn't solve a problem that exists, or it solves it in the wrong way, or it's too complicated or hard to use. More of these kind of last mile user experience problems, do you agree that that's a challenge right now, and if so, is there something that could be done differently? Chenda: I do agree that it's a challenge. I'm sure that you've heard, many people are saying machine learning is no place to move fast and break things. And it's true. There's this sort of race to innovate. And the question is always for why? Brian: Yeah. [laugh]. Throw machine learning at everything. Chenda: [laugh]. Brian: Yes. Oh, I know. Chenda: And then the question is for why and who's affected? Of course, there's facial recognition, and I think it's a perfect example because what facial recognition really is—without it being the product that it is—is glorified computer graphics, right? AI-aided, machine-learning-aided computer graphics that is advanced enough to pick up the pixels in what a camera can see and make meaning out of that, you know? Now that's facial recognition. So, you really have to ask yourself, why is it necessary? And who is it helping? Brian: Mm-hm. Or who's it not helping? Chenda: Yeah. Brian: Do you have concerns then, when you're in the middle of a project, or maybe you're working on a feature or some aspect of Undock.
I mean, maybe Undock’s a good example of this, like, the question of, should we do this? And should we be using these advanced analytical techniques in this particular area? Should we be looking at someone’s… whatever; their online/offline status, or whatever it may be. Even if you've technically thrown up the dialog that says, “Yes, click here to give permission to do this thing?” There's also the question of does the person really understand what they're handing over in terms of their personal information and how it's being used? I think the comprehension piece is almost separate from the did we get permission, and it feels good to just say, well, they checked the box. And at some point, there's personal responsibility, but some of this stuff is pretty technical. So. Chenda: It's an interesting point. Most people don't really know what they're giving to the products that they're using, actually. And it's very funny because there's this quote that circulates around tech Twitter, which is, “If you're not paying for the product, then you are the product.” And that's more often than not the case, you know, with social media, blah, blah, blah, blah because it's true. I don't think it should actually be a part—actually, this is quite an interesting point. I don't think it should actually be a part of their user journey in every single individual product they're using. I think there should be a centralized place where people are well aware of how they interact with products generally, in a more macro perspective. Brian: Mm-hm. Yeah, I just talked to an interesting, MIT sandbox startup there, and they're actually thinking about, kind of this, bank analogy where the bank is, you have all of your personal data there, and companies can pay you for it. So, here's an offer for nine dollars and twenty-seven cents; Google would like to use your email address, and here's how they want to use it. And you can choose whether to disclose it and get paid for it.
And I thought it was a very interesting concept of starting to expose where's my stuff being used. How is it being used? Now, I mean, how transparent they get about that, and whether they just throw up a bunch of legal stuff in front of the offer, I don't know. To me, there's potentially some ethical considerations there as well, especially with people that don't have a lot of money that may just see it as free money. “I just click here and I get $9 a month from Google, and I don't know, like, I didn't have to do anything.” [laugh]. So, I still think there's a human effort to understand—there's effort on both sides that's required if you really want to have a long-term strategy of being both ethical but also producing some kind of business value there. I don't know. It's a complicated question, but I wanted to get your take on it because I know you have a heavy ethics consideration in the work that you do. Chenda: Yeah, yeah, definitely. It's something that I think should really actually be coming from a centralized place because despite the methods being different, maybe the algorithm being different, or the user journey, from product to product, the essence is the same, and it's that—this goes into painting a larger picture, but how much of a stronghold, especially these larger technology companies have on us, our lives, and our data. And to some extent, it shouldn't actually be just every single product creator’s job to notify users to that extent, I think we should be taught this, you know? Brian: Yeah. No, it's a fair—I agree. I mean, at some point, there is a level of personal accountability and responsibility, and it's your choice; it's a free society. I totally agree. I think—I don't feel like the scale is super well balanced at this point. But it's a complicated [laugh]—it's not a binary, “It’s this way or that way.” It's definitely some kind of scale there.
So, I guess we all have to figure out what the right balance is there between customers and what kind of company do we want to work for, and what do we want to do with the work that we do. So, jumping to a slightly different topic, just to kind of start to close this out. It's been great to talk to you, and I'm curious, do you have any feeling—you know, having worked in some tech companies and stuff, I'm always interested in, kind of, the relationship between the technical leaders, the product leaders, and the designers, and kind of this trio that's at the backbone of many software companies, are there changes that you'd like to see there about how that relationship works in terms of either people understanding your work, or vice versa, or whatever? I'm curious about those relationships. Is there a takeaway from your experience about how you think those teams could be more optimized to work together? Chenda: I really, really love this question. It's something that I contemplated at my last company and will maneuver differently in this company. But I helped scale my last company from, really, three to four people to now it's over 100. And seeing how senior leadership communicated with each other, especially representatives from each of these different groups, there have to be translators, there have to be people who exist in translational roles, and they're quite difficult to fill because you have to have an understanding of the hows, but you have to be able to explain the whys. Brian: Mm-hm. Is there a special type of person, or role, or something where you think that role falls naturally? Chenda: It's something that I actually think is still being—it's showing itself through in tech companies, whether it's the Big Five, at Google or Facebook, or in startups that are high growth.
It's, “Oh, we need someone to sit in on this meeting, who is a creative technologist.” There are these names now that get thrown about that aren't actually just ‘senior engineer,’ et cetera, et cetera, who have an understanding—yeah, even sometimes the social sciences behind why you would do something in the design. Or they have an economics background and understand computational social science and why micro-influencers communicate with each other in this way, you know? Brian: Cool. Well, I appreciate you sharing your stuff with me. And I did have one last question. When I was checking out your background stuff, what kind of music are you DJing? Chenda: Oh, [laugh]. So— Brian: I'm a musician as well, so I was curious to hear what kinds of stuff you're spinning? Chenda: Oh, that's a good question. So, I'm very, very influenced by the electronic music scene in both the UK and Europe, and I kind of combined that with, mmm, yeah, some other futuristic sounds as well. So, mostly electronic music, but, I mean, it can range from like anyone on— Brian: New stuff? Old stuff? Drum and bass? Like, newer genres? Chenda: Drum ‘n’ bass is great. Anything from Brainfeeder. Of course, you have to, like, throw in Aphex Twin sometimes, but then, like, then you mix it with Travis Scott, you know? Brian: Yeah. Chenda: [laugh]. Brian: Cool. I have a soft spot for some drum and bass in my life as well, so it's good stuff. [laugh]. Well, Chenda, it's been great to talk to you and where can people follow you? I know you're pretty active on Twitter. Is that the best place to check out your work? Chenda: Yeah, so Twitter is great. Of course, there's my website. I also have a Substack where I'll be releasing a newsletter. Of course, feel free to follow me on Instagram with anything that's more visual, but I am trying to [00:29:28 unintelligible] everywhere. Brian: Awesome. And that's C-H-E-N-D-A, bunk like a bunk bed, B-U-N-K-A-S-E-M if anyone's just listening and not reading.
So, Chenda, thanks for coming on Experiencing Data and talking to us today. Chenda: Thank you for having me. Brian: Take care.
37 minutes | Oct 19, 2020
050 - Ways to Practice Creativity and Foster Innovation When You’re An Analytical Thinker
50 episodes! I can’t believe it. Since it’s somewhat of a milestone for the show, I decided to do another solo round of Experiencing Data, following the positive feedback that I’ve gotten from the last few episodes. Today, I want to help you think about ways to practice creativity when you and your organization are living in an analytical world, creating analytics for a living, and thinking logically and rationally. Why? Because creativity is what leads to innovation, and the science says a lot of decision-making is not rational. This means we have to tap things besides logical reasoning and data to bring data products to our customers that they will love...and use. (Sorry!) One of the biggest blockers to creativity is in the organ above your shoulders and between your ears. I frequently encounter highly talented technical professionals who find creativity to be a foreign thing reserved for people like artists. They don’t think of themselves as being creative, and believe it is an innate talent instead of a skill. If you have ever said, “I don’t have a creative bone in my body,” then this episode is for you. As with most technical concepts, practicing creativity is a skill most people can develop, and if you can inculcate a mix of thinking approaches into your data product and analytical solution development, you’re more likely to come up with innovative solutions that will delight your customers. The first thing to realize, though, is that this isn’t going to be on the test. You can’t score a “92” or a “67” out of 100. There’s no right answer to look up online. When you’re ready to let go of all that, grab your headphones and jump in. I’ll even tell you a story to get going. Links Referenced Previous podcast with Steve Rader Transcript Brian: Welcome back to episode 50 of the Experiencing Data podcast. This is Brian T. O’Neill, your host. And my guest today is… me again.
I’ve been rolling out a couple of solo episodes, if you’ve been following along the last couple of weeks, and I got enough positive feedback that people seem to be enjoying these, so I’ll be doing them from time to time. We’re still going to have guests coming back, but for episode 50 here, I thought we’d keep it solo and share some thoughts around innovation and creativity for people working with data and analytics. So, these things don’t always go together, and it kind of bums me out when I meet somebody who is very strong technically—a very strong analytical thinker with great technical skills—and they tend to think of themselves as not being creative. My favorite quote, or rather my least favorite quote, came from talking to a chief analytics officer recently, and he just said, “Brian, I don’t have a creative bone in my body.” And the main thing I wanted to talk to you about today is this, kind of, mental block here, and some of the tactical ways of actually practicing creative work and thinking about innovation as a series of steps, and activities, and behaviors that you can actually do, instead of thinking about it as this black box thing that only resides in the minds of artists, and creators, and designers, and things like this. So, I think a lot of you know that part of my mission is really to help people who think this way, and who are very gifted and talented with their work with technology and data and analytics and data science, to tap into what human-centered design can do and how it can help you deliver indispensable products and services to your customers. And a lot of this is about the mental part of it: the way you approach the work, the way you think about your work in terms of problem-finding versus problem-solving; the role of empathy, which is really about putting ourselves in the service of others.
And I really do mean that if we start to change the work from being a technical problem that’s staring you in the face—it’s kind of you versus it—and instead think, “My job is to enable somebody else.” Like, most of the software that we’re all working on is not for ourselves, it’s typically for somebody else, and so kind of getting that mind shift in place. But anyhow, I digress. Let’s jump into this topic here around innovation and creativity. So, I hope that you guys don’t think of this as—how do I say this—I don’t want to be disparaging. I’m going to be throwing out stereotypes and generalities from the many conversations I have at conferences, and just phone calls, and research, and trying to talk to people on my list, and clients, et cetera. So, these are very broad, sweeping generalizations that I’m going to make. Some of you may feel some similarity, or this might resonate with you. Others, maybe not so much. But everybody’s unique. I’ve met plenty of skilled technologists who are also talented artists and musicians, especially around Boston. There’s a great many of them. So, without further ado, let’s jump in a little bit here. The first thing that came to mind when I was thinking about this was to reframe it: as a human race, we haven’t been dealing with the giant amounts of data and analytics that the world now puts out every single year—it’s multiplying like crazy. We haven’t really been dealing with this kind of information for even 100 years. So, what does it mean to not deal with it? Well, it means we made decisions and we lived without having all the insights that we do today, which really reduces our guessing. That’s a lot of what we do: we inform better decision-making with all this data. We had to rely on trial and error, and our gut, and experimentation; we had to use other techniques.
And one place I can see this getting in the way, just to give a very tactical example, was when I had been consulting at a very well-known online travel agency. We’ll call them an OTA. And this company was relentlessly using A/B testing on their website. And I have no problem with A/B testing; it has a role, and it’s a great technique for certain kinds of work. But at this company, a huge amount of the attention for the overall product strategy and design strategy was on one screen: their hotel search screen. And this company was kind of gun-shy when I arrived there, and they were looking at the work that needed to be done as optimizing this one screen to basically get people to type dates in for their travel and get back pricing from all the different operators and hotels, et cetera, that provide these travel services, hotel rooms, and whatnot. And from the vibe I got when I walked in the doors, I could tell the joy in the work was gone for a lot of people because, in the somewhat recent past apparently, there had been a large-scale redesign; it did not go well, and the company lost all appetite to experiment beyond what they could immediately prove with data. And what this translated into was that they were constantly running these very small-scale design changes on the site and testing each one and trying to isolate—you know, change the color, change the button, move it over, change this text—very tactical approaches to things, and optimizing. And, again, there’s a place for this with conversion. This is not new stuff. Many of you will probably know this if you’re doing any type of web analytics work. But the point here that I want to get across is that with this type of thinking, where everything is analytically-minded, even when you’re doing design work like this, you’re focusing on a local maximum.
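As a purely illustrative aside (the visitor counts and conversion numbers below are hypothetical, not from this episode): the kind of small-scale variant testing described here is typically scored with something like a two-proportion z-test, which tells you which of the options you actually tried performed better—and nothing at all about the ideas you never put in front of users.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B's conversion rate differ
    significantly from variant A's? Returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical button-color experiment: 10,000 visitors per arm,
# 480 conversions on the control, 521 on the new green button.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=521, n_b=10_000)
print(round(z, 2))  # → 1.33, i.e. |z| < 1.96: not significant at the 5% level
```

Note how little the test says: here the green button "failed" only in the sense that this one trial, with this one label and layout, didn't clear the significance bar—exactly the kind of result that, in the story above, got ideas banned forever.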
You’re only going to get to the top of the peak that you can see right in front of you, but it’s not going to tell you what the next big idea is, what the next big problem is to solve; it’s just going to tell you, of the choices that you throw at it, which one performed better, and you can keep optimizing down. And on top of this, what can also happen—and this is what happened there—was that they started ruling things out: any change that failed once would be thrown out. So, what this means is, if a green background color on the button didn’t work, then they would never use green on any buttons, regardless of what the label text of the button was, the surrounding content, all these other variables that may have had something to do with how the conversion worked. If you threw that idea out, forget ever presenting that idea again. It was now a forbidden choice. So, this kind of approach, where we’re thinking analytically all the time about everything—this is not how we do innovative work. This is not how we find out what the next big approach might be to solve the customers’ problem. It’s not even going to help you see the problem really well; you’re just going to keep iterating and refining on what’s in front of you. And I think part of this is the mindset—and I understand conversions matter, in this case, and it was a big part of it. And there are shareholders and all the whatnot, I get all that stuff. But as Peter Drucker famously said, “Marketing and innovation.
Those are the only two things that really matter, and everything else is a cost.” So, if you want to lead with the data work that you’re doing, and you want to be seen more as a center of excellence and innovation, if you want to find ways to leverage analytics and machine learning to create better customer experiences—whether that’s an internal customer, or supplier, or employee, or whether you’re talking about the people that buy the products and services you offer—you need to wear a different hat to do this kind of work. So, be aware of these local maximums, be aware of when the analytical mind is kind of jumping to what the next choice might need to be here. The exercise of creativity, which hopefully leads us to innovation, which is what a lot of companies say they’re looking for in their employees—it’s a different kind of work, and it’s a different kind of thinking. And I know this can be hard. These words get thrown around a ton, and I almost cringe talking about it just as much as some of you probably cringe a little bit at the idea of ‘craft,’ and ‘design,’ and some of these words that sound so fluffy and hand-wavy: they’re so subjective, and I understand that. I feel like ‘innovation’ gets batted around quite a bit. Everybody says everything is innovative now, and it’s a little bit eye-roll-y for me, too. I think that one of the reasons I really like design, and why I talk to you about design, and why I want you to apply human-centered design to your work, is that it is a step-by-step process, it’s something anybody can learn how to do, and it’s a framework for innovative solutions. And it’s something you can repeat and do regularly as part of your daily work to get better outcomes. And that’s what it’s all about. Nobody wants machine learning, nobody wants artificial intelligence, nobody wants analytics. What they want are the outcomes. They want the value—they want the promise; what was the promise of these technologies?
What was the service that they wanted to get? What was the outcome they wanted? It’s outcomes, not outputs. So, it’s a different kind of thinking. And this stuff matters. And I’ll tell you something else, too. A recent study—this is just from 2020—came out, and what it looked at was the sources of innovation in companies. And apparently, this study found that the innovation output at companies is five to ten times more strongly correlated with the ability of the individual inventors at the company than with all the characteristics of the company they’re actually working at, combined. So, people and the way they work and think—you, that is. For those of you who are primarily full-time employees at companies, you are the source of innovation, or you’re not. But you have a multiplier effect if you are seen as someone that’s doing innovative work. And if you’re a leader, I’m hoping that you’re fostering an environment to allow this type of work to happen, if you are in the product creation business. And again, whether you’re at a technology company selling commercial software products, or your data products are actually internal applications, services, models that are being deployed—I still call those data products, as many of you know by now; I think of them with a product mindset even if we’re not making money from them, because chances are, they’re there to save cost or, potentially, to improve the customer experience. There’s some value there. And the ability to do this, and to be innovative here—a lot of the output at these companies is tied to the people doing the work. So, how do you get started with this stuff? How do we do innovation? How do we practice this stuff?
I was thinking about this, and I was looking around at my own work that I’ve done with clients, and a lot of people associate design with innovation because they realize that design can change their customers’ experience, they understand how it can impact the business, they see what design-led companies like Apple do—they understand that there’s something there even if they don’t know how to do it. So, what I want to do is give you some practical stuff; take away the black turtlenecks, and the big reveals, and some of the hype that goes with this, and focus on the activities and the behaviors that you can practice on a regular basis. Things that you could put to work at your company today. So, let me jump into those. The first one is that—and again, I’m thinking about leaders here, and I’m assuming most of the people listening are leaders, or perhaps you’re looking to be a leader in the future, and you’re in charge of the ship; you’re driving the ship, whether it’s a product, or a whole suite of products, or an entire department—this is who I’m talking to. Because a lot of these things can’t happen without the proper management and leadership support; you need an environment to do this, and I know that’s really hard if you’re in one where it feels very closed-minded; this can be difficult. We can get into how an individual contributor might deal with some of these problems maybe in another episode, because my primary audience to serve is software leaders in data, and product, and technology, and engineering. So, the first one here is you have to create dedicated time to work on things that are not deadline-driven, and are not just about the latest fire drill. And I’ve seen this a lot, probably more so the fire drill stuff with the technology companies, where they’re constantly chasing whatever the latest thing is.
You know, a big customer wants to come in, and, “Well, we’ll only buy if we have this feature.” And the whole ship stops and everyone’s now working on this other thing. And I get that. Sometimes that is the right thing to do. But if that’s the only environment you’re living in, forget about it. The only way you’re going to come up with something innovative is probably by luck, and I want you to have repeatable processes and repeatable behaviors, because we can’t rely on luck to get us to the next mountaintop, so to speak. It’s just not reliable. So, the next one here—beyond creating this dedicated time to work on innovation and to think about projects and products and ideas that are not deadline-driven and fire-drill-oriented—is that you have to have the right people in the room in order to do this type of work. So, what does that mean? Well, we hear a lot about diversity. And I think diversity is really important here. And I’m not just talking about racial diversity, gender diversity—although I will say right now, simply adding some women to your team helps; I happen to know, just from looking at my podcast guests, that among the leaders in the data products industry, machine learning, analytics, it’s a white male industry. And if you’re only surrounded with people like you, if everybody in the room looks like you, you’re not going to come up with stuff that’s as interesting and creative and innovative. But moving beyond gender and race, we’re also talking about inviting outsiders, inviting customers, inviting resellers, inviting the sales team, the marketing team, subject matter experts. Why do we do this? Well, when we’re thinking about the next level up, we need a volume of ideas; we need to get lots of different ideas, not jump into one solution, implement technology, and then hope that when we throw it at the customer, it’s going to work. And it’s rare you see this happen because—speed, speed, speed.
You hear it all the time. But that assumes that the cycle time is really fast. So, I would say it’s probably okay—you could probably get somewhere innovative—if you can rapidly iterate, learn, change, ship, get feedback. If that cycle is really fast, by all means, feel free to keep shipping code, getting feedback, changing the design and the experience, and getting closer to what customers need. You can do it that way. That’s okay. Very, very rarely is that the case, though. I find that, especially when you’re talking about the next big problem that you want to work on, and thinking about six months, twelve months down the road, and not just the next quarter and this type of thing, it’s too hard to do this kind of stuff. And you still need a volume of ideas. And why do you need a volume of ideas? Well, part of the reason for having a bunch of different people with a volume of ideas is not that one person’s idea is necessarily going to be the right one. It’s more about the fact that one idea might spawn someone else in the room coming up with an idea that’s a spin on that, and you get this snowball effect. So, you get the “Yes, and” kind of mentality. “Yes, we could do that. And what if we also did this?” “Yes, and what if we could also do that?” You need a breeding ground for that, and if we just come into the room with one or two ways of doing something, and we’re ready to jump into implementation, this is not how you do innovation. Maybe you get something innovative once in a while, but again, that’s relying on luck. So, this divergent thinking—and this is something I practice and teach in my seminar; it’s a very common part of the design thinking process, as it is sometimes called; I just call it design, but some people call it design thinking. Having a volume of ideas can really lead to creative ideas that no one in the room would have thought of individually, but the collective intelligence of the room can spawn some really interesting things.
And we’re bringing all these different perspectives. You put a customer service rep in the room, and they’re going to have a very different perspective than, say, someone in product marketing or even product management might—who might be relying on market research, purchased research surveys, and some of this kind of stuff to think out, “What should we be doing with data in the future?” And on the other end, you have someone that’s dealing with customers on a regular basis—if you’re talking about, like, a B2C company or something—and they’re hearing daily from the actual people who are the recipients of all this work that’s happening. And whether or not having a CSR in the room during your ideation work is the right call, the point here is that their perspectives are really different. And having a salesperson there—let’s say you’re at a tech company and a salesperson participates in some of this work—they also know what is difficult about communicating the value of the data and insights that we have. They are a super valuable source of intelligence for the team that is responsible for designing the experience that the customer is going to have. And so, CSRs, salespeople, marketers, product managers, designers, and of course the technologists as well, especially with their knowledge of what’s possible—what do we need to do this kind of work? It’s really about wearing a different hat, and realizing that it’s time to take the implementation hat off and to think creatively, and to think about volume. So, if you want to practice this idea, the way I would think about it is when you get this group of people together to work on a particular customer or business problem, make one of your first activities about generating a volume of ideas, and try to separate ideas from the person who came up with them.
It’s not about who came up with it; it’s not about owning the idea; it’s about generating a volume of those ideas. So, there’s lots of different techniques for doing this, but the core of this is really at the quantity and not jumping into the quality of the idea, or jumping into an implementation plan too soon. The next one is open innovation. And I’m not going to talk too much about this because we have a great episode on the show. Steve Rader from NASA came on the show to talk about open innovation. But this is the act of looking outside of your company for solutions to problems that perhaps you’ve worked on a lot and you haven’t gotten too far with them yet. But the idea here is, kind of, putting almost a for sale sign out, like, “Hey, we have this problem, it’s for sale. We need to invite people to come in here and tackle it.” And this is something that’s almost happening right now, with—at the time I’m recording this, we’re still in the middle of COVID. Now they’re calling it the third wave, at least here in the United States. And so, you actually have all these different people, all these different companies out there working on vaccines, and PPE, and analytics, and insights, and all this kind of stuff. Imagine if your company had also put out that for sale sign and invited outsiders to come work on some of the problems that you have. A volume of ideas from people that maybe don’t even know so much about your particular industry, but may be able to approach a particular aspect or a particular part of the problem from a different perspective there. And it may be such that they don’t necessarily solve it, but in the act of looking to outsiders to come and help you, you again may get an idea; it may plant a seed for what the actual path is that you should go on. But that seed might be something that never would have come out of your own team. And a lot of this is because we go native at some point. 
You get used to the way your company thinks, your team thinks, and this happens a lot. With hiring, especially on data science and analytics teams, there’s so much focus on technical skill sets—and that is, of course, really important—but you’re going to end up being surrounded by people who tend to think the same way. And this is why I think teams that have a product function, a designer or user experience function alongside the data science, analytics, and engineering pieces, are more likely to come up with better solutions. Because they’re all approaching the problem from different perspectives, and you really need to have that. So, open innovation—check out the episode with Steve Rader from NASA; that one was really fantastic. There’s a great story in there about how you remove the excess grease from potato chips in the factory where they actually produce them; this was a problem, and it was actually outsourced, and it’s fascinating to hear about the technology that they came up with to reduce the fat, the grease that is on the chips at the time they’re put into the bag. So, check out that episode—really great. The next one here is improvisation, and empowering the group over the individual, and this kind of idea of teams. So, design is very much a team sport, and the best designers out there are not the ones living in the tools, setting type, choosing colors—and all this stuff matters; these fine details matter and I’m not saying they don’t—but really what the best designers are doing is facilitating the team; they’re facilitating the group coming up with a common framing of the problem, and helping the team understand how the experience is very much tied to the value that is going to come out the other end of the technology initiative. This is true whether it’s analytics, or advanced machine learning, or AI—doesn’t matter. So, what is this role of improvisation?
So, I’m going to use an analogy here from jazz music. A bunch of you know that I’m also a professional musician, and I do play jazz as well. And one of the things about improvisation and jazz: it’s not just a total free-for-all. A lot of what it’s about is exercising your individual creativity within a group who is there to support you and to allow you to take those leaps—when you’re taking a solo and the band is backing you up, you’re improvising there. You’re trying out ideas live, in the moment, right now. And the team is not there to judge you. The team—the band—is there to support the soloist. So, there are loose structures here, but we’re experimenting and relying on the bandmates—our team—to support these expressions, these trials, these attempts. We’re not there to sabotage each other. We’re not there to say, “This is why I said we shouldn’t have gone with this idea,” et cetera, et cetera. It’s not about that; that doesn’t help the customer—the infighting, and the politics, and all of this. If anything, we want to take a misstep or a mistake and turn it into something positive. We want to see, how can I riff on that? How can I take that idea—maybe that’s not the right way to go, but how could we come up with something better here? And one of the things I like about this idea of improvisation in teams is a story about a very famous pianist named Herbie Hancock, whom maybe some of you have heard of. He was playing on stage with Miles Davis, who is probably the most famous jazz trumpeter; he was active in the ’50s and ’60s and ’70s. Herbie was at an early point in his time in the band with Miles, and he was comping, which means he was playing chords on the piano, largely to support the soloist, who at the time was Miles. And Herbie played a, quote, “wrong chord.” He did not play the right chord at the right time in the song. And so instead of judging the chord as being wrong, what Miles did was change the notes he played while he was soloing.
By changing the notes in his solo and adjusting them to the chord, instead of expecting the chord to adjust to him—even though we tend to think that when a person is soloing, the ensemble is there to support them, not the other way around—Miles made Herbie’s “wrong” chord sound right. And I think there are analogies here to how we do work together in groups. It’s about this idea of risk-taking, and trying to support the act of risk-taking together. Because ultimately, if you’re not taking any risks, and you’re not ever taking some big swings and missing, and the company culture is not supporting these occasional misses and thinks that every increment of work that is done is supposed to create value, you’re never going to come up with something innovative—because by definition, you have to try stuff, and not everything is going to work. And this also gets a lot of lip service, but I find a lot of companies aren’t willing to do this. A lot of them are so focused on the bottom line that they can’t see beyond taking a big risk and realizing, “You know what? We’re probably going to really make some people mad with this change we’re about to roll out, but this is the right leap to make now, and we need to start looking a mile ahead instead of just looking a few yards ahead and taking those risks.” And sometimes they’re going to be wrong, but sometimes they’re going to be right. So, innovation doesn’t happen if we’re just doing incremental improvement work all the time; it’s not going to happen. So, get the team together, understanding that individuals in the team should be taking risks, and the team should be supporting those risks, and the team should be trying to take the ideas of the individuals and make them work towards the common good, towards the common problem we’re trying to solve with this data and this technology.
The next one is fostering, creating, and finding a company culture that allows for experimentation and failure. So, I kind of talked about this just now, but I want to call it out as something distinct. You need to think about this as what I call rewarding the at-bats, and not just the base hits. How many times did the team get up and take swings? How many swings did we take at this problem space? And have we even figured out what the right problem is? That’s another thing: what game are we even playing? Have we clarified that, on top of how many at-bats the team has had? And there’s a difference, right? In baseball, we tend to reward the runs scored, the base hits, and all of this. We’re not rewarding the number of at-bats, because there are going to be strikeouts. Well, I think you actually do need to have strikeouts here, and the point of the strikeout isn’t to strike out. I mean, obviously, we would love to have more base hits, but the real point of it is: what did I learn from the pitches that were just thrown at me? What did I learn when I was at bat? What did the team learn when we went to bat, and swung, and we didn’t connect? How fast are those learning cycles, the iterations that you’re doing? Are they fast? Are we taking new information in and putting it through our brain grinder and spitting out something better the next time we go up to the plate, or are we just swinging the same way all the time? So, think about rewarding the number of at-bats. And I know you can’t always do this on all projects; there’s not time to do this kind of work. But if you’re not ever taking the time to do this kind of work, you’re not going to come up with innovative solutions to things. You’re just going to be focused on that local maximum. And the final one here—maybe a ‘no duh’ for people who listen to this show—is practicing human-centered design, or design thinking, or just user experience design.
For our purposes, they’re all the same thing—that, and having this product mindset. This is the other piece, and when I say ‘product mindset,’ I’m talking to people doing applied data science, analytics work, and custom application work within non-software companies. Having the product mindset means, you know what? We’re going to roll out this artificial intelligence, this model; it’s going to change some of our operations, it could touch a whole bunch of different departments—so think about that as a product, as if you could sell it to another company that has this problem. How much would you focus on the customer experience? How much would you care about whether the model did its job? How much would you think about the outcomes? That’s the product mindset piece. The design piece, of course, is practicing these techniques. And there are step-by-step things you can do to implement design in your group, even if you don’t have designers. And I’m not trying to turn everybody into a designer—I actually think you already are one. If you’re determining the solutions and the experiences that your customers are going to deal with, you are a designer. You just might not be a good one, because your designs are byproducts of the technology work that you’re doing. So, the question is, do you want to get better at that intentionally? That’s the key word: do you want to intentionally change the design, instead of it being a byproduct of the technology work that you’re doing? So, innovation can spawn from simply applying a human-centered design process to the work that you and your teams are doing.
Designers are accelerants for this. They are experts at this, they help make this process go faster, they help you take more of these swings and take these risks, and they help you develop a faster and deeper empathy for customers; they will pull you into this customer space, this mentality of thinking relentlessly about who is supposed to receive the value here. And again, customers here may mean a paying customer, or it could be an internal stakeholder. The recipients of a report, or a dashboard, or some other artifact or output—those are still customers or users here. I’m using these terms broadly. So, if you don’t know how to do that, a lot of you know I have a self-guided video course on my website, as well as an instructor-led seminar that I teach twice a year; those are great places to learn. There are tons of free resources on how to practice these techniques. Of course, hiring staff or bringing in expertise are other ways to accelerate this process. But the point here is, you don’t have to hold the title of designer to do this work. A lot of this work is skill-based; it can be taught to other people. And ultimately, my personal mission with this work is that I want to see design happening in these places, because today’s data leaders, and data science and analytics leaders, more and more—especially as AI gets adopted—we, or you all, are going to have a big impact on the society and the world that we live in. And so I want to infect that work with design. I want you to practice design. As you do the amazing technical work that you do, I want you to use this as a way to make sure that you’re serving the audience you intended to serve, ethically, with trust, with user experience in mind, with usability in mind, with accessibility in mind. I want you to be able to do that, and there are ways to do that. So, again, you can get a demo of my course, or seminar, or go to any other place.
The difference with my work is that I focused it on data practitioners, technical product managers, and people who tend to think like we do: they live in this world of data and trying to create new and innovative solutions with data. So, those are my activities. Those are the behaviors I want you to go out and practice. This was episode 50. I wanted to give you something actionable, but also something high-level and strategic in terms of the mindset here. And whatever you do, be aware of those local maximums. Don’t try to be the individual hero. Remember that design is a team sport. It’s not art. We’re not here for self-expression. Design is about serving others, it’s about empathy, and it’s about having an impact. It’s about creating an outcome, not just creating an output. Why does that matter? Because nobody wants data, analytics, machine learning, artificial intelligence, or software. That’s not what they really want. They want to feel something, they want to change something. In our case, they usually want decision support, decision intelligence, actionable insights. Those are the outcomes: “Help me make a decision.” That’s the game we’re playing. So, go out, take a bunch of swings. I hope you hit some base hits. I hope you hit some home runs. Good luck. Thanks.
41 minutes | Oct 5, 2020
049 - CxO & Digital Transformation Focus: (10) Reasons Users Can’t or Won’t Use Your Team’s ML/AI-Driven Software and Analytics Applications
Join the Free Webinar Related to this Episode: I'm taking questions and going into depth about how to address the challenges in this episode of Experiencing Data on Oct 9, 2020. 30 mins + Q/A time. A replay will also be available. Register Now. Welcome back for another solo episode of Experiencing Data. Today, I am primarily focusing on addressing the non-digital natives out there who are trying to use AI/ML in innovative ways, whether through custom software applications and data products, or as a means to add new forms of predictive intelligence to existing digital experiences. Many non-digital-native companies today tend to approach software as a technical “thing” that needs to get built, and neglect to consider the humans who will actually use it — resulting in a lack of business or organizational value emerging. While my focus will be on the design and user experience aspects that tend to impede adoption and the realization of business value, I will also talk about some organizational blockers, related to how intelligent software is created, that can derail successful digital transformation efforts. These aren’t the only 10 non-technical reasons an intelligent application or decision support solution might fail, but they are 10 that you can and should be addressing—now—if the success of your technology is dependent on the humans in the loop actually adopting your software and changing their current behavior. Links: Want to address these issues? Learn about my Self-Guided Video Course and Instructor-Led Seminar. Subscribe to my Free DFA Insights Mailing List: https://designingforanalytics.com/mailing-list/ Transcript Brian: Welcome back to Experiencing Data. This is Brian T. O'Neill, and I'm going to be rolling another solo episode today, this time focused on CXOs. I'm calling this episode CXO Focus: Ten Reasons Customers Don't Use or Value Your Team's New Machine Learning or AI-Driven Software Application.
I could actually say software applications here, since I know a lot of you are working on multiple projects, products, and models at the same time. Today's episode is really for people who consider themselves non-digital natives, or who are working at non-digital-native companies that are trying to use AI and machine learning in innovative ways inside their business, primarily through the use of custom software applications, or by embedding new forms of predictive intelligence with AI and machine learning into digital experiences that will be used either by employees, or even by customers, or partners, suppliers, et cetera. The truth is that many non-digital-native companies approach software as a technical thing that needs to get built, without as much focus on the humans who will actually use it and what they need in order for any business or organizational value to emerge. This gets even more complicated when the software is intended to be intelligent, and you're integrating data science and analytics into the user experience, either primarily in the background or more in the foreground of that experience. And now let's get into the ten reasons—top ten reasons, at least—customers don't use or value your team's new ML or AI-driven software applications. Number one, usability. What does this mean? Well, I have a couple different things that I put under the usability category. These can range from, it's too hard to use, to, it requires explanation to new employees, customers, and users—and what I mean by that is, not only do you not want to have to explain the tooling and the applications to the current employees, users, and customers you have, you also don't want a situation where, if they leave the company, you then have to retrain a whole bunch of new people on how to use it. So, you have to think down the road as well, not just about the current people you have, but also about the future.
Usability also refers to requiring major behavioral changes to people and processes that were not considered critical to the success of the actual technology piece. Another aspect is too much information or not enough information. So, this gets to the right amount of information density. I feel like the pendulum has swung from way too much—which is really about poor design, not so much information overload—to information deserts: these clean, heavily whitespaced dashboards with a few donut charts on them and a bunch of really big numbers, without a lot of comparisons, stuff like that. So, density is important here. Explicit versus implicit conclusions in the data. This is another usability aspect. So, this gets to how much you're putting the onus on the customer or the user to make a decision themselves, implicitly, through the information, versus giving them prescriptions or different choices that the data actually suggests they should proceed with. And the final thing on usability is, you aren't measuring usability ever, let alone routinely, and routing that information back into your product development cycle. And by product here, I mean whatever the software application or thing is that you're building. We're going to talk more about the concept of product—even if you're not actually selling any commercial software or a SaaS—when we get to the product mindset later. Number two is utility, which is different than usability. So, utility for me is when the customer says, “I understand how to use the software that's in front of me, but I don't care. I just don't see any value in it.” And again, I’m the voice of your customer right now, right? So, they can understand what the buttons do, they understand what's in front of them, but it's not valuable to them; it doesn't answer a question they have. So, you can also think about utility as the solution in search of a problem.
And this often happens when there hasn't been sufficient one-on-one user research to fully understand the problem space, the context of use, and the environment in which the software is going to be used by the humans in the loop. So, that's utility. The third one here is trust. We hear about this a lot in the context of things like ethics, and all of that. I want to talk about trust in a few different ways. “I can't understand if it makes predictions the way I used to”—this is, again, me in the voice of your customer. This gets to something where perhaps you're swapping out a traditional way of decision-making with AI, and the customer using this can't tell whether the system is performing the checks—the different decision points they used to rely on to make an overall larger decision. If they can't tell that, that's a trust issue. Another one is data privacy issues, which many of you know about. And this can kind of get to the heart of, “I don't know how my information is going to potentially be used against me.” This is probably more important in applications and systems where there's both input and output, so the user is actually putting some data back into the system that may be personal information of some sort. But that's a factor: being transparent about how that's used. And it's not just literally being transparent by, like, putting footer copy in tiny gray text that was approved by the risk department or whatever. People can sense when you're being genuine with this or not, and if you try to mask this behind terms of service, or something like that, it's pretty clear you're saying, “There's stuff in here we don't really think you'll want to read, or we don't really want you to read, and we're going to make it really difficult for you to understand it, because we're doing some stuff that's kind of questionable.” That's what tiny little gray text that's chock full of legal language can signal.
So, give it some thought with your approach. Again, this is probably less critical for employee situations where you're using internal tooling, but it is something to think about. Another aspect of trust is, what are the ramifications on me, the user, for making the wrong choice? So, if the suggestion I get from the tool is wrong—this gets to your true positives, false positives, true negatives, false negatives, all that—what is the ramification for me making the wrong choice? That's something that sometimes is an issue and sometimes isn't; it really depends on the context. But the design of that situation should be thought through so the customer can understand the context of what's happening, what might happen to them, or what might happen downstream from them as a result. Another aspect of trust is the ugliness: the aesthetics. So, I know this one probably sounds a little bit strange here. But the short of it is that the way things look—the presentation, the aesthetics, the look and feel; there are lots of different words for this aspect of design, which is what a lot of people tend to think of because it's the most visual, the most surface-y part of design—does matter for this kind of stuff. If your thing looks really ugly and crude, and it looks like an engineering prototype, it suggests that the system—the code, the stuff behind it, the work that went into it—is also ugly and crude and potentially not stable. So, this is a very hard line, I think, for someone who's a non-designer to draw: when is it sufficient? You may or may not hear people comment on this out loud. If you do hear people mentioning it, then it's probably very obvious that there's a problem. So, it's a sliding scale. It’s kind of a—no pun intended—grayscale kind of thing: how much treatment and how much visual love does it need to get?
That's something that probably only a designer is going to be able to help you out with, but I can tell you that it matters in terms of building trust. And people are more willing to struggle with software and applications that look well-polished. There is a study about this, so there are some numbers and facts behind it if you really need that, but in short, the looks matter. So, it's something that you need to be thinking about. And finally, expectations were not set, or are ambiguous, about what the intelligence—the AI here, since we all have to use the shorthand—can or cannot do. A good example of this is something like a chatbot, where users don't really understand the types of questions and the scope of what the system can actually answer. If you just give people this open slate and they expect that it can do anything, there's a good chance that with a few swings at the plate, they may find that it falls short of their expectations, and the next thing you know, all they're trying to do is figure out how to bypass the system. And that can actually create some real frustration, where people are just really annoyed with the technology. So, while you may think that you did a great job with text and natural language processing, and you have all these models in play, and it's in production, and all this kind of stuff, the trust may not be there because the expectations about the intelligence are mismatched with the reality of what the system is able to do. So, that's another trust aspect. Number four, lack of empathy for the customer. So, you know, I heard a quote from a customer who was talking to me about trying to bring a product mindset into his analytics organization, and he was saying, kind of jokingly, how he has some staff members who have stepped up from technical roles into acting product management roles, and there's still this attitude of, “Our job would be so much easier if we didn't have customers.”
And this kind of mindset is not good. If these people are the frontline decision-makers about what your customers and users are going to interact with, that's troubling. Empathy matters. And I know it can sound like this kind of squishy, emotional thing. It really matters if you want to do a good job with this stuff. If you really want to drive adoption, you have to care about this. And not everyone is probably wired to want to go out and do this kind of work. You can train people in this space, but the first part is that you have to decide that it matters. An example of lack of empathy, to me, is that your team has never watched the target user that you're supporting—perhaps another employee—do their job, their work, the activities they do every day, to the point that they can really empathize with that customer; that they can put themselves in that person's shoes when they're back at their desk writing code, or whatever they're doing. They have a deep sense of what it's like to be that person—whether that person is in payroll, or accounting, or some other department—and they've done enough shadowing that they can take themselves out of the equation when they're thinking about design choices, put that other person's hat on, and try to make design choices that are based on what's right for the other person. So, this is the act of avoiding self-referential design, and this is a core tenet of really being a design thinker, or just being a designer. Even if you're not going to become a professional designer, this act of empathy is really important. But another idea under this topic of lack of empathy is, there's little or no routine access to customers to validate the choices.
So, if you don't have the relationships set up and you don't have buy-in from executives—for example, if you're going to be helping procurement with some type of AI or machine learning application, and procurement doesn't know about the work that you're doing or doesn't care, or their leadership's not bought in—you're going to have a hard time getting access to those people, because they're probably going to perceive it as a tax on their time, unless they've requested this service and they feel like it's in their interest to participate. So, it's really important to be opening up these dialogues, and these channels, and routine access, and having what we sometimes call design partners on board. And these are people that you can routinely tap for quick phone calls, email reviews, stuff like that, who can help your team validate the choices they're making along the way. And the final one here would be the lack of prototyping or a fast-fail approach. So, I realize the timelines, especially when you're building models and stuff like this—you know, Agile and some of these things don't work particularly well if you have to build out a lot of plumbing just to get to the point where you can start doing some predictive modeling and all that. But I do think there are ways to do prototyping here that I don't hear talked about too much. Prototyping means creating something very low-fidelity, probably without any code, that can go out and tease out some of the points of failure with this new application—something we can test with customers today, such that we can plan for that now, and not wait until we've built out what we think is a great solution only to find out that it has some major blockers in it that we didn't consider. So, that's all under the theme of lack of empathy. So, the next one here is elastic success criteria.
So, this gets to, “The project is taking so long that nobody really knows what success to the business or the customer looks like, or how to measure it.” Sometimes with these really long projects, people lose sight of what the big picture is, what success is going to be defined by, and how to measure it. This isn't a great way to work, because what happens is, I think, we all focus on getting over the nearest hill, and we focus on things that are easy to measure: code check-ins, or meeting the sprint deadlines, or whatever it may be. If you don't routinely have a way to validate yourself against your progress metrics or your actual success metrics, you have nothing to be accountable to—and the team needs to understand that ultimately, that's what it's about. It's not about creating the model or the thing. That may be their individual responsibility, but someone needs to own the idea that the output of the technology effort is not the success; it's the outcome that's enabled by it. And we're going to talk a little bit more about that in a minute. And actually, right here, I'm going to take a quick short break, and we'll be right back with the final five. All right, we're back with number six here: leadership and skills gaps. So, this is where I hear teams approach new software applications as data and technology projects, or worse, as a ticket in JIRA or some ticketing system. And it's looked at as a one-time thing, and you throw this thing back over the wall, and then you're off to the next ticket. The alternative to that is thinking about these machine learning and AI efforts—especially the projects that specifically involve routine use with humans in the loop—as human-centered data products—not projects—led by a skilled product manager. So, who should be the product manager? I have no idea at your organization who that should be.
It may be someone that you already have on staff. I do know there's a lot of talk that data scientists are being asked to do too many things, wear too many hats, to be these huge generalists; they may not be the right person to be the product manager. But the idea here is, if you think about a successful software initiative as being tied specifically to business and customer outcomes, you need a lot of different stuff. You need to be aware of the design and user experience piece, you need to be aware of the engineering piece, you need to understand the data, the analytics, and the data science pieces of that, and you need to manage that team's overall backlog and the focus of what they're going to be producing. Whose decision is it to say, “Yes, we'll get a model improvement of eight percent if we include these additional features from this pipeline, which doesn't exist yet”? Whose decision is it to say when we should spend the extra two months to build out that extra eight percent model improvement, with all the labor that comes with that, versus fixing some other aspect of it that we know is currently broken right now? You need someone whose job it is to focus on that value, and they're looking at all the facets here. This is what product managers do. And they're also managing up to the business and the leadership. It's a role that I think is missing a lot. It's a foundational role at tech companies—you simply don't go without product managers there; it's table stakes these days—and I think the role of the data product manager is one that a lot of groups are missing. And there is a whole presentation I recently gave. If you come over to my website and go to the speaking page, you can see a talk I gave with the International Institute for Analytics about the missing role of the data product manager as filling this gap in the data science and intelligent software space. So, check that out.
The other aspect here, of course—and I kind of mentioned this—is designers and the role of product or user experience design within the context of intelligent software. So, when and why do you need this? Well, again, if human adoption, usability, utility, and experience matter, and you actually want to make sure that the technology that you created—the models, the predictive power, all these kinds of things—actually gets used and put to work in the business, and is used by the people it was intended for, this is what user experience designers are really good at doing. And, by extension, that can include user interface work as well. You may or may not really need data visualization specialization; it depends on the type of work you're doing. But data vis is really a subset of user experience and product design, in my opinion. Data vis isn't going to fix all the issues here with adoption. And the research that UX people put into figuring out the emotional reasons people will and won't use products, the bridges we need to connect different pieces of software or offline experiences together—they're thinking about experience as a whole, as an enabler of whatever the business outcomes are that you're looking for. And typically, in a tech company, product management and product design are tied at the hip, often with the third hip being an engineering or technical lead. This is what I call the ‘power trio.’ I think it's really important to have all three of those aligned. They all kind of give and take; they push and pull against each other. The designer is constantly advocating for what's right for the customer, the product manager is trying to make sure there's overall business value, and the data scientist or software architect, or whoever the lead technical person is, is thinking about what's doable. What can we get done in the time that's here? Is this the right way to do it? All those implementation factors.
So, I do think there's a large shortage of this, and we don't see a lot of people doing the research that I think would really help get more of these AI products and machine learning applications into production, or quote, “operationalized.” I don't like that word, but I think this would be easier if we were involving more product design and user experience professionals in that loop. So, those are the leadership and skill gaps there: product managers and designers, in particular. I'm assuming that you, the people listening to this, already know which technology people you need in terms of your data scientists and your analytics subject matter experts, analysts, data engineers—all the technical side—but sometimes I think people don't really understand what these other roles are for. What pains and problems do they address within your organization? I'm seeing this change; I'm seeing more and more talk about the product mindset. I talked about it with Karim Lakhani—who co-wrote the Competing in the Age of AI book—recently, so you can go back a couple episodes and listen to that; we talked about these two things very specifically there. So, number seven: there is no intentionally designed onboarding or honeymoon user experience. So, what do I mean by that? Well, I think we all probably know what onboarding is; if you've ever bought an iPhone or an Android or whatever your device is, then right out of the box, you turn it on, and you don't just see a phone dial pad. There's usually some type of whole setup process, and they walk you through what the experience is going to be, and they explain to you, “What is this? Why might you want to turn on this feature?” And they ask you to do it, et cetera. It's a guided experience. Not all software applications need all that setup, but the point is, did you actually give the requisite amount of thought to figure out, “Do we actually need to have some type of intentionally designed onboarding experience here?
What is the risk if we just drop people into this application with no hand-holding?” And again, this is a time to think about those future employees, right. When they come in, what's that experience going to be like the first time they log into the system and use this thing? You don't want to find out that all the great data science and technical work that went into a digital product goes unused because no one ever gets to it—because, you know, they're supposed to use a special credential, and it doesn't use the corporate sign-on, and they're lost in this loop, and they don't get the email for their account, and da, da, da, on and on. There are a million things that you probably wouldn't think are really part of the overall data product work that can end up blocking all of that value that you've created. So, you've got to think about the onboarding piece. And you also need to think about the next phase, which comes after the setup or the onboarding. I call this the honeymoon user experience. And this is about thinking about those first couple weeks—it could be days, it could be weeks, it could be months—about what the early life cycle of a customer using this thing looks like. This might be really important if, say, you're transitioning people from, quote, “The old way to do it,” to, “The new way to do it.” So, if this person used tool X in their past job, and now they're using this intelligent tool Y, it may take time for them to transition over. Another reason here may be, like, if the model needs to be trained on data they're going to provide, well, it might take a while before it's accurate or has enough information. And so you need to think, “Well, if they don't get any value out of this until they've walked around with their phone for 14 days, what is that experience going to be like over those 14 days?” From an engineering perspective or a technical perspective, you could say it's technically accurate.
Like, in 15 days this application will work—but they may never get to 15 days, because they might decide never to log back into the tool, because they don't even remember it exists, because there was never any notification, for example, that came out telling them, “You know what, we've now collected this data. We can now predict the following things. Please log in now.” No one ever thought through any of that, because it wasn't really part of the modeling or the intelligence piece; it was really part of the user experience piece. So, don't forget about the honeymoon experience. This can also include the transition period from “the old way” to “the new way.” And that probably needs to be designed as well. If you've ever been through a really elegant software upgrade, or something where you're switching from one platform to another, this can be a real delighter for your customers. And it's a great way to build initial trust—that feeling of, you know, “We know where you've been. We know how you were doing it in the past. This is the new way to do it. Here are the benefits. We're going to hand-hold you through the entire process, whatever it may be.” That's a great way to build trust and to actually buy yourself some credit, especially if the early honeymoon experience with your intelligent software may not be great out of the box—maybe it takes time before it can really show the value. Well, you might buy yourself some credit by making that transition period so good. And when we talk about transition periods, even with modern software, you'll see this sometimes with SaaS products—though most of the time, the idea of a major redesign doesn't even exist. Most software these days in the tech world is gradually changed over time; they're constantly shipping small increments, so you don't ever really see huge redesigns.
And when they do want to make a sizable shift, particularly in software that's used heavily—like a CRM with the sales team or something like that—they will provide ways to access the old version and the new version simultaneously. So, this could become relevant if you've had, for example, someone who used to have to implicitly eyeball different reports and kind of come up with their own computation, or look at a million views in Tableau to figure out—you know, my favorite comparison is, how many carrots does this grocery store need to buy over the next two months?—and now you're going to provide a model to do that. How do you transition them from that old way to the new way? Giving that intentional thought matters; it will really matter to your customers, and they will value you for it. And again, you're probably buying yourself some credit there. So, the other aspect here is no obvious benefit to the new version of whatever the thing is. This can be where customers perceive the switching cost as simply too high to go to your new version. So, lots of different factors go into this—and there's no hard line between when the honeymoon ends and when you're in what I call the nth-day experience (nth, with the letter N). You will have an infinite number of nth-day experiences, but you'll only have one first-time experience, and you really only have one, kind of, honeymoon experience, which spans a range of time. But you have lots of those nth-day logins and experiences, so just make sure you're designing for all three of those phases. Number eight: the right subject matter experts and stakeholders were not included from the start, because their participation did not necessarily block technical implementation. This gets to the idea of who should be at the table during the design of this solution—during the design of the human-facing, human-interaction piece of the solution that you're building out.
You have a few experts in narrow areas owning too many aspects of the software creation; that can be a problem. Coding check-ins and other technology work that is easily measurable becomes the measure of success here, because you don't have a product manager overseeing the overall value creation; now we're just looking at the engineering piece. There are all these different ways you can get into trouble by not having the right people in the room at the right time. The ones I hear about most frequently involve subject matter expertise not being paired up with the data scientists; you hear this a lot. But I actually think the designers also need to be part of this: the more your solution depends on a human using the tooling and the application properly, the more powerful it is to get that power trio together, especially if the subject matter expertise is not in-house. You've got to get the right people in the room. Two other points about which subject matter experts and stakeholders should be there. One—we hear this with ethics—who are the people that are going to be affected by this software, this new intelligent application or AI, who are not going to be in the room to help create it, but who will be recipients of its effects? Simply have that discussion about who we could really benefit, or who we could totally screw here in how we do this. Be aware of that, and then make a conscious choice to go out and get representation from those groups. Even just having that discussion, I don't think, happens a lot of the time, so that's something you should be doing.
And on a similar note, and this is less about ethics: make sure you get the right internal stakeholders and leaders involved, even ones who may not have seemed essential at the beginning of the project. This often happens with user experience design: when we look at a project from a technical standpoint, we focus on the modeling, the data requirements, the engineering, and all that stuff, and when you bring user experience into the loop, sometimes what you'll find out is, "Wait a second. What the customer does is X, Y, and Z, and then they go over here and do A, B, and C, and then they come back. And the only way they're going to come back is if they get what they need over in A-B-C land first." And guess what? We don't have anybody involved in this project who has any decision-making authority or anything to do with A-B-C land; we probably need to involve them. This could be something like a handoff between sales and marketing, where you have some sales representation but no marketing representation, because there were no marketing requirements in the original ask, yet you're going to affect the marketing department by the work you do for the sales team. The point is, if you don't go through the process of journey mapping and figuring out what user experiences actually look like with this tool, you're not going to know who you need to invite into the room. So, part of it is knowing who needs to be there. The next part is making those invitations and getting the buy-in. And sometimes what can happen here is, in this case, marketing doesn't have time. Or this isn't their project; this is sales' project with IT, or whatever it may be; you get into the fiefdoms. And this is why having buy-in at the high levels to do this stuff the right way, to design these products and solutions the right way, is important.
And if you don't have that buy-in, then what I'd be doing is telegraphing very clearly what the risks are to the project. In this case, it may be, "Well, look. If we don't get marketing involved, no selling is going to happen regardless of this model, because this customer needs A-B-C information before they can do X-Y-Z with our new AI. If we don't know how to properly integrate with those applications technically, and we don't understand the human work, tasks, jobs, and activities that happen, the chance of this entire project being successful is low." So, you have to make that business case, and senior leadership needs to be aware that you may need to pull in time from different departments to participate in the design process. So, that's that. Number nine: tactics and technology, like using Agile or "just get some machine learning and AI into our software," are given a higher priority than producing meaningful outcomes. I sometimes call this the AI hammer: "We need a strategy. Go hire some data scientists; just start doing some AI, because everyone else says we've got to get there." The same thing happened with Agile, and I think there are a lot of places where this is actually still new; it's old news in the tech industry, but it's new news at some larger, traditional companies. You don't just get free, good, better software by using Agile. There's a mindset change that has to happen, and just like what I'm talking to you about, design is also a mindset change that has to happen. So, focusing on the tactics and the tech, going through the rituals of scrums and stand-ups, "we have to hire this kind of person," "we probably need to get this platform up and running because everyone else is using it to do AI," et cetera, et cetera; the list goes on and on.
Those are all focusing on outputs instead of outcomes, and ultimately, if you want to create value, you need to be focused on producing outcomes with the outputs your team is going to put together. So, when you say "success of the project or the product," your data science team might have heard, "Success means high predictive power in the model," whereas business stakeholders and customers might have a completely different idea of what it means. For a customer, it may mean, "I don't have to spend any more time waiting in your phone queue to do X, Y, and Z," or, "The process of contacting customer support is so much better. I don't even want to talk to you guys, but at least you've made it easier for me to do that." Their idea of what success is, is completely different. So, the point is, you'd better make the success criteria really clear, and really focus on the outcome piece, so that you don't get lost in the exercise of the outputs and just focusing on tactics. The last one I want to leave you with today is specific to machine learning. This is when operationalizing the model is seen as someone else's job, or is treated as a distinct phase in the project that is less important than, or simply separate from, the data science work. I understand that, technically speaking, it is separate, and you may have a large enough organization where it makes sense that you really just want your data scientists doing the modeling work. But whoever does the work, the point is that operationalization of the model is not a separate activity. You can't look at it like that. It needs to be integral to the way you approach the system itself, the entire experience. A well-designed experience would not decouple these two things; there's too much overlap between how you successfully operationalize the model and what the model actually is.
Because this gets into things like explainability and interpretability, which we talked about earlier. If you simply build the model and you don't understand whether interpretability of the model matters, because deployment and operationalization of the model was someone else's job, well, you can see what happens here, right? If you had your team create a black-box model that's super highly accurate, but the team that's going to use it is being asked to make a major switch in how they currently do their work, and they don't trust this thing because they have no insight into how the model makes its predictions, then your system is going to fall flat. And this is why you don't want to decouple these things: because otherwise you won't know whether interpretability matters. And it doesn't always matter; there are times when it won't, and then you might not need to use that particular data science technique, or algorithm, or whatever it may be, to produce the outcomes you're trying to get to. I also think it's really important for teams building digital experiences to have customer one-on-one time. They either need to be participating in one-on-one research, or they need to be shadowing and observing it as it happens, preferably live. They need to watch how people do their job. We talked about this earlier with empathizing with the customer, but the more you can do that and make it a regular cadence in your group, the better the technology and applications your team is going to put out. It's going to keep them from going native; it's going to keep them from thinking, "Look, it's not my job, it's somebody else's." When you watch someone suffer, it's difficult; it changes your brain, it changes your approach.
And it's probably the number one thing I like to do, especially when I go into very technology-heavy companies: get the technology people, especially the decision-makers, in front of customers. Sometimes it takes recording some sessions with customers and showing a highlight reel before the light goes on. They're like, "Wow. We had no idea what the resistance to this was going to be. Maybe we can try a small prototype first. Or maybe there's a shorter thing, a smaller increment we could do to test the waters before we jump into a really big project together." So, don't look at operationalizing the model as a separate activity when you're thinking about this. Anyhow, those are my top ten reasons customers don't use or value your team's new machine learning or AI-driven software applications. And if you'd like to get insights like this in your inbox, just head over to designingforanalytics.com/list and you can join my DFA Insights email list. I usually send out stuff anywhere from daily to weekly, at least once a week. Until next time, stay safe, wear those masks. And remember, nobody cares about technically right and effectively wrong data products. Ciao.
38 minutes | Sep 22, 2020
048 - Good vs. Great: (10) Things that Distinguish the Best Leaders of Intelligent Products, Analytics Applications, and Decision Support Tools
Today I’m going solo on Experiencing Data! Over the years, I have worked with many leaders of data-driven software initiatives, with all sorts of titles. In this episode, I focus on what I think makes the top product management and digital/software leaders stand out, particularly in the space of enterprise software, analytics applications, and decision support tools. This episode is for anyone leading a software application or product initiative that has to produce real value, not just a technology output of some kind. When I recorded this episode, I largely had “product managers” in mind, but titles can vary significantly. Additionally, this episode reflects my perspective as a product/UX design consultant and advisor, looking specifically at the traits associated with these leaders’ ability to produce the valuable, innovative solutions customers need and want. A large part of being a successful software leader also involves managing teams and other departments that aren’t directly part of the product strategy and design/creation process; however, I did not go deep into those aspects today. As a disclaimer, my ideas are not based on research; they’re just my opinions. Some of the topics I covered include:
The role of skepticism
The misunderstanding of what it means to be a “PM”
The way top software leaders collaborate with UX professionals, designers, and engineering/tech leads
How top leaders treat UX when building customer-focused technology
How top product management leaders define success and make a strategy design-actionable
The ways in which great PMs enable empathy in their teams and evangelize meaningful user research
The output vs. outcome mindset
© Stitcher 2021