Connecting the Dots
A conversation with user experience and service designer Trip O’Dell
Trip O’Dell has created immersive user experiences for Amazon’s Alexa, last-mile delivery logistics, and many other complex design challenges. He is also outspoken about his dyslexia, and the challenges and advantages it presents.
How has dyslexia been a challenge in your career? An asset?
The challenges that I had growing up forced me to adapt technology to my needs. Whether it was learning to read with audiobooks, or using a word processor for the first time in high school (and getting accused of plagiarism because my writing had improved so much), I had to hack technology to meet my needs.
That habit helped me stay afloat and mitigate weaknesses, and I developed an instinct for finding ways to use new technologies for weird, off-label purposes. I still experiment with new tools all the time, like using Microsoft’s Seeing AI app (intended for people with visual impairments or blindness) to live-scan long-form text from books rather than hand-copying it, which is difficult for me. My dyslexia and dysgraphia make me almost illiterate with pen and paper.
It’s a lot easier for me to talk than it is to write, so I record myself as part of my writing process. I use machine learning to transcribe that audio into text, and then pop it into another machine-learning-backed editor (Grammarly). It’s a cumbersome process, but it helps me express my ideas faster and more effectively. It makes me more impactful as a communicator, and it pushes me to take risks, like being published in places like Fast Company.
I think another benefit of my experience is that it makes me more sensitive to others’ unmet needs. I can see where a design may be failing invisibly, or where new technology can be adapted in non-obvious ways. That’s empathy, and many designers have it, but mine is fine-tuned to see where a person might be struggling with a system in a way they don’t want to admit to a stranger. The 20% of the population who have some degree of dyslexic processing do a lot of things to hide their differences, out of shame.
Combined with some of the strengths of dyslexia, like lateral thinking or “connecting dots”, my dyslexic “gist detection” has helped me become an outlier in my field because I have a history of making accurate connections between observed behaviors and systems design. Those are a few of the ways where I’ve turned weaknesses and hidden differences into strengths.
Now, there are things that I avoid. I am very “out” about my dyslexia, and with tasks like writing on a whiteboard, taking notes in a meeting, or writing a document, I’m fine with saying “no” or designating someone else to do that. I prefer face-to-face video chats over writing emails, because it takes a lot of effort for me to be succinct in writing. Those are some of the ways my dyslexia shapes me. I don’t wear my difference on my sleeve because I want someone’s understanding. It’s taken me a long time, but I don’t carry the burden of the shame that often goes with a hidden disability anymore. However, it’s also important to recognize that difference is one of the things that make me really good at what I do. I’m more comfortable saying “this is the way I work”, because the downstream benefits of that way of working show in the results. I’m a very visual thinker, so using tools like storyboards for expressions of strategy means I’m okay with saying “Yeah, I’m going to show you some cartoons, but those storyboards are going to put the data and the experience into context and help you make decisions faster than if I write a long document or put it in a spreadsheet”.
Can you give us an idea of your process?
I tend to look at problems a little differently. I don’t come from a design background, I didn’t go to art school (and there’s nothing wrong with having gone to art school), but when I sit down to talk with a potential client or an employer, I’ve had feedback like “I was expecting a very different kind of conversation. You don’t sound like designers I’ve worked with before, and I really like what I’m hearing, because you’re kind of speaking my language.” I think that’s a combination of my background in cognitive science and my ability to explain what users are doing in terms of behavior. When something is going wrong with a product, or the humans using it aren’t doing something in the way the team expects, a lot of times it’s because those users have unmet needs, or because you haven’t gone to the trouble of understanding the full context around the problem.
I like to go deep on the context, culture, and behavioral motivations in a system or service design, especially in experiences where you have multiple users, often with competing interests or different points of view on their goals and motivations. Understanding the dynamics of those situations, and how to create a win-win for different customers or users, is helpful. People are not nearly as dumb as a lot of us in technology like to believe they are. They are “dumb like a fox”: they work around constraints, often around the software, to get what they need.
So the more you understand that, the easier it is to make the “right” behaviors the smart and useful ones, and to make the “wrong” behaviors take more effort. That’s typically the way that I approach problems. Then I think about visualizing those more complex problems: things like last-mile logistics, or healthcare delivery in the developing world.
I’m looking at it as telling a story from multiple points of view. It’s almost like filmmaking, which is why I use what I call strategy storyboards. They let you tell a complete end-to-end story that makes sense to people and that features things like ubiquitous computing, where you’ve got mobile and desktop and time-shifting and voice and all of these other things, putting it all into a physical context rather than a bunch of mock-ups.
Wireframes are prototypes where you can only see one part of that experience on a device. I think strategy storyboards help not just the engineering, product, and design teams; they help the business stakeholders understand how doing the right thing is also the self-interested thing to do, because people will pay for something that solves their problem. Users seek pleasure and avoid pain, and we often lose sight of that in design. Understanding the way that people’s brains work helps you understand the problem more deeply. The only part of the person you ever really design for is the brain, so you need to understand the context and motivations first. That’s my process.
What’s your relationship with new technology? Do you have any privacy concerns?
I think my biggest concern around privacy is when you’re exploiting users and not delivering value. But often there’s a fair trade-off between privacy and convenience. For example, IoT door locks that open with Alexa or a smartphone app: you need to authenticate and give the app certain permissions or information about yourself for the experience to work. The developer is likely recording metrics around your comings and goings to increase the product’s accuracy, safety, and ease of use. Alexa is the same way. If you don’t talk to Alexa, it doesn’t get better. But there’s a trade-off. Alexa knows a lot about you, but only when you use the ‘wake’ word. If Alexa were passively listening and recording everything all the time to make the algorithm better, that wouldn’t deliver value to the customer. It would actually exploit the relationship and the trust of the customer. That surveillance wouldn’t serve the purpose the customer bought the device for. That was one of the things I found very ethical about Amazon: how Alexa actually works. Recording and uploading voices all the time, without the ‘wake’ word, is one of the things that would make Alexa faster and smarter, but it doesn’t do that. Alexa is a little harder to use than it otherwise might be, because it’s listening on a continuous 15-second loop for the ‘wake’ word, and it’s not saving any of it.
Alexa has to grab a little bit of context and start passively pushing that context to the cloud, but it often fails because it doesn’t have enough context about the intent. Amazon was a very ethical company in terms of its leadership principles.
So if we said, “Well, we should record everything to make Alexa better,” it wouldn’t fly because of the “Customer Obsession” and “Earn Trust” principles. Recording everything just to make the tech better would totally violate those principles. We didn’t want customers to ever feel that their trust had been violated, and that was something that I really enjoyed about being there.
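The wake-word behavior Trip describes, a short rolling buffer that is continuously overwritten, with nothing leaving the device until a local detector fires, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the helper functions (read_audio_frame, detect_wake_word, capture_utterance, send_to_cloud) are hypothetical placeholders, not Amazon or Alexa APIs, and the sketch is not the actual implementation.

```python
from collections import deque

# Illustrative sketch of the rolling-buffer, wake-word-gated pattern described
# above. The four function arguments are hypothetical placeholders, not real
# Alexa or Amazon APIs.

FRAME_SECONDS = 0.5                        # length of one audio chunk
BUFFER_SECONDS = 15                        # the "continuous 15 second loop"
MAX_FRAMES = int(BUFFER_SECONDS / FRAME_SECONDS)

def listen_loop(read_audio_frame, detect_wake_word, capture_utterance, send_to_cloud):
    """Buffer recent audio locally; nothing is saved or uploaded until the wake word."""
    buffer = deque(maxlen=MAX_FRAMES)      # oldest frames are discarded automatically
    while True:
        buffer.append(read_audio_frame())  # overwrite the loop with the newest frame
        if detect_wake_word(buffer):       # on-device keyword spotting
            context = list(buffer)         # the small bit of context before the wake word
            utterance = capture_utterance()        # record the request that follows
            send_to_cloud(context, utterance)      # only now does audio leave the device
            buffer.clear()                 # drop everything once the request is handled
```

The point of the sketch is simply the property Trip highlights: audio only ever lives in a small, constantly overwritten buffer, and nothing is persisted or sent anywhere until the on-device detector hears the wake word.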
How do you manage the relationship your children have with technology?
My children spend a lot of time on screens, which we’re trying to mitigate. We use a lot of network management tools like “Disney Circle” and “Google Family Link” to keep their in-home screen time and mobile phone use under control. We can just say “No, turn that off,” and disconnect the internet. That helps to defuse some of those fights, but it’s hard. I struggle with being in an intentional relationship with tech in my own life. Being present isn’t easy for me, so I try my best to model what I want from them. I uninstalled Facebook on my phone, even though it’s an important way for me to communicate with friends and family who aren’t nearby. I try to keep productivity-oriented apps in the bottom bar. It’s sort of like putting the cookie jar further back on the shelf, or locking the fridge. It’s definitely a work in progress.
Social media is a tricky one. I think Twitter is a really good example of misaligned incentives. Twitter limits the amount of nuance that you can react with, and Facebook limits the range of emotions that you can react with. There’s no notion of “laughing out loud” vs. “laughing at you” vs. “laughing with you”, or of the nuance you can express in a response that disagrees but is also respectful. So those interactions lend themselves to polarization, and they have weaponized political and social discussion, because people constantly feel like they are either defending their virtue, or defending good vs. evil, or reacting to an attack. That’s a problem. I think we need to think more deeply about the interactions we’re creating. That’s something I think a lot about where my kids are concerned, and what I model. The internet is forever, and I want to be the version of myself that my children believe me to be more often than not.
Should we be friends with computers?
I say “no”. Computers aren’t people. It’s problematic to treat objects like people when we have difficulty not treating people like objects. And I think the difference is the way the brain works, and the way that humans work. I think there’s a sort of imperialism in the tech industry around the notion that we’re making the world better with technology, because obviously we’re smart and we’re solving problems that we’re interested in, while supplanting the values that other people may have with our own.
Values like the right to not be connected, or not to be followed, or not to have your phone decide when you’re allowed to use it. Did you know that when you’re driving, a lot of apps will block you even if a passenger is using the phone? That makes things more complex, and people will work around that system and become even more distracted, which creates a more hazardous experience. It’s not empowering users to be better; it’s just blocking them from their goals. Computers are binary in that way.
Where you’re seeing things that make technology easier to use, like natural language interfaces that emulate a human voice, there’s a trade-off: you get something that’s easier to use in exchange for giving an object possibly more psychological power than it should have in the relationship we’re creating with it. It’s like a prosthetic relationship, right? Say somebody’s very lonely and isolated. Is it sufficient to give them a prosthetic relationship in the form of an AI that can ask, “How are you doing?”, or is it better to find ways to connect them with people who can have genuine human relationships with them? I don’t know the answer to that question.
But those are some of the considerations that we need to go deeper on. We can’t just claim empathy and make people feel better. We assume that we know the problem because we feel it deeply, but there’s a gradient. Picture a four-quadrant matrix. On one axis there is Self-oriented to Other-oriented. On the other axis there is Attachment to Detachment. The Self-oriented with Attachment quadrant would be Pity. The Other-oriented with Attachment quadrant may be Compassion. The Self-oriented and Detached quadrant would be Disdain. The Other-oriented and Detached quadrant would be Empathy. I think something that is high engagement, like compassion, actually requires action. Compassion without action is pity. Empathy requires a level of detachment that designers often don’t own. They’re not willing to own it. It’s much easier to signal virtue, to say “I believe this, and I’m doing this” on Twitter, but they don’t actually try to understand what would motivate somebody to do that thing, or what would put them in that sort of a situation. You end up in a place where design is solving symptoms of pain as opposed to causes of pain, and the causes may not be what the user is telling us.