By Elizabeth Campbell / Illustration by Michael Morgenstern
Computer science professor Lauren Milne’s research focuses on making digital interfaces more accessible for people with disabilities. Her previous projects include a touchscreen-based Braille program, accessible digital games for blind children, and a tool that helps blind programmers navigate code structure. She’s especially drawn to introductory programming environments, which are increasingly part of elementary school curricula, and how they can include kids who are blind or have low vision.
Elizabeth Campbell: What prompted your interest in working on accessible interfaces?
Lauren Milne: I majored in physics in college, but when I graduated, I knew I actually wanted to go into computer science. Before graduate school, I worked as a personal care assistant for a young man who had Angelman syndrome and used an AAC [augmentative and alternative communication] device—basically a pad with icons that he could use to speak and communicate with people. It performed such a vital function, but there were also a lot of pieces that I thought could be improved. Some of those barriers are what led me to look at accessible interfaces.
I got a lot of support along the way, especially from my University of Washington advisor, Richard Ladner, who is passionate about working with students with disabilities. But a lot of it for me was just learning. I had experience working as a personal care assistant for someone with Angelman syndrome, but I did not have experience working with people who are blind. I had to learn about the tools and the community, and stay humble as a sighted researcher about what I don’t know.
EC: Sometimes people who are sighted or don’t have another disability will come in thinking they have all the answers, and we try to say, “Hey, we appreciate you working with us, but listen to us.”
LM: What I try to do is solve real problems. Richard would talk about how students would come in with proposals for fancy new additions to the white cane that often weren’t actually useful. We talked a lot about participatory design, and making sure that you’re solving a problem that’s actually a problem.
EC: What problems did you observe in introductory programming languages and how they work with screen reader technology?
LM: Traditional programming languages are text-based, so they work relatively well with screen readers, software that reads what is on a screen so that a blind person can use a computer or smartphone. But introductory programming languages are a lot more visual now. In particular, block-based programming environments (such as Scratch), which use puzzle piece-like units of code, have become really popular. Because block-based programming relies heavily on visual metaphors, though, it’s not fully accessible for children with visual impairments.
We explored techniques to overcome these barriers and built Blocks4All, an accessible blocks-based environment on an iPad that uses audio and spatial cues to help blind children learn to program. For my project, I had the children figure out how to write a program where a little robot would be able to knock over Jenga towers. They were really pumped about using the robot.
EC: I wonder how many kids have felt encouraged through this work to pursue careers or courses of study after previously feeling discouraged with the lack of access.
LM: That’s one of my underlying goals: to get kids excited about potentially doing research and going into computer science or any STEM-related field. It’s very frustrating when STEM teaching tools aren’t accessible, because it sends such a poor message to students. I want to help mitigate that and say, “You are welcome. Please. We need you in this space.”
EC: Have you encountered opposition from people who don’t see a space for blind people in math, science, and technology?
LM: I don’t think I’ve necessarily faced opposition. A lot of times when I present my research, sighted people are surprised—they just didn’t know or hadn’t considered that blind people are programmers. And a lot of times when I talk with people at computer science education conferences who are showing off awesome new programming environments that make it easy for sighted children to learn to program, I’ll ask, “Have you considered making it accessible?” That’s where I face some opposition. People say, “Oh, we don’t know how to do that,” or “We don’t have the resources to do that.” It’s a next step that never seems to get taken.
EC: How do you incorporate this research into your teaching?
LM: I taught a “Human Computer Interaction” course about research methods in designing technology, and I brought in guest lecturers to talk about designing for someone who might be different from you. For example, a blind researcher talked about how some design tools can actually propagate ableism (discrimination against people with disabilities). The classic example is that people think wearing a blindfold means they understand how someone who is blind would use an interface. And obviously, that doesn’t fully represent a blind person’s experience.
EC: What are you working on now?
LM: I’m working with Professor Abby Marsh, a Mac computer science colleague, to look at security and privacy questions related to using assistive technology. Because a screen reader needs to access the information underlying the interface, developers have to balance keeping a system open and keeping it secure. And what happens if you’re using assistive technology on a website and that fact becomes exposed to the developer? We’re exploring the trade-off of people sharing personal information to improve accessibility. There’s so much data being collected about people. Where that information goes, and how it’s being used, are really interesting questions that people should think about.
Elizabeth Campbell, who is blind, is a reporter at the Fort Worth Star-Telegram in Texas. She has a passion for greater accessibility in the workplace and beyond.
July 26, 2021