Hi, I'm Amanda Swearngin.
I recently completed a Ph.D. in Computer Science at the University of Washington, where I was advised by Amy Ko and James Fogarty. I researched systems and interfaces for UX/UI designers that apply techniques from diverse areas including program analysis, program synthesis, constraint solving, and machine learning.
Through this research, I created systems that help interface designers explore and adapt alternative and example interfaces, and analyze an interface's usability without needing to collect any data. Along the way, I collaborated with industry researchers through internships at Adobe Research and Google, and conducted over 100 interviews and study sessions with interface designers. My research was supported by a National Science Foundation Graduate Research Fellowship.
Previously, I spent three years as a full-time software development engineer at Microsoft, where I helped build a web interface framework for Microsoft Dynamics and specialized in user interface layout, patterns, and visual regression testing.
I’m currently working at Apple as an AI and Accessibility Researcher and Engineer.
Exploring alternatives is a key part of interface design, yet the process of creating alternatives is mostly manual. We created Scout, a system that helps designers rapidly explore alternatives through mixed-initiative interaction with high-level constraints and design feedback. Scout formalizes high-level design concepts and constraints into low-level spatial constraints, enabling it to rapidly generate layout alternatives through constraint solving and program synthesis.
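To give a flavor of the constraint-solving idea, here is a minimal sketch in Python using the Z3 solver; the variables and constraints are hypothetical, not Scout's actual encoding:

```python
# Minimal sketch of layout generation via constraint solving, assuming the
# Z3 Python bindings (pip install z3-solver). The variables and constraints
# are illustrative, not Scout's actual encoding.
from z3 import Int, Or, Solver, sat

header_y, header_h, body_y = Int("header_y"), Int("header_h"), Int("body_y")

s = Solver()
s.add(header_y == 0)                       # pin the header to the top
s.add(header_h >= 48, header_h <= 96)      # keep header height in a design range
s.add(body_y == header_y + header_h + 16)  # body sits 16px below the header

for _ in range(3):                         # enumerate a few distinct alternatives
    if s.check() != sat:
        break
    m = s.model()
    print({d.name(): m[d].as_long() for d in m.decls()})
    s.add(Or(*[d() != m[d] for d in m.decls()]))  # exclude this solution
```

Each solver call yields one concrete layout; blocking each solution and re-solving enumerates distinct alternatives.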
Tapping is an immensely important gesture in mobile interfaces, yet people often must learn through trial and error which elements are tappable. We created TapShoe, an approach for modeling the tappability of mobile interfaces at scale. We collected a crowdsourced dataset of over 20,000 tappability labels and built a deep learning model that predicts tappability automatically, which interface designers can use to evaluate this key aspect of usability without needing to collect any data.
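As a sketch of what a tappability model's training setup can look like, assuming PyTorch and a hypothetical per-element feature vector; this is illustrative, not TapShoe's actual architecture:

```python
# Minimal sketch of a binary tappability classifier, assuming PyTorch and a
# hypothetical feature vector per element (e.g., size, position, contrast).
# Illustrative only; not TapShoe's actual architecture.
import torch
import torch.nn as nn

class TappabilityNet(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 1),          # logit: probability the element is tappable
        )

    def forward(self, x):
        return self.net(x)

model = TappabilityNet()
loss_fn = nn.BCEWithLogitsLoss()        # binary label: tappable vs. not
features = torch.randn(16, 8)           # stand-in for crowdsourced examples
labels = torch.randint(0, 2, (16, 1)).float()
loss = loss_fn(model(features), labels)
loss.backward()                         # gradient for one training step
```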
Interface designers frequently adapt screenshot examples into their designs, yet images are unstructured and difficult to edit. We created Rewire, an interactive system that helps designers leverage example screenshots by automatically inferring a vector representation of a screenshot in which UI components have editable shape and style properties. We demonstrated that Rewire helps designers reconstruct and edit example designs more efficiently than a baseline tool.
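A rough sketch of the pixels-to-vectors idea, using OpenCV contour detection on a hypothetical screenshot.png; Rewire's actual inference pipeline is more sophisticated:

```python
# Minimal sketch of lifting pixels into shape + style properties, assuming
# OpenCV and a hypothetical input image. Rewire's real pipeline is more
# sophisticated; this only illustrates the idea.
import cv2

img = cv2.imread("screenshot.png")      # hypothetical input screenshot
assert img is not None, "screenshot.png not found"

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

components = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h < 100:                     # skip tiny noise contours
        continue
    fill = img[y + h // 2, x + w // 2].tolist()   # sample the center color
    components.append({"x": x, "y": y, "w": w, "h": h, "fill_bgr": fill})

print(components)   # editable shape + style properties instead of raw pixels
```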
Most web applications are designed as one-size-fits-all, despite considerable variation in people’s expertise, physical abilities, and other factors that impact interaction. We created Genie, a system that reverse engineers an abstract model of the underlying commands in a web application, then enables interaction with that functionality through alternative interfaces and other input modalities (e.g., speech, keyboard, or command line input).
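To illustrate the abstraction, here is a minimal command-model sketch in Python; the Command and CommandRegistry names are hypothetical, not Genie's actual API:

```python
# Minimal sketch of an abstract command model, in the spirit of Genie but
# with hypothetical names. Once commands are reified, any modality (speech
# text, keystrokes, a command line) can invoke the same underlying behavior.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Command:
    name: str
    action: Callable[[], None]          # the reverse-engineered behavior

@dataclass
class CommandRegistry:
    commands: Dict[str, Command] = field(default_factory=dict)

    def register(self, cmd: Command) -> None:
        self.commands[cmd.name] = cmd

    def invoke(self, utterance: str) -> None:
        # A real system maps recognized speech, keystrokes, or CLI tokens
        # onto commands; here we match on the command's name directly.
        self.commands[utterance.strip().lower()].action()

registry = CommandRegistry()
registry.register(Command("play", lambda: print("playing video")))
registry.invoke("play")     # could originate from speech, keyboard, or a CLI
```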
Interface designers use human performance models to compare designs against legacy systems or competitors' interfaces, yet these models require manually creating storyboards in human performance modeling tools (e.g., CogTool). We created a system that automatically generates storyboards and predictive models for desktop interfaces, helping UI designers estimate human task performance (CHI 2012) and detect human performance regressions in interfaces (ICSE 2013).
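As a small example of the kind of model involved, here is a Keystroke-Level Model (KLM) estimate in Python, using commonly cited operator times; the task sequence is hypothetical:

```python
# Minimal sketch of a Keystroke-Level Model (KLM) estimate, the kind of human
# performance model CogTool builds on. Operator times are the commonly cited
# KLM values (Card, Moran, and Newell); the task sequence is hypothetical.
KLM_SECONDS = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_task_time(operators: str) -> float:
    """Sum operator times for a sequence like 'MPKKK'."""
    return sum(KLM_SECONDS[op] for op in operators)

# e.g., think, point at a field, then type three characters
print(f"{predict_task_time('MPKKK'):.2f} s")
```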