Monday | CHI 2013

Monday


IWC Enter a 3-letter code in the search box of the CHI 2013 mobile app to go to the corresponding session or presentation. When clickable, a 3-letter code links to the Video Previews web site.

All communities · Design (52) · Engineering (19) · Management (9) · User Experience (41) · Child-Computer Interaction (6) · Digital Arts (7) · Games and Entertainment (13) · Health (8) · Sustainability (4) · HCI for Development (5)

Monday – 9:00-10:20

Grand Special session: Opening Keynote Plenary

  • KOP Keynote Speaker: Paola Antonelli, Senior Curator and Architecture & Design Director for Research & Development, MoMA, New York.

Monday – 11:00-12:20

Blue Papers: Managing Social Media

SCJ Session chair: Louise Barkhuus
  • PSX Paper: The Many Faces of Facebook: Experiencing Social Media as Performance, Exhibition, and Personal Archive
    X. Zhao (Cornell Univ., USA), N. Salehi (Sharif Univ. of Technology, IR), S. Naranjit (Cornell Univ., USA), S. Alwaalan (King Saud Univ., SA), S. Voida (Cornell Univ., USA), D. Cosley (Cornell Univ., USA)

    The growing use of social media means that an increasing amount of people’s lives are visible online. We draw from Goffman’s theatrical metaphor and Hogan’s exhibition approach to explore how people manage their personal collection of social media data over time. We conducted a qualitative study of 13 participants to reveal their day-to-day decision-making about producing and curating digital traces on Facebook. Their goals and strategies showed that people experience the Facebook platform as consisting of three different functional regions: a performance region for managing recent data and impression management, an exhibition region for longer term presentation of self-image, and a personal region for archiving meaningful facets of life. Further, users’ need for presenting and archiving data in these three regions is mediated by temporality. These findings trigger a discussion of how to design social media that support these dynamic and sometimes conflicting needs.

  • PHN Paper: Favors from Facebook Friends: Unpacking Dimensions of Social Capital
    Y. Jung (Michigan State Univ., USA), R. Gray (Michigan State Univ., USA), C. Lampe (Univ. of Michigan, USA), N. Ellison (Univ. of Michigan, USA)

    This paper provides an innovative way of using Williams’ (2006) social capital measures, the most widely used social capital measures in social media studies.

    Past research has demonstrated a link between perceptions of social capital and use of the popular social network site, Facebook. Williams’ Internet Social Capital Scales, based on Putnam’s formulation, tap into sub-dimensions of social capital that have not been broadly used yet may enlighten our understanding of the different ways in which connecting with others online can facilitate access to resources embedded within our social relationships. In this study, we segment Williams’ Internet Social Capital Scales into various sub-dimensions using factor analysis and explicate the distinct facets of social capital through a lab experiment in which Facebook users (N=98) request a small favor from their Facebook network. We find that some sub-dimensions play a significant role in getting favors from Facebook friends while bonding and bridging social capital do not significantly predict responses to favor requests.

  • PDH Paper: Quantifying the Invisible Audience in Social Networks
    M. Bernstein (Stanford Univ., USA), E. Bakshy (Facebook, Inc., USA), M. Burke (Facebook, Inc., USA), B. Karrer (Facebook, Inc., USA)

    When you share content in an online social network, who is listening? Users have scarce information about who actually sees their content, making their audience seem invisible and difficult to estimate. However, understanding this invisible audience can impact both science and design, since perceived audiences influence content production and self-presentation online. In this paper, we combine survey and large-scale log data to examine how well users’ perceptions of their audience match their actual audience on Facebook. We find that social media users consistently underestimate their audience size for their posts, guessing that their audience is just 27% of its true size. Qualitative coding of survey responses reveals folk theories that attempt to reverse-engineer audience size using feedback and friend count, though none of these approaches are particularly accurate. We analyze audience logs for 222,000 Facebook users’ posts over the course of one month and find that publicly visible signals — friend count, likes, and comments — vary widely and do not strongly indicate the audience of a single post. Despite the variation, users typically reach 61% of their friends each month. Together, our results begin to reveal the invisible undercurrents of audience attention and behavior in online social networks.

  • NDL Note: Gender, Topic, and Audience Response: An Analysis of User-Generated Content on Facebook
    Y. Wang (Carnegie Mellon Univ., USA), M. Burke (Facebook, Inc., USA), R. Kraut (Carnegie Mellon Univ., USA)

    This paper identifies topics that men and women talk about in Facebook status updates and determines which topics are more likely to receive feedback.

    Although both men and women communicate frequently on Facebook, we know little about what they talk about, whether their topics differ and how their network responds. Using Latent Dirichlet Allocation (LDA), we identify topics from more than half a million Facebook status updates and determine which topics are more likely to receive feedback, such as likes and comments. Women tend to share more personal topics (e.g., family matters), while men discuss more public ones (e.g., politics and sports). Generally, women receive more feedback than men, but “male” topics (those more often posted by men) receive more feedback, especially when posted by women.

  • NMT Note: Using Contextual Integrity to Examine Interpersonal Information Boundary on Social Network Sites
    P. Shi (The Pennsylvania State Univ., USA), H. Xu (The Pennsylvania State Univ., USA), Y. Chen (Univ. of California, Irvine, USA)

    Although privacy problems in Social Network Sites (SNS) have become more salient than ever in recent years, interpersonal privacy issues in SNS remain understudied. This study aims to generate insights in understanding users’ interpersonal privacy concerns by expounding interpersonal privacy boundaries in SNS. Through a case analysis of Friendship Pages on Facebook, this paper identifies users’ interpersonal privacy concerns that are rooted in informational norms outlined in the theory of contextual integrity, as well as the tensions that occur within and across these informational norms. This paper concludes with a discussion of design implications and future research.

241 Panel

  • LDJ Panel: Calling All Game Changers: BYOD (Bring Your Own Disruption)
    Iram Mirza (moderator), Jannie Lai, Chris Maliwat, Evelyn Huang, Marcy Barton

    This panel welcomes provocateurs who challenge conventional wisdom, take risks, and want to create new products and services. We are focused on looking at disruptive innovation from various key vantage points: education, cultural shift, social networking, and the corporate landscape. Join us if you want to enlist in a successful culture of disruption, and learn how to influence and propagate change throughout your organization.

242AB Papers: Enhancing Access

STJ Session chair: Geraldine Fitzpatrick
  • PFV Paper: Older Adults as Digital Content Producers
    J. Waycott (The Univ. of Melbourne, AU), F. Vetere (The Univ. of Melbourne, AU), S. Pedell, L. Kulik (The Univ. of Melbourne, AU), E. Ozanne (The Univ. of Melbourne, AU), A. Gruner (Benetas Aged Care Services, AU), J. Downs (The Univ. of Melbourne, AU)

    This paper examines the self-expression and social engagement that occurred when older adults used an iPad application to create and share photographs and messages within a small peer community. Older adults are normally characterized as consumers, rather than producers, of digital content. Current research concerning the design of technologies for older adults typically focuses on providing access to digital resources. Access is important, but is often insufficient, especially when establishing new social relationships. This paper investigates the nature and role of digital content that has been created by older adults, for the purpose of forging new relationships. We present a unique field study in which seven older adults (aged 71-92 years), who did not know each other, used a prototype iPad application (Enmesh) to create and share photographs and messages. The findings demonstrate that older adults, even those in the “oldest old” age group, embraced opportunities to express themselves creatively through digital content production. We show that self-expression and social engagement with peers can be realized when socio-technical systems are suitably designed to allow older adults to create and share their own digital content.

  • PRC Paper: Health Vlogger-Viewer Interaction in Chronic Illness Management
    L. Liu (Univ. of Washington, USA), J. Huh (Univ. of Washington, USA), T. Neogi (Univ. of Washington, USA), K. Inkpen (Microsoft Research, USA), W. Pratt (Univ. of Washington, USA)

    Health vlogs allow individuals with chronic illnesses to share experiences. We examined methods that vloggers use to connect with viewers. We present design implications that facilitate sustainable communities for vloggers.

    Health video blogs (vlogs) allow individuals with chronic illnesses to share their stories, experiences, and knowledge with the general public. Furthermore, health vlogs help in creating a connection between the vlogger and the viewers. In this work, we present a qualitative study examining the various methods that health vloggers use to establish a connection with their viewers. We found that vloggers used genres to express specific messages to their viewers while using the uniqueness of video to establish a deeper connection with their viewers. Health vloggers also explicitly sought interaction with their viewers. Based on these results, we present design implications to help facilitate and build sustainable communities for vloggers.

  • PQS Paper: Accessible Online Content Creation By End Users
    K. Kuksenok (Univ. of Washington, USA), M. Brooks (Univ. of Washington, USA), J. Mankoff (Carnegie Mellon Univ., USA)

    End-user generated content is common, yet often not accessible. Our case studies of online communities that create accessible content show the importance of negotiated and community-defined notions of accessibility.

    Like most online content, user-generated content (UGC) poses accessibility barriers to users with disabilities. However, the accessibility difficulties pervasive in UGC warrant discussion and analysis distinct from other kinds of online content. Content authors, community culture, and the authoring tool itself all affect UGC accessibility. The choices, resources available, and strategies in use to ensure accessibility are different than for other types of online content. We contribute case studies of two UGC communities with accessible content: Wikipedia, where authors focus on access to visual materials and navigation, and an online health support forum where users moderate the cognitive accessibility of posts. Our data demonstrate real world moderation strategies and illuminate factors affecting success, such as community culture. We conclude with recommended strategies for creating a culture of accessibility around UGC.

  • PSM Paper: Augmented Endurance: Controlling Fatigue while Handling Objects by Affecting Weight Perception using Augmented Reality
    Y. Ban (The Univ. of Tokyo, JP), T. Narumi (The Univ. of Tokyo, JP), T. Fujii (The Univ. of Tokyo, JP), S. Sakurai (The Univ. of Tokyo, JP), J. Imura (The Univ. of Tokyo, JP), T. Tanikawa (The Univ. of Tokyo, JP), M. Hirose (The Univ. of Tokyo, JP)

    “Augmented Endurance” reveals the implicit effect of augmented reality on our perception of weight and realizes a method to utilize it for human interfaces.

    The main contribution of this paper is to develop a method for alleviating fatigue while handling medium-weight objects and augmenting our endurance by affecting our weight perception with augmented reality technology. To help people lift medium-weight objects without complex structures or high costs, we focus on the phenomenon that our weight perception while handling objects is affected by visual properties. Our hypothesis is that this illusory effect on weight perception can be applied to reduce fatigue while handling medium-weight objects without mechatronics-based physical assistance. In this paper, we propose an augmented reality system that changes the brightness value of an object in order to reduce fatigue while handling the object. We conducted two fundamental experiments to investigate the effectiveness of the proposed system. Our results suggested that the system eliminates the need to use excess energy for handling objects and reduces fatigue during the handling task.

243 Course C01

  • CRL C01: User Interface Design and Adaptation for Multi-Device Environments
    F. Paternò (CNR-ISTI, IT)

    This course provides a discussion of the possible solutions in terms of concepts, techniques, and tools for multi-device interactive applications, accessed by mobile and stationary devices even through different modalities.

    Benefits: This tutorial aims to help user interface designers and developers understand the issues involved in multi-device interactive applications, which can be accessed through mobile and stationary devices even exploiting different interaction modalities (graphical, vocal, …). It will provide a discussion of the possible solutions in terms of concepts, techniques, languages, and tools, with particular attention to Web environments. The tutorial will deal with the various strategies used to adapt, distribute, and migrate the user interface according to the context of use.
    Origins: This tutorial is an updated and extended version of a tutorial given at CHI 2012, Mobile HCI 2010, and INTERACT 2011.
    Features:
      • Issues in multi-device interfaces
      • The influence of the interaction platforms on the suitability of the possible tasks and their structure
      • Authoring multi-device interfaces
      • Model-based design of multi-device interfaces
      • Approaches to automatic adaptation
      • How to address adaptation to various platforms with different modalities (graphical, vocal, …)
      • Distributed user interfaces
      • User interfaces able to migrate and preserve their state
    Audience: The tutorial will be of interest to interactive software developers and designers who want to understand the issues involved in multi-device interactive applications and the space of possible solutions, as well as to researchers who would like an update on the state of the art and research results in the field.
    Presentation: Lectures, demonstrations, exercises, videos, group discussions.
    Instructor background: Fabio Paternò is Research Director at CNR-ISTI, where his main research interests are in user interfaces for ubiquitous environments, model-based design and development, and tools and methods for multi-device interactive applications, including migratory interfaces. In these areas he has coordinated several projects and the development of various tools.

251 Papers: Learning

SSG Session chair: Andruid Kerne
  • PLP Paper: In Search of Learning: Facilitating Data Analysis in Educational Games
    E. Harpstead (Carnegie Mellon Univ., USA), B. Myers (Carnegie Mellon Univ., USA), V. Aleven (Carnegie Mellon Univ., USA)

    We present a toolkit and methodology for recording and analyzing player log data in educational games that allows game designers and researchers multiple ways to explore student learning.

    The field of Educational Games has seen many calls for added rigor. One avenue for improving the rigor of the field is developing more generalizable methods for measuring student learning within games. Throughout the process of development, what is relevant to measure and assess may change as a game evolves into a finished product. The field needs an approach for game developers and researchers to be able to prototype and experiment with different measures that can stand up to rigorous scrutiny, as well as provide insight into possible new directions for development. We demonstrate a toolkit and analysis tools that capture and analyze students’ performance within open educational games. The system records relevant events during play, which can be used for analysis of player learning by designers. The tools support replaying student sessions within the original game’s environment, which allows researchers and developers to explore possible explanations for student behavior. Using this system, we were able to facilitate a number of analyses of student learning in an open educational game developed by a team of our collaborators as well as gain greater insight into student learning with the game and where to focus as we iterate.

  • PLL Paper: Optimizing Challenge in an Educational Game Using Large-Scale Design Experiments
    D. Lomas (Carnegie Mellon Univ., USA), K. Patel (DA-IICT, IN), J. Forlizzi (Carnegie Mellon Univ., USA), K. Koedinger (Carnegie Mellon Univ., USA)

    Online experiments (>80,000 game players in >14,400 conditions) optimized challenge to maximize engagement and learning in an educational game. Alas, what was optimal for engagement was not optimal for learning.

    Online games can serve as research instruments to explore the effects of game design elements on motivation and learning. In our research, we manipulated the design of an online math game to investigate the effect of challenge on player motivation and learning. To test the “Inverted-U Hypothesis”, which predicts that maximum game engagement will occur with moderate challenge, we produced two large-scale (10K and 70K subjects), multi-factor (2×3 and 2×9×8×4×25) online experiments. We found that, in almost all cases, subjects were more engaged and played longer when the game was easier, which seems to contradict the generality of the Inverted-U Hypothesis. Troublingly, we also found that the most engaging design conditions produced the slowest rates of learning. Based on our findings, we describe several design implications that may increase challenge-seeking in games, such as providing feedforward about the anticipated degree of challenge.

  • PME Paper: From Competition to Metacognition: Designing Diverse, Sustainable Educational Games
    S. Foster (UC San Diego, USA), S. Esper (UC San Diego, USA), W. Griswold (UC San Diego, USA)

    Educational games are traditionally single-player games. We evaluate commercial multiplayer games in order to inform the design of educational multiplayer games.

    We investigate the unique educational benefits of 1-on-1 competitive games, arguing that such games can be just as easy to design as single-player educational games, while yielding a more diverse and sustainable learning experience. We present a study of chess and StarCraft II in order to inform the design of similar educational games and their communities. We discuss a competitive game we designed to teach Java programming. We evaluate the game by discussing its user study. Our main contributions are 1) an argument that the use of 1-on-1 competition can solve two existing problems inherent to single-player games, 2) an analysis of the features that make competitive games effective learning environments, and 3) an early but encouraging description of the emergent learning environment one can expect from designing an educational game with these features.

  • PBS Paper: Why Interactive Learning Environments Can Have It All: Resolving Design Conflicts Between Competing Goals
    M. Rau (Carnegie Mellon Univ., USA), V. Aleven (Carnegie Mellon Univ., USA), N. Rummel (Ruhr-Univ. Bochum, DE), S. Rohrbach (Carnegie Mellon Univ., USA)

    We present a principled approach to resolving conflicts between competing goals in educational settings. We provide evidence that our approach led to the development of a successful interactive learning environment.

    Designing interactive learning environments (ILEs; e.g., intelligent tutoring systems, educational games, etc.) is a challenging interdisciplinary process that needs to satisfy multiple stakeholders. ILEs need to function in real educational settings (e.g., schools) in which a number of goals interact. Several instructional design methodologies exist to help developers address these goals. However, they often lead to conflicting recommendations. Due to the lack of an established methodology to resolve such conflicts, developers of ILEs have to rely on ad-hoc solutions. We present a principled methodology to resolve such conflicts. We build on a well-established design process for creating Cognitive Tutors, a highly effective type of ILE. We extend this process by integrating methods from multiple disciplines to resolve design conflicts. We illustrate our methodology’s effectiveness by describing the iterative development of the Fractions Tutor, which has proven to be effective in classroom studies with 3,000 4th-6th graders.

252A Course C04

  • CYU C04: Body, Whys & Videotape: Applying Somatic Techniques to User Experience in HCI
    T. Schiphorst (Simon Fraser Univ., CA), L. Loke (The Univ. of Sydney, AU)

    This course will illustrate how somatic principles can be applied to design and evaluation of user experience methods within HCI utilizing case studies, videos and in-class experiential examples.

    How can HCI designers and practitioners incorporate a somatic perspective within interaction design? This course will enable participants to develop an understanding of how somatic experiential techniques can be used to support design and evaluation of user experience methods within HCI. It will provide multiple examples using case studies, video and in-class exercises that illustrate somatic application to design of technology. The course contextualizes the history of somatic methods within HCI, highlighting the relationships between user experience and the application of somatic principles. It illustrates the benefits and challenges of integrating somatic approaches to experience design in a technological context. Participants will be encouraged to explore somaesthetic strategies and apply them to research. The course addresses differences in epistemological assumptions through contextual practice, discussion and case studies with a strong emphasis on multi-modal examples in the context of HCI design and evaluation.

252B Alt.chi: Reflection and Evaluation

SAP Session chair: Amanda Williams
  • ALU Changing Perspectives on Evaluation in HCI: Past, Present, and Future
    C. MacDonald (Pratt Institute, USA), M. Atwood (Drexel Univ., USA)

    We review the history of evaluation and outline five research directions that will help researchers, practitioners, and educators adapt to meet new evaluation challenges.

    Evaluation has been a dominant theme in HCI for decades, but it is far from being a solved problem. As interactive systems and their uses change, the nature of evaluation must change as well. In this paper, we outline the challenges our community needs to address to develop adequate methods for evaluating systems in modern (and future) use contexts. We begin by tracing how evaluation efforts have been shaped by a continuous adaptation to technological and cultural changes and conclude by discussing important research directions that will shape evaluation’s future.

  • ATE Personal Informatics and Reflection: A Critical Examination of the Nature of Reflection
    A. Pirzadeh (IUPUI, USA), L. He (Indiana Univ. Bloomington, USA), E. Stolterman (Indiana Univ. Bloomington, USA)

    This study critically examined the process of reflection on one’s experiences, thoughts, and insights through design research; and Wandering Mind was designed as a support tool to facilitate this process. Personal informatics systems that help people both collect and reflect on various kinds of personal information are growing rapidly. Despite the importance of journaling and the main role it has in tracking one’s personal growth, a limited number of studies have examined journaling in the area of personal informatics in detail. In this paper, we critically examine the process of reflection on experiences, thoughts and evolving insights through a qualitative research study. We also present the design research process we conducted to develop the Wandering Mind as a support tool to help individuals record and reflect on their experiences.

  • APZ Pattern Language and HCI: Expectations and Experiences
    Y. Pan (Indiana Univ. Bloomington, USA), E. Stolterman (Indiana Univ. Bloomington, USA)

    This paper examines the experiences and expectations that HCI researchers have had with Pattern Language and provides reflections and directions on the use of Pattern Language in HCI.

    Pattern Language (PL) has been researched and developed in HCI research since the mid-80s. Our research was initiated by the question of why something like PL can create such enthusiasm and interest over the years while not becoming more widespread and successful. In this paper, we examine the experiences and expectations that HCI researchers who have been involved in PL research have had and still have when it comes to PL. Based on the literature review and interview studies, we provide some overall reflections and several possible directions on the use of PL in HCI.

  • AJG Comparative Appraisal of Expressive Artifacts
    M. Feinberg (The Univ. of Texas at Austin, USA)

    This paper describes a form of comparative, structured appraisal of expressive artifacts that adds to the existing repertoire of HCI assessment techniques. Comparative appraisal uses a situationally defined procedure to be followed by multiple assessors in examining a group of artifacts. The conceptual basis for this method is drawn from writing assessment.

  • AZX Sound Design As Human Matter Interaction
    X. Sha (Concordia Univ., CA), A. Freed (CNMAT, UC Berkeley, USA), N. Navab (Concordia Univ., CA)

    Realtime responsive sound design provides models for non-anthropocentric approaches to interactions between humans and computational matter. We approach this in light of new materiality and material computation.

    Recently, terms like material computation or natural computing in foundations of computer science and engineering, and new materiality in cultural studies signal a broader turn to conceptions of the world that are not based on solely human categories. While respecting the values of human-centered design, how can we begin to think about the design of responsive environments and computational media while paying as much attention to material qualities like elasticity, density, wear, and tension as to social and cognitive phenomena? This question understands computation as a potential property of matter in a non-reductive way that plausibly spans formal divides between symbolic-semiotic, social, and physical processes. Full investigation greatly exceeds one brief paper. But we open this question in the concrete practices of computational sound and sound design.

  • AVC Crafting Against Robotic Fakelore: On the Critical Practice of ArtBot Artists
    M. Jacobsson (Mobile Life @ Stockholm Univ., SE), Y. Fernaeus (KTH – Royal Institute of Technology, SE), H. Cramer (Yahoo! Labs, USA), S. Ljungblad (Univ. of Gothenburg, SE)

    We report on topics raised in encounters with a series of robotics oriented artworks, which we interpreted as a general critique of what could be framed as robotic fakelore, or mythology. We do this based on interviews held with artists within the community of ArtBots, and discuss how their approach relates to and contributes to the discourse of HCI. In our analysis we outline a rough overview of issues emerging in the interviews and reflect on the broader questions they may pose to our research community.

253 Course C02

  • CJE C02: Six Steps to Successful UX in an Agile World
    H. Beyer (InContext Design, USA), K. Holtzblatt (InContext Design, USA)

    Participants in this course will learn the UX role and tasks at each point in an Agile project, and specific, tested techniques for performing that role effectively.

    Detailed Course Description

    Duration: 80 minutes (1 course unit)

    Linkage to Other Courses: This course is intended to stand alone.

    Learning Objectives: Participants in this course will:
    1. Learn the UX role and tasks at each point in an Agile project
    2. Learn specific, tested techniques for performing that role effectively:
       • Contribute to defining the right user stories
       • Write and prioritize user stories
       • Work out low-level design details within sprints
       • Drive iterations with real user feedback during each sprint
       • Maintain a whole-system perspective on the UI
       • Develop a real, day-to-day collaboration with developers
    3. Practice one key skill—writing user stories
    4. Understand how UX skills contribute to an effective Agile project

    Justification: As Agile development becomes standard across the industry, UX groups are finding it necessary to redefine their relationships to the development projects they are part of. UX groups are also finding that the constraints of Agile development are forcing them to rethink how UX work is done. On the one hand, the tight constraints of short sprints require that all slack be taken out of the process, so that work can be done in small increments at the last responsible moment. On the other, good UX design requires holistic thinking about the entire system, and UX groups are challenged to maintain this focus even during the continual heads-down work of sprints. This course seeks to give UX designers specific, actionable techniques to handle this new situation. We briefly review why UX techniques are critical to delivering on the promise of Agile development—how UX techniques permit the initial project backlog to be developed effectively and ensure that project iterations evolve the product in a direction useful to its users.

    The bulk of the course discusses the six key skills described above—what the UX designer should be doing, why that works in the context of an Agile project, and how the UX designer’s existing skills are critical to supporting Agile development. The discussion of each skill is supported with examples from Agile teams. One critical skill—writing user stories—is practiced in the session. This is a core skill that requires UX designers to think about their design in a different way and break it up in ways they may find counter-intuitive, so focused practice is useful. This course is informed by our years of experience working with Agile teams, so we can describe not only how UX designers should integrate with Agile teams in theory, but how things actually work in practice.

    Content: The course contains the following main parts.

    First, we briefly summarize the origins of Agile development to provide shared context for all participants. We describe the problems developers faced and how these methods gave them control over some of their most intractable problems. We are honest about the shortcomings of the methods as well—being designed by developers, for developers, they are limited in the scope of the problems they attempt to address (the world begins and ends with coding) and in the techniques they use (none of the standard methods for involving users are part of the Agile toolbox). This section is short—the course is not intended as a general introduction to Agile.

    We then describe six industry best practices for incorporating UX work into an Agile team. These techniques are tried and tested, so new Agile teams can depend on them—they don’t have to pioneer them. We discuss:

    1. Bring a user focus to “Phase 0” activities to help define the right user stories. Why a “Phase 0” or “Sprint 0” is needed to drive Agile development; how it is used to drive user story creation; making Phase 0 user-centered and Agile; validating concepts captured in user stories; how much is “just enough” to drive Agile development.

    2. Write and prioritize user stories to deliver the most important user value while accommodating development needs. What a user story looks like; why they are valuable to development; how to split up larger stories into smaller ones; why stories should not be split along component lines but instead should deliver user value; how to balance competing goals when prioritizing stories into sprints. We practice writing user stories so participants can work with the different ways to structure a user story that delivers coherent user value while still being small enough for Agile development.

    3. Work out low-level design details with users within the constrained timeframe of a sprint. The “no BDUF” value means low-level details will be worked out during sprints; when and why to design one sprint ahead and test one sprint behind; alternative methods of interleaving design and development work; “four users every Friday” as a method to bring user data into the process; how to use low-fidelity prototyping to work out design details.

    4. Gather real user feedback on the code as developed in each sprint and work that feedback into the Agile process. How to use user visits to test running code; how to run such user tests; how to design and communicate changes to the development team; alternative methods of working design changes into the backlog; how to maintain overall UI design coherence despite the focused, rapid nature of Agile development.

    5. Maintain a coherent picture of the UI across user stories and sprints. User stories and short sprints make it difficult to maintain a whole-system view of the product being created; how to maintain that view across sprints and across multiple teams working on the same system.

    6. Be a full member of the development team with real collaboration with developers throughout the development process. What it means to be a full team member; where to sit, when to show up, when to have face-to-face discussions; how (and why) to involve developers in aspects of UI design; how to fit collaboration with developers into the tasks of a sprint; how and why to track UI tasks in the team’s tracking tools.

    Assumed Background of Attendees: This course is appropriate for all backgrounds. It is designed especially for UX designers and managers who are currently working with Agile teams and wish to improve their cooperation with those teams, or who expect to be in that situation soon.

    Presentation Format: The course will consist of lecture and an exercise done in pairs, followed by discussion.

    Schedule:
    5 min: Overview of Agile development
    40 min: Techniques for UX involvement on an Agile team
    20 min: Exercise: Writing user stories
    15 min: Summary: The structure of an Agile project

    Audience Size: There is no limit to the number of people who can be in this course. We would only teach the course one time.

    Course History: This course is based on work done with clients in several industries, and on material previously presented in a highly rated course at the CHI 2011 conference. We have focused this course on practical, hands-on advice that participants can use immediately.

    Student Volunteers: We anticipate no unique student volunteer needs.

    Audio Visual Needs and Room Requirements:
    1. Computer projector to be attached to the instructor’s computer, and a large screen
    2. One flipchart easel with paper
    3. Wireless (lavaliere) microphone (so the instructor can move in the audience)

BordeauxSpecial session: Lifetime Research Award

Session chair: Mary Czerwinski
  • LRAAward winner: George Robertson, Retired (formerly Microsoft Research), USA

342APapers: Interaction in the Wild

SKUSession chair: Marina Jirotka
  • PQVPaper: Electric Materialities and Interactive Technology
    J. Pierce (Carnegie Mellon Univ., USA), E. Paulos
    J. Pierce (Carnegie Mellon Univ., USA)E. Paulos (Univ. of California, Berkeley, USA)

    Characterizes electric technology by three forms of materiality: the electric object, its electric materiality, and electric power. Presents and analyzes novel interactive form prototypes. This paper offers new theoretical and design insights into interactive technology. By initially considering electric technology broadly, our work informs how HCI approaches a range of specific interactive or digital things and materials. Theoretically, we contribute a rigorous analysis of electric technology using the experiential lens of phenomenology. A major result is to characterize electric technology by three forms of materiality: the electric object, its electric materiality, and electric power. In terms of design, we present and analyze novel interactive form prototypes. Our theoretical contributions offer new insight into design artifacts, just as our novel design artifacts help reveal new theoretical insight.

  • PJBPaper: A Conversation Between Trees: What Data Feels Like In The Forest
    R. Jacobs (The Univ. of Nottingham, UK), S. Benford, M. Selby, M. Golembewski, D. Price, G. Giannachi
    R. Jacobs (The Univ. of Nottingham, UK)S. Benford (The Univ. of Nottingham, UK)M. Selby (The Univ. of Nottingham, UK)M. Golembewski (The Univ. of Nottingham, UK)D. Price (The Univ. of Nottingham, UK)G. Giannachi (The Univ. of Exeter, UK)

    Study of an environmentally engaged artwork reveals how artists’ strategies of embodying, performing and juxtaposing different views of climate data fostered emotional engagement and interpretation among visitors. A study of an interactive artwork shows how artists engaged the public with scientific climate change data. The artwork visualised live environmental data collected from remote trees, alongside both historical and forecast global CO2 data. Visitors also took part in a mobile sensing experience in a nearby forest. Our study draws on the perspectives of the artists, visitors and a climate scientist to reveal how the work was designed and experienced. We show that the artists adopted a distinct approach that fostered an emotional engagement with data rather than an informative or persuasive one. We chart the performative strategies they used to achieve this, including sensory engagement with data, a temporal structure that balanced liveness with slowness, and the juxtaposition of different treatments of the data to enable interpretation and dialogue.

  • PHCPaper: Unlimited Editions: Three Approaches to the Dissemination and Display of Digital Art
    M. Blythe (Northumbria Univ., UK), J. Briggs, J. Hook, P. Wright, P. Olivier
    M. Blythe (Northumbria Univ., UK)J. Briggs (Northumbria Univ., UK)J. Hook (Newcastle Univ., UK)P. Wright (Newcastle Univ., UK)P. Olivier (Newcastle Univ., UK)

    Three approaches to digital art are explored: “s[edition]”, a limited digital editions website; the iPad “Brushes” Gallery; and a field study using digital frames and an immersive projection room. The paper reflects on three approaches to the dissemination and display of digital art. “s[edition]” is a novel, web-based service that offers limited editions of “digital prints”. Analysis of user comments suggests that the metaphor of a “limited digital edition” raises issues and to some extent is resisted. The second approach is the Flickr Brushes Gallery, where digital painters post images and comment on one another’s work. Analysis of comment boards indicates that the shared art and comments are a form of gift exchange. Finally, the paper discusses a field study in which artists exhibited their work as it developed over time in digital frames and also in an immersive digital projection room. Analysis of field notes and interviews indicates that the digital frame approach was unsuccessful because of aesthetic and environmental concerns. The immersive projection suggested that more experiential approaches may be more interesting. It is argued that there is an inherent resistance in digital media to previous models of art commoditization. None of the approaches discussed here resolves the dilemma; rather, they indicate the scope and complexity of the issues.

  • PBYPaper: ‘See Me, Feel Me, Touch Me, Hear Me’: Trajectories and Interpretation in a Sculpture Garden
    L. Fosh (The Univ. of Nottingham, UK), S. Benford, S. Reeves, B. Koleva, P. Brundell
    L. Fosh (The Univ. of Nottingham, UK)S. Benford (The Univ. of Nottingham, UK)S. Reeves (The Univ. of Nottingham, UK)B. Koleva (The Univ. of Nottingham, UK)P. Brundell (The Univ. of Nottingham, UK)

    Describes the application of the trajectories framework to the design of a user experience in a sculpture garden. Can assist in designing experiences for engagement and interpretation. We apply the HCI concept of trajectories to the design of a sculpture trail. We crafted a trajectory through each sculpture, combining textual and audio instructions to drive directed viewing, movement and touching while listening to accompanying music. We designed key transitions along the way to oscillate between moments of social interaction and isolated personal engagement, and to deliver official interpretation only after visitors had been given the opportunity to make their own. We describe how visitors generally followed our trajectory, engaging with sculptures and making interpretations that sometimes challenged the received interpretation. We relate our findings to discussions of sense-making and design for multiple interpretations, concluding that curators and designers may benefit from considering ‘trajectories of interpretation’.

343Course C03, unit 1/3

  • CVZC03: Rapid Design Labs—A Tool to Turbocharge Design-Led Innovation
    J. Nieters (Hewlett Packard, USA), C. Thompson, A. Pande
    J. Nieters (Hewlett Packard, USA)C. Thompson (zSpace, Inc, USA)A. Pande (Hewlett Packard, IN)

    Jim Nieters, Carola Thompson, and Amit Pande will empower designers and UX teams to act as a catalyst to systemically identify and drive game-changing ideas to market with rapid design labs. Have you ever had a big idea that got crushed? You know, one of those inspiring ideas that could change the world? If you work in a product or design group in a corporation or design firm, you have probably experienced what happens after you share one of those ideas. In the real world, coming up with a breakthrough idea or transformative design doesn’t mean it will automatically get to market. By definition, innovative ideas represent new ways of thinking. Organizations by nature seem to have anti-innovation antibodies that often kill new ideas—even disruptive innovations that could help companies differentiate themselves from their competition. As difficult as coming up with a game-changing idea can be, getting an organization to act on the idea often seems impossible. The course Rapid Design Labs—A Tool to Turbocharge Design-Led Innovation gives you new tools for this challenge, tools that empower designers and UX teams to get breakthrough ideas and designs accepted. Learn how UX can act as a catalyst to systemically identify and drive game-changing ideas to market. Rapid design labs are a design-led, facilitative, cross-functional, iterative approach to innovation that aligns organizations and generates business value each step of the way.

361Special Interest Group

  • GEJDesigning Interactive Secure Systems: CHI 2013 Special Interest Group
    S. Faily (Univ. of Oxford, UK), L. Coles-Kemp, P. Dunphy, M. Just, Y. Akama, A. De Luca
    S. Faily (Univ. of Oxford, UK)L. Coles-Kemp (Royal Holloway, UK)P. Dunphy (Newcastle Univ., UK)M. Just (Glasgow Caledonian Univ., UK)Y. Akama (RMIT Univ. , AU)A. De Luca (Univ. of Munich (LMU), DE)

    Despite a growing interest in the design and engineering of interactive secure systems, there is also a noticeable amount of fragmentation. This has led to a lack of awareness about what research is currently being carried out, and misunderstandings about how different fields can contribute to the design of usable and secure systems. By drawing interested members of the CHI community from design, user experience, engineering, and HCI Security, this SIG will take the first steps towards creating a research agenda for interactive secure system design. In the SIG, we will summarise recent initiatives to develop a research programme in interactive secure system design, network members of the CHI community with an interest in this research area, and initiate a roadmap towards addressing identified research challenges and building an interactive secure system design community.

362/363Special Interest Group

  • GTLHuman Computer Interaction for Development (HCI4D)
    B. Al-Ani (Univ. of California, Irvine, USA), M. Densmore, E. Cutrell, R. Grinter, J. Thomas, A. Dearden, M. Kam, A. Peters
    B. Al-Ani (Univ. of California, Irvine, USA)M. Densmore (Microsoft Research India, IN)E. Cutrell (Microsoft Research India, IN)R. Grinter (Georgia Institute of Technology, USA)J. Thomas (IBM T. J. Watson Research , USA)A. Dearden (Sheffield Hallam Univ., UK)M. Kam (American Institutes for Research, USA)A. Peters (Iowa State Univ., USA)

    We propose a SIG for the Human-Computer Interaction for Development (HCI4D) community. It is designed to foster further collaboration and the dissemination of research results and findings from practitioners, and to promote discussion of how we can learn both from each other and from those we serve in underserved communities, wherever they may be.

HavanePapers: 3D User Interfaces

SLMSession chair: Pierre Cubaud
  • PDQPaper: Pointing at 3D Target Projections with One-Eyed and Stereo Cursors
    R. Teather (York Univ., CA), W. Stuerzlinger
    R. Teather (York Univ., CA)W. Stuerzlinger (York Univ., CA)

    We investigate 2D-projected 3D pointing tasks. In particular, we look at the modeling of perspective-scaled 3D targets in screen-plane pointing, while comparing mouse and remote pointing techniques.We present a study of cursors for selecting 2D-projected 3D targets. We compared a stereo- and mono-rendered (one-eyed) cursor using two mouse-based and two remote pointing techniques in a 3D Fitts’ law pointing experiment. The first experiment used targets at fixed depths. Results indicate that one-eyed cursors only improve screen-plane pointing techniques, and that constant target depth does not influence pointing throughput. A second experiment included pointing between targets at varying depths and used only “screen-plane” pointing techniques. Our results suggest that in the absence of stereo cue conflicts, screen-space projections of Fitts’ law parameters (target size and distance) yield constant throughput despite target depth differences and produce better models of performance.
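    The throughput measure compared across conditions in such pointing experiments is conventionally derived from Fitts’ law. As background only (this is the standard Shannon formulation, not code from the paper; the example distances and times are invented):

    ```python
    import math

    def index_of_difficulty(distance, width):
        """Shannon formulation of Fitts' index of difficulty, in bits."""
        return math.log2(distance / width + 1)

    def throughput(distance, width, movement_time):
        """Throughput in bits/s: index of difficulty divided by movement time."""
        return index_of_difficulty(distance, width) / movement_time

    # Hypothetical trial: a 512 px movement to a 32 px target in 0.85 s.
    id_bits = index_of_difficulty(512, 32)   # log2(17), about 4.09 bits
    tp = throughput(512, 32, 0.85)
    ```

    Constant throughput across target depths, as the abstract reports, means this ratio stays stable even as the projected distance and size vary.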

  • PTKPaper: Creating and Analyzing Stereoscopic 3D Graphical User Interfaces in Digital Games
    J. Schild (Univ. of Duisburg-Essen, DE), L. Bölicke, J. LaViola Jr., M. Masuch
    J. Schild (Univ. of Duisburg-Essen, DE)L. Bölicke (Univ. of Duisburg-Essen, DE)J. LaViola Jr. (Univ. of Central Florida, USA)M. Masuch (Univ. of Duisburg-Essen, DE)

    Supports GUI designers with a design space to create stereoscopic 3D GUIs for games. Our evaluation shows that perceptual, spatial and diegetic integration provide helpful constraints for influencing user experience. Creating graphical user interfaces (GUI) for stereoscopic 3D (S3D) games is a difficult choice between visual comfort and effect. We present an S3D Game GUI Design Space and a list of S3D-specific attributes that emphasizes integrating visually comfortable interfaces into the game world, story and S3D view. To showcase our approach, we created two GUI concepts and evaluated them with 32 users. Our results show quality improvements for a combination of bottom position and visual attachment for a menu. In a referencing interface, placing the reference near to the target depth significantly improved perceived quality and game integration, and increased presence. These results confirm the need to create S3D GUIs with perceptual constraints in mind, demonstrating the potential to extend the user experience. Additionally, our design space offers a formal and flexible way to create new effects in S3D GUIs.

  • PMBPaper: BeThere: 3D Mobile Collaboration with Spatial Input
    R. Sodhi (Univ. of Illinois at Urbana-Champaign, USA), B. Jones, D. Forsyth, B. Bailey, G. Maciocci
    R. Sodhi (Univ. of Illinois at Urbana-Champaign, USA)B. Jones (Univ. of Illinois at Urbana-Champaign, USA)D. Forsyth (Univ. of Illinois at Urbana-Champaign, USA)B. Bailey (Univ. of Illinois at Urbana-Champaign, USA)G. Maciocci (Qualcomm Corporate R&D, UK)

    We contribute a proof-of-concept system and interactions that show how mobile devices equipped with depth sensors can leverage spatial input and knowledge of our 3D environment to enrich communication. We present BeThere, a proof-of-concept system designed to explore 3D input for mobile collaborative interactions. With BeThere, we explore 3D gestures and spatial input which allow remote users to perform a variety of virtual interactions in a local user’s physical environment. Our system is completely self-contained and uses depth sensors to track the location of a user’s fingers as well as to capture the 3D shape of objects in front of the sensor. We illustrate the unique capabilities of our system through a series of interactions that allow users to control and manipulate 3D virtual content. We also provide qualitative feedback from a preliminary user study which confirmed that users can complete a shared collaborative task using our system.

  • NNGNote: SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through Desktop Environment
    J. Lee (Media Lab, USA), A. Olwal, H. Ishii, C. Boulanger
    J. Lee (Media Lab, USA)A. Olwal (Media Lab, USA)H. Ishii (Massachusetts Institute of Technology, USA)C. Boulanger (Microsoft Applied Sciences Group, USA)

    SpaceTop is a concept that integrates 2D and 3D spatial interactions in a desktop workspace. It extends the desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulations. SpaceTop is a concept that fuses spatial 2D and 3D interactions in a single workspace. It extends the traditional desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulations. SpaceTop allows users to type, click, draw in 2D, and directly manipulate interface elements that float in the 3D space above the keyboard. It makes it possible to easily switch from one modality to another, or to simultaneously use two modalities with different hands. We introduce hardware and software configurations for co-locating these various interaction modalities in a unified workspace using depth cameras and a transparent display. We describe new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space. We present the results from a preliminary user study that indicates the benefit of such hybrid workspaces.

  • NRJNote: 3D Object Position using Automatic Viewpoint Transitions
    M. Ortega (CNRS, FR)
    M. Ortega (CNRS, FR)

    IUCA is a new technique for 3D object manipulation. IUCA proposes to interact in a full-resolution perspective view by integrating transient animated transitions to orthographic views into the manipulation task. This paper presents IUCA (Interaction Using Camera Animations), a new interaction technique for 3D object manipulation. IUCA allows efficient interaction in a full-resolution perspective view by integrating transient animated transitions to orthographic views into the manipulation task. This provides interaction in context, with precise object positioning and alignment. An evaluation of the technique shows that, compared to classical configurations, IUCA reduces pointing time by 14% on average. Testing with professional 3D designers and novice users indicates that IUCA is easy to use and learn, and that users feel comfortable with it.

351Papers: Crowdsourcing: People Power

SCXSession chair: Tapan Parikh
  • PFCPaper: Form Digitization in BPO: From Outsourcing to Crowdsourcing?
    J. O’Neill (Xerox Research Centre Europe, FR), S. Roy, A. Grasso, D. Martin
    J. O’Neill (Xerox Research Centre Europe, FR)S. Roy (Xerox Research Centre India, IN)A. Grasso (Xerox Innovation Group, FR)D. Martin (Xerox Research Center Europe, FR)

    This work describes findings from an ethnographic study of an outsourced business process for “form digitization”. It is a first step to understanding how crowdsourcing might be applied to business processes. This paper describes an ethnographic study of an outsourced business process – the digitization of healthcare forms. The aim of the study was to understand how the work is currently organized, with an eye to uncovering the research challenges which need to be addressed if that work is to be crowdsourced. The findings are organised under four emergent themes: Workplace Ecology, Data Entry Skills and Knowledge, Achieving Targets and Collaborative Working. For each theme a description of how the work is undertaken in the outsourcer’s Indian office locations is given, followed by the implications for crowdsourcing that work. This research is a first step in understanding how crowdsourcing might be applied to BPO activities. The paper examines features specific to form digitization – extreme distribution and form decomposition – and lightly touches on the crowdsourcing of BPO work more generally.

  • PMFPaper: Crowdsourcing Performance Evaluations of User Interfaces
    S. Komarov (Harvard Univ., USA), K. Reinecke, K. Gajos
    S. Komarov (Harvard Univ., USA)K. Reinecke (Harvard Univ., USA)K. Gajos (Harvard Univ., USA)

    We explored the feasibility of using Amazon Mechanical Turk for user interface evaluation by replicating three well-known UI experiments both in the lab and online. Online labor markets, such as Amazon’s Mechanical Turk (MTurk), provide an attractive platform for conducting human subjects experiments because the relative ease of recruitment, low cost, and a diverse pool of potential participants enable larger-scale experimentation and a faster experimental revision cycle compared to lab-based settings. However, because the experimenter gives up direct control over the participants’ environments and behavior, concerns about the quality of the data collected in online settings are pervasive. In this paper, we investigate the feasibility of conducting online performance evaluations of user interfaces with anonymous, unsupervised, paid participants recruited via MTurk. We implemented three performance experiments to re-evaluate three previously well-studied user interface designs. We conducted each experiment both in the lab and online with participants recruited via MTurk. The analysis of our results did not yield any evidence of significant or substantial differences in the data collected in the two settings: all statistically significant differences detected in the lab were also present on MTurk, and the effect sizes were similar. In addition, there were no significant differences between the two settings in the raw task completion times, error rates, consistency, or the rates of utilization of the novel interaction mechanisms introduced in the experiments. These results suggest that MTurk may be a productive setting for conducting performance evaluations of user interfaces, providing a complementary approach to existing methodologies.
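    The effect-size comparison the abstract describes can be made concrete with a standard Cohen’s d computation; this is a generic sketch with invented sample data, not the study’s analysis code:

    ```python
    import statistics

    def cohens_d(a, b):
        """Cohen's d for two independent samples, using the pooled
        standard deviation as the denominator."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * statistics.variance(a) +
                      (nb - 1) * statistics.variance(b)) / (na + nb - 2)
        return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

    # Hypothetical task-completion times (seconds) from the two settings.
    lab   = [1.21, 1.35, 1.18, 1.40, 1.27]
    mturk = [1.25, 1.38, 1.22, 1.43, 1.30]
    d = cohens_d(lab, mturk)   # a small |d| would indicate similar performance
    ```

    Comparing such effect sizes across the lab and MTurk samples, rather than only p-values, is what supports the claim that the two settings yield similar data.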

  • PRFPaper: A Multi-Site Field Study of Crowdsourced Contextual Help: Usage and Perspectives of End Users and Software Teams
    P. Chilana (Univ. of Washington, USA), A. Ko, J. Wobbrock, T. Grossman
    P. Chilana (Univ. of Washington, USA)A. Ko (Univ. of Washington, USA)J. Wobbrock (Univ. of Washington, USA)T. Grossman (Autodesk Research, CA)

    We present a field study of a crowdsourced contextual help system deployed on 4 large web sites. Data was collected over several weeks through usage logs, surveys, and interviews. We present a multi-site field study to evaluate LemonAid, a crowdsourced contextual help approach that allows users to retrieve relevant questions and answers by making selections within the interface. We deployed LemonAid on 4 different web sites used by thousands of users and collected data over several weeks, gathering over 1,200 usage logs, 168 exit surveys, and 36 one-on-one interviews. Our results indicate that over 70% of users found LemonAid to be helpful, intuitive, and desirable for reuse. Software teams found LemonAid easy to integrate with their sites and found the analytics data aggregated by LemonAid a novel way of learning about users’ popular questions. Our work provides the first holistic picture of the adoption and use of a crowdsourced contextual help system and offers several insights into the social and organizational dimensions of implementing such help systems for real-world applications.

  • PKKPaper: A Pilot Study of Using Crowds in the Classroom
    S. Dow (Carnegie Mellon Univ., USA), E. Gerber, A. Wong
    S. Dow (Carnegie Mellon Univ., USA)E. Gerber (Northwestern Univ., USA)A. Wong (Carnegie Mellon Univ., USA)

    We contribute early evidence and discuss implications for creating a socio-technical infrastructure to use online crowds to increase the authenticity of innovation education. Industry relies on higher education to prepare students for careers in innovation. Fulfilling this obligation is especially difficult in classroom settings, which often lack authentic interaction with the outside world. Online crowdsourcing has the potential to change this. Our research explores if and how online crowds can support student learning in the classroom. We explore how scalable, diverse, immediate (and often ambiguous and conflicting) input from online crowds affects student learning and motivation for project-based innovation work. In a pilot study with three classrooms, we explore interactions with the crowd at four key stages of the innovation process: needfinding, ideating, testing, and pitching. Students reported that online crowds helped them quickly and inexpensively identify needs and uncover issues with early-stage prototypes, although they favored face-to-face interactions for more contextual feedback. We share early evidence and discuss implications for creating a socio-technical infrastructure to more effectively use crowdsourcing in education.

352ABPapers: Multitouch and Gesture

SFNSession chair: James Fogarty
  • PDEPaper: A Multi-touch Interface for Fast Architectural Sketching and Massing
    Q. Sun (Nanyang Technological Univ., SG), J. Lin, C. Fu, S. Kaijima, Y. He
    Q. Sun (Nanyang Technological Univ., SG)J. Lin (Xiamen Univ., CN)C. Fu (Nanyang Technological Univ., SG)S. Kaijima (Singapore Univ. of Technology and Design, SG)Y. He (Nanyang Technological Univ., SG)

    This paper proposes a novel multi-touch interface for architectural sketching and massing; it offers a rich set of direct finger gestures for rapid prototyping of contemporary building designs. Architectural sketching and massing are used by designers to analyze and explore the design space of buildings. This paper describes a novel multi-touch interface for fast architectural sketching and massing of tall buildings. It incorporates a family of multi-touch gestures, enabling one to quickly sketch the 2D contour of a base floor plan and extrude it to model a building with multi-floor structures. Further, it provides a set of gestures to select and edit a range of floors; scale the contours of a building; copy, paste, and rotate a building (e.g., to create a twisted structure); edit a building’s profile curves; and collapse and remove a selected range of floors. The multi-touch system also allows users to apply textures or geometric facades to the building, and to compare different designs side-by-side. To guide the design process, we describe interactions with a domain expert, a practicing architect. The final interface was evaluated by architects and students in an architecture department, demonstrating that the system allows rapid conceptual design and massing of novel multi-story building structures.

  • PMSPaper: Gesture Studio: Authoring Multi-Touch Interactions through Demonstration and Declaration
    H. Lü (Univ. of Washington, USA), Y. Li
    H. Lü (Univ. of Washington, USA)Y. Li (Google Research, USA)

    We present Gesture Studio, a tool for creating multi-touch interactions. It combines the strengths of both programming by demonstration and declaration in an intuitive UI based on a video-editing metaphor. The prevalence of multi-touch devices opens the space for rich interactions. However, the complexity of creating multi-touch interactions hinders this potential. In this paper, we present Gesture Studio, a tool for creating multi-touch interaction behaviors by combining the strengths of two distinct but complementary approaches: programming by demonstration and declaration. We employ an intuitive video-authoring metaphor for developers to demonstrate touch gestures, compose complicated behaviors, test these behaviors in the tool, and export them as source code that can be integrated into the developers’ project.

  • PLYPaper: EventHurdle: Supporting Designers’ Exploratory Interaction Prototyping with Gesture-Based Sensors
    J. Kim (KAIST (Korea Advanced Institute of Science and Technology), KR), T. Nam
    J. Kim (KAIST (Korea Advanced Institute of Science and Technology), KR)T. Nam (KAIST (Korea Advanced Institute of Science and Technology), KR)

    This paper presents EventHurdle, a visual gesture authoring tool for designers that supports connecting gesture-based sensors, visually intuitive gesture definitions, and easy prototyping without programming expertise. Prototyping gestural interactions in the early phase of design is one of the most challenging tasks for designers without advanced programming skills. Interpreting users’ input from gesture-based sensor values requires a great deal of effort on the designer’s part and disturbs their reflective and creative thinking. To deal with this problem, we present EventHurdle, a visual gesture-authoring tool to support designers’ explorative prototyping. It supports remote gestures from a camera, handheld gestures with physical sensors, and touch gestures by utilizing touch screens. EventHurdle allows designers to visually define and modify gestures through an interaction workspace and a graphical markup language based on hurdles. Because the created gestures can be integrated into a prototype as programming code and automatically recognized, designers do not need to attend to sensor-related implementation details. Two user studies and a recognition test are reported to discuss the acceptance and implications of explorative prototyping tools for designers.

  • NEVNote: Small, Medium, or Large? Estimating the User-Perceived Scale of Stroke Gestures
    R. Vatavu (Univ. Stefan cel Mare of Suceava, RO), G. Casiez, L. Grisoni
    R. Vatavu (Univ. Stefan cel Mare of Suceava, RO)G. Casiez (LIFL & INRIA Lille, Univ. of Lille, FR)L. Grisoni (Univ. Lille, FR)

    We explore scale as a parameter for gesture commands. We deliver a training-free, user- and device-independent scale estimator that can be integrated into existing gestural interfaces with three lines of code. We show that a large consensus exists among users in the way they articulate stroke gestures at various scales (i.e., small, medium, and large), and formulate a simple rule that estimates the user-intended scale of input gestures with 87% accuracy. Our estimator can enhance current gestural interfaces by leveraging scale as a natural parameter for gesture input, reflective of user perception (i.e., no training required). Gesture scale can simplify gesture set design, improve gesture-to-function mappings, and reduce the need for users to learn and for recognizers to discriminate unnecessary symbols.
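    The paper’s actual estimation rule is not reproduced here; as a purely hypothetical sketch of what a compact, training-free estimator of this kind could look like, one might classify a stroke by its bounding-box diagonal (the `small_max` and `large_min` pixel thresholds below are invented for illustration):

```python
import math

def gesture_scale(points, small_max=100.0, large_min=300.0):
    """Classify a stroke gesture as 'small', 'medium', or 'large'
    from its bounding-box diagonal. Thresholds are illustrative
    placeholders, not the values reported in the paper."""
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    if diag < small_max:
        return "small"
    return "large" if diag > large_min else "medium"
```

A real device-independent estimator would likely normalize by screen size or physical units; this sketch only illustrates how little code such a classifier needs.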

  • NJGNote: Indirect Shear Force Estimation for Multi-Point Shear Force Operations
    S. Heo (KAIST (Korea Advanced Institute of Science and Technology), KR), G. Lee
    S. Heo (KAIST (Korea Advanced Institute of Science and Technology), KR)G. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)

    We designed a novel method to indirectly estimate shear forces at multiple points. We show its feasibility by implementing a prototype and a demo application. The possibility of using shear forces has recently been explored as a method to enrich touch screen interaction. However, most of the related studies are restricted to the case of single-point shear forces, possibly owing to the difficulty of independently sensing shear forces at multiple touch points. In this paper, we propose indirect methods to estimate shear forces using the movement of contact areas. These methods enable multi-point shear force estimation, where the estimation is done for each finger independently. We show the feasibility of these methods through an informal user study with a demo application utilizing them.

221/221MLast-minute SIGs: Session 1

Monday – 14:00-15:20

BluePanel

  • LJKWill Massive Online Open Courses (MOOCs) Change Education?
    Scott Klemmer (moderator), Daniel Russell, Elizabeth Losh, Armando Fox, Celine Latulipe, Mitch Duneier
    Scott Klemmer (moderator)Daniel RussellElizabeth LoshArmando FoxCeline LatulipeMitch Duneier

    As has been apparent for the past several months, MOOCs (Massive Online Open Courses) have emerged as a powerful contender for the next new education technology. Yet the landscape of education technology is littered with the remains of previous technological breakthroughs that have failed to live up to their initial promise, or at least their initial rhetoric. Is anything different this time? We strongly believe the answer is yes: this time really is different. Several MOOCs have been run during 2012 that have taught many thousands of students in a variety of topics. This panel will be a chance to review and discuss the short but engaging history of MOOCs, reviewing data from several MOOC instances and critically assessing what’s happening and why things are different. Are MOOCs really a qualitative change in the way education can be delivered, or are they merely another new wrapper for old content? We believe the human experience of online education is about to change; we should understand the issues behind the phenomenon.

241Papers: Gaze

SJLSession chair: Lyn Bartram
  • PPNPaper: Still Looking: Investigating Seamless Gaze-supported Selection, Positioning, and Manipulation of Distant Targets
    S. Stellmach (Technische Univ. Dresden, DE), R. Dachselt
    S. Stellmach (Technische Univ. Dresden, DE)R. Dachselt (Technische Univ. Dresden, DE)

    Describes and compares interaction techniques for combining gaze/head and touch input for fluently selecting, positioning and manipulating distant graphical objects. This can support more seamless interactions with distant displays. We investigate how to seamlessly bridge the gap between users and distant displays for basic interaction tasks, such as object selection and manipulation. For this, we take advantage of very fast and implicit, yet imprecise, gaze- and head-directed input in combination with ubiquitous smartphones for additional manual touch control. We have carefully elaborated two novel and consistent sets of gaze-supported interaction techniques based on touch-enhanced gaze pointers and local magnification lenses. These conflict-free sets allow for fluently selecting and positioning distant targets. Both sets were evaluated in a user study with 16 participants. Overall, users were fastest with a touch-enhanced gaze pointer for selecting and positioning an object after some training. While the positive user feedback for both sets suggests that our proposed gaze- and head-directed interaction techniques are suitable for convenient and fluent selection and manipulation of distant targets, further improvements are necessary for more precise cursor control.

  • PSGPaper: Individual User Characteristics and Information Visualization: Connecting the Dots through Eye Tracking
    D. Toker (Univ. of British Columbia, CA), C. Conati, B. Steichen, G. Carenini
    D. Toker (Univ. of British Columbia, CA)C. Conati (Univ. of British Columbia, CA)B. Steichen (Univ. of British Columbia, CA)G. Carenini (Univ. of British Columbia, CA)

    We present results from an eye tracking user study, showing that a user’s cognitive abilities have a significant impact on gaze behavior when performing common information visualization tasks. There is increasing evidence that users’ characteristics such as cognitive abilities and personality have an impact on the effectiveness of information visualization techniques. This paper investigates the relationship between such characteristics and fine-grained user attention patterns. In particular, we present results from an eye tracking user study involving bar graphs and radar graphs, showing that a user’s cognitive abilities such as perceptual speed and verbal working memory have a significant impact on gaze behavior, both in general and in relation to task difficulty and visualization type. These results are discussed in view of our long-term goal of designing information visualization systems that can dynamically adapt to individual user characteristics.

  • TPXTOCHI: Study of Polynomial Mapping Functions in Video-Oculography Eye Trackers
    J. Cerrolaza (Public Univ. of Navarra, ES), A. Villanueva, R. Cabeza
    J. Cerrolaza (Public Univ. of Navarra, ES)A. Villanueva (Public Univ. of Navarra, ES)R. Cabeza (Public Univ. of Navarra, ES)

    In this study we shed light on one of the most widely employed and least explored techniques for gaze estimation in eye-tracking systems, and derive precise yet simpler alternative equations. Gaze-tracking data have been used successfully in the design of new input devices and as an observational technique in usability studies. Polynomial-based Video-Oculography (VOG) systems are one of the most attractive gaze estimation methods thanks to their simplicity and ease of implementation. Although the functionality of these systems is generally acceptable, there has been no thorough comparative study to date of how the mapping equations affect the final system response. After developing a taxonomic classification of calibration functions, we examined over 400,000 models and evaluated the validity of several conventional assumptions. Our rigorous experimental procedure enabled us to optimize the calibration process for a real VOG gaze-tracking system and halve the calibration time while avoiding a detrimental effect on accuracy or tolerance to head movement. Finally, a geometry-based method is implemented and tested. Its results and performance are compared with those obtained with the general-purpose expressions.

  • NARNote: EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour
    A. Bulling (Max Planck Institute for Informatics, DE), C. Weichel, H. Gellersen
    A. Bulling (Max Planck Institute for Informatics, DE)C. Weichel (Lancaster Univ., UK)H. Gellersen (Lancaster Univ., UK)

    We present EyeContext, a system to automatically infer high-level contextual cues from visual behaviour. We demonstrate the large information content available in long-term visual behaviour that is potentially useful for eye-based behavioural monitoring or life logging. In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings with a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.

  • NEKNote: A Preliminary Investigation of Human Adaptations for Various Virtual Eyes in Video See-Through HMDs
    J. Lee (Korea Institute of Science and Technology (KIST), KR), S. Kim, H. Yoon, B. Huh, J. Park
    J. Lee (Korea Institute of Science and Technology (KIST), KR)S. Kim (Korea Institute of Science and Technology (KIST), KR)H. Yoon (Korea Institute of Science and Technology (KIST), KR)B. Huh (KIST(Korea Institute of Science and Technology), KR)J. Park (Korea Institute of Science and Technology (KIST), KR)

    We investigated whether any differences in visuomotor performance and adaptation trends exist across 16 distinct VD conditions. The performance tasks studied were of two types: foot placement and finger touch. A video see-through head mounted display (HMD) has a different viewing point than the real eye, resulting in visual displacement (VD). VD deteriorates visuomotor performance due to sensory conflict. Previous work has investigated this deterioration and human adaptation by comparing fixed VD and real eye conditions. In this study we go a step further to investigate whether any differences in visuomotor performance and adaptation trends exist across 16 distinct VD conditions. The performance tasks studied were of two types: foot placement and finger touch. In contrast to our initial prediction, the results showed equal task performance levels and adaptation within about 5 minutes regardless of VD conditions. We found that human adaptation covered a variety of VDs: up to 55 mm in the X and Y directions; up to 125 mm in the Z direction; and up to 140 mm of interocular distance (IOD). In addition, we found that partial adaptation gave participants the interesting experience of a sense of body structure distortion for a few minutes.

242ABPapers: Technologies for Life 1

SSKSession chair: Jeffrey Bigham
  • PKYPaper: ‘Digital Motherhood’: How does technology help new mothers?
    L. Gibson (Univ. of Dundee, UK), V. Hanson
    L. Gibson (Univ. of Dundee, UK)V. Hanson (Univ. of Dundee, UK)

    This research identified two themes where technology supports mothers: improving confidence and being more than ‘just’ a mother. Findings have implications for digital engagement, digital identity and social networking. New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than ‘just’ a mother. We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.

  • PTPPaper: Age-Related Performance Issues for PIN and Face-Based Authentication Systems
    J. Nicholson (Northumbria Univ., UK), L. Coventry, P. Briggs
    J. Nicholson (Northumbria Univ., UK)L. Coventry (Northumbria Univ., UK)P. Briggs (Northumbria Univ., UK)

    A PIN system and a face-based graphical system were evaluated with younger and older adults. Older adults benefitted most from own-age faces, while younger adults performed well with faces overall. Graphical authentication systems typically claim to be more usable than PIN or password-based systems, but these claims often follow limited, single-stage paradigm testing on a young, student population. We present a more demanding test paradigm in which multiple codes are learned and tested over a three-week period. We use this paradigm with two user populations, comparing the performance of younger and older adults. We first establish baseline performance in a study in which populations of younger and older adults learn PIN codes, and we follow this with a second study in which younger and older adults use two face-based graphical authentication systems employing young faces vs. old faces as code components. As expected, older adults show relatively poor performance when compared to younger adults, irrespective of the authentication material, but this age-related deficit can be markedly reduced by the introduction of age-appropriate faces. We conclude firstly that this paradigm provides a good basis for the future evaluation of memory-based authentication systems and secondly that age-appropriate face-based authentication is viable in the security marketplace.

  • PAYPaper: The Presentation of Health-Related Search Results and Its Impact on Negative Emotional Outcomes
    C. Lauckner (Michigan State University, USA), G. Hsieh
    C. Lauckner (Michigan State University, USA)G. Hsieh (Michigan State University, USA)

    This experiment demonstrates features of health symptom search results that can influence negative emotional outcomes, with results suggesting strategies for web developers and users to help avoid such effects. Searching for health information online has become increasingly common, yet few studies have examined potential negative emotional effects of online health information search. We present results from an experiment manipulating the presentation of search results for common symptoms, which shows that the frequency and placement of serious illness mentions within results can influence perceptions of symptom severity and susceptibility of having the serious illness, respectively. The increase in severity and susceptibility can then lead to higher levels of negative emotional outcomes, including feeling overwhelmed and frightened. Interestingly, health literacy can help reduce perceived symptom severity, and high online health experience actually increases the likelihood that individuals use a frequency-based heuristic. Technological implications and directions for future research are discussed.

  • NDFNote: Age-Related Differences in Performance with Touchscreens Compared to Traditional Mouse Input
    L. Findlater (Univ. of Maryland, USA), J. Froehlich, K. Fattal, J. Wobbrock, T. Dastyar
    L. Findlater (Univ. of Maryland, USA)J. Froehlich (Univ. of Maryland, USA)K. Fattal (Univ. of Maryland, USA)J. Wobbrock (Univ. of Washington, USA)T. Dastyar (Univ. of Maryland, USA)

    We compared performance of older and younger adults on a range of desktop and touchscreen tasks. The touchscreen reduced the performance gap between the two groups relative to the desktop. Despite the apparent popularity of touchscreens for older adults, little is known about psychomotor performance with these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased.

  • NLHNote: Access Lens: A Gesture-Based Screen Reader for Real-World Documents
    S. Kane (Univ. of Maryland, Baltimore County, USA), B. Frey, J. Wobbrock
    S. Kane (Univ. of Maryland, Baltimore County, USA)B. Frey (Univ. of Maryland, Baltimore County, USA)J. Wobbrock (Univ. of Washington, USA)

    Introduces Access Lens, a computer vision-based system that combines gesture tracking and optical character recognition to enable blind people to explore physical documents using accessible gestures. Gesture-based touch screen user interfaces, when designed to be accessible to blind users, can be an effective mode of interaction for those users. However, current accessible touch screen interaction techniques suffer from one serious limitation: they are only usable on devices that have been explicitly designed to support them. Access Lens is a new interaction method that uses computer vision-based gesture tracking to enable blind people to use accessible gestures on paper documents and other physical objects, such as product packages, device screens, and home appliances. This paper describes the development of Access Lens hardware and software, the iterative design of Access Lens in collaboration with blind computer users, and opportunities for future development.

243Course C05, unit 1/2

  • CPQC05: Practical Statistics for User Experience Part I
    J. Sauro (Measuring Usability LLC, USA), J. Lewis
    J. Sauro (Measuring Usability LLC, USA)J. Lewis (IBM, USA)

    Learn to generate confidence intervals and compare two designs using rating scale data, binary measures and task times for large and small sample sizes. If you don’t measure it you can’t manage it. Usability analysis and user research is about more than rules of thumb, good design and intuition: it’s about making better decisions with data. Is Product A faster than Product B? Will more users complete tasks on the new design? Learn how to conduct and interpret appropriate statistical tests on small and large sample usability data, then communicate your results in easy-to-understand terms to stakeholders. Features: 1. Get a visual introduction or refresher to the most important statistical concepts for applied use. 2. Be able to compare two interfaces or versions (A/B testing) by showing statistical significance (e.g. Product A takes 20% less time to complete a task than Product B, p < .05). 3. Clearly understand both the limits of and the data available from small-sample usability tests through the use of confidence intervals. Audience: Open to anyone who’s interested in quantitative usability tests. Participants should be familiar with the process of conducting usability tests as well as basic descriptive statistics such as the mean, median and standard deviation, and have access to Microsoft Excel.
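    As a flavor of the kind of A/B comparison the course covers, here is a minimal illustrative sketch (not course material; the task times and the ~95% critical value of 2.0 are assumptions for the example) of a confidence interval on the difference of mean task times between two designs:

```python
import math
from statistics import mean, stdev

def diff_ci(a, b, t_crit=2.0):
    """Approximate confidence interval for the difference of means of
    two independent samples. t_crit ~ 2.0 roughly approximates the 95%
    t critical value for moderate sample sizes."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    d = mean(a) - mean(b)
    return d - t_crit * se, d + t_crit * se

# Hypothetical task times (seconds) for designs A and B
a = [38, 42, 35, 40, 44, 39, 41, 37]
b = [45, 50, 47, 52, 44, 49, 48, 51]
lo, hi = diff_ci(a, b)
# If the interval excludes 0, the difference is significant at roughly the 95% level
```

For real analyses one would use the exact t critical value for the appropriate degrees of freedom (e.g. via a statistics package) rather than the 2.0 shorthand.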

251Papers: Evaluation Methods 1

SQBSession chair: Anthony Jameson
  • PEXPaper: LEMtool – Measuring Emotions in Visual Interfaces
    G. Huisman (Univ. of Twente, NL), M. Van Hout, E. van Dijk, T. Van der Geest, D. Heylen
    G. Huisman (Univ. of Twente, NL)M. Van Hout (SusaGroup, NL)E. van Dijk (Univ. of Twente, NL)T. Van der Geest (Univ. of Twente, NL)D. Heylen (Univ. of Twente, NL)

    The paper describes the development and validation of the LEMtool: a non-verbal self-report method for indicating emotions during interaction with a visual interface. In this paper the development process and validation of the LEMtool (Layered Emotion Measurement tool) are described. The LEMtool consists of eight images that display a cartoon figure expressing four positive and four negative emotions using facial expressions and body postures. The instrument can be used during interaction with a visual interface, such as a website, and allows participants to select elements of the interface that elicit a certain emotion. The images of the cartoon figure were submitted to a validation study, in which participants rated the recognizability of the images as specific emotions. All images were found to be recognizable above chance level. In another study, the LEMtool was used to assess visual appeal judgements of a number of web pages. The LEMtool ratings were supported by visual appeal ratings of the web pages both for very brief (50 milliseconds) and for long (free-viewing) stimulus exposures. Furthermore, the instrument provided insight into the elements of the web pages that elicited the emotional responses.

  • PKSPaper: Exploring Personality-Targeted UI Design in Online Social Participation Systems
    O. Nov (Polytechnic Institute of New York Univ., USA), O. Arazy, C. López, P. Brusilovsky
    O. Nov (Polytechnic Institute of New York Univ., USA)O. Arazy (Univ. of Alberta, CA)C. López (Univ. of Pittsburgh, USA)P. Brusilovsky (Univ. of Pittsburgh, USA)

    We show how personality-targeted UI design can be more effective than design applied to entire populations, much like a medical treatment applied to a person based on his genetic profile. We present a theoretical foundation and empirical findings demonstrating the effectiveness of personality-targeted design. Much like a medical treatment applied to a person based on his specific genetic profile, we argue that theory-driven, personality-targeted UI design can be more effective than design applied to the entire population. The empirical exploration focused on two settings, two populations and two personality traits: Study 1 shows that users’ extraversion level moderates the relationship between the UI cue of audience size and users’ contribution. Study 2 demonstrates that the effectiveness of social anchors in encouraging online contributions depends on users’ level of emotional stability. Taken together, the findings demonstrate the potential and robustness of the interactionist approach to UI design. The findings contribute to the HCI community, and in particular to designers of social systems, by providing guidelines for targeted design that can increase online participation.

  • PFSPaper: Designing and Theorizing Co-Located Interactions
    T. Reitmaier (Univ. of Cape Town, ZA), P. Benz, G. Marsden
    T. Reitmaier (Univ. of Cape Town, ZA)P. Benz (Univ. of Cape Town, ZA)G. Marsden (Univ. of Cape Town, ZA)

    This paper gives an interwoven account of the theoretical and practical work we undertook in pursuit of designing co-located interactions on mobile devices. This paper gives an interwoven account of the theoretical and practical work we undertook in pursuit of designing co-located interactions. We show how we sensitized ourselves to theory from diverse intellectual disciplines, to develop an analytical lens with which to think more clearly about co-located interactions. By critiquing current systems and their conceptual foundations, and further interrelating theories particularly in regard to performative aspects of identity and communication, we develop a more nuanced way of thinking about co-located interactions. Drawing on our sensitivities, we show how we generated and are exploring, through the process of design, a set of co-located interactions that are situated within our social ecologies, and contend that our upfront theoretical work enabled us to identify and explore this space in the first place. This highlights the importance of problem framing, especially for projects adopting design methodologies.

  • NDPNote: Scenario-Based Interactive UI Design
    K. Kusano (NTT Service Evolution Laboratories, JP), M. Nakatani, T. Ohno
    K. Kusano (NTT Service Evolution Laboratories, JP)M. Nakatani (NTT Service Evolution Laboratories, JP)T. Ohno (NTT Service Evolution Laboratories, JP)

    Our proposal is a novel tool that enhances the designer’s skill in writing scenarios and designing UIs smoothly and easily. Clearly picturing user behavior is one of the key requirements when designing successful interactive software. However, covering all possible user behaviors with one UI is a complex challenge. The Scenario-based Interactive UI Design tool is designed to support the characterization of user behavior based on scenarios and then use that information in UI design. Scenarios make it easy to understand and share user behavior even with little design knowledge. However, they have two big weaknesses: 1) integrating several scenarios in one UI is difficult, even if we can create appropriate scenarios; 2) maintaining the links between scenarios and the UI is a heavy task in iterative design. Our tool solves these problems through its hierarchical scenario structure and visualized overview of scenarios. It enhances the designer’s skill in writing scenarios and designing UIs smoothly and easily.

  • NLSNote: Regularly Visited Patches in Human Mobility
    Y. Qu (placenous.com, USA), J. Zhang
    Y. Qu (placenous.com, USA)J. Zhang (Pitney Bowes Inc., USA)

    This paper proposes a new analytic unit for human mobility research – the patch. Regularly Visited Patches (RVP) identified from GPS-based location data were analyzed, revealing fundamental mobility patterns. In this paper, we propose a new analytic unit for human mobility analysis – the patch. We developed a process to identify Regularly Visited Patches (RVP) and a set of metrics to characterize and measure their spatial patterns. Using a large dataset of Foursquare check-ins as a test bed, we show that RVP analysis reveals fundamental patterns of human mobility and will lead to promising research with strong implications for businesses.
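    The paper’s actual RVP identification process is not described here; as a purely hypothetical sketch of the general idea, one could snap check-in coordinates to a coarse grid and keep cells visited on several distinct days (the grid size and day threshold below are invented for illustration):

```python
from collections import defaultdict

def regularly_visited(checkins, cell=0.01, min_days=3):
    """Group (lat, lon, day) check-ins into grid cells and return the
    cells visited on at least min_days distinct days. Grid size and
    threshold are illustrative placeholders, not the paper's parameters."""
    days_per_cell = defaultdict(set)
    for lat, lon, day in checkins:
        key = (round(lat / cell), round(lon / cell))
        days_per_cell[key].add(day)
    return {k for k, days in days_per_cell.items() if len(days) >= min_days}
```

A real patch definition would likely use spatial clustering rather than a fixed grid, but the sketch conveys how recurring visits separate habitual places from one-off ones.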

252ACourse C07, unit 1/2

  • CVCC07: Speech-based Interaction: Myths, Challenges, and Opportunities
    C. Munteanu (National Research Council Canada, CA), G. Penn
    C. Munteanu (National Research Council Canada, CA)G. Penn (Univ. of Toronto, CA)

    Learn how speech recognition and synthesis work, what their limitations and usability challenges are, how they can enhance interaction paradigms, and what the current research and commercial state of the art is. Speech remains the “holy grail” of interaction, as this is the most natural form of communication that humans employ. Unfortunately, it is also one of the most difficult modalities for machines to understand – despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research effort, in engineering, linguistics and psychology, has been spent on improving machines’ ability to understand and synthesize speech, the HCI community has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating interfaces that use speech and natural language as an input or output modality. While the accuracy of understanding speech input is still discouraging for many applications under less-than-ideal conditions, several interesting areas have yet to be explored that could make speech-based interaction truly hands-free. The goal of this course is to inform the HCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for HCI researchers and practitioners to learn more about how speech recognition and synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms.

252BPapers: Co-Design with Users

SKRSession chair: Andrea Parker
  • PRGPaper: Large-Scale Participation: A Case Study of a Participatory Approach to Developing a New Public Library
    P. Dalsgaard (Aarhus Univ., DK), E. Eriksson
    P. Dalsgaard (Aarhus Univ., DK)E. Eriksson (Chalmers Univ. of Technology, SE)

    A case study of a participatory large-scale project in the development of a new public library, with a range of activities and the main lessons from the project. In this paper, we present a case study of a participatory project that focuses on interaction in large-scale design, namely, the development of the new Urban Mediaspace Aarhus. This project, which has been under way for ten years, embodies a series of issues that arise when participatory design approaches are applied to large-scale, IT-oriented projects. At the same time, it highlights the issues public knowledge institutions face when interactive technologies challenge their fundamental roles and practices; by extension, this case offers examples of how these challenges may be explored and addressed through IT-based participatory initiatives. We present a range of such activities carried out during the past ten years, and present the main lessons from the project, based on interviews with three key stakeholders. These lessons focus on how to make participation work in practice, how to align different paradigms of inquiry and practice in a project of this scale, and how to capture and anchor the insights from participatory events to inform the ongoing design process.

  • PSSPaper: Probing Bus Stop for Insights on Transit Co-design
    D. Yoo (Univ. of Washington, USA), J. Zimmerman, T. Hirsch
    D. Yoo (Univ. of Washington, USA)J. Zimmerman (Carnegie Mellon Univ., USA)T. Hirsch (Univ. of Washington, USA)

    We investigate how social computing might support citizens in co-designing their transit service. We conducted a field study with public transit riders, exploring the issues and controversies that reveal conflicting communities. Social computing provides a new way for citizens to engage with their public service. Our research investigates how social computing might support citizens in co-designing their transit service. We conducted a field study with public transit riders, exploring the issues and controversies that reveal conflicting communities. Our analyses revealed three insights. First, encourage citizens to share what they see as the rationale for current service offerings. Second, encourage citizens to share the consequences of current services and of proposed changes and new designs. Third, focus on producing a shared citizen and service provider understanding of what the goals and mission of the public service should be.

  • PSRPaper: A Value Sensitive Action-Reflection Model: Evolving a Co-Design Space with Stakeholder and Designer Prompts
    D. Yoo (Univ. of Washington, USA), A. Huldtgren, J. Woelfer, D. Hendry, B. Friedman
    D. Yoo (Univ. of Washington, USA)A. Huldtgren (Delft Univ. of Technology, NL)J. Woelfer (Univ. of Washington, USA)D. Hendry (Univ. of Washington , USA)B. Friedman (Univ. of Washington, USA)

    We introduce the Value Sensitive Action-Reflection Model: a co-design method focused on the social context of use and the values that lie with individuals, groups, and societies. We introduce a design method for evolving a co-design space to support stakeholders untrained in design. Specifically, the purpose of the method is to expand and shape a co-design space so that stakeholders, acting as designers, focus not only on the form and function of a tool being envisioned but also on the social context of its use and the values that lie with individuals, groups, and societies. The method introduces value sensitive stakeholder prompts and designer prompts into a co-design process, creating a particular kind of reflection-on-action cycle. The prompts provide a means for bringing empirical data on values and theoretical perspective into the co-design process. We present the method in terms of a general model, the Value Sensitive Action-Reflection Model; place the model within discourse on co-design spaces; and illustrate the model with a discussion of its application in a lo-fi prototyping activity around safety for homeless young people. We conclude with reflections on the model and method.

  • PBEPaper: Configuring Participation: On How We Involve People In Design
    J. Vines (Newcastle Univ., UK), R. Clarke, P. Wright, J. McCarthy, P. Olivier
    J. Vines (Newcastle Univ., UK)R. Clarke (Newcastle Univ., UK)P. Wright (Newcastle Univ., UK)J. McCarthy (Univ. College Cork, IE)P. Olivier (Newcastle Univ., UK)

    Critically examines the goals of user participation in design processes in contemporary HCI. Highlights limitations in how participatory processes are documented by the community, and outlines strategies for future research. The term ‘participation’ is traditionally used in HCI to describe the involvement of users and stakeholders in design processes, with a pretext of distributing control to participants to shape their technological future. In this paper we ask whether these values can hold up in practice, particularly as participation takes on new meanings and incorporates new perspectives. We argue that much HCI research leans towards configuring participation. In discussing this claim we explore three questions that we consider important for understanding how HCI configures participation: Who initiates, directs and benefits from user participation in design? In what forms does user participation occur? How is control shared with users in design? In answering these questions we consider the conceptual, ethical and pragmatic problems this raises for current participatory HCI research. Finally, we offer directions for future work explicitly dealing with the configuration of participation.

253 – Course C06, unit 1/2

  • CGJC06: Agile User Experience and UCD
    W. Hudson (Syntagm Ltd, UK)
    W. Hudson (Syntagm Ltd, UK)

    This course shows how to integrate UCD with Agile methods to create great user experiences. It takes an ‘emotionally intelligent’ approach to engaging all team members in UCD. Benefits: This half-day course shows how to integrate User-Centered Design with Agile methods to create great user experiences. The course builds on the instructor’s research into empathizing skills and takes an ‘emotionally intelligent’ approach to engaging all team members in UCD. The course is a balanced combination of tutorials, group exercises and discussions, ensuring that participants gain a rich understanding of the problems presented by Agile and how they can be addressed. Origins: This is a half-day version of a popular one-day course that has been well received within a major UK telecoms operator and at a number of public presentations in London, Brussels and Hamburg in 2010 and 2011. It was part of the CHI 2011 & 2012 course offerings. Features: Up-front versus Agile UCD; empathetic design; user and persona stories; Agile usability testing; adding value to the Agile team; design maps. Audience: Usability, UX and UCD practitioners trying to integrate UCD activities within Agile teams. (Some familiarity with UCD techniques is required.) Presentation: The course is approximately 60% tutorials and 40% activities or group discussions. Instructor Background: William Hudson has 40 years’ experience in the development of interactive systems. He has contributed material on user-centered design and user interface design to the Rational Unified Process and to Addison-Wesley’s Object Modeling and User Interface Design (van Harmelen, 2001). He is the founder of Syntagm, a consultancy specializing in user-centered design, and has conducted more than 300 intranet and web site evaluations. William has written over 30 articles, papers and studies. He is an Adjunct Professor at Hult International Business School.
Web Site: Further information about the instructor and this course can be found at www.syntagm.co.uk/design

Bordeaux – Papers: Language and Translation

SNKSession chair: Gahgene Gweon
  • PCHPaper: The Efficacy of Human Post-Editing for Language Translation
    S. Green (Stanford Univ., USA), J. Heer, C. Manning
    S. Green (Stanford Univ., USA)J. Heer (Stanford Univ., USA)C. Manning (Stanford Univ., USA)

    We analyzed human post-editing of machine translation output, a common feature in translator interfaces. We found that machine suggestions reduced human translation time and improved final quality. Language translation is slow and expensive, so various forms of machine assistance have been devised. Automatic machine translation systems process text quickly and cheaply, but with quality far below that of skilled human translators. To bridge this quality gap, the translation industry has investigated post-editing, or the manual correction of machine output. We present the first rigorous, controlled analysis of post-editing and find that post-editing leads to reduced time and, surprisingly, improved quality for three diverse language pairs (English to Arabic, French, and German). Our statistical models and visualizations of experimental data indicate that some simple predictors (like source text part of speech counts) predict translation time, and that post-editing results in very different interaction patterns. From these results we distill implications for the design of new language translation interfaces.

  • PLJPaper: Same Translation but Different Experience: The Effects of Highlighting on Machine-Translated Conversations
    G. Gao (Cornell Univ., USA), H. Wang, D. Cosley, S. Fussell
    G. Gao (Cornell Univ., USA)H. Wang (National Tsing Hua Univ., TW)D. Cosley (Cornell Univ., USA)S. Fussell (Cornell Univ., USA)

    This study demonstrates that keyword highlighting is useful for improving the quality of MT-mediated communication. It informs the design of tools to support communication and collaboration across language boundaries. Machine translation (MT) has the potential to allow members of multilingual organizations to interact via their own native languages, but issues with the quality of MT output have made it difficult to realize this potential. We hypothesized that highlighting keywords in MT output might make it easier for people to overlook translation errors and focus on what was intended by the message. To test this hypothesis, we conducted a laboratory experiment in which native English speakers interacted with a Mandarin-speaking confederate using machine translation. Participants performed three brainstorming tasks, under each of three conditions: no highlighting, keyword highlighting, and random highlighting. Our results indicated that people consider the identical messages clearer and less distracting when the keywords in the message are highlighted. Keyword highlighting also improved subjective impressions of the partner and the quality of the collaboration. These findings inform the design of future communication tools to support multilingual communications.
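The highlighting manipulation can be illustrated with a minimal sketch (an assumption for illustration only: the paper does not specify how keywords were chosen or rendered, so here they are supplied as a list and HTML-style tags stand in for the interface's visual emphasis):

```python
import re

def highlight_keywords(message, keywords, tags=("<b>", "</b>")):
    """Wrap each occurrence of a keyword in emphasis tags, case-insensitively."""
    open_tag, close_tag = tags
    for kw in keywords:
        pattern = re.compile(re.escape(kw), re.IGNORECASE)
        message = pattern.sub(lambda m: open_tag + m.group(0) + close_tag, message)
    return message

print(highlight_keywords("Please check the budget report today", ["budget", "report"]))
# -> Please check the <b>budget</b> <b>report</b> today
```

In the experiment's random-highlighting control, the `keywords` list would simply be drawn at random rather than selected for salience.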

  • PCPPaper: Improving Teamwork Using Real-Time Language Feedback
    Y. Tausczik (Carnegie Mellon Univ., USA), J. Pennebaker
    Y. Tausczik (Carnegie Mellon Univ., USA)J. Pennebaker (The Univ. of Texas at Austin, USA)

    We develop a real-time language feedback system that monitors the communication patterns among students in a discussion group and provides real-time instructions to shape the way the group works together. We develop and evaluate a real-time language feedback system that monitors the communication patterns among students in a discussion group and provides real-time instructions to shape the way the group works together. As an initial step, we determine which group processes are related to better outcomes. We then experimentally test the efficacy of providing real-time instructions which target two of these group processes. The feedback system was successfully able to shape the way groups worked together. However, only appropriate feedback given to groups that were not working well together from the start was able to improve group performance.

  • NRSNote: SpatialEase: Learning Language through Body Motion
    D. Edge (Microsoft Research Asia, CN), K. Cheng, M. Whitney
    D. Edge (Microsoft Research Asia, CN)K. Cheng (Microsoft Research Asia, CN)M. Whitney (Microsoft Research Asia, CN)

    Motivates and evaluates the design of the SpatialEase system for the kinesthetic learning of second language constructions grounded in space and motion, leading to implications for mixed-modality learning games. Games that engage both mind and body by targeting users’ kinesthetic intelligence have the potential to transform the activity of learning across a wide variety of domains. To investigate this potential in the context of second language learning, we have developed SpatialEase: a Kinect game for the body-based learning of language that is grounded in space and motion. In this game, learners respond to audio commands in the second language by moving their bodies in space, while a game mechanic based on distributed cued-recall supports learning over time. Our comparison of SpatialEase with the popular Rosetta Stone software for learners of Mandarin Chinese showed similar learning gains over a single session and generated several key implications for the future design of mixed-modality learning systems.

342A – Papers: Brain Sensing and Analysis

SPCSession chair: Petra Isenberg
  • PCRPaper: Using fNIRS Brain Sensing to Evaluate Information Visualization Interfaces
    E. Peck (Tufts Univ., USA), B. Yuksel, A. Ottley, R. Jacob, R. Chang
    E. Peck (Tufts Univ., USA)B. Yuksel (Tufts Univ., USA)A. Ottley (Tufts Univ., USA)R. Jacob (Tufts Univ., USA)R. Chang (Tufts Univ., USA)

    We explore the use of fNIRS brain sensing to evaluate information visualization interfaces. We show how brain sensing can lend insight to the evaluation of visual interfaces and establish a role for fNIRS in visualization. Research suggests that the evaluation of visual design benefits by going beyond performance measures or questionnaires to measurements of the user’s cognitive state. Unfortunately, objectively and unobtrusively monitoring the brain is difficult. While functional near-infrared spectroscopy (fNIRS) has emerged as a practical brain sensing technology in HCI, visual tasks often rely on the brain’s quick, massively parallel visual system, which may be inaccessible to this measurement. It is unknown whether fNIRS can distinguish differences in cognitive state that derive from visual design alone. In this paper, we use the classic comparison of bar graphs and pie charts to test the viability of fNIRS for measuring the impact of a visual design on the brain. Our results demonstrate that we can indeed measure this impact, and furthermore measurements indicate that there are not universal differences between bar graphs and pie charts.

  • TRNTOCHI: A Predictive Speller Controlled by a Brain-Computer Interface Based on Motor Imagery
    T. D’Albis (Politecnico di Milano, IT), R. Blatt, R. Tedesco, L. Sbattella, M. Matteucci
    T. D’Albis (Politecnico di Milano, IT)R. Blatt (Politecnico di Milano, IT)R. Tedesco (Politecnico di Milano, IT)L. Sbattella (Politecnico di Milano, IT)M. Matteucci (Politecnico di Milano, IT)

    Persons suffering from severe motor disorders have limited possibilities to communicate. We present a speller, based on a brain-computer interface, improved by a smart UI and a text predictor. Persons suffering from motor disorders have limited possibilities to communicate and normally require assistive technologies to fulfill this primary need. Promising means to provide basic communication abilities to subjects affected by severe motor impairments are brain-computer interfaces (BCIs), i.e., systems that directly translate brain signals into device commands bypassing any muscle or nerve mediation. To date, the use of BCIs for effective verbal communication is still an open issue – primarily due to the low rates of information transfer that can be achieved with this technology. Still, the performance of BCI spelling applications can be considerably improved by a smart user interface design and by the adoption of Natural Language Processing (NLP) techniques for text prediction. The objective of this work is to suggest an approach and a user interface for BCI spelling applications combining state-of-the-art BCI and NLP techniques to maximize the overall communication rate of the system. The BCI paradigm adopted is motor imagery, i.e., when the subject imagines moving a certain part of the body, he/she produces modifications to specific brain rhythms that are detected in real-time through an electroencephalogram and translated into commands for a spelling application. To maximize the overall communication rate, our approach is twofold: on the one hand, we maximize the information transfer rate from the control signal; on the other, we optimize the way this information is employed for the purpose of verbal communication. The achieved results are satisfactory and comparable with the latest works reported in the literature on motor-imagery BCI spellers. For the three subjects tested we obtained spelling rates of 3 char/min, 2.7 char/min and 2 char/min, respectively.

  • PPDPaper: Weighted Graph Comparison Techniques for Brain Connectivity Analysis
    B. Alper (Univ. of California, Santa Barbara, USA), B. Bach, N. Henry Riche, T. Isenberg, J. Fekete
    B. Alper (Univ. of California, Santa Barbara, USA)B. Bach (INRIA, FR)N. Henry Riche (Microsoft Research, USA)T. Isenberg (INRIA, FR)J. Fekete (INRIA, FR)

    This paper presents the design and evaluation of two visualizations for comparing weighted graphs. Results have implications for the design of brain connectivity analysis and other graph visualization tools. The analysis of brain connectivity is a vast field in neuroscience with a frequent use of visual representations and an increasing need for visual analysis tools. Based on an in-depth literature review and interviews with neuroscientists, we explore high-level brain connectivity analysis tasks that need to be supported by dedicated visual analysis tools. A significant example of such a task is the comparison of different connectivity data in the form of weighted graphs. Several approaches have been suggested for graph comparison within information visualization, but the comparison of weighted graphs has not been addressed. We explored the design space of applicable visual representations and present augmented adjacency matrix and node-link visualizations. To assess which representation best supports weighted graph comparison tasks, we performed a controlled experiment. Our findings suggest that matrices support these tasks well, outperforming node-link diagrams. These results have significant implications for the design of brain connectivity analysis tools that require weighted graph comparisons. They can also inform the design of visual analysis tools in other domains, e.g. comparison of weighted social networks or biological pathways.
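The core comparison operation underlying a matrix view can be sketched as follows (an illustrative assumption only; the paper's augmented designs encode more than a raw difference): the cell-wise difference of two weighted adjacency matrices over the same node set.

```python
def diff_matrix(a, b):
    """Cell-wise edge-weight differences (b - a) between two weighted
    adjacency matrices defined over the same node set."""
    n = len(a)
    return [[b[i][j] - a[i][j] for j in range(n)] for i in range(n)]

# Two tiny 3-node connectivity matrices (symmetric, zero diagonal).
rest = [[0.0, 0.75, 0.25],
        [0.75, 0.0, 0.5],
        [0.25, 0.5, 0.0]]
task = [[0.0, 0.5, 0.5],
        [0.5, 0.0, 0.5],
        [0.5, 0.5, 0.0]]
print(diff_matrix(rest, task))
# -> [[0.0, -0.25, 0.25], [-0.25, 0.0, 0.0], [0.25, 0.0, 0.0]]
```

An augmented adjacency-matrix visualization would color each cell of this difference, so sign and magnitude of weight change are visible per edge.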

  • PPSPaper: At the Interface of Biology and Computation
    A. Taylor (Microsoft Research, UK), N. Piterman, S. Ishtiaq, J. Fisher, B. Cook, C. Cockerton, S. Bourton, D. Benque
    A. Taylor (Microsoft Research, UK)N. Piterman (Univ. of Leicester, UK)S. Ishtiaq (Microsoft Research, UK)J. Fisher (Microsoft Research, UK)B. Cook (Microsoft Research, UK)C. Cockerton (Microsoft Research, UK)S. Bourton (QuantumBlack, UK)D. Benque (Royal College of Art, UK)

    Presents a study of a scientific tool for proving stabilization in biological systems. Shows how such tools, using new computational techniques, can introduce frictions but that these frictions can be used constructively. Representing a new class of tool for biological modeling, Bio Model Analyzer (BMA) uses sophisticated computational techniques to determine stabilization in cellular networks. This paper presents designs aimed at easing the problems that can arise when such techniques—using distinct approaches to conceptualizing networks—are applied in biology. The work also engages with more fundamental issues being discussed in the philosophy of science and science studies. It shows how scientific ways of knowing are constituted in routine interactions with tools like BMA, where the emphasis is on the practical business at hand, even when seemingly deep conceptual problems exist. For design, this perspective refigures the frictions raised when computation is used to model biology. Rather than obstacles, they can be seen as opportunities for opening up different ways of knowing.

343 – Course C03, unit 2/3

  • CVZC03: Rapid Design Labs—A Tool to Turbocharge Design-Led Innovation
    J. Nieters (Hewlett Packard, USA), C. Thompson, A. Pande
    J. Nieters (Hewlett Packard, USA)C. Thompson (zSpace, Inc, USA)A. Pande (Hewlett Packard, IN)

    Jim Nieters, Carola Thompson, and Amit Pande will empower designers and UX teams to act as a catalyst to systemically identify and drive game-changing ideas to market with rapid design labs. Have you ever had a big idea that got crushed? You know, one of those inspiring ideas that could change the world? If you work in a product or design group in a corporation or design firm, you have probably experienced what happens after you share one of those ideas. In the real world, coming up with a breakthrough idea or transformative design doesn’t mean it will automatically get to market. By definition, innovative ideas represent new ways of thinking. Organizations by nature seem to have anti-innovation antibodies that often kill new ideas—even disruptive innovations that could help companies differentiate themselves from their competition. As difficult as coming up with a game-changing idea can be, getting an organization to act on the idea often seems impossible. The course Rapid Design Labs – A Tool to Turbocharge Design-Led Innovation gives you new tools for this challenge, tools that empower designers and UX teams to get breakthrough ideas and designs accepted. Learn how UX can act as a catalyst to systemically identify and drive game-changing ideas to market. Rapid design labs are a design-led, facilitative, cross-functional, iterative approach to innovation that aligns organizations and generates business value each step of the way.

362/363 – Special Interest Group

  • GRGThe Role of Engineering Work in CHI
    P. Palanque (Univ. of Toulouse, FR), F. Paternò, J. Nichols, N. Nunes, B. Myers
    P. Palanque (Univ. of Toulouse, FR)F. Paternò (CNR-ISTI, IT)J. Nichols (IBM Research, USA)N. Nunes (Univ. of Madeira, PT)B. Myers (Carnegie Mellon Univ., USA)

    The Engineering community faces a number of issues around its role in the larger CHI community and its contribution to SIGCHI-sponsored conferences. This SIG aims to stimulate discussion and attention on the work of researchers interested in the engineering aspects of HCI. It is the forum to report progress on key issues, identify objectives for the near future, and develop plans to address them.

Havane – Papers: Crowdwork and Online Communities

SBYSession chair: Krzysztof Gajos
  • PREPaper: Crowdfunding inside the Enterprise: Employee-Initiatives for Innovation and Collaboration
    M. Muller (IBM, USA), W. Geyer, T. Soule, S. Daniels, L. Cheng
    M. Muller (IBM, USA)W. Geyer (IBM Research, USA)T. Soule (IBM Research, USA)S. Daniels (IBM T.J. Watson Research, USA)L. Cheng (IBM Research, USA)

    Crowdfunding behind a company firewall showed diverse projects, inter-organizational collaborations, and collaborative motivations. Potential interest for HCI researchers, organizational practitioners, and consultants. We describe a first experiment in enterprise crowdfunding – i.e., employees allocating money for employee-initiated proposals at an Intranet site, including a trial of this system with 511 employees in IBM Research. Major outcomes include: employee proposals that addressed diverse individual and organizational needs; high participation rates; extensive inter-departmental collaboration, including the discovery of large numbers of previously unknown collaborators; and the development of goals and motivations based on collective concerns at multiple levels of project groups, communities of practice, and the organization as a whole. We recommend further, comparative research into crowdfunding and other forms of employee-initiated innovations.

  • PHVPaper: Community Insights: Helping Community Leaders Enhance the Value of Enterprise Online Communities
    T. Matthews (IBM Research, USA), S. Whittaker, H. Badenes, B. Smith, M. Muller, K. Ehrlich, M. Zhou, T. Lau
    T. Matthews (IBM Research, USA)S. Whittaker (Univ. of California at Santa Cruz, USA)H. Badenes (IBM, AR)B. Smith (IBM Research, USA)M. Muller (IBM, USA)K. Ehrlich (IBM, USA)M. Zhou (IBM Research, USA)T. Lau (Willow Garage, USA)

    Evidence-based design and evaluation of a novel tool that provides community leaders with useful, actionable, and contextualized analytics. Benefits designers of and practitioners using analytic tools to foster successful communities. Online communities are increasingly being deployed in enterprises to increase productivity and share expertise. Community leaders are critical for fostering successful communities, but existing technologies rarely support leaders directly, both because of a lack of clear data about leader needs, and because existing tools are member- rather than leader-centric. We present the evidence-based design and evaluation of a novel tool for community leaders, Community Insights (CI). CI provides actionable analytics that help community leaders foster healthy communities, providing value to both members and the organization. We describe empirical and system contributions derived from a long-term deployment of CI to leaders of 470 communities over 10 months. Empirical contributions include new data showing: (a) which metrics are most useful for leaders to assess community health, (b) the need for and how to design actionable metrics, (c) the need for and how to design contextualized analytics to support sensemaking about community data. These findings motivate a novel community system that provides leaders with useful, actionable and contextualized analytics.

  • PKHPaper: CommunityCompare: Visually Comparing Communities for Online Community Leaders in the Enterprise
    A. Xu (Univ. of Illinois at Urbana-Champaign, USA), J. Chen, T. Matthews, M. Muller, H. Badenes
    A. Xu (Univ. of Illinois at Urbana-Champaign, USA)J. Chen (IBM Research, USA)T. Matthews (IBM Research, USA)M. Muller (IBM T.J. Watson Research, USA)H. Badenes (IBM, AR)

    Design and evaluation of a new visual, comparison-based analytic system, CommunityCompare, to help leaders assess and identify actions to improve community health. Can enhance design of systems for community leaders. Online communities are important in enterprises, helping workers to build skills and collaborate. Despite their unique and critical role fostering successful communities, community leaders have little direct support in existing technologies. We introduce CommunityCompare, an interactive visual analytic system to enable leaders to make sense of their community’s activity with comparisons. Composed of a parallel coordinates plot, various control widgets, and a preview of example posts from communities, the system supports comparisons with hundreds of related communities on multiple metrics and the ability to learn by example. We motivate and inform the system design with formative interviews of community leaders. From additional interviews, a field deployment, and surveys of leaders, we show how the system enabled leaders to assess community performance in the context of other comparable communities, learn about community dynamics through data exploration, and identify examples of top performing communities from which to learn. We conclude by discussing how our system and design lessons generalize.

  • PJNPaper: Analyzing Crowd Workers in Mobile Pay-for-Answer Q&A
    U. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR), J. Kim, E. Yi, J. Sung, M. Gerla
    U. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)J. Kim (Univ. of California, Los Angeles, USA)E. Yi (KAIST (Korea Advanced Institute of Science and Technology), KR)J. Sung (KAIST (Korea Advanced Institute of Science and Technology), KR)M. Gerla (Univ. of California, Los Angeles, USA)

    We studied one of the largest mobile pay-for-answer Q&A services called Jisiklog to understand behaviors of crowdworkers: key motivators of participation, working strategies of experienced users, and longitudinal interaction dynamics. Despite the popularity of mobile pay-for-answer Q&A services, little is known about the people who answer questions on these services. In this paper we examine 18.8 million question and answer pairs from Jisiklog, the largest mobile pay-for-answer Q&A service in Korea, and the results of a complementary survey study of 245 Jisiklog workers. The data are used to investigate key motivators of participation, working strategies of experienced users, and longitudinal interaction dynamics. We find that answerers are rarely motivated by social factors but are motivated by financial incentives and intrinsic motives. Additionally, although answers are provided quickly, an answerer’s topic selection tends to be broad, with experienced workers employing unique strategies to answer questions and judge relevance. Finally, analysis of longitudinal working patterns and community dynamics demonstrates the robustness of mobile pay-for-answer Q&A. These findings have significant implications for the design of mobile pay-for-answer Q&A.

351 – Papers: Keyboards and Hotkeys

SMBSession chair: Mark Dunlop
  • PLNPaper: Octopus: Evaluating Touchscreen Keyboard Correction and Recognition Algorithms via “Remulation”
    X. Bi (Google, Inc., USA), S. Azenkot, K. Partridge, S. Zhai
    X. Bi (Google, Inc., USA)S. Azenkot (Univ. of Washington, USA)K. Partridge (Google, Inc., USA)S. Zhai (Google, Inc., USA)

    Proposed and tested Remulation, an efficient method for evaluating touchscreen keyboards by replicating prior user study data in real-time, on-device simulation. Implemented Octopus, a Remulation-based evaluation tool. The time and labor demanded by a typical laboratory-based keyboard evaluation are limiting resources for algorithmic adjustment and optimization. We propose Remulation, a complementary method for evaluating touchscreen keyboard correction and recognition algorithms. It replicates prior user study data through real-time, on-device simulation. We have developed Octopus, a Remulation-based evaluation tool that enables keyboard developers to efficiently measure and inspect the impact of algorithmic changes without conducting resource-intensive user studies. It can also be used to evaluate third-party keyboards in a “black box” fashion, without access to their algorithms or source code. Octopus can evaluate both touch keyboards and word-gesture keyboards. Two empirical examples show that Remulation can efficiently and effectively measure many aspects of touch screen keyboards at both macro and micro levels. Additionally, we contribute two new metrics to measure keyboard accuracy at the word level: the Ratio of Error Reduction (RER) and the Word Score.
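The abstract names the Ratio of Error Reduction (RER) metric but does not define it. One plausible reading, shown here purely as an illustrative assumption rather than the paper's formula, is the fraction of a baseline algorithm's word errors that a revised algorithm eliminates:

```python
def ratio_of_error_reduction(baseline_errors, new_errors):
    """Hypothetical reading of RER: the share of the baseline's word
    errors removed by the new algorithm (1.0 means all errors fixed)."""
    if baseline_errors == 0:
        return 0.0
    return (baseline_errors - new_errors) / baseline_errors

# e.g. a baseline decoder makes 40 word errors on a test set, a revised one 30
print(ratio_of_error_reduction(40, 30))
# -> 0.25
```

Under this reading, Remulation would replay logged touch data through both algorithm versions to obtain the two error counts without a new user study.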

  • PHYPaper: TapBoard: Making a Touch Screen Keyboard More Touchable
    S. Kim (KAIST (Korea Advanced Institute of Science and Technology), KR), J. Son, G. Lee, H. Kim, W. Lee
    S. Kim (KAIST (Korea Advanced Institute of Science and Technology), KR)J. Son (KAIST (Korea Advanced Institute of Science and Technology), KR)G. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)H. Kim (KAIST (Korea Advanced Institute of Science and Technology), KR)W. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)

    TapBoard is a touch screen software keyboard that regards tapping actions as keystrokes and enables other touches for more useful operations, such as resting, feeling surface textures, and making gestures. A physical keyboard key has three states, whereas a touch screen usually has only two. Due to this difference, the state corresponding to the touched state of a physical key is missing in a touch screen keyboard. This touched state is an important factor in the usability of a keyboard. In order to recover the role of a touched state in a touch screen, we propose the TapBoard, a touch screen software keyboard that regards tapping actions as keystrokes and other touches as the touched state. In a series of user studies, we validate the effectiveness of the TapBoard concept. First, we show that tapping to type is in fact compatible with the existing typing skill of most touch screen keyboard users. Second, users quickly adapt to the TapBoard and learn to rest their fingers in the touched state. Finally, we confirm by a controlled experiment that there is no difference in text-entry performance between the TapBoard and a traditional touch screen software keyboard. In addition to these experimental results, we demonstrate a few new interaction techniques that will be made possible by the TapBoard.

  • PRDPaper: Métamorphe: Augmenting Hotkey Usage with Actuated Keys
    G. Bailly (Telekom Innovation Laboratories, TU Berlin, DE), T. Pietrzak, J. Deber, D. Wigdor
    G. Bailly (Telekom Innovation Laboratories, TU Berlin, DE)T. Pietrzak (Univ. de Lille 1, FR)J. Deber (Univ. of Toronto, CA)D. Wigdor (Univ. of Toronto, CA)

    Demonstrates the advantages of shape-changing keyboards for command selection. The Métamorphe keyboard offers a novel height-changing mechanism that provides haptic feedback and enables new key gestures. Hotkeys are an efficient method of selecting commands on a keyboard. However, these shortcuts are often underused by users. We present Métamorphe, a novel keyboard with keys that can be individually raised and lowered to promote hotkey usage. Métamorphe augments the output of traditional keyboards with haptic and visual feedback, and offers a novel design space for user input on raised keys (e.g., gestures such as squeezing or pushing the sides of a key). We detail the implementation of Métamorphe and discuss design factors. We also report two user studies. The first is a user-defined interface study that shows that the new input vocabulary is usable and useful, and provides insights into the mental models that users associate with raised keys. The second user study shows improved eyes-free selection performance for raised keys as well as the surrounding unraised keys.

  • PKCPaper: Promoting Hotkey Use through Rehearsal with ExposeHK
    S. Malacria (Univ. of Canterbury, NZ), G. Bailly, J. Harrison, A. Cockburn, C. Gutwin
    S. Malacria (Univ. of Canterbury, NZ)G. Bailly (Telekom Innovation Laboratories, TU Berlin, DE)J. Harrison (Univ. of Canterbury, NZ)A. Cockburn (Univ. of Canterbury, NZ)C. Gutwin (Univ. of Saskatchewan, CA)

    Introduces ExposeHK, a new interface that promotes hotkey selection. Presents results of three studies showing that ExposeHK increases hotkey use, improves performance, and was strongly preferred. Keyboard shortcuts allow fast interaction, but they are known to be infrequently used, with most users relying heavily on traditional pointer-based selection for most commands. We describe the goals, design, and evaluation of ExposeHK, a new interface mechanism that aims to increase hotkey use. ExposeHK’s four key design goals are: 1) enable users to browse hotkeys; 2) allow non-expert users to issue hotkey commands as a physical rehearsal of expert performance; 3) exploit spatial memory to assist non-expert users in identifying hotkeys; and 4) maximise expert performance by using consistent shortcuts in a flat command hierarchy. ExposeHK supports these objectives by displaying hotkeys overlaid on their associated commands when a modifier key is pressed. We evaluated ExposeHK in three empirical studies using toolbars, menus, and a tabbed ‘ribbon’ toolbar. Results show that participants used more hotkeys, and used them more often, with ExposeHK than with other techniques; they were faster with ExposeHK than with either pointing or other hotkey methods; and they strongly preferred ExposeHK. Our research shows that ExposeHK can substantially improve the user’s transition from a ‘beginner mode’ of interaction to a higher level of expertise.

352ABPapers: Flexible Displays

SHQSession chair: Edward Lank
  • PCVPaper: Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays
    J. Steimle (Massachusetts Institute of Technology, USA), A. Jordt, P. Maes
    J. Steimle (Massachusetts Institute of Technology, USA)A. Jordt (Kiel Univ. of Applied Sciences, DE)P. Maes (Massachusetts Institute of Technology, USA)

    Introduces highly flexible handheld displays as user interfaces. Contributes a novel real-time method for capturing complex deformations of flexible surfaces and novel interactions that leverage highly flexible deformations of displays. Flexpad is an interactive system that combines a depth camera and a projector to transform sheets of plain paper or foam into flexible, highly deformable, and spatially aware handheld displays. We present a novel approach for tracking deformed surfaces from depth images in real time. It captures deformations in high detail, is very robust to occlusions created by the user’s hands and fingers, and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than in previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation. We illustrate these unique capabilities through three application examples: curved cross-cuts in volumetric images, deforming virtual paper characters, and slicing through time in videos. Results from two user studies show that our system is capable of detecting complex deformations and that users are able to perform them quickly and precisely.

  • PBRPaper: MorePhone: A Study of Actuated Shape Deformations for Flexible Thin-Film Smartphone Notifications
    A. Gomes (Queen’s Univ., CA), A. Nesbitt, R. Vertegaal
    A. Gomes (Queen’s Univ., CA)A. Nesbitt (Queen’s Univ., CA)R. Vertegaal (Queen’s Univ., CA)

    Presents a shape-changing flexible smartphone that actuates its body for the purpose of providing notifications. Empirically evaluates the mapping between shape actuations, urgency, and notification type. We present MorePhone, an actuated flexible smartphone with a thin-film E Ink display. MorePhone uses shape memory alloys to actuate the entire surface of the display as well as individual corners. We conducted a participatory study to determine how users associate urgency and notification type with full screen, 1 corner, 2 corner and 3 corner actuations of the smartphone. Results suggest that with the current prototype, actuated shape notifications are useful for visual feedback. Urgent notifications such as alarms and voice calls were best matched with actuation of the entire display surface, while less urgent notifications, such as software notifications, were best matched to individual corner bends. While different corner actuations resulted in significantly different matches between notification types, medium urgency notification types were treated as similar, and best matched to a single corner bend. A follow-up study suggested that users prefer to dedicate each corner to a specific type of notification and would like to personalize this assignment. Animation of shape actuation significantly increased the perceived urgency of any of the presented shapes.

  • PAMPaper: Morphees: Toward High “Shape Resolution” in Self-Actuated Flexible Mobile Devices
    A. Roudaut (Univ. of Bristol, UK), A. Karnik, M. Löchtefeld, S. Subramanian
    A. Roudaut (Univ. of Bristol, UK)A. Karnik (Univ. of Bristol, UK)M. Löchtefeld (German Research Center for Artificial Intelligence (DFKI), DE)S. Subramanian (Univ. of Bristol, UK)

    We introduce the term shape resolution, defined by ten features, which adds to the existing definitions of screen and touch resolution and helps the design of shape-shifting mobile devices. We introduce the term shape resolution, which adds to the existing definitions of screen and touch resolution. We propose a framework, based on a geometric model (Non-Uniform Rational B-splines), which defines a metric for shape resolution in ten features. We illustrate it by comparing the current related work on shape-changing devices. We then propose the concept of Morphees: self-actuated flexible mobile devices that adapt their shapes on their own to the context of use in order to offer better affordances. For instance, when a game is launched, the mobile device morphs into a console-like shape by curling two opposite edges so it can be better grasped with two hands. We then create preliminary prototypes of Morphees in order to explore six different building strategies using advanced shape-changing materials (dielectric electroactive polymers and shape memory alloys). By comparing the shape resolution of our prototypes, we generate insights to help designers toward creating high shape resolution Morphees.

  • NDVNote: LightCloth: Senseable Illuminating Optical Fiber Cloth for Creating Interactive Surfaces
    S. Hashimoto (JST ERATO Igarashi Design Interface Project, JP), R. Suzuki, Y. Kamiyama, M. Inami, T. Igarashi
    S. Hashimoto (JST ERATO Igarashi Design Interface Project, JP)R. Suzuki (The Univ. of Tokyo, JP)Y. Kamiyama (JST ERATO Igarashi Design Interface Project, JP)M. Inami (Keio Univ., JP)T. Igarashi (The Univ. of Tokyo, JP)

    LightCloth is a fabric interface that enables illumination, light communication, and position sensing. We add a sensing function to diffusive optical fibers, widening the possibilities for new fabric interactions. This paper introduces an input and output device that enables illumination, bi-directional data communication, and position sensing on a soft cloth. This “LightCloth” is woven from diffusive optical fibers. Since the fibers are arranged in parallel, the cloth provides one-dimensional position information. Sensor-emitter pairs attached to bundles of contiguous fibers enable bundle-specific light input and output. We developed a prototype system that allows full-color illumination and 8-bit data input by infrared signals. As an application, we present a chair with a LightCloth cover whose illumination pattern is specified using an infrared light pen. We describe the implementation details of the device and discuss possible interactions using it.

  • NRZNote: Bending the Rules: Bend Gesture Classification for Flexible Displays
    K. Warren (Carleton Univ., CA), J. Lo, V. Vadgama, A. Girouard
    K. Warren (Carleton Univ., CA)J. Lo (Carleton Univ., CA)V. Vadgama (Carleton Univ., CA)A. Girouard (Carleton Univ., CA)

    We propose a bend gesture classification scheme and evaluate how users naturally perform bend gestures on deformable displays with minimal instruction. Bend gestures have a large number of degrees of freedom and therefore offer a rich interaction language. We propose a classification scheme for bend gestures, and explore how users perform these gestures along four classification criteria: location, direction, size, and angle. We collected 36 unique bend gestures, performed three times by each participant. The results suggest a strong agreement among participants for preferences of location and direction. Size and angle were difficult for users to differentiate. Finally, users performed and perceived two distinct levels of magnitude. We propose recommendations for designing bend gestures with flexible displays.

221/221MLast-minute SIGs: Session 2

Monday – 16:00-17:20

BluePapers: Smart Tools, Smart Work

SCFSession chair: Steven Dow
  • PMLPaper: Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk
    L. Irani (Univ. of California, Irvine, USA), M. Silberman
    L. Irani (Univ. of California, Irvine, USA)M. Silberman (Bureau of Economic Interpretation, USA)

    With Turkopticon, we contribute an example of a long-term systems-building project that reworks employer-worker relations in Amazon Mechanical Turk. We analyze the system in feminist, infrastructural, and political terms. As HCI researchers have explored the possibilities of human computation, they have paid less attention to the ethics and values of crowdwork. This paper offers an analysis of Amazon Mechanical Turk, a popular human computation system, as a site of technically mediated worker-employer relations. We argue that human computation currently relies on worker invisibility. We then present Turkopticon, an activist system that allows workers to publicize and evaluate their relationships with employers. As a common infrastructure, Turkopticon also enables workers to engage one another in mutual aid. We conclude by discussing the potentials and challenges of sustaining activist technologies that intervene in large, existing socio-technical systems.

  • PBHPaper: Don’t Hide in the Crowd! Increasing Social Transparency Between Peer Workers Improves Crowdsourcing Outcomes
    S. Huang (Univ. of Illinois at Urbana-Champaign, USA), W. Fu
    S. Huang (Univ. of Illinois at Urbana-Champaign, USA)W. Fu (Univ. of Illinois at Urbana-Champaign, USA)

    Our study suggests that a careful combination of methods that increase social transparency and different peer-dependent reward schemes can significantly improve crowdsourcing outcomes. This paper studied how social transparency and different peer-dependent reward schemes (i.e., individual, teamwork, and competition) affect the outcomes of crowdsourcing. The results showed that when social transparency was increased by asking otherwise anonymous workers to share their demographic information (e.g., name, nationality) with a paired worker, they performed significantly better. A more detailed analysis showed that in a teamwork reward scheme, in which the reward of the paired workers depended only on the collective outcomes, increasing social transparency could offset effects of social loafing by making workers more accountable to their teammates. In a competition reward scheme, in which workers competed against each other and the reward depended on how much they outperformed their opponent, increasing social transparency could augment effects of social facilitation by providing more incentives to outperform the opponent. The results suggested that a careful combination of methods that increase social transparency and different reward schemes can significantly improve crowdsourcing outcomes.

  • PJCPaper: Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems
    K. Hara (Univ. of Maryland, USA), V. Le, J. Froehlich
    K. Hara (Univ. of Maryland, USA)V. Le (Univ. of Maryland, USA)J. Froehlich (Univ. of Maryland, USA)

    In this paper, we investigate the feasibility of using untrained crowd workers from Amazon Mechanical Turk (turkers) to find, label, and assess sidewalk accessibility problems in Google Street View imagery. Poorly maintained sidewalks, missing curb ramps, and other obstacles pose considerable accessibility challenges; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, we investigate the feasibility of using untrained crowd workers from Amazon Mechanical Turk (turkers) to find, label, and assess sidewalk accessibility problems in Google Street View imagery. We report on two studies: Study 1 examines the feasibility of this labeling task with six dedicated labelers including three wheelchair users; Study 2 investigates the comparative performance of turkers. In all, we collected 13,379 labels and 19,189 verification labels from a total of 402 turkers. We show that turkers are capable of determining the presence of an accessibility problem with 81% accuracy. With simple quality control methods, this number increases to 93%. Our work demonstrates a promising new, highly scalable method for acquiring knowledge about sidewalk accessibility.

  • PKZPaper: Labor Dynamics in a Mobile Micro-Task Market
    M. Musthag (Univ. of Massachusetts, USA), D. Ganesan
    M. Musthag (Univ. of Massachusetts, USA)D. Ganesan (Univ. of Massachusetts, USA)

    This paper provides an in-depth exploration of labor dynamics in mobile task markets, which require spatial mobility, based on a year-long dataset from a leading mobile crowdsourcing platform. The ubiquity of smartphones has led to the emergence of mobile crowdsourcing markets, where smartphone users participate to perform tasks in the physical world. Mobile crowdsourcing markets are uniquely different from their online counterparts in that they require spatial mobility, and are therefore impacted by geographic factors and constraints that are not present in the online case. Despite the emergence and importance of such mobile marketplaces, little is known about the labor dynamics and mobility patterns of agents. This paper provides an in-depth exploration of labor dynamics in mobile task markets based on a year-long dataset from a leading mobile crowdsourcing platform. We find that a small core group of workers (< 10%) accounts for a disproportionately large proportion of the activity (> 80%) generated in the market. We find that these super agents are more efficient than other agents across several dimensions: a) they are willing to move longer distances to perform tasks, yet they amortize travel across more tasks; b) they work and search for tasks more efficiently; c) they have higher data quality in terms of accepted submissions; and d) they improve in almost all of these efficiency measures over time. We find that super agent efficiency stems from two simple optimizations: they are 3x more likely than other agents to chain tasks, and they pick fewer lower-priced tasks than other agents. We compare mobile and online micro-task markets, and discuss differences in demographics, data quality, and time of use, as well as similarities in super agent behavior. We conclude with a discussion of how a mobile micro-task market might leverage some of our results to improve performance.

241Panel

  • LXELeveraging the Progress of Women in the HCI Field to Address the Diversity Chasm
    Anicia Peters (moderator), Shikoh Gitau, Pamela Jennings, Janaki Kumar, Dianne Murray
    Anicia Peters (moderator)Shikoh GitauPamela JenningsJanaki KumarDianne Murray

    Worldwide, there is a gender gap in technology, with only a small share of computer science related positions held by women. Among different initiatives to encourage women to join STEM fields, we started a video interview initiative last year at CHI to encourage more women to enter and remain in the field of HCI and to strengthen existing women’s voices. In addition to strengthening women’s progress, many interviewees also identified a diversity chasm within the HCI field that needs to be addressed. This panel aims to continue and deepen the conversation started at CHI 2011 on the experience of women in the HCI field in both industry and academia, and to extend the conversation to include diversity. It will serve as a platform for discussing important issues such as mentoring, leadership, and career development, and for creating networks that include and encourage diversity in HCI.

242ABPapers: Creating and Authoring

SNUSession chair: Andrew Ko
  • PMGPaper: Creativity Support for Novice Digital Filmmaking
    N. Davis (Georgia Institute of Technology, USA), A. Zook, B. O’Neill, A. Grosz, B. Headrick, M. Nitsche, M. Riedl
    N. Davis (Georgia Institute of Technology, USA)A. Zook (Georgia Institute of Technology, USA)B. O’Neill (Georgia Institute of Technology, USA)A. Grosz (Georgia Institute of Technology, USA)B. Headrick (Georgia Institute of Technology, USA)M. Nitsche (Georgia Institute of Technology, USA)M. Riedl (Georgia Institute of Technology, USA)

    We show that novice digital filmmakers have difficulty adhering to certain cinematographic conventions. Our subsequent Wizard-of-Oz study showed that a rule-based cinematic critic can reduce the frequency of errors. Machinima is a new form of creative digital filmmaking that leverages the real-time graphics rendering of computer game engines. Because of the low barrier to entry, machinima has become a popular creative medium for hobbyists and novices while still retaining conventions borrowed from professional filmmaking. Can novice machinima creators benefit from creativity support tools? A preliminary study shows novices generally have difficulty adhering to cinematographic conventions. We identify and document four cinematic conventions novices typically violate. We report on a Wizard-of-Oz study showing that a rule-based intelligent system can reduce the frequency of errors that novices make by providing information about rule violations without prescribing solutions. We discuss the role of error reduction in creativity support tools.

  • PNQPaper: AutoGami: A Low-cost Rapid Prototyping Toolkit for Automated Movable Paper Craft
    K. Zhu (National Univ. of Singapore, SG), S. Zhao
    K. Zhu (National Univ. of Singapore, SG)S. Zhao (National Univ. of Singapore, SG)

    We present a systematic analysis of the design space for automated movable paper craft and a low-cost rapid prototyping toolkit for such craft based on selective inductive power transmission. AutoGami is a toolkit for designing automated movable paper craft using the technology of selective inductive power transmission. AutoGami has hardware and software components that allow users to design and implement automated movable paper craft without any prerequisite knowledge of electronics; it also supports rapid prototyping. Apart from developing the toolkit, we have analyzed the design space of movable paper craft and developed a taxonomy to facilitate the design of automated paper craft. AutoGami made consistently strong showings in design workshops, confirming its viability in supporting engagement and creativity as well as its usability in storytelling through paper craft. Additional highlights include rapid prototyping of product design as well as interaction design, such as human-robot interactions.

  • PRQPaper: HyperSlides: Dynamic Presentation Prototyping
    D. Edge (Microsoft Research Asia, CN), J. Savage, K. Yatani
    D. Edge (Microsoft Research Asia, CN)J. Savage (Microsoft Research Asia, CN)K. Yatani (Microsoft Research Asia, CN)

    Motivates and evaluates the design of the HyperSlides system for dynamic prototyping of PowerPoint presentations that are themselves dynamic in their ability to help presenters rehearse and deliver their story. Presentations are a crucial form of modern communication, yet there is a dissonance between everyday practices with presentation tools and best practices from the presentation literature. We conducted a grounded theory study to gain a better understanding of the activity of presenting, discovering the potential for a more dynamic, automated, and story-centered approach to prototyping slide presentations that are themselves dynamic in their ability to help presenters rehearse and deliver their story. Our prototype tool for dynamic presentation prototyping, which we call HyperSlides, uses a simple markup language for the creation of hierarchically structured scenes, which are algorithmically transformed into hyperlinked slides of a consistent and minimalist style. Our evaluation suggests that HyperSlides helps idea organization, saves authoring time, creates aesthetic layouts, and supports more flexible rehearsal and delivery than linear slides, at the expense of reduced layout control and increased navigation demands.

  • NTYNote: SidePoint: A Peripheral Knowledge Panel for Presentation Slide Authoring
    Y. Liu (Waseda Univ., JP), D. Edge, K. Yatani
    Y. Liu (Waseda Univ., JP)D. Edge (Microsoft Research Asia, CN)K. Yatani (Microsoft Research Asia, CN)

    Implements an implicit search and peripheral panel system for presentation authoring by showing concise knowledge items relevant to the slide content, and investigates the benefits and issues of such peripheral knowledge panels. Presentation authoring is an important activity, but often requires the secondary task of collecting the information and media necessary for both slides and speech. Integration of implicit search and peripheral displays into presentation authoring tools may reduce the effort to satisfy not just active needs the author is aware of, but also latent needs that she is not aware of until she encounters content of perceived value. We develop SidePoint, a peripheral panel that supports presentation authoring by showing concise knowledge items relevant to the slide content. We study SidePoint as a technology probe to examine the benefits and issues associated with peripheral knowledge panels for presentation authoring. Our results show that peripheral knowledge panels have the potential to satisfy both types of needs in ways that transform presentation authoring for the better.

243Course C05, unit 2/2

  • CPQC05: Practical Statistics for User Experience Part I
    J. Sauro (Measuring Usability LLC, USA), J. Lewis
    J. Sauro (Measuring Usability LLC, USA)J. Lewis (IBM, USA)

    Learn to generate confidence intervals and compare two designs using rating scale data, binary measures, and task times for large and small sample sizes. If you don’t measure it, you can’t manage it. Usability analysis and user research are about more than rules of thumb, good design, and intuition: they are about making better decisions with data. Is Product A faster than Product B? Will more users complete tasks on the new design? Learn how to conduct and interpret appropriate statistical tests on small and large sample usability data, then communicate your results in easy-to-understand terms to stakeholders. Features: 1. Get a visual introduction or refresher on the most important statistical concepts for applied use. 2. Be able to compare two interfaces or versions (A/B testing) by showing statistical significance (e.g., Product A takes 20% less time to complete a task than Product B, p < .05). 3. Clearly understand both the limits of and the data available from small-sample usability tests through the use of confidence intervals. Audience: Open to anyone interested in quantitative usability tests. Participants should be familiar with the process of conducting usability tests as well as basic descriptive statistics such as the mean, median, and standard deviation, and have access to Microsoft Excel.
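    The course description above mentions generating confidence intervals for binary measures (such as task completion rates) from small samples. As an illustrative sketch only, not official course material, one widely used approach for small-sample binomial data is the adjusted-Wald interval, which adds z²/2 successes and z² trials before computing the proportion; the function name and example figures below are invented for illustration.

```python
from statistics import NormalDist

def adjusted_wald_ci(successes, trials, confidence=0.95):
    """Adjusted-Wald confidence interval for a completion rate.

    Suited to the small samples typical of usability tests.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Adjust: add z^2/2 successes and z^2 trials before computing p.
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# e.g., 9 of 10 participants completed the task
low, high = adjusted_wald_ci(9, 10)
print(f"95% CI for completion rate: {low:.2f} to {high:.2f}")
```

    For 9 of 10 completions, this yields an interval of roughly 0.57 to 1.00, illustrating how wide small-sample estimates remain.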

251Papers: Exploring Games

SQSSession chair: Lennart Nacke
  • PGKPaper: Control Your Game-Self: Effects of Controller Type on Enjoyment, Motivation, and Personality in Game
    M. Birk (Univ. of Saskatchewan, CA), R. Mandryk
    M. Birk (Univ. of Saskatchewan, CA)R. Mandryk (Univ. of Saskatchewan, CA)

    We show that controller choice affects a player’s enjoyment of and motivation in a game, and also affects a player’s perception of themselves during play as measured by their in-game personality. Whether they are made to entertain you or to educate you, good video games engage you. Significant research has tried to understand engagement in games by measuring player experience (PX). Traditionally, PX evaluation has focused on the enjoyment of a game, or the motivation of players; these factors no doubt contribute to engagement, but do decisions regarding the play environment (e.g., the choice of game controller) affect the player more deeply than that? We apply self-determination theory (specifically, satisfaction of needs and self-discrepancy represented using the five-factor model of personality) to explain PX in an experiment with controller type as the manipulation. Our study shows that there are a number of effects of controller on PX and in-game player personality. These findings provide both a lens with which to view controller effects in games and a guide for controller choice in the design of new games. Our research demonstrates that including self-characteristics assessment in the PX evaluation toolbox is valuable and useful for understanding player experience.

  • PDSPaper: Mastering the Art of War: How Patterns of Gameplay Influence Skill in Halo
    J. Huang (Univ. of Washington, USA), T. Zimmermann, N. Nagappan, C. Harrison, B. Phillips
    J. Huang (Univ. of Washington, USA)T. Zimmermann (Microsoft Research, USA)N. Nagappan (Microsoft Research, USA)C. Harrison (Microsoft, USA)B. Phillips (Microsoft, USA)

    We look at patterns of skill through large-scale gameplay analysis and player surveys to identify how different factors (play intensity, skill change over time, demographics, breaks, and prior games played) affect players’ skill in Halo. How do video game skills develop, and what sets the top players apart? We study this question of skill through a rating generated from repeated multiplayer matches called TrueSkill. Using these ratings from 7 months of games from over 3 million players, we look at how play intensity, breaks in play, skill change over time, and other games affect skill. These analyzed factors are then combined to model future skill and games played; the results show that skill change in early matches is a useful metric for modeling future skill, while play intensity explains eventual games played. The best players in the 7-month period, whom we call “Master Blasters”, have varied skill patterns that often run counter to the trends we see for typical players. The data analysis is supplemented with a 70-person survey exploring how players’ self-perceptions compare to the gameplay data; most survey responses align well with the data and provide insight into player beliefs and motivation. Finally, we wrap up with a discussion about hiding skill information from players, and implications for game designers.

  • PLFPaper: Villains, Architects and Micro-Managers: What Tabula Rasa Teaches Us About Game Orchestration
    T. Graham (Queen’s Univ., CA), I. Schumann, M. Patel, Q. Bellay, R. Dachselt
    T. Graham (Queen’s Univ., CA)I. Schumann (Univ. of Magdeburg, DE)M. Patel (Kingston Univ., CA)Q. Bellay (Kingston Univ., CA)R. Dachselt (Technische Univ. Dresden, DE)

    Describes how digital games can allow design-like activities at play time, and how players use them when playing games. Players of digital games are limited by the constraints of the game’s implementation. Players cannot fly a kite, plant a tree or make friends with a dragon if these activities were not coded within the game. Game orchestration relaxes these restrictions by allowing players to create game narratives and settings as the game is being played. This enables players to express their creativity beyond the strictures of the game’s implementation. We present Tabula Rasa, a novel game orchestration tool based on an efficient tabletop interface. Based on a study of 20 game orchestration sessions using Tabula Rasa, we identify five behavioural patterns adopted by orchestrators, and four styles of collaborative interaction between orchestrators and players. Finally, we present recommendations for designers of game orchestration systems.

  • PAZPaper: Playing with Leadership and Expertise: Military Tropes and Teamwork in an ARG
    T. Peyton (The Pennsylvania State Univ., USA), A. Young, W. Lutters
    T. Peyton (The Pennsylvania State Univ., USA)A. Young (Univ. of Maryland, Baltimore County, USA)W. Lutters (Univ. of Maryland, Baltimore County, USA)

    Explores how ARG teams arrange and militarize play within unstructured ludic systems. Illustrates that the development of expertise and emergence of leadership occurs in response to this lack of structure. Ad-hoc virtual teams often lack tools to formalize leadership and structure collaboration, yet they are often successful. How does this happen? We argue that the emergence of leadership and the development of expertise occurs in the process of taking action and in direct response to a lack of structure. Using a twinned set of eight modality sliders, we examine the interactions of fourteen players in an alternate reality game. We find that players adopted military language and culture to structure and arrange their play. We determine that it is critical to account for the context of play across these modalities in order to design appropriately for effective in-game virtual organizing.

252ACourse C07, unit 2/2

  • CVCC07: Speech-based Interaction: Myths, Challenges, and Opportunities
    C. Munteanu (National Research Council Canada, CA), G. Penn
    C. Munteanu (National Research Council Canada, CA)G. Penn (Univ. of Toronto, CA)

    Learn how speech recognition and synthesis work, what their limitations and usability challenges are, how they can enhance interaction paradigms, and what the current research and commercial state of the art is. Speech remains the “holy grail” of interaction, as it is the most natural form of communication that humans employ. Unfortunately, it is also one of the most difficult modalities for machines to understand, despite (and perhaps because of) being the highest-bandwidth communication channel we possess. While significant research effort in engineering, linguistics, and psychology has been spent on improving machines’ ability to understand and synthesize speech, the HCI community has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating interfaces that use speech and natural language as an input or output modality. While the accuracy of understanding speech input is still discouraging for many applications under less-than-ideal conditions, several interesting areas have yet to be explored that could make speech-based interaction truly hands-free. The goal of this course is to inform the HCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for HCI researchers and practitioners to learn more about how speech recognition and synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms.

252BCase studies: Innovating User-Centered Design

SBKSession chair: John Boyd
  • YQTProject Pokerface: Building User-Centered Culture at Scale
    A. Baki (Google, Inc., USA), P. Bowen, B. Brekke, E. Ferrall-Nunge, G. Kossinets, J. Riegelsberger, N. Weber, M. Mayer
    A. Baki (Google, Inc., USA)P. Bowen (Google, Inc., USA)B. Brekke (Google, Inc., USA)E. Ferrall-Nunge (Google, Inc., USA)G. Kossinets (Google, Inc., USA)J. Riegelsberger (Google, Inc, USA)N. Weber (Google, Inc, USA)M. Mayer (Google, Inc, USA)

    Learn about a compact, lightweight user immersion process that engages entire teams in user research. It creates lasting impressions and therefore provides momentum for change with minimal time and resources. We describe a project (‘Pokerface’) we ran at Google to increase our collective focus on the user. It involved hundreds of engineers and product managers across multiple locations. This immersion project allowed non-UX professionals to feel firsthand the delight and, at times, the pain of our users when using our products. It strengthened the bond between colleagues and users and called attention to issues that needed immediate action.

  • YNMData-driven Design Process in Adoption of Marking Menus for Large Scale Software
    J. Oh (Autodesk, Inc., USA), A. Uggirala
    J. Oh (Autodesk, Inc., USA)A. Uggirala (Autodesk, Inc., USA)

    This case study presents the user-centered design process that helped deliver the successful integration of the marking menu into Autodesk Inventor, Autodesk’s flagship mechanical engineering software. This case study presents the iterative design process in which usage data and feedback played a key role in the successful adoption of the marking menu in Inventor, Autodesk’s major mechanical engineering software.

  • YMJCreating Small Products at a Big Company: Adobe’s “Pipeline” Innovation Process
    R. Adams (Adobe Systems, USA), B. Evans, J. Brandt
    R. Adams (Adobe Systems, USA)B. Evans (Adobe Systems, USA)J. Brandt (Adobe Research, USA)

    Pipeline is a new development process at Adobe designed to rapidly evaluate product ideas. We detail our adaptation of lean approaches to the realities of a 10,000+ person company. Pipeline is a new development process at Adobe designed to rapidly prototype and evaluate new product offerings. Pipeline has user research at its core, and success is defined by how much is learned about a given problem, not by how much product is built. Starting ideas for new product directions are identified through Contextual Inquiry. Once a product direction is selected, an iterative process of development and evaluation is carried out over a 13-week period. Opportunities to pivot are built in at 3-week intervals, driven by evaluation results from laboratory studies. The Pipeline process is explained through an example product prototype, called “Gadget”. Gadget is an application targeted at Web developers that helps them more easily experiment with and modify the visual layout of a Web page.

  • YENUX Design with International Teams: Challenges and Best Practices
    C. Yiu (Microsoft Corporation, USA)
    C. Yiu (Microsoft Corporation, USA)

    As a UX designer at Microsoft leading projects with multiple stakeholders from the U.S., China and Israel, I would like to share my insights on the challenges and best practices. International UX collaboration has become a necessity for producing great global products. Microsoft Windows Intune™, an IT management and security product in the cloud, is built by engineering groups in different parts of the world. As a UX designer leading projects with multiple stakeholders, vendors and contractors from the U.S., China and Israel, I would like to share my insights on the challenges and best practices – organizing seeding and recurring visits; having key remote UX champions; utilizing the right communication channels; sharing work early; and being sensitive to time-zone and cultural differences.

253Course C06, unit 2/2

  • CGJC06: Agile User Experience and UCD
    W. Hudson (Syntagm Ltd, UK)
    W. Hudson (Syntagm Ltd, UK)

    This course shows how to integrate UCD with Agile methods to create great user experiences. It takes an ‘emotionally intelligent’ approach to engaging all team members in UCD. Benefits: This half-day course shows how to integrate User-Centered Design with Agile methods to create great user experiences. The course builds on the instructor’s research into empathizing skills and takes an ‘emotionally intelligent’ approach to engaging all team members in UCD. The course is a balanced combination of tutorials, group exercises and discussions, ensuring that participants gain a rich understanding of the problems presented by Agile and how they can be addressed. Origins: This is a half-day version of a popular one-day course that has been well received within a major UK telecoms operator and at a number of public presentations in London, Brussels and Hamburg in 2010 and 2011. It was part of the CHI 2011 & 2012 course offerings. Features: up-front versus Agile UCD; empathetic design; user & persona stories; Agile usability testing; adding value to the Agile team; design maps. Audience: Usability, UX and UCD practitioners trying to integrate UCD activities within Agile teams. (Some familiarity with UCD techniques is required.) Presentation: The course is approximately 60% tutorials and 40% activities or group discussions. Instructor Background: William Hudson has 40 years’ experience in the development of interactive systems. He has contributed material on user-centered design and user interface design to the Rational Unified Process and to Addison-Wesley’s Object Modeling and User Interface Design (van Harmelen, 2001). He is the founder of Syntagm, a consultancy specializing in user-centered design, and has conducted more than 300 intranet and web site evaluations. William has written over 30 articles, papers and studies. He is an Adjunct Professor at Hult International Business School.
Web Site: Further information about the instructor and this course can be found at www.syntagm.co.uk/design

BordeauxPapers: Tables and Floors

SFUSession chair: Sriram Subramanian
  • PJKPaper: GravitySpace: Tracking Users and Their Poses in a Smart Room Using a Pressure-Sensing Floor
    A. Bränzel (Hasso Plattner Institute, DE), C. Holz, D. Hoffmann, D. Schmidt, M. Knaust, P. Lühne, R. Meusel, S. Richter, P. Baudisch
    A. Bränzel (Hasso Plattner Institute, DE)C. Holz (Hasso Plattner Institute, DE)D. Hoffmann (Hasso Plattner Institute, DE)D. Schmidt (Hasso Plattner Institute, DE)M. Knaust (Hasso Plattner Institute, DE)P. Lühne (Hasso Plattner Institute, DE)R. Meusel (Hasso Plattner Institute, DE)S. Richter (Hasso Plattner Institute, DE)P. Baudisch (Hasso Plattner Institute, DE)

    Introduces a new approach to tracking people and objects in smart rooms based on a high-resolution pressure-sensitive floor. Provides consistent wall-to-wall coverage, is less susceptible to occlusion, and is less privacy-critical than camera-based systems. We explore how to track people and furniture based on a high-resolution pressure-sensitive floor. Gravity pushes people and objects against the floor, causing them to leave imprints of pressure distributions across the surface. While the sensor is limited to sensing direct contact with the surface, we can sometimes conclude what takes place above the surface, such as users’ poses or collisions with virtual objects. We demonstrate how to extend the range of this approach by sensing through passive furniture that propagates pressure to the floor. To explore our approach, we have created an 8 m² back-projected floor prototype, termed GravitySpace, a set of passive touch-sensitive furniture, as well as algorithms for identifying users, furniture, and poses. Pressure-based sensing on the floor offers four potential benefits over camera-based solutions: (1) it provides consistent coverage of rooms wall-to-wall, (2) is less susceptible to occlusion between users, (3) allows for the use of simpler recognition algorithms, and (4) intrudes less on users’ privacy.

  • PHUPaper: Improving Digital Object Handoff Using the Space Above the Table
    S. Sutcliffe (Univ. of Saskatchewan, CA), Z. Ivkovic, D. Flatla, A. Pavlovych, I. Stavness, C. Gutwin
    S. Sutcliffe (Univ. of Saskatchewan, CA)Z. Ivkovic (Univ. of Saskatchewan, CA)D. Flatla (Univ. of Saskatchewan, CA)A. Pavlovych (Univ. of Saskatchewan, CA)I. Stavness (Univ. of Saskatchewan, CA)C. Gutwin (Univ. of Saskatchewan, CA)

    We developed and evaluated two new above-the-table digital object handoff techniques, Force-Field and a novel technique called ElectroTouch, and found they are significantly faster and less error-prone than traditional surface-only techniques. Object handoff – that is, passing an object or tool to another person – is an extremely common activity in collaborative tabletop work. On digital tables, object handoff is typically accomplished by sliding objects on the table surface – but surface-only interactions can be slow and error-prone, particularly when there are multiple people carrying out multiple handoffs. An alternative approach is to use the space above the table for object handoff; this provides more room to move, but requires above-surface tracking. We have developed two above-the-surface handoff techniques that use simple and inexpensive tracking: a force-field technique that uses a depth camera to determine hand proximity, and an electromagnetic-field technique called ElectroTouch that provides positive indication when people touch hands over the table. We compared the new techniques to three kinds of surface-only handoff (sliding, flicking, and surface-only Force-Fields). The study showed that the above-surface techniques significantly improved both speed and accuracy, and that ElectroTouch was the best technique overall. This work provides designers with practical new techniques for substantially increasing performance and interaction richness on digital tables.

  • PFRPaper: An Evaluation of State Switching Methods for Indirect Touch Systems
    S. Voelker (RWTH Aachen Univ., DE), C. Wacharamanotham, J. Borchers
    S. Voelker (RWTH Aachen Univ., DE)C. Wacharamanotham (RWTH Aachen Univ., DE)J. Borchers (RWTH Aachen Univ., DE)

    Comparing four different state-switching techniques for indirect touch systems that allow users to rest their arms on the surface while in the Tracking state. Indirect touch systems combine a horizontal touch input surface with a vertical display for output. While this division is ergonomically superior to simple direct-touch displays for many tasks, users are no longer looking at their hands when touching. This requires the system to support an intermediate “tracking” state that lets users aim at objects without triggering a selection, similar to the hover state in mouse-based UIs. We present an empirical analysis of several interaction techniques for indirect touch systems to switch to this intermediate state, and derive design recommendations for incorporating it into such systems.

  • NRPNote: Improving Touch Accuracy on Large Tabletops Using Predecessor and Successor
    M. Möllers (RWTH Aachen Univ., DE), N. Dumont, S. Ladwig, J. Borchers
    M. Möllers (RWTH Aachen Univ., DE)N. Dumont (RWTH Aachen Univ., DE)S. Ladwig (RWTH Aachen Univ., DE)J. Borchers (RWTH Aachen Univ., DE)

    We explore how one touch affects the location and orientation of its successor. We show how this can be used to increase touch accuracy on tabletops. Touch interfaces provide great flexibility in designing a UI. However, the actual experience is often frustrating due to bad touch recognition. On small systems, we can analyze yaw, roll, and pitch of the finger to increase touch accuracy for a single touch. On larger systems, we need to take additional factors into account as users have more flexibility for their limb posture and need to aim over larger distances. Thus, we investigated how people perform touch sequences on those large touch surfaces. We show that the relative location of the predecessor of a touch has a significant impact on the orientation and position of the touch ellipse. We exploited this effect on an off-the-shelf touch display and showed that, with only minimal preparation, the touch accuracy of standard hardware can be improved by at least 7%, allowing better recognition rates or more UI components on the same screen.

  • NPJNote: Touchbugs: Actuated Tangibles on Multi-Touch Tables
    D. Nowacka (Newcastle Univ., UK), K. Ladha, N. Hammerla, D. Jackson, C. Ladha, E. Rukzio, P. Olivier
    D. Nowacka (Newcastle Univ., UK)K. Ladha (Newcastle Univ., UK)N. Hammerla (Newcastle Univ., UK)D. Jackson (Newcastle Univ., UK)C. Ladha (Newcastle Univ., UK)E. Rukzio (Ulm Univ., DE)P. Olivier (Newcastle Univ., UK)

    We present a novel approach to graspable interfaces using Touchbugs, small tangibles that are able to move across surfaces by employing vibrating motors and to communicate with interactive surfaces by using infrared LEDs. We present a novel approach to graspable interfaces using Touchbugs, actuated physical objects for interacting with interactive surface computing applications. Touchbugs are active tangibles that are able to move across surfaces by employing vibrating motors and can communicate with camera-based multi-touch surfaces using infrared LEDs. Touchbugs’ embedded inertial sensors and computational capabilities open a new interaction space by providing autonomous capabilities for tangibles that allow goal-directed behavior.

342APapers: Design for Classrooms 1

SQZSession chair: Deborah Tatar
  • PFPPaper: The Effect of Virtual Achievements on Student Engagement
    P. Denny (The Univ. of Auckland, NZ)
    P. Denny (The Univ. of Auckland, NZ)

    Do badge-based achievement systems actually engage users? We present the first large-scale study providing empirical evidence of their impact within an online learning tool. Badge-based achievement systems are being used increasingly to drive user participation and engagement across a variety of platforms and contexts. Despite positive anecdotal reports, there is currently little empirical evidence to support their efficacy in particular domains. With the recent rapid growth of tools for online learning, an interesting open question for educators is the extent to which badges can positively impact student participation. In this paper, we report on a large-scale (n > 1000) randomized, controlled experiment measuring the impact of incorporating a badge-based achievement system within an online learning tool. We discover a highly significant positive effect on the quantity of students’ contributions, without a corresponding reduction in their quality, as well as on the period of time over which students engaged with the tool. Students enjoyed being able to earn badges, and indicated a strong preference for having them available in the user interface.

  • PPXPaper: A Trace-based Framework for Analyzing and Synthesizing Educational Progressions
    E. Andersen (Univ. of Washington, USA), S. Gulwani, Z. Popovic
    E. Andersen (Univ. of Washington, USA)S. Gulwani (Microsoft, USA)Z. Popovic (Univ. of Washington, USA)

    Proposes a framework for using program execution traces to automatically analyze and synthesize progressions of practice problems for any procedural task, focusing on grade-school mathematics and learning games. A key challenge in teaching a procedural skill is finding an effective progression of example problems that the learner can solve in order to internalize the procedure. In many learning domains, generation of such problems is typically done by hand and there are few tools to help automate this process. We reduce this effort by borrowing ideas from test input generation in software engineering. We show how we can use execution traces as a framework for abstracting the characteristics of a given procedure and defining a partial ordering that reflects the relative difficulty of two traces. We also show how we can use this framework to analyze the completeness of expert-designed progressions and fill in holes. Furthermore, we demonstrate how our framework can automatically synthesize new problems by generating large sets of problems for elementary and middle school mathematics and synthesizing hundreds of levels for a popular algebra-learning game. We present the results of a user study with this game confirming that our partial ordering can predict user evaluation of procedural difficulty better than baseline methods.

  • PDJPaper: Wikipedia Classroom Experiment: Bidirectional Benefits of Students’ Engagement in Online Production Communities
    R. Farzan (Univ. of Pittsburgh, USA), R. Kraut
    R. Farzan (Univ. of Pittsburgh, USA)R. Kraut (Carnegie Mellon Univ., USA)

    This paper provides details on the design of a program to encourage students’ contributions to Wikipedia and our quantitative and qualitative approaches to evaluating it. Over the last decade, the citizen science movement has tried to engage students, laymen and other non-scientists in the production of science. However, citizen science projects have paid less attention to using the public to disseminate scientific knowledge. Wikipedia provides a platform to study engagement of citizen scientists in knowledge dissemination. College and university students are especially appropriate members of the public to write science articles, because of the coursework and mentorship they receive from faculty. This paper describes a project to support students’ writing of scientific articles in Wikipedia. In collaboration with a scientific association, we involved 640 students from 36 courses in editing scientific articles on Wikipedia. This paper provides details on the design of the program and our quantitative and qualitative approaches to evaluating it. Our results show that the Wikipedia classroom experiment benefits both the Wikipedia community and students. Undergraduate and graduate students substantially improved the scientific content of over 800 articles, at a level of quality indistinguishable from content written by PhD experts. Both students and faculty endorsed the motivational benefits of an authentic writing experience that would be read by thousands of people.

  • NFKNote: TypeRighting: Combining the Benefits of Handwriting and Typeface in Online Educational Videos
    A. Cross (Microsoft Research India, IN), M. Bayyapunedi, E. Cutrell, A. Agarwal, W. Thies
    A. Cross (Microsoft Research India, IN)M. Bayyapunedi (Microsoft Research India, IN)E. Cutrell (Microsoft Research India, IN)A. Agarwal (edX, USA)W. Thies (Microsoft Research India, IN)

    Examines viewers’ preferences for presentation styles in online educational videos and presents TypeRighting, a novel way to combine the benefits of handwriting and typeface. Recent years have seen enormous growth of online educational videos, spanning K-12 tutorials to university lectures. As this content has grown, so too has the number of presentation styles. Some educators have strong allegiance to handwritten recordings (using pen and tablet), while others use only typed (PowerPoint) presentations. In this paper, we present the first systematic comparison of these two presentation styles and how they are perceived by viewers. Surveys on edX and Mechanical Turk suggest that users enjoy handwriting because it is personal and engaging, yet they also enjoy typeface because it is clear and legible. Based on these observations, we propose a new presentation style, TypeRighting, that combines the benefits of handwriting and typeface. Each phrase is written by hand, but fades into typeface soon after it appears. Our surveys suggest that about 80% of respondents prefer TypeRighting over handwriting. The same fraction of respondents prefer TypeRighting over typeface, for videos in which the handwriting is sufficiently legible.

  • NSNNote: Tweeting for Class: Co-Construction as a Means for Engaging Students in Lectures
    J. Birnholtz (Northwestern Univ., USA), J. Hancock, D. Retelny
    J. Birnholtz (Northwestern Univ., USA)J. Hancock (Cornell Univ., USA)D. Retelny (Stanford Univ., USA)

    We present a case study of students in a lecture course using Twitter to contribute instructional content. Results show that students enjoyed this and primarily contributed examples and asked questions. Motivating students to be active learners is a perennial problem in education, and is particularly challenging in lectures where instructors typically prepare content in advance with little direct student participation. We describe our experience using Twitter as a tool for student “co-construction” of lecture materials. Students were required to post a tweet prior to each lecture related to that day’s topic, and these tweets – consisting of questions, examples and reflections – were incorporated into the lecture slides and notes. Students reported that they found lectures including their tweets in the class slides to be engaging, interactive and relevant, and nearly 90% of them recommended we use our co-construction approach again.

343Course C03, unit 3/3

  • CVZC03: Rapid Design Labs—A Tool to Turbocharge Design-Led Innovation
    J. Nieters (Hewlett Packard, USA), C. Thompson, A. Pande
    J. Nieters (Hewlett Packard, USA)C. Thompson (zSpace, Inc, USA)A. Pande (Hewlett Packard, IN)

    Jim Nieters, Carola Thompson, and Amit Pande will empower designers and UX teams to act as a catalyst to systemically identify and drive game-changing ideas to market with rapid design labs. Have you ever had a big idea that got crushed? You know, one of those inspiring ideas that could change the world? If you work in a product or design group in a corporation or design firm, you have probably experienced what happens after you share one of those ideas. In the real world, coming up with a breakthrough idea or transformative design doesn’t mean it will automatically get to market. By definition, innovative ideas represent new ways of thinking. Organizations by nature seem to have anti-innovation antibodies that often kill new ideas—even disruptive innovations that could help companies differentiate themselves from their competition. As difficult as coming up with a game-changing idea can be, getting an organization to act on the idea often seems impossible. The course Rapid Design Labs: A Tool to Turbocharge Design-Led Innovation gives you new tools for this challenge, tools that empower designers and UX teams to get breakthrough ideas and designs accepted. Learn how UX can act as a catalyst to systemically identify and drive game-changing ideas to market. Rapid design labs are a design-led, facilitative, cross-functional, iterative approach to innovation that aligns organizations and generates business value each step of the way.

361Special session: Student Research Judging

Session chairs: Shaowen Bardzell, Celine Latulipe
  • SRJJury: Shaowen Bardzell, Celine Latulipe
    This first round of the Student Research Competition is reserved for the competition participants and jury. The second round, Wednesday at 11am, is open to all CHI 2013 attendees.

362/363Special Interest Group

  • GVQEnhancing the Research Infrastructure for Child-Computer Interaction
    J. Read (Univ. of Central Lancashire, UK), J. Hourcade
    J. Read (Univ. of Central Lancashire, UK)J. Hourcade (Univ. of Iowa, USA)

    The child-computer interaction community has been steadily adding research infrastructure over the past 20 years through books, the Interaction Design and Children conference, being a featured community at CHI, through an official IFIP group, and more recently through a journal. In this SIG we will discuss the next steps to further strengthen the research infrastructure in this research community with the goals of improving the quality of the research, enhancing research resources, and increasing the impact of the field in industry and education.

HavanePapers: Crowds and Activism

SMPSession chair: Jaime Teevan
  • PHFPaper: Delivering Patients to Sacré Coeur: Collective Intelligence in Digital Volunteer Communities
    K. Starbird (Univ. of Washington, USA)
    K. Starbird (Univ. of Washington, USA)

    This study examines the activities of digital volunteers during crisis events, using a distributed cognition perspective to demonstrate how individual ICT users function together as a collectively intelligent cognitive system. This study examines the information-processing activities of digital volunteers and other connected ICT users in the wake of crisis events. Synthesizing findings from several previous research studies of digital volunteerism, this paper offers a new approach for conceptualizing the activities of digital volunteers, shifting from a focus on organizing to a focus on information movement. Using the lens of distributed cognition, this research describes collective intelligence as transformations of information within a system where cognition is distributed socially across individuals as well as through their tools and resources. This paper demonstrates how digital volunteers, through activities such as relaying, amplifying, verifying, and structuring information, function as a collectively intelligent cognitive system in the wake of disaster events.

  • PQXPaper: Does Slacktivism Hurt Activism?: The Effects of Moral Balancing and Consistency in Online Activism
    Y. Lee (Michigan State University, USA), G. Hsieh
    Y. Lee (Michigan State University, USA)G. Hsieh (Michigan State University, USA)

    We examine how simple online activism influences people’s likelihood of taking, and effort in, a subsequent civic action. The findings have implications for online campaign design. In this paper we explore how the decision to partake in low-cost, low-risk online activism—slacktivism—may affect subsequent civic action. Based on moral balancing and consistency effects, we designed an online experiment to test if signing or not signing an online petition increased or decreased subsequent contribution to a charity. We found that participants who signed the online petition were significantly more likely to donate money to a related charity, demonstrating a consistency effect. We also found that participants who did not sign the petition donated significantly more money to an unrelated charity, demonstrating a moral balancing effect. The results suggest that exposure to online activism influences individuals’ decisions on subsequent civic actions.

  • PAVPaper: Using Crowdsourcing to Support Pro-Environmental Community Activism
    E. Massung (Univ. of Bristol, UK), D. Coyle, K. Cater, M. Jay, C. Preist
    E. Massung (Univ. of Bristol, UK)D. Coyle (Univ. of Bristol, UK)K. Cater (Univ. of Bristol, UK)M. Jay (Univ. of Bristol, UK)C. Preist (Univ. of Bristol, UK)

    We developed mobile applications and investigated motivational techniques to support crowdsourcing and pro-environmental community activism. The paper offers new insights and recommendations for environmental technologies targeting communities, rather than individuals. Community activist groups typically rely on core groups of highly motivated members. In this paper we consider how crowdsourcing strategies can be used to supplement the activities of pro-environmental community activists, thus increasing the scalability of their campaigns. We focus on mobile data collection applications and strategies that can be used to engage casual participants in pro-environmental data collection. We report the results of a study that used both quantitative and qualitative methods to investigate the impact of different motivational factors and strategies, including both intrinsic and extrinsic motivators. The study compared and provides empirical evidence for the effectiveness of two extrinsic motivation strategies, pointification – a subset of gamification – and financial incentives. Prior environmental interest is also assessed as an intrinsic motivation factor. In contrast to previous HCI research on pro-environmental technology, much of which has focused on individual behavior change, this paper offers new insights and recommendations on the design of systems that target groups and communities.

  • PJQPaper: A Longitudinal Study of Follow Predictors on Twitter
    C. Hutto (Georgia Institute of Technology, USA), E. Gilbert, S. Schoenebeck
    C. Hutto (Georgia Institute of Technology, USA)E. Gilbert (Georgia Institute of Technology, USA)S. Schoenebeck (Univ. of Michigan, USA)

    Comparing across many variables related to message content, social behavior, and network structure allows us to interpret their relative effect on follower growth from different theoretical perspectives. Follower count is important to Twitter users: it can indicate popularity and prestige. Yet, holistically, little is understood about what factors – like social behavior, message content, and network structure – lead to more followers. Such information could help technologists design and build tools that help users grow their audiences. In this paper, we study 507 Twitter users and a half-million of their tweets over 15 months. Marrying a longitudinal approach with a negative binomial auto-regression model, we find that variables for message content, social behavior, and network structure should be given equal consideration when predicting link formations on Twitter. To our knowledge, this is the first longitudinal study of follow predictors, and the first to show that the relative contributions of social behavior and message content are just as impactful as factors related to social network structure for predicting growth of online social networks. We conclude with practical and theoretical implications for designing social media technologies.

351Papers: Large and Public Displays

SLFSession chair: Xiang Cao
  • PBZPaper: High-Precision Pointing on Large Wall Displays using Small Handheld Devices
    M. Nancel (Univ Paris-Sud, FR), O. Chapuis, E. Pietriga, X. Yang, P. Irani, M. Beaudouin-Lafon
    M. Nancel (Univ Paris-Sud, FR)O. Chapuis (Univ Paris-Sud, FR)E. Pietriga (INRIA, Orsay, France & INRIA Chile, FR)X. Yang (Univ. of Alberta, CA)P. Irani (Univ. of Manitoba, CA)M. Beaudouin-Lafon (Univ Paris-Sud, FR)

    Reports on the design and evaluation of pointing techniques, some of which use head orientation, so the handheld device can also be used for other interactions. Rich interaction with high-resolution wall displays is not limited to remotely pointing at targets. Other relevant types of interaction include virtual navigation, text entry, and direct manipulation of control widgets. However, most techniques for remotely acquiring targets with high precision have studied remote pointing in isolation, focusing on pointing efficiency and ignoring the need to support these other types of interaction. We investigate high-precision pointing techniques capable of acquiring targets as small as 4 millimeters on a 5.5-meter-wide display while leaving up to 93% of a typical tablet device’s screen space available for task-specific widgets. We compare these techniques to state-of-the-art distant pointing techniques and show that two of our techniques, a purely relative one and one that uses head orientation, perform as well or better than the best pointing-only input techniques while using a fraction of the interaction resources.

  • TUUTOCHI: Window Brokers: Collaborative Display Space Control
    R. Arthur (Brigham Young Univ., USA), D. Olsen
    R. Arthur (Brigham Young Univ., USA)D. Olsen (Brigham Young Univ., USA)

    Take collaborative control of a display space you do not own in a familiar, platform-independent way without transmitting new software to the display or other participating devices. As users travel from place to place, they can encounter display servers, that is, machines which supply a collaborative content-sharing environment. Users need a way to control how content is arranged on these display spaces. The software for controlling these display spaces should be consistent from display server to display server. However, display servers could be controlled by institutions which may not allow for the control software to be installed. This article introduces the window broker protocol which allows users to carry familiar control techniques on portable personal devices and use the control technique on any display server without installing the control software on the display server. This article also discusses how the window broker protocol mitigates some security risks that arise from potentially malicious display servers.

  • PEZPaper: StrikeAPose: Revealing Mid-Air Gestures on Public Displays
    R. Walter (Telekom Innovation Laboratories, TU Berlin, DE), G. Bailly, J. Müller
    R. Walter (Telekom Innovation Laboratories, TU Berlin, DE)G. Bailly (Telekom Innovation Laboratories, TU Berlin, DE)J. Müller (Univ. of the Arts, DE)

    Proposes three strategies to reveal mid-air gestures on interactive public displays and introduces the Teapot gesture as a novel initial mid-air gesture. Shows that users naturally explore gesture variations. We investigate how to reveal an initial mid-air gesture on interactive public displays. This initial gesture can serve as gesture registration for advanced operations. We propose three strategies to reveal the initial gesture: spatial division, temporal division and integration. Spatial division permanently shows the gesture on a dedicated screen area. Temporal division interrupts the application to reveal the gesture. Integration embeds gesture hints directly in the application. We also propose a novel initial gesture called Teapot to illustrate our strategies. We report on a laboratory and field study. Our main findings are: A large percentage of all users execute the gesture, especially with spatial division (56%). Users intuitively discover a gesture vocabulary by exploring variations of the Teapot gesture by themselves, as well as by imitating and extending other users’ variations.

  • PQMPaper: SideWays: A Gaze Interface for Spontaneous Interaction with Situated Displays
    Y. Zhang (Lancaster Univ., UK), A. Bulling, H. Gellersen
    Y. Zhang (Lancaster Univ., UK)A. Bulling (Max Planck Institute for Informatics, DE)H. Gellersen (Lancaster Univ., UK)

    Presents a system that uses lightweight computer vision techniques for calibration-free eye tracking. The system enables hands-free spontaneous interaction with situated displays using eye gaze. Eye gaze is compelling for interaction with situated displays as we naturally use our eyes to engage with them. In this work we present SideWays, a novel person-independent eye gaze interface that supports spontaneous interaction with displays: users can just walk up to a display and immediately interact using their eyes, without any prior user calibration or training. Requiring only a single off-the-shelf camera and lightweight image processing, SideWays robustly detects whether users attend to the centre of the display or cast glances to the left or right. The system supports an interaction model in which attention to the central display is the default state, while “sidelong glances” trigger input or actions. The robustness of the system and usability of the interaction model are validated in a study with 14 participants. Analysis of the participants’ strategies in performing different tasks provides insights on gaze control strategies for the design of SideWays applications.
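    The interaction model described in the abstract — centre attention as the default state, left and right glances triggering actions — can be sketched as a simple dispatcher. This is an illustration only; the gaze classifier itself (the computer-vision part of SideWays) is out of scope here, and the `gaze` labels and `dispatch` function are hypothetical names:

    ```python
    # Illustrative sketch of a SideWays-style interaction model: the centre
    # gaze is the attentive default (no input), while sidelong glances to
    # the left or right trigger the corresponding action callbacks.
    def dispatch(gaze, on_left, on_right):
        """Map a detected gaze direction label to an action."""
        if gaze == "left":
            return on_left()
        if gaze == "right":
            return on_right()
        return None  # centre: default state, no input triggered

    result = dispatch("right", lambda: "previous item", lambda: "next item")
    print(result)  # next item
    ```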

352ABPapers: Embodied Interaction 1

SELSession chair: Antonio Krüger
  • TTETOCHI: Interaction Design for and with the Lived Body: Some Implications of Merleau-Ponty’s Phenomenology
    D. Svanaes (Norwegian Univ. of Science and Technology, NO)
    D. Svanaes (Norwegian Univ. of Science and Technology, NO)

    The body as experienced by the user has to a large extent been absent in HCI. The paper exemplifies how the field can benefit from Merleau-Ponty’s phenomenology of the body. In 2001, Paul Dourish proposed the term Embodied Interaction to describe a new paradigm for Interaction Design that focuses on the physical, bodily and social aspects of our interaction with digital technology. Dourish used Merleau-Ponty’s phenomenology of perception as the theoretical basis for his discussion of the bodily nature of embodied interaction. This paper extends Dourish’s work to introduce the human-computer interaction community to ideas related to Merleau-Ponty’s concept of the lived body. It also provides a detailed analysis of two related topics: (1) Embodied Perception: the active and embodied nature of perception, including the body’s ability to extend its sensory apparatus through digital technology; and (2) Kinaesthetic Creativity: the body’s ability to relate in a direct and creative fashion with the “feel” dimension of interactive products during the design process.

  • TZCTOCHI: On the Naturalness of Touchless: Putting the “Interaction” Back into NUI
    K. O’Hara (Microsoft Research, UK), R. Harper, H. Mentis, A. Sellen, A. Taylor
    K. O’Hara (Microsoft Research, UK)R. Harper (Microsoft Research, UK)H. Mentis (Harvard Medical School, USA)A. Sellen (Microsoft Research, UK)A. Taylor (Microsoft Research, UK)

    Using examples of gestural interaction from surgery and urban screen gaming, we discuss the notion of naturalness in NUI narratives as an occasioned property of interaction rather than an inherent property of an interface. After many decades of research, the ability to interact with technology through touchless gestures and sensed body movements is becoming an everyday reality. These technologies form part of a broader suite of innovations that have come to be characterised as Natural User Interfaces. While the narrative of NUI serves a number of useful purposes, it also raises some concerns that make it increasingly important to examine the conceptual work being performed by this moniker and how it frames approaches to design and engineering in particular ways. Often the arguments made situate the locus of naturalness in the gestural interface alone, treating the issue as a representational concern. But in doing this, attention is perhaps less focused on the in situ and embodied aspects of interaction with such technologies. Drawing on examples of gestural interaction in the diverse settings of surgery and urban screen gaming, we consider naturalness as an occasioned property of action that social actors actively manage and produce together in situ through their interaction with each other and the material world.

  • TSJTOCHI: Moving and Making Strange: An Embodied Approach to Movement-based Interaction Design
    L. Loke (The Univ. of Sydney, AU), T. Robertson
    L. Loke (The Univ. of Sydney, AU)T. Robertson (Univ. of Technology, Sydney, AU)

    We offer a methodology for the design and evaluation of movement-based interactions with technology, where the felt experience of moving is valued along with the perspectives of observer and machine. There is growing interest in designing for movement-based interactions with technology, now that various sensing technologies are available enabling a range of movement possibilities from gestural to whole-body interactions. We present a design methodology of Moving and Making Strange, an approach to movement-based interaction design that recognizes the central role of the body and movement in lived cognition. The methodology was developed through a series of empirical projects, each focusing on different conceptions of movement available within motion-sensing interactive, immersive spaces. The methodology offers designers a set of principles, perspectives, methods and tools for exploring and testing movement-related design concepts. It is innovative for the inclusion of the perspective of the mover, together with the traditional perspectives of the observer and the machine. Making strange is put forward as an important tactic for rethinking how to approach the design of movement-based interaction.

  • TDGTOCHI: Embodied Cognition And The Magical Future Of Interaction Design
    D. Kirsh (Univ. of California, San Diego, USA)
    D. Kirsh (Univ. of California, San Diego, USA)

    Explores what world-class choreography and dance teach us about embodied cognition & creativity. Explains how bodies absorb tools and how bodies and things are used for thinking. The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: 1) interacting with tools changes the way we think and perceive – tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; 2) we think with our bodies not just with our brains; 3) we know more by doing than by seeing – there are times when physically performing an activity is better than watching someone else perform the activity, even though our motor resonance system fires strongly during other person observation; 4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context-aware, and telepresence systems.

221/221MLast-minute SIGs: Session 3