
TOCHI papers


Enter a 3-letter code in the search box of the CHI 2013 mobile app to go to the corresponding session or presentation. When clickable, a 3-letter code links to the Video Previews web site.

Communities: Design (16), Engineering (2), Management (0), User Experience (14), Child-Computer Interaction (2), Digital Arts (1), Games and Entertainment (4), Health (6), Sustainability (0), HCI for Development (0)
  • THJ (Wed. 9am) Backtracking Events as Indicators of Usability Problems in Creation-Oriented Applications
    D. Akers (Univ. of Puget Sound, USA), R. Jeffries (Google, Inc., USA), M. Simpson (Google, Inc., USA), T. Winograd (Stanford Univ., USA)

    Three experiments demonstrate that backtracking events such as undo are useful indicators of usability problems for creation-oriented applications. This insight yields a new cost-effective usability evaluation method, backtracking analysis.

    A diversity of user goals and strategies makes creation-oriented applications such as word processors or photo-editors difficult to comprehensively test. Evaluating such applications requires testing a large pool of participants to capture the diversity of experience, but traditional usability testing can be prohibitively expensive. To address this problem, this article contributes a new usability evaluation method called backtracking analysis, designed to automate the process of detecting and characterizing usability problems in creation-oriented applications. The key insight is that interaction breakdowns in creation-oriented applications often manifest themselves in backtracking operations that can be automatically logged (e.g., undo and erase operations). Backtracking analysis synchronizes these events to contextual data such as screen capture video, helping the evaluator to characterize specific usability problems. The results from three experiments demonstrate that backtracking events can be effective indicators of usability problems in creation-oriented applications, and can yield a cost-effective alternative to traditional laboratory usability testing.
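    The core logging idea in this abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' tool: the event names and the 5-second grouping window are assumptions, and a real system would align each episode with synchronized screen-capture video for the evaluator to review.

    ```python
    # Sketch: flag backtracking events (e.g., undo/erase) in a timestamped
    # interaction log and group nearby ones into candidate problem episodes.
    BACKTRACK_EVENTS = {"undo", "erase", "redo"}  # hypothetical event names

    def backtracking_episodes(log, gap=5.0):
        """log: list of (timestamp_seconds, event_name); returns [(start, end)]."""
        times = [t for t, name in log if name in BACKTRACK_EVENTS]
        episodes = []
        for t in sorted(times):
            if episodes and t - episodes[-1][1] <= gap:
                episodes[-1][1] = t          # extend the current episode
            else:
                episodes.append([t, t])      # start a new episode
        return [tuple(e) for e in episodes]

    log = [(1.0, "draw"), (2.5, "undo"), (3.0, "undo"), (20.0, "draw"), (21.0, "erase")]
    print(backtracking_episodes(log))  # → [(2.5, 3.0), (21.0, 21.0)]
    ```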

  • TTZ (Wed. 11am) Enriching Archaeological Parks with Contextual Sounds and Mobile Technology
    C. Ardito (Univ. of Bari, IT), M. Costabile (Univ. of Bari, IT), A. De Angeli (Univ. of Trento, IT), R. Lanzilotti (Univ. of Bari, IT)

    Explore! is an educational pervasive game for pupils exploring sites of cultural interest. The soundscape model implemented in Explore! helps visitors to navigate a site and feel its historical atmosphere.

    The importance of cultural heritage in forging a sense of identity is becoming increasingly evident. Information and communication technologies have great potential to promote awareness and appreciation of cultural heritage. This paper presents some findings on how mobile technology can be used to foster a better understanding of an archaeological site by reconstructing the ancient environment and life. Children aged 11-13 years are the target of our research. To motivate and engage them, a pervasive educational game has been developed and implemented in Explore!, a system aimed at supporting children exploring sites of cultural interest. Special attention has been devoted to the design of a soundscape that may improve players’ navigation in degraded physical environments and enrich their overall experience. A field study indicated that children judged their experience both useful and entertaining: not only did they enjoy playing the game but they also learned historical notions and facts related to ancient Roman life. Contextual sounds were found to have a facilitating effect on space navigation, reducing the need for map reading and improving spatial orientation. This work provides insights into the design of educational games for use with cultural heritage and a model to enrich historical sites through the creation of soundscapes which can help visitors to navigate a site and feel its historical atmosphere.

  • TUU (Mon. 4pm) Window Brokers: Collaborative Display Space Control
    R. Arthur (Brigham Young Univ., USA), D. Olsen (Brigham Young Univ., USA)

    Take collaborative control of a display space you do not own in a familiar, platform-independent way without transmitting new software to the display or other participating devices.

    As users travel from place to place, they can encounter display servers, that is, machines which supply a collaborative content-sharing environment. Users need a way to control how content is arranged on these display spaces. The software for controlling these display spaces should be consistent from display server to display server. However, display servers may be controlled by institutions that do not allow control software to be installed. This article introduces the window broker protocol, which allows users to carry familiar control techniques on portable personal devices and use those techniques on any display server without installing the control software on the display server. This article also discusses how the window broker protocol mitigates some security risks that arise from potentially malicious display servers.

  • TXL (Tue. 11am) Physical Activity Motivating Games: Be Active and Get Your Own Reward
    S. Berkovsky (National ICT Australia, AU), J. Freyne (CSIRO, AU), M. Coombe (CSIRO, AU)

    We present a game design that leverages the playfulness of games to motivate players to perform mild physical activity. This design can potentially change the way players interact with games.

    People’s daily lives have become increasingly sedentary, with extended periods of time being spent in front of a host of electronic screens for learning, work, and entertainment. We present research into the use of an adaptive persuasive technology, which introduces bursts of physical activity into a traditionally sedentary activity – computer game playing. Our game design approach leverages the playfulness and addictive nature of computer games to motivate players to engage in mild physical activity. The design allows players to gain virtual in-game rewards in return for performing real physical activity captured by sensory devices. This paper presents a two-stage analysis of the activity-motivating game design approach applied to a prototype game. Initially, we detail the overall acceptance of active games discovered when trialling the technology with 135 young players. Results showed that players performed more activity without negatively affecting their perceived enjoyment of the playing experience. The analysis did discover, however, a lack of balance between the amounts of physical activity carried out by players with various gaming skills, which prompted a subsequent investigation into adaptive techniques for balancing the amount of physical activity performed by players. An evaluation with an additional 90 players showed that adaptive techniques successfully overcame the dependence on gaming skills and achieved more balanced activity levels. Overall, this work positions activity-motivating games as an approach that can potentially change the way players interact with computer games and lead to healthier lifestyles.

  • TQS (Wed. 4pm) Supporting Personal Narrative for Children with Complex Communication Needs
    R. Black (Univ. of Dundee, UK), A. Waller (Univ. of Dundee, UK), R. Turner (Data2Text, UK), E. Reiter (Univ. of Aberdeen, UK)

    “How was School today...?” uses sensor-based data-to-text technology to generate personal narratives. Children with cerebral palsy are able to tell parents about their school day.

    Children with complex communication needs who use voice output communication aids seldom engage in extended conversation. The “How was School today...?” system has been designed to enable such children to talk about their school day. The system uses data-to-text technology to generate narratives from sensor data. Observations, interviews and prototyping were used to ensure that stakeholders were involved in the design of the system. Evaluations with three children showed that the prototype system, which automatically generates utterances, has the potential to support disabled individuals in participating more fully in interactive conversation. Analysis of a conversational transcript and observations indicate that the children were able to access relevant conversation and had more control in the conversation in comparison to their usual interactions, where control lay mainly with the speaking partner. Further research to develop an improved, more rugged system that supports users with different levels of language ability is now underway.

  • TNG (Wed. 11am) Beyond Recommendations: Local Review Websites and Their Impact
    B. Brown (Mobile Life @ Stockholm Univ., SE)

    A study of how reviews are used on the Yelp and Tripadvisor websites, developing new implications for recommendation systems.

    Online review websites have enabled new interactions between companies and their customers. In this paper we draw on interviews with users, reviewers, and establishments to explore how local review websites can change interactions around local places. Review websites such as Yelp and Tripadvisor allow customers to ‘pre-visit’ establishments and areas of a city before an actual visit. The collection of large numbers of user-generated reviews has also created a new genre of writing – with reviewers gaining considerable pleasure from passing on word-of-mouth and influencing others’ choices. Reviews also offer a new channel of communication between establishments, customers and competitors. We discuss how review websites can be designed to cater for a broader range of interactions around reviews beyond a focus on recommendations.

  • TPX (Mon. 2pm) Study of Polynomial Mapping Functions in Video-Oculography Eye Trackers
    J. Cerrolaza (Public Univ. of Navarra, ES), A. Villanueva (Public Univ. of Navarra, ES), R. Cabeza (Public Univ. of Navarra, ES)

    In this study we shed light on one of the most widely employed yet least explored techniques for gaze estimation in eye-tracking systems, and obtain a set of precise and simpler alternative equations.

    Gaze-tracking data have been used successfully in the design of new input devices and as an observational technique in usability studies. Polynomial-based Video-Oculography (VOG) systems are one of the most attractive gaze estimation methods thanks to their simplicity and ease of implementation. Although the functionality of these systems is generally acceptable, there has been no thorough comparative study to date of how the mapping equations affect the final system response. After developing a taxonomic classification of calibration functions, we examined over 400,000 models and evaluated the validity of several conventional assumptions. Our rigorous experimental procedure enabled us to optimize the calibration process for a real VOG gaze-tracking system and halve the calibration time while avoiding a detrimental effect on the accuracy or tolerance to head movement. Finally, a geometry-based method is implemented and tested, and its results and performance are compared with those obtained using the general-purpose expressions.
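    The polynomial calibration this paper studies can be illustrated with a least-squares fit. A minimal sketch under assumptions: a common second-order term set (1, u, v, uv, u², v²) over normalized pupil-glint vectors, and a synthetic 9-point calibration grid; the paper's taxonomy and optimized models are not reproduced here.

    ```python
    # Sketch: fit a second-order polynomial mapping from pupil-glint vectors
    # (u, v) to screen coordinates, the classic VOG calibration step.
    import numpy as np

    def design_matrix(uv):
        u, v = uv[:, 0], uv[:, 1]
        # polynomial terms: 1, u, v, uv, u^2, v^2
        return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

    def calibrate(uv, screen_xy):
        """Least-squares fit of one coefficient vector per screen axis."""
        A = design_matrix(uv)
        coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
        return coeffs  # shape (6, 2)

    def estimate(uv, coeffs):
        return design_matrix(uv) @ coeffs

    # Synthetic calibration data generated by a model inside the basis
    rng = np.random.default_rng(0)
    uv = rng.uniform(-1, 1, size=(9, 2))
    true = np.column_stack([3 + 2 * uv[:, 0] + 0.5 * uv[:, 0]**2, 1 - uv[:, 1]])
    coeffs = calibrate(uv, true)
    print(np.allclose(estimate(uv, coeffs), true))  # → True
    ```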

  • TEX (Thu. 2pm) Designing a Multi-Slate Reading Environment to Support Active Reading Activities
    N. Chen (Univ. of Maryland, USA), F. Guimbretière (Cornell Univ., USA), A. Sellen (Microsoft Research, UK)

    Researchers have identified numerous requirements for systems aiming to support active reading. We survey these requirements and present interactions for a multi-slate reading environment that address them in a comprehensive manner.

    Despite predictions of the paperless office, most knowledge workers and students still rely heavily on paper in most of their document practices. Research has shown that paper’s dominance can be attributed to the fact that it supports a broad range of these users’ diverse reading requirements. Our analysis of the literature suggests that a new class of reading device consisting of an interconnected environment of thin and lightweight electronic slates could potentially unify the distinct advantages of e-books, PCs, and tabletop computers to offer an electronic reading solution providing functionality comparable to, or even exceeding, that of paper. This article presents the design and construction of such a system. In it, we explain how data can be mapped to slates, detail interactions for linking the slates, and describe tools that leverage the connectivity between slates. A preliminary study indicates that such a system has the potential to be an electronic alternative to paper.

  • TEC (Wed. 2pm) “Without the Clutter of Unimportant Words”: Descriptive Keyphrases for Text Visualization
    J. Chuang (Stanford Univ., USA), C. Manning (Stanford Univ., USA), J. Heer (Stanford Univ., USA)

    We study how people summarize text using descriptive phrases, develop a novel algorithm for extracting keyphrases, and demonstrate how our algorithms enable novel text visualization designs.

    Keyphrases aid the exploration of text collections by communicating salient aspects of documents and are often used to create effective visualizations of text. While prior work in HCI and visualization has proposed a variety of ways of presenting keyphrases, less attention has been paid to selecting the best descriptive terms. In this article, we investigate the statistical and linguistic properties of keyphrases chosen by human judges and determine which features are most predictive of high-quality descriptive phrases. Based on 5,611 responses from 69 graduate students describing a corpus of dissertation abstracts, we analyze characteristics of human-generated keyphrases, including phrase length, commonness, position, and part of speech. Next, we systematically assess the contribution of each feature within statistical models of keyphrase quality. We then introduce a method for grouping similar terms and varying the specificity of displayed phrases so that applications can select phrases dynamically based on the available screen space and current context of interaction. Precision-recall measures find that our technique generates keyphrases that match those selected by human judges. Crowdsourced ratings of tag cloud visualizations rank our approach above other automatic techniques. Finally, we discuss the role of HCI methods in developing new algorithmic techniques suitable for user-facing applications.
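    The features this abstract names (frequency, commonness, position) can be combined into a toy scoring function. A sketch only: the combination and the log-commonness penalty are made up for illustration, not the paper's fitted model, and `reference_freq` stands in for a background corpus.

    ```python
    # Sketch: rank candidate terms by term frequency, position of first
    # occurrence (earlier is better), and rarity relative to a background corpus.
    import math
    import re

    def score_phrases(doc, reference_freq, top_k=3):
        words = re.findall(r"[a-z]+", doc.lower())
        scores = {}
        for i, w in enumerate(words):
            if w not in scores:
                tf = words.count(w)                      # frequency in the document
                first = 1.0 - i / len(words)             # position feature
                common = reference_freq.get(w, 1e-6)     # background commonness
                scores[w] = tf * first * -math.log(common)
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    doc = "gaze gaze tracking the the the system"
    ref = {"the": 0.5, "system": 0.01}
    print(score_phrases(doc, ref, top_k=2))  # → ['gaze', 'tracking']
    ```

    Even this crude weighting demotes very common words like "the", which is the intuition behind penalizing commonness.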

  • TRN (Mon. 2pm) A Predictive Speller Controlled by a Brain-Computer Interface Based on Motor Imagery
    T. D’Albis (Politecnico di Milano, IT), R. Blatt (Politecnico di Milano, IT), R. Tedesco (Politecnico di Milano, IT), L. Sbattella (Politecnico di Milano, IT), M. Matteucci (Politecnico di Milano, IT)

    Persons suffering from severe motor disorders have limited possibilities to communicate. We present a speller, based on a brain-computer interface, improved by a smart UI and a text predictor.

    Persons suffering from motor disorders have limited possibilities to communicate and normally require assistive technologies to fulfill this primary need. Promising means to provide basic communication abilities to subjects affected by severe motor impairments are brain-computer interfaces (BCIs), i.e., systems that directly translate brain signals into device commands, bypassing any muscle or nerve mediation. To date, the use of BCIs for effective verbal communication is still an open issue – primarily due to the low rates of information transfer that can be achieved with this technology. Still, the performance of BCI spelling applications can be considerably improved by a smart user interface design and by the adoption of Natural Language Processing (NLP) techniques for text prediction. The objective of this work is to suggest an approach and a user interface for BCI spelling applications combining state-of-the-art BCI and NLP techniques to maximize the overall communication rate of the system. The BCI paradigm adopted is motor imagery: when the subject imagines moving a certain part of the body, he/she produces modifications to specific brain rhythms that are detected in real time through an electroencephalogram and translated into commands for a spelling application. To maximize the overall communication rate, our approach is twofold: on one hand we maximize the information transfer rate from the control signal; on the other, we optimize the way this information is employed for the purpose of verbal communication. The achieved results are satisfactory and comparable with the latest works reported in the literature on motor-imagery BCI spellers. The three subjects tested achieved spelling rates of 3, 2.7, and 2 char/min, respectively.

  • TCL (Tue. 9am) What Does Touch Tell Us About Emotions in Touchscreen-Based Gameplay?
    Y. Gao (Univ. College London, UK), N. Bianchi-Berthouze (Univ. College London, UK), H. Meng (Brunel Univ., UK)

    The paper contributes a method to automatically recognize users’ emotional states from their touch behaviour in touch-based computer games. It also discusses its generalization to other types of applications.

    The increasing number of people playing games on touch-screen mobile phones raises the question of whether touch behaviours reflect players’ emotional states. This prospect would not only be a valuable evaluation indicator for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal and two levels of valence. Accuracy reached between 69% and 77% for the four emotional states, and higher results (~89%) were obtained for discriminating between two levels of arousal and two levels of valence. We conclude by discussing the factors relevant to the generalization of the results to applications other than games.

  • TFS (Thu. 2pm) All You Need is Love: Current Strategies of Mediating Intimate Relationships through Technology
    M. Hassenzahl (Folkwang Univ. of the Arts, DE), S. Heidecker (Folkwang Univ. of the Arts, DE), K. Eckoldt (Folkwang Univ. of the Arts, DE), S. Diefenbach (Folkwang Univ. of the Arts, DE), U. Hillmann (Telekom Innovation Laboratories, DE)

    There is a growing interest in creating a “relatedness” experience through technology. Our review of 143 artifacts revealed six strategies designers/researchers use: awareness, expressivity, physicalness, gift giving, joint action, and memories.

    A wealth of evidence suggests that love, closeness, and intimacy—in short, relatedness—are important for people’s psychological well-being. Nowadays, however, couples are often forced to live apart. Accordingly, there has been a growing and flourishing interest in designing technologies that mediate (and create) a feeling of relatedness when being separated, beyond the explicit verbal communication and simple emoticons that available technologies offer. This article provides a review of 143 published artifacts (i.e., design concepts, technologies). Based on this, we present six strategies used by designers/researchers to create a relatedness experience: awareness, expressivity, physicalness, gift giving, joint action, and memories. We understand those strategies as starting points for the experience-oriented design of technology.

  • TJE (Tue. 11am) An Empirical Study of the “Prototype Walkthrough”: A Studio-Based Activity for HCI Education
    C. Hundhausen (Washington State Univ., USA), D. Fairbrother (Washington State Univ., USA), M. Petre (The Open Univ., UK)

    Presents video analysis of the prototype walkthrough, a studio-based learning activity for HCI education. Results suggest that the activity provides valuable opportunities for students to actively learn HCI design.

    For over a century, studio-based instruction has served as an effective pedagogical model in architecture and fine arts education. Because of its design orientation, human-computer interaction (HCI) education is an excellent venue for studio-based instruction. In an HCI course, we have been exploring a studio-based learning activity called the prototype walkthrough, in which a student project team simulates its evolving user interface prototype while a student audience member acts as a test user. The audience is encouraged to ask questions and provide feedback. We have observed that prototype walkthroughs create excellent conditions for learning about user interface design. In order to better understand the educational value of the activity, we performed a content analysis of a video corpus of 16 prototype walkthroughs held in two HCI courses. We found that the prototype walkthrough discussions were dominated by relevant design issues. Moreover, mirroring the justification behavior of the expert instructor, students justified over 80 percent of their design statements and critiques, with nearly one-quarter of those justifications having a theoretical or empirical basis. Our findings suggest that prototype walkthroughs provide valuable opportunities for students to actively learn HCI design by participating in authentic practice, and provide insight into how such opportunities can be best promoted.

  • TML (Tue. 4pm) Strong Concepts: Intermediate-level Knowledge in Interaction Design Research
    K. Höök (KTH – Royal Institute of Technology, SE), J. Löwgren (School of Arts and Communication, SE)

    Design-oriented research can construct knowledge that is more abstracted than particular instances, without being at the scope of generalized theories. We propose an intermediate design knowledge form: strong concepts.

    Design-oriented research practices create opportunities to construct knowledge that is more abstracted than particular instances, without aspiring to the scope of generalized theories. We propose an intermediate design knowledge form that we name strong concepts, with the following properties: a strong concept is generative; carries a core design idea; cuts across particular use situations and even application domains; concerns interactive behaviour, not static appearance; is a design element, a part of an artefact, that at the same time speaks of a use practice and behaviour over time; and, finally, resides on an abstraction level above particular instances. We exemplify with two strong concepts, social navigation and seamfulness, and discuss how these fulfil criteria we might set for knowledge, such as being contestable, defensible and substantive. Our aim is to foster an academic culture of discursive construction of intermediate-level knowledge and of how it can be produced and assessed in design-oriented HCI research.

  • TYG (Wed. 4pm) “Spindex” (Speech Index) Enhances Menus on Touch Screen Devices with Tapping, Wheeling, and Flicking
    M. Jeon (Georgia Institute of Technology, USA), B. Walker (Georgia Institute of Technology, USA), A. Srivastava (Georgia Institute of Technology, USA)

    Advanced auditory cues (spindex) enhance multimodal and auditory menus on a smartphone, making user inputs via tapping, wheeling, and flicking gestures more efficient, faster, and more enjoyable.

    Users interact with many electronic devices via menus, which may be auditory or visual. Auditory menus can either complement or replace visual menus. We investigated how advanced auditory cues enhance auditory menus on a smartphone with tapping, wheeling, and flicking input gestures. The study evaluated a spindex (speech index), in which audio cues inform users where they are in a menu; 122 undergraduates navigated through a menu of 150 songs. Study variables included auditory cue type (text-to-speech alone or TTS plus spindex), visual display mode (on or off), and input gesture (tapping, wheeling, or flicking). Target search time and subjective workload were lower with the spindex than without it for all input gestures, regardless of visual display mode. The spindex condition was also rated subjectively higher than plain speech. The effects of input method and display mode on navigation behaviors were analyzed with the two-stage navigation strategy model. Results are discussed in relation to attention theories and in terms of practical applications.

  • TDG (Mon. 4pm) Embodied Cognition And The Magical Future Of Interaction Design
    D. Kirsh (Univ. of California, San Diego, USA)

    Explores what world-class choreography and dance teach us about embodied cognition and creativity. Explains how bodies absorb tools and how bodies and things are used for thinking.

    The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: 1) interacting with tools changes the way we think and perceive – tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; 2) we think with our bodies, not just with our brains; 3) we know more by doing than by seeing – there are times when physically performing an activity is better than watching someone else perform it, even though our motor resonance system fires strongly while we observe another person; 4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context-aware, and telepresence systems.

  • TSJ (Mon. 4pm) Moving and Making Strange: An Embodied Approach to Movement-based Interaction Design
    L. Loke (The Univ. of Sydney, AU), T. Robertson (Univ. of Technology, Sydney, AU)

    We offer a methodology for the design and evaluation of movement-based interactions with technology, where the felt experience of moving is valued along with the perspectives of observer and machine.

    There is growing interest in designing for movement-based interactions with technology, now that various sensing technologies are available enabling a range of movement possibilities from gestural to whole-body interactions. We present a design methodology of Moving and Making Strange, an approach to movement-based interaction design that recognizes the central role of the body and movement in lived cognition. The methodology was developed through a series of empirical projects, each focusing on different conceptions of movement available within motion-sensing interactive, immersive spaces. The methodology offers designers a set of principles, perspectives, methods and tools for exploring and testing movement-related design concepts. It is innovative in including the perspective of the mover together with the traditional perspectives of the observer and the machine. Making strange is put forward as an important tactic for rethinking how to approach the design of movement-based interaction.

  • TLQ (Tue. 11am) Embedded Interaction: the accomplishment of actions in everyday and video-mediated environments
    P. Luff (King’s College, London, UK), M. Jirotka (Univ. of Oxford, UK), N. Yamashita (NTT Communication Science Laboratories, JP), H. Kuzuoka (Univ. of Tsukuba, JP), C. Heath (King’s College, London, UK), G. Eden (Univ. of Oxford, UK)

    This paper suggests how interactional studies of everyday interaction can both help shape the development of complex technologies for collaboration and also be informed by experiments with prototype systems.

    A concern with ‘embodied action’ has informed both the analysis of everyday action through technologies and also suggested ways of designing innovative systems. In this paper, we consider how these two programmes, the analysis of everyday embodied interaction on the one hand, and the analysis of technically-mediated embodied interaction on the other, are interlinked. We draw on studies of everyday interaction to reveal how embodied conduct is embedded in the environment. We then consider a collaborative technology that attempts to provide a coherent way of presenting life-sized embodiments of participants alongside particular features of the environment. These analyses suggest that conceptions of embodied action should take account of the interactional accomplishment of activities and how these are embedded in the material environment.

  • TZC (Mon. 4pm) On the Naturalness of Touchless: Putting the “Interaction” Back into NUI
    K. O’Hara (Microsoft Research, UK), R. Harper (Microsoft Research, UK), H. Mentis (Harvard Medical School, USA), A. Sellen (Microsoft Research, UK), A. Taylor (Microsoft Research, UK)

    Using examples of gestural interaction from surgery and urban screen gaming, we discuss the notion of naturalness in NUI narratives as an occasioned property of interaction rather than an inherent property of an interface.

    After many decades of research, the ability to interact with technology through touchless gestures and sensed body movements is becoming an everyday reality. These technologies form part of a broader suite of innovations that have come to be characterised as Natural User Interfaces (NUIs). While the narrative of NUI serves a number of useful purposes, it also raises some concerns that make it increasingly important to examine the conceptual work being performed by this moniker and how it frames approaches to design and engineering in particular ways. Often the arguments made situate the locus of naturalness in the gestural interface alone, treating the issue as a representational concern. But in doing this, attention is drawn away from the in situ and embodied aspects of interaction with such technologies. Drawing on examples of gestural interaction in the diverse settings of surgery and urban screen gaming, we consider naturalness as an occasioned property of action that social actors actively manage and produce together in situ through their interaction with each other and the material world.

  • TAU (Tue. 11am) The Impact of Interface Affordances on Human Ideation, Problem Solving and Inferential Reasoning
    S. Oviatt (Incaa Designs, USA), A. Cohen (Duke Univ., USA), A. Miller (Stanford Univ., USA), K. Hodge (Stanford Univ., USA), A. Mann (Massachusetts Institute of Technology, USA)

    Computer input capabilities have communications affordances that can substantially facilitate people’s ability to produce ideas, solve problems correctly, and make accurate inferences about information, with magnitudes of improvement between 9% and 38%.

    Two studies investigated how computer interface affordances influence basic cognition, including ideational fluency, problem solving, and inferential reasoning. In one study comparing interfaces with different input capabilities, students expressed 56% more nonlinguistic representations (diagrams, symbols, numbers) when using pen interfaces. A linear regression confirmed that nonlinguistic communication directly mediated a substantial increase (38.5%) in students’ ability to produce appropriate science ideas. In contrast, students expressed 41% more linguistic content when using a keyboard-based interface, which mediated a drop in science ideation. A follow-up study pursued the question of how interfaces that prime nonlinguistic communication so effectively facilitate cognition. This study examined the relation between students’ expression of nonlinguistic representations and their inference accuracy when using analogous digital and non-digital pen tools. Perhaps surprisingly, the digital pen interface stimulated construction of more diagrams, more correct Venn diagrams, and more accurate domain inferences. Students’ construction of multiple diagrams to represent a problem also directly suppressed overgeneralization errors, the most common inference failure. These research results reveal that computer interfaces have communications affordances, which elicit communication patterns that can substantially stimulate or impede basic cognition. Implications are discussed for designing new digital tools for thinking, with an emphasis on nonlinguistic and especially spatial representations, which are most poorly supported by current keyboard-based interfaces.

  • TGNWed. 9amEnabling the Blind to See Gestures
    F. Quek (Virginia Polytechnic Institute and State Univ., USA), F. Oliveira
    F. Quek (Virginia Polytechnic Institute and State Univ., USA)F. Oliveira (Ceará State Univ., BR)

    Our contributions are on the understanding of how gestural interaction may be designed as part of a multimodal system and on applying Dourish’s embodiment theory to solving practical issues. Human embodied discourse involves gesture and speech. Mathematics instruction involves communication using speech and graphical presentation. Vision gives sighted students ‘embodiment awareness’ to keep communication situated between visual material and speech. For blind students, haptic fingertip reading of embossed material can replace visual material. We developed a Haptic Deictic System (HDS) to furnish blind students with awareness of the instructor’s deictic gestures. Our studies show that the HDS can support learning in inclusive classrooms comprising both blind and sighted students. We developed analysis methodologies to ascertain how the HDS supports embodied discourse. The HDS was advantageous to all parties, increasing learning opportunities, mutual understanding, and engagement.

  • TVQTue. 2pmTeamwork Errors in Trauma Resuscitation
    A. Sarcevic (Drexel Univ., USA), I. Marsic, R. Burd
    A. Sarcevic (Drexel Univ., USA)I. Marsic (Rutgers Univ., USA)R. Burd (Children’s National Medical Center, USA)

    Proposes a model of teamwork and a classification of team errors based on an observational study of emergency medical teams. Identifies key information structures for computerized support of team cognition. Human errors in trauma resuscitation can have cascading effects leading to poor patient outcomes. To determine the nature of teamwork errors, we conducted an observational study in a trauma center over a two-year period. While eventually successful in treating the patients, trauma teams had problems tracking and integrating information in a longitudinal trajectory, which resulted in inefficiencies and near-miss errors. As an initial step in system design to support trauma teams, we proposed a model of teamwork and a novel classification of team errors. Four types of team errors emerged from our analysis: communication errors, vigilance errors, interpretation errors, and management errors. Based on these findings, we identified key information structures to support team cognition and decision making. We believe that displaying these information structures will support distributed cognition of trauma teams. Our findings have broader applicability to other collaborative and dynamic work settings that are prone to human error.

  • TKUThu. 2pmExoBuilding: Physiologically Driven Adaptive Architecture
    H. Schnädelbach (The Univ. of Nottingham, UK), A. Irune, D. Kirk, K. Glover, P. Brundell
    H. Schnädelbach (The Univ. of Nottingham, UK)A. Irune (The Univ. of Nottingham, UK)D. Kirk (Newcastle Univ., UK)K. Glover (The Univ. of Nottingham, UK)P. Brundell (The Univ. of Nottingham, UK)

    The study of ExoBuilding demonstrates how this prototypical building, exposing respiration and heartbeat, changes the respiratory behaviour of its inhabitants and how it affects their state of relaxation. Our surroundings are becoming infused with sensors measuring a variety of data streams about the environment, people and objects. Such data can be used to make the spaces that we inhabit responsive and interactive. Personal data in its different forms are one important data stream that such spaces are designed to respond to. In turn, one stream of personal data currently attracting high levels of interest in the HCI community is physiological data (e.g., heart rate, electrodermal activity), but this has seen little consideration in building architecture or the design of responsive environments. In this context, we developed a prototype mapping a single occupant’s respiration to its size and form, while it also sonifies their heartbeat. The result is a breathing building prototype, formative trials of which suggested that it triggers behavioral and physiological adaptations in inhabitants without giving them instructions, and that it is perceived as a relaxing experience. In this paper, we present and discuss the results of a controlled study of this prototype, comparing three conditions: the static prototype; regular movement and sonification; and a biofeedback condition, where the occupant’s physiological data directly drives the prototype and presents this data back to them. The study confirmed that the biofeedback condition does indeed trigger behavioral changes and changes in participants’ physiology, resulting in lower respiration rates as well as higher respiration amplitudes, respiration to heart rate coherence and lower frequency heart rate variability. Self-reported state of relaxation is more dependent on inhabitant preferences, their knowledge of physiological data and whether they found space to ‘let go’. We conclude with a discussion of ExoBuilding as an immersive but also sharable biofeedback training interface and the wider potential of this approach to making buildings adapt to their inhabitants.

  • TBQTue. 11amTwo-Part Models Capture the Impact of Gain on Pointing Performance
    G. Shoemaker (Univ. of British Columbia, CA), T. Tsukitani, Y. Kitamura, K. Booth
    G. Shoemaker (Univ. of British Columbia, CA)T. Tsukitani (Osaka Univ., JP)Y. Kitamura (Tohoku Univ., JP)K. Booth (Univ. of British Columbia, CA)

    The paper provides empirical evidence of limitations in Fitts’s Law and demonstrates how Welford’s two-part formulation provides a model that naturally takes into account control-display gain. We establish that two-part models of pointing performance (Welford’s model) describe pointing on a computer display significantly better than traditional one-part models (Fitts’s Law). We explore the space of pointing models and describe how independent contributions of movement amplitude and target width to pointing time can be captured in a parameter k. Through a reanalysis of data from related work we demonstrate that one-part formulations are fragile in describing pointing performance, and that this fragility is present for various devices and techniques. We show that this same data can be significantly better described using two-part models. Finally, we demonstrate through further analysis of previous work and new experimental data that k increases linearly with gain. Our primary contribution is the demonstration that Fitts’s Law is more limited in applicability than previously appreciated, and that more robust models, such as Welford’s formulation, should be adopted in many cases of practical interest.
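    The one-part versus two-part distinction is easy to see in a least-squares fit. The sketch below uses synthetic data with illustrative parameter values (not the paper's experimental data) and assumes the common parameterization MT = a + b1·log2(A) − b2·log2(W) with k = b2/b1; when k ≠ 1, the two-part model fits markedly better than the one-part Fitts form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pointing conditions: movement amplitudes A and target widths W.
A = rng.uniform(100, 1000, 200)
W = rng.uniform(10, 100, 200)

# Generate movement times from a two-part (Welford-style) model:
# MT = a + b1*log2(A) - b2*log2(W), with k = b2/b1 (illustrative values).
a_true, b1_true, k_true = 0.2, 0.15, 1.3
MT = a_true + b1_true * np.log2(A) - k_true * b1_true * np.log2(W) \
     + rng.normal(0, 0.01, 200)

# One-part Fitts fit: MT = a + b*log2(A/W + 1)
X1 = np.column_stack([np.ones_like(A), np.log2(A / W + 1)])
coef1, res1, *_ = np.linalg.lstsq(X1, MT, rcond=None)

# Two-part Welford fit: MT = a + b1*log2(A) - b2*log2(W)
X2 = np.column_stack([np.ones_like(A), np.log2(A), -np.log2(W)])
coef2, res2, *_ = np.linalg.lstsq(X2, MT, rcond=None)

k_est = coef2[2] / coef2[1]  # k = b2 / b1
print(f"Fitts residual SS:   {res1[0]:.4f}")
print(f"Welford residual SS: {res2[0]:.4f}")
print(f"recovered k: {k_est:.2f} (true k = {k_true})")
```

    The one-part form collapses amplitude and width into a single index of difficulty, so it cannot absorb their unequal contributions; the two-part fit recovers k and leaves a much smaller residual.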

  • TTEMon. 4pmInteraction Design for and with the Lived Body: Some Implications of Merleau-Ponty’s Phenomenology
    D. Svanaes (Norwegian Univ. of Science and Technology, NO)
    D. Svanaes (Norwegian Univ. of Science and Technology, NO)

    The body as experienced by the user has to a large extent been absent in HCI. The paper exemplifies how the field can benefit from Merleau-Ponty’s phenomenology of the body. In 2001, Paul Dourish proposed the term Embodied Interaction to describe a new paradigm for Interaction Design that focuses on the physical, bodily and social aspects of our interaction with digital technology. Dourish used Merleau-Ponty’s phenomenology of perception as the theoretical basis for his discussion of the bodily nature of embodied interaction. This paper extends Dourish’s work to introduce the human-computer interaction community to ideas related to Merleau-Ponty’s concept of the lived body. It also provides a detailed analysis of two related topics: (1) Embodied Perception: the active and embodied nature of perception, including the body’s ability to extend its sensory apparatus through digital technology; and (2) Kinaesthetic Creativity: the body’s ability to relate in a direct and creative fashion with the “feel” dimension of interactive products during the design process.

  • TJZThu. 2pmCo-Narrating a Conflict: An Interactive Tabletop to Facilitate Attitudinal Shifts
    M. Zancanaro (FBK-irst, IT), O. Stock, Z. Eisikovits, C. Koren, P. Weiss
    M. Zancanaro (FBK-irst, IT)O. Stock (FBK-IRST, IT)Z. Eisikovits (Univ. of Haifa, IL)C. Koren (Univ. of Haifa, IL)P. Weiss (Univ. of Haifa, IL)

    A tabletop designed to support reconciliation of a conflict allows escalation and de-escalation during shared narration. An experiment with Israeli-Jewish and Palestinian-Arab participants demonstrated a shift of attitude toward the other. A multi-user tabletop interface was designed to support reconciliation of a conflict, aimed at shifting hostile attitudes and achieving a greater understanding of another viewpoint. The interface provided a setting for face-to-face shared narration and support for the management of disagreements. The interface allows escalation and de-escalation of the conflict emerging in the shared narration and requires that participants perform joint actions when a contribution to the story is to be removed from the overall narration. A between-subjects experiment compared the tabletop interface and a desktop multimedia interface with mixed pairs (male Israeli-Jewish and Palestinian-Arab youth). The results demonstrated that the experience with the tabletop interface appears to be motivating and, most importantly, produces at least a short-term shift of attitude toward the other.

  • TPCWed. 9amUser-Experience from an Inference Perspective
    P. van Schaik (Teesside Univ., UK), M. Hassenzahl, J. Ling
    P. van Schaik (Teesside Univ., UK)M. Hassenzahl (Folkwang Univ. of the Arts, DE)J. Ling (Univ. of Sunderland, UK)

    The research provides consistent evidence for how people infer specific user-experience attributes of an interactive product from other attributes or broader evaluations, such as beauty or an overall evaluation. In many situations, people make judgments on the basis of incomplete information, inferring unavailable attributes from available ones. These inference processes may well also operate when judgments about a product’s user-experience are made. To examine this, an inference model of user-experience, based on Hassenzahl and Monk [2010], was explored in three studies using Web sites. All studies supported the model’s predictions and its stability, with hands-on experience, different products, and different usage modes (action mode versus goal mode). Within a unified framework of judgment as inference [Kruglanski et al. 2007], our approach allows for the integration of the effects of a wide range of information sources on judgments of user-experience.