
Interactivity


Enter a 3-letter code in the search box of the CHI 2013 mobile app to go to the corresponding session or presentation. When clickable, a 3-letter code links to the Video Previews web site.

    Research

    • RMS SimpleTones: A Collaborative Sound Controller System for Non-Musicians
      F. Zamorano (Parsons The New School for Design, USA)

      SimpleTones is an interactive sound system that enables non-musicians to engage in collaborative acts of music making, without the need for previous musical training. The aim of SimpleTones is to make collaborative musical experiences more approachable and accessible for a wide range of users with different levels of musical expertise, allowing them to actively participate in the social aspects of collective musical improvisation, something usually confined to trained performers. Players can participate with ease and in real time by operating physical sound controllers in tandem. By using play as a catalyst and freeing novices from the requirement of previous musical experience, participants are able to focus on the collaborative aspects of performance, such as synchronizing movements, discovering the system's functionality together and making collective decisions.

    • RJE Storeys – Designing Collaborative Storytelling Interfaces
      J. Cheng (Stanford Univ., USA), L. Kang (Cornell Univ., USA), D. Cosley (Cornell Univ., USA)

      Storeys is a graph-based visualization tool designed for collaborative story writing that represents stories in a branching tree of individual sentences. The fine-grained, branching structure supports collaboration by reducing contribution cost, conflict over text ownership, and production blocking. Storeys is also designed to be ludic and playful; in initial evaluations it was seen as a fun tool for creativity that balanced the exploration and elaboration of ideas.

    • RPJ Musical Embrace: Facilitating Engaging Play Experiences through Social Awkwardness
      A. Huggard (RMIT Univ., AU), A. De Mel (RMIT Univ., AU), J. Garner (RMIT Univ., AU), C. Toprak (RMIT Univ., AU), A. Chatham (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

      Socially awkward experiences are often looked upon as something to be avoided. However, examples from the traditional games domain suggest that social awkwardness can facilitate engaging experiences, yet so far there has been little research into social awkwardness and digital games. In acknowledgement of this, we present Musical Embrace, a digital game that promotes close physical contact between two strangers, who use a novel pillow-like controller to navigate a virtual soundscape, challenging current ideologies associated with digital games, play and interactivity. From our observations of demonstrating Musical Embrace at a number of events, we have derived a set of strategies intended to engage players by "facilitating" social awkwardness, allowing players to "transform" it, and letting players take "control" of it. With our work we hope to inspire game designers to consider the potential of social awkwardness and to guide them in using it for engaging playful experiences.

    • RJS Gaze-supported Foot Interaction in Zoomable Information Spaces
      F. Göbel (Technische Univ. Dresden, DE), K. Klamka (Technische Univ. Dresden, DE), A. Siegel (Technische Univ. Dresden, DE), S. Vogt (Technische Univ. Dresden, DE), S. Stellmach (Technische Univ. Dresden, DE), R. Dachselt (Technische Univ. Dresden, DE)

      When working with zoomable information spaces, complex tasks can be divided into primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is highly promising for supporting manual interactions, for example with mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction for pan and zoom in a computer desktop setup. While our eye gaze is ideal to indicate a user's current point of interest and where to zoom in, foot interaction is well suited for parallel input controls, for example, to specify the zooming speed. Our investigation focuses on varied foot input devices differing in their degrees of freedom (e.g., one- and two-directional foot pedals) that can be seamlessly combined with gaze input. The demo illustrates these alternatives by navigating Google Earth.

    • RHJ HapSeat: A Novel Approach to Simulate Motion in a Consumer Environment
      F. Danieau (Technicolor, FR), J. Fleureau (Technicolor, FR), P. Guillotel (Technicolor, FR), N. Mollet (Technicolor, FR), M. Christie (IRISA, FR), A. Lécuyer (Inria, FR)

      The HapSeat is a novel approach for simulating motion sensations in a consumer environment. Multiple force-feedback devices, such as mobile armrests or headrests, are arranged around a seat so that they can apply forces to the seated user's body, generating a 6DoF sensation of motion while the user experiences passive navigation. Several video sequences have been created to highlight the capabilities of the HapSeat, and we invite CHI attendees to experience these four videos enhanced by haptic motion effects.

    • RJL Touchbugs: Actuated Tangibles on Multi-Touch Tables
      D. Nowacka (Newcastle Univ., UK), K. Ladha (Newcastle Univ., UK), N. Hammerla (Newcastle Univ., UK), D. Jackson (Newcastle Univ., UK), C. Ladha (Newcastle Univ., UK), P. Olivier (Newcastle Univ., UK)

      In this work we present a novel approach to graspable interfaces using Touchbugs, actuated physical objects for interacting with surface computing applications. Touchbugs are active tangibles that are able to move across surfaces by employing vibrating motors and can communicate with camera-based multi-touch surfaces using infrared LEDs. Their interaction space consists of movement on an interactive surface or a conventional table and of direct gestural interaction. Touchbugs' embedded inertial sensors and computational capabilities open a new interaction space by providing autonomous capabilities for tangibles that allow goal-directed behavior.

    • RAU GravitySpace: Tracking Users and Their Poses in a Smart Room Using a Pressure-Sensing Floor
      A. Bränzel (Hasso Plattner Institute, DE), C. Holz (Hasso Plattner Institute, DE), D. Hoffmann (Hasso Plattner Institute, DE), D. Schmidt (Hasso Plattner Institute, DE), M. Knaust (Hasso Plattner Institute, DE), P. Lühne (Hasso Plattner Institute, DE), R. Meusel (Hasso Plattner Institute, DE), S. Richter (Hasso Plattner Institute, DE), P. Baudisch (Hasso Plattner Institute, DE)

      We explore how to track people and furniture based on a high-resolution pressure-sensitive floor. Gravity pushes people and objects against the floor, causing them to leave imprints of pressure distributions across the surface. While the sensor is limited to sensing direct contact with the surface, we can sometimes conclude what takes place above the surface, such as users' poses or collisions with virtual objects. We demonstrate how to extend the range of this approach by sensing through passive furniture that propagates pressure to the floor. To explore our approach, we have created an 8 m² back-projected floor prototype, termed GravitySpace, a set of passive touch-sensitive furniture, as well as algorithms for identifying users, furniture, and poses. In the demo, virtual 3D avatars visualize in real time what users are doing on the floor. Pressure-based sensing on the floor offers four potential benefits over camera-based solutions: (1) it provides consistent coverage of rooms wall-to-wall, (2) it is less susceptible to occlusion between users, (3) it allows for the use of simpler recognition algorithms, and (4) it intrudes less on users' privacy.
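
      The first processing stage implied by this description, isolating contiguous pressure imprints in the floor image, can be sketched in a few lines. This is a generic illustration under assumed inputs (a 2-D array of pressure readings and an arbitrary threshold), not the authors' pipeline; identifying users, poses and furniture requires the more elaborate classifiers described above.

        import numpy as np
        from scipy import ndimage

        def find_imprints(pressure, threshold=0.2):
            """Return (row, col, total_force) for each contiguous imprint
            in a 2-D array of per-texel pressure readings."""
            mask = pressure > threshold              # texels in contact
            labels, n = ndimage.label(mask)          # connected components
            imprints = []
            for i in range(1, n + 1):
                r, c = ndimage.center_of_mass(pressure, labels, i)
                imprints.append((r, c, pressure[labels == i].sum()))
            return imprints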

    • RLC Demonstrating PIXEE: Pictures, Interaction and Emotional Expression
      M. Morris (Intel Corporation, USA), C. Marshall (Intel Corporation, USA), M. Calix (Mira Calix, UK), M. Al Haj (Centre de Visio per Computador, ES), J. MacDougall (Univ. of Victoria, CA), D. Carmean (Intel, USA)

      An interactive system, PIXEE, was developed to promote greater emotional expression in image-based social media. An interdisciplinary team developed this system and has deployed it as a cultural probe around the world to explore ways that technology can foster emotional connectedness. In this system, images that participants share on social media (tagged #CHI2013 at the conference) are projected onto a large interactive display. A multimodal interface displays the sentiment analysis of image captions and invites viewers to adjust this classification in order to express their emotional response to the images. Viewers can adjust the emotional classification, thereby changing the colors and sound associated with a picture, and compose musical scores by touching a series of images. CHI participants will be able to share their own content and their emotional reactions to other images throughout the conference. If CHI attendees share feedback about presentations through this system, an affective map of the conference may emerge.

    • RMZ Bubble Popper: Body Contact in Digital Games
      C. Toprak (RMIT Univ., AU), J. Platt (RMIT Univ., AU), H. Ho (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

      Exertion games, digital games involving physical effort, are becoming more popular. Although some exertion games support social experiences, they rarely consider or support body contact. We believe overlooking body contact as part of social play experiences limits opportunities to design engaging exertion games. To explore this opportunity, we present Bubble Popper, a two-player exertion game that considers and facilitates body contact: players push, block and compete to pop as many projected bubbles as they can within 60 seconds. Bubble Popper, which uses very simple technology, also demonstrates that considering and facilitating body contact can be achieved without the need to sense body contact. Through reflecting on our design and analyzing observations of play, we articulate what impact the layout of physical space in relation to digital game elements, and the physical disparity between input and digital display, can have on body contact. Our results aid game designers in creating engaging exertion games by guiding them when considering body contact, ultimately helping players benefit more from such games.

    • RJZ Gravity Well: Underwater Play
      S. Pell (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

      More and more technology supports utilitarian interactions in altered gravity conditions, for example underwater and during zero-G flights. Extending this, we are interested in digital play in these conditions, and in particular see an opportunity to explore underwater bodily games. We present Gravity Well, an interactive shallow-water system that supports bodily play through water-movement interactions, engaging human-aquatic movement through aquabatics and smart underwater robotic "Exploration Fish". Through designing the system and combining aquabatic principles with exertion game design strategies, we identified a set of design tactics for underwater play based on the relationship between the afforded type, and level, of bodily exertion relative to pressure change and narcosis. With our work, we aim to inspire designers to utilize the unique characteristics of bodily interactions underwater and guide them in developing digital play in altered gravity domains.

    • RNG 4 Design Themes for Skateboarding
      S. Pijnappel (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

      Interactive technology can support exertion activities, and many examples focus on improving athletic performance. We see an opportunity for technology to also support extreme sports such as skateboarding, which often focus primarily on the experience of doing tricks rather than on athletic performance. However, there is little knowledge on how to design for such experiences. In response, we designed 12 basic skateboarding prototypes inspired by skateboarding theory. Using an autoethnographical approach, we skated with each of these and reflected on our experiences in order to derive four design themes: the location of feedback in relation to the skater's body, the timing of feedback in relation to peaks in emotions after attempts, the aspects of the trick emphasized by feedback, and the aesthetic fittingness of feedback. We hope our work will guide designers of interactive systems for skateboarding and extreme sports in general, and will thereby further our understanding of how to design for the active human body.

    • RSC The Music Room
      F. Morreale (Univ. of Trento, IT), R. Masu (Univ. of Trento, IT), A. De Angeli (Univ. of Trento, IT), P. Rota (Univ. of Trento, IT)

      The Music Room is an interactive installation that enables everybody to experience music composition: couples compose original music by interacting with each other in the room. The music is generated by Robin, an automatic composition system, according to the relative distance between the users and the speed of their movements: proximity maps to the pleasantness of the music, while speed maps to its intensity. The Music Room was exhibited during the EU Researchers' Night in Trento, where it met with strong interest from visitors.

    • RDG Permulin: Personal In- and Output on Interactive Surfaces
      R. Lissermann (Technische Univ. Darmstadt, DE), J. Huber (Technische Univ. Darmstadt, DE), J. Steimle (Massachusetts Institute of Technology, USA), M. Mühlhäuser (Technische Univ. Darmstadt, DE)

      Interactive tables are well suited for co-located collaboration. Most prior research assumed users to share the same overall display output; a key challenge was the appropriate partitioning of screen real estate, assembling the right information "at the users' fingertips" through simultaneous input. A different approach is followed in recent multi-view display environments: they offer personal output for each team member, yet risk dissolving the team due to the lack of a common visual focus. Our approach combines both lines of thought, guided by the question: "What if the visible output and simultaneous input were partly shared and partly private?" We present Permulin, a concrete implementation of this idea, based on a set of novel interaction concepts that support fluid transitions between individual and group activities as well as the coordination of group activities.

    • RQE Sweat-Atoms: Turning Physical Exercise into Physical Objects
      R. Khot (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

      In this paper, we introduce the novel idea of crafting a physical object in tandem with physical exercise, using heart rate patterns. Our aim is to provide a new way of visualizing exercise intensity. We present Sweat-Atoms, a 3D modeling and printing system that generates abstract 3D designs from the heart rate patterns of individuals engaged in a physical activity. The crafted physical objects can act as souvenirs and be a testimony to the human effort invested in the physical activity. We believe the creative experience of crafting will help change the monotonous nature of physical exercise.

    • RDU LaserOrigami: Laser-Cutting 3D Objects
      S. Mueller (Hasso Plattner Institute, DE), B. Kruck (Hasso Plattner Institute, DE), P. Baudisch (Hasso Plattner Institute, DE)

      We present LaserOrigami, a rapid prototyping system that produces 3D objects using a laser cutter. LaserOrigami is substantially faster than traditional 3D fabrication techniques such as 3D printing, and unlike traditional laser cutting the resulting 3D objects require no manual assembly. The key idea behind LaserOrigami is that it achieves three-dimensionality by folding and stretching the workpiece, rather than by placing joints, thereby eliminating the need for manual assembly. LaserOrigami achieves this by heating up selected regions of the workpiece until they become compliant and bend down under the force of gravity. LaserOrigami administers the heat by defocusing the laser, which distributes the laser's power across a larger surface. LaserOrigami implements cutting and bending in a single integrated process by automatically moving the cutting table up and down: when users take out the workpiece, it is already fully assembled. We present the three main design elements of LaserOrigami: the bend, the suspender, and the stretch, and demonstrate how to use them to fabricate a range of physical objects. Finally, we demonstrate an interactive fabrication version of LaserOrigami, a process in which user interaction and fabrication alternate step by step.

    • REC constructable: Interactive Construction of Functional Mechanical Devices
      S. Mueller (Hasso Plattner Institute, DE), P. Lopes (Hasso Plattner Institute, DE), K. Kaefer (Hasso Plattner Institute, DE), B. Kruck (Hasso Plattner Institute, DE), P. Baudisch (Hasso Plattner Institute, DE)

      constructable is an interactive drafting table based on a laser cutter that produces precise physical output in every editing step. Users interact by drafting directly on the workpiece using a hand-held laser pointer. The system tracks the pointer, beautifies its path, and implements its effect by cutting the workpiece using a fast high-powered laser cutter. constructable achieves precision through tool-specific constraints, user-defined sketch lines, and by using the laser cutter itself for all visual feedback, rather than using a screen or projection. As part of this interactive demonstration, attendees will use constructable to create simple but functional devices, including a booklet and a simple gearbox.

    • RGN Muscle-Propelled Force Feedback: Bringing Force Feedback to Mobile Devices using Electrical Stimulation
      P. Lopes (Hasso Plattner Institute, DE), P. Baudisch (Hasso Plattner Institute, DE)

      Unlike many other areas in computer science and HCI, force feedback devices resist miniaturization because they require physical motors. We propose mobile force feedback devices based on actuating the user's muscles using electrical stimulation. Since this allows us to eliminate motors, our approach results in devices that are substantially smaller and lighter than traditional motor-based devices. We present a simple prototype that we mount to the back of a mobile phone. It actuates users' forearm muscles via four electrodes, causing the muscles to contract involuntarily, so that users tilt the device sideways. As users resist this motion using their other arm, they perceive force feedback. We demonstrate the interaction with an interactive video game in which users try to steer an airplane through winds rendered using force feedback.

    • RGU AutoGami: A Low-cost Rapid Prototyping Toolkit for Automated Movable Paper Craft
      K. Zhu (National Univ. of Singapore, SG), S. Zhao (National Univ. of Singapore, SG), H. Nii (IIJ Innovation Institute, JP)

      AutoGami is a toolkit for designing automated movable paper craft using the technology of selective inductive power transmission. AutoGami has hardware and software components that allow users to design and implement automated movable paper craft without any prerequisite knowledge of electronics; it also supports rapid prototyping. AutoGami's software interface allows users to plan a variety of paper craft movements: its use of selective inductive power transmission allows users to control aspects of movement such as duration, amplitude, and sequence without concerning themselves with technical implementation. AutoGami made consistently strong showings in design workshops, confirming its viability in supporting engagement and creativity as well as its usability in storytelling through paper craft.

    • RHC From Codes to Patterns: Designing Interactive Decoration for Tableware
      R. Meese (The Univ. of Nottingham, UK), S. Ali (The Univ. of Nottingham, UK), E. Thorne (Central Saint Martins College of Art and Design, UK), S. Benford (The Univ. of Nottingham, UK), A. Quinn (Central Saint Martins College of Art and Design, UK), R. Mortier (The Univ. of Nottingham, UK), B. Koleva (The Univ. of Nottingham, UK), T. Pridmore (The Univ. of Nottingham, UK), S. Baurley (Brunel Univ., UK)

      We explore the idea of making aesthetic decorative patterns that contain multiple visual codes hidden within them. We chart an iterative collaboration with ceramic designers and a restaurant to refine a recognition technology to work reliably on ceramics, produce a pattern book of designs, and prototype sets of tableware and a mobile app to enhance a dining experience. We document how the designers learned to work with and creatively exploit the technology, enriching their patterns with embellishments and backgrounds and developing strategies for embedding codes into complex designs. We discuss the potential and challenges of interacting with such patterns. We argue for a transition from designing 'codes to patterns' that reflects the skills of designers alongside the development of new technologies.

    • RLX HeartLink: Open Broadcast of Live Biometric Data to Social Networks
      F. Curmi (Lancaster Univ., UK), J. Whittle (Lancaster Univ., UK), M. Ferrario (Lancaster Univ., UK), J. Southern (Lancaster Univ., UK)

      A number of studies have looked into the use of real-time biometric data to improve one's own physiological performance and wellbeing. However, there is limited research into the effects that sharing biometric data with others could have on one's social network. Following a period of research on existing mobile applications and prototype testing, we developed HeartLink, a system that collects real-time personal biometric data such as heart rate and broadcasts this data online, so that remote viewers can cheer the wearer on in social support. We present insights gained on designing systems to broadcast real-time biometric data, and report emerging results from testing HeartLink in a pilot study and a user study conducted during sporting events. The results showed that sharing heart rate data does influence the relationship of the persons involved, and that the degree of influence seems related to the tie strength prior to visualizing the data.

    • RBQ SideWays: A Gaze Interface for Spontaneous Interaction with Situated Displays
      Y. Zhang (Lancaster Univ., UK), A. Bulling (Max Planck Institute for Informatics, DE), H. Gellersen (Lancaster Univ., UK)

      Eye gaze is compelling for interaction with situated displays, as we naturally use our eyes to engage with them. In this work we present SideWays, a novel person-independent eye gaze interface that supports spontaneous interaction with displays: users can just walk up to a display and immediately interact using their eyes, without any prior user calibration or training. Requiring only a single off-the-shelf camera and lightweight image processing, SideWays robustly detects whether users attend to the centre of the display or cast glances to the left or right. The system supports an interaction model in which attention to the central display is the default state, while "sidelong glances" trigger input or actions. The robustness of the system and usability of the interaction model are validated in a study with 14 participants. Analysis of the participants' strategies in performing different tasks provides insights on gaze control strategies for the design of SideWays applications.
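
      As a rough illustration of the kind of lightweight, calibration-free classification described above (the listing does not spell out the actual SideWays image-processing pipeline, so this is an assumed simplification), the pupil's horizontal position within the detected eye region can be thresholded into three zones:

        def gaze_zone(pupil_x, eye_left_x, eye_right_x, margin=0.15):
            """Classify gaze as 'left', 'centre' or 'right' from the
            pupil's relative position between the two eye corners."""
            r = (pupil_x - eye_left_x) / float(eye_right_x - eye_left_x)
            if r < 0.5 - margin:
                return 'left'
            if r > 0.5 + margin:
                return 'right'
            return 'centre'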

    • RXL Colocated Surface Sound Interaction
      A. Freed (CNMAT UC Berkeley, USA), J. Rowland (Univ. of California, Berkeley, USA)

      We present three related schemes for colocating sensing and sound actuation on flat surfaces. One uses conductive paper to create a musical instrument; another uses magnets mounted in gloves and printed conductors to form planar loudspeaker arrays. Finally, we show how conductive and resistive fabrics can be integrated with loudspeaker drivers.

    • RBC GaussBits: Magnetic Tangible Bits for Portable and Occlusion-Free Near-Surface Interactions
      R. Liang (National Taiwan Univ., TW), K. Cheng (National Taiwan Univ., TW), L. Chan (Academia Sinica, TW), C. Peng (National Taiwan Univ. of Science and Technology, TW), M. Chen (National Taiwan Univ., TW), R. Liang (National Taiwan Univ. of Science and Technology, TW), D. Yang (Academia Sinica, TW), B. Chen (National Taiwan Univ., TW)

      We present GaussBits, a system of passive magnetic tangibles that enables occlusion-free 3D tangible interactions in the near-surface space of portable displays. When a thin magnetic sensor grid is attached to the back of the display, the 3D position and partial 3D orientation of the GaussBits can be resolved by the proposed bi-polar magnetic field tracking technique. This portable platform can therefore enrich tangible interactions by extending the design space to the near-surface space. Since non-ferrous materials, such as the user's hand, do not occlude the magnetic field, interaction designers can freely incorporate a magnetic unit into an appropriately shaped non-ferrous object to exploit metaphors of real-world tasks, and users can freely manipulate the GaussBits by hand or with other non-ferrous tools without causing interference. The presented example applications and the feedback collected from an explorative workshop show that this new approach is widely applicable.

    • RVX The Space Between the Notes: Adding Expressive Pitch Control to the Piano Keyboard
      A. McPherson (Queen Mary, Univ. of London, UK), A. Gierakowski (Queen Mary, Univ. of London, UK), A. Stark (Queen Mary, Univ. of London, UK)

      The piano-style keyboard is among the most widely used and versatile of digital musical interfaces. However, it lacks the ability to alter the pitch of a note after it has been played, a limitation which prevents the performer from executing common expressive techniques including vibrato and pitch bending. We present a system for controlling pitch from the keyboard surface, using capacitive touch sensors to measure the locations of the player's fingers on the keys. The large community of trained pianists makes the keyboard a compelling target for augmentation, but it also poses a challenge: how can a musical interface be extended while making use of the existing techniques performers have spent thousands of hours learning? In the associated paper, user studies with conservatory pianists explore the constraints of traditional keyboard technique and evaluate the usability of the continuous pitch control system. This Interactivity exhibit presents the extended keyboard: attendees are invited to play vibrato and pitch bends simply by moving their fingers on the key surfaces.

    • RAG Art Mapping in Paris
      L. Carletti (Horizon Digital Economy Research – Univ. of Nottingham, UK), D. Price (Univ. of Nottingham, UK), R. Sinker (Tate, UK), G. Giannachi (The Univ. of Exeter, UK), D. McAuley (Univ. of Nottingham, UK), J. Stack (Tate, UK), K. Beaver (Tate, UK), J. Mundy (Tate, UK)

      Art Maps is a collaborative research project exploring the relation between artworks and the locations they depict, and aims to improve the quality of the artworks' geographic data through a crowdsourcing process, supported by a cloud-based crowdsourcing platform with web and mobile interfaces. In this work, we describe a proposed technology demonstrator for Art Maps. The demonstration entails two types of hands-on experiences for conference attendees: an in-CHI experience and an optional bespoke outdoor activity to experience Paris through Art Maps.

    • RAN ThorDMX: A Prototyping Toolkit for Interactive Stage Lighting Control
      T. Bartindale (Newcastle Univ., UK), P. Olivier (Newcastle Univ., UK)

      ThorDMX is a lightweight prototyping toolkit for rapid and easy design of new stage lighting controllers. The toolkit provides a framework, code samples and tutorials for quickly developing new controller interfaces using familiar prototyping tools and software. Aimed at prototyping interaction designs for stage lighting control, it facilitates the exploration of expressive, collaborative and flexible new interfaces.

    • RBJ Posture Training With Real-time Visual Feedback
      B. Taylor (Univ. of Saskatchewan, CA), M. Birk (Univ. of Saskatchewan, CA), R. Mandryk (Univ. of Saskatchewan, CA), Z. Ivkovic (Univ. of Saskatchewan, CA)

      Our posture affects us in a number of surprising and unexpected ways, influencing how we handle stress and how confident we feel, yet it is difficult for people to maintain good posture. We present a non-invasive posture training system using an Xbox Kinect sensor that provides real-time visual feedback at two levels of fidelity. Attendees can improve their standing posture by visiting this interactive demonstration of the two feedback systems.

    • RBX KINECTWheels: Wheelchair-Accessible Motion-Based Game Interaction
      K. Gerling (Univ. of Saskatchewan, CA), M. Kalyn (Univ. of Saskatchewan, CA), R. Mandryk (Univ. of Saskatchewan, CA)

      The increasing popularity of full-body motion-based video games creates new challenges for game accessibility research. Many games strongly focus on able-bodied persons and require players to move around freely. To address this problem, we introduce KINECTWheels, a toolkit that facilitates the integration of wheelchair-based game input. Besides upper-body gestures, the system tracks wheelchair movements such as turning to the sides and moving back and forth. Our library can help game designers integrate wheelchair input at the development stage, and it can be configured to trigger keystroke events to make off-the-shelf PC games wheelchair-accessible.
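
      To make the keystroke idea concrete, here is a minimal sketch of how tracked wheelchair motion might be translated into arrow-key events for an unmodified game. The function, thresholds and units are illustrative assumptions, not the KINECTWheels API:

        def movement_to_key(dx, dz):
            """Map lateral (dx) and forward/backward (dz) wheelchair
            displacement in metres to an arrow key, or None."""
            if abs(dx) > abs(dz) and abs(dx) > 0.05:   # 5 cm dead zone
                return 'LEFT' if dx < 0 else 'RIGHT'
            if abs(dz) > 0.05:
                return 'UP' if dz > 0 else 'DOWN'
            return None                                # ignore jitter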

    • RTL PianoText: Transferring Musical Expertise to Text Entry
      A. Feit (Max Planck Institute for Informatics, DE), A. Oulasvirta (Max Planck Institute for Informatics, DE)

      We present PianoText, a text entry method based on a piano keyboard, with an optimized mapping from notes and chords of music to letters of the English language. PianoText exemplifies the idea of transferring musical expertise to a text entry task by computationally searching for mappings that favor well-practiced, frequent motor patterns, considering their n-gram frequency distributions and respecting constraints affecting the playability of the music. In the Interactivity session, audience members with piano skills can transcribe text with PianoText, and a trained pianist will show that it allows him to generate text at speeds close to those of professional QWERTY typists.
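
      For illustration only: once such a mapping exists, decoding is a lookup from played notes to letters. The assignments below are invented for the example; the real PianoText mapping is the product of the optimization described above.

        # Hypothetical note-to-letter assignments (MIDI note numbers).
        NOTE_TO_LETTER = {60: 'e', 62: 't', 64: 'a', 65: 'o', 67: 'n', 69: 'i'}

        def decode(midi_notes):
            """Turn a sequence of played MIDI notes into text."""
            return ''.join(NOTE_TO_LETTER.get(n, '?') for n in midi_notes)

        print(decode([62, 64, 60]))  # -> 'tae'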

    • RGG BRAVO: A BRAin Virtual Operator For Education Exploiting Brain-Computer Interfaces
      M. Marchesi (Univ. of Bologna, IT), B. Riccò (Univ. of Bologna, IT)

      BRAVO is a new e-learning system for content visualization in a mobile context. It uses a commercial Brain-Computer Interface to detect the user's brain activity and customize the educational content presented, according to the user's reactions and preferences.

    • RCZ EducaTableware: Computer-Augmented Tableware to Enhance the Eating Experiences
      A. Kadomura (Ochanomizu Univ., JP), K. Tsukada (Japan Science and Technology Agency, JP), I. Siio (Ochanomizu Univ., JP)

      We propose "EducaTableware (Educate/Tableware)", a design for interactive tableware devices that makes eating more fun and improves daily eating habits through auditory feedback that encourages specific mealtime behaviors. We have developed two kinds of device: "EaTheremin", a fork-type device used for eating, and "TeaTheremin", a cup-type device used for drinking. These devices emit sounds when a user is consuming a food item. In this paper, we discuss the EducaTableware concept, describe the implementation of EaTheremin and TeaTheremin, and show their usage.

    • RCL The Augmented Video Wall: Multi-user AR Interaction With Public Displays
      M. Baldauf (FTW Telecommunications Research Center, Vienna, AT), P. Fröhlich (FTW Telecommunications Research Center, AT)

      The Augmented Video Wall is a compelling showcase application demonstrating a novel collocated interaction technique for public displays, beyond traditional competitive or collaborative multi-user scenarios. By utilizing augmented reality on personal mobile devices and applying animated video overlays accurately superimposed upon the public display, we create the illusion of literally private views of a shared public display. Besides this concurrent viewing mode, the demonstrator features a competitive mode and a concurrent mode enhanced with social features to highlight the characteristics of this novel display interaction technique. During a first preliminary study, the Augmented Video Wall attracted many visitors and created highly entertaining experiences for groups.

    • RXE TraInAb: A Solution Based on Tangible and Distributed User Interfaces to Improve Cognitive Disabilities
      E. de la Guía (Univ. of Castilla-La Mancha (UCLM), ES), M. Lozano (Univ. of Castilla-La Mancha (UCLM), ES), V. R. Penichet (Univ. of Castilla-La Mancha (UCLM), ES)

      Nowadays three percent of the world's population has some type of intellectual disability, which limits their lives and the lives of those around them. Technology can provide means to facilitate learning processes and stimulate cognitive capacities, and its rapid evolution has changed the way in which we can engage with interactive systems: Multi-Device Environments (MDEs) are fast becoming a part of everyday life. TraInAb (Training Intellectual Abilities) is a collaborative and interactive game based on MDE scenarios, aimed at achieving greater integration in society for intellectually disabled people by stimulating cognitive abilities such as memory, calculation and attention. The system provides a new style of interaction based on tangible and distributed user interfaces.

    • RXS Parallel Faceted Browsing
      S. Buschbeck (German Research Center for Artificial Intelligence (DFKI), DE), A. Jameson (German Research Center for Artificial Intelligence (DFKI), DE), A. Spirescu (German Research Center for Artificial Intelligence (DFKI), DE), T. Schneeberger (German Research Center for Artificial Intelligence (DFKI), DE), R. Troncy (EURECOM, FR), H. Khrouf (EURECOM, FR), O. Suominen (Aalto Univ., FI), E. Hyvonen (Aalto Univ., FI)

      The widely used paradigm of faceted browsing is limited by the fact that only one query and result set are displayed at a time. This demonstrator introduces an interaction design for parallel faceted browsing that makes it easy for a user to construct, and view the results of, multiple interrelated queries. This facilitates exploration, including collaborative exploration, even when users do not know exactly what they are looking for, and the paradigm offers general benefits for a variety of application areas.

    • RLQ Enhancing One-handed Website Operation on Touchscreen Mobile Phones
      K. Seipp (Goldsmiths, Univ. of London, UK), K. Devlin (Goldsmiths, Univ. of London, UK)

      Operating a website with one hand on a touchscreen mobile phone remains difficult despite advances in hardware and software development. This problem is exacerbated by manufacturers producing phones with larger screens, which are more difficult to hold and operate one-handed. We present a way to enhance one-handed operation of a website using standard client-side web technologies, without the need to redesign the site or to overwrite any CSS styles: a JavaScript framework that transforms input for form elements, media control and page access on the fly into a thumb-friendly interaction model. Initial user testing of our interface prototype confirms efficiency and learnability, and highlights its usefulness for navigating long pages and finding the desired information more quickly, even across different websites, when operating the device with one hand.

    • RKG Enhancing Saltiness with Cathodal Current
      H. Nakamura (4-21-1 Nakano, JP), H. Miyashita (4-21-1 Nakano, JP)

      Weak cathodal current applied to the tongue inhibits the taste of salt, but perceived saltiness tends to increase after the current is released. In this study, we propose a saltiness enhancer that uses this phenomenon. Our system applies a weak cathodal current for a short time when the user eats or drinks; the user can thus perceive a salty taste without the use of salt. In the demo, fries can taste saltier without adding any salt.

    • RHQ PaperTonnetz: Supporting Music Composition with Interactive Paper
      J. Garcia (INRIA & Univ. Paris-Sud, FR), L. Bigo (Univ. Paris 12, FR), A. Spicher (Univ. Paris-Est Créteil, FR), W. Mackay (INRIA, FR)

      A Tonnetz, or "tone network" in German, is a two-dimensional representation of the relationships among musical pitches. We present PaperTonnetz, a tool that lets musicians explore and compose music with Tonnetz representations by making gestures on interactive paper. In addition to triggering musical notes with the pen, as in a button-based interface, the drawn gestures become interactive paths that can be used as chords or melodies to support composition: users create the paper interface, then draw and select paths in the network with the pen to create chords and melodies.

    • RRN Peter Piper Picked a Peck of Pickled Peppers – an Interface for Playful Language Exploration
      C. Sylla (Univ. of Minho, PT), S. Gonçalves (Univ. of Minho, PT), P. Branco (Department of Information Systems, Univ. of Minho, PT), C. Coutinho (Univ. of Minho, PT)

      In this paper we describe t-words (tangible words), an interface consisting of rectangular blocks on which children can record and then play back audio. The blocks can be snapped together to play the recorded audio in sequence; reordering the blocks changes the audio sequence, creating new audio combinations. t-words does not need a computer, which makes it flexible for various contexts. The interface was presented during two workshops with two schools in Kathmandu, Nepal. During the workshops, children used the interface playfully, exploring sounds, words and sentences while engaging in collaborative work.

    • RQS Pursuits: Eye-Based Interaction with Moving Targets
      M. Vidal (Lancaster Univ., UK), K. Pfeuffer (Lancaster Univ., UK), A. Bulling (Max Planck Institute for Informatics, DE), H. Gellersen (Lancaster Univ., UK)

      Eye-based interaction has commonly been based on estimating gaze direction to locate objects for interaction. We introduce Pursuits, a novel and very different eye tracking method that is instead based on following the trajectory of eye movement and comparing it with the trajectories of objects in the field of view. Because the eyes naturally follow the trajectory of moving objects of interest, our method is able to detect what the user is looking at by matching eye movement and object movement. We illustrate Pursuits with three applications that demonstrate how the method facilitates natural interaction with moving targets.
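
      The abstract does not name the matching algorithm, but one plausible reading is a windowed correlation between the gaze trajectory and each on-screen object's trajectory, selecting the object the eyes follow most closely. A minimal sketch under that assumption:

        import numpy as np

        def pursuit_match(gaze_xy, targets_xy, threshold=0.8):
            """gaze_xy: (N, 2) gaze samples over a time window.
            targets_xy: dict of target id -> (N, 2) positions over the
            same window. Returns the best-matching id, or None."""
            best_id, best_score = None, threshold
            for tid, traj in targets_xy.items():
                cx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
                cy = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
                score = min(cx, cy)   # both axes must follow the target
                if score > best_score:
                    best_id, best_score = tid, score
            return best_id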

    • RQL Can You Handle It? Bimanual Techniques for Browsing Media Collections on Touchscreen Tablets
      R. McLachlan (Univ. of Glasgow, UK), S. Brewster (Univ. of Glasgow, UK)

      Touchscreen tablets present an interesting challenge to interaction design: they are not quite handheld like their smartphone cousins, though their form factor affords usage away from the desktop and other surfaces. This means that users have to find ways to support the device that often require one hand to hold it, constraining their ability to use two hands on the touchscreen. Our ongoing work explores the possibility of using novel input modalities mounted on the tablet to enable simultaneous two-handed input while the user is holding the device. This paper presents a bimanual scrolling technique that splits the control of scrolling speed and scrolling direction across two hands using a combination of pressure, physical dial and touch input: attendees can scroll through a media collection using a rear-mounted dial and accelerate using a front-mounted pressure sensor.

    • RXZ Libmapper (A Library for Connecting Things)
      J. Malloch (McGill Univ., CA), S. Sinclair (McGill Univ., CA), M. Wanderley (McGill Univ., CA)

      We present libmapper, a software library and protocol for providing network-enabled discovery and connectivity of real-time control signals. The state of the art in music-related networking presents a trade-off. At one extreme, many systems still use MIDI, an old and insufficient standard that specifies keyboard-oriented commands embedded in short, coded 3-byte messages and limits modulation controls to a 7-bit range. At the other extreme is Open Sound Control (OSC), a flexible packet format that supports named data and a wide range of binary numerical representations, but lacks built-in semantic standards. The present work proposes a semantic layer, built on OSC over multicast UDP/IP, that carries metadata about signals and can specify peer-to-peer connectivity between nodes along with instructions for the associated translation of data representations. The translation layer avoids the need for normalization or standardization of data representation while maintaining ease of use and providing a distributed, flexible approach to music networking. The goal is to provide a system for fast and dynamic experimentation during the mapping phase of instrument design.

    • RSJ PaperTab: An Electronic Paper Computer with Multiple Large Flexible Electrophoretic Displays
      A. Tarun (Queen's Univ., CA), P. Wang (Queen's Univ., CA), A. Girouard (Carleton Univ., CA), P. Strohmeier (Queen's Univ., CA), D. Reilly (Dalhousie Univ., CA), R. Vertegaal (Queen's Univ., CA)

      We present PaperTab, a paper computer with multiple functional, touch-sensitive, flexible 10.7” electrophoretic displays. PaperTab merges the benefits of working with electronic documents with the tangibility of paper documents. In PaperTab, each document window is represented as a physical, functional, flexible e-paper screen called a displaywindow. Each displaywindow is an Android computer that can show documents at varying resolutions. The location of each displaywindow is tracked on the desk using an electromagnetic tracker, allowing for context-aware operations between displaywindows. Touch and bend sensors in each displaywindow allow users to navigate content.

    • RVQ TouchViz: (Multi)Touching Multivariate Data
      J. Rzeszotarski (Carnegie Mellon Univ., USA), A. Kittur (Carnegie Mellon Univ., USA)

      We describe TouchViz, an information visualization system for tablets that encourages rich interaction, exploration, and play through references to physical models. TouchViz turns data into physical objects that experience forces and respond to the user: users can employ a variety of physically grounded, force-based tools to explore data and find complex, multidimensional trends. We describe the design of the system.

    • RTZ Gracoli: A Graphical Command Line User Interface
      P. Verma (Johns Hopkins Univ., USA)

      The Command Line Interface (CLI) is the most popular and basic user interface for interacting with computers. Despite its simplicity, it has some limitations in terms of user experience; for example, the textual output of a command can be hard to understand and interpret. In this paper we describe the limitations of command line interfaces and propose Gracoli, a graphical command line interface that takes advantage of both the text-based interface and the graphical user interface to provide a better user experience and perform complex tasks. Such a hybrid system combines the strengths of the CLI and the GUI: the command line keeps access fast, while graphical output is more interactive and understandable. We demonstrate some of the useful applications and features of Gracoli. Visit www.gracoli.com for a demo.
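
      As a toy illustration of the hybrid idea (not Gracoli's implementation), a command's textual output can be captured and handed to a graphical layer as structured rows rather than raw text:

        import subprocess

        def run_structured(cmd):
            """Run a shell command and return its output as rows of
            fields, ready to render as a sortable table."""
            result = subprocess.run(cmd, shell=True, capture_output=True,
                                    text=True)
            return [line.split() for line in result.stdout.splitlines() if line]

        for row in run_structured('ls -l'):
            print(row)   # a GUI layer would render these as table rows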

    • RUU MARSUI: Malleable Audio-Reactive Shape-retaining User Interface
      V. Wikström (Aalto Univ., FI), S. Overstall (Aalto Univ., FI), K. Tahiroglu (Aalto Univ., FI), J. Kildal (Nokia Research Center, FI), T. Ahmaniemi (Nokia Research Center, FI)

      MARSUI is a deformable hardware prototype exhibiting plastic (shape-retaining) behavior that can track the shape the user creates when deforming it. We envision that a set of predefined shapes could be mapped onto particular applications and functions. In its current implementation, MARSUI can be deformed into three shapes: circular band, flat surface and sharp bend. These shapes map respectively onto the following applications: wristwatch, mobile phone and media player. Since the malleable interface can also take other forms, feedback plays an important role in guiding the user towards the predefined shapes. Here, we focus on investigating the possibilities that auditory feedback offers in guiding the user towards the intended shapes.

    • RFZ Spinning Data: Remixing Live Data Like a Music DJ
      P. Groth (VU Univ. Amsterdam, NL), D. Shamma (Yahoo! Research, USA)

      This demonstration investigates data visualization as a performance through the use of disc jockey (DJ) mixing boards. We assert that the tools DJs use in situ can deeply inform the creation of data mixing interfaces and performances. We present a prototype system, DMix, which allows one to filter and summarize information from social streams using an audio mixing deck, enabling the Data DJ to distill multiple feeds of information into an overview of a live event. Attendees can try it on CHI itself or on another ongoing event.

    • RPQ Information Capacity of Full-body Movements
      A. Oulasvirta (Max Planck Institute for Informatics, DE), T. Roos (Univ. of Helsinki, FI), A. Modig (Aalto Univ., FI), L. Leppänen

      We present a novel metric for the information capacity of full-body movements. It accommodates HCI scenarios involving continuous movement of multiple limbs. Throughput is calculated as the mutual information in repeated motor sequences; it is affected by the complexity of the movements and the precision with which an actor reproduces them. Computation requires decorrelating co-dependencies of movement features (e.g., wrist and elbow) and temporal alignment of sequences. HCI researchers can use the metric as an analysis tool when designing and studying user interfaces. Come and try how high a throughput you can achieve!
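
      As a simplified worked example of the idea (the full method also decorrelates movement features and temporally aligns sequences, which is omitted here), the shared information between two aligned repetitions of one movement feature can be estimated under a Gaussian assumption:

        import numpy as np

        def throughput_bits_per_s(rep_a, rep_b, fs=120.0):
            """rep_a, rep_b: aligned 1-D arrays of one movement feature
            (e.g., wrist x) from two repetitions; fs: sample rate in Hz.
            Assumes |correlation| < 1 (imperfect reproduction)."""
            r = np.corrcoef(rep_a, rep_b)[0, 1]
            bits_per_sample = -0.5 * np.log2(1.0 - r**2)  # Gaussian MI
            return bits_per_sample * fs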

    • RNURobotic Wheelchair Easy to Move and Communicate with Companions
      Y. Kobayashi (Saitama Univ., JP), R. Suzuki (Saitama Univ., JP), Y. Sato (Saitama Univ., JP), M. Arai (Saitama Univ., JP), Y. Kuno (Saitama Univ., JP), A. Yamazaki (Tokyo Univ. of Technology, JP), K. Yamazaki (Saitama Univ., JP)

      We demonstrate our robotic wheelchair, which is able to move alongside a companion. You can ride the robotic wheelchair and try controlling it!
      Although it is desirable for wheelchair users to go out alone by operating wheelchairs on their own, they are often accompanied by caregivers or companions. In designing robotic wheelchairs, therefore, it is important to consider not only how to assist the wheelchair user but also how to reduce companions’ load and support their activities. We especially focus on communication between wheelchair users and companions, because face-to-face communication is known to be effective in supporting elderly people’s mental health. Hence, we propose a robotic wheelchair that is able to move alongside a companion. All attendees can try riding and controlling it.

    • RCSGesture Output: Eyes-Free Output Using a Force Feedback Touch Surface
      A. Roudaut (Hasso Plattner Institute, DE), A. Rau (Hasso Plattner Institute, DE), C. Sterz (Hasso Plattner Institute, DE), M. Plauth (Hasso Plattner Institute, DE), P. Lopes (Hasso Plattner Institute, DE), P. Baudisch (Hasso Plattner Institute, DE)

      We propose using spatial gestures not only for input but also for output. Analogous to gesture input, gesture output moves the user’s finger in a gesture, which the user then recognizes. A motion path forming a “5”, for example, may inform the user about five unread messages; a heart-shaped path may serve as a message from a close friend. We built two prototypes: (1) the longRangeOuija, a stationary prototype that offers a large motion range and full control; and (2) the pocketOuija, a self-contained mobile device based on an iPhone. Both actuate the user’s finger by means of an actuated transparent foil overlaid onto a touchscreen. We demonstrate the pocketOuija prototype.
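
      A sketch of how such an output gesture might be generated (the actuator call is hypothetical and the path sampling is our illustration, not the authors' code):

          import math

          def heart_path(n=100, scale=1.0):
              """Sample a parametric heart curve into (x, y) waypoints."""
              pts = []
              for i in range(n):
                  t = 2 * math.pi * i / n
                  x = scale * 16 * math.sin(t) ** 3
                  y = scale * (13 * math.cos(t) - 5 * math.cos(2 * t)
                               - 2 * math.cos(3 * t) - math.cos(4 * t))
                  pts.append((x, y))
              return pts

          for x, y in heart_path():
              pass  # e.g. surface.move_finger_to(x, y) on the force-feedback foil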

    Exploration

    • ERGThe CHI 2013 Interactive Schedule
      A. Satyanarayan (Stanford Univ., USA), D. Strazzulla (INRIA, FR), C. Klokmose (Aarhus Univ., DK), M. Beaudouin-Lafon (Univ. Paris-Sud, FR), W. Mackay (INRIA, FR)

      The CHI’13 Interactive Schedule helps attendees navigate the wealth of video preview content in order to identify events they would like to attend.
      CHI 2013 offers over 500 separate events including paper presentations, panels, courses, case studies and special interest groups. Given the size of the conference, it is no longer practical to host live summaries of these events. Instead, a 30-second Video Preview summary of each event is available. The CHI’13 Interactive Schedule helps attendees navigate this wealth of video content in order to identify events they would like to attend. It consists of a number of large display screens throughout the conference venue which cycle through a video playlist of events. Attendees can interact with these displays using their mobile devices by either constructing custom video playlists or adding on-screen content to their personal schedule.

    • ESXCobi: Communitysourcing Large-Scale Conference Scheduling
      H. Zhang (Massachusetts Institute of Technology, USA), P. André (Carnegie Mellon Univ., USA), L. Chilton (Univ. of Washington, USA), J. Kim (Massachusetts Institute of Technology, USA), S. Dow (Carnegie Mellon Univ., USA), R. Miller (Massachusetts Institute of Technology, USA), W. Mackay (INRIA, FR), M. Beaudouin-Lafon (Univ. Paris-Sud, FR)

      Cobi engages the entire CHI community in planning CHI. Cobi elicits community members’ preferences and constraints, and provides a scheduling tool empowering organizers to take informed actions toward improving the schedule.
      Creating a good schedule for a large conference such as CHI requires taking into account the preferences and constraints of organizers, authors, and attendees. Traditionally, the onus of planning is placed entirely on the organizers and involves only a few individuals. Cobi presents an alternative approach to conference scheduling that engages the entire community to take active roles in the planning process. The Cobi system consists of a collection of crowdsourcing applications that elicit preferences and constraints from the community, and software that enables organizers and other community members to take informed actions toward improving the schedule based on the collected information. We are currently piloting Cobi as part of the CHI 2013 planning process.

    • ETEMubuFunkScatShare: Gestural Energy and Shared Interactive Music
      A. Tanaka (Goldsmiths, Univ. of London, UK), B. Caramiaux (Goldsmiths, Univ. of London, UK), N. Schnell (IRCAM, FR)

      MubuFunkScatShare is a collaborative performance featuring a musician playing guitar, scatting and beatboxing. Multiple visitors can participate in the performance, re-performing live-recorded musical phrases by shaking and waving mobile phones.
      We present a ludic interactive music performance that allows live-recorded sounds to be re-rendered through the users’ movements. The interaction design makes the control similar to a shaker, where the motion energy drives the energy of the music being played. The instrument has been designed for musicians as well as non-musicians and allows for multiple players. In the MubuFunkScatShare performance, one performer plays acoustic instruments into the system, subsequently rendering them by shaking a smartphone. He invites participation by volunteers from the audience, resulting in a fun musical piece that includes layers of funk guitar, scat singing, guitar solo, and beatboxing.

    • ESQThe Throat III – Disforming Operatic Voices Through a Novel Interactive Instrument
      C. Unander-Scharin (Opera and Technology, SE), K. Hook (KTH – Royal Institute of Technology, SE), L. Elblaus (KTH – Royal Institute of Technology, SE)

      The Throat III is a tool that lets opera singers dynamically disform, change and accompany their voices through gestures of one hand.
      Practitioner-led artistic research, combined with interactive technologies, opens up new and unexplored design spaces. Here we focus on the creation of a tool for opera singers to dynamically disform, change and accompany their voices. In an opera composed by one of the authors, the title-role singer needed to be able to alter his voice to express hawking, coughing, snuffling and other disturbing vocal qualities associated with the lead role – Joseph Merrick, aka “The Elephant Man”. In our designerly exploration, we were guided by artistic experiences from the opera tradition and the affordances of the technology at hand. The resulting instrument, The Throat III, is a singer-operated artefact that embodies and extends particular notions of operatic singing technique while at the same time creating accompaniment. It thereby becomes an emancipatory tool, putting a spotlight on some of the power hierarchies between singers, composers, conductors, and stage directors in the operatic world.

    • EFSChiseling Bodies: an Augmented Dance Performance
      S. Fdili Alaoui (LIMSI-CNRS, FR), C. Jacquemin (LIMSI-CNRS, Orsay, FR), F. Bevilacqua (IRCAM, FR)

      Chiseling Bodies is an interactive augmented dance performance in which a classical dancer interacts with abstract visuals: massive mass-spring systems whose dynamic behaviors echo the dancer’s movement qualities.

    • EPCKalpana II: A Dome based Learning Installation for Indian Schools
      I. Grover (Industrial Design Center, IIT Bombay, IN)

      ‘Kalpana’, meaning imagination, provides a dome-based immersive experience that invites students to imagine experiencing a day at different places on Earth.
      This extended abstract presents Kalpana II, a low-cost learning installation with enhanced audio feedback that teaches users that the Sun changes its trajectory in the sky with changes in location and time of year. It builds on the previously published research paper “Kalpana: A Dome based Learning Installation for Indian Schools”, which evaluates three different mediums (conventional paper-based, screen-based and dome-based) for teaching the same topic. With Kalpana, students interact with simple, familiar objects such as a globe, and the corresponding changes can be observed on the dome, supported by contextual visual and audio feedback. Kalpana empowers students to visualize the phenomenon that the Sun changes its trajectory in the sky with a change in location or time of year.

    • EDNBig Huggin: A Bear for Affection Gaming
      L. Grace (Miami Univ., USA)

      Big Huggin’ is a game played with a 30-inch custom teddy bear controller. Players complete the game by providing several well-timed hugs. It is an experiment in alternative interfaces.
      Today the world of human-computer interaction is an impersonal one, where touch is mediated through glass and plastic, and multi-touch means antiseptic, sleek materials with little texture. Why isn’t touch more personal? Why isn’t touch more tactile? Big Huggin’ is a game played with a 30-inch custom teddy bear controller. Players complete the game by providing several well-timed hugs. It is an experiment and gesture in alternative interfaces. Instead of firing toy guns at countless enemies or revving the engines of countless gas-guzzling virtual cars, why not give a hug? Big Huggin’ is designed to offer reflection on the way we play and the cultural benefit of alternative play.

    • EHXSOUND BOUND: Making a Graphic Equalizer More Interactive and Fun
      S. Kim (KAIST (Korea Advanced Institute of Science and Technology), KR), W. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)

      SOUND BOUND turns an ordinary graphic equalizer into an interactive musical instrument. It actively engages people in listening to music and enhances their experience by letting them distort it.
      SOUND BOUND is an interactive graphic equalizer. It displays the magnitudes of frequency bands akin to traditional graphic equalizers and other sound visualizers, and it also allows the user to distort and create sounds through touch interaction. The user can create virtual balls or instrument icons on the screen and, via a physics engine, cause these objects to collide with the equalizer. When virtual balls collide with the equalizer bars, the magnitudes of certain frequencies are amplified. When an equalizer bar strikes an instrument icon, it creates an instrumental sound effect. Listening to music is usually an essentially passive and static activity; SOUND BOUND actively engages people in it and enhances their experience as well.
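
      A rough sketch of the collision-to-sound mapping (our reading of the description; the gain value and data layout are invented for illustration):

          # When a virtual ball strikes equalizer bar i, the magnitude of
          # frequency band i is boosted, audibly distorting the music.
          def on_collision(band_magnitudes, bar_index, boost=1.5):
              """Amplify the band whose bar a virtual ball just struck."""
              band_magnitudes[bar_index] *= boost
              return band_magnitudes

          bands = [0.2, 0.5, 0.8, 0.4]             # normalized magnitudes per band
          print(on_collision(bands, bar_index=2))  # band 2 amplified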

    • EMESmarter Objects: Using AR Technology to Program Physical Objects and their Interactions
      V. Heun (Massachusetts Institute of Technology, USA), S. Kasahara (Sony Corporation, JP), P. Maes (Massachusetts Institute of Technology, USA)

      Smarter Objects provides a flexible user interface combining tangible and graphical user interfaces, using a GUI for understanding and programming and a TUI for day-to-day operation.
      Graphical user interfaces (GUIs) offer a very flexible interface but require the user’s complete visual attention, whereas tangible user interfaces (TUIs) can be operated with minimal visual attention. To prevent visual overload and provide a flexible yet intuitive user interface, Smarter Objects combines the best of both styles by using a GUI for understanding and programming and a TUI for day-to-day operation. Smarter Objects uses augmented reality technology to provide a flexible GUI for objects when it is needed.

    • EPXInteractive Sensory Objects for Improving Access to Heritage
      K. Allen (Univ. of Reading, UK), N. Hollinworth (Univ. of Reading, UK), F. Hwang (Univ. of Reading, UK), A. Minnion (Univ. of East London, UK), G. Kwiatkowska (Univ. of East London, UK), T. Lowe (Mencap, Liverpool, UK), N. Weldin (Middlesex Univ., UK)

      This project aims to improve the accessibility and experience of museums and heritage sites through interactive, multisensory objects developed collaboratively with people with learning disabilities.
      In this project we explore how to enhance the experience and understanding of cultural heritage in museums and heritage sites by creating interactive multisensory objects collaboratively with artists, technologists and people with learning disabilities. We focus here on workshops conducted during the first year of a three-year project in which people with learning disabilities each constructed a ‘sensory box’ to represent their experiences of Speke Hall, a heritage site in the UK. The boxes are developed further in later workshops, which explore aspects of physicality and how to appeal to the entire range of senses, making use of Arduino technology and basic sensors to enable an interactive user experience.

    • ERUMobile Rhythmic Interaction in a Sonic Tennis Game
      S. Baldan (Univ. of Milan, IT), S. Serafin (Aalborg Univ. Copenhagen, DK), A. de Gotzen (Aalborg Univ. Copenhagen, DK)

      Sonic Tennis is a motion-based, audio-only game for mobile devices.
      This paper presents a game for mobile devices which simulates a tennis match between two players. It is an audio-based game, so the majority of information and feedback is given to the user through sound rather than being displayed on a screen. As users are not required to keep their eyes on the display, the device can be used as a motion-based controller, exploiting its internal motion sensors to their full potential. The game aims to be useful for both entertainment and educational purposes, and enjoyable by both visually impaired users (the main target audience for audio-based games today) and sighted users.

    • ETSDe-MO : Designing Action-Sound Relationships with the MO Interfaces
      F. Bevilacqua (IRCAM, FR), N. Schnell (IRCAM, FR), N. Rasamimanana (Phonotonic, FR), J. Bloit (IRCAM, FR), E. Flety (IRCAM, FR), B. Caramiaux (Goldsmiths, Univ. of London, UK), J. Francoise (IRCAM, FR), E. Boyer (IRCAM, FR)

      Demonstrations of the Modular Musical Objects (MO), an ensemble of tangible interfaces and software modules for creating novel musical instruments or for augmenting objects with sound.
      The Modular Musical Objects (MO) are an ensemble of tangible interfaces and software modules for creating novel musical instruments or for augmenting objects with sound. In particular, the MOs allow for designing action-sound relationships and behaviors based on the interaction with tangible objects or free body movements. Such interaction scenarios can be inspired by the affordances of particular objects (e.g. a ball, a table), or by interaction metaphors based on the playing techniques of musical instruments or games. We describe specific examples of action-sound relationships that are made possible by the MO software modules and which take advantage of machine learning techniques.

    • EVJCuboino. Extending Physical Games. An Example.
      F. Heibeck (Univ. of Bremen, DE)

      Cuboino is a computationally augmented physical toy system designed as an extension of the existing marble game cuboro. It consists of a set of cubes that are seamlessly compatible with the cuboro cubes. In contrast to the passive cuboro cubes, cuboino modules are active parts of a digital system consisting of sensor cubes, actor cubes and supply cubes. By snapping them together, the player can build a modular system that functions according to the individual functionalities of the cuboino cubes. Cuboino establishes a new signal pathway that is not embodied in the marble but adapts to the medium of its transmission. Signals can be received by multiple modules, creating more than one signal at a time. This allows signals to intertwine and thus create more dynamic and complex game outcomes.

    • EUGNoiseBear: A Wireless Malleable Multiparametric Controller for use in Assistive Technology Contexts
      M. Grierson (Goldsmiths, Univ. of London, UK), C. Kiefer (Goldsmiths, Univ. of London, UK)

      NoiseBear is a malleable multiparametric interface, made with conductive textiles, currently being developed in participatory design workshops with disabled children.
      NoiseBear is a malleable multiparametric interface, currently being developed in a series of participatory design workshops with disabled children. It follows a soft toy design, using conductive textiles for pressure sensing and circuitry. The system is a highly sensitive deformable controller; it can be used flexibly in a range of scenarios for continuous or discrete control, allowing interaction to be designed at a range of complexity levels. The controller is wireless, and can be used to extend the interactive possibilities of mobile computing devices. Multiple controllers may also be networked together in collaborative scenarios.

    • EUNThe Voice Harvester: An Interactive Installation
      N. True (Umea Univ., SE), N. Papworth (Interactive Institute Umea, SE), R. Zarin (Interactive Institute Umea, SE), J. Peeters (Interactive Institute, SE), F. Nilbrink (Interactive Institute Umea, SE), K. Lindbergh (Interactive Institute Umea, SE), D. Fallman (Interactive Institute, SE), A. Lind (Umeå Univ., SE)

      The Voice Harvester is an exploratory interactive installation that embodies the human voice in physical materials. Sound input is amplified through speakers connected to a thin, flexible membrane that agitates the material.
      The Voice Harvester is an exploratory interactive installation that embodies the human voice in physical materials. Sound input is processed, amplified and transmitted through audio drivers connected to a thin, flexible membrane that agitates the material on it. The title “Voice Harvester” is derived from the original design brief, which called for an object able to elicit non-linguistic, expressive, and naturalistic human vocal sounds to explore the full range of capability of the human voice through a novel, playful, and embodied interaction. This paper describes the intention, design process, construction, technical details, interaction, and planned/potential uses of this design exploration.

    • EFECELL
      E. Kim (Hongik Univ., KR), R. Achituv (advisor) (Hongik Univ., KR)

      Cell is a pneumatically controlled, body augmenting, interactive and kinetic wearable/sculpture/installation that transforms in response to the user’s breath.

    • EEQRepentir: Digital Exploration Beneath the Surface of an Oil Painting
      J. Hook (Newcastle Univ., UK), J. Briggs (Northumbria Univ., UK), M. Blythe (Northumbria Univ., UK), N. Walsh (Bernarducci Meisel Gallery, USA), P. Olivier (Newcastle Univ., UK)

      Repentir is a mobile application that employs markerless tracking and augmented reality to enable gallery visitors to explore the underdrawing and successive stages of pigment beneath an oil painting’s surface. Repentir recognises the position and orientation of a specific painting within a photograph and precisely overlays images that were captured during that painting’s creation. The viewer may then browse through the work’s multiple states and closely examine its painted surface in one of two ways: sliding or rubbing. Our current prototype recognises realist painter Nathan Walsh’s most recent work, “Transamerica”. Repentir enables the viewer to explore intermediary stages in the painting’s development and see what is usually lost within the materially additive painting process. The prototype offers an innovative approach to digital reproduction and provides users with unique insights into the painter’s working method.

    • ELJVenus
      E. Jung (Hongik Univ., KR), Y. Lee (Hongik Univ., KR), R. Achituv (advisor) (Hongik Univ., KR)

      Please approach the garment to witness its behavior.
      Venus is an interactive garment inspired by the Venus flytrap, a carnivorous plant that traps its prey in jaw-like leaves. The garment responds to infringements on “personal space”. In default mode the collars of the garment move in gentle, sinuous, organic waves, sending a message of enticement. However, as viewers approach and cross an invisible threshold from “public” to “personal” space, the collars snap shut, changing their orientation and color, and the whole garment is transformed into a statement of hostile rejection.

    • ENNGravity of Light
      Y. Kim (Hongik Univ., KR), Y. Cho (Hongik Univ., KR)

      ‘Gravity of Light’ is an interactive wearable artwork made of 3D-printed smart textile that translates the natural movements of the wearer’s head into patterns of light that flow in the direction of gravity.
      This art project started with an imaginative question: ‘What if light behaved like water and flowed as if it felt gravity?’ ‘Gravity of Light’ is an interactive wearable art project made of 3D-printed smart textile that displays the wearer’s natural head movements as flowing patterns of light drawn toward gravity. Through this experimental project, the artists explore a visual representation of fundamental body movement, aided by artistic imagination.

    • EVC汇(Hui)
      Y. Lee (Hongik Univ., KR), D. Shin (Korea Aerospace Univ., KR), R. Achituv (Advisor) (WCU, National Research Foundation of Korea, KR)

      Approach the garment and experience its response.
      汇 (Hui) is the Chinese character for hedgehog. Like the thorny creature’s defensive shield, which protects and camouflages it from predators, the Hui garment comprises a coat of aggressively reactive sharp spikes intended to symbolically ward off the threatening presence of others. In default mode, the spikes sway gently in place, creating the sense of a living and breathing organism. However, upon detecting a presence within the proximity of the garment, they violently turn to face the approaching subjects, and the LED lights change color from white to red.

    • EQZColourNet: A System of Interactive and Interacting Digital Artworks
      S. Clark (De Montfort Univ., UK), E. Edmonds (De Montfort Univ., UK)

      ColourNet is a digital art system composed of two interactive and interacting artworks – Shaping Form and Transformations.
      ColourNet is a digital art system composed of a set of interactive and interacting artworks. Although the artworks are able to work independently, they can also operate together to provide enhanced possibilities for human interaction and creative participation. We describe the ColourNet digital art system and demonstrate how people can interact with a smartphone artwork and a screen-based work that interact with one another.

    • EEJSurface Tension
      N. Plant (Queen Mary, Univ. of London, UK), P. Healey (Queen Mary, Univ. of London, UK)

      An installation abstracting the human expression of pain into disembodied form: gesticulations trace shapes out of a canvas, revealing the expressive quality of hands articulating a personal experience of pain.
      The human body has a privileged place in explanations of how emotions are communicated. Tangible human bodies, it is hoped, can provide a conceptual and empirical bridge sufficient to convey intangible human experiences; a hope shared by technologies such as avatars and embodied robots. Surface Tension explores this idea by testing the boundary between the embodied and disembodied expression of pain. The installation uses motion-capture data of people describing personal experiences of pain. Their original gestural movements are extracted and translated into mechanical gesticulations that stretch and trace forms onto the surface of a canvas, mapping the twists, turns, contractions and accelerations of fingers and hands articulating an experience of pain. We manipulate the parameters of the original motions to ask: in what ways can a disembodied translation of a human description of pain evoke recognition or empathy in the viewer?

    • EFLLong Living Chair
      L. Pschetz (Univ. of Edinburgh, UK), R. Banks (Microsoft Research, UK)

      The Long Living Chair was designed to embody important slow technology concepts. Its semi-hidden display shows how many times it has been used over the course of 96 years.
      The Long Living Chair is a rocking chair with enhanced memory. It knows the day it was produced and can record how many times it is used over the course of 96 years. Its semi-hidden display and the slow pace at which it changes suggest that the recorded information should not be accessed frequently. Instead, it is meant to be forgotten and then, once in a while, accessed to provide a moment of wonder and a sense of relatedness to the object. The project was developed to embody important slow technology concepts.

    • EEXSense of the Deep
      M. Kim (Hongik Univ., KR), R. Achituv (advisor) (WCU, National Research Foundation of Korea, KR)

      Sense of the Deep is a multi-user interactive installation that employs a unique gel-based controller to collaboratively manipulate 3D holograms in real time.
      Sense of the Deep is a multi-user interactive installation that employs a unique tactile controller to collaboratively manipulate 3D holograms in real time. Participants immerse both hands in closed chambers containing a wet gelatin substance. As they explore the sensual matter they also learn how to utilize it as an interface. Up to four participants can work together to mold and reshape the fluid visuals.

    • EKNMetaphone: An Artistic Exploration of Biofeedback and Machine Aesthetics
      V. Simbelis (KTH – Royal Institute of Technology, SE), K. Höök (KTH – Royal Institute of Technology, SE)

      The Metaphone is an interactive art piece that transforms biosensor data from participants into colorful, evocative visual patterns on a large canvas.
      The Metaphone is an interactive art piece that transforms biosensor data extracted from participants into colorful, evocative visual patterns on a large canvas. The biosensors register movement, pulse and skin conductance, the latter two relating to emotional arousal. The machine creates a traditional art form – colorful paintings – which can be contrasted with the pulsating, living bodies of the participants and the machine-like movements of the Metaphone. Participants interacting with the machine get their own painting drawn for them – a highly involving activity spurring a whole range of questions around bio-sensing technologies. Participants engaging with the Metaphone have to agree to share their personal data, thereby expanding the interactive discourse: the piece questions the extension of the body with the machine and involves participants in a public exposition of their inner worlds.

    • ECEOverlapped Playback
      H. Hayeon (Hongik Univ., KR), R. Achituv (advisor) (WCU, National Research Foundation of Korea, Daejeon, KR)

      Overlapped Playback employs interlaced video frames to digitally expand upon a classical optical illusion. The animations are generated by changing one’s spatial orientation in relation to the projected image.
      Overlapped Playback employs multiple real-time video sources and interlaced image frames to digitally translate and expand upon a classical analogue optical illusion. Users perceive animated effects by moving in front of a grid that partly obscures a composite image. The illusion of motion is created through the visual sequencing of the various image fragments exposed to the viewers as they change their spatial orientation.
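
      A sketch of the classic barrier-grid ("scanimation") interlacing that this kind of illusion builds on (our illustration, not the authors' code): columns from N frames are interleaved into one composite, and a grid with slits one column wide and N-1 columns opaque reveals one frame at a time as the viewer moves.

          def interlace(frames):
              """frames: list of equally sized 2D pixel grids (lists of rows)."""
              n = len(frames)
              height = len(frames[0])
              width = len(frames[0][0])
              # Column c of the composite comes from frame c mod n.
              return [[frames[col % n][row][col] for col in range(width)]
                      for row in range(height)]

          f1 = [[1, 1, 1, 1]] * 2   # two tiny 2x4 "frames"
          f2 = [[2, 2, 2, 2]] * 2
          print(interlace([f1, f2]))  # columns alternate: [[1, 2, 1, 2], [1, 2, 1, 2]]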

    • EMLDirty Tangible Interfaces — Expressive Control of Computers with True Grit
      M. Savary (UserStudio, FR), D. Schwarz (Ircam, FR), D. Pellerin (UserStudio, FR), F. Massin (UserStudio, FR), C. Jacquemin (LIMSI-CNRS, FR), R. Cahen (ENSCI–Les Ateliers, FR)

      Create music and graphic animation by plowing through granular or liquid interaction material, molding sonic landscapes in the Dirty Tangible Interface (DIRTI).
      Dirty Tangible Interfaces (DIRTI) are a new concept in interface design that forgoes the dogma of repeatability in favor of a richer and more complex experience, constantly evolving, never reversible, and infinitely modifiable. We built a prototype interface realizing the DIRTI principles based on low-cost commodity hardware and kitchenware: a video camera tracks a granular or liquid interaction material placed in a glass dish. The 3D relief estimated from the images, and the dynamic changes applied to it by the user(s), are used to control two applications. For 3D scene authoring, the relief is directly translated into a terrain, allowing fast and intuitive map editing. For expressive audio–graphic music performance, both the relief and real-time changes are interpreted as activation profiles to drive corpus-based concatenative sound synthesis, allowing one or more musicians to mold sonic landscapes and to plow through them in an inherently collaborative, expressive, and dynamic experience.
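
      A schematic sketch of the relief-to-activation step as described above (the function and weighting are invented for illustration; the real system uses camera-based relief estimation and corpus-based concatenative synthesis):

          # Turn the estimated 3D relief and its frame-to-frame change into
          # per-cell activation weights for the sound synthesis.
          def activation_profile(relief, previous_relief):
              """Weight each cell by how much the material moved, plus a
              small contribution from its static height."""
              return [[abs(h - p) + 0.1 * h
                       for h, p in zip(row, prev_row)]
                      for row, prev_row in zip(relief, previous_relief)]

          prev = [[0.0, 0.2], [0.1, 0.0]]
          curr = [[0.5, 0.2], [0.1, 0.3]]
          print(activation_profile(curr, prev))  # highest where the material moved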

    • EKUMind Pool: Encouraging Self-Reflection through Ambiguous Bio-Feedback
      K. Long (Lancaster Univ., UK), J. Vines (Newcastle Univ., UK)

      Mind Pool is a brain-computer interface artwork providing real-time feedback of brain activity, encouraging sustained interaction and self-reflection by motivating participants to relate the ambiguous feedback to their thoughts.
      In this interactivity we present Mind Pool, an exploratory brain-computer interface (BCI) artwork that provides real-time feedback of brain activity to those interacting with it. Brain activity is represented sonically and physically via a magnetically reactive liquid that sits in a pool in front of the participant. Mind Pool is designed to present this information ambiguously so as to encourage sustained interaction and self-reflection, motivating participants to relate the ambiguous feedback to their brain activity.