
Video Showcase


Enter a 3-letter code in the search box of the CHI 2013 mobile app to go to the corresponding session or presentation. When clickable, a 3-letter code links to the Video Previews web site.

Best Video Award

Congratulations to the winners of the Best Video Award.

Video Showcase Entries

  • [VAU] REVEL: Programming the Sense of Touch
    O. Bau (Disney Research, USA), I. Poupyrev (Disney Research Pittsburgh, USA), M. Le Goc (Disney Research, USA), L. Galliot (Disney Research, USA), M. Glisson (Disney Research, USA)

    REVEL is a new wearable tactile technology that modifies the user’s tactile perception of the physical world. Current tactile technologies enhance objects and devices with various actuators to create rich tactile sensations, limiting the experience to the interaction with instrumented devices. In contrast, REVEL can add artificial tactile sensations to almost any surface or object with very little if any instrumentation of the environment. As a result, REVEL can provide dynamic tactile sensations on touch screens as well as everyday objects and surfaces in the environment, such as furniture, walls, wooden and plastic objects, and even human skin. REVEL can be used in many new and exciting applications, including adding tactile feedback to projected content, enhancing the environment with tactile guidance for the visually impaired, or providing personal tactile feedback for multi-user touch surfaces.

  • [VNS] SkyWords: an Engagement Machine at Chicago City Hall
    L. Braun, J. Rivera, K. Hindi, L. Lin, J. Mello, K. Patel, A. Mathew (Illinois Institute of Technology, USA)

    When governments make new policies they often have limited methods for engaging the public and gathering opinions. As a result, policy-making is not always inclusive, and too often important decisions are made by just a few. SkyWords is a site-specific installation or “civic engagement machine” that leveraged technology, interaction design and the universal appeal of play to give hundreds of people the opportunity to participate in government.

  • [VDR] ReMind. A Transformational Object for Procrastinators
    J. Brechmann, M. Hassenzahl, M. Laschke, M. Digel (Folkwang Univ. of the Arts, DE)

    ReMind is a pleasurable troublemaker. It playfully addresses the ever-present human tendency to procrastinate. To do the dishes or not to do the dishes: That is the question. ReMind helps answer it by cruelly reminding you of all the things to be done. Now and then it even pelts overdue chores at you! At the same time, it forgives slips and even allows for some cheating. Because: Nobody is perfect.

  • [VNH] Virtualized Reality
    J. Brieger, J. Ota (Carnegie Mellon Univ., USA)

    Virtual reality is the creation of an entirely digital world. Virtualized reality is the translation of the real world into a digital space. There, the real and virtual unify. We have created an alternate reality in which participants explore their environment in third person. The physical environment is mapped by the Kinect and presented as an abstracted virtual environment. Forced to examine reality from a new perspective, participants must determine where the boundary lies between the perceived and the actual.

  • [VQJ] GravitySpace: Tracking Users and Their Poses in a Smart Room Using a Pressure-Sensing Floor
    A. Bränzel, C. Holz, D. Hoffmann, D. Schmidt, M. Knaust, P. Lühne, R. Meusel, S. Richter, P. Baudisch (Hasso Plattner Institute, DE)

    We explore how to track people and furniture based on a high-resolution pressure-sensitive floor. Gravity pushes people and objects against the floor, causing them to leave imprints of pressure distributions across the surface. While the sensor is limited to sensing direct contact with the surface, we can sometimes conclude what takes place above the surface, such as users’ poses or collisions with virtual objects. We demonstrate how to extend the range of this approach by sensing through passive furniture that propagates pressure to the floor. To explore our approach, we have created an 8 m² back-projected floor prototype, termed GravitySpace, a set of passive touch-sensitive furniture, and algorithms for identifying users, furniture, and poses. Pressure-based sensing on the floor offers four potential benefits over camera-based solutions: (1) it provides consistent wall-to-wall coverage of rooms, (2) it is less susceptible to occlusion between users, (3) it allows for the use of simpler recognition algorithms, and (4) it intrudes less on users’ privacy.
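
    As a rough illustration of imprint-based sensing (not the authors’ actual pipeline; the grid size, threshold and data types here are invented), the first processing step can be pictured as thresholding the pressure image and grouping contiguous active cells into blobs, whose centroids and total pressure then feed identification:

        #include <cstdint>
        #include <queue>
        #include <vector>

        // One contiguous pressure imprint (e.g., a foot or a chair leg).
        struct Blob { int cells = 0; long sumX = 0, sumY = 0, sumP = 0; };

        // Threshold a w-by-h pressure grid and flood-fill 4-connected
        // regions of active cells into blobs. Illustrative sketch only.
        std::vector<Blob> findImprints(const std::vector<uint16_t>& p,
                                       int w, int h, uint16_t threshold) {
            std::vector<int> label(w * h, -1);
            std::vector<Blob> blobs;
            for (int i = 0; i < w * h; ++i) {
                if (p[i] < threshold || label[i] != -1) continue;
                Blob b;
                std::queue<int> q;
                q.push(i);
                label[i] = (int)blobs.size();
                while (!q.empty()) {
                    int j = q.front(); q.pop();
                    int x = j % w, y = j / w;
                    b.cells++; b.sumX += x; b.sumY += y; b.sumP += p[j];
                    const int nbr[4] = { j - 1, j + 1, j - w, j + w };
                    for (int k = 0; k < 4; ++k) {
                        int n = nbr[k];
                        if (n < 0 || n >= w * h) continue;
                        if (k < 2 && n / w != y) continue;  // no row wrap
                        if (p[n] >= threshold && label[n] == -1) {
                            label[n] = label[j];
                            q.push(n);
                        }
                    }
                }
                blobs.push_back(b);  // centroid: (sumX/cells, sumY/cells)
            }
            return blobs;
        }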

  • [VYH] UnoJoy!: A Library for Rapid Video Game Prototyping using Arduino
    A. Chatham, W. Walmink, F. Mueller (RMIT Univ., AU)

    UnoJoy! is a free, open-source library for the Arduino Uno platform that lets users rapidly prototype system-native video game controllers. Using standard Arduino code, users map physical inputs to controller buttons, and can then run a program that overwrites the Arduino firmware so that the board registers as a native game controller on Windows, OS X, and PlayStation 3. Focusing on ease of use, the library allows researchers and interaction designers to quickly experiment with novel interaction methods while using high-quality commercial video games. In our practice, we have used it to add exertion-based controls to existing games and to explore how different controllers can affect the social experience of video games. We hope this tool can help other researchers and designers deepen our understanding of game interaction mechanics by making controller design simple.
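
    For a flavor of the workflow, here is a minimal controller sketch modelled on the library’s published example (pin choices are arbitrary, and field names may vary between UnoJoy versions):

        #include "UnoJoy.h"

        void setup() {
          pinMode(2, INPUT_PULLUP);  // button wired from pin 2 to ground
          pinMode(3, INPUT_PULLUP);  // second button on pin 3
          setupUnoJoy();             // initialise the UnoJoy library
        }

        void loop() {
          dataForController_t c = getBlankDataForController();
          c.crossOn  = !digitalRead(2);        // pressed = LOW
          c.circleOn = !digitalRead(3);
          c.leftStickX = analogRead(A0) >> 2;  // 10-bit ADC to 8-bit axis
          setControllerData(c);                // hand the state to UnoJoy
        }

    After uploading the sketch, the firmware-overwrite step described above makes the board enumerate as a game controller rather than a serial device.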

  • [VTQ] iRotateGrasp: Automatic Screen Rotation based on Grasp of Mobile Devices
    L. Cheng (National Taiwan Univ., TW), M. Lee, C. Wu (National Taiwan Univ., TW), F. Hsiao (National Taiwan Univ., TW), Y. Liu (National Taiwan Univ., TW), H. Liang (National Taiwan Univ., TW), Y. Chiu (National Taiwan Univ., TW), M. Lee (National Taiwan Univ., TW), M. Chen (National Taiwan Univ., TW)

    Automatic screen rotation improves the viewing experience and usability of mobile devices, but current gravity-based approaches do not support postures such as lying on one side, and manual rotation switches require explicit user input. iRotateGrasp automatically rotates the screens of mobile devices to match users’ viewing orientations based on how users are grasping the devices. Our prototype uses a total of 44 capacitive sensors along the four sides and the back of an iPod Touch, and a support vector machine (SVM) to recognize grasps at 25 Hz. We collected six users’ usage under 108 different combinations of posture, orientation, touchscreen operation, and left/right/both hands. Our offline analysis showed that our grasp-based approach is promising, with 80.9% accuracy when training and testing on different users, and up to 96.7% if users are willing to train the system.
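
    The recognition step can be pictured as follows: at 25 Hz the 44 capacitive readings form a feature vector that a trained classifier maps to one of four screen orientations. A minimal sketch of applying one-vs-rest linear SVMs at runtime (the actual features, kernel and training are described in the authors’ work; everything here is a placeholder):

        #include <array>

        constexpr int kSensors = 44;  // capacitive sensors around the device
        constexpr int kOrients = 4;   // portrait, upside-down, left, right

        // A trained one-vs-rest linear SVM per orientation: score = w.x + b.
        struct LinearSVM { std::array<float, kSensors> w; float b; };

        int classifyOrientation(const std::array<float, kSensors>& grasp,
                                const std::array<LinearSVM, kOrients>& models) {
            int best = 0;
            float bestScore = -1e30f;
            for (int o = 0; o < kOrients; ++o) {
                float score = models[o].b;
                for (int i = 0; i < kSensors; ++i)
                    score += models[o].w[i] * grasp[i];
                if (score > bestScore) { bestScore = score; best = o; }
            }
            return best;  // rotate the screen to this orientation
        }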

  • [VUB] iGrasp: Grasp-based Adaptive Keyboard for Mobile Devices
    L. Cheng, H. Liang, C. Wu, M. Chen (National Taiwan Univ., TW)

    Multitouch tablets, such as iPad and Android tablets, support virtual keyboards for text entry. Our 64-user study shows that 98% of the users preferred different keyboard layouts and positions depending on how they were holding these devices. However, current tablets either do not allow keyboard adjustment or require users to manually adjust the keyboards. We present iGrasp, which automatically adapts the layout and position of virtual keyboards based on how and where users are grasping the devices, without requiring explicit user input. Our prototype uses 46 capacitive sensors positioned along the sides of an iPad to sense users’ grasps, and supports two types of grasp-based automatic adaptation: layout switching and continuous positioning. Our two 18-user studies show that participants were able to begin typing 42% earlier using iGrasp’s adaptive keyboard compared to the manually adjustable keyboard.

  • [VMC] HeartLink: Open Broadcast of Live Biometric Data to Social Networks
    F. Curmi, M. Ferrario, J. Southern, J. Whittle (Lancaster Univ., UK)

    HeartLink is a system to crowdsource motivation in real time: users performing a task share their biometric data over social networks, and remote viewers cheer them on in social support. A number of studies in the literature have looked into the use of real-time biometric data to improve one’s own physiological performance and wellbeing. However, there is limited research on the effects that sharing biometric data with others could have on one’s social network. The video documents the design and development of HeartLink, a system that collects real-time personal biometric data such as heart rate and broadcasts this data online to anyone. Insights gained on designing systems to broadcast real-time biometric data are presented. The video also reports on the key results from testing HeartLink in two studies conducted during sport events.

  • [VGP] Cheers — Alcohol-aware Strobing Ice Cubes
    D. Dand (MIT Media Lab, USA)

    After an alcohol-induced blackout, I made self-aware glowing ice cubes that beat to the ambient music. The electronics inside the ice cubes know how fast and how much you are drinking. The cubes change color from green to orange to finally red as you keep drinking beyond the safety limit. If things get out of control, the cubes send a text to your close friend using your smartphone.
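
    The colour behaviour amounts to a simple mapping from estimated consumption to LED colour. A hypothetical Arduino-style fragment (the thresholds and pin assignments are invented for illustration):

        // Drive a common-cathode RGB LED on PWM pins 9-11 (hypothetical wiring).
        void setCubeColor(uint8_t r, uint8_t g, uint8_t b) {
          analogWrite(9, r);
          analogWrite(10, g);
          analogWrite(11, b);
        }

        // Green -> orange -> red as drinking passes the safety limit.
        // Thresholds are invented, not calibrated safety advice.
        void showDrinkLevel(int drinksConsumed) {
          if (drinksConsumed <= 2)      setCubeColor(0, 255, 0);    // green
          else if (drinksConsumed <= 4) setCubeColor(255, 128, 0);  // orange
          else                          setCubeColor(255, 0, 0);    // red
        }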

  • [VVR] What’s Cookin’?: A Platform for Remote Collaboration
    C. Ervin (Harvard Graduate School of Design, USA), D. Dand (MIT Media Lab, USA), R. Hemsley (Massachusetts Institute of Technology, USA), D. Nunez (Massachusetts Institute of Technology, USA), L. Perovich (Massachusetts Institute of Technology, USA)

    What’s Cookin’? is a collaborative cooking system that helps people create meals together even when they’re apart. It is a collection of augmented kitchen tools, surfaces, and a computational representation of meals. Cooking is a social, shared, and sensory experience. The best meals bring friends together and engage all of our senses, from the sizzling of sautéing onions to the texture of toasted bread. Our tool allows people to cook together in a remote, synchronous, collaborative environment by mirroring the social, shared, and sensory nature of the cooking experience. It is particularly suited for collaborations that involve multiple parallel processes that work in isolation but also intersect at various times throughout the collaboration, most notably in assembling the meal.

  • [VJQ] Bridging Book: a Not-So-Electronic Children’s Picturebook
    A. Figueiredo (Univ. of Minho, PT), A. Pinto (Univ. of Minho, PT), N. Zagalo (engageLab, Univ. of Minho, PT), P. Branco (Department of Information Systems, Univ. of Minho, PT)

    In the video we present the Bridging Book, a mixed-media picturebook that blurs the line between physical and electronic books, along with the technical implementation of the prototype. In this prototype, the illustrations of a children’s picture book are complemented by digital content on a tablet placed next to it. Thumbing through the physical book’s pages displays synchronized content on the tablet. The narrative contents can be explored by reading the physical book and further exploring the interaction on the digital media. Thanks to its use of magnets, the physical book requires no batteries or wires.

  • [VXC] SPRWeb: Preserving Subjective Responses to Website Colour Schemes through Automatic Recolouring
    D. Flatla (Univ. of Saskatchewan, CA), K. Reinecke (Harvard Univ., USA), C. Gutwin (Univ. of Saskatchewan, CA), K. Gajos (Harvard Univ., USA)

    Colours are an important part of user experiences on the Web. Colour schemes influence not only the aesthetics, but also our first impressions of and long-term engagement with websites. However, five percent of people perceive only a subset of all colours because they have colour vision deficiency (CVD), resulting in an unequal and presumably less-rich user experience on the Web. Traditionally, people with CVD have been supported by recolouring tools that improve colour differentiability but do not consider the subjective properties of colour schemes while recolouring, so that a ‘warm’ colour scheme may come out ‘cool’. To address this, we developed SPRWeb, a tool that recolours websites to preserve subjective responses and improve colour differentiability, thus enabling users with CVD to have similar online experiences. To develop SPRWeb, we extended existing models of non-CVD subjective responses to people with CVD, then used this extended model to steer the recolouring process. In a lab study, we found that SPRWeb did significantly better than a standard recolouring tool at preserving the temperature and naturalness of websites, while achieving similar weight and differentiability preservation. We also found that recolouring did not preserve activity, and hypothesize that visual complexity influences activity more than colour. SPRWeb is the first tool to automatically preserve the subjective and perceptual properties of website colour schemes, thereby equalizing the colour-based web experience for people with CVD.

  • [VBQ] Engineering: Upfront effort, downstream pay-back
    D. Furniss (Univ. College London, UK), A. Blandford (Univ. College London, UK), B. John (IBM T.J. Watson Research Center, USA)

    In an age where HCI research is embracing softer approaches, we must not neglect the practical value of engineering research for HCI. This video challenges the assumption that engineering interactive systems takes a lot of effort with little pay-back or reward. We take CANCEL as an example. Not many people think about the challenges of implementing CANCEL effectively, even though CANCEL buttons pervade modern life. Adding CANCEL is not just about adding a button: the processes behind the button need to work accurately, reliably, and in a timely way. Engineering research for HCI combines science, guidance and tools to make better products faster: by investing upfront in developing them, engineering typically demands less ongoing effort for higher returns. In one study a company measured its return on investment as high as 17 to 1. This video provokes debate about the importance of engineering for HCI.

  • [VHU] MorePhone: An Actuated Shape Changing Flexible Smartphone
    A. Gomes, A. Nesbitt, R. Vertegaal (Queen’s Univ., CA)

    Flexible technology will enable computing devices to become mutable in shape, behavior and movement, allowing them to form a more dynamic relationship with the environment. We present MorePhone, a prototype flexible smartphone that uses shape deformations as its primary means of both haptic and visual notification. To inform the design of a flexible smartphone that uses shape changes to convey notifications, we conducted a participatory study to determine how users associate urgency and notification type with full-screen, 1-corner, 2-corner and 3-corner actuations of the smartphone. Results suggested that urgent notifications (e.g. alarms, voice calls) were best matched with actuation of the entire display surface, while less urgent notifications (e.g. app notifications) were best matched to individual corner bends.

  • [VUL] Conductive Inkjet Printed DIY Music Control Surface
    N. Gong, N. Zhao, J. Paradiso (Massachusetts Institute of Technology, USA)

    We developed a novel music control sensate surface that enables retrofit integration of any musical instrument with a versatile, customizable, and cost-effective user interface. Our project presents new opportunities in customizable, flexible interface design since, unlike a touch screen, it adapts very well to non-square or non-flat surfaces or surfaces with holes. Our design uses an interactive circuit that is made in a computer-aided design environment and printed with a conductive inkjet printer on a PET substrate. This method allows us to create a functional decoration on the controller surface, combining graphic design and music performance with expressive physical manipulation. We present an example implementation on an electric ukulele and provide several design examples to demonstrate the versatile capabilities of this system.

  • [VZD] Mixsourcing: Turn This into That
    S. Hallacher (ITP, USA), J. Rodenhouse (Microsoft Research, USA), A. Monroy-Hernandez (Microsoft Research, USA)

    We introduce the concept of mixsourcing, a modality of crowdsourcing that uses remixing as a framework to get people to perform creative tasks. We explore this idea through the design of a web-based system called Turn This into That, which combines the structure of task-driven crowdsourcing systems with the free-form creativity of remixing, encouraging individual interpretation and multiple mediums, and connecting existing sources and remixers.

  • [VPN] LightCloth: Senseable Illuminating Optical Fiber Cloth for Creating Interactive Surfaces
    S. Hashimoto (JST ERATO Igarashi Design Interface Project, JP), R. Suzuki (The Univ. of Tokyo, JP), Y. Kamiyama (JST ERATO Igarashi Design Interface Project, JP), M. Inami (Keio Univ., JP), T. Igarashi (The Univ. of Tokyo, JP)

    We introduce an input and output device that enables illumination, bi-directional data communication, and position sensing on a soft cloth. This “LightCloth” is woven from diffusive optical fibers. Sensor-emitter pairs attached to bundles of contiguous fibers enable bundle-specific light input and output. We developed a prototype system that allows full-color illumination and 8-bit data input by infrared signals.

  • [VLR] Cuboino. Extending Physical Games. An Example.
    F. Heibeck (Univ. of Bremen, DE)

    Cuboino is a computationally augmented physical toy system designed as an extension to the existing marble game cuboro. It consists of a set of cubes that are seamlessly compatible with the cuboro cubes. In contrast to the passive cuboro cubes, cuboino modules are active parts of a digital system consisting of sensor cubes, actor cubes and supply cubes. By snapping them together, the player can build a modular system that functions according to the individual functionalities of the cuboino cubes. Cuboino establishes a new pathway that is not embodied in the marble, but adapts to the medium of its transmission. Signals can be received by multiple modules, creating more than one signal at a time. This allows signals to intertwine and thus create more dynamic and complex game outcomes.

  • [VSK] Indirect Shear Force Estimation for Multi-Point Shear Force Operations
    S. Heo, G. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)

    As a way to enrich touch screen interaction, the possibility of using shear forces has recently been explored. However, most studies are restricted to the case of single-point shear forces, possibly due to the difficulty of sensing shear forces at multiple touch points independently. In this video, we introduce a novel method to indirectly estimate shear forces using the movement of contact areas. This method enables multi-point shear force estimation, since the estimation of a shear force is done for each finger independently. We also demonstrate the method with a game application.
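
    The idea can be sketched very simply: while a finger stays pressed, the drift of its contact-area centroid away from the initial touch point serves as a proxy for the shear force on that finger, computed independently per touch point. A toy version with an invented calibration constant:

        // Per-finger contact state.
        struct Contact {
            float x0, y0;  // centroid position at touch-down
            float x, y;    // current centroid position
        };

        // Indirectly estimate the shear force vector for one finger from the
        // movement of its contact area. kStiffness is an invented constant
        // mapping centroid drift to force; a real system would calibrate it.
        void estimateShear(const Contact& c, float& fx, float& fy) {
            const float kStiffness = 0.8f;
            fx = kStiffness * (c.x - c.x0);
            fy = kStiffness * (c.y - c.y0);
        }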

  • [VMX] Smarter Objects: Using AR Technology to Program Physical Objects and their Interactions
    V. Heun (Massachusetts Institute of Technology, USA), S. Kasahara (Sony Corporation, JP), P. Maes (Massachusetts Institute of Technology, USA)

    Graphical user interfaces (GUIs) offer a very flexible interface but require the user’s complete visual attention, whereas tangible user interfaces (TUIs) can be operated with minimal visual attention. To prevent visual overload and provide a flexible yet intuitive user interface, Smarter Objects combines the best of both styles by using a GUI for understanding and programming and a TUI for day-to-day operation. Smarter Objects uses Augmented Reality technology to provide a flexible GUI for objects when that is needed.

  • [VBF] Musical Embrace: Socially Awkward Interactions Through Physical Proximity to Drive Digital Play
    A. Huggard, A. De Mel, J. Garner, C. Toprak, A. Chatham, F. Mueller (RMIT Univ., AU)

    Socially awkward interactions are often regarded as something to be avoided, yet they have the potential to be an ingredient for compelling play. Although examples exist in the non-digital games domain to support this point (e.g. Twister), we have found little exploration of social awkwardness when it comes to digital play. In response, we present Musical Embrace, a digital game that calls for strangers to collaboratively apply pressure, in the form of awkward whole-body movements in close physical proximity, to a novel suspended pillow-like controller as a means of traversing a virtual environment. We use Musical Embrace to identify design tactics that utilize social awkwardness to drive digital play. With our work, we hope to encourage designers to consider socially awkward interactions as a compelling ingredient for digital games.

  • [VVG] Hephaestus and the Senses
    C. Hummels (Eindhoven Univ. of Technology, NL), A. Trotto (Interactive Institute Umeå, SE)

    The need for transformative collaboration among cross-disciplinary stakeholders is becoming paramount, since the complexity of designing (intelligent) systems, products and services has increased rapidly over the last decade. Inspired by phenomenology, pragmatism and embodied cognition, we explore how we can use embodiment and skilful coping to connect people and to catalyse a constructive design “conversation” among people with different backgrounds (Hummels, 2012). During a two-week workshop with Master students at the Department of Industrial Design at the Eindhoven University of Technology, we developed six different interactive Engagement Probes: open, creative and playful tools aimed at engaging people in a design process in a way that is more concrete and effective than a brainstorming session. These Engagement Probes were validated in a five-hour workshop, Hephaestus and the Senses, in which 80 participants from different backgrounds used them as a means to ignite their design process. Every team started by meeting and getting to know each other in a playful way, using their bodies through one of the Engagement Probes. Thereupon, in teams of 4-6 persons, they designed and prototyped an artefact for citizens to socially connect through their senses. The results of the workshop show that the Probes stimulate engagement, help people to get familiar and connect in a short period of time, and inspire and boost a design process with an emphasis on embodiment and tangibility (Trotto and Hummels, submitted).

  • [VTF] Designing Digital Puppetry Systems: Guidelines and Best Practices
    S. Hunter, P. Maes (Massachusetts Institute of Technology, USA)

    This instructive video presents guidelines and best practices for designers of digital puppetry systems by demonstrating four common setups and illustrating the benefits and limitations of each approach. Practical suggestions and humorous examples of green-screening techniques, digital composition, using rod puppets and using a Kinect camera are included to illustrate the possibilities and pitfalls of real-time animation for HCI designers interested in using computer vision to support creative expression with physical objects.

  • [VAK] Testing a Novel Parking System: On-Street Reservations
    E. Isaacs, R. Hoover (Palo Alto Research Center (PARC), USA)

    A good portion of auto congestion in urban downtown areas is caused by people driving around looking for parking spots. We developed a novel parking system that allows people to reserve on-street parking, meant to reduce congestion, increase convenience, and raise revenue. We built prototype parking meters and set them up in the PARC parking lot, where we ran two rounds of usability testing. We asked participants to drive their cars in the parking lot, choose an appropriate spot, and pay the meter. The first round of testing identified a key design issue with the meters, so we came up with three alternative meter designs to address that issue and then compared them in the second round of testing. This video shows how we ran the parking meter testing and the outcome.

  • [VJF] IllumiRoom: Peripheral Projected Illusions for Interactive Experiences
    B. Jones (Univ. of Illinois at Urbana-Champaign, USA), H. Benko (Microsoft Research, USA), E. Ofek (Microsoft Research, USA), A. Wilson (Microsoft Research, USA)

    IllumiRoom is a proof-of-concept system that augments the area surrounding a television with projected visualizations to enhance traditional gaming experiences. Our system demonstrates how projected visualizations in the periphery can negate, include, or augment the existing physical environment and complement the content displayed on the television screen. We can change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new physical gaming experiences. Our system is entirely self-calibrating and is designed to work in any room.

  • [VQT] Xtempo: Music Polaroid for Printing Real-Time Acoustic Guitar Performance
    H. Kim, M. Lee, B. Goo, T. Nam (KAIST (Korea Advanced Institute of Science and Technology), KR)

    We present a real-time musical score printing system called Xtempo. The system supports the recording of more meaningful musical experiences with contextual information: it interprets the music performance and instantly prints the musical score like a Polaroid camera, allowing the recording of musical compositions or improvised play. We applied this system to an acoustic guitar, which can thus be augmented by experience recording. The tablature (for guitar) is also printed with information on the performer’s stroke action. This system shows new potential in logging meaningful, emotional and personalized musical experiences.

  • [VRE] TapBoard: Making a Touch Screen Keyboard More Touchable
    S. Kim, J. Son, G. Lee, H. Kim, W. Lee (KAIST (Korea Advanced Institute of Science and Technology), KR)

    We propose TapBoard, a touch screen software keyboard that regards tapping actions as keystrokes and other touches as touched states. In a series of user studies, we validated the effectiveness of the TapBoard concept. First, we showed that tapping to type is in fact compatible with the existing typing skill of most touch screen keyboard users. Second, users quickly adapted to TapBoard and learned to rest their fingers in a touched state. Finally, we confirmed in a controlled experiment that there is no difference in text entry performance between TapBoard and a traditional touch screen software keyboard. In addition to these experimental results, we demonstrate a few new interaction techniques made possible by TapBoard.
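
    The core discrimination can be pictured as a per-touch timer: a touch that lifts quickly counts as a keystroke, while a touch that lingers is treated as a resting finger. A minimal sketch (the 300 ms cutoff is an assumption, not the threshold from the authors’ studies):

        #include <cstdint>

        constexpr uint32_t kTapTimeoutMs = 300;  // assumed cutoff

        struct Touch {
            uint32_t downTimeMs = 0;
            bool resting = false;  // outlived the timeout; can no longer type
        };

        void onTouchDown(Touch& t, uint32_t nowMs) {
            t.downTimeMs = nowMs;
            t.resting = false;
        }

        // Poll while the touch is held; lingering contacts become rests.
        void onTouchHold(Touch& t, uint32_t nowMs) {
            if (nowMs - t.downTimeMs > kTapTimeoutMs) t.resting = true;
        }

        // On lift-off: true means emit the keystroke for the touched key.
        bool onTouchUp(const Touch& t, uint32_t nowMs) {
            return !t.resting && (nowMs - t.downTimeMs) <= kTapTimeoutMs;
        }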

  • [VXX] Mobile Proxemic Awareness and Control: Exploring the Design Space for Interaction with a Single Appliance
    D. Ledo, S. Greenberg (Univ. of Calgary, CA)

    Computing technologies continue to advance rapidly, yet appliances have remained a comparatively stagnant class of technology. They are restricted by physical and cost limitations while also aiming to provide a great deal of functionality, which leads to limited input capabilities (multiple buttons and combinations) and output capabilities (LEDs, small screens). We introduce the notion of mobile proxemic awareness and control, whereby a mobile device is used as a medium to reveal information about an appliance’s presence, state and content, and to control it, as a function of proxemics. We explore a set of concepts that exploit different proximal distances and levels of information and control. We illustrate the concepts with two deliberately simple prototypes: a lamp and a radio alarm clock.

  • [VGZ] GaussBits: Magnetic Tangible Bits for Portable and Occlusion-Free Near-Surface Interactions
    R. Liang (National Taiwan Univ., TW), K. Cheng (National Taiwan Univ., TW), L. Chan (Academia Sinica, TW), C. Peng (Institute of Commercial Design, TW), M. Chen (National Taiwan Univ., TW), R. Liang (National Taiwan Univ. of Science and Technology, TW), D. Yang (Academia Sinica, TW), B. Chen (National Taiwan Univ., TW)

    We present GaussBits, a system of passive magnetic tangibles that enables 3D tangible interactions in the near-surface space of portable displays. When a thin magnetic sensor grid is attached to the back of the display, the 3D position and partial 3D orientation of the GaussBits can be resolved by the proposed bi-polar magnetic field tracking technique. This portable platform can therefore enrich tangible interactions by extending the design space to the near-surface space. Since non-ferrous materials, such as the user’s hand, do not occlude the magnetic field, interaction designers can freely incorporate a magnetic unit into an appropriately shaped non-ferrous object to exploit metaphors of real-world tasks, and users can freely manipulate the GaussBits by hand or with other non-ferrous tools without causing interference. The presented example applications and the feedback collected from an explorative workshop revealed that this new approach is widely applicable.

  • [VGE] PixelTone: A Multimodal Interface for Image Editing
    J. Linder (Adobe Research, USA), G. Laput (Univ. of Michigan, USA), M. Dontcheva (Adobe Research, USA), G. Wilensky (Adobe Research, USA), W. Chang (Adobe Research, USA), A. Agarwala (Adobe Research, USA), E. Adar (Univ. of Michigan, USA)

    Photo editing can be a challenging task, and it becomes even more difficult on the small, portable screens of mobile devices that are now frequently used to capture and edit images. To address this problem we present PixelTone, a multimodal photo editing interface that combines speech and direct manipulation. In this video, we demonstrate how our system uses natural language for expressing users’ desired changes to an image. We also demonstrate how we combine natural language and touch gestures for creating named references and sketching to localize image operations to specific regions.

  • [VCB] Future Lighting Systems
    R. Magielse, S. Offermans (Eindhoven Univ. of Technology, NL)

    Contemporary lighting systems may consist of many individual light sources that can be controlled on various parameters (e.g. intensity, colour, spatial position). Opening up this freedom of control to the user in a comprehensive manner is therefore a challenge. We present a lighting system with three different interfaces that suit different usage scenarios in terms of control effort and freedom. The system consists of modular ceiling tiles for down-lighting and colored wall-washing for atmospheric lighting. The LightPad allows people to quickly adjust all light sources with an expressive touch; duration and force determine the light color and intensity, respectively. This could be used near the entrance of a space to quickly set the lighting. The LightCube allows users to choose between various presets that are related to different activities; the preset on the cube’s top face is the one activated. The LightApp is a tablet interface that allows users to control many light sources in detail using simple gestures: dragging, pinching, rotating and wiping. This could be used to create specific atmospheres, or to create presets for the LightCube.

  • [VLG] NoteVideo: Facilitating Navigation of Blackboard-style Lecture Videos
    T. Monserrat, S. Zhao, K. Mcgee, A. Pandey (National Univ. of Singapore, SG)

    Khan Academy’s pre-recorded blackboard-style lecture videos attract millions of online users every month. However, current video navigation tools do not adequately support the kinds of goals that students typically have, like quickly finding a particular concept in a blackboard-style lecture video. We present NoteVideo and its improved version, NoteVideo+. The system starts by analyzing and identifying the conceptual ‘objects’ of a blackboard-style lecture video. It then creates a summarized image of the video and uses it as an in-scene navigation interface for the user to interact with. This allows users to directly jump to the video frame where an object first appeared and was discussed, instead of navigating linearly through time.

  • [VDG] Interacting with Microseismic Visualizations
    A. Mostafa, S. Greenberg, E. Vital Brazil, E. Sharlin, M. Costa Sousa (Univ. of Calgary, CA)

    Microseismic visualization systems present complex 3D data of small seismic events within oil reservoirs to allow experts to explore and interact with that data. Yet existing systems suffer several problems: 3D spatial navigation and orientation is difficult, and selecting 3D data is challenging due to occlusion and a lack of depth perception. Our work mitigates these problems by applying proxemic interactions and a spatial input device to simplify how experts navigate through the visualization, and a painting metaphor to simplify how they select that information.

  • [VEM] Joggobot – Jogging with a Flying Robot
    F. Mueller, E. Graether, C. Toprak (RMIT Univ., AU)

    Joggobot, the first autonomous flying robot companion for joggers, illustrates a novel approach towards a more social use of robots, where the robot acts as an exercise companion to make physical activity more enjoyable. Joggobot makes the solo running experience more enjoyable by flying next to you when jogging, offering a coach mode to motivate you to run faster and further, and a “looking after” mode that is similar to jogging with a dog. The results are more enjoyable runs, furthering the many physical health benefits of exercise.

  • [VHK] LaserOrigami: Laser-Cutting 3D Objects
    S. Mueller, B. Kruck, P. Baudisch (Hasso Plattner Institute, DE)

    We present LaserOrigami, a rapid prototyping system that produces 3D objects using a laser cutter. LaserOrigami is substantially faster than traditional 3D fabrication techniques such as 3D printing, and unlike traditional laser cutting, the resulting 3D objects require no manual assembly. The key idea behind LaserOrigami is that it achieves three-dimensionality by folding and stretching the workpiece, rather than by placing joints, thereby eliminating the need for manual assembly. LaserOrigami achieves this by heating up selected regions of the workpiece until they become compliant and bend down under the force of gravity. LaserOrigami administers the heat by defocusing the laser, which distributes the laser’s power across a larger surface. LaserOrigami implements cutting and bending in a single integrated process by automatically moving the cutting table up and down—when users take out the workpiece, it is already fully assembled. We present the three main design elements of LaserOrigami: the bend, the suspender, and the stretch, and demonstrate how to use them to fabricate a range of physical objects. Finally, we demonstrate an interactive fabrication version of LaserOrigami, a process in which user interaction and fabrication alternate step-by-step.

  • [VPD] Smile! Box
    J. Ota (Carnegie Mellon Univ., USA)

    In this digital age, where it is commonplace and easy to spoof passwords and other security measures, a more robust and personalized form of security needs to be developed. Our identities are not who we are now, but are shaped by our history and our relationships. What if objects to which we entrust our most valuable possessions could become like close confidants? What if your possessions could remember your interactions with them and, like a best friend, could determine who you were no matter the circumstance? A speculative design fiction exploration, created with Processing, Arduino and FaceOSC.

  • [VFT] POKE: A New Way of Sharing Emotional Touches during Phone Conversations
    Y. Park, T. Nam (KAIST (Korea Advanced Institute of Science and Technology), KR)

    We present POKE, a device that enables callers to share emotional touches during calls. POKE delivers touches through an inflatable surface on the front of the device, driven by index-finger pressure input on the back of the other caller’s device, while allowing the callers to maintain a conventional phone-calling posture. A user can also perceive an incoming call through POKE’s physical movement when the other person’s touch input makes the surface move up and down. Callers can send different touches according to pressure strength, frequency, and pattern. POKE also enables non-verbal tactile communication by exchanging pokes and poke-backs. This opens possibilities for developing affective tactile languages over phone calls.

  • [VKL] Intentacles: Wearable Interactive Antennae to Sense and Express Emotion
    M. Petre, D. Bowers, T. Baker, E. Copcutt, A. Lawson, A. Martindale, B. Moses, Y. Yan (The Open Univ., UK)

    ‘Intentacles’ are wearable, programmable antennae intended to augment the wearer’s communication of emotion. The antennae move in 3D and change colour, controlled by an Arduino Nano. Their behaviour can be controlled directly using buttons, or programmed to respond to sensor inputs: Intentacles can respond to external stimuli such as music, light levels, or temperature, and/or to the wearer’s physiological responses, such as pulse and movement. Preliminary user studies found that antennae actions which mirror stereotypical hand and eyebrow gestures are easily understood by other people. It helps if the actions are executed well, and if the antennae move intermittently rather than continuously. After a while, people stop paying direct attention to the antennae per se, but they continue to notice changes and to engage with the emotional expression. Further work will study the antennae in different social interactions.

  • [VRZ] Copy Paste Skate
    S. Pijnappel, F. Mueller (RMIT Univ., AU)

    Interactive technology can support exertion activities, with many examples focusing on improving athletic performance. We see an opportunity for technology to also support extreme sports such as skateboarding, which often focus primarily on the experience of doing tricks rather than on athletic performance. However, there is little knowledge on how to design for such experiences. In response, we designed 12 basic skateboarding prototypes inspired by skateboarding theory. Using an autoethnographical approach, we skated with each of these and reflected on our experiences in order to derive four design themes: location of feedback in relation to the skater’s body, timing of feedback in relation to peaks in emotions after attempts, aspects of the trick emphasized by feedback, and aesthetic fittingness of feedback. As an exemplification and elaboration of this work, we designed an interactive skateboarding system called Copy Paste Skate, using the four themes as a guide. We hope our work will inspire and guide designers and practitioners in the field of interactive systems for skateboarding and trick-focused sports in general, and will further our understanding of how to design for the active human body.

  • [VKB] UbiRing My Bell
    M. Rissanen, H. Iroshan, O. Fernando, W. Toh, J. Hong, S. Vu, N. Pang, S. Foo (Nanyang Technological Univ., SG)

    “UbiRing My Bell” is a concept video presenting UbiRing and UbiBracelet, wearable devices that enable triggering of smartphone mnemonics such as making phone calls, sending SMS or tweeting. The video demonstrates how UbiRing and UbiBracelet can be used to create mnemonics freely on any static object and trigger them by free-air pointing. The interaction provided by these devices is rapid, subtle and socially acceptable. Usage scenarios including assistive technology, location and context sensitivity, tangible interaction and activity sensing are also visualized. Additionally, UbiRing could also be used by young children to make phone calls to parents or during play with toys.

  • [VXM] TouchViz: (Multi)Touching Multivariate Data
    J. Rzeszotarski, A. Kittur (Carnegie Mellon Univ., USA)

    In this video we show TouchViz, a software system for visualizing multivariate data that harnesses the physical, embodied nature of tablet computers and physical models such as gravity and force to allow users to explore data along many dimensions at once. Data are represented as actual physical objects that can be manipulated through user touches, tilts, and finger gestures. TouchViz provides an open sandbox for user interaction, supplying an array of force-based tools for structuring and manipulating data made physical. These tools promote curiosity, play, and exploration, leading users to trends and actionable findings encoded in data. By closely mimicking real-world force, gravity, and momentum, TouchViz allows users to explore many dimensions at once through multitouch interactions.

  • [VFJ] Patchworks: Citizen-Led Innovation for Chaotic Lives
    J. Southern (Lancaster Univ., UK), R. Dillon (Lancaster Univ., UK), R. Potts (Lancaster Univ., UK), D. Morrell (Manchester Business School, UK), M. Ferrario (Lancaster Univ., UK), W. Simm (Lancaster Univ., UK), R. Ellis (Lancaster Univ., UK), J. Whittle (Lancaster Univ., UK)

    #Patchworks is a citizen-led co-design project to design innovative technologies for and with homeless people in the North West of England. The project involved three quite distinct communities: Signposts, a community resource centre that works with homeless people in Morecambe; Madlab, a community of DIY hackers and creative technologists in Manchester; and an interdisciplinary community of academics from bio-medicine, computing, art-design, anthropology and management science from Lancaster University, as part of the Catalyst project (www.catalystproject.org.uk). This video was made to communicate the Patchworks research to a wider public audience. It shows how, through a series of practical electronics workshops and discussions, the co-design team developed a prototype to help homeless people, whose lives are often characterised as ‘chaotic’, to access important appointments.

  • [VPY] Flexpad: A Highly Flexible Handheld Display
    J. Steimle (Massachusetts Institute of Technology, USA), A. Jordt (Kiel Univ. of Applied Sciences, DE), P. Maes (Massachusetts Institute of Technology, USA)

    This video demonstrates Flexpad, a highly flexible display interface. Flexpad introduces a novel way of interacting with flexible displays by using detailed deformations. Using a Kinect camera and a projector, Flexpad transforms virtually any sheet of paper or foam into a flexible, highly deformable and spatially aware handheld display. It uses a novel approach for tracking deformed surfaces from depth images very robustly, in high detail and in real time. As a result, the display is considerably more deformable than previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation. We illustrate these unique capabilities through three application examples: curved cross-cuts in volumetric images, deforming virtual paper characters, and slicing through time in videos.

  • [VUV] PaperTab: Tablets as Thin and Flexible as Paper
    A. Tarun (Queen’s Univ. Kingston, CA), P. Wang (Queen’s Univ. Kingston, CA), P. Strohmeier (Queen’s Univ., CA), A. Girouard (Carleton Univ., CA), D. Reilly (Dalhousie Univ., CA), R. Vertegaal (Queen’s Univ., CA)

    We present PaperTab, a paper tablet computer that allows physical manipulation of windows embodied in multiple flexible displays. PaperTab offers the benefits of updating electronic information on the fly while maintaining the haptic/kinesthetic feedback of tangible documents, as each document is a fully functional, paper-like E Ink display. We present windowing techniques for a paper computer that relies on multiple physical windows. Our between-display interactions are based on the proximity of a display to the user and are categorized into hot zones for active editing, warm zones for temporary storage, and cold zones for long-term storage. Our within-display interactions use pointing with a display as a focus+context tool.
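
    The zoning model reduces to classifying each tracked display by its distance from the user. A minimal sketch, with boundary distances invented for illustration:

        // PaperTab-style proximity zones for a tracked display.
        enum class Zone { Hot, Warm, Cold };

        Zone classifyDisplay(float distanceFromUserCm) {
            if (distanceFromUserCm < 30.0f) return Zone::Hot;   // in hand: active editing
            if (distanceFromUserCm < 80.0f) return Zone::Warm;  // within reach: temporary storage
            return Zone::Cold;                                  // far away: long-term storage
        }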

  • VSUCrafting Wearables: Interaction Design Meets Fashion Design
    O. Tomico (Eindhoven Univ. of Technology, NL), M. van Zijverden (ArtEZ, NL), T. Fejér (Eindhoven Univ. of Technology, NL), Y. Chen (ArtEZ, NL), S. Aïssaoui (ArtEZ, NL), E. Lubbers (Eindhoven Univ. of Technology, NL), V. Schepperheyn (ArtEZ, NL), M. Heuvelings (Eindhoven Univ. of Technology, NL)

    Loom is part of a set of wearables that explored the boundaries between the human body, its movement and the technological possibilities.
    As people’s intimate relation with all kinds of technologies evolves, new expressive and interactive technologies are becoming relevant for the field of design. Loom is a garment that fits tight around the upper body, supporting the posture and preventing large movements. Small movements therefore become the focus of the interaction. Through the use of NiTi wires the collar moves upward; by hand the collar can be pushed down. The continuous moving up and pushing down creates a subtle touch on the neck, supporting relaxation and meditation activities. Loom is part of a set of wearables [1] that explored the boundaries between the human body, its movement and the technological possibilities. The goal was to blend phenomenology [2], interaction design, and fashion design in order to create new design practices.

  • VECDynamic Duo: Phone-Tablet Interaction on Tabletops
    P. Tommaso (Chalmers Univ. of Technology, SE), S. Zhao (National Univ. of Singapore, SG), G. Ramos (Microsoft Corporation, USA), A. Yantaç (Chalmers Univ. of Technology, SE), M. Fjeld (Chalmers Univ. of Technology, SE)

    Dynamic Duo (DD) is a) a design space of distributed input and output solutions relying on phone-tablet combinations working together; b) a mobile framework; and c) a range of conceptual prototypes. http://www.t2i.se/?page_id=1043
    As an increasing number of users carry smartphones and tablets simultaneously, there is an opportunity to leverage these two form factors in a more complementary way. Our work aims to explore this by a) defining the design space of distributed input and output solutions that rely on and benefit from phone-tablet combinations working together physically and digitally; and b) revealing the idiosyncrasies of each particular device combination via interactive prototypes. Our research provides actionable insight into this emerging area by defining a design space, suggesting a mobile framework, and implementing prototypical applications in such areas as distributed information display, distributed control, and combinations of these. For each of these, we show a few example techniques and demonstrate an application combining several techniques.

  • VEYCart-Load-O-Fun: Designing Digital Games for Trams
    C. Toprak (RMIT Univ., AU), J. Platt (RMIT Univ., AU), H. Ho (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

    Cart-Load-O-Fun is an experimental game designed to be played by passengers on trams to create more engaging commutes for players as well as observers.
    Travelling on public transport can often be an unengaging experience. We see an opportunity to enrich the public transport experience by utilizing digital play in this space, and in response explore the design of a digital game for trams. Cart-Load-O-Fun acts as a research vehicle to understand how games for public transport should be designed. We present findings from a study of passengers playing the game. We hope that these findings will help designers who aim to facilitate play on public transport to evoke playfulness in the users of these spaces, ultimately allowing for a more engaging experience.

  • VKVA Design-led Inquiry into Personhood in Dementia
    J. Wallace (Northumbria Univ., UK), P. Wright (Newcastle Univ., UK), J. McCarthy (Univ. College Cork, IE), D. Green (Newcastle Univ., UK), J. Thomas (Northumbria Univ., UK), P. Olivier (Newcastle Univ., UK)

    A design-led, co-creative inquiry into personhood with Gillian, who has dementia, and John, her husband – mediated by Design Probes and resulting in Digital Jewellery to support personhood and relationships.
    Our video sets the context for a research project that sought to bring a person with dementia (Gillian) right into the heart of a design process in order to explore together the experiential qualities of dementia and how co-creative design of (digital) jewellery could support Gillian’s sense of self and personhood. Pieces were made for Gillian from intrinsically personal materials, referencing things from her life. Bespoke probes designed to scaffold reflection and dialogue around social/relational aspects of personhood facilitated acts of deep, multi-faceted remembering and sense-making, and can be used as such by others. The research established guides for design relating to anchoring, capturing, supporting sense of self, and connecting to relationships. New insights were revealed around temporal considerations for design in dementia, forced by the orientation of self toward relationship and change.

  • VZNLumaHelm – an Interactive Helmet
    W. Walmink (RMIT Univ., AU), A. Chatham (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

    LumaHelm is an interactive helmet that lights up, supporting us in communicating, expressing ourselves and engaging in play.
    We wear helmets to protect us from injury, but how much more can they do for us in their everyday use? LumaHelm turns the helmet into a display through which we can communicate, express ourselves and play. We are exploring how this can make cycling safer, make skateboarding more expressive, improve communication on construction sites, and enrich any other activity requiring a helmet. Through this design and research process we want to find out what wearable technology of the future may look like and how it can be more intimately integrated into our everyday lives.
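    As a rough illustration of driving such a helmet display, the sketch below renders a blinking turn signal on an invented 8x8 LED grid; the grid size and API are our assumptions, not the authors’ firmware:

      GRID = 8

      def turn_signal_frame(direction, on):
          """Light the signalled half of an 8x8 helmet grid (1 = LED on)."""
          cols = range(GRID // 2, GRID) if direction == "right" else range(GRID // 2)
          return [[1 if (on and c in cols) else 0 for c in range(GRID)]
                  for _ in range(GRID)]

      # Blink by alternating on/off frames while the cyclist signals right.
      for tick in range(4):
          frame = turn_signal_frame("right", on=(tick % 2 == 0))
          print(frame[0])  # top LED row of each frame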

  • VZYDuel Reality
    W. Walmink (RMIT Univ., AU), A. Chatham (RMIT Univ., AU), F. Mueller (RMIT Univ., AU)

    Duel Reality is a digitally enabled sword-fighting game exploring how we can create compelling exertion gameplay by intentionally hiding biofeedback data from players, but not their opponents.
    Duel Reality is a digitally enabled sword-fighting game. It explores how we could create compelling exertion gameplay by intentionally hiding biofeedback from players, but not their opponents. In our game, the data coming from your heart-rate sensor determines where the opponent should hit you. However, you do not see this information yourself, so you will have to observe your body to estimate what your heart rate is doing. If players had perfect knowledge of their heart rate, they could easily defend themselves; since a player has no direct feedback from the sensors, their defence is weakened by their imperfect knowledge. Rather than trying to reduce the disparity between a player’s physical state and their awareness of that state, we use our game as a platform to study ways to harness this disparity to create novel gameplay.
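    The core rule is easy to state in code. This hypothetical sketch buckets a heart-rate reading into a hit zone that only the opponent’s display reveals; the zones and the mapping are invented for illustration:

      import random

      ZONES = ("head", "torso", "left arm", "right arm")

      def target_zone(heart_rate_bpm):
          """Bucket a heart-rate reading into one of four hit zones."""
          return ZONES[min(heart_rate_bpm // 50, len(ZONES) - 1)]

      hr = random.randint(60, 180)  # stand-in for a live chest-strap reading
      print("opponent's display: strike the", target_zone(hr))
      # The player's own display deliberately shows nothing about hr.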

  • VCVMetaSolid – On Flexibility and Rigidity in Future User Interfaces
    C. Winkler (Royal College of Art, UK), J. Steimle (Massachusetts Institute of Technology, USA), P. Maes (Massachusetts Institute of Technology, USA)

    MetaSolid is an imaginary material that allows us to explore the potential of future flexible interfaces to redefine our relationship with digital media and our physical world.
    MetaSolid is an imaginary material that changes its state between soft and solid on demand. It allows us to explore the potential of future flexible interfaces that programmatically control their material characteristics. When is flexibility needed in an interface, and when is rigidity better? This video sketch proposes novel interactive and collaborative experiences around the idea of MetaSolid to stimulate future research and development on novel materials and interaction techniques. Besides classical gestures like folding or bending the interface, new gestures appear, such as crumpling, stretching or tickling. These enable the user to easily form, display and modify virtual representations of real physical objects.

  • VAJInteractive Cognitive Aids in Medicine
    L. Wu (Stanford Univ., USA), J. Cirimele (Stanford Univ., USA), K. Leach (Stanford Univ., USA), L. Chu (Stanford Univ., USA), K. Harrison (Stanford Univ., USA), S. Card (Stanford Univ., USA), S. Klemmer (Stanford Univ., USA)

    Cognitive aids such as checklists benefit medical teams in crisis care. This video shows physicians reacting to a simulated OR emergency, demonstrating potential benefits of interactive cognitive aids in medicine.
    Cognitive aids such as checklists have been shown to benefit medical teams working in routine and crisis environments. This video presents a team of physicians reacting to a simulated operating room emergency, demonstrating potential benefits of interactive cognitive aids in medicine.

  • VMMWorldKit: Rapid and Easy Creation of Ad-hoc Interactive Applications on Everyday Surfaces
    R. Xiao (Carnegie Mellon Univ., USA), C. Harrison (Carnegie Mellon Univ., USA), S. Hudson (Carnegie Mellon Univ., USA)

    We describe the WorldKit system, which makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive.
    Instant access to computing, when and where we need it, has long been one of the aims of research areas such as ubiquitous computing. In this paper, we describe the WorldKit system, which makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sat down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Further, it is extensible to new, custom interactors in a way that closely mimics conventional 2D graphical user interfaces, hiding much of the complexity of working in this new domain. We detail the hardware and software implementation of our system, and several example applications built using the library.
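    The interactor abstraction the abstract describes might look roughly like this sketch, where a touch region is “painted” onto a surface and wired to a callback; the class name and coordinates are hypothetical, not the actual WorldKit API:

      class PaintedButton:
          """A touch region painted onto a real surface (projector coordinates)."""

          def __init__(self, region, on_touch):
              self.region = region      # (x, y, width, height)
              self.on_touch = on_touch  # fired when the depth camera sees a touch

          def handle_touch(self, x, y):
              rx, ry, rw, rh = self.region
              if rx <= x <= rx + rw and ry <= y <= ry + rh:
                  self.on_touch()

      # Wire a desk region to a handler, much like adding a widget to a 2D GUI.
      lamp = PaintedButton((100, 200, 80, 40), lambda: print("toggle desk lamp"))
      lamp.handle_touch(130, 225)  # simulated touch event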

  • VCLMirrorFugue III: Conjuring the Recorded Pianist
    X. Xiao (Massachusetts Institute of Technology, USA), P. Aguilera (Massachusetts Institute of Technology, USA), J. Williams (Massachusetts Institute of Technology, USA), H. Ishii (Massachusetts Institute of Technology, USA)

    Inspired by reflections on a grand piano, MirrorFugue combines a player-piano’s moving keys with projection to evoke the impression of a pianist’s presence. We depict interactions across distance and time.
    The body channels rich layers of information when playing music, from intricate manipulations of the instrument to vivid personifications of expression. But when music is captured and replayed across distance and time, the performer’s body is rarely present. MirrorFugue conjures the recorded performer at the piano by combining the moving keys of a player piano with life-sized projection of the pianist’s hands and upper body. Inspired by reflections on a lacquered grand piano, MirrorFugue evokes the sense that the virtual pianist is playing the physically moving keys. This video tells two stories of interactions across space and time mediated by MirrorFugue. One presents a great concert broadcast across the world, where a young pianist learns by playing along. The other depicts a woman who plays a duet with her childhood self.

  • VYSLiberi: Bringing Action to Exergames for Children with Cerebral Palsy
    Z. Ye (Queen’s Univ., CA), H. Hernandez (Queen’s Univ., CA), N. Graham (Queen’s Univ., CA), D. Fehlings (Holland Bloorview Kids Rehabilitation Hospital, CA), L. Switzer (Holland Bloorview Kids Rehabilitation Hospital, CA)

    This video presents Liberi, an action-oriented exergame for children with cerebral palsy, showing that it is possible to develop fast-paced videogames that are playable by children with CP.
    Children with cerebral palsy (CP) want to play fast-paced, action-oriented videogames similar to those played by their friends. This is particularly true of exergames, whose physically active gameplay matches the fast pace of action games. But disabilities resulting from CP can make it difficult to play action games, and guidelines for developing games for people with motor disabilities steer away from high-paced action. Through a year-long participatory process with children with CP, we developed Liberi, an action-oriented exergame that shows how to bring action to exergames for children with CP at level III on the Gross Motor Function Classification Scale. A follow-up eight-week home trial found Liberi to be playable and enjoyable.

  • VRPStickEar: Augmenting Objects and Places Wherever Whenever
    K. Yeo (Singapore Univ. of Technology and Design, SG), S. Nanayakkara (Singapore Univ. of Technology and Design, SG)

    Sticky notes provide a means of anchoring visual information on physical objects while having the versatility of being redeployable and reusable. StickEar encapsulates sensor network technology in the form factor of a sticky note that has a tangible user interface, offering the affordances of redeployability and reusability. It features a distributed set of network-enabled, sound-based sensor nodes. StickEar is a multi-function input/output device that enables sound-based interactions for applications such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and control of digital devices using sound. In addition, multiple StickEars can interact with each other to perform novel input and output tasks. We believe this work will provide non-expert users with an intuitive and seamless way of interacting with the environment and its artifacts through sound.
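    A StickEar-style node’s autonomous response to sound might reduce to a threshold trigger, as in this minimal sketch; the class, threshold, and notification below are assumptions for illustration, not the authors’ implementation:

      class SoundNode:
          """A sticky-note-sized node that reacts when sound crosses a threshold."""

          def __init__(self, name, threshold_db, on_event):
              self.name = name
              self.threshold_db = threshold_db
              self.on_event = on_event

          def sample(self, level_db):
              # Autonomous response: fire the callback on loud events.
              if level_db >= self.threshold_db:
                  self.on_event(self.name, level_db)

      door = SoundNode("front-door", 70,
                       lambda name, db: print(f"{name}: {db} dB -> notify phone"))
      for reading in (42, 55, 78):  # stand-in microphone samples
          door.sample(reading)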