nZwarm: a swarm of luminous sea creatures that interact with passers-by

Installation
Thomas Schroepfer, Suranga Nanayakkara, Anusha Withana, Thomas Wortmann and Juan Pablo Cortes
In Wellington LUX 2014 (Wellington Waterfront, New Zealand, 22–31 August, 2014).

Abstract
nZwarm is a swarm of luminous “sea creatures” that interact with passers-by. Subtle and hardly visible by day, nZwarm comes alive at night. As daylight fades, the cells of nZwarm illuminate the waters of Wellington’s waterfront lagoon with fluorescent light reminiscent of natural phenomena such as bioluminescent algae or the Aurora Borealis. You will be able to interact with nZwarm through your mobile phone which will cause subtle modulations of its light patterns.

PaperPixels : A Toolkit to Create Paper-based Displays

Conference Full Paper
Roshan Peiris, Suranga Nanayakkara
In Proceedings of the 26th Annual CHISIG Australian Computer-Human Interaction Conference (Sydney, Australia, December 2–5, 2014). OZCHI’14. ACM, New York, NY.

Abstract
In this paper we present PaperPixels, a toolkit for creating subtle and ambient animations on regular paper. This toolkit consists of two main components: (1) modularised plug-and-play elements (PaperPixels elements) that can be attached to the back of regular paper; (2) a GUI (graphical user interface) that allows users to stage the animation in a timeline format. A user would simply draw on regular paper, attach PaperPixels elements behind the regions that need to be animated, and specify the sequence of appearing and disappearing by arranging icons on a simple GUI. Observations made during a workshop at a local maker faire showed the potential of PaperPixels being integrated into many different applications, such as animated wallpapers and animated storybooks.
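To make the timeline-staging idea concrete, the Python sketch below models an appear/disappear schedule for PaperPixels-style elements. It is only an illustration: the TimelineEvent structure, the element IDs and the send_command() hook are assumptions, not the toolkit's actual API.

```python
# Minimal sketch (not the authors' code): staging a PaperPixels-style
# appear/disappear timeline. Element IDs and send_command() are placeholders.
import time
from dataclasses import dataclass

@dataclass
class TimelineEvent:
    t: float          # seconds from the start of the animation
    element_id: int   # which PaperPixels element to drive
    visible: bool     # True = fade in, False = fade out

def send_command(element_id: int, visible: bool) -> None:
    # Placeholder for the actuation layer (e.g. a serial write to the
    # element driver). Here we just print the intended state change.
    print(f"element {element_id} -> {'on' if visible else 'off'}")

def play(timeline: list[TimelineEvent]) -> None:
    """Step through the staged events in time order."""
    start = time.monotonic()
    for ev in sorted(timeline, key=lambda e: e.t):
        delay = ev.t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_command(ev.element_id, ev.visible)

if __name__ == "__main__":
    # A "sun" (element 0) appears, a "cloud" (element 1) drifts in, and the
    # sun fades out -- the kind of sequence staged on the GUI timeline.
    play([
        TimelineEvent(0.0, 0, True),
        TimelineEvent(2.0, 1, True),
        TimelineEvent(3.0, 0, False),
    ])
```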

Enhancing Musical Experience for the Hearing-impaired using Visual and Haptic Inputs

Journal
Suranga Nanayakkara, Lonce Wyse, S H Ong, Elizabeth Taylor
Human-Computer Interaction, 28 (2), pp. 115–160, 2013

Abstract
This article addresses the broad question of understanding whether and how a combination of tactile and visual information could be used to enhance the experience of music by the hearing impaired. Initially, a background survey was conducted with hearing-impaired people to find out the techniques they used to “listen” to music and how their listening experience might be enhanced. Information obtained from this survey and feedback received from two profoundly deaf musicians were used to guide the initial concept of exploring haptic and visual channels to augment a musical experience.

The proposed solution consisted of a vibrating “Haptic Chair” and a computer display of informative visual effects. The Haptic Chair provided sensory input of vibrations via touch by amplifying vibrations produced by music. The visual display transcoded sequences of information about a piece of music into various visual sequences in real time. These visual sequences initially consisted of abstract animations corresponding to specific features of music such as beat, note onset, tonal context, and so forth. In addition, because most people with impaired hearing place emphasis on lip reading and body gestures to help understand speech and other social interactions, their experiences were explored when they were exposed to human gestures corresponding to musical input.

Rigorous user studies with hearing-impaired participants suggested that musical representation for the hearing impaired should focus on staying as close to the original as possible and is best accompanied by conveying the physics of the representation via an alternate channel of perception. All the hearing-impaired users preferred either the Haptic Chair alone or the Haptic Chair accompanied by a visual display. These results were further strengthened by the fact that user satisfaction was maintained even after continuous use of the system over a period of 3 weeks. One of the comments received from a profoundly deaf user when the Haptic Chair was no longer available (“I am going to be deaf again”), poignantly expressed the level of impact it had made.

The system described in this article has the potential to be a valuable aid in speech therapy, and a user study is being carried out to explore the effectiveness of the Haptic Chair for this purpose. It is also expected that the concepts presented in this paper would be useful in converting other types of environmental sounds into a visual display and/or a tactile input device that might, for example, enable a deaf person to hear a doorbell ring, footsteps approaching from behind, or a person calling him or her, or to make understanding conversations or watching television less stressful. Moreover, the prototype system could be used as an aid in learning to play a musical instrument or to sing in tune.

This research work has shown considerable potential in using existing technology to significantly change the way the deaf community experiences music. We believe the findings presented here will add to the knowledge base of researchers in the field of human–computer interaction interested in developing systems for the hearing impaired.

EarPut: Augmenting Ear-worn Devices for Ear-based Interaction

Conference Full Paper
Roman Lissermann, Jochen Huber, Aristotelis Hadjakos, Suranga Nanayakkara and Max Mühlhäuser.
In Proceedings of the 26th Annual CHISIG Australian Computer-Human Interaction Conference (Sydney, Australia, December 2–5, 2014). OZCHI’14. ACM, New York, NY.

Abstract
One of the pervasive challenges in mobile interaction is decreasing the visual demand of interfaces towards eyes-free interaction. In this paper, we focus on the unique affordances of the human ear to support one-handed and eyes-free mobile interaction. We present EarPut, a novel interface concept and hardware prototype, which unobtrusively augments a variety of accessories that are worn behind the ear (e.g. headsets or glasses) to instrument the human ear as an interactive surface. The contribution of this paper is three-fold. We contribute (i) results from a controlled experiment with 27 participants, providing empirical evidence that people are able to target salient regions on their ear effectively and precisely, (ii) a first, systematically derived design space for ear-based interaction and (iii) a set of proof of concept EarPut applications that leverage on the design space and embrace mobile media navigation, mobile gaming and smart home interaction.

SpiderVision: Extending the Human Field of View for Augmented Awareness

Conference Full Paper
Kevin Fan, Jochen Huber, Suranga Nanayakkara, Masahiko Inami.
In Proceedings of the 5th Augmented Human International Conference (AH ’14), ACM, Article 49

Abstract
We present SpiderVision, a wearable device that extends the human field of view to augment a user’s awareness of things happening behind one’s back. SpiderVision leverages a front and back camera to enable users to focus on the front view while employing intelligent interface techniques to cue the user about activity in the back view. The extended back view is only blended in when the scene captured by the back camera is analyzed to be dynamically changing, e.g. due to object movement. We explore factors that affect the blended extension, such as view abstraction and blending area. We contribute results of a user study that explore 1) whether users can perceive the extended field of view effectively, and 2) whether the extended field of view is considered a distraction. Quantitative analysis of the users’ performance and qualitative observations of how users perceive the visual augmentation are described.
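As an illustration of the blending rule described above, the following Python sketch derives a blend factor from frame-to-frame change in the back camera, so the back view is only overlaid when something is moving. The threshold, gain and synthetic frames are assumptions, not the SpiderVision implementation.

```python
# Illustrative sketch: blend the back-camera view in only when consecutive
# frames differ enough, i.e. when the scene behind the user is changing.
import numpy as np

def blend_factor(prev_frame: np.ndarray, cur_frame: np.ndarray,
                 threshold: float = 8.0, gain: float = 0.05) -> float:
    """Return an alpha in [0, 1] for overlaying the (abstracted) back view."""
    # Mean absolute difference is a crude stand-in for motion analysis.
    motion = np.mean(np.abs(cur_frame.astype(float) - prev_frame.astype(float)))
    return float(np.clip((motion - threshold) * gain, 0.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 255, (120, 160), dtype=np.uint8)  # previous back frame
    cur = prev.copy()
    cur[40:80, 60:100] = 255            # simulated movement behind the user
    print(f"blend alpha = {blend_factor(prev, cur):.2f}")
```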

FingerDraw: More than a Digital Paintbrush

Conference Short Paper
Anuruddha Hettiarachchi, Suranga Nanayakkara, Kian Peen Yeo, Roy Shilkrot, Pattie Maes
In Proceedings of the 4th Augmented Human International Conference (Stuttgart, Germany, March 8–9, 2013). AH’13. New York, NY. 1–4.

Abstract
Research in cognitive science shows that engaging in visual arts has great benefits for children particularly when it allows them to bond with nature [7]. In this paper, we introduce FingerDraw, a novel drawing interface that aims to keep children connected to the physical environment by letting them use their surroundings as templates and color palette. The FingerDraw system consists of (1) a finger-worn input device [13] which allows children to upload visual contents such as shapes, colors and textures that exist in the real world; (2) a tablet with touch interface that serves as a digital canvas for drawing. In addition to real-time drawing activities, children can also collect a palette of colors and textures in the input device and later feed them into the drawing interface. Initial reactions from a case study indicated that the system could keep a child engaged with their surroundings for hours to draw using the wide range of shapes, colors and patterns found in the natural environment.

The Hybrid Artisans: A Case Study in Smart Tools

Journal
Amit Zoran, Roy Shilkrot, Suranga Nanayakkara, Joseph Paradiso
ACM Transactions on Computer-Human Interaction (TOCHI), 21 (3), Article no. 15

Abstract
We present an approach to combining digital fabrication and craft, demonstrating a hybrid interaction paradigm where human and machine work in synergy. The FreeD is a hand-held digital milling device, monitored by a computer while preserving the maker's freedom to manipulate the work in many creative ways. Relying on a pre-designed 3D model, the computer gets into action only when the milling bit risks the object's integrity, preventing damage by slowing down the spindle speed, while the rest of the time it allows complete gestural freedom. We present the technology and explore several interaction methodologies for carving. In addition, we present a user study that reveals how synergetic cooperation between human and machine preserves the expressiveness of manual practice. This quality of the hybrid territory evolves into design personalization. We conclude on the creative potential of open-ended procedures within this hybrid interactive territory of manual smart tools and devices.

FingerReader: A Wearable Device to Support Text-Reading on the Go

Conference Short Paper
Roy Shilkrot, Jochen Huber, Connie K. Liu, Pattie Maes, Suranga Nanayakkara
In Proceedings of CHI ’14 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’14). ACM, New York, NY, USA, p. 2359-2364.

Abstract
Visually impaired people report numerous difficulties with accessing printed text using existing technology, including problems with alignment, focus, accuracy, mobility and efficiency. We present a finger worn device that assists the visually impaired with effectively and efficiently reading paper-printed text. We introduce a novel, local-sequential manner for scanning text which enables reading single lines, blocks of text or skimming the text for important sections while providing real-time auditory and tactile feedback. The design is motivated by preliminary studies with visually impaired people, and it is small-scale and mobile, which enables a more manageable operation with little setup.

Workshop on Assistive Augmentation

Conference Workshop
Jochen Huber, Jun Rekimoto, Masahiko Inami, Roy Shilkrot, Pattie Maes, Meng Ee Wong, Graham Pullin, Suranga Nanayakkara
Proceedings of CHI ’14 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’14). ACM, New York, NY, USA, p. 103-106

Abstract
Our senses are the dominant channel for perceiving the world around us, some more central than the others, such as the sense of vision. Whether they have impairments or not, people often find themselves at the edge of sensorial capability and seek assistive or enhancing devices. We wish to put sensorial ability and disability on a continuum of usability for certain technology, rather than treat one or the other extreme as the focus.

SmartFinger: Connecting Devices, Objects and People seamlessly

Conference Short Paper
Shanaka Ransiri, Roshan Lalintha Peiris, Kian Peen Yeo, Suranga Nanayakkara
In Proceedings of the 25th Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction (OZCHI ’13). ACM, New York, NY, USA, p. 359-362

Abstract
In this paper, we demonstrate a method to create a seamless information media ‘channel’ between the physical and digital worlds. Our prototype, SmartFinger, aims to achieve this goal with a finger-worn camera, which continuously captures images for the extraction of information from our surroundings. With this metaphorical channel, we have created a software architecture which allows users to capture and interact with various entities in our surroundings. The interaction design space of SmartFinger is discussed in terms of smart-connection, smart-sharing and smart-extraction of information. We believe this work will create numerous possibilities for future explorations.

SpeechPlay: Composing and Sharing Expressive Speech Through Visually Augmented Text

Conference Short Paper
Kian Peen Yeo, Suranga Nanayakkara
In Proceedings of the 25th Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction (OZCHI ’13). ACM, New York, NY, USA, p. 565-568

Abstract
SpeechPlay allows users to create and share expressive synthetic voices in a fun and interactive manner. It promotes a new level of self-expression and public communication by adding expressiveness to plain text. Control of prosody information in synthesized speech output is based on the visual appearance of the text, which can be manipulated with touch gestures. Users can create/modify content using their mobile phone (SpeechPlay Mobile application) and publish/share their work on a large screen (SpeechPlay Surface). Initial user reactions suggest that the correlation between the visual appearance of a text phrase and the resulting audio was intuitive. While it is possible to make the speech output more expressive, users could easily distort the naturalness of the voice in a fun manner. This could also be a useful tool for music composers and for training new musicians.

StickEar: making everyday objects respond to sound

Conference Full Paper
Kian Peen Yeo, Suranga Nanayakkara, Shanaka Ransiri.
In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST ’13). ACM, New York, NY, USA, 221-226

Abstract
This paper presents StickEar, a system consisting of a network of distributed ‘Sticker-like’ sound-based sensor nodes, as a means of enabling sound-based interactions on everyday objects. StickEar encapsulates wireless sensor network technology into a form factor that is intuitive to reuse and redeploy. Each StickEar sensor node consists of a miniature-sized microphone and speaker to provide sound-based input/output capabilities. We provide a discussion of the interaction design space and hardware design space of StickEar, which cut across domains such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and controlling of digital devices using sound. We implemented three applications to demonstrate the unique interaction capabilities of StickEar.

StickEar: Augmenting Objects and Places Wherever Whenever

Conference Short Paper
Kian Peen Yeo, Suranga Nanayakkara
In Extended Abstracts of the 31st Annual SIGCHI Conference on Human Factors in Computing Systems (Paris, France, April 27–May 2, 2013). CHI’13. ACM, New York, NY. p. 751-756

Abstract
Sticky notes provide a means of anchoring visual information on physical objects while having the versatility of being redeployable and reusable. StickEar encapsulates sensor network technology in the form factor of a sticky note that has a tangible user interface, offering the affordances of redeployability and reusability. It features a distributed set of network-enabled sound-based sensor nodes. StickEar is a multi-function input/output device that enables sound-based interactions for applications such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and controlling of digital devices using sound. In addition, multiple StickEars can interact with each other to perform novel input and output tasks. We believe this work would provide non-expert users with an intuitive and seamless method of interacting with the environment and its artifacts through sound.

EyeRing: A Finger Worn Input Device for Seamless Interactions with our Surroundings

Conference Full Paper
Suranga Nanayakkara, Roy Shilkrot, Kian Peen Yeo, Pattie Maes
In Proceedings of the 4th Augmented Human International Conference (Stuttgart, Germany, March 8–9, 2013). AH’13. New York, NY. p.13-20.

Abstract
Finger-worn interfaces remain a vastly unexplored space for user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. In this paper we present design guidelines and implementation of a finger-worn I/O device, the EyeRing, which leverages the universal and natural gesture of pointing. We present use cases of EyeRing for both visually impaired and sighted people. We discuss initial reactions from visually impaired users which suggest that EyeRing may indeed offer a more seamless solution for dealing with their immediate surroundings than the solutions they currently use. We also report on a user study that demonstrates how EyeRing reduces effort and disruption to a sighted user. We conclude that this highly promising form factor offers both audiences enhanced, seamless interaction with information related to objects in the environment.

An enhanced musical experience for the deaf: Design and evaluation of a music display and a haptic chair

Conference Full Paper
Suranga Nanayakkara, Elizabeth Taylor, Lonce Wyse, S H Ong
In Proceedings of the 27th Annual SIGCHI Conference on Human Factors in Computing Systems (Boston, USA, April 4–9, 2009). CHI’09. ACM, New York, NY. p. 337-346

Abstract
Music is a multi-dimensional experience informed by much more than hearing alone, and is thus accessible to people of all hearing abilities. In this paper we describe a prototype system designed to enrich the experience of music for the deaf by enhancing sensory input of information via channels other than in-air audio reception by the ear. The system has two main components: a vibrating ‘Haptic Chair’ and a computer display of informative visual effects that correspond to features of the music. The Haptic Chair provides sensory input of vibrations via touch. This system was developed based on an initial concept guided by information obtained from a background survey conducted with deaf people from multi-ethnic backgrounds and feedback received from two profoundly deaf musicians. A formal user study with 43 deaf participants suggested that the prototype system enhances the musical experience of a deaf person. All of the users preferred either the Haptic Chair alone (54%) or the Haptic Chair with the visual display (46%). The prototype system, especially the Haptic Chair, was so enthusiastically received by our subjects that it is possible this system might significantly change the way the deaf community experiences music.

The Haptic Chair as a Speech Training Aid for the Deaf

Conference Full Paper
Suranga Nanayakkara, Lonce Wyse L, Elizabeth Taylor
In Proceedings of the 24th Annual CHISIG Australian Computer-Human Interaction Conference (Melbourne, Australia, November 26–30, 2012). OZCHI’12. ACM, New York, NY. p. 405-410.

Abstract
The ‘Haptic Chair’ (Nanayakkara et al., 2009, 2010) delivers vibrotactile stimulation to several parts of the body including the palmar surface of the hand (palm and fingers), and has been shown to have a significant positive effect on the enjoyment of music even by the profoundly deaf. In this paper, we explore the effectiveness of using the Haptic Chair during speech therapy for the deaf. Based on evidence we present from a 12-week pilot user study, a follow-up 24-week study with 20 profoundly deaf users was conducted to validate our initial observations. The improvements in word clarity we observed over the duration of these studies indicate that the Haptic Chair has the potential to make a significant contribution to speech therapy for the deaf.

Toolkit that Allows Users to Animate Contents on Paper or Textiles

Patent
Roshan Peiris, Suranga Nanayakkara
Singapore Provisional Patent Application: 10201400334Y. Filing Date: 19 March 2014.

A distributed Wireless Sensing System

Patent
Kian Peen Yeo, Suranga Nanayakkara
US Provisional Patent Application: 61/750,578. Filing Date: 9 January 2013.

EyeRing: A Finger-worn Assistant

Patent
Suranga Nanayakkara, Roy Shilkrot, Pattie Maes
US Provisional Patent Application: 61581766. Filing Date: 30 December 2011.

Methods and Apparatus for Touch-Based Data Transfer

Patent
Pranav Mistry, Suranga Nanayakkara, Pattie Maes
US Provisional Patent Application: 61408728. Filing Date: 1 November 2010

Haptic Chair with Audiovisual Input

Patent
Elizabeth Taylor, Suranga Nanayakkara, Lonce Wyse, S H Ong, Kian Peen Yeo, G H Tan
US Patent No. US 8,638,966 B2. January 28, 2014.

iSwarm: an interactive light installation on the water

Installation
Thomas Schroepfer, Suranga Nanayakkara, Thomas Wortmann, Aloysius Lian, Khew Yu Nong, Alex Cornelius, and Kian Peen Yeo
In i Light Marina Bay 2014 (Marina Bay, Singapore, 7–30 March, 2014).

Abstract
iSwarm is a swarm of luminous “sea creatures” that interact with passers-by. Subtle and hardly visible by day, iSwarm comes alive at night. As daylight fades, the cells of iSwarm illuminate the waters of Marina Bay with fluorescent light reminiscent of natural phenomena such as bioluminescent algae or the Aurora Borealis. iSwarm reacts to groups of visitors by detecting human presence and greeting them with subtle modulation of its light patterns.

SmartFinger: An Augmented Finger as a Seamless ‘Channel’ between Digital and Physical Objects

Conference Short Paper
Shanaka Ransiri, Suranga Nanayakkara
In Proceedings of the 4th Augmented Human International Conference (Stuttgart, Germany, March 8–9, 2013). AH’13. New York, NY. 5–8.

Abstract
Connecting devices in the digital domain for exchanging data is an essential task in everyday life. Additionally, our physical surroundings are full of valuable visual information. However, existing approaches for transferring digital content and extracting information from physical objects require separate equipment. SmartFinger aims to create a seamless ‘channel’ between digital devices and the physical surroundings by using a finger-worn vision-based system. It is an always-available and intuitive interface for ‘grasping’ and semantically analyzing visual content from physical objects as well as sharing media between digital devices. We hope that SmartFinger will lead to a seamless digital information ‘channel’ among all entities with a semblance in the physical and digital worlds.

AugmentedForearm: Exploring the Design Space of a Display-enhanced Forearm

Conference Short Paper
Simon Olberding, Kian Peen Yeo, Suranga Nanayakkara, Jurgen Steimle
In Proceedings of the 4th Augmented Human International Conference (Stuttgart, Germany, March 8–9, 2013). AH’13. New York, NY. 9–12.

Abstract
Recent technical advances allow traditional wristwatches to be equipped with high processing power. Not only do they allow for glancing at the time, but they also allow users to interact with digital information. However, the display space is very limited. Extending the screen to cover the entire forearm is promising. It allows the display to be worn similarly to a wristwatch while providing a large display surface. In this paper we present the design space of a display-augmented forearm, focusing on two specific properties of the forearm: its hybrid nature as a private and a public display surface and the way clothing influences information display. We show a wearable prototypical implementation along with interactions that instantiate the design space: sleeve-store, sleeve-zoom, public forearm display and interactive tattoo.

WatchMe: Wrist-worn interface that makes remote monitoring seamless

Conference Short Paper
Shanaka Ransiri, Suranga Nanayakkara
In Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility (Boulder, Colorado, October 22–24, 2012). ASSETS’12. 243–244.

Abstract
Remote monitoring allows us to understand the regular living behaviors of the elderly and alert their loved ones in emergency situations. In this paper, we describe WatchMe, a software and hardware platform that focuses on making ambient monitoring intuitive and seamless. The WatchMe system consists of the WatchMe server application and a WatchMe client application implemented on a regular wristwatch. Thus, it requires minimal effort to monitor and is less disruptive to the user. We hope that the WatchMe system will contribute to improving the lives of the elderly by creating a healthy link between them and their loved ones.

Effectiveness of the Haptic Chair in Speech Training

Conference Short Paper
Suranga Nanayakkara, Lonce Wyse, Elizabeth Taylor
In Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility (Boulder, Colorado, October 22–24, 2012). ASSETS’12. 235–236

Abstract
The ‘Haptic Chair’ [3] delivers vibrotactile stimulation to several parts of the body including the palmar surface of the hand (palm and fingers), and has been shown to have a significant positive effect on the enjoyment of music even by the profoundly deaf. In this paper, we explore the effectiveness of using the Haptic Chair during speech therapy for the deaf. We conducted a 24-week study with 20 profoundly deaf users to validate our initial observations. The improvements in word clarity observed over the duration of this study indicate that the Haptic Chair has the potential to make a significant contribution to speech therapy for the deaf.

Palm-area sensitivity to vibrotactile stimuli above 1 kHz

Conference Short Paper
Lonce Wyse, Suranga Nanayakkara, Paul Seekings, S H Ong, Elizabeth Taylor
In Proceedings of the 12th International Conference on New Interfaces for Musical Expression (Ann Arbor, Michigan, May 21–23, 2012). NIME’12. 21–23

Abstract
The upper limit of frequency sensitivity for vibrotactile stimulation of the fingers and hand is commonly accepted as 1 kHz. However, during the course of our research to develop a full-hand vibrotactile musical communication device for the hearing-impaired, we repeatedly found evidence suggesting sensitivity to higher frequencies. Most of the studies on which vibrotactile sensitivity limits are based have been conducted using sine tones delivered by point-contact actuators. The current study was designed to investigate vibrotactile sensitivity using complex signals and full, open-hand contact with a flat vibrating surface, representing more natural environmental conditions. Sensitivity to frequencies considerably higher than previously reported was demonstrated for all the signal types tested. Furthermore, complex signals seem to be more easily detected than sine tones, especially at low frequencies. Our findings are applicable to a general understanding of sensory physiology, and to the development of new vibrotactile display devices for music and other applications.

EyeRing: A Finger-worn Assistant

Conference Short Paper
Suranga Nanayakkara, Roy Shilkrot, Pattie Maes
In Extended Abstracts of the 30th Annual SIGCHI Conference on Human Factors in Computing Systems (Austin, Texas, May 5–10, 2012). CHI’12. ACM, New York, NY. 1961–1966.

Abstract
Finger-worn interfaces are a vastly unexplored space for interaction design. This space opens a world of possibilities for solving day-to-day problems for both visually impaired and sighted people. In this work we present EyeRing, a novel design and concept of a finger-worn device. We show how the proposed system may serve numerous applications for visually impaired people, such as recognizing currency notes and navigating, as well as helping sighted people to tour an unknown city or intuitively translate signage. The ring apparatus is autonomous; however, it is complemented by a mobile phone or computation device to which it connects wirelessly, and an earpiece for information retrieval. Finally, we will discuss how finger-worn sensors may be extended and applied to other domains.

The effect of visualizing audio targets in a musical listening and performance task

Conference Short Paper
Lonce Wyse, Norikazu Mitani, Suranga Nanayakkara
In Proceedings of the 11th International Conference on New Interfaces for Musical Expression (Oslo, Norway, May 30–June 1, 2011). NIME’11. 304–307

Abstract
The goal of our research is to find ways of supporting and encouraging musical behavior by non-musicians in shared public performance environments. Previous studies indicated simultaneous music listening and performance is difficult for non-musicians, and that visual support for the task might be helpful. This paper presents results from a preliminary user study conducted to evaluate the effect of visual feedback on a musical tracking task. Participants generated a musical signal by manipulating a hand-held device with two dimensions of control over two parameters, pitch and density of note events, and were given the task of following a target pattern as closely as possible. The target pattern was a machine-generated musical signal comprising variation over the same two parameters. Visual feedback provided participants with information about the control parameters of the musical signal generated by the machine. We measured the task performance under different visual feedback strategies. Results show that single-parameter visualizations tend to improve the tracking performance with respect to the visualized parameter, but not the non-visualized parameter. Visualizing two independent parameters simultaneously decreases performance in both dimensions.

Biases and interaction effects in gestural acquisition of auditory targets using a hand-held device

Conference Short Paper
Lonce Wyse, Suranga Nanayakkara, Norikazu Mitani
In Proceedings of the 23rd Annual CHISIG Australian Computer-Human Interaction Conference (Canberra, Australia, November 28–December 2, 2011). OZCHI’11. ACM, New York, NY. 315–318.

Abstract
A user study explored bias and interaction effects in an auditory target tracking task using a hand-held gestural interface device for musical sound. Participants manipulated the physical dimensions of pitch, roll, and yaw of a hand-held device, which were mapped to the sound dimensions of musical pitch, timbre, and event density. Participants were first presented with a sound, which they then had to imitate as closely as possible by positioning the hand-held controller. Accuracy and time-to-target were influenced by specific sounds as well as pairings between controllers and sounds. Some bias effects in gestural dimensions independent of sound mappings were also found.

Towards building an experiential music visualizer

Conference Full Paper
Suranga Nanayakkara, Elizabeth Taylor, Lonce Wyse, S H Ong
In Proceedings of the 6th International Conference on Information, Communications and Signal Processing (Singapore, December 10–13, 2007). ICICS’07. IEEE, Piscataway, NJ. 1–5

Abstract
There have been many attempts to represent music using a wide variety of different music visualisation schemes. In this paper, we propose a novel system architecture which combines Max/MSP™ with Flash™ and can be used to build real-time music visualisations rapidly. In addition, we have used this proposed architecture to develop a music visualisation scheme that uses individual notes from a MIDI keyboard or from a standard MIDI file and creates novel displays that reflect pitch, note duration, characteristics such as how hard a key is struck, and which instruments are playing at any one time. The proposed music visualisation scheme is a first step towards developing a music visualisation that alone can provide a sense of musical experience to the user.
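As a rough illustration of the note-to-visual mapping the abstract describes (the paper's actual pipeline couples Max/MSP with Flash), the Python sketch below maps basic MIDI note attributes to display parameters; the parameter names and ranges are assumptions.

```python
# Illustrative sketch only: mapping MIDI note attributes (pitch, velocity,
# duration) to visual parameters, in the spirit of the described scheme.
def note_to_visual(note: int, velocity: int, duration: float) -> dict:
    """Map a single MIDI note to display parameters."""
    return {
        "y_position": (note - 21) / (108 - 21),   # piano range A0..C8 -> 0..1
        "size": velocity / 127.0,                 # how hard the key is struck
        "lifetime": duration,                     # note duration drives decay
    }

if __name__ == "__main__":
    # Middle C, moderately loud, half a second long.
    print(note_to_visual(note=60, velocity=80, duration=0.5))
```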

Genetic Algorithm Based Route Planner for Large Urban Street Networks

Conference Full Paper
Suranga Nanayakkara, Dipti Srinivasan, Lai Wei Lup, Xavier German, Elizabeth Taylor, S H Ong
In Proceedings of the IEEE Congress on Evolutionary Computation (Singapore, September 25–28, 2007). CEC’07. IEEE, Piscataway, NJ. 4469-4474

Abstract
Finding the shortest path from a given source to a given destination is a well-known and widely applicable problem. Most of the work done in this area has used static route planning algorithms such as A*, Dijkstra's, and the Bellman-Ford algorithm. Although these algorithms are said to be optimal, they are not capable of dealing with certain real-life scenarios. For example, most of these single-objective optimizations fail to find equally good solutions when there is more than one optimum (shortest-distance path, least congested path). We believe that the genetic algorithm (GA) based route planning algorithm proposed in this paper has the ability to tackle the above problems. In this paper, the proposed GA-based route planning algorithm is successfully tested on the entire Singapore map with more than 10,000 nodes. Performance of the proposed GA is compared with an ant-based path planning algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm over the ant-based algorithm. Moreover, the proposed GA may be used as a basis for developing an intelligent route planning system.
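For readers unfamiliar with GA-based routing, the Python sketch below shows the general shape of such an algorithm on a toy graph: candidate routes are node sequences, crossover splices two parents at a shared node, and mutation regrows a route's tail. The graph, operators and parameters are illustrative assumptions, not the paper's implementation.

```python
# A minimal GA-over-paths sketch (not the paper's algorithm).
import random

GRAPH = {  # adjacency list with edge lengths
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

def random_path(src, dst, max_len=10):
    """Random walk until the destination is reached (or give up)."""
    path = [src]
    while path[-1] != dst and len(path) < max_len:
        path.append(random.choice(list(GRAPH[path[-1]])))
    return path if path[-1] == dst else None

def length(path):
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

def crossover(p1, p2):
    """Splice the parents at an interior node they share."""
    shared = set(p1[1:-1]) & set(p2[1:-1])
    if not shared:
        return p1
    node = random.choice(list(shared))
    return p1[: p1.index(node)] + p2[p2.index(node):]

def mutate(path, dst):
    """Regrow the route from a random intermediate node."""
    if len(path) <= 2:
        return path
    i = random.randrange(1, len(path) - 1)
    tail = random_path(path[i], dst)
    return path[:i] + tail if tail else path

def evolve(src, dst, pop_size=20, generations=30):
    pop = []
    while len(pop) < pop_size:            # seed with random walks
        p = random_path(src, dst)
        if p:
            pop.append(p)
    for _ in range(generations):
        pop.sort(key=length)
        survivors = pop[: pop_size // 2]  # elitist selection on path length
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)), dst)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=length)

if __name__ == "__main__":
    print(evolve("A", "D"))   # typically finds A-B-C-D (length 4)
```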

Automatic classification of whistles produced by Indo-Pacific Humpback dolphins (Sousa chinensis)

Conference Full Paper
Suranga Nanayakkara, Mandar Chitre, S H Ong, Elizabeth Taylor
In Proceedings of the IEEE Oceans Conference (Aberdeen, Scotland, June 18–21, 2007). Oceans’07. IEEE, Piscataway, NJ. 1–5

Abstract
A fast, robust technique is needed to facilitate studies of vocalisations by dolphins and other marine mammals such as whales in which large quantities of acoustic data are commonly generated. It is sometimes necessary to be able to describe whistle contours quantitatively, rather than simply looking at descriptors such as start frequency, maximum frequency, number of inflection points, etc. This is important when whistles are to be compared using an automated classification system, and is an essential component of a real-time, automated classification system for use with a raw data stream. In this paper we describe a rapid and robust high order polynomial curve fitting technique which extracts features in preparation for automated classification. We applied this method to classify natural vocalizations of Indo-Pacific humpback dolphins (Sousa chinensis). We believe the method will be widely applicable to bioacoustic studies involving FM acoustic signals in both underwater and in-air environments.
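The Python sketch below illustrates the general idea of polynomial contour fitting for feature extraction; the synthetic contour, the polynomial order and the use of numpy.polyfit are assumptions standing in for the authors' pipeline.

```python
# Rough sketch: fit a high-order polynomial to a whistle's time-frequency
# contour and use the coefficients as a fixed-length feature vector.
import numpy as np

def contour_features(times, freqs, order=5):
    """Return polynomial coefficients describing a whistle contour."""
    # Normalise time to [0, 1] so features are duration-independent.
    t = (times - times[0]) / (times[-1] - times[0])
    return np.polyfit(t, freqs, order)

if __name__ == "__main__":
    # Synthetic upsweep with a wobble, standing in for a contour
    # extracted from a spectrogram.
    t = np.linspace(0.0, 0.8, 200)                    # seconds
    f = 6000 + 4000 * t / 0.8 + 800 * np.sin(6 * t)   # Hz
    features = contour_features(t, f, order=5)
    print(features)    # feed this fixed-length vector to a classifier
```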

A Wearable Text-Reading Device for the Visually-Impaired

Conference Video
Roy Shilkrot, Jochen Huber, Connie K. Liu, Pattie Maes, Suranga Nanayakkara
In Videos Track of the 32nd Annual SIGCHI Conference on Human Factors in Computing Systems (Toronto, Canada, April 26–May 2, 2014). CHI’14. ACM, New York, NY

Abstract
Visually impaired people report numerous difficulties with accessing printed text using existing technology, including problems with alignment, focus, accuracy, mobility and efficiency. We present a finger worn device, which contains a camera, vibration motors and a microcontroller, that assists the visually impaired with effectively and efficiently reading paper-printed text in a manageable operation with little setup. We introduce a novel, local-sequential manner for scanning text which enables reading single lines, blocks of text or skimming the text for important sections while providing real-time auditory and tactile feedback.

StickEar: Making Everyday Objects Respond to Sound

Conference Demonstration
Kian Peen Yeo, Suranga Nanayakkara, Shanaka Ransiri.
Demos of the ACM Symposium on User Interface Software and Technology (St Andrews, UK, October 8–11, 2013). UIST’13. ACM, New York, NY.

Abstract
Sticky notes provide a means of anchoring visual information on physical objects while having the versatility of being redeployable and reusable. StickEar encapsulates sensor network technology in the form factor of a sticky note that has a tangible user interface, offering the affordances of redeployability and reusability. It features a distributed set of network-enabled sound-based sensor nodes. StickEar is a multi-function input/output device that enables sound-based interactions for applications such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and controlling of digital devices using sound. In addition, multiple StickEars can interact with each other to perform novel input and output tasks. We believe this work would provide non-expert users with an intuitive and seamless method of interacting with the environment and its artifacts through sound.

EyeRing: An Eye on a Finger

Conference Demonstration
Suranga Nanayakkara, Roy Shilkrot, Pattie Maes
In Interactivity (Research) Track of the 30th Annual SIGCHI Conference on Human Factors in Computing Systems (Austin, Texas, May 5–10, 2012). CHI’12. ACM, New York, NY.

Abstract
Finger-worn devices are a greatly underutilized form of interaction with the surrounding world. By putting a camera on a finger we show that many visual analysis applications, for visually impaired people as well as the sighted, prove seamless and easy. We present EyeRing, a ring-mounted camera, to enable applications such as identifying currency and navigating, as well as helping sighted people to tour an unknown city or intuitively translate signage. The ring apparatus is autonomous; however, our system also includes a mobile phone or computation device to which it connects wirelessly, and an earpiece for information retrieval. Finally, we will discuss how different finger-worn sensors may be extended and applied to other domains.

EyeRing: An Eye on a Finger

Conference Video
Suranga Nanayakkara, Roy Shilkrot, Pattie Maes
In Video Track of the 30th Annual SIGCHI Conference on Human Factors in Computing Systems (Austin, Texas, May 5–10, 2012). CHI’12. ACM, New York, NY.

Abstract
Finger-worn devices are a greatly underutilized form of interaction with the surrounding world. By putting a camera on a finger we show that many visual analysis applications, for visually impaired people as well as the sighted, prove seamless and easy. We present EyeRing, a ring-mounted camera, to enable applications such as identifying currency and navigating, as well as helping sighted people to tour an unknown city or intuitively translate signage. The ring apparatus is autonomous; however, our system also includes a mobile phone or computation device to which it connects wirelessly, and an earpiece for information retrieval. Finally, we will discuss how different finger-worn sensors may be extended and applied to other domains.

SPARSH: Touch the Cloud

Conference Video
Pranav Mistry, Suranga Nanayakkara, Pattie Maes
In Videos Track of the 16th Annual ACM Conference on Computer Supported Cooperative Work (Hangzhou, China, March 19–23, 2011). CSCW’11. ACM, New York, NY.

Abstract
Our digital world – laptop, TV, smart phone, e-book reader and all – now relies upon the cloud, the cloud of information. SPARSH explores a novel interaction method to seamlessly transfer content between these devices in a fun way using the underlying cloud. Here is how it goes. Touch whatever you want to copy. Now it is conceptually saved in you. Next, touch the device to which you want to paste/pass the saved content. So, what can you do with SPARSH? Imagine you received a text message from a friend with his address. You touch the message and it is conceptually copied into you – your body. Now you pass that address to the search bar of Google Maps in the web browser of your laptop by simply touching it. Want to see some pictures from your digital camera on your tablet computer? Select the pictures you want to copy by touching them on the camera's display screen, and then pass them to your tablet by touching the screen of the tablet. Or you can watch a video from your Facebook wall by copying it from your phone to the TV. SPARSH uses touch-based interactions simply as indications of what to copy, from where, and where to pass it. Technically, the actual magic (the transfer of media) happens on the cloud.

SPARSH: Touch the Cloud

Conference Demonstration
Pranav Mistry, Suranga Nanayakkara, Pattie Maes
In Demonstrations Track of the 16th Annual ACM Conference on Computer Supported Cooperative Work (Hangzhou, China, March 19–23, 2011). CSCW’11. ACM, New York, NY.

Abstract
SPARSH presents a seamless way of passing data among multiple users and devices. The user touches a data item they wish to copy from a device, conceptually saving it in the user’s body. Next, the user touches the other device to which they want to paste/pass the saved content. SPARSH uses touch-based interactions as indications for what to copy and where to pass it. Technically, the actual transfer of media happens via the information cloud. The accompanying video shows some of the SPARSH scenarios.

SPARSH: Passing Data using the Body as a Medium

Conference Short Paper
Pranav Mistry, Suranga Nanayakkara, Pattie Maes
In Interactivity Track of the 16th Annual ACM Conference on Computer Supported Cooperative Work (Hangzhou, China, March 19–23, 2011). CSCW’11. ACM, New York, NY.

Abstract
SPARSH explores a novel interaction method to seamlessly transfer data among multiple users and devices in a fun and intuitive way. The user touches a data item they wish to copy from a device, conceptually saving it in the user’s body. Next, the user touches the other device to which they want to paste/pass the saved content. SPARSH uses touch-based interactions as indications for what to copy and where to pass it. Technically, the actual transfer of media happens via the information cloud.
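The copy-and-paste flow can be pictured as a per-user record in a cloud-hosted store, as in the toy Python sketch below; the CloudClipboard class and its in-memory dict are illustrative assumptions, not the actual SPARSH service.

```python
# Toy sketch of the SPARSH idea: the "body" is modelled as a per-user entry
# in a cloud clipboard, so a copy on one device and a paste on another both
# key into the same user record. A dict stands in for the cloud service.
from typing import Optional

class CloudClipboard:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}   # user_id -> most recently copied item

    def copy(self, user_id: str, item: str) -> None:
        """Touch on the source device: stash the item under the user."""
        self._store[user_id] = item

    def paste(self, user_id: str) -> Optional[str]:
        """Touch on the target device: retrieve whatever the user 'carries'."""
        return self._store.get(user_id)

if __name__ == "__main__":
    cloud = CloudClipboard()
    cloud.copy("alice", "12 Example Street")   # touch the text message
    print(cloud.paste("alice"))                # touch the laptop's map search bar
```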

BWard: An Optical Approach for Reliable in-situ Early Blood Leakage Detection at Catheter Extraction Points

Patent
Juan Pablo Cortes, Suranga Nanayakkara and Foong Shaohui
Singapore Provisional Patent Application: WT/EK/ann/S.20151973 (3052/SG). Filing Date: 14 July 2015

StickAmps: just in time intuitive price signals via non-invasive wireless sensors

Patent
Erik Wilhelm, Suranga Nanayakkara, Foong Shaohui and Samitha Elvitigala
Singapore Provisional Patent Application: SG4663. Filing Date: 23 May 2014

Stroke Haptic Rehabilitation Utilising Gaming

Patent
Roshan Peiris and Suranga Nanayakkara
Singapore Provisional Patent Application: IES101930. Filing Date: 27 October 2014

zSense: A novel technique for close proximity gesture recognition

Patent
Anusha Withana and Suranga Nanayakkara
Singapore Provisional Patent Application: 10201407991X. Filing Date: 1 December 2014.

A system and method for providing information for at least one predefined location.

Patent
Juan Pablo Cortes, Suranga Nanayakkara and Piyum Fernando
Singapore Provisional Patent Application: 10201600342W. Filing Date: 15 January 2016.