Interactive Projects

VR & AR Applications

Multimodal Interaction

Mobile Interaction

Haptic Interaction

Medical Applications

Multi-Displays

Collaborative 3DUI

Design and Evaluation of a Handheld-based 3D User Interface for Collaborative Object Manipulation
CHI 2017

Authors: GRANDI, J. G., DEBARBA, H. G., NEDEL, L. and MACIEL, A.

Abstract: Object manipulation in 3D virtual environments demands the combined coordination of rotations, translations and scales, as well as camera control to change the user’s viewpoint. Thus, for many manipulation tasks, it would be advantageous to share the interaction complexity among team members. In this paper we propose a novel 3D manipulation interface based on a collaborative action coordination approach. Our technique explores a smartphone – the touchscreen and inertial sensors – as the input interface, enabling several users to collaboratively manipulate the same virtual object with their own devices. We first assessed our interface design on a docking and an obstacle crossing task with teams of two users. Then, we conducted a study with 60 users to understand the influence of group size on collaborative 3D manipulation. We evaluated teams in combinations of one, two, three and four participants. Experimental results show that teamwork increases accuracy when compared with a single user. The accuracy increase is correlated with the number of individuals in the team and their work division strategy.
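
The sketch below illustrates one plausible way to merge concurrent contributions in such a shared-manipulation scheme: averaging the incremental transforms streamed by each user's device. It is a minimal illustration, not the paper's implementation; the names and the small-angle treatment of rotations are assumptions.

```python
import numpy as np

def merge_contributions(deltas):
    """Merge per-user incremental edits into one object update.

    Each delta holds a translation vector, a small rotation
    (axis-angle vector) and a scale factor. Averaging the increments
    lets several users steer the same object at once, so one member's
    error can be damped by the others.
    """
    t = np.mean([d["translation"] for d in deltas], axis=0)
    r = np.mean([d["rotation"] for d in deltas], axis=0)       # small-angle approximation
    s = np.exp(np.mean([np.log(d["scale"]) for d in deltas]))  # geometric mean of scales
    return t, r, s

# Two users nudge the object in roughly the same direction:
print(merge_contributions([
    {"translation": [0.02, 0.0, 0.0], "rotation": [0.0, 0.01, 0.0], "scale": 1.0},
    {"translation": [0.01, 0.005, 0.0], "rotation": [0.0, 0.0, 0.0], "scale": 1.1},
]))
```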

VR & AR Multimodal Mobile
[paper]

Designing a Vibrotactile Head-mounted Display for Spatial Awareness in 3D Spaces
IEEE VR 2017 & TVCG

Authors: de JESUS OLIVEIRA, V. A., BRAYDA, L., NEDEL, L., MACIEL, A.

Abstract: Due to the perceptual characteristics of the head, vibrotactile head-mounted displays are built with low actuator density. Therefore, vibrotactile guidance is mostly assessed by pointing towards objects in the azimuthal plane. When it comes to multisensory interaction in 3D environments, it is also important to convey information about objects in the elevation plane. In this paper, we design and assess a haptic guidance technique for 3D environments. First, we explored the modulation of vibration frequency to indicate the position of objects in the elevation plane. Then, we assessed a vibrotactile HMD made to render the position of objects in the 3D space around the subject by varying both stimulus loci and vibration frequency. Results show that frequencies modulated with a quadratic growth function allowed more accurate, precise, and faster target localization in an active head-pointing task. The technique presented high usability and a strong learning effect for haptic search across different scenarios in an immersive VR setup.
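
As a reading aid, here is a minimal sketch of the frequency-modulation idea: encoding elevation with a quadratic growth function. The frequency bounds and the normalization are illustrative assumptions, not the paper's calibrated values.

```python
def elevation_to_frequency(elevation_deg, f_min=60.0, f_max=250.0):
    """Map elevation (-90 deg below to +90 deg above the horizon) to a
    vibration frequency with a quadratic growth function, so that
    differences near the top of the range are exaggerated and easier
    to tell apart. Frequency bounds are illustrative assumptions."""
    u = (elevation_deg + 90.0) / 180.0        # normalize to [0, 1]
    return f_min + (f_max - f_min) * u ** 2   # quadratic growth

for e in (-90, -45, 0, 45, 90):
    print(e, "deg ->", round(elevation_to_frequency(e), 1), "Hz")
```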

VR & AR Multimodal Haptics
[paper]

Efficient Surgical Cutting with Position-based Dynamics
IEEE Computer Graphics & Applications, vol. 38, no. 3, May-June 2017

Authors: Iago Berndt, Rafael Torchelsen and Anderson Maciel

Abstract: Simulations of cuts on deformable bodies have been an active research subject for more than two decades. However, previous works based on finite element methods and mass-spring meshes cannot scale to complex surgical scenarios. This article presents a novel method that uses position-based dynamics (PBD) for mesh-free cutting simulation. The proposed solutions include a method to efficiently render force feedback while cutting, an efficient heat diffusion model to simulate electrocautery, and a novel adaptive skinning scheme based on oriented particles.
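
For readers unfamiliar with PBD, the following sketch shows the core loop (predict positions, project distance constraints, recover velocities) and how a mesh-free "cut" can be expressed as removing the constraints the blade crosses. It is a toy illustration under simplifying assumptions (equal masses, no stiffness or damping), not the article's method.

```python
import numpy as np

def pbd_step(x, v, constraints, dt=0.016, iters=8, gravity=(0.0, -9.8, 0.0)):
    """One PBD step: predict positions from velocities and gravity,
    project all distance constraints a few times, then recover velocities."""
    x = np.asarray(x, float)
    v = np.asarray(v, float)
    p = x + dt * (v + dt * np.asarray(gravity))   # predicted positions
    for _ in range(iters):
        for i, j, rest in constraints:
            d = p[j] - p[i]
            n = np.linalg.norm(d)
            if n > 1e-9:                          # move both ends half-way
                corr = 0.5 * (n - rest) * d / n
                p[i] += corr
                p[j] -= corr
    return p, (p - x) / dt

def cut(constraints, crossed):
    """Mesh-free 'cutting': drop the constraints crossed by the blade path."""
    return [c for k, c in enumerate(constraints) if k not in crossed]

p, v = pbd_step([[0, 0, 0], [0, -1, 0]], [[0, 0, 0], [0, 0, 0]], [(0, 1, 1.0)])
print(p)
```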

Best Demo - People's Choice Award - Symposium on Virtual and Augmented Reality (SVR) 2016

Haptics Medical
[paper] [code|demo] [project home]

A laparoscopy-based method for BRDF estimation from in vivo human liver
Medical Image Analysis 2017

Authors: Augusto L.P. Nunes, Anderson Maciel, Leandro Totti Cavazzola, Marcelo Walter

Abstract: While improved visual realism is known to enhance training effectiveness in virtual surgery simulators, advances in realistic rendering for these simulators are slower than for similar simulations of man-made scenes. One of the main reasons is that in vivo data is hard to gather and process. In this paper, we propose the analysis of videolaparoscopy data to compute the Bidirectional Reflectance Distribution Function (BRDF) of living organs as an input to physically based rendering algorithms. From the interplay between light and organic matter recorded in video images, we propose the definition of a process capable of establishing the BRDF for inside-the-body organic surfaces. We present a case study around the liver with patient-specific rendering under global illumination. Results show that despite the limited range of motion allowed within the body, the computed BRDF presents a high coverage of the sampled regions and produces plausible renderings.
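
A minimal sketch of the kind of fit involved: estimating a diffuse (Lambertian) albedo by least squares from intensity samples. The paper's pipeline is far more complete (full BRDF, laparoscope geometry, global illumination); this only illustrates the basic inverse-rendering step, and all names and values are assumptions.

```python
import numpy as np

def fit_lambertian_albedo(intensities, cosines):
    """Least-squares fit of a diffuse albedo from video samples: each
    sample pairs an observed intensity with cos(theta) between surface
    normal and light direction (recoverable from the laparoscope
    geometry, since the light source rides on the camera). Minimizes
    sum((i - albedo * c)^2), which has the closed form below."""
    c = np.maximum(np.asarray(cosines, float), 0.0)
    i = np.asarray(intensities, float)
    return float(np.dot(c, i) / np.dot(c, c))

print(fit_lambertian_albedo([0.10, 0.35, 0.62], [0.2, 0.6, 0.95]))
```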

Medical
[Journal article] [Tutorial]

Collaborative 3D Manipulation using Mobile Phones
3DUI Contest 2016

Authors: GRANDI, J. G., BERNDT, I., DEBARBA, H. G., NEDEL, L. and MACIEL, A.

Abstract: We present a 3D user interface for collaborative manipulation of three-dimensional objects in virtual environments. It maps inertial sensors, touch screen and physical buttons of a mobile phone into well-known gestures to alter the position, rotation and scale of virtual objects. As these transformations require the control of multiple degrees of freedom (DOF), collaboration is proposed as a solution to coordinate the modification of each and all the available DOFs. Users are free to define their manipulation roles. All virtual elements are displayed in a single shared screen, which is handy to aggregate multiple users in the same physical space.

Best 3DUI Contest Award

VR & AR Multimodal Mobile
[paper]

Tactile Treasure Map: Integrating Allocentric and Egocentric Information for Tactile Guidance
ASIAHAPTICS 2016

Authors: MEMEO, M., de JESUS OLIVEIRA, V. A., NEDEL, L., MACIEL, A., BRAYDA, L.

Abstract: With interactive maps, a person can find the way from one point to another using an allocentric perspective (e.g. Google Maps), but also look at a location as if from inside the map, with an egocentric perspective (e.g. Google Street View). Such an experience is not possible with tactile maps, which are mostly explored from a top view. To solve this, we built a system with two different but complementary devices. When coupled, they can provide both allocentric and egocentric spatial information to support the exploration of interactive tactile maps. To show the potential of the system, we built a blind treasure hunt.

Third Place - People's Choice Award - AsiaHaptics 2016

Haptics
[paper]

Medical Imaging VR: Can Immersive 3D Aid in Diagnosis?
VRST 2016

Authors: José Venson, Jean Berni, Carlos Maia, Ana Marques, Marcos d’Ornelas and Anderson Maciel

Abstract: In radiology diagnosis, medical images are most often visualized slice by slice on 2D screens or printed. At the same time, visualization based on 3D volumetric rendering of the data is considered useful and has broadened its field of application. We report a user study to assess VR usage in the diagnostic procedure of fracture identification. In addition, we assessed the subjects' perception of the 3D reconstruction quality and ease of interaction.

VR & AR Medical
[poster]

Immersive Visualization for 3D Volumetric Medical Images
SVR 2016

Authors: José Venson, Jean Carlo Berni and Anderson Maciel

VR & AR Medical

Lossless Multitasking: Using 3D gestures embedded in mouse devices
SVR 2016

Authors: FRANZ, J., MENIN, A. and NEDEL, L.

Abstract: Desktop-based operating systems allow the use of many applications concurrently, but the frequent switching between two or more applications distracts users, preventing them from keeping focused on the main task. In this work we introduce an augmented mouse, which supports regular 2D movements and clicks, as well as 3D gestures performed over it. While the conventional keyboard and mouse operation is used for the main task, with 3D gestures the user can control secondary tasks. As a proof of concept, we embedded a Leap Motion Controller device inside a regular mouse. User tests were conducted, first to help in the selection of the supported gestures, and then to evaluate the device's effectiveness and usability. Results showed that the use of the augmented mouse as a strategy to keep the user focused reduces the task completion time.
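
Conceptually, the prototype routes two input channels to two tasks. The sketch below shows such a dispatcher with hypothetical event and class names; it does not use the actual Leap Motion API.

```python
class App:
    """Stand-in for an application window; purely illustrative."""
    def __init__(self, name):
        self.name = name
    def handle(self, event):
        print(self.name, "<-", event["kind"])

def route(event, primary, secondary):
    """Keep 2D mouse/keyboard events on the main task and send 3D
    gestures performed over the mouse to a secondary task, so the
    user never has to switch window focus."""
    if event["kind"] in ("move", "click", "key"):
        primary.handle(event)            # regular desktop input
    else:                                # e.g. a swipe above the mouse
        secondary.handle(event)

editor, player = App("editor"), App("music player")
route({"kind": "click"}, editor, player)
route({"kind": "gesture", "name": "swipe-left"}, editor, player)
```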

Multimodal

Localized Magnification in Vibrotactile HMDs for Accurate Spatial Awareness
EUROHAPTICS 2016

Authors: de JESUS OLIVEIRA, V. A., NEDEL, L., MACIEL, A., BRAYDA, L.

Abstract: Actuator density is an important parameter in the design of vibrotactile displays. When it comes to obstacle detection or navigation tasks, a high number of tactors may provide more information, but not necessarily better performance. Depending on the body site and vibration parameters adopted, high density can make it harder to detect tactors in an array. In this paper, we explore the trade-off between actuator density and precision by comparing three kinds of directional cues. After performing a within-subject naive search task using a head-mounted vibrotactile display, we found that increasing the density of the array locally provides higher performance in detecting directional cues.

Haptics
[paper]

Spatial Discrimination of Vibrotactile Stimuli Around the Head
Haptics Symposium 2016

Authors: de JESUS OLIVEIRA, V. A., NEDEL, L., MACIEL, A., BRAYDA, L.

Abstract: Several studies have evaluated vibrotactile stimuli on the head to aid orientation and communication. However, the acuity of the head's skin for vibration still needs to be explored. In this paper, we report the assessment of the spatial resolution on the head. We performed a 2AFC psychophysical experiment systematically varying the distance between pairs of stimuli in a standard-comparison approach. We took into consideration not only the perceptual thresholds but also the reaction times and subjective factors, like workload and vibration pleasantness. Results show that the region around the forehead is not only the most sensitive, with thresholds under 5 mm, but is also the region wherein spatial discrimination was felt to be easiest to perform. We also found that it is possible to describe acuity on the head for vibrating stimuli as a function of skin type (hairy or glabrous) and of the distance of the stimulated loci from the head midline.
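
For context, a 2AFC threshold like the 5 mm figure above is typically hunted with an adaptive procedure. Below is a generic one-up/two-down staircase sketch (converging near the 70.7%-correct point); the parameters and the simulated observer are illustrative assumptions, not the study's protocol.

```python
import random

def staircase(respond, start=30.0, step=2.0, floor=2.0, trials=60):
    """One-up/two-down adaptive staircase over inter-tactor distance (mm):
    shrink after two correct answers, grow after one error, and estimate
    the threshold as the mean of the reversal points."""
    d, run, last_dir, reversals = start, 0, 0, []
    for _ in range(trials):
        if respond(d):                       # pair correctly discriminated
            run += 1
            if run == 2:
                run = 0
                if last_dir == +1:
                    reversals.append(d)
                d, last_dir = max(floor, d - step), -1
        else:
            run = 0
            if last_dir == -1:
                reversals.append(d)
            d, last_dir = d + step, +1
    return sum(reversals) / len(reversals) if reversals else d

# Simulated observer that discriminates better as the distance grows:
print(staircase(lambda d: random.random() < min(0.99, d / 10.0)))
```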

Haptics
[paper]

Proactive Haptic Articulation for Intercommunication in Collaborative Virtual Environments
3DUI 2016

Authors: de JESUS OLIVEIRA, V. A., NEDEL, L., MACIEL, A.

Abstract: In this paper, we look to elements present in speech articulation to introduce proactive haptic articulation as a novel approach for communication in Collaborative Virtual Environments. We defend the hypothesis that elements present in natural language, when added to the design of a vibrotactile vocabulary, should provide an expressive medium for intercommunication. Moreover, the ability to render tactile cues to a teammate should encourage users to extrapolate a given vocabulary while using it. We implemented a collaborative puzzle task to observe the use of such a vocabulary. Results show that participants autonomously adapted it to meet their communication needs during the assembly.

VR & AR Multimodal Haptics
[paper] [related poster (VR 2015)] [related poster (VR 2016)]

Guidelines for Designing Dynamic Applications With Second Screen
SVR 2015

Authors: Bruno Pagno, Diogo Costa, Leandro Guedes, Carla Dal Sasso Freitas and Luciana Nedel

Abstract: The concept of the second screen became popular with the introduction of interactive TVs. In this context, while the user focuses on the TV screen, the exploration of additional content is possible through the use of a smartphone or tablet as a second screen. Lately, dynamic applications, e.g. video games, have also started to use a second screen. Nintendo DS and Wii U are the game consoles that began to incorporate these ideas. Dynamic applications are based on real-time action and interaction, and their implementation can be very complex, especially because users have to change focus between the displays frequently. In this paper, we summarize the results of a set of experimental studies we conducted to verify the usability of interaction techniques based on the use of a second, auxiliary screen in dynamic applications. We developed a multiplayer game that employs one main screen shared by two players, each one also using a second (private) screen. From these studies, we elaborate a set of guidelines to help developers in the use of second screens. Although future case studies would improve these guidelines, our experiments show that they contribute robust principles for developers who want to build multiscreen applications.

Mobile
[paper]

Blind Guardian: A Sonar-Based Solution for Avoiding Collisions with the Real World
SVR 2015

Authors: Marina F. Rey, Inatan Hertzog, Nicolas Kagami and Luciana Nedel

Abstract: Sightless navigation is an ever-present issue that affects a great part of the population. The affected include permanently or temporarily blind individuals, persons walking in the dark, and users of immersive virtual environments that use real walking for navigation. This paper presents an alternative solution to this problem, which relies on a simple wearable device based on ultrasonic waves to detect obstacles and on vibrotactile feedback to warn the user of nearby obstacles. In the following pages, we describe the design and implementation of this apparatus, called the Blind Guardian. We conducted user tests with 29 subjects in a controlled environment. Results demonstrated the potential of Blind Guardian for future use in real-life situations, as well as for immersive virtual reality applications based on the use of head-mounted displays.
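
The core mapping in such a device is straightforward: the closer the obstacle, the stronger the vibration. A minimal sketch, with illustrative distance thresholds rather than the device's actual tuning:

```python
def vibration_level(distance_m, near=0.3, far=2.0):
    """Map an ultrasonic range reading to a vibrotactile intensity in
    [0, 1]: silent beyond `far`, full strength at `near` or closer,
    and a linear ramp in between. Thresholds are assumptions."""
    if distance_m >= far:
        return 0.0
    if distance_m <= near:
        return 1.0
    return (far - distance_m) / (far - near)

for d in (2.5, 1.5, 0.6, 0.2):
    print(f"{d} m -> {vibration_level(d):.2f}")
```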

VR & AR Haptics
[paper]

Spatially Aware Mobile Interface for 3D Visualization and Interactive Surgery Planning
SEGAH 2014

Authors: GRANDI, J. G., DEBARBA, H. G., ZANCHET, D. J. and MACIEL, A.

Abstract: While medical images are fundamental to the surgery planning procedure, analyzing such images slice by slice is still tedious and inefficient. In this work we introduce a system for exploring the internal anatomy structures directly on the surface of the real body, using a mobile display device as a window to the interior of the patient’s body. The method is based on volume visualization of standard computed tomography datasets and augmented reality for interactive visualization of the generated volume. It supports our liver surgery planner method in the analysis of the segmented liver and in the color classification of the vessels. We present a set of experiments showing the system’s ability to operate on mobile devices. Quantitative performance results are detailed, and applications in teaching anatomy and doctor-patient communication are discussed.

VR & AR Multimodal Mobile Medical
[paper]

The Point Walker Multi-label Approach
3DUI Contest 2014

Authors: Hernandi Krammes, Marcio M. Silva, Theodoro Mota, Matheus T. Tura, Anderson Maciel, Luciana Nedel

Abstract: This paper presents a 3D user interface to select and label point sets in a point cloud. A walk-in-place strategy based on a weight platform is used for navigation. Selection is made at two levels of precision. First, a pointing technique is used, relying on a smartphone and its built-in sensors. Then, an ellipsoidal selection volume is deformed by pinching on the smartphone touchscreen in different orientations. Labels are finally selected by pointing at icons, and a hierarchy of labels is automatically defined by multiple labelling. Voice is used to create new icons/labels. The paper describes the concepts in our approach and the system implementation.

VR & AR Multimodal Mobile
[paper]

Tactile Interface for Navigation in Underground Mines
SVR 2014

Authors: de JESUS OLIVEIRA, V. A., MARQUES, E., PERONI, R. de L. and MACIEL, A.

Abstract: This paper presents the design and evaluation of a tactile vocabulary to aid navigation in an underground mine. We studied different ways to construct tactile vocabularies and assessed several tactile icons for navigation aid. After trying a dozen stimuli families, we selected tactons based on the users' ability to perceive and process them during navigation in virtual environments, in order to design a more usable tactile interface. Then, we performed a user experiment in a virtual simulation of an emergency situation in an underground mine. The user study shows that the tactile feedback facilitated the execution of the task. Also, the perceptual adjustment of the tactile vocabulary increased its usability as well as the memorization of its signals.

Best Application Paper Award

VR & AR Multimodal Haptics
[paper]

Assessment of Tactile Languages as Navigation Aid in 3D Environments
EUROHAPTICS 2014

Authors: de JESUS OLIVEIRA, V. A. and MACIEL, A.

Abstract: In this paper we design and evaluate alternative tactile vocabularies to support navigation in 3D environments. We focused on tactile communication expressiveness by applying a prefixation approach in the construction of the tactile icons. We conducted user experiments to analyze the effects of both prefixation and the use of tactile sequences on the user's performance in a navigation task. Results show that, even though tactile sequences are more difficult to process during the navigation task, the prefixed patterns were easier to learn in all assessed vocabularies.

VR & AR Multimodal Haptics
[paper] [poster] [related poster (3DUI 2014)]

Introducing the Modifier Tactile Pattern for Vibrotactile Communication
EUROHAPTICS 2014

Authors: de JESUS OLIVEIRA, V. A. and MACIEL, A.

Abstract: We introduce the concept of the "Modifier Tactile Pattern" as a pattern that modifies the interpretation of other elements that compose a tacton or an entire tactile message. This concept was inspired by the prefixation strategies of the Braille system. We also show how to design tactile languages applying the concept of the Modifier by following methodologies and approaches of tacton design that already exist in the literature. A modifier-based tactile language is then designed and assessed in a user study.
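
A tiny sketch of the modifier idea: a tactile message is a modifier prefix concatenated with a root tacton, so the same root reads differently under different prefixes. The vocabulary entries here are hypothetical.

```python
# Hypothetical vocabulary: each tacton is a list of (frequency Hz, duration s) pulses.
ROOTS = {"turn-left": [(100, 0.2)], "stop": [(250, 0.5)]}
MODIFIERS = {"plain": [], "urgent": [(250, 0.1), (250, 0.1)]}

def compose(modifier, root):
    """A tactile message = modifier prefix + root tacton, in the spirit
    of Braille prefix signs that change how the next cell is read."""
    return MODIFIERS[modifier] + ROOTS[root]

print(compose("urgent", "turn-left"))   # short double pulse, then the cue
```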

Haptics
[paper] [poster]

Disambiguation Canvas: A Precise Selection Technique for Virtual Environments
INTERACT 2013

Authors: DEBARBA, H. G., GRANDI, J. G., MACIEL, A., NEDEL, L. and BOULIC R.

Abstract: We present the disambiguation canvas, a technique developed for easy, accurate and fast selection of small objects and objects inside cluttered virtual environments. The disambiguation canvas relies on selection by progressive refinement; it uses a mobile device and consists of two steps. During the first, the user defines a subset of objects by means of the orientation sensors of the device and a volume casting pointing technique. The subsequent step consists of the disambiguation of the desired target among the previously defined subset of objects, and is accomplished using the mobile device touchscreen. By relying on the touchscreen for the last step, the user can disambiguate among hundreds of objects at once. User tests show that our technique performs faster than ray-casting for targets with approximately 0.53 degrees of angular size, and is also much more accurate for all the tested target sizes.
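
The first step can be pictured as a cone cast from the device: everything inside the cone becomes the candidate subset handed to the touchscreen for disambiguation. A minimal sketch under assumed names and a fixed cone angle:

```python
import numpy as np

def cone_filter(objects, origin, direction, half_angle_deg=10.0):
    """Step 1 (volume casting): keep every object whose direction from
    the user falls inside the cone given by the phone's orientation."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    cos_t = np.cos(np.radians(half_angle_deg))
    subset = []
    for name, pos in objects.items():
        v = np.asarray(pos, float) - origin
        v /= np.linalg.norm(v)
        if np.dot(v, d) >= cos_t:
            subset.append(name)
    return subset

# Step 2 (disambiguation) would lay `subset` out on the touchscreen so
# the user taps the intended target among the candidates.
objs = {"cup": (0, 0, -2), "pen": (0.2, 0, -2), "lamp": (3, 0, 0)}
print(cone_filter(objs, np.zeros(3), (0, 0, -1)))
```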

VR & AR Multimodal Mobile Multi-displays
[paper]

Study of the Sensors Embedded in Smartphones for Use in Indoor Localization
SVR 2013

Authors: Ivan Boesing, Tomaz Silva and Luciana Nedel

Abstract: In recent years, smartphones have experienced fast technological growth, bringing their computational capacity close to that of personal computers. In addition, new hardware and sensors are being incorporated into these devices. A smartphone's embedded sensors are a low-cost solution that allows interactions between humans, computers and the environment. Examples include applications designed to identify the user's location by GPS receiver, games that use accelerometers and/or gyroscopes, Wi-Fi and Bluetooth antennas that exchange information between users, microphones that perceive the user's gestural movements, and so on. This paper presents a study of the various smartphone sensors, detailing their operation and identifying their main features, advantages, drawbacks, and potential for indoor localization. We analyzed smartphone sensors sensitive to sound, Wi-Fi, magnetic field, 3-dimensional linear acceleration and angular velocity. This study pointed to a fusion of sensors as the ideal solution, complementing individual capabilities and improving the accuracy of information regarding the device's rotation and translation.
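
A classic instance of the sensor fusion the study points to is a complementary filter, which combines the gyroscope's drift-prone but smooth rate with the accelerometer's noisy but drift-free tilt. A minimal sketch with toy values:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """Fuse a gyroscope (accurate short-term, drifts over time) with an
    accelerometer-derived tilt (noisy but drift-free): integrate the
    gyro rate and keep pulling the estimate toward the accelerometer."""
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle

angle = 0.0
for gyro, acc in [(0.5, 1.0), (0.5, 1.2), (0.4, 1.1)]:   # toy samples, degrees
    angle = complementary_filter(angle, gyro, acc, dt=0.02)
print(round(angle, 4))
```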

Mobile
[paper]

Interacting with Danger in an Immersive Environment
VRST 2013

Authors: Vitor A. M. Jorge, Wilson J. Sarmiento, Anderson Maciel, Luciana Nedel, César A. Collazos, Frederico Faria, Jackson Oliveira

Abstract: Any human-computer interface imposes a certain level of cognitive load on the user task. Analogously, the task itself also imposes different levels of cognitive load. It is common sense in 3D user interface research that a higher number of degrees of freedom increases the interface's cognitive load. If the cognitive load is significant, it might compromise user performance and undermine the evaluation of user skills in a virtual environment. In this paper, we propose an assessment of two immersive VR interfaces with varying degrees of freedom in two VR tasks: risk perception and basic object selection. We examine the effectiveness of both interfaces in these two different tasks. Results show that the number of degrees of freedom does not significantly affect a basic selection task, but it affects the risk perception task in an unexpected way.

VR & AR

Assessment of a User Centered Interface for Teleoperation and 3D Environments
SAC 2013

Authors: Juliano Franz, Anderson Maciel and Luciana Nedel

Abstract: The efficient, natural and precise selection and manipulation of nearby objects in 3D environments is still a challenge in the area of 3D interaction. Robot teleoperation is a potential application field for this kind of interaction and a difficult task, in part due to the low quality of the information delivered to the operator, but also because of the non-natural interfaces used to operate the robot. This paper presents a direct 3D user interface based on visual and haptic feedback that can be correctly operated by untrained users. The interface offers a higher level of presence than the average existing solutions, and involves the use of the operator’s primary hand to control a robotic arm using a Phantom device. Spatial awareness, in turn, is obtained from stereoscopic vision and the motion parallax effect. The main contribution, though, is an assessment study investigating which interface elements maximize user performance and whether any interface element is counterproductive. Surprising results show that parallax is the most effective feature, while stereoscopy is often detrimental and force feedback requires training.

VR & AR
[paper]

LOP-cursor: Fast and precise interaction with tiled displays using one hand and levels of precision
3DUI 2012

Authors: DEBARBA, H. G., NEDEL, L. and MACIEL, A.

Abstract: We present the levels of precision (LOP) cursor, a metaphor for high-precision pointing and simultaneous cursor control using commodity mobile devices. The LOP-cursor uses a two-level precision representation that can be combined to access low and high input resolution. It provides a constrained area of high-resolution input and a broader area of lower input resolution, offering the possibility of working with a two-leg cursor using only one hand. The LOP-cursor is designed for interaction with large high-resolution displays, e.g. display walls, and distributed screens/computers scenarios. This paper presents the design of the cursor, the implementation of a prototype, and user evaluation experiments showing that our method allows both the acquisition of small targets and fast interaction using simultaneous cursors in a comfortable manner. Targets smaller than 0.3 cm can be selected by users at distances over 1.5 m from the screen with minimum effort.
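
The two levels of precision can be pictured as a coarse cursor placed by device orientation plus a scaled-down fine offset from the touchscreen. A minimal sketch; the gain value is an illustrative assumption:

```python
def lop_cursor(coarse_xy, fine_xy, fine_gain=0.05):
    """Two levels of precision: device orientation places a coarse
    cursor on the wall display, and touchscreen motion adds a fine
    offset scaled down so that small targets become reachable."""
    return (coarse_xy[0] + fine_gain * fine_xy[0],
            coarse_xy[1] + fine_gain * fine_xy[1])

print(lop_cursor((120.0, 80.0), (10.0, -4.0)))   # positions in cm on the wall
```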

VR & AR Multimodal Mobile Multi-displays
[paper]

Inclusive Games: A Multimodal Experience for Blind Players
SBGAMES 2011

Authors: Jean Felipe Patikowski Cheiran, Luciana Nedel and Marcelo Pimenta

Abstract: Electronic games play an important role in human development, helping us face a world of constantly changing technologies. Considering that most games are grounded in interaction through visual elements, and that most alternative games for the blind are less attractive to non-blind people, we developed a prototype 3D environment with a dense sound experience and haptic feedback that allows blind and non-blind users to orient themselves and move through it. Designing this environment as a game, we employed blindfolded and non-blindfolded users to evaluate the major interaction issues, refining the software to make it mature enough for research with blind subjects.

VR & AR Multimodal Haptics
[paper]

The Cube of Doom: A Bimanual Perceptual User Experience
3DUI Contest 2011

Authors: DEBARBA, H. G., FRANZ, J., REUS, V., MACIEL, A. and NEDEL, L.

Abstract: This paper presents a 3D user interface to solve a three-dimensional wooden blocks puzzle. The interface aims at reproducing the real scenario of puzzle solving, involving devices and techniques for interaction and visualization that include a mobile device, haptics and enhanced stereo vision. The paper describes our interaction approach, the system implementation and user experiments.

VR & AR Multimodal Mobile
[paper]

Why not with the foot?
SBGAMES 2011

Authors: Erivaldo Xavier de Lima Filho, Mateus Bisotto Nunes, João Comba and Luciana Nedel

Abstract: The evolution of graphics hardware in the past decade has made it possible to generate scenes in computer games with a high degree of realism, which in turn requires richer interactions. However, while the number and complexity of possible interactive tasks increase, the motor capabilities of humans remain almost constant. One solution to this issue is to use other communication strategies. In this paper, we explore the foot as an interaction channel and demonstrate its viability for accomplishing different tasks. We also show that interaction using the foot can be easily and efficiently implemented under different hardware configurations. To validate our hypothesis, we present the results of three experiments involving different hardware and software configurations, summarize the lessons learned, and discuss potential avenues to continue this work.

Multimodal
[paper]

Reality Cues-Based Interaction Using Whole-Body Awareness
SAC 2010

Authors: MACIEL, A. and NEDEL, L., JORGE, V. A. M., IBIAPINA, J. M. T., SILVA, L. F. M. S.

Abstract: The exploration of 3D environments using 6-degrees-of-freedom interaction is still a challenge, since users easily become disoriented. In this paper we discuss the benefits of whole-body awareness in 3D interactive applications. We propose a technique for navigation and selection in 3D environments which explores the peephole metaphor with a tablet PC. In practice, the tablet is held by the participant, who moves it around and points it in any direction for visualization and interaction. The method was tested with a set of users who were asked to perform selection tasks. The technique presented competitive results when compared with conventional interaction methods and also showed that real-world body orientation memory helps users perform better in the virtual world.

VR & AR Mobile
[paper]

Collaborative Interaction through Spatially Aware Moving Displays
SAC 2010

Authors: Anderson Maciel, Luciana P. Nedel, Eduardo M. Mesquita, Marcelo H. Mattos, Gustavo M. Machado, Carla M.D.S. Freitas

Abstract: In many real-life situations, people work together, each using their own computer. In practice, besides personal communication, such situations often involve exchanging documents and other digital objects. Since people are working in a common physical space, it is natural to enlarge the virtual space into a common area where they can exchange objects while taking advantage of the collaborators' physical proximity. In this work we propose a way to allow collaboration through the interaction with objects in a common virtual workspace built on top of tablet PCs. The concepts of dynamic multiple displays and real-world position tracking are implemented exploiting the tablet's embodied resources, such as the webcam, touch screen and stylus. Also, a multiplayer game was implemented to show how users can exchange information through intercommunicating tablets. We performed user tests to demonstrate the feasibility of collaborative tasks in such an environment, and drew conclusions regarding the impact of the new paradigm of extended multi-user workspaces.

VR & AR Mobile Multi-displays
[paper]

Permeating the Architectural Past in Dante Alighieri Square in Caxias do Sul through Augmented Reality
SIBGRAPI 2010

Authors: RIBOLDI, G., MACIEL, A.

Abstract: Important buildings and urban sites of the recent past were not adequately documented and have been forgotten for a long time. In this context, new experiences with the 3D modeling and simulation of such spaces in VR are becoming very impactful as documentation tools. However, these virtual spaces are not accessible to the general public, as visualization tools are not available. The purpose of this work, then, is to create an interaction environment in augmented reality to explore historical areas in such a way that ancient versions of buildings can be visualized and explored directly in the real space where the buildings stood in the past or where their current versions are situated today. Users handling a mobile display device, such as a tablet PC, walk around the real site, and as they point the display towards a neighboring building, they can see how it was in the past, which allows travel in time, offering a fourth dimension to the experience. The results obtained demonstrate the potential of augmented reality applications for the dissemination of historical heritage.

VR & AR Mobile
[paper]

An Interactive Dynamic Tiled Display System
SIBGRAPI 2010

Authors: Juliano Franz, Gelson Reinaldo, Anderson Maciel and Luciana Nedel

Abstract: Data acquisition devices and algorithms are generating ever larger datasets. As displays are not evolving at the same pace, the use of tiled display systems is being seriously considered for the visualization of huge datasets. However, tiled displays are expensive and large, requiring dedicated rooms. We therefore propose a low-cost and scalable tiled display using an array of movable tablet PCs. We also present a strategy to interact with applications running on this dynamic tiled display system, which can be operated by one or multiple users concurrently. Our solution is based on two principles: even if each tile is a separate computer, users should perceive it as a single application running on a single machine; and interaction is provided by sketching gestures directly over the display surfaces using the tablet stylus. Users may use the system in a natural way, as if they were just taking notes in their own scrapbook. Preliminary results are presented and discussed.

VR & AR Mobile Multi-displays
[paper]

WindWalker: Using Wind as an Orientation Tool in Virtual Environments
SVR 2009

Authors: Henrique G. Debarba, Jerônimo G. Grandi, Adriano Oliveski, Diana Domingues, Anderson Maciel and Luciana P. Nedel

Abstract: Trans-sensory perception is the alternative use of one of our senses to perceive information which is generally perceived by another sense. Common examples exist among handicapped people, such as blind people who play soccer based on sound emitters placed on the ball and at the goals. The present study aims at using wind as an interface modality for interaction in virtual environments. More than that, in this study we propose to use the direction of the air in motion as an abstraction of the natural sense humans have from the wind. We give a new meaning to the wind direction with the purpose of self-orientation in virtual reality environments. We develop hardware and software interfaces for wind rendering and then analyze user performance on specific orientation tasks.
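
The rendering side of such a system can be as simple as driving each fan in proportion to how well it faces the desired wind direction. A minimal sketch assuming four fans on a ring around the user:

```python
import math

def fan_intensities(wind_deg, fan_angles_deg=(0, 90, 180, 270)):
    """Render a wind direction on a ring of fans: each fan blows in
    proportion to how well it faces the desired direction (cosine
    falloff, clamped at zero). The four-fan layout is an assumption."""
    return [max(0.0, math.cos(math.radians(wind_deg - a)))
            for a in fan_angles_deg]

print([round(v, 2) for v in fan_intensities(45)])
```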

VR & AR Multimodal
[paper]

A Dynamic Multi-display System Approach
SVR 2008

Authors: Gelson C. Reinaldo, Marilena Maule, Márcio Zacarias, Anderson Maciel, Carla M.D.S. Freitas, Luciana P. Nedel

Abstract: Techniques for the construction and configuration of tiled display systems have been the focus of a number of research groups. Arrays of monitors or projectors in a fixed-size N×M matrix can be managed by computer clusters to display a single image with large dimensions and high resolution. In the present work, an array of tablet PCs is used to compose a tiled display with a specific interaction feature due to the mobility of each individual tablet. Tracking fixed ground markers with the tablets' web cameras enables the system to change the virtual image region to be displayed by each tile, which allows dynamic exploration of the visualization space.
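
The per-tile logic amounts to mapping the tablet's tracked position on the wall to a pixel rectangle of the large virtual image. A minimal sketch with assumed units and names:

```python
def tile_viewport(tile_pos_m, tile_size_m, image_size_px, wall_size_m):
    """Given where a tablet sits on the wall (from its camera tracking
    fixed markers), return the pixel rectangle of the large virtual
    image that this tile should display."""
    sx = image_size_px[0] / wall_size_m[0]   # pixels per meter, horizontal
    sy = image_size_px[1] / wall_size_m[1]   # pixels per meter, vertical
    (x, y), (w, h) = tile_pos_m, tile_size_m
    return (int(x * sx), int(y * sy), int(w * sx), int(h * sy))

# A 30x20 cm tablet at (1.0 m, 0.5 m) on a 4x2 m wall showing an 8000x4000 px image:
print(tile_viewport((1.0, 0.5), (0.3, 0.2), (8000, 4000), (4.0, 2.0)))
```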

VR & AR Mobile Multi-displays
[paper]

Last update: Jun 05, 2017