Research Projects Listing and Overview
This page lists our major research projects, past and present.
Distributed cognition and embodiment provide compelling models for how humans think and interact with the environment. Our examination of the use of large, high-resolution displays from an embodied perspective has led directly to the development of a new sensemaking environment called Analyst’s Workspace (AW). AW leverages the embodied resources made more accessible through the physical nature of the display to create a spatial workspace.
The data set for the VAST Challenge 2012 Mini Challenge 1 (MC1) calls for a large-scale situation awareness analysis: understanding the network health and activity status of approximately one million online devices over three days. One of the tables in the dataset contains 158,530,955 records, and the devices can be ordered hierarchically by business unit and facility. The main visualization challenge was therefore supporting very large quantities of hierarchical information.
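One way to make a hierarchy of this scale tractable is to roll status counts up from individual devices to facilities and business units before visualizing them. The sketch below illustrates that roll-up on a toy record layout; the field names and status values are hypothetical stand-ins, as the actual MC1 schema is much richer.

```python
from collections import Counter, defaultdict

# Hypothetical flat records: (business_unit, facility, machine, status).
# Illustrative only -- the real MC1 table has many more fields.
records = [
    ("bu1", "fac1", "m001", "healthy"),
    ("bu1", "fac1", "m002", "infected"),
    ("bu1", "fac2", "m003", "healthy"),
    ("bu2", "fac3", "m004", "healthy"),
]

def rollup(records):
    """Aggregate per-device status counts at the facility and
    business-unit levels of the hierarchy."""
    by_facility = defaultdict(Counter)
    by_unit = defaultdict(Counter)
    for unit, facility, _machine, status in records:
        by_facility[(unit, facility)][status] += 1
        by_unit[unit][status] += 1
    return by_facility, by_unit

by_facility, by_unit = rollup(records)
print(by_unit["bu1"])  # Counter({'healthy': 2, 'infected': 1})
```

A visualization can then draw each node of the business-unit/facility tree from these aggregates instead of touching all 158 million records per frame.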
The Vehicle Terrain Measurement System (VTMS) allows highly detailed terrain modeling and vehicle simulations. Visualization of large-scale terrain datasets taken from VTMS provides insights into the characteristics of the pavement or road surface. However, the resolution of these terrain datasets greatly exceeds the capability of traditional graphics displays and computer systems.
The goal of this project is to enable end users to directly manipulate data visualizations created by mathematical models for dimension reduction. Users can explore structure in high-dimensional data by directly moving data points within the visualization, allowing the models to learn from this feedback, and then viewing the effects of those movements on other points.
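To give a feel for the feedback loop, the sketch below shows one hypothetical update rule: when a user drags two points closer together, dimensions on which those points already agree are up-weighted, and the re-weighted distances would then drive a fresh layout. This is an illustrative toy, not the published model.

```python
import numpy as np

def update_weights(X, w, i, j, closer=True, lr=0.5):
    """Illustrative update: after a user drags point i toward point j,
    up-weight the dimensions where the two points are similar.
    Not the actual algorithm used in the project."""
    diff = np.abs(X[i] - X[j])
    agreement = 1.0 / (1.0 + diff)          # high where features agree
    direction = agreement if closer else (1.0 - agreement)
    target = direction / direction.sum() * len(w)
    w = (1 - lr) * w + lr * target          # blend toward the feedback
    return w / w.sum() * len(w)             # keep weights normalized

# Toy data: points 0 and 1 agree on dims 0 and 1, differ on dim 2.
X = np.array([[1.0, 0.0, 5.0],
              [1.1, 0.2, 9.0],
              [4.0, 3.0, 5.1]])
w = np.ones(3)
w = update_weights(X, w, 0, 1, closer=True)
# dims 0 and 1 now carry more weight than dim 2
```

A weighted dimension-reduction layout (e.g. weighted MDS) recomputed with `w` would then move other similar points closer as well, which is the "viewing the effects of those movements on other points" step.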
The multiplicity of computing and display devices currently available presents new opportunities for how visual analytics is performed. One of the significant inherent challenges of using multiple and varied types of displays for visual analytics is sharing, and subsequently integrating, information among different devices. Multiple devices enable analysts to employ and extend visual space for working with visualizations, but they require users to switch intermittently between activities and foci of interest across different workspaces.
Workflow Enhancer is our custom Excel Add-In that optimizes the Excel experience for use on large, high-resolution displays. The add-in allows users to create new Excel windows with predefined size, position, and content in a single click while leaving the previous windows active. This encourages users to leave trails of active visual history as they progress through their analysis. Workflow Enhancer also provides highlighting and computational capabilities that can accelerate the pace of investigations.
VizCept is a web-based visual analytics system designed to support fluid, collaborative analysis of large textual intelligence datasets. Its main design approach is to combine individual workspaces and shared visualizations in an integrated environment. Collaborating analysts can identify concepts and relationships from the dataset based on keyword searches in their own workspaces and collaborate visually with other analysts using visualization tools such as a concept map view and a timeline view.
User-generated reviews, like those found on Yelp and Amazon, have become important reference material in casual decision making, such as dining, shopping, and entertainment. However, the sheer volume of reviews makes reading them time consuming. A text visualization can speed up the review reading process.
Large collections of documents create a cumbersome comprehension task. To lighten the load, interactive computational techniques can create visual summaries of these documents. We conducted a study comparing document highlights made by humans to those produced by a salience algorithm. We are exploring interactive computational techniques identified from the study results, as well as the modified salience algorithm, within an interactive tool. We developed a salience tool that visualizes highlights based on the percentage of users who found each sentence salient.
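The per-sentence agreement measure behind such a tool is straightforward to sketch: for each sentence, compute the fraction of annotators who highlighted it, then map that fraction to highlight intensity. The annotator names and sentence indices below are hypothetical.

```python
# Hypothetical annotations: each annotator's set of highlighted
# sentence indices within one document.
highlights = {
    "annotator1": {0, 2},
    "annotator2": {0, 1},
    "annotator3": {0, 2, 3},
}

def salience_percentages(highlights, n_sentences):
    """Fraction of annotators who marked each sentence as salient;
    this fraction can drive the opacity of the highlight color."""
    n_annotators = len(highlights)
    return [
        sum(1 for marked in highlights.values() if s in marked) / n_annotators
        for s in range(n_sentences)
    ]

pcts = salience_percentages(highlights, 4)
# sentence 0 was highlighted by all annotators, so pcts[0] == 1.0
```

Rendering each sentence with an opacity proportional to its percentage lets readers see at a glance where annotators agreed.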
Insight-based evaluation is a method to evaluate visualizations by measuring the amount and type of insight they produce. The goal is to move beyond standard measures of time and accuracy to better reflect the true purpose of visualization -- insight.
We are currently collaborating with a professor from the Psychology Department at Virginia Tech who is researching the effects of physical navigation on cognition. We will use large, high-resolution displays to analyze the enhancement of spatial memory.
We studied co-located collaborative sensemaking on a large, high-resolution display and investigated the sensemaking strategies adopted, territorial behavior, and large display usage. The analysis used a small dataset of 50 text-only documents, allowing participants to read all documents and synthesize their knowledge into a hypothesis within a two-hour time frame.
The preliminary results of this study were presented at INTERACT '11 in Lisbon, Portugal.
This section describes our work on Semantic Interaction, a design space for user interaction in visual analytic tools that infers the analytical reasoning of users to steer underlying models.
Giga Pixel has its own site! Head there for more information:
Giga Pixel Site
The GigaPixel Display Laboratory at Virginia Tech's Center for Human-Computer Interaction is a flexible facility supporting advanced research on future user interfaces and visualization. The facility integrates diverse equipment, including:
ChairMouse is an embodied interaction technique that couples cursor location with user attention by capturing chair movements.