Spectrum: A Visual Analytics Tool to Explore Logs for VAST Challenge 2015

We present a visual analytics tool, called Spectrum, to analyze the movement and communication log data from VAST Challenge 2015. Spectrum has two views: MoveView and SpectrumView. MoveView gives an overview of the movement logs at a given timestamp by synthesizing time, location, and identity information. It replays movement logs over time and shows communication logs as dynamic links. SpectrumView shows the status of all visitors' activities within a period of time; each visitor's stay at a location is visualized as a line segment.

Be the Data: Embodied Visual Analytics

"Be the Data" is a physical and immersive approach to visual analytics designed for teaching abstract statistical analysis concepts to students. In particular, it addresses the problem of exploring alternative projections of high-dimensional data points using interactive dimension reduction techniques. In our system, each student literally embodies a data point in a dataset that is visualized in the room of students; coordinates in the room are coordinates in a two-dimensional plane to which the high-dimensional data are projected.

Andromeda: Semantic Interaction for Dimension Reduction

Andromeda enables users to directly manipulate the data points in 2D plots of high-dimensional data to explore alternative dimension reduction projections. Andromeda implements interactive weighted multidimensional scaling (WMDS) with semantic interaction. Andromeda allows for both parametric and observation-level interaction to provide in-depth data exploration. A machine learning approach enables WMDS to learn from user manipulated projections.

See the Andromeda demo video here: https://youtu.be/lyfUMCu-wC8
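
The weighted MDS technique named above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not Andromeda's actual implementation or API: dimension weights rescale each feature before pairwise distances are computed, and classical MDS embeds the resulting distance matrix in 2D.

```python
# Sketch of weighted multidimensional scaling (WMDS): per-dimension
# weights rescale features, then classical (Torgerson) MDS embeds the
# weighted distances in 2D. Names are illustrative, not Andromeda's.
import numpy as np

def weighted_mds(X, weights):
    """Embed rows of X in 2D so pairwise distances approximate
    weighted Euclidean distances in the original space."""
    w = np.asarray(weights, dtype=float)
    Xw = X * np.sqrt(w / w.sum())                  # scale dims by weight
    sq = ((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1)  # squared dists
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ sq @ J                          # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)                 # eigenvalues ascending
    top = np.maximum(vals[-2:][::-1], 0)           # two largest, clipped at 0
    return vecs[:, -2:][:, ::-1] * np.sqrt(top)

X = np.random.default_rng(0).normal(size=(10, 5))  # 10 points, 5 attributes
layout = weighted_mds(X, weights=[2, 1, 1, 1, 1])  # emphasize dimension 0
print(layout.shape)  # (10, 2)
```

In the semantic interaction setting, the inverse problem is also solved: when a user drags points in the 2D layout, the system infers new dimension weights that best explain the manipulated positions.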

Semantic Interaction Project

The goal of this project is to enable the creation of new human-centered computing tools that will help people effectively analyze large collections of textual documents by providing powerful statistical analysis functionality in a usable and intuitive form. To accomplish that, this project investigates “semantic interaction” in visual analytics as a method to combine the large-data computationally-intensive foraging abilities of formal statistical mining algorithms with the intuitive cognitively-intensive sensemaking abilities of human analysts.

Lauren Bradel completes PhD

Congratulations to Dr. Lauren Bradel, who completed her PhD in May 2015 on "Multi-Model Semantic Interaction for Scalable Text Analytics". She will join the Department of Defense as a Post-Doc.

http://infovis.cs.vt.edu/users/lauren-bradel

CMDA 3654: Intro to Data Analytics & Visualization

CMDA/CS/STAT 3654 is a new course, and part of the new CMDA undergraduate degree in Computational Modeling and Data Analytics.
It covers: basic principles and techniques in data analytics; methods for collecting, storing, accessing, processing, and analyzing standard-size and large datasets; data visualization; and identifying sources of bias. Applications to real-world case studies.
Typically taught in Spring semester.
Website on Scholar.

Haeyong Chung to become Asst Professor at UAH

Congratulations to lab member Haeyong Chung, who completed his PhD in March 2015 and has accepted an Assistant Professor position in Computer Science at the University of Alabama in Huntsville.

http://people.cs.vt.edu/~chungh/

Lauren Bradel Dissertation Abstract

Multi-Model Semantic Interaction for Scalable Text Analytics

Lauren Bradel

Committee: Chris North (chair), Naren Ramakrishnan, Doug Bowman, Leanna House, William Pike

Learning from text data often involves a loop of tasks that iterate between foraging for information and synthesizing it in incremental hypotheses. Past research has shown the advantages of using spatial workspaces as a means for synthesizing information through externalizing hypotheses and creating spatial schemas. However, spatializing the entirety of a dataset becomes prohibitive as the number of documents available to the analyst grows, particularly when only a small subset is relevant to the tasks at hand. To address this issue, we developed the multi-model semantic interaction (MSI) technique, which leverages user interactions to aid in the display layout (as in previous semantic interaction work), forage for new, relevant documents as implied by the interactions, and then place them in the context of the user's existing spatial layout. This enables the user to conduct both implicit queries and traditional explicit searches. A comparative user study of StarSPIRE found that while adding implicit querying did not impact the quality of the foraging, it enabled users to 1) synthesize more information than users with only explicit querying, 2) externalize more hypotheses, and 3) complete more synthesis-related semantic interactions. Also, 18% of relevant documents were found by implicitly generated queries when that option was available. StarSPIRE has also been integrated with web-based search engines, allowing users to work across vastly different levels of data scale to complete exploratory data analysis tasks (e.g., literature review, investigative journalism).

The core contribution of this work is multi-model semantic interaction (MSI) for usable big data analytics. This work has expanded the understanding of how user interactions can be interpreted and mapped to underlying models to steer multiple algorithms simultaneously and at varying levels of data scale. This is represented in an extendable multi-model semantic interaction pipeline. The lessons learned from this dissertation work can be applied to other visual analytics systems, promoting direct manipulation of the data in context of the visualization rather than tweaking algorithmic parameters and creating usable and intuitive interfaces for big data analytics.
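
The implicit-querying idea described above can be illustrated with a small sketch: terms are harvested from the documents a user has interacted with and turned into a foraging query. This is a hedged, hypothetical illustration of the concept, not StarSPIRE's implementation; the stopword list and function names are invented for the example.

```python
# Sketch: derive an implicit query from documents the user has
# emphasized, by taking their most frequent non-stopword terms.
# (Illustrative only; not StarSPIRE's actual algorithm.)
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for"}

def implicit_query(interacted_docs, top_k=3):
    """Build a query from frequent non-stopword terms in the
    documents the user has interacted with."""
    counts = Counter()
    for doc in interacted_docs:
        for term in re.findall(r"[a-z]+", doc.lower()):
            if term not in STOPWORDS:
                counts[term] += 1
    return [term for term, _ in counts.most_common(top_k)]

docs = ["The suspect traveled to the harbor",
        "Harbor records show the suspect arrived Tuesday"]
print(implicit_query(docs))  # ['suspect', 'harbor', 'traveled']
```

The resulting terms would then be issued as a query to forage for new documents, which are placed in the context of the user's existing spatial layout.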

This research was funded in part by the National Science Foundation, IIS-1218346, IIS-144746, and CCF-0937133, Department of Homeland Security Visual Analytics for Command, Control, and Interoperability Environments (VACCINE), and the Ted and Karyn Hume Center for National Security and Technology.

The end of an era...and the beginning of a new one!

On April 11, 2015, our laboratory group, along with Dr. Doug Bowman's 3D Interaction group, renovated the Black Lab in Knowledge Works II.

We dismantled the Gigapixel display (most of which will be repurposed for research in other departments), the vizblocks, and unused equipment stands. With this new free space, we will be able to implement a staging area for the Cube at ICAT! This is a wonderful opportunity for our department to develop software and experiments for the Cube.

This may be the end of the Gigapixel era, but it's definitely the start of a new and exciting one!

Thanks to everyone who came out to help.
