A Portable Framework for Semantic Zooming in

Information-Rich Virtual Environments

Nicholas F. Polys

Umur Yilmaz

Virtual Environments Course: Dr. Doug Bowman

Virginia Tech Computer Science

Problem

Information-Rich Virtual Environments (IRVEs) embed multiple information types within their spatial view construct, but managing and designing information displays inside real-time 3D environments is hard:

  • Perception of objects and association of attributes with their referents
  • Legibility of annotations
  • Occlusion of other scene objects
  • Novel Level-of-Detail (LOD) techniques are required to integrate and manage geometry detail and annotation panels simultaneously

 

Goals

  • Develop a testbed API framework for novel Level-of-Detail (LOD) techniques that are applicable to the ‘Window’ projection paradigm as employed in both desktop and head-mounted displays (HMDs)
  • Encapsulate information rich display behaviors in reusable scenegraph nodes

 

Background

Display of information about perceptual objects (active in the world scenegraph):

  • In proximity to referent
  • On a Heads-Up-Display

see also:

  • Barrilleaux (2001) – world space vs. display space (image plane)
  • Bederson & Hollan (1996) – semantic zooming, "interface physics", Pad++
  • Bell, Feiner & Höllerer (2001) – view management for virtual and augmented reality

 

Requirements for Annotations

1] Text and graphs are always right-side up. Expose all Text FontStyle fields.

    a) use a true 3D Billboard
    b) always align with the NavigationInfo up exposedField
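In VRML97, the right-side-up requirement can be sketched with the standard Billboard node: an axisOfRotation of (0, 0, 0) is the viewer-aligned special case, which keeps the panel facing the viewer with its local Y axis along the viewer's up vector (a custom PROTO is needed to instead lock to a world up vector):

```
# Viewer-aligned annotation: axisOfRotation 0 0 0 keeps the panel
# facing the viewer, right-side up (VRML97 Billboard semantics).
Billboard {
  axisOfRotation 0 0 0
  children Shape {
    appearance Appearance { material Material { diffuseColor 1 1 1 } }
    geometry Text {
      string [ "Annotation" ]
      fontStyle FontStyle { justify [ "MIDDLE" "MIDDLE" ] size 0.5 }
    }
  }
}
```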

 

2] Annotation panels should be able to contain text, images, or Geometry2D components (for graphs, etc.)

 

3] Text legibility can be improved using a background panel for contrast

    a) expose text and panel colors, transparency
    b) auto-size panel
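A PROTO interface for such a contrast panel might look like the following sketch; the field names here are illustrative, not the framework's actual API:

```
# Hypothetical interface sketch for a text panel with a contrast
# background; field names are illustrative, not the actual API.
PROTO AnnotationPanel [
  exposedField MFString string       [ "label" ]  # annotation text
  exposedField SFColor  textColor    1 1 1
  exposedField SFColor  panelColor   0 0 0.3
  exposedField SFFloat  transparency 0.2
  field        SFBool   autoSize     TRUE         # fit panel to text bounds
] {
  Group { }  # body elided: Billboard + background quad + Text
}
```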

 

4] Legibility can be improved by scaling annotations by a function of user distance
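One way to realize distance-based scaling in VRML97 is to route the viewer's position from a ProximitySensor into a Script that scales the annotation linearly with distance, giving a roughly constant apparent size under perspective projection. A minimal sketch (the panel is assumed to sit at the local origin, and 0.05 is an illustrative tuning constant):

```
# Sketch: keep an annotation at roughly constant apparent size by
# scaling it linearly with viewer distance. Names are illustrative.
DEF Zone ProximitySensor { size 1000 1000 1000 }
DEF PanelXf Transform {
  children Shape {
    geometry Text { string [ "constant-size label" ] }
  }
}
DEF Scaler Script {
  eventIn  SFVec3f viewerPos
  eventOut SFVec3f scale_changed
  url "javascript:
    function viewerPos(p) {
      // distance from the panel origin (assumed 0,0,0) to the viewer
      var d = p.length();
      var s = 0.05 * d;  // 0.05 = tuning constant
      scale_changed = new SFVec3f(s, s, s);
    }"
}
ROUTE Zone.position_changed TO Scaler.viewerPos
ROUTE Scaler.scale_changed TO PanelXf.set_scale
```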

 

5] Reference line – expose beginPoint (on object), color

 

6] Annotations may contain multiple levels of detail which do not necessarily correspond 1-1 with the referent’s LODs

 

7] Support multiple rendering targets for optimization:

    a) embedded in the scene ('near' the referent)
    b) on the HUD: at the near clipping plane / rendered last
       [overlay requires access one level up in the runtime, like Java's Canvas2D component]
    c) expose the image plane as a Transform branch, normalized to window width and height

           

8] Embedded Annotations may need layout algorithms to reduce occlusion from the user’s Viewpoint; they must be transformable and should 'know' their bounds

 

 

Functionality I - Portable nodes (VRML PROTOs)

'Info-Rich' Panels:

Text -

  • versions for unstructured text and title/field/value pairs
  • position, orientation, scale exposed
  • Annotation: justified left, center, or right; hasPanel & autocompute
  • Text: font, color
  • Panel: color, transparency
  • exposed string field allows text updating

Image - default texture mapping

Generic Heads-Up-Display (HUD) framework:

  • arbitrary geometry scaled or offset with hooks for add/remove children
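A generic HUD can be built with the standard VRML97 idiom: a ProximitySensor tracks the viewer's position and orientation and drives a Transform, so the Transform's children stay fixed in the view. A minimal sketch:

```
# Standard VRML97 HUD idiom: a ProximitySensor tracks the viewer and
# drives a Transform, so its children stay fixed in the view.
DEF HudZone ProximitySensor { size 1000 1000 1000 }
DEF HudXf Transform {
  children Transform {
    translation 0 0 -2  # offset 2 m in front of the viewer
    children Shape {
      geometry Text { string [ "HUD annotation" ] }
    }
  }
}
ROUTE HudZone.position_changed    TO HudXf.set_translation
ROUTE HudZone.orientation_changed TO HudXf.set_rotation
```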

Semantic Objects:

  • Animatable position and orientation
  • Separate geometry LOD and annotation LOD
  • True 3D Billboards with or without lines
  • mouseOver and click can route Overview and Detail children to HUD
  • Fixed-size panels, plus perspective-compensated constantSize and periodicSize panels
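The Semantic Object behaviors above suggest a PROTO interface along these lines; this is a hypothetical sketch, and the framework's actual field names may differ:

```
# Hypothetical interface sketch for a Semantic Object;
# field names are illustrative, not the framework's actual API.
PROTO SemanticObject [
  exposedField SFVec3f    position         0 0 0    # animatable
  exposedField SFRotation orientation      0 0 1 0  # animatable
  field        MFNode     geometryLevels   [ ]      # geometry LODs
  field        MFNode     annotationLevels [ ]      # annotation LODs (independent)
  field        MFNode     overview         [ ]      # routed to HUD on mouseOver
  field        MFNode     detail           [ ]      # routed to HUD on click
] {
  Group { }  # body elided
}
```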
Layout Schemes / Feature Builds & Examples:

  • SemanticObj1 – relative annotation position held constant
  • SemanticObj2 – snap panel to the closest corner point; the developer can define all 8 points of the bounds as well as an offset
  • SemanticObj3 – panel location on spherical bounds (offset needs work)
  • SemanticObj4 – emergent behavior with 'flocking' info-panel positions: panels tend toward the nearest box corner (goal) and avoid other Semantic Objects (boid algorithm adapted from P. Gerstman, Ohio State U)
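The flocking rule — attraction toward the goal corner plus repulsion from other panels — can be sketched as a per-frame Script update. This is an illustrative sketch of the adapted boid rules, not the actual implementation; the weights and the 2.0 m avoidance radius are placeholder tuning values:

```
# Sketch of the adapted boid update for one info panel: attraction
# toward the nearest bounding-box corner (goal) plus repulsion from
# other panels. Weights and radius are illustrative tuning values.
DEF Flock Script {
  eventIn  SFTime  tick                 # drive from a TimeSensor
  field    SFVec3f pos       0 0 0      # current panel position
  field    SFVec3f goal      1 1 1      # nearest box corner
  field    MFVec3f neighbors [ ]        # other panels' positions
  eventOut SFVec3f pos_changed
  url "javascript:
    function tick(t) {
      // steer toward the goal corner
      var v = goal.subtract(pos).multiply(0.1);
      // steer away from each panel closer than the avoidance radius
      for (var i = 0; i < neighbors.length; i++) {
        var away = pos.subtract(neighbors[i]);
        if (away.length() < 2.0)
          v = v.add(away.multiply(0.3));
      }
      pos = pos.add(v);
      pos_changed = pos;
    }"
}
```

Because each panel runs this rule locally, positions can be exchanged purely by ROUTEs among the flock, with no centralized manager.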
 

SemanticMenagerie (one of each of the above)

Bio Simulation (early build)

Emergent Behavior

Benefits:

  • No centralized 'manager' needed, just routes among the flock
  • Simple rules give rise to complex, 'intelligent' behavior

Issues:

  • Requires some tuning per environment
  • The closest bounding-box corner may not be the ideal goal location (e.g., depending on left, center, or right justification)
  • Constant motion is distracting; try periodic, larger boid moves triggered by a threshold

 

Functionality II - Cortona/IE specific nodes

VRML + HMD:

  • Set screen resolution and frequency for the HMD (640x480, 60 Hz); right-click in the 3D scene for 'Full Screen' rendering
  • Head-tracker data routed into the scene via the External Authoring Interface (EAI)
  • Global navigation parents this Viewpoint
    (currently controlled by click-or-slide widgets on the HUD, which requires the mouse)

 

Uses the Menagerie world; EAI examples require the Microsoft VM and Internet Explorer.

 

Usage

Declare the EXTERNPROTO interface and its file location to use the nodes in any scene.
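The usage pattern looks like the following hypothetical example; the PROTO name, field list, and file URL are illustrative — the actual interface is published with the framework:

```
# Hypothetical example: PROTO name, fields, and URL are illustrative;
# declare the interface published with the framework's PROTO file.
EXTERNPROTO AnnotationPanel [
  exposedField MFString string
  exposedField SFColor  textColor
  exposedField SFColor  panelColor
] "SemanticObjects.wrl#AnnotationPanel"

AnnotationPanel {
  string     [ "Cell membrane" "thickness: 8 nm" ]
  textColor  1 1 1
  panelColor 0 0 0.5
}
```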

 

 

Benefits

  • Flexible Semantic objects with low development cost
  • Easily populated from database or XML (using XSL Transformations (XSLT))
  • Standards advocacy – contributions to the X3D specification:
    • Annotation Component
    • Compositing Component and the need for scenegraph control of last-rendered objects

 

Current & Future Work

  • Tune flocking behavior parameters
  • More example applications improving the functionality for odd-shaped objects across scales, etc. (Bio Simulation, CAD/Architecture)
  • Determine the role of other interactions and display venues (e.g., a tooltip panel that pops up on mouseOver or click)
  • Global navigation by wand/pointer (EAI Latency?)

 

Conclusions

  • Researchers, developers, and users can benefit from structured, encapsulated behaviors for Semantic Objects
  • Monocularly rendered 'Window' projection content seems to be transferable from desktop to HMD
  • More work is needed to identify perceptual ideals and display and interaction conditions for successful IRVEs

 

© 2003