Segmenting rich media into chunks makes it possible to aggregate context and meaning onto them through a number of different mechanisms. Starting from a granular node -- be it a sound bite, visual clip, or written fact -- contextual metadata accumulates through a series of steps that progress emergently from:
* Starting with thousands of defined audio sound bites & visual clips
* Rating sound bites and clustering them with folksonomy tags
* Sequencing audio sound bites within playlists
* Collaboratively building larger sequences with nested playlists
* Independently controlling the video & audio tracks with 2-dimensional nested playlists
* Evaluating multiple storylines and hypotheses with a 2-dimensional playlist matrix
* Visualizing complex networks by mapping out feedback loop relationships between nodes
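The progression above can be sketched as a small data model. This is a hypothetical illustration, not an implementation from the original text: the class names (`SoundBite`, `Playlist`, `TwoTrackPlaylist`) and fields are assumptions chosen to show how tags, ratings, nesting, and independent audio/video tracks could compose.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class SoundBite:
    """A granular node: a sound bite, visual clip, or written fact."""
    title: str
    media_url: str
    tags: set = field(default_factory=set)       # folksonomy tags for clustering
    ratings: list = field(default_factory=list)  # user ratings, e.g. 1-5

    def average_rating(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

@dataclass
class Playlist:
    """An ordered sequence whose items may be bites or nested playlists."""
    name: str
    items: List[Union[SoundBite, "Playlist"]] = field(default_factory=list)

    def flatten(self) -> List[SoundBite]:
        """Depth-first expansion of nested playlists into a flat sequence."""
        out = []
        for item in self.items:
            out.extend(item.flatten() if isinstance(item, Playlist) else [item])
        return out

@dataclass
class TwoTrackPlaylist:
    """Independently controlled video and audio tracks: a 2-dimensional
    nested playlist. A matrix of these could hold multiple storylines."""
    video: Playlist
    audio: Playlist

# Usage: rate and tag bites, then sequence them in nested playlists.
a = SoundBite("intro quote", "audio/001.mp3", tags={"economy"}, ratings=[4, 5])
b = SoundBite("rebuttal clip", "video/002.mp4", tags={"economy", "debate"})
inner = Playlist("debate segment", [b])
outer = Playlist("episode", [a, inner])
print([bite.title for bite in outer.flatten()])  # → ['intro quote', 'rebuttal clip']
```

The final step, mapping feedback-loop relationships between nodes, would layer a graph (e.g. an adjacency list keyed by `SoundBite`) on top of this model rather than extending the playlist hierarchy itself.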