myExperiment is a collaborative environment where scientists can safely publish their workflows, share them with groups and find the workflows of others. Workflows, other digital objects and collections - called Packs - can be swapped, sorted and searched like photos and videos on the Web. And unlike Facebook or MySpace, myExperiment is built around the needs of the researcher. myExperiment makes it easy for the next generation of scientists to contribute to a pool of scientific workflows, build communities and form relationships. It enables scientists to share, reuse and repurpose workflows, reducing time-to-experiment, spreading expertise and avoiding reinvention.
The KiWi project is concerned with knowledge management in Semantic Wikis and is funded by the European Commission under Project Number 211932 in the EU Seventh Framework Programme (FP7). KiWi's objective is to investigate how knowledge management in highly dynamic environments can be supported using Semantic Wiki technologies, and how Semantic Wikis can be improved to satisfy the requirements of knowledge management. For this purpose, KiWi will
* implement an advanced knowledge management system based on the Semantic Wiki IkeWiki and extend it with improved rule-based reasoning support, information extraction, personalisation, and advanced visualisations and editors
* verify the system on two use cases in the areas of project knowledge management and software knowledge management, with flexible workflow models and specific support for the respective application areas.
Qwaq provides 3-D virtual collaboration solutions for enterprises. Qwaq Forums are virtual environments used to facilitate interactive online meetings, workflow, project and program management processes, real-time document editing, document sharing, and online training. They are deployed as virtual workspaces for virtual offices, program management, virtual operations centers, facilitated meetings, and corporate training.
Using segments of rich media makes it possible to aggregate context and meaning on these chunks through a number of different mechanisms. Starting with a granular node -- be it a sound bite, visual clip or written fact -- it is possible to aggregate contextual metadata through a series of steps that progress from the granular to the complex:
* Starting with thousands of defined audio sound bites & visual clips
* Rating sound bites and clustering them with folksonomy tags
* Sequencing audio sound bites within playlists
* Collaboratively building larger sequences with nested playlists
* Independently controlling the video & audio tracks with 2-dimensional nested playlists
* Evaluating multiple storylines and hypotheses with a 2-dimensional playlist matrix
* Visualizing complex networks by mapping out feedback loop relationships between nodes
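The early steps of this progression can be sketched in code. The following is a minimal illustration, not any project's actual data model: the `SoundBite` and `Playlist` classes and their fields are hypothetical names chosen to show how ratings and folksonomy tags attached to granular nodes can be aggregated up through nested playlists.

```python
from dataclasses import dataclass, field
from typing import List, Set, Union

@dataclass
class SoundBite:
    """A granular node: one rated, tagged media segment (hypothetical model)."""
    title: str
    rating: float = 0.0                           # community rating
    tags: Set[str] = field(default_factory=set)   # folksonomy tags

@dataclass
class Playlist:
    """A sequence of sound bites and nested playlists (hypothetical model)."""
    title: str
    items: List[Union[SoundBite, "Playlist"]] = field(default_factory=list)

    def all_tags(self) -> Set[str]:
        """Aggregate folksonomy tags from every node in the nested structure."""
        tags: Set[str] = set()
        for item in self.items:
            if isinstance(item, SoundBite):
                tags |= item.tags
            else:
                tags |= item.all_tags()
        return tags

# Collaboratively building a larger sequence from nested playlists:
intro = SoundBite("opening quote", rating=4.5, tags={"economy", "interview"})
clip = SoundBite("factory footage", rating=3.8, tags={"industry"})
context = Playlist("context", items=[clip])
story = Playlist("storyline A", items=[intro, context])

print(sorted(story.all_tags()))  # context emerges from the nested nodes
```

The point of the sketch is that metadata lives on the granular nodes, so any larger structure built from them (sequences, nested playlists, 2-dimensional matrices) can derive its context by aggregation rather than manual annotation.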