
Home/ beyondwebct/ Group items tagged users



Shirky: A Group Is Its Own Worst Enemy - 1 views

  • April 24, 2003
  • I want to talk about a pattern I've seen over and over again in social software that supports large and long-lived groups.
  • definition of social software
  • It's software that supports group interaction
  • how radical that pattern is. The Internet supports lots of communications patterns, principally point-to-point and two-way, one-to-many outbound, and many-to-many two-way.
  • Prior to the Internet, the last technology that had any real effect on the way people sat down and talked together was the table.
  • We've had social software for 40 years at most, dated from the Plato BBS system, and we've only had 10 years or so of widespread availability, so we're just finding out what works. We're still learning how to make these kinds of things.
  • If it's a cluster of half a dozen LiveJournal users, on the other hand, talking about their lives with one another, that's social. So, again, weblogs are not necessarily social, although they can support social patterns.
  • So email doesn't necessarily support social patterns, group patterns, although it can. Ditto a weblog. If I'm Glenn Reynolds, and I'm publishing something with Comments Off and reaching a million users a month, that's really broadcast.
  • So there's this very complicated moment of a group coming together, where enough individuals, for whatever reason, sort of agree that something worthwhile is happening, and the decision they make at that moment is: This is good and must be protected. And at that moment, even if it's subconscious, you start getting group effects. And the effects that we've seen come up over and over and over again in online communities.
  • You are at a party, and you get bored. You say "This isn't doing it for me anymore. I'd rather be someplace else."
  • The party fails to meet some threshold of interest. And then a really remarkable thing happens: You don't leave.
  • That kind of social stickiness is what Bion is talking about.
  • Twenty minutes later, one person stands up and gets their coat, and what happens? Suddenly everyone is getting their coats on, all at the same time. Which means that everyone had decided that the party was not for them, and no one had done anything about it, until finally this triggering event let the air out of the group, and everyone kind of felt okay about leaving.
  • This effect is so steady it's sometimes called the paradox of groups.
  • what's less obvious is that there are no members without a group.
  • there are some very specific patterns that they're entering into to defeat the ostensible purpose of the group meeting together. And he detailed three patterns.
  • The first is sex talk,
  • second basic pattern
  • The identification and vilification of external enemies.
  • So even if someone isn't really your enemy, identifying them as an enemy can cause a pleasant sense of group cohesion. And groups often gravitate towards members who are the most paranoid and make them leaders, because those are the people who are best at identifying external enemies.
  • third pattern Bion identified: Religious veneration
  • The religious pattern is, essentially, we have nominated something that's beyond critique.
  • So these are human patterns that have shown up on the Internet, not because of the software, but because it's being used by humans. Bion has identified this possibility of groups sandbagging their sophisticated goals with these basic urges. And what he finally came to, in analyzing this tension, is that group structure is necessary. Robert's Rules of Order are necessary. Constitutions are necessary. Norms, rituals, laws, the whole list of ways that we say, out of the universe of possible behaviors, we're going to draw a relatively small circle around the acceptable ones.
  • He said the group structure is necessary to defend the group from itself. Group structure exists to keep a group on target, on track, on message, on charter, whatever. To keep a group focused on its own sophisticated goals and to keep a group from sliding into these basic patterns. Group structure defends the group from the action of its own members.
  • technical and social issues are deeply intertwined. There's no way to completely separate them.
  • Some of the users wanted the system to continue to exist and to provide a forum for discussion. And other of the users, the high school boys, either didn't care or were actively inimical. And the system provided no way for the former group to defend itself from the latter.
  • What matters is, a group designed this and then was unable, in the context they'd set up, partly a technical and partly a social context, to save it from this attack from within. And attack from within is what matters.
  • This pattern has happened over and over and over again. Someone built the system, they assumed certain user behaviors. The users came on and exhibited different behaviors. And the people running the system discovered to their horror that the technological and social issues could not in fact be decoupled.
  • And the worst crisis is the first crisis, because it's not just "We need to have some rules." It's also "We need to have some rules for making some rules." And this is what we see over and over again in large and long-lived social software systems. Constitutions are a necessary component of large, long-lived, heterogeneous groups.
  • As a group commits to its existence as a group, and begins to think that the group is good or important, the chance that they will begin to call for additional structure, in order to defend themselves from themselves, gets very, very high.
  • The downside of going for size and scale above all else is that the dense, interconnected pattern that drives group conversation and collaboration isn't supportable at any large scale. Less is different -- small groups of people can engage in kinds of interaction that large groups can't. And so we blew past that interesting scale of small groups. Larger than a dozen, smaller than a few hundred, where people can actually have these conversational forms that can't be supported when you're talking about tens of thousands or millions of users, at least in a single group.
  • So the first answer to Why Now? is simply "Because it's time." I can't tell you why it took as long for weblogs to happen as it did, except to say it had absolutely nothing to do with technology. We had every bit of technology we needed to do weblogs the day Mosaic launched the first forms-capable browser. Every single piece of it was right there. Instead, we got Geocities. Why did we get Geocities and not weblogs? We didn't know what we were doing.
  • It took a long time to figure out that people talking to one another, instead of simply uploading badly-scanned photos of their cats, would be a useful pattern. We got the weblog pattern in around '96 with Drudge. We got weblog platforms starting in '98. The thing really was taking off in 2000. By last year, everyone realized: Omigod, this thing is going mainstream, and it's going to change everything.
  • Why was there an eight-year gap between a forms-capable browser and the Pepys diaries? I don't know. It just takes a while for people to get used to these ideas. So, first of all, this is a revolution in part because it is a revolution. We've internalized the ideas and people are now working with them. Second, the things that people are now building are web-native.
  • A weblog is web-native. It's the web all the way in. A wiki is a web-native way of hosting collaboration. It's lightweight, it's loosely coupled, it's easy to extend, it's easy to break down. And it's not just the surface, like oh, you can just do things in a form. It assumes http is transport. It assumes markup in the coding. RSS is a web-native way of doing syndication. So we're taking all of these tools and we're extending them in a way that lets us build new things really quickly.
  • Third, in David Weinberger's felicitous phrase, we can now start to have a Small Pieces Loosely Joined pattern.
  • You can say, in the conference call or the chat: "Go over to the wiki and look at this."
  • It's just three little pieces of software laid next to each other and held together with a little bit of social glue. This is an incredibly powerful pattern. It's different from: Let's take the Lotus juggernaut and add a web front-end.
  • And finally, and this is the thing that I think is the real freakout, is ubiquity.
  • In many situations, all people have access to the network. And "all" is a different kind of amount than "most." "All" lets you start taking things for granted.
  • But for some groups of people -- students, people in high-tech offices, knowledge workers -- everyone they work with is online. Everyone they're friends with is online. Everyone in their family is online.
  • And this pattern of ubiquity lets you start taking this for granted.
  • There's a second kind of ubiquity, which is the kind we're enjoying here thanks to Wifi. If you assume whenever a group of people are gathered together, that they can be both face to face and online at the same time, you can start to do different kinds of things. I now don't run a meeting without either having a chat room or a wiki up and running. Three weeks ago I ran a meeting for the Library of Congress. We had a wiki, set up by Socialtext, to capture a large and very dense amount of technical information on long-term digital preservation.
  • The people who organized the meeting had never used a wiki before, and now the Library of Congress is talking as if they always had a wiki for their meetings, and are assuming it's going to be at the next meeting as well -- the wiki went from novel to normal in a couple of days.
  • It really quickly becomes an assumption that a group can do things like "Oh, I took my PowerPoint slides, I showed them, and then I dumped them into the wiki. So now you can get at them." It becomes a sort of shared repository for group memory. This is new. These kinds of ubiquity, both everyone is online, and everyone who's in a room can be online together at the same time, can lead to new patterns.
  • "What is required to make a large, long-lived online group successful?" and I think I can now answer with some confidence: "It depends."
  • The normal experience of social software is failure. If you go into Yahoo groups and you map out the subscriptions, it is, unsurprisingly, a power law. There's a small number of highly populated groups, a moderate number of moderately populated groups, and this long, flat tail of failure. And the failure is inevitably more than 50% of the total mailing lists in any category. So it's not like a cake recipe. There's nothing you can do to make it come out right every time.
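Shirky's description of Yahoo groups, a few huge groups and a long flat tail of failure, is the signature of a power-law distribution. A small illustrative sketch (the rank-size rule and all numbers are invented, not Yahoo's data):

```python
# Illustrative sketch: draw group sizes from a Zipf-like rank-size rule
# and measure the "long, flat tail of failure". Parameters are invented.
def zipf_size(rank, s=1.2, scale=10000):
    # Size falls off as a power of rank: a few huge groups, many tiny ones.
    return max(1, int(scale / rank ** s))

sizes = [zipf_size(r) for r in range(1, 1001)]  # 1000 hypothetical groups

failed = sum(1 for n in sizes if n < 10)  # call a near-empty group a failure
print(f"largest group: {sizes[0]} members")
print(f"'failed' groups (<10 members): {failed / len(sizes):.0%}")
```

With these made-up parameters, well over half the groups end up in the near-empty tail, matching the "more than 50%" observation qualitatively.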
  • Of the things you have to accept, the first is that you cannot completely separate technical and social issues.
  • So the group is real. It will exhibit emergent effects. It can't be ignored, and it can't be programmed, which means you have an ongoing issue. And the best pattern, or at least the pattern that's worked the most often, is to put into the hands of the group itself the responsibility for defining what value is, and defending that value, rather than trying to ascribe those things in the software upfront.
  • Members are different than users. A pattern will arise in which there is some group of users that cares more than average about the integrity and success of the group as a whole. And that becomes your core group, Art Kleiner's phrase for "the group within the group that matters most."
  • But in all successful online communities that I've looked at, a core group arises that cares about and gardens effectively. Gardens the environment, to keep it growing, to keep it healthy.
  • The core group has rights that trump individual rights in some situations
  • And absolute citizenship, with the idea that if you can log in, you are a citizen, is a harmful pattern, because it is the tyranny of the majority. So the core group needs ways to defend itself -- both in getting started and because of the effects I talked about earlier -- the core group needs to defend itself so that it can stay on its sophisticated goals and away from its basic instincts.
  • All groups of any integrity have a constitution. The constitution is always partly formal and partly informal.
  • If you were going to build a piece of social software to support large and long-lived groups, what would you design for? The first thing you would design for is handles the user can invest in.
  • Second, you have to design a way for there to be members in good standing. Have to design some way in which good works get recognized. The minimal way is, posts appear with identity.
  • Three, you need barriers to participation.
  • It has to be hard to do at least some things on the system for some users, or the core group will not have the tools that they need to defend themselves.
  • The user of social software is the group, not the individual.
  • Reputation is not necessarily portable from one situation to another
  • If you want a good reputation system, just let me remember who you are. And if you do me a favor, I'll remember it. And I won't store it in the front of my brain, I'll store it here, in the back. I'll just get a good feeling next time I get email from you; I won't even remember why. And if you do me a disservice and I get email from you, my temples will start to throb, and I won't even remember why. If you give users a way of remembering one another, reputation will happen,
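Shirky's "just let me remember who you are" suggestion can be sketched as a private, per-user impression score rather than a public global number. The class and handles below are invented for illustration:

```python
from collections import defaultdict

class Impressions:
    """Minimal sketch of remembered reputation: each user keeps a private
    running feeling about others, not a global public score."""
    def __init__(self):
        self.feeling = defaultdict(int)  # handle -> accumulated impression

    def favor(self, handle):       # they did me a favor
        self.feeling[handle] += 1

    def disservice(self, handle):  # they did me a disservice
        self.feeling[handle] -= 1

    def gut_reaction(self, handle):
        # The "why" is forgotten; only the valence remains.
        score = self.feeling[handle]
        if score > 0:
            return "good feeling"
        return "throbbing temples" if score < 0 else "neutral"

me = Impressions()
me.favor("alice"); me.favor("alice"); me.disservice("bob")
print(me.gut_reaction("alice"))  # good feeling
print(me.gut_reaction("bob"))    # throbbing temples
```

The design point is that the score lives with the rememberer, so reputation "happens" without being stored or made portable by the system.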

Web 2.0: A New Wave of Innovation for Teaching and Learning? (EDUCAUSE Review) | EDUCAU... - 0 views

  • Web 2.0. It is about no single new development. Moreover, the term is often applied to a heterogeneous mix of relatively familiar and also very emergent technologies
  • Ultimately, the label “Web 2.0” is far less important than the concepts, projects, and practices included in its scope.
  • Social software has emerged as a major component of the Web 2.0 movement. The idea dates as far back as the 1960s and JCR Licklider’s thoughts on using networked computing to connect people in order to boost their knowledge and their ability to learn. The Internet technologies of the subsequent generation have been profoundly social, as listservs, Usenet groups, discussion software, groupware, and Web-based communities have linked people around the world.
  • It is true that blogs are Web pages, but their reverse-chronological structure implies a different rhetorical purpose than a Web page, which has no inherent timeliness. That altered rhetoric helped shape a different audience, the blogging public, with its emergent social practices of blogrolling, extensive hyperlinking, and discussion threads attached not to pages but to content chunks within them. Reading and searching this world is significantly different from searching the entire Web world. Still, social software does not indicate a sharp break with the old but, rather, the gradual emergence of a new type of practice.
  • Rather than following the notion of the Web as book, they are predicated on microcontent. Blogs are about posts, not pages. Wikis are streams of conversation, revision, amendment, and truncation. Podcasts are shuttled between Web sites, RSS feeds, and diverse players. These content blocks can be saved, summarized, addressed, copied, quoted, and built into new projects. Browsers respond to this boom in microcontent with bookmarklets in toolbars, letting users fling something from one page into a Web service that yields up another page. AJAX-style pages feed content bits into pages without reloading them, like the frames of old but without such blatant seams. They combine the widely used, open XML standard with JavaScript functions.3 Google Maps is a popular example of this, smoothly drawing directional information and satellite imagery down into a browser.
  • Web 2.0 builds on this original microcontent drive, with users developing Web content, often collaboratively and often open to the world.
  • openness remains a hallmark of this emergent movement, both ideologically and technologically.
  • Drawing on the “wisdom of crowds” argument, Web 2.0 services respond more deeply to users than Web 1.0 services. A leading form of this is a controversial new form of metadata, the folksonomy.
  • Third, people tend to tag socially. That is, they learn from other taggers and respond to other, published groups of tags, or “tagsets.”
  • First, users actually use tags.
  • Social bookmarking is one of the signature Web 2.0 categories, one that did not exist a few years ago and that is now represented by dozens of projects.
  • This is classic social software—and a rare case of people connecting through shared metadata.
  • RawSugar (http://www.rawsugar.com/) and several others expand user personalization. They can present a user’s picture, some background about the person, a feed of their interests, and so on, creating a broader base for bookmark publishing and sharing. This may extend the appeal of the practice to those who find the focus of del.icio.us too narrow. In this way too, a Web 2.0 project learns from others—here, blogs and social networking tools.
  • How can social bookmarking play a role in higher education? Pedagogical applications stem from their affordance of collaborative information discovery.
  • First, they act as an “outboard memory,” a location to store links that might be lost to time, scattered across different browser bookmark settings, or distributed in e-mails, printouts, and Web links. Second, finding people with related interests can magnify one’s work by learning from others or by leading to new collaborations. Third, the practice of user-created tagging can offer new perspectives on one’s research, as clusters of tags reveal patterns (or absences) not immediately visible by examining one of several URLs. Fourth, the ability to create multi-authored bookmark pages can be useful for team projects, as each member can upload resources discovered, no matter their location or timing. Tagging can then surface individual perspectives within the collective. Fifth, following a bookmark site gives insights into the owner’s (or owners’) research, which could play well in a classroom setting as an instructor tracks students’ progress. Students, in turn, can learn from their professor’s discoveries.
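The "connecting people through shared metadata" idea can be sketched as a store of per-user tagsets plus a naive overlap measure (Jaccard similarity) for surfacing people with related interests. All users, URLs, and tags below are invented:

```python
# Hypothetical sketch of a del.icio.us-style store of (user, url, tags),
# with a naive overlap measure for finding taggers with related interests.
from collections import defaultdict

bookmarks = defaultdict(dict)  # user -> {url: set of tags}

def tag(user, url, *tags):
    bookmarks[user].setdefault(url, set()).update(tags)

def tagset(user):
    # All tags a user has ever applied: their visible research profile.
    return set().union(*bookmarks[user].values()) if bookmarks[user] else set()

def overlap(u, v):
    # Jaccard similarity between two users' tagsets.
    a, b = tagset(u), tagset(v)
    return len(a & b) / len(a | b) if a | b else 0.0

tag("ana", "http://example.org/cfr", "copyright", "web2.0", "education")
tag("ben", "http://example.org/cc",  "copyright", "creativecommons", "education")
tag("cho", "http://example.org/gps", "gps", "maps")

print(f"ana~ben overlap: {overlap('ana', 'ben'):.2f}")  # shared interests surface
print(f"ana~cho overlap: {overlap('ana', 'cho'):.2f}")  # none
```

A high overlap score is exactly the "finding people with related interests" affordance the passage describes, computed from nothing but published tags.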
  • After e-mail lists, discussion forums, groupware, documents edited and exchanged between individuals, and blogs, perhaps the writing application most thoroughly grounded in social interaction is the wiki. Wiki pages allow users to quickly edit their content from within the browser window.11 They originally hit the Web in the late 1990s (another sign that Web 2.0 is emergent and historical, not a brand-new thing)
  • How do social writing platforms intersect with the world of higher education? They appear to be logistically useful tools for a variety of campus needs, from student group learning to faculty department work to staff collaborations. Pedagogically, one can imagine writing exercises based on these tools, building on the established body of collaborative composition practice. These services offer an alternative platform for peer editing, supporting the now-traditional elements of computer-mediated writing—asynchronous writing, groupwork for distributed members
  • Blogging has become, in many ways, the signature item of social software, being a form of digital writing that has grown rapidly into an influential force in many venues, both on- and off-line. One reason for the popularity of blogs is the way they embody the read/write Web notion. Readers can push back on a blog post by commenting on it. These comments are then addressable, forming new microcontent. Web services have grown up around blog comments, most recently in the form of aggregation tools, such as coComment (http://www.cocomment.com/). CoComment lets users keep track of their comments across myriad sites, via a tiny bookmarklet and a single Web page.
  • Technorati (http://technorati.com/) and IceRocket (http://icerocket.com/) head in the opposite direction of these sites, searching for who (usually a blogger) has recently linked to a specific item or site. Technorati is perhaps the most famous blog-search tool. Among other functions, it has emphasized tagging as part of search and discovery, recommending (and rewarding) users who add tags to their blog posts. Bloggers can register their site for free with Technorati; their posts will then be searchable by content and supplemental tags.
  • Many of these services allow users to save their searches as RSS feeds to be returned to and examined in an RSS reader, such as Bloglines (http://www.bloglines.com/) or NetNewsWire (http://ranchero.com/netnewswire/). This subtle ability is neatly recursive in Web 2.0 terms, since it lets users create microcontent (RSS search terms) about microcontent (blog posts). Being merely text strings, such search feeds are shareable in all sorts of ways, so one can imagine collaborative research projects based on growing swarms of these feeds—social bookmarking plus social search.
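Since a saved-search feed is "merely a text string" of RSS, reading one needs nothing beyond an XML parser. A minimal sketch using the Python standard library and an invented sample feed rather than a real search service:

```python
# Sketch of reading a saved-search RSS feed with only the standard library.
# The feed XML is an invented sample; a real one would come from a search
# service's feed URL.
import xml.etree.ElementTree as ET

feed_xml = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Search: "digital preservation"</title>
  <item><title>New LC wiki notes</title><link>http://example.org/1</link></item>
  <item><title>Preservation formats post</title><link>http://example.org/2</link></item>
</channel></rss>"""

channel = ET.fromstring(feed_xml).find("channel")
print(channel.findtext("title"))
for item in channel.findall("item"):
    print("-", item.findtext("title"), "->", item.findtext("link"))
```

Because the whole search is carried in the feed's text, sharing it with collaborators is just sharing the string, which is the recursive microcontent-about-microcontent point.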
  • Students can search the blogosphere for political commentary, current cultural items, public developments in science, business news, and so on.
  • The ability to save and share a search, and in the case of PubSub, to literally search the future, lets students and faculty follow a search over time, perhaps across a span of weeks in a semester. As the live content changes, tools like Waypath’s topic stream, BlogPulse’s trend visualizations, or DayPop’s word generator let a student analyze how a story, topic, idea, or discussion changes over time. Furthermore, the social nature of these tools means that collaboration between classes, departments, campuses, or regions is easily supported. One could imagine faculty and students across the United States following, for example, the career of an Islamic feminist or the outcome of a genomic patent and discussing the issue through these and other Web 2.0 tools. Such a collaboration could, in turn, be discovered, followed, and perhaps joined by students and faculty around the world. Extending the image, one can imagine such a social research object becoming a learning object or an alternative to courseware.
  • A glance at Blogdex offers a rough snapshot of what the blogosphere is tending to pay attention to.
  • A closer look at an individual Blogdex result reveals the blogs that link to a story. As we saw with del.icio.us, this publication of interest allows the user to follow up on commentary, to see why those links are there, and to learn about those doing the linking. Once again, this is a service that connects people through shared interest in information.
  • The rich search possibilities opened up by these tools can further enhance the pedagogy of current events. A political science class could explore different views of a news story through traditional media using Google News, then from the world of blogs via Memeorandum. A history class could use Blogdex in an exercise in thinking about worldviews. There are also possibilities for a campus information environment. What would a student newspaper look like, for example, with a section based on the Digg approach or the OhmyNews structure? Thematizing these tools as objects for academic scrutiny, the operation and success of such projects is worthy of study in numerous disciplines, from communication to media studies, sociology to computer science.
  • At the same time, many services are hosted externally to academia. They are the creations of enthusiasts or business enterprises and do not necessarily embrace the culture of higher education.
  • Lawrence Lessig, J. D. Lasica, and others remind us that as tools get easier to use and practices become more widespread, it also becomes easier for average citizens to commit copyright violations.19
    • Barbara Lindsey
       
      Which is why he led the Creative Commons Movement and why he exhorts us to re-imagine copyright.
  • Web 2.0’s lowered barrier to entry may influence a variety of cultural forms with powerful implications for education, from storytelling to classroom teaching to individual learning. It is much simpler to set up a del.icio.us tag for a topic one wants to pursue or to spin off a blog or blog departmental topic than it is to physically meet co-learners and experts in a classroom or even to track down a professor. Starting a wiki-level text entry is far easier than beginning an article or book.
  • How can higher education respond, when it offers a complex, contradictory mix of openness and restriction, public engagement and cloistering?

GroupTweet | Learn About GroupTweet - 0 views

  •  
    GroupTweet is a service that works behind the scenes with Twitter to allow users to form groups and communicate within those groups. It enhances the direct message feature already available within Twitter that allows users to send a private message to another user. GroupTweet takes that same direct message and sends it to multiple users.
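The fan-out behavior described above can be sketched in a few lines; the `send` function here is a stand-in for delivery, not Twitter's actual API:

```python
# Illustrative sketch of GroupTweet's core behavior: one direct message
# to the group account is re-delivered as a DM to every other member.
def fan_out(sender, group_members, text, send):
    """Re-deliver one incoming DM to every member except the sender."""
    delivered = []
    for member in group_members:
        if member != sender:
            send(member, f"[{sender}] {text}")
            delivered.append(member)
    return delivered

outbox = []
members = ["ana", "ben", "cho"]
fan_out("ana", members, "meeting moved to 3pm",
        send=lambda to, msg: outbox.append((to, msg)))
print(outbox)
```

Prefixing the original sender's handle preserves attribution even though the messages technically arrive from the group account.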

Visualize your GPS tracks with Breadcrumbs | Google Earth Blog - 0 views

  • Our users can log their ski trip, hiking trip or sightseeing trip, upload it to Breadcrumbs with their photos and videos, and send it to all their friends, who can relive the adventure in 3D. And this is only the start, as we plan to provide our users with a platform to not only edit and maintain tracks, but also to find new places to explore and interact within a social network.
  • Breadcrumbs is the first web application of its kind, where users can manage GPS tracks, photos and videos in one place - it can be thought of as 'Flickr for GPS tracks'.
  • The key features of Breadcrumbs include:
    - Relive your adventure: Breadcrumbs brings together photos, videos and GPS tracks in one quick and easy process, and our 3D playback function brings the track alive.
    - Edit and manage: Breadcrumbs comes with a suite of tools which let users edit and manage their GPS tracks, photos and videos. These include automated geotagging of photos, a track editing tool to correct GPS points, and the ability to add information to your adventure to help tell the story, such as where you ate your lunch or spotted some wildlife.
    - Organize: Breadcrumbs offers a rich set of tools to help users to manage adventures.
    - Share: Breadcrumbs makes it easy to share adventures, with options including a public page for each track and direct integration with Facebook.
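Of the features listed, automated geotagging is the most algorithmic: a photo's position can be estimated by interpolating the GPS track at the photo's capture time. A plausible sketch with invented timestamps and coordinates, not Breadcrumbs' actual code:

```python
# Sketch of automated photo geotagging: given a GPS track of
# (timestamp, lat, lon) fixes, linearly interpolate each photo's
# position from its capture time. All numbers are invented.
def geotag(track, photo_time):
    """track: list of (t, lat, lon) sorted by t. Returns (lat, lon)."""
    if photo_time <= track[0][0]:
        return track[0][1:]            # before the track started
    if photo_time >= track[-1][0]:
        return track[-1][1:]           # after the track ended
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        if t0 <= photo_time <= t1:
            f = (photo_time - t0) / (t1 - t0)  # fraction along this leg
            return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))

track = [(0, 46.0, 7.0), (100, 46.1, 7.2), (200, 46.2, 7.2)]
print(geotag(track, 50))  # halfway along the first leg
```

Linear interpolation is a reasonable default between fixes a few seconds apart; clamping to the endpoints handles photos taken before or after recording.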
  • Our utilization and heavy integration with the Google Earth plugin is also a big bonus for the user. Garmin allows you to look at your data in Google Maps and indeed Google Earth. However, Breadcrumbs builds on this as we have built a track playback feature on top of the plugin which allows you to hit play and replay your trip step by step. It's like watching a movie of your day out! This really does bring the users' GPS data to life especially when sharing with friends and family.
  • We are already integrated with one smartphone application allowing the user to push their tracks directly to Breadcrumbs from their phone.

Wikipedia Pushes for Users to Add Videos - Wired Campus - The Chronicle of Higher Educa... - 0 views

  • For some institutions, the biggest roadblock to posting video to Web sites is the worry that the videos can be taken by other people, Mr. Moskowitz said. Because information posted on Wikipedia is free and unlicensed, someone can use a posted video for a profit-making venture. Mr. Moskowitz said the best strategy for protecting your videos is to keep the HD version of a video for your own use and post the standard-definition version to Wikipedia. Institutions could brand videos as well, although other users could crop out the institutional seal or post a new video in its place.
  •  
    Can Creative Commons be a good alternative to these troubling issues?

The New Gold Mine: Your Personal Information & Tracking Data Online - WSJ.com - 0 views

  • the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.
  • The study found that the nation's 50 top websites on average installed 64 pieces of tracking technology onto the computers of visitors, usually with no warning. A dozen sites each installed more than a hundred. The nonprofit Wikipedia installed none.
  • the Journal found new tools that scan in real time what people are doing on a Web page, then instantly assess location, income, shopping interests and even medical conditions. Some tools surreptitiously re-spawn themselves even after users try to delete them.
  • These profiles of individuals, constantly refreshed, are bought and sold on stock-market-like exchanges that have sprung up in the past 18 months.
  • Advertisers once primarily bought ads on specific Web pages—a car ad on a car site. Now, advertisers are paying a premium to follow people around the Internet, wherever they go, with highly specific marketing messages.
  • "It is a sea change in the way the industry works," says Omar Tawakol, CEO of BlueKai. "Advertisers want to buy access to people, not Web pages."
  • The Journal found that Microsoft Corp.'s popular Web portal, MSN.com, planted a tracking file packed with data: It had a prediction of a surfer's age, ZIP Code and gender, plus a code containing estimates of income, marital status, presence of children and home ownership, according to the tracking company that created the file, Targus Information Corp.
  • Tracking is done by tiny files and programs known as "cookies," "Flash cookies" and "beacons." They are placed on a computer when a user visits a website. U.S. courts have ruled that it is legal to deploy the simplest type, cookies, just as someone using a telephone might allow a friend to listen in on a conversation. Courts haven't ruled on the more complex trackers.
  • tracking companies sometimes hide their files within free software offered to websites, or hide them within other tracking files or ads. When this happens, websites aren't always aware that they're installing the files on visitors' computers.
  • Often staffed by "quants," or math gurus with expertise in quantitative analysis, some tracking companies use probability algorithms to try to pair what they know about a person's online behavior with data from offline sources about household income, geography and education, among other things. The goal is to make sophisticated assumptions in real time—plans for a summer vacation, the likelihood of repaying a loan—and sell those conclusions.
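The kind of real-time scoring described, turning observed behaviors into probability estimates, can be caricatured with a toy logistic model. The signals and weights below are invented and are not any tracking company's actual model:

```python
import math

# Purely illustrative: a toy of the real-time scoring the article
# describes. Signals, weights, and bias are invented.
WEIGHTS = {"searched_flights": 2.0, "read_hotel_reviews": 1.5,
           "visited_bank_site": -0.5}
BIAS = -2.0

def p_summer_vacation(signals):
    """Crude logistic score: observed behaviors -> probability estimate."""
    z = BIAS + sum(WEIGHTS.get(s, 0.0) for s in signals)
    return 1 / (1 + math.exp(-z))

print(f"{p_summer_vacation(['searched_flights', 'read_hotel_reviews']):.2f}")
print(f"{p_summer_vacation(['visited_bank_site']):.2f}")
```

Real systems blend many more signals with offline household data, but the shape is the same: weighted evidence squashed into a probability that is then sold, not verified.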
  • Consumer tracking is the foundation of an online advertising economy that racked up $23 billion in ad spending last year. Tracking activity is exploding. Researchers at AT&T Labs and Worcester Polytechnic Institute last fall found tracking technology on 80% of 1,000 popular sites, up from 40% of those sites in 2005.
  • The Journal found tracking files that collect sensitive health and financial data. On Encyclopaedia Britannica Inc.'s dictionary website Merriam-Webster.com, one tracking file from Healthline Networks Inc., an ad network, scans the page a user is viewing and targets ads related to what it sees there.
    • Barbara Lindsey
       
      Tracking you and targeting ads to you on a popular dictionary site!
  • Beacons, also known as "Web bugs" and "pixels," are small pieces of software that run on a Web page. They can track what a user is doing on the page, including what is being typed or where the mouse is moving.
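One common way a beacon reports home is a 1x1 image request whose query string carries what was observed on the page. The sketch below builds such a URL; the endpoint and parameter names are invented for illustration.

```javascript
// Sketch of a beacon "phoning home": observations are packed into the query
// string of a tiny image request. The tracker endpoint is hypothetical.

function beaconUrl(endpoint, observations) {
  const params = Object.entries(observations)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
  return `${endpoint}?${params}`;
}

// In a browser, the request would be fired simply by assigning the URL to
// an image: new Image().src = beaconUrl(...). No click or consent needed.
```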
  • The majority of sites examined by the Journal placed at least seven beacons from outside companies. Dictionary.com had the most, 41, including several from companies that track health conditions and one that says it can target consumers by dozens of factors, including zip code and race.
  • After the Journal contacted the company, it cut the number of networks it uses and beefed up its privacy policy to more fully disclose its practices.
  • Flash cookies can also be used by data collectors to re-install regular cookies that a user has deleted. This can circumvent a user's attempt to avoid being tracked online. Adobe condemns the practice.
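The respawning trick the excerpt describes reduces to a simple rule: keep a backup of the user ID in a second store the user rarely clears (historically, Flash Local Shared Objects), and if the HTTP cookie disappears, quietly restore it. The sketch models both stores as plain objects.

```javascript
// Sketch of cookie "respawning": a backup store silently undoes the user's
// deletion of the HTTP cookie on the next visit.

function respawn(httpCookies, backupStore) {
  if (!httpCookies.uid && backupStore.uid) {
    httpCookies.uid = backupStore.uid;            // deletion is undone
    return true;
  }
  if (httpCookies.uid) backupStore.uid = httpCookies.uid; // keep backup fresh
  return false;
}
```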
  • Most sites examined by the Journal installed no Flash cookies. Comcast.net installed 55.
  • Wittingly or not, people pay a price in reduced privacy for the information and services they receive online. Dictionary.com, the site with the most tracking files, is a case study.
  • Think about how these technologies and the associated analytics can be used in other industries and social settings (e.g., education) for real beneficial impacts. This is nothing new for the web, but now that it has matured, it can be a positive game-changer.
  • Media6Degrees Inc., whose technology was found on three sites by the Journal, is pitching banks to use its data to size up consumers based on their social connections. The idea is that the creditworthy tend to hang out with the creditworthy, and deadbeats with deadbeats.
  • "There are applications of this technology that can be very powerful," says Tom Phillips, CEO of Media6Degrees. "Who knows how far we'd take it?"
  • Hidden inside Ashley Hayes-Beaty's computer, a tiny file helps gather personal details about her, all to be put up for sale for a tenth of a penny.
  • "We can segment it all the way down to one person," says Eric Porres, Lotame's chief marketing officer.
  • One of the fastest-growing businesses on the Internet, a Wall Street Journal investigation has found, is the business of spying on Internet users.
  • Yahoo Inc.'s ad network,
  • "Every time I go on the Internet," she says, she sees weight-loss ads. "I'm self-conscious about my weight," says Ms. Reid, whose father asked that her hometown not be given. "I try not to think about it…. Then [the ads] make me start thinking about it."
  • Information about people's moment-to-moment thoughts and actions, as revealed by their online activity, can change hands quickly. Within seconds of visiting eBay.com or Expedia.com, information detailing a Web surfer's activity there is likely to be auctioned on the data exchange run by BlueKai, the Seattle startup.
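The exchange described above can be modeled as a toy auction: bidders examine a freshly observed profile and submit prices; the highest bidder wins the right to target that person. Bidder names, prices, and the pricing rule are all invented; real exchanges run far more complex real-time bidding protocols.

```javascript
// Toy model of a real-time data auction over a user's profile.

function runAuction(profile, bidders) {
  const bids = bidders
    .map(b => ({ name: b.name, price: b.bid(profile) }))
    .filter(b => b.price > 0)
    .sort((a, b) => b.price - a.price);   // highest price first
  return bids.length ? bids[0] : null;    // winner pays its own bid here
}
```

Note the scale implied by the article: prices on the order of a tenth of a penny per profile, repeated across billions of page views.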
  • a New York company that uses sophisticated software called a "beacon" to capture what people are typing on a website

Web 2.0 Storytelling: Emergence of a New Genre (EDUCAUSE Review) | EDUCAUSE - 2 views

  • A story is told by one person or by a creative team to an audience that is usually quiet, even receptive. Or at least that’s what a story used to be, and that’s how a story used to be told. Today, with digital networks and social media, this pattern is changing. Stories now are open-ended, branching, hyperlinked, cross-media, participatory, exploratory, and unpredictable. And they are told in new ways: Web 2.0 storytelling picks up these new types of stories and runs with them, accelerating the pace of creation and participation while revealing new directions for narratives to flow.
    • Barbara Lindsey
       
      Do you agree with this statement?
    • loisramirez
       
      I also agree with the statement. A story in this age can take on a life of its own (or many, depending on the variations created); it allows constant input by others and, consequently, the evolution of the text and the author as well.
  • To further define the term, we should begin by explaining what we mean by its first part: Web 2.0. Tim O'Reilly coined Web 2.0 in 2004,1 but the label remains difficult to acceptably define. For our present discussion, we will identify two essential features that are useful in distinguishing Web 2.0 projects and platforms from the rest of the web: microcontent and social media.2
  • creating a website through Web 2.0 tools is a radically different matter compared with the days of HTML hand-coding and of moving files with FTP clients.
  • out of those manifold ways of writing and showing have emerged new practices for telling stories.
  • Web 2.0 platforms are often structured to be organized around people rather than the traditional computer hierarchies of directory trees.
    • loisramirez
       
      I think this is a very important feature. Since the web is no longer as static and is more people-friendly, we as users feel more encouraged to collaborate and create our own content.
  • Websites designed in the 1990s and later offered few connecting points for individuals, generally speaking, other than perhaps a guestbook or a link to an e-mail address. But Web 2.0 tools are built to combine microcontent from different users with a shared interest:
  • If readers closely examine a Web 2.0 project, they will find that it is often touched by multiple people, whether in the content creation or via associated comments or discussion areas. If they participate actively, by contributing content, we have what many call social media.
  • But Web 2.0's lowered bar to content creation, combined with increased social connectivity, ramps up the ease and number of such conversations, which are able to extend outside the bounds of a single environment.
    • Barbara Lindsey
       
      Does the definition of Web 2.0 given in this article help you to better understand your experiences thus far in this course?
  • Another influential factor of Web 2.0 is findability: the use of comprehensive search tools that help story creators (and readers) quickly locate related microcontent with just a few keywords typed into a search field.
  • Social bookmarking and content tagging
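The findability the article credits to tagging can be sketched as a small inverted index: each microcontent item is filed under its tags, so a couple of keywords retrieve every related piece. The items and tags below are invented examples.

```javascript
// Sketch of tag-based findability: index microcontent by tag, then look up
// items matching every requested tag.

function buildTagIndex(items) {
  const index = new Map();
  for (const item of items)
    for (const tag of item.tags) {
      if (!index.has(tag)) index.set(tag, []);
      index.get(tag).push(item.id);
    }
  return index;
}

function find(index, tags) {
  // Intersection: only items carrying all of the requested tags.
  const lists = tags.map(t => index.get(t) || []);
  return lists.reduce((a, b) => a.filter(id => b.includes(id)));
}
```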
  • the "art of conveying events in words, images, and sounds often by improvisation or embellishment."4 Annette Simmons sees the storyteller’s empathy and sensory detail as crucial to "the unique capability to tap into a complex situation we have all experienced and which we all recognize."5
    • loisramirez
       
      I also agree with this comment, something as simple as a keyword can trigger a memory and bring back information that we have learned.
  • Web 2.0 stories are often broader: they can represent history, fantasy, a presentation, a puzzle, a message, or something that blurs the boundaries of reality and fiction.
  • On one level, web users experienced a great deal of digital narratives created in non-web venues but published in HTML, such as embedded audio clips, streaming video, and animation through the Flash plug-in. On another level, they experienced stories using web pages as hypertext lexia, chunks of content connected by hyperlinks.
  • While HTML narratives continued to be produced, digital storytelling by video also began, drawing on groundbreaking video projects from the 1970s.
  • By the time of the emergence of blogs and YouTube as cultural media outlets, Tim O'Reilly's naming of Web 2.0, and the advent of social media, storytelling with digital tools had been at work for nearly a generation.
  • Starting from our definitions, we should expect Web 2.0 storytelling to consist of Web 2.0 practices.
  • In each of these cases, the relative ease of creating web content enabled social connections around and to story materials.
  • Web 2.0 creators have many options about the paths to set before their users. Web 2.0 storytelling can be fully hypertextual in its multilinearity. At any time, the audience can go out of the bounds of the story to research information (e.g., checking names in Google searches or looking for background information in Wikipedia).
  • User-generated content is a key element of Web 2.0 and can often enter into these stories. A reader can add content into story platforms directly: editing a wiki page, commenting on a post, replying in a Twitter feed, posting a video response in YouTube. Those interactions fold into the experience of the overall story from the perspective of subsequent readers.
  • On a less complex level, consider the 9th Btn Y & L War Diaries blog project, which posts diary entries from a World War I veteran. A June 2008 post (http://yldiaries.blogspot.com/2008_06_01_archive.html) contains a full wartime document, but the set of comments from others (seven, as of this writing) offer foreshadowing, explication of terms, and context.
    • Barbara Lindsey
       
      Consider how these new media create rich dissertation and research opportunities.
  • As with the rest of Web 2.0, it is up to readers and viewers to analyze and interpret such content and usually to do so collaboratively.
  • At times, this distributed art form can range beyond the immediate control of a creator.
  • Creators can stage content from different sites.
  • Other forms leverage the Web 2.0 strategies of aggregating large amounts of microcontent and creatively selecting patterns out of an almost unfathomable volume of information.
  • The Twitter content form (140-character microstories) permits stories to be told in serialized portions spread over time.
    • loisramirez
       
      It is also a great way to practice creative writing: the 140-character limitation poses a new challenge for a writer, namely how to say a lot in just a few words.
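The serialization the article describes, a story doled out in tweet-sized installments, comes down to splitting text on word boundaries into chunks of at most 140 characters. A minimal sketch:

```javascript
// Sketch of serializing a story into tweet-sized portions: accumulate words
// until the next one would push the chunk past the limit.

function serialize(story, limit = 140) {
  const words = story.split(/\s+/);
  const chunks = [];
  let current = "";
  for (const w of words) {
    if ((current + " " + w).trim().length > limit) {
      chunks.push(current.trim());
      current = w;
    } else {
      current = (current + " " + w).trim();
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

A real serialized story would also schedule the chunks over time, which is exactly where the suspense of the form comes from.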
  • It also poses several challenges: to what extent can we fragment (or ‘microchunk,’ in the latest parlance) literature before it becomes incoherent? How many media can literature be forced into—if, indeed, there is any limit?"
  • Facebook application that remixes photos drawn from Flickr (based on tags) with a set of texts that generate a dynamic graphic novel.
  • movie trailer recuts
  • At a different—perhaps meta—level, the boundaries of Web 2.0 stories are not necessarily clear. A story's boundaries are clear when it is self-contained, say in a DVD or XBox360 game. But can we know for sure that all the followers of a story's Twitter feed, for example, are people who are not involved directly in the project? Turning this question around, how do we know that we've taken the right measure of just how far a story goes, when we could be missing one character's blog or a setting description carefully maintained by the author on Wikipedia?
  • The Beast was described by its developer, Sean Stewart: “We would tell a story that was not bound by communication platform: it would come at you over the web, by email, via fax and phone and billboard and TV and newspaper, SMS and skywriting and smoke signals too if we could figure out how.
  • instead of telling a story, we would present the evidence of that story, and let the players tell it to themselves.”15
    • Barbara Lindsey
       
      How might your students who come to your courses with these kinds of experiences impact the way you present your content?
  • In addition, the project served as an illustrative example of the fact that no one can know about all of the possible web tools that are available.
    • Barbara Lindsey
       
      How might we address this conundrum?
  • web video storytelling, primarily through YouTube
  • Web 2.0 storytelling offers two main applications for colleges and universities: as composition platform and as curricular object.
  • Students can use blogs as character studies.
  • The reader is driven to read more, not only within the rest of that post but also across the other sites of the story: the archive of posts so far, the MySpace page, the resources copied and pointed to. Perhaps the reader ranges beyond the site, to the rest of the research world—maybe he or she even composes a response in some Web 2.0 venue.
  • Yet the blog form, which accentuates this narrative, is accessible to anyone with a browser. Examples like Project 1968 offer ready models for aspiring writers to learn from. Even though the purpose of Project 1968 is not immediately tied to a class, it is a fine example for all sorts of curricular instances, from history to political science, creative writing to gender studies, sociology to economics.
  • it’s worth remembering that using Web 2.0 storytelling is partly a matter of scale. Some projects can be Web 2.0 stories, while others integrate Web 2.0 storytelling practices.
  • Lecturers are familiar with telling stories as examples, as a way to get a subject across. They end discussions with a challenging question and create characters to embody parts of content (political actors, scientists, composite types). Imagine applying those habits to a class Twitter feed or Facebook group.
  • For narrative studies, Web 2.0 stories offer an unusual blend of formal features, from the blurry boundaries around each story to questions of chronology.
  • An epistolary novel, trial documents, a lab experiment, or a soldier's diaries—for example, WW1: Experiences of an English Soldier (http://wwar1.blogspot.com/)—come to life in this new format.
  • epigrams are well suited to being republished or published by microblogging tools, which focus the reader’s attention on these compressed phrases. An example is the posting of Oscar Wilde’s Phrases and Philosophies for the Use of the Young (1894), on Twitter (http://twitter.com/oscarwilde). Other compressed forms of writing can be microblogged also, such as Félix Fénéon's Novels in Three Lines (1906), also on Twitter (http://twitter.com/novelsin3lines). As Dan Visel observed of the latter project: “Fénéon . . . was secretly a master of miniaturized text. . . . Fénéon's hypercompression lends itself to Twitter. In a book, these pieces don't quite have space to breathe; they're crowded by each other, and it's more difficult for the reader to savor them individually. As Twitter posts, they're perfectly self-contained, as they would have been when they appeared as feuilleton.”21
  • A publicly shared Web 2.0 story, created by students for a class, afterward becomes something that other students can explore. Put another way, this learning tool can produce materials that subsequently will be available as learning objects.
  • We expect to see new forms develop from older ones as this narrative world grows—even e-mail might become a new storytelling tool.22 Moreover, these storytelling strategies could be supplanted completely by some semantic platform currently under development. Large-scale gaming might become a more popular engine for content creation. And mobile devices could make microcontent the preferred way to experience digital stories.
  • perhaps the best approach for educators is simply to give Web 2.0 storytelling a try and see what happens. We invite you to jump down the rabbit hole. Add a photo to Flickr and use that as a writing prompt. Flesh out a character in Twitter. Follow a drama unfolding on YouTube. See how a wiki supports the gradual development of a setting. Then share with all of us what you have learned about this new way of telling, and listening to, stories.
  • The interwoven characters, relationships, settings, and scenes that result are the stuff of stories, regardless of how closely mapped onto reality they might be; this also distinguishes a Web 2.0 story from other blogging forms, such as political or project sites (except as satire or criticism!).
  • in sharp contrast to the singular flow of digital storytelling. In the latter form, authors create linear narratives, bound to the clear, unitary, and unidirectional timeline of the video format and the traditional story arc. Web 2.0 narratives can follow that timeline, and podcasts in particular must do so. But they can also link in multiple directions.
  • By Bryan Alexander and Alan Levine

Apple's iPhone OS 4.0: What Will It Mean for Mobile Learning? by Bill Brandon : Learni... - 0 views

  • This has pretty exciting possibilities for Webinar/virtual classroom applications. The demo this morning was Skype. Until now, if you weren’t running Skype in the foreground on your iPhone, you couldn’t receive calls, and if you left the Skype app during a call, the call would disconnect. Now even when the phone is locked, you will still be able to receive Skype calls. When the phone is asleep or when the user is running other apps, VoIP apps can receive calls. When you send Skype to the background, incoming call invites will appear as the standard iPhone/iPod notification. Clicking the answer button brings the Skype app back. One question in the backchat during the presentation was whether there might be a new iPhone coming with a front-facing camera for such calls, but this went unanswered.
  • The new OS, in the background, will use cell towers to detect the phone’s location, in order to minimize power demands. The primary use of this service will be for turn-by-turn navigation. The secondary use will be to support social networking apps, such as Loopt. The OS has privacy protection for this service. An indicator on the status bar lets the user know when an app is using his or her location. The user can enable and disable location use by individual apps. This service could be useful for “location-based learning.”
  • Users can read books on any device (iPhone/iPod Touch/iPad). The books sync between devices, so that a user can stop reading a book on one device, then open it on another, and at the same place. This could be extremely handy for textbook use.
  • Mobile device management
  • Wireless application distribution (iTunes sync not required)

Web 2.0: What does it constitute? | 11 Feb 2008 | ComputerWeekly.com - 0 views

  • O'Reilly identified Google as "the standard bearer for Web 2.0", and pointed out the differences between it and predecessors such as Netscape, which tried to adapt for the web the business model established by Microsoft and other PC software suppliers.
  • Google "began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly.
  • perpetual beta, as O'Reilly later dubbed it
  • Perhaps the most important breakthrough was Google's willingness to relinquish control of the user-end of the transaction, instead of trying to lock them in with proprietary technology and restrictive licensing
  • O'Reilly took a second Web 2.0 principle from Peer-to-Peer pioneer BitTorrent, which works by completely decentralising the delivery of files, with every client also functioning as a server. The more popular a file is, the faster it can be served, since there are more users providing bandwidth and fragments of the file. Thus, "the service automatically gets better the more people use it".
  • Taking another model from open source, users are treated as "co-developers", actively encouraged to contribute, and monitored in real time to see what they are using, and how they are using it.
  • "Until Web 2.0 the learning curve to creating websites was quite high, complex, and a definite barrier to entry," says the third of our triumvirate of Tims, Tim Bray, director of Web Technologies at Sun Microsystems.
  • Web 2.0 takes some of its philosophical underpinning from James Surowiecki's book The Wisdom of Crowds, which asserts that the aggregated insights of large groups of diverse people can provide better answers and innovations than individual experts.
  • In practice, even fewer than 1% of people may be making a useful contribution - but these may be the most energetic and able members of a very large community. In 2006 1,000 people, just 0.003% of its users, contributed around two-thirds of Wikipedia's edits.
  • Ajax speeds up response times by enabling just part of a page to be updated, instead of downloading a whole new page. Nielsen's objections include that this breaks the "back" button - the ability to get back to where you've been, which Nielsen says is the second most used feature in Web navigation.
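Nielsen's back-button objection can be made concrete with a toy browser model: full page loads push entries onto the history stack, but a plain Ajax update mutates the page in place without recording anything, so pressing Back skips over every Ajax-driven state the user saw. The class below is a simplified model, not browser internals; modern pages work around this with the History API (`pushState`).

```javascript
// Toy model of why plain Ajax "breaks" the Back button.

class BrowserModel {
  constructor(url) { this.history = [url]; this.pos = 0; this.partial = null; }
  navigate(url) {                 // full page load: recorded in history
    this.history = this.history.slice(0, this.pos + 1).concat(url);
    this.pos += 1;
    this.partial = null;
  }
  ajaxUpdate(fragment) {          // partial update: NOT recorded in history
    this.partial = fragment;
  }
  back() { if (this.pos > 0) { this.pos -= 1; this.partial = null; } }
  get location() { return this.history[this.pos]; }
}
```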
  • "Everybody who has a Web browser has got that platform," says Berners-Lee, in a podcast available on IBM's developerWorks site. "So the nice thing about it is when you do code up an Ajax implementation, other people can take it and play with it."
  • Web 2.0 is a step on the way to the Semantic Web, a long-standing W3C initiative to create a standards-based framework able to understand the links between data which is related in the real world, and follow that data wherever it resides, regardless of application and database boundaries.
  • The problem with Web 2.0, Pemberton says, is that it "partitions the web into a number of topical sub-webs, and locks you in, thereby reducing the value of the network as a whole."
  • How do you decide which social networking site to join? he asks. "Do you join several and repeat the work?" With the Semantic Web's Resource Description Framework (RDF), you won't need to sign up to separate networks, and can keep ownership of your data. "You could describe it as a CSS for meaning: it allows you to add a small layer of markup to your page that adds machine-readable semantics."
  • The problems with Web 2.0 lock-in which Pemberton describes, were illustrated when a prominent member of the active 1%, Robert Scoble, ran a routine called Plaxo to try to extract details of his 5,000 contacts from Facebook, in breach of the site's terms of use, and had his account disabled. Although he has apparently had his account reinstated, the furore has made the issue of Web 2.0 data ownership and portability fiercely topical.
  • when Google announced its OpenSocial set of APIs, which will enable developers to create portable applications and bridges between social networking websites, Facebook was not among those taking part. Four years after O'Reilly attempted to define Web 2.0, Google, it seems, remains the standard-bearer, while others are forgetting what it was supposed to be about.

Planning for Neomillennial Learning Styles: Implications for Investments in Technology ... - 0 views

  • Research indicates that each of these media, when designed for education, fosters particular types of interactions that enable—and undercut—various learning styles.
    • Barbara Lindsey
       
      How much do we know about our students' learning styles? How do we know this?
  • Over the next decade, three complementary interfaces will shape how people learn
  • The familiar "world to the desktop." Provides access to distant experts and archives and enables collaborations, mentoring relationships, and virtual communities of practice. This interface is evolving through initiatives such as Internet2.
  • "Alice in Wonderland" multiuser virtual environments (MUVEs). Participants' avatars (self-created digital characters) interact with computer-based agents and digital artifacts in virtual contexts. The initial stages of studies on shared virtual environments are characterized by advances in Internet games and work in virtual reality.
  • Ubiquitous computing. Mobile wireless devices infuse virtual resources as we move through the real world. The early stages of "augmented reality" interfaces are characterized by research on the role of "smart objects" and "intelligent contexts" in learning and doing.
  • This immersion in virtual environments and augmented realities shapes participants' learning styles beyond what using sophisticated computers and telecommunications has fostered thus far, with multiple implications for higher education.
  • Beyond actional and symbolic immersion, advances in interface technology are now creating virtual environments and augmented realities that induce a psychological sense of sensory and physical immersion.
  • The research on virtual reality Salzman and I conducted on frames of reference found that the exocentric and the egocentric FORs have different strengths for learning. Our studies established that learning ideally involves a "bicentric" perspective alternating between egocentric and exocentric FORs.
    • Barbara Lindsey
       
      Could we make the argument that this is one of the main goals of language programs?
  • But what is so special about the egocentric perspectives and situated learning now enabled by emerging media? After all, each of us lives with an egocentric perspective in the real world and has many opportunities for situated learning without using technology. One attribute that makes mediated immersion different and powerful is the ability to access information resources and psychosocial community distributed across distance and time, broadening and deepening experience. A second important attribute is the ability to create interactions and activities in mediated experience not possible in the real world, such as teleporting within a virtual environment, enabling a distant person to see a real-time image of your local environment, or interacting with a (simulated) chemical spill in a busy public setting. Both of these attributes are actualized in the Alice-in-Wonderland interface.
  • Notion of place is layered/blended/multiple; mobility and nomadicity prevalent among dispersed, fragmented, fluctuating habitats (for example, coffeehouses near campus)
  • Guided social constructivism and situated learning as major forms of pedagogy
  • The defining quality of a learning community is that there is a culture of learning, in which everyone is involved in a collective effort of understanding. There are four characteristics that such a culture must have: (1) diversity of expertise among its members, who are valued for their contributions and given support to develop, (2) a shared objective of continually advancing the collective knowledge and skills, (3) an emphasis on learning how to learn, and (4) mechanisms for sharing what is learned. If a learning community is presented with a problem, then the learning community can bring its collective knowledge to bear on the problem. It is not necessary that each member assimilate everything that the community knows, but each should know who within the community has relevant expertise to address any problem. This is a radical departure from the traditional view of schooling, with its emphasis on individual knowledge and performance, and the expectation that students will acquire the same body of knowledge at the same time.26
  • Peer-developed and peer-rated forms of assessment complement faculty grading, which is often based on individual accomplishment in a team performance context. Assessments provide formative feedback on instructional effectiveness.
  • Multipurpose habitats—creating layered/blended/personalizable places rather than specialized locations (such as computer labs)
  • To the extent that some of these ideas about neomillennial learning styles are accurate, campuses that make strategic investments in physical plant, technical infrastructure, and professional development along the dimensions suggested will gain a considerable competitive advantage in both recruiting top students and teaching them effectively.
  • Net Generation learning styles stem primarily from the world-to-the-desktop interface; however, the growing prevalence of interfaces to virtual environments and augmented realities is beginning to foster so-called neomillennial learning styles in users of all ages.
    • Barbara Lindsey
       
      What is the timeline?
  • Immersion is the subjective impression that one is participating in a comprehensive, realistic experience.
  • Inducing a participant's symbolic immersion involves triggering powerful semantic associations via the content of an experience.
    • Barbara Lindsey
       
      Felice's Utopian City
  • The capability of computer interfaces to foster psychological immersion enables technology-intensive educational experiences that draw on a powerful pedagogy: situated learning.
  • The major schools of thought cited are behaviorist theories of learning (presentational instruction), cognitivist theories of learning (tutoring and guided learning by doing), and situated theories of learning (mentoring and apprenticeships in communities of practice).
    • Barbara Lindsey
       
      What kinds of learning environments do you prefer and what kinds do you create for your students?
  • Situated learning requires authentic contexts, activities, and assessment coupled with guidance from expert modeling, mentoring, and "legitimate peripheral participation."8 As an example of legitimate peripheral participation, graduate students work within the laboratories of expert researchers, who model the practice of scholarship. These students interact with experts in research as well as with other members of the research team who understand the complex processes of scholarship to varying degrees. While in these laboratories, students gradually move from novice researchers to more advanced roles, with the skills and expectations for them evolving.
  • Potentially quite powerful, situated learning is much less used for instruction than behaviorist or cognitivist approaches. This is largely because creating tacit, relatively unstructured learning in complex real-world settings is difficult.
    • Barbara Lindsey
       
      Not too far in the future!
  • However, virtual environments and ubiquitous computing can draw on the power of situated learning by creating immersive, extended experiences with problems and contexts similar to the real world.9 In particular, MUVEs and real-world settings augmented with virtual information provide the capability to create problem-solving communities in which participants can gain knowledge and skills through interacting with other participants who have varied levels of skills, enabling legitimate peripheral participation driven by intrinsic sociocultural forces.
  • Situated learning is important in part because of the crucial issue of transfer. Transfer is defined as the application of knowledge learned in one situation to another situation and is demonstrated if instruction on a learning task leads to improved performance on a transfer task, typically a skilled performance in a real-world setting
    • Barbara Lindsey
       
      One of the most difficult skills to master.
  • Moreover, the evolution of an individual's or group's identity is an important type of learning for which simulated experiences situated in virtual environments or augmented realities are well suited. Reflecting on and refining an individual identity is often a significant issue for higher education students of all ages, and learning to evolve group and organizational identity is a crucial skill in enabling innovation and in adapting to shifting contexts.
  • Immersion is important in this process of identity exploration because virtual identity is unfettered by physical attributes such as gender, race, and disabilities.
    • Barbara Lindsey
       
      Don't agree with this. We come to any environment with our own baggage and we do not interact in a neutral social context.
  • Thanks to out-of-game trading of in-game items, Norrath, the virtual setting of the MMOG EverQuest, is the seventy-seventh largest economy in the real world, with a GNP per capita between that of Russia and Bulgaria. One platinum piece, the unit of currency in Norrath, trades on real world exchange markets higher than both the Yen and the Lira (Castronova, 2001).14
  • Multiple teams of students can access the MUVE simultaneously, each individual manipulating an avatar which is "sent back in time" to this virtual environment. Students must collaborate to share the data each team collects. Beyond textual conversation, students can project to each other "snapshots" of their current individual point of view (when someone has discovered an item of general interest) and also can "teleport" to join anyone on their team for joint investigation. Each time a team reenters the world, several months of time have passed in River City, so learners can track the dynamic evolution of local problems.
  • In our research on this educational MUVE based on situated learning, we are studying usability, student motivation, student learning, and classroom implementation issues. The results thus far are promising: All learners are highly motivated, including students typically unengaged in classroom settings. All students build fluency in distributed modes of communication and expression and value using multiple media because each empowers different types of communication, activities, experiences, and expressions. Even typically low-performing students can master complex inquiry skills and sophisticated content. Shifts in the pedagogy within the MUVE alter the pattern of student performance.
    • Barbara Lindsey
       
      Would like to see research on this.
  • Research shows that many participants value this functionality and choose to access the Web page after leaving the museum.
    • Barbara Lindsey
       
      More could be done with this.
  • Participants in these distributed simulations use location-aware handheld computers (with GPS technology), allowing users to physically move throughout a real-world location while collecting place-dependent simulated field data, interviewing virtual characters, and collaboratively investigating simulated scenarios.
    • Barbara Lindsey
       
      Much better
  • Initial research on Environmental Detectives and other AR-based educational simulations demonstrates that this type of immersive, situated learning can effectively engage students in critical thinking about authentic scenarios.
  • Students were most effective in learning and problem-solving when they collectively sought, sieved, and synthesized experiences rather than individually locating and absorbing information from some single best source.
    • Barbara Lindsey
       
      How does this 'fit' learning goals and teaching styles in our program?
  • Rheingold's forecasts draw on lifestyles seen at present among young people who are high-end users of new media
  • Rather than having core identities defined through a primarily local set of roles and relationships, people would express varied aspects of their multifaceted identities through alternate extended experiences in distributed virtual environments and augmented realities.
    • Barbara Lindsey
       
      How is this different from current experiences for individuals working within/across different social groups and boundaries?
  • one-third of U.S. households now have broadband access to the Internet. In the past three years, 14 million U.S. families have linked their computers with wireless home networks. Some 55 percent of Americans now carry cell phones
  • Mitchell's forecasts25 are similar to Rheingold's in many respects. He too envisions largely tribal lifestyles distributed across dispersed, fragmented, fluctuating habitats: electronic nomads wandering among virtual campfires. People's senses and physical agency are extended outward and into the intangible, at considerable cost to individual privacy. Individual identity is continuously reformed via an ever-shifting series of networking with others and with tools. People express themselves through nonlinear, associational webs of representations rather than linear "stories" and co-design services rather than selecting a precustomized variant from a menu of possibilities.
  • More and more, though, people of all ages will have lifestyles involving frequent immersion in both virtual and augmented reality. How might distributed, immersive media be designed specifically for education, and what neomillennial learning styles might they induce?
  • Mediated immersion creates distributed learning communities, which have different strengths and limits than location-bound learning communities confined to classroom settings and centered on the teacher and archival materials.27
  • Neomillennial Versus Millennial Learning Styles
  • Emphasis is placed on implications for strategic investments in physical plant, technology infrastructure, and professional development.
  • (such as textbooks linked to course ratings by students)
  • "Mirroring": Immersive virtual environments provide replicas of distant physical settings
  • Middleware, interoperability, open content, and open source
  • Finding information vs. sequential assimilation of a linear information stream
  • Student products are generally tests or papers; grading centers on individual performance
  • These ideas are admittedly speculative rather than based on detailed evidence and are presented to stimulate reaction and dialogue about these trends.
  • If we accept much of the analysis above
    • Barbara Lindsey
       
      But have they made the case for its educational value?
  • students of all ages with increasingly neomillennial learning styles will be drawn to colleges and universities that have these capabilities. Four implications for investments in professional development also are apparent. Faculty will increasingly need capabilities in:
  • Some of these shifts are controversial for many faculty; all involve "unlearning" almost unconscious beliefs, assumptions, and values about the nature of teaching, learning, and the academy. Professional development that requires unlearning necessitates high levels of emotional/social support in addition to mastering the intellectual/technical dimensions involved. The ideal form for this type of professional development is distributed learning communities so that the learning process is consistent with the knowledge and culture to be acquired. In other words, faculty must themselves experience mediated immersion and develop neomillennial learning styles to continue teaching effectively as the nature of students alters.
  • Differences among individuals are greater than dissimilarities between groups, so students in any age cohort will present a mixture of neomillennial, millennial, and traditional learning styles
  • The technologies discussed are emerging rather than mature, so their final form and influences on users are not fully understood. A substantial number of faculty and administrators will likely dismiss and resist some of the ideas and recommendations presented here.
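The location-aware simulations highlighted above rest on a simple mechanism: comparing a participant's GPS fix against coordinates attached to place-dependent content. A minimal sketch in Python; the hotspot names, coordinates, and 50-meter trigger radius are all invented for illustration, not drawn from Environmental Detectives itself:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical place-dependent simulation content keyed to campus coordinates.
hotspots = [
    {"name": "virtual chemist interview", "lat": 42.3736, "lon": -71.1097},
    {"name": "simulated soil sample",     "lat": 42.3741, "lon": -71.1120},
]

def nearby_content(lat, lon, radius_m=50):
    """Return the simulated items a participant's current position unlocks."""
    return [h["name"] for h in hotspots
            if haversine_m(lat, lon, h["lat"], h["lon"]) <= radius_m]

print(nearby_content(42.3736, -71.1098))  # → ['virtual chemist interview']
```

Walking within the trigger radius of a hotspot "unlocks" its virtual character or field data, which is the basic interaction loop these augmented-reality simulations build on.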

Who Are You Online? [infographic]

  • Today's infographic shows these divergent philosophies of Internet culture and, most interestingly, what the average Internet user thinks about the privacy of their information online. What type of user are you? Do you prefer anonymity or transparency?

Digital Ethnography » Blog Archive » Getting Started with Web 2.0

  • Web 2.0 refers to new websites that are more dynamic, user-driven, and interlinked (and interlinked in new and interesting ways).
  • An RSS (Really Simple Syndication) feed is a way for news organizations, academic journals, book publishers, and virtually anybody who distributes information to distribute that information without any markup or formatting, so that your own browser or website can format it and make it look nice on your own page. You can add any RSS feed to a website like Netvibes. This allows you to have all of your favorite sites that are frequently updated viewable on one single page.
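The RSS description above can be made concrete: a feed is just structured XML, and a few lines of code can pull out the items so that your own page, not the publisher, controls the formatting. A sketch using only Python's standard library, with an invented two-item feed:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed of the kind a news site or journal might publish.
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Journal</title>
    <item><title>Open content in 2010</title><link>http://example.org/1</link></item>
    <item><title>Remix and reuse</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def items(feed_xml):
    """Extract (title, link) pairs, leaving presentation to the consuming page."""
    root = ET.fromstring(feed_xml)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

for title, link in items(rss):
    print(f"- {title}: {link}")
```

An aggregator like Netvibes does essentially this for every feed you subscribe to, then lays the results out on one page.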

The Cape Town Open Education Declaration

  • The Internet provides a platform for collaborative learning and knowledge creation across long distances, which is central to the long term promise of open education. It also offers a channel for the creation and distribution of knowledge from a diversity of places and cultures around the world, and not just from major publishing centres like New York, London, and Paris.
  • we believe that open education and open educational resources are very much compatible with the business of commercial publishing. The Declaration clearly states that the open education movement should "...engage entrepreneurs and publishers who are developing innovative business models that are both open and financially sustainable."
  • There is likely to be some upheaval in formal educational systems as teachers and students engage in the new pedagogies that are enabled by openness. There might also be concerns that some of the deeper goals of the open education movement could backfire. For example, instead of enhancing locally relevant educational practices and rewarding those with regional expertise, it is possible that a flood of foreign-produced open educational resources will actually undermine the capacity for regional expertise to form or thrive.
  • First, this is not actually a philanthropic endeavor in the classic sense of "donating" something to those with less. Instead, the open education movement promotes conditions for self-empowerment, and one of the central premises of the movement focuses on the freedom to be educated in the manner of one's choosing. Second, the permissions granted in defining an open educational resource explicitly enable the localization and adaptation of materials to be more locally appropriate. Every person should have the right to be educated in his/her native language, and in a manner that is most suitable to the personal and cultural contexts in which they reside. Third, we have good reason to believe that the contributions to the global open educational enterprise from those in resource-limited settings are at least as valuable as contributions from anyone else. While we have much to do to enable truly equitably participation among all of the citizens of the globe, there is widespread agreement that the ultimate goal is some type of open educational network, not a unidirectional pipeline.
    • Barbara Lindsey
       
      Key component of a critical pedagogical approach.
  • educational resources commissioned and paid for directly by the public sector should be released as open educational resources. This ensures that the taxpayers who financed these resources can benefit from them fully. Of course, this principle cannot extend to resources paid for indirectly with public funds, such as materials written by professors at public universities. The Declaration does strongly encourage these professors and institutions to make all of their resources open. However, in the end, this is their choice.
    • Barbara Lindsey
       
      Wow! Wonder how many critical pedagogists would embrace this idea.
  • resources should be licensed to facilitate use, revision, translation, improvement and sharing by anyone
  • many of the participants advocated for inclusion of language that indicates that the license should ideally impose no legal constraints other than a requirement by the creator for appropriate attribution or the sharing of derivative works. This degree of openness represents the 'gold standard' in open educational resource licensing. However, it is also recognized that some authors and publishers may wish to disallow commercial uses (non-commercial). Resources licensed with this additional restriction are still open educational resources, but do come with risks and costs.
  • we suggest that you use one of the Creative Commons (CC) licenses, for several reasons: The licenses have human-readable deeds, which is (generally) easier for people to understand.The licenses have a computer-readable component which enables search and filtering by license status, an increasingly important consideration in an era of exploding online content.The licenses have been ported to many countries around the world, with more being added every year, which guarantees their worldwide application and enforcement.The licenses are already the most frequently used licenses for open educational resources, which will make it easier for users to learn about their rights, as well as use the materials in interesting ways.
  • Open educational resources licensed using CC-BY have no restrictions on use beyond attribution for the original creator. Open educational resources licensed using CC-BY-SA also require attribution, but have the additional restriction of requiring that the derived material be licensed in the same manner as the original(s), thus ensuring their continued availability as open educational resources.
  • In most cases, the NC term is likely to have undesired repercussions for your work. If you are thinking of restricting commercial activity, ask yourself the following questions: What is the goal of doing so? Is it that the creators wish to make money from their contributions? Is this likely? Is it assumed that all for-profit activity is somehow inimical to education? What are the costs of restricting commercial use of open educational resources and do you wish to incur them? For example, is it your goal to forbid a for-profit publisher in a developing country from printing copies of your materials and distributing them there?
  • If an author's primary purpose in creating open educational resources is for it to be used as widely, freely, and creatively as possible, then using CC-BY is the better choice
  • CC-BY allows for a variety of motivations, including the possibility of commercial success, to drive users to adapt and re-purpose their materials.
  • If an author's primary purpose in creating open educational resources is for that material to never leave the educational commons, such as it is, then you may want to apply the SA term. In this case, the possibilities for viable commercial derivatives, though not disallowed, are diminished, and so users motivated to adapt materials for that purpose are unlikely to participate. In addition, open educational resources licensed with an SA term are only interoperable with other SA materials, which seriously limits their capacity for re-mixing.
  • There are two key points we would ask you to consider prior to applying the ND term. First, are you willing to prevent all of the wonderful ways in which your work might be improved upon just for the sake of preventing a few derivatives that you would consider inferior? It is worth remembering that it is the granting of freedoms to share, reprint, translate, combine, or adapt that makes open educational resources educationally different from those that can merely be read online for free.
  • you must remember that digital resources are not consumable goods, in the sense that they can be shared infinitely without any loss of value for the original. As such, if inferior derivatives are created, those creations have done nothing to diminish the quality of your original work, which will remain available for others to use or improve upon as they wish.
  • there is absolutely no restriction on use of public domain materials. In addition to being able to freely use such materials, you are free to adapt public domain materials and then license the derivative works in any way you choose, including standard all-rights-reserved copyright. You have to apply an open license if you want your contribution to add to the pool of open educational resources.
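The license-interaction rules discussed in this FAQ (ND blocking derivatives, SA requiring share-alike relicensing, NC restricting commercial reuse) can be sketched as a toy decision function. This is an illustration only, not legal guidance; real Creative Commons compatibility involves license versions, jurisdiction ports, and edge cases this ignores:

```python
def derivative_license(parts):
    """Toy model: what license may a remix of these parts carry? None = no valid remix."""
    if "CC-BY-ND" in parts:
        return None           # ND forbids derivative works entirely
    if "CC-BY-SA" in parts and "CC-BY-NC" in parts:
        return None           # SA demands relicensing as SA; NC materials can't grant that
    if "CC-BY-SA" in parts:
        return "CC-BY-SA"     # ShareAlike carries forward into the remix
    if "CC-BY-NC" in parts:
        return "CC-BY-NC"     # the noncommercial restriction carries forward
    return "CC-BY"            # attribution only

print(derivative_license(["CC-BY", "CC-BY-SA"]))    # → CC-BY-SA
print(derivative_license(["CC-BY-SA", "CC-BY-NC"])) # → None
```

Even this crude model shows the point the Declaration makes: SA and NC terms shrink the pool of material a resource can legally be remixed with, while CC-BY combines with everything.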

Dr. Mashup; or, Why Educators Should Learn to Stop Worrying and Love the Remix | EDUCAU...

  • A classroom portal that presents automatically updated syndicated resources from the campus library, news sources, student events, weblogs, and podcasts and that was built quickly using free tools.
  • Increasingly, it's not just works of art that are appropriated and remixed but the functionalities of online applications as well.
  • mashups involve the reuse, or remixing, of works of art, of content, and/or of data for purposes that usually were not intended or even imagined by the original creators.
  • What, exactly, constitutes a valid, original work? What are the implications for how we assess and reward creativity? Can a college or university tap the same sources of innovative talent and energy as Google or Flickr? What are the risks of permitting or opening up to this activity?
    • Barbara Lindsey
       
      Good discussion point
  • Remix is the reworking or adaptation of an existing work. The remix may be subtle, or it may completely redefine how the work comes across. It may add elements from other works, but generally efforts are focused on creating an alternate version of the original. A mashup, on the other hand, involves the combination of two or more works that may be very different from one another. In this article, I will apply these terms both to content remixes and mashups, which originated as a music form but now could describe the mixing of any number of digital media sources, and to data mashups, which combine the data and functionalities of two or more Web applications.
  • In his Harper's article "The Ecstasy of Influence," the novelist Jonathan Lethem imaginatively reviews the history of appropriation and recasts it as essential to the act of creation.3
  • Lethem's article is a must-read for anyone with an interest in the history of ideas, creativity, and intellectual property. It brilliantly synthesizes multiple disciplines and perspectives into a wonderfully readable and compelling argument. It is also, as the subtitle of his article acknowledges, "a plagiarism." Virtually every passage is a direct lift from another source, as the author explains in his "Key," which gives the source for every line he "stole, warped, and cobbled together." (He also revised "nearly every sentence" at least slightly.) Lethem's ideas noted in the paragraph above were appropriated from Siva Vaidhyanathan, Craig Baldwin, Richard Posner, and George L. Dillon.
  • Reading Walter Benjamin's highly influential 1936 essay "The Work of Art in the Age of Mechanical Reproduction,"4 it's clear that the profound effects of reproductive technology were obvious at that time. As Gould argued in 1964 (influenced by theorists such as Marshall McLuhan5), changes in how art is produced, distributed, and consumed in the electronic age have deep effects on the character of the art itself.
  • Yet the technology developments of the past century have clearly corresponded with a new attitude toward the "aura" associated with a work of invention and with more aggressive attitudes toward appropriation. It's no mere coincidence that the rise of modernist genres using collage techniques and more fragmented structures accompanied the emergence of photography and audio recording.
  • Educational technologists may wonder if "remix" or "content mashup" are just hipper-sounding versions of the learning objects vision that has absorbed so much energy from so many talented people—with mostly disappointing results.
  • The question is, why should a culture of remix take hold when the learning object economy never did?
  • when most learning object repositories were floundering, resource-sharing services such as del.icio.us and Flickr were enjoying phenomenal growth, with their user communities eagerly contributing heaps of useful metadata via simple folksonomy-oriented tagging systems.
  • the standards/practices relationship implicit in the learning objects model has been reversed. With only the noblest of intentions, proponents of learning objects (and I was one of them) went at the problem of promoting reuse by establishing an arduous and complex set of interoperability standards and then working to persuade others to adopt those standards. Educators were asked to take on complex and ill-defined tasks in exchange for an uncertain payoff. Not surprisingly, almost all of them passed.
  • Discoverable Resources
  • Educators might justifiably argue that their materials are more authoritative, reliable, and instructionally sound than those found on the wider Web, but those materials are effectively rendered invisible and inaccessible if they are locked inside course management systems.
  • It's a dirty but open secret that many courses in private environments use copyrighted third-party materials in a way that pushes the limits of fair use—third-party IP is a big reason why many courses cannot easily be made open.
  • The potential payoff for using open and discoverable resources, open and transparent licensing, and open and remixable formats is huge: more reuse means that more dynamic content is being produced more economically, even if the reuse happens only within an organization. And when remixing happens in a social context on the open web, people learn from each other's process.
  • Part of making a resource reusable involves making the right choices for file formats.
  • To facilitate the remixing of materials, educators may want to consider making the source files that were used to create a piece of multimedia available along with the finished result.
  • In addition to choosing the right file format and perhaps offering the original sources, another issue to consider when publishing content online is the critical question: "Is there an RSS feed available?" If so, conversion tools such as Feed2JS (http://www.feed2JS.org) allow for the republication of RSS-ified content in any HTML Web environment, including a course management system, simply by copying and pasting a few lines of JavaScript code. When an original source syndicated with RSS is updated, that update is automatically rendered anywhere it has been republished.
  • Jack Schofield
  • Guardian Unlimited
  • "An API provides an interface and a set of rules that make it much easier to extract data from a website. It's a bit like a record company releasing the vocals, guitars and drums as separate tracks, so you would not have to use digital processing to extract the parts you wanted."1
  • What's new about mashed-up application development? In a sense, the factors that have promoted this approach are the same ones that have changed so much else about Web culture in recent years. Essential hardware and software has gotten more powerful and for the most part cheaper, while access to high-speed connectivity and the enhanced quality of online applications like Google Docs have improved to the point that Tim O'Reilly and others can talk of "the emergent Internet operating system."15 The growth of user-centered technologies such as blogs have fostered a DIY ("do it yourself") culture that increasingly sees online interaction as something that can be personalized and adapted on the individual level. As described earlier, light syndication and service models such as RSS have made it easier and faster than ever to create simple integrations of diverse media types. David Berlind, executive editor of ZDNet, explains: "With mashups, fewer technical skills are needed to become a developer than ever. Not only that, the simplest ones can be done in 10 or 15 minutes. Before, you had to be a pretty decent code jockey with languages like C++ or Visual Basic to turn your creativity into innovation. With mashups, much the same way blogging systems put Web publishing into the hands of millions of ordinary non-technical people, the barrier to developing applications and turning creativity into innovation is so low that there's a vacuum into which an entire new class of developers will be sucked."16
  • The ability to "clone" other users' mashups is especially exciting: a newcomer does not need to spend time learning how to structure the data flows but can simply copy an existing framework that looks useful and then make minor modifications to customize the result.19
    • Barbara Lindsey
       
      This is the idea behind the MIT repository--remixing content to suit local needs.
  • As with content remixing, open access to materials is not just a matter of some charitable impulse to share knowledge with the world; it is a core requirement for participating in some of the most exciting and innovative activity on the Web.
  • "My Maps" functionality
  • For those still wondering what the value proposition is for offering an open API, Google's development process offers a compelling example of the potential rewards.
    • Barbara Lindsey
       
      Wikinomics
  • Elsewhere, it is difficult to point to significant activity suggesting that the mashup ethos is taking hold in academia the way it is on the wider Web.
  • Yet for the most part, the notion of the data mashup and the required openness is not even a consideration in discussions of technology strategy in higher educational institutions. "Data integration" across campus systems is something that is handled by highly skilled professionals at highly skilled prices.
  • Revealing how a more adventurous and inclusive online development strategy might look on campus, Raymond Yee recently posted a comprehensive proposal for his university (UC Berkeley), in which he outlined a "technology platform" not unlike the one employed by Amazon.com (http://aws.amazon.com/)—resources and access that would be invaluable for the institution's programmers as well as for outside interests to build complementary services.
  • All too often, college and university administrators react to this type of innovation with suspicion and outright hostility rather than cooperation.
  • those of us in higher education who observe the successful practices in the wider Web world have an obligation to consider and discuss how we might apply these lessons in our own contexts. We might ask if the content we presently lock down could be made public with a license specifying reasonable terms for reuse. When choosing a content management system, we might consider how well it supports RSS syndication. In an excellent article in the March/April 2007 issue of EDUCAUSE Review, Joanne Berg, Lori Berquam, and Kathy Christoph listed a number of campus activities that could benefit from engaging social networking technologies.26
  • What might happen if we allow our campus innovators to integrate their practices in these areas in the same way that social networking application developers are already integrating theirs? What is the mission-critical data we cannot expose, and what can we expose with minimal risk? And if the notion of making data public seems too radical a step, can APIs be exposed to selected audiences, such as on-campus developers or consortia partners?
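The Feed2JS-style republication this article describes amounts to a small content mashup: merge items from several syndicated sources and render them as an HTML fragment ready to embed in a portal or course page. A minimal Python sketch; the feed contents are invented, standing in for items a feed reader would have parsed from real RSS:

```python
from datetime import date
from html import escape

# Invented items from two campus sources, as parsed from their RSS feeds.
library_feed = [{"title": "New e-journal access", "date": date(2007, 9, 3)}]
events_feed  = [{"title": "Guest lecture: remix culture", "date": date(2007, 9, 5)},
                {"title": "Fall film series", "date": date(2007, 9, 1)}]

def mashup_html(*feeds):
    """Merge feed items newest-first and render a plain HTML list for embedding."""
    merged = sorted((item for feed in feeds for item in feed),
                    key=lambda item: item["date"], reverse=True)
    lis = "".join(f"<li>{escape(item['title'])}</li>" for item in merged)
    return f"<ul>{lis}</ul>"

print(mashup_html(library_feed, events_feed))
```

Because each source stays syndicated, the merged page updates whenever the upstream feeds do, which is exactly the low-skill integration path Berlind describes.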

2010 Horizon Report » One Year or Less: Open Content

  • The movement toward open content reflects a growing shift in the way academics in many parts of the world are conceptualizing education to a view that is more about the process of learning than the information conveyed in their courses. Information is everywhere; the challenge is to make effective use of it.
  • As customizable educational content is made increasingly available for free over the Internet, students are learning not only the material, but also skills related to finding, evaluating, interpreting, and repurposing the resources they are studying in partnership with their teachers.
  • collective knowledge and the sharing and reuse of learning and scholarly content,
  • the notion of open content is to take advantage of the Internet as a global dissemination platform for collective knowledge and wisdom, and to design learning experiences that maximize the use of it.
  • The role of open content producers has evolved as well, away from the idea of authoritative repositories of content and towards the broader notion of content being both free and ubiquitous.
  • schools like Tufts University (and many others) now consider making their course materials available to the public a social responsibility.
  • Many believe that reward structures that support the sharing of work in progress, ongoing research, highly collaborative projects, and a broad view of what constitutes scholarly publication are key challenges that institutions need to solve.
  • learning to find useful resources within a given discipline, assess the quality of content available, and repurpose them in support of a learning or research objective are in and of themselves valuable skills for any emerging scholar, and many adherents of open content list that aspect among the reasons they support the use of shareable materials.
  • Open content shifts the learning equation in a number of interesting ways; the most important is that its use promotes a set of skills that are critical in maintaining currency in any discipline — the ability to find, evaluate, and put new information to use.
  • Communities of practice and learning communities have formed around open content in a great many disciplines, and provide practitioners and independent learners alike an avenue for continuing education.
  • Art History. Smarthistory, an open educational resource dedicated to the study of art, seeks to replace traditional art history textbooks with an interactive, well-organized website. Search by time period, style, or artist (http://smarthistory.org).
  • American Literature before 1860 http://enh241.wetpaint.com Students in this course, held at Mesa Community College, contribute to the open course material as part of their research. MCC also features a number of lectures on YouTube (see http://www.youtube.com/user/mesacc#p/p).
  • Carnegie Mellon University’s Open Learning Initiative http://oli.web.cmu.edu/openlearning/ The Open Learning Initiative offers instructor-led and self-paced courses; any instructor may teach with the materials, regardless of affiliation. In addition, the courses include student assessment and intelligent tutoring capability.
  • Connexions http://cnx.org Connexions offers small modules of information and encourages users to piece together these chunks to meet their individual needs.
  • eScholarship: University of California http://escholarship.org/about_escholarship.html eScholarship provides peer review and publishing for scholarly articles, books, and papers, using an open content model. The service also includes tools for dissemination and research.
  • MIT OpenCourseWare http://ocw.mit.edu The Massachusetts Institute of Technology publishes lectures and materials from most of its undergraduate and graduate courses online, where they are freely available for self-study.
  • Open.Michigan’s dScribe Project https://open.umich.edu/projects/oer.php The University of Michigan’s Open.Michigan initiative houses several open content projects. One, dScribe, is a student-centered approach to creating open content. Students work with faculty to select and vet resources, easing the staffing and cost burden of content creation while involving the students in developing materials for themselves and their peers.
  • Center for Social Media Publishes New Code of Best Practices in OCW http://criticalcommons.org/blog/content/center-for-social-media-publishes-new-code-of-best-practices-in-ocw (Critical Commons, 25 October 2009.) The advocacy group Critical Commons seeks to promote the use of media in open educational resources. Their Code of Best Practices in Fair Use for OpenCourseWare is a guide for content developers who want to include fair-use material in their offerings.
  • Flat World Knowledge: A Disruptive Business Model http://industry.bnet.com/media/10003790/flat-world-knowledge-a-disruptive-business-model/ (David Weir, BNET, 20 August 2009.) Flat World Knowledge is enjoying rapid growth, from 1,000 students using its materials in the spring of 2009 to 40,000 in the fall semester. The company's business model pays a higher royalty percentage to textbook authors and charges students a great deal less than traditional publishers do.

Social Annotations in Digital Library Collections - 0 views

  • While used textbooks are obviously less costly, they often carry another benefit new textbooks don't: highlights, underscores and other annotations by their previous owners. Even though the author of, and rationale for, the annotations may be unknown, the fact that somebody found particular sections of the book important enough to emphasize tends to make the eye linger. Ideally, annotations can make learning and knowledge discovery feel less like a solitary pursuit and more like a collaborative effort.
  • At first glance, it would seem that the trustworthiness of an unknown individual who has interpreted or appended an author's work would be questionable, but several reasonable assumptions can be made that contribute to the perceived authority of an unknown annotator. At the very least, they read the work and took the time to make the annotations, which may question or clarify certain statements in the text, and create links to other works, authors or ideas. The subsequent reader of an annotated work then has one or more additional perspectives from which to evaluate the usefulness of the text and annotations, and more implied permission to add his or her own interpretations than in an unannotated text. Published scholarly works are objects for discussion in an ongoing conversation among a community of knowledge seekers, and whether via formal citation in later publications or annotations in existing ones, all are designed to advance the generation and exchange of ideas.
  • Most critically, knowledge discovery and transfer are no longer restricted to a model of one expert creator and many consumers. In Web 2.0, consumers are creators, who can add their voices to both expert and non-expert claims. Users get the benefit of multiple perspectives and can evaluate claims in the best tradition of participative, critical inquiry.
  • However, as with annotations in paper books, sometimes the value of an annotation goes beyond its content. Marshall (1998) suggests that the very act of evaluating a handwritten annotation's relevance creates a level of critical engagement that would not happen while reading a clean copy of a book. Marshall studied university students' annotations in textbooks, and found that students preferred books that had been marked by previous readers, as long as the marks were intelligible.
  • Similarly, Sherman (2008) studied marginalia in English Renaissance texts and found that students of the time were routinely taught that simply reading a book was insufficient. In order to have a "fruitful interaction" (p. 4) with a text, marking it up with one's thoughts and reactions was considered essential. Sherman sees marginalia and other signs of engagement and use, even such apparently content-neutral additions as food stains, as valuable evidence of reader reaction and of the place of the physical information object in people's lives.
  • In a study of flickr.com, Ames and Naaman (2007) created a taxonomy of motivations for annotation along two dimensions: sociality and function. The latter dimension echoes people's motivation to annotate printed textbooks: the function of making important or interesting passages more easily findable for later review. The sociality dimension is a component of the Web infrastructure – making photographs findable for others, and creating shared tagsets for people with similar interests, so they might collaborate more easily. In this sense, photographs are boundary objects (Star and Griesemer 1989), around which diverse individuals can interact and communities can form (Gal, Yoo and Boland 2006). Digital collection items can also be boundary objects, even if the conversations around them take place asynchronously.
  • This article analyzes the integration of social annotations - uncontrolled user-generated content - into digital collection items.

Microsoft, Google eye Arabic web growth potential | Reuters - 0 views

  • "One of our biggest missions is to enable Arabic users to find the right tools to enrich Arabic content," Ghonim said. "It would be great to see more e-commerce in the region, more publishers, more news sites. We are committed to help them."
  • Ghonim said Arabic speakers have historically engaged in poorly organized and difficult-to-archive forums, citing a message board used by 400,000 teachers in Saudi Arabia. Both Google and Microsoft place Arabic in their top ten languages in need of prioritized attention.
  • "The next few million Egyptian internet users will be people who don't really speak English," Ghonim said.
  • "Think of the guy running a very small one-stop shop in (Nile delta industrial city) Mahalla," Ghonim said. "You should facilitate for him a complete experience in Arabic, from the way he registers his domain to finding a hosting company to communicating to his customers."
  • Mundie said the Arab world was well-placed to skip PC-dominated use and go straight to mobile internet.