
Instructional & Media Services at Dickinson College: Group items tagged "world"


Ed Webb

The Myth Of AI | Edge.org - 0 views

  • The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
  • Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
  • there's no way to tell where the border is between measurement and manipulation in these systems
  • It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
  • What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed. [A minimal sketch of such a structure follows these annotations.]
  • In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
  • If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
  • To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things.
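A minimal sketch of the arrangement Lanier describes in the translation annotations above: a "translation service" that is really an optimized structure for receiving and replaying the work of human translators. Everything here (the corpus, the phrases, the names) is invented for illustration; it is not any actual system's code.

```python
# A toy "translation engine" built purely from phrases grabbed from human
# translators. Nothing here understands language; it aggregates and replays
# human work. Corpus and phrases are invented for illustration.
from collections import Counter, defaultdict

# Phrase pairs harvested from (hypothetical) human translations.
corpus = [
    ("good morning", "buenos días"),
    ("good morning", "buenos días"),
    ("good morning", "buen día"),
    ("good night", "buenas noches"),
]

# Build a phrase table: for each source phrase, count the human renderings.
table = defaultdict(Counter)
for src, tgt in corpus:
    table[src][tgt] += 1

def translate(phrase):
    """Return the most frequent human translation seen for this phrase."""
    counts = table.get(phrase)
    return counts.most_common(1)[0][0] if counts else phrase

print(translate("good morning"))  # -> "buenos días"
```

The point the sketch makes concrete: nothing in the pipeline translates. It aggregates and replays human effort, which is why the translators remain necessary even as they vanish from view.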
Ed Webb

What Bruce Sterling Actually Said About Web 2.0 at Webstock 09 | Beyond the Beyond from... - 0 views

  • things in it that pretended to be ideas, but were not ideas at all: they were attitudes
    • Ed Webb: Like Edupunk
  • A sentence is a verbal construction meant to express a complete thought. This congelation that Tim O'Reilly constructed, that is not a complete thought. It's a network in permanent beta.
  • This chart is five years old now, which is 35 years old in Internet years, but intellectually speaking, it's still new in the world. It's alarming how hard it is to say anything constructive about this from any previous cultural framework.
  • "The cloud as platform." That is insanely great. Right? You can't build a "platform" on a "cloud!" That is a wildly mixed metaphor! A cloud is insubstantial, while a platform is a solid foundation! The platform falls through the cloud and is smashed to earth like a plummeting stock price!
  • luckily, we have computers in banking now. That means Moore's law is gonna save us! Instead of it being really obvious who owes what to whom, we can have a fluid, formless ownership structure that's always in permanent beta. As long as we keep moving forward, adding attractive new features, the situation is booming!
  • Web 2.0 is supposed to be business. This isn't a public utility or a public service, like the old model of an Information Superhighway established for the public good.
  • it's turtles all the way down
  • "Tagging not taxonomy." Okay, I love folksonomy, but I don't think it's gone very far. There have been books written about how ambient searchability through folksonomy destroys the need for any solid taxonomy. Not really. The reality is that we don't have a choice, because we have no conceivable taxonomy that can catalog the avalanche of stuff on the Web.
  • JavaScript is the duct tape of the Web. Why? Because you can do anything with it. It's not the steel girders of the web, it's not the laws of physics of the web. JavaScript is beloved of web hackers because it's an ultimate kludge material that can stick anything to anything. It's a cloud, a web, a highway, a platform and a floor wax. Guys with attitude use JavaScript.
  • Before the 1990s, nobody had any "business revolutions." People in trade are supposed to be very into long-term contracts, a stable regulatory environment, risk management, and predictable returns to stockholders. Revolutions don't advance those things. Revolutions annihilate those things. Is that "businesslike"? By whose standards?
  • I just wonder what kind of rattletrap duct-taped mayhem is disguised under a smooth oxymoron like "collective intelligence."
  • the people whose granular bits of input are aggregated by Google are not a "collective." They're not a community. They never talk to each other. They've got basically zero influence on what Google chooses to do with their mouseclicks. What's "collective" about that?
  • I really think it's the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is "intelligence." That is not intelligence. That is rules-based machine behavior. It's code being executed. It's a powerful thing, it's a beautiful thing, but to call that "intelligence" is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.
  • I'd definitely like some better term for "collective intelligence," something a little less streamlined and metaphysical. Maybe something like "primeval meme ooze" or "semi-autonomous data propagation." Even some Kevin Kelly style "neobiological out of control emergent architectures." Because those weird new structures are here, they're growing fast, we depend on them for mission-critical acts, and we're not gonna get rid of them any more than we can get rid of termite mounds.
  • Web 2.0 guys: they've got their laptops with whimsical stickers, the tattoos, the startup T-shirts, the brainy-glasses -- you can tell them from the general population at a glance. They're a true creative subculture, not a counterculture exactly -- but in their number, their relationship to the population, quite like the Arts and Crafts people from a hundred years ago. Arts and Crafts people, they had a lot of bad ideas -- much worse ideas than Tim O'Reilly's ideas. It wouldn't bother me any if Tim O'Reilly was Governor of California -- he couldn't be any weirder than that guy they've got already. Arts and Crafts people gave it their best shot, they were in earnest -- but everything they thought they knew about reality was blown to pieces by the First World War. After that misfortune, there were still plenty of creative people surviving. Futurists, Surrealists, Dadaists -- and man, they all despised Arts and Crafts. Everything about Art Nouveau that was sexy and sensual and liberating and flower-like, man, that stank in their nostrils. They thought that Art Nouveau people were like moronic children.
  • in the past eighteen months, 24 months, we've seen ubiquity initiatives from Nokia, Cisco, General Electric, IBM... Microsoft even, Jesus, Microsoft, the place where innovative ideas go to die.
  • what comes next is a web with big holes blown in it. A spiderweb in a storm. The turtles get knocked out from under it, the platform sinks through the cloud. A lot of the inherent contradictions of the web get revealed, the contradictions in the oxymorons smash into each other. The web has to stop being a meringue frosting on the top of business, this make-do melange of mashups and abstraction layers. Web 2.0 goes away. Its work is done. The thing I always loved best about Web 2.0 was its implicit expiration date. It really took guts to say that: well, we've got a bunch of cool initiatives here, and we know they're not gonna last very long. It's not Utopia, it's not a New World Order, it's just a brave attempt to sweep up the ashes of the burst Internet Bubble and build something big and fast with the small burnt-up bits that were loosely joined. That showed more maturity than Web 1.0. It was visionary, it was inspiring, but there were fewer moon rockets flying out of its head. "Gosh, we're really sorry that we accidentally ruined the NASDAQ." We're Internet business people, but maybe we should spend less of our time stock-kiting. The Web's a communications medium -- how 'bout working on the computer interface, so that people can really communicate? That effort was time well spent. Really.
  • The poorest people in the world love cellphones.
  • Digital culture, I knew it well. It died -- young, fast and pretty. It's all about network culture now.
  • There's gonna be a Transition Web. Your economic system collapses: Eastern Europe, Russia, the Transition Economy, that bracing experience is for everybody now. Except it's not Communism transitioning toward capitalism. It's the whole world into transition toward something we don't even have proper words for.
  • The Transition Web is a culture model. If it's gonna work, it's got to replace things that we used to pay for with things that we just plain use.
  • Not every Internet address was a dotcom. In fact, dotcoms showed up pretty late in the day, and they were not exactly welcome. There were dot-orgs, dot edus, dot nets, dot govs, and dot localities. Once upon a time there were lots of social enterprises that lived outside the market; social movements, political parties, mutual aid societies, philanthropies. Churches, criminal organizations -- you're bound to see plenty of both of those in a transition... Labor unions... not little ones, but big ones like Solidarity in Poland; dissident organizations, not hobby activists, big dissent, like Charter 77 in Czechoslovakia. Armies, national guards. Rescue operations. Global non-governmental organizations. Davos Forums, Bilderberg guys. Retired people. The old people can't hold down jobs in the market. Man, there's a lot of 'em. Billions. What are our old people supposed to do with themselves? Websurf, I'm thinking. They're wise, they're knowledgeable, they're generous by nature; the 21st century is destined to be an old people's century. Even the Chinese, Mexicans, Brazilians will be old. Can't the web make some use of them, all that wisdom and talent, outside the market?
  • I've never seen so much panic around me, but panic is the last thing on my mind. My mood is eager impatience. I want to see our best, most creative, best-intentioned people in world society directly attacking our worst problems. I'm bored with the deceit. I'm tired of obscurantism and cover-ups. I'm disgusted with cynical spin and the culture war for profit. I'm up to here with phony baloney market fundamentalism. I despise a prostituted society where we put a dollar sign in front of our eyes so we could run straight into the ditch. The cure for panic is action. Coherent action is great; for a scatterbrained web society, that may be a bit much to ask. Well, any action is better than whining. We can do better.
Ed Webb

Social Media is Killing the LMS Star - A Bootleg of Bryan Alexander's Lost Presentation... - 0 views

  • Note that this isn’t just a technological alternate history. It also describes a different set of social and cultural practices.
  • CMSes lumber along like radio, still playing into the air as they continue to gradually shift ever farther away on the margins. In comparison, Web 2.0 is like movies and TV combined, plus printed books and magazines. That's where the sheer scale, creative ferment, and wide-ranging influence reside. This is the necessary background for discussing how to integrate learning and the digital world.
  • These virtual classes are like musical practice rooms, small chambers where one may try out the instrument in silent isolation. It is not connectivism but disconnectivism.
  • CMSes shift from being merely retrograde to being actively regressive if we consider the broader, subtler changes in the digital teaching landscape. Web 2.0 has rapidly grown an enormous amount of content through what Yochai Benkler calls "commons-based peer production." One effect of this has been to grow a large area for informal learning, which students (and staff) access without our benign interference. Students (and staff) also contribute to this peering world; more on this later. For now, we can observe that as teachers we grapple with this mechanism of change through many means, but the CMS in its silo'd isolation is not a useful tool.
  • those curious about teaching with social media have easy access to a growing, accessible community of experienced staff by means of those very media. A meta-community of Web 2.0 academic practitioners is now too vast to catalogue. Academics in every discipline blog about their work. Wikis record their efforts and thoughts, as do podcasts. The reverse is true of the CMS, the very architecture of which forbids such peer-to-peer information sharing. For example, the Resource Center for Cyberculture Studies (RCCS) has for many years maintained a descriptive listing of courses about digital culture across the disciplines. During the 1990s that number grew with each semester. But after the explosive growth of CMSes that number dwindled. Not the number of classes taught, but the number of classes which could even be described. According to the RCCS’ founder, David Silver (University of San Francisco), this is due to the isolation of class content in CMS containers.
  • unless we consider the CMS environment to be a sort of corporate intranet simulation, the CMS set of community skills is unusual, rarely applicable in post-graduation settings. In other words, while a CMS might help with privacy concerns, it is at best a partial solution, not a sufficient one, and can even be inappropriate for students who are already online.
  • That experiential, teachable moment of selecting one’s copyright stance is eliminated by the CMS.
  • Another argument in favor of CMSes over Web 2.0 concerns the latter’s open nature. It is too open, goes the thought, constituting a “Wild West” experience of unfettered information flow and unpleasant forms of access. Campuses should run CMSes to create shielded environments, iPhone-style walled gardens that protect the learning process from the Lovecraftian chaos without.
  • social sifting, information literacy, using the wisdom of crowds, and others. Such strategies are widely discussed, easily accessed, and continually revised and honed.
  • at present, radio CMS is the Clear Channel of online learning.
  • For now, the CMS landscape is a multi-institutional dark Web, an invisible, unsearchable, un-mash-up-able archipelago of hidden learning content.
  • Can the practice of using a CMS prepare either teacher or student to think critically about this new shape for information literacy? Moreover, can we use the traditional CMS to share thoughts and practices about this topic?
  • The internet of things refers to a vastly more challenging concept, the association of digital information with the physical world. It covers such diverse instances as RFID chips attached to books or shipping pallets, connecting a product's scanned UPC code to a Web-based database, assigning unique digital identifiers to physical locations, and the broader enterprise of augmented reality. It includes problems as varied as building search that covers both the World Wide Web and one's mobile device, revising copyright to include digital content associated with private locations, and trying to salvage what's left of privacy. How does this connect with our topic? Consider a recent article by Tim O'Reilly and John Battelle, where they argue that the internet of things is actually growing knowledge about itself. The combination of people, networks, and objects is building descriptions about objects, largely in folksonomic form. That is, people are tagging the world, and sharing those tags. It's worth quoting a passage in full: "It's also possible to give structure to what appears to be unstructured data by teaching an application how to recognize the connection between the two. For example, You R Here, an iPhone app, neatly combines these two approaches. You use your iPhone camera to take a photo of a map that contains details not found on generic mapping applications such as Google maps – say a trailhead map in a park, or another hiking map. Use the phone's GPS to set your current location on the map. Walk a distance away, and set a second point. Now your iPhone can track your position on that custom map image as easily as it can on Google maps." (http://www.web2summit.com/web2009/public/schedule/detail/10194) What world is better placed to connect academia productively with such projects, the open social Web or the CMS? [A sketch of this two-point map calibration follows these annotations.]
  • imagine the CMS function of every class much like class email, a necessary feature, but not by any means the broadest technological element. Similarly the e-reserves function is of immense practical value. There may be no better way to share copyrighted academic materials with a class, at this point. These logistical functions could well play on.
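As flagged in the internet-of-things annotation above, here is a minimal sketch of the two-point calibration the You R Here example describes: match two GPS fixes to two pixels on a photographed map, then interpolate any new fix to a pixel position. It assumes a north-up, linearly scaled map image, which is a simplification; all names and coordinates are hypothetical, not the app's actual code.

```python
# Two-point calibration for a photographed map, as described in the passage:
# set your GPS position at two known pixels, then track any position on the
# custom map image. Assumes the map is north-up and linearly scaled.

def calibrate(ref1, ref2):
    """Each ref is ((lat, lon), (x, y)): a GPS fix matched to a map pixel."""
    (lat1, lon1), (x1, y1) = ref1
    (lat2, lon2), (x2, y2) = ref2
    px_per_lon = (x2 - x1) / (lon2 - lon1)  # horizontal scale
    px_per_lat = (y2 - y1) / (lat2 - lat1)  # vertical scale
    def to_pixel(lat, lon):
        return (x1 + (lon - lon1) * px_per_lon,
                y1 + (lat - lat1) * px_per_lat)
    return to_pixel

# Hypothetical trailhead map: two calibration points, then live tracking.
to_pixel = calibrate(((44.510, -73.200), (120, 840)),
                     ((44.530, -73.180), (600, 200)))
print(to_pixel(44.520, -73.190))  # -> (360.0, 520.0), midway between anchors
```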
Ed Webb

Guest Post: The Complexities of Certainty | Just Visiting - 0 views

  • Privileges abound in academia, but so do experiences of loss, instability and fear. And into this situation we were called to respond to a pandemic.
  • It is tempting to reach for certainties when everything around us is in chaos, and for a vast swath of higher ed instructors, the rapid shift from face-to-face teaching to emergency distance learning has been chaos. Small wonder, then, that people have offered -- and clung to -- advice that seeks to bring order to disorder. Many people have advised instructors to prioritize professionalism, ditching the sweatpants and putting away the visible clutter in our homes before making a Zoom call, upholding concepts like "rigor" so that our standards do not slip. To some, these appeals to universal principles are right-minded and heartening, a bulwark against confusion and disarray. But to others they have felt oppressive, even dangerously out of touch with the world in which we and our students live.
  • certainties can be dangerous; their very power is based upon reifying well-worn inequities dressed up as tradition
  • there is no objective standard of success that we reach when we insist on rigor, which is too often deployed in defense of practices that are ableist and unkind
  • We are not just teachers, or scholars, or professionals. We are individuals thrown back in varying degrees on our own resources, worried about ourselves and our families and friends as we navigate the effects of COVID-19. Many of us are deeply anxious and afraid. Our pre-existing frailties have been magnified; we feel vulnerable, distracted and at sea. Our loved ones are sick, even dying. This is trauma. Few of us have faced such world-changing circumstances before, and as our minds absorb the impact of that reality, our brains cannot perform as capably as they usually would.
  • The most professional people I know right now are those who show up, day after day, to teach under extraordinary circumstances. Perhaps they do it with their laundry waiting to be folded, while their children interrupt, thinking constantly of their loved ones, weathering loneliness, wearing sweatpants and potentially in need of a haircut. But I know they do it while acknowledging this is not the world in which we taught two months before, and that every student is facing disruption, uncertainty and distraction. They do it creatively, making room for the unexpected, challenging their students, with the world a participant in the conversation.
Ed Webb

The Internet Intellectual - 0 views

  • Even Thomas Friedman would be aghast at some of Jarvis’s cheesy sound-bites
  • What does that actually mean?
  • In Jarvis’s universe, all the good things are technologically determined and all the bad things are socially determined
  • Jarvis never broaches such subtleties. His is a simple world:
  • why not consider the possibility that the incumbents may be using the same tools, Jarvis’s revered technologies, to tell us what to think, and far more effectively than before? Internet shelf space may be infinite, but human attention is not. Cheap self-publishing marginally improves one’s chances of being heard, but nothing about this new decentralized public sphere suggests that old power structures—provided they are smart and willing to survive—will not be able to use it to their benefit
  • Jarvis 1.0 was all about celebrating Google, but Jarvis 2.0 has new friends in Facebook and Twitter. (An Internet intellectual always keeps up.) Jarvis 1.0 wrote that “Google’s moral of universal empowerment is the sometimes-forgotten ideal of democracy,” and argued that the company “provides the infrastructure for a culture of choice,” while its “algorithms and its business model work because Google trusts us.” Jarvis 2.0 claims that “by sharing publicly, we people challenge Google’s machines and reclaim our authority on the internet from algorithms.”
  • Jarvis has another reference point, another sacred telos: the equally grand and equally inexorable march of the Internet, which in his view is a technology that generates its own norms, its own laws, its own people. (He likes to speak of “us, people of the Net.”) For the Technology Man, the Internet is the glue that holds our globalized world together and the divine numen that fills it with meaning. If you thought that ethnocentrism was bad, brace yourself for Internet-centrism
  • Why worry about the growing dominance of such digitalism? The reason should be obvious. As Internet-driven explanations crowd out everything else, our entire vocabulary is being re-defined. Collaboration is re-interpreted through the prism of Wikipedia; communication, through the prism of social networking; democratic participation, through the prism of crowd-sourcing; cosmopolitanism, through the prism of reading the blogs of exotic "others"; political upheaval, through the prism of the so-called Twitter revolutions. Even the persecution of dissidents is now seen as an extension of online censorship (rather than the other way around). A recent headline on the blog of the Harvard-based Herdict project—it tracks Internet censorship worldwide—announces that, in Mexico and Morocco, "Online Censorship Goes Offline." Were activists and dissidents never harassed before Twitter and Facebook?
  • Most Internet intellectuals simply choose a random point in the distant past—the honor almost invariably goes to the invention of the printing press—and proceed to draw a straight line from Gutenberg to Zuckerberg, as if the Counter-Reformation, the Thirty Years’ War, the Reign of Terror, two world wars—and everything else—never happened.
  • even their iPad is of interest to them only as a “platform”—another buzzword of the incurious—and not as an artifact that is assembled in dubious conditions somewhere in East Asian workshops so as to produce cultic devotion in its more fortunate owners. This lack of elementary intellectual curiosity is the defining feature of the Internet intellectual. History, after all, is about details, but no Internet intellectual wants to be accused of thinking small. And so they think big—sloppily, ignorantly, pretentiously, and without the slightest appreciation of the difference between critical thought and market propaganda.
  • In which Evgeny rips Jeff a new one
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We've learned to make "machines that can mindlessly generate text," Bender told me when we met this winter. "But we haven't learned how to stop imagining the mind behind it." [A minimal sketch of such mindless generation follows these annotations.]
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • "We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms," she co-wrote in 2021. "Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups."
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. "I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons," he said on AngelList Confidential in November. He's also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. "We are a few years in," Altman wrote of the cyborg merge in 2017. "It's probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast." On December 4, four days after ChatGPT was released, Altman tweeted, "i am a stochastic parrot, and so r u." [A quick numerical check of the double-exponential claim follows these annotations.]
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
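Two sketches follow from the annotations above. First, the "machines that can mindlessly generate text" flagged in the Bender quote: a toy bigram model that samples each next word purely from co-occurrence counts. Real LLMs are vastly more sophisticated, but the sketch (with entirely invented data) shows why fluent word forms alone are no evidence of a mind.

```python
# A text generator with no mind behind it: each next word is sampled from
# the words that happened to follow the current one in the training text.
# Word forms are modeled; meaning never is.
import random
from collections import defaultdict

text = "the cat sat on the mat and the dog saw the cat on the mat".split()

# Count which word follows which.
follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

def babble(start="the", n=10):
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # pure form, no understanding
    return " ".join(out)

print(babble())  # fluent-looking word salad
```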
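Second, the numerical check flagged in the Altman annotation: what "double exponential functions get away from you fast" means in plain arithmetic.

```python
# Plain exponential (2**n) versus double exponential (2**(2**n)).
for n in range(1, 6):
    print(n, 2**n, 2**(2**n))
# By n = 5 the plain exponential has reached only 32, while the double
# exponential is already 2**32 = 4,294,967,296.
```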
Ed Webb

The trouble with Khan Academy - Casting Out Nines - The Chronicle of Higher Education - 1 views

  • When we say that someone has "learned" a subject, we typically mean that they have shown evidence of mastery not only of basic cognitive processes like factual recall and working mechanical exercises but also higher-level tasks like applying concepts to new problems and judging between two equivalent concepts. A student learning calculus, for instance, needs to demonstrate that s/he can do things like take derivatives of polynomials and use the Chain Rule. But if this is all they can demonstrate, then it's stretching it to say that the student has "learned calculus", because calculus is a lot more than just executing mechanical processes correctly and quickly. [A worked instance of such a mechanical process follows these annotations.]
  • Even if the student can solve optimization or related rates problems just like the ones in the book and in the lecture — but doesn’t know how to start if the optimization or related rates problem does not match their template — then the student hasn’t really learned calculus. At that point, those “applied” problems are just more mechanical processes. We may say the student has learned about calculus, but when it comes to the uses of the subject that really matter — applying calculus concepts to ambiguous and/or complex problems, choosing the best of equivalent methods or results, creating models to solve novel problems — this student’s calculus knowledge is not of much use.
  • Khan Academy is great for learning about lots of different subjects. But it’s not really adequate for learning those subjects on a level that really makes a difference in the world.
  • mechanical skill is a proper subset of the set of all tasks a student needs to master in order to really learn a subject. And a lecture, when well done, can teach novice learners how to think like expert learners; but in my experience with Khan Academy videos, this isn’t what happens — the videos are demos on how to finish mathematics exercises, with little modeling of the higher-level thinking skills that are so important for using mathematics in the real world.
  • The Khan Academy is a great new resource, and it's a sign of greater things to come... but it's much more akin to a book than a teacher.
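As flagged in the first annotation above, here is a worked instance of the mechanical skill in question: applying the Chain Rule correctly is exactly the kind of template execution a student can demonstrate without having "learned calculus" in the fuller sense.

```latex
% Differentiating a composite function by the Chain Rule -- a mechanical
% exercise of the sort a demo video can walk through step by step.
\[
  \frac{d}{dx}\,(3x^{2}+1)^{5}
  \;=\; 5\,(3x^{2}+1)^{4}\cdot\frac{d}{dx}\,(3x^{2}+1)
  \;=\; 30x\,(3x^{2}+1)^{4}
\]
```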
Ed Webb

Reflections on open courses « Connectivism - 0 views

  • There is value in blending traditional with emergent knowledge spaces (online conferences and traditional journals)
    - Learners will create and innovate if they can express ideas and concepts in their own spaces and through their own expertise (i.e. hosting events in Second Life)
    - Courses are platforms for innovation. Too rigid a structure puts the educator in full control. Using a course as a platform fosters creativity…and creativity generates a bit of chaos and can be unsettling to individuals who prefer a structure with which they are familiar.
    - (cliche) Letting go of control is a bit stressful, but surprisingly rewarding in the new doors it opens and liberating in how it brings others in to assist in running a course and advancing the discussion.
    - People want to participate…but they will only do so once they have "permission" and a forum in which to utilize existing communication/technological skills.
  • The internet is a barrier-reducing system. In theory, everyone has a voice online (the reality of technology ownership, digital skills, and internet access add an unpleasant dimension). Costs of duplication are reduced. Technology (technique) is primarily a duplicationary process, as evidenced by the printing press, assembly line, and now the content duplication ability of digital technologies. As a result, MOOCs embody, rather than reflect, practices within the digital economy. MOOCs reduce barriers to information access and to the dialogue that permits individuals (and society) to grow knowledge. Much of the technical innovation in the last several centuries has permitted humanity to extend itself physically (cars, planes, trains, telescopes). The internet, especially in recent developments of connective and collaborative applications, is a cognitive extension for humanity. Put another way, the internet offers a model where the reproduction of knowledge is not confined to the production of physical objects.
  • Knowledge is a mashup. Many people contribute. Many different forums are used. Multiple media permit varied and nuanced expressions of knowledge. And, because the information base (which is required for knowledge formation) changes so rapidly, being properly connected to the right people and information is vitally important. The need for proper connectedness to the right people and information is readily evident in intelligence communities. Consider the Christmas day bomber. Or 9/11. The information was being collected. But not connected.
  • The open model of participation calls into question where value is created in the education system. Gutenberg created a means to duplicate content. The social web creates the opportunity for many-to-many interactions and to add a global social layer on content creation and knowledge growth.
  • Whatever can be easily duplicated cannot serve as the foundation for economic value. Integration and connectedness are economic value points.
  • In education, content can easily be produced (it’s important but has limited economic value). Lectures also have limited value (easy to record and to duplicate). Teaching – as done in most universities – can be duplicated. Learning, on the other hand, can’t be duplicated. Learning is personal, it has to occur one learner at a time. The support needed for learners to learn is a critical value point.
  • Learning, however, requires a human, social element: both peer-based and through interaction with subject area experts
  • Content is readily duplicated, reducing its value economically. It is still critical for learning – all fields have core elements that learners must master before they can advance (research in expertise supports this notion).
    - Teaching can be duplicated (lectures can be recorded, Elluminate or similar webconferencing system can bring people from around the world into a class). Assisting learners in the learning process, correcting misconceptions (see Private Universe), and providing social support and brokering introductions to other people and ideas in the discipline is critical.
    - Accreditation is a value statement – it is required when people don't know each other. Content was the first area of focus in open education. Teaching (i.e. MOOCs) are the second. Accreditation will be next, but, before progress can be made, profile, identity, and peer-rating systems will need to improve dramatically. The underlying trust mechanism on which accreditation is based cannot yet be duplicated in open spaces (at least, it can't be duplicated to such a degree that people who do not know each other will trust the mediating agent of open accreditation)
  • The skills that are privileged and rewarded in a MOOC are similar to those that are needed to be effective in communicating with others and interacting with information online (specifically, social media and information sources like journals, databases, videos, lectures, etc.). Creative skills are the most critical. Facilitators and learners need something to “point to”. When a participant creates an insightful blog post, a video, a concept map, or other resource/artifact it generally gets attention.
  • Intentional diversity – not necessarily a digital skill, but the ability to self-evaluate ones network and ensure diversity of ideologies is critical when information is fragmented and is at risk of being sorted by single perspectives/ideologies.
  • The volume of information is very disorienting in a MOOC. For example, in CCK08, the initial flow of postings in Moodle, three weekly live sessions, Daily newsletter, and weekly readings and assignments proved to be overwhelming for many participants. Stephen and I somewhat intentionally structured the course for this disorienting experience. Deciding who to follow, which course concepts are important, and how to form sub-networks and sub-systems to assist in sensemaking are required to respond to information abundance. The process of coping and wayfinding (ontology) is as much a lesson in the learning process as mastering the content (epistemology). Learners often find it difficult to let go of the urge to master all content, read all the comments and blog posts.
  • Learning is a social trust-based process.
  • Patience, tolerance, suspension of judgment, and openness to other cultures and ideas are required to form social connections and negotiate misunderstandings.
  • An effective digital citizenry needs the skills to participate in important conversations. The growth of digital content and social networks raises the need for citizens to have the technical and conceptual skills to express their ideas and engage with others in those spaces. MOOCs are a first-generation testing ground for knowledge growth in a distributed, global, digital world. Their role in developing a digital citizenry is still unclear, but democratic societies require a populace with the skills to participate in growing a society's knowledge. As such, MOOCs, or similar open, transparent learning experiences that foster the development of citizens' confidence to engage and create collaboratively, are important for the future of society.
Ed Webb

Bad News : CJR - 0 views

  • Students in Howard Rheingold’s journalism class at Stanford recently teamed up with NewsTrust, a nonprofit Web site that enables people to review and rate news articles for their level of quality, in a search for lousy journalism.
  • the News Hunt is a way of getting young journalists to critically examine the work of professionals. For Rheingold, an influential writer and thinker about the online world and the man credited with coining the phrase “virtual community,” it’s all about teaching them “crap detection.”
  • last year Rheingold wrote an important essay about the topic for the San Francisco Chronicle’s Web site
  • What’s at stake is no less than the quality of the information available in our society, and our collective ability to evaluate its accuracy and value. “Are we going to have a world filled with people who pass along urban legends and hoaxes?” Rheingold said, “or are people going to educate themselves about these tools [for crap detection] so we will have collective intelligence instead of misinformation, spam, urban legends, and hoaxes?”
  • I previously called fact-checking "one of the great American pastimes of the Internet age." But, as Rheingold noted, the opposite is also true: the manufacture and promotion of bullshit is endemic. One couldn't exist without the other. That makes Rheingold's essay, his recent experiment with NewsTrust, and his wiki of online critical-thinking tools essential reading for journalists. (He's also writing a book about this topic.)
  • I believe if we want kids to succeed online, the biggest danger is not porn or predators—the biggest danger is them not being able to distinguish truth from carefully manufactured misinformation or bullshit
  • As relevant to general education as to journalism training
Ed Webb

Study Shows Students Are Addicted to Social Media | News | Communications of the ACM - 0 views

  • most college students are not just unwilling, but functionally unable to be without their media links to the world. "I clearly am addicted and the dependency is sickening," says one person in the study. "I feel like most people these days are in a similar situation, for between having a Blackberry, a laptop, a television, and an iPod, people have become unable to shed their media skin."
  • what they wrote at length about was how they hated losing their personal connections. Going without media meant, in their world, going without their friends and family
  • they couldn't connect with friends who lived close by, much less those far away
  • "Texting and IM-ing my friends gives me a constant feeling of comfort," wrote one student. "When I did not have those two luxuries, I felt quite alone and secluded from my life. Although I go to a school with thousands of students, the fact that I was not able to communicate with anyone via technology was almost unbearable."
  • students' lives are wired together in such ways that opting out of that communication pattern would be tantamount to renouncing a social life
  • "Students expressed tremendous anxiety about being cut-off from information,"
  • How did they get the information? In a disaggregated way, and not typically from the news outlet that broke or committed resources to a story.
  • the young adults in this study appeared to be generally oblivious to branded news and information
  • an undifferentiated wave to them via social media
  • 43.3 percent of the students reported that they had a "smart phone"
Ed Webb

Forvo: the pronunciation guide. All the words in the world pronounced by native speakers - 0 views

  • Forvo is the largest pronunciation guide in the world. Ever wondered how a word is pronounced? Ask for that word or name, and another user will pronounce it for you. You can also help others by recording pronunciations in your own language.
Ed Webb

Why I won't buy an iPad (and think you shouldn't, either) - Boing Boing - 1 views

  • If there was ever a medium that relied on kids swapping their purchases around to build an audience, it was comics. And the used market for comics! It was -- and is -- huge, and vital.
  • what does Marvel do to "enhance" its comics? They take away the right to give, sell or loan your comics. What an improvement. Way to take the joyous, marvellous sharing and bonding experience of comic reading and turn it into a passive, lonely undertaking that isolates, rather than unites.
  • a palpable contempt for the owner.
  • But with the iPad, it seems like Apple's model customer is that same stupid stereotype of a technophobic, timid, scatterbrained mother as appears in a billion renditions of "that's too complicated for my mom" (listen to the pundits extol the virtues of the iPad and time how long it takes for them to explain that here, finally, is something that isn't too complicated for their poor old mothers).
  • The model of interaction with the iPad is to be a "consumer," what William Gibson memorably described as "something the size of a baby hippo, the color of a week-old boiled potato, that lives by itself, in the dark, in a double-wide on the outskirts of Topeka. It's covered with eyes and it sweats constantly. The sweat runs into those eyes and makes them sting. It has no mouth... no genitals, and can only express its mute extremes of murderous rage and infantile desire by changing the channels on a universal remote."
  • Buying an iPad for your kids isn't a means of jump-starting the realization that the world is yours to take apart and reassemble; it's a way of telling your offspring that even changing the batteries is something you have to leave to the professionals.
  • Apple's customers can't take their "iContent" with them to competing devices, and Apple developers can't sell on their own terms.
  • I don't want my universe of apps constrained to the stuff that the Cupertino Politburo decides to allow for its platform. And as a copyright holder and creator, I don't want a single, Wal-Mart-like channel that controls access to my audience and dictates what is and is not acceptable material for me to create.
  • Rupert Murdoch can rattle his saber all he likes about taking his content out of Google, but I say do it, Rupert. We'll miss your fraction of a fraction of a fraction of a percent of the Web so little that we'll hardly notice it, and we'll have no trouble finding material to fill the void.
  • the walled gardens that best return shareholder value
  • The real issue isn't the capabilities of the piece of plastic you unwrap today, but the technical and social infrastructure that accompanies it.
Ed Webb

Ian Bogost - Beyond Blogs - 0 views

  • I wish these were the sorts of questions so-called digital humanists considered, rather than figuring out how to pay homage to the latest received web app or to build new tools to do the same old work. But as I recently argued, a real digital humanism isn't one that's digital, but one that's concerned with the present and the future. A part of that concern involves considering the way we want to interact with one another and the world as scholars, and to intervene in that process by making it happen. Such a question is far more interesting and productive than debating the relative merits of blogs or online journals, acts that amount to celebrations of how little has really changed.
  • Perhaps a blog isn't a great tool for (philosophical; videogame) discussion or even for knowledge retention, etc... but a whole *blogosphere*...? If individuals (and individual memory in particular) are included within the scope of "the blogosphere" then surely someone remembers the "important" posts, like you seemed to be asking for...?