Mindamp / Group items tagged "civil"

Charles van der Haegen

Robert Thurman | Professor of Buddhist Studies, Columbia University; President, Tibet H... - 0 views

  • "Robert Thurman is Professor of Indo-Tibetan Buddhist Studies in the Department of Religion at Columbia University, President of Tibet House US, a non-profit organization dedicated to the preservation and promotion of Tibetan civilization, and President of the American Institute of Buddhist Studies. The New York Times recently hailed him as "the leading American expert on Tibetan Buddhism." The first American to have been ordained a Tibetan Buddhist monk and a personal friend of the Dalai Lama for over 40 years, Professor Thurman is a passionate advocate and spokesperson for the truth regarding the current Tibet-China situation and the human rights violations suffered by the Tibetan people under Chinese rule. His commitment to finding a peaceful, win-win solution for Tibet and China inspired him to write his latest book, Why the Dalai Lama Matters: His Act of Truth as the Solution for China, Tibet and the World, published in June of 2008. Professor Thurman also translates important Tibetan and Sanskrit philosophical writings, and lectures and writes on Buddhism, particularly Tibetan Buddhism; on Asian history, particularly the history of the monastic institution in Asian civilization; and on critical philosophy, with a focus on the dialogue between the material and inner sciences of the world's religious traditions."
  • I believe this is a great interview and video set.
David McGavock

Hybrid Pedagogy: A Digital Journal on Teaching & Technology | Home - 1 views

  • Hybrid Pedagogy | [What is Hybrid Pedagogy?]:
    - combines the strands of critical and digital pedagogy to arrive at the best social and civil uses for technology and digital media in on-ground and online classrooms;
    - avoids valorizing educational technology, but seeks to interrogate and investigate technological tools to determine their most progressive applications;
    - invites you to an ongoing discussion that is networked and participant-driven, and to an open peer-reviewed journal that is both academic and collective.
Charles van der Haegen

The Saguaro Seminar: Civic Engagement in America - 0 views

  • "THE SAGUARO SEMINAR: CIVIC ENGAGEMENT IN AMERICA is an initiative of Professor Robert D. Putnam at the John F. Kennedy School of Government at Harvard University, focused on the study of "social capital" (the value of social networks) and community engagement. Our eponymous seminar from 1995-2000 involved 30 talented scholars and practitioners from across America (including then civil-rights lawyer Barack Obama) in developing strategies for increasing American civic engagement, and led to the "Bettertogether" report. "Bettertogether," the final report of the Saguaro Seminar, is now available at www.BetterTogether.org.

    The Seminar participants were a diverse, exceptional group of 33 thinkers and doers, including articulate leaders from all parts of the country - from coast to coast, from small town and suburb to the inner city - and from all walks of life - from government officials to religious leaders, from labor union activists to high-tech and business executives, from elected officials to street workers. All participants demonstrated a deep commitment to improving the infrastructure of national civic life. These twenty-five practitioners and eight academic thinkers met for two-day sessions through late 1999 to develop a handful of practical strategies with national applicability for increasing Americans' connections with one another."
David McGavock

The Myth Of AI | Edge.org - 1 views

  • what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.
  • it adds a layer of religious thinking to what otherwise should be a technical field.
  • we can talk about pattern classification.
  • ...38 more annotations...
  • But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous
  • I'm going to go through a couple of layers of how the mythology does harm.
  • this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
  • If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
  • our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on
  • people often accept that
  • all this overpromising that AIs will be about to do this or that. It might be to become fully autonomous driving vehicles instead of only partially autonomous, or it might be being able to fully have a conversation as opposed to only having a useful part of a conversation to help you interface with the device.
  • other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good.
  • there's no way to tell where the border is between measurement and manipulation in these systems.
  • if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
  • it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened
  • What's not clear is where the boundary is.
  • If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know.
  • we don't know to what degree they're measurement versus manipulation.
  • If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore
  • not so much a rise of evil as a rise of nonsense.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth.
  • Cortana or a Siri
  • This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable
    • David McGavock: Key relationship between the automation of tasks, its downsides, and expectations for AI.
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous.
  • It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything.
  • I'm going to give you two scenarios.
  • let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
  • some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it.
    • David McGavock: Another key point - focus on the actuator, not the agent that exploits it.
  • the part that causes the problem is the actuator. It's the interface to physicality
  • not so much whether it's a bunch of teenagers or terrorists behind it or some AI
  • The sad fact is that, as a society, we have to do something to not have little killer drones proliferate.
  • What we don't have to worry about is the AI algorithm running them, because that's speculative.
  • another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people.
  • There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science,
  • You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist.
  • To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.
  • The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity.
  • In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
    • David McGavock: Technical priesthood.
  • If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
  • A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
  • "The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not, since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."
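The feedback-loop worry running through the annotations above - an engine recommending to a population it has itself shaped, rather than to a "virgin population" - can be made concrete with a toy simulation. This is not Lanier's model; the function, parameters, and numbers below are purely illustrative assumptions. It shows how, once most users follow the recommendation, one item locks in even though every item is identical in quality:

```python
import random

def simulate(rounds=1000, n_items=10, follow_rate=0.9, seed=0):
    """Toy popularity-feedback loop.

    Each round, a user either follows the engine's top recommendation
    (the currently most-chosen item) or picks independently at random.
    Returns the final choice counts per item.
    """
    rng = random.Random(seed)
    counts = [1] * n_items  # uniform start: every item begins equal
    for _ in range(rounds):
        if rng.random() < follow_rate:
            # follow the recommendation: whichever item leads so far
            pick = counts.index(max(counts))
        else:
            # independent choice, uninfluenced by the engine
            pick = rng.randrange(n_items)
        counts[pick] += 1
    return counts

locked = simulate(follow_rate=0.9)   # most users follow recommendations
free = simulate(follow_rate=0.0)     # nobody follows them
```

With a high follow rate, the first item to edge ahead absorbs most subsequent choices, so the engine ends up measuring its own past interventions rather than any underlying preference; with a follow rate of zero, the counts stay roughly uniform. That gap is the "measurement versus manipulation" boundary the excerpts say we cannot locate from the outside.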
Donal O' Mahony

21 Things That Will Be Obsolete by 2020 | MindShift - EdTech Leadership - 3 views

  • Some interesting observations.
  • Great link. I posted the following comment on the article's website:

    On point 18, why not organic food in the cafeteria? It seems that if we are smart enough to adopt all the trends you are discussing, a farm-to-school lunch program would make sense as well. I like all the suggestions you are making, but I also find them too technologically oriented, and not necessarily grounded in the needs of the current reality we are facing: can we even educate for a world that no longer has the carrying capacity for civilization? I think the tools you mention are all useful and can be applied sustainably, but I would suggest a conscious push to incorporate sustainability as an educational value that is integrated into the technology. And I'm not just talking about information literacy on environmental issues, but actual sustainable cultural practice, which includes many of the things you have listed here. Additionally, it would be good to argue for Green IT. What good is a digital cloud if the ones outside the classroom are wreaking havoc on our surroundings? Again, I like your ideas; I just think they will be more feasible in a habitable world. We should put our minds together to make this so.
Charles van der Haegen

UCLI collective UC & Public Higher Education: A Teach In on Vimeo - 0 views

  • A very interesting video collection for understanding the policy problems around the commons of education in California. I hear in it very practical living testimonies which illustrate the dilemma of the commons, the opposing solidarities in Socio-Cultural Viability Theory, the vested-interest issues, and the democracy principles that are used as myths and metaphors influencing people's thinking. It is live policy-making and activism in action.