
Mindamp: Group items tagged "myth"

David McGavock

The Myth Of AI | Edge.org - 1 views

  • what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.
  • it adds a layer of religious thinking to what otherwise should be a technical field.
  • we can talk about pattern classification.
  • ...38 more annotations...
  • But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous
  • I'm going to go through a couple of layers of how the mythology does harm.
  • this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
  • If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
  • our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on
  • people often accept that
  • all this overpromising that AIs will be about to do this or that. It might be to become fully autonomous driving vehicles instead of only partially autonomous, or it might be being able to fully have a conversation as opposed to only having a useful part of a conversation to help you interface with the device.
  • other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good.
  • there's no way to tell where the border is between measurement and manipulation in these systems.
  • if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
  • it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened
  • What's not clear is where the boundary is.
  • If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know.
  • we don't know to what degree they're measurement versus manipulation.
  • If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore
  • not so much a rise of evil as a rise of nonsense.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth.
  • Cortana or a Siri
  • This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable
    • David McGavock
       
      Key relationship between automation of tasks, downsides, and expectation for AI
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous.
  • It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • I'm going to give you two scenarios.
  • let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
  • Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything.
  • some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it.
    • David McGavock
       
      Another key - focus on the actuator, not the agent that exploits it.
  • the part that causes the problem is the actuator. It's the interface to physicality
  • not so much whether it's a bunch of teenagers or terrorists behind it or some AI
  • The sad fact is that, as a society, we have to do something to not have little killer drones proliferate.
  • What we don't have to worry about is the AI algorithm running them, because that's speculative.
  • another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people.
  • There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science,
  • You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist.
  • To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.
  • The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity.
  • In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
    • David McGavock
       
      Technical priesthood.
  • If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
  • A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
  •  
    "The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture-that's been the most wealthy, prolific, and influential subculture in the technical world-that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."
Charles van der Haegen

TVO.ORG | Video | The Agenda - Tim Wu: Information Empires - 0 views

  •  
    Transcript on my blog of an interview with Tim Wu, author of The Master Switch: http://ow.ly/5G0ry . No doubt a must-read book, and if you doubt it, view this video. The one thing that bothers me in Tim Wu's talk is his deep belief that two things will NOT change: economics and human nature. Food for thought, and questions for deep dialogue and inquiry. I believe we can come up with solutions to these two "invariables," which seem more like metaphors or myths than inescapable fatalities. Should these deep beliefs remain unconsciously hidden in our minds, however, they might prevent us from looking at things through other, more hopeful underlying belief systems. New ways of looking at things bring with them other possibilities for the future of the world. Let's hope we can achieve a stage of mental capacity that allows a world to emerge without wars everywhere, without undignified living conditions for the majority of humans, without inequality everywhere, even in so-called advanced economies, and without the destruction of nature. Let's aim instead for freedom and self-determination for all, a belief in human endowment and possibility, a change in mental capacity, and a return to conditions in which our systems intelligence can express itself. This might allow us all to raise our consciousness and cooperate collectively to solve the intractable problems our current mental models have created.
  •  
    I believe this interview says a lot about what might happen if the proponents of open and free social media and an open Internet lose their battle. It also shows that this battle is broader: is it inescapable that human nature and economic paradigms are invariable? Are we doomed to see unchanged economic pursuits (money and the concentration of wealth) combine with unchanged human nature (the incessant and exclusive pursuit of more wealth and power)? I believe not; that is the whole point of new paradigms for cooperation, combined with the effects of mind amplification and social media.
David McGavock

Our mission - Gapminder.org - 0 views

  •  
    "About Gapminder Fighting the most devastating myths by building a fact-based world view that everyone understands. Gapminder is a non-profit venture - a modern "museum" on the Internet - promoting sustainable global development and achievement of the United Nations Millennium Development Goals."
Charles van der Haegen

UCLI collective UC & Public Higher Education: A Teach In on Vimeo - 0 views

  •  
    A very interesting video collection for understanding the policy problems around the commons of education in California. I hear in it very practical living testimonies that illustrate the dilemma of the commons, the opposing solidarities of socio-cultural viability theory, the vested-interest issues, and the democratic principles that are used as myths and metaphors influencing people's thinking. It is live policy making and activism in action.
David McGavock

A New Culture of Learning | Social Media Classroom - 3 views

  • A New Culture of Learning
  • what strikes me is the second part of the title Cultivating the Imagination for a World of Constant Change.
  • I love seeing a child's imagination being captivated
  • ...32 more annotations...
  • I am challenged by many who see social-media as the next project rather than a shift in the paradigm of existence.
  • I believe that dissatisfaction with the factory model of school, along with the growing number, ubiquity, and accessibility of tools (for connection, collaboration, and creation), will tip the balance toward new models and cultures of learning.
  • I love to see teachers and students figuring out how to use technology together; asking questions, trying stuff, "messing around" as Brown would say.
  • The Social Life of Information by John Seely Brown and Paul Duguid
  • Can I just say that it is amazingly prescient and still relevant even a decade later? I'm interested in comparing it to his more recent book in discussion here.
  • Howard responds with an idea on assignments (and the power of assignments). I found the questions (or, in other courses, the assignments) to be really good at directing my brain: 1. read the question; 2. go to sleep; 3. stare at the ceiling for hours; 4. brush teeth; 5. eureka. These methods are also used in action learning and action research.
  • I'm reading the book "the myth of management" (which is not related to learning), and I found out that finding "faults" is actually a dirty consultant trick, as it expands the window through which you can sell your solution. I hacked that idea and replaced solution with learning.
  •  The role of the instructor in balancing freedom and structure -- setting enough structure so that the unlimited freedom doesn't become vertiginous and overwhelming -- resonates with my experiences with Rheingold U. so far. Assignments seem to help, but they can't be too onerous.
  •  Very nice article comparing Thomas/JSB ideas to John Dewey's:
  •  http://charlestkerchner.com/wp-content/uploads/2011/04/DeweyThomas.pdf
  • Ernst - I am particularly interested in Action Research of the "plan, act, observe, reflect" variety, where we never really arrive at conclusions but start again in a new cycle of teaching and learning.
  • that idea of teaching people to fail is very important - I notice that this is acceptable very often in business especially in the contexts of 'start-ups' but unacceptable in most schools. Here in Europe, the work of the Finnish educationalist Pasi Sahlberg gets a lot of attention - one of his motifs is learning to be wrong.
  • Knowing who to listen to in the 'noise' of all the information overload is important - I'm looking forward to our continuing review of how we all re-imagine that new culture of learning.
  • Can you, and if so, how, change a system from within? This is one of the key issues of our time. Learning, PLNs, community support structures, activism, social media, cooperation... are all part of that, so it is really at the heart of our SMC Alumni topics.
  • I would suggest, we should be dialoguing in depth about the question, and how to formulate it, before jumping to solutions...
  • The work of social and developmental psychologist, Carol Dweck can inform our discussion about failure,
  • Her book, Mindset, posits that some students have growth mindsets and some have fixed mindsets.
  • Ernst, I adore your description of problem-solving (especially the enumerated part). Downtime is essential for processing information and I agree, even subtle shifts within group dynamics can cause huge internal vistas to open up.
  • The idea of structuring for failure in itself is a whole new take on creative thinking.
  • Schools reward success.  That's our measurement system, our "leaderboard".  Some winners at school go on to run schools. Schools punish failure deeply, systematically.  Remember dunce caps? So taking failure as a good thing is, at the very least, weird and defamiliarizing!
  • Chapter Two of Thomas and Seely-Brown's book is so short - just five pages. They conclude with the idea that "...the point is to embrace what we don't know, come up with better questions about it, and continue asking those questions in order to learn more and more, both incrementally and exponentially." I wonder whether the authors want us to reflect repeatedly on the contents of the chapter, given its brevity.
  • is it a certain type of person who fails, who is subsequently allowed to start again?
  • book's first chapter
  • Two key elements: network ("a massive information network that provides almost unlimited access and resources", sounds like mobile + Web) and environments ("bounded and structural") (19).
  • what do you make of the examples they present?  What do they suggest about the theory they exemplify?
  • John Seely Brown is particularly interested in the idea of tinkering. He suggests one of the best 'tinkering' models is the architectural studio -- the place where students work together trying to solve each others' problems, and a mentor or master can also take part in open criticism. Find out why this is a model for us all.  http://www.abc.net.au/rn/bydesign/stories/2011/3147776.htm
  • The first chapter is a pretty rosy, and might I say westernized, view of the power of Internet access + play in learning.* It manages to enlighten and engage using a few choice narratives (I imagine we will get to the power of those at some point in the book, too) and sets us up for the rationale to come.
  • * I'm looking for some reaction with regards to that comment
  • based on WEIRD (Western, Educated, Industrialized, Rich, Democratic) concepts. (As an aside, here's a truly wonderful post unpacking the idea of WEIRD in social science research.)
  • I can only talk for myself but there are contradictions between what I think is best to do with the students I teach and what I actually do. This "living contradiction" is something I consider in my own studies - I noticed a Tweet last night from Howard: Online and blended learning is NOT about automating delivery of knowledge, but about encouraging peer learning, inquiry, discourse.
  • The sentence I liked most from Chapter One reads "One of the metaphors we adopt to describe this process is cultivation. A farmer for example takes the nearly unlimited resources of sunlight, wind, water, earth, and biology and consolidates them into the bounded and structured environment of garden or farm. We see a new culture of learning as a similar kind of process - but cultivating minds instead of plants"
  • Everyone - you may have seen the piece below - if not please take 12 minutes to view it - it fits nicely with our current discussion
  •  
    This is the first capture of the conversation from the thread "A New Culture of Learning". We'll see how this goes
  •  
    I read the book almost cover to cover. It led me to think more about pushing what I've been doing closer to pure p2p. One of the co-learners in the latest Mindamp told me about "paragogy." That one is worth bookmarking.
David McGavock

Multitasking, social media and distraction: Research review Journalist's Resource: Rese... - 0 views

  • researchers have tried to assess how humans are coping in this highly connected environment and how “chronic multitasking” may diminish our capacity to function effectively.
  • Clifford Nass, notes that scholarship has remained firm in the overall assessment: “The research is almost unanimous, which is very rare in social science, and it says that people who chronically multitask show an enormous range of deficits. They’re basically terrible at all sorts of cognitive tasks, including multitasking.”
  • Below are more than a dozen representative studies in these areas:
  • ...9 more annotations...
  • The researchers conclude that the experiments “suggest that heavy media multitaskers are distracted by the multiple streams of media they are consuming, or, alternatively, that those who infrequently multitask are more effective at volitionally allocating their attention in the face of distractions.”
  • Members of the ‘Net Generation’ reported more multitasking than members of ‘Generation X,’ who reported more multitasking than members of the ‘Baby Boomer’ generation. The choices of which tasks to combine for multitasking were highly correlated across generations, as were difficulty ratings of specific multitasking combinations.
  • same time, these experts predicted that the impact of networked living on today’s young will drive them to thirst for instant gratification, settle for quick choices, and lack patience
  • similar mental limitations in the types of tasks that can be multitasked.
  • survey about the future of the Internet, technology experts and stakeholders were fairly evenly split as to whether the younger generation’s always-on connection to people and information will turn out to be a net positive or a net negative by 2020.
  • said many of the young people growing up hyperconnected to each other and the mobile Web and counting on the Internet as their external brain will be nimble, quick-acting multitaskers who will do well in key respects.
  • The educational implications include allowing students short ‘technology breaks’ to reduce distractions and teaching students metacognitive strategies regarding when interruptions negatively impact learning.”
  • The data suggest that “using Facebook and texting while doing schoolwork were negatively predictive of overall GPA.” However, “emailing, talking on the phone, and using IM were not related to overall GPA.”
  • Regression analyses revealed that increased media multitasking was associated with higher depression and social anxiety symptoms, even after controlling for overall media use and the personality traits of neuroticism and extraversion.
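The phrase "even after controlling for overall media use and the personality traits" in the last study refers to multiple regression: the covariates enter the model as extra predictors, so the multitasking coefficient is estimated with those factors held fixed. A minimal sketch on synthetic data (the variable names, effect sizes, and sample size are invented here for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Covariates the study controlled for (synthetic stand-ins).
media_use = rng.normal(size=n)
neuroticism = rng.normal(size=n)
extraversion = rng.normal(size=n)

# Multitasking is partly driven by overall media use (a confound).
multitasking = 0.6 * media_use + rng.normal(size=n)

# Synthetic outcome with an assumed 0.4 multitasking effect.
depression = (0.4 * multitasking + 0.5 * media_use
              + 0.3 * neuroticism + rng.normal(size=n))

# Design matrix: intercept column plus all predictors together.
X = np.column_stack([np.ones(n), multitasking, media_use,
                     neuroticism, extraversion])
coef, *_ = np.linalg.lstsq(X, depression, rcond=None)

# coef[1] estimates the multitasking association net of the covariates,
# and recovers roughly the assumed 0.4 rather than an inflated value.
print(round(coef[1], 2))
```

Leaving `media_use` out of the design matrix would inflate the multitasking coefficient, which is why "controlling for overall media use" matters to the reported result.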