
Mindamp: Group items tagged self-determined


Charles van der Haegen

George Siemens on Massive Open Online Courses (MOOCs) - YouTube

  •  Thanks, Howard, for conducting this interview and allowing George Siemens to explain the philosophy behind his MOOC idea. Great educational content. It also shows a path for the future of self-determined, self-managed, life-long autonomous learning in teams and around personal and wider global community networks. "George Siemens, at the Technology Enhanced Knowledge Research Institute at Athabasca University, has been running "Massive Open Online Courses" (MOOCs). I talk to him about what a MOOC is, how it works, and the educational philosophy behind it." Excellent interview by Howard Rheingold.
  •  This video is really great. Howard is a master interviewer. George Siemens is provoked into answering the kind of questions that allow the viewer to really comprehend his thinking and the power of his MOOC. By the same token, it gives a nice indication of the similarity in design that Howard is following for his own course... When will the two combine into a greater whole?
Charles van der Haegen

TVO.ORG | Video | The Agenda - Tim Wu: Information Empires

  •  Transcript on my blog: Tim Wu, author of The Master Switch, interviewed: http://ow.ly/5G0ry . No doubt a must-read book, and if you doubt it, view this video. The one thing that bothers me in Tim Wu's speech is his deep belief that the two things that will NOT change are economics and human nature. Food for thought, and questions for deep dialogue and inquiry. I believe we can come up with solutions to these two "invariables," which seem more like "metaphors" or "myths" than inescapable fatalities. Should these deep beliefs remain, however, unconsciously hidden in our minds, they might prevent us from looking at things from other, more hopeful underlying belief systems. New ways of looking at things bring with them other possibilities for the future of the world. Let's hope we can achieve a stage of mental capacity that allows a world to emerge without wars everywhere, without undignified living conditions for the majority of humans, without inequality everywhere, even in so-called advanced economies, and without destruction of nature. Let's aim instead for freedom and self-determination for all, a belief in human endowment and possibility, a change in mental capacity, and a return to conditions that let our Systems Intelligence express itself. This might allow us all to raise our consciousness and to cooperate collectively in solving the intractable problems our ongoing mental models have created.
  •  I believe this interview says a lot about what might happen if the proponents of an open and free social media and internet lose their battle. It also shows that this battle is broader: is it inescapable that human nature and economic paradigms are invariable? Are we doomed to see unchanged economic pursuits (meaning money and concentration of wealth) combine with unchanged human nature (the incessant and exclusive pursuit of more wealth and power)? I believe not; this is the whole point of new paradigms for cooperation combined with the effects of mind amplification and social media.
David McGavock

The Myth Of AI | Edge.org

  • what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.
  • it adds a layer of religious thinking to what otherwise should be a technical field.
  • we can talk about pattern classification.
  • But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous
  • I'm going to go through a couple of layers of how the mythology does harm.
  • this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
  • If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
  • our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on
  • people often accept that
  • all this overpromising that AIs will be about to do this or that. It might be to become fully autonomous driving vehicles instead of only partially autonomous, or it might be being able to fully have a conversation as opposed to only having a useful part of a conversation to help you interface with the device.
  • other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good.
  • there's no way to tell where the border is between measurement and manipulation in these systems.
  • if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
  • it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened
  • What's not clear is where the boundary is.
  • If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know.
  • we don't know to what degree they're measurement versus manipulation.
  • If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore
  • not so much a rise of evil as a rise of nonsense.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth.
  • Cortana or a Siri
  • This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable
    • David McGavock
       
      Key relationship between automation of tasks, downsides, and expectation for AI
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous.
  • It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • I'm going to give you two scenarios.
  • let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
  • Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything.
  • some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it.
    • David McGavock
       
      Another key - focus on the actuator, not the agent that exploits it.
  • the part that causes the problem is the actuator. It's the interface to physicality
  • not so much whether it's a bunch of teenagers or terrorists behind it or some AI
  • The sad fact is that, as a society, we have to do something to not have little killer drones proliferate.
  • What we don't have to worry about is the AI algorithm running them, because that's speculative.
  • another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people.
  • There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science,
  • You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist.
  • To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.
  • The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity.
  • In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
    • David McGavock
       
      Technical priesthood.
  • If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
  • A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
  •  "The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture, that's been the most wealthy, prolific, and influential subculture in the technical world, that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."