Mindamp: Group items tagged religion


David McGavock

HOW CULTURE DROVE HUMAN EVOLUTION | Edge.org - 0 views

  • how culture drove human evolution
  • cultural brain hypothesis—this is the idea that the real driver in the expansion of human brains was this growing cumulative body of cultural information, so that what our brains increasingly got good at was the ability to acquire information, store, process and retransmit this non-genetic body of information.
  • but tools and artifacts (the kinds of things that one finds useful to throw or finds useful to manipulate) are themselves products of cultural evolution.
  • For a long time the view was that status in humans was just a kind of human version of this dominant status
  • Chimps, other primates, have dominant status.
  • social status
  • second kind of status. We call this status prestige.
  • from being particularly knowledgeable or skilled in an area,
  • From this we've argued that humans have two separate kinds of status, dominance and prestige
  • give them deference in exchange for knowledge that you get back
  • you want to isolate the members of your group who are most likely to have a lot of these resources, meaning a lot of the knowledge or information that could be useful to you in the future
  • some of the big questions are, exactly when did this body of cumulative cultural evolution get started?
  • may have started early
  • another possibility is that it emerged about 800,000 years ago.
  • There are theoretical models that show that culture, our ability to learn from others, is an adaptation to fluctuating environments.
  • Another signature of cultural learning is regional differentiation in material culture, and you see that by about 400,000 years ago
  • 400,000 years ago
  • there's another possibility that it was a different kind of ape that we don't have in the modern world: a communal breeding ape that lives in family units rather than the kind of fission-fusion you might see in chimpanzees
  • In the Pliocene, we see lots of different kinds of apes in terms of different species of Australopithecus.
  • we now have evidence to suggest that humans were communal breeders, so that we lived in family groups maybe somewhat similar to the way gorillas live in family groups, and that this is a much better environment for the evolution of capacities for culture than typical in the chimpanzee model
  • for cultural learning to really take off, you need more than one model.
  • trying out different technique
  • take advantage of the variation
  • the question is, how did we become such long distance runners?
  • only humans have it
  • humans who don't know how to track animals can't run them down
  • idea being that the religions of modern societies are quite different than the religions we see in hunter gatherers and small scale societies
  • Most recently I've been also thinking about the evolution of societal complexity.
  • when societies begin to get big and complex
  • large-scale cooperation
  • What are the causal processes that bring these things about?
  • There's an interaction between genes and culture. First you have to get the culturally transmitted knowledge about animal behavior and tracking and spoor knowledge and the ability to identify individuals, which is something you need to practice, and only after that can you begin to take advantage of long distance running techniques
  • I've worked in a couple of different areas on this, and one is religion.
  • there was an intense period that continues today of intergroup competition, which favors groups who have social norms and institutions that can more effectively expand the group while maintaining internal harmony
  • they've been shaped in ways that galvanize cooperation in larger groups
  • In small-scale hunter-gatherer religions, the gods are typically whimsical. They're amoral.
  • but as we begin to move to the religions in more complex societies, we find that the gods are increasingly moralizing.
  • if you remind believers of their god, believers cheat less, and they're more pro social or fair in exchange tasks,
  • the kinds of interactions people are more pro social in are the ones with anonymous others, or strangers. These are the kinds of things you need to make a market run, to have a successful division of labor
  • ritual plays a role in this
  • rituals seem to be sets of practices engineered by cultural evolution to be effective at transmitting belief and transmitting faith
  • elevate the degree of belief in the high-moralizing gods
  • high-moralizing gods will often require rituals of this kind
  • Speaking in unison, large congregations saying the same thing, this all taps our capacity for conformist transmission;
  • People also engage in what we call credibility-enhancing displays [during rituals]. These are costly things. It might be an animal sacrifice or the giving of a large sum of money or some kind of painful initiation rite
  • We think religions are just one element, one way in which culture has figured out ways to expand the sphere of cooperation and allow markets to form and people to exchange and to maintain the substantial division of labor.
  • There's a lot of risk in developing specialization because you have to be confident that there's a market there that you can engage with. Whereas if you're a generalist and you do a little bit of farming, a little bit of manufacturing, then you're much less reliant on the market. Markets require a great deal of trust
  • In the intellectual tradition that I'm building on, culture is information stored in people's heads that gets there by some kind of social learning
  • We tend to think of cultural transmission, or at least many people think of cultural transmission as relying on language
  • it's quite clear that there is a ton of cultural transmission that is just strictly by observational learning.
  • what we don't see amongst other animals is cumulative cultural evolution.
  • you can learn one thing from one generation, and that begins to accumulate in subsequent generations.
  • One possible exception to that is bird song.
  • One of the interesting lines of research that's come out of this recognition is the importance of population size and the interconnectedness for technology.
  • looking at a case study in Tasmania.
  • You start out with two genetically well-intermixed peoples. Tasmania's actually connected to mainland Australia so it's just a peninsula. Then about 10,000 years ago, the environment changes, it gets warmer and the Bass Strait floods, so this cuts off Tasmania from the rest of Australia, and it's at that point that they begin to have this technological downturn
  • You can show that this is the kind of thing you'd expect if societies are like brains in the sense that they store information as a group and that when someone learns, they're learning from the most successful member
  • study by Rob Boyd and Michelle Kline
  • larger islands had much bigger and more complex fishing technologies, and you can even show an effect of contact. Some of the islands were in more or less contact with each other,
  • more in contact, you have fancier tools, and that seems to hold up.
  • rates of innovation should continue to increase, especially with the emergence of communication technologies
  • As an individual inventor or company, you're best off if everybody else shares their ideas but you don't share your ideas because then you get to keep your good ideas, and nobody else gets exposed to them, and you get to use their good ideas, so you get to do more recombination.
  • An important thing to remember is that there's always an incentive to hide your information.
  • Embedded in this whole information-sharing thing is a constant cooperative dilemma in which individuals have to be willing to share for the good of the group.
  • a norm of information sharing is a really good norm to have
  • I've done a lot of work on marriage systems with the evolution of monogamy.
  • Eighty-five percent of human societies have allowed men to have more than one wife
  • pushes us towards polygyny
  • But in the modern world, of course, monogamy is normative, and people who have too many wives are thought poorly of by the larger society. The question is, how did this ever get in place?
  • European Marriage Pattern,
  • Athens legislates the first rules about monogamous marriage
  • people are ready to moralize it,
  • it does seem to have societal level benefits. It reduces male-male competition. We think there's evidence to say it reduces crime, reduces substance abuse, and it also engages males in ways that cause them to discount the future less and engage in productive activities rather than taking a lot of risks
  • If I talk about normative monogamy as being successful, I mean that it spread,
  • especially if you have a society with widely varying amounts of wealth, especially among males. Then you're going to have a situation that would normally promote high levels of polygyny
  • to get into the mating and marriage market you would have to have a high level of wealth if we were to let nature take its course
  • Part of my program of research is to convince people that they should stop distinguishing cultural and biological evolution as separate in that way. We want to think of it all as biological evolution. 
  • Culture is part of our biology.
  • We now have the neuroscience to say that culture's in our brain, so if you compare people from different societies, they have different brains.
  • Cognition and our ability to think are all interwoven,
  • A good example of this is the placebos. Placebos are something that depend on your cultural beliefs. If you believe that something will work, then when you take it, like you take an aspirin or you take a placebo for an aspirin, it initiates the same pathways as the chemically active substance. Placebos are chemically inert but biologically active, and it's completely dependent on your cultural beliefs.
  • One of the large research projects that I run in an effort to understand human sociality is called The Root of Human Sociality Project.
  • at the time to something called the Ultimatum Game, and the Ultimatum Game seemed to provide evidence that humans were innately inclined to punish unfairness.
  • behavioral economists find that students give about half, sometimes a little bit less than half, and people are inclined to reject offers below about 30 percent
  • The older you get, even if you have more wealth and more income, you're especially inclined to only offer half, and you'll reject offers below 40 percent.
  • I was thinking that the Machiguenga would be a good test of this
  • I did it in 1995 and 1996 there, and what I found amongst the Machiguenga was that they were completely unwilling to reject, and they thought it was silly. Why would anyone ever reject?
  • they made low offers, the modal offer was 15 percent instead of 50, and the mean comes out to be about 25 percent.
  • over the next two summers these field anthropologists went to the field and conducted the ultimatum game as well as a few other games
  • we found is that societies vary dramatically, from societies that would never reject, to societies that would even reject offers above 50 percent, and we found that mean offers ranged across societies from about 25 percent to even over 50 percent. We had some of what we called hyper fair societies. The highest was 57 percent in Lamalera, Indonesia.
  • able to explain a lot of the variation in these offers with two variables. One was the degree of market integration.
  • there seemed to be other institutions, institutions of cooperative hunting seemed to influence offers.
  • measured market integration much more carefully
  • subsequent project
  • large number of other variables, including wealth, income, education, community size, and also religion.
  • did the Ultimatum Game along with two other experiments. The two other experiments were the Dictator Game (the Dictator Game is like the Ultimatum Game except the second player doesn't have the option to reject) and the Third Party Punishment Game. [A minimal payoff sketch of these three games follows this item's annotations.]
  • Third Party Punishment Game, there are three players and the first two players play a Dictator Game.
  • This gives us two different measures of willingness to punish strangers
  • one is rejection in the Ultimatum Game
  • three measures of fairness
  • size of the community predicts willingness to punish
  • suggesting that if you have small communities, you don't need punishment.
  • It could be some kind of reputational mechanism
  • There's a number of different ways to create norm systems that operate like that.
  • In a big society punishment can be most effective because reputational mechanisms can be weak. If you're in a big society and you encounter somebody, you probably don't have friends in common through which you could pass reputational information for which punishment could be generated. You might want to punish them right on the spot or someone who observes the interaction might want to punish them right on the spot or call the authorities or whatever, which is also costly.
  • This creates a puzzle because typically people think of small-scale kinds of societies, where you study hunter-gatherers and horticulturalists scattered across the globe (ranging from New Guinea to Siberia to Africa) as being very pro social and cooperative.
  • but the thing is those are based on local norms for cooperation with kin and local interactions in certain kinds of circumstances
  • these norms don't extend beyond food sharing. They certainly don't extend to ephemeral interactions or strangers
  • To make a large-scale society run you have to shift from investing in your local kin groups and your enduring relationships to being willing to pay to be fair to a stranger.
  • if you're going to be fair to a stranger, then you're taking money away from your family.
  • A commitment to something like anti-nepotism norms is something that runs against our evolutionary inclinations and our inclinations to help kin
  • In this sense, the norms of modern societies that make modern societies run now are at odds with at least some of our evolved instincts.
  • Lately we've been focused on the effects of religion
  • adherence to a world religion matters
  • People from world religions were willing to give more to the other person in the experiment, the anonymous stranger
  • Part of this is your willingness to acquire a norm of impartial rules; that we have a set of rules that governs this system.
  • political scientists call it the rule of law
  • those rules apply independently of the identities
  • If you want the rule of law to spread or to be maintained, you need conditions in which you're managing risk.
  •  
    [JOSEPH HENRICH:] The main questions I've been asking myself over the last couple years are broadly about how culture drove human evolution. Think back to when humans first got the capacity for cumulative cultural evolution, and by this I mean the ability for ideas to accumulate over generations, to get an increasingly complex tool starting from something simple. One generation adds a few things to it, the next generation adds a few more things, and the next generation, until it's so complex that no one in the first generation could have invented it.
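The Ultimatum, Dictator, and Third Party Punishment games referenced in the annotations above reduce to simple payoff rules. Below is a minimal sketch of that payoff logic in Python; the function names, the stake of 100, and the punishment cost and fine values are illustrative assumptions, not figures from the interview.

```python
# Minimal sketch of the payoff structure of the three experiments described
# above. The stake and the punishment cost/fine are illustrative, not values
# reported in the interview.

STAKE = 100  # hypothetical stake handed to player 1

def ultimatum_game(offer, responder_rejects):
    """Player 1 proposes a split of the stake; if player 2 rejects, both get nothing."""
    if responder_rejects:
        return 0, 0
    return STAKE - offer, offer

def dictator_game(offer):
    """Like the ultimatum game, except player 2 has no option to reject."""
    return STAKE - offer, offer

def third_party_punishment(offer, punisher_pays, punishment_cost=10, fine=30):
    """Players 1 and 2 play a dictator game; player 3 can pay a cost to fine player 1."""
    p1, p2 = dictator_game(offer)
    p3 = STAKE // 2  # player 3's own endowment (illustrative)
    if punisher_pays:
        p1 -= fine
        p3 -= punishment_cost
    return p1, p2, p3

# Example: a modal 15% offer (as reported for the Machiguenga) vs. rejection.
print(ultimatum_game(offer=15, responder_rejects=False))  # (85, 15)
print(ultimatum_game(offer=15, responder_rejects=True))   # (0, 0)
```

Rejecting in the Ultimatum Game and paying to fine in the Third Party Punishment Game are both costly to the punisher, which is why they serve as the two measures of willingness to punish strangers described above.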
Charles van der Haegen

Hartmut Rosa on Social Acceleration and Time - YouTube - 0 views

  •  
    Indicated by François le Palec. See the description: "Renowned social theorist Hartmut Rosa talks about social acceleration in modernity, and its consequences for religion, immortality, health, and the commercialization of time." See also Hartmut Rosa's essay "Alienation and Acceleration: Towards a Critical Theory of Late-Modern Temporality," published in Summertalk Vol. 3 by NSU Press. Another input, in my view, on MindAmp, Infotension, Mindfulness and Wisdom.
  •  
    Hi Friends co-learners, I believe this to be of interest. I was juggling with deep and quick thinking and studying... This, and many other things I am digging into, hopefully brings me some framework. What about you? Course over, Life takes over? Who wants to dig further? Cheers, Charlie the Grandfather
David McGavock

The Myth Of AI | Edge.org - 1 views

  • what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.
  • it adds a layer of religious thinking to what otherwise should be a technical field.
  • we can talk about pattern classification.
  • But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous
  • I'm going to go through a couple of layers of how the mythology does harm.
  • this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
  • If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
  • our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on
  • people often accept that
  • all this overpromising that AIs will be about to do this or that. It might be to become fully autonomous driving vehicles instead of only partially autonomous, or it might be being able to fully have a conversation as opposed to only having a useful part of a conversation to help you interface with the device.
  • other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good.
  • there's no way to tell where the border is between measurement and manipulation in these systems.
  • if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
  • it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened
  • What's not clear is where the boundary is.
  • If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know.
  • we don't know to what degree they're measurement versus manipulation.
  • If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore
  • not so much a rise of evil as a rise of nonsense.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth.
  • Cortana or a Siri
  • This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable
    • David McGavock
       
      Key relationship between automation of tasks, downsides, and expectation for AI
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous.
  • It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • I'm going to give you two scenarios.
  • let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
  • Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything.
  • some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it.
    • David McGavock
       
      Another key - focus on the actuator, not the agent that exploits it.
  • the part that causes the problem is the actuator. It's the interface to physicality
  • not so much whether it's a bunch of teenagers or terrorists behind it or some AI
  • The sad fact is that, as a society, we have to do something to not have little killer drones proliferate.
  • What we don't have to worry about is the AI algorithm running them, because that's speculative.
  • another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people.
  • There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science,
  • You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist.
  • To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.
  • The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity.
  • In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
    • David McGavock
       
      Technical priesthood.
  • If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
  • A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
  •  
    "The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture-that's been the most wealthy, prolific, and influential subculture in the technical world-that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."
Charles van der Haegen

Robert Thurman | Professor of Buddhist Studies, Columbia University; President, Tibet H... - 0 views

  •  
    "Robert Thurman is Professor of Indo-Tibetan Buddhist Studies in the Department of Religion at Columbia University, President of Tibet House US, a non-profit organization dedicated to the preservation and promotion of Tibetan civilization, and President of the American Institute of Buddhist Studies. The New York Times recently hailed him as "the leading American expert on Tibetan Buddhism." The first American to have been ordained a Tibetan Buddhist monk and a personal friend of the Dalai Lama for over 40 years, Professor Thurman is a passionate advocate and spokesperson for the truth regarding the current Tibet-China situation and the human rights violations suffered by the Tibetan people under Chinese rule. His commitment to finding a peaceful, win-win solution for Tibet and China inspired him to write his latest book, Why the Dalai Lama Matters: His Act of Truth as the Solution for China, Tibet and the World, published in June of 2008. Professor Thurman also translates important Tibetan and Sanskrit philosophical writings and lectures and writes on Buddhism, particularly Tibetan Buddhism; on Asian history, particularly the history of the monastic institution in the Asian civilization; and on critical philosophy, with a focus on the dialogue between the material and inner sciences of the world's religious traditions."
  •  
    I believe this is a great interview... and video set
David McGavock

Babies help unlock the origins of morality - CBS News - 0 views

  • It's a question people have asked for as long as there have been people: are human beings inherently good? Are we born with a sense of morality or do we arrive blank slates, waiting for the world to teach us right from wrong? Or could it be worse: do we start out nasty, selfish devils, who need our parents, teachers, and religions to whip us into shape?
  • The philosopher Rousseau considered babies "perfect idiots...Knowing nothing,"
  • for most of its history, her field agreed.
  • discovered seemingly simple ways to probe what's really going on in those adorable little heads.
  • Babies, even at three months, looked towards the nice character and looked hardly at all, much, much, much shorter times, towards the unhelpful character.
  •  
    CBS story with Lesley Stahl
David McGavock

Mission for week two: Evolution of cooperation questions (ACTION REQUESTED) | Social Me... - 0 views

  • Pavel's
  • a lot of smart people across the region also begin to identify themselves with one of the sides, inevitably getting involved in arguments they don't want to be part of, raising hostility towards each other. 
  • Take control over our pre-wired responses.
  • awareness (such as meditation)
  • help people learn how to identify and de-identify with various groups, by allowing them to experience the variety of social contexts.
  • Roland's
  • not only be critical thinking but systems thinking
  • help people become more self-dependent.
  • experiences are organized for children from the early age
  • raise the level of critical thinking
  • Education is liberating.
  • The notion of indirect reciprocity could be important here: doing things for those groups without expecting to get a return, but setting an example
  • reject the notion of tribes or of people being permanently and essentially bad and extremist, and to be welcoming and kind
  • Bodil's
  • I can work with other communities which are open, tolerant and welcoming.
  • Better distribution of resources.
  • reputation and trust
  • know how to build trust and create cooperation, we should know something about breaking bad patterns
  • knowledge about social dilemmas
  • “growth mindset”
  • David's
  • separating fiction from fact,
  • interaction in order to reveal the "true" characteristics of information
  • physical security, enough to eat, a place to sleep, freedom from threat.
  • John's
  • little can be done at the level of the individual, other than being aware that our appreciation of ideas, and our tendency to engage in counterproductive behavior may be due to forces other than the ideas themselves.
  • becoming aware of our own weaknesses with regard to absorbing new information
  • it is possible to gather individuals into a super organism that is less vulnerable to being victimized by false or misleading information,
  • we need access to information and skill in critical thinking
  • Hermano's
  • My political answer is internationalism: http://en.wikipedia.org/wiki/Internationalism_(politics).
  • High level of material, political, physical, psychological etc personal independence
  • My cultural answer is to displace the ubiquitous narrative of competition by this narrative of cooperation
  • traces of trustworthiness online,
  • tactical tool is the internet.
  • empowers the common man to act at multiple levels, assuming responsibility for all the nested groups to which he would belong.
  • Inger's
  •  Fighting discrimination
  • stereotypes
  • negative stereotypes
  • experience the feeling of discrimination in order to fight it.
  • Discrimination starts with stereotypes that turn into prejudice, and the individual becomes a member of a group that is dehumanised and stripped of human qualities.
  • Elena's
  • Meditation skills
  • life satisfaction
  • transferring an ultimate level of governance and common legislation to structures above nation states
  • Practices of integration of spirit-mind-body
  • value of own life and personal vow not to destroy self
  • Calisa's
  • only possible escape route is to get a glimpse of life on the outside, to see that there are different ways to live one's life, to understand that there are choices.
  • only through the glimpse can the child even begin to contemplate the notion of breaking the "pre-wiring"
  • glimpse does not guarantee escape
  •  shine your light brightly:
  • If there are children in your life, invest in them
  • Sahil's
  • Stay informed about the big, complex world-shaping issues
  • Use technology to express yourself beyond your home and workplace
  • same forces producing the 'dark' forms of social cooperation mentioned above - compliance, conformance, solidarity - are perhaps the same forces behind 'good' cooperation.
  • continually trying to re-imagine our 'imagined communities'
  • the more connected we are, the more we'll be forced to recognize others' interests as our own.
  • might include: cultural traits and norms based on morality (i.e. religion), integration of market economies, promoting greater free-flow of people/ideas, promoting denser urban centers, open access to information, monogamy??, anti-nepotism norms, cooperative higher institutions (with ability to manage laws/reputations/punishment).
  • Luis'
  •  We are “pre-wired” to cooperate within our tribe
  • impact of group identity
  • “manifold and profound”
  • make group identity salient
  • redefining the boundaries of the group to include more people is the best opportunity for change
  • Once you include everyone in the group, you find ways to encourage interactions among both sub-groups,
  • narcos manage to stay loyal and cooperate within their cartel when competing against other cartels with equally loyal members.
  • discourage cooperation inside the cartel groups
  • Assurance game, because one narco will only fight if the other fights, and will defect if the other defects
  • The key issue in the Assurance Game is whether we can trust each other. [A minimal payoff sketch of this game follows at the end of this item.]
  •  
    Answers from all co-learners
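The Assurance Game mentioned in Luis' answer has a simple structure: each player's best response is to match what the other does. Below is a minimal sketch in Python; the payoff numbers are illustrative assumptions, not values from the discussion.

```python
# Minimal sketch of an Assurance (Stag Hunt) game: each player's best move is
# to fight only if the other fights, and to defect if the other defects.
# The payoff numbers are illustrative, not taken from the source.

PAYOFFS = {
    # (my action, other's action): my payoff
    ("fight", "fight"):   4,  # mutual cooperation pays best
    ("fight", "defect"):  0,  # fighting alone is the worst outcome
    ("defect", "fight"):  3,
    ("defect", "defect"): 2,  # mutual defection is safe but inferior
}

def best_response(other_action):
    """Return the action that maximizes my payoff given the other's action."""
    return max(("fight", "defect"), key=lambda a: PAYOFFS[(a, other_action)])

print(best_response("fight"))   # fight
print(best_response("defect"))  # defect
```

The game has two equilibria, both fight or both defect, which is why trust between the players decides which outcome they end up in.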