
TOK Friends / Group items tagged: think


Javier E

What Happened Before the Big Bang? The New Philosophy of Cosmology - Ross Andersen - The Atlantic - 1 views

  • This question of accounting for what we call the "big bang state" -- the search for a physical explanation of it -- is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it.
  • One that's becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about
  • The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that's very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrödinger was upset about. Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn't do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we're coming out of that.
  • One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger
  • Newton realized there had to be some force holding the moon in its orbit around the earth, to keep it from wandering off, and he knew also there was a force that was pulling the apple down to the earth. And so what suddenly struck him was that those could be one and the same thing, the same force
  • That was a physical discovery, a physical discovery of momentous importance, as important as anything you could ever imagine because it knit together the terrestrial realm and the celestial realm into one common physical picture. It was also a philosophical discovery in the sense that philosophy is interested in the fundamental natures of things.
  • There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.
  • The basic philosophical question, going back to Plato, is "What is x?" What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it's a perfectly good question.
  • right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn't be that many, that many of them are related to one another. Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had --using his metaphor-- how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number
  • You have others saying that time is just an illusion, that there isn't really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don't change -- which is perfectly true -- and then you're always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn't change, because your representations of it don't.
  • physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say "I don't have the tools to answer a question like 'what is time?' - I have the tools to solve a differential equation." The asking of fundamental physical questions is just not part of the training of a physicist anymore.
  • The question remains as to how often, after life evolves, you'll have intelligent life capable of making technology. What people haven't seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It's not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that's not true. Obviously it doesn't matter that much if you're a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there's a high probability that evolution on another planet would lead to technological intelligence.
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes.”
  • How do you make a search engine that understands if you don’t know how you understand?
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works.”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • a project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.”
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward.
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence.
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. (A toy sketch of this calibration loop appears at the end of this list.)
  • The Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything.”
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
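The Candide annotations above describe IBM's statistical machine-translation recipe only in prose. As a concrete illustration, here is a minimal sketch in the spirit of IBM Model 1, the simplest of the word-alignment models developed in the Candide line of work: it learns word-translation probabilities from nothing but paired sentences, using expectation-maximization. The miniature corpus, variable names, and iteration count are invented for the example; this is a sketch of the idea, not IBM's code.

```python
from collections import defaultdict
from itertools import product

# Hypothetical miniature parallel corpus of (English, French) sentence pairs.
# Candide used 2.2 million such pairs from Canadian parliamentary proceedings.
corpus = [
    ("the house".split(), "la maison".split()),
    ("the book".split(), "le livre".split()),
    ("a book".split(), "un livre".split()),
]

english_vocab = {e for en, _ in corpus for e in en}
french_vocab = {f for _, fr in corpus for f in fr}

# t[(f, e)] approximates P(french word f | english word e); start uniform.
t = {(f, e): 1.0 / len(french_vocab) for f, e in product(french_vocab, english_vocab)}

for _ in range(10):  # a few EM iterations suffice on a toy corpus
    counts = defaultdict(float)  # expected number of times e aligns with f
    totals = defaultdict(float)  # expected number of alignments out of e
    for en, fr in corpus:
        for f in fr:
            norm = sum(t[(f, e)] for e in en)
            for e in en:
                frac = t[(f, e)] / norm  # E-step: fractional alignment weight
                counts[(f, e)] += frac
                totals[e] += frac
    for (f, e), c in counts.items():  # M-step: re-estimate the probabilities
        t[(f, e)] = c / totals[e]

# The model now prefers the right pairings, e.g. t[("livre", "book")] is high.
for (f, e), p in sorted(t.items(), key=lambda kv: -kv[1]):
    if p > 0.5:
        print(f"t({f} | {e}) = {p:.2f}")
```

Nothing in this loop understands either language; scaled up to millions of sentence pairs, plus reordering and language models, this is the brilliant-but-straightforward approach the annotations describe.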
sissij

The Normalization Trap - The New York Times - 0 views

  • What’s normal? Perhaps the answer seems obvious: What’s normal is what’s typical — what is average.
  • when people think about what is normal, they combine their sense of what is typical with their sense of what is ideal.
  • Normal, in other words, turns out to be a blend of statistical and moral notions.
  • If you are like most of our experimental participants, you will not give the same answer to the second question that you give to the first. Our participants said the “average” number was about four hours and the “normal” number was about three hours.
  • Again and again, our participants did not take the normal to be the same as the average. Instead, what people picked out as the “normal thing to do” or a “normal such-and-such” tended to be intermediate between what they thought was typical and what they thought was ideal. (A worked illustration of this blend appears after this list.)
  • These results point to something surprising about the way people’s minds work. You might imagine that people have two completely distinct modes of reasoning: On one hand, we can think about how things typically are; on the other, we can think about how things ought to be.
  • But our results suggest that people’s minds cannot be divided up so neatly in this way.
  • Likewise, people’s attitudes toward atypical behavior are frequently colored by this blended conception of normality.
  • However deeply ingrained this cognitive tendency may be, people are not condemned to think this way.
  • But this type of thinking, which takes some discipline, is no doubt more the exception than the rule. Most often, we do not stop to distinguish the typical from the acceptable, the infrequent from the deviant.
  •  
    We don't have a universal definition of "normal". Everybody brings their own experience and perspective to judging whether something is normal. For example, if we asked someone in the past whether women attending school is normal, they would definitely say it is not, because the social conventions of that time wouldn't allow women to get an education. But if we ask people in the twenty-first century, it is entirely ordinary to them. Everybody has their own database built from their own experience. We tend to think we represent the average of the world, so whatever is familiar to us must be familiar to everyone else. This is the exposure effect that can bias our definition of "normal". The reason many people think gays are weird is probably that we usually do not stop to distinguish the typical from the acceptable, the infrequent from the deviant. --Sissi (1/29/2017)
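The "blend" finding quoted above can be made concrete with a tiny worked example. The equal weighting and the ideal value below are assumptions chosen to reproduce the numbers reported in the annotations (average ~4 hours of TV, "normal" ~3), not figures from the paper itself.

```python
# Illustrative only: judged "normal" values fall between the statistical
# average and the ideal. The 0.5 weight and the 2-hour ideal are assumptions.
typical = 4.0  # hours of TV participants said the average person watches
ideal = 2.0    # hypothetical: hours participants might consider ideal
normal = 0.5 * typical + 0.5 * ideal
print(normal)  # 3.0, matching the ~3 hours participants called "normal"
```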
Javier E

Why Our Children Don't Think There Are Moral Facts - NYTimes.com - 1 views

  • I already knew that many college-aged students don’t believe in moral facts.
  • the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.
  • where is the view coming from?
  • the Common Core standards used by a majority of K-12 programs in the country require that students be able to “distinguish among fact, opinion, and reasoned judgment in a text.”
  • So what’s wrong with this distinction and how does it undermine the view that there are objective moral facts?
  • For example, many people once thought that the earth was flat. It’s a mistake to confuse truth (a feature of the world) with proof (a feature of our mental lives)
  • Furthermore, if proof is required for facts, then facts become person-relative. Something might be a fact for me if I can prove it but not a fact for you if you can’t. In that case, E=mc² is a fact for a physicist but not for me.
  • worse, students are taught that claims are either facts or opinions. They are given quizzes in which they must sort claims into one camp or the other but not both. But if a fact is something that is true and an opinion is something that is believed, then many claims will obviously be both
  • How does the dichotomy between fact and opinion relate to morality
  • Kids are asked to sort facts from opinions and, without fail, every value claim is labeled as an opinion.
  • Here’s a little test devised from questions available on fact vs. opinion worksheets online: are the following facts or opinions?
    — Copying homework assignments is wrong.
    — Cursing in school is inappropriate behavior.
    — All men are created equal.
    — It is worth sacrificing some personal liberties to protect our country from terrorism.
    — It is wrong for people under the age of 21 to drink alcohol.
    — Vegetarians are healthier than people who eat meat.
    — Drug dealers belong in prison.
  • The answer? In each case, the worksheets categorize these claims as opinions. The explanation on offer is that each of these claims is a value claim and value claims are not facts. This is repeated ad nauseam: any claim with good, right, wrong, etc. is not a fact.
  • In summary, our public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths.
  • It should not be a surprise that there is rampant cheating on college campuses: If we’ve taught our students for 12 years that there is no fact of the matter as to whether cheating is wrong, we can’t very well blame them for doing so later on.
  • If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?
  • the curriculum sets our children up for doublethink. They are told that there are no moral facts in one breath even as the next tells them how they ought to behave.
  • Our children deserve a consistent intellectual foundation. Facts are things that are true. Opinions are things we believe. Some of our beliefs are true. Others are not. Some of our beliefs are backed by evidence. Others are not.
  • Value claims are like any other claims: either true or false, evidenced or not.
  • The hard work lies not in recognizing that at least some moral claims are true but in carefully thinking through our evidence for which of the many competing moral claims is correct.
  • Moral truths are not the same as scientific truths or mathematical truths. Yet they may still be used as a guiding principle for our individual lives as well as our laws. But there is equal danger in giving moral judgments the designation of truth as there is in not doing so. Many people believe that abortion is murder on the same level as shooting someone with a gun. But many others do not. So is it true that abortion is murder? Moral principles can become generally accepted and then form the basis for our laws. But many long-accepted moral principles were later rejected as being faulty. "Separate but equal" is an example. Judging homosexual relationships as immoral is another example.
  • Whoa! That Einstein derived an equation is a fact. But the equation represents a theory that may have to be tweaked at some point in the future. It may be a fact that the equation foretold the violence of atomic explosions, but there are aspects of nature that elude the equation. Remember "the theory of everything?"
  • Here is a moral fact: this is a sermon masquerading as a philosophical debate on facts, opinions and truth. This professor of religion is asserting that the government, via Common Core, is teaching atheism via the opinion-vs-fact distinction. He is arguing, in a dishonest form, that public schools should be teaching moral facts. Of course, "moral facts" is code for the Ten Commandments.
  • As a fourth grade teacher, I try to teach students to read critically, including distinguishing between facts and opinions as they read (and have been doing this long before the Common Core arrived, by the way). It's not always easy for children to grasp the difference. I can only imagine the confusion that would ensue if I introduced a third category -- moral "facts" that can't be proven but are true nonetheless!
  • horrible acts occur not because of moral uncertainty, but because people are too sure that their views on morality are 100% true, and anyone who fails to recognize and submit to them is a heathen who deserves death. I can't think of any case where a society has suffered because people are too thoughtful and open-minded about different perspectives on moral truth. In any case, it's not an elementary school's job to teach "moral truths."
  • The characterization of moral anti-realism as some sort of fringe view in philosophy is misleading. Claims that can be true or false are, it seems, 'made true' by features of the world. It's not clear to many in philosophy (like me) just what features of the world could make our moral claims true. We are more likely to see people's value claims as making claims about, and enforcing conformity to, our own (contingent) social norms. This is not to hold, as Mr. McBrayer seems to think follows, that there are no reasons to endorse or criticize these social norms.
  • This is nonsense. Giving kids the tools to distinguish between fact and opinion is hard enough in an age when Republicans actively deny reality on Fox News every night. The last thing we need is to muddy their thinking with the concept of "moral facts."A fact is a belief that everyone _should_ agree upon because it is observable and testable. Morals are not agreed upon by all. Consider the hot button issue of abortion.
  • Truthfully, I'm not terribly concerned that third graders will end up taking these lessons in the definition of fact versus opinion to the extremes considered here, or take them as a license to cheat. That will come much later, when they figure out, as people always have, what they can get away with. But Prof. McBrayer, with his blithe expectation that all the grownups know that there are moral "facts"? He scares the heck out of me.
  • I've long chafed at the language of "fact" v. "opinion", which is grounded in a very particular, limited view of human cognition. In my own ethics courses, I work actively to undermine the distinction, focusing instead on considered judgment . . . or even more narrowly, on consideration itself. (See http://wp.me/p5Ag0i-6M )
  • The real waffle here is the very concept of "moral facts." Our statements of values, even very important ones are, obviously, not facts. Trying to dress them up as if they are facts, to me, argues for a pretty serious moral weakness on the part of those advancing the idea.
  • Our core values are not important because they are facts. They are important because we collectively hold them and cherish them. To lean on the false crutch of "moral facts" is to admit the weakness of your own moral convictions.
  • I would like to believe that there is a core of moral facts/values upon which all humanity can agree, but it would be tough to identify exactly what those are.
  • For the ancient philosophers, reality comprised the Good, the True, and the Beautiful (what we might now call ethics, science and art), seeing these as complementary and inseparable, though distinct, realms. With the ascendancy of science in our culture as the only valid measure of reality to the detriment of ethics and art (that is, if it is not observable and provable, it is not real), we have turned the good and the beautiful into mere "social constructs" that have no validity on their own. While I am sympathetic in many ways with Dr. McBrayer's objections, I think he falls into the trap of discounting the Good and the Beautiful as valid in and of themselves, and tries, instead, to find ways to give them validity through the True. I think his argument would have been stronger had he used the language of validity rather than the language of truth. Goodness, Truth and Beauty each have their own validity, though interdependent and inseparable. When we artificially extract one of these and give it primacy, we distort reality and alienate ourselves from it.
  • Professor McBrayer seems to miss the major point of the Common Core concern: can students distinguish between premises based on (reasonably construed) fact and premises based on emotion when evaluating conclusions? I would prefer that students learn to reason rather than be taught moral 'truth' that follows Professor McBrayer's logic.
  • Moral issues cannot scientifically be treated on the level that Prof. McBrayer is attempting to use in this column: true or false, fact or opinion or both. Instead, they should be treated as important characteristics of the systematic working of a society or of a group of people in general. One can compare the working of two groups of people: one in which e.g. cheating and lying is acceptable, and one in which they are not. One can use historical or model examples to show the consequences and the working of specific systems of morals. I think that this method - suitably adjusted - can be used even in second grade.
  • Relativism has nothing to do with liberalism. The second point is that I'm not sure it does all that much harm, because I have yet to encounter a student who thought that he or she had to withhold judgment on those who hold opposing political views!
sissij

Why Westerners and Easterners Really Do Think Differently | Big Think - 0 views

  • While the studies cover many different topics, the subject of an individualistic or holistic thinking style is noteworthy.
  • In one study, complex images were shown to test subjects from East Asia and North America. The scientists tracked the eye movements of the participants in order to gauge where their attention was focused. It was found that the Chinese participants spent more time looking at the background of the image, while the Americans tended to focus on the main object in the picture. Holistic and individualistic thinking manifested in one clear example.
  • Of course, these tendencies are generalizations.
  • One is that the staple food of a region may have something to do with it. This is excellently seen in China, where the northern half of the country grows wheat and the southern half grows rice. Rice growing is a labour-intensive activity that requires the coordination of several neighboring farms to do properly. Wheat farming, on the other hand, takes much less work and does not require coordinated irrigation systems.
  • Even today, more than 100 years after the colonization effort, the effects of living in a society that was so recently a frontier show up in individualistic and holistic thinking tests, with residents of Hokkaido demonstrating tendencies toward individualism to a larger extent than the rest of the Japanese population.
  •  
    I really like that the author states, "Of course, these tendencies are generalizations." This shows that the study is not meant to categorize people into two groups, east and west, but the tendency is still worth noticing. The trials presented in the article show different possible explanations for the difference. I think this cultural difference is similar to why Australia has very distinctive animals compared to the other continents. Since in ancient times Westerners and Easterners were isolated from each other, they took different approaches to developing their civilizations. However, I really like that the author emphasizes that this difference is not stereotyping; it is the result of population analysis and observation. --Sissi (2/6/2017)
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today.“Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism.”
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs. (A toy sketch of this path-to-syllable scheme appears at the end of this list.)
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Looking Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada continued.
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say.For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil:
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances.“There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar.“For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
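Wilkins's "Real Character," described a few items above, is in modern terms a trie: each branch of the concept taxonomy contributes a letter or syllable, and a word is the concatenation of the branch labels along the path from the root to a concept. Here is a minimal sketch; the taxonomy, the syllables, and the resulting word are invented for illustration and are not Wilkins's actual tables.

```python
# Toy Wilkins-style word builder: descend a concept taxonomy, and the word
# for a concept is the concatenation of the branch labels along the way.
# All labels and concepts below are invented for illustration.
taxonomy = {
    "zi": {              # hypothetical genus, e.g. "beasts"
        "t": {           # hypothetical difference within the genus
            "a": "dog",  # hypothetical species slots
            "e": "wolf",
        },
    },
}

def word_for(path):
    """Walk the taxonomy along `path`; return (word, concept at that leaf)."""
    node = taxonomy
    word = ""
    for label in path:
        node = node[label]  # KeyError here means the path is not in the tree
        word += label
    return word, node

print(word_for(("zi", "t", "a")))  # ('zita', 'dog')
```

A related compositional idea underlies Quijada's apq'uxasiu example above: a word is assembled step by step from individual atoms of meaning.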
kushnerha

Diversity Makes You Brighter - The New York Times - 0 views

  • Diversity improves the way people think. By disrupting conformity, racial and ethnic diversity prompts people to scrutinize facts, think more deeply and develop their own opinions. Our findings show that such diversity actually benefits everyone, minorities and majority alike.
  • When trading, participants could observe the behavior of their counterparts and decide what to make of it. Think of yourself in similar situations: Interacting with others can bring new ideas into view, but it can also cause you to adopt popular but wrong ones.
  • It depends how deeply you contemplate what you observe. So if you think that something is worth $100, but others are bidding $120 for it, you may defer to their judgment and up the ante (perhaps contributing to a price bubble) or you might dismiss them and stand your ground.
  • When participants were in diverse company, their answers were 58 percent more accurate. The prices they chose were much closer to the true values of the stocks. As they spent time interacting in diverse groups, their performance improved. In homogeneous groups, whether in the United States or in Asia, the opposite happened. When surrounded by others of the same ethnicity or race, participants were more likely to copy others, in the wrong direction. Mistakes spread as participants seemingly put undue trust in others’ answers, mindlessly imitating them. In the diverse groups, across ethnicities and locales, participants were more likely to distinguish between wrong and accurate answers. Diversity brought cognitive friction that enhanced deliberation.
  • For our study, we intentionally chose a situation that required analytical thinking, seemingly unaffected by ethnicity or race. We wanted to understand whether the benefits of diversity stem, as the common thinking has it, from some special perspectives or skills of minorities.
  • What we actually found is that these benefits can arise merely from the very presence of minorities.
  • before participants interacted, there were no statistically significant differences between participants in the homogeneous or diverse groups. Minority members did not bring some special knowledge.
  • When surrounded by people “like ourselves,” we are easily influenced, more likely to fall for wrong ideas. Diversity prompts better, critical thinking. It contributes to error detection. It keeps us from drifting toward miscalculation.
  • Our findings suggest that racial and ethnic diversity matter for learning, the core purpose of a university. Increasing diversity is not only a way to let the historically disadvantaged into college, but also to promote sharper thinking for everyone.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
anonymous

Opinion | I Don't Want Another Family to Lose a Child the Way We Did - The New York Times - 0 views

  • I Don’t Want Another Family to Lose a Child the Way We Did
  • The thought of suicide is terrifying, but we have to make talking about it a part of everyday life.
  • I always felt so blessed watching my boy-girl twins; even as teenagers they would walk arm in arm down the street, chatting and laughing together.
  • ...33 more annotations...
  • But that blessed feeling evaporated in June of 2019, when I lost my daughter, Frankie, to suicide, three weeks before her high school graduation
  • Ever since that day, I have thought of little else except how I could help the next struggling teenager, the next Frankie.
  • Several days after her passing, we opened our home up to our community, including Frankie’s very large group of teenage friends
  • “What strength Frankie had. It must have taken enormous energy for her to do what she did each day.”
  • That was Frankie. She had the strength to engage in school and in theater, despite her anxiety and depression. She had an ability to connect — emotionally, profoundly — with others, even when she was struggling herself
  • “empathy personified, with quite the fabulous earring collection.”
  • Whether that strength came from her home or somewhere else, or both, Frankie just had a way of drawing out warmth wherever she went.
  • Just as my parents couldn’t predict in the 1980s what seatbelt safety would look like now, I am not sure what suicide prevention should look like in the future.
  • Suicidal thinking, whether it is the result of mental illness, stress, trauma or loss, is actually far more common and difficult to see than many of us realize
  • A June 2020 Centers for Disease Control and Prevention survey found that one in four 18- to 24-year-olds reported that they had seriously thought about taking their lives in the past 30 days; prepandemic estimates found that just under one in five high schoolers had seriously considered suicide, and just under one in 10 had made at least one suicide attempt during the previous year.
  • Despite 50 years of research, predicting death by suicide is still nearly impossible
  • Like others who have lost a child to suicide, I have spent countless hours going over relentless “what ifs.”
  • Maybe what we need are seatbelts for suicide.
  • “Click it or Ticket” was born in part out of a concern in the 1980s about teenagers dying in car accidents. Just as with suicides today, adults couldn’t predict who would get into a car accident, and one of the best solutions we had — seatbelts — was used routinely, in some estimates, by only 15 percent of the population. Indeed, as children, my siblings and I used to make a game of rolling around in the back of our car, seatbelts ignored.
  • Three decades later, our world is unlike anything I could have imagined as a child. Putting on a seatbelt is the first lesson of driver’s education; cars get inspected annually for working seatbelts; car companies embed those annoying beeping sounds to remind you to buckle your seatbelt
  • But like many who struggle with suicidal thinking, she kept her own pain camouflaged for a long time, perhaps for too long.
  • Most of us (estimates range as high as 91 percent) now wear a seatbelt.
  • But I imagine a world in which every health worker, school professional, employer and religious leader can recognize the signs of suicidal thinking and know how to ask about it, respond to it and offer resources to someone who is struggling
  • When I told Frankie’s orthodontist about her suicide, his response surprised me: “We really don’t come across that in our practice.” Even though orthodontists don’t ask about it, they see children during their early teenage years, when suicidal thinking often begins to emerge. Can you imagine a world in which signs for the prevention hotline and text line are posted for kids to see as they get their braces adjusted?
  • What if the annual teenage pediatric checkup involved a discussion of one-at-a-time pill packaging and boxes to lock up lethal medications, the way there is a discussion of baby-proofing homes when children start to crawl? What if pediatricians handed each adolescent a card with the prevention hotline on it (or better yet, if companies preprogrammed that number into cellphones) and the pediatrician talked through what happens when a teenager calls? What if doctors coached parents on how to ask their teenager, “Are you thinking about suicide?”
  • What if we required and funded every school to put in place one of the existing programs that train teachers and other school professionals to be a resource for struggling students?
  • I recognize that despite progress in identifying effective programs to combat suicidal thinking, their success rates and simplicity do not compare with what we see with seatbelts. But that doesn’t mean we shouldn’t do more.
  • Part of doing more also includes making the world more just and caring. To give one example, state-level same-sex-marriage policies that were in place before the Supreme Court legalized same-sex marriage nationally have been linked to reductions in suicide attempts among adolescents, especially among sexual minorities.
  • Just as safer highways and car models make seatbelts more effective, asking about and responding to suicidal thinking is only one part of a solution that also includes attention to societal injustices.
  • I understand, of course, that asking about suicidal thinking is scary. But if it is scary for you to ask about it, it is even scarier for the teenager who is thinking about it.
  • I will never forget sitting with Frankie in the waiting room in the pediatric psychiatric wing on the night I brought her to the inpatient unit, three months before she took her life
  • “You know, I am so glad you finally know.” I could hear the relief in her voice. I just nodded, understandingly, but it broke my heart that she held on to such a painful secret for so long.
  • I find myself inspired by Frankie’s teenage friends, who cared deeply for her and now support one another after her passing.
  • On good days, she would sit on the worn couch in that office, snuggle in a pile of teenagers and discuss plays, schoolwork and their lives.
  • And in that corner space, she would text a friend to help her get to class or, after she had opened up about her struggles, encourage others to open up as well.
  • The fall after Frankie left us, some students decided to remake that hidden corner, dotting the walls with colored Post-it notes. Scrawled on a pink Post-it were the words “you matter”; a yellow one read “it gets better”; an orange one shared a cellphone number to call for help. Tiny Post-it squares had transformed the corner into a space to comfort, heal and support the next struggling teenager.
  • I don’t know if a seatbelt approach would have saved Frankie. And I understand that all the details of such an approach aren’t fully worked out here. But I don’t want us to lose any more children because we weren’t brave enough to take on something that scares us, something we don’t fully understand, something that is much more prevalent than many of us realize.
  • If 17- and 18-year-olds who’ve lost a friend have the strength to imagine a world dotted with healing, then the least we can do as adults is design and build the structure to support them
sanderk

Council Post: The Seven Key Steps Of Critical Thinking - 0 views

  • For all the effort we put into growing our workforce, we often forget the one person who is in constant need of development: ourselves. In particular, we neglect the soft skills that are vital to becoming the best professional possible — one of them being critical thinking.
  • In short, the ability to think critically is the art of analyzing and evaluating data to reach a practical understanding of it, then determining what to believe and how to act.
  • There are times when an answer just needs to be given, and given right now. But that doesn't mean you should make a decision just to make one. Sometimes, quick decisions can fall flat
  • ...5 more annotations...
  • “Don’t just do something, stand there.” Sometimes, taking a minute to be systematic and follow an organized approach makes all the difference. This is where critical thinking meets problem solving. Define the problem, come up with a list of solutions, then select the best answer, implement it, create an evaluation tool and fine-tune as needed.
  • Evaluate information factually. Recognizing predispositions of those involved is a challenging task at times. It is your responsibility to weigh the information from all sources and come to your own conclusions.
  • Be open-minded and consider all points of view. This is a good time to pull the team into finding the best solution. This point will allow you to develop the critical-thinking skills of those you lead.
  • Communicate your findings and results. This is a crucial yet often overlooked component. Failing to do so can cause much confusion in the organization.
  • Developing your critical-thinking skills is fundamental to your leadership success.
marvelgr

THE BASES OF THE MIND:THE RELATIONSHIP OF LANGUAGE AND THOUGHT | by Koç Unive... - 0 views

  • We can talk about three different interactions when we investigate the complex relationships between language and thinking. First, the existence of language as a cognitive process affects the system of thinking. Second, thinking comes before language, and the learning of a language interacts with the conceptual process that is formed before language use. Third, each language spoken may affect the system of thinking. Here we will discuss these three interactions under these subsections: “thinking without language,” “thinking before language,” and “thinking with language.”
  • Babies can categorize objects and actions, understand the cause-and-effect relationship between events, and see the goals in a movement. Recent studies on action representation and spatial concepts have shown that babies’ universal and language-general action representation productively changes with the learning of the mother tongue. For example, languages use prepositions to express the relationship between objects, e.g., in, on, under. However, languages also vary in how they use these relations. One of the most significant studies suggests that babies can differentiate between concepts expressed with prepositions such as containment (in) and support (on). The Korean language specifies the nature of these containment and support relationships using the tightness of the relationship between objects: tight or loose. For example, a pencil in a pencil-size box represents a tight relationship, while a pencil in a big basket represents a loose relationship.
  • In the late 1800s, anthropologist Franz Boas laid the foundations of cultural relativity. According to this point of view, individuals see and perceive the world within the boundaries of their cultures. The role of anthropology is to investigate how people are conditioned by their culture and how they interact with the world in different ways. To understand such mechanisms, it suggests, the implications embedded in culture and language should be studied. The reflection of this view in the relationship between language and thought is the linguistic determinism hypothesis advanced by Edward Sapir and Benjamin Lee Whorf. This hypothesis suggests that thought emerges only through the influence of language, and that concepts believed to exist even in infancy fade away due to the language learned.
  • ...1 more annotation...
  • In conclusion, there is a nested relationship between language and thought. In the interaction processes mentioned above, the role of language changes. Even though the limits of our language are different from the limits of our thinking, it is inevitable that people prioritize concepts in their languages. This, however, does not mean that they cannot comprehend or think about concepts that do not exist in their language.
sissij

Lessons from Playing Golf with Trump - The New Yorker - 1 views

  • “I will buy one only if it has the potential to be the best. I’m not interested in having a nine.”
  • A friend asked me later whether Trump wasn’t “in on the joke” of his public persona, and I said that, as far as I could tell, the Trump we were used to seeing on television was the honest-to-god authentic Trump: a ten-year-old boy who, for unknown reasons, had been given a real airplane and a billion dollars. In other words, a fun guy to hang around with.
  • He was upset that I hadn’t written that he’d shot 71—a very good golf score, one stroke under par.
  • ...3 more annotations...
  • He complained to me that golf publications never rank his courses high enough, because the people who do the rating hold a grudge against him, but he also said that he never allows raters to play his courses, because they would just get in the way of the members.
  • He wanted the number, and the fact that I hadn’t published the number proved that I was just like all the other biased reporters, who, because we’re all part of the anti-Trump media conspiracy, never give him as much credit as he deserves for being awesome.
  • In Trump’s own mind, I suspect, he really did shoot 71 that day, if not (by now) 69. Trump’s world is a parallel universe in which truth takes many forms, none of them necessarily based on reality.
  •  
    I think this article has a very interesting interpretation of Trump's personality and behavior. Something we think is absurd might be totally normal from another person's perspective. For example, in this article, the author states that Trump values social status and potential profit more than the real person or the real thing. It shows how differently people see this world, and how this affects the moves and decisions they make. I think the overwhelming criticism of Trump is partly because we don't understand him and don't even try to understand and accept him. He is an anomaly. Also, I think everybody observes the universe through their own unique senses and perception, so we cannot tell whose reality is truer than anyone else's. Condemning others' reality won't bring us a good negotiation. --Sissi (1/14/2017)
sissij

Marchers Pour Into Washington to Pour Out Their Hearts - The New York Times - 0 views

  • No one seemed to mind. By the time the Women’s March on Washington officially began at 10 a.m., the protesters had arrived in a force so large that they surprised even themselves, spilling over the National Mall and the streets of the capital a day after Donald J. Trump was sworn in as president.
  • When he gave everybody a phone number to call Congress, the crowd repeated it back loudly, many smiling and nodding.
  • People were playful with their signs. There was “Cyborgs for Civility,” and “Women Geologists Rock.” Another said, “1933 Called. Don’t Answer.” A white sign in black marker read: “I know signs. I make the best signs. They’re terrific. Everyone agrees.”
  • ...2 more annotations...
  • But there was also seriousness. Mary Robinson, 60, from a rural part of Northern Arizona, said she felt energized by the march, but the work ahead seemed hard.
  • “They are in rural America where there’s no jobs, no technology, and many people live on government subsidies. It’s not that they are ignorant or stupid, they are just uninformed.”
  •  
    This Saturday I went to Philadelphia, and there were a lot of police cars blocking the roads; I wondered why they were doing that. Now I know that there was a protest going on. The presidency of Trump is surely not celebrated by many people in this region. And after reading this article, I think there is a new attitude in protest. I really like the scene described in this article of people actually being playful with their signs. I think this new attitude is a new spice in protest and opens up new possibilities for protest. If we are always serious about everything, the world seems so stressful. How about taking a step back, reducing the tension and looking at the issue more playfully? I think the best protest is not shouting slogans angrily. I think both sides should leave some space and respect for each other. --Sissi (1/24/2017)
sissij

Take a Bad Year. And Make It Better. - The New York Times - 0 views

  •  
    Why does everybody think 2016 was a bad year? I think it is just because of confirmation bias. The bad things listed in this article do not happen only in 2016. There are deaths and war every year; the outcry that 2016 was a bad year appeared particularly after the election of Trump, so I think this presidential election was a catalyst for many people to reflect back on the bad things that happened in 2016. Also, I think saying goodbye to a year does not mean a new start for us, because time is always continuous. Bad things won't disappear as a year goes by. --Sissi (12/31/2016)
sissij

Apeirophobia is Apparently the Fear of Living Forever | Big Think - 0 views

  • You’re a child lying in bed at night, and you suddenly find yourself trying to wrap your mind around eternity — the room begins spinning and keeps going until you can get the idea out of your head.
  • If you believe in an afterlife like heaven, it’s about being there forever.
  • George Mason University’s Martin Weiner told The Atlantic that it may be because the frontal lobe responsible for long-term thinking is one of the last brain regions to develop as we mature.
  • ...2 more annotations...
  • For many, the idea of not being, and forever, is absolutely terrifying.
  • one realizes that there is no way to project ahead to ‘forever’ — and that experience is, inherently, anxiety-provoking. As such, the anxiety that these folks are feeling may not be much different than the fear of growing up, getting old, or death.”
  •  
    I found this article very interesting, as it says that there are actually people who are afraid of living forever. As we see in history, almost every king was in search of ways to live forever. They were afraid of death because they didn't know what they would become after death. It's like playing a game: when you work your character up to level one hundred, you really cherish that character and don't want it to die. Eternity, whether being alive or dead forever, is fearful when people think about it. The author relates the reason to the logic of evolution and the development of the frontal lobe in our brain. We are afraid of eternity because we are afraid of things that we cannot see; we cannot think of anything that's beyond eternity. It is a really terrifying feeling when you think deeply about death — it feels like a black hole that sucks up your strength and is hard to get out of. --Sissi (2/24/2017)
julia rhodes

How people learn - The Week - 0 views

  • In a traditional classroom, the teacher stands at the front of the class explaining what is clear in their mind to a group of passive students. Yet this pedagogical strategy doesn't positively impact retention of information from lecture, improve understanding of basic concepts, or affect beliefs (that is, does new information change your belief about how something works).
  • Given that lectures were devised as a means of transferring knowledge from one to many, it seems obvious that we would ensure that people retain the information they are consuming.
  • The research tells us that the human brain can hold a maximum of about seven different items in its short-term working memory and can process no more than about four ideas at once. Exactly what an "item" means when translated from the cognitive science lab into the classroom is a bit fuzzy.
  • ...13 more annotations...
  • The results were similarly disturbing when students were tested to determine understanding of basic concepts. More instruction wasn't helping students advance from novice to expert. In fact, the data indicated the opposite: students had more novice-like beliefs after they completed a course than they had when they started.
  • But in addition, experts have a mental organizational structure that facilitates the retrieval and effective application of their knowledge.
  • experts have an ability to monitor their own thinking ("metacognition"), at least in their discipline of expertise. They are able to ask themselves, "Do I understand this? How can I check my understanding?"
  • But that is not what cognitive science tells us. It tells us instead that students need to develop these different ways of thinking by means of extended, focused mental effort.
  • new ways of thinking are always built on the prior thinking of the individual, so if the educational process is to be successful, it is essential to take that prior thinking into account.
  • Everything that constitutes "understanding" science and "thinking scientifically" resides in the long-term memory, which is developed via the construction and assembly of component proteins.
  • What is elementary, worldly wisdom? Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form.
  • "So it makes perfect sense," Wieman writes, "that they are not learning to think like experts, even though they are passing science courses by memorizing facts and problem-solving recipes."
  • Anything one can do to reduce cognitive load improves learning.
  • A second way teachers can improve instruction is by recognizing the importance of student beliefs about science
  • My third example of how teaching and learning can be improved is by implementing the principle that effective teaching consists of engaging students, monitoring their thinking, and providing feedback.
  • I assign students to groups the first day of class (typically three to four students in adjacent seats) and design each lecture around a series of seven to 10 clicker questions that cover the key learning goals for that day.
  • The process of critiquing each other's ideas in order to arrive at a consensus also enormously improves both their ability to carry on scientific discourse and to test their own understanding. [Change]
Javier E

Skeptics read Jordan Peterson's '12 Rules for Life' - The Washington Post - 0 views

  • I do think that women tend to spend more time thinking about their lives, planning for the future, sort of sorting themselves out — and know how to do so. So they don’t need Peterson’s basic life advice as much as men do.
  • Emba: These days, young men seem far more lost than young women. And we’re seeing the results of that all over the place — men disappearing into video games, or pornography, or dropping out of the workforce, or succumbing to depression and despair. So maybe they need this more.
  • Rubin made it sound as though Peterson held some *hidden knowledge,* but there’s no secret to “stand up straight and make sure the people you keep around you pull you up rather than drag you down.”
  • ...12 more annotations...
  • I actually think Peterson was right to observe that it’s remarkable how many students at the universities where they tested some of his theories hadn’t been told these things. Though I thought it was interesting that he seemed to think that teaching this kind of thing was a job for the educational system rather than the parents
  • I think perhaps we’re both lucky in that though our backgrounds are different, we both come from relatively stable families with parents and surrounding adults who inculcated these “rules” intrinsically, from our youth on. So the Peterson gospel doesn’t feel new to us.
  • The fact that there are whole swaths of our generation who are advantaged by already knowing this information about how to make your life better, and another whole swath that is being left behind, character- and life-formation-wise, because they don't. And they are left to rely on Jordan Peterson.
  • He is convinced of the importance and significance of these stories, these words — and religion, and its significance. At one point he stated that he didn’t have a materialist view of the world, but actually a “deeply religious” one.
  • One thing that’s definitely central to the book is telling people (particularly men) that life is hard, and you need to get it together.
  • largely the message you come away with is that if you don’t like the way things are going, it’s your fault and your fault alone. And that’s an easier message to believe when you’re a white male and systemic obstacles aren’t really a thing you run into.
  • Jordan Peterson professes not to be religious, but he is. His book is built on what he describes as archetypal myths from different cultures, but leans *very* heavily on Judeo-Christian ones especially — Cain and Abel and the stories of Jesus’s life, from his temptation in the desert to his death and resurrection.
  • This tendency was even more pronounced in his live lecture. Basically every line, every piece of advice he gave, was supported by a Bible verse. At one point, he quoted the gospel of Matthew: “Knock and the door will be opened to you” — and said, “This is how life works, ACTUALLY” — basically glaring at the crowd and daring them to disagree.
  • Just in the week or so I was reading “12 Rules,” I had several men my age come up to me on buses or in coffee shops and strike up conversations with me about Peterson — the one thing they all talked about right away was how the book had a lot of “hard truths” that they needed to hear
  • He’s not keeping great company. But I think his personal work and statements are generally benign, in many cases actually helpful, in that they urge young people to seek out a better-structured and more meaningful life.
  • I agree it’s inaccurate to label him as alt-right, though that is a low bar to clear. Frankly I see him more as a mainstream conservative. I think part of the reason people get this wrong is that there’s a big gap between what boosted his fame and what the central thrust of his book is
  • I think “traditionalist” is probably the best label for him — both because his views are traditionalist and because his worldview is so dependent on traditions (or at least what he sees as traditions.)
sissij

YouTube Filtering Draws Ire of Gay and Transgender Creators - The New York Times - 0 views

  • YouTube said on Sunday that it was investigating the simmering complaints by some users that its family-friendly “restricted mode” wrongly filters out some lesbian, gay, bisexual and transgender videos.
  • In a statement, YouTube described restricted mode as “an optional feature used by a very small subset of users who want to have a more limited YouTube experience.”
  • In a statement, YouTube said that many videos featuring lesbian, gay, bisexual and transgender content were unaffected by the filter, an optional parental-control setting, and that it only targeted those that discussed sensitive topics such as politics, health and sexuality.
  • ...2 more annotations...
  • the system is “not 100 percent accurate.”
  • Over the weekend, many video creators and users complained on Twitter, recycling the hashtag #YouTubeIsOverParty, which was trending worldwide by Sunday night.
  •  
    Restriction in social media has always been a controversial issue. I think this problem of the system filtering out the videos of gay and transgender creators shouldn't be entirely blamed on YouTube. I think the system YouTube uses to filter videos is not based on information about the creators' sexuality; it might take in the comments and survey results from viewers. I think this reflects that the mainstream community and mindset still reject and repel transgender and gay people. People don't want to touch sensitive topics. --Sissi (3/20/2017)
sissij

Human-Like Thinking Is up to 1.8 Million Years-Old, Study Finds | Big Think - 0 views

  • I have a confession to make: I think I’m pretty smart.
  • And is it something we can measure?
  • Shelby S. Putt conducted the study. She’s a postdoctoral researcher with The Stone Age Institute at Indiana University.
  • ...3 more annotations...
  • Neuroarchaeology is a new field which seeks to understand how ancient hominids and humans evolved cognitively.
  • brain imaging technology
  • The more intricate Acheulian tools, required a lot more of the brain’s real estate. Fashioning one activates the superior temporal cortex, ventral precentral gyrus, and supplementary motor areas.
  •  
    We tend to think we are the special ones, the most intelligent form of life. However, the cognition we are so proud of actually existed millions of years ago, according to neuroarchaeology. I think it is rather ironic. --Sissi (5/24/2017)
sissij

Are You Lucky? How You Respond Affects Your Fate. | Big Think - 0 views

  • Humans are superstitious creatures. Our rituals are vast. We tie one shoelace before the other; if we tie one, we have to tie the other even if it’s not loose.
  • Luck is the ever-present elephant in the room, dwarfed in our vocabulary by destiny and blessings.
  • But a roll of seven has more to do with the flick of a wrist than fate. 
  • ...3 more annotations...
  • Considering yourself lucky is a good thing. Rather than having a negative worldview—“that’s just my luck”—thinking yourself to be lucky results in positive brain functioning and overall well-being.
  • To navigate this tricky terrain, Frank suggests asking someone about their luck rather than informing them of their luck.
  • As should we all. Luck is not a mystical ally.
  •  
    I think luck is a very tricky thing in human social science. As studies suggest, luck is not a real thing; it is just something that humans invented to comfort themselves. However, the belief in luck does have an effect on people's performance. I remember once seeing a study finding that people who believe they are very lucky have a better chance of performing well. This does not necessarily mean that there is some unknown force called luck; it just means that believing in oneself has a positive effect. I think it is very interesting that people are so used to using the word luck when something of low probability happens to them. I think the language itself suggests to people that there is some force that helps them in their actions. --Sissi (4/19/2017)