TOK Friends: Group items tagged robots

mcginnisca

Why Do We Teach Girls That It's Cute to Be Scared? - The New York Times - 0 views

  • Why Do We Teach Girls That It’s Cute to Be Scared?
  • Apparently, fear is expected of women.
  • parents cautioned their daughters about the dangers of the fire pole significantly more than they did their sons and were much more likely to assist them
  • ...13 more annotations...
  • But both moms and dads directed their sons to face their fears, with instruction on how to complete the task on their own.
  • Misadventures meant that I should try again. With each triumph over fear and physical adversity, I gained confidence.
  • She said that her own mother had been very fearful, gasping at anything remotely rough-and-tumble. “I had been so discouraged from having adventures, and I wanted you to have a more exciting childhood,”
  • Parents are “four times more likely to tell girls than boys to be more careful”
  • “Girls may be less likely than boys to try challenging physical activities, which are important for developing new skills.” This study points to an uncomfortable truth: We think our daughters are more fragile, both physically and emotionally, than our sons.
  • Nobody is saying that injuries are good, or that girls should be reckless. But risk taking is important
  • It follows that by cautioning girls away from these experiences, we are not protecting them. We are failing to prepare them for life.
  • When a girl learns that the chance of skinning her knee is an acceptable reason not to attempt the fire pole, she learns to avoid activities outside her comfort zone.
  • Fear becomes a go-to feminine trait, something girls are expected to feel and express at will.
  • By the time a girl reaches her tweens no one bats an eye when she screams at the sight of an insect.
  • When girls become women, this fear manifests as deference and timid decision making
  • We must chuck the insidious language of fear (Be careful! That’s too scary!) and instead use the same terms we offer boys, of bravery and resilience. We need to embolden girls to master skills that at first appear difficult, even dangerous. And it’s not cute when a 10-year-old girl screeches, “I’m too scared.”
  • I was often scared. Of course I was. So were the men.
sissij

Good reasoning needn't make you an unfeeling robot - 1 views

  • There are two brain networks, called in the literature the “Default Mode Network” and the “Task Positive Network” – and it was shown these activate in different reasoning situations, but rarely together. One network lit up when subjects were asked to reason about physical systems (including the mechanical properties of inanimate objects); the other lit up when subjects were asked to reason about social situations (including the mental states of other people).
  • Some people have jumped to bad conclusions on the basis of this evidence, claiming that it shows “analytic thinking” and “empathy” are in tension, and that when we reason carefully, we can’t see the human cost of our decisions.
  • These are all open questions where the logician, the linguist and the philosopher enter the picture, to help us understand how we can represent and reason about the world.
  •  
    This article is saying that emotion and reasoning are not in conflict. Although our brain has two distinct systems of thinking, there is no tension between thinking and feeling; our emotion and our reasoning coexist. Good reasoning does not conflict with moral judgment; it also means coming up with possibilities and considering which options to explore. However, this topic is still debatable. --Sissi (11/29/2016)
Javier E

ThinkUp Helps the Social Network User See the Online Self - NYTimes.com - 1 views

  • In addition to a list of people’s most-used words and other straightforward stats like follower counts, ThinkUp shows subscribers more unusual information such as how often they thank and congratulate people, how frequently they swear, whose voices they tend to amplify and which posts get the biggest reaction and from whom.
  • after using ThinkUp for about six months, I’ve found it to be an indispensable guide to how I navigate social networks.
  • Every morning the service delivers an email packed with information, and in its weighty thoroughness, it reminds you that what you do on Twitter and Facebook can change your life, and other people’s lives, in important, sometimes unforeseen ways.
  • ...14 more annotations...
  • ThinkUp is something like Elf on the Shelf for digitally addled adults — a constant reminder that someone is watching you, and that you’re being judged.
  • “The goal is to make you act like less of a jerk online,” Ms. Trapani said. “The big goal is to create mindfulness and awareness, and also behavioral change.”
  • One of the biggest dangers is saying something off the cuff that might make sense in a particular context, but that sounds completely off the rails to the wider public. The problem, in other words, is acting without thinking — being caught up in the moment, without pausing to reflect on the long-term consequences. You’re never more than a few taps away from an embarrassment that might ruin your career, or at least your reputation, for years to come.
  • Because social networks often suggest a false sense of intimacy, they tend to lower people’s self-control.
  • Like a drug or perhaps a parasite, they worm into your devices, your daily habits and your every free moment, and they change how you think.
  • For those of us most deeply afflicted, myself included, every mundane observation becomes grist for a 140-character quip, and every interaction a potential springboard into an all-consuming, emotionally wrenching flame battle.
  • people often tweet and update without any perspective about themselves. That’s because Facebook and Twitter, as others have observed, have a way of infecting our brains.
  • getting a daily reminder from ThinkUp that there are good ways and bad ways to behave online — has a tendency to focus the mind.
  • More basically, though, it’s helped me pull back from social networks. Each week, ThinkUp tells me how often I’ve tweeted. Sometimes that number is terribly high — a few weeks ago it was more than 800 times — and I realize I’m probably overtaxing my followers
  • ThinkUp charges $5 a month for each social network you connect to it. Is it worth it? After all, there’s a better, more surefire way of avoiding any such long-term catastrophe caused by social media: Just stop using social networks.
  • The main issue constraining growth, the founders say, is that it has been difficult to explain to people why they might need ThinkUp.
  • your online profile plays an important role in how you’re perceived by potential employers. In a recent survey commissioned by the job-hunting site CareerBuilder, almost half of companies said they perused job-seekers’ social networking profiles to look for red flags and to see what sort of image prospective employees portrayed online.
  • even though “never tweet” became a popular, ironic thing to tweet this year, actually never tweeting, and never being on Facebook, is becoming nearly impossible for many people.
  • That may change as more people falter on social networks, either by posting unthinking comments that end up damaging their careers, or simply by annoying people to the point that their online presence becomes a hindrance to their real-life prospects.
Javier E

But What Would the End of Humanity Mean for Me? - James Hamblin - The Atlantic - 0 views

  • Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from philosopher Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos
  • "I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,"
  • Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”
  • ...17 more annotations...
  • The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.
  • Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
  • "Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.”
  • "This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”
  • Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, draw examples of cold-war automated systems that assessed threats and resulted in false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”
  • there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn't kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all
  • “We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” The key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.”
  • “There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention.”
  • a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”
  • Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”
  • it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that.
  • Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to try and maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
  • “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”
  • Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and sense of lack of power. Because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”
  • Tegmark brings it to Earth with a case-example about purchasing a stroller: If you could spend more for a good one or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”
  • “There are seven billion of us on this little spinning ball in space. And we have so much opportunity," Tegmark said. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”
  • Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials.
jlessner

Why Facebook's News Experiment Matters to Readers - NYTimes.com - 0 views

  • Facebook’s new plan to host news publications’ stories directly is not only about page views, advertising revenue or the number of seconds it takes for an article to load. It is about who owns the relationship with readers.
  • It’s why Google, a search engine, started a social network and why Facebook, a social network, started a search engine. It’s why Amazon, a shopping site, made a phone and why Apple, a phone maker, got into shopping.
  • Facebook’s experiment, called instant articles, is small to start — just a few articles from nine media companies, including The New York Times. But it signals a major shift in the relationship between publications and their readers. If you want to read the news, Facebook is saying, come to Facebook, not to NBC News or The Atlantic or The Times — and when you come, don’t leave. (For now, these articles can be viewed on an iPhone running the Facebook app.)
  • ...6 more annotations...
  • The front page of a newspaper and the cover of a magazine lost their dominance long ago.
  • But news reports, like albums before them, have not been created that way. One of the services that editors bring to readers has been to use their news judgment, considering a huge range of factors, when they decide how articles fit together and where they show up. The news judgment of The New York Times is distinct from that of The New York Post, and for generations readers appreciated that distinction.
  • “In digital, every story becomes unbundled from each other, so if you’re not thinking of each story as living on its own, it’s tying yourself back to an analog era,” Mr. Kim said.
  • Facebook executives have insisted that they intend to exert no editorial control because they leave the makeup of the news feed to the algorithm. But an algorithm is not autonomous. It is written by humans and tweaked all the time.
  • That raises some journalistic questions. The news feed algorithm works, in part, by showing people more of what they have liked in the past. Some studies have suggested that means they might not see as wide a variety of news or points of view, though others, including one by Facebook researchers, have found they still do.
  • Tech companies, Facebook included, are notoriously fickle with their algorithms. Publications became so dependent on Facebook in the first place because of a change in its algorithm that sent more traffic their way. Later, another change demoted articles from sites that Facebook deemed to run click-bait headlines. Then last month, Facebook decided to prioritize some posts from friends over those from publications.
Javier E

To Justify Every 'A,' Some Professors Hand Over Grading Power to Outsiders - Technology... - 0 views

  • The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don't worry that students will punish harsh grades with poor reviews. That's the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
  • These efforts raise the question: What if professors aren't that good at grading? What if the model of giving instructors full control over grades is fundamentally flawed? As more observers call for evidence of college value in an era of ever-rising tuition costs, game-changing models like these are getting serious consideration.
  • Professors do score poorly when it comes to fair grading, according to a study published in July in the journal Teachers College Record. After crunching the numbers on decades' worth of grade reports from about 135 colleges, the researchers found that average grades have risen for 30 years, and that A is now the most common grade given at most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue that a "consumer-based approach" to higher education has created subtle incentives for professors to give higher marks than deserved. "The standard practice of allowing professors free rein in grading has resulted in grades that bear little relation to actual performance," the two professors concluded.
  • ...13 more annotations...
  • Western Governors is entirely online, for one thing. Technically it doesn't offer courses; instead it provides mentors who help students prepare for a series of high-stakes homework assignments. Those assignments are designed by a team of professional test-makers to prove competence in various subject areas. The idea is that as long as students can leap all of those hurdles, they deserve degrees, whether or not they've ever entered a classroom, watched a lecture video, or participated in any other traditional teaching experience. The model is called "competency-based education."
  • Ms. Johnson explains that Western Governors essentially splits the role of the traditional professor into two jobs. Instructional duties fall to a group the university calls "course mentors," who help students master material. The graders, or evaluators, step in once the homework is filed, with the mind-set of, "OK, the teaching's done, now our job is to find out how much you know," says Ms. Johnson. They log on to a Web site called TaskStream and pluck the first assignment they see. The institution promises that every assignment will be graded within two days of submission.
  • Western Governors requires all evaluators to hold at least a master's degree in the subject they're grading.
  • Evaluators are required to write extensive comments on each task, explaining why the student passed or failed to prove competence in the requisite skill. No letter grades are given—students either pass or fail each task.
  • Another selling point is the software's fast response rate. It can grade a batch of 1,000 essay tests in minutes. Professors can set the software to return the grade immediately and can give students the option of making revisions and resubmitting their work on the spot.
  • All evaluators initially receive a month of training, conducted online, about how to follow each task's grading guidelines, which lay out characteristics of a passing score.
  • Other evaluators want to push talented students to do more than the university's requirements for a task, or to allow a struggling student to pass if he or she is just under the bar. "Some people just can't acclimate to a competency-based environment," says Ms. Johnson. "I tell them, if they don't buy this, they need to not be here."
  • She and some teaching assistants scored the tests by hand and compared their performance with the computer's.
  • The graduate students became fatigued and made mistakes after grading several tests in a row, she told me, "but the machine was right-on every time."
  • He argues that students like the idea that their tests are being evaluated in a consistent way.
  • The graders must regularly participate in "calibration exercises," in which they grade a simulated assignment to make sure they are all scoring consistently. As the phrase suggests, the process is designed to run like a well-oiled machine.
  • He said once students get essays back instantly, they start to view essay tests differently. "It's almost like a big math problem. You don't expect to get everything right the first time, but you work through it."
  • robot grading is the hottest trend in testing circles, says Jacqueline Leighton, a professor of educational psychology at the University of Alberta who edits the journal Educational Measurement: Issues and Practice. Companies building essay-grading robots include the Educational Testing Service, which sells e-rater, and Pearson Education, which makes Intelligent Essay Assessor. "The research is promising, but they're still very much in their infancy," Ms. Leighton says.
jlessner

Straight Talk for White Men - NYTimes.com - 0 views

  • SUPERMARKET shoppers are more likely to buy French wine when French music is playing, and to buy German wine when they hear German music. That’s true even though only 14 percent of shoppers say they noticed the music, a study finds.
  • Researchers discovered that candidates for medical school interviewed on sunny days received much higher ratings than those interviewed on rainy days. Being interviewed on a rainy day was a setback equivalent to having an MCAT score 10 percent lower, according to a new book called “Everyday Bias,” by Howard J. Ross.
  • Those studies are a reminder that we humans are perhaps less rational than we would like to think, and more prone to the buffeting of unconscious influences. That’s something for those of us who are white men to reflect on when we’re accused of “privilege.”
  • ...7 more annotations...
  • When I wrote a series last year, “When Whites Just Don’t Get It,” the reaction from white men was often indignant: It’s an equal playing field now! Get off our case!
  • Yet the evidence is overwhelming that unconscious bias remains widespread in ways that systematically benefit both whites and men. So white men get a double dividend, a payoff from both racial and gender biases.
  • male professors are disproportionately likely to be described as a “star” or “genius.” Female professors are disproportionately described as “nasty,” “ugly,” “bossy” or “disorganized.”
  • When students were taking the class from someone they believed to be male, they rated the teacher more highly. The very same teacher, when believed to be female, was rated significantly lower.
  • The study found that a résumé with a name like Emily or Greg received 50 percent more callbacks than the same résumé with a name like Lakisha or Jamal. Having a white-sounding name was as beneficial as eight years’ work experience.
  • Then there was the study in which researchers asked professors to evaluate the summary of a supposed applicant for a post as laboratory manager, but, in some cases, the applicant was named John and in others Jennifer. Everything else was the same. “John” was rated an average of 4.0 on a 7-point scale for competence, “Jennifer” a 3.3. When asked to propose an annual starting salary for the applicant, the professors suggested on average a salary for “John” almost $4,000 higher than for “Jennifer.”
  • While we don’t notice systematic unfairness, we do observe specific efforts to redress it — such as affirmative action, which often strikes white men as profoundly unjust. Thus a majority of white Americans surveyed in a 2011 study said that there is now more racism against whites than against blacks.
sissij

FaceApp apologises for 'racist' filter that lightens users' skintone | Technology | The... - 0 views

  • its “hot” filter automatically lightened people’s skin.
  • “It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour.”
  • which he said was a side-effect of the “neural network”.
  • ...3 more annotations...
  • But users noticed one of the options, initially labelled as “hot”, made people look whiter.
  • which usually adds filters, because it uses deep learning technology to alter the photo itself.
  • This is by no means the first time an app which changes people’s faces has been criticised for racial insensitivity.
  •  
    This article reminds me of an article I read days ago about an AI chat program that picked up racist expressions while learning from chatting with a lot of people. FaceApp apparently uses a learning program similar to that of the AI bot. I think this can partly reflect what mainstream society is thinking: it unveils the preference for whiter skin tones that people hold, intentionally or subconsciously. This is the mainstream aesthetic in society now. --Sissi (4/26/2017)
katherineharron

Four ways the Mars 2020 rover will pave the way for a manned mission - CNN - 0 views

  • When NASA's Mars 2020 rover lands on the Red Planet in February 2021, it will touch down in Jezero Crater, the site of a lake that existed 3.5 billion years ago. The next generation rover will build on the goals of previous robotic explorers by collecting the first samples of Mars, which would be returned to Earth at a later date.
  • "We're very much thinking about how Mars could be inhabited, how humans could come to Mars and make use of the resources that we have there in the Martian environment today," said Stack. "We send our robotic scouts first to learn about these other places, hopefully for us to prepare the way for us to go ourselves."
  • "Combining an understanding of the composition of the rocks, but also the very fine detail that we see in the rocks and the textures, can make a powerful case for ancient signs of life," Stack said. "We know that ancient Mars was habitable. But we haven't yet been able to show that we have signs, real signs, of ancient life yet. And with our instrument suite, we think we can make real advances towards that on the surface.
  • ...5 more annotations...
  • "This is a huge endeavor for the human species, and it'll take cooperation from more than just our own space program," Stack said. "Once the resources are there, we can develop the technology. It's getting the buy-in from international partners and from our own space administration and government to really make this happen."
  • No matter the mission, sticking the landing is key for future success. The 2020 rover will land on Mars using the new Terrain Relative Navigation system, which allows the lander to avoid any large hazards in the landing zone.
  • Astronauts exploring Mars will need oxygen, but carting enough to sustain them on a spacecraft isn't viable. The Mars 2020 rover will carry MOXIE on board, or the Mars Oxygen In-Situ Resource Utilization Experiment.
  • Speaking of "The Martian," the events of the book and its film adaptation are set in motion when a surprise, devastating dust storm impacts astronauts on the Red Planet. Understanding the weather and environment on Mars will be crucial for determining the conditions astronauts will face.
  • For the first time, a surface mission will include a ground-penetrating radar instrument called RIMFAX, or Radar Imager for Mars' Subsurface Experiment. It will be able to peek beneath the surface and study Martian geology, looking for rock, ice and boulder layers. Scientists hope that RIMFAX will help them understand the geologic history of Jezero Crater, according to David Paige, principal investigator for the experiment at the University of California, Los Angeles.
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Emily Horwitz

Struggle For Smarts? How Eastern And Western Cultures Tackle Learning : Shots - Health ... - 1 views

  • In 1979, when Jim Stigler was still a graduate student at the University of Michigan, he went to Japan to research teaching methods and found himself sitting in the back row of a crowded fourth grade math class.
  • and one kid was just totally having trouble with it. His cube looked all cockeyed, so the teacher said to him, 'Why don't you go put yours on the board?' So right there I thought, 'That's interesting! He took the one who can't do it and told him to go and put it on the board.'"
  • the kid didn't break into tears. Stigler says the child continued to draw his cube with equanimity. "And at the end of the class, he did make his cube look right! And the teacher said to the class, 'How does that look, class?' And they all looked up and said, 'He did it!' And they broke into applause." The kid smiled a huge smile and sat down, clearly proud of himself.
  • ...12 more annotations...
  • very early ages we [in America] see struggle as an indicator that you're just not very smart," Stigler says. "It's a sign of low ability — people who are smart don't struggle, they just naturally get it, that's our folk theory. Whereas in Asian cultures they tend to see struggle more as an opportunity."
  • For the most part in American culture, intellectual struggle in schoolchildren is seen as an indicator of weakness, while in Eastern cultures it is not only tolerated, it is often used to measure emotional strength.
  • to understand why these two cultures view struggle so differently, it's good to step back and examine how they think about where academic excellence comes from.
  • American mother is communicating to her son that the cause of his success in school is his intelligence. He's smart — which, Li says, is a common American view.
  • children are not creative. Our children do not have individuality. They're just robots. You hear the educators from Asian countries express that concern.
  • "So the focus is on the process of persisting through it despite the challenges, not giving up, and that's what leads to success," Li says.
  • Obviously if struggle indicates weakness — a lack of intelligence — it makes you feel bad, and so you're less likely to put up with it. But if struggle indicates strength — an ability to face down the challenges that inevitably occur when you are trying to learn something — you're more willing to accept it.
  • American students "worked on it less than 30 seconds on average and then they basically looked at us and said, 'We haven't had this,'" he says.
  • Japanese students worked for the entire hour on the impossible problem.
  • Westerners tend to worry that their kids won't be able to compete against Asian kids who excel in many areas but especially in math and science. Jin Li says that educators from Asian countries have their own set of worries.
  • "The idea of intelligence in believed in the West as a cause," Li explains. "She is telling him that there is something in him, in his mind, that enables him to do what he does."
  • in the Japanese classrooms that he's studied, teachers consciously design tasks that are slightly beyond the capabilities of the students they teach, so the students can actually experience struggling with something just outside their reach. Then, once the task is mastered, the teachers actively point out that the student was able to accomplish it through the student's hard work and struggle.
  •  
    An interesting look into the differences between how Eastern and Western cultures see academic struggle
Javier E

The Future of Sex - The European - 1 views

  • Consider the most likely scenario for how human sexual behavior will develop over the next hundred years or so in the absence of cataclysm. Here’s what I see if we continue on our current path:
  • Like every other aspect of human life, our sexuality will become increasingly mediated by technology. The technology of pornography will become ever more sophisticated—even if the subject matter of porn itself will remain as primal as ever.
  • As the technology improves, society continues to grow ever more fragmented, and hundreds of millions of Chinese men with no hope of marrying a bona-fide, flesh-and-blood woman come of age, sex robots will become as common and acceptable as dildos and vibrators are today. After all, the safest sex is that which involves no other living things…
  • ...4 more annotations...
  • As our sexuality becomes ever more divorced from emotion and intimacy, a process already well underway, sex will increasingly be seen as simply a matter of provoking orgasm in the most efficient, reliable ways possible.
  • Human sexuality will continue to be subjected to the same commodification and mechanization as other aspects of our lives. Just as the 21st century saw friends replaced by Facebook friends, nature replaced by parks, ocean fisheries replaced by commercially farmed seafood, and sunshine largely supplanted by tanning salons, we’ll see sexual interaction reduced to mechanically provoked orgasm as human beings become ever more dominated by the machines and mechanistic thought processes that developed in our brains and societies like bacteria in a petri dish.
  • Gender identity will fade away as sexual interaction becomes less “human” and we grow less dependent upon binary interactions with other people. As more and more of our interactions take place with non-human partners, others’ expectations and judgments will become less relevant to the development of sexual identity, leading to greater fluidity and far less urgency and passion concerning sexual expression.
  • the collapse of western civilization may well be the best thing that could happen for human sexuality. Following the collapse of the consumerist, competitive mind-set that now dominates so much of human thought, we’d possibly be free to rebuild a social world more in keeping with our preagricultural origins, characterized by economies built upon sharing rather than hoarding, a politics of respect rather than of power, and a sexuality of intimacy rather than alienation.
Javier E

Drones, Ethics and the Armchair Soldier - NYTimes.com - 0 views

  • the difference between humans and robots is precisely the ability to think and reflect, in Immanuel Kant’s words, to set and pursue ends for themselves. And these ends cannot be set beforehand in some hard and fast way
  • Working one’s way through the complexities of “just war” and moral theory makes it perfectly clear that ethics is not about arriving easily at a single right answer, but rather coming to understand the profound difficulty of doing so. Experiencing this difficulty is what philosophers call existential responsibility.
  • One of the jobs of philosophy, at least as I understand it, is neither to help people to avoid these difficulties nor to exaggerate them, but rather to face them in resolute and creative ways.
  • ...6 more annotations...
  • ground troops, unfortunately, had more pressing concerns than existential responsibility. They did not have leisure, unlike their commanders, who also often had the philosophical training to think through the complexities of their jobs.
  • This training was not simply a degree requirement at Officer Candidate School or one of the United States military academies, but a sustained, ongoing, and rigorous engagement with a philosophical tradition. Alexander lived with Aristotle.
  • Jeff McMahan argued that traditional “just war theory” should be reworked in several important ways. He suggested that the tenets of a revised theory apply not only to governments, traditionally represented by commanders and heads of state, but also to individual soldiers. This is a significant revision since it broadens the scope of responsibility for warfare
  • McMahan believes that individuals are to bear at least some responsibility in upholding “just cause” requirements. McMahan expects more of soldiers and, in this age of drones and leisure, he is right to do so.
  • while drones are to be applauded for keeping these soldiers out of harm’s way physically, we would do well to remember that they do not keep them out of harm’s way morally or psychologically. The high rates of “burnout” should drive this home. Supporting our troops requires ensuring that they are provided not just with training and physical armor, but with the intellectual tools to navigate these new difficulties.
  • Just as was the case in the invasion of Iraq 10 years ago, the most important questions we should be asking should not be directed to armchair soldiers but to those of us in armchairs at home: What wars are being fought in our name? On what grounds are they being fought?
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer: ...in engineering?] Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, its got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just like as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology in organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. [Interviewer: Well, they constrain the biology, sure.] Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
Javier E

Stephen Hawking just gave humanity a due date for finding another planet - The Washingt... - 0 views

  • Hawking told the audience that Earth's cataclysmic end may be hastened by humankind, which will continue to devour the planet’s resources at unsustainable rates
  • “Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years. By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race.”
  • “I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in a 2014 interview that touched upon everything from online privacy to his affinity for his robotic-sounding voice.
  • ...1 more annotation...
  • “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate,” Hawking warned in recent months. “Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.”
sissij

How Social Isolation Is Killing Us - The New York Times - 0 views

  • About one-third of Americans older than 65 now live alone, and half of those over 85 do. People in poorer health — especially those with mood disorders like anxiety and depression — are more likely to feel lonely. Those without a college education are the least likely to have someone they can talk to about important personal matters.
  • Loneliness can accelerate cognitive decline in older adults, and isolated individuals are twice as likely to die prematurely as those with more robust social interactions. These effects start early: Socially isolated children have significantly poorer health 20 years later, even after controlling for other factors. All told, loneliness is as important a risk factor for early death as obesity and smoking.
  • The loneliness of older adults has different roots — often resulting from family members moving away and close friends passing away. As one senior put it, “Your world dies before you do.”
  • ...3 more annotations...
  • “In America, you almost need an excuse for knocking on a neighbor’s door,” Dr. Tang told me. “We want to break down those barriers.”
  • “You don’t need a playmate every day,” Dr. Tang said. “But knowing you’re valued and a contributing member of society is incredibly reaffirming.”
  • A great paradox of our hyper-connected digital age is that we seem to be drifting apart. Increasingly, however, research confirms our deepest intuition: Human connection lies at the heart of human well-being. It’s up to all of us — doctors, patients, neighborhoods and communities — to maintain bonds where they’re fading, and create ones where they haven’t existed.
  •  
    We are always finding reasons to do something good for others, but these barriers are just invented; we don't need a reason to be kind and friendly. The digital age gives us access to more people, but it also limits the attention and effort we put into forming a friendship. Our brains are limited, and attention blindness shows that we cannot handle too much information. What we should do is focus our attention on people in our community and actually make the effort to form relationships and connections. Usually, the people who are most real and close to us are not online. --Sissi (12/23/2016)
maxwellokolo

If You're Looking For Alien Life, How Will You Know If You've Found it? - 1 views

  •  
    When a robotic probe finally lands on a watery world like Jupiter's moon Europa, what do scientists have to see to definitively say whether the place has any life? That's the question retired astronaut John Grunsfeld posed to some colleagues at NASA when he was in charge of the agency's science missions.
Javier E

Our Ecological Boredom - NYTimes.com - 1 views

  • Live free or die: This is the maxim of our age. But the freedoms we celebrate are particular and limited. We fetishize the freedom of business from state control; the freedom not to pay taxes; the freedom to carry guns and speak our minds and worship whom we will. But despite, in some cases because of, this respect for particular freedoms, every day the scope of our lives appears to contract.
  • Half a century ago, we were promised that rising wealth would mean less work, longer vacations and more choice
  • our working hours rise in line with economic growth, and they are now governed by a corporate culture of snooping and quantification, of infantilizing diktats and impossible demands, all of which smothers autonomy and creativity. Technologies that promised to save time and free us from drudgery (such as email and smartphones) fill our heads with a clatter so persistent it stifles the ability to think.
  • Young people, who have no place in this dead-eyed, sanitized landscape, scarcely venture from their bedrooms. Political freedom now means choosing between alternative versions of market fundamentalism.
  • Even the freedoms we do possess we tend not to exercise. We spend hours every day watching other people doing what we might otherwise be doing: dancing, singing, playing sports, even cooking. We venture outdoors to seek marginally different varieties of stuff we already possess
  • We entertain the illusion that we have chosen our lives. Why, if this is the case, do our apparent choices differ so little from those of other people? Why do we live and work and travel and eat and dress and entertain ourselves in almost identical fashion? It’s no wonder, when we possess and use it so little, that we make a fetish out of freedom.
  • our survival in the modern economy requires the use of few of the mental and physical capacities we possess. Sometimes it feels like a small and shuffling life. Our humdrum, humiliating lives leave us, I believe, ecologically bored.
  • Across many rich nations, especially the United States, global competition is causing the abandonment of farming on less fertile land. Rather than trying to tame and hold back the encroaching wilds, I believe we should help to accelerate the process of reclamation, removing redundant roads and fences, helping to re-establish missing species, such as wolves and cougars and bears, building bridges between recovering habitats to create continental-scale wildlife corridors, such as those promoted by the Rewilding Institute.
  • This rewilding of the land permits, if we choose, a partial rewilding of our own lives. It allows us to step into a world that is not controlled and regulated, to imagine ourselves back into the rawer life from which we came
kushnerha

How 'Empowerment' Became Something for Women to Buy - The New York Times - 0 views

  • The mix of things presumed to transmit and increase female power is without limit yet still depressingly limiting. “Empowerment” wasn’t always so trivialized, or so corporate, or even so clamorously attached to women.
  • Four decades ago, the word had much more in common with Latin American liberation theology than it did with “Lean In.” In 1968, the Brazilian academic Paulo Freire coined the word “conscientization,” empowerment’s precursor, as the process by which an oppressed person perceives the structural conditions of his oppression and is subsequently able to take action against his oppressors.
  • Eight years later, the educator Barbara Bryant Solomon, writing about American black communities, gave this notion a new name, “empowerment.” It was meant as an ethos for social workers in marginalized communities, to discourage paternalism and encourage their clients to solve problems in their own ways. Then in 1981, Julian Rappaport, a psychologist, broadened the concept into a political theory of power that viewed personal competency as fundamentally limitless; it placed faith in the individual and laid at her feet a corresponding amount of responsibility too.
  • Sneakily, empowerment had turned into a theory that applied to the needy while describing a process more realistically applicable to the rich. The word was built on a misaligned foundation; no amount of awareness can change the fact that it’s the already-powerful who tend to experience empowerment at any meaningful rate. Today “empowerment” invokes power while signifying the lack of it. It functions like an explorer staking a claim on new territory with a white flag.
  • highly marketable “women’s empowerment,” neither practice nor praxis, nor really theory, but a glossy, dizzying product instead. Women’s empowerment borrows the virtuous window-dressing of the social worker’s doctrine and kicks its substance to the side. It’s about pleasure, not power; it’s individualistic and subjective, tailored to insecurity and desire.
  • The new empowerment doesn’t increase potential so much as it assures you that your potential is just fine. Even when the thing being described as “empowering” is personal and mildly defiant (not shaving, not breast-feeding, not listening to men, et cetera), what’s being marketed is a certain identity.
  • When consumer purchases aren’t made out to be a path to female empowerment, a branded corporate experience often is. There’s TEDWomen (“about the power of women”), the Forbes Women’s Summit (“#RedefinePower”) and Fortune’s Most Powerful Women Conference (tickets are $10,000).
  • This consumption-and-conference empowerment dilutes the word to pitch-speak, and the concept to something that imitates rather than alters the structures of the world. This version of empowerment can be actively disempowering: It’s a series of objects and experiences you can purchase while the conditions determining who can access and accumulate power stay the same. The ready participation of well-off women in this strategy also points to a deep truth about the word “empowerment”: that it has never been defined by the people who actually need it. People who talk empowerment are, by definition, already there.
  • I have never said “empowerment” sincerely or heard it from a single one of my friends. The formulation has been diluted to something representational and bloodless — an architectural rendering of a building that will never be built. But despite its nonexistence in honest conversation, “empowerment” goes on thriving. It’s uniquely marketable, like the female body, which is where women’s empowerment is forced to live.
  • Like Sandberg, Kardashian is the apotheosis of a particular brand of largely contentless feminism, a celebratory form divorced from material politics, which makes it palatable — maybe irresistible — to the business world. The mistake would be to locate further empowerment in choosing between the two. Corporate empowerment — as well as the lightweight, self-exculpatory feminism it rides on — feeds ravenously on the distracting performance of identity, that buffet of false opposition.