TOK Friends: Group items tagged "monitor"

Javier E

Disruptions: Medicine That Monitors You - NYTimes.com - 0 views

  • researchers and some start-ups are already preparing the next, even more intrusive wave of computing: ingestible computers and minuscule sensors stuffed inside pills.
  • some people on the cutting edge are already swallowing them to monitor a range of health data and wirelessly share this information with a doctor
  • does not need a battery. Instead, the body is the power source. Just as a potato can power a light bulb, Proteus has added magnesium and copper on each side of its tiny sensor, which generates just enough electricity from stomach acids.
  • People with heart failure-related difficulties could monitor blood flow and body temperature; those with central nervous system issues, including schizophrenia and Alzheimer’s disease, could take the pills to monitor vital signs in real time.
  • Future generations of these pills could even be convenience tools.
  • Once that pill is in your body, you could pick up your smartphone and not have to type in a password. Instead, you are the password. Sit in the car and it will start. Touch the handle to your home door and it will automatically unlock. “Essentially, your entire body becomes your authentication token.”
  • “The wonderful is that there are a great number of things you want to know about yourself on a continual basis, especially if you’re diabetic or suffer from another disease. The terrible is that health insurance companies could know about the inner workings of your body.”
  • And the implications of a tiny computer inside your body being hacked? Let’s say they are troubling.
  • After it has done its job, flowing down around the stomach and through the intestinal tract, what happens next? “It passes naturally through the body in about 24 hours,” Ms. Carbonelli said, but since each pill costs $46, “some people choose to recover and recycle it.”
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ... - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
  •  
    Automation increases efficiency and speed of tasks, but decreases the individual's knowledge of a task and decreases a human's ability to learn.
annabaldwin_

How Getting Enough Sleep Can Make You Less Afraid - The Atlantic - 0 views

  • A new study suggests that people who naturally get more REM sleep may be less sensitive to frightening things.
  • For the study, a team of researchers from Rutgers University sent 17 subjects home with sleep-monitoring devices—headbands that monitor their brain waves, wristbands that track arm movements, and sleep logs—and asked them to sleep as they normally would for a week. They were monitoring how much sleep they were getting—especially REM, or rapid-eye-movement sleep.
  • Each night, most people sleep about seven or eight hours, about two hours of which is REM sleep, the stage of sleep in which the body relaxes fully and most dreams occur.
  • The researchers then conditioned the participants to be afraid of certain images by showing them pictures of ordinary-looking rooms lit with lamps of various hues, some of which were paired with a mild shock to the finger. Through the shocks, they were taught to fear the rooms that were lit by certain colors.
  • The subjects with more REM sleep also had less activity in those areas of the brain. That suggests that the more well-rested subjects may not have been hard-wiring those fears into their brains quite as strongly.
  • PTSD is already known to be associated with sleep disturbances, and past studies have shown that sleep-deprived people have more activity in their amygdalae upon being shown upsetting pictures.
  • “REM is very unique because it’s the only time that area of the brain is completely silent,” said Shira Lupkin, one of the study’s authors and a researcher with the Center for Molecular and Behavioral Neuroscience at Rutgers University.
  • Because of that, people who get plenty of REM sleep might be less reactive to emotional stimuli.
  • If the study is replicated, there could be real-world implications for stopping trauma—before it starts.
criscimagnael

Explained: Social media and the Texas shooter's messages | Explained News,The Indian Ex... - 0 views

  • Could technology companies have monitored ominous messages made by a gunman who Texas authorities say massacred 19 children and two teachers at an elementary school? Could they have warned the authorities? Answers to these questions remain unclear
  • But if nothing else, the shooting in Uvalde, Texas, seems highly likely to focus additional attention on how social platforms monitor what users are saying to and showing each other.
  • Shortly thereafter, Facebook stepped in to note that the gunman sent one-to-one direct messages, not public posts, and that they weren’t discovered until “after the terrible tragedy”.
  • Some reports appear to show that at least some of the gunman’s communications used Apple’s encrypted iPhone messaging services, which makes messages almost impossible for anyone else to read when sent to another iPhone user.
  • Facebook parent company Meta, which also owns Instagram, says it is working with law enforcement but declined to provide details.
  • A series of posts appeared on his Instagram in the days leading up to the shooting, including photos of a gun magazine in hand and two AR-style semi-automatic rifles. An Instagram user who was tagged in one post shared parts of what appears to be a chilling exchange on Instagram in which Ramos asked her to share his gun pictures with her more than 10,000 followers.
  • Meta has said it monitors people’s private messages for some kinds of harmful content, such as links to malware or images of child sexual exploitation. But copied images can be detected using unique identifiers — a kind of digital signature — which makes them relatively easy for computer systems to flag. Trying to interpret a string of threatening words — which can resemble a joke, satire or song lyrics — is a far more difficult task for artificial intelligence systems.
  • Facebook could, for instance, flag certain phrases such as “going to kill” or “going to shoot”, but without context — something AI in general has a lot of trouble with — there would be too many false positives for the company to analyze. (A minimal code sketch of this contrast follows this list.)
  • A recent Meta-commissioned report emphasized the benefits of such privacy but also noted some risks — including users who could abuse the encryption to sexually exploit children, facilitate human trafficking and spread hate speech.
  • Security experts say this could be done if Apple were to engineer a “backdoor” to allow access to messages sent by alleged criminals. Such a secret key would let them decipher encrypted information with a court order.
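
The annotations above contrast two very different moderation problems: matching copies of already-known harmful images via their unique signatures, and interpreting free-text threats without context. Below is a minimal, hypothetical sketch of that trade-off, not Meta's actual pipeline: real systems use perceptual hashes such as PhotoDNA rather than the plain SHA-256 stand-in used here, and the hash set, phrase list, and sample messages are all made up for illustration.

```python
import hashlib

# Hypothetical set of signatures for images already known to be harmful.
# (Placeholder values; real systems use perceptual hashes such as PhotoDNA
# so that resized or re-encoded copies still match.)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

# Naive, context-free phrase list of the kind that would over-flag messages.
THREAT_PHRASES = ["going to kill", "going to shoot"]


def flag_copied_image(image_bytes: bytes) -> bool:
    """Exact signature match: cheap and precise for verbatim copies of known images."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES


def flag_threatening_text(message: str) -> bool:
    """Keyword matching with no notion of context, so false positives abound."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in THREAT_PHRASES)


if __name__ == "__main__":
    # A song lyric or a joke trips the text filter just as easily as a real threat.
    for msg in ["I'm going to kill it on stage tonight",
                "going to shoot some hoops later"]:
        print(msg, "->", flag_threatening_text(msg))  # both print True
```

The asymmetry is the article's point: exact matches against a known corpus are cheap and precise, while interpreting free text needs context that a simple filter cannot supply.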
peterconnelly

Your Bosses Could Have a File on You, and They May Misinterpret It - The New York Times - 0 views

  • The company you work for may want to know. Some corporate employers fear that employees could leak information, allow access to confidential files, contact clients inappropriately or, in the extreme, bring a gun to the office.
  • at times using behavioral science tools like psychology.
  • But in spite of worries that workers might be, reasonably, put off by a feeling that technology and surveillance are invading yet another sphere of their lives, employers want to know which clock-punchers may harm their organizations.
  • “There is so much technology out there that employers are experimenting with or investing in,” said Edgar Ndjatou
  • Software can watch for suspicious computer behavior or it can dig into an employee’s credit reports, arrest records and marital-status updates. It can check to see if Cheryl is downloading bulk cloud data or run a sentiment analysis on Tom’s emails to see if he’s getting testier over time. Analysis of this data, say the companies that monitor insider risk, can point to potential problems in the workplace.
  • Organizations that produce monitoring software and behavioral analysis for the feds also may offer conceptually similar tools to private companies, either independently or packaged with broader cybersecurity tools.
  • But corporations are moving forward with their own software-enhanced surveillance. While private-sector workers may not be subjected to the rigors of a 136-page clearance form, private companies help build these “continuous vetting” technologies for the federal government, said Lindy Kyzer of ClearanceJobs. Then, she adds, “Any solution would have private-sector applications.”
  • “Can we build a system that checks on somebody and keeps checking on them and is aware of that person’s disposition as they exist in the legal systems and the public record systems on a continuous basis?” said Chris Grijalva
  • But the interest in anticipating insider threats in the private sector raises ethical questions about what level of monitoring nongovernmental employees should be subject to.
  • “People are starting to understand that the insider threat is a business problem and should be handled accordingly,” said Mr. Grijalva.
  • The linguistic software package they developed, called SCOUT, uses psycholinguistic analysis to seek flags that, among other things, indicate feelings of disgruntlement, like victimization, anger and blame.
  • “The language changes in subtle ways that you’re not aware of,” Mr. Stroz said.
  • There’s not enough information, in other words, to construct algorithms about trustworthiness from the ground up. And that would hold in either the private or the public sector.
  • Even if all that dystopian data did exist, it would still be tricky to draw individual — rather than simply aggregate — conclusions about which behavioral indicators potentially presaged ill actions.
  • “Depending too heavily on personal factors identified using software solutions is a mistake, as we are unable to determine how much they influence future likelihood of engaging in malicious behaviors,” Dr. Cunningham said.
  • “I have focused very heavily on identifying indicators that you can actually measure, versus those that require a lot of interpretation,” Dr. Cunningham said. “Especially those indicators that require interpretation by expert psychologists or expert so-and-sos. Because I find that it’s a little bit too dangerous, and I don’t know that it’s always ethical.”
oliviaodon

Why Silence Is So Good For Your Brain | Huffington Post - 0 views

  • We live in a loud and distracting world, where silence is increasingly difficult to come by — and that may be negatively affecting our health.
  • A World Health Organization report called noise pollution a “modern plague,”
  • “overwhelming evidence that exposure to environmental noise has adverse effects on the health of the population.”
  • How many moments each day do you spend in total silence?
  • Silence relieves stress and tension.
  • noise pollution has been found to lead to high blood pressure and heart attacks, as well as impairing hearing and overall health. Loud noises raise stress levels by activating the brain’s amygdala and causing the release of the stress hormone cortisol
  • In our everyday lives, sensory input is being thrown at us from every angle. When we can finally get away from these sonic disruptions, our brains’ attention centers have the opportunity to restore themselves.
  • The ceaseless attentional demands of modern life put a significant burden on the prefrontal cortex of the brain, which is involved in high-order thinking, decision-making and problem-solving.
  • Silence can quite literally grow the brain.
  •  
    This article serves as a reminder to keep some silence in our lives! 
sissij

All the Ways Your Wi-Fi Router Can Spy on You - The Atlantic - 0 views

  • But it can also be used to monitor humans—and in surprisingly detailed ways.
  • By analyzing the exact ways that a Wi-Fi signal is altered when a human moves through it, researchers can “see” what someone writes with their finger in the air, identify a particular person by the way that they walk, and even read a person’s lips with startling accuracy—in some cases even if a router isn’t in the same room as the person performing the actions. (A toy code sketch of the underlying idea follows this list.)
  • Many researchers presented their Wi-Fi sensing technology as a way to preserve privacy while still capturing important data.
  • Ali said the system only works in controlled environments, and with rigorous training. “So, it is not a big privacy concern for now, no worries there,” wrote Ali, a Ph.D. student at Michigan State University, in an email.
  • Routers could soon keep kids and older adults safe, log daily activities, or make a smart home run more smoothly—but, if invaded by a malicious hacker, they could also be turned into incredibly sophisticated hubs for monitoring and surveillance.
  •  
    Everything has pros and cons. Gain always comes with loss. The development of new technology always comes with concerns. It reminded me of the discoveries in quantum physics that led to the invention of the atomic bomb. I think this Wi-Fi sensing technology can make our lives much more convenient. Science enables us to see the world differently. --Sissi (1/25/2017)
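
The Wi-Fi sensing described above rests on one idea: a body moving through the signal path perturbs the received signal in measurable ways. The toy sketch below illustrates only that principle on synthetic amplitude samples; real systems analyze per-subcarrier channel state information rather than a single amplitude stream, and the window size, threshold, and noise levels here are assumed values, not anything from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic received-signal amplitudes: steady noise, then a segment in which a
# person moving through the link adds extra fluctuation (assumed values).
quiet = rng.normal(loc=1.0, scale=0.01, size=500)
motion = rng.normal(loc=1.0, scale=0.08, size=500)
signal = np.concatenate([quiet, motion, quiet])


def detect_motion(samples, window=100, threshold=0.03):
    """Flag windows whose amplitude variability exceeds a calibration threshold.

    A variance test like this is the crudest form of device-free sensing; the
    research described in the article extracts far richer features.
    """
    flags = []
    for start in range(0, len(samples) - window + 1, window):
        flags.append(bool(samples[start:start + window].std() > threshold))
    return flags


if __name__ == "__main__":
    print(detect_motion(signal))  # the middle windows read True, the rest False
```

Even this crude test shows why a router can double as a motion sensor: the quiet windows and the "someone walking" windows separate cleanly without any camera or wearable.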
dicindioha

Nervous markets take fright at prospect of Trump failing to deliver | Larry Elliott | B... - 0 views

  • Shares, oil and the US dollar were all under pressure as global financial markets took fright at the prospect that Donald Trump would fail to deliver on his growth-boosting promises.
  • stock markets in Asia and Europe fell in response to Tuesday’s sharp decline on Wall Street.
  • Markets have become increasingly impatient with the new Trump administration for failing to follow through on pledges to use a package of tax cuts and infrastructure spending to raise the US growth rate.
  • Investors believe a failure to secure agreement on Capitol Hill to repeal Barack Obama’s healthcare act – the new administration’s first legislative test – will lead to a further sell-off on Wall Street.
  • money flowed out of the dollar and into the safe haven of the Japanese yen. Sterling rose to stand at just under $1.25 against the US currency.
  • The “repeal and replace” of Obamacare was being seen as an acid test of whether Trump could deliver on his fiscal plans and the difficulties encountered were a “bad omen” for tax reform.
  •  
    After watching Inside Job, it is so interesting to see the way the world market flows around the major countries, and how the small countries rely on the success of the big ones. It will be important to monitor whether Trump will be able to implement his campaign claims regarding the market and taxes.
Javier E

The Benefits of Bilingualism - NYTimes.com - 2 views

  • Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age.
  • in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles.
  • the bilingual experience improves the brain’s so-called executive function — a command system that directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while driving.
  • The key difference between bilinguals and monolinguals may be more basic: a heightened ability to monitor the environment. “Bilinguals have to switch languages quite often — you may talk to your father in one language and to your mother in another language,” says Albert Costa, a researcher at the University of Pompeu Fabra in Spain. “It requires keeping track of changes around you in the same way that we monitor our surroundings when driving.”
  • individuals with a higher degree of bilingualism — measured through a comparative evaluation of proficiency in each language — were more resistant than others to the onset of dementia and other symptoms of Alzheimer’s disease: the higher the degree of bilingualism, the later the age of onset.
julia rhodes

How people learn - The Week - 0 views

  • In a traditional classroom, the teacher stands at the front of the class explaining what is clear in their mind to a group of passive students. Yet this pedagogical strategy doesn't positively impact retention of information from lecture, improve understanding of basic concepts, or affect beliefs (that is, does new information change your belief about how something works).
  • Given that lectures were devised as a means of transferring knowledge from one to many, it seems obvious that we would ensure that people retain the information they are consuming.
  • The research tells us that the human brain can hold a maximum of about seven different items in its short-term working memory and can process no more than about four ideas at once. Exactly what an "item" means when translated from the cognitive science lab into the classroom is a bit fuzzy.
  • The results were similarly disturbing when students were tested to determine understanding of basic concepts. More instruction wasn't helping students advance from novice to expert. In fact, the data indicated the opposite: students had more novice-like beliefs after they completed a course than they had when they started.
  • But in addition, experts have a mental organizational structure that facilitates the retrieval and effective application of their knowledge.
  • experts have an ability to monitor their own thinking ("metacognition"), at least in their discipline of expertise. They are able to ask themselves, "Do I understand this? How can I check my understanding?"
  • But that is not what cognitive science tells us. It tells us instead that students need to develop these different ways of thinking by means of extended, focused mental effort.
  • new ways of thinking are always built on the prior thinking of the individual, so if the educational process is to be successful, it is essential to take that prior thinking into account.
  • Everything that constitutes "understanding" science and "thinking scientifically" resides in the long-term memory, which is developed via the construction and assembly of component proteins.
  • What is elementary, worldly wisdom? Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form.
  • "So it makes perfect sense," Wieman writes, "that they are not learning to think like experts, even though they are passing science courses by memorizing facts and problem-solving recipes."
  • Anything one can do to reduce cognitive load improves learning.
  • A second way teachers can improve instruction is by recognizing the importance of student beliefs about science
  • My third example of how teaching and learning can be improved is by implementing the principle that effective teaching consists of engaging students, monitoring their thinking, and providing feedback.
  • I assign students to groups the first day of class (typically three to four students in adjacent seats) and design each lecture around a series of seven to 10 clicker questions that cover the key learning goals for that day.
  • The process of critiquing each other's ideas in order to arrive at a consensus also enormously improves both their ability to carry on scientific discourse and to test their own understanding.
grayton downing

Measuring Consciousness | The Scientist Magazine® - 0 views

  • General anesthesia has transformed surgery from a ghastly ordeal to a procedure in which the patient feels no pain.
  • “integrated-information theory,” which holds that consciousness relies on communication between different brain areas, and fades as that communication breaks down.
  • neural markers of consciousness—or more precisely, the loss of consciousness—a group led by Patrick Purdon
  • The purpose of the surgery was to remove electrodes that had previously been implanted in the patients’ brains to monitor seizures. But before they were taken out, the electrodes enabled the researchers to study the activity of individual neurons in the cortex, in addition to large-scale brain activity from EEG recordings.
  • importance of communication between discrete groups of neurons, both within the cortex and across brain regions, is analogous to a band playing music, said George Mashour, a neuroscientist and anesthesiologist at the University of Michigan, Ann Arbor. “You need musical information to come together either in time or space to really make sense,”
  • “Consciousness and cognitive activity may be similar. If different areas of the brain aren’t in synch or if a critical area that normally integrates cognitive activity isn’t functioning, you could be rendered unconscious.”
  • Purdon and colleagues were able to discern a more detailed neural signature of loss of consciousness, this time by using EEG alone. Monitoring brain activity in healthy patients for 2 hours as they underwent propofol-induced anesthesia, they observed that as responsiveness fades, high-frequency brain waves (12–35 hertz) rippling across the cortex and the thalamus were replaced by two different brain waves superimposed on top of one another: a low-frequency (<1 hertz) wave and an alpha frequency (8–12 hertz) wave. “These two waves pretty much come at loss of consciousness,” (A toy band-power sketch follows this list.)
  • “We’ve started to teach our anesthesiologists how to read this signature on the EEG”
  • Anesthesia is not the only state in which consciousness is lost, of course
  • To measure the gradual breakdown of connectivity between neural networks during natural REM sleep and anesthesia, as well as in brain-injured, unresponsive patients, the researchers use an electromagnetic coil to activate neurons in a small patch of the human cortex, then record EEG output to track the propagation of those signals to other neuronal groups, measuring the connectivity between collections of neurons in the cortex and other brain regions.
  • In minimally conscious patients, the magnetically stimulated signals propagated fairly far and wide, occasionally reaching distant cortical areas, much like activations seen in locked-in but conscious patients. In patients in a persistent vegetative state, on the other hand, propagation was severely limited—a breakdown of connectivity similar to that observed in previous tests of anesthetized patients. What’s more, in three vegetative patients who later recovered consciousness, the test picked up signs of increased connectivity before clinical signs of improvement became evident.
  • “I think understanding consciousness itself is going to help us find successful [measurement] approaches that are universally applicable,” said Pearce.
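
The EEG signature described above, with power shifting out of the 12–35 hertz range and into slow (<1 hertz) and alpha (8–12 hertz) waves at loss of consciousness, is the kind of pattern that can be summarized with simple band-power estimates. The sketch below is not Purdon's analysis pipeline; it only shows one plausible way to compute relative band power for a single EEG channel, using a synthetic trace and an assumed sampling rate in place of real recordings.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in hertz (not taken from the study)

# Synthetic 30-second "unconscious-like" trace: a strong slow wave plus an
# alpha wave and some broadband noise. Purely illustrative, not real EEG.
t = np.arange(0, 30, 1 / FS)
trace = (40 * np.sin(2 * np.pi * 0.5 * t)       # slow (<1 Hz) component
         + 15 * np.sin(2 * np.pi * 10 * t)      # alpha (8-12 Hz) component
         + 5 * np.random.default_rng(0).normal(size=t.size))

BANDS = {
    "slow (<1 Hz)": (0.1, 1.0),
    "alpha (8-12 Hz)": (8.0, 12.0),
    "beta/low gamma (12-35 Hz)": (12.0, 35.0),
}


def relative_band_power(x, fs):
    """Fraction of total spectral power falling in each band (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * fs)  # 4-second segments
    total = psd.sum()
    return {name: float(psd[(freqs >= lo) & (freqs <= hi)].sum() / total)
            for name, (lo, hi) in BANDS.items()}


if __name__ == "__main__":
    for band, frac in relative_band_power(trace, FS).items():
        print(f"{band}: {frac:.2f}")  # slow and alpha dominate this synthetic trace
```

On a real recording, a monitor of the kind anesthesiologists are being taught to read would track how these fractions evolve over time rather than report a single snapshot.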
anonymous

Report Reveals Wider Tracking of Mail in U.S. - NYTimes.com - 0 views

  • WASHINGTON — In a rare public accounting of its mass surveillance program, the United States Postal Service reported that it approved nearly 50,000 requests last year from law enforcement agencies and its own internal inspection unit to secretly monitor the mail of Americans for use in criminal and national security investigations.
Javier E

How Tech Can Turn Doctors Into Clerical Workers - The New York Times - 0 views

  • what I see in my colleague is disillusionment, and it has come too early, and I am seeing too much of it.
  • In America today, the patient in the hospital bed is just the icon, a place holder for the real patient who is not in the bed but in the computer. That virtual entity gets all our attention. Old-fashioned “bedside” rounds conducted by the attending physician too often take place nowhere near the bed but have become “card flip” rounds
  • My young colleague slumping in the chair in my office survived the student years, then three years of internship and residency and is now a full-time practitioner and teacher. The despair I hear comes from being the highest-paid clerical worker in the hospital: For every one hour we spend cumulatively with patients, studies have shown, we spend nearly two hours on our primitive Electronic Health Records, or “E.H.R.s,” and another hour or two during sacred personal time.
  • The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? What are they doing? Hello! It’s my body, you know
  • Our $3.4 trillion health care system is responsible for more than a quarter of a million deaths per year because of medical error, the rough equivalent of, say, a jumbo jet’s crashing every day.
  • I can get cash and account details all over America and beyond. Yet I can’t reliably get a patient record from across town, let alone from a hospital in the same state, even if both places use the same brand of E.H.R
  • the leading E.H.R.s were never built with any understanding of the rituals of care or the user experience of physicians or nurses. A clinician will make roughly 4,000 keyboard clicks during a busy 10-hour emergency-room shift
  • In the process, our daily progress notes have become bloated cut-and-paste monsters that are inaccurate and hard to wade through. A half-page, handwritten progress note of the paper era might in a few lines tell you what a physician really thought
  • so much of the E.H.R., but particularly the physical exam it encodes, is a marvel of fiction, because we humans don’t want to leave a check box empty or leave gaps in a template.
  • For a study, my colleagues and I at Stanford solicited anecdotes from physicians nationwide about patients for whom an oversight in the exam (a “miss”) had resulted in real consequences, like diagnostic delay, radiation exposure, therapeutic or surgical misadventure, even death. They were the sorts of things that would leave no trace in the E.H.R. because the recorded exam always seems complete — and yet the omission would be glaring and memorable to other physicians involved in the subsequent care. We got more than 200 such anecdotes.
  • The reason for these errors? Most of them resulted from exams that simply weren’t done as claimed. “Food poisoning” was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.
  • I worry that such mistakes come because we’ve gotten trapped in the bunker of machine medicine. It is a preventable kind of failure
  • How we salivated at the idea of searchable records, of being able to graph fever trends, or white blood counts, or share records at a keystroke with another institution — “interoperability”
  • The seriously ill patient has entered another kingdom, an alternate universe, a place and a process that is frightening, infantilizing; that patient’s greatest need is both scientific state-of-the-art knowledge and genuine caring from another human being. Caring is expressed in listening, in the time-honored ritual of the skilled bedside exam — reading the body — in touching and looking at where it hurts and ultimately in localizing the disease for patients not on a screen, not on an image, not on a biopsy report, but on their bodies.
  • What if the computer gave the nurse the big picture of who he was both medically and as a person?
  • a professor at M.I.T. whose current interest in biomedical engineering is “bedside informatics,” marvels at the fact that in an I.C.U., a blizzard of monitors from disparate manufacturers display EKG, heart rate, respiratory rate, oxygen saturation, blood pressure, temperature and more, and yet none of this is pulled together, summarized and synthesized anywhere for the clinical staff to use
  • What these monitors do exceedingly well is sound alarms, an average of one alarm every eight minutes, or more than 180 per patient per day. What is our most common response to an alarm? We look for the button to silence the nuisance because, unlike those in a Boeing cockpit, say, our alarms are rarely diagnosing genuine danger.
  • By some estimates, more than 50 percent of physicians in the United States have at least one symptom of burnout, defined as a syndrome of emotional exhaustion, cynicism and decreased efficacy at work
  • It is on the increase, up by 9 percent from 2011 to 2014 in one national study. This is clearly not an individual problem but a systemic one, a 4,000-key-clicks-a-day problem.
  • The E.H.R. is only part of the issue: Other factors include rapid patient turnover, decreased autonomy, merging hospital systems, an aging population, the increasing medical complexity of patients. Even if the E.H.R. is not the sole cause of what ails us, believe me, it has become the symbol of burnout.
  • burnout is one of the largest predictors of physician attrition from the work force. The total cost of recruiting a physician can be nearly $90,000, but the lost revenue per physician who leaves is between $500,000 and $1 million, even more in high-paying specialties.
  • I hold out hope that artificial intelligence and machine-learning algorithms will transform our experience, particularly if natural-language processing and video technology allow us to capture what is actually said and done in the exam room.
  • as with any lab test, what A.I. will provide is at best a recommendation that a physician using clinical judgment must decide how to apply.
  • True clinical judgment is more than addressing the avalanche of blood work, imaging and lab tests; it is about using human skills to understand where the patient is in the trajectory of a life and the disease, what the nature of the patient’s family and social circumstances is and how much they want done.
  • Much of that is a result of poorly coordinated care, poor communication, patients falling through the cracks, knowledge not being transferred and so on, but some part of it is surely from failing to listen to the story and diminishing skill in reading the body as a text.
  • As he was nearing death, Avedis Donabedian, a guru of health care metrics, was asked by an interviewer about the commercialization of health care. “The secret of quality,” he replied, “is love.”
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware o
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the court
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
caelengrubb

I'm So Totally Over Newton's Laws of Motion | WIRED - 0 views

  • We don't need to be stuck with the traditions of the past if we want students to understand physics.
  • Newton's First Law: An object in motion stays in motion unless acted on by a force; an object at rest stays at rest unless acted on by a force. Newton's Second Law: The magnitude of an object's acceleration is proportional to the net force and inversely proportional to the mass of the object. Newton's Third Law: For every force there is an equal and opposite force. (I've already complained about the way most books talk about this one)
  • Newton's First Law Is Really About Aristotle
  • ...7 more annotations...
  • Remember that before Galileo and Newton, people looked to Aristotle for ideas about physics
  • Yes, it's true that Aristotle wasn't a scientist since he didn't really do any experiments. However, that didn't stop him from becoming a huge influence on the way people think about physics
  • Do I think that we should ban Newton's Laws? No. There is still a place to talk about the historical development of the interaction between forces and matter and Newton played a large role here (but so did Aristotle and Galileo).
  • Let's write down Newton's Second Law in its common form as an equation: F_net = ma. Although this is a very useful model, it doesn't always work. If you take a proton moving at half the speed of light and push on it with a force, you cannot use this to find the new velocity of the proton---but it's still a great model. So, maybe we shouldn't call it a Law. (A short note on where the common form breaks down follows this list.)
  • Science is all about models. If there is one thing I've tried to be consistent about---it's that we build models in science. These models could be conceptual, physical, or mathematical
  • Since Newton's ideas are Laws, does that mean that they are true? No---there is no truth in science, there are just models. Some models work better than others, and some models are wrong but still useful
  • Just because most physics textbooks (but not all) have been very explicit about Newton's Laws of Motion, this doesn't mean that is the best way for students to learn.
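A brief worked note on the proton bullet above. This is not from the WIRED piece itself; it is the standard textbook statement of why F = ma fails near light speed, with the relativistic momentum form that replaces it.

```latex
% Common classical form of the Second Law: net force = mass times acceleration.
\vec{F}_{\mathrm{net}} = m\,\vec{a}

% More general momentum form, which carries over into special relativity:
\vec{F}_{\mathrm{net}} = \frac{d\vec{p}}{dt},
\qquad
\vec{p} = \gamma m \vec{v},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

% For a proton at v = c/2, \gamma \approx 1.15, so F = ma over-predicts how
% much the speed changes under a given push: a useful model, but only in the
% low-speed limit.
```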
margogramiak

How To Fight Deforestation In The Amazon From Your Couch | HuffPost - 0 views

  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
  • Some 15% of the Amazon, the world’s largest rainforest and a crucial carbon repository, has been cut or burned down. Around two-thirds of the Amazon lie within Brazil’s borders, where almost 157 square miles of forest were cleared in April alone. In addition to storing billions of tons of carbon, the Amazon is home to tens of millions of people and some 10% of the Earth’s biodiversity.
    • margogramiak
       
      all horrifying stats.
  • you just have to be a citizen that is concerned about the issue of deforestation,
    • margogramiak
       
      that's me!
  • ...12 more annotations...
  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
    • margogramiak
       
      great!
  • to build an artificial intelligence model that can recognize signs of deforestation. That data can be used to alert governments and conservation organizations where intervention is needed and to inform policies that protect vital ecosystems. It may even one day predict where deforestation is likely to happen next.
    • margogramiak
       
      That sounds super cool, and definitely useful.
  • To monitor deforestation, conservation organizations need an eye in the sky.
    • margogramiak
       
      bird's eye view pictures of deforestation are always super impactful.
  • WRI’s Global Forest Watch online tracking system receives images of the world’s forests taken every few days by NASA satellites. A simple computer algorithm scans the images, flagging instances where before there were trees and now there are not. But slight disturbances, such as clouds, can trip up the computer, so experts are increasingly interested in using artificial intelligence. (A rough sketch of this kind of before-and-after check appears after this list of notes.)
    • margogramiak
       
      that's so cool.
  • Inman was surprised how willing people have been to spend their time clicking on abstract-looking pictures of the Amazon.
    • margogramiak
       
      I'm glad so many people want to help.
  • “Look at these nine blocks and make a judgment about each one. Does that satellite image look like a situation where human beings have transformed the landscape in some way?” Inman explained.
    • margogramiak
       
      seems simple enough
  • It’s not always easy; that’s the point. For example, a brown patch in the trees could be the result of burning to clear land for agriculture (earning a check mark for human impact), or it could be the result of a natural forest fire (no check mark). Keen users might be able to spot subtle signs of intervention the computer would miss, like the thin yellow line of a dirt road running through the clearing. 
    • margogramiak
       
      I was thinking about this issue... that's a hard problem to solve.
  • SAS’s website offers a handful of examples comparing natural forest features and manmade changes. 
    • margogramiak
       
      I guess that would be helpful. What happens if someone messes up though?
  • users have analyzed almost 41,000 images, covering an area of rainforest nearly the size of the state of Montana. Deforestation caused by human activity is evident in almost 2 in 5 photos.
    • margogramiak
       
      wow.
  • The researchers hope to use historical images of these new geographies to create a predictive model that could identify areas most at risk of future deforestation. If they can show that their AI model is successful, it could be useful for NGOs, governments and forest monitoring bodies, enabling them to carefully track forest changes and respond by sending park rangers and conservation teams to threatened areas. In the meantime, it’s a great educational tool for the citizen scientists who use the app
    • margogramiak
       
      But then what do they do with this data? How do they use it to make a difference?
  • Users simply select the squares in which they’ve spotted some indication of human impact: the tell-tale quilt of farm plots, a highway, a suspiciously straight edge of tree line. 
    • margogramiak
       
      I could do that!
  • we have still had people from 80 different countries come onto the app and make literally hundreds of judgments that enabled us to resolve 40,000 images,
    • margogramiak
       
      I like how in a sense it makes all the users one big community because of their common goal of wanting to help the earth.
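To make the "flag where trees disappeared" step quoted above concrete, here is a minimal sketch of a naive before/after comparison on two co-registered satellite images. It is not Global Forest Watch's or SAS's actual pipeline; the vegetation index, thresholds, and data layout are illustrative assumptions, and a real system would also have to cope with clouds, seasons, and sensor noise.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: high over healthy canopy."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def flag_possible_clearing(before: dict, after: dict,
                           veg_threshold: float = 0.6,
                           drop_threshold: float = 0.3) -> np.ndarray:
    """Boolean mask of pixels that looked forested 'before' but show a
    large vegetation-index drop 'after'.

    `before` and `after` are dicts with 'red' and 'nir' float arrays
    (an illustrative format; a real pipeline would read GeoTIFF bands).
    """
    ndvi_before = ndvi(before["red"], before["nir"])
    ndvi_after = ndvi(after["red"], after["nir"])
    was_forest = ndvi_before > veg_threshold           # plausibly trees before
    big_drop = (ndvi_before - ndvi_after) > drop_threshold
    return was_forest & big_drop

# Toy usage with random data standing in for two satellite passes.
rng = np.random.default_rng(0)
shape = (256, 256)
before = {"red": rng.uniform(0.05, 0.15, shape), "nir": rng.uniform(0.4, 0.6, shape)}
after = {"red": before["red"], "nir": before["nir"] * 0.4}  # NIR dims as if canopy were lost
mask = flag_possible_clearing(before, after)
print(f"{mask.mean():.1%} of pixels flagged for human review")
```

The article's point is that exactly this sort of simple differencing is what clouds and natural fires confuse, which is why the project asks citizen scientists, and eventually an AI model, to judge whether a flagged change looks human-made.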
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • People choose computers as intermediaries for the sensual delight of using computers
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • Doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • The present status of intelligent machines is more powerful than any future robot apocalypse.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment.Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
katherineharron

Blackout Tuesday: Why posting a black image with the hashtag #blm is doing more harm th... - 0 views

  • It's Blackout Tuesday, a day promoted by activists to observe, mourn and bring about policy change in the wake of the death of George Floyd. This movement has spread on social media, where organizations, brands and individuals are posting solemn messages featuring stark black backgrounds, sometimes tagging the posts with #BlackLivesMatter.
  • Here's the problem. While these posts may be well-intended, several activists and influencers have pointed out that posting a blank black image with a bunch of tags clogs up critical channels of information and updates.
  • One, the actual tags used on Blackout Tuesday posts. Two, the actual purpose of posting a black image in the first place.
  • ...6 more annotations...
  • one of the most common ways to keep track of all of this is by monitoring or searching tags.
  • it gets automatically added to a searchable feed, which people can find using that tag. It's a common way for people to monitor a situation or interest. And since people have been including the #BlackLivesMatter tag, in the words of activist Feminista Jones, the protests have been erased from Instagram.
  • "When you check the #BlackLivesMatter hashtag, it's no longer videos, helpful information, resources, documentation of the injustice, it's rows of black screens,
  • Blackout Tuesday gained traction from the work of music executives Jamila Thomas and Brianna Agyemang, who led an effort in the music community to pause normal business operations on June 2nd "in observance of the long-standing racism and inequality that exists from the boardroom to the boulevard."
  • However, there's concern that while what amounts to a virtual moment of silence may be a powerful reminder to some, it comes at a time when the voices of black activists and advocates are needed the most.
  • However, some people have taken the call to action to mean a pause on posting about personal things or issues unrelated to Black Lives Matter or the ongoing protests rather than complete silence. Some widely shared posts about the day encourage people to refrain from self-promotion and use their presence on various platforms to uplift members of the black community instead.
Javier E

The Worst Part of the Woodward Tapes Isn't COVID. - 0 views

  • 1. Woodward
  • I'd like to take the other side of this Trump-Woodward story and offer two curveball views:
  • (1) I do not believe that Donald Trump "knew" how dangerous the coronavirus was. Allow me to explain.
  • ...21 more annotations...
  • This is simply how the man talks. About everything. What's more, he says everything, takes both sides of everything:
  • Does he believe any of this, either way? Almost certainly not. The man has the brain of a goldfish: He "believes" whatever is in front of him in the moment. No matter whether or not it contradicts something he believed five minutes ago or will believe ten minutes from now.
  • All this guy does is try to create panic. That's his move
  • (2) The most alarming part of the Woodward tapes is the way Trump talks about Kim Jong Un and the moment when Trump literally takes sides with Kim Jong Un against a former American president.
  • In a way, it would be comforting to believe that our president was intelligent enough to grasp the seriousness of the coronavirus, even if his judgment in how to deal with the outbreak was malicious or poor.
  • All of the available evidence suggests the opposite:
  • Donald Trump lacks the cognitive ability to understand any concepts more complicated than self-promotion or self-preservation.
  • Put those two together—constant exaggerating self-aggrandizement and the perpetual attempt to stoke panic—and what you have is a guy who was just saying stuff to Woodward.
  • After the Woodward tapes, anyone still deluding themselves about the authoritarian danger Trump poses to America is, finally, all out of excuses.
  • This, right here, is the most damning revelation from the Woodward tapes (so far):   Trump reflected on his relationships with authoritarian leaders generally, including Turkish President Recep Tayyip Erdogan. “It’s funny, the relationships I have, the tougher and meaner they are, the better I get along with them,” he told Woodward. “You know? Explain that to me someday, okay?” It's not hard to explain. And it's not funny.
  • You have this incredible rise in interest in technology and excitement about technology and the beat itself really took off while I was there. But then at the same time, you have this massive new centralization of government control over technology and the use of technology to control people and along with that rising nationalism.
  • Paul Mozur, who covers China and tech for the New York Times and is currently living in Taiwan, after the Chinese expelled all foreign journalists. 
  • That was more apparent, I think, over the past five years or so after Xi Jinping really consolidated power, but the amount of cameras that went up on street corners, the degree to which you used to be able to — there’s a moment maybe seven or eight years ago — where Jack Ma talked about the Tiananmen Square crackdowns on Chinese social media and now that’s just so utterly unthinkable. The degree to which the censorship has increased now to the level where if you say certain things on WeChat, it’s very possible the police will show up at your door where you actually have a truly fully formed Internet Police. . .
  • I think a lot of Chinese people feel more secure from the cameras, there’s been a lot of propaganda out there saying the cameras are here for your safety. There is this extremely positive, almost Utopian take on technology in China, and a lot of the stuff that I think, our knee-jerk response from the United States would be to be worried about, they kind of embrace as a vision of the future. .
  • The main reasons WeChat is a concern if you were the United States government is number one, it’s become a major vector of the spread of Chinese propaganda and censorship, and because it’s a social network that is anchored by a vast majority of users in China who are censored and who are receptive to all this propaganda, even if you’re overseas using WeChat and not censored in the same way, what you get is mostly content shared from people who are living in a censored environment, so it basically stays a censored environment. I call that a super filter bubble; the idea is that there are multiple filter bubbles contending in a website like Facebook, but with WeChat, because it’s so dominated by government controls, you get one really big mega pro-China filter bubble that then is spread all over the world over the app, even if people outside of China don’t face the same censorship. So that’s one thing.
  • The second is the surveillance is immense and anybody who creates an account in China brings the surveillance with them overseas
  • And most people, frankly, using WeChat overseas probably created the accounts in China, and even when they don’t create the account in China, when national security priorities hit a certain level, I think they’re probably going to use it to monitor people anyway. I’ve run into a number of people who have had run-ins with the Chinese Internet Police either in China, but some of them outside of China, in their day-to-day life using WeChat, and then they return home and it becomes apparent that the Internet Police were watching them the whole time, and they get a visit and the police have a discussion with them about what their activities have been
  • So it’s also a major way that the Chinese government is able to spy on and monitor people overseas and then unsurprisingly, because of that, it’s used as a way for the Chinese intel services to harass people overseas. . . .
  • WeChat is particularly suited to this in part because every single person who uses WeChat within China has it linked to their real identity. And then because everybody on WeChat has linked to their real identity, you can map their relationship networks and lean on them that way.
  • It also has a bunch of tools that the Chinese police use, for instance key words, where you can set an alarm so that if you were to say “Tiananmen”, they could set an alarm so that anytime you say that they get a warning about that, and then they go look at what you’ve written. So there’s all these tools that are uniquely created for Chinese state surveillance that are within the app that they can also use, so there’s a bunch of ways that the app is just better.
  • It’s also one of the very few unblocked communication tools that goes between the two countries. So for all these reasons it’s a very, very big deal. For the Chinese government, it’s an important tool of social control, and it’s been a way that they’ve been able to take the social controls that exist within China and expand them to the diaspora community in some pretty unnerving ways.
Javier E

Why Is It So Hard to Be Rational? | The New Yorker - 0 views

  • an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio).
  • When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.
  • And yet rationality has sharp edges that make it hard to put at the center of one’s life
  • ...43 more annotations...
  • You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,”
  • You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus.
  • Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect.
  • modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.”
  • Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.
  • Rationality is one of humanity’s superpowers. How do we keep from misusing it?
  • Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.
  • Bayesian reasoning implies a few “best practices.”
  • Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat
  • We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust.
  • But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities.
  • Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful.
  • the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size.
  • In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization.
  • Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive.
  • You can know what’s right but still struggle to do it.
  • Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds.
  • For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. 
  • I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart.
  • between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.
  • Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.
  • in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together.
  • The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula.
  • The real challenge isn’t being right but knowing how wrong you might be.By Joshua RothmanAugust 16, 2021
  • Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?).
  • Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?
  • For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest.
  • In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently
  • Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.)
  • Galef tends to see rationality as a method for acquiring more accurate views.
  • Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want.
  • Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.”
  • A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.
  • In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before
  • The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”
  • metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational
  • There are many calibration methods
  • Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge.
  • Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps.
  • The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes
  • So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.
  • the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it. (A small worked example of such an update appears after these notes.)
  • Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information.
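A minimal worked example of the update described above, using the standard form of Bayes' rule. The numbers (a 0.2 prior and the two likelihoods) are illustrative assumptions, not drawn from the New Yorker article.

```latex
% Bayes' rule: the posterior is the prior reweighted by how well the
% hypothesis H predicts the new evidence E.
P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                   {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

% Illustrative numbers: prior P(H) = 0.2; the evidence is four times
% likelier if H is true, P(E|H) = 0.8 versus P(E|not H) = 0.2.
P(H \mid E) = \frac{0.8 \times 0.2}{0.8 \times 0.2 + 0.2 \times 0.8}
            = \frac{0.16}{0.32} = 0.5
```

The new data moves the estimate from 20 percent to 50 percent: it modifies the prior rather than replacing it, which is the "appropriate degree" of adjustment the passage describes.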