TOK Friends: Group items tagged "concise"

oliviaodon

EFFECTIVE USE OF LANGUAGE - 0 views

  • To communicate effectively, it is not enough to have well organized ideas expressed in complete and coherent sentences and paragraphs. One must also think about the style, tone and clarity of his/her writing, and adapt these elements to the reading audience. Again, analyzing one's audience and purpose is the key to writing effectiveness. In order to choose the most effective language, the writer must consider the objective of the document, the context in which it is being written, and who will be reading it.
  • Effective language is: (1) concrete and specific, not vague and abstract; (2) concise, not verbose; (3) familiar, not obscure; (4) precise and clear, not inaccurate or ambiguous; (5) constructive, not destructive; and (6) appropriately formal.
  • This article is informative and extremely helpful if you are preparing for an essay or presentation!
Javier E

The Choice Explosion - The New York Times - 0 views

  • the social psychologist Sheena Iyengar asked 100 American and Japanese college students to take a piece of paper. On one side, she had them write down the decisions in life they would like to make for themselves. On the other, they wrote the decisions they would like to pass on to others.
  • The Americans desired choice in four times more domains than the Japanese.
  • Americans now have more choices over more things than any other culture in human history. We can choose between a broader array of foods, media sources, lifestyles and identities. We have more freedom to live out our own sexual identities and more religious and nonreligious options to express our spiritual natures.
  • ...15 more annotations...
  • But making decisions well is incredibly difficult, even for highly educated professional decision makers. As Chip Heath and Dan Heath point out in their book “Decisive,” 83 percent of corporate mergers and acquisitions do not increase shareholder value, 40 percent of senior hires do not last 18 months in their new position, 44 percent of lawyers would recommend that a young person not follow them into the law.
  • It’s becoming incredibly important to learn to decide well, to develop the techniques of self-distancing to counteract the flaws in our own mental machinery. The Heath book is a very good compilation of those techniques.
  • assume positive intent. When in the midst of some conflict, start with the belief that others are well intentioned. It makes it easier to absorb information from people you’d rather not listen to.
  • Suzy Welch’s 10-10-10 rule. When you’re about to make a decision, ask yourself how you will feel about it 10 minutes from now, 10 months from now and 10 years from now. People are overly biased by the immediate pain of some choice, but they can put the short-term pain in long-term perspective by asking these questions.
  • An "explosion" that may also be a "dissolution" or "disintegration," in my view. Unlimited choices. Conduct without boundaries. All of which may be viewed as either "great" or "terrible." The poor suffer when they have no means to pursue choices, which is terrible. The rich seem only to want more and more, wealth without boundaries, which is great for those able to do so. Yes, we need a new decision-making tool, but perhaps one that is also very old: simplify, simplify, simplify by setting moral boundaries that apply to all and which define concisely what our life together ought to be.
  • our tendency to narrow-frame, to see every decision as a binary “whether or not” alternative. Whenever you find yourself asking “whether or not,” it’s best to step back and ask, “How can I widen my options?”
  • deliberate mistakes. A survey of new brides found that 20 percent were not initially attracted to the man they ended up marrying. Sometimes it’s useful to make a deliberate “mistake” — agreeing to dinner with a guy who is not your normal type. Sometimes you don’t really know what you want and the filters you apply are hurting you.
  • It makes you think that we should have explicit decision-making curriculums in all schools. Maybe there should be a common course publicizing the work of Daniel Kahneman, Cass Sunstein, Dan Ariely and others who study the way we mess up and the techniques we can adopt to prevent error.
  • The explosion of choice places extra burdens on the individual. Poorer Americans have fewer resources to master decision-making techniques, less social support to guide their decision-making and less of a safety net to catch them when they err.
  • the stress of scarcity itself can distort decision-making. Those who experienced stress as children often perceive threat more acutely and live more defensively.
  • The explosion of choice means we all need more help understanding the anatomy of decision-making.
  • living in an area of concentrated poverty can close down your perceived options, and comfortably “relieve you of the burden of choosing life.” It’s hard to maintain a feeling of agency when you see no chance of opportunity.
  • In this way the choice explosion has contributed to widening inequality.
  • The relentless all-hour reruns of "Law and Order" in 100-channel cable markets provide a direct rebuff to the touted but hollow promise/premise of wider "choice." The small group of personalities debating a pre-framed trivial point of view, over and over, nightly/daily (in video clips), without data, global comparison, historic reference, regional content, or a deep commitment to truth or knowledge of facts has resulted in many choosing narrower limits: streaming music, coffee shops, Facebook--now a "choice" of 1.65 billion users.
  • It’s important to offer opportunity and incentives. But we also need lessons in self-awareness — on exactly how our decision-making tool is fundamentally flawed, and on mental frameworks we can adopt to avoid messing up even more than we do.
Javier E

China: A Modern Babel - WSJ - 0 views

  • The oft-repeated claim that we must all learn Mandarin Chinese, the better to trade with our future masters, is one that readers of David Moser’s “A Billion Voices” will rapidly end up re-evaluating.
  • In fact, many Chinese don’t speak it: Even Chinese authorities quietly admit that only about 70% of the population speaks Mandarin, and merely one in 10 of those speak it fluently.
  • Mr. Moser presents a history of what is more properly called Putonghua, or “common speech,” along with a clear, concise and often amusing introduction to the limits of its spoken and written forms.
  • ...12 more annotations...
  • what Chinese schoolchildren are encouraged to think of as the longstanding natural speech of the common people is in fact an artificial hybrid, only a few decades old, although it shares a name—Mandarin—with the language of administration from imperial times. It’s a designed-by-committee camel of a language that has largely lost track of its past.
  • The idea of a national Chinese language began with the realization by the accidentally successful revolutionaries of 1911 that retaining control over a country speaking multiple languages and myriad dialects would necessitate reform. Long-term unification and the introduction of mass education would require a common language.
  • Whatever the province they originated from, the administrators of the now-toppled Great Qing Empire had all learned to communicate with one another in a second common language—Guanhua, China’s equivalent, in practical terms, of medieval Latin.
  • To understand this highly compressed idiom required a considerable knowledge of the Chinese classics. Early Jesuit missionaries had labeled it Mandarin.
  • The committee decided that the four-tone dialect of the capital would be the base for a new national language but added a fifth tone whose use had lapsed in the north but not in southern dialects. The result was a language that no one actually spoke.
  • After the Communist victory of 1949, the process began all over again with fresh conferences, leading finally to the decision to use Beijing sounds, northern dialects and modern literature in the vernacular (of which there was very little) as a source of grammar.
  • This new spoken form is what is now loosely labeled Mandarin, still as alien to most Chinese as all the other Chinese languages.
  • A Latin alphabet system called Pinyin was introduced to help children learn to pronounce Chinese characters, but today it is usually abandoned after the first few years of elementary school.
  • The view that Mandarin is too difficult for mere foreigners to learn is essential to Chinese amour propre. But it is belied by the number of foreign high-school students who now learn the language by using Pinyin as a key to pronunciation—and who bask in the admiration they receive as a result.
  • Since 1949, the Chinese government, obsessed with promoting the image of a nation completely united in its love of the Communist Party, has decided that the Chinese people speak not several different languages but the same one in a variety of dialects. To say otherwise is to suggest, dangerously, that China is not one nation.
  • Yet on Oct. 1, 1949, Mao Zedong announced the founding of the People’s Republic in a Hunan accent so thick that members of his audience subsequently differed about what he had said. He never mastered the Beijing sounds on which Putonghua is based, nor did Sichuanese-speaking Deng Xiaoping or most of his successors.
  • When Xi Jinping took power in 2012, many online commentators rejoiced. “At last! A Chinese leader who can speak Putonghua!” One leader down, only 400 million more common people to go.
Javier E

Emmy Noether, the Most Significant Mathematician You've Never Heard Of - NYTimes.com - 0 views

  • Albert Einstein called her the most “significant” and “creative” female mathematician of all time, and others of her contemporaries were inclined to drop the modification by sex. She invented a theorem that united with magisterial concision two conceptual pillars of physics: symmetry in nature and the universal laws of conservation. Some consider Noether’s theorem, as it is now called, as important as Einstein’s theory of relativity; it undergirds much of today’s vanguard research in physics.
  • At Göttingen, she pursued her passion for mathematical invariance, the study of numbers that can be manipulated in various ways and still remain constant. In the relationship between a star and its planet, for example, the shape and radius of the planetary orbit may change, but the gravitational attraction conjoining one to the other remains the same — and there’s your invariance.
  • Noether’s theorem, an expression of the deep tie between the underlying geometry of the universe and the behavior of the mass and energy that call the universe home. What the revolutionary theorem says, in cartoon essence, is the following: Wherever you find some sort of symmetry in nature, some predictability or homogeneity of parts, you’ll find lurking in the background a corresponding conservation — of momentum, electric charge, energy or the like. If a bicycle wheel is radially symmetric, if you can spin it on its axis and it still looks the same in all directions, well, then, that symmetric translation must yield a corresponding conservation.
  • ...1 more annotation...
  • Noether’s theorem shows that a symmetry of time — like the fact that whether you throw a ball in the air tomorrow or make the same toss next week will have no effect on the ball’s trajectory — is directly related to the conservation of energy, our old homily that energy can be neither created nor destroyed but merely changes form.
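  • A standard textbook sketch of the time-symmetry case above (not from the article): for a Lagrangian $L(q,\dot q,t)$, the Euler-Lagrange equation $\partial L/\partial q = \frac{d}{dt}\,\partial L/\partial \dot q$ implies

    \[
    \frac{d}{dt}\left(\dot q\,\frac{\partial L}{\partial \dot q} - L\right) = -\frac{\partial L}{\partial t},
    \]

    so when the laws do not depend explicitly on time ($\partial L/\partial t = 0$), the energy $E = \dot q\,\partial L/\partial \dot q - L$ is conserved: precisely the symmetry-to-conservation link that Noether's theorem generalizes.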
Javier E

Reimagining Televised Debates - The Daily Dish | By Andrew Sullivan - 0 views

  • television forces those who appear on it to argue "directly, and pointedly, in a short amount of time." This shapes how debates unfold because "concision actually favors the spouting of conventional thinking."
  • What if a television network tried to run a debate show like the back-and-forths that sometimes occur in print?
  • if executed correctly, the quality of argument and entertainment would be far better than any of the talking head exchanges currently broadcast on cable.
pantanoma

BBC News - Mathematics: Why the brain sees maths as beauty - 0 views

  • The same emotional brain centres used to appreciate art were being activated by "beautiful" maths.
  • The researchers suggest there may be a neurobiological basis to beauty.
  • ...2 more annotations...
  • “Neuroscience can’t tell you what beauty is, but if you find it beautiful the medial orbito-frontal cortex is likely to be involved; you can find beauty in anything.”
  • “Given that e, pi and i are incredibly complicated and seemingly unrelated numbers, it is amazing that they are linked by this concise formula.”
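  • The “concise formula” in question is Euler’s identity, which links the three constants:

    \[
    e^{i\pi} + 1 = 0.
    \]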
Javier E

Decoding the Rules of Conversation - NYTimes.com - 0 views

  • Life at Versailles was apparently a protracted battle of wits. You gained status if you showed “esprit” — clever, erudite and often caustic wit, aimed at making rivals look ridiculous. The king himself kept abreast of the sharpest remarks, and granted audiences to those who made them. “Wit opens every door,” one courtier explained. If you lacked “esprit” — or suffered from “l’esprit de l’escalier” (thinking of a comeback only once you had reached the bottom of the staircase) — you’d look ridiculous yourself.
  • But many modern-day conversations — including the schoolyard cries of “Bim!” — make more sense once you realize that everyone around you is in a competition not to look ridiculous.
  • Many children train for this at home. Where Americans might coo over a child’s most inane remark, to boost his confidence, middle-class French parents teach their kids to be concise and amusing, to keep everyone listening. “I force him or her to discover the best ways of retaining my attention,” the anthropologist Raymonde Carroll wrote in her 1987 book “Cultural Misunderstandings: The French-American Experience.”
  • ...7 more annotations...
  • Jean-Benoît Nadeau, a Canadian who co-wrote a forthcoming book on French conversation, told me that the penchant for saying “no” or “it’s not possible” is often a cover for the potential humiliation of seeming not to know something. Only once you trust someone can you turn down the wit and reveal your weaknesses.
  • At least it’s not boring. Even among friends, being dull is almost criminal. A French entrepreneur told me her rules for dinner-party topics: no kids, no jobs, no real estate. Provocative opinions are practically required. “You must be a little bit mean but also a little bit vulnerable,” she said.
  • It’s dizzying to switch to the British conversational mode, in which everyone’s trying to show they don’t take themselves seriously. The result is lots of self-deprecation and ironic banter. I’ve sat through two-hour lunches in London waiting for everyone to stop exchanging quips so the real conversation could begin. But “real things aren’t supposed to come up,” my husband said. “Banter can be the only mode of conversation you ever have with someone.”
  • Earnestness makes British people gag.
  • After being besieged by British irony and French wit, I sometimes yearn for the familiar comfort of American conversations, where there are no stupid questions. Among friends, I merely have to provide reassurance and mirroring: No, you don’t look fat, and anyway, I look worse.
  • It might not matter what I say, since some American conversations resemble a succession of monologues. A 2014 study led by a psychologist at Yeshiva University found that when researchers crossed two unrelated instant-message conversations, as many as 42 percent of participants didn’t notice.
  • A lot of us — myself included — could benefit from a basic rule of improvisational comedy: Instead of planning your next remark, just listen very hard to what the other person is saying. Call it “mindful conversation,” if you like. That’s what the French tend to do
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible.
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, the code that came before.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it.
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules.
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (A minimal code sketch of this state machine appears after this list.)
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. (A toy code analogy of such exhaustive checking appears after this list.)
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language.”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
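  • A minimal sketch of the elevator state machine described above (“door open,” “moving,” “door closed”), in the spirit of a model-based design tool. The states and rules come from the excerpt; the Python encoding and the event names are assumptions for illustration, not the output of any real tool.

    ```python
    # The transition table mirrors the excerpt's rules: the only way to get
    # the elevator moving is to close the door, and the only way to get the
    # door open is to stop.
    TRANSITIONS = {
        "door_open":   {"close_door": "door_closed"},
        "door_closed": {"open_door": "door_open", "start": "moving"},
        "moving":      {"stop": "door_closed"},
    }

    def step(state: str, event: str) -> str:
        """Apply an event; events not allowed in a state leave it unchanged."""
        return TRANSITIONS[state].get(event, state)

    # Example run: a "start" with the door open is simply ignored.
    s = "door_open"
    for event in ["start", "close_door", "start", "stop", "open_door"]:
        s = step(s, event)
        print(event, "->", s)
    ```

    The point of the model-based style is visible even in this toy: the table is the rule set, so an illegal move is unrepresentable rather than merely untested.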
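  • And in the spirit of the TLA+ passage above, a toy exhaustive check: enumerate every reachable state of a small elevator spec and assert a safety invariant in all of them, rather than sampling a few test runs. TLA+ is a real specification language with its own notation and model checker; this Python breadth-first search is only an analogy, and every name in it is invented for illustration.

    ```python
    from collections import deque

    # State = (door, motion). "start" is only enabled with the door closed;
    # a buggy guard that dropped that condition would make ("open", "moving")
    # reachable, and the assertion below would flag it.
    def next_states(state):
        door, motion = state
        if motion == "stopped" and door == "closed":
            yield (door, "moving")                                    # start
        if motion == "moving":
            yield (door, "stopped")                                   # stop
        if motion == "stopped":
            yield ("open" if door == "closed" else "closed", motion)  # toggle door

    def invariant(state):
        door, motion = state
        return not (door == "open" and motion == "moving")  # never move with door open

    # Breadth-first search over the entire reachable state space.
    init = ("open", "stopped")
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        assert invariant(state), f"unsafe state reachable: {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"invariant holds in all {len(seen)} reachable states")
    ```

    Exhaustiveness is the difference from testing: three states make it trivial here, but the same loop checks millions of states mechanically, which is how “extremely rare” combinations of events get caught before deployment.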
katedriscoll

Phantom limb pain: A literature review - 0 views

  • The purpose of this review article is to summarize recent research focusing on phantom limb in order to discuss its definition, mechanisms, and treatments.
  • The incidence of phantom limb pain has varied from 2% in earlier records to higher rates today. Initially, patients were less likely to mention pain symptoms than today, which is a potential explanation for the discrepancy in incidence rates. However, Sherman et al.4 report that only 17% of phantom limb complaints were initially treated by physicians. Consequently, it is important to determine what constitutes phantom pain in order to provide efficacious care. Phantom pain is pain sensation in a limb, organ or other tissue after amputation and/or nerve injury.5 In podiatry, the predominant cause of phantom limb pain is limb amputation due to a diseased state presenting with an unsalvageable limb. Postoperative pain sensations from stump neuroma pain, prosthesis, fibrosis, and residual local tissue inflammation can be similar to phantom limb pain (PLP). Patients with PLP complain of various sensations, including burning, stinging, aching, and piercing pain, with changing warm and cold sensations in the amputated area which wax and wane.6 Onset of symptoms may be elicited by environmental, emotional, or physical changes.
  • The human body encompasses various neurologic mechanisms allowing reception, transport, recognition, and response to numerous stimuli. Pain, temperature, crude touch, and pressure sensory information are carried to the central nervous system via the anterolateral system, with pain and temperature information transferred via the lateral spinothalamic tracts to the parietal lobe. In detail, pain sensation from the lower extremity is transported from a peripheral receptor to first-degree pseudounipolar neurons in the dorsal root ganglion, then decussates and ascends to third-degree neurons within the thalamus.7 This sensory information finally arrives at the primary sensory cortex in the postcentral gyrus of the parietal lobe, which houses the sensory homunculus.8 It is unsurprising that, after an amputation, such an intricate highway of information transport to and from the periphery has the potential for problematic neurologic developments.
  • ...3 more annotations...
  • How does pain sensation, a protection mechanism for the human body, become chronic and unrelenting after limb loss? This is a question researchers still ask today, with no concise conclusion. Phantom limb pain occurs more frequently in patients who also experience longer periods of stump pain and is more likely to subside as the stump pain subsides.9 Researchers have also found that dorsal root ganglion cells change after a nerve is completely cut. The dorsal root ganglion cells become more active and sensitive to chemical and mechanical changes, with potential for plasticity development at the dorsal horn and other areas.10 At the molecular level, increasing glutamate and NMDA (N-methyl d-aspartate) concentrations correlate with increased sensitivity, which contributes to allodynia and hyperalgesia.11 Flor et al.12 further described the significance of maladaptive plasticity and the development of memory for pain and phantom limb pain. They correlated it to the loss of GABAergic inhibition and the development of glutamate-induced long-term potentiation changes and structural changes like myelination and axonal sprouting.
  • Phantom limb pain in some patients may gradually disappear over the course of a few months to one year if not treated, but some patients suffer from phantom limb pain for decades. Treatments include pharmacotherapy, adjuvant therapy, and surgical intervention. There are a variety of medications to choose from, including tricyclic antidepressants, opioids, and NSAIDs. Among these, tricyclic antidepressants are one of the most common treatments. Studies have shown that amitriptyline (a tricyclic antidepressant) has a good effect on relieving neuropathic pain.25
  • Phantom limb pain is very common in amputees. As a worldwide issue, it has been studied by many researchers. Although phantom limb sensation was described by the French military surgeon Ambroise Paré 500 years ago, there is still no detailed explanation of its mechanisms. Therefore, more research will be needed on the different mechanisms of phantom limb pain. Once researchers and physicians are able to identify the mechanism of phantom limb pain, mechanism-based treatment can be developed rapidly. As a result, more patients will benefit from it in the long run.
  • One of the articles we read mentioned phantom limbs. This article goes more in depth on what a phantom limb is, why it happens, and some cures.