
New Media Ethics 2009 course: Group items tagged "Information"


Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live.
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was a chemist by training and a co-founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade (the arithmetic is worked through after this list). Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors.
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works. (A toy repetition-code simulation after this list shows redundancy defeating noise.)
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian [2]. Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • The cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past. (A short derivation after this list makes the sign flip precise.)
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941 [3]. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
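The Shannon annotations above turn on two ideas: information as a measurable quantity (entropy) and reliable communication over a noisy channel through redundancy. The sketch below is a minimal illustration added for concreteness, not code from Gleick or Dyson; it computes the entropy of a source and shows the crudest error-correcting scheme, a repetition code decoded by majority vote.

```python
import math
import random

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one full bit per toss; a biased source carries less,
# which is why redundant language (like written French) can be compressed.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.47

def transmit(bit, flip_prob):
    """A noisy channel: flips the bit with probability flip_prob."""
    return bit ^ 1 if random.random() < flip_prob else bit

def send_with_repetition(bit, n_copies, flip_prob):
    """Add redundancy by sending n copies; decode by majority vote."""
    received = [transmit(bit, flip_prob) for _ in range(n_copies)]
    return int(sum(received) > n_copies / 2)

random.seed(0)
trials = 100_000
for n in (1, 3, 9):
    errors = sum(send_with_repetition(1, n, 0.1) != 1 for _ in range(trials))
    print(f"{n} copies: error rate {errors / trials:.4f}")
# The error rate drops from ~10% with no redundancy toward zero as redundancy
# grows, which is the intuition behind Shannon's noisy-channel result.
```

Repetition is the bluntest form of redundancy; Shannon's theorem guarantees that cleverer codes reach reliability with far less overhead, so the Wikipedia analogy in the annotation is an intuition rather than a literal application of the theorem.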
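The numbers in the Moore's Law annotation above are easy to verify; the arithmetic here is mine, but it reproduces the quoted figures. A doubling every eighteen months compounds as

```latex
2^{10/1.5} = 2^{20/3} \approx 1.0 \times 10^{2}, \qquad
2^{45/1.5} = 2^{30} \approx 1.07 \times 10^{9},
```

a factor of about a hundred per decade and about a billion over forty-five years, as stated. Shannon's Library of Congress estimate likewise squares with the disk-drive remark:

```latex
10^{14}\ \text{bits} = 1.25 \times 10^{13}\ \text{bytes} \approx 12.5\ \text{terabytes}.
```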
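The annotation above on gravitation reversing the relation between energy and temperature rests on the virial theorem. The following is the standard textbook argument for negative heat capacity, not a derivation given in the review:

```latex
2K + U = 0 \quad \text{(virial theorem for a bound, self-gravitating gas)}
\;\Rightarrow\; E = K + U = -K, \qquad K \propto T
\;\Rightarrow\; \frac{dE}{dT} < 0 .
```

Radiating energy away (dE < 0) therefore drives K, and with it the temperature, up. This negative heat capacity is why heat flow between astronomical bodies widens temperature differences instead of erasing them.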
Weiye Loh

Freakonomics » The Revolution Will Not Be Televised. But It Will Be Tweeted

  • Information alone does not destabilize an oppressive regime. In fact, more information (and the control of that information) is a major source of political strength for any ruling party. The state-controlled media of North Korea is a current example of the power of propaganda, much as it was in the Soviet Union and Nazi Germany, where the state heavily subsidized the diffusion of radios during the 1930s to help spread Nazi propaganda.
  • Changes in technology do not by themselves weaken the state. While Twitter played a role in the Iranian protests in 2009, the medium was used effectively by the Iranian regime to spread rumors and disinformation. But if information becomes not just more widespread but more reliable, the regime’s chances of survival are significantly diminished. In this sense, though social media like Twitter and Facebook appear to be a scattered mess, they are more reliable than state-controlled messages.
  • The model predicts that a given percentage increase in information reliability has exactly twice as large an effect on the regime’s chances as the same percentage increase in information quantity, so, overall, an information revolution that leads to roughly equal-sized percentage increases in both these characteristics will reduce a regime’s chances of surviving. (A toy log-linear version of this claim is sketched below.)
  •  
    If the quantity of information available to citizens is sufficiently high, then the regime has a better chance of surviving. However, an increase in the reliability of information can reduce the regime's chances. These two effects are always in tension: a regime benefits from an increase in information quantity if and only if an increase in information reliability reduces its chances. The model allows for two kinds of information revolutions. In the first, associated with radio and mass newspapers under the totalitarian regimes of the early twentieth century, an increase in information quantity coincides with a shift towards media institutions more accommodative of the regime and, in this sense, a decrease in information reliability. In this case, both effects help the regime. In the second kind, associated with diffuse technologies like modern social media, an increase in information quantity coincides with a shift towards sources of information less accommodative of the regime and an increase in information reliability. This makes the quantity and reliability effects work against each other.
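The post does not give the model's functional form, but the "twice as large" claim quoted above can be captured in a toy log-linear sketch. The equation below is my illustration of the stated elasticities, not the paper's actual model:

```latex
\log S(q, r) = c + \alpha \log q - 2\alpha \log r, \qquad \alpha > 0,
```

where S is the regime's chance of surviving, q is information quantity, and r is reliability. A one per cent rise in q raises log S by about α/100, while a one per cent rise in r lowers it by about 2α/100, twice the magnitude and opposite in sign. Equal percentage increases in both therefore shift log S by (α − 2α)/100 < 0, so the regime's chances fall, matching the quoted prediction.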
Weiye Loh

How wise are crowds?

  • In the past, economists trying to model the propagation of information through a population would allow any given member of the population to observe the decisions of all the other members, or of a random sampling of them. That made the models easier to deal with mathematically, but it also made them less representative of the real world.
    • Weiye Loh
       
      Random sampling is not representative
  • “What this paper does is add the important component that this process is typically happening in a social network where you can’t observe what everyone has done, nor can you randomly sample the population to find out what a random sample has done, but rather you see what your particular friends in the network have done,” says Jon Kleinberg, Tisch University Professor in the Cornell University Department of Computer Science, who was not involved in the research. “That introduces a much more complex structure to the problem, but arguably one that’s representative of what typically happens in real settings.”
    • Weiye Loh
       
      So random sampling is actually more accurate?
  • Earlier models, Kleinberg explains, indicated the danger of what economists call information cascades. “If you have a few crucial ingredients — namely, that people are making decisions in order, that they can observe the past actions of other people but they can’t know what those people actually knew — then you have the potential for information cascades to occur, in which large groups of people abandon whatever private information they have and actually, for perfectly rational reasons, follow the crowd,”
  • The MIT researchers’ paper, however, suggests that the danger of information cascades may not be as dire as it previously seemed.
  • A mathematical model that describes attempts by members of a social network to make binary decisions — such as which of two brands of cell phone to buy — on the basis of decisions made by their neighbors. The model assumes that for all members of the population, there is a single right decision: one of the cell phones is intrinsically better than the other. But some members of the network have bad information about which is which. (A toy simulation of this sequential setup appears after this list.)
  • The MIT researchers analyzed the propagation of information under two different conditions. In one case, there’s a cap on how much any one person can know about the state of the world: even if one cell phone is intrinsically better than the other, no one can determine that with 100 percent certainty. In the other case, there’s no such cap. There’s debate among economists and information theorists about which of these two conditions better reflects reality, and Kleinberg suggests that the answer may vary depending on the type of information propagating through the network. But previous models had suggested that, if there is a cap, information cascades are almost inevitable.
  • If there’s no cap on certainty, the researchers showed, an expanding social network will eventually converge on an accurate representation of the state of the world; that wasn’t a big surprise. But they also showed that in many common types of networks, even if there is a cap on certainty, convergence will still occur.
  • “People in the past have looked at it using more myopic models,” says Acemoglu. “They would be averaging type of models: so my opinion is an average of the opinions of my neighbors’.” In such a model, Acemoglu says, the views of people who are “oversampled” — who are connected with a large enough number of other people — will end up distorting the conclusions of the group as a whole.
  • “What we’re doing is looking at it in a much more game-theoretic manner, where individuals are realizing where the information comes from. So there will be some correction factor,” Acemoglu says. “If I’m seeing you, your action, and I’m seeing Munzer’s action, and I also know that there is some probability that you might have observed Munzer, then I discount his opinion appropriately, because I know that I don’t want to overweight it. And that’s the reason why, even though you have these influential agents — it might be that Munzer is everywhere, and everybody observes him — that still doesn’t create a herd on his opinion.”
  • The new paper leaves a few salient questions unanswered, such as how quickly the network will converge on the correct answer, and what happens when the model of agents’ knowledge becomes more complex.
  • The MIT researchers have begun to address both questions. One paper examines the rate of convergence, although Dahleh and Acemoglu note that its results are “somewhat weaker” than those about the conditions for convergence. Another paper examines cases in which different agents make different decisions given the same information: some people might prefer one type of cell phone, others another. In such cases, “if you know the percentage of people that are of one type, it’s enough — at least in certain networks — to guarantee learning,” Dahleh says. “I don’t need to know, for every individual, whether they’re for it or against it; I just need to know that one-third of the people are for it, and two-thirds are against it.” For instance, he says, if you notice that a Chinese restaurant in your neighborhood is always half-empty, and a nearby Indian restaurant is always crowded, then information about what percentages of people prefer Chinese or Indian food will tell you which restaurant, if either, is of above-average or below-average quality.
  •  
    By melding economics and engineering, researchers show that as social networks get larger, they usually get better at sorting fact from fiction.
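The cascade mechanism described in the annotations above is easy to simulate. The sketch below uses a simple evidence-counting heuristic rather than the fully Bayesian, game-theoretic updating of the MIT paper, so it illustrates the cascade danger that the earlier models flagged, not the new convergence result; all parameters are illustrative.

```python
import random

def run_cascade(n_agents=40, signal_accuracy=0.7, true_state=1, seed=None):
    """Sequential binary choice with bounded private signals, the setup in
    which information cascades arise. Each agent receives a noisy private
    signal, observes all earlier choices, and follows the side favored by
    the combined evidence, counting each earlier choice as one signal."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        score = sum(1 if c == 1 else -1 for c in choices)
        score += 1 if signal == 1 else -1
        if score != 0:
            choices.append(1 if score > 0 else 0)
        else:
            choices.append(signal)  # tie: follow own signal
        # Once earlier choices lean two or more to one side, no private
        # signal can flip a later decision: a cascade has begun.
    return choices

trials = 10_000
wrong = sum(run_cascade(seed=t)[-1] != 1 for t in range(trials))
print(f"herded on the wrong option in {wrong / trials:.1%} of runs")
# With accuracy 0.7, roughly one run in seven locks onto the wrong option:
# rational agents abandoning their own information, as described above.
```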
Weiye Loh

TODAYonline | Singapore | Information literacy needed for Singaporean students

  • Conducted by the Wee Kim Wee School of Communication and Information (WKWSCI) at Nanyang Technological University, the "National Information Literacy Survey for Singapore Schools 2010" involved more than 3,000 secondary school students. The findings showed that the overall score across all info-literacy competencies stood at 38.7 per cent - some way off the ideal score of at least 50 per cent. While the study found that the participants appeared most adept in defining a project task and knowing where to seek information, their lack of aptitude in citing the sources of information used was of particular concern.
  • The participants were also found to be lacking in the abilities to compare information with other sources and to form critical assessments from the information.
  • Comparing the participants' backgrounds, the findings showed that the information literacy score for each participant increases with their parents' educational qualifications and their affluence levels.
Weiye Loh

Information about information | plus.maths.org

  • Since what we actually experience depends on us observing the world (via our measuring devices), reality is shaped by answers to yes/no questions. For example, is the electron here or is it not? Is its spin pointing up or pointing down? Answers to questions are information — the yes and no in the English language correspond to the 0 and 1 in computer language. Thus, information is fundamental to physical reality. As the famous physicist John Archibald Wheeler put it, the "It" we observe around us comes from the "Bit" that encodes information: "It from bit". Is this really true?
  •  
    "what exactly is information? We tend to think of it as human made, but since we're all a result of our DNA sequence, perhaps we should think of humans as being made of information."
Weiye Loh

When information is power, these are the questions we should be asking | Online Journal...

  • “There is absolutely no empiric evidence that shows that anyone actually uses the accounts produced by public bodies to make any decision. There is no group of principals analogous to investors. There are many lists of potential users of the accounts. The Treasury, CIPFA (the UK public sector accounting body) and others have said that users might include the public, taxpayers, regulators and oversight bodies. I would be prepared to put up a reward for anyone who could prove to me that any of these people have ever made a decision based on the financial reports of a public body. If there are no users of the information then there is no point in making the reports better. If there are no users more technically correct reports do nothing to improve the understanding of public finances. In effect all that better reports do is legitimise the role of professional accountants in the accountability process.
  • raw data – and the ability to interrogate that – should instead be made available because (quoting Anthony Hopwood): “Those with the power to determine what enters into organisational accounts have the means to articulate and diffuse their values and concerns, and subsequently to monitor, observe and regulate the actions of those that are now accounted for.”
  • Data is not just some opaque term; something for geeks: it’s information: the raw material we deal in as journalists. Knowledge. Power. The site of a struggle for control. And considering it’s a site that journalists have always fought over, it’s surprisingly placid as we enter one of the most important ages in the history of information control.
  • 3 questions to ask of any transparency initiative:
    1. If information is to be published in a database behind a form, then it’s hidden in plain sight. It cannot be easily found by a journalist, and only simple questions will be answered.
    2. If information is to be published in PDFs or JPEGs, or some format that you need proprietary software to see, then it cannot easily be questioned by a journalist.
    3. If you will have to pass a test to use the information, then obstacles will be placed between the journalist and that information.
    The next time an organisation claims that they are opening up their information, tick those questions off. (If you want more, see Gurstein’s list of 7 elements that are needed to make effective use of open data).
  •  
    control of information still represents the exercise of power, and shifts in that control as a result of the transparency/open data/linked data agenda are open to abuse, gaming, or spin.
Meenatchi

Online Defamation

  •  
    Interesting Case
    In summary, the article discusses the court ruling of an online defamation case that took place in Korea. It involves Kim, the victim, who experienced the spread of false articles and defamatory comments that blamed him for his ex-girlfriend's suicide. The final verdict held Internet portals liable for the damages caused by the articles they displayed on their websites, even though the articles had been provided by external media outlets. The Supreme Court ordered four of the major portals involved in the case to pay a combined 30 million won (about $22,500) as compensation to Kim.
    Ethical Question
    I feel there are a few ethical issues at play in this case. One is whether it is ethical to publish sensitive information about an individual on the Internet without his or her permission; this matters all the more when the credibility of the information is dubious. Another is whether Internet service providers can be held responsible for information they did not create: is it fair to charge them on the basis that they failed to regulate the content displayed on their sites?
    Problem
    The first question pits one person's privacy rights against another's freedom of speech. Publishing sensitive information about an individual that might not even be true infringes his or her privacy rights; yet the publisher can claim the freedom of speech to state what he or she thinks. The issue with the second question is that Internet service providers merely provide a platform for people to express their views, and arguably should not be held liable for comments posted by individuals using the website. The opposing view, however, expects the ISPs to be responsible for the content they allow to be displayed on their sites, and to regulate that content so that sensitive or controversial information does not cause irrevocable damage to others.
Inosha Wickrama

Student arrested for posting guide to suicide

I think this is an interesting topic to discuss on the IVLE. If you can post it there, we can start a discussion. Karin Tan wrote: > http://www.orientexpat.com/forum/3467-student-...

Weiye Loh

iPhone and Android Apps Breach Privacy - WSJ.com

  • Few devices know more personal details about people than the smartphones in their pockets: phone numbers, current location, often the owner's real name—even a unique ID number that can never be changed or turned off.
  • An examination of 101 popular smartphone "apps"—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone's unique device ID to other companies without users' awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent age, gender and other personal details to outsiders.
  • The findings reveal the intrusive effort by online-tracking companies to gather personal data about people in order to flesh out detailed dossiers on them.
  • iPhone apps transmitted more data than the apps on phones using Google Inc.'s Android operating system. Because of the test's size, it's not known if the pattern holds among the hundreds of thousands of apps available.
  • TextPlus 4, a popular iPhone app for text messaging. It sent the phone's unique ID number to eight ad companies and the phone's zip code, along with the user's age and gender, to two of them.
  • Pandora, a popular music app, sent age, gender, location and phone identifiers to various ad networks. iPhone and Android versions of a game called Paper Toss—players try to throw paper wads into a trash can—each sent the phone's ID number to at least five ad companies. Grindr, an iPhone app for meeting gay men, sent gender, location and phone ID to three ad companies.
  • iPhone maker Apple Inc. says it reviews each app before offering it to users. Both Apple and Google say they protect users by requiring apps to obtain permission before revealing certain kinds of information, such as location.
  • The Journal found that these rules can be skirted. One iPhone app, Pumpkin Maker (a pumpkin-carving game), transmits location to an ad network without asking permission. Apple declines to comment on whether the app violated its rules.
  • With few exceptions, app users can't "opt out" of phone tracking, as is possible, in limited form, on regular computers. On computers it is also possible to block or delete "cookies," which are tiny tracking files. These techniques generally don't work on cellphone apps.
  • Makers of TextPlus 4, Pandora and Grindr say the data they pass on to outside firms isn't linked to an individual's name. Personal details such as age and gender are volunteered by users, they say. The maker of Pumpkin Maker says he didn't know Apple required apps to seek user approval before transmitting location. The maker of Paper Toss didn't respond to requests for comment.
  • Many apps don't offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn't provide privacy policies on their websites or inside the apps at the time of testing. Neither Apple nor Google requires app privacy policies.
  • The most widely shared detail was the unique ID number assigned to every phone.
  • On iPhones, this number is the "UDID," or Unique Device Identifier. Android IDs go by other names. These IDs are set by phone makers, carriers or makers of the operating system, and typically can't be blocked or deleted. "The great thing about mobile is you can't clear a UDID like you can a cookie," says Meghan O'Holleran of Traffic Marketplace, an Internet ad network that is expanding into mobile apps. "That's how we track everything."
  • O'Holleran says Traffic Marketplace, a unit of Epic Media Group, monitors smartphone users whenever it can. "We watch what apps you download, how frequently you use them, how much time you spend on them, how deep into the app you go," she says. She says the data is aggregated and not linked to an individual.
  • Apple and Google ad networks let advertisers target groups of users. Both companies say they don't track individuals based on the way they use apps.
  • Apple limits what can be installed on an iPhone by requiring iPhone apps to be offered exclusively through its App Store. Apple reviews those apps for function, offensiveness and other criteria.
  • Apple says iPhone apps "cannot transmit data about a user without obtaining the user's prior permission and providing the user with access to information about how and where the data will be used." Many apps tested by the Journal appeared to violate that rule, by sending a user's location to ad networks, without informing users. Apple declines to discuss how it interprets or enforces the policy.
  • Google doesn't review the apps, which can be downloaded from many vendors. Google says app makers "bear the responsibility for how they handle user information." Google requires Android apps to notify users, before they download the app, of the data sources the app intends to access. Possible sources include the phone's camera, memory, contact list, and more than 100 others. If users don't like what a particular app wants to access, they can choose not to install the app, Google says.
  • Neither Apple nor Google requires apps to ask permission to access some forms of the device ID, or to send it to outsiders. When smartphone users let an app see their location, apps generally don't disclose if they will pass the location to ad companies.
  • Lack of standard practices means different companies treat the same information differently. For example, Apple says that, internally, it treats the iPhone's UDID as "personally identifiable information." That's because, Apple says, it can be combined with other personal details about people—such as names or email addresses—that Apple has via the App Store or its iTunes music services. By contrast, Google and most app makers don't consider device IDs to be identifying information.
  • A growing industry is assembling this data into profiles of cellphone users. Mobclix, the ad exchange, matches more than 25 ad networks with some 15,000 apps seeking advertisers. The Palo Alto, Calif., company collects phone IDs, encodes them (to obscure the number), and assigns them to interest categories based on what apps people download and how much time they spend using an app, among other factors. By tracking a phone's location, Mobclix also makes a "best guess" of where a person lives, says Mr. Gurbuxani, the Mobclix executive. Mobclix then matches that location with spending and demographic data from Nielsen Co.
  • Mobclix can place a user in one of 150 "segments" it offers to advertisers, from "green enthusiasts" to "soccer moms." For example, "die hard gamers" are 15-to-25-year-old males with more than 20 apps on their phones who use an app for more than 20 minutes at a time. Mobclix says its system is powerful, but that its categories are broad enough to not identify individuals. "It's about how you track people better," Mr. Gurbuxani says. (A sketch of this kind of rule-based segmentation appears after this list.)
  • Four app makers posted privacy policies after being contacted by the Journal, including Rovio Mobile Ltd., the Finnish company behind the popular game Angry Birds (in which birds battle egg-snatching pigs). A spokesman says Rovio had been working on the policy, and the Journal inquiry made it a good time to unveil it.
  • Free and paid versions of Angry Birds were tested on an iPhone. The apps sent the phone's UDID and location to the Chillingo unit of Electronic Arts Inc., which markets the games. Chillingo says it doesn't use the information for advertising and doesn't share it with outsiders.
  • Some developers feel pressure to release more data about people. Max Binshtok, creator of the DailyHoroscope Android app, says ad-network executives encouraged him to transmit users' locations. Mr. Binshtok says he declined because of privacy concerns. But ads targeted by location bring in two to five times as much money as untargeted ads, Mr. Binshtok says. "We are losing a lot of revenue."
  • Apple targets ads to phone users based largely on what it knows about them through its App Store and iTunes music service. The targeting criteria can include the types of songs, videos and apps a person downloads, according to an Apple ad presentation reviewed by the Journal. The presentation named 103 targeting categories, including: karaoke, Christian/gospel music, anime, business news, health apps, games and horror movies. People familiar with iAd say Apple doesn't track what users do inside apps and offers advertisers broad categories of people, not specific individuals. Apple has signaled that it has ideas for targeting people more closely. In a patent application filed this past May, Apple outlined a system for placing and pricing ads based on a person's "web history or search history" and "the contents of a media library." For example, home-improvement advertisers might pay more to reach a person who downloaded do-it-yourself TV shows, the document says.
  • The patent application also lists another possible way to target people with ads: the contents of a friend's media library. How would Apple learn who a cellphone user's friends are, and what kinds of media they prefer? The patent says Apple could tap "known connections on one or more social-networking websites" or "publicly available information or private databases describing purchasing decisions, brand preferences," and other data. In September, Apple introduced a social-networking service within iTunes, called Ping, that lets users share music preferences with friends. Apple declined to comment.
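As an illustration of the rule-based segmentation the Journal describes, here is a minimal sketch built around the one rule the article spells out (the "die hard gamers" category). The profile fields and the hard-coded threshold rule are assumptions for illustration, not Mobclix's actual schema or system.

```python
from dataclasses import dataclass

@dataclass
class PhoneProfile:
    # Hypothetical fields an ad exchange might infer from app usage and
    # demographic data; the names are illustrative.
    age: int
    gender: str             # "m" or "f"
    apps_installed: int
    avg_session_minutes: float

def segments(p: PhoneProfile) -> list[str]:
    """Assign broad interest segments from coarse usage rules, in the spirit
    of the 150 advertiser-facing categories described in the article."""
    out = []
    # The rule quoted above: "die hard gamers" are 15-to-25-year-old males
    # with more than 20 apps who use an app more than 20 minutes at a time.
    if (p.gender == "m" and 15 <= p.age <= 25
            and p.apps_installed > 20 and p.avg_session_minutes > 20):
        out.append("die hard gamers")
    return out

print(segments(PhoneProfile(age=19, gender="m", apps_installed=32,
                            avg_session_minutes=35)))
# ['die hard gamers']
```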
Weiye Loh

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren...

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation. In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • ...10 more annotations...
  • 3. Collaboration at scale. Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things’. The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data. Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world. Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service. Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model. Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid. The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid. The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
yongernn teo

Eli Lilly Accused of Unethical Marketing of Zyprexa

  •  
    Summary of the Unethical Marketing of Zyprexa by Eli Lilly:
    Eli Lilly is a global pharmaceutical company. In 2006, it was charged with the unethical marketing of Zyprexa, its top-selling drug, which is approved only for the treatment of schizophrenia and bipolar disorder.
    Firstly, Eli Lilly downplayed in a report the risks of obesity and increased blood sugar associated with Zyprexa. Although Eli Lilly had been aware of these risks for at least a decade, it did not emphasize their significance, for fear of jeopardizing sales.
    Secondly, Eli Lilly ran a promotional campaign called Viva Zyprexa, encouraging off-label use of the drug in patients who had neither schizophrenia nor bipolar disorder. The campaign targeted the elderly with dementia, although the drug was not approved to treat dementia; in fact, it could increase the risk of death in older patients with dementia-related psychosis.
    All this was done to boost sales of Zyprexa and bring in more revenue for Eli Lilly: Zyprexa alone could bring in $4 billion worth of sales annually.
    Ethical Question:
    To what extent should pharmaceutical companies go to inform potential consumers of the side effects of their drugs?
    Ethical Problem:
    The information disseminated through marketing campaigns has to be true and transparent; there should not be any hidden agenda behind the amount of information being released. In this case, to prevent sales from plummeting, Eli Lilly downplayed the side effects of Zyprexa and encouraged off-label use.
    It is very important that pharmaceutical companies practice good ethics, as this concerns the health of their consumers. While a drug may act as a remedy for one health problem, its side effects could lead to others. All this has to be conveyed to the consumer who exchanges his money for the product. Not being transparent and honest with the information of the pr…
Weiye Loh

How the net traps us all in our own little bubbles | Technology | The Observer - 0 views

  • Google would use 57 signals – everything from where you were logging in from to what browser you were using to what you had searched for before – to make guesses about who you were and what kinds of sites you'd like. Even if you were logged out, it would customise its results, showing you the pages it predicted you were most likely to click on.
  • Most of us assume that when we google a term, we all see the same results – the ones that the company's famous Page Rank algorithm suggests are the most authoritative based on other pages' links. But since December 2009, this is no longer true. Now you get the result that Google's algorithm suggests is best for you in particular – and someone else may see something entirely different. In other words, there is no standard Google any more.
  • In the spring of 2010, while the remains of the Deepwater Horizon oil rig were spewing oil into the Gulf of Mexico, I asked two friends to search for the term "BP". They're pretty similar – educated white left-leaning women who live in the north-east. But the results they saw were quite different. One saw investment information about BP. The other saw news.
  • ...7 more annotations...
  • the query "stem cells" might produce diametrically opposed results for scientists who support stem-cell research and activists who oppose it.
  • "Proof of climate change" might turn up different results for an environmental activist and an oil-company executive.
  • majority of us assume search engines are unbiased. But that may be just because they're increasingly biased to share our own views. More and more, your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click. Google's announcement marked the turning point of an important but nearly invisible revolution in how we consume information. You could say that on 4 December 2009 the era of personalisation began.
  • We are predisposed to respond to a pretty narrow set of stimuli – if a piece of news is about sex, power, gossip, violence, celebrity or humour, we are likely to read it first. This is the content that most easily makes it into the filter bubble. It's easy to push "Like" and increase the visibility of a friend's post about finishing a marathon or an instructional article about how to make onion soup. It's harder to push the "Like" button on an article titled "Darfur sees bloodiest month in two years". In a personalised world, important but complex or unpleasant issues – the rising prison population, for example, or homelessness – are less likely to come to our attention at all.
  • As a consumer, it's hard to argue with blotting out the irrelevant and unlikable. But what is good for consumers is not necessarily good for citizens. What I seem to like may not be what I actually want, let alone what I need to know to be an informed member of my community or country. "It's a civic virtue to be exposed to things that appear to be outside your interest," technology journalist Clive Thompson told me. Cultural critic Lee Siegel puts it a different way: "Customers are always right, but people aren't."
  • Personalisation is based on a bargain. In exchange for the service of filtering, you hand large companies an enormous amount of data about your daily life – much of which you might not trust friends with.
  • To be the author of your life, professor Yochai Benkler argues, you have to be aware of a diverse array of options and lifestyles. When you enter a filter bubble, you're letting the companies that construct it choose which options you're aware of. You may think you're the captain of your own destiny, but personalisation can lead you down a road to a kind of informational determinism in which what you've clicked on in the past determines what you see next – a web history you're doomed to repeat. You can get stuck in a static, ever-narrowing version of yourself – an endless you-loop.
  •  
    An invisible revolution has taken place in the way we use the net, but the increasing personalisation of information by search engines such as Google threatens to limit our access to information and enclose us in a self-reinforcing world view, writes Eli Pariser in an extract from The Filter Bubble
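Pariser's "you-loop" is easy to make concrete. The toy sketch below ranks stories by nothing but your own click history; it is a deliberately crude caricature, not Google's actual algorithm (whose 57 signals are not public), but even this minimal loop shows how a single click starts to determine what surfaces next.

```python
# Toy sketch of the personalisation feedback loop Pariser describes:
# rank items by past clicks, so whatever you clicked before floats up.
from collections import Counter

click_history = Counter()  # topic -> number of past clicks

def personalised_rank(items):
    """Order items by how often their topic was clicked before."""
    return sorted(items, key=lambda it: -click_history[it["topic"]])

def click(item):
    click_history[item["topic"]] += 1

stories = [
    {"title": "BP posts quarterly results", "topic": "investing"},
    {"title": "Gulf spill update",          "topic": "news"},
]

click(stories[0])             # the user clicks one investing story...
for s in personalised_rank(stories):
    print(s["title"])         # ...and investing now outranks news on "BP"
```

Run twice with different first clicks and the two hypothetical users see different orderings for the same query, which is the "two friends searching for BP" observation in miniature.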
Weiye Loh

Study: why bother to remember when you can just use Google? - 0 views

  •  
    In the age of Google and Wikipedia, an almost unlimited amount of information is available at our fingertips, and with the rise of smartphones, many of us have nonstop access. The potential to find almost any piece of information in seconds is beneficial, but is this ability actually negatively impacting our memory? The authors of a paper that is being released by Science Express describe four experiments testing this. Based on their results, people are recalling information less, and instead can remember where to find the information they have forgotten.
Weiye Loh

Fukushima babies and how numbers can lie - Boing Boing - 0 views

  • Over at Scientific American, Michael Moyer takes a critical look at an Al Jazeera story about a recent study purporting to show that infant deaths on the American West Coast increased by 35% as a result of fallout from the Fukushima Daiichi nuclear power plant meltdown.
  • At first glance, the story looks credible. And scary. The information comes from a physician, Janette Sherman MD, and epidemiologist Joseph Mangano, who got their data from the Centers for Disease Control and Prevention's Morbidity and Mortality Weekly Reports—a newsletter that frequently helps public health officials spot trends in death and illness.
  • Look closer, though, and the credibility vanishes. For one thing, this isn't a formal scientific study and Sherman and Mangano didn't publish their findings in a peer-reviewed journal, or even on a science blog. Instead, all of this comes from an essay the two wrote for Counter Punch, a political newsletter.
  • ...2 more annotations...
  • Let's first consider the data that the authors left out of their analysis. It's hard to understand why the authors stopped at these eight cities. Why include Boise but not Tacoma? Or Spokane? Both have about the same size population as Boise, they're closer to Japan, and the CDC includes data from Tacoma and Spokane in the weekly reports.
  • More important, why did the authors choose to use only the four weeks preceding the Fukushima disaster? Here is where we begin to pick up a whiff of data fixing. ... While it certainly is true that there were fewer deaths in the four weeks leading up to Fukushima than there have been in the 10 weeks following, the entire year has seen no overall trend. When I plotted a best-fit line to the data, Excel calculated a very slight decrease in the infant mortality rate. Only by explicitly excluding data from January and February were Sherman and Mangano able to froth up their specious statistical scaremongering.
  •  
    When you think about what information to be skeptical of, that decision can't begin and end with "corporate interests." Yes, those sources often give you bad information. But bad information comes from other places, too. The Fukushima accident was worse than TEPCO wanted people to believe when it first happened. Radiation isn't healthy for you, and there are people (plant workers, emergency crews, people who lived nearby) who will be dealing with the effects of Fukushima for years to come. But the fact that all of that is true does not mean that we should uncritically accept it when somebody says that radiation from Fukushima is killing babies in the United States. Just because the corporate interests are in the wrong doesn't mean that every claim against them is true.
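Moyer's best-fit-line check is easy to reproduce. The sketch below uses made-up weekly counts drawn from a flat Poisson process, not the actual CDC figures, to show how a four-weeks-before versus ten-weeks-after comparison can bounce around by pure chance while a least-squares line over the whole series stays near zero.

```python
# Hypothetical weekly infant-death counts with NO underlying trend;
# simulated numbers, not the CDC's Morbidity and Mortality data.
import numpy as np

rng = np.random.default_rng(42)
weeks = np.arange(1, 25)                         # weeks 1-24 of a year
deaths = rng.poisson(lam=30.0, size=weeks.size)  # flat rate, noise only

event = 11  # assumed 0-based index of the disaster week

# The essay's method: mean of 4 weeks before vs mean of 10 weeks after.
# With no trend present, any gap here is pure sampling noise.
before = deaths[event - 4:event].mean()
after = deaths[event:event + 10].mean()
print(f"'before' mean: {before:.1f}  'after' mean: {after:.1f}  "
      f"apparent change: {100 * (after - before) / before:+.1f}%")

# Moyer's check: a least-squares trend line over the whole series.
slope, _ = np.polyfit(weeks, deaths, 1)
print(f"best-fit slope over all 24 weeks: {slope:+.3f} deaths/week")
```

The narrow-window comparison can swing several percent in either direction from noise alone while the full-series slope stays small, which is exactly why excluding January and February mattered.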
Weiye Loh

Secrecy in the age of WikiLeaks - 1 views

  •  
    As government agencies look to leverage new technologies to communicate with the public, move more citizen services online, share services amongst agencies, share intelligence for national security purposes and collaborate with other nations and private industry, they will need to take a more open stance to secrecy and information sharing. But to mitigate risks, they need to take a more solid security stance at the same time. It is imperative for leaders at all levels within government (agencies, departments, contractors, etc.) to weigh the risks and benefits of making information more accessible and, once decided, put strong safeguards in place to ensure only those who need access can get access. Information leaks imply failures across multiple areas, particularly risk management, access control and confidentiality. The ongoing WikiLeaks exposé clearly shows that the threat is not always from external groups; it can be far more insidious when it stems from trusted individuals within an organisation.
Jody Poh

Online data privacy - 12 views

I think another question has to be brought up: Is the information meant to be public or private? If the information is public, then users should know they are risking identity theft by putting in...

privacy

Satveer

Spammed, scammed, jammed: Flu outbreak dominates online buzz - 10 views

This article is about online misinformation on social networking sites such as twitter and in this case it particularly pertains to health. It has got to do with swine flu and how people on Twitte...

http://www.canada.com/entertainment/Spammed%2Bscammed%2Bjammed%2

started by Satveer on 19 Aug 09 no follow-up yet
Weiye Loh

Skepticblog » Flaws in Creationist Logic - 0 views

  • making a false analogy here by confusing the origin of life with the later evolution of life. The watch analogy was specifically offered to say that something which is complex and displays design must have been created and designed by a creator. Therefore, since we see complexity and design in life it too must have had a creator. But all the life that we know – that life which is being pointed to as complex and designed – is the result of a process (evolution) that has worked over billions of years. Life can grow, reproduce, and evolve. Watches cannot – so it is not a valid analogy.
  • Life did emerge from non-living matter, but that is irrelevant to the point. There was likely a process of chemical evolution – but still the non-living precursors to life were just chemicals, they did not display the design or complexity apparent in a watch. Ankur’s attempt to rescue this false analogy fails. And before someone has a chance to point it out – yes, I said that life displays design. It displays bottom-up evolutionary design, not top-down intelligent design. This refers to another fallacy of creationists – the assumption that all design is top down. But nature demonstrates that this is a false assumption.
  • An increase in variation is an increase in information – it takes more information to describe the greater variety. By any actual definition of information – variation increases information. Also, as I argued, when you have gene duplication you are physically increasing the number of information carrying units – that is an increase in information. There is simply no way to avoid the mountain of genetic evidence that genetic information has increased over evolutionary time through evolutionary processes.
  •  
    FLAWS IN CREATIONIST LOGIC
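The post's claim that "variation increases information" can be made precise with Shannon's measure from information theory (one standard formalization; the post itself does not commit to a formula). A minimal sketch, with invented allele frequencies rather than any real genetic dataset:

```python
# Shannon entropy in bits: H = -sum(p * log2(p)).
# The frequencies below are illustrative, not from real data.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

low_variation = [0.97, 0.01, 0.01, 0.01]   # one variant dominates
high_variation = [0.25, 0.25, 0.25, 0.25]  # four variants, equally common

print(f"low variation:  {entropy(low_variation):.2f} bits/symbol")   # ~0.24
print(f"high variation: {entropy(high_variation):.2f} bits/symbol")  # 2.00
```

Gene duplication adds an information-carrying unit outright; entropy captures the subtler point that greater variety alone takes more bits per symbol to describe.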
Weiye Loh

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • ...17 more annotations...
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
Weiye Loh

'Scrapers' Dig Deep for Data on the Web - WSJ.com - 0 views

  • website PatientsLikeMe.com noticed suspicious activity on its "Mood" discussion board. There, people exchange highly personal stories about their emotional disorders, ranging from bipolar disease to a desire to cut themselves. It was a break-in. A new member of the site, using sophisticated software, was "scraping," or copying, every single message off PatientsLikeMe's private online forums.
  • PatientsLikeMe managed to block and identify the intruder: Nielsen Co., the privately held New York media-research firm. Nielsen monitors online "buzz" for clients, including major drug makers, which buy data gleaned from the Web to get insight from consumers about their products, Nielsen says.
  • The market for personal data about Internet users is booming, and in the vanguard is the practice of "scraping." Firms offer to harvest online conversations and collect personal details from social-networking sites, résumé sites and online forums where people might discuss their lives. The emerging business of web scraping provides some of the raw material for a rapidly expanding data economy. Marketers spent $7.8 billion on online and offline data in 2009, according to the New York management consulting firm Winterberry Group LLC. Spending on data from online sources is set to more than double, to $840 million in 2012 from $410 million in 2009.
  • ...6 more annotations...
  • The Wall Street Journal's examination of scraping—a trade that involves personal information as well as many other types of data—is part of the newspaper's investigation into the business of tracking people's activities online and selling details about their behavior and personal interests.
  • Some companies collect personal information for detailed background reports on individuals, such as email addresses, cell numbers, photographs and posts on social-network sites. Others offer what are known as listening services, which monitor in real time hundreds or thousands of news sources, blogs and websites to see what people are saying about specific products or topics.
  • One such service is offered by Dow Jones & Co., publisher of the Journal. Dow Jones collects data from the Web—which may include personal information contained in news articles and blog postings—that help corporate clients monitor how they are portrayed. It says it doesn't gather information from password-protected parts of sites.
  • The competition for data is fierce. PatientsLikeMe also sells data about its users. PatientsLikeMe says the data it sells is anonymized, no names attached.
  • Nielsen spokesman Matt Anchin says the company's reports to its clients include publicly available information gleaned from the Internet, "so if someone decides to share personally identifiable information, it could be included."
  • Internet users often have little recourse if personally identifiable data is scraped: There is no national law requiring data companies to let people remove or change information about themselves, though some firms let users remove their profiles under certain circumstances.
  •  
    The market for personal data about Internet users is booming, and in the vanguard is the practice of "scraping." Firms offer to harvest online conversations and collect personal details from social-networking sites, résumé sites and online forums where people might discuss their lives.
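For readers unfamiliar with the mechanics, "scraping" in the Journal's sense usually amounts to little more than an HTTP client plus an HTML parser run over many pages. The sketch below is deliberately generic: the URL and the CSS class are placeholders, not anything PatientsLikeMe or Nielsen actually used, and pointing such a script at a private, password-protected forum is precisely the kind of use the article describes as a break-in.

```python
# A minimal, generic scraper: fetch one page and extract post text.
# The URL and the "post-body" class are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/forum/mood"  # stand-in for a public forum page

resp = requests.get(url, timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
posts = [div.get_text(strip=True) for div in soup.select("div.post-body")]

for post in posts:
    print(post)
```

A "listening service" of the kind Dow Jones or Nielsen sells would run a loop like this across thousands of sources, then aggregate and resell the results, which is why the anonymization and consent questions the article raises matter at scale.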