Dystopias: Group items tagged "cognition"


Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    • Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    • Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    • Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    • Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    • Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of life due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    • Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    • Values-based system: Develop policies to assure AI will be directed at 'humanness' and the common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    • Prioritize people: Alter economic and political systems to better help humans 'race with the robots'. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested. [A toy sketch of this A/B-testing loop follows these annotations.]
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
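
The article's claim that this is "a revolution that is being A/B tested" is concrete enough to sketch in code. Below is a toy model, not anything from the article itself: an operator posts competing variants of a narrative, measures which one real users amplify, then pads the winner with fake shares until it crosses a trending threshold. Every name and number (the framings, engagement rates, audience sizes, the threshold) is an invented assumption, and the logic illustrates the described dynamic, not any platform's actual algorithm.

    # Toy model of the A/B-tested amplification loop described above.
    # All rates, audience sizes and thresholds are invented assumptions.
    import random

    random.seed(42)  # deterministic for the example

    def organic_shares(appeal: float, audience: int) -> int:
        """Count how many real users share a post, given its appeal (0..1)."""
        return sum(random.random() < appeal for _ in range(audience))

    def run_ab_test(appeals: dict, audience: int = 10_000) -> str:
        """Post every variant to a small seed audience; keep the winner."""
        scores = {name: organic_shares(appeal, audience)
                  for name, appeal in appeals.items()}
        return max(scores, key=scores.get)

    # Two framings of the same narrative; appeal values are assumptions.
    variants = {"fear_framing": 0.031, "outrage_framing": 0.047}
    winner = run_ab_test(variants)

    # Fake shares are cheap, so the winning variant is padded until it
    # crosses an (assumed) trending threshold, after which the platform's
    # own distribution machinery takes over.
    TRENDING_THRESHOLD = 5_000
    real = organic_shares(variants[winner], 50_000)
    bots_needed = max(0, TRENDING_THRESHOLD - real)
    print(f"winner: {winner}, organic: {real}, bot shares needed: {bots_needed}")

The asymmetry the article stresses falls straight out of the model: organic engagement is uncertain and expensive to earn, while the bot-supplied shares that close the gap to the trending threshold cost almost nothing.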
Ed Webb

Charlie Brooker | Google Instant is trying to kill me | Comment is free | The Guardian

  • I'm starting to feel like an unwitting test subject in a global experiment conducted by Google, in which it attempts to discover how much raw information it can inject directly into my hippocampus before I crumple to the floor and start fitting uncontrollably.
  • It's the internet on fast-forward, and it's aggressive – like trying to order from a waiter who keeps finishing your sentences while ramming spoonfuls of what he thinks you want directly into your mouth, so you can't even enjoy your blancmange without chewing a gobful of black pudding first.
  • Google may have released him from the physical misery of pressing enter, but it's destroyed his sense of perspective in the process.
  • My attention span was never great, but modern technology has halved it, and halved it again, and again and again, down to an atomic level, and now there's nothing discernible left. Back in that room, bombarded by alerts and emails, repeatedly tapping search terms into Google Instant for no good reason, playing mindless pinball with words and images, tumbling down countless little attention-vortexes, plunging into one split-second coma after another, I began to feel I was neither in control nor 100% physically present. I wasn't using the computer. The computer was using me – to keep its keys warm.
  • I'm rationing my internet usage and training my mind muscles for the future. Because I can see where it's heading: a service called Google Assault that doesn't even bother to guess what you want, and simply hurls random words and sounds and images at you until you dribble all the fluid out of your body. And I know it'll kill me, unless I train my brain to withstand and ignore it. For me, the war against the machines has started in earnest.
Ed Webb

Does the Digital Classroom Enfeeble the Mind? - NYTimes.com

  • My father would have been unable to “teach to the test.” He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today’s test-driven educational regime.
  • A career in computer science makes you see the world in its terms. You start to see money as a form of information display instead of as a store of value. Money flows are the computational output of a lot of people planning, promising, evaluating, hedging and scheming, and those behaviors start to look like a set of algorithms. You start to see the weather as a computer processing bits tweaked by the sun, and gravity as a cosmic calculation that keeps events in time and space consistent. This way of seeing is becoming ever more common as people have experiences with computers.
    While it has its glorious moments, the computational perspective can at times be uniquely unromantic. Nothing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn’t it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don’t have to. [A minimal sketch of such a taste-predicting algorithm follows these annotations.]
    And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It’s heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.’s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.
  • The future of education in the digital age will be determined by our judgment of which aspects of the information we pass between generations can be represented in computers at all. If we try to represent something digitally when we actually can’t, we kill the romance and make some aspect of the human condition newly bland and absurd. If we romanticize information that shouldn’t be shielded from harsh calculations, we’ll suffer bad teachers and D.J.’s and their wares.
  • Some of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons. There are some vaguely cool ideas floating around, and we might know a lot more about these things any moment now, but at this moment, we don’t. You could spend all day reading literature about educational technology without being reminded that this frontier of ignorance lies before us. We are tempted by the demons of commercial and professional ambition to pretend we know more than we do.
  • Outside school, something similar happens. Students spend a lot of time acting as trivialized relays in giant schemes designed for the purposes of advertising and other revenue-minded manipulations. They are prompted to create databases about themselves and then trust algorithms to assemble streams of songs and movies and stories for their consumption. We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen. What is wrong with this is not that students are any lazier now or learning less. (It is probably even true, I admit reluctantly, that in the presence of the ambient Internet, maybe it is not so important anymore to hold an archive of certain kinds of academic trivia in your head.) The problem is that students could come to conceive of themselves as relays in a transpersonal digital structure. Their job is then to copy and transfer data around, to be a source of statistics, whether to be processed by tests at school or by advertising schemes elsewhere.
  • If students don’t learn to think, then no amount of access to information will do them any good.
  • To the degree that education is about the transfer of the known between generations, it can be digitized, analyzed, optimized and bottled or posted on Twitter. To the degree that education is about the self-invention of the human race, the gargantuan process of steering billions of brains into unforeseeable states and configurations in the future, it can continue only if each brain learns to invent itself. And that is beyond computation because it is beyond our comprehension.
  • Roughly speaking, there are two ways to use computers in the classroom. You can have them measure and represent the students and the teachers, or you can have the class build a virtual spaceship. Right now the first way is ubiquitous, but the virtual spaceships are being built only by tenacious oddballs in unusual circumstances. More spaceships, please.
  • How do we get this right - use the tech for what it can do well, develop our brains for what the tech can't do? Who's up for building a spaceship?
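
Lanier's complaint about "having some algorithm calculate what music I will want to hear" is easy to make concrete. Below is a minimal sketch of one standard approach, user-overlap collaborative filtering, with invented listeners and tracks; real recommenders are far more elaborate, but the shape of the taste-prediction logic is roughly this.

    # Minimal collaborative-filtering sketch: score tracks a user hasn't
    # heard by how often they appear in the histories of similar listeners.
    # Users, tracks and listening histories are invented examples.
    from collections import Counter

    listens = {
        "alice": {"bach", "coltrane", "aphex_twin"},
        "bob":   {"bach", "coltrane", "miles_davis"},
        "carol": {"aphex_twin", "autechre"},
    }

    def recommend(user: str, history: dict) -> list:
        """Rank unheard tracks, weighted by overlap with other listeners."""
        mine = history[user]
        scores = Counter()
        for other, theirs in history.items():
            if other == user:
                continue
            overlap = len(mine & theirs)      # crude similarity measure
            for track in theirs - mine:
                scores[track] += overlap      # weight by that similarity
        return [track for track, _ in scores.most_common()]

    print(recommend("alice", listens))  # ['miles_davis', 'autechre']

The sketch recommends miles_davis to alice purely because bob shares two of her tracks: taste reduced to set intersection, which is exactly the unromantic reduction Lanier is objecting to.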
Ed Webb

Sleep isn't priority on campus, but experts say it should be | Connect2Mason

  • Sleep deprivation affects 80 to 90 percent of college students, and getting a good night’s sleep is essential to staying healthy but is often overlooked
  • sleep deprivation has short-term effects such as increased blood pressure and desires for fatty foods, a weakened immune system, a harder time remembering things and a decrease in sense of humor
  • the idea that staying up late or pulling all-nighters will help prepare for a test is a misconception because it negatively affects cognitive skills. “Pride in not sleeping much is like pride in not exercising,” Gartenberg said. “It doesn’t make sense.”
  • people should only go to sleep if they are tired, caffeine and exercising before bed disturbs sleep, and alcohol does not help people fall asleep, but rather disrupts sleep and decreases its quality
Ed Webb

FT.com / UK - Towards the empathic civilisation

  • The great turning points occur when new, more complex energy regimes converge with communications revolutions, fundamentally altering human consciousness in the process. This happened in the late 18th century, when coal and steam power ushered in the industrial age. Print technology was vastly improved and became the medium to organise myriad new activities. It also changed the wiring of the human brain, leading to a great shift from theological to ideological consciousness. Enlightenment philosophers - with some exceptions - peered into the psyche and saw a rational creature obsessed with autonomy and driven by the desire to acquire property and wealth.
    Today, we are on the verge of another seismic shift. Distributed information and communication technologies are converging with distributed renewable energies, creating the infrastructure for a third industrial revolution. Over the next 40 years, millions of buildings will be overhauled to collect the surrounding renewable energies. These energies will be stored in the form of hydrogen and any surplus electricity will be shared over continental inter-grids managed by internet technologies. People will generate their own energy, just as they now create their own information and, as with information, share it with millions of others.
  • the early stages of a transformation from ideological consciousness to biosphere consciousness
  • This new understanding goes hand-in-hand with discoveries in evolutionary biology, neuro-cognitive science and child development that reveal that human beings are biologically predisposed to be empathic.
  • The millennial generation is celebrating the global commons every day, apparently unmindful of Hardin's warning. For millennials, the notion of collaborating to advance the collective interest in networks often trumps "going it alone" in markets
  • We think of property as the right to exclude others from something. But property has also meant the right of access to goods held in common - the right to navigate waterways, enjoy public parks and beaches, and so on. This second definition is particularly important now because quality of life can only be realised collectively - for example, by living in unpolluted environments and safe communities. In the new era, the right to be included in "a full life" - the right to access - becomes the most important "property value".
Ed Webb

The migrant caravan "invasion" and America's epistemic crisis - Vox

  • The intensity of belief on the right has begun to vary inversely with plausibility. Precisely because the “threat” posed by the caravan is facially absurd, believing in it — performing belief in it — is a powerful act of shared identity reinforcement, of tribal solidarity.
    • Ed Webb: See (obviously) Orwell. Also Lisa Wedeen's great book on totalitarianism in Syria, Ambiguities of Domination.
  • Once that support system is in place, Trump is unbound, free to impose his fantasies on reality. He can campaign on Republicans protecting people with preexisting conditions even as the GOP sues to block such protections. He can brush off Mueller’s revelations and fire anyone who might threaten him. He can use imaginary Democratic voter fraud to cover up red-state voter suppression. He can use antifa as a pretext for deploying troops domestically.
  • Trump does not view himself as president of the whole country. He views himself as president of his white nationalist party — their leader in a war on liberals. He has all the tools of a head of state with which to prosecute that war. Currently, he is restrained only by the lingering professionalism of public servants and a few thin threads of institutional inertia.
  • The epistemic crisis Trump has accelerated is now morphing into a full-fledged crisis of democracy.
  • As Voltaire famously put it: “Anyone who has the power to make you believe absurdities has the power to make you commit injustices.”
  • The right, in all its organs, from social media to television to the president, is telling a well-worn, consistent story: Opposition from the left and Democrats is fraudulent, illegitimate, a foreign-funded conspiracy against the traditional white American way of life.
  • Having two versions of reality constantly clashing in public is cognitively and emotionally exhausting. To an average person following the news, the haze of charge and countercharge is overwhelming. And that is precisely what every autocrat wants.
  • every aspiring tyrant in modern history has made the independent media his first target
  • Then they go after the courts, the security services, and the military. Once they have a large base of support that will believe whatever they proclaim, follow them anywhere, support them in anything — it doesn’t have to be a majority, just an intense, activated minority — they can, practically speaking, get away with anything.
Ed Webb

Smartphones are making us stupid - and may be a 'gateway drug' | The Lighthouse

  • rather than making us smarter, mobile devices reduce our cognitive ability in measurable ways
  • “There’s lots of evidence showing that the information you learn on a digital device, doesn’t get retained very well and isn’t transferred across to the real world,”
  • “You’re also quickly conditioned to attend to lots of attention-grabbing signals, beeps and buzzes, so you jump from one task to the other and you don’t concentrate.”
  • Not only do smartphones affect our memory and our concentration, research shows they are addictive – to the point where they could be a ‘gateway drug’ making users more vulnerable to other addictions.
  • Smartphones are also linked to reduced social interaction, inadequate sleep, poor real-world navigation, and depression.
  • “The more time that kids spend on digital devices, the less empathetic they are, and the less they are able to process and recognise facial expressions, so their ability to actually communicate with each other is decreased.”
  • “Casino-funded research is designed to keep people gambling, and app software developers use exactly the same techniques. They have lots of buzzes and icons so you attend to them, they have things that move and flash so you notice them and keep your attention on the device.”
  • Around 90 per cent of US university students are thought to experience ‘phantom vibrations', so the researcher took a group to a desert location with no cell reception – and found that even after four days, around half of the students still thought their pocket was buzzing with Facebook or text notifications.
  • “Collaboration is a buzzword with software companies who are targeting schools to get kids to use these collaboration tools on their iPads – but collaboration decreases when you're using these devices,”
  • “All addiction is based on the same craving for a dopamine response, whether it's drug, gambling, alcohol or phone addiction,” he says. “As the dopamine response drops off, you need to increase the amount you need to get the same result, you want a little bit more next time. Neurologically, they all look the same.
    “We know – there are lots of studies on this – that once we form an addiction to something, we become more vulnerable to other addictions. That’s why there’s concerns around heavy users of more benign, easily-accessed drugs like alcohol and marijuana as there’s some correlation with usage of more physically addictive drugs like heroin, and neurological responses are the same.”
  • parents can also fall victim to screens which distract from their child’s activities or conversations, and most adults will experience this with friends and family members too.
  • “We also know that if you learn something on an iPad you are less likely to be able to transfer that to another device or to the real world,”
  • a series of studies have tested this with children who learn to construct a project with ‘digital’ blocks and then try the project with real blocks. “They can’t do it - they start from zero again,”
  • “Our brains can’t actually multitask, we have to switch our attention from one thing to another, and each time you switch, there's a cost to your attentional resources. After a few hours of this, we become very stressed.” That also causes us to forget things
  • A study from Norway recently tested how well kids remembered what they learned on screens. One group of students received information on a screen and were asked to memorise it; the second group received the same information on paper. Both groups were tested on their recall. Unsurprisingly, the children who received the paper version remembered more of the material. But the children with the electronic version were also found to be more stressed.
  • The famous ‘London taxi driver experiments’ found that memorising large maps caused the hippocampus to expand in size. Williams says that the reverse is going to happen if we don’t use our brain and memory to navigate. “Our brains are just like our muscles. We ‘use it or lose it’ – in other words, if we use navigation devices for directions rather than our brains, we will lose that ability.”
  • numerous studies also link smartphone use with sleeplessness and anxiety. “Some other interesting research has shown that the more friends you have on social media, the less friends you are likely to have in real life, the less actual contacts you have and the greater likelihood you have of depression,”
  • 12-month-old children whose carers regularly use smartphones have poorer facial expression perception
  • turning off software alarms and notifications, putting strict time limits around screen use, keeping screens out of bedrooms, minimising social media and replacing screens with paper books, paper maps and other non-screen activities can all help minimise harm from digital devices including smartphones