
Bucknell Digital Pedagogy & Scholarship: Group items tagged "Internet"


Todd Suomela

The Internet as existential threat « Raph's Website - 1 views

  • Our medical systems have terrible Internet security: MRI machines you can connect to over USB that still accept “admin:password” for root access. That’s horrifying, sure, but it’s not an attack at scale. More frightening: we’re busily uploading all our medical records to the cloud. Take down that cloud, and no patients can be treated, because nobody will know what conditions they have or what meds they are on. Software swallows your insulin pumps and your pacemakers. To kill people, all you need to do is hack that database, or simply erase it or block access to it. After all, we don’t tend to realize that in an Internet of Things, humans are just Things too.

    As this software monster has encroached on stuff like election systems, the common reaction has been to go back to paper. So let’s consider a less obvious example: we should be going back to paper for our libraries too. We’ve outsourced so much of our knowledge to digital that the amount of knowledge available in analog form has dropped notably. There are fewer librarians, in fewer libraries, with smaller collections than there used to be. If the net goes down, how much reference material that was accessible thirty years ago is simply not accessible anymore? Google Search is “critical cultural infrastructure.” How much redundancy do we actually have? Could a disconnected town actually educate its children?

    How critical is Google as a whole? If Google went down for a month, I am pretty sure we would see worldwide economic collapse. How much of the world economy passes through Google hosting? How much of it is in GMail? How much is dependent on Google Search, Google Images, Google Docs? The answer is a LOT. And because financial systems are now also just-in-time, ten thousand corporate blips, in which real estate agencies, local car washes, a huge pile of software companies, a gaggle of universities, and so on are suddenly 100% unable to function digitally (no payroll! no insurance verification!), would absolutely have ripple effects on their suppliers and their customers, and thence on the worldwide economic market. Interconnection without redundancy increases the odds of cascades.
  • But just as critically, governments and state actors seem to be the source of so many of the problems precisely because the Internet is now too many forms of critical infrastructure, and therefore too juicy a target. If software eats everything, then the ability to kill software is the ability to kill anything. Net connectivity becomes the single point of failure for every system connected to it. Even if the Net itself is designed to route around damage, that doesn’t help if it is the single vector of attack that can take down any given target. It’s too juicy a target for the military, too juicy a target for terror, too juicy a target for criminal ransom. The old adage goes “when they came for this, I said nothing. When they came for that…”; we all know it. Consider: the more we gleefully hand over to the cloud because we want convenience, big data, personalization, and so on, the more we create a single thing that can be taken from us in an instant. We’ve decided to subscribe to everything, instead of owning it. When they came for your MP3s, your DVDs: fine, not “critical infrastructure.” When they came for your resumes: OK, getting closer.
  • As we rush towards putting more and more things “in the cloud,” as we rush towards an Internet of Things with no governance beyond profit motive and anarchy, what we’re effectively doing is creating a massive single point of failure for every system we put in it.
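The single-point-of-failure argument in the annotations above can be made concrete with a toy model (our illustrative sketch, not from the original post; the system names are hypothetical): represent systems as a dependency graph with no redundancy, knock one node out, and watch the failure propagate to everything downstream.

```python
from collections import deque

def cascade(dependents, failed_node):
    """Return every system that goes down when failed_node fails.

    dependents maps a system to the systems that depend on it; with
    no redundancy, a system fails as soon as anything it depends on
    fails, so failure spreads transitively (breadth-first here).
    """
    down = {failed_node}
    queue = deque([failed_node])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, ()):
            if dep not in down:
                down.add(dep)
                queue.append(dep)
    return down

# Hypothetical topology: everything ultimately depends on "net".
deps = {
    "net": ["cloud", "payments"],
    "cloud": ["medical-records", "payroll"],
    "payments": ["payroll"],
}
print(sorted(cascade(deps, "net")))
# ['cloud', 'medical-records', 'net', 'payments', 'payroll']
```

Losing a leaf ("payroll") takes down one system; losing the shared root takes down all of them, which is the asymmetry the post is warning about.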
Leslie Harris

Research: The Proof is In! Multi-Tasking in Class Reduces Test Scores -- Campus Technology - 1 views

  • The study itself involved student self-reporting of Internet use during lecture, and the researchers used ACT scores as an indicator of academic ability, but the results are not surprising: students who surfed the Internet during lecture did worse on course exams. Duh!
Todd Suomela

DSHR's Blog: Ithaka's Perspective on Digital Preservation - 0 views

  • Second, there is very little coverage of Web archiving, which is clearly by far the largest and most important digital preservation initiative for both current and future readers. The Internet Archive rates only two mentions, in the middle of a list of activities and in a footnote. This is despite the fact that archive.org is currently the 211th most visited site in the US (272nd globally), with over 5.5M registered users, adding over 500 per day, and serving nearly 4M unique IPs per day. For comparison, the Library of Congress currently ranks 1439th in the US (5441st globally). The Internet Archive's Web collection alone probably dwarfs all other digital preservation efforts combined, both in size and in usage, not to mention its vast collections of software, digitized books, audio, video and TV news.

    Rieger writes: There is a lack of understanding about how archived websites are discovered, used, and referenced. “Researchers prefer to cite the original live web as it is easier and shorter,” pointed out one of the experts. “There is limited awareness of the existence of web archives and a lack of community consensus on how to treat them in scholarly work. The problems are not about technology any more; they are about usability, awareness, and scholarly practices.” The interviewee referred to a recent CRL study, based on an analysis of referrals to archived content from papers, which concluded that the citations were mainly to articles about web archiving projects.

    It is surprising that the report doesn't point out that the responsibility for educating scholars in the use of these resources lies with the "experts and thought leaders" from institutions such as the University of California, Michigan State, Cornell, MIT, NYU and Virginia Tech. That these "experts and thought leaders" don't consider the Internet Archive a resource worth mentioning might have something to do with the fact that their scholars don't know they should be using it. A report whose first major section, entitled "What's Working Well", totally fails to acknowledge the single most important digital preservation effort of the last two decades clearly lacks credibility.
  • Finally, there is no acknowledgement that the most serious challenge facing the field is economic. Except for a few corner cases, we know how to do digital preservation, we just don't want to pay enough to have it done. Thus the key challenge is to achieve some mixture of significant increase in funding for, and significant cost reduction in the processes of, digital preservation. Information technology processes naturally have very strong economies of scale, which result in winner-take-all markets (as W. Brian Arthur pointed out in 1985). It is notable that the report doesn't mention the winners we already have, in Web and source code archiving, and in emulation. All are at the point where a competitor is unlikely to be viable. To be affordable, digital preservation needs to be done at scale. The report's orientation is very much "let a thousand flowers bloom", which in IT markets only happens at a very early stage. This is likely the result of talking only to people nurturing a small-scale flower, not to people who have already dominated their market niche. It is certainly a risk that each area will have a single point of failure, but trying to fight against the inherent economics of IT pretty much guarantees ineffectiveness.
  • 1) The big successes in the field haven't come from consensus building around a roadmap; they have come from idiosyncratic individuals such as Brewster Kahle, Roberto di Cosmo and Jason Scott identifying a need and building a system to address it no matter what "the community" thinks. We have a couple of decades of experience showing that "the community" is incapable of coming to a coherent consensus that leads to action on a scale appropriate to the problem. In any case, describing road-mapping as "research" is a stretch.

    2) Under severe funding pressure, almost all libraries have de-emphasized their custodial role of building collections in favor of responding to immediate client needs. Rieger writes: As one interviewee stated, library leaders have “shifted their attention from seeing preservation as a moral imperative to catering to the university’s immediate needs.” Regrettably, but inevitably given the economics of IT markets, this provides a market opportunity for outsourcing. Ithaka has exploited one such opportunity with Portico. This bullet does describe "research" in the sense of "market research". Success is, however, much more likely to come from the success of an individual effort than from a consensus about what should be done among people who can't actually do it.

    3) In the current climate, increased funding for libraries and archives simply isn't going to happen. These institutions have shown a marked reluctance to divert their shrinking funds from legacy to digital media. Thus the research topic with the greatest leverage in turning funds into preserved digital content is how to increase the cost-effectiveness of the tools, processes and infrastructure of digital preservation.
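The winner-take-all dynamic this review leans on (Arthur's increasing returns) can be illustrated with a toy simulation; this is our own sketch under simplified assumptions, not Arthur's actual 1985 model. Each new adopter picks a platform with probability proportional to its current adopter count raised to a power above 1 (superlinear network effects), so an early lead compounds until one platform holds nearly the whole market.

```python
import random

def adoption_shares(n_adopters, n_platforms=2, feedback=2.0, seed=42):
    """Toy increasing-returns simulation: each new adopter joins a
    platform with probability proportional to counts**feedback.
    With feedback > 1, early leads compound into winner-take-all."""
    rng = random.Random(seed)
    counts = [1.0] * n_platforms  # every platform seeded with one adopter
    for _ in range(n_adopters):
        weights = [c ** feedback for c in counts]
        pick = rng.choices(range(n_platforms), weights=weights)[0]
        counts[pick] += 1
    total = sum(counts)
    return [c / total for c in counts]

shares = adoption_shares(5_000)
print(f"leader's share: {max(shares):.3f}")
# the leader typically ends up with nearly the whole market,
# far from the 0.5 an even split would give
```

With `feedback` at or below 1 the market stays contested; above 1 it locks in, which is why "let a thousand flowers bloom" is a transient state in IT markets rather than an equilibrium.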
Todd Suomela

Jaron Lanier Interview on What Went Wrong With the Internet - 0 views

  • The theory of markets and capitalism is that when we compete, what we’re competing for is to get better at something that’s actually a benefit to people, so that everybody wins. So if you’re building a better mousetrap, or a better machine-learning algorithm, then that competition should generate improvement for everybody. But if it’s a purely abstract competition set up between insiders to the exclusion of outsiders, it might feel like a competition, it might feel very challenging and stressful and hard to the people doing it, but it doesn’t actually do anything for anybody else. It’s no longer genuinely productive for anybody, it’s a fake. And I’m a little concerned that a lot of what we’ve been doing in Silicon Valley has started to take on that quality. I think that’s been a problem in Wall Street for a while, but the way it’s been a problem in Wall Street has been aided by Silicon Valley. Everything becomes a little more abstract and a little more computer-based. You have this very complex style of competition that might not actually have much substance to it.
  • I think the fundamental mistake we made is that we set up the wrong financial incentives, and that’s caused us to turn into jerks and screw around with people too much. Way back in the ’80s, we wanted everything to be free because we were hippie socialists. But we also loved entrepreneurs because we loved Steve Jobs. So you wanna be both a socialist and a libertarian at the same time, and it’s absurd. But that’s the kind of absurdity that Silicon Valley culture has to grapple with. And there’s only one way to merge the two things, which is what we call the advertising model, where everything’s free but you pay for it by selling ads. But then because the technology gets better and better, the computers get bigger and cheaper, there’s more and more data — what started out as advertising morphed into continuous behavior modification on a mass basis, with everyone under surveillance by their devices and receiving calculated stimulus to modify them. So you end up with this mass behavior-modification empire, which is straight out of Philip K. Dick, or from earlier generations, from 1984. It’s this thing that we were warned about. It’s this thing that we knew could happen. Norbert Wiener, who coined the term cybernetics, warned about it as a possibility. And despite all the warnings, and despite all of the cautions, we just walked right into it, and we created mass behavior-modification regimes out of our digital networks. We did it out of this desire to be both cool socialists and cool libertarians at the same time.
  • But at the end, I have one that’s a spiritual one. The argument is that social media hates your soul. And it suggests that there’s a whole spiritual, religious belief system along with social media like Facebook that I think people don’t like. And it’s also fucking phony and false. It suggests that life is some kind of optimization, like you’re supposed to be struggling to get more followers and friends. Zuckerberg even talked about how the new goal of Facebook would be to give everybody a meaningful life, as if something about Facebook is where the meaning of life is. It suggests that you’re just a cog in a giant global brain or something like that. The rhetoric from the companies is often about AI, that what they’re really doing — like YouTube’s parent company, Google, says what they really are is building the giant global brain that’ll inherit the earth and they’ll upload you to that brain and then you won’t have to die. It’s very, very religious in the rhetoric. And so it’s turning into this new religion, and it’s a religion that doesn’t care about you. It’s a religion that’s completely lacking in empathy or any kind of personal acknowledgment. And it’s a bad religion. It’s a nerdy, empty, sterile, ugly, useless religion that’s based on false ideas. And I think that of all of the things, that’s the worst thing about it. I mean, it’s sort of like a cult of personality. It’s like in North Korea or some regime where the religion is your purpose to serve this one guy. And your purpose is to serve this one system, which happens to be controlled by one guy, in the case of Facebook. It’s not as blunt and out there, but that is the underlying message of it and it’s ugly and bad. I loathe it, and I think a lot of people have that feeling, but they might not have articulated it or gotten it to the surface because it’s just such a weird and new situation.
Todd Suomela

Rejecting Test Surveillance in Higher Education by Lindsey Barrett :: SSRN - 0 views

  • "The rise of remote proctoring software during the COVID-19 pandemic illustrates the dangers of surveillance-enabled pedagogy built on the belief that students can't be trusted. These services, which deploy a range of identification protocols, computer and internet access limitations, and human or automated observation of students as they take tests remotely, are marketed as necessary to prevent cheating. But the success of these services in their stated goal is ill-supported at best and discredited at worst, particularly given their highly over-inclusive criteria for "suspicious" behavior. Meanwhile, the harms they inflict on students are clear: severe anxiety among test-takers, concerning data collection and use practices, and discriminatory flagging of students of color and students with disabilities have provoked widespread outcry from students, professors, privacy advocates, policymakers, and sometimes universities themselves. To make matters worse, the privacy and civil rights laws most relevant to the use of these services are generally inadequate to protect students from the harms they inflict. Colleges and universities routinely face difficult decisions that require reconciling conflicting interests, but whether to use remote proctoring software isn't one of them. Remote proctoring software is not pedagogically beneficial, institutionally necessary, or remotely unavoidable, and its use further entrenches inequities in higher education that schools should be devoted to rooting out. Colleges and universities should abandon remote proctoring software, and apply the lessons from this failed experiment to their other existing or potential future uses of surveillance technologies and automated decision-making systems that threaten students' privacy, access to important life opportunities, and intellectual freedom."
Todd Suomela

Fluent in Social Media, Failing in Fake News: Generation Z, Online - Pacific Standard - 0 views

  • Instead of burrowing into a silo or vertical on a single webpage, as our Gen Z digital natives do, fact checkers tended to read laterally, a strategy that sent them zipping off a site to open new tabs across the horizontal axis of their screens. And their first stop was often the site we tell kids they should avoid: Wikipedia. But checkers used Wikipedia differently than the rest of us often do, skipping the main article to dive straight into the references, where more established sources can be found. They knew that the more controversial the topic, the more likely the entry was to be "protected," through the various locks Wikipedia applies to prevent changes by anyone except high-ranking editors. Further, the fact checkers knew how to use a Wikipedia article's "Talk" page, the tab hiding in plain sight right next to the article—a feature few students even know about, still less consult. It's the "Talk" page where an article's claims are established, disputed, and, when the evidence merits it, altered.
  • In the short term, we can do a few useful things. First, let's make sure that kids (and their teachers) possess some basic skills for evaluating digital claims. Some quick advice: When you land on an unfamiliar website, don't get taken in by official-looking logos or snazzy graphics. Open a new tab (better yet, several) and Google the group that's trying to persuade you. Second, don't click on the first result. Take a tip from fact checkers and practice click restraint: Scan the snippets (the brief sentence accompanying each search result) and make a smart first choice.
  • What if the answer isn't more media literacy, but a different kind of media literacy?
  • We call them "digital natives." Digitally naive might be more accurate. Between January of 2015 and June of 2016, my colleagues and I at the Stanford History Education Group surveyed 7,804 students across 12 states. Our goal was to take the pulse of civic online reasoning: students' ability to judge the information that affects them as citizens. What we found was a stunning and dismaying consistency. Young people's ability to navigate the Internet can be summed up in one word: bleak.
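The lateral-reading moves the fact checkers use on Wikipedia are mechanical enough to script. A minimal sketch (the URL patterns are standard MediaWiki conventions; the helper name and example title are ours): given an article title, build the entry points a checker opens in new tabs, namely the article, its "Talk" page where claims are disputed, and its edit history.

```python
from urllib.parse import quote

def wikipedia_links(title):
    """Return the lateral-reading entry points for a Wikipedia article:
    the article itself, its Talk page (where claims are established and
    disputed), and its edit history (where protection and churn show)."""
    slug = quote(title.replace(" ", "_"))
    base = "https://en.wikipedia.org/"
    return {
        "article": f"{base}wiki/{slug}",
        "talk": f"{base}wiki/Talk:{slug}",
        "history": f"{base}w/index.php?title={slug}&action=history",
    }

links = wikipedia_links("Climate change")
print(links["talk"])  # https://en.wikipedia.org/wiki/Talk:Climate_change
```

Opening all three in parallel tabs is exactly the "horizontal" reading pattern described above, as opposed to burrowing into the article alone.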
Todd Suomela

The Tincture of Time - Should Journals Return to Slower Publishing Practices? - The Sch... - 0 views

  • This speed may create lower levels of accuracy and reliability. I’ve seen first-hand how rapid publication practices, driven by competitive forces as well as the tempting capabilities of Internet publishing, can lead to an increase in corrections and errata at scientific journals. There is a price to pay for squeezing the time to publication down to its absolute minimum. The aggregate rise in corrections and retractions across the journals system may provide more evidence that haste makes waste. With questions about the quality and legitimacy of the reports published in journals arising in the mainstream media, the costs of speed to our brands and to the overall reputation of the industry may be worth reconsidering. Moreover, is rapid publication where the competitive advantage currently resides? Or has the strategic ground shifted?
  • Perhaps, instead, the strategic differentiator for journals isn’t unpredictable schedules, rapid publication, and error-prone publishing of scientific reports. With preprint servers supporting rapid, preliminary publication in an environment that is actually more supportive of amendments/corrections, speed, and unpredictability, perhaps journals should rethink shouldering the load of and courting the risks of rapid publication. More importantly, there are indications that coordinating with your audience, taking more time to fact-check and edit, and returning to a higher level of quality may be the smart move. Journals don’t have to perform every publishing trick anymore. Maybe it’s time to return to doing what they do best — vetting information carefully, validating claims as best they can, and ensuring novelty, quality, relevance, and importance around what they choose to publish.
Todd Suomela

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance - 0 views

  • Unfortunately, the enemy is complacency. Tech workers trust their founders, find labor organizing distasteful, and are happy to leave larger ethical questions to management. A workplace free of 'politics' is just one of the many perks the tech industry offers its pampered employees. So our one chance to enact meaningful change is slipping away. Unless something happens to mobilize the tech workforce, or unless the advertising bubble finally bursts, we can expect the weird, topsy-turvy status quo of 2017 to solidify into the new reality. But even though we're likely to fail, all we can do is try. Good intentions are not going to make these structural problems go away. Talking about them is not going to fix them. We have to do something.
  • Can we fix it? Institutions can be destroyed quickly; they take a long time to build. A lot of what we call ‘disruption’ in the tech industry has just been killing flawed but established institutions, and mining them for parts. When we do this, we make a dangerous assumption about our ability to undo our own bad decisions, or the time span required to build institutions that match the needs of new realities. Right now, a small caste of programmers is in charge of the surveillance economy, and has broad latitude to change it. But this situation will not last for long. The kinds of black-box machine learning that have been so successful in the age of mass surveillance are going to become commoditized and will no longer require skilled artisans to deploy. Moreover, powerful people have noted and benefited from the special power of social media in the political arena. They will not sit by and let programmers dismantle useful tools for influence and social control. It doesn’t matter that the tech industry considers itself apolitical and rationalist. Powerful people did not get to be that way by voluntarily ceding power. The window of time in which the tech industry can still act is brief: while tech workers retain relatively high influence in their companies, and before powerful political interests have put down roots in the tech industry.