Instructional & Media Services at Dickinson College: Group items tagged US

Ed Webb

Please do a bad job of putting your courses online - Rebecca Barrett-Fox - 0 views

  • For my colleagues who are now being instructed to put some or all of the remainder of their semester online, now is a time to do a poor job of it. You are NOT building an online class. You are NOT teaching students who can be expected to be ready to learn online. And, most importantly, your class is NOT the highest priority of their OR your life right now. Release yourself from high expectations right now, because that’s the best way to help your students learn.
  • Remember the following as you move online: Your students know less about technology than you think. Many of them know less than you. Yes, even if they are digital natives and younger than you. They will be accessing the internet on their phones. They have limited data. They need to reserve it for things more important than online lectures. Students who did not sign up for an online course have no obligation to have a computer, high-speed wifi, a printer/scanner, or a camera. Do not even survey them to ask if they have these things. Even if they do, they are not required to tell you this. And if they do now, that doesn’t mean that they will when something breaks and they can’t afford to fix it because they just lost their job at the ski resort or off-campus bookstore. Students will be sharing their technology with other household members. They may have LESS time to do their schoolwork, not more.
  • Social isolation contributes to mental health problems. Social isolation contributes to domestic violence.
  • Do not require synchronous work. Students should not need to show up at a specific time for anything. REFUSE to do any synchronous work.
  • Do not record lectures unless you need to. (This is fundamentally different from designing an online course, where recorded information is, I think, really important.) They will be a low priority for students, and they take up a lot of resources on your end and on theirs. You have already built a rapport with them, and they don’t need to hear your voice to remember that.
  • Do record lectures if you need to. When information cannot be learned otherwise, include a lecture. Your university already has some kind of tech to record lectures. DO NOT simply record in PowerPoint as the audio quality is low. While many people recommend lectures of only 5 minutes, I find that my students really do listen to longer lectures. Still, remember that your students will be frequently interrupted in their listening, so a good rule is 1 concept per lecture. So, rather than a lecture on ALL of, say, gender inequality in your Intro to Soc course, deliver 5 minutes on pay inequity (or 15 minutes or 20 minutes, if that’s what you need) and then a separate lecture on #MeToo and yet another on domestic violence. Closed caption them using the video recording software your university provides. Note that YouTube also generates closed captions [edited to add: they are not ADA compliant, though]. If you don’t have to include images, skip the video recording and do a podcast instead.
  • Editing is a waste of your time right now.
  • Listen for them asking for help. They may be anxious. They may be tired. Many students are returning to their parents’ home where they may not be welcome. Others will be at home with partners who are violent. School has been a safe place for them, and now it’s not available to them. Your class may matter to them a lot when they are able to focus on it, but it may not matter much now, in contrast to all the other things they have to deal with. Don’t let that hurt your feelings, and don’t hold it against them in future semesters or when they come back to ask for a letter of recommendation.
  • Allow every exam or quiz to be taken at least twice, and tell students that this means that if there is a tech problem on the first attempt, the second attempt is their chance to correct it. This will save you from the work of resetting tests or quizzes when the internet fails or some other tech problem happens. And since it can be very hard to discern whether such failures are genuine or just a student angling for a second attempt, you also avoid having to deal with cheaters.
  • Do NOT require students to use online proctoring or force them to have themselves recorded during exams or quizzes. This is a fundamental violation of their privacy, and they did NOT sign up for that when they enrolled in your course.
  • Circumvent the need for proctoring by making every exam open-notes, open-book, and open-internet. The best way to keep students from taking tests together or sharing answers is to use a large test bank (a quick sketch of why this works follows this list).
  • Remind them of due dates. It might feel like handholding, but be honest: Don’t you appreciate the text reminder from your dentist that you have an appointment tomorrow? Your LMS has an announcement system that allows you to write an announcement now and post it later.
  • Make everything self-grading if you can (yes, multiple choice and T/F on quizzes and tests) or low-stakes (completed/not completed).
  • Don’t do too much. Right now, your students don’t need it. They need time to do the other things they need to do.
  • Make all work due on the same day and time for the rest of the semester. I recommend Sunday night at 11:59 pm.
  • This advice is very different from that which I would share if you were designing an online course. I hope it’s helpful, and for those of you moving your courses online, I hope it helps you understand the labor that is required in building an online course a bit better.
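As a rough illustration of the test-bank point above, here is a minimal Python sketch with invented numbers (a 200-question bank, 20 questions per attempt): when each student's quiz is an independent random draw from a large bank, any two students share only a couple of questions on average, so comparing answers buys little. Nothing here reflects any particular LMS; it only models the random draw.

```python
import random

def draw_quiz(bank, k, seed=None):
    """Sample k questions from a bank, as an LMS does when each
    student's attempt gets a fresh random draw."""
    return random.Random(seed).sample(bank, k)

# Invented numbers: a 200-question bank, 20 questions per attempt.
bank = [f"Q{i}" for i in range(1, 201)]
student_a = draw_quiz(bank, 20, seed=1)
student_b = draw_quiz(bank, 20, seed=2)

# Each of A's 20 questions lands in B's draw with probability 20/200,
# so the expected overlap is 20 * (20 / 200) = 2 questions out of 20.
overlap = set(student_a) & set(student_b)
print(f"Shared questions: {len(overlap)} of 20 (expected ~2)")
```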
Ed Webb

Guest Post: The Complexities of Certainty | Just Visiting - 0 views

  • Privileges abound in academia, but so do experiences of loss, instability and fear. And into this situation we were called to respond to a pandemic.
  • It is tempting to reach for certainties when everything around us is in chaos, and for a vast swath of higher ed instructors, the rapid shift from face-to-face teaching to emergency distance learning has been chaos. Small wonder, then, that people have offered -- and clung to -- advice that seeks to bring order to disorder. Many people have advised instructors to prioritize professionalism, ditching the sweatpants and putting away the visible clutter in our homes before making a Zoom call, upholding concepts like "rigor" so that our standards do not slip. To some, these appeals to universal principles are right-minded and heartening, a bulwark against confusion and disarray. But to others they have felt oppressive, even dangerously out of touch with the world in which we and our students live.
  • certainties can be dangerous; their very power is based upon reifying well-worn inequities dressed up as tradition
  • there is no objective standard of success that we reach when we insist on rigor, which is too often deployed in defense of practices that are ableist and unkind
  • We are not just teachers, or scholars, or professionals. We are individuals thrown back in varying degrees on our own resources, worried about ourselves and our families and friends as we navigate the effects of COVID-19. Many of us are deeply anxious and afraid. Our pre-existing frailties have been magnified; we feel vulnerable, distracted and at sea. Our loved ones are sick, even dying. This is trauma. Few of us have faced such world-changing circumstances before, and as our minds absorb the impact of that reality, our brains cannot perform as capably as they usually would.
  • The most professional people I know right now are those who show up, day after day, to teach under extraordinary circumstances. Perhaps they do it with their laundry waiting to be folded, while their children interrupt, thinking constantly of their loved ones, weathering loneliness, wearing sweatpants and potentially in need of a haircut. But I know they do it while acknowledging this is not the world in which we taught two months before, and that every student is facing disruption, uncertainty and distraction. They do it creatively, making room for the unexpected, challenging their students, with the world a participant in the conversation.
Ed Webb

Waving the Asynchronous Flag - CogDogBlog - 0 views

  • in all the pivot talk, there’s a tinge of favoring the synchronous over the asynchronous
  • it’s not synchronous BAD / asynchronous GOOD
  • One of my favorite approaches to teaching, seen now, it seems, through sepia-toned web glasses, is participants/learners creating/writing/publishing in their own spaces, with the class space serving as a syndication hub. The old gold here is ds106, which, as I must remind, is still chugging along after 10 years; in that span, most every Name Your Tech Fad has crested and sunk to the bottom of the Gartner hype trough.
  • I think we ought to be placing a lot of thought and effort into asynchronous events and activities
  • The whole idea of distributed activity, woven in with daily challenges and assignment banks, was asynchronous beauty. But not without synchronous bits, be it class visits or running live sessions on ds106radio. Twas a mix.
Ed Webb

CRITICAL AI: Adapting College Writing for the Age of Large Language Models such as Chat... - 1 views

  • In the long run, we believe, teachers need to help students develop a critical awareness of generative machine models: how they work; why their content is often biased, false, or simplistic; and what their social, intellectual, and environmental implications might be. But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction.
  • Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking
  • Students are more likely to misuse text generators if they trust them too much. The term “Artificial Intelligence” (“AI”) has become a marketing tool for hyping products. For all their impressiveness, these systems are not intelligent in the conventional sense of that term. They are elaborate statistical models that rely on mass troves of data, often scraped indiscriminately from the web and used without knowledge or consent. (A toy illustration of what a “statistical model of text” means appears after this list.)
  • LLMs usually cannot do a good job of explaining how a particular passage from a longer text illuminates the whole of that longer text. Moreover, ChatGPT’s outputs on comparison and contrast are often superficial. Typically the system breaks down a task of logical comparison into bite-size pieces, conveys shallow information about each of those pieces, and then formulaically “compares” and “contrasts” in a noticeably superficial or repetitive way. 
  • In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities
  • ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.  
  • No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. 
  • LLMs often mimic the harmful prejudices, misconceptions, and biases found in data scraped from the internet
  • Show students examples of inaccuracy, bias, and logical and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Given that social media and the internet are full of bogus accounts using synthetic text, alerting students to the intrinsic problems of such writing could be beneficial. (See the “ChatGPT/LLM Errors Tracker,” maintained by Gary Marcus and Ernest Davis.)
  • Since ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content, it seems like a poor foundation for building students’ skills and may circumvent their independent thinking.
  • Good journalism on language models is surprisingly hard to find since the technology is so new and the hype is ubiquitous. Here are a few reliable short pieces:
    - “ChatGPT Advice Academics Can Use Now,” edited by Susan D’Agostino, Inside Higher Ed, January 12, 2023
    - “University students recruit AI to write essays for them. Now what?” by Katyanna Quach, The Register, December 27, 2022
    - “How to spot AI-generated text” by Melissa Heikkilä, MIT Technology Review, December 19, 2022
    - The Road to AI We Can Trust, a Substack by Gary Marcus, a cognitive scientist and AI researcher who writes frequently and lucidly about the topic; see also Gary Marcus and Ernest Davis, “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About” (2020)
  • “On the Dangers of Stochastic Parrots” by Emily M. Bender, Timnit Gebru, et al., FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, Association for Computing Machinery, doi: 10.1145/3442188. A blog post from a Critical AI @ Rutgers workshop on the essay summarizes its key arguments, reprises the discussion, and links to video-recorded presentations by digital humanist Katherine Bode (ANU) and computer scientist and NLP researcher Matthew Stone (Rutgers).
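To ground the phrase “elaborate statistical models” from the annotation above, here is a deliberately tiny statistical language model in Python: a bigram counter. It is a toy sketch with an invented corpus, orders of magnitude simpler than any LLM (which uses neural networks rather than raw counts), but it shows the core move the annotation describes: predicting the next word from patterns in training text, with no understanding anywhere.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which: the simplest possible
    statistical model of text."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, word, length=8, seed=0):
    """Repeatedly sample a likely next word. The output can look
    fluent while meaning nothing: form without understanding."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = rng.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

# Invented toy corpus; real systems train on billions of scraped words.
corpus = [
    "the model predicts the next word",
    "the model repeats patterns in the data",
    "the data shapes what the model says",
]
print(generate(train_bigrams(corpus), "the"))
```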
Ed Webb

The Generative AI Race Has a Dirty Secret | WIRED - 0 views

  • The race to build high-performance, AI-powered search engines is likely to require a dramatic rise in computing power, and with it a massive increase in the amount of energy that tech companies require and the amount of carbon they emit.
  • Every time we see a step change in online processing, we see significant increases in the power and cooling resources required by large processing centres
  • third-party analysis by researchers estimates that the training of GPT-3, which ChatGPT is partly based on, consumed 1,287 MWh and led to emissions of more than 550 tons of carbon dioxide equivalent, the same amount as a single person taking 550 roundtrips between New York and San Francisco (a quick arithmetic check of these figures appears after this list)
  • There’s also a big difference between utilizing ChatGPT—which investment bank UBS estimates has 13 million users a day—as a standalone product, and integrating it into Bing, which handles half a billion searches every day.
  • Data centers already account for around one percent of the world’s greenhouse gas emissions, according to the International Energy Agency. That is expected to rise as demand for cloud computing increases, but the companies running search have promised to reduce their net contribution to global heating. “It’s definitely not as bad as transportation or the textile industry,” Gómez-Rodríguez says. “But [AI] can be a significant contributor to emissions.”
  • The environmental footprint and energy cost of integrating AI into search could be reduced by moving data centers onto cleaner energy sources, and by redesigning neural networks to become more efficient, reducing the so-called “inference time”—the amount of computing power required for an algorithm to work on new data.
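As a quick arithmetic check of the GPT-3 figures quoted above: dividing the reported emissions by the reported energy gives the carbon intensity the estimate implies, roughly 0.43 kg of CO2-equivalent per kWh, a plausible grid average. Only the 1,287 MWh and 550-ton figures come from the article; the rest is derived arithmetic.

```python
# Figures quoted in the article (third-party estimate for GPT-3 training).
energy_mwh = 1287        # megawatt-hours of electricity consumed
emissions_tons = 550     # tons of CO2-equivalent emitted

# Implied carbon intensity of the electricity used in training:
kg_per_kwh = emissions_tons * 1000 / (energy_mwh * 1000)
print(f"Implied intensity: {kg_per_kwh:.2f} kg CO2e per kWh")  # ~0.43

# The article's comparison works out to one NY-SF roundtrip per ton:
print(f"{emissions_tons / 550:.0f} roundtrip(s) per ton of CO2e")
```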
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.” (A two-line illustration of double-exponential growth appears after this list.)
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
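For readers unsure what Altman’s “double exponential” remark (quoted above) amounts to mathematically, a minimal illustration comparing plain exponential growth, 2^n, with double-exponential growth, 2^(2^n). The values are purely illustrative and take no position on whether the claim about AI holds.

```python
# Exponential (2^n) vs. double-exponential (2^(2^n)) growth.
for n in range(1, 6):
    print(f"n={n}:  2^n = {2**n:>2}   2^(2^n) = {2**(2**n)}")
# By n=5 the first column reads 32; the second, 4294967296:
# double exponentials "get away from you fast".
```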
Ed Webb

Elluminate user agreement - 1 views

  • Collaborate may use, disclose, distribute or copy the information and may use any ideas, concepts or know-how contained in the information for any purpose
  • The Sessions and the Services may be amended, revised, replaced or terminated, in whole or in part, by Collaborate, at its sole discretion, at any time and from time to time, without notice
  • Blackborg has a terrible agreement for users of Elluminate - are we surprised?
Ed Webb

Google pushes journalists to create G+ profiles · kegill · Storify - 0 views

  • linking search results with Google+ was like Microsoft bundling Internet Explorer with Windows
  • Market strength in one place being used to leverage suboptimal products in another.
  • It's time to tell both Google and Bing that we want to decide for ourselves, thank you very much, if content is credible, instead of their making those decisions for us, decisions made behind hidden -- and suspicious -- algorithms.
Ed Webb

Reflections on open courses « Connectivism - 0 views

  • There is value in blending traditional with emergent knowledge spaces (online conferences and traditional journals).
    - Learners will create and innovate if they can express ideas and concepts in their own spaces and through their own expertise (i.e. hosting events in Second Life).
    - Courses are platforms for innovation. Too rigid a structure puts the educator in full control. Using a course as a platform fosters creativity… and creativity generates a bit of chaos and can be unsettling to individuals who prefer a structure with which they are familiar.
    - (Cliché) Letting go of control is a bit stressful, but surprisingly rewarding in the new doors it opens and liberating in how it brings others in to assist in running a course and advancing the discussion.
    - People want to participate… but they will only do so once they have “permission” and a forum in which to utilize existing communication/technological skills.
  • The internet is a barrier-reducing system. In theory, everyone has a voice online (the reality of technology ownership, digital skills, and internet access add an unpleasant dimension). Costs of duplication are reduced. Technology (technique) is primarily a duplicationary process, as evidenced by the printing press, assembly line, and now the content duplication ability of digital technologies. As a result, MOOCs embody, rather than reflect, practices within the digital economy. MOOCs reduce barriers to information access and to the dialogue that permits individuals (and society) to grow knowledge. Much of the technical innovation in the last several centuries has permitted humanity to extend itself physically (cars, planes, trains, telescopes). The internet, especially in recent developments of connective and collaborative applications, is a cognitive extension for humanity. Put another way, the internet offers a model where the reproduction of knowledge is not confined to the production of physical objects.
  • Knowledge is a mashup. Many people contribute. Many different forums are used. Multiple media permit varied and nuanced expressions of knowledge. And, because the information base (which is required for knowledge formation) changes so rapidly, being properly connected to the right people and information is vitally important. The need for proper connectedness to the right people and information is readily evident in intelligence communities. Consider the Christmas day bomber. Or 9/11. The information was being collected. But not connected.
  • The open model of participation calls into question where value is created in the education system. Gutenberg created a means to duplicate content. The social web creates the opportunity for many-to-many interactions and to add a global social layer on content creation and knowledge growth.
  • Whatever can be easily duplicated cannot serve as the foundation for economic value. Integration and connectedness are economic value points.
  • In education, content can easily be produced (it’s important but has limited economic value). Lectures also have limited value (easy to record and to duplicate). Teaching – as done in most universities – can be duplicated. Learning, on the other hand, can’t be duplicated. Learning is personal; it has to occur one learner at a time. The support needed for learners to learn is a critical value point.
  • Learning, however, requires a human, social element: both peer-based and through interaction with subject area experts
  • Content is readily duplicated, reducing its value economically. It is still critical for learning - all fields have core elements that learners must master before they can advance (research in expertise supports this notion).
    - Teaching can be duplicated (lectures can be recorded; Elluminate or a similar webconferencing system can bring people from around the world into a class). Assisting learners in the learning process, correcting misconceptions (see Private Universe), and providing social support and brokering introductions to other people and ideas in the discipline is critical.
    - Accreditation is a value statement - it is required when people don’t know each other. Content was the first area of focus in open education. Teaching (i.e. MOOCs) is the second. Accreditation will be next, but, before progress can be made, profile, identity, and peer-rating systems will need to improve dramatically. The underlying trust mechanism on which accreditation is based cannot yet be duplicated in open spaces (at least, it can’t be duplicated to such a degree that people who do not know each other will trust the mediating agent of open accreditation).
  • The skills that are privileged and rewarded in a MOOC are similar to those that are needed to be effective in communicating with others and interacting with information online (specifically, social media and information sources like journals, databases, videos, lectures, etc.). Creative skills are the most critical. Facilitators and learners need something to “point to”. When a participant creates an insightful blog post, a video, a concept map, or other resource/artifact it generally gets attention.
  • Intentional diversity – not necessarily a digital skill, but the ability to self-evaluate one’s network and ensure diversity of ideologies is critical when information is fragmented and at risk of being sorted by single perspectives/ideologies.
  • The volume of information is very disorienting in a MOOC. For example, in CCK08, the initial flow of postings in Moodle, three weekly live sessions, Daily newsletter, and weekly readings and assignments proved to be overwhelming for many participants. Stephen and I somewhat intentionally structured the course for this disorienting experience. Deciding who to follow, which course concepts are important, and how to form sub-networks and sub-systems to assist in sensemaking are required to respond to information abundance. The process of coping and wayfinding (ontology) is as much a lesson in the learning process as mastering the content (epistemology). Learners often find it difficult to let go of the urge to master all content, read all the comments and blog posts.
  • Learning is a social trust-based process.
  • Patience, tolerance, suspension of judgment, and openness to other cultures and ideas are required to form social connections and to negotiate misunderstandings.
  • An effective digital citizenry needs the skills to participate in important conversations. The growth of digital content and social networks raises the need for citizens to have the technical and conceptual skills to express their ideas and engage with others in those spaces. MOOCs are a first-generation testing ground for knowledge growth in a distributed, global, digital world. Their role in developing a digital citizenry is still unclear, but democratic societies require a populace with the skills to participate in growing a society’s knowledge. As such, MOOCs, or similar open, transparent learning experiences that foster citizens’ confidence to engage and create collaboratively, are important for the future of society.
Ed Webb

Study Finds No Link Between Social-Networking Sites and Academic Performance - Wired Ca... - 0 views

  • no connection between time spent on social-networking sites and academic performance
  • The trouble with social media is that it stunts the development of social skills. Now we learn that time spent on social media does not damage GPA, which implies it's benign. What a tragedy. And precisely the mistaken impression that social-development stunting craves.
  • The study in question focused only on first-year students, and traditional ones at that. (A read-through of the study revealed that the sample included mostly 18- and 19-year-olds, with a few (3%) 20- to 29-year-olds.)
  • Such a broad generalization based on one narrowly defined study, along with the suggestion that college students should be unconcerned about how much time they spend on SNS, is, at best, naive and, at worst, irresponsible.
  • Will there soon be a study determining that partying had no effect on grades, despite "how often students used them or how many they used"?
Ed Webb

Using VoiceThread to Give Students a Voice Outside the Classroom - ProfHacker - The Chr... - 1 views

Ed Webb

Social media in the public sector - common sense | A dragon's best friend - 2 views

  • Sound advice for us all, really (minus the specifics about the Dept of Justice).
Ryan Burke

Wolfram|Alpha - 0 views

  • Today's Wolfram|Alpha is the first step in an ambitious, long-term project to make all systematic knowledge immediately computable by anyone. You enter your question or calculation, and Wolfram|Alpha uses its built-in algorithms and growing collection of data to compute the answer.
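As a sketch of the question-in, answer-out interaction the blurb describes, here is how one might query Wolfram|Alpha's Short Answers API (a later addition to the product) from Python. The endpoint and the `appid`/`i` parameters follow Wolfram's public API documentation; `YOUR_APP_ID` is a placeholder for a real key, and the example query is invented.

```python
import requests

def ask_wolfram_alpha(query: str, app_id: str = "YOUR_APP_ID") -> str:
    """Send a plain-text question to the Wolfram|Alpha Short Answers
    API and return its single-line computed answer."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": app_id, "i": query},
        timeout=10,
    )
    resp.raise_for_status()  # a bad app id or unanswerable query raises here
    return resp.text

# Example usage, assuming a valid app id from developer.wolframalpha.com:
# print(ask_wolfram_alpha("population of France divided by area of France"))
```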