Digit_al Society / Group items tagged "responsibility"

dr tech

- 0 views

  •  
    "Exposure to false and inflammatory content is remarkably low, with just 1% of Twitter users accounting for 80% of exposure to dubious websites during the 2016 U.S. election. This is heavily concentrated among a small fringe of users actively seeking it out. Examples: 6.3% of YouTube users were responsible for 79.8% of exposure to extremist channels from July to December 2020, 85% of vaccine-sceptical content was consumed by less than 1% of US citizens in the 2016-2019 period. Conventional wisdom blames platform algorithms for spreading misinformation. However, evidence suggests user preferences play an outsized role. For instance, a mere 0.04% of YouTube's algorithmic recommendations directed users to extremist content. It's tempting to draw a straight line between social media usage and societal ills. But studies rigorously designed to untangle cause and effect often come up short. "
dr tech

Frank McCourt Organizing a People's Bid to Acquire TikTok | Project Liberty - 0 views

  •  
    "Frank McCourt, Founder of Project Liberty and Executive Chairman of McCourt Global, announces that Project Liberty is building a consortium to purchase TikTok and rearchitect the platform to put people in control of their digital identities and data Leading technologists and academics, including Jonathan Haidt, David Clark, and Sir Tim Berners-Lee express support for Project Liberty's vision for a more open, inclusive and responsible internet"
dr tech

Benjamin Riley: AI is Another Ed Tech Promise Destined to Fail - The 74 - 0 views

  •  
    "It's an interesting question. I'm almost not sure how to answer it, because there is no thinking happening on the part of an LLM. A large language model takes the prompts and the text that you give it and tries to come up with something that is responsive and useful in relation to that text. And what's interesting is that certain people - I'm thinking of Mark Andreessen most prominently - have talked about how amazing this is conceptually from an education perspective, because with LLMs you will have this infinitely patient teacher. But that's actually not what you want from a teacher. You want, in some sense, an impatient teacher who's going to push your thinking, who's going to try to understand what you're bringing to any task or educational experience, lift up the strengths that you have, and then work on building your knowledge in areas where you don't yet have it. I don't think LLMs are capable of doing any of that. As you say, there's no real thinking going on. It's just a prediction machine. There's an interaction, I guess, but it's an illusion. Is that the word you would use? Yes. It's the illusion of a conversation. "
dr tech

How did one CrowdStrike mistake stop the world? We asked 3 experts. | Mashable - 0 views

  •  
    ""The problem is that we're really stuck in a digital monoculture, where decades of anti-competitive practices have created it so that just one system is responsible for so much of what we rely on from everything from airlines to hospitals to schools," Mir said. "One mistake that creates a big failure, it happens, it's an inevitability. But for it to have this sort of impact is a policy failure.""
dr tech

South Korea's AI textbook program faces skepticism from parents | TechCrunch - 0 views

  •  
    "The tablets are scheduled to be introduced next year, and by 2028, teachers are supposed to be using these AI textbooks for all subjects except music, art, physical education and ethics. The government hasn't shared many details about how it will all work, except that the material is supposed to be customized for different speeds of learning, with teachers using dashboards to monitor how students are doing. In response, more than 50,000 parents have signed a petition demanding that the government focus less on new tech and more on students' overall well-being: "We, as parents, are already encountering many issues at unprecedented levels arising from [our children's] exposure to digital devices." Lee Sun-youn, a mother of two, told FT, "I am worried that too much usage of digital devices could negatively affect their brain development, concentration span and ability to solve problems - they already use smartphones and tablets too much.""
dr tech

Charter school is replacing teachers with AI | Popular Science - 0 views

  •  
    Instead, affiliate charter schools seek applicants for positions like a "High School Guide." These $50/hr employees will help design "creative, immersive learning experiences that teach students to leverage cutting-edge AI tools and innovative strategies," among other responsibilities. "Think of yourself as a brand consultant for 50 startups simultaneously, guiding diverse branding needs from business to personal expertise positioning," reads one job listing. Apart from students' brand development, the opening also stipulates candidates must possess "demonstrated expertise in social media management, content creation, and audience engagement."
dr tech

Are chatbots of the dead a brilliant idea or a terrible one? | Aeon Essays - 0 views

  •  
    "'Fredbot' is one example of a technology known as chatbots of the dead, chatbots designed to speak in the voice of specific deceased people. Other examples are plentiful: in 2016, Eugenia Kuyda built a chatbot from the text messages of her friend Roman Mazurenko, who was killed in a traffic accident. The first Roman Bot, like Fredbot, was selective, but later versions were generative, meaning they generated novel responses that reflected Mazurenko's voice. In 2020, the musician and artist Laurie Anderson used a corpus of writing and lyrics from her late husband, Velvet Underground's co-founder Lou Reed, to create a generative program she interacted with as a creative collaborator. And in 2021, the journalist James Vlahos launched HereAfter AI, an app anyone can use to create interactive chatbots, called 'life story avatars', that are based on loved ones' memories. Today, enterprises in the business of 'reinventing remembrance' abound: Life Story AI, Project Infinite Life, Project December - the list goes on."
dr tech

16 Musings on AI's Impact on the Labor Market - 0 views

  •  
    "In the short term, generative AI will replace a lot of people because productivity increases while demand stays the same due to inertia. In the long term, the creation of new jobs compensates for the loss of old ones, resulting in a net positive outcome for humans who leave behind jobs no one wants to do. The most important aspect of any technological revolution is the transition from before to after. Timing and location matters: older people have a harder time reinventing themselves into a new trade or craft. Poor people and poor countries have less margin to react to a wave of unemployment. Digital automation is quicker and more aggressive than physical automation because it bypasses logistical constraints-while ChatGPT can be infinitely cloned, a metallic robot cannot. Writing and painting won't die because people care about the human factor first and foremost; there are already a lot of books we can't possibly read in one lifetime so we select them as a function of who's the author. Even if you hate OpenAI and ChatGPT for being responsible for the lack of job postings, I recommend you ally with them for now; learn to use ChatGPT before it's too late to keep your options open. Companies are choosing to reduce costs over increasing output because the sectors where generative AI is useful can't artificially increase demand in parallel to productivity. (Who needs more online content?) Our generation is reasonably angry at generative AI and will bravely fight it. Still, our offspring-and theirs-will be grateful for a transformed world whose painful transformation they didn't have to endure. Certifiable human-made creative output will reduce its quantity but multiply its value in the next years because demand specific for it will grow; automation can mimic 99% of what we do but never reaches 100%. The maxim "AI won't take your job, a person using AI will; yes, you using AI will replace yourself not using it" applies more in the long term than the
dr tech

Early methods for studying affective use and emotional well-being on ChatGPT | OpenAI - 0 views

  •  
    "Our findings show that both model and user behaviors can influence social and emotional outcomes. Effects of AI vary based on how people choose to use the model and their personal circumstances. This research provides a starting point for further studies that can increase transparency, and encourage responsible usage and development of AI platforms across the industry."
dr tech

Amid Backlash, Duolingo Backtracks on Plans for AI Pivot | PCMag - 0 views

  •  
    "But Duolingo now seems to have changed its tune, at least in terms of hiring. CEO Luis von Ahn wrote in a LinkedIn post earlier this week: "To be clear: I do not see AI as replacing what our employees do (we are, in fact, continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it-and use it responsibly-the better off we will be in the long run." Though many language learners obviously appreciate the human touch on their materials, Duolingo isn't the only one leaning toward AI for language education. Last month, Google applied its flagship Google Gemini AI model to create three new tools, dubbed Little Language Lessons, accessible via the Google Labs page. However, Google did dub the new set of tools as "just an early exploration.""
dr tech

Scaffolding Student Writing in the Age of AI - 0 views

  •  
    "We typically begin the semester by asking students to reflect on the formulas they've learned in the past and to consider how those shape their writing. We now ask those same reflection questions about AI outputs. Student responses, as Jennifer has written elsewhere, are revealing: The kind of writing that Chat does, it's what our teachers try to get us to do. It's like five-paragraph essays, and perfect paragraph[s] that don't have any personality, which we were taught in high school. It does what school has trained us to do. Like write a perfectly formatted essay that is based on some random people's ideas."
dr tech

A real issue: video game developers are being accused of using AI - even when they aren... - 0 views

  •  
    "In April, game developer Stamina Zero achieved what should have been a marketing slam-dunk: the launch trailer for the studio's game Little Droid was published on PlayStation's official YouTube channel. The response was a surprise for the developer. The game looks interesting, people wrote in the comments, but was "ruined" by AI art. But the game's cover art, used as the thumbnail for the YouTube video, was in fact made by a real person, according to developer Lana Ro. "We know the artist, we've seen her work, so such a negative reaction was unexpected for us, and at first we didn't know how to respond or how to feel," Ro said. "We were confused.""
longspagetti

BBC defends delay of 'truly transformational' micro:bit (Wired UK) - 0 views

  •  
    The BBC has defended its plan to supply a million schoolchildren with free micro:bit computers after it was criticised for delaying the launch until at least 2016.
dr tech

Special report: The simulations driving the world's response to COVID-19 - 0 views

  •  
    "But, as he and other modellers warn, much information about how SARS-CoV-2 spreads is still unknown and must be estimated or assumed - and that limits the precision of forecasts. An earlier version of the Imperial model, for instance, estimated that SARS-CoV-2 would be about as severe as influenza in necessitating the hospitalization of those infected. That turned out to be incorrect."
melodyyy

Australia tests 'Orwellian' Covid app which uses facial recognition to enforce quaranti... - 2 views

  • Users will have 15 minutes, when the app pings them, to prove they are at their homes by showing the app their faces and giving it access to geo-location data. Should they fail to do so, the local police department will be sent to follow up in person.
  • “Location and biometric data is extremely valuable. Any government initiative that wishes to collect these types of personal information should have robust safeguards in place before it is rolled out, to ensure that information is not later used or disclosed for other purposes,”
  • According to its privacy statement, Home Quarantine SA will encrypt data “immediately upon submission” before sending it to an Australian server “under control of the Government of South Australia”.
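
    For the "encrypt immediately upon submission" claim, a minimal client-side sketch could look like the following. The use of Fernet and the field names are assumptions made for illustration only; nothing here reflects the actual implementation of Home Quarantine SA.

    ```python
    # Minimal sketch of encrypt-before-upload (assumed design, not the actual app).
    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in a real app the key would not live on the device
    cipher = Fernet(key)

    submission = {
        "check_in_id": "demo-001",  # invented field names
        "location": {"lat": -34.93, "lon": 138.60},
        "face_image_b64": "<base64-encoded selfie>",
    }

    # Encrypt immediately upon submission, before anything leaves the device.
    ciphertext = cipher.encrypt(json.dumps(submission).encode("utf-8"))
    print(len(ciphertext), "encrypted bytes ready to send to the server")
    ```

    The safeguard the quote calls for is less about the cipher itself than about who holds the key and what the server may later do with the decrypted data.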
aren01

Social networks' anti-racism policies belied by users' experience | Race | The Guardian - 1 views

  •  
    ""The abhorrent racist abuse directed at England players last night has absolutely no place on Twitter," the social network said on Monday morning. A Facebook spokesperson said similarly: "No one should have to experience racist abuse anywhere, and we don't want it on Instagram." But the statements bore little relation to the experience of the company's users. On Instagram, where thousands left comments on the pages of Marcus Rashford, Bukayo Saka and Jadon Sancho, supportive users who tried to flag abuse to the platform were surprised by the response."
  •  
    "The world's biggest social networks say racism isn't welcome on their platforms, but a combination of poor enforcement and weak rules have allowed hate to flourish."
aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams.8 8. April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring,Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process.9 9. Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?,Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate).10 10. Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically-now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred.1 1. Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints.2 2. Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."
dr tech

TechScape: Is 'banning' TikTok protecting users or censorship? It depends who you ask |... - 0 views

  •  
    "The US battle with TikTok over data privacy concerns and Chinese influence has been heating up for years, and recent measures have brought college campuses to the forefront - with a number of schools banning the app entirely on campus wifi. Students have responded, of course, on TikTok. Taking advantage of viral sounds, they have expressed outrage at their favourite app being blocked at universities like Auburn, Oklahoma and Texas A&M in the past few months. "Do they not realize people in college are actually adults?" one user wrote. "We should make our own independent decision to use TikTok or not," another said."