
Digital Society / Group items tagged language


dr tech

What does it mean to be human in the age of technology? | Technology | The Guardian - 1 views

  •  
    "Second, there is the question of how we see ourselves. Human nature is a baggy, capacious concept, and one that technology has altered and extended throughout history. Digital technologies challenge us once again to ask what place we occupy in the universe: what it means to be creatures of language, self-awareness and rationality."
aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams. [8: April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring, Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).] On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process. [9: Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?, Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.] Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate). [10: Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order to obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/]
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically; now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred. [1: Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.] Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints. [2: Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.] And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."
dr tech

How Does Spotify Know You So Well? | by Sophia Ciocca | Medium - 0 views

  •  
    "To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Collaborative Filtering models (i.e. the ones that Last.fm originally used), which analyze both your behavior and others' behaviors. Natural Language Processing (NLP) models, which analyze text. Audio models, which analyze the raw audio tracks themselves."
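The excerpt names three model families. The first, collaborative filtering, can be sketched in a few lines: score a user's unheard tracks by the similarity-weighted play counts of other users. This is a minimal illustration with an invented toy matrix, not Spotify's actual pipeline.

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = tracks (illustrative data).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two play-count vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user, plays, k=1):
    """Rank unheard tracks by similarity-weighted plays of other users."""
    sims = np.array([cosine_sim(plays[user], plays[v])
                     for v in range(len(plays)) if v != user])
    others = np.delete(plays, user, axis=0)
    scores = sims @ others             # weighted sum of neighbours' plays
    scores[plays[user] > 0] = -np.inf  # never re-recommend heard tracks
    return np.argsort(scores)[::-1][:k].tolist()

print(recommend(1, plays))  # → [1]: user 1's nearest neighbour loved track 1
```

Real systems factorize a sparse matrix of millions of users rather than comparing rows directly, but the intuition is the same: users with similar histories predict each other's tastes.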
dr tech

ChatGPT Stole Your Work. So What Are You Going to Do? | WIRED - 0 views

  •  
    "Data leverage can be deployed through at least four avenues: direct action (for instance, individuals banding together to withhold, "poison," or redirect data), regulatory action (for instance, pushing for data protection policy and legal recognition of "data coalitions"), legal action (for instance, communities adopting new data-licensing regimes or pursuing a lawsuit), and market action (for instance, demanding large language models be trained only with data from consenting creators)."
dr tech

AI could decipher gaps in ancient Greek texts, say researchers | Language | The Guardian - 0 views

  •  
    "Artificial intelligence could bring to life lost texts, from imperial decrees to the poems of Sappho, researchers have revealed, after developing a system that can fill in the gaps in ancient Greek inscriptions and pinpoint when and where they are from."
dr tech

New Tool Reveals How AI Makes Decisions - Scientific American - 0 views

  •  
    "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany told the German-language newspaper Handelsblatt. That dilemma prompted Kersting, along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha, to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."
dr tech

Millions of Workers Are Training AI Models for Pennies | WIRED - 0 views

  •  
    "Some experts see platforms like Appen as a new form of data colonialism, says Saiph Savage, director of the Civic AI lab at Northeastern University. "Workers in Latin America are labeling images, and those labeled images are going to feed into AI that will be used in the Global North," she says. "While it might be creating new types of jobs, it's not completely clear how fulfilling these types of jobs are for the workers in the region."

    Due to the ever moving goal posts of AI, workers are in a constant race against the technology, says Schmidt. "One workforce is trained to three-dimensionally place bounding boxes around cars very precisely, and suddenly it's about figuring out if a large language model has given an appropriate answer," he says, regarding the industry's shift from self-driving cars to chatbots. Thus, niche labeling skills have a "very short half-life."

    "From the clients' perspective, the invisibility of the workers in microtasking is not a bug but a feature," says Schmidt. Economically, because the tasks are so small, it's more feasible to deal with contractors as a crowd instead of individuals. This creates an industry of irregular labor with no face-to-face resolution for disputes if, say, a client deems their answers inaccurate or wages are withheld.

    The workers WIRED spoke to say it's not low fees but the way platforms pay them that's the key issue. "I don't like the uncertainty of not knowing when an assignment will come out, as it forces us to be near the computer all day long," says Fuentes, who would like to see additional compensation for time spent waiting in front of her screen. Mutmain, 18, from Pakistan, who asked not to use his surname, echoes this. He says he joined Appen at 15, using a family member's ID, and works from 8 am to 6 pm, and another shift from 2 am to 6 am. "I need to stick to these platforms at all times, so that I don't lose work," he says, but he struggles to earn more than $50
dr tech

Don't Expect ChatGPT to Help You Land Your Next Job - 0 views

  •  
    "Shapiro said that using ChatGPT can be "great" in helping applicants "brainstorm verbs" and reframe language that can "bring a level of polish to their applications." At the same time, she said that submitting AI-generated materials along with job applications can backfire if applicants don't review them for accuracy. Shapiro said Jasper recruiters have interviewed candidates and discovered skills on their résumés that applicants said shouldn't be there or characterizations they weren't familiar with. Checking the AI-generated materials to ensure they accurately reflect an applicant's capabilities, she said, is critical if they're using ChatGPT - especially if the applicant gets hired."
dr tech

Yepic fail: This startup promised not to make deepfakes without consent, but did anyway... - 1 views

  •  
    "U.K.-based startup Yepic AI claims to use "deepfakes for good" and promises to "never reenact someone without their consent." But the company did exactly what it claimed it never would. In an unsolicited email pitch to a TechCrunch reporter, a representative for Yepic AI shared two "deepfaked" videos of the reporter, who had not given consent to having their likeness reproduced. Yepic AI said in the pitch email that it "used a publicly available photo" of the reporter to produce two deepfaked videos of them speaking in different languages. The reporter requested that Yepic AI delete the deepfaked videos it created without permission."
dr tech

Facebook blames hate speech ads on mistake, immediately approves more hate speech ads |... - 0 views

  •  
    ""We submitted another two examples of real-life Amharic-language hate speech to them a week later," said Global Witness. "Both ads were, again, accepted by Facebook for publication within a matter of hours.""
dr tech

Deepfakes are Venezuela's latest disinformation tool, experts say - The Washington Post - 0 views

  •  
    "But the reporters in those videos aren't real. Their names are Daren and Noah, and they're computer-generated avatars crafted by Synthesia, a London-based artificial intelligence company. The clips are from a YouTube channel called House of News, which presents itself as an English-language media outlet. Researchers say the videos are part of the Venezuelan government's attempts to spin the narrative on social media, considered one of the last bastions of free speech in a nation where outlets are censored and journalists are often persecuted. The incorporation of AI, experts told The Washington Post, seems to be a new addition to the government's disinformation campaigns, which range from incentivizing Twitter users to post specific talking points to using bots that spit out the regime's messaging."
dr tech

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can - and c... - 0 views

  •  
    "The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: "Given the start of a sentence, it will try to guess the most likely words to come next.""
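Smerdon's point about guessing "the most likely words to come next" can be illustrated with a toy bigram model. Real LLMs use neural networks over vast token sequences, but the probabilistic principle is the same; the corpus and names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for the web-scale text an LLM is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Map each candidate next word to its conditional probability."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("sat"))  # → {'on': 1.0}
print(next_word_probs("the"))  # 'cat', 'mat', 'dog', 'rug' each at 0.25
```

This is also why such models fabricate plausible-sounding citations: a sequence can be highly probable word-by-word without corresponding to any real paper.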
dr tech

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says - 0 views

  •  
    "A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app's chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. "
dr tech

The AI future for lesson plans is already here | EduResearch Matters - 0 views

  •  
    "What do today's AI-generated lesson plans look like? AI-generated lesson plans are already better than many people realise. Here's an example generated through the GPT-3 deep learning language model:"
dr tech

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific... - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
dr tech

ChatGPT is bullshit | Ethics and Information Technology - 0 views

  •  
    "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
dr tech

The future is … sending AI avatars to meetings for us, says Zoom boss | Artif... - 0 views

  • ix years away and
  • “five or six years” away, Eric Yuan told The Verge magazine, but he added that the company was working on nearer-term technologies that could bring it closer to reality.“Let’s assume, fast-forward five or six years, that AI is ready,” Yuan said. “AI probably can help for maybe 90% of the work, but in terms of real-time interaction, today, you and I are talking online. So, I can send my digital version, you can send your digital version.”Using AI avatars in this way could free up time for less career-focused choices, Yuan, who also founded Zoom, added. “You and I can have more time to have more in-person interactions, but maybe not for work. Maybe for something else. Why do we need to work five days a week? Down the road, four days or three days. Why not spend more time with your fam
  •  
    "Ultimately, he suggests, each user would have their own "large language model" (LLM), the underlying technology of services such as ChatGPT, which would be trained on their own speech and behaviour patterns, to let them generate extremely personalised responses to queries and requests. Such systems could be a natural progression from AI tools that already exist today. Services such as Gmail can summarise and suggest replies to emails based on previous messages, while Microsoft Teams will transcribe and summarise video conferences, automatically generating a to-do list from the contents."
dr tech

Benjamin Riley: AI is Another Ed Tech Promise Destined to Fail - The 74 - 0 views

  •  
    "It's an interesting question. I'm almost not sure how to answer it, because there is no thinking happening on the part of an LLM. A large language model takes the prompts and the text that you give it and tries to come up with something that is responsive and useful in relation to that text. And what's interesting is that certain people - I'm thinking of Marc Andreessen most prominently - have talked about how amazing this is conceptually from an education perspective, because with LLMs you will have this infinitely patient teacher. But that's actually not what you want from a teacher. You want, in some sense, an impatient teacher who's going to push your thinking, who's going to try to understand what you're bringing to any task or educational experience, lift up the strengths that you have, and then work on building your knowledge in areas where you don't yet have it. I don't think LLMs are capable of doing any of that. As you say, there's no real thinking going on. It's just a prediction machine. There's an interaction, I guess, but it's an illusion. Is that the word you would use? Yes. It's the illusion of a conversation."
dr tech

"We are basically the last generation": An interview with Thomas Ramge on writing - Goe... - 0 views

  •  
    "Yes of course. We are basically the last generation, or maybe there will be one more after us, who grew up without strong AI writing assistants. But these AI assistants are here now, especially in English. In German the systems are following suit, even though they're still much stronger in English. You get to a stage where someone who cannot write very well, can be pulled to a decent level of writing through machine assistance. And this raises important questions: Are we no longer learning the basics? In order to step up and really improve your writing, you will probably always need to be deeply proficient in the cultural practice of writing. But we need to ask, what proportion of low and medium level writers will be raised with the help from machines to a very decent level? And what repercussions does this have on teaching and learning, and the proficient use of language and writing? We shouldn't neglect our writing skills, because we believe machines will get us there. Anyone who has children can clearly see the dangers autocorrect and autocomplete will have for the future of writing."
dr tech

Jobhunters flood recruiters with AI-generated CVs - 0 views

  •  
    "About half of applicants are using tools such as ChatGPT to help write cover letters but, without editing, the language is 'clunky'"