TodaysMeet - 3 views
tweetbook.in - 2 views
Wiffiti - 0 views
Engaging Students with Engaging Tools (EDUCAUSE Quarterly) | EDUCAUSE - 3 views
hashtagify.me - explore the hashtagspace - 0 views
http://openmicroblogger.org/?posts/34 - 0 views
http://tweetchat.com/oauth - 0 views
CRITICAL AI: Adapting College Writing for the Age of Large Language Models such as Chat... - 1 views
-
In the long run, we believe, teachers need to help students develop a critical awareness of generative machine models: how they work; why their content is often biased, false, or simplistic; and what their social, intellectual, and environmental implications might be. But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction.
-
Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking
-
Students are more likely to misuse text generators if they trust them too much. The term “Artificial Intelligence” (“AI”) has become a marketing tool for hyping products. For all their impressiveness, these systems are not intelligent in the conventional sense of that term. They are elaborate statistical models that rely on mass troves of data—which has often been scraped indiscriminately from the web and used without knowledge or consent.
Google and Meta moved cautiously on AI. Then came OpenAI's ChatGPT. - The Washington Post - 0 views
-
The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside
-
Tech giants have been skittish since public debacles like Microsoft’s Tay, which it took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right and tweet “Jews did 9/11.”
-
Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.