Group items tagged "spelling" (Instructional & Media Services at Dickinson College)

Ed Webb

Search Engine Helps Users Connect In Arabic : NPR

  • new technology that is revolutionizing the way Arabic-speaking people use the Internet
  • Abdullah says that of her 500 Egyptian students, 78 percent have never typed in Arabic online, a fact that greatly disturbed Habib Haddad, a Boston-based software engineer originally from Lebanon. "I mean imagine [if] 78 percent of French people don't type French," Haddad says. "Imagine how destructive that is online."
  • "The idea is, if you don't have an Arabic keyboard, you can type Arabic by spelling your words out phonetically," Jureidini says. "For example ... when you're writing the word 'falafel,' Yamli will convert that to Arabic in your Web browser. We will go and search not only the Arabic script version of that search query, but also for all the Western variations of that keyword."
  • At a recent "new" technology forum at MIT, Yamli went on to win best of show — a development that did not escape the attention of Google, which recently developed its own search and transliteration engine. "I guess Google recognizes a good idea when it sees it," Jureidini says. He adds, "And the way we counter it is by being better. We live and breathe Yamli every day, and we're constantly in the process of improving how people can use it." Experts in Arabic Web content say that since its release a year ago, Yamli has helped increase Arabic content on the Internet just by its use. They say that bodes well for the Arabic Web and for communication between the Arab and Western worlds.
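
Jureidini's quote above describes two steps: convert a phonetically spelled Latin-script word into Arabic script, then search both the Arabic form and the common Latin-script spellings of the same word. As a rough illustration only, here is a minimal Python sketch of that idea; the character mapping, the greedy matcher, and the variant list are invented for this example and are far simpler than whatever Yamli actually does.

```python
# Toy sketch of phonetic Latin-to-Arabic conversion plus query expansion,
# loosely following Jureidini's description. The mapping table, the greedy
# matcher, and the variant generator are all hypothetical illustrations,
# not Yamli's actual algorithm.

# Hypothetical digraph/letter mapping; real systems rank many candidates.
LATIN_TO_ARABIC = {
    "sh": "ش", "kh": "خ", "gh": "غ", "th": "ث",
    "a": "ا", "b": "ب", "f": "ف", "l": "ل",
    "e": "ي", "i": "ي", "o": "و", "u": "و",
    "t": "ت", "k": "ك", "m": "م", "n": "ن",
    "r": "ر", "s": "س",
}

def to_arabic(word: str) -> str:
    """Greedily convert a phonetically spelled word, longest match first."""
    keys = sorted(LATIN_TO_ARABIC, key=len, reverse=True)
    out, i = [], 0
    while i < len(word):
        for k in keys:
            if word.startswith(k, i):
                out.append(LATIN_TO_ARABIC[k])
                i += len(k)
                break
        else:
            i += 1  # drop characters we can't map (short vowels often vanish)
    return "".join(out)

def search_terms(query: str) -> list[str]:
    """Return the Arabic-script guess plus common Latin-script spellings,
    echoing the idea of searching 'all the Western variations' of a keyword."""
    latin_variants = {query, query.replace("e", "i"), query.replace("o", "u")}
    return [to_arabic(query)] + sorted(latin_variants)

print(search_terms("falafel"))
# Prints a crude Arabic-script guess plus ['falafel', 'falafil']; a real
# engine would score many candidate spellings rather than pick one greedily.
```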
Ed Webb

Google and Meta moved cautiously on AI. Then came OpenAI's ChatGPT. - The Washington Post

  • The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside
  • Tech giants have been skittish since public debacles like Microsoft’s Tay, which Microsoft took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right, and tweet “Jews did 9/11.”
  • Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real-world harms.
  • Silicon Valley’s sudden willingness to consider taking more reputational risk arrives as tech stocks are tumbling
  • A chatbot that pointed to one answer directly from Google could increase its liability if the response was found to be harmful or plagiarized.
  • AI has been through several hype cycles over the past decade, but the furor over DALL-E and ChatGPT has reached new heights.
  • Soon after OpenAI released ChatGPT, tech influencers on Twitter began to predict that generative AI would spell the demise of Google search. ChatGPT delivered simple answers in an accessible way and didn’t ask users to rifle through blue links. Besides, after a quarter of a century, Google’s search interface had grown bloated with ads and marketers trying to game the system.
  • Inside big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI isn’t as established as it is for privacy or data security. Typically, teams of AI researchers and engineers publish papers on their findings, incorporate their technology into the company’s existing infrastructure, or develop new products; under pressure to see innovation reach the public sooner, that process can clash with the work of teams focused on responsible AI.
  • Chatbots like OpenAI’s ChatGPT routinely make factual errors and often switch their answers depending on how a question is asked
  • To Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team doesn’t necessarily signal a shift in power or safety concerns, because those warning of the potential harms were never empowered to begin with. “If we were lucky, we’d get invited to a meeting,” said Gebru, who helped lead Google’s Ethical AI team until she was fired for a paper criticizing large language models.
  • Rumman Chowdhury, who led Twitter’s machine-learning ethics team until Elon Musk disbanded it in November, said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI. “We thought it was going to be China pushing the U.S., but looks like it’s start-ups,” she said.