"Twitter has made more cuts to its trust and safety team in charge of international content moderation, as well as a unit overseeing hate speech and harassment, Bloomberg reported on Friday.
The move adds to longstanding concerns that new owner Elon Musk is dismantling the company's regulation of hateful content and misinformation."
"Two high-ranking "admins" - volunteer administrators with privileged access to Wikipedia, including the ability to edit fully protected pages - have been imprisoned since they were arrested on the same day in September 2020, the two bodies added.
The arrests appeared to be part of a "crackdown on Wikipedia admins in the country", Dawn and Smex said, naming the two people imprisoned as Osama Khalid and Ziyad al-Sofiani."
"The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool like everything in AI, has a high probability of detecting GPT output, but not 100% as attributed by George E. P. Box "All models are wrong, but some are useful"."
"The dark forest theory
of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing "content creators," and algorithmically manipulated junk.
It's like a dark forest that seems eerily devoid of human life - all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators."
"The fact that flow is not only rare, but draining; and that taking a break to scroll a different screen or play a game on your phone can be restorative, is proof of the need for nuance. The moralising over productivity and screentime is unhelpful when it comes to finding solutions - but highly profitable as the boom in (useless) blue-light glasses and "distraction-free" tech goes to show."
"Yet despite much of the content appearing to break TikTok's rules, which explicitly ban misogyny and copycat accounts, the platform appears to have done little to limit Tate's spread or ban the accounts responsible. Instead, it has propelled him into the mainstream - allowing clips of him to proliferate, and actively promoting them to young users."
"Every year, Amazon ships hundreds of millions of parcels in Germany. Just a few clicks and a little later the delivery driver is at your door. An investigation by CORRECTIV.Lokal takes a look behind the scenes of the logistics chain and shows a system built on pressure, surveillance, and extreme stress. An insight into the gears of a machine where no idling is allowed."
""Think about what information is going to be collected," she said. "And how comfortable you are with that information potentially flowing to just anybody … [Companies] are certainly sharing [user data] and they don't really have to tell you who they're sharing it with or why."
Such items might include "smart devices" that track our behavior, such as sleep and fitness trackers, as well as popular self-discovery tools such as DNA testing kits.
With the help of experts, we broke down the privacy implications of some of this season's latest offerings - so you can give the gift of privacy."
"Now that might be about to change. The arrival of OpenAI's ChatGPT, a program that generates sophisticated text in response to any prompt you can imagine, may signal the end of writing assignments altogether-and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill.
If you're looking for historical analogues, this would be like the printing press, the steam drill, and the light bulb having a baby, and that baby having access to the entire corpus of human knowledge and understanding. My life-and the lives of thousands of other teachers and professors, tutors and administrators-is about to drastically change."
"But for many scientists, Twitter has become an essential tool for collaboration and discovery - a source of real-time conversations around research papers, conference talks and wider topics in academia. Papers now zip around scientific communities faster thanks to Twitter, says Johann Unger, a linguist at Lancaster University, UK, who notes that extra information is also shared in direct private messages through the site. And its limit on tweet length - currently 280 characters - has pushed academics into keeping their commentary pithy, he adds."
"But on the downside it has led to a loss of face-to-face contact, which can have negative consequences, particularly when it comes to education. What's more, there are also health impacts to consider. New scientific research indicates that spending large amounts of time in front of the computer, or other devices such as tablets and cell phones, can be harmful to our health. This is largely due to the blue light emitted by electronic devices, which expose us to light-emitting diodes (LEDs)."
"In the fall of 2020, gig workers in Venezuela posted a series of images to online forums where they gathered to talk shop. The photos were mundane, if sometimes intimate, household scenes captured from low angles-including some you really wouldn't want shared on the Internet.
In one particularly revealing shot, a young woman in a lavender T-shirt sits on the toilet, her shorts pulled down to mid-thigh.
The images were not taken by a person, but by development versions of iRobot's Roomba J7 series robot vacuum. They were then sent to Scale AI, a startup that contracts workers around the world to label audio, photo, and video data used to train artificial intelligence."
"It remains possible, however, that the true costs of social-media anxieties are harder to tabulate. Gentzkow told me that, for the period between 2016 and 2020, the direct effects of misinformation were difficult to discern. "But it might have had a much larger effect because we got so worried about it-a broader impact on trust," he said. "Even if not that many people were exposed, the narrative that the world is full of fake news, and you can't trust anything, and other people are being misled about it-well, that might have had a bigger impact than the content itself." Nyhan had a similar reaction. "There are genuine questions that are really important, but there's a kind of opportunity cost that is missed here. There's so much focus on sweeping claims that aren't actionable, or unfounded claims we can contradict with data, that are crowding out the harms we can demonstrate, and the things we can test, that could make social media better." He added, "We're years into this, and we're still having an uninformed conversation about social media. It's totally wild.""
"In the future, it may be possible to guard against this kind of photo misuse through technical means. For example, future AI image generators might be required by law to embed invisible watermarks into their outputs so that they can be read later, and people will know they're fakes. But people will need to be able to read the watermarks easily (and be educated on how they work) for that to have any effect. Even so, will it matter if an embarrassing fake photo of a kid shared with an entire school has an invisible watermark? The damage will have already been done."
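The embed-then-read idea in that excerpt can be sketched minimally. Assuming an image represented as a flat list of 8-bit channel values, a least-significant-bit scheme shows how a mark can be invisible to the eye yet machine-readable; real provenance schemes (statistical watermarks, standards like C2PA) are far more robust than this illustration.

```python
# Minimal least-significant-bit watermark sketch. Assumes the "image"
# is a flat list of 8-bit values; function names are illustrative,
# not from any real watermarking library.

def embed(pixels, message):
    """Hide message bits in the lowest bit of each pixel value."""
    bits = [int(b) for byte in message.encode() for b in format(byte, "08b")]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # changes each value by at most 1
    return out

def read(pixels, length):
    """Recover a `length`-byte message from the pixels' lowest bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode()
```

Because each pixel shifts by at most one intensity level, the mark is imperceptible - which is exactly the article's point: it only helps if viewers have tooling that reads it, and it does nothing to undo harm already done.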
"Yet in 2020, Byland had to find out secondhand that the company had abandoned the technology and was on the verge of going bankrupt. While his two-implant system is still working, he doesn't know how long that will be the case. "As long as nothing goes wrong, I'm fine," he says. "But if something does go wrong with it, well, I'm screwed. Because there's no way of getting it fixed.""
"The technology certainly has its flaws. While the system is theoretically designed not to cross some moral red lines - it's adamant that Hitler was bad - it's not difficult to trick the AI into sharing advice on how to engage in all sorts of evil and nefarious activities, particularly if you tell the chatbot that it's writing fiction. The system, like other AI models, can also say biased and offensive things. As my colleague Sigal Samuel has explained, an earlier version of GPT generated extremely Islamophobic content, and also produced some pretty concerning talking points about the treatment of Uyghur Muslims in China."
""Is it just me or are these AI selfie generator apps perpetuating misogyny?" tweeted Brandee Barker, a feminist and advocate who has worked in the tech industry. "Here are a few I got just based off of photos of my face." One of Barker's results showed her wearing supermodel-length hair extensions and a low-cut catsuit. Another featured her in a white bra with cleavage spilling out from the top."
"What do today's AI-generated lesson plans look like?
AI-generated lesson plans are already better than many people realise. Here's an example generated through the GPT-3 deep learning language model:"
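Lesson plans like the one the excerpt refers to are typically produced by sending a structured prompt to a completions endpoint. The sketch below only assembles such a request payload - the model name, parameter values, and `build_request` helper are illustrative assumptions, not details from the article.

```python
# Hypothetical sketch of a lesson-plan request for a GPT-3-style
# completions API. Builds the payload only; no network call is made,
# so it can be inspected without an API key.

def build_request(subject, grade, minutes=45):
    prompt = (
        f"Write a {minutes}-minute lesson plan on {subject} "
        f"for grade {grade}, with learning objectives, activities, "
        f"and an assessment."
    )
    return {
        "model": "text-davinci-003",  # legacy GPT-3 completion model
        "prompt": prompt,
        "max_tokens": 600,            # room for a full plan
        "temperature": 0.7,           # some variety between runs
    }
```

The quality the excerpt describes comes largely from how much structure the prompt demands (objectives, activities, assessment), not from any lesson-specific logic on the caller's side.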