"As a non-lawyer, I have not kept up with the emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not," he wrote the court in a sworn statement."
"Next, was two months of probation where we moderated on practice queues that consisted of hundreds of thousands of videos that had already been moderated. The policies we applied to these practice videos were compared with what had previously been applied to them by a more experienced moderator in order to find areas we needed to improve in. Everyone passed their probation.
One trend particularly hated by moderators is the "recaps". These consist of a 15- to 60-second barrage of pictures, sometimes hundreds, shown as a super-fast slideshow, often with three to four pictures a second. We have to view every one of these photos for infractions.
If a video is 60 seconds long, the system will allocate us around 48 seconds to do this. We also have to check the video description, account bio and hashtags. Around the end of the school year or New Year's Eve, when these sorts of videos are popular, it becomes incredibly draining and also affects our stats."
"It forecast 14 earthquakes within a 200-mile area of the estimated epicenter and also made a very accurate forecast regarding their intensity, a report on the university's website said. It failed to warn of just one earthquake and gave eight false predictions.
The research team trained the AI to detect statistical bumps in real-time seismic data that it had paired with previous earthquakes, the report explained. Once trained, the AI monitored for signs of approaching earthquakes.
"Predicting earthquakes is the holy grail," said Sergey Fomel, a professor at UT's Bureau of Economic Geology and a member of the research team, adding: "What we achieved tells us that what we thought was an impossible problem is solvable in principle.""
"In past months, an AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market. AI audio parodies of US presidents playing video games became a viral trend. AI-generated images that appeared to show Donald Trump fighting off police officers trying to arrest him circulated widely on social media platforms. The Republican National Committee released an entirely AI-generated ad that showed images of various imagined disasters that would take place if Biden were re-elected, while the American Association of Political Consultants warned that video deepfakes present a "threat to democracy"."
"The more technology helps make us more efficient, the more we are asked to be more efficient. We - our labour, our time, our data - is mined with increasing rapaciousness.
Here's my thing with that Keynes essay. Sure, it looks like he was totally wrong about the future. We didn't end up with so much free time that we all went insane. But, then again, we've never actually tested his theory properly. We never just let the machines take over. Clearly, as we're (re)discovering, everyone finds that idea terrifying. I tend to agree. The idea of a completely A.I.-controlled world makes me uneasy. That said, the trend over the last 100 years - and even more since the dawn of this century - doesn't make me feel much better.
What seems likelier to me than us all losing our jobs to A.I. is that the way in which we're already being replaced by machines continues and accelerates. That is, that we become ever more tied to the machines, ever more entwined with them. That our lives, bodies, and brains will become ever more machine-like."
"The model takes a two-pronged approach. First, it focuses on trends present in the region, looking at geostatistics and historical data from Prodes, the annual government monitoring system for deforestation in the Amazon. Understanding what has happened can help make predictions more precise. When already deforested areas are recent, this indicates gangs are operating in the area, so there's a higher risk that nearby forest will soon be wiped out.
Second, it looks at variables that put the brakes on deforestation - land protected by Indigenous and quilombola (descendants of rebel slaves) communities, and areas with bodies of water, or other terrain that doesn't lend itself to agricultural expansion, for instance - and variables that make deforestation more likely, including higher population density, the presence of settlements and rural properties, and higher density of road infrastructure, both legal and illegal."
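The two-pronged scoring described above can be sketched as a weighted combination of "accelerator" variables (which raise risk) and "brake" variables (which lower it). The weights and feature names below are invented for illustration and are not the real model's:

```python
# Hypothetical sketch of the two-pronged risk model the article describes.
# Weights and feature names are illustrative assumptions, not the real ones.

ACCELERATORS = {  # variables that make deforestation more likely
    "recent_nearby_clearing": 0.35,
    "population_density": 0.20,
    "road_density": 0.25,
    "settlements": 0.10,
}
BRAKES = {  # variables that put the brakes on deforestation
    "protected_territory": 0.40,
    "water_bodies": 0.15,
}

def deforestation_risk(features):
    """Score in [0, 1]: weighted accelerators minus weighted brakes, clamped."""
    score = sum(w * features.get(name, 0.0) for name, w in ACCELERATORS.items())
    score -= sum(w * features.get(name, 0.0) for name, w in BRAKES.items())
    return max(0.0, min(1.0, score))

# A grid cell with fresh clearing and roads nearby, outside protected land:
print(round(deforestation_risk(
    {"recent_nearby_clearing": 1.0, "road_density": 1.0}), 2))  # → 0.6
```

A production model would learn such weights from the Prodes historical record rather than fixing them by hand, but the structure - pressure variables pushing risk up, protective variables pulling it down - matches the description.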
"All of this probably means I should be worried about recent trends in artificial intelligence, which is encroaching on voice-over work in a manner similar to how it threatens the labour of visual artists and writers-both financially and ethically. The creep is only just beginning, with dubbing companies training software to replace human actors and tech companies introducing digital audiobook narration. But AI poses a threat to work opportunities across the board by giving producers the tools to recreate their favourite voices on demand, without the performer's knowledge or consent and without additional compensation. It's clear that AI will transform the arts sector, and the voice-over industry offers an early, unsettling model for what this future may look like."
"The first paper turns the tables on the trend for job applicants to be screened by algorithms. The researchers assigned some applicants "algorithmic writing assistance" with their CVs or covering letters to see if it influenced employers' decisions. But obviously those of us who do lots of recruiting would never be affected by such small changes… would we? I'm afraid so. Jobseekers who had the tech help were 8% more likely to get hired. Sigh."
"I don't trust it, David! It's the whole Lord of the Rings vibe - "one app to rule them all", which famously didn't work out great for Middle-earth.
A lot of people have concerns, myself included. It's why there was a backlash to Meta - which provides Facebook and WhatsApp - trying to launch a digital currency. I think there's a broader issue of digital literacy here: when we give up our permissions to a super app, do we really know what we're agreeing to?"
"You're nailing the problem: the tech sales people and the politicians are all on the same drug, which is "This tech is perfect", because it's cheaper than more police. There's a lawsuit in the US because a black man was wrongly arrested based on facial recognition. Tech companies need to be held to account. One company we focused on, Clearview AI, scraped social networks - collected images of people's faces and data from publicly available information - to create its software. Facial recognition relies on artificial intelligence. It needs to study faces. And only the government - the DVLA etc - and social networking companies have access to a lot of faces."
"Generational differences in learning techniques are apparent in how people of different ages approach technology. It has been said that we, the Net Generation, are closer to our grandparents-the Greatest Generation-in our work ethic and optimism about the future than to our parents' generation. But how we approach problems is totally different."
"TOKYO/GUANGZHOU -- From shopping to banking to boarding airplanes, an economy based on facial recognition is taking root in Japan, enabling consumers to live a cashless, bag-free life. "
"We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work."
""We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work."
"
""Our job as teachers and professors is not to surveil and police our students, but it's to educate them," he says. "You are assuming that students are trying to cheat-rather than assuming students are trying to learn and help them learn."
He sees the growing adoption of automated proctoring tools as a continuation of a trend started by plagiarism-detection services like Turnitin, which he says were built on the assumption that students want to cheat and must be policed. But despite early pushback by students and some professors, plagiarism detection has become ubiquitous. Parry worries the same thing could happen with automated proctoring."
AI and machine learning, combined with the science of turning data into insightful information (aka data science), have become more important than ever in the "new normal" to guide innovation based on new market trends and consumer preferences.
"The Auschwitz museum has called a new trend, where users of video-sharing app TikTok role-play as Holocaust victims, "hurtful and offensive," but added that it did not want to shame young people involved.
The museum at the site of the former Nazi death camp was responding to the point-of-view videos."
"Half a century ago, before the first Apple computer was even sold, climate scientists started making computer-generated forecasts of how Earth would warm as carbon emissions saturated the atmosphere (the atmosphere is now brimming with carbon).
It turns out these decades-old climate models - which used math equations to predict how much greenhouse gases would heat the planet - were pretty darn accurate. Climate scientists gauged how well early models predicted Earth's relentless warming trend and published their research Wednesday in the journal Geophysical Research Letters."
"It has led to a proliferation of fake news and clickbait. It has fuelled surveillance capitalism and normalised pervasive tracking and data-mining. If we want to do something about the proliferation of misinformation and erosion of trust in traditional institutions, it is not enough to regulate or factcheck political adverts. We need to crack down on the use of personal information for all targeted advertising. Otherwise democracy will continue to erode, one highly optimised click at a time."