"The conference acquired 30 machines for hackers to toy with. Every voting machine in the village was hacked.
Though voting machines are technologically simple, they are difficult for independent researchers to obtain for study."
"O'Neill recounts an exercise to improve service to homeless families in New York City, in which data analysis was used to identify risk factors for long-term homelessness. The problem, O'Neill explains, was that many of the factors in the existing data on homelessness were entangled with things like race (and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York). Using data that reflects racism in the system to train a machine-learning algorithm whose conclusions can't be readily understood runs the risk of embedding that racism in a new set of policies, these ones scrubbed clean of the appearance of bias with the application of objective-seeming mathematics."
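The proxy effect described above can be illustrated with a toy simulation. All data, feature names, and rates below are hypothetical, invented purely for illustration: even though the "model" never sees the protected group attribute, a segregated ZIP code feature lets it reproduce the historical disparity anyway.

```python
# Toy illustration (entirely synthetic data): a model trained without the
# protected attribute still reproduces historical bias through a proxy.
import random

random.seed(0)

# Synthetic records: ZIP codes are heavily segregated (90%), so ZIP
# effectively encodes group membership.
records = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    zip_code = "10001" if (group == "A") == (random.random() < 0.9) else "10002"
    # Historical outcome is biased: group B was flagged far more often.
    flagged = random.random() < (0.2 if group == "A" else 0.6)
    records.append((group, zip_code, flagged))

# "Model": predicted risk = historical flag rate per ZIP.
# Note that group is never used as an input feature.
def flag_rate(rows):
    return sum(r[2] for r in rows) / len(rows)

risk_by_zip = {
    z: flag_rate([r for r in records if r[1] == z])
    for z in ("10001", "10002")
}

# Yet average predictions still split along group lines,
# because ZIP code is a proxy for group.
for g in ("A", "B"):
    rows = [r for r in records if r[0] == g]
    avg_pred = sum(risk_by_zip[r[1]] for r in rows) / len(rows)
    print(g, round(avg_pred, 2))
```

Dropping the sensitive column does not remove the bias; the disparity re-enters through any feature correlated with it, which is the core of O'Neill's point about ZIP codes.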
"The development of a new code of conduct will take place in two phases.
The first will include setting policies for in-person and virtual events as well as policies for technical spaces including chat rooms and other Wikimedia projects. It is set to be ratified by the board by 30 August."
"If the eCNY is successful, it will give the central bank new powers, including novel types of monetary policy to help the economy grow. In one possibility that economists have discussed, a central bank could program its digital currency to slowly lose value so that consumers are encouraged to spend it immediately."
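The "programmed to slowly lose value" idea economists describe is essentially demurrage, i.e. exponential decay of idle balances. A minimal sketch, where the 0.1%-per-day rate is purely an assumed illustrative figure, not anything announced for the eCNY:

```python
# Sketch of programmable demurrage (all parameters hypothetical):
# a balance loses a fixed fraction of its value each day it sits unspent,
# nudging holders to spend rather than save.
DAILY_DECAY = 0.001  # 0.1% per day, chosen only for illustration

def value_after(balance: float, days_held: int) -> float:
    """Remaining purchasing power after holding for `days_held` days."""
    return balance * (1 - DAILY_DECAY) ** days_held

print(round(value_after(100.0, 0), 2))    # spent immediately: full value
print(round(value_after(100.0, 365), 2))  # held for a year: noticeably less
```

Even a small daily rate compounds: at 0.1% per day, roughly 30% of a balance's value evaporates over a year, which is the spending incentive the quoted economists have in mind.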
"Prisma Labs has already gotten into trouble for accidentally generating nude and cartoonishly sexualised images - including those of children - despite a "no nudes" and "adults only" policy.
Prisma Labs' CEO and co-founder Andrey Usoltsev told TechCrunch this behaviour only happened if the AI was intentionally provoked into creating this type of content - which breaches its terms of use.
"If an individual is determined to engage in harmful behavior, any tool would have the potential to become a weapon," he said."
"OpenAI has "discussed and debated quite extensively" when to release a tool that can determine whether or not an image was made with DALL-E 3, OpenAI's generative AI art model. But the startup isn't close to a decision.
That's according to Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, who spoke with TechCrunch in a phone interview this week. She said that, while the classifier tool's accuracy is "really good" - at least by her estimation - it hasn't met OpenAI's threshold for quality."
""We are seeing deep fakes being used all the time, and the technology is going to allow still images, but ultimately also video images, to be synthesised [more easily] by bad actors," he says.
DALL-E has content policy rules in place that prohibit bullying, harassment, the creation of sexual or political content, and the creation of images of people without their consent. And while OpenAI has limited the number of people who can sign up to DALL-E, its lower-grade replica, DALL-E mini, is open access, meaning people can produce anything they want."
Every year at the beginning of January teachers across the country dream up their own New Year wish lists when they visit the BETT Show at Olympia.
Filled with the latest innovative pieces of education technology, the four-day exposition gives those who work in education and the technology industry a taste of what the classroom of the future will be like.
The Digital Economy Bill has come under major scrutiny, with firms such as Google and Facebook opposing it, leaving ministers to debate against large technology giants.
"TikTok moderators were told to suppress videos from users who appeared too ugly, poor or disabled, as part of the company's efforts to curate an aspirational air in the videos it promotes, according to new documents published by the Intercept."
"The children's charity NSPCC has called on Facebook to resume a programme that scanned private messages for indications of child abuse, with new data suggesting that almost half of referrals for child sexual abuse material are now falling below the radar.
Recent changes to the European commission's e-privacy directive, which are being finalised, require messaging services to follow strict new restrictions on the privacy of message data. Facebook blamed that directive for shutting down the child protection operation, but the children's charity says Facebook has gone too far in reading the law as banning it entirely."
"But the movement to legally protect leisure time is gaining ground. The European parliament voted overwhelmingly last month in favour of a resolution calling on the European commission to propose a law allowing those who work digitally to disconnect outside their working hours."
"On Wednesday, California Gov. Gavin Newsom signed a bill into law that directly affects "mega-retailers" like Amazon, and how these companies use algorithms to manage warehouse workers. Mega-retailers are those that employ more than 1,000 warehouse workers, and they include one of Amazon's main competitors, Walmart."
"The research concludes that mentally demanding tasks are more difficult to handle at home than when physically present at a workplace. Based on the chess players' performances, the three researchers believe that excessive homeworking can hurt productivity."
"The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator DALL-E 2, said "regulation of AI is essential" as he testified in his first appearance in front of the US Congress.
Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms.
"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his prepared remarks."