"No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence).
Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
"The transparency report is in its third year, but it hasn't prevented attacks from advocates such as Edward Snowden, who called the company "hostile to privacy". "Dropbox is a targeted you know wannabe PRISM partner," he told the Guardian in July 2014. "They just put … Condoleezza Rice on their board … who is probably the most anti-privacy official you can imagine.""
"In his full first interview as surveillance commissioner, Tony Porter - a former senior counter-terrorism officer - said the public was complacent about encroaching surveillance and urged public bodies, including the police, to be more transparent about how they are increasingly using smart cameras to monitor people."
"The government's scheme to store patients' medical information in a single database, which ran into massive problems over confidentiality, is to be scrapped, NHS England has said.
The decision to axe the scheme, care.data, follows the publication of two reports that support far greater transparency over what happens to the information, and opt-outs for patients who want their data seen only by those directly caring for them."
"Like China's Great Firewall, the UK firewall is a patchwork of rules and filters that are opaque to users and regulators. Every ISP uses its own censorship supplier to spy on its customers and decide what they're allowed to see, and they change what is and is not allowed from moment to moment, with no transparency into how, when or why those decisions are being made. "
"The president of Baidu, Ya-Qin Zhang, said in a statement: "As AI technology keeps advancing and the application of AI expands, we recognise the importance of joining the global discussion around the future of AI. Ensuring AI's safety, fairness and transparency should not be an afterthought but rather highly considered at the onset of every project or system we build.""
"EFF's Legal Director Corynne McSherry offers five lessons to keep in mind:
1. (Lots of) mistakes will be made: copyright takedowns result in the removal of tons of legitimate content.
2. Robots won't help: automated filtering tools like Content ID have been a disaster, and policing copyright with algorithms is a lot easier than policing "bad speech."
3. These systems need to be transparent and have due process. A system that allows for automated instant censorship and slow, manual review of censorship gives a huge advantage to people who want to abuse the system.
4. Punish abuse. The ability to censor other people's speech is no joke. If you're careless or malicious in your takedown requests, you should pay a consequence: maybe a fine, maybe being barred from using the takedown system.
5. Voluntary moderation quickly becomes mandatory. Every voluntary effort to stem copyright infringement has been followed by calls to make those efforts mandatory (and expand them)."
"Scientists must embrace circumspection, transparency, and robust ways of working that safeguard against bias and analytical flexibility. Doing so will provide parents and policymakers with the reliable insights they need on a topic most often characterized by unfounded media hype."
"Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze.
All of us at Apple and Google believe there has never been a more important moment to work together to solve one of the world's most pressing problems. Through close cooperation and collaboration with developers, governments and public health providers, we hope to harness the power of technology to help countries around the world slow the spread of COVID-19 and accelerate the return of everyday life."
"Yet these blueprints may also alarm free speech advocates concerned about Facebook's de facto role as the world's largest censor. Both sides are likely to demand greater transparency."
"In October, American teachers prevailed in a lawsuit with their school district over a computer program that assessed their performance.
The system rated teachers in Houston by comparing their students' test scores against state averages. Those with high ratings won praise and even bonuses. Those who fared poorly faced the sack.
The program did not please everyone. Some teachers felt that the system marked them down without good reason. But they had no way of checking if the program was fair or faulty: the company that built the software, the SAS Institute, regards its algorithm as a trade secret and would not disclose its workings."
"The researchers programmed a robot called Pepper, made by SoftBank Robotics, with the ability to vocalise its thought processes. This means the robot is no longer a "black box" and its underlying decision-making is more transparent to the user."
"The long-running issues of traceability, transparency and enforcement were colourfully illustrated in September 2017 when a group of investigators from the Basel Action Network (BAN) - a non-for-profit group that monitors compliance with the 1989 United Nations Basel Convention on the trade of hazardous wastes - attempted to learn where exactly Australia's e-waste was going.
The group fitted 35 old CRT televisions, LED monitors and printers with GPS devices of a special make. Out of this sample the team quickly focused on the fate of three LCD screens dropped at Officeworks storefronts around the Brisbane metro area.
Hayley Palmer, BAN's chief operating officer, was on the team that followed where they went afterwards. As the signals left the country, Palmer, her nine-month-old and a colleague tracked the monitors to a warehouse in Hong Kong and then on to an illegal dump-yard in a rural part of Thailand where they talked their way inside."
"If you've been job hunting recently, chances are you've interacted with a resume robot, a nickname for an Applicant Tracking System, or ATS. In its most basic form, an ATS acts like an online assistant, helping hiring managers write job descriptions, scan resumes and schedule interviews. As artificial intelligence advances, employers are increasingly relying on a combination of predictive analytics, machine learning and complex algorithms to sort through candidates, evaluate their skills and estimate their performance. Today, it's not uncommon for applicants to be rejected by a robot before they're connected with an actual human in human resources.
The job market is ripe for the explosion of AI recruitment tools. Hiring managers are coping with deflated HR budgets while confronting growing pools of applicants, a result of both the economic downturn and the post-pandemic expansion of remote work. As automated software makes pivotal decisions about our employment, usually without any oversight, it's posing fundamental questions about privacy, accountability and transparency."
"The Digital Services Act marks the end of the platforms' vast liability exemptions and their seeming impunity. It will impose more transparency on the platforms' content moderation and set rules on so-called dark patterns, design features that can trick users into doing things they didn't mean to."
"That is why we must demand transparency here, especially in the case of technology that uses human-like interfaces such as language. For any automated system, we need to know what it was trained to do, what training data was used, who chose that data and for what purpose. In the words of AI researchers Timnit Gebru and Margaret Mitchell, mimicking human behaviour is a "bright line" - a clear boundary not to be crossed - in computer software development. We treat interactions with things we perceive as human or human-like differently. With systems such as LaMDA we see their potential perils and the urgent need to design systems in ways that don't abuse our empathy or trust."
"Among other things, the declaration states that military AI needs to be developed according to international laws, that nations should be transparent about the principles underlying their technology, and that high standards are implemented for verifying the performance of AI systems. It also says that humans alone should make decisions around the use of nuclear weapons."
"This week the Animal Justice Party MP Georgie Purcell had her photo edited to enlarge her breasts and insert a crop into her top that hadn't been there. Having previously been a victim of image-based abuse, Purcell said the incident felt violating, and that the explanation given by Nine News failed to address the issue.
For its part, Nine blamed an "automation" tool in Photoshop - the recently launched "generative fill", which, as the name suggests, fills in the blanks of an image when it is resized using artificial intelligence. Nine said the company was working from an already-cropped version of the original image, and used the tool to expand beyond the image's existing borders. But whoever did alter the image presumably still exported the modified version without considering the impact of their changes."