""AI can pick up missed cues and suggest nudges to bridge the gap in emotional intelligence and communication styles. It can identify optimal ways to discuss common problems and alleviate common misunderstandings based on these different priorities and ways of viewing the world. We could be looking at a different gender dynamics in a decade.""
"Blockchain technology has also opened the way to new models whereby endless micropayments can be made in return for particular online services or content; and, if people voluntarily allow elements of their data to be used, rewards can flow the other way. Here perhaps lies the key to a system beyond the current, Google-led model, in which services appear to be free but the letting-go of personal data is the actual price."
"Across the technology industry, rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they build. At Google, Amazon, Microsoft and Salesforce, as well as at tech startups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere."
""I don't really care about data privacy because I think it's all an illusion anyway."
It's interesting to see how this encounter also reflects the general behavior of a majority of people when it comes to privacy. Although user data has long been a gold mine for companies, would it be more acceptable if they started paying their users in exchange for it?"
"We are talking about vast fields of aggregate data, the scale of which is difficult to comprehend; this data can be parsed by the artificial intelligence recommendation algorithms that Google has pioneered, and that now steer everything from employment application processes to dating apps."
"It is time now for two things: for people to wake up and realise how much our lives are dominated by such a small number of Silicon Valley bros, one hand in their jean pocket announcing their next move, and for tech companies to acknowledge their power and influence and become truly accountable."
"Researchers from MIT and Google recently showed off a machine learning algorithm capable of automatically retouching photos just like a professional photographer. Snap a photo and the neural network identifies exactly how to make it look better-increase contrast a smidge, tone down brightness, whatever-and apply the changes in less than 20 milliseconds."
"Referred to in the Indian press variously as the "toolkit case", the "Greta toolkit", and the "toolkit conspiracy", the police's ongoing investigation of Ravi, along with fellow activists Nikita Jacob and Shantanu Muluk, centres on the contents of a social media guide that Thunberg tweeted to her nearly 5 million followers in early February. When Ravi was arrested, the Delhi police declared that she "is an editor of the Toolkit Google Doc & key conspirator in document's formulation & dissemination. She started WhatsApp Group & collaborated to make the Toolkit doc. She worked closely with them to draft the Doc.""
""The code reasonably attempts to address the bargaining power imbalance between digital platforms and Australian news businesses," he said.
"It also recognises the important role search plays, not only to consumers but to the thousands of Australian small businesses that rely on search and advertising technology to fund and support their organisations."
The code, which is currently before the parliament, would facilitate negotiations between media companies and digital platforms (currently just Facebook and Google) for payment for content. If an agreement cannot be reached, then it goes to an arbiter for resolution."
"The greatest propaganda machine in history.
Think about it. Facebook, YouTube and Google, Twitter and others: they reach billions of people. The algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged, stories that appeal to our baser instincts and that trigger outrage and fear. It's why YouTube recommended videos by the conspiracist Alex Jones billions of times. It's why fake news outperforms real news, because studies show that lies spread faster than truth. And it's no surprise that the greatest propaganda machine in history has spread the oldest conspiracy theory in history - the lie that Jews are somehow dangerous. As one headline put it, "Just Think What Goebbels Could Have Done with Facebook.""
"These ranged from the mundane (Facebook may change its terms of service at any time) to a reminder that Facebook may store and process your data anywhere in the world, meaning it might be subject to different data protection laws. When scanning license agreements from Google, Do Not Sign told me the company reserves the right to stop providing its services at any time and that its services are used at the users' sole risk."
"Apple and Google's tougher enforcement could preclude such apps from becoming realistic alternatives to the mainstream social networks. They now face the choice of either stepping up their policing of posts - undercutting their main feature in the process - or losing their ability to reach a wide audience."
"Users post violent and hateful content to Facebook and Twitter all the time, they argued - and no one was demanding those platforms get the boot from the app stores."
""We know [geofence warrants] are a ubiquitous policing tool, and as long as companies make it possible to comply with these sorts of court orders, they're putting their users at risk," Fox Cahn said. "Whether it's Google or Uber or Lyft or payment companies, by segregating their user data in a way which prevents the aggregated location searches, you can keep that data while preventing compliance with a geofence warrant.""
"His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore."
He is also worried that AI technologies will in time upend the job market. Today, chatbots such as ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work," he said. "It might take away more than that."
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons (those killer robots) become reality."
"AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies
Modern technologies from computer chips and batteries to solar panels rely on inorganic crystals. To enable new technologies, crystals must be stable; otherwise they can decompose, and behind each new, stable crystal can be months of painstaking experimentation.
Today, in a paper published in Nature, we share the discovery of 2.2 million new crystals, equivalent to nearly 800 years' worth of knowledge. We introduce Graph Networks for Materials Exploration (GNoME), our new deep learning tool that dramatically increases the speed and efficiency of discovery by predicting the stability of new materials."
"Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
"That is why we must demand transparency here, especially in the case of technology that uses human-like interfaces such as language. For any automated system, we need to know what it was trained to do, what training data was used, who chose that data and for what purpose. In the words of AI researchers Timnit Gebru and Margaret Mitchell, mimicking human behaviour is a "bright line" - a clear boundary not to be crossed - in computer software development. We treat interactions with things we perceive as human or human-like differently. With systems such as LaMDA we see their potential perils and the urgent need to design systems in ways that don't abuse our empathy or trust."