"Generative artificial-intelligence tools are unproven and expensive to operate, requiring muscular servers with expensive chips that consume lots of power. Microsoft, Google, Adobe and other tech companies investing in AI are experimenting with an array of tactics to make, market and charge for it.
Microsoft has lost money on one of its first generative AI products, said a person with knowledge of the figures. It and Google are now launching AI-backed upgrades to their software with higher price tags. Zoom Video Communications has tried to mitigate costs by sometimes using a simpler AI it developed in-house. Adobe and others are putting caps on monthly usage and charging based on consumption.
"A lot of the customers I've talked to are unhappy about the cost that they are seeing for running some of these models," said Adam Selipsky, the chief executive of Amazon.com's cloud division, Amazon Web Services, speaking of the industry broadly."
"Morgan Stanley analyst Brian Nowak estimates that AI technology will have a $4.1 trillion economic effect on the labor force - affecting about 44% of labor - over the next few years by changing input costs, automating tasks and shifting the ways companies obtain, process and analyze information. Today, Morgan Stanley pegs the AI effect at $2.1 trillion, affecting 25% of labor.
"We see generative AI expanding the scope of business processes that can be automated," he wrote in a Sunday note. "At the same time, the input costs supporting GenAI functionality are rapidly falling, enabling a strongly expansionary impact to software production. As a result, Generative AI is set to impact the labor markets, expand the enterprise software TAM, and drive incremental spend for Public Cloud services.""
"Google has announced a new control for the robots.txt indexing file that lets publishers decide whether their content will "help improve Bard and Vertex AI generative APIs, including future generations of models that power those products." The control is a crawler token called Google-Extended, and publishers can add a rule for it to their site's robots.txt file to tell Google not to use their content for those two products. In its announcement, the company's vice president of "Trust," Danielle Romain, said it's "heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases.""
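Per Google's announcement, opting out amounts to adding a rule for the Google-Extended user agent to a site's existing robots.txt file. A minimal sketch (blocking the entire site; narrower `Disallow` paths are possible):

```text
# Opt the whole site out of use for Bard and Vertex AI
# generative APIs; regular Googlebot search indexing is
# governed by separate rules and is unaffected by this token.
User-agent: Google-Extended
Disallow: /
```

This follows the standard Robots Exclusion Protocol syntax, so it can sit alongside any existing `User-agent` groups in the same file.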
"The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems".
The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."
"Google is using romance novels to teach its artificial intelligence (AI) system to better understand how people communicate.
Researchers at Google Brain, the company's AI-focused deep learning project, presented a paper earlier this month detailing the techniques they used to teach the AI to write fiction - and the results were unexpectedly haunting."
"The president of Baidu, Ya-Qin Zhang, said in a statement: "As AI technology keeps advancing and the application of AI expands, we recognise the importance of joining the global discussion around the future of AI. Ensuring AI's safety, fairness and transparency should not be an afterthought but rather highly considered at the onset of every project or system we build.""
"The camera never lies. Except, of course, it does - and seemingly more often with each passing day.
In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels.
Now, a new breed of smartphone tools powered by artificial intelligence (AI) is adding to the debate about what it means to photograph reality.
Google's latest smartphones released last week, the Pixel 8 and Pixel 8 Pro, go a step further than devices from other companies. They are using AI to help alter people's expressions in photographs.
It's an experience we've all had: one person in a group shot looks away from the camera or fails to smile. Google's phones can now look through your photos to mix and match from past expressions, using machine learning to put a smile from a different photo of them into the picture. Google calls it Best Take."
"His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore."
He is also worried that AI technologies will in time upend the job market. Today, chatbots such as ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work," he said. "It might take away more than that."
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons - those killer robots - become reality."
"At first glance, the change might seem relatively benign. Often, all folks surfing the web want is a quick-hit summary or snippet of something anyway.
But it's not unfair to say that Google, which in April, according to data from SimilarWeb, hosted roughly 91 percent of all search traffic, is somewhat synonymous with, well, the internet. And the internet isn't just some ethereal, predetermined thing, like water or air. The internet is a marketplace, and Google is its kingmaker.
As such, the demo raises an extremely important question for the future of the already-ravaged journalism industry: if Google's AI is going to mulch up original work and provide a distilled version of it to users at scale, without ever connecting them to the original work, how will publishers continue to monetize their work?"
"The paper, co-authored by researchers inside and outside Google, contended that technology companies could do more to ensure AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases and use of offensive language, according to a draft copy seen by Reuters."
"Over the Bridge hopes the project emphasizes exactly how much work goes into creating AI music. "There's an inordinate amount of human hands at the beginning, middle and end to create something like this," explained Michael Scriven, a rep for Lemmon Entertainment whose CEO is on Over the Bridge's board of directors.
Scriven added, "A lot of people may think [AI] is going to replace musicians at some point, but at this point, the number of humans that are required just to get to a point where a song is listenable is actually quite significant.""
"Curtin University internet studies professor Tama Leaver posted about some of his tests with Emu's sticker generation to X, formerly known as Twitter. Leaver found, for example, that the AI will block a phrase like "child with gun" and display a warning message about how the prompt doesn't follow Meta's Community Guidelines. Emu will, however, generate stickers with the similar, more niche prompt "child with grenade." It not only creates stickers of children holding grenades but also produces stickers of children holding guns."
"Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
"Eck said machine learning, a powerful form of AI, will be integrated into how humans communicate with each other. He raised the idea of "assistive writing" in the future with Google Docs, the company's online word processing software. This may be based on Google's upcoming Smart Compose technology that suggests words and phrases based on what's being typed. Teachers used to worry about whether students used Wikipedia for their homework. Now they may wonder what part of the work the students wrote themselves, Eck said."
"They still have far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it - making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing. But the basic existence of that technology is years old, at this point.
And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal - not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it."
"It is too late to stop the emergence of AI. Instead, we need to think about what we want next, how to design and nurture spaces of knowledge creation and communication for a human-centric world. Search engines need to act as publishers instead of usurpers, and recognize the importance of connecting creators and audiences. Google is testing AI-generated content summaries that appear directly in its search results, encouraging users to stay on its page rather than to visit the source. Long term, this will be destructive."
"Google's neural networks have achieved the dream of CSI viewers everywhere: the company has revealed a new AI system capable of "enhancing" an 8x8-pixel image, increasing the pixel count 16-fold and effectively restoring lost data.
The neural network could be used to increase the resolution of blurred or pixelated faces, in a way previously thought impossible; a similar system was demonstrated for enhancing images of bedrooms, again creating a 32x32 pixel image from an 8x8 one."
"More than 150 artificial intelligence researchers have signed an open letter calling for future research in the field to focus on maximising the social benefit of AI, rather than simply making it more capable.
The signatories, which include researchers from Oxford, Cambridge, MIT and Harvard as well as staff at Google, Amazon and IBM, celebrate progress in the field, but warn that "potential pitfalls" must be avoided."
"It is also self-improving. The 10-year-old grading software leverages deep learning algorithms to "compare notes" with human teachers' scores, suggestions, and comments. An engineer involved in the project compared its capabilities to those of AlphaGo, the record-breaking AI Go player developed by Google subsidiary DeepMind."
"Ever get the feeling someone is looking over your shoulder at your phone? Well, you might not have to worry about that in the future: Google's researchers have developed an AI tool that can spot when someone is sneaking a peek at your screen."