
Digit_al Society: group items tagged "modelling computer science"


dr tech

Science relies on computer modelling, but what happens when it goes wrong? -- Science &... - 0 views

  •  Much of current science deals with even more complicated systems, and similarly lacks exact solutions. Such models have to be "computational" - describing how a system changes from one instant to the next. But there is no way to determine the exact state at some time in the future other than by "simulating" its evolution in this way. Weather forecasting is a familiar example; until the advent of computers in the 1950s, it was impossible to predict future weather faster than it actually happened.
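The excerpt describes time-stepping simulation: the only way to learn a future state is to advance the model one instant at a time. As a purely illustrative sketch (not taken from the article), the Python below steps the Lorenz system, a toy model historically tied to weather forecasting, forward in time with an explicit Euler update.

```python
# Illustrative only: a minimal "computational" model in the article's sense,
# advancing the Lorenz system (a toy weather model) one small step at a time.
# There is no closed-form solution; the future state is found only by simulating.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the system from one instant to the next with a forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

state = (1.0, 1.0, 1.0)
for _ in range(5000):        # 5000 steps of 0.01 time units
    state = lorenz_step(*state)
print(state)                 # the "forecast": only reachable by stepping through every instant
```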
dr tech

How digital twins may enable personalised health treatment | Medical research | The Gua... - 0 views

  •  "Imagine having a digital twin that gets ill, and can be experimented on to identify the best possible treatment, without you having to go near a pill or a surgeon's knife. Scientists believe that within five to 10 years, "in silico" trials - in which hundreds of virtual organs are used to assess the safety and efficacy of drugs - could become routine, while patient-specific organ models could be used to personalise treatment and avoid medical complications. Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. Within medicine, this means combining vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs - and eventually, potentially their entire body"
dr tech

Liverpool are using incredible data science during matches, and effects are extraordina... - 0 views

  •  "Liverpool's sport-leading data science is providing Jürgen Klopp with the tools to change football matches as they're happening."
dr tech

The future is … sending AI avatars to meetings for us, says Zoom boss | Artif... - 0 views

  •  The technology is “five or six years” away, Eric Yuan told The Verge magazine, but he added that the company was working on nearer-term technologies that could bring it closer to reality. “Let’s assume, fast-forward five or six years, that AI is ready,” Yuan said. “AI probably can help for maybe 90% of the work, but in terms of real-time interaction, today, you and I are talking online. So, I can send my digital version, you can send your digital version.” Using AI avatars in this way could free up time for less career-focused choices, Yuan, who also founded Zoom, added. “You and I can have more time to have more in-person interactions, but maybe not for work. Maybe for something else. Why do we need to work five days a week? Down the road, four days or three days. Why not spend more time with your family?”
  •  "Ultimately, he suggests, each user would have their own "large language model" (LLM), the underlying technology of services such as ChatGPT, which would be trained on their own speech and behaviour patterns, to let them generate extremely personalised responses to queries and requests. Such systems could be a natural progression from AI tools that already exist today. Services such as Gmail can summarise and suggest replies to emails based on previous messages, while Microsoft Teams will transcribe and summarise video conferences, automatically generating a to-do list from the contents."
dr tech

AI Bias Reduction: IISc team develops method to reduce bias in AI images | Bengaluru Ne... - 0 views

  •  "The research, conducted at Vision and AI Lab of the Department of Computational and Data Sciences, offers a novel approach to mitigating bias in popular image-generative models without the need for additional data or model retraining."
dr tech

The chatbot optimisation game: can we trust AI web searches? | Artificial intelligence ... - 0 views

  •  "But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias."
dr tech

Deepfake detectors can be defeated, computer scientists show for the first time | Eurek... - 0 views

  •  "Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed."