
Home / Digital Society / Group items tagged system


dr tech

Moore's Law for Everything - 0 views

  •  
    "On a zoomed-out time scale, technological progress follows an exponential curve. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture). The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions (the agricultural, the industrial, and the computational) we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly. The technological progress we make in the next 100 years will be far larger than all we've made since we first controlled fire and invented the wheel. We have already built AI systems that can learn and do useful things. They are still primitive, but the trendlines are clear."
dr tech

The future is … sending AI avatars to meetings for us, says Zoom boss | Artif... - 0 views

  • “Five or six years” away, Eric Yuan told The Verge, but he added that the company was working on nearer-term technologies that could bring it closer to reality. “Let’s assume, fast-forward five or six years, that AI is ready,” Yuan said. “AI probably can help for maybe 90% of the work, but in terms of real-time interaction, today, you and I are talking online. So, I can send my digital version, you can send your digital version.” Using AI avatars in this way could free up time for less career-focused choices, Yuan, who also founded Zoom, added. “You and I can have more time to have more in-person interactions, but maybe not for work. Maybe for something else. Why do we need to work five days a week? Down the road, four days or three days. Why not spend more time with your family?”
  •  
    "Ultimately, he suggests, each user would have their own "large language model" (LLM), the underlying technology of services such as ChatGPT, which would be trained on their own speech and behaviour patterns, to let them generate extremely personalised responses to queries and requests. Such systems could be a natural progression from AI tools that already exist today. Services such as Gmail can summarise and suggest replies to emails based on previous messages, while Microsoft Teams will transcribe and summarise video conferences, automatically generating a to-do list from the contents."
dr tech

"We are basically the last generation": An interview with Thomas Ramge on writing - Goe... - 0 views

  •  
    "Yes of course. We are basically the last generation, or maybe there will be one more after us, who grew up without strong AI writing assistants. But these AI assistants are here now, especially in English. In German the systems are following suit, even though they're still much stronger in English. You get to a stage where someone who cannot write very well, can be pulled to a decent level of writing through machine assistance. And this raises important questions: Are we no longer learning the basics? In order to step up and really improve your writing, you will probably always need to be deeply proficient in the cultural practice of writing. But we need to ask, what proportion of low and medium level writers will be raised with the help from machines to a very decent level? And what repercussions does this have on teaching and learning, and the proficient use of language and writing? We shouldn't neglect our writing skills, because we believe machines will get us there. Anyone who has children can clearly see the dangers autocorrect and autocomplete will have for the future of writing."
dr tech

The world is not quite ready for 'digital workers' | Artificial intelligence (AI) | The... - 1 views

  •  
    "Seeing an opportunity, Franklin decided to take advantage. On 9 July, the company said that it would begin to support digital employees as part of its platform and treat them like any other employee. "Today Lattice is making AI history," Franklin pronounced. "We will be the first to give digital workers official employee records in Lattice. Digital workers will be securely onboarded, trained and assigned goals, performance metrics, appropriate systems access and even a manager. Just as any person would be." The pushback was swift - and, in many cases, brutal, particularly on LinkedIn, which is generally not known for its savage engagement like X (formerly known as Twitter). "This strategy and messaging misses the mark in a big way, and I say that as someone building an AI company," said Sawyer Middeleer, an executive at a firm that uses AI to help with sales research, on LinkedIn. "Treating AI agents as employees disrespects the humanity of your real employees. Worse, it implies that you view humans simply as 'resources' to be optimized and measured against machines. It's the exact opposite of a work environment designed to elevate the people who contribute to it.""
sparkle26

Online Expert System for Diagnosis Psychological Disorders Using Case-Based Reasoning Me... - 2 views

    • sparkle26
       
      GOOD
dr tech

Bluesky lets you choose your algorithm - 0 views

  •  
    "But do these options make Bluesky a more prosocial experience? Prosocial design is a "set of design patterns, features and processes which foster healthy interactions between individuals and which create the conditions for those interactions to thrive by ensuring individuals' safety, wellbeing and dignity," according to the Prosocial Design Network. Giving users control over their feeds is a step in this direction, but it's not a new concept. The Panoptykon Foundation's Safe by Default briefing advocates for human-centric recommender systems that prioritize conscious user choice and empowerment. They propose features like:
    - Sliders for content preferences (e.g., informative vs. entertaining content)
    - A "hard stop" button to suppress unwanted content
    - Prompts for users to define their interests or preferences."
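The proposed features above lend themselves to a concrete sketch. The snippet below is a minimal, hypothetical illustration of a user-controlled feed ranker with a preference slider, a "hard stop" set, and declared interests; all names, weights, and the post schema are assumptions for illustration, not Bluesky's or Panoptykon's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class FeedPreferences:
    """Hypothetical user-controlled feed settings, loosely modelled
    on the Safe by Default proposals quoted above."""
    informative_vs_entertaining: float = 0.5   # slider: 0 = all entertainment, 1 = all informative
    hard_stopped_topics: set = field(default_factory=set)  # topics the user never wants to see
    declared_interests: set = field(default_factory=set)   # topics the user explicitly opted into

def score_post(post: dict, prefs: FeedPreferences) -> float:
    """Rank a post by the user's explicit choices instead of engagement alone."""
    if post["topic"] in prefs.hard_stopped_topics:
        return 0.0  # "hard stop": suppress unwanted content entirely
    # Blend the two content qualities according to the slider position.
    score = (prefs.informative_vs_entertaining * post["informative"]
             + (1 - prefs.informative_vs_entertaining) * post["entertaining"])
    if post["topic"] in prefs.declared_interests:
        score *= 1.5  # boost topics the user asked for via prompts
    return score

prefs = FeedPreferences(informative_vs_entertaining=0.8,
                        hard_stopped_topics={"gambling"},
                        declared_interests={"climate"})
posts = [
    {"topic": "climate",  "informative": 0.9, "entertaining": 0.2},
    {"topic": "gambling", "informative": 0.1, "entertaining": 0.9},
]
ranked = sorted(posts, key=lambda p: score_post(p, prefs), reverse=True)
```

The point of the sketch is that every ranking input is a conscious user choice, in contrast to an engagement-optimized feed where the weights are the platform's.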
dr tech

Is doom scrolling really rotting our brains? The evidence is getting harder t... - 0 views

  •  
    "But we're not entirely to blame if technology is making us less intelligent. After all, it was designed to captivate us totally. Silicon Valley's dirtiest design feature - which is everywhere once you spot it - is the infinite scroll, likened to the "bottomless soup bowl" experiment, in which participants will keep mindlessly eating from a soup bowl if it keeps refilling. An online feed that constantly "refills" manipulates the brain's dopaminergic reward system in a similar way. These powerful dopamine-driven loops of endless "seeking" can become addictive."
dr tech

Can Community Notes match the speed of misinformation? - 0 views

  •  
    "The promise of Community Notes lies in its transparency and its ability to crowdsource moderation from across ideological divides. By emphasizing consensus, the system avoids the mistrust or perception of bias with platform-driven fact-checking or content removal. Last year YouTube adopted this approach, but as a complement to other products such as information panels, or their recent disclosure requirement when content is altered or synthetic."
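The "consensus across ideological divides" idea can be made concrete with a toy rule: a note is shown only if raters from opposing viewpoint clusters independently find it helpful. The real Community Notes system infers rater viewpoints with matrix factorization; the sketch below assumes viewpoints are given directly and the cluster labels, thresholds, and function name are invented for illustration.

```python
def note_earns_badge(ratings: dict, min_per_side: int = 2, threshold: float = 0.7) -> bool:
    """Toy version of cross-perspective consensus: `ratings` maps a
    (hypothetical) viewpoint cluster, 'A' or 'B', to a list of
    helpful=True/False votes. The note is shown only if BOTH sides
    have enough raters and both find it helpful on average."""
    for side in ("A", "B"):
        votes = ratings.get(side, [])
        if len(votes) < min_per_side:
            return False  # not enough raters from this side
        if sum(votes) / len(votes) < threshold:
            return False  # this side does not find the note helpful
    return True  # both sides independently agree the note is helpful

# A note rated helpful across the divide earns the badge; a one-sided note does not.
consensus = note_earns_badge({"A": [True, True, True], "B": [True, True, True, False]})
one_sided = note_earns_badge({"A": [True, True, True], "B": [False, False]})
```

This also illustrates the speed problem the headline raises: a note cannot appear until enough raters from both sides have voted, which takes time misinformation does not.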
dr tech

16 Musings on AI's Impact on the Labor Market - 0 views

  •  
    "In the short term, generative AI will replace a lot of people because productivity increases while demand stays the same due to inertia.
    In the long term, the creation of new jobs compensates for the loss of old ones, resulting in a net positive outcome for humans who leave behind jobs no one wants to do.
    The most important aspect of any technological revolution is the transition from before to after.
    Timing and location matter: older people have a harder time reinventing themselves into a new trade or craft. Poor people and poor countries have less margin to react to a wave of unemployment.
    Digital automation is quicker and more aggressive than physical automation because it bypasses logistical constraints: while ChatGPT can be infinitely cloned, a metallic robot cannot.
    Writing and painting won't die because people care about the human factor first and foremost; there are already a lot of books we can't possibly read in one lifetime, so we select them as a function of who the author is.
    Even if you hate OpenAI and ChatGPT for being responsible for the lack of job postings, I recommend you ally with them for now; learn to use ChatGPT before it's too late to keep your options open.
    Companies are choosing to reduce costs over increasing output because the sectors where generative AI is useful can't artificially increase demand in parallel to productivity. (Who needs more online content?)
    Our generation is reasonably angry at generative AI and will bravely fight it. Still, our offspring, and theirs, will be grateful for a transformed world whose painful transformation they didn't have to endure.
    Certifiable human-made creative output will reduce in quantity but multiply in value in the next years because demand specific to it will grow; automation can mimic 99% of what we do but never reaches 100%.
    The maxim "AI won't take your job, a person using AI will; yes, you using AI will replace yourself not using it" applies more in the long term than the"
dr tech

- 0 views

  •  
    "There is also a lot of research that both third-party fact-checking and Community Notes can be really effective at reducing misperceptions. But - and this is a significant caveat - neither works well as a complete solution for lies on social media. When Twitter was working on Birdwatch, they claimed it would "not replace other labels and fact checks Twitter currently uses". But as I've written about before, Musk scaled back Twitter's Trust and Safety team significantly and positioned Community Notes as the replacement. As Yoel Roth, Twitter's former head of Trust and Safety, told WIRED, "The intention of Birdwatch was always to be a complement to, rather than a replacement for, Twitter's other misinformation methods." In fact, research on various attempts to mitigate COVID misinformation found that a layered, "Swiss cheese" approach might work best, where some efforts work well sometimes, but collectively the system catches most falsehoods."
dr tech

Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills - 0 views

  •  
    "The findings from those examples were striking: overall, those who trusted the accuracy of the AI tools found themselves thinking less critically, while those who trusted the tech less used more critical thought when going back over AI outputs. "The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI," the researchers wrote. "Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." This isn't enormously surprising. Something we've observed in many domains, from self-driving vehicles to scrutinizing news articles produced by AI, is that humans quickly go on autopilot when they're supposed to be overseeing an automated system, often allowing mistakes to slip past."
dr tech

AI pioneer announces non-profit to develop 'honest' artificial intelligence | Artificia... - 0 views

  •  
    ""We want to build AIs that will be honest and not deceptive," Bengio said. He added: "It is theoretically possible to imagine machines that have no self, no goal for themselves, that are just pure knowledge machines - like a scientist who knows a lot of stuff." However, unlike current generative AI tools, Bengio's system will not give definitive answers and will instead give probabilities for whether an answer is correct."
dr tech

Robodebt: When automation fails - by Don Moynihan - 0 views

  •  
    "From 2016 to 2020, the Australian government operated an automated debt assessment and recovery system, known as "Robodebt," to recover fraudulent or overpaid welfare benefits. The goal was to save $4.77 billion through debt recovery and reduced public service costs. However, the algorithm and policies at the heart of Robodebt caused wildly inaccurate assessments, and administrative burdens that disproportionately impacted those with the least resources. After a federal court ruled the policy unlawful, the government was forced to terminate Robodebt and agree to a $1.8 billion settlement."
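The "wildly inaccurate assessments" the excerpt mentions came, in large part, from income averaging: annual tax-office income was smeared evenly across 26 fortnights and compared against what the person truthfully reported each fortnight. The sketch below illustrates that flaw with invented figures and a hypothetical threshold; it is a simplified model, not the actual Robodebt calculation.

```python
FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Robodebt-style assumption: earnings are perfectly even across the year."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flagged_as_overpaid(reported_fortnightly: list, annual_income: float,
                        income_free_threshold: float = 300.0) -> bool:
    """A fortnight is flagged when the *averaged* income exceeds the
    threshold even though the person reported low or no income then."""
    avg = averaged_fortnightly_income(annual_income)
    return any(actual <= income_free_threshold < avg for actual in reported_fortnightly)

# A casual worker: $13,000 earned in one intensive half-year, nothing after.
# Averaging smears it to $500/fortnight, so the genuinely jobless fortnights
# look like undeclared income and a debt is wrongly raised.
reported = [1000.0] * 13 + [0.0] * 13
wrongly_flagged = flagged_as_overpaid(reported, annual_income=13000.0)

# A salaried worker with the same annual income, earned evenly, is not flagged.
evenly_paid = flagged_as_overpaid([500.0] * 26, annual_income=13000.0)
```

The asymmetry is the point: the averaging assumption is harmless for steady earners but systematically penalises casual and seasonal workers, the group with the least resources to contest a debt notice.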
dr tech

This AI 'thinks' like a human - after training on 160 psychology studies - 0 views

  •  
    "An innovative artificial intelligence (AI) system can predict the decisions people will make in a wide variety of situations - often outperforming classical theories used in psychology to describe human choices."
dr tech

Update: New 25 GPU Monster Devours Passwords In Seconds | The Security Ledger - 0 views

  •  
    Yikes...scary...
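The arithmetic behind the headline is simple. The rig described in the article was reported to try roughly 350 billion NTLM guesses per second; the figures below take that reported rate as an assumption and work out the brute-force time for an 8-character password drawn from the 95 printable ASCII characters.

```python
GUESSES_PER_SECOND = 350e9   # reported throughput of the 25-GPU cluster (assumed)
CHARSET = 95                 # printable ASCII characters
LENGTH = 8

keyspace = CHARSET ** LENGTH                          # ~6.6e15 candidate passwords
worst_case_hours = keyspace / GUESSES_PER_SECOND / 3600

# keyspace ≈ 6.63e15, so worst_case_hours ≈ 5.3: every possible
# 8-character password falls in under six hours at that rate.
```

Each extra character multiplies the keyspace by 95, which is why length, not complexity rules, dominates brute-force resistance.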