
Digit_al Society: Group items matching "reviews" in title, tags, annotations or url


dr tech

Your Car Is Spying on You. A CBP Contract Shows the Risks. - 0 views

  •  
    "U.S. CUSTOMS AND BORDER PROTECTION purchased technology that vacuums up reams of personal information stored inside cars, according to a federal contract reviewed by The Intercept, illustrating the serious risks in connecting your vehicle and your smartphone."
dr tech

Skype audio graded by workers in China with 'no security measures' | Technology | The Guardian - 0 views

  •  
    "A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with "no security measures", according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company."
dr tech

Facebook and Twitter Cross a Line in Censorship - 0 views

  •  
    "THE GLARING FALLACY that always lies at the heart of pro-censorship sentiments is the gullible, delusional belief that censorship powers will be deployed only to suppress views one dislikes, but never one's own views. The most cursory review of history, and the most minimal understanding of how these tech giants function, instantly reveals the folly of that pipe dream."
dr tech

Police built an AI to predict violent crime. It was seriously flawed | WIRED UK - 1 views

  •  
    "A flagship artificial intelligence system designed to predict gun and knife violence before it happens had serious flaws that made it unusable, police have admitted. The error led to large drops in accuracy and the system was ultimately rejected by all of the experts reviewing it for ethical problems."
dr tech

Profile 1: Chloe - 0 views

  •  
    "Welcome, real human A troll is a fake social media account, often created to spread misleading information. Each of the following 8 profiles include a brief selection of posts from a single social media account. You decide if each is an authentic account or a professional troll. After each profile, you'll review the signs that can help you determine if it's a troll or not."
dr tech

Online Harms: Encryption under attack | Open Rights Group - 0 views

  •  
    "Service providers, including many ORG members, will be required to do this through the imposition of a "duty of care" - a concept awkwardly borrowed from health & safety - which will require them to monitor the integrity of their services not by objective technical standards, but by subjective "codes of practice" on both illegal and legal content. Although the framework has been drawn up with large American social media platforms in mind, it would apply to any site or service with UK users which hosts user-generated content. A blog with comments will be fair game. An app with user reviews will be fair game. "
dr tech

When AI can make art - what does it mean for creativity? | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "Some are outraged at what they consider theft of their artistic trademark. Greg Rutkowski, a concept artist and illustrator well known for his golden-light infused epic fantasy scenes, has already been mentioned in hundreds of thousands of prompts used across Midjourney and Stable Diffusion. "It's been just a month. What about in a year? I probably won't be able to find my work out there because [the internet] will be flooded with AI art," Rutkowski told MIT Technology Review. "That's concerning.""
dr tech

Twitter moderators turn to automation amid a reported surge in hate speech | Twitter | The Guardian - 0 views

  •  
    "Elon Musk's Twitter is leaning heavily on automation to moderate content according to the company's new head of trust and safety, amid a reported surge in hate speech on the social media platform. Ella Irwin has told the Reuters news agency that Musk, who acquired the company in October, was focused on using automation more, arguing that Twitter had in the past erred on the side of using time and labour-intensive human reviews of harmful content."
dr tech

Content Moderation is a Dead End. - by Ravi Iyer - 0 views

  •  
    "One of the many policy-based projects I worked on at Meta was Engagement Bait, which is defined as "a tactic that urges people to interact with Facebook posts through likes, shares, comments, and other actions in order to artificially boost engagement and get greater reach." Accordingly, "Posts and Pages that use this tactic will be demoted." To do this, "models are built off of certain guidelines" trained using "hundreds of thousands of posts" that "teams at Facebook have reviewed and categorized." The examples provided are obvious (eg. a post saying "comment "Yes" if you love rock as much as I do"), but the problem is that there will always be far subtler ways to get people to engage with something artificially. As an example, psychology researchers have a long history of studying negativity bias, which has been shown to operate across a wide array of domains, and to lead to increased online engagement. "
dr tech

Iran's Secret Manual for Controlling Protesters' Mobile Phones - 0 views

  •  
    "According to these internal documents, SIAM is a computer system that works behind the scenes of Iranian cellular networks, providing its operators a broad menu of remote commands to alter, disrupt, and monitor how customers use their phones. The tools can slow their data connections to a crawl, break the encryption of phone calls, track the movements of individuals or large groups, and produce detailed metadata summaries of who spoke to whom, when, and where. Such a system could help the government invisibly quash the ongoing protests - or those of tomorrow - an expert who reviewed the SIAM documents told The Intercept."
dr tech

The Folly of DALL-E: How 4chan is Abusing Bing's New Image Model - bellingcat - 0 views

  •  
    "Racists on the notorious troll site 4chan are using a powerful new and free AI-powered image generator service offered by Microsoft to create antisemitic propaganda, according to posts reviewed by Bellingcat. Users of 4chan, which has frequently hosted hate speech and played home to posts by mass shooters, tasked Bing Image Creator to create photo-realistic antisemitic caricatures of Jews and, in recent days, shared images created by the platform depicting Orthodox men preparing to eat a baby, carrying migrants across the US border (the latter a nod to the racist Great Replacement conspiracy theory), and committing the 9/11 attacks."
dr tech

Don't Expect ChatGPT to Help You Land Your Next Job - 0 views

  •  
    "Shapiro said that using ChatGPT can be "great" in helping applicants "brainstorm verbs" and reframe language that can "bring a level of polish to their applications." At the same time, she said that submitting AI-generated materials along with job applications can backfire if applicants don't review them for accuracy. Shapiro said Jasper recruiters have interviewed candidates and discovered skills on their résumés that applicants said shouldn't be there or characterizations they weren't familiar with. Checking the AI-generated materials to ensure they accurately reflect an applicant's capabilities, she said, is critical if they're using ChatGPT - especially if the applicant gets hired."
dr tech

Social Media is a Major Cause of the Mental Illness Epidemic in Teen Girls. Here's The Evidence. - 0 views

  •  
    "Taken as a whole, it shows strong and clear evidence of causation, not just correlation. There are surely other contributing causes, but the Collaborative Review doc points strongly to this conclusion: Social Media is a Major Cause of the Mental Illness Epidemic in Teen Girls."
dr tech

Pause Giant AI Experiments: An Open Letter - Future of Life Institute - 0 views

  •  
    "Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

Technology must tackle bias in medical devices | Health | The Guardian - 0 views

  •  
    "The independent review on equity in medical devices once again highlights the multiple ways in which medical technology development can lead to solutions whereby the benefits are distributed inequitably across society, or can further exacerbate health inequalities (UK report reveals bias within medical tools and devices, 11 March). While the report is welcome, the challenge facing scientists and engineers is how to innovate medical devices differently to respond to longstanding societal biases and inequalities."
dr tech

Glue for the Internet of Things | MIT Technology Review - 0 views

  •  
    "OpenRemote is an open-source Internet of Things platform that could help spur smarter homes and cities. "