Digital Society - Group items tagged: content

anonymous

BBC News - NatWest online services hit by cyber attack - 0 views

  • It came less than a week after a major computer failure left some customers unable to use cards and cash machines.
  • Details safe On Friday, a number of customers reported problems getting on to the bank's website, from which they normally access their accounts online. The RBS Group - which includes RBS, NatWest and Ulster Bank - said that NatWest was worst affected by the "deliberate" disruption. "Due to a surge in internet traffic deliberately directed at the NatWest website, customers experienced difficulties accessing some of our customer websites today," a spokeswoman for RBS said. "This deliberate surge of traffic is commonly known as a distributed denial of service (DDoS) attack. We have taken the appropriate action to restore the affected websites. At no time was there any risk to customers. We apologise for the inconvenience caused." She stressed that the latest incident was not connected to Monday's IT failure and no customer information was compromised at any time. The incident on Monday also affected cash machines and card payments and prompted an apology from the boss of the RBS group, Ross McEwan.
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated... - 0 views

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
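The feedback loop the researchers describe can be seen even in a deliberately simplified setting. The sketch below is a toy illustration (not the paper's actual experiment): a "model" that just fits a normal distribution to its training data, where each new generation is trained only on samples drawn from the previous generation's model. The estimated spread of the data collapses over generations, mirroring how tails of the original distribution get lost.

```python
# Toy illustration of the "model collapse" feedback loop described above.
# Each generation's "model" is a fitted normal distribution; later
# generations train only on the previous generation's synthetic output.
import random
import statistics

random.seed(42)

def fit(samples):
    """'Train' a model: estimate mean and standard deviation from data."""
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n):
    """'Publish' model-generated content: draw n synthetic samples."""
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0 trains on "human" data; every later generation trains
# only on content generated by its predecessor.
human_data = generate(0.0, 1.0, 10)
mu, sigma = fit(human_data)
sigma_history = [sigma]
for _ in range(200):
    mu, sigma = fit(generate(mu, sigma, 10))
    sigma_history.append(sigma)

# The estimated spread shrinks generation after generation: diversity
# present in the original data is irreversibly lost.
print(f"initial sigma: {sigma_history[0]:.3f}, "
      f"after 200 generations: {sigma_history[-1]:.3g}")
```

The collapse here is an artifact of repeatedly re-estimating from finite synthetic samples; the paper argues an analogous degradation affects large generative models trained on model-produced text.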
dr tech

There's a new tactic for exposing you to radical content online: the 'slow red-pill' | ... - 0 views

  •  
    "This type of extreme racist post was frequently met with pushback from the community. Common responses included; "people should be treated as individuals not as part of a group" and "the Democrats are the ones who want to divide us up by race". Implicit or explicit gestures of antisemitism were strongly protested by evangelical Christians. Red-pill posts would rarely stay up long. In most cases, they were only intended to appear in one's Instagram feed and to vanish shortly after. The account would then resume posting popular content, wait another week and try it again. This process would continue for months, maybe a year. By posting mainstream conservative content most of the time, these extreme-right groups were able to build up an audience numbering in the range of 30,000 to 40,000, which they could then incrementally expose to radical content."
dr tech

Media freedom in dire state in record number of countries, report finds | Press freedom... - 0 views

  •  
    "It shows rapid technological advances are allowing governments and political actors to distort reality, and fake content is easier to publish than ever before. "The difference is being blurred between true and false, real and artificial, facts and artifices, jeopardising the right to information," the report said. "The unprecedented ability to tamper with content is being used to undermine those who embody quality journalism and weaken journalism itself." Artificial intelligence was "wreaking further havoc on the media world", the report said, with AI tools "digesting content and regurgitating it in the form of syntheses that flout the principles of rigour and reliability". This is not just written AI content but visual, too. High-definition images that appear to show real people can be generated in seconds."
dr tech

Google will let publishers hide their content from its insatiable AI - 0 views

  •  
    "Google has announced a new control in its robots.txt indexing file that would let publishers decide whether their content will "help improve Bard and Vertex AI generative APIs, including future generations of models that power those products." The control is a crawler called Google-Extended, and publishers can add it to the file in their site's documentation to tell Google not to use it for those two APIs. In its announcement, the company's vice president of "Trust" Danielle Romain said it's "heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases.""
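The control described above is expressed as a user-agent token in a site's robots.txt file. A minimal sketch, assuming the `Google-Extended` token named in Google's announcement (the paths and the Googlebot stanza here are illustrative):

```
# robots.txt - opt this site's content out of use for Bard / Vertex AI,
# while leaving ordinary Google Search crawling untouched.
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

Note that Google-Extended does not fetch pages itself; it is a token that existing crawlers check to decide whether fetched content may be used for those generative AI products.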
dr tech

Social media urged to act on violent content after Hamas attack | Social media | The Gu... - 0 views

  •  
    "The coverage of the Israel-Hamas conflict on social media platforms has come under scrutiny from the UK government and Brussels, as tech firms including X and Meta were urged to deal with a surge in violent and misleading content on their sites. In the UK, the technology secretary summoned social media executives on Wednesday to demand the removal from their platforms of violent content related to the Hamas attacks on Israel."
dr tech

'Fundamentally against their safety': the social media insiders fearing for their kids ... - 0 views

  •  
    "For Bejar, the controls in place on social networks like Instagram are not sufficient because they turn "inherently human interactions into an objective assessment". There are too few options for users to hide content or flag comments and DMs and explain why it made them uncomfortable even if it doesn't violate Meta's specific policies, he said. "There's a question of how clearly bad does the content need to be to warrant removal? And that means you set a line somewhere and have to define a criterion where either a computer system or a human can evaluate a piece of content," Bejar said."
dr tech

Whose job is it to stop the livestreaming of mass murder? | Media | The Guardian - 0 views

  •  
    "The latest incident has revived questions about who should be responsible for removing harmful content from the internet: the networks that host the content, the companies that protect those networks, or governments of the countries where the content is viewed."
dr tech

YouTube will temporarily increase automated content moderation | Engadget - 0 views

  •  
    "YouTube will rely more on machine learning and less on human reviewers during the coronavirus outbreak. Normally, algorithms detect potentially harmful content and send it to human reviewers for assessment. But these are not normal times, and in an effort to reduce the need for employees and contractors to come into an office, YouTube will allow its automated system to remove some content without human review."
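The policy change described above amounts to moving between two tiers of a confidence-threshold pipeline: normally high-scoring content is queued for a human, but with reviewers unavailable it can be removed automatically. A minimal sketch, with hypothetical names and thresholds (nothing here reflects YouTube's actual system):

```python
# Sketch of a two-tier moderation policy like the one described above:
# borderline detections go to a human review queue; with auto-removal
# enabled, high-confidence detections are actioned with no human in the
# loop. Thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str      # "remove", "human_review", or "keep"
    score: float     # classifier's estimated probability of harm

def moderate(score: float, auto_remove_enabled: bool,
             remove_at: float = 0.95, review_at: float = 0.6) -> Decision:
    if score >= remove_at and auto_remove_enabled:
        return Decision("remove", score)        # removed without review
    if score >= review_at:
        return Decision("human_review", score)  # queued for a reviewer
    return Decision("keep", score)

# Normal operation: even a very high score only reaches a reviewer.
print(moderate(0.97, auto_remove_enabled=False).action)  # human_review
# Reduced-staff operation: the same score is removed automatically.
print(moderate(0.97, auto_remove_enabled=True).action)   # remove
```

The trade-off the article alludes to falls out directly: widening the automatic tier reduces reviewer load but removes some content that a human would have kept.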
dr tech

Google says AI better than humans at scrubbing extremist YouTube content | Technology |... - 0 views

  •  
    "Google has pledged to continue developing advanced programs using machine learning to combat the rise of extremist content, after it found that it was both faster and more accurate than humans in scrubbing illicit content from YouTube."
dr tech

Child safety groups and prosecutors criticize encryption of Facebook and Messenger | Fa... - 0 views

  •  
    "This week, the tech giant announced it had begun rolling out automatic encryption for direct messages on its Facebook and Messenger platforms to more than 1 billion users. Under the changes, Meta will no longer have access to the contents of the messages that users send or receive unless one participant reports a message to the company. As a result, messages will not be subject to content moderation unless reported, which social media companies undertake to detect and report abusive and criminal activity. Encryption hides the contents of a message from anyone but the sender and the intended recipient by converting text and images into unreadable cyphers that are unscrambled on receipt."
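The property described in the last sentence - only key holders can read the message - can be illustrated with a toy one-time pad. This is strictly a demonstration of the concept: real messengers use vetted protocols (Meta's rollout is based on the Signal protocol), never hand-rolled schemes like this.

```python
# Toy illustration of the encryption idea described above: without the
# key, the ciphertext is unreadable; with it, the exact message comes
# back. A one-time pad for demonstration only - not Meta's protocol.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR with the same key is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by the two parties

ciphertext = encrypt(message, key)
print(decrypt(ciphertext, key) == message)  # True: unscrambled on receipt
```

This also makes the moderation consequence concrete: a platform that never holds the key sees only the scrambled bytes, so it cannot scan message contents unless a participant reports them.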
dr tech

Pentagon leak suggests Russia honing disinformation drive - report | Pentagon leaks 202... - 0 views

  •  
    ""Bots view, 'like,' subscribe and repost content and manipulate view counts to move content up in search results and recommendation lists," the analysis said. In some cases, Fabrika targets users with disinformation directly after gleaning their emails and phone numbers from databases. The campaign's goals include demoralising Ukrainians and exploiting divisions among western states, the document added. Experts have downplayed the 1% claim. Alan Woodward, a professor of cybersecurity at Surrey University, said the figure sounded implausible and that sock puppet accounts - a term for accounts with fake identities - need their content to be reposted by plausible accounts such as those operated by influencers."
aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams.[8] On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process.[9] Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate).[10]
    [8] April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring, Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).
    [9] Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?, Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.
    [10] Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically - now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred.[1] Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints.[2] And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect.
    [1] Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.
    [2] Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816."

dr tech

Facebook mistakenly banning ok content a growing problem | Thaiger - 0 views

  •  
    "People have been censored or blocked from the platform because their names sounded too fake. Ads for clothing for disabled people were removed by algorithms that believed they were breaking the rules and promoting medical devices. The Vienna Tourist Board had to move to the adult-content-friendly site OnlyFans to share works of art from its museum after Facebook removed photos of paintings. Words that have rude popular meanings but other, more specific definitions in certain circles - like "hoe" amongst gardeners, or "cock" amongst chicken farmers or gun enthusiasts - can land people in the so-called "Facebook jail" for days or even weeks."
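The "Facebook jail" examples above are classic false positives of context-free keyword matching. A minimal sketch of why such filters misfire (the blocklist and posts are illustrative, not Facebook's actual rules):

```python
# Sketch of why context-free keyword filtering over-blocks, as in the
# gardening/poultry examples above. Blocklist and posts are made up.
BLOCKLIST = {"hoe", "cock"}

def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted word appears, ignoring context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

gardening = "Best hoe for raised beds?"         # innocent, gets flagged
poultry = "Our cock crows at 5am every day."    # innocent, gets flagged
neutral = "Lovely weather for planting today."

print(naive_flag(gardening), naive_flag(poultry), naive_flag(neutral))
# -> True True False: both niche-but-innocent posts are caught
```

Because the filter sees words rather than meanings, every domain-specific use of a flagged term is indistinguishable from the abusive one, which is exactly the over-blocking pattern the article describes.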
dr tech

There's Tons Of Black Lives Matter Content On TikTok, But You May Not See Much Of It - 0 views

  •  
    "That algorithm can make the app powerfully addictive and fun, but like other social media platforms, it may also be cutting out whole swaths of content that you'll never get to see. I ran an experiment by creating two fresh accounts on TikTok. With these accounts, the only bias they start with is knowing my location - Toronto - which brings up content made near me."
dr tech

That broken tech/content culture cycle - 0 views

  •  
    "That broken tech/content culture cycle"
dr tech

Leading adviser quits over Instagram's failure to remove self-harm content | Instagram ... - 0 views

  •  
    "A leading psychologist who advises Meta on suicide prevention and self-harm has quit her role, accusing the tech giant of "turning a blind eye" to harmful content on Instagram, repeatedly ignoring expert advice and prioritising profit over lives. Lotte Rubæk, who has been on Meta's global expert group for more than three years, told the Observer that the tech giant's ongoing failure to remove images of self-harm from its platforms is "triggering" vulnerable young women and girls to further harm themselves and contributing to rising suicide figures."
dr tech

Want the platforms to police bad speech and fake news? The copyright wars want a word w... - 0 views

  •  
    "EFF's Legal Director Corynne McSherry offers five lessons to keep in mind: 1. (Lots of) mistakes will be made: copyright takedowns result in the removal of tons of legitimate content. 2. Robots won't help: automated filtering tools like Content ID have been a disaster, and policing copyright with algorithms is a lot easier than policing "bad speech." 3. These systems need to be transparent and have due process. A system that allows for automated instant censorship and slow, manual review of censorship gives a huge advantage to people who want to abuse the system. 4. Punish abuse. The ability to censor other peoples' speech is no joke. If you're careless or malicious in your takedown requests, you should pay a consequence: maybe a fine, maybe being barred form using the takedown system. 5. Voluntary moderation quickly becomes mandatory. Every voluntary effort to stem copyright infringement has been followed by calls to make those efforts mandatory (and expand them)."
dr tech

Facebook's content moderation a mess, employees outraged, contractors have PTSD: Report... - 0 views

  •  
    "Of all the disturbing parts of this @CaseyNewton piece about Facebook content reviewers -- and there are many -- the one about people slowly coming to believe the conspiracy theories sticks with me https://t.co/ulDx3PEaWa"
dr tech

I helped build ByteDance's censorship machine - Protocol - The people, power and politi... - 0 views

  •  
    "My job was to use technology to make the low-level content moderators' work more efficient. For example, we created a tool that allowed them to throw a video clip into our database and search for similar content. When I was at ByteDance, we received multiple requests from the bases to develop an algorithm that could automatically detect when a Douyin user spoke Uyghur, and then cut off the livestream session. The moderators had asked for this because they didn't understand the language. Streamers speaking ethnic languages and dialects that Mandarin-speakers don't understand would receive a warning to switch to Mandarin."
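The tool described - "throw a video clip into our database and search for similar content" - is a nearest-neighbor search over per-clip feature vectors. A minimal sketch with cosine similarity; the vectors here are hand-made, whereas a real system would derive them from a model or perceptual hash (all names and numbers are illustrative):

```python
# Sketch of a similarity-search tool like the one described above:
# each clip is a feature vector; a query returns the nearest stored
# items by cosine similarity. Vectors and names are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

database = {
    "clip_a": [0.9, 0.1, 0.0],
    "clip_b": [0.1, 0.9, 0.2],
    "clip_c": [0.88, 0.15, 0.05],  # near-duplicate of clip_a
}

def most_similar(query, k=2):
    ranked = sorted(database,
                    key=lambda name: cosine(query, database[name]),
                    reverse=True)
    return ranked[:k]

# A query resembling clip_a surfaces clip_a and its near-duplicate,
# letting a moderator find re-uploads of already-actioned content.
print(most_similar([0.9, 0.12, 0.01]))  # ['clip_a', 'clip_c']
```

At production scale the linear scan would be replaced by an approximate nearest-neighbor index, but the moderator-facing behavior is the same: one flagged clip retrieves its copies and variants.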