Explained: Social media and the Texas shooter's messages | Explained News, The Indian Express
Could technology companies have monitored ominous messages made by a gunman who Texas authorities say massacred 19 children and two teachers at an elementary school? Could they have warned the authorities? Answers to these questions remain unclear.
But if nothing else, the shooting in Uvalde, Texas, seems highly likely to focus additional attention on how social platforms monitor what users are saying to and showing each other.
Shortly thereafter, Facebook stepped in to note that the gunman sent one-to-one direct messages, not public posts, and that they weren’t discovered until “after the terrible tragedy”.
Some reports appear to show that at least some of the gunman’s communications used Apple’s encrypted iPhone messaging service, which makes messages sent to another iPhone user almost impossible for anyone else to read.
Facebook parent company Meta, which also owns Instagram, says it is working with law enforcement but declined to provide details.
A series of posts appeared on his Instagram account in the days leading up to the shooting, including photos of a hand holding a gun magazine and of two AR-style semi-automatic rifles. An Instagram user who was tagged in one post shared parts of what appears to be a chilling exchange in which Ramos asked her to share his gun pictures with her more than 10,000 followers.
Meta has said it monitors people’s private messages for some kinds of harmful content, such as links to malware or images of child sexual exploitation. Copies of known images can be detected using unique identifiers — a kind of digital signature — which makes them relatively easy for computer systems to flag. Trying to interpret a string of threatening words — which can resemble a joke, satire or song lyrics — is a far more difficult task for artificial intelligence systems.
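To illustrate the idea of flagging known images by their digital signatures, here is a minimal sketch using a cryptographic hash. The "database" of flagged signatures is invented for illustration; production systems rely on perceptual hashing (for example, PhotoDNA-style fingerprints) that survives re-encoding and cropping, which a cryptographic hash does not.

```python
import hashlib

# Hypothetical database of signatures of known harmful images.
# (This entry is simply the SHA-256 of the bytes b"test", used as a
# stand-in for a real flagged image.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a unique identifier (digital signature) for the content."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    """Flag the content if its fingerprint matches a known signature."""
    return fingerprint(image_bytes) in KNOWN_HASHES

print(is_known_image(b"test"))          # exact copy of a flagged image: True
print(is_known_image(b"test, edited"))  # any byte-level change defeats an exact hash: False
```

The same lookup is cheap at scale, which is why matching copies of known images is "relatively easy" compared with interpreting free-form text.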
Facebook could, for instance, flag certain phrases such as “going to kill” or “going to shoot”, but without context — something AI in general has a lot of trouble with — there would be too many false positives for the company to analyze.
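A toy keyword filter makes the false-positive problem concrete. The phrase list and messages below are invented for illustration and do not reflect Meta's actual systems; note that plain substring matching flags the idioms just as readily as the genuine threat.

```python
# Hypothetical watchlist of threatening phrases.
THREAT_PHRASES = ["going to kill", "going to shoot"]

def flag_message(text: str) -> bool:
    """Return True if the message contains any watched phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in THREAT_PHRASES)

messages = [
    "I'm going to shoot up the school",           # genuine threat
    "This band is going to kill it at the show",  # idiom: false positive
    "I'm going to shoot some hoops later",        # idiom: false positive
]

for msg in messages:
    print(flag_message(msg), "-", msg)  # all three are flagged
```

All three messages trip the filter, even though only the first is a threat; without context to separate them, the volume of false positives quickly overwhelms human review.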
A recent Meta-commissioned report emphasized the privacy benefits of encryption but also noted some risks — including users who could abuse it to sexually exploit children, facilitate human trafficking and spread hate speech.
Security experts say reading such encrypted messages would be possible only if Apple were to engineer a “backdoor” allowing access to messages sent by alleged criminals. Such a secret key would let authorities decipher encrypted information with a court order.