#socialmedia — #Facebook’s New #AI-powered #Moderation Tool Helps It Catch Billions Of Fake Accounts. BULLSHIT! ; #Privacy #Regulation

Why is this bullshit? It's very simple, really; even a monkey could figure it out.

It's a load of crap for two basic, fundamental reasons:

a. Social media platforms these days are nothing more than marketing tools to push products and services, instruments used to perpetually collect consumer data — both behavioral and personal — on a massive scale. And whatever “social” aspect is left is reduced to a high-stakes competition of show and tell, a contest for Likes and attention. It's DIY reality television… on an app.

b. Engineered flaws that serve corporate interests. “Engineered” because these loopholes don’t just materialize from the ether. They’re thought out, vetted by teams, signed off by executives, designed, coded, reviewed, and put through rigorous testing. Not to mention the money invested in them. If any of these marketing… errr… “social media” platforms were really serious about user authenticity and getting rid of fake accounts, they would “verify” everyone during registration. But no, that's bad for business. So they reserve this “security” measure for… guess what… businesses and high-profile people. Reserving it for a privileged few boosts the feature's value; that blue tick markets itself. Fake accounts also artificially inflate the number of users, giving these platforms an apparent popularity, much like the kind of culture they foster. So they engineer all these flaws into the platform to help convince unsuspecting users to use more of their services… like… *drum roll*… two-factor authentication! Users will have to divulge more personal information to avail of the service, of course. Malicious bots — accounts automatically driven by software designed to fulfill certain tasks, like auto-liking or auto-following paying users — are easy to detect. They could nip that in the bud during registration too, especially when signups come from known sources (see the sketch after this list). But they won't, because having those bots furthers their interests. It helps them make a strong case when marketing future products, and it further reinforces their user data mining practices.
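To illustrate the claim that screening out obvious bots at registration time is technically straightforward, here is a minimal sketch in Python. Everything in it — the blocklists, the thresholds, the signal names — is an assumption made up for illustration; it is not any platform's actual signup pipeline.

```python
# Hypothetical sketch of registration-time bot screening, for illustration only.
# The blocklists, thresholds, and signal names below are assumptions, not any platform's API.
from dataclasses import dataclass
from collections import defaultdict
import time

KNOWN_BAD_ASNS = {"AS00001", "AS00002"}        # assumed hosting/proxy networks abused by bots
DISPOSABLE_EMAIL_DOMAINS = {"mailinator.com"}  # assumed disposable-email domains

@dataclass
class SignupAttempt:
    ip: str
    asn: str
    email: str
    user_agent: str
    solved_challenge: bool  # e.g. CAPTCHA or phone verification passed

_attempts_per_ip = defaultdict(list)  # ip -> timestamps of recent signup attempts

def is_probably_bot(s: SignupAttempt, now: float | None = None, max_per_hour: int = 3) -> bool:
    """Reject signups that come from known-bad sources or that arrive too fast."""
    now = now or time.time()

    # 1. Source reputation: traffic from flagged networks or disposable email providers.
    if s.asn in KNOWN_BAD_ASNS:
        return True
    if s.email.split("@")[-1].lower() in DISPOSABLE_EMAIL_DOMAINS:
        return True

    # 2. Velocity: many registrations from one IP within an hour is typical bot behavior.
    recent = [t for t in _attempts_per_ip[s.ip] if now - t < 3600]
    _attempts_per_ip[s.ip] = recent + [now]
    if len(recent) >= max_per_hour:
        return True

    # 3. Missing or scripted user agent, or failure to pass a human challenge.
    if not s.user_agent or "python-requests" in s.user_agent.lower():
        return True
    if not s.solved_challenge:
        return True

    return False
```

The point of the sketch is simply that these checks sit entirely at the front door: nothing here requires a billion-account detection system after the fact.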

What consumers, and the connected world at large, need are sensible, enforceable laws and regulations to protect them from unscrupulous technology companies. Unfortunately, more often than not, new legislation is aimed at protecting corporate interests. Even the EU's GDPR seems anemic.

For decades the technology industry has operated like it's the Wild West. Even with the GDPR in effect, for the most part, social media is still pretty much business as usual… data mining and marketing the shit out of their users.

And the biggest culprit of them all? FACEBOOK! That's why this new gimmick… is utter… crap.

New York (The Verge) — Facebook’s new AI-powered moderation tool helps it catch billions of fake accounts theverge.com/2020/3/4/21164….

Facebook is opening up about an artificial intelligence-powered tool it calls Deep Entity Classification, which is designed to detect and help take down fake accounts used by scammers, spammers, and other malicious actors violating Facebook’s terms of service. The tool looks at an account’s entire network on Facebook and analyzes thousands of features to determine if it’s inauthentic.

Deep Entity Classification (DEC) is a machine learning model that doesn’t just take into account the activity of the suspect account, but it also evaluates all of the surrounding information, including the behaviors of the accounts and pages the suspect account interacts with. Facebook says it’s reduced the estimated volume of spam and scam accounts by 27 percent.

So far, DEC has helped Facebook thwart more than 6.5 billion fake accounts that scammers and other malicious actors created or tried to create last year. A vast majority of those accounts are actually caught in the account creation process, and even those that do get through tend to get discovered by Facebook’s automated systems before they are ever reported by a real user. Nick Statt/@verge

Source: The Verge, full story
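The “deep entity” idea described in the quoted piece — scoring an account not only on its own activity but on aggregate signals from the accounts and pages it interacts with — can be made concrete with a small sketch. This is a toy illustration using Python and scikit-learn; the feature names, the synthetic data, and the logistic-regression model are assumptions, not Facebook's actual DEC implementation.

```python
# Illustrative sketch of neighborhood-aware fake-account scoring.
# Feature names, toy data, and the model choice are assumptions, not Facebook's DEC.
import numpy as np
from sklearn.linear_model import LogisticRegression

def direct_features(account: dict) -> np.ndarray:
    """Features of the account itself: age in days, friend count, posts per day."""
    return np.array([account["age_days"], account["friend_count"],
                     account["posts_per_day"]], dtype=float)

def deep_features(neighbors: list[dict]) -> np.ndarray:
    """Aggregate the direct features of the entities this account interacts with."""
    if not neighbors:
        return np.zeros(6)
    mat = np.stack([direct_features(n) for n in neighbors])
    return np.concatenate([mat.mean(axis=0), mat.std(axis=0)])  # mean and spread of neighbors

def feature_vector(account: dict, neighbors: list[dict]) -> np.ndarray:
    return np.concatenate([direct_features(account), deep_features(neighbors)])

# Toy training data: label 1 = fake. Fake accounts here are young, sparsely connected,
# and hyperactive; real ones are older, well connected, and quiet.
rng = np.random.default_rng(0)
def fake():
    return {"age_days": rng.integers(0, 30), "friend_count": rng.integers(0, 20),
            "posts_per_day": rng.integers(20, 100)}
def real():
    return {"age_days": rng.integers(100, 3000), "friend_count": rng.integers(50, 500),
            "posts_per_day": rng.integers(0, 5)}

X, y = [], []
for _ in range(200):
    X.append(feature_vector(fake(), [fake() for _ in range(5)])); y.append(1)
    X.append(feature_vector(real(), [real() for _ in range(5)])); y.append(0)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
suspect = feature_vector(fake(), [fake() for _ in range(5)])
print("P(fake) =", clf.predict_proba([suspect])[0, 1])
```

The design choice the sketch tries to show is the one the article attributes to DEC: a throwaway account surrounded by other throwaway accounts looks very different, in aggregate, from a real person surrounded by real friends, even when its own profile is engineered to look normal.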
