Instagram issued a press release late last week to announce that they are "testing" new ways to verify the age of their users. Or rather, that they're looking at ways they might validate the age of some of their users. Over the years, large players like Meta have done this every so often: they announce a seemingly major policy change that is designed to protect users but, upon closer inspection, announces nothing of the sort, and nothing changes.
We think that's happening again here, and we want to share why we think so.
Here's the key part of the announcement:
If someone attempts to edit their date of birth on Instagram from under the age of 18 to 18 or over, we’ll require them to verify their age using one of three options: upload their ID, record a video selfie or ask mutual friends to verify their age.
So let's talk about what this means.
Uploading an ID
Imitation is the sincerest form of flattery, and we're thrilled to see that even Meta recognises how valuable it can be to use reliable methods to validate critical information. Except they're not really doing that. Here, Meta is just going to verify that the name and birthdate on an ID are valid -- hence their partnership with Yoti, which is tied into government ID databases. This means the only thing Meta will validate is that the date on an uploaded ID belongs to someone over 18 -- not that the ID belongs to the person behind the account.
What are we getting at? It's simple: Instagram (famously) allows you to have as many accounts as you like, and none of them are tied to your actual identity. A person could, starting tomorrow, use their ID to create a dozen accounts, each with a perfectly valid internal "over 18" attribute. Those accounts can then be handed off (or, presumably, sold off) to anyone willing to pay, including a 10-year-old. And how will Instagram know? Will they monitor accounts verified by a single ID for simultaneous logins or geographic anomalies?
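For what it's worth, that kind of check is not hard to sketch. Here's a minimal illustration in Python -- the login record, the ID-hash field, and the thresholds are all our own assumptions, not anything Meta has described:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    account_id: str
    id_document_hash: str  # hash of the government ID used at verification
    timestamp: float       # seconds since epoch
    lat: float
    lon: float

def km_between(a: Login, b: Login) -> float:
    """Great-circle (haversine) distance between two login locations."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def flag_shared_ids(logins: list[Login],
                    window_s: float = 3600,
                    max_km: float = 500) -> set[str]:
    """Flag ID documents whose verified accounts log in from places
    too far apart to be the same person within the time window."""
    flagged: set[str] = set()
    by_id: dict[str, list[Login]] = {}
    for login in logins:
        by_id.setdefault(login.id_document_hash, []).append(login)
    for doc, events in by_id.items():
        events.sort(key=lambda e: e.timestamp)
        for earlier, later in zip(events, events[1:]):
            close_in_time = later.timestamp - earlier.timestamp < window_s
            far_apart = km_between(earlier, later) > max_km
            if earlier.account_id != later.account_id and close_in_time and far_apart:
                flagged.add(doc)
    return flagged
```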
Unlikely -- even though, as the sketch shows, the check is straightforward. What's more, Instagram has said nothing about verifying the identity of users who are already on the platform, which means billions of accounts will exist with no oversight, no validation, and no accountability. The only thing worse than not verifying age is verifying some ages, sometimes, sort of. And that's the option Meta is taking -- or, anyway, one it might take. And the kicker? If you give Instagram your ID to verify, they keep the image, which means they keep the underlying data. What could possibly go wrong?
Caught on Tape
That Meta is taking shortcuts shouldn't be a surprise here -- the company has routinely flouted pesky things like consumer requests, consent orders, and Congress. That's why their decision to allow video validation of age is so troubling. Maintaining video recordings is one of the trickiest (and costliest) privacy obligations a business can undertake. The risks are extremely high: a video of you speaking combines image, voice, movement, biometric markers, and similar characteristics, which makes it just about as sensitive as personal data gets -- and just as damaging if misappropriated.
And so, presumably, Instagram will have taken the proper steps and instituted rules that counteract biased systems and a complete lack of oversight, and that protect against out-of-control employees and unaccountable programs, right? Because Meta would never run unsafe systems, or fail to protect personal data, or allow data to be used for improper purposes, right? We can trust that Meta won't misuse biometric data for profit without telling us, right?
There's another component here: certainty. If you read the white paper Yoti drafted on video-based age estimation, you'll see they have created some very interesting technology. But Yoti's work is grounded in science, and doesn't claim to be able to pinpoint age; instead, they say they can give a high degree of confidence that, based on a video, a person falls within an age range. Whether that range is 6-11 or 13-17, it is helpful, but not definitive. So when Instagram says that they'll use Yoti's age estimation tool to verify age, they're actually saying that they'll use Yoti's tool to get a degree of confidence about age, and nothing more.
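To see why a range plus a confidence level is not the same thing as verification, consider this toy sketch. The estimate interface and the decision rule below are invented for illustration -- they are not Yoti's actual API or Instagram's actual logic:

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """What a video-based estimator can honestly return:
    a range and a confidence, never an exact age."""
    low: int           # bottom of the estimated age range
    high: int          # top of the estimated age range
    confidence: float  # e.g. 0.95 means "95% confident age is in [low, high]"

def passes_13_check(estimate: AgeEstimate) -> bool:
    """A naive gate: treat the user as 13+ if the range midpoint clears 13.
    This is exactly the kind of rule that waves through 11- and 12-year-olds."""
    return (estimate.low + estimate.high) / 2 >= 13

# An 11-year-old who "looks 13" can produce an estimate like this:
borderline = AgeEstimate(low=11, high=16, confidence=0.95)
print(passes_13_check(borderline))  # True -- the child is on the platform
```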
Yet the law is about certainty, and it is explicit: you can't advertise to anyone under 13. Instagram's terms of service are clear, too: no one under 13 on the platform. But instead of sticking to that, and instead of meeting its legal obligation to do so, Instagram is announcing that if you look enough like you're 13 -- even if you're actually 11 or 12 -- they'll treat you as though you're of age.
So let's restate that: Instagram's stated plan is to institute a "yeah, you're probably 13" verification system and claim that this is enough under the law. If you aren't sure why that's a big deal, try making a similar guess with the tax authorities or the police and see how it goes. This is yet another example showing that the Meta approach to privacy laws can be summed up as "tl;dr lol."
Outsourcing Privacy
The last element of Instagram's announcement is so blatantly foolish it's hard to believe they're serious. Allowing existing users to vouch for the age of someone trying to get on the app makes as much sense as a bouncer asking your friends inside the bar whether you should be allowed in. We rely on adversarial methods to find truth; when the person seeking validation and the person validating have aligned interests (as when they're friends, or when one has paid the other to collude), the outcome is worthless.
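A toy simulation (entirely our own construction, with made-up numbers) shows how completely applicant-chosen vouchers defeat the check:

```python
import random

def lying_applicant_passes(pick_own_vouchers: bool,
                           n_vouchers: int = 3,
                           honest_yes_rate: float = 0.02) -> bool:
    """One trial: does an under-13 applicant get vouched onto the platform?
    Colluding friends always say yes; an honest stranger almost never does."""
    if pick_own_vouchers:
        votes = [True] * n_vouchers  # friends with aligned interests
    else:
        votes = [random.random() < honest_yes_rate for _ in range(n_vouchers)]
    return all(votes)

trials = 100_000
random_panel = sum(lying_applicant_passes(False) for _ in range(trials)) / trials
chosen_friends = sum(lying_applicant_passes(True) for _ in range(trials)) / trials
print(f"random vouchers: {random_panel:.5f}")    # effectively zero (0.02**3 = 8e-06)
print(f"chosen friends:  {chosen_friends:.5f}")  # 1.00000 -- every single time
```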
If this method is so obviously wrongheaded, why suggest it at all? Because it's a great way to shift responsibility. If Instagram had to do the work of validating ages, that would take time and money, and if there was a mistake, it would be all on Meta. But if I lie about my friend's age so that they can get an account, it's my fault for breaking the rules. Would Instagram be held liable? Perhaps, but more likely it would be a shared finding of culpability, and a reduction in liability for the company. Welcome to the Metaverse.
Right for the Wrong Reasons
Instagram's proposed age validation plan is no plan at all. Instead, it's PR, lip service, self-congratulatory hot air. If Instagram or Meta were serious about protecting under-18s, they'd worry less about onboarding new users and start where it matters: content moderation and safeguarding. That would show a commitment to tackling the hardest issues, costly though they may be, and would be worth celebrating.
Instead, Meta, as always, puts its mouth where its money is and focuses on making sure that as many under-18s as possible can get on the platform while still appearing to "do something" about online harms. In the end, Instagram is being true to form: focus on appearance and flash. This is worse than doing nothing, because make-believe "solutions" like this may convince the public (and even some regulators) that things are getting better, creating a false sense of security. Things are not getting better at Instagram, and this press release is solid proof.
As the market and users begin to recognise the staggering costs of this approach, "do it for the Gram" isn't going to cut it as a business strategy anymore, and it certainly won't be enough to give the kind of security, safety, reliability, and trust that people want and deserve. For that, we're going to need something much more than the same old Meta.
Join the Conversation
Join the waitlist to share your thoughts and be part of the conversation.
The Bright Team
Two lawyers, two doctors, and an army officer walk into a Zoom meeting and make Bright the best digital social community in the world. The team’s education and diversity of experience have given us the tools to confront some of the toughest tech and social problems.