We already know that Facebook has the ability to tag and unearth our most embarrassing photos when we least expect it, but the social network’s latest leap forward takes that ability to the next level.
Through research conducted using Flickr’s public domain photo collection, Facebook created an algorithm that can accurately identify a person even when their face is turned away from the camera.
Out of a sample of nearly 40,000 images, the algorithm was correct 83 percent of the time. That accuracy is high enough to have even the most unflappable of online sharers double-checking their privacy settings.
Off-Putting? Off Platform
So why trawl Flickr’s image archives and not leverage Facebook’s own plentiful supply of photos?
That probably comes down to the “creepy factor,” with which Facebook has flirted for many years. In the last year alone, several on-platform feature roll-outs have prompted more freak-outs than enthusiasm among users.
In the wake of the company’s on-platform emotional experiment/manipulation (delete as applicable) last year – in which Facebook tweaked some users’ feeds to show more overtly negative content without their knowledge, then measured the impact on the mood of their posts – Facebook is aware that it needs to tread lightly with new features like these. And that’s not to mention earlier concerns over it listening in on conversations through mobile devices, endorsing products with our likenesses, and surfacing old memories.
We reported earlier this year from Social Media Week about Facebook’s growing awareness of privacy concerns, but it will always face the challenge of balancing those user worries with the temptation to roll out new products. When those products involve features like identifying a covered-up face or tagging potentially unwanted images, the privacy debate will always bubble back to the surface.
My So-Called Digital Life
Although few people pay much attention to their digital footprint beyond the last few days, these privacy concerns are not unwarranted.
Employers are known to be using Facebook in their checks on potential and existing employees, while colleges and even law enforcement agencies are getting in on the act to identify potential troublemakers. Being able to identify partially covered faces, faces in a crowd, or even just items in a photo would only make those checks more powerful.
As a result, minor transgressions and mild embarrassments can blow up into much more damaging blemishes on our online record. Because of the nature of social networks and cloud storage, these are marks that can last a lifetime. Once something attaches itself to our digital trail, it can be very hard to shake off.
No one polices the Internet on a global scale, so outdated information and biased reporting can easily go unchecked, leaving the reader to filter the accurate from the erroneous. When employment, college acceptance, and perhaps even arrest are at stake, the dangers of posting ever more personal information to social networks become apparent.
Europe has started to take a tougher stance on these issues with its “right to be forgotten” legislation. Google, long a pariah in the region and the subject of wider accusations of antitrust violations and intellectual property abuse, is at the heart of this debate.
The issue of images and Google’s woes in Europe could soon collide further still, as the technology behemoth’s newly streamlined Photos app offers advanced search functions that closely resemble the results of Facebook’s experiment, albeit with no claims that it can pull specific faces from a crowd. We have to assume, however, that if Facebook can do it, so can Google, and with the latter’s patchy record on protecting intellectual property, it’s easy to see the search giant serving up anything in its results that benefits the bottom line.
It remains early days for all of these services, but the potential to make mistakes – or for others to misuse them intentionally – is already significant.
When it comes to online privacy, we can be our own worst enemy. We can also be our first line of defence, by remembering the advanced, ever-developing capabilities of image search and thinking twice before uploading anything that could work against us in the long term.