The Right to Be Untagged: As Facebook Disables Facial Recognition for EU Consumers, U.S. Consumers Are Left Wondering What’s Next for Them

Just a few days ago, Facebook announced that it was turning off its facial-recognition tool, which had suggested the identities of registered Facebook users for possible tagging in uploaded photos.  Facebook did so in response to an audit by the Irish Data Protection Commissioner, one of 27 national privacy regulators in the EU—and the one that happens to be just down the road from Facebook’s European headquarters, in Dublin.

At least for now, then, European consumers can rest assured that Facebook will ease up on its practice of scanning photos and suggesting to a person’s Facebook friends that a particular photo contains his or her image, and should be tagged accordingly. (Facebook itself does not directly tag people in photos.)

Facebook intends to restore the facial-recognition tool, provided that it can find common ground with regulators about the most appropriate way to obtain user consent.

Meanwhile, Facebook has temporarily suspended the auto-tag feature in the U.S. as well.  But it had no legal obligation to do so, and thus, the feature may return here soon.

In this column, I will discuss the privacy concerns raised by facial recognition and social networking.  I will also discuss why facial recognition is currently off limits in the EU, based on its privacy law and framework, whereas, in the U.S., we are at the mercy of Facebook’s ever-changing privacy terms and business model.

Facial Recognition and “Tag Suggestions” on Facebook: How It All Works

Facebook launched its facial-recognition “tag suggestions” feature in the summer of 2010 in the United States.  The launch occurred without much fanfare, as often happens with new Facebook features, especially those that may compromise our privacy expectations.  In fact, Facebook turned the feature on by default, forcing users who disliked the concept to opt out in a cumbersome way, by navigating Facebook’s constantly changing privacy settings.  It is even harder to have stored tag data deleted; that request must be made via a hard-to-find link.

As noted above, the service is opt-out (not opt-in) in Facebook’s privacy settings—with the default being that you are set up to be tagged unless you object. And that aspect of the facial-recognition feature, and of Facebook’s approach to privacy generally, bothers privacy advocates.

Facebook added facial recognition to its site after buying Israeli startup Face.com for a reported $100 million.  The feature automatically checks to see whether users’ friends appear in images that they upload, and suggests friends to tag, ensuring that identifiable images will be shared more widely and will appear on the homepage of each tagged user.  Thus, even if you have never posted a photo of yourself, you may find your page populated with photos taken by others, in which you are “tagged.”

Facebook’s facial-recognition technology works by generating a biometric signature for each user who is tagged in photos on Facebook, using data from “photo comparisons.”  This biometric “signature,” based on scans of the user’s facial image, is available to Facebook, but not to the user.

Facebook routinely encourages users to “tag”—that is, to provide actual identifying names for themselves, their friends and acquaintances, and other people they may recognize on the site.  Facebook then associates the tags with a user’s account, compares what the tagged photos have in common, and stores a summary of the comparison. Facebook notes that it automatically compares uploaded photos “to the summary information we’ve stored about what your tagged photos have in common.”
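
To make the mechanics described above more concrete, here is a minimal, purely illustrative sketch in Python.  It is not Facebook’s actual system; the class name (TagSuggester), the similarity threshold, and the toy four-number “embeddings” are all invented for illustration, and a real system would first run a face-detection and embedding model over each photo.  The sketch simply shows the general pattern the paragraphs above describe: fold each tagged face into a stored per-user “summary,” then compare faces in newly uploaded photos against those summaries and suggest the closest match if it is similar enough.

```python
# Illustrative sketch only -- not Facebook's implementation.  It assumes some
# face-embedding model has already turned each detected face into a numeric
# vector; here we work directly with small made-up vectors.
import numpy as np


class TagSuggester:
    """Keeps a per-user 'summary' of tagged faces and suggests tags for new faces."""

    def __init__(self, similarity_threshold: float = 0.8):
        self.threshold = similarity_threshold
        self.summaries: dict[str, np.ndarray] = {}  # user id -> mean face vector
        self.counts: dict[str, int] = {}            # tagged faces seen per user

    def add_tagged_face(self, user_id: str, embedding: np.ndarray) -> None:
        """Fold a newly tagged face into the user's running average (the 'summary')."""
        vec = embedding / np.linalg.norm(embedding)
        n = self.counts.get(user_id, 0)
        prev = self.summaries.get(user_id, np.zeros_like(vec))
        self.summaries[user_id] = (prev * n + vec) / (n + 1)
        self.counts[user_id] = n + 1

    def suggest_tag(self, embedding: np.ndarray):
        """Return the user whose summary best matches this face, if close enough."""
        vec = embedding / np.linalg.norm(embedding)
        best_user, best_score = None, -1.0
        for user_id, summary in self.summaries.items():
            score = float(np.dot(vec, summary / np.linalg.norm(summary)))
            if score > best_score:
                best_user, best_score = user_id, score
        return best_user if best_score >= self.threshold else None


# Toy usage with made-up 4-dimensional "embeddings":
suggester = TagSuggester()
suggester.add_tagged_face("alice", np.array([0.9, 0.1, 0.0, 0.1]))
suggester.add_tagged_face("alice", np.array([0.8, 0.2, 0.1, 0.0]))
suggester.add_tagged_face("bob", np.array([0.0, 0.1, 0.9, 0.2]))
print(suggester.suggest_tag(np.array([0.85, 0.15, 0.05, 0.05])))  # prints "alice"
print(suggester.suggest_tag(np.array([0.1, 0.9, 0.1, 0.1])))      # prints None
```

The privacy point the column makes maps onto the data structure itself: the per-user summaries are built entirely from tags supplied by other people, so a user’s biometric “summary” can accumulate whether or not that user ever uploaded a photo or gave affirmative consent.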

When Facebook introduced its auto-tagging, it gave no special notice to users, and failed to obtain their consent prior to collecting and storing biometric data and facial comparisons.  Put more simply, Facebook was creating a large archive of our images—linking those images to our names, our identities, and other data about us—without providing us with any meaningful notice about this facial-recognition technology.  Facebook admitted in a later statement, “[W]e should have been clearer during the roll-out process when this became available to [users].”

Moreover, there is no option within a user’s Facebook privacy preferences to delete or prevent Facebook’s biometric-data collection.  Instead, a user who wants to delete the stored “summary” data associated with his account, which can be used to couple his name with photos of him, must contact Facebook through a difficult-to-find link.

Photo-tagging is important for Facebook, as it allows the social network to better ascertain the persons with whom its users interact in the real world, and to try to monetize those associations in the future.  Tagging also allows Facebook to figure out where we travel, what we do, and whom we like.

Why Facebook Users Should Care About Their Images Being Identified by Other Users via Facebook’s Software

Some readers may wonder, Why would anyone care about their image being identified, with the help of fellow users, through Facebook’s software?  There are a few good reasons for concern.

First, it seems creepy to know that a company is keeping an archive of photos of us without our affirmative consent—thus creating a digital photo album displaying who we are, where we have been, and with whom we associate.

Second, the tagging feature relies on getting third parties to identify other Facebook users, which is also a bit unsettling.

And third—and arguably worst of all—a database is being created, without the user’s consent, that could someday be accessed for intelligence or law-enforcement purposes.  Governments in democracies and repressive regimes alike would love to have access to such a trove of images—to help them identify people ranging from protesters at a demonstration, to crime suspects.

Facebook’s Use of Facial Recognition and the EU’s Concerns

As questions from users, privacy advocates, and regulators mounted with respect to Facebook and its method of user-enabled facial recognition, Facebook quietly and temporarily pulled the plug on “tag suggestions” for all Facebook users this past summer.

Facebook did not introduce facial-recognition features in Europe until June 2011.  Yet, within the year, the Irish data-protection (i.e., privacy) commissioner started to raise concerns about the practice.

Moreover, EU privacy regulators—including the data-protection authorities in Norway, Germany, and Ireland—have voiced concerns about Facebook’s facial recognition, worrying that the feature violates EU privacy laws.

The EU has a comprehensive privacy law, the Data Protection Directive, which is implemented through national privacy legislation in member countries.  Those laws require that companies (1) obtain consent before they collect a user’s personal data, including images, and (2) notify the user when they are going to collect or use data in a new manner.  Thus, when Facebook unveiled the recognition/tagging feature, European regulators believed that Facebook had not properly obtained consent, because it had not asked users to “opt in” and thereby permit the use of their data.

Moreover, in the EU, citizens have a right to access information that a company collects about them, and to have that information deleted if they so choose.  It was unclear whether Facebook was still compiling tagged photos of users—even if it was no longer posting those tagged photos online—without user consent.  In other words, there was a fear that Facebook was amassing a database of tagged user images, and keeping that database in storage, even if it was respecting our wishes not to have our photos tagged by other users online.

The Resolution of the Irish and EU Concerns About Facebook’s Facial-Recognition Mechanism

Based on Facebook-user complaints, the Irish Data Protection Commissioner (DPC) audited Facebook’s facial-recognition feature.  In late September, it announced the audit’s results, revealing that Facebook intends to delete its facial-recognition data and stop using the technology in Europe next month, as it works with the privacy watchdog to address user complaints.

Last October, for example, Austrian law student Max Schrems lodged a reported 22 complaints with the Irish privacy watchdog after requesting a copy of his data from Facebook.  Schrems discovered that Facebook had kept all of his deleted data.

Since then, Facebook has stopped using facial-recognition technology for new users in Europe.  For existing users, Facebook will deactivate the feature throughout Europe on October 15.

The audit—conducted to ascertain whether the company had adjusted its practices to conform with Irish and EU data-protection laws—found that Facebook has now implemented many of the privacy improvements that were suggested in a previous government report.  It cites improved transparency regarding how data is handled, better user control over Facebook settings, greater clarity about data retention and deletion capabilities, and enhanced user data-access rights.

“I am particularly encouraged in relation to the approach [Facebook] has decided to adopt on the tag suggest/facial-recognition feature by in fact agreeing to go beyond our initial recommendations, in light of developments since then, in order to achieve best practice,” DPC head Billy Hawkes said.  He added that Facebook’s actions demonstrated a clear commitment to good data-protection practice.

However, while the Irish DPC applauded Facebook’s ceasing its facial-recognition program, several other issues remain unresolved.  The regulator has raised questions about whether Facebook is actually deleting photos that are marked for removal within 40 days, as required under Irish Data Protection laws.

The Irish DPC also seeks further clarification about the way Facebook handles inactive and deactivated accounts.   The Irish privacy watchdog has given Facebook another four weeks to address this issue.

Richard Allan, Facebook’s Director of Policy for Europe, the Middle East, and Africa, says the decision to remove the facial-recognition tool was made in response to new guidance issued by the EU.  Allan said, “Our intention is to reinstate the tag-suggest feature, but consistent with new guidelines. The service will need a different form of notice and consent.”

That is a major admission from Facebook: an acknowledgment that the site will improve its notice and consent practices with respect to EU consumers.

If Facebook does reintroduce its facial-recognition mechanism, users will have to opt in, not opt out.  Also at issue is whether Facebook will maintain images of people who do not consent to the collection of those images.  Thus, in the EU, users may find that they have had their images deleted, and that Facebook will not initiate scanning and suggested tags without their affirmative permission.

Facebook and Its Facial-Recognition Mechanism in the U.S.

While EU regulators have explicit authority to stop Facebook’s data collection unless it complies with EU law, the situation in the U.S. is different.  Currently, in the U.S., Facebook is able to use facial recognition—and to require users to opt out, rather than opting in.  Moreover, Facebook can do so without much fear of legal repercussions, because here in the U.S., there is no federal privacy law that regulates how Facebook can collect our biometric data.

When Facebook launched “tag suggestions,” the Electronic Privacy Information Center (EPIC), a leading U.S. privacy NGO, filed a complaint with the U.S. Federal Trade Commission (FTC) just three days later, objecting to the way in which the facial-recognition feature had been deployed.

The complaint alleged that: (1) Facebook is involved in “unfair and deceptive acts and practices” due to its continued use of the automatic tagging feature; (2) Facebook’s implementation of the facial-recognition technology is an invasion of privacy, which not only causes harm to consumers, but is done without their consent; and (3) Facebook’s collection of biometric data from children is contrary to the Children’s Online Privacy Protection Act of 1998 (COPPA).

Additionally, the EPIC complaint requests that the FTC require Facebook to: (1) immediately suspend Facebook-initiated tagging or identification of users based on Facebook’s database of facial images; (2) not misrepresent how it “maintains and protects the security, privacy, confidentiality, and integrity of any consumer information”; and (3) provide additional disclosures to users prior to new or additional sharing of information with third parties.

The EPIC complaint focuses on Facebook’s business practices because the FTC is equipped to pursue such violations under section 5(a) of the FTC Act, which focuses on preventing unfair or deceptive trade practices.  The FTC Act, however, is not a comprehensive privacy statute, and does not require companies to obtain consent before collecting consumer data.  Instead, unlike the EU, the U.S. typically allows a company’s Terms of Service (ToS), a private contract, to set the privacy commitments that a company makes to its customers.  That is why, in the U.S., Facebook will introduce a new feature (such as Timeline or auto-tagging) and require users to opt out, rather than opt in.  This approach, of course, takes advantage of inertia—as many users may not be aware of a change in privacy practices, or may not take the time to opt out—especially if opting out must be done via links that are hard to find, or instructions that are difficult to understand.

With respect to Facebook and photo-tagging, EPIC noted in its complaint that:

“Facebook users are only given the option to turn off the automatic tagging feature and have their biometric data deleted only after the feature is installed. To turn off this feature, a user must navigate through his or her privacy settings to ‘opt out’ of the tag suggestion service. In addition to opting out, a user has to send a message to Facebook and specifically request that Facebook delete the data that it has collected. This multi-layered process is confusing and there is no instruction page or notification alerting users that opting out is an option.”

The bottom line: EPIC asked the FTC to prohibit Facebook from creating facial-recognition profiles without getting users’ express consent.

The EPIC complaint also explains how Facebook has failed to establish that application developers, the government, and other third parties will not be able to access Facebook’s “photo comparison data.” It also addresses the ways in which Facebook’s collection of biometric data for facial recognition violates user expectations and its own ToS.

It’s not surprising, however, that EPIC’s complaint is still pending with the FTC.  The FTC can act if it believes a complaint has merit, but it is not obliged to do so.  And the FTC recently dealt with Facebook: in November 2011, the two reached a settlement relating to other privacy problems at the company, concerning its sharing of user data with third parties.

As part of the FTC settlement, Facebook must implement practices that are appropriate to the sensitivity of the “covered information” in question, which is very broadly defined in the order, and would include biometric data.

The most hopeful development here is that when Facebook plans to override our privacy preferences in the future, it is meant to obtain our affirmative consent.  But it appears that facial recognition and auto-tagging were already grandfathered in.  Thus, there may be no need for Facebook to go back and get our permission with respect to these features—meaning that American users may still be stuck with auto-tagging, unless we expressly opt out.

Finally, based on what Facebook did with Timeline—forcing us to switch to the new profile whether we wanted to or not—it appears that the concept of affirmative consent to Facebook changes may be more mythical than we might once have hoped.  (I wrote about Timeline in another column here on Justia’s Verdict.)

Senator Franken and Consumer-Protection Groups Are Also Scrutinizing Facebook’s Privacy Practices

In addition to scrutiny from European regulators, Facebook has faced criticism from U.S. lawmakers over its use of facial-recognition technology.  At a Senate hearing last July, Senator Al Franken (D-Minn.) described Facebook as having created the “world’s largest privately-held database of face prints—without the explicit consent of its users.”

Last Friday, after Facebook’s EU announcement, Senator Franken said in an e-mail statement that he hoped that Facebook would offer a way for American users to opt in to its photographic database.

“I believe that we have a fundamental right to privacy, and that means people should have the ability to choose whether or not they’ll be enrolled in a commercial facial-recognition database,” he said. “I encourage Facebook to provide the same privacy protections to its American users as it does its foreign ones.”

Facebook: A Tale of Two Jurisdictions

To recap, EU citizens have regulators watching out for them, and a specific right to opt in to new uses of data by Facebook.  Meanwhile, in the U.S., the FTC settlement—at least in theory—means that Facebook should check with us before launching new tools and applications that use our data and images in surprising new ways.

But the FTC settlement is still ambiguous. It does not create an explicit statutory mandate for Facebook to treat us and our data in a certain way.  In contrast, the current White House proposal (which now also appears in pending federal legislation) would give us an explicit statutory bill of rights in this area.

Should we be concerned?  As I noted above, the idea that a private company is amassing a large database of photos of us is creepy.  Moreover, the database may well be seen by Facebook as a large asset, in which it has property rights that it can then monetize.

Other businesses also see commercial potential here.  For example, Redpepper, an Atlanta-based company, is developing an application to allow Facebook users to be identified by cameras installed in stores and restaurants.  Redpepper reported in a blog post that users would have to authorize the application to pull their most recently tagged photographs.  After you give consent, Redpepper says, “custom-developed cameras then simply use this existing data to identify you in the real world,” so that, for instance, businesses can offer you special discounts and deals.

We May Yet See Additional EU Regulation for Facebook’s Photo-Tagging

Meanwhile, it’s useful to remember that we’re still in the early days when it comes to Facebook’s use of photo-tagging in the EU, and the regulation that may well follow.  In addition to Ireland’s probe, Germany recently declared the practice of photo-tagging illegal, and demanded that Facebook disable its facial-recognition service and destroy its German database of user images derived from facial recognition.

Facebook said at the time that it didn’t have to do so, because the data collection is legal in Ireland, where the company has its European headquarters.  But Irish privacy authorities were conducting an inquiry of their own, and they forced Facebook to partially do what Germany had demanded.

In Germany, meanwhile, the Hamburg Commissioner for Data Protection and Freedom of Information has issued an administrative order against Facebook over its facial-recognition technology, in the midst of the country’s own ongoing battle with Facebook. This measure appears to have been taken more or less independently of whatever was happening in Dublin.  According to the Hamburg regulator, if Facebook cannot sort it out, “the existing data base has to be deleted.” That rule would apply only to Hamburg, although, the commissioner notes, “Other German authorities have already announced similar administration procedures.”

Although Ireland’s decision has wider EU ramifications because Facebook’s international headquarters are in that country, Hamburg maintains that Facebook still needs to comply with local regulations in Germany.  And so EU privacy regulators continue on with their push to keep Facebook accountable to EU consumers.