NHTSA Strongly Endorses Connected Vehicle Technology, But Implementation Questions Remain

By Thomas DeCesar, Edward Fishman, Cliff Rothenstein, and Marty Stern

In a recent announcement, the United States Department of Transportation’s (“DOT”) National Highway Traffic Safety Administration (“NHTSA”) endorsed the future implementation of vehicle-to-vehicle (“V2V”) communication technology in light vehicles (e.g., passenger vehicles and light trucks).  This technology, which is seen by some as the future of vehicle safety, allows vehicles to exchange location and speed information with other vehicles.  This information is processed to provide warnings of driving hazards, or even to automatically stop the vehicle.
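
To make the mechanism concrete, the sketch below shows the kind of computation a V2V warning system performs once position and speed messages are exchanged.  It is a minimal Python illustration under simplifying assumptions (straight-line travel, simplified message fields); actual V2V safety messages are standardized (SAE J2735) and carry far more data.

    from dataclasses import dataclass

    @dataclass
    class SafetyMessage:
        """Simplified stand-in for a V2V safety message; real messages
        (SAE J2735 basic safety messages) carry many more fields."""
        x_m: float       # position east of a reference point, meters
        y_m: float       # position north, meters
        vx_mps: float    # velocity east, meters/second
        vy_mps: float    # velocity north, meters/second

    def seconds_to_closest_approach(a: SafetyMessage, b: SafetyMessage) -> float:
        """Estimate when two straight-line tracks are nearest (never negative)."""
        rx, ry = b.x_m - a.x_m, b.y_m - a.y_m
        vx, vy = b.vx_mps - a.vx_mps, b.vy_mps - a.vy_mps
        rel_speed_sq = vx * vx + vy * vy
        if rel_speed_sq == 0.0:
            return 0.0  # no relative motion
        return max(0.0, -(rx * vx + ry * vy) / rel_speed_sq)

    # Example: an oncoming car 80 m ahead with a 20 m/s closing speed
    # leaves a 4-second horizon in which to warn the driver or brake.
    me = SafetyMessage(0.0, 0.0, 10.0, 0.0)
    oncoming = SafetyMessage(80.0, 0.0, -10.0, 0.0)
    print(seconds_to_closest_approach(me, oncoming))  # 4.0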

In the near term, the practical effect of NHTSA’s decision will be increased interest in and study of V2V technology.  The results of NHTSA’s year-long Safety Pilot model deployment program, which tested V2V devices, will also be released shortly.  More importantly, the decision signals a cautious step over the line from general interest to an intent to mandate V2V technology in the future.  NHTSA declined to set a timeline, but it will likely begin a rulemaking over the next few years to require V2V technology in new light vehicles.  NHTSA’s hesitancy to commit to a mandate may stem from the lack of data on real-world implementation of the technology, since the agency has historically required safety features only after they gained general acceptance within the automotive industry.

Legal concerns relating to privacy, security, and liability will also pose significant hurdles to the implementation of V2V technology.  Steps must be taken to ensure that the outflow of information will be used for proper purposes and that the technology is equipped with sufficient security measures to prevent improper access by third parties.  Although not mentioned in NHTSA’s press release, liability considerations may also play an important role in V2V implementation.  These risks will have to be understood and adequately managed by companies involved with this technology, including automotive companies, equipment suppliers, and technology providers.

In addition, the implementation of V2V technology may cause continued friction between the Federal Communications Commission (“FCC”) and proponents of unlicensed spectrum use on the one hand, and the DOT and automotive industry on the other.  As currently envisioned, V2V technology will operate using dedicated short-range communications (“DSRC”) over 75 MHz in the 5.9 GHz spectrum band, which also happens to be prime potential territory for unlicensed WiFi use.  The FCC set aside the 5.9 GHz spectrum for V2V communications in 1999, but beyond the V2V test program, the band has gone largely unused.  As a result, at the direction of Congress in the 2012 Spectrum Act, the FCC recently began to consider opening the band to unlicensed shared broadband and WiFi use to meet predicted future demand, as part of a proceeding on unlicensed use in the 5 GHz band.  There is significant debate over whether the spectrum can support both DSRC and shared broadband uses, and V2V proponents are concerned that such unlicensed use could interfere with vehicle safety communications.  Further testing will be required before the issue can be resolved, and the debate will likely feature prominently in the FCC’s 5 GHz proceeding in light of the DOT’s announcement and its upcoming V2V rulemaking.

V2V technology is part of the larger field of intelligent transportation systems (“ITS”).  Broadly speaking, ITS involves the integration of technology and communication devices into vehicles and other transportation infrastructure to improve safety and provide other benefits.  NHTSA’s decision should be seen as a positive step forward for ITS technology in general, and specifically for vehicle-to-infrastructure (“V2I”) technology, which refers to roadside communications infrastructure designed to operate in conjunction with V2V devices. 

The further implementation of ITS technology within U.S. transportation infrastructure will require the cooperation and collaboration of several industries, including the automotive, information technology, and telecommunications sectors.  The approach these industries take with regard to the connected car space, along with public opinion on the advantages and disadvantages of the technology, will be driving forces in the ITS field.  Given the rapid pace of innovation within these industries, it will be important for NHTSA to address compatibility, scaling, and future-proofing issues related to V2V and other ITS technologies.  Although NHTSA has offered a strong endorsement of V2V technology with its recent announcement, a number of important questions surrounding ITS technology (including the privacy, security, and liability issues identified above) must still be answered before widespread implementation can occur.

Hulu Privacy Class Action Ruling Eases Road to Recovery for Plaintiffs

By Jenny Paul 

A federal magistrate judge in California recently ruled that Hulu LLC users do not need to demonstrate an actual injury beyond the wrongful disclosure of personally identifiable information to recover damages under the federal Video Privacy Protection Act, a ruling that may ease the road to recovery for other VPPA plaintiffs.

Plaintiffs in the Hulu case alleged that the company wrongfully disclosed their video viewing selections and personally identifiable information to third parties — metrics company comScore and social network Facebook — in violation of the VPPA.  The VPPA provides that a video tape service provider who knowingly discloses personally identifiable information concerning any consumer of the provider is liable to the consumer in a civil action for “actual damages but not less than liquidated damages in an amount of $2,500.”  In an earlier decision, the court found that the term “video tape service provider” encompasses video streaming services such as Hulu.

The court rejected Hulu’s contention that plaintiffs had to show an injury separate from the violation of the statute to recover actual or liquidated damages.  Rather, the court found, under the plain language of the statute, plaintiffs only have to demonstrate a wrongful disclosure of personally identifiable information — not an additional injury — to recover damages.

The ruling will allow the Hulu plaintiffs to proceed with their claims, and plaintiffs in future class action suits involving the VPPA are likely to rely on it.

The case is In re Hulu Privacy Litig., N.D. Cal., No. 3:11-cv-03764-LB.

Companies That Track Shoppers Via Smartphones Adopt Privacy Code of Conduct

By Jenny Paul and Marc Martin

A group of location analytics companies recently agreed to a code of conduct that will govern the use of technology allowing retailers to track shoppers’ locations throughout stores.  The group worked with Sen. Charles Schumer (D-NY) and the Future of Privacy Forum, both of whom had raised concerns about the technology, to develop the code, which includes in-store signs alerting customers that they are being tracked, as well as a central opt-out site for consumers.

The technology itself relies on mobile device Wi-Fi or Bluetooth MAC addresses to develop aggregate reports for retailers.  The code limits how the information collected in this manner may be used and shared, and how long it may be retained.  The analytics companies must also de-identify the data before handing it over to their customers.  Under the code, consumer opt-in consent is required only when personal information is collected or when a consumer will be contacted.  In addition, the code prohibits the companies from using the data collected “in an adverse manner” for employment, health care, or insurance purposes.
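
As an illustration of the de-identification step, analytics providers commonly replace the raw MAC address with a salted one-way hash, so retailers can count repeat visits without ever seeing the device identifier.  The Python sketch below shows one such approach; the salt value and function name are hypothetical and are not drawn from the code of conduct itself.

    import hashlib
    import hmac

    # Hypothetical provider-held secret; in practice it would be stored
    # securely and rotated by the analytics provider, not hard-coded.
    SECRET_SALT = b"example-rotating-salt"

    def deidentify_mac(mac_address: str) -> str:
        """Replace a raw Wi-Fi/Bluetooth MAC address with a keyed hash.

        The retailer receives only the resulting token, which still
        supports aggregate reports (e.g., repeat-visit counts) without
        exposing the underlying device identifier.
        """
        normalized = mac_address.strip().lower().encode("utf-8")
        return hmac.new(SECRET_SALT, normalized, hashlib.sha256).hexdigest()

    # The same device always maps to the same token, so visits can be
    # aggregated; without the salt, the MAC cannot be recovered.
    print(deidentify_mac("AA:BB:CC:DD:EE:FF"))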

Location analytics companies that have agreed to abide by the code of conduct include Euclid, iInside (a WirelessWERX company), Mexia Interactive, SOLOMO, Radius Networks, Brickstream, and Turnstyle Solutions.

California Ups Privacy Ante With Passage of Digital Eraser, Do Not Track Laws

By Jenny Paul and Marc Martin

The state of California passed two laws in recent weeks aimed at bolstering the privacy protections offered by websites and mobile apps.  The first, SB 568, gives minors a digital eraser of sorts by requiring the operators of minor-focused websites and mobile apps to provide a mechanism for removing content posted by minors who are registered users.  A website or mobile app can comply with the law by permitting minors to make content removal requests or by giving clear instructions on how a minor user can remove his or her own postings.

In addition, affected websites and apps must provide minor users with notice that deletion will not ensure complete or comprehensive removal of the content in question.  The law, which was signed by Gov. Jerry Brown Sept. 23, also prohibits operators of minor-targeted websites and mobile apps from marketing or advertising to minors specified types of products or services, including alcoholic beverages, firearms, tobacco, and permanent tattoos.  The law takes effect Jan. 1, 2015.

The second privacy-oriented law, AB 370, requires all websites that collect personally identifiable information to disclose how they respond to web browser “do not track” signals or similar mechanisms that allow browser users to make a choice about the collection of information that reveals an individual’s online activities over time and across third-party websites or online services.  A website operator may comply with the law by providing a link in its privacy policy to a description of any protocol the operator follows that offers users the choice to opt out of Internet tracking.  Gov. Brown signed the law Sept. 27, and it takes effect Jan. 1, 2014.  Given California’s prior application of its online privacy requirements to mobile applications, the new tracking law’s requirements likely will apply to mobile applications, although the law does not specifically address that issue.
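
For context, the “do not track” signal referenced by AB 370 typically arrives as a DNT: 1 HTTP request header set by the browser.  A minimal Python sketch of how a site operator might detect it follows; header handling is shown generically, and the function name is illustrative.

    def visitor_requests_no_tracking(request_headers: dict) -> bool:
        """Return True if the browser sent a "do not track" signal.

        Browsers with the setting enabled send the header "DNT: 1".
        Note that AB 370 requires disclosing how a site responds to
        the signal, not that the site actually honor it.
        """
        # HTTP header names are case-insensitive, so normalize keys.
        headers = {name.lower(): value for name, value in request_headers.items()}
        return headers.get("dnt") == "1"

    # Example with a hypothetical request:
    if visitor_requests_no_tracking({"DNT": "1", "User-Agent": "ExampleBrowser"}):
        print("Skip cross-site tracking for this visitor.")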

FCC Privacy Rules Updated for Smartphone Era

By Nickolas Milonas, Marc Martin, and Marty Stern

The Federal Communications Commission adopted a Declaratory Ruling that updates and broadens the scope of the FCC’s customer proprietary network information (CPNI) rules applicable to customer information stored on mobile devices.  Specifically, the ruling clarifies that wireless carriers have the same obligations to protect CPNI collected and stored on mobile devices using carrier software tools as they do for CPNI collected through network facilities.  The Declaratory Ruling does not apply to third-party app developers, apps that customers download from an app store, or device manufacturers and operating system developers.  Commissioner Jessica Rosenworcel noted that the rules were in need of updating because the last time they were revised, the iPhone did not yet exist.

Under the existing rules, wireless carriers may use wireless devices to collect information regarding network use.  This type of information can include the phone numbers of calls made and received, the duration of calls, and the location of the device during a call.  Carriers use this information to monitor network congestion and improve network performance.  The FCC specifically recognized these benefits of carriers collecting CPNI on mobile devices and made clear that it was not barring carriers from doing so.  The Declaratory Ruling found, however, that when this information is stored on a customer’s mobile device, it may be vulnerable to unauthorized access and should be subject to protections similar to those for customer information maintained on carrier networks.  Consequently, the Declaratory Ruling requires wireless carriers to take “reasonable precautions” to safeguard that data, just as if the carrier were collecting the information from its network facilities, provided that the data is collected at the carrier’s direction and the carrier or its designee has access to or control over the information.  The Declaratory Ruling does not mandate a specific set of precautions or safeguards, but instead leaves it to each carrier to determine its own means of appropriate protection.
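
Because the ruling leaves the choice of safeguards open, what counts as a “reasonable precaution” is up to each carrier; encrypting CPNI at rest on the device is one plausible example.  The sketch below uses the third-party Python cryptography package purely as an illustration; the record format and key handling are hypothetical.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Hypothetical: a real implementation would keep the key in
    # hardware-backed secure storage on the device, not in app code.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Example of a CPNI-like record a carrier diagnostic tool might log.
    record = b"called=15551234567;duration=120s;cell_id=4021"

    ciphertext = cipher.encrypt(record)     # what gets stored on the device
    plaintext = cipher.decrypt(ciphertext)  # readable only with the key

    assert plaintext == record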

Mobile App Transparency Group Continues Development of Privacy Code of Conduct

By Nickolas Milonas and Marc Martin

The National Telecommunications and Information Administration recently held another meeting as part of its multistakeholder process regarding mobile app transparency.  In the summer of 2012, the NTIA began an industry-wide effort to develop a voluntary code of conduct for how mobile apps notify users about their personal data collection practices.  Industry representatives and privacy groups have worked together over the past year to develop a draft code of conduct to improve the clarity of privacy disclosures while advancing self-regulation as a preferred option to privacy laws that may be handed down from Congress.

The Obama Administration has praised the process as a means to create meaningful self-regulation within the industry, which may also spur industry progress on other technology and privacy issues.  The NTIA’s process is nearing completion, with only one more meeting slated for this summer.

FTC's Online Privacy Rules for Children Clarified

By Nickolas Milonas and Marc Martin

The Federal Trade Commission recently released guidance on its December 2012 updates to the Children’s Online Privacy Protection Act (COPPA).  COPPA regulates the collection and use by website operators and application developers of personal information from children under the age of 13.  COPPA also requires website operators and application developers to obtain parental consent before collecting a child’s personal information.  The guidance touches upon several issues, including geolocation data; services directed towards children vs. mixed-audience services; parental access to children’s personal information; and disclosure of information to third parties.

As we previously reported, the December changes to the COPPA regulations are scheduled to take effect this July and contain definitional changes; expand the scope of permitted operations to include the collection of certain personal information through the use of persistent identifiers; clarify the use of age screens for content targeting a broad audience vs. content specifically targeting children; heighten parental notification requirements; and implement more-stringent requirements regarding the retention and disposal of personal information.

In advance of the FTC’s guidance, industry groups voiced concerns that the complex changes could deter innovation and asked the FTC to delay implementation until 2014 to ensure compliance.  However, privacy groups advocated rejecting any delays, stating that the changes are necessary to protect children and companies have had plenty of lead time to revise their policies and products.

Updated (5/6/13): In a letter to representatives of the advertising, application, and e-business industries, the FTC confirmed that it will not delay implementation of the new COPPA rules scheduled to take effect this July. The FTC stated that all stakeholders were afforded a sufficient opportunity to raise their concerns with the new rules but had not presented any facts warranting a delay in implementation.

Apple Settles In-App Purchases Class Action

By Nickolas Milonas and Marc Martin

Earlier this week, Apple agreed to settle a class action lawsuit regarding so-called “bait apps”—mobile apps directed towards children, which are free to download but then charge for in-app purchases. Under the terms of the settlement, Apple agreed to issue $5 in iTunes credit to affected customers. If customers racked up more than $5 in in-app charges, Apple will issue up to $30 in iTunes credit. Apple will issue cash refunds for accounts that spent more than $30.
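
Based solely on the settlement terms as described above, the relief tiers can be summarized in a few lines of Python; this is an illustration of the reported tiers, not the settlement’s operative language.

    def settlement_relief(charges_usd: float) -> str:
        """Map a claimant's unauthorized in-app charges to relief,
        per the tiers reported above (illustrative only)."""
        if charges_usd <= 0:
            return "no relief"
        if charges_usd <= 5:
            return "$5 iTunes credit"
        if charges_usd <= 30:
            # Credit assumed to match the charges, capped at $30.
            return f"${charges_usd:.2f} iTunes credit"
        return f"${charges_usd:.2f} cash refund"

    for amount in (3.99, 18.50, 42.00):
        print(f"${amount:.2f} in charges -> {settlement_relief(amount)}")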

The size of the class is not yet determined, but Apple will send notices to 23 million potentially affected customers. Customers who were affected during the relevant time period will have to certify that the charges were made by minors and without parental permission. While the final settlement amount may vary, Apple could end up paying in excess of $100 million when all is said and done. The court is scheduled to hear the proposed settlement tomorrow, March 1.

The lawsuit was filed in 2011 by a group of parents who claimed their children made purchases within apps without their consent. The parents alleged that Apple failed to adequately disclose that these apps contained the ability to make in-app purchases. At that time, Apple’s policy required account holders to enter their passwords when downloading a mobile app, but would not require passwords to be re-entered for the next 15 minutes. During that 15 minute window, children could make in-app purchases without entering an account password. Apple has since changed its policy to require a password for every purchase.
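
To illustrate the mechanism at issue, the earlier behavior amounted to a time-based authentication window: one password entry authorized all purchases for the next 15 minutes.  The Python sketch below models that pattern; Apple’s actual implementation is not public, so the class and method names are hypothetical.

    import time
    from typing import Optional

    REAUTH_WINDOW_SECONDS = 15 * 60  # the 15-minute grace period at issue

    class PurchaseGate:
        """Models a grace-period flow: entering a password opens a
        window during which later purchases skip re-authentication."""

        def __init__(self) -> None:
            self._last_auth: Optional[float] = None

        def record_password_entry(self) -> None:
            self._last_auth = time.time()

        def requires_password(self) -> bool:
            if self._last_auth is None:
                return True
            return time.time() - self._last_auth > REAUTH_WINDOW_SECONDS

    gate = PurchaseGate()
    gate.record_password_entry()     # parent authorizes an app download
    print(gate.requires_password())  # False: a child's in-app purchase
                                     # within 15 minutes needs no password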

The settlement is one piece in the larger puzzle regarding mobile app disclosures and privacy protections. Earlier this year, the FTC released a mobile privacy report and entered into a settlement with Path—a mobile-only social network that was allegedly mining information without users’ consent, including, according to the FTC, the information of minors in violation of the Children’s Online Privacy Protection Act. Late last year, the FTC released another report, highlighting the widespread practice of mobile apps collecting and sharing minors’ information with third parties without disclosing such practices. Also last year, the California Attorney General entered into an agreement with six mobile app platforms to increase consumer privacy protections. It issued a follow-up report earlier this year, Privacy on the Go, which includes recommendations for app developers, platform providers, and mobile carriers.

Data Privacy Update: FTC Releases Mobile Privacy Report and Settles Action against Path; Facebook to Identify Tracking Advertisements

By Nickolas Milonas, Marc Martin, and David Tallman

In a trio of recent data privacy developments, the FTC published mobile data policy recommendations, Path settled an FTC action regarding allegedly unlawful data collection, and Facebook will now tell users which ads are tracking their online activity.

The FTC recently released a staff report calling on mobile services to make their data policies more transparent and accessible to consumers. The report makes recommendations for mobile platform providers, application developers, advertising networks, and other key players in a rapidly expanding marketplace. The recommendations focus on providing consumers clear and timely disclosures about what consumer data is collected and how that data may be used. The report results in part from a May 2012 FTC workshop in which representatives from the industry, academia, and consumer privacy groups examined privacy risks and disclosures on mobile devices. 

Noting the expansive growth of services offered on mobile platforms, the report recognizes unique privacy concerns rooted in the “unprecedented amounts of data collection” possible from a single mobile device. The report also notes consumers are increasingly concerned about their privacy on mobile devices, stating “less than one-third of Americans feel they are in control” of their mobile personal data. 

With those concerns in mind, the report offers recommendations to improve mobile privacy disclosures.  These recommendations are consistent with the broad principles articulated in the FTC’s March 2012 Privacy Report, which generally called upon companies handling consumer data to adhere to the core principles of “privacy by design,” simplified consumer choice, and greater transparency.  The staff report elaborates on these general principles by providing guidance to address the unique challenges presented in the mobile environment (e.g., limited screen space, the centrality of platform and operating system providers, etc.).  Among other recommendations, the report suggests:

  • Developing privacy best practices and uniform, short-form disclosures;
  • Providing just-in-time disclosures and obtaining affirmative consent before apps access sensitive content like geolocation, contacts, or photos (a sketch of this consent flow follows the list);
  • Developing a one-stop “dashboard” to review content accessed by apps; and
  • Offering a “Do Not Track” mechanism on smartphones to prevent third-party tracking at the operating system level.
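
As a sketch of the just-in-time pattern in the second bullet, the Python snippet below states the purpose of a sensitive data request at the moment of access and proceeds only on an affirmative choice.  It is illustrative only; the location call and consent callback are stand-ins, not any platform’s actual API.

    def acquire_location():
        """Stand-in for a platform geolocation API call (hypothetical)."""
        return (47.61, -122.33)

    def request_geolocation(user_consents):
        """Just-in-time disclosure: explain the purpose at the moment
        of access and collect only after an affirmative choice."""
        disclosure = ("This app uses your location to show nearby "
                      "stores; it is not shared with advertisers.")
        if not user_consents(disclosure):
            return None  # declined: no silent collection occurs
        return acquire_location()

    # Example with a stubbed consent dialog; a real app would prompt.
    print(request_geolocation(lambda message: True))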

On the heels of the staff report, the FTC also announced a law enforcement action against Path, a mobile-only social network accused of collecting user data without consent. Through its social networking service, Path’s app allows users to upload and share content, including photos, comments, location data, and even the names of songs that the user plays. Among other allegations, the FTC claimed that the Path application automatically collected and stored personal information from users’ mobile device address books without the users’ consent (including names, addresses, phone numbers, email addresses, Facebook and Twitter usernames, and dates of birth). The agency also alleged that Path violated the Children’s Online Privacy Protection Act by collecting personal information from approximately 3,000 children under the age of 13 without parental consent. Path settled with the FTC on the same day that the agency filed its action. Path agreed to pay $800,000 in fines, delete all information for users under 13, and submit a comprehensive privacy plan with updates/assessments every other year for the next 20 years. 

Finally, Facebook recently announced it will alert users to advertisements that are based on or track browsing history. When users are logged in to their Facebook account and hover over ads with their mouse, a new pop-up icon will alert users if they are being tracked. The feature is the product of an agreement between Facebook and the Council of Better Business Bureaus, and users are still able to opt out of brand-specific ads, as well as ad tracking altogether.

These developments highlight the continuing regulatory focus on online privacy issues, particularly in connection with social media and mobile applications.

FTC Report Investigates Mobile Apps for Kids

By Samuel Castic

Federal Trade Commission staff recently released a report titled “Mobile Apps for Kids: Disclosures Still Not Making the Grade,” which contained the FTC’s most recent mobile app investigative findings that build upon its report from February of this year. The February report contained four key recommendations, which we summarized in a prior post.

This new report expanded on the FTC’s prior investigation by reviewing mobile app features and comparing them to disclosures made concerning the apps. The FTC found that many apps shared kids’ information with third parties without disclosing such practices to parents. Specifically:

1. Most apps failed to disclose information collection or sharing practices before the apps were downloaded;

2. Many apps failed to disclose that they contained advertising content or that the app shared personal data with third-party advertising networks (including device IDs, geolocation information, and phone numbers);

3. Some apps failed to disclose that they allowed in-app purchases;

4. Some apps failed to disclose that they contained social media integrations that allow users to communicate with members of social networks; and

5. Some app disclosures included false information.  For example, certain apps expressly stated that user information would not be shared or that the apps did not contain advertising, when that was not the case.

The FTC has taken the position that mobile apps are online services for purposes of the Children’s Online Privacy Protection Act (“COPPA”), which prohibits the online collection of personal information concerning children under age 13, except in certain circumstances. As we have noted in prior posts, this area is fraught with risk and legal exposure. Indeed, the report indicates that the FTC staff plans to launch “multiple nonpublic investigations” to determine whether certain players in the mobile app space have violated COPPA or engaged in unfair acts or deceptive practices in violation of the FTC Act.

The report concludes by urging the mobile app industry to carry out the recommendations from the FTC’s recent privacy report—most notably, to:

1. Incorporate privacy protections into the design of mobile products and services;

2. Offer parents easy-to-understand choices about data collection and sharing through kids’ apps; and

3. Provide greater transparency about how data is collected, used, and shared through kids’ apps.

Stay tuned in the upcoming weeks as the FTC is expected to announce new COPPA regulations that could impose further compliance challenges for mobile apps.

US-Japan Report on Cloud Computing Wary of EU Privacy Protections

By Chad King and Nickolas Milonas

The United States and Japan recently concluded a Director General-level meeting of the US-Japan Policy Cooperation Dialogue on the Internet Economy, addressing cloud computing and other Internet-related issues. The Cooperation Dialogue is focused on developing bilateral Internet policy initiatives and includes senior-level US and Japanese government officials and industry representatives. As part of a working group on cloud computing issues, representatives from US and Japanese industries submitted a joint report to the US and Japanese governments, which highlighted the benefits of robust and widely adopted cloud computing services but cautioned against the potential adverse impacts of increased EU privacy regulations on the deployment and adoption of cloud services.

The report is non-binding and seeks to provide both governments with information detailing industry priorities and cloud computing market issues. A US State Department communication on the report noted that the working group will continue its discussions regarding the development of cloud computing services, while seeking a balance between personal data security and the free flow of information.

The report found that recent EU efforts to strengthen personal data protections may “stifle business, slow the deployment of new business solutions[,] and create a large compliance burden” in connection with cloud computing applications.  In anticipation of new, more-stringent data protection requirements, the report warned that Asian businesses now fear increased costs of doing business in Europe.  The report counseled the US and Japanese governments to create an environment that allows the free flow of cloud services and to establish guidelines that provide for the protection of data with minimal costs to business.  The report noted that, from an end-user perspective, privacy and data protections are essential in fostering “trust” in the cloud.  The report concluded that the governments’ role should be to maintain the balance of protecting user information while allowing “the free flow of information to support business and innovation.”

The EU Commission recently launched an initiative to ease the regulatory burden on cloud computing, and some cloud providers in the EU have suggested alternative views of the EU requirements, so the assertions in the joint report are not without controversy. However, the report highlights the growing importance of cloud computing for businesses worldwide, and the key role that privacy plays in the cloud’s continued acceptance and expansion.

FTC Chairman and Experts to Examine Mobile and Online Privacy in Upcoming Webcast

A live webcast program entitled Privacy Untangled, featuring Federal Trade Commission Chairman Jon Leibowitz and an expert panel, will be carried on Broadband US TV on Friday, October 26, 2012, from 1:00-2:30 p.m. ET.

Balancing privacy with commercial interests has become increasingly complex and contentious, as businesses and government organizations rely on the collection, storage, and sharing of online and mobile consumer data. Recent regulatory initiatives, including the White House’s proposed Consumer Privacy Bill of Rights and related workshops, and the FTC’s privacy enforcement actions and best practices reports, have placed evolving privacy practices in the spotlight. In addition, privacy watchdog groups continue to criticize the government’s privacy initiatives as insufficient, while service providers complain that the government is over-reaching in its regulatory approach towards industry privacy practices.

An in-depth examination of these issues will be provided in a live webcast with co-hosts Marty Stern of K&L Gates and Jim Baller of the Baller Herbst Law Group. In addition to special guest FTC Chairman Jon Leibowitz, the program will feature an expert panel with Sue Kelley, American Public Power Association General Counsel; Deborah J. Matties, Attorney Advisor to FTC Chairman Leibowitz; Emily Mossberg, Principal at Deloitte & Touche LLP; Ross Shulman, Public Policy and Regulatory Counsel at the Computer and Communications Industry Association; Berin Szoka, President at TechFreedom; and Peter Swire, former Chief Counsel for Privacy under President Clinton and current professor at the Ohio State University.

The panel will engage in a lively discussion regarding privacy issues and the government’s recent initiatives to adjust privacy regulations for an evolving online and mobile marketplace.

You can register for the webcast here (free registration required).

California Adopts New Social Media Privacy Protections for Employees and Students

By J. Bradford Currier and Marc Martin

California employers and universities will no longer be permitted to ask employees and students for access to their social media accounts under a pair of bills recently signed into law. The legislation comes in response to concerns from state officials that some businesses were requiring employees and prospective employees to provide access to their social media accounts in order to conduct background checks and take disciplinary action. State officials were also concerned with universities using social media to monitor student behavior, particularly for student athletes. The California laws follow similar legislation passed by Maryland and Illinois, as well as federal legislation currently under consideration by Congress, and pose new restrictions on employers’ and educators’ use of online material to take action offline.

The bills define “social media” as not only the user’s account information but also a user’s social media content, including videos, photos, blogs, text messages, email, and web site profiles. Under Assembly Bill 1844, employers are prohibited from requiring employees or prospective employees to: (1) disclose their username or password; (2) access personal social media in the presence of the employer; or (3) divulge any personal social media information, and employers may not take any disciplinary action against an employee for refusing to comply with such a request. Employers may still seek credentials needed to access an employer-issued electronic device, and the new legislation is not intended to impede investigations of workplace misconduct or employee violations of the law. The legislation also states that California’s labor commissioner is not required to investigate alleged violations of the new law. Senate Bill 1349 similarly prohibits universities from asking a student, prospective student, or student group to disclose or access personal social media information and further prohibits universities from taking disciplinary action against students who refuse such requests. Universities will also be required to post their social media privacy policies on their websites. The legislation states that the restrictions do not affect a university’s right to investigate or punish student misconduct.

According to the California governor’s office, the new laws will protect residents from “unwarranted invasions” of privacy. However, critics of the bills suggest that the new restrictions may prevent employers from adequately investigating cases of workplace harassment and universities from ensuring their student athletes comply with NCAA rules.

FTC Releases Mobile App Privacy and Advertising Guide

By J. Bradford Currier, Marc Martin, and Samuel R. Castic

Developers of mobile applications are urged to adopt truthful advertising practices and to build basic privacy principles into their products under guidance recently issued by the Federal Trade Commission. The guidance is aimed at providing mobile app start-ups and independent developers with marketing recommendations designed to ensure compliance with federal consumer protection regulations. The guidance follows recent actions by the Federal Communications Commission, the White House, states, private stakeholders, and the FTC itself to establish mobile privacy codes of conduct and safeguard consumer information. The FTC guidance focuses on two key regulatory compliance areas for mobile app developers: (1) truthful advertising and (2) consumer privacy.

(1)        Truthful Advertising – The guidance recommends that mobile app developers always “[t]ell the truth about what your app can do.” The FTC cautions mobile app developers that anything a developer tells a prospective buyer or user about their app can constitute an advertisement subject to the FTC’s prohibitions on false or misleading claims. As a result, mobile app developers are encouraged to carefully consider the promises made concerning their apps on websites, in app stores, or within the app itself. Specifically, the guidance reminds mobile app developers that any claim that an app can provide health, safety, or performance benefits must be supported by “competent and reliable” scientific evidence. The FTC notes that it has taken enforcement action against mobile app developers for suggesting that their apps could treat medical conditions and recommends app developers review the FTC’s advertising guidelines before making any claims to consumers.

The guidance also advises mobile app developers to disclose key information about their products “clearly and conspicuously.” While the guidance recognizes that FTC regulation does not dictate a specific font or type size for disclosures, mobile app developers are encouraged to develop disclosures that are “big enough and clear enough that users actually notice them and understand what they say.” The FTC warns mobile app developers that it will take action against mobile app developers that attempt to “bury” important terms and conditions in long, dense licensing agreements. 

(2)        Consumer Privacy – The guidance calls upon mobile app developers to build privacy considerations into their products from the start, also known as “privacy by design” development. The FTC suggests that mobile app developers establish default privacy settings which would limit the amount of information the app will collect. The FTC also recommends that app developers provide their users with conspicuous, easy-to-use tools to control how their personal information is collected and shared. The guidance pushes mobile app developers to get users’ express agreement to: (1) any collection or sharing of information that is not readily apparent in the app; (2) any material changes to an app’s privacy policy; or (3) any collection of users’ medical, financial, or precise geolocation information. At all times, mobile app developers should be transparent with consumers about their data collection and sharing practices, especially when the app shares information with other entities. 
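
Read in code terms, the default-settings recommendation means optional collection features ship disabled and flip on only with express consent.  A minimal Python sketch follows; the setting names are hypothetical examples, not taken from the FTC guidance.

    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        """Privacy-by-design defaults: every optional collection
        feature starts off and requires an affirmative opt-in."""
        share_location: bool = False
        upload_contacts: bool = False
        personalized_ads: bool = False

    settings = PrivacySettings()    # safe defaults out of the box
    settings.share_location = True  # flipped only after express consent
    print(settings)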

The FTC also advocates that mobile app developers install strong personal information security protections in their products. In order to keep sensitive data secure, the guidance suggests that mobile app developers: (1) collect only the data they need; (2) secure the data they keep by taking reasonable precautions against well-known security risks; (3) limit access to a need-to-know basis; and (4) safely dispose of data they no longer need. Mobile app developers are also encouraged to establish similar standards with any independent contractors.

The guidance also pays special attention to the issue of mobile app protection of children’s privacy under the Children’s Online Privacy Protection Act (“COPPA”). The guidance reminds mobile app developers that, if their apps are “directed to” kids under age 13, they must clearly explain their information practices, obtain parental consent before collecting personal information from children, and keep such information confidential and secure. The FTC’s recommendations parallel its recently proposed rules designed to clarify the responsibilities under COPPA when third parties (such as advertising networks or downloadable “plug-ins”) collect personal information from users on child-directed websites. Mobile app developers are encouraged to contact the FTC or review the Bureau of Consumer Protection’s business resources when developing their privacy policies.

Comment Deadline on Proposed Children's Online Privacy Rules Extended by FTC

By J. Bradford Currier and Marc Martin

The Federal Trade Commission has extended the deadline for comments on the agency’s proposed revisions to the Children’s Online Privacy Protection Act (“COPPA”) released earlier this month. As we reported previously, the proposed rules would modify key definitions contained in COPPA to clarify the parental notice and consent responsibilities of website operators as well as third-party advertisers and “plug-in” developers that collect personal information on children. The proposed rules also aim to establish clear guidelines for the use of so-called “persistent identifiers” and would potentially allow websites which appeal to a general audience to “age screen” users by birth date and provide parental notice and obtain consent only for users who identify themselves as under 13 years of age. 

Comments on the proposed rules will now be accepted until September 24, 2012.

Employers "Surfing" into Uncharted Waters with Social Media Practices

As social media continues to blur the line between personal and professional lives, employers have grappled with whether they can or should use social media to monitor current employees or screen potential hires. While social media can provide employers with information to combat workplace harassment, protect confidentiality, and conduct internal background checks, recent lawsuits, media reports, and legislative activity at both the state and federal level indicate that employers may put themselves and their businesses at risk for taking action in the workplace for something posted online. In addition to recently enacted or proposed state laws prohibiting employers from requesting or requiring access to employee or applicant social media accounts, using social media information may potentially violate privacy, anti-discrimination, and labor laws as well as the terms of use of many social media sites.

A detailed summary of these potential risk areas that employers should consider as their social media practices and policies progress may be found here.

Twitter Ordered to Produce Protestor's Tweets

By J. Bradford Currier and Marc Martin

Users of microblogging site Twitter have no reasonable expectation of privacy in their public tweets, according to a recent decision issued by a New York City criminal court. The ruling concerned a subpoena issued to Twitter seeking user information associated with the account of an Occupy Wall Street protestor arrested for alleged disorderly conduct after marching in a public roadway. Prosecutors alleged that the subpoenaed tweets contradicted the protestor’s claim that he was escorted by the police onto the public roadway. The decision marks the first time New York courts have grappled with the privacy rights of Twitter users in a criminal case and represents a defeat for the civil liberties and electronic privacy groups that supported Twitter’s opposition to the subpoena. The decision follows a ruling in April that denied the protestor’s own motion to quash the subpoena on standing grounds.

In its motion to quash, Twitter argued that the subpoena compelled the company to violate the Fourth Amendment, which protects against unreasonable searches and seizures, and the Stored Communications Act, which governs the access to and disclosure of electronic communications. Twitter also argued that its users own their data under the site’s terms of service. The New York court disagreed, stating that the user had no reasonable expectation of privacy in a “tweet sent round the world.” The decision analogized the tweets to yelling out a window on a public street, with Twitter serving as a third-party witness to the user’s statements. By contrast, the court noted that users do have an expectation of privacy in their personal emails, private chats, or other non-public online communications. The decision also stated that the subpoena did not place an unreasonable burden on Twitter, as the company could locate the requested data with little difficulty. The court did limit the scope of the subpoena, stating that the Stored Communications Act required prosecutors to obtain a search warrant to access tweets which were less than 180 days old. While the decision recognized that the law regarding social media is still developing, the court stated that “there are still consequences for your public posts.”

Obama Administration Pursues Mobile Privacy Code of Conduct

By J. Bradford Currier and Marc Martin

The National Telecommunications and Information Administration (“NTIA”) will hold its first meeting on July 12, 2012 aimed at developing voluntary codes of conduct designed to provide consumers with clear information regarding how personal data is handled by companies which develop and offer applications for mobile devices. The NTIA’s planned meetings with stakeholders were first announced in February 2012 as part of the White House’s proposed Consumer Privacy Bill of Rights. The NTIA meeting comes as both the Federal Trade Commission and Federal Communications Commission have recently taken action to improve consumer transparency and privacy safeguards for personal information collected by mobile apps.

A number of stakeholders have already filed comments expressing their support for improving the clarity and comprehensiveness of privacy disclosures provided to mobile app consumers. However, a number of commenters noted that the rapid pace of innovation in the mobile app market and the relatively small screen sizes of current mobile devices will make long-term, definitive disclosure rules difficult to develop. While NTIA hopes to tackle a number of Internet policy topics, including copyright and cybersecurity issues, the organization chose mobile app privacy as the first meeting topic because it believes consensus on a code of conduct can be reached “in a reasonable timeframe.” NTIA expects the mobile app privacy meeting will serve as a useful precedent for later discussions involving other online consumer protection concerns.

The NTIA meeting is open to all interested stakeholders, and a venue should be announced before the end of the month. Those planning to attend are asked to notify NTIA online in advance.

Maryland "Facebook Law" Regulates Employer Access to Social Media Accounts

By David A. Tallman and Andrew L. Caplan

It is increasingly common for employers to request that job applicants and employees divulge the passwords to their Facebook accounts and to other social media sites. This trend has not gone unnoticed by the media and privacy advocates, which view this practice as an intrusive violation of individual privacy. On the other hand, employers often have valid reasons to exercise oversight over social media activities, especially in highly regulated industries where employees’ activities may be more likely to cause the company to incur liability.

This month, the Maryland General Assembly stepped into the debate by passing a law that will prevent employers from accessing the personal social media accounts of their employees and job applicants. Subject to certain exceptions, Senate Bill 433 (“S.B. 433”) provides that “an employer may not request or require that an employee or applicant disclose any user name, password, or other means of accessing a personal account or service through an electronic communications device.” S.B. 433 also provides that an employer may not discharge, discipline, or penalize (or threaten to discharge, discipline, or penalize) an employee based upon the employee’s refusal to disclose access to the employee’s personal social media account. A similar prohibition exists with respect to prospective employees – an employer may not fail or refuse to hire a job applicant based upon the applicant’s failure to provide access information to a personal social media account.

The prohibitions in S.B. 433 do not come without exceptions. For example, an employer is not prohibited from accessing an employee’s personal accounts in connection with an investigation into the employee’s downloading of company proprietary information or financial data. Moreover, S.B. 433 contains a significant exception that appears intended to address the concerns of businesses. Specifically, an employer may access an employee’s “personal web site, internet web site, or web-based account, or similar account,” if: (i) the employer receives information that the account is being used for a business purpose; and (ii) the purpose of the access is to ensure compliance with “applicable securities or financial law, or regulatory requirements.” Since S.B. 433 does not define “applicable securities or financial law, or regulatory requirements,” it is uncertain how broadly this exception will be construed in practice. It is also noteworthy that the exception only permits an employer to access an employee’s personal account when the employer has reason to believe that the account is being used for business purposes. This effectively means that businesses will not be able to access an employee’s personal account until after the damage is done.

Maryland appears to be one of the first states to pass legislation that specifically addresses this increasingly high-profile issue. While the exceptions articulated in the bill do not appear to permit businesses to either request or require job applicants or employees to disclose their social media log-in credentials in order to monitor social media activity on an ongoing basis (unless the employer has information to suggest that the account is being used for business purposes), there remain other less intrusive social monitoring techniques that companies might employ. For example, an employer might ask its employees to “friend” a social media account controlled by the compliance department or otherwise take steps to make social media account activity visible to the company.

S.B. 433 demonstrates that social media monitoring is an increasingly sensitive issue – and it seems likely that other states will follow Maryland’s lead by passing legislation to prevent perceived overreach. Businesses must be prepared to incorporate these legal requirements into their social media policies.

Consumer Privacy Report Released By FTC

By J. Bradford Currier and Lauren B. Pryor

The Federal Trade Commission recently released its long-awaited Final Report on protecting consumer privacy, in which it stated that consumers should have more choice and control over how their personal information is collected and used. The FTC’s Final Report offers non-binding recommendations for companies “that collect or use consumer data that can be reasonably linked to a specific consumer, computer, or other device.” The Final Report comes more than a year after the FTC first issued its proposed framework for regulating consumer privacy and just a month after the White House released a proposed Consumer Privacy Bill of Rights.

Recognizing the potential burden of the Final Report’s recommendations on small businesses, the FTC stated that its conclusions did not apply to companies that merely collect, and do not transfer, non-sensitive data on fewer than 5,000 consumers a year. Similarly, a company’s data collection practices may fall outside the scope of the Final Report if: (1) a given data set has been reasonably stripped of personally identifiable information; (2) the company publicly commits not to re-identify such information; and (3) the company requires any secondary users of the data to keep it in de-identified form. While the majority of the Final Report discusses protecting consumer privacy online, the FTC noted that its recommendations would also apply to companies collecting personal information offline, such as financial institutions and the healthcare industry. With these qualifications, the Final Report provides three best practices for companies collecting personal information from consumers:

(1)        Privacy By Design

The Final Report recommends that companies build in consumer privacy protections at every stage of the development of their products and services. Specifically, companies should incorporate reasonable procedures for collecting, securing, and retaining customer data. The Final Report commends a number of leading online service companies that have adopted stringent encryption systems in the face of increasing cyberattacks. Companies should limit data collection to activities that are “consistent with the context of a particular transaction,” and provide prominent notices to consumers regarding the collection of data unrelated to the requested service. Companies should also destroy consumer data when the company no longer needs this information to provide the requested service. On this point, the FTC expressed support for offering consumers an “eraser button” on social media websites to allow the deletion of personal information at the user’s discretion. Additionally, companies should ensure that collected data remains accurate and offer customers an opportunity to correct erroneous information. Adopting these policies would make strong privacy settings the default for most online services.

(2)        Simplified Consumer Choice

The FTC also advised companies to provide easy-to-use mechanisms allowing customers to determine how their data is collected and used. The application of the simplified consumer choice policy will vary depending on the context of the interaction between the company and the consumer. For example, a car dealership may send a coupon to a customer based upon personal information obtained during prior purchases at the dealership without providing the customer with a choice. By contrast, if the car dealership intends to sell that customer’s personal information to a third-party data broker for use in unrelated marketing activities, the car dealership must provide the consumer with the ability to prevent the sale of his or her information. 

For most online services, the FTC suggested that companies allow users to choose data sharing preferences during the registration process or at least before any personal information is collected. The FTC identified company practices requiring consumers to disclose personal data in order to obtain important services on a “take it or leave it basis” as especially problematic and inconsistent with the public interest. 

The Final Report generally concludes that companies should provide consumers with the ability to opt out of being tracked across third parties’ websites. However, the FTC stopped short of recommending that Congress pass “do not track” legislation and stated that the FTC would work closely with stakeholders to develop an industry-led solution. The FTC reaffirmed the commitment expressed in its recent enforcement actions to require companies to give prominent disclosures and to obtain express affirmative consent for material retroactive changes to privacy policies and before collecting especially sensitive information such as health, financial, and precise geolocation data. The Final Report indicates that the FTC will host a workshop later this year on the concerns raised by the data collection practices of large ISPs, search engines, and social networking platforms.

(3)        Information Collection Transparency

In accordance with industry guidance on mobile applications released earlier this month, the Final Report calls for “clearer, shorter, and more standardized” privacy policies. This recommendation can result in a Catch-22: if brevity comes at the expense of accuracy, there is increased risk that a privacy policy will be deemed misleading, deceptive, or insufficient. The FTC noted that screen size limitations on mobile phones further compound the difficulties with providing sufficient privacy disclosures and stated that it will host a workshop in May 2012 regarding how mobile privacy disclosures can be short, effective, and understandable on small screens.

The Final Report also encourages transparency by recommending that companies allow consumers more options to access their personal data. Specifically, the FTC indicated its support for recent legislation which would give access rights to consumers for information held by data brokers. The Final Report also suggests that the data broker industry should explore the idea of creating a centralized website where data brokers identify themselves to consumers, describe how they collect consumer data, and disclose the types of companies to which they sell information. At a minimum, the Final Report asks all companies collecting personal data to improve their consumer outreach and education efforts relating to data privacy practices.

Mobile App Platforms Reach Voluntary Agreement with California State Attorney General

By Samuel R. Castic and J. Bradford Currier

Californians who download mobile applications on their smartphones, tablets, and other mobile devices should soon have greater knowledge of how their personal information is collected and used under a non-binding Joint Statement of Principles recently reached between six mobile app platforms, including Apple, Inc., and the California Office of the Attorney General. The California announcement comes just days after the Federal Trade Commission warned app developers to improve privacy disclosures for mobile apps directed at children and within hours of the White House’s announcement of a Consumer Privacy Bill of Rights to protect citizens online.

Although the agreement does not create any new legal obligations for app providers, the parties agreed to voluntarily abide by five privacy principles: 

(1) Any app that collects personal data from a user, regardless of age, “must conspicuously post a privacy policy or other statement describing the app’s privacy practices” that informs the user how the data will be used and shared. California law already requires websites and online services to post privacy policies when they collect personally identifiable information about users. Despite this obligation, the California Attorney General reported that only 5 percent of mobile apps currently offer a privacy policy, although other parties suggest that the figure is approximately 33 percent. The agreement makes clear that the California Attorney General views mobile applications as online services subject to this law. 

(2) The agreement modifies the app submission process to make it easier for app developers to include a link to, or the text of, the privacy policy governing the app. However, the agreement contains no commitment by app platforms to notify users when a privacy policy changes. 

(3) The app platforms will create reporting procedures for users to identify apps that do not comply with applicable terms of service or applicable law. 

(4) The app platforms agreed to implement a response process to handle reported violations of app privacy policies. 

(5) The parties agreed to work together to develop “best practices” for mobile privacy policies. 

While no timetable exists for implementation of the agreement, the parties agreed to reassess the state of app privacy policies within six months.

FTC Warns Mobile App Developers About Privacy Practices

By Samuel Castic, J. Bradford Currier, and Lauren B. Pryor

In another example of its recent efforts to step up enforcement on a variety of privacy-related issues, the Federal Trade Commission released a staff report on privacy disclosures for mobile applications used by kids. The report follows a recent FTC enforcement action against a mobile app developer for children and a notice of proposed rulemaking to amend the Children’s Online Privacy Protection Act (“COPPA”). The staff report represents a “warning call” to the app industry to provide parents with easily accessible, basic information about the mobile apps that their children use.

Under COPPA, operators of mobile apps directed at children under the age of 13 must provide notice and obtain parental consent before collecting personal information from children. The report surveyed approximately 1,000 apps designed for children and reviewed the types of privacy disclosures currently available to parents and kids. The FTC found that users frequently received the privacy disclosures only after downloading the app, limiting parents’ ability to “pre-screen” apps for their children. Additionally, the FTC reported that app websites often failed to provide meaningful notice regarding the data collection features of the app, leaving parents uninformed as to whether the app collected data from their children, the type of data collected, the purpose for such collection, and what parties may access such data. The FTC found this lack of disclosure troubling, especially in light of current technologies that allow mobile apps to access a child’s information with the click of a button and to transmit it invisibly to a wide variety of entities.

In light of these concerns, the report offered four key recommendations:

  • App developers should provide “simple and short” privacy disclosures that are easy to find and understand on a mobile device screen;
  • App developers should “alert parents if the app connects with any social media, or allows targeted advertising to occur through the app”;
  • Third parties obtaining user information through apps should make their privacy policies “easily accessible” through a link on the app promotion page or in the app developer’s disclosures; and
  • Companies that provide platforms for downloading mobile apps should take action to help better protect kids and inform parents (e.g., develop a uniform practice for developers to disclose data collection practices).

The FTC plans to conduct additional reviews over the next six months to determine whether to take enforcement action against app developers that violate COPPA. The FTC also plans to hold a workshop on mobile privacy issues later this year.

Warrant Necessary for GPS Tracking Says Supreme Court

By Michael K. Ryan

In United States v. Jones, the Supreme Court ruled unanimously that law enforcement officials must obtain a search warrant before installing a GPS device to monitor and track criminal suspects. Although unanimous, the Court’s reasoning was fractured and the Court left open many questions about how advancing technology must be balanced with the Fourth Amendment’s privacy expectations and protections against unreasonable searches.

The facts of the case are straightforward: In 2004, Antoine Jones came under suspicion of narcotics trafficking and became a target for investigation by the FBI and the District of Columbia Metropolitan Police Department. After collecting information for almost a year, in 2005 the Government applied for a warrant authorizing the use of an electronic tracking device to be placed on a vehicle used by Jones. Based on the information received from the GPS device, Jones was indicted on multiple counts, including conspiracy to distribute cocaine and cocaine base. When the Government sought to introduce the GPS tracking evidence, Jones moved to suppress it: although the warrant required that the GPS device be installed in the District of Columbia within ten days, the device was not installed until the eleventh day, and in Maryland. Unable to rely on the lapsed warrant, the Government argued that no warrant was necessary to install the GPS device at all.

Writing for the majority, Justice Scalia concluded “that the Government’s installation of a GPS device on a target’s vehicle, and its use of that device to monitor the vehicle’s movement, constitutes a ‘search’” under the Fourth Amendment. The Court emphasized that, by placing a GPS device on a vehicle Jones used, it “physically occupied private property for the purpose of obtaining information.” In other words, the majority found the application of the “reasonable expectation of privacy” test unnecessary because the Government physically intruded upon one of Jones’ “effects” by placing the tracking device on the vehicle. This physical intrusion alone mandated a warrant. 

Justice Alito, joined by Justices Ginsburg, Breyer and Kagan, concurred in the judgment, but criticized the majority for applying “18th-century tort law” to a “21st-century surveillance technique.” Instead, Justice Alito analyzed the constitutional question “by asking whether respondent’s reasonable expectations of privacy were violated by the long-term monitoring of the movements of the vehicle he drove,” rejecting the majority’s trespass-based approach. While acknowledging the problems inherent in this test, namely how ever-expanding technology can change individuals’ reasonable expectations of privacy and how such judgments may be better left to the elected branches of government, Justice Alito nevertheless concluded “that the lengthy monitoring that occurred in this case constituted a search under the Fourth Amendment.”

In perhaps the most interesting opinion, Justice Sotomayor agreed with the majority that a search occurred in this case based on the Government’s physical intrusion on the vehicle Jones was driving, but noted that because the “Fourth Amendment is not concerned only with trespassory intrusions on property” the majority’s test was not particularly useful. In addressing the “reasonable expectation of privacy” test, she noted the necessity of reconsidering “the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.” In her view, such a premise “is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” She then cataloged the many ways in which individuals disclose information via cell phone use, web browsing and on-line purchases, concluding: “I for one doubt that people would accept without complaint the warrantless disclosure to the Government of a list of every Web site they had visited in the last week, month or year. But whatever the societal expectations, they can attain constitutionally protected status only if our Fourth Amendment jurisprudence ceases to treat secrecy as a prerequisite for privacy.” 

The Jones decision left open many questions regarding how the Fourth Amendment would be applied to technologies which often require the disclosure of private information to various non-governmental entities such as cellular phone and Internet service providers. While those questions were not decided by the Court, the divergence of opinion among the Justices demonstrates that the Court will be wrestling with these issues for years to come.

Facebook Settles Privacy Action with the FTC

By Lauren B. Pryor

On Tuesday, the Federal Trade Commission announced a proposed settlement with Facebook, the world’s largest social media site, in connection with an FTC complaint alleging that Facebook repeatedly deceived consumers by promising to keep certain personal information private and failing to do so. The settlement suggests that the FTC may continue to focus on privacy-related enforcement actions into the New Year.

Among other things, the complaint alleged that Facebook made privacy misrepresentations in connection with the 2009 revamping of its privacy model. The complaint also alleged that:

  • without notice or consent, Facebook changed its privacy features to allow private information – such as Friends Lists – to be publicly available; 
  • although Facebook represented that users could restrict sharing of data to limited groups (e.g., "Friends Only"), such information was shared with third-party apps;
  • Facebook shared personal information with advertisers while promising not to do so; and
  • Facebook continued to allow access to pictures and videos after user accounts were deleted or deactivated.

The proposed settlement bars Facebook from making deceptive privacy claims, requires Facebook to obtain user consent before changing privacy features and requires Facebook to submit to privacy compliance audits over the next 20 years. In addition, under the proposed settlement, Facebook must prevent access to user content after an account has been deleted and establish and maintain a comprehensive privacy program to address privacy risks related to the development and management of new and existing products and services. Notwithstanding the allegations, the proposed settlement does not constitute an admission by Facebook as to any violations of law.

In a blog post in response to the settlement, Facebook CEO Mark Zuckerberg outlined recent modifications to Facebook’s privacy policies designed to address privacy concerns. Zuckerberg further announced that the role of Facebook’s Chief Privacy Officer will be split into two distinct positions to address matters related to policy and products.

The Facebook settlement is evidence of a greater effort by the FTC to hold social media companies accountable for allegedly deceptive privacy practices. For example, the FTC recently settled an action with Twitter concerning its data security practices, as well as actions against the operator of www.skidekids.com and the application provider W3 Innovations, LLC in connection with alleged violations of the Children’s Online Privacy Protection Act.

The FTC will accept public comments on the proposed Facebook settlement through December 30, 2011. Thereafter the FTC will decide whether to make the order final.

Mobile App Rating System Adopted by Wireless Industry

By Marc Martin and J. Bradford Currier

Many mobile applications will soon carry age-appropriateness ratings under a voluntary program recently announced by the mobile wireless trade association CTIA and the Entertainment Software Rating Board (“ESRB”). The program will mirror the age ratings currently issued by ESRB for video games and will range from apps appropriate for “early childhood” to “adults only” content. CTIA members AT&T, Microsoft, Sprint, T-Mobile USA, U.S. Cellular and Verizon Wireless will include the new ratings on the apps offered through their online storefronts, although actual deployment dates will vary by company. Apple and Google, which are neither wireless carriers nor CTIA members, have not adopted the ESRB ratings and instead continue to provide their own proprietary app rating systems.

Under the new program, app providers will enter information regarding the app’s content as part of the app storefront submission process known as onboarding. ESRB will consider whether the app includes violence, sexual content or profanity as well as whether the app allows the sharing of user-generated content or a user’s location and personal information. After receiving the information from the app provider, ESRB will automatically assign an age rating that will be displayed across the participating storefronts. App ratings will be continually monitored by ESRB and will be adjusted in response to consumer complaints. Unless the app developer resubmits an app to ESRB, the new system will not apply to apps already available on participating storefronts. App developers will also have the ability to appeal allegedly inaccurate ratings. Ratings may also change if the developer adds content that would alter the original app classification.

The announcement follows an increase in agency enforcement activities against mobile app developers and inquiries from lawmakers related to privacy protections for apps directed at children. Tech observers suggest that the app industry’s voluntary adoption of the ESRB rating system will allow developers to avoid a mandated system imposed by Congress. Proponents of the new system claim that the ratings will give parents better tools to monitor app content as more children and young adults access their entertainment on mobile devices. The ratings systems also received praise from lawmakers and Internet watchdog groups as a sensible method of safeguarding children from inappropriate content in a rapidly expanding app marketplace.

FTC Proposes Major Expansion to COPPA's Scope and Compliance Requirements

Update (11/22/11): The FTC extended the deadline for comments on the proposed COPPA reforms until December 23, 2011, citing the complexity of the questions and issues raised by the proposed amendments. The original comment deadline was November 28, 2011.

---------

The Federal Trade Commission recently announced a set of proposed revisions to its rule implementing the Children’s Online Privacy Protection Act (“COPPA”) that would expand the rule’s application to a greater number of websites and online services. COPPA requires that website operators notify parents and obtain parental consent before they collect, use, or disclose personal information from individuals under 13 years of age. Specifically, the proposed rules would expand the definition of personal information to include so-called “persistent identifiers,” meaning unique user identification information obtained for purposes other than supporting the internal operations of a website or online service. The new rules would also extend COPPA protections to photographs, videos, or audio files that include a child’s image or voice. In addition, the FTC would consider a wider range of factors, including whether a website features child celebrities and music content, when determining whether the site or online service is directed to children. The proposed rules rejected a number of alternative means of obtaining parental consent proposed by stakeholders and declined to establish a safe harbor for websites and online services that follow best practices guidelines issued by the Direct Marketing Association.

A K&L Gates Client Alert providing a detailed summary of the FTC’s proposed COPPA revisions and an analysis of the potential impacts of the reforms on websites and online services may be found here.

FTC Settles Privacy Case Against Children's Social Networking Site

The Federal Trade Commission recently announced its settlement with the operator of www.skidekids.com, a social media website marketed as the “Facebook and Myspace for kids.” The FTC claimed that the website collected personal information from approximately 5,600 children without parental consent in violation of the Children’s Online Privacy Protection Act (“COPPA”). COPPA requires that website operators notify parents and obtain parental consent before they collect, use, or disclose personal information from individuals under 13 years of age. The agency also alleged that the website’s operator made deceptive claims regarding the website’s privacy policy and information collection practices.

While the Skid-e-Kids website asserted that parents would be contacted by email prior to their child’s use of the site, the FTC found numerous instances where parental notice was not provided and consent was not received. As a result, the site allowed children to create profiles, post personal information, upload pictures, and send messages to other users, resulting in the unauthorized collection of user names, birth dates, email addresses, and cities of residence. 

In addition to barring any future COPPA violations and deceptive privacy claims, the operator of Skid-e-Kids agreed to: (i) destroy all information collected from children in violation of COPPA; (ii) provide online educational material about privacy, retain an online privacy professional, or join a FTC-approved safe harbor program; and (iii) pay a $100,000 civil penalty. All but $1,000 of the penalty will be waived if the operator complies with the settlement’s oversight requirements and supplies accurate financial information to the FTC. The settlement remains subject to court approval.

The settlement is further evidence of the FTC’s recent efforts to step up enforcement on a variety of privacy-related issues. On the same day as the Skid-e-Kids settlement, the FTC reached another settlement with an online advertising company for misleading customers regarding the use of tracking cookies. Less than a month ago, the FTC settled a privacy case against a mobile application developer for alleged COPPA violations. The FTC has specifically emphasized online privacy protections for children, recently launching a website promoting safe use of social networking sites by tweens and teens.

Google Loses Skirmish in "Wardriving" Class Actions [UPDATED 7/15/11]

By Dan Royalty

 Update (7/21/11):

U.S. District Judge James Ware on Monday granted Google’s request to certify his decision for appeal to the Ninth Circuit. The court noted that it had earlier found that the dispute “presents a case of first impression as to whether the Wiretap Act imposes liability upon a defendant who allegedly intentionally intercepts data packets from a wireless home network” and a “novel question of statutory interpretation.” Given the novelty of the issues presented, the court concluded that its earlier decision “involves a controlling question of law as to which there is a credible basis for a difference of opinion” and certified the decision for immediate interlocutory appeal. In granting Google’s motion, the court also stayed the case pending resolution of the matter on appeal.

-------

The Northern District of California issued an interesting and, at first blush, surprising decision in the consolidated Google Street View privacy class actions last week, exploring some rarely trod real estate within the federal Wiretap Act, 18 U.S.C. § 2510 et seq., and denying Google’s attempt to dismiss the federal wiretap claims against it.  The court held that WiFi transmissions—which are carried by radio waves—are not “radio communications” under the Wiretap Act.  It also rejected Google’s argument that transmissions across unencrypted WiFi networks are “readily accessible to the general public,” because plaintiffs claimed that the packet sniffing software needed to intercept those transmissions is not readily available.  These rulings may be surprising to technophiles who know that WiFi is transmitted via radio waves, and who know at least three locations where free packet sniffing software can be downloaded. After the jump we explain the background of these cases and why, properly framed, the court’s decision is not so surprising after all.

 


The practice of driving around with a laptop to detect and record WiFi access points is known as “wardriving.” Google’s wardriving troubles began in spring 2010 when German data authorities questioned Google’s practice of collecting WiFi network information along with pictures from its roving Street View vehicles. 

Google acknowledged that its Street View cars collected WiFi network information such as network name and MAC address, but initially denied that its cars collected any payload data—i.e., the cars had not collected the content of any communications transmitted across WiFi networks. However, after the data protection authority of Hamburg, Germany asked Google to audit the Street View WiFi data it had retained, Google retained a forensic firm to do so and released its report. That report confirmed that the Google software stored payload data transmitted from unencrypted wireless networks, and Google began the mea culpas.

Unsurprisingly, several class action lawsuits were soon filed against Google around the United States, alleging that its collection of WiFi data violated the federal Wiretap Act, state wiretap laws, and other state statutory and common laws. These lawsuits were transferred and consolidated before Judge James S. Ware of the Northern District of California.

The federal Wiretap Act makes it unlawful to intentionally intercept any electronic communication. 18 U.S.C. § 2511(1)(a). “Electronic communication” is broadly defined under the law to include any transfer of data by, among other things, radio, easily reaching the data Google collected from WiFi access points.

In its motion to dismiss, Google sought the shelter of one of several exceptions to the Wiretap Act. That exception carves out from liability any interception of an electronic communication “made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public.” 18 U.S.C. § 2511(2)(g)(i) (emphasis added). Section 2510(16) defines communications that have been scrambled or encrypted, among other things, as not readily accessible to the general public. However, the statute defines “readily accessible to the general public” only “with respect to a radio communication.” 18 U.S.C. § 2510(16).

The question, then, was whether the WiFi signals intercepted by Google could be considered “radio communications” under the law. WiFi signals are transmitted by radio waves using the IEEE 802.11 radio standards, and thus, Google argued, they should be considered radio communications. Because plaintiffs had not pleaded that their WiFi signals were scrambled or encrypted, their networks should be considered readily accessible to the general public, and Google could not be held liable for intercepting their broadcasts. Plaintiffs argued that Congress meant something narrower than all communications transmitted by radio waves when it used the phrase “radio communications,” and that electronic communications transmitted by WiFi networks fell outside Congress’s intended definition. 

Finding the statutory text ambiguous, Judge Ware took a deep dive into legislative history. He sussed out that Congress’s definitions of “electronic communication” and “radio communication” were largely written to bring the Wiretap Act in line with new standards of electronic communications, and to quell the concerns of radio scanning enthusiasts, respectively. After examining the legislative history and parsing other parts of the law that address “radio communications,” Judge Ware concluded that “radio communications” should be limited to “traditional radio services.” Because WiFi was not a traditional radio service, it did not qualify as a radio communication. Thus, for electronic communications, “readily accessible to the general public” was undefined under the Wiretap Act, and plaintiffs did not need to plead around the Section 2510(16) definition to state a claim.

Finally, the court rejected Google’s argument that, even under an ordinary meaning of the phrase, plaintiffs’ WiFi networks were “readily accessible to the general public.” Google had seized on plaintiffs’ claim that their networks were open and unencrypted, and argued that this meant that they were readily accessible to the general public. But plaintiffs claimed that Google was able to intercept the WiFi packets only by using “rare packet sniffing software” that is not widely available. The court found that plaintiffs had properly made out a claim under the Wiretap Act, and allowed the case to proceed.

To those familiar with the ubiquity and ease-of-use of packet sniffing software, this last ruling stands out. But at the motion to dismiss stage, the court was required to accept as true plaintiffs’ claims that the means by which Google intercepted payload data was not generally available to the public. Google will almost certainly mount a strong factual challenge on that point. But now the putative class representatives have stated a claim under the Wiretap Act, and will be able to move their cases towards class certification.

Senators McCain and Kerry Introduce Privacy Bill of Rights

On April 12, 2011, Senator John Kerry (D-MA) and Senator John McCain (R-AZ) introduced the “Commercial Privacy Bill of Rights Act of 2011” to establish the first federal statutory baseline of consumer privacy protection that would apply across industry sectors. The bill would govern how customer information is used, stored, and distributed online. We will provide more analysis soon, but for now, here are the highlights:

Information covered. The bill applies to broad categories of information, including names, addresses, phone numbers, e-mail addresses, other unique identifiers, and biometric data when any of those categories are combined with a date of birth, place of birth, birth certificate number, location data, unique identifier information (that does not, alone, identify an individual), information about an individual’s use of voice services, or any other information that could be used to identify the individual.

Right to security and accountability. Information-collecting entities would be required to implement security measures to protect user information and would be prohibited from collecting more individual information than is necessary “to enforce a transaction or deliver a service requested by that individual,” subject to certain exceptions.

Privacy by design. Entities would be required to implement “privacy by design” concepts, incorporating privacy protection into each stage of product or service development in a manner far more comprehensive than previously required anywhere in the United States.

Privacy policies. Entities would be required to have privacy policies or disclosures that clearly, concisely, and timely notify individuals of the entities’ practices “regarding the collection, use, transfer, and storage” of individual information, and entities would also be required to notify individuals when their practices undergo “material changes.”

Right to notice, consent, access and correction of information. The bill would offer individuals the option to opt out of most information collection activities and require that individuals affirmatively consent to the sharing of certain information with third parties and to an entity's collection of especially sensitive personal information. Individuals would also have the right to access and correct information that entities maintain about them.

Service providers. The bill would require entities that contract with any service provider that has access to individual information to require the service provider to comply with the requirements of the bill, and to comply with the entity’s information policies and practices.

Third parties and data transfers. The bill would restrict the ability to transfer or share individual information with third parties, and would obligate the transferring entity to contract with any such third party for the protection of the individual information before transferring it.

Enforcement. The bill would empower state attorneys general and the Federal Trade Commission (“FTC”) to enforce the new restrictions. It would allow the FTC to develop safe harbor programs for authorized information collection.

Scope. The new rules would apply to non-profit organizations (a potential expansion of FTC authority), telecommunications common carriers (an expansion of FTC authority), and other entities which collect personal information on more than 5,000 individuals in a given year. The bill’s restrictions would not extend to federal and state governments or law enforcement agencies.

The proposed privacy protections follow the decision by makers of many popular Internet browsers to allow users to select a “do-not-track” feature for their searches. Leading Internet merchants and privacy watchdog groups praised the bipartisan bill, calling it “an important step” toward the development of a comprehensive national privacy law, while critics maintain that it does not go far enough to protect consumer privacy rights.

Doing Business in Mexico? It's Time to Revise Your Privacy Practices

By Holly K. Towle, Henry L. Judy, and Samuel R. Castic

On July 6, 2010, Mexico’s “Law on the Protection of Personal Data Held by Private Parties” took effect, and some of the most stringent requirements are currently scheduled to take effect in July 2011.  Accordingly, the time for companies that are covered by the law to adjust their privacy policies and business practices is today, not mañana.[1]   In many ways, this law is more robust than approaches taken to data protection in the United States.  It brings Mexican privacy law far closer to, or goes beyond, the concepts and structure of the European Data Protection Directive (“EU Directive”)[2] or other approaches such as the Canadian Personal Information Protection and Electronic Documents Act.[3]   The law also seems to approximate the European Union approach of treating data protection as a basic right.[4]   This Alert discusses some of the key provisions of Mexico’s new law.

What Data Is Covered?

The law applies to “personal data,” which is “any information concerning an identified, or identifiable individual.”[5]   Although the U.S. Federal Trade Commission has proposed a similar approach[6] and some of its enforcement orders already take such an approach, such breadth is not used in all aspects of U.S. data protection.  For example, U.S. state breach notification laws generally are limited to specified categories of sensitive personal data, such as a name in combination with a social security number or credit card number.  It is unclear whether the term “identifiable” would reach profiles and other types of data that can approximate an individual’s identity, such as on a statistical basis.

The law also applies to “sensitive” personal data.  This category is defined generally as personal data touching on the most private areas of the data subject’s life or whose misuse might lead to discrimination or involve a serious risk for the data subject,[7] i.e., there are three separate concepts at issue:  a broad “touching on” concept, discrimination, and serious risks.  The law does not specify what consequences the serious-risk concept is concerned with.  The law indicates that this general, three-part concept in particular includes personal data that might reveal “racial or ethnic origin, present and future health status, genetic information, religious, philosophical and moral beliefs, union membership, political views, or sexual preference.”[8]   This formulation in effect expands on the analogous categories of sensitive data under the EU Directive because, unlike the EU Directive, it is not limited to specified categories of information. The Mexican law also adds genetic data to its specific list of sensitive data.  While it lists “sexual preference” rather than the EU Directive’s “sex life,” it may be as broad as the Directive in that regard because of its non-exclusive formulation.  The Mexican law does not create a special category for the processing of data relating to offenses and criminal convictions, as the EU Directive does.  In the U.S., most of these categories of sensitive data are protected by a variety of sector-specific laws, such as the federal U.S. Genetic Information Nondiscrimination Act of 2008, the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and various constitutional doctrines, some of which are aimed at privacy and some of which are aimed at prohibited outcomes such as discrimination.

Whose Data Is Protected?

The law protects the personal data of the “data owner,” a term defined in the statute to mean the individual to whom the personal data relates.[9]   In this Alert, we use the term “data subject” instead to avoid any misunderstanding.  In the U.S., “data owner” typically refers to the person collecting the data, and U.S. legal concepts do not automatically treat the “owner” of data as the person to whom it relates.[10]   For example, U.S. state data security breach statutes often require the “data owner” to give notice to the individual who is the subject of the data:  if “data owner” meant that individual, he would be required by law to give notice to himself!

In contrast, the Mexican law uses the term “data owner” to mean the data subject. The term seems to reflect the concept of habeas data as employed in the law of a number of Latin American countries under which the individual may compel the disclosure by the public authorities (and some private interests) of his or her personal data because the individual is considered to have sufficient ownership interest in it to have the right to control its use.  

What Entities and Individuals Are Regulated?

The law applies broadly to private parties that process personal data: “The parties regulated under this Law are private parties, whether individuals or private legal entities, that process personal data, with the exception of: I. Credit reporting companies under the Law Regulating Credit Reporting Companies and other applicable laws, and; II. Persons carrying out the collection and storage of personal data that is exclusively for personal use, and without purposes of disclosure or commercial use.”[11]   Thus, the Mexican law does not apply to public authorities.

The law does not itself specify any additional jurisdiction requirements that would need to be met for the law to apply, but we presume for purposes of this Alert that jurisdictional and other qualifiers exist under other aspects of Mexican law.  Nevertheless, U.S. companies with subsidiaries or distributors in Mexico, or that otherwise process personal information concerning Mexican employees, customers, or business contacts, may wish to assume they are covered.  Similarly, U.S. companies that contract with Mexican companies, such as for call centers and similar outsourced services, may wish to assume that they will be asked to reflect obligations under the new Mexican law in their contracts.

What Is Processing Data and Who Are the Data Processor and Data Controller?

The law states that regulated parties are those who “process” personal data.  “Processing” is broadly defined to mean retrieval, use, disclosure or storage by any means, and use covers “any action of access, management, exploitation, transfer or disposal of personal data.”[12]   The “data processor” is someone (individual or entity) that, alone or jointly with others, processes data on behalf of a data controller.  A “data controller” is someone who decides on the processing,[13] i.e., the person in charge of directing others.  These terms are used in the Mexican law with the same type of meaning that they have in the EU Directive.

Provisions That May Require a Change in Practices

Mexico’s new law may require a change in some practices that are permissible in the U.S., absent adoption and enforceability of significant changes of the type recently proposed by the Federal Trade Commission or Department of Commerce.[14] The following examples illustrate some of the types of changes that may be required.

  • Consent.  Subject to listed exceptions,[15] all processing of personal data is subject to the consent of the data subject.  Consent levels vary.  For example, in certain circumstances consent can be assumed, such as after a privacy policy has been made available.  In other cases, consent must be “express,” meaning in some circumstances (such as for sensitive data) that the consent be “written” and signed (although if structured appropriately, such consent can be provided electronically).[16]

     
  • Relevant Data.  Mexico’s law requires the data controller to “ensure” that personal data it maintains be “relevant, correct, and up-to-date for the purposes for which it has been collected.”[17]   The law also requires a phase-out of collected data. 

First, when the data is no longer necessary “for the fulfillment of the objectives set forth in the privacy” policy and applicable law, it must be “cancelled.”   Note that the privacy policy has a large impact here, and using it to explain the purposes of collection is important under this law.  The “cancel” concept appears to be explained in the part of the law allowing the data subject a continuing right to “cancel” his data.[18]  As explained there, cancellation invokes a “blocking” period.  “Blocking” is essentially defined as labeling data in the database once it has served its purpose so that it will not be used during the blocking period even though it is retained, such as until a statute of limitations period has run.[19]  After that, the data must be “erased” or “deleted” in the database and the data subject must be notified of the cancellation.[20]  This right of notification could create a considerable administrative burden depending on how it is implemented in regulations (e.g., must the notification be given directly to the data subject? May it be given generally, such as by notice on a website or in newspapers? May notifications be cumulated and given at regular intervals, such as quarterly?)  Similarly, how can the notification obligation be tailored to variations in Mexico’s communications infrastructure?

Notwithstanding these kinds of questions, the foregoing is a more sophisticated approach than some data protection laws take.  Some privacy regimes speak in terms of eliminating personal data as soon as it is no longer necessary for its disclosed or presumed collection purposes, but that is not always practical.  Using a U.S. illustration, consumer credit card transaction data might not be “necessary” once the item purchased is delivered, if the purpose of the transaction is viewed as a sale and delivery.  Legal realities are more complex, however, as such data needs to be available to respond to investigations that will take place if the cardholder claims that a “billing error” has occurred -- if the card issuer resolves its private investigation in the cardholder’s favor, the merchant will retain its legal right to pursue the cardholder in court until the statute of limitations runs.  Similarly, the cardholder may, for the relevant statute of limitations, sue the merchant for product defects or other aspects of the contract not within the “billing error” investigation.  The Mexican law recognizes that it may be necessary or advisable to hold data until risks have expired.

The law also permits a “disassociation” procedure, which means a “procedure through which personal data cannot be associated with the data subject nor allow, by way of its structure, content or degree of disaggregation, identification thereof.”  This appears to correspond to concepts of anonymization or de-identification under U.S. and EU law.  However, no guidance as to permissible procedures is provided, particularly on the key point of whether “disassociation” is permissible following “cancellation,” or, putting it another way, whether “disassociation” is a permissible form of “erasure.”  The only context in which disassociation is mentioned is where the law addresses exemptions from certain data subject consent requirements.[21]   Nor is any guidance provided as to the relation of “disassociation” under the new law to procedures of coding and anonymization under health care law, as is the case in the U.S. under HIPAA.

Second, data “relating to nonperformance of contractual obligations” must be removed after 6 years from “the calendar day on which said nonperformance arose.”[22] This wording has the potential to cut off claims by parties (data controller or data subject) entitled to sue under statutes of limitation longer than 6 years or under “discovery” rules sometimes allowing later suit if the claimant could not reasonably have discovered the nonperformance before the statute of limitations ran.  Of course, especially with regard to contracts calling for complex performance over time, this provision opens the door to questions as to the exact calendar day on which nonperformance arose.  It is also not clear how the different sections of the law work together in this regard, but at least under one section, restrictions on the cancellation power exist which might be helpful in resolving this kind of problem.[23] 

Third, portions of the Mexican law seem to strike new ground in that they may place an even greater emphasis on a right of cancellation and deletion than do other privacy regimes, including the regimes established on the model of the current EU Directive.   In this regard, the Mexican law appears to be anticipating the communication issued by the European Commission on November 4, 2010, concerning a comprehensive revision to the data protection regime established under the EU Directive (“Communication”).  This Communication placed greater emphasis on deletion of personal data and reflected arguments in favor of a “right to be forgotten” and the concept that anonymity fosters personal autonomy.[24]   Such a “right,” of course, raises countervailing policy questions that are the subject of current debate.[25]

  • Service Providers.  Mexico’s law requires that companies “ensure compliance” with its requirements, even when “third parties” are used.[26]   That term is defined to mean a Mexican or foreign individual or legal entity (other than the data subject or data controller), so it includes a very broad scope of parties such as all subcontractors and service providers.[27]   Accordingly, contracts with vendors need to be updated to address this and additional obligations under the new law.

     
  • Privacy Policy.  The law requires data controllers to provide a privacy “notice” to data subjects explaining what information is collected and why.[28]    The notice must at least contain listed items, some of which are not commonly disclosed in the United States under generic privacy regimes (though some may be disclosed under sector-specific regimes or other statutes relating to the Internet or by members of U.S. Safe Harbor relating to the EU Directive).  The notice must also be made available to data subjects in compliance with specific timing and formatting requirements.[29]

     
  • Security.  According to the IAPP translation, Mexico’s law requires “all responsible parties[30] that process personal data” to establish and maintain physical, technical, and administrative security measures designed to protect personal data from damage, loss, alteration, destruction or unauthorized use, access or processing.[31]   In the U.S., many states have versions of such an obligation, with Massachusetts having the most extensive version.  Federal sector-specific laws in the U.S. can also impose similar obligations (e.g., HIPAA and GLBA), and the FTC’s enforcement orders addressing inadequate data security or access controls pursuant to section five of the FTC Act are to a similar effect.  The Mexican law prohibits data controllers from implementing security measures that are any less protective than controllers use to protect their own information.  Also, in determining the security means to implement, data controllers must take into account “the potential consequences to the data subjects, the sensitivity of the data and technological development.”   The concept may require data controllers to re-examine and update their security measures as they acquire additional types of personal data, as technology advances and as various risk levels change.  Hence, the new Mexican law explicitly poses the familiar challenge or impossibility of matching affordability and foreseeability within a dynamic and non-uniform information security environment. 

     
  • Access.  Mexico’s law permits individuals to access, rectify and cancel personal data that entities maintain about them.[32]   Importantly, personal data must be preserved in a manner that permits exercise of these rights “without delay,” and the law requires the entity to respond to the request within twenty days of receipt.[33]   As previously discussed, “cancellation” is a nuanced concept in the law and other details apply – the point here is that these robust rights exist and have detailed requirements and specified time limits for compliance.

     
  • Data Transfers. Mexico’s law requires data controllers that transfer personal data to “domestic or foreign third parties other than the data processor” to: (a) provide a copy of the relevant privacy notice to the transferees; and (b) include in that notice a clause regarding data transfers and “whether or not” the data subject “agrees” to those transfers.  Assuming such a notice and agreement, data processing by the transferee must be done as agreed in that notice.[34] There is a list of domestic and international transfers that can be done without consent, such as transfers to certain corporate affiliates, but the ambiguity and limitations in the list encourage obtaining consent in the privacy notice.

     
  • Penalties.  The new law contains penalties for violations that range in severity from a warning, to a maximum penalty of 320,000 days of the Mexico City minimum wage,[35] or approximately USD $1,595,000.[36]   The penalty is doubled when sensitive data is involved, which can mean a maximum fine of approximately USD $3,190,000. 
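
To put those figures in context, the arithmetic follows directly from the numbers cited in the notes below: 320,000 days multiplied by the Mexico City minimum wage of 59.82 pesos per day equals 19,142,400 pesos, which at an assumed exchange rate of 12.0 pesos per U.S. dollar comes to approximately USD $1,595,000.  Doubling that amount for violations involving sensitive data yields the approximately USD $3,190,000 maximum noted above.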

Recommendations

Now is the time for companies subject to Mexico’s new law to start bringing their policies and practices into compliance.  Critically, the law places a fundamental importance on the content of privacy policies, so companies should review their privacy policies in light of the new requirements.  Although the Mexican law burdens companies with the expense of compliance with yet another country’s unique privacy regime, the Mexican law permits some flexibility to achieve compliance by means of well-designed contracts and disclosures.  As this Alert indicates, many key aspects of the new law remain uncertain and companies with direct or indirect operations in Mexico will wish to receive advice as they commit their compliance resources.

The agency with primary authority for implementation of the new law and for supervising compliance is the Federal Institute for Access to Information and Data Protection (Instituto Federal de Acceso a la Información y Protección de Datos).[37]   Currently, the Federal Institute and the Ministry of Economy (Secretaría de Economía (SECON)) are working on draft implementing regulations.  These will be issued for public comment with the aim of adopting them in final form in July.  It will be important to follow the progress of that project.

Though the law was enacted on July 6, 2010, some of the more stringent requirements will not take effect until one year to eighteen months after the date of enactment,[38] so there is still time to review policies and bring practices into compliance.  If you would like assistance in that endeavor or other compliance efforts, K&L Gates has data protection attorneys who can help.  Please feel free to contact any of the attorneys listed below.

Notes:

[1] The official Spanish language version of the law is available here; an unofficial English language translation is available courtesy of the International Association of Privacy Professionals here.  The analysis herein relies on the unofficial English translation that the IAPP has provided, which may contain inaccurate or misleading translations.

[2] Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data, available here.

[3] Available here.

[4] See supra note 1 at ch. I, art. 1 (addressing the “right” to “information self-determination of individuals”).

[5] See id. at ch. I, art. 3, § V. 

[6] See note 14 infra.

[7] See supra note 1 at ch. I, art. 3, § VI.

[8] Id.

[9] Id. at § XVII.

[10] See, e.g., State v. Farmer, 80 Wash. App. 795, 911 P.2d 1030 (Wash. App. 1996) (no legitimate expectation of privacy in information a person reveals to third party such as evidence of a transaction with a business; holding that a warrantless seizure of sales receipts from a merchant to compare them to receipts defendant submitted to insurance company under claim for property loss did not violate the state constitution).

[11] Supra note 1 at art. 2, § II.

[12] Id. at § XVIII.

[13] Id. at §§ IX and XIV.

[14] See “FTC Proposes Broad New Privacy Framework, and Asks ‘How It Might Apply in the Real World’” available here; see also here.

[15] See supra note 1, at ch. II, art. 10.

[16] Id. at art. 9.

[17] Id. at art. 11.

[18] Id. at ch. III, art. 25.

[19] Id. at ch. I, art. 3, § III.

[20] Id. at ch. III, art. 25.

[21] Id. at ch. I, art. 10, § III.

[22] Id. at ch. I, art. 11.

[23] See id. at ch. III, art. 26 (significant list of situations in which cancellation is not required).

[24] See generally here at 51 (discussing common themes between EU and FTC approaches).

[25] See, e.g., Paul Sonne, Max Colchester and David Roman, “Plastic Surgeon and Net’s Memory Figure in Google Face-Off in Spain,” Wall Street Journal (Mar. 7, 2011), available here.

[26] See supra note 1 at ch. II, art. 14.

[27] Id. at ch. I, art. 3, § XVI.

[28] Id. at ch. II, arts. 15 & 16.

[29] Id. at ch. II, art. 17.

[30] Some have argued that a more correct translation would be “data controllers.”  We take no position on the issue but note that such an interpretation could be considered to create two categories of data controllers:  those who process and those who do not.

[31] See supra note 1 at ch. II, art. 19.

[32] Id. at ch. III, arts. 23-25.

[33] Id. at ch. III, arts. 22 & 32.

[34] Id. at ch. V, art. 36.

[35] See id. at ch. X, art. 64 (noting fines in multiples of the Mexico City minimum wage); see also Laurence Iliff, “Mexico Raises 2011 Daily Minimum Wage by 4.1% to About $4.60,” Wall Street Journal, Dec. 21, 2010, available here  (noting that minimum wage in Mexico City is 59.82 pesos per day).

[36] Assuming a 12.0 Mexican Peso per United States Dollar exchange rate. 

[37] Available here (Announcement IFAI/019/11 dated Feb. 16, 2011).

[38] See supra note 1, at Transitory Provisions, Three and Four.

Law Seminars International presents Cloud Computing: Law, Risks and Opportunities

On December 13-14, 2010, Law Seminars International presented a seminar exploring different cloud computing service models and the challenges they pose. The program examined what cloud computing is, how it works and the benefits it offers.

Leading practitioners, including Dan Royalty (K&L Gates Seattle), described the contracting and compliance challenges their clients face on a daily basis and shared their strategies for meeting them.  Among other things, the program provided pointers on identifying the legal and compliance issues around cloud computing and addressing them in cloud computing transactions.

Participants learned:

  • Due diligence for selecting a service provider
  • Risks in data flows among different national jurisdictions
  • Export controls on technology moving to different jurisdictions
  • Records retention approaches and business practices for the cloud
  • Dealing with law enforcement agencies
  • Privacy and confidentiality: Applicable laws, regulations and standards
  • Effective data security strategies
  • Contracting and negotiations
  • How control vs. cost will play out over time
  • Ethical issues