First Amendment concerns and consumer privacy law have people talking about a case recently settled in Massachusetts. It revolved around the practice of serving advertisements with geo-fencing technology. At issue: was it legal for anti-abortion groups to use this new marketing technique to communicate directly with young women on their way to get abortions?
According to the settlement in Massachusetts, the answer, at least for the time being, is no.
The ads in question used geo-fencing technology and marketing data to target a very specific demographic. “You Have Choices,” and “You’re Not Alone” were among the messages used in the ad campaign. The problem, according to Massachusetts Attorney General Maura Healey, was that the ads were served using information that was legally protected: personal healthcare data.
According to John F. Flynn, founder (and sole employee) of Boston-based Copley Advertising, his firm served 2.4 million ad impressions, generating about 10,000 conversions, or “clicks.” The target market? Young women aged 18-24 who had searched online for information about abortion and–here’s the geo-fencing part–who were also near one of 140 reproductive health centers and methadone clinics included in the ad campaign. Bethany Christian Services of Michigan and RealOptions of California were the clients, and the clinics were located in New York City, St. Louis, Richmond, Virginia, Pittsburgh and Columbus, Ohio. Massachusetts put the kibosh on the practice in its state.
As with anything privacy-related these days, the position taken by the Massachusetts AG was complex, but the reason for that is simple: Privacy has taken a backseat to commercial interests in the United States.
That said, this particular case features an obvious First Amendment issue that muddies the waters of what kinds of data should and shouldn’t be off limits, thus making the AG’s case less than ideal for furthering the cause of consumer privacy rights.
Consider: a protester’s right to camp out by those very same facilities and shout anti-abortion slogans at patients is guaranteed by the Constitution–so why shouldn’t an advocacy group have the right to serve those same would-be patients an ad decrying abortion or offering alternatives to it? Because the question revolves around a hotly politicized topic, the main point gets lost: private information–sensitive search data–was used to serve those ads, a use that would be illegal in many countries with stronger consumer protections.
Geo-Fencing for Beginners
Setting aside First Amendment issues for the moment: the more I tried to talk to people about the story, and about geo-fencing in particular, the clearer it became that this common practice is just complicated enough, and just new enough to consumers, that it warrants a closer look.
The earliest known use of geolocation dates back to the Greeks, who triangulated the stars to figure out where they were. The same principle applies in today’s smart devices, which is evidenced every time a message or phone call finds its way to you.
So, how do those communications get where they’re going? Even before GPS technology was common, mobile phones relied on the same principle the Greeks used to determine their location, only instead of stars, cell towers triangulate the position of a mobile user. Whenever it’s powered on, your cell phone or smart device is exchanging signals with nearby towers to figure out where it is, and once that’s determined, the information can be shared with a variety of opt-in or paid services ranging from maps and weather to entertainment guides and retail apps.
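To make that concrete, here is a minimal 2-D sketch of the geometry, assuming three towers at known positions and a rough distance estimate to each; the coordinates and distances below are made up, and real carrier networks estimate position from signal timing and strength with far more sophistication than this.

```python
# Toy 2-D trilateration: given three towers at known (x, y) positions on a
# local grid (meters) and an estimated distance to each, solve for the
# handset's position. All numbers here are invented for illustration.

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Return the (x, y) point at distance d1, d2, d3 from p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise leaves a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the towers sit on one line
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical towers and measured distances to a phone at (300, 400).
print(trilaterate((0, 0), 500.0, (1000, 0), 806.2, (0, 1000), 670.8))
# -> approximately (300.0, 400.0)
```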
In ad-speak, geo-targeting is the catch-all category, and it’s been around since long before Al Gore invented the Internet. An IP address provides approximate location information about a user that can be repurposed to serve advertising. Nowhere near as sophisticated as today’s methods, geo-targeting allowed marketing companies to hit a general area (usually a town, county or city) and drill down to the right people in that area using data mined from cookies. So, for example, farmers within driving distance of one of the many Springfields across our great land would get ads from the local Agway telling them when there was a sale on farmer stuff.
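As a rough illustration of that older style of targeting, the sketch below guesses a region from an IP prefix and matches it against interest tags mined from cookies; the prefix table, tags and campaigns are all invented, and real systems use commercial GeoIP databases and ad servers rather than a hard-coded dictionary.

```python
# Toy geo-targeting: coarse region from the IP address, interests from
# cookies, and the combination picks the ad. Everything here is made up.

IP_PREFIX_TO_REGION = {
    "203.0.113.": "Springfield, MO",   # documentation-range IPs, not real users
    "198.51.100.": "Springfield, IL",
}

CAMPAIGNS = {
    ("Springfield, MO", "farm-supplies"): "Agway Springfield MO: seed sale this weekend",
    ("Springfield, IL", "farm-supplies"): "Agway Springfield IL: 20% off tractor parts",
}

def pick_ad(ip_address, cookie_tags):
    region = next((r for prefix, r in IP_PREFIX_TO_REGION.items()
                   if ip_address.startswith(prefix)), None)
    for tag in cookie_tags:
        ad = CAMPAIGNS.get((region, tag))
        if ad:
            return ad
    return "generic house ad"

print(pick_ad("203.0.113.42", ["weather", "farm-supplies"]))
```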
Geo-fencing is better geo-targeting. It uses GPS technology to make a very good guess at where you are. If you open your smartphone’s map and zoom in on your location, you can get a feel for how accurate it is–usually within a few hundred feet.
The accuracy afforded by GPS makes the technology ideal for sending retail advertising to people on the go. One method you may be familiar with is the exploding offer: a retailer nearby sends a message announcing a fantastic deal, but you have to act on it within X minutes–something the retailer, thanks to geo-fencing, knows you can actually do, because the ad was only served to people inside the fence.
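Under the hood, both the map check and the exploding offer come down to one question: is this device inside a circle drawn around a point of interest, and if so, what message goes out and for how long? A minimal sketch follows; the storefront coordinates, 150-meter fence and 30-minute window are made-up values, not anyone’s real campaign settings.

```python
# Toy geo-fence check plus a time-limited ("exploding") offer.
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def maybe_send_offer(device_lat, device_lon, store_lat, store_lon,
                     fence_radius_m=150, window_minutes=30):
    """Return an offer only if the device is currently inside the fence."""
    if haversine_m(device_lat, device_lon, store_lat, store_lon) > fence_radius_m:
        return None  # outside the fence: no ad
    expires = datetime.now() + timedelta(minutes=window_minutes)
    return f"20% off, but only if you redeem by {expires:%H:%M}"

# A device about a block away from a hypothetical storefront.
print(maybe_send_offer(42.3601, -71.0589, 42.3605, -71.0580))
```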
How It’s Used
Geo-fencing has plenty of applications that aren’t problematic. Employers can use an app-based approach to manage employees at remote work sites with what amounts to a geo-locate time clock–the app sending a prompt to workers in a designated area to punch in or out while on the job.
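Assuming a containment check like the one sketched above has already labeled each location ping as inside or outside the work-site fence, the time-clock behavior is just a matter of reacting to the transitions; the ping data below is invented.

```python
# Toy geo-fenced time clock: prompt only when a worker enters or leaves
# the designated area, not on every ping.

def clock_prompts(pings):
    """pings: iterable of (timestamp, inside_fence) pairs -> list of prompts."""
    prompts, was_inside = [], False
    for timestamp, inside in pings:
        if inside and not was_inside:
            prompts.append((timestamp, "Arrived at site: punch in?"))
        elif was_inside and not inside:
            prompts.append((timestamp, "Left site: punch out?"))
        was_inside = inside
    return prompts

print(clock_prompts([("08:58", False), ("09:01", True),
                     ("12:02", True), ("17:06", False)]))
# -> [('09:01', 'Arrived at site: punch in?'), ('17:06', 'Left site: punch out?')]
```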
When it comes to the best strategy for serving ads using geo-fence technology, some marketers will tell you to put the fence around your competition. Say a customer goes to Bob’s Burgers for lunch; a good geo-fencing campaign would send a Bill’s Burgers ad letting them know that a much better burger is available just a few blocks away, and if they get there within the hour, the first beer is free.
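The competitor-fence strategy is the same containment test, just keyed to someone else’s address: each rival location gets a small fence tied to your campaign, and whichever fence the device lands in decides the ad. A rough sketch, with invented coordinates, radius and ad copy, using a flat-earth distance approximation that is close enough at city-block scale.

```python
# Toy "fence the competition" campaign: serve a rival's ad to anyone
# standing within a small radius of a competitor's storefront.
from math import cos, hypot, radians

COMPETITOR_FENCES = [
    # (competitor, lat, lon, radius in meters, ad to serve)
    ("Bob's Burgers", 40.7410, -73.9897, 100,
     "Bill's Burgers, three blocks away: first beer free within the hour"),
]

def meters_between(lat1, lon1, lat2, lon2):
    # Equirectangular approximation: plenty accurate over a few hundred meters.
    dx = (lon2 - lon1) * 111_320 * cos(radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * 110_540
    return hypot(dx, dy)

def conquest_ad(lat, lon):
    for name, f_lat, f_lon, radius_m, ad in COMPETITOR_FENCES:
        if meters_between(lat, lon, f_lat, f_lon) <= radius_m:
            return ad
    return None  # not near any competitor: serve nothing special

print(conquest_ad(40.7412, -73.9893))  # a device standing near Bob's
```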
Even in the realm of healthcare, there are efforts to integrate geo-fencing into research on the best approaches to treating chronic diseases. But unlike the anti-abortion scenario that attracted the attention of the Massachusetts AG, this work is all opt-in and is focused on improving healthcare for the more than 130 million American adults who suffer from chronic disease.
How Should It Be Used?
At issue in the Massachusetts case is consumer privacy: specifically, the legal definition of personal healthcare information.
The data used to serve the ads was gleaned from user behavior on apps and via IP addresses, information that is regularly harvested by data brokers and repackaged for marketing firms, which use it to serve ads with laser-like precision based on interest, location, age and many other markers. So, in theory at least, serving anti-abortion ads to women of childbearing age who have been reading about abortion online and who come near an abortion clinic is a spot-on example of how marketing should work when it’s well tuned.
The United States does not have common-sense rules about what kinds of information advertisers can tap when implementing geo-fencing, and since search history is not protected, it seems reasonable to expect a red line to be drawn around certain kinds of searches–healthcare for sure among them.
There is a point where good marketing turns into an episode of “Black Mirror” or the dystopian world of “Minority Report.”
The Massachusetts case goes to the heart of the privacy dilemma we all face. What should data brokers and marketing companies be allowed to do in the hunt for the most targeted ads? As we move deeper into the murk of what is and what should be, it’s time to think about that.