Part IV of IV: On the Ethics of Web Experience Personalization

March 12, 2015 | John Berndt

This blog post is excerpted from a longer chapter in my new full-length book "Personalization Mechanics: Targeted Content for Web Teams of All Sizes," which is now available.

As someone who has worked for 20+ years in the salt mines of Web strategy and user experience, I couldn’t be more excited about personalization, which gives new answers to the most well-worn Web problems.

But I’ve seen the other side as well. I once made the mistake of signing up on a website for a technology demo, and for the next two years I was hounded on every major website I visited by misguided advertising from that same vendor (via Google remarketing, I assume). The experience threw in my face that personalization has some dark, not to mention intensely stupid, sides. I knew that before, but now it was a daily annoyance. Suffice it to say that if marketers and technologists don’t think long and hard about the user experience, the annoyances will multiply. At minimum, that means user frustration; at worst, it invites more concerted movements against personalization.

But taking such a tactical view doesn’t do the topic justice. Digital privacy is not just a topic that deserves its own book, it’s a virtual earthquake of cultural change currently ripping through civilization, and one that no amount of discourse is likely to successfully unpack and ethically legislate any time soon—which is not to say we shouldn’t give it our full attention.

It’s hard to know where you are in history at any moment, because of the relativity of perspective. Without being a cynic, I have to say there are forces at play here—including the eerie human imperative to adopt new technical abilities no matter what—that go way beyond what is currently covered by public discourse.

There has already been much public debate, and some major institutions have taken public stands. The 2000 U.S. Office of Management and Budget restrictions on the use of cookies virtually killed sophisticated analytics and personalization on government sites for a decade, but at least in one small way kept the specter of Big Brother at bay; in 2010 the policy was revised to be less restrictive. The European Union’s decision in May 2011 to require public sites to explicitly declare their user-tracking efforts has likely driven marketers and brand managers crazy, though it drew a major line in the sand for user privacy. Nonetheless, these moves offer only modest pushback against the tidal wave of privacy change in which we are all, to various extents, complicit. Behind the scenes, huge big-data wheels are turning, eroding privacy at a scale no single Web team could achieve, even with the best platforms available.

Given the complexity and ambiguity of the situation, what lucid guidance can I possibly give to personalization teams about the ethics of their practice? Providing credible guidance means trying to articulate a threshold in a shifting cultural landscape. Further, discussing these issues in the context of your job as a digital marketer is likely to be difficult, particularly with an uninterested boss. On the other hand, not even attempting to address them seems equally misguided.

And yet, this is the guidance that I can provide (very provisionally):

1. User-Contributed Profiling

This is more or less explicit personalization, where the user sets his or her own preferences. In my opinion, it doesn’t seem to pose much of an ethical issue.
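To make the distinction concrete, here is a minimal sketch of explicit personalization: the user declares topics, and the site simply ranks content against that declared profile. The names (UserProfile, rankByPreference) and data are invented for illustration, not drawn from any particular platform.

```typescript
// Illustrative only: explicit, user-contributed profiling.
// "UserProfile" and "rankByPreference" are invented names, not a real API.

interface UserProfile {
  userId: string;
  preferredTopics: string[]; // declared by the user, never inferred
}

interface Article {
  title: string;
  topics: string[];
}

// Rank articles by how many topics they share with the user's declared preferences.
function rankByPreference(articles: Article[], profile: UserProfile): Article[] {
  const score = (a: Article): number =>
    a.topics.filter((t) => profile.preferredTopics.includes(t)).length;
  return [...articles].sort((a, b) => score(b) - score(a));
}

// Example: the user explicitly told us they care about "privacy" and "analytics".
const profile: UserProfile = { userId: "u42", preferredTopics: ["privacy", "analytics"] };
const articles: Article[] = [
  { title: "Spring recipes", topics: ["food"] },
  { title: "Cookie law roundup", topics: ["privacy", "law"] },
];
console.log(rankByPreference(articles, profile)[0].title); // "Cookie law roundup"
```

Because every signal here was volunteered by the user, there is little to disclose beyond the feature itself.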

2. User Profiling Within Sites

I think it is acceptable for a site to internally profile visitors, act on the information gathered, and do so without alerting them (beyond disclosure in a long-form privacy policy).

The logic I use here is this: it is your site, the visitor is a guest, and to some extent they have to play by your rules—in the case of personalization, your “rules” may influence the visitor’s behavior or provide him or her with a better site experience. Ultimately, this is part of the deal—a visitor’s presence initiates a sophisticated response. There are things a site owner can do that may appear creepy and alienate the user, but assuming careful approaches can avoid that, I tend to feel such practices fall into fair territory.
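As an illustration of what in-site profiling typically amounts to, here is a hedged sketch, with invented function names and an in-memory map standing in for a session or first-party cookie: page views increment interest counters, and the site adapts to the strongest one.

```typescript
// Illustrative only: implicit, within-site profiling. An in-memory map stands
// in for a session or first-party cookie; all names are invented.

const interests = new Map<string, number>();

// Record that the visitor viewed a page tagged with these topics.
function recordPageView(pageTopics: string[]): void {
  for (const topic of pageTopics) {
    interests.set(topic, (interests.get(topic) ?? 0) + 1);
  }
}

// The visitor's strongest inferred interest, which the site can act on
// (for example, by swapping a promotional banner).
function topInterest(): string | undefined {
  let best: string | undefined;
  let bestCount = 0;
  for (const [topic, count] of interests) {
    if (count > bestCount) {
      best = topic;
      bestCount = count;
    }
  }
  return best;
}

recordPageView(["pricing"]);
recordPageView(["pricing", "enterprise"]);
console.log(topInterest()); // "pricing" (inferred, never declared by the visitor)
```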

3. Sharing User Data Across Sites

I think sharing user data between sites (either across organizations or within large multi-brand organizations) is one activity that can easily lead to ethical complications and alienating creepiness, though this can be mitigated by disclosing the data sharing to the user. The rise of optimization platforms and Google retargeting has brought data sharing to the shop floor, and we will see more of it, ethical complications notwithstanding.

Given that this activity is almost sure to happen in some way in larger organizations, maybe the best guidance today is for marketers to determine the level of data sharing that would make them comfortable as users. Of course, this may be problematic, since marketers and personalization experts may represent a demographic that is unusually comfortable with decreased privacy. If so, perhaps they should assume that a typical user's comfort level is half their own.
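If a team does share data, the disclosure-based mitigation described above can be made mechanical. The sketch below, with invented names and purposes, gates any cross-site transfer behind a recorded consent purpose and fails closed when consent is absent; it is an assumption-laden illustration, not a compliance recipe.

```typescript
// Hedged sketch of consent-gated sharing; the purposes, names, and consent
// record shape are invented placeholders.

type Purpose = "cross-site-advertising" | "multi-brand-analytics";

interface ConsentRecord {
  userId: string;
  grantedPurposes: Set<Purpose>; // purposes the user was told about and accepted
}

function shareProfile(consent: ConsentRecord, purpose: Purpose, payload: object): void {
  if (!consent.grantedPurposes.has(purpose)) {
    // Fail closed: no recorded consent for this purpose, no sharing.
    console.log(`Sharing for "${purpose}" withheld: no user consent on record.`);
    return;
  }
  // A real system would call a partner API here; we just log.
  console.log(`Sharing with partner for "${purpose}":`, payload);
}

const consent: ConsentRecord = {
  userId: "u42",
  grantedPurposes: new Set<Purpose>(["multi-brand-analytics"]),
};
shareProfile(consent, "cross-site-advertising", { segment: "tech-buyer" }); // withheld
```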

4. Sharing User Data Across Channels

It may seem like a nuance, but users may have thoughts about how their data flows across digital touchpoints. CAN-SPAM¹ legislation already requires “opt-out” features in email, and users don’t get the same level of scrutiny from channel to channel. Users want to be “channel agnostic” in the sense that they want all of their digital interactions with an organization to be consistent and helpfully aligned, but at the same time, they don’t want those profiling interactions thrown in their faces.

For example, if I visit a landing page and have already converted on a related site, should I receive an email inducing me to buy, personalized to what I looked at before? Again, the issue here is both an ethical one and one of perception, but I call out this specific case because I think cross-channel personalization is a more sensitive issue than is usually acknowledged. Users often feel they should have some say in approving aggressive reach across digital channels, or at least in how email is used, since email is often more intimate than visiting a website.
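One way to act on this sensitivity is to encode cross-channel restraint directly into send logic. The following sketch (all names invented) checks the channel-level opt-out first, per CAN-SPAM, and then suppresses the personalized follow-up when the user has already converted.

```typescript
// Invented names throughout: a send-time check that honors the email
// opt-out first and then suppresses redundant personalized follow-ups.

interface ChannelPreferences {
  emailOptedOut: boolean; // CAN-SPAM requires honoring this
}

interface JourneyState {
  hasConverted: boolean;
  lastViewedProduct?: string;
}

function shouldSendFollowUpEmail(prefs: ChannelPreferences, journey: JourneyState): boolean {
  if (prefs.emailOptedOut) return false; // a legal requirement, not a nicety
  if (journey.hasConverted) return false; // don't re-sell what they already bought
  return journey.lastViewedProduct !== undefined; // only send if there is something to personalize
}

console.log(
  shouldSendFollowUpEmail(
    { emailOptedOut: false },
    { hasConverted: true, lastViewedProduct: "demo-suite" }
  )
); // false: the user already converted, so the nudge would feel invasive
```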

These points are simplistic for a reason: it is hard to predict how the trade-offs between privacy and utility will play out. The scenarios we worry about today may seem foolish in a few years, replaced by entirely different, hard-to-imagine considerations. Software that lets users defeat profiling by simulating random behavior, intentionally blurring the trail of their activity, may well evolve. There’s no way to really know the outcomes, so we try to stay as flexible as possible.
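Purely as a thought experiment on that obfuscation idea, here is what profile-blurring software might look like in miniature: decoy page visits interleaved with real ones, so that any interest counters a site keeps accumulate noise along with signal. Everything here is speculative and invented.

```typescript
// Speculative sketch: obfuscation software that blurs a profile by
// interleaving decoy page visits with real ones. All names are invented.

const decoyPages = ["/gardening", "/motorsports", "/knitting", "/astronomy"];

function pickDecoy(): string {
  return decoyPages[Math.floor(Math.random() * decoyPages.length)];
}

// After every real visit, record a random decoy as well, so any interest
// counters a site keeps accumulate noise along with signal.
function visitWithNoise(realPage: string, record: (page: string) => void): void {
  record(realPage);
  record(pickDecoy());
}

visitWithNoise("/pricing", (page) => console.log("visited:", page));
```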

One crucial point I’ll leave you with is this:

The dynamic today is heavily weighted against individual privacy and toward letting larger organizations gather data, since it is increasingly hard for users to opt out without losing access to valuable capabilities and more relevant media, elements that have become the digital air we breathe.

But there is always the chance that what seems like an unavoidable new reality (a rapid retreat from privacy), an unstoppable juggernaut, will turn out to be the Emperor’s New Clothes. Something head-turningly contrary suddenly becomes possible: perhaps a new app that selectively scrambles or swaps profiles, or new ways of managing how much personalization you see, just as we learned to manage spam. As human beings, we have ways of collectively changing what needs to be changed to suit our interests, even if it takes us a while to get around to it. When the change comes, it is often fast and furious, taking everyone by surprise.

Excerpted and adapted from Personalization Mechanics: Targeted Content for Web Teams of All Sizes © 2015 John Berndt.


¹ The acronym CAN-SPAM comes from the full name of the act: the Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003, a U.S. federal law.

About the Author

John Berndt

I'm CEO of TBG and I've been thinking about the Web in creative ways since the year it began.
