Privacy

Follow-up to: Blackmail

[Note on Compass Rose response: This is not a response to the recent Compass Rose response; it was written before that, but with my post on Hacker News I need to get this out now. It has been edited in light of what was said. His first section is a new counter-argument against a particular point that I made – it is interesting, and I have a response, but it is beyond scope here. It does not fall into either main category, because it is addressing a particular argument of mine rather than being a general argument for blackmail. The second counter-argument is a form of #1 below, combined with #2, #3 and #4 (they do tend to go together), so it is addressed somewhat below, especially the difference between ‘information tends to be good’ and ‘information chosen, engineered and shared so as to be maximally harmful tends to be bad.’ My model of the practical results and Ben’s also differ greatly. We intend to hash all this out in detail in conversations, and I hope to have a write-up at some point. Anyway, on to the post at hand.]

There are two main categories of objection to my explicit thesis that blackmail should remain illegal.

Today we will not address what I consider the more challenging category: claims that while blackmail is bad, making it illegal does not improve matters. Mainly because we can’t or won’t enforce the laws, so it is unclear what the point is. Or because the costs of enforcement exceed the benefits.

The category I address here claims that blackmail is good. That we should want more of it.

Key arguments in this category:

  1. Information is good.*
  2. Blackmail reveals bad behavior.
  3. Blackmail provides incentive to uncover bad behavior.
  4. Blackmail provides a disincentive to bad behavior.
  5. Only bad, rich or elite people are vulnerable to blackmail.
  6. We should strongly enforce all norms on everyone, without context dependence not explicitly written into the norm, and fix or discard any norms we don’t want to enforce in this way.

A key assumption is that blackmail mostly targets existing true bad behavior. I do not think this holds: not for ‘true,’ not for ‘bad,’ and not for ‘existing.’ For details, see the previous post.

Such arguments also centrally argue against privacy. Blackmail advocates often claim privacy is unnecessary or even toxic.

It’s one thing to give up on privacy in practice, for yourself, in the age of Facebook. I get that. It’s another to argue that privacy is bad. That it is bad to not reveal all the information you know. Including about yourself.

This radical universal transparency position, perhaps even assumption, has come up quite a lot recently. Those advocating it act as if those opposed carry the burden of proof.

No. Privacy is good.

A reasonable life, a good life, requires privacy.

I

We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished. Where others don’t judge what we do based on the assumption that we are choosing what we do knowing that others will judge us based on what we do. Where we are free from others’ Bayesian updates and those of computers, from what is correlated with what, with how things look. A place to play. A place to experiment. To unwind. To celebrate. To learn. To vent. To be afraid. To mourn. To worry. To be yourself. To be real. 

We need people there with us who won’t judge us. Who won’t use information against us. 

We need to be able to extend such trust without risking our ruin. We need to minimize how much we wonder whether someone’s goal is to get information to use against us, or what price would tempt them to do so.

Friends. We desperately need real friends.

This is not the central feature of friendship, let alone most or all of it. But without it, friendship is impossible.

II

Norms are not laws. 

Life is full of trade-offs and necessary unpleasant actions that violate norms. This is not a fixable bug. Context is important for both enforcement and intelligent or useful action.

Even if we could fully enforce norms in principle, different groups have different norms, and each group’s (and each person’s) norms are self-contradictory. Hard decisions mean violating norms, and such decisions are common in the best of times.

A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms. It is unclear how one would avoid a total loss of freedom, or a total loss of reasonable action, productivity and survival, in such a context. Police states and cults and thought police and similar ideas have been tried and have definitely not improved this outlook.

What we do for fun. What we do to make money. What we do to stay sane. What we do for our friends and our families. What maintains order and civilization. What must be done. 

Necessary actions are often the very things others wouldn’t like, or couldn’t handle… if revealed in full, with context simplified to what gut reactions can handle.

Or worse, with context chosen to have the maximally negative gut reactions. 

There are also known dilemmas where any action taken would be a norm violation of a sacred value. And there are lots of values that claim to be sacred, because every value wants to be sacred, but which we know we must treat as not sacred when making real decisions with real consequences.

Or in many contexts, justifying our actions would require revealing massive amounts of private information that would then cause further harm (and/or which people very much do not have the time to properly absorb and consider). Meanwhile, you’re talking about the bad-sounding thing, which digs your hole deeper.

We all must do these necessary things. These often violate both norms and formal laws. Explaining them often requires sharing other things we dare not share.

I wish everyone a past and future Happy Petrov Day.

Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.

We must constantly claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.

In these, and in many other ways, we live in an unusually hypocritical time. A time when people need to be far more afraid both of failing to be hypocritical and of having their hypocrisy revealed.

We are a nation of men, not of laws.

But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.

III

Life requires privacy so we can avoid revealing the exact extent of our resources.

If others know exactly what resources we have, they can and will take all of them. The tax man who knows what you can pay, what you would pay, already knows what you will pay. For government taxes, and for other types of taxes.

This is not only about payments in money. It is also about time, and emotion, and creativity, and everything else.

Many things in life claim to be sacred. Each claims all known available resources. Each claims we are blameworthy for any resources we hold back. If we hold nothing back, we have nothing.

That which is fully observed cannot be one’s slack. Once all constraints are known, they bind.

Slack requires privacy. Life requires slack.

This includes our decision-making process.

If it is known how we respond to any given action, others will find their best responses. They will respond to incentives. They will exploit exactly as much as we will tolerate without retaliating. They will feel safe.

We seethe and despair. We have no choices. No agency. No slack.

It is a key protection that we might fight back, perhaps massively out of proportion, if others went after us. To any extent.

It is a key protection that you might do something good if others helped you. Rather than others knowing exactly which things will cause you to do good things, and which will not.

It is central that one react when others are gaming the system. 

Sometimes that system is you.

World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.

Having all your actions fully predictable and all your information known isn’t Playing in Hard Mode. That’s Impossible Mode.
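
To make the incentive concrete, here is a minimal sketch in Python of why a publicly known retaliation threshold invites extraction right up to that threshold, while a private one does not. The payoff numbers, the retaliation cost, and the uniform prior over thresholds are illustrative assumptions, not anything argued above.

    # Toy model: a taker chooses how much to extract from you. You retaliate,
    # at a cost of 150 to the taker, only if they take more than your threshold.

    def taker_payoff(amount, threshold, retaliation_cost=150):
        """Taker's payoff: they keep the amount unless it triggers retaliation."""
        return amount - (retaliation_cost if amount > threshold else 0)

    # Case 1: your threshold (say 50) is public knowledge. The taker takes
    # exactly 50, the most you will tolerate, and is perfectly safe doing so.
    print(taker_payoff(50, threshold=50))   # 50
    print(taker_payoff(51, threshold=50))   # -99, so they stop right at 50

    # Case 2: your threshold is private; the taker only knows it is uniform on
    # [0, 100]. Expected payoff of taking a is a - 150 * P(threshold < a)
    # = a - 150 * (a / 100) = -0.5 * a, negative for any a > 0, so the only
    # profitable move is to take nothing. The ambiguity is the protection.
    def expected_taker_payoff(amount, retaliation_cost=150):
        p_retaliate = amount / 100.0   # P(threshold < amount), Uniform[0, 100]
        return amount - retaliation_cost * p_retaliate

    print(expected_taker_payoff(50))        # -25.0

Under these assumed numbers, full predictability hands over everything up to your known limit; uncertainty about where the limit sits makes taking anything at all unprofitable.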

IV

In my rush to get this out, I forgot completely about the biggest privacy need of all.

Yes, if your thought process is known, people will game it and use it strategically against you. But that’s a small-time problem.

“What are you thinking?” is a test. You’d better pass it. Or else.

The real problem is that they will judge you for your thoughts, and for your decision process. The only thing shielding us from thought crime accusations is the privacy of our thoughts. If they could be fully known, I shudder to think. And would never be allowed to again.

I now give specific responses to the six claims above. This mostly summarizes the previous post.

  1. Information, by default, is probably good. But this is a tendency. It is not a law of physics. As discussed last time, information engineered to be locally harmful probably is net harmful. Keep this distinct from incentive effects on bad behavior, which is argument number 4.
  2. Most ‘bad’ behavior will be a justification for scapegoating, involving levels of bad behavior that are common. Since such bad behavior is rarely made common knowledge, and allowing it to become common knowledge is often considered far worse behavior than the original action, making it common knowledge forces oversize reaction and punishment. What people are punishing is that you are the type of person who lets this type of information become common knowledge about you. Thus you are not a good ally. In a world like ours, where all are anticipating future reactions by others anticipating future reactions, this can be devastating.
  3. Blackmail does provide incentive to investigate to find bad behavior. But if found, it also provides incentive to make sure it is never discovered. And what is extracted from the target is often further bad behavior, largely because…
  4. Blackmail also provides an incentive to engineer or provoke bad behavior, and to maximize the damage that would result from revelation of that behavior. The incentives promoting more bad behavior likely are stronger than the ones discouraging it. I argue in the last piece that it is common even now for people to engineer blackmail material against others and often also against themselves, to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways. That much bonding is about creating mutual blackmail material. And so on. This seems quite bad.
  5. If any money one has can be extracted, then one will permanently be broke. This is a lot of my model of poverty traps – there are enough claiming-to-be-sacred things demanding resources that any resources get extracted, so no one tries to acquire resources or hold them for long. Consider what happens if people in such situations are allowed to borrow money. Even if you are (for any reason) sufficiently broke that you cannot pay money, you have much that you could be forced to say or do. Often this involves deep compromises of sacred values, of ethics and morals and truth and loyalty and friendship. It often involves being an ally of those you despise, and reinforcing that which is making your life a living hell, to get the pain to let up a little. Privacy, and the freedom from blackmail, are the only ways out.
  6. A full exploration is beyond scope, but section II above is a sketch.

* – I want to be very clear that yes, information in general is good. But that is a far cry from the radical claim that all and any information is good and sharing more of it is everywhere and always good.


Responses to Privacy

  1. benquo says:

    Your notion of trust seems like it’s conflating two opposite things meant by the word.

    The first relates to coordination towards clarity, a norm of using info to improve the commons. The second is about covering for each other in an environment where information is mainly used to extract things from others.

    Related: http://benjaminrosshoffman.com/humility-argument-honesty/
    http://benjaminrosshoffman.com/against-neglectedness/
    http://benjaminrosshoffman.com/model-building-and-scapegoating/

    • TheZvi says:

      I use the word trust in only one place here, where I am explicitly referring to the concept of being able to share information without worrying that it will be used against us. This does not require that information, or local information, *mainly* be used to extract things from us. It only requires that this is a potential use, and a potential motivating factor.

      Suppose someone can have an ulterior motive, such as future blackmail or the potential of such blackmail, in a conversation or elsewhere. Or it can be something simpler, like ‘wealthy person might fund me or pay for things’ or ‘this person might want to have sex with me.’ If this is going on to a sufficient extent, then that transforms the nature of the interaction. Even if that isn’t going on, one must be on guard lest one open up such paths. If that is what is going on, what you definitely aren’t doing is coordinating for the common good, clarity or otherwise.

      Thus, if I am going to coordinate for clarity, I first need to know that what I share probably won’t be used against me in ways engineered to do harm and extract maximum resources. This is very different from some true things I share happening to have poor consequences for me personally.

      Trust as unpacked in your comment is a huge bundle of goods. It includes trust *to do (or not do, or be, or say, etc etc) any particular thing or type of thing or work towards any particular end in an effective way* where thing is anything you might want. E.g. “firm belief in the reliability, truth, ability, or strength of someone or something.”

      Thus, if we want to speak about only one of these things, we use more words – e.g. ‘I trust him to keep a secret’ or ‘I trust her to know how to navigate to our destination.’ When I say ‘I trust this person’ with no qualifications, I definitely mean a package that includes ‘I trust them to honor their word’ and ‘I trust them to speak the truth’ and ‘I trust them to keep a secret’ and ‘I trust them not to do things that are harmful to me and others they encounter without a damn good reason.’ When I say that *you* can trust them, that means all that stuff with respect to you. When I say someone *is trustworthy* in general I mean it with respect to everyone. In context, this may also include skill in the contextually relevant arts/jobs/actions/etc, or it might not.

      • benquo says:

        I’m not complaining specifically about the usage of that one word. I am trying to say that the entirety of section I seems ambiguous about whether it’s using the language of the oppressor to describe a tactical response to oppression, or whether you actually think that that’s what judgment and friendship are, simpliciter.

      • TheZvi says:

        The more we talk about this topic, the more our models seem to differ. I no longer think there is one primary or central disagreement.

        I certainly did not intend to say that this was the entirety of the concept of friendship or of judgment.

        I certainly *did* intend to describe some of the things that forces outside of us, including the powers that be, do with information about us: things that are often harmful to us, and that we thus often want to remain private, even when the information does not contain illegal actions or strong norm violations.

        There are several types of judgment. This refers in particular to the concept where all observations are used primarily to decide what to think about and how to react to the person being observed, rather than judging things like what is true or what actions would best achieve the best outcomes. One can judge someone’s actions to have been bad without then going after the person, and this is precious.

        There are a lot of components to friendship, as well. I think this covers some, but far from all or most, of those components. It’s a big concept and I don’t think you’re asking for all the components here. I do think it’s reasonable to say that without the things I’m talking about here, one cannot have friends, and that friends are vital not only for this purpose but for many others.

  2. kjg says:

    For a while I was sort of wrestling with the pro-blackmail argument, thinking I must be missing something, but I’ve concluded that it’s just a bad argument.

    The idea seems to be that by revealing more true information about behavior, people will have more information and therefore make better decisions.

    But doesn’t revealing private info often lead to harm? Yes, but not if you imagine we lived in a completely different world. In that imagined world:

    1) Punishment is perfectly calibrated to the crime, and no one is ever punished for neutral or praiseworthy behavior;
    2) Everyone is part of a hive mind, so one person’s evaluation of my behavior is the same as everyone else’s;
    3) And people have no desire for privacy or compartmentalization, such that it would be perfectly okay with everybody if their family/coworkers/teachers/students knew all about their sex lives.

    So, if you assume the author can redesign society and human nature to his specifications, blackmail would be harmless! But I fail to see how this argument relates to our world.

    Which brings us to something I’m amazed no one but me has brought up: revenge porn. It’s sorta blackmail, but with a revenge motive instead of (or in addition to) an extortionate one. Women who are victims of this sort of crime get sexually harassed, threatened with rape to the point of being forced to move, fired, and shunned by friends. This is not a theoretical possibility; it happens in our world every day.

    It’s a little upsetting to see this discussion ignore that obvious point. And it blows my mind to see that some people apparently feel so safe as to contemplate the prospect of a complete loss of privacy with equanimity.

    • TheZvi says:

      I mean, I basically agree? The actual arguments being used (and claimed as slam dunks) seem based on quite bad world models. But there are interesting challenges and points of disagreement, still.

    • benquo says:

      One core question here is whether the systemic effects of legalizing blackmail – which can be very different in character (not just degree) than the marginal effect of a single act of blackmail – are on the whole likely to be positive or negative.

    • benquo says:

      Zvi and I agree, I think, that a world with legalized blackmail would be, if not totally different, at least one with very different incentive gradients and power structures than the current one in many ways relevant to the impact of blackmail.

  3. ADifferentAnonymous says:

    I found this post late. I think it’s tremendously important and also incomplete.

    I’m struggling to put my finger on what’s missing, but roughly my question is, what went wrong that *everyone* needs privacy? If we all have something to hide, why can’t we coordinate on an amnesty? If we all need slack, why can’t we agree to let each other have some? These aren’t rhetorical questions–it’s the roadblock itself that I’m interested in.

    • Eric Fletcher says:

      One answer to that question is that there isn’t a unified “something to hide.”
      Consider:
      Alice disapproves of X, does Y with Carol, and does Z with Bob
      Bob disapproves of Y, does Z with Alice, and does X with Carol
      Carol disapproves of Z, does X with Bob, and does Y with Alice
      All three can be friends, and enjoy mutual activity Q together, as long as they keep their separate secrets. But there is no single thing that everyone needs to hide, and no one can even propose shifting the equilibrium without breaking up the group.

      • ADifferentAnonymous says:

        Sure, but let’s say somebody suggests a new technology that will make all instances of X, Y and Z public. If Alice, Bob, and Carol all privately dread that prospect, it means they each value their ability to do their preferred activity more than they value avoiding friendship with those who do the thing they disapprove of. And if they can all openly oppose it, then it seems like they’re most of the way to being able to make an agreement to tolerate the things they disapprove of. Like, Alice could just say “In the event that all our XYZ activity becomes public, how about we agree that we won’t cut anyone out of our Q-group based on that information?” and the other two should be relieved and agree enthusiastically.

      • Eric Fletcher says:

        @ADifferentAnonymous: But X is abhorrent, and Alice really values not associating with people who X. If Alice learns that someone is doing X, and can’t appropriately punish them, that is a “bad thing” to Alice. So, Alice would be less happy in a situation where she knows Bob and Carol are doing X compared to the current situation, on that dimension. But, all X, Y, Z being public means Alice no longer has to keep Y and Z a secret, which has some utility – the question is, how does the utility of “not keeping Y and Z a secret” compare to “having to tolerate X”?
        I have observed that, for the vast majority of hypothetical people, keeping secrets is low effort, and tolerating X is high effort.

        Also, as neutral observers, we can see that the ABC XYZ situation is cyclical, but from Bob’s point of view, he _doesn’t know_ that Alice and Carol are doing Y. So, given a proposal to “make XYZ public” … intuitively, I feel like statements made in favor or against such a proposal could leak information about people’s activities. Consider: if Alice says “I will tolerate X if you agree to tolerate Y and Z” that indicates to Bob and Carol that she is doing one or both of Y and Z. Bob knows she is doing Y, and Carol knows she is doing Z, but consider Daniel (also in Q group), who didn’t know that. If Daniel is opposed to all of XYZ, he would now shun Alice, even though he was happy to Q with her before the proposal was made!

      • ADifferentAnonymous says:

        @Eric Fletcher
        Your first paragraph has what I suspect is a very telling assumption: you take for granted that Alice doesn’t care if she has friends who do X as long as she doesn’t know about it. You frame it as if (ignoring Q, Y and Z altogether) her preference order is “My friend Bob doesn’t X” == “My friend Bob does X but I don’t know it” > “Bob is not my friend” > “My friend Bob does X and I know about it”. Whereas I would suggest that *actually* disapproving of something would look like “My friend Bob doesn’t X” > “Bob is not my friend” > “My friend Bob does X”, with Alice’s state of knowledge not being a factor. (Note that with the former preference order, gaining information about your friends’ X habits can only be bad, whereas with the latter it can only be good)

        Alice’s preference order makes perfect sense if what she really cares about is a (public or internal) identity as someone who doesn’t tolerate X. That, I need to think more about.

        (It also sort of makes sense if Alice sees shunning Xers as a private loss but public gain, but in that case she’d support global X-exposure)

        Your second paragraph is also a good point, but for now it’s the first that has piqued my interest.

      • Eric Fletcher says:

        Let’s let Alice assign all the points. Then:
        Doing X is -X points
        Knowing about someone doing X, and tolerating it is -T points
        Helping someone who does X (which makes X more likely to happen, even if you don’t know about it) is -H points
        Friendship is +F points
        Disapproving of someone doing X, which makes X less likely to happen – even if just by observers, is +D points.
        So:
        Alice is Friends with Bob, who does not X: F
        Alice is Friends with Bob who does X, and she doesn’t know: F – H – X
        Alice is Friends with Bob who does X, and she knows: F – H – X – T (clearly worse than the above)
        Alice shuns Bob who does X, when she knows; D – X
        Hypothetically, Alice shuns Bob for X, when he doesn’t do X: D. This implies D > (F – H). If X is “doesn’t use turn signals while driving” then D << (F – H). If X is [some sex activity] it could go either way, depending on how close the Friendship is, and how strongly Alice Disapproves of [some sex activity].
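
        A minimal sketch of this scoring in Python; the scenario formulas follow the comment above, while the specific numbers plugged in are illustrative assumptions, not values from it.

            def scenario_scores(F, X, T, H, D):
                """Alice's score in each scenario, per the point assignments above."""
                return {
                    "friends, Bob does not X": F,
                    "friends, Bob does X, Alice doesn't know": F - H - X,
                    "friends, Bob does X, Alice knows and tolerates": F - H - X - T,
                    "Alice knows Bob does X and shuns him": D - X,
                    "Alice shuns Bob, who does not X (hypothetical)": D,
                }

            # Mild disapproval (X = "doesn't use turn signals"): D is far below F - H,
            # so staying friends beats shunning even once Alice knows.
            print(scenario_scores(F=10, X=1, T=1, H=1, D=2))

            # Strong disapproval: D exceeds F - H - T, so once Alice knows,
            # shunning (D - X) beats tolerating (F - H - X - T).
            print(scenario_scores(F=10, X=8, T=6, H=2, D=9))

        Which way it goes turns on how D compares to F - H - T, matching the turn-signal versus [some sex activity] contrast above.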

  4. jacklecter says:

    @ADifferentAnonymous:

    Really great comment! I don’t have a way to say this that doesn’t sound kind of rote, but: it added some details to my map I was missing, and I expect those to help.

    I’d add that I think there’s an asymmetry in the signaling incentives involved. As a concrete example, consider the case of pornography (which I have no particular animus towards in general, with one or two exceptions) and child pornography (which is one of the exceptions, although a sufficiently good consequentialist case would make me change my mind, and I haven’t bothered to check whether there is one).

    Anyway, thinking only of the *social* incentives: I feel like advocating for a policy of greater tolerance towards both these things would be net-negative as a status move, for reasons not entirely captured by the disparity in how strong the taboos are. Emotionally, it seems like tolerance of any X is only likely to be considered a positive virtue if it’s already common knowledge among the ingroup that X isn’t bad.

    I realize this is pretty similar to your last paragraph in effect, but I thought it worth spelling out the extent to which the problem transcends actual individual judgments.

    (I’ve been at gatherings where I strongly suspected everyone would have been [sex-positive-including-normal-porn], and even gatherings where it seemed like that was mutual knowledge, and I *still* would have felt like *advocating* for more tolerance of [normal-porn] was a socially risky move.)

    • ADifferentAnonymous says:

      Hah, wrote my latest reply to Eric Fletcher before reading this, and you were already where I ended up, more or less. Well, let me see if I can take it further.

      The puzzle I see is that arguing for privacy is way less risky than arguing for tolerance. There’s a special reason for this in the case of sex, where we generally agree that privacy is a value in itself, but I think it applies to other things too.

      One answer is that privacy provides more stable Schelling points. The difference between “nobody has to know what books you read” and “there’s a system that makes it public if you read Mein Kampf” is subjectively larger than the difference between “Don’t judge people for their choice of books” and “Don’t judge people for their choice of books unless it’s Mein Kampf or something” due to the implementation details: in the former case, someone has to do all the work of constructing a book-alerting infrastructure.

      Another answer would be that it’s path-dependent, and we currently have learned a special respect for privacy because it’s how our existing truces are implemented but you could potentially have a successful low-privacy/high-tolerance society instead.

      Or maybe it’s because the transitional incentives differ–privacy often benefits its early adopters first, whereas tolerance benefits its early adopters last. But that would predict that it’s harder to argue against individually-initiated invasions of privacy, and I don’t think this is true–saying e.g. that you shouldn’t look through people’s unlocked phone does not seem like a particularly difficult stance to take.
