A long time ago I was in a nightclub with a friend, and we ran into a woman we'd both met a few times before. My friend struggled visibly for a moment, and then correctly remembered her name. She was pleased and flattered. I had known her name immediately, without any struggle, but she didn't look at all flattered at that!
A while back a group of my friends used to play a general-
If you release a piece of software with a security hole in it, and then fix it promptly and competently when someone finds it, users will be vocally grateful. You'll get compliments on your dedication and your integrity, and it will increase general trust in you to maintain a security product – quite possibly more than if the hole had never been there in the first place.
Psychologically, it's easy to come up with reasons why this general pattern of human behaviour makes sense. But it seems like a cognitive weakness nonetheless: surely there must be a multitude of cases where it creates a perverse incentive to pretend to be less competent than you are, or to make deliberate mistakes so you can earn kudos for fixing them…
Not All Men, etc, but if you are looking for more discussion of this phenomenon that's where I'd start!
But perhaps your unspoken point is that that might be what's really going on in some of my own illustrative examples? The security-hole one, in particular, would make a lot of sense that way now that you mention it. (Even if your real feelings were that the developer was a total muppet for having introduced $bug, that's the last thing you'd say out loud when they fix it, at least if you're sensible: you don't want to teach them that making their software better attracts a round of upsetting criticism that they could avoid by keeping quiet about the next hole.)
My initial thought was that that didn't sound like quite the same kind of thing, because I was talking about people feeling genuinely more impressed by the less objectively impressive achievements, whereas you seem to be talking more about people making tactical judgments about where positive reinforcement can be most effectively applied.
No, I think I was unclear (see also Endless Paper): I think this is absolutely a thoroughly-observed phenomenon along gendered lines in the division of housework and childcare. What constitutes "astonishingly impressive" (single) parenting is very different depending on the (perceived) gender of the parent, even if all other variables are held constant. There is also, overlaid on this, some amount of tactical judgement -- but I don't think that's the main point I was trying to get at.
So some of it is less "tactical judgements" and more "genuine response to perceived effort expenditure": someone putting in More Effort will tend to get More Praise even if the effort is ineffective/inefficient/misdirected/etc.
But perhaps your unspoken point is that that might be what's really going on in some of my own illustrative examples? The security-hole one, in particular, would make a lot of sense that way now that you mention it.
That too!
Though there's also the thing where: if we work on the premise that nobody is perfect and everybody's going to screw up (eventually), "a company that has never introduced a security hole" is "a company that hasn't introduced a security hole yet". Given the assumption that screw-ups will happen, the sooner you get data on how they'll be handled -- whether they'll be handled well -- the sooner you get reassurance that it's "safe"(r) to invest (time/energy/finance/emotion).
This also shows up in sociology: see e.g. the increasingly frequent advice (at least in the places I hang out, like Captain Awkward) to, early on in any kind of relationship, try saying "no" to something minor, and see what reaction you get. Certainly I've spent a lot of time in therapy arriving at the idea that, particularly in contexts where people seem to want to form a very intimate relationship very rapidly, and where I'm giving a lot of support, it is a really good idea to ask for some reciprocal support early on, rather than just assuming that Obviously It's On Offer When I Need It, because very frequently it's turned out not to be, in ways I've found really upsetting or isolating or difficult.
The point I was aiming for was mainly "I think I recognise this phenomenon or at the very least closely associated phenomena, I know a whole bunch of sociological analysis of it Exists, I do not have the brain to dig it out right now but here's a signpost to aspects of discussion I think you might find interesting."
(If you are cheerful about the prospect of ongoing dilatory correspondence on this topic I am v happy to keep flailing intermittently at you -- I think you are Entirely Right about the pattern existing, to be clear!)
[...]
Given the assumption that screw-ups will happen, the sooner you get data on how they'll be handled -- whether they'll be handled well -- the sooner you get reassurance that it's "safe"(r) to invest (time/energy/finance/emotion)
*nodnod* yes. When I said in my original post that it was easy to think of reasons why it made sense for people to act this way, both of these ideas were the kind of thing I had in mind.
You could construe both of them as different facets of the more central question of how, or whether, or how well, people can stretch themselves past their existing limits. If you're tackling a problem beyond the size you can solve effortlessly, do you have the concentration, determination, and straight-up endurance to put in enough effort to solve the larger problem? And if you're tackling a problem beyond the complexity you can solve without errors, will you cope sensibly with the inevitability of making errors? (Fixing them when found, fixing them even when you're more interested in moving on to the next problem, reliably adding regression tests to stop the same problems coming back, etc...)
And, as you say, this really is an important thing to want to know about somebody whose problem-solving (or whatever) you're likely to be depending on, because problems beyond their comfort zone will come up sooner or later. It even makes sense to consider 'are you able to stretch past your limits?' as a more important question than 'where are your limits right now?'.
It's just that it's also true that it introduces perverse incentives. One of the risks of using old-chestnut puzzle questions in job interviews, for example, is that some candidates will have heard them before and already know the answer. The good case of that is that the candidate owns up to knowing the answer already, in which case the question is merely inconclusive, and you have to try harder to come up with a different question that will actually make them think. But the bad case is if the candidate is a bit more quick-thinking and/or unscrupulous, and pretends never to have seen the question before, in order to simulate solving it on the spot in a plausible but better-than-average length of time.
Simont's example is a much better place to start. It is uncontroversial, and because of that unlikely to spawn heated off-topic discussion.
While I'm commenting anyway, another example I realised too late that I ought to have put in the original post is the way credit ratings work: you earn trust from money-lending organisations by incurring debts that you then pay off on time, and the strictly better behaviour (in all other respects) of avoiding getting into debt in the first place doesn't score nearly so highly.