
I had a boss who liked to use a similar idea, but with a twist: he'd sometimes repeat the idea back but intentionally get a key nuance wrong, with the intention of getting feedback on that part. If the person corrected it, he'd be much more confident in both his own and the other person's understanding of the idea. If they didn't, he'd dig in more to figure out why his original understanding was actually _not_ correct. The idea behind this was to look to _disprove_ what he thought he understood rather than to confirm it; he would only feel confident in his mental model if he couldn't disprove it after exhausting all of the ways he could think of.

His strategy was pretty effective, from what I could tell; he'd often uncover subtle flaws or questionable assumptions in ideas or plans where he was not as knowledgeable in the domain as the person presenting it, due to not only being willing to say something incorrect, but going out of his way to embrace it. Importantly, this was never used as a way to try to trick or test people; he would never criticize anyone for failing to correct him when he said something wrong, because the whole point of the technique was that he wasn't even sure whether he needed to be corrected or not, and he was still trying to figure it out.





Veritasium has a great video[1] about people’s cognitive bias towards testing examples that prove their mental model, rather than examples that would disprove it. But if you want to actually confirm you have an accurate understanding of something, testing examples that don't fit your mental model is the workable methodology (i.e. treat your model as a null hypothesis and try to reject it).

Testing examples that do match your mental model only proves your model partially matches the actual model, but it does very little to actually identify misunderstandings, or improve your understanding.

[1] https://youtu.be/vKA4w2O61Xo
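
A rough sketch of what that methodology looks like (the hidden rule and the guessed model below are made up for illustration): probes that fit your mental model can pass under both your model and the true rule, so they can't tell the two apart; only a probe your model predicts should fail can expose the gap.

    # Hypothetical hidden rule: any strictly increasing sequence qualifies.
    def hidden_rule(seq):
        return all(a < b for a, b in zip(seq, seq[1:]))

    # My guessed mental model: the numbers go up by exactly 2.
    def my_model(seq):
        return all(b - a == 2 for a, b in zip(seq, seq[1:]))

    # Probes that fit my model pass under both functions, so they teach me nothing new.
    for probe in [(2, 4, 6), (10, 12, 14)]:
        assert hidden_rule(probe) and my_model(probe)

    # A probe my model says should fail is where the two diverge: the hidden rule
    # accepts it, my model rejects it, so my model was too narrow all along.
    disconfirming_probe = (1, 7, 93)
    assert hidden_rule(disconfirming_probe) and not my_model(disconfirming_probe)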


It's a common programming error too. I often see automated test cases that only test the happy path, but you ought to also test that error cases fail the way you expect and desire. These tests fail the first time I run them as often as the happy-path cases do.
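
A minimal sketch of the difference (the parser and its names here are hypothetical): alongside the happy-path assertion, the error cases get explicit assertions about the failure mode you expect.

    import unittest

    # Hypothetical function under test: parse a TCP port, with a defined error path.
    def parse_port(value: str) -> int:
        port = int(value)  # raises ValueError on non-numeric input
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    class ParsePortTests(unittest.TestCase):
        def test_happy_path(self):
            self.assertEqual(parse_port("8080"), 8080)

        def test_non_numeric_input_errors(self):
            # The error path is part of the contract too: assert the specific failure mode.
            with self.assertRaises(ValueError):
                parse_port("http")

        def test_out_of_range_errors(self):
            with self.assertRaises(ValueError):
                parse_port("70000")

    if __name__ == "__main__":
        unittest.main()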

This, to me, gets to the fundamental nature of testing. If errors are part of your test suite, then they're happy path too. Error behavior is just as much a part of your API surface as any other return value.

But if you don't have (or want) callers that depend on your specific behavior in error situations then I wouldn't bother testing them. It's my contract that I can and will change the behavior arbitrarily and you can't be mad at me for it.


I was taught to use the null hypothesis in a US high school science lesson on the scientific method in the 90s. It never clicked for me “why” until your explanation. Funny how that stuff works.

So many great videos on Veritasium, and that is one of my favorites. Anyone who is not familiar with the channel should check it out.

Of course, there is this counterpoint...

https://www.youtube.com/watch?v=cY_o4A1wzsg


Smart play, but hard to do in the moment. Sounds like a sharp guy.

Can you elaborate a bit? I don't understand how this is useful. If, for example, someone says "we should add a nullable field and then create a migration", he'd say "you're saying we should add a non-nullable field and create a migration"? Why would someone not correct this?

I think this approach is mainly for more complex ideas. To expand on your example, it might be something more like "our 99th percentile page loads are slow due to high widget view hydration latency. We can reduce the cost of hydration by caching greeble status in the widget table. We should add a nullable field and then create a migration."

To which, the reply might be roughly the same but ending with "We should add a nullable field to the greeble table and then create a migration."

If the proposer is paying attention, they might say "no no, the new field has to go in the widget table."

The goal isn't to catch someone out, but to make doubly sure you've reached a shared understanding.


Hmm, that makes more sense, but I still don't see how these factual errors could go uncorrected. I can see how this might be a measure of attention, but I don't understand the "disprove my assumptions" aspect of it.

If you repeat what you think is a 100% correct version of the information, and the other person nods along, you can't be very sure whether they're just passively agreeing because they zoned out or if you were actually correct.

It's a bit like writing unit tests that fail, before you implement the change that'll make them pass.


It seems you also agree with stavros that this strategy is more like a test of attention, then?

It does test attention, but that is not its purpose.

"okay, so you mean that the profile data is being sent, but something is messing up in our endpoint handle, right?"

"No, it looks more like it's a routing issue. The endpoint never gets hit when the client sends data, so we're trying to troubleshoot where the disconnect happens"

Not trying to be snarky; I just had a real-world example handy because my entire team uses this type of messaging. It usually starts with an "okay, just trying to level set...", or a "just so I know I'm on the same page".

In our experience, this type of communication has helped minimize instances of completely mismatched task expectations.


Whenever I'm saying "just so I know I'm on the same page", I communicate my best understanding of the correct interpretation of my colleague's message. I would never deliberately introduce a misinterpretation to see if it gets corrected. Misinterpretations happen often enough naturally already, in both directions, and my goal is generally to minimize them.

Right on. It's a fine strategy. An alternative one is to pick your second-best guess (usually in an upward-inflecting question voice, semi-incredulously), to see if this thing you think isn't the right idea gets approval. Then you can ask your first best guess, because now just asking will highlight the distinction, which may make the missing piece more apparent to both parties.

Really no wrong answer, so long as all parties are earnestly working towards the solution. The nice part about this strategy is that it lets your 'most correct' answer actually be multiple answers that you whittle down, rather than you making the judgement calls yourself about what the other person most probably means versus what they least probably mean. You remove an assumption and lead with your biggest concern, even if that seems like a crazy suggestion. Once you confirm that it is crazy, you're closer to the target. And if the 'crazy' thing was right, then you get to skip a lot of the steps between your initial best understanding and the correct understanding.


Hmm, I see, thanks. This sounds pretty everyday to me, but I guess that's probably just because we already do this. Things like "you're saying <x>" and "OK, summarize my points back to me so I know we're aligned".

I might not have been clear, but this was my manager, who was not spending time pair coding with us and things like that. His job was to deal with things at a much higher level, and this was the type of thing he would do in meetings where we were writing design documents for complex new features or entire new systems; at the abstraction where he was involved, the discussion wouldn't be about the "how" but the "what" and the "why".

Building off of your example, let's say one of his engineers wrote a design document about a new piece of data called Foo that was getting added to our database, and the document mentioned that the new piece of data would be considered optional rather than required because we'd only have access to it in the newest version of the client software, and the EOL for the last version not to provide it was two years away. He might mention that customer Bar had just upgraded to the latest version of our client and expressed interest in using the Foo data, and ask why they wouldn't be able to get the Foo data from their client, with the idea that he'd be corrected and told that the customer _would_ be able to get the Foo data because they had just upgraded and the latest client had access to it.

It sounds more like he's trying to disprove his current understanding, and he feels confident in his understanding only if he can't disprove it. So, he probably wouldn't fake a trivial misunderstanding like that, but some deeper part where he's less confident about his understanding anyway.

I like the repeat-back, but I would hate this. What if the person doesn't feel comfortable correcting them? There are just so many ways this could go wrong. Not to mention it's dishonest and manipulative. Management is already manipulative in a way, but this is crossing the line.

If it's a junior developer, then they really shouldn't have too much responsibility without heavy supervision anyway.

If it's a senior developer, then they should feel comfortable speaking up when there's a disconnect or misunderstanding. I would argue that one trait is the overwhelming majority of what separates a senior developer from a junior one.


> If it's a senior developer, then they should feel comfortable speaking up when there's a disconnect or misunderstanding. I would argue that one trait is the overwhelming majority of what separates a senior developer from a junior one.

This is just idealistic, and it doesn't acknowledge the power dynamics in any organization, nor does it factor in people's individual personality traits.


> If it's a senior developer, then they should feel comfortable speaking up when there's a disconnect or misunderstanding. I would argue that one trait is the overwhelming majority of what separates a senior developer from a junior one.

Whether seniors feel confident correcting the manager depends primarily on how the manager acts when corrected. There are many managers who don't get corrected by seniors, and seniors who learn not to do that - either because it is useless or because it will be punished.

Either way, juniors do talk with management fairly often, whether they have responsibility or not.


I could not agree more.

Seems like it'll always be the case that people will chuck the responsibility for X towards people with the lowest capacity to actually be responsible for X.

People feel fine with correcting managers when managers reward being corrected instead of punishing it. That's got nothing to do with seniority levels.


Agreed. I've seen a scenario play out with a junior engineer and a senior manager who does this. The junior had the right idea, but the dialog with the inquisitive senior manager left the junior confused because they didn't feel like they had the chops to push back.

I personally prefer a direct strategy: state that you don't understand and what your (possibly) unenlightened concerns are. Or ask something like, "that sounds good, but what are we missing that might cause someone to be paged at 3am?"


