One of the greatest compliments I’ve gotten from a coworker recently was that he considered me “a grandmaster of the Socratic method.” It puts a smile on my face, just thinking about it.

He went on to say that in meetings we’d been in he found my questions the most insightful. I asked the best questions. And he was curious how I came up with them.

A good question. How did I come up with my questions?

I believe that my best questions come from comparing three different mental models I have about a situation. First, there are the things the person has said (or at least my best recollection of what they’ve said). Second, there is what I believe the other person thinks about the topic (taken from prior interactions and interpretation of what they are saying). And finally, there is what I believe about the topic.

My questions are to probe the differences between these three models.


All career ladders come down to a proxy for impact people have on the world. Once it is all said and done, there are only five levels: assisted, individual, team, company, industry. Within any given role (e.g. software engineer) a person is at one of these levels. To change to a higher level, it takes figuring out what has impact (which will be somewhat context dependent) and making that happen.

Assisted – the person needs assistance to perform the role. You can think of this as pulling in impact rather than pushing out impact.

Individual – the person can handle the tasks of the role on their own.

Team – the person behaves in such a way that they elevate the team around them.

Company – the person behaves in such a way that they alter and elevate the company in which they operate.

Industry – the person behaves in such a way that they alter and elevate the industry in which they work.

Most career ladders are proxy measures of this scale: knows technology X, breaks down work, etc. Can you measure impact more directly or explicitly?

4 Simple Questions

I was taught this set of questions to use in 1-on-1s by Jeffrey Fredrick. They have worked very well for uncovering conversations that need to happen. The order matters and each question has a purpose.

  1. Are you happy?
  2. Are you able to do work that you are proud of?
  3. Is there anything that you expect to see happening that you don’t see happening?
  4. Is there anything that I, or the company, can do to help you achieve what you want to achieve?

They seem simple enough, and they are. The secret to using them is that they aren’t just a checklist to go through. You need to be engaging with the responses with active listening, EAR, NVC, mutual learning, and you need to care about what the person is saying. Reactivity is real.

The first question seems simple enough and it is first because happiness is the leading indicator for what the person will do next (engage with the team more, leave the company, etc.). Often the person will think for a second or two and say yes and then you both move on. Pay attention not just to what they say, but how they say it and how they react to the question.

The second question gets to the person’s judgement of the current situation they are in. There are two parts to this: “able” and “proud”. Pay attention to both parts. Often I hear examples given of what they did that they are proud of. Respond to those and explore with them what about the work they were proud of and how they can get more of that.

The third question starts to turn things from the person and their work situation to the environment around them. All of us have ideas about what should be happening. Sometimes what should be happening is visible and sometimes it isn’t. This is the chance to find out where transparency is missing and to find if the person has agency in any situation they identified that maybe they are not exercising. This question opens the negotiation to jointly design the next step on anything identified. Don’t default to just taking a task and moving on.

The last question is where you are looking for things that are really not in the person’s control. What can you, or others, uniquely do to help? I find that usually by this point there is not much that hasn’t already been handled. Sometimes there will be that rare gem that really lets you help. Don’t miss it.

A Thought on Puzzles and Paradigms

While reading The Structure of Scientific Revolutions, I came across this

It is no criterion of goodness in a puzzle that its outcome be intrinsically interesting or important. On the contrary, the really pressing problems, e.g., a cure for cancer or the design of a lasting peace, are often not puzzles at all, largely because they may not have any solution. Consider the jigsaw puzzle whose pieces are selected at random from each of two different puzzle boxes. Since that problem is likely to defy (though it might not) even the most ingenious of men, it cannot serve as a test of skill in solution. In any usual sense it is not a puzzle at all. Though intrinsic value is no criterion for a puzzle, the assured existence of a solution is.

We have already seen, however, that one of the things a scientific community acquires with a paradigm is a criterion for choosing problems that, while the paradigm is taken for granted, can be assumed to have solutions.

T.S. Kuhn, The Structure of Scientific Revolutions

I find it thought-provoking to consider how that mindset (interesting puzzles have solutions) explains actions taken in a software group. I often see teams or individuals going off into “la-la land” of technical work because there is, within the paradigm of computer science, a puzzle that can be solved. With some more performance analysis, some more network packet tracing, some more unit tests the last bug can be found and resolved.

The problems of “what does this software need to do for its user” appear, from the paradigm of computer science, to be unsolvable. People are not well defined, they don’t stick to one way of thinking. There are too many different customers. Therefore any “solution” isn’t really a solution, it is an almost arbitrary decision that can’t be either right or wrong. Kuhn continues

… Other problems, including many that had previously been standard, are rejected as metaphysical, as the concern of another discipline, or sometimes as just too problematic to be worth the time. A paradigm can, for that matter, even insulate the community from those socially important problems that are not reducible to the puzzle form, because they cannot be stated in terms of the conceptual and instrumental tools the paradigm supplies.

The focus on the problems of the paradigm can be beneficial for scientific researchers as it keeps them focused on advancing the paradigm in which they work. We workaday software developers, however, are generally not scientific researchers. So we have to ask, is the devotion that puzzle-seeking produces advantageous in a software development context?

Like so many questions that evoke a hand-wavy “context”, the answer has to be yes and no. Yes, puzzle-seeking creates that drive to produce better functioning software that is fast, doesn’t crash, and works as the developer intended. No, puzzle-seeking wastes time and effort on problems that don’t matter to the consumers of the software. Both answers are right and yet neither is complete. Sometimes it is correct to seek that puzzle and sometimes it is a waste. How do we manage this ambiguity?

We manage it by creating another paradigm around the paradigm of computer science. This is the paradigm of any individual’s or team’s “software development process”. The process paradigm gives shape for new puzzles to solve and rules by which we can admit certain puzzles and omit others. For example, XP uses an on-site customer role on the team in order to provide a “correct answer” for what needs to be built for the users. The on-site customer changes the “metaphysical” questions of user needs into problems that have solutions within the paradigm of the team.

Withholding Information

Withholding information is one of those things that many of us do without even thinking about it, but it comes at a cost. Once you start doing it you limit what you, and others, can learn.

I run a workshop to teach people the basics (really the absolute basics) of incident analysis. Incident analysis is the practice of observing, interpreting, explaining, and learning from an event, usually one that didn’t go the way you wanted it to go. My goal in the workshop is to explain the philosophy behind such an analysis and to provide a structured and supportive space for everyone to apply these techniques to real life incidents from their own experience.

Each time I teach this workshop, I gain further insight into how people take these principles, embed them into their own way of thinking, and practise the skills. A recent workshop was particularly instructive because it highlighted how strongly people will hold onto and hide information to get the outcome that they want and avoid the discomfort of learning something new.

The Story

I start the workshop by explaining the philosophy and mindset needed for effective incident analysis. For me, these are about understanding blame and the role that determinism has in that. I want people to approach the incident as something to be understood instead of a way of proving someone correct.

After my sermon on the good and evil of incident analysis, we work through an analysis step by step in groups. The first step is for someone to describe an incident to their group. The group explores the incident and attempts to understand what about the described situation made it a “problem”. What they come up with will strongly influence what they will learn through the rest of the workshop.

One group’s incident story was (vaguely) that new software was being deployed overnight and required the coordination of various teams. Shortly after starting, at around midnight, they encountered a problem with the database changes. The DBA who was applying the changes reported that the system had run out of disk space, which meant that the deploy couldn’t go forward. The teams tried to correct the problem over the next hour or two, but eventually, they decided to roll back and try again another night. The rollback was finished at around four or five in the morning and the conference call was ended. The storyteller arrived home at around six and then came back into work later that morning.

The group ended up framing the problem as a lack of access, which meant that the team couldn’t properly respond to the problems as they came up. After the workshop, however, I found out that the story had been manipulated to get this outcome! The person presenting the incident told me he had guided the discussion and left out certain details because he “didn’t want to go there”.

I was caught by surprise. Flabbergasted even! The sentiment that he “didn’t want to go there” is another side of “I know the solution already”, which is what I thought my workshop was about avoiding. Throughout the workshop, I strove to make it clear that we need to look at what happened, look at it from many different angles, reserve our judgement early on, and pay attention to our own biases so that we can compensate for them during the investigation and analysis. And here I had someone using his bias to get the outcome that he was looking for!


What I found most surprising was that even in a situation set up to be about exploring a problem from many angles and looking for new information, someone would decide to withhold what they knew in order to control the outcome. On reflection, I find it both sad and unsurprising.

People will try to control situations like this for many different reasons. They could be trying to protect themselves. They could honestly believe that they are saving others from wasted effort or concern. However, the thing that we all need to learn from this story, and others like it, is that (almost) no matter how much we try, we won’t have the complete story. While we have to strive not to be a part of the problem, we have to work with what we can get and do the best we can with it.

Paper Critique: A Longitudinal Cohort Study on the Retainment of Test-Driven Development

D. Fucci, S. Romano, M. T. Baldassarre, D. Caivano, G. Scanniello, B. Turhan, and N. Juristo. A Longitudinal Cohort Study on the Retainment of Test-Driven Development. arXiv e-prints, page arXiv:1807.02971, Jul 2018.

I won’t pretend that I think this is an example of good research. However, it is an example of a research paper that can teach us a lot by critically examining it to understand both the good and the bad parts.

Continue reading “Paper Critique: A Longitudinal Cohort Study on the Retainment of Test-Driven Development”

Paper Critique: Towards a Theory of Software Development Expertise

Sebastian Baltes and Stephan Diehl. Towards a Theory of Software Development Expertise.
Proceedings of the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018).

I remember when I was just starting out as a software developer. I had moments of complete certainty about my utter expertise in the profession. I also had moments of complete despair that I had any clue at all about what I was doing. Reading books like The Pragmatic Programmer helped me get a sense of what it meant to grow in expertise. This paper by Baltes and Diehl tackles this same question with a Grounded Theory approach.

Continue reading “Paper Critique: Towards a Theory of Software Development Expertise”

Paper Critique: On the diffuseness and the impact on maintainability of code smells

Palomba, F., Bavota, G., Di Penta, M., et al. Empir Software Eng (2018) 23: 1188.

Suppose you are working on some chunk of code. You notice that the logic is a bit convoluted, some of the class fields are public, there seem to be more methods on the class than make sense for the concept, and the particular method you are looking at is absurdly long. You sit at your keyboard and the training about code smells in you says that you need to be a responsible software developer and address those smells. Then your other training about engineering being about tradeoffs kicks in and you wonder, which of these matter? What impact do they have?

Continue reading “Paper Critique: On the diffuseness and the impact on maintainability of code smells”

What is the job to be done of your job?

A few days ago three of us were walking back from the Devonshire Square market after having enjoyed the warm, sunny weather by eating our food outside. The topic of conversation had wandered but had eventually rested on careers. Specifically, we had hit upon the topic of how to decide on what one’s next position might be. How can one decide between staying at an existing job or leaving for something else? If the decision is to leave, then how does the decision between different positions on offer get made?

The question seems to be easy to answer by just listing out specific aspects one could be looking for: look for a higher salary, look for better working conditions, look for a place that has less overtime, take the one where you’ll learn the most. This hasn’t answered the question at all! It has actually made the question even harder to answer by obscuring the issue with too many details and, at the same time, not providing any way of deciding which of these specific aspects the decider cares about.

To get to the real answer, you have to first ask yourself a very different question: what are you hiring this new (or your existing) job to do for you?

Are you looking to have it pay off your debts? Move you to another country? Teach you a new skill? Allow you to have time to fish?

If you look at the job as a product to be bought or a service to be hired, what are you hoping to get out of it?

Once the job to be done is clear, then the choice between the different aspects above will be, if not easy, at least guided toward a goal.

Reading Notes from “Johnny Bunko: the last career guide you’ll ever need”

There is no plan

The world is too unpredictable to be able to plan out in advance your career or your life. You can make choices for instrumental reasons: the choice is based on the perceived utility for something else down the road. The object of choice is seen as an instrument of a plan. You can make choices for fundamental reasons: the choice is based on the inherent value of the object of the choice. The object of choice is seen as good in and of itself (reminds me of the Meditations on the subject of stoicism that a friend has been talking about). Because things are so unpredictable, instrumental reasons often lead to the wrong choices for the plan anyway. Most successful people, most of the time, use fundamental reasons.

My interpretation: using fundamental reasons actually keeps your options open and keeps you happy at the same time! Instrumental reasons shut down options and are likely not what you actually wanted to do, thereby making you less happy.

Continue reading “Reading Notes from “Johnny Bunko: the last career guide you’ll ever need””