Leadership Teams may be a smell

Sometime in mid-January, I was thinking about various organizational patterns and how companies run. I was thinking, in particular, about the idea of a "leadership team". I dropped a thought into Hootsuite, and a couple of weeks later, on February 6, that thought was broadcast over a few social media channels.

I got a few different responses, all of which amounted to "please say more about this." I want to make it clear that this is a thing I am still pondering, and much of this post consists of ideas I've not shared broadly or discussed at length.

Right now, I think of a leadership team as an organizational smell. Like a code smell, an organizational smell potentially indicates a deeper problem in the system. The smell itself, in this case the existence of a leadership team, is not technically a problem. A leadership team doesn't prevent an organization from functioning. But the existence of a leadership team may indicate perceived weaknesses in the overall system. Maybe we're attempting to compensate for these weaknesses by centralizing authority and decision making.

Maybe we see no weaknesses and we can easily rationalize this centralization. After all, the members of a leadership team are nearly always the individuals at the top of the organizational chart. In such a case, we may be rationalizing the existence of a smell with a misapplied organizational design pattern.

Like software design patterns, I think of organizational design patterns not as final, repeatable prescriptions, but as templates. Dominator hierarchy is an organizational design pattern optimal for repeatable work. In a dominator hierarchy, there is a static pyramidal structure with a clear chain of command, decisions are made at the top, and people identify with specific job titles within the hierarchy. This is the right organizational pattern for what Dr. Snowden refers to as the chaotic and obvious domains in the Cynefin framework. While dominator hierarchy works in both of these domains, the optimal management style differs: command for the chaotic domain and coordination for the obvious domain.

For the complicated domain, the Supportive Hierarchy is an appropriate organizational design pattern. In a Supportive Hierarchy, the reporting structures are similar to those of a dominator hierarchy, but the emphasis is on empowering employees. The management style moves from controller or coordinator to mentor. Employees have more freedom. Decision-making protocols are in place, allowing employees high levels of autonomy. Servant leadership is another common and effective management style here.

Implementing a hierarchy in the complex domain is not technically a problem, as it doesn't stop the organization from functioning. But hierarchy of any form is a suboptimal structure for an organization that needs to respond quickly to unpredictable changes in the market. For the complex domain, the optimal organizational design pattern is a Decentralized Network: the management style is facilitative, and the roles associated with management in the hierarchy patterns disappear. In a Decentralized Network there are no managers, directors, VPs, or SVPs. You commonly won't even find executive roles beyond those required by law. The decision-making protocols used in a Supportive Hierarchy are made more robust to account for all decision making, including governance decisions.

Product development in its true form falls into the complex domain. If you know precisely what you are building and how the market will respond to it, you are not developing product, you are building it. In such a case you are either squarely in the complicated domain or your hubris has gotten the better of you. I suspect the latter, but I suppose that's another blog post.


Let's see if we can wrap this up and attempt to clarify why a "Leadership Team" is likely an indication of dysfunction.

With Dominator Hierarchy and a command management style (Chaotic), a leadership team would slow down decisions, and speed is paramount: getting out of chaos is the primary goal of leadership in that domain.

With Supportive Hierarchy, a mentor management style, and decision-making protocols (Complicated), the centralization of decision-making is not necessary. A leadership team is likely an indication of failure elsewhere in the decision-making process.

With a Decentralized Network, a facilitative management style, and robust decision-making protocols (Complex), a leadership team makes no sense. There is no hierarchy of titles. Everyone is a decision maker.

A leadership team makes sense if you have a dominator hierarchy with a coordination management style (Obvious). Decisions are made at the upper levels of the hierarchy and we are not in chaos, so the small amount of extra time a leadership team takes to coordinate and make a decision is a worthwhile trade-off for the increased average value of those decisions.



So if your local fast food franchise has a leadership team, that makes sense. If their headquarters has one, however, it suggests they fail to recognize the complicated or complex nature of the work they do. The leadership team is an indication they are operating under a suboptimal organizational design pattern and/or management style with insufficient decision-making protocols. In other words, the existence of a leadership team at headquarters is a smell.

Code Profiling

I recently gave a talk on the role of a Quality Analyst as an organization transitions from waterfall to agile. The talk was entitled "Switching Horses in Midstream" and covered a number of topics. One item in particular struck me as worthy of blogging about. It's a technique I've been using for years, but have never written about it. So here we go:

Legacy code bases that lack test coverage are often trepidatious places to hang out. You never know when a simple abstract method is going to create a sinkhole of logic and reason, eventually leading to your exhausted resolve to revert and leave it dirty.

Whenever I encounter a code base that lacks unit tests, I begin with a technique I call "Code Profiling". This is a nod to criminal behavioral profiling, where scientists objectively observe and describe the patterns of behavior they see.

First of all, you need a naming convention for your unit tests. I don't much care what it is. I mean, I have an opinion or two on the matter, but for the purpose of this article and this technique, you just need to pick something and stick with it. Today, I'm going to go with a "Should_ExpectedBehavior_When_StateUnderTest" pattern, as it lends itself well to this technique.

So let's say we're looking at a postal address class.

MailingAddress
--------------------
Street
Street2
City
StateCode
StateName
PostalCode
Country
--------------------
OutputAsLabel
--------------------

This is a close approximation of a class I encountered at a client a few years back. All of the properties have getters and setters. Pretty basic stuff at first glance.

When we profile the class, we change our naming convention from "Should_ExpectedBehavior_When_StateUnderTest" to "Does_ObservedBehavior_When_StateUnderTest". The key difference here is the "Does" instead of the "Should". The point is to clearly identify these as tests that were written after encountering untested code and are based on observed behavior.

For this particular class, we notice that both StateCode and StateName have getters and setters, and that StateName is optional in the constructor whereas StateCode is required. This is ... odd.

After reading the code a bit, we figure it would be entirely possible to set a StateCode and StateName that do not match. We take a look at the constructor: it protects against this by looking up the StateCode in a hash. If the StateName does not match, the code silently changes StateName to the match from the hash and moves on. No error. A little more checking shows that we can update the StateName directly with no change to StateCode.

Does_NotError_When_StateMismatchToConstructor...
Does_AllowStateMismatch_When_StateNameChanged...

And here is the subtle significance of this approach: we make no judgment about what "Should" happen. We simply record what "Does" happen. The developers who missed this in the first place were likely thinking about what "Should" happen and missed these possible issues.
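To ground this, here is a minimal Python sketch. The MailingAddress class below is a hypothetical reconstruction of the behavior described above; the state lookup hash, field names, and sample values are my assumptions, not the client's actual code. (Python's unittest requires the lowercase "test" prefix, so the "Does" names get it prepended.)

```python
import unittest

# Hypothetical lookup hash; the real class presumably covered all states.
STATE_NAMES = {"IL": "Illinois", "OH": "Ohio"}

class MailingAddress:
    """Reconstruction of the observed behavior, not the original code."""

    def __init__(self, street, city, state_code, postal_code, country,
                 street2=None, state_name=None):
        self.street = street
        self.street2 = street2
        self.city = city
        self.state_code = state_code
        # Observed: the constructor silently replaces a mismatched
        # state_name with the match from the lookup hash. No error.
        self.state_name = STATE_NAMES.get(state_code, state_name)
        self.postal_code = postal_code
        self.country = country

class MailingAddressProfile(unittest.TestCase):
    """'Does' tests: they record observed behavior, not desired behavior."""

    def test_Does_NotError_When_StateMismatchToConstructor(self):
        addr = MailingAddress("123 Main St", "Chicago", "IL",
                              "60601", "US", state_name="Ohio")
        # Observed: the mismatch is silently corrected, nothing raised.
        self.assertEqual("Illinois", addr.state_name)

    def test_Does_AllowStateMismatch_When_StateNameChanged(self):
        addr = MailingAddress("123 Main St", "Chicago", "IL", "60601", "US")
        addr.state_name = "Ohio"
        # Observed: setting state_name directly leaves state_code untouched.
        self.assertEqual("IL", addr.state_code)
        self.assertEqual("Ohio", addr.state_name)
```

Run with `python -m unittest` to confirm both observations hold before sitting down with the product owner.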

Now run through the tests with your product owner and/or QA. For those that describe the desired behavior, rename them to fit the standard convention. For those that do not, if any, decide whether they need to be fixed now or can be deferred. It is possible the issue has been in the code for years; while it's annoying, it is not an urgent fix. And it might take a lot of unravelling to actually fix it.

Your tests now indicate the desired behavior and the observed behavior that is not desired, but is not entirely "broken".

When you want to bring the behavior into alignment with expectation, I suggest commenting out the "Does" test and writing as many "Should" tests as necessary. Then, un-comment the "Does" test, ensure it fails, and delete it.


Good Software

I saw a tweet this morning that caught me off-guard. It doesn't strike me as consistent with the type of thing AgileFortune usually tweets. My initial reaction was to reply via Twitter, but I didn't feel I could express my thoughts well in 140 characters or less.

What is "good"?

And by extension, what is "good" software? "Good" has many meanings: to be morally excellent or virtuous; to be satisfactory in quality, quantity, or degree; to be proper or fit; to be well-behaved, beneficent, or honorable. These are all definitions of good.

Morals and Software?

One might argue that moral excellence, beneficence, or honorableness are not relevant to measures of "good" in software. I disagree. They are as relevant to software as they are to medical care, banking, manufacturing, and any other human endeavor. Dishonorable businesses that are morally questionable and do harm to others are not "good" businesses. Sex trafficking is not "good" business. Exploitation of others is not "good" business. Profits alone do not make a business "good". Businesses do not exist to make money, or rather they should not. Businesses should exist to provide an offering of value to others. Money is a means of measuring the value provided to others. With that money, we can continue to provide and improve our offering, should we choose.

Money is to a business as food is to a human. We do not live to eat; we eat to live. We do not run a business to make money, we make money to run a business. Greed and gluttony are entirely different matters; neither of which is healthy or "good".

Other aspects of "goodness"

Let's set aside the moral and social aspects for a moment. This leaves us with satisfactory in quality, quantity, or degree; proper or fit; and well-behaved. Distilling it down to these elements, one could assert that if software makes money it is "good", for it behaves well enough for people to use it for some given purpose that results in profit to the software creator, be it in subscriptions, transaction fees, exchange for goods or services, or ad revenue.

Reliable? Must be good enough. It makes money. And nothing else matters.
Maintainable? What difference does that make? It makes money. And nothing else matters.
Secure? Bah! It makes money. And nothing else matters.
Scalable? Who cares? It makes money. And nothing else matters.

Let's apply this measure of success to another endeavor. An electrician whose installations fail on occasion (reliability), who makes junctions inaccessible and ignores industry standards (maintainability), and who fails to adhere to safety standards (security), but who gets paid for the job, has provided a "good" service. Because nothing other than money matters.

Good and Money are not the same

As a software developer, and especially as a professional software developer, your measure of "good" needs to include factors far beyond revenue generation. Open Source software is often quite good and makes no money. Commercial software is often poor and makes a lot of money. "Good" and "Money" are not the same. Software that makes money is not automatically good.


Organizational Motivators: Autonomy, Connection, and Excellence

I think I saw Daniel Pink's TED Talk on "The Puzzle of Motivation" for the first time in 2011. I'd been reading some about leadership, management, and organizational psychology up to that point, but Pink's talk and his distillation of these complex concepts into a simple framework (Autonomy, Mastery, and Purpose) inspired me to read more on the topics. Over the course of the next couple of years, I consumed a decent amount of material. You can view my Goodreads account to see what books I was reading. Unfortunately, there is no easy way to share all of the scientific articles and other sources I also consumed.

During the time period when I was reading this material, Daniel Pink's "Drive" seemed to become all the rage. People were talking about it at work, people were submitting conference talks about it, and it even seemed to be in the news. As a result, there was plenty of opportunity to speak with others about the ideas and concepts that I was reading about.

I had not read "Drive". At first, this was because I'd already purchased other related material; eventually, I was intentionally avoiding it. I soon noticed a divergence between what I was learning and what people were saying. What people were talking about was the motivation of an individual; what inspired great work from a single person. What I was interested in was not only individual motivation, but the collective. How did these ideas apply to organizations? Autonomy, Mastery, and Purpose felt close, but missed the mark when it came to groups or organizations.

A new role

In January of 2014, I was provided a unique opportunity with Groupon to take on a role focused on our culture within Product Engineering. Up until that point, I'd been building relationships within the department and the broader company, working to effect positive change where possible, and primarily helping teams get better at learning and growing in the service of delivering software. It was time to put pen to paper and articulate the things I'd learned over the years in a concise and actionable manner that could serve as a guideline for my new team.

A simple framework

Autonomy


As it pertains to the individual, autonomy is about having a voice in the decisions that impact one's life and the freedom to choose a different course. As it pertains to the collective, autonomy is the right to self-organize and self-govern. In a corporate environment, autonomy means leaving the decisions about how work is done to those who actually do the work.

Connection


Connection is internal and external. Connection is what makes a community. Internally, organizations need cohesion within and between all teams; one collective, united toward a common cause. External connection is engagement with a community through sincere dialogue and contribution, where bolstering your brand is a side effect, not a primary objective.

Excellence


Excellence is about both personal mastery and the quality of the product we produce. To achieve excellence, teams and individuals must be adequately challenged and be able to see true progress toward a goal they aspire to.

Finally Reading Drive

In February of 2014, I finally read "Drive". Three years into my new area of study, I read the book that essentially inspired it. As it turned out, I'd read almost all of the studies he references. And I'd read a number more that he didn't mention. I agree with Pink's conclusions and I appreciate his ability to articulate them in a way that resonates with so many. Fundamentally, Pink's book is about individual motivators in a knowledge work economy. This is certainly interesting to me, but I am more interested in how people operate optimally as a collective in a complex knowledge work economy.

Autonomy, Connection, and Excellence is obviously similar to Pink's Autonomy, Mastery, and Purpose. Like I said, I agree with him. One could argue (and many have) that this is identical to Pink's framework and I've just substituted in synonyms. But I think the distinction is in the aspect of the collective. Not just what motivates us as individuals, but what motivates us as a group.

I'll post more on this in the future. But I wrote the original paper over 18 months ago and decided it was time to get something out there more publicly.

What are your thoughts?

Creative Collaboration

I had the pleasure of presenting at NDC Oslo last week and the additional privilege of co-presenting a collaboration workshop along with Denise Jacobs and Carl Smith.


In this workshop, we covered Fist to Five voting, 5x7 Prioritization, and Collaboration Contracts. We had around 30 attendees, allowing us to create 4 groups of approximately 8 people each.

After some ice-breakers, groups came up with product ideas by mashing two random words together and using Fist to Five voting to rapidly identify a product idea they could all agree on. This was easier for some groups than others. It was interesting to see the dynamics as some groups discussed each combination prior to voting, some groups created multiple options before voting, and other groups ripped through options and found their product in a matter of minutes (as intended). It is often difficult for us to give up old habits even in pursuit of a better way.
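For readers unfamiliar with Fist to Five, one common formulation (my sketch here, not necessarily the exact rules we used in the workshop) is: each participant holds up zero to five fingers, and the group moves forward only when every vote meets a minimum threshold, often three fingers. A minimal tally helper:

```python
def fist_to_five_consensus(votes, threshold=3):
    """Return True when every vote (0-5 fingers) meets the threshold.

    In one common formulation, anyone voting below the threshold is
    asking for more discussion, so the group does not yet have consensus.
    """
    if not all(0 <= v <= 5 for v in votes):
        raise ValueError("votes must be between 0 and 5 fingers")
    return all(v >= threshold for v in votes)

# A group of five weighing a product idea:
print(fist_to_five_consensus([5, 4, 3, 3, 4]))  # consensus reached
print(fist_to_five_consensus([5, 4, 2, 3, 4]))  # one blocker: keep talking
```

The point of the low-threshold votes isn't to lose; it's to surface the conversation the group still needs to have.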

Next up was brainstorming and prioritizing a list of items that needed to be done in order to launch our new awesome concept at a key conference in only three months. We started with each individual member writing at least two items they thought were critically important to prepare for the conference. We then removed duplicate items for each group and used 5x7 prioritization to come up with the most important items for each group. At the end of the process, teams agreed that the resulting priorities were good, and many were surprised at how easy and equitable the process was.

Finally, each group took their top 4 items and ran collaboration contracts against them. We did this in two passes: running the basic contract, then resolving conflicts. We had one group that ended up with no conflicts. The other groups worked through their conflicts in relatively short order, and the quality of conversation was high throughout. One group realized that even after they resolved the obvious conflicts, they had one individual who was in a decision-making role on all four items. While this is not technically a conflict on a contract, it does indicate an issue. After some additional discussion, they were able to adjust the overall contract to everyone's satisfaction and eliminate the potential bottleneck.

This was our first time delivering this workshop and I thought it went quite well.

I'm planning to add Parallel Thinking to the workshop along with a couple more games to create a solid half-day collaboration tools workshop that can work for teams or groups.

If you're interested in this workshop for your team, let me know. Maybe, if we're lucky, Denise and Carl can come along too.