Jurgen Appelo – Complexity vs Lean, the Big Showdown

Lean software development promotes removing waste as one of its principles. However, complexity science seems to show that waste can have various functions. In complex systems, things that look like waste can actually be a source of stability and innovation. Lean software development preaches “optimize the whole” as a principle, and then translates this to optimization of the value chain. However, I believe that complexity science shows us a value chain is an example of linear thinking, which usually leads to sub-optimization of the whole organization, because the organization is a non-linear complex system.  — Jurgen Appelo

Exactly. This somewhat reflects my own thoughts, and it is something that has been on my mind quite a bit of late, amidst an organization and projects hell-bent on removing not just the optimum amount of waste from a process but all white space from the environment, in pursuit of maximum efficiency at what they already know how to do. (breathe, Brett…)

As I wrote in KM vs LSS vs CPI, too often “improvement” is seen as requiring a single, all-or-nothing approach when, in fact, improvement and optimal performance come from a mix of techniques. Sometimes waste is a hindrance, and sometimes it’s where you find the gold.

 

Thinking in bits (not atoms)

During a break at EMWCon, I participated in a conversation with several people about the relative advantages and disadvantages of requiring people to use wikitext markup in MediaWiki (instead of providing them a visual editor). During the conversation, Lex brought up examples of documents with their content locked up as binary files compared to wiki pages with the text readily available and accessible. I mentioned the idea of “thinking in bits” as part of the conversation.

Reflecting on the conversation later, I realized that I have written here and there about the concept, but don’t really have anything pulling all the thoughts together. So here you go.

I first came across the idea of thinking in bits in Nicholas Negroponte’s 1995 book Being Digital. In the book, Negroponte talks about the limitations, the cost, of moving information around as atoms – paper books, CDs, DVDs, snail mail, you get the idea – and how information would soon be converted from atoms to bits. The immediately obvious implication is that it becomes essentially free to move and share information as bits.

The less obvious, but much more important, implication is that bits change the way you can think about the information. How you can manipulate and repurpose the information. How you can do things that were impossible with the information locked up in atoms. The obvious applications have come to fruition. Email instead of snail mail. Music downloads instead of CDs, and now streaming instead of downloads. The same with video.

And yet…

And yet, the way this digitized information, these bits, is handled is still in many ways tied to the way atoms were handled. Some of this, such as in the music and movie industries, is purely for commercial reasons. Digital rights management systems are deployed so that the company can benefit from the freedom (as in beer) of distributing their content while at the same time restricting the freedom (as in speech) of the consumers of that content. They are shipping in bits, but they are not thinking in bits.

Even from a creative perspective, as opposed to the commercial, this thinking in atoms prevents them from seeing new possibilities for providing engaging and individual experiences to their customers. For example, consider how labels distribute music, how they release the same tracks in the same order on both CD and on services like iTunes or Google Play. This is thinking in atoms at its finest (worst?).

Imagine if they were thinking in bits instead. They could offer an “album” that includes songs from the setlist the band played in your town, or edit the songs at the disc-breaks so they didn’t fade out / fade in. Along those lines, for the individual song downloads they could edit the track so you didn’t catch the introduction to the next song at the end of the song you’re listening to.

The same is true, albeit for different reasons, inside many organizations. Yes, nearly everything is in bits, stored on shared drives, in SharePoint or email, or in whatever system your organization uses to “manage” documents.

And yet….

And yet most of these bits are locked up in digital representations of atoms. We are using bits, but again we are not thinking in bits.
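To make that concrete, here is a minimal sketch (in Python, with hypothetical file names) of how differently the same content yields itself up depending on whether it lives as plain wiki text or as a digital representation of atoms like a .docx file:

```python
import zipfile

# Hypothetical files holding the same report: one as plain wiki
# markup, one as a Word document. Both are bits, but only one of
# them yields its content to any tool that can read text.
with open("report.wiki", encoding="utf-8") as f:
    text = f.read()  # immediately searchable, diffable, reusable

# A .docx is a zip archive of XML parts. The words are in there,
# but wrapped in a container format and a schema that has to be
# unpacked before you can do anything with them.
with zipfile.ZipFile("report.docx") as archive:
    print(archive.namelist())  # e.g. ['word/document.xml', ...]
    raw_xml = archive.read("word/document.xml")
    # Still XML, not text: more parsing stands between you and the words.
```

Neither file is made of atoms, but only the first was created by someone thinking in bits.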

Part of the challenge, of course, is a need to accommodate the lowest common denominator. In the case of many corporate processes, that lowest common denominator is the requirement to print. So the templates and processes are designed based on what is expected in the final, printed outcome. Of course, once something is printed, there isn’t a whole lot you can do with it except read it and manually extract the info you need. If you have the digital file that was printed, you can at least search the content. But this is really just a faster way of “reading” the document to get to the “good part”.

What if, on the other hand, the document (whatever it might be) were designed and created based on the expectation that it would be used primarily in a digital format, with the printed product a secondary feature? Or that you don’t even know what the final format needs to be?

As an example (since I was inspired to write this by a conversation at EMWCon), consider creating your contract proposals as semantic wiki entries. The proposal can be collaboratively developed and reviewed, and when ready can be exported into the end format that you need. This will likely be some sort of MS Office or PDF file that can easily be sent to the potential client, but it could just as easily be shared with them as bits and negotiations conducted against that.

I say “just as easily”. This isn’t to say that work wouldn’t be involved; there would be a lot of work required. Designing, implementing, transitioning, executing. Cultural challenges galore. But, as Lex explained in his story about bikes, cars, and messenger services, the marginal cost of making this change can be far exceeded by the benefits you gain from it.
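For the curious, here is a minimal sketch of what that export step might look like, assuming a MediaWiki instance reachable over HTTP and Pandoc installed locally (the wiki URL and page title below are hypothetical, not from the EMWCon conversation):

```python
import subprocess
import urllib.parse
import urllib.request

# Hypothetical wiki and proposal page; substitute your own.
WIKI_URL = "https://wiki.example.com/index.php"
PAGE = "Proposal:Acme_Corp"

# MediaWiki serves the unrendered source of a page via action=raw.
query = urllib.parse.urlencode({"title": PAGE, "action": "raw"})
with urllib.request.urlopen(f"{WIKI_URL}?{query}") as response:
    wikitext = response.read().decode("utf-8")

with open("proposal.wiki", "w", encoding="utf-8") as f:
    f.write(wikitext)

# Pandoc reads MediaWiki markup and can emit .docx, .odt, .pdf, etc.
# The wiki page stays the single, collaborative source; the Word file
# is just one disposable rendering of it.
subprocess.run(
    ["pandoc", "-f", "mediawiki", "proposal.wiki", "-o", "proposal.docx"],
    check=True,
)
```

The tooling is beside the point; what matters is that the proposal lives as bits that can be reviewed, queried, and re-rendered, with the printable file a secondary artifact.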

 

Organizational forgetting

I wrote the following back in November 2005:

My early days in Knowledge Management included a lot of time developing, deploying, and getting people to use “knowledge repositories.” (At least trying to get people to use them.) A worthwhile endeavor in some regards, but I’ve always had misgivings about the whole idea, at least as it has been implemented in most cases. The cheapness of mass storage these days, and the way we just keep everything, has nagged at this misgiving over the past couple of years.

I finally realized one day that the problem has become not, “How do we remember all this knowledge that we’ve learned?” but rather, “How do we forget all this knowledge we’ve accumulated that we no longer need so we can focus on what we do need?”

That post also included a reference to memory and forgetting in the human mind, taken from the book The Trouble with Tom by Paul Collins:

Memory is a toxin, and its overretention – the constant replaying of the past – is the hallmark of stress disorders and clinical depression. The elimination of memory is a bodily function, like the elimination of urine. Stop urinating and you have renal failure: stop forgetting and you go mad.

I explored this idea a bit further in March 2007, where I added the following to my thinking:

In the context of mastery, especially of something new, it is sometimes hard to know when to forget what you’ve learned. You have to build up a solid foundation of basic knowledge, the things that have to be done. And at some point you start to build up tacit knowledge of what you are trying to master. And this, the tacit knowledge that goes into learning and mastery, is probably the hardest thing to learn how to forget.

Sometimes, though, it is critical to forget what you know so you can continue to improve.

And yet again in June 2009:

I’m at a point now, though, where the project is going through significant changes, almost to the point of being a “new” project. My dilemma: How to “forget” the parts of the old project that are no longer important and start with an “empty mind” to build up the new project without the baggage of the old.

In his book Brain Rules, author John Medina writes, “It’s easy to remember, and easy to forget, but figuring out what to remember and what to forget is not nearly so easy.”

I was reminded of this train of thought today when a colleague shared a link to a TEDx talk by Pablo Martin de Holan titled Managing Organizational Forgetting, based on a paper of the same name published in the MIT Sloan Management Review. If you read my quotes above, I’m sure you understand why this opening paragraph from the paper grabbed my attention (emphasis at the end is mine):

Over the last decade, companies have become increasingly aware of the value of managing their organizational knowledge, and researchers have investigated those processes extensively. Indeed, the ways in which organizations learn and have stocks of knowledge that underlie their capabilities can be a powerful tool in explaining the behavior and competitiveness of companies. Yet something is missing in the current discussions of organizational knowledge: Companies don’t just learn; they also forget.
— Pablo Martin de Holan

There is a lot of great info in the paper (about 12 pages worth), but for now I’ll just mention the two modes of forgetting – Accidental and Intentional. Obviously, you will want to limit the former and maximize the benefit of the latter. At the risk of a giant spoiler (you should still take the time to read the full paper), de Holan summarizes nicely:

Some companies forget the things they need to know, incurring huge costs to replace the lost knowledge. Other organizations can’t forget the things they should, and they remain trapped by the past, relying on uncompetitive technologies, dysfunctional corporate cultures or untenable assumptions about their markets. Successful companies instead are able to move quickly to adapt to rapidly changing environments by being skilled not only at learning, but also at forgetting. Indeed, as companies work to increase their capacity to learn they also need to develop a corresponding ability to forget. Otherwise, they could easily be learning counterproductive knowledge, such as bad habits. The bottom line is that companies need to manage their processes for forgetting as well as for learning, because only then can they deploy their organizational knowledge in the most effective ways for achieving sustained competitive advantage.

I really wish I had come across this paper back in Winter 2004 when it was published. I’ve got a lot of catching up to do.

And for those of you interested in the TEDx talk, here you go.

On knowledge and (organizations as) knowers

Been giving some thought to the concept of knowledge and knowing in the context of organizations and knowledge management. These two paragraphs come from separate trains of thought, but are related so I decided to post them here together. Definitely needs a bit more reflection and development. What do you think?


The terms “tacit” and “explicit” are typically used when referring to different types of knowledge (in the context of knowledge management efforts). It seems to me that “unconscious” and “conscious” might be more appropriate, or at least more accurate, in that explicit knowledge is that of which you are consciously aware, while tacit knowledge is that which lies “below the surface” and which you use without having to be aware you are using it. Need to cross-reference this with what I’ve been learning about Liminal Thinking….

On the subject of “knowers”, could the organization itself be considered a “knower”? Not the sum total of the knowledge that resides in its members or files, but a knowing that emerges from the connections and interactions of that knowledge. If so, how would that change how we approach KM?

Is there a problem here?

Solving a problem that you know has a solution may require knowledge, but it is knowledge that already exists. Unfortunately – or, if you prefer, fortunately – many of the problems that are worth solving, that need to be solved, don’t come with that level of certainty.

In his book How Life Imitates Chess (which, by the way, I highly recommend), Garry Kasparov has this to say about uncertainty:

Knowing a solution is at hand is a huge advantage; it’s like not having a “none of the above” option. Anyone with reasonable competence and adequate resources can solve a puzzle when it is presented as something to be solved. We can skip the subtle evaluations and move directly to plugging in possible solutions until we hit upon a promising one. Uncertainty is far more challenging. Instead of immediately looking for solutions to the crisis, we have to maintain a constant state of asking, “Is there a crisis forming?”

 

Does your organization need a neurologist?

When addressing the idea of tacit knowledge in respect to knowledge management, most descriptions focus on the tacit knowledge IN organizations – that is, the tacit knowledge of the individual members of the organization – and how to capture and share that tacit knowledge. While I believe it is important to understand this tacit knowledge, I’ve always been more attracted to an understanding of the tacit knowledge OF an organization, what it is the organization as a whole ‘knows.’

As with individuals, organizations operate based on the tacit knowledge they possess and their ability to act on that knowledge when needed. In the human brain it is the connections between neurons – and the ability of the brain to reorganize those connections to meet the situation – that make up the intelligence and tacit knowledge of the individual. In organizations, it is the connections between people. (See this post of mine from 2006 for a bit more on this.)

Many years ago, in one of my first ever blog posts, I wrote that “KM is the neuroscience of an organization.” After reading Is Enterprise 2.0 the neuro-organization? a couple of days ago, and a brief discussion with Harold Jarche (@hjarche), I was once again curious.

Here’s a start. (Definitions from Wikipedia)

Neurology: a medical specialty dealing with disorders of the nervous system. Specifically, it deals with the diagnosis and treatment of all categories of disease involving the central, peripheral, and autonomic nervous systems, including their coverings, blood vessels, and all effector tissue, such as muscle. –> OD?

Neuroscience: the scientific study of the nervous system; the scope of neuroscience has broadened to include different approaches used to study the molecular, developmental, structural, functional, evolutionary, computational, and medical aspects of the nervous system. –> KM? IT?

Psychology: the scientific study of human or animal mental functions and behaviors; psychologists attempt to understand the role of mental functions in individual and social behavior, while also exploring underlying physiological and neurological processes. –> OD? Training/Learning?

Psychiatry: the medical specialty devoted to the study and treatment of mental disorders – which include various affective, behavioural, cognitive and perceptual disorders; mental disorders are currently conceptualized as disorders of brain circuits likely caused by developmental processes shaped by a complex interplay of genetics and experience. –> HR?

Way off base? On the right track?

Retaining knowledge in organizations – a contrary view

Yesterday’s #kmers chat focused on the topic “Retaining the Knowledge of People Leaving your Organization.” Quite a bit of discussion around the topic, including questions about whether you should try to capture knowledge from those leaving, how you should do it, etc. etc. Personally, I agree with V Mary Abraham (@vmaryabraham) when she says:

Ideally, move to system of #observable work. Then people disclose info & connections as they work & before they leave.

That way, the knowledge that is shared is in the context of a current action and not just information sitting in a repository somewhere.

This is a question that I – and many others – have wrestled with for many years now. Here is something I originally posted in Sep 2004 on the question. This is an unedited copy of that original post; I may come back later and give it a fresh coat.

– – — — —– ——–

For many years now I’ve read about and been involved in discussions about the impending retirement of baby boomers, the effect this will have on institutional memory, and what can be done about it. Most of my interest in this at the time concerned the impact on the federal government workforce, which will be very hard hit since its retirement age is a bit lower than that of the populace in general.

Though I’ve not yet read it, the book Lost Knowledge by Dave DeLong addresses this problem in great detail (more on the book can be found here, here, and here). A snippet from the book’s website:

Dr. David DeLong, a research fellow at MIT’s AgeLab, has just created the first comprehensive framework to help leaders retain critical organizational knowledge despite an aging workforce and increased turnover among mid-career employees.

Like most discussions of the topic I’ve been involved in, the book seems to focus on the negative aspects of people leaving, and taking their knowledge with them. However, I have been reading James Surowiecki’s The Wisdom of Crowds and think that we may be missing out on an opportunity to actively reinvent the corporate knowledge as we try, probably in vain, to keep the old knowledge around.

Granted, there is some information and there are many processes that must be recorded and retained. This is the basic infrastructure of how an organization functions. But if you simply take the knowledge of people who are leaving and transfer it to the people that are replacing them, you are effectively eliminating the value of the “new blood” coming into the organization. Or, in the words of Surowiecki, you are maintaining homogeneity at the expense of diversity.

Organizational memory, like human memory, can be a stubborn thing to change and often results in the “this is how we’ve always done it” syndrome. An excellent description of memory formation can be found in Tony Buzan’s The Mind Map Book (sorry for the lengthy quote, but it bears repeating in whole):

Every time you have a thought, the biochemical/electromagnetic resistance along the pathway carrying that thought is reduced. It is like trying to clear a path through a forest. The first time is a struggle because you have to fight your way through the undergrowth. The second time you travel that way will be easier because of the clearing you did on your first journey. The more times you travel that path, the less resistance there will be, until, after many repetitions, you have a wide, smooth track which requires little or no clearing. A similar function occurs in your brain: the more you repeat patterns or maps of thought, the less resistance there is to them. Therefore, and of greater significance, repetition in itself increases the probability of repetition (original emphasis). In other words, the more times a ‘mental event’ happens, the more likely it is to happen again.

When you are trying to learn something, this is obviously a good thing. However, the very nature of this learning process makes it more difficult to learn something new, especially if it is very different (“off the beaten path”). By pointing new people down the paths of the people who are retiring, you are ensuring that the well-known paths will continue to thrive and that it will be harder to create new paths through the forest.

That’s fine if your goal is to continue on the path you are on, but it brings to mind an old proverb I saw somewhere: If you don’t change the path you are on, you’ll end up where it takes you.

——– —– — — – –