Organizational Entropy, the Problem of Choice, and the Effect of Organizational Structure on Performance (part 1 of 4)

Have you ever wondered why your company is “stupid,” and why it just seems to get stupider as it grows? Shannon Entropy and Productivity: Why Big Organizations Can Seem Stupid (paper) by Dr. Richard Janow sheds some light on this phenomenon and shows that organizations really do get stupid as they get big (though I would personally call it something more akin to “scatter-brained” than “stupid”). Here’s the description of the paper from Dr. Janow’s CV:

Accepted for publication in Journal of the Wash D. C. National Academy of Sciences.

The first application of Shannon-like entropy to decision-making in organizations suggests that there is a fundamental upper limit on per capita decision efficiency that decreases as organizations grow larger. The result can be impaired efficiency in utilizing intellectual capital (knowledge workers) unless re-structuring is consciously designed to limit entropy effects. Quantitative tools that can help manage organizational entropy and productivity in business firms and in command and control applications are suggested (patent pending).

There are many interesting implications of the analysis. For instance, “the maximum per capita management decision flow rate … actually shrinks as the number of decision-makers in the network grows.” This is somewhat counterintuitive at first glance (you might think that with more people able to make decisions, those decisions would be made faster). But this is exactly the “stupidity” Janow mentions in his title: as the organization grows, it has a harder time keeping up with the decisions required of it, and so appears a bit “slow.”
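Janow's actual derivation is in the paper; as a back-of-the-envelope illustration only (my own toy model, not his formulas), suppose an organization has a fixed total communication capacity, and routing each decision among N equally likely decision-makers costs roughly the Shannon entropy of that choice, log2(N) bits. Then the decision rate per person falls as N grows, which matches the quoted claim in spirit:

```python
import math

def per_capita_decision_rate(n_deciders: int, capacity_bits: float = 1000.0) -> float:
    """Toy model (not Janow's actual formula): with a fixed communication
    capacity in bits per unit time, each decision routed among n equally
    likely decision-makers costs about log2(n) bits, so the per-person
    decision rate shrinks as n grows."""
    bits_per_decision = math.log2(n_deciders)      # Shannon entropy of a uniform choice
    decisions_per_time = capacity_bits / bits_per_decision
    return decisions_per_time / n_deciders         # decisions per person per unit time

for n in (2, 10, 100, 1000):
    print(n, round(per_capita_decision_rate(n), 2))
# 2 → 500.0, 10 → 30.1, 100 → 1.51, 1000 → 0.1
```

The point of the sketch is only the direction of the effect: even with generous assumptions, the per-capita rate drops steeply as the pool of decision-makers grows.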

The “proof” of all this involves quite a bit of detailed math (Dr. Janow is, after all, a physics Ph.D.), but the results are fairly straightforward. In addition to describing the implications, Dr. Janow provides some concrete ideas on how to manage organizational entropy. The best one-line recommendation, from the abstract, is “[t]he smallest size organization that has the resources to handle a task is preferable.” In practical terms, we see this every day. Large companies are broken up into functional units, business units, corporate staff, etc. But even doing this can create its own set of problems.

For instance, say you have a large organization with a great deal of organizational (Janow) entropy. You restructure it so that, instead of a small number of large, high-entropy sub-organizations that communicate widely throughout the organization, you have a large number of small, low-entropy sub-organizations that each communicate with the rest of the organization through a single point. You have improved the performance of the new small sub-organizations, but you have also created a whole new layer of highly entropic communication between them. And you must now deal with the prospect of overload at each single communication point, with the attendant possibility of lost communication between sub-organizations.
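The trade-off above can be made concrete with a simple channel count (my own illustration, with hypothetical function names; Janow's paper works in entropy terms rather than raw link counts). Splitting one fully connected organization into fully connected sub-organizations, each with a single liaison, slashes the number of direct channels, but every inter-group message now funnels through the liaison mesh:

```python
def full_mesh_links(n: int) -> int:
    """Point-to-point channels if all n members talk to each other directly."""
    return n * (n - 1) // 2

def restructured_links(n_groups: int, group_size: int) -> int:
    """Channels after splitting into fully connected sub-orgs, each with a
    single liaison point, plus a full mesh among the liaisons."""
    internal = n_groups * full_mesh_links(group_size)
    between = full_mesh_links(n_groups)
    return internal + between

print(full_mesh_links(100))        # one org of 100: 4950 channels
print(restructured_links(10, 10))  # ten sub-orgs of 10: 495 channels
```

A tenfold reduction in channels, but the 45 liaison-to-liaison links now carry all inter-group traffic, which is exactly the overload risk described above.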

Janow acknowledges that many of his recommendations have been used intuitively by organizations, but believes the quantitative formulation he introduces in the paper “can be the basis for modeling tools that will make the tuning of organizations for high performance a much less chancy and ad hoc process.”

Author: gBRETTmiller

I'm not lost, I'm wondering