Basic income is not just about today’s poor, but tomorrow’s (robot-induced) unemployed too.

Another quote / link on basic income, this one from FiveThirtyEight. A long article, worth the read. It’s not just about today’s poor, but tomorrow’s (robot-induced) unemployed too.

Increasingly, technologists envision basic income as a “hack,” or fix, to the system — it offers a way of coping with an economic future dominated by automation, a fallback plan for when most human labor isn’t valued or needed.

“We think there could be a possibility where 95 percent — or a vast majority — of people won’t be able to contribute to the workforce,” said Matt Krisiloff, the manager of Y Combinator’s basic income project. “We need to start preparing for that transformation.”

What would happen if we just gave people money?

The value of a stitch in time

I’ve been reading up on the concept of Universal Basic Income lately, including the book Utopia for Realists. There is a lot to consider with UBI, including one of the main arguments: simply giving money to the poor is more cost-effective, and provides better results, than providing them services. I don’t have the exact figures at hand, but the gist of it is that giving a certain amount of cash directly to people today, to bring them out of poverty, saves a multiple of that amount that you would otherwise have to spend later dealing with the problems caused by poverty.

Too busy to improve.

Not really a “stitch in time”, and not quite a nine-fold savings, but the principle is the same: spend a little bit now so you don’t have to spend more later. But just as with that pithy bromide, it is easy to express the savings of UBI and not so easy to get people to accept it to the point that they are willing to act on it.

I have a similar challenge in my day job helping people understand how to use an Enterprise Social Network to be more effective, to spend a little bit of time now instead of a lot of time later. As an outsider to their process, it is obvious to me what they can do to improve. As a part of the system they are using, it is just as obvious to them that they don’t have time to change how they are doing things even if intellectually they understand the value in it.

Or not.





Thinking in bits (not atoms)

During a break at EMWCon, I participated in a conversation with several people about the relative advantages and disadvantages of requiring people to use wikitext markup in MediaWiki (instead of providing them a visual editor). During the conversation, Lex brought up examples of documents with their content locked up as binary files compared to wiki pages with the text readily available and accessible. I mentioned the idea of “thinking in bits” as part of the conversation.

Reflecting on the conversation later, I realized that I have written here and there about the concept, but don’t really have anything pulling all the thoughts together. So here you go.

I first came across the idea of thinking in bits in Nicholas Negroponte’s 1995 book Being Digital. In the book, Negroponte talks about the limitations, the cost, of moving information around as atoms – paper books, CDs, DVDs, snail mail, you get the idea – and how information would soon be converted from atoms to bits. The immediately obvious implication is that it now becomes essentially free to move and share information as bits.

The less obvious, but much more important, implication is that bits change the way you can think about the information. How you can manipulate and repurpose the information. How you can do things that were impossible with the information locked up in atoms. The obvious applications have come to fruition. Email instead of snail mail. Music downloads instead of CDs, and now streaming instead of downloads. The same with video.

And yet…

And yet, the way this digitized information, these bits, is handled is still in many ways tied to the way atoms were handled. Some of this, such as in the music and movie industries, is purely for commercial reasons. Digital rights management systems are deployed so that the company can benefit from the freedom (as in beer) of distributing their content while at the same time restricting the freedom (as in speech) of the consumers of that content. They are shipping in bits, but they are not thinking in bits.

Even from a creative perspective, as opposed to the commercial, this thinking in atoms prevents them from seeing new possibilities for providing engaging and individual experiences to their customers. For example, consider how labels distribute music, how they release the same tracks in the same order on both CD and on services like iTunes or Google Play. This is thinking in atoms at its finest (worst?).

Imagine if they were thinking in bits instead. They could offer an “album” that includes songs from the setlist the band played in your town, or edit the songs at the disc-breaks so they didn’t fade out / fade in. Along those lines, for individual song downloads they could edit the track so you didn’t catch the introduction to the next song at the end of the song you’re listening to.

The same is true, albeit for different reasons, inside many organizations. Yes, nearly everything is in bits, stored on shared drives, in SharePoint or email, or whatever system your organization uses to “manage” documents.

And yet….

And yet most of these bits are locked up in digital representations of atoms. We are using bits, but again we are not thinking in bits.

Part of the challenge, of course, is the need to accommodate the lowest common denominator. In the case of many corporate processes, that lowest common denominator is the requirement to print. So the templates and processes are designed around what is expected in the final, printed outcome. Of course, once something is printed, there isn’t a whole lot you can do with it except read it and manually extract the info you need. If you have the digital file that was printed, you can at least search the content. But that is really just a faster way of “reading” the document to get to the “good part”.

What if, on the other hand, the document (whatever it might be) was designed and created based on the expectation that it would be used primarily in a digital format, with the printed product a secondary feature? Or that you don’t even know what the final format needs to be?

As an example (since I was inspired to write this by a conversation at EMWCon), consider creating your contract proposals as semantic wiki entries. The proposal can be collaboratively developed and reviewed, and when ready can be exported into the end format that you need. This will likely be some sort of MS Office or PDF file that can be easily sent to the potential client, but it could just as easily be shared with them as bits, with negotiations conducted against that.
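To make the contrast concrete, here is a minimal sketch of what keeping a proposal in wiki markup buys you: once the data lives in template fields rather than a binary file, a few lines of code can pull it out and reassemble it into whatever end format you need. This is a hand-rolled illustration in Python, not any particular MediaWiki API, and the {{Proposal}} template and its field names are invented for the example.

```python
import re

def parse_template_fields(wikitext: str) -> dict:
    """Extract |name=value fields from a simple, flat wiki template call.

    Good enough for the hypothetical {{Proposal}} template below; nested
    templates or links in values would need a real wikitext parser.
    """
    fields = {}
    # Match "|name = value" pairs, where a value runs until the next pipe
    # or the closing braces.
    for name, value in re.findall(r"\|\s*([^=|}]+?)\s*=\s*([^|}]*)", wikitext):
        fields[name.strip()] = value.strip()
    return fields

# A made-up proposal page, stored as bits rather than a binary document.
page = """{{Proposal
|client = Acme Corp
|value = 150000
|status = draft
}}"""

fields = parse_template_fields(page)
print(fields["client"])  # the same structured data can feed a PDF, a web page, or a report
```

The point is not the parsing itself but that the proposal’s content is now data: the same fields can be queried, aggregated across proposals, or rendered into the client-facing format as a last step instead of a first one.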

I say “just as easily”. This isn’t to say that no work would be involved; there would be a lot of work required: designing, implementing, transitioning, executing. Cultural challenges galore. But, as Lex explained in his story about bikes, cars, and messenger services, the marginal cost of making this change can be far exceeded by the benefits you gain from it.


EMWCon 2016 – some notes (create camp)


Spent some time this morning discussing various ideas for projects to work on. Including:

  • HTML2Wiki
  • Semantic Form Themes
  • Mermaid
  • Make site faster
  • Extension certification
    • AD / Vagrant roles
    • BPM Setup for an extension cert service
  • Extension manager
    • Extension interdependency management
  • Extension screenshots and working links to examples on
  • Reification / provenance in SMW
  • Semantic forms validation

Most are somewhat technical (definitely beyond my skill level with MW), but many of them do require some non-technical participation. And some are longer-term ideas (screenshots of extensions, for example) that can continue to be worked on over time.

The one that most appealed to me was Lex’s presentation on creating an “Extension Certification” process for MediaWiki extensions. It would tie in with the potential Enterprise MediaWiki Foundation (EMF?) that we discussed on Day 1.

The basic process is straightforward (developer creates extension, runs acceptance tests, submits to EMF for review, EMF certifies), but the implementation is a bit less so. Quite involved on the developer side, somewhat automated at the review level. The end result would be the “EMF Seal of Approval” for the extension, showing which core versions the extension has been tested against.

This type of process would go a long way for Enterprise users, especially when trying to convince management, IA, etc. that an extension can be trusted and presents (relatively) low risk to implement.

You can keep up with progress on the EMWCon 2016 page.