Are technology giants becoming too big to fail?

The 2008 financial crisis gave birth to the notion of too big to fail financial institutions. As IT is quickly becoming a 24/7 services utility, with key industry segments dominated by a few very large companies, the notion of too big to fail may be applicable here too. The success of these large platform providers implies that our personal and professional lives rely increasingly on their functioning, availability and well-being.

What would be the effect on the world economy in a few years’ time if one of them were to run into serious trouble? Possibly more impactful than the sudden collapse of Lehman Brothers in 2008. Assessing and mitigating the risks related to serious failures of ‘public IT utilities’ is probably not a priority for authorities and institutions today. We may have to witness a big ‘accident’ first before adequate measures are taken.


The 2008 financial crisis gave birth to the notion of too big to fail financial institutions. We were confronted with (or were reminded of) the fact that parts of these institutions are in fact utility providers. They supply us with essential services, and for this reason the proper functioning of the financial system is critical well beyond the industry’s primary role. In response to the crisis, governments strengthened their role in regulating the industry to limit risks.

If we consider the IT industry, leading companies have benefited from economies of scale and networking effects over the last three decades as well. This has led to a few companies dominating certain industry segments. Regulators across the world have sometimes fought against the abuse of dominant market positions but have not been successful in reversing them. By the time the US government and the European Union completed their antitrust reviews of Microsoft, the company had already come to dominate the PC market. Microsoft’s subsequent relative decline came about through market forces, not government intervention.

Despite such concentrations of power, IT systems have been too decentralised in the way they run to give cause for concern regarding the stability of economic systems. For example, had Microsoft gone bankrupt in 2000, all computers running its software would have continued operating the next day. There would have been no automatic continuation of the Microsoft product lines, but the economic system would not have ground to a halt. Other companies would have bought parts of Microsoft to make a business out of supporting the existing software products, and over time customers would have migrated to other products and vendors.

The dependencies are changing structurally now. The IT industry is quickly turning into a 24/7 utility services industry that is also characterised by scale and networking effects we have never seen before.

For example, the cloud we refer to as public cloud -implicitly underlining its utilitarian function- is already dominated by a small number of providers. They provide higher quality services at lower unit cost, given their enormous, global customer base. Each of them spends on capital expenditure roughly the cost of building an aircraft carrier annually, raising the barriers to entering these markets. They also benefit from significant networking effects in multiple dimensions. As more developers and companies work with their platforms, they attract more customers and end users in a mutually self-reinforcing manner. New technologies (AI, big data, APIs, OSS) provide additional networking effects.

The success of these platform providers implies that our personal and professional lives rely increasingly on their functioning, availability and well-being. What would be the effect on the world economy in a few years’ time if one of them were to run into serious trouble? Possibly more impactful than the sudden collapse of Lehman Brothers in 2008.

In the 1990s Bell Labs worked on visualising the internet. This picture is an iconic representation of early international data flows. Economic (inter)dependencies have grown exponentially since then.


The increased speed of today’s technology adoption curves, combined with the knock-on effects of the aforementioned economics, is complicating matters for authorities. The methods of the regulatory authorities are not in sync with the emergence of new “winner takes all” opportunities and would nowadays be too slow in evaluating a company once they have identified that such a situation is unfolding. Furthermore, a new dimension has entered the equation, one which authorities should take into account: the dependence of entire economic systems on the continuous, online availability of IT infrastructures and services.

The markets for online IT utilities will continue to grow in the foreseeable future and it is hard to imagine that these giga-scale companies -or their services- will fail in the near future. It is equally likely that authorities will remain focused on other priorities for quite some time: geopolitical and financial instability, economic slowdown, terrorism and refugee crises, climate change… An assessment of the risks related to serious failures of ‘public IT utilities’ is therefore likely to remain under the radar of authorities and institutions. We may have to witness a big ‘accident’ first before adequate measures are taken.


Do not focus on the competition, focus on the customer

Three years ago Jeff Bezos bought the Washington Post, which sought help from the Amazon founder because of his knowledge of the internet. Recently, New York Magazine published an overview of the changes taking place at the Post, with valuable insights and lessons for organisations transitioning to a more digital future!

Notwithstanding several visible successes, the Post still has to find a profitable and sustainable new business model. But it has found a wealthy owner to fund the expedition and transition, and it is not wasting a good crisis. If you are still making enough money in the old ways, you had better use some of it to fund the transition and to explore and define your digital future. This post provides some advice on how to do that.


In 2013 Jeff Bezos bought the Washington Post, a company that had become synonymous with the newspaper industry’s decline. It sought help from the Amazon founder because of his knowledge about the internet. In an earlier post we wrote about some of the changes and contributions Jeff Bezos had made. Last week, New York Magazine published an overview of the changes that are taking place at the Post - with valuable insights and lessons for organisations in other industries transitioning to a more digital future! 

Here are some examples:

  • Start right away - From the very beginning, new employees were hired and added to the digital team and significant investments in new projects were approved - after years of cuts and layoffs. These tangible investments signalled the longer term commitment to crafting a new business model and communicated the priority to provide “more freedom to innovate and take more risk”. They also stimulated people to believe in and commit to a new vision and a new course.
     
  • Change the scene - Earlier this year, the Post moved out of a former printing plant and into a new office “that resembles a tech start up” where “engineers and data scientists sit alongside journalists to integrate emerging technologies into everything the Post does…” Previously, the digital team was relegated to a separate building. Now, large screens on the wall display the typical TV news channels alongside the company’s new focus and future: the digital online version of the Post as well as key data and analytics. The result: “we view ourselves as a technology company as well as a media company”.
     
  • Add, do not just replace - Enriching the existing team with technology experts and turning into an organisation that is part technology company does not mean that the existing industry expertise is worthless. On the contrary, deep domain knowledge combined with new technology is key to success. At the Post it is clear that Jeff Bezos respects and builds upon the media expertise of the newspaper’s staff. He does not meddle in editorial decisions and he has kept Marty Baron, a fiercely independent executive editor.
     
  • Focus on the user - A recurring theme in the examples given in the article can be summarised as: “don’t focus on the competition, focus on the reader”. The key here is not just to pay lip service but to forge genuine commitment. One example from the article: readers are moving over to social media for their news. Currently the Post is the only major newspaper to publish all of its digital content directly on Facebook’s Instant Articles - at the risk of cannibalising its subscription business.
     
  • Experiment and learn - To experiment and to “fail successfully” has become the norm. The article provides some examples and describes how Bezos drives the transformation through, among other things, biweekly conference calls with the senior executives. Data is collected about many aspects of the business and the user interaction. Such data is “at the heart of virtually every strategy discussion”.
     
  • Use IT to add value - The organisation is collaborating in different ways with tech companies. Together with Google it is developing differentiating technologies to create a superior digital user interface and user experience. The way readers engage with the digital content is tested and measured in great detail, just as the competition does, but the Post has developed additional tools focused on improving several industry-specific drivers of success for its digital services.
     
  • Transform, do not just translate - Users continue to change the way they consume news; they adopt new devices, platforms and distribution channels, and develop new habits. New technologies emerge and new competitors pop up and challenge the status quo. Therefore, just translating or extrapolating the offline model to explore and create feasible digital opportunities will not be good enough: “We need to think big and lean into the future”.
The Post’s digital version is tailored to the needs and habits of the online reader


  • Figure out and commit to new economics - The digital arena works with very different economics. Under Bezos, the Post has chosen to aim for a larger audience and to make less money per reader. It is important to choose and commit: being stuck in the middle will kill you as the digital arena is quite ruthless in many ways. The economics of the digital Post are very different from the printed edition’s business.
     
  • Develop new lines of business - Technology offers the Post entirely new opportunities. Today, it is generating revenues by licensing several of its in-house developed digital capabilities to other media companies. A comparison seems obvious: Amazon started Amazon Web Services (AWS), selling its data centre capacity and licensing the tools it was using in its core business. Today AWS is Amazon’s most profitable business.

For the Post, the jury is still out: “Despite all these experiments, the Post has yet to find a breakout business model.” The article states accurately: “the paper is succeeding in large part because of a very old-media tradition: the support of a wealthy owner”. Perhaps that is not so very different from your organisation, if you are still making enough money in the old ways and are using it to fund the transition to digital. Or perhaps your position resembles the Post’s? Then it is time to not waste a good crisis and copy some of its practices!


Silicon is disrupting more and more industries

For decades, a characteristic of the Wintel-dominated PC industry has been that hardware was a non-differentiating attribute of a computer. As an example, take the computer CPU: Intel delivered continuous improvements in performance, but, given that almost everyone used the same chips, hardware was a commodity in terms of differentiation. 

The world has totally changed. The PC market is declining and has been eclipsed by mobile devices. Sales volumes have shifted and so have our usage patterns. In this new mobile world, differentiated chip capabilities have become extremely important. To develop new areas of competitive and profitable advantage, organisations must master the interplay and integration of software, services and hardware capabilities.


A strong characteristic of the Wintel-dominated PC industry has been that hardware was a non-differentiating attribute of a computer. The computer CPU is a perfect illustration: Intel delivered continuous improvements in performance, but, given that almost everyone used the same chips, hardware was a commodity in terms of differentiation. 

The world however has totally changed. As discussed in an earlier post, the PC market is declining in absolute terms and has been totally eclipsed by mobile computing devices in relative terms. Not only have sales volumes shifted, so have our usage patterns. We spend more and more time using our mobile devices while static devices like PCs and TVs are getting marginalised. In this new world, differentiated chip capabilities have become extremely important.

Smartphones and internet are eating tv-time in the US. Source: Nielsen


To develop new areas of competitive and profitable advantage, organisations must master the interplay and integration of software, services and hardware capabilities. Furthermore, those organisations that are able to create platforms will be able to charge a tax on other organisations that use these platforms.


The chip industry has undergone two bifurcations due to the emergence of the mobile era. The first bifurcation occurred when Apple identified mobile computing devices as a huge growth opportunity with disruptive potential. Apple quickly realised that in mobile computing, performance per watt would be much more important than raw chip performance:

  • Long battery life is required if users are to give preference to the mobile device over the static PC for many more tasks than just telephony, music and email.
     
  • Low power consumption goes along with smaller battery capacities and sizes that in turn allow thin and light form factors.
     
  • Long periods of peak performance require energy efficiency: an inefficient chip quickly heats up when working hard and is consequently forced to throttle its own performance soon after starting a heavy workload.
     
  • Low power consumption means less heat generation, allowing the elimination of cooling fans. Eliminating cooling fans yields multiple benefits: thinner and lighter designs, less battery drain and the elimination of background noise - also important for voice-activated interaction.

The second bifurcation occurred due to the increasing variety of mobile appliances and use cases. This increased the demand for differentiation in hardware capabilities to power different user experiences. One size would not fit all, because the requirements are often impossible to combine.

This implies that in certain industries you must master yet another area - hardware technology - if you want to lead in the digital transition and not be left behind to divide the diminishing margins of the late majority. What are the developments in computer chip technology that have enabled these shifts?

  • Intel missed the boat in mobile CPUs and everyone in mobile is using ARM-based designs instead. Intel designs and manufactures chips, but ARM only licenses its designs. Leading mobile chip companies thus have more degrees of freedom than PC manufacturers had: they can adapt the ARM designs and can also select which chip manufacturer will produce their CPUs. These differences lead to chips with significantly differentiated characteristics and performance. Furthermore, the decoupling of design and manufacturing has enabled the development of SoCs.
     
  • Mobile devices combine many key functions on a single chip, a so-called System on a Chip (SoC). These functions include the basic CPU, GPU and memory components of a computer but may also include GPS, gyroscopes, accelerometers, compasses, and image and sound processing capabilities. Combining these functions on one SoC improves energy efficiency considerably, saves space and offers attractive (new) functional possibilities. This is another area of significant hardware differentiation for leading manufacturers, as the development of SoCs is complex and capital intensive. And the list of new functions continues to grow: biosensor data processing, barometers, advanced image signal processing, secure enclaves...

In the smartphone industry the competition is fierce. Investments in hardware capabilities are a critical component -next to innovations in software, services and platforms- to realise a price premium in the market, make a profit and escape the intensifying, head-on battle with a large number of competitors.

Smartphone market by price segment. Source: Sony; Sony Investment Day, June 2016


To get a taste of what is driving innovation in chip design and how this has an impact on competitive positions in other industries, we must look at some of the developments in chip capabilities and their applications.

Image quality - Nowadays photos and video clips are captured and shared using mobile phones. As they say, the best camera is the one you have with you. The quality of the photos and video taken by the best mobile phones has become extremely good, despite the physical limitations imposed on the camera lens and image sensor. This quality is due to exponential advancements in the image signal processing part of the mobile SoCs. Many industry reviewers in 2015 found that the iPhone 6s could shoot better 4K video than digital single lens reflex cameras costing 4,000 USD.

There are three key effects of having a good quality camera everywhere you go and being able to instantly share what you shoot with other people:

  1. Disruption among camera manufacturers: With the introduction of the digital camera the film camera market started to decline and within ten years it had disappeared. Today Eastman Kodak exists only as a case study in business schools. Next, mobile phones slashed the market for digital compact cameras (point-and-shoot cameras) in roughly five years. With their volumes and investments mobile phone manufacturers continue to pressure producers of the more expensive digital cameras.
     
  2. Disruption in the media industry: at the peak of the film camera industry in 1999, consumers took around 80 billion photos. In 2015, on Facebook alone, about 730 billion photos were shared. If we add Snapchat, WhatsApp and other social messaging platforms, Ben Evans estimates that 2-3 trillion photos were shared in 2015. In other words, more photos (and videos!) were shared via the mobile phone in 2015 than were taken on film in the entire history of the analogue camera business - see the quick arithmetic below the chart. This has been instrumental to the shift in media preference that continues to disrupt traditional media companies.
     
  3. Disruption in the media production industry: in 2015, the Swiss TV news station Léman Bleu decided to use iPhones exclusively to shoot its reports. A growing number of TV commercials are being shot with iPhones, and one movie at Sundance, Tangerine, was shot with two iPhones. Newspapers, being disrupted themselves by IT, are firing their photographers and giving iPhones to their journalists, expecting them to both write and take photos.
The development of built-in lens and interchangeable lens camera sales; in millions of units. Source: Thomas Stirr (CIPA data)

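A quick check of the magnitudes cited above (the 2015 totals are the figures and Ben Evans estimate quoted earlier):

```python
film_peak_1999 = 80e9        # photos taken on film at the 1999 peak
facebook_2015 = 730e9        # photos shared on Facebook alone in 2015
all_sharing_2015 = 2.5e12    # midpoint of Ben Evans's 2-3 trillion estimate

print(f"Facebook alone vs film peak: {facebook_2015 / film_peak_1999:.1f}x")  # ~9x
print(f"All sharing vs film peak: {all_sharing_2015 / film_peak_1999:.0f}x")  # ~31x
```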

Security - As users have shifted to mobile first, organisations must follow. Mobile devices are quickly becoming the dominant user interface, for example in banking and retail. In certain markets in Asia and Africa, mobile has allowed users and organisations to skip building traditional distribution channels altogether. In other markets, it has a significant impact on brands, branches and supply chains.

The number of bank transactions in Britain per channel/device over time. Source: British Bankers’ Association.


Cybercrime is growing equally rapidly, and mobile devices are inherently more likely to be lost or stolen. This has led to the need to lock and unlock the device more securely and to encrypt all data on it. As ease of use is also a key feature in driving adoption, mobile security isn’t just a matter of software measures anymore: user requirements have also translated into hardware requirements.

Biometric authentication such as fingerprint scanning is making inroads. A safe and performant implementation of such a method requires specialised silicon that enables the safekeeping of the user’s biometric data in a secure enclave within the chip. Encrypting all data on the device without a noticeable performance penalty also requires expertise in hardware, in particular in the storage controller, the piece of silicon that manages the storage chips on the device.
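
To make concrete what the storage controller does in silicon, here is the same idea sketched in software with Python’s cryptography package - a software analogue only: a real device encrypts per block, at line rate, with keys held in hardware rather than in application memory.

```python
from cryptography.fernet import Fernet

# On a device, the key would be derived from hardware-held secrets
# inside the secure enclave; here we simply generate one.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"contacts, photos, messages..."
stored = cipher.encrypt(plaintext)    # what actually lands on flash storage
restored = cipher.decrypt(stored)     # decryption is transparent on read
assert restored == plaintext
```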

The secure authentication facility on a mobile device also allows the device to support secure and frictionless payment authorisation. Apple Pay has been successfully rolled out in the retail channel in a number of countries and Apple has recently announced that the payment facility will also support payment in e-commerce websites. This will create a level playing field in e-commerce with Amazon’s 1-click shopping and drastically reduce the number of people who initiate the payment process in an e-commerce site but abandon it before completion.

There are other innovations, also enabled by hardware developments, whose impact on organisations and entire industries is not yet visible but relatively easy to envision:

Artificial intelligence - The first manifestations of artificial intelligence on a mobile device are visible and drive specific hardware requirements in a variety of ways:

  • First of all, the deep learning algorithms used in image classification and search, speech-to-text translation and other such functions run both on the mobile device and in the data centre. In both places specific silicon -efficient GPUs or specialised AI chips- is required.
     
  • Secondly, the user interface to intelligent agents is (often) voice. For a device to listen continuously for its summons, specific silicon with tailor-made sound processing capabilities is required, both to limit battery drain and to carry out sound signal processing (filtering noise, recognising a specific voice, etc.) - see the sketch below this list.
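
To make the battery argument concrete, here is a toy sketch (ours, not any vendor’s implementation) of the standard two-stage pattern: a cheap always-on gate that only wakes the heavy recogniser when there is something to hear. The function names, threshold and synthetic audio are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000                       # typical speech sampling rate
FRAME_LEN = SAMPLE_RATE * 30 // 1000       # 30 ms analysis frames
ENERGY_THRESHOLD = 0.01                    # illustrative gating level

def frames(signal, frame_len):
    """Split an audio signal into fixed-size frames."""
    n = len(signal) // frame_len
    return signal[: n * frame_len].reshape(n, frame_len)

def cheap_gate(frame):
    """Low-cost check: mean energy of the frame. On real devices this
    stage runs on dedicated low-power silicon, not the main CPU."""
    return np.mean(frame ** 2) > ENERGY_THRESHOLD

def expensive_recognizer(frame):
    """Placeholder for the heavy neural-network recogniser that should
    only wake up when the cheap gate fires."""
    return "hypothetical wake-word model invoked"

# Synthetic audio: two seconds of near-silence with one loud burst.
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.01, SAMPLE_RATE * 2)
audio[SAMPLE_RATE:SAMPLE_RATE + FRAME_LEN] += 0.5

for i, frame in enumerate(frames(audio, FRAME_LEN)):
    if cheap_gate(frame):
        print(f"frame {i}: {expensive_recognizer(frame)}")
```

On a phone the cheap gate runs on an always-on DSP drawing milliwatts; the main processor only powers up when the gate fires.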

Biosensor data collection - Health functions are also becoming prevalent on mobile devices. Such functions require specialised silicon that enables the continuous collection of data with a minimal amount of current drain.

Virtual and Augmented Reality - This functionality requires very high graphical processing power. Delivering such capabilities within the battery and thermal limits of a mobile device is only possible with leading chip designs. Virtual and augmented reality will not only play a role in gaming, retail and other consumer businesses. Professional training, professional services, field engineers and the military will all be impacted by it.

The tremendous innovation in hardware is a key pillar of innovation and differentiation in mobile devices. These innovations do not stop at the borders of the IT industry. As IT becomes a large component of other industries, chip innovation will have a major impact on more traditional industries as well.


... and more disruption to the Wintel ecosystem from Google

The Q1 figures show how the PC industry continues to struggle. Besides the shrinking numbers, news focused on the fact that Chromebooks outsold Macs in the US for the first time. Although comparisons were made with Apple computers, the real victim is the Windows PC. The rise of the Chromebook is actually a case in point of Harvard professor Clayton Christensen’s theory of disruptive innovation. With the recent announcement that Chromebooks will soon be able to run Android apps, further disruption is inevitable.

In an earlier post we described the decline of the Windows PC as part of a larger phenomenon: the decline of the Wintel ecosystem. Google is now throwing a very compelling business case in the laps of organisations that must, in any case, prepare for an accelerated decline of the Wintel empire!


The PC share of the market of programmable devices (computers, smartphones, tablets) has evaporated. Q1 2016 industry data from IDC Research illustrate how most PC manufacturers continue to struggle. Last month Gartner put some of these numbers into perspective:

  • Over the last five years, global shipments of traditional desktops and laptops have dropped by one third, and sales value by 44%.
  • With an oversaturated market and with falling average selling prices, the only profitable market segments are found in high end niches, like gaming.

The mainstream news focused mainly on the fact that Chromebooks outsold Macs for the first time in the US - something that appeals to a broad audience as another battle in the ongoing rivalry between Google and Apple. But others look at it from a different angle. This week, Charles Arthur updated his perspective that it's a case in point of low-end disruption. It's all about Google versus Microsoft.

Harvard professor Clayton M. Christensen developed the theory of disruptive innovation. At first, new good-enough products enter the market and replace the well-developed, more expensive products that overserve the low-end segments. Then, as the low-end products improve, they capture a larger portion of the market and drive the higher priced well-developed products (ever) further upmarket. The incumbents initially benefit from higher unit margins, but their sales volumes shrink and overall profitability gets hurt. Eventually the newer lower-cost players dominate the disrupted market.
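
The mechanics are easy to see in a toy simulation - a minimal sketch with illustrative numbers (not market data), assuming the entrant improves faster than mass-market demand grows:

```python
market_needs, entrant, incumbent = 100.0, 60.0, 140.0

for year in range(10):
    status = ("entrant is now good enough for the mass market"
              if entrant >= market_needs else "incumbent still safe")
    print(f"year {year}: needs={market_needs:5.0f} entrant={entrant:5.0f} "
          f"incumbent={incumbent:5.0f} -> {status}")
    market_needs *= 1.05   # what the mass market demands grows slowly
    entrant *= 1.15        # the 'good enough' product improves fast
    incumbent *= 1.08      # the incumbent improves too, but flees upmarket
```

With these numbers the crossover happens around year six: the entrant overtakes what the mass market needs, while the incumbent’s extra performance is overserving that market entirely.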

Christensen's disruption theory has proven robust and applicable to many industries. The personal computer itself was a disruptive innovation. A decade ago, the smartphone disrupted the PC market. Five years ago, the Chromebook had all the characteristics of yet another disruptive innovation in this market. In his post, Charles Arthur isolated the development of Windows PCs specifically, clarifying that this disruption has mostly impacted Microsoft. Some numbers: in the US market in Q1 2016, almost 10 million Windows PCs were sold, versus 2 million Chromebooks.

Charles Arthur combined the numbers from IDC Research with the figures from the largest PC manufacturers to estimate the revenue development of Windows PCs.

In Christensen's theory, the good-enough product improves and continues to gain market share. Last month's announcement provides guidance and evidence: Chromebooks will soon be able to run Android apps. In this way, more than two million applications will become available, among them Microsoft Office. This will certainly attract new customers, in particular in the enterprise market - Microsoft's home turf.

Chromebooks are built with advantages that carry more weight in organisations. Their low purchase price is only the first element of a very appealing business case; there are additional savings in licences and effortless support and maintenance, and they have a robust security architecture. This is why Chromebooks have become the preferred choice in the US education sector.

Furthermore, organisations can realise significant additional benefits and savings in other areas -storage, file systems, upgrades, projects, user support- when Chromebooks are combined with other Google cloud services. In a recent interview, Microsoft's Chairman John Thompson stated that it's "inevitable that part of our business will be under continued pressure." He also reflected on what happened to AT&T when the world shifted to mobile: "What's the likelihood that could happen with on-prem versus cloud? That in three years, we look up and it's gone?"

In an earlier post we described the decline of the Windows PC as part of a larger phenomenon: the decline of the Wintel ecosystem. It's evident that Google has thrown a very compelling business case into the laps of organisational decision makers who must, in any case, prepare for an accelerated decline of the Wintel empire!


Apple WWDC & Google I/O drive new commandments for the digital age

The Apple and Google developer conferences are behind us. News from these events illustrates that we live in exciting times. Despite some unsuccessful bets, all major technology companies are showing impressive gains on an annual basis. Consumers benefit from new products and features, the development community from enormous investments in robust public cloud platforms, and organisations from leveraging new ecosystems.

When we look at the existing technologies and practices in the average organisation and compare it to the new realities, it looks like many are riding on a steam engine train whereas the technology world has boarded a TGV. Furthermore, the trains have left the platform and the distance between the two is growing by the day.

So here are some new commandments if you don't want to lose out in this race.


The Apple and Google developer conferences are behind us. News from these events illustrates that we live in an exciting period of accelerating innovation. As consumers, we benefit from shiny new products with fascinating features coming out of the new factories of the 21st century, at an apparently ever-increasing speed. For example, the user interface was revolutionised only recently -changing from clicking and typing to touching- and will soon go multimodal as voice augments the graphical interface. Natural language understanding is improving at a rapid pace, enabled by impressive investments and improvements in AI.

During Apple's WWDC this week, Horace Dediu of Asymco posted his updated time series describing the size and development speed of the Apple ecosystem - striking evidence of exponential growth curves:

  • The number of iOS app installs has now reached 130 billion;
  • The iOS app download speed just continues to increase - it's currently above 30 billion per year (a staggering 80,000,000+ per day);
  • This success continues to attract more developers - last year the registered Apple developer community grew by yet another 18%.

Other numbers also demonstrate how these major technology companies are setting trends that extend beyond our mobile phones. Apple transacts a formidable $35 billion annually through its apps. In the same post, Horace Dediu estimates that the entire iOS ecosystem revenues are at least $250 billion per year - by including the revenues of all mobile services that run on top of iOS and transact outside of the Apple systems, like Google, Facebook, Amazon and Uber.

To illustrate how gigantic the shift to mobile is, Benedict Evans posted yesterday: "a little over 4x more people own an ARM/iOS/Android computer than own an x86/Windows/Mac one" - heralding the decline of earlier devices and ecosystems. Mobile data traffic grew 60% last year and today's teens are streaming natives. Mobile devices have seduced us to switch from (costly) calling and texting to (gratis) chatting, and have become our primary devices in many customer journeys and for reading two thirds of our emails:

A chart from MovableInk showing that about two thirds of all emails in the US are opened on mobile devices. In the latest report with Q3 2015 data the share of mobile devices was 67%.

In yesterday's post, Horace Dediu also estimated that Apple's market share among professional and hobbyist developers may be as high as 70%. Developers are empowered to build great apps whilst leveraging the investments in hardware, software, services and marketing made by the technology giants - and at the same time earn good financial returns. It is no surprise that consumer IT is encroaching on the enterprise vendor ecosystem.

So what are some of the new commandments for organisations that don't want to lose out in this race?

  • Tech-oligopolies that have emerged from the last century are being challenged by a new order of small, very innovative companies. No doubt many will disappear or be acquired, but they are all building on enterprise-grade public cloud platforms. Therefore you shall excommunicate religiously held views of best practices for (out)sourcing and vendor management and you shall explore terra incognita.
     
  • As none of the new tech giants can be bothered to make more efficient what isn't effective anymore (to paraphrase Peter Drucker) you shall question and monitor how the existing enterprise clergy will (continue to) change course and deliver different solutions for the new reality. If they don't accelerate change, they will become increasingly marginalised and you shall leave their churches in time.
     
  • To code or not to code, that's the question. The excruciating pains of the ERP-era have indoctrinated today's managers to ban any custom coding. As new technologies change the rules of the game, you shall challenge this dominant doctrine and explore new ones. And its missionaries shall be heard and given room to experiment and try to create new sources of competitive advantage and business value.
     
  • You shall build your new applications on a new liturgy, with principles such as mobile first and with shortened life cycles for tools, including the architectural bibles. Sticking to existing beliefs, or adopting the new truths only superficially, will widen the gap at increasing speed with those who can adapt. Agile and devops had better not be the emperor's new clothes when it's time to make your confessions.
     
  • You shall experiment and learn, but not reinvent the wheel. Even the largest companies reuse field-proven Open Source components and work with robust practices and Open Source communities that respectfully share and learn. The opportunities are even greater now with available APIs for AI algorithms, voice, image and natural language understanding that enable customers to interact naturally with your apps.
     
  • On a broader scale, your organisation shall develop positive learning cycles from experimentation and failure. Organised anarchy shall become the new dogma, with new rules that aren't considered blasphemous. As an example, read Jeff Bezos's thoughts on how to become and stay an invention machine. You shall reconsider the existing catechesis of (working with) best practices.

When we look at the existing technologies and practices in the average organisation and compare it to the new realities, it looks like many are riding on a steam engine train whereas the technology world has boarded a TGV. Furthermore, the trains have left the platform and the distance between the two is growing by the day.


One more nail in the coffin of the Wintel ecosystem coming from Apple

Decades ago the PC market standardised around Microsoft’s Windows and Intel’s microchips and thus an entire ‘Wintel ecosystem’ was born. It grew into an enormous habitat of programmers, hardware and software vendors as well as system integrators and consultants. For generations, Wintel became the de facto omnipresent ecosystem of enterprise IT.

It still is, but the barbarians are at the gate: with the arrival of the smartphone (and tablets) less than a decade ago, things started to look really bad. With sales of these handheld computers quickly growing to phenomenal volumes, PC sales started to decline. The knock-on effects have proven invasively corrosive to the entire Wintel ecosystem.

And as growth can progress exponentially, so can decline; as Hemingway once put it: “How did you go bankrupt? Two ways. Gradually, then suddenly”. Users have already shifted to mobile, with software and hardware vendors following suit. Continuing to run Wintel applications is becoming a liability, because the vast majority of them can run only on Wintel devices.

Your organisation had better watch and prepare for an accelerating decline of the Wintel empire that will send shockwaves spreading. Another important reason to give priority to getting rid of your legacy running exclusively on Intel Inside.¹

1. When we talk about Wintel we also include the marginal AMD, which also produces CPUs based on the x86 architecture.

Let us take a closer look.

Mobile is chipping away at the user end of the Wintel ecosystem:

  • The PC share of the market of programmable devices (computers, smartphones, tablets) has evaporated as users have moved to mobile devices - see the graph below. According to IDC estimates for 2015, 1.432 billion smartphones were sold compared to 276 million PCs. In absolute terms, PC sales are falling year on year. Competition is fierce and margins have eroded for PC manufacturers, brands and supply chains.
     
  • As users spend less time using PCs and performance improvements of Intel processors have slowed down significantly, replacement cycles have lengthened. Furthermore, users’ interest in buying software for the PC market continues to decline.
     
  • Intel has missed the boat in the mobile space. During the PC era, Intel dominated the market and benefitted from the biggest economies of scale. In the miniaturisation race, Intel was always one process node ahead of all other manufacturers, because it had the financial might to invest in the newest fabs. 

    In the mobile era, the Intel production volumes and corresponding cash flows are dwarfed by the ARM based chip industry that supplies all mobile phones. For the first time, Intel is no longer ahead in the miniaturisation race -crucial for notebooks and mobile devices- and most likely the chip manufacturers who can leverage the mobile volumes will leap ahead next year.
     
  • The work computer is no longer synonymous with a Wintel PC. SAP and IBM are developing their mobile apps for enterprise on Apple iOS, thus placing ARM-based Apple hardware on the shortlist for certain Wintel PC replacements. Google has announced that its Chromebooks will also run Android apps. These laptops are often Intel Inside, but this will shift as mobile apps become more important. Productivity tools are becoming platform-agnostic: Microsoft Office is available on Android and iOS; Adobe tools are being made available for Android and iOS platforms.
     
  • The developer community and software vendors are shifting to either mobile apps or cloud-based applications. The dominant front end is either the browser, an Android or an iOS app. You do not need a Wintel device to run any of these.
The once dominant Windows PC market share has evaporated. Source: Horace Dediu of Asymco regularly posts a graph depicting the evolution of computer platforms market share.



Apple is a good case study for the ongoing decline of the Wintel ecosystem.

The Apple Worldwide Developers Conference starts next week and many rumour sites have reported that Apple will not announce any new MacBook Pros at the event and that these are due only in the fourth quarter of this year. Apple’s laptop line has seen no major refresh for a couple of years. Most of the delay is related to production problems that Intel has faced with the 14nm manufacturing process. But now that Intel’s new 14nm Skylake CPUs have been released, all other PC manufacturers have launched laptops incorporating this technology. Why would Apple delay the release of overdue hardware refreshes by another six months?

One logical explanation for such a delay would be a switch of the Apple Mac line from Intel to ARM processors. Apple would most likely wish to combine such a hardware release with the release of a new version of its operating system, planned for the fourth quarter. An additional advantage of releasing late in Q4: Apple can use the new ARM processors that are released with the new iPhones, announced every September.

We are not trying to predict the exact timing of this processor switch, but we are convinced Apple will make the switch in the coming years for the following reasons:

  • Intel was late in investing in graphics processing units (GPUs) and its offering is inadequate for professional use. Apple is forced to combine the Intel processors with an additional GPU from NVIDIA or AMD, thus adding complexity to its designs and hurting battery life. The iPad Pro -based on the ARM architecture with an integrated GPU- has already surpassed the graphical (and CPU!) performance of the ultra-portable MacBook line with Intel Inside. Very likely, future ARM processor designs will offer Apple a competitive option in the high-end professional market as well.
     
  • Apple incorporates advanced features in its ARM-based chip designs for mobile devices in order to offer functionalities such as biometric authorisation. Moving the Mac from Intel to ARM-based chips will allow Apple to incorporate these features in its computer line as well. Sticking with Intel would mean either the exclusion of such features or the addition of extra chips to achieve the same goal.
     
  • Apple has always strived to control key elements of its products. One important exception is the Intel processor, and current trends will allow Apple to eliminate this exception. ARM processors will give Apple much more control, as ARM provides a licensed template that Apple can further tweak and integrate with GPUs and other functions in one System on a Chip (SoC).
     
  • Switching from Intel processors will allow Apple to either increase its margin or reduce the prices of its products or do a bit of both.
  • Moving to a single CPU & GPU architecture across all of its products gives Apple significant economies of scope. Apple will be able to consolidate the sizeable effort it has to put into optimising several software components, such as compilers, graphical libraries and LLVM.²
  • Unifying its chip architecture will make it easier for Apple to integrate iOS and OS X apps where it wishes. It is clear that Apple does not wish to merge the two operating systems, but being able to run iOS apps on OS X without emulation does have benefits. Dashboard widgets in OS X offered far more limited functionality than iOS apps do.


Using its own ARM-based chip designs will allow Apple to achieve thinner designs and better battery life without sacrificing performance, and with additional features. In addition, the huge installed base of iOS apps can be leveraged on the OS X platform. This will allow further differentiation of its computers and is a direct result of the economies of scale and scope afforded by the mobile industry.

2. Low Level Virtual Machine

As the rot in the Wintel ecosystem spreads further, Wintel-based applications will become increasingly marginalised. Keeping legacy applications that are tied to the Wintel architecture is a guarantee of diminished innovation, increased risks and higher costs.³

We can draw a parallel to our own past experience migrating away from a mainframe. First, innovation comes to a grinding halt, then risks of continuity in hardware, software and resources mount to an unacceptable level. Finally, the last application left on the mainframe has disastrous economics because it has to shoulder all the fixed costs. Similarly, you do not want to be the last one holding the Wintel-can.

3. We focus on Wintel applications requiring a run time on the user’s PC. Such applications can be centralised in a data centre in order to eliminate the dependence on a Wintel end-user device. Techniques such as server based computing or VDI are used but our experience is that this approach is in most cases sub-optimal (complex and expensive).

Twilio summarises it nicely

Twilio, a Silicon Valley “unicorn”, filed for an IPO this week. From its S-1 filing:

The way organizations build, deploy and scale modern applications has fundamentally changed. Organizations must continuously bring new applications and features to market to differentiate themselves from their competitors and to build and extend their competitive advantage. Heightened consumer expectations for real-time, personalized interaction further necessitate rapid innovation. In order to satisfy these needs, developers must be empowered to freely experiment, quickly prototype and rapidly deploy new applications that are massively scalable. Legacy infrastructure does not support this new paradigm for developers because it typically has been slow, complex and costly to implement, and inflexible to operate and iterate. 

We could not have said it better ourselves.
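
The quote is best illustrated by Twilio’s own product: with recent versions of its Python helper library, sending an SMS is a few lines of code rather than a telecoms integration project. A minimal sketch - the credentials and phone numbers below are placeholders:

```python
from twilio.rest import Client

# Account SID and auth token come from the Twilio console; placeholders here.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

message = client.messages.create(
    body="Hello from the new paradigm",
    from_="+15005550006",   # a Twilio-provided test number
    to="+15551234567",      # placeholder recipient
)
print(message.sid)  # unique id of the queued message
```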


The next predator of industries: Artificial Intelligence

Jeff Bezos recently said about Artificial Intelligence: “It's probably hard to overstate how big an impact it's going to have on society over the next 20 years.” The field is still in its early innings; brute-force approaches to problem solving are often used, but it has a huge runway ahead of it.

Due to the confluence of several developments, AI has switched to exponential growth with disruptive potential; organisations foolhardy enough to ignore it will suffer detrimental effects. Software is eating the world, and the first disruptions by Artificial Intelligence are already discernible.


“The number 73 marks the hour of your downfall” was the prophecy the Oracle of Delphi gave to the Roman Emperor Nero. He had just turned 30 and concluded he would reign long and die at 73. His reign, however, came to a sudden end the next year, after a revolt by Galba, a 73-year-old man.

What would the Oracle predict today about the impact of Artificial Intelligence (AI) -Silicon Valley’s ‘new new thing’- on your industry and your company? Are you perhaps feeling comfortable about your competitive position today, like Nero did? After all, your organisation’s knowledge is not artificial; it is tangible and hard to copy.

The subject of AI has fascinated humans for many decades, but the results had been underwhelming until recently. In the course of the last five years, however, the field has advanced enough for practical applications with broad consumer appeal: intelligent agents -Apple’s Siri, Google Now, Microsoft’s Cortana, Amazon’s Alexa- and services such as the fascinatingly clever Google Photos. But AI is not just an opportunity for the tech industry and a benefit to its users: “software is eating the world” and AI will be one of the next predators! Not utilising it will have detrimental effects on those foolhardy enough to ignore it.

Look at self-driving vehicles. This is not just a Google whim; it will have an enormous impact on the car industry. Self-driving is not just about mounting some hardware sensors and actuators on an existing vehicle already equipped with GPS and cartography. AI plays an incredibly important role in making it all work. Look at the picture below, illustrating the vast amount of data processed and -more importantly- interpreted intelligently, in real time.

Bill Gross, founder of Idealab, posted this picture recently and added: “I learned this weekend at an XPrize event that Google's self-driving car gathers 750 megabytes of sensor data per SECOND! That is just mind-boggling to me. Here is a picture of what the car "sees" while it is driving and about to make a left turn. It is capturing every single thing that it sees moving - cars, trucks, birds, rolling balls, dropped cigarette butts, and fusing all that together to make its decisions while driving. If it sees a cigarette butt, it knows a person might be creeping out from between cars. If it sees a rolling ball it knows a child might run out from a driveway. I am truly stunned by how impressive an achievement this is.” (Italics added)
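
Some quick arithmetic puts that 750 megabytes per second into perspective:

```python
# Back-of-the-envelope: what 750 MB of sensor data per second adds up to.
mb_per_second = 750
per_minute_gb = mb_per_second * 60 / 1000
per_hour_tb = mb_per_second * 3600 / 1_000_000
print(f"{per_minute_gb:.0f} GB per minute")  # 45 GB
print(f"{per_hour_tb:.1f} TB per hour")      # 2.7 TB
```

Roughly 2.7 TB per hour: interpreting that stream in real time is an AI workload, not merely a storage problem.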

And it will not be just products and manufacturing supply chains that will be impacted by AI. Many services industries will experience similar disruptions. Remember IBM’s Watson winning Jeopardy in 2011? Fast forward and look at Amelia, ‘your first digital employee’. It was developed by IPSoft, a company we have worked with that pioneered AI in IT services. The Amelia-platform can understand, learn and interact to solve problems. ‘She’ reads natural language, understands context, applies logic, infers implications, learns through experience and even senses emotions - understands what is meant, not simply what is said. Furthermore, ‘Amelia’ becomes an expert, capable of reading and digesting the same training information as human ‘colleagues’ quickly and learning from interactions faster.

The potential and likely impact? Imagine service desk tasks, procurement processing activities, claims processing work in insurance companies or expert advisory roles to field service engineers, to lawyers, to financial services professionals... 24/7, all year long, reliably and compliant. This is not science fiction; version 2.0 was released in October 2015.

We should note that brute-force approaches to solving problems are part of the solution, compensating for the fact that current AI algorithms learn and operate much less efficiently than we humans do. For example, Google’s AlphaGo computer beating the world champion of Go requires thousands of times more power than the human brain. Nevertheless, we are convinced that the field of AI is in its early innings and has a huge runway ahead of it. AI has switched definitively from linear to exponential growth due to the confluence of a number of technology developments and trends:

  • Much of the recent progress is based on improving the effectiveness of the deep learning algorithms being used. Open sourcing these algorithms is delivering notable improvements at an accelerated pace. Google has open sourced the deep learning algorithms that were used in Google Photos, and recently Amazon has gone open source as well.
     
  • These ‘learning’ algorithms require training data to get good at their task; the more data you feed them, the better they get. The availability of powerful public cloud hardware has risen sharply allowing for faster processing -hence learning- at much lower cost.
     
  • The collection of massive amounts of training data has become easier and faster due to the enormously increased use of connected sensors (smartphones, the Internet of Things). Also, Amazon’s Mechanical Turk -a crowdsourcing marketplace to use human intelligence for tasks that computers are currently unable to do- has accelerated learning by producing significantly more training data at a higher pace.
     
  • With Moore’s law, chips are running the AI algorithms faster, more cheaply and more energy-efficiently all the time. Despite the mounting challenges related to ongoing miniaturisation -raising questions about the sustainability of Moore’s law- the chip industry continues to advance. NVIDIA, a leading designer of GPUs (graphics processing units), recently launched a new graphics chip optimised for AI, holding 3 times more transistors than the previous generation and enabling learning 12 times as fast.
     
  • Instead of using generic chips to run the AI algorithms, we also see tailor-made hardware turbocharging progress: earlier this year, research at MIT produced a chip designed specifically for neural networks which is “10 times as energy efficient as a mobile GPU”. This means that mobile devices will be able to run the powerful AI algorithms locally, even in the absence of connectivity, rather than having to upload the data to the internet for central processing, interpretation and response. IBM, too, is introducing radically new and more energy-efficient designs, departing from the Von Neumann architecture principles that have dictated the hardware industry since its beginnings.
     
  • Other big step improvements in AI performance are driven by using smarter algorithms rather than by just calculating a lot faster. An example is VocalIQ, a company acquired by Apple last year, which has developed a smarter algorithm with a steeper learning curve: “Siri brings in 1 billion queries per week from users to help it get better. But VocalIQ was able to learn with just a few thousand queries and still beat Siri.” It also understands and remembers context. VocalIQ’s AI requires orders of magnitude less training, producing more effective results faster and more easily.
     
  • The big tech companies are exposing their AI algorithms to external parties via programmable interfaces (APIs). This implies that everyone can get access to existing, state-of-the-art algorithms and connect them to their systems. You can put them to use for your specific business situation as a configurable functionality and train them with your data - see the sketch below this list. For example, Amazon’s Alexa has two ‘software development kits’: one to embed the AI voice recognition capabilities and one to teach Alexa a new “skill”. And those two things work together. A very important additional ‘side effect’ for enterprises is the increased size and scope of the development community. To illustrate this point: SAP’s ‘corporate closed community’ has about 2.5 million developers whereas Apple’s iOS ‘open community’ has roughly 10 million developers - and it hasn’t been around for nearly as long.
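
To illustrate how low this barrier has become, here is a hedged sketch of calling Google’s Cloud Vision REST API (v1) to label a photo. The request shape matches the public images:annotate endpoint; the API key and file name are placeholders.

```python
import base64
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; issued via the Google Cloud console
URL = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

# Read a local photo and base64-encode it, as the API expects.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

body = json.dumps({
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}).encode("utf-8")

request = urllib.request.Request(
    URL, data=body, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # labels with confidence scores
```

A state-of-the-art image classifier, trained on someone else’s data and hardware, reduced to an HTTP call.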

We close by quoting Jeff Bezos; at the 2016 Code Conference he commented on AI: “I think it's gigantic — natural language understanding, machine learning in general. [...] It's probably hard to overstate how big an impact it's going to have on society over the next 20 years. [...] Amazon spent four years working on Alexa behind the scenes and more than 1,000 people are working on it. [...] And there's so much more to come. It's just the tip of the iceberg.”
 

In the interview at Start Up Fest Europe -referred to in an earlier post- Eric Schmidt talks about AI (time: 3:16-5:40).

Getting the most out of the cloud? Get rid of your legacy apps!

Organisations must ‘move to the public cloud’: it offers tremendous business opportunities and is a key component of an effective digital strategy. The key question is what you move and what you replace.

‘Lifting and shifting’ your traditional IT may bring some business benefits but may also turn into a costly disappointment. Raising your organisational maturity, acquiring new technical skills and selecting the appropriate provider and platform are critical success factors. But don't expect too much. You must put effort into replacing your legacy applications and use the public cloud differently if you want to capture significant business benefits!


The cloud is omnipresent in the communications and offerings of every IT supplier and is on the agenda of every CIO. With good reason, as cloud technologies can deliver significant benefits to organisations. It has even become a qualitative benchmark in boardrooms: ‘the move to the cloud’ as an indicator of an organisation’s transition to a more modern and strategic use of IT.

Almost every organisation is ‘moving to the cloud’ these days, but many are not able to extract significant business value yet. In some cases, organisations have not ‘moved’ beyond the infrastructural levels of IT and business benefits are limited. In other cases, organisations are working on adapting their structure and culture and expect that the targeted benefits will materialise later.

For many organisations though, it is important to check certain assumptions and clarify some expectations about the benefits of such a move. Working with ‘cloud native’ IT -for example with smartphone apps and Salesforce.com- is a lot easier than moving your existing IT to the cloud. More explicitly: lifting and shifting your IT to the cloud will not, by itself, improve reliability, agility, flexibility, time to market or cost effectiveness. On the contrary, it may even become a risky, costly and disappointing exercise.

The public cloud was built by providers such as Google and Amazon to give their software-based services unique and attractive characteristics. As a next step, their infrastructure designs and their software tools were made available to third parties. Although similar engineering philosophies were applied, each company built a unique technical solution, fit for its purpose. There is no such thing as a ‘standard’, ‘public’ cloud.

Furthermore, the technologies used are very different from the traditional solutions of IT vendors like SAP, Microsoft, Oracle and IBM and the ecosystem of applications built around their platforms. The differences are significant even when using the public cloud only as ‘Infrastructure as a Service’. Running your existing IT in the public cloud is not a simple next step in a logical evolution of outsourcing and offshoring.

This engineering misfit needs to be addressed, or ‘lifting and shifting’ will become a very costly disappointment. Finding solutions to the issues that arise challenges the traditional wisdom in IT departments. Your own organisation often lacks the appropriate engineering expertise; this is radically different from running or outsourcing your own IT. Furthermore, when IT departments start working with third-party service providers, more controls and more mature, formal ways of working and collaborating are needed. It is also important to understand that the public cloud providers’ primary target customer is the developer community. Hosting centres and traditional outsourcing parties, by contrast, are accustomed to and try to accommodate your particular circumstances, your ways of working and your legacy technologies.

Turning these issues into recommendations, our advice is fourfold:

  • Make sure you raise your organisational (process) maturity to an adequate level before moving to the public cloud. Also raise your technical expertise to the appropriate level before assessing and considering such a move.
     
  • You need to select the right provider and cloud platform to minimise technical and process mismatches with your legacy. Some mismatches may prove fatal and force you to abort the move. But even if you succeed, don’t expect too much: a move of your legacy applications to the (public or private) cloud will unlock only a limited amount of the significant potential of public cloud technologies.
     
  • Many problems emanate from your legacy applications. Trying to fit square pegs into round holes will not work well - read our technical notes further below. You must put effort into replacing your legacy applications rather than trying to continue running them in the (public) cloud.
     
  • The public cloud is ideal for developing and running modern applications; you must build your new systems and functionalities in it *. Tremendous possibilities are within your reach today and new ones are introduced regularly!

Organisations must ‘move to the public cloud’: it delivers significant business contributions and is a key component of an effective IT strategy. The key question is what you move and what you replace.

Below, we elaborate on the ‘technological’ aspects that underpin our arguments. It is worth reading, not just for IT professionals.
 

* Alphabet's Eric Schmidt was interviewed this week at Start Up Fest Europe. His advice: "Anybody who's coming in with plans based on the earlier architectures, you're not going to make it. You need to be on scalable plans that solve real problems."

The arguments made above are based on the following observations about the technical differences between the public cloud and traditional platforms and environments. Not addressing these differences adequately will impact your business negatively, deteriorating user experience, reliability, cost effectiveness, et cetera.

  • Public cloud infrastructure technology is designed to run applications architected according to scale-out principles (‘you simply deploy twice as many -low-end- servers when you need to process twice the workload’). Legacy applications are architected according to scale-up principles (‘you need a -higher-quality- server that is twice as fast when you need to process twice the workload’).

    When you have to scale up a legacy application in the public cloud -to facilitate an increased workload, for example when more users or devices must tap into your systems- notable performance issues arise quickly or costs rise disproportionately (or a combination of the two); the first sketch after these notes illustrates the cost difference.
     
  • Most companies apply server virtualisation technologies to utilise hardware more efficiently and lower maintenance costs. Most public cloud providers use a different virtualisation solution from the one used in-house. Although virtualisation vendors typically provide tools to import a virtual machine from a different vendor’s system, these tools are far from robust, especially for virtual machines running Microsoft Windows. As a consequence, organisations end up rebuilding each (!) Windows server in the new public cloud environment: a labour-intensive, costly and time-consuming exercise (even if you use automation tooling).
     
  • The benefits of virtualisation technologies are based on the consolidation of physical resources. As a consequence, application instructions are processed serially on fewer, shared physical resources. This leads to queuing of instructions, and additional specific (and expensive) measures are always needed to prevent a serious degradation in performance. Without such measures, users experience frequent, smaller hiccups, for example lasting up to 15 seconds. It is often quite difficult to detect and address these issues, as many monitoring tools lack a sufficient level of granularity in their measurements.

    Public cloud scale-out environments apply virtualisation technologies too. When lifting and shifting scale-up applications to the scale-out public cloud, performance and the user experience are likely to degrade. If this leads to a notable deterioration of the user experience, resolving such problems in the public cloud is even more challenging than resolving them in your own data centre. As a customer you do not have sufficient access to the underlying technologies, and the cloud provider may not even identify (or acknowledge) the problems if its monitoring tooling fails to measure any issues. Finally, the public cloud provider may not want to implement the specific additional measures required to address your particular legacy problems. Note that scale-out applications are much less susceptible to such queueing bottlenecks; the second sketch after these notes shows how queueing delay explodes as shared resources approach saturation.
     
  • Disaster recovery in traditional environments is usually based on storage replication. You will experience significant difficulties moving this set-up to the public cloud. Disaster recovery for public cloud scale-out applications is addressed entirely differently: by having the application run simultaneously in more than one data centre. This will not work for traditional applications. When moving to the public cloud you will either have to forego recovery capabilities or find a less robust, laborious workaround.
     
  • Most applications run software simultaneously on user devices and on back-end servers. For the application to function and perform smoothly, the time delay (‘latency’) between the two parts of the same application must stay below a certain level. Latency depends mostly on the physical distance between the user’s device and the back-end servers. If you move to the public cloud, this distance usually increases significantly. Many legacy applications are intolerant of higher latencies, and their performance and the user experience suffer disproportionately.

    If you need to retain these applications and want to run them in the public cloud, you must reduce the latency by also moving the user-device part of the application to servers in the public cloud data centre. There are various technical solutions to achieve this; they are typically complex and costly, slow down the migration and affect the business case negatively. The third sketch after these notes quantifies how sequential round trips multiply latency.
     
  • Deploying additional IT resources is a more critical capability nowadays, given the ever-rising use of IT. Virtualisation and automation tooling have made it much easier to deploy additional resources. However, the management practice of deploying IT resources effectively and efficiently has become more complex. In practice, it is quite common that the business needs more hardware resources -and needs them faster- than IT can provision.

    If IT capacity management practices are not sufficiently mature, the physical machines become oversubscribed and, as a consequence, notable performance issues arise for users; the fourth sketch after these notes shows the arithmetic. The public cloud, with its agility, deals with such fluctuating or growing demand without any problem. In fact, it is made so easy that a different problem arises quickly: a seemingly uncontrollable growth in costs.
     
  • A large number of existing maintenance tasks are not eliminated by moving to the public cloud (unless you are building apps with the latest public cloud technologies that reduce this load): for example, managing and maintaining development, testing and production environments, specific network security zones and intrusion detection systems; all tasks necessary to safeguard availability, reliability and performance.

    We have seen many business cases that centre on optimistic assumptions about both the elimination of tasks, people and related costs and the improvement in the quality of these unavoidable maintenance tasks. These business cases hardly ever come true, and the assumptions about cost or quality improvements need substantial and disappointing adjustments.
     
  • Traditional back-up practices with retention policies of up to -for example- seven years are rarely a standard service delivered by public cloud providers. If you choose to simply mimic your existing practices, (storage) costs will explode and you may experience notable performance issues as well; the last sketch after these notes gives a back-of-the-envelope calculation. It is possible to arrange back-ups within a cloud-based data centre, but it requires dramatic technical and procedural changes, with significant change management challenges and effort.
     
  • Many organisations lack the infrastructure development expertise needed to engineer and make the move to the public cloud safely and successfully. There is a substantial difference in the required skill set, experience and profile between an engineer maintaining traditional systems and a solution engineer able to design effective solutions for the public cloud.

    Lacking such skills increases the risk that many problems -some of which are mentioned here- are overlooked, that the migrated legacy applications get bogged down in technical problems with negative business implications, or even that the move ends in disillusionment and disinvestment.
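
First, a deliberately simplified Python sketch of the scale-out versus scale-up cost contrast; the capacities, unit costs and the superlinear cost exponent are assumptions for illustration, not real provider pricing:

    # Illustrative only: invented numbers, no real provider pricing.

    def scale_up_cost(workload, base_capacity=100, base_cost=1.0):
        """Scale up: one ever-faster server. High-end hardware gets
        disproportionately expensive, modelled here (as an assumption)
        by a superlinear cost exponent."""
        return base_cost * (workload / base_capacity) ** 1.8

    def scale_out_cost(workload, node_capacity=100, node_cost=1.0):
        """Scale out: add identical low-end servers; cost grows
        roughly linearly with the number of nodes."""
        nodes = -(-workload // node_capacity)  # ceiling division
        return nodes * node_cost

    for w in (100, 200, 400, 800):
        print(w, round(scale_up_cost(w), 1), scale_out_cost(w))

A legacy application that can only scale up stays on the steep curve, even when it runs on scale-out infrastructure.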
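
Second, the queueing effect of consolidating workloads onto shared physical resources can be illustrated with the textbook M/M/1 approximation, in which response time grows with 1/(1 - utilisation); the 10 ms service time is an assumed figure:

    # M/M/1 approximation: response time ~ service_time / (1 - utilisation).
    # Numbers are illustrative, not measurements of any platform.

    def mean_response_time(service_time_ms, utilisation):
        """Average response time of a single shared resource."""
        if not 0 <= utilisation < 1:
            raise ValueError("utilisation must be in [0, 1)")
        return service_time_ms / (1 - utilisation)

    for u in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"utilisation {u:.0%}: ~{mean_response_time(10, u):.0f} ms")

Response times explode as shared resources approach saturation, which is exactly the kind of intermittent ‘hiccup’ that coarse-grained monitoring tools fail to catch.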
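
Third, a sketch of why ‘chatty’ legacy applications suffer when the distance between device and back end grows; the number of sequential round trips per screen and the latency figures are assumptions:

    # Assumed figures: a chatty legacy app making sequential round trips.

    def screen_load_time_ms(round_trips, latency_ms, server_time_ms=50):
        """Time to render one screen when each call waits for the previous one."""
        return server_time_ms + round_trips * latency_ms

    for latency in (1, 10, 40):  # LAN vs nearby region vs distant region
        print(f"{latency} ms latency: {screen_load_time_ms(200, latency)} ms per screen")

A screen that felt instant on the LAN takes seconds from a distant cloud region; hence the advice to move the user-device part of the application into the cloud data centre as well.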
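
Fourth, the oversubscription arithmetic behind immature capacity management; all numbers are invented:

    # Illustrative: virtual CPUs provisioned against physical cores.

    physical_cores = 64
    vcpus_per_vm = 4
    vm_count = 80  # provisioned freely, because deploying has become so easy

    ratio = (vm_count * vcpus_per_vm) / physical_cores
    print(f"oversubscription ratio: {ratio:.1f}x")  # 5.0x

    # If all VMs get busy at once, each sees roughly a fifth of a real core,
    # which users experience as sudden, hard-to-diagnose slowdowns.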
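
Finally, a back-of-the-envelope calculation of what naively mimicking a seven-year retention policy with weekly full back-ups could mean in cloud storage; the dataset size and the price per terabyte are invented:

    # Back-of-the-envelope: invented dataset size and storage price.

    dataset_tb = 20              # one full back-up
    fulls_per_year = 52          # weekly full back-ups
    retention_years = 7
    price_per_tb_month = 20.0    # assumed storage tariff

    copies = fulls_per_year * retention_years
    stored_tb = copies * dataset_tb
    print(f"{copies} retained copies -> {stored_tb:,} TB stored")
    print(f"~{stored_tb * price_per_tb_month:,.0f} per month at the assumed tariff")

Deduplication, incremental back-ups and archival storage tiers change this arithmetic, but only after the dramatic technical and procedural changes mentioned above.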

IT experience among business leaders urgently needed ...

When looking at the leadership teams of successful organisations one may conclude: “better stick to your knitting”. Across many industries IT becomes part of the core proposition and upends business models that have persisted for decades. What if “software is eating your world” and IT isn't “your knitting”?

The Washington Post newspaper provides an inspiring and practical example: hire experience. This newspaper -a 140-year-old icon of an entire industry that has been disrupted- has quickly turned into a successful media and technology company!


If you look at great CEOs in the automotive industry, you find engineers with a passion for building cars. Notable examples are Karl Benz, Ferdinand Porsche, Henry Ford and Ferdinand Piëch.

Similarly, if you look at the leadership teams of very successful IT companies, you find either engineers or individuals who are deeply engaged with the guts of their IT operations. Think of Larry Page, Mark Zuckerberg, Jeff Bezos, Steve Jobs. The list goes on and on.

You would think that the moral of the story is “stick to your knitting”. There is one large anomaly in all this, and that is IT. “Software is eating the world”, to quote Marc Andreessen. Across many industries, IT is becoming part of the core proposition of a product or service and upends business models that have persisted for decades. Think of how IT changed the media industry fundamentally, disrupted previously successful business models and companies, and has become a standard key ingredient of every media product and service.

If IT hasn't been “your knitting”, what to do and where to start? The Washington Post newspaper provides an inspiring and practical example: hire experience! 

In 2013 The Post was a 140-year-old newspaper in decline when Don Graham, its owner, asked Jeff Bezos for help. In an interview with The Washington Post’s executive editor Martin Baron, Jeff Bezos says:

“My biggest question quickly became to Don: ‘Are you sure I can help? Why me? I don't know anything about the news business’, and Don said, ‘Look, we don't need anybody that knows about the news business. We got lots of people here who know about the news business. We need somebody who knows about the internet’.” (38:30 into the interview)

Bezos acquired the newspaper later that year, and three years on his influence on The Washington Post is already being felt. The newspaper surpassed The New York Times in unique monthly US visitors in 2015 and is turning into a media and technology company.

Bezos didn’t get involved in managing the editorial direction of the newspaper. Rather, he took a very hands-on approach to the business and technology aspects, transforming it into a real media and technology company with its own engineering team that “rivals any team in Silicon Valley”. Today, technology is put to use to generate analytics, improve online commercial effectiveness and generate revenues, increase (social) media reach and exposure, improve the user experience (e.g., on mobile devices) and seek input from readers about their preferences and behaviour.

Looking at the boards and management teams of many organisations, we find that IT experience is sorely lacking. This paucity will either be addressed or become a large liability.