Paradoxes in Standardization
Lessons learned from the speech and audio coding industry
Standardization makes the Internet work. When two computers talk to each other, they need a common language, and standards define such languages for computer-to-computer communication. It's as simple as that. If company A used a different language than company B, their products would not be interoperable: you could not call from a phone of brand A to a phone of brand B, nor browse the webpages of brand A with a computer of brand B. Almost everything that works on the Internet relies on standards. They are massively important.
The importance of standards is also their biggest weakness. Primarily, short-term economic incentives all work against standards. For example, if a company owns intellectual property rights to a technology included in an important standard, it could demand royalties from all users and manufacturers. Say you have a patent on an essential speech coding technique; you could then demand royalties from all 7.83 billion mobile phone subscribers as well as all phone and network manufacturers and all operators. You wouldn't even have to demand much, say 1 cent per user per year, and you would already have a money-making machine producing almost 80 million € per year.
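The back-of-envelope arithmetic is simple enough to write down (the subscriber count and the per-user fee are the illustrative figures from the text above, not real licensing data):

```python
# Back-of-the-envelope royalty revenue for a standard-essential patent.
# Both figures are the illustrative ones from the text, not real licensing data.
subscribers = 7.83e9          # mobile phone subscriptions worldwide
royalty_per_user_eur = 0.01   # 1 cent per user per year

annual_revenue_eur = subscribers * royalty_per_user_eur
print(f"{annual_revenue_eur / 1e6:.1f} million EUR per year")  # → 78.3 million EUR per year
```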
Conversely, the primary objective of companies is then to develop technologies and patents which will get used in standards. That would seem like a good thing, wouldn't it? Not quite. First, note that it is not important to develop genuinely new technology, as long as you manage to get a new patent. By patenting troves of trivial technologies, companies can muscle their way into standards. It is borderline patent trolling, though the big players do have methods for controlling their exposure to such evil actors.
A much worse consequence is that companies have no incentive to publish their research results. Any information they publish could help competitors develop new technologies, so research is done behind closed corporate doors. In speech coding research, for example, this has stifled academic research almost entirely. Academic researchers do not have access to the newest results because they are not published, and even if academic researchers did publish new technologies, those would not be used in standards, because they are not patented and nobody has an incentive to include them in a product. The lack of academic research has, with some exceptions, slowed progress in the field to a snail's pace.
Moreover, being left out of a standard would mean that a company has no revenue from that standard. Developers therefore become risk-averse, since a mistake could be very expensive. This leads developers to submit small incremental improvements to existing technologies, rather than work on bold innovations, which further reduces the speed of development.
Standardization organizations do however still want to generate new, useful standards. They want to be good. The operating environment changes, there are new use cases and, albeit slowly, technology does improve. There is therefore both demand and supply for new standards. The question is then how best to agree on standards among stakeholders. Often, organizations then organize (sic!) competitions in which companies can offer their technologies, and the best technology is chosen as the new standard. So far so good.
The challenges with such competitions are in the details. First, the stakeholders, usually members of the organization, have to agree on the rules of the competition. Naturally, an endurance runner would like to run a marathon, while a sprinter would prefer the 100 m dash. Choosing the rules of the game can therefore determine the winner even before the competition has started. It is easy to see how discussion about competition rules can degenerate into a nasty political fight.
Second, it is not only the performance metrics which lead to difficulties; the structure of the competition also has an impact on the quality of the final standard. Some organizations (like 3GPP) use a winner-takes-all approach, where the winning technology is adopted as the new standard as-is. The problem is that it is not always possible for companies to make independent submissions to the competition. There are too many legacy patents which cannot be avoided, so companies are forced to cooperate. While cooperation is usually good, in a competition it means that the cooperating partners essentially all share the same entry. Collaborations therefore make it very difficult for new participants to join. Besides, standards are often so complicated that no single player would have the resources to submit an independent technology of their own.
An alternative approach, used in some standardization organizations (like MPEG), is to include a secondary step in the process where anyone can submit improvements to the currently best technology. It is then much easier for small players to participate, because it is sufficient to show that their technology improves the product. In theory, the final product then improves with each addition of new technology. This however opens the door to minimal-improvement contributions, which have no meaningful effect on the final product but introduce new patents into the standard. Worse, increasing the number of participants in the design process tends to increase the complexity of the project.
The unwarranted increase in complexity is thus a danger in both approaches. A large number of collaborators and contributors tends to make software projects unnecessarily complex, a phenomenon known as the design-by-committee syndrome. The project becomes a Christmas tree with all the bells, whistles and kitchen sinks you can imagine. Maintaining such products is a nightmare, since the smallest changes can have unintended consequences in distant parts of the project (just look at the problems with the Bluetooth stack). No player has an incentive to apply Occam's razor, because then their own patents might be removed from the standard. As a consequence, most commercial standards are not beautiful products; at best, they work as intended.
Open source enthusiasts sometimes claim that their approach is therefore superior. I'll give them that: to some extent free from economic incentives, open source projects can occasionally produce beautiful products. An active community can also maintain such standards to an impressive level of quality. The challenges with open standards are however different, and usually related to lacklustre adoption, immature products and inherent uncertainties. Namely, without commercial incentives, open standards often cannot afford to advertise their products. The danger is that even superior products remain unknown to users. Secondly, while enthusiasts and academic researchers are often talented and can produce fantastic proofs-of-concept, the conversion to a polished product takes a lot of work. Some speak of a 10/90 ratio between the amount of work in proof-of-concept research and in product engineering. That is something companies can more easily do. They can, unlike open source projects, also provide performance and maintenance guarantees. Finally, many fine open source standards are tainted by uncertainties in their licensing situation. Take for instance the open speech and audio coding standard Opus, which combines Skype's SILK codec with Xiph.Org's CELT, the successor to their earlier Speex and Vorbis work. Though these predecessors were started more than 15 years ago, I am today still uncertain whether the claim that they are free to use has been properly verified.
In conclusion, I think it is obvious that we need standards. However, there is much to improve in the way we currently design them. Commercial support seems essential to make standards successful, at least in the field of speech and audio processing. Yet that same commercial interest stifles innovation and often reduces the quality of the final product.
A central issue behind these problems is that standards are huge monoliths. If a company manages to get patents into a product, it gets a piece of the revenue of the whole product, but without the patents it gets nothing. It's all or nothing. A possible solution would be to modularize standards into subsystems, such that there can be multiple alternative subsystems which work through a shared API, but that is a story for another time.
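As a minimal sketch of what such modularization could mean in practice (all names and the toy quantizers here are hypothetical illustrations, not drawn from any real standard): the standard would specify only the shared API, and competing, independently licensed subsystems could implement it interchangeably.

```python
from abc import ABC, abstractmethod

class CodecModule(ABC):
    """Hypothetical shared API that alternative subsystems must implement.
    A standard defined at this level could accept competing, independently
    licensed implementations of the same role."""

    @abstractmethod
    def encode(self, samples: list[float]) -> bytes: ...

    @abstractmethod
    def decode(self, payload: bytes) -> list[float]: ...

class VendorACodec(CodecModule):
    """One vendor's (illustrative) implementation: a trivial 8-bit quantizer."""
    def encode(self, samples):
        # Map samples in [-1, 1] to one byte each.
        return bytes(int((s + 1.0) * 127.5) for s in samples)
    def decode(self, payload):
        return [b / 127.5 - 1.0 for b in payload]

class VendorBCodec(CodecModule):
    """A competing implementation behind the same interface: 16-bit quantizer."""
    def encode(self, samples):
        return b"".join(int((s + 1.0) * 32767.5).to_bytes(2, "big") for s in samples)
    def decode(self, payload):
        return [int.from_bytes(payload[i:i + 2], "big") / 32767.5 - 1.0
                for i in range(0, len(payload), 2)]

def roundtrip(codec: CodecModule, samples: list[float]) -> list[float]:
    # The rest of the system sees only the shared API, never the vendor.
    return codec.decode(codec.encode(samples))
```

A conformance suite for such a standard would exercise only the `CodecModule` interface, so either vendor's subsystem could be swapped in without touching the rest of the stack, and each could carry its own licensing terms.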
This should not be taken as criticism of existing standards or stakeholders. I am certain that companies honestly try their best to develop the best possible products, and many standards do deliver a good end-user experience. Rather, my claim is that we can do better: by organizing standardization in a better way, we can develop better products. My frustration is that we as a community do not have a tradition of openly discussing the standardization process. In fact, I have received stern opposition and veiled threats merely for suggesting discussions such as this one. With this opinion piece, I therefore want to encourage constructive discussion about standardization processes.
Disclaimer: Everything stated above is my personal opinion and does not reflect the position of any of my current or past employers or collaborators.