The first video codec in history was a disaster, but it was a milestone more than 50 years in the making


Watching movies on a computer is trivial nowadays, but the road to turning bits and pixels into an encoded video that does not take up an enormous amount of space was not an easy one. The first digital video compression standard was published in 1984, and the result was so disastrous that there were hardly any implementations. Even so, it was a milestone that would soon lead to other, far more widespread codecs.

We are talking about the H.120 format, but it took more than 50 years of research into compression algorithms to get there. We must go back to 1929 to find the origin of the idea of inter-frame compression: the technique that exploits the relationship between consecutive frames so that only the changes from one frame to the next need to be stored. Without this idea, today's videos would take up so much space that playing them would be practically impossible.
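The core of inter-frame compression can be sketched in a few lines. This is a minimal, illustrative example (the function names and toy 1-D "frames" are my own; real codecs work on 2-D blocks with motion compensation): the first frame is stored whole and every later frame is stored as per-pixel differences from its predecessor.

```python
# Minimal sketch of inter-frame compression: store the first frame in
# full, then only the per-pixel differences for each later frame.
# Toy 1-D frames for illustration; real codecs use 2-D blocks and
# motion compensation.

def encode_deltas(frames):
    """Return the first frame plus the difference of each later frame."""
    encoded = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def decode_deltas(encoded):
    """Rebuild the full frames by accumulating the differences."""
    frames = [list(encoded[0])]
    for delta in encoded[1:]:
        frames.append([p + d for p, d in zip(frames[-1], delta)])
    return frames

frames = [[10, 10, 10], [10, 12, 10], [10, 12, 11]]
encoded = encode_deltas(frames)
assert decode_deltas(encoded) == frames
```

Because consecutive frames are usually similar, most entries in each difference frame are zero, and runs of zeros are exactly what entropy coding stores cheaply.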

The British engineer R. D. Kell was the one who devised the concept, although he did so with analog video in mind. Even so, it is digital video that has benefited most from it, and the most modern codecs are still based on this idea.


It was at Bell Labs, AT&T's mythical laboratories, where the next step toward effective video compression was taken. In 1952, Bernard M. Oliver and C. W. Harrison suggested applying differential pulse code modulation (DPCM) to video, in addition to radio, where it was most commonly used. It is a technique in which samples are taken and each future value is predicted from previous ones, so only the prediction error needs to be transmitted. You don't get great precision, but you don't need that much data either.
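The predict-and-send-the-error idea can be sketched as follows. This is a hedged toy example, not Oliver and Harrison's actual scheme: the predictor is simply the previous reconstructed sample, and the quantization step of 4 is an arbitrary illustrative choice.

```python
# Sketch of DPCM on a 1-D signal: predict each sample from the previous
# reconstructed one and transmit only the quantized prediction error.
# The step size (4) and the trivial "previous sample" predictor are
# illustrative simplifications.

def dpcm_encode(samples, step=4):
    residuals, prediction = [], 0
    for s in samples:
        q = round((s - prediction) / step)  # quantized prediction error
        residuals.append(q)
        prediction += q * step              # track the decoder's state
    return residuals

def dpcm_decode(residuals, step=4):
    out, prediction = [], 0
    for q in residuals:
        prediction += q * step
        out.append(prediction)
    return out

signal = [0, 3, 8, 14, 13, 9]
decoded = dpcm_decode(dpcm_encode(signal))
# Reconstruction is approximate, but each sample stays within step/2
# of the original because the encoder mirrors the decoder's state.
```

Note how the encoder updates `prediction` from the *quantized* residual: both sides drift in lockstep, so the quantization error never accumulates.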

From key frames to the first video compression standard

This work on predicting images in order to compress video eventually crystallized into the concept of key frames, presented in 1959 by NHK (the Japan Broadcasting Corporation). It involves choosing a set of keyframes spaced throughout the video and then encoding only the changes between them. These reference points scattered throughout the video save a lot of space.


In the early days of digital video, in the 1970s, the techniques used were borrowed from television and telecommunications. But simply digitizing analog video that way was not very efficient, and the need for better video compression techniques began to gain momentum.

In 1974, Professor Nasir Ahmed of Kansas State University published the discrete cosine transform (DCT), a mathematical technique that ended up underpinning the first practical digital video compression algorithms. The DCT breaks an image down into components of different frequencies.

During encoding, the less important frequencies are discarded and only those needed to reconstruct the image acceptably are kept. Discarding frequencies lets the video take up far less space without noticeably affecting the result. It was, in a way, the same philosophy as keyframes, keep only what matters, but applied in the frequency domain.
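The discard-the-high-frequencies step can be shown with a tiny 1-D DCT. This is a deliberately simplified sketch of my own: real codecs apply a 2-D DCT to 8×8 blocks and *quantize* coefficients rather than zeroing them outright, and the forward/inverse pair here is the standard unnormalized DCT-II and its inverse.

```python
import math

# Illustrative 1-D DCT: transform a row of pixels into frequency
# coefficients, drop the high frequencies, and invert. Real codecs use
# a 2-D DCT on 8x8 blocks and quantize instead of zeroing.

def dct(x):
    """Unnormalized DCT-II of a list of samples."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N)) for k in range(N)]

def idct(X):
    """Inverse of the unnormalized DCT-II above."""
    N = len(X)
    return [X[0] / N + 2 / N * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                   for k in range(1, N))
            for n in range(N)]

row = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values
coeffs = dct(row)
kept = coeffs[:4] + [0.0] * 4            # discard the 4 highest frequencies
approx = idct(kept)
# `approx` is close to `row` even though half the coefficients are gone,
# because most of the image's energy sits in the low frequencies.
```

Smooth image regions concentrate their energy in the first few coefficients, which is exactly why throwing away the tail costs so little visually.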


This DCT algorithm was optimized by multiple engineers and companies over several years, until in 1977 Wen-Hsiung Chen combined all the previous research and published a paper describing DCT video compression with motion compensation. There, the video compression approach that has been used ever since was defined. A few years later, in 1984, this knowledge led to the creation of the H.120 standard.


But this first compression standard was a disaster, to the point that hardly any codecs were ever implemented for it. The video quality was poor, with too much detail lost from one frame to the next. H.120 worked reasonably well with semi-static video, such as a video conference, but when the image changed a lot, too much information was lost. The standard operated at 1,544 kbps in NTSC regions and 2,048 kbps in PAL regions.

The solution came in 1988 with H.261, the first compression standard whose techniques were actually effective and the first to be commercially successful. While H.120 was practically forgotten, H.261 became a standard used and supported by video companies as important as Hitachi, NTT and Toshiba.

H.261 introduced a block-based coding technique that is still used in most modern codecs. Its maximum resolution was 352 × 288 pixels.
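The block-based layout H.261 pioneered starts by cutting the frame into fixed-size pieces (H.261 grouped pixels into 16×16 macroblocks) that are then transformed and coded independently. A minimal sketch, with illustrative names and a small 4-pixel block size for readability:

```python
# Sketch of block-based frame partitioning: split a frame (list of rows)
# into fixed-size square blocks, the unit that block-based codecs
# transform and code independently. H.261 used 16x16 macroblocks; the
# 4x4 size here is just for a readable example.

def split_blocks(frame, size):
    h, w = len(frame), len(frame[0])
    blocks = []
    for top in range(0, h, size):
        for left in range(0, w, size):
            blocks.append([row[left:left + size]
                           for row in frame[top:top + size]])
    return blocks

frame = [[x + 10 * y for x in range(8)] for y in range(4)]  # 8x4 frame
blocks = split_blocks(frame, 4)
assert len(blocks) == 2          # two 4x4 blocks, side by side
assert blocks[0][0] == [0, 1, 2, 3]
```

Working block by block is what makes per-region decisions possible: a codec can skip unchanged blocks entirely and spend its bits only where the image actually moved.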

Since then, other more modern codecs such as MPEG-1 and H.263 have arrived, as well as the recent H.266/VVC and AV1, but the first encoded digital videos date from the 1980s. They were not as efficient as today's, but without those first compression algorithms we could not now fit 4K videos with huge bitrates into little more than a few GB.

In Xataka | Ten documentaries on the history of computing that you cannot miss

