High Definition Television
HDTV has been a long time coming, from the psycho-visual experiments by NHK in the 1970s, through the format wars of the 1980s and the IT/TV battles of the late 1990s. The first practical equipment appeared in 1985 and was deployed in Japan, with some pioneers in Europe and the USA. The equipment was mainly analogue but already used a good deal of digital processing in the cameras and recorders.
The politics also heated up around this time, and European manufacturers successfully parted the EU from many millions of ECUs (the predecessor to the Euro) to develop an all-European system. So now we had rival 50 Hz and 60 Hz HDTV production and transmission systems: the Japanese had MUSE (Multiple Sub-Nyquist Sampling Encoding) and the Europeans had HD-MAC (Multiplexed Analogue Components) for transmission. The European lobby made much of the fact that all the receivers would need to be changed if the Japanese system won; they omitted to say that this was also the case if the European system won.
For production equipment, the pinnacle of performance was achieved in the late '80s with large-format CCD cameras and uncompressed digital recorders.
The early '90s was a cooling-off period, though with some useful standardization work, including the much-lauded Common Image Format, which used the same 1920 x 1080 image format for both 50 Hz and 60 Hz.
The end of the '90s was dirty-tricks time. Influential senators remembered that the broadcasters had successfully quarantined spectrum for HDTV broadcasting and put pressure on them to make it so. The broadcasters at this time were preoccupied with the threat of multi-channel using compression over cable and satellite. They coined the phrase DTV (Digital TV) and tried to deflect the politicians. Enter also at this time software vendors who could just about process standard definition in real time and didn't understand the broadcast interlace system. Using political influence and calling in high-school allegiances, they managed initially to dilute the DTV offering to NTSC progressive scan. The broadcasters, now seeing a threat from another direction, then switched back to an HDTV agenda. The outcome of all this was the infamous ATSC (Advanced Television Systems Committee) Table 3, with around 30 different broadcasting formats. The eventual result was a split between broadcasters, with some supporting the normally interlaced Common Image Format of 1920 x 1080 for both production and transmission, and others supporting the 1280 x 720 progressive format much loved by the IT community. Progressive scan eats double the bandwidth at a given frame size and refresh rate, but was easier to process for the novice software writers of the time.
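To put the interlace-versus-progressive bandwidth argument in rough numbers, the raw (uncompressed) data rates of the main contending formats can be sketched as below. The 8-bit 4:2:0 sampling assumption is mine, chosen purely for illustration; broadcast chains of the period used various sampling structures.

```python
def raw_rate_mbps(width, height, frames_per_s, bits_per_pixel=12):
    # 8-bit 4:2:0 sampling averages 12 bits per pixel
    # (8 bits luma + 4 bits shared chroma) - an illustrative assumption
    return width * height * frames_per_s * bits_per_pixel / 1e6

# 1080i at 60 fields/s delivers 30 full frames per second
rate_1080i = raw_rate_mbps(1920, 1080, 30)
# 1080p at the same 60 Hz refresh needs 60 full frames per second
rate_1080p = raw_rate_mbps(1920, 1080, 60)
# 720p60, the progressive format favoured by the IT community
rate_720p = raw_rate_mbps(1280, 720, 60)

print(f"1080i60: {rate_1080i:.0f} Mbit/s")  # ~746 Mbit/s
print(f"1080p60: {rate_1080p:.0f} Mbit/s")  # exactly double 1080i60
print(f"720p60:  {rate_720p:.0f} Mbit/s")   # ~664 Mbit/s
```

The doubling shows up only when comparing progressive against interlaced at the same frame size and refresh rate; 720p60 actually carries slightly fewer raw pixels per second than 1080i60.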
With luck, in Europe the target for mainstream HDTV will be both the Common Image Format (1920 x 1080) and progressive scan. Happily, the NTSC progressive format, also known as 480P, has passed away.
Behind this story, the professional equipment vendors were trying hard to make equipment better, lighter and cheaper. This resulted in a significant degradation of picture quality compared with the costly but superior equipment of the late 1980s. In particular, subsampling luminance and chrominance and then compressing the result to death confused the format debate by seriously degrading the capabilities of the 1080i format. Another interesting point was the realisation that film images in 50 Hz systems were carried as progressive images at 25P. This concept went back to the 1960s but was reinvented as new and promoted heavily as the HD 24P system, when the term 'field dominance' morphed into 'segment dominance'. Woe betide anyone who got the dominance the wrong way round and tried to display motion on a digital projector.
So, moving into the second decade of the 21st century, many broadcasters are on air. The 'religious' arguments around the various formats have quietened down, and the technology is finally approaching the performance of the late '80s, but with a healthier price tag. The prospect is that any newer system (such as UHDTV) will finally abandon interlace.
4K/Ultra High Definition Television
COMING SOON - This page is under construction!
Digital cinema is now fully established. It can be broken into two parts: production (including post-production) and distribution/exhibition.
Digital camera capture is now established in the mainstream, but some movies and many commercials are still shot on 35mm film. Pioneers such as George Lucas were the first to shoot major features using digital cameras. Digital capture has moved on substantially due to the demands of stereoscopic production. Now that creators have learnt to employ digital methods for 3D, they are adopting them for 2D as well. Digital technology is fully established in the post-production area.
Television has had many advanced tools at its disposal for many years, but until recently these tools were not good enough for use on major features. The advent of new digital intermediate tools, running on multiprocessor-based render farms, has changed the situation. TV commercials used very sophisticated secondary colour-correction systems with the ability to isolate and change individual colours in a scene. Film traditionally used a simple three-light process in which a skilled operator could match the overall image to the director's wishes but could not manipulate colours within the scene.
The digital files can be written back to film for traditional distribution. For digital distribution, a Digital Source Master is prepared; this is then encrypted and packaged for delivery by satellite, hard disk or internet. Digital Cinema Initiatives, a coalition of seven (now six) studios, has published a very thorough requirements specification for digital cinema distribution - www.dcimovies.com. This requirements specification was then processed through SMPTE, and the resulting SMPTE standards were in turn processed by ISO to make them international standards.
Special effects have for some time been achieved by scanning the camera negative at high resolution (2048 x 1556 pixels) and storing the images as files. It is now common for direct digital capture to take place on set. The computer-graphics process then takes the file and integrates it into the CGI shot; in fact the CGI may be completely synthetic. The result was recorded back to film, and the negative produced was cut into the movie as normal. The advent of the digital intermediate brings the opportunity to edit and secondary colour-grade the whole movie while it is in the digital domain. The finished file can then either be prepared for digital cinema distribution or written back to negative stock for film distribution. A major benefit was the ability to write many negatives of equal quality, so improving the quality of the film seen in the cinema. As digital cinema roll-out is now approaching 90%, this film distribution process has largely been abandoned by the mainstream industry.
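The storage implications of scanning at 2048 x 1556 are easy to underestimate. As a back-of-envelope sketch, assuming 10-bit RGB per frame and ignoring file headers and word-packing overhead (both simplifications of mine, not figures from the text), each scanned frame and a full feature work out roughly as:

```python
def frame_megabytes(width=2048, height=1556, channels=3, bits_per_channel=10):
    # Raw payload of one scanned 2K frame, ignoring header and packing overhead
    return width * height * channels * bits_per_channel / 8 / 1e6

per_frame = frame_megabytes()              # ~12 MB per frame
per_second = per_frame * 24                # feature material runs at 24 fps
feature = per_second * 2 * 60 * 60 / 1e6   # two-hour feature, in TB

print(f"Per frame:      {per_frame:.1f} MB")
print(f"Per second:     {per_second:.0f} MB/s")
print(f"2-hour feature: {feature:.1f} TB")  # roughly 2 TB uncompressed
```

Numbers of this order explain why digital intermediate work had to wait for multiprocessor render farms and cheap bulk storage before it could handle whole movies rather than individual effects shots.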
When you mention 3D movies, the reaction is often either very pro or very dismissive. The naysayers normally look back to the 1950s and say "It didn’t happen then, so why should it work now?"
Like so many things that have previously been commercial failures, improvements in understanding and technological development can now make a success of it. If we look back to those earlier times, the camera systems were very big and bulky, nothing could be visualised without developing the film, editing was difficult, and projection was a nightmare, requiring two projectors to be started and run in sync while still changing reels. Many of the producers had no real understanding of the human visual system and so shot 3D movies guaranteed to give you a splitting headache! So why should it be different now? There have been many technological advances which have suddenly enabled a 3D renaissance.
Computer Generated Imagery: In the last twenty years, CGI has progressed from being a novelty used for special effects combined with live capture to an art form in itself. For example, the fantastic box-office successes of 'Toy Story', 'Shrek' and a multitude of other blockbusters guaranteed reinvestment in the computer-generated movie. It was not too big a jump to realise that a similar technique could be used to plan a 3D version. All the objects are generated separately by the computers and automatically composited together, so it is possible to decide what depth each object should occupy in each picture. This, combined with several possible synthetic camera positions, allows total control over the depth of each object in each picture. In conjunction with a better understanding of what causes viewer fatigue, and the minimisation of objects rushing out of the screen, Hollywood has succeeded in making CGI movies which are entertaining, full of action and non-tiring.
Live Capture: Most, if not all, 3D movies shot today use digital cameras. Several companies have developed very elaborate camera rigs to allow remote control of such parameters as inter-axial distance, 'toe-in' of the cameras (parallel lenses or lenses pointing inwards), and tracking zooms (both camera lenses zooming in synchronism). These tools give a much better opportunity to make informed choices in production and to shoot non-tiring 3D. Digital production facilitates on-set monitoring and fast editing. There is, however, much debate on the right and wrong ways to shoot 3D. You still see movies shot with basic errors of scale, or foregrounds/backgrounds that cause headaches. In addition to the well-established use of 24 fps, we now also have movies such as "The Hobbit" using higher frame rates (48 fps).
Dimensionalization is the technique of taking a 2D production and turning it into a 3D movie. It uses powerful computers and skilled operators to identify all the objects in a frame and reposition them for the required depth effect (depth mapping). This is not a trivial process: repositioning an object may reveal areas behind it for which no image information exists, and the operator must then paint in the missing information by hand. Though time-consuming and relatively expensive, the results can be excellent and may in fact be better than those of shooting 3D live. 'Avatar' set the quality bar very high for live 3D production, and several new 2D-to-3D conversion technologies have since been introduced. Unfortunately, some of these have been sadly lacking, and the studios are having to take more care about how they allocate the work - the shortest time and lowest budget don't always ensure a good result!
Exhibiting 3D movies on film was once a nightmare. Getting the projected images to overlap optically or mechanically, and slaving the projectors so that no frames were lost or distorted in one channel only, was a constant problem for the projectionist. The biggest enabler for the modern renaissance in 3D projection was the digital projector, which can project 3D through a single lens. It is also possible to slave together two digital projectors (one per eye); this is sometimes done to increase brightness, or to achieve perfectly time-coincident projection of each eye. "The Hobbit" was shot at 48 fps per eye to facilitate superior motion portrayal. It is worth noting that perfect projection of high frame-rate images requires either SXRD technology or dual DLP-based technology; single-lens projection using DLP technology introduces temporal-axis distortion, which negates many of the benefits of high frame-rate presentation.