Overscan is a behaviour of certain television sets in which part of the input picture is cut off by the visible bounds of the screen. It exists because cathode-ray tube (CRT) television sets from the 1930s to the early 2000s were highly variable in how the video image was positioned within the borders of the screen. It therefore became common practice to produce video signals with black edges around the picture, which the television was meant to discard.
Early analog televisions varied in the displayed image because of manufacturing tolerances. There were also effects from early design limitations of power supplies, whose DC voltage was not regulated as well as in later designs. This could cause the image size to change with normal variations in the AC line voltage, as well as a phenomenon called blooming, in which the image grew slightly when a brighter overall picture was displayed, because the increased electron beam current caused the CRT anode voltage to drop. Because of this, TV producers could not be certain where the visible edges of the image would be. To compensate, they defined three areas: the action-safe area, the title-safe area, and the overscan area.[1]
A significant number of people would still see some of the overscan area, so while nothing important in a scene would be placed there, it also had to be kept free of microphones, stage hands, and other distractions. Studio monitors and camera viewfinders were set to show this area, so that producers and directors could make certain it was clear of unwanted elements. When used, this mode is called underscan.[2]
Despite the wide adoption of LCD TVs, which do not require overscan because their image size remains the same irrespective of voltage variations, many LCD TVs still come with overscan enabled by default; it can, however, be disabled by the user through the TV's on-screen menus.[3][4]
Today's displays, driven by digital signals (such as DVI, HDMI and DisplayPort) and based on fixed-pixel flat panel technology (such as liquid crystal displays), can safely assume that all pixels are visible to the viewer. On a digital display driven from a digital signal, no adjustment is necessary, because every pixel in the signal is unequivocally mapped to a physical pixel on the display. As overscan reduces picture quality, it is undesirable for digital flat panels;[5] disabling it in favour of a one-to-one pixel mapping is therefore preferred.[6] When driven by analog video signals such as VGA, however, displays are subject to timing variations and cannot achieve this level of precision.
CRTs made for computer display are set to underscan, with an adjustable border that is usually colored black. Some 1980s home computers, such as the Apple IIGS, could even change the border color. The border changes size and shape as needed to absorb the timing tolerances of low-precision displays (later models allow precise calibration to minimise or eliminate it). As such, computer CRTs use less of the physical screen area than TVs, so that all information can be shown at all times.
Computer CRT monitors usually have a black border (unless a user fine-tunes the geometry to minimize it); the border can be seen in the video card timings, which include more lines than the desktop uses. When a computer CRT is advertised as 17-inch (16-inch viewable), a diagonal inch of the tube is covered by the plastic cabinet; with the geometry calibrations at their defaults, the black border occupies this hidden inch or more. LCDs with analog input need to deliberately identify and ignore this part of the signal on all four sides.
Video game systems have been designed to keep important game action in the title safe area. Older systems did this with borders: for example, the Super Nintendo Entertainment System windowboxed the image with a black border, visible on some NTSC television sets and all PAL television sets. Newer systems frame content much as live action does, with the overscan area filled with extraneous details.[7]
Within the wide diversity of home computers that arose during the 1980s and early 1990s, many machines such as the ZX Spectrum or Commodore 64 had borders around the screen, which served as a frame for the display area. Some other computers, such as the Amiga, allowed the video signal timing to be changed to produce overscan. On the C64, Amstrad CPC,[8] and Atari ST, it proved possible to remove the apparently fixed borders with special coding tricks. This effect was called overscan or fullscreen within the 16-bit Atari demoscene, and later enabled the development of a CPU-saving scrolling technique called sync-scrolling.
Analog TV overscan can also be used for datacasting. The simplest forms of this are closed captioning and teletext, both sent in the vertical blanking interval (VBI). Electronic program guides, such as TV Guide On Screen, are also sent in this manner. Microsoft's HOS uses the horizontal overscan instead of the vertical to transmit low-speed program-associated data at 6.4 kbit/s, slow enough to be recorded on a VCR without data corruption.[9] In the U.S., National Datacast used PBS network stations for overscan and other datacasting, but migrated to digital TV with the digital television transition in 2009.
There is no hard technical specification of overscan amounts for the low-definition formats. Some sources say 5%, others 10%, and the figure can be doubled for title safe, which needs more margin than action safe. Overscan amounts for the high-definition formats are specified, as described above.
Different video and broadcast television systems require differing amounts of overscan. Most figures serve as recommendations or typical summaries, as the nature of overscan is to overcome a variable limitation in older technologies such as cathode ray tubes.
However, the European Broadcasting Union has safe area recommendations regarding Television Production for 16:9 Widescreen.[10]
The official BBC guidance[11] gives 3.5% and 5% per side (see pp. 21 and 19). The following is a summary:
Format                      Action safe            Title safe
                            Vertical   Horizontal  Vertical   Horizontal
4:3                         3.5%       3.3%        5.0%       6.7%
16:9                        3.5%       3.5%        5.0%       10.0%
14:9 (displayed on 16:9)    3.5%       10.0%       5.0%       15.0%
4:3 (displayed on 16:9)     3.5%       15.0%       5.0%       17.5%
Microsoft's Xbox game developer guidelines recommend using 85 percent of the screen width and height,[7] or a title safe area of 7.5% per side.
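As an illustration of how such per-side percentages translate into pixel coordinates, the following sketch computes safe-area rectangles for a given frame size; the helper name and the example frame sizes are illustrative assumptions, not taken from the cited guidelines:

```python
def safe_rect(width, height, h_margin, v_margin):
    """Return (left, top, right, bottom) of a safe-area rectangle.

    h_margin and v_margin are per-side margins, given as fractions
    of the frame width and height respectively.
    """
    return (round(width * h_margin),
            round(height * v_margin),
            round(width * (1 - h_margin)),
            round(height * (1 - v_margin)))

# BBC 16:9 figures from the table above, applied to a 1920x1080 frame:
print(safe_rect(1920, 1080, 0.035, 0.035))  # action safe: (67, 38, 1853, 1042)
print(safe_rect(1920, 1080, 0.100, 0.050))  # title safe:  (192, 54, 1728, 1026)

# Xbox guideline of 7.5% per side, applied to a 1280x720 frame:
print(safe_rect(1280, 720, 0.075, 0.075))   # (96, 54, 1184, 666)
```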
Title safe or safe title is an area far enough in from the edges to show text neatly and without distortion. In the worst case, text placed beyond this area may not be displayed at all on some older CRT TV sets.
Action-safe or safe action is the area within which the viewer can be expected to see the action. The transmitted image, however, may extend to the edges of the 720x576 MPEG frame. This creates a requirement unique to television: an image of reasonable quality must exist even where some viewers will never see it. The same concept applies to widescreen cropping.
TV-safe is a generic term that can refer to either of the two areas above.
The sampling (digitising) of standard definition video was defined in Rec. 601 in 1982. In this standard, the existing analogue video signals are sampled at 13.5 MHz. Thus the number of active video pixels per line equals the sample rate multiplied by the active line duration (the part of each analogue video line that contains active picture, i.e. excluding sync pulses, blanking, etc.).
In order to accommodate both formats within the same line length, and to avoid cutting off parts of the active picture if the timing of the analogue video was at or beyond the tolerances set in the relevant standards, a total digital line length of 720 pixels was chosen. Hence the picture will have thin black bars down each side.
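A minimal worked example of this arithmetic is sketched below; the active line durations used (roughly 52 µs for 625-line systems and roughly 52.66 µs for 525-line systems) are typical figures assumed here for illustration rather than values quoted from the standard:

```python
SAMPLE_RATE_HZ = 13.5e6  # Rec. 601 luma sampling rate

def active_samples(active_line_us):
    """Active samples per line = sample rate * active line duration."""
    return SAMPLE_RATE_HZ * active_line_us / 1e6

print(active_samples(52.0))   # 625-line systems: 702.0 samples
print(active_samples(52.66))  # 525-line systems: ~710.9 samples
print(720 - 702)              # 18 spare samples -> thin black bars (625-line)
```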
704 is the nearest multiple of 16 to the actual analogue active line lengths, and avoids having black bars down each side. The use of 704 can be further justified as follows:
The "standard" pixel aspect ratio data found in video editors, certain ITU standards, MPEG, etc. is usually based on an approximation of the above, fudged so that either 704 or 720 pixels equate to the full 4:3 or 16:9 picture, depending on the author.[15]
Although standards-compliant video processing software should never fill all 720 pixels with active picture (only the center 704 pixels must contain the actual image, with the remaining 8 pixels on each side left as black bars), recent digitally generated content (e.g. DVDs of recent movies) often disregards this rule. This makes it difficult to tell whether such pixels represent slightly wider than 4:3 or 16:9 (as they would if following Rec. 601), or exactly 4:3 or 16:9 (as they would if created using one of the fudged 720-referenced pixel aspect ratios).
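To make the distinction concrete, here is a small sketch (a worked example under the assumptions just described, not text from any cited standard) of the pixel aspect ratios implied by treating either the 704-pixel or the full 720-pixel width of a 576-line frame as exactly 4:3:

```python
from fractions import Fraction

def pixel_aspect(display_aspect, width, height):
    """PAR such that a width x height frame fills display_aspect exactly."""
    return display_aspect * Fraction(height, width)

# 625-line frame (576 active lines), 4:3 picture:
print(pixel_aspect(Fraction(4, 3), 704, 576))  # 12/11 (~1.091)
print(pixel_aspect(Fraction(4, 3), 720, 576))  # 16/15 (~1.067)
```

The two conventions differ by roughly 2%, which is why material mastered under one convention appears slightly stretched when interpreted under the other.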
The difference between 702/704 and 720 pixels/line is referred to as nominal analogue blanking.
In broadcasting, analogue system descriptions include the lines not used for the visible picture, whereas digital systems number and encode only the lines that contain something to see.
The 625 (PAL) and 525 (NTSC) frame areas therefore contain even more overscan, which can be seen when vertical hold is lost and the picture rolls.
A portion of these extra lines in analogue systems, known as the vertical blanking interval, can be used for older forms of analogue datacasting such as Teletext services (like Ceefax and subtitling in the UK). The equivalent services on digital television do not use this method, and instead often use MHEG.
The 525-line system originally contained 486 lines of picture, not 480. Since the early 1990s, the digital foundations of most storage and transmission systems have meant that analogue NTSC is expected to carry only 480 lines of picture – see SDTV, EDTV, and DVD-Video. How this affects the interpretation of "the 4:3 ratio" as equal to 704x480 or 704x486 is unclear, but the VGA standard of 640x480 has had a large impact.