Shot transition detection (or simply shot detection), also called cut detection, is a field of research in video processing. Its subject is the automated detection of transitions between shots in digital video, with the purpose of temporal segmentation of videos.
Shot transition detection is used to split up a film into basic temporal units called shots; a shot is a series of interrelated consecutive pictures taken contiguously by a single camera and representing a continuous action in time and space.
This operation is of great use in software for video post-production. It is also a fundamental step in automated indexing and in content-based video retrieval and summarization applications, which provide efficient access to huge video archives. For example, an application may choose a representative picture from each scene to create a visual overview of the whole film, and, by processing such indexes, a search engine can answer queries like "show me all films where there is a scene with a lion in it."
Cut detection can do nothing that a human editor could not do manually; its advantage is that it saves time. Furthermore, due to the increasing use of digital video and, consequently, the growing importance of the aforementioned indexing applications, automatic cut detection has become very important.
In simple terms, cut detection is about finding the positions in a video at which one scene is replaced by another with different visual content. Technically speaking, the following terms are used:
A digital video consists of frames that are presented to the viewer's eye in rapid succession to create the impression of movement. "Digital" in this context means both that a single frame consists of pixels and that the data is present as binary data, such that it can be processed with a computer. Each frame within a digital video can be uniquely identified by its frame index, a serial number.
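For illustration, the following Python sketch (using the OpenCV library; the file name is a placeholder) reads the frame with a given index from a digital video:

```python
import cv2  # OpenCV, assumed available for video decoding

cap = cv2.VideoCapture("movie.mp4")    # placeholder file name
cap.set(cv2.CAP_PROP_POS_FRAMES, 100)  # seek to the frame with index 100
ok, frame = cap.read()                 # the frame is a pixel array
if ok:
    print(frame.shape)                 # e.g. (height, width, 3) for a color video
cap.release()
```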
A shot is a sequence of frames shot uninterruptedly by one camera. Several film transitions are commonly used in film editing to juxtapose adjacent shots; in the context of shot transition detection they are usually grouped into two types:

- Abrupt transitions (hard cuts): a sudden change from one shot to the next, i.e., one frame belongs to the first shot and the following frame to the second.
- Gradual transitions (soft cuts): the two shots are combined over several frames, for example by a fade, dissolve, or wipe, so that one shot is gradually replaced by the other.
"Detecting a cut" means that the position of a cut is gained; more precisely a hard cut is gained as "hard cut between frame i and frame i+1", a soft cut as "soft cut from frame i to frame j".
A transition that is detected correctly is called a hit, a cut that is present but was not detected is called a missed hit, and a position at which the software assumes a cut although none is present is called a false hit.
An introduction to film editing and an exhaustive list of shot transition techniques can be found in the article on film editing.
Although cut detection appears to be a simple task for a human being, it is a non-trivial task for computers. It would be a trivial problem if each frame of a video were enriched with additional information about when and by which camera it was taken. Possibly no algorithm for cut detection will ever be able to detect all cuts with certainty, unless it is provided with powerful artificial intelligence.
While most algorithms achieve good results with hard cuts, many fail to recognize soft cuts. Hard cuts usually go together with sudden and extensive changes in the visual content, while soft cuts feature slow and gradual changes. A human being can compensate for this lack of visual diversity by understanding the meaning of a scene: while a computer takes a black line wiping a shot away to be "just another regular object moving slowly through the ongoing scene", a person understands that the scene ends and is replaced by a black screen.
Each method for cut detection works on a two-phase principle:

1. Scoring: each pair of consecutive frames of the video is assigned a score that represents how different the visual content of the two frames is.
2. Decision: the scores are evaluated, and a cut is declared if the score is considered high enough, typically because it exceeds a threshold.
This principle is error-prone. First, because even minor exceedances of the threshold produce a hit, it must be ensured that phase one scatters the values widely to maximize the average difference between the scores for "cut" and "no cut". Second, the threshold must be chosen with care; useful values can usually be obtained with statistical methods.
There are many possible scores used to assess the differences in visual content; some of the most common are:

- Sum of absolute differences (SAD): the pixel-wise difference between two consecutive frames, summed over all pixels.
- Histogram differences: the difference between the color or gray-level histograms of two consecutive frames.
- Edge change ratio (ECR): the relative number of edge pixels that appear or disappear between two consecutive frames.
Finally, a combination of two or more of these scores can improve the performance.
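As an illustration, here is a minimal Python sketch (using the OpenCV library) that computes a pixel-wise difference score and a histogram difference score for every pair of consecutive frames and combines them; the file name, the number of histogram bins, and the equal weighting of the two scores are arbitrary assumptions, not canonical choices:

```python
import cv2
import numpy as np

def sad_score(prev, curr):
    """Pixel-wise score: mean absolute difference, normalized to [0, 1]."""
    return float(cv2.absdiff(prev, curr).mean()) / 255.0

def hist_score(prev, curr, bins=64):
    """Histogram score: total variation distance between the gray-level
    histograms of the two frames, normalized to [0, 1]."""
    g1 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
    h1 = cv2.calcHist([g1], [0], None, [bins], [0, 256]).ravel()
    h2 = cv2.calcHist([g2], [0], None, [bins], [0, 256]).ravel()
    h1 /= h1.sum()
    h2 /= h2.sum()
    return float(np.abs(h1 - h2).sum()) / 2.0

cap = cv2.VideoCapture("movie.mp4")  # placeholder file name
ok, prev = cap.read()
scores = []  # scores[i] describes the pair (frame i, frame i+1)
while ok:
    ok, curr = cap.read()
    if not ok:
        break
    # equal weighting of the two scores is an arbitrary choice
    scores.append(0.5 * sad_score(prev, curr) + 0.5 * hist_score(prev, curr))
    prev = curr
cap.release()
```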
In the decision phase the following approaches are usually used:

- Fixed threshold: a cut is declared whenever a score exceeds a constant threshold chosen in advance.
- Adaptive threshold: each score is compared with a threshold derived from the scores in its local neighborhood, so the detector adapts to the amount of motion in the video.
- Machine learning: a classifier such as an SVM or a neural network is trained to decide from the scores (or from the frames themselves) whether a transition is present.
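A minimal sketch of the two threshold-based approaches, operating on a precomputed list of scores such as the one produced above; the threshold value, window size, and factor are arbitrary assumptions:

```python
import numpy as np

def detect_cuts_fixed(scores, threshold=0.35):
    """Fixed threshold: a hard cut is reported between frame i and
    frame i+1 whenever scores[i] exceeds the constant threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

def detect_cuts_adaptive(scores, window=15, factor=3.0):
    """Adaptive threshold: each score is compared with the mean score of
    its local neighborhood, so slow, global changes (e.g. camera motion)
    are less likely to be mistaken for cuts."""
    scores = np.asarray(scores, dtype=float)
    cuts = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        neighborhood = np.delete(scores[lo:hi], i - lo)  # exclude scores[i]
        if neighborhood.size and scores[i] > factor * neighborhood.mean():
            cuts.append(i)
    return cuts
```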
All of the above algorithms run in O(n), that is, in linear time, where n is the number of frames in the input video. They differ in a constant factor that is determined mostly by the image resolution of the video.
Usually the following three measures are used to measure the quality of a cut detection algorithm:

- Recall is the probability that an existing cut is detected: V = C / (C + M).
- Precision is the probability that an assumed cut is in fact a cut: P = C / (C + F).
- F1 is a combined measure that balances recall and precision: F1 = (2 · P · V) / (P + V).
The symbols stand for: C, the number of correctly detected cuts ("correct hits"); M, the number of cuts that were not detected ("missed hits"); and F, the number of falsely detected cuts ("false hits"). All of these measures take values between 0 and 1; the basic rule is: the higher the value, the better the algorithm performs.
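As a worked example, the following sketch computes the three measures from hypothetical counts of correct, missed, and false hits:

```python
def recall(c, m):
    """V = C / (C + M): share of existing cuts that were detected."""
    return c / (c + m)

def precision(c, f):
    """P = C / (C + F): share of reported cuts that are real cuts."""
    return c / (c + f)

def f1(c, m, f):
    """Harmonic mean of precision and recall."""
    p, v = precision(c, f), recall(c, m)
    return 2 * p * v / (p + v)

# Hypothetical counts: 90 correct hits, 10 missed hits, 5 false hits
print(recall(90, 10))    # 0.9
print(precision(90, 5))  # ~0.947
print(f1(90, 10, 5))     # ~0.923
```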
| Benchmark | Videos | Hours | Frames | Shot transitions | Participants | Years |
|---|---|---|---|---|---|---|
| TRECVid | 12–42 | 4.8–7.5 | 545,068–744,604 | 2,090–4,806 | 57 | 2001–2007 |
| MSU SBD | 31 | 21.45 | 1,900,000+ | 10,883 | 7 | 2020–2021 |
Automatic shot transition detection was one of the tracks of activity within the annual TRECVid benchmarking exercise from 2001 to 2007. In total, 57 algorithms from different research groups were evaluated. The F score was calculated for each algorithm on a dataset that was extended annually.
| Group | F score | Processing speed (compared to real-time) | Open source | Used metrics and technologies |
|---|---|---|---|---|
| Tsinghua U.[2] | 0.897 | ×0.23 | No | Mean and standard deviation of pixel intensities, color histogram, pixel-wise difference, motion vectors |
| NICTA[3] | 0.892 | ×2.30 | No | Machine learning |
| IBM Research[4] | 0.876 | ×0.30 | No | Color histogram, localized edge direction histogram, gray-level thumbnail comparison, frame luminance |
The MSU SBD benchmark compared 6 methods on more than 120 videos from the RAI and MSU CC datasets with different types of scene changes, some of which were added manually.[6] The authors state that the main feature of this benchmark is the complexity of the shot transitions in its dataset. To demonstrate this, they calculate the SI/TI metric of the shots and compare it with other publicly available datasets.
| Algorithm | F score | Processing speed (FPS) | Open source | Used metrics and technologies |
|---|---|---|---|---|
| Saeid Dadkhah[7] | 0.797 | 86 | Yes | Color histogram, adaptive threshold |
| Max Reimann[8] | 0.787 | 76 | Yes | SVM for cuts, neural networks for gradual transitions, color histogram |
| VQMT[9] | 0.777 | 308 | No | Edge histograms, motion compensation, color histograms |
| PySceneDetect[10] | 0.776 | 321 | Yes | Frame intensity |
| FFmpeg[11] | 0.772 | 165 | Yes | Color histogram |