Shot transition detection explained

Shot transition detection (or simply shot detection), also called cut detection, is a field of research in video processing. Its subject is the automated detection of transitions between shots in digital video, with the purpose of temporally segmenting videos.

Use

Shot transition detection is used to split up a film into basic temporal units called shots; a shot is a series of interrelated consecutive pictures taken contiguously by a single camera and representing a continuous action in time and space.

This operation is of great use in software for post-production of videos. It is also a fundamental step in automated indexing and in content-based video retrieval or summarization applications, which provide efficient access to huge video archives. For example, an application may choose a representative picture from each scene to create a visual overview of the whole film, and, by processing such indexes, a search engine can answer queries like "show me all films where there's a scene with a lion in it."

Cut detection can do nothing that a human editor could not do manually; its advantage is that it saves time. Moreover, with the growing use of digital video and, consequently, the growing importance of the indexing applications mentioned above, automatic cut detection has become increasingly important.

Basic technical terms

In simple terms, cut detection is about finding the positions in a video at which one scene is replaced by another with different visual content. Technically speaking, the following terms are used:

A digital video consists of frames that are presented to the viewer's eye in rapid succession to create the impression of movement. "Digital" in this context means both that a single frame consists of pixels and that the data is stored in binary form, so that it can be processed by a computer. Each frame within a digital video can be uniquely identified by its frame index, a serial number.

A shot is a sequence of frames shot uninterruptedly by one camera. There are several film transitions usually used in film editing to juxtapose adjacent shots; in the context of shot transition detection they are usually grouped into two types:

  - Abrupt transitions (hard cuts): a sudden change from one shot to the next, so that one frame belongs to the first shot and the following frame to the second.
  - Gradual transitions (soft cuts): the two shots are combined over several frames using effects such as fades, dissolves and wipes.

"Detecting a cut" means that the position of a cut is obtained; more precisely, a hard cut is reported as "hard cut between frame i and frame i+1", a soft cut as "soft cut from frame i to frame j".

A transition that is detected correctly is called a hit; a cut that is present but was not detected is called a missed hit; and a position at which the software assumes a cut, but where no cut is actually present, is called a false hit.

An introduction to film editing and an exhaustive list of shot transition techniques can be found in the article on film editing.

Vastness of the problem

Although cut detection appears to be a simple task for a human being, it is a non-trivial task for computers. Cut detection would be a trivial problem if each frame of a video was enriched with additional information about when and by which camera it was taken. Possibly no algorithm for cut detection will ever be able to detect all cuts with certainty, unless it is provided with powerful artificial intelligence.

While most algorithms achieve good results with hard cuts, many fail at recognizing soft cuts. Hard cuts usually go together with sudden and extensive changes in the visual content, while soft cuts feature slow and gradual changes. A human being can compensate for this lack of visual diversity by understanding the meaning of a scene. While a computer assumes a black line wiping a shot away to be "just another regular object moving slowly through the ongoing scene", a person understands that the scene ends and is replaced by a black screen.

Methods

Each method for cut detection works on a two-phase principle:

  1. Scoring – Each pair of consecutive frames of a digital video is given a score that represents the similarity or dissimilarity between them.
  2. Decision – All scores calculated in the first phase are evaluated, and a cut is detected wherever the score exceeds a given threshold.
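The two phases above can be sketched in a few lines of Python. The score (mean absolute pixel difference) and the threshold value are illustrative choices for this sketch, not a fixed standard; frames are modelled as flat lists of grayscale intensities.

```python
# Minimal sketch of the two-phase principle on synthetic grayscale frames.

def score_pair(frame_a, frame_b):
    """Phase 1 (scoring): dissimilarity of two consecutive frames,
    here the mean absolute pixel difference."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_cuts(frames, threshold=50.0):
    """Phase 2 (decision): report a hard cut between frames i and i+1
    whenever the score exceeds the threshold."""
    cuts = []
    for i in range(len(frames) - 1):
        if score_pair(frames[i], frames[i + 1]) > threshold:
            cuts.append(i)
    return cuts

# Two nearly identical dark frames, then an abrupt change to a bright frame.
video = [[10] * 16, [12] * 16, [200] * 16]
print(detect_cuts(video))  # [1] -> a hard cut between frame 1 and frame 2
```

A real detector would decode frames with a library such as OpenCV or FFmpeg, but the control flow is the same: score consecutive pairs, then threshold the scores.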

This principle is error-prone. First, because even a slight exceedance of the threshold produces a hit, phase one must scatter the scores widely, maximizing the average difference between the scores for "cut" and "no cut". Second, the threshold must be chosen with care; useful values can usually be obtained with statistical methods.

Scoring

There are many possible scores used to assess the differences in visual content; some of the most common are:

  - Sum of absolute differences (SAD): the two consecutive frames are compared pixel by pixel, summing up the absolute values of the differences between corresponding pixels. SAD is very sensitive even to small changes such as camera or object motion.
  - Histogram differences (HD): the histograms of the two consecutive frames are compared; a histogram counts how many pixels of each intensity or color a frame contains. HD is robust against motion, but can miss cuts between shots with similar color distributions.
  - Edge change ratio (ECR): the frames are turned into edge images, and the numbers of entering and exiting edge pixels between consecutive frames are compared.

Finally, a combination of two or more of these scores can improve the performance.
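For illustration, two of the most common scores, the pixel-wise sum of absolute differences (SAD) and the histogram difference (HD), can be sketched as follows. Frames are modelled as flat lists of grayscale intensities, and the bin count is an illustrative parameter:

```python
# SAD compares pixels in place; HD compares intensity distributions.

def sad(frame_a, frame_b):
    """Sum of absolute pixel-wise differences."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b))

def histogram(frame, bins=4, max_val=256):
    """Count pixels falling into each intensity bin."""
    hist = [0] * bins
    for px in frame:
        hist[px * bins // max_val] += 1
    return hist

def hist_diff(frame_a, frame_b, bins=4):
    """Sum of absolute bin-wise histogram differences."""
    ha, hb = histogram(frame_a, bins), histogram(frame_b, bins)
    return sum(abs(x - y) for x, y in zip(ha, hb))

# A camera pan shifts pixels but keeps their distribution: SAD is large,
# while HD stays 0 -- which is why HD produces fewer false hits on motion.
a = [0, 50, 100, 150]
b = [150, 0, 50, 100]  # same pixel values, shifted one position
print(sad(a, b), hist_diff(a, b))  # 300 0
```

The example also shows the trade-off mentioned above: HD would likewise score 0 for a cut between two different shots that happen to share a color distribution.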

Decision

In the decision phase the following approaches are usually used:

  - Fixed threshold: a cut is declared whenever the score exceeds a constant threshold chosen in advance.
  - Adaptive threshold: the score is compared with a threshold derived from the scores in its temporal neighbourhood.
  - Machine learning: a trained classifier decides, from the scores and possibly further features, whether a transition is present.
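One common decision strategy is an adaptive threshold: instead of comparing each score against a global constant, it is compared against statistics of its temporal neighbourhood, which copes better with scenes of varying activity. A sketch, where the window size and factor are illustrative parameters:

```python
# Adaptive-threshold decision: a score marks a cut only if it exceeds the
# mean of its neighbouring scores by a chosen factor.

def adaptive_cuts(scores, window=2, factor=3.0):
    cuts = []
    for i, s in enumerate(scores):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        neighbours = scores[lo:i] + scores[i + 1:hi]
        if not neighbours:  # degenerate one-score input
            continue
        local_mean = sum(neighbours) / len(neighbours)
        if s > factor * local_mean:
            cuts.append(i)
    return cuts

# A mostly flat score sequence with one sharp peak at index 3.
print(adaptive_cuts([2, 3, 2, 40, 3, 2, 2]))  # [3]
```

With a fixed global threshold, a noisy high-motion scene would either flood the output with false hits or force the threshold so high that real cuts in quiet scenes are missed; the local comparison avoids both.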

Cost

All of the above algorithms complete in O(n), that is, they run in linear time, where n is the number of frames in the input video. They differ in a constant factor that is determined mostly by the image resolution of the video.

Measures for quality

Usually the following three measures are used to measure the quality of a cut detection algorithm:

  - Recall: V = C / (C + M), the probability that an existing cut is detected.
  - Precision: P = C / (C + F), the probability that a detected cut is in fact a cut.
  - F1: the harmonic mean of recall and precision, F1 = (2 · P · V) / (P + V).

The symbols stand for: C, the number of correctly detected cuts ("correct hits"); M, the number of cuts that were present but not detected ("missed hits"); and F, the number of falsely detected cuts ("false hits"). All three measures take values between 0 and 1. The basic rule is: the higher the value, the better the algorithm performs.
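From the counts C, M and F, the standard quality measures recall V = C/(C+M), precision P = C/(C+F) and the F1 score 2PV/(P+V) follow directly; the counts in the usage example are invented for illustration:

```python
# Recall, precision and F1 from hit counts:
# C correct hits, M missed hits, F false hits.

def quality(C, M, F):
    recall = C / (C + M)        # fraction of real cuts that were found
    precision = C / (C + F)     # fraction of reported cuts that are real
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return recall, precision, f1

# Example: 90 cuts found correctly, 10 missed, 30 false alarms.
r, p, f1 = quality(90, 10, 30)
print(round(r, 2), round(p, 2), round(f1, 2))  # 0.9 0.75 0.82
```

Note how F1 penalizes an imbalance: a detector that reports a cut at every frame reaches recall 1 but collapses in precision, and vice versa.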

Benchmarks

Comparison of benchmarks (videos / hours / frames / shot transitions / participants / years):
  TRECVid: 12–42 / 4.8–7.5 / 545,068–744,604 / 2,090–4,806 / 57 / 2001–2007
  MSU SBD: 31 / 21.45 / 1,900,000+ / 10,883 / 7 / 2020–2021

TRECVid SBD Benchmark 2001–2007[1]

Automatic shot transition detection was one of the tracks of activity within the annual TRECVid benchmarking exercise from 2001 to 2007. In total, 57 algorithms from different research groups took part. The F score was calculated for each algorithm on a dataset that was extended annually.

Top research groups (F score / processing speed relative to real time / open source / metrics and technologies used):
  Tsinghua U.[2]: 0.897 / ×0.23 / no / mean and standard deviation of pixel intensities, color histogram, pixel-wise difference, motion vectors
  NICTA[3]: 0.892 / ×2.30 / no / machine learning
  IBM Research[4]: 0.876 / ×0.30 / no / color histogram, localized edge direction histogram, gray-level thumbnail comparison, frame luminance

MSU SBD Benchmark 2020–2021[5]

The benchmark compared 6 methods on more than 120 videos from the RAI and MSU CC datasets with different types of scene changes, some of which were added manually.[6] The authors state that the main feature of this benchmark is the complexity of the shot transitions in the dataset. To demonstrate this, they calculate the SI/TI metric of the shots and compare it with other publicly available datasets.

Top algorithms (F score / processing speed in FPS / open source / metrics and technologies used):
  Saeid Dadkhah[7]: 0.797 / 86 / yes / color histogram, adaptive threshold
  Max Reimann[8]: 0.787 / 76 / yes / SVM for cuts, neural networks for gradual transitions, color histogram
  VQMT[9]: 0.777 / 308 / no / edge histograms, motion compensation, color histograms
  PySceneDetect[10]: 0.776 / 321 / yes / frame intensity
  FFmpeg[11]: 0.772 / 165 / yes / color histogram

Notes and References

  1. Smeaton, A. F., Over, P., & Doherty, A. R. (2010). Video shot boundary detection: Seven years of TRECVid activity. Computer Vision and Image Understanding, 114(4), 411–418.
  2. Yuan, J., Zheng, W., Chen, L., Ding, D., Wang, D., Tong, Z., Wang, H., Wu, J., Li, J., Lin, F., & Zhang, B. (2004). Tsinghua University at TRECVID 2004: Shot Boundary Detection and High-Level Feature Extraction. TRECVID.
  3. Yu, Z., Vishwanathan, S., & Smola, A. (2005). NICTA at TRECVID 2005 Shot Boundary Detection Task. TRECVID.
  4. Amir, A. (2003). The IBM Shot Boundary Detection System at TRECVID 2003. In TRECVID 2005 Workshop Notebook Papers. National Institute of Standards and Technology, MD, USA.
  5. "MSU SBD Benchmark 2020." Archived 2021-02-13 at https://web.archive.org/web/20210213052638/http://videoprocessing.ml/benchmarks/sbd.html (retrieved 2021-02-19; original link dead).
  6. "MSU SBD Benchmark 2020." Archived 2021-02-13 at https://web.archive.org/web/20210213052638/http://videoprocessing.ml/benchmarks/sbd.html#methodology (retrieved 2021-02-19; original link dead).
  7. "SaeidDadkhah/Shot-Boundary-Detection." GitHub. Retrieved 19 September 2021.
  8. "Shot-Boundary-Detection." GitHub. Retrieved 11 September 2021.
  9. "MSU Scene Change Detector (SCD)." Web site.
  10. "Home - PySceneDetect." Web site.
  11. "Ffprobe Documentation." Web site.