The CRAAP test is a method for evaluating the objective reliability of information sources across academic disciplines. CRAAP is an acronym for Currency, Relevance, Authority, Accuracy, and Purpose. Because a vast number of sources exist online, it can be difficult to tell which are trustworthy to use in research. The CRAAP test aims to make it easier for educators and students to determine whether their sources can be trusted; by applying the test while evaluating sources, a researcher can reduce the likelihood of using unreliable information. The test, developed by Sarah Blakeslee and her team of librarians at California State University, Chico (CSU Chico),[1] is used mainly by librarians in higher education. It is one of various approaches to source criticism.
The test was introduced by Sarah Blakeslee[2][3] in the spring of 2004, while she was creating a workshop for first-year instructors. Frustrated that she could not remember the criteria for evaluating different sources, Blakeslee devised the acronym to give students an easier way to determine which sources are credible. One test that preceded the CRAAP test is the SAILS test (Standardized Assessment of Information Literacy Skills), created in 2002 by a group of librarians at Kent State University to assess students' information literacy skills. The SAILS test focuses more on scores as a quantitative measure of how well students evaluate their sources.[4] While the SAILS test is more specific in its terms of evaluation, it shares the same objectives as the CRAAP test.
One university has used the CRAAP test to teach students how to evaluate online content. In a 2017 article, Cara Berg, a reference librarian and co-coordinator of user education at William Paterson University, emphasized website evaluation as a tool for active research.[5] At Berg's university, for example, library instruction is given to roughly 300 classes across different subjects, each requiring students to look up sources for research. Website evaluation using the CRAAP test was incorporated into the first-year seminar at this university to help students hone their research skills.
When the CRAAP test was first implemented at William Paterson University, there were some practical challenges. The website evaluation workshop felt rushed, and in most cases librarians could not cover all the material in one class session. Because the website evaluation portion was rushed for lack of time, student performance on an assessment of website evaluation was poor. To address these problems, the librarians developed a "flipped" method in which students watched a video covering two of the three workshop sections on their own time, leaving the full class period for in-class instruction on website evaluation. Student performance on assessments of their knowledge of CRAAP for website evaluation improved after this change.
The CRAAP test is generally used in library instruction as part of a first-year seminar, which students at William Paterson University must complete as a graduation requirement. Beyond English courses, many other courses have adopted the method, including science and engineering classes, where the test is applied in the same way as in website evaluation. Universities that use the CRAAP test include Central Michigan University,[6] Benedictine University,[7] and the Community College of Baltimore County,[8] among others. Other schools use the test to help students perform well on assignments in subjects that require research papers.
See also: Heuristic-systematic model of information processing.
In 2004, Marc Meola's paper "Chucking the Checklist" critiqued the checklist approach to evaluating information,[9] and since then librarians and educators have explored alternative approaches.
Mike Caulfield, who has criticized some uses of the CRAAP test in information literacy,[10] has emphasized an alternative approach using step-by-step heuristics that can be summarized by the acronym SIFT: "Stop; Investigate the source; Find better coverage; Trace claims, quotes, and media to the original context".[11][12]
In a December 2019 article, Jennifer A. Fielding raised the issue that the CRAAP method's focus is on a "deep-dive" into the website being evaluated, but noted that "in recent years the dissemination of mis- and disinformation online has become increasingly sophisticated and prolific, so restricting analysis to a single website's content without understanding how the site relates to a wider scope now has the potential to facilitate the acceptance of misinformation as fact."[13] Fielding contrasted use of the CRAAP method, a "vertical reading" of a single website, with "lateral reading", a fact-checking method to find and compare multiple sources of information on the same topic or event.[13]
In a 2020 working paper, Sam Wineburg, Joel Breakstone, Nadav Ziv and Mark Smith found that using the CRAAP method for information literacy education makes "students susceptible to misinformation". According to these authors, the method needs thorough adaptation in order to help students detect fake news and biased or satirical sources in the digital age.[14]