A self-report study is a type of survey, questionnaire, or poll in which respondents read the question and select a response by themselves without any outside interference.[1] A self-report is any method which involves asking a participant about their feelings, attitudes, beliefs and so on. Examples of self-reports are questionnaires and interviews; self-reports are often used as a way of gaining participants' responses in observational studies and experiments.
Self-report studies have validity problems.[2] Patients may exaggerate symptoms in order to make their situation seem worse, or they may under-report the severity or frequency of symptoms in order to minimize their problems. Patients might also simply be mistaken or misremember the material covered by the survey.
Questionnaires are a type of self-report method consisting of a set of questions, usually in a highly structured written form. They can contain both open and closed questions, and participants record their own answers. Interviews are a spoken form of questionnaire in which the interviewer records the responses. Interviews can be structured, with a predetermined set of questions, or unstructured, with no questions decided in advance.

The main strength of self-report methods is that they allow participants to describe their own experiences, rather than requiring researchers to infer these from observation. Questionnaires and interviews can study large samples of people fairly easily and quickly, can examine a large number of variables, and can ask people to reveal behaviour and feelings experienced in real situations. However, participants may not respond truthfully, either because they cannot remember or because they wish to present themselves in a socially acceptable manner. Social desirability bias can be a serious problem with self-report measures, as participants often answer in a way that portrays them in a good light. Questions are not always clear, and if respondents do not really understand a question, the data collected will not be valid. If questionnaires are sent out, say via email or through tutor groups, the response rate can be very low. Questions can also be leading; that is, they may unwittingly push the respondent towards a particular reply.
Unstructured interviews can be very time-consuming and difficult to carry out, whereas structured interviews can restrict respondents' replies. Psychologists therefore often carry out semi-structured interviews, which consist of some pre-determined questions followed by further questions that allow the respondent to develop their answers.
Questionnaires and interviews can use open or closed questions or both.
Closed questions are questions that provide a limited choice (for example, a participant's age or their favorite type of football team), especially if the answer must be taken from a predetermined list. Such questions provide quantitative data, which is easy to analyze. However, these questions do not allow the participant to give in-depth insights.
Open questions are those questions that invite the respondent to provide answers in their own words and provide qualitative data. Although these types of questions are more difficult to analyze, they can produce more in-depth responses and tell the researcher what the participant actually thinks, rather than being restricted by categories.
One of the most common rating scales is the Likert scale. A statement is presented and the participant decides how strongly they agree or disagree with it. For example, the participant rates the statement "Mozzarella cheese is great" with the options "strongly agree", "agree", "undecided", "disagree", and "strongly disagree". One strength of Likert scales is that they can indicate how strongly a participant feels about something, giving more detail than a simple yes/no answer. Another strength is that the data are quantitative and easy to analyse statistically. However, there is a tendency with Likert scales for people to respond towards the middle of the scale, perhaps to appear less extreme. As with any questionnaire, participants may give the answers that they feel they should. Moreover, because the data are quantitative, they do not provide in-depth replies.
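The quantitative character of Likert data can be illustrated with a short sketch. The Python snippet below maps the five response labels from the example above to conventional 1–5 scores and computes a mean; the mapping and the sample responses are illustrative assumptions, not a fixed standard.

```python
# Illustrative Likert scoring: labels mapped to the conventional 1-5 scores.
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "undecided": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score_responses(responses):
    """Convert response labels to numeric scores."""
    return [SCALE[r.lower()] for r in responses]

def mean_score(responses):
    """Average numeric score across a set of responses."""
    scores = score_responses(responses)
    return sum(scores) / len(scores)

responses = ["agree", "strongly agree", "undecided", "agree"]
print(score_responses(responses))  # [4, 5, 3, 4]
print(mean_score(responses))       # 4.0
```

In practice, negatively worded statements would first be reverse-coded before averaging, so that higher scores always point in the same direction.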
Fixed-choice questions are phrased so that the respondent has to make a fixed-choice answer, usually 'yes' or 'no'.
This type of questionnaire is easy to measure and quantify. It also prevents a participant from choosing an option that is not in the list, but respondents may not feel that their desired response is available. For example, a person who dislikes all alcoholic beverages may feel it is inaccurate to choose a favorite alcoholic beverage from a list that includes beer, wine, and liquor but does not offer "none of the above" as an option. Answers to fixed-choice questions are not in-depth.
Reliability refers to how consistent a measuring device is. A measurement is said to be reliable or consistent if it produces similar results when used again in similar circumstances. For example, a speedometer that gave the same readings at the same speed would be reliable; one that did not would be unreliable and of little use. Importantly, the reliability of self-report measures, such as psychometric tests and questionnaires, can be assessed using the split-half method. This involves splitting a test into two halves and having the same participant complete both.
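The split-half method can be sketched numerically: score the two halves of the test separately, correlate participants' half-scores, and adjust for the shortened test length with the standard Spearman–Brown correction. In the Python sketch below, the odd/even item split and the sample data are illustrative choices, not a prescribed procedure.

```python
# Sketch of split-half reliability: odd items vs. even items,
# Pearson correlation, then Spearman-Brown correction.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(scores):
    """scores: one list of item scores per participant."""
    odd = [sum(items[0::2]) for items in scores]   # items 1, 3, 5, ...
    even = [sum(items[1::2]) for items in scores]  # items 2, 4, 6, ...
    r = pearson(odd, even)
    # Spearman-Brown correction compensates for each half being
    # only half the length of the full test.
    return 2 * r / (1 + r)

# Invented item scores for four participants on a four-item test.
data = [[4, 5, 4, 5], [2, 1, 2, 1], [3, 3, 3, 3], [5, 4, 5, 4]]
rel = split_half_reliability(data)
print(round(rel, 2))  # 0.91
```

A value near 1 indicates that the two halves rank participants consistently; a low value suggests the items are not measuring the same thing.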
Validity refers to whether a study measures or examines what it claims to measure or examine. Questionnaires are often said to lack validity for a number of reasons: participants may lie, give the answers they think are desired, and so on. One way of assessing the validity of self-report measures is to compare the results of the self-report with another self-report on the same topic (this is called concurrent validity). For example, if an interview is used to investigate sixth-grade students' attitudes toward smoking, the scores could be compared with those from a questionnaire measuring the same students' attitudes toward smoking.
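Assessing concurrent validity in this way amounts to correlating two self-report measures of the same construct. The Python sketch below correlates invented interview and questionnaire scores for the same respondents; the data and the threshold for "high" agreement are illustrative assumptions.

```python
# Illustrative concurrent-validity check: correlate two self-report
# measures of the same construct for the same respondents.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Invented scores: one interview score and one questionnaire score
# per respondent, measuring the same attitude.
interview = [12, 18, 9, 15, 20]
questionnaire = [14, 17, 10, 16, 19]

r = pearson(interview, questionnaire)
print(round(r, 2))  # 0.97
```

A high positive correlation between the two measures supports concurrent validity; a weak correlation suggests that at least one of them is not measuring what it claims to.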
Results of self-report studies have been confirmed by other methods. For example, results of prior self-reported outcomes were confirmed by studies involving smaller participant population using direct observation strategies.[3]
The overarching question asked regarding this strategy is, "Why would the researcher trust what people say about themselves?"[4] When the validity of collected data is challenged, however, there are research tools that can address the problem of respondent bias in self-report studies. These include constructing inventories to minimize respondent distortion, such as scales that assess the attitude of the participant, measure personal bias, and identify levels of resistance, confusion, and insufficient self-reporting time, among others.[5] Leading questions can also be avoided, open questions added to allow respondents to expand upon their replies, and confidentiality reinforced so that respondents give more truthful responses.
Self-report studies have many advantages, but they also suffer from specific disadvantages due to the way that subjects generally behave.[6] Self-reported answers may be exaggerated;[7] respondents may be too embarrassed to reveal private details; and various biases, such as social desirability bias, may affect the results. There are also cases in which respondents guess the hypothesis of the study and provide biased responses that (1) confirm the researcher's conjecture, (2) make them look good, or (3) make them appear more distressed in order to receive promised services.
Subjects may also forget pertinent details. Self-report studies are inherently biased by the person's feelings at the time they fill out the questionnaire: if a person feels bad at the time, their answers will be more negative; if the person feels good, the answers will be more positive.
As with all studies relying on voluntary participation, results can be biased by a lack of respondents, if there are systematic differences between people who respond and people who do not. Care must be taken to avoid biases due to interviewers and their demand characteristics.