Scale (social sciences) explained

In the social sciences, scaling is the process of measuring or ordering entities with respect to quantitative attributes or traits. For example, a scaling technique might involve estimating individuals' levels of extraversion, or the perceived quality of products. Certain methods of scaling permit estimation of magnitudes on a continuum, while other methods provide only for relative ordering of the entities.

The level of measurement is the type of data that is measured.

The word scale, including in academic literature, is sometimes used to refer to another type of composite measure, the index. The two concepts are, however, different.[1]

Scale construction decisions

Scale construction method

A constructed scale should be representative of the construct it intends to measure.[2] Something similar to the scale a person intends to create may already exist, so including those existing scales, along with possible dependent variables, in one's survey may increase the validity of one's scale.

  1. Begin by generating at least ten items to represent each of the sub-scales. Administer the survey; the more representative and larger the sample, the more credibility one will have in the scales.
  2. Review the means and standard deviations for the items, dropping any items with skewed means or very low variance.
  3. Run an exploratory factor analysis with oblique rotation on the items for the scales; differentiating items by their loadings on the factors is what creates sub-scales that represent the construct. Request factors with eigenvalues greater than 1 (to calculate the eigenvalue for each factor, square its factor loadings and sum down the column). It is easier to group the items by targeted scales: the more distinct the other items, the better the chances that the items will load cleanly on one's own scale.
  4. “Cleanly loaded items” are those items that load at least .40 on one factor and more than .10 greater on that factor than on any others. Identify those in the factor pattern.
  5. “Cross loaded items” are those that do not meet the above criterion. These are candidates to drop.
  6. Identify factors with only a few items that do not represent clear concepts; these are “uninterpretable scales.” Also identify any factors with only one item. These factors and their items are candidates to drop.
  7. Look at the candidates to drop and the factors to be dropped. Is there anything that should be retained because it is critical to one's construct? For example, if a conceptually important item only cross-loads on a factor to be dropped, it is worth keeping for the next round.
  8. Drop the items, and run a confirmatory factor analysis, specifying only the number of factors that remain after dropping the uninterpretable and single-item ones. Go through the process again starting at Step 3. Various reliability measures can also be computed at this stage.
  9. Keep running through the process until one gets “clean factors” (until all factors have cleanly loaded items).
  10. Run the alpha in the statistical program (requesting the alpha if each item is dropped). Any scales with insufficient alphas should be dropped and the process repeated from Step 3. (Standardized coefficient alpha = the number of items squared, multiplied by the average correlation between different items, divided by the sum of all values in the correlation matrix, including the diagonal.)
  11. Run correlational or regression statistics to ensure the validity of the scale. As a best practice, report the final factors and all loadings of one's own scale and of the similar scales selected in an appendix to the created scale.
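The loading-based screening in Steps 3–5 can be sketched in Python. This is a minimal illustration with a made-up pattern matrix; the thresholds follow the criteria above (a primary loading of at least .40, more than .10 above any other), and the helper names are hypothetical:

```python
import numpy as np

def factor_eigenvalues(loadings):
    """Step 3's rule: square the loadings and sum down each column."""
    return (np.asarray(loadings, dtype=float) ** 2).sum(axis=0)

def classify_items(loadings, primary_min=0.40, margin=0.10):
    """Split items into cleanly loaded (Step 4) and cross-loaded (Step 5).

    An item is cleanly loaded when its largest absolute loading is at
    least `primary_min` and more than `margin` above every other loading.
    """
    L = np.abs(np.asarray(loadings, dtype=float))
    clean, cross = [], []
    for i, row in enumerate(L):
        order = np.argsort(row)[::-1]             # factors, strongest first
        top = row[order[0]]
        runner_up = row[order[1]] if len(row) > 1 else 0.0
        if top >= primary_min and top - runner_up > margin:
            clean.append((i, int(order[0])))      # (item, its factor)
        else:
            cross.append(i)                       # candidate to drop

    return clean, cross

# Hypothetical pattern matrix: four items loading on two factors
pattern = [[0.62, 0.05],
           [0.55, 0.48],   # margin under .10 -> cross-loaded
           [0.10, 0.71],
           [0.35, 0.20]]   # primary loading under .40 -> candidate to drop
clean, cross = classify_items(pattern)
```

In practice the loading matrix would come from the factor-analysis output of one's statistical package; the point here is only that the Step 4/5 criteria are mechanical once the pattern matrix is in hand.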

Multi-Item and Single-Item Scales

In most practical situations, multi-item scales predict outcomes more effectively than single items. Single-item measures should be used cautiously, and their use should be limited to specific circumstances.[3] [4]

Criterion                             Multi-item scale                       Single-item scale
Construct concreteness                Abstract                               Concrete
Construct dimensionality/complexity   Multidimensional, moderately complex   Unidimensional or extremely complex
Semantic redundancy                   Low                                    High
Primary role of construct             Dependent or independent variable      Moderator or control variable
Desired precision                     High                                   Low
Monitoring changes                    Appropriate                            Problematic
Sampled population                    Homogeneous                            Diverse
Sample size                           Large                                  Limited
Table: Criteria for Assessing the Potential Use of Single-Item Measures

Data types

See main article: Level of measurement. The type of information collected can influence scale construction. Different types of information are measured in different ways.

  1. Some data are measured at the nominal level. That is, any numbers used are mere labels; they express no mathematical properties. Examples are SKU inventory codes and UPC bar codes.
  2. Some data are measured at the ordinal level. Numbers indicate the relative position of items, but not the magnitude of difference. An example is a preference ranking.
  3. Some data are measured at the interval level. Numbers indicate the magnitude of difference between items, but there is no absolute zero point. Examples are attitude scales and opinion scales.
  4. Some data are measured at the ratio level. Numbers indicate magnitude of difference and there is a fixed zero point. Ratios can be calculated. Examples include: age, income, price, costs, sales revenue, sales volume, and market share.
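The four levels differ in which statistics are meaningful. A small Python sketch with made-up data illustrates one permissible operation per level:

```python
import statistics

# Nominal: numbers/labels carry no mathematical meaning; count or take the mode.
sku_codes = ["A-101", "B-204", "A-101", "C-330"]
most_common = statistics.mode(sku_codes)

# Ordinal: order matters, but the differences between ranks do not.
preference_ranks = [1, 3, 2, 1, 2]        # 1 = most preferred
middle_rank = statistics.median(preference_ranks)

# Interval: differences are meaningful, but there is no absolute zero,
# so a mean is fine while ratios ("twice as favorable") are not.
attitude_scores = [2, 4, 5, 3]            # e.g. a 1-5 agreement item
mean_attitude = statistics.mean(attitude_scores)

# Ratio: a fixed zero point makes ratios meaningful.
prices = [10.0, 25.0]
price_ratio = prices[1] / prices[0]       # 2.5 times as expensive
```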

Composite measures

Composite measures of variables are created by combining two or more separate empirical indicators into a single measure. Composite measures measure complex concepts more adequately than single indicators, extend the range of scores available and are more efficient at handling multiple items.

In addition to scales, there are two other types of composite measures. Indexes are similar to scales except multiple indicators of a variable are combined into a single measure. The index of consumer confidence, for example, is a combination of several measures of consumer attitudes. A typology is similar to an index except the variable is measured at the nominal level.

Indexes are constructed by accumulating scores assigned to individual attributes, while scales are constructed through the assignment of scores to patterns of attributes.
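The distinction can be illustrated in Python: an index simply accumulates item scores, while a scale assigns a score to the pattern of responses. The Guttman-style scoring below is one illustrative convention (items assumed ordered from easiest to hardest), not the only way to score patterns:

```python
# Index: accumulate scores across indicators; item order is irrelevant.
def index_score(responses):
    """Sum of item scores, e.g. the number of 'agree' answers."""
    return sum(responses)

# Scale: score the *pattern* of attributes. In a Guttman-style scale,
# items are ordered by difficulty, so a valid pattern is a run of 1s
# followed by 0s; the score is the length of that run.
def guttman_scale_score(responses):
    score = 0
    for r in responses:          # items ordered easiest -> hardest
        if r:
            score += 1
        else:
            break                # first failure ends the pattern
    return score
```

For example, `index_score([1, 0, 1, 1])` counts three positive answers regardless of where they fall, whereas `guttman_scale_score([1, 1, 0, 1])` stops at the first failure and scores 2, because the pattern, not the total, carries the information.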

While indexes and scales provide measures of a single dimension, typologies are often employed to examine the intersection of two or more dimensions. Typologies are very useful analytical tools and can be easily used as independent variables, although since they are not unidimensional it is difficult to use them as a dependent variable.

Comparative and non-comparative scaling

With comparative scaling, the items are directly compared with each other (example: Does one prefer Pepsi or Coke?). In non-comparative scaling, each item is scaled independently of the others (example: How does one feel about Coke?).
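A minimal Python sketch of the two approaches, using made-up responses for the Coke/Pepsi example:

```python
# Comparative scaling: respondents choose directly between the items.
paired_choices = ["Pepsi", "Coke", "Coke", "Pepsi", "Coke"]
coke_share = paired_choices.count("Coke") / len(paired_choices)   # 0.6

# Non-comparative scaling: each item is rated on its own, independently
# of the others (e.g. on a 1-5 liking scale).
ratings = {"Coke": [4, 5, 3], "Pepsi": [4, 4, 2]}
mean_rating = {brand: sum(r) / len(r) for brand, r in ratings.items()}
```

Note that comparative data are inherently relative (a 60% preference share says nothing about absolute liking), while non-comparative ratings can be compared only after the fact.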

Comparative scaling techniques

Non-comparative scaling techniques

Scale evaluation

Scales should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population, given the scale one has selected. Reliability is the extent to which a scale will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances. Alternative forms reliability checks how similar the results are if the research is repeated using different forms of the scale. Internal consistency reliability checks how well the individual measures included in the scale are converted into a composite measure.
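Internal consistency is commonly summarized by coefficient alpha. A sketch of the standardized form, computed from an inter-item correlation matrix with illustrative numbers:

```python
import numpy as np

def standardized_alpha(R):
    """Standardized coefficient alpha from an inter-item correlation matrix:
    n^2 * (average off-diagonal correlation) / (sum of all entries,
    diagonal included) -- algebraically equal to n*r / (1 + (n-1)*r)."""
    R = np.asarray(R, dtype=float)
    n = R.shape[0]
    mean_r = (R.sum() - np.trace(R)) / (n * (n - 1))
    return n ** 2 * mean_r / R.sum()

# Three items whose pairwise correlations are all 0.5
R = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
alpha = standardized_alpha(R)   # 3*0.5 / (1 + 2*0.5) = 0.75
```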

Scales and indexes have to be validated. Internal validation checks the relation between the individual measures included in the scale, and the composite scale itself. External validation checks the relation between the composite scale and other indicators of the variable, indicators not included in the scale. Content validation (also called face validity) checks how well the scale measures what it is supposed to measure. Criterion validation checks how meaningful the scale criteria are relative to other possible criteria. Construct validation checks what underlying construct is being measured. There are three variants of construct validity. They are convergent validity, discriminant validity, and nomological validity (Campbell and Fiske, 1959; Krus and Ney, 1978). The coefficient of reproducibility indicates how well the data from the individual measures included in the scale can be reconstructed from the composite scale.
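One simple way to compute a coefficient of reproducibility for a Guttman-type scale is to reconstruct each response pattern from its composite score and count mismatches. This reconstruction-from-score variant is one common convention (error counting differs across texts), shown here with made-up patterns:

```python
def reproducibility(patterns):
    """1 - (reconstruction errors / total responses)."""
    errors = total = 0
    for p in patterns:
        score = sum(p)                                 # composite scale score
        ideal = [1] * score + [0] * (len(p) - score)   # pattern implied by score
        errors += sum(a != b for a, b in zip(p, ideal))
        total += len(p)
    return 1 - errors / total

# Three respondents, three items ordered easiest -> hardest;
# the last pattern deviates from its ideal in two places.
patterns = [[1, 1, 0], [1, 0, 0], [1, 0, 1]]
cr = reproducibility(patterns)   # 1 - 2/9 = 7/9
```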

See also

Further reading

External links

Notes and References

  1. Babbie, Earl (2012). The Practice of Social Research. Cengage Learning. p. 162. ISBN 978-1-133-04979-1.
  2. McDonald, Roderick P. (2013). Test Theory: A Unified Treatment. Psychology Press. ISBN 978-1-135-67531-8.
  3. Diamantopoulos, Adamantios; Sarstedt, Marko; Fuchs, Christoph (2012). "Guidelines for choosing between multi-item and single-item scales for construct measurement: a predictive validity perspective". Journal of the Academy of Marketing Science. 40 (3): 434–449. doi:10.1007/s11747-011-0300-3.
  4. Fuchs, Christoph; Diamantopoulos, Adamantios (2009). "Using single-item measures for construct measurement in management research: Conceptual issues and application guidelines". Die Betriebswirtschaft. 69 (2).
  5. Reips, U.-D.; Funke, F. (2008). "Interval level measurement with visual analogue scales in Internet-based research: VAS Generator."