In psychology and sociology, a trust metric is a measurement or metric of the degree to which one social actor (an individual or a group) trusts another social actor. Trust metrics may be abstracted in a manner that can be implemented on computers, making them of interest for the study and engineering of virtual communities, such as Friendster and LiveJournal.
Trust escapes simple measurement because its meaning is too subjective for universally reliable metrics, and because it is a mental process, unavailable to instruments. There is a strong argument[1] against the use of simplistic metrics to measure trust, due to the complexity of the process and the 'embeddedness' of trust, which makes it impossible to isolate trust from related factors.
There is no generally agreed set of properties that makes one trust metric better than another, as each metric is designed to serve different purposes; e.g. [2] provides a classification scheme for trust metrics. Two groups of trust metrics can be identified:
Trust metrics enable trust modelling[3] and reasoning about trust. They are closely related to reputation systems. Simple forms of binary trust metrics can be found e.g. in PGP.[4] The first commercial forms of trust metrics in computer software were in applications like eBay's Feedback Rating. Slashdot introduced its notion of karma, earned for activities perceived to promote group effectiveness, an approach that has been very influential in later virtual communities.
Empirical metrics capture the value of trust by exploring the behavior or introspection of people, to determine the perceived or expressed level of trust. These methods combine a theoretical background (determining what it is that they measure) with a defined set of questions and statistical processing of results.
The willingness to cooperate, as well as actual cooperation, are commonly used both to demonstrate and to measure trust. The actual value (level of trust and/or trustworthiness) is assessed from the difference between observed and hypothetical behaviors, i.e. those that would have been anticipated in the absence of cooperation.
Surveys capture the level of trust by means of observation or introspection, but without engaging in experiments. Respondents usually provide answers to a set of questions or statements, with responses structured e.g. according to a Likert scale. Differentiating factors are the underlying theoretical background and contextual relevance.
Among the earliest surveys are McCroskey's scales,[5] which have been used to determine the authoritativeness (competence) and character (trustworthiness) of speakers. Rempel's trust scale[6] and Rotter's scale[7] are quite popular in determining the level of interpersonal trust in different settings. The Organizational Trust Inventory (OTI)[8] is an example of an exhaustive, theory-driven survey that can be used to determine the level of trust within an organisation.
For a particular research area, a more specific survey can be developed. For example, the interdisciplinary model of trust[9] has been verified using a survey, while [10] uses a survey to establish the relationship between design elements of a web site and its perceived trustworthiness.
Another empirical method to measure trust is to engage participants in experiments, treating the outcomes of such experiments as estimates of trust. Several games and game-like scenarios have been tried, some of which estimate trust or confidence in monetary terms (see [11] for an interesting overview).
Games of trust are designed so that their Nash equilibrium differs from the Pareto optimum: no player can unilaterally increase their own utility by deviating from a selfish strategy, yet cooperating partners can both benefit. Trust can therefore be estimated from the monetary gain attributable to cooperation.
The original 'game of trust' was described in [12] as an abstracted investment game between an investor and a broker. The game can be played once or several times, between randomly chosen players or in pairs that know each other, yielding different results.
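The structure of the investment game can be sketched as follows. This is a minimal illustration, not the original experimental protocol; the endowment, the multiplier of 3, and the behavioural proxies for trust are illustrative assumptions.

```python
# A sketch of one round of the investment ("trust") game: the investor
# sends part of an endowment to the broker, the amount is multiplied in
# transit, and the broker decides how much to return.

def trust_game(sent: float, returned: float, endowment: float = 10.0,
               multiplier: float = 3.0):
    """Compute both players' payoffs for one round of the game."""
    assert 0 <= sent <= endowment
    received = sent * multiplier
    assert 0 <= returned <= received
    investor_payoff = endowment - sent + returned
    broker_payoff = received - returned
    # The fraction sent is a common behavioural proxy for trust,
    # and the fraction returned for trustworthiness.
    trust = sent / endowment
    trustworthiness = returned / received if received else 0.0
    return investor_payoff, broker_payoff, trust, trustworthiness

# The selfish Nash strategy (send nothing) leaves money on the table:
print(trust_game(0.0, 0.0))    # (10.0, 0.0, 0.0, 0.0)
# Full cooperation with an even split makes both players better off:
print(trust_game(10.0, 15.0))  # (15.0, 15.0, 1.0, 0.5)
```

The gap between the two outcomes is exactly the monetary gain attributable to cooperation mentioned above.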
Several variants of the game exist, each focusing on a different aspect of trust as the observable behaviour. For example, the rules of the game can be reversed into what can be called a game of distrust,[13] a declaratory phase can be introduced,[14] or the rules can be presented in a variety of ways, altering the perception of participants.
Other interesting games include binary-choice trust games,[15] the gift-exchange game,[16] cooperative trust games, and various other forms of social games. The Prisoner's Dilemma[17] in particular is popularly used to link trust with economic utility and to demonstrate the rationality behind reciprocity. For multi-player games, different forms of closed market simulations exist.[18]
Formal metrics focus on facilitating trust modelling, specifically for large-scale models that represent trust as an abstract system (e.g. a social network or a web of trust). Consequently, they may provide weaker insight into the psychology of trust, or into the particulars of empirical data collection. Formal metrics tend to have strong foundations in algebra, probability or logic.
There is no widely recognised way to attribute a value to the level of trust, with each representation of a 'trust value' claiming certain advantages and disadvantages. There are systems that assume only binary values,[19] that use a fixed scale,[20] where confidence ranges from −100 to +100 (while excluding zero),[21] from 0 to 1,[22][23] or over the interval [−1, +1);<ref>Marsh, S. P. (1994) Formalising Trust as a Computational Concept. University of Stirling PhD thesis.</ref> where confidence is discrete or continuous, one-dimensional or multi-dimensional.<ref>Gujral, N., DeAngelis, D., Fullam, K. K., and Barber, K. S. (2006) Modelling Multi-Dimensional Trust. In: Proc. of Fifth Int. Conf. on Autonomous Agents and Multiagent Systems AAMAS-06. Hakodate, Japan.</ref> Some metrics use an ordered set of values without attempting to convert them to any particular numerical range (e.g.<ref>Nielsen, M. and Krukow, K. (2004) On the Formal Modelling of Trust in Reputation-Based Systems. In: Karhumaki, J. et al. (Eds.): Theory Is Forever, Essays Dedicated to Arto Salomaa on the Occasion of His 70th Birthday. Lecture Notes in Computer Science 3113 Springer.</ref>; see<ref>Abdul-Rahman, A. (2005) A Framework for Decentralised trust Reasoning. PhD Thesis.</ref> for a detailed overview).

There is also disagreement about the semantics of some values, which is specifically visible when it comes to the meaning of zero and of negative values. For example, zero may indicate either a lack of trust (but not distrust), a lack of information, or a deep distrust. Negative values, if allowed, usually indicate distrust, but there is doubt<ref>Cofta, P. (2006) Distrust. In: Proc. of Eight Int. Conf. on Electronic Commerce ICEC'06, Fredericton, Canada. pp. 250–258.</ref> whether distrust is simply trust with a negative sign, or a phenomenon of its own.

===Subjective probability===

Subjective probability<ref>Gambetta, D. (2000) Can We Trust Trust? In: Gambetta, D. (ed.) Trust: Making and Breaking Cooperative Relations, electronic edition, Department of Sociology, University of Oxford, chapter 13, pp. 213–237.</ref> focuses on a trustor's self-assessment of their trust in the trustee. Such an assessment can be framed as an anticipation of the trustee's future behaviour, and expressed in terms of probability. The probability is subjective because it is specific to the given trustor: their assessment of the situation, the information available to them, and so on. In the same situation, other trustors may hold a different subjective probability.

Subjective probability creates a valuable link between formalisation and empirical experimentation. Formally, subjective probability can benefit from the available tools of probability theory and statistics. Empirically, subjective probability can be measured through one-sided bets: assuming that the potential gain is fixed, the amount that a person is willing to bet can be used to estimate their subjective probability of the transaction.

===Uncertain probabilities (subjective logic)===

The logic for uncertain probabilities ([[subjective logic]]) was introduced by Josang,[24][25] where uncertain probabilities are called subjective opinions. This concept combines a probability distribution with uncertainty, so that each opinion about trust can be viewed as a distribution of probability distributions, where each distribution is qualified by an associated uncertainty. The foundation of this trust representation is that an opinion (an evidence or a confidence) about trust can be represented as a four-tuple (trust, distrust, uncertainty, base rate), where trust, distrust and uncertainty must add up to one, and hence are dependent through additivity.
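The four-tuple representation above can be sketched in code. The additivity constraint and the standard projected-probability formula of subjective logic, E = belief + base_rate × uncertainty, are from the literature; the class and variable names are illustrative.

```python
# A sketch of a subjective-logic opinion: (trust, distrust, uncertainty,
# base rate), where the first three components must add up to one.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # trust
    disbelief: float    # distrust
    uncertainty: float
    base_rate: float    # prior expectation in the absence of evidence

    def __post_init__(self):
        total = self.belief + self.disbelief + self.uncertainty
        if abs(total - 1.0) > 1e-9:
            raise ValueError("belief + disbelief + uncertainty must equal 1")

    def expectation(self) -> float:
        # Uncertain mass contributes in proportion to the base rate.
        return self.belief + self.base_rate * self.uncertainty

# A fully uncertain opinion falls back on the base rate:
print(Opinion(0.0, 0.0, 1.0, 0.5).expectation())  # 0.5
# Evidence shifts the expectation away from the prior:
print(Opinion(0.6, 0.2, 0.2, 0.5).expectation())
```

The expectation collapses the distribution-with-uncertainty into a single probability, which is what links this representation back to subjective probability.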
Subjective logic is an example of computational trust where uncertainty is inherently embedded in the calculation process and is visible at the output. It is not the only one: it is possible, for example, to use a similar quadruplet (trust, distrust, unknown, ignorance) to express the value of confidence,[26] as long as the appropriate operations are defined. Despite the sophistication of the subjective opinion representation, the particular value of a four-tuple related to trust can be easily derived from a series of binary opinions about a particular actor or event, thus providing a strong link between this formal metric and empirically observable behaviour.
Finally, there are CertainTrust[27] and CertainLogic.[28] Both share a common representation, equivalent to subjective opinions but based on three independent parameters named 'average rating', 'certainty' and 'initial expectation'. Hence, there is a bijective mapping between the CertainTrust triplet and the four-tuple of subjective opinions.
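One way to sketch the bijection is below. It assumes the commonly cited correspondence b = t·c, d = (1−t)·c, u = 1−c, a = f between the triplet (t = average rating, c = certainty, f = initial expectation) and the opinion (b, d, u, a); treat the exact formulas and names as an illustrative reading of the cited papers rather than a definitive implementation.

```python
# A sketch of the mapping between a CertainTrust triplet and a
# subjective-logic four-tuple, assuming b = t*c, d = (1-t)*c,
# u = 1-c, a = f.

def certaintrust_to_opinion(t: float, c: float, f: float):
    """Map (average rating, certainty, initial expectation) to
    (belief, disbelief, uncertainty, base_rate)."""
    return (t * c, (1.0 - t) * c, 1.0 - c, f)

def opinion_to_certaintrust(b: float, d: float, u: float, a: float):
    """Inverse mapping; for a fully uncertain opinion (certainty 0)
    the average rating is undefined and defaults to the base rate."""
    c = 1.0 - u
    t = b / c if c > 0 else a
    return (t, c, a)

b, d, u, a = certaintrust_to_opinion(0.75, 0.8, 0.5)
# The round trip recovers the original triplet (up to float error):
print(opinion_to_certaintrust(b, d, u, a))
```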
Fuzzy systems,[29] used as trust metrics, can link natural-language expressions with a meaningful numerical analysis.
The application of fuzzy logic to trust has been studied in the context of peer-to-peer networks,[30] to improve peer rating. For grid computing,[31] it has also been demonstrated that fuzzy logic makes it possible to address security issues in a reliable and efficient manner.
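The link between numerical ratings and natural-language expressions can be sketched with simple membership functions. The linguistic terms and the triangular shapes below are illustrative assumptions, not taken from the cited systems.

```python
# A sketch of fuzzification: a numerical peer rating in [0, 1] is
# mapped to degrees of membership in linguistic trust terms.

def fuzzify(rating: float) -> dict:
    """Map a rating in [0, 1] to degrees of linguistic trust terms
    using triangular membership functions."""
    return {
        "untrusted": max(0.0, 1.0 - 2.0 * rating),   # peaks at rating 0
        "uncertain": 1.0 - abs(2.0 * rating - 1.0),  # peaks at rating 0.5
        "trusted":   max(0.0, 2.0 * rating - 1.0),   # peaks at rating 1
    }

# A rating of 0.8 is mostly 'trusted' and partly 'uncertain':
print(fuzzify(0.8))
```

A fuzzy trust metric would then reason with these linguistic terms (e.g. via fuzzy rules) and defuzzify the result back into a number.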
The set of properties that a trust metric should satisfy varies depending on the application area. The following is a list of typical properties.
Transitivity is a highly desired property of a trust metric.[32] In situations where A trusts B and B trusts C, transitivity concerns the extent to which A trusts C. Without transitivity, trust metrics are unlikely to be used to reason about trust in more complex relationships.
The intuition behind transitivity follows everyday experience of 'friends of a friend' (FOAF), the foundation of social networks. However, attempts to attribute exact formal semantics to transitivity reveal problems related to the notion of a trust scope or context. For example, [33] defines conditions for the limited transitivity of trust, distinguishing between direct trust and referral trust. Similarly, [34] shows that simple trust transitivity does not always hold, based on information from the Advogato model, and consequently proposes new trust metrics.
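A common formalisation of limited transitivity is a discounting rule: A's derived trust in C is A's referral trust in B weighted by B's direct trust in C. The multiplicative rule below is one frequently used choice, shown here as an illustrative sketch rather than the semantics of any particular cited metric.

```python
# A sketch of trust transitivity via multiplicative discounting,
# with trust values in [0, 1].

def discount(trust_ab: float, trust_bc: float) -> float:
    """Derive A's trust in C from A->B (referral trust) and
    B->C (direct trust)."""
    return trust_ab * trust_bc

# Under this rule, trust can only weaken along a chain:
path = [0.9, 0.8, 0.7]   # A->B, B->C, C->D
derived = 1.0
for hop in path:
    derived = discount(derived, hop)
print(derived)  # about 0.504, lower than any single hop
```

This captures why long referral chains carry little weight, and why a metric must still decide in which scopes such discounting is semantically valid at all.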
The simple, holistic approach to transitivity is characteristic of social networks (FOAF, Advogato). It follows everyday intuition and assumes that trust and trustworthiness apply to the whole person, regardless of the particular trust scope or context. If one can be trusted as a friend, one can also be trusted to recommend or endorse another friend. Therefore, transitivity is semantically valid without any constraints, and is a natural consequence of this approach.
The more thorough approach distinguishes between different scopes/contexts of trust, and does not allow for transitivity between contexts that are semantically incompatible or inappropriate. A contextual approach may, for instance, distinguish between trust in a particular competence, trust in honesty, trust in the ability to formulate a valid opinion, or trust in the ability to provide reliable advice about other sources of information. A contextual approach is often used in trust-based service composition.[35] The understanding that trust is contextual (has a scope) is a foundation of collaborative filtering.
For a formal trust metric to be useful, it should define a set of operations over trust values such that the result of those operations is also a trust value. Usually at least two elementary operators are considered:
The exact semantics of both operators are specific to the metric. Even within one representation, there is still room for a variety of semantic interpretations. For example, for the representation as the logic for uncertain probabilities, trust fusion operations can be interpreted by applying different rules (cumulative fusion, averaging fusion, constraint fusion (Dempster's rule), Yager's modified Dempster's rule, Inagaki's unified combination rule, Zhang's centre combination rule, Dubois and Prade's disjunctive consensus rule, etc.). Each interpretation leads to different results, depending on the assumptions made about trust fusion in the particular situation to be modelled. See [36][37] for detailed discussions.
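As one concrete instance of a fusion operator, the cumulative fusion of two subjective-logic opinions can be sketched as follows. The formulas are the standard consensus rule for the (belief, disbelief, uncertainty) part; the sketch simplifies by assuming both observers share the same base rate and that neither opinion is dogmatic (uncertainty greater than zero).

```python
# A sketch of cumulative fusion of two opinions (b, d, u), where
# b + d + u = 1 for each opinion. Independent observations reinforce
# each other, so fused uncertainty is lower than either input's.

def cumulative_fusion(op1, op2):
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    k = u1 + u2 - u1 * u2
    if k == 0:
        raise ValueError("dogmatic opinions (u = 0) need a separate rule")
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

fused = cumulative_fusion((0.6, 0.1, 0.3), (0.5, 0.2, 0.3))
print(fused)  # uncertainty drops below either input's 0.3
```

Swapping in averaging fusion or one of the Dempster-style rules listed above would change these results, which is exactly the semantic choice the text describes.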
The growing size of networks of trust makes scalability another desired property, meaning that it is computationally feasible to calculate the metric for large networks. Scalability usually places two requirements on the metric:
Attack resistance is an important non-functional property of trust metrics, reflecting their ability not to be overly influenced by agents who try to manipulate the trust metric and who participate in bad faith (i.e. who aim to abuse the presumption of trust).
The free-software developer resource Advogato is based on a novel approach to attack-resistant trust metrics devised by Raph Levien. Levien observed that Google's PageRank algorithm can be understood as an attack-resistant trust metric rather similar to the one behind Advogato.
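The connection can be sketched with a personalised PageRank-style random walk from a trusted seed: trust flows only along certification edges reachable from the seed, so a clique of bad-faith accounts that certify only each other accumulates no trust. This is an illustrative simplification of the idea, not Levien's actual network-flow metric; the graph, seed, and damping factor are assumptions.

```python
# A sketch of a seed-based, PageRank-style trust computation.
# Teleportation returns only to the trusted seed, so trust cannot be
# manufactured by nodes the seed does not (transitively) certify.

def trust_rank(graph, seed, damping=0.85, iters=50):
    """graph: {node: [certified successors]}; returns trust per node."""
    nodes = list(graph)
    rank = {n: (1.0 if n == seed else 0.0) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) * (1.0 if n == seed else 0.0)
               for n in nodes}
        for n in nodes:
            if graph[n]:
                share = damping * rank[n] / len(graph[n])
                for succ in graph[n]:
                    new[succ] += share
        rank = new
    return rank

web = {"seed": ["alice"], "alice": ["bob"], "bob": ["alice"],
       "mallory": ["mallory2"], "mallory2": ["mallory"]}
ranks = trust_rank(web, "seed")
# mallory's clique certifies itself but receives no flow from the seed:
print(ranks["mallory"], "<", ranks["alice"])
```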