According to Franz Brentano, intentionality refers to the "aboutness" of mental states. This aboutness cannot be a physical relation between a mental state and what it is about (its object), because in a physical relation each of the relata must exist, whereas the objects of mental states might not.
Several features of intentionality are problematic because they are unusual among material relations. Representation is unique. When 'x represents y' is true, the relation differs from other relations between things, such as 'x is next to y', 'x caused y', or 'x met y'. For instance, when 'x represents y' is true, y need not exist; this is not so when 'x is the square root of y', 'x caused y', or 'x is next to y' is true. Similarly, 'x represents y' can be true while 'x represents z' is false, even when y = z. Intentionality thus spans cases that no ordinary physical relation can mirror: "Billy can love Santa and Jane can search for unicorns even if Santa does not exist and there are no unicorns."
Franz Brentano, the nineteenth-century philosopher, spoke of mental states as involving presentations of the objects of our thoughts. This reflects his belief that one cannot desire something unless one actually has a representation of it in one's mind.
Dennis Stampe was one of the first philosophers in modern times to suggest a theory of content according to which content is a matter of reliable causes.
Fred Dretske's book Knowledge and the Flow of Information (1981) was a major influence on the development of informational theories, and although the theory developed there is not a teleological theory, Dretske (1986, 1988, 1991) later produced an informational version of teleosemantics. He begins with a concept of carrying information that he calls "indicating", explains that indicating is not equivalent to representing, and then suggests that a representation's content is what it has the function of indicating.
Teleosemantics, also known as biosemantics, refers to the class of theories of mental content that employ a teleological notion of function. Teleosemantics is best understood as a general strategy for underwriting the normative nature of content, rather than as any particular theory. What all teleological theories have in common is the idea that semantic norms are ultimately derivable from functional norms.
Attempts to naturalize semantics began in the late 1970s. Many attempts were, and still are, being made to bring natural-physical explanations to bear on minds and, specifically, on the question of how minds acquire content.[1] It is no surprise that this question takes center stage in the philosophy of mind: how could minds, thought by those in the naturalist camp to be "natural physical objects",[1] have developed intentional properties? In the mid-1980s, with the works of Ruth Millikan and David Papineau (Language, Thought, and Other Biological Categories and "Representation and Explanation", respectively), teleosemantics, a theory of mental content that attempts to address the question of content and intentionality of minds, was born.
Ruth Millikan is perhaps the most vocal supporter of the teleosemantic program. Millikan's view differs from other teleosemantic views in myriad ways, but perhaps its most distinctive characteristic is its distinction between the mechanisms that produce mental representations and those that consume them.[2] There is a representational function as a whole, at a composite level, and there are two "sub-functions":[3] the producer-function and the consumer-function. Consider Millikan's own example of beavers splashing their tails. One beaver alerts other beavers to the presence of danger by splashing its tail on the surface of the water. The splashing of the tail tells, or represents to, the other beavers that there is danger in the environment, and the other beavers dip into the water to avoid it. The splashing beaver produces a representation; the other beavers consume the representation. The representation that the beavers consume guides their behavior in ways that relate to their survival.
Of course, the focus of the teleosemantic program is internal representations, not just representational states of affairs between two (or more) distinct, external entities. How does the picture of the producer and consumer beavers, for instance, play into a story about internal representations? Papineau and Macdonald describe Millikan's account of this well and faithfully: "The producing mechanisms will be the sensory and other cerebral mechanisms that give rise to cognitive representations."[2] The consuming mechanisms are those that "use these representations to direct behavior in pursuit of some biological end".[2] Here we have a picture similar to the beaver example, but one in which the two sub-functions, producer and consumer, operate within a more obviously unified system, namely the cognitive system. One sub-function produces mental representations while the other consumes them in order to reach some end, e.g., danger-avoidance or food-acquisition. The representations consumed by the consumer sub-function guide an organism's behavior toward some biological end, e.g., survival. This is a rather brief sketch of Millikan's overall portrait; more goes into her account of the relation between producer- and consumer-functions in order to arrive at a nuanced account of mental representation. But that is a matter of how. Details of the how aside, much of Millikan's effort is directed toward the why, viz., why perceivers like us have mental representations and why representations are produced in the first place.
Jerry Fodor's theory of asymmetric dependence "distinguishes merely informational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa." He gives an example of this theory when he says, "if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they are caused by horses, but not vice versa, then they represent horses (or the property horse)."
The twentieth-century American philosopher Willard Van Orman Quine believed that linguistic terms do not have distinct meanings that accompany them, because there are no such entities as "meanings". In his books Word and Object (1960) and Ontological Relativity (1968), Quine considers the methods available to a field linguist attempting to translate an unknown language in order to outline his thesis. His thesis, the indeterminacy of translation, holds that there are many different ways to distribute meanings among words, and any theory of translation rests largely on context. An argument over the correct translation of an unidentified term depends on the possibility that the native could have spoken a different sentence, and the same problem of indeterminacy reappears here, since any hypothesis can be defended if one adopts enough compensatory hypotheses about other parts of the language. Quine uses as an example the word "gavagai", spoken by a native upon seeing a rabbit. One can take the simplest route and translate the word as "Lo, a rabbit", but other translations, such as "Lo, food" or "Let's go hunting", are completely reasonable given what the linguist knows. Subsequent observations can rule out certain possibilities, as can questioning the natives, but this is only possible once the linguist has mastered much of the natives' grammar and vocabulary. And that mastery can itself be achieved only on the basis of hypotheses derived from simpler, observation-connected bits of language, which, as we have seen, admit multiple interpretations.
Daniel C. Dennett's theory of mental content, the intentional stance, interprets the behavior of things in terms of mental properties. According to Dennett: "Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do."
Dennett's thesis distinguishes three levels of abstraction, or stances: the physical stance, which predicts behavior from the laws of physics and a system's physical constitution; the design stance, which predicts behavior from what a system is designed to do; and the intentional stance, which predicts behavior by attributing beliefs, desires, and rationality to the system.
Dennett states that the more concrete the level, the more accurate, in principle, our predictions are. Choosing to view an object at a more abstract level, however, yields greater computational power: one gets a better overall picture of the object while skipping over extraneous details. Switching to a more abstract level thus has risks as well as benefits. If we applied the intentional stance to a thermometer that had been heated to 500 °C, trying to understand it through its beliefs about how hot it is and its desire to keep the temperature just right, we would gain no useful information. The problem could not be understood until we dropped down to the physical stance and saw that the thermometer had melted. Whether to take a particular stance should be decided by how successful that stance is when applied. Dennett argued that human beliefs and desires are best understood at the level of the intentional stance.