Benjamin Börschinger sent me some questions by e-mail which I have been wanting to respond to. I think the main question might be put in the following way:
If meaning facts are exhausted by the facts about sentence meanings, as Frege’s Context Principle--which holds that one should “never ask for the meaning of a word in isolation, but only in the context of a proposition”--would seem to suggest, then once we have accounted for the correct interpretations of sentences, what work is there left over to do? To say that there is work left over is to say, is it not, that the facts about word meaning are something over and above the facts about sentence meaning and so to deny the Context Principle.
So the idea is that if a meaning theory meets Convention T, then it makes the correct assignments of meanings to sentences (the inference to the corresponding M-sentence makes this explicit). But if there is no more to word meaning than contribution to sentence meaning, everything that needs to be said has been said. End of story. Convention A is either trivially satisfied or cannot be an intelligible constraint on a meaning theory. (It is possible, though, to meet Convention T without meeting Convention A. An example is given in the post on inscrutability, and there are more complicated examples. You can add stuff to satisfaction axioms and then do things to strip them out before you get to T-sentences.)
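To give a toy illustration of that last point (my own, simpler than the examples in the post on inscrutability): suppose the interpretive satisfaction axiom for 'is red' is

'is red' is true of x iff x is red

and we replace it with the still true but padded axiom

'is red' is true of x iff x is red and 7 + 5 = 12.

If the canonical proof procedure includes a step that discharges the idle arithmetical conjunct before the final theorem is derived, the theory still entails all the right T-sentences, and so meets Convention T, even though the padded axiom does not give the meaning of 'is red'.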
I think the resolution of the puzzle lies in a closer look at the Context Principle. Why say what Frege does? Why say, that is, that one should never ask for the meaning of a word in isolation but only in the context of a proposition? What does this come to? Here is how I understand the point.
The basic unit of linguistic understanding is the speech act, for the speech act is the minimal move in the game of conversation. Speech acts divide into five basic kinds (here I follow Searle’s taxonomy). There are assertives (the moon is full), directives (take out the trash), commissives (I promise to be there), declarations (you’re fired), and expressives (would that you were here). All specifically linguistic meaning relates to the performance of speech acts and is understood in terms of its contribution to them.
There is no reason we cannot use single unanalyzable symbols as conventional ways of performing speech acts with a certain content. Signal flags often function this way.
The ‘Oscar’ flag of the International Code of Signals, for example, when used alone, functions as an unstructured sentence that means ‘Man overboard’.
It is plausible that the first symbol systems used for anything like speech acts were unstructured and relatively limited in expressive potential for that reason. But they can nonetheless get the basic work of communication done, because that involves the recruitment of a certain observable act, or the product of such an act, to perform a certain role with respect to an appropriate audience, such as the conveyance of information (something that presupposes both sincerity and competence on the part of the speaker). This requires a shared understanding on the part of both the utterer and the auditor with respect to the circumstances under which the act (either the token, if a one-off use is anticipated, or the type, if it is to be used for recurring circumstances) is to be performed. It is this that allows the act token or type to be recruited for the function of, say, indicating something: the speaker and auditor know the speaker is to utter it only if such and such a thing obtains. The uttering of it, taken as a matter of participation in the agreed-upon practice, is then (fixing sincerity and competence) taken as an indicator of the relevant circumstances, and the speaker may be said to have represented things as being so.
Where do words get into the picture? Words have a linguistic purpose only relative to their contribution to the basic business of conversation, i.e., to the production of speech acts with certain contents. Their point is to make the symbol system we are using more flexible, more powerful, and more expressive. Take a very simple innovation, the introduction of subject-predicate structure into sentences. This rests of course on an antecedent conceptual distinction between objects and properties (for purposes of exposition I’ll put aside nominalistic scruples). Say previously I had used an unstructured symbol to perform the speech act whose content is: man overboard. Now I conceive that it is useful sometimes to perform a speech act where what I say is that some particular or other is overboard, Tom, or Jen, or the ship’s clock, and the like. There can be displacement of objects overboard in all of these cases, but different objects for the different cases.

I conceive of introducing a two-component system of symbols. I will always, say, hold up a red flag when something is overboard, but then hold up another flag with a shape on it that resembles Tom, Jen, or the ship’s clock, depending on which it is that I want to indicate is overboard. It is then the tokening of the complex symbol that does the work of indication (relative to the assumptions of sincerity and competence), all of this presupposing that the speaker and audience have a shared understanding of how the symbols are to be treated. Clearly, given this, the introduction of a new flag understood to function like the Tom or Jen flags immediately gives one the ability to say of the newly named item that it is overboard as well. And the introduction of a new flag that is understood to function like the overboard flag, perhaps a green flag for onboard, immediately gives one the ability to represent a range of things as onboard.
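For anyone who likes the machinery laid out explicitly, here is a minimal sketch, as a little Python toy of my own (the flag names and the sample situation are of course made up), of the point that the rule attached to each flag does nothing but fix its contribution to the complex signals in which it can occur:

# A toy model (purely illustrative) of the two-component flag system.
# The "situation" records, hypothetically, where each individual is.
situation = {"Tom": "overboard", "Jen": "onboard", "ships_clock": "overboard"}

# Rule for each name-flag: it stands for a particular individual.
name_flags = {"tom_flag": "Tom", "jen_flag": "Jen", "clock_flag": "ships_clock"}

# Rule for each attribute-flag: it is true of an individual, or it is not.
attribute_flags = {
    "red_flag":   lambda x: situation[x] == "overboard",
    "green_flag": lambda x: situation[x] == "onboard",
}

def complex_signal_is_true(name_flag, attribute_flag):
    # The complex signal is true iff the attribute-flag is true of the
    # individual the name-flag stands for; that is all the rules say.
    return attribute_flags[attribute_flag](name_flags[name_flag])

print(complex_signal_is_true("tom_flag", "red_flag"))    # True: Tom is overboard
print(complex_signal_is_true("jen_flag", "green_flag"))  # True: Jen is onboard

Adding a new entry to either dictionary immediately extends what can be said, which is just the point about the new name flags and the green flag.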
So in this simplified context, what does it come to to say that we should ask after the meaning of a word only in the context of a proposition? It means that its function relative to the basic task of linguistic communication is to be understood in terms of its systematic contribution to the speech acts performed in accordance with the conventions attaching to it. What is the meaning of the red flag in our symbol system? Well, how does it function to contribute to speech acts? It is used in all and only speech acts (exploiting the symbol system) whose content is to the effect that some particular thing is overboard. I.e., it says of things that they are overboard. What about the flag with the Tom-shape? It is used in all and only speech acts (exploiting the symbol system) which say something about Tom. That is the basic meaning of the Context Principle. The Context Principle does not say that only sentences have meanings; words do as well, but their meanings are to be understood in terms of what they are supposed to systematically contribute to the speech acts performed using sentences in which they appear. (Of course, someone might have more in mind by the Context Principle, but I do not.)
When people learn a natural language, they acquire of course a mastery of its semantical primitives. They learn how they are combined so as to be used to say various things (assert, command, question, promise, etc.). They don’t acquire explicit knowledge of the rules that govern them but rather a kind of skill in deploying them and interpreting them in accordance with the rules implicit in the practice in their community. There is no question that understanding of sentences rests upon understanding in this sense of words. When you hear a completely novel sentence in any of the languages you understand, you understand it on the basis of your prior mastery of its component expressions and the rules governing them in the language. This is completely compatible with saying that the meanings of the words are to be understood in relation to their contribution to sentence meaning. It is because the meanings of the words are understood in relation to their contribution to sentence meaning that we can understand novel sentences on the basis of prior understanding of words.
None of this is to say, of course, that in acquiring a mastery of words in a language, we do so independently of learning sentence meanings. We learn both at the same time, for what it is to learn the words cannot be divorced from understanding their contributions specifically to sentence meanings, and so what combinations of them with other words mean as sentences.
Now let’s return to Convention A and Convention T (and for convenience I’ll subsume the extension to a context sensitive language under the same heading). What would a truth theory that met Convention T but not Convention A be missing?
A truth theory for a context sensitive language meets Convention T (i.e., the extension of Tarski’s Convention T) provided it is formally correct and it entails for every sentence of the object language a theorem of the form
S is true(s,t) iff p
which yields a true sentence when ‘is true(s,t) iff’ is replaced by ‘means(s,t) that’. Such a theory would enable us in a straightforward sense to interpret every object language sentence.
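For instance (a schematic English example of my own), such a theory would entail

'I am tired' is true(s,t) iff s is tired at t

and replacing 'is true(s,t) iff' with 'means(s,t) that' yields the true

'I am tired' means(s,t) that s is tired at t,

which is what licenses using the theorem to interpret a particular utterance of the sentence by a particular speaker at a particular time.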
I said, however, that there was another goal that the theory was to meet. That was to provide insight into the compositional structure of object language sentences and to capture or represent in some sense the “structure” of a complex practical ability. It is relative to this latter goal that we require more of a truth theory that is to serve its role in a compositional meaning theory than just that it meet Convention T.
What more do we want? We want the truth theory to provide us with a kind of model of competence. As I put it in an earlier post (The content of a meaning theory and knowledge of a language):
“This constraint (and others) that we impose on a compositional meaning theory is designed to help us state something knowledge of which would enable us to see in detail what the rules are attaching to words that determine what the sentences containing them mean, and which are realized in the competences of speakers of the language in the sense that the rules can be taken to express what the competencies are competencies in doing.”
The axioms express rules for the use of words. The canonical proofs show how, in virtue of the rules that govern them, interacting with those that govern the other words they combine with, words contribute to determining the conditions under which the sentences containing them are true in virtue of the meanings of the expressions they contain.
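To give the flavor with a toy object language of my own (the vocabulary is made up): from the axioms

'a' refers to Alice
'F' is true of x iff x is hungry
A sentence consisting of 'F' followed by a name is true iff 'F' is true of what the name refers to

a canonical proof takes us in a few short steps to

'Fa' is true iff Alice is hungry,

and the proof itself displays what each expression, in virtue of the rule attached to it, contributes to fixing that truth condition.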
But if the axioms are not interpretive, that is, if the theory does not meet Convention A, then they will not model speaker competence, and the theory will not meet one of our goals.
So that is the answer to what more meeting Convention A adds, and what is missing from a theory that meets Convention T without meeting Convention A.
How does this meet the initial puzzle about the Context Principle? I think the puzzle dissolves once we see what the Principle really comes to. To say that words are understood in relation to their contributions to sentence meaning, so that that is the canonical way to ask after what it is that they mean, is not to say that we are not interested in exactly how they do that. And meeting Convention A is supposed to guarantee, relative to a canonical proof procedure, that the truth theory will help do exactly that.
I'll just try to reconstruct; please correct me if I'm wrong.
Ok, so the question seems to be:
Can the axioms be taken to be interpretive?
No, because
1) competence in a language has to be presupposed before we even get in a position to find the axioms governing it
2) natural languages have characteristics which evade 'catching' in axioms
Nevertheless we 'know' that there ARE rules governing sentences and that those rules have to be presupposed in order to explain our understanding of 'speech-acts'. These rules lie in the differentiating combinatorial features of the theorems within a compositional meaning-theory. The axioms show how the particulars can be put in combination to make a sentence interpretive for another speaker.
Second Question: What are the particulars, i.e. which are the smallest parts out of which, given the combinatorial rules, the meaning of a sentence can be reconstructed?
Proposals:
1) Semantical Primitives
2) Words
3) Speech-Acts
where 1) and 3) are context-sensitive but are easier to 'grasp', because 2) contains demonstratives and 'objects' alike, for which reference has to be given in order to make them somehow contribute to meaning, apart from context-sensitive use.
But it also seems to me that 1) and 3) make the finding of 'exact' axioms pretty hard if not impossible; but maybe there I just didn't have a good look at the proposals made.
But still, we know that there is logical structure underlying language (think of particles of negation) and that understanding of it contributes a great deal to correct interpretation.
And then we may further ask where this logical structure comes from and how we come to conceive similarities or differences in nature. There we have the problems with the Universals and Salience.
Yes?
Posted by: Irmela Wagner | 02/18/2010 at 05:48 PM
Let me say a bit about how I have been using the term 'interpretive'. To explain it, I want to compare it to my use of 'translational'. I use 'translational' in connection with axioms or T-theorems of a truth theory. Let me give examples of both.
First, a T-theorem for a language without context sensitive expressions, say the language of mathematics.
'II + II = IV' is true in L iff 2 + 2 = 4
I call this a translational theorem if and only if 'II + II = IV' in L translates '2 + 2 = 4' in English (the language of the theorem in this case). Tarski's Convention T says a truth definition (axiomatic truth theory) is adequate only if it entails a translational T-theorem for each sentence of the object language (the language the theory is a truth theory for).
An axiom for such a theory, for example,
A pair of things, x and y, are such that '=' is true of them in L iff x is identical with y
is translational if and only if '=' in L translates 'is identical with' in English.
Convention A, applied to a theory for a context insensitive language, would require that all the axioms be translational (I have not given a full accounting of how it would look but only indicated the basic idea with one type of axiom).
Now, when we turn to a context sensitive language, we cannot require that the theorems or axioms be translational. A theorem for 'J'ai faim' for example would look like this.

For any speaker s, and time t, 'J'ai faim' is true in French taken as if spoken by s at t iff s is hungry at t.

But here 'J'ai faim' does not translate 's is hungry at t' because the latter has variables in it and no expression that means the same as 'I'.
But it does spell out what the meaning of a particular utterance of it is relative to a particular speaker and time. So I say the theorem is interpretive, rather than translational. And this is then in effect a technical term.
More precisely and more generally, we can say that a theorem
S is true in L taken as if spoken by s at t iff p
is interpretive if the corresponding M theorem is true:
(M) For any speaker s, for any time t, S means in L taken as if spoken by s at t that p
Then for axioms I intend a similar interpretation. So for 'a faim' we would have

For any x, for any speaker s, for any time t, 'a faim' is true of x taken relative to s at t iff x is hungry at t.
and this is interpretive iff
For any x, for any speaker s, for any time t, 'a faim' means as applied to x and taken relative to s at t that x is hungry at t.
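To connect the axiom with the theorem above (sketching freely, and ignoring the niceties of French agreement): add a reference axiom along the lines of

For any speaker s and time t, 'je' refers, taken relative to s at t, to s

together with a compositional axiom to the effect that a subject-predicate sentence is true, taken relative to s at t, iff the predicate is true, taken relative to s at t, of what the subject refers to, taken relative to s at t. A canonical proof then takes us from these and the axiom for 'a faim' to the interpretive theorem that 'J'ai faim' is true in French taken as if spoken by s at t iff s is hungry at t.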
Now, the issue that Miguel raised (I think your first question is more directed to that than to the question that Benjamin raised) was not whether the axioms of a truth theory, considered as an object talked about by a meaning theory, can be considered interpretive. He was suggesting that for one to use the truth theory for the intended purpose of coming to understand sentences in the object language one would have to understand the language in which it was stated. And that is something I agree with. And he suggested that the knowledge I had said was sufficient for that was not. And I think he has a point. For there is something I had been thinking was to be included but which, I think, what I said does not secure, and that has to do with the way the grammar of the language of the truth theory is to be laid out. For I was supposing it would provide in effect an analysis of sentence structure in terms of semantical categories like noun phrase, verb phrase, verb, adverb, adjective, noun, determiner, connective, quantifier, and the like, so that from knowing what the axioms mean as stated in a language we understand one by one, we would be in a position to associate terms in the language we knew with terms in the language of the truth theory, and so come to see in fact how it in turn was to be taken as showing what object language expressions mean.
But Miguel's deeper worry had to do with whether having to have knowledge of one language for the theory to do its work would undermine its ability to explain how competence works generally. I argued in the previous post that it did not, because we are not modeling the speaker's competence on how the theorist comes to be in possession of knowledge sufficient to understand the language. What we want by this requirement is to reveal what the rules are that govern or are expressed in the speaker's competence in the language.
For the second question, the answer is #1, by definition. And then which expressions are semantical primitives depends on the language. Speech acts would not be relevant here because these are not linguistic expressions. We use sentences to perform speech acts, but we can perform speech acts without using sentences as well. For example, if someone asks me if I want to go to the latest Bond movie and I put on an exaggerated expression of boredom, I have answered the question, and so performed a speech act, but not by uttering a sentence. Words are sometimes semantical primitives and sometimes not. 'or' is a semantical primitive. It has no subunits which are semantically significant. On the other hand, 'walked' is not semantically primitive because it is the combination of two expressions to which rules attach independently, 'walk' + 'ed'. Roughly, it works like this. 'walk' really has the form 'walk(x,t)' where 'x' takes as values things that can walk and 't' takes times as values. Then 'ed' adds a quantifier: '[there is a time t such that t lies in the past of now]'. We put them together to get 'walked' = '[there is a time t such that t lies in the past of now][walk(x,t)]'. Inflection for case likewise involves a semantical rule attaching to something smaller than a word.
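On this rough sketch, then, a sentence like 'Tom walked' comes out as (suppressing many complications) '[there is a time t such that t lies in the past of now][walk(Tom, t)]', i.e., as saying that there is some time earlier than the time of utterance at which Tom walks.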
So, it all depends on the workings of the language in question. Chinese, for example, does not use inflection for tense but always uses auxiliaries. So it would be like: walks today / walks tomorrow / walks past / walks future / etc.
While sometimes it is not obvious how to formulate axioms for certain words, there is nothing intractable in principle about doing so. It is a matter of making use of all the information we can gather about how we use them with other words to form sentences we use for performing speech acts of various kinds. And this is a matter of finding patterns, and distinguishing between those that express semantical rules and those that express other aspects of usage (some patterns, for example, emerge because we are generally polite--it is not a matter of the meaning of 'like' that people almost always say they like the gifts that people give them, no matter how hideous they may be).
Posted by: Kirk Ludwig | 02/20/2010 at 11:37 AM