Thursday, February 9, 2012

Repost-- Third Factor

I promised nearly a week ago that I would return to a discussion of the so-called Third Factor at some point. That is the goal of this post. I am afraid I will not have time to fully articulate my position or the general position held in current minimalist thinking. But I hope to at least prompt some discussion.

Before I get into that, I want to quickly review what the different factors are and what they can tell us about language and theory.

First Factor: The genetic endowment (i.e., UG). This is where most of the work has been concentrated for most of the life of generative linguistics. In a P&P model, this factor supplied most of the Principles as well as the unset Parameters. Some modern approaches to syntax (especially those argued for directly by Chomsky) attempt to remove as much as possible from this factor.

Second Factor: The individual contribution of the particular language being learned/used. In the old P&P model, the second factor was essentially parameter setting. Chomsky rarely discusses this factor directly, but it can be cast in modern terms as the lexical properties of language-specific items (a lexicon-based view, for lack of a better term, à la Borer, rather than one based on global parameter settings). Ultimately, I believe that this will be the source of most linguistic phenomena, modulated by the other factors.

Third Factor: This factor is language-independent. It is essentially natural law-- the requirement that computation be done efficiently (barring interference from either of the first two factors).

On the surface this makes for a very simple story, but of course we have not arrived at conclusions about any of the factors. Today I will focus mostly on the third factor, though it is impossible to discuss the third without discussing the first.

-----

In many ways I believe that the Third Factor is one of the greatest sources of misunderstanding and/or contention surrounding modern syntactic approaches. At its heart, there should be nothing controversial about it; it is essentially a principle of science: a simpler operation should be preferred over a complex one, all else being equal.

However, in practice, this claim is used inappropriately. I do not want to attack specific arguments here, but I often attend conference talks (or, less commonly, see published work) where unexplained phenomena are shunted off to some near-mystical economy constraint.

There is the added confound that there is no single, standard definition of complexity (nor do I think there can be at this point). But without a solid understanding of complexity, "simpler" becomes something more like a best guess.

This is not inappropriate.

This is science.

It is hypothesis testing. But we must never lose sight of what it actually is.

Economy arguments (appeals to simplicity) are also misused in another way, one that undermines their general acceptance.

To me, a model that takes economy constraints seriously must also take seriously the other aspects of the model presented-- namely, a minimal UG. However, a lot of work takes virtually the opposite approach. For instance, Cartography (with its 400+ universally ordered functional heads) cannot possibly be the genetic component of language. It is simply too specified to be innate. There is certainly much useful work coming out of that approach, but as a theory of UG it is simply not viable. Yet there are theories of global economy built on robust Cartographic approaches.... Attempting to build a model based on how natural law interacts with an object that cannot exist in nature is a fool's errand, at best.

4 comments:

  1. RE: third factor as uncontroversial principle of science, see Martin & Uriagereka's discussion of methodological vs ontological minimalism. There's a substantive issue at stake, beyond Occam's razor -- the extent to which the actual properties of language really do follow from "natural law." Maybe a lot has that kind of explanation, maybe only a little; we don't know until we can formulate and test some hypotheses, however crude and simplistic they may be at this stage. I certainly agree that "near-mystical", vague notions of economy need to be spelled out so they can be evaluated.

    As for cartography being incompatible with minimalism and a large role for the third factor, because it necessarily invokes a rich 1st factor:

    "The question whether such universal hierarchies of functional projections are
    primitive objects of UG, or can be derived from interface or more general external
    conditions is important, but fundamentally orthogonal to the prior task of drawing
    their precise map, and perhaps not easily determinable at the present state of our
    knowledge." (Cinque & Rizzi 2008:4)

    There are two issues: whether you buy the cartographers' idea of universality, and, if you do, what the source of that universal hierarchy is. One can take the universality seriously while remaining agnostic as to its provenance. I think the evidence they marshal in favor of universality -- not just typological facts, but converging evidence from acquisition and aphasia, for instance -- is quite striking, but read it for yourself, don't take my word for it.

    As for the dismissal of even the possibility of rich cartography as part of UG, I think that's too cavalier as well, given how little is known. Certainly I'd wager that it couldn't have arisen de novo as part of the "Great Leap Forward" scenario. That said, there are millions of years of hominid cognitive evolution that we know almost nothing about. One possibility that might be explored is that a richly specified hierarchy might have roots in older capacities that arose in that period (or, who knows, even earlier; I hear there's evidence for some argument-structure-like action schemas in chimp cognition). In fact, the basically linear structure of the cartographer's hierarchy would be usable by a mind with weaker computational power, e.g. finite-state-like. Victor Longa, at the 2012 LSA, presented some archeological evidence that he interpreted as indicating that, in effect, Neanderthals and pre-70kya Sapiens had finite-state capacities.
    Pure speculation, but the point is, we really don't know. As good scientists, yes we need to retain a healthy skepticism... but we also need to be willing to at least consider believing what our best theories are telling us, no matter how little sense it makes right now. Quantum theory is a case in point: it violently upturns our most basic notions of how the world ought to work; no one would believe it if it didn't happen to make such successful predictions.
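
    To make the finite-state point slightly more concrete, here is a toy Python sketch. The hierarchy and head labels below are invented for illustration, not the cartographers' actual map; the point is just that checking a fixed linear order needs only one left-to-right pass and a pointer into the template, i.e., finite-state power:

        # Hypothetical linear hierarchy of functional heads (illustration only).
        HIERARCHY = ["Force", "Top", "Foc", "Fin", "T", "Asp", "v", "V"]

        def respects_hierarchy(heads):
            """True if `heads` occur in hierarchy order (gaps allowed).

            A single pass with no memory beyond the current position in
            HIERARCHY -- recognizable by a finite-state device.
            """
            pos = 0
            for h in heads:
                while pos < len(HIERARCHY) and HIERARCHY[pos] != h:
                    pos += 1
                if pos == len(HIERARCHY):
                    return False
                pos += 1
            return True

        print(respects_hierarchy(["Force", "T", "V"]))  # True
        print(respects_hierarchy(["T", "Force", "V"]))  # False: Force must precede T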

    On a selfish, defensive note, I'll point out that my own work on movement adopts as a starting assumption only the claim that there IS a universal hierarchy; the whole point is to discover what that actually looks like, not presuppose it. If it lines up with what the cartographers find, great. If it's different, then either my account of movement is wrong, or they got the map wrong, or there is no universal map.
    The implausibility of the 400 number assumes that these are unanalyzable primitives, "atoms" of linguistic computation. That's worth questioning: if the bewildering array of heads is itself built by a still-deeper combinatoric system, the primitives may be far fewer in number (hence, more plausibly a part of UG -- or better yet, FLB). Again: too early to issue blanket judgments on what "cannot exist in nature."
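
    A bit of toy arithmetic (in Python, with made-up feature names) shows how fast such a system shrinks the primitive inventory: n binary features generate 2^n distinct head bundles, so nine features already cover 400+ heads:

        from itertools import product

        # Hypothetical binary features -- the names are invented for illustration.
        features = ["finite", "topic", "focus", "interrog", "neg",
                    "perfect", "progressive", "causative", "voice"]

        # Every 0/1 choice across the features defines a distinct head bundle.
        bundles = list(product([0, 1], repeat=len(features)))
        print(len(features), "features generate", len(bundles), "bundles")  # 9 ... 512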

    --Dave

    PS, you should link this on fb, I'd love to hear other opinions as well. Interesting post.

  2. Hey Dave-

    Thanks! I hope to give a more thorough response sometime in the next few days. I will note that in my own work I both use Cartography and invoke Economy... what troubles me so much about Cartography is that it seems to be right (at least in some fundamental cases: CP over TP over vP or VP, for instance)....

    Jeff

  3. Hmm, looks like the overlong comment I emailed you really is overlong! Google tells me I only have 4096 characters to work with... so, I'll try to sum up:

    I disagree about the inevitability of the Third Factor. This is related to the personal problem I have taking any "economy constraints" really seriously. I mean, they may be the right way to go, but I don't really follow the argument for why economy principles should characterize the correct theory of syntax. My problem boils down to two questions: (1) "economy of what?" and (2) "why would economy at the level of linguistic theory be expected to correspond to economy at the biological level?"

    For the first, I think I naturally expect formulations of economy to have some kind of unit. Otherwise, how can you compare "costs"? Maybe there is more rigor to these approaches than I give them credit for, but without an understanding of what is being "saved" in a more economical theory, it all smacks of mysticism, and "economy" becomes just an incantation to chant whenever you need something to limit operations in some way (number of operations, distance of operations, etc. etc.).
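
    Here is the kind of thing I mean by a unit, as a toy Python sketch -- the operations, the weights, and the sample derivations are all invented purely to illustrate the bookkeeping, not a real proposal:

        def cost(derivation):
            """Score a derivation: one unit per Merge; Move pays one unit
            plus the distance (in nodes) it crosses. Toy weights only."""
            total = 0
            for op, distance in derivation:
                if op == "merge":
                    total += 1
                elif op == "move":
                    total += 1 + distance
            return total

        short_move = [("merge", 0), ("merge", 0), ("move", 1)]
        long_move  = [("merge", 0), ("merge", 0), ("move", 4)]

        # "More economical" now just means cheaper under an explicit metric.
        print(cost(short_move), cost(long_move))                      # 4 7
        print(min([short_move, long_move], key=cost) is short_move)   # True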

    The second issue is related, because if you don't know what is being "saved" by more economy, why do you think it relates at all to the underlying biology? To be concrete about it, I can at least imagine what it would mean to say that we expect the neural representation of words (or the computations that words involve) to be "optimal", in some sense of optimal. But why would we expect this to generalize to a much, much higher level of abstraction-- to expect the principles that account for syntactic derivations to be optimal? Even most Chomskyans don't believe that syntactic derivations represent operations carried out in real time, so what exactly is the "cost" of a non-"economic" syntax? I just don't get it. Does Chomsky (or Uriagereka or anyone else) have an argument for why economy should scale from the biological level to the level of syntactic theory?

    The other useful comment I think I have is that I think you're missing something that doesn't quite fit in the Three Factors description. It's related to how Chomsky can, in the same breath, talk about Language as a special, modular cognitive system that can't be reduced to "general cognitive principles", but is also subject to these universal "natural law" principles. How can a system be both utterly unique and completely derived from universal physical laws? The way I understand it, for Chomsky, the nature of the linguistic system is dependent on the interfaces and other "imperfections", which give rise to specialized linguistic structures and rules. In other words, you take the "purity" of a system constrained by biological economy kinds of things, and you add the messiness of it being a system that interfaces with conceptual systems and with production/perception systems, and you get the organ of language.

    How does this fit? Is this a formulation of the First Factor, or is it a Fourth?

    Replies
    1. Let me say this: I don't view Third Factor considerations as inevitable either. I think the point you raise about falsifiability is legitimate, but frankly, the same is true of the opposing view. I do think it is a reasonable starting point to say that maybe this abstract system functions like physical ones. Maybe we will find that the Third Factor plays little to no role. That would be a very interesting result. But it would be a definite result. So I don't view this as an inevitability but rather as a starting point.
