I promised nearly a week ago that I would return to a discussion of the so-called Third Factor at some point. That is the goal of this post. I am afraid I will not have time to fully articulate my position or the general position held in current minimalist thinking. But, I hope to at least prompt some discussion.
Before I get into that, I want to quickly review what the different factors are and what they can tell us about language and theory.
First Factor: The genetic endowment (i.e., UG). This is where most of the work has been done for most of the life of generative linguistics. In a P&P model this is what gave most of the Principles and also the unset Parameters. Some modern approaches to syntax (especially those argued for directly by Chomsky) attempt to remove as much as possible from this factor.
Second Factor: The second factor is the individual contribution of the particular language being learned/used. In the old P&P model, the 2nd factor would essentially be parameter setting. Chomsky rarely discusses this factor directly, but it can be cast in modern terms as the lexical properties of language-specific items (or some other view based on a lexicon (for lack of a better term) à la Borer, rather than on global parameter settings). Ultimately, I believe that this will be the source of most linguistic phenomena, with contributions from the other factors.
Third Factor: This factor is language independent. It is essentially natural law-- the requirement that computation be done efficiently (barring interference from either of the first two factors).
On the surface this makes for a very simple story, but of course we haven't arrived at conclusions about any of the factors. Today, I will focus mostly on the third factor, though it is impossible to discuss the third without discussing the first.
In many ways I believe that the Third Factor is one of the greatest sources of misunderstanding and/or contention surrounding modern syntactic approaches. At its heart, there should be nothing controversial about it; it is essentially a principle of science: a simpler operation should be preferred over a complex one, all else being equal.
However, in practice, this claim is used inappropriately. I do not want to attack specific arguments here, but I often attend conference talks (or, less commonly, see this in published sources) where unexplained items are shunted off to some near-mystical economy constraint.
There is the added confound that there is no single, standard definition of complexity (nor do I think there can be at this point). But without a solid understanding of complexity, "simpler" becomes something more like a best guess.
This is not inappropriate.
This is science.
It is hypothesis testing. But we need to never lose sight of what it actually is.
Economy/simplicity arguments are also misused in another way, one that undermines their acceptance generally.
To me, a model which takes economy constraints seriously must also take seriously the other aspects of the model presented--namely, a minimal UG. However, a lot of work takes virtually the opposite approach. For instance, Cartography (with its 400+ universally ordered functional heads) cannot possibly be the genetic component of language. It is simply too specified to be innate. There is certainly much useful work coming out of that approach, but as a theory of UG it is simply not viable. Yet there are theories of global economy built on robust Cartographic approaches.... Attempting to build a model based on how natural law interacts with an object that cannot exist in nature is a fool's errand, at best.