First, to say that “no one (outside of a couple of Russian logicians) knew about Paraconsistent Logic in those times” is oddly eurocentric and, to be frank, offensively dismissive of those Russian (and, slightly later, Polish and South American) logicians, who were — and are — both creative and brilliant. Just because a few arrogant logicians working in primarily English-speaking circles were unaware of work being done on other continents doesn’t mean that “no one… knew.” What you mean, I think, is that the Vienna Circle folk were unaware of the work being done elsewhere. Their bad, of course. But Prof Blanchette isn’t making any normative claims about what logicists should or should not have explored, and so your argument that logicism failed because the logicists failed to consider something other than logicism (i.e., an axiomatic system in a non-classical logic) is a bizarre red herring.

Second, the remark that “we *can* reduce mathematics to Paraconsistent Logic, since those systems of logic can tolerate contradictions (like Russell’s Paradox)” is similarly laden with non-universal assumptions — in this case, about what is meant by “mathematics.” I’ll assume, for the sake of argument, that when you say mathematics you mean axiomatized systems of basic arithmetic, e.g., Peano Arithmetic, or perhaps Robinson’s arithmetic (a finitely axiomatized fragment of Peano Arithmetic). But then what you say is false, because Peano Arithmetic presumes an underlying logic (embedded in what used to be called the equality relation axioms) — which of course means that PA *cannot* be reduced to, for example, the paraconsistent system LP, because LP lacks some of the basic rules of inference (such as modus ponens) that are required by the underlying logic of PA. Sure, we can change the logic of PA. But that is simply to modify the arithmetic or mathematics that we’re trying to reduce so as to make it reducible — not to actually reduce PA.
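The modus ponens point can be checked mechanically. Below is a minimal sketch of LP’s standard three-valued semantics (values T, B, F, with T and B designated, and the material conditional defined as ~A ∨ B); a brute-force search over valuations turns up the counterexample. This is just an illustration under those standard assumptions, not anyone’s official formalization:

```python
from itertools import product

# A minimal sketch of Priest's LP, assuming its standard presentation:
# three values T ("true"), B ("both true and false"), F ("false"),
# with T and B designated (i.e., counting as "accepted").
ORDER = {'F': 0, 'B': 1, 'T': 2}               # truth-value ordering: F < B < T
DESIGNATED = {'T', 'B'}

def neg(a):     return {'T': 'F', 'B': 'B', 'F': 'T'}[a]
def disj(a, b): return max(a, b, key=ORDER.get)
def cond(a, b): return disj(neg(a), b)         # material conditional: ~A v B

def lp_valid(premises, conclusion):
    """LP entailment: whenever all premises are designated, so is the conclusion."""
    for p, q in product('TBF', repeat=2):
        v = {'P': p, 'Q': q}
        if all(f(v) in DESIGNATED for f in premises) and conclusion(v) not in DESIGNATED:
            return False, v                    # counterexample found
    return True, None

# Modus ponens: do P and P -> Q jointly entail Q in LP?
ok, cex = lp_valid([lambda v: v['P'], lambda v: cond(v['P'], v['Q'])],
                   lambda v: v['Q'])
print(ok, cex)   # False {'P': 'B', 'Q': 'F'}
```

The glutty value B on P is exactly what a paraconsistent treatment of Russell’s Paradox trades on, and it is also what breaks modus ponens for the material conditional.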

Finally, it’s misleading to suggest that paraconsistent theories somehow permit a way around the incompleteness result by embracing the inconsistent fork of the dichotomy. Stewart Shapiro’s 2002 “Incompleteness and Inconsistency” is a good start for those who want to grasp the fact that paraconsistent logics don’t provide a direct path to completeness, or, for that matter, even to the promise of a comfortably controlled form of inconsistency. Shapiro’s results show that if one adds Priest’s dialetheic semantics to PA to produce a recursively axiomatized system PA* that contains its own truth predicate, there are purely arithmetic (Π0, no less!) sentences that are both provable and refutable in PA*; even worse, there is a number g which both is and is not the code of a derivation of the Gödel sentence of PA*. In other words: paraconsistent mathematical theories come in many different flavors, and will not always be complete. Depending on what the theory takes to be true, and the strength of the deductive system, there might well be unprovable truths.

Matt — many thanks for posting this excellent discussion with Prof Blanchette!

Maybe our guest was wrong in suggesting that the evidence in favor of universal grammar was definitive, and maybe he was right. Either way, that position is fairly representative of the state of the field, at least in many departments. It is not uncommon, at many of the top departments across the US, for students to complete an entire undergraduate linguistics major without hearing anyone mention the functional paradigm. For that reason, I’m not sure I’d be able to portray the foundational disagreement between generative and functional linguists as a live debate without feeling a bit disingenuous.

That doesn’t mean that’s the way it should be, but it does mean that it’s hard to find guests who are qualified to discuss both, and that it can be difficult to come by meaningful discussions of the empirical points in favor of one approach versus the other. It’s a bit like if we wanted to do an episode comparing Roland Barthes’ theory of meaning with Richard Montague’s theory of meaning. I would love to do something like that, but it isn’t necessarily that easy to find someone who’s spent enough time immersing themselves in the literature from both traditions that they’re able to draw non-superficial, well-informed connections between them. A more realistic option, in this case, would be to find a guest who is an expert in functional linguistics and can lay out the empirical evidence in favor of some of its basic conceits the way our guest did here for the theory of universal grammar.

Anyway, it’s not very often that we get to hear feedback from listeners about what topics they’d like to see covered, so this is very much appreciated.

Thanks for the podcast on language universals! Unfortunately, I was somewhat dismayed that the episode didn’t even touch on Functionalism, which is the major competing approach to linguistics today, one which denies the existence of Universal Grammar in favor of explanations based in more general theories of cognition and complex adaptive systems. The Functionalist (“anti-Universal Grammar”) perspective in fact seemed rather belittled during the episode, as though to say, ‘What sensible person could possibly think this?’ But in fact the Functionalism vs. Generativism (Universal Grammar) debate is a major ongoing issue and point of contention in the field. It would be disingenuous to claim that the science or the debate is settled on this matter. Do you have plans to present the opposing perspective in the future? Or would you entertain the idea of doing so? It seems that philosophers with an interest in linguistics would be well-served to understand the range of theoretical approaches available to them, and their potential merits and demerits, rather than being limited to the perspective of Generativism alone.

best,

Danny

What Gödel’s Incompleteness Theorems showed was that if a recursively axiomatizable formal system is expressive enough to serve as a foundation for arithmetic, it has to be either incomplete or inconsistent. The real problem that Russell’s Paradox caused was that it trivialized naive set theory by way of the Principle of Explosion: P & ~P -> Q (from a contradiction, anything & everything follows, rendering the theory “trivial”).
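The Principle of Explosion can be verified by brute force: since no classical valuation makes P & ~P true, the entailment to an arbitrary Q holds vacuously. Here is a small sketch (plain Python over two atoms, nothing more):

```python
from itertools import product

def classical_entails(premise, conclusion):
    """Classical entailment over two atoms: every row making the premise
    true must also make the conclusion true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if premise(p, q))

# Explosion: P & ~P entails Q -- vacuously, since P & ~P is true in no row.
print(classical_entails(lambda p, q: p and not p, lambda p, q: q))  # True
```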

Of course, this brings up an interesting avenue that some logicians have been & are currently exploring: What if we keep the naive set theory, we keep Russell’s Paradox, but we change our logic from Frege’s Classical Logic? This has been explored by logicians like Brady & Weber by switching from Classical Logic to a Paraconsistent Logic. And the interesting thing is that it seems like we *can* reduce mathematics to Paraconsistent Logic, since those systems of logic can tolerate contradictions (like Russell’s Paradox) without the theory falling to trivialism. And this fits with Gödel’s Incompleteness Theorems; they allow for a foundation of mathematics to be inconsistent. The problem was that no one (outside of a couple of Russian logicians) knew about Paraconsistent Logic in those times, so they thought a contradiction was indisputably the death of a theory.
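The “tolerating contradictions” claim can be made concrete with a sketch of LP’s three-valued semantics (the usual presentation: values T, B, F, with T and B designated, negation fixing B, and conjunction as minimum). A sentence can take the glut value B, so a contradiction is accepted without an arbitrary Q following, i.e., explosion fails:

```python
# Sketch of why explosion fails in LP, under the standard three-valued setup.
ORDER = {'F': 0, 'B': 1, 'T': 2}               # truth-value ordering: F < B < T
DESIGNATED = {'T', 'B'}                        # values that count as "accepted"
NEG = {'T': 'F', 'B': 'B', 'F': 'T'}           # LP negation maps the glut B to B

# Assign P the glut value B (both true and false) and Q plain F.
P, Q = 'B', 'F'
contradiction = min(P, NEG[P], key=ORDER.get)  # P & ~P evaluates to B

print(contradiction in DESIGNATED)  # True: the contradiction is accepted...
print(Q in DESIGNATED)              # False: ...yet an arbitrary Q does not follow
```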

/long comment 🙂
