
Thursday, January 31, 2013


As I’ve noted many times (e.g. here), when a thinker like Aquinas describes God as the First Cause, what is meant is not merely “first” in a temporal sense, and not “first” in the sense of the cause that happens to come before the second, third, fourth, fifth, etc. causes, but rather “first” in the sense of having absolutely primal and underived causal power, of being that from which all other causes derive their efficacy.  Second causes are, accordingly, “second” not in the sense of coming later in time or merely happening to come next in a sequence, but rather in the sense of having causal power only in a secondary or derivative way.  They are like the moon, which gives light only insofar as it receives it from the sun.

The moon really does give light, though, and secondary causes really do have causal power.  To affirm God as First Cause is not to embrace the occasionalist position that only God ever really causes anything to happen.  Alfred Freddoso helpfully distinguishes between occasionalism, mere conservationism, and concurrentism.  Whereas the occasionalist attributes all causality to God, mere conservationism goes to the opposite extreme of holding that although God maintains things and their causal powers in being, they bring about their effects all by themselves.  Concurrentists like Aquinas take a middle ground position according to which secondary causes really have (contra occasionalism) genuine causal power, but in producing their effects still only ever act together with God as a “concurring” cause (contra mere conservationism).  To borrow an example from Freddoso, if you draw a square on a chalkboard with blue chalk, both you as primary cause and the chalk as secondary cause are joint causes of the effect -- you of there being any square there at all, the chalk of the square’s being blue.  God’s concurrence with the secondary, natural causes he sustains in being is analogous to that.

Concurrentism alone, the Thomist holds, can adequately account for both the natural world’s reality and its utter dependence on God.  Occasionalism threatens to collapse into pantheism insofar as, if it is really God who is doing everything that creaturely things seem to be doing, it is hard to see how they are in any interesting way distinct from him.  (Consider that a mark of a thing’s having a substantial form rather than an accidental form -- and thus of its being a true substance, with an independent existence, rather than being a mere modification of something else -- is having its own irreducible causal powers.)  Mere conservationism, on the other hand, threatens to collapse into deism, on which the world could in principle carry on just as it is even in the absence of God.  (For if, as the Scholastics hold, a thing’s manner of acting reflects its manner of existing, then what can bring about effects entirely independently of God can in principle exist apart from God.)
 
That secondary causes are true causes, even if ultimately dependent on God, is necessary if natural science is to be possible.  If occasionalism were true, absolutely everything that happens would, in effect, be comparable to a miracle and there would be no natural regularities to discover.  Physics, chemistry, biology, and the like would be nothing other than branches of theology -- the study of different sorts of divine action rather than of (say) the properties of magnetism, electricity, gravitation, hydrogen, helium, bodily organs, or genetic material as such.  And if God’s ways are inscrutable (as they must be given that He is pure actuality, subsistent being itself, etc.), then there would be little reason to expect regularity in any of these spheres.  (As Alain Besançon has argued, a tendency toward an occasionalist conception of divine causality is part of what distinguishes Islam from Christianity – and this is no doubt one reason why natural science progressed in the West and stagnated within the Islamic world.)

But it is not just in the area of efficient causality that this middle ground position is theologically and philosophically essential.  Final causality too must be regarded as immanent to nature, and precisely because efficient causal powers are.  For Aquinas, there is no way to make sense of the fact that an efficient cause A regularly generates a certain specific effect or range of effects B -- rather than C, or D, or no effect at all -- if we don’t suppose that A inherently “points to” or is “directed at” B as toward an end or goal.  Immanent efficient-causal power goes hand in hand with immanent finality or directedness; deny the latter and you implicitly deny the former, which is why Humean skepticism about efficient causality as a real, objective feature of the world followed upon the early moderns’ chucking-out of immanent final causes.

That means that potency as a real feature of nature would go out the door with immanent finality, since a potency is always a potency for some particular outcome, toward which it “points” or is directed.  If there is no finality inherent in nature, then there are no real potencies in nature either.  And if potency is not a real feature of the world, then there is no basis for an Aristotelian-Thomistic argument from change or motion -- that is to say, from the actualization of potency -- to the existence of an Unmoved Mover (or “Unactualized Actualizer”) of the world.  (Indeed, as I argued in my 2011 lecture at Franciscan University of Steubenville, which you can view on YouTube, there is in general no way to argue from the world to God if potency is not a real feature of the world.)

Immanent formal causes -- substantial forms or immanent natures, inherent in natural substances themselves rather than in some Platonic third realm -- are essential for the same reason.  For a thing’s substantial form is the immediate ground both of its efficient-causal powers and its “directedness” toward certain ends.  Hence if formal causes are not immanent to natural substances, neither are efficient causal powers or finality (i.e. teleology or “directedness” toward an end). 

The distinction between immanent or “built in” efficient, final, and formal causes on the one hand, and extrinsic or externally imposed causes on the other, is essentially coterminous with the Aristotelian distinction between “nature” and “art” (which I’ve discussed in several places, such as here and here).  To appeal to an example I’ve used several times before, a liana vine (the sort of vine Tarzan swings around on) is a paradigmatic natural substance, whereas a hammock Tarzan might make out of living liana vines is a paradigmatic artifact.  The difference is that the vines have an inherent tendency to carry out activities like taking in nutrients through their roots, growing in certain patterns, etc., but do not have any inherent tendency to function as a hammock.  That is why, unless occasionally pruned, re-tied, and so forth, living liana vines will presumably not stay configured in a hammock-like way.  The hammock-like function is externally imposed on the vines, whereas the functions of taking in nutrients, growing in certain patterns, etc. are “built in” to the vines, just by virtue of being vines.  That is what it is for the vines to have the substantial form of a liana vine, whereas the form of being a hammock is a merely “accidental” form.  And that’s what it is for the nutrient absorption and growth patterns to be instances of immanent finality or teleology while the hammock-like function is an instance of extrinsic finality or teleology.  And precisely for that reason, efficient-causal powers like the ability to facilitate a restful sleep are not inherent to the vines as such, but result only from Tarzan’s having redirected the vines away from their natural tendencies and toward an end of his own.

Now just as attributing real causal power to secondary or natural causes (contra occasionalism) is in no way inconsistent with the claim that all causal power ultimately derives from God as First Cause, so too, insisting that final and formal causes are immanent to natural substances is in no way incompatible with affirming that God is the ultimate source of natural teleology (as the Supreme Intellect which directs things toward their ends, as Aquinas holds in the Fifth Way) and the ultimate source of the forms of things (insofar as, as Aquinas also holds, the forms of things preexist in the divine intellect as the archetypes according to which God creates).  The latter positions are essentially analogues, for formal and final causes, of the concurrentist position vis-à-vis efficient causes.

Indeed, concurrentism requires such a view about formal and final causes, for the reasons already indicated.  If formal and final causes in no way derived from God, then neither would a thing’s efficient causal powers (which follow upon its substantial form and teleological features) depend on God.  We would be left with mere conservationism at best.  On the other hand, if formal and final causes were entirely extrinsic, imposed from outside by God but in no way inherent in things themselves, then neither would a thing’s efficient causal powers -- which, again, follow upon its form and its teleological features -- be inherent in it.  We would be left with an essentially occasionalist position.

This is why the Aristotelian-Thomistic position is, as I have argued many times, fundamentally incompatible with Paleyan “design arguments” and “Intelligent Design” theory.  Insofar as these approaches treat natural objects as artifacts, they essentially attribute to them merely accidental rather than substantial forms, and teleology or finality that is entirely extrinsic rather than immanent.  This not only gets the natural order entirely wrong insofar as it ignores the Aristotelian distinction between “nature” and “art,” but it leads (whether the proponents of these views realize it or not) to a conception of divine causality that threatens to collapse into occasionalism.  (Though other things such writers say tend toward the opposite extreme of deism.  For they hold that whether the order exhibited by natural phenomena has a divine cause is a matter of probability -- which entails that it is at least in principle possible that the formal, final, and efficient causes of things might lack a divine sustaining cause.  The Thomist view, of course, is that this is not possible even in principle, so that the existence of a divine source of formal, final, and efficient causality is not a matter of mere probability but rather of metaphysical necessity.)

Earlier I cited the moon’s illumination of the earth, and Freddoso’s example of the chalk, as illustrations of the idea that secondary efficient causes have genuine causal power of their own even though that power ultimately depends on something outside them.  Are there examples that might help us to understand how finality, teleology, or directedness can be both immanent to natural substances and yet dependent on a divine source?

There are.  Consider, first, a simple analogy.  A white wall on which ordinary sunlight is shining is white and not at all red.  A white wall on which red light is shining is in one sense red, but it derives its redness entirely from the light.  And a red wall on which ordinary sunlight is shining is in some sense red inherently, but the redness is nevertheless manifest only insofar as the light is shining on it.  Now compare God’s imparting of teleology to natural substances to the light’s shining on a wall.  Natural teleology as writers like Paley understand it -- something entirely extrinsic to nature -- can be compared to the redness a white wall has only when the red light is shining on it.  But natural teleology as Aquinas understands it is like the redness a red wall has when ordinary sunlight is shining on it.  The redness is really there in the wall, yet it cannot in any way manifest itself apart from the light.  (I ignore the scientific details as irrelevant to the purpose of the analogy, and I do not claim that the analogy is perfect, only suggestive.)

Or consider signs, linguistic and otherwise.  The word “triangle” and the symbol Δ can both be used to represent triangles in general.  Now neither one can do so on its own, for each by itself is a mere set of physical marks with no symbolic content.  A mind must impart such content to them.  Moreover, the connection between the word “triangle” and triangles is entirely arbitrary, an accident of the history of the English language.  And even Δ hardly resembles all triangles; for example, there are obvious respects in which it does not resemble right triangles, or green ones, or very large ones.  All the same, there is obviously something inherent to Δ which makes it a more natural symbol for triangles in general than the word “triangle” is.  Though both symbols ultimately depend for their symbolic content on a mind which imparts that content to them, Δ nevertheless has an inherent aptness for representing triangles in general that “triangle” does not.  Now compare God’s imparting of teleology to natural objects to a mind’s imparting symbolic content to signs.  For Paley and his conception of teleology as entirely extrinsic, natural objects are like the word “triangle,” whereas for Aquinas they are like Δ.  As with the word, the symbol Δ refers to triangles in general only insofar as that meaning is imparted to it, but there is still a natural connection between Δ and triangles in general that does not exist between “triangle” and triangles in general.  (Again, I do not say that the analogy is perfect, only suggestive.)

A final analogy is taken from linguistic representation specifically.  If we consider the words and sentences we speak and write, it is obvious that they get their meaning from the community of language users that produces them, and ultimately from the ideas expressed by those language users in using them.  Apart from these users, these linguistic items would be nothing more than meaningless noises or splotches of ink.  Still, once produced, they take on a kind of life of their own.  Words and sentences printed in books or recorded on tape retain their meaning even when no one is thinking about them; indeed, even if the books or tapes sit in a dusty corner of a library or archive somewhere, ignored for decades and completely forgotten, they still retain their meaning.  Moreover, language has a structure that most language users are unaware of, but which can be studied by linguists.  Still, if the community of language users were to disappear entirely – every single one of them killed in a worldwide plague, say – then the recorded words that were left behind would in that case revert to meaningless sounds or marks.  While the community of language users exists, its general background presence is all that is required for meaning to persist in the physical sounds and markings, even if some of those sounds and markings are not the subject of anyone’s attention at a particular moment.  But if the community goes away altogether, the meaning goes with it. 

By analogy (and here too I do not claim that the analogy is exact) we might think of the relationship of the divine intelligence of Aquinas’s Fifth Way to the system of final causes in the natural world as somewhat like the relationship of language users to language.  God directs things to their ends, but the system thereby created has a kind of independence insofar as it can be studied without reference to God Himself, just as linguists can study the structure of language without paying attention to the intentions of this or that language user.  The directedness toward certain ends is in a sense just “there” in unintelligent causes, just as the meaning is just “there” in words once they have been written.  At the same time, if God were to cease directing things toward their ends, final causes would immediately disappear, just as the meaning of words would disappear if all language users disappeared.  In this way, immanent teleology plays a role similar to that of secondary causes in the order of efficient causes, as I suggested above.  Just as secondary causes have real causal power of their own, even if it derives ultimately from God as First Cause, so too natural objects have immanent teleology, even if it derives ultimately from God as ordering intelligence.  (This is not intended as an exposition or defense of the Fifth Way itself, mind you -- for that see Aquinas.)

As I have said, to deny the immanence of teleology would be implicitly to deny that natural substances have real causal power and that there is any real potency in nature -- and thus to undermine the foundations of natural science and natural theology (or at least any natural theology that argues from the world to God).  It would also undermine the possibility of natural law.  For there can be such a thing as natural law only insofar as there are ends toward which human beings are naturally and inherently directed, and which we can therefore know by studying human nature.  If teleology is entirely extrinsic -- no more immanent to nature than the time-telling function is to the metal parts of a watch -- then it can only exist in the world, including human beings, insofar as it is imposed on it entirely from outside either by us or by God.  If by us, then ethics is essentially a human invention; if by God, then it is a matter of sheer divine command, which entails that we could know what is good or bad for us only by reference to those commands rather than by reference to human nature itself.  (I discussed this issue at greater length in an earlier post.)

In short, natural law, natural science, and natural theology presuppose the reality of nature -- nature as something which, though ultimately dependent on God (and necessarily so), nevertheless is distinct from God and thus can at least partially be understood without reference to God.  That is why we can know that certain actions are good for us and others bad whether or not we know that the former have been commanded by God and the latter forbidden by Him; it is why we can do physics, chemistry, and biology without constantly asking “What were God’s intentions in making [quarks, phosphorus, dandelions, etc.]?”; and it is why we can know that teleology and potency are real features of the world whether or not we know that there is a God (so that the arguments for the existence of God from the reality of natural teleology and potency are not circular arguments).

You might say that the natural order is the metaphysical middle man between human beings and God.  There are certain kinds of religious sensibility eager to cut out the middle man – to deny nature, or do dirt on it, or make it “respectable” by absorbing it into the order of grace.  Sometimes this takes a “high church” form – pantheism, say, or occasionalism, or Barthianism and some other strains of Protestantism, or the Catholic nouvelle theologie.  (I said something about some of these views in an earlier post.)  Sometimes it takes a “low church” form, as with Bible-thumping (or Quran-thumping) fideism, or the crude picture of natural objects as artifacts of “the ‘carpenter’ of cheap apologetics” (as Gilson once described the anthropomorphic god of popular design arguments).  What these otherwise very different views have in common is a tendency to deny that the natural order per se really has anything interesting or important to tell us -- to insinuate that we have to go straight to theology for that.  Zealous to honor the Creator, they end up insulting His creation.

And now, for you Boz Scaggs fans who have held on to the end, thinking this post had something to do with his classic Middle Man, here’s the best known cut from the album.

Tuesday, January 29, 2013


Over at The Huffington Post, Rabbi Adam Jacobs defends the cosmological argument for the existence of God, kindly citing yours truly and The Last Superstition.  Give it a read, then sit back and watch as the tsunami of clueless objections rolls into the combox.

Friday, January 25, 2013


In another in a series of excellent interviews with contemporary philosophers, 3:AM Magazine’s witty and well-informed Richard Marshall talks to analytic metaphysician Stephen Mumford.  Mumford is an important and influential contributor to the current revival of interest in powers and dispositions as essential to understanding what science reveals to us about the natural world.  The notion of a power or disposition is closely related to what the Scholastics called a potency, and Mumford cites Aristotle and Aquinas as predecessors of the sort of view he defends.  Mumford’s notion of the “metaphysics of science” is also more or less identical to what modern Scholastic writers call the philosophy of nature.  But Mumford’s interest is motivated by issues in philosophy of science and metaphysics rather than natural theology.  The interview provides a brief but useful introduction to some of the basic issues that have arisen in the contemporary debate about powers.

Some comments on the interview: Mumford cites Bertrand Russell as a great thinker from whom one can learn much even if one largely disagrees with him.  I agree with that assessment (where Russell’s serious philosophical work was concerned, anyway -- his popular writings on religion, morals, politics, etc. are awful), and I wrote my doctoral dissertation in part on Russell.  I would qualify some of the specific points Mumford makes, however.  The early Russell famously rebelled against the neo-Hegelian monism that dominated British philosophy in the late 19th century, in favor of a metaphysics of radically discrete objects.  He famously suggested that for Hegel the world is like a jelly -- one continuous blob, as it were -- whereas for Russell himself the world was like a bucket of shot, countless disconnected individual bits.

Mumford gives the impression that dispositionalism -- which affirms an interconnectedness between things insofar as dispositions tend toward their manifestations (e.g. brittleness tends toward breaking) -- entails a return to something like the monism Russell rejected.  But I think that is not correct (and I’m not sure Mumford would actually take it that far).  Instead of comparing the world to either jelly or buckshot, we might compare it to a museum full of paintings that represent each other from different points of view.  The paintings are discrete objects (unlike the Hegelian jelly) but not radically independent (unlike Russell’s buckshot) insofar as they point beyond themselves to each other.  The Aristotelian conception of the world, anyway -- which, in fairness, Mumford himself may not be entirely committed to -- is a middle ground between monism (whether Hegelian, Parmenidean, or what have you) and radical metaphysical individualism (whether Humean, Ockhamite, or whatever).

Mumford might not be entirely happy with the painting analogy, though, since he indicates that he disagrees with George Molnar’s idea that powers exhibit a kind of intentionality insofar as they are “directed toward” their manifestations, and he favors instead the notion of what he calls the “dispositional modality,” which lies between pure necessity and pure contingency.  Here (with qualifications) I would side with Molnar.  I am also wary of Mumford’s rejection of the division of properties into categorical and dispositional, which seems to threaten to lead to a metaphysics of pure potency devoid of act, to use the Scholastic language.  (These are issues I’ll be dealing with in some forthcoming work on causation.)

Mumford also unfavorably compares Wittgenstein to Russell.  I agree with Mumford’s reservations about Wittgensteinian method, but I think that Wittgenstein is nevertheless of great significance for metaphysics (despite Wittgenstein’s own intentions!) insofar as his work constitutes a powerful critique of reductionism, scientism, and related notions.  I would say that Wittgenstein’s position points in the direction of something like an Aristotelian conception of human nature, even if he would himself never have taken it in that direction.

These disagreements notwithstanding, Mumford is always interesting and the interview is well worth reading.  Mumford makes some wise remarks about science, noting that “physics largely consists of a mathematical representation of reality: usually an artificial portion of reality in a model.  Reality should not be mistaken for that mathematical representation.  The world is not a number, nor an equation.”  (This is a theme I have addressed many times, such as here and here -- Mumford, who is a fan of comic books, might especially like the first of those posts.)  The joke Marshall cites from the book Mumford co-wrote with Rani Lill Anjum nicely parodies the clueless scientism that permeates so much of contemporary intellectual life.

Monday, January 21, 2013


I commented recently on the remarks about Thomas Nagel’s Mind and Cosmos made by Eric Schliesser over at the New APPS blog.  Schliesser has now posted an interesting set of objections to Alvin Plantinga’s “Evolutionary Argument against Naturalism” (EAAN), which features in Nagel’s book.  Schliesser’s latest comments illustrate, I think, how very far one must move away from what Wilfrid Sellars called the “manifest image” in order to try to respond to the most powerful objections to naturalism -- and how the result threatens naturalism with incoherence (as it does with Alex Rosenberg’s more extreme position).

The EAAN

First let me summarize Plantinga’s EAAN, which I think does pose a powerful challenge to naturalism, though I don’t think it shows quite what Plantinga thinks it does.  (Plantinga’s most recent statement and defense of the argument can be found in Where the Conflict Really Lies, which I recently reviewed for First Things.)

The EAAN begins by noting that what natural selection favors is behavior that is conducive to reproductive success.  Such behavior might be associated with true beliefs, but it might not be; it is certainly possible that adaptive behavior could be associated instead with beliefs that happen to be false.  In that case, though, there is nothing about natural selection per se that could guarantee that our cognitive faculties reliably produce true beliefs.  A given individual belief would have about a 50-50 chance of being true.  And the probability that the preponderance of true beliefs over false ones would be great enough to make our cognitive faculties reliable is very small indeed.
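
To get a rough sense of why that probability is so small, here is a back-of-the-envelope calculation of my own (the particular numbers and the independence assumption are illustrative, not Plantinga’s): suppose a creature holds 1,000 independent beliefs, each with a 1/2 chance of being true, and count its faculties “reliable” only if at least three-quarters of those beliefs are true.  Then the chance of reliability is

$$\Pr(X \geq 750) \;=\; \sum_{k=750}^{1000} \binom{1000}{k} \left(\tfrac{1}{2}\right)^{1000} \;\leq\; e^{-2(1000)(1/4)^{2}} \;=\; e^{-125} \;\approx\; 10^{-54},$$

where $X$ is the number of true beliefs and the inequality is Hoeffding’s bound.  The exact figures matter far less than the shape of the calculation: on these assumptions, the probability of overall reliability collapses as the number of beliefs grows.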

Now if evolution is only part of the story of the origin of our cognitive faculties, this is not necessarily a problem.  For example, if there is a God who ensures that the neurological processes generated by natural selection are generally correlated with true beliefs, then our cognitive faculties will be reliable.  But suppose that, as naturalism claims, there isn’t more to the story.  Then for all we know, our cognitive faculties are not reliable.  They may be reliable, but we will have no reason to believe that they are, and good reason to believe that they are not.  Now that means that we also have good reason to doubt the beliefs that are generated by those faculties.  For the naturalist, that will include belief in naturalism itself.  Naturalism, then, when conjoined with evolution, is self-defeating.  Evolution, concludes Plantinga, is thus better interpreted within a non-naturalistic framework.  

I think the basic thrust of this argument is correct, though I prefer the related argument that generally goes under the name of “the argument from reason,” which has been defended in different versions by Karl Popper, Victor Reppert, and William Hasker, and which I endorsed in Philosophy of Mind and The Last Superstition.  For one thing, I don’t think the basic point of the argument has anything to do with weighing probabilities, so that Plantinga’s tendency to state the argument in probabilistic terms needlessly muddies the waters somewhat.  The key point is rather that the logical relations that hold between thoughts cannot in principle be reduced to, supervenient upon, or in any way explained in terms of relations of efficient causality between material elements.  See the post on Popper just linked to for a summary of the argument as I would state it.

I also think that it is a mistake to suppose that the EAAN gives direct support to theism, specifically -- as opposed, say, to a non-theistic teleological view of the world (such as Nagel puts forward in Mind and Cosmos).  In Where the Conflict Really Lies, Plantinga acknowledges (rightly, in my view) that design inferences of the sort associated with William Paley and “Intelligent Design” theory do not constitute strong arguments for theism.  But he suggests reinterpreting the tendency to see design in complex biological phenomena as a kind of “perception” rather than an inference or argument.  Just as you can perceive that someone is angry from the expression on his face, so too, Plantinga suggests, can you perceive that an organ was designed from the order it exhibits.  And just as the former perceptual belief is rational despite its typically not involving an inference or argument, so too is the latter rational even if it does not involve an inference or argument.

There’s a lot that could be said about this, but the most important thing to say is that it is simply too quick.  As any Aristotelian can tell you, it is one thing to attribute a function to something, but quite another to attribute design to it.  That roots have the function of anchoring a plant to the ground and taking in nutrients may well be something we just perceive on close examination.  But that is precisely because having such functions is of the nature of roots -- something built into them, as it were.  In that respect they are very different from an artifact like a watch, whose metallic parts do not have a time-telling function built into them by nature.  That function has to be imposed on them from outside, which is why a watch requires a designer.  But precisely because natural objects are not artifacts, to perceive functionality or order in them is not ipso facto to perceive design.  And that means that while Plantinga’s EAAN and defense of the rationality of “perceiving” functionality in nature strike a blow against the naturalist’s dogmatic rejection of teleology, they do not by themselves constitute reasons to embrace theism, specifically.  (For more on the distinction between function and design, see this post, this post, this article, and other earlier posts dealing with the difference between an Aristotelian-Thomistic conception of nature and “Intelligent Design” theory.)

That is not to say that a divine intellect is not ultimately responsible for the order of things.  But for the Aristotelian (and for Thomists, who build on an Aristotelian foundation) that is a claim which certainly does require an argument, and an argument which does not conflate function and design -- a conflation that too many Christian apologists have made at least since the time of William Paley, but which Aquinas’s Fifth Way, though often mistakenly assimilated to Paley’s argument, does not make.  (I’ve defended the Fifth Way in several places, including in Aquinas.)

The bottom line is that what the EAAN/”argument from reason” shows, in my view, is that we cannot coherently trust our cognitive faculties unless we suppose that they are directed toward the attainment of truth as their telos or end.  But this does not by itself entail any extrinsic, artifact-like teleology of the Paleyan sort.  One could opt instead for an immanent teleology of the Aristotelian sort (and then try to resist a Fifth Way-style argument to the effect that this sort of teleology too ultimately requires a divine cause).  This is, in effect, Nagel’s strategy.

Schliesser’s response to the EAAN

Let’s turn now to Schliesser’s remarks.  Do read his entire post (which, never fear, isn’t as verbose as mine often are) in case I have missed any important elements of the context in interpreting the passages I’ll be quoting from it.  Schliesser begins as follows:

[L]et's grant -- for the sake of argument -- the claim [made by Nagel, following Plantinga] that "Mechanisms of belief formation that have selective advantage in the everyday struggle for existence do not warrant our confidence in the construction of theoretical accounts of the world as a whole."  What follows from this?

My quick and dirty answer is: nothing. For the crucial parts of science really do not rely on such mechanisms of belief formation.  Much of scientific reason is or can be performed by machines; as I have argued before, ordinary cognition, perception, and locution does not really matter epistemically in the sciences. 

End quote.  If I understand him correctly, what Schliesser is saying here is that even if the EAAN casts doubt on the reliability of our cognitive faculties (given naturalism), that is irrelevant to the question of whether science is reliable, for what is crucial to science can be done by machines, and the EAAN does not cast doubt on their reliability.  He also writes:

[Nagel] thinks that somehow there are "norms of thought which, if we follow them, will tend to lead us toward the correct answers" to "factual and practical" questions… Now… if this claim is true, it is utterly unsubstantive--none of the non-trivial results in physics or mathematics are the consequence of following the norms of thought. (I realize that there is a conception of logic that treats it as providing us with the norms of thought, but even if one were to grant this conception, it does not follow one obtains thereby mathematical or scientific results worth having.)

Schliesser’s point here, I think, is that the substantive results of science are not arrived at mechanically via the simplistic application of a set of more or less commonsense rules of the sort one finds in a logic textbook.  That is to say, scientists don’t proceed by saying: “OK, now let’s take the traditional Laws of Thought, the valid syllogism forms, inference rules like modus ponens, etc. and start cranking out some implications from what we’ve observed.”   Scientific practice is far more complex than that, especially insofar as it involves the use of computers following algorithms very unlike the patterns of reasoning we rely on in ordinary life.  Hence (again, if I am reading Schliesser correctly) if the EAAN shows that ordinary patterns of reasoning are unreliable on the assumption of naturalism, that is irrelevant to the reliability of science insofar as it does not rely on these patterns anyway.  Continuing in this vein, Schliesser writes:

Okay, let's assume -- for the sake of argument -- that it matters that humans are engaged in scientific practices that generate the building blocks of theoretical accounts.  In most of these the ordinary or average products of Darwinian evolution as such are not allowed near the lab.  In fact, the ordinary or average products of primary, secondary, and university education are also not allowed inside the lab.  Insanely high "achievement" over, say, twenty years of human capital formation is required before one becomes a little cog in the collaborative, scientific enterprise. (It's likely, in fact, that such achievement may just be a consequence of being a relatively rare freak of nature--a "monster" in eighteenth century vocabulary.) Parts of this achievement undoubtedly takes advantage of our selected for cognitive capacities and, perhaps, enhances these in subtle ways.  A large part of this achievement is the actual unlearning -- or generating the capacity for temporary disabling -- lots of our average Darwinian programming.  Moreover, much of the unlearning takes place after one's formal education is complete and inside the lab, where one's cognitive capacities are transformed into engagement with particular model organisms and particular specialized techniques. One does not need to accept all of Foucault, to see that the disciplining of scientific agents is as much an enhancement of human nature as a battle with pre-existing nature. So, "in science" our "cognitive capacities" are not used "directly." (Moreover, in so far as any human perception takes place in the epistemic processes of science much effort and skill is directed at making it entirely trivial.)

End quote.  Here I take it that Schliesser’s point is that the cognitive tendencies hardwired into us by natural selection are unlearned in the process of scientific training and practice -- the whole point of science being, as it were, to replace the “manifest image” that our natural cognitive tendencies generate with the “scientific image” (again to allude to Sellars) -- so that it doesn’t matter if those cognitive tendencies are unreliable.  

So, as I read him, the reliability of the cognitive tendencies put into us by natural selection is in Schliesser’s view irrelevant to the practice of science -- and thus to the defensibility of naturalism, which regards the scientific description of the world as either exhaustive or at least the only description worth bothering with -- for two reasons.  First, the relatively few human beings actually involved in scientific practice in a serious way do not rely on the cognitive tendencies in question in the first place, but seek precisely to resist and replace them.  And second, the modes of cognition they are engaged in can be carried out by machines anyway, which don’t have any hardwired human cognitive tendencies to resist.  So the EAAN fails, because it falsely supposes that it is the reliability of those hardwired human cognitive tendencies that naturalism presupposes.

Schliesser on our cognitive faculties

What should we think of all this?  Let’s consider first the claim that scientific practice involves radically moving away from our hardwired cognitive tendencies and their deliverances.  There is of course much truth in this, and I think Schliesser is right to suggest that any criticism of naturalism that does not factor it in is superficial.  However, this by no means suffices to disarm the EAAN.  

To see why, consider a couple of analogies.  Suppose you criticized a portrait or landscape artist for his poor drawing ability and he responded: “Drawing?  I don’t need no stinking drawing!  I’m a painter!  Hell, I haven’t done a complete line drawing since I was in school, and I rarely if ever even sketch out my subject before getting out the paints.  No, it’s all in the brushwork.  Obviously you don’t understand what we artists do.”  Or consider a dancer who suggested that the physiology of ordinary walking was irrelevant to understanding what she does, since she has over the course of many years had to acquire habits of movement that go well beyond anything the ordinary person is capable of, and even to unlearn certain natural tendencies.  (Think e.g. of the unusual stress a ballet dancer has to put on the foot, or the need to overcome our natural reluctance to move in ways that would for most people result in a fall.)  

The problem with such claims, of course, is that the fact that painting and dancing involve going well beyond, and even to some extent unlearning, certain more basic habits does not entail that those habits are entirely irrelevant to the more advanced ones or that they can be entirely abandoned.  On the contrary, the more advanced habits necessarily presuppose that the more basic ones are preserved at least to some extent.  Even if drawing constitutes a very small part of producing a certain painting, and even if no sketch in pencil were made prior to getting out the paints, a painter without skill in drawing is going to produce a bad painting.  (The point has nothing to do with realism, by the way; a good painting done in the surrealist, impressionist, pointillist, or cubist style also presupposes the skills involved in drawing.)  Dancers have to have at least the muscles, bones, comfort with one’s body, ease of movement, etc. that are involved in ordinary walking even if they must also have much more than that.  The skills involved in ordinary drawing and walking constitute a framework for the more advanced skills, a framework that can be so covered over and modified that it may go virtually unnoticed in the course of painting or dancing, but which nevertheless cannot in principle be altogether abandoned.

Now by the same token, the ordinary patterns of reasoning as familiar to common sense as to the professional logician -- modus ponens, disjunctive reasoning, conjunctive reasoning, basic syllogistic reasoning, basic arithmetic, etc.  --  are, as Schliesser implies, a “trivial” part of science, but only in the sense that being able to walk over to the barre is a trivial part of being a ballet dancer, or the ability to draw a line or circle is a trivial part of being a painter.  While being able to walk over to the barre is obviously very far from sufficient for being a good ballet dancer, it is nonetheless absolutely necessary for being one; and while being able to draw a line or circle is obviously very far from sufficient for being a good painter, it too is still absolutely necessary for being one.  Similarly, while having the ability to reason in accordance with modus ponens, basic arithmetic, etc. is very far from sufficient for being able to do serious science, it is still an absolutely necessary condition for doing it.  The reason scientists don’t make a big deal of these “norms of thought” is the same reason ballet dancers don’t make a big deal out of their ability to walk and painters don’t go on about their skill in holding a pencil.  It is not that basic inference rules, walking, and drawing are irrelevant to science, dancing, and painting, respectively; it is rather that their relevance is so blindingly obvious that it goes without saying.

As Hilary Putnam pointed out in Representation and Reality, if you are going to call “folk psychology” into question -- which is what Schliesser is essentially doing (at least in the context of scientific practice, if not in other contexts) -- then you are going to have to call “folk logic” into question as well.  But we have nothing remotely close to an account of how this can coherently be done.  However far removed from ordinary cognition scientific modes of reasoning might be, they will presuppose fundamental logical notions like truth, consistency, validity, and the like, and our ability to recognize them when we see them.  And that means that they will presuppose the very abilities that even uneducated, untrained, pre-scientific “folk” possess.  (The fact that such “folk” sometimes make basic, systematic logical errors doesn’t change anything.  Pointing out to undergraduates that “This inference seems valid, but it is not” requires that they be able to see validity somewhere, and in particular in the argument that tells them that the inference in question is not really valid after all.)

Something similar is true of our perceptual faculties, which modern physics (with its account of the world as made up of colorless, odorless, soundless, tasteless particles etc. -- think of Eddington’s two tables)  might seem to have moved beyond altogether.  That this cannot be the case is obvious from the fact that physical theory, in the name of which perception is said to be misleading, is itself empirically based and thus grounded in perception.  Science can supplement or correct what perception tells us, but it cannot coherently deny the reliability of perception wholesale.  That it is at the very least difficult to see how it could coherently do so has, as I have noted in several places (e.g. here), been noticed by a number of thinkers from Democritus to Schrödinger.

We might also note that the degree to which the actual practice of science really does involve moving beyond ordinary modes of cognition is itself a matter of controversy (as the work of thinkers like Michael Polanyi illustrates); and that equally controversial is the question of whether the methods of physics really do reveal to us the whole nature of objective material reality in the first place.  Nor need one take a purely instrumentalist view of physics to doubt that they do.  To appeal to an analogy I’ve used in earlier posts, when aircraft engineers determine how many passengers can be carried on a certain plane, they might focus exclusively on their average weight and ignore not only the passengers’ sex, ethnicity, hair color, dinner service preferences, etc., but even the actual weight of any particular passenger.  This method is very effective, and is effective precisely because it captures real features of the world, but it hardly gives us an exhaustive description of airline passengers.  Similarly, the methods of physics, which focus on those aspects of a system that are susceptible of prediction and control and thus abstract away aspects which cannot be modeled mathematically, are extremely effective, and effective precisely because they capture real features of the world.  But it simply does not follow that the description of physical reality they afford us is exhaustive, any more than the engineer’s description is exhaustive.  And thus the fact that that description is radically different from the picture afforded by perception does not entail that it falsifies the latter.  To assume otherwise is (as I have noted before) to commit what Alfred North Whitehead called the “Fallacy of Misplaced Concreteness.”  

In any event, whether we think our ordinary, pre-scientific perceptual and rational faculties are unreliable to only a minor extent or to a significant extent, we cannot coherently regard them as fundamentally unreliable.  And that they are fundamentally reliable is all the EAAN requires.  Even science at its most rarefied presupposes that at some level our senses tell us the truth in a systematic way, and that basic arithmetic, modus ponens, conjunctive reasoning, etc. are valid modes of inference.  The EAAN claims that naturalism is inconsistent with this presupposition, and nothing Schliesser has said shows otherwise.

Schliesser on machines

But couldn’t Schliesser now appeal to the suggestion that the role of human beings in science is irrelevant anyway, since what they do could just as well be done by machines?  

No, one reason being that the machines in question must be designed and constructed by human beings -- they don’t grow on trees after all!  That means that, however it is they get the results they do, the machines will be reliable only if the cognitive faculties of those who designed and constructed them --namely, human beings -- are reliable.  (Nor would it help to suggest that machines that were constructed by other machines rather than by us wouldn’t face this problem; for the machine-constructing machines, or their ancestors anyway, would have been constructed by us, so that the problem is only pushed back a stage or several stages.)

But a deeper problem is that however they get here, machines in fact cannot carry out the cognitive tasks associated with scientific reason.  What they can do is merely serve as instruments to assist us as we carry out those tasks, as telescopes, microscopes, electrometers, scales, slide rules, pencil and paper, etc. do.  Schliesser is essentially taking for granted the computationalist theory of mind, on which cognitive processes in general are computational processes (in the sense of “computational” associated with modern computer science), so that they could be carried out by a machine as well as by us.  The “machine scientists” Schliesser is describing would accordingly be characterizable in terms of a kind of “android epistemology,” or perhaps in terms of what Paul Thagard calls “computational philosophy of science.”  But you don’t have to be an anti-naturalist to think that this whole idea is wrongheaded.  You just have to “get your Searle on,” as it were.

I am alluding here not to John Searle’s famous Chinese Room argument, but to the less well-known but more penetrating argument of his paper “Is the Brain a Digital Computer?” (restated in The Rediscovery of the Mind), according to which computation is not intrinsic to the physics of a system, so that it makes no sense to regard anything as carrying out a computation apart from the designers and/or users of a system who assign a computational interpretation to its processes.  (I discuss Searle’s argument in more detail in the post on Popper linked to above, since it is related to Popper’s argument.)  Saul Kripke has presented a similar argument, to the effect that there is nothing in the physical properties of any machine that can determine precisely which program it instantiates.  Any set of processes could, as far as their inherent physical properties alone are concerned, be interpreted either as the carrying out of one program or as a malfunction in the carrying out of some different program.  (I’ve discussed this argument too in greater detail in another earlier post.)
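
To make the point vivid, here is a minimal toy sketch of my own (the “device,” the encodings, and the code are invented for illustration; they are not drawn from Searle or Kripke).  The idea is that a single, fixed piece of physics counts as running one program under one observer-assigned encoding and a different program under another, so which computation is being performed is not settled by the physics alone:

```python
# Toy illustration (my own, not Searle's or Kripke's example): which computation
# a physical device carries out depends on an interpretation assigned from
# outside, not on the device's physics alone.

# The device's intrinsic behavior: a fixed mapping from pairs of input states
# to an output state.  "lo" and "hi" are bare physical states, not yet numbers.
device = {
    ("lo", "lo"): "lo",
    ("lo", "hi"): "lo",
    ("hi", "lo"): "lo",
    ("hi", "hi"): "hi",
}

def run(encoding, x, y):
    """Read the device as operating on bits, relative to a chosen encoding."""
    decode = {state: bit for bit, state in encoding.items()}
    return decode[device[(encoding[x], encoding[y])]]

# Interpretation A (hi = 1, lo = 0): the device computes AND.
A = {0: "lo", 1: "hi"}
# Interpretation B (hi = 0, lo = 1): the very same physics computes OR.
B = {0: "hi", 1: "lo"}

for x in (0, 1):
    for y in (0, 1):
        assert run(A, x, y) == (x and y)   # AND table under encoding A
        assert run(B, x, y) == (x or y)    # OR table under encoding B

print("Same device, two encodings: AND under A, OR under B.")
```

Nothing in the table of state transitions settles whether the device is “really” an AND gate or an OR gate; only the encoding imposed by a designer or user does.  That, in miniature, is the sense in which computation is observer-relative rather than intrinsic to the physics.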

Now I think that it is in fact too strong to conclude on the basis of Searle’s, Popper’s, or Kripke’s arguments that there is nothing like computation inherent in physical processes, full stop.  The correct thing to say is rather that there is nothing like computation inherent in physical processes given an essentially materialist, anti-teleological conception of the physical.  However, if we allow that there is teleology of a broadly Aristotelian sort immanent to physical systems, then (as I’ve noted in earlier posts like this one and this one) we can make sense of the idea that certain physical systems are inherently directed toward the realization of this computational process rather than that one.  And if Nagel’s brand of naturalism is correct (though of course I don’t myself think it is), then such teleology can be made sense of without reference to a divine cause.  But what we would be left with in such a case is precisely Nagel’s form of naturalism -- a form that acknowledges the force of the EAAN and affirms teleology so as to get around problems of the sort the argument raises -- and this can hardly help to salvage Schliesser’s objection to the EAAN.

And of course, even if some computational processes are inherent to nature, that wouldn’t include those exhibited by the machines we use in our scientific endeavors, which are man-made and have only a derived teleology and thus a derivative status as “computers.”  We would be the true computers, with the machines serving as mere enhancements to our computational activity, just as binoculars enhance our vision but do not themselves see anything.  The reliability of the machines’ processes would, again, presuppose the reliability of our cognitive processes; and if the reliability of the latter is grounded in immanent teleology, then the force of the EAAN has been conceded and Nagel’s position will have been embraced rather than rebutted.

The bottom line is that we cannot altogether get outside our cognitive skins, even if we can modify, supplement, or even eliminate parts of those skins.  Schliesser’s position seems to suppose otherwise insofar as it implies that we could coherently practice science and accept its results while simultaneously denying the reliability of our cognitive faculties.  In fact, however we spell out the details of their relationship, Sellars’ “scientific image” is ultimately a part of the “manifest image” itself, so that the former cannot coherently be appealed to as a way of undermining the latter.  To quote Putnam quoting William James, “the trail of the human serpent is over all” -- or if not over all, then at least all over science, which is no less essentially human a practice than dancing, painting, machine-building, or philosophizing are.