Wednesday, September 7, 2011

The Product of Conception

A reader with the Davidic name of Jesse has asked a series of questions on the other site about our previous post. I'm bringing them up in a new post because his questions raise interesting points, and it's easier to handle them here than in the comm box.

Perceiving percepts.
Jesse wrote:
A "concept" itself seems not completely sharply defined, for example does it require verbal ability or could a highly autistic person who never developed the ability to understand language be said to have "concepts"?

Rover, an ymago.  He is smiling, trust me.
A concept is most easily grasped by first considering a percept.  We can sense Rover: see him, hear him, smell him (especially when he has doggie breath) and feel him (when we rub his fur).  In some cultures, we could even taste him; but we won't go there.  Our mind, through the common sense, constructs the sight, sound, smell, and feel into a single common ymago of ROVER.

Wildcat, an ymago.  Stipulated: wildcats do not normally wear caps sideways.  They wear them backward, like everyone else.
This ymago is a percept.  We can remember this percept.  We can even imagine it.  (This involves a manipulated memory, in which we might picture Rover as smiling (left), perhaps after defeating the Phillipsburg wildcat (right).)

The capacity to form percepts is called (wait for it) perception. It is common to at least all higher animals and perhaps, to some degree, to all animals.  They may differ in their range of senses.  Rats are virtually blind and perceive a world primarily of smells.  Rattlesnakes perceive heat; dolphins, sonar echoes.

Rinty and Rusty, another ymago

Perceptions are always of concrete singulars: Rover, Fido, Spot, Lassie, Rin Tin Tin.  (Yo, Rinty!)  This dog, that dog, the other dog over there.  Perceptions are the precursors to the sensitive appetites, which are the desires for things perceived.  (In the negative sense, these might be revulsions.)  For example, hunger or fear.  These appetites, also called emotions, are the precursors to motion, which is a movement toward or away from the perceived object; e.g., "fight or flight." 

The entire sensitive soul can be shown schematically thus, where the external stimuli (S) cascade upon the external senses and are compiled, stored, and manipulated by the internal senses.  The common sense, memory, and imagination are often called collectively "the imagination."

From the inner senses, the cascade proceeds to the emotional reactions, then to the motive powers, and so to the response (R). There is also a short-cut, the autonomic nervous system, that runs directly to the motive power.
(This all sits atop, and is integrated with, the vegetative soul, which in turn sits atop the form of inanimate matter.  Animals possess the vegetative powers of reproduction, development, homeostasis, and metabolic control.  And they are made of otherwise inanimate matter that possesses the powers of electromagnetism, gravity, and the strong and weak nuclear forces.  These need not detain us here.)
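For readers who think better in code than in diagrams, here is a purely illustrative toy of the cascade just described (S, external senses, inner senses, sensitive appetite, motive power, R).  The names and the little story are invented for illustration; nothing hangs on them.

```python
# A purely illustrative toy of the cascade described above:
# S -> external senses -> inner senses -> sensitive appetite -> motive power -> R.

def external_senses(stimulus):
    # each external sense reports its own proper object
    return {"sight": stimulus.get("color"),
            "hearing": stimulus.get("sound"),
            "smell": stimulus.get("odor"),
            "touch": stimulus.get("texture")}

def inner_senses(sense_data, memory):
    # "the imagination" broadly taken: the common sense binds the data into one
    # percept (ymago), memory stores it, and imagination can recombine it later
    percept = tuple(sorted((k, v) for k, v in sense_data.items() if v))
    memory.append(percept)
    return percept

def sensitive_appetite(percept):
    # desire for, or revulsion from, the perceived concrete singular
    return "approach" if ("smell", "doggie breath") in percept else "flee"

def motive_power(appetite):
    # movement toward or away from the object: the response R
    return {"approach": "wag and run toward it", "flee": "run away"}[appetite]

memory = []
rover = {"color": "brown", "sound": "bark", "odor": "doggie breath", "texture": "fur"}
print(motive_power(sensitive_appetite(inner_senses(external_senses(rover), memory))))
# -> wag and run toward it
```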

Now, here is the thing.

The powers of the imagination are considerable.  For example, Jesse mentions:
[A] bird in [a] video which figures out that it can get a bucket out of a tube by bending a piece of wire to form a hook--presumably it had something like a visual flash of insight that a wire could be reshaped in this way before it purposefully bent it, is there some absolutely clear definition of a "concept" that allows us to say whether or not this means it had a "concept" of a hook?
There is nothing in the bird's behavior that cannot be explained by perception of concrete singulars.  This piece of wire, that bucket, and so forth.  It then manipulated concrete singulars to accomplish a concrete task.  As long as we associate "intelligence" with technology and problem-solving, we will always mistake this kind of behavior for intellective behavior.  But notice that there is no necessary evidence of abstraction.  Ockham's razor comes into play: that which can be explained with fewer entities should not be explained with more.

Conceiving concepts.

Which one of these is dog?

But which of Rover, Fido, Spot, Lassie, or the indomitable Rinty is dog?  Dog is not a concrete particular.  It is an abstract universal.  But if dog is not a concrete particular, we cannot sense dog, and we cannot perceive dog.  If dog were material, we could sense it, perhaps with instruments.  So, it must be immaterial.  Where did dog come from?

It is not merely a name for a bunch of particulars (Rover, Fido, et al.).  That is the absurdity of nominalism.  (Boo.)  Why a mere name for Rover, Fido, Spot, Lassie, and Rinty, but not for Rover, Fido, Spot, Lassie, and Puss-in-Boots?  The name only makes sense if there is something in the particulars in virtue of which they are all dogs.

The Old Stagirite
Nor is dog a mere mental thing.  That is the absurdity of conceptualism.  (Double Boo.)  Were it so, there would be no reason why each of us should have the same concept of dog (or green or whatever).

So dog must exist somehow as a real thing.  This is realism, either Platonic or Aristotelian.  Let's go with the Old Stagirite.

Aristotle discerned a rational soul that is incorporated into the animal soul.  In addition to the animal powers aforesaid, it includes two additional, rational powers: Intellect and Will.   And here at long last (yay!) we get to the answer to Jesse's plaintive queries.

The intellect reflects on the concrete particulars of perception and abstracts from them the abstract universals of conception.  Ed Feser writes:
What intellect involves, for the A-T tradition, is the ability to grasp abstract concepts (such as the concept man or the concept being mortal), to put them together into complete thoughts (such as the thought that all men are mortal), and to reason from one thought to another in accordance with the laws of logic (as when we infer from All men are mortal and Socrates is a man to Socrates is mortal).  All of this differs in kind, and not just in degree, from the operations of sensation and imagination, which we share with non-human animals.  Concepts have a universality and determinateness that no sensation or mental image can have even in principle. 

An ymago of a triangle.  What?  Were you expecting a gold equilateral triangle?
He gives the example of triangularity.  
The concept triangularity, for example, has a universality that even the most general mental image of a triangle cannot have, and an unambiguous or determinate content that the auditory or visual image of the English word “triangle” (whose meaning is entirely conventional) cannot have.  Indeed, concepts have a universality and determinacy that nothing material can have.  So while the A-T tradition holds, in common with materialism and against some forms of dualism, that sensation and imagination have a material basis, it also holds that intellectual activity -- grasping concepts, putting them together into judgments, and reasoning from one judgment to another -- is necessarily immaterial.   

So why the confusion?

Confusion means to 'fuse together,' and many of the problems people have in distinguishing conception from imagination stem from two facts:
  • Although the concepts we grasp are immaterial, we must abstract them from mental images derived ultimately from sensation; and imagination and sensation are material. 
  • Even when we grasp an abstract concept, we always do so in conjunction with mental imagery.
Ymago of an angry red equilateral triangle.  What?  Were you expecting a white scalene triangle?
A concept is not a mental image.  A concept is not a mental image.  A concept is not a mental image.  I tell you three times. 
The concept triangularity is not identical with either the word “triangle” (since people who have never heard this English word still have the concept of triangularity) or with any particular mental image of a triangle (since any such image will have features -- a certain color, say, or being scalene -- that do not apply to all triangles in the way that the concept does).  Still, we cannot entertain the concept of triangularity without at the same time forming a mental image of some sort or other, whether a visual image of some particular triangle, a visual or auditory image of the word “triangle” or of the corresponding word in some other language, or what have you. 
Which one is "dog." No, wait, I mean "triangle"

Now gorillas and perhaps even birds can form mental images; but they do not form concepts.  As Walker Percy wrote in The Message in the Bottle,
The word ball is a sign to my dog and a symbol to you. If I say ball to my dog, he will respond like a good Pavlovian organism and look under the sofa and fetch it. But if I say ball to you, you will simply look at me and, if you are patient, finally say, "What about it?" The dog responds to the word by looking for the thing: you conceive the ball through the word ball.
Thus, it is not relevant to conception that a gorilla might be trained to recognize various symbols or that a trainer might enthusiastically interpret the gorilla's use of the symbols.  (Koko's AOL chat room conversation was an embarrassment.  The handler was clearly reading meanings into Koko's efforts to obtain a reward.) 

Asker of serious and pertinent Questions
Quaestiones de Jesse super conceptionem (Jesse's questions concerning conception)

does it require verbal ability or could a highly autistic person who never developed the ability to understand language be said to have "concepts"?
Verbal ability is a sign; but a man with his tongue torn out is still a rational animal.  We'd go with the natural capacity.  I know of no evidence that autistics cannot form concepts; quite the contrary.  Consider Helen Keller, who did not develop language until fairly late.  The lack of a language may inhibit conceptualization just as the lack of a leg may inhibit dancing.  But we still say man is a two-legged animal and do not deprive Peg-Leg Pete of his humanity. 

A sapient?  Or a wily manipulator of susceptible adults?
Is a baby sapient?
A baby is not a thing, but a part of a thing; viz., a temporal part, a stage in the development of a thing (an adult human).  In the common course of nature, the baby will move toward its natural end as an adult human.  So, yes: the baby is sapient by nature, even if the power is not yet exercised.
A similar question would be "Is a sleeping man sapient?"  The answer is also yes. 
that there must have been a distinct first sapient hominid, which certainly isn't a mainstream scientific conclusion

No doubt.  But it isn't actually a scientific question in the first place.  Natural science deals only with the metrical properties of physical bodies.  But the intellect - and so conceptualizing - is immaterial, and thus not grounded in any material body.  When science talks about hominids, she is talking about biological hominids, not metaphysical hominids.  Undoubtedly, the first human to conceptualize was biologically indistinguishable from his companions. 
apes have been taught to use signs in a way that isn't just responding to human cues, and shows that they have at least associated specific signs with specific objects, and they sometimes seem to combine them in meaningful ways even if they don't possess true grammar.
"Seem" is the correct verb.  To associate specific signs with specific objects is exactly what we would expect from perception/imagination, as we described above.  Heck, these animals would not be trainable if they could not. 
We are oft confused because humans are also animals and so display imaginative as well as intellective behaviors.  Citing Walker Percy again: when the fisherman in one canoe cries out "Mackerel here!" and the other canoes paddle over toward that spot, we are witnessing the same sort of stimulus-response reaction as when a baboon up in the tree makes the call that indicates leopard here.  We can't say the first baboon thought, "I must warn the others!"  He only did what a good Darwinian baboon would do.  Those baboons who failed to make the sound or to respond to the sound in the proper way have long since ceased to contribute to the baboonish gene pool.  We could say that first baboon has "communicated" to the second baboon in the same way that a disease is communicated.
Baboon leaving gene pool (Edward Selfe Photography)
What baboons do not do is sit around the bar griping to one another about the Leopard Menace or whether there are good leopards and bad leopards. 

36 comments:

  1. "The common sense, memory, and imagination is often called collectively "the imagination." "

    This putting of 'common sense' here as a part of imagination is, at least, awkward. Isn't it an abuse of language to say that a dog has common sense?
    There must be a better word to express whatever you are trying to say here.

    " Our mind, through the common sense, constructs the sight, sound, smell, and feel into a single common ymago of ROVER."

    But if you want to bring in animals, then it is confusing to say 'our minds'.

  2. Isn't it an abuse of language to say that a dog has common sense?

    No, the common sense is that inner sense which integrates sound, color, feel, shape, etc. into a common percept, or ymago. All higher animals have this sense, and it is likely that most animals in general do, too. This spot of red, and this round shape, and this elastic texture, etc. are this one [common] rubber ball.

    Without it, Rover could not remember the ball, as such, but only a spot of red or a round shape separately and independently of each other.

  3. I find that a lot of the confusion in communicating scholastic principles to people wholly unacquainted with them is in such double meanings. "Common sense" is so loaded with a cultural significance that it becomes easy to breed misunderstandings.

  4. Thank you for a clear concise article. Very helpful. Now when atheists ask me why I am a theist I can answer, "I believe in God because there is a dog".

  5. If your notion of an entity having a "concept" has nothing to do with formal symbolic definitions and is just a matter of being able to recognize a class of sensory stimuli (including novel examples never seen before) as opposed to remembering a specific instance that has already been seen, then it seems to me that not only animals but also even computerized neural networks can be said to have "concepts". I could point to a simple example like the fact that a mouse will respond in about the same way to any cat, even one that's (say) a different color than any it has seen before, but a more clear animal example would be something like these experiments with pigeons where they are trained to respond differently to slides depending on whether they contained an image of a particular type (say, an image of a tree vs. an image with no trees). The number of classes they could learn to recognize was quite large:

    These animals seem readily capable of categorizing images from natural scenes; among the reported stimulus classes are aerial photographs (Skinner, 1960; Lubow, 1974), images from people (Herrnstein & Loveland, 1964), pigeons (Poole & Lander, 1971; Watanabe, 1991), trees and bodies of water (Herrnstein, Loveland & Cable, 1976), oak leaves (Cerella, 1979), chairs, cars, humans, and flowers (Bhatt, Wasserman, Reynolds & Knauss, 1988; click here to see examples of their stimuli), birds and other animals (Roberts & Mazmanian, 1988), and pictures of a geographic location (Wilkie, Willson & Kardal, 1989). But the rather strange flexibility of pigeons has been demonstrated by using defective pharmaceutical capsules or diodes (Cumming, 1966; Verhave, 1966), letters of the alphabet (Morgan et al., 1976; Lea & Ryan, 1983, 1990), line drawings of cartoon characters (Cerella, 1980), squiggles (Vaughan & Greene, 1984), dot patterns (Watanabe, 1988), schematic faces (Huber & Lenz, 1993, 1996) and real faces (Jitsumori & Yoshihara, 1997; Troje, Huber, Loidolt, Aust & Fieder, 1999), color slides of paintings by Monet and Picasso (Watanabe, Sakamoto & Wakita, 1995) and excerpts from famous pieces of classical music (Porter & Neuringer, 1984).

    Similarly, artificial neural networks can be trained to respond differently depending on whether the (novel) input image contains an object of a given class, like this paper on using neural networks to determine which members of a large collection of images of lung biopsies show cancer cells.
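    For concreteness, a minimal sketch of that operational sense of "concept" (classify novel instances of a category) is below. It is a toy on synthetic data, not the pigeon experiments or the lung-biopsy model cited above; every name and number in it is invented for illustration.

```python
# Toy of the "concept as ability to classify novel instances" idea under
# discussion: train a small neural network on synthetic "images" of two made-up
# classes, then test it on examples it has never seen. This is only a sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_examples(n, cls):
    # class 0: bright center patch; class 1: bright border -- stand-ins for
    # whatever regularity distinguishes "tree" slides from "no tree" slides
    imgs = rng.normal(0.2, 0.1, size=(n, 8, 8))
    if cls == 0:
        imgs[:, 2:6, 2:6] += 0.6
    else:
        imgs[:, [0, -1], :] += 0.6
        imgs[:, :, [0, -1]] += 0.6
    return imgs.reshape(n, -1)

X_train = np.vstack([make_examples(200, 0), make_examples(200, 1)])
y_train = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# novel examples, never seen during training
X_new = np.vstack([make_examples(50, 0), make_examples(50, 1)])
y_new = np.array([0] * 50 + [1] * 50)
print("accuracy on novel examples:", clf.score(X_new, y_new))
```

    If the accuracy on the never-seen examples is high, the network has generalized the category in the operational sense at issue; whether that amounts to a "concept" is exactly the question under dispute.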

  6. it seems to me that not only animals but also even computerized neural networks can be said to have "concepts".

    Then you should have little trouble discussing the matter with them.

    That a mouse perceives a cat and esteems it an enemy is no great mystery. It does not require a concept of cattiness; it requires only a cat -- plus memory, and imagination.

  7. "Were it so, there is no reason why each of us should have the same concept of dog (or green or whatever)."

    Do we have the same concept of dog? If we do, then we'll all agree without consulting each other on what examples of dog are. I'd be willing to bet that you'd easily be able to find people who disagreed on whether a statue of a dog was a dog, or whether the union of a wolf and a dog produced dogs.

    In fact, anyone who has to take a team from concept to actual product will agree, I'd expect, that nailing down exactly what a concept consists of can be a very time-consuming process, and the result might be a sprawling collection of documents that no human can keep in their head at once. When you start that process, even though every member of your team agrees that you're writing a webapp that allows people to create, edit, and take polls (or whatever), each person's version of that concept is quite different.

    "So dog must exist somehow as a real thing. This is realism, either Platonic or Aristotelian. "

    If we find (as I expect we will) the machinery that instantiates the concept in a human brain, then we need not demand that the concept in an abstract sense exists in some non-physical way, since we'll be able to point at actual, physical examples of concepts.

  8. Then you should have little trouble discussing the matter with them.

    I guess this comment is purely a joke, since you specified above that "concept" does not require any use of language.

    That a mouse perceives a cat and esteems it an enemy is no great mystery. It does not require a concept of cattiness; it requires only a cat -- plus memory, and imagination.

    This is just an assertion; you haven't defined "concept" in the above post clearly enough to show why, under your definition, the mouse cannot be said to have a "concept" of a cat. In your above post, you mainly seem to be saying that a "concept" involves abstracting from a bunch of particulars, and I took that to basically mean the operational definition was the ability to classify new objects in terms of whether they are examples of the same "concept" or not, which the mouse and the pigeons and the neural networks can do. If it goes beyond this, then how, precisely? I am not sure there is anything more to my concept of a dog than merely the ability to recognize whether a given animal is a "dog" or not, plus maybe the ability to not just understand "dog" in purely gestalt terms but also to break it down into a bunch of traits that I identify as important to dogginess (though most are not essential since any specific trait like a tail might be absent in an injured dog). I would also say that outside of pure mathematics (and perhaps particle physics), pretty much all my concepts are "fuzzy" in the sense that for any nonmathematical verbal concept like "dog", one could always imagine in-between examples where I would have no definite opinion as to whether the example fell into the class or out of it (for any animal classes, this obviously must be true for anyone who accepts Darwinian evolution, not to mention randallsquared's example of a sculpture of an animal).

    Also, given your comment "So dog must exist somehow as a real thing. This is realism, either Platonic or Aristotelian", is it essential to your notion of "concept" that our conceptual understanding must be tied to the ability to access some immaterial realm of "forms"? If so it seems you are trying to make it so that, by definition, this metaphysical hypothesis must hold in order for anyone to have a "concept" of anything, rather than starting from a less metaphysical, more operational definition of "concept" and using that to make an argument for your metaphysical view. If we knew for sure my behavior could be explained without the need for causal interaction between my physical brain and some immaterial realm (say for example we knew this because my neurons had been replaced one by one with futuristic computer chips that each have pretty much the same input/output relation as the neuron they replace, but with each chip known to function in a completely deterministic way wholly explainable in terms of their physical properties), then would you say that by definition I must not have any ability to recognize "concepts", even if my outward ability to discuss them rationally (passing the Turing test and so forth) was the same as any normal person?

  9. "Thank you for a clear concise article. Very helpful. Now when atheists ask me why I am a theist I can answer, "I believe in God because there is a dog"."

    Myself, I prefer "I know that God is because I am a man."

  10. Incidentally, a comment on this:

    I know of no evidence that autistics cannot form concepts; quite the contrary. Consider Helen Keller, who did not develop language until fairly late.

    I specified I was talking about autistics who never learned language; if you think they still "form concepts", see my above questions on how you actually define "concept", whether the definition is operational (and if so how it goes beyond just the ability to classify novel examples) or metaphysical. But as for Helen Keller, this article points out that her story is more complicated than it's often presented:

    However the truth of Helen Keller's story is quite different. Helen did not lose her sight and hearing until she was two, so for the first couple of years of her life, she lived quite normally. She would even have begun to talk. So when the shutters did come down, Helen carried with her memories of the world and, more importantly, she had the stamp of language on the speech centres of her brain.

    ...

    While Helen could not speak, her two years of normal childhood did give her a strong feeling for language. She invented a private language of gestures in which shakes of the head meant yes or no, pushes and pulls meant to go and to come. Like d'Estrella, Helen developed 60 or more quite sophisticated signs. To refer to her father, she would mime putting on a pair of glasses; her mother was indicated by tying up her hair; and ice cream became a shiver.

    Whether or not Helen could "think" in this kinesthetic imagery - imagining the muscle movements of her private signs in the same way that d'Estrella visualised his own private signs - there is no doubt that language was something ingrained in her speech centres. When Annie Sullivan took charge of a wild six-year-old and started spelling words into her hand, the foundations of speech already were there to be built on.


    The last section of the article tells the much sadder story of children who are actually born both blind and deaf; no matter how doctors try, they have never succeeded in teaching any detailed language to such people, at best they can learn to respond to a few "finger-spelt" commands and can be taught some simple physical tasks like getting dressed and using a spoon to eat.

  11. If we knew for sure my behavior could be explained without the need for causal interaction between my physical brain and some immaterial realm (say for example we knew this because my neurons had been replaced one by one with futuristic computer chips that each have pretty much the same input/output relation as the neuron they replace, but with each chip known to function in a completely deterministic way wholly explainable in terms of their physical properties), then would you say that by definition I must not have any ability to recognize "concepts", even if my outward ability to discuss them rationally (passing the Turing test and so forth) was the same as any normal person?

    You wouldn't 'know for sure' even then, if my understanding of this topic is correct. It sounds as if you're trying to find some way to give an empirical test to what is an essentially metaphysical question. It's like trying to refute panpsychism by slicing open a rock and 'looking for the consciousness', and if it's not there, then.. refutation! (Or trying to disprove materialism by slicing open a mouse to 'look for the consciousness' as well, for that matter.)

  12. You wouldn't 'know for sure' even then, if my understanding of this topic is correct.

    Why do you say so? Keep in mind, I said "my behavior could be explained without the need for causal interaction between my physical brain and some immaterial realm". So I wasn't talking about purely subjective aspects of consciousness, like what in philosophy of mind is called qualia, but only about externally measurable behaviors like the words that come out of my mouth when I talk about my mental state. If my nervous system consisted of computer chips operating in a deterministic, mathematically describable way, then each chip could keep a record of all its inputs and outputs over time. One could then verify that each chip's input from other chips determined its outputs according to the correct formula, and that the final output from the chips to the muscles of my mouth and throat and lungs was consistent with what I had been heard to say.

    One could even disconnect this virtual nervous system from a real body, and attach it to a simulated body in a computer simulation, with simulated muscles and simulated sensory organs, operating in a self-contained simulated environment with no inputs from the "real world". That way not only the nervous system itself but everything causally influencing it would be functioning in a deterministic, computational way. One could even rerun the simulation multiple times from the same initial state and verify that this simulated me (really a mind upload) exhibited precisely the same behaviors on each run. Wouldn't this show that these behaviors all had a purely physical explanation in terms of interactions between the elements of the computer, with no room for causal influences on behavior from an immaterial realm?

  13. Why do you say so? Keep in mind, I said "my behavior could be explained without the need for causal interaction between my physical brain and some immaterial realm".

    For one, because models aren't equal to explanations. Making a model which allows you to say 'When this exact physical state of affairs obtains, then this precise sequence of events follows' still leaves you with the task of analyzing just what both the physical state of affairs and the events following are doing in metaphysical terms. "Is this exact physical state intrinsically 'about' some idea X or Y"? Arguments like that are left over even in an idealized case.

    I'm not sure what TOF is saying amounts to 'causal influences on behavior from an immaterial realm' either - that sounds like substance dualism. But I'm not sure even a substance dualist who argues that some immaterial substance 'exerts influence' on the physical would be locked in the position of denying an entirely deterministic unfolding in a case like you describe - one reply could be that those 'causal influences' work alongside the physical, so when you know something about the physical substance you also know something about the mental substance.

    On the flipside, even with that in mind I wonder: What do you do if there's indeterminism at a fundamental level in the universe - or even if that remains an open question?

  14. For one, because models aren't equal to explanations.

    I don't disagree, but I wasn't really talking about metaphysical questions like whether a given physical state could be seen as an "instance" of a universal or abstract form (I would tend to say it can, at least in the case of mathematical forms...I lean towards some type of mathematical platonism). I was just talking about a causal account of any of the types of physical actions that we take as an indication that some being has a "concept" of something.

    I'm not sure what TOF is saying amounts to 'causal influences on behavior from an immaterial realm' either - that sounds like substance dualism.

    I may have misunderstood. It did sound like TOF was advocating a type of substance dualism with comments like If dog were material, we could sense it, perhaps with instruments. So, it must be immaterial, as well as this comment which seemed to be endorsing Aristotle's concept of different types of "souls" as an integral part of TOF's answer to my questions:

    Aristotle discerned a rational soul that is incorporated into the animal soul. In addition to the animal powers aforesaid, it includes two additional, rational powers: Intellect and Will. And here at long last (yay!) we get to the answer to Jesse's plaintive queries.

    I don't think Aristotle would have said the rational soul has no independent causal power (especially since he associates it with will) or that it is simply a type of pattern of matter. Aristotle's metaphysics was very teleological, and seems incompatible with the reductionist theories of modern science, which assume that the most accurate possible prediction of any system's behavior (a prediction which might be probabilistic, if the fundamental laws contain any randomness) would involve a complete knowledge of the state of all the most basic entities (fundamental particles, quantum fields) that make it up along with the fundamental laws governing interactions of these basic entities. There are no "top-down" causal forces on the basic entities, like a force guiding a seed to become a tree, that would allow you to make more accurate predictions than the hypothetical being with the complete knowledge of the initial state and the knowledge of the most fundamental physical laws (it may still be useful and interesting to figure out higher-level "laws" governing macroscopic systems, but it's understood they could always in principle be deduced from the lower ones). This seems to conflict with Aristotle, who specifically believed that "final causes" distinct from "material and efficient causes" were needed to explain what we see in nature, see here.

    Anyway, if TOF didn't mean to suggest an anti-reductionist stance (and again I am talking about "reductionism" only in the sense of predictions about physical events, not about more metaphysical kinds of reductions like eliminative materialism in philosophy of mind), hopefully he will clarify.

  15. I don't think Aristotle would have said the rational soul has no independent causal power (especially since he associates it with will) or that it is simply a type of pattern of matter.

    I'm sure he wouldn't, but thinking of 'causal power' in terms of efficient causality seems to be part of the problem here. I don't think a cause as in "something that pushes matter in distinct directions" is the way Aristotle conceived all four of the four types of causes.

    Aristotle's metaphysics was very teleological, and seems incompatible with the reductionist theories of modern science, which assume that the most accurate possible prediction of any system's behavior (a prediction which might be probabilistic, if the fundamental laws contain any randomness) would involve a complete knowledge of the state of all the most basic entities (fundamental particles, quantum fields) that make it up along with the fundamental laws governing interactions of these basic entities.

    Well, again, if there's a 'fundamental probabilism' could your example even get off the ground? And could we know it's fundamental, or a limit of our knowledge? We can model something as random, we can compare our observations to the model, but that's about it.

    Likewise, I'm not sure the "reductionist theories of modern science" remain scientific theories if you push them too far. Aristotle's four causes are metaphysical posits, a way to interpret data rather than something in competition with the domain of physical science. A little like how we can be idealists or materialists and still 'do science' without sacrificing one or the other.

    That's my understanding, either way.

  16. I'm sure he wouldn't, but thinking of 'causal power' in terms of efficient causality seems to be part of the problem here. I don't think a cause as in "something that pushes matter in distinct directions" is the way Aristotle conceived all four of the four types of causes.

    That may not have been a full account of how he "conceived" of them philosophically, but at the same time I think he would have said that in fact a final cause does push matter in distinct directions in a way that someone who knew only about efficient causes (like one bit of matter colliding with another) would never be able to explain. I don't think he would have agreed with an atomist who believed every motion of an atom would have an efficient cause in a physical interaction with some other atom, and that "final causes" were just a kind of different high-level description of collective behaviors of certain kinds of large collections of atoms like growing organisms. Look for example at the discussion on this page of a paper discussing Aristotle's criticism of the atomists.

    Well, again, if there's a 'fundamental probabilism' could your example even get off the ground? And could we know it's fundamental, or a limit of our knowledge? We can model something as random, we can compare our observations to the model, but that's about it.

    When I talked about how I define physical reduction, it was meant to be more of an ontological definition than an epistemological one, so it doesn't actually matter whether we can know that we are making the best prediction possible. Basically I'm saying that if physical reductionism is true in an ontological sense, then if you imagine a Laplacian demon with complete knowledge of the initial state of all the fundamental constituents of a system (particles, quantum fields) and the fundamental laws which govern their interactions, this demon's predictions couldn't be improved upon by a different demon which recognized higher-level patterns (say, that the particles made up an acorn) and knew some higher-level rules governing these patterns (like that an acorn planted in the soil and given proper nourishment will tend to grow into an oak tree). Of course, I assume that neither demon's knowledge can include direct foreknowledge of the future, only aspects of the initial physical state and laws governing the evolution of physical entities.

    But on the subject of what we could know, keep in mind that quantum randomness can be made to have an arbitrarily low probability of altering the calculations of a computer from what they "should" be according to the algorithm it's supposed to be running--that's why even in a quantum universe it makes sense to talk about classical computers. So in my thought experiment where we have a simulated brain in a simulated body in a self-contained simulated environment, and we assume the simulation was running on such a classical computer where the probability of quantum randomness altering the calculations was made astronomically small, then if the simulated being could discuss "concepts" cogently, that would be good evidence that mysterious causes slipping in through apparent quantum randomness are not necessary to produce the behaviors we ordinarily take to be evidence of conceptual understanding.
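    The "arbitrarily low probability" point is just a union bound. A back-of-the-envelope sketch, with every number invented purely for illustration:

```python
# Union bound for the point above: if each elementary operation has some tiny
# probability p of being perturbed by quantum noise, then over N operations the
# chance that *any* of them is perturbed is at most N * p.
# All numbers below are made up for illustration.
p_per_op = 1e-25           # assumed per-operation upset probability
ops_per_second = 1e9       # assumed speed of the machine
seconds = 1e6              # roughly 11.5 days of continuous computation
N = ops_per_second * seconds
print("upper bound on P(any deviation):", N * p_per_op)  # 1e-10
```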

  17. a simulated brain in a simulated body in a self-contained simulated environment

    Airline pilots can train on simulators to fly a 747 from New York to Paris. Operating these simulators is just like flying to Paris, with one notable exception. When you step out of the simulator, you will not actually be in Paris.

    That a behavior can be simulated by a computational algorithm does not mean that the mind has actually followed that algorithm. And that means that running the suite of such algorithms will not replicate the mind. Compare your own experience conversing in English to that of someone in Searle's Chinese Room simulating a conversation in Mandarin. There is a very distinct difference in the mental processes involved.

    http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

  18. That a behavior can be simulated by a computational algorithm does not mean that the mind has actually followed that algorithm.

    According to physical reductionism (which again concerns only observable behaviors of systems--see my definition involving the "best possible prediction" above--and is not the same as a philosophical view like eliminative materialism), it should be possible to simulate any physical system to arbitrary accuracy by simulating the initial state of the fundamental entities that make it up and the physical laws that govern their subsequent interactions; look for example at this article discussing a simulation of the behavior of large collections of water molecules which is based on nothing but the basic laws of quantum physics governing interactions between charged particles. Do you disagree that according to physical reductionism, the "algorithm" that the brain or any other system is following is ultimately just the algorithm of the fundamental laws of physics operating on the particles that make it up?
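    As a much humbler illustration of the "initial state plus fundamental laws, integrated forward" shape of such a simulation -- nothing like the quantum-mechanical water simulation in the linked article -- here is a toy two-body gravity integrator; the constants and initial conditions are arbitrary:

```python
# A toy of "specify the initial state of the parts and the law governing them,
# then integrate forward": two bodies under an inverse-square attraction.
import numpy as np

G = 1.0                                    # toy gravitational constant
m = np.array([1.0, 1.0])                   # masses
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])  # initial positions
vel = np.array([[0.0, -0.5], [0.0, 0.5]])  # initial velocities
dt = 1e-3

def accelerations(pos):
    # the "fundamental law": each body accelerates toward the other (inverse square)
    r = pos[1] - pos[0]
    d = np.linalg.norm(r)
    a0 = G * m[1] * r / d**3
    a1 = -G * m[0] * r / d**3
    return np.array([a0, a1])

for _ in range(10_000):                    # leapfrog (velocity Verlet) integration
    a = accelerations(pos)
    vel = vel + 0.5 * dt * a
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos)

print("positions after 10 time units:", pos)
```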

    Searle's Chinese Room argument is not really meant to deny that a sufficiently detailed simulation of a brain might pass the Turing test and be behaviorally indistinguishable from a real brain, it's more about the question of whether this simulation would have any subjective qualia or whether it would be a philosophical zombie. But Searle offers no real argument beyond personal incredulity that it would lack qualia. I recommend reading David Chalmers' The Conscious Mind for a thoughtful take on these issues by a philosopher who accepts the mainstream view of physical reductionism but is not an eliminative materialist with regard to qualia. He does however make a good argument that any alternate physical system which accurately simulated a brain in our universe would probably have the same qualia as the original brain, using a thought-experiment involving gradual replacement of the brain by functionally equivalent artificial components--see his paper Absent Qualia, Fading Qualia, Dancing Qualia.

    Incidentally, any comment on the question of whether your argument about concepts presupposes that physical reductionism is false and something like interactive dualism is true? And if this is the case, what about my other question about whether you are defining the word "concept" in a way that presupposes that this philosophical assumption must hold in order for anyone to have a "concept" of anything (so if physical reductionism were true, none of us would have any "concepts" according to your definition), or whether you have a more operational definition of "concept" that can be used to make an argument for your philosophical beliefs?

  19. That a behavior can be simulated by a computational algorithm does not mean that the mind has actually followed that algorithm.

    Searle's Chinese Room argument is not really meant to deny that a sufficiently detailed simulation of a brain might pass the Turing test

    No, what it means is that passing the Turing test is essentially meaningless. A system that passes the test - in this case, by carrying on a conversation in Chinese - need not be self-conscious of doing so and need not be "running" the same "algorithm" as the human mind.

    ...the behavior of large collections of water molecules which is based on nothing but the basic laws of quantum physics

    And this will tell us that water is wet? Potable? The whole has properties that the parts do not possess. That's why formal causation is so important. It accounts for "emergent" properties. A sodium atom and a chlorine atom are made of the same matter -- protons, neutrons, electrons -- but what gives them their respective properties is their form, the number and arrangement of their parts. And then, by combining them we obtain neither a poisonous gas nor a flammable metal, but a tasty seasoning for our food. So some things cannot be explained by a reduction to parts; they require some understanding of the whole of which they are parts.

    Do you disagree that according to physical reductionism, the "algorithm" that the brain or any other system is following is ultimately just the algorithm of the fundamental laws of physics operating on the particles that make it up?

    The brain? Possibly, although there is a problem with the word "just." The physics and biology of the leg muscles are pretty well understood, and an electrical current can make the leg twitch. But the fundamental laws of physics operating on the particles that make up the leg muscles do not explain why I walked to the corner or why my grandson chased a soccer ball around a field.

    And the mind appears to be non-physical, so physics may no more account for it than it accounts for the properties of equilateral triangles, tensor analysis on manifolds, or the beauty of the Parthenon. To put it another way, do "the algorithms" account for the existence of the algorithms and the laws of physics? That seems too much the serpent biting its own tail.

  20. A system that passes the test - in this case, by carrying on a conversation in Chinese - need not be self-conscious of doing so and need not be "running" the same "algorithm" as the human mind.

    I'm not sure what you mean by "running an algorithm". Do you think of physical systems, like molecules or hurricanes, as "running an algorithm"? They do not have the architecture of computers, so this would be a rather confusing use of language. But according to physical reductionism, a sufficiently detailed computer simulation of the particles that make them up would be the most accurate possible way of predicting how their parts moved around through space over time. Same with the brain according to the reductionist idea, though I would not ordinarily speak of the brain "running algorithms" any more than I would say this about hurricanes, since the brain doesn't have the architecture of a computer (for example, having a mostly static "memory" which can only be altered by a read/write head) either.

    And this will tell us that water is wet? Potable? The whole has properties that the parts do not possess.

    "Wet" and "potable" relate to the interaction of water with other things, so a simulation wouldn't tell you about that unless you included the other things in the simulation. But suppose you have any physical system "in a box", isolated from physical influences from the external environment. And suppose you are a Laplacian demon with the maximum possible information about the initial state of all the fundamental particles/fields in the box (their quantum state, in terms of modern physics) and knowledge of the fundamental physical laws that govern their behavior, along with an ideal ability to calculate what these fundamental laws predict given a specific initial state. Then according to what I call physical reductionism, you have all the information needed to make the best possible prediction of the subsequent dynamics of all the physical elements in that box. Some other Laplacian demon who had all the same information about the initial state of the particles and the laws of physics, but who also had some high-level description of the system in the box that you lacked, like the fact that it consisted of a hungry person approaching a buffet, would not be able to improve on your predictions; knowledge of "emergent" macroscopic properties is only supposed to be helpful for predictions about physical events when you lack full knowledge of the microscopic properties. Do your philosophical assumptions absolutely rule out this notion of "physical reductionism" as applied to humans? (if so, would you possibly accept it for other physical systems, like a glass of water tipping over onto a tablecloth and getting it wet?)

  21. (split my response into two because it got too long again)

    The physics and biology of the leg muscles are pretty well understood, and an electrical current can make the leg twitch. But the fundamental laws of physics operating on the particles that make up the leg muscles do not explain why I walked to the corner or why my grandson chased a soccer ball around a field.

    I specifically frame my statement of "physical reductionism" in terms of physical predictions, not "explanations" which is a term which may have a wider variety of meanings in a philosophical discussion. Do you think a Laplacian demon who was given a complete physical description of the particles making up you and the room (which again we assume is isolated from external physical influences as part of this thought-experiment), but who gives no thought to what the particular configuration of particles might "mean" in terms of macroscopic descriptions like "person" and "room", would have a weaker predictive ability than a Laplacian demon who knew this information but also knew that a high-level description would be "a human who has just declared 'I think I'll go see what's in that corner of the room now'"? If so, this would be a rejection of physical reductionism.

    And the mind appears to be non-physical

    I don't want to talk about "the mind" for the moment because that word encompasses unmeasurable features like first-person qualia. I'm only talking about behavior that could be quantified in terms of how the particles in your body, and the ones in stuff relevant to communication like sound waves coming out of your mouth, move around over time. Again, I want to know whether you reject a priori the notion of "physical reductionism" I have outlined above, which applies purely to such quantifiable behaviors.

  22. JesseM,

    But according to physical reductionism, a sufficiently detailed computer simulation of the particles that make them up would be the most accurate possible way of predicting how their parts moved around through space over time.

    I think this needs to be amended in part. To my knowledge, previously "physical reductionism" of this type held that A) it was in principle possible to know the details of all the particles in exacting detail, and B) once this was known, together with perfect knowledge of the laws involved (in principle possible as well), you would C) be able to perfectly predict all subsequent positions, etc.

    But quantum physics has thrown a wrench in A on a fundamental level, B is problematic because our knowledge of the laws is incomplete, and C is fatally compromised.

    I know you didn't state 'physical reductionism' this way. But again, my understanding of PR was that this was the gold standard - it's fallen on hard times.

    Now, instead of so accurate a prediction, you're talking about 'the most accurate possible way'. But here's a further problem I'm having: In opposition to physical reductionism, I suppose, would be a holistic analysis. Instead of viewing everything as resulting from the motions of the most minute particles, we use higher level descriptions.

    The problem I'm having is this: How do you, scientifically, determine that the reductionist account is correct and the holistic account is incorrect? Perhaps if they both ended up predicting different physical outcomes - but A) There's no reason to assume they would, and B) insofar as they wouldn't, the distinction hardly seems to matter anyway.

  23. But quantum physics has thrown a wrench in A on a fundamental level

    Not exactly. People sometimes imagine the uncertainty principle means the particle has a precise position and momentum at all times, but if we measure one we can't know the other. This would be what's called a hidden-variable theory, and the most popular interpretations of quantum mechanics among physicists--the Copenhagen Interpretation and the Many-Worlds Interpretation--do not involve any such hidden variables. In a no-hidden-variable interpretation, if you know the complete quantum state of a system, you really do have the complete physical information about it. There are hidden-variable interpretations like Bohmian mechanics, but physicists tend to consider them a bit "contrived", and a more concrete objection is that for entangled particles Bohmian mechanics requires a type of faster-than-light interaction where measuring one particle instantly affects the state of the other distant one, violating at least the spirit of relativity (though it's not a totally clear violation because the effect can't be used for observers like ourselves to send messages to one another faster than light). In fact by a combination of "Bell's theorem" and the "Kochen-Specker theorem", it can be proven that any possible hidden variables theory would either require nonlocal faster-than-light effects, or would require the hidden variables to have a kind of "precognition" of which measurement was going to be performed on them in the future, even if the experimenter did not decide until the last moment.

    Anyway, it is always possible in principle to measure a precise quantum state for an isolated system, which combined with the fundamental laws of physics could be used to make a probabilistic prediction about what state you'd find the system in with a later measurement. My definition of physical reductionism says you could never do any better than this prediction by using high-level information like what the complete quantum state "looks like" on a macro scale, or even by using knowledge of totally holistic entities (like a vitalistic life-force) that aren't made up of the fundamental entities like particles and fields at all.

  24. (second part of response to Crude)

    The problem I'm having is this: How do you, scientifically, determine that the reductionist account is correct and the holistic account is incorrect? Perhaps if they both ended up predicting different physical outcomes - but A) There's no reason to assume they would, and B) insofar as they wouldn't, the distinction hardly seems to matter anyway.

    It's fine with me if they make the same prediction, the definition of physical reductionism is just that you don't gain any additional predictive power from knowing about anything beyond the fundamental particles/fields and the fundamental laws governing them. One thing I'd also add, though, is that any specific holistic account will presumably only apply to certain types of configurations of the fundamental particles/fields--for example, fluid dynamics is only useful for understanding the dynamics of fluids, biological laws are only useful for living organisms, etc.--whereas the description in terms of fundamental particles/fields and fundamental laws is totally general and can apply to any physical system whatsoever.

    Also, I would not actually claim that the holistic account is "incorrect" unless it is a specific type of holism that presumes the reductionist account is incomplete, that there are top-down forces from "the whole" guiding the behavior of the particles that make up the whole in ways that someone who knew only about the fundamental level would not be able to predict (or would be worse at predicting, if neither account allows for deterministic predictions). And even here I don't think you could definitively prove that this sort of super-holistic theory is incorrect, as I said to you before I was more talking about the question of whether it's true ontologically than whether we can have complete confidence it's true (I want to know whether TOF's philosophical views require that this notion of physical reductionism is false ontologically). I do think we can continue to bolster subjective confidence in the reductionist account by showing that we can build reductionist models and computer simulations which reproduce statistical and qualitative behaviors of various types of composite physical systems, from water to weather to (perhaps someday) cells and even organisms and brains, even if they aren't used to predict the precise behavior of a particular instance of this system in a particular known starting state.

  25. Not exactly. People sometimes imagine the uncertainty principle means the particle has a precise position and momentum at all times, but if we measure one we can't know the other.

    Right, but I wasn't assuming that - I'm not a physicist, but I'm aware that there are multiple interpretations of quantum theory, etc. But I think my point with A still stands; previously, 'physical reductionism' was idealized as being able to attain perfect knowledge of a state, down to the smallest levels, and given that knowledge a single and exact result (barring interference) would inevitably follow. That idea really has been seriously undermined, to say the least.

    Now, I know that your definition of physical reductionism accounts for this - you speak in terms of the 'best possible' prediction, etc. But I was pointing out that one reason for this definition being offered is because the old one fell on hard times due to scientific advances.

    Likewise, you say that your definition of physical reductionism is such that no other model could "do better" - but doing "just as good" is fine. But why call that physical reductionism? Why not have let the holistic modeler say "holism is defined as stating the physical reductionist model will never do better"? Given that models are descriptive and open to all manner of perpetual tweaking, updating, even fudge-factoring, it just seems like a red herring.

    You're saying you want to know if physical reductionism is "false ontologically" in TOF's view - but it seems that previously you were trying to avoid the ontological question, and wanted to focus purely on the topic of empirical predictions.

    I do think we can continue to bolster subjective confidence in the reductionist account by showing that we can build reductionist models and computer simulations

    Well, first I'd say I find Searle's talk about simulations and what they ultimately are, at the end of the day, persuasive. More than that, even if we grant that a simulation can more and more accurately reflect the reality it's modeling, who's to say that's bolstering the reductionist account rather than the holistic account? The gold standard will be whether the simulation is most true to reality, but saying 'It's simulating a reductive physicalist reality!' seems a lot like saying that the Mona Lisa was a painting of a woman 'who was physically reducible' - as if we should go looking around for another painting, one that drew a holistic woman.

    Finally, I've recently read that Kolmogorov Microscales represent a limit in ocean sciences - if your simulation is above or below the KM scale, your simulation's accuracy will suffer. If this is correct, does that represent another blow against physical reductionism? It seems like it would, since the PR reductionist view - even by your measure - seems to be 'the smaller the better'.

  26. But I think my point with A still stands; previously, 'physical reductionism' was idealized as being able to attain perfect knowledge of a state, down to the smallest levels, and given that knowledge a single and exact result (barring interference) would inevitably follow. That idea really has been seriously undermined, to say the least.

    I thought your point with A was specifically about not being able to have complete knowledge of the initial physical state, not about whether one could make deterministic predictions from an initial state, which was what you defined as C.

    Anyway, people thought the universe was deterministic in the past because the equations of physics seemed to indicate this, but I'm not sure this was really a central part of the basic notion of "reductionism". Suppose one presented a 19th century thinker with a scenario in which a Deist God decided to create a universe by specifying an initial state of atoms, along with some fundamental physical laws governing the interactions of individual atoms, but these fundamental laws were "stochastic", meaning there was an element of genuine randomness in what would happen when a given pair of atoms with a given pair of initial trajectories interacted. Do you think this person would say that the "reductionist" philosophy fails to apply in this universe? It would still be true that all behavior of complex arrangements of matter was a consequence of the behavior of the atoms that make it up, whose movements follow the fundamental stochastic laws that the Deist God has set up.

    Likewise, you say that your definition of physical reductionism is such that no other model could "do better" - but doing "just as good" is fine. But why call that physical reductionism? Why not let the holistic modeler say "holism is defined as stating the physical reductionist model will never do better"?

    There is the issue of generality I mentioned--holistic models always deal with special cases of certain types of large-scale systems, while the reductionist model deals with every configuration possible, and it should always be possible to derive a specific holistic model from the reductionist one mathematically. I also don't think that even in a situation where a holistic model is good at predicting, its predictions would be as detailed as those of the reductionist theory--a holistic model will normally only predict certain types of macro-variables, not give a detailed probability distribution over every possible quantum state the system might be found in when measured precisely. But as I said, I don't actually want to say that "reductionism" is inconsistent with any type of "holism", just the specific kinds which say that there are top-down forces from larger "forms" which push around the atoms etc. in ways not predicted by the reductionist theory, something I think Aristotle would have said about the "soul" (see the page on Aristotle I linked to in an earlier comment, or On the Motion of Animals, for example).
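    A minimal sketch of the coarse-graining point, under the assumption that the micro-level description is just a list of particle speeds and the "holistic" macro-variable is a temperature-like average (the function names and numbers are invented for illustration): the macro-variable is computed from the micro-state, while the micro-state contains far more detail than the macro-variable can recover.

        import random

        def micro_state(n_particles=10000, seed=0):
            """The 'reductionist' description: one speed per particle."""
            rng = random.Random(seed)
            return [abs(rng.gauss(0.0, 1.0)) for _ in range(n_particles)]

        def macro_temperature(speeds):
            """A 'holistic' macro-variable: mean kinetic energy per particle
            (particle mass set to 1 purely for the sketch)."""
            return sum(0.5 * v * v for v in speeds) / len(speeds)

        speeds = micro_state()
        # One coarse number derived from 10,000 micro-level facts; the reverse
        # derivation (micro-state from the macro-variable) is not possible.
        print(macro_temperature(speeds))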

  27. (continued)

    The relevance of all these questions about the possible validity of reductionism to the main subject of the definition of "concepts" is this. If TOF does fundamentally rule out this notion of "reductionism" and say that the "rational soul" is really a sort of top-down force pushing around the atoms of my body, then I would want to know whether he is defining his notion of a "concept" in terms of this belief, in which case it would be a rather contentious definition, since a reductionist would believe that none of us have any "concepts" under this definition.

    On the other hand, if TOF accepts at least the possibility of this sort of physical reductionism, he should also accept the possibility of something like a simulation of a human which is behaviorally indistinguishable from the real thing--not just a generic "human-like" AI, but something more like a mind upload of a specific person, where friends and family who interact with the upload find that in terms of personality, intellect, sense of humor, creativity, spirituality, display of emotion, etc., the upload behaves in a way that to them seems indistinguishable from the original biological person. If TOF accepts this sort of thing as a genuine possibility, then his notion of a "concept" could be sharpened further by addressing whether the upload would have genuine "concepts" or whether it would lack them (and if he thinks it would lack them, is that simply a consequence of his belief that the upload lacks qualia and is a type of philosophical zombie, or is there even more to having a "concept" than just exhibiting the right behaviors and having qualia such as the inner experience of mental imagery? If it's nothing more than that combination, then if he accepts that animals have some form of qualia, more explanation would be needed about what particular type of qualia he thinks they are lacking, or what type of behavior they fail to exhibit, that indicates to him that they lack "concepts").

    You're saying you want to know if physical reductionism is "false ontologically" in TOF's view - but it seems that previously you were trying to avoid the ontological question, and wanted to focus purely on the topic of empirical predictions.

    I said I wanted to avoid certain types of philosophical discussions about "meaning" and "explanation", but I don't think I said I wanted to avoid ontological questions. And I was defining the ontological truth of "physical reductionism" in terms of empirical predictions by a sort of ideal Laplacian demon with the maximum possible knowledge of initial physical facts and the laws of physics, not in terms of what can realistically be known by humans.

  28. (continued)
    Well, first I'd say I find Searle's talk about simulations and what they ultimately are, at the end of the day, persuasive.

    Are you referring specifically to the "Chinese room" argument about a brain simulation lacking qualia, or something else? If you're referring to that, have you looked at David Chalmers' Absent Qualia, Fading Qualia, Dancing Qualia which I linked to earlier in response?

    More than that, even if we grant that a simulation can more and more accurately reflect the reality it's modeling, who's to say that's bolstering the reductionist account rather than the holistic account?

    Not sure what you mean by "reductionist account"; again, I wish to avoid discussions about issues like "explanation" and "meaning"--I don't claim, for example, that if physical reductionism is true, that would mean that once we have an understanding of the fundamental laws we have learned everything relevant or interesting about the laws of nature. If one can make more and more accurate predictions about the behavior of complex systems using only more basic underlying laws governing interactions between their parts, all I'm saying this "bolsters" is the belief that my ontological definition of "physical reductionism" is correct: namely, that a Laplacian demon with the best possible knowledge of the original physical state of the fundamental constituents of a system, and the fundamental laws governing the dynamics of these basic constituents, would be able to make the best possible predictions about the subsequent physical behavior of the system, his accuracy limited only by any fundamental randomness in the laws of nature. In no way do I claim that this demon understands everything interesting or relevant or meaningful about the system.

    Finally, I've recently read that Kolmogorov Microscales represent a limit in ocean sciences - if your simulation is above or below the KM scale, your simulation's accuracy will suffer.

    I'm not familiar with this. Kolmogorov microscales have something to do with turbulence, so if this relates to predictive problems in ocean science, I would guess it is something involving sensitive dependence on initial conditions (the "butterfly effect") in chaos theory. Do you remember where you read this, or have a link discussing it? If it is related to chaos, then I think a more detailed fine-grained simulation is not going to be less accurate than a rougher coarse-grained simulation in predicting the evolution of any macro-variables that the coarse-grained simulation is good at predicting. It would be more an issue of neither simulation being able to predict certain sufficiently long-term or small-scale behaviors accurately, because they would need an impossibly detailed knowledge of the initial state at small scales to do so...in a simulation of a chaotic system, if you re-run it twice with the tiniest tweak to the small-scale initial conditions, the two simulated runs will over time diverge completely.
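    A minimal sketch of that last point, using the logistic map as a stand-in for a chaotic system (the map and the size of the initial tweak are just illustrative choices): two runs that start one part in a billion apart track each other for a while and then diverge completely, which is the sense in which long-range prediction demands impossibly precise knowledge of the initial state.

        def logistic(x, r=4.0):
            """One step of the logistic map, a standard toy chaotic system."""
            return r * x * (1.0 - x)

        x_a, x_b = 0.2, 0.2 + 1e-9   # two almost identical initial conditions
        for step in range(60):
            x_a, x_b = logistic(x_a), logistic(x_b)
            if step % 10 == 0:
                print(step, abs(x_a - x_b))   # the gap grows until the runs are unrelated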

  29. Do you think this person would say that the "reductionist" philosophy fails to apply in this universe? It would still be true that all behavior of complex arrangements of matter was a consequence of the behavior of the atoms that make it up, whose movements follow the fundamental stochastic laws that the Deist God has set up.

    I think it would call it into serious question, yes. Certainly for a thinker from that time it wouldn't be as simple as saying 'Oh, well, then I guess some things are stochastic.' That certainly wasn't the reaction of many contemporaries to discoveries in quantum physics, particularly on the 'stochastic' question. And that's leaving out other aspects of quantum physics (entanglement, measurement concerns, etc.).

    But as I said, I don't actually want to say that "reductionism" is inconsistent with any type of "holism", just the specific kinds which say that there are top-down forces from larger "forms" which push around the atoms etc. in ways not predicted by the reductionist theory, something I think Aristotle would have said about the "soul" (see the page on Aristotle I linked to in an earlier comment, or On the Motion of Animals, for example).

    I'm disputing that, in part because it seems clear to me that both the holistic and the reductionist models can, in principle, be refined over and over to take into account any and all perturbations, no matter how minute. Neither theory is sitting in stasis, forever forced to be what it is currently without revision.

    In other words, 'Behavior not predicted by (reductionist/holistic) theory' at 9am can be 'Behavior predicted by (reductionist/holistic) theory' at 10am. In both cases it will be a different way of modeling the same exact result.

    Are you referring specifically to the "Chinese room" argument about a brain simulation lacking qualia, or something else?

    I'm actually referring to the basic idea of what's going on in any simulation, at least according to typical understandings of nature - where a computer 'simulates' something only in the way that some stones on the ground 'represent Dallas'.

    In no way do I claim that this demon understands everything interesting or relevant or meaningful about the system.

    I appreciate that, but my focus here is on telling the difference between a 'reductionist model' and a 'holistic model'. Because it really seems like, in principle, there need be no difference between the two even when it comes to the sort of predictions you're speaking of - they'd simply go about it in different ways (maybe one would be more of a hassle than the other). In practice one model could lag behind the other at any given time, but that could in principle be addressed with further refining.

    Do you remember where you read this, or have a link discussing it?

    Sadly, the only online reference I was able to find was at this site: http://yeahokbutstill.blogspot.com/2010/04/scientism-and-actual-sciences.html - Not the most impressive. I was actually hoping you were aware of this and had more detail/context to offer up.

  30. top-down forces from larger "forms" which push around the atoms etc.

    You are thinking of formal causes as if they were some sort of efficient cause. Efficient causes "push" things around. A basketball has the form of a sphere. How does the sphere "push" the rubber around to make a basketball? It makes no sense, because a basketball just is rubber in a spherical form.

    Being
    a. Material cause: what a thing is made of.
    b. Formal cause: what makes it that thing.
    Becoming
    c. Efficient cause: what made the thing. (How was the thing made?)
    d. Final cause: what the thing is made for.

  31. You are thinking of formal causes as if they were some sort of efficient cause. Efficient causes "push" things around.

    But reading Aristotle's On The Motion of Animals, it does sound like he is saying the animal's "imaginations and sensations and ideas" have a power to move its body in ways that efficient causes fundamentally couldn't--it seems to be more than just a different way of describing the same motions that could be described in terms of efficient causes if we wished. For example, in part 7 he specifically contrasts the motion of automatons with the motion of animals, and he says that the expansion and contraction of body parts like muscles occurs in a way that would never be seen in an automaton (apparently in spite of the fact that the automaton might have been designed for a purpose by some rational being):

    The movements of animals may be compared with those of automatic puppets, which are set going on the occasion of a tiny movement; the levers are released, and strike the twisted strings against one another; or with the toy wagon. For the child mounts on it and moves it straight forward, and then again it is moved in a circle owing to its wheels being of unequal diameter (the smaller acts like a centre on the same principle as the cylinders). Animals have parts of a similar kind, their organs, the sinewy tendons to wit and the bones; the bones are like the wooden levers in the automaton, and the iron; the tendons are like the strings, for when these are tightened or leased movement begins. However, in the automata and the toy wagon there is no change of quality, though if the inner wheels became smaller and greater by turns there would be the same circular movement set up. In an animal the same part has the power of becoming now larger and now smaller, and changing its form, as the parts increase by warmth and again contract by cold and change their quality. This change of quality is caused by imaginations and sensations and by ideas. Sensations are obviously a form of change of quality, and imagination and conception have the same effect as the objects so imagined and conceived. For in a measure the form conceived be it of hot or cold or pleasant or fearful is like what the actual objects would be, and so we shudder and are frightened at a mere idea. Now all these affections involve changes of quality, and with those changes some parts of the body enlarge, others grow smaller.

    But my main interest here isn't in knowing what Aristotle might have thought, but in knowing your own opinions on the questions I asked relevant to your definition of "concepts". Are you willing to address my question about whether "physical reductionism" as I defined it (purely in terms of the empirical predictions of a Laplacian demon who knows only about the initial physical state of an isolated system, and about the fundamental laws of physics, but nothing else) could possibly be correct, or whether you definitively rule it out? If you don't rule it out absolutely, my next question would be whether that means you'd agree it should be possible in principle to create a simulation of a human which would be behaviorally indistinguishable from the real thing, and whether this entity could be said to have "concepts"...but one question at a time, I am first interested in your answer to the physical reductionism question.

  32. whether "physical reductionism" as I defined it (purely in terms of the empirical predictions of a Laplacian demon who knows only about the initial physical state of an isolated system, and about the fundamental laws of physics, but nothing else) could possibly be correct

    I don't know that it accounts entirely for the motions of inanimate physical systems. I'm not sure that a "physical system" can be "isolated." That sounds like talking about the laws of gravity in the absence of matter. But the laws of gravity are themselves formal causes and so are only instantiated in matter.

    This essay may help. There are typos (like "that" for "what") and the footnote numbers are not superscripted, but what the hey:
    http://www.nd.edu/~afreddos/courses/43151/ross-aristotle%27s-revenge.pdf

  33. I don't know that it accounts entirely for the motions of inanimate physical systems. I'm not sure that a "physical system" can be "isolated."

    Well, isolated systems are regularly assumed in physics, and putting a system in a box in intergalactic space would presumably make physical influences from outside the box pretty negligible if not precisely nonexistent. But given relativity, talking about an isolated system in a box is not actually a necessary part of my definition of "physical reductionism".

    Instead we could say that if the Laplacian demon is trying to make a prediction about a particular finite region of spacetime, then if we consider the past light cone of that region, the demon knows the full set of physical conditions in some cross-section of the past light cone (like the spacetime diagram shown here, with the demon having full knowledge of physical conditions in region 3 and making predictions about region 1). Basically the demon knows the complete state of all the fundamental particles at an earlier time, in a region large enough that nothing outside this region could send a signal to the region we want to make a prediction about, assuming physical signals are limited by the speed of light (if you don't want to assume relativity, we could even assume the demon knows the complete physical state of all fundamental particles throughout the universe at some earlier time, prior to the region of spacetime we want to make a prediction about).

    So "physical reductionism" says this demon would make the best possible predictions about what occurs in the later region of spacetime, that it couldn't be bested by a different demon who had the same information as the first but some additional knowledge of the "forms" that the fundamental particles are organized into, or knowledge of some sort of immaterial entities which are not composed of fundamental particles. Certainly I think most physicists would take this form of reductionism to be the case, and regardless of whether you think it's likely to be false (the paper you link to seems to argue it's false, see the paragraph starting at the bottom of p. 2), are you completely confident we should rule out the possibility that this sort of "physical reductionism" holds? And if you admit even the possibility, would the truth of this form of physical reductionism require any modification of your discussion of the nature of "concepts", or do you think all of that discussion would still stand?
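    A minimal sketch of the light-cone condition described above, in one space dimension and with invented sample numbers: an earlier event can influence the prediction region only if the spatial separation between them is no greater than the speed of light times the time available.

        C = 299_792_458.0   # speed of light in m/s

        def can_influence(event_t, event_x, target_t, target_x):
            """True if a signal travelling no faster than light could get from
            (event_t, event_x) to (target_t, target_x), i.e. the event lies in
            or on the target's past light cone (one space dimension only)."""
            dt = target_t - event_t
            if dt <= 0:
                return False                 # influences must come from the past
            return abs(target_x - event_x) <= C * dt

        # An event one light-second away, one second earlier, just barely qualifies;
        # the same event only half a second earlier does not.
        print(can_influence(0.0, 299_792_458.0, 1.0, 0.0))   # True
        print(can_influence(0.5, 299_792_458.0, 1.0, 0.0))   # False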

