Naming Does Not Necessitate Existence

Responding to "Learning is Scaffolded Construction" by Mark H. Bickhard.

OK, the core of the argument is here. Everything before it leads to it, and everything after follows from it:

Encoding models can tempt the presupposition of a passive mind: neither the wax nor the transducing retina need to be endogenously active. But there is no such temptation regarding interaction systems. The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed. Pragmatism forces constructivism.

Furthermore, unless we assume that the organism already knows which constructions will succeed, these constructions must be tried out and removed or modified if they are not correct. Pragmatism forces a variation and selection constructivism: an evolutionary epistemology (Campbell, 1974).

Now how could we get ourselves into such a situation? The answer lies in the presuppositions that led to this point. Specifically:

A theory of encoding is, therefore, what we need to complete the bridge between … semantics and the computational story about thinking. … [An account of] encoding [is] pie in the sky so far. … we haven’t got a ghost of a Naturalistic theory about [encoding]. Fodor, 1987, pg. 81

and

The right questions are: “How do mental representations represent?” and “How are we to reconcile atomism about the individuation of concepts with the holism of such key cognitive processes as inductive inference and the fixation of belief?” Pretty much all we know about the first question is that here Hume was, for once, wrong: mental representation doesn’t reduce to mental imaging. Fodor, 1994, pg. 113

In other words, the mind is depicted as a representational system. But there is a disconnect between representations and the things being represented. For example, some representations may be false; that is (to simplify) the state of affairs represented does not actually exist. Hence representations cannot be caused entirely by the phenomena that cause them. Rather, they must be constructed, through some process of interpretation of those phenomena.

The problem with depending on Fodor to set up the state of affairs is that a reference to Fodor brings with it quite a bit of baggage. Fodor, like Chomsky, argues that the linguistic capacity is innate. Fodor calls this 'the language of thought' and holds not only that grammar and syntax are innate, as Chomsky argues, but also that the semantics are innate, that we are born with (the capacity to represent) all the concepts we can express. How is it that we can use the term 'electric typewriter' in a sentence? Because we were born with it.

But what if Fodor's theory, in particular, and the representational theory of mind, in general, are wrong? What if perception and cognition are not the result of a process of 'encoding'? What if the human mind is much more like Hume's version (very misleadingly described as a blob of wax)? What if semantic properties, such as 'truth' and 'falsehood' (and moral properties, such as 'right' and 'wrong'), are more like sensations or emotions, instead of an account of some sort of correspondence between a proposition in the mind (as interpreted through a constructed mental representation) and a state of affairs in the world?

Because of Fodor's perspective, he wants you to believe that empiricism promotes certain corollaries:

1. The mind is a passive receiver of input and knowledge,
2. Learning is independent of prior state and of context,
3. The ideal form of learning is errorless learning.
It is certainly debatable whether Hume would believe any of these, and they are certainly false of modern empiricism. Much is made of the failures of causal theories of perception (which is why simple encoding fails, and why a representational theory is required in its place). But what if, as Hume says, cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other? What if causation itself is something humans bring to the table? This is certainly not passive perception - humans, on Hume's theory, through 'custom and habit', interpret a perception as one thing or another.

These considerations constitute a response to the interaction theory proposed in this paper. Representations, on this theory, constitute 'interaction possibilities', that is, possible responses an agent may undertake in response to given stimuli (or perceptions). These have all of the properties of representations (truth values, content) but - by virtue of being implicit - do not suffer from the pitfalls of representationalism. We don't need to show how it was caused by this or that, because only the interaction possibility, not the representation itself, is caused by the phenomena. "Encoding models, in contrast, are not future oriented, but backward oriented, into the past, attempting to look back down the input stream."

Fair enough, and a spirited response to the myriad problems facing representational theories of mind (problems imposed on it by its more empiricist critics), but if Hume's position (as understood here, and not mangled by Fodor) stands, then this proposition does not follow: "The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed." And of course, if this does not follow, the need for scaffolding, and the attendant infrastructure required, does not follow.

And Hume's position stands. We are misled by the 'wax' analogy. Even the slightest inspection reveals that perceptions are not like metal stamps, nor are brains anything like lumps of wax. A brain is a complex entity, such that when a perception makes an impression on any bit of it (i.e., when a photon strikes a neural cell) the mind is not left with a resultant 'dent' but rather a myriad of disturbances and reflections, rather like the way water ripples when struck by a pebble or a raindrop. Some of these ripples and reflections have more or less permanent consequences; just as repeated waves form surface features, such as sandbars, that change the shape of subsequent waves, so also repeated perceptions form connections between neurons, that change the way the impact of a photon ripples through the neural network.
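This picture is concrete enough to simulate. The following is a minimal sketch (my own illustration, not drawn from either paper; the network size, learning rate, and pattern are all arbitrary) of how repeated perceptions can strengthen connections in an initially inert network by a simple Hebbian rule, so that a later partial stimulus 'ripples' into the full stored pattern with no constructor doing any constructing:

```python
import numpy as np

n = 8                     # neurons
w = np.zeros((n, n))      # connection strengths: the initially "passive" network
eta = 0.1                 # learning rate (arbitrary)

# A repeated "perception": the same pattern of activation, presented many times.
pattern = np.array([1., 0., 1., 1., 0., 1., 1., 0.])

for _ in range(50):
    # Hebbian rule: units that are active together strengthen their connection,
    # the way repeated waves form sandbars that shape subsequent waves.
    w += eta * np.outer(pattern, pattern)
    np.fill_diagonal(w, 0.0)   # no self-connections

# A degraded later stimulus: half the perception is missing.
cue = pattern.copy()
cue[: n // 2] = 0.0

# The impact of the cue "ripples" through the learned connections
# and recovers the full pattern - no interpreter or constructor required.
recalled = (w @ cue > 0).astype(float)
```

Nothing in this sketch is a representation for the system itself; 'pattern' and 'recalled' are labels we apply from outside, which is exactly the point about interpreted properties.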

The world, therefore, could impress a competent interaction system (so-called) into a passive mind. And therefore (happily) interaction systems (so-called) need not be constructed, which is a good thing.

Why is it a good thing? Because if the interaction system (so-called - I am saying 'so-called' because the resulting neural structure may be described as an 'interaction system' or may be described as something else) is constructed then there must be some entity that does the constructing. And if this is the case, then there are only two possibilities:

Either, 1, the construction is accomplished by the learner him- or herself, which raises the question of how the learner could attain a mental state sufficiently complex to be able to accomplish such constructions, or

2. an external agency must accomplish the construction, in which case the question is raised as to how the perceptions emanating from the external agent to the learning agent could be perceived in such a way as to accomplish that construction.

The pragmatist turn does not resolve the problem. Indeed, it makes the problem even worse. Bickhard writes, "Pragmatism forces a variation and selection constructivism: an evolutionary epistemology." This means even more constructions must be constructed, both those that survive the 'evolutionary trial' and those that don't.

Indeed, the use of 'evolutionary' terminology to describe the state of affairs here is very misleading.

The problem is, any representational theory - whether it employs virtual propositions or not - needs elements that are simply not found in nature. They need 'truth' and 'representation' and even (on most accounts) 'causality'. They need, in other words, precisely the sort of things an intelligent agent would bring to the table. They need to be constructed in order to give them these properties. They need, in other words, to be created.

Far from being an evolutionary theory of learning, this sort of theory is in fact a creationist theory of learning. It amounts to an assertion that the combination of a mind and some phenomena is not sufficient to accomplish learning, that some agency, either an intermediating external agency or an internal homuncular agency, is needed. But both such agencies presuppose the phenomenon they are adduced to explain.

In general, the ascription of such intentional properties - truth, meaning, causation, desire, right, interaction - which are not present naturally in the human mind or the phenomena it perceives can only be accomplished through some such circular form of reasoning. Historically, the existence of these properties has been used in order to deduce some necessary entity - an innate idea of God, an innate knowledge of grammar or syntax, or scaffolded construction, among others (the putative existence of this entity is then used to explain the phenomena in question, to add circularity on circularity).

These properties, however, are interpreted properties. They constitute, at most, a way of describing something. They are names we use to describe various sets of phenomena, and do not exist in and of themselves. Consequently, nothing follows from them. Naming does not necessitate existence.



Bickhard's response:

It is difficult to reply to something with so many mis-readings, both of my own work and of others.

I cite Fodor concerning encodings because even he, as one of the paramount exponents of such a position, acknowledges that we don't have any idea of how it could happen. Since the focus of all of my critical remarks is against such an encodingist position, it's not clear to me how I end up being grouped with Fodor. Certainly nothing actually written commits me to any kind of innatism - that too is one of my primary targets in my general work. In fact, one of the primary paths away from the arguments for innatism is an emergentist constructivism. (This, of course, requires a metaphysical account of emergence - see the several papers and chapters that I have on that issue.)

I don't even know where to start regarding Hume, but there are some comments below as relevant to more specific issues.

Representations, on this theory, constitute 'interaction possibilities', that is, possible responses an agent may undertake in response to given stimuli (or perceptions). These have all of the properties of representations (truth values, content) but - by virtue of being implicit, do not suffer from the pitfalls of representationalism. We don't need to show how it was caused by this or that, because only the interaction possibility, not the representation itself, is caused by the phenomena.
Representation is constituted, according to this model, by indications of interaction possibilities, not by interaction possibilities per se. And such indications are not caused, but, as attacked later, constructed.

I fail to see how even the account of Hume given supports the claim:
If Hume's position (as understood here, and not mangled by Fodor) stands, then this proposition does not follow: "The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed."

That is:
But what if, as Hume says, cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other? What if causation itself is something humans bring to the table? This is certainly not passive perception - humans, on Hume's theory, through 'custom and habit', interpret a perception as one thing or another.
First, I'm not addressing cause at all. Second, Hume explicitly said that he had no idea how perception worked, so the claims being made on his behalf here are rather difficult to fit with Hume's position. Third, interpretation, presumably based on custom and habit, is not necessarily passive, though Hume didn't have much of a model of activity beyond association. Fourth, such "interpretations" are not themselves caused, so they constitute a partial gesture in the direction of construction. I'm arguing that such constructions are of indications of interaction potentials, and that the basic properties of representation are emergent in such indications. Fifth, independent of all of that, how does any such interpretation of Hume undo the basic point that "the world could not impress a competent interaction system into a passive mind"? There appears to be a serious non-sequitur here. The comments about ripples and reflections would both seem to advert to cause in the mental realm, and how could that be rendered coherent given the other comments about cause, and do not address issues of interaction or interaction systems at all.

if the interaction system (so-called - I am saying 'so-called' because the resulting neural structure may be described as an 'interaction system' or may be described as something else) is constructed then there must be some entity that does the constructing.
I fail to see this at all. By this reasoning, there must be some entity that does the constructing of life and organisms and the genome, etc. This truly does lead to creationism, but, if that is the position taken, then the path is pretty clear (it is as well pretty clear who takes such a position). On the other hand, the premise is clearly false. That is one of the central points of variation and selection constructivist models - things can be constructed, that fit particular selection criteria, without there being any external or teleological constructor. The possibility that the organism, mind, etc. does the constructing itself is dismissed with a question of how it becomes sufficiently complex to do that sort of thing. But the ensuing "discussion" seems to assume that there is no answer to this question. I have in fact addressed similar issues in multiple other places. And again, biological evolution itself is proof in principle of the possibilities of such "auto-construction".

Bickhard writes, "Pragmatism forces a variation and selection constructivism: an evolutionary epistemology." This means even more constructions must be constructed, both those that survive the 'evolutionary trial' and those that don't.
Sorry about that, but if constructions are possible, then they are possible, and if the lack of foreknowledge requires that many constructions be made that are ultimately found to fail, then get used to it. I take it that the author is also greatly exercised about biological evolution, which similarly involves lots of errors along the way.

The problem is, any representational theory - whether it employs virtual propositions or not - needs elements that are simply not found in nature. They need 'truth' and 'representation' and even (on most accounts) 'causality'. They need, in other words, precisely the sort of things an intelligent agent would bring to the table.
Are human beings not part of nature? Are frogs not part of nature? If they are part of nature, then "representation", "truth", and so on are also part of nature, and are in fact found in nature. The problem is to account for that, not to sneer at attempts to account for it. Or, if the preferred answer is that they are not part of nature, then that agenda should be made a little more clear, and we could debate naturalism versus anti-naturalism (dualism?) - or perhaps a simple physicalist materialism?

Far from being an evolutionary theory of learning, this sort of theory is in fact a creationist theory of learning. It amounts to an assertion that the combination of a mind and some phenomena is not sufficient to accomplish learning, that some agency, either an intermediating external agency or an internal homuncular agency, is needed. But both such agencies presuppose the phenomenon they are adduced to explain.
Since it is the author of these diatribes who rejected any kind of emergentist constructivism, it would seem that the epithet of "creationist" fits the other side. Certainly it does not fit the model I have outlined. Note also that the possibility of an agent doing his or her own construction is here rendered as "an internal homuncular agency". Where did that come from ("homuncular" was not in the earlier characterization of "auto" construction)? If constructions can generate emergents, then internal constructions can generate emergents, and, if those emergents are of the right kind, then what is to be explained is not at all presupposed. If anything legitimately follows from anything in this rant, it follows from the author's own assumptions, not from mine.

In general, the ascription of such intentional properties - truth, meaning, causation, desire, right, interaction - which are not present naturally in the human mind or the phenomena it perceives can only be accomplished through some such circular form of reasoning. Historically, the existence of these properties has been used in order to deduce some necessary entity - an innate idea of God, an innate knowledge of grammar or syntax, or scaffolded construction, among others (the putative existence of this entity is then used to explain the phenomena in question, to add circularity on circularity).
Earlier, causation at least was located solely in the human mind. But I take it from this that intentionality is in toto supposed to be not a real class of phenomena; none of these properties or phenomena actually exist - ?? If that is the position, then to what is the illusion of intentionality presented, or in what is the illusion of intentionality generated (constructed?). I cannot make enough sense of this to even criticize it. If what is being asked for (though not very politely) is an account of how such circularities regarding normative and intentional phenomena are to be avoided, then I would point to, for example:

Bickhard, M. H. (2006). Developmental Normativity and Normative Development. In L. Smith, J. Voneche (Eds.) Norms in Human Development. (57-76). Cambridge: Cambridge University Press.

Bickhard, M. H. (2005). Consciousness and Reflective Consciousness. Philosophical Psychology, 18(2), 205-218.

Bickhard, M. H. (2004). Process and Emergence: Normative Function and Representation. Axiomathes — An International Journal in Ontology and Cognitive Systems, 14, 135-169. Reprinted from: Bickhard, M. H. (2003). Process and Emergence: Normative Function and Representation. In: J. Seibt (Ed.) Process Theories: Crossdisciplinary Studies in Dynamic Categories. (121-155). Dordrecht: Kluwer Academic.

These properties, however, are interpreted properties. They constitute, at most, a way of describing something. They are names we use to describe various sets of phenomena, and do not exist in and of themselves. Consequently, nothing follows from them. Naming does not necessitate existence.
Since intentionality seems to have been denied, I fail to understand what "interpretation" or "naming" could possibly be. So, on his own account, these sentences seem to be meaningless - the basic terms in them have no referents (but, then, what is reference?).

I apologize for my paper having been the occasion for such mean spirited nugatory "discussion". I have tried to keep responses "in kind" to a minimum. I am not accustomed to such as this, though perhaps it constitutes a "learning experience".







My reply:

Mark H. Bickhard wrote:
It is difficult to reply to something with so many mis-readings, both of my own work and of others.
I think this comment has as much to do with the other discussion as with this.

I cite Fodor concerning encodings because even he, as one of the paramount exponents of such a position, acknowledges that we don't have any idea of how it could happen. Since the focus of all of my critical remarks is against such an encodingist position, it's not clear to me how I end up being grouped with Fodor.
One person can be against a person in one way, and grouped with him in another. A Protestant may be different from a Catholic, but this is not an argument against lumping them together as Christians. Similarly, though you disagree with Fodor on encoding, you nonetheless agree with him on mental contents (specifically, that they exist, that they have semantical properties, that they constitute representations, etc.). "Such indications of interaction possibilities," you write, "I will claim, constitute the emergence of a primitive form of representation." Moreover, "such indications of interactive potentiality have truth value. They can be true or false; the indicated possibilities can exist or not exist. The indications constitute implicit predications of the environment — this environment is one that will support this indicated kind of interaction — and those predications can be true or false."

Related: Clark Quinn asks, "Stephen, are you suggesting that there are no internal representations, and taking the connectionist viewpoint to a non-representational extreme?" Generally, yes. Though I wouldn't call it an "extreme". But let me be clear about this. I do not deny that there is a representationalist discourse about the mind (to deny this would be to deny the obvious). People certainly talk about mental contents. But it does not follow that mental contents exist. Just as people may talk about unicorns, but it doesn't follow that unicorns exist. To me, saying 'there are representations' and saying 'there are interaction possibilities' is to make the same kind of move, specifically, to look at what might generally be called mental phenomena, and to claim to see in them something with representational and semantic properties. But since these properties do not exist in nature, it follows that people cannot actually be seeing them. Therefore, they are engaged in (as Hume might say) a manner of speaking about mental properties.

I am certainly not the first person to make this sort of observation. You could liken it to Dennett's 'intentional stance' if you like, though I would find a more apt analogy to be the assertion that you are engaging in a type of 'folk psychology' as described by people like Churchland and Stich. Yes, as Quinn suggests, a learning system can bootstrap itself. But there are limits. A learning system cannot bootstrap itself into omniscience, for example. As Quinn suggests, "the leap between neural networks and our level of discourse being fairly long." And in some cases, impossibly long - you can't get there from here. And my position is that the sort of system Bickhard proposes is one of those.
Certainly nothing actually written commits me to any kind of innatism - that too is one of my primary targets in my general work.
I did not write that you are committed to innatism. I wrote that the position you take commits you to either innatism or external agency.

The reason is, if a mind (a neural network) cannot bootstrap itself into the type of representation you describe here, then the representation must come from some other source. And the only two sources are innate abilities (the move that Fodor and Chomsky take) or an external agency (the move creationists take). You can disagree with my primary assertion - you can say we can too get from there to here (though I don't see this as proven in your paper). But if my primary position is correct, then there is really no dispute that you are forced into one or the other alternative.

What you are in fact doing is giving us a story about external agency. This is evident, for example, when you say "It depends on whether or not the current environment is in fact one that would support the indicated kind of interaction." You want 'the environment' to be the external agent. But the environment works causally. And the environment does not (except via some form of creationism) work intentionally. It doesn't assert (contra the language you use) any sort of notion of 'true' or 'false'; it just is. What is happening is that you are giving the environment properties it does not have, specifically, counterfactuals, as in "one that would support the indicated kind of interaction." But there is no fact of the matter here. An environment's counterfactual properties depend on our theories about the world (that's why David Lewis takes the desperate move of arguing that possible worlds are real).
In fact, one of the primary paths away from the arguments for innatism is an emergentist constructivism. (This, of course, requires a metaphysical account of emergence - see the several papers and chapters that I have on that issue.)
I have looked at what you have posted online.
I don't even know where to start regarding Hume, but there are some comments below as relevant to more specific issues.
Representations, on this theory, constitute 'interaction possibilities', that is, possible responses an agent may undertake in response to given stimuli (or perceptions). These have all of the properties of representations (truth values, content) but - by virtue of being implicit, do not suffer from the pitfalls of representationalism. We don't need to show how it was caused by this or that, because only the interaction possibility, not the representation itself, is caused by the phenomena.
Representation is constituted, according to this model, by indications of interaction possibilities, not by interaction possibilities per se. And such indications are not caused, but, as attacked later, constructed.
With all due respect, I consider this to be a sleight of hand.

Let's work through this level by level.

There are, shall we say, states of affairs - ways the world actually is.

Then there are representations - things that stand for the way the world actually is (the way you can use a pebble, for example, to stand for Kareem Abdul-Jabbar).

One type of representation (the type postulated by Fodor and company) is composed of sentences (more specifically, propositions). The difficulties with this position are spelled out in your paper. But another type of representation, postulated here, is composed of interactions.

Except that the interactions do not yet exist, because they are future events. Therefore, they exist only as potentials, or as you say (borrowing from Derrida?) "traces" of interactions.

Well, what can a 'trace' be if it is not an actual interaction?

It has to be exactly the same sort of thing Fodor is describing, but with a different name. It has to be some sort of counterfactual proposition. Only a counterfactual proposition can describe counterfactuals and stand in a semantical relation (i.e., be true or false) to the world.

That's why I think this is just a sleight of hand.

I fail to see how even the account of Hume given supports the claim:
If Hume's position (as understood here, and not mangled by Fodor) stands, then this proposition does not follow: "The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed."
That is:
But what if, as Hume says, cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other? What if causation itself is something humans bring to the table? This is certainly not passive perception - humans, on Hume's theory, through 'custom and habit', interpret a perception as one thing or another.
First, I'm not addressing cause at all.
I'll give you this, but claim a chit, which I'll cash in below.
Second, Hume explicitly said that he had no idea how perception worked, so the claims being made on his behalf here are rather difficult to fit with Hume's position.
Hume writes, "All the perceptions of the human mind resolve themselves into two distinct kinds, which I shall call IMPRESSIONS and IDEAS. The difference betwixt these consists in the degrees of force and liveliness, with which they strike upon the mind, and make their way into our thought or consciousness." And "There is another division of our perceptions, which it will be convenient to observe, and which extends itself both to our impressions and ideas. This division is into SIMPLE and COMPLEX." And "Having by these divisions given an order and arrangement to our objects, we may now apply ourselves to consider with the more accuracy their qualities and relations." This is from the Treatise, Book 1, Part 1, Section 1.
http://www.class.uidaho.edu/mickelsen/ToC/hume%20treatise%20ToC.htm

Given that he then went on to compose three volumes based on the account of perceptions outlined here, I would say that he believed that he did indeed have a very clear idea of how perception works. What he does not claim to know, of course, is how perceptions are caused. But that is a very different matter.

For as to the specific claim about causation, "Cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other," I turn to the Enquiry: "Suppose a person... to be brought on a sudden into this world... He would not, at first, by any reasoning, be able to reach the idea of cause and effect... Their conjunction may be arbitrary and casual. There may be no reason to infer the existence of one from the appearance of the other.... Suppose, again, that he has acquired more experience, and has lived so long in the world as to have observed familiar objects or events to be constantly conjoined together; what is the consequence of this experience? He immediately infers the existence of one object from the appearance of the other.... And though he should be convinced that his understanding has no part in the operation, he would nevertheless continue in the same course of thinking. There is some other principle which determines him to form such a conclusion... This principle is Custom or Habit."
Enquiry Section 5, Part 1, 35-36. http://darkwing.uoregon.edu/%7Erbear/hume/hume5.html

I maintain that I have represented Hume correctly.
Third, interpretation, presumably based on custom and habit, is not necessarily passive, though Hume didn't have much of a model of activity beyond association.
I was not the one to make that assertion. Hume is an empiricist, and it was you who cited the principle that 'The mind is a passive receiver of input and knowledge'. As suggested, by 'custom and habit' Hume doesn't mean much beyond association. I am willing to allow slightly more; for example, I have in presentations asserted that beyond simple Hebbian association we can also postulate activity such as Boltzmann 'settling' and 'annealing', along with, of course, some story about back propagation (though, of course, that story involves past 'training' events, not postulated traces of future training events).
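For concreteness, 'settling' of the Hopfield/Boltzmann variety can be sketched as follows (a toy of my own, not from either paper; the patterns and sizes are arbitrary). The network's only 'activity' is each unit relaxing according to its connections, yet a noisy input settles onto a stored association:

```python
import numpy as np

# Two stored associations (bipolar coding), formed by simple Hebbian learning.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
], dtype=float)

n = patterns.shape[1]
w = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(w, 0.0)      # no self-connections

def settle(state, steps=20):
    """Let each unit relax according to its inputs, repeatedly.
    The only 'forces' here are the state of the network and the
    stimulation; no semantical properties are postulated anywhere."""
    state = state.copy()
    for _ in range(steps):
        for i in range(n):
            state[i] = 1.0 if w[i] @ state >= 0 else -1.0
    return state

# A noisy perception (one unit flipped) settles onto the stored trace.
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
recovered = settle(noisy)
```

The 'settling' is purely causal relaxation toward a stable configuration; calling the result an 'association' or a 'trace' is, again, our description from outside.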

This seems to me to be non-controversial as a principle: insofar as there is a model of activity, this model of activity cannot ascribe to that activity forces other than the state and nature of the brain itself, and stimulations of that brain (aka 'perceptions'). Specifically (and this is where Clark Quinn calls me 'radical') I argue that it cannot include the postulation of events or entities with semantical properties (aka 'mental contents', 'propositions', 'representations', and, relevant to the current discussion, 'counterfactuals'), because - though you don't want me to lump you in with Fodor - the same sort of problems 'encodings' have are shared by these other events or entities.
Fourth, such "interpretations" are not themselves caused, so they constitute a partial gesture in the direction of construction.
I'll give you this - but claim the same chit I did above. We'll come back to this.
I'm arguing that such constructions are of indications of interaction potentials, and that the basic properties of representation are emergent in such indications.
Fifth, independent of all of that, how does any such interpretation of Hume undo the basic point that "the world could not impress a competent interaction system into a passive mind"?
By "a competent interaction system into a passive mind" I mean the sort of entity you describe, that stands in a semantical relation to the world.
There appears to be a serious non-sequitur here. The comments about ripples and reflections would both seem to advert to cause in the mental realm, and how could that be rendered coherent given the other comments about cause, and do not address issues of interaction or interaction systems at all.
... and yet does not advert to cause.

The comment about ripples and reflections is a metaphor to suggest that the same kind of thing happens in the brain. 'Causation' is the theory used to explain both. My views on the nature of causation are similar to Hume's.

And - just as there is no 'truth' or 'representation' or 'indications of interaction potentials' in the ripples in the pond, neither is there any such thing in the brain.
If the interaction system (so-called - I am saying 'so-called' because the resulting neural structure may be described as an 'interaction system' or may be described as something else) is constructed, then there must be some entity that does the constructing.
I fail to see this at all. By this reasoning, there must be some entity that does the constructing of life and organisms and the genome, etc. This truly does lead to creationism, but, if that is the position taken, then the path is pretty clear (it is as well pretty clear who takes such a position). On the other hand, the premise is clearly false.
OK, now I'm claiming my chit.

You are saying the following:
That is one of the central points of variation and selection constructivist models - things can be constructed, that fit particular selection criteria, without there being any external or teleological constructor.
Now of course a "variation and selection model" is, essentially, evolution. In a thing that can be reproduced (such as, say, a gene) introduce some sort of variation (such as, say, a mutation) in various reproductions. Then, through some sort of test (such as, say, survival) select one of those variations to carry on the reproductive chain. It is, in other words, a fancy way of saying 'trial and error'.
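The trial-and-error reading can be made literal with a minimal sketch (illustrative only; the bitstring genome, the population size, the 2% mutation rate, and the fitness test are all my own assumptions, not anything Bickhard specifies). Notice what the loop contains: copying, variation, and a selection test. Nowhere in it does 'truth' or 'representation' appear:

```python
import random

def variation_and_selection(fitness, length=20, population=30, generations=100):
    """Minimal 'trial and error': reproduce with variation, then select survivors."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(population)]
    for _ in range(generations):
        # Variation: each reproduction may introduce a 'mutation'.
        offspring = [[(bit ^ 1) if random.random() < 0.02 else bit for bit in parent]
                     for parent in pop]
        # Selection: only the variants that pass the 'test' carry on.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:population]
    return max(pop, key=fitness)

# One-max toy problem: fitness is just the count of 1s in the string.
best = variation_and_selection(fitness=sum)
print(sum(best))
```

The point of the sketch is that the selection criterion is simply whatever test we hand the loop; nothing in the process itself confers any semantic status on what survives.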

Strictly speaking, "variation and selection constructivism" is a misnomer. The term 'construction' implies a deliberately formed entity with some goal or purpose in mind - in other words, an act of creation. It's like coming up with a theory of 'evolutionary creationism'.

Still, leaving the connotations aside, there is a story that can be told here. But there is a crucial difference between 'variation and selection' and what is being offered here.

An analogy: there is no sense to be made of the assertion that the species that remain, red in tooth and claw, after the ravages of natural selection, are 'true'. Nor indeed would anybody say that they 'represent' nature. It is even a stretch to say that they are the 'best'. They just happen to be what was left after repeated iterations of a natural process. There was no sense of truth, representation or morality in the process that created them, and hence there is no sense of truth, representation or morality in what was created. Even the phrase 'survival of the fittest' attributes an intentionality that is just not present. It could equally well be (given our world's experiences with comets and ice ages and humans) 'survival of the luckiest'. Certainly, the major attribute that explains the survival of, say, kangaroos, is 'living in Australia'.

So - even if a process of trial and error, or shall we say, variation and selection, results in a given mental state, from whence does it obtain its semantic properties? The state of affairs that produces a mental state could indeed produce any number of mental states (and has, so far, produced roughly ten billion of them through history). It would be a miracle that any of them, all by itself, would become representative, much less true.

The word 'construction' implies a 'construction worker' for a reason. The word 'construction' suggests semantic attributes. That is why it is no surprise to see Bickhard claim them in his essay.

So what is the difference between natural selection, which does not produce semantic properties, and variation and selection constructivism, which does?

It is this: the entities or events that do the selection in natural selection actually exist. They are past entities that could have actually informed the selection. The entities in the model postulated here, however, do not exist in the brain or the natural world. They are future events, counterfactuals, potentials or traces. They exist only insofar as they are postulated. But if they are postulated, we are begging the question of how they were created in the first place.

Natural selection makes a great scientific theory. It explains numerous phenomena, from the existence of alligators to the operation of the immune system. But natural selection makes a lousy semantic theory. The only way to introduce 'truth' or 'representation' or 'content' into such a system is to invent it, to introduce it surreptitiously using some sort of sleight of hand, as I have described above.
The possibility that the organism, mind, etc. does the constructing itself is dismissed with a question of how it becomes sufficiently complex to do that sort of thing. But the ensuing "discussion" seems to assume that there is no answer to this question. I have in fact addressed similar issues in multiple other places. And again, biological evolution itself is proof in principle of the possibilities of such "auto-construction".
Biological evolution is proof of no such thing.

It is a mangling of the language to say that animals were 'constructed'.

There is, indeed, self-organization. I have referred to it myself many times. But it is not a process of 'construction'. It is not imbued with intentional properties. Mental states do not become 'true' or 'representational' because we evolve into them. We do not in any way 'select' them; actual phenomena (and not non-existing counterfactuals) strengthen one or another in our minds.

Bickhard writes, "Pragmatism forces a variation and selection constructivism: an evolutionary epistemology." This means even more constructions must be constructed, both those that survive the 'evolutionary trial' and those that don't.
Sorry about that, but if constructions are possible, then they are possible, and if the lack of foreknowledge requires that many constructions be made that are ultimately found to fail, then get used to it. I take it that the author is also greatly exercised about biological evolution, which similarly involves lots of errors along the way.
Every iteration of a duck is slightly different from every other. I don't have a problem with that. I believe that the reproduction of ducks, of multiple diverse types of ducks, is a good thing.

But what I don't believe is that a reproduction of a duck can be described as a test of such-and-such a theory, that the natural variation of ducks produces some sort of 'true' duck or even an 'optimal' duck, much less a 'representational' duck. A duck is just a duck. It doesn't mean anything.
The problem is that any representational theory - whether it employs virtual propositions or not - needs elements that are simply not found in nature. It needs 'truth' and 'representation' and even (on most accounts) 'causality'. It needs, in other words, precisely the sort of things an intelligent agent would bring to the table.
Are human beings not part of nature? Are frogs not part of nature? If they are part of nature, then "representation", "truth", and so on are also part of nature, and are in fact found in nature. The problem is to account for that, not to sneer at attempts to account for it. Or, if the preferred answer is that they are not part of nature, then that agenda should be made a little more clear, and we could debate naturalism versus anti-naturalism (dualism?) - or perhaps a simple physicalist materialism?
When you say things like "The problem is to account for that, not to sneer at attempts to account for it" you are making exactly the same move people like Chomsky and Fodor make (you may as well have said 'poverty of the stimulus' and quoted Chomsky directly).

We have, it is argued, the capacity to think of universals, such as 'all ducks quack'. But universals do not exist in nature (because they extend to non-existent future events). Therefore... what? Chomsky says they must exist in the mind. You say... what? That they are the result of trial and error? How would that work for these non-existing future events?

What is the case, in fact, is that what we think are universals, what we call universals, are not actually universals. They are summarizations, they are abstractions, they are something that can actually coexist with the stimuli, however impoverished.

You cannot assume representations in order to argue for a representational theory of mind.
Far from being an evolutionary theory of learning, this sort of theory is in fact a creationist theory of learning. It amounts to an assertion that the combination of a mind and some phenomena are not sufficient to accomplish learning, that some agency, either an intermediating external agency, or an internal homuncular agency, are needed. But both such agencies presuppose the phenomenon they are adduced to explain.
Since it is the author of these diatribes who rejected any kind of emergentist constructivism, it would seem that the epithet of "creationist" fits the other side. Certainly it does not fit the model I have outlined. Note also that the possibility of an agent doing his or her own construction is here rendered as "an internal homuncular agency". Where did that come from ("homuncular" was not in the earlier characterization of "auto" construction)? If constructions can generate emergents, then internal constructions can generate emergents, and, if those emergents are of the right kind, then what is to be explained is not at all presupposed. If anything legitimately follows from anything in this rant, it follows from the author's own assumptions, not from mine.
This really is a gloss of my position, and not a particularly kind one. I hope this version of it is clearer.

In general, the ascription of such intentional properties - truth, meaning, causation, desire, right, interaction - which are not naturally present in the human mind or the phenomena it perceives, can only be accomplished through some such circular form of reasoning. Historically, the existence of these properties has been used to deduce some necessary entity - an innate idea of God, an innate knowledge of grammar or syntax, or scaffolded construction, among others (the putative existence of this entity is then used to explain the phenomena in question, adding circularity upon circularity).
Earlier, causation at least was located solely in the human mind. But I take it from this that intentionality is in toto supposed to be not a real class of phenomena; none of these properties or phenomena actually exist - ?? If that is the position, then to what is the illusion of intentionality presented, or in what is the illusion of intentionality generated (constructed?)? I cannot make enough sense of this to even criticize it.
Oh goodness, what an equivocation.

When I say that 'unicorns only exist in the mind' I am not in any way asserting that large (or I guess very tiny?) horned horses are prancing about the cerebral cortex.

If what is being asked for (though not very politely) is an account of how such circularities regarding normative and intentional phenomena are to be avoided, then I would point to, for example:

Bickhard, M. H. (2006). Developmental Normativity and Normative Development. In L. Smith, J. Voneche (Eds.) Norms in Human Development. (57-76). Cambridge: Cambridge University Press.

Bickhard, M. H. (2005). Consciousness and Reflective Consciousness. Philosophical Psychology, 18(2), 205-218.

Bickhard, M. H. (2004). Process and Emergence: Normative Function and Representation. Axiomathes — An International Journal in Ontology and Cognitive Systems, 14, 135-169. Reprinted from: Bickhard, M. H. (2003). Process and Emergence: Normative Function and Representation. In: J. Seibt (Ed.) Process Theories: Crossdisciplinary Studies in Dynamic Categories. (121-155). Dordrecht: Kluwer Academic.

These properties, however, are interpreted properties. They constitute, at most, a way of describing something. They are names we use to describe various sets of phenomena, and do not exist in and of themselves. Consequently, nothing follows from them. Naming does not necessitate existence.
Since intentionality seems to have been denied, I fail to understand what "interpretation" or "naming" could possibly be. So, on his own account, these sentences seem to be meaningless - the basic terms in them have no referents (but, then, what is reference?).

I apologize for my paper having been the occasion for such mean spirited nugatory "discussion". I have tried to keep responses "in kind" to a minimum. I am not accustomed to such as this, though perhaps it constitutes a "learning experience".
To take offense at my response is ridiculous. It was certainly not mean-spirited, rude, or anything else. Again, I think you are attributing the properties of some other discussion to this one. I cannot otherwise understand why you would object to my response.

Indeed, in the spirit of completeness, perhaps you can point to sentences in my previous response where I was in fact mean spirited, nugatory, rude, or anything else. What specific sentences did you find objectionable? I most certainly have no wish to cause offense, though I certainly do not take that to preclude the possibility of disagreeing with you.

I submit that I interpreted your position correctly, interpreted Hume correctly (among others), and have fairly and successfully criticized your presentation, and that I did so in an academically responsible manner.

Comments

  1. Stephen, I think that it's quite plausible that we generate representations externally without having persistent ones internally, I just wanted to make sure I understood your position. Yet somehow our conscious experience does seem to create representations, and we can re-access (regenerate?) them.

    I'd give Mark credit for 'getting' this (his interactive potentials could map to activation states) if he hadn't explicitly talked about representations and truth value ;). And I have no idea what he's on about with nugatory (and I had to look it up) discussions. I thought the discussion, while rigorous, to be intellectually honest and without personal attack.

    Seldom do I get this fun level of discussion going (though I'm out of practice), and it's hard on paper instead of over a lunch table (or, better yet, a beer ;).

  2. Gelernter in "The Muse in the Machine" makes an argument in favour of the emotional linkage of memories. In that sense, intelligence requires a body.

    T. Barake

