Author: Ralph E. Kenyon, Jr. (diogenes)
Thursday, December 20, 2007 - 10:37 pm
|
Off the top of my head... The structure of DNA had not yet been discovered, and protein folding and "jig-saw puzzle" like interlocking organic compounds were not yet understood in any great detail. It was surmised back then that living tissue was constructed of colloidal-sized particles held together by electric charges. The view was naive, and it has been superseded by a great deal of research since then. Colloidal chemistry gives an understanding of how cement works, but it is a "dead end" for explaining living tissue, which is made up of thousands of continually changing molecules that fit together like lock and key. Colloids fall into the region of inorganic chemistry. You can basically "write off" the sections on colloids as an area where the 1933 general semantics formulation has been completely superseded.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Sunday, December 23, 2007 - 10:48 am
|
Per your request... A Star Trek episode about a society of clones discussed what they called "replicative fading": the cumulative errors over generations (of clones), as copies of copies, etc., gradually succumbed to the fact that each map is not the territory and has errors. I seem to remember some recent news theorizing that DNA molecules have "telomeres" on each end, and these are gradually lost during successive generations of reproduction of the molecules. No harm, no foul, until the telomeres are used up; after that the reproduced molecules start to lose their ability to function, because critical parts are lost. The supposition about ageing, I seem to recall, holds that these telomeres act as a biological death clock, and when they are used up, we start deteriorating and eventually die due to systemic chemistry failure. (Oncogenes, by contrast, are implicated in some cancers, in which cells grow wildly with altered function.) As in all genetics, there is significant variation. But the abstract summary is: "When we use up our telomeres, or when they get 'ripped off' due to some molecular trauma, we begin to die." Oxidation by free radicals is a part of this; so is damage from naturally occurring background radioactivity, cosmic rays, man-made environmental poisons, ultraviolet radiation that gets through the weakened ozone layer, etc. Cellular biology would not be possible without "colloidal" forces, as they are simply electric charges that attract and repel each other. But that is not the major structural consideration. Each molecule has positive and negative charges at different places. The molecules fold up and make an object with a different distribution of charges in different positions. These interlock like jig-saw puzzle pieces to make structures. In cell chemistry these molecules change, using energy from the conversion of ATP to ADP in each step.
In molecular reproduction, one of the "rocking horse" molecules can "zip up" loose amino acid molecules and other base materials, acting as a miniature "factory" manufacturing our biological molecules from the "raw materials" (digested food) that have been brought to the cells by similar processes. Damaging one of these little "factory" molecules, by the loss of a part or in many other ways, can result in non-production or in producing something that is "not quite right" for the needed function. The keys are the three-dimensional folding and the three-dimensional dynamic changes that keep the reproduction of structure, the replacement of damaged parts, etc., operating efficiently enough to allow us to live. Colloids? Irrelevant to life, at this point, because the study of colloids involves gels and suspensions - cement, jello, plastics, glue, and other non-organic substances that are essentially heterogeneous at scales below our natural perception. Life needs not the colloidal process, but the electric charges that are distributed in fixed lock-and-key positions on the folded molecules. Colloidal charges are basically free to roam around on the particles - they provide no reliable structure. Just put your hand into a box of the styrofoam peanuts used for packing. They stick any which way, because the charges move around on the surface. A charged particle will attract a neutral particle on any side, because it causes the object with free charges to become a dipole, attracting the opposite charge and repelling the like charge; but because the opposite charge is "closer", it overrides the repulsive force of the like charge, which has moved farther away. (Charges interact according to the inverse-square law.) But when the charges are fixed, the pieces fit together precisely like jig-saw puzzles, or they don't fit at all. Think of the antibody "lock and key" metaphor; apply that to all of our body's construction, as well as its operation.
The hemoglobin molecule in red blood cells opens and closes like a clamshell to capture and release oxygen, and when it does so in the right places, in response to the miniature chemical differences between its environment in the lungs and in the capillaries of the body, it transports needed oxygen. This does not work with "colloidal" forces, because they stick and don't let go. Simple molecules come with various three-dimensional charge shapes. Water, with only three atoms, is a dipole - with a positive and a negative end. Methane, with five atoms, is symmetrically neutral. Organic molecules with thousands and millions of atoms fold in on themselves, presenting a surface with fixed charges at fixed points, until the molecules absorb or release energy and re-fold into a different shape. Nothing about this process is "colloidal" except that some of the molecules are big enough to be in the colloidal size range. But in Korzybski's time, the structure of DNA had not been discovered. "Colloids" were a guess, because slime molds and eggs could be cooked into coagulating, or they could be caused to liquefy. The protoplasm gained "too much structure" or "too little structure". If you haven't seen the movie about slime molds that was presented at past institute seminars, and gotten a good laugh, you haven't gotten an appreciation for the "state of knowledge" of the time. Nuff said.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Sunday, December 23, 2007 - 10:27 pm
|
I'd be more qualified to do the calculus thing than the biology thing. My "knowledge" comes mostly from reading science news fairly regularly. I view my prior post as nothing more than what the intelligent layman who pays attention to science developments should know. I just put some pieces together. As far as the calculus is concerned, the presentation in Science and Sanity is more "intuitive" than the "up-to-date" "formalisms", which do not use much of the visualizations. I would recommend Lakoff and Núñez's "Where Mathematics Comes From", which develops mathematics from the second-generation cognitive science perspective. I doubt, however, that "common sense realists" would have much use for infinitesimals or infinities. But one of Korzybski's basic premises, that of "similarity of structure" between mathematics, the nervous system, and "reality", needs to be "thrown out". In any mapping between any two of these, the characteristics at each level are totally different. They are literally not comparable. See: http://xenodochy.org/gs/nonsimilar.html and http://xenodochy.org/gs/similar.html
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Tuesday, December 25, 2007 - 07:23 pm
|
Physicists don't "observe a relation"; they observe events, and they then abstract from those events, and subsequently, manipulating their abstractions, may infer or hypothesize a relation, sometimes one for which a nice mathematical variance can be calculated to account for the observed data. By the time a "relation" has been hypothesized, and has even made predictions that have not been disconfirmed, much abstraction has already taken place from the putative "things" and "events" into an abstract object space and further into verbal (numeric) representations. Your putative "R(x,y,z)" is a projection of r(a,b,c) onto what is going on. Your claim begs the question by presuming a relation "exists" and can be "observed"; such is not the case. "Relations" - functional relations, mathematical relations - are "structures" in an abstract map that we create - mathematics - and subsequently "project" elsewhere. We do not have any "relations" that match between any two of the "physical", the neurological, and the "verbal" either. We cannot specify any r(a,b,c) and z(d,e,f), where "r" is "physical" (projected) and "z" is neurological, such that A[abstraction](a)=d; A(b)=e; A(c)=f; and A(r)=z. Nor can we do this for words, as we have little agreement on "abstraction". We can have some relatively fixed reference relations between some words and some putative "things", but only in the context of agreement between people. No "similarity" of structure "exists" outside of the "mind" (brain) of an individual who has abstracted a DIFFERENT "thing" into his neurological representation and made an evaluation of "similarity". To put it simply, "apples" and "oranges" cannot be compared, just as "sound waves" and "electricity" cannot be compared. Only things of the same type in one medium can be compared. We can compare abstractions to each other within our nervous systems, but those abstractions SIMPLY ARE NOT what they are abstractions from.
We do not determine "similarity of structure" of the respective territories; we only determine "similarity of structure" between our abstractions (maps), and then only within our nervous systems. We can construct voltage comparators that measure a difference between voltages, such as in a common voltmeter, and produce a physical movement of a needle in response to the difference, but it is we who, seeing the needle move and "identifying" the needle position with an abstract idea of voltage difference, would assert "these are similar"; the "similarity" is a product of our nervous system evaluation. It works like this: "similarity of structure" is a highly abstract mapping evaluation, and a much abused one at that, because it strongly identifies map and territory. We experience our own similarity evaluation, but we neglect the processes of abstraction by which we obtain the neurological responses to compare.
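The comparator point above can be put in code. This is a minimal sketch, not any real instrument's behavior; the function name and the resolution figure are my own assumptions. The comparator merely reports a difference between its two inputs; the evaluation "these are similar" belongs to whatever reads the output, not to the voltages themselves.

```python
def comparator(v1, v2, resolution=0.05):
    """Report whether two voltage readings differ by more than the
    instrument's resolution. Below resolution, the needle does not
    visibly move, and we evaluate the readings as indistinguishable."""
    diff = v1 - v2
    if abs(diff) < resolution:
        return "indistinguishable"
    return "first higher" if diff > 0 else "second higher"

print(comparator(1.50, 1.52))  # within resolution -> indistinguishable
print(comparator(1.50, 1.70))  # second higher
```

Note that the function never says "similar"; that word is our projection onto its output.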
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Wednesday, December 26, 2007 - 11:29 pm
|
Physicists make measurements through an abstraction process involving "operational definitions" that take for granted (assume) that some "common sense" objects (not "putative" "things") and actions are understood and followed, and they abstract relations among the records of such observations. "Quantities" are numerical abstractions that are assigned by an observer (abstracting) by comparing against a "standard" in the context of an "event". The "discoveries" involve mostly the abstraction of central tendencies and variances, the abstraction of correlations, and the computation of confidence levels based on various probability tests - thus abstracting "numbers" from data. Someone with a bright idea "maps" those numbers, assigning names, and providing verbal "explanations" for them. Just in the way that our brain locates its experiences external to itself, physicists project the numerical relations in the data onto what is going on. It is a step outside of the observation-abstraction, inference-abstraction, hypothesis-abstraction, experience-abstraction process to assume that the structures that we create in our abstractions "are" the result of "corresponding" putative "structures" that "exist" independently of observation - outside of us. That "is" the common sense and "realism" view, but it begs the question by assuming the "existence" of the very things that we can only experience abstractions from - according to current general semantics theory. We experience the "object", not the "putative" "thing". Said nicely, we experience our projections and "identify" the projections as we assume them to be. Doing philosophy has been likened to repairing the hull of a ship while it is underway at sea. As early as Xenophanes, the human time-binding record shows an awareness that what we "know" just "ain't it". But even today that understanding still ranks as esoteric.
Building a model of the universe, as we hypothesize, works similarly; we have a somewhat coherent collection of parts held together by the belief and acceptance of the vast majority, while only a tiny piece is questioned at a time. Right now, our current model is quite coherent, but we already know that it is incomplete and contains error. Our best efforts have not combined GUTs (grand unified theories) and "gravity" into a single coherent theory. (We probably have almost as many theories as we have really advanced physicists.) We don't know if any one of them is "right", and we cannot, in principle, find out, because, no matter how much corroboration we have for a theory, we still have the future to contend with. Thomas Kuhn pointed out major paradigm shifts in the past, noting that "knowledge" and our corresponding "world view" undergo periodic major changes. Learning from this time-binding, we no longer believe in certain "knowledge". What's more, according to our best current model, we "know" that our brain projects its responses outside of us. We have experiences that we consider to be pretty reliable, and we "bet our lives" on them every day. But that does not make it so. To say that that which we do not "know", and, in principle, cannot "know", has structure "similar" to that which we (internally) "know" begs the question. We know the map is not the territory (non-similar). The map covers not all the territory (non-similar). The map reflects the map maker (non-similar [to the "territory"]). Our three primary principles all directly deny similarity. Non-identity denies similarity. If you try to define "similar" in terms of "similar", you have infinite regress and "begging the question". Our brains, however, as I noted earlier, abstract into on-off nerve cell activation, and some of those circuits have outputs that report whether two inputs are both active. Doing lots of these, we abstract, create, and project identity and similarity.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Thursday, December 27, 2007 - 10:30 am
|
You are, of course, free to believe that the structures you project "exist" at the event level; the birds do it; the bees do it; nearly everybody does it. We even routinely "identify" our objects as "things". Belief, however, is not an adequate counter-argument. "Making sense" means fitting our sensory inputs into our current model. You do not "make sense" of what I say, because your model assumes that "similarity of structure" is a given. For me, everything is open to question. And I find "similarity" to be based in a low-level primitive which we might call "same": a nervous system evaluation comparing two non-discriminating responses. What is "the same" is the activation of two responders, but we do not know what activated those responders. General semanticists deny "sameness" in any guise. Even this primitive neurological abstraction response? But without a primitive notion of "sameness", similarity cannot be defined without infinite regress.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Thursday, December 27, 2007 - 03:59 pm
|
For many people "beliefs" are not open to question. My statement does not imply closed-"mindedness"; that would be an unwarranted "inference" on your part. "Assumed", "unexamined", "not willing to deal with", and more are possibilities. The paradigm of "realism" is so deeply and thoroughly ingrained in our daily habits and life that it "really" is taken by most as unquestionable. If "similar" is left undefined, and "structure" is "officially" undefined, what can "similarity of structure" possibly mean? "Undefined of undefined". The map has "undefined of undefined" to the territory. Does that give you the same sense as "The map has 'similarity of structure' to the territory"? I do not think that "similarity" or even "same" is treated as "undefined". Both are "concepts by intuition". But that does not mean that we cannot create an adequate "concept by postulation" to act as a model. Infinite regress has been a well-known "problem" for aeons. And general semantics does not resolve it. The current solution to "infinite regress" is "recursion", which is to say regression to a beginning base case. "Similarity", as I have pointed out, regresses to a primitive notion of sameness - the "base case": the lowest level, smallest part, etc. Our perceptual system, and our cognitive system, differentiate objects (not putative "things"), and each has a least object or smallest level below which further (unaided) differentiation is not "discernible". A is "similar" to B if we differentiate A into a1, a2, ... an AND/OR we differentiate B into b1, b2, ... bm, and some (not none) of A or the a's are similar to some of B or the b's; or (the recursive base case) if we cannot differentiate A and we cannot differentiate B and we cannot distinguish A from B, in which case we say that they are the "same" (primitive base case).
If for every differentiation of A there exists a differentiation of B such that for every ai there exists a corresponding bi indistinguishable from ai, we say A and B are the same. In the nervous system, inputs are active or not active. We don't know what activated the input. Consider a rod cell. It can be activated by a low-energy photon or by a high-energy photon. Two such rod cells, each active from photons of unknown energy, can get processed in the comparator circuit I referenced here. The output of the circuit indicates both active but not distinguishable. Other, parallel circuits can activate location maps, and we can see that there are two pinpoints of light, but we cannot tell them apart in kind, because we have no way to differentiate each into parts. Our neurological evaluation is that they are indistinguishable, aside from the fact that parallel circuits might activate different "pins" on our location mapping circuits. When we abstract thousands of parallel inputs that get consolidated into fewer higher-level circuits, and these inputs are fed into comparators in parallel, we can get the differentiations I defined above, and we can get the division-by-division comparison indicated in my formal definition of "similar". If we find no left-over pieces on either side, then we have a neurological basis for "same" (objects). If we have some left over, we have similar but not same. Additional circuits are required to examine a wider range of comparators to determine if there are subsets of comparator outputs that are unmatched. This can be achieved by arrays of such circuits. We humans have such arrays arranged as a "map" of our senses in what is known as the sensory homunculus - a distorted image of our bodies across the brain - distorted by how much brain tissue corresponds to each part - much more for hands, lips, tongue, etc. A similar "mapping" takes the retina into the visual cortex.
And since these mappings are largely "hard-wired", we have a behavioral response to put objects into the right processing area. We turn and face what we are responding to, putting our inputs "front center". Higher on the phylogenetic scale, we have the added circuitry and ability to do this transformation in "software": we metaphorically "hold something up" "mentally" "in front" of us. In radar, a steerable phased array uses electrical circuit delays to "virtually" "turn" the radar's "direction" without physically turning the antenna. I surmise we have something like that in our high-level brains that can retrieve from memory and "turn" it around "in our minds" using similar circuitry. Front-center orientation puts objects into the evolved comparative circuits, allowing us an evolutionarily defined ability to abstract "the same" or "similar", and my formal definition above gets implemented in evolved "wet-ware". We know toads have very simple circuits for "recognizing" the right size, shape, motion, and distance to activate a tongue reflex, but the toad turns to put the peripheral motion into near front center first. We have neurological circuits that make an evaluation that objects (not putative "things") we experience are "the same" or "similar"; "similar" recurses to a base case of "the same", while "degree of similarity" measures inversely how much differentiation does not match.
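My recursive definition of "similar" above can be sketched in code. This is a rough model, not the wet-ware: I am assuming, purely for illustration, that an "object" is a nested tuple, that a non-tuple value is an undifferentiable base-case object, and that two base-case objects count as "the same" when we cannot tell their labels apart. The shape of the recursion, not the data model, is the point.

```python
def same(a, b):
    """Base case: neither object can be differentiated further,
    and we cannot distinguish one from the other."""
    return not isinstance(a, tuple) and not isinstance(b, tuple) and a == b

def similar(a, b):
    """A is similar to B if they are the same (base case), or if we can
    differentiate A and/or B into parts and some (not none) of the
    parts are similar (recursive case)."""
    if same(a, b):
        return True
    parts_a = a if isinstance(a, tuple) else (a,)
    parts_b = b if isinstance(b, tuple) else (b,)
    if parts_a == (a,) and parts_b == (b,):
        return False  # neither differentiates further, and they differ
    return any(similar(x, y) for x in parts_a for y in parts_b)

print(similar(("red", "round"), ("red", "square")))  # share a part -> True
print(similar("red", "red"))    # undifferentiable and indistinguishable -> True
print(similar("red", "blue"))   # undifferentiable but distinguishable -> False
```

Note that "same" never appears inside its own definition; it is the explicitly given base case that ends the regress.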
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Thursday, December 27, 2007 - 10:28 pm
|
Thomas wrote: "You can't define indefinitely without going around in circles." I already pointed out that this was solved by recursion with a base case. General semantics did not come up with that solution; computer science did. Leaving "equivalence" and "relations" as undefined fares no better. The term "relation" is one of three "officially" undefined terms in general semantics (structure, order, relation). Each is weakly characterized in terms of the other two, leaving all three as concepts by intuition for non-mathematical usage. "Equivalence" is a synonym of "same" for generic or lay purposes, but not for mathematical purposes. I provided an explanation for neuro-semantic contexts in my previous post. Thomas wrote: "In GS we recognize this fact and call it the Objective level of abstraction." In this context, "objective level of abstraction" amounts to pointing and grunting - "naming" - in which we apply a non-verbal association to a word, thus avoiding "defining it" using other words. This "defining" by pointing while avoiding words assumes that anyone communicated with in this manner will recognize what we are indicating. It fails to take into consideration the "mistaking the finger for the moon" situation. This does not produce a formal-level solution to the "problem" of infinite regress; it simply stops dealing with it by ceasing to define. As noted previously, I provided the formal solution at the mathematical level (recursion), and I indicated it at the neuro-semantic level by showing the base-level (minimum) circuit required to abstract "same" or "different". I also pointed out that these notions are "concepts by intuition" in their "common" meaning. I did not say they have no meaning. David wrote: ""sameness" is a term which refers to one extreme of the "similarity" spectrum."
I believe I said this when I provided a recursive definition of "similar", but I further noted that as one makes smaller and smaller differentiations, one eventually gets down to the level at which we can no longer distinguish differences. When our nervous system processes two items, neither of which can be differentiated further, we have the primitive - "sameness".
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Friday, December 28, 2007 - 07:22 pm
|
David, your next-to-last post was quite good. There is a very important relation between concept by intuition and concept by postulation, and that relation parallels the relation between theory and application. We create concepts by postulation by the device of a formal definition, which is more than "intensional". We use these definitions together with theory statements (IF...THEN...) to make predictions, which we test. Usually it is the theory statement that gets corroborated or disconfirmed, but it may sometimes be the case that a definition gets shown to entail a contradiction. It was precisely that case that led to Russell's paradox, when Frege allowed a set to be a member of itself. The formal defining of the symbol for "member of" allowing self-membership proved to introduce the contradiction. Once that defining is changed to preclude self-membership, Russell's theory of types evolves as a direct consequence. Any set that includes a given set, when given the same name, must be indexed by level of abstraction. XN can be a member of XM where M > N, but NOT where M = N (which Frege allowed). When you wrote "This occurs because the definition is presented using other words and symbols, which must also be defined.", you went too far. Not all words used in a formal definition must also have a formal definition. A concept by intuition cannot be "explained" using its name, as that is the prima facie criterion for circularity or infinite regression. The restriction against circular definitions applies particularly to concepts by intuition. It's usually not a problem, because we generally do not "use" the word in such an intensional definition. We typically give examples and ask the user to abstract, hoping they get some intuition such that we can agree with their subsequent verbalizations using the term. With recursive definitions, which can be applied to both concepts by postulation and concepts by intuition, this restriction is somewhat relaxed.
We may "use" the term being "defined" in the definition, provided the use is in a lower or reduced sense and there is a sequence of such lower or reduced senses that must come to a finite end point. This pattern allows us to prevent circularity by coming to the end. Said end is called the base case. Consider N factorial, which is the product of all the integers from N down to 1. N factorial is written N!. A recursive definition is written thus: Base case: IF N=1, THEN N!=1 (that is, 1!=1). Recursive case: N! = N x (N-1)!. Because N is an integer, and each step subtracts 1, this process is guaranteed to get to 1, which is defined explicitly. Let us consider a three-dimensional geometric figure. It is composed of interconnected two-dimensional geometric figures. A two-dimensional geometric figure is composed of connected one-dimensional geometric figures. A one-dimensional geometric figure (for the purpose of this discussion) is a line segment of non-zero length. An N-dimensional "structure" is: Base case (N=1): a non-zero-length line segment. Recursive case: interconnected (N-1)-dimensional "structures". A more generic approach... A "structure" is: Base case: an undifferentiable object. Recursive case: differentiable into smaller or less complex structures. What kind of differentiation, you ask? That depends on the abstraction process of the observer. We are employing some concepts by intuition here. One is the nature of smaller or less complex - reduced size, fewer parts, smaller pieces, etc. Another is composition - the sub-structures must be composed into the greater structure. Put together. Obviously we can have different ways of breaking objects down into lesser parts, but they all depend upon the ability of the observer to distinguish parts, aspects, etc. - to differentiate the object into smaller objects. Do not forget that I use "object" as an indicator of object-level experiences.
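The factorial definition above transcribes directly into code: an explicit base case, plus a recursive case that is guaranteed to reach it, since each step subtracts 1 from a positive integer.

```python
def factorial(n):
    """N!, defined recursively for positive integers."""
    if n == 1:                       # base case: 1! = 1, given explicitly
        return 1
    return n * factorial(n - 1)      # recursive case: N! = N x (N-1)!

print(factorial(3))  # 3 x 2 x 1 = 6
```

The name "factorial" does appear in the definition, but only applied to a strictly smaller argument; that is what distinguishes recursion from circularity.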
Whether it be senses or memory, we recall that differentiating is based in neural computation, and it eventually gets down to a single cell response. However, just for drill, consider bisecting a line segment object in two, and then bisecting the parts, etc. If we conceptualize the half segment and "blow it up" "in our mind", we are violating the reduction or "smaller" sense. Consequently, just as with physically folding or cutting a paper in half, we get down to a point where we cannot perform the action - we reach a minimum size that we cannot bisect again; and if we adhere to the "smaller" requirement in mental conception, we have the same problem: we get down to the minimum, which we cannot hold without violating the "smaller" requirement by blowing it up in our mental image. So this kind of "infinite" regress is not a counter-example to my definition of structure. When we can no longer differentiate an object into smaller ones, and we have two such objects, we cannot use "parts" or sub-structures to compare them. That leaves us only with the possibility of comparing "point" objects. We may be able to distinguish them, such as a momentary pin-point flash of green light and a momentary pin-point flash of red light, in which case we can say they differ, or we cannot distinguish them on the basis of color. Other examples are pin-pricks too close to tell apart, short-duration sounds whose pitch we cannot differentiate, repeated flashes, repeated sounds, repeated touches, etc., all of which we are unable to differentiate or tell apart aside from location. These are all examples of base-case "structure" response objects. It is customary to evaluate them as "the same", even though we "know", through consciousness of abstraction, that these objects may be "caused" by different sources; it is merely that we cannot differentiate them. My point is that within our nervous systems these objects are "the same" - though any putative "territories" may differ.
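The bisection drill above can be sketched as code. The minimum resolvable length here is a made-up stand-in for the limit of unaided differentiation; the point is only that repeated halving bottoms out at a base case below which no further "parts" exist to compare.

```python
MIN_RESOLVABLE = 0.01  # hypothetical threshold of differentiation

def bisect_until_minimum(length, steps=0):
    """Halve a segment until it can no longer be differentiated into
    smaller parts; return the final length and the number of bisections."""
    if length / 2 < MIN_RESOLVABLE:           # base case: cannot bisect again
        return length, steps
    return bisect_until_minimum(length / 2, steps + 1)

final, steps = bisect_until_minimum(1.0)
print(steps, final)  # a finite number of halvings, then the process stops
```

The regress is finite, not infinite, precisely because the "smaller" requirement runs into a minimum.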
If you would like to eschew the use of the word "same" and call them "similar", you violate the structural requirements of a recursive definition. The term being defined may be used in the recursive cases, but it may not be used in the base case. I opt for the term "same" as the name to apply to objects that we cannot differentiate among. We can then define "similar" using my recursive definition of "structure", with non-differentiable objects as the base case. This approach squares with geometry and axiomatic approaches, but it is expressly adapted to the general semantics notion of object-level experience. But it is very important to note that we cannot generalize this beyond the nervous system without adding "belief" assumptions. It is a belief-postulate to apply what we experience through abstraction (object level) "backwards" to that which is prior to being experienced (event level). This is precisely what is meant by projection. We project our abstractions onto what is going on. You may take "similarity of structure" between objects and putative "things" as given, but you likely enjoy sleight-of-hand magicians - an example where that premise fails. Another is holograms. Another is dreams. Another is various visual and auditory illusions. The brain locates its experiences elsewhere, and sometimes they are not consistent with the experiences reported by others. These in themselves are enough to satisfy Popper's falsification requirement. The upshot of this is that the word "structure" is not undefined. It's defined weakly as a concept by intuition, and it's defined strongly as a concept by postulation by my recursive definition above. There is no requirement that all the words in a formal definition must themselves have formal definitions. Also, this definition agrees with the interdependent characterizations of structure, order, and relation in general semantics.
Composition of parts involves relations and order - order in how we sequence the relations among the "part" ("sub-") "structures". "Undefined" terms may have concept-by-intuition understandings, but that is itself a problem. The classic undefined terms are 'point', 'line', and 'plane', all of which had to be "undefined" by getting rid of the influence of the concept by intuition in order to allow geometry to progress beyond the Euclidean era. We "undefine" a term in order to get rid of the influence of the concept by intuition that can act as a "mental block" to discovering other possible relations. Example: replacing the fifth postulate with "there are no parallels to a given line through a point not on the line" defines the geometry of lines on the surface of a sphere. Every two straight lines always intersect. Draw the "straight" line known as the equator. Draw a "straight" line through the poles. It intersects the equator at "opposite" "points" on the equator. But part of the axioms holds that two lines intersect at one and only one point. So a "point" in this closed curved-space geometry consists of what we would normally use the concept-by-intuition "point" to refer to as two points: the pair at opposite ends of a great circle. A "point" on a "straight" "line" in the spherical geometry comprises the polar-opposite pair taken together - a notion quite different from our concept-by-intuition meaning for the word 'point'. Korzybski was following the trend that was all the rage - attempting to "axiomatize" parts of general semantics - and that influence shows in his claim to treat "structure, order, relation" as "undefined". Further, "concept by postulation" goes right along with this, and applies specifically in the mathematical theory statements used to model what is going on.
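The equator-and-poles example above can be checked with a little vector arithmetic, under the usual model in which a "straight line" on the sphere is a great circle: the intersection of the unit sphere with a plane through the center. Two great circles meet along the line common to both planes, which is the cross product of the plane normals, i.e. at a pair of antipodal points. (The function names here are mine, for illustration.)

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def normalize(v):
    """Scale a 3-vector to unit length (a point on the unit sphere)."""
    n = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/n, v[1]/n, v[2]/n)

# The equator lies in the plane with normal along z;
# a great circle through the poles lies in a plane with normal along x.
equator_normal = (0.0, 0.0, 1.0)
polar_normal = (1.0, 0.0, 0.0)

p = normalize(cross(equator_normal, polar_normal))
antipode = (-p[0], -p[1], -p[2])
print(p, antipode)  # the two "opposite" intersection points on the equator
```

The two circles intersect at p and its antipode, which is why elliptic geometry must treat the antipodal pair as a single "point" to preserve "two lines meet in exactly one point".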
In order to have a proper "theory", we have to take a concept by intuition, devise a concept-by-postulation version of it, incorporate that into our theory of what is going on, make predictions, and then test them by use, looking for falsifications that necessitate revising the theory and the definitions used to express concepts by postulation. When you compare two things, recall all the abstracting that is taking place, and pay particular attention to what gets compared - the ends of the stimulus-abstraction-response chains - the responses. Those are what are being compared: NOT a response to a stimulus, NOT an event to an object, NOT a map to a territory, but only a map response to a map response. That we have coherence and lots of corroboration does not give us true knowledge; it only gives us a conditional theory that has not yet been broken - which may yet happen.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, December 29, 2007 - 09:26 am
|
The "base case" in recursion is the lowest level. The value or "meaning" of the base case is explicitly given in the sense of an ordinary definition. By "ordinary", I mean the requirement that the name being defined is NOT used in the definition. Imagine that "we" are the clerk performing the recursive procedure for somebody who requests an answer. In a recursive procedure, the first step is to test if one is at the base case. If one is at the base case, then the answer is given by the definition and we are done. If we are not at the base case, then we break the input down according to the reduction rule, and we hand off the smaller request "to ourselves". I'm sitting in a cubicle and a customer comes to the window and asks for factorial 3. I look at the number 3, and I see if it is 1. It is not 1, so I put the 3 on the shelf, subtract 1 from it, and I write down "2". Then I go out the back door, walk around to the outside of my window, and present to the clerk the request for factorial 2. (The sign says "back in a second", so I leave the request at the window.) Then I go back around, enter the back door, take my seat, and open the window, to see the request for 2 factorial. I test to see if it is 1. It is not, so I put the 2 on the shelf, subtract 1, get 1, go out the back door, around to the front of the window, and submit the request for 1 factorial. Then I go back around into my cubicle and reopen the window, where I find the request for 1 factorial. I test to see if the number is 1. It is, so I give back the answer 1 in accordance with the base-case definition. Now I go back outside to the window and get the answer 1. When I come back, I get the 2 from the shelf and multiply it by the 1 factorial answer I just got, and I put the answer to 2 factorial out the window. Then I go back around to the outside, get the answer to 2 factorial, and return through the back door. 
This time I take the 3 down from the shelf and multiply it by the 2 factorial answer, which is 2, and I get 6. I give this back out the window as the answer to the original request for 3 factorial. Notice that at each stage the multiplication process is suspended to wait for an answer from a lower stage - until one gets to the base case. How is the base case defined? Here is where the concept by intuition comes in. It is given as a specific answer. In the case of "structure" I give the intuitive answer as a part or "sub-structure" that cannot be differentiated (by the abstractor) into smaller parts or sub-structures. A structure is either an undifferentiable object (essentially a point object) or it is composed of smaller structures. A is "similar" to B if A and B are structures as defined above, either or both are capable of being decomposed into substructures, and there are (sub)structures of A that are similar to (sub)structures of B (the recursive part). If A cannot be decomposed or differentiated into sub-structures, and B cannot be decomposed or differentiated into sub-structures, then A is "similar" to B only if A is indistinguishable from B - in which case we say that A is "the same as" B. Ultimately, A and B are similar if they have some indistinguishable parts (the same). But "some" allows "similarity" to be defined according to whatever criteria any abstractor has for deciding how many sub-parts must be indistinguishable to that abstractor. This puts "similarity" strictly in the category of an evaluation by an abstractor, based on that abstractor's ability to differentiate a structure into sub-structures and that abstractor's choice as to how many or what proportion of the indistinguishable parts will be counted towards similarity. It could come down to just a single abstracted characteristic (often one that others don't see). 
Similarity, then, is not a property of "event-things"; it is not a relation between "map" and "territory"; it is not a relation between "word" and "object"; it is a reaction, response, evaluation, etc., by observers (abstractors) to object-level abstraction responses.
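For readers who prefer code to cubicle metaphors, the clerk procedure and the recursive definition of "similar" might be sketched as follows. This is a minimal illustration, with assumptions of my own: a "structure" is modeled as a nested Python list, an atom (non-list) plays the role of an undifferentiable "point object", and "some" indistinguishable parts is taken to mean "at least one" - one possible abstractor's choice among many.

```python
def factorial(n):
    # Base case: the answer is given outright, with no use of "factorial"
    # (the clerk's base case is n == 1, as in the story above).
    if n == 1:
        return 1
    # Recursive case: suspend this stage's multiplication, hand the
    # smaller request "to ourselves", then resume with the answer.
    return n * factorial(n - 1)

def similar(a, b):
    """Is A "similar" to B, per the recursive definition above?"""
    a_atomic = not isinstance(a, list)
    b_atomic = not isinstance(b, list)
    if a_atomic and b_atomic:
        # Base case: undifferentiable objects are "similar" only if they
        # are indistinguishable - in which case we say "the same".
        return a == b
    if a_atomic or b_atomic:
        # One can be decomposed, the other cannot: compare the atom
        # against the other's sub-structures.
        atom, parts = (a, b) if a_atomic else (b, a)
        return any(similar(atom, p) for p in parts)
    # Recursive case: "some" (here: at least one) sub-structure of A
    # is similar to some sub-structure of B.
    return any(similar(pa, pb) for pa in a for pb in b)

print(factorial(3))                    # the clerk's answer: 6
print(similar([1, [2, 3]], [[3], 4]))  # share an indistinguishable part: True
print(similar([1, 2], [3, 4]))         # no indistinguishable parts: False
```

Changing the `any(...)` criterion (to, say, a required proportion of matching sub-structures) models a different abstractor's choice of how much indistinguishability counts towards "similarity".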
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, December 29, 2007 - 09:50 am
|
As an afterthought, notice that a "recursive" definition is not a simple object or a static relation between simple objects; it has the properties or characteristics of a verb in that it requires action. To apply a recursive definition we have to actively participate in a process of differentiation, comparison, composition, etc., to evaluate whether the definition applies.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, December 29, 2007 - 09:59 pm
|
David wrote, "As I understand it, the notions of 'base case' and 'recursion' assume a context of concept by postulation. The notions of a 'base case' and 'recursion' do not apply when we are talking about concept by intuition." Mostly correct; however, the notion of recursion is sufficient to imply that what is being defined is a concept by postulation. "Base case" might be construed somewhat more loosely. For example, one might use the phrase 'base case' to refer to a "paradigm case" example used in a concept-by-intuition "definition". The difference is in whether the phrase 'base case' is being used in the technical sense of a recursive definition or in a more general sense - like the difference between using "general semantics" as the name of a discipline and using the adjective 'general' to modify the noun 'semantics' in a generic sense. So, yes, if we are talking about giving a recursive definition for something and specifying the base case, we are talking about defining a concept by postulation. We are "postulating" the formal definition. "When I use a word," Humpty Dumpty said in a rather scornful tone, "it means just what I choose it to mean --- neither more nor less." "When I [define] a [concept]," Humpty Dumpty said in a rather scornful tone, "it means just what I [say it means] --- neither more nor less." In any definition, the definiens may include words which do not have an explicit definition; that is, it may include words which represent concepts by intuition. The formal definition, however, becomes the "postulate" for which the definiendum gives the name of the "concept by postulation" so created. A concept by intuition does not have a formal explicit definition, so what it names or refers to is assumed or guessed at by any listener or reader. We create concepts by postulation in order to crystallize a corresponding concept by intuition. 
Philosophers discuss and argue whether or not proposed concepts by postulation "capture" what we generally mean by concepts by intuition to an adequate degree. Scientists do the same in creating theories of the "universe", and they use prediction to corroborate or disconfirm such postulated concepts. But most of us "understand" even concepts by postulation in terms of our own semantic reactions as concepts by intuition - at least until we have adequate formal training in formal languages such as mathematics and logic.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, December 29, 2007 - 10:21 pm
|
David wrote, "I don't see how a concept by postulation could be classified as an undefined term. I do, however, see how a concept by intuition would be classified as an undefined term. This is what leads me to believe that 'concept by intuition' and 'undefined term' refer to the same thing." If we use the word 'define' in a formal sense, in which a definiendum is equated to a definiens, in which the term defined is mentioned in the definiendum and not used in the definiens (or used only recursively therein), then we would not say that a concept by intuition is "defined"; we would have to use a different word, such as 'explained' or 'described'. I think, however, that we use the word 'define' in both a formal and an informal sense. Most dictionary definitions are not formal. But there are instances where formal definitions can specify relations among terms that implicitly "define" their formal interpretation by specifying formal relations. Such is the case with absolute geometry, with Euclidean and non-Euclidean geometries, and with some other formal systems. The axioms of geometry specify the relations between "point", "line", "plane", "n-dimensional" objects, etc., in such a way that these terms, though not given an explicit definition, have constrained concepts by postulation (axioms or postulates). So these are specific examples of concepts by postulation that are explicitly undefined. And as the previous posts illustrate, something like the word 'red' can have both a concept by intuition - learned by point-and-name - and a concept by postulation - learned through a knowledge of physical and optical theory. We have defined concepts by postulation. We also have undefined concepts by postulation. We have "defined" (described) concepts by intuition (most dictionary definitions). We also have "undefined" concepts by intuition, such as are learned as a child by associating a word with various example objects, including "cat", a color, (and I love this one from Bambi, "flower"), etc.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, December 29, 2007 - 11:02 pm
|
The two definitions of "mutual" on the web: common: common to or shared by two or more parties; "a common friend"; "the mutual interests of management and labor"; reciprocal: concerning each of two or more persons or things, especially given or done in return; "reciprocal aid"; "reciprocal trade"; "mutual respect"; "reciprocal privileges at other clubs". It would seem that the first applies in this context; the second does not. Thomas wrote, "... our communication depends on mutual experience and, in particular, attaching the same word to the mutual experience." David wrote, "our communication depends on mutual concepts and those concepts can be of two types, concepts by intuition and concepts by postulation" ... I take 'experience' to refer to a happening within the brain - as in we "experience" pain, pleasure, etc. First, David. I don't believe we can have "mutual" concepts by intuition, because each person abstracts from imprecise language into his or her own unique experience and understanding. People can have concepts by intuition from experiencing overlapping environments - different because each one has the other in his or her respective environment, and because each person experiences in terms of his or her unique history. I would lean towards allowing "mutual" concepts by postulation, because these are determined by the external formulation in which they are expressed. However, each person still abstracts uniquely, so the "experience" will still not be "mutual". But they can continually refer back to the external formulation. Perhaps I'm taking the word 'mutual' in connection with 'experience' too strongly. Now, Thomas. I don't think we can have "mutual" experience, because each person experiences uniquely. Mutual and sharing require a category level above the object level, but each of us has unique semantic reactions and unique classification systems. We can be together in the presence of an event, but what we each take from that event is a unique personal experience. 
We can share an event, and we can share an environment that includes us both, but we cannot share "experience", as those are unique to each person. Since I view "experience" as brain responses - unique to each person - I don't see mutual as applicable to "experience". However, I do agree with a slight paraphrase of Thomas ... "attaching the same word [in a common context]." This phraseology brings us to the extensional level (words uttered) while skirting the problem of distinct (internal) experience.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Sunday, December 30, 2007 - 12:42 pm
|
As I understand it, "concept by postulation" and "concept by intuition" are supposed to form a binary classification system that covers all concepts. The rather narrow interpretation that David is advocating restricts "concept by intuition" only to those concepts that are created by pointing and naming - examples from sensory associations. Examples are "dog", "cat", "cow", "bad", "good", "hot", "cold", and many other things that babies learn while acquiring language. A similarly narrow definition of "concept by postulation", which developed during the time that axiomatization was all the rage, limits "concept by postulation" only to those that are explicitly defined with precise formal definitions, or from a system of axioms (postulates) such as forms the paradigm-case example of postulation - geometry. This leaves a big gap between the two, into which fall intensional definitions that are not precise enough to explicitly cover all cases. Since most of our common concepts are merely described, rather than precisely defined, I have included all those under the "concept by intuition" category. This includes "defining" something using metaphors, analogies, and all kinds of other soft or poetic descriptions. In each such case the listener must abstract from the heard words and "intuit", guess, form a semantic reaction, etc., as to what the description is intended to "mean". The listener forms his or her "concept" or "meaning" of any term so "defined" by using his or her "intuition". So I can divide "concepts" into: 1. Strict hard-and-fast "names" learned only by pointing. 2. Other names (and phrases) learned by providing soft metaphors, analogies, examples, etc., in intensional language. 3. Strict hard-and-fast "names", the properties of which are strictly limited by formal logical inference from a set of postulates or from a strict formal definition. 
Since class 2 involves the use of "intuition" by the listener to figure out what the word or phrase was intended to "mean", I naturally put those in the same category as 1. We might call 2 "concepts by intension", but I think "concepts by intuition" suffices. I would guess that Northrop was giving the simplest kind of example that would avoid the complications of talking about other words when he described the concept (necessarily a concept by intuition). Not having read him directly, I'm not in a position to verify my surmise. But those "middle ground" "concepts" have to be in the classification system somewhere, and they cannot be in the "postulate" area, as they are too imprecise and uncertain. So the classification system, I believe, places concepts by postulation, which derive from formal definitions or a system of axioms, in the postulate category, and all other concepts in the intuition category. Loel, I too remember some very early experience - less than one year old (by inference) - for which I have no verbal associations, just a picture. I can now describe the picture in words, and I can infer that I was riding on my Dad's shoulders while he was walking along a railroad embankment near where we lived at the time, in a beautiful fall foliage scene, when I was about six months old. (Bicameral mind theory holds that stressful and often ambivalent experiences evoke consciousness.) I simply recall seeing the canopied, colored, "tunnel"-like path along something raised, seen from a high point, with brightly colored patches above and to the sides, and with a carpet of multi-colored cover over the raised embankment ground below. But I can only apply those terms from today's perspective. The memory itself, when I re-experience it, has no such verbal components. 
I do not remember any emotional reaction either; just the picture of this brief scene, and I also don't remember a sense of "I" behind the scene (like one conscious adult would have looking at a painting). In some sense I can infer that I was little more than the visual experience itself. But it stuck and I can recall that experience. I cannot relate this to concepts, but I can retroactively apply them now to describe the scene. (me then).
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Sunday, December 30, 2007 - 02:17 pm
|
Ok, David, I've read the article by Northrop. I may have been led astray by the Wikipedia article, which states "[Northrop] divides all concepts into two kinds: intuition and postulation." However, several other sources all state that Northrop divided ALL concepts into two classes. That leaves the question somewhat ambiguous as to the dividing line, because what was described in his Korzybski lecture conveniently leaves out such concepts as are acquired by common dictionary definitions, or by reading from context in which the reader has never seen the stuff described or pictures of it. I think my three-level classification previously posted applies. Northrop makes no mention of, or attempt to cover, those "concepts" in the middle ground between "concepts by intuition", which he does limit to sensed objects, and "concepts by postulation", which he does limit to those defined by a system of axioms or postulates. In my earlier discussion, I included the middle ground, which I labeled "concepts by intension". These include most all "concepts" learned through dictionary definitions or by inference from context of use. These are what I referred to by the phrase "imprecise language". And I argued that they should also be called "concepts by intuition", because we learn them through applying our intuition. Northrop's definitions, at least in the Korzybski lecture, do not constitute a classification system; they are simply distinct names for a very limited set of "concepts" - those which are names of classes of sensible "things", such as cow, red, etc. What about the names of classifications? How about the name "concept" itself? Other sources state that Northrop's system classifies all concepts. What about it? Can we come up with any "concept" that fits neither "sensible objects" nor a system of postulates?
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Monday, December 31, 2007 - 12:10 am
|
Ben, "Concept by accident", "concept by brain lesion", etc., are proposed other names for the "middle" area; they are not example concepts that might fit into the middle area. Induction and deduction correspond, at least in my mind, to "bottom up" and "top down" approaches to analysis. The former takes examples and tries to abstract a generalization; the latter takes axioms and formal definitions and derives conclusions using valid rules of inference. The former provides the process for building theories; the latter provides the predictions for testing them. The former is extensional; the latter is intensional. Empiricism and Popper's falsification principle marry the two, so that, as the Greeks said over 2500 years ago, "every hypothesis [theory] must 'save the appearances'" (5). As I noted here, "general semantics" (personified) prefers 'formulation' to 'concept' on the basis that formulations are extensional, whereas "concepts" are viewed within the paradigm of general semantics as intensional semantic reactions unique to each person. Frege proposed that the middle-ground "sense" of a term is neither its intension nor its extension. "Senses" are not formulations, but they are not unique to each individual. There is a sense in which "the morning star" is not "the evening star" in spite of the fact that both phrases refer to the same astronomical body and are, in terms of reference, "the same". (The morning star "is" the evening star, namely the planet Venus. But I grew up seeing the "morning star" over the back yard of my parents' home in the east as I delivered papers in the morning, and the "evening star" over the railroad tracks in front of the house in the west in the evening.) Korzybski would possibly simply note that the formulations are different, and that is enough. 
How are we to know, when a speaker speaks, whether the speaker intends his or her formulations to direct the attention of the listener to the (commonly understood) referent of the formulation, to the formulation itself (when we use single quotes to distinguish mention from use), or to some "sense" of the words that is neither the referent nor the formulation itself? (Double or "scare" quotes may sometimes perform this function.)
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Monday, December 31, 2007 - 11:32 am
|
In a number of the threads on this board, a "concept" was "defined" (intensionally) as a reaction in brains. In the model of general semantics, we must place a "concept" somewhere on the structural differential. It seems that a strong consensus holds that a "concept" is most certainly not a "formulation". And I'd suspect that most of us would agree that a concept is not some putative "thing" at the event level. That leaves only neurological levels of abstraction. We contrast these notions with an idea that a "concept" is somehow something that varies not at all, or at least very little, from person to person. That would suggest that a concept can be described as a "virtual semantic object" - "object" in the sense that it is instantiated in brains, "semantic" in the sense that it holds some kind of "meaning", "virtual" in the sense that we experience it as external to ourselves and describe it with various formulations that have a significant amount of commonality or correlation with each other - in other words, we have a relatively strongly agreed-to "dictionary definition" for the term. If we both go to a virtual reality emporium, and we put on our glasses and earphones, and each of us "sees" (in the virtual-reality eye-screens) and hears a constructed representation of the other, and we both "see" a "structure" presented to us as our environment, then the "things" in that environment, which we experience as objects, can be talked about and examined, and they form the basis of what we can call a "common" "experience". While each of us experiences each object uniquely, back-and-forth communication, including pointing and naming as well as more complex descriptions, allows us some access to the other's perspective. 
If we use this picture as a metaphor for talking about a "concept", our "virtual reality glasses and earphones" vis-a-vis "concepts" become what we say to and hear from others, read in dictionaries, learn from working (mathematically) with the sub-structures we objectify, etc. In virtue of time-binding and communication with the time-binding record, as well as with live individuals, we sample the symbolic environment not altogether unlike the way a physicist takes measurements of putative "things" and "events". With this verbal (and more) "sampling" of our symbolic environment through the acts of communicating, we individually form hypotheses to account for the objects we construct through abstraction from our sampled communications. Such objects support abstraction to the verbal level, and we test both those objects and verbalizations about them through the process of more communication. In testing we get to corroborate (or disconfirm) our object and verbal abstractions (formulations) through feedback from others. As a brief aside, suppose we asked a bunch of people for formulations about a particular concept name or phrase. We would find, I expect, some statistical central tendency in the words and phrases used, as well as some variance, and this would indicate the cohesiveness, or degree of "well defined or well understood" character, of the supposed concept "identified" by the word or phrase we used in the experiment. Getting back from the aside, I hypothesize that the more coherent a putative "concept" measures, the better our ability to form a model, and the more consistent the feedback we get will be. We try variations and get corrections. Those that are not corrected we continue to use; those that get corrected we drop. With frequent communication (study) of the subject we can bring our own individual neurological instantiation of the putative "concept" into strong agreement with the central tendency of the symbolic environment. 
(Incidentally, we call this process "genetic epistemology" applied at the level of the individual brain - an "idea" that has been around since before I was working on my dissertation.) The difference here is between how we interact with the physical environment - abstracting from the physical activity of measurements - and with the symbolic environment - communicating with others and the time-binding record. For David: Induction and deduction. Induction: the process of abstracting a generalization from particulars - not truth preserving. A la Korzybski: going from event to verbal. Deduction: the process of producing a particular conclusion from a generalization using ONLY VALID rules of inference - strictly truth preserving. A la Korzybski: the strict application of mathematics to general formulations to produce other, specific, formulations. I do make the associations with "extensional" and "intensional" that you state, but not as classifications. If we take a theory and we make a prediction, that involves deduction. If we then treat that prediction as a fact, and then deny contradictory evidence, that is intensional orientation. If, however, we accept the contradiction and begin to revise the theory, that is extensional orientation. But in general, yes, the "extensional" direction points to the event, and the "intensional" direction points to high-level formulations. But finer analysis distinguishes between these dimensions. They only "align" when the resolution is low. According to my recollection, Kendig, at general semantics seminars, emphasized that we (the general semantics Institute) preferred "formulation" to "concept" because formulations are extensional. Formulations can be both elementalistic and non-elementalistic depending on how carefully you use them. 
I think the extensional character - that you can go back and re-read the words - is a much better choice than "concept" because we cannot go back and "re-read" a concept; we can only re-read the formulation in which it is expressed.
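The aside above, about measuring the "cohesiveness" of a putative concept from the formulations people give, can be sketched crudely. This is a hypothetical toy measure of my own devising, not an established metric: "cohesiveness" here is just the average pairwise word-set overlap (Jaccard similarity) among the collected formulations, and the respondents' formulations are invented for illustration.

```python
from itertools import combinations

def cohesiveness(formulations):
    """Toy measure: average pairwise Jaccard overlap of word sets.

    A value near 1 suggests a strongly agreed-to "dictionary definition"
    (strong central tendency); a value near 0 suggests the word evokes
    scattered, divergent formulations."""
    word_sets = [set(f.lower().split()) for f in formulations]
    overlaps = [len(a & b) / len(a | b)
                for a, b in combinations(word_sets, 2)]
    return sum(overlaps) / len(overlaps)

# Formulations gathered from several (imaginary) respondents:
tight = ["a domestic animal that barks",
         "a domestic animal that barks and wags its tail",
         "a barking domestic animal"]
loose = ["a reaction in brains",
         "an abstract general notion",
         "whatever a word evokes in a listener"]

print(cohesiveness(tight) > cohesiveness(loose))  # True: stronger central tendency
```

A real experiment would of course need stemming, stop-word handling, and far more respondents, but the shape of the measurement - sample formulations, quantify their agreement - is what the aside describes.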
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Monday, December 31, 2007 - 11:52 am
|
Thomas wrote, "... on Ralph's page one definition of 'concept' says ..." That's a quote from a Random House dictionary. 'Concept' isn't exactly a word with a strongly cohesive formulation behind it. Look at these.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Monday, December 31, 2007 - 01:50 pm
|
Ben, "intuition" does not have to be "right"; it often isn't, and the process occurs neurologically regardless of whether the brain is "normal" or damaged. Addressed in general... According to Northrop's lecture and the secondary literature, Northrop's classification system is supposed to be exhaustive and exclusive, and to cover all "concepts". I asked if we could come up with any specific concepts that we could successfully argue cannot be fit into one or the other of his two categories. If Northrop asserts in his book that he is defining these as a binary exhaustive classification, then we need to look more carefully at our own interpretation of the descriptions and re-interpret them so that our own understandings agree with Northrop's. Based on that assertion of binary distinction, and my own somewhat exhaustive training in mathematics, logic, and philosophy, I feel very confident about the "concept by postulation" side of the distinction, and that is why I put all the others in "concept by intuition". You have a conversation with someone, and he or she tries to explain something to you; based on his or her words you begin to form a "notion" that you express in words other than only the ones you heard. Based on a few iterations of talk and reply, you refine that notion into a "concept by intuition" - unless the formulations you heard are a formal definition in mathematical language or are given to you by a set of postulates. In that case, I surmise, you are still likely to develop a "concept by intuition", and you may use it to inform the applications of the axioms and definitions, but you will use the axioms and formal definitions, with careful application of valid rules of inference, to "re-inform" your initial "concept by intuition". 
We grew up with a "concept by intuition" for a "point", but when we learn about non-Euclidean geometry and study the relations of the non-Euclidean postulates with valid rules of inference, we get great circles that intersect at what our intuition calls "two points", while our concept by postulation dictates that these are "one point". Of course, if you don't do geometry, mathematics, or logic, the above will likely be mostly lost on you.
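For the mathematically inclined, the identification that the concept by postulation dictates can be stated explicitly. This is the standard construction of the elliptic ("no parallels") plane from the sphere, offered as background rather than as Northrop's or Korzybski's own notation:

```latex
% The sphere: S^2 = \{\, x \in \mathbb{R}^3 : \lVert x \rVert = 1 \,\}.
% An elliptic "point" is an antipodal pair, and a "line" is a great circle:
P = \{\, x,\ -x \,\}, \qquad x \in S^2 .
% Two distinct great circles are the intersections of S^2 with two
% distinct planes through the origin; those planes meet in a line
% through the origin, which pierces the sphere at an antipodal pair
% \{x, -x\} --- exactly one elliptic "point". So the axiom "two lines
% intersect at one and only one point" is preserved, even though our
% intuition sees the great circles crossing at "two points".
```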
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Monday, December 31, 2007 - 03:23 pm
|
So, Ben, you're claiming that, although Northrop gave the definition in his book and defined it as a binary classification with two and only two categories, whatever you got from it, you won't use his terminology the way he said it was supposed to be used. This sounds to me like intensional orientation - refusal to correct one's own mental picture in the face of contradictory evidence - the evidence being the fact that Northrop postulated a binary classification, and that many secondary literature sources reiterated that binary classification. Just remember: using a binary classification is not "having a two-valued orientation". I submit that the many other suggestions you are advancing are neither a classification system, because they overlap, nor a contradiction to Northrop's distinction; they merely represent the abstraction of different characteristics in ways of acquiring concepts. If you look closely at your suggestion, you are asking David to respond on the basis of knowing what some specific names refer to, and those are both, according to Northrop, concepts by intuition - regardless of whether we learned them directly or through the use of secondary time-binding maps. Moreover, you gave specific instructions for a specific act at a specific time. That, to my way of thinking, does not even represent a "concept".
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Monday, December 31, 2007 - 05:59 pm
|
Ben, Let us know what you find out when you do read Northrop's book.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Tuesday, January 1, 2008 - 02:34 am
|
Ben interjected "[Ben's note: Doesn't he mean 'a word'?]". No. He is NOT describing a distinction in kinds of words; he is describing a distinction in kinds of "concepts", although he has not provided, in this context, a "definition" of 'concept'. And your quote included "...for the two foregoing types of concepts...", implying that he previously described them. I've ordered his book, so I'll check it out. Then I will be in a position to corroborate the several other secondary sources that claim that Northrop devised his classification system to cover all "concepts".
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Tuesday, January 1, 2008 - 12:28 pm
|
It's clear to me that the notion of a "concept" is producing as much variation in this discussion as the definitions I cited here. Let us try to step back and look at a bigger picture. A central area of the structural differential deals with the neurological processes that instantiate a person's "meaning", "understanding", "semantic reaction", etc. Although only a single map-element is shown - the object-level circle depicting entering and leaving processes terminating on characteristics - this covers virtually all human cognition, from the hearing of sounds to the uttering of sounds, from the seeing of marks on surfaces to the making of marks on surfaces, etc.; in general, the process of abstracting inputs to producing outputs. Like Pavlov's dog, which salivated at the sound of a bell because it experienced a history of the bell preceding the food stimulus, we learn to associate sounds, marks, etc., with other aspects of our processing of our inputs. But there's more to it. We associate not only what we abstracted from our external inputs; we abstract from what we were doing, and from what the results were. And as we evolved we developed names for more than what we could directly experience; we developed names for higher-level abstractions - abstractions which included categories, sequences of actions, and further abstractions - and when we did not have names already present in the culture, we chose phrases. We became even more creative in using metaphors to convey an abstraction we made. Since the abstractions are in the neurological process, we have a "disconnect" of the form "the map is not the territory" when trying to communicate our abstraction to another, most often for the purpose of directing their behavior. Behind virtually all communication are motives to achieve ends. We communicate only to affect the action, behavior, understanding, etc., of another person. 
In the early days, it was "Give!", or "Mine!" [I'm more powerful than you], [I want to be boss.], etc., or "Give?", or "Mine?" [You're boss, feed me and you stay boss, feed me and we can reproduce, etc.]. These cover the simple imperatives and interrogatives. A natural response to the interrogative in the affirmative, other than a simple assent, would be "Give.", or "Yours.", and begins the development of the declarative assertion. [I've argued this before.] We have developed the assertion or declarative form quite a bit since then, don't you think? In the beginning, simple "concepts" by intuition arose as simply as class names for food items and other common "things" in the environment, with an imperative from a teacher followed by a declarative from a student, established under the pattern of dominance in the social order, now generalized and differentiated into "roles" as opposed to being strictly applied to the person's rank. With this separation of dominance simpliciter into dominance limited to context comes the notion of "authority" - as in one who knows. But with individual creativity and striving for dominance in the context of "strength" that changes, the environment for dominance and authority becomes dynamic, and the sounds that had become paired with "things" and actions - enforced through dominance - begin to vary, divide, and evolve. With the advent of writing and the extension of time-binding beyond the original simple oral traditions, two things happen in particular. Records of associations - use - get preserved, slowing down some changes and keeping old usages current, while the addition of new usages speeds up the evolution. When this happens, words cease to have univocal usages, and ambiguity creeps into the relation between "names" and their commonly accepted referents. 
This includes some variation due to the process of learning from the (social-symbolic) environment having many more "authorities" - often with varying "commands" [old imperative] as to how to use the words. A result of this is that the relation between words and their referents, including both "things" and actions, becomes many to many. "The word is not the thing." An immediate consequence is that different words can refer to the same "thing" or action, and that the same words can refer to different "things" or actions. A result of this is having devices in our language to indicate that the words we are using to express something are not unique to that which we wish to express. We have words like 'notion', 'concept', 'thought', 'idea', and many more, all functioning to differentiate between the formulation and the referent or action. We want our listener to understand the referent we have "in mind", not the words we choose to express it. But if we have multiple ways of referring to a "notion", "concept", "thought", etc., we might expect that some of these ways are more commonly used than others, so there might be a "preferred" name or phrase, or at least one that, due to frequency of usage, may evoke the desired response in the listener with a higher probability of success - the kind of thing that our [dominant role] English teacher might label a "word choice error" on our composition papers. Assuming that there is a collection of "things" and actions that we want to talk about, and extending the coverage to more abstract relations, while remembering that these objects (the "things", actions, and abstractions, now at the level of individual brains) vary from person to person, but holding onto the idea that the purpose of communication is to achieve a desired effect, and that an external observer may describe and classify those effects differently, we can attribute to this set of referents an "independence" relatively invariant with respect to different individuals. 
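To make the many-to-many relation concrete, here is a minimal sketch in Python. The particular words and referents are invented for illustration; the point is only the structure: one word can map to several referents, and one referent can be reached by several words.

```python
# Hypothetical many-to-many relation between words and referents.
# "bank" has two referents; "river edge" has two words.
word_to_referents = {
    "bank": {"river edge", "financial institution"},
    "shore": {"river edge"},
}

# Invert the mapping to see which words share a referent.
referent_to_words = {}
for word, referents in word_to_referents.items():
    for referent in referents:
        referent_to_words.setdefault(referent, set()).add(word)

print(sorted(referent_to_words["river edge"]))  # two words, one referent
print(len(word_to_referents["bank"]))           # one word, two referents
```

Neither direction of the mapping is a function; that is exactly the ambiguity the post describes.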
I call this a virtual space. To reify this space, what shall we call the occupants? We call them notions, concepts, etc., and when any one person experiences one, we call it an idea, thought, etc. When you "think" of a "concept", you necessarily invoke your own instantiation of a virtual object which has a preferred or most commonly used verbal formulation, but which may be expressed by a variety of formulations, all of which can be used to express, more or less consistently with common usage, other such "concepts". There are many examples of formulations that describe concepts with greater or lesser degrees of ambiguity, allowing more or less freedom of "thought" as to what is "meant" by the speaker or writer - as noted above. All of the terms that refer to internal neurological processes as experienced by individuals suffer from the same lack of disambiguation capabilities - none of them can be held up and shown like the simple names of external "things" or of object classes intuited by a sequence of shown examples. We cannot hold up a "thought" and say "see what it looks like", so we paint a word picture using words that are ambiguous. Is it any wonder that we appear to have virtually no consistent usage for the word 'concept'? Now, if we talk about a "formulation", like Kendig said we should, then we have something to hold up and show as the relatively extensional "thing" under discussion, even though the individual object for this formulation varies from person to person. Northrop's classification is about "concepts", and since we are admonished not to talk about "concepts", but about "formulations", Northrop's entire classification, and all this verbiage about it, just ain't applied general semantics (according to Kendig). If we choose the best formulation to represent a particular "concept" and discuss that, are we still talking about a "concept"? I doubt it. To add to the general semantics dictionary... "Concept": Noun. 
- indicates a fictitious Cartesian entity in non-physical space but common to different "minds". (Not a topic of general semantics, according to Kendig. Best replacement - "formulation")
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Tuesday, January 1, 2008 - 07:09 pm
|
Hi David, that's a neat perspective, but I think many circle objects derived from the label side are also "concepts by intuition". Only some circle objects derived from the label side would qualify as "concepts by postulation", because not all label constructs are systems of postulates or formal definitions. All the verbal metaphors, most of the dictionary definitions, and virtually all poetically expressed notions can only generate concepts by intuition.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Wednesday, January 2, 2008 - 01:00 am
|
David, The key above is "a three-dimensional spherical object", the phraseology of which comes from postulated mathematics. If you leave this out of his sentence, the fellow is only connecting to his "concept by intuition", but if he explicitly connects it to "a three-dimensional spherical object", then the epistemic correlation occurs. For those with little mathematics, or who passed math and forgot it, such a connection does not occur. In fact, I surmise that it does not occur for most of us most of the time. Epistemic correlations are not ubiquitously happening; they happen when the person "puts two and two together" and consciously relates his perceptual experience to his postulational theory. I'll look further when I get the book.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Thursday, January 3, 2008 - 12:03 am
|
David wrote How does this sound?:
concept by intuition = our "inductively given" perception of the territory
concept by postulation = our "deductively formulated" map of the territory
epistemic correlation = the degree of similarity between our deductively formulated map of the territory and our inductively given perception of the territory
This leaves out too much for me.
concept by intuition = our "inductively given" perception of the territory - I would opt for "abstracted" instead of "given". Perception is too pre-verbal for me, and too "event-singular". Territory needs to be pluralized in some way, as "concept", to me, means a relatively invariant cognitive structure abstracted from a number of examples.
concept by postulation = our "deductively formulated" map of the territory - This is also not good, because "formulated" too strongly biases "concept" in the verbal direction. A formulation is not a concept, although there may exist a formulation that may be judged the best verbal representation of any particular concept. There is still a problem with trying to define classification among "concepts" when we do not have a formal or postulated "definition" for "concept" simpliciter. Until you, or someone, says in explicit verbal terms what a concept is "postulated" "to be", an exercise in classification becomes mental masturbation. If you simply define "concept" as the union of the postulationally defined "concept by intuition" and the postulationally defined "concept by postulation", then it becomes an empirical exercise to examine putative "concepts" (which we cannot do, so we can only examine the formulations of concepts) to evaluate whether the two definitions become an exclusive and exhaustive categorization scheme.
epistemic correlation = the degree of similarity between our deductively formulated map of the territory and our inductively given perception of the territory - I'll wait until I read Northrop's explanation of his use of the phrase. 
But I don't like 'similarity' here because we have no objective measurement device for differences between semantic reactions in the same brain (other than subjective-objective rating scales, such as are used in individual differences multi-dimensional scaling [INDSCAL] - but that tool is used on data collected by many people rating an external event on many [redundant] word-pair described presumed scales). From "define:correlation":
A measure of linear association between two (ordered) lists, where two variables can be strongly correlated without having any causal relationship, and two variables can have a causal relationship and yet be uncorrelated. arkedu.state.ar.us/curriculum/word_files/statistics.doc
A statistical relationship between two variables such that high scores on one factor tend to go with high scores on the other factor (positive correlation) or that high scores on one factor go with low scores on the other factor (negative correlation). www.st-andrews.ac.uk/psychology/teaching/glossary.shtml
A measure of the strength of the relationship between two variables. Often, a measure of the strength of the linear relationship between two variables, in which case the measure is given by the correlation coefficient, r. www.math.duke.edu/education/modules2/materials/test/test/glossary.html
Epistemic, however, basically means having to do with how we know what we know. So "epistemic correlation" would allegedly be a measure of how well two different ways of "knowing" go together. As such, it would be a measure of the "connectedness" between many pairs of measurements of different "concepts" on the two scales "intuition" and "postulates". My book has been reported shipped, so I should get it in a few days.
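For readers unfamiliar with the statistical sense of "correlation" quoted above, here is a small sketch of the correlation coefficient r (Pearson's) computed from first principles. The data values are invented purely for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance numerator and the two standard-deviation-like denominators.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly aligned paired ratings give r = 1; reversed ratings give r = -1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

Note that r measures only linear association, which is exactly the caveat in the first quoted definition: strong correlation does not imply causation, and a causal relationship can still show r near zero.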
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Friday, January 4, 2008 - 10:46 am
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Friday, January 4, 2008 - 06:59 pm
|
Thomas, You are not getting it. The only way you can discover any relation between any empirical quantities (event level) is by processing your own abstractions from them. This means that you are "seeing" relationships among the objects within your nervous system after abstraction. Thomas also wrote This is because the mathematical structure may be applied to the empirical structure as equivalent. The "empirical structure" - what is going on - can never be "equivalent" to the mathematical structure - an abstract symbolic formulation. We use the mathematical structure as a theory to make predictions. Then we observe events, looking for ones whose interpretation does not match our understanding of the predictions. If the ... observation does not match the ... prediction, the theory is "falsified"; if the ... observation does match the ... prediction, the theory is merely corroborated (not yet contradicted); it is not "confirmed". These are absolutely NOT "equivalent". They would be "equivalent" only if we were GUARANTEED that the prediction would come to pass, such as would be the case if we were able to know the absolute Truth of the matter. That is comparable to saying that time1 = time2 = ... = timeN entails "= timeN+1". But that is contrary to general semantics teachings - namely that timeN+1 is not time1, time2, ..., timeN. What we discover is relations among our interpretations of our abstractions from events. To say we discover relations among events says that we can have knowledge of what is going on at the event level; that would take us back to the pre-general-semantics days without consciousness of abstracting, when people (as most still do) identify our object level responses with the event level.
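The falsified/corroborated asymmetry above can be sketched in a few lines of Python. The toy theory, observations, and tolerance are all invented for illustration; the point is that a check can return "falsified" or "corroborated (so far)", but never "confirmed".

```python
def check_theory(predict, observations, tolerance=0.1):
    """Compare a theory's predictions to observed values.

    A single clear mismatch falsifies the theory; agreement on every
    observation so far merely corroborates it, conditional on future checks.
    """
    for inputs, observed in observations:
        if abs(predict(inputs) - observed) > tolerance:
            return "falsified"
    return "corroborated (so far)"  # never "confirmed"

# Toy theory: the output is double the input.
theory = lambda x: 2 * x
print(check_theory(theory, [(1, 2.0), (3, 6.05)]))  # corroborated (so far)
print(check_theory(theory, [(1, 2.0), (3, 9.0)]))   # falsified
```

No number of matching observations changes the return value into anything stronger than "corroborated (so far)", which mirrors the point that timeN+1 is not guaranteed by time1 through timeN.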
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Tuesday, January 8, 2008 - 03:20 pm
|
Northrop: The Logic of the Sciences and the Humanities To determine the relation between diverse things it is necessary to express each in terms of a common denominator. One would not attempt to relate three-fifths to four-sevenths in mathematics without first reducing the two fractions to thirty-fifths. ... the trustworthy student of comparative philosophy must be more than a mere linguist or possess more than trustworthy translations by linguists; in addition he must have a professional mastery of the problems, methods, and theories of philosophy. ... A little reading of philosophical treatises soon discloses, however, that [ordinary words] are not used with their common-sense meanings. Ordinary words are given technical meanings. A theory of any kind ... is a body of propositions, and a body of propositions is a set of concepts. Concepts fall into different types according to the different sources of their meaning. Consequently the designation of the different possible major types of concepts should provide a technical terminology with the generality sufficient to include within itself as a special case any possible scientific or philosophical theory. A concept is a term to which a meaning has been assigned. There are two major ways in which this assignment can be made. The otherwise meaningless term may be associated denotatively with some datum or set of data which is given immediately, or it may have its meaning proposed for it theoretically by the postulates of the deductive theory in which it occurs. We shall call these two basic types concepts by intuition and concepts by postulation respectively. A concept by intuition is one which denotes, and the complete meaning of which is given by, something which is immediately apprehended. ["apprehended" is not limited only to "sensed".] A concept by postulation is one the meaning of which in whole or part is designated by the postulates of the deductive theory in which it occurs [emphasis added]. 
A deductive theory is a set of propositions which fall into two groups called postulates and theorems by means of the logical relation of formal implication. Given the postulates, the theorems can be proved. In considering any theory, proof must not be confused with truth. Proof is a relation between propositions, i.e., between those which are postulates and those which are theorems; whereas truth is a relation between propositions and immediately apprehended fact. The former is a purely formal relation which it is the business of pure mathematics and formal logic to define; the latter is an empirical relation which it is the task of empirical science and empirical logic to designate. More about concepts by Intuition: IV Logical Concepts by Intuition = concepts designating factors, the content of which is given through the senses or by mere abstraction from the totality of sense awareness, and whose logical universality and immortality are given by postulation. [emphasis added] For Northrop, the distinction between concepts by postulation - those defined by a theory consisting of axioms and any theorems that can be proved from the axioms - and concepts by intuition - comprised of that which can be immediately apprehended, abstracted from all that we apprehend, as well as with added "postulated" properties - forms a comprehensive binary classification system. This is in agreement with how I previously interpreted his classification system. Page 99 goes on to describe several classifications of concepts by intuition derived by various abstraction methods including sensation, introspection, aesthetics, differentiation, and more. Concepts by postulation, however, are strictly limited to axioms or postulates as part of a system and anything that can be proved with formal deductive methods from the assumed postulates. 
If we provide a strict formal definition as part of a theoretical system - even if that system is intended as a proposed model of "reality" (physics, for example) - its "properties" or "characteristics" are only and exactly those deducible by strict (two-valued) logic from those postulates. Our various concepts by intuition, including abstract ones with added postulated properties, must be "connected" in some way to the concepts by postulation we create for the purpose of modeling that which we apprehend by multiple kinds of intuition. I have used the word "inform" in this regard. Tarski's concept by postulation for "truth" develops into a model theoretic system, which I say "informs" our concept by intuition known as the correspondence theory of truth. Northrop makes a stronger connection, which we would call "identification", when he defines this connectedness as "epistemic correlation". "Thus an epistemic correlation joins a thing known in the one way to what is in some sense that same thing known in a different way." [emphasis added]. We have this capability built into our nervous system - to activate a greater part of a network when a part of the network is stimulated. Naturally, the person who first creates a postulate and theory system does it by bringing formal methods to bear on his or her existing concept by intuition. It remains for others, when studying the proposed postulate and theory system, to evaluate it, connect it neurologically, and ultimately to say "Yes, I see that." or "No, I just don't think that's right." Eventually people with the "right experience" corroborate or disconfirm the theory: with binary two-valued logic if it is on the formal deductive side, or with the application of Popper's falsification principle, by showing failed predictions, if it is on the empirical (intuitive) side. Proving a conclusion consistent with the axioms and other theorems is only on the logical side. 
Once a theory is shown consistent, then it has to be used to make predictions that can theoretically falsify the theory. Many a consistent theory fails to predict events correctly. We want the names of objects in our theory to refer to "things" in what is going on, because the "correspondence" theory is so simple to understand and use, and for most practical situations it works very well. When Tarski devised a formal system to "inform" the correspondence theory of truth, he specified the terms in the language, the objects in its application, and rigidly specified the relation between his terms and his objects. We do not have that luxury, because we only guess at what is in the "world". We "see" lots of objects, and we putatively project these as "known" "things". But we are continually proven wrong about many of our projections. Our models have gone through many, sometimes major, paradigm shifts. Now, we are wary; we no longer say what "is"; we build maps and navigate, revising our maps as we discover error. Notice that I said nothing about "territory" in this description. I only wrote about what we know - our maps. We build maps, we navigate using the same maps for a long time, and we are flabbergasted and totally surprised when we encounter error in our maps - but not as much when we understand general semantics.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Tuesday, January 8, 2008 - 07:28 pm
|
David, Both of your points are already covered in the previous quotes from Northrop, which I repeat here:
A deductive theory is a set of propositions which fall into two groups called postulates and theorems by means of the logical relation of formal implication. Given the postulates, the theorems can be proved. (p. 83) In considering any theory, proof must not be confused with truth. Proof is a relation between propositions, i.e., between those which are postulates and those which are theorems; whereas truth is a relation between propositions and immediately apprehended fact. The former is a purely formal relation which it is the business of pure mathematics and formal logic to define; the latter is an empirical relation which it is the task of empirical science and empirical logic to designate. (p. 83) IV Logical Concepts by Intuition = concepts designating factors, the content of which is given through the senses or by mere abstraction from the totality of sense awareness, and whose logical universality and immortality are given by postulation. (p. 95) In this paragraph he gives two specific examples: (a) Monistic, e.g., the "Unmoved Mover" in Aristotle's metaphysics; (b) Pluralistic, e.g., Whitehead's "eternal objects," Santayana's "essences," or Aristotle's "Ideals." Contrary to your interpretation that concepts by intuition are ONLY that which we can "apprehend directly", which you assert are sensed, these are not "sensed" concepts, though they are "apprehended" with some immediacy. They are so because they are direct abstractions from "the totality of our sensed awareness". One species of Concept by postulation, Logical Concepts by Intuition ... the contents of which are given through the senses or by mere abstraction from the totality of our sense awareness. These are concepts by postulation "merely so far as their immortality is concerned and are concepts by intuition with respect to their content" (p. 95). Concepts by intuition include (content-wise) abstractions from the totality of sense awareness. The names of some, from page 99: 
The concept of the Differentiated Aesthetic Continuum. The concept of the indeterminate or undifferentiated aesthetic continuum. The concepts of Differentiation = concepts by inspection = atomic concepts by inspection = the specific inspected qualities or differentials considered apart from the continuum. Concepts by sensation - given through the senses. Concepts by introspection - given introspectively. Sounds pretty abstract to me...
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Wednesday, January 9, 2008 - 01:02 am
|
David wrote Now that you've had a chance to read some of Northrop, do you still contend that a concept by intuition derives from the use of "imprecise language"? Certainly. Axioms and postulates contrast with "imprecise" use of language. Inductive (not the mathematical kind) use of description allowing a person to infer from examples what is meant, that is to say, abstracting from direct experience as well as from "indirect" experience (taking the words of others without direct experience), all require the exercise of "intuition" to surmise. None of these are precise like axioms and postulates or proving theorems from axioms and postulates. When you hear an unfamiliar word in context, and you guesstimate its meaning, you have intuited a concept for the term. When you look up an unfamiliar word in a dictionary of common usage, you are "intuiting" a concept for the term. These are NOT concepts by postulation, unless you are using a mathematical "dictionary" of axioms or you are reading the proof of a theorem in the context of axioms (postulates). You may be simultaneously forming a concept by intuition in the process of reading a postulate or axiom, but you will be using the concept by postulation if you are strictly deducing its consequences from its formulation. And yes, Northrop does relate a concept to a term, which I alluded to earlier when I suggested that there are "best word choices" that may be most commonly agreed to as the paradigm case example for naming or describing a concept (simpliciter). We are, however, discussing this following several different threads in which the "meaning is in people" and "concepts" are "responses in brains" paradigms are operative. 
Moreover, the separation of concepts as objects (internal to brains) from the language expressing them (maps) falls under the three map rules - the map is not the territory, the map covers not all the territory, and the map reflects the map maker - so we have different formulations to express such a hypothesized "concept" as well as the same formulation being used to express different concepts - a many to many relation. And, we have reiterated the prior institute policy of eschewing the use of the word "concept" in favor of "formulation". That would be more in agreement with Northrop. So, which is it? Do you want to return to the institute's former? policy of eschewing the use of the word 'concept' and drop any discussion of "concepts by xxx" as "irrelevant" to general semantics? - Do so by going back to Northrop's claim that a concept is a term with an assigned meaning? (Something not so slippery as being individual to persons? - taking meaning of this kind out of the person? Perhaps going back to Frege's senses of terms, in which the morning star is not the evening star?) If a "concept" is exactly and precisely stated by the use of unambiguous formulations consisting of axioms and formal definitions, and moreover is part of a comprehensive theory, then we should call it a concept by postulation, or a "formulation" by postulation. But if it is the result of human abstraction, varies with the individual, and can be expressed in many forms, then we should call it a semantic reaction by abstraction, intuition, or (non-mathematical) "induction". The former starts with language; the latter starts with experience. Where the two meet, "epistemic correlation", is where somebody puts two and two together - again a personal response that may be expressed in communication in different ways that various individuals may assent to or may deny. But I would not say that they are "one thing" known in different ways, but two things from which we may abstract an evaluation of similarity.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Wednesday, January 9, 2008 - 12:25 pm
|
David wrote In my mind, these qualify as examples where meaning is deduced, or formulated deductively. In other words, the person has formulated a "theory" about meaning. Perhaps in your mind, but it does not satisfy the criteria of formal deduction using valid rules of inference starting from axioms. You appear to be interpreting the word 'deduction' in a decidedly non-technical sense. Neurologically, a pattern is stimulating some "crude" associations and returning some abstractions from experience. This is "abstracting" or (non-mathematical) "induction". It is NOT deduction in the technical sense that Northrop carefully lays out. Deduction, in the technical sense, involves "pure syntax", and a computer can perform it using only procedures that implement the formal rules of inference. This is an outgrowth of mathematical theorem proving programs that developed after Northrop. Northrop had no experience with this technology (invented later - the first major language, FORTRAN, in 1957), but he did know mathematical proof techniques, and that is precisely what he means by 'deduction'. Looking up words in a dictionary of common usage is NOT what Northrop means by deduction. I believe that his taxonomy as I re-described or re-formulated it, "formulations" by postulation and "semantic reactions by intuition", closely parallels Northrop's concepts by postulation and concepts by intuition by updating the expression of the distinction to conform to the paradigm shift represented by general semantics, in which we extensionally discuss formulations while recognizing a second level of individual variability through semantic reactions. Individuals must still form semantic reactions to postulates, and they must relate by abstraction and comparison that semantic reaction to semantic reactions to other experiences and formulations. 
In all cases we are "understanding" any formulations - whether postulates and theorems or not - and other experiences in terms of individual semantic reactions. By using 'formulations' and 'semantic reactions' we simply don't need the word 'concept' (and the Institute officially eschewed its use).
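Deduction as "pure syntax" that a computer can perform is easy to illustrate. The following sketch (with invented propositional tokens) mechanically applies modus ponens - from P and "P implies Q", derive Q - with no reference whatsoever to what the symbols "mean":

```python
def forward_chain(axioms, rules):
    """Derive all theorems reachable from the axioms by modus ponens.

    `axioms` is a set of proposition tokens; `rules` is a list of
    (premise, conclusion) pairs representing "premise implies conclusion".
    The procedure is purely syntactic: it only matches tokens.
    """
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

axioms = {"A"}
rules = [("A", "B"), ("B", "C"), ("D", "E")]  # "D implies E" never fires
print(sorted(forward_chain(axioms, rules)))   # ['A', 'B', 'C']
```

Nothing here depends on any "intuition" about what A, B, or C denote; that is the sense in which proof is a relation between propositions, not between propositions and apprehended fact.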
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Wednesday, January 9, 2008 - 09:50 pm
|
"Is." "Is not." "Is!" "Is not!" "You cannot tell someone something they are not prepared to hear.", said the Sufi. I got someone invited to deliver an Alfred Korzybski Memorial Lecture. We've seen ones who were disappointing with regard to general semantics. You cannot give "credibility" to someone for "knowing" or "understanding" general semantics on the strength of their having been invited to present at the AKML. I would say especially when the dominant theme of his talk involves a word (concept) that had been officially eschewed by Kending as inappropriate for use within general semantics. At least one of you seems to have assumed that "it must be a good example" rather than an experiment to see if it could salvage a term already decreed "inappropriate". Judging by the lack of agreement here, ... A parting flame from Page 192. The scientist can define these unobserved scientific objects in any way that he pleases providing he specifies their properties and behavior unambiguously in the postulates of his theory so that rigorously logicial deductions can be made therefrom and providing he does not regard his theory as true until these deductions stated in terms of concepts by postulation are checked experimentally or empirically by appeal to directly observable fact, that is, to factors denoted by concepts by intuition. [emphasis added] We "check" (NOT "confirm") a theory by comparing the prediction statements to the observed events. Favorable results of such checks just keep the theory conditional on future observations. Bear in your consciousness of abstraction that Northrop wrote before Thomas Kuhn proposed the model of a paradigm shift in the philosophy of science, and before Popper developed his theory of science after Kuhn. Northrop is written in language that predates two major paradigm shifts. Korzybski, it can be argued, anticipated Popper's approach with his structural differential. 
But his work too predated both the Kuhn and Popper revolutions, so we may well be "projecting" the modern paradigm onto Korzybski by failing to apply his terminology in ways consistent with the approach to science prior to Kuhn and Popper. This may be evident in the claim that our most abstract scientific theories are represented in the event level. That would be consistent with pre-Kuhn and pre-Popper approaches. That Northrop is willing to allow a scientist to call his theory "true" if it has been checked experimentally or empirically by appealing to observable fact reflects a pre-Popperian paradigm, before we described the relation as one of conditional modeling. In today's paradigm we would never call a physical theory "true"; we would only call it conditionally not yet disconfirmed.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Thursday, January 10, 2008 - 07:59 am
|
http://xenodochy.org/ex/quotes/santayana.html http://xenodochy.org/ex/quotes/beliefs.html http://xenodochy.org/ex/thirteen.html
|
|