Author: Ralph E. Kenyon, Jr. (diogenes)
Friday, January 4, 2008 - 10:46 pm
|
Thomas, I have no difficulty with the notion of an undefined term. The question you might consider is how would you characterize "undefined"? The character of undefined terms arises along with the character of "concept by postulation". Both come out of the non-Euclidean geometry history. A system of axioms (or postulates) specifies relations among terms that each have no formal definition within the axiomatic system. Classic cases are the use of the terms 'point', 'line', 'plane', etc., in absolute geometry, Euclidean geometry, and the various non-Euclidean geometries. As the axioms vary from system to system, the theorems one can form within each system differ, and the resulting "properties" of the "referents" of the undefined terms differ from axiom system to axiom system. Moreover, my question, which you appear to have taken literally, was a rhetorical one. My argument "against" similarity of structure has to do with proponents confusing levels of abstraction, or identifying across levels of abstraction, by failing to take abstraction into consideration.
- A judgement of similarity is an abstraction response to comparing two objects.
- The objects are judged by a process of comparison.
- Objects are differentiated into parts - subordinate objects.
- Objects are compared recursively by comparing parts - all the way down until the "smallest" parts cannot be differentiated further (see the sketch after this list).
- Comparison can take place only between objects of the same type in a common medium - such as neurological processes.
- Comparisons are limited to a nominal scale or an ordinal scale.
- Comparisons of similarity are limited to nominal scale comparisons.
- A nominal scale comparison is based on finite un-ordered categories.
- Objects are the result of abstraction.
- Abstraction from different types of "events" or "things" into a common medium involves different transformations and conversions.
- A neurological representation is a "map" with a different type of structure than an "event".
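A minimal sketch of the recursive, nominal-scale comparison described in this list. The names and the nested-tuple representation of an "object" are hypothetical, invented only for illustration:

```python
# A toy model: an "object" is either a bare nominal category label (a
# "smallest" part that cannot be differentiated further) or a tuple of
# (label, [subordinate parts]).

def differentiate(obj):
    """Split an object into its subordinate parts; leaves have none."""
    return obj[1] if isinstance(obj, tuple) else []

def category(obj):
    """The nominal (unordered) category of a smallest part."""
    return obj

def similar(o1, o2):
    """Evaluate similarity by recursive, part-by-part nominal comparison."""
    p1, p2 = differentiate(o1), differentiate(o2)
    if not p1 and not p2:                      # smallest parts reached
        return category(o1) == category(o2)    # nominal scale: same label?
    if len(p1) != len(p2):                     # part structures differ
        return False
    return all(similar(a, b) for a, b in zip(p1, p2))

# Two abstracted objects with the same part structure and leaf categories:
o1 = ("fruit", ["red", "round", ("stem", ["green"])])
o2 = ("fruit", ["red", "round", ("stem", ["green"])])
print(similar(o1, o2))   # True: evaluated as similar
```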
When we look at some "thing", abstraction takes place in which we presume photons emanating from some electrons as they change state strike our retina and induce neurological events instantiated in electro-chemical cascade reactions and other electro-chemical changes in our brains. The changes in our brain are not the electrons emitting photons that theoretically preceded and eventually "caused", through a domino effect including the endpoints of other chains of dominos, our experience of seeing. We know from the study of color perception that "color" experiences exist in brains, and they do not correspond to any single property or characteristic in the event level. A color is simply not a particular frequency in the electromagnetic spectrum, although it is commonly misunderstood to be so. (See Color Perception.) I cite color perception just as a reminder that the objects we abstract are not whatever may have given rise to them, that we may experience the same object abstracted from different sources, that the object (map) is not the putative "thing" (territory) from whence it was abstracted. We must not forget this abstraction process when we are deciding that different putative "things" "are" "similar". (Sorry, Ben, but each of these words deserves "scare" quotes.)

I allow "similarity" only within the nervous system as a neurological abstraction process evaluation. But I do not allow that evaluation to be projected backwards onto what is going on without the continued consciousness of the abstraction process by which it was accomplished. In math formulations:

O1 ~ O2 - Object1 is evaluated as similar to Object2. ("~" stands for "is similar to" as introduced by Leibniz, but we can take it as "is evaluated as similar to".)

But consciousness of abstraction yields:

O1 = A1("X") or O1 <- A1("X") - Object1 is abstracted from putative "X", and
O2 = A2("Y") or O2 <- A2("Y") - Object2 is abstracted from putative "Y".

We have A1("X") = O1 ~ O2 = A2("Y"), so we can conclude A1("X") ~ A2("Y"). But this evaluation, A1("X") ~ A2("Y"), does not allow us to conclude that X ~ Y, because we do not know that A1 = A2. We do not know that the abstraction processes for the different objects are "the same" or "identical", and that is what we would have to know. Moreover, a function in general can be many-to-one, so even if the abstraction process were "the same" for both objects, we would still not know that the objects were generated from the same point in the domain. For that to be the case, the abstraction function would have to be one-to-one, but we already know that is NOT the case, as the color vision example illustrates.
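A toy illustration, with entirely made-up categories, of why O1 ~ O2 does not license X ~ Y: when the abstraction is many-to-one, similar objects can arise from quite different putative sources, just as metameric light mixtures yield the "same" color experience:

```python
# A many-to-one abstraction: it collapses different event-level inputs
# into the same object-level category (as metamers do in color vision).
def A(x):
    spectra_seen_as_red = {"monochromatic 660 nm", "mixture of 610 + 700 nm"}
    return "red object" if x in spectra_seen_as_red else "other object"

X = "monochromatic 660 nm"        # one putative territory
Y = "mixture of 610 + 700 nm"     # a quite different putative territory

O1, O2 = A(X), A(Y)               # objects abstracted in the nervous system
print(O1 == O2)                   # True:  O1 ~ O2 at the object level
print(X == Y)                     # False: the territories are not similar
```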
And we are also not entitled to conclude that A1("X") ~ X. They are at different levels of abstraction, they are in different media, and they are of different types. The one exception to this is exemplified when X is an object within the nervous system and A1(X) is an abstraction to another object within the same nervous system. In this case they are of the same type within the same medium (in the same brain). This is the only type of situation where different levels of abstraction can be directly compared. A computer CPU having rules written in English that allow deriving results in English could compare the English words prior to the application of a rule to the words after the application of the rule (an abstraction); again, this is the same type in the same medium (the same computer memory store). [This was done with the COBOL language, which grew out of Grace Hopper's work and began the great computer revolution for big business.] We do it with English at verbal levels. [In the case of computers, all properly encoded into very well understood machine-readable codes; in the case of humans, all "encoded" into very poorly understood neurological processes.]

Because we perceive one object as similar to another does not allow us to conclude that the "things" from which the objects are abstracted are similar, nor does it allow us to conclude that the objects are themselves similar to the "things" from which they were abstracted. Any apparent similarities in these areas are an artifact of the abstraction process resulting from identification across levels of abstraction and projection. We may, however, evaluate similarities between two different levels of abstraction, provided they are both within the same nervous system - but we must remember that these are already different from the source by virtue of the abstraction paths.

It is somewhat simplistically argued by some so-called "general semanticists" that one must have undefined terms (in order to avoid circularity) because we will run out of words to use in the definition. This is an answer for those who do not think in terms of levels. We have words "defined" by pointing - categorization words too - concepts by intuition, which can be "defined" even inductively without the use of other words. Other words can be "inter-defined" in mutual sets. A simple analogy would be a system of N linear equations with N unknowns (see the numerical sketch at the end of this post). They may produce a unique value for each variable, or they may be "redundant" in such a way as to leave some variables defined ambiguously in terms of others - thus creating a relation among some of them that leaves them ambiguous until somebody sets the value of one (or more). Such an inter-defined system was offered for 'structure', 'order', and 'relation'.

Another approach, and more to the point, comes from characterizing mathematics as the science of contentless relations. The terms (variables) are "undefined" until a value is given, usually to apply the particular formula or system. In teaching and tutoring mathematics, I'm amazed at how some people just cannot seem to grasp the idea or notion of a variable. They have to have specific numbers to do things, and then they do them poorly. I know how to "define" all these things in contentless ways that leaves them all "undefined".
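As promised above, a numerical sketch of the linear-equations analogy (the coefficients here are arbitrary): a determinate system pins every variable down, while a redundant one fixes only a relation among them, leaving the individual values "undefined" until someone sets one:

```python
import numpy as np

# Determinate: x + y = 3, x - y = 1  ->  every variable gets a unique value.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([3.0, 1.0])
print(np.linalg.solve(A, b))          # [2. 1.]

# Redundant: x + y = 3, 2x + 2y = 6  ->  rank 1 for 2 unknowns; only the
# relation x + y = 3 is fixed, so x and y stay ambiguous until one is set.
A2 = np.array([[1.0, 1.0],
               [2.0, 2.0]])
print(np.linalg.matrix_rank(A2))      # 1
```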
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Friday, January 4, 2008 - 11:13 pm
|
Thomas writes, "If I make measurements and establish relations between them, like V=IR, you are saying the relationship between these quantities exist only in my nervous system? Then why do other scientists that investigate the same phenomenon find a similar relation?" The simple answer is "monkey-see, monkey-do" with the advantage of time-binding. Each is "trained" to follow directions and record marks called numbers.

You must distinguish between "t" and M("t"), where "t" is the putative territory which we hypothesize to "exist" in some way outside of our nervous system. M("t"), or just plain m, is a nervous system object response to what is going on - a mapping that is many-to-one and leaves stuff out. Our "object" m is a brain response.

When you apply a voltmeter and measure a voltage in a circuit at two places, and measure the current at those two places, you evaluate that the current "is the same" based on your seeing (object level) a needle at "approximately" the "same" place on the meter. You also "pick out" a number closest to that needle position and you record it. Similarly you measure the two voltages and you get different approximate numerical values on the scale of the instrument. You are, in fact, already applying electrical theory from time-binding simply by calling these numbers "voltages" and "currents", but the volt-ohm meter you are using displays a moving needle against numbers, or, if you are lucky enough, a digital readout. After you have conducted this data gathering a few hundred times with different voltage sources, you apply some mathematical, statistical, and correlation theory to abstract a relationship between objects V1, V2, I1, I2 and different materials, and you might rediscover Ohm's law.

None of this analysis or perception happens outside the nervous system. The picture of what is going on is painted in the nervous system by abstract neurological processes. We don't say these things happen only within the nervous system. But we do say that "knowledge" (a la Korzybski, not-yet-disconfirmed models) is only within the nervous system. Knowledge is what we know (not in the strong philosophical sense but in the weak not-yet-disconfirmed sense), what we experience, and we project that knowledge onto what is going on. We don't know (in the strong philosophical sense Know->True) what is out there; we only build a model that is not it, has different characteristics, and, we know, comes from an ill-understood process depicted in grossly under-structured form in the structural differential.
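A small sketch of the data-gathering-plus-statistics procedure described above, with simulated meter readings standing in for the few hundred trials (the resistance value and the noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
R_unknown = 47.0                             # the putative "territory" value

I = rng.uniform(0.01, 0.2, size=300)         # recorded current readings (A)
noise = rng.normal(0.0, 0.05, size=300)      # meter/reading variation
V = R_unknown * I + noise                    # recorded voltage readings (V)

slope, intercept = np.polyfit(I, V, 1)       # abstract a linear relation
print(f"abstracted relation: V ~ {slope:.1f} * I")   # ~47: a map, not the territory
```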
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, January 5, 2008 - 12:03 pm
|
Thomas wrote, "It may be that we can never know this structure absolutely but we can approximate it and refine our maps as time goes on." I would revise this to say "we can never know that such a structure is there, but we can refine our maps as time goes on."

Thomas wrote, "Newton's 'law' of gravitation still works in some sense, it is a perfectly good relation that can be observed and I would not use the term 'disconfirmed'." You would not be using the word 'disconfirmed' in the sense that Popper and general semantics apply it. A better example is that Newton's expression for kinetic energy, KE = (1/2)mv^2, turns out to be a first approximation of the relativistic kinetic energy, as that expression "is" the first term in the expansion of the relativistic equation as an infinite convergent series, and at low speeds the difference is minimal because the other terms are divided by higher powers of c^2. What "disconfirmation" means is that the theory is NOT absolutely "True". It does not mean that the theory is not a good first approximation.
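A numerical check of that claim (the test mass and speeds below are arbitrary): relativistic kinetic energy is (gamma - 1)mc^2, the Newtonian (1/2)mv^2 is its leading term, and the relative difference grows with speed:

```python
import math

c = 2.998e8                                  # speed of light (m/s)
m = 1.0                                      # arbitrary test mass (kg)

for v in (3.0e3, 3.0e6, 3.0e7, 1.5e8):       # slow to fast
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    ke_rel = (gamma - 1.0) * m * c ** 2      # relativistic kinetic energy
    ke_newton = 0.5 * m * v ** 2             # first term of the series
    rel_err = (ke_rel - ke_newton) / ke_rel
    print(f"v = {v:.1e} m/s: Newtonian term low by {rel_err:.2%}")
```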
Thomas wrote, "we created means of measuring it so we can get a result which is more or less independent of who does the measuring." In this case I would not use the phrase "independent of". I would use the phrase "consistent with others". The context of this action is governed by time-binding learning. What we get from measurement is not "independence" of observers; what we get from measurement is consistency and conformity to commonly agreed standards of behavior. Ever since the first bully made all his subordinates use the same "grunt" to "name" a class including "rocks", thereby creating perhaps the first "concept by intuition", we have held more or less relatively invariant standard uses for terminology (subject to the evolutionary ravages of time). "Measuring" is an example of such standards use. What we get is a measure consistent with what other (trained in time-binding) observers get - conformity. This is NOT "independent" of observation. It is merely relatively independent of who among those properly trained abstracts to the common usage. We "assume" that the measure we get in such a case may be projected backwards onto what is going on as indicating "some structure" in what is going on has a causative relation to our observations, but that's projection of our map. As long as time n+1 continues to agree with times 1 through n, we will not revise our projection.

Thomas wrote, "Another way of saying it might be discovering relations which are invariant from one observer to another." If you change "discovering" to "abstracting" and "are" to "appear", I'd say you are on the right track.

Thomas wrote, "These relations exist in WIGO and lower and higher order abstractions. When scientists compare observed quantities abstraction is implicit in 'observation', of course we have only our lower order abstractions to compare. We discover the relations between observed (abstracted) quantities but we would not notice these relations if they were not present in WIGO." It has long been a question in the philosophy of mathematics as to whether we "discover" or "invent" mathematical objects and relations. This part of Thomas's claim - "These relations exist in WIGO ... we would not notice these relations if they were not present in WIGO" - is neither logically sound nor demonstrable. The first part begs the question. As for the second part, let A stand for "these relations are present in WIGO" and B for "we notice them". Then !A > !B is equivalent to B > A, and the reading the argument needs, A > B ("if they are present, we notice them"), is clearly false, since there are a LOT of things that we believe are present that we do not notice. (A brute-force truth-table check appears at the end of this post.) The argument reduces to A and A > B: these relations exist in what is going on, AND IF these relations exist in what is going on, THEN we would notice them. Since we do not notice all the relations, either the conditional or the assertion or both are false. In fact, we have to dig very hard and analyze many ways to abstract even a semblance of some relationships.

Assuming these projected relations "actually exist" in what is going on is the position of "scientific realism". But it is still a matter of belief - a question in metaphysics, not in epistemology. In epistemology, the "knowledge" is uncertain. We do not "know" what causes our experiences; we merely model them and project those models onto what is going on. And models, like maps, are not the territory, do not cover it all, and reflect the map maker.
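As promised above, a brute-force truth-table check of the propositional claims, with A = "the relations are present in WIGO" and B = "we notice them":

```python
from itertools import product

def implies(p, q):
    """Material conditional: p > q is false only when p is true and q false."""
    return (not p) or q

for A, B in product([False, True], repeat=2):
    contrapositive = implies(not A, not B) == implies(B, A)   # always True
    print(f"A={A!s:5} B={B!s:5}  (!A>!B)==(B>A): {contrapositive}   A>B: {implies(A, B)}")

# The row A=True, B=False (present but unnoticed) is exactly the one that
# falsifies A>B while leaving !A>!B and B>A intact.
```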
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, January 5, 2008 - 11:12 pm
|
Thomas wrote, "Then 'disconfirmation' is a useless notion because no theory is absolutely true." Disconfirmation is not "useless", because it shows when theories are false. It moves a theory from the category "possibly true but not yet known to be false" to "known to be false".
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Saturday, January 5, 2008 - 11:35 pm
|
Ben, As you noted, "1st, 2nd, 3rd, 4th, etc." are called "ordinal" numbers, and, for simplicity, they represent which one. The numbers "1, 2, 3, 4, etc." are called "cardinal" numbers, and, for simplicity, they represent how many. All the finite cardinals are also ordinals, but this pattern changes for infinite sets. However, we can practically ignore that in this forum, because we won't live long enough to count that high.
|
Author: Ralph E. Kenyon, Jr. (diogenes)
Sunday, January 6, 2008 - 07:35 am
|
Thomas, Korzybski extolled the virtues of math and the binary logic that goes behind it - also Tarski's definition of "truth" in defining the correspondence theory. A statement is "satisfied in a model" if the statement is true in the logical sense of consistent with the axioms of the theory AND there exist objects in the model that satisfy the statement. True or False apply at the level of the theory as the indicators of consistency.

I think you need to review Popper if you think general semanticists should not be using the words true or false. You could, however, resort to the Nixonism of "operative" and "not operative". But the words true and false apply at the level of the theory with respect to any theory statement as an indicator of its consistency with the axioms (assumptions) of the theory. A "model" consists of a theory and a set of objects that satisfy the theory. In the case of science, the "objects" are putative (projected) "structures" that we assume "exist" in what is going on, and the "theory" is the variable part that we are trying to construct. But we require truth-preserving deductive methods to ensure that our theory remains consistent.

If a theory statement (IF ... THEN ...) makes a prediction that is not "satisfied", then we deem that that theory statement is not "true" (in all cases - the difference between a conditional and an implication as noted in an earlier post), as in, it allows T > F, and we must eliminate any theory statement that is not an implication. If I apply Newton's law of kinetic energy to a high velocity particle, then I will measure X; but when I have measured it, I got Y - MUCH BIGGER than X - so "I measure X" is "false"; therefore, by modus tollens, so is Newton's law of kinetic energy (for high velocity particles).

We use True and False for observation statements as well as for theory statements. Anyone who thinks general semanticists should not be using the words true and false cannot talk about indexed observation statements or the consistency of theory. Actually that is part of the problem with "novice" (anti-Aristotelian) would-be general semanticists with inadequate math and logic training. Recall the effort that Stuart Mayper spent at seminars trying to get some of this through to "math avoiders".

Observation statements are True or False (reported as observed, reported as observed not to be - the correspondence definition of truth). Theory statements are Possibly true, Probably true (corroborated but not yet disconfirmed), or False (disconfirmed). (True and False as in assumed as an axiom or postulate, or consistent with the axioms or postulates [includes "law" statements], or as in a conditional derivable under strict deductive logic from the postulates, or inductively assumed to be an implication but not shown to be a material conditional without satisfying observations. Since such inductive {not mathematical induction} statements are derived by non-truth-preserving methods from observation statements, we do not call such theory statements "true"; in modal logic we call them possibly true, but in general semantics we call them "not disconfirmed".)
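A schematic sketch of the modus tollens pattern described above. The electron mass and speed are standard figures, but the "measured" value is simply set near the relativistic prediction for illustration:

```python
def newton_ke(m, v):
    """The theory statement's prediction: KE = (1/2) m v^2."""
    return 0.5 * m * v ** 2

m_e = 9.11e-31                 # electron mass (kg)
v = 2.7e8                      # a high velocity (m/s)

X = newton_ke(m_e, v)          # IF the theory holds, THEN we will measure X
Y = 1.07e-13                   # what we "measured" (about the relativistic value)

satisfied = abs(Y - X) / Y < 0.01      # within 1% counts as "satisfied"
print(f"predicted {X:.2e} J, measured {Y:.2e} J")
print("theory statement:", "not disconfirmed" if satisfied else "disconfirmed (False)")
```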
|
|