My studies are in Philosophy (I am a graduate of the University of Essex), and I work as a literary translator (English to Greek). Published translations of mine include works by E.A. Poe, R.L. Stevenson and H.P. Lovecraft. I sometimes post articles at https://www.patreon.com/Kyriakos
KyriakosCH
“Presumably the machine learning model has in some sense discovered Newtonian mechanics using the training data we fed it, since this is surely the most compact way to predict the position of the planets far into the future.”
To me, this seems an entirely unrealistic presumption (as are all of its parallels, not just the case of planetary positions). Even the claim that NM is “surely the most compact [...]” is questionable: we know from history that models able to predict just the positions of stars existed since ancient times, and in this hypothetical situation, where we somehow have knowledge of the positions of planets (perhaps through developments in telescopic technology), there is no reason to assume that models analogous to the ancient ones for stars couldn’t apply. NM would therefore not specifically need to be part of what the machine was calculating.
Furthermore, I take some issue with the author’s sense that a machine calculating something is calculating it in a manner which inherently allows the calculation to be translated in many ways. A human thinker inevitably thinks in ways open to translation and adaptation, because as humans we do not think in a set way: any thinking pattern, or collection of such patterns, can in theory consist of a vast number of different neural connections and variations. Only as a finished mental product can it seem to have a fixed meaning. For example, if we ask a child whether their food was nice, they may say “yes, it was”, and we would take that statement as meaning something fixed, but we could never actually be aware of the set neural coding of that reply, for the simple reason that there isn’t just one.
For a machine, on the other hand, a calculation is inherently an output on a non-translatable, set basis. Which is another way of saying that the machine does not think. This problem isn’t likely to be solved by just coding a machine so that it could have many different possible “connections” producing the same output, because with humans this happens naturally, and one can suspect that human thinking itself is in a way just a byproduct of something not tied to actual thinking but to the sense of existence. Which is, again, another way of saying that a machine is not alive. Personally, I think AI, in the way it is currently imagined, is not possible. Perhaps some hybrid of machine and DNA may produce a type of AI, but that would again be due to the DNA forcing a sense of existence, and it would still take very impressive work to use that to advance AI itself; I think it can be used to study DNA itself, though, through the machine’s interaction with it.
While “yes requires the possibility of no” is correct, one should also establish whether either yes or no is itself meaningful in the context of the examination. For example, usually one is not up against a real authority, so whether the other person’s view is in favor of or against one’s own, the answer cannot be final, for reasons other than just the internal conflict of the one who poses (or fears to pose) the question.
Often (in the internet age) we see this issue of bias and fear of asking framed with regard to hybrid matters, both scientific and political. However, one would have to suppose that the paradigmatic anxiety before getting an answer exists only in matters which are more personal. And in personal matters there is usually no clear authority, despite the fact that often there is a clear (when honest) consensus.
An example, from life: a very beautiful girl happens to have a disability, for example paralysis or atrophy of some part of her body. There is a clear contrast between her pretty features (face, upper body etc.) and the disabled/distorted one. The girl cannot accept this, yet, as is perfectly human, wishes to get some reassurance from others. Others may react in a number of different ways. The answer, however, to any question posed on this can never be regarded as some final say; in a way, what is being juxtaposed here is not a question with an answer, but an entire mental life with the nearly nameless input of some other human.
In essence, while yes requires the possibility of no, I think that the most anxiety-causing matters really do not lend themselves well to asking a question in the first place.
I will try to offer my reflection on the two matters you mentioned.
1) First of all, whether this development may have been social. It would be, to a degree, but if so then it would be a peculiar and prehistoric event:
If I were to guess: at some point in deep prehistory, our ancestors would not yet have been able to communicate using anything resembling a language, or even words. Prior to using words (or anything similar to words), prehistoric humans would only tenuously tie their inner world (thinking and feeling) to formulated or isolated notions. It is highly likely that logical thinking (by which I mean the basis of the later formalization of logic, starting at the latest with Aristotle) was not yet so prominent a part of the human mentality. It is not at all impossible, or even (in my view) that improbable, that some degree of proto-rationalization had to occur so that prehistoric humans could move away from something less organized and towards being able to establish stable notions, and consequently words and a language.
2) Secondly, this would also be inherited. I do suppose that ultimately math (by which I mean more complicated math than what we are currently aware of) serves somewhat as a DNA-to-consciousness interface. But even if this is so, due to point 1 it wouldn’t really connote that mathematical parameters are more important than other parameters in the human mind or the overall organism.
But there is another point, regarding your post. I think that a non-mental object (for example any external object) cannot be identified as it actually is by the observer/the one who senses it. In philosophy there is a famous term, the so-called “thing-in-itself”. That term (used since ancient times) generally means that any object is picked up as having qualities depending on the observer’s own ability and means to identify qualities, and not because the actual object has to have those qualities or anything like them. The actual object is just there, but is not in singularity with the observer; the observer translates it through his/her own means (senses and thought). Your point about the object possibly having math inherently is interesting (I do understand you mean that its form is shaped due to actual, real properties, and those are just picked up by us), yet it should be supplemented with the note that even if the object (for example one of those shells) had properties itself which create that spiral and then we notice it, it would have to follow that either we noticed the spiral without distorting the thing-in-itself as an observer of it, or that we picked up some property which didn’t actually have any mathematical value but was (in some strange way) isomorphic to the spiral when translated for a human observer’s sensory organs. If the latter somehow was true then the external object had no mathematical property, and we picked up some math property because we seem to project math even in more ways than one. If the former was true then we are in singularity with the observed object and nothing is actually distinct in the cosmos (certainly anyone senses their own self as distinct from something external). 
And in both cases it would not connote that math is cosmic, given that the case where math is part of the observed object would present a case where we are so full of illusions that we incorrectly think ourselves distinct from a shell, when in “reality” we would not be.
I do realize this may seem way too “philosophical” (and in a bad way). Philosophy has had problems since ancient times (this itself is already examined by Plato himself: how philosophy may seem very alienating and problematic). Yet the gist of the matter is that (in virtually all serious philosophers’ view) there is no reason to think that we as observers pick up any actual non-anthropic reality. We do pick up a translation of something, and this translation is enough to allow us to advance in various ways, including being able to build space-traveling rockets. This is so because we always stay within the translation, and to us the cosmos is witnessed in translation. But a translation of something is not in tautology with the thing itself. My own suspicion is that different intelligent species will not have compatible translations (because they would likely even lack fundamental notions we have; for example they may not sense space or movement or other parameters, and sense ones we cannot imagine. Intuitively I suspect even so alien a species could develop tech and science of a very high level).
Hi, yes, I do not mean why the Fibonacci spiral approximates the Golden Spiral. I mean why we happen to see something very close to this pattern in some external objects (for example some shells of creatures) when it is a mathematical formation based on a specific sequence.
I referred to it to note that perhaps we project math onto the external world, including cases where we literally see a fully fledged math spiral.
There are other famous examples. Another is the Vitruvian Man (the proportions of man by Vitruvius, as presented by da Vinci). One would be tempted to account for this by saying math is cosmic, yet it may just be that it is anthropic and the result is a projection of patterns. That math is very important for us (both consciously and even more so unconsciously) seems certain; yet maybe it is not cosmic at all.
Thank you. Intuitively I would hazard the guess that even non-obvious systems (such as your example of the story which rests on axioms) may in the future be presented in a mathematical way. There is a very considerable added hurdle there, however:
When we communicate about math (let’s use a simple and famous example: the Pythagorean theorem in Euclidean space) we never focus on parameters that go outside the system. Not only parameters outside the set axioms which define the system mathematically (in this case Euclidean space), but more importantly the many more which define the terms we use: I do not communicate to you how I sense the terms A, B, squared, equality or any other, regardless of the likelihood that I sense them in my mind in a very different way than you do. It’s the same relative communication which is used in everyday matters: if one says “I am happy” you do not sense what is very specifically/fully meant, although the term is a fossil of specific connotations, so some communication is possible, and often no more is needed. Likewise no more is needed to present a math system like that, but certainly far more will be needed to present a story or the subconscious in math terms (and within a given level; outside of that set the terms will remain less defined).
Indeed, crows are a good example of non-human creatures that use something which may be identified as math (crows have been observed to effectively notice the law of displacement of liquids, in its practical manifestation, obviously :) )
I used “human” as a synecdoche here, that is, I chose the most prominent creature we know that uses math to stand for all that (to some degree) do. Even if we accept that crows or other creatures have a similar link (itself debatable), it still would link math to DNA found on our planet. My suspicion is that what we identify as math is a manifestation of relations, sequences or outcomes of DNA, more easily observable in human self-reflection and sense (which is why I mentioned the shells we see in a form approximating the golden ratio spiral).
In essence my suspicion is that math is tied to specific DNA-to-consciousness logistics in animals, and serves as a kind of interface between the deep mind and consciousness, parts of which are occasionally brought up and examined more rigorously (humans, being the species most apt at self-reflection, are likely the main one here to be conscious of math concepts). I am not of the view that math is cosmic. Approaching this philosophically, it basically connotes that the external world is not mathematical, but because human examination of phenomena in a scientific manner presupposes use of the human mind, the world inevitably is examined through math. One could hypothesize the existence of some other field, non-human, which is equally applicable to the study of the cosmos, and possibly some intelligent alien species uses that, with compatibility with math being probably nonexistent.
Thanks for the reply. I think that it does matter, because if math is indeed anthropic then it should follow that humans are in effect bringing to light parts of our own mental world. It isn’t a discovery of principles of the cosmos, but of how any principles (to the degree they exist in parts of the cosmos) are translated by our mentality. I do find it a little poetic, in that if true it is a bit like using parts of yourself so as to “move” about, where a special kind of “movement” requires special knowledge of something still only human.
To use another common metaphor: people who are born blind have no sense of how the world looks. They do come up with theories. To a degree those theories, coupled with sensory routines (counting steps on known routes, hearing and noticing smells), provide a personal model of some environment, translated in their own way. Yet the actual phenomenon, the visible world, is not available. Likewise, it seems that math is not part of anything external, and is our own, human tool, composed of particularly human ingredients and enough to model something of the world such that it may allow quite complicated movement through it (including space travel).
Machine language is a known lower level; neurons aren’t; perhaps in the future there will be more microscopic building blocks examined; maybe there is no end to the division itself.
In a computer it would indeed make no sense for a programmer to examine something below machine language, since you are compiling or otherwise acting upon it. But this is not a known isomorphism with the mind.
If you’d like a parallel to the above, from the history of philosophy, you might be interested in comparing dialectic reasoning and Aristotelian logic. It’s not by accident that Aristotle explicitly argued that for any system to include the means to prove something (proof isn’t there in dialectics, not past some level, exactly because no lower level is built into the system) it has to be set with at least one axiom: the inability of anything to simultaneously include and not include a quality (the law of non-contradiction; in formal notation, ¬(A∧¬A)). In dialectics (Parmenides, Zeno etc.), this explicitly is argued against, the possibility of infinite division of matter being one of their premises.
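As a side note, Aristotle’s axiom is simple enough that it can be stated and machine-checked directly; a minimal sketch in Lean 4 (the theorem name is my own choice, not standard library terminology):

```lean
-- The law of non-contradiction: no proposition both holds and fails to hold.
-- A proof of A ∧ ¬A would contain a proof of A and a refutation of A;
-- applying the refutation to the proof yields the required contradiction.
theorem non_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1
```

Notably, this is provable without assuming the law of excluded middle (A∨¬A), which is a separate, stronger principle.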
How memories are stored certainly matters; it is too much of an assumption that levels are sealed off. Such an assumption may be implicitly negated in a model, but obviously this doesn’t mean something has actually changed; material systems have this issue, unlike mathematical ones.
Another poignant property of material systems is that at times there is a special status of observer for them. In the case of the mind, you have the consciousness of the person and while certainly it can be juxtaposed to other instances of it, it is a different relation from the one which would allow carefree use of the term “anecdote”. Notice “special”, which in no way means infallible or anything of such a class, but it does connote a qualitative difference: apart from other means of observation—those available to everyone else, like the tool you mentioned—there is also the sense through consciousness itself, which here was for reasons of brevity referred to as intuition.
Of course consciousness itself is problematic as an observer. But it is used—in a different capacity—in all other input procedures, since you need an observer to take those in as well. If one treats consciousness as a block which acts with built-in biases, it is too much to believe those are cancelled if one simply uses it as an observer of another type of input. It’s due to this (particular) loop that posing a question about intuition is not without merit.
Going by practice, it does seem likely that intertwined (nominally separate, as over-categories) memories will be far easier to recall at will than any loosely related (by stream of consciousness) collection of declarative memories. However it is not known whether stored memories (of either type) are actually stored individually or not; there are many competing models for how a memory is stored and recalled, down to the lowest (or “lowest”, for there may be no lowest in reality) level of neurons.
That said, I was only asking about other people’s intuitive sense of what works better. It isn’t possible to answer using a definitive model, due to the number of unknowns.
I mean more cost-effective, so to speak. My sense is that while procedural is easier to sustain (for years, or even for the entirety of your life), it really is more suitable for focused projects instead of random/general knowledge accumulation. Then again it is highly likely that procedural memories help with better organization overall, acting as a more elegant system. In that, declarative memories are more like axioms, with procedural being either rules or just application of rules, with far fewer axioms needed.
You are confusing “reason to choose” (which is obviously not there; the optimal strategy is trivial to find) with “happens to be chosen”. I.e., you are looking at what is said from an angle which isn’t crucial to the point.
Everyone is aware that scissors is not to be chosen at any time if the player has correctly evaluated the dynamic. Try asking a non-sentence in a formal logic system to stop existing because it evaluated the dynamic, and you’ll get why your point is not sensible.
Please read my edited reply to lsusr.
“People don’t live merely to survive: we’re hardwired to propagate our genes. If you cannot think abstractly and articulate your ideas well, you will have difficulty attracting a mate. People who have disabled their ability to examine themselves will be quickly eliminated from the gene pool. Hence, it seems unlikely that such an illness will occur because it goes against how natural selection has shaped us.”
I don’t disagree with the gist of the above. However it is tricky to assign clear intentions to a non-human agent, assuming one views biological undercurrents as an analogue to an agent in the first place. Which brings us to:
“This reasoning seems to rely on the assumption that the mind was designed by some kind of agent. Who do you think is deciding whether it “makes sense” to allow an expansion of the ability to think? Our best theory is that cognitive expansion resulted as a series of mutations that improved the ability of our ancestors to survive. One does not need to appeal to the fact that “Day Zero illness” does not “make sense” to argue for its implausibility. It is implausible simply by the fact that it is a priori highly unlikely for any novel previously unobserved phenomenon to exist in the absence of a very strong theory that predicts it.”
If I assume such an illness can exist, it doesn’t mean I can pontificate on the way in which it would be triggered. Certainly some mental illnesses seem to be more common in modern times, despite the greater ability to account for them and to count patients more efficiently. Some slightly related illnesses that do exist are those which have aphasia as a core part. Usually in pre-modern times one finds more elaborate personal accounts by poets and other authors of such sensations or states; e.g. in the case of an aphasia-like state, there are two good examples: Baudelaire (the French poet) and his sense that he was “touched by the wing of idiocy”, and the very dramatic story of the deterioration of Guy de Maupassant (the important short-story writer), who in the end “reverted to an animal state”.
However, as I noted, the hypothetical illness I wrote about is not just an individual case with elements of aphasia. Primarily, my background for asking the question has been that a human is not primarily (in my view) an outward/socially oriented being, even though in the vast majority of cases humans are indeed social agents (for a variety of reasons, usually having to do with clear rewards). Below all that, however, there is the person in their world of consciousness, as part of the greater world of the mind. It may be, therefore, that a risk can be picked up (more later on by what it would be picked up) as serious enough, if it somehow attacks the inner world, that even a massive exodus from formations closer to the surface (like interests in the external world) may occur. In such a case, assuming it is possible, it would be easier to cause not a full erasure of memories or skills, but a negation of the ability to stabilize them, as briefly presented in the definition of the new illness in the OP.
As for your point about all this having to allow for the mind being created by an agent: no, that isn’t so. I certainly have no reason to think the mind was created as a set work, nor (of course) that it existed a priori or may be sensed as existing a priori even figuratively. The way in which it developed (mutations etc.) doesn’t by itself cancel the possibility of a not-yet-seen illness appearing. After all, as you agreed, not much of the final form of a mind (such a thing cannot even exist) can manifest, given that this system of connections cannot exhaust all its possible rearrangements during the person’s lifetime (likely not even if the person could live for 1000 years). I do approach this from a more literary (which, sadly, at times means even less literal...) point of view, given that literature and philosophy are where my interests and studies lie.
I should also give at least one parallel (it won’t be perfect, and it may lead to problems as well...) with a procedure which allows for a new development on a larger scale, while it wasn’t picked up individually until then. Given that if something like the DZI existed, it wouldn’t have been picked up before, it can be said that whatever was doing the picking-up or noticing certainly would not act on the same level as an individual (e.g. some individual sufferer of some aphasia-like condition). This would perhaps be possible if the complexity of both the trigger and the formations which pick up the trigger were again far larger. In effect, in my hypothetical, the general idea was that some core pattern or patterns (not created by any agent; not conscious and not accounted for) does exist, which would signal, due to a special relation to the unconscious mind, some particular and grave danger. Such patterns do not even have to be intelligible to an individual in the first place. In that, perhaps, it deteriorates somewhat into the realm of fiction; yet most complicated patterns do not make a full impression on someone who views them. In fact we can be said to be surrounded by patterns which are not picked up, due to our position or a lack of interest to notice them. Maybe (that is the hypothesis) a slight difference will lead to the unintended formation of a curious pattern which happens to be related not to the thinker but to some scheme in the mental world. After all (here comes the parallel) it isn’t rare to see the opposite happen, for humans project math formations onto external objects (e.g. the Fibonacci and other φ-related patterns, on shells etc.). If we can project math onto the external world (which isn’t anthropic or mathematical; math is not cosmic, in my view), why shouldn’t some formation there present us with other elements and balances of our own mental world?
That such would be catastrophic, or cataclysmic, is just an assumption.
Intuitively, I think it is possible it will appear.
Rationally, one may consider the following as well:
-not much time has passed since the first use of language (by prehistoric people) to this day, so it can be assumed that only a negligible part of the possible mental calculations/connections has occurred
-there is no direct survival bonus through the ability to think in a complicated manner; on the other hand there is arguably a cost-effective logic in disabling great freedom in self-examination
However it may take centuries for that to happen.
At any rate, it is just my guess; there are so many unknowns about the mind that this too may be impossible to actually happen. One reason why it would be unlikely is that, ultimately, if so grave a danger were built into a system, it would make more sense never to allow the expansion of the ability to think as an option in the first place.
I wish to examine a point in the foundations of your post—to be more precise, a point which leads to the inevitable conclusion that it is not problematic in this discussion to use the term ‘agent’ while it is understood in a manner which allows a thermostat to qualify as an agent.
A thermostat certainly has triggers/sensors which force a reaction when a condition has been met. However to argue that this is akin to how a person is an agent is to argue that a rock supposedly “runs” the program known as gravity, when it falls. The issue is not a lack of parallels; it is a lack of undercurrent below the parallels (in a sense, this is causing the view that a thermostat is an agent, to be a ‘leaking abstraction’ as you put it). For we have to consider that no actual identification of change (be it through sense or thought or both) is possible when the entity identifying such change lacks the ability to translate it in a setting of its own. By translating I mean something readily evident in the case of human agents—not so evident in the case of ants or other relatively simpler creatures. If your room is on fire you identify this as a change from the normal, but this does not mean there is only one way to identify the changed situation. Someone living next to you will also identify that there is a fire, but chances are the (to use an analogy) code for that in their mind will differ very significantly from your own. Yet on some basic level you will be in agreement that there was a fire, and you had to leave.
Now an ant, another being which has life (unlike a thermostat), picks up changes in its environment. If you try to attack it, it may go into panic mode. This, again, does not mean the act of attacking the ant is picked up as it is; it is once again translated, this time by the ant. How it translates it is not known; however it seems impossible to argue that it merely picks up the change as something set, some block of truth with the meaning ‘change/danger’ etc. It picks it up due to its ability (not conscious in the case of the ant) to identify something as set, and something as a change in that original set. A thermostat has no identification of anything set, because, not being alive, it has no power nor need to sense a starting condition, let alone to have inside it a vortex where translations of changes are formed.
All the above is why I firmly am against the view that “agent” is to be defined in a way that both a human and a thermostat can partake in it, when the discussion is about humans and involves that term.
I do suspect that when things make sense it is because of a drive of the sense-making agent to further his/her understanding, but I think that unwittingly it is actually a self-understanding and not an understanding of the cosmos. If the cosmos does make sense, it isn’t making sense to some chance observer like a human, who is at any rate a walking thinking mechanism and has very little consciousness of either his own mental cogs or the dynamics between his own thinking and anything external and non-human. That this allows for distinct and verifiable progress (e.g., as noted in my OP, anything up to space-traveling vehicles) is not due to some supposed real tie between observer and cosmos, but due to the inherent tie between the observer and a translation natural (and inescapable past some degree) to said observer of the cosmos.
I generally agree, and I am happy you found the discussion interesting :)
In my view, indeed the Babylonian type of labyrinth does promote continuous struggle, or at least multiple points of hope and a focus on achieving a breakthrough, while ultimately a majority of the time those won’t lead to anything, and couldn’t have led to anything in the first place. The Arabian type at least promotes a stable progression towards an end, although that end may already be a bad one.
Most of the time we simply move in our labyrinth anyway. And with more theoretical goals it can be said that even a breakthrough is more of a fantasy borne out of the endless movement inside the maze.
A good question. I would think that while the story doesn’t have much to offer regarding conscious mental calculation and systems, it still includes a set of powerful allegories (in my article I did mention one of them: Algernon seems to stand for the somatic part, with the person turning into a purely mental entity; another allegory seems to be about the need to stop extrapolating thoughts to prevent an overload) which can, consciously or not, bring about changes to the reader’s rationality.
I don’t think the story has much to do with youth and experience. After all, as we all know (unless we are youths ;) ), while some knowledge can only be had by experience and thus only be gotten in time, the more theoretical types of knowledge are available to highly intelligent youths as well; e.g. an elementary school pupil can already be exceptionally good at math.
Thank you, I will have a look!
My own interest in recollecting this variation (an actual thing, from my childhood years) is that intuitively it seems to me that this type of limited setting may be enough so that the inherent dynamic of ‘new player will go for the less than optimal strategy’, and the periodic ripple effect it creates, can (be made to) mimic some elements of a formal logic system, namely the interactions of non-sentences with sentences.
So I posted this as a possible trigger for more reflection, not for establishing the trivial (optimal strategy in this corrupted variation of the game) ^_^