(note: they’d better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can’t even see doesn’t work)
It’s not that you can’t see the code at all, it’s that you can’t list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.
Such single changes sometimes generalize broadly, if you happen to hit a “function” that’s used by a lot of different things. But as with any legacy code base, it’s hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.
I’d say “Put that thing down and back away slowly before you completely fuck something up with it”.
Well, when I started down this road, I was desperate enough that the risk of frying something was much less than the risk of not doing something. Happily, I can now say that the brain is a lot more redundant—even at the software level—than we tend to think. It basically uses a “when in doubt, use brute force” approach to computation. It’s inelegant in one sense, but VERY robust -- massively robust compared to any human-built hardware OR software.
While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand. Roughly half of my job is fixing other people’s “fixes” because they really had no concept of what was happening or how to use the tools in the box correctly.
Brain code doesn’t crash, and the brain isn’t capable of locking in a tight loop for very long; there are plenty of hardware-level safeguards that are vastly better than anything we’ve got in computers. Remember, too, that brains have to be able to program themselves, so the system is inherently both simple and robust.
In fact, brains weren’t designed for conscious programming as such. What “mind hacking” essentially consists of is deliberately directing the brain to information that convinces it to make its own programming changes, in the same way that it normally updates its programming—e.g. by noticing that something is no longer true, a mistake in classification has been made, etc. (The key being that these changes have to be accomplished at the “near” thinking level, which operates primarily on simple sensory/emotional patterns, rather than verbal abstractions.)
In a sense, to make a change at all, you have to convince the brain that what you are asking it to change to will produce better results than what it’s already doing. (Again, in “near”, sensory terms.) Otherwise, it won’t “take” in the first place, or else it will revert to the old programming or generate new programming once you get it “in the field”.
I don’t mean you have to convince the person, btw; I mean you have to convince the brain. Meaning, you need to give it options that lead to a prediction of improved results in the specific context you’re modifying. In a sense, it’d be like talking an AI into changing its source code; you have to convince it that the change is consistent with its existing high-level goals.
It isn’t exactly like that, of course—all these things are just metaphors. There isn’t really anything there to “convince”; it’s just that what you add into your memory won’t become the preferred response unless it meets certain criteria, relative to the existing options.
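To make the “certain criteria” idea concrete, here is a toy sketch of the selection rule being described: a new response stored for a context only becomes the preferred one if its predicted result beats the options already there. The data structure, names, and numbers are all my own illustration, not anything from neuroscience or from the discussion above.

```python
def preferred_response(options):
    """Pick the response with the best predicted result for this context."""
    return max(options, key=lambda r: r["predicted_result"])

def add_response(options, new_response):
    """Adding a response never forces preference; it merely competes."""
    options.append(new_response)
    return preferred_response(options)

# Existing "programming" for a hypothetical context, e.g. public speaking:
options = [
    {"name": "avoid", "predicted_result": 0.6},
    {"name": "freeze", "predicted_result": 0.4},
]

# A new option "takes" only if a better result is predicted for it...
best = add_response(options, {"name": "speak calmly", "predicted_result": 0.8})
print(best["name"])  # speak calmly

# ...whereas one with a worse prediction is stored but never preferred,
# which is the sense in which the change fails to "take".
best = add_response(options, {"name": "rehearse endlessly", "predicted_result": 0.3})
print(best["name"])  # speak calmly
```

The point of the sketch is that there is no agent being persuaded; preference just falls out of a comparison against the existing options.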
Truth be told, though, most of my work tends to be deleting code, not adding it, anyway. Specifically, removing false predictions of danger, and thereby causing other response options to bump up in the priority queue for that context.
For example, suppose you have an expert system that has a rule like “give up because you’re no good at it”, and that rule has a higher priority than any of the rules for performing the actual task. If you go in and just delete that rule, you will have what looks like a miraculous cure: the system now starts working properly. Or, if it still has bugs, they get ironed out through the normal learning process, not by you hacking individual rules.
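The expert-system example above can be sketched in a few lines. This is a hypothetical illustration of the mechanism described, with invented rule names and priorities: rules fire in priority order, so deleting the one high-priority “give up” rule lets the lower-priority task rules run, with no other changes needed.

```python
rules = [
    {"priority": 10, "action": "give up: you're no good at this"},
    {"priority": 5,  "action": "break the task into steps"},
    {"priority": 3,  "action": "attempt the first step"},
]

def respond(rules):
    """Fire the highest-priority rule available in this context."""
    return max(rules, key=lambda r: r["priority"])["action"]

print(respond(rules))  # give up: you're no good at this

# "Deleting code": remove the blocking rule...
rules = [r for r in rules if "give up" not in r["action"]]

# ...and the other responses bump up in the queue on their own.
print(respond(rules))  # break the task into steps
```

Nothing about the task rules themselves was edited; removing the higher-priority rule is what produces the apparently miraculous change in behavior.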
I suppose what I’m trying to say is that there isn’t anything I’m doing that brains can’t or don’t already do on their own, given the right input. The only danger in that is if you, say, motivated yourself to do something dangerous without actually knowing how to do that thing safely. And people do that all the time anyway.