The distinction between specified goals and maximal goals seems ill-defined or at least very vague. In order for this to be of any use you would at minimum need to expand a lot on what these mean and establish that they fundamentally make sense across a broad swath of mindspace not just human minds (I’d settle for a decent argument that they would even be common in evolved minds for a start.)
Note: This is also cross-posted here at my blog in anticipation of being karma’d out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).
Maybe you should read more and consider more carefully what you post? Possibly have other people give feedback on your posts before you publish them; that way, your posts would address the more obvious concerns people have.
Specified goals are exactly what you would expect—a fixed number of goals that are specified in the design (or random creation) of the entity. These are assumed goals: inflexible and, in the best case, having and needing no interpretation other than what is exactly specified.
Maximal goals are maximal in both number and diversity (so please don’t conjure the strawman of 100 copies of the same goal ;-).
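For concreteness, here is a minimal sketch of the distinction (illustrative Python; the class and method names are my own invention, not a fixed design): a specified-goal entity carries a closed goal set fixed at creation, while a maximal-goal entity keeps an open-ended set that it can keep extending in number and diversity.

```python
# Illustrative sketch only; class and method names are hypothetical.

class SpecifiedGoalEntity:
    """Goals form a closed set, fixed in the design (or random creation)."""

    def __init__(self, goals):
        # The goal set never changes after construction: these are
        # assumed goals, taken exactly as specified.
        self._goals = tuple(goals)

    def goals(self):
        return self._goals


class MaximalGoalEntity:
    """Goals form an open-ended set, maximal in number and diversity."""

    def __init__(self, seed_goals=()):
        self._goals = set(seed_goals)

    def adopt(self, goal):
        # New goals can be adopted at any time; storing them in a set
        # means 100 copies of the same goal collapse to a single entry.
        self._goals.add(goal)

    def goals(self):
        return frozenset(self._goals)
```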
The existence of goals fundamentally makes sense across a broad swath of mindspace. I had assumed that this was a given at this site.
Classifying regions of mindspace by their relationship to goals, and asking what the implications of that might be, should be an obvious line of inquiry.
Given your last comment, I think you’d be very surprised by how much I read and how carefully I consider what I post (assuming that this isn’t the standard reflexive dig, very prominent around here, used to protect old ideas and put down upstart newbies ;-). I normally have a couple of people provide feedback on all my writing before I let it out in public, and I have to say that this is the only community that has any problem with a lack of clarity in most of my terms (maybe that’s because it’s always EXPECTING everything to have an odd, nonstandard definition?).
> Specified goals are exactly what you would expect—a fixed number of goals that are specified in the design (or random creation) of the entity. These are assumed goals: inflexible and, in the best case, having and needing no interpretation other than what is exactly specified.

> Maximal goals are maximal in both number and diversity (so please don’t conjure the strawman of 100 copies of the same goal ;-).
Still way too vague. How, for example, do multiple goals interact? If an entity has two specified goals, how does it decide to prioritize between them? And if it has some priority system, how is that not just one slightly more complicated goal? How is “interpretation” relevant to specified goals? And if a goal is flexible, how can it be a goal?
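To make the priority-collapse worry concrete, here is a toy sketch (Python; the goals and weights are arbitrary inventions for this comment): any fixed rule for trading off two goals can itself be written as a single scalar objective, at which point the “two goals plus a priority system” are just one slightly more complicated goal.

```python
# Toy illustration only: goal_a, goal_b, and the weights are arbitrary,
# invented for this comment, not taken from the post.

def goal_a(state):
    return state.get("a", 0.0)

def goal_b(state):
    return state.get("b", 0.0)

def combined_goal(state, weight_a=0.9, weight_b=0.1):
    # "Prioritize A over B" expressed as one scalar objective.
    return weight_a * goal_a(state) + weight_b * goal_b(state)

# From the entity's point of view, combined_goal is a single goal;
# the two-goal structure plus a priority rule has collapsed into it.
print(combined_goal({"a": 2.0, "b": 5.0}))  # 0.9*2.0 + 0.1*5.0 = 2.3
```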
> The existence of goals fundamentally makes sense across a broad swath of mindspace. I had assumed that this was a given at this site. Classifying regions of mindspace by their relationship to goals, and asking what the implications of that might be, should be an obvious line of inquiry.
Missing the point. This is probably true, but it has not been demonstrated that there’s a useful distinction between maximal and specified goals across a broad section of mindspace.
> Given your last comment, I think you’d be very surprised by how much I read and how carefully I consider what I post (assuming that this isn’t the standard reflexive dig, very prominent around here, used to protect old ideas and put down upstart newbies ;-). I normally have a couple of people provide feedback on all my writing before I let it out in public, and I have to say that this is the only community that has any problem with a lack of clarity in most of my terms (maybe that’s because it’s always EXPECTING everything to have an odd, nonstandard definition?).
I have no way of judging how much you read, and it isn’t terribly relevant to my comment anyway. I don’t think anything here amounts to expecting non-standard definitions. But definitions in the world around us are often highly imprecise, and it is probably true that LW expects more precision in definitions, and more careful thinking, than most of the web. There are comments I’d make at Reddit or Slashdot that I’d never make here, simply because I’d never get away with such imprecise claims. (In fact, it worries me that I still make such statements elsewhere, because it suggests that the level of rationality here is not necessarily rubbing off on me. I’ll need to think about that.)