miniKanren is a logic/relational language. It’s been used to answer questions about programs. For example, once you give miniKanren a description of the untyped λ-calculus extended with integers, you can ask it “give me programs that result in 2” and it’ll enumerate programs from the constant “2” to “1 + 1” to more complicated versions using λ-expressions. It can even find quines (if the described language supports them).
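To give a flavor of the idea without a full miniKanren implementation, here’s a toy Python sketch that does the same kind of thing by brute force: enumerate expressions in a tiny language (integer constants and “+”) and keep the ones that evaluate to 2. All names here are illustrative; real miniKanren does this relationally rather than by exhaustive search.

```python
def exprs(depth):
    """Yield (source, value) pairs for all expressions up to a given depth.

    The language is deliberately tiny: the constants 0, 1, 2 and binary "+".
    """
    if depth == 0:
        for n in range(3):  # constants 0, 1, 2
            yield str(n), n
        return
    # Everything expressible at a smaller depth is also expressible here.
    yield from exprs(depth - 1)
    subexprs = list(exprs(depth - 1))
    for (s1, v1) in subexprs:
        for (s2, v2) in subexprs:
            yield f"({s1} + {s2})", v1 + v2

def programs_resulting_in(target, depth=2):
    """Enumerate distinct program texts whose value equals `target`."""
    seen = set()
    for src, val in exprs(depth):
        if val == target and src not in seen:
            seen.add(src)
            yield src
```

Asking `programs_resulting_in(2)` yields “2”, then “(1 + 1)”, “(0 + 2)”, and so on, growing more elaborate as the depth bound increases, much like the enumeration described above.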
The Nanopass Framework is built for that:
“The nanopass framework provides a tool for writing compilers composed of several simple passes that operate over well-defined intermediate languages. The goal of this organization is both to simplify the understanding of each pass, because it is responsible for a single task, and to simplify the addition of new passes anywhere in the compiler.”
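The structure described in the quote can be sketched in a few lines of Python (the real framework is a Scheme library; the pass names and tuple-based representation below are my own illustration, not the framework’s API): a compiler is just a chain of small passes, each consuming one well-defined intermediate representation and producing the next.

```python
# Expressions are tuples: ("num", n), ("add", a, b), ("neg", a), ("sub", a, b).

def desugar(expr):
    """Pass 1: rewrite ("sub", a, b) into ("add", a, ("neg", b)),
    so later passes only see a smaller core language."""
    tag = expr[0]
    if tag == "num":
        return expr
    if tag == "sub":
        return ("add", desugar(expr[1]), ("neg", desugar(expr[2])))
    return (tag,) + tuple(desugar(x) for x in expr[1:])

def fold_constants(expr):
    """Pass 2: evaluate any operation whose operands are all constants."""
    tag = expr[0]
    if tag == "num":
        return expr
    args = [fold_constants(x) for x in expr[1:]]
    if all(a[0] == "num" for a in args):
        if tag == "add":
            return ("num", args[0][1] + args[1][1])
        if tag == "neg":
            return ("num", -args[0][1])
    return (tag,) + tuple(args)

def compile_expr(expr):
    """Run the pipeline: each pass is small, and new passes slot in easily."""
    for a_pass in (desugar, fold_constants):
        expr = a_pass(expr)
    return expr
```

Each pass is trivial to understand on its own, and inserting a new pass (say, a dead-code eliminator) means adding one function to the pipeline, which is exactly the point of the nanopass organization.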
I’m going again; it was too fun and interesting to miss.
Count me in.
Around São Paulo, yes. Around LW, not much anymore, I mostly read it via feed reader.
This model seems to be reducible to “people will eat what they prefer”.
A good model should reduce the number of bits needed to describe a behavior. If the model has to keep a log (e.g. what particular humans prefer to eat) in order to predict anything, it’s not much less complex (in terms of bit encoding) than the behavior itself.
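A rough way to see this, using compression as a stand-in for description length (the food names and sizes are made up for illustration): behavior generated by a short rule compresses far below its raw size, while a mere log of arbitrary per-person preferences stays close to incompressible.

```python
import random
import zlib

random.seed(0)
foods = ["rice", "beans", "pasta", "salad"]

# "People eat what they prefer" with arbitrary preferences: the model must
# carry the whole preference log, so its description is about as big as
# the behavior it explains.
log_of_preferences = "".join(random.choice(foods) for _ in range(1000))

# Behavior captured by a short rule (here: everyone always eats the same
# thing) needs almost no bits beyond the rule itself.
rule_governed = "rice" * 1000

log_size = len(zlib.compress(log_of_preferences.encode()))
rule_size = len(zlib.compress(rule_governed.encode()))
print(log_size, rule_size)
```

The rule-governed string compresses to a handful of bytes while the preference log stays orders of magnitude larger, which is the sense in which a model that merely memorizes a log hasn’t actually explained anything.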
I agree vague is not a good word choice. Irrelevant (using relevancy as it’s used to describe search results) is a better word.
I would classify such kinds of predictions as vague, after all they match equally well for every human being in almost any condition.
There’s no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.
Corollary 1: most models explain after the fact and require both the subject to be aware of the model’s predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.
Corollary 2: we’ll spend most of our time in drama trying to understand the real reasons or the truth about our own or others’ behavior, even when presented with evidence pointing to the randomness of our actions. After the fact we’ll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.
It doesn’t seem to me that you have an accurate description of what a super-smart person would do/say, other than matching your beliefs and providing insightful thoughts. For example, do you expect super-smart people to be proficient in most areas of knowledge, or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.
How would you describe the writing patterns of super-smart people? Similarly, what would meeting, talking to, or debating them feel like?
Hi, I’m Daniel. I’ve read OB for a long time and followed LW right from the beginning, but work/time issues in the last year made my RSS reading queue really long (I had all LW posts in the queue).
I’m a Brazilian programmer, long time rationalist and atheist.
Hi, I’m a lurker, mostly because I was reading these off my RSS queue (I accumulated thousands of entries in my RSS reader in the last year due to work/time issues).
São Paulo, Brazil