Not a very pointed answer, but a collection of leads:
Most books I can find on compilers/PLs tend to spend most of their time on the text representation (and algorithms for translating programs out of text, i.e. parsing) and the machine-code representation (and algorithms for translating programs into machine code).
There are good reasons for the time spent on them — they are more difficult than the parts that go in the middle, which is “merely” software engineering, although of an unusual kind.
There is also a dearth of resources on the topic, and because of that it is actually fairly hard to learn.
One reason for that dearth is that the basics are quite simple: generate a tree as the output of parsing, then transform that tree. Generate derivative trees and graphs from these trees to perform particular analyses.
Phrased like that, it seems that knowledge of how to work with trees and graphs is going to serve you well, and that is indeed correct.
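To make that concrete, here is a minimal sketch in Python (node names invented for illustration, not taken from any real compiler): a toy expression tree and a classic constant-folding transformation over it.

```python
from dataclasses import dataclass

# A toy expression tree: integer literals and additions.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold_constants(node):
    """A classic tree transformation: collapse additions whose
    operands are (after folding) both literals."""
    if isinstance(node, Add):
        left = fold_constants(node.left)
        right = fold_constants(node.right)
        if isinstance(left, Lit) and isinstance(right, Lit):
            return Lit(left.value + right.value)
        return Add(left, right)
    return node

# (1 + 2) + x  folds to  3 + x
print(fold_constants(Add(Add(Lit(1), Lit(2)), "x")))
# Add(left=Lit(value=3), right='x')
```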
A good read (though with a very narrow focus) is the discussion of syntax tree architecture in Roslyn. The Roslyn whitepaper is also quite interesting, though more oriented towards exposing compiler features to users.
Personally, I did some research on trying to implement name resolution (relating an identifier use to its declaration site) and typing as a reactive framework: you would define the typing rules for your language by writing inference rules, e.g. once you know the type of node A and the type of node B, you can derive the type of node C. The reactive part was then simply to find the applicable inference rules and run them, roughly like the sketch below.
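Here is the idea as a minimal hypothetical sketch (the rule format and driver are invented for illustration; this is not the actual research code):

```python
# Hypothetical sketch of "typing as a reactive framework": each
# inference rule lists the nodes whose types it needs and the node
# whose type it derives; the driver fires whatever rule becomes
# applicable, until nothing changes.

def run_rules(rules, types):
    changed = True
    while changed:
        changed = False
        for needs, target, derive in rules:
            if target not in types and all(n in types for n in needs):
                types[target] = derive(*(types[n] for n in needs))
                changed = True
    return types

# e.g. "once you know the types of A and B, you can derive the type of C"
rules = [(["A", "B"], "C", lambda a, b: "int" if a == b == "int" else "error")]
print(run_rules(rules, {"A": "int", "B": "int"}))
# {'A': 'int', 'B': 'int', 'C': 'int'}
```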
The project didn’t really pan out. In reality, the logic ends up looking quite obfuscated, and it’s just easier to write some boring old non-modular code where the logic is readily apparent.
(Incidentally, fighting against this “it’s easier to just code it manually” effect — but in parsing — is what my PhD thesis is about.)
I might advise you to look at research done on the Spoofax language workbench. Spoofax includes declarative languages to specify name binding, typing, semantics and more. These languages do not offer enormous flexibility, but they cover the most common language idioms. Since those idioms were codified in a formal system (the declarative languages), it might tell you something about the structure of the underlying problem (… which is not really about something quite as simple as data structure selection, but there you have it).
For purposes of this question, I’m not particularly interested in either of these representations—they’re not very natural data structures for representing programs, and we mostly use them because we have to.
I’d like to point out that I have seen very convincing arguments to the contrary. One argument in particular was that while the data structures used to represent programs will tend to change (for engineering reasons, to support new features, …), the text representation stays constant. This argument was made in the context of a macro system, I believe (defending the use of quasiquotes).
Regarding machine code, it would be immensely useful even if we didn’t need to run code on CPUs. Look at virtual machines: they work with bytecode. A list of sequential instructions is just the extremum of the idea of translating high-level stuff into a more limited number of lower-level primitives that are easier to deal with.
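To illustrate with a toy sketch (the instruction set is invented for illustration, not any real VM’s): even a nested expression flattens into a short list of simple sequential instructions.

```python
# Toy stack-machine bytecode (invented instruction set, not any real
# VM's): the nested expression 2 * (3 + 4) becomes a flat list of
# simple instructions that a trivial loop can execute.
def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

program = [("PUSH", 2), ("PUSH", 3), ("PUSH", 4), ("ADD",), ("MUL",)]
print(run(program))  # 14
```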
Is there some other question I should be asking, e.g. a different term to search for?
On the meta-level, where else should I look/ask this question?
For academic literature on the topic, I would look at the proceedings of the GPCE (Generative Programming: Concepts & Experiences) and SLE (Software Language Engineering) conferences.
I think there exist some program transformation frameworks out there, and you might also learn something from them, though in my experience they’re quite byzantine. One such is Rascal MPL (meta-programming language). Another is Stratego (part of Spoofax) (I read some papers on that one a while ago that were palatable).
So anyway, here goes. Hope it helps. You can contact me if you need more info!
I’d be more interested in the in-between: what about cases where we don’t have general AI, but we have automation that drastically cuts jobs in a field, without causing counter-balancing wage increases or job creation in another field?
For instance, imagine the new technology is something really simple to manufacture (or worse, a new purpose for something we already manufacture en masse): it’s so easy to produce these things that we don’t really need to hire more workers; just push a couple of levers and all the demand is met just like that.
Is there something interesting to be said about what happens then? Can this be modeled?
(In practice, even this is too extreme a scenario of course, everything sits on a continuum.)
Something more realistic, I think, is that even when a new useful machine is introduced, and the productivity of the producers of that machine shoots up, the salaries of the machine-makers won’t shoot up proportionally (maybe it’s easy to train people to make these machines?). And maybe the ratio skews: automation will displace X people, and the increased demand for automation will get X/5 people hired. So on the one hand you get major job loss, and on the other a minor salary hike and minor job creation.
How to model what is lost here? Isn’t there some kind of conservation law, so that the surplus has to go somewhere (presumably into the pockets of the shareholders of both the companies buying and the companies producing the machines)?
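To make the bookkeeping visible, here are toy numbers (all invented):

```python
# Toy numbers, entirely made up, for the scenario above.
displaced_workers = 1000
old_wage = 40_000        # annual wage of each displaced worker
hired_workers = 200      # the X/5 hired by the machine-maker
new_wage = 48_000        # with a minor salary hike

wages_lost = displaced_workers * old_wage      # $40,000,000
wages_created = hired_workers * new_wage       # $9,600,000

# The machines do the same work the displaced workers did, so roughly
# this much of the old wage bill no longer flows to labor at all:
surplus = wages_lost - wages_created
print(f"${surplus:,}")  # $30,400,000 -- captured where?
```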
I think rationality ought to encompass more than explicit decision making (and I think there is plenty of writing on this website showing it does, even within the community).
If you think of instrumental rationality as the science of how to win, then it necessarily entails considering things like how to set up your environment, unthinking habits, and how to “hack” into your psyche/emotions.
Put otherwise, it seems you share your definition of Rationality with David Chapman (of https://meaningness.com/ ) — and I’m thinking of that + what he calls “meta-rationality”.
So when is rationality relevant? Always! It’s literally the science of how to make your life better / achieve your values.
Of course, I’m setting that up by definition… And if you look at what’s actually available community-wise, we still have a long way to go. But still, there is quite a bit of content about fundamental ways in which to improve, not all of which have to do with explicit decision making or an explicit step-by-step plan where each step is an action to carry out explicitly.
Seems to me you’re on about treating (or, more to the point, dreaming about treating) the root cause rather than the symptoms: whatever makes people vulnerable to the social network sink in the first place. The same fundamental weakness probably has a lot of other failure modes.
Category theory, of which I’m acquainted with at a basic level, seems to formalize a lot of regularities I already knew about as a programmer and a student of <those mathematics topics that were taught to me as part of my CS master’s degree>.
I found it mathematically neat, but I have never derived any useful insights from it. Put otherwise, nothing would have changed if I had never been introduced to it. This seems quite wrong to me, so I was quite interested in reading the answers here. Unfortunately, there is not much in the way of insight.
What is this? The links seem to require some login and registration is limited to students of some specific universities.
Is it even possible for a curated selection to avoid being deemed better? Maybe if it fails horribly at what it set out to do, but otherwise?
I strongly second Michaël’s recommendation: if there’s any place where things should be clear, it’s the front page of Less Wrong.
For me, what separates mindfulness from rumination is that in mindfulness you observe things and accept them, whereas in rumination you’re trying to fight or hold onto something.
Constantly reminiscing about a slight is a good way to make it loom large. It’s an unwillingness either to resolve the matter or to let it be.
Similarly, fighting negative emotions (pain, loss, anger) makes them worse when they inevitably break through.
Great post! More of an exploration than a presentation, but a thoroughly enjoyable one.
Last year, I sat down with some hard thoughts about my own life philosophy, and came out with essentially the same conclusion: that enjoying life is about the process of getting somewhere rather than about actually getting there.
There are some intriguing new elements here, including the link with entropy (though I do tend to think that the ending is perhaps a tad too abstract and speculative).
I too was inspired by reading and quotes; of the ones that guided me in this direction, the most relevant is perhaps:
What man actually needs is not some tensionless state but rather the striving and struggling for some goal worthy of him. — Viktor Frankl
(feel free to reach out, there’s a whole lot more of them)
I also can’t resist linking this Hunter S. Thompson letter, which is perhaps the piece of writing that has influenced me the most, and is completely in line with what you propose here.
You should probably specify which generation you’re in =)
I’m 28. I don’t know that the next generation has “gone too far”, but the big difference I see between them and my generation is that we were the last generation to grow up without pervasive internet / smartphones / social networks. Facebook boomed (at least in Europe) right as I entered college.
What it entails is a lack of focus. I won’t say my generation is very focused, but the next one is certainly worse. As a TA, I can witness this firsthand.
For applied rationality, my 10% improvement problem: https://www.lesswrong.com/posts/Aq8QSD3wb2epxuzEC/the-10-improvement-problem
Basically: how do you notice small (10% or less) improvements in areas that are hard to quantify? This is important because, after reaping the low-hanging fruit, stacking those small improvements is how you get ahead.
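To see why the stacking matters: seven compounded 10% gains already roughly double your output (1.1^7 ≈ 1.95), but only if you can notice each gain well enough to keep it.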
I thought the piece was interesting.
If I can offer some feedback on form, I also thought it was too long for what it did say, and conversely did not say some things I would have wanted it to.
For me, the gist of the article really is this:
What I really wanted out of the system, in each case, wasn’t the most valuable thing to get, or what it had to teach me. What I wanted was me, and my own beliefs, and for everything to stay the same, so that my prime directive would be met.
This is somewhat relatable. It’s intriguing! But is it true? I’m having some doubts. If I’m taking on some endeavour, or even some experiment, my goal isn’t to be confirmed in my current identity. But could my current identity be a force that acts against that endeavour, or against the honest fulfilment of the experiment? Probably. Would you agree, or do you see this differently?
Where does this come from?
What to do about it?
Would the tl;dr “integrate the evidence presented by revealed preferences” be accurate?
Putting technical limitations aside (which are a huge deal, at the very least for video), the problem is that the audiences were built using the platform, and don’t carry over easily.
The creators were able to build their audiences because, notably:
The platforms have idle eyeballs actively looking for good content *on the platform*. No one googles for content these days, only for answers.
The recommendation algorithms sometimes work, or at least you can make them work for you. Even if you have to figure out the peculiarities of the algorithm, this is vastly simpler than cracking global marketing. And again, active digital marketing for content typically passes through social media anyway! This is where the people are, it’s where they look, and it’s where they will stumble on you even when they’re not looking.
The alternative is being so damn appealing that you’ll spread by word of mouth. And even then, you’d do better on a platform; it’s just an incredible force multiplier.
The audiences don’t carry over because, simply put, they are living on the platform. It’s centralized. They consume many things there, so they will check it. Most people don’t know RSS and it’s being phased out of many browsers. You’ll lose most of your subscribers.
And you are wrong: the algorithms do account for many of the views of the top creators, on top of their subscribers.
Could they survive without the platform? Of course! Would they do better? No chance.
Finally, anger at the platform is generally about it being less good than it used to be. But think about, for instance, demonetization on YouTube. Well, you can still sign your own deals and include your own ads in the videos. If you leave the platform, you have to do this anyway. But if you stay, it remains an option.
Brings two things to mind:
The Dark Arts of Rationality series and its compartmentalization and inconsistency techniques. I’m toying with that a bit, but I don’t have a good account to give yet.
The fact (apparently) that placebos work even if you know they are placebos.
So I’d say that clearly many people are getting self-reported benefits from self-deception.
Key to understanding the phenomenon is the system 1 / system 2 (fast / slow) distinction. Typically, you know in system 2 that you are deceiving yourself, but you act out the deception in system 1.
I don’t think one can generalize so easily from bounded-options full-information games like those to the whole range of human endeavours.
I’m reading this, and it seems very reasonable, and then:
Changing our perspective might have significant benefits. Systematized winning is not an actionable definition. Most domains already have field specific knowledge on how to win, and in aggregate these organized practices are called society. The most powerful engine of systematized winning developed thus far is civilization.
So, assume civilization is a set of guidelines that dictate a course of action. Just like rationality, in fact. How can this beat rationality? If it dictates the correct course of action, rationality will too. And often, rationality can suggest something more effective.
The possible counters are: (a) rationality is hard work, and mostly sticking with civilization is fine; or (b) you’re not a good enough rationalist (or don’t have good enough information) to beat civilizational guidelines.
But the article does not really suggest those. It says civilization is already winning. Well, it all hinges on the definition of winning. But it’s quite clear you can achieve better outcomes through rationality if that’s what you care about and are not put off by the extra work (counter (a)).
The counters are interesting but ultimately irrelevant. You can actually rationally arrive at (a): determining that the cost incurred by practicing rationality is more than the benefits accrued. That being said, it’s so general a statement that I don’t think it can be true for anyone capable of thinking these thoughts. You can also rationally arrive at (b), and in fact, if it’s true, you should: civilization IS evidence, and it has to be weighted accurately. If civilizational guidelines keep trumping your best guesses, the weight of civilizational evidence should increase accordingly.
Why bother voting? Your vote will only change the result if it would otherwise be an exact tie; and the chance of that is negligible – one in millions.
But a chance of one in millions is worth taking if the jackpot is billions or trillions. That is, the opportunity for you to select a better rather than worse government, thereby making the country – though not yourself – billions or trillions of dollars better off. So as long as you care at least slightly about the rest of the country, voting is rational; civic duty really is a reason to vote.
That’s an incredibly spurious premise right off the bat. Personally, I don’t care all that much if the country is billions or trillions better off… my share of that ranges from single-digit dollar amounts to a couple of hundred. Also, that supposes the government has this kind of influence (especially if you counter the first point by positing bigger amounts). And as long as people are not going into poverty, I still mostly care about myself.
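Spelling out the arithmetic (population and gain figures invented for illustration):

```python
# Back-of-the-envelope: divide the national gain by the population
# (all numbers invented for illustration).
population = 300_000_000
for national_gain in (1_000_000_000, 100_000_000_000):  # $1B .. $100B
    print(f"${national_gain / population:,.0f} per person")
# $3 per person
# $333 per person
```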
People hate to hear this, and I usually don’t bring it up because it’s counterproductive, but: voting is not rational except in very small elections. The catch is that if everyone thinks this way, you have a serious problem. Yep, that’s the tragedy of the commons.
A possible way to solve the issue is to make the vote legally mandatory (which is the case in my country—Belgium). This might lead to more uninformed ballots being cast, but I’m not entirely sure (most of the ballots are uninformed regardless).
Bravo! This essay is very well put together, and it made my mind go “bling” a couple of times.
I have experienced guilt for not taking well to criticism, and I feel this piece helps explain why: the criticism didn’t address my own dissatisfaction with the work, nor highlight what I thought was an important shortcoming. Looking forward, it required things of me without actually helping me make something better. But as you mentioned, feedback (just an alias for criticism) is almost sacred in certain circles nowadays.
I’ve been thinking about this too, and I’m not sure guides suffice. Getting in shape or learning about a topic are simple problems (not that they can’t be challenging in their own right) compared to the complexity of actually achieving something.
At this point, we don’t even have good theories or hypotheses on why these things are hard. It’s a lot of small issues that aggregate and compound. Motivation is a big class of these issues. Not seeing clearly enough is another: failure to perceive danger, opportunities, or alternative ways of doing things.
To achieve you have to get the strategy, the tactics and the operations right. There’s a lot you can screw up at every level.
One key issue, I think, is that it’s damn hard to hack yourself on some fundamental levels. For instance, to “be more perceptive”. You can’t really install a TAP for that. I guess some mindfulness practice can help (although I’d be wary of prescribing meditation; more like mindfulness on the move). Consuming self-help, insights, news, and so on only seems to move the needle marginally.
So yeah, I don’t know. Just throwing some ideas out there.
Something like this: https://www.lesswrong.com/posts/qwdupkFd6kmeZHYXy/build-small-skills-in-the-right-order might be a nice starting point. Maybe, just maybe, we’re trying to lift heavy weights without having built the required muscles. Worth investigating and expanding.