CFAR should post more stats about success rates of any kind and maintain them across all cohorts.
Also stats about how many people asked for their money back.
It’s not really RAM, but rather a tape (like a doubly linked list): the LSTM controller can’t address an arbitrary location in logarithmic space/time. They do add multiple tape readers at one point, though.
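To make the RAM-vs-tape distinction concrete, here is a toy sketch (my own construction, not the paper's actual architecture): a doubly linked tape whose head can only shift one cell per step, so reaching a cell n positions away costs n steps, versus a single O(1) indexed load into an array.

```python
class TapeCell:
    """One cell of a doubly linked tape."""
    def __init__(self, value=0):
        self.value = value
        self.left = None
        self.right = None

class Tape:
    """A tape with a single head that can only shift one cell per step,
    so seeking to a cell n positions away costs n steps."""
    def __init__(self):
        self.head = TapeCell()
        self.steps = 0

    def move(self, direction):
        # Lazily grow the tape in whichever direction we move.
        attr = 'right' if direction > 0 else 'left'
        back = 'left' if direction > 0 else 'right'
        if getattr(self.head, attr) is None:
            cell = TapeCell()
            setattr(self.head, attr, cell)
            setattr(cell, back, self.head)
        self.head = getattr(self.head, attr)
        self.steps += 1

    def seek(self, offset):
        for _ in range(abs(offset)):
            self.move(1 if offset > 0 else -1)

tape = Tape()
tape.seek(100)       # 100 head movements to reach cell 100
print(tape.steps)    # 100

ram = [0] * 101
x = ram[100]         # a single indexed load, by contrast
```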
The performance curves database site only has data submitted from ’09 to ’10, so none of the data sets are up to date. There’s also not much context for the data, so it’s easy to misinterpret.
It’d just model a world where, if the machine it sees in the mirror turns off, it can no longer influence what happens.
Once the function it uses to model the world becomes detailed enough, it can predict that it will only be able to do certain things if some objects in the world survive, like the program running on that computer over there.
Gattaca, except everyone is actually superhuman and nobody cares about whether you’ll have a heart attack at thirty except your doctor.
It seems like Qualia the Purple is a manga where, after a certain point, the author introduces magic and starts giving philosophical explanations for how the main character can do magic, turn into other people, go back in time, and generally do whatever the fuck she wants, except save one person. What does “actually try” mean?
Iterated embryo selection was pretty interesting. I wonder if there is anything viable about inserting new neurons/synapses into the human brain, or activating their growth, particularly in specifically targeted areas, like the section(s) where people do math.
Is there literature I could read on the differences between the performance of the neurons DARPA uses and the neurons Blue Brain uses?
Are there examples in the different octants suggested by this? In particular, is there an example of something automatic, but slow and effortful?
I wrote a 140-character lambda calculus interpreter, and a bigger, more complete version of it (static name resolution + renaming + REPL).
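The original code isn't shown here, so this is just a minimal sketch of what such an interpreter might look like: an untyped lambda calculus term reducer using normal-order reduction, with terms encoded as tuples. It skips capture-avoiding renaming (assume all bound variable names are distinct), which is exactly the kind of corner the longer version's renaming pass would handle.

```python
# Terms: a variable is a string, an abstraction is ('lam', var, body),
# an application is ('app', f, x).

def subst(term, var, value):
    """Substitute value for var in term. Assumes no variable shadowing,
    so no capture-avoiding renaming is done."""
    if not isinstance(term, tuple):              # variable
        return value if term == var else term
    if term[0] == 'lam':                          # abstraction
        _, v, body = term
        return term if v == var else ('lam', v, subst(body, var, value))
    _, f, x = term                                # application
    return ('app', subst(f, var, value), subst(x, var, value))

def reduce_(term):
    """Repeatedly contract the leftmost-outermost redex (normal order)."""
    while isinstance(term, tuple) and term[0] == 'app':
        f = reduce_(term[1])
        if isinstance(f, tuple) and f[0] == 'lam':
            term = subst(f[2], f[1], term[2])     # beta-reduce
        else:
            return ('app', f, reduce_(term[2]))
    if isinstance(term, tuple) and term[0] == 'lam':
        return ('lam', term[1], reduce_(term[2]))
    return term

# (\x. x) y  ->  y
identity = ('lam', 'x', 'x')
print(reduce_(('app', identity, 'y')))            # 'y'
```

Golfing this below 140 characters mostly means collapsing the AST to strings and doing textual substitution, which is where a parser, static name resolution, and a REPL start to pay for themselves.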