The concept learning proposal is basically finished, right? Submit it now. (OK, I’m biased—I really like this proposal.) Start writing up the others in the order you think is most important / most likely to be funded and submit those as well.
As some feedback, I don’t think “what are human values?” is likely to get much funding from FLI, although obviously a representative of that organization should correct me if I’m wrong. They seem to have a preference for projects more directly connected to code.
Regarding your third idea, I’m pretty sure there is already some published work in this area. I certainly recall some discussion in the OpenCog community about the nature of creativity and concept formation via conceptual metaphors. I’m pretty sure that was in response to some published academic papers, but I’ll have to dig those up...
Good point, that makes sense.
I guess “can’t choose the right one” wasn’t actually my true rejection; rather, I’m hesitating because I’m not sure whether this field is actually where my comparative advantage lies, and whether this is the kind of thing that I’d want to be doing. I do fine when it comes to vague philosophizing at the level of my original concept learning paper, but I’m much less certain of my ability to do actual rigorous technical work. Meanwhile I seem to be getting promising feedback suggesting I’m doing well on some other (non-technical) high-impact projects I’ve been pursuing.
Though I guess I could apply for the first stage of the grants anyway and decide later, since it doesn’t commit me to anything yet...
What else are you considering?
I’d say that’s only half the equation, though. You should also weight by how unique that contribution would be. We simply don’t have enough people doing AGI work like concept formation. Not to place too much pressure on you, but if you don’t work on this, it’s not clear who would. It’s an underfunded area academically (hence these grants are a great opportunity), and too long-term to be a part of industrial research efforts...
Rationality training and community-building, basically.
But I just submitted my FLI grant application for the concept learning project anyway. :-)
Rationality training by itself is worse than useless. Apply things in practice, or you risk building free-floating castles detached from any practical application. A basic rule of thumb: if you spend more than 10-15% of your time on meta improvements, you are probably accomplishing less in your life than you could be. That means 85-90% of your time should be spent doing actual work.
As for community building, if that floats your boat, sure, why not. I’m hoping you choose the FLI grant instead, however. :)
Oh yeah, forgot to say that my initial grant application on concept learning was accepted to the second round of proposals.
Working on the full-length proposal now.
:)
Let me know if you need a review.
Yeah, CFAR-style rationality training is the goal: carried out by actually troubleshooting and solving one’s real-life problems, while also building a community of like-minded people to remind you to actually think about your problems instead of doing whatever default thing comes to mind.