Are you looking to move in there?
Discuss the concept of this thread here. For example, how could it be more useful? What would you do differently?
I attended the Center for Applied Rationality’s June rationality camp in Berkeley, and I would very much like a full-time living environment similar to the one at camp. I’m very interested in joining or helping to create a living environment that values open communication and epistemic hygiene, facilitates house-wide life-hacking experimentation, and provides a collaborative, fulfilling place to live and work.
I’ll finish my computer science degree in May, and I plan to change my living situation at that time. I plan to spend a portion of my time over the next ten months identifying and assessing potential living environments, and I am interested in collaborating with others throughout the process. Contact me if you think collaboration could be mutually beneficial (please err on the side of contacting me).
I started a software development company last summer, through which I have been developing a web application that assesses tasks’ utility in order to suggest high-utility tasks to users. I have not publicly released the application, but I use it daily to manage my own tasks. Contingent on my startup remaining a high-utility prospect in my mind, I’d like to work on it full-time after I graduate. I am very interested in live-work arrangements (i.e. working and living on the same premises), or in living close to a coworking space or an affordable office space.
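The application itself isn’t public, but to give a rough sense of the kind of task scoring involved, here is a minimal Python sketch. The Task fields and the utility-per-hour rule below are simplified stand-ins for illustration, not the app’s actual heuristics:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    value: float            # estimated benefit of completing the task
    hours: float            # estimated time cost
    probability: float = 1.0  # chance the task actually yields its value

    def utility_per_hour(self) -> float:
        # Simple expected-utility rate; the real heuristics would be richer.
        return (self.value * self.probability) / self.hours

def suggest(tasks, n=3):
    """Return the n highest-utility tasks."""
    return sorted(tasks, key=lambda t: t.utility_per_hour(), reverse=True)[:n]

if __name__ == "__main__":
    tasks = [
        Task("Finish degree coursework", value=8, hours=10),
        Task("Prototype startup feature", value=9, hours=6, probability=0.7),
        Task("Apartment hunting research", value=5, hours=3),
    ]
    for t in suggest(tasks):
        print(f"{t.name}: {t.utility_per_hour():.2f} utility/hour")
```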
My finances are limited right now. That would change if I took a full-time software engineering job after I graduate, but I’d rather work on my startup and finance things through part-time or contract work if necessary (if you’re interested in hiring me, please contact me). I’m especially interested in collaborating with other programmers, working in Python or Go, building data visualizations with D3, programming rationality exercises, or working on something that qualifies as “data science”.
I live in Kansas, and it’s all right here. I preferred the weather in Berkeley when I visited last month. I think I would enjoy living in the San Francisco Bay Area, but the cost of living there is high. I’m interested in identifying affordable places to live that are competitive with the Bay Area’s amenities. I’m also very interested in meeting and networking with potential roommates.
In terms of resources, I have found Sperling’s BestPlaces to have a lot of good information about U.S. cities.
I’m interested in idea 2. If you write about it, I’m especially interested in what you think we should do about it.
There are many different ways we could represent a personality (to varying degrees of accuracy). I have not found a widely accepted format, but I think we can each make our own for now. Whenever you wonder why someone acted a certain way, think about what the relevant parameters might have been and write them down. If several people work on this and share their results, perhaps one or more standardized personality representation formats will emerge.
The parameters collected by online user profiles such as those maintained by Facebook, Google Plus, or OkCupid might provide some inspiration.
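As a starting point, an ad-hoc representation could be as simple as a dictionary of named parameters. The attribute names below are just examples I made up to illustrate the idea, not a proposed standard:

```python
# One ad-hoc personality record; every field name here is illustrative only.
personality = {
    "person": "Alice (pseudonym)",
    "big_five": {              # e.g. self-reported scores in [0, 1]
        "openness": 0.8,
        "conscientiousness": 0.6,
        "extraversion": 0.3,
        "agreeableness": 0.7,
        "neuroticism": 0.4,
    },
    "risk_tolerance": 0.5,
    "preferred_communication": "direct",
    "observed_behaviors": [
        # free-form notes taken when you wonder why someone acted a certain way
        {"situation": "group decision", "action": "deferred to majority"},
    ],
}
```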
If we had a good dataset of people and their personality attributes along with some performance measures, we could use machine learning to do neat things like predict relationship compatibility between two people. Imagine a rationalist dating service that used personality data to suggest matches!
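As a toy illustration of what that could look like (assuming we had labeled examples of pairs and how well they got along; scikit-learn is used here only as one possible tool, and the data is randomly generated):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: each row is the concatenated trait vectors of two people,
# and the label says whether they reported being compatible.
rng = np.random.default_rng(0)
pairs = rng.random((200, 10))   # 5 made-up traits per person
labels = (np.abs(pairs[:, :5] - pairs[:, 5:]).mean(axis=1) < 0.3).astype(int)

model = LogisticRegression().fit(pairs, labels)

# Predict compatibility for a new pair of trait vectors.
new_pair = rng.random((1, 10))
print("Predicted compatibility probability:",
      model.predict_proba(new_pair)[0, 1])
```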
Schema.org defines a “Person” model but it focuses primarily on circumstantial attributes rather than mental state.
I like “AI Risk Reduction Institute”. It’s direct, informative, and gives an accurate intuition about the organization’s activities. I think “AI Risk Reduction” is the most intuitive phrase I’ve heard so far with respect to the organization.
“AI Safety” is too vague. If I heard it mentioned, I don’t think I’d have a good intuition about what it meant. Also, it gives me a bad impression because I visualize things like parents ordering their children to fasten their seatbelts.
“Beneficial Architectures” is too vague. It’s not clear it’s AI-related.
“AI Impacts Research” is too vague and non-prescriptive. Unlike “AI Risk Reduction”, it’s ambiguous in its intentions.
I’m writing a forward planner to help me figure out whether to attend university for another year to finish my computer science degree, or do something else such as working for my startup full-time. I have a working prototype of the planner but still need to input most of the possible actions and their effects.
I chose this project because I think my software will do a better job assessing the utility of alternatives than my intuition will, and because I implemented a forward planner for an artificial intelligence class I’m taking and wanted to apply something similar to planning my own future.
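The core of the planner is not much more than a breadth-first search over states. Here is a stripped-down sketch; the action format and the example actions are simplified stand-ins, not my actual model:

```python
from collections import deque

# An action has preconditions (facts that must hold), effects (facts it adds
# or removes), and a utility contribution. A state is a frozenset of facts.
ACTIONS = {
    "finish_degree":    {"pre": {"enrolled"}, "add": {"degree"},           "del": {"enrolled"}, "utility": 5},
    "work_on_startup":  {"pre": set(),        "add": {"startup_progress"}, "del": set(),        "utility": 3},
    "move_to_bay_area": {"pre": {"degree"},   "add": {"in_bay_area"},      "del": set(),        "utility": 2},
}

def plan(initial, horizon=3):
    """Enumerate action sequences up to `horizon` and return the best one."""
    best = (0, [])
    queue = deque([(frozenset(initial), [], 0)])
    while queue:
        state, path, utility = queue.popleft()
        if utility > best[0]:
            best = (utility, path)
        if len(path) >= horizon:
            continue
        for name, a in ACTIONS.items():
            if a["pre"] <= state:
                new_state = (state - a["del"]) | a["add"]
                queue.append((new_state, path + [name], utility + a["utility"]))
    return best

print(plan({"enrolled"}))
```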
Thank you. Your comment resolved some of my confusion. While I didn’t understand it entirely, I am happy to have accrued a long list of relevant background reading.
I have several questions. I hadn’t asked them because I thought I should do more research before taking up your time. Here are some examples:
What does it mean to solve the limited predictor problem? What form should a solution take: an agent program?
What is a decision, more formally? I’m familiar with the precondition/effect paradigm of classical AI planning, but I’ve had trouble conceptualizing Newcomb’s problem in that paradigm.
What, formally, is an agent? What parameters/inputs do your agent programs take?
What does it mean for an agent to prove a theorem in some abstract formal system S?
I plan to do more research and then ask more detailed questions in the relevant discussion threads if I still don’t understand.
I think my failure to comprehend parts of your posts is due more to my lack of familiarity with the subject matter than to your communication style. Adding links to works that establish the assumptions or formal systems you’re using could help less advanced readers start learning that background material without you having to significantly lengthen your posts.
Thanks for the help!
My education in decision theory has been fairly informal so far, and I’ve had trouble understanding some of your recent technical posts because I’ve been uncertain about what assumptions you’ve made. Stating your assumptions more explicitly could reduce arguments about them by making it less likely that readers mistakenly believe you’ve made different assumptions. It could also reduce inquiries about your assumptions, like the one I made on your post on the limited predictor problem.
One way to do this would be to link, in your posts, to other works that define your assumptions. Such links would also connect less-experienced readers with relevant background reading.
In section 2, you say:
Unfortunately you can’t solve most LPPs this way [...]
By solving most LPPs, do you mean writing a general-purpose agent program that correctly maximizes its utility function under most LPPs? I tried to write a program to see if I could show a counterexample, but got stuck when it came to defining what exactly a solution would consist of.
Does the agent get to know N? Can we place a lower bound on N to give the agent time to parse the problem and become aware of its actions? Otherwise, wouldn’t low values of N force failure for any non-trivial agent?
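To make my confusion concrete, here is the sort of skeleton I was attempting. Everything about the interface (an agent as a function of a world description and a step budget N) is my own guess at a formalization, not something taken from your post:

```python
# A crude stand-in for the kind of program I was trying to write. The
# interface below is my guess, not necessarily the intended formalization.

def naive_agent(world, n_steps):
    """Pick the action with the best payoff under the naive assumption
    that the predictor's guess will match whatever we actually do."""
    if n_steps < 1:
        # Not enough steps even to look at the problem; fall back blindly.
        return world["actions"][0]
    return max(world["actions"], key=lambda a: world["payoff"](a, predicted=a))

newcomb_like = {
    "actions": ["one_box", "two_box"],
    # payoff(actual_action, predicted_action); the usual Newcomb-style values
    "payoff": lambda a, predicted: {
        ("one_box", "one_box"): 1_000_000,
        ("one_box", "two_box"): 0,
        ("two_box", "one_box"): 1_001_000,
        ("two_box", "two_box"): 1_000,
    }[(a, predicted)],
}

print(naive_agent(newcomb_like, n_steps=10))
```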
Hi! I’m Patrick Shields, an 18-year-old computer science student who loves AI, rationality and musical theater. I’m happy I finally signed up—thanks for the reminder!
Thanks for posting this. It inspired me to write a more general roommate coordination thread. I’m interested in the living situation you describe, but my housing situation is set until I finish my computer science degree in May. I also don’t have a steady source of income right now.
When considering where I might live after graduation, I’m torn between Silicon Valley and places that might offer a higher quality-to-cost ratio. Can you share some of your rationale for choosing Silicon Valley over your other options? How would not having a steady source of income change your thinking about where to live?