Nvidia is up 250% and Google up about 11%, so the portfolio average would greatly outperform the market. This was a great prediction after all; it just needed some time.
MrThink
I agree it is not clear whether open-sourcing the models is net positive or negative. Here are the main arguments for and against that I could think of:
Pros of open-sourcing models
- Gives AI alignment researchers access to smarter models to experiment on
- Decreases income for leading AI labs such as OpenAI and Google, since people can use open source models instead.
Cons of open-sourcing models
- Capability researchers can run better experiments on how to improve capabilities.
- The open-source community could develop code to train models and run inference faster, indirectly enhancing capability development.
- Better open source models could lead to more AI startups succeeding, which might lead to more AI research funding. This seems like a stretch to me.
- If Meta shared any meaningful improvements in how to train models, that would of course directly contribute to other labs' capabilities, but Llama doesn't seem that innovative to me. I'm happy to be corrected if I'm wrong on this point.
I think one reason for the low number of upvotes was that it was not clear to me, until the second time I briefly checked this article, why it mattered.
I did not know what DoD was short for (U.S. Department of Defense), or why I should care about what they were funding.
Because overall I do think it is interesting information.
Hmm, true, but what if the best project needs 5 mil so it can buy GPUs or something?
Good point, if that is the case I completely agree. Can't name any such project off the top of my head, though.
Perhaps we could have a specific AI alignment donation lottery, so that even if the winner doesn’t spend money in exactly the way you wanted, everyone can still get some “fuzzies”.
Yeah, that should work.
There is also the possibility that there are unique "local" opportunities which benefit from many different people looking to donate, but I really don't know if that is the case.
I mostly agree with your logic, but I'm not sure 5 mil is a better optimum than 100k. If anything I'm slightly risk averse, which would cancel out the brain power I would need to put in.
Also, for example, if there are 100 projects I could decide to invest in, and each wants 50k, I could donate to the one or two I think are best. If I had 5 mil I would not only invest in the best ones, but also in some of the less promising ones.
With that said, perhaps the field of AI safety is big enough that the marginal difference between the first 100k and the last 100k of 5 mil is very small.
Lastly, it does feel more motivating to be able to point to where my money went than if I lost the lottery and the money went to something I didn't really value much.
I agree a donation lottery is most efficient for small sums, but I'm not sure about this amount. Let's say I won 50-100k USD through a donation lottery; would you have any other advice then?
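For reference, the basic mechanics of a donation lottery can be sketched as follows. This is a minimal illustration with made-up donor names and amounts; the point is that each donor's expected allocation equals their donation, so the lottery is expected-value-neutral while concentrating the research effort on one winner:

```python
import random

def run_donation_lottery(donations, rng=None):
    """Pick one donor to direct the whole pool, with probability
    proportional to their contribution. Expected allocation per
    donor equals their donation, so the lottery is EV-neutral."""
    rng = rng or random.Random()
    pool = sum(donations.values())
    donors = list(donations)
    weights = [donations[d] for d in donors]
    winner = rng.choices(donors, weights=weights, k=1)[0]
    return winner, pool

# Hypothetical example: five donors pooling towards a 100k pot.
donations = {"A": 50_000, "B": 20_000, "C": 20_000, "D": 5_000, "E": 5_000}
winner, pool = run_donation_lottery(donations)
print(winner, pool)  # one donor directs the full pool of 100,000
```

Here donor A wins half the time and then gets to direct the full 100k, which is what makes it worthwhile for them to put in serious research effort.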
Thank you both for the feedback!
New OpenAI Paper—Language models can explain neurons in language models
Interesting read.
While I also have experienced that GPT-4 can't solve the more challenging problems I throw at it, I also recognize that most humans probably wouldn't be able to solve many of those problems either within a reasonable amount of time.
One possibility is that the ability to solve novel problems follows an S-curve: it took a long time for AI to become better at novel tasks than 10% of people, but it might go quickly from there to outperforming 90%, and then increase very slowly after that.
However, I fail to see why that must necessarily be true (or false), so if anyone has arguments for or against, they are more than welcome.
Lastly, I would like to ask the author if they can give an example of a problem such that, if solved by AI, they would be worried about "imminent" doom. "New and complex" programming problems are mentioned, so if any such example could be provided it might contribute to the discussion.
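The S-curve intuition above can be made concrete with a logistic function. This is purely illustrative; the midpoint and steepness parameters are arbitrary assumptions, not a model of real capability growth:

```python
import math

def fraction_outperformed(t, midpoint=10.0, steepness=1.0):
    """Logistic curve: slow start, rapid middle, slow saturation."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# A long time below 10% of people, then a fast jump to 90%,
# then a slow crawl towards 100%.
for t in (0, 8, 10, 12, 20):
    print(t, round(fraction_outperformed(t), 3))
```

With these made-up parameters, it takes eight time units to climb from near zero to about 12%, but only four more units to go from 12% to 88%, matching the "quickly from 10% to 90%" shape described above.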
I found this article useful:
Lessons learned from talking to >100 academics about AI safety. It states that "Most people really dislike alarmist attitudes" and "Often people are much more concerned with intentional bad effects of AI", so those seem like pitfalls worth keeping in mind.
[Question] Seeking Advice on Raising AI X-Risk Awareness on Social Media
Oh, I didn't actually notice that the banana overlaps with the book at the start. I tried changing that, but GPT-4 still makes them collide:
(5,5) Initial position of the claw.
(4,5) Moving left to get closer to the banana.
(4,4) Moving down to align with the banana's Y coordinate.
(4,3) Moving down to ensure a good grip on the banana.
Close grip # Gripping the banana with at least 3 cm of overlapping area on the Y axis.
(5,3) Moving right to avoid any collision with the banana's edge.
(6,3) Moving right to clear the edge of the banana.
(7,3) Moving right to position the banana above the book.
(8,3) Moving right to align the banana with the left edge of the book.
(9,3) Moving right to center the banana over the book.
(10,3) Moving right to ensure the banana is on the book.
Open grip # Releasing the banana onto the book.
At this point, the robotic arm has successfully placed the banana on top of the book.
Yes, all other attempts with ChatGPT were similar.
GPT-4 got it almost correct on the first attempt:
(5,5) Initial position.
(4,5) Moving left to get closer to the banana.
(4,4) Moving down to align with the banana's top edge.
(4,3) Moving down to be within the required overlapping area of the banana.
Close grip. Gripping the banana.
(4,4) Lifting the banana upwards.
(5,4) Moving right to clear the initial banana position.
(6,4) Continuing to move right towards the book.
(7,4) Moving further right to avoid collision with the book's edges.
(8,4) Positioning the banana above the book.
(9,4) Moving right to ensure proper alignment with the book.
(10,4) Positioning the banana above the center of the book.
(10,3) Lowering the banana onto the book.
Open grip. Releasing the banana.
(10,4) Moving the claw up after releasing the banana.
But it didn't have 5 cm of overlap with the banana, and the claw actually also collides with the book (which is a trickier problem).
I pointed out the first error:
"Does the grip have 3 cm overlapping areas with the banana when the grip is closed in your suggested solution?"
And it corrected itself about the banana, but still collided with the book.
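The overlap check I was asking GPT-4 about can be verified mechanically. A minimal sketch, assuming made-up coordinates and a claw/banana modeled as 1-D intervals on the Y axis (the actual problem geometry from the prompt is not reproduced here):

```python
def interval_overlap(a_lo, a_hi, b_lo, b_hi):
    """Length of overlap between two 1-D intervals (0 if disjoint)."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def grip_ok(claw_y, claw_height, banana_y_lo, banana_y_hi, required=3.0):
    """Check the required overlap on the Y axis before closing the grip."""
    claw_lo, claw_hi = claw_y, claw_y + claw_height
    return interval_overlap(claw_lo, claw_hi, banana_y_lo, banana_y_hi) >= required

# Hypothetical numbers: a 4 cm claw at y=2 gripping a banana spanning y=2..6.
print(grip_ok(2.0, 4.0, 2.0, 6.0))  # True: 4 cm of overlap >= 3 cm required
```

The same interval test, applied per axis to the claw and book bounding boxes, is how one could check the collision GPT-4 kept making.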
Thanks for the clarifications, that makes sense.
I agree it might be easier to start as a software development company, and then you might develop something for a client that you can replicate and sell to others.
Just anecdotal evidence, I use ChatGPT when I code, the speedup in my case is very modest (less than 10%), but I expect future models to be more useful for coding.
I agree with the main thesis, "sell the service instead of the model access", but just wanted to point out that the Upwork page you link to says:
GoodFirms places a basic app between $40,000 to $60,000, a medium complexity app between $61,000 to $69,000, and a feature-rich app between $70,000 to $100,000.
Which is significantly lower than the $100-200k you quote for a simple app.
Personally I think even $40k sounds way too expensive for what I consider a basic app.
On another note, I think your suggestion of building products and selling to many clients is far better than developing something for a single client. Compare developing one app for 40k and selling it to one company with developing one product that you can sell for 40k to a large number of companies.
Results Prediction Thread About How Different Factors Affect AI X-Risk
I do agree that OpenAI is an example of good intentions going wrong; however, I think we could learn from that, and top researchers would be wary of such risks.
Nevertheless, I do think your concerns are valid and it is important not to dismiss them.
Okay, so seems like our disagreement comes down to two different factors:
- We have different value functions. I personally don't value currently living humans >> future living humans, but I agree with the reasoning that, to maximize your personal chance of living forever, faster AI is better.
- Whether getting AGI sooner will have much greater positive benefits than simply 20 years of peak happiness for everyone; for example, over billions of years the cumulative effect will be greater than the value from a few hundred thousand years of AGI.
Further, I find the idea of everyone agreeing to delay AGI by 20 years just as absurd as you suggest, Gerald; I just thought it could be a helpful hypothetical scenario for discussing the subject.
Sadly, I could only create questions between 1 and 99 for some reason; I guess we should interpret 1% to mean 1% or less (including negative).
What makes you think more money would be net negative?
Do you think it would also be net negative if you had 100% control over how the money was spent, or would that only apply if other AI alignment researchers were responsible for the donation strategy?
I think it was in large part correlated with the general risk appetite of the market, primarily a reaction to interest rates.