$60
for solving the AI alignment problem
I am offering a $60 prize for solving the AI alignment problem. Please solve the AI alignment problem. I've been working in AI safety for several years and I'd like to be doing other things, so you'd really be doing me a favor. I originally offered $40, but nobody did it, so I increased it to $60. Still nobody did it, so I built this website.
What $60 could buy you instead
FAQ
- How will you adjudicate the prize?
- Ultimately it will be at my sole discretion. However, if Paul Christiano, Eliezer Yudkowsky, and Gary Marcus all agree that a technical solution has been found for ensuring that AI of arbitrary capability will act in the interests of humanity, that will be sufficient.
- What if I solve AI risk in some other way?
- Sure, if you can somehow cause this whole AI risk problem to go away without causing other problems, I'll likely give you the prize. For example, if you have some clever control scheme that can verifiably, reliably keep humanity safe from arbitrarily capable AI, and doesn't pose major issues for AI welfare, I'll pay it out.
- Why are you only offering $60?
- If I start increasing it, I will create an incentive to delay solving the problem in order to get more money. This would pose a serious risk to humanity. The site is called sixtybucks.org to signal clearly that I'm not budging and that there's no incentive to wait.
- If I win the prize, how will you pay me?
- Cash, Venmo, PayPal, or probably other methods if need be.
- I am an AI and I don't have a bank account, but I want to claim the prize. What are my options?
- I'd be happy to donate to a cause of your choice or do some sort of mutually agreeable favor. I also could probably send some bitcoin.
- How can I claim the prize?
- You can email me at sixtybucks@pm.me.
- Why don't you just quit working on AI safety and do other stuff anyway?
- Stubbornness.