

https://twitter.com/paulg/status/1648277133620396033?s=20
Paul Graham @paulg I honestly don't know what to think about the potential dangers of AI. There are so many powerful forces at work that there's a wide span of possibilities.
Kyle Hailey @kylelf_ easy, just make sure our progress in alignment is faster than the progress in the power of models. Done. Safe. Go have fun and prosper.
Easy? How do you even put numbers on those two things?
A straightforward approach is to allocate more funding to alignment research than to the development of increasingly powerful models. While a more nuanced metric would be ideal, focusing financial resources on safety and alignment can serve as an effective safeguard. This way, we prioritize safety while advancing AI technology.
Consider the global population of roughly 8 billion people. There are ~400 people at OpenAI and 2,000-4,000 at Google working on AI. Dedicating 10,000 individuals to AI alignment would commit about 0.0001% of the planet's population to keeping AI safe.
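To make that head-count arithmetic concrete, here is a minimal Python sketch; the population and staffing figures are the rough estimates above, not measured data:

```python
# Back-of-the-envelope: what fraction of humanity would 10,000
# alignment researchers be? All figures are rough estimates.
world_population = 8_000_000_000  # ~8 billion people (2023)
openai_staff = 400                # rough headcount at OpenAI
google_ai_staff = 3_000           # midpoint of the 2,000-4,000 estimate
alignment_team = 10_000           # proposed alignment headcount

fraction = alignment_team / world_population
print(f"{fraction:.6%} of the world population")  # 0.000125%
```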
AI companies need incentives to prioritize alignment research. While government regulation may fall short, tax breaks for AI companies investing in alignment could attract attention like bees to honey. Let's make AI safety both appealing and rewarding! #AIalignment #taxincentives

You make this all sound so simple! Tax breaks for alignment would never happen - that would amount to tax breaks for “Woke AI” in the eyes of many. The only way it could happen is if things go terribly wrong. I want to be the optimist here, but realism comes first.
Solution to AI safety: create incentives that force AI alignment research to accelerate faster than the growth in model power - irresistible tax incentives for Google & OpenAI, funded with 1.1% ($10B) of the US military budget ($886B). Those tax incentives would pay for the employment of 10,000 top researchers at $1M salary each (or, hey, $50K/year if they're in grad school). Secure humanity's future & unleash AI's potential! #AIalignment #InvestInSafety
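A quick sanity check of the funding math in that proposal (a sketch; all dollar figures are the round numbers from the tweet):

```python
# Verify that $10B is ~1.1% of the US military budget and
# covers 10,000 researchers at $1M each. Round numbers only.
military_budget = 886_000_000_000  # US military budget, ~$886B
alignment_fund = 10_000_000_000    # proposed carve-out, $10B
senior_salary = 1_000_000          # $1M per top researcher

share = alignment_fund / military_budget
print(f"Share of military budget: {share:.2%}")  # 1.13%
print(f"Researchers funded at $1M each: {alignment_fund // senior_salary:,}")  # 10,000
```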