“2019 AI Alignment Literature Review and Charity Comparison”, 2019-12-18:
As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organizations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst’s with regard to possible investments. My aim is basically to judge the output of each organization in 2019 and compare it to their budget. This should give a sense of each organization’s average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of the urgency of their funding needs.
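The two comparisons here are simple ratios: output per dollar as a rough cost-effectiveness proxy, and reserves over annual budget as a “runway” signal of urgency. A minimal sketch in Python of that arithmetic, using hypothetical organization names and figures (none are from the post, and the output scores stand in for the post’s subjective productivity judgments):

```python
# Hypothetical inputs: a subjective output score, the 2019 budget,
# and financial reserves, per organization. All values are made up.
orgs = {
    # name: (subjective_output_score, budget_2019_usd, reserves_usd)
    "OrgA": (8.0, 1_000_000, 2_500_000),
    "OrgB": (5.0, 400_000, 300_000),
}

for name, (output, budget, reserves) in orgs.items():
    cost_effectiveness = output / budget  # output units per dollar spent
    runway_years = reserves / budget      # years of operation covered by reserves
    print(f"{name}: {cost_effectiveness:.2e} output/$, "
          f"{runway_years:.1f} years of runway")
```

All else equal, a longer runway suggests a less urgent need for marginal donations.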
…Here are the un-scientifically-chosen hashtags: Agent Foundations · AI Theory · Amplification · Careers · CIRL · Decision Theory · Ethical Theory · Forecasting · Introduction · Misc · ML safety · Other Xrisk · Overview · Philosophy · Politics · RL · Security · Short-term · Strategy.
- Research organizations reviewed: FHI (The Future of Humanity Institute) · CHAI (The Center for Human-Compatible AI) · MIRI (The Machine Intelligence Research Institute) · GCRI (The Global Catastrophic Risk Institute) · CSER (The Centre for the Study of Existential Risk) · Ought · OpenAI · Google DeepMind · AI Safety Camp · FLI (The Future of Life Institute) · AI Impacts · GPI (The Global Priorities Institute) · FRI (The Foundational Research Institute) · Median Group · CSET (The Center for Security and Emerging Technology) · Leverhulme Centre for the Future of Intelligence · BERI (The Berkeley Existential Risk Initiative) · AI Pulse
- Capital allocators reviewed: LTFF (The Long-Term Future Fund) · OpenPhil (The Open Philanthropy Project)
…The size of the field continues to grow, both in terms of funding and researchers, which makes it increasingly hard for individual donors to evaluate the landscape. I’ve attempted to subjectively weigh the productivity of the different organizations against the resources they used to generate that output, and to donate accordingly.