“2019 AI Alignment Literature Review and Charity Comparison”, Larks, 2019-12-18:

As in 2016, 2017, and 2018, I have attempted to review the research produced by various organizations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to the one GiveWell performs for global health charities, and somewhat similar to that of a securities analyst with regard to possible investments. My aim is basically to judge the output of each organization in 2019 and compare it to their budget. This should give a sense of the organizations’ average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency.

…Here are the un-scientifically-chosen hashtags: Agent Foundations · AI Theory · Amplification · Careers · CIRL · Decision Theory · Ethical Theory · Forecasting · Introduction · Misc · ML safety · Other Xrisk · Overview · Philosophy · Politics · RL · Security · Short-term · Strategy.

…The size of the field continues to grow, both in terms of funding and researchers. Both make it increasingly hard for individual donors to evaluate. I’ve attempted to subjectively weigh the productivity of the different organizations against the resources they used to generate that output, and donate accordingly.