Keeping track of my donations
My giving philosophy: “My case for donating to small, new efforts”
I think the average donor has very little impact when donating to big, established efforts in traditional philanthropy. The biggest impact comes from the philanthropic equivalent of angel investing: funding novel initiatives that could be extremely impactful in relevant cause areas but are still underexplored and underfunded.
On reflection, my donations to small initiatives in the first few months of their existence were probably much more impactful than my donations to big, established efforts. Once someone with orders of magnitude more resources is actively funding a project, it probably no longer needs your donations.
Mark Budolfson and Dean Spears make this case eloquently in their paper “The Hidden Zero Problem: Barriers to Marginal Impact”, where they analyse the marginal effect of philanthropic donations. The core of their analysis is the observation that the marginal good done per dollar donated is the product (in the mathematical sense) of several factors: the change in good done per change in the charity’s activity level, the change in activity per change in the charity’s budget, and the change in budget per change in the individual’s donation to that charity. They then discuss the “hidden zero problem”: some of these factors (in particular, the last one) might be “hidden zeros” that prevent donations from doing any good, or worse, imply that they do harm, even if the charity sits at the top of rankings based on the other factors.
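In my own shorthand (the symbols below are mine, not the paper’s), writing $G$ for good done, $A$ for the charity’s activity level, $B$ for its budget, and $x$ for an individual donation, the decomposition looks like:

$$
\frac{\partial G}{\partial x} \;=\; \frac{\partial G}{\partial A}\cdot\frac{\partial A}{\partial B}\cdot\frac{\partial B}{\partial x}
$$

Because this is a product, a single zero factor zeroes out the whole thing: if, say, $\partial B/\partial x = 0$ because a much larger funder fills the charity’s funding gap regardless of what I give, my marginal impact is zero no matter how well the charity scores on the other two factors.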
Chronology of what I thought was worth supporting (with a range of small amounts)
- to keep the habit of giving, learn about different efforts, and calibrate my giving
2025
- Taimaka: Pediatric malnutrition treatment – reimagined for scale
- AI-Driven Market Alternatives for a post-AGI world
- MATS
- Next Steps in Developmental Interpretability
- AI Safety Camp
- Biosecurity bootcamp by EffiSciences
- SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents
- Epoch AI: Data Curation, Capability Measurement, Benchmarks & Economic Modeling
- Free Democratic Party Germany
- Ozempic for Sleep: Research for Safely Reducing Sleep Needs
2024
- Malaria Consortium
- Isaak Freeman @ Boyden: high-res computational models of neuronal circuits of C. elegans
- Speculative Technologies
- Blueprint Biosecurity
- Patreon: Gwern, Roots of Progress Institute, Rational Animations, AXRP Pod, The Inside View, Andy Matuschak
2023
- METR: Model Evaluation and Threat Research
- Global Shield
- Scaling Training Process Transparency
- Exploring novel research directions in prosaic AI alignment
- Cadenza Labs: AI Safety research group working on their own interpretability agenda; I really like their approach
- AI Safety Research Organization Incubator - Pilot Program
- ML Alignment & Theory Scholars (MATS) Program
- Empirical research into AI consciousness and moral patienthood
- Avoiding Incentives for Performative Prediction in AI
- Long term future fund
- Activation vector steering with BCI
- Empowering AI Governance - Grad School Costs Support for Technical AIS Research
- Build an AI Safety Lab at Oxford University
- AI Alignment Research Lab for Africa
- Introductory resources for Singular Learning Theory
- WhiteBox Research: Training Exclusively for Mechanistic Interpretability
- Compute and other expenses for LLM alignment research
- The Rethink Priorities Existential Security team: Research Fellow hire
- Optimizing clinical Metagenomics and Far-UVC implementation
- Run five international hackathons on AI safety research
- Apollo Research: Scale up interpretability & behavioral model evals research
- Automated Interpretability and Memory Management in Transformers
- Agency and (Dis)Empowerment by Damiano Fornasiere
- Discovering latent goals by Lucy Farnik
- Scoping Developmental Interpretability by Jesse Hoogland
- Targeted Interpretability Work
- Joseph Bloom - Independent AI Safety Research on offline-RL agents using mechanistic interpretability in order to understand goals and agency.
- Lightcone Infrastructure/LessWrong
- Long-term future fund
- The Inside View Podcast
- Metacrisis quadratic donation round
- EA community infrastructure fund
- Long-term future fund
- Global health and development fund
- Animal welfare fund
- ARC Evaluations Project
- FAR AI
- EA infrastructure fund
- Long term future fund
- European Network for AI Safety (ENAIS)
- Alignment Research Center
- Rethink priorities
- The Center for AI Safety (CAIS)
- Center on Long-Term Risk
- Turkey and Syria Earthquake Relief Fund
- Berkeley Existential Risk Initiative
- Taimaka
- Nuclear Threat Initiative
- Institute for Meaning Alignment
- Qualia research institute
- Global poverty fund
- Helen Keller International
- GiveWell recommendation
- EA infrastructure fund
- LEVF: Mouse rejuvenation
- Long term future fund
- EA Germany + Effektiv Spenden
- Noora Health
- Patreon - The Inside View, AXRP Podcast, Rob Miles/AI safety, The Sheekey Science Show/longevity, The Roots of Progress, Andy Matuschak/Creating tools for thought, Rational Animations, Isaac Arthur/SciFi YouTube
2022
- Long-Term Future Fund by Giving What We Can
- Effective Altruism Infrastructure Fund by Giving What We Can
- GiveWell
- Berkeley Existential Risk Initiative
- Material Innovation Initiative
- Spark Climate - Ryan’s top recommendation
- Malengo: facilitates international educational migration (starting with Uganda<>Germany and Ukraine<>Germany; cause exploration)
- 100+ Gitcoin grants I’ve supported via quadratically matched donations, from open + decentralized science and climate to open source, matched with approx. $20k+
- Long-Term Future Fund: Donate to people or projects that aim to improve the long-term future, such as by reducing risks from artificial intelligence and engineered pandemics.
- Nuclear Threat Initiative
- Taimaka
- The Effective Altruism Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.
- protect reproductive rights
- Clean air task force
- Founders Pledge (Climate Change Fund)
- ACX Grants
- Kickstart
- Clean Air Task Force
- Patreon - Rob Miles/AI safety, The Sheekey Science Show/longevity, The Roots of Progress, Andy Matuschak/Creating tools for thought, Rational Animations, Isaac Arthur/SciFi YouTube
2021: see all and donate easily through every.org, or Endaoment for direct crypto donations
- The Against Malaria Foundation
- The Knowledge Society
- Evidence Action
- Carbon180
- Khan Academy
- Black Girls CODE
- Cool Earth
- Science and Tech Future
- Sightsavers
- Taimaka Project
- 80,000 hours
- founders pledge
- animal charity evaluators
- maps mental health
- wikipedia
- generation pledge
- terra praxis
- silverlining climate
- strong minds
- rethink charity
- mars society
- malaria consortium
- clean air task force
- centre for effective altruism
- founders pledge science & tech
- legal priorities project
- nuclear threat initiative
- strong minds
- centre for health security
- effective altruism foundation
- our world in data
- future of life institute
- founderspledge patient philanthropy
- centre for human compatible ai
- rethink priorities
- machine intelligence research inst
- global health and dev fund
- climate change fund
- berkeley x risk
- qualia research
- fdp
- newscience.org
- ~70 projects, from open source to longevity
2020
- 80,000 Hours
- CEA
- Partei für Gesundheitsforschung (Party for Health Research)
- EA Fund Global Poverty
- EA Fund Long-term future
- EA Fund EA Infrastructure
- SENS
- Our World in Data
- MAPS
- StrongMinds
- Berkeley X Risk, SENS, CEA, …
- ~10 projects I supported through Gitcoin (from open source to longevity), matched with approx. ~$1k+
2018-2019
- EA Fund Global Poverty
- EA Fund Long-term future
- EA Fund EA Infrastructure
- EA Fund Animal Suffering
- SENS
2017 and before
- The Ocean Cleanup
- NABU