
HOW TO LAUNCH A HIGH-IMPACT FOUNDATION

Online Resources

Starting an impactful foundation and being a strong grantmaker is hard. Few high-quality resources exist (at least in a publicly available form) that teach how to run an evidence-based and highly effective foundation. For this reason, our team wrote How to Launch a High-impact Foundation, which serves both as the core text of our Foundation Incubation Program and as a public resource for those who are interested in improving their grantmaking.

When writing the book, there were often valuable details that we wanted to share, but didn’t make sense to include in the book itself. These extra resources (mentioned in the book) are included here.


Interpreting scientific studies


See cheat sheet here.

See upskilling exercises here.

Best practices for creating CEAs


There aren’t many great resources on how to become skilled at creating cost-effectiveness analyses. Most experts have developed their skills through practice, by creating countless models. We will go over some of the basic steps, but in all likelihood, your first CEA will be terrible. Don’t let that discourage you – it takes time, but you’ll improve.

Software

We considered several software programs and combinations for our cost-effectiveness modeling. The two easiest to rule out were Google Docs and STATA (a complex modeling software). These programs were too simple and overly complex, respectively, for the needs of most organizations. Some tools were great for certain specific use cases (e.g. we loved Causal for time series models). For general CEAs, spreadsheets (like Google Sheets or Microsoft Excel) and Guesstimate seemed to be the best tools. Google Sheets is fast and simple to work with, easy to collaborate on, and easy to get the hang of. While spreadsheets are a common way to build number-based models, they lack a few of the key features we need for more in-depth CEAs. Guesstimate is less commonly used, but it offers advanced Monte Carlo and sensitivity analysis features. It is too slow to use for very quick CEAs, but it can be handy for models with high levels of uncertainty (e.g. we’ve used it to model interventions that hinge heavily on sentience probabilities for different animal species).

For important models, we recommend using both Google Sheets and Guesstimate. Create an initial CEA in a Google Sheets spreadsheet, and then remodel the data in Guesstimate for sensitivity analysis and simulated endline estimates. Using two models makes it easier to spot errors, and reduces the likelihood of a single error significantly impacting the overall outcome.
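The two-model approach can be sketched in code. Below is a toy CEA in Python with purely hypothetical numbers (the cost per child, reach, and effect size are illustrative placeholders, not real figures): a deterministic point estimate of the kind you might build in a spreadsheet, plus a Guesstimate-style Monte Carlo version of the same model that serves as a cross-check and yields an uncertainty range.

```python
import random

# Deterministic model, as you might build it in a spreadsheet.
# All input values are hypothetical placeholders.
def point_estimate_cea(cost_per_child=5.0, children_reached=10_000,
                       dalys_averted_per_child=0.02):
    total_cost = cost_per_child * children_reached
    total_dalys = dalys_averted_per_child * children_reached
    return total_cost / total_dalys  # cost per DALY averted

# The same model with uncertainty on each input, as a Monte Carlo
# cross-check; returns the 5th, 50th, and 95th percentiles.
def monte_carlo_cea(n=10_000, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        cost_per_child = rng.uniform(4.0, 6.0)
        dalys_per_child = rng.uniform(0.01, 0.03)
        results.append(cost_per_child / dalys_per_child)
    results.sort()
    return results[int(0.05 * n)], results[int(0.5 * n)], results[int(0.95 * n)]
```

If the spreadsheet’s point estimate falls far outside the simulated range, one of the two models likely contains an error.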

Formatting

It’s good practice to use consistent formatting across all of your CEAs, even keeping it consistent with other models your CEAs might be compared to (e.g. GiveWell’s). Anyone familiar with GiveWell’s CEAs would then have an easier time understanding yours (and vice versa).

Color coding

Cells can be color-coded to reflect the sources of those numbers.

Yellow: Value and ethical judgments

These numbers could change if the reader has different values from the researcher. For example, people could quite reasonably disagree about the answer to the question “How many years of happiness is losing the life of one child under five worth?” When making these judgments we can consult the available literature, but there is often no clear answer.

Green: Citation-based numbers

These numbers are based on a specific citation. If we found and considered multiple citations, the best will be hyperlinked to the number, and the others will be included in the reference section. If a number is an average of two other numbers, both numbers will be entered into the sheet and the average will become a calculated number with a different color format.

Blue: Calculated number

These numbers are calculations generated from others within the sheet. Calculated numbers should involve no more than five variables, both for readability and to allow for sanity checking. Generally, you are less likely to err by building up a larger number of small subtotals than by writing a single very large, multi-variable calculation.

Orange: Estimated numbers (i.e. assumptions)

Sometimes, no specific numbers can be found for a parameter. In this case, the number is estimated by one or more staff members. These estimates will often be the numbers within a CEA that you have the lowest confidence in.

Discounting

On paper, two interventions might show a similar number of QALYs (quality-adjusted life years), welfare points, or lives saved, etc. In practice, they might be supported by different levels of evidence, occur over different time frames, or have other extenuating factors that change your view of their true cost-effectiveness. Applying discounting factors is one way to address these issues. Here we’re referring both to discounting in the sense of (for example) applying an X% discount to a cost-effectiveness value due to the relative weakness of the evidence-base, and in the more traditional (especially in the for-profit world) sense of applying a discount rate to future costs and benefits to capture factors like opportunity costs and inflation. 

 

Try to keep your discounting clear and separate from the original number in the CEA, as these discounts are generally subjective. Discounting is common, and can be seen in many other detailed CEAs (again, see GiveWell’s). The items listed below are not the only types of discounting, but they are some of the most common.

Certainty discounting:

If a source of evidence suggests one number but the source is extremely weak, we might apply a certainty discount to it. This is based on the assumption that, in general, numbers regress to the mean as more evidence is gathered about them. Thus, using a very weakly evidenced number in one estimate and a strongly evidenced number in another will systematically favor the areas with weaker evidence, as these numbers will be more positive.


Generalizability discounting:

Often, a source will be based on a situation that is not identical, or even similar, to the one we are considering. For example, if a study was run in one country, the results would not be identical if it were run in another, even if all other factors were held constant. Thus, when generalizing evidence more than is common in our other comparable CEAs, we apply a generalizability discount.

Bias discounting:

If a citation comes from a source that we suspect has some sort of bias, we might discount this number. For example, every charity has a strong incentive to make its program and progress look better. Thus, charity-reported numbers tend to be far more optimistic than unbiased analyses would reflect.

Time discounting:

Time discounting is the practice of discounting future benefits compared to immediate effects. Even with zero time preference, in terms of utility it can still make sense to discount based on time. For example, income in the near term can be invested and used for increased consumption in the future. Additionally, there is always some probability that an accidental death will occur before the future utility is realized, and it is therefore worth less than utility in the present. Time discounting applies to costs as well: A $100 cost incurred today is greater than a $100 cost incurred in ten years, both because you could have invested the $100 to earn a return before paying the cost in ten years, and because of currency inflation.
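For illustration, the different discounts can be kept as explicit, separate multipliers on the raw number rather than baked into it. The discount rate and factor values below are hypothetical, not recommendations:

```python
def present_value(amount, years, annual_rate=0.04):
    # Discount a future cost or benefit back to today at a yearly rate.
    return amount / (1 + annual_rate) ** years

def discounted_effect(raw_effect, certainty=1.0, generalizability=1.0,
                      years=0, annual_rate=0.04):
    # Subjective discounts stay visible as named multipliers,
    # applied before the time discount.
    return present_value(raw_effect * certainty * generalizability,
                         years, annual_rate)

# A $100 cost incurred in ten years, at a 4% rate, is equivalent to
# about $67.56 today.
cost_today = present_value(100, 10, annual_rate=0.04)
```

Keeping each discount as its own named parameter makes it easy for a reader to see, and disagree with, any individual adjustment.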

Organization

Each broad idea being evaluated should have its own CEA tab within the sheet. The most important numbers of each tab should be pulled into a summary tab.

Summary tab:

The first tab should be a summary that allows quick comparison between the charities. It describes the three factors that could most change the CEA, as determined by a sensitivity analysis, and the factors considered the least certain by the CEA’s creator. The summary tab should include two endlines. One is a metric that is easily understandable and directly connected to the intervention – for example, “number of chickens’ life-years lost from being in a caged vs. a cage-free system.” The other endline should be a cross-comparable metric, such as welfare points, that can be used across the entire cause area. This metric can be used to determine which interventions look most cost-effective in a given area. There is also a column to describe the overall uncertainty level, which is the CEA creator’s estimate of how confident they are in this CEA, relative to others within the cause area.

Optimistic, pessimistic, and best guess scenarios:

Throughout the spreadsheet, an optimistic, pessimistic, and best-guess estimate can be given for certain values that are critically influential or highly important. The most time should be put into the best-guess numbers. The final output of the CEA can be generated using a Monte Carlo simulation. The optimistic and pessimistic estimates can be used as the bounds of the 90% confidence interval, and the relative position of the best guess within this range determines the shape of the probability curve. We tend to use Guesstimate for this kind of simulation instead of Google Sheets or Excel, but you can use both.
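A minimal sketch of that workflow in Python (using symmetric normal distributions for simplicity, even though an off-center best guess would skew the curve; all input ranges here are hypothetical):

```python
import random

Z95 = 1.6448536  # z-score of the 95th percentile of a normal distribution

def sample_from_ci(pessimistic, optimistic, n=50_000, rng=None):
    # Treat the pessimistic/optimistic pair as the bounds of a 90%
    # confidence interval of a normal distribution and sample from it.
    rng = rng or random.Random(1)
    mean = (pessimistic + optimistic) / 2
    sd = (optimistic - pessimistic) / (2 * Z95)
    return [rng.gauss(mean, sd) for _ in range(n)]

rng = random.Random(1)
costs = sample_from_ci(80, 120, rng=rng)     # hypothetical cost range
effects = sample_from_ci(1.0, 3.0, rng=rng)  # hypothetical effect range
ratios = sorted(c / e for c, e in zip(costs, effects))
median_cost_per_unit = ratios[len(ratios) // 2]
```

Combining the inputs per-sample, rather than dividing one point estimate by another, is what surfaces how uncertainty compounds through the model.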

Sensitivity analysis:

A sensitivity analysis can be conducted on each CEA to determine which factors most affect the estimate. The CEA creator can then identify the factors that have a large effect and put more focus on increasing the accuracy of estimates for those factors.
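One simple way to run such an analysis is a one-at-a-time sensitivity check: vary each input around its best guess and rank the inputs by how much the output swings. The model and all numbers below are hypothetical:

```python
def sensitivity(model, base_inputs, swing=0.2):
    # Vary each input +/-20% around the best guess, holding the others
    # fixed, and report each input's output swing relative to the base case.
    base = model(**base_inputs)
    swings = {}
    for name, value in base_inputs.items():
        low = model(**{**base_inputs, name: value * (1 - swing)})
        high = model(**{**base_inputs, name: value * (1 + swing)})
        swings[name] = abs(high - low) / base
    # Largest swing first: these inputs deserve the most research time.
    return dict(sorted(swings.items(), key=lambda kv: -kv[1]))

# Hypothetical model: cost per outcome = cost / (reach * effect)
ranked = sensitivity(
    lambda cost, reach, effect: cost / (reach * effect),
    {"cost": 50_000, "reach": 10_000, "effect": 0.02},
)
```

Inputs that barely move the output can safely keep rough estimates; the top-ranked ones are where better evidence pays off most.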

Referencing:

It’s important to provide references for all data sources, either by linking the reference to the cell or by adding a source column next to all values. Often it can be helpful to also store all references used in a single reference sheet.

Using external CEAs

There’s a lot of diversity among CEAs, both in quality and formatting, but crucially in which costs and benefits are or aren’t included, and how they’re included. This diversity means that CEAs are basically never directly comparable, although this doesn’t mean they aren’t useful. We see CEAs at roughly three different levels: informative, suggestive, and predictive.

Informative CEAs:

Many CEAs, even those of low quality, can be informative to generate ideas or find citations for key numbers. For this level of CEA, the endline itself isn’t informative or even suggestive of the intervention’s impact. However, if we are investigating an area, it’s worth considering the variables and citations used in informative CEAs. Quick or back-of-the-envelope calculations will often fall into this category.

Suggestive CEAs:

Many CEAs are of decent quality, but assess different metrics or look at a different situation than what we’re interested in. These CEAs might suggest the promise of an idea. Often we can update our views based on a suggestive CEA. We view the DCP3 CEAs as suggestive, so if their models indicate that an intervention is cost-effective, that makes us think that related charities and intervention areas could be promising. Even so, we do not take the endline numbers literally or even as comparable. If DCP3 says intervention A is better than B, but that both are cost-effective, we would do our own comparative research.

Predictive CEAs:

Some CEAs are of high enough quality or sufficiently close to our own methodology and endline measurements that we take them as predictive. A predictive CEA might hold considerable weight in our research process, and we might use many of the same numbers and inputs when creating our own. If an organization has created multiple high-quality CEAs, we would view their CEAs as useful in predicting which areas are more promising. CEAs in this category include those done by GiveWell.

The importance of different traits by role (for hiring)


Here are a few examples of traits and skills that may vary in importance depending on common job roles. These lists are not meant to be exhaustive; they are merely meant to give you a sense of what good heuristics to look for might be:

 


Earlier hires and people with high-level responsibility


  • Value alignment should be at the very top of your list

  • Deep understanding of your organization and excitement about it

  • Learning fast and understanding across many domains is highly important

  • “Smart”

 


Program officer / grantmaking roles


  • Value alignment and deep understanding of the end goals are very important here

  • A balance of detail orientation and big-picture thinking

  • Comfortable and decent with numbers, e.g. doing quick BOTECs and quantifying metrics that are hard to quantify

  • Able to understand implicit assumptions and context

  • A background in or connections to the focus area

  • Thinks critically

  • Should be comfortable with risk and uncertainty, and be more of a doer than a thinker

  • “Smart”


Roles with management responsibility


  • Social skills and empathy are highly important

  • Being organized and keeping track of where their direct reports are and how their work plays into overall plans

  • Inspirational and visionary, able to help others learn


Research roles


  • Hard research skills are more important here

  • Analytical mindset; understanding of the scientific method and rationality

  • Social skills and charisma are a bit less important

  • Good reasoning transparency

  • Are often a bit on the slower side, which you might need to watch out for

  • Can be a bit less “EA” and altruistic personally, but should have a keen sense of, and goal for, optimizing for cost-effectiveness and impact

  • Can be a bit more of a thinker than a doer, but that requires either good action-focused management or a good timeboxed research structure to make sure research stays decision-relevant and leads to action

 

 

Operations roles


  • Value alignment is not as important

  • Highly organized and able to keep track of things and not drop the ball

  • Detail-orientation

  • A bit more risk-aversion is good

  • Calm under pressure

  • Smarts and raw processing power a bit less important

  • Depending on how much legal, logistical, and operational responsibility they will take on, experience can vary in importance. E.g. if you need them to take over payroll and contracts for 30 staff in different countries, then they need to have previous experience in that. Conversely, if you are hiring a third staff member to set up your operations during your first months, a talented, value-aligned, smart generalist will be able to skill up as they go along.

Traits to look for in startup founders
(from the for-profit world)


It is also worth looking at what for-profit startup incubators and startup investors look for when they vet founders.


Key traits some investors look for in a startup team:


  1. Vision:

    • Can they communicate their vision, and do they have enough drive to achieve it?

    • Does the team seem aligned in terms of concept and commitment?

  2. Execution level:

    • Is the team reliable, and do they have experience that backs this reliability?

    • Do they have enough knowledge of finding, hiring, and managing the right people?

    • Are they able to manage complex projects properly?

    • Do they have enough finance and sales knowledge?

  3. Courage level:

    • Is the team committed?

    • Might they give up easily?

    • How do they react to ambiguity or quick-changing conditions?

    • Is the team ambitious enough to stay fearless when facing demanding customers or internal problems?

  4. A complete management team profile should cover the following roles:

    • Domain expert – knows the industry very well 

    • Seller

    • Builder 

Here is a list of the selection criteria Entrepreneur First looks for when vetting candidates to become founders for their for-profit startup incubator:


0. Meta: Are they high-potential outliers? Based on their current trajectory, have they outperformed or differentiated themselves from peers who were/are in a comparable position?

1. Challenges convention

2. Drive to achieve

3. Followership

4. Smart, with clarity of thought

5. Technical knowledge, and applicability or commerciality

Details on how to set up your vetting process


1) Need Description: How to write a good job ad / open call for applications

 

Before you can begin to vet candidates - step 0 of the vetting process, if you will - you need to describe your need. For hiring a person, this means writing a job ad; for a grant, it might be a formal open call for applications or an informal description of your scope. All three of these documents serve the purpose of describing what a good solution (that is, a hire or a grant) would look like, and serve to attract and filter the people and projects that are a good fit for you.


Principles of a good need description


Below are some principles that apply to every job ad and every open call for applications. After that, we will go into some specific structures for how each can be set up.

 

Writing a need description is your first filtering opportunity: Most people think the perfect job ad or grant process is the one that attracts the most applications. And although attracting candidates is indeed important, this part of the process is also the first filtering opportunity: You want it to attract people who would be a good fit and discourage those who would not be, saving both you and them time. When designing a job ad, you really want to accurately reflect what your workplace and the role are like - it’s good to turn off people who would not be a good fit for the role or your organization. This is doubly true for grants, as receiving hundreds of ill-fitting extra applications because of a description that is too broad and unspecific wastes both your and the applicants’ time.

 

On the other hand, it’s a mistake to have a far too limited and limiting description, and thus repel people who might be good applicants. There are hundreds of jobs and many dozens of places to apply for funding out there - why should an applicant consider this one? When you want to hire top talent, job applications are very much a two-way street: You need to be able to make a good case for your foundation when communicating. The nonprofit sector is full of people looking for work, but the best talent often has many options to choose from. You’ll need to know your competitive advantage – why should they choose you? Is your mission inspiring? Can you make a case that it’s higher impact than other options? Do you offer something that other jobs do not? What sort of regranting budget are you offering? How much freedom is there in decision-making? Similarly, why should a potential grantee apply to you in particular? Is your scope a good fit for them? Are you an easy funder to work with? Is your process fair and clear? Are you supportive, able to connect them to other people working in the space, or do you give them free rein? Are you able to give helpful feedback? Especially if you want to solicit a decent number of applications so you have a good pool to vet, the importance of attracting good applications should not be underestimated.

 

As a new foundation, you will often not yet have the reputation that would let people know about your work in detail before applying for either jobs or grants. The flip side is that there are other benefits you can bring to the table that some people might weigh more heavily. E.g. a new foundation is a particularly promising place to have an impact, and jobseekers are typically aware of that: Many individuals want to get into grantmaking, and for those who have impact as a major part of their motivation, the difference they can make at a new foundation is much larger than in a more siloed role at one of the more established players. What you want to offer, and your answer to the question “Why should I work for you?”, should be tied to traits that will also help you identify the right person for the job. Don’t imply the role is stable if it is not. Instead, highlight its flexibility and ever-changing nature; you’ll find a better fit. Similarly, in grantmaking, a new foundation has many potential advantages, such as being less reputation-sensitive when supporting higher-risk projects and being able to give closer feedback and build a more personal connection with its grantees. Communicate the pros and cons and the modes of working with you clearly.

 

Be detailed about your specific organization: Anyone can put “a great place to work” on a job ad, so get specific. Do your employees have a shared pancake breakfast on Mondays, or a training program for how to break down problems? Do you work from an office that lots of interesting people frequent? Do you know many interesting people in your focus area? That's something to highlight. Are you more like a Silicon Valley start-up or venture capitalist, or an academic think tank or family foundation? Different organizations have different cultures and modes of working, and you want to attract the right person for the job or the grant.

 

Hiring-specific considerations (Key elements of a job ad)

 

Example: Here is an example of a job ad from 2022, when we were hiring a new research analyst to join CE.

 

Once you have a sense of the particular features of your foundation, the first step of creating a new job ad should be looking at templates and other job ads, especially from similar organizations and for similar roles. It will give you a sense of the market, what to include, and how your position might differ from others offered. Then grab your list of traits you want to hire for (that we considered in the chapter above) and fill in the elements of your job ad:


  • Job title: A job title should succinctly communicate the role that the person is going to play in the organization. Spend some time researching common titles to ensure that you are using terms that others will understand – it should reflect the type of work and the level of responsibility implied.

  • Your organization: Help potential candidates understand the basics of what you are trying to accomplish. This is also a good opportunity to impress them with some of your past accomplishments. Keep it brief – they can always read more on your website. Do, however, make sure that it naturally leads to the key responsibilities of the position you are offering, so that potential candidates understand how the role fits into your organization.

  • Role responsibilities: Outline the core responsibilities of the position. You’ve hopefully spent some time thinking about this before deciding to try to hire someone. Adding percentages to indicate the amount of time spent on each key responsibility can be beneficial.

  • Impact of the role: Making a positive difference to the world is generally the top priority for effectiveness-minded altruists and others in the nonprofit sector. This is therefore the most important section, and the main draw that motivates qualified candidates to apply. Think of it as though you were applying for a grant or a donation, but for human resources (time) instead of money. Make it easy to understand how the role fits into your mission and how it is going to achieve impact. You might have already spent some time explicitly modeling the impact of making a new hire – this is a chance to present that.

  • Qualifications: Do they need to live in a specific city? Do they need to know a specific language? Give your applicants some cues as to whether they are a good fit. Distinguish between requirements and nice-to-haves, and make sure not to list things that are largely irrelevant, as this will needlessly narrow the pool of people who decide to apply.

  • Benefits: Individual applicants prioritize things like skill-building, intellectual challenge, autonomy, and work-life balance differently. Give them a sense of the perks of this position. Think about whether you want to list a salary or leave it as negotiable – there are pros and cons to both. We do lean towards listing a salary range to give people a sense.

  • Application process: Give them a link that they can use to apply, and mention any deadlines. (Setting a deadline is better, even if it’s rolling – far fewer people apply if there is no deadline.)

 

Advertise the position where your ideal candidate would hear about it, and ask people who are well-versed in the right communities for recommendations for where that is. Once you have people applying to your job ad, the vetting process can truly begin.

 

Grantmaking-specific considerations (Key elements of an open call for applications)

 

An open call for applications in grantmaking is akin to a job advertisement, as it informs potential grantees about your interest in funding opportunities and the formalities associated with the process. 


Example: Here is an example of an EA Forum post announcement by Open Philanthropy for EA meta university group organizer grants. Here is a more minimalistic example announcement on the EA Forum by EA Funds, including links to their websites that contain more details on applications and funding scope.


An open call should include at least the following elements:


  • Any time constraints or application deadlines (which may be on a rolling basis)

  • A scope description (see below)

  • A link to your application form (or other information about how and in which form to indicate interest in submitting a proposal) and some information about what is requested during the first step of the application

  • Information on the next steps of the process

  • By when applicants can expect to hear back

  • Some background on your foundation and/or grantmaking

  • Whether or not multiple applications are allowed, or within which timeframe

  • Where to direct further queries 

 

A scope description clarifies what kinds of projects fall within the scope of your foundation or grantmaking, and should be included with your request for proposals, just as a job ad lists what kinds of person or people you are looking to hire. It should list relevant information like the following:

  • Cause area(s) you are interested in

  • The goal of this portfolio or grantmaking project

  • Preferred types of interventions, e.g. direct delivery vs. meta vs. research vs. fundraising vs. policy etc.

  • Potential preferred size of projects (e.g. individuals and/or organizations)

  • Potential preferred stage of projects (e.g. early stage or a particular size)

  • Potential grant size(s)

  • Potentially the total size of your grantmaking

  • Potential preferences regarding whether you are comfortable funding a whole budget or only parts, e.g. M&E efforts or scaling plans

  • Any thresholds you have (e.g. of effectiveness)

  • Risk profiles you are comfortable with

  • Factors that would disqualify a grant in advance

  • Other particular decision-relevant information

  • Other information that might have come up when you set your foundation’s scope (see the earlier chapters in this book on strategy)

  • It is a great idea to include lots of concrete examples of projects that would or would not fall within your scope, to help applicants self-select for whether or not they should spend time applying

  • Potentially some examples about how successful proposals might look, e.g. examples or a list for projects you would be particularly excited to see

 

Similar to writing job ads, it can be useful to review several existing open calls for applications and scope descriptions to identify areas of agreement or disagreement with your grantmaking goals, particularly those that were not initially considered when establishing your foundation's scope.

 

2) Application form

 

Common key elements of an application form 

 

The first step of most application processes is an online application form. There are many providers for this (for example: Jotform, Airtable, or Google Forms sending submissions to a Google spreadsheet). We choose one that allows us to include multiple-choice questions, free-form questions, and custom fields for candidates to upload CVs, and that automatically enters candidate submissions into a database, giving us an easy overview and allowing for quick processing. The application form should request at least the following elements:


  1. Full name & Email address

  2. CV

  3. Logistical questions

  4. Short free-form questions

  5. (in grantmaking) one-page project description / plan

  6. (at scale) multiple choice quiz questions

 

Examples: Here is an example of a job application form we use at CE; and here is an example of a minimal application form for the mental health funding circle.

 

We’ll first talk a bit more about the purpose of application forms and how long to spend on them, and then go into how to evaluate each of the elements above.

 

Purpose: The main purpose of this stage is to filter out candidates who are obviously not a good fit for the position or grant you are vetting for, and to collect basic information like the applicant’s full name, CV, and email (make sure these fields are required). Application forms provide greater flexibility in gathering information relevant to the position and are often more informative than cover letters, while arguably also being easier to complete. Unless you expect your applicant pool to be exceedingly small (say, below ~30 people), spending a significant amount of time creating an application form can be worthwhile, as it can help filter out unsuitable applicants early on, saving time for both you and the applicants as you move forward in the selection process. Hopefully you’ll be able to rule out at least 50% of applications. In grantmaking, it is possible to eliminate a significant percentage of applicants, but the exact number may vary depending on your grant solicitation process and the resulting quality of applications.

 

How long to spend on assessing this step: When we vet for the Charity Entrepreneurship Incubation Program, we spend mere minutes assessing each initial application. Otherwise, it would be impossible to go through the thousands of applications we get each round. But once you have some practice, a few minutes is usually enough time to make a rough judgment on whether or not someone has even just a small chance of being the best candidate for your job or grant. When in doubt, we recommend erring on the side of inviting a candidate to the next round - missing a great person or project is much more costly than evaluating a few extra test tasks or interviews.


How to weight the parts of the application form: How much importance to place on each part of the form will vary depending on the concrete requirements of your grant. There will be more details below, when we discuss each element of the application form.

 

How long to make this step: While it may be tempting to include more requirements in the initial section to gather more information upfront, it is important to avoid making this stage excessively lengthy or burdensome for applicants; it should not take more than 15 to 60 minutes to complete. If the application process takes too long, capable candidates may opt out. If you are a well-known organization seeking a highly desirable position or grant, applicants may tolerate a longer application process. However, if you are a small or relatively unknown organization, it is better to keep the application process as easy as possible.

​

There are a few things you should not do for any application form, especially in hiring:

​

  • Ask for information you are not legally allowed to ask for: This includes things like race, sexual orientation, and marital status. Make sure to check which topics are off-limits in your jurisdiction.

  • Ask for information that is easy to find on a CV: Wasting a candidate's time is no fun. If you are going to look at their CV, don’t ask them to write out parts of it again into a form. It just creates frustration.

  • Have hard cutoffs: Very few factors will rule out a job applicant 100% of the time. If you are auto-scoring any stages, make sure no single factor can rule an applicant out. Generally, a strong candidate is near the top in most categories, but that doesn’t mean they have no areas of weakness.

 

Now let’s look at how to evaluate the pieces of information you get through the application form.

 

CVs

 

While CVs are undoubtedly important, they should not be the sole factor in the selection process, and they deserve less weight than the predominant practice in most countries gives them.

​

Here is an incomplete list of things that lead us to be skeptical about CVs:

​

  • Years of experience and education are just not very predictive of good performance. In their meta-analysis summarizing 85 years of research on the predictive validity of 18 selection procedures in hiring, Schmidt & Hunter (1998) found that a candidate’s years of experience and education were only very weakly correlated with job performance (0.2 and 0.1, respectively). This teaches us to be careful not to overvalue the previous achievements and experiences we find in people’s CVs.

  • There are many effects and biases that can come into play when you are trying to evaluate CVs: affinity bias, confirmation bias, stereotype bias, a visually polished CV seeming much stronger than a lackluster one containing the same information, etc.

 

However, a person’s CV is still a piece of the puzzle that we definitely want to have.

 

Some examples of things a CV can tell us about a person are the following:

​

  • How well did they perform in school and past jobs? Were they at the top among their peers?

  • What does their choice of study and roles indicate about where their interests and priorities lie? E.g. for program officer roles, did they choose altruistic careers or volunteering positions? Did they pursue quantitative fields or roles? If it’s for an early-stage position, did they take up responsibility in small teams, e.g. leading their student council? If it’s for a later-stage position, have they led teams or worked/volunteered in equivalent roles? If you are vetting them to lead an organization, do they have leadership experience?

  • How long did they stay at jobs? That is, how curious and ambitious might they be versus how mercurial? E.g. if they have repeatedly left jobs within a year of starting, that should ring some alarm bells. A long time in the same role without growth in responsibilities might be worrying as well; roughly two to three years seems like a good length of time to stay in many jobs nowadays.

  • Does it seem likely that they have already learned some, or most, of the skills necessary for this job or grant? Does their application at this point fit into a narrative that makes sense of this role or project being a sensible next step that they could be good at and thrive in?

 

Don’t overvalue unrelated achievements and signals: One of the things we most want to urge caution on is not to overvalue achievements that are mostly unrelated to the job or project at hand, and instead be very open and creative in understanding what content on a CV can be a good proxy for. For instance, if you're seeking an entrepreneurial first employee to establish your foundation, a middle-aged PhD in theoretical mathematics from Harvard who has never ventured beyond the structured academic environment may be a less suitable match than a young individual who has launched their own business selling mosquito-repellent clothing and gained some experience working at a consultancy in a junior role.

 

Taking into account people’s backgrounds and starting points: People come from diverse backgrounds and circumstances, and it can be challenging to gain a comprehensive understanding of this and factor it in while assessing CVs. This uncertainty is another reason why we don’t weight CVs more highly, but it also gives you the chance to find real gems. For instance, imagine you have a job candidate who went to primary school in Oxford, then went on to study engineering at Oxford University, and was in the student choir. Then imagine another candidate from Nigeria who went to a local school, moved to the capital for their undergraduate degree, volunteered with an international NGO, and then went to a small university in the Netherlands for a graduate degree in global development. All else being equal, which of these seems to have risen further beyond their starting circumstances, overcome more obstacles, and shown more altruistic drive and grit?

 

How to evaluate CVs better: Evaluating CVs for what they do or do not show is an art form that takes a long time to learn. We think the best way to practice is to look comparatively at lots of CVs; talk to someone (ideally with experience in vetting) about what you think that CV does or does not show, relating that to the job or grant you are considering this person for, and the overall score you would assign to this CV. We want to reiterate that it is significantly simpler to evaluate multiple candidates and compare them to generate a ranked list of suitability, rather than determining if a single candidate is a suitable fit. The two key points here are:

​

  1. Practice - it makes you better (even if not perfect)!

  2. Assess comparative fit of multiple candidates

 

Weighting CVs in hiring: For the above reasons, we recommend giving CVs no more than 30% weight when judging the initial application form, and to give them no more than 10% weight in the final decision. Of course, this weight can vary a little with crucial considerations regarding the role: e.g. if you are seeking to fill a senior position in a specific department, it is reasonable to prioritize candidates with relevant prior work experience, and to place significant weight on whether or not this requirement is met.

 

Weighting CVs in grantmaking: Depending on whether you are judging an earlier-stage or later-stage project, your weighting of CVs should change. For founders of new projects, it might be up to 80% during the first application form stage and something like 20%-30% in final decisions. For founders of existing projects, the track record of their concrete projects matters more than the CV itself; the CV should carry no more than 30% weight initially and at most 5% in the final decision. For more details, refer back to the section above on what to vet people for in grantmaking.

 

Logistics

 

It makes sense to include questions about logistical constraints and information in the initial application form. This makes it easy for you to get an overview of each candidate’s logistics that you can come back to, and to see directly how it relates to and trades off with the rest of their application.

​

Some examples for logistical factors in hiring are:

​

  • Whether they can relocate 

  • Compatible time zones

  • Earliest start date, e.g. in 2 months

  • Full time vs. part time

  • Visa requirements, e.g. are they eligible to work in your country of operations?

  • Whether they are looking to make a salary within your stated range

  • Whether they consent to having their application shared with other organizations in your network for whom they might be a better fit

 

The weight to assign these depends on how important your logistical factors are to you, but unless they are wholly insignificant, we recommend giving them somewhere between 10%-20%.

 

For example, if someone is relatively junior for the position you are looking to fill, but otherwise seems great, it makes a big difference if they are able to work in person in your stimulating office alongside someone who can train them up. This is much harder to achieve remotely.

 

Some examples for logistical factors in grantmaking are:

​

  • Grant amount requested: it is a good idea to ask for a lower-bound, an ideal, and a stretch amount

  • Overall budget

  • Whether they consent to having their application shared with other funders who might be a better fit

  • Legal setup and base of operations of their project

  • Website

 

Short free-form questions

 

Purpose: Including ~two custom short questions in your initial application form is extremely helpful, both in hiring and in grantmaking. These serve as mini-test tasks that enable you to gauge a candidate's values and thought process on critical topics. By selecting the right questions, you can acquire a substantial amount of information about a candidate, with relatively little production time required on their part and minimal vetting time on yours.

​

Hiring-specific considerations

​

In hiring, including short free-form questions allows you to get a sense of how a candidate thinks or approaches a problem (that is, how analytical, transparent, humble, comprehensive etc. they are), as well as of how they think about a given issue (that is, their opinion or conclusion regarding the content of the question you ask). 

 

How to come up with good questions: It is worth spending a decent amount of time to come up with questions that would help you rule out lots of applicants early on. Most questions fall into at least one of three categories:

​

  • Knowledge questions: If there is any essential prior knowledge, you can include it here as a basic filter for applicants who won’t work. For example, you probably shouldn’t hire a copyeditor who doesn’t know the difference between “their” and “there,” or an animal welfare specialist who doesn’t know what cortisol is. You can design this as a mini test task - e.g. ask a copyediting candidate to copy edit a short paragraph - or set this up as a quiz to get an overall quiz score.

  • Problem-solving questions: You can use problem-solving questions as a filter for general ability. These could be multiple choice with a single correct answer, or a long answer allowing a candidate to showcase their creativity or methods. These are especially helpful if you want a candidate who can demonstrate understanding of a specific complex topic (e.g. counterfactual impact, cost-effectiveness analyses, monitoring and evaluation, etc.). Just make sure you can grade them quickly.

  • Value-alignment questions: Given the importance of value alignment for early staff, asking questions that give you a sense of candidates’ values can be extremely helpful. For example, asking how much of a particular salary they would donate gives you a sense of how much they think is morally appropriate to donate, but also of how they prioritize donation choices and which charity evaluators they know and trust and why.

 

We think the best short questions are a combination of the above three categories, and give you a sense of candidates’ thinking and abilities on all of them.

​

Here are some prompts that can help you come up with good questions:

​

  • What are key elements of the job that a great candidate already knows about, or is able to perform?

  • What are key challenges that might come up?

  • What is a question that I think every one of my current staff / trusted advisors / people I think are deeply aligned with my philosophy / mission / strategy would give a good but slightly unconventional, non-obvious answer to?

  • What are the key values I care about and how can I put these into a short, non-leading question?

 

When choosing a question, you should of course make sure that it can be answered by candidates within a sensible time-frame, e.g. around 15 minutes each.

 

If you can create good questions, this part is easily worth 40% of the weight in this first step.

The predictive validity here is not so much that candidates who score a 9 will be much better later on than candidates who score a 7, but that people who score below a 7, for example, are extremely unlikely to score 7 or higher on later tasks. Therefore, this application form component is very good for separating candidates who have a chance at being a great fit from those who are clearly not a good fit.

 

Here is an example for a short question from our Incubation Program application process: “Which idea(s) from our list of recommended interventions this year are you most excited about founding, and why?” This is a good question because:

​

  • It leads potential candidates to engage with our recommendations, so they can either get excited or self-select out

  • It informs people, who might not have realized it, that we are looking for them to found our recommended interventions

  • It allows us to pretty easily reject people who did not actually check the link, or who just select one of the interventions without giving any reasons

  • It gives us information about what candidates think is important for a charity intervention worth recommending

  • The answer gives us a sense of how structured and analytic a candidate’s thinking is.

 

Grantmaking-specific considerations

 

In grantmaking, choosing relevant questions is a bit easier. We recommend asking for two pieces of information that applicants should already have, and thus need to invest minimal time in providing, but that give you maximum information value:

  • Lay out your expected cost-effectiveness or link to any cost-effectiveness model(s) you have made

  • Link to any external review you have had (this should not be required, but it is a good idea to normalize the fact that an external review is an extremely valuable piece of information and signal to give to a potential funder)

 

At scale: multiple choice quizzes

 

For our Incubation Program application process, we include a short, automatically scored multiple-choice quiz (available on some application form platforms like Jotform). This allows us to put our ~5,000 applicants into a very rough but highly useful order for prioritizing which applications to evaluate first. If you have a manageable number of candidates (say, well below 100), you will probably want to skip this component.

 

Side note: For our Incubation Program, the multiple choice scores were actually surprisingly predictive, and we have kept iterating on and improving the quiz with each year. As mentioned above, they are not predictive in the sense that a person scoring 127 out of 130 always turned out to be a much better fit than a person scoring a 108, but in the sense that the top 500 out of 3000 candidates (as sorted by MC scores) captured all the people who later made it into the program and founded charities, and no one who made it into the cohort scored below an 84 out of 130, with most people scoring over or very close to 100. So again, it’s a heuristic, it’s about trends and proxies rather than exactitude in a single number.

​

In the backend, you assign each answer to your multiple choice questions a positive or negative score, have the software add them up when someone submits their application, and then rank incoming applications on the basis of the final score. This way you get to more promising applications quicker and earlier in the process. It also helps you evaluate people who are at a very roughly similar tier, which makes it easier to decide who to pass on to the next stage.
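
As a minimal sketch of that backend logic, the scoring and ranking could look like the following (the question names, answer options, and point values here are invented for illustration; in practice the form platform handles this for you):

```python
# Hypothetical sketch of auto-scoring a multiple-choice application quiz.
# Each (question, answer) pair maps to a positive or negative score;
# an applicant's total is the sum over their answers, and applications
# are then reviewed in descending order of that total.

ANSWER_SCORES = {
    ("commitment", "full_time"): 3,
    ("commitment", "part_time"): -1,
    ("grit", "kept_going"): 2,
    ("grit", "gave_up"): -2,
}

def score_application(answers):
    """Sum the scores assigned to this applicant's answers (unlisted answers score 0)."""
    return sum(ANSWER_SCORES.get((q, a), 0) for q, a in answers.items())

def rank_applications(applications):
    """Sort applications from most to least promising by quiz score."""
    return sorted(applications, key=lambda app: score_application(app["answers"]), reverse=True)

apps = [
    {"name": "A", "answers": {"commitment": "part_time", "grit": "kept_going"}},
    {"name": "B", "answers": {"commitment": "full_time", "grit": "kept_going"}},
]
ranked = rank_applications(apps)  # B (score 5) before A (score 1)
```

The point of the sketch is the ordering, not the exact numbers: the ranked list tells you which applications to read first, not whom to accept.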

​

The questions in our multiple-choice quiz fall into four categories, but you could include any questions that might be helpful for putting your candidates into a rough order:

  • General mental ability questions (GMA): The literature is positive on GMA as a predictor of work performance, but there are also criticisms. We are relatively positive on including GMA questions, along with other methods of testing. The caveat is that you’ll likely get the majority of the benefit from just a few quick questions, and that the examples you use in test questions should be varied so they are not easy to google.

  • Predictive personality factors: Some personality factors are quite predictive for certain positions. E.g. for our incubation program, we highly value grit and conscientiousness, so we include a few questions that are designed to test candidates’ leaning on these traits.

  • Logistics: We assign a score to certain answers to logistics questions, so that, for example, candidates who are able to attend full time are more likely to appear higher on the list than those who cannot.

  • Central cultural questions: When we hire for Charity Entrepreneurship staff, there are certain things we look for in people that would make them a good fit for our organizational culture, e.g. ability to 80/20, open-mindedness, impact focus, creativity and lean mindset towards experimentation and failing fast. Therefore, we may include a question or two that test candidates’ leanings. For example, for one round, we asked candidates to choose five self-describing traits from a list of 20. All of the answers could be construed as positive, but some of them were closer to CE’s culture than others.

 

Of course, this score should be taken with a grain of salt, so if you do decide to include an auto-scoring quiz, we wouldn’t recommend assigning more than 10% weight to it, and we suggest using it for sorting purposes rather than for decision-making. This is especially true if you have not previously validated the predictiveness of your questions.

​

Idea & project plans (in grantmaking) 

​

Example: The first page / executive summary of this 10-page project plan is a good example of a 1-pager.

 

To vet the intervention idea and project plan, you can take a look at both the one-page plan and the cost-effectiveness model that candidates have (hopefully) submitted through the application form.

 

In grantmaking, the one-page plan is the equivalent of something between the short questions and test task 1 in hiring. It is requested right away in the application form because it gives you lots of information at little time cost to the applicant (since it draws on elements that they should already have if they are applying for funding for a new project), and it helps you rule out lots of applications. However, it also functions somewhat like an in-depth test task: it is a substantial piece of work that demonstrates what candidates are capable of over a significant period of time, and it enables you to thoroughly evaluate their ability to convert your funding into tangible impact.

 

There is no single clear list of what a one-page project plan should include, but we think that including an explicit ask for the following elements is a good idea:

​

  • One-sentence summary

  • Problem description with numbers

  • Solution description / Theory of Change

  • Cost-effectiveness estimate

  • Mention of, or links to, evidence-base

  • Mention of neglectedness / targeted niche

  • Current stage of the project/organization and track record (if any)

  • Team composition and roles of key leaders

  • Rough one-year(ish) plan steps, ideally with key milestone goals

  • An overall budget number

​

The one-page plan, along with the more elaborate cost-effectiveness model requested in the application form, enables you to assess both the intervention concept and the project plan for how the candidate intends to execute the intervention using the funding they seek.

 

How to judge an idea and plan: As a grantmaker, you don’t need to do deep research into whether the idea and plan fully satisfy requirements beyond any doubt. Instead, we recommend using a heuristics-based approach, where you create a weighted-factor model. To assess an opportunity, you employ several heuristics and evaluate how well the proposed idea/plan meets these criteria. You then assign approximate numerical ratings (or use a traffic light system ranging from red to green) and combine them into a weighted total. Based on this evaluation, you can determine whether or not you believe the opportunity has the potential to create significant impact. Aspects that receive low scores during the evaluation can serve as topics for discussion during subsequent interviews with the applicants, allowing you to delve deeper into areas that require more clarification or exploration.
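
As a minimal sketch, a weighted-factor model of this kind could be computed as follows (the criteria, weights, and ratings below are invented for illustration, not CE’s actual criteria):

```python
# Hypothetical weighted-factor model for screening one grant opportunity.
# Each criterion gets a weight (the weights sum to 1) and a 0-10 rating;
# low-scoring criteria become discussion topics for later interviews.

WEIGHTS = {
    "evidence_base": 0.30,
    "cost_effectiveness": 0.30,
    "team_strength": 0.25,
    "neglectedness": 0.15,
}

def weighted_total(ratings):
    """Combine per-criterion 0-10 ratings into one weighted score."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

ratings = {"evidence_base": 7, "cost_effectiveness": 8,
           "team_strength": 5, "neglectedness": 6}
total = weighted_total(ratings)  # 0.3*7 + 0.3*8 + 0.25*5 + 0.15*6 = 6.65
follow_up = [c for c, r in ratings.items() if r <= 5]  # probe these in the interview
```

A traffic-light version works the same way; just map red/yellow/green to, say, 0/5/10 before computing the total.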

Heuristics for evaluating an idea

Below is a list of critical criteria that CE uses when researching and recommending top charity ideas for our incubation program, along with our recommended weighting for each criterion:


Heuristics for evaluating a plan

Here is a list of key criteria to consider when evaluating a one-page plan. Of course, your expectations for how well applicants do on these criteria should be adjusted depending on the age and stage of the organization; a one-page project plan for an organization cannot go into much depth and should be treated as such - the 10-page project plan that you will request at a later stage in the process will be able to go into more detail. Similarly, remember to adjust your weightings: e.g. for early-stage organizations, the theory of change, team composition and personal track records are more important than concrete plans. For later-stage organizations, organizational track-record and concrete plans, plus more fleshed-out M&E systems, are more important.

We think it is reasonable to pass a candidate on to the next stage if, after looking at their project proposal, you think there is an above 5%-10% chance that you will want to fund it. With 5% as a cut-off point, this translates to approximately 20 screening interviews for every opportunity you fund; if that seems like too many, 10% (roughly 10 interviews per funded opportunity) might be a better cut-off point.

3) Test task 1 (in hiring)

​

After you have decided which applications you would like to consider, let’s turn now to the next step of an ideal hiring process.


Pros and cons: The good news is that well-designed test tasks that mimic the work someone would later do are highly predictive, and will usually be the most revealing piece of information you get during the application process. The bad news is, of course, that designing a good test task requires work. In addition, test tasks often take a long time to complete, and can be relatively tricky to evaluate when you don’t have a lot of experience in the area you are hiring for. Nevertheless, they are highly predictive, both according to the literature and to our experience, and take considerably less time to evaluate than conducting a whole interview, so the effort is worth it.


Purpose: The main purpose of a test task is to test someone’s ability to fulfill the kind of tasks that are exemplary, important, or take up most of the actual work time for the role you are hiring for. 

 

How to come up with a test task:

​

  • Start with the list of main responsibilities (which you already have from writing the job ad for the role and thinking about ideal traits for your staff), think about how important each one is and how much of the job’s time will be spent on it, and try to come up with a task that feels exemplary of what this job is going to be like and that someone who will excel at this job has to be able to do really well. You might even think of the first couple of tasks you would give a hire in this role, and see if any of them can be made into a test task.
     

  • We, at CE, like to include an element of analytic, independent thinking in most of our test tasks, rather than giving pure execution tasks. For example, our test tasks will nearly always include an element of model-building and criteria-choosing, rather than giving someone an overly simple template to execute on. This is because, as a small organization, we value entrepreneurial spirit, the ability to work independently, and talent density.
     

  • It is nearly always helpful to ask peers and colleagues working in a similar area for test tasks they have used themselves, or have seen before and liked, and/or to send your test task to someone for a second pair of eyes.
     

  • Ideally, do a new test task yourself, or have someone who would do it well complete it for you, to properly calibrate how long it takes and what it should look like after a limited amount of time spent on it.
     

  • A flow-through function of test tasks is to help candidates self-assess whether they might be a good fit for this kind of role and your organization. Therefore, the more a task gives applicants a good sense of what it is actually like to do this job day-to-day, the more useful it is going to be for them.
     

Concrete examples: Here are two shortened example test tasks which are relevant for a foundation, and should take no more than three hours each:
 

Program officer: 


Task: On the basis of an imaginary grant application and budget, create a write-up on the key case for the grant (including key premises, the evidence for them, and the potential cost-effectiveness of the grant), the case against, open questions, predictions about key aspects of the grant, and some M&E notes on how you would know whether or not this grant has gone well or badly.


Purpose: You test the candidate's ability to evaluate a grant proposal, identify critical considerations, perform a rough cost-effectiveness analysis using BOTEC methodology, critically evaluate information and evidence presented in the grant application (which will always contain bias), identify crucial considerations or uncertainties requiring further investigation, quantify their beliefs and predictions, develop effective feedback loops, and clearly and transparently communicate information.

 

Research analyst role: 


Task: Create a weighted-factor model (WFM) for evaluating health security intervention ideas for a new charity to launch in India. Use at least five criteria and explain your rationale for the weights you assign.


Purpose: Tests their ability to develop and construct a model that supports decision-making, research appropriate criteria, make informed decisions on the significance of each criterion when evaluating interventions, assign appropriate weights and provide compelling justifications for their choices, adapt criteria to an atypical intervention and geographic context, ensure that the model is accessible, develop a formula that is logical, create a final conclusion that is user-friendly, and present their reasoning in a clear and transparent manner


Here are two more examples to give you a sense of more applied test tasks:

 

Animal welfare charity communications director role: Part 1: Contact one chicken farm and find out the following 10 pieces of information, plus any other info you think might be relevant. Part 2: Which farm do you think would be a good target for launching an intervention: the one you called, or the one described in the report below? Why?


Operations role: Our lease is expiring, and we have to change offices in December. We have 15 staff, all of whom are able to relocate within the city come December. We will want to stay in the new office for one year. Using real listings that are currently available in this city, come up with a process and decide which office we should rent.


For scoring test tasks, it pays to come up with a set of criteria for a scoring rubric that makes sense for your task, and to rate responses by how each submission scores on those criteria. This will be more helpful the newer a task is to you, whereas your intuitions for how to score a task will get better over time. Either way, a rubric will help standardize your scoring across time and candidates, and smooth out any fluctuations caused by irrelevant factors, like your mood.


Here is an example for a test task rubric: Imagine that we are grading a test task that asks Research Analyst candidates to create a weighted-factor model for comparing at least five biosecurity interventions to be implemented in India. Out of 10, we give the following points:

  • +3.5 for useful criteria and their justification for biosecurity

  • +2 for sensible weightings

  • +1.5 for a good formula calculating a final score

  • +1 for roughly promising health security interventions

  • +1 for good understandability/accessibility

  • +1-2 bonus points for good reasoning transparency, process description, noting uncertainties, notes for next steps, being notably and interestingly different from CE's process; especially when adapted to the biosecurity and India context
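
Tallying such a rubric is mechanical; here is a minimal sketch (the criterion names are shorthand for the bullets above, and the per-submission scores awarded are invented):

```python
# Hypothetical tally for the Research Analyst rubric above.
# Each criterion has a point cap, and bonus points are capped at 2.

RUBRIC_MAX = {
    "criteria_and_justification": 3.5,
    "sensible_weightings": 2.0,
    "final_score_formula": 1.5,
    "promising_interventions": 1.0,
    "understandability": 1.0,
}

def rubric_score(awarded, bonus=0.0):
    """Clamp each awarded score to its criterion's cap, then add the capped bonus."""
    base = sum(min(awarded.get(c, 0.0), cap) for c, cap in RUBRIC_MAX.items())
    return base + min(bonus, 2.0)

submission = {"criteria_and_justification": 3.0, "sensible_weightings": 1.5,
              "final_score_formula": 1.5, "promising_interventions": 0.5,
              "understandability": 1.0}
score = rubric_score(submission, bonus=1.0)  # 7.5 + 1.0 = 8.5
```

Keeping the caps explicit in one place makes it easy to reuse the same rubric across candidates and compare totals directly.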


Here are some important practical considerations to keep in mind when designing test tasks:

  • Clear instructions: There are many things that might seem obvious to you but may not be to candidates. Anything that is ambiguous or unclear will take up unnecessary energy on the part of your applicants, and lead to lower quality results. Therefore, we like to err on the side of longer instructions. E.g, it can be helpful to give candidates a rough idea of the rubric you are going to use to evaluate their work, or it can be useful to explicitly tell candidates to explain their process, or to make a list of how they spent their time and what they prioritized and why, or to write an executive summary of their findings, etc.

  • Three-hour length: We think that for most longer-term, full-time jobs, the task should not take candidates more than three hours. For volunteering or internship roles, a shorter test task of one to two hours might be enough to give you a solid understanding of the candidate’s ability and potential. For crucial positions with varied responsibilities, you can also consider one or more test tasks spanning more hours (see test task 2 below).

  • Seven-day deadline: Seven days are usually enough for most candidates, but to be safe, we ask candidates to let us know if they require additional time. We are usually receptive to extending deadlines to accommodate candidates’ needs.


4) Interview 1 / screening interviews


Common key considerations for interviews


An interview can provide valuable information about a candidate’s social skills, their ability to think on their feet, and their first-impulse responses. It also offers the opportunity to test the candidate under conditions where you can clearly observe them. Unfortunately, interviews are also where factors that do not necessarily matter to every job (e.g. charisma, assertiveness) can bias an evaluation.


Why it should be structured: The goal of having structured interviews is to minimize the biases caused by first impressions and random factors, generating responses that can be systematically compared. We typically recommend having one short screening interview (15 minutes) and another, longer one (60 minutes) later in the process.


Purpose and setup: The first screening interview is a very short interview (roughly 15 minutes, 10 questions) whose main purpose is, like the initial application form, to quickly and easily rule out a significant number of candidates by testing them on easily testable key requirements for the job or grant. For instance, finding out that a candidate is looking for any job at an effectiveness-minded organization and doesn’t actually understand your particular mission at all allows you to rule them out quickly. Similarly, if a candidate for a grant clearly doesn’t align with your values and focus on impact, or doesn’t seem to fully understand their intervention idea or some of its central difficulties, you might be able to rule them out quickly.


Designing the questions: Laszlo Bock of Google has conducted internal studies regarding what works in hiring and written guides on structured interviewing, as well as some sample questions. Read through them for ideas.

Interview questions should be a mix of the following:


Softball questions: Keep it easy and generic for the first question as this helps put the interviewee at ease. For example, you might ask, “How did you hear about us?” or “What made you interested in working here?” or “How did you first come across the idea for your project?”


Skill-focused questions: Include some semi-focused questions that give the applicant the opportunity to tell anecdotes about the skills you are testing them for. For example, when hiring someone who will need to manage others or lead an organization you are considering funding, you might ask: “Tell me about a time when you effectively managed your team to achieve a goal. What did your approach look like?” (Follow-ups: What were your targets, and how did you meet them as an individual and as a team? How did you adapt your leadership approach to different individuals? What was the key takeaway from this specific situation?)


Negative questions: You can also try questions that explore more negative emotions, such as, “When was the last time you got into a conflict with a coworker, and how did you handle it?” Some candidates will not be able to answer these questions gracefully and may tell an anecdote that focuses on putting down and finding fault with others – a big red flag.


Knowledge questions: How much do they know about you as an organization? How much do they know about altruism in general? How much do they know about the intervention you’re doing, or the types of projects you are hoping to support? Design some questions that allow them to demonstrate this knowledge.


Find out their values: Having shared values is important in an impact-focused organization. You’ll want to know not only what they value morally in a philosophical sense, but also how they will practically make decisions. Examples: “How should employee salaries be decided?”; “Do you think it is important to have diversity within an organization, what kind, and why?”


Miniature test tasks: If you’re recruiting an outreach specialist for conferences, you could ask them to pretend that you are a conference participant interested in their organization. You might also ask them questions that test their intelligence or creativity. Be careful with more creative questions though, as it’s really easy to get carried away and ask questions that don’t predict anything. If your organization is facing a real problem, this might be a good time to ask them for advice on how they would solve it so you can get a sense of their problem-solving methods. In general, the closer the test task is to the real tasks, the better.


Scoring interviews:


For scoring answers, we recommend using a rubric and scoring during or right after the call. A structured scoring table allows you to be more neutral and unbiased. It makes you less susceptible to being distracted by largely irrelevant factors, like a proposal’s graphic design or an interview candidate’s charisma, and focuses your attention on what is actually important in a task. Use the list of key traits and skills that you developed for your need description and job ad, or the list of key traits to look for in early-stage founders vs. later-stage leaders, respectively, to decide on good questions and figure out what would make a strong answer.


Here is an example of a scoring rubric for an interview question:

What resources on effective altruism have you read, or are you reading?
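As an illustration only (the levels, descriptions, and helper function below are our own hypothetical sketch, not the actual rubric from our process), a five-point rubric for this question might be structured like this:

```python
# Hypothetical five-point rubric for the question above (illustrative
# sketch only; adapt the levels and descriptions to your own process).
RUBRIC = {
    1: "Has not engaged with any effective altruism resources",
    2: "Has skimmed one introductory resource (e.g. a blog post or talk)",
    3: "Has read several resources and can summarize their main ideas",
    4: "Reads regularly and relates the ideas to their own work",
    5: "Deep, ongoing engagement; can critique and build on what they read",
}

def record_score(candidate: str, level: int, scores: dict) -> None:
    """Record a rubric score for a candidate, rejecting invalid levels."""
    if level not in RUBRIC:
        raise ValueError(f"Score must be one of {sorted(RUBRIC)}, got {level}")
    scores[candidate] = level

scores: dict[str, int] = {}
record_score("Candidate A", 4, scores)
print(scores)  # {'Candidate A': 4}
```

Keeping the descriptions concrete and behavioral (what the candidate has actually done, not how impressive they sounded) is what makes scores comparable across interviewers.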

However, just as with evaluating CVs, interpreting what an interview answer tells you about the candidate – rather than just blindly checking whether they gave “the correct response” – is an art form to practice.


Here are some examples of what answers can tell you beyond rubrics: Asking a candidate about their strengths and weaknesses for a role can reveal much more than just whether they are a good fit. It can also provide insight into whether they have a clear understanding of the job requirements and what constitutes strengths and weaknesses for the position. Additionally, it can show whether they are self-reflective enough to have a sensible answer; whether they are humble and honest but also confident enough to mention actual strengths and weaknesses; whether the ratio between the two seems reasonable (e.g. not dwelling on weaknesses, but not overhyping their positives either); and whether they have more of a growth mindset, e.g. whether the weaknesses they mention could be overcome, and whether they mention how they are working on them rather than stating them as fixed, immovable flaws. Whole lists of things can be learned from most questions that go beyond the literal meaning of the answer.

 

Here, as in other interviews, it is important to make sure that the questions you ask are not obvious or leading (i.e. don’t have obvious correct answers, with the exception of fairly technical questions), and are open-ended. That way, you will get a lot more information and a better sense of how a candidate thinks.

 

Live remote interviews vs. pre-recorded video interviews: If you conduct your interview remotely, we recommend using Calendly or equivalent software to let applicants book you during pre-set time slots. You should also plan to record the interview; don’t forget to ask the candidate whether they are okay with being recorded, as this is a legal requirement in many countries. Recordings can be very helpful later on, when you want to compare two neck-and-neck candidates and need a refresher on what happened during the interview. They can also help you write down information you missed live. Finally, if you want multiple people to grade an interview, a recording allows you to do so.

We usually conduct our initial interviews via a pre-recorded video platform, as we go through so many of them. This becomes worth it once you are conducting around 20 or more initial interviews. There are many tools for this (e.g. EasyHire or Vervoe, but there are lots on the market and features are changing quickly right now). We pre-record an introduction, all the questions, and an outro, and send candidates invitation links. They then watch each question in turn, have a set amount of time to think about their answer or re-watch the question (e.g. 15-45 seconds), and then record their answer within a predetermined length of time (e.g. 2 minutes). This lets candidates take the interview whenever suits them best, lowers the impact of slow internet connections, and increases standardization, since every candidate receives literally the same questions in the same formulations; it also enables recruiters to watch interviews at 2x speed and grade them within the system. Since answers are always recorded, sharing within the team is easy. And because you can watch the interviews in one sitting, it diminishes effects like fluctuations in your mood. The cons are, of course, that it feels less personal to the candidate than meeting an interviewer face-to-face in a video call, and other biases may come into play.

 

Hiring-specific considerations


Switching the test task and interview steps: In hiring, depending on how hard you find it to evaluate the test task (that is, roughly, how far removed the role and the task are from what you are familiar with), you might want to swap interview 1 and test task 1.

 

Grantmaking-specific considerations


You need to have a level of certainty that if you fund an intervention or charity, the decision-makers will execute the project idea well. This applies to the big-picture decisions that determine how the project pursues its mission, as well as the hundreds of decisions that instrumentally determine its success – setting up M&E systems, deciding on pivots, budgeting, hiring staff, setting culture, communicating with stakeholders, and more. Interviews in a grantmaking process can help you to assess three things:

  1. If the applicant would be a good decision-maker

  2. How this person fits into the organization

  3. Answers to central questions around the organization, the idea, and its plan. You can use your scoring of the initial application form and the one-page plan to figure out which things you are most uncertain or worried about.


As in hiring, the screening interview in a grant application process should aim to help you rule out candidates. To do that, we suggest two possible approaches:


  • During the first interview, you could ask questions that aim to identify any potential red flags in the three areas mentioned above. The second interview, which takes place later in the process, can then delve deeper into any areas that you were unsure or concerned about during the first interview.
     

  • Alternatively, the first interview could focus on questions that help you determine whether this person is a good decision-maker. The second interview could then focus on how they fit into the organization, and probe the organization, its intervention idea, and its plan more deeply.
     

We recommend choosing whatever you think will help you rule out candidates with a reasonable amount of certainty.


5) Test task 2 (in hiring)


The second test task in hiring is very similar to the first one, except that we usually ask for a longer task (up to five hours instead of three). It tests candidates on different abilities and skills than the first test task did, to give you a fuller picture of their suitability for the job.

 

Example test task 2 for a program officer role: For test task 1, you might ask candidates to evaluate a fictional grant application using a prepared template, within three hours. Then, for test task 2, candidates might be required to spend five hours creating a strategy and portfolio for a year's worth of grantmaking, based on your foundation's profile. The first task is more limited, takes less time, and allows you to assess a candidate's ability to evaluate a grant, identify crucial considerations, and create a sensible BOTEC. On the other hand, the second task requires more time, big-picture thinking, research into your foundation, value alignment for a strategy, effective communication skills, and a deeper understanding of your foundation. It would not be fair or reasonable to ask this task of candidates in the initial stages of the hiring process. However, candidates who have made it past the application form, initial test task, and first interview have a reasonable chance of getting the job, so it's reasonable to ask them to invest more time and energy in mutually testing for fit.


The practical considerations and process for how to come up with this task are the same as for test task 1.


Be creative: You can also be creative with both the length and content of test tasks. If you are hiring for a more generalist, entrepreneurial, early-stage role, the second test task could be split into three separate one-hour tasks that each test for something different and together give you a good sense of the candidate’s performance and alignment on different aspects.


6) 10-page project plan (in grantmaking)


Example: Here is an example of a great 10-page project plan.

The 10-page plan serves a similar purpose as the one-page plan used earlier in the process. However, you should request more detailed information about the project and all the points discussed in the previous chapter on one-page plans. The 10-page plan should also include additional information that wasn't covered in the one-page plan. A comprehensive ask list to request from applicants could look as follows (additions compared to the one-page plan are marked in bold):
 

  • One-sentence summary

  • Problem description with numbers

  • Solution description / Detailed Theory of Change

  • Cost-effectiveness estimate and details / link to a model

  • Mention of or links to evidence base

  • Mention of neglectedness / targeted niche

  • Current stage of the project/organization and track record (if any)

  • Team composition and roles of key leaders

  • More detailed one-year plan, ideally with key milestone goals

  • An overall budget number with details on planned usage and differentiated according to low, medium, and high funding

  • A more aspirational three-to-five-year vision

  • M&E plan with mitigation strategies at crucial points

 

Evaluating it will involve the same factors as evaluating the one-page plan, but your expectations and level of scrutiny and skepticism should be higher.

 

7) Interview 2

 

The last regular step of our iterative vetting process, both in hiring and in grantmaking, is interview 2. Candidates who make it to this stage are highly promising, and only a few people get there.

 

Purpose: In hiring, interview 2 is a deeper exploration of candidates’ role fit, as well as values and culture fit. The main purpose is to dig deeper into the weaknesses and doubts we have about each candidate on the basis of their documents, test tasks, and first interview. We also want to get a better sense of, and stress test, their strengths for this position. Therefore, we personalize interview 2 to each candidate. In grantmaking, interview 2 goes more deeply into the doubts and uncertainties you might still have about the project idea, the plan, the track record, or its leadership and team.


Record the interview: We highly recommend recording this interview, e.g. on Zoom, so you can later re-watch it or share it with colleagues to get their input and compare evaluations. As with the first interview, this can help avoid many biases, both personal and interpersonal, helps you evaluate the interview more fairly, and gives you a chance to assess your own performance as an interviewer.


Structure: The interview usually runs for about an hour, sometimes a little bit more:

  • Introductions (5 minutes)

  • 20 questions (45 minutes)

  • Space for the applicant’s questions (10 minutes)

  • Debrief and next steps (5 minutes)


Questions: In 45 minutes of conversation, we can ask roughly 20 questions. To select those, we have a list of 40+ varied, validated questions, divided up by areas of questioning which cover the spectrum of what we are looking for in an ideal candidate. There are 8-10 fairly standardized questions that we almost always ask, while the remaining 10-12 will be tailored for the weaknesses or uncertainties we have about the candidate specifically. To find these, we will go through all submissions we have received from them, and write down any doubts that come up. These could be about technical abilities if the person’s test tasks were their weakest contribution, or things that came up in other steps, like an apparent tendency towards interpersonal conflict, or doubts about whether they are comfortable moving fast and experimenting rather than getting everything perfect right away, etc. In grantmaking, you look for weaknesses or uncertainties to address in the applicant’s submissions.


Space for the applicant’s questions: This part is important to give candidates a chance to ask any questions they might be curious about. The questions a candidate asks can also tell you a lot about what they care, think, and worry about; so assessing the candidate isn’t entirely over until after this part.


Scoring the interview: Especially as you get started interviewing people, it is useful to stick closely to a pre-prepared rubric spreadsheet when evaluating answers. If there is no time for that, or you are experienced enough to have a good sense of the range of answers people give, you might switch to only giving each answer a quick score while listening to the candidate, and mark any red flags with a combination of letters you can easily find later on (e.g. “xxx” or “???”).


After the interview, the final score is assigned based on the candidate’s scores during this interview and how excited we would be to hire or fund them after addressing any final doubts and concerns. We write down any remaining doubts and consider whether a reference check would help to alleviate these before making a final decision on whether to hire them or not.


8) Reference check


Purpose: Reference checks allow you to get an external perspective by drawing on information outside your vetting process. In the hiring literature, reference checks by themselves don’t possess very high predictive power, so we don’t usually recommend relying on them heavily. However, they can provide valuable insight into certain things.


Baselines differ: One thing to keep in mind when evaluating references is that baselines differ. For example, in the corporate for-profit world, anything other than a highly enthusiastic recommendation is a red flag, whereas in the space of Effective Altruist organizations, referees will usually be more nuanced, transparent, and less uniformly optimistic in their assessments – some amount of nuance and criticism from an EA reference doesn’t necessarily mean that the person is not a good candidate.


References are used differently in hiring and grantmaking, so let’s look at these separately:


Hiring-specific considerations


In hiring, any concrete remaining doubts about an otherwise excellent candidate can be specifically addressed by a reference check.


Examples: If you're still unsure whether a candidate can keep up with the required pace of work, you might ask their reference whether they believe the candidate's work tends to be more methodical and comprehensive or ambitious and approximate. It's crucial not to ask leading questions where the "right" answer is obvious. Another example is asking a candidate's reference about the types of team members and team culture the candidate would likely thrive with, versus those they might struggle with in a workplace setting. Since referees may not have much insight into what you're specifically looking for, framing questions descriptively like this makes it more likely that you'll receive useful answers.

It's important to request references from the candidate and inform them of the upcoming reference check, so they can let their referees know.


Grantmaking-specific considerations


In grantmaking, reference checks are a tool that you should use liberally to gain additional information on someone’s track record. These checks can provide insights into the candidate's level of involvement in a space, and their accomplishments beyond what was outlined in the application process. In grantmaking, you might want to ask for references from the candidate; but if you are well-networked in the relevant space, we also recommend asking people who might know the candidate for an informal assessment. Again, for the same reasons as in hiring, we would not weight a referee's judgment of a candidate too highly – especially if you don’t fully trust their judgment – but rather use it to gain more external information on facts and traits that are important for your evaluation.


9) Making a final decision

 

If you follow our advice on how to score a vetting process, at the end you should have a list of top candidates alongside their application form scores, CVs, logistics, test tasks or project plans, interviews, and possibly their references. Every step has given you a piece of data and allowed you to narrow down to only the top candidates.


Final score: Calculating the weighted average of all of a candidate’s scores gives you their final score. For the final decision, this score is helpful – we practically never end up hiring someone whose weighted average is below 8 – but your final decision should be more nuanced than that.
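As a minimal sketch of the arithmetic (the step names and weights below are hypothetical placeholders; set your own based on how informative each step is in your process), the final score is just a weighted average of per-step scores:

```python
# Illustrative final-score calculation. The step names, weights, and
# example scores are hypothetical, not prescribed values.
def final_score(scores: dict, weights: dict) -> float:
    """Weighted average of a candidate's per-step scores (0-10 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[step] * weights[step] for step in weights) / total_weight

candidate = {
    "application_form": 7.5,
    "test_task_1": 8.0,
    "interview_1": 8.5,
    "test_task_2": 9.0,
    "interview_2": 8.0,
}
# In this sketch, later and more informative steps get higher weights.
weights = {
    "application_form": 1,
    "test_task_1": 2,
    "interview_1": 2,
    "test_task_2": 3,
    "interview_2": 3,
}

print(round(final_score(candidate, weights), 2))  # 8.32
```

Running the same function over all top candidates makes the comparative evaluation recommended below straightforward: sort by final score first, then dig into the individual strengths and weaknesses behind the numbers.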


Here is a list of things to consider when making the final decision:


  • Comparative evaluation is easier: First of all, we recommend looking at several top candidates side by side – if you evaluate a single candidate in isolation, you are highly likely to overvalue or undervalue them, depending on whether you generally tend towards optimism or pessimism/cynicism about hiring and on how you are feeling that day.
     

  • Putting strengths and weaknesses into the big picture: Then try to get a good sense of a candidate’s individual strengths and especially their weaknesses, and relate those to the other data you have about them. For example, if a candidate’s weakness is that they are a bit more junior than we would like, will they be working from our office alongside our senior staff, so we can expect them to learn and grow quickly? Or will they be working remotely, so the weakness is likely to remain a real problem? If a new project founder you are considering funding is a bit more inexperienced than you’d like, what did their university professor reference say about their ability to run projects independently? Did you get the sense from the interviews that they learn quickly?


Hiring-specific considerations
 

  • Reasons not to hire: If your doubt is about their conflict potential, social skills, or team compatibility, we recommend erring on the side of not hiring. Similarly, if you are not sure that any of your top candidates are up to the job, don’t be afraid of not hiring at this time. The indirect costs of hiring the wrong employee – on team performance and morale, plus the hassle of letting them go – are just too high.
     

  • Testing intuitions: If you are still uncertain, test your judgment with this question: If you imagine hiring them now, how likely do you estimate it to be that you will want to keep them on after a trial period of, for example, three months? If it’s below something like 75-80% (very rough number), then it’s likely that you shouldn’t hire them.


Even for a great candidate, it’s a good idea to start with a trial hire or probationary period, particularly if you haven’t done much hiring in the past. A good hire can make your organization and a bad one can break it, so the time investment really pays off.


Grantmaking-specific considerations


In grantmaking, the final decision depends a lot on the bar(s) you have set for your grantmaking and on the goals of your granting.


Another thing to consider is whether you have overriding doubts about a grant. As in hiring, if the highest-impact thing to do is not to fund, don’t be afraid to make that call. However, an option you can take in grantmaking is to give a smaller amount than you would if you were fully convinced. Especially for more early-stage organizations, this can be a valuable signal of support and endorsement: it will motivate them, allow them to go much further, and keep up a connection with you so you can track their future progress.


10) Giving feedback


In hiring, we are big fans of giving individual feedback to a handful of candidates who made it far into our process. This is both fair and reciprocal, as candidates will have invested a lot of time and emotional energy in your process by that stage. It is also potentially impactful: it allows promising candidates to work on their weaknesses and builds a positive connection between you, despite the rejection. For legal reasons, we advise against giving feedback in writing. Instead, in your rejection emails to top candidates for whom you think you have useful feedback, offer to schedule a 15-minute feedback video chat. This should not take more than a total of 30 minutes on your part, and it is invaluable both for the candidate and for your connection with them.


In grantmaking, we think it can be useful to give feedback at later stages of the process, but this needs to be navigated very carefully. Power dynamics between funders and grantees could result in a side comment being taken by grant applicants to mean that they should pivot their entire project in order to secure funding from you. It might be safer to stick to more high-level, structure-focused feedback – for example, asking people to improve their CEA model rather than giving more detailed feedback on their intervention that might lead them to pivot significantly.

Example list of potential advisor candidates

Example list

See example here.
