Stanford University | Freeman Spogli Institute for International Studies
Rural Education Action Program
What We Do



China is now synonymous with growth and prosperity. Cities such as Shanghai and Beijing boast infrastructure that rivals that of any city in the world. Dramatic images of glimmering skyscrapers towering above streets booming with commerce highlight China’s seemingly unstoppable growth. China’s currency reserves and trade policies shape global markets. The academic prowess of China’s children is widely acclaimed.

However, in the rural interior, far from the eastern seaboard, millions of people still live in extreme poverty. Here, sturdy mules replace luxury cars, and humble villages replace towering skyscrapers. Two-thirds of China’s young people are growing up in these poor, rural areas. Fewer than 5% will go to college. As they grow up and move to the cities, they can either help propel the country’s growth or dampen its dynamism. Failing to educate and train poor rural children could jeopardize China’s growth and its transformation into a modern, knowledge-based economy.

This is where REAP comes in.


The Rural Education Action Program (REAP) is an impact evaluation organization that aims to inform sound education, health and nutrition policy in China. REAP’s goal is to help students from vulnerable communities in China enhance their human capital and overcome obstacles to education so that they can escape poverty and better contribute to China’s developing economy. REAP’s research focuses on three key areas:  


Health, Nutrition & Education

When children are sick or undernourished, their schoolwork suffers. REAP aims to reduce illness and undernutrition among children so that they can reach their full academic potential.

Keeping Kids in School

Rural schools can be both low-quality and expensive, giving children and their parents little incentive to attend. REAP aims to identify and solve the most serious cost and quality problems associated with rural schooling, so that rural children can have access to an affordable, quality education.

Technology & Human Capital

REAP is exploring the use of technology to improve schooling and health outcomes, both by providing children with extra help inside and outside of school, and by educating parents in remote, hard-to-penetrate areas.

What distinguishes REAP’s approach?


There are thousands of government entities, private organizations, and research institutions around the world that are dedicated to solving problems for vulnerable populations. Often they are awash with money and good intentions, yet the problems they are committed to solving persist. REAP believes this is partly because very few organizations are able to convincingly answer a fundamental question about their efforts: do they work?

REAP asks this question about all of our projects. We believe that in order to reliably measure success and effectively channel ideas and investments, a quantitative, experimental (or quasi-experimental) design is essential. These types of rigorous program evaluations are known in the academic world as “impact evaluations.”

Impact Evaluation: The Basics

Experimental impact evaluations are different from traditional, qualitative monitoring and evaluation techniques in one key respect: they make use of a control group to serve as a basis of comparison. In this sense, they resemble traditional pharmaceutical trials or experiments that were once only done in the laboratory. Here we outline the basic steps involved in all of our impact evaluation projects:

Step 1: A sample group is chosen—say, 100 poor schools.
Step 2: We conduct a survey of all schools in the sample in order to establish a baseline level of information about them.
Step 3: We divide the sample into two statistically identical halves. Each school is randomly assigned to either an intervention group (e.g., 50 schools) or a control group (50 schools).
Step 4: The intervention is implemented in one half of the schools (the intervention group) and not in the other (the control group). For example, 50 schools receive computer labs and education software, while 50 schools do not.
Step 5: At the end of the project, our impact evaluation team comes in again to administer another survey, identical to that given during the baseline.
Step 6: The impact evaluation begins! We use the data from the baseline and endline surveys to measure the size of any changes in the intervention schools and compare them to the changes in the control schools.
    • If the schools in the intervention group experience more of a change than the schools in the control group, we know that the intervention had an impact, and we can show exactly what that impact was.
    • If the intervention and control groups still look identical at the end of the project, we know that the intervention did not have any measurable impact.
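The six steps above can be sketched in a few lines of code. This is an illustrative simulation only, not REAP's actual analysis pipeline: the school count, test scores, and the assumed 5-point treatment effect are all invented for the example. It shows how random assignment plus a baseline and endline survey let us estimate an intervention's impact as the difference between the two groups' average changes.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

# Step 1: a sample of 100 (simulated) poor schools.
n_schools = 100

# Step 2: baseline survey — here, a hypothetical average test score per school.
baseline = [random.gauss(60, 10) for _ in range(n_schools)]

# Step 3: randomly split the sample into intervention and control halves.
indices = list(range(n_schools))
random.shuffle(indices)
intervention = set(indices[: n_schools // 2])

# Steps 4-5: simulate the endline survey. Every school drifts upward a
# little on its own; intervention schools also gain an assumed 5-point
# treatment effect (this number is made up for the illustration).
TRUE_EFFECT = 5.0
endline = [
    score + random.gauss(2, 3) + (TRUE_EFFECT if i in intervention else 0.0)
    for i, score in enumerate(baseline)
]

# Step 6: compare the average change in the intervention group to the
# average change in the control group (a difference-in-differences).
change = [endline[i] - baseline[i] for i in range(n_schools)]
treat_change = statistics.mean(change[i] for i in intervention)
ctrl_change = statistics.mean(
    change[i] for i in range(n_schools) if i not in intervention
)
estimated_impact = treat_change - ctrl_change

print(f"Estimated impact: {estimated_impact:.1f} points")
```

Because assignment is random, the control group's change captures everything that would have happened anyway, so the estimated impact should land near the true 5-point effect, up to sampling noise.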

 

Download our brochure to learn more!