How might we really understand the ‘how’ and ‘why’ behind what works (or indeed what doesn’t) in improving foundational literacy and numeracy programmes in Sub-Saharan Africa?
In 2024, Brink, alongside our partners Laterite and with seed funding from BMGF, set out to join a growing movement of people looking to answer this question by launching the uBoraBora fund. Taking its name from the Swahili words for ‘quality’ and ‘better’, uBoraBora offers implementers of foundational literacy and numeracy (FLN) programmes up to $100k in grant funding and technical assistance to adapt or improve their programme based on evidence.
For this very first portfolio we were looking for eager, open-minded implementers who were collectively frustrated with the status quo: people curious to get under the hood of why and how interventions work (or don't). This mindset mattered to us because it's what makes the difference when teams scale and adapt education programmes effectively.
To find those people, we knew we'd have to design a different kind of fund with a human-centred application process; one that was simple, accessible and encouraged collaboration, but that could also challenge people in the right ways. That meant listening to a wide range of people who understood what would incentivise education implementers to apply.
Below we’ve mapped out some of the thinking that went into designing the fund and selecting the first uBoraBora portfolio. Using this approach we’ve been able to attract an incredible first set of research projects and individuals.
Before the fund was launched, our partners Laterite undertook a landscape analysis that looked at how existing funds were designed and run, what 'best in class' looked like for funding FLN research, and where the funding gap was for FLN research at scale. That work informed our investment thesis, which we then introduced to around 50 implementers, donors, partners, and researchers. Most of this happened online, but we were also fortunate to host a session at the Building Evidence in Education (BE2) working group.
We designed multiple versions of the fund with different grant sizes, themes, and support packages, asking people to tell us which ones they preferred and why. We listened carefully to the feedback, which allowed us to iterate the fund design quickly and effectively, and used simple voting exercises to encourage people to vote with their feet!
As a result of this process, we were able to refine the scope of the fund, clarify who the ‘ideal grantee’ would be for the first portfolio, document evaluation criteria, and agree on the size and nature of our “offer” (grant size and technical support) to prospective grantees.
It also helped us understand what prospective grantees needed from us as the manager of the fund. We heard repeatedly how busy FLN implementers are, and that smaller teams especially don't have the luxury of time or resources to write lengthy grant applications.
Clear, simple and realistic were the three watchwords we carried with us as we designed and ran uBoraBora's application process. To that end we chose to keep the written application concise: 'two pages max' was our rule of thumb. While some teams found it difficult to articulate all their ideas so concisely, many more told us that the restriction helped them focus on the key elements of what they needed to say.
“I found the process refreshing, and thought the questions were thoughtful. I really like how you are running this process and think uBoraBora is a very well conceived initiative.”
The ‘two chats’ part of our approach came in the form of a 60-minute co-design session and an investment committee pitch. Co-design sessions were a space where we could talk through the proposal, offer feedback and ideas, and discuss any areas where the applicant wanted advice or input before progressing to the investment pitch. They also helped us assess team dynamics and each team's interest in our broader goal: contributing significant evidence around implementation in FLN.
“I liked the minimalist request for information that postponed the discussions on budget and programme design specifics to a point after the viability of the idea.”
It’s important to note that even at the final pitch, we actively welcomed unanswered questions from the teams. A finalised research design wasn’t required for submission, and neither was a budget; all we asked for was a compelling problem that the team was curious to explore and which would have a high impact if resolved.
“I like that the process helped to really narrow down the focus of our application and enabled us to be concise in explaining our thinking. It was a bit challenging given that there is a lot of context to any programmatic or organisational focus but I did enjoy having to get to the core of what we needed to communicate to fit two pages.”
We knew that our responsiveness and availability would be crucial if we were going to offer applicants a compelling proposition. We also wanted to generate a sense of buzz around the process and make the idea of applying exciting rather than a time suck. So we began with regional launch events in Nairobi, Freetown, Kigali, Cape Town and at CIES in Miami. These informal, friendly and lively in-person events were designed to build excitement around uBoraBora and the possibility of implementation research, and to create spaces where implementers, foundational literacy and numeracy sector whizzes, and implementation research enthusiasts could geek out. We also aimed to stay humble, acknowledging that we were seeking to contribute to an existing and growing movement.
We made sure to host webinars for applicants early on, both to explain the core parts of the request for proposals (RFP) and to help people put faces to the fund. We also committed to respond within 48 hours to emails and set aside bookable ‘office hours’ every week so that applicants could talk to us candidly for 30 minutes, learn more about the application process and ask specific questions about their individual applications. We also enabled resubmissions if applicants learnt something during a webinar that informed their application.
“[We benefited from] access to the team and the space for open, honest conversation as part of the application itself. It felt like we had more of a voice, and a friend on the other side of the table …looking to solve problems and answer questions together.”
Our portfolio is centred on three types of implementation challenge: adaptation, uptake and greater efficiencies.
Adapting effective FLN programmes to new contexts is hard because teams need to figure out which ingredients of their core programme can be changed while maintaining the programme’s original effectiveness. Adaptation might be required as implementers scale a programme to a new geography, into a new government partnership, or to suit a new demographic of learners.
In implementation research, uptake refers to the extent to which a particular intervention or programme is adopted and used by the target audience or stakeholders. FLN implementers need to know what tweaks and adjustments they can make to ensure that, at scale, people really use the programme the way it was designed.
Thirdly, creating greater efficiencies is all about maximising the effectiveness of the FLN programme while minimising the resources (for example, time, money or people power) needed to get excellent results. FLN implementers are always looking for ways to optimise their programmes because this is key to unlocking scale and greater government adoption.
Both Building Tomorrow and Impact Network are investigating ways to tailor their core programmes to different contexts. In Zambia, Promise and the team at Impact Network are learning how to adapt their programme for government schools for the first time. In Uganda, Innocent and Dasan from Building Tomorrow have started their research by diving into existing data, before exploring ways to adapt their programme specifically to accelerate the progress of P4-P5 learners.
Elsewhere, both Meerkat Learning and VVOB are already implementing Teaching at the Right Level (TaRL) at scale. Their research will help to figure out how to improve teacher uptake of core TaRL pedagogy. Angelica and the Meerkat Learning team are focused on the role of education support officers while Kakula and Sharon at VVOB are exploring how to optimise existing peer-mentoring groups. Meanwhile, Rising Academies are also focused on improving uptake, with a specific focus on teacher guides. Afua and Anne-Fleur are looking across Sierra Leone, Liberia and Ghana to understand what makes a difference to teachers using Rising’s curriculum guides.
Justice Rising and FHI 360 are both focused on greater efficiency. Ee-Reh and the Justice Rising team are leading research to understand how to integrate school-based alternatives to coaching in challenging conflict settings. Lizzie, Zahra and the team at FHI 360 are learning more about what it takes to enable more productive and responsive teacher coaching at scale.
Now that we’ve selected the first uBoraBora portfolio, we are looking at what we can learn from the past 12 or so months in order to make the next call even better for implementers.
For example, next time around we want to encourage more Francophone organisations to apply to the fund, and we’re thinking deeply about how we could make the language around implementation challenges clearer and simpler in order to attract the widest possible audience.
We are busy collating all the data we can (both quantitative and qualitative) to see what patterns and insights we can draw out to inform the design of our second call later this year.
More broadly, this first call showed that there is huge demand and potential for this kind of work. Now we’re asking ourselves how we might mobilise the education sector to fund more of it.
If you’d like to stay up to date on the portfolio’s learning as well as get news on future calls you can follow uBoraBora on LinkedIn right here.
You can discover much more about the uBoraBora portfolio on our website.
A big thank you to all who have helped us get to this point:
The grantees we support, all of whom are inspiring us with the incredible research they do.
Clio, Ben and the brilliant team at the Bill & Melinda Gates Foundation who have given us the space, time and astute guidance to get to this point.
Our partners at Laterite and our advisors Amy Jo Dowd and Christine Beggs, who have shaped this work alongside us.
A group of brilliant people, too many to name, who gave us excellent feedback on our initial designs as part of our landscape analysis and proposition testing.
Partner organisations who have generously traded notes with us throughout: Michelle and Noam at What Works Hub, Julie and Cate at Better Purpose, Nancy and Tom at SHARE, Julianne at Science of Teaching, and many more.