Quest Alliance at the Data Catalyst Program

This blog post was written by Shiny Dhar from the Quest Alliance team.

The social sector collaborates in many forms: collaboratives, capacity-building programs led by funders, agencies, quasi-government or multilateral organizations, and conferences that bring the sector together to reflect, look back, and look ahead.

The Data Catalyst Program is one such program that the Quest team attended this year at the beautiful Port Muziris in Kochi. The program was a joint effort by Project Tech4Dev in collaboration with Dasra, intended to bring together senior management, data practitioners, and leaders from several of India’s most reputable NGOs for a three-month cohort-based program. It differed significantly from other programs because it adopted a practitioner’s approach to developing scalable and sustainable data solutions through interactive, hands-on, problem-statement-based workshops.

Quest’s programs have come a long way since 2004. We are at a stage where the focus is on scale: increasing adoption at institutions, or working directly with the district education system so that our models are adopted across a district through a cascade system. We have collected a lot of data over time and aim to become a genuinely evidence-based organization, with high investments in data infrastructure and culture creation. With these goals in mind, our challenges also carry the complications that often come with scale.

The Data Catalyst program helped us identify, formalize, and streamline solution-building for many of our identified issues. While the results are for another post in the new year, here are my biggest takeaways from the program:

  1. Data culture is both top-down and bottom-up: Culture often starts at the top, with an org-wide commitment from leaders that trickles the use of data for decision-making down through the organization. What came out strongly through examples is that we should collectively re-examine data management. We often underscore the importance of fostering a robust data culture that encompasses more than just swift data analysis, but building this culture is more complicated than it looks. Goalkeep’s sessions and the data culture rubric gave us more insight into the areas we need to focus on to carry our existing data strategies to fruition. The session on which data levers are essential, given an organization’s strengths and areas of development, was another key takeaway that we are now using to make decisions internally at Quest.
  2. Metrics: Metrics should align with the organization’s primary journey, and the ones that track progress should come from our key stakeholders and beneficiaries. These metrics need to be identified, refined over time, and discussed often. The Agency Fund discussed an approach of looking at funnel metrics during all data-driven discussions and updates. Funnel metrics are widely used in product management but are equally powerful tools for visualizing impact numbers.
  3. Visualizations: A key takeaway here was designing dashboards that are combined or separated by stakeholder using the Key Questions approach. A best practice is to put these critical questions on the dashboards themselves, which helps increase utilization by field, program, or operations teams. To be truly effective, a key question should have a clear purpose and an action associated with it. For example, an operations dashboard could ask “How many parents did the field team visit this month?”, provided that visiting is an important metric that leads to outcomes. The associated action could be for the field team to chase 100% by the end of the month.
  4. Correlation vs. Causation: As organizations, we should move beyond a pre- and post-approach that establishes associations alone. For example, while percentage-point differences between baseline and endline can suggest impact, evaluations should statistically establish correlations and eventually work toward establishing causation. While correlation can be easy to show, establishing causation is a complicated and more resource-intensive process. However, if we are systematic in our approach, it is possible to design for the final steps while laying the foundations. While experimentation is critical, keeping research and data teams in the loop for M&E, project design, and impact evaluations is essential. This enables a systematic approach to experiments and innovations: defining metrics, considering counterfactuals, and using appropriate analysis and statistical methods to ensure that the results are statistically significant.
  5. One team, many avatars – Automation, Research, ML, Data Engineering, Data Science, and Data Analysis: It is essential to understand where we stand in the data journey and the key strengths of the data team members, instead of using all these terms interchangeably or assuming that all the roles are the same. For organizations with few data-literate members, identifying specialists and vendors in the space is essential. Learning AI tools and using them for data analysis is another way forward. The GPT-4 version of ChatGPT supports data analysis well and can crunch numbers through a simple conversational approach that requires very little technical prowess.
  6. Use of AI and LLMs: LLMs (Large Language Models) are large, general-purpose language models that are pre-trained and can then be fine-tuned for specific purposes. Various automation approaches were discussed, such as Dalgo to streamline the data cycle and ChatGPT for data analysis, which free up time for more targeted and strategic team tasks. Many projects in health and education were discussed where Glific was used to create chatbots that increased project reach without increasing project costs. Some things to consider while building such approaches are data privacy, ethical challenges, solution-level challenges (e.g., hallucination), language barriers, expert opinion on problem and solution statements, and force-fitting LLMs onto issues that could be solved better without them. Some exciting platforms to explore were Bard, ChatGPT, and Jugalbandi.
  7. Upskilling and Data: The emphasis on continuous learning and upskilling at all levels, including managerial and leadership positions, came out very strongly in all conversations, underlining the importance of adaptation in a changing world.
  8. Data Storytelling: Data is a significant enabler in crafting powerful stories of change for our internal and external stakeholders. Crafting coherent data stories for external stakeholders takes patience: the process often begins with seemingly independent questions, culminates during the articulation stage, and is often re-shaped by extensive feedback loops, highlighting the need for learning and unlearning. Data teams must filter this feedback into what is important and what is irrelevant to maintain the data story’s effectiveness.
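
To make the funnel-metrics idea above concrete, here is a minimal sketch in Python. The stage names and counts are hypothetical, invented purely for illustration; they are not Quest's actual program data:

```python
# A minimal sketch of funnel metrics for a field program.
# Stage names and counts are hypothetical, for illustration only.

funnel = [
    ("Households reached", 10000),
    ("Parents visited", 6500),
    ("Learners enrolled", 4200),
    ("Learners completing the course", 3100),
]

def funnel_report(stages):
    """Return (stage, count, conversion-from-previous-stage) tuples."""
    report = []
    prev = None
    for name, count in stages:
        conversion = count / prev if prev else 1.0
        report.append((name, count, round(conversion, 2)))
        prev = count
    return report

for name, count, conv in funnel_report(funnel):
    print(f"{name:32s} {count:6d}  {conv:.0%} of previous stage")
```

Reading the funnel stage by stage makes drop-off points visible at a glance, which is exactly why the same tool product teams use for conversion works for impact numbers.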
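
The counterfactual point above can be illustrated with a toy difference-in-differences calculation, contrasting a naive pre/post gain with an estimate adjusted for a comparison group. All scores here are hypothetical, for illustration only:

```python
# A minimal sketch contrasting a pre/post difference with a
# comparison-group (counterfactual) design.
# All numbers are hypothetical assessment scores.
from statistics import mean

# Treated cohort: baseline and endline scores for the same learners.
baseline = [52, 48, 61, 55, 49, 58]
endline = [64, 59, 70, 66, 57, 69]

# Pre/post alone: the naive "impact" estimate.
pre_post_gain = mean(endline) - mean(baseline)

# A comparison group that did not receive the program may also improve
# (maturation, other schemes, etc.) - this is the counterfactual.
comparison_gain = 5.0  # hypothetical

# Difference-in-differences: gain beyond what would have happened anyway.
did_estimate = pre_post_gain - comparison_gain

print(f"Pre/post gain:         {pre_post_gain:.1f} points")
print(f"Comparison-group gain: {comparison_gain:.1f} points")
print(f"Adjusted estimate:     {did_estimate:.1f} points")
```

A real evaluation would add significance testing and a carefully constructed comparison group, but even this toy version shows why the pre/post number alone overstates impact.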

The interactions with roommates, over ‘chai,’ at dinner tables, during work sessions, or simply in five minutes in the lobby went beyond mere exchanges. They reflected our collective purpose of making the world a better place and the role that data can play in our collective success. Part one ended with a bang: a fantastic karaoke night where everyone came together to share stories, sing songs, and continue exciting conversations in a more informal setting. Looking forward to Part 2!
