Ever been curious how federal reviewers think about your local data practices? Or what it’s like for the review team visiting centers virtually or in person? As a Review Lead and later as a Field Operations Manager, Dr. Desiree Del-Zio has facilitated or managed approximately 400 federal reviews. She’s now CQI Training Lead for the Q.I. Network.
Dr. Del-Zio shared her insights as a former federal reviewer, including how reviewers view data stories and how members of the Q.I. Network can leverage that experience to make a positive impression. The following conversation has been edited for clarity and brevity.
How does an FA2 Review Team determine if programs have continuous quality improvement systems in place?
Dr. Desiree Del-Zio: After the Office of Head Start introduced the new performance standards with a continuous quality improvement requirement, it updated its monitoring iteration to the Focus Area One and Two (FA1 and FA2) reviews. While FA1 is a virtual, primarily discussion-based review, parts of that discussion still focus on improvements and outcomes.
The FA2 is an on-site review that includes interviews, explorations, and data review and analysis. The data review takes place during the data tour, and reviewers are really looking and listening for those practices that demonstrate data is collected, aggregated, analyzed, and compared to identify risk and inform quality improvements.
Generally, these conversations allow reviewers to look at the data the program has collected. Reviewers talk with the program to understand how it uses that information and what decisions the data has informed. It is an exciting way for programs to share their data-driven decisions, innovations, and improvements in a way they have not been able to in past monitoring iterations.
Where did you fit into that?
Dr. Desiree Del-Zio: DLH Danya hired me as a Review Lead after I worked in a Head Start Agency as a Quality Assurance Manager. In the non-profit agency I served, we had close to 2,000 funded slots, and multiple funding streams, including the Head Start, Early Head Start, and Early Head Start Child Care Partnership grants. We also had CACFP and state preschool funding, so it was a fairly complex program. As a QA staff member, I was a generalist and had a comprehensive and broad knowledge of the standards, which made me a strong fit for a Review Lead position.
I was a Review Lead for about four years, and then I was promoted to Field Operations Manager and served in that role for two monitoring years. As a Review Lead, I monitored roughly 25 programs a year. As a Field Operations Manager, I oversaw anywhere from 25 to 35 reviews a month. So I had lots and lots of exposure to the review process, working with teams of folks who were out there helping programs either showcase their amazing stories or identify opportunities to improve their services to the community.
What did you learn about programs that were doing continuous quality improvement well?
Dr. Desiree Del-Zio: In both of my roles at DLH Danya, I had the opportunity to listen to and see how programs’ data collection and analysis systems created improvements. When programs did this really well, they shared strong monitoring systems that were woven into an organizational culture of collaboration and curiosity. Using data for decision-making was everyone’s responsibility. Each staff member, from program aides to executive leaders, understood how their interactions with families, their monitoring practices, and their data accuracy affected everyone else in the program.
These agencies could describe improvements in each of their program areas. All interviewed participants, from teachers to chief financial officers, could articulate those changes and the improved services that resulted from them. You could really tell that these folks embraced collaboration and could share their efforts to improve the lives of children and families.
Were there any unique characteristics about the data stories these programs shared with you as compared to programs that were not as effective in their quality improvement practices?
Dr. Desiree Del-Zio: One thing that always stood out to me was their ability to use data analytics to demonstrate their improvements to any stakeholder group. The way they presented the data to the review team was clear: the charts and graphs were easy to read and could be understood by someone with a lot or a little background in data analysis. This was impressive because you could see the program wanted the data to be accessible to staff, parents, board members, and the community in an unambiguous way.
By creating these easy-to-understand analytics, they could incorporate data discussions into all their meetings and networking events and gain meaningful insights from staff, parents, board members, policy council members, and community partners. It was a powerful demonstration of their intentionality.
Now that you are working with the Q.I. Network, what are some of the benefits you see for our programs when they prepare for a review?
Dr. Desiree Del-Zio: Directors in the Q.I. Network should know how important it is to leverage their participation during a review. Their membership gives them access to the Power BI data analytics dashboard, Engagement Managers with a breadth of Head Start knowledge, and quality improvement experts. These experts help them build their program’s capacity to shift from a focus on compliance to being a learning organization that embraces continuous improvement.
Member programs gain access to a national network of Head Start agencies that offer each other support and the opportunity to learn from each other individually and in targeted training events. In addition, their Improvement Coach, who is an expert in Continuous Quality Improvement, guides them through gap analyses, root cause analyses, and the implementation and evaluation systems that really help them understand how to create proactive, ambitious, and innovative improvement planning systems.
For programs in the network, it would be such a loss not to highlight their participation as an element of their internal learning systems and to show reviewers how it fits into their overall ongoing monitoring structure.