Head Start

Performance Standards

What You Need To Know

As a federally funded program, Head Start is authorized by Congress and governed by federal regulations called the Head Start Program Performance Standards (HSPPS). These standards guide funding decisions, federal monitoring, re-competition, and key management practices such as how programs should collect and use data.

Head Start Performance Standards: An Overview

The first Head Start Program Performance Standards were published in 1975. Since then, the standards have been revised and updated several times, most notably through the Improving Head Start for School Readiness Act of 2007 and a comprehensive update to the HSPPS in 2016, the first full revision of the standards since their original release.

Here’s an overview of areas covered in the Head Start Performance Standards:

  • Eligibility, Recruitment, Selection, Enrollment and Attendance
  • Program Structure
  • Education and Child Development Program Services
  • Health Program Services
  • Family and Community Engagement Program Services
  • Additional Services for Children With Disabilities
  • Transition Services
  • Services to Enrolled Pregnant Women
  • Human Resources Management
  • Program Management and Quality Improvement

Requiring a Stronger Command of Program Data

In many ways, the 2016 changes simplified and streamlined the Head Start Performance Standards, reducing the overall number of federal requirements and compliance details by a third. These changes expanded the important role of parents and required Head Start and Early Head Start programs to offer longer service hours to meet children and families’ needs.

However, the biggest change to the Head Start Performance Standards came in a new expectation for ongoing monitoring, a stronger command of data, and the use of independent judgment. In the past, programs focused on complying with Head Start Performance Standards. Once they met those standards, they felt they had done their job.

How Head Start Programs Use Data Coaches

As Head Start programs seek to strengthen their data use, they are beginning to use a new kind of resource: coaches. Data coaches started appearing in K-12 school districts over a decade ago. They help faculty glean insights from the mass of information they collect. With new federal requirements for data, now Head Start programs are engaging coaches as well.

Dr. Desiree Del-Zio, Early Intel’s Director of Services, discusses the intent behind the Head Start Performance Standards. 

As programs adjust to the new Head Start performance standards, data coaching can help them adopt a continuous improvement mindset and comply with these requirements.

Early Learning Outcomes Framework

The Head Start Early Learning Outcomes Framework (ELOF) is a foundational resource describing the skills, behaviors, and knowledge that programs must foster in all children. The ELOF is grounded in a comprehensive body of research about what children should know and be able to do to succeed in school. The five broad areas of early learning are referred to as central domains:
  • Approaches to Learning
  • Social and Emotional Development
  • Language and Literacy
  • Cognition
  • Perceptual, Motor, and Physical Development

The ELOF has inspired many state level early childhood frameworks, and it informs various child assessments. The Head Start Performance Standards require grantees to implement program and teaching practices that are aligned with the ELOF.

QRIS & Head Start

Over the past couple of decades, nearly every state in the country has developed a quality rating and improvement system (QRIS). These systems are designed to assess and encourage quality in local early childhood programs, offering technical assistance and some financial incentives for reaching quality benchmarks.

While Head Start Performance Standards preceded QRIS, today there is significant overlap between the federal, state and local quality standards. We anticipate that as Head Start advances the use of data analytics (e.g. the Q.I. Network) and the practice of continuous quality improvement, state and local QRIS will follow this progress and continue to innovate.

Federal performance standards recognize QRIS (45 CFR §1302.53) and require a Head Start program (with the exception of American Indian and Alaska Native programs) to participate in a state or local QRIS if:

(i) Its state or local QRIS accepts Head Start monitoring data to document quality indicators included in the state’s tiered system;
(ii) Participation would not impact a program’s ability to comply with the HSPPS; and,
(iii) The program has not provided OHS with a compelling reason not to comply with this requirement.

Head Start Federal Reviews

Focus Area 1 and 2 reviews each happen once every five-year grant cycle for all Head Start and Early Head Start grantees. These reviews present a key opportunity for programs to come together as a team, reflect more deeply on data, and highlight promising practices to federal reviewers.

The goal is to demonstrate not only that they are meeting Head Start Performance Standards but also that they are positively impacting children, families, and communities.

Focus Area 1

The Focus Area 1 (FA1) review is conducted virtually, generally during the first year of the grant cycle. In the past, it was primarily discussion-based. However, the Fiscal Year 2024 Monitoring process now requires documentation to back up the information programs share about improvements, outcomes, and how they use data to drive decision-making in all content areas.

Head Start Performance Standards require programs to collect, analyze, aggregate, and use data to make program decisions.

During an FA1 review, grantees share their program design. They discuss the role of community assessment and explain how they’ve identified program goals that will positively impact their community. Grantees also show how their budget supports these program goals.

As part of the new monitoring protocol, FA1 reviews now also include data tours. During a data tour, programs should be prepared to present and discuss data from the previous two years. The goal is to show an understanding of the data’s context and display program-wide trends over time rather than day-to-day case management.

After the FA1 review, the grantee and the regional office receive a report including feedback for the program moving forward. This review sets the stage for Focus Area 2, when reviewers go beyond program design to understand how well grantees have actually implemented their program design.

Focus Area 2

During a Focus Area 2 (FA2) review, federal reviewers dig deeper than in an FA1 review. They visit on-site to interview staff and conduct a data review and analysis around program implementation. The data review takes place during the data tour, when a program, center, or site leader shares how they collect, analyze, aggregate, and use data to make program decisions and meet the Head Start Performance Standards.

Data tours are a newer part of the monitoring process, allowing programs to share their data-driven decisions, innovations, and improvements. This aligns with Head Start Performance Standard 1302.102 (c) – using data for continuous improvement. Recognizing the importance of this process, many programs start preparing for an FA2 visit months in advance.

To help staff gain confidence answering questions and sharing data stories, they might periodically conduct focus groups, mock interviews, document reviews, observations, and mock data tours. Data tours inform discussions woven into the FA2 week. This includes formal conversations with Board and Policy Council members, teachers, family service staff, and parents.

Generally, the more FA2 preparation, the better, as long as it doesn’t overwhelm staff and lead to stress. When staff can use data analytics to explain improvements to any stakeholder group using clear, easy-to-understand charts and graphs, that impresses federal reviewers. If a program doesn’t have a good grasp of its data, the review may result in a deficiency, and multiple deficiencies can require the grantee to recompete for its grant.

Monitoring Protocol

The Office of Head Start releases an updated monitoring protocol each fall. The biggest shift came in 2016, with a greater focus on continuous improvement and less on compliance with regulations. However, the 2023 revisions are also significant. During a program year with a scheduled FA1 or FA2 review, program leaders should review the new monitoring protocol as soon as it comes out. This can help prevent any surprises.

In keeping with the Head Start Performance Standards, federal reviewers ask to see up to two years’ worth of data. They want to know that programs can articulate how they use their data across time and that data use continues to support short-term and long-term goal setting.

The Fiscal Year 2024 Monitoring process includes the following:

  • Management Discussion: While other components may be scheduled based on a program’s needs and staffing, the timing of this discussion is non-negotiable. The management discussion typically includes Head Start directors and content leads. It covers the who, how, and why, which will be tested in subsequent review activities.
  • Governance and Parent Discussions: These discussions are attended by volunteer members who are selected by the program, at a time agreeable to attendees. They are designed to test areas including communication systems, data awareness, improvement planning, and relationship development systems.
  • Content Area Data Tours: Content area teams in fiscal, health, human resources, ECD, ERSEA, and FCE lead reviewers through data tours. The goal is to demonstrate how each content area collects, aggregates, analyzes, and uses data.
  • Classroom and Center Explorations: The purpose of classroom and center explorations is for reviewers to assess environments and safety practices. They look for effective teaching practices, curriculum fidelity, and data-driven lesson planning and implementation, and they observe classroom culture for nurturing and safe practices.
  • ERSEA File Review: This file review is based on a predetermined sample size with no more than 93 files per grant. Files are randomly selected so that the reviewer can assess compliance and quality and test selection and enrollment practices.
  • Fiscal Transaction Testing: The goal of fiscal transaction testing is to ensure the grantee has checks and balances in place to prevent waste, fraud, and abuse. This component is monitored virtually.

Head Start Data Stories in a Review

During federal monitoring, reviewers want grantees to share data stories that demonstrate their commitment to growth and continuous improvement. These data stories combine data analytics with a narrative.
Data stories are a way for programs to show their commitment to growth as outlined in the Head Start Performance Standards.

Unfortunately, many programs conflate anecdotes with data stories. For instance, consider a Family Services team that discovers a family is unhoused and then connects them with a community partner to secure stable housing. This is a heartwarming story, but if it’s not backed by broader program data reflecting a larger pattern or system, it’s merely an anecdote, not a data story, and it won’t impress a federal reviewer. 

This approach also creates a missed opportunity for the program to understand trends and risks impacting their community and take steps to fill that need. Staff might say “I think” rather than “I know,” because they don’t have the data to back up assumptions.

Programs that don’t take a data-informed approach may try to present anecdotes during an FA2 review. Often, they can’t explain the rationale behind what data is collected and how it’s used, because they don’t understand the bigger picture. More importantly, they can’t demonstrate that they’re meeting the Head Start Performance Standard to collect, aggregate, and analyze data to achieve program performance goals. 

Contrast that to a program taking a more data-informed approach. During an FA2 review, a Head Start program leader might explain that as of last year, 80% of children were not up-to-date on their health screenings. Through data analysis, they discovered that the most commonly missing screening was dental. Based on that finding, they then partnered with a local organization to bring dentists into their centers for cleanings twice a year. 

Thanks to this new partnership and an effort to educate parents and secure the necessary paperwork for their children to get their teeth cleaned on-site, the number of children without up-to-date health screenings dropped 20% compared to last year. Program leaders might add that they’re looking at other missing health screenings so they can bring that number closer to zero. 

This example not only cites data but shows the team’s commitment to collecting the right data and taking positive action based on findings as outlined in the Head Start Performance Standards. Coaching can help teams develop data stories and articulate them with greater confidence and fluency.

Recompetition and the Designation Renewal System

In the past, Head Start and Early Head Start grants rolled over from year to year once the grantee submitted paperwork. Beginning in 2011, the Designation Renewal System (DRS) was created to maintain quality standards and ensure that Head Start and Early Head Start programs meet program and financial requirements. This system created five-year grant periods for all agencies running Head Start and Early Head Start programs. It was updated in 2020.

The Designation Renewal System was designed to maintain Head Start Performance Standards.

Agencies meeting the requirements receive additional five-year grants after submitting a new grant application, without competing with other agencies for funding. Most Head Start and Early Head Start agencies receive their next five-year grant midway through year five of the current grant cycle.

However, under the DRS, agencies meeting one or more of the following conditions must compete for Head Start or Early Head Start funds with other local agencies:

  1. Two or more deficiencies, defined as a systemic or substantial material failure of an agency in an area of performance
  2. Score below a competitive threshold in one or more Classroom Assessment Scoring System (CLASS®) domains. Additionally, grantees that fall below a quality threshold on any CLASS® domain receive support from the Office of Head Start (OHS) for the program to improve classroom learning environments.
  3. Two or more audit findings of material weakness or questioned costs associated with Head Start funds, or a going concern finding
  4. Failure to establish and take steps to achieve school readiness goals
  5. License revocation
  6. Suspension by OHS
  7. Debarment by another federal or state agency or disqualification from the Child and Adult Care Food Program (CACFP)

Agencies are typically aware that recompetition is coming due to low CLASS® scores or a deficiency identified during their FA2 visit. They will usually have a chance to address the issue.

Once an agency goes into the recompete process, there is a Notice of Funding Opportunity (NOFO) and interested agencies have the chance to participate in an open competition for a grant to provide Head Start and Early Head Start services in that area. The agency meeting those conditions can still receive a grant through this re-competition process. 

In extreme cases such as financial malfeasance or health and safety concerns, Community Development Institute (CDI) Head Start can take over program management. If a DRS condition applies to a tribal grantee, the U.S. Department of Health and Human Services consults with the tribal government to create a plan to improve quality. If expectations are not met based on a reevaluation, the grantee would need to recompete for their grant. 

While the majority of programs that have gone through DRS have been successfully refunded, programs can sometimes lose a portion of their funding to other providers in the area. DRS competition can also vary by region, with more urban areas facing stiffer competition because more providers are available to compete. In cases where a grantee loses all of their funding and the grant is awarded to another agency in the area, the new grantee will sometimes take over the lease from the prior grantee and hire some of their staff.

Grantees preparing for the recompete process may benefit from quality assurance coaching or fractional staffing support to help ensure they are meeting Head Start Performance Standards.

Takeaways:
Data & the Performance Standards

If there was any doubt, when the current Head Start Performance Standards were first approved, whether data was truly a federal priority, that doubt has been resolved. Every year, when OHS releases its protocol for federal reviews, expectations rise for local program data use.

As contemporary data tools and practices have become widespread across the social sector, Head Start programs are expected to adapt and evolve. Programs are accomplishing this in a variety of ways: recruiting new staff or consultants, subscribing to new tools, bringing in data coaches, and collaborating with peer programs to advance their work. Early Intel supports many of these programs with its menu of resources and through the Head Start Q.I. Network.