Glossary

This Glossary defines terms commonly used to describe programming, research, and evaluation, with an emphasis on concepts relevant to healthy marriage or responsible fatherhood programs. Bolded terms are defined in the glossary. The references cited in this glossary may also serve as resources for further information on these key terms and ideas.

Activities
Services offered by the program, including those performed under each “allowable” or “authorized” activity, as defined in legislation and grant announcements. Examples include curriculum-based group-level workshops, case management, individual-level services, couple-level services, referrals to outside resources, and parent-child activities facilitated by the program.

__________________________

National Healthy Marriage Resource Center. “Administration for Children and Families Healthy Marriage Initiative, 2002−2009: An Introductory Guide.” Washington, DC: U.S. Department of Health and Human Services, n.d. (p. 10)

Baseline equivalence
Similarity of the program group and the control group or comparison group at the beginning of an evaluation. To detect effects of the program, the program, control, and/or comparison groups must have similar characteristics at the beginning—that is, the baseline—so that any subsequent differences can be attributed to the program. Baseline equivalence may be determined in different ways, depending on the design of the evaluation.

In experimental designs, random assignment creates groups that are equivalent on all characteristics, on average. (Researchers can demonstrate that random assignment created equivalent groups by comparing the averages on selected variables between the program and control groups.)

In quasi-experimental designs, baseline equivalence may be demonstrated by comparing the averages on selected variables between the program and comparison groups, though differences in other (unmeasured) variables might still exist.

Baseline equivalence is not established in non-experimental designs or descriptive evaluations because control/comparison groups are not used.
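
To make the idea of comparing baseline averages concrete, the following is a minimal sketch in Python. The two samples are hypothetical values of a single baseline variable (age at enrollment), and the two-sample t-test and 0.05 threshold are illustrative choices, not a prescribed standard.

```python
# Minimal sketch of a baseline equivalence check on one variable.
# The data are hypothetical; a real evaluation would test several
# baseline characteristics, not just one.
from scipy import stats

program_ages = [24, 31, 28, 35, 29, 26, 33, 30]     # program group at baseline
comparison_ages = [27, 30, 25, 36, 28, 29, 32, 31]  # comparison group at baseline

# A two-sample t-test asks whether the group means differ by more
# than chance alone would explain.
t_stat, p_value = stats.ttest_ind(program_ages, comparison_ages)

if p_value < 0.05:
    print(f"Groups differ on age at baseline (p = {p_value:.3f}).")
else:
    print(f"No detectable baseline difference on age (p = {p_value:.3f}).")
```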

__________________________

“What Works Clearinghouse Glossary.” Available at http://ies.ed.gov/ncee/wwc/glossary.aspx. Accessed July 2, 2014.

Bias
The distortion of results due to over- or under-representing certain types of respondents, timing the data collection poorly, wording questions in a way that encourages or discourages certain responses, or other factors.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 285−294)

Clients
Refers to both program enrollees and program participants. Used in the context of measuring program inputs and outputs, for example, in measuring program participation.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Cohort
A group of people who receive the same services and participate in a program together. An integrated cohort is a group of people who join and finish the program at the same time. An open-entry cohort is a group of people who receive the same services but do not go through the program at the same time; participants might start and finish at different points from their peers.

__________________________

Zaveri, Heather, Robin Dion, and Scott Baumgartner. “Responsible Fatherhood Programming: Two Approaches to Service Delivery.” OPRE Report Number 2015-46. Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2015.

Community fathers
Fathers across every demographic and socio-economic spectrum (and not exclusively fathers who are non-custodial or low-income), including fathers who have returned from incarceration (those who have reentered) or have had contact with the criminal justice system (see also General population in this Glossary).

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Comparison group
A group of people who do not receive the same services as the program group and who researchers think should be similar to the program group in their characteristics at baseline. A comparison group typically is formed using non-random or quasi-experimental methods. (The term “control group” is reserved for a group of people who are randomly assigned not to receive services.) Researchers use statistical methods to test whether the characteristics of the comparison group (for example, age or gender) are similar to the characteristics of the program group before the program begins. This is called testing “baseline equivalence.” The more similar a comparison group is to the program group on baseline characteristics, the more likely that any difference in outcomes between the two groups can be attributed to the program.

__________________________

U.S. Government Accountability Office. “Designing Evaluations.” Washington, DC: U.S. Government Accountability Office, March 1991.

Confounding factor
Some aspect of the evaluation design, other than the services provided, that is aligned with the program group or the comparison group (that is, present for one group but not the other). If a confounding factor is related to participants’ outcomes, it can bias evaluation results and make it impossible to know whether the program or the confounding factor caused observed differences in outcomes. One example is a systematic difference in the way data are collected from people in the program group versus the comparison group: program group members might be surveyed by a case manager, while comparison group members might be surveyed by a research assistant. Participants might report information to someone they know, such as their case manager, differently from someone they do not know, like a research assistant. A confounding factor prevents an evaluation from distinguishing program effects from other potential influences or factors.

__________________________

“What Works Clearinghouse Glossary.” Available at http://ies.ed.gov/ncee/wwc/glossary.aspx. Accessed July 2, 2014.

Construct
A broadly defined area of change that a program targets. A construct can be an issue or idea that a program wants to address or change, such as father involvement. Constructs are the higher-level ideas that are operationalized into measures and measurement tools.

__________________________

Adapted from Dew, Dennis. “Construct.” In Encyclopedia of Survey Research Methods, edited by Paul J. Lavrakas. Thousand Oaks, CA: Sage Publications, Inc., 2008.

Continuous quality improvement (CQI)
CQI is the process of identifying, describing, and analyzing strengths and problems and then testing, implementing, learning from, and revising solutions. CQI relies on an organizational culture that supports continuous learning and depends on the participation of staff at all levels.

__________________________

“Continuous Quality Improvement.” Available at https://www.childwelfare.gov/topics/management/reform/soc/communicate/initiative/ntaec/soctoolkits/continuous-quality-improvement/#phase=pre-planning

Control group
A group of people who agree to participate in an evaluation of program impacts and are randomly assigned not to receive program services. The control group can usually participate in other services available in their communities. (The term “comparison group” is generally used if the group not receiving services is formed non-randomly with quasi-experimental methods.)

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 257−258, 442)

Cross-site evaluation
An evaluation that combines results across multiple programs to assess patterns in program design, implementation, outputs, outcomes, or impacts.

__________________________

Adapted from Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (p. 267)

Curriculum
A program or course of study that focuses on specific topics and includes planned sessions, projects, activities, and other learning opportunities. A curriculum generally includes all of the planned learning experiences over a certain period of time for a specific group of participants. This information typically is documented in a manual or other written material. A curriculum might be provided as one aspect of a larger intervention (and an intervention, in turn, is part of a larger program model).

__________________________

Glatthorn, Allan A., Floyd Boschee, Bruce M. Whitehead, and Bonni F. Boschee. Curriculum Leadership: Strategies for Development and Implementation. 3rd ed. Thousand Oaks, CA: SAGE Publications, 2012. (pp. 357−358)

Descriptive evaluation
A research design that documents outputs or outcomes in a program group but does not include a control or comparison group. It might focus on changes in participant outcomes from the beginning of a program to its end (or later) but cannot provide evidence of the impact of a program. (See also non-experimental design.)

__________________________

Adapted from Patton, Michael Quinn. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: SAGE Publications, 2002. (p. 23)

Evaluation
A systematic collection of information about the activities, characteristics, or outcomes of programs. Several types of evaluations exist, including impact evaluations, which aim to understand program effectiveness, and implementation evaluations, which document program operations.

__________________________

Patton, Michael Quinn. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: SAGE Publications, 2002. (p. 10)

Evidence-based practice
Replicates practices that have been evaluated using rigorous evaluation designs, such as randomized controlled trials or high-quality quasi-experimental designs, and that have demonstrated positive impacts for youth, families, and communities.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Evidence-informed practice
Brings together the best available research, professional expertise, and input from youth and families to identify and deliver services that have promise to achieve positive outcomes for youth, families, and communities.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Experimental design
A research design in which the program and control groups are created with random assignment. This is one of the strongest research designs because, if well executed, the differences in outcomes between the program and control groups at follow-up can be attributed to the program. Experiments can rule out factors other than the program that might cause change in the outputs or outcomes of participants.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999.

Fidelity
The extent to which the delivery of program activities adheres to the program’s intended design. For example, fidelity might refer to whether the program as delivered followed the program model's intended staffing structure or format for delivering services, or to the extent to which a curriculum was delivered in the way the curriculum developer intended.

__________________________

Mowbray, C., M. Holter, G. Teague, and D. Bybee. “Fidelity Criteria: Development, Measurement, and Validation.” American Journal of Evaluation, vol. 24, no. 3, fall 2003, pp. 315−340.

Funded organization
An applicant whose project is awarded funds under this funding opportunity announcement (see also Grantee organization in this Glossary).

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Goals
Statements that broadly reflect the major change expected as a result of the program. Goals are usually general and abstract and are transformed into specific objectives for programming and evaluative purposes.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 78, 94)

General population
Fathers across every demographic and socio-economic spectrum (and not exclusively fathers who are non-custodial or low-income), including fathers who have returned from incarceration (those who have reentered) or have had contact with the criminal justice system (see also Community fathers in this Glossary).

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Grantee organization
An applicant whose project is awarded funds under this funding opportunity announcement (see also Funded organization in this Glossary).

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Impact
A change in the well-being of individuals, households, or communities caused by a particular project, program, or policy. Measuring an impact requires using program and control/comparison groups. In contrast, changes that are measured for a group before and after program participation might be related to the program or caused by other factors, such as the passage of time.
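
As an illustration only, the sketch below computes the simplest form of an impact estimate: the difference in mean follow-up outcomes between a program group and a control group. The outcome scores are hypothetical, and real impact analyses typically add regression adjustment and statistical significance tests.

```python
# Minimal sketch of an impact estimate as a difference in group means.
# Outcome scores are hypothetical follow-up values for each group.
from statistics import mean

program_outcomes = [6.2, 7.1, 5.8, 6.9, 7.4, 6.5]  # program group at follow-up
control_outcomes = [5.9, 6.3, 5.5, 6.1, 6.4, 5.8]  # control group at follow-up

impact_estimate = mean(program_outcomes) - mean(control_outcomes)
print(f"Estimated impact: {impact_estimate:.2f} points")
```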

__________________________

The World Bank. “Impact Evaluation in Practice.” Available at https://www.worldbank.org/en/programs/sief-trust-fund/publication/impact-evaluation-in-practice. Accessed March 3, 2020.

Impact evaluation
An evaluation intended to measure and analyze the effectiveness of a program in achieving its goals, using an experimental or quasi-experimental design. Impact evaluations are designed to distinguish between the effects of the program’s activities and other factors that might lead to change.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 234−235)

Incarcerated fathers
Fathers who are within nine (9) months of release from incarceration and who intend to return to their communities and families.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Inputs
Resources, including financial, technical, and staffing, used to implement program activities. Inputs might include resource constraints the program faces.

__________________________

McDavid, James C., Irene Huse, and Laura R.L. Hawthorn. Program Evaluation and Performance Measurement: An Introduction to Practice. 2nd ed. Thousand Oaks, CA: SAGE Publications, 2013. (p. 20)

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (p. 111)

Intervention
The combination of all of the services and activities a program offers, which together are intended to lead to a specific set of outputs and participant outcomes. Services and activities could include, for example, curriculum-led workshops, individual- or couple-level activities, and case management. The term intervention differs from program model; a model also includes the strategies used to implement the intervention.

__________________________

Adapted from Tucker, Jeffrey G. “Intervention.” In Encyclopedia of Evaluation, edited by Sandra Mathison. Thousand Oaks, CA: Sage Publications, Inc., 2005.

Logic model
A representation of the relationships among a program’s resources, activities, and intended effects. A logic model includes a program’s inputs, outputs, and desired outcomes.
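
As an informal illustration, the components of a logic model can be written down as a simple data structure. The entries below are hypothetical examples for a fatherhood program, not a prescribed template.

```python
# Minimal sketch of a logic model as a dictionary mapping each
# component to hypothetical example entries.
logic_model = {
    "inputs": ["grant funding", "trained facilitators", "curriculum materials"],
    "activities": ["parenting workshops", "case management"],
    "outputs": ["number of workshops held", "participants completing the series"],
    "outcomes": ["increased father-child contact", "improved co-parenting"],
}

for component, examples in logic_model.items():
    print(f"{component}: {', '.join(examples)}")
```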

__________________________

“Logic Model Tip Sheet.” Available at https://www.acf.hhs.gov/sites/default/files/fysb/prep-logic-model-ts.pdf

Measure
A construct that has been quantified or operationalized by providing a concrete and specific definition by which observations of the construct should be categorized. For example, the construct of father involvement can be operationalized into different measures, such as frequency or quality of father-child contact.

__________________________

Adapted from Lewis-Beck, Michael G., Alan Bryman, and Tim Futing Liao. “Measure.” In The Sage Encyclopedia of Social Science Methods, edited by Michael G. Lewis-Beck, Alan Bryman, and Tim Futing Liao. Thousand Oaks, CA: Sage Publications, Inc., 2004.

Measurement tool
A data collection instrument, such as a set of specific questions to ask program participants. A measurement tool is used to collect data to assess a specific measure or set of measures. Ideally, the measurement tool will already have demonstrated validity and reliability in prior research with the intended population.

__________________________

Adapted from Tucker, Eric. “Towards a More Rigorous Scientific Approach to Social Measurement: Considering a Grounded Indicator Approach to Developing Measurement Tools.” In The SAGE Handbook of Measurement, edited by Geoffrey Walford, Eric Tucker, and Madhu Viswanathan. London: SAGE Publications Ltd., 2010.

Mission statement
An organization’s or program’s statement of purpose. A mission statement generally reflects the unique reason for the organization’s or program’s existence.

__________________________

Bryson, John M., and Farnum K. Alston. Creating and Implementing Your Strategic Plan: A Workbook for Public and Nonprofit Organizations. San Francisco: Jossey-Bass Publishers, 1996. As cited in Council on Education for Public Health. “Outcomes Assessment for School and Program Effectiveness: Linking Planning and Evaluation to Mission, Goals and Objectives.” Washington, DC: Council on Education for Public Health, 2011. (p. 1)

Non-experimental design
A research design that includes a program group that receives program services but does not have a control or comparison group. Examples include evaluations that measure participant behavior before program participation (pre-test) and after participation (post-test). Because of the lack of a control or comparison group, this design cannot determine whether observed outcomes were caused by the program or by other factors, such as natural change over time or effects of the broader economy.
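
The sketch below illustrates the pre-test/post-test pattern described above, assuming hypothetical scores for the same participants before and after the program. As the definition notes, even a statistically significant change in such a design does not show that the program caused it.

```python
# Minimal sketch of a pre-test/post-test comparison (no control group).
# Scores are hypothetical and paired by participant.
from scipy import stats
from statistics import mean

pre_scores = [12, 15, 11, 14, 13, 16, 10, 12]   # before program participation
post_scores = [14, 18, 13, 15, 16, 17, 12, 15]  # after program participation

# A paired t-test compares each participant's pre- and post-scores.
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"Mean change: {mean(post_scores) - mean(pre_scores):.2f} (p = {p_value:.3f})")
```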

__________________________

Measurement, Learning and Evaluation Project for the Urban Reproductive Health Initiative. “Types of Evaluation Designs.” Available at https://www.urbanreproductivehealth.org/toolkits/measuring-success/types-evaluation-designs#Non-experimental. Accessed December 10, 2013.

Objectives
Statements that reflect the program’s specific desired achievements. Objectives should be measurable criteria of program accomplishments. Objectives are often derived from a program’s goals and measured by key outcomes for the program.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 78, 94)

Outcomes
Indicators or measures of characteristics in the target population. Examples include participant behavior, attitudes, beliefs, and values. Changes in outcomes for those who participate in services are presumed to result from the program.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (p. 220)

Outputs
Indicators or measures of program operations, including, for example, the number of workshops offered, the receipt of services by program participants, and the number of participants who completed a workshop.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 201−202)

Pass-through funding
A grantee organization’s distribution of funds to third-party partners or contractors without retaining substantial involvement in the design, implementation, guidance, oversight, and monitoring of the funded project.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Performance measure
A type of measure that a grantee must report to the federal government as part of the Government Performance and Results Act (GPRA) of 1993 (and the subsequent GPRA Modernization Act of 2010). Performance measures generally are related to program activities, particularly with regard to aspects of service delivery (outputs) and the achievement of desired results (outcomes). Performance measurement is intended to monitor how well a program is performing, according to the fulfillment of expected outputs and outcomes of service delivery.

__________________________

Adapted from Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 190, 201)

Population
A broad group of people from which researchers will draw a sample relevant to their evaluation and intervention. Both the program group and the control group are drawn from the population. The results of the evaluation, which are derived from the program and control groups, have implications for the broader population.

__________________________

Malone, Lizabeth M., Charlotte Cabili, Jamila Henderson, Andrea Mraz Esposito, Kathleen Coolahan, Juliette Henke, Subuhi Asheer, Meghan O’Toole, Sally Atkins-Burnett, and Kimberly Boller. “Compendium of Student, Teacher, and Classroom Measures Used in NCEE Evaluations of Educational Interventions. Volume II. Technical Details, Measure Profiles, and Glossary (Appendices A – G).” NCEE 2010-4013. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education, 2010.

Primary workshop
A curriculum-based workshop that all participants are expected to attend and ultimately complete. Importantly, a project's primary workshop(s) must (collectively, in the case of multiple workshops) address all funding opportunity announcement requirements and outcomes (see also Workshop in this Glossary).

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Program funds
The authorized federal funding under this funding opportunity announcement.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Program group
A group of people who agree to participate in an evaluation and are assigned (randomly or non-randomly) to receive the program’s services. Also called the “treatment group” or “intervention group.”

__________________________

Adapted from Cramer, Duncan, and Dennis Howitt. “Treatment Group or Condition.” In The SAGE Dictionary of Statistics, edited by Duncan Cramer and Dennis Howitt. London: SAGE Publications Ltd., 2004.

Program model
An intervention that targets one specific population and that incorporates (a) one or more curriculum-based workshops that address all requirements and target outcomes outlined in the funding opportunity announcement and (b) additional services. (Please see each Funding Opportunity Announcement (FOA) for FOA-specific language related to program models.)

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 446−447)

Program
The grantee's funded program in its entirety, including the program model and the mechanisms to implement it, such as staffing, oversight, and data collection.

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.

Quasi-experimental design
A research design in which program and comparison groups are formed by a method other than random assignment. For example, program group members might live in an area that can receive services and comparison group members might live in an area without those services. The more similar a comparison group is to the program group on baseline characteristics, the more likely that any difference in outcomes between the two groups can be attributed to the program.

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 234, 263)

Random assignment
A method to form program and control groups randomly (that is, by chance). Random assignment creates groups that are the same, on average, on all measured and unmeasured characteristics at the beginning of an evaluation. If randomization is done correctly, later differences in outcomes between the program/treatment and control groups can be attributed to the program. Steps should be taken to ensure that random assignment is indeed random, for example by using a computer program that generates random numbers. In addition, after a person is randomly assigned, he or she should not be reassigned for any reason. For example, if someone is randomly assigned to the program group but never receives services, he or she should still be analyzed as part of the program group.
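
The following Python sketch shows one way to carry out random assignment with computer-generated random numbers, as the definition suggests: a hypothetical applicant list is shuffled and split in half. The fixed seed is an optional choice that makes the assignment reproducible for auditing.

```python
# Minimal sketch of random assignment: shuffle, then split.
# Applicant IDs are hypothetical placeholders.
import random

applicants = ["A01", "A02", "A03", "A04", "A05", "A06"]

rng = random.Random(42)  # fixed seed so the assignment can be reproduced
shuffled = applicants[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
program_group = shuffled[:half]   # once assigned, never reassigned
control_group = shuffled[half:]

print("Program group:", program_group)
print("Control group:", control_group)
```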

__________________________

Rossi, Peter H., Howard E. Freeman, and Mark W. Lipsey. Evaluation: A Systematic Approach. 6th ed. Thousand Oaks, CA: SAGE Publications, 1999. (pp. 234, 275)

Reliability
An indication of the consistency of results for a measure or measurement tool under different conditions. If a measure has high reliability, it yields consistent results. For example, if the same person answered a set of interview questions in the same way at different times, those questions would have high reliability.
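
One common check is test-retest reliability, sketched below with hypothetical data: the same respondents answer the same questions at two points in time, and a correlation near 1 suggests the measure yields consistent results.

```python
# Minimal sketch of a test-retest reliability check.
# Scores are hypothetical scale scores from the same respondents
# at two administrations of the same measurement tool.
from scipy import stats

time1_scores = [3.0, 4.2, 2.8, 3.9, 4.5, 3.3, 2.9, 4.0]
time2_scores = [3.1, 4.0, 2.9, 4.1, 4.4, 3.2, 3.0, 3.8]

r, p_value = stats.pearsonr(time1_scores, time2_scores)
print(f"Test-retest correlation: r = {r:.2f}")  # values near 1 = high reliability
```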

__________________________

University of North Texas Center for Learning Experimentation, Application, and Research. “Why Reliability and Validity are Important to Learning Assessment.” Available at https://teachingcommons.unt.edu/teaching-essentials/assessment/why-reliability-and-validity-are-important-learning-assessment. Accessed March 2, 2020.

Research
Experimental or non-experimental work conducted to gain new knowledge related to phenomena and observable facts.

__________________________

The Organisation for Economic Co-operation and Development. “Frascati Manual: Proposed Standard Practice for Surveys on Research and Experimental Development.” Paris: Organisation for Economic Co-operation and Development Publishing, 2002. (p. 30)

Services
Activities the program offers, such as curriculum-based group-level workshops, case management, individual- and couple-level services, referrals to outside resources, and parent-child activities facilitated by the program.

__________________________

Adapted from Tucker, Jeffrey G. “Services.” In Encyclopedia of Evaluation, edited by Sandra Mathison. Thousand Oaks, CA: Sage Publications, Inc., 2005.

Social desirability
The tendency to give socially acceptable responses even if they are not accurate.

__________________________

Adapted from Lewis-Beck, Michael G., Alan Bryman, and Tim Futing Liao, eds. The Sage Encyclopedia of Social Science Methods. Thousand Oaks, CA: Sage Publications, Inc., 2004.

Theory of change
A framework that helps explain how and why a complex intervention is expected to bring about a desired change.

__________________________

De Silva, M. J., E. Breuer, L. Lee, A. Asher, N. Chowdhary, C. Lund, and V. Patel. “Theory of Change: A Theory-Driven Approach to Enhance the Medical Research Council’s Framework for Complex Interventions.” Trials, vol. 15, no. 267, 2014. doi:10.1186/1745-6215-15-267.

Douglass, A., T. Halle, and K. Tout. “The Culture of Continuous Learning Project Theory of Change.” OPRE Report #2019-100. Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2019.

Validity
The extent to which a measure is related to the underlying construct. Validity demonstrates the degree to which a measure accurately reflects the true value of the construct.

__________________________

Groves, Robert M., Floyd J. Fowler, Jr., and Mick P. Couper. Survey Methodology. Hoboken, NJ: John Wiley & Sons, 2004. (p. 50)

Vision statement
A broad statement of the desired results of the program. Statements of vision often refer to goals or expected or desired future outcomes.

__________________________

The Pell Institute and Pathways to College Network Evaluation Toolkit. “Using a Logic Model.” Available at http://toolkit.pellinstitute.org/evaluation-guide/plan-budget/using-a-logic-model. Accessed December 10, 2013.

Workshop
A set of structured classes focused on a topic (or topics) related to the funding opportunity announcement (see also Primary workshop in this Glossary).

__________________________

Developed by the Office of Family Assistance, Administration for Children and Families.