
Public Affairs 4000: Public Affairs Program Evaluation

This is a sample syllabus to provide general information about the course and its requirements. Course requirements are subject to change. This syllabus does not contain all assignment or course details, and currently enrolled students should reference the syllabus provided by their instructor. For a specific syllabus, please email us a request.

Course Overview

3 Credit Hours
Modalities Available: In-Person


The purpose of this course is to help students acquire an understanding of the research designs and analytic methods used to evaluate public policy. It teaches students how to plan and undertake evaluations in a variety of policy and programmatic contexts using diverse data and methods, and it pays particular attention to designing research to estimate the causal impact of public policy. The knowledge and skills that students acquire in this course are of value in a wide variety of occupations, particularly those in the public and nonprofit sectors.

Learning Outcomes

By the end of this course students will be able to:

  • Understand what public affairs evaluation is and its role in the policymaking process.
  • Describe the diversity of techniques used to conduct and communicate public policy evaluations, as well as the strengths and weaknesses of those techniques.
  • Identify what policy claims a specific policy evaluation can and cannot validate.
  • Apply data analytic and other modeling techniques to evaluate the impact of public policies.
  • Critically assess policy evaluations.
  • Communicate results of policy evaluations in written, oral, and visual formats.

The course contributes to all Glenn College learning goals and objectives related to foundational knowledge in public affairs; competencies in management, leadership, and policy analysis; and developing an appreciation for multiple perspectives in public affairs. In particular, the course focuses on the following objectives at an advanced level:

  • Students can define and address problems in the public and/or nonprofit sectors using analytical tools.
  • Students can conduct advanced data analysis to inform decision making in the public and/or nonprofit sectors.
  • Students can communicate effectively via written, oral, and electronic methods in public and/or nonprofit sectors.
  • Students can describe and explain public sector policy making and administrative processes.
  • Students have an appreciation for the diversity and interdisciplinary nature of public affairs.

Requirements and Expectations

A textbook may be required for this course. Consult your instructor's syllabus for details.

  • Assignments (Best 5 of 6, 5% each), 25%

  • Exam 1, 20%

  • Data worksheet, 2%

  • Draft evaluation, 8%

  • Exam 2, 20%

  • Presentation/Q&A, 7%

  • Final data set in xls format, 3%

  • Final evaluation, 15%

The assignments build key sections of your final program evaluation (assignments 1-3 map closely to sections 1-4 of your program evaluation) and extend it (assignments 4-6 cover material from the textbook not included in the program evaluation).

Each assignment is approximately 500 words. Provide references to the work/words of others. Upload via Carmen by 5pm on the due date. Points may be deducted for late assignments.

The best 5 of the 6 assignments will count.
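To illustrate how these weights combine, here is a minimal sketch with entirely hypothetical scores (the course does not provide or require any grading code; weights are taken from the list above):

```python
# Hypothetical component scores, each out of 100.
assignment_scores = [88, 92, 75, 0, 95, 90]           # the six assignments
best_five = sorted(assignment_scores, reverse=True)[:5]
assignments_avg = sum(best_five) / 5                  # only the best 5 of 6 count

components = {                                        # (score, weight)
    "assignments":      (assignments_avg, 0.25),
    "exam_1":           (84, 0.20),
    "data_worksheet":   (100, 0.02),
    "draft_evaluation": (78, 0.08),
    "exam_2":           (88, 0.20),
    "presentation_qa":  (90, 0.07),
    "final_data_set":   (100, 0.03),
    "final_evaluation": (85, 0.15),
}

final_grade = sum(score * weight for score, weight in components.values())
print(f"Final grade: {final_grade:.1f}%")             # weights sum to 1.00
```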

Assignment Tasks:

  1. Describe (and illustrate with a figure) your impact/logic model. Clearly identify inputs, activities, outputs, and outcomes. What are the external factors?
    Who is the client/target of the program? Detail a timeframe for changes to behavior.
    Be clear about the difference between outputs (controlled by the program) and outcomes (changes to client behavior).

  2. What are your evaluation question(s), and why these?
    To whom, when, why, and how do these changes in behavior occur?
    Make sure you are describing outcomes!

  3. How can you describe the level of diffusion/adoption (the dynamics) of your program?

  4. How could (and what type of) qualitative information enhance your evaluation?

  5. If you were to use an experiment to evaluate your program, how would you do this?
    Pay attention to minimizing bias and other critiques of policy experiments.

  6. You do not have to provide cost/benefit estimates or conduct a cost-effectiveness study for your program. But if you did, what would be the key costs, and how would you compare across alternatives?

You will prepare a quantitative outcome evaluation for a program of your own choosing. The program may be local, state, national, or international. It needs to be a real program tested with real data!

The draft paper should be approximately 1,500 words. The final paper should be approximately 3,000 words.

Your paper will:

  1. Briefly describe the program you are evaluating, including a logic model that articulates the theory of change for the program. When did the program start? How has it changed over time?
  2. Clearly state the evaluation question(s); these must address an outcome. This step can only be achieved after developing the logic model for your program.
  3. Conduct a literature review that explains how past research/practice has informed your evaluation design as well as the model specification for data analysis. Your review will include the following categories:
    • The original legislative/regulatory policy analysis and/or program proposal.
    • Previous evaluations of the same and/or similar programs. Have these used control variables/external factors?
    • Report previous evidence of an effect size of the program on the process/outcome.
    • Explain how this past research has informed your design.
  4. Compile a real data set that you propose to use. This step is much harder than it seems, so start right away. Include an Excel file with your data. Describe the data (see the sketch after this list), including:
    • Unit of analysis – annual, state, individual client, or averages?
    • Identify and explain how the program and outcome variable(s) are measured.
    • How do your variables actually evaluate the program under consideration?
    • Number of observations and timeline of the program and the data.
    • Describe how much your variables actually vary – over time and/or space.
    • Strategies to link datasets if necessary and to “fill-in” missing observations.
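For the data description above, a few lines of pandas can generate most of what is asked. Everything here (the file name, the column names, the state-year panel structure) is a hypothetical placeholder for whatever data you compile:

```python
import pandas as pd

# Hypothetical state-year panel with a program measure and an outcome.
df = pd.read_excel("my_program_data.xlsx")   # e.g., columns: state, year, clients_served, outcome

print(df.shape)                              # number of observations
print(df["year"].min(), df["year"].max())    # timeline of the data
print(df.isna().sum())                       # missing observations per variable
print(df.describe())                         # do the variables actually vary?

# Variation over time and over space:
print(df.groupby("state")["outcome"].std())  # within-state variation over time
print(df.groupby("year")["outcome"].std())   # across-state variation each year
```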

Parts 1-4 are to be submitted as a DRAFT.

  5. Propose a quasi-experimental evaluation design. Be sure to explain the following about your design:
    • The strengths and weaknesses with respect to internal validity: what is your counterfactual?
  6. Data analysis, including the following:
    • Identify and explain control variables/external factors and how they are measured.
    • Identify and explain the different statistical analyses you have conducted. If this includes multivariate analysis, you must describe which type and also specify the model (a hypothetical sketch follows this list).
  7. Explain your findings, including the following:
    • How you assess whether the program had the intended impact on the outcome variable(s). Specify what coefficients to pay attention to, why, and also how (e.g., direction of effect, statistical significance, program effect size).
  8. Discussion of the strengths and weaknesses of your evaluation. Provide cautionary remarks regarding the limitations of the design, data, and analysis (e.g., causality, analysis concerns such as sample size, measurement reliability and validity, external validity, etc.).
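As one concrete illustration of parts 6 and 7 (purely hypothetical; the course does not mandate a particular design, model, or software), a difference-in-differences specification estimated by OLS might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data and variable names: 'treated' marks units that adopted
# the program, 'post' marks periods after adoption, and the interaction
# treated:post carries the estimated program effect.
df = pd.read_excel("my_program_data.xlsx")

model = smf.ols("outcome ~ treated + post + treated:post + population",
                data=df).fit()
print(model.summary())

# The coefficient on treated:post is the one to watch: its sign gives the
# direction of the effect, its p-value the statistical significance, and its
# magnitude (relative to the outcome's scale) the program effect size.
```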

Build on the feedback from your DRAFT.

Include all parts 1-8 (and the xls file of your final data) in your FINAL.


Here are some thoughts and suggestions on preparing an evaluation report. Keep asking questions, and check the FAQ module on the Carmen page throughout the semester for additional guidance.

  1. Use section headings to break up the report and make the various sections clearer.
  2. If you are close to the word count maximum, use a technical appendix for more detailed discussions of methods, data, etc.; I won’t count this towards the total. Also, you can use a table to summarize your literature review. +/- 10% of the word count is fine: much shorter and you haven’t gone into enough detail, much longer and you aren’t focusing on the most critical elements.
  3. A logic model is often clearer if presented as a visual, such as the flow diagrams in the lectures. Be sure to include external factors, and how you think they may impact your evaluation (e.g., a plausible alternative reason why the outcome has changed or explanations of why certain client groups are not reached by the program). Also, think through how the client interacts with the program – once, or multiple times (e.g., a one-time tax refund or a weekly benefit; a one-day training program or a semester course, etc.). Make sure you have a feasible temporal sequence – the client is exposed to the program and then an outcome is measured.
  4. The reason we consider the program as the sum of inputs, activities, and outputs is to highlight that it is rarely a simple “binary” or on/off variable. Rather, the output (at the very least) should describe the scope and scale of your program. A “small” program is less likely to change the outcome as much as a “larger” one. This is what is meant by a treatment or exposure effect, analogous to a dose response in medical applications. For example, if more counties have adopted your program each year, then a state-level outcome measure will change more each year as the number of clients grows. A larger effect is statistically easier to “find” than a smaller one.
  5. Your literature review should include a range of sources including the policy proposals that generated the legislative or regulatory basis of your program. For Federal programs this can include Federal Register rulemaking announcements or the Congressional Record. Similar records can be found at the state or local/municipal level. For non-profit organizations the equivalent could be a grant proposal. Such information provides context, background, suggestions of program need and effect, and when describing policies (programs) with significant economic impact a comparison of alternatives. You should also include any prior evaluations of similar programs (other states/cities, different client groups, etc.). Regardless, such sources should help you discover data, suggest possible measures/variables (including external factors or control variables), methods and ideally describe a program effect size to compare to any that you are able to determine.
  6. The data step (#4) is critical. Before analyzing the data with models, describe the trends, the strengths and weaknesses of the measures, and the timeline of your program. This review should help guide you towards the most appropriate analysis approach. It is rare to find all the data in one place (particularly outcome measures), so expect to “merge” sources and clearly describe any assumptions/decisions made to facilitate this (e.g., unit of analysis state versus county, fiscal year versus calendar year, etc.); a sketch of such a merge follows this list.
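To make point 6 concrete, here is a hypothetical sketch of merging two sources with mismatched units of analysis (all file and column names are placeholders, and every decision shown should be documented in your report):

```python
import pandas as pd

program = pd.read_csv("county_program.csv")    # county, state, year, clients_served
outcomes = pd.read_csv("state_outcomes.csv")   # state, year, outcome

# Decision 1: aggregate county-level program data to the state-year unit
# of analysis so the two sources match.
program_state = (program
                 .groupby(["state", "year"], as_index=False)["clients_served"]
                 .sum())

# Decision 2: merge, keeping every state-year that has an outcome measure.
merged = outcomes.merge(program_state, on=["state", "year"], how="left")

# Decision 3: "fill in" missing program observations explicitly (here, zero
# clients served in state-years where the program had not yet been adopted).
merged["clients_served"] = merged["clients_served"].fillna(0)
```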


The video presentation counts for 5% of your final grade. Upload an approximately 5-minute video presentation to the Assignment tab.

In the Discussion tab, summarize your program (and re-post your video), then lead a Q&A with at least two discussion comments/responses on your peers’ presentations and your own; provide feedback to help your peers improve their final reports and get feedback from them in turn. This counts for 2% of your final grade.

Content of your presentation, 1 slide each (not counting title slide):

  1. Program.  Briefly describe who the program targets and how it acts.
  2. Prior studies.  Summarize previous/similar evaluations with a focus on data/measures, method and effect size.
  3. Evaluation question.  Introduce your data and measures for program, outcome(s) and control variables/external factors.
  4. Design.  Describe your analysis approach.
  5. Findings.  Present the evidence and show whether the program has an effect.

Course Schedule

  1. Introduction

  2. Planning Evaluations

  3. Program Theory

  4. Measurement

  5. Ethics in Evaluation

  6. Assessing Need and Information

  7. Implementation

  8. Qualitative Methods

  9. Meetings about draft paper

  10. One Group Designs

  11. Quasi-Experiments

  12. Experiments

  13. Costs and Outcomes

  14. Communicating Evaluations

  15. Meetings about final paper
