VOLUME ONE

Part One: Contemporary Trends

Section One: Research on Evaluation Theory, Method, and Practice
Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice - Christina Christie
Developing Standards for Empirical Examinations of Evaluation Theory - Robin Miller
Research on Evaluation: A Needs Assessment - Michael Szanyi, Tarek Azzam and Matthew Galen
Taking Stock of Empowerment Evaluation: An Empirical Review - Robin Miller and Rebecca Campbell
Designing Evaluations: A Study Examining Preferred Evaluation Designs of Educational Evaluators - Tarek Azzam and Michael Szanyi
A Systematic Review of Theory-Driven Evaluation Practice from 1990 to 2009 - Chris Coryn, Lindsay Noakes, Carl Westine and Daniela Schroeter
Evaluator Characteristics and Methodological Choice - Tarek Azzam
Research on Evaluation Use: A Review of the Empirical Literature from 1986 to 2005 - Kelli Johnson, Lija Greenseid, Stacie Toal, Jean King, Frances Lawrenz and Boris Volkov
Evaluation Use: Results from a Survey of U.S. American Evaluation Association Members - Dreolin Fleischer and Christina Christie
Going through the Process: An Examination of the Operationalization of Process Use in Empirical Research on Evaluation - Courtney Amo and J. Bradley Cousins
An Empirical Examination of Validity in Evaluation - Laura Peck, Yushim Kim and Joanna Lucio

Part Two: Emerging Issues

Section One: Visualizing Evaluation Data
Data Visualization and Evaluation - Tarek Azzam, Stephanie Evergreen, Amy Germuth and Susan Kistler
GIS in Evaluation: Utilizing the Power of Geographic Information Systems to Represent Evaluation Data - Tarek Azzam and David Robinson

Section Two: Communication
Unlearning Some of Our Social Scientist Habits - E. Jane Davidson
Reconceptualizing Evaluator Roles - Gary Skolits, Jennifer Morrow and Erin Burr

VOLUME TWO

Part One: Methodological Developments

Section One: Perspectives on Validity
Validity Frameworks for Outcome Evaluation - Huey Chen, Stewart Donaldson and Melvin Mark
Reframing Validity in Research and Evaluation: A Multidimensional, Systematic Model of Valid Inference - George Julnes
Recommendations for Practice: Justifying Claims of Generalizability - Larry Hedges

Section Two: Perspectives on Causality
Campbell and Rubin: A Primer and Comparison of Their Approaches to Causal Inference in Field Settings - William Shadish
Contemporary Thinking about Causation in Evaluation: A Dialogue with Tom Cook and Michael Scriven - Thomas Cook, Michael Scriven, Chris Coryn and Stephanie Evergreen
Campbell's and Rubin's Perspectives on Causal Inference - Stephen West and Felix Thoemmes
Evaluating Methods for Estimating Program Effects - Charles Reichardt
Reflections Stimulated by the Comments of Shadish (2010) and West & Thoemmes (2010) - Donald Rubin
An Economist's Perspective on Shadish (2010) and West and Thoemmes (2010) - Guido Imbens

Part Two: Empirical Developments

Section One: Quasi-Experiments that Resemble Experiments
Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons - Thomas Cook, William Shadish and Vivian Wong
Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments - William Shadish, M.H. Clark and Peter Steiner
Comment: The Design and Analysis of Gold Standard Randomized Experiments - Donald Rubin
Rejoinder - William Shadish, M.H. Clark and Peter Steiner
An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico's PROGRESA Program - Juan Diaz and Sudhanshu Handa
Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison with a Randomized Experiment - Travis St. Clair, Thomas Cook and Kelly Hallberg

Section Two: Improving the Design of Cluster-Randomized Trials
Emergent Principles for the Design, Implementation, and Analysis of Cluster-based Experiments in Social Science - Thomas Cook
Using Covariates to Improve Precision for Studies That Randomize Schools to Evaluate Educational Interventions - Howard Bloom, Lashawn Richburg-Hayes and Alison Black
Strategies for Improving Precision in Group-Randomized Experiments - Stephen Raudenbush, Andres Martinez and Jessaca Spybrook
New Empirical Evidence for the Design of Group Randomized Trials in Education - Robin Jacob, Pei Zhu and Howard Bloom
Intraclass Correlations and Covariate Outcome Correlations for Planning Two- and Three-Level Cluster-Randomized Experiments in Education - Larry Hedges and Eric Hedberg
The Implications of "Contamination" for Experimental Design in Education - Christopher Rhoads
Stratified Sampling Using Cluster Analysis: A Sample Selection Strategy for Improved Generalizations from Experiments - Elizabeth Tipton

VOLUME THREE

Part One: Developments in Qualitative Methods

Section One: Advances in Qualitative Analysis Techniques
A General Inductive Approach for Analyzing Qualitative Evaluation Data - David Thomas
Qualitative Comparative Analysis (QCA) and Related Systematic Comparative Methods: Recent Advances and Remaining Challenges for Social Science Research - Benoit Rihoux
A New Realistic Evaluation Analysis Method: Linked Coding of Context, Mechanism, and Outcome Relationships - Suzanne Jackson and Gillian Kolla
A Proposed Model for the Analysis and Interpretation of Focus Groups in Evaluation Research - Oliver Massey

Part Two: Developments in Mixed Methods

Section One: Defining Mixed Methods
Mixed Methods Research: A Research Paradigm Whose Time Has Come - R. Burke Johnson and Anthony Onwuegbuzie
Toward a Methodology of Mixed Methods Social Inquiry - Jennifer Greene
Toward a Definition of Mixed Methods Research - R. Burke Johnson, Anthony Onwuegbuzie and Lisa Turner
Integrating Quantitative and Qualitative Research: How Is It Done? - Alan Bryman
Is Mixed Methods Social Inquiry a Distinctive Methodology? - Jennifer Greene
Putting the MIXED Back Into Quantitative and Qualitative Research in Educational Research and Beyond: Moving toward the Radical Middle - Anthony Onwuegbuzie

Section Two: Mixed Methods Typologies
A General Typology of Research Designs Featuring Mixed Methods - Charles Teddlie and Abbas Tashakkori
Conducting Mixed Analyses: A General Typology - Anthony Onwuegbuzie et al.
A Typology of Mixed Methods Research Designs - Nancy Leech and Anthony Onwuegbuzie

Section Three: Mixed Methods in Practice
Transformative Paradigm: Mixed Methods and Social Justice - Donna Mertens
Grounded Theory in Practice: Is It Inherently a Mixed Method? - R. Burke Johnson, Marilyn McGowan and Lisa Turner
Communities of Practice: A Research Paradigm for the Mixed Methods Approach - Martyn Denscombe
A Theory-Driven Evaluation Perspective on Mixed Methods Research - Huey Chen
Mixed Methods and Credibility of Evidence in Evaluation - Donna Mertens and Sharlene Hesse-Biber
Guidelines for Conducting and Reporting Mixed Research in the Field of Counseling and Beyond - Nancy Leech and Anthony Onwuegbuzie
The Validity Issue in Mixed Research - Anthony Onwuegbuzie and R. Burke Johnson
Mixed Data Analysis: Advanced Integration Techniques - Anthony Onwuegbuzie et al.

VOLUME FOUR

Part One: Enduring Issues of Evaluation Practice

Section One: Metaevaluation
Quality, Context, and Use: Issues in Achieving the Goals of Metaevaluation - Leslie Cooksy and Valerie Caracelli
Concurrent Meta-Evaluation: A Critique - Carl Hanssen, Frances Lawrenz and Diane Dunet
Metaevaluation in Practice: Selection and Application of Criteria - Leslie Cooksy and Valerie Caracelli
Evaluating the Quality of Self-Evaluations: The (Mis)match between Internal and External Meta-Evaluation - Jan Vanhoof and Peter Van Petegem
Meta-Evaluation Revisited - Michael Scriven

Section Two: Ethics
Expanding the Conversation on Evaluation Ethics - Thomas Schwandt
The Good, the Bad, and the Evaluator: 25 Years of AJE Ethics - Michael Morris
Ethics and Development Evaluation: Introduction - Patrick Grasso
Everyday Ethics: Reflections on Practice - Gretchen Rossman and Sharon Rallis
Nonparticipant to Participant: A Methodological Perspective on Ethics - Scott Rosas

Section Three: Using Program Theory in Evaluation
Constructing Theories of Change: Methods and Sources - Paul Mason and Marian Barnes
Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation - Brad Astbury and Frans Leeuw
Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions - Patricia Rogers

Part Two: Enduring Issues of Evaluation Training

Section One: Evaluation Capacity Building/Development
A Research Synthesis on the Evaluation Capacity Building Literature - Susan Labin, Jennifer Duffy, Duncan Meyers, Abraham Wandersman and Catherine Lesesne
A Multidisciplinary Model of Evaluation Capacity Building - Hallie Preskill and Shanelle Boyle
Measuring Evaluation Capacity: Results and Implications of a Danish Study - Steffen Nielsen, Sebastian Lemire and Majbritt Skov
A Self-Assessment Procedure for Use in Evaluation Training - Daniel Stufflebeam and Lori Wingate

Section Two: Evaluator Competence
Establishing Essential Competencies for Program Evaluators - Laurie Stevahn, Jean King, Gail Ghere and Jane Minnema
Evaluator Competencies: What's Taught versus What's Sought - Jennifer Dewey, Bianca Montrosse, Daniela Schroeter, Carolyn Sullins and John Mattox II
A Conversation on Cultural Competence in Evaluation - Joseph Trimble, Ed Trickett, Celia Fisher and Leslie Goodyear
Development and Validation of the Cultural Competence of Program Evaluators (CCPE) Self-Report Scale - Krystall Dunaway, Jennifer Morrow and Bryan Porter
Emphasizing Cultural Competence in Evaluation: A Process-Oriented Approach - Luba Botcheva, Johanna Shih and Lynne Huffman