To determine what the effects of the program are:
- Assess skills development by program participants
- Compare changes in behavior over time
- Decide where to allocate new resources
- Demonstrate that accountability requirements are fulfilled
- Use information from multiple evaluations to predict the likely effects of similar programs

To affect participants:
- Reinforce messages of the program
- Stimulate dialogue and raise awareness about community issues
- Broaden consensus among partners about program goals
- Teach evaluation skills to staff and other stakeholders
- Gather success stories
- Support organizational change and improvement

Questions

The evaluation needs to answer specific questions.
Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much.
Here we introduce the idea of evaluation and some of the major terms and issues in the field.

Definitions of Evaluation

Probably the most frequently given definition is: "Evaluation is the systematic assessment of the worth or merit of some object." This definition is hardly perfect.
There are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few.
Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation.
For instance, one might say: "Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object." Both definitions agree that evaluation is a systematic endeavor, and both use the deliberately ambiguous term 'object,' which could refer to a program, policy, technology, person, need, activity, and so on.
The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit, because all evaluation work involves collecting and sifting through data and making judgments about the validity of the information and of the inferences derived from it, whether or not an assessment of worth or merit results.
The Goals of Evaluation

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences, including sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as "useful" if it aids in decision-making.
But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise.
Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically driven feedback.

Evaluation Strategies

'Evaluation strategies' means broad, overarching perspectives on evaluation.
They encompass the most general groups or "camps" of evaluators; although, at its best, evaluation work borrows eclectically from the perspectives of all these camps. Four major groups of evaluation strategies are discussed here.
Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences -- especially the social sciences -- they emphasize the desirability of impartiality, accuracy, objectivity, and the validity of the information generated.
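These scientific-experimental strategies typically estimate a program's effect by comparing observed outcomes against some counterfactual. As a purely illustrative sketch -- the groups and scores below are invented for the example, not drawn from any real study -- a simple difference-in-differences estimate from a quasi-experimental design might look like:

```python
# Illustrative difference-in-differences sketch with invented scores:
# a program group and a non-equivalent comparison (control) group,
# each measured before and after the program.

def mean(xs):
    return sum(xs) / len(xs)

program_pre  = [52, 48, 55, 50]   # program group, before
program_post = [61, 58, 66, 59]   # program group, after
control_pre  = [51, 49, 53, 50]   # comparison group, before
control_post = [54, 52, 55, 53]   # comparison group, after

program_change = mean(program_post) - mean(program_pre)   # 9.75
control_change = mean(control_post) - mean(control_pre)   # 2.75

# The estimate attributes to the program only the change in the program
# group beyond what the comparison group shows over the same period.
effect = program_change - control_change
print(round(effect, 2))  # prints 7.0
```

Here the program group improved by 9.75 points and the comparison group by 2.75, so the design credits the program with the remaining 7.0-point gain; a real evaluation would add significance testing and scrutinize how comparable the two groups actually are.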
Included under scientific-experimental models would be: the tradition of experimental and quasi-experimental designs; objectives-based research that comes from education; econometrically oriented perspectives, including cost-effectiveness and cost-benefit analysis; and theory-driven evaluation.

The second class of strategies are management-oriented systems models. Two of the most common are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method. Both have been widely used in business and government in this country.
It would also be legitimate to include the Logical Framework or "Logframe" model developed at the U.S. Agency for International Development, as well as general systems theory and operations research approaches, in this category. Two management-oriented systems models were originated by evaluators: the UTOS model, where U stands for Units, T for Treatments, O for Observations, and S for Settings; and the CIPP model, where C stands for Context, I for Input, the first P for Process, and the second P for Product. These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.
The third class of strategies are qualitative/anthropological models. They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process. Included in this category are the approaches known in evaluation as naturalistic or 'Fourth Generation' evaluation; the various qualitative schools; critical theory and art criticism approaches; and the 'grounded theory' approach of Glaser and Strauss, among others.
Finally, a fourth class of strategies is termed participant-oriented models. As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology.
Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems. With all of these strategies to choose from, how to decide?
Debates that rage within the evaluation profession -- and they do rage -- are generally battles between these different strategists, with each claiming the superiority of their position.
In reality, most good evaluators are familiar with all four categories and borrow from each as the need arises.