Maruja De Villa Lorica
Paper written in Fall 2009
Definition of Evaluation
Trochim (2006) defines evaluation as the systematic assessment of the worth or merit of some object. He adds that evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object, where the object could be a program, policy, technology, person, need, or activity. Evaluation involves collecting and sifting through data and making judgments about the validity of the information and of inferences derived from it, whether or not an assessment of worth or merit results.
More specifically, evaluation identifies and gathers data about specific services, programs, or activities; establishes a set of criteria by which success can be assessed; and determines the quality of the service or activity and the degree to which it accomplishes stated goals and objectives (Van House et al., 1990, cited in McClure 1994). Matthews (2007) suggests that the evaluation process includes assessing library users and nonusers, the physical collection, resources, and services.
Purpose of Evaluation
A library system and its components are evaluated for several reasons. According to Powell (2006), evaluation is done to account for the use of resources, describe impact, increase efficiency, support planning activities, and support decision-making. In addition, Powell notes that evaluation is conducted to determine how clients or beneficiaries are affected by a program, fulfill grant requirements, decide whether to continue or terminate programs, document program history, and provide feedback, among other purposes.
Nitecki (2004) also notes that a frequently cited reason for conducting a program evaluation is to fulfill a requirement for grant funds. Grant evaluation is required for grant accountability or advocacy to government or other institutional funding officials.
McClure (1994), Bawden (1990), and Wilson all share the view that data resulting from performance evaluation can be used to assess how well a system meets its objectives, to justify the use of resources, and to provide a basis for future funding requests. Evaluation contributes to making informed decisions and to justifying services. Kebede (1999) likewise states that data from evaluation activities can be used to justify worth, value, and resources utilized; improve competitiveness in obtaining financial support; and enhance internal efficiency and the services provided to library users.
According to Trochim, most evaluations are intended to provide "useful feedback" to various audiences, including sponsors, donors, client groups, administrators, staff, and other stakeholders. Trochim notes, however, that the relationship between an evaluation and its impact is not a simple or straightforward one. Studies that seem important may not immediately influence short-term decisions, and studies that seem to have no influence at first can have a delayed impact when the right conditions arise. Still, Trochim holds that the major goal of evaluation is to influence decision-making or policy formulation through the provision of empirically driven feedback.
Data Collection Methods
Nitecki (2004) observes that techniques for gathering data for program evaluation in library and information science (LIS) are drawn from both qualitative and quantitative research methods. The methods most commonly used by library evaluators include surveys, interviews, observation, and counting. Sources of library program information comprise written records, computer-generated counts or transaction logs, and observations or responses that have been self-reported or collected from participants or observers.
Hernon and Nitecki (1999) find merit in multiple data collection methods, since such methods can produce a wider range of research evidence, offer depth and insight, and allow cross-validation of the findings. Nitecki (2004) adds that while there is no single recommended method for gathering data, experts recommend, and experience confirms, that multiple techniques used in evaluation contribute richer and complementary information toward making more informed decisions. Multiple data collection techniques provide concurring evidence that increases the validity and credibility of the findings, as well as cross-validation of the results (Fletcher et al. 2006).
Gajda and Jewiss (2004) state that narrative and/or numerical approaches can document the effectiveness of a program's activities and services. Individual or focus group interviews, open-ended survey questions, and observations of the program in action are some methods that can be used to gather narrative information. Gajda and Jewiss further state that information about a program's quality can be gathered using closed-ended survey questions, such as the Likert scales commonly used to obtain numerical ratings from survey respondents about the quality of a program.
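As an illustration of the numerical approach described above, Likert-scale ratings can be summarized with simple descriptive statistics. The question wording, the ratings, and the "favorable" cutoff in this sketch are hypothetical, not drawn from the sources cited:

```python
# Summarize hypothetical Likert-scale responses (1 = strongly disagree
# ... 5 = strongly agree) for one library-program quality question.
from statistics import mean, median
from collections import Counter

# Hypothetical responses to "The library's reference service meets my needs."
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

distribution = Counter(responses)            # respondents per rating value
avg = mean(responses)                        # average rating
mid = median(responses)                      # median rating
favorable = sum(r >= 4 for r in responses) / len(responses)  # share rating 4 or 5

print(f"mean={avg:.1f}, median={mid}, favorable={favorable:.0%}")
print("distribution:", dict(sorted(distribution.items())))
```

Reporting a distribution alongside the mean matters for Likert data, since the same average can hide very different patterns of agreement and disagreement.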
Library professionals use survey methodologies, whether mailed, telephoned, or, more recently, distributed via the Web or by electronic mail (Nitecki 2004). A survey is a means of collecting information that is available from no other source, when the information must come directly from people: descriptions of their values, attitudes, habits, and background characteristics (Jerabek et al. 2002). Jerabek et al. add that once a survey has been designed in print, it can easily be converted to an electronic format. The simplest electronic format is e-mail based. Using a web-based form, surveys can be filled out online; when the form is submitted, the results are sent to a predetermined e-mail address.
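The web-form workflow described above can be sketched in a few lines: submitted form data is parsed and packaged as an e-mail addressed to a predetermined mailbox. The field names and address are hypothetical, and actual delivery (e.g., via smtplib) is omitted:

```python
# Sketch: turn a submitted web-survey form into an e-mail message
# addressed to a predetermined evaluator mailbox. Field names and the
# address are hypothetical; sending the message is out of scope here.
from email.message import EmailMessage
from urllib.parse import parse_qs

EVALUATOR_ADDRESS = "library-evaluation@example.edu"  # hypothetical

def form_to_email(raw_form_data: str) -> EmailMessage:
    """Build an e-mail summarizing one web-survey submission."""
    fields = parse_qs(raw_form_data)  # "q1=4&q2=agree" -> {'q1': ['4'], ...}
    body = "\n".join(f"{name}: {', '.join(values)}"
                     for name, values in fields.items())
    msg = EmailMessage()
    msg["To"] = EVALUATOR_ADDRESS
    msg["Subject"] = "New library survey response"
    msg.set_content(body)
    return msg

# One submission from a hypothetical two-question survey:
message = form_to_email("satisfaction=4&comments=More+evening+hours")
print(message["To"])
print(message.get_content())
```

Keeping the form-to-message step separate from delivery makes the handler easy to test and lets the same survey be re-pointed at a different evaluator address.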
Hiller (2001) affirms that surveys offer the benefit of obtaining quantifiable data from large populations at a reasonable cost, but they should be employed in the right situation. Surveys should be designed from the user's perspective; questions should be short, simple, and clear to the user. Likewise, respondents should be properly motivated to complete the survey.
In designing surveys, Gajda and Jewiss (2004) recommend revisiting the program's outcomes and indicators in order to develop questions that address what researchers want to know and what information they want to capture.
References
Bawden, D. 1990. User-oriented evaluation of information systems and services. Aldershot, England: Gower.
Gajda, R. and J. Jewiss. 2004. Thinking about how to evaluate your program? These strategies will get you started. Practical Assessment, Research & Evaluation 9(8). URL: http://PAREonline.net/getvn.asp?v=9&n=8 (viewed January 22, 2010).
Hernon, P. and D. Nitecki. 1999. Evaluation Research: Editorial. Journal of Academic Librarianship 25(6):429.
Hiller, S. 2001. Assessing user needs, satisfaction, and library performance at the University of Washington Libraries. Library Trends 49(4): 605-625.
Jerabek, J. A., L. M. McMain, and J. L. Van Roekel. 2002. Using needs assessment to determine library services for distance learning programs. Journal of Interlibrary Loan, Document Delivery & Information Supply 12(4).
Kebede, G. 1999. Performance evaluation in library and information systems of developing countries: A study of the literature. Libri 49:106–119. URL: http://www.librijournal.org/pdf/1999-2pp106-119.pdf (viewed February 13, 2010).
Kelsey, M. E. 2006. Education reform in Minnesota: Profile of learning and the instructional role of the school library media specialist. School Library Media Research 9. URL: http://www.ala.org/ala/mgrps/divs/aasl/aaslpubsandjournals/slmrb/slmrcontents/volume09/kelsey_educationreform.cfm (viewed February 16, 2010).
McClure, C. R. 1994. User-based data collection techniques and strategies for evaluating networked information services. Library Trends 42(4):591-607.
Matthews, J. R. 2007. The evaluation and measurement of library services. Westport, CT: Libraries Unlimited.
Nitecki, D. A. 2004. Program evaluation in libraries: Relating operations and clients. Archival Science 4 (1-2):17-44.
Powell, R. R. 2006. Evaluation Research: An overview. Library Trends 55(1):102–120.
Trochim, W. M. K. 2006. The Research Methods Knowledge Base. URL: http://www.socialresearchmethods.net/kb/ (viewed August 28, 2009).
Wilson, T. Evaluation strategies for library information systems. URL: http://informationr.net/tdw/publ/papers/evaluation85.html (viewed August 27, 2009).