
The needlework of evaluating CSCL systems: An Evaluand Oriented Responsive Evaluation Model


Iván M. Jorrín-Abellán, Center for Instructional Research and Curriculum Evaluation, University of Illinois at Urbana-Champaign. 190 Children's Research Center, 51 Gerty Drive, Champaign, IL 61820, jorrin@uiuc.edu

Robert E. Stake, Center for Instructional Research and Curriculum Evaluation, University of Illinois at Urbana-Champaign. 190 Children's Research Center, 51 Gerty Drive, Champaign, IL 61820, stake@uiuc.edu

Alejandra Martínez-Monés, Department of Computer Science, University of Valladolid. Escuela Técnica Superior de Ingeniería Informática, Campus Miguel Delibes, 47011 Valladolid, Spain, amartine@infor.uva.es



Abstract: This article presents the CSCL Evaluand Oriented Responsive Evaluation Model, an evolving evaluation model, conceived as a “boundary object”, to be used in the evaluation of a wide range of CSCL systems. The model relies on a responsive evaluation approach and tries to provide potential evaluators with a practical tool to evaluate CSCL systems. The article is driven by a needlework metaphor that tries to illustrate the complexity of the traditions, perspectives and practical issues that converge in this proposal.



Introduction: Initial Stitches

In 1927, Werner Heisenberg articulated the uncertainty principle. Roughly speaking, it states that the position and velocity of an object cannot both be measured exactly at the same time, and that the concepts of exact position and exact velocity together have no meaning in nature. Although it applies only at the small scales of atoms and subatomic particles and is not noticeable for macroscopic objects, it can usefully illustrate the uncertainty involved in the evaluation of programs, courses, learning strategies, projects, or technological tools.

Evaluation is intrinsic to human beings, and hence complex and intricate. We are constantly weighing things, decisions, and opinions when deciding whether or not to ask something, do something, and so on. Stufflebeam (1971) and Cronbach (1980) defined the main goal of evaluation as improvement in decision-making. A less utilitarian definition can be found in Stake (2003), who describes the main goal of evaluation as improving understanding of the quality of what we want to evaluate in its particular setting. This definition is readily applicable to the evaluation of CSCL systems.

CSCL is an interdisciplinary field with characteristics that distinguish it from other applications of ICT to learning and/or collaboration. This difference is mainly based on its emphasis on learning, and on its theoretical foundations, which consider knowledge as a learner's construction, promoted by the interaction of learners with their social and physical environment. Koschmann (2002) considers CSCL “a field of study centrally concerned with meaning and the practices of meaning-making in the context of joint activity and the ways in which these practices are mediated through designed artifacts”. Therefore, it can be understood as a practical and theoretical field aiming to provide useful systems and scenarios where people can interact and learn together.

The design and enactment of CSCL systems and scenarios is inherently complex because of the wide mix of disciplinary perspectives engaged. Different groups of teachers, curriculum designers, evaluators, students, and technology developers must work together to implement successful educational settings. As a consequence, the evaluation of these systems has proven to be a new and challenging field. The challenge originates from the combination of the innate difficulty of evaluation and the emergence of CSCL. In this sense, Treleaven (2003) argues that “evaluation in these contexts challenges traditional approaches to evaluation and require new theoretical frameworks to guide analysis and interpretation.”

The different conceptions of the CSCL field, the strong social component that defines it, and the many possible “stitches” that influence the “embroidery” formed by a CSCL educational setting make evaluators work in an uncertain fashion. This fact, together with the rapid growth of the field and the little time and space dedicated to reflection and evaluation of the new practices promoted within it, makes it difficult for researchers, teachers, evaluators and ICT developers to identify the key issues to be taken into account when evaluating a CSCL system. This situation highlights the need for more helpful guidance for potential evaluators. In our view, such guidance needs to be clear, understandable and action-oriented.

In this article we present an evaluation model to be used by CSCL practitioners, called the Evaluand-Oriented Responsive Evaluation Model (CSCL-EREM). The model can be framed within what Lincoln and Guba have called the “fourth generation of evaluation”: evaluators respond to participants instead of measuring them, describing them or judging them. Accordingly, the model is oriented to the activity, the uniqueness and the plurality of the evaluand, promoting responsiveness to key issues and problems recognized by participants at the site (Stake, 2003).

The rest of the paper is driven by an “embroidery-patchwork” metaphor that helps us to better describe our proposal. Accordingly, the second section provides some ideas about the authors' understanding of the CSCL field, as a starting point for the statements formulated hereinafter. The third section presents evidence supporting the need for an evaluation framework in the CSCL field, drawn both from previously existing frameworks in the field and from the evaluation experiences of the authors' research team, GSIC-EMIC. The fourth section is devoted to describing the components of the proposed evaluation model in depth. The article finishes with a set of conclusions and some lines of future work.

Justification: Threading the Needle

Before starting to sew, some preliminary steps are needed. We have to select the right thread of the desired thickness; we also have to decide on the design and the colors to be used. In this section we provide some key issues, some decisions to be made, and some of the threads that will affect the evaluation model we propose in section 4.

Sfard stated in 1998 that “it is hard to present a well-defined, consistent and comprehensive definition of CSCL theory, methodology, findings or best practices. CSCL today necessarily pursues seemingly irreconcilable approaches”. Eight years later, in 2006, Stahl, Koschmann and Suthers still held that “The field of CSCL has a long history of controversy about its theory, methods and definition. Furthermore, it is important to view CSCL as a vision of what may be possible with computers and of what kinds of research should be conducted, rather than as an established body of broadly accepted laboratory and classroom practices.” According to these statements, the boundaries of CSCL are clearly not well defined. Participants in the field hold different conceptions of what CSCL is and should be. This situation led us to define the way we understand the field as a starting point for the evaluation framework we propose in this paper.

The richness and complexity of CSCL lie in its duality: it is at once a theoretical research field and a practical one in which researchers, teachers, developers and evaluators work together. A common trend within both perspectives has to do with the new practices promoted by the use of collaboration supported by technological artefacts. For instance, these practices demand the definition of participatory design processes (Kensing, 2003), grounded in formative evaluation, able to inform developers about the needs that the different stakeholders have in relation to a particular CSCL system.

This general picture of the field crystallizes in at least two evaluation approaches that have traditionally been used in the field. The first one, the educational approach, understands evaluation in CSCL as a specific problem within the broader scope of evaluation in general and educational evaluation in particular. This perspective considers CSCL systems within the educational settings where they are applied; the scope of the evaluation is the educational setting as a whole. Example evaluation questions for this approach can be found in the evaluation process conducted by our research team in an undergraduate computer architecture course based on CSCL principles (Jorrín-Abellán et al, 2006a; Jorrín-Abellán et al, 2006b): “Is the CSCL system promoting the acquisition of new competencies related to collaborative work in the students?” “Is the learning design enhancing students' participation?”. The second one, the technological approach, concentrates on the study of CSCL systems as a specific case of computer-based systems. It aims at the development of system-oriented evaluation methods. For this approach, the scope of the evaluation is the CSCL tool. The educational setting can be taken into account as part of the context when the system is evaluated in real conditions; moreover, it might be completely absent if the evaluation is performed in experimental conditions. For example, questions like “Is the node-based interface of the searching tool adequate for this group of users?” or “Do the collaborative patterns presented help teachers to design real collaborative learning experiences?” drove the evaluations performed by our research team of two tools: Ontoolsearch, an interactive system for ontology-based search of CSCL tools (Vega-Gorgojo et al, 2008), and Collage, a collaborative learning design editor (Hernández-Leo et al, 2006).

The first perspective is typically followed by researchers in the CSCL field and by education practitioners interested in understanding and enhancing the learning/teaching process. Policy makers and institutions would also take this perspective, as it considers the problems raised by the application of CSCL in real contexts. In the same way, system designers and developers are expected to take the second approach.

Although these perspectives could be seen as two separate paths for the study of evaluation in CSCL, implying that a choice between them has to be made, we think they are complementary. This can be seen in the two examples provided above, i.e., the evaluations of Collage and Ontoolsearch. There we followed a technological evaluation approach in which the educational setting was thoroughly taken into account, as both tools were evaluated in real educational settings. However, the experience made us consider the benefits that a flexible evaluation framework integrating the two approaches would bring. With this aim, we started a research process to develop a suitable evaluation framework for CSCL systems.

Looking for the right Needle: Need for an evaluation framework in the CSCL field

Probably the main questions that immediately arise when defining a new evaluation framework have to do with its necessity and with the kind of framework needed. In order to address both questions we followed two different but complementary research pathways. The first is based on the previous work done in the field (top-down approach), while the second draws on the expertise gained, as CSCL practitioners, from the evaluations performed within our research team (bottom-up approach).

The top-down approach helped us to gather evidence confirming our initial suspicions about the need for a CSCL evaluation framework. The first piece of evidence comes from the many traditions/perspectives involved in the field. Most of the time it is difficult to find a common language among the educators, evaluators, computer scientists, psychologists, and engineers involved in the evaluation of a CSCL system, since their understandings of evaluation differ so much. An evaluation framework would contribute to the definition of shared concerns, promoting the mutual understanding needed to better conduct evaluations.

Secondly, the CSCL field seems to be much more ambitious than previous applications of ICT (Stahl et al, 2006), since the social component of learning is one of its cornerstones. This fact poses a challenge to the evaluation of CSCL systems (Treleaven, 2003): the more social relations involved in a particular computer-based situation, the more difficult it is to evaluate. Thereby, CSCL can be seen as a field posing challenging situations for developers, instructional designers and evaluators alike. An evaluation framework would help to guide the evaluation of CSCL systems as holistic situations, bearing in mind that the effects of CSCL cannot be reduced to a single variable; rather, a chain reaction occurs in which each event gives meaning to the next (Salomon, 1995).

Another aspect revealing the need for an evaluation framework is the quick evolution and fragmentation of the ICT enabling CSCL processes. Technology changes so fast these days that by the time the evaluation of a particular tool is finished, ten more are available that could be used for the same purpose. The continuous development of new platforms and tools supporting collaboration or computer-mediated learning calls for deep and systematic evaluation of the systems and of the experiences based on them. An evaluation framework could help determine whether or not a tool is promoting meaning-making practices, recognizing that the primacy of the learning environment should reign over the technical artefact (Nash et al, 2003). In this sense, determining whether real meaning-making practices are taking place requires identifying the criteria that affect the quality of a CSCL system, providing feedback to practitioners on their own practices. In the absence of any evaluation criteria, potential users and developers of CSCL may feel daunted and become discouraged, increasing the likelihood of reverting to more traditional teaching methods (Crawley, 1999).

Further evidence of the need for an evaluation framework can be found in the analysis of the many previous efforts in the field to develop evaluation frameworks (Crawley, 1999; Gutwin & Greenberg, 2000; Cecez-Kecmanovic & Webb, 2000; Pinelle & Gutwin, 2000; Saunders, 2000; Garrison et al, 2001; Gunawardena et al, 2001; Ewing & Miller, 2002; Avouris et al, 2003; Kirschner et al, 2004; Pozzi et al, 2007). Our review of them raises some interesting issues:

a) Some of the analyzed frameworks are too specific to serve as general evaluation frameworks for a wide range of CSCL systems. This is the case of the Object Oriented Collaboration Analysis Framework (OCAF) (Avouris et al, 2003), a framework for the analysis of the interaction process that takes place in CSCL distance problem solving. It is too specific because it is mainly based on log file analysis and its scope is restricted to shared workspaces for problem solving. Likewise, other frameworks such as the Groupware Framework (Gutwin & Greenberg, 2000) or the Framework of the Communities of Inquiry (Garrison et al, 2001) focus on specifics like groupware usability or the tracking of text-based interactions between students and tutors.

b) On the contrary, some other frameworks seem too theoretical and difficult to put into practice. For instance, the Communicating Model of Collaborative Learning (CMCL) (Cecez-Kecmanovic & Webb, 2000) defines a method to classify the linguistic acts used by students in chats and forums. This model could help to evaluate how students create an artefact in collaboration, as it deals with knowledge construction, but it does not answer the evaluation needs of other stakeholders involved in a CSCL system. Moreover, the model is highly theoretical and uses abstractions that practitioners cannot easily understand, making it impractical in several cases. Another example is Rufdata (Saunders, 2000), a framework for evaluation planning that provides a meta-evaluative tool to help experienced evaluators conduct institutional evaluations. Although it is a thoughtful model, it was not designed according to CSCL needs and it is as general as the well-known Stufflebeam CIPP program evaluation model (2000).

c) Another interesting issue affecting a number of the studied frameworks is that they are stakeholder-oriented, either directly or indirectly. Some were intentionally created to be used by particular stakeholders. This is the case of the Groupware Framework (Pinelle & Gutwin, 2000) and the CMCL framework, which were designed to be used by developers. It is also the case of the NSCL evaluation framework (TELL-project, 2003), which proposes different itineraries to be followed depending on who performs the evaluation: a researcher, a teacher, a developer, etc. Other frameworks are stakeholder-oriented in a more indirect way: although not specifically designed for a particular set of stakeholders, their features make them more suitable for teachers, developers or instructional designers. This is the case of the General framework for tracking and analysing learning processes in CSCL environments (Pozzi et al, 2007), which presents a robust way to analyse the learning processes occurring in an online course by tracking the interactions among the actors involved. Although it is an outstanding framework, it can only be used by practitioners involved in online and blended courses who want to evaluate a learning process, usually teachers. Stakeholder-oriented frameworks highlight the idea of adapting evaluation processes to evaluators, and we agree with that; at the same time, they can raise barriers among the different traditions within the CSCL field, assuming, in fact, that there are irreconcilable aspects that prevent someone (depending on who she is) from evaluating something in a way other than the one she is supposed to. Accordingly, a more evaluand-oriented framework would help focus on what we want to evaluate (the evaluand) rather than on the differences among evaluators.

These previous efforts in the field, as well as the issues addressed above, highlight the need for more helpful guidance for potential evaluators.

As mentioned at the beginning of this section, the experience gained in the evaluation of CSCL systems was used to identify the requirements of an evaluation model for CSCL (bottom-up approach). Since the mid-nineties, our research team has been involved in the evaluation of undergraduate courses based on CSCL principles (Martínez-Monés et al, 2006; Jorrín-Abellán et al, 2006b; Jorrín-Abellán et al, 2007), in the evaluation of teaching strategies to promote collaboration in computer-based settings (Martínez-Monés et al, 2006; Jorrín-Abellán, 2007), and in the evaluation of tools and technological systems developed to support CSCL settings (Hernández-Leo et al, 2006; Bote-Lorenzo et al, 2008; Vega-Gorgojo et al, 2008). These evaluations have progressively turned us toward a more qualitative/interpretative evaluation approach, in which the uniqueness and particularity of each evaluated system is the key issue. An outstanding aspect of the evaluations conducted so far is the prominent role played by the participants. Most of the evaluations were designed with the aim of responding to the participants' needs, giving voice to them. In this sense, even in the evaluation of technological tools we have included multiple data gathering techniques and member checking, with the aim of generating sufficiently rich evaluation outcomes.

The issues addressed in this section, arising from the dual approach followed, helped us to draw an initial sketch of the characteristics demanded of a CSCL evaluation framework designed to overcome the difficulties found in the evaluation of CSCL systems. They can be summarized as follows:

  • The traditions/perspectives involved in the field call for a framework flexible enough to address the needs and goals of the many different stakeholders, yet robust enough to provide a common evaluation model shared by the CSCL community.
  • The importance of the social component of learning in the field recommends an evaluation framework oriented to the activity, the uniqueness and the plurality of the evaluand. It should also be sensitive to key issues or problems recognized by people at the site, giving voice to the participants.
  • The many possible evaluands that could be evaluated from diverse traditions reveal the need for a framework proposing many different data gathering techniques.
  • The applicability of the framework, as well as the consensus intended among CSCL practitioners, highlights the need for an evaluand-oriented framework.

According to these features, we have come to believe that the framework should be close to the fourth generation of evaluation (Guba & Lincoln, 1989). This approach builds on Stake's responsive evaluation (2003) and distinguishes four generations in the historical development of evaluation: measurement, description, judgement and negotiation. “Measurement” includes the collection of quantitative data. “Description” refers to the identification of the features of the evaluand. “Judgement” is the assessment of the quality of the evaluand based on a comparison between standards and actual effects, while “negotiation” characterises the essence of responsive evaluation. Within this approach, evaluators respond to participants instead of measuring them (first generation), describing them (second generation), or judging them (third generation). Although the approach was conceived primarily for program evaluation, it constitutes a remarkable rationale for framing an evaluation model for CSCL systems because, compared to most other evaluation approaches, it is oriented more to the activity, the uniqueness and the plurality of the evaluand. Its design develops slowly, with continuing adaptation of evaluation goal-setting and data-gathering as the people responsible for the evaluation become acquainted with the evaluand and its context (Stake, 2003). In this approach, issues are suggested as “conceptual organizers” for the evaluation study, rather than hypotheses or objectives. An issue can be understood as a tension, an organizational perplexity or a problem. Although responsive evaluation may at times seem close to other pluralist-intuitionist evaluation approaches (Worthen, 1990), like Elliot Eisner's connoisseurship (1998) or Michael Scriven's modus operandi method (1991), it differs from them in its essential emphasis on issues, contexts and stakeholders. This approach could be helpful for framing the evaluation of CSCL systems since it is deeply concerned with the standards of the different stakeholders who would perform an evaluation, while highlighting the relevance of contextualizing the evaluands. The use of a responsive approach is also reinforced by the growing interest in interpretative evaluation methods in the CSCL field, reflected in the number of evaluation experiences reported in recent years (see e.g., Suthers, 2006; Koschmann et al, 2005; Ares, 2008). Moreover, a responsive evaluation approach could help to design a CSCL evaluation framework as a boundary object (Star & Griesemer, 1989; Suthers, 2006): a model plastic enough to adapt to the local needs and constraints of the several stakeholders employing it, yet robust enough to maintain a common identity across different CSCL communities and the possible CSCL systems to be evaluated.

This section has shown some of the needles that could be used to start sewing the embroidery entailed in the evaluation of CSCL systems. These needles, together with the threads described in the previous section, constitute the basis of the CSCL evaluation model we propose in the following paragraphs.

The Frame for Needlework: CSCL Evaluand-Oriented Responsive Evaluation Model (CSCL-EREM)

The evaluation of CSCL systems can be seen as “embroidered patchwork”, a form of needlework that involves sewing different pieces together into a larger design. In the past this work was often done in groups around a frame; the frame thus becomes the tool that allows seamstresses to join the small pieces created in advance. This metaphor describes the sort of CSCL evaluation model we propose in this paper. We deliberately use the word “model” instead of “framework”, as our proposal is conceived as a flexible and evolving boundary object that could be completed at any time by any practitioner. We do not want you to buy our embroidery; we only want to provide a set of aspects that could help you find the most suitable needle and threads to create your own.
Figure 1. CSCL-EREM Components

The CSCL Evaluand-Oriented Responsive Evaluation Model (CSCL-EREM) is a framework intended to help in the evaluation of CSCL programs, innovations, learning and teaching resources, teaching strategies, tools, and in CSCL institutional evaluations. The aim of the model is to provide clear, understandable and action-oriented guidance to CSCL practitioners involved in the evaluation of CSCL systems. It is deeply focused on the different evaluands that could be evaluated in a CSCL setting, and it is framed within the responsive evaluation approach. Accordingly, the model is oriented to the activity, the uniqueness and the plurality of the evaluand to be evaluated, promoting responsiveness to key issues and problems recognized by participants at the site. As can be seen in Figure 1, the model's core parts are: three facets (Perspective, Ground and Method) that summarize characteristics that could be taken into account while conducting an evaluation of a CSCL system; four question-oriented practical courses (pathways) according to the possible evaluands (1st course: evaluating a CSCL program, innovation, course; 2nd course: evaluating a CSCL tool; 3rd course: evaluating a CSCL teaching strategy/learning resource; 4th course: evaluating a CSCL project); a representation diagram aimed at helping evaluators plan an evaluation; and, finally, a set of recommendations for writing the report of an evaluation that uses the model. Although we propose an ambitious model, it does not try to discover anything new in the CSCL field nor to “reinvent the wheel”. The aim of this work is to provide clear and practical guidance to those CSCL practitioners who are novices in evaluation, by proposing a particular organization of the complexity of the field. In this way, the model can be interpreted as an effort to minimize the evaluation uncertainty discussed at the beginning of this article.

Facets of the model

As briefly mentioned before, the first component of the model brings together some of the aspects that can be studied in the evaluation of a CSCL system. We have grouped them into three facets.

The first facet is called Perspective. We understand it as the point of view from which an evaluation process can be designed and carried out. Its emphasis is on the main goal of the evaluation, and it includes the interests, objectives or expected outcomes of the evaluator or group of evaluators. The main goals of a CSCL evaluation can be: to improve the educational practice; to improve the design of a tool; to monitor the progress of something within a CSCL system; or to support a research process.

The second facet, called Ground, can be defined as the state of the environment in which a CSCL system exists. It can be considered the context in which an evaluand takes place or for which it is intended. Within this facet we have included some aspects that should be taken into account in an evaluation: e.g. the characteristics of the evaluation we want to perform (extension, number of evaluators, experience in evaluation, transdisciplinarity of the evaluation group if any, external help, etc.), the main features of the participants (number of students, learning style of the students, students' previous knowledge, teaching style, teacher's knowledge in the use of ICT, developers' background, developers' evaluation experience, etc.) or the features of the setting in which we are going to evaluate (climate, grade, extension, etc.) (see Figure 3).

The third facet, Method, refers to the sequence of steps that leads the evaluation process, involving reasoning, observations, data collection, data processing, analysis and interpretation. The sorts of evaluands that can be evaluated in these scenarios differ greatly; because of this, the model proposes many different data gathering techniques, not only naturalistic inquiry or qualitative research ones, with the aim of becoming an umbrella model in which different traditions and ways of evaluation can coexist. The model encourages the use of mixed data gathering techniques as well as a variety of informants, in order to provide multiple perspectives that enrich the evaluation process. The model includes five different evaluation approaches from which an evaluation could be conducted: Case Study methods, Action Research methods, Usability methods, Human Computer Interaction methods and Interaction Analysis methods. In this sense the model proposes a profuse set of data gathering techniques such as observations, interviews, expert reviews, costing techniques, heuristics, cognitive walkthroughs, social network analysis or feature inspections (see Figure 3).
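
To make the three facets more tangible, the following Python sketch models them as a small data structure that an evaluator might fill in when planning an evaluation. It is purely illustrative: the class and field names are our own assumptions, not part of the published CSCL-EREM.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: class and field names are our own
# assumptions, not a published CSCL-EREM specification.

@dataclass
class Perspective:
    """Point of view from which the evaluation is designed and carried out."""
    main_goal: str  # e.g. "improve the educational practice"
    interests: List[str] = field(default_factory=list)
    expected_outcomes: List[str] = field(default_factory=list)

@dataclass
class Ground:
    """Context in which the evaluand takes place or for which it is intended."""
    evaluation_features: List[str] = field(default_factory=list)   # extension, number of evaluators...
    participant_features: List[str] = field(default_factory=list)  # learning styles, ICT knowledge...
    setting_features: List[str] = field(default_factory=list)      # climate, grade, extension...

@dataclass
class Method:
    """Sequence of steps that leads the evaluation process."""
    approach: str  # e.g. "Case Study", "Action Research", "Usability"...
    techniques: List[str] = field(default_factory=list)  # observations, interviews, SNA...

@dataclass
class EvaluationDesign:
    """An evaluation plan bundling the three facets around an evaluand."""
    evaluand: str
    perspective: Perspective
    ground: Ground
    method: Method
```

An evaluator following the first course might, for instance, instantiate EvaluationDesign with the goal "improve the educational practice" and case study techniques; the point of the sketch is only that the three facets jointly describe one evaluation.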

Courses

In order to bridge the gap between theory and practice, the model proposes four courses according to the different evaluands that could be evaluated in a CSCL system. Each itinerary is formed by a set of questions that epitomize the aspects included in the facets described above, helping evaluators to recognize relevant issues that could affect their evaluand. The courses are: Evaluation of CSCL programs, innovations, courses; Evaluation of CSCL tools; Evaluation of teaching strategies/learning resources to promote collaboration; and Evaluation of CSCL projects. The CSCL-EREM provides not only different question-oriented paths but also real examples of evaluations performed using them. The following excerpt from the first course illustrates the way in which the model provides practical guidance and recommendations to evaluators: “What are the data gathering techniques you are going to use? Practical Recommendation: You should use easy techniques that provide data exactly on the things you want to assess or improve. In order to select the techniques suitable to your interests, you must be aware of their limitations and affordances. If you don't have external support, select at most two or three techniques because of possible data saturation. You should only use techniques in which you have enough experience. Otherwise, if you have support from an evaluation group, you can select more, and more complex, techniques. The basic design can combine two interpretative techniques, such as observation, interview, focus group, or biography, with a more quantitative one like sociograms/network analysis, log file analysis or web-based questionnaires.” The first course is expected to be followed by those CSCL practitioners who want to make sense of CSCL teaching practices. The second course could be used by those who contribute to the development of technologies supporting CSCL practices. The third course is expected to be used by CSCL practitioners deeply concerned with particular aspects of CSCL learning practices, while the fourth is devoted to those practitioners, institutions and researchers who want to evaluate the quality of a CSCL project. The differences among them can be seen, for instance, in the recommendations provided by the model on the data gathering techniques to be used, the selection of informants and the contextual issues that most affect each course.
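
The quoted recommendation is concrete enough to be expressed as a small validation routine. The Python sketch below encodes our reading of it; the function name, category sets and warning texts are illustrative assumptions rather than part of the model.

```python
# A minimal sketch of the technique-selection advice quoted above.
# Categories and thresholds paraphrase the recommendation; all names
# are our own illustrative assumptions.

INTERPRETATIVE = {"observation", "interview", "focus group", "biography"}
QUANTITATIVE = {"sociogram/network analysis", "log file analysis",
                "web-based questionnaire"}

def check_technique_selection(techniques, has_external_support, experienced_in):
    """Return warnings for a proposed set of data gathering techniques."""
    warnings = []
    chosen = set(techniques)
    # Without external support, two or three techniques at most
    # (risk of data saturation).
    if not has_external_support and len(chosen) > 3:
        warnings.append("Without external support, select at most two or "
                        "three techniques to avoid data saturation.")
    # Only use techniques you have enough experience with.
    inexperienced = sorted(chosen - set(experienced_in))
    if inexperienced:
        warnings.append("Lacking experience in: " + ", ".join(inexperienced))
    # The basic design mixes interpretative and quantitative techniques.
    if not (chosen & INTERPRETATIVE) or not (chosen & QUANTITATIVE):
        warnings.append("Consider combining interpretative techniques with "
                        "a more quantitative one.")
    return warnings

# Example: a teacher evaluating her own course without external support.
print(check_technique_selection(
    ["observation", "interview", "web-based questionnaire"],
    has_external_support=False,
    experienced_in=["observation", "interview", "web-based questionnaire"]))
# -> []  (the proposed design satisfies the three checks)
```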

Representation Diagram

Small management artefacts sometimes help in planning an evaluation, thus contributing to its quality. The CSCL-EREM proposes a representation diagram that helps evaluators plan a CSCL evaluation in a practical and contextualized way. This diagram is deeply inspired by the one proposed by Stake (2006) to represent case studies. Figure 2 shows the representation diagrams of two evaluations that follow the first and second courses respectively. The first (a) illustrates the representation diagram we used to plan the evaluation of NNTT, a blended undergraduate course on ICT for preservice teachers. More details on the NNTT course are described in Jorrín-Abellán (2007). The representation shows relevant aspects considered within the three facets of the model, as well as a brief schedule of the evaluation according to the data gathering techniques, the informants and the supportive technologies used. Figure 2.b shows the representation diagram of the aforementioned evaluation of Ontoolsearch (Vega-Gorgojo et al, 2008). Both diagrams represent, in the lower right side of the circle, the issues that guided the evaluation. These issues serve as conceptual organizers of the evaluation, helping evaluators to focus on the desired tensions of the evaluand.
Figure 2.a CSCL-EREM Practical examples
Figure 2.b CSCL-EREM Practical examples
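
As an illustration of how the contents of such a diagram could be persisted (for instance, by the web tool mentioned later), the following sketch stores the main elements of a Figure 2.a-like plan, i.e., the facets, the issues, and the schedule, as JSON. The schema is entirely our own assumption: the CSCL-EREM does not prescribe any file format, and the field values shown are simplified.

```python
import json

# Hypothetical serialization of a representation diagram (cf. Figure 2.a).
# The schema and the values are our own illustrative assumptions.
diagram = {
    "course": "1st: Evaluating a CSCL program, innovation, course",
    "evaluand": "NNTT blended undergraduate course on ICT",
    "facets": {
        "perspective": {"main_goal": "improve the educational practice"},
        "ground": {"participants": "preservice teachers",
                   "setting": "blended course"},
        "method": {"approach": "Case Study",
                   "techniques": ["observation", "focus group",
                                  "web-based questionnaire"]},
    },
    # Issues act as conceptual organizers (lower right side of the circle).
    "issues": ["Is the learning design enhancing students' participation?"],
    # Brief schedule: technique, informants and supportive technology.
    "schedule": [
        {"phase": "course enactment", "technique": "observation",
         "informants": "students", "supportive_technology": "field notes"},
    ],
}

print(json.dumps(diagram, indent=2))
```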

Recommendations for writing the final report

The end product of an evaluation is expected to be a report. It constitutes the joint construction that emerges as the result of the evaluation; its synthesis. The effort required to construct it often goes beyond that of conducting the evaluation itself: as evaluators we are asked not only to provide results but to disseminate them in the best way. Accordingly, the model also includes a set of general recommendations, emerging from practice, on how to manage the final report of the evaluation of a CSCL system. For instance, the feedback from responsive evaluation studies is expected to come in forms and language attractive and comprehensible to the various audiences. Thus, it is advisable to consider the report in the early stages of the evaluation, in order to decide the kinds of reports to be made. Narrative portrayals and textual testimony will be appropriate for some readers, social network analyses for others, or even regression analyses for others. A number of recommendations provided by the model deal with procedural aspects that could help evaluators give insight to their audiences. For example, it is common for responsive evaluation feedback to occur early and throughout the evaluation period, so evaluators should plan the writing of specific reports for concrete audiences, which sometimes helps to refine the evaluation issues. Other critical aspects such as advocacy, credibility and triangulation are also taken into account in this final component of the model.

As a summary of the components described in this section, we provide Figure 3. It shows the “complex embroidery” formed by the interrelations among the aspects that make up each of the previously described facets. Although it does not intend to represent the complete set of aspects that should be included in the evaluation of a CSCL system, it reveals its complexity. Figure 3 also constitutes the front-end of the web tool we are currently developing with the aim of providing practical guidance to those putting the CSCL-EREM into practice. It also illustrates the metaphor followed throughout this article, reinforcing the idea that we, CSCL evaluators, should create our own embroidery by highlighting the threads (or even adding new ones) that best suit our purpose, trying to select the best needle to do so.

Figure 3. CSCL-EREM facets

Conclusions and Future Work

In this article we have presented the CSCL Evaluand-Oriented Responsive Evaluation Model, together with its theoretical and practical justification. It is conceived as a boundary evaluation model that could be used to evaluate a wide range of CSCL systems. The model relies on a responsive evaluation approach and tries to provide potential evaluators with a practical tool for showing evidence of how things are working in a particular CSCL system. Some of the advantages of the model can be summarized as follows: a) The CSCL-EREM can promote mutual understanding among the different backgrounds and perspectives traditionally involved in the evaluation of CSCL systems, as it provides evaluation criteria that could affect any CSCL system. b) The model can help guide the evaluation of CSCL systems as holistic and interconnected situations, showing that the effects of CSCL systems cannot be reduced to a single variable. In this sense, it could be helpful since it stresses the necessity of conducting evaluand-oriented evaluations instead of confining evaluations to the particular field of whoever is evaluating. c) The model is a practical tool that provides question-oriented evolving courses, and real examples that could help to conduct an evaluation. d) The model also aims to help in the planning stage of the evaluation, since it provides a representation diagram to organize the steps to be followed in the evaluation of a CSCL system in an issue-driven fashion, as well as a set of recommendations for writing the final report of an evaluation.

We are currently working on providing access to the CSCL-EREM as a web-based tool, together with a repository of CSCL systems evaluated following it. An evaluator would then select one of the courses and the tool would guide her through the evaluation process proposed by the model. Given that we are not proposing a complete and prescriptive evaluation model, and that it has adequately answered the needs of the previous evaluations conducted in our research team, we deeply believe that a suitable way to meta-evaluate it is by generating a community of practitioners using it. The feedback received from users will then contribute to improving and refining the model, consolidating the idea of the evolving evaluation model presented in this work.

References