Evaluating Adaptive E-Learning System

Examining the Evaluation and Integration of Adaptive E-Learning Systems

by Aasim Zafar*,

- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659

Volume 2, Issue No. 2, Nov 2011

Published by: Ignited Minds Journals


ABSTRACT

This paper discusses the general approach of an evaluation study for an adaptive e-learning system. The aims, advantages and limitations of such evaluations are discussed. The evaluation methodology is exemplified by discussing the empirical evaluation of the eLGuide (e-Learning Guide) framework with the objective of examining its usefulness and effectiveness and hence its possible integration with a web-based educational system. The same methodology could be employed for many other adaptive systems and would enable researchers to uncover deficits and failures of the system, and to justify and demonstrate the usefulness of the adaptive features incorporated in the system.

KEYWORDS

adaptive e-learning system, evaluation study, advantages, limitations, evaluation methodology, eLGuide framework, usefulness, effectiveness, integration, web-based educational system

1. INTRODUCTION

Adaptive e-learning ensures individualized teaching: every participating student receives learning materials matching their current knowledge level and learning preferences. An adaptive system achieves this adaptation by changing its characteristics automatically according to the learner's needs. Such systems claim to achieve learning goals efficiently and effectively. The majority of adaptive e-learning systems employ methods like machine learning, statistical reasoning, rule-based inference, and combinations of these. Chrysafiadi & Virvou (2010) emphasize the necessity of empirical studies for evaluating systems that employ fuzzy and AI techniques in user modeling and are based on human-computer interaction. Further, the assessment of the usefulness of such an approach and the justification of the efforts made may be achieved with the help of empirical evaluation (Weibelzahl, 2005). Empirical studies are also able to identify errors in AI systems that would otherwise remain undiscovered. According to Dix et al. (1998), the main goals of software evaluations may be enumerated as:

  • Assessing the extent of the system's functionality
  • Assessing the effect of the user interface (ease of use)
  • Identifying specific problems with the system

Evaluation helps to improve the system by uncovering unexpected system behavior and by identifying differences between user expectations and system design. Totterdell and Boyle (1990) suggested the following step-wise procedure for the evaluation of a software system:

  • Identifying the objectives of the evaluation
  • Specifying experimental design
  • Collecting results
  • Analyzing data
  • Drawing conclusions

While formal correctness, verification, and testing are important methods in software engineering, empirical evaluation is considered an important complement that can improve AI techniques considerably. Moreover, the empirical approach is an important way both to legitimize the efforts spent and to give evidence of the usefulness of an approach. The evaluation of adaptive e-learning systems is of special interest because their potential lack of consistency has been criticized (Benyon, 1993). The flexibility of adaptive e-learning systems poses a major threat to usability, mainly learnability and memorability (Woods and Warren, 1996). With adaptive systems, which change their behavior over time, it becomes difficult for users to remember the functions and commands. Obviously, formal techniques such as verification cannot address such subjective psychological issues.


Usability has been used as an evaluation criterion for adaptive systems. A system is said to be usable if it allows the user to achieve their task with effectiveness, efficiency and satisfaction in a given context of use. To measure the usability of an adaptive system we have to define criteria for each dimension. When evaluating adaptive e-learning systems, one of the important criteria, namely learning gain, is evaluated empirically. Other criteria may include domain knowledge, accuracy, and duration of interaction. We have proposed the eLGuide (e-Learning Guide) framework, which aims at guiding students to achieve their learning goals by providing a personalized navigation path matching their current knowledge level. Like all educational software, eLGuide needs to be evaluated in experimental settings before applying the prototype in real web-based learning environments (Phillips & Gilding, 2003). In this paper, we exemplify the empirical evaluation of the proposed eLGuide prototype to assess user satisfaction, system usability, effectiveness, and the efficiency of the implemented adaptive methods (Chin, 2001).

2. RELATED WORKS

A review of the available literature in these fields points to the lack of standard methodologies that can be followed to develop the evaluation process. Many researchers, for example Kinshuk et al. (2000) and Graf et al. (2010), advocate a two-phase evaluation of adaptive e-learning systems. The aim of the first phase, usually called formative evaluation, is two-pronged: to highlight the required improvements in procedures and interface design, and to evaluate the usability and effectiveness of the system. The second phase, called summative evaluation, mainly aims to determine the effectiveness of the system in real environments. For both the formative and summative evaluation, data are collected through qualitative and quantitative methods (Wolf, 2007). The results are then interpreted and analyzed with the help of various statistical tools (Barrow, 2008; Norusis, 2009).

3. EVALUATION OF ELGUIDE

The prime aim of evaluating eLGuide is to estimate the influence of its adaptive features on the learning outcomes of students and to verify its usability and functionality. The evaluation also focuses on identifying pitfalls and outlining benefits of the framework, so that it can be improved and employed in web-based learning systems. In the eLGuide evaluation, an experimental research methodology with a control-group design is adopted to study the effects of the adaptive features on the learning process of the students. Accordingly, we follow both formative and summative evaluation and combine quantitative and qualitative approaches to study the effects of the adaptive features on the learning outcomes of the students.

3.1 Formative evaluation

The aim of formative evaluation is to gather information related to system performance so that further amendments and improvements may be made. It is important to identify potential users' (facilitators' and students') problems and concerns. The formative evaluation of the eLGuide prototype was performed in two main stages. In the first stage, the various modules and links were tested in order to find possible errors and rectify them. In the second stage, several participants (students and teachers who were intended to take part in the learning process and the evaluation of the prototype) tested and worked with the system and gave valuable comments and suggestions, which helped in improving the system and making the user interface more effective and user-friendly. The comments and suggestions from the participants were collected and, whenever needed, used to modify the prototype accordingly.

3.2 Summative evaluation

The aim of summative evaluation of educational systems is to determine the impact provided by the system. This means that the summative evaluation of eLGuide should assess the usefulness and benefits of the overall approach. Such an evaluation is appropriate once the main development is completed and a stable prototype exists.

3.2.1 The Experimental Study

A pre-test was conducted to assess the pre-knowledge of the students before they started using the eLGuide system in their learning process. eLGuide aims at guiding students to achieve their learning goals by providing a personalized navigation path matching their current knowledge level. Therefore, it was necessary to assess the effects of the adaptive and advising features of eLGuide on the students' learning process and to compare the assessment results with the case in which these features were absent. The experimental study involved two groups of students: a control group and an experimental group. The control group students were provided with a special experimental version of eLGuide without adaptive features. The experimental group worked with the eLGuide prototype with adaptive features, which guided the students in achieving their learning goals by personalizing their navigation paths.
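To make the quantities collected in this design concrete, the sketch below shows one possible way of representing each participant's data for the analysis that follows. The field names and types are illustrative assumptions, not taken from the eLGuide implementation.

```python
# A minimal sketch (not from the paper) of how each participant's session data
# might be represented for the group comparison described below.
from dataclasses import dataclass


@dataclass
class SessionRecord:
    student_id: str
    group: str                 # "control" or "experimental"
    pre_score: float           # pre-test score, before working with eLGuide
    post_score: float          # post-test score, after completing the course
    completion_minutes: float  # time taken to complete the course
    navigation_steps: int      # overall number of navigation steps
    concept_repetitions: int   # number of times concepts were revisited

    @property
    def learning_gain(self) -> float:
        # Learning gain = post-test score minus pre-test score.
        return self.post_score - self.pre_score
```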


INTERPRETATION AND ANALYSIS

The students' scores in the pre- and post-tests and individual traces of the students' interaction with the system were analyzed. Pre-test scores were used as an indicator of the students' pre-knowledge level before they participated in the experimental study. The pre-test was conducted on a topic different from those studied with eLGuide (i.e., SQL). Pre-test scores, post-test scores, and learning gains (the differences between post-test and pre-test scores) were used to compare the students in the Control Group and the Experimental Group and to check for any significant difference due to the availability of the adaptive and advising features. In addition to pre-test and post-test scores, all interactions of the students working with the eLGuide system were recorded and analyzed to compute several parameters that we considered important for comparing the performance of the two groups. The most important of these are the course completion time, the overall number of navigation steps, and the number of concept repetitions during the course. Based on the results of similar experiments (Barrow, 2008; Mitrovic, 2003), we expected that completion time, number of steps and concept repetitions should decrease for the Experimental group (using the adaptive version of eLGuide). We used two statistical techniques for this analysis:

  • T-test: used to compare performance parameters such as pre-/post-test scores, completion time, number of steps, and concept repetitions between the experimental and control groups, as in similar projects (Norusis, 2009; Wolf, 2007; Barrow, 2008).
  • Effect Size: used by many researchers in the field of computer-based educational systems to compare learning gains (Mayer, 2003; Wolf, 2007). A minimal computational sketch of both techniques is given after this list.
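The following sketch illustrates, assuming the hypothetical SessionRecord structure given earlier, how the two techniques could be computed over the learning gains of the two groups. It is a sketch only; the exact test variants and tools used in the original study are not reproduced here.

```python
# A minimal sketch of the two analyses named above: an independent-samples
# t-test and Cohen's d as the effect size, computed over the learning gains.
from statistics import mean, stdev

from scipy import stats  # provides the independent-samples t-test


def compare_groups(records):
    exp = [r.learning_gain for r in records if r.group == "experimental"]
    ctl = [r.learning_gain for r in records if r.group == "control"]

    # Independent-samples t-test on the learning gains of the two groups.
    t_stat, p_value = stats.ttest_ind(exp, ctl)

    # Cohen's d: difference of group means over the pooled standard deviation.
    n1, n2 = len(exp), len(ctl)
    pooled_sd = (((n1 - 1) * stdev(exp) ** 2 + (n2 - 1) * stdev(ctl) ** 2)
                 / (n1 + n2 - 2)) ** 0.5
    effect_size = (mean(exp) - mean(ctl)) / pooled_sd

    return t_stat, p_value, effect_size
```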

The interpretation and analysis of results are discussed below:

  • There was a significant difference between the post-test scores of the two groups, with the Experimental Group scoring higher; this may be attributed to the availability of the adaptive and advising features provided to the Experimental Group students. This result was in line with our expectations.
  • The number of steps for the Experimental group (using the adaptive features) was much smaller than for the Control group (using the non-adaptive version); this significant decrease in students' navigation effort may be attributed to the availability of the adaptive navigation support provided to the Experimental Group students. This analysis confirmed our expectations.
  • The number of concept repetitions was visibly lower for the Experimental group (who used the adaptive features), which indicates that adaptive navigation support reduces users' navigation effort in the course content and guides students to achieve their learning goals more effectively and efficiently.
  • The analysis of effect size indicates a large improvement in learning gains for the students of the Experimental group. This significant improvement in the learning gains of the Experimental group students may be attributed specifically to the availability of eLGuide's adaptive and advising features.

3.2.2 Administration of the students' questionnaire

The students' questionnaire was administered just before the post-test exam. The questionnaire made it possible to collect a large amount of data from the students in a short time. It was designed to reveal the students' opinions and impressions about eLGuide and to compare the responses collected from the two groups in order to examine the effect of the adaptive and advising features. The most important outcomes concluded from the questionnaire responses are summarized below. The better impression reported by the Experimental group respondents (63%) may be attributed to the availability of the adaptive and advising features, which was the only factor differentiating the conditions of the control and experimental groups. The responses to the question related to the eLGuide interface indicate that the system is easy to use. The adaptive and advising part was assessed only by the Experimental group, i.e. the students who worked with eLGuide with the adaptive and advising features. The results show that 76% of the Experimental group students found the adaptive and advising features useful in meeting their learning goals. Several differences were found between the two groups of students. The Experimental group responses appear more positive than the Control group responses regarding issues such as enjoyment while working with the system, self-esteem, ease of use, receiving guidance, and recommending the course to other students. The students from the Experimental group enjoyed studying with eLGuide more than the Control group students did. The results indicate that the Experimental group students were more satisfied than the Control group students. Since the availability of the adaptive and advising features was the sole difference (i.e., the controlled variable) between the two groups, it may be possible to relate the better satisfaction level of the Experimental group students to the availability of the adaptive and advising features of eLGuide.
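For illustration only, the sketch below shows one way such group-wise percentages of positive responses could be tabulated. The response format, question keys, and the 5-point agreement scale are assumptions; the paper does not specify the questionnaire format.

```python
# A minimal sketch of tabulating group-wise percentages of positive responses.
# Each response is assumed to be a dict with a group label and per-question
# ratings on a 5-point agreement scale (illustrative assumption).
from collections import defaultdict


def percent_positive(responses, group):
    """Percentage of respondents in `group` rating each question 4 or 5."""
    counts = defaultdict(lambda: [0, 0])  # question -> [positive, total]
    for resp in responses:
        if resp["group"] != group:
            continue
        for question, rating in resp["answers"].items():
            counts[question][1] += 1
            if rating >= 4:
                counts[question][0] += 1
    return {q: 100.0 * pos / total for q, (pos, total) in counts.items()}
```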

3.2.3 Feedback from facilitators

Knowing the facilitators' opinions, impressions, and comments was very important in the evaluation of eLGuide. Discussions took place with the facilitators during the course and during the interviews conducted with them, while the subject expert was involved right from the beginning of the development of the eLGuide prototype. A group interview with the facilitators was also conducted. The analysis of the feedback showed an overall satisfaction of the facilitators with the various features of the adaptive system.

CONCLUSION

We discussed an experimental study methodology to evaluate the eLGuide framework, with the objective of examining its usefulness and effectiveness and hence its possible integration with web-based educational systems. A combination of different quantitative and qualitative methodologies enabled the examination of the collected data from different perspectives. Two major sources of data were used to address the benefits for the participating students: the students' questionnaire, and the pre- and post-test scores together with individual traces of the students' interaction with the system. The analysis of the students' questionnaires showed a better overall satisfaction for the students who used the adaptive and advising features of eLGuide (the experimental group). Moreover, the analysis of the students' learning gains based on performance parameters such as pre-/post-test scores, course completion time, number of steps, and concept repetitions showed that the learning gains of the experimental group were slightly higher than those of the control group. The results of the experimental study, along with the encouraging feedback from the facilitators, allowed us to conclude that eLGuide is a useful framework which can be employed in web-based educational environments and implemented with web-based learning platforms to support students as well as teachers in a better way.

REFERENCES

1. Barrow, D. K. (2008). Assessing the Impact of Positive Feedback in Constraint-Based Tutors. Thesis, University of Canterbury.

2. Benyon, D. R. (1993). Adaptive systems: a solution to usability problems. User Modelling and User-Adapted Interaction, 3(1), pp. 65-87.

3. Chin, D. N. (2001). Empirical evaluation of user models and user-adapted systems. User Modelling and User-Adapted Interaction, 11, pp. 181-194.

4. Chrysafiadi, K., & Virvou, M. (2010). Modeling student's knowledge on programming using fuzzy techniques. In Proceedings of the Third International Symposium on Intelligent and Interactive Multimedia: Systems and Services, Baltimore, USA, pp. 23-32.

5. Dix, A. J., Finlay, J. E., Abowd, G. D., & Beale, R. (1998). Human-Computer Interaction. Harlow, England: Prentice Hall.

6. Graf, S., Liu, T. C., & Kinshuk (2010). Analysis of learners' navigational behaviour and their learning styles in an online course. Journal of Computer Assisted Learning, 26(2), pp. 116-131.

7. Kinshuk, Patel, A., & Russell, D. (2000). A Multi-Institutional Evaluation of Intelligent Tutoring Tools in Numeric Disciplines. Educational Technology & Society, 3(4), pp. 66-74.

8. Mitrovic, A. (2003). An Intelligent SQL Tutor on the Web. International Journal of Artificial Intelligence in Education, 13, pp. 171-195.

9. Norusis, M. J. (2009). SPSS 17.0 Statistical Procedures Companion. United States: Pearson Education.

10. Phillips, R., & Gilding, T. (2003). Approaches to evaluating the effect of ICT on student learning. ALT Starter Guide 8.

11. Totterdell, P. A., & Boyle, E. (1990). The evaluation of adaptive systems. In Browne, D., Totterdell, P., & Norman, M. (Eds.), Adaptive User Interfaces, pp. 161-194. London: Academic Press.

12. Weibelzahl, S. (2005). Problems and pitfalls in the evaluation of adaptive systems. In S. Y. Chen & G. D. Magoulas (Eds.), Adaptable and Adaptive Hypermedia Systems, Hershey, PA: IRM Press, pp. 285-299.

13. Wolf, C. (2007). Construction of an Adaptive E-learning Environment to Address Learning Styles and an Investigation of the Effect of Media Choice. PhD thesis, RMIT University.

14. Woods, P., & Warren, J. (1996). Adapting Teaching Strategies in Intelligent Tutoring Systems. ITS'96 Workshop on Architectures and Methods for Designing Cost-Effective and Reusable ITSs, Montreal.