A Study on Math Rotter Model for Acknowledge of Intellectual with Technology

Exploring the Impact of Math Rotter Model and Technology on Emotions and Relationships in Robotics

by Mamta*

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 16, Issue No. 9, Jun 2019, Pages 1319 - 1326 (8)

Published by: Ignited Minds Journals


ABSTRACT

In a social setting, this research can help to understand and predict the feelings and emotions of a person under a variety of circumstances. This information can then be used to manage relationships between parents and children, partners, family members, friends, relatives, and co-workers in everyday relationships and social interactions. Here, both the human-human (H-H) and human-machine (H-M) communication patterns of our system will be very helpful. The system will also work well for detecting and identifying criminals, and such an investigation will be of great benefit to the community and the nation. This study represents a step forward in the field of robotics. Currently, robots are configured with a machine-machine (M-M) interface, but this study will also contribute to the machine-human (M-H) interface. If a robot can capture the emotions of a person, then it can act as a person with emotions. This would be a major advance in robotic development and a valuable service to the scientific world as a whole.

KEYWORD

Math Rotter Model, Intellectual, Technology, feelings, emotions, relationships, communication pattern, criminals, community, nation, robotics science development, robots, M-M interface, M-H interface, robotic development, scientific world

INTRODUCTION

From the time of the development of the mainframe computer in 1942, the first four-function calculator in 1967, the microcomputer in 1978, and the graphing calculator in 1985 (Kelly 2003), both mathematicians and mathematics educators have been intrigued by the possibilities offered by technology. However, it was not until the late 1960s that, according to Fey (1984), mathematicians and mathematics educators began to feel that computing could have significant effects on the content and emphases of school-level and university-level mathematics. Among the earliest applications of the new technology to mathematical learning in schools was Computer Assisted Instruction: the design of individualized, student-paced modules that were said to promote a more active form of student learning. Perhaps the most well known is the PLATO project (Dugdale and Kibbey 1980; Dugdale 2007).

The next wave in technology-based approaches to mathematics learning involved programming, in particular in Logo and BASIC. The development of the Logo programming language (Feurzeig and Papert 1968; Papert 1980) was instrumental in this regard. Papert, a mathematician influenced by the theories of Piaget, was interested in the learning activities of young children and how the computer could enhance those activities (see, e.g., Papert 1970, for descriptions of children and junior high school students learning to program the M.I.T. "turtle" computer). In his 1972 article, entitled "Teaching children to be mathematicians versus teaching about mathematics," Papert promoted "putting children in a better position to do mathematics rather than merely learn about it" (Papert 1972). At the time, programming in BASIC was also considered a means for enhancing students' mathematical problem-solving abilities (Hatfield and Kieren 1972), even for students as young as first graders (Shumway 1984).

The arrival of the microcomputer in the late 1970s not only increased the interest in programming activity, but also led to the development of more specialized pieces of software. Some of these specialized software tools were created specifically for mathematics learning (e.g., CABRI Geometry, developed by Laborde 1990, and Function Probe, developed by Confrey 1991), while others were adapted for use in the mathematics classroom (e.g., spreadsheets and computer algebra systems). The microcomputer and the graphing calculator also fed the growth of functional approaches in algebra and interest in multiple representations of mathematical objects (Fey and Good 1985; Heid 1988; Schwartz et al. 1991). However, by the 1990s, technological tools were still not widespread in mathematics classrooms, nor was there an abundance of qualitatively good software available (Kaput 1992). Against this technological scene in mathematics education during the years from the 1960s to the early 1990s, we now examine the 1985 ICMI Study on technology.
The Emergence of Theory from the Integration of Technology within Mathematics Education

In 1985, the first ICMI Study was held in Strasbourg, France, with the theme "The Influence of Computers and Informatics on Mathematics and Its Teaching at University and Senior High School Level." While research on the learning and teaching of mathematics was not a main thrust of the questions addressed by the Study group, it is illuminating to peruse the Proceedings of the Study (Howson and Kahane 1986) for an indirect glimpse at the kind of theories figuring in the discussions of the Study group participants. In the opening general report that synthesized the Study papers and discussions, the editors of the proceedings, Howson and Kahane, emphasized the roles that computers could play in the learning of mathematics, such as "advantages to be derived from the use of computer graphics," "the design of software to encourage the discovery and exploration of concepts," and "the active involvement of students in their own learning through the writing of short programs." The activities of exploration and discovery were particularly pointed to. However, one cannot but be struck by the way in which the papers emphasized the educational potentialities and capabilities of computing technology, such as visualizing, modelling, and programming, with an optimism that was not yet supported by evidence. The papers, mostly of the essay variety, included deliberations on the synergy between mathematics and computers, and considerations of the potentialities and limitations of the computer. The only theoretical discussions that could be said to be present in any of the 11 papers concerned epistemological issues involving the nature of mathematics and that of computer science, and then only at a rather general level. Theorizing and theory on the role of technology in the teaching and learning of mathematics were clearly absent. While not reflected in the ICMI Study papers, theory with respect to technology and its use in mathematics education was nevertheless developing during the 1980s.

BRIEF REVIEW

However, the first examples to emerge tended to be descriptive models of the roles being played by technology rather than research tools for designing learning environments or for testing hypotheses about the possible enhancement of mathematical learning and teaching. These theoretical beginnings focused on specific issues related to integrating technology into education. They had a local feel to them, based in large measure on the characteristics of the technology, which suggested certain uses and forms of activity. As examples of these first steps in theorizing, we briefly discuss the Tutor-Tool-Tutee distinction and the White Box/Black Box idea. With the arrival of the microcomputer and its increasing proliferation, a new framework was developed, which classified educational computing activity according to three modes or roles of the computer: tutor, tool, and tutee (Taylor 1980). To function as a tutor: "The computer presents some subject material, the student responds, the computer evaluates the response, and, from the results of the evaluation, determines what to present next." To function as a tool, the computer requires, according to Taylor, much less in the way of expert programming than is required for the computer as tutor, and can be used in a variety of ways (e.g., as a calculator in math, a map-making tool in geography, and so on). The third mode of educational computing activity, that of tutee, was described by Taylor as follows: "To use the computer as tutee is to tutor the computer; for that, the student or teacher doing the tutoring must learn to program, to talk to the computer in a language it understands." The rationale behind this mode of computing activity was that the human tutor would learn what he or she was trying to teach the computer and, thus, that learners would gain new insights into their own thinking through learning to program.

NOTEWORTHY CONTRIBUTION

A theoretical idea that focused on the interaction between the knowledge of the learner and the characteristics of the technological tool was the White Box/Black Box (WBBB) notion put forward by Buchberger (1990). According to Buchberger, the technology is being used as a white box when students are aware of the mathematics they are asking the technology to carry out; otherwise, the technology is being used as a black box. He argued that the use of symbolic manipulation software (i.e., CAS) as a black box can be "disastrous" for students when they are initially learning some new area of mathematics, a usage that is akin to the Tool mode within the Tutor-Tool-Tutee framework. However, other researchers (e.g., Heid 1988; Berry et al. 1994) have shown that students can develop conceptual understanding in CAS environments before mastering by-hand manipulation techniques. While the WBBB idea is pitched in terms of two extreme positions, others (e.g., Cedillo and Kieran 2003) have taken this notion and adapted it in their development of "gray-box" teaching approaches.
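To make the contrast concrete, here is a minimal sketch using the open-source sympy library (our choice of illustration; Buchberger's discussion predates this tool). The same linear equation is first solved as a black box, then transformed step by step in a white-box (or gray-box) fashion, where each algebraic move is announced by the student.

```python
# Illustrative sketch of the White Box/Black Box distinction, using
# sympy as an assumed modern stand-in for the CAS Buchberger discussed.
import sympy as sp

x = sp.symbols('x')
equation = sp.Eq(3 * x + 5, 20)

# Black-box use: the CAS returns the answer with the mathematics hidden.
print(sp.solve(equation, x))  # [5]

# White-box (or "gray-box") use: the student announces each transformation
# and the CAS merely carries out or checks that single step.
step1 = sp.Eq(equation.lhs - 5, equation.rhs - 5)  # subtract 5 from both sides
step2 = sp.Eq(step1.lhs / 3, step1.rhs / 3)        # divide both sides by 3
print(step1)  # Eq(3*x, 15)
print(step2)  # Eq(x, 5)
```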

MICROWORLDS AND CONSTRUCTIONISM

Papert and Harel (1991) encapsulated the theoretical ideas underlying the educational goals of Logo programming under the name of constructionism; in terms of the Tutor-Tool-Tutee framework, we are now in Tutee mode. Papert and Harel have described constructionism as follows: "Constructionism, the N word as opposed to the V word, shares constructivism's connotation of learning as 'building knowledge structures' irrespective of the circumstances of the learning. It then adds the idea that this happens especially felicitously in a context where the learner is consciously engaged in constructing a public entity, whether it's a sand castle on the beach or a theory of the universe." While admitting in 1991 that the concept itself was in evolution, Papert and Harel provided examples of studies that Papert himself was involved with during the previous 20 years and that fed the early evolution of the idea. Microworlds, such as turtle geometry, were a central component of the theory: "The Turtle World was a microworld, a 'place,' a 'province of Mathland,' where certain kinds of mathematical thinking could hatch and grow with particular ease. The Turtle defines a self-contained world in which certain questions are relevant and others are not... this idea can be developed by constructing many such 'microworlds,' each with its own set of assumptions and constraints. Children get to know what it is like to explore the properties of a chosen microworld undisturbed by extraneous questions. In doing so they learn to transfer habits of exploration from their personal lives to the formal domain of scientific theory construction."

PROPOSED METHODOLOGY

The purpose functions engage students to think mathematically; the process functions aid them once they do so. The purpose functions focus on constructs such as ownership, self-worth, and the use of motivational "real-world" contexts and collaborative learning environments. The process functions include, according to Pea, five categories of examples: "tools for developing conceptual fluency, tools for mathematical exploration, tools for integrating different mathematical representations, tools for learning how to learn, and tools for learning problem-solving methods." Some of this work fed into the development of theories on distributed cognition (Pea 1989) and on situated cognition (e.g., Brown et al. 1989), the latter construct being taken up in a later section of this chapter on situated abstraction.

Theoretical Ideas Emanating from the Literature on Mathematical Learning

Not only did local theorization concerning the use of new technologies in education begin to grow during these years; gradually, links with recently developed theory from the learning and teaching of mathematics were also established. Since exhaustive coverage is not possible, we restrict ourselves to three examples of theoretical ideas involving technological environments during the years leading up to the early 1990s. One idea, rooted in the process-object duality, was the argument that an approach to functions based primarily on programming activities, though valuable for emphasizing process aspects, may be too closely tied to computability to permit a full-blown object conception of functions. The APOS theory, and its element of genetic decomposition, was also used by Repo (1994) in one of the early studies on the pedagogical use of computer algebra. A third approach to the process-object duality was the notion of procept, introduced by Gray and Tall (1991). The authors suggest that "the use of the computer to carry out the process, thus enabling the learner to concentrate on the product, significantly improves the learning experience."
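As a concrete, if anachronistic, illustration of the process-object duality of function (our own example, not one drawn from the studies cited), consider how a programming language lets the same function be treated first as a computational process and then as a manipulable object:

```python
# Process view: f is a recipe that transforms an input into an output.
def f(x):
    return 2 * x + 3

# Object view: f itself is handled as a single, first-class entity that
# can be passed around, mapped over a range, or composed with itself.
values = list(map(f, range(5)))
compose = lambda g, h: (lambda x: g(h(x)))
f_twice = compose(f, f)

print(values)      # [3, 5, 7, 9, 11]
print(f_twice(1))  # f(f(1)) = f(5) = 13
```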

VISUAL THINKING VS. ANALYTICAL THINKING

The interplay between visual and analytical schemas in mathematical activity, and students' tendencies to favour one over the other (Eisenberg and Dreyfus 1986), was a theoretical notion adopted in some of the early research studies involving technology. For example, Hillel and Kieran (1987) distinguished between the two, within the context of 11- and 12-year-olds working in turtle geometry Logo environments, as follows: "By a visual schema we refer to Logo constructions of geometric figures where the choice of commands and of inputs is made on visual cues"; the rationale for choices is often expressed by "It looks like...". "By an analytical schema we refer to solutions based on an attempt to look for exact mathematical and programming relations within the geometry of the figure." These researchers found that the students did not easily make links between their visualizations and their analytical thinking. While research in nontechnology learning situations had disclosed (older) students' preferences for working with the symbolic mode rather than with the graphical, the advent of graphing technology provided the potential for a shift toward valuing graphical representations and visual thinking (e.g., Eisenberg and Dreyfus 1989). These issues would continue to be explored in the years to come.
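The distinction can be made concrete with a short sketch in Python's built-in turtle module (our stand-in for Logo, which the original studies used). The analytical schema derives the turn input from an exact relation; a visual schema would instead adjust the input until the figure "looks right."

```python
# Analytical schema: the turn angle is derived from an exact relation,
# not from visual cues -- the exterior angle of a regular n-gon is 360/n.
import turtle

t = turtle.Turtle()
n = 3  # an equilateral triangle
for _ in range(n):
    t.forward(100)
    t.left(360 / n)  # exactly 120 degrees, by the total turtle trip

# A visual schema, by contrast, would choose the turn input by eye
# ("it looks like 60...") and adjust by trial until the figure closes.
turtle.done()
```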

REPRESENTATIONAL ISSUES

Representational issues were very much a part of the theoretical frames of the early research involving technological environments in mathematics education. However, in much of this research, as well as in some of the research that did not involve technology, there was a lack of clarity as to what was meant by representation (Dreyfus 1991; Hitt 2002; Presmeg 2006). The demand for clarification coming from the new technologies and their representational potential contributed to an effort to outline a unifying theoretical frame for representation. As Kaput pointed out to participants at a conference on representation in 1984, "Some mathematics education researchers have, in response to the need for understanding forms of representation in a particular area, developed local theories...; however, a coherent and unifying theoretical context is lacking" (Kaput 1987, p. 19). He proposed that a concept of representation ought to describe the following five components: the represented entity; the representing entity; those particular aspects of the represented entity that are being represented; those aspects of the representing entity that are doing the representing; and the correspondence between the two entities.
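For concreteness, Kaput's five components can be rendered as a simple data structure; the following encoding and its field names are our own hypothetical sketch, not anything proposed by Kaput.

```python
# A hypothetical encoding of Kaput's five components of representation.
from dataclasses import dataclass

@dataclass
class Representation:
    represented_entity: str     # the mathematical object being represented
    representing_entity: str    # the notation or display that stands for it
    represented_aspects: str    # which aspects of the object are captured
    representing_aspects: str   # which features of the notation do the work
    correspondence: str         # the mapping between the two entities

graph_of_f = Representation(
    represented_entity="the function f(x) = 2x + 3",
    representing_entity="a Cartesian graph",
    represented_aspects="the input-output pairs of f",
    representing_aspects="the positions of plotted points",
    correspondence="each pair (x, f(x)) corresponds to the point (x, y)",
)
print(graph_of_f.representing_entity)
```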

EXPECTED OUTCOME

This framework served as a basis for conceptualizing several "representational studies" involving the three representations of the tabular, the graphical, and the symbolic (e.g., Schwarz and Bruckheimer 1988). While the body of research on students' making connections among the three representations of functions in various technological environments would continue to grow (e.g., Romberg et al. 1993), further developments of a theoretical nature with respect to representations were forecast by Kaput when he spoke of "the potential of notations in dynamic interactive media" (Kaput 1998, p. 271, emphasis added). This particular evolution in theoretical frameworks will be among the ones discussed in an upcoming section.

FROM PAST TO PRESENT

A very interesting inventory of the mid-1990s research on technology in mathematics education is the one carried out by Lagrange et al. (2003). In their review of the worldwide corpus of research and innovation publications in the field of Information-Communication-Technology integration, they point out that "the period from 1994 to 1998 appeared particularly worthy of study." In the entire corpus of papers that was reviewed, Lagrange et al. found that the only theoretical convergences were at a general level and touched upon issues related to visualization, connection of representations, and situated knowledge. The study shows that less than half of the publications surveyed appeared to go beyond descriptions of the environment or phenomena being observed, even though this literature was intended to reflect a certain maturity in the field. To summarize this section: the technology-related research of the early years was thin in theoretical foundations. Gradually, both local technology-driven theories emerged and recently developed theories from mathematics education research were adapted to the case of learning with technology. This overall development can be extrapolated and applied to the current situation, which is the issue at stake in the next section.

CURRENT DEVELOPMENTS

Current developments concerning theory in research on technology in mathematics education cannot be separated from recent technological developments. On the one hand, technological devices have become smaller, and handheld devices such as graphing and symbolic calculators are widespread. On the other hand, communication has become a more integrated part of technology use: software can be distributed via the Internet, and students can work, collaborate, and communicate with peers and teachers in digital learning environments. The content of such learning environments, however, turns out to be difficult to set up. The question of what a digital course should look like, so that it may really benefit from the potential of technology and exceed the "paper-on-screen" approach, has not yet been answered. These technological developments serve as background to the discussion of current theoretical developments described in this section.

APPLICATION OF PRINCIPLES

Although the ACT-R theory provides a cognitive modelling framework, it does not specify the particular skills that comprise the ability to solve, for example, a linear equation. In order to create instruction in mathematics, we need to understand the knowledge components involved in completing a particular task. It is not enough to know the components involved in expert performance of the task; we also need to know the components exercised by students learning to perform the task. Much of our applied research in mathematics has concerned identifying the particular skills and methods that students use to complete mathematical tasks (see Corbett, McLaughlin, Scarpinatto, & Hadley, 2000; Koedinger & Anderson, 1990; Mark & Koedinger, 1999). Often, these skills do not correspond to expert beliefs (Koedinger & Nathan, 2004; Nathan & Koedinger, 2000a, 2000b). One technique that we have used to understand how students approach mathematics problems is to track their eye movements as they work through a problem (Gluck, 1999). Consider the task of a student completing a table of values.

In Figure 1, part of the table that represents the word problem has been completed. The student has filled in the columns with the independent and dependent quantities relevant to the situation presented in the problem, specified the units of measurement for these quantities, and provided a formula to show their relationship. The student next needs to calculate the amount of money remaining after 2 h. There are at least two ways to perform this task.

Figure 1. Partially completed word problem task used in an eye tracking study.

First, the student might reason from the problem scenario (perhaps imagining having $20 and then using repeated subtraction to calculate the money left after spending $4 two times). A second method would be to use the algebraic expression, substitute 2 for x, and calculate the result. If a student has produced the table shown in Figure 1 (including the algebraic expression for the amount of money left), we might expect that he or she would then use the algebraic expression and execute the second method. In fact, Gluck (1999) found that when students were answering a question such as the first question in Figure 1, they looked at the problem scenario but not at the expression about 13% of the time. Students looked at the expression (sometimes along with the scenario) 54% of the time. Almost 34% of the time, they looked at neither the expression nor the problem scenario. As a result of these and other data (see Koedinger & Anderson, 1998), the Cognitive Tutor curriculum treats the search for the algebraic expressions for simple word problems as an induction task. The formula row, shown as the second row in the table of Figure 1, is now presented at the bottom of the table, after the rows corresponding to the two questions. This has the effect of asking students to solve the individual problems (How much money will you have after 2 hours? and How many hours can you play before you run out of money?) first, and then use a generalization of their reasoning to come up with the algebraic expression. In later units of the curriculum, the situations and algebraic expressions become progressively more complex.

Beyond the design of mathematical tasks, the ACT-R theory guides instruction in Cognitive Tutor because the software includes an active cognitive model, which is similar to there being an ACT-R model within the software (Corbett, Koedinger, & Anderson, 1997). This model serves two purposes. First, the model follows student actions in order to determine the particular student's strategy in solving a problem; the technique by which it does this is called model tracing. Second, each action that the student takes is associated with one or more skills, which are references to knowledge components in the cognitive model. Individual student performance on these skills is tracked over time (and displayed to students in the "skillometer"). Cognitive Tutor uses each student's skill profile to pick problems that emphasize the skills on which the student is weakest (Corbett & Anderson, 1995b). In addition, the skill model is used to implement mastery learning: when all skills in a section of the curriculum are determined to be sufficiently mastered, the student moves on to the next section of the curriculum, which introduces new skills.
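The two solution methods can be made explicit in a few lines of code (our own sketch, using the Figure 1 scenario of starting with $20 and spending $4 per hour):

```python
# Method 1: reason from the problem scenario by repeated subtraction.
money = 20
for hour in range(2):
    money -= 4           # spend $4 in each of the 2 hours
print(money)             # 12

# Method 2: substitute x = 2 into the algebraic expression 20 - 4x.
def money_left(x):
    return 20 - 4 * x

print(money_left(2))     # 12
```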

CAREFUL EVALUATIONS

The development of curriculum involves many decisions, and there is often room for disagreement about how learning theory should be applied in particular cases. For that reason, we believe that careful evaluation is an essential part of the process. Our development process has included many formative evaluations of individual units of instruction (see, e.g., Aleven & Koedinger, 2002; Corbett, Trask, Scarpinatto, & Hadley, 1998; Koedinger & Anderson, 1998; Ritter & Anderson, 1995). In addition, we have conducted several large evaluations of the entire curriculum (combining text, software, and training components in a single manipulation). Early evaluations of Cognitive Tutors for programming and geometry showed great promise, with effect sizes of approximately 1 SD (Anderson, Corbett, Koedinger, & Pelletier, 1995). In studies of the Algebra I Cognitive Tutor conducted in Pittsburgh and Milwaukee (Koedinger et al., 1997), students were tested both on standardized tests (SAT and Iowa) and on performance-based problem solving. Cognitive Tutor students significantly outscored their peers on the standardized tests (by about 0.3 SDs), and the difference in performance was particularly pronounced on tests of problem solving and multiple representations, on which the Cognitive Tutor students outscored their peers by 85%. In Moore, Oklahoma, a study was conducted in which teachers were asked to teach some of their classes using Cognitive Tutor and some using the textbook they had previously been using (Morgan & Ritter, 2002; National Research Council, 2003). The result was that the Cognitive Tutor students scored higher on a standardized test (the ETS Algebra I End-of-Course Assessment), received higher grades, reported greater confidence in their mathematical abilities, and were more likely to believe that mathematics would be useful to them outside of school. This study, which showed effect sizes of approximately 0.4 SDs, was recognized by the U.S. Department of Education's What Works Clearinghouse as having met the highest standards of evidence. The Miami-Dade County school district studied the use of Cognitive Tutor Algebra I in 10 high schools. An analysis of over 6,000 students taking the 2003 FCAT (a state exam) showed that students who used Cognitive Tutor significantly outscored their peers on the exam (Sarkis, 2004).

Figure 2. Percent correct over time, considering all (sequentially numbered) student actions in the geometry curriculum.

Figure 3. Percent correct over (sequentially numbered) actions involving a single skill.

The findings were particularly dramatic for special populations. The study showed that 35.7% of students receiving Exceptional Student Education who used Cognitive Tutor passed the FCAT, in comparison with only 10.9% of such students who used a different curriculum. For students with limited English proficiency, 27% of Cognitive Tutor students passed the FCAT, as opposed to only 18.9% of such students in another curriculum.

METHODOLOGY FOR IMPROVEMENT

ACT-R provides guidelines for educational pedagogy and for constructing tasks that are likely to increase learning. The theory also provides a way for us to test and improve our curriculum over time. Cognitive Tutor observes students: it sees everything the student does within its interface, at approximately 10-sec intervals, for 2 days per week over a school year. However, the cognitive model is not a passive observer. It is continually evaluating the student and predicting what the student knows and does not know. By aggregating these predictions across students, we can test whether or not the cognitive model is correctly modelling student behavior. Consider what an observer should see across time in a classroom. If students are learning, they should be making fewer errors over time. However, the activities given to the students over time should also increase in difficulty. In a well-constructed curriculum, these two forces should cancel each other out, leading to a fairly constant error rate over time. In fact, that is what we see in the Cognitive Tutor curricula. Figure 2 shows the percent correct, over time, for 88 students using the Cognitive Tutor Geometry curriculum.
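The tutor's running prediction of what a student knows can be sketched as a probability update per observed action. The following is a minimal illustration in the spirit of the knowledge tracing of Corbett and Anderson (1995); the parameter values and function name are our own illustrative assumptions, not the tutor's actual implementation.

```python
# Minimal knowledge-tracing sketch: update P(skill is known) after each
# observed student action. All parameter values here are assumptions.
def update_mastery(p_known, correct, p_learn=0.1, p_guess=0.2, p_slip=0.1):
    if correct:
        evidence = p_known * (1 - p_slip)           # knew it and did not slip
        total = evidence + (1 - p_known) * p_guess  # or guessed correctly
    else:
        evidence = p_known * p_slip                 # knew it but slipped
        total = evidence + (1 - p_known) * (1 - p_guess)
    posterior = evidence / total
    # allow for learning the skill on this practice opportunity
    return posterior + (1 - posterior) * p_learn

p = 0.3  # assumed prior probability that the skill is already known
for outcome in [True, False, True, True, True]:
    p = update_mastery(p, outcome)
    print(f"P(known) = {p:.3f}")
# Mastery learning: once P(known) passes a threshold (e.g., 0.95), the
# tutor stops selecting problems that exercise this skill.
```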

ACT-R makes the strong claim that learning takes place at the level of the knowledge components. Thus, if we consider only actions that involve a particular knowledge component, we should see an increase in percent correct over time (Anderson et al., 1989). Figure 3 shows percent correct for the same group of students as in Figure 2, this time tracking only those student actions that the cognitive model considers to be relevant to a single skill (calculating the area of a regular polygon, in an orientation in which one side is horizontal). If ACT-R is correct in its assertion that performance of a complex task is determined by the individual knowledge components contributing to the performance of that task, then each skill in the cognitive model should show a learning curve such as this one. Failure to see learning on one of the component skills must mean that the cognitive model implemented in the tutor is not correctly representing student knowledge.

In the development of our algebra tutor, we discovered that the model was overpredicting student performance in solving some equations of the form ax = b. An analysis of the data revealed that the overprediction was due, in part, to the case in which a = -1. In retrospect, the explanation for this overprediction is obvious. In the case in which a = -1, the student needs to understand that the expression -x means "-1 times x" and that, otherwise, the equation can be solved using the same operations as would be applied to any equation of the form ax = b. (Another way to think about this error is that some students have learned a rule equivalent to "if the equation is of the form ax = b, then divide by the number in front of the variable." But when the coefficient is -1, the student doesn't see a number but just a negative sign, so the rule does not apply.) Now that recognition of -x as -1 times x has been added to the cognitive model, Cognitive Tutor automatically adjusts instruction to test whether or not students have mastered that skill and automatically provides extra practice on such problems to students who need it. In addition, we can target instruction specifically to this skill.

The process of analyzing learning curves and improving the fit of these curves to the data has, to this point, been laborious. We have recently been exploring the possibility of automating the process of discovering flaws in the cognitive model (Cen, Koedinger, & Junker, 2005; Junker, Koedinger, & Trottini, 2000), and this is an active focus of research at the Pittsburgh Science of Learning Center (www.learnlab.org). We believe that in the near future we will be able to greatly extend our ability to understand and accurately model students' mathematical cognition. In addition, the growing database gives us the ability to look at student cognition both more deeply and more broadly. We have now collected data from over 7,000 students using Cognitive Tutor in a pre-algebra class. These data comprise over 35 million observations, which amounts to observing an action for each student about every 9.5 sec. With a database of this size, we expect to be able to detect subtler factors affecting learning, including the effectiveness of individual tasks, hints, and feedback patterns. We are starting to apply microgenetic methods (Siegler & Crowley, 1991) to see whether or not we can identify key learning experiences, which could contribute to better cognitive models of individual differences in prior knowledge or learning styles and preferences.
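The per-skill learning-curve analysis described above reduces to a simple aggregation, sketched below; the log format, field names, and skill names are our own assumptions for illustration, as the actual tutor logs are far richer.

```python
# Group logged actions by skill and opportunity number, then compute
# percent correct at each opportunity; a flat curve flags a model flaw.
from collections import defaultdict

# Each record: (student_id, skill_name, correct), in chronological order.
log = [
    ("s1", "polygon-area", False), ("s1", "polygon-area", True),
    ("s2", "polygon-area", False), ("s2", "polygon-area", True),
    ("s1", "solve-ax=b",   False), ("s1", "solve-ax=b",   True),
]

opportunity = defaultdict(int)        # (student, skill) -> attempts so far
curve = defaultdict(lambda: [0, 0])   # (skill, opportunity) -> [right, total]

for student, skill, correct in log:
    opportunity[(student, skill)] += 1
    n = opportunity[(student, skill)]
    curve[(skill, n)][0] += int(correct)
    curve[(skill, n)][1] += 1

for (skill, n), (right, total) in sorted(curve.items()):
    print(f"{skill:13s} opportunity {n}: {100 * right / total:.0f}% correct")
```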
We believe that the combination of a dense data stream of student behavior and a large sample of students will allow us to greatly expand our knowledge of students‘ mathematical cognition and advance our ability to help students learn mathematics.


Corresponding Author: Mamta*

Madhyanchal Professional University – Educational Institute, Bhopal, Madhya Pradesh