Machine Learning as a Part of Artificial Intelligence in Decision Science: A Literature Review
Exploring the Role of Machine Learning in Dynamic Motion Planning for Mobile Robots
by Parag Chandra Dutta*,
- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659
Volume 15, Issue No. 2, Sep 2018, Pages 42 - 45 (4)
Published by: Ignited Minds Journals
ABSTRACT
Motion planning in dynamic or uncertain environments is an important problem in the field of mobile robots and arises in many real-world applications. In motion planning, the motion behaviors of a mobile robot can be classified into two fundamental behaviors: obstacle avoidance and goal seeking. Robots that operate in the real world need to respond rapidly to changes in the environment. A plan computed from the data currently available to the robot quickly becomes invalid when the environment changes or the robot receives new information, so the challenge for mobile robots is to replan paths as quickly as possible. Especially challenging are environments with dynamic obstacles and constraints, such as personal space around people, buffer zones around dangerous vehicles, and rough terrain. Because sensors are imperfect, robots navigating in real-time dynamic environments must re-plan whenever they receive new sensory data in order to ensure a safe, low-cost path. Learning is acquiring new, or modifying existing, knowledge, behaviors, skills, values, or preferences, and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals and some machines. Human learning may occur as part of education, personal development, or training. It may be goal-oriented and may be aided by motivation. Learning may occur with or without conscious awareness.
KEYWORD
Machine Learning, Artificial Intelligence, Decision Science, Motion Planning, Mobile Robots, Obstacle-Avoidance, Goal-Seeking, Replanning, Dynamic Obstacles, Learning
INTRODUCTION
Learning systems are divided into supervised learning, unsupervised learning and reinforcement learning. Supervised learning requires sample input-output pairs from the target function; in other words, it requires a set of questions together with the right answers. A supervised learner can examine all the labeled examples and learn how to recognize new examples of the same kind. In unsupervised learning only the inputs are available, and the task is to discover structure in them. Unsupervised learning is distinguished from supervised learning by the fact that there is no prior knowledge of the target outputs: the observations are not labeled with their corresponding classes, and it is left to the learning algorithm to construct a representation of the input that can be used for decision making, predicting future inputs, or outlier detection. Reinforcement learning is a general learning approach that does not require a trainer or supervisor. This kind of learning is recommended when the knowledge needed for supervised learning is not available, since it does not directly compare the actual output with a correct target pattern. Reinforcement learning addresses the problem of learning to select actions in order to maximize performance in a model-free environment. The Q-learning algorithm can achieve reactive mobile robot navigation in indoor environments. It learns about the environment through rewards, and the design problem is how to ensure an optimized path through the environment to the goal. Simulation experiments were conducted on a grid, which is a common method of representing a robot's movement in the proposed environment. The length of the path between two adjacent cells is defined in the created grid environment of size 10×10. Grid-cell values are assigned according to the standard algorithm representation, and, based on the solution domain, the grid values, the movement of the robot, the movement of the obstacles, and the speeds of the robot and the obstacles are assigned.
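To make the grid-world formulation above concrete, the following is a minimal sketch of tabular Q-learning for goal seeking with obstacle avoidance on a 10×10 grid. The obstacle layout, reward values and learning parameters below are illustrative assumptions for the sketch, not the settings used in the simulation experiments reported here.

```python
# Minimal tabular Q-learning sketch for grid-based robot navigation.
# Grid size, obstacles, rewards and hyperparameters are illustrative assumptions.
import random

SIZE = 10                                      # 10x10 grid environment
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
OBSTACLES = {(2, 3), (4, 4), (5, 6), (7, 2)}   # hypothetical static obstacle cells
START, GOAL = (0, 0), (9, 9)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration

# Q-table: one value per (cell, action) pair, initialised to zero
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    dr, dc = ACTIONS[action]
    r, c = state[0] + dr, state[1] + dc
    if not (0 <= r < SIZE and 0 <= c < SIZE) or (r, c) in OBSTACLES:
        return state, -5.0, False              # blocked move: penalty, stay in place
    if (r, c) == GOAL:
        return (r, c), 100.0, True             # goal reached: large positive reward
    return (r, c), -1.0, False                 # ordinary move: small step cost

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

for episode in range(3000):
    state = START
    for t in range(500):                       # cap episode length
        action = choose_action(state)
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# Greedy rollout of the learned policy from START towards GOAL
state, path = START, [START]
while state != GOAL and len(path) < SIZE * SIZE:
    action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
    state, _, _ = step(state, action)
    path.append(state)
print(path)
```

After training, a greedy rollout of the learned Q-table yields a collision-free path from the start cell to the goal; in a dynamic environment the same update rule can simply be re-run whenever the obstacle set or the sensory information changes.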
Collaboration requires forming shared intentions and having shared goals. It was Green who specifically noted that the way humans collaborate with other humans on a task is a source of inspiration for how a robot should collaborate with humans. There is an extensive body of work focused on understanding what is involved in human-human interaction for successful collaboration. Within this section, a few key points are addressed, such as the rules that govern the formation and maintenance of human-human collaboration. Integrating robots into human teams is challenging in many respects, and this subsection addresses how these challenges are treated in the literature. The term "agent" here refers to any robotic or software agent system.
MACHINE LEARNING AS A PART OF ARTIFICIAL INTELLIGENCE IN DECISION SCIENCE
There are several lines of research on human-agent collaborative teamwork. Collagen is a collaborative agent that adopts principles underlying human collaboration drawn from research in discourse theory and shared plans. R-CAST is a multi-agent system that supports human decision-making in a team context by using a collaborative Recognition-Primed Decision (RPD) process as part of the team members' shared mental model. Miller et al. presented an intelligent team training system called the Collaborative Agent architecture for Simulating Teamwork (CAST).

The recent revolution in digital technology has touched every sphere and facet of our lives, and the education sector has not been spared. Unlike in any other sector, the link between digital technology and education is unique and complementary. On one hand, digital technology has become the enabler, redefining the very basics of the sector and altering the rules of the game. On the other hand, today's young minds will decide the future direction of digital technology, as they are going to be the innovators of tomorrow. So, equipping our students is key to success in the field. Currently, more than 40 crore Indians use the internet, and this number is expected to double in the next four years. The government has embarked on a mission to connect 2.5 lakh villages through the fibre superhighway, and it aims to train crores of Indians in different skills by 2022. This means that digital technology is set to embrace every moment of our lives. We are already a digital society and are moving towards a knowledge society. Hence, the task is cut out for students, teachers and managements: it is time for learning, relearning and unlearning.

Over the years, pen-and-paper tools have fast been turning into things of the past, and traditional classrooms are giving way to smart classrooms. Students are smart enough to swim with the current trends and are constantly on the learning curve. For teachers, however, it is time for relearning, whether in pedagogical tools, content or dissemination; they need to update themselves to catch up with their students. For the management or school authorities, the task is first unlearning, before learning. The old management theories and best practices are becoming outdated with every passing day, because even the traditional infrastructure is slowly becoming obsolete in this virtual world.

Going forward, many foreseen and unforeseen technology innovations will disrupt the education sector. One of the most powerful disruptions will be the rising inclination towards m-learning over e-learning practices. Mobile technology is making education affordable, convenient and more effective, and mobile apps are turning learning into a pleasure ride, like negotiating the twists and turns of an online game. We already see the market being flooded with multiple apps for different categories of study. Technology is a great leveller, and more so in education. Another big trend to watch is how fast this will redefine the educational landscape. Digital technology is making place, people and time irrelevant for learning. As we move into a global classroom, the rural-urban divide will fade away. With schools interconnected digitally, expertise will matter first.
Through telemedicine facilities, tertiary care is now being made available at primary healthcare centres. Similarly, expertise from specialists in big towns can now quickly reach the grass-roots level. Talent, whether in small towns or in metros, is able to get support on an equal scale. Even the time constraints on learning will soon be removed, and boundless, timeless education will become possible. Another aspect is that parents will also be enrolled onto this digital highway, and their contribution will be integral to the success of pupils. But the two biggest trends to impact education in the near future will be artificial intelligence and the Internet of Things, which are already charting the very course of information technology. Virtual reality and augmented reality videos and simulations will make educational content more interactive and interesting.
DISCUSSION
Cloud technology is going to make life easier both for students and teachers, as documents and files will be stored and accessed easily. It will also help managements in a big way by cutting down infrastructure costs. Similarly, Big Data will make assignments, evaluations, tests and projects more results-driven. In the same way that analytics is helping fintech companies, student performance can be tracked and analysed to drive better outcomes.
Augmented reality and virtual reality can make learning exciting, offering rich experiences and opening up endless possibilities. Highly engaging classrooms will lead to better results. These technologies can transform traditional methods of learning, breaking down the walls of classrooms and encouraging students to think out of the box and pilot new innovations. AI's digital, dynamic nature also offers opportunities for student engagement that cannot be found in often outdated textbooks or in the fixed environment of the typical four-walled classroom. In synergistic fashion, AI and education each have the potential to propel the other forward and to accelerate the discovery of new learning frontiers and the creation of innovative technologies. Some of these observations are explored further in our AI Future Scape series, where we examine how the leaders of tomorrow in business and government are educated on AI practices and will thus play a key role in the future of AI's development.

"Smart content" creation, from digitized textbook guides to customizable digital learning interfaces, is being introduced at all levels, from elementary to post-secondary to corporate environments. Content Technologies, Inc., an artificial intelligence development company specializing in the automation of business processes and intelligent instruction design, has created a suite of smart content services for secondary education and beyond. Cram101, for example, uses AI to disseminate and break down textbook content into digestible "smart" study guides that include chapter summaries, true-false and multiple-choice practice tests, and flashcards. JustTheFacts101 has a similar, though more streamlined, purpose: highlighting and creating text- and chapter-specific summaries, which are then archived into a digital collection and made available on Amazon.

Other companies are creating smart digital content platforms, complete with content delivery, practice exercises, and real-time feedback and assessment. Netex Learning, for example, allows educators to design digital curricula and content across devices, integrating rich media like video and audio, as well as self- or online-instructor assessment. Netex also provides a personalized learning cloud platform designed for the modern workplace, in which employers can design customizable learning systems with apps, gamification and simulations, virtual courses, self-assessments, video conferencing, and other tools. Learning platforms for the modern workplace are designed to allow employees to master additional skills and receive continuous, automated feedback, and when used strategically they have the potential to improve performance and increase production.

Carnegie Learning's "Mika" software, for example, uses cognitive science and AI technologies to provide personalized tutoring and real-time feedback for post-secondary students, particularly incoming college freshmen who would otherwise need remedial courses. Carnegie estimates that such remedial learning costs colleges $6.7 billion annually, with only a 33% success rate for math courses. Intelligent tutoring systems (ITS) give students the potential to access flexible, more personalized modes of learning conveniently and on an ongoing basis. While it seems obvious that no one in education is eager for virtual humans to come and replace educators, the idea of creating virtual human guides and facilitators for use in a variety of educational and therapeutic environments is a promising area of development.
Though not yet a reality, the ultimate goal in this field is to create virtual human-like characters who can think, act, react, and interact in a natural way, responding to and using both verbal and nonverbal communication.
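As a rough illustration of the kind of "smart content" generation described above (chapter summaries, practice material and flashcards), the sketch below scores sentences by word frequency to extract a short summary and turns a small glossary into flashcards. It is a toy example under simple assumptions; it does not represent the methods used by Cram101, JustTheFacts101 or Netex, and the function names are hypothetical.

```python
# Toy "smart content" sketch: frequency-based extractive summary plus flashcards.
# Purely illustrative; not the approach of any product named in the text.
import re
from collections import Counter

def summarise(text, max_sentences=2):
    """Score sentences by the frequency of their words and keep the top ones."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return scored[:max_sentences]

def make_flashcards(glossary):
    """Turn term -> definition pairs into simple question/answer flashcards."""
    return [(f"What is {term}?", definition) for term, definition in glossary.items()]

chapter = ("Reinforcement learning selects actions to maximise reward. "
           "It does not require labelled input-output pairs. "
           "A Q-table stores the value of each state-action pair.")
print(summarise(chapter))
print(make_flashcards({"Q-learning": "a model-free reinforcement learning algorithm"}))
```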
Corresponding Author
Parag Chandra Dutta*