A Reinforcement Learning Variant for Control Scheduling


Aloke Guha
Honeywell Sensor and System Development Center
3660 Technology Drive, Minneapolis, MN

Abstract

We present an algorithm based on reinforcement and state recurrence learning techniques to solve control scheduling problems. In particular, we have devised a simple learning scheme called "handicapped learning", in which the weights of the associative search element are reinforced either positively or negatively, such that the system is forced to move towards the desired setpoint along the shortest possible trajectory. To improve the learning rate, a variable reinforcement scheme is employed: negative reinforcement values are varied depending on whether the failure occurs in the handicapped or the normal mode of operation. Furthermore, to realize a simulated-annealing-like scheme for accelerated learning, the negative reinforcement value is increased if the system visits the same failed state successively. In the examples studied, these learning schemes have demonstrated high learning rates and may therefore prove useful for in situ learning.

1 INTRODUCTION

Reinforcement learning techniques have been applied successfully to simple control problems such as the pole-cart problem [Barto 83, Michie 68, Rosen 88], where the goal was to maintain the pole in a quasi-stable region, but not at specific setpoints. However, a large class of continuous control problems require maintaining the system at a desired operating point, or setpoint, at a given time. We refer to this problem as the basic setpoint control problem [Guha 90] and have shown that reinforcement learning can, not surprisingly, be used quite well for such control tasks. A more general version of the same problem requires steering the system from some
initial or starting state to a desired state, or setpoint, at specific times, without knowledge of the dynamics of the system. We therefore wish to examine how control scheduling tasks, where the system must be steered through a sequence of setpoints at specific times, can be learned. Solving such a control problem without explicit modeling of the system or plant can prove beneficial in many adaptive control tasks. To address the control scheduling problem, we have derived a learning algorithm called handicapped learning. Handicapped learning uses a nonlinear encoding of the state of the system, a new associative reinforcement learning algorithm, and a novel reinforcement scheme to explore the control space to meet the scheduling constraints. The goal of handicapped learning is to learn the control law necessary to steer the system from one setpoint to another. We provide a description of the state encoding and associative learning in Section 2, the reinforcement scheme in Section 3, the experimental results in Section 4, and the conclusions in Section 5.

2 REINFORCEMENT LEARNING STRATEGY: HANDICAPPED LEARNING

Our earlier work on regulatory control using reinforcement learning [Guha 90] used a simple linear coded state representation of the system. However, when considering multiple setpoints in a schedule, a linear coding at high resolution results in a combinatorial explosion of states. To avoid this curse of dimensionality, we have adopted a simple nonlinear encoding of the state space. We describe this first.

2.1 STATE ENCODING

To define the states in which reinforcement must be provided to the controller, we set tolerance limits around the desired setpoint, say Xd. If the tolerance of operation, defined by the level of control sophistication required in the problem, is T, then the controller is defined to fail if |X(t) - Xd| > T, as described in our earlier work [Guha 90]. The controller must learn to maintain the system within this tolerance window.
If the range R of possible values of the setpoint or control variable X(t) is significantly greater than the tolerance window, then the number of states required to define the setpoint will be large. We therefore use a nonlinear coding of the control variable. Thus, if the level of discrimination within the tolerance window is 2T/n, then the number of states required to represent the control variable is (n + 2), where the two added states represent the conditions (X(t) - Xd) > T and (X(t) - Xd) < -T. With this representation scheme, any continuous range of setpoints can be represented with very high resolution but without an explosion in state space. The above state encoding is used in our associative reinforcement learning algorithm, handicapped learning, which we describe next.
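As a concrete sketch of this encoding (the function and parameter names here are ours, not the paper's):

```python
def encode_state(x, x_d, tol, n):
    """Nonlinear state encoding: n bins of width 2*tol/n inside the
    tolerance window around setpoint x_d, plus two out-of-window states,
    for (n + 2) states in total regardless of the full range of x."""
    err = x - x_d
    if err > tol:
        return n          # state for (X(t) - Xd) > T
    if err < -tol:
        return n + 1      # state for (X(t) - Xd) < -T
    bin_width = 2.0 * tol / n
    idx = int((err + tol) / bin_width)
    return min(idx, n - 1)  # clamp the boundary case err == +tol
```

For example, with x_d = 70, tol = 1 and n = 4, any value of the control variable maps into one of only six states, however wide its possible range is.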
2.2 HANDICAPPED LEARNING ALGORITHM

Our reinforcement learning strategy is derived from the Associative Search Element/Adaptive Heuristic Critic (ASE/AHC) algorithm [Barto 83, Anderson 86]. We have considered a binary control output y(t):

y(t) = f( Σi wi(t) xi(t) + noise(t) )    (1)

where f is the thresholding step function, and xi(t), 0 ≤ i ≤ N, is the current decoded state; that is, xi(t) = 1 when the system is in the ith state and 0 otherwise. As in ASE, the added term noise(t) facilitates stochastic learning. Note that the learning algorithm can easily be extended to continuous-valued outputs; the nature of the continuity is determined by the thresholding function. We incorporate two learning heuristics: state recurrence [Rosen 88] and a newly introduced heuristic called "handicapped learning". The controller is in the handicapped learning mode if a flag H is set high. H is defined as follows:

H = 0, if |X(t) - Xd| < T
  = 1, otherwise    (2)

The handicapped mode provides a mechanism to modify the reinforcement scheme. In this mode the controller is allowed to explore the search space of action sequences, to steer to a new setpoint, without "punishment" (negative reinforcement). The mode is invoked when the system is at a valid setpoint X1(t1) at time t1 but must be steered to a new setpoint X2 outside the tolerance window, that is, |X1 - X2| > T, at time t2. Since both setpoints are valid operating points, these setpoints, as well as all points within the possible optimal trajectories from X1 to X2, cannot be deemed failure states. Further, by following a special reinforcement scheme during the handicapped mode, one can enable learning and help the controller find the optimal trajectory to steer the system from one setpoint to another.
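Equations (1) and (2) admit a short sketch (again with our own names; the Gaussian noise distribution is an assumption, since the paper does not specify one):

```python
import random

def ase_action(weights, state_idx, noise_scale=0.1, rng=random):
    """Binary ASE output of equation (1): since x_i(t) is 1 only for the
    current decoded state, the weighted sum reduces to a single weight."""
    s = weights[state_idx] + rng.gauss(0.0, noise_scale)
    return 1 if s > 0.0 else 0  # f: thresholding step function

def handicap_flag(x, x_d, tol):
    """Equation (2): H = 1 outside the tolerance window, 0 inside."""
    return 0 if abs(x - x_d) < tol else 1
```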
The weight updating rule used during setpoint schedule learning is given by equation (3):

wi(t+1) = wi(t) + α1 r1(t) ei(t) + α2 r2(t) e2i(t) + α3 r3(t) e3i(t)    (3)

where the term α1 r1(t) ei(t) is the basic associative learning component, r1(t) the heuristic reinforcement, and ei(t) the eligibility trace of the state xi(t) [Barto 83]. The third term in equation (3) is the state recurrence component for reinforcing short cycles [Rosen 88]. Here α2 is a constant gain, r2(t) is a positive constant reward, and e2i, the state recurrence eligibility, is defined as follows:

e2i(t) = β2 xi(t) y(ti,last) / (β2 + t - ti,last),  if (t - ti,last) > 1 and H = 0
       = 0,  otherwise    (4)
where β2 is a positive constant and ti,last is the last time the system visited the ith state. The eligibility function in equation (4) reinforces shorter cycles more than longer cycles and improves control when the system is within a tolerance window. The fourth term in equation (3) is the handicapped learning component. Here α3 is a constant gain, r3(t) is a positive constant reward, and e3i, the handicapped learning eligibility, is defined as follows:

e3i(t) = -β3 xi(t) y(ti,last) / (β3 + t - ti,last),  if H = 1
       = 0,  otherwise    (5)

where β3 is a positive constant. While state recurrence promotes short cycles around a desired operating point, handicapped learning forces the controller to move away from the current operating point X(t). The system enters the handicapped mode whenever it is outside the tolerance window around the desired setpoint. If the initial operating point Xi (= X(0)) is outside the tolerance window of the desired setpoint Xd, |Xi - Xd| > T, the basic AHC network will always register a failure. This failure situation is avoided by invoking the handicapped learning described above. By setting absolute upper and lower limits on operating point values, the controller based on handicapped learning can learn the correct sequence of actions necessary to steer the system to the desired operating point Xd. The weight update equations for the critic in the AHC are unchanged from the original AHC, and we do not list them here.

3 REINFORCEMENT SCHEMES

Unlike previous experiments by other researchers, we have constructed the reinforcement values used during learning to be multi-valued, not binary. Both positive and negative reinforcements to the critic are used. There are two forms of failure that can occur during setpoint control. First, the controller can reach the absolute upper or lower limits. Second, there may be a timeout failure in the handicapped mode. By design,
when the controller is in handicapped mode, it is allowed to remain there only for a limited time TL, determined by the average control step Δy and the error between the current operating point and the desired setpoint:

TL = k Δy (X0 - Xd)    (6)

where X0 is the initial setpoint and k is some constant. The negative reinforcement provided to the controller is higher if the absolute limits of the operating point are reached. We have also implemented a more interesting reinforcement scheme, somewhat similar to simulated annealing: if the system fails in the same state on two successive trials, the negative reinforcement is increased. The primary reinforcement function can be defined as follows:
rj(k+1) = rj(k) - r0,  if i = j
        = r1,  if i ≠ j    (7)

where ri(k) is the negative reinforcement provided if the system failed in state i during trial k, and r0 and r1 are constants.

4 EXPERIMENTS AND RESULTS

Two different setpoint control experiments have been conducted. The first was the basic setpoint control of a continuous stirred tank reactor in which the temperature must be held at a desired setpoint. That experiment successfully demonstrated the use of reinforcement learning for setpoint control of a highly nonlinear and unstable process [Guha 90]. The second, more recent, experiment evaluated the handicapped learning strategy for an environmental controller, where the controller must learn to control the heating system to maintain the ambient temperature specified by a time-temperature schedule. Thus, as the external temperature varies, the network must adapt the heating (ON/OFF) control sequence so as to maintain the environment at the desired temperature as quickly as possible. The state information describing the system is composed of the time interval of the schedule, the current heating state (ON/OFF), and the error, i.e., the difference between the desired and the current ambient (interior) temperature. The heating and cooling rates are variable: the heating rate decreases and the cooling rate increases exponentially as the exterior temperature falls below the ambient (controlled) temperature.

Figure 1: Rate of Learning with and without Handicapped Learning
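Taken together, the weight update (3), the eligibilities (4) and (5), and the penalty schedule (7) can be sketched as follows (a minimal illustration under our own naming; this is not the authors' implementation):

```python
def recurrence_eligibility(beta2, dt, y_last, H):
    """e2_i(t) of equation (4) for the currently visited state i,
    where dt = t - t_i,last and y_last is the action taken on that visit."""
    if H == 0 and dt > 1:
        return beta2 * y_last / (beta2 + dt)
    return 0.0

def handicapped_eligibility(beta3, dt, y_last, H):
    """e3_i(t) of equation (5): a negative term that pushes the controller
    away from the current operating point while in handicapped mode."""
    if H == 1:
        return -beta3 * y_last / (beta3 + dt)
    return 0.0

def update_weight(w, a1, r1, e1, a2, r2, e2, a3, r3, e3):
    """Equation (3): associative + state-recurrence + handicapped terms."""
    return w + a1 * r1 * e1 + a2 * r2 * e2 + a3 * r3 * e3

def next_penalty(penalty, same_state_again, r0, r1):
    """Equation (7): deepen the (negative) reinforcement when the system
    fails in the same state on successive trials, else reset to the base."""
    return penalty - r0 if same_state_again else r1
```

Shorter revisit cycles (small dt) yield larger e2_i, which is what rewards tight cycling around a setpoint, while the minus sign in e3_i is what drives exploration away from the current state in handicapped mode.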
Figure 2: Time-Temperature Plot of Controlled Environment at Forty-third Trial

The experiments on the environmental controller consisted of embedding a daily setpoint schedule that contains six setpoints at six specific times. Trials were conducted to train the controller. Each trial starts at the beginning of the schedule (time = 0). The setpoints typically varied in the range of 55 to 75 degrees. The desired tolerance window was 1 degree. The upper and lower limits of the controlled temperature were set arbitrarily at 50 and 80 degrees, respectively. Control actions were taken every 5 minutes. Learning was monitored by examining how much of the schedule was learnt correctly as the number of trials increased.

Figure 3: Time-Temperature Plot of Controlled Environment for a Test Run

Figure 1 shows how the learning progresses with the number of trials. Current results show that learning the complete schedule (of the six time-temperature pairs), requiring 288 control steps, can be accomplished in only 43 trials. (Given binary
output, the controller could in the worst case have executed 10^86 (≈ 2^288) trials to learn the complete schedule.) More details on the learning ability of the reinforcement learning strategy are available from the time-temperature plots of the trial and test runs in Figures 2 and 3. As the learning progresses to the forty-third trial, the controller learns to continuously heat up or cool down to the desired temperature (Figure 2). To further test the generalization of the learned schedule, the trained network was tested in a different environment, where the exterior temperature profile (and therefore the heating and cooling rates) was different from the one used for training. Figure 3 shows the schedule that is maintained. Because the controller encounters different cooling rates in the test run, some learning still occurs, as evident from Figure 3. However, all six setpoints were reached in the proper sequence. In essence, this test shows that the controller has generalized on the heating and cooling control law, independent of the setpoints and the heating and cooling rates.

5 CONCLUSIONS

We have developed a new learning strategy based on reinforcement learning that can be used to learn setpoint schedules for continuous processes. The experimental results have demonstrated good learning performance. However, a number of interesting extensions to this work are possible. For instance, the handicapped-mode exploration of control could be better directed, for faster learning, if more information on the desired or possible trajectory were known. Another area of investigation is state encoding. In our approach the nonlinear encoding of the system state was assumed uniform over different regions of the control space. In applications where the system is highly nonlinear, different nonlinear codings could be used adaptively to improve the state representation.
Finally, other formulations of reinforcement learning algorithms besides ASE/AHC should also be explored. One such possibility is Watkins' Q-learning [Watkins 89].

References

[Guha 90] A. Guha and A. Mathur, "Setpoint Control Based on Reinforcement Learning," Proceedings of IJCNN 90, Washington D.C., January 1990.

[Barto 83] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No. 5, September/October 1983.

[Michie 68] D. Michie and R. Chambers, in Machine Intelligence, E. Dale and D. Michie (eds.), Oliver and Boyd, Edinburgh, 1968.

[Rosen 88] B. E. Rosen, J. M. Goodwin, and J. J. Vidal, "Learning by State Recurrence Detection," IEEE Conference on Neural Information Processing Systems - Natural and Synthetic, AIP Press.

[Watkins 89] C. J. C. H. Watkins, Learning from Delayed Rewards, Ph.D. Dissertation, King's College, May 1989.
More informationA Pipelined Approach for Iterative Software Process Model
A Pipelined Approach for Iterative Software Process Model Ms.Prasanthi E R, Ms.Aparna Rathi, Ms.Vardhani J P, Mr.Vivek Krishna Electronics and Radar Development Establishment C V Raman Nagar, Bangalore560093,
More informationKnowledge Transfer in Deep Convolutional Neural Nets
Knowledge Transfer in Deep Convolutional Neural Nets Steven Gutstein, Olac Fuentes and Eric Freudenthal Computer Science Department University of Texas at El Paso El Paso, Texas, 79968, U.S.A. Abstract
More informationME 443/643 Design Techniques in Mechanical Engineering. Lecture 1: Introduction
ME 443/643 Design Techniques in Mechanical Engineering Lecture 1: Introduction Instructor: Dr. Jagadeep Thota Instructor Introduction Born in Bangalore, India. B.S. in ME @ Bangalore University, India.
More informationPredicting Students Performance with SimStudent: Learning Cognitive Skills from Observation
School of Computer Science HumanComputer Interaction Institute Carnegie Mellon University Year 2007 Predicting Students Performance with SimStudent: Learning Cognitive Skills from Observation Noboru Matsuda
More informationData Fusion Models in WSNs: Comparison and Analysis
Proceedings of 2014 Zone 1 Conference of the American Society for Engineering Education (ASEE Zone 1) Data Fusion s in WSNs: Comparison and Analysis Marwah M Almasri, and Khaled M Elleithy, Senior Member,
More informationGetting Started with TINspire High School Science
Getting Started with TINspire High School Science 2012 Texas Instruments Incorporated Materials for Institute Participant * *This material is for the personal use of T3 instructors in delivering a T3
More informationInteraction Design Considerations for an Aircraft Carrier Deck Agentbased Simulation
Interaction Design Considerations for an Aircraft Carrier Deck Agentbased Simulation Miles Aubert (919) 6195078 Miles.Aubert@duke. edu Weston Ross (505) 3855867 Weston.Ross@duke. edu Steven Mazzari
More informationA ProcessModel Account of Task Interruption and Resumption: When Does Encoding of the Problem State Occur?
A ProcessModel Account of Task Interruption and Resumption: When Does Encoding of the Problem State Occur? Dario D. Salvucci Drexel University Philadelphia, PA Christopher A. Monk George Mason University
More informationAbstractions and the Brain
Abstractions and the Brain Brian D. Josephson Department of Physics, University of Cambridge Cavendish Lab. Madingley Road Cambridge, UK. CB3 OHE bdj10@cam.ac.uk http://www.tcm.phy.cam.ac.uk/~bdj10 ABSTRACT
More informationDeploying Agile Practices in Organizations: A Case Study
Copyright: EuroSPI 2005, Will be presented at 911 November, Budapest, Hungary Deploying Agile Practices in Organizations: A Case Study Minna Pikkarainen 1, Outi Salo 1, and Jari Still 2 1 VTT Technical
More informationTesting A Moving Target: How Do We Test Machine Learning Systems? Peter Varhol Technology Strategy Research, USA
Testing A Moving Target: How Do We Test Machine Learning Systems? Peter Varhol Technology Strategy Research, USA Testing a Moving Target How Do We Test Machine Learning Systems? Peter Varhol, Technology
More informationHuman Emotion Recognition From Speech
RESEARCH ARTICLE OPEN ACCESS Human Emotion Recognition From Speech Miss. Aparna P. Wanare*, Prof. Shankar N. Dandare *(Department of Electronics & Telecommunication Engineering, Sant Gadge Baba Amravati
More informationContinual CuriosityDriven Skill Acquisition from HighDimensional Video Inputs for Humanoid Robots
Continual CuriosityDriven Skill Acquisition from HighDimensional Video Inputs for Humanoid Robots Varun Raj Kompella, Marijn Stollenga, Matthew Luciw, Juergen Schmidhuber The Swiss AI Lab IDSIA, USI
More informationAdaptive Learning in TimeVariant Processes With Application to Wind Power Systems
IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, VOL 13, NO 2, APRIL 2016 997 Adaptive Learning in TimeVariant Processes With Application to Wind Power Systems Eunshin Byon, Member, IEEE, Youngjun
More informationRedirected Inbound Call Sampling An Example of Fit for Purpose Nonprobability Sample Design
Redirected Inbound Call Sampling An Example of Fit for Purpose Nonprobability Sample Design Burton Levine Karol Krotki NISS/WSS Workshop on Inference from Nonprobability Samples September 25, 2017 RTI
More informationLaboratorio di Intelligenza Artificiale e Robotica
Laboratorio di Intelligenza Artificiale e Robotica A.A. 20082009 Outline 2 Machine Learning Unsupervised Learning Supervised Learning Reinforcement Learning Genetic Algorithms GeneticsBased Machine Learning
More informationMachine Learning from Garden Path Sentences: The Application of Computational Linguistics
Machine Learning from Garden Path Sentences: The Application of Computational Linguistics http://dx.doi.org/10.3991/ijet.v9i6.4109 J.L. Du 1, P.F. Yu 1 and M.L. Li 2 1 Guangdong University of Foreign Studies,
More informationProbabilistic Latent Semantic Analysis
Probabilistic Latent Semantic Analysis Thomas Hofmann Presentation by Ioannis Pavlopoulos & Andreas Damianou for the course of Data Mining & Exploration 1 Outline Latent Semantic Analysis o Need o Overview
More informationUniversity of Groningen. Systemen, planning, netwerken Bosman, Aart
University of Groningen Systemen, planning, netwerken Bosman, Aart IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document
More informationSoftprop: Softmax Neural Network Backpropagation Learning
Softprop: Softmax Neural Networ Bacpropagation Learning Michael Rimer Computer Science Department Brigham Young University Provo, UT 84602, USA Email: mrimer@axon.cs.byu.edu Tony Martinez Computer Science
More informationAUTOMATIC DETECTION OF PROLONGED FRICATIVE PHONEMES WITH THE HIDDEN MARKOV MODELS APPROACH 1. INTRODUCTION
JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 11/2007, ISSN 16426037 Marek WIŚNIEWSKI *, Wiesława KUNISZYKJÓŹKOWIAK *, Elżbieta SMOŁKA *, Waldemar SUSZYŃSKI * HMM, recognition, speech, disorders
More informationPurdue Data Summit Communication of Big Data Analytics. New SAT Predictive Validity Case Study
Purdue Data Summit 2017 Communication of Big Data Analytics New SAT Predictive Validity Case Study Paul M. Johnson, Ed.D. Associate Vice President for Enrollment Management, Research & Enrollment Information
More informationLaboratorio di Intelligenza Artificiale e Robotica
Laboratorio di Intelligenza Artificiale e Robotica A.A. 20082009 Outline 2 Machine Learning Unsupervised Learning Supervised Learning Reinforcement Learning Genetic Algorithms GeneticsBased Machine Learning
More informationShockwheat. Statistics 1, Activity 1
Statistics 1, Activity 1 Shockwheat Students require real experiences with situations involving data and with situations involving chance. They will best learn about these concepts on an intuitive or informal
More informationGenerative models and adversarial training
Day 4 Lecture 1 Generative models and adversarial training Kevin McGuinness kevin.mcguinness@dcu.ie Research Fellow Insight Centre for Data Analytics Dublin City University What is a generative model?
More information"Onboard training tools for long term missions" Experiment Overview. 1. Abstract:
"Onboard training tools for long term missions" Experiment Overview 1. Abstract 2. Keywords 3. Introduction 4. Technical Equipment 5. Experimental Procedure 6. References Principal Investigators: BTE:
More informationMining Association Rules in Student s Assessment Data
www.ijcsi.org 211 Mining Association Rules in Student s Assessment Data Dr. Varun Kumar 1, Anupama Chadha 2 1 Department of Computer Science and Engineering, MVN University Palwal, Haryana, India 2 Anupama
More informationRobot manipulations and development of spatial imagery
Robot manipulations and development of spatial imagery Author: Igor M. Verner, Technion Israel Institute of Technology, Haifa, 32000, ISRAEL ttrigor@tx.technion.ac.il Abstract This paper considers spatial
More informationCollege Pricing. Ben Johnson. April 30, Abstract. Colleges in the United States price discriminate based on student characteristics
College Pricing Ben Johnson April 30, 2012 Abstract Colleges in the United States price discriminate based on student characteristics such as ability and income. This paper develops a model of college
More informationEECS 571 PRINCIPLES OF REALTIME COMPUTING Fall 10. Instructor: Kang G. Shin, 4605 CSE, ;
EECS 571 PRINCIPLES OF REALTIME COMPUTING Fall 10 Instructor: Kang G. Shin, 4605 CSE, 7630391; kgshin@umich.edu Number of credit hours: 4 Class meeting time and room: Regular classes: MW 10:30am noon
More informationExploration. CS : Deep Reinforcement Learning Sergey Levine
Exploration CS 294112: Deep Reinforcement Learning Sergey Levine Class Notes 1. Homework 4 due on Wednesday 2. Project proposal feedback sent Today s Lecture 1. What is exploration? Why is it a problem?
More informationInfrared Paper Dryer Control Scheme
Infrared Paper Dryer Control Scheme INITIAL PROJECT SUMMARY 10/03/2005 DISTRIBUTED MEGAWATTS Carl Lee Blake Peck Rob Schaerer Jay Hudkins 1. Project Overview 1.1 Stake Holders Potlatch Corporation, Idaho
More informationSimulation of Multistage Flash (MSF) Desalination Process
Advances in Materials Physics and Chemistry, 2012, 2, 200205 doi:10.4236/ampc.2012.24b052 Published Online December 2012 (http://www.scirp.org/journal/ampc) Simulation of Multistage Flash (MSF) Desalination
More informationPUBLIC CASE REPORT Use of the GeoGebra software at upper secondary school
PUBLIC CASE REPORT Use of the GeoGebra software at upper secondary school Linked to the pedagogical activity: Use of the GeoGebra software at upper secondary school Written by: Philippe Leclère, Cyrille
More informationFF+FPG: Guiding a PolicyGradient Planner
FF+FPG: Guiding a PolicyGradient Planner Olivier Buffet LAASCNRS University of Toulouse Toulouse, France firstname.lastname@laas.fr Douglas Aberdeen National ICT australia & The Australian National University
More information