Morgan Kaufmann

  • Principles of Artificial Intelligence

    • 1st Edition
    • Nils J. Nilsson
    • English
    A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of the control strategies used. Principles of Artificial Intelligence evolved from the author's courses and seminars at Stanford University and the University of Massachusetts, Amherst, and is suitable for use as a text in a senior or graduate AI course, or for individual study.
  • Scalable Shared-Memory Multiprocessing

    • 1st Edition
    • Daniel E. Lenoski + 1 more
    • English
    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.
  • Machine Learning Proceedings 1992

    Proceedings of the Ninth International Workshop (ML92)
    • 1st Edition
    • Peter Edwards + 1 more
    • English
    Machine Learning: Proceedings of the Ninth International Workshop (ML92) covers the papers and posters presented at ML92, the Ninth International Machine Learning Conference, held in Aberdeen, Scotland, on July 1-3, 1992. The book focuses on advances in the practices, methodologies, approaches, and techniques of machine learning. The selection first offers information on the principal axes method for constructive induction; learning by incomplete explanations of failures in recursive domains; and eliminating redundancy in explanation-based learning. Topics include means-ends analysis search in recursive domains, description space transformation, distance metrics, generating similarity matrices, and learning principal axes. The text then examines trading off consistency and efficiency in version-space induction; improving path planning with learning; finding the conservation of momentum; and learning to predict in uncertain continuous tasks. The manuscript elaborates on a teaching method for reinforcement learning, compiling prior knowledge into an explicit bias, spatial analogy and subsumption, and multistrategy learning with introspective meta-explanations. The publication also ponders on selecting typical instances in instance-based learning and temporal difference learning of backgammon strategy. The selection is a valuable source of information for researchers interested in machine learning.
  • Artificial Intelligence Planning Systems

    Proceedings of the First Conference (AIPS 92)
    • 1st Edition
    • James Hendler
    • English
    Artificial Intelligence Planning Systems documents the proceedings of the First International Conference on AI Planning Systems, held in College Park, Maryland on June 15-17, 1992. This book discusses the abstract probabilistic modeling of action; building symbolic primitives with continuous control routines; and systematic adaptation for case-based planning. The analysis of ABSTRIPS; conditional nonlinear planning; and building plans to monitor and exploit open-loop and closed-loop dynamics are also elaborated. This text likewise covers the modular utility representation for decision-theoretic planning; reaction and reflection in Tetris; and planning in intelligent sensor fusion. Other topics include the resource-bounded adaptive agent, a critical look at Knoblock's hierarchy mechanism, and traffic laws for mobile robots. This publication is beneficial to students and researchers conducting work on AI planning systems.
  • Representation and Understanding

    Studies in Cognitive Science
    • 1st Edition
    • Daniel G. Bobrow + 1 more
    • English
    Representation and Understanding: Studies in Cognitive Science, a volume in the series Language, Thought, and Culture: Advances in the Study of Cognition, focuses on the principles, processes, and methodologies involved in artificial intelligence. The selection first offers information on the dimensions of representation, foundations for semantic networks, and reflections on the formal description of behavior. Discussions focus on the relativity of behavioral description, hierarchical organization of processes, problems in knowledge representation, and inference, access, and self-awareness. The text then takes a look at the synthesis, analysis, and contingent knowledge in specialized understanding systems, some principles of memory schemata, and representing knowledge for recognition. The book examines frame representations and the declarative/procedural controversy, a schema for stories, and the structure of episodes in memory. Topics include long-term memory, conceptual dependency, understanding paragraphs, a simple story grammar, and a first attempt at synthesis. The publication then ponders on concepts for representing mundane reality in plans and multiple representations of knowledge for tutorial reasoning. The selection is highly recommended for researchers interested in exploring artificial intelligence.
  • Machine Learning Proceedings 1989

    • 1st Edition
    • Alberto Maria Segre
    • English
    Proceedings of the Sixth International Workshop on Machine Learning covers the papers presented at the Sixth International Workshop on Machine Learning, held at Cornell University, Ithaca, New York (USA) on June 26-27, 1989. The book focuses on the processes, methodologies, techniques, and approaches involved in machine learning. The selection first offers information on unifying themes in empirical and explanation-based learning; integrated learning of concepts with both explainable and conventional aspects; conceptual clustering of explanations; and tight integration of deductive and inductive learning. The text then examines multi-strategy learning in nonhomogeneous domain theories; description of preference criteria in constructive learning; and combining case-based reasoning, explanation-based learning, and learning from instruction. Discussions focus on causal explanation of actions, constructive learning, learning in a weak theory domain, the learning problem, and individual criteria and their relationships. The book elaborates on learning from plausible explanations, augmenting domain theory for explanation-based generalization, reducing search and learning goal preferences, and using domain knowledge to improve inductive learning algorithms for diagnosis. The selection is a dependable reference for researchers interested in the dynamics of machine learning.
  • Machine Learning Proceedings 1994

    Proceedings of the Eleventh International Conference
    • 1st Edition
    • William W. Cohen
    • English
    Machine Learning: Proceedings of the Eleventh International Conference covers the papers presented at the Eleventh International Conference on Machine Learning (ML94), held at New Brunswick, New Jersey on July 10-13, 1994. The book focuses on the processes, methodologies, and approaches involved in machine learning, including inductive logic programming, neural networks, and decision trees. The selection first offers information on learning recursive relations with randomly selected small training sets; improving accuracy of incorrect domain theories; and using sampling and queries to extract rules from trained neural networks. The text then takes a look at boosting and other machine learning algorithms; an incremental learning approach for completable planning; and learning disjunctive concepts by means of genetic algorithms. The publication ponders on rule induction for semantic query optimization; irrelevant features and the subset selection problem; and an efficient subsumption algorithm for inductive logic programming. The book also examines Bayesian inductive logic programming; a statistical approach to decision tree modeling; and an improved algorithm for incremental induction of decision trees. The selection is a dependable source of data for researchers interested in machine learning.
  • Uncertainty in Artificial Intelligence

    Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, University of Washington, Seattle, July 29-31, 1994
    • 1st Edition
    • MKP
    • English
    Uncertainty in Artificial Intelligence: Proceedings of the Tenth Conference (1994) covers the papers accepted for presentation at the Tenth Annual Conference on Uncertainty in Artificial Intelligence, held in Seattle, Washington, on July 29-31, 1994. The book focuses on the processes, methodologies, and approaches involved in artificial intelligence, including approximations, computational methods, Bayesian networks, and probabilistic inference. The selection first offers information on ending-based strategies for part-of-speech tagging; an evaluation of an algorithm for inductive learning of Bayesian belief networks using simulated data sets; and probabilistic constraint satisfaction with non-Gaussian noise. The text then examines Laplace's method approximations for probabilistic inference in belief networks with continuous variables; computational methods, bounds, and applications of counterfactual probabilities; and approximation algorithms for the loop cutset problem. The book takes a look at learning in multi-level stochastic games with delayed information; properties of Bayesian belief network learning algorithms; and the relation between kappa calculus and probabilistic reasoning. The manuscript also elaborates on intercausal independence and heterogeneous factorization; evidential reasoning with conditional belief functions; and state-space abstraction for anytime evaluation of probabilistic networks. The selection is a valuable reference for researchers interested in artificial intelligence.
  • Case-Based Reasoning

    • 1st Edition
    • Janet Kolodner
    • English
    Case-based reasoning is one of the fastest-growing areas in the field of knowledge-based systems, and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems are systems that store information about situations in their memory. As new problems arise, similar situations are searched out to help solve these problems. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions when inferences can't be made. This book presents the state of the art in case-based reasoning. The author synthesizes and analyzes a broad range of approaches, with special emphasis on applying case-based reasoning to complex real-world problem-solving tasks such as medical diagnosis, design, conflict resolution, and planning. The author's approach combines cognitive science and engineering, and is based on analysis of both expert and common-sense tasks. Guidelines for building case-based expert systems are provided, such as how to represent knowledge in cases, how to index cases for accessibility, how to implement retrieval processes for efficiency, and how to adapt old solutions to fit new situations. This book is an excellent text for courses and tutorials on case-based reasoning. It is also a useful resource for computer professionals and cognitive scientists interested in learning more about this fast-growing field.
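The retrieve-compare-adapt cycle described in this blurb can be sketched minimally in Python. Everything here (the `Case` class, the toy feature-overlap similarity, the example medical cases) is an illustrative assumption for this sketch, not material from Kolodner's book:

```python
# Minimal sketch of the case-based reasoning cycle: retrieve the closest
# stored case, reuse its solution, and retain the new case in memory.
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # indexed description of the situation (hypothetical)
    solution: str    # what worked in that situation

def similarity(a: dict, b: dict) -> float:
    """Fraction of matching feature-value pairs; a deliberately toy metric."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def solve(problem: dict, memory: list) -> str:
    # Retrieve: find the closest case in memory.
    best = max(memory, key=lambda c: similarity(problem, c.features))
    # Reuse/adapt: this sketch simply reuses the old solution unchanged;
    # a real system would adapt it to the differences it found.
    solution = best.solution
    # Retain: store the solved problem for future reuse.
    memory.append(Case(problem, solution))
    return solution

memory = [
    Case({"symptom": "fever", "onset": "sudden"}, "test for flu"),
    Case({"symptom": "rash", "onset": "gradual"}, "allergy screening"),
]
print(solve({"symptom": "fever", "onset": "sudden"}, memory))  # → test for flu
```

A production system would replace the flat scan with the indexing and efficient retrieval structures the book devotes whole chapters to; the point of the sketch is only the shape of the cycle.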
  • COLT '89

    Proceedings of the Second Annual Workshop, UC Santa Cruz, California, July 31 - August 2, 1989
    • 1st Edition
    • COLT
    • English
    Computational Learning Theory presents the theoretical issues in machine learning and computational models of learning. This book covers a wide range of problems in concept learning, inductive inference, and pattern recognition. Organized into three parts encompassing 32 chapters, this book begins with an overview of the inductive principle based on weak convergence of probability measures. This text then examines a framework for constructing learning algorithms. Other chapters consider the formal theory of learning, which is learning in the sense of improving computational efficiency as opposed to concept learning. The book also discusses the informed parsimonious (IP) inference that generalizes the compatibility and weighted parsimony techniques, which are most commonly applied in biology. The final chapter deals with the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be given in each, and the goal of the learner is to make as few mistakes as possible. This book is a valuable resource for students and teachers.
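The online prediction setting sketched in that final chapter can be illustrated with the Weighted Majority algorithm of Littlestone and Warmuth, which appeared in this line of work: predict by a weighted vote of a pool of "experts" and demote the ones that err. The three-expert pool and the trial sequence below are made-up illustrations, not data from the proceedings:

```python
# Weighted Majority sketch: predict each trial by weighted vote of experts,
# then multiply the weight of every wrong expert by beta. The learner's
# mistake count is bounded in terms of the best single expert's mistakes.
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """expert_preds: per-trial lists of 0/1 expert predictions."""
    weights = [1.0] * len(expert_preds[0])
    mistakes = 0
    for preds, outcome in zip(expert_preds, outcomes):
        vote_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_1 >= vote_0 else 0
        if guess != outcome:
            mistakes += 1
        # Demote every expert that was wrong on this trial.
        weights = [w * beta if p != outcome else w
                   for w, p in zip(weights, preds)]
    return mistakes

# Three hypothetical experts; expert 0 is always right on this sequence.
preds = [[1, 0, 1], [0, 0, 1], [1, 1, 0]]
outcomes = [1, 0, 1]
print(weighted_majority(preds, outcomes))  # → 0
```

Because wrong experts lose weight geometrically, the vote is quickly dominated by the most reliable experts, which is what keeps the learner's mistakes close to those of the best expert in hindsight.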