Neural Networks and Learning Machines

Published: March 2009  Publisher: China Machine Press  Author: Simon Haykin (Canada)  Pages: 906

Preface

In writing this third edition of a classic book, I have been guided by the same underlying philosophy of the first edition of the book: Write an up-to-date treatment of neural networks in a comprehensive, thorough, and readable manner. The new edition has been retitled Neural Networks and Learning Machines, in order to reflect two realities: 1. The perceptron, the multilayer perceptron, self-organizing maps, and neurodynamics, to name a few topics, have always been considered integral parts of neural networks, rooted in ideas inspired by the human brain. 2. Kernel methods, exemplified by support vector machines and kernel principal-components analysis, are rooted in statistical learning theory. Although, indeed, they share many fundamental concepts and applications, there are some subtle differences between the operations of neural networks and learning machines. The underlying subject matter is therefore much richer when they are studied together, under one umbrella, particularly so when ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either one operating on its own, and ideas inspired by the human brain lead to new perspectives wherever they are of particular importance.

Overview

Neural networks are an important branch of computational intelligence and machine learning, and they have achieved great success in many fields. Among the many books on neural networks, the most influential has been Simon Haykin's Neural Networks: A Comprehensive Foundation (retitled Neural Networks and Learning Machines in this third edition). Drawing on recent advances in neural networks and machine learning, the author introduces the basic models, methods, and techniques of neural networks comprehensively and systematically, from both theoretical and practical standpoints, and integrates neural networks and machine learning into a unified treatment. The book attends not only to mathematical analysis and theory but also to applications of neural networks in practical engineering problems such as pattern recognition, signal processing, and control systems. It is highly readable: the author examines the basic models and principal learning theories of neural networks in depth yet with a light touch, and a wealth of computer experiments, worked examples, and problems help the reader master the material. This edition has been extensively revised and provides an up-to-date treatment of these two increasingly important subjects.

Highlights:
  •   On-line learning algorithms based on stochastic gradient descent; small-scale and large-scale learning problems.
  •   Kernel methods, including support vector machines and the representer theorem.
  •   Information-theoretic learning models, including copulas, independent-components analysis (ICA), coherent ICA, and the information bottleneck.
  •   Stochastic dynamic programming, including approximate and neurodynamic programming.
  •   Sequential state-estimation algorithms, including Kalman and particle filters.
  •   Recurrent neural networks trained with sequential state-estimation algorithms.
  •   Insightful computer-oriented experiments.

About the Author

Simon Haykin,于1953年獲得英國伯明翰大學(xué)博士學(xué)位,目前為加拿大McMaster大學(xué)電子與計(jì)算機(jī)工程系教授、通信研究實(shí)驗(yàn)室主任。他是國際電子電氣工程界的著名學(xué)者,曾獲得IEEE McNaughton金獎(jiǎng)。他是加拿大皇家學(xué)會(huì)院士、IEEE會(huì)士,在神經(jīng)網(wǎng)絡(luò)、通信、自適應(yīng)濾波器等領(lǐng)域成果頗

Table of Contents

Preface
Acknowledgements
Abbreviations and Symbols
Glossary
Introduction
  1 What Is a Neural Network?
  2 The Human Brain
  3 Models of a Neuron
  4 Neural Networks Viewed As Directed Graphs
  5 Feedback
  6 Network Architectures
  7 Knowledge Representation
  8 Learning Processes
  9 Learning Tasks
  10 Concluding Remarks
  Notes and References
Chapter 1 Rosenblatt's Perceptron
  1.1 Introduction
  1.2 Perceptron
  1.3 The Perceptron Convergence Theorem
  1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
  1.5 Computer Experiment: Pattern Classification
  1.6 The Batch Perceptron Algorithm
  1.7 Summary and Discussion
  Notes and References
  Problems
Chapter 2 Model Building through Regression
  2.1 Introduction
  2.2 Linear Regression Model: Preliminary Considerations
  2.3 Maximum a Posteriori Estimation of the Parameter Vector
  2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
  2.5 Computer Experiment: Pattern Classification
  2.6 The Minimum-Description-Length Principle
  2.7 Finite Sample-Size Considerations
  2.8 The Instrumental-Variables Method
  2.9 Summary and Discussion
  Notes and References
  Problems
Chapter 3 The Least-Mean-Square Algorithm
  3.1 Introduction
  3.2 Filtering Structure of the LMS Algorithm
  3.3 Unconstrained Optimization: A Review
  3.4 The Wiener Filter
  3.5 The Least-Mean-Square Algorithm
  3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
  3.7 The Langevin Equation: Characterization of Brownian Motion
  3.8 Kushner's Direct-Averaging Method
  3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
  3.10 Computer Experiment I: Linear Prediction
  3.11 Computer Experiment II: Pattern Classification
  3.12 Virtues and Limitations of the LMS Algorithm
  3.13 Learning-Rate Annealing Schedules
  3.14 Summary and Discussion
  Notes and References
  Problems
Chapter 4 Multilayer Perceptrons
  4.1 Introduction
  4.2 Some Preliminaries
  4.3 Batch Learning and On-Line Learning
  4.4 The Back-Propagation Algorithm
  4.5 XOR Problem
  4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
  4.7 Computer Experiment: Pattern Classification
  4.8 Back Propagation and Differentiation
  4.9 The Hessian and Its Role in On-Line Learning
  4.10 Optimal Annealing and Adaptive Control of the Learning Rate
  4.11 Generalization
  4.12 Approximations of Functions
  4.13 Cross-Validation
  4.14 Complexity Regularization and Network Pruning
  4.15 Virtues and Limitations of Back-Propagation Learning
  4.16 Supervised Learning Viewed as an Optimization Problem
  4.17 Convolutional Networks
  4.18 Nonlinear Filtering
  4.19 Small-Scale Versus Large-Scale Learning Problems
  4.20 Summary and Discussion
  Notes and References
  Problems
Chapter 5 Kernel Methods and Radial-Basis Function Networks
  5.1 Introduction
  5.2 Cover's Theorem on the Separability of Patterns
  5.3 The Interpolation Problem
  5.4 Radial-Basis-Function Networks
  5.5 K-Means Clustering
  5.6 Recursive Least-Squares Estimation of the Weight Vector
  5.7 Hybrid Learning Procedure for RBF Networks
  5.8 Computer Experiment: Pattern Classification
  5.9 Interpretations of the Gaussian Hidden Units
  5.10 Kernel Regression and Its Relation to RBF Networks
  5.11 Summary and Discussion
  Notes and References
  Problems
Chapter 6 Support Vector Machines
Chapter 7 Regularization Theory
Chapter 8 Principal-Components Analysis
Chapter 9 Self-Organizing Maps
Chapter 10 Information-Theoretic Learning Models
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
Chapter 12 Dynamic Programming
Chapter 13 Neurodynamics
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
Chapter 15 Dynamically Driven Recurrent Networks
Bibliography
Index

Excerpt

Illustration: …knowledge, the teacher is able to provide the neural network with a desired response for that training vector. Indeed, the desired response represents the "optimum" action to be performed by the neural network. The network parameters are adjusted under the combined influence of the training vector and the error signal. The error signal is defined as the difference between the desired response and the actual response of the network. This adjustment is carried out iteratively in a step-by-step fashion with the aim of eventually making the neural network emulate the teacher; the emulation is presumed to be optimum in some statistical sense. In this way, knowledge of the environment available to the teacher is transferred to the neural network through training and stored in the form of "fixed" synaptic weights, representing long-term memory. When this condition is reached, we may then dispense with the teacher and let the neural network deal with the environment completely by itself.

The form of supervised learning we have just described is the basis of error-correction learning. From Fig. 24, we see that the supervised-learning process constitutes a closed-loop feedback system, but the unknown environment is outside the loop. As a performance measure for the system, we may think in terms of the mean-square error, or the sum of squared errors over the training sample, defined as a function of the free parameters (i.e., synaptic weights) of the system. This function may be visualized as a multidimensional error-performance surface, or simply error surface, with the free parameters as coordinates. The true error surface is averaged over all possible input-output examples. Any given operation of the system under the teacher's supervision is represented as a point on the error surface. For the system to improve performance over time and therefore learn from the teacher, the operating point has to move down successively toward a minimum point of the error surface; the minimum point may be a local minimum or a global minimum. A supervised learning system is able to do this with the useful information it has about the gradient of the error surface corresponding to the current behavior of the system.
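The error-correction loop described in the excerpt can be sketched in a few lines. The following is a minimal illustration, not code from the book: a linear neuron plays the learner, a fixed weight vector plays the teacher, and the LMS-style update w ← w + η·e·x moves the operating point down the error surface, where the error signal e is the desired response minus the actual response. All names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

w_true = np.array([2.0, -1.0])  # "teacher": the unknown target weights
w = np.zeros(2)                 # learner's free parameters (synaptic weights)
eta = 0.1                       # learning-rate parameter

for _ in range(500):
    x = rng.normal(size=2)      # training vector drawn from the environment
    d = w_true @ x              # desired response supplied by the teacher
    y = w @ x                   # actual response of the network
    e = d - y                   # error signal
    w += eta * e * x            # gradient step down the error surface

print(np.round(w, 3))           # final weights; should be close to w_true
```

Because each update follows the (stochastic) gradient of the squared error, the operating point descends toward the minimum of the error surface, and the learned weights converge to the teacher's.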

Editor's Recommendation

Neural Networks and Learning Machines (English edition, 3rd ed.) features the highlights listed in the overview above: on-line learning algorithms based on stochastic gradient descent; small-scale and large-scale learning problems; kernel methods, including support vector machines and the representer theorem; information-theoretic learning models, including copulas, ICA, coherent ICA, and the information bottleneck; stochastic dynamic programming, including approximate and neurodynamic programming; sequential state-estimation algorithms, including Kalman and particle filters; recurrent networks trained with sequential state estimation; and insightful computer-oriented experiments.



User Reviews (20 in total)

 
 

  •   I had been looking for the English edition of this book for a long time; the Chinese translation, Neural Networks: A Comprehensive Foundation, is torture to read. This is the new edition, very good. We bought four copies at once. P.S. The paper size is just too small: 15 cm x 21.4 cm.
  •   Very professional, but it demands a strong mathematical background; you also need some matrix theory, dynamical systems, functional analysis, and the like.
  •   I've read part of it; there are some printing errors and the type is too small, but overall it's acceptable.
  •   An important reference for studying and researching neural networks; it works even better alongside Mitchell's Machine Learning.
  •   The book is good, but my English isn't; I'll have to brush up, and I still look up some of the technical vocabulary in a reference. Please don't mock me.
  •   The type is too small to read and the paper is poor; I really can't understand how it could be printed this badly. The book itself is average: it isn't exactly accessible, so readers with a weak background should skip it.
  •   The invoice they promised was never issued, which leaves a very bad impression!
  •   A genuine classic, but it's a small-format volume with tiny type, which makes it tiring to read!
  •   The type is far too small to read; the publisher really cheated the readers!
  •   About one-sixth in; it feels good, fairly suitable for beginners.
  •   Very good indeed! It's what I want.
  •   Worth keeping a copy on hand as a reference; it comes in handy at times.
  •   Bought it for 34.5 yuan; it arrived shrink-wrapped, in Amazon's own packaging. The book is thick, over 900 pages; the printing is acceptable, at least better than a pirated copy, though the type is small. Still, far better than reading an e-book. I've only skimmed the table of contents so far; it seems to demand a lot of mathematics. I'll say more after reading it for a while.
  •   I regret buying the English edition: the book is thick and I have too many courses, so I don't have the time to work through it in English!
  •   Haven't read it yet. It should be a master's work.
  •   A small-format book with very small type.
  •   Packaging intact, fast delivery.
  •   Knowledge sometimes matters more than reasoning.
  •   Neural Networks and Learning Machines (English edition, 3rd ed.)
  •   I still need to read it.
 
