Neural Networks

Publication date: August 2006  Publisher: Tsinghua University Press  Author: Satish Kumar (India)  Pages: 736
Tags: none

Overview

This book offers a comprehensive, systematic introduction to the basic models, methods, and techniques of neural networks from both theoretical and practical perspectives, covering neuroscience, statistical pattern recognition, support vector machines, fuzzy systems, soft computing, and dynamical systems. It examines the fundamental neural network models in depth and surveys the latest development trends and major research directions in the field. Every chapter includes numerous worked examples and exercises, and each model comes with practical application examples and detailed MATLAB code, making this an excellent neural networks textbook. It is suitable as a textbook for graduate students or senior undergraduates in related fields, and as a reference for researchers working on neural networks.

About the Author

Author: Satish Kumar (India)

Table of Contents

Foreword
Preface
Acknowledgements

Part I  Traces of History and A Neuroscience Briefer

1. Brain Style Computing: Origins and Issues
   1.1 From the Greeks to the Renaissance
   1.2 The Advent of Modern Neuroscience
   1.3 On the Road to Artificial Intelligence
   1.4 Classical AI and Neural Networks
   1.5 Hybrid Intelligent Systems
   Chapter Summary
   Bibliographic Remarks

2. Lessons from Neuroscience
   2.1 The Human Brain
   2.2 Biological Neurons
   Chapter Summary
   Bibliographic Remarks

Part II  Feedforward Neural Networks and Supervised Learning

3. Artificial Neurons, Neural Networks and Architectures
   3.1 Neuron Abstraction
   3.2 Neuron Signal Functions
   3.3 Mathematical Preliminaries
   3.4 Neural Networks Defined
   3.5 Architectures: Feedforward and Feedback
   3.6 Salient Properties and Application Domains of Neural Networks
   Chapter Summary
   Bibliographic Remarks
   Review Questions

4. Geometry of Binary Threshold Neurons and Their Networks
   4.1 Pattern Recognition and Data Classification
   4.2 Convex Sets, Convex Hulls and Linear Separability
   4.3 Space of Boolean Functions
   4.4 Binary Neurons are Pattern Dichotomizers
   4.5 Non-linearly Separable Problems
   4.6 Capacity of a Simple Threshold Logic Neuron
   4.7 Revisiting the XOR Problem
   4.8 Multilayer Networks
   4.9 How Many Hidden Nodes are Enough?
   Chapter Summary
   Bibliographic Remarks
   Review Questions

5. Supervised Learning I: Perceptrons and LMS
   5.1 Learning and Memory
   5.2 From Synapses to Behaviour: The Case of Aplysia
   5.3 Learning Algorithms
   5.4 Error Correction and Gradient Descent Rules
   5.5 The Learning Objective for TLNs
   5.6 Pattern Space and Weight Space
   5.7 Perceptron Learning Algorithm
   5.8 Perceptron Convergence Theorem
   5.9 A Handworked Example and MATLAB Simulation
   5.10 Perceptron Learning and Non-separable Sets
   5.11 Handling Linearly Non-separable Sets
   5.12 α-Least Mean Square Learning
   5.13 MSE Error Surface and its Geometry
   5.14 Steepest Descent Search with Exact Gradient Information
   5.15 μ-LMS: Approximate Gradient Descent
   5.16 Application of LMS to Noise Cancellation
   Chapter Summary
   Bibliographic Remarks
   Review Questions

6. Supervised Learning II: Backpropagation and Beyond
   6.1 Multilayered Network Architectures
   6.2 Backpropagation Learning Algorithm
   6.3 Handworked Example
   6.4 MATLAB Simulation Examples
   6.5 Practical Considerations in Implementing the BP Algorithm
   6.6 Structure Growing Algorithms
   6.7 Fast Relatives of Backpropagation
   6.8 Universal Function Approximation and Neural Networks
   6.9 Applications of Feedforward Neural Networks
   6.10 Reinforcement Learning: A Brief Review
   Chapter Summary
   Bibliographic Remarks
   Review Questions

7. Neural Networks: A Statistical Pattern Recognition Perspective
   7.1 Introduction
   7.2 Bayes' Theorem
   7.3 Two Instructive MATLAB Simulations
   7.4 Implementing Classification Decisions with Bayes' Theorem
   7.5 Probabilistic Interpretation of a Neuron Discriminant Function
   7.6 MATLAB Simulation: Plotting Bayesian Decision Boundaries
   7.7 Interpreting Neuron Signals as Probabilities
   7.8 Multilayered Networks, Error Functions and Posterior Probabilities
   7.9 Error Functions for Classification Problems
   Chapter Summary
   Bibliographic Remarks
   Review Questions

8. Focussing on Generalization: Support Vector Machines and Radial Basis Function Networks
   8.1 Learning From Examples and Generalization
   8.2 Statistical Learning Theory Briefer
   8.3 Support Vector Machines
   8.4 Radial Basis Function Networks
   8.5 Regularization Theory Route to RBFNs
   8.6 Generalized Radial Basis Function Network
   8.7 Learning in RBFNs
   8.8 Image Classification Application
   8.9 Other Models For Valid Generalization
   Chapter Summary
   Bibliographic Remarks
   Review Questions

Part III  Recurrent Neurodynamical Systems

Part IV  Contemporary Topics

Appendix A: Neural Network Hardware
Appendix B: Web Pointers
Bibliography
Index
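Chapter 5 covers the perceptron learning algorithm and its convergence theorem. As a rough illustration of that topic (not taken from the book, whose worked examples are in MATLAB), the sketch below trains a binary threshold neuron on the linearly separable AND function in Python; the function name, learning rate, and epoch limit are all assumptions made for this example:

```python
# Illustrative sketch (not from the book): the perceptron learning rule
# applied to the linearly separable AND function.

def perceptron_train(samples, eta=1.0, epochs=100):
    """Train a binary threshold neuron; samples = [(inputs, target), ...]."""
    n = len(samples[0][0])
    w = [0.0] * n          # weights
    b = 0.0                # bias
    for _ in range(epochs):
        errors = 0
        for x, t in samples:
            # Binary threshold output: fire iff weighted sum exceeds 0
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if y != t:     # update only on misclassification
                for i in range(n):
                    w[i] += eta * (t - y) * x[i]
                b += eta * (t - y)
                errors += 1
        if errors == 0:    # converged: all patterns classified correctly
            break
    return w, b

# AND function: linearly separable, so the convergence theorem applies.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
               for x, _ in data]
print(predictions)  # matches the targets [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem (Section 5.8) guarantees this update loop terminates with every pattern classified correctly.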

Book Cover




User reviews (5 in total)
  •   Great!!!!
  •   Only the delivery was too slow...
  •   I think it's better to read the Chinese edition first and then the English one once you have some foundation.
  •   A classic title; I haven't read it yet, but it should be good.
  •   Ants look so confident as they march along, as if they were following a plan of action. How else could they organize the "highways" of ant society, build such intricate nests, and wage large-scale wars?
       In fact, this view is completely wrong. Ants are not clever engineers, architects, or soldiers, at least not as individuals; most ants have essentially no idea what to do next. Deborah M. Gordon, a biologist at Stanford University, observes that if you watch a single ant try to accomplish anything, you will see how inadequate it is: "Ants aren't smart; ant colonies are."
       An ant colony can solve problems that no individual can: finding the shortest path to the richest food source, assigning workers to different tasks, or defending its territory against invaders. As individuals, ants are tiny and vulnerable, but as a colony they respond to their environment quickly and effectively. Their "weapon" is swarm intelligence.
       Where does the swarm intelligence of ants and bees come from? How do simple individual behaviours give rise to complex collective behaviour? How can hundreds or thousands of uncoordinated bees reach an important decision? What makes a school of herring change direction in an instant? No single individual is in control, and these collective abilities seemed almost miraculous, puzzling biologists for a long time. Over the past few decades, however, researchers have made some intriguing discoveries.
 

