Published: June 2009 · Publisher: 中國(guó)科學(xué)技術(shù)大學(xué)出版社 (University of Science and Technology of China Press) · Authors: 李衛(wèi)平 (Li Weiping), 李世鵬 (Li Shipeng), 王純 (Wang Chun) · Pages: 153
Preface
The most important function of a university is to supply society with talent. The importance and contribution of a university to a country, a nation, and even the world are largely reflected in the achievements of its graduates across all fields of society. Although the University of Science and Technology of China (USTC) was founded only 50 years ago, it has quickly become one of the well-known universities with a high international reputation, mainly because it has trained a large number of outstanding graduates of both character and ability. With lofty aspirations, solid foundations, high overall quality, and strong capacity for innovation, they have made outstanding contributions in science and technology, economics, education, and other fields at home and abroad, earning USTC its reputation as a "cradle of scientific and technological talent."

In September 2008, General Secretary Hu Jintao sent a congratulatory letter for USTC's 50th anniversary, praising the university: over the past half century, relying on the Chinese Academy of Sciences and following the principle of "the whole Academy running the university, with institutes and departments integrated," USTC has carried forward its tradition of being both "red and expert" and of combining theory with practice, vigorously promoted reform and innovation in teaching and research, trained a large number of scientific and technological talents for the Party and the nation, achieved a series of original, world-class scientific and technological results, and made important contributions to the development of China's science and education and to socialist modernization.

Statistics show that among the roughly 50,000 USTC graduates to date, 42 have been elected academicians of the Chinese Academy of Sciences or the Chinese Academy of Engineering, making USTC one of the universities with the most academicians elected among graduates of the same period (since 1963). On average, every 1,000 undergraduate alumni have produced 1 academician and more than 700 master's and doctoral degree holders, the highest ratio among Chinese universities. Many young and middle-aged talents have become leaders and backbones in China's science and technology, business, and education sectors. Among winners of the annual "China Youth May Fourth Medal," USTC graduates, as representatives of young talent in science and technology and in innovative technology enterprises, have appeared on the list for many consecutive years, with a total number of winners ranking among the top universities nationwide. Less well known is that thousands of outstanding graduates have joined the national defense front and made important contributions to strengthening the military through science and technology, among them more than 20 "science and technology generals" and a large number of core figures in defense science and technology.
Overview
This book explains the basic concepts and key technologies of video. Besides reviewing the background and history of video technology, it focuses on the fundamental principles and methods of video coding and communication, and discusses related standards, application systems, and future research directions. The book consists of eight parts: introduction, early video technology, analog video signals, digital video signals, video coding, video coding standards, video applications, and an outlook on the future. It can serve as an introductory textbook on video for senior undergraduate and graduate students in information science and technology, and as a reference for researchers and developers working in the field.
Table of Contents
Preface to the USTC Alumni's Series
Preface
1 Introduction
2 Early Days of Video Technology
3 Analog Video
  3.1 Difference between Video and Film
  3.2 Interlaced Scanning
4 Digitized Video
  4.1 Video Sampling
  4.2 Pixel Quantization
  4.3 Color Sub-sampling
5 Video Coding
  5.1 Basic Principles
  5.2 Three Stage Model
  5.3 Signal Processing for Compression
    5.3.1 Prediction
    5.3.2 Motion Compensated Prediction
    5.3.3 Linear Transforms
    5.3.4 Motion Compensated Transformations
  5.4 Quantization
    5.4.1 Scalar Quantization
    5.4.2 Vector Quantization
  5.5 Entropy Coding
    5.5.1 Huffman Coding
    5.5.2 Arithmetic Coding
    5.5.3 Lempel-Ziv Coding
  5.6 Predictive Coding with Quantization
  5.7 Symbol Formation
    5.7.1 Run-Length Coding (RLC)
    5.7.2 Zigzag Scanning
    5.7.3 Zerotree Coding
    5.7.4 Context Formation
  5.8 Bit Allocation and Rate Control
  5.9 Pre-processing and Post-processing
  5.10 Fractal Compression
  5.11 Scalable Video Coding
  5.12 Transcoding
  5.13 Object Based Video Coding
  5.14 Model-Based Coding
  5.15 Hybrid Natural and Synthetic Coding
  5.16 Error Resilient Video Coding
6 Video Coding Standards
  6.1 H.261
  6.2 H.263
  6.3 MPEG-1
  6.4 MPEG-2/H.262
  6.5 MPEG-4 Part 2
  6.6 H.264/MPEG-4 Part 10/AVC
  6.7 AVS
  6.8 H.264/MPEG-4 AVC Scalable Extension
    6.8.1 Hierarchical Coding Structure and Temporal Scalability
    6.8.2 Spatial Scalability
    6.8.3 SNR Scalability
7 Applications
  7.1 Good Old Analog Television
  7.2 Video Tape
  7.3 Video CD, DVD, HD-DVD, and Blu-ray
  7.4 Digital Television
  7.5 Video over IP
    7.5.1 Technical Challenges and Possible Solutions
    7.5.2 Bring Old Applications to New Level
8 Look into Future
  8.1 New Types of Video Contents
  8.2 New Video Coding Trends
  8.3 Video Delivery
  8.4 Mobile Media
  8.5 Video Content Analysis and Search
  8.6 Internet Video -- Media 2.0
    8.6.1 Democratized media life cycle
    8.6.2 Data-driven media value chain
    8.6.3 Decoupled media system
    8.6.4 Decomposed media contents
    8.6.5 Decentralized media business model
  8.7 Going Back to Basics
References
Excerpt
The prediction or transformation techniques discussed so far all aim at de-correlating highly correlated neighboring signal samples. They are especially effective for spatial neighbors within video frames. Video is a set of temporal samples that capture a moving scene across time. In a typical scene, there is a great deal of similarity or correlation between neighboring frames of the same sequence. However, directly extending the above de-correlating techniques to the temporal direction would not always be very effective. The direct temporal neighboring samples that are spatially collocated are not necessarily highly correlated because of the motion in the video sequence. Rather, the pixels in neighboring frames that are more correlated lie along the motion trajectory or optical flow, and there usually exists a spatial displacement between them. Therefore, an efficient signal decorrelation scheme in the temporal direction should always operate on pixels along the same motion trajectory. Such a spatial displacement of visual elements (pixels, blocks, or objects) is called a motion vector, and the process of determining how objects move from one frame to another, i.e., finding the motion vector, is called motion estimation. The operation of de-correlating samples in different frames using motion information is called motion compensation. The concept of motion estimation and compensation is illustrated in Figure 11.
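To make the idea concrete, the following is a minimal sketch of full-search block matching, the simplest form of motion estimation, written with NumPy. The function names, block size, and search range are illustrative choices, not anything specified by the book: for each candidate displacement within a small window, the sum of absolute differences (SAD) between the current block and the displaced reference block is computed, and the displacement with the lowest SAD is taken as the motion vector. Motion compensation then simply copies the reference block at that displacement as the prediction.

```python
import numpy as np

def block_matching_mv(ref, cur, top, left, block=8, search=4):
    """Full-search block matching: find the displacement (dy, dx) within
    +/- search pixels that minimizes the SAD between the block of the
    current frame at (top, left) and the displaced block in the reference."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_mv, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidates that fall outside the reference frame.
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def motion_compensate(ref, top, left, mv, block=8):
    """Motion compensation: predict the current block by copying the
    reference block displaced by the motion vector."""
    dy, dx = mv
    return ref[top + dy:top + dy + block, left + dx:left + dx + block]
```

In a real encoder the residual (current block minus motion-compensated prediction) is what gets transformed, quantized, and entropy coded; full search is often replaced by faster patterns (e.g. diamond search), but the decorrelation principle along the motion trajectory is the same.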