Computer Architecture - Behrooz Parhami

Computer Architecture

From Microprocessors to Supercomputers

(Author)

Book | Hardcover
576 pages
2005
Oxford University Press Inc (publisher)
978-0-19-515455-9 (ISBN)
215.70 incl. VAT
Designed for the first course in computer architecture, usually offered at the junior/senior (3rd/4th year) level in electrical engineering, computer science, or computer engineering departments. This text provides a comprehensive introduction to computer architecture, covering topics from the design of simple microprocessors to advanced parallel architectures.
PART I sets the stage, provides context, reviews prerequisite topics, and gives a taste of what is to come in the rest of the book. Included are two refresher-type chapters on digital circuits and components, a discussion of types of computer systems, an overview of digital computer technology, and a detailed perspective on computer system performance.

PART II lays out the user's interface to computer hardware, known as the instruction-set architecture (ISA). For better understanding, the instruction set of MiniMIPS (a simplified, yet very realistic, machine for which open reference material and simulation tools exist) is described. Included is a chapter on variations in ISA (e.g., RISC vs. CISC) and the associated cost/performance trade-offs.

The next two parts cover the central processing unit (CPU). PART III describes the structure of arithmetic/logic units (ALUs) in some detail. Included are discussions of fixed- and floating-point number representations, design of high-speed adders, shift and logical operations, and hardware multipliers/dividers. Implementation aspects and pitfalls of floating-point arithmetic are also discussed.

PART IV is devoted to the data path and control circuits comprising the CPU. Beginning with instruction execution steps, the needed components and control mechanisms are derived. These are followed by an exposition of control design strategies, the use of a pipelined data path for performance enhancement, and the various limitations of pipelining due to data and control dependencies.

PART V is concerned with the memory system. The technologies in use for primary and secondary memories are described, along with their strengths and limitations. It is shown how the use of cache memories effectively bridges the speed gap between CPU and main memory. Similarly, the use of virtual memory to provide the illusion of a vast main memory is explained.

PART VI deals with input/output and interfacing topics. A discussion of I/O device technologies is followed by methods of I/O programming and the roles of buses and links (including standards) in I/O communication and interfacing. Elements of processes and context switching, for exception handling or multithreaded computation, are also covered.

PART VII introduces advanced architectures. An overview of performance enhancement strategies, beyond simple pipelining, is presented, and examples of applications requiring higher performance are cited. These are followed by design strategies and example architectures based on vector or array processing, multiprocessing, and multicomputing.
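As a taste of the performance analysis covered in Part I, Amdahl's law bounds the overall speedup obtainable when only a fraction of a program's run time benefits from an enhancement. A minimal sketch (function name and parameters are illustrative, not from the book):

```python
def amdahl_speedup(enhanced_fraction, enhancement_factor):
    """Overall speedup per Amdahl's law: a fraction f of run time
    is sped up by factor p; the remaining (1 - f) is unchanged."""
    return 1.0 / ((1.0 - enhanced_fraction) + enhanced_fraction / enhancement_factor)

# Speeding up 80% of a program by a factor of 10 yields well under 10x overall,
# because the untouched 20% dominates the remaining run time.
print(f"{amdahl_speedup(0.8, 10):.2f}x")
```

Even as the enhancement factor grows without bound, the speedup here can never exceed 1/(1 - 0.8) = 5x, which is the central lesson of Amdahl's law.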

Behrooz Parhami is Professor of Computer Engineering at the University of California, Santa Barbara. He has written several textbooks, including Computer Arithmetic (OUP, 2000), and more than 200 research papers. He is a fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS). He is a member of the Association for Computing Machinery (ACM), and a distinguished member of the Informatics Society of Iran, for which he served as a founding member and the first president.

Preface
PART 1: BACKGROUND AND MOTIVATION
1. Combinational Digital Circuits
1.1.: Signals, Logic Operators, and Gates
1.2.: Boolean Functions and Expressions
1.3.: Designing Gate Networks
1.4.: Useful Combinational Parts
1.5.: Programmable Combinational Parts
1.6.: Timing and Circuit Considerations
2. Digital Circuits with Memory
2.1.: Latches, Flip-Flops, and Registers
2.2.: Finite-State Machines
2.3.: Designing Sequential Circuits
2.4.: Useful Sequential Parts
2.5.: Programmable Sequential Parts
2.6.: Clocks and Timing of Events
3. Computer System Technology
3.1.: From Components to Applications
3.2.: Computer Systems and Their Parts
3.3.: Generations of Progress
3.4.: Processor and Memory Technologies
3.5.: Peripherals, I/O, and Communications
3.6.: Software Systems and Applications
4. Computer Performance
4.1.: Cost, Performance, and Cost/Performance
4.2.: Defining Computer Performance
4.3.: Performance Enhancement and Amdahl's Law
4.4.: Performance Measurement vs. Modeling
4.5.: Reporting Computer Performance
4.6.: The Quest for Higher Performance
PART 2: INSTRUCTION-SET ARCHITECTURE
5. Instructions and Addressing
5.1.: Abstract View of Hardware
5.2.: Instruction Formats
5.3.: Simple Arithmetic and Logic Instructions
5.4.: Load and Store Instructions
5.5.: Jump and Branch Instructions
5.6.: Addressing Modes
6. Procedures and Data
6.1.: Simple Procedure Calls
6.2.: Using the Stack for Data Storage
6.3.: Parameters and Results
6.4.: Data Types
6.5.: Arrays and Pointers
6.6.: Additional Instructions
7. Assembly Language Programs
7.1.: Machine and Assembly Languages
7.2.: Assembler Directives
7.3.: Pseudoinstructions
7.4.: Macroinstructions
7.5.: Linking and Loading
7.6.: Running Assembler Programs
8. Instruction-Set Variations
8.1.: Complex Instructions
8.2.: Alternative Addressing Modes
8.3.: Variations in Instruction Formats
8.4.: Instruction Set Design and Evolution
8.5.: The RISC/CISC Dichotomy
8.6.: Where to Draw the Line
PART 3: THE ARITHMETIC/LOGIC UNIT
9. Number Representation
9.1.: Positional Number Systems
9.2.: Digit Sets and Encodings
9.3.: Number-Radix Conversion
9.4.: Signed Integers
9.5.: Fixed-Point Numbers
9.6.: Floating-Point Numbers
10. Adders and Simple ALUs
10.1.: Simple Adders
10.2.: Carry Propagation Networks
10.3.: Counting and Incrementation
10.4.: Design of Fast Adders
10.5.: Logic and Shift Operations
10.6.: Multifunction ALUs
11. Multipliers and Dividers
11.1.: Shift-Add Multiplication
11.2.: Hardware Multipliers
11.3.: Programmed Multiplication
11.4.: Shift-Subtract Division
11.5.: Hardware Dividers
11.6.: Programmed Division
12. Floating-Point Arithmetic
12.1.: Rounding Modes
12.2.: Special Values and Exceptions
12.3.: Floating-Point Addition
12.4.: Other Floating-Point Operations
12.5.: Floating-Point Instructions
12.6.: Result Precision and Errors
PART 4: DATA PATH AND CONTROL
13. Instruction Execution Steps
13.1.: A Small Set of Instructions
13.2.: The Instruction Execution Unit
13.3.: A Single-Cycle Data Path
13.4.: Branching and Jumping
13.5.: Deriving the Control Signals
13.6.: Performance of the Single-Cycle Design
14. Control Unit Synthesis
14.1.: A Multicycle Implementation
14.2.: Clock Cycle and Control Signals
14.3.: The Control State Machine
14.4.: Performance of the Multicycle Design
14.5.: Microprogramming
14.6.: Dealing with Exceptions
15. Pipelined Data Paths
15.1.: Pipelining Concepts
15.2.: Pipeline Stalls or Bubbles
15.3.: Pipeline Timing and Performance
15.4.: Pipelined Data Path Design
15.5.: Pipelined Control
15.6.: Optimal Pipelining
16. Pipeline Performance Limits
16.1.: Data Dependencies and Hazards
16.2.: Data Forwarding
16.3.: Pipeline Branch Hazards
16.4.: Branch Prediction
16.5.: Advanced Pipelining
16.6.: Exceptions in a Pipeline
PART 5: MEMORY SYSTEM DESIGN
17. Main Memory Concepts
17.1.: Memory Structure and SRAM
17.2.: DRAM and Refresh Cycles
17.3.: Hitting the Memory Wall
17.4.: Pipelined and Interleaved Memory
17.5.: Nonvolatile Memory
17.6.: The Need for a Memory Hierarchy
18. Cache Memory Organization
18.1.: The Need for a Cache
18.2.: What Makes a Cache Work?
18.3.: Direct-Mapped Cache
18.4.: Set-Associative Cache
18.5.: Cache and Main Memory
18.6.: Improving Cache Performance
19. Mass Memory Concepts
19.1.: Disk Memory Basics
19.2.: Organizing Data on Disk
19.3.: Disk Performance
19.4.: Disk Caching
19.5.: Disk Arrays and RAID
19.6.: Other Types of Mass Memory
20. Virtual Memory and Paging
20.1.: The Need for Virtual Memory
20.2.: Address Translation in Virtual Memory
20.3.: Translation Lookaside Buffer
20.4.: Page Replacement Policies
20.5.: Main and Mass Memories
20.6.: Improving Virtual Memory Performance
PART 6: INPUT/OUTPUT AND INTERFACING
21. Input/Output Devices
21.1.: Input/Output Devices and Controllers
21.2.: Keyboard and Mouse
21.3.: Visual Display Units
21.4.: Hard-Copy Input/Output Devices
21.5.: Other Input/Output Devices
21.6.: Networking of Input/Output Devices
22. Input/Output Programming
22.1.: I/O Performance and Benchmarks
22.2.: Input/Output Addressing
22.3.: Scheduled I/O: Polling
22.4.: Demand-Based I/O: Interrupts
22.5.: I/O Data Transfer and DMA
22.6.: Improving I/O Performance
23. Buses, Links, and Interfacing
23.1.: Intra- and Intersystem Links
23.2.: Buses and Their Appeal
23.3.: Bus Communication Protocols
23.4.: Bus Arbitration and Performance
23.5.: Basics of Interfacing
23.6.: Interfacing Standards
24. Context Switching and Interrupts
24.1.: System Calls for I/O
24.2.: Interrupts, Exceptions, and Traps
24.3.: Simple Interrupt Handling
24.4.: Nested Interrupts
24.5.: Types of Context Switching
24.6.: Threads and Multithreading
PART 7: ADVANCED ARCHITECTURES
25. Road to Higher Performance
25.1.: Past and Current Performance Trends
25.2.: Performance-Driven ISA Extensions
25.3.: Instruction-Level Parallelism
25.4.: Speculation and Value Prediction
25.5.: Special-Purpose Hardware Accelerators
25.6.: Vector, Array, and Parallel Processing
26. Vector and Array Processing
26.1.: Operations on Vectors
26.2.: Vector Processor Implementation
26.3.: Vector Processor Performance
26.4.: Shared-Control Systems
26.5.: Array Processor Implementation
26.6.: Array Processor Performance
27. Shared-Memory Multiprocessing
27.1.: Centralized Shared Memory
27.2.: Multiple Caches and Cache Coherence
27.3.: Implementing Symmetric Multiprocessors
27.4.: Distributed Shared Memory
27.5.: Directories to Guide Data Access
27.6.: Implementing Asymmetric Multiprocessors
28. Distributed Multicomputing
28.1.: Communication by Message Passing
28.2.: Interconnection Networks
28.3.: Message Composition and Routing
28.4.: Building and Using Multicomputers
28.5.: Network-Based Distributed Computing
28.6.: Grid Computing and Beyond
Index

Publication date (per publisher) 17 March 2005
Series The Oxford Series in Electrical and Computer Engineering
Additional information numerous line illustrations and tables
Place of publication New York
Language English
Dimensions 236 x 198 mm
Weight 1143 g
Subject areas Mathematics / Computer Science > Computer Science > Theory / Studies
Engineering > Electrical Engineering / Power Engineering
ISBN-10 0-19-515455-X / 019515455X
ISBN-13 978-0-19-515455-9 / 9780195154559
Condition New