Document Number: 326764-008
Desktop 3rd Generation Intel®
Core™ Processor Family, Desktop
Intel® Pentium® Processor Family,
and Desktop Intel® Celeron®
Processor Family
Datasheet – Volume 1 of 2
November 2013
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED,
BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS
PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER
AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING
LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY
PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
A "Mission Critical Application" is any application in which failure of the Intel Prod uct co uld res ult, dire ctl y or ind i rectly, in personal
injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU
SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS,
OFFICERS, AND EMPLOYEES OF EA CH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE
A T T ORNEYS' FEES ARISING OUT OF, DIRE CTLY OR INDI RECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH
ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS
NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the
absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future
definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The
information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to
deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained
by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm.
No computer system can provide absolute security under all conditions. Intel® Trusted Execution Technology (Intel® TXT) requires
a computer system with Intel® Virtualization Technology, an Intel TXT-enabled processor, chipset, BIOS, Authenticated Code
Modules and an Intel TXT-compatible measured launched environment (MLE). The MLE could consist of a virtual machine monitor,
an OS or an application. In addition, Intel TXT requires the system to contain a TPM v1.2, as defined by the Trusted Computing
Group and specific software for some uses. For more information, see http://www.intel.com/technology/security/
Intel® Virtualization Technology requires a computer system with an enabled Intel® processor, BIOS, virtual machine monitor
(VMM) and, for some uses, certain computer system software enabled for it. Functionality, performance or other benefits will vary
depending on hardware and software configurations and may require a BIOS update. Software applications may not be compatible
with all operating systems. Please check with your application vendor.
Intel® Active Management Technology requires the computer system to have an Intel® AMT-enabled chipset, network hardware
and software, as well as connection with a power source and a corporate network connection. Setup requires configuration by the
purchaser and may require scripting with the management console or further integration into existing security frameworks to
enable certain functionality. It may also require modifications of implementation of new business processes. With regard to
notebooks, Intel AMT may not be available or certain capabilities may be limited over a host OS-based VPN or when connecting
wirelessly, on battery power, sleeping, hibernating or powered off. For more information, see http://www.intel.com/technology/
platform-technology/intel-amt/
Hyper-Threading Technology requires a computer system with a processor supporting HT Technology and an HT Technology-
enabled chipset, BIOS and operating system. Performance will vary depending on the specific hardware and software you use. For
more information including details on which processors support HT Technology, see http://www.intel.com/info/hyperthreading.
Intel® Turbo Boost Technology requires a PC with a processor with Intel Turbo Boost Technology capability. Intel Turbo Boost
Technology performance varies depending on hardware, software and overall system configuration. Check with your PC
manufacturer on whether your system delivers Intel Turbo Boost Technology. For more information, see http://www.intel.com/
technology/turboboost.
Enhanced Intel SpeedStep® Technology: See the Processor Spec Finder or contact your Intel representative for more information.
Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family,
not across different processor families. See www.intel.com/products/processor_number for details.
64-bit computing on Intel architecture requires a computer system with a processor, chipset, BIOS, operating system, device
drivers and applications enabled for Intel® 64 architecture. Performance will vary depending on your hardware and software
configurations. Consult with your system vendor for more information.
Intel, Pentium, Celeron, Intel Core, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2013, Intel Corporation. All rights reserved.
Contents
1 Introduction..............................................................................................................9
1.1 Processor Feature Details ...................................................................................11
1.1.1 Supported Technologies ..........................................................................11
1.2 Interfaces ........................................................................................................11
1.2.1 System Memory Support.........................................................................11
1.2.2 PCI Express* .........................................................................................12
1.2.3 Direct Media Interface (DMI)....................................................................14
1.2.4 Platform Environment Control Interface (PECI)...........................................14
1.2.5 Processor Graphics.................................................................................14
1.2.6 Intel® Flexible Display Interface (Intel® FDI).............. ..................... .. .. ......15
1.3 Power Management Support ...............................................................................15
1.3.1 Processor Core.......................................................................................15
1.3.2 System.................................................................................................15
1.3.3 Memory Controller................. ..................... .. .. .. ..................... ... .. .. ..........15
1.3.4 PCI Express* .........................................................................................16
1.3.5 Direct Media Interface (DMI)....................................................................16
1.3.6 Processor Graphics Controller (GT) ...........................................................16
1.3.7 Thermal Management Support .................................................................16
1.4 Processor SKU Definitions...................................................................................16
1.5 Package...........................................................................................................17
1.6 Processor Compatibility......................................................................................18
1.7 Terminology .....................................................................................................19
1.8 Related Documents ...........................................................................................22
2 Interfaces................................................................................................................23
2.1 System Memory Interface ..................................................................................23
2.1.1 System Memory Technology Supported.....................................................23
2.1.2 System Memory Timing Support...............................................................24
2.1.3 System Memory Organization Modes.........................................................25
2.1.3.1 Single-Channel Mode.....................................................................25
2.1.3.2 Dual-Channel Mode – Intel® Flex Memory Technology Mode ........... 25
2.1.4 Rules for Populating Memory Slots............................................................26
2.1.5 Technology Enhancements of Intel® Fast Memory Access (Intel® FMA)..........27
2.1.5.1 Just-in-Time Command Scheduling........ .. ... .. .. ............ ... .. ............27
2.1.5.2 Command Overlap....................................................................27
2.1.5.3 Out-of-Order Scheduling............................................................27
2.1.6 Data Scrambling ....................................................................................27
2.1.7 DDR3 Reference Voltage Generation..........................................................27
2.2 PCI Express* Interface.......................................................................................28
2.2.1 PCI Express* Architecture .......................................................................28
2.2.1.1 Transaction Layer .....................................................................29
2.2.1.2 Data Link Layer ........................................................................29
2.2.1.3 Physical Layer ..........................................................................29
2.2.2 PCI Express* Configuration Mechanism .....................................................30
2.2.3 PCI Express* Port...................................................................................31
2.2.3.1 PCI Express* Lanes Connection ..................................................31
2.3 Direct Media Interface (DMI)...............................................................................32
2.3.1 DMI Error Flow.............. .......... .. .. .. ..................... ... .. .. .............................32
2.3.2 Processor / PCH Compatibility Assumptions................................................32
2.3.3 DMI Link Down .......... ........... .. .. ........... .. .. .. ........... .. .. .......... .. ... .......... .. ..32
2.4 Processor Graphics Controller (GT) ......................................................................33
2.4.1 3D and Video Engines for Graphics Processing ............................................33
2.4.1.1 3D Engine Execution Units..........................................................33
2.4.1.2 3D Pipeline........ .. .......... .. .. ........... .. .. ........... .. .. .......... ... .......... ..34
2.4.1.3 Video Engine ............................................................................34
2.4.1.4 2D Engine ................... .. .. ........... .. .......... ... .......... .. .. ........... .. ....35
2.4.2 Processor Graphics Display ......................................................................36
2.4.2.1 Display Planes ................. ........... .. .. ........... .. .. ..................... .. .. ..36
2.4.2.2 Display Pipes............. .. .. ........... .. .. .......... ... .. .......... .. .. ........... .. ..37
2.4.2.3 Display Ports ............... ..................... .. .. ........... .. .. ........... .. .. ......37
2.4.3 Intel® Flexible Display Interface (Intel® FDI) .............................................37
2.4.4 Multi Graphics Controllers Multi-Monitor Support.........................................37
2.5 Platform Environment Control Interface (PECI) ......................................................38
2.6 Interface Clocking..............................................................................................38
2.6.1 Internal Clocking Requirements................................................................38
3 Technologies............................................................................................................39
3.1 Intel® Virtualization Technology (Intel® VT)..........................................................39
3.1.1 Intel® Virtualization Technology (Intel® VT) for
IA-32, Intel® 64 and Intel® Architecture
(Intel® VT-x) Objectives........... .. .. ........... .. .. .......... ... .. ..................... .. .. ....39
3.1.2 Intel® Virtualization Technology (Intel® VT) for
IA-32, Intel® 64 and Intel® Architecture
(Intel® VT-x) Features ............. .......... ... .. .......... .. .. ........... .. .. .. ........... .. .. ..40
3.1.3 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) Objectives ....................................................................40
3.1.4 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) Features.......................................................................41
3.1.5 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) Features Not Supported..................................................41
3.2 Intel® Trusted Execution Technology (Intel® TXT) ........... .. .. ........... ........... .. ..........42
3.3 Intel® Hyper-Threading Technology (Intel® HT Technology)........... .. .. .. ...................42
3.4 Intel® Turbo Boost Technology.................................... .. .. .. ...................... .. .. .. ......43
3.4.1 Intel® Turbo Boost Technology Frequency..................................................43
3.4.2 Intel® Turbo Boost Technology Graphics Frequency.....................................43
3.5 Intel® Advanced Vector Extensions (Intel® AVX).................................................... 44
3.6 Security and Cryptography Technologies...............................................................44
3.6.1 Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI) ........44
3.6.2 PCLMULQDQ Instruction ..........................................................................44
3.6.3 RDRAND Instruction................................................................................45
3.7 Intel® 64 Architecture x2APIC.............................................................................45
3.8 Supervisor Mode Execution Protection (SMEP) .......................................................46
3.9 Power Aware Interrupt Routing (PAIR)..................................................................46
4 Power Management .................................................................................................47
4.1 Advanced Configuration and Power Interface
(ACPI) States Supported.....................................................................................48
4.1.1 System States................. .......... .. .. ........... .. .. .. ........... .. .. .......... ... .. ..........48
4.1.2 Processor Core / Package Idle States.........................................................48
4.1.3 Integrated Memory Controller States.........................................................48
4.1.4 PCI Express* Link States .........................................................................49
4.1.5 Direct Media Interface (DMI) States .......................................................... 49
4.1.6 Processor Graphics Controller States .........................................................49
4.1.7 Interface State Combinations....................................................................49
4.2 Processor Core Power Management......................................................................50
4.2.1 Enhanced Intel® SpeedStep® Technology ..................................................50
4.2.2 Low-Power Idle States.............................................................................50
4.2.3 Requesting Low-Power Idle States.............................................................52
4.2.4 Core C-states ........ .. ..................... .. ... .......... .. .. ........... .. .. ........... .. .. ........52
4.2.4.1 Core C0 State...........................................................................52
4.2.4.2 Core C1 / C1E State..................................................................53
4.2.4.3 Core C3 State...........................................................................53
4.2.4.4 Core C6 State...........................................................................53
4.2.4.5 C-State Auto-Demotion ........... .. ............ ........................ ............53
4.2.5 Package C-States........................ .. .. ........... .. .. ........... .. .. ........... .. .. ..........54
4.2.5.1 Package C0..............................................................................55
4.2.5.2 Package C1/C1E .......................................................................55
4.2.5.3 Package C3 State......................................................................56
4.2.5.4 Package C6 State......................................................................56
4.3 Integrated Memory Controller (IMC) Power Management ........................................56
4.3.1 Disabling Unused System Memory Outputs ............... .. .. .. ............. .. .. .. .. ......56
4.3.2 DRAM Power Management and Initialization...............................................57
4.3.2.1 Initialization Role of CKE............................................................58
4.3.2.2 Conditional Self-Refresh ............................................................58
4.3.2.3 Dynamic Power Down Operation .................................................59
4.3.2.4 DRAM I/O Power Management....................................................59
4.3.3 DDR Electrical Power Gating (EPG) ...........................................................59
4.4 PCI Express* Power Management........................................................................60
4.5 DMI Power Management....................................................................................60
4.6 Graphics Power Management ..............................................................................60
4.6.1 Intel® Rapid Memory Power Management (Intel® RMPM)
(also known as CxSR).............................................................................60
4.6.2 Intel® Graphics Performance Modulation Technology (Intel® GPMT).............. 60
4.6.3 Graphics Render C-State.........................................................................60
4.6.4 Intel® Smart 2D Display Technology (Intel® S2DDT) .................. ......... ....... 61
4.6.5 Intel® Graphics Dynamic Frequency..........................................................61
4.7 Graphics Thermal Power Management..................................................................61
5 Thermal Management..............................................................................................63
6 Signal Description ...................................................................................................65
6.1 System Memory Interface Signals......................................................................66
6.2 Memory Reference and Compensation Signals.....................................................67
6.3 Reset and Miscellaneous Signals..........................................................................68
6.4 PCI Express*-based Interface Signals ..................................................................69
6.5 Intel® Flexible Display (Intel® FDI) Interface Signals .............................................69
6.6 Direct Media Interface (DMI) Signals....................................................................70
6.7 Phase Lock Loop (PLL) Signals ............................................................................70
6.8 Test Access Points (TAP) Signals .........................................................................70
6.9 Error and Thermal Protection Signals ...................................................................71
6.10 Power Sequencing Signals..................................................................................72
6.11 Processor Power Signals.....................................................................................73
6.12 Sense Signals........... .......... .. ... .......... .. ........... .. .. .......... ... .. .......... .. .. ........... .. ....73
6.13 Ground and Non-Critical to Function (NCTF) Signals...............................................74
6.14 Processor Internal Pull-Up / Pull-Down Resistors....................................................74
7 Electrical Specifications...........................................................................................75
7.1 Power and Ground Lands....................................................................................75
7.2 Decoupling Guidelines........................................................................................75
7.2.1 Voltage Rail Decoupling...........................................................................75
7.3 Processor Clocking (BCLK[0], BCLK#[0])..............................................................76
7.3.1 Phase Lock Loop (PLL) Power Supply.........................................................76
7.4 VCC Voltage Identification (VID)..........................................................................76
7.5 System Agent (SA) VCC VID................................................................................80
7.6 Reserved or Unused Signals................................................................................80
7.7 Signal Groups ...................................................................................................80
7.8 Test Access Port (TAP) Connection............ .. .. .. ............. ....................... .................82
7.9 Storage Conditions Specifications........................................................................83
7.10 DC Specifications..............................................................................................84
7.10.1 Voltage and Current Specifications...........................................................84
7.11 Platform Environmental Control Interface (PECI) DC Specifications...........................90
7.11.1 PECI Bus Architecture..............................................................................90
7.11.2 DC Characteristics ................... ................................ .. .. .. ..................... .. ..91
7.11.3 Input Device Hysteresis..........................................................................91
8 Processor Land and Signal Information....................................................................93
8.1 Processor Land Assignments ...............................................................................93
9 DDR Data Swizzling................................................................................................109
Figures
1-1 Desktop Processor Platform...... ................................ .. .. .. ..................... .. .. .. ...............10
1-2 Desktop Processor Compatibility Diagram ..................................................................18
2-1 Intel® Flex Memory Technology Operation .................................................................26
2-2 PCI Express* Layering Diagram................................................................................28
2-3 Packet Flow Through the Layers ...............................................................................29
2-4 PCI Express* Related Register Structures in the Processor ...........................................30
2-5 PCI Express* Typical Operation 16 Lanes Mapping ...................................................... 31
2-6 Processor Graphics Controller Unit Block Diagram .......................................................33
2-7 Processor Display Block Diagram ..............................................................................36
4-1 Processor Power States ...........................................................................................47
4-2 Idle Power Management Breakdown of the Processor Cores..........................................51
4-3 Thread and Core C-State Entry and Exit.....................................................................51
4-4 Package C-State Entry and Exit ............. ...................................................................55
7-1 Example for PECI Host-Clients Connection..................................................................90
7-2 Input Device Hysteresis...........................................................................................91
8-1 LGA Socket Land Map..............................................................................................94
Tables
1-1 Desktop 3rd Generation Intel® Core™ Processor Family, Desktop Intel®
Pentium® Processor Family, and Desktop Intel® Celeron® Processor Family SKUs...........16
1-2 Terminology...........................................................................................................19
1-3 Related Documents.................................................................................................22
2-1 Processor DIMM Support Summary by Product ...........................................................23
2-2 Supported UDIMM Module Configurations...................................................................24
2-3 Supported SO-DIMM Module Configurations (AIO Only)................................................24
2-4 System Memory Timing Support.........................................................................25
2-5 Reference Clock.................................................................................................38
4-1 System States........................................................................................................48
4-2 Processor Core / Package State Support ....................................................................48
4-3 Integrated Memory Controller States.........................................................................48
4-4 PCI Express* Link States .........................................................................................49
4-5 Direct Media Interface (DMI) States ..........................................................................49
4-6 Processor Graphics Controller States .........................................................................49
4-7 G, S, and C State Combinations................................................................................49
4-8 Coordination of Thread Power States at the Core Level ................................................51
4-9 P_LVLx to MWAIT Conversion................................................................................... 52
4-10 Coordination of Core Power States at the Package Level ..............................................54
6-1 Signal Description Buffer Types ................................................................................65
6-2 Memory Channel A Signals ......................................................................................66
6-3 Memory Channel B Signals ......................................................................................67
6-4 Memory Reference and Compensation.......................................................................67
6-5 Reset and Miscellaneous Signals...............................................................................68
6-6 PCI Express* Graphics Interface Signals....................................................................69
6-7 Intel® Flexible Display (Intel® FDI) Interface.............................................................69
6-8 Direct Media Interface (DMI) Signals – Processor to PCH Serial Interface .......................70
6-9 Phase Lock Loop (PLL) Signals .................................................................................70
6-10 Test Access Points (TAP) Signals ..............................................................................70
6-11 Error and Thermal Protection Signals ............. ...........................................................71
6-12 Power Sequencing Signals.......................................................................................72
6-13 Processor Power Signals......................... .. .. .. ..................... .. ... .. ..................... .. .. .. ....73
6-14 Sense Signals........................................................................................................73
6-15 Ground and Non-Critical to Function (NCTF) Signals....................................................74
6-16 Processor Internal Pull-Up / Pull-Down Resistors.........................................................74
7-1 VR 12.0 Voltage Identification Definition....................................................................77
7-2 Signal Groups 1 .....................................................................................................81
7-3 Storage Condition Ratings .......................................................................................83
7-4 Processor Core Active and Idle Mode DC Voltage and Current Specifications...................84
7-5 Processor System Agent I/O Buffer Supply DC Voltage and Current Specifications.......... . 86
7-6 Processor Graphics VID based (VAXG) Supply DC Voltage and Current Specifications........ 87
7-7 DDR3 Signal Group DC Specifications........................................................................87
7-8 Control Sideband and TAP Signal Group DC Specifications ...........................................89
7-9 PCI Express* DC Specifications ................................................................................89
7-10 PECI DC Electrical Limits .........................................................................................91
8-1 Processor Land List by Land Name............................................................................95
9-1 DDR Data Swizzling Table – Channel A.............................................................. 110
9-2 DDR Data Swizzling Table – Channel B .............................................................. 111
Revision History
Revision Number | Description | Revision Date
001 | Initial release | April 2012
002 | Added Desktop 3rd Generation Intel® Core™ i5-3470T, i5-3470, i5-3470S, i5-3475S, i5-3570, i5-3570S processors | June 2012
003 | Updated Section 1.2.2, PCI Express*; updated Section 2.1.1, System Memory Technology Supported; updated Table 7-4, "Processor Core Active and Idle Mode DC Voltage and Current Specifications"; added 65 W to 2011C | June 2012
004 | Minor edits throughout for clarity; added Intel Pentium G2120 and G2100T processors; added Desktop 3rd Generation Intel® Core™ i3-3220, i3-3220T, i3-3225, i3-3240, i3-3240T, i5-3330, i5-3330S, i5-3335S, i5-3350P processors | September 2012
005 | Added Desktop 3rd Generation Intel® Core™ i3-3210 processor; added Desktop Intel® Pentium® G2130, G2020, G2020T, G2010 processors; added Desktop Intel® Celeron® G1620, G1610, G1610T processors | January 2013
006 | Added Desktop 3rd Generation Intel® Core™ i3-3250, i3-3250T, i3-3245 processors; added Desktop Intel® Pentium® G2140, G2120T, G2030, G2030T processors | June 2013
007 | Added Desktop 3rd Generation Intel® Core™ i5-3340, i5-3340S processors; added Desktop Intel® Celeron® G1630, G1620, G1620T processors | September 2013
008 | Added Desktop Intel® Pentium® Processor A1018 | November 2013
1 Introduction
The Desktop 3rd Generation Intel® Core™ processor family, Desktop Intel® Pentium®
processor family, and Desktop Intel® Celeron® processor family are the next
generation of 64-bit, multi-core processors built on 22-nanometer process technology.
The processors are designed for a two-chip platform. The two-chip platform consists of
a processor and a Platform Controller Hub (PCH) and enables higher performance,
lower cost, easier validation, and improved x-y footprint. The processor includes an
Integrated Display Engine, Processor Graphics, PCI Express ports, and an Integrated
Memory Controller. The processor is designed for desktop platforms. The processor
offers either 6 or 16 graphics execution units (EUs). The number of EU engines
supported may vary between processor SKUs. The processor is offered in an 1155-land
LGA package (H2). Figure 1-1 shows an example desktop platform block diagram.
The Datasheet provides DC specifications, pinout and signal definitions, interface
functional descriptions, and additional feature information pertinent to the
implementation and operation of the processor on its respective platform.
Note: Throughout this document, the Intel® 6 / 7 Series Chipset Platform Controller Hub may
be referred to as “PCH”.
Note: Throughout this document, the Desktop 3rd Generation Intel® Core™ processor family,
Desktop Intel® Pentium® processor family, and Desktop Intel® Celeron® processor
family may be referred to simply as "processor".
Note: Throughout this document, the Desktop 3rd Generation Intel® Core™ processor family,
Desktop Intel® Pentium® processor family, and Desktop Intel® Celeron® processor
family refer to the processor SKUs listed in Table 1-1.
Note: Some processor features are not available on all platforms. Refer to the processor
specification update for details.
Note: The term “DT” refers to desktop platforms.
Figure 1-1. Desktop Processor Platform
[Block diagram: the processor (with DDR3 memory channels, PECI, PCI Express* 3.0 in 1x16 or 2x8 configuration to discrete graphics (PEG), three digital display outputs, Intel® Flexible Display Interface, and DMI2 x4) connects to the Intel® 6/7 Series Chipset family PCH, which provides Analog CRT, eight PCI Express* 2.0 x1 ports (5 GT/s), USB 2.0 / USB 3.0, Intel® HD Audio, Serial ATA, Gigabit network connection, SPI Flash x 2, LPC (FWH, Super I/O), SMBus 2.0, GPIO, Controller Link 1 (WiFi / WiMax), and the Intel® Management Engine.]
Note: USB 3.0 is supported on the Intel® 7 Series Chipset family only.
1.1 Processor Feature Details
• Four or two execution cores
• A 32-KB instruction and 32-KB data first-level cache (L1) for each core
• A 256-KB shared instruction / data second-level cache (L2) for each core
• Up to 8-MB shared instruction / data third-level cache (L3), shared among all cores
1.1.1 Supported Technologies
• Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d)
• Intel® Virtualization Technology (Intel® VT) for IA-32, Intel® 64 and Intel® Architecture (Intel® VT-x)
• Intel® Active Management Technology (Intel® AMT) 8.0
• Intel® Trusted Execution Technology (Intel® TXT)
• Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
• Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)
• Intel® Hyper-Threading Technology (Intel® HT Technology)
• Intel® 64 Architecture
• Execute Disable Bit
• Intel® Turbo Boost Technology
• Intel® Advanced Vector Extensions (Intel® AVX)
• Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI)
• PCLMULQDQ Instruction
• RDRAND instruction for random number generation
• SMEP – Supervisor Mode Execution Protection
• PAIR – Power Aware Interrupt Routing
1.2 Interfaces
1.2.1 System Memory Support
• Two channels of DDR3 Unbuffered Dual In-Line Memory Modules (UDIMM) or DDR3
Unbuffered Small Outline Dual In-Line Memory Modules (SO-DIMM) with a
maximum of two DIMMs per channel
• Single-channel and dual-channel memory organization modes
• Data burst length of eight for all memory organization modes
• Memory DDR3 data transfer rates of 1333 MT/s and 1600 MT/s. The DDR3 data
transfer rates supported by the processor are dependent on the PCH SKU in the
target platform:
— Desktop PCH platforms support 1333 MT/s and 1600 MT/s for one DIMM and two DIMMs per channel
— All In One (AIO) platforms support 1333 MT/s and 1600 MT/s for one DIMM and two DIMMs per channel
• 64-bit wide channels
• System memory interface I/O voltage of 1.5 V
— DDR3 and DDR3L DIMMs/DRAMs running at 1.5 V
— No support for DDR3L DIMMs/DRAMs running at 1.35 V
• Support for memory configurations that mix DDR3 DIMMs/DRAMs with DDR3L
DIMMs/DRAMs running at 1.5 V
• The type of DIMM module supported by the processor is dependent on the PCH
SKU in the target platform:
— Desktop PCH platforms support non-ECC UDIMMs only
— All In One (AIO) platforms support SO-DIMMs
• Theoretical maximum memory bandwidth (see the worked figures following this list):
— 10.6 GB/s in single-channel mode or 21.3 GB/s in dual-channel mode assuming DDR3 1333 MT/s
— 12.8 GB/s in single-channel mode or 25.6 GB/s in dual-channel mode assuming DDR3 1600 MT/s
• Processor on-die Reference Voltage (VREF) generation for both DDR3 Read
(RDVREF) and Write (VREFDQ)
• 1Gb, 2Gb, and 4Gb DDR3 DRAM device technologies are supported
— Using 4Gb DRAM device technology, the largest memory capacity possible is
32 GB, assuming dual-channel mode with a four x8, dual-ranked DIMM memory
configuration
• Up to 64 simultaneous open pages, 32 per channel (assuming 8 ranks of 8-bank
devices)
• Command launch modes of 1N/2N
• On-Die Termination (ODT)
• Asynchronous ODT
• Intel® Fast Memory Access (Intel® FMA):
— Just-in-Time Command Scheduling
— Command Overlap
— Out-of-Order Scheduling
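For reference, the bandwidth and capacity figures quoted in the list above follow directly from the interface parameters given here (each DDR3 channel is 64 bits, or 8 bytes, wide):

    1333 MT/s × 8 bytes/transfer = 10.6 GB/s per channel; × 2 channels = 21.3 GB/s
    1600 MT/s × 8 bytes/transfer = 12.8 GB/s per channel; × 2 channels = 25.6 GB/s
    4 Gb/device × 8 devices/rank (x8 DRAMs on a 64-bit channel) × 2 ranks/DIMM × 2 DIMMs/channel × 2 channels = 32 GB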
1.2.2 PCI Express*
• The PCI Express* lanes (PEG[15:0] TX and RX) are fully compliant with the PCI
Express Base Specification, Revision 3.0, including support for 8.0 GT/s transfer
speeds.
• The processor with a Desktop PCH supports the following PCI Express* configurations
in desktop products (may vary depending on PCH SKUs):

Configuration | Organization | Desktop
1 | 1x8, 2x4 | Graphics, I/O
2 | 2x8 | Graphics, I/O
3 | 1x16 | Graphics, I/O

• The port may negotiate down to narrower widths
— Support for x16/x8/x4/x2/x1 widths for a single PCI Express* mode
• 2.5 GT/s, 5.0 GT/s and 8.0 GT/s PCI Express* frequencies are supported
• Gen 2 raw bit-rate on the data pins of 5.0 GT/s, resulting in a real bandwidth per pair
of 500 MB/s given the 8b/10b encoding used to transmit data across this interface
(Gen 1 uses the same 8b/10b encoding at 2.5 GT/s). This also does not account for
packet overhead and link maintenance.
• Maximum theoretical bandwidth on the interface of 8 GB/s in each direction
simultaneously, for an aggregate of 16 GB/s when x16 Gen 2
• Gen 3 raw bit-rate on the data pins of 8.0 GT/s, resulting in a real bandwidth per
pair of 984 MB/s using 128b/130b encoding to transmit data across this interface.
This also does not account for packet overhead and link maintenance.
• Maximum theoretical bandwidth on the interface of 16 GB/s in each direction
simultaneously, for an aggregate of 32 GB/s when x16 Gen 3 (see the worked figures
at the end of this section)
• Hierarchical PCI-compliant configuration mechanism for downstream devices
• Traditional PCI style traffic (asynchronous snooped, PCI ordering)
• PCI Express* extended configuration space. The first 256 bytes of configuration
space alias directly to the PCI Compatibility configuration space. The remaining
portion of the fixed 4-KB block of memory-mapped space above that (starting at
100h) is known as extended configuration space.
• PCI Express* Enhanced Access Mechanism: accessing the device configuration
space in a flat memory-mapped fashion (an illustrative address-calculation sketch
follows at the end of this section)
• Automatic discovery, negotiation, and training of link out of reset
• Traditional AGP style traffic (asynchronous non-snooped, PCI-X Relaxed ordering)
• Peer segment destination posted write traffic (no peer-to-peer read traffic) in
Virtual Channel 0:
— DMI -> PCI Express* Port 0
• 64-bit downstream address format; however, the processor never generates an
address above 64 GB (Bits 63:36 will always be zeros)
• 64-bit upstream address format; however, the processor responds to upstream
read transactions to addresses above 64 GB (addresses where any of Bits 63:36
are nonzero) with an Unsupported Request response. Upstream write transactions
to addresses above 64 GB will be dropped.
• Re-issues Configuration cycles that have been previously completed with the
Configuration Retry status
• PCI Express* reference clock is a 100-MHz differential clock
• Power Management Event (PME) functions
• Dynamic width capability
• Message Signaled Interrupt (MSI and MSI-X) messages
• Polarity inversion
Note: The processor does not support PCI Express* Hot-Plug.
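The per-lane bandwidth figures quoted above can be reproduced from the line rate and the encoding overhead, and the 64-GB address limit corresponds to the 36 address bits the processor drives:

    Gen 2: 5.0 GT/s × 8/10 (8b/10b) = 4.0 Gb/s = 500 MB/s per lane per direction; × 16 lanes = 8 GB/s per direction, 16 GB/s aggregate
    Gen 3: 8.0 GT/s × 128/130 ≈ 7.88 Gb/s ≈ 984 MB/s per lane per direction; × 16 lanes ≈ 15.8 GB/s per direction (quoted as 16 GB/s), 32 GB/s aggregate
    Address limit: 2^36 bytes = 64 GB (Bits 63:36 always zero for downstream requests)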
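The flat memory-mapped configuration access mentioned above follows the standard PCI Express* Enhanced Configuration Access Mechanism (ECAM) layout, in which each bus/device/function owns a 4-KB block. The sketch below is illustrative only and is not taken from this datasheet: it assumes the memory-mapped configuration base address has already been obtained from firmware (for example, from the ACPI MCFG table), and both the base value and the helper name are hypothetical.

    #include <stdint.h>

    /* Illustrative helper: compute the ECAM address of a configuration register.
     * Each function owns a flat 4-KB block at base + (bus << 20) + (dev << 15) +
     * (func << 12). The first 256 bytes of each block alias the PCI-compatible
     * configuration space; offsets 100h-FFFh are the extended configuration space. */
    static inline volatile uint32_t *
    pcie_cfg_reg(uintptr_t ecam_base, unsigned bus, unsigned dev,
                 unsigned func, unsigned offset)
    {
        uintptr_t addr = ecam_base
                       + ((uintptr_t)(bus  & 0xFF) << 20)
                       + ((uintptr_t)(dev  & 0x1F) << 15)
                       + ((uintptr_t)(func & 0x07) << 12)
                       + (offset & 0xFFC);   /* dword-aligned register offset */
        return (volatile uint32_t *)addr;
    }

    /* Example (hypothetical base address): read the Vendor ID / Device ID dword
     * of bus 0, device 0, function 0.
     *   uint32_t id = *pcie_cfg_reg(0xE0000000u, 0, 0, 0, 0x00);
     */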
1.2.3 Direct Media Interface (DMI)
• DMI 2.0 support
• Four lanes in each direction
• 5 GT/s point-to-point DMI interface to PCH is supported
• Raw bit-rate on the data pins of 5.0 Gb/s, resulting in a real bandwidth per pair of
500 MB/s given the 8b/10b encoding used to transmit data across this interface.
Does not account for packet overhead and link maintenance.
• Maximum theoretical bandwidth on interface of 2 GB/s in each direction
simultaneously, for an aggregate of 4 GB/s when DMI x4 (see the worked figure
following this list)
• Shares 100-MHz PCI Express* reference clock
• 64-bit downstream address format; however, the processor never generates an
address above 64 GB (Bits 63:36 will always be zeros)
• 64-bit upstream address format; however, the processor responds to upstream read
transactions to addresses above 64 GB (addresses where any of Bits 63:36 are
nonzero) with an Unsupported Request response. Upstream write transactions to
addresses above 64 GB will be dropped.
• Supports the following traffic types to or from the PCH:
— DMI -> DRAM
— DMI -> processor core (Virtual Legacy Wires (VLWs), Resetwarn, or MSIs only)
— Processor core -> DMI
• APIC and MSI interrupt messaging support:
— Message Signaled Interrupt (MSI and MSI-X) messages
• Downstream SMI, SCI and SERR error indication
• Legacy support for ISA regime protocol (PHOLD / PHOLDA) required for parallel
port DMA, floppy drive, and LPC bus masters
• DC coupling – no capacitors between the processor and the PCH
• Polarity inversion
• PCH end-to-end lane reversal across the link
• Supports Half Swing "low-power / low-voltage"
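The DMI bandwidth figures above follow the same arithmetic as the PCI Express* Gen 2 case:

    5.0 GT/s × 8/10 (8b/10b) = 500 MB/s per lane per direction; × 4 lanes = 2 GB/s per direction, 4 GB/s aggregate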
1.2.4 Platform Environment Control Interface (PECI)
The PECI is a one-wire interface that provides a communication channel between a
PECI client (the processor) and a PECI master. The processor supports the PECI 3.0
Specification.
1.2.5 Processor Graphics
• The Processor Graphics contains a refresh of the seventh generation graphics core,
enabling substantial gains in performance and lower power consumption. Up to
16 EUs are supported.
• Next Generation Intel Clear Video Technology HD Support is a collection of video
playback and enhancement features that improve the end user's viewing
experience:
— Encode / transcode HD content
— Playback of high definition content including Blu-ray Disc*
— Superior image quality with sharper, more colorful images
— Playback of Blu-ray Disc* S3D content using HDMI* (V.1.4 with 3D)
• DirectX* Video Acceleration (DXVA) support for accelerating video processing
— Full AVC/VC1/MPEG2 HW Decode
• Advanced Scheduler 2.0, 1.0, XPDM support
• Windows* 7, Windows* XP, OS X, Linux OS support
• DirectX* 11, DirectX* 10.1, DirectX* 10, DirectX* 9 support
• OpenGL* 3.0 support
• Switchable Graphics support on Desktop AIO platforms with MxM solutions only
1.2.6 Intel® Flexible Display Interface (Intel® FDI)
• For SKUs with graphics, carries display traffic from the Processor Graphics in the
processor to the legacy display connectors in the PCH
• Based on the DisplayPort* standard
• The two Intel FDI links are capable of being configured to support three
independent channels, one for each display pipeline
• There are two Intel FDI channels; each one consists of four unidirectional
downstream differential transmitter pairs:
— Scalable down to 3X, 2X, or 1X based on actual display bandwidth
requirements
— Fixed frequency 2.7 GT/s data rate
• Two sideband signals for display synchronization:
— FDI_FSYNC and FDI_LSYNC (Frame and Line Synchronization)
• One interrupt signal used for various interrupts from the PCH:
— FDI_INT signal shared by both Intel FDI links
• PCH supports end-to-end lane reversal across both links
• Common 100-MHz reference clock
1.3 Power Management Support
1.3.1 Processor Core
• Full support of ACPI C-states as implemented by the following processor C-states:
C0, C1, C1E, C3, C6
• Enhanced Intel SpeedStep Technology
1.3.2 System
• S0, S3, S4, S5
1.3.3 Memory Controller
• Conditional self-refresh (Intel® Rapid Memory Power Management (Intel® RMPM))
• Dynamic power down
1.3.4 PCI Express*
• L0s and L1 ASPM power management capability
1.3.5 Direct Media Interface (DMI)
• L0s and L1 ASPM power management capability
1.3.6 Processor Graphics Controller (GT)
• Intel® Rapid Memory Power Management (Intel® RMPM) – CxSR
• Intel® Graphics Performance Modulation Technology (Intel® GPMT)
• Intel® Smart 2D Display Technology (Intel® S2DDT)
• Graphics Render C-State (RC6)
1.3.7 Thermal Management Support
• Digital Thermal Sensor
• Intel Adaptive Thermal Monitor
• THERMTRIP# and PROCHOT# support
• On-Demand Mode
• Memory Thermal Throttling
• External Thermal Sensor (TS-on-DIMM and TS-on-Board)
• Render Thermal Throttling
• Fan speed control with DTS
1.4 Processor SKU Definitions
Table 1-1. Desktop 3rd Generation Intel® Core™ Processor Family, Desktop Intel® Pentium® Processor Family, and Desktop Intel® Celeron® Processor Family SKUs

Processor Number | TDP (W) | IA LFM Frequency | IA Frequency Range | GT Frequency Range | TjMAX (°C)
i7-3770T | 45 | 1600 MHz | 2.5 GHz up to 3.7 GHz | 650 MHz up to 1150 MHz | 94
i7-3770S | 65 | 1600 MHz | 3.1 GHz up to 3.9 GHz | 650 MHz up to 1150 MHz | 103
i7-3770K | 77 | 1600 MHz | 3.5 GHz up to 3.9 GHz | 650 MHz up to 1150 MHz | 105
i7-3770 | 77 | 1600 MHz | 3.4 GHz up to 3.9 GHz | 650 MHz up to 1150 MHz | 105
i5-3570T | 45 | 1600 MHz | 2.3 GHz up to 3.3 GHz | 650 MHz up to 1150 MHz | 94
i5-3570S | 65 | 1600 MHz | 3.1 GHz up to 3.8 GHz | 650 MHz up to 1150 MHz | 103
i5-3570K | 77 | 1600 MHz | 3.4 GHz up to 3.8 GHz | 650 MHz up to 1150 MHz | 105
i5-3570 | 77 | 1600 MHz | 3.4 GHz up to 3.8 GHz | 650 MHz up to 1150 MHz | 105
i5-3550S | 65 | 1600 MHz | 3 GHz up to 3.7 GHz | 650 MHz up to 1150 MHz | 103
i5-3550 | 77 | 1600 MHz | 3.3 GHz up to 3.7 GHz | 650 MHz up to 1150 MHz | 105
i5-3475S | 65 | 1600 MHz | 2.9 GHz up to 3.6 GHz | 650 MHz up to 1100 MHz | 103
i5-3470S | 65 | 1600 MHz | 2.9 GHz up to 3.6 GHz | 650 MHz up to 1100 MHz | 103
i5-3470T | 35 | 1600 MHz | 2.9 GHz up to 3.6 GHz | 650 MHz up to 1100 MHz | 91
i5-3470 | 77 | 1600 MHz | 3.2 GHz up to 3.6 GHz | 650 MHz up to 1100 MHz | 105
i5-3450S | 65 | 1600 MHz | 2.8 GHz up to 3.5 GHz | 650 MHz up to 1100 MHz | 103
i5-3450 | 77 | 1600 MHz | 3.1 GHz up to 3.5 GHz | 650 MHz up to 1100 MHz | 105
i5-3350P | 69 | 1600 MHz | 3.1 GHz up to 3.3 GHz | N/A | 105
i5-3340 | 77 | 1600 MHz | 3.1 GHz up to 3.3 GHz | 650 MHz up to 1050 MHz | 105
i5-3340S | 65 | 1600 MHz | 3.0 GHz up to 3.3 GHz | 650 MHz up to 1050 MHz | 103
i5-3335S | 65 | 1600 MHz | 2.7 GHz up to 3.2 GHz | 650 MHz up to 1050 MHz | 103
i5-3330S | 65 | 1600 MHz | 2.7 GHz up to 3.2 GHz | 650 MHz up to 1050 MHz | 103
i3-3250T | 35 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 91
i3-3250 | 55 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 105
i3-3245 | 55 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 105
i5-3330 | 77 | 1600 MHz | 3 GHz up to 3.2 GHz | 650 MHz up to 1050 MHz | 105
i3-3240T | 35 | 1600 MHz | Up to 3.0 GHz | 650 MHz up to 1050 MHz | 91
i3-3240 | 55 | 1600 MHz | Up to 3.4 GHz | 650 MHz up to 1050 MHz | 105
i3-3225 | 55 | 1600 MHz | Up to 3.3 GHz | 650 MHz up to 1050 MHz | 105
i3-3220T | 35 | 1600 MHz | Up to 2.8 GHz | 650 MHz up to 1050 MHz | 91
i3-3220 | 55 | 1600 MHz | Up to 3.3 GHz | 650 MHz up to 1050 MHz | 105
i3-3210 | 55 | 1600 MHz | Up to 3.2 GHz | 650 MHz up to 1050 MHz | 105
G2140 | 55 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 105
G2130 | 55 | 1600 MHz | Up to 3.2 GHz | 650 MHz up to 1050 MHz | 105
G2120T | 35 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 91
G2120 | 55 | 1600 MHz | 3.1 GHz | 650 MHz up to 1050 MHz | 105
G2100T | 35 | 1600 MHz | 2.6 GHz | 650 MHz up to 1050 MHz | 91
G2030T | 35 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 91
G2030 | 35 | 1600 MHz | N/A | 650 MHz up to 1050 MHz | 105
G2020 | 55 | 1600 MHz | 2.9 GHz | 650 MHz up to 1050 MHz | 105
G2020T | 35 | 1600 MHz | 2.5 GHz | 650 MHz up to 1050 MHz | 91
G2010 | 55 | 1600 MHz | 2.8 GHz | 650 MHz up to 1050 MHz | 105
G1630 | 55 | 1600 MHz | 2.8 GHz | 650 MHz up to 1050 MHz | 105
G1620 | 55 | 1600 MHz | 2.7 GHz | 650 MHz up to 1050 MHz | 105
G1620T | 35 | 1600 MHz | 2.4 GHz | 650 MHz up to 1050 MHz | 91
G1610 | 55 | 1600 MHz | 2.6 GHz | 650 MHz up to 1050 MHz | 105
G1610T | 35 | 1600 MHz | 2.3 GHz | 650 MHz up to 1050 MHz | 91
A1018 | 35 | 1600 MHz | 2.1 GHz | 650 MHz up to 1 GHz | 105

1.5 Package
The processor socket type is noted as LGA 1155. The package is a 37.5 x 37.5 mm Flip
Chip Land Grid Array (FCLGA 1155). See the Desktop 3rd Generation Intel® Core™
Processor Family, Desktop Intel® Pentium® Processor Family, Desktop Intel® Celeron®
Processor Family, and LGA1155 Socket Thermal / Mechanical Specifications and Design
Guidelines for complete details on the package.
1.6 Processor Compatibility
The Desktop 3rd Generation Intel® Core™ processor family, Desktop Intel® Pentium®
processor family, and Desktop Intel® Celeron® processor family have specific platform
requirements that differentiate them from the 2nd Generation Intel® Core™ processor family
Desktop, Intel® Pentium® processor family Desktop, and Intel® Celeron® processor family
Desktop processors. Platforms intending to support both processor families need to
address the platform compatibility requirements detailed in Figure 1-2.
Notes:
1. G2_Core = 2nd Generation Intel® Core™ processor family Desktop, Intel® Pentium® processor
family Desktop, Intel® Celeron® processor family Desktop
2. G3_Core = Desktop 3rd Generation Intel® Core™ processor family, Desktop Intel® Pentium®
processor family, Desktop Intel® Celeron® processor family
Figure 1-2. Desktop Processor Compatibility Diagram
[Block diagram: the compatibility figure compares G2_Core and G3_Core requirements on a common platform. Both generations use VDDQ = 1.5 V (DDR3), VCCIO = 1.05 V (VCCIO_SEL# = '1' for both), and VCCSA = 0.925 V (VCCSA_VID = '0' for both); the VCore VR is controlled over SVID. PROC_SELECT# is '1' for G2_Core and '0' for G3_Core and controls DMI and FDI termination. PEG AC decoupling is 100 nF for PEG Gen 1,2 and 220 nF for PEG Gen 1,2,3. The VAXG VR requires two phases for some SKUs. Bulk decoupling options of 2 x 330 µF and 2 x 330 µF plus one placeholder are shown, along with a DF_TVS component.]
1.7 Terminology
Table 1-2. Terminology
Term Description
ACPI Advanced Configuration and Power Interface
ADB Automatic Display Brightness
APD Active Power Down
ASPM Active State Power Management
BGA Ball Grid Array
BLT Block Level Transfer
CLTT Closed Loop Thermal Throttling
CRT Cathode Ray Tube
cTDP Configurable Thermal Design Power
DDR3L-RS DDR3L Reduced Standby Power
DDR3 Third-generation Double Data Rate SDRAM memory technology
DDR3L DDR3 Low Voltage
DMA Direct Memory Access
DMI Direct Media Interface
DP DisplayPort*
DPST Display Power Savings Technology
DTS Digital Thermal Sensor
EC Embedded Controller
ECC Error Correction Code
eDP* Embedded DisplayPort*
Enhanced Intel®
SpeedStep®
Technology
Technology that provides power management capabilities to laptops.
EPG Electrical Power Gating
EU Execution Unit
Execute Disable Bit
The Execute Disable bit allows memory to be marked as executable or non-executable,
when combined with a supporting operating system. If code attempts to run in non-
executable memory, the processor raises an error to the operating system. This feature
can prevent some classes of viruses or worms that exploit buffer overrun
vulnerabilities and can thus help improve the overall security of the system. See the
Intel® 64 and IA-32 Architectures Software Developer's Manuals for more detailed
information.
HDMI* High Definition Multimedia Interface
HFM High Frequency Mode
IMC Integrated Memory Controller
Intel® 64 Technology 64-bit memory extensions to the IA-32 architecture
Intel® DPST Intel® Display Power Saving Technology
Intel® FDI Intel® Flexible Display Interface
Intel® TXT Intel® Trusted Execution Technology
Intel® Virtualization
Technology
Processor virtualization which when used in conjunction with Virtual Machine Monitor
software enables multiple, robust independent software environments inside a single
platform.
Intel® VT-d
Intel® Virtualization Technology (Intel® VT) for Directed I/O. Intel VT-d is a hardware
assist, under system software (Virtual Machine Manager or operating system) control,
for enabling I/O device virtualization. Intel VT-d also brings robust security by
providing protection from errant DMAs by using DMA remapping, a key feature of Intel
VT-d.
IOV I/O Virtualization
ISA Industry Standard Architecture. This is a legacy computer bus standard for IBM PC
compatible computers.
ITPM Integrated Trusted Platform Module
LCD Liquid Crystal Display
LFM Low Frequency Mode
LPC Low Pin Count
LPM Low Power Mode
LVDS Low Voltage Differential Signaling. A high speed, low power data transmission
standard used for display connections to LCD panels.
MLE Measured Launched Environment
MSI Message Signaled Interrupt
NCTF Non-Critical to Function. NCTF locations are typically redundant ground or non-critical
reserved, so the loss of the solder joint continuity at end of life conditions will not
affect the overall product functionality.
ODT On-Die Termination
PAIR Power Aware Interrupt Routing
PCH Platform Controller Hub. The chipset with centralized platform capabilities including the
main I/O interfaces along with display connectivity, audio features, power
management, manageability, security and storage features.
PECI Platform Environment Control Interface.
PEG PCI Express* Graphics. External Graphics using PCI Express* Architecture. A high-
speed serial interface whose configuration is software compatible with the existing PCI
specifications.
PGA Pin Grid Array
PLL Phase Lock Loop
PME Power Management Event
PPD Precharged Power Down
Processor The 64-bit, single-core or multi-core component (package).
Processor Core The term "processor core" refers to the silicon die itself, which can contain multiple execution
cores. Each execution core has an instruction cache, data cache, and 256-KB L2 cache.
All execution cores share the L3 cache.
Processor Graphics Intel Processor Graphics
Rank A unit of DRAM corresponding to four to eight devices in parallel, ignoring ECC. These
devices are usually, but not always, mounted on a single side of a SO-DIMM.
SCI System Control Interrupt. Used in ACPI protocol.
Intel SDRRS
Technology Intel Seamless Display Refresh Rate Switching Technology
SMEP Supervisor Mode Execution Protection
Storage Conditions
A non-operational state. The processor may be installed in a platform, in a tray, or
loose. Processors may be sealed in packaging or exposed to free air. Under these
conditions, processor landings should not be connected to any supply voltages, have
any I/Os biased or receive any clocks. Upon exposure to “free air” (that is, unsealed
packaging or a device removed from packaging material) the processor must be
handled in accordance with moisture sensitivity labeling (MSL) as indicated on the
packaging material.
SVID Serial Voltage IDentification interface
TAC Thermal Averaging Constant
TAP Test Access Point
TCC Thermal Control Circuit
TDC Thermal Design Current
TDP Thermal Design Power
TLP Transaction Layer Packet
VAXG Graphics core power supply
VCC Processor core power supply
VCCIO High Frequency I/O logic power supply
VCCPLL PLL power supply
VCCSA System Agent (memory controller, DMI, PCIe controllers, and display engine) power
supply
VDDQ DDR3 power supply
VGA Video Graphics Array
VID Voltage Identification
VLD Variable Length Decoding
VLW Virtual Legacy Wire
VR Voltage Regulator
VSS Processor ground
VTS Virtual Temperature Sensor
x1 Refers to a Link or Port with one Physical Lane.
x16 Refers to a Link or Port with sixteen Physical Lanes.
x4 Refers to a Link or Port with four Physical Lanes.
x8 Refers to a Link or Port with eight Physical Lanes.
1.8 Related Documents
Note: Contact your Intel representative for the latest revision of this item.
§ §
Table 1-3. Related Documents
Document – Document Number / Location
Desktop 3rd Generation Intel® Core™ Processor Family, Desktop Intel® Pentium® Processor Family, and Desktop Intel® Celeron® Processor Family Datasheet, Volume 2 – 326765
Desktop 3rd Generation Intel® Core™ Processor Family, Desktop Intel® Pentium® Processor Family, and Desktop Intel® Celeron® Processor Family Specification Update – 326766
Desktop 3rd Generation Intel® Core™ Processor Family, Desktop Intel® Pentium® Processor Family, Desktop Intel® Celeron® Processor Family, and LGA1155 Socket Thermal / Mechanical Specifications and Design Guidelines – 326767
Advanced Configuration and Power Interface Specification 3.0 – http://www.acpi.info/
PCI Local Bus Specification 3.0 – http://www.pcisig.com/specifications
PCI Express* Base Specification 2.0 – http://www.pcisig.com
DDR3 SDRAM Specification – http://www.jedec.org
DisplayPort* Specification – http://www.vesa.org
Intel® 64 and IA-32 Architectures Software Developer's Manuals – http://www.intel.com/products/processor/manuals/index.htm
  Volume 1: Basic Architecture – 253665
  Volume 2A: Instruction Set Reference, A-M – 253666
  Volume 2B: Instruction Set Reference, N-Z – 253667
  Volume 3A: System Programming Guide – 253668
  Volume 3B: System Programming Guide – 253669
2 Interfaces
This chapter describes the interfaces supported by the processor.
2.1 System Memory Interface
2.1.1 System Memory Technology Supported
The Integrated Memory Controller (IMC) supports DDR3 / DDR3L protocols with two
independent, 64-bit wide channels, each accessing one or two DIMMs. The type of
memory supported by the processor is dependent on the PCH SKU in the target
platform. Refer to Chapter 1 for supported memory configuration details.
Note: The processor supports only JEDEC approved memory modules and devices.
Note: The IMC supports a maximum of two DIMMs per channel, thus allowing up to four
device ranks per channel.
Note: The supported memory interface frequencies and number of DIMMs per channel are
SKU dependent.
Note: There is no support for DDR3L DIMMs/DRAMS running at 1.35 V.
DDR3 / DDR3L at 1.5 V Data Transfer Rates:
• 1333 MT/s (PC3-10600), 1600 MT/s (PC3-12800)
DDR3 / DDR3L at 1.5 V SO-DIMM Modules:
• Raw Card A – Dual Ranked x16 unbuffered non-ECC
• Raw Card B – Single Ranked x8 unbuffered non-ECC
• Raw Card C – Single Ranked x16 unbuffered non-ECC
• Raw Card F – Dual Ranked x8 (planar) unbuffered non-ECC
Desktop platform DDR3 / DDR3L at 1.5 V UDIMM Modules:
• Raw Card A – Single Ranked x8 unbuffered non-ECC
• Raw Card B – Dual Ranked x8 unbuffered non-ECC
• Raw Card C – Single Ranked x16 unbuffered non-ECC
Note: The processor supports memory configurations that mix DDR3 DIMMs / DRAMs with
DDR3L DIMMs / DRAMs running at 1.5 V.
Table 2-1. Processor DIMM Support Summary by Product
Processor Cores | Package | DIMM per Channel | DIMM Type | DDR3 | DDR3L at 1.5 V
Dual Core, Quad Core | uLGA | 1 DPC | SO-DIMM | 1333/1600 | 1333/1600
Dual Core, Quad Core | uLGA | 2 DPC | SO-DIMM | 1333/1600 | 1333/1600
Dual Core, Quad Core | uLGA | 1 DPC | UDIMM | 1333/1600 | 1333/1600
Dual Core, Quad Core | uLGA | 2 DPC | UDIMM | 1333/1600 | 1333/1600
Note:
1. DIMM module support is based on availability and is subject to change.
Note:
1. System memory configurations are based on availability and are subject to change.
2.1.2 System Memory Timing Support
The IMC supports the following Speed Bins, CAS Write Latency (CWL), and command
signal mode timings on the main memory interface:
• tCL = CAS Latency
• tRCD = Activate Command to READ or WRITE Command delay
• tRP = PRECHARGE Command Period
• CWL = CAS Write Latency
• Command Signal modes: 1N indicates a new command may be issued every clock;
2N indicates a new command may be issued every 2 clocks. Command launch
mode programming depends on the transfer rate and memory configuration.
Table 2-2. Supported UDIMM Module Configurations
Desktop Platforms: Unbuffered / Non-ECC Supported DIMM Module Configurations
Raw Card Version | DIMM Capacity | DRAM Device Technology | DRAM Organization | # of DRAM Devices | # of Physical Device Ranks | # of Row/Col Address Bits | # of Banks Inside DRAM | Page Size
A | 1 GB | 1 Gb | 128 M x 8 | 8 | 1 | 14/10 | 8 | 8K
A | 2 GB | 2 Gb | 128 M x 16 | 8 | 1 | 15/10 | 8 | 8K
A | 4 GB | 4 Gb | 512 M x 8 | 8 | 1 | 15/10 | 8 | 8K
B | 2 GB | 1 Gb | 128 M x 8 | 16 | 2 | 14/10 | 8 | 8K
B | 4 GB | 2 Gb | 256 M x 8 | 16 | 2 | 15/10 | 8 | 8K
B | 8 GB | 4 Gb | 512 M x 8 | 16 | 2 | 16/10 | 8 | 8K
C | 1 GB | 2 Gb | 128 M x 16 | 4 | 1 | 14/10 | 8 | 16K
Table 2-3. Supported SO-DIMM Module Configurations (AIO Only)
Raw Card Version | DIMM Capacity | DRAM Device Technology | DRAM Organization | # of DRAM Devices | # of Physical Device Ranks | # of Row/Col Address Bits | # of Banks Inside DRAM | Page Size
A | 2 GB | 2 Gb | 128 M x 16 | 8 | 2 | 14/10 | 8 | 8K
A | 4 GB | 4 Gb | 256 M x 16 | 8 | 2 | 15/10 | 8 | 8K
B | 1 GB | 1 Gb | 128 M x 8 | 8 | 1 | 14/10 | 8 | 8K
B | 2 GB | 2 Gb | 256 M x 8 | 8 | 1 | 15/10 | 8 | 8K
B | 4 GB | 4 Gb | 512 M x 8 | 8 | 1 | 16/10 | 8 | 8K
C | 1 GB | 2 Gb | 128 M x 16 | 4 | 1 | 14/10 | 8 | 8K
C | 2 GB | 4 Gb | 256 M x 16 | 4 | 1 | 15/10 | 8 | 8K
F | 2 GB | 1 Gb | 128 M x 8 | 16 | 2 | 14/10 | 8 | 8K
F | 4 GB | 2 Gb | 256 M x 8 | 16 | 2 | 15/10 | 8 | 8K
F | 8 GB | 4 Gb | 512 M x 8 | 16 | 2 | 16/10 | 8 | 8K
Note:
1. System memory timing support is based on availability and is subject to change.
2.1.3 System Memory Organization Modes
The IMC supports two memory organization modes, single-channel and dual-channel.
Depending upon how the DIMM Modules are populated in each memory channel, a
number of different configurations can exist.
2.1.3.1 Single-Channel Mode
In this mode, all memory cycles are directed to a single-channel. Single-channel mode
is used when either Channel A or Channel B DIMM connectors are populated in any
order, but not both.
2.1.3.2 Dual-Channel Mode – Intel® Flex Memory Technology Mode
The IMC supports Intel Flex Memory Technology Mode. Memory is divided into a
symmetric zone and an asymmetric zone. The symmetric zone starts at the lowest address in
each channel and is contiguous until the asymmetric zone begins or until the top
address of the channel with the smaller capacity is reached. In this mode, the system
runs with one zone of dual-channel mode and one zone of single-channel mode,
simultaneously, across the whole memory array.
Note: Channels A and B can be mapped to physical channels 0 and 1 respectively, or vice
versa; however, the channel A size must be greater than or equal to the channel B size.
Table 2-4. System Memory Timing Support
Segment | Transfer Rate (MT/s) | tCL (tCK) | tRCD (tCK) | tRP (tCK) | CWL (tCK) | DPC | CMD Mode
Desktop | 1333 | 9 | 9 | 9 | 7 | 1 | 1N/2N
Desktop | 1333 | 9 | 9 | 9 | 7 | 2 | 2N
Desktop | 1600 | 11 | 11 | 11 | 8 | 1 | 1N/2N
Desktop | 1600 | 11 | 11 | 11 | 8 | 2 | 2N
AIO | 1333 | 9 | 9 | 9 | 7 | 1 | 1N/2N
AIO | 1333 | 9 | 9 | 9 | 7 | 2 | 2N
AIO | 1600 | 11 | 11 | 11 | 8 | 1 | 1N/2N
2.1.3.2.1 Dual-Channel Symmetric Mode
Dual-Channel Symmetric mode, also known as interleaved mode, provides maximum
performance on real world applications. Addresses are ping-ponged between the
channels after each cache line (64-byte boundary). If there are two requests, and the
second request is to an address on the opposite channel from the first, that request can
be sent before data from the first request has returned. If two consecutive cache lines
are requested, both may be retrieved simultaneously, since they are ensured to be on
opposite channels. Use Dual-Channel Symmetric mode when both Channel A and
Channel B DIMM connectors are populated in any order, with the total amount of
memory in each channel being the same.
When both channels are populated with the same memory capacity and the boundary
between the dual channel zone and the single channel zone is the top of memory, the
IMC operates completely in Dual-Channel Symmetric mode.
Note: The DRAM device technology and width may vary from one channel to the other.
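The interleaving described above can be pictured with a short sketch. This is illustrative only and assumes a simple alternate-every-cache-line mapping; the actual address hash used by the IMC is not documented here.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: in Dual-Channel Symmetric mode, consecutive 64-byte
 * cache lines alternate between channels. The real IMC mapping is not
 * published; this sketch simply toggles on bit 6 of the physical address. */
static unsigned channel_of(uint64_t phys_addr)
{
    return (unsigned)((phys_addr >> 6) & 1);   /* 64-byte line boundary */
}

int main(void)
{
    for (uint64_t a = 0; a < 4 * 64; a += 64)
        printf("line at 0x%03llx -> channel %u\n",
               (unsigned long long)a, channel_of(a));
    return 0;
}

With this toggling, two consecutive cache-line requests always land on opposite channels and can be serviced in parallel, which is the behavior the paragraph above describes.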
2.1.4 Rules for Populating Memory Slots
In all system memory organization modes, the frequency and latency timings of the
system memory are the lowest supported frequency and the slowest supported latency
timings of all memory DIMM modules placed in the system, as determined through the
SPD registers.
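A minimal sketch of that resolution, assuming hypothetical SPD-derived fields (the real SPD byte layout is not shown here):

#include <stdint.h>

/* Hypothetical per-DIMM parameters as read from SPD; field names are
 * illustrative, not the actual SPD byte layout. */
struct dimm_spd {
    unsigned max_rate_mts;   /* highest supported transfer rate, MT/s */
    unsigned tCL;            /* CAS latency at that rate, in tCK      */
    unsigned tRCD;
    unsigned tRP;
};

/* The system runs every DIMM at the lowest common frequency and the
 * slowest (largest) common latencies, as described above. */
static struct dimm_spd resolve_common(const struct dimm_spd *d, int n)
{
    struct dimm_spd out = d[0];
    for (int i = 1; i < n; i++) {
        if (d[i].max_rate_mts < out.max_rate_mts) out.max_rate_mts = d[i].max_rate_mts;
        if (d[i].tCL  > out.tCL)  out.tCL  = d[i].tCL;
        if (d[i].tRCD > out.tRCD) out.tRCD = d[i].tRCD;
        if (d[i].tRP  > out.tRP)  out.tRP  = d[i].tRP;
    }
    return out;
}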
Note: In a Two DIMM Per Channel (2DPC) daisy chain layout memory configuration, the
DIMM furthest from the processor on any given channel must always be populated first.
Figure 2-1. Intel® Flex Memory Technology Operation
(Figure legend: CH A and CH B can be configured to be physical channels 0 or 1. B is the largest physical memory amount of the smaller size memory module; C is the remaining physical memory amount of the larger size memory module. The matched B regions of both channels below TOM are accessed with dual-channel interleaving; the C region is accessed non-interleaved.)
2.1.5 Technology Enhancements of Intel® Fast Memory Access
(Intel® FMA)
The following sections describe the Just-in-Time Scheduling, Command Overlap, and
Out-of-Order Scheduling Intel FMA technology enhancements.
2.1.5.1 Just-in-Time Command Scheduling
The memory controller has an advanced command scheduler in which all pending
requests are examined simultaneously to determine the most efficient request to be
issued next. The most efficient request is picked from all pending requests and issued
to system memory Just-in-Time to make optimal use of Command Overlapping. Thus,
instead of having all memory access requests go individually through an arbitration
mechanism that forces requests to be executed one at a time, requests can be started
without interfering with the current request, allowing for concurrent issuing of requests.
This allows for optimized bandwidth and reduced latency while maintaining appropriate
command spacing to meet system memory protocol.
2.1.5.2 Command Overlap
Command Overlap allows the insertion of the DRAM commands between the Activate,
Precharge, and Read/Write commands normally used, as long as the inserted
commands do not affect the currently executing command. Multiple commands can be
issued in an overlapping manner, increasing the efficiency of system memory protocol.
2.1.5.3 Out-of-Order Scheduling
While leveraging the Just-in-Time Scheduling and Command Overlap enhancements,
the IMC continuously monitors pending requ ests to system memory for th e best use of
bandwidth and reduction of latency. If there are multiple requests to the same open
page, these requests would be launched in a back to back manner to make optimum
use of the open memory page. This ability to reorder requests on the fly allows the IMC
to further reduce latency and increase bandwidth efficiency.
2.1.6 Data Scrambling
The memory controller incorporates a DDR3 Data Scrambling feature to minimize the
impact of excessive di/dt on the platform DDR3 VRs due to successive 1s and 0s on the
data bus. Past experience has demonstrated that traffic on the data bus is not random.
Rather, it can have energy concentrated at specific spectral harmonics, creating high
di/dt that is generally limited by data patterns that excite resonance between the
package inductance and on-die capacitances. As a result, the memory controller uses a
data scrambling feature to create pseudo-random patterns on the DDR3 data bus to
reduce the impact of any excessive di/dt.
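The general idea can be sketched in a few lines. The polynomial, seeding, and reset behavior of the actual scrambler are not published, so the LFSR below is only an assumption used for illustration.

#include <stdint.h>

/* Illustrative only: XOR write data with a pseudo-random LFSR sequence so
 * the bus pattern looks random; applying the same sequence with the same
 * seed on the read path restores the original data. */
static uint16_t lfsr_step(uint16_t s)
{
    /* 16-bit Fibonacci LFSR, taps 16,14,13,11 (example polynomial only) */
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

static void scramble(uint16_t *data, int n, uint16_t seed)
{
    uint16_t s = seed;
    for (int i = 0; i < n; i++) {
        data[i] ^= s;       /* XOR with the pseudo-random pattern */
        s = lfsr_step(s);
    }
}
/* Descrambling is the identical operation with the same seed. */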
2.1.7 DDR3 Reference Voltage Generation
The processor memory controller has the capability of generating the DDR3 Reference
Voltage (VREF) internally for both read (RDVREF) and write (VREFDQ) operations. The
generated VREF can be changed in small steps, and an optimum VREF value is
determined for both during a cold boot through advanced DDR3 training procedures in
order to provide the best voltage and signal margins.
2.2 PCI Express* Interface
This section describes the PCI Express interface capabilities of the processor. See the
PCI Express Base Specification for details of PCI Express.
The number of PCI Express controllers is depen dent on the platform. Refer to Chapter 1
for details.
2.2.1 PCI Express* Architecture
Compatibility with the PCI addressing model is maintained to ensure that all existing
applications and drivers may operate unchanged.
The PCI Express configuration uses standard mechanisms as defined in the PCI
Plug-and-Play specification. The processor external graphics ports support Gen 3 speed
as well. At 8 GT/s, Gen 3 operation results in twice as much bandwidth per lane as
compared to Gen 2 operation. The 16-lane PCI Express* graphics port can operate at
either 2.5 GT/s, 5 GT/s, or 8 GT/s.
PCI Express* Gen 3 uses a 128b/130b encoding scheme, eliminating nearly all of the
overhead of the 8b/10b encoding scheme used in Gen 1 and Gen 2 operation.
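The per-lane arithmetic behind that comparison can be worked out directly from the transfer rates and encoding overheads quoted above:

#include <stdio.h>

/* Effective per-lane payload rate after line-coding overhead:
 *   Gen1: 2.5 GT/s x 8b/10b    =  2.0 Gb/s = 250 MB/s
 *   Gen2: 5.0 GT/s x 8b/10b    =  4.0 Gb/s = 500 MB/s
 *   Gen3: 8.0 GT/s x 128b/130b = ~7.9 Gb/s = ~985 MB/s
 * which is why 8 GT/s Gen 3 roughly doubles Gen 2 bandwidth per lane. */
int main(void)
{
    const double gts[]  = { 2.5, 5.0, 8.0 };
    const double code[] = { 8.0 / 10.0, 8.0 / 10.0, 128.0 / 130.0 };
    for (int gen = 0; gen < 3; gen++) {
        double mbps = gts[gen] * 1e9 * code[gen] / 8.0 / 1e6; /* MB/s per lane */
        printf("Gen%d: %.0f MB/s per lane, x16 = %.1f GB/s\n",
               gen + 1, mbps, mbps * 16 / 1000.0);
    }
    return 0;
}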
The PCI Express architecture is specified in three layers – Transaction Layer, Data Link
Layer, and Physical Layer. The partitioning in the component is not necessarily along
these same boundaries. Refer to Figure 2-2 for the PCI Express layering diagram.
PCI Express uses packets to communicate information between components. Packets
are formed in the Transaction and Data Link Layers to carry the information from the
transmitting component to the receiving component. As the transmitted packets flow
through the other layers, they are extended with additional information necessary to
handle packets at those layers. At the receiving side, the reverse process occurs and
packets get transformed from their Physical Layer representation to the Data Link
Layer representation and finally (for Transaction Lay er P ackets) to the form that can be
processed by the Transaction Layer of the receiving device.
Figure 2-2. PCI Express* Layering Diagram
(Figure: two link partners, each with a Transaction Layer above a Data Link Layer above a Physical Layer; the Physical Layer of each device contains a Logical Sub-block and an Electrical Sub-block with RX and TX paths.)
2.2.1.1 Transaction Layer
The upper layer of the PCI Express* architecture is the Transaction Layer. The
Transaction Layer's primary responsibility is the assembly and disassembly of
Transaction Layer Packets (TLPs). TLPs are used to communicate transactions, such as
read and write, as well as certain types of events. The Transaction Lay er also manages
flow control of TLPs.
2.2.1.2 Data Link Layer
The middle layer in the PCI Express stack, the Data Link Layer, serves as an
intermediate stage between the Transaction Layer and the Physical Layer.
Responsibilities of the Data Link Layer include link management, error detection, and error
correction.
The transmission side of the Data Link Layer accepts TLPs assembled by the
Transaction Layer, calculates and applies data protection code and TLP sequence
number, and submits them to Physical Layer for transmission across the Link. The
receiving Data Link Layer is responsible for checking the integrity of received TLPs and
for submitting them to the Transaction Layer for further processing. On detection of TLP
error(s), this layer is responsible for requesting retransmission of TLPs until information
is correctly received, or the Link is determined to have failed. The Data Link Layer also
generates and consumes packets which are used for Link management functions.
2.2.1.3 Physical Layer
The Physical Layer includes all circuitry for interface operation, including driver and
input buffers, parallel-to-serial and serial-to-parallel conversion, PLL(s), clock recovery
circuits, and impedance matching circuitry. It also includes logical functions related to
interface initialization and maintenance. The Physical Layer exchanges data with the
Data Link Layer in an implementation-specific format, and is responsible for converting
this to an appropriate serialized format and transmitting it across the PCI Express Link
at a frequency and width compatible with the remote device.
Figure 2-3. Packet Flow Through the Layers
(Figure: a transmitted packet is framed as Framing | Sequence Number | Header | Data | ECRC | LCRC | Framing; the Header, Data, and ECRC come from the Transaction Layer, the Sequence Number and LCRC are added by the Data Link Layer, and the Framing symbols are added by the Physical Layer.)
2.2.2 PCI Express* Configuration Mechanism
The PCI Express (external graphics) link is mapped through a PCI-to-PCI bridge
structure.
PCI Express extends the configuration space to 4096 bytes per-device/function, as
compared to 256 bytes allowed by the Conventional PCI Specification. PCI Express
configuration space is divided into a PCI-compatible region (that consists of the first
256 bytes of a logical device's configuration space) and an extended PCI Express region
(that consists of the remaining configuration space). The PCI-compatible region can be
accessed using either the mechanisms defined in the PCI specification or using the
enhanced PCI Express configuration access mechanism described in the PCI Express
Enhanced Configuration Mechanism section.
The PCI Express Host Bridge is required to translate the memory-mapped PCI Express
configuration space accesses from the host processor to PCI Express configuration
cycles. To maintain compatibility with PCI configuration addressing mechanisms, it is
recommended that system software access the enhanced configuration space using
32-bit operations (32-bit aligned) only. See the PCI Express Base Specification for
details of both the PCI-compatible and PCI Express Enhanced configuration
mechanisms and transaction rules.
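A sketch of the enhanced (memory-mapped) configuration address construction follows. The bit layout is the standard one from the PCI Express Base Specification; the base address value is a placeholder, since the real base is platform-specific and reported by the BIOS.

#include <stdint.h>

/* Enhanced configuration address layout: base + bus[27:20] + device[19:15]
 * + function[14:12] + register offset[11:0], giving 4096 bytes of config
 * space per function. MMCONFIG_BASE below is an example value only; the
 * real base comes from the platform (ACPI MCFG table). */
#define MMCONFIG_BASE 0xE0000000ULL   /* placeholder */

static inline uint64_t ecam_addr(unsigned bus, unsigned dev,
                                 unsigned func, unsigned offset)
{
    return MMCONFIG_BASE
         | ((uint64_t)(bus  & 0xFF) << 20)
         | ((uint64_t)(dev  & 0x1F) << 15)
         | ((uint64_t)(func & 0x07) << 12)
         | (offset & 0xFFF);
}

/* Per the recommendation above, accesses should be 32-bit aligned, e.g.
 * reading the Vendor/Device ID DWORD of Bus 0, Device 1, Function 0:
 *   uint32_t id = *(volatile uint32_t *)ecam_addr(0, 1, 0, 0x00);
 * (requires ring-0 access to the mapped MMCONFIG region).            */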
Figure 2-4. PCI Express* Related Register Structures in the Processor
(Figure: a PCI Express* device attached through PEG0 to the PCI-PCI bridge representing the root PCI Express* ports (Device 1 and Device 6), alongside the PCI-compatible Host Bridge device (Device 0) and the DMI connection.)
2.2.3 PCI Express* Port
The PCI Express interface on the processor is a single, 16-lane (x16) port that can also
be configured at narrower widths. The PCI Express port is designed to be compliant
with the PCI Express Base Specification, Revision 3.0.
2.2.3.1 PCI Express* Lanes Connection
Figure 2-5 shows the PCIe lane mapping.
Figure 2-5. PCI Express* Typical Operation 16 Lanes Mapping
(Figure: the 16 physical lanes (0-15) map one-to-one onto a single 1 x16 controller; alternatively, lanes 0-7 can be mapped onto a 1 x8 controller and lanes 0-3 onto a 1 x4 controller.)
2.3 Direct Media Interface (DMI)
Direct Media Interface (DMI) connects the processor and the PCH. Next generation DMI
2.0 is supported.
Note: Only DMI x4 configuration is supported.
2.3.1 DMI Error Flow
DMI can only generate SERR in response to errors, never SCI, SMI, MSI, PCI INT, or
GPE. Any DMI related SERR activity is associated with Device 0.
2.3.2 Processor / PCH Compatibility Assumptions
The processor is compatible with the Intel 7 Series Chipset PCH products.
2.3.3 DMI Link Down
The DMI link going down is a fatal, unrecoverable error. If the DMI data link goes down
after the link was up, the DMI link hangs the system by not allowing the link to retrain,
in order to prevent data corruption. This link behavior is controlled by the PCH.
Downstream transactions that had been successfully transmitted across the link prior
to the link going down may be processed as normal. No completions from downstream,
non-posted transactions are returned upstream over the DMI link after a link down
event.
2.4 Processor Graphics Controller (GT)
The new graphics engine architecture includes 3D compute elements, a multi-format
hardware-assisted decode/encode pipeline, and a Mid-Level Cache (MLC) for superior
high-definition playback, video quality, and improved 3D and media performance.
The Display Engine handles delivering the pixels to the screen and is the primary
channel interface for display memory accesses and “PCI-like” traffic in and out.
2.4.1 3D and Video Engines for Graphics Processing
The 3D graphics pipeline architecture simultaneously operates on different primitives or
on different portions of the same primitive. All the cores are fully programmable,
increasing the versatility of the 3D Engine. The Gen 7.0 3D engine provides the
following performance and power-management enhancements:
• Up to 16 Execution Units (EUs)
• Hierarchical Z
• Video quality enhancements
2.4.1.1 3D Engine Execution Units
• Supports up to 16 EUs. The EUs perform 128-bit wide execution per clock
• Supports SIMD8 instructions for vertex processing and SIMD16 instructions for pixel
processing
Figure 2-6. Processor Graphics Controller Unit Block Diagram
(Figure: the 3D pipeline (Vertex Fetch, VS/GS, Hardware Clipper, Setup/Rasterize, Hierarchical Z) feeds a unified Execution Unit array with a Texture Unit and Pixel Backend; the multi-format decode/encode block provides full MPEG2, VC1, and AVC decode, full AVC encode, partial MPEG2 and VC1 encode, fixed-function post-processing, and additional post-processing.)
2.4.1.2 3D Pipeline
2.4.1.2.1 Vertex Fetch (VF) Stage
The VF stage executes 3DPRIMITIVE commands. Some enhancements have been
included to better support legacy D3D APIs as well as SGI OpenGL*.
2.4.1.2.2 Vertex Shader (VS) Stage
The VS stage performs shading of vertices output by the VF function. The VS unit
produces an output vertex reference for ever y input vertex reference received from the
VF unit, in the order received.
2.4.1.2.3 Geometry Shader (GS) Stage
The GS stage receives inputs from the VS stage and executes compiled, application-provided GS
programs that specify an algorithm to convert the vertices of an input object into some
output primitives. For example, a GS shader may convert lines of a line strip into
polygons representing a corresponding segment of a blade of grass centered on the
line. Or it could use adjacency information to detect silhouette edges of triangles and
output polygons extruding out from the edges.
2.4.1.2.4 Clip Stage
The Clip stage performs general processing on incoming 3D objects. However, it also
includes specialized logic to perform a Clip Test function on incoming objects. The Clip
Test optimizes generalized 3D Clipping. The Clip unit examines the position of incoming
vertices, and accepts/rejects 3D objects based on its Clip algorithm.
2.4.1.2.5 Strips and Fans (SF) Stage
The SF stage performs setup operations required to rasterize 3D objects. The outputs
from the SF stage to the Windower stage contain implementation-specific information
required for the rasterization of objects and also support clipping of primitives to some
extent.
2.4.1.2.6 Windower/IZ (WIZ) Stage
The WIZ unit performs an early depth test, which removes failing pixels and eliminates
unnecessary processing overhead.
The Windower uses the parameters provided by the SF unit in the object-specific
rasterization algorithms. The WIZ unit rasterizes objects into the corresponding set of
pixels. The Windower is also capable of performing dithering, whereby the illusion of a
higher resolution when using low-bpp channels in color buffers is possible. Color
dithering diffuses the sharp color bands seen on smooth-shaded objects.
2.4.1.3 Video Engine
The video engine is part of the Intel Processor Graphics for image processing, playback,
and transcode of video applications. The Processor Graphics video engine has a
dedicated fixed-function hardware pipeline for high quality decode and encode of media
content. This engine supports full hardware acceleration for decode of AVC/H.264,
VC-1, and MPEG-2 content along with encode of MPEG-2 and AVC/H.264, apart from
various video processing features. The new Processor Graphics video engine adds
support for processing features such as frame rate conversion, image stabilization, and
gamut conversion.
2.4.1.4 2D Engine
The Display Engine fetches the raw data from the memory, puts the data into a stream,
converts the data into raw pixels, organizes pixels into images, blends different planes
into a single image, encodes the data, and sends the data out to the display device.
The Display Engine executes its functions with the help of three main functional blocks
– Planes, Pipes, and Ports, except for eDP. The Planes and Pipes are in the processor
while the Ports reside in the PCH. Intel FDI connects the display engine in the processor
with the Ports in the PCH. The 2D Engine adds a new display pipe C that enables
support for three simultaneous and concurrent display configurations.
2.4.1.4.1 Processor Graphics Registers
The 2D registers consist of the original VGA registers and others that support graphics
modes with color depths, resolutions, and hardware acceleration features beyond the
original VGA standard.
2.4.1.4.2 Logical 128-Bit Fixed BLT and 256 Fill Engine
This BLT engine accelerates the GUI of Microsoft Windows* operating systems. The
128-bit BLT engine provides hardware acceleration of block transfers of pixel data for
many common Windows operations. The BLT engine can be used for the following:
• Moving rectangular blocks of data between memory locations
• Data alignment
• Performing logical operations (raster ops)
The rectangular block of data does not change, as it is transferred between memory
locations. The allowable memory transfers are between cacheable system memory and
frame buffer memory, frame buffer memory and frame buffer memory, and within
system memory. Data to be transferred can consist of regions of memory, patterns, or
solid color fills. A pattern is always 8 x 8 pixels wide and may be 8, 16, or 32 bits per
pixel.
The BLT engine expands monochrome data into a color depth of 8, 16, or 32 bits. BLTs
can be either opaque or transparent. Opaque transfers move the data specified to the
destination. Transparent transfers compare destination color to source color and write
according to the mode of transparency selected.
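A software model of the two transfer types can make the distinction concrete. This is illustrative only: the hardware engine is driven through command buffers, and it supports several transparency modes, of which the key-color compare below is just one.

#include <stdint.h>

/* Software model of opaque versus transparent transfers; not the hardware
 * programming interface. */
static void blt_opaque(uint32_t *dst, const uint32_t *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i];                 /* every pixel is written */
}

static void blt_transparent(uint32_t *dst, const uint32_t *src,
                            int n, uint32_t key)
{
    for (int i = 0; i < n; i++)
        if (dst[i] != key)               /* one possible transparency test */
            dst[i] = src[i];             /* write only where the test passes */
}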
Data is horizontally and vertically aligned at the destination. If the destination for the
BLT overlaps with the source memory location, the BLT engine specifies which area in
memory to begin the BLT transfer. Hardware is included for all 256 raster operations
(source, pattern, and destination) defined by Microsoft, including transparent BLT.
The BLT engine has instructions to invoke BLT and stretch BLT operations, permitting
software to set up instruction buffers and use batch processing. The BLT engine can
perform hardware clipping during BLTs.
2.4.2 Processor Graphics Display
The Processor Graphics controller display pipe can be broken down into three
components:
• Display Planes
• Display Pipes
• DisplayPort* and Intel® FDI
2.4.2.1 Display Planes
A display plane is a single displayed surface in memory and contains one image
(desktop, cursor, overlay). It is the portion of the display hardware logic that defines
the format and location of a rectangular region of memory that can be displayed on
display output device and delivers that data to a display pipe. This is clocked by the
Core Display Clock.
2.4.2.1.1 Primary Planes A, B, and C
Planes A, B, and C are the main display planes and are associated with Pipes A, B, and
C respectively.
2.4.2.1.2 Sprite A, B, and C
Sprite A and Sprite B are planes optimized for video decode, and are associated with
Planes A and B respectively. Sprite A and B are also double-buffered.
2.4.2.1.3 Cursors A, B, and C
Cursors A and B are small, fixed-sized planes dedicated for mouse cursor acceleration,
and are associated with Planes A and B respectively. These planes support resolutions
up to 256 x 256 each.
2.4.2.1.4 Video Graphics Array (VGA)
VGA is used for boot, safe mode, legacy games, and so on. It can be changed by an
application without operating system/driver notification, due to legacy requirements.
Figure 2-7. Processor Display Block Diagram
2.4.2.2 Display Pipes
The display pipe blends and synchronizes pixel data received from one or more display
planes and adds the timing of the display output device upon which the image is
displayed.
The display pipes A, B, and C operate independently of each other at the rate of 1 pixel
per clock. They can attach to any of the display ports. Each pipe sends display data to
eDP* or to the PCH over the Intel® Flexible Display Interface (Intel® FDI).
2.4.2.3 Display Ports
The display ports consist of output logic and pins that transmit the display data to the
associated encoding logic and send the data to the display device (that is, LVDS,
HDMI*, DVI, SDVO, and so on). All display interfaces connecting external displays are
now repartitioned and driven from the PCH. Refer to the PCH datasheet for more details
on display port support.
2.4.3 Intel® Flexible Display Interface (Intel® FDI)
Intel® Flexible Display Interface (Intel® FDI) is a proprietary link for carrying display
traffic from the Processor Graphics controller to the PCH display I/Os. Intel FDI
supports two or three independent channels – one for pipe A, one for pipe B, and one
for Pipe C.
Channels A and B have a maximum of four transmit (Tx) differential pairs used for
transporting pixel and framing data from the display engine in two-display
configurations. In three-display configurations, Channel A has four transmit (Tx)
differential pairs while Channels B and C have two transmit (Tx) differential pairs.
• Each channel has four transmit (Tx) differential pairs used for transporting pixel
and framing data from the display engine
• Each channel has one single-ended LineSync and one FrameSync input (1-V CMOS
signaling)
• One display interrupt line input (1-V CMOS signaling)
• Intel FDI may dynamically scale down to 2X or 1X based on actual display
bandwidth requirements
• Common 100-MHz reference clock
• Each channel transports at a rate of 2.7 Gbps
• PCH supports end-to-end lane reversal across both channels (no reversal support
required in the processor)
2.4.4 Multi Graphics Controllers Multi-Monitor Support
The processor supports simultaneous use of the Processor Graphics Controller (GT) and
a x16 PCI Express* Graphics (PEG) device.
The processor supports a maximum of 2 displays connected to the PEG card in parallel
with up to 2 displays connected to the processor and PCH.
Note: When supporting Multi Graphics Multi Monitors, “drag and drop” between monitors and
the 2x8 PEG is not supported.
2.5 Platform Environment Control Interface (PECI)
The PECI is a one-wire interface that provides a communication channel between a
PECI client (processor) and a PECI master. The processor implements a PECI interface
to:
• Allow communication of processor thermal and other information to the PECI
master.
• Read averaged Digital Thermal Sensor (DTS) values for fan speed control.
2.6 Interface Clocking
2.6.1 Internal Clocking Requirements
§ §
Table 2-5. Reference Clock
Reference Input Clock | Input Frequency | Associated PLL
BCLK[0]/BCLK#[0] | 100 MHz | Processor / Memory / Graphics / PCIe / DMI / FDI
3 Technologies
This chapter provides a high-level description of Intel technologies implemented in the
processor.
The implementation of the features may vary between the processor SKUs.
Details on the different technologies of Intel processors and other relevant external
notes are located at the Intel technology web site: http://www.intel.com/technology/.
3.1 Intel® Virtualization Technology (Intel® VT)
Intel® Virtualization Technology (Intel® VT) makes a single system appear as multiple
independent systems to software. This allows multiple, independent operating systems
to run simultaneously on a single system. Intel VT comprises technology components
to support virtualization of platforms based on Intel architecture microprocessors and
chipsets. Intel® Virtualization Technology for IA-32, Intel® 64 and Intel® Architecture
(Intel® VT-x) added hardware support in the processor to improve the virtualization
performance and robustness. Intel Virtualization Technology for Directed I/O (Intel VT-
d) adds chipset hardware implementation to support and improve I/O virtualization
performance and robustness.
Intel VT-x specifications and functional descriptions are included in the Intel® 64 and
IA-32 Architectures Software Developer’s Manual, Volume 3B and is available at:
http://www.intel.com/products/processor/manuals/index.htm
Other Intel VT documents can be referenced at:
http://www.intel.com/technology/virtualization/index.htm
3.1.1 Intel® Virtualization Technology (Intel® VT) for
IA-32, Intel® 64 and Intel® Architecture
(Intel® VT-x) Objectives
Intel VT-x provides hardware acceleration for virtualization of IA platforms. A Virtual
Machine Monitor (VMM) can use Intel VT-x features to provide an improved, reliable
virtualized platform. By using Intel VT-x, a VMM is:
• Robust: VMMs no longer need to use paravirtualization or binary translation. This
means that they will be able to run off-the-shelf operating systems and applications
without any special steps.
• Enhanced: Intel VT enables VMMs to run 64-bit guest operating systems on IA x86
processors.
• More reliable: Due to the hardware support, VMMs can now be smaller, less
complex, and more efficient. This improves reliability and availability and reduces
the potential for software conflicts.
• More secure: The use of hardware transitions in the VMM strengthens the isolation
of VMs and further prevents corruption of one VM from affecting others on the
same system.
3.1.2 Intel® Virtualization Technology (Intel® VT) for
IA-32, Intel® 64 and Intel® Architecture
(Intel® VT-x) Features
The processor core supports the following Intel VT-x features:
• Extended Page Tables (EPT)
— EPT is hardware-assisted page table virtualization
— It eliminates VM exits from the guest operating system to the VMM for shadow
page-table maintenance
• Virtual Processor IDs (VPID)
— Ability to assign a VM ID to tag processor core hardware structures (such as
TLBs)
— This avoids flushes on VM transitions to give a lower-cost VM transition time
and an overall reduction in virtualization overhead
• Guest Preemption Timer
— Mechanism for a VMM to preempt the execution of a guest operating system
after an amount of time specified by the VMM. The VMM sets a timer value
before entering a guest.
— The feature aids VMM developers in flexibility and Quality of Service (QoS)
guarantees
• Descriptor-Table Exiting
— Descriptor-table exiting allows a VMM to protect a guest operating system from
internal (malicious software based) attack by preventing relocation of key
system data structures like the IDT (interrupt descriptor table), GDT (global
descriptor table), LDT (local descriptor table), and TSS (task segment selector)
— A VMM using this feature can intercept (by a VM exit) attempts to relocate
these data structures and prevent them from being tampered with by malicious
software
3.1.3 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) Objectives
The key Intel VT-d objectives are domain-based isolation and hardware-based
virtualization. A domain can be abstractly defined as an isolated environment in a
platform to which a subset of host physical memory is allocated. Virtualization allows
for the creation of one or more partitions on a single system. This could be multiple
partitions in the same operating system, or there can be multiple operating system
instances running on the same system – offering benefits such as system
consolidation, legacy migration, activity partitioning, or security.
3.1.4 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) Features
The processor supports the following Intel VT-d features:
• Memory controller and processor graphics comply with the Intel® VT-d 1.2 specification
• Two VT-d DMA remap engines:
— iGFX DMA remap engine
— DMI / PEG
• Support for root entry, context entry, and default context
• 39-bit guest physical address and host physical address widths
• Support for 4K page sizes only
• Support for register-based fault recording only (for single entry only) and support
for MSI interrupts for faults
• Support for both leaf and non-leaf caching
• Support for boot protection of the default page table
• Support for non-caching of invalid page table entries
• Support for hardware-based flushing of translated but pending writes and pending
reads, on IOTLB invalidation
• Support for page-selective IOTLB invalidation
• MSI cycles (MemWr to address FEEx_xxxxh) not translated
— Translation faults result in cycle forwarding to the VBIOS region (byte enables
masked for writes). Returned data may be bogus for internal agents; PEG / DMI
interfaces return unsupported request status.
• Interrupt Remapping is supported
• Queued invalidation is supported
• VT-d translation bypass address range is supported (Pass Through)
Note: Intel VT-d Technology may not be available on all SKUs.
3.1.5 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) Features Not Supported
The following features are not supported by the processor with Intel VT-d:
• No support for PCIe* endpoint caching (ATS)
• No support for Intel VT-d read prefetching / snarfing (that is, translations within a
cacheline are not stored in an internal buffer for reuse for subsequent translations)
• No support for advance fault reporting
• No support for super pages
• No support for Intel VT-d translation bypass address range (such usage models
need to be resolved with VMM help in setting up the page tables correctly)
3.2 Intel® Trusted Execution Technology (Intel® TXT)
Intel Trusted Execution Technology (Intel TXT) defines platform-level enhancements
that provide the building blocks for creating trusted platforms.
The Intel TXT platform helps to provide the authenticity of the controlling environment
such that those wishing to rely on the platform can make an appropriate trust decision.
The Intel TXT platform determines the identity of the controlling environment by
accurately measuring and verifying the controlling software.
Another aspect of the trust decision is the ability of the platform to resist attempts to
change the controlling environment. The Intel TXT platform will resist attempts by
software processes to change the controlling environment or bypass the bounds set by
the controlling environment.
Intel TXT is a set of extensions designed to provide a measured and controlled launch
of system software that will then establish a protected environment for itself and any
additional software that it may execute.
These extensions enhance two areas:
• The launching of the Measured Launched Environment (MLE)
• The protection of the MLE from potential corruption
The enhanced platform provides these launch and control interfaces using Safer Mode
Extensions (SMX).
The SMX interface includes the following functions:
• Measured / Verified launch of the MLE
• Mechanisms to ensure the above measurement is protected and stored in a secure
location
• Protection mechanisms that allow the MLE to control attempts to modify itself
For more information, refer to the Intel® TXT Measured Launched Environment
Developer’s Guide in http://www.intel.com/content/www/us/en/software-
developers/intel-txt-software-development-guide.html.
3.3 Intel® Hyper-Threading Technology (Intel® HT
Technology)
The processor supports Intel® Hyper-Threading Technology (Intel® HT Technology)
that allows an execution core to function as two logical processors. While some
execution resources such as caches, execution units, and buses are shared, each
logical processor has its own architectural state with its own set of general-purpose
registers and control registers. This feature must be enabled using the BIOS and
requires operating system support.
Intel recommends enabling Intel® HT Technology with Microsoft Windows 7*, Microsoft
Windows Vista*, Microsoft Windows* XP Professional / Windows* XP Home, and
disabling Intel® HT Technology using the BIOS for all previous versions of Windows
operating systems. For more information on Intel® HT Technology, see
http://www.intel.com/technology/platform-technology/hyper-threading/.
3.4 Intel® Turbo Boost Technology
Intel® Turbo Boost Technology is a feature that allows the processor core to
opportunistically and automatically run faster than its rated operating frequency / render
clock if it is operating below power, temperature, and current limits. The Intel Turbo
Boost Technology feature is designed to increase performance of both multi-threaded
and single-threaded workloads. Maximum frequency is dependent on the SKU and
number of active cores. No special hardware support is necessary for Intel Turbo Boost
Technology. BIOS and the operating system can enable or disable Intel Turbo Boost
Technology. Intel Turbo Boost Technology will increase the ratio of application power to
TDP. Thus, thermal solutions and platform cooling that are designed to less than
thermal design guidance might experience thermal and performance issues, since more
applications will tend to run at the maximum power limit for significant periods of time.
Note: Intel Turbo Boost Technology may not be available on all SKUs.
3.4.1 Intel® Turbo Boost Technology Frequency
The processor’s rated frequency assumes that all execution cores are running an
application at the thermal design power (TDP). However, under typical operation, not
all cores are active. Therefore, most applications are consuming less than the TDP at the
rated frequency. To take advantage of the available thermal headroom, the active cores
can increase their operating frequency.
To determine the highest performance frequency amongst active cores, the processor
takes the following into consideration:
• The number of cores operating in the C0 state
• The estimated current consumption
• The estimated power consumption
• The temperature
Any of these factors can affect the maximum frequency for a given workload. If the
power, current, or thermal limit is reached, the processor will automatically reduce the
frequency to stay within its TDP limit.
Note: Intel Turbo Boost Technology processor frequencies are only active if the operating
system is requesting the P0 state. For more information on P-states and C-states, refer
to Chapter 4.
3.4.2 Intel® Turbo Boost Technology Graphics Frequency
Graphics render frequency is selected by the processor dynamically based on graphics
workload demand. The processor can optimize both processor and Processor Graphics
performance by managing power for the overall package. For the integrated graphics,
this allows an increase in the render core frequency and increased graphics
performance for graphics intensive workloads. In addition, during processor intensive
workloads when the graphics power is low, the processor core can increase its
frequency higher within the package power limit. Enabling Intel Turbo Boost Technology
will maximize the performance of the processor core and the graphics render frequency
within the specified package power levels.
3.5 Intel® Advanced Vector Extensions (Intel® AVX)
Intel Advanced Vector Extensions (Intel AVX) is the latest expansion of the Intel
instruction set. It extends the Intel Streaming SIMD Extensions (Intel SSE) from 128-
bit vectors to 256-bit vectors. Intel AVX addresses the continued need for vector
floating-point performance in mainstream scientific and engineering numerical
applications, visual processing, recognition, data-mining / synthesis, gaming, physics,
cryptography and other application areas.
The enhancement in Intel AVX allows for improved performance due to wider vectors,
new extensible syntax, and rich functionality including the ability to better manage,
rearrange, and sort data. In the processor, new instructions were added to allow
graphics, media, and imaging applications to speed up the processing of large amounts
of data by reducing the memory bandwidth and footprint. The new instructions convert
operands between single-precision floating point values and half-precision (16-bit)
floating point values.
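A minimal sketch of those conversions using the corresponding compiler intrinsics (the F16C conversion instructions; not necessarily present on every SKU):

#include <immintrin.h>   /* AVX + F16C intrinsics; build with -mavx -mf16c */
#include <stdint.h>

/* Load eight packed 16-bit half floats, widen to a 256-bit single-precision
 * vector, scale them, then narrow back to half precision. */
void scale_halves(uint16_t dst[8], const uint16_t src[8], float scale)
{
    __m128i h   = _mm_loadu_si128((const __m128i *)src);
    __m256  f   = _mm256_cvtph_ps(h);                    /* 8 x half -> 8 x float */
    f           = _mm256_mul_ps(f, _mm256_set1_ps(scale));
    __m128i out = _mm256_cvtps_ph(f, _MM_FROUND_TO_NEAREST_INT);
    _mm_storeu_si128((__m128i *)dst, out);
}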
For more information on Intel AVX, see http://www.intel.com/software/avx.
3.6 Security and Cryptography Technologies
3.6.1 Intel® Advanced Encryption Standard New Instructions
(Intel® AES-NI)
The processor supports Intel Advanced Encryption Standard New Instructions (Intel
AES-NI) that are a set of Single Instruction Multiple Data (SIMD) instructions that
enable fast and secure data encryption and decryption based on the Advanced
Encryption Standard (AES). Intel AES-NI are valuable for a wide range of cryptographic
applications, for example: applications that perform bulk encryption / decryption,
authentication, random number generation, and authenticated encryption. AES is
broadly accepted as the standard for both government and industry applications, and is
widely deployed in various protocols.
AES-NI consists of six Intel SSE instructions. Four instructions, namely AESENC,
AESENCLAST, AESDEC, and AESDECLAST, facilitate high performance AES encryption and
decryption. The other two, AESIMC and AESKEYGENASSIST, support the AES key
expansion procedure. Together, these instructions provide full hardware support for
AES, offering security, high performance, and a great deal of flexibility.
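As an illustration, one AES-128 block encryption using these instructions through compiler intrinsics, assuming the 11 round keys have already been expanded (for example with AESKEYGENASSIST); key expansion and mode-of-operation logic are omitted.

#include <wmmintrin.h>   /* AES-NI intrinsics; build with -maes */

static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);          /* initial AddRoundKey */
    for (int r = 1; r < 10; r++)
        block = _mm_aesenc_si128(block, rk[r]);   /* rounds 1..9         */
    return _mm_aesenclast_si128(block, rk[10]);   /* final round         */
}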
3.6.2 PCLMULQDQ Instruction
The processor supports the carry-less multiplication instruction, PCLMULQDQ.
PCLMULQDQ is a Single Instruction Multiple Data (SIMD) instruction that computes the
128-bit carry-less multiplication of two 64-bit operands without generating and
propagating carries. Carry-less multiplication is an essential processing component of
several cryptographic systems and standards. Hence, accelerating carry-less
multiplication can significantly contribute to achieving high speed secure computing
and communication.
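A minimal example of invoking the instruction through its compiler intrinsic:

#include <wmmintrin.h>   /* PCLMULQDQ intrinsic; build with -mpclmul */
#include <stdint.h>

/* Carry-less multiply of two 64-bit operands into a 128-bit product.
 * The immediate selects which 64-bit half of each source operand to use;
 * 0x00 means the low quadword of both. */
static __m128i clmul64(uint64_t a, uint64_t b)
{
    __m128i va = _mm_set_epi64x(0, (long long)a);
    __m128i vb = _mm_set_epi64x(0, (long long)b);
    return _mm_clmulepi64_si128(va, vb, 0x00);
}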
3.6.3 RDRAND Instruction
The processor introduces a software visible random number generation mechanism
supported by a high quality entropy source. This capability will be made available to
programmers through the new RDRAND instruction. The resultant random number
generation capability is designed to comply with existing industry standards in this
regard (ANSI X9.82 and NIST SP 800-90).
Some possible usages of the new RDRAND instruction include cryptographic key
generation as used in a variety of applications including communication, digital
signatures, secure storage, and so on.
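A minimal usage sketch via the compiler intrinsic; the bounded retry loop reflects the fact that RDRAND can transiently report no data available (CF = 0) while the DRNG reseeds.

#include <immintrin.h>   /* RDRAND intrinsic; build with -mrdrnd */

static int rdrand32(unsigned int *out)
{
    for (int i = 0; i < 10; i++)
        if (_rdrand32_step(out))
            return 1;    /* success: *out holds a hardware random value */
    return 0;            /* give up; caller should fall back or report  */
}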
3.7 Intel® 64 Architecture x2APIC
The Intel x2APIC architecture extends the xAPIC architecture that provides key
mechanism for interrupt delivery. This extension is intended primarily to increase
processor addressability.
Specifically, x2APIC:
• Retains all key elements of compatibility to the xAPIC architecture:
— Delivery modes
— Interrupt and processor priorities
— Interrupt sources
— Interrupt destination types
• Provides extensions to scale processor addressability for both the logical and
physical destination modes
• Adds new features to enhance performance of interrupt delivery
• Reduces complexity of logical destination mode interrupt delivery on link based
architectures
The key enhancements provided by the x2APIC architecture over xAPIC are the
following:
• Support for two modes of operation to provide backward compatibility and
extensibility for future platform innovations:
— In xAPIC compatibility mode, APIC registers are accessed through a memory
mapped interface to a 4-KB page, identical to the xAPIC architecture.
— In x2APIC mode, APIC registers are accessed through Model Specific Register
(MSR) interfaces. In this mode, the x2APIC architecture provides significantly
increased processor addressability and some enhancements on interrupt
delivery.
• Increased range of processor addressability in x2APIC mode:
— The physical xAPIC ID field increases from 8 bits to 32 bits, allowing for interrupt
processor addressability of up to 4G – 1 processors in physical destination mode.
A processor implementation of the x2APIC architecture can support fewer than
32 bits in a software transparent fashion.
— The logical xAPIC ID field increases from 8 bits to 32 bits. The 32-bit logical x2APIC
ID is partitioned into two sub-fields – a 16-bit cluster ID and a 16-bit logical ID
within the cluster. Consequently, ((2^20) - 16) processors can be addressed in
logical destination mode. Processor implementations can support fewer than
16 bits in the cluster ID sub-field and logical ID sub-field in a software agnostic
fashion.
• More efficient MSR interface to access APIC registers:
— To enhance inter-processor and self-directed interrupt delivery, as well as the
ability to virtualize the local APIC, the APIC register set can be accessed only
through MSR-based interfaces in the x2APIC mode. The Memory Mapped I/O
(MMIO) interface used by xAPIC is not supported in the x2APIC mode.
• The semantics for accessing APIC registers have been revised to simplify the
programming of frequently-used APIC registers by system software. Specifically,
the software semantics for using the Interrupt Command Register (ICR) and End Of
Interrupt (EOI) registers have been modified to allow for more efficient delivery
and dispatching of interrupts.
The x2APIC extensions are made available to system software by enabling the local
x2APIC unit in the “x2APIC” mode. To benefit from x2APIC capabilities, a new operating
system and a new BIOS are both needed, with special support for the x2APIC mode.
The x2APIC architecture provides backward compatibility to the xAPIC architecture and
forward extendibility for future Intel platform innovations.
Note: Intel x2APIC technology may not be available on all SKUs.
For more information, refer to the Intel 64 Architecture x2APIC specification at
http://www.intel.com/products/processor/manuals/
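As an illustration of the MSR-based register access used in x2APIC mode, a ring-0 sketch follows. The MSR indices are the architecturally defined x2APIC addresses (range 0x800-0x8FF); this is not runnable from user space and assumes the local APIC has already been switched to x2APIC mode via IA32_APIC_BASE.

#include <stdint.h>

#define MSR_X2APIC_APICID 0x802   /* 32-bit local APIC ID        */
#define MSR_X2APIC_EOI    0x80B   /* write-only End Of Interrupt */

static inline uint64_t rdmsr(uint32_t msr)
{
    uint32_t lo, hi;
    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                                "d"((uint32_t)(val >> 32)));
}

static uint32_t my_x2apic_id(void) { return (uint32_t)rdmsr(MSR_X2APIC_APICID); }
static void     send_eoi(void)     { wrmsr(MSR_X2APIC_EOI, 0); }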
3.8 Supervisor Mode Execution Protection (SMEP)
The processor introduces a new mechanism that provides the next level of system
protection by blocking malicious software attacks from user mode code when the
system is running in the highest privilege level.
This technology helps protect the system from virus attacks and unwanted code.
For more information, refer to the Intel® 64 and IA-32 Architectures Software
Developer’s Manual, Volume 3A (see Section 1.8, “Related Documents” on page 22).
3.9 Power Aware Interrupt Routing (PAIR)
The processor adds an enhanced power-performance technology that routes interrupts
to threads or cores based on their sleep states. For example, for energy
savings, it routes the interrupt to the active cores without waking the deep idle cores.
For performance, it routes the interrupt to the idle (C1) cores without interrupting the
already heavily loaded cores. This enhancement is mostly beneficial for high-interrupt
scenarios such as Gigabit LAN, WLAN peripherals, and so on.
§ §
4 Power Management
This chapter provides information on the following power management topics:
• Advanced Configuration and Power Interface (ACPI) States
• Processor Core
• Integrated Memory Controller (IMC)
• PCI Express*
• Direct Media Interface (DMI)
• Processor Graphics Controller
Figure 4-1. Processor Power States
(Figure:
G0 – Working
  S0 – CPU fully powered on
    C0 – Active mode (P0 ... Pn)
    C1 – Auto halt
    C1E – Auto halt, low frequency, low voltage
    C3 – L1/L2 caches flushed, clocks off
    C6 – save core states before shutdown
G1 – Sleeping
  S3 cold – Sleep – Suspend To RAM (STR)
  S4 – Hibernate – Suspend To Disk (STD), wakeup on PCH
  S5 – Soft Off – no power, wakeup on PCH
G3 – Mechanical Off)
Note: Power states availability may vary between the different SKUs.
4.1 Advanced Configuration and Power Interface
(ACPI) States Supported
The ACPI states supported by the processor are described in this section.
4.1.1 System States
Table 4-1. System States
State | Description
G0/S0 | Full On
G1/S3-Cold | Suspend-to-RAM (STR). Context saved to memory (S3-Hot is not supported by the processor).
G1/S4 | Suspend-to-Disk (STD). All power lost (except wakeup on PCH).
G2/S5 | Soft off. All power lost (except wakeup on PCH). Total reboot.
G3 | Mechanical off. All power removed from system.
4.1.2 Processor Core / Package Idle States
Table 4-2. Processor Core / Package State Support
State | Description
C0 | Active mode, processor executing code
C1 | AutoHALT state
C1E | AutoHALT state with lowest frequency and voltage operating point
C3 | Execution cores in C3 flush their L1 instruction cache, L1 data cache, and L2 cache to the L3 shared cache. Clocks are shut off to each core.
C6 | Execution cores in this state save their architectural state before removing core voltage.
4.1.3 Integrated Memory Controller States
Table 4-3. Integrated Memory Controller States
State | Description
Power up | CKE asserted. Active mode.
Pre-charge Power Down | CKE de-asserted (not self-refresh) with all banks closed
Active Power Down | CKE de-asserted (not self-refresh) with minimum one bank active
Self-Refresh | CKE de-asserted using device self-refresh
4.1.4 PCI Express* Link States
Table 4-4. PCI Express* Link States
State | Description
L0 | Full on – Active transfer state
L0s | First Active Power Management low power state – Low exit latency
L1 | Lowest Active Power Management – Longer exit latency
L3 | Lowest power state (power-off) – Longest exit latency
4.1.5 Direct Media Interface (DMI) States
Table 4-5. Direct Media Interface (DMI) States
State | Description
L0 | Full on – Active transfer state
L0s | First Active Power Management low power state – Low exit latency
L1 | Lowest Active Power Management – Longer exit latency
L3 | Lowest power state (power-off) – Longest exit latency
4.1.6 Processor Graphics Controller States
Table 4-6. Processor Graphics Controller States
State | Description
D0 | Full on, display active
D3 | Cold Power-off
4.1.7 Interface State Combinations
Table 4-7. G, S, and C State Combinations
Global (G) State | Sleep (S) State | Processor Package (C) State | Processor State | System Clocks | Description
G0 | S0 | C0 | Full On | On | Full On
G0 | S0 | C1/C1E | Auto-Halt | On | Auto-Halt
G0 | S0 | C3 | Deep Sleep | On | Deep Sleep
G0 | S0 | C6 | Deep Power Down | On | Deep Power Down
G1 | S3 | Power off | – | Off, except RTC | Suspend to RAM
G1 | S4 | Power off | – | Off, except RTC | Suspend to Disk
G2 | S5 | Power off | – | Off, except RTC | Soft Off
G3 | NA | Power off | – | Power off | Hard off
4.2 Processor Core Power Management
While executing code, Enhanced Intel SpeedStep Technology optimizes the processor’s
frequency and core voltage based on workload. Each frequency and voltage operating
point is defined by ACPI as a P-state. When the processor is not executing code, it is
idle. A low-power idle state is defined by ACPI as a C-state. In general, lower power
C-states have longer entry and exit latencies.
4.2.1 Enhanced Intel® SpeedStep® Technology
The following are the key features of Enhanced Intel SpeedStep Technology:
• Multiple frequency and voltage points for optimal performance and power
efficiency. These operating points are known as P-states.
• Frequency selection is software controlled by writing to processor MSRs (see the
sketch following this list). The voltage is optimized based on the selected frequency
and the number of active processor cores.
— If the target frequency is higher than the current frequency, VCC is ramped up
in steps to an optimized voltage. This voltage is signaled by the SVID bus to the
voltage regulator. Once the voltage is established, the PLL locks on to the
target frequency.
— If the target frequency is lower than the current frequency, the PLL locks to the
target frequency, then transitions to a lower voltage by signaling the target
voltage on the SVID bus.
• All active processor cores share the same frequency and voltage. In a multi-core
processor, the highest frequency P-state requested amongst all active cores is
selected.
• Software-requested transitions are accepted at any time. If a previous transition
is in progress, the new transition is deferred until the previous transition is
completed.
• The processor controls voltage ramp rates internally to ensure glitch-free
transitions.
• Because there is low transition latency between P-states, a significant number of
transitions per second are possible.
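The sketch referenced in the list above follows. It is illustrative only: the IA32_PERF_STATUS / IA32_PERF_CTL MSR numbers are architectural, but the bit-field interpretation in the comments is an assumption, and P-state requests are normally left entirely to the operating system's P-state governor. The example uses Linux's /dev/cpu/N/msr interface (root, msr module loaded).

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

#define IA32_PERF_STATUS 0x198   /* current performance state value      */
#define IA32_PERF_CTL    0x199   /* performance state request (P-state)  */

int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDWR);
    if (fd < 0) { perror("msr"); return 1; }

    uint64_t status = 0;
    if (pread(fd, &status, sizeof status, IA32_PERF_STATUS) != sizeof status) {
        perror("pread"); close(fd); return 1;
    }
    /* On this family the frequency ratio is conventionally read from
     * bits [15:8] of IA32_PERF_STATUS (assumption; see the SDM).        */
    printf("current ratio field: %llu\n",
           (unsigned long long)((status >> 8) & 0xFF));

    /* Re-request the current operating point; a real governor would write
     * the encoded target ratio/VID for the desired P-state instead.      */
    uint64_t ctl = status & 0xFFFFULL;
    if (pwrite(fd, &ctl, sizeof ctl, IA32_PERF_CTL) != sizeof ctl)
        perror("pwrite");
    close(fd);
    return 0;
}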
4.2.2 Low-Power Idle States
When the processor is idle, low-power idle states (C-states) are used to save power.
More power savings actions are taken for numerically higher C-states. However, higher
C-states have longer exit and entry latencies. Resolution of C-states occurs at the
thread, processor core, and processor package level. Thread-level C-states are
available if Intel® HT Technology is enabled.
Caution: Long term reliability cannot be assured unless all the Low Power Idle States are
enabled.
Entry and exit of the C-States at the thread and core level are shown in Figure 4-3.
While individual threads can request low power C-states, power saving actions only
take place once the core C-state is resolved. Core C-states are automatically resolved
by the processor. For thread and core C-states, a transition to and from C0 is required
before entering any other C-state.
Note: If enabled, the core C-state will be C1E if all enabled cores have also resolved a core C1 state or higher.
Figure 4-2. Idle Power Management Breakdown of the Processor Cores (diagram: the processor package state contains the per-core states; each core state contains Thread 0 and Thread 1 states)
Figure 4-3. Thread and Core C-State Entry and Exit (state diagram: from C0, MWAIT(C1)/HLT enters C1, MWAIT(C1)/HLT with C1E enabled enters C1E, MWAIT(C3)/P_LVL2 I/O read enters C3, and MWAIT(C6)/P_LVL3 I/O read enters C6)
Table 4-8. Coordination of Thread Power States at the Core Level
(Cell values give the resolved Processor Core C-State.)
Thread 0 \ Thread 1 | C0 | C1 | C3 | C6
C0 | C0 | C0 | C0 | C0
C1 | C0 | C1¹ | C1¹ | C1¹
C3 | C0 | C1¹ | C3 | C3
C6 | C0 | C1¹ | C3 | C6
4.2.3 Requesting Low-Power Idle States
The primary software interfaces for requesting low power idle states are through the
MWAIT instruction with sub-state hints and the HLT instruction (for C1 and C1E).
However, software may make C-state requests using the legacy method of I/O reads
from the ACPI-defined processor clock control registers, referred to as P_LVLx. This
method of requesting C-states provides legacy support for operating systems that
initiate C-state transitions using I/O reads.
To seamlessly support legacy operating systems, P_LVLx I/O reads are converted within the processor to the equivalent MWAIT C-state request. Therefore, P_LVLx reads do not directly result in I/O reads to the system. This feature, known as I/O MWAIT redirection, must be enabled in the BIOS.
Note: The P_LVLx I/O Monitor address needs to be set up before using the P_LVLx I/O read interface. Each P_LVLx is mapped to the supported MWAIT(Cx) instruction as shown in Table 4-9.
The BIOS can write to the C-state range field of the PMG_IO_CAPTURE MSR to restrict the range of I/O addresses that are trapped and emulated as MWAIT-like functionality. Any P_LVLx reads outside of this range do not cause an I/O redirection to an MWAIT(Cx)-like request; they fall through like a normal I/O instruction.
Note: When P_LVLx I/O instructions are used, MWAIT sub-states cannot be defined. The MWAIT sub-state is always zero if I/O MWAIT redirection is used. By default, P_LVLx I/O redirections enable the MWAIT 'break on EFLAGS.IF' feature that triggers a wakeup on an interrupt even if interrupts are masked by EFLAGS.IF.
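For orientation only, the MONITOR/MWAIT pair that an operating system's idle loop uses to request these states can be sketched as below. This is a hedged sketch: the instructions execute only at ring 0, the hint encoding (target C-state in EAX[7:4], sub-state in EAX[3:0]) and the concrete C6 hint value are assumptions to be checked against the Intel® 64 and IA-32 Architectures Software Developer's Manual, and real idle drivers add considerably more bookkeeping.

/* Hedged ring-0 sketch: request a core C-state with MONITOR/MWAIT.
 * The hint encoding and the example hint value for C6 are assumptions;
 * the hint-to-C-state mapping is model specific. ECX[0] = 1 requests a
 * wakeup on interrupts even when they are masked, matching the
 * 'break on EFLAGS.IF' behavior described above. */
#include <stdint.h>

static inline void idle_mwait(volatile void *monitor_addr, uint32_t hint)
{
    /* Arm the monitor on a cache line that the waking agent will write. */
    __asm__ volatile("monitor" : : "a"(monitor_addr), "c"(0), "d"(0));
    /* Enter the requested C-state; ECX[0]=1 = break on masked interrupts. */
    __asm__ volatile("mwait" : : "a"(hint), "c"(1));
}

/* Example (assumed hint value): request core C6. */
static volatile uint64_t wake_flag;
static void enter_c6(void) { idle_mwait(&wake_flag, 0x20); }

P_LVLx I/O reads reach the same internal request point; Table 4-9 gives the fixed conversion applied by I/O MWAIT redirection.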
4.2.4 Core C-states
The following are general rules for all core C-states, unless specified otherwise:
• A core C-state is determined by the lowest numerical thread state (for example, Thread 0 requesting C1E while Thread 1 requests C3 results in a core C1E state). See Table 4-8; a minimal resolution sketch follows this list.
• A core transitions to the C0 state when:
  – An interrupt occurs
  – There is an access to the monitored address, if the state was entered using an MWAIT instruction
• For core C1/C1E, core C3, and core C6, an interrupt directed toward a single thread wakes only that thread. However, since both threads are then no longer in the same core C-state, the core resolves to C0.
• A system reset re-initializes all processor cores.
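A minimal sketch of that resolution rule (illustration only; the processor performs this in hardware, and the enum values below are simply chosen so that numeric comparison matches Tables 4-8 and 4-10):

/* Resolved C-state = lowest numerical (shallowest) state requested.
 * The same rule applies thread->core (Table 4-8) and core->package
 * (Table 4-10), before platform permission or auto-demotion applies. */
enum cstate { C0 = 0, C1 = 1, C3 = 3, C6 = 6 };

static enum cstate resolve(enum cstate a, enum cstate b)
{
    return (a < b) ? a : b;   /* e.g. resolve(C6, C3) == C3 */
}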
4.2.4.1 Core C0 State
The normal operating state of a core where code is being executed.
Table 4-9. P_LVLx to MWAIT Conversion
P_LVLx MWAIT(Cx) Notes
P_LVL2 MWAIT(C3)
P_LVL3 MWAIT(C6) C6. No sub-states allowed.
4.2.4.2 Core C1 / C1E State
C1/C1E is a low power state entered when all threads within a core execute a HLT or
MWAIT(C1/C1E) instruction.
A System Management Interrupt (SMI) handler returns execution to either Normal
state or the C1/C1E state. See the Intel® 64 and IA-32 Architecture Software
Developer’s Manual, Volume 3A/3B: System Programmer’s Guide for more information.
While a core is in C1/C1E state, it processes bus snoops and snoops from other
threads. For more information on C1E, see “Package C1/C1E”.
4.2.4.3 Core C3 State
Individual threads of a core can enter the C3 state by initiating a P_LVL2 I/O read to
the P_BLK or an MWAIT(C3) instruction. A core in C3 state flushes the contents of its
L1 instruction cache, L1 data cache, and L2 cache to the shared L3 cache, while
maintaining its architectural state. All core clocks are stopped at this point. Because the
core’ s caches are flushed, the processor does not wake any core that is in the C3 state
when either a snoop is detected or when another core accesses cacheable memory.
4.2.4.4 Core C6 State
Individual threads of a core can enter the C6 state by initiating a P_LVL3 I/O read or an MWAIT(C6) instruction. Before entering core C6, the core saves its architectural state to a dedicated SRAM. Once complete, the core has its voltage reduced to zero volts. During exit, the core is powered on and its architectural state is restored.
4.2.4.5 C-State Auto-Demotion
In general, deeper C-states such as C6 have longer latencies and higher energy entry/exit costs. The resulting performance and energy penalties become significant when the entry/exit frequency of a deeper C-state is high. Therefore, incorrect or inefficient usage of deeper C-states has a negative impact on idle power. To increase residency and improve idle power in deeper C-states, the processor supports C-state auto-demotion.
There are two C-state auto-demotion options:
• C6 to C3
• C6/C3 to C1
The decision to demote a core from C6 to C3 or from C6/C3 to C1 is based on each core's immediate residency history. Upon each core C6 request, the core C-state is demoted to C3 or C1 until a sufficient amount of residency has been established. At that point, the core is allowed to go into C3 or C6. Each option can be enabled concurrently or individually.
This feature is disabled by default. BIOS must enable it in the
PMG_CST_CONFIG_CONTROL register. The auto-demotion policy is also configured by
this register.
4.2.5 Package C-States
The processor supports package C0, C1/C1E, C3, and C6 power states. The following is a summary of the general rules for package C-state entry. These apply to all package C-states unless specified otherwise:
• A package C-state request is determined by the lowest numerical core C-state amongst all cores.
• A package C-state is automatically resolved by the processor depending on the core idle power states and the status of the platform components.
  – Each core can be at a lower idle power state than the package if the platform does not grant the processor permission to enter a requested package C-state.
  – The platform may allow additional power savings to be realized in the processor.
• For package C-states, the processor is not required to enter C0 before entering any other C-state.
The processor exits a package C-state when a break event is detected. Depending on the type of break event, the processor does the following:
• If a core break event is received, the target core is activated and the break event message is forwarded to the target core.
  – If the break event is not masked, the target core enters the core C0 state and the processor enters package C0.
• If the break event was due to a memory access or snoop request:
  – If the platform did not request to keep the processor in a higher package C-state, the package returns to its previous C-state.
  – If the platform requests a higher power C-state, the memory access or snoop request is serviced and the package remains in the higher power C-state.
Table 4-10 shows package C-state resolution for a dual-core processor. Figure 4-4 summarizes package C-state transitions.
Note: If enabled, the package C-state will be C1E if all cores have resolved a core C1 state or higher.
Table 4-10. Coordination of Core Power States at the Package Level
(Cell values give the resolved Package C-State.)
Core 0 \ Core 1 | C0 | C1 | C3 | C6
C0 | C0 | C0 | C0 | C0
C1 | C0 | C1¹ | C1¹ | C1¹
C3 | C0 | C1¹ | C3 | C3
C6 | C0 | C1¹ | C3 | C6
4.2.5.1 Package C0
Package C0 is the normal operating state for the processor. The processor remains in
the normal state when at least one of its cores is in the C0 or C1 state or when the
platform has not granted permission to the processor to go into a low power state.
Individual cores may be in lower power idle states while the package is in C0.
4.2.5.2 Package C1/C1E
No additional power reduction actions are taken in the package C1 state. However, if
the C1E sub-state is enabled, the processor automatically transitions to the lowest
supported core clock frequency, followed by a reduction in voltage.
The package enters the C1 low power state when:
• At least one core is in the C1 state
• The other cores are in a C1 or lower power state
The package enters the C1E state when:
• All cores have directly requested C1E using MWAIT(C1) with a C1E sub-state hint
• All cores are in a power state lower than C1/C1E but the package low power state is limited to C1/C1E using the PMG_CST_CONFIG_CONTROL MSR
• All cores have requested C1 using HLT or MWAIT(C1) and C1E auto-promotion is enabled in IA32_MISC_ENABLES
No notification to the system occurs upon entry to C1/C1E.
Figure 4-4. Package C-State Entry and Exit (state diagram: transitions among package C0, C1, C3, and C6)
4.2.5.3 Package C3 State
A processor enters the package C3 low power state when:
• At least one core is in the C3 state
• The other cores are in a C3 or lower power state, and the processor has been granted permission by the platform
• The platform has not granted a request to a package C6 state but has allowed a package C3 state
In the package C3 state, the L3 shared cache is valid.
4.2.5.4 Package C6 State
A processor enters the package C6 low power state when:
• At least one core is in the C6 state
• The other cores are in a C6 or lower power state, and the processor has been granted permission by the platform
In the package C6 state, all cores have saved their architectural state and have had their core voltages reduced to zero volts. The L3 shared cache is still powered and snoopable in this state. The processor remains in the package C6 state as long as any part of the L3 cache is active.
4.3 Integrated Memory Controller (IMC) Power
Management
The main memory is power managed during normal operation and in low-power ACPI
Cx states.
4.3.1 Disabling Unused System Memory Outputs
Any System Memory (SM) interface signal that goes to a memory module connector where it is not connected to any actual memory devices (such as when the SO-DIMM connector is unpopulated, or the SO-DIMM is single-sided) is tri-stated. The benefits of disabling unused SM signals are:
• Reduced power consumption
• Reduced possible overshoot/undershoot signal quality issues seen by the processor I/O buffer receivers, caused by reflections from potentially un-terminated transmission lines
When a given rank is not populated, the corresponding chip select and CKE signals are
not driven.
At reset, all rows must be assumed to be populated, until it can be proven that they are
not populated. This is due to the fact that when CKE is tri-stated with a SO-DIMM
present, the SO-DIMM is not ensured to maintain data integrity.
SCKE tri-state should be enabled by BIOS where appropriate, since at reset all rows
must be assumed to be populated.
4.3.2 DRAM Power Management and Initialization
The processor implements extensive support for power management on the SDRAM
interface. There are four SDRAM operations associated with the Clock Enable (CKE)
signals that the SDRAM controller supports. The processor drives four CKE pins to
perform these operations.
The CKE is one means of power saving. When CKE is off, the internal DDR clock is
disabled and the DDR power is reduced. The power-saving differs according to the
selected mode and the DDR type used. For more information, refer to the IDD table in
the DDR specification.
The DDR defines 3 levels of power down that differ in power saving and in wakeup
time:
1. Active power down (APD): This mode is entered if there are open pages when de-
asserting CKE. In this mode the open pages are retained. Power-saving in this
mode is the lowest. Power consumption of DDR is defined by IDD3P. Exiting this
mode is defined by tXP – small number of cycles.
2. Precharged power down (PPD): This mode is entered if all banks in DDR are
precharged when de-asserting CKE. Power-saving in this mode is intermediate –
better than APD, but less than DLL-off. Power consumption is defined by IDD2P1.
Exiting this mode is defined by tXP. The difference relative to APD mode is that
when waking-up in PPD mode, all page-buffers are empty.
3. DLL-off: In this mode the data-in DLLs on DDR are off. Power-saving in this mode is the best among all power modes. Power consumption is defined by IDD2P1. Exiting this mode is defined by tXP and tXPDLL (10–20 according to the DDR type) until the first data transfer is allowed.
The processor supports six different types of power down. The different modes are the power down modes supported by DDR3 and combinations of these. The type of CKE power down is defined by configuration. The options are as follows:
1. No power down
2. APD: The rank enters power down as soon as the idle-timer expires, independent of
the bank status
3. PPD: When idle timer expires, the MC sends PRE-all to rank and then enters power
down
4. DLL-off: Same as option 2 but DDR is configured to DLL-off
5. APD, change to PPD (APD-PPD): Begins as option 1, and when all page-close timers of the rank are expired, it wakes the rank, issues PRE-all, and returns to PPD.
6. APD, change to DLL-off (APD_DLLoff): Begins as option 1, and when all page-close
timers of the rank are expired, it wakes the rank, issues PRE-all, and returns to
DLL-off power down.
CKE power down is determined per rank, when the rank is inactive. Each rank has an idle counter. The idle counter starts counting as soon as the rank has no accesses, and if it expires, the rank may enter power down while no new transactions to the rank arrive in the queues. The idle counter begins counting at the last incoming transaction arrival.
It is important to understand that since the power down decision is per rank, the MC
can find a lot of opportunities to power down ranks, even while running memory
intensive applications; savings may be significant (up to a few Watts, depending on
DDR configuration). This becomes more significant when each channel is populated
with more ranks.
Selection of power modes should be made according to the power, performance, or thermal trade-offs of a given system:
• When trying to achieve maximum performance and power or thermal considerations are not an issue, use no power down.
• In a system that tries to minimize power consumption, try to use the deepest power down mode possible – DLL-off or APD_DLLoff.
• In high-performance systems with dense packaging (that is, tricky thermal design), the power down mode should be considered in order to reduce the heating and avoid DDR throttling caused by the heating.
Control of the power mode is through the CRB BIOS; the BIOS selects no power down by default.
Another control is the idle timer expiration count. This is set through PM_PDWN_config bits 7:0 (MCHBAR + 4CB0). The shorter this timer is set, the more opportunities the IMC has to put the DDR in power down. The minimum recommended value for this register is 15. There is no BIOS hook to set this register; customers who choose to change the value of this register can do so by changing the BIOS. For experiments, this register can be modified in real time if the BIOS did not lock the MC registers.
Note: In APD, APD-PPD, and APD-DLLoff there is no point in setting the idle counter in the
same range of page-close idle timer.
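As an illustration of such an experiment only (not a supported flow), the sketch below performs a read-modify-write of the bits 7:0 idle-timer field at MCHBAR + 4CB0. The MCHBAR physical base is passed in as an assumption rather than discovered (on this platform it can be located through the host-bridge configuration space), and mapping it through /dev/mem is a Linux-specific convenience that requires root and an unlocked MC register set.

/* Hedged sketch: set the CKE idle-timer expiration count (bits 7:0 of
 * PM_PDWN_config at MCHBAR + 4CB0) from user space via /dev/mem.
 * MCHBAR_BASE is an assumed, platform-discovered physical address;
 * this only works if the BIOS has not locked the MC registers. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MCHBAR_BASE  0xFED10000ULL  /* assumption: real base comes from the host bridge */
#define PM_PDWN_CFG  0x4CB0u
#define IDLE_TIMER   0x20u          /* example count; the minimum recommended value is 15 */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    off_t base = (off_t)((MCHBAR_BASE + PM_PDWN_CFG) & ~(uint64_t)(page - 1));
    off_t off  = (off_t)(MCHBAR_BASE + PM_PDWN_CFG) - base;

    volatile uint8_t *mch = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, base);
    if (mch == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    volatile uint32_t *reg = (volatile uint32_t *)(mch + off);
    uint32_t val = *reg;
    *reg = (val & ~0xFFu) | IDLE_TIMER;   /* replace bits 7:0 only */

    munmap((void *)mch, page);
    close(fd);
    return 0;
}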
Another option associated with CKE power down is the S_DLL-off. When this option is
enabled, the SBR I/O slave DLLs go off when all channel ranks are in power down. (Do
not confuse it with the DLL-off mode, in which the DDR DLLs are off). This mode
requires an I/O slave DLL wakeup time be defined.
4.3.2.1 Initialization Role of CKE
During power-up, CKE is the only input to the SDRAM that has its level recognized (other than the DDR3 reset pin) once power is applied. The signal must be driven LOW by the DDR controller to make sure the SDRAM components float DQ and DQS during power-up. CKE signals remain LOW (while any reset is active) until the BIOS writes to a configuration register. Using this method, CKE is ensured to remain inactive for much longer than the specified 200 µs after power and clocks to the SDRAM devices are stable.
4.3.2.2 Conditional Self-Refresh
Intel® Rapid Memory Power Management (Intel® RMPM) conditionally places memory
into self-refresh in the package C3 and C6 low-power states. Intel RMPM functionality
depends on graphics/display state (relevant only when processor graphics is being
used), as well as memory traffic patterns generated by other connected I/O devices.
When entering the S3 – Suspend-to-RAM (STR) state or S0 conditional self-refresh, the processor core flushes pending cycles and then puts all SDRAM ranks into self-refresh. The CKE signals remain LOW so the SDRAM devices perform self-refresh.
The target behavior is to enter self-refresh for the package C3 and C6 states as long as there are no memory requests to service.
4.3.2.3 Dynamic Power Down Operation
Dynamic power down of memory is employed during normal operation. Based on idle
conditions, a given memory rank may be powered down. The IMC implements
aggressive CKE control to dynamically put the DRAM devices in a power down state.
The processor core controller can be configured to put the devices in active power down
(CKE de-assertion with open pages) or precharge power down (CKE de-assertion with
all pages closed). Precharge power down provides greater power savings but has a
bigger performance impact, since all pages will first be closed before putting the
devices in power down mode.
If dynamic power down is enabled, all ranks are powered up before doing a refresh
cycle and all ranks are powered down at the end of refresh.
4.3.2.4 DRAM I/O Power Management
Unused signals should be disabled to save power and reduce electromagnetic
interference. This includes all signals associated with an unused memory channel.
Clocks can be controlled on a per SO-DIMM basis. Exceptions are made for per SO-
DIMM control signals such as CS#, CKE, and ODT for unpopulated SO-DIMM slots.
The I/O buffer for an unused signal should be tri-stated (output driver disabled), the
input receiver (differential sense-amp) should be disabled, and any DLL circuitry
related ONLY to unused signals should be disabled. The input path must be gated to
prevent spurious results due to noise on the unused signals (typically handled
automatically when input receiver is disabled).
4.3.3 DDR Electrical Power Gating (EPG)
The DDR I/O of the processor supports on-die Electrical Power Gating (DDR-EPG) during normal operation (S0 mode) while the processor is at a package C3 or deeper power state.
During EPG, the VCCIO internal voltage rail will be powered down, while VDDQ and the
un-gated VCCIO will stay powered on.
The processor will transition in and out of DDR EPG mode on an as needed basis
without any external pins or signals.
There is no change to the signals driven by the processor to the DIMMs during DDR IO
EPG mode.
During EPG mode, all the DDR IO logic will be powered down, except for the Physical
Control registers that are powered by the un-gated VCCIO power supply.
Unlike S3 exit, at DDR EPG exit, the DDR will not go through training mode. Rather, it
will use the previous training information retained in the physical control registers and
will immediately resume normal operation.
4.4 PCI Express* Power Management
• Active power management support using L0s and L1 states.
• All inputs and outputs disabled in the L2/L3 Ready state.
Note: PCIe* interface does not support Hot-Plug.
Note: An increase in power consumption may be observed when PCIe Active State Power
Management (ASPM) capabilities are disabled.
4.5 DMI Power Management
Active power management support using L0s/L1 state.
4.6 Graphics Power Management
4.6.1 Intel® Rapid Memory Power Management (Intel® RMPM)
(also known as CxSR)
Intel Rapid Memory Power Management (Intel RMPM) puts rows of memory into self-refresh mode during C3/C6 to allow the system to remain in the lower power states longer. Processors routinely save power during runtime conditions by entering the C3 or C6 state. Intel RMPM is an indirect method of power saving that can have a significant effect on the system as a whole.
4.6.2 Intel® Graphics Performance Modulation Technology
(Intel® GPMT)
Intel Graphics Performance Modulation Technology (Intel GPMT) is a method for saving power in the graphics adapter while continuing to display and process data in the adapter. This method dynamically switches the render frequency and/or render voltage between higher and lower power states supported on the platform, based on the render engine workload.
In products where Intel® Graphics Dynamic Frequency (also known as Turbo Boost Technology) is supported and enabled, the functionality of Intel GPMT is maintained by Intel Graphics Dynamic Frequency.
4.6.3 Graphics Render C-State
Render C-State (RC6) is a technique designed to optimize the average power to the graphics render engine during times when the render engine is idle. Render C-state is entered when the graphics render engine, blitter engine, and video engine have no workload currently being worked on and no outstanding graphics memory transactions. When the idleness condition is met, the Processor Graphics programs the VR into a low voltage state (~0 V) through the SVID bus.
Caution: Long term reliability cannot be assured unless all the Low Power Idle States are
enabled.
4.6.4 Intel® Smart 2D Display Technology (Intel® S2DDT)
Intel S2DDT reduces display refresh memory traffic by reducing the memory reads required for display refresh. Power consumption is reduced through fewer accesses to the IMC. S2DDT is only enabled in single pipe mode.
Intel S2DDT is most effective with:
• Display images well suited to compression, such as text windows, slide shows, and so on. Poor examples are 3D games.
• Static screens, such as screens with significant portions of the background showing 2D applications, processor benchmarks, and so on, or conditions when the processor is idle. Poor examples are full-screen 3D games and benchmarks that flip the display image at or near display refresh rates.
4.6.5 Intel® Graphics Dynamic Frequency
Intel Graphics Dynamic Frequency Technology is the ability of the processor and graphics cores to opportunistically increase frequency and/or voltage above the ensured processor and graphics frequency for the given part. Intel Graphics Dynamic Frequency Technology is a performance feature that makes use of unused package power and thermal headroom to increase application performance. The increase in frequency is determined by how much power and thermal budget is available in the package, and by the application demand for additional processor or graphics performance. The processor core control is maintained by an embedded controller. The graphics driver dynamically adjusts between P-states to maintain optimal performance, power, and thermals.
4.7 Graphics Thermal Power Management
See Section 4.6 for all graphics thermal power management-related features.
§ §
5 Thermal Management
For thermal specifications and design guidelines refer to the Desktop 3rd Generation
Intel® Core™ Processor Family, Desktop Intel® Pentium® Processor, Desktop Intel®
Celeron® Processor, and LGA1155 Socket Thermal and Mechanical Specifications and
Design Guidelines.
§ §
6 Signal Description
This chapter describes the processor signals. They are arranged in functional groups
according to their associated interface or category. The following notations are used to
describe the signal type.
The signal description also includes the type of buffer used for the particular signal
(see Table 6-1).
Note:
1. Qualifier for a buffer type.
Notations Signal Type
I Input Signal
O Output Signal
I/O Bi-directional Input/Output Signal
Table 6-1. Signal Description Buffer Types
Signal Description
PCI Express* PCI Express* interface signals. These signals are compatible with PCI Express* 3.0
Signalling Environment AC Specifications and are AC coupled. The buffers are not
3.3-V tolerant. Refer to the PCIe specification.
DMI Direct Media Interface signals. These signals are compatible with PCI Express* 2.0
Signaling Environment AC Specifications, but are DC coupled. The buffers are not
3.3-V tolerant.
CMOS CMOS buffers.
DDR3 DDR3 buffers: 1.5-V tolerant
A Analog reference or output. May be used as a threshold voltage or for buffer compensation.
Ref Voltage reference signal
Asynchronous1 Signal has no timing relationship with any reference clock.
6.1 System Memory Interface Signals
Table 6-2. Memory Channel A Signals
Signal Name Description Direction/
Buffer Type
SA_BS[2:0] Bank Select : These signals define which banks are selected within
each SDRAM rank. O
DDR3
SA_WE# Write Enable Control Signal: This signal is used with SA_RAS# and
SA_CAS# (along with SA_CS#) to define the SDRAM Commands. O
DDR3
SA_RAS# RAS Control Signal: This signal is used with SA_CAS# and SA_WE#
(along with SA_CS#) to define the SDRAM Commands. O
DDR3
SA_CAS# CAS Control Signal: This signal is used with SA_RAS# and SA_WE#
(along with SA_CS#) to define the SDRAM Commands. O
DDR3
SA_DQS[8:0]
SA_DQS#[8:0]
Data Strobes: SA_DQS[8:0] and its complement signal group make
up a differential strobe pair. The data is captured at the crossing point
of SA_DQS[8:0] and its SA_DQS#[8:0] during read and write
transactions.
I/O
DDR3
SA_DQ[63:0] Data Bus: Channel A data signal interface to the SDRAM data bus. I/O
DDR3
SA_MA[15:0] Memory Address: These signals are used to provide the multiplexed
row and column address to the SDRAM. O
DDR3
SA_CK[3:0]
SA_CK#[3:0]
SDRAM Differential Clock: Channel A SDRAM differential clock signal
pair. The crossing of the positive edge of SA_CK and the negative edge
of its complement SA_CK# are used to sample the command and
control signals on the SDRAM.
O
DDR3
SA_CKE[3:0]
Clock Enable: (1 per rank). These signals are used to:
• Initialize the SDRAMs during power-up.
• Power down SDRAM ranks.
• Place all SDRAM ranks into and out of self-refresh during STR.
O
DDR3
SA_CS#[3:0] Chip Select: (1 per rank). These signals are used to select particular
SDRAM components during the active state. There is one Chip Select
for each SDRAM rank.
O
DDR3
SA_ODT[3:0] On Die Termination: Active Te rmination Control. O
DDR3
Table 6-3. Memory Channel B Signals
Signal Name Description Direction/
Buffer Type
SB_BS[2:0] Bank Select: These signals define which banks are selected within
each SDRAM rank. O
DDR3
SB_WE# Write Enable Control Signal: This signal is used with SB_RAS# and
SB_CAS# (along with SB_CS#) to define the SDRAM Commands. O
DDR3
SB_RAS# RAS Control Signal: This signal is used with SB_CAS# and SB_WE#
(along with SB_CS#) to define the SDRAM Commands. O
DDR3
SB_CAS# CAS Control Signal: This signal is used with SB_RAS# and SB_WE#
(along with SB_CS#) to define the SDRAM Commands. O
DDR3
SB_DQS[8:0]
SB_DQS#[8:0]
Data Strobes: SB_DQS[8:0] and its complement signal group make
up a differential strobe pair. The data is captured at the crossing point
of SB_DQS[8:0] and its SB_DQS#[8:0] during read and write
transactions.
I/O
DDR3
SB_DQ[63:0] Data Bus: Channel B data signal interface to the SDRAM data bus. I/O
DDR3
SB_MA[15:0] Memory Address: These signals are used to provide the multiplexed
row and column address to the SDRAM. O
DDR3
SB_CK[3:0]
SB_CK#[3:0]
SDRAM Differential Clock: Channel B SDRAM differential clock signal pair. The crossing of the positive edge of SB_CK and the
negative edge of its complement SB_CK# are used to sample the
command and control signals on the SDRAM.
O
DDR3
SB_CKE[3:0]
Clock Enable: (1 per rank). These signals are used to:
• Initialize the SDRAMs during power-up.
• Power down SDRAM ranks.
• Place all SDRAM ranks into and out of self-refresh during STR.
O
DDR3
SB_CS#[3:0] Chip Select: (1 per rank). These signals are used to select particular
SDRAM components during the active state. There is one Chip Select
for each SDRAM rank.
O
DDR3
SB_ODT[3:0] On Die Termination: Active Termination Control. O
DDR3
6.2 Memory Reference and Compensation Signals
Table 6-4. Memory Reference and Compensation
Signal Name Description Direction/
Buffer Type
SM_VREF DDR3 Reference Voltage: This signal is used as a reference
voltage to the DDR3 controller. I
A
SA_DIMM_VREFDQ
SB_DIMM_VREFDQ
Memory Channel A/B DIMM DQ Voltage Reference: These
output pins are connected to the DIMMs, and are programmed to
have a reference voltage with optimized margin.
The nominal source impedance for these pins is 150 Ω.
The step size is 7.7 mV for DDR3 (with no load).
O
A
6.3 Reset and Miscellaneous Signals
Note:
1. PCIe* bifurcation support varies with the processor and PCH SKUs used.
Table 6-5. Reset and Miscellaneous Signals
Signal Name Description Direction/
Buffer Type
CFG[17:0]
Configuration Signals:
The CFG signals have a default value of '1' if not terminated on the
board.
CFG[1:0]: Reserved configuration lane. A test point may be
placed on the board for this lane.
CFG[2]: PCI Express* Static x16 Lane Numbering Reversal.
1 = Normal operation
0 = Lane numbers reversed
CFG[3]: PCI Express* Static x4 Lane Numbering Reversal.
1 = Normal operation
0 = Lane numbers reversed
CFG[4]: Reserved configuration lane. A test point may be
placed on the board for this lane.
CFG[6:5]: PCI Express* Bifurcation: Note 1
00 = 1 x8, 2 x4 PCI Express*
01 = reserved
10 = 2 x8 PCI Express*
11 = 1 x16 PCI Express*
CFG[17:7]: Reserved configuration lanes. A test point may be
placed on the board for these pins.
I
CMOS
FC_x FC signals are available for compatibility with other processors. A test point may be placed on the board for these pins.
PM_SYNC Power Management Sync: A sideband signal to communicate
power management status from the platform to the processor. I
CMOS
RESET# Platform Reset pin driven by the PCH. I
CMOS
RSVD
RSVD_NCTF
Reserved: All signals that are RSVD and RSVD_NCTF must be left
unconnected on the board. No Connect
Non-Critical to
Function
SM_DRAMRST# DDR3 DRAM Reset: Reset signal from processor to DRAM devices.
One common to all channels. O
CMOS
6.4 PCI Express*-based Interface Signals
Note:
1. PE_TX[3:0]/PE_TX#[3:0] and PE_RX[3:0]/PE_RX#[3:0] signals are only used for platforms that support
20 PCIe lanes. These signals are reserved on Desktop 3rd Generation Intel Core™ i7/i5 processors,
Desktop Intel® Pentium® processors and Desktop Intel® Celeron® processors.
Table 6-6. PCI Express* Graphics Interface Signals
Signal Name Description Direction/
Buffer Type
PEG_ICOMPI PCI Express* Input Current Compensation I
A
PEG_ICOMPO PCI Express* Current Compensation I
A
PEG_RCOMPO PCI Express* Resistance Compensation I
A
PEG_RX[15:0]
PEG_RX#[15:0]
PE_RX[3:0]1
PE_RX#[3:0]1
PCI Express* Receive Differential Pair I
PCI Express*
PEG_TX[15:0]
PEG_TX#[15:0]
PE_TX[3:0]1
PE_TX#[3:0]1
PCI Express* Transmit Differential Pair O
PCI Express*
6.5 Intel® Flexible Display (Intel® FDI) Interface Signals
Table 6-7. Intel® Flexible Display (Intel® FDI) Interface
Signal Name Description Direction/
Buffer Type
FDI0_FSYNC[0] Intel® Flexible Display Interface Frame Sync: Pipe A I
CMOS
FDI0_LSYNC[0] Intel® Flexible Display Interface Line Sync: Pipe A I
CMOS
FDI_TX[7:0]
FDI_TX#[7:0] Intel® Flexible Display Interface Transmit Differential
Pairs O
FDI
FDI1_FSYNC[1] Intel® Flexible Display Interface Frame Sync: Pipe B and C I
CMOS
FDI1_LSYNC[1] Intel® Flexible Display Interface Line Sync: Pipe B and C I
CMOS
FDI_INT Intel® Flexible Display Interface Hot-Plug Interrupt I
Asynchronous
CMOS
6.6 Direct Media Interface (DMI) Signals
6.7 Phase Lock Loop (PLL) Signals
6.8 Test Access Points (TAP) Signals
Table 6-8. Direct Media Interface (DMI) Signals – Processor to PCH Serial Interface
Signal Name Description Direction/
Buffer Type
DMI_RX[3:0]
DMI_RX#[3:0] DMI Input from PCH: Direct Media Interface receive
differential pair. I
DMI
DMI_TX[3:0]
DMI_TX#[3:0] DMI Output to PCH: Direct Media Interface transmit
differential pair. O
DMI
Table 6-9. Phase Lock Loop (PLL) Signals
Signal Name Description Direction/
Buffer Type
BCLK
BCLK# Differential bus clock input to the processor I
Diff Clk
Table 6-10. Test Access Points (TAP) Signals
Signal Name Description Direction/
Buffer Type
BPM#[7:0]
Breakpoint and Performance Monitor Signals: These signals
are outputs from the processor that indicate the status of
breakpoints and programmable counters used for monitoring
processor performance.
I/O
CMOS
BCLK_ITP
BCLK_ITP# These signals are connected in parallel to the top side debug probe to enable debug capabilities. I
DBR#
DBR# is used only in systems where no debug port is
implemented on the system board. DBR# is used by a debug
port interposer so that an in-target probe can drive system
reset.
O
PRDY# PRDY# is a processor output used by debug tools to determine processor debug readiness. O
Asynchronous
CMOS
PREQ# PREQ# is used by debug tools to request debug operation of the
processor. I
Asynchronous
CMOS
TCK Test Clock: This signal provides the clock input for the
processor Test Bus (also known as the Test Access Port). TCK
must be driven low or allowed to float during power on Reset.
I
CMOS
TDI Test Data In: This signal transfers serial test data into the
processor. TDI provides the serial input needed for JTAG
specification support.
I
CMOS
TDO Test Data Out: This signal transfers serial test data out of the
processor. TDO provides the serial output needed for JTAG
specification support.
O
Open Drain
TMS Test Mode Select: A JTAG specification support signal used by
debug tools. I
CMOS
TRST# Test Reset: This signal resets the Test Access Port (TAP) logic.
TRST# must be driven low during power on Reset. I
CMOS
6.9 Error and Thermal Protection Signals
Table 6-11. Error and Thermal Protection Signals
Signal Name Description Direction/
Buffer Type
CATERR#
Catastrophic Error: This signal indicates that the system has experienced a catastrophic error and cannot continue to operate.
The processor will set this for non-recoverable machine check
errors or other unrecoverable internal errors.
On the processor, CATERR# is used for signaling the following
types of errors:
• Legacy MCERRs – CATERR# is asserted for 16 BCLKs.
• Legacy IERRs – CATERR# remains asserted until warm or cold reset.
O
CMOS
PECI PECI (Platform Environment Control Interface): A serial
sideband interface to the processor, it is used primarily for
thermal, power, and error management.
I/O
Asynchronous
PROCHOT#
Processor Hot: PROCHOT# goes active when the processor temperature monitoring sensor(s) detects that the processor has reached its maximum safe operating temperature. This indicates that the processor Thermal Control Circuit (TCC) has been activated, if enabled. This signal can also be driven to the processor to activate the TCC.
Note: Toggling PROCHOT# more than once in a 1.5 ms period will result in a constant Pn state of the processor.
CMOS Input/
Open-Drain
Output
THERMTRIP#
Thermal Trip: The processor protects itself from catastrophic
overheating by use of an internal thermal sensor. This sensor is
set well above the normal operating temperature to ensure that
there are no false trips. The processor will stop all execution
when the junction temperature exceeds approximately 130 °C.
This is signaled to the system by the THERMTRIP# signal.
O
Asynchronous
CMOS
6.10 Power Sequencing Signals
Table 6-12. Power Sequencing Signals
Signal Name Description Direction/
Buffer Type
SM_DRAMPWROK SM_DRAMPWROK Processor Input: Connects to PCH
DRAMPWROK. I
Asynchronous
CMOS
UNCOREPWRGOOD
The processor requires this input signal to be a clean indication
that the VCCSA, VCCIO, VAXG, and VDDQ, power supplies are
stable and within specifications. This requirement applies
regardless of the S-state of the processor. 'Clean' implies that
the signal will remain low (capable of sinking leakage current),
without glitches, from the time that the power supplies are
turned on until they come within specification. The signal must
then transition monotonically to a high state. This is connected
to the PCH PROCPWRGD signal.
I
Asynchronous
CMOS
SKTOCC#
SKTOCC# (Socket Occupied): This signal is pulled down directly (0 Ohms) on the processor package to ground. There is no connection to the processor silicon for this signal. System board designers may use this signal to determine if the processor is present.
PROC_SEL
Processor Select: This signal is an output that indicates whether the processor used is a 2nd Generation Intel® Core™ processor family desktop, Intel® Pentium® processor family desktop, or Intel® Celeron® processor family desktop part, or a Desktop 3rd Generation Intel® Core™ processor family, Desktop Intel® Pentium® processor family, or Desktop Intel® Celeron® processor family part.
For 2nd Generation Intel® Core™ processor family desktop, Intel® Pentium® processor family desktop, and Intel® Celeron® processor family desktop, the output will be high.
For Desktop 3rd Generation Intel® Core™ processor family, Desktop Intel® Pentium® processor family, and Desktop Intel® Celeron® processor family, the output will be low.
O
VCCIO_SEL
Voltage selection for VCCIO: This output signal was initially intended to select the I/O voltage depending on the processor being used.
Since the VCCIO voltage is the same for 2nd Generation Intel® Core™ processor family desktop, Intel® Pentium® processor family desktop, Intel® Celeron® processor family desktop and Desktop 3rd Generation Intel® Core™ processor family, Desktop Intel® Pentium® processor family, Desktop Intel® Celeron® processor family, the usage of this pin was changed as follows:
The pin is configured on the package to be the same as on 2nd Generation Intel® Core™ processor family desktop, Intel® Pentium® processor family desktop, and Intel® Celeron® processor family desktop. This pin must be pulled high on the motherboard when using a dual rail voltage regulator.
O
6.11 Processor Power Signals
Note:
1. The VCCSA_VID can toggle at most once in 500 µs. The slew rate of VCCSA_VID is 1 V/ns.
6.12 Sense Signals
Table 6-13. Processor Power Sig nals
Signal Name Description Direction/
Buffer Type
VCC Processor core power rail. Ref
VCCIO Processor power for I/O. Ref
VDDQ Processor I/O supply voltage for DDR3. Ref
VCCAXG Graphics core power supply. Ref
VCCPLL VCCPLL provides isolated power for internal processor PLLs. Ref
VCCSA System Agent power supply. Ref
VIDSOUT
VIDSCLK
VIDALERT#
VIDALERT#, VIDSCLK, and VIDSOUT comprise a three-signal serial synchronous interface used to transfer power management information between the processor and the voltage regulator controllers. This serial VID interface replaces the parallel VID interface used on previous processors.
CMOS I/OD O
OD O
CMOS I
VCCSA_VID 1 Voltage selection for VCCSA: O
CMOS
Table 6-14. Sense Signals
Signal Name Description Direction/
Buffer Type
VCC_SENSE
VSS_SENSE
VCC_SENSE and VSS_SENSE provide an isolated, low
impedance connection to the processor core voltage and
ground. They can be used to sense or measure voltage near the silicon.
silicon.
O
Analog
VAXG_SENSE
VSSAXG_SENSE
VAXG_SENSE and VSSAXG_SENSE provide an isolated, low
impedance connection to the VAXG voltage and ground. They
can be used to sense or measure voltage near the silicon.
O
Analog
VCCIO_SENSE
VSS_SENSE_VCCIO
VCCIO_SENSE and VSS_SENSE_VCCIO provide an isolated, low
impedance connection to the processor VCCIO voltage and
ground. They can be used to sense or measure voltage near the silicon.
silicon.
O
Analog
VCCSA_SENSE VCCSA_SENSE provides an isolated, low impedance connection to the processor system agent voltage. It can be used to sense or measure voltage near the silicon.
O
Analog
6.13 Ground and Non-Critical to Function (NCTF)
Signals
6.14 Processor Internal Pull-Up / Pull-Down Resistors
§ §
Table 6-15. Ground and Non-Critical to Function (NCTF) Signals
Signal Name Description Direction/
Buffer Type
VSS Processor ground node GND
VSS_NCTF (BGA Only) Non-Critical to Function: These signals are for package
mechanical reliability.
Table 6-16. Processor Internal Pull-Up / Pull-Down Resistors
Signal Name Pull-Up / Pull-Down Rail Value
BPM[7:0] Pull Up VCCIO 65–165 Ω
PRDY# Pull Up VCCIO 65–165 Ω
PREQ# Pull Up VCCIO 65–165 Ω
TCK Pull Down VSS 5–15 kΩ
TDI Pull Up VCCIO 5–15 kΩ
TMS Pull Up VCCIO 5–15 kΩ
TRST# Pull Up VCCIO 5–15 kΩ
CFG[17:0] Pull Up VCCIO 5–15 kΩ
7 Electrical Specifications
7.1 Power and Ground Lands
The processor has VCC, VDDQ, VCCPLL, VCCSA, VCCAXG, VCCIO and VSS (ground)
inputs for on-chip power distribution. All power lands must be connected to their
respective processor power planes, while all VSS lands must be connected to the
system ground plane. Use of multiple power and ground planes is recommended to
reduce I*R drop. The VCC and VCCAXG lands must be supplied with the voltage
determined by the processor Serial Voltage IDentification (SVID) interface. A new
serial VID interface is implemented on the processor. Table 7-1 specifies the voltage
level for the various VIDs.
7.2 Decoupling Guidelines
Due to its large number of transistors and high internal clock speeds, the processor is
capable of generating large current swings between low- and full-power states. This
may cause voltages on power planes to sag below their minimum values, if bulk
decoupling is not adequate. Larger bulk storage (CBULK), such as electrolytic capacitors,
supply current during longer lasting changes in current demand (for example, coming
out of an idle condition). Similarly, capacitors act as a storage well for current when
entering an idle condition from a running condition. To keep voltages within
specification, output decoupling must be properly designed.
Caution: Design the board to ensure that the voltage provided to the processor remains within
the specifications listed in Table 7-4. Failure to do so can result in timing violations or
reduced lifetime of the processor.
7.2.1 Voltage Rail Decoupling
The voltage regulator solution needs to provide:
• Bulk capacitance with low effective series resistance (ESR)
• A low interconnect resistance from the regulator to the socket
• Bulk decoupling to compensate for large current swings generated during power-on or low-power idle state entry/exit
The power delivery solution must ensure that the voltage and current specifications are met, as defined in Table 7-4.
7.3 Processor Clocking (BCLK[0], BCLK#[0])
The processor uses a differential clock to generate the processor core operating frequency, memory controller frequency, system agent frequencies, and other internal clocks. The processor core frequency is determined by multiplying the processor core ratio by the BCLK frequency. Clock multiplying within the processor is provided by an internal phase locked loop (PLL) that requires a constant frequency input, with exceptions for Spread Spectrum Clocking (SSC).
The processor’s maximum non-turbo core frequency is configured during power-on
reset by using its manufacturing default value. This value is the highest non-turbo core
multiplier at which the processor can operate. If lower maximum speeds are desired,
the appropriate ratio can be configured using the FLEX_RATIO MSR.
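As a worked example (assuming the nominal 100 MHz BCLK used on this platform), a part whose maximum non-turbo core multiplier is 34 runs its cores at 34 × 100 MHz = 3.4 GHz; programming a lower ratio of 30 through the FLEX_RATIO MSR would cap the non-turbo core frequency at 3.0 GHz.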
7.3.1 Phase Lock Loop (PLL) Power Supply
An on-die PLL filter solution is implemented on the processor. Refer to Table 7-5 for DC
specifications.
7.4 VCC Voltage Identification (VID)
The processor uses three signals for the serial voltage identification interface to support automatic selection of voltages. Table 7-1 specifies the voltage level corresponding to the eight-bit VID value transmitted over serial VID. A '1' in this table refers to a high voltage level and a '0' refers to a low voltage level. If the voltage regulation circuit cannot supply the voltage that is requested, the voltage regulator must disable itself. VID signals are CMOS push/pull drivers. Refer to Table 7-8 for the DC specifications for these signals. The VID codes will change due to temperature and/or current load changes in order to minimize the power of the part. A voltage range is provided in Table 7-4. The specifications are set so that one voltage regulator can operate with all supported frequencies.
Individual processor VID values may be set during manufacturing so that two devices
at the same core frequency may have different default VID settings. This is shown in
the VID range values in Table 7-4. The processor provides the ability to operate while
transitioning to an adjacent VID and its associated voltage. This will represent a DC
shift in the loadline.
Note: At conditions outside the functional operation condition limits, neither functionality nor long-term reliability can be expected. If a device is returned to conditions within the functional operation limits after having been subjected to conditions outside these limits, but within the absolute maximum and minimum ratings, the device may be functional, but its lifetime may be degraded by the exposure to conditions exceeding the functional operation condition limits.
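The VID encoding in Table 7-1 is linear: VID 00h requests 0 V (VR output off), VID 01h requests 0.250 V, and each subsequent code adds 5 mV, up to 1.520 V at FFh. A minimal sketch of that decode, derived from the table itself (it is not an interface definition), is shown below.

/* Convert an 8-bit VR 12.0 VID code to the requested VCC_MAX in volts,
 * per the linear encoding of Table 7-1: 0x00 = off, 0x01 = 0.250 V,
 * then +5 mV per step (0xFF = 1.520 V). */
#include <stdio.h>
#include <stdint.h>

static double vid_to_volts(uint8_t vid)
{
    return (vid == 0) ? 0.0 : 0.250 + (vid - 1) * 0.005;
}

int main(void)
{
    printf("VID 0xA6 -> %.3f V\n", vid_to_volts(0xA6)); /* 1.075 V per Table 7-1 */
    return 0;
}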
Table 7-1. VR 12.0 Voltage Identification Definition (Sheet 1 of 3)
VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX   VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX
0 0 0 0 0 0 0 0 0 0 0.00000 1 0 0 0 0 0 0 0 8 0 0.88500
0 0 0 0 0 0 0 1 0 1 0.25000 1 0 0 0 0 0 0 1 8 1 0.89000
0 0 0 0 0 0 1 0 0 2 0.25500 1 0 0 0 0 0 1 0 8 2 0.89500
0 0 0 0 0 0 1 1 0 3 0.26000 1 0 0 0 0 0 1 1 8 3 0.90000
0 0 0 0 0 1 0 0 0 4 0.26500 1 0 0 0 0 1 0 0 8 4 0.90500
0 0 0 0 0 1 0 1 0 5 0.27000 1 0 0 0 0 1 0 1 8 5 0.91000
0 0 0 0 0 1 1 0 0 6 0.27500 1 0 0 0 0 1 1 0 8 6 0.91500
0 0 0 0 0 1 1 1 0 7 0.28000 1 0 0 0 0 1 1 1 8 7 0.92000
0 0 0 0 1 0 0 0 0 8 0.28500 1 0 0 0 1 0 0 0 8 8 0.92500
0 0 0 0 1 0 0 1 0 9 0.29000 1 0 0 0 1 0 0 1 8 9 0.93000
0 0 0 0 1 0 1 0 0 A 0.29500 1 0 0 0 1 0 1 0 8 A 0.93500
0 0 0 0 1 0 1 1 0 B 0.30000 1 0 0 0 1 0 1 1 8 B 0.94000
0 0 0 0 1 1 0 0 0 C 0.30500 1 0 0 0 1 1 0 0 8 C 0.94500
0 0 0 0 1 1 0 1 0 D 0.31000 1 0 0 0 1 1 0 1 8 D 0.95000
0 0 0 0 1 1 1 0 0 E 0.31500 1 0 0 0 1 1 1 0 8 E 0.95500
0 0 0 0 1 1 1 1 0 F 0.32000 1 0 0 0 1 1 1 1 8 F 0.96000
0 0 0 1 0 0 0 0 1 0 0.32500 1 0 0 1 0 0 0 0 9 0 0.96500
0 0 0 1 0 0 0 1 1 1 0.33000 1 0 0 1 0 0 0 1 9 1 0.97000
0 0 0 1 0 0 1 0 1 2 0.33500 1 0 0 1 0 0 1 0 9 2 0.97500
0 0 0 1 0 0 1 1 1 3 0.34000 1 0 0 1 0 0 1 1 9 3 0.98000
0 0 0 1 0 1 0 0 1 4 0.34500 1 0 0 1 0 1 0 0 9 4 0.98500
0 0 0 1 0 1 0 1 1 5 0.35000 1 0 0 1 0 1 0 1 9 5 0.99000
0 0 0 1 0 1 1 0 1 6 0.35500 1 0 0 1 0 1 1 0 9 6 0.99500
0 0 0 1 0 1 1 1 1 7 0.36000 1 0 0 1 0 1 1 1 9 7 1.00000
0 0 0 1 1 0 0 0 1 8 0.36500 1 0 0 1 1 0 0 0 9 8 1.00500
0 0 0 1 1 0 0 1 1 9 0.37000 1 0 0 1 1 0 0 1 9 9 1.01000
0 0 0 1 1 0 1 0 1 A 0.37500 1 0 0 1 1 0 1 0 9 A 1.01500
0 0 0 1 1 0 1 1 1 B 0.38000 1 0 0 1 1 0 1 1 9 B 1.02000
0 0 0 1 1 1 0 0 1 C 0.38500 1 0 0 1 1 1 0 0 9 C 1.02500
0 0 0 1 1 1 0 1 1 D 0.39000 1 0 0 1 1 1 0 1 9 D 1.03000
0 0 0 1 1 1 1 0 1 E 0.39500 1 0 0 1 1 1 1 0 9 E 1.03500
0 0 0 1 1 1 1 1 1 F 0.40000 1 0 0 1 1 1 1 1 9 F 1.04000
0 0 1 0 0 0 0 0 2 0 0.40500 1 0 1 0 0 0 0 0 A 0 1.04500
0 0 1 0 0 0 0 1 2 1 0.41000 1 0 1 0 0 0 0 1 A 1 1.05000
0 0 1 0 0 0 1 0 2 2 0.41500 1 0 1 0 0 0 1 0 A 2 1.05500
0 0 1 0 0 0 1 1 2 3 0.42000 1 0 1 0 0 0 1 1 A 3 1.06000
0 0 1 0 0 1 0 0 2 4 0.42500 1 0 1 0 0 1 0 0 A 4 1.06500
0 0 1 0 0 1 0 1 2 5 0.43000 1 0 1 0 0 1 0 1 A 5 1.07000
0 0 1 0 0 1 1 0 2 6 0.43500 1 0 1 0 0 1 1 0 A 6 1.07500
0 0 1 0 0 1 1 1 2 7 0.44000 1 0 1 0 0 1 1 1 A 7 1.08000
0 0 1 0 1 0 0 0 2 8 0.44500 1 0 1 0 1 0 0 0 A 8 1.08500
0 0 1 0 1 0 0 1 2 9 0.45000 1 0 1 0 1 0 0 1 A 9 1.09000
0 0 1 0 1 0 1 0 2 A 0.45500 1 0 1 0 1 0 1 0 A A 1.09500
0 0 1 0 1 0 1 1 2 B 0.46000 1 0 1 0 1 0 1 1 A B 1.10000
0 0 1 0 1 1 0 0 2 C 0.46500 1 0 1 0 1 1 0 0 A C 1.10500
0 0 1 0 1 1 0 1 2 D 0.47000 1 0 1 0 1 1 0 1 A D 1.11000
0 0 1 0 1 1 1 0 2 E 0.47500 1 0 1 0 1 1 1 0 A E 1.11500
0 0 1 0 1 1 1 1 2 F 0.48000 1 0 1 0 1 1 1 1 A F 1.12000
0 0 1 1 0 0 0 0 3 0 0.48500 1 0 1 1 0 0 0 0 B 0 1.12500
0 0 1 1 0 0 0 1 3 1 0.49000 1 0 1 1 0 0 0 1 B 1 1.13000
0 0 1 1 0 0 1 0 3 2 0.49500 1 0 1 1 0 0 1 0 B 2 1.13500
0 0 1 1 0 0 1 1 3 3 0.50000 1 0 1 1 0 0 1 1 B 3 1.14000
0 0 1 1 0 1 0 0 3 4 0.50500 1 0 1 1 0 1 0 0 B 4 1.14500
0 0 1 1 0 1 0 1 3 5 0.51000 1 0 1 1 0 1 0 1 B 5 1.15000
0 0 1 1 0 1 1 0 3 6 0.51500 1 0 1 1 0 1 1 0 B 6 1.15500
0 0 1 1 0 1 1 1 3 7 0.52000 1 0 1 1 0 1 1 1 B 7 1.16000
0 0 1 1 1 0 0 0 3 8 0.52500 1 0 1 1 1 0 0 0 B 8 1.16500
0 0 1 1 1 0 0 1 3 9 0.53000 1 0 1 1 1 0 0 1 B 9 1.17000
0 0 1 1 1 0 1 0 3 A 0.53500 1 0 1 1 1 0 1 0 B A 1.17500
0 0 1 1 1 0 1 1 3 B 0.54000 1 0 1 1 1 0 1 1 B B 1.18000
0 0 1 1 1 1 0 0 3 C 0.54500 1 0 1 1 1 1 0 0 B C 1.18500
0 0 1 1 1 1 0 1 3 D 0.55000 1 0 1 1 1 1 0 1 B D 1.19000
0 0 1 1 1 1 1 0 3 E 0.55500 1 0 1 1 1 1 1 0 B E 1.19500
0 0 1 1 1 1 1 1 3 F 0.56000 1 0 1 1 1 1 1 1 B F 1.20000
0 1 0 0 0 0 0 0 4 0 0.56500 1 1 0 0 0 0 0 0 C 0 1.20500
0 1 0 0 0 0 0 1 4 1 0.57000 1 1 0 0 0 0 0 1 C 1 1.21000
0 1 0 0 0 0 1 0 4 2 0.57500 1 1 0 0 0 0 1 0 C 2 1.21500
0 1 0 0 0 0 1 1 4 3 0.58000 1 1 0 0 0 0 1 1 C 3 1.22000
0 1 0 0 0 1 0 0 4 4 0.58500 1 1 0 0 0 1 0 0 C 4 1.22500
0 1 0 0 0 1 0 1 4 5 0.59000 1 1 0 0 0 1 0 1 C 5 1.23000
0 1 0 0 0 1 1 0 4 6 0.59500 1 1 0 0 0 1 1 0 C 6 1.23500
0 1 0 0 0 1 1 1 4 7 0.60000 1 1 0 0 0 1 1 1 C 7 1.24000
0 1 0 0 1 0 0 0 4 8 0.60500 1 1 0 0 1 0 0 0 C 8 1.24500
0 1 0 0 1 0 0 1 4 9 0.61000 1 1 0 0 1 0 0 1 C 9 1.25000
0 1 0 0 1 0 1 0 4 A 0.61500 1 1 0 0 1 0 1 0 C A 1.25500
0 1 0 0 1 0 1 1 4 B 0.62000 1 1 0 0 1 0 1 1 C B 1.26000
0 1 0 0 1 1 0 0 4 C 0.62500 1 1 0 0 1 1 0 0 C C 1.26500
0 1 0 0 1 1 0 1 4 D 0.63000 1 1 0 0 1 1 0 1 C D 1.27000
0 1 0 0 1 1 1 0 4 E 0.63500 1 1 0 0 1 1 1 0 C E 1.27500
0 1 0 0 1 1 1 1 4 F 0.64000 1 1 0 0 1 1 1 1 C F 1.28000
0 1 0 1 0 0 0 0 5 0 0.64500 1 1 0 1 0 0 0 0 D 0 1.28500
0 1 0 1 0 0 0 1 5 1 0.65000 1 1 0 1 0 0 0 1 D 1 1.29000
0 1 0 1 0 0 1 0 5 2 0.65500 1 1 0 1 0 0 1 0 D 2 1.29500
0 1 0 1 0 0 1 1 5 3 0.66000 1 1 0 1 0 0 1 1 D 3 1.30000
0 1 0 1 0 1 0 0 5 4 0.66500 1 1 0 1 0 1 0 0 D 4 1.30500
0 1 0 1 0 1 0 1 5 5 0.67000 1 1 0 1 0 1 0 1 D 5 1.31000
0 1 0 1 0 1 1 0 5 6 0.67500 1 1 0 1 0 1 1 0 D 6 1.31500
0 1 0 1 0 1 1 1 5 7 0.68000 1 1 0 1 0 1 1 1 D 7 1.32000
0 1 0 1 1 0 0 0 5 8 0.68500 1 1 0 1 1 0 0 0 D 8 1.32500
0 1 0 1 1 0 0 1 5 9 0.69000 1 1 0 1 1 0 0 1 D 9 1.33000
0 1 0 1 1 0 1 0 5 A 0.69500 1 1 0 1 1 0 1 0 D A 1.33500
0 1 0 1 1 0 1 1 5 B 0.70000 1 1 0 1 1 0 1 1 D B 1.34000
0 1 0 1 1 1 0 0 5 C 0.70500 1 1 0 1 1 1 0 0 D C 1.34500
Table 7-1. VR 12.0 Voltage Identification Definition (Sheet 2 of 3)
VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX   VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX
0 1 0 1 1 1 0 1 5 D 0.71000 1 1 0 1 1 1 0 1 D D 1.35000
0 1 0 1 1 1 1 0 5 E 0.71500 1 1 0 1 1 1 1 0 D E 1.35500
0 1 0 1 1 1 1 1 5 F 0.72000 1 1 0 1 1 1 1 1 D F 1.36000
0 1 1 0 0 0 0 0 6 0 0.72500 1 1 1 0 0 0 0 0 E 0 1.36500
0 1 1 0 0 0 0 1 6 1 0.73000 1 1 1 0 0 0 0 1 E 1 1.37000
0 1 1 0 0 0 1 0 6 2 0.73500 1 1 1 0 0 0 1 0 E 2 1.37500
0 1 1 0 0 0 1 1 6 3 0.74000 1 1 1 0 0 0 1 1 E 3 1.38000
0 1 1 0 0 1 0 0 6 4 0.74500 1 1 1 0 0 1 0 0 E 4 1.38500
0 1 1 0 0 1 0 1 6 5 0.75000 1 1 1 0 0 1 0 1 E 5 1.39000
0 1 1 0 0 1 1 0 6 6 0.75500 1 1 1 0 0 1 1 0 E 6 1.39500
0 1 1 0 0 1 1 1 6 7 0.76000 1 1 1 0 0 1 1 1 E 7 1.40000
0 1 1 0 1 0 0 0 6 8 0.76500 1 1 1 0 1 0 0 0 E 8 1.40500
0 1 1 0 1 0 0 1 6 9 0.77000 1 1 1 0 1 0 0 1 E 9 1.41000
0 1 1 0 1 0 1 0 6 A 0.77500 1 1 1 0 1 0 1 0 E A 1.41500
0 1 1 0 1 0 1 1 6 B 0.78000 1 1 1 0 1 0 1 1 E B 1.42000
0 1 1 0 1 1 0 0 6 C 0.78500 1 1 1 0 1 1 0 0 E C 1.42500
0 1 1 0 1 1 0 1 6 D 0.79000 1 1 1 0 1 1 0 1 E D 1.43000
0 1 1 0 1 1 1 0 6 E 0.79500 1 1 1 0 1 1 1 0 E E 1.43500
0 1 1 0 1 1 1 1 6 F 0.80000 1 1 1 0 1 1 1 1 E F 1.44000
0 1 1 1 0 0 0 0 7 0 0.80500 1 1 1 1 0 0 0 0 F 0 1.44500
0 1 1 1 0 0 0 1 7 1 0.81000 1 1 1 1 0 0 0 1 F 1 1.45000
0 1 1 1 0 0 1 0 7 2 0.81500 1 1 1 1 0 0 1 0 F 2 1.45500
0 1 1 1 0 0 1 1 7 3 0.82000 1 1 1 1 0 0 1 1 F 3 1.46000
0 1 1 1 0 1 0 0 7 4 0.82500 1 1 1 1 0 1 0 0 F 4 1.46500
0 1 1 1 0 1 0 1 7 5 0.83000 1 1 1 1 0 1 0 1 F 5 1.47000
0 1 1 1 0 1 1 0 7 6 0.83500 1 1 1 1 0 1 1 0 F 6 1.47500
0 1 1 1 0 1 1 1 7 7 0.84000 1 1 1 1 0 1 1 1 F 7 1.48000
0 1 1 1 1 0 0 0 7 8 0.84500 1 1 1 1 1 0 0 0 F 8 1.48500
0 1 1 1 1 0 0 1 7 9 0.85000 1 1 1 1 1 0 0 1 F 9 1.49000
0 1 1 1 1 0 1 0 7 A 0.85500 1 1 1 1 1 0 1 0 F A 1.49500
0 1 1 1 1 0 1 1 7 B 0.86000 1 1 1 1 1 0 1 1 F B 1.50000
0 1 1 1 1 1 0 0 7 C 0.86500 1 1 1 1 1 1 0 0 F C 1.50500
0 1 1 1 1 1 0 1 7 D 0.87000 1 1 1 1 1 1 0 1 F D 1.51000
0 1 1 1 1 1 1 0 7 E 0.87500 1 1 1 1 1 1 1 0 F E 1.51500
0 1 1 1 1 1 1 1 7 F 0.88000 1 1 1 1 1 1 1 1 F F 1.52000
Table 7-1. VR 12.0 Voltage Identification Definition (Sheet 3 of 3)
VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX   VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX
7.5 System Agent (SA) VCC VID
The VCCSA is configured by the processor output land VCCSA_VID. VCCSA_VID output
default logic state is low for 2nd generation and 3rd generation Desktop Core
processors, and configures VCCSA to 0.925 V.
7.6 Reserved or Unused Signals
The following are the general types of reserved (RSVD) signals and connection
guidelines:
• RSVD – these signals should not be connected.
• RSVD_TP – these signals must be routed to a test point. Failure to route these signals to test points will restrict Intel's ability to assist in platform debug.
• RSVD_NCTF – these signals are non-critical to function and may be left unconnected.
Arbitrary connection of these signals to VCC, VCCIO, VDDQ, VCCPLL, VCCSA, VAXG, VSS, or
to any other signal (including each other) may result in component malfunction or
incompatibility with future processors. See Chapter 8 for a land listing of the processor
and the location of all reserved signals.
For reliable operation, always connect unused inputs or bi-directional signals to an appropriate signal level. Unused active high inputs should be connected through a resistor to ground (VSS). Unused outputs may be left unconnected; however, this may interfere with some Test Access Port (TAP) functions, complicate debug probing, and prevent boundary scan testing. A resistor must be used when tying bi-directional signals to power or ground. When tying any signal to power or ground, a resistor will also allow for system testability. For details, see Table 7-8.
7.7 Signal Groups
Signals are grouped by buffer type and similar characteristics as listed in Table 7-2. The
buffer type indicates which signaling technology and specifications apply to the signals.
All the differential signals and selected DDR3 and Control Sideband signals have On-Die
Termination (ODT) resistors. There are some signals that do not have ODT and need to
be terminated on the board.
Table 7-2. Signal Groups (Sheet 1 of 2)1
Signal Group Type Signals
System Reference Clock
Differential CMOS Input BCLK[0], BCLK#[0]
DDR3 Reference Clocks2
Differential DDR3 Output SA_CK[3:0], SA_CK#[3:0]
SB_CK[3:0], SB_CK#[3:0]
DDR3 Command Signals2
Single Ended DDR3 Output
SA_RAS#, SB_RAS#, SA_CAS#, SB_CAS#
SA_WE#, SB_WE#
SA_MA[15:0], SB_MA[15:0]
SA_BS[2:0], SB_BS[2:0]
SM_DRAMRST#
SA_CS#[3:0], SB_CS#[3:0]
SA_ODT[3:0], SB_ODT[3:0]
SA_CKE[3:0], SB_CKE[3:0]
DDR3 Data Signals2
Single ended DDR3 Bi-directional SA_DQ[63:0], SB_DQ[63:0]
Differential DDR3 Bi-directional SA_DQS[8:0], SA_DQS#[8:0]
SB_DQS[8:0], SB_DQS#[8:0]
TAP (ITP/XDP)
Single Ended CMOS Input TCK, TDI, TMS, TRST#
Single Ended CMOS Output TDO
Single Ended Asynchronous CMOS Output TAPPWRGOOD
Control Sideband
Single Ended CMOS Input CFG[17:0]
Single Ended Asynchronous CMOS/Open
Drain Bi-directional PROCHOT#
Single Ended Asynchronous CMOS Output THERMTRIP#, CATERR#
Single Ended Asynchronous CMOS Input SM_DRAMPWROK, UNCOREPWRGOOD3,
PM_SYNC, RESET#
Single Ended Asynchronous Bi-directional PECI
Single Ended CMOS Input
Open Drain Output
Bi-directional
VIDALERT#
VIDSCLK
VIDSOUT
Power/Ground/Other
Power VCC, VCC_NCTF, VCCIO, VCCPLL, VDDQ, VCCAXG
Ground VSS
No Connect and test point RSVD, RSVD_NCTF, RSVD_TP, FC_x
Sense Points VCC_SENSE, VSS_SENSE, VCCIO_SENSE,
VSS_SENSE_VCCIO, VAXG_SENSE,
VSSAXG_SENSE
Other SKTOCC#, DBR#
Notes:
1. Refer to Chapter 8 for signal description details.
2. SA and SB refer to DDR3 Channel A and DDR3 Channel B.
3. The maximum rise/fall time of UNCOREPWRGOOD is 20 ns.
4. PE_TX[3:0]/PE_TX#[3:0] and PE_RX[3:0]/PE_RX#[3:0] signals are only used for platforms that support
20 PCIe* lanes. These signals are reserved on Desktop 3rd Generation Intel Core™ i7/i5 processors.
Note: All Control Sideband Asynchronous signals are required to be asserted/de-asserted for
at least 10 BCLKs, with a maximum Trise/Tfall of 6 ns, in order for the processor to
recognize the proper signal state. See Section 7.10 for the DC specifications.
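Note: As a rough illustration only, the sketch below converts the 10-BCLK minimum assertion
width from the note above into time. The 100 MHz BCLK frequency is an assumed value used
for the example; use the actual platform reference clock frequency.

    # Minimum assertion time for Control Sideband asynchronous signals:
    # at least 10 BCLK periods (assumed 100 MHz BCLK, illustration only).
    BCLK_FREQ_HZ = 100e6   # assumed reference clock frequency
    MIN_BCLKS = 10         # from the note above
    MAX_EDGE_NS = 6        # maximum Trise/Tfall from the note above

    min_assert_ns = MIN_BCLKS / BCLK_FREQ_HZ * 1e9
    print(f"Minimum assertion/de-assertion width: {min_assert_ns:.0f} ns "
          f"(with rise/fall <= {MAX_EDGE_NS} ns)")
    # prints: Minimum assertion/de-assertion width: 100 ns (with rise/fall <= 6 ns)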
7.8 Test Access Port (TAP) Connection
Due to the voltage levels supported by other components in the Test Access Port (TAP)
logic, Intel recommends the processor be first in the TAP chain, followed by any other
components within the system. A translation buffer should be used to connect to the
rest of the chain unless one of the other components is capable of accepting an input of
the appropriate voltage. Two copies of each signal may be required with each driving a
different voltage level.
The processor supports Boundary Scan (JTAG) IEEE 1149.1-2001 and IEEE 1149.6-
2003 standards. A small portion of the I/O lands may support only one of those
standards.
7.9 Storage Conditions Specifications
Environmental storage condition limits define the temperature and relative humidity to
which the device may be exposed while stored in a moisture barrier bag. The specified
storage conditions apply at the component level, prior to board attach.
Table 7-3 specifies absolute maximum and minimum storage temperature limits that
represent the maximum or minimum device condition beyond which damage, latent or
otherwise, may occur. The table also specifies sustained storage temperature, relative
humidity, and time-duration limits; these specify the maximum or minimum device
storage conditions for a sustained period of time. Failure to adhere to the following
specifications can affect the long-term reliability of the processor. For conditions
outside the sustained limits, but within the absolute maximum and minimum ratings,
quality and reliability may be affected.
Table 7-3. Storage Condition Ratings

Symbol | Parameter | Min | Max | Notes
Tabsolute storage | The non-operating device storage temperature. Damage (latent or otherwise) may occur when exceeded for any length of time. | -25 °C | 125 °C | 1, 2, 3, 4
Tsustained storage | The ambient storage temperature (in shipping media) for a sustained period of time. | -5 °C | 40 °C | 5, 6
Tshort term storage | The ambient storage temperature (in shipping media) for a short period of time. | -20 °C | 85 °C | —
RHsustained storage | The maximum device storage relative humidity for a sustained period of time. | — | 60% at 24 °C | 6, 7
Timesustained storage | A prolonged or extended period of time; typically associated with customer shelf life. | 0 months | 30 months | 7
Timeshort term storage | A short period of time. | 0 hours | 72 hours | —

Notes:
1. Refers to a component device that is not assembled in a board or socket and is not electrically connected to
a voltage reference or I/O signal.
2. Specified temperatures are not to exceed values based on data collected. Exceptions for surface mount
reflow are specified by the applicable JEDEC standard. Non-adherence may affect processor reliability.
3. Tabsolute storage applies to the unassembled component only and does not apply to the shipping media,
moisture barrier bags, or desiccant.
4. Component product device storage temperature qualification methods may follow JESD22-A119 (low temperature)
and JESD22-A103 (high temperature) standards when applicable for volatile memory.
5. Intel branded products are specified and certified to meet the following temperature and humidity limits,
which are given as an example only (Non-Operating Temperature Limit: -40 °C to 70 °C; Humidity: 50%
to 90%, non-condensing, with a maximum wet bulb of 28 °C). Post-board-attach storage temperature limits
are not specified for non-Intel branded boards.
6. The JEDEC J-STD-020 moisture level rating and associated handling practices apply to all moisture
sensitive devices removed from the moisture barrier bag.
7. Nominal temperature and humidity conditions and durations are given and tested within the constraints
imposed by Tsustained storage and customer shelf life in applicable Intel boxes and bags.
7.10 DC Specifications
The processor DC specifications in this section are defined at the processor
pads, unless noted otherwise. See Chapter 8 for the processor land listings and
Chapter 6 for signal definitions. Voltage and current specifications are detailed in
Table 7-4, Table 7-5, and Table 7-6.
The DC specifications for the DDR3 signals are listed in Table 7-7; the Control Sideband
and Test Access Port (TAP) DC specifications are listed in Table 7-8.
Table 7-4 through Table 7-6 list the DC specifications for the processor and are valid
only while meeting the thermal specifications (as specified in the Thermal / Mechanical
Specifications and Guidelines), clock frequency, and input voltages. Care should be
taken to read all notes associated with each parameter.
7.10.1 Voltage and Current Specifications
Note: Noise measurements on SENSE lands for all voltage supplies should be made with a
20-MHz bandwidth oscilloscope.
Table 7-4. Processor Core Active and Idle Mode DC Voltage and Current Specifications

Symbol | Parameter | Min | Typ | Max | Unit | Note
VID | VID Range | 0.2500 | — | 1.5200 | V | 1
LLVCC | VCC Loadline Slope: 2011D, 2011C, 2011B (processors with 77 W, 65 W, 55 W, 45 W TDP) | — | — | 1.7 | mΩ | 2, 4, 5
VCCTOB | VCC Tolerance Band: 2011D, 2011C, 2011B (processors with 77 W, 65 W, 55 W, 45 W TDP); PS0 / PS1 / PS2 | — | — | ±16 / ±13 / ±11.5 | mV | 2, 4, 5, 6
VCCRipple | Ripple: 2011D, 2011C, 2011B (processors with 77 W, 65 W, 55 W, 45 W TDP); PS0 / PS1 / PS2 | — | — | ±7 / ±10 / -10/+25 | mV | 2, 4, 5, 6
LLVCC | VCC Loadline Slope: 2011A (processors with 35 W TDP) | — | — | 2.9 | mΩ | 2, 4, 5, 7
VCCTOB | VCC Tolerance Band: 2011A (processors with 35 W TDP); PS0 / PS1 / PS2 | — | — | ±19 / ±19 / ±11.5 | mV | 2, 4, 5, 6, 7
VCCRipple | Ripple: 2011A (processors with 35 W TDP); PS0 / PS1 / PS2 | — | — | ±10 / ±10 / -10/+25 | mV | 2, 4, 5, 6, 7
VCC,BOOT | Default VCC voltage for initial power up | — | 0 | — | V | —
ICC | 2011D ICC (processors with 77 W TDP) | — | — | 112 | A | 3
ICC | 2011C ICC (processors with 55 W TDP) | — | — | 75 | A | 3
ICC | 2011B ICC (processors with 45 W TDP) | — | — | 60 | A | 3
ICC | 2011A ICC (processors with 35 W TDP) | — | — | 35 | A | 3
ICC_TDC | 2011D Sustained ICC (processors with 77 W TDP) | — | — | 85 | A | —
ICC_TDC | 2011C Sustained ICC (processors with 55 W TDP) | — | — | 55 | A | —
ICC_TDC | 2011B Sustained ICC (processors with 45 W TDP) | — | — | 40 | A | —
ICC_TDC | 2011A Sustained ICC (processors with 35 W TDP) | — | — | 25 | A | —
Notes:
1. Each processor is programmed with a maximum valid voltage identification value (VID), which is set at
manufacturing and cannot be altered. Individual maximum VID values are calibrated during manufacturing
such that two processors at the same frequency may have different settings within the VID range. This
differs from the VID employed by the processor during a power management event (Adaptive Thermal
Monitor, Enhanced Intel SpeedStep Technology, or Low Power States).
2. The voltage specification requirements are measured across the VCC_SENSE and VSS_SENSE lands at the
socket with a 20-MHz bandwidth oscilloscope, 1.5 pF maximum probe capacitance, and 1-MΩ minimum
impedance. The maximum length of ground wire on the probe should be less than 5 mm. Ensure external
noise from the system is not coupled into the oscilloscope probe.
3. The ICC_MAX specification is based on the VCC loadline at worst case (highest) tolerance and ripple.
4. The VCC specifications represent static and transient limits.
5. The loadlines specify voltage limits at the die measured at the VCC_SENSE and VSS_SENSE lands. Voltage
regulation feedback for voltage regulator circuits must also be taken from the processor VCC_SENSE and
VSS_SENSE lands.
6. PSx refers to the voltage regulator power state as set by the SVID protocol.
7. The 2011A (35 W TDP) loadline slope, TOB, and ripple specifications allow a cost-reduced voltage regulator
for boards supporting only 2011A (35 W TDP) processors. 2011A (35 W TDP) processors may also use the
loadline slope, TOB, and ripple specifications for 2011D, 2011C, and 2011B.
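Note: The loadline slope, tolerance band, and ripple entries in Table 7-4 combine into an
allowed VCC window at a given load current. The sketch below is a simplified, non-normative
illustration of that calculation; it assumes a symmetric tolerance band around the
VID - ICC x LL loadline, and the VID (1.10 V) and load current (50 A) are hypothetical
values chosen only for the example.

    # Simplified VCC loadline window (illustration only, not a normative calculation).
    def vcc_window(vid_v, icc_a, ll_ohm, tob_v):
        center = vid_v - icc_a * ll_ohm   # loadline center at this current
        return center - tob_v, center + tob_v

    # Hypothetical VID of 1.10 V, 77 W part at 50 A, 1.7 mOhm loadline, +/-16 mV PS0 TOB.
    lo, hi = vcc_window(vid_v=1.10, icc_a=50.0, ll_ohm=1.7e-3, tob_v=16e-3)
    print(f"Allowed VCC at 50 A: {lo:.3f} V .. {hi:.3f} V")
    # prints roughly: Allowed VCC at 50 A: 0.999 V .. 1.031 V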
Table 7-5. Processor System Agent I/O Buffer Supply DC Voltage and Current Specifications

Symbol | Parameter | Min | Typ | Max | Unit | Note
VCCSA | Voltage for the system agent | 0.879 | 0.925 | 0.971 | V | 1
VDDQ | Processor I/O supply voltage for DDR3 | — | 1.5 | — | V | —
TOLDDQ | VDDQ Tolerance | DC = ±3%; AC = ±2%; AC+DC = ±5% | % | —
VCCPLL | PLL supply voltage (DC + AC specification) | 1.71 | 1.8 | 1.89 | V | —
VCCIO | Processor I/O supply voltage for other than DDR3 | -2/-3% | 1.05 | +2/+3% | V | 2
ISA | Current for the system agent | — | — | 8.8 | A | —
ISA_TDC | Sustained current for the system agent | — | — | 8.2 | A | —
IDDQ | Processor I/O supply current for DDR3 | — | — | 4.75 | A | —
IDDQ_TDC | Processor I/O supply sustained current for DDR3 | — | — | 4.75 | A | —
IDDQ_STANDBY | Processor I/O supply standby current for DDR3 | — | — | 1 | A | —
ICC_VCCPLL | PLL supply current | — | — | 1.5 | A | —
ICC_VCCPLL_TDC | PLL sustained supply current | — | — | 0.93 | A | —
ICC_VCCIO | Processor I/O supply current | — | — | 8.5 | A | —
ICC_VCCIO_TDC | Processor I/O supply sustained current | — | — | 8.5 | A | —

Notes:
1. VCCSA must be provided using a separate voltage source and must not be connected to VCC. This specification is
measured at VCCSA_SENSE.
2. ±5% total. Minimum of ±2% DC and 3% AC at the sense point. di/dt = 50 A/µs with a 150 ns step.
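Note: As a quick arithmetic check of the TOLDDQ row above, the sketch below computes the
combined ±5% (AC + DC) window around the 1.5 V typical VDDQ from Table 7-5; this is an
illustration only and does not replace the separate DC and AC allocations defined at the
sense point.

    # VDDQ window implied by the AC+DC = +/-5% tolerance (illustration only).
    VDDQ_NOM = 1.5   # V, typical from Table 7-5
    tol = 0.05       # combined AC + DC tolerance
    print(f"VDDQ window: {VDDQ_NOM*(1-tol):.3f} V .. {VDDQ_NOM*(1+tol):.3f} V")
    # prints: VDDQ window: 1.425 V .. 1.575 V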
Table 7-6. Processor Graphics VID-based (VAXG) Supply DC Voltage and Current Specifications

Symbol | Parameter | Min | Typ | Max | Unit | Note
VAXG GFX_VID Range | GFX_VID Range for VAXG | 0.2500 | — | 1.5200 | V | 1
LLAXG | VAXG Loadline Slope | — | — | 4.1 | mΩ | 2, 3
VAXGTOB | VAXG Tolerance Band: PS0, PS1 / PS2 | — | — | 19 / 11.5 | mV | 2, 3, 4
VAXGRipple | Ripple: PS0 / PS1 / PS2 | — | — | ±10 / ±10 / -10/+15 | mV | 2, 3, 4
IAXG | Current for Processor Graphics core | — | — | 35 | A | —
IAXG_TDC | Sustained current for Processor Graphics core | — | — | 25 | A | —

Notes:
1. VAXG is a VID-based rail.
2. The VAXG_MIN and VAXG_MAX loadlines represent static and transient limits.
3. The loadlines specify voltage limits at the die measured at the VAXG_SENSE and VSSAXG_SENSE lands.
Voltage regulation feedback for voltage regulator circuits must also be taken from the processor VAXG_SENSE
and VSSAXG_SENSE lands.
4. PSx refers to the voltage regulator power state as set by the SVID protocol.
5. Each processor is programmed with a maximum valid voltage identification value (VID) that is set at
manufacturing and cannot be altered. Individual maximum VID values are calibrated during manufacturing
such that two processors at the same frequency may have different settings within the VID range. This
differs from the VID employed by the processor during a power management event (Adaptive Thermal
Monitor, Enhanced Intel SpeedStep Technology, or Low Power States).
Table 7-7. DDR3 Signal Group DC Specifications

Symbol | Parameter | Min | Typ | Max | Units | Notes1,7
VIL | Input Low Voltage | — | — | SM_VREF - 0.1 | V | 2, 4, 9
VIH | Input High Voltage | SM_VREF + 0.1 | — | — | V | 3, 9
VIL | Input Low Voltage (SM_DRAMPWROK) | — | — | VDDQ*0.55 - 0.1 | V | 8
VIH | Input High Voltage (SM_DRAMPWROK) | VDDQ*0.55 + 0.1 | — | — | V | 8
VOL | Output Low Voltage | — | — | (VDDQ/2)*(RON/(RON+RTERM)) | V | 6
VOH | Output High Voltage | VDDQ - ((VDDQ/2)*(RON/(RON+RTERM))) | — | — | V | 4, 6
RON_UP(DQ) | DDR3 Data Buffer pull-up Resistance | 20 | 28.6 | 40 | Ω | 5
RON_DN(DQ) | DDR3 Data Buffer pull-down Resistance | 20 | 28.6 | 40 | Ω | 5
RODT(DQ) | DDR3 On-die termination equivalent resistance for data signals | 40 | 50 | 60 | Ω | —
VODT(DC) | DDR3 On-die termination DC working point (driver set to receive mode) | 0.4*VDDQ | 0.5*VDDQ | 0.6*VDDQ | V | —
RON_UP(CK) | DDR3 Clock Buffer pull-up Resistance | 20 | 26 | 40 | Ω | 5, 10
RON_DN(CK) | DDR3 Clock Buffer pull-down Resistance | 20 | 26 | 40 | Ω | 5, 10
RON_UP(CMD) | DDR3 Command Buffer pull-up Resistance | 15 | 20 | 25 | Ω | 5, 10
RON_DN(CMD) | DDR3 Command Buffer pull-down Resistance | 15 | 20 | 25 | Ω | 5, 10
RON_UP(CTL) | DDR3 Control Buffer pull-up Resistance | 15 | 20 | 25 | Ω | 5, 10
RON_DN(CTL) | DDR3 Control Buffer pull-down Resistance | 15 | 20 | 25 | Ω | 5, 10
ILI | Input Leakage Current (DQ, CK) at 0 V / 0.2*VDDQ / 0.8*VDDQ / VDDQ | — | — | ±0.75 / ±0.55 / ±0.9 / ±1.4 | mA | —
ILI | Input Leakage Current (CMD, CTL) at 0 V / 0.2*VDDQ / 0.8*VDDQ / VDDQ | — | — | ±0.85 / ±0.65 / ±1.10 / ±1.65 | mA | —
Notes:
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies.
2. VIL is defined as the maximum voltage level at a receiving agent that will be interpreted as a logical low
value.
3. VIH is defined as the minimum voltage level at a receiving agent that will be interpreted as a logical high
value.
4. VIH and VOH may experience excursions above VDDQ. However, input signal drivers must comply with the
signal quality specifications.
5. This is the pull-up/pull-down driver resistance.
6. RTERM is the termination on the DIMM and is not controlled by the processor.
7. The minimum and maximum values for these signals are programmable by BIOS to one of the two sets.
8. SM_DRAMPWROK must have a maximum of 15 ns rise or fall time over VDDQ * 0.55 ± 200 mV, and the edge
must be monotonic.
9. SM_VREF is defined as VDDQ/2.
10. RON tolerance is preliminary and may be subject to change.
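Note: The VOL/VOH rows in Table 7-7 are expressions rather than fixed values. The sketch
below evaluates them for the typical data-buffer RON from the table and an assumed DIMM
termination RTERM of 60 Ω; per note 6, RTERM is set by the DIMM, so that value is only an
example.

    # Worked example of the Table 7-7 VOL/VOH expressions (illustration only).
    def ddr3_vol_voh(vddq, ron, rterm):
        swing = (vddq / 2) * (ron / (ron + rterm))
        return swing, vddq - swing        # (VOL, VOH)

    vol, voh = ddr3_vol_voh(vddq=1.5, ron=28.6, rterm=60.0)  # assumed 60 ohm DIMM termination
    print(f"VOL = {vol:.3f} V, VOH = {voh:.3f} V")
    # prints roughly: VOL = 0.242 V, VOH = 1.258 V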
Table 7-8. Control Sideband and TAP Signal Group DC Specifications

Symbol | Parameter | Min | Max | Units | Notes1
VIL | Input Low Voltage | — | VCCIO * 0.3 | V | 2
VIH | Input High Voltage | VCCIO * 0.7 | — | V | 2, 4
VOL | Output Low Voltage | — | VCCIO * 0.1 | V | 2
VOH | Output High Voltage | VCCIO * 0.9 | — | V | 2, 4
RON | Buffer On Resistance | 23 | 73 | Ω | —
ILI | Input Leakage Current | — | ±200 | µA | 3

Notes:
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies.
2. The VCCIO referred to in these specifications refers to instantaneous VCCIO.
3. For VIN between 0 V and VCCIO. Measured when the driver is tri-stated.
4. VIH and VOH may experience excursions above VCCIO. However, input signal drivers must comply with the
signal quality specifications.

Table 7-9. PCI Express* DC Specifications

Symbol | Parameter | Min | Typ | Max | Units | Notes1
ZTX-DIFF-DC | DC Differential Tx Impedance (Gen 1 Only) | 80 | — | 120 | Ω | 2
ZTX-DIFF-DC | DC Differential Tx Impedance (Gen 2 and Gen 3) | — | — | 120 | Ω | 2
ZRX-DC | DC Common Mode Rx Impedance | 40 | — | 60 | Ω | 3, 4
ZRX-DIFF-DC | DC Differential Rx Impedance (Gen 1 Only) | 80 | — | 120 | Ω | —
PEG_ICOMPO | Comp Resistance | 24.75 | 25 | 25.25 | Ω | 5, 6
PEG_ICOMPI | Comp Resistance | 24.75 | 25 | 25.25 | Ω | 5, 6
PEG_RCOMPO | Comp Resistance | 24.75 | 25 | 25.25 | Ω | 5, 6

Notes:
1. Refer to the PCI Express Base Specification for more details.
2. Low impedance defined during signaling. Parameter is captured for 5.0 GHz by RLTX-DIFF.
3. DC impedance limits are needed to ensure Receiver detect.
4. The Rx DC Common Mode Impedance must be present when the Receiver terminations are first enabled to
ensure that Receiver Detect occurs properly. Compensation of this impedance can start immediately, and the
Rx Common Mode Impedance (constrained by RLRX-CM to 50 Ω ±20%) must be within the specified range by
the time Detect is entered.
5. COMP resistance must be provided on the system board with 1% resistors.
6. PEG_ICOMPO, PEG_ICOMPI, and PEG_RCOMPO use the same resistor. Intel allows using 24.9 Ω 1% resistors.
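Note: The CMOS levels in Table 7-8 are expressed as fractions of VCCIO. For reference, the
sketch below evaluates them for the 1.05 V typical VCCIO from Table 7-5; this is an
illustration only, since the instantaneous VCCIO applies per note 2.

    # Control Sideband / TAP level thresholds for an assumed VCCIO (illustration only).
    VCCIO = 1.05   # V, typical from Table 7-5
    levels = {
        "VIL max": 0.3 * VCCIO,
        "VIH min": 0.7 * VCCIO,
        "VOL max": 0.1 * VCCIO,
        "VOH min": 0.9 * VCCIO,
    }
    for name, volts in levels.items():
        print(f"{name}: {volts:.3f} V")   # about 0.315 / 0.735 / 0.105 / 0.945 V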
7.11 Platform Environmental Control Interface (PECI)
DC Specifications
PECI is an Intel proprietary interface that provides a communication channel from Intel
processors and chipset components to external thermal monitoring devices. The
processor contains a Digital Thermal Sensor (DTS) that reports a relative die
temperature as an offset from the Thermal Control Circuit (TCC) activation temperature.
Temperature sensors located throughout the die are implemented as analog-to-digital
converters calibrated at the factory. PECI provides an interface for external devices to
read the DTS temperature for thermal management and fan speed control. More
detailed information may be found in the Platform Environment Control Interface
(PECI) Specification.
7.11.1 PECI Bus Architecture
The PECI architecture is based on a wired-OR bus that the clients (such as the processor
PECI) can pull high with a strong drive. The idle state on the bus is near zero.
Figure 7-1 demonstrates the PECI design and connectivity: the host/originator can be a
third-party PECI host, and one of the PECI clients is the processor PECI device.
Figure 7-1. Example for PECI Host-Clients Connection
[Figure: a host/originator and one or more PECI clients share a single PECI wire pulled to VTT;
each node contributes CPECI < 10 pF.]
7.11.2 DC Characteristics
The PECI interface operates at a nominal voltage set by VCCIO. The DC electrical
specifications shown in Table 7-10 are used with devices normally operating from a
VCCIO interface supply. VCCIO nominal levels will vary between processor families. All
PECI devices will operate at the VCCIO level determined by the processor installed in the
system. For specific nominal VCCIO levels, refer to Table 7-5.
Table 7-10. PECI DC Electrical Limits

Symbol | Definition and Conditions | Min | Max | Units | Notes1
Rup | Output resistance | 15 | 45 | Ω | 3
Vin | Input Voltage Range | -0.15 | VCCIO | V | —
Vhysteresis | Hysteresis | 0.1 * VCCIO | N/A | V | —
Vn | Negative-Edge Threshold Voltage | 0.275 * VCCIO | 0.500 * VCCIO | V | —
Vp | Positive-Edge Threshold Voltage | 0.550 * VCCIO | 0.725 * VCCIO | V | —
Cbus | Bus Capacitance per Node | N/A | 10 | pF | —
Cpad | Pad Capacitance | 0.7 | 1.8 | pF | —
Ileak000 | Leakage current at 0 V | — | 0.6 | mA | —
Ileak025 | Leakage current at 0.25 * VCCIO | — | 0.4 | mA | —
Ileak050 | Leakage current at 0.50 * VCCIO | — | 0.2 | mA | —
Ileak075 | Leakage current at 0.75 * VCCIO | — | 0.13 | mA | —
Ileak100 | Leakage current at VCCIO | — | 0.10 | mA | —

Notes:
1. VCCIO supplies the PECI interface. PECI behavior does not affect VCCIO min/max specifications.
2. The leakage specification applies to powered devices on the PECI bus.
3. The PECI buffer internal pull-up resistance is measured at 0.75 * VCCIO.

7.11.3 Input Device Hysteresis
The input buffers in both client and host models must use a Schmitt-triggered input
design for improved noise immunity. Use Figure 7-2 as a guide for input buffer design.
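Note: Since the PECI thresholds in Table 7-10 are expressed as fractions of VCCIO, the
sketch below evaluates the hysteresis window for an assumed VCCIO of 1.05 V (the typical
value from Table 7-5); it is an illustration only.

    # PECI input thresholds for an assumed VCCIO (illustration only).
    VCCIO = 1.05
    vn_min, vn_max = 0.275 * VCCIO, 0.500 * VCCIO   # negative-edge threshold range
    vp_min, vp_max = 0.550 * VCCIO, 0.725 * VCCIO   # positive-edge threshold range
    hyst_min = 0.1 * VCCIO                          # minimum hysteresis

    print(f"Vn: {vn_min:.3f}..{vn_max:.3f} V, Vp: {vp_min:.3f}..{vp_max:.3f} V, "
          f"min hysteresis: {hyst_min:.3f} V")
    # prints the threshold windows and minimum hysteresis for the assumed VCCIO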
Figure 7-2. Input Device Hysteresis
[Figure: valid input signal range between PECI ground and VTTD, showing the minimum and
maximum VP and VN thresholds, the PECI high and low ranges, and the minimum hysteresis window.]
8 Processor Land and Signal Information
8.1 Processor Land Assignments
The processor land map is shown in Figure 8-1. Table 8-1 provides a listing of all
processor lands ordered alphabetically by land name.
Note: SA_ECC_CB[7:0] and SB_ECC_CB[7:0] Lands are RSVD on Desktop 3rd Generation
Intel® Core™ i7/i5 processors.
Note: PE_TX[3:0]/PE_TX#[3:0] and PE_RX[3:0]/PE_RX#[3:0] Lands are RSVD on Desktop
3rd Generation Intel® Core™ i7/i5 processors, Desktop Intel Pentium processors, and
Desktop Intel Celeron processors.
Figure 8-1. LGA Socket Land Map
[Figure: land map grid with columns 1 through 40 and rows A through AY; see Table 8-1 for
individual land assignments.]
Table 8-1. Processor Land List by Land Name

Land Name  Land #  Buffer Type  Dir.
BCLK_ITP C40 Diff Clk I
BCLK_ITP# D40 Diff Clk I
BCLK[0] W2 Diff Clk I
BCLK#[0] W1 Diff Clk I
BPM#[0] H40 GTL I/O
BPM#[1] H38 GTL I/O
BPM#[2] G38 GTL I/O
BPM#[3] G40 GTL I/O
BPM#[4] G39 GTL I/O
BPM#[5] F38 GTL I/O
BPM#[6] E40 GTL I/O
BPM#[7] F40 GTL I/O
CATERR# E37 GTL O
CFG[0] H36 CMOS I
CFG[1] J36 CMOS I
CFG[2] J37 CMOS I
CFG[3] K36 CMOS I
CFG[4] L36 CMOS I
CFG[5] N35 CMOS I
CFG[6] L37 CMOS I
CFG[7] M36 CMOS I
CFG[8] J38 CMOS I
CFG[9] L35 CMOS I
CFG[10] M38 CMOS I
CFG[11] N36 CMOS I
CFG[12] N38 CMOS I
CFG[13] N39 CMOS I
CFG[14] N37 CMOS I
CFG[15] N40 CMOS I
CFG[16] G37 CMOS I
CFG[17] G36 CMOS I
DBR# E39 Async CMOS O
DMI_RX[0] W5 DMI I
DMI_RX[1] V3 DMI I
DMI_RX[2] Y3 DMI I
DMI_RX[3] AA4 DMI I
DMI_RX#[0] W4 DMI I
DMI_RX#[1] V4 DMI I
DMI_RX#[2] Y4 DMI I
DMI_RX#[3] AA5 DMI I
DMI_TX[0] V7 DMI O
DMI_TX[1] W7 DMI O
DMI_TX[2] Y6 DMI O
DMI_TX[3] AA7 DMI O
DMI_TX#[0] V6 DMI O
DMI_TX#[1] W8 DMI O
DMI_TX#[2] Y7 DMI O
DMI_TX#[3] AA8 DMI O
SB_DIMM_VREFDQ AH1 Analog O
SA_DIMM_VREFDQ AH4 Analog O
FDI_COMPIO AE2 Analog I
FDI_FSYNC[0] AC5 CMOS I
FDI_FSYNC[1] AE5 CMOS I
FDI_ICOMPO AE1 Analog I
FDI_INT AG3 CMOS I
FDI_LSYNC[0] AC4 CMOS I
FDI_LSYNC[1] AE4 CMOS I
FDI_TX[0] AC8 FDI O
FDI_TX[1] AC2 FDI O
FDI_TX[2] AD2 FDI O
FDI_TX[3] AD4 FDI O
FDI_TX[4] AD7 FDI O
FDI_TX[5] AE7 FDI O
FDI_TX[6] AF3 FDI O
FDI_TX[7] AG2 FDI O
FDI_TX#[0] AC7 FDI O
FDI_TX#[1] AC3 FDI O
FDI_TX#[2] AD1 FDI O
FDI_TX#[3] AD3 FDI O
FDI_TX#[4] AD6 FDI O
FDI_TX#[5] AE8 FDI O
FDI_TX#[6] AF2 FDI O
FDI_TX#[7] AG1 FDI O
NCTF A38
NCTF AU40
NCTF AW38
NCTF C2
NCTF D1
PE_RX[0] P3 PCI Express I
PE_RX[1] R2 PCI Express I
PE_RX[2] T4 PCI Express I
PE_RX[3] U2 PCI Express I
PE_RX#[0] P4 PCI Express I
PE_RX#[1] R1 PCI Express I
PE_RX#[2] T3 PCI Express I
PE_RX#[3] U1 PCI Express I
PE_TX[0] P8 PCI Express O
PE_TX[1] T7 PCI Express O
PE_TX[2] R6 PCI Express O
PE_TX[3] U5 PCI Express O
PE_TX#[0] P7 PCI Express O
PE_TX#[1] T8 PCI Express O
PE_TX#[2] R5 PCI Express O
PE_TX#[3] U6 PCI Express O
PECI J35 Async I/O
PEG_COMPI B4 Analog I
PEG_ICOMPO B5 Analog I
PEG_RCOMPO C4 Analog I
PEG_RX[0] B11 PCI Express I
PEG_RX[1] D12 PCI Express I
PEG_RX[2] C10 PCI Express I
PEG_RX[3] E10 PCI Express I
PEG_RX[4] B8 PCI Express I
PEG_RX[5] C6 PCI Express I
PEG_RX[6] A5 PCI Express I
PEG_RX[7] E2 PCI Express I
PEG_RX[8] F4 PCI Express I
PEG_RX[9] G2 PCI Express I
PEG_RX[10] H3 PCI Express I
PEG_RX[11] J1 PCI Express I
PEG_RX[12] K3 PCI Express I
PEG_RX[13] L1 PCI Express I
PEG_RX[14] M3 PCI Express I
PEG_RX[15] N1 PCI Express I
PEG_RX#[0] B12 PCI Express I
PEG_RX#[1] D11 PCI Express I
PEG_RX#[2] C9 PCI Express I
PEG_RX#[3] E9 PCI Express I
PEG_RX#[4] B7 PCI Express I
PEG_RX#[5] C5 PCI Express I
PEG_RX#[6] A6 PCI Express I
PEG_RX#[7] E1 PCI Express I
PEG_RX#[8] F3 PCI Express I
PEG_RX#[9] G1 PCI Express I
PEG_RX#[10] H4 PCI Express I
PEG_RX#[11] J2 PCI Express I
PEG_RX#[12] K4 PCI Express I
PEG_RX#[13] L2 PCI Express I
PEG_RX#[14] M4 PCI Express I
PEG_RX#[15] N2 PCI Express I
PEG_TX[0] C13 PCI Express O
PEG_TX[1] E14 PCI Express O
PEG_TX[2] G14 PCI Express O
PEG_TX[3] F12 PCI Express O
PEG_TX[4] J14 PCI Express O
PEG_TX[5] D8 PCI Express O
PEG_TX[6] D3 PCI Express O
PEG_TX[7] E6 PCI Express O
PEG_TX[8] F8 PCI Express O
PEG_TX[9] G10 PCI Express O
PEG_TX[10] G5 PCI Express O
PEG_TX[11] K7 PCI Express O
PEG_TX[12] J5 PCI Express O
PEG_TX[13] M8 PCI Express O
PEG_TX[14] L6 PCI Express O
PEG_TX[15] N5 PCI Express O
PEG_TX#[0] C14 PCI Express O
PEG_TX#[1] E13 PCI Express O
PEG_TX#[2] G13 PCI Express O
PEG_TX#[3] F11 PCI Express O
PEG_TX#[4] J13 PCI Express O
PEG_TX#[5] D7 PCI Express O
PEG_TX#[6] C3 PCI Express O
PEG_TX#[7] E5 PCI Express O
PEG_TX#[8] F7 PCI Express O
PEG_TX#[9] G9 PCI Express O
PEG_TX#[10] G6 PCI Express O
PEG_TX#[11] K8 PCI Express O
PEG_TX#[12] J6 PCI Express O
PEG_TX#[13] M7 PCI Express O
PEG_TX#[14] L5 PCI Express O
PEG_TX#[15] N6 PCI Express O
PM_SYNC E38 CMOS I
PRDY# K38 Async GTL O
PREQ# K40 Async GTL I
PROC_SEL K32 N/A O
PROCHOT# H34 Async GTL I/O
RESET# F36 CMOS I
RSVD AB6
RSVD AB7
RSVD AD37
RSVD AE6
RSVD AF4
RSVD AG4
RSVD AJ11
RSVD AJ29
RSVD AJ30
RSVD AJ31
RSVD AN20
RSVD AP20
RSVD AT11
RSVD AT14
RSVD AU10
RSVD AV34
RSVD AW34
RSVD AY10
RSVD C38
RSVD C39
RSVD D38
RSVD H7
RSVD H8
RSVD J33
RSVD J34
RSVD J9
RSVD K34
RSVD K9
RSVD L31
RSVD L33
RSVD L34
RSVD L9
RSVD M34
RSVD N33
RSVD N34
RSVD P35
RSVD P37
RSVD P39
RSVD R34
RSVD R36
RSVD R38
RSVD R40
RSVD J31
RSVD AD34
RSVD AD35
RSVD K31
RSVD_NCTF AV1
RSVD_NCTF AW2
RSVD_NCTF AY3
RSVD_NCTF B39
SA_BS[0] AY29 DDR3 O
SA_BS[1] AW28 DDR3 O
SA_BS[2] AV20 DDR3 O
SA_CAS# AV30 DDR3 O
SA_CK[0] AY25 DDR3 O
SA_CK[1] AU24 DDR3 O
SA_CK[2] AW27 DDR3 O
SA_CK[3] AV26 DDR3 O
SA_CK#[0] AW25 DDR3 O
SA_CK#[1] AU25 DDR3 O
SA_CK#[2] AY27 DDR3 O
SA_CK#[3] AW26 DDR3 O
SA_CKE[0] AV19 DDR3 O
SA_CKE[1] AT19 DDR3 O
SA_CKE[2] AU18 DDR3 O
SA_CKE[3] AV18 DDR3 O
SA_CS#[0] AU29 DDR3 O
SA_CS#[1] AV32 DDR3 O
SA_CS#[2] AW30 DDR3 O
SA_CS#[3] AU33 DDR3 O
SA_DQ[0] AJ3 DDR3 I/O
SA_DQ[1] AJ4 DDR3 I/O
SA_DQ[2] AL3 DDR3 I/O
SA_DQ[3] AL4 DDR3 I/O
SA_DQ[4] AJ2 DDR3 I/O
SA_DQ[5] AJ1 DDR3 I/O
SA_DQ[6] AL2 DDR3 I/O
SA_DQ[7] AL1 DDR3 I/O
SA_DQ[8] AN1 DDR3 I/O
SA_DQ[9] AN4 DDR3 I/O
SA_DQ[10] AR3 DDR3 I/O
SA_DQ[11] AR4 DDR3 I/O
SA_DQ[12] AN2 DDR3 I/O
SA_DQ[13] AN3 DDR3 I/O
SA_DQ[14] AR2 DDR3 I/O
SA_DQ[15] AR1 DDR3 I/O
SA_DQ[16] AV2 DDR3 I/O
SA_DQ[17] AW3 DDR3 I/O
SA_DQ[18] AV5 DDR3 I/O
SA_DQ[19] AW5 DDR3 I/O
SA_DQ[20] AU2 DDR3 I/O
SA_DQ[21] AU3 DDR3 I/O
SA_DQ[22] AU5 DDR3 I/O
SA_DQ[23] AY5 DDR3 I/O
SA_DQ[24] AY7 DDR3 I/O
SA_DQ[25] AU7 DDR3 I/O
SA_DQ[26] AV9 DDR3 I/O
SA_DQ[27] AU9 DDR3 I/O
SA_DQ[28] AV7 DDR3 I/O
SA_DQ[29] AW7 DDR3 I/O
SA_DQ[30] AW9 DDR3 I/O
SA_DQ[31] AY9 DDR3 I/O
SA_DQ[32] AU35 DDR3 I/O
SA_DQ[33] AW37 DDR3 I/O
SA_DQ[34] AU39 DDR3 I/O
SA_DQ[35] AU36 DDR3 I/O
SA_DQ[36] AW35 DDR3 I/O
SA_DQ[37] AY36 DDR3 I/O
SA_DQ[38] AU38 DDR3 I/O
SA_DQ[39] AU37 DDR3 I/O
SA_DQ[40] AR40 DDR3 I/O
SA_DQ[41] AR37 DDR3 I/O
SA_DQ[42] AN38 DDR3 I/O
SA_DQ[43] AN37 DDR3 I/O
SA_DQ[44] AR39 DDR3 I/O
SA_DQ[45] AR38 DDR3 I/O
SA_DQ[46] AN39 DDR3 I/O
SA_DQ[47] AN40 DDR3 I/O
SA_DQ[48] AL40 DDR3 I/O
SA_DQ[49] AL37 DDR3 I/O
SA_DQ[50] AJ38 DDR3 I/O
SA_DQ[51] AJ37 DDR3 I/O
SA_DQ[52] AL39 DDR3 I/O
SA_DQ[53] AL38 DDR3 I/O
SA_DQ[54] AJ39 DDR3 I/O
SA_DQ[55] AJ40 DDR3 I/O
SA_DQ[56] AG40 DDR3 I/O
SA_DQ[57] AG37 DDR3 I/O
SA_DQ[58] AE38 DDR3 I/O
SA_DQ[59] AE37 DDR3 I/O
SA_DQ[60] AG39 DDR3 I/O
SA_DQ[61] AG38 DDR3 I/O
SA_DQ[62] AE39 DDR3 I/O
SA_DQ[63] AE40 DDR3 I/O
SA_DQS[0] AK3 DDR3 I/O
SA_DQS[1] AP3 DDR3 I/O
SA_DQS[2] AW4 DDR3 I/O
SA_DQS[3] AV8 DDR3 I/O
SA_DQS[4] AV37 DDR3 I/O
SA_DQS[5] AP38 DDR3 I/O
SA_DQS[6] AK38 DDR3 I/O
SA_DQS[7] AF38 DDR3 I/O
SA_DQS[8] AV13 DDR3 I/O
SA_DQS#[0] AK2 DDR3 I/O
SA_DQS#[1] AP2 DDR3 I/O
SA_DQS#[2] AV4 DDR3 I/O
SA_DQS#[3] AW8 DDR3 I/O
SA_DQS#[4] AV36 DDR3 I/O
SA_DQS#[5] AP39 DDR3 I/O
SA_DQS#[6] AK39 DDR3 I/O
SA_DQS#[7] AF39 DDR3 I/O
SA_DQS#[8] AV12 DDR3 I/O
SA_ECC_CB[0] AU12 DDR3 I/O
SA_ECC_CB[1] AU14 DDR3 I/O
SA_ECC_CB[2] AW13 DDR3 I/O
SA_ECC_CB[3] AY13 DDR3 I/O
SA_ECC_CB[4] AU13 DDR3 I/O
SA_ECC_CB[5] AU11 DDR3 I/O
SA_ECC_CB[6] AY12 DDR3 I/O
SA_ECC_CB[7] AW12 DDR3 I/O
SA_MA[0] AV27 DDR3 O
SA_MA[1] AY24 DDR3 O
SA_MA[2] AW24 DDR3 O
SA_MA[3] AW23 DDR3 O
SA_MA[4] AV23 DDR3 O
SA_MA[5] AT24 DDR3 O
SA_MA[6] AT23 DDR3 O
SA_MA[7] AU22 DDR3 O
SA_MA[8] AV22 DDR3 O
SA_MA[9] AT22 DDR3 O
SA_MA[10] AV28 DDR3 O
SA_MA[11] AU21 DDR3 O
SA_MA[12] AT21 DDR3 O
SA_MA[13] AW32 DDR3 O
SA_MA[14] AU20 DDR3 O
SA_MA[15] AT20 DDR3 O
SA_ODT[0] AV31 DDR3 O
SA_ODT[1] AU32 DDR3 O
SA_ODT[2] AU30 DDR3 O
SA_ODT[3] AW33 DDR3 O
SA_RAS# AU28 DDR3 O
SA_WE# AW29 DDR3 O
SB_BS[0] AP23 DDR3 O
SB_BS[1] AM24 DDR3 O
SB_BS[2] AW17 DDR3 O
SB_CAS# AK25 DDR3 O
SB_CK[0] AL21 DDR3 O
SB_CK[1] AL20 DDR3 O
SB_CK[2] AL23 DDR3 O
SB_CK[3] AP21 DDR3 O
SB_CK#[0] AL22 DDR3 O
SB_CK#[1] AK20 DDR3 O
SB_CK#[2] AM22 DDR3 O
SB_CK#[3] AN21 DDR3 O
SB_CKE[0] AU16 DDR3 O
SB_CKE[1] AY15 DDR3 O
SB_CKE[2] AW15 DDR3 O
SB_CKE[3] AV15 DDR3 O
SB_CS#[0] AN25 DDR3 O
SB_CS#[1] AN26 DDR3 O
SB_CS#[2] AL25 DDR3 O
SB_CS#[3] AT26 DDR3 O
SB_DQ[0] AG7 DDR3 I/O
SB_DQ[1] AG8 DDR3 I/O
SB_DQ[2] AJ9 DDR3 I/O
SB_DQ[3] AJ8 DDR3 I/O
SB_DQ[4] AG5 DDR3 I/O
SB_DQ[5] AG6 DDR3 I/O
SB_DQ[6] AJ6 DDR3 I/O
SB_DQ[7] AJ7 DDR3 I/O
SB_DQ[8] AL7 DDR3 I/O
SB_DQ[9] AM7 DDR3 I/O
SB_DQ[10] AM10 DDR3 I/O
SB_DQ[11] AL10 DDR3 I/O
SB_DQ[12] AL6 DDR3 I/O
SB_DQ[13] AM6 DDR3 I/O
SB_DQ[14] AL9 DDR3 I/O
SB_DQ[15] AM9 DDR3 I/O
SB_DQ[16] AP7 DDR3 I/O
SB_DQ[17] AR7 DDR3 I/O
SB_DQ[18] AP10 DDR3 I/O
SB_DQ[19] AR10 DDR3 I/O
SB_DQ[20] AP6 DDR3 I/O
SB_DQ[21] AR6 DDR3 I/O
SB_DQ[22] AP9 DDR3 I/O
SB_DQ[23] AR9 DDR3 I/O
SB_DQ[24] AM12 DDR3 I/O
SB_DQ[25] AM13 DDR3 I/O
SB_DQ[26] AR13 DDR3 I/O
SB_DQ[27] AP13 DDR3 I/O
SB_DQ[28] AL12 DDR3 I/O
SB_DQ[29] AL13 DDR3 I/O
SB_DQ[30] AR12 DDR3 I/O
SB_DQ[31] AP12 DDR3 I/O
SB_DQ[32] AR28 DDR3 I/O
SB_DQ[33] AR29 DDR3 I/O
SB_DQ[34] AL28 DDR3 I/O
SB_DQ[35] AL29 DDR3 I/O
SB_DQ[36] AP28 DDR3 I/O
SB_DQ[37] AP29 DDR3 I/O
SB_DQ[38] AM28 DDR3 I/O
SB_DQ[39] AM29 DDR3 I/O
SB_DQ[40] AP32 DDR3 I/O
SB_DQ[41] AP31 DDR3 I/O
SB_DQ[42] AP35 DDR3 I/O
SB_DQ[43] AP34 DDR3 I/O
SB_DQ[44] AR32 DDR3 I/O
SB_DQ[45] AR31 DDR3 I/O
SB_DQ[46] AR35 DDR3 I/O
SB_DQ[47] AR34 DDR3 I/O
SB_DQ[48] AM32 DDR3 I/O
SB_DQ[49] AM31 DDR3 I/O
SB_DQ[50] AL35 DDR3 I/O
SB_DQ[51] AL32 DDR3 I/O
SB_DQ[52] AM34 DDR3 I/O
SB_DQ[53] AL31 DDR3 I/O
SB_DQ[54] AM35 DDR3 I/O
SB_DQ[55] AL34 DDR3 I/O
SB_DQ[56] AH35 DDR3 I/O
SB_DQ[57] AH34 DDR3 I/O
SB_DQ[58] AE34 DDR3 I/O
SB_DQ[59] AE35 DDR3 I/O
SB_DQ[60] AJ35 DDR3 I/O
SB_DQ[61] AJ34 DDR3 I/O
SB_DQ[62] AF33 DDR3 I/O
SB_DQ[63] AF35 DDR3 I/O
SB_DQS[0] AH7 DDR3 I/O
SB_DQS[1] AM8 DDR3 I/O
SB_DQS[2] AR8 DDR3 I/O
SB_DQS[3] AN13 DDR3 I/O
SB_DQS[4] AN29 DDR3 I/O
SB_DQS[5] AP33 DDR3 I/O
SB_DQS[6] AL33 DDR3 I/O
SB_DQS[7] AG35 DDR3 I/O
SB_DQS[8] AN16 DDR3 I/O
SB_DQS#[0] AH6 DDR3 I/O
SB_DQS#[1] AL8 DDR3 I/O
SB_DQS#[2] AP8 DDR3 I/O
SB_DQS#[3] AN12 DDR3 I/O
SB_DQS#[4] AN28 DDR3 I/O
SB_DQS#[5] AR33 DDR3 I/O
SB_DQS#[6] AM33 DDR3 I/O
SB_DQS#[7] AG34 DDR3 I/O
SB_DQS#[8] AN15 DDR3 I/O
SB_ECC_CB[0] AL16 DDR3 I/O
SB_ECC_CB[1] AM16 DDR3 I/O
SB_ECC_CB[2] AP16 DDR3 I/O
SB_ECC_CB[3] AR16 DDR3 I/O
SB_ECC_CB[4] AL15 DDR3 I/O
SB_ECC_CB[5] AM15 DDR3 I/O
SB_ECC_CB[6] AR15 DDR3 I/O
SB_ECC_CB[7] AP15 DDR3 I/O
SB_MA[0] AK24 DDR3 O
SB_MA[1] AM20 DDR3 O
SB_MA[2] AM19 DDR3 O
SB_MA[3] AK18 DDR3 O
SB_MA[4] AP19 DDR3 O
SB_MA[5] AP18 DDR3 O
SB_MA[6] AM18 DDR3 O
SB_MA[7] AL18 DDR3 O
SB_MA[8] AN18 DDR3 O
SB_MA[9] AY17 DDR3 O
SB_MA[10] AN23 DDR3 O
SB_MA[11] AU17 DDR3 O
SB_MA[12] AT18 DDR3 O
SB_MA[13] AR26 DDR3 O
SB_MA[14] AY16 DDR3 O
SB_MA[15] AV16 DDR3 O
SB_ODT[0] AL26 DDR3 O
SB_ODT[1] AP26 DDR3 O
SB_ODT[2] AM26 DDR3 O
SB_ODT[3] AK26 DDR3 O
SB_RAS# AP24 DDR3 O
SB_WE# AR25 DDR3 O
SKTOCC# AJ33 Analog O
SM_DRAMPWROK AJ19 Async CMOS I
SM_DRAMRST# AW18 DDR3 O
SM_VREF AJ22 Analog I
TCK M40 TAP I
TDI L40 TAP I
TDO L39 TAP O
THERMTRIP# G35 Async CMOS O
TMS L38 TAP I
TRST# J39 TAP I
UNCOREPWRGOOD J40 Async CMOS I
VCC A12 PWR
VCC A13 PWR
VCC A14 PWR
VCC A15 PWR
VCC A16 PWR
VCC A18 PWR
VCC A24 PWR
VCC A25 PWR
VCC A27 PWR
VCC A28 PWR
VCC B15 PWR
VCC B16 PWR
VCC B18 PWR
VCC B24 PWR
VCC B25 PWR
VCC B27 PWR
VCC B28 PWR
VCC B30 PWR
VCC B31 PWR
VCC B33 PWR
VCC B34 PWR
VCC C15 PWR
VCC C16 PWR
VCC C18 PWR
VCC C19 PWR
VCC C21 PWR
VCC C22 PWR
VCC C24 PWR
VCC C25 PWR
VCC C27 PWR
VCC C28 PWR
VCC C30 PWR
VCC C31 PWR
VCC C33 PWR
VCC C34 PWR
VCC C36 PWR
VCC D13 PWR
VCC D14 PWR
VCC D15 PWR
VCC D16 PWR
VCC D18 PWR
VCC D19 PWR
VCC D21 PWR
VCC D22 PWR
VCC D24 PWR
VCC D25 PWR
VCC D27 PWR
VCC D28 PWR
VCC D30 PWR
VCC D31 PWR
VCC D33 PWR
VCC D34 PWR
VCC D35 PWR
VCC D36 PWR
VCC E15 PWR
VCC E16 PWR
VCC E18 PWR
VCC E19 PWR
VCC E21 PWR
VCC E22 PWR
VCC E24 PWR
VCC E25 PWR
VCC E27 PWR
VCC E28 PWR
VCC E30 PWR
VCC E31 PWR
VCC E33 PWR
VCC E34 PWR
VCC E35 PWR
VCC F15 PWR
VCC F16 PWR
VCC F18 PWR
VCC F19 PWR
VCC F21 PWR
VCC F22 PWR
VCC F24 PWR
VCC F25 PWR
VCC F27 PWR
VCC F28 PWR
VCC F30 PWR
VCC F31 PWR
VCC F32 PWR
VCC F33 PWR
VCC F34 PWR
VCC G15 PWR
VCC G16 PWR
VCC G18 PWR
VCC G19 PWR
VCC G21 PWR
VCC G22 PWR
VCC G24 PWR
VCC G25 PWR
VCC G27 PWR
VCC G28 PWR
VCC G30 PWR
VCC G31 PWR
VCC G32 PWR
VCC G33 PWR
VCC H13 PWR
VCC H14 PWR
VCC H15 PWR
VCC H16 PWR
VCC H18 PWR
VCC H19 PWR
VCC H21 PWR
VCC H22 PWR
VCC H24 PWR
VCC H25 PWR
VCC H27 PWR
VCC H28 PWR
VCC H30 PWR
VCC H31 PWR
VCC H32 PWR
VCC J12 PWR
VCC J15 PWR
VCC J16 PWR
VCC J18 PWR
VCC J19 PWR
VCC J21 PWR
VCC J22 PWR
VCC J24 PWR
VCC J25 PWR
VCC J27 PWR
VCC J28 PWR
VCC J30 PWR
VCC K15 PWR
VCC K16 PWR
VCC K18 PWR
VCC K19 PWR
VCC K21 PWR
VCC K22 PWR
VCC K24 PWR
VCC K25 PWR
VCC K27 PWR
VCC K28 PWR
VCC K30 PWR
VCC L13 PWR
VCC L14 PWR
VCC L15 PWR
VCC L16 PWR
VCC L18 PWR
VCC L19 PWR
VCC L21 PWR
VCC L22 PWR
VCC L24 PWR
VCC L25 PWR
VCC L27 PWR
VCC L28 PWR
VCC L30 PWR
VCC M14 PWR
VCC M15 PWR
VCC M16 PWR
VCC M18 PWR
VCC M19 PWR
VCC M21 PWR
VCC M22 PWR
VCC M24 PWR
VCC M25 PWR
VCC M27 PWR
VCC M28 PWR
VCC M30 PWR
VCC_SENSE A36 Analog O
VCCAXG AB33 PWR
VCCAXG AB34 PWR
VCCAXG AB35 PWR
VCCAXG AB36 PWR
VCCAXG AB37 PWR
VCCAXG AB38 PWR
VCCAXG AB39 PWR
VCCAXG AB40 PWR
VCCAXG AC33 PWR
VCCAXG AC34 PWR
VCCAXG AC35 PWR
VCCAXG AC36 PWR
VCCAXG AC37 PWR
VCCAXG AC38 PWR
VCCAXG AC39 PWR
VCCAXG AC40 PWR
VCCAXG T33 PWR
VCCAXG T34 PWR
VCCAXG T35 PWR
VCCAXG T36 PWR
VCCAXG T37 PWR
VCCAXG T38 PWR
VCCAXG T39 PWR
VCCAXG T40 PWR
VCCAXG U33 PWR
VCCAXG U34 PWR
VCCAXG U35 PWR
VCCAXG U36 PWR
VCCAXG U37 PWR
VCCAXG U38 PWR
VCCAXG U39 PWR
VCCAXG U40 PWR
VCCAXG W33 PWR
VCCAXG W34 PWR
VCCAXG W35 PWR
VCCAXG W36 PWR
VCCAXG W37 PWR
VCCAXG W38 PWR
VCCAXG Y33 PWR
VCCAXG Y34 PWR
VCCAXG Y35 PWR
VCCAXG Y36 PWR
VCCAXG Y37 PWR
VCCAXG Y38 PWR
VCCAXG_SENSE L32 Analog O
VCCIO A11 PWR
VCCIO A7 PWR
VCCIO AA3 PWR
VCCIO AB8 PWR
VCCIO AF8 PWR
VCCIO AG33 PWR
VCCIO AJ16 PWR
VCCIO AJ17 PWR
VCCIO AJ26 PWR
VCCIO AJ28 PWR
VCCIO AJ32 PWR
VCCIO AK15 PWR
VCCIO AK17 PWR
VCCIO AK19 PWR
VCCIO AK21 PWR
VCCIO AK23 PWR
VCCIO AK27 PWR
VCCIO AK29 PWR
VCCIO AK30 PWR
VCCIO B9 PWR
VCCIO D10 PWR
VCCIO D6 PWR
VCCIO E3 PWR
VCCIO E4 PWR
VCCIO G3 PWR
VCCIO G4 PWR
VCCIO J3 PWR
VCCIO J4 PWR
VCCIO J7 PWR
VCCIO J8 PWR
VCCIO L3 PWR
VCCIO L4 PWR
VCCIO L7 PWR
VCCIO M13 PWR
VCCIO N3 PWR
VCCIO N4 PWR
VCCIO N7 PWR
VCCIO R3 PWR
VCCIO R4 PWR
VCCIO R7 PWR
VCCIO U3 PWR
VCCIO U4 PWR
VCCIO U7 PWR
VCCIO V8 PWR
VCCIO W3 PWR
VCCIO_SEL P33 N/A O
VCCIO_SENSE AB4 Analog O
VCCPLL AK11 PWR
VCCPLL AK12 PWR
VCCSA H10 PWR
VCCSA H11 PWR
VCCSA H12 PWR
VCCSA J10 PWR
VCCSA K10 PWR
VCCSA K11 PWR
VCCSA L11 PWR
VCCSA L12 PWR
VCCSA M10 PWR
VCCSA M11 PWR
VCCSA M12 PWR
VCCSA_SENSE T2 Analog O
VCCSA_VID P34 CMOS O
VDDQ AJ13 PWR
VDDQ AJ14 PWR
VDDQ AJ20 PWR
VDDQ AJ23 PWR
VDDQ AJ24 PWR
VDDQ AR20 PWR
VDDQ AR21 PWR
VDDQ AR22 PWR
VDDQ AR23 PWR
VDDQ AR24 PWR
VDDQ AU19 PWR
VDDQ AU23 PWR
VDDQ AU27 PWR
VDDQ AU31 PWR
VDDQ AV21 PWR
VDDQ AV24 PWR
VDDQ AV25 PWR
VDDQ AV29 PWR
VDDQ AV33 PWR
VDDQ AW31 PWR
VDDQ AY23 PWR
VDDQ AY26 PWR
VDDQ AY28 PWR
VIDALERT# A37 CMOS I
VIDSCLK C37 CMOS O
VIDSOUT B37 CMOS I/O
VSS A17 GND
VSS A23 GND
VSS A26 GND
VSS A29 GND
VSS A35 GND
VSS AA33 GND
VSS AA34 GND
VSS AA35 GND
VSS AA36 GND
VSS AA37 GND
VSS AA38 GND
VSS AA6 GND
VSS AB5 GND
VSS AC1 GND
VSS AC6 GND
VSS AD33 GND
VSS AD36 GND
VSS AD38 GND
VSS AD39 GND
VSS AD40 GND
VSS AD5 GND
VSS AD8 GND
VSS AE3 GND
VSS AE33 GND
VSS AE36 GND
VSS AF1 GND
VSS AF34 GND
VSS AF36 GND
VSS AF37 GND
VSS AF40 GND
VSS AF5 GND
VSS AF6 GND
VSS AF7 GND
VSS AG36 GND
VSS AH2 GND
VSS AH3 GND
VSS AH33 GND
VSS AH36 GND
VSS AH37 GND
VSS AH38 GND
VSS AH39 GND
VSS AH40 GND
VSS AH5 GND
VSS AH8 GND
VSS AJ12 GND
VSS AJ15 GND
VSS AJ18 GND
VSS AJ21 GND
VSS AJ25 GND
VSS AJ27 GND
VSS AJ36 GND
VSS AJ5 GND
VSS AK1 GND
VSS AK10 GND
VSS AK13 GND
VSS AK14 GND
VSS AK16 GND
VSS AK22 GND
VSS AK28 GND
VSS AK31 GND
VSS AK32 GND
VSS AK33 GND
VSS AK34 GND
VSS AK35 GND
VSS AK36 GND
VSS AK37 GND
VSS AK4 GND
VSS AK40 GND
VSS AK5 GND
VSS AK6 GND
VSS AK7 GND
VSS AK8 GND
VSS AK9 GND
VSS AL11 GND
VSS AL14 GND
VSS AL17 GND
VSS AL19 GND
VSS AL24 GND
VSS AL27 GND
VSS AL30 GND
VSS AL36 GND
VSS AL5 GND
VSS AM1 GND
VSS AM11 GND
VSS AM14 GND
VSS AM17 GND
VSS AM2 GND
VSS AM21 GND
VSS AM23 GND
VSS AM25 GND
VSS AM27 GND
VSS AM3 GND
VSS AM30 GND
VSS AM36 GND
VSS AM37 GND
VSS AM38 GND
VSS AM39 GND
VSS AM4 GND
VSS AM40 GND
VSS AM5 GND
VSS AN10 GND
VSS AN11 GND
VSS AN14 GND
VSS AN17 GND
VSS AN19 GND
VSS AN22 GND
VSS AN24 GND
VSS AN27 GND
VSS AN30 GND
VSS AN31 GND
VSS AN32 GND
VSS AN33 GND
VSS AN34 GND
VSS AN35 GND
VSS AN36 GND
VSS AN5 GND
VSS AN6 GND
VSS AN7 GND
VSS AN8 GND
VSS AN9 GND
VSS AP1 GND
VSS AP11 GND
VSS AP14 GND
VSS AP17 GND
VSS AP22 GND
VSS AP25 GND
VSS AP27 GND
VSS AP30 GND
VSS AP36 GND
VSS AP37 GND
VSS AP4 GND
VSS AP40 GND
VSS AP5 GND
VSS AR11 GND
VSS AR14 GND
VSS AR17 GND
VSS AR18 GND
VSS AR19 GND
VSS AR27 GND
VSS AR30 GND
VSS AR36 GND
VSS AR5 GND
VSS AT1 GND
VSS AT10 GND
VSS AT12 GND
VSS AT13 GND
VSS AT15 GND
VSS AT16 GND
VSS AT17 GND
VSS AT2 GND
VSS AT25 GND
VSS AT27 GND
VSS AT28 GND
VSS AT29 GND
VSS AT3 GND
VSS AT30 GND
VSS AT31 GND
VSS AT32 GND
VSS AT33 GND
VSS AT34 GND
VSS AT35 GND
VSS AT36 GND
VSS AT37 GND
VSS AT38 GND
VSS AT39 GND
VSS AT4 GND
VSS AT40 GND
VSS AT5 GND
VSS AT6 GND
VSS AT7 GND
VSS AT8 GND
VSS AT9 GND
VSS AU1 GND
VSS AU15 GND
VSS AU26 GND
VSS AU34 GND
VSS AU4 GND
VSS AU6 GND
VSS AU8 GND
VSS AV10 GND
VSS AV11 GND
VSS AV14 GND
VSS AV17 GND
VSS AV3 GND
VSS AV35 GND
VSS AV38 GND
VSS AV6 GND
VSS AW10 GND
VSS AW11 GND
VSS AW14 GND
VSS AW16 GND
VSS AW36 GND
VSS AW6 GND
VSS AY11 GND
VSS AY14 GND
VSS AY18 GND
VSS AY35 GND
VSS AY4 GND
VSS AY6 GND
VSS AY8 GND
VSS B10 GND
VSS B13 GND
VSS B14 GND
VSS B17 GND
VSS B23 GND
VSS B26 GND
VSS B29 GND
VSS B32 GND
VSS B35 GND
VSS B38 GND
VSS B6 GND
VSS C11 GND
VSS C12 GND
VSS C17 GND
VSS C20 GND
VSS C23 GND
VSS C26 GND
VSS C29 GND
VSS C32 GND
VSS C35 GND
VSS C7 GND
VSS C8 GND
VSS D17 GND
VSS D2 GND
VSS D20 GND
VSS D23 GND
VSS D26 GND
VSS D29 GND
VSS D32 GND
VSS D37 GND
VSS D39 GND
VSS D4 GND
VSS D5 GND
VSS D9 GND
VSS E11 GND
VSS E12 GND
VSS E17 GND
VSS E20 GND
VSS E23 GND
VSS E26 GND
VSS E29 GND
VSS E32 GND
VSS E36 GND
VSS E7 GND
VSS E8 GND
VSS F1 GND
VSS F10 GND
VSS F13 GND
VSS F14 GND
VSS F17 GND
VSS F2 GND
VSS F20 GND
VSS F23 GND
VSS F26 GND
VSS F29 GND
VSS F35 GND
VSS F37 GND
VSS F39 GND
VSS F5 GND
VSS F6 GND
VSS F9 GND
VSS G11 GND
VSS G12 GND
VSS G17 GND
VSS G20 GND
VSS G23 GND
VSS G26 GND
VSS G29 GND
VSS G34 GND
VSS G7 GND
VSS G8 GND
VSS H1 GND
VSS H17 GND
VSS H2 GND
VSS H20 GND
VSS H23 GND
VSS H26 GND
VSS H29 GND
VSS H33 GND
VSS H35 GND
VSS H37 GND
VSS H39 GND
VSS H5 GND
VSS H6 GND
VSS H9 GND
VSS J11 GND
VSS J17 GND
VSS J20 GND
VSS J23 GND
VSS J26 GND
VSS J29 GND
VSS J32 GND
VSS K1 GND
VSS K12 GND
VSS K13 GND
VSS K14 GND
VSS K17 GND
VSS K2 GND
VSS K20 GND
VSS K23 GND
VSS K26 GND
VSS K29 GND
VSS K33 GND
VSS K35 GND
VSS K37 GND
VSS K39 GND
VSS K5 GND
VSS K6 GND
VSS L10 GND
VSS L17 GND
VSS L20 GND
VSS L23 GND
VSS L26 GND
VSS L29 GND
VSS L8 GND
VSS M1 GND
VSS M17 GND
VSS M2 GND
VSS M20 GND
VSS M23 GND
VSS M26 GND
VSS M29 GND
VSS M33 GND
VSS M35 GND
VSS M37 GND
VSS M39 GND
VSS M5 GND
VSS M6 GND
VSS M9 GND
VSS N8 GND
VSS P1 GND
VSS P2 GND
VSS P36 GND
VSS P38 GND
VSS P40 GND
VSS P5 GND
VSS P6 GND
VSS R33 GND
VSS R35 GND
VSS R37 GND
VSS R39 GND
VSS R8 GND
VSS T1 GND
VSS T5 GND
VSS T6 GND
VSS U8 GND
VSS V1 GND
VSS V2 GND
VSS V33 GND
VSS V34 GND
VSS V35 GND
VSS V36 GND
VSS V37 GND
VSS V38 GND
VSS V39 GND
VSS V40 GND
VSS V5 GND
VSS W6 GND
VSS Y5 GND
VSS Y8 GND
VSS_NCTF A4 GND
VSS_NCTF AV39 GND
VSS_NCTF AY37 GND
VSS_NCTF B3 GND
VSS_SENSE B36 Analog O
VSSAXG_SENSE M32 Analog O
VSSIO_SENSE AB3 Analog O
9 DDR Data Swizzling
To achieve better memory performance and timing, Intel swizzles the DDR data pins,
which allows better use of the product across different platforms. Swizzling has no
effect on functional operation and is invisible to the operating system and software.
However, swizzling must be taken into consideration during debug; the swizzling
information is therefore presented in this chapter. When placing a DIMM logic analyzer,
the design engineer must refer to the swizzling tables to debug memory efficiently.
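Note: Because the memory-controller DQ numbering differs from the package land names,
a small lookup table can help when assigning logic-analyzer probes. The sketch below is a
hypothetical debug helper seeded with the first few Channel A entries from Table 9-1; it is
not part of the datasheet and would need to be extended with the full tables for real use.

    # Hypothetical debug helper: map a memory-controller DQ name back to the
    # processor land for Channel A (entries transcribed from Table 9-1).
    MC_TO_LAND_CH_A = {
        "DQ06": ("SA_DQ[0]", "AJ3"),
        "DQ05": ("SA_DQ[1]", "AJ4"),
        "DQ01": ("SA_DQ[2]", "AL3"),
        "DQ00": ("SA_DQ[3]", "AL4"),
    }

    def land_for_mc_dq(mc_dq):
        """Return (land name, land number) for a memory-controller DQ signal."""
        return MC_TO_LAND_CH_A.get(mc_dq, ("unknown", "unknown"))

    print(land_for_mc_dq("DQ00"))   # prints: ('SA_DQ[3]', 'AL4')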
Table 9-1. DDR Data Swizzling Table – Channel A

Land Name  Land #  MC Land Name
SA_DQ[0] AJ3 DQ06
SA_DQ[1] AJ4 DQ05
SA_DQ[2] AL3 DQ01
SA_DQ[3] AL4 DQ00
SA_DQ[4] AJ2 DQ04
SA_DQ[5] AJ1 DQ07
SA_DQ[6] AL2 DQ02
SA_DQ[7] AL1 DQ03
SA_DQ[8] AN1 DQ15
SA_DQ[9] AN4 DQ12
SA_DQ[10] AR3 DQ08
SA_DQ[11] AR4 DQ09
SA_DQ[12] AN2 DQ14
SA_DQ[13] AN3 DQ13
SA_DQ[14] AR2 DQ10
SA_DQ[15] AR1 DQ11
SA_DQ[16] AV2 DQ21
SA_DQ[17] AW3 DQ20
SA_DQ[18] AV5 DQ16
SA_DQ[19] AW5 DQ19
SA_DQ[20] AU2 DQ23
SA_DQ[21] AU3 DQ22
SA_DQ[22] AU5 DQ18
SA_DQ[23] AY5 DQ17
SA_DQ[24] AY7 DQ28
SA_DQ[25] AU7 DQ30
SA_DQ[26] AV9 DQ27
SA_DQ[27] AU9 DQ26
SA_DQ[28] AV7 DQ31
SA_DQ[29] AW7 DQ29
SA_DQ[30] AW9 DQ24
SA_DQ[31] AY9 DQ25
SA_DQ[32] AU35 DQ36
SA_DQ[33] AW37 DQ37
SA_DQ[34] AU39 DQ32
SA_DQ[35] AU36 DQ33
SA_DQ[36] AW35 DQ38
SA_DQ[37] AY36 DQ39
SA_DQ[38] AU38 DQ35
SA_DQ[39] AU37 DQ34
SA_DQ[40] AR40 DQ44
SA_DQ[41] AR37 DQ45
SA_DQ[42] AN38 DQ43
SA_DQ[43] AN37 DQ42
SA_DQ[44] AR39 DQ46
SA_DQ[45] AR38 DQ47
SA_DQ[46] AN39 DQ40
SA_DQ[47] AN40 DQ41
SA_DQ[48] AL40 DQ52
SA_DQ[49] AL37 DQ55
SA_DQ[50] AJ38 DQ51
SA_DQ[51] AJ37 DQ50
SA_DQ[52] AL39 DQ54
SA_DQ[53] AL38 DQ53
SA_DQ[54] AJ39 DQ48
SA_DQ[55] AJ40 DQ49
SA_DQ[56] AG40 DQ61
SA_DQ[57] AG37 DQ63
SA_DQ[58] AE38 DQ59
SA_DQ[59] AE37 DQ58
SA_DQ[60] AG39 DQ62
SA_DQ[61] AG38 DQ60
SA_DQ[62] AE39 DQ57
SA_DQ[63] AE40 DQ56
SA_DQ[64] AU12 DQ71
SA_DQ[65] AU14 DQ66
SA_DQ[66] AW13 DQ67
SA_DQ[67] AY13 DQ65
SA_DQ[68] AU13 DQ70
SA_DQ[69] AU11 DQ69
SA_DQ[70] AY12 DQ64
SA_DQ[71] AW12 DQ68
Table 9-2. DDR Data Swizzling Table – Channel B

Land Name  Land #  MC Land Name
SB_DQ[0] AG7 DQ04
SB_DQ[1] AG8 DQ05
SB_DQ[2] AJ9 DQ02
SB_DQ[3] AJ8 DQ03
SB_DQ[4] AG5 DQ07
SB_DQ[5] AG6 DQ06
SB_DQ[6] AJ6 DQ00
SB_DQ[7] AJ7 DQ01
SB_DQ[8] AL7 DQ12
SB_DQ[9] AM7 DQ13
SB_DQ[10] AM10 DQ08
SB_DQ[11] AL10 DQ10
SB_DQ[12] AL6 DQ15
SB_DQ[13] AM6 DQ14
SB_DQ[14] AL9 DQ11
SB_DQ[15] AM9 DQ09
SB_DQ[16] AP7 DQ20
SB_DQ[17] AR7 DQ21
SB_DQ[18] AP10 DQ18
SB_DQ[19] AR10 DQ16
SB_DQ[20] AP6 DQ22
SB_DQ[21] AR6 DQ23
SB_DQ[22] AP9 DQ19
SB_DQ[23] AR9 DQ17
SB_DQ[24] AM12 DQ30
SB_DQ[25] AM13 DQ24
SB_DQ[26] AR13 DQ26
SB_DQ[27] AP13 DQ27
SB_DQ[28] AL12 DQ31
SB_DQ[29] AL13 DQ25
SB_DQ[30] AR12 DQ28
SB_DQ[31] AP12 DQ29
SB_DQ[32] AR28 DQ39
SB_DQ[33] AR29 DQ37
SB_DQ[34] AL28 DQ33
SB_DQ[35] AL29 DQ34
SB_DQ[36] AP28 DQ38
SB_DQ[37] AP29 DQ36
SB_DQ[38] AM28 DQ35
SB_DQ[39] AM29 DQ32
SB_DQ[40] AP32 DQ43
SB_DQ[41] AP31 DQ44
SB_DQ[42] AP35 DQ42
SB_DQ[43] AP34 DQ40
SB_DQ[44] AR32 DQ47
SB_DQ[45] AR31 DQ45
SB_DQ[46] AR35 DQ41
SB_DQ[47] AR34 DQ46
SB_DQ[48] AM32 DQ52
SB_DQ[49] AM31 DQ55
SB_DQ[50] AL35 DQ50
SB_DQ[51] AL32 DQ53
SB_DQ[52] AM34 DQ51
SB_DQ[53] AL31 DQ54
SB_DQ[54] AM35 DQ48
SB_DQ[55] AL34 DQ49
SB_DQ[56] AH35 DQ60
SB_DQ[57] AH34 DQ61
SB_DQ[58] AE34 DQ58
SB_DQ[59] AE35 DQ56
SB_DQ[60] AJ35 DQ62
SB_DQ[61] AJ34 DQ63
SB_DQ[62] AF33 DQ57
SB_DQ[63] AF35 DQ59
SB_DQ[64] AL16 DQ66
SB_DQ[65] AM16 DQ64
SB_DQ[66] AP16 DQ68
SB_DQ[67] AR16 DQ69
SB_DQ[68] AL15 DQ67
SB_DQ[69] AM15 DQ65
SB_DQ[70] AR15 DQ70
SB_DQ[71] AP15 DQ71