"x86" is colloquially used nowadays as a synonym for 32-bit processors in the PC space. Similarly, "x64" is used to denote 64-bit processors in the PC/Wintel space.
But 32-bit is absolutely not equivalent to x86 (likewise for 64-bit and x64), because they are very different things: one describes a word size, the other an instruction set family.
Let's go back in time: in 1978, Intel released the 8086 microprocessor. It was a 16-bit processor that could address 1 MB of memory, built to compete with the Zilog Z80 as well as upcoming 16-bit and 32-bit processors from the likes of Motorola and National Semiconductor. Intel's own 32-bit processor project, the iAPX 432, had been delayed until 1981, and the company wanted a stop-gap until that chip was finalized.
The 8086 was the first processor in a family of processors called "x86". More precisely, it was the first processor to use the x86 instruction set architecture (ISA): an ISA defines which instructions a processor can execute, what those instructions do (their semantics), and how they are encoded as bytes.
Intel would release a cost-reduced version of the 8086 called the 8088, which notably narrowed the data bus from 16 to 8 bits, reducing memory bandwidth. The 8088 offered most of the performance of the 8086 (internally it was still a 16-bit processor) and was subsequently used in the IBM PC model 5150 (the "IBM PC" for short), which set the standard for desktop PCs during the '80s and '90s, essentially forcing Intel to keep the x86 ISA alive.
From the 8086 came the 80186 and the 80286, and then the 80386 in 1985. The 386 was the first 32-bit processor in the x86 line. New 32-bit x86 designs were introduced over the years, such as the original Pentium, but in 2003 AMD (another x86 processor manufacturer) released the Opteron and Athlon 64 processor families. These were still fundamentally x86 processors, but AMD had extended the ISA to support 64-bit execution. The extended x86 was referred to as x86-64, also known as AMD64. Intel would later adopt this extension in later Pentium 4 designs (where it initially shipped disabled) and in the original Core 2 series, such as the Core 2 Duo and Core 2 Quad lines. It is still used today.
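One concrete way to see that x86-64 really is an extension of the same ISA, with some encodings repurposed, is to feed identical bytes to a disassembler in both modes. The sketch below uses the capstone Python bindings (my own choice of tool, not something from the original answer; install with pip install capstone). The byte 0x40 encodes inc eax in 32-bit mode, but AMD reassigned the 0x40-0x4F range as REX prefixes in 64-bit mode, so the same two bytes decode to different instructions.

```python
# A minimal sketch, assuming the capstone disassembler is installed
# (pip install capstone); capstone is not mentioned in the article itself.
from capstone import Cs, CS_ARCH_X86, CS_MODE_32, CS_MODE_64

code = b"\x40\x90"  # 0x40 = inc eax in 32-bit mode, a REX prefix in 64-bit mode

for mode, label in ((CS_MODE_32, "32-bit mode"), (CS_MODE_64, "64-bit mode")):
    md = Cs(CS_ARCH_X86, mode)
    print(label)
    for insn in md.disasm(code, 0x1000):
        print(f"  {bytes(insn.bytes).hex():<6} {insn.mnemonic} {insn.op_str}")
```

In 32-bit mode this prints two instructions (inc eax, then nop); in 64-bit mode the 0x40 byte is consumed as a REX prefix and the output is a single nop.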
As far as nomenclature is concerned, we have a bit of a problem: the x86 family is made up of 16-, 32-, and 64-bit processors. While officially known as x86-64, the extended version would quickly become known as x64.
To differentiate between the 16-bit and 32-bit x86 versions, the 16-bit designs would (colloquially at least) be referred to as x86-16, with the 32-bit designs simply being referred to as "x86" (Intel's own name for the 32-bit architecture is IA-32).
This has nothing to do with any deeper technical link between 32-bit computing and x86; it's just that 32-bit Intel and AMD designs have been referred to as x86 for simplicity.
Windows also shares part of the blame: 64-bit Windows separates native 64-bit programs from legacy 32-bit programs. The 64-bit software is installed into a folder called "Program Files", while the old 32-bit software gets installed into a folder called "Program Files (x86)". While not incorrect in this specific case, the naming seems to imply that 32-bit architecture = x86, which simply isn't true.
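If you want to see what naming your own machine uses, here is a small Python sketch (my own illustration, not something from the original answer). It prints the pointer width of the running process and the operating system's name for the CPU architecture; in practice you will see labels like AMD64 or x86_64, never literally "x64".

```python
# A minimal sketch (my own illustration): report what this system
# actually calls its architecture, using only the standard library.
import platform
import struct

# Pointer width tells you whether this process is 32-bit or 64-bit.
print(f"{struct.calcsize('P') * 8}-bit process")

# platform.machine() reports the OS's name for the CPU architecture:
# typically 'AMD64' on 64-bit Windows, 'x86_64' on Linux/macOS, and
# 'i686' or 'i386' on 32-bit x86 systems.
print(platform.machine())
```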
Source:
https://www.quora.com/Why-is-32-bit-called-x86-while-64-bit-called-x64/answers/144463864