I remember fondly the AMD K6/2 architecture. It was the CPU of an ultra-budget-priced Compaq Presario laptop that got me through graduate school back in the day.
Some years later, back in my home country (Paraguay), I met a lady who had a side business as a VAR builder of desktop PCs. In my country, due to a lot of constraints, there was (and is) quite a money crunch, and people tried to spend as little as possible when purchasing computers. This gave rise to a lot of unscrupulous VAR resellers who built ultra-low-quality, underpowered PCs with almost unusable specs at an attractive price while making a pretty profit. You could still get much better deals in both price and specs, but you had to have an idea of where to look.
Well, back to this lady. She said that during the early 2000s she was in the same line of business, selling beige-box desktop PCs at the lowest possible prices. But she said that she loved the AMD K6 and K6/2 architectures because they provided considerable bang for the buck. The cost was affordable, and yet performance was good. Add reasonable amounts of RAM and storage and you could have a well-performing PC at a good price. The downside, as she said, was that the processors tended to generate lots of heat, so the fans had to be good. This was especially important in a very hot country like Paraguay. But the bottom line was that the AMD K6 line enabled her to offer customers a good deal.
This made me appreciate what AMD did with K6. They really helped to bring good computers to the masses.
Those sellers never disappeared; although I'm not from Paraguay, the situation is familiar. These days they're selling desktops built on 10+ year old Xeons, which you can buy dirt cheap on AliExpress, installed in Frankenstein motherboards from no-name Chinese manufacturers that are desktop-oriented but take server processors. The graphics card is something old like an RX 480 that was run into the ground by years of crypto"currency" mining, then resoldered onto a new board, often also designed by Chinese manufacturers you've never heard of.
Graphics cards especially are very unreliable and frequently die within a few months of purchase. But when you can buy a whole PC for the price of one modern videocard, many don't have a choice.
The notion that GPU chips can be "run into the ground" by years of crypto mining or AI workloads has been debunked pretty thoroughly by now. The hardware is quite resilient; it doesn't really fail at a higher rate.
I bought an XFX RX 580 that had been used for cryptomining from eBay, used it for 3 years, then handed it down to my son, who used it for 2 more. It still worked when he removed it. Can confirm.
Also, any miner worth their salt knows to undervolt to save power (= money), run cooler, and last longer, while still running very close to full speed, or in some cases at 100% full speed, depending on the silicon "lottery".
Well, a lot of people here would have loved to have 10-year-old Xeons in their motherboards; while power hungry, I guess they would make good CPUs since they have good cache sizes. But no, there are no Xeons in our offerings here. What people get here now are Intel Pentium- and Celeron-branded CPUs, or N-class CPUs, with the onboard GPU only, 4GB of RAM, and a 1TB HDD running unlicensed Windows, with understandable results. But when you are a digitally illiterate parent seeking to purchase a first PC for your school-age children, this looks attractive enough at a good price point.
Don't look at the branding. Look at the core type, count, and speed (maybe).
It's been a while since I shopped Intel, but they used to typically release a low-core-count, lower-clock-speed Pentium/Celeron on the mainstream cores, often with no hyperthreading. These were typically low cost and could be a good value: you'd get decent single-core performance because it's the newest architecture, and multicore performance would be iffy, but you can't have everything.
> N-class CPUs
These are definitely worth avoiding most of the time. Usually twice the cores, but much less performance per clock; they never feel fast for interactive work. But they make sense in some situations. Some of these get an n3 branding to trick people looking for i3s.
> These are definitely worth avoiding most of the time.
They may not be ideal for desktops, but they are great low power home server CPUs. In fact, they are much better than ARM alternatives like Raspberry Pis for the money.
I too remember the family of K6 chips and their Super Socket 7 motherboards. They were cheap, and they allowed CPU upgrades on classic Socket 7 motherboards.
The peak of Super Socket 7 performance CPUs was reached when AMD released the "+" versions of those chips, the K6-2+ and K6-3+. Those were initially designed for laptops, with lower power consumption and an enhanced instruction set, but they quickly became common in typical overclocker setups.
I got myself a K6-3+ that I was able to overclock to around 600MHz, probably on an ASUS motherboard.
Back then AMD was fighting so much to get marketshare that you could order for free all types of merchandising from AMD like posters, stickers and CPU badges, and they would even ship it for free from US to Europe. I remember always bringing some to hacker meetings.
I happen to have one of those 600MHz chips on my bench currently! It's a K6-2+ that has had the remaining 128KB cache unlocked, making it a K6-3+. It is indeed a speedy chip, performing somewhere above a Pentium II-450 according to Speedsys.
Do you recall how long you used the platform or your next upgrade choice? :)
K6 was great at everything other than FP. Unfortunately for AMD, a year before its launch id released Quake, for which the primary metric of performance was basically "how fast are you at FP". And Quake very rapidly became the common benchmark against which CPU performance was measured.
The Pentium III does sound like a good chip choice for a retro cyberpunk story. Like they said in “The Matrix”, 1999 was the peak of human civilization.
(586 became Pentium, so 686 would be the Pentium Pro/II microarchitecture.)
The cartridge-style packaging of the Pentium II/III’s was also peak for the lineup.
My favorite PC I ever built was a dual-CPU Tyan motherboard that eventually held two screaming fast Coppermines. Needed a university copy of Windows 2000 to really make them sing—the Windows 95 series never supported SMP—and it was glorious.
I remember, 1999/2000-ish, I had both a Pentium III/Intel-motherboard PC and an Athlon PC. The Pentium III system was rock solid and performed fantastically. Even the CPU and motherboard looked amazing.
The Athlon performed well but was less reliable, with various reboots and glitches. I've kind of always had a preference for Intel since then.
A lot of that probably came down to the motherboard chipset. IIRC Intel made their own chipsets for the Pentium III and they were good and reliable. Athlons were coupled with chipsets from VIA and whatnot.
Some of those chipsets were fine and others were less reliable or compatible. The quality of the drivers for each chipset may also have mattered.
Ah, the Pentium, aka the "5-ium" due to the penta- prefix. It's actually just the nod from 4 to 5, but Intel wanted some cool name and decided penta + premium would sound cool, hence Pentium.
But still, internally we call it i586, because that's the way it is. The same goes for the Pentium MMX, which I reckon is called i686.
Yes, and Intel had actually lost a legal attempt to stop people from using the numbers (I can't remember if this was earlier, in the 486 era, or if it was something they tried first with the 586).
But marketing was a large part of the reason they started caring so much at that particular time. The Pentium line was the first time Intel had marketed directly to end users¹², in part as a response to alt-486 manufacturers (AMD, Cyrix) doing the same with their products, like clock-doubled units compatible with standard 486/487-targeted sockets (which were cheaper and, depending on workload, faster than the Intel upgrade options).
--------
[1] this was the era that “Intel Inside (di dum di dum)” initially came from
[2] that was also why the FDIV bug was such a big thing despite processor bugs³ being, within the industry, an accepted part of the complex design and manufacturing process
[3] for a few earlier examples: a 486 FPU bug that resulted in what should have been errors (such as overflow) being silently ignored; another (more serious) one in trig functions that resulted in a partial recall, with the rest of that line being marked down as 486SX units (with a pin removed to effectively disable the FPU); similarly, an entire stepping of 386 chips ended up sold as "for 16-bit code only"; and going further back into the 8-bit days, some versions of the 6502 had a bug or two in handling things (jump instructions, loading via relative references) that straddled page boundaries, which were mitigated in code by being careful with code/data alignment (no recalls, just errata published)
I remember when IBM was upset that various companies were calling their 80286 computers "<Brandname> AT" like the IBM AT ("advanced technology"). But you can't trademark a preposition!
If I am correct, the Pentium Pro was the first "out of order" design. It specialized in 32-bit code, and did not handle 16-bit code very well.
The original Pentium I believe introduced a second pipeline that required a compiler to optimize for it to achieve maximum performance.
AMD actually made successful CPUs based on Berkeley RISC, similar to SPARC (they used register windows). The AMD K5 had this RISC CPU at its core. AMD bought NexGen and improved their RISC design for the K6 then Athlon.
Because of the branding change, history will remember the Pentium (P5). It was really the Pentium Pro (P6) that put Intel leaps ahead on x86 microarchitecture, a lead they'd hold with only a few minor stumbles for two decades.
Bob Colwell (mentioned elsewhere ITT) wrote a fascinating technical history of the P6: The Pentium Chronicles.
The major stumble being having to cross-license the x86-64 opcode design from AMD, thus ensuring at least two players in the field (and, given how it's going, only two).
They also started to slip behind AMD in the Pentium 4/NetBurst era, but got their footing back with Core (a more direct descendant of the P6 than the Pentium 4!)
Around the same time, but I’d classify as separate stumbles.
I'm really not sure if POWER1 and PowerPC 603 should be counted as OoO or not.
It's certainly not the same kind of OoO. They had register renaming¹, but only enough storage for a few renamed registers. And they didn't have any kind of scheduler.
The lack of a scheduler meant execution units still executed all instructions in program order. The only way you could get out-of-order execution is when instructions went down different pipelines. A floating point instruction could finish execution before a previous integer instruction even started, but you could never execute two floating point instructions Out-of-Order. Or two memory instructions, or two integer instructions.
While the Pentium Pro had a full scheduler. Any instruction within the 40 μop reorder buffer could theoretically execute in any order, depending on when their dependencies were available.
Even on the later PowerPCs (like the 604) that could reorder instructions within an execution unit, the scheduling was still very limited. There was only a two-entry reservation station in front of each execution unit, and it would pick whichever entry was ready (and oldest). One entry could hold a blocked instruction for quite a while, while many later instructions passed it through the second entry.
And this two-entry reservation station scheme didn't even seem to work. The later PowerPC 750 (aka G3) and 7400 (aka G4) went back to single-entry reservation stations on every execution unit except the load-store units (which stuck with two entries).
It's not until the PowerPC 970 (aka G5) that we see a PowerPC design with substantial reordering capabilities.
¹ well, on the PowerPC 603, only the FPU had register renaming, but the POWER1 and all later PowerPCs had integer register renaming
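The difference described above (a machine with no scheduler, where ordering only happens across pipelines, versus a full scheduler like the Pentium Pro's) can be sketched with a toy timing model. Everything here is invented for illustration: the instruction tuples, the single execution unit per type, and the assumption that units are fully pipelined; it is not a faithful model of either CPU.

```python
# Toy timing model. In "per-pipe in-order" mode, each execution unit
# issues its own instructions strictly in program order (early PowerPC
# style, no scheduler); in full-scheduler mode, any pending instruction
# may issue once its inputs are ready (Pentium Pro style; the 40-entry
# ROB limit is ignored here). Instructions are invented tuples:
#   (name, unit, deps, latency)
# where deps names earlier instructions whose results are needed.

def schedule(prog, per_pipe_in_order):
    done = {}                      # name -> cycle its result completes
    pending = list(range(len(prog)))
    cycle = 0
    while pending:
        cycle += 1
        issued = []
        busy = set()               # units issued (or blocked) this cycle
        for i in pending:          # pending stays in program order
            name, unit, deps, lat = prog[i]
            if unit in busy:
                continue
            if any(d not in done or done[d] >= cycle for d in deps):
                if per_pipe_in_order:
                    busy.add(unit)  # a stalled op blocks younger ops in its unit
                continue
            busy.add(unit)          # each unit issues one op per cycle
            done[name] = cycle + lat - 1
            issued.append(i)
        for i in issued:
            pending.remove(i)
    return done

# a: int op; b: 3-cycle fp op needing a's result; c: independent fp op
prog = [("a", "int", [], 1), ("b", "fp", ["a"], 3), ("c", "fp", [], 1)]
print(schedule(prog, per_pipe_in_order=True))   # c stuck behind b until cycle 3
print(schedule(prog, per_pipe_in_order=False))  # c completes in cycle 1
```

In the per-pipe model, `c` can never pass the older blocked `b` in the same unit, which is exactly the limitation the two-entry reservation stations were meant to soften.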
Interesting; apparently it did scoreboarding like the CDC 6600 and allowed multiple memory loads in flight, but I can't find a definite statement on whether it did renaming (i.e. whether writes to the same register stalled). It might not be OoO per the modern definition, but it's also not a fully in-order design.
> The original Pentium I believe introduced a second pipeline that required a compiler to optimize for it to achieve maximum performance.
It wasn't a full pipeline, but large parts of the integer ALU and related circuitry were duplicated so that complex (time-consuming) instructions like multiply could directly follow each other without causing a pipeline bubble. Things were still essentially executed entirely in-order but the second MUL (or similar) could start before the first was complete, if it didn't depend upon the result of the first, and the Pentium line had a deeper pipeline than previous Intel chips to take most advantage of this.
The compiler optimisations, and similar manual code changes when the compiler wasn't bright enough, were to reduce the occurrence of instructions depending on the results of the instructions immediately before them, which would bring the pipeline bubble back, as the subsequent instructions couldn't be started until the current one was complete. This was also a time when branch prediction became a major concern, and further compiler optimisations (and manual coding tricks) were used to help here too, because aborting a deep pipeline because of a branch (or just stalling the pipeline at the conditional branch point until the decision is made) carries quite a performance cost.
The Pentium was not just pipelined but also superscalar; it had two pipelines (U and V). U implemented all instructions, V only implemented a subset of simpler ones, and only when using simple (prefix-less) encodings.
As the CPU was not out of order, to execute two instructions per clock you had to pair them so that the second one was simple and did not use the output of the first. Existing code and most compilers of the time were generally bad at this, but things like inner render loops in games could make a lot of use of it if you wrote them in assembly.
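The pairing rule above can be sketched as a tiny issue-counter. The instruction format `(dest, srcs, simple)` is invented for illustration ("simple" marks instructions the V pipe can take), and real pairing rules had more conditions than this:

```python
# Minimal sketch of Pentium U/V pairing: each cycle the U pipe takes
# the next instruction, and the V pipe also takes the following one,
# but only if it is "simple" and doesn't read the first instruction's
# destination register.

def count_cycles(prog):
    cycles, i = 0, 0
    while i < len(prog):
        cycles += 1
        dest, _srcs, _simple = prog[i]
        if i + 1 < len(prog):
            _d2, srcs2, simple2 = prog[i + 1]
            if simple2 and dest not in srcs2:
                i += 2             # paired: both issue this cycle
                continue
        i += 1                     # U pipe issues alone
    return cycles

# A dependent chain never pairs; interleaved independent work does.
chain = [("ax", ["bx"], True), ("cx", ["ax"], True),
         ("dx", ["cx"], True), ("si", ["dx"], True)]
mixed = [("ax", ["bx"], True), ("cx", ["dx"], True),
         ("si", ["di"], True), ("bp", ["sp"], True)]
print(count_cycles(chain))  # 4 cycles for 4 instructions
print(count_cycles(mixed))  # 2 cycles for 4 instructions
```

This is the scheduling work the hardware couldn't do for you: interleaving independent operations by hand (or via the compiler) is what doubled throughput.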
Por qué no los dos? (Why not both?) If "-ium" makes nerds think of an element name, and others of a premium product, all the better. I'd bet both of these interpretations were listed in the original internal marketing presentation of the name...
> but Intel wanted some cool name, and they decided penta + premium would sound cool, hence pentium
some say that they tried to add 486 and 100, and the result had some digits after the decimal point; that's why they named it Pentium (yes, I know about the FDIV bug)
I remember a 120MHz Pentium Linux box arriving at a cottage in Crete, where, with the aid of a 56k USRobotics modem, we (my wife and I) worked remotely in 1995-6. She had a Mac SE/30 for her tourist guidebook work. She later upgraded to a 6100 PowerMac "pizza-box", various iMacs, G3/G4/G5, whereas I saved a quad-200MHz PentiumPro monster (Compaq Professional Workstation 8000, tricked up to 3GB RAM) from the skip. I regret taking that to the recycling centre many years later.
I well remember the 486SX/2-66's and how terrible they were. I liked to say that Compaq put the "sorry" in Presario.
In the late 90's, between around '96 and '98, I made good money building AMD 486 DX/4 133s. Those things were blindingly fast for the price. As I recall, there was even a 150MHz variant.
Still, my favorite CPU of all time remains the AMD K6/2-450. It wasn't until the Phenom II BE950, a dual core that I unlocked to quad core, that I felt I had a CPU that matched the K6/2-450 in value. Since then I've had a couple of Ryzens as my daily driver/work machine, and I couldn't be happier. AMD has done a fantastic job keeping price and performance in tune. But it goes even further if you shop smartly.
Overall, this was an excellent read, and it brought back a lot of memories. The 6x86, for example: too much promise for what they actually delivered. And thanks to this article I now know why so many cheap motherboards had their CPUs soldered. It wasn't a technology decision but a legal one. I had no idea of that at the time.
I had a K6/2 back in the day. There was a Pentium 4 machine and then a refurbed HP desktop with a Phenom II X4 in it. I used that until I (perhaps finally) built a machine with a Ryzen 1700x.
I'm not sure where or why I have so many AM4 machines around, but my kids are still playing games fine on machines with a 1st and 2nd gen Ryzen in them.
I just upgraded another to a Ryzen 5 5500. I plan to get a few more years out of it.
The bang for my buck has been pretty high. I don't believe CPUs go obsolete immediately like they used to.
I had an AMD 5x86 @ 133MHz that lasted me several years... I had the secondary cache module and 64MB of RAM, which was a lot for the time. It wasn't great for gaming, but for general productivity software and simple web browsing it was kind of a beast, thanks to the RAM/cache performance. The next computer I really remember was an OC'd AMD Duron at 1GHz.
I bought an AMD Sempron "single core" CPU for a pittance (can't remember now how much it was, but it was the cheapest new CPU available) and was able to unlock the second core in the BIOS immediately with no issues, saving a ton of money. Those were the days.
Oh man, my first custom build had a Phenom II 955 BE. Overclocked beautifully, I loved that system. Got it in a barebones kit from good old TigerDirect, RIP
Fellow AMD fan here—it wasn't that long ago that I finally relinquished the old ABIT motherboard that overclocked my AMD badboy to eke out extra cycles for my DAW.
Another interesting episode "after the 486" was the switch from 32-bit to 64-bit, where Intel wanted to bury the ghost of the 8086 once and for all and switched to a completely new architecture (https://en.wikipedia.org/wiki/IA-64), while AMD opted to extend the x86 architecture (https://en.wikipedia.org/wiki/X86-64). This was probably the first time that customers voted with their feet against Intel in a major way. The Itanium CPUs with the new architecture were quickly rechristened "Itanic", and Intel grudgingly had to switch to AMD's instruction set; that's the reason the instruction set still used by all current "x86" CPUs is often referred to as AMD64.
What I find interesting is that Intel engineers actually designed their own 64-bit extension, somewhere along the same lines as AMD64.
Intel's marketing department threw a fit, they didn't want the Pentium 4 competing with their flagship Itanium. Bob Colwell was directly ordered to remove the 64-bit functionality.
Which he kind of did, kind of didn't. The functionality was still there, but fused off when NetBurst shipped.
If it wasn't for AMD beating them to market with AMD64, Intel would have probably eventually allowed their engineers to enable the 64-bit extension. And when it did come time to add AMD64 support to the Pentium 4 (later Prescott and Cedar Mill models) the existing 64-bit support probably made for a good starting point.
Around the time the K8 was released, I remember reading official Intel roadmaps aimed at normal people, and they essentially planned that, for at least a few more years, they would segment the market into increasingly consumer-only 32-bit parts and IA-64 on the higher end.
They were trying to compete with Sun and IBM in the server space (SPARC and Power) and thought that they needed a totally pro architecture (which Itanium was). The baggage of 32-bit x86 would have just slowed it all down. However having an x86-64 would have confused customers in the middle.
I think back then it was all about massive databases; that was where the big money was, and x86 wasn't really set up for the top-end load patterns of databases (or OLAP data lakes).
In the end, Intel did cannibalize themselves. It wasn’t too long after the Itanium launch that Intel was publicly presenting a roadmap that had Xeons as the appealing mass-market server product.
Yeah they actually survived quite well. Who knows how much they put into Itanium but in the end they did pull the plug and Xeons dominated the market for years.
They even had a chance with mobile chips using Atom, but ARM was too compelling, and I think Apple was sick of the Intel dependency, so when there was an opportunity in the mobile space to not be so deeply tied to Intel, they took it.
> and thought that they needed a totally pro architecture (which Itanium was).
Was it, though? They made a new CPU from scratch, promising to replace Alpha, PA-RISC and MIPS, but the first release was a flop.
The only "win" of Itanium that I see is that it eliminated some competitors in the low- and mid-range server market: MIPS and PA-RISC, with SPARC left on life support.
Compaq's deep and close relationship with Intel meant that it also killed off Alpha, which, unlike MIPS and PA-RISC, wasn't going out by itself. (Itanium was explicitly meant to be the PA-RISC replacement; in fact it started as one, while SGI had issues with MIPS. SPARC was reeling from the radioactive cache scandal at the time, but wasn't in as bad a condition as MIPS, AFAIK.)
I never used them, but my understanding is that the performance was solid. In a market with incumbents, though, you don't just need to be as good as them; you need to be significantly better or significantly cheaper. My sense was that it met expectations but that that wasn't enough for people to switch over.
> It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.
RISC was absolutely dominating the server/workstation space; this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into that space, and they knew they needed a high-end RISC CPU. It was kind of general knowledge in the computer world at the time that RISC was the future.
Well, TBH it wasn't all FUD - hanging on to x86 eventually (much later) came back to bite them when x86 CPUs weren't competitive for tablets and smartphones, leading to Apple developing their own ARM-based RISC CPUs (which run circles around the previous x86 CPUs) and dumping Intel altogether.
It is interesting how so much of the speculation in those days was about how x86 was a dead end because it couldn’t scale up, but the real issue ended up being that it didn’t scale down.
Well, it turns out that it could scale up, it just needed more power than other architectures. As long as it was only servers and desktop PCs, you only noticed it in more elaborate cooling and maybe on your power bill, and even with laptops, x86 compatibility was more important than the higher power usage for a long time. It's just when high-performance CPUs started to be put in devices with really limited power budgets that x86 started looking really bad...
If this is true or not I don't know, but I worked on a project with an HP employee and we were talking about the Itanium. At some point the HP guy goes "You know we more or less designed that thing, right?"
I would tend to believe that the Itanium is an HP product, given that they always seemed more invested in the platform than Intel.
Yes, it was originally designed as a successor to HP PA-RISC and then brought to Intel. I don't know how much it changed from the original design during development at Intel.
I remember those Cyrix chips well. We had a little shop where we would assemble boxes to spec. And hey, a 486 is a 486, we reasoned. They were cheap, ran cool, and just about as fast as the others.
Nice nostalgic piece. I have a lot of fond "favorite chip" memories from that era.
Not to be too pedantic, but I would contend that at the time it was pretty clear to enthusiasts what the differences were. Everyone in the industry was paying attention to 486s and the cost of a genuine Intel chip. The FDIV bug was on the evening news for weeks. AMD-and-Cyrix-versus-Intel debates were common.
I agree that it is not obvious now that Pentium came after 486, but at the time, it was clear.
I remember not believing my friend when he said that the OS and the games were inside the computer and didn't need to be loaded up via a floppy disk. That was my first time seeing a hard drive.
My very first hard drive was for an Apple //gs. It was the size of a shoebox and was a whopping 5 megabytes. I was an Egghead employee at the time and after my employee discount, it was something like $600-$700.
The years when the Pentium came out were a bit of a shitshow. As the article said, there were 7 companies producing 486 processors, but after that the market was mostly Intel, AMD and a little Cyrix. Then came Socket A vs. Slot A, etc. Looking back now, it seems like there were a lot of changes in a short period of time.
Things started progressing so fast in the mid-nineties that a brand new top-of-the-line computer was being matched in performance by low-end offerings 2 years later. That lasted up to late 2000.
December 1998 $85 Celeron 300A handily beating June 97 $594 Pentium 233 MMX, not to mention overclocked one matching 1998 $621-824 Pentium 2s.
> December 1998 $85 Celeron 300A handily beating June 97 $594 Pentium 233 MMX, not to mention overclocked one matching 1998 $621-824 Pentium 2s.
Ah, I remember the days of Intel's fabs doing "too good" a job, producing many more chips that passed the tests for faster use than expected. To fulfil orders for the slower chips, some of these better batches were marked down and sold as slower units, so if you were lucky you could really push the overclocking and get yourself a performance bargain. You also needed a good motherboard and quality RAM to pull it off reliably, of course.
"Sillyrons" is what we used to call the massively overclocked Celerons. At uni, a friend of mine made a good bit of pocket money selling an optimisation service for people who didn't feel confident playing with such settings themselves.
It wasn't just that. Mendocino wasn't a low bin; it was an entirely separate die from the Pentium II. At the time, the Pentium II used a 512kB off-die L2 cache running at half the CPU clock. To save on costs, the Celeron 300A moved to a 128kB L2 that was integrated on the CPU die and ran at the full CPU clock.
And it turns out that for a lot of software, a smaller but faster L2 was actually better than the bigger one. And because there were no fast products that used the Mendocino die, even the fastest of them were sold as Celerons. The 300A was particularly nice because very nearly all of them could run at 450, and 100MHz FSB motherboards were widely available to pair with the CPU's fixed 4.5 multiplier.
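The arithmetic behind that overclock, for anyone checking: with the multiplier locked at 4.5, the core clock is simply FSB times 4.5 (66.6 here stands in for the nominal 66MHz-class stock FSB).

```python
# Core clock = FSB x multiplier; the 300A's multiplier was fixed at 4.5.
mult = 4.5
stock = 66.6 * mult        # stock 66MHz-class FSB -> ~300 MHz
overclocked = 100 * mult   # 100MHz FSB board -> 450 MHz
print(round(stock))        # 300
print(overclocked)         # 450.0
```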
Fun thing: with a tiny bit of manipulation you could run a P3 Tualatin at 1.33GHz, via a slot adapter, some pin disablement, and some voltage mods (or, if you had the right adapter, a jumper), in a motherboard that came with a low-tier P2 or even earlier. So without replacing your very-early-1998 Asus P2B until well into the mid-00s, with astonishing performance gains, that motherboard had a massive lifespan in the right hands. Mine is still running to this day, with a new voltage regulator.
Better: throwing two of those Celeron 300As into a modded dual-socket MB... I remember running one of those OC'd Durons at 1.1GHz; I had a custom shim to better support the CPU cooler with it. IIRC, I ran it with a Voodoo 3 card and maxed-out RAM. Had 2x IBM Deathstar drives in RAID 0... that was the big mistake: crazy fast, but the first drive died, and then the second before I could get an RMA on the first. Only the 3TB Seagate line was worse.
On the other hand, not being hopelessly outdated in a relatively short time does have perks. It's cheaper to not have to upgrade constantly while still getting decent performance.
Pentium marketing was next level. You could buy plushies of Intel workers in bunny suits. The first IMAX movie I went to was called "The Journey Inside", and it was basically a big ad for the Pentium.
I always wondered if some of that was to offset the negative publicity from the FDIV bug in the early Pentiums.
I have a bunny-suit plushie on my shelf to this day. The other Pentium marketing blitz I remember was in the 1998 Lost in Space, which had a TV ad for a Pentium XXI or something. Also notable was the Silicon Graphics branding in that movie, which I have always found amusing since SGI didn't have any consumer products, and even for businesses the prices were "Call Us", which has always meant eye-wateringly expensive.
You then had the 486DLCs, which were even worse: you'd get companies that sold 386 and even 286 systems with "486" chips, constrained by slow 16-bit buses, etc.
The 486DLC was fully 32 bit. The 286/386SX-like model with a 16 bit bus was the 486SLC.
But, even a 486SLC wasn't all that bad at the time. It was still much faster than a 386DX for many things (DOOM, for example).
These AT-like machines limited users to 16-bit ISA cards for expansion and a 24-bit address bus (16 MB of RAM). But how many consumer PCs used more than that, back when your 32-bit bus options were VLB (video only), MCA (IBM only), or EISA (expensive servers/workstations only)?
I'll always have a fond spot in my heart for my cheap Cyrix CPU. Once I was near finishing an important project in the middle of the night and my CPU fan died. The cyrix chip would overheat in no time and shut down so I ended up filling a coffee mug with ice and jamming it up against the chip, giving me more precious minutes before it got hot enough to shut down again. I would swap out the ice in the mug and give it another go. I got that project done. :)
I was a poor kid building computers in the mid-to-late 90's. I tried everything I could NOT to use a true Pentium. My first build (coming from an upgraded Compaq 386DX) was an AMD 486 "DX4". I had a Diamond Stealth PCI VGA card and 16MB of DRAM. After that I tried a 233MHz Cyrix 6x86. That chip was garbage; I had to run some software Pentium emulation to get Cubase to run. I went with a 300MHz Celeron after that. That was my first time trying the new SDRAM! After that I FINALLY got a legit 400MHz Pentium III! I could go on and on, as this is a lovely walk down memory lane, and there have been some fun dips back into AMD Athlon/Ryzen/etc.
> I tried a 233Mhz Cyrix 6x86. That chip was garbage.
Those chips were excellent value for mostly-integer work, but had incredibly poor floating-point performance, which was a problem for gamers as the 3D era was really getting going around that time. I had one; it did me good service for a few years.
Yeah, I was all about recording music and running the first iteration of software synths. I was a Graphic Design major at that time, so Photoshop/Illustrator/QuarkXPress were my jam. Those surprisingly didn't run that badly; in real graphic design, no one used Eye Candy (the reason everything on the web in 1998 had drop shadows, outer glows, and lens flares), so rendering "3D" rarely came into play.
They were practically overclocked coming out of the factory, so if you had any issue at all with cooling, or attempted to clock them up even higher, they would be unstable.
I remember I had a Cyrix 6x86 in one of my first computers, running Windows 95. It was terrible. When I tried to play games or run apps with non-trivial performance requirements (and, I suppose, ones that used floating-point operations extensively, perhaps MMX or some other proprietary extensions), the apps would just die, often taking the entire operating system with them.
Fun fact: Bonnell Atoms (D510, etc.) were not affected by the Meltdown vulnerability that plagued Intel's out-of-order designs going back to the 1995 Pentium Pro. These Atoms use purely in-order execution engines, which kinda makes them supercharged 486s.
The Pentium was the first superscalar x86 from Intel, but it was still in-order. The Pentium Pro (a completely different microarchitecture) was Intel's first OoO x86 microarchitecture.
Second only to getting an SSD in my 2006-vintage laptop, the AMD 5x86 was the best upgrade I ever put in a computer, dollar for dollar. It made my 486SX-25 run like a 75MHz+ Pentium. Linux kernel builds flew after that. I also fondly remember seeing little bits of "Second Reality" run better or show me things I'd never seen before (more of the sword extending further from the pool, as an example).
I never felt during this era that the information about these chips was hard to come by as the author claims. Retrospectively I appreciate that’s because I grew up living by a large, well funded library in a tech centric town, so they always had all the latest tech publications.
I still keep the metal case of my first 486, with its 66MHz CPU and Windows 3.11 installed, and remember that the Pentium 100MHz was what people were buying next.
Oof, I'm old enough to remember the Cyrix chips. They were cheaper than the other players, which lured in a lot of people who then regretted it once they realised they were constantly crashing hot garbage. Good times.
Graphics cards especially are very unreliable and frequently die within a few months of purchase. But when you can buy a whole PC for the price of one modern videocard, many don't have a choice.
https://aliexpress.com/w/wholesale-intel-xeon-processors.htm...
https://aliexpress.com/w/wholesale-motherboards-xeon.html
https://aliexpress.us/w/wholesale-amd-radeon-rx-580.html
Don't look at the branding. Look at the core type, count, and speed (maybe).
It's been a while since I shopped Intel, but they used to typically release a low-core-count, lower-clock-speed Pentium/Celeron on the mainstream cores, often with no hyperthreading. These were typically low cost and could be a good value: you'd get decent single-core performance because it's the newest architecture, and multicore performance would be iffy, but you can't have everything.
> N-class CPUs
These are definitely worth avoiding most of the time. Usually twice the cores, but much less performance per clock. Never feels fast for interactive work. But they make sense for some situations. Some of these get an n3 branding to trick people looking for i3s.
They may not be ideal for desktops, but they are great low power home server CPUs. In fact, they are much better than ARM alternatives like Raspberry Pis for the money.
The peak of Super Socket 7 performance CPUs was reached when AMD released the + versions of those chips, the K6-2+ and K6-3+. Those were initially designed for laptops, with lower power consumption and an enhanced instruction set, but they quickly became common in typical overclockers' setups.
I got myself a K6-3+ that I was able to overclock to around 600MHz, probably on an ASUS motherboard.
Back then AMD was fighting so hard to get market share that you could order all types of merchandise from AMD for free, like posters, stickers and CPU badges, and they would even ship it for free from the US to Europe. I remember always bringing some to hacker meetings.
Do you recall how long you used the platform or your next upgrade choice? :)
(586 became Pentium, so 686 would be the Pentium Pro/II microarchitecture.)
My favorite PC I ever built was a dual-CPU Tyan motherboard that eventually held two screaming fast Coppermines. Needed a university copy of Windows 2000 to really make them sing—the Windows 95 series never supported SMP—and it was glorious.
The Athlon was solid but less reliable, with various reboots and glitches. I've kind of always had a preference for Intel since then.
Some of those chipsets were fine and others were less reliable or compatible. The quality of the drivers for each chipset may also have mattered.
RISC architecture is gonna change everything.
But still, internally we call it i586, because that's the way it is. Same for the Pentium MMX, which I reckon is called i686.
> The name invoked the number five, but was completely trademarkable, unlike the number 586.
But marketing was a large part of the reason that they started caring so much at that particular time. The Pentium line was the first time Intel had marketed directly to the end users¹² in part as a response to alt-486 manufacturers (AMD, Cyrix) doing the same with their products⁴ like clock-doubled units compatible with standard 486/487-targeted sockets (which were cheaper and, depending on workload, faster than the Intel upgrade options).
--------
[1] this was the era that “Intel Inside (di dum di dum)” initially came from
[2] that was also why the FDIV bug was such a big thing despite processor bugs³ being, within the industry, an accepted part of the complex design and manufacturing process
[3] For a few earlier examples: a 486 FPU bug that resulted in what should have been errors (such as overflow) being silently ignored; another (more serious) one in trig functions that resulted in a partial recall, with the rest of that line being marked down as 486SX units (with a pin removed to effectively disable the FPU); similarly, an entire stepping of 386 chips ended up sold as “for 16 bit code only”; and going further back into the 8-bit days, some versions of the 6502 had a bug or two in handling things (jump instructions, loading via relative references) that straddled page boundaries, which were mitigated in code by being careful with code/data alignment (no recalls, just errata published).
The original Pentium, I believe, introduced a second pipeline, which required compiler optimization to achieve maximum performance.
AMD actually made successful CPUs based on Berkeley RISC, similar to SPARC (they used register windows). The AMD K5 had this RISC CPU at its core. AMD bought NexGen and improved their RISC design for the K6 then Athlon.
Bob Colwell (mentioned elsewhere ITT) wrote a fascinating technical history of the P6: The Pentium Chronicles.
Around the same time, but I’d classify as separate stumbles.
"Some companies, notably Dell, remained Intel-only well into the 21st century,"
Dell was receiving $1 billion a year in bribes from Intel https://247wallst.com/consumer-electronics/2007/02/02/michea...
"The documents filed in District Court claim that there were $1 billion in kickbacks and payments."
That was the only way to make the big boys plunge into the Pentium 4 with its Rambus fiasco.
It's certainly not the same kind of OoO. They had register renaming¹, but only enough storage for a few renamed registers, and they didn't have any kind of scheduler.
The lack of a scheduler meant execution units still executed all instructions in program order. The only way you could get out-of-order execution was when instructions went down different pipelines. A floating-point instruction could finish execution before a previous integer instruction even started, but you could never execute two floating-point instructions out of order. Or two memory instructions, or two integer instructions.
While the Pentium Pro had a full scheduler. Any instruction within the 40 μop reorder buffer could theoretically execute in any order, depending on when their dependencies were available.
Even on the later PowerPCs (like the 604) that could reorder instructions within an execution unit, the scheduling was still very limited. There was only a two-entry reservation station in front of each execution unit, and it would pick whichever entry was ready (and oldest). One entry could hold a blocked instruction for quite a while as many later instructions passed it by through the second entry.
And this two-entry reservation station scheme didn't even seem to work out. The later PowerPC 750 (aka G3) and 7400 (aka G4) went back to single-entry reservation stations on every execution unit except the load-store units (which stuck with two entries).
It's not until the PowerPC 970 (aka G5) that we see a PowerPC design with substantial reordering capabilities.
¹ Well, on the PowerPC 603, only the FPU had register renaming, but the POWER1 and all later PowerPCs had integer register renaming
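The two-entry reservation station behaviour described above can be sketched in a toy model (heavily simplified: one execution unit, one cycle per instruction, ignoring dispatch width and execution latency; the instruction names and ready times are made up for illustration):

```python
from collections import deque

# Toy model of a two-entry reservation station: each cycle, the station
# issues the oldest entry whose operand is ready. A blocked instruction
# can sit in one entry while later, ready instructions pass it by
# through the other entry.

def run(instrs, ready_at):
    """instrs: instruction names in program order.
    ready_at[name]: the cycle that instruction's operand becomes ready.
    Returns {name: cycle it issued}."""
    station, pending = [], deque(instrs)
    issued, cycle = {}, 0
    while pending or station:
        cycle += 1
        # Dispatch into free entries (at most two held at once).
        while pending and len(station) < 2:
            station.append(pending.popleft())
        # Issue the oldest ready entry this cycle, if any.
        for name in station:
            if ready_at[name] <= cycle:
                issued[name] = cycle
                station.remove(name)
                break
    return issued

# "a" is blocked until cycle 5 (say, waiting on a load); "b" and "c"
# are ready immediately and slip past it through the second entry.
order = run(["a", "b", "c"], {"a": 5, "b": 1, "c": 1})
```

With those made-up ready times, `b` and `c` issue on cycles 1 and 2 while `a` occupies an entry until cycle 5, which is exactly the limited reordering window the single spare entry provides.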
https://en.wikipedia.org/wiki/Tomasulo's_algorithm
Took a while until transistor budgets allowed it to be implemented in consumer microprocessors.
https://news.ycombinator.com/item?id=38459128
It wasn't a full pipeline, but large parts of the integer ALU and related circuitry were duplicated so that complex (time-consuming) instructions like multiply could directly follow each other without causing a pipeline bubble. Things were still essentially executed entirely in-order, but the second MUL (or similar) could start before the first was complete if it didn't depend upon the result of the first, and the Pentium line had a deeper pipeline than previous Intel chips to take the most advantage of this.
The compiler optimisations, and similar manual code changes when the compiler wasn't bright enough, were to reduce the occurrence of instructions depending on the results of the instructions immediately before them, which would bring the pipeline bubble back because subsequent instructions couldn't be started until the current one was complete. This was also a time when branch prediction became a major concern, and further compiler optimisations (and manual coding tricks) were used to help here too, because aborting a deep pipeline because of a branch (or just stalling the pipeline at the conditional branch point until the decision is made) carries quite a performance cost.
As the CPU was not out of order, to execute two instructions per clock you had to pair them so that the second one was simple and did not use the output of the first one. Existing code and most compilers around at the time were generally bad at this, but things like inner render loops in games could make a lot of use of it if you wrote them in assembly.
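That pairing constraint can be sketched as a toy cycle counter (a simplification: the real Pentium's U/V-pipe pairing rules had many more restrictions on which instructions could go in which pipe; the register names here are just for illustration):

```python
# Toy model of the Pentium's dual-issue constraint described above: two
# adjacent instructions issue in the same clock only if the second one
# is "simple" and does not read the first one's destination register.

def cycles(instrs):
    """instrs: list of (dest, srcs, is_simple) tuples in program order.
    Returns the number of clocks to issue them all."""
    n, i = 0, 0
    while i < len(instrs):
        n += 1
        if i + 1 < len(instrs):
            dest1, _, _ = instrs[i]
            _, srcs2, simple2 = instrs[i + 1]
            if simple2 and dest1 not in srcs2:
                i += 2  # the pair issues together in one clock
                continue
        i += 1  # no pairing: the second instruction waits a clock
    return n

# Independent pair: issues together in one clock.
indep = [("eax", ("esi",), True), ("ebx", ("ecx",), True)]
# Dependent pair: the second reads eax, so it can't pair.
dep = [("eax", ("esi",), True), ("eax", ("eax", "ecx"), True)]
```

This is the reason interleaving two independent dependency chains in an inner loop could nearly double throughput on these chips.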
Some say that they tried to add 486 and 100, and the result had some digits after the decimal point; that's why they named it Pentium (yes, I know about the FDIV bug).
I well remember the 486SX/2-66's and how terrible they were. I liked to say that Compaq put the "sorry" in Presario.
In the late 90's, between around 96 and 98, I made good money building AMD 486 DX/4 133's. Those things were blindingly fast for the price. As I recall there was even a 150MHz variant.
Still, my favorite CPU of all time remains the AMD K6/2-450. It wasn't until the Phenom II BE950, a dual core that I unlocked to quad core, that I felt I had a CPU that matched the K6/2-450 in value. Since then I've had a couple of Ryzens for my daily driver/work machine, and couldn't be happier. AMD has done a fantastic job keeping price and performance in tune. But it goes even further if you shop smartly.
Overall, this was an excellent read, and brought back a lot of memories. The 6x86, for example: too much promise for what they actually delivered. And thanks to this article I now know why so many cheap motherboards had their CPUs soldered. It wasn't a technology decision, but a legal one. I had no idea of that at the time.
I'm not sure where or why I have so many AM4 machines around, but my kids are still playing games fine on machines with a 1st and 2nd gen Ryzen in them.
I just upgraded another to a Ryzen 5 5500. I plan to get a few more years out of it.
The bang for my buck has been pretty high. I don't believe CPUs go obsolete immediately like they used to.
Also, for Half Life.
Intel's marketing department threw a fit; they didn't want the Pentium 4 competing with their flagship Itanium. Bob Colwell was directly ordered to remove the 64-bit functionality.
Which he kind of did, kind of didn't. The functionality was still there, but fused off when NetBurst shipped.
If it wasn't for AMD beating them to market with AMD64, Intel would have probably eventually allowed their engineers to enable the 64-bit extension. And when it did come time to add AMD64 support to the Pentium 4 (later Prescott and Cedar Mill models) the existing 64-bit support probably made for a good starting point.
Bob Colwell talks about this (and some of the x86 team vs Itanium team drama) in his quora answer and followup comments: https://www.quora.com/How-was-AMD-able-to-beat-Intel-in-deli...
But this market segmentation idea just seems absolutely insane to me in a way I’ve never had anyone satisfactorily explain.
It requires Intel to voluntarily destroy the commodity economics that put their CPUs on a rocket ship to domination.
It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
I think back then it was all about massive databases. That was where the big money was, and x86 wasn't really set up for the top-end load patterns of databases (or OLAP data lakes).
They even had a chance with mobile chips using Atom, but ARM was too compelling, and I think Apple was sick of the Intel dependency, so when there was an opportunity in the mobile space to not be so deeply tied to Intel, they took it.
Was it, though? They made a new CPU from scratch, promising to replace Alpha, PA-RISC and MIPS, but the first release was a flop.
The only "win" of Itanium that I see is that it eliminated some competitors in the low- and medium-end server market, MIPS and PA-RISC, with SPARC left on life support.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.
RISC was absolutely dominating the server/workstation space, this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into the server/workstation space, and they knew they needed a high end RISC cpu. It was kind of general knowledge in the computer space at the time that RISC was the future.
Resulting in claims like "it's a pretty good DSP, but hilariously overpriced".
I would tend to believe that the Itanium is an HP product, given that they've always seemed more invested in the platform than Intel.
[1] https://liam-on-linux.livejournal.com/49259.html
Cyrix chips get too much hate because of Quake being optimized specifically for the Pentium and its FPU.
On the other hand, I probably wouldn't have recognized the F00F bug mention if you had actually written 0xc8c70ff0.
[1] https://en.wikipedia.org/wiki/Pentium_F00F_bug
Not to be too pedantic, but I would contend that at the time, it was pretty clear to enthusiasts what the differences were. Everyone in the industry was paying attention to 486s and the cost of a genuine Intel chip. The FDIV bug was on every evening newscast for weeks. AMD and Cyrix vs Intel debates were common.
I agree that it is not obvious now that Pentium came after 486, but at the time, it was clear.
December 1998 $85 Celeron 300A handily beating June 97 $594 Pentium 233 MMX, not to mention overclocked one matching 1998 $621-824 Pentium 2s.
January 2002 $120 Duron 1300/Celeron 1300 beating 2000 $1000 Athlon 1000/Pentium 3 1000-1133
June 2007 $40 Celeron 420 overclockable out of the box from stock 1.6 to 3.2GHz beat best $1000 CPUs of year 2005 (FX-57, P4 EE).
Same goes for graphics chips, starting around 1998/99.
Ah, I remember the days of Intel's fabs doing “too good” a job, producing many more chips that passed the tests for the faster speed grades than expected. To fulfil orders for the slower chips, some of these better batches were marked down and sold as slower units, so if you were lucky you could really push the overclocking and get yourself a performance bargain. You also needed a good motherboard and quality RAM to pull it off reliably, of course.
Sillyrons is what we used to call the massively overclocked Celerons. At Uni a friend of mine made a good bit of pocket money from selling an optimisation service, for people who didn't feel confident playing with such settings themselves.
And it turns out that for a lot of software, a smaller but faster L2 was actually better than the bigger one. And because there were no fast products that used the Mendocino die, even the fastest of them were sold as Celerons. 300A was particularly nice because very nearly all of them could run at 450, and 100MHz FSB motherboards were widely available to pair with the fixed 4.5 multiplier of the CPU.
But the time since 2020 feels much faster again. It's scary! But it's exciting.
I always wondered if some of that was to offset the negative publicity from the FDIV bug in the early Pentiums.
It was annoying, as it seemed every computer ad needed to play it, not just Intel's own ads.
The author links to an example:
https://dfarq.homeip.net/ibm-486slc2-cpu-when-a-clone-isnt-a...
You then had the 486 DLCs, which were even worse. You'd get companies that sold 386 and even 286 systems with '486' chips, constrained by slow, 16-bit buses, etc.
But, even a 486SLC wasn't all that bad at the time. It was still much faster than a 386DX for many things (DOOM, for example).
These AT-like machines limited users to 16-bit ISA cards for expansion, and a 24-bit address bus (16 MB RAM). But how many consumer PCs used more than that, back when your 32-bit bus options were VLB (video only), MCA (IBM only), or EISA (expensive servers/workstations only)?
But to be fair, that naming shift probably mattered more than most people realize.
Sure AMD and a few others had back-seat answers, but Intel was literally driving the bus.
https://timeline.intel.com/1993/peripheral-component-interco...
There was a mobile 266MHz Pentium MMX, Tillamook
And it appears there was a 300MHz version according to Wikipedia.
It wasn't much more than a year later that I was able to get a Pentium 100MHz for $2000. It's amazing how fast things progressed back then.