I work in CPU security and it's the same with microarchitecture. You wanna know if a machine is vulnerable to a certain issue?
- The technical experts (including Intel engineers) will say something like "it affects Blizzard Creek and Windy Bluff models"
- Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this; you can only find it out by actually booting one up.)
- The spec sheet for the hardware calls it a "Xeon Osmiridium X36667-IA"
Absolutely none of these forms of naming have any way to correlate between them. They also have different names for the same shit depending on whether it's a consumer or server chip.
Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.
Usually I just ask the LLM and accept that it's wrong 20% of the time.
> - Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this; you can only find it out by actually booting one up.)
I’m doing some OS work at the moment and running into this. I’m really surprised there’s no caniuse.com for CPU features. I’m planning on requiring support for all the features present in every CPU that shipped in the last 10+ years. But it’s basically impossible to figure that out, especially across Intel and AMD. Can I assume APIC? IOMMU stuff? Is ACPI 2 actually available on all CPUs, or do I need to support the old version as well? It’s very annoying.
I’m pretty sure the number of people at Intel who can tell you offhand the answer to your questions about only Intel processors is approximately zero, give or take a couple. Digging would be required.
If you were willing to accept only the relatively high power variants it’d be easier.
> AMD's part numbers contain a digit that increments with each year
Aha, but which digit? Sure, that's easy for server, HEDT and desktop (it's the first one) but if you look at their line of laptop chips then it all breaks down.
I also found the same thing a decade ago: apparently lots of features (e.g. specific instructions, the iGPU) are broadly advertised as belonging to a specific arch, but Pentium/Celeron (or, for the premium stuff, non-Xeon) models often lack them entirely, and the only way to detect that is lscpu/feature bits/digging in UEFI settings.
You can correlate microarchitecture to product SKUs using the Intel site that the article links. AMD has a similar site with similar functionality (except that AFAIK it won't let you easily get a list of products with a given uarch). These both have their faults, but I'd certainly pick them over an LLM.
But you're correct that for anything buried in the guts of CPUID, your life is pain. And Intel's product branding has been a disaster for years.
I feel like it's a cultural thing with the designers. Ceragon were the exact same when I used to do microwave links. Happy to provide demo kit, happy to provide sales support, happy to actually come up and go through their product range.
But if you want any deep and complex technical info out of them, like oh maybe how to configure it to fit UK/EU regulatory domain RF rules? Haha no chance.
We ended up hiring a guy fluent in Hebrew just to talk to their support guys.
Super nice kit, but I guess no-one was prepared to pay for an interface layer between the developers and the outside world.
Intel doesn't like to officially use codenames for products once they have shipped, but those codenames are used widely to delineate different families (even by them!), so they compromise with the awkward "products formerly x" wording. Have done for a long time.
I wouldn't mind them coming up with better codenames anyway. "Some lower-end SKUs branded as Raptor Lake are based on Alder Lake, with Golden Cove P-cores and Alder Lake-equivalent cache and memory configurations." How can anyone memorize this endless churn of lakes, coves and monts? They could've at least named them in alphabetical order.
AMD does this subterfuge as well. Put Zen 2 cores from 2019 (!) in some new chip packaging and sell it as Ryzen 10 / 100. Suddenly these chips seem as fresh as Zen 5.
The entire point of code names is that you can delay coming up with a marketing name. If the end user sees the code name then what is even the point? Using the code name in external communication is really really dumb. They need to decide if it should be printed on the box or if it's only for internal use, and don't do anything in between.
Product lines are in design and development for years (two years is lightning fast), and code names can be found for things five or more years before release, so everyone who works with them knows the code names better (much better) than the retail names.
I have three Ubuntu servers and the naming pisses me off so much. Why can't they just stick with their YY.MM naming scheme everywhere? Instead, they mostly use code names, and I never know what codename I'm currently on or what the latest one is. When I have to upgrade or find a specific Python PPA for whatever OS I'm running, I need 30 minutes of research to correlate all these dumb codenames to the actual version numbers.
As an Apple user, the macOS code names stopped being cute once they ran out of felines, and now I can't remember which of Sonoma or Sequoia was first.
Android have done this right: when they used codenames they did them in alphabetical order, and at version 10 they just stopped being clever and went to numbers.
Protip, if you have access to the computer: `lsb_release -a` should list both release and codename. This command is not specific to Ubuntu.
Finding the latest release and codename is indeed a research task. I use Wikipedia[1] for that, but I feel like this should be more readily available from the system itself. Perhaps it is, and I just don't know how?
Yes, I agree, codenames are stupid, they are not funny or clever.
I want a version number that I can compare to other versions, to be able to easily see which one is newer or older, to know what I can or should install.
I don't want to figure out and remember your product's clever nicknames.
- sSpec S0ABC = "Blizzard Creek" Xeon type 8 version 5 grade 6 getConfig(HT=off, NX=off, ECC=on, VT-x=off, VT-d=on)=4X Stepping B0
- "Blizzard Creek" Xeon type 8 -> V3 of Socket FCBGA12345 -> chipset "Pleiades Mounds"
- CPUID leaf 0x3aa = model-specific feature-set checks for "Blizzard Creek" and "Windy Bluff" (aka Blizzard Creek V2)
- asserts bit 63 = that buggy VT-d circuit is not off
- "Xeon Osmiridium X36667-IA" = marketing name to confuse specifically you (but also IA-36-667 = (S0ABC|S9DFG|S9QWE|QA45P))
disclaimer: the above is all made up and I don't work at any of the relevant companies
I recall standing in CEX one day perusing the cabinet of random electronics (as you do) and wondering why the Intel CPUs were so cheap compared to the AMD ones. I eventually concluded that the cross-generation compatibility of Zen CPUs meant they had a better resale value. Whereas if you experienced the more common mobo failure with an Intel chip you were likely looking at replacing both.
That reminds me when I got a server-grade Xeon E5472 (LGA771) and after some very minor tinkering (knife, sticker mod) fit it into a cheap consumer-grade LGA775 socket. Same microarchitecture, power delivery class, all that.
LGA2011-0 and LGA2011-1 are very unalike, from the memory controller to vast pin rearrangement.
So not only do they call two different sockets almost the same thing, per the post, but they also call essentially the same sockets different things to artificially segment the market.
Yeah, Intel has had some crazies in the naming department ever since they abandoned NetBurst, with its clear generation number and frequency in the name. I remember having two CPUs with the exact same name, E6300, for the exact same socket, LGA775, but the difference was 1 GHz and cache size. Like, OK, I can understand that they were close enough, but at least add something to the model number to distinguish them.
This is too forgiving of intel in this case. It has a name. They just don't use it. "Sockets Supported: FCLGA2011". It's not like this is poorly named. It's not even true.
It has pretty much always been the case that you need to make sure the motherboard supports the specific chip you want to use, and that you can't rely on just the physical socket as an indicator of compatibility (true for AMD as well). For motherboards sold at retail the manufacturer's site will normally have a list, and they may provide some BIOS updates over time that extend compatibility to newer chips. OEM stuff like this can be more of a crapshoot.
All things considered I actually kind of respect the relatively straightforward naming of this and several of Intel's other sockets. LGA to indicate it's land grid array (CPU has flat "lands" on it, pins are on the motherboard), 2011 because it has 2011 pins. FC because it's flip chip packaging.
This reminds me of my ASRock motherboard, though this was over a decade ago now. The actual board was one piece of hardware, but the manual it shipped with was for a different piece of hardware. Very similar, but not identical (and worse, not identical where I needed them to be, which, naturally, is both the only reason I noticed and how these things get noticed…), but yet both manual and motherboard had the same model number. ASRock themselves appeared utterly unaware that they had two separate models wandering around bearing the same name, even after it was pointed out to them.
The next motherboard (should RAM ever cease being the tulip du jour) will not be an ASRock, for that and other reasons.
For the love of everything though, just increment the model number.
Yea, old server hardware can be super cheap! In my opinion though, the core counts are misleading. Those 24 cores are not comparable to the cores of today. Plus IPC and power usage are wildly different. YMMV on whether those tradeoffs are worth it.
LGA2011 was an especially cursed era of processors and motherboards.
In addition to all of the slightly different sockets there was DDR3, DDR3 low voltage, the server/ECC counterparts, and then DDR4 came out but it was so expensive (almost more expensive than DDR4/DDR5 is now, compared to what it should be) that there were goofy boards that had both DDR3 and DDR4 slots.
By the way it is _never_ worth attempting to use or upgrade anything from this era. Throw it in the fucking dumpster (at the e-waste recycling center). The onboard SATA controllers are rife with data corruption bugs and the caps from around then have a terrible reputation. Anything that has made it this long without popping is most likely to have done so from sitting around powered off. They will also silently drop PCI-E lanes even at standard BCLK under certain utilization patterns that cause too much of a vdrop.
This is part of why Intel went damn-near scorched earth on the motherboard partners that released boards which broke the contractual agreement and allowed you to increase the multipliers on non-K processors. The lack of validation under these conditions contributed to the aforementioned issues.
>and allowed you to increase the multipliers on non-K processors
Wasn't this the other way around, allowing you to increase multipliers on K processors on the lower end chipsets? Or was both possible at some point? I remember getting baited into buying an H87 board that could overclock a 4670K until a bios update removed the functionality completely.
In fairness, the author should've known something was up when they thought they could put a multiple year newer chip in an Intel board. That sort of cross-generational compatibility may exist in AMD land but never in Intel.
The author would likely be able to put a v3 generation processor in the motherboard, they just didn't do the necessary research to find that out before pulling the trigger.
I mean sure, that would seem suspicious. But not suspicious enough that I'd likely have caught the problem. It's not that far fetched that Intel may occasionally make new CPUs for older sockets, and when Intel's documentation for the motherboard says "uses socket FCLGA2011" and Intel's documentation for the CPU says "uses socket FCLGA2011", I too would have assumed that they use the same socket.
With Intel's confusing socket naming, you can buy a CPU that doesn't fit the socket.
With USB, the physical connection is very clearly the first part of the name. You cannot get it wrong. Yeah, the names aren't the most logical or consistent, but USB C or A or Micro USB all mean specific things and are clearly visibly different. The worst possible scenario is that the data/power standard supported by the physical connection isn't optimal. But it will always work.
I don't think the port names are what they were referring to.
The actual names for each data transfer level are an absolute mess.
1.x has Low Speed and Full Speed
2.0 added High Speed
3.0 is SuperSpeed (yes no space this time)
3.1 renamed 3.0 to 3.1 Gen 1 and added SuperSpeedPlus
3.2 bumped the 3.1 version numbers again and renamed all the SuperSpeeds to SuperSpeed USB xxGbps
And finally they renamed them again removing the SuperSpeed and making them just USB xxGbps
USB-IF are the prime examples of "don't let engineers name things, they can't"
> The worst possible scenario is that the data/power standard supported by the physical connection isn't optimal. But it will always work.
I don't know what "always work" means here but I feel like I've had USB cables that transmit zero data because they're only for power, as well as ones that don't charge the device at all when the device expects more power than it can provide. The only thing I haven't seen is cables that transmit zero data on some devices but nonzero data on others.
I don't think those cables are in spec, and there are a lot of faulty devices and chargers that don't conform to the spec, creating these kinds of problems (e.g. the Nintendo Switch 1). This is especially a problem with USB-C.
You can maybe blame USB consortium for creating a hard spec, but usually it's just people saving $0.0001 on the BOM by omitting a resistor.
Not at all. If you want to charge your phone, it might "always work", but if you want to use your monitor with USB hub and pass power to your MacBook, you're gonna have a hard time.
I can't find a USB-C PD adapter for a laptop that uses less than 100W. As a result, I can't charge a 65W laptop from a 65W port because the adapter doesn't even work unless the port is at least 100W.
I've noticed that GaN PD 100 W and 65 W adapters actually output less than a Lenovo 65 W charger (the one with a non-detachable USB-C cable); neither of them charges my laptop. Cable does not matter, tried with many of them including ones providing power from other chargers.
It seems totally random, and you cannot rely on watts anymore.
> Cable does not matter, tried with many of them including ones providing power from other chargers.
That might not necessarily be the right conclusion. My understanding is: almost all USB-C power cables you will encounter day to day support a max current of at most 3A (the most that a cable can signal support for without an emarker). That means that, technically, the highest power USB-PD profile they support is 60W (3A at 20V), and the charger should detect that and not offer the 65W profile, which requires 3.25A.
Maybe some chargers ignore that and offer it anyway, since 3.25A isn't that much more than 3A. For ones that don't and degrade to offering 60W, if a laptop strictly wants 65W, it won't charge off of them.
So it's worth acquiring a cable that specifically supports 5A to try, which is needed for every profile above 60W (and such a cable should support all profiles up to the 240W one, which is 5A*48V).
(I might be mistaken about some of that, it's just what I cobbled together while trying to figure out what chargers work with my extremely-picky-about-power lenovo x1e)
There's a fair number of misleading or outright wrong specs if you're buying from Amazon or the like. And even if you're buying brand name, the specs can be misleading: they often refer to the maximum output across all the ports, not the maximum output of a single port.
So a 100 watt GaN charger might be able to deliver only 65 watts from its main "laptop" port, but it has two other ports that can do 25 and 10 watts each. Still 100 watts in total, but your laptop will never get its 100 watts.
Not every brand is as transparent about this, sometimes it's only visible in product marketing images instead of real specs. Real shady.
I have a Dell laptop that uses a USB-C port to charge, but it doesn't actually use the PD specification, just a custom one, so my 65 W GaN charger falls back to 5 V / 0.5 A and isn't useful at all. I'd bet dollars to donuts that your Lenovo is doing similar shit.
For this specific issue I'm surprised, I have used all kinds of USB PD chargers for my laptops and all of them but one are less than 100W, with no problem at all.
The ones I use most are 20W and 40W, just stuff I ordered from AliExpress (Baseus brand I think).
I assume someone typed it in (possibly on a mobile device with autocorrect) rather than copy & pasting it (which you would have to do twice, for the URL and for the title).
https://en.wikipedia.org/wiki/List_of_Intel_Core_processors
https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors
It doesn't have the CPUID but it's a pretty good mapping of model numbers to code names and on top of that has the rest of the specs.
"Products formerly Blizzard Creek"
WTF does that even mean?
It's fraud, plain and simple.
Same with Intel.
STOP USING CODENAMES. USE NUMBERS!
[1] https://en.wikipedia.org/wiki/Ubuntu#Releases
NVidia has these, very different GPUs:
Quadro 6000, Quadro RTX 6000, RTX A6000, RTX 6000 Ada, RTX 6000 Workstation Edition, RTX 6000 Max-Q Workstation Edition, RTX 6000 Server Edition
It would be like having the Quadro 6000 and 6050 be completely different generations
> There are only two hard things in Computer Science: cache invalidation, naming things, off-by-one errors.
That's an industry-wide standard across all IC manufacturing - Intel doesn't really get to take credit for it.
Engineers don't make names that are nice for marketing team.
But they absolutely do make consistent ones. The engineer wouldn't name it superspeed, the engineer would encode the speed in the name
While not disagreeing, I'd ask for a proof it's not a marketing department's fun. Just to be sure.
Engineers love consistency. Marketing is on the opposite end of this spectrum.
How polite. It can be useless, not just "not optimal". Especially since USB-C can burn you on a combination of power and speed, not only speed.
It does not always work.
And wow, I'll keep away from Dell, thanks.
Email them, address is in the guidelines.
On the other side, AMD with the legendary AM4.