5 comments

  • nxobject 12 hours ago
    Huh - literally every trick in the toolbox (pun intended) is used to integrate the running System with the host OS.

    An actual Quadra ROM, but also a System Enabler (and a Helper application). For Toolbox UI services, it does everything from patching the Toolbox (e.g. to prevent windows from being moved onto the MAE toolbar) to calling the host OS for services (e.g. direct translation from QuickDraw to Xlib).
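
    For concreteness, "patching the Toolbox" here means the classic trick of swapping a trap dispatch table entry for your own routine and chaining to the original. A minimal 68k-style sketch of that pattern, purely illustrative and not MAE's actual code (the clamping logic and the 32-pixel strip are invented; the calls are the Universal Interfaces names):

      #include <MacWindows.h>  /* WindowPtr, Rect, Point */
      #include <Patches.h>     /* GetToolboxTrapAddress / SetToolboxTrapAddress */
      #include <Traps.h>       /* trap numbers such as _DragWindow */

      /* The ROM's DragWindow is a pascal-convention Toolbox trap. */
      typedef pascal void (*DragWindowProc)(WindowPtr w, Point startPt,
                                            const Rect *bounds);

      static DragWindowProc gOldDragWindow;

      /* Head patch: shrink the allowed drag area, then chain to the ROM.
         The 32-pixel reserved strip is a made-up stand-in for a toolbar. */
      static pascal void MyDragWindow(WindowPtr w, Point startPt,
                                      const Rect *bounds)
      {
          Rect clamped = *bounds;
          clamped.bottom -= 32;
          gOldDragWindow(w, startPt, &clamped);
      }

      void InstallDragWindowPatch(void)
      {
          gOldDragWindow = (DragWindowProc) GetToolboxTrapAddress(_DragWindow);
          SetToolboxTrapAddress((UniversalProcPtr) MyDragWindow, _DragWindow);
      }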

    But so many other mysteries remain.

    How is transparent disk access/conversion implemented if there are no SCSI devices, and the ATA/IDE Manager isn’t installed? Is the File Manager heavily patched as well?

    Is the 68k emulator patched to map the graphics card DeclROMs into the memory windows the Slot Manager expects? (Why not just patch the Slot Manager as well?)

  • pjmlp 12 hours ago
    I guess having a big fat guy in the A/UX advertisement kind of says how Apple saw UNIX even then.

    Jobs was also not that keen on UNIX. Around this time NeXT was already going, and some of his surviving remarks, including his appearance at USENIX, suggest something similar to Microsoft's tactics: bring UNIX applications and customers onto NeXTSTEP, and then offer them tools unavailable on any other UNIX.

    The OpenSTEP effort and the collaboration with Sun only started after sales didn't go as well as expected.

    Nonetheless, I only saw A/UX live in 1994 at Lisbon's IT conference (a CeBIT in miniature), and it looked kind of interesting.

  • esafak 1 day ago
    Ha, PA-RISC! I remember faxing HP as a teen for a brochure about it. Back when HP was a contender :(
    • classichasclass 1 day ago
      (author) I thought PA-RISC could go a lot further. It had good performance and a fairly sane ISA, and HP put a lot of work into it. Dropping it for Itanic was one of HP's poorest decisions, because compilers just weren't sophisticated enough to make EPIC VLIW efficient, and arguably still aren't. The issue about the "one instruction per cycle" limit got addressed in a different way.
      • ndiddy 1 day ago
        HP, DEC/Compaq, and SGI all made the decision to drop their various bespoke architectures for Itanium years before prototypes were available based solely on what Intel claimed performance would be on paper. Even Sun and IBM made noise about doing the same thing. Honestly, I think it was inevitable that something like this would happen. By the late 90s, it was starting to get too expensive for each individual high-end server/workstation company to continue investing in high-performance chip design and leading edge fabs to make low-volume parts, so it made sense for all of them to standardize on a common architecture. The mistake everyone made was choosing Itanium to be the industry standard.
        • classichasclass 1 day ago
          Yes, that's all true, but I blame HP more than the others because a large part of what went into Itanium came from HP. They thought that with their simulation results they would eclipse all other architectures with Itanic, and they were way off base, junking an architecture that had room to grow in the process. Even so, PA-RISC was still competitive at least into the days of Mako, though they kind of phoned it in with Shortfin (the last PA-8900).
        • kev009 1 day ago
          IBM did ship a few generations of Itanium hardware; they just smartly never bet the farm on it.

          MIPS and SPARC were always a little weak versus contemporaries; if SGI had held out a bit longer with the R18k, that would have been enough time to read the tea leaves and jump to Opteron instead.

          PA-RISC and Alpha had big enough captive markets and some legs left, but they got pulled too soon. That paradoxically might have led to a healthier Sun that went all in on Opteron.

      • jeffbee 1 day ago
        I mean, the performance was better than "fair". Nothing could touch it including Alpha. They abandoned it right at the top of their game.
        • kev009 1 day ago
          The most astonishing thing about this is that it was done under forbearance of the ISA. The PA-RISC ISA was basically frozen in 1996, and they were able to ride that at the top for years. For instance, PA-RISC doesn't really have appropriate instructions for desirable atomic operations. But it led to working on the right problems: a hardwired-control, RISCy chip that happened to be philosophically similar to the survivor, POWER.
      • jimmaswell 1 day ago
        I wonder if we could make an LLM or other modern machine learning framework finally figure out how to compile to Itanic in an optimized fashion.
        • duskwuff 1 day ago
          No. The problems involved are fundamental:

          1) Load/store latency is unpredictable - whenever you get a cache miss (which is unpredictable*), you have to wait for the value to come back from main memory (a wait that keeps getting longer as CPUs get faster while memory latency roughly stays the same). Statically scheduling around this sort of unpredictable latency is extremely difficult; you're better off doing it on the fly (rough numbers below).

          2) Modern algorithms for branch prediction and speculative execution are dynamic. They can make observations like "this branch has been taken 15/16 of the last times we've hit it, we'll predict it's taken the next time" which are potentially workload-dependent. Compile-time optimization can't do that.

          *: if you could reliably predict when the cache would miss, you'd use that to make a better cache replacement algorithm
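
          To put rough, typical numbers on point 1: at ~3 GHz a cycle is about 0.33 ns, so a ~100 ns trip to DRAM costs on the order of 300 cycles, versus a handful of cycles for an L1 hit. A static schedule has to assume some fixed latency, and whatever the compiler picks will be badly wrong much of the time; an out-of-order core just keeps issuing whatever instructions have their operands ready.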

          • brucehoult 1 day ago
            > this branch has been taken 15/16 of the last times we've hit it

            That is kind of how it worked more than 30 years ago (pre 1995), but not since, at least in OoO CPUs.

            In fact it was found that having more than a 2-bit saturating counter doesn't help, because when the situation changes it takes too many bad predictions in a row to get to predictions that, actually, this branch is not being taken any more.

            What both the Pentium Pro and PowerPC 604 (the first OoO designs in each family) had was a global history of how you GOT TO the current branch. The Pentium Pro had 4 bits of taken/not-taken history for the last four conditional branches, and this was used to decide which 2-bit counter to use for a given branch instruction. The PowerPC 604 used 6 bits of history. The Pentium Pro algorithm for combining the branch address with the history (XOR them!) is called "gshare" (sketched below). The PPC604 did something a little bit different but I'm not sure what. By the PPC750 Motorola was using basically the same gshare algorithm as Intel.

            There are newer and better algorithms today -- exactly what is somewhat secret in leading edge CPUs -- but gshare is simple and is common in low end in-order and small OoO CPUs to this day. The Berkeley BOOM core uses a 13 bit branch history. I think early SiFive in-order cores such as the E31 and U54 used 10 bits.
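
            A toy gshare in C, just to make the indexing concrete. Sizes and names are made up for illustration (the 13-bit history is borrowed from the BOOM figure above), and real predictors also have to track history speculatively and repair it on mispredicts:

              #include <stdint.h>

              #define HIST_BITS  13                     /* BOOM-like history length  */
              #define TABLE_SIZE (1u << HIST_BITS)

              static uint8_t  counters[TABLE_SIZE];     /* 2-bit saturating counters */
              static uint32_t ghist;                    /* global taken/not history  */

              static uint32_t pht_index(uint32_t pc)
              {
                  /* gshare: XOR the branch address with the global history */
                  return ((pc >> 2) ^ ghist) & (TABLE_SIZE - 1);
              }

              int predict_taken(uint32_t pc)
              {
                  return counters[pht_index(pc)] >= 2;  /* weakly/strongly taken */
              }

              void train(uint32_t pc, int taken)
              {
                  uint8_t *c = &counters[pht_index(pc)];
                  if (taken  && *c < 3) (*c)++;         /* saturating, so a single  */
                  if (!taken && *c > 0) (*c)--;         /* surprise doesn't flip it */
                  ghist = ((ghist << 1) | (uint32_t) taken) & (TABLE_SIZE - 1);
              }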

            • duskwuff 20 hours ago
              Fair point, I oversimplified a bit. Either way, what matters is that it's dynamic.
        • o11c 1 day ago
          No, VLIW is fundamentally a flawed idea; OoO is mandatory. "We need better compilers" is purely Intel marketing apologia.
          • whaleofatw2022 1 day ago
            Isn't VLIW how a number of GPUs worked internally? That said, a GPU isn't the same as a GPC.
            • buildbot 1 day ago
              Yes, as others noted, AMD used VLIW for TeraScale in the 2000-6000 series. https://en.wikipedia.org/wiki/TeraScale_(microarchitecture)

              VLIW designs are used in a lot of DSP chips too, where you (hopefully) have very simple branching, if any, and nice data access patterns.

              • Sesse__ 1 day ago
                And typically just fast RAM everywhere instead of caches, so you don't have cache misses. (The flip side is that you typically have very _little_ RAM.)
            • classichasclass 1 day ago
              Some older ones, yeah (TeraScale comes to mind) but modern ones are more like RISC with whopping levels of SIMD. It turns out that VLIW was hard for them too.
    • sgt 1 day ago
      As a teen, I needed help with an Alpha 21164, and I phoned Compaq. This was a year after Compaq had bought Digital. But they had absolutely zero interest in helping a teenager with hardware they probably believed I had no business using.
      • spauldo 20 hours ago
        Calling DEC probably wouldn't have done you any good either, unless you could get hold of an insider who took an interest in you.

        I did tech support for DEC's Starion line of Windows PCs. I'd get the occasional call from someone wanting help with a VAX or a Rainbow or some other piece of DEC kit. But a lot of DEC's customer support was actually handled by 3rd party companies, including the one I worked for, and we had no way of connecting people to anyone from DEC itself.

        Compaq killed off most of the online resources for DEC equipment fairly quickly. I always thought DEC deserved a better death... But at least it was better than being bought by Oracle.

        • sgt 18 hours ago
          It could have been worse... think of when Sun was bought by Oracle.
          • spauldo 17 hours ago
            I think you missed my last paragraph :)
            • sgt 12 hours ago
              You are right! I missed it
      • kev009 20 hours ago
        I remember doing something similar with Bull, a now obscure but once somewhat formidable mainframe and UNIX company.

        I had a DPX/20, which for that model was just a rebadged Micro Channel IBM RS/6000. I was 12 and trying to figure out how to use it. I knew what I was in for, that I needed to load AIX, but the "firmware" on these is bare bones, and you don't have much to go on once it passes off control if you don't know whether your console is working in the first place.

        Given what I now know, they were surprisingly kind and passed the call around until it landed with an old-timer who was familiar with the model and somewhat bemused that I had it and was trying to use it, but who didn't really know how to help me remotely.

        Eventually someone on Usenet clued me in that I needed more pins on my serial cable connected, and it all turned out to be a nice learning opportunity building the GNU toolchain and AMP stack on it.

        There was some serendipity years later when I moved back to Phoenix after school and joined a newly formed PostgreSQL user group. Bull was trying to pivot into the open DB market and still had a huge campus in Phoenix where they held the meetings. It seemed sparsely occupied, and the writing was on the wall that it was all going away (it eventually did, a handful of years ago), but I was still a bit wide-eyed, now having some notion of the campus's historical significance as Honeywell, in the Multics project, and other things - and that my naive call from back then was almost certainly answered in that facility, not far from where I was struggling.

        • esafak 8 hours ago
          Groupe Bull? I remember them. I wonder how we can give kids today that same sense of wonder and joy of tinkering we had. I guess today's equivalent would involve robotics, since personal computers are all played out.
          • kev009 6 hours ago
            Maker spaces seem to have the right hacker ethos around explore, tinker, finish.

            I'm on a bunch of retrocomputing Discords where youth still find obscure old systems, typically Sun, SGI, and the parallel-universe IBM systems (mainframes and the AS/400 line), and manage to figure them out.

    • pjmlp 12 hours ago
      HP-UX was also the very first OS where I used something like containers, the Virtual Vault introduced in HP-UX 11, almost a decade before containers became fashionable in UNIX systems.
  • ethan_smith 1 day ago
    MAE was Apple's fascinating attempt to run Mac OS on Unix workstations, essentially creating a compatibility layer that translated Mac Toolbox calls to X11, allowing Unix users to run Mac software without actual Apple hardware.
    • sillywalk 23 hours ago
      There was a 3rd-party toolkit called, I believe, Equal, and later Latitude, for porting Mac apps to Unix. Equal was used to port MS Word & Excel, and Latitude for porting Adobe Photoshop and Illustrator.

      It was essentially a reverse-engineered Mac OS ROM toolkit. It implemented most of System 7 as well as QuickDraw.

      http://preserve.mactech.com/articles/mactech/Vol.13/13.06/Ju...

    • lukeh 23 hours ago
      It was also the foundation of Blue Box, if I remember right. (And QuickTime's portability layer was the foundation for Carbon.)
      • classichasclass 23 hours ago
        (author) MAE isn't the basis for Blue Box, though I'm quite sure it informed its design. Blue Box/Classic is actually more like MAS, the aborted Mac compatibility layer for PowerOpen/AIX/"A/UX 4," in that it runs PowerPC code directly on the CPU in the "problem state" and uses a paravirtualized operating system and enabler. There is no processor emulation in Classic except for supervisor and faulting instructions.

        There are also differences in the level at which they execute: MAE can run, and was designed to run, as an independent process like any other well-behaved X11 application, and multiple users can run multiple sessions of it, but Classic/Blue Box needs operating system support, and only one instance of it can be running on a system, by a single user.

        • lukeh 21 hours ago
          I stand corrected, very interesting!
  • Telemakhos 1 day ago
    RISC Mac? Why am I reminded of the original Mission Impossible movie?
    • raddan 1 day ago
      If you scroll down long enough there actually is some discussion of the (unbadged) Macs that appear in Mission Impossible!
    • bluedino 1 day ago
      686 prototypes with the artificial intelligence RISC chips
      • pezezin 25 minutes ago
        Damn, the scene of Ving Rhames and Jean Reno having that conversation is burned into my brain xD