16 comments

  • ramoz 19 minutes ago
    Recall itself is absolutely ridiculous. And any solution like it is as well.

    Meanwhile, Anthropic is openly pushing the ability to ingest our entire professional lives into their model, which ChatGPT would happily consume as well (they're scraping up our healthcare data now).

    Sandboxing is the big buzzword of early 2026. I think we need to press harder for verified privacy at inference. Any data of mine or my company's going over the wire to these models needs to stay verifiably private.

    • qwertox 3 minutes ago
      > And any solution like it is as well.

      Depends. I think I would like to have an observing AI that is only active when I want it to be, so that it logs the work being done, but isn't a running process otherwise, with off being the default.

      But it should certainly not be bundled with the OS, and ideally it would be a portable app: no registry entries, no files outside of its own directory (or a user-provided data directory).

      Say you're about to troubleshoot an important machine and have several terminals and applications open; it would be good to have something that logs everything done, with timestamped image sequences.

      The idea of Recall is good, but we can't trust Microsoft.

    • m4rtink 10 minutes ago
      >Any data of mine or my company's going over the wire to these models needs to stay verifiably private.

      I don't think this is possible without running everything locally and the data not leaving the machine (or possibly local network) you control.

      • ramoz 5 minutes ago
        Without diving too deep technically, there is an additional domain of “verifiability” relevant to AI these days.

        Using cryptographic primitives and a hardware root of trust (even GPU trusted execution, which NVIDIA now supports for NVLink), you can basically attest to certain compute operations, one of which might be confidential inference.

        My company, EQTY Lab, and others like Edgeless Systems or Tinfoil are working hard in this space.
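
        As a rough sketch of what the client side looks like (hypothetical helper and endpoint names; real verification goes through the vendor's attestation SDK and signature chain):

          import hashlib

          def verify_attestation(quote: bytes, pinned_measurement: str) -> bool:
              # A real check validates the hardware vendor's signature chain over
              # the quote and compares the enclave/GPU measurement against a value
              # pinned ahead of time; the hash here is only a stand-in.
              return hashlib.sha256(quote).hexdigest() == pinned_measurement

          def confidential_inference(prompt: str, endpoint, pinned_measurement: str) -> str:
              quote = endpoint.get_attestation_quote()   # hypothetical API
              if not verify_attestation(quote, pinned_measurement):
                  raise RuntimeError("endpoint failed attestation, refusing to send data")
              # Only after attestation succeeds does the prompt go over the wire,
              # on a channel terminated inside the attested TEE.
              return endpoint.generate(prompt)           # hypothetical API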

      • sbszllr 3 minutes ago
        Interestingly enough, private inference is possible in theory, e.g. via oblivious inference protocols, but it is prohibitively slow in practice. You can also throw a model into a trusted execution environment. But again, too slow.
        • ramoz 2 minutes ago
          Modern TEEs are actually performant enough for industry needs these days: over 400,000x faster than zero-knowledge proofs, with only nominal overhead compared to most raw inference workloads.
      • SoftTalker 8 minutes ago
        Once someone else knows, it's no longer a secret.
  • alphazard 1 hour ago
    This isn't an AI problem, it's an operating systems problem. AI is just so much less trustworthy than software written and read by humans that it is exposing the problem for all to see.

    Process isolation hasn't been taken seriously because UNIX didn't do a good job, and Microsoft didn't either. Well designed security models don't sell computers/operating systems, apparently.

    That's not to say that the solution is unknown; there are many examples of people getting it right: Plan 9, seL4, Fuchsia, Helios, too many smaller hobby operating systems to count.

    The problem is widespread poor taste. Decision makers (meaning software folks who are in charge of making technical decisions) don't understand why these things are important, or can't conceive of the correct way to build these systems. It needs to become embarrassing for decision makers to not understand sandboxing technologies and modern security models, and anyone assuming we can trust software by default needs to be laughed out of the room.

    • umvi 24 minutes ago
      > Well designed security models don't sell computers/operating systems, apparently.

      Well, it's more that it's hard to design software that is both secure by default and non-onerous for end users (including devs). Every time I've tried to deploy non-trivial software systems to highly secure setups it's been a tedious nightmare. Nothing can talk to anything else by default. Sometimes the filesystem is immutable and executables can't run by default. Every hole through every layer must be meticulously punched; miss one layer and things don't work, and you have to trace calls through the stack, across sockets and networks, etc. to see where the holdup is. And that's not even counting all the certificate/CA baggage that comes with deploying TLS-based systems.

    • layer8 1 hour ago
      It’s also an AI problem, because in the end we want what is called “computer use” from AI, and functionality like Recall. That’s an important part of what the CCC talk was about. The proposed solution to that is more granular, UAC-like permissions. IMO that’s not universally practical, similar to current UAC. How we can make AIs our personal assistants across our digital life — the AI effectively becoming an operating system from the user’s point of view — with security and reliability, is a hard problem.
      • dmitrygr 4 minutes ago
        > in the end we want what is called “computer use” from AI

        Who is "we" here? I do not want that at all.

      • alphazard 46 minutes ago
        We aren't there yet. You are talking about crafting a complicated window into the box holding the AI, when there isn't even a box to speak of.
        • layer8 44 minutes ago
          Yes, we aren’t there yet, but that’s what OS companies are trying to implement with things like Copilot and Recall, and equivalents on smartphones, and what the talk was about.
    • c-linkage 1 hour ago
      It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.

      Although one might consider it surprising that OS developers have not updated security models for this new reality, I would argue that no one wants to throw away their models due to 1) backward compatibility; and 2) the amount of work it would take to develop and market an entirely new operating system that is fully network aware.

      Yes we have containers and VMs, but these are just kludges on top of existing systems to handle networks and tainted (in the Perl sense) data.

      • OptionOfT 57 minutes ago
        > It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.

        I think Active Directory comes pretty close. I remember the days where we had an ASP.NET application where we signed in with our Kerberos credentials, which flowed to the application, and the ASP.NET app connected to MSSQL using my delegated credentials.

        When the app then uploaded my file to a drive, it was done with my credentials, if I didn't have permission it would fail.

      • gz09 1 hour ago
        > It's pretty clear that the security models that were designed into operating systems never truly considered networked systems

        Andrew Tanenbaum developed the Amoeba operating system with those requirements in mind almost 40 years ago. There were plenty of others in the systems research community who proposed similar systems. It's not that we don't know how to do it, it's just that the OSes that became mainstream didn't want to/need to/consider those requirements necessary/<insert any other potential reason I forgot>.

        • jacquesm 55 minutes ago
          Yes, Tanenbaum was right. But it is a hard sell; even today, people just don't seem to get it.

          Bluntly: if it isn't secure and correct it shouldn't be used. But companies seem to prefer insecure, incorrect but fast software because they are in competition with other parties and the ones that want to do things right get killed in the market.

          • germinalphrase 31 minutes ago
            Are there other obvious tradeoffs, in addition to speed, of these more secure OSes versus the status quo?
            • jacquesm 29 minutes ago
              Yes, money. Making good software is very expensive.
      • nyrikki 49 minutes ago
        There is a lot to blame on the OS side, but Docker/OCI are also to blame for not allowing permission bounds and forcing every decision onto the end user.

        Open desktop is also problematic, but the issue is more about userland passing the buck across multiple projects that can each easily justify their local decisions.

        As an example, if crun set reasonable defaults and restricted namespace-incompatible features by default, we would be in a better position.

        But Docker refused to even allow you to disable the --privileged flag a decade ago.

        There are a bunch of *2() system calls that decided to use caller-sized structs, which are problematic, and AppArmor is trivial to bypass with LD_PRELOAD, etc.

        But when you have major projects like llama.cpp running as container uid 0, there is a lot of hardening that could happen if projects just accepted some shared responsibility.

        Containers are just frameworks for calling kernel primitives; they could be made more secure by dropping more.

        But OCI wants to stay simple and just stamp out a couple of SELinux/AppArmor/seccomp profiles, and D-Bus does similar.

        Berkeley sockets do force unsharing of the netns, etc., but Unix is, at its core, about dropping privileges.

        Network awareness is actually the easier portion, and I guess if the kernel implemented POSIX socket authorization it would help, but when userland isn't even using basic features like uid/gid, no OS would work IMHO.

        We need some force that incentivizes security by design and sensible defaults; right now we have whack-a-mole security theater, and strong or frozen-caveman opinions win out.
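
        For contrast, most of the sensible defaults already exist as opt-in flags; something like this (standard docker/podman flags, hypothetical image name) could just as well be the baseline:

          import subprocess

          # Roughly the defaults that engines/crun could ship instead of leaving
          # them to the end user (a sketch, not a complete hardening profile).
          subprocess.run([
              "podman", "run", "--rm",
              "--user", "1000:1000",                  # not container uid 0
              "--cap-drop", "ALL",                    # drop every capability
              "--security-opt", "no-new-privileges",  # block setuid escalation
              "--read-only",                          # immutable root filesystem
              "--network", "none",                    # no network unless granted
              "ghcr.io/example/llama-server:latest",  # hypothetical image
          ], check=True)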

    • orbital-decay 1 hour ago
      If you want the AI to do anything useful, you need to be able to trust it with the access to useful things. Sandboxing doesn't solve this.

      Full isolation hasn't been taken seriously because it's expensive, both in resources and complexity. Same reason why microkernels lost to monolithic ones back in the day, and why very few people use Qubes as a daily driver. Even if you're ready to pay the cost, you still need to design everything from the ground up, or at least introduce low attack surface interfaces, which still leads to pretty major changes to existing ecosystems.

      • falloutx 2 minutes ago
        Crazy how all the rules about privacy and security go out of the window as soon as it's AI.
      • alphazard 52 minutes ago
        Microkernels lost "back in the day" because of how expensive syscalls were, and how many of them a microkernel requires to do basic things. That is mostly solved now, both by making syscalls faster, and also by eliminating them with things like queues in shared memory.

        > you still need to design everything from the ground up

        This just isn't true. The components in use now are already well designed, meaning they separate concerns well and can be easily pulled apart. This is true of kernel code and userspace code. We just witnessed a filesystem enter and exit the Linux kernel within the span of a year. No "ground up" redesign needed.

      • thewebguyd 51 minutes ago
        > If you want the AI to do anything useful, you need to be able to trust it with the access to useful things. Sandboxing doesn't solve this.

        By default, AI cannot be trusted because it is not deterministic. You can't audit what the output of any given prompt is going to be to make sure it's not going to rm -rf /

        We need some form of behavioral verification/auditing with guarantees that any input is proven to not produce any number of specific forbidden outputs.
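
        Even a crude deterministic gate in front of whatever the model proposes would be a start, e.g. (sketch only; a deny list obviously isn't a proof, it only blocks patterns someone thought to write down):

          import re

          # Deterministic, auditable deny-list check on a proposed shell command.
          FORBIDDEN = [
              r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\s+/",   # rm -rf / and friends
              r"\bmkfs(\.|\b)",                        # reformatting filesystems
              r"\bdd\b.*\bof=/dev/",                   # raw writes to block devices
          ]

          def is_allowed(proposed_command: str) -> bool:
              return not any(re.search(p, proposed_command) for p in FORBIDDEN)

          # Only if is_allowed(...) returns True does the harness actually execute
          # the command (e.g. via subprocess), and every decision gets logged.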

        • orbital-decay 42 minutes ago
          Determinism is an absolute red herring. A correct output can be expressed in an infinite number of ways, all of them valid. You can always make an LLM give deterministic outputs (with some overhead); that might bring you limited reproducibility, but it won't bring you correctness. You need correctness, not determinism.

          >We need some form of behavioral verification/auditing with guarantees that any input is proven to not produce any number of specific forbidden outputs.

          You want the impossible. The domain LLMs operate on is inherently ambiguous, so you can't formally specify your outputs correctly or formally prove them correct. (And yes, this doesn't have anything to do with determinism either; it's about correctness.)

          You just have to accept the ambiguity and bring errors or deviations to rates low enough to trust the system. That's inherent to any intelligence, machine or human.

          • drdeca 7 minutes ago
            This comment I'm making is mostly useless nitpicking, and I overall agree with your point. Now I will commence my nitpicking:

            I suspect that it may merely be infeasible, not strictly impossible. There has been work on automatically proving that an ANN satisfies certain properties (iirc e.g. some kinds of robustness to some kinds of adversarial inputs, for handling images).

            It might be possible (though infeasible) to have an effective LLM along with a proof that e.g. it won't do anything irreversible when interacting with the operating system (given some formal specification of how the operating system behaves).

            But, yeah, in practice I think you are correct.

            It makes more sense to put the LLM+harness in an environment that ensures you can undo whatever it does if it messes things up than to try to make the LLM such that it certainly won't produce outputs that mess things up in a way that isn't easily revertible, even if the latter does turn out to be possible in principle.

          • pfortuny 9 minutes ago
            There remains the issue of responsibility, moral, technical, and legal, though.
    • wat10000 38 minutes ago
      There are two problems that get smooshed together.

      One is that agents are given too much access. They need proper sandboxing. This is what you describe. The technology is there, the agents just need to use it.

      The other is that LLMs don't distinguish between instructions and data. This fundamentally limits what you can safely allow them to access. Seemingly simple, straightforward systems can be compromised by this. Imagine you set up a simple agent that can go through your emails and tell you about important ones, and also send replies. Easy enough, right? Well, you just exposed all your private email content to anyone who can figure out the right "ignore previous instructions and..." text to put in an email to you. That fundamentally can't be prevented while still maintaining the desired functionality.
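
      Concretely, the vulnerable shape is something like this (hypothetical llm() call, but most agent harnesses reduce to it):

        # Untrusted email bodies land in the same token stream as the trusted
        # instructions, and the model has no reliable way to tell them apart.
        SYSTEM = "Summarize my unread emails and draft replies when asked."

        def summarize_inbox(llm, emails):
            prompt = SYSTEM + "\n\n"
            for mail in emails:
                # mail["body"] is attacker-controlled, yet it is concatenated into
                # the prompt exactly like the instructions above it.
                prompt += f"From: {mail['sender']}\n{mail['body']}\n\n"
            return llm(prompt)  # "ignore previous instructions and ..." wins here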

      This second one doesn't have an obvious fix and I'm afraid we're going to end up with a bunch of band-aids that don't entirely work, and we'll all just pretend it's good enough and move on.

      • synalx 33 minutes ago
        In that sense, AI behaves like a human assistant you hire who happens to be incredibly susceptible to social engineering.
        • mikrl 13 minutes ago
          Make sure to assign your agent all the required security trainings.
    • atoav 10 minutes ago
      No, it is also not an OS problem; it is a problem of perverse incentives.

      AI companies have to monetize what they are doing. And eventually they will figure out that knowing everything about everyone can be pretty lucrative if you leverage it right and ignore or work towards abolishing existing laws that would restrict that malpractice.

      There are a thousand utopian worlds where LLMs knowing a lot about you could actually be a good thing. In none of them does the maker of that AI have the prime goal of extracting as much money as possible to become the next monopolist.

      Sure, the OS is one tiny technical layer users could leverage to retain some level of control. But to say this is the source of the problem is like being in a world filled with arsonists and pointing at minor fire code violations. Sure it would help to fix that, but the problem has its root entirely elsewhere.

    • gruez 28 minutes ago
      >Well designed security models don't sell computers/operating systems, apparently.

      What are you talking about? Both Android and iOS have strong sandboxing, same with macOS and Linux, to an extent.

    • HPsquared 59 minutes ago
      Android servers? They already have ARM servers.
    • bdangubic 50 minutes ago
      > AI is just so much less trustworthy than software written and read by humans, that it is exposing the problem for all to see.

      Whoever thinks/feels this has not seen enough human-written code

    • api 57 minutes ago
      > Well designed security models don't sell computers/operating systems, apparently.

      That's because there's a tension between usability and security, and usability sells. It's possible to engineer security systems that minimize this, but that is extremely hard and requires teams of both UI/UX people and security experts or people with both skill sets.

  • tptacek 1 hour ago
    It's Signal's job to prioritize safety/privacy/security over all other concerns, and the job of an enterprise IT operation to manage risk. Underrated how different those jobs --- security and risk management --- are!

    Most normal people probably wouldn't enjoy working in a shop where Signal owned the risk management function, and IT/dev had to fall in line. But for the work Signal does, their near-absolutist stance makes a lot of sense.

    • pipo234 17 minutes ago
      That's an interesting take, but it sounds like you're downplaying the actual risks of enterprise users running agents on their desktop(?).

      What would you say would be a prudent posture for an IT manager to take to control risk to the organisation?

      • tptacek 16 minutes ago
        Anybody who has ever run an internal pentest knows there's dozens of different ways to game-over an entire enterprise, and decisively resolving all of them in any organization running at scale is intractable. That's why it's called risk management, and not risk eradication.
        • pipo234 2 minutes ago
          Risk management is not my day job, but I'm aware of a cottage industry of enterprise services and appliances to map out, prevent, and mitigate risks. Pentests are part of that, as is keeping up with trends and literature.

          So, on the subject of something like Recall or Copilot, what tools and policies does an IT manager have at their disposal to prevent, let's say, unintentional data exfiltration or data poisoning?

          (Added later:) How do I make those less likely to happen?

  • MarginalGainz 1 hour ago
    This resonates with what I'm seeing in the enterprise adoption layer.

    The pitch for 'Agentic AI' is enticing, but for mid-market operations, predictability is the primary feature, not autonomy. A system that works 90% of the time but hallucinates or leaks data the other 10% isn't an 'agent', it's a liability. We are still in the phase where 'human-in-the-loop' is a feature, not a bug.

    • ygjb 49 minutes ago
      > A system that works 90% of the time but hallucinates or leaks data the other 10% isn't an 'agent', it's a liability.

      That strongly depends on whether or not the liability/risk to the business is internalized or externalized. Businesses take steps to mitigate internal risks while paying lip service to the risks with data and interactions where high risk is externalized. Usually that is done in the form of a waiver in the physical world, but in the digital world it's usually done through a ToS or EULA.

      The big challenge is that the risks of agentic AI in its current incarnation are not well understood by individuals or even large businesses, and most people will happily click through thinking "I trust $vendor" to do the right thing, or "I trust my employer to prevent me from doing the wrong thing."

      Employers are enticed by the siren call of workforce/headcount/cost reductions, and in some businesses/cases are happy to take the risk of a future realized loss from an AI issue that happens after they move on/find a new role/get promoted/transfer responsibility, in exchange for the boost of a good quarterly report.

    • supriyo-biswas 53 minutes ago
      Would be grateful if you could stop with the LLM-generated output; this place is mostly for humans to interact. (There are just too many "it's not an X, but Y" constructions in this comment, and real people don't talk like that.)
    • witnessme 56 minutes ago
      Can't agree more
  • HiPhish 8 minutes ago
    "Hey, you know that thing no one understands how it works and has no guarantee of not going off the rails? Let's give it unrestricted access over everything!" Statements dreamed up by the utterly deranged.

    I can see the value of agentic AI, but only if it has been fenced in, can only delegate actions to deterministic mechanisms, and if every destructive decision has to be confirmed. A good example I once read about was an AI to parse customer requests: if it detects a request that the user is entitled to (e.g. cancel subscription), it sends a message like "Our AI thinks you want to cancel your subscription, is this correct?" and only after confirmation by the user is the action carried out. To be reliable, the AI itself must not determine whether the user is entitled to cancel; it may only guess the user's intention and then pass a message to a non-AI deterministic service. This way users don't have to wait until a human gets around to reading the message.
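
    In code, the pattern is roughly this (a sketch; llm, billing and confirm are placeholders, and the point is that the model only labels intent while a deterministic service checks entitlement and acts):

      from enum import Enum

      class Intent(Enum):
          CANCEL_SUBSCRIPTION = "cancel_subscription"
          UNKNOWN = "unknown"

      def classify(llm, message: str) -> Intent:
          # The model only guesses the intent; it never performs the action.
          label = llm(f"Classify this customer request: {message!r}. "
                      f"Answer with one of {[i.value for i in Intent]}.")
          return Intent(label) if label in {i.value for i in Intent} else Intent.UNKNOWN

      def handle(llm, user, message, confirm, billing):
          if classify(llm, message) is Intent.CANCEL_SUBSCRIPTION:
              # Deterministic entitlement check plus explicit user confirmation.
              if billing.is_entitled_to_cancel(user) and confirm(
                      "Our AI thinks you want to cancel your subscription, is this correct?"):
                  billing.cancel(user)  # the non-AI backend actually carries it out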

    There is still the problem of human psychology though. If you have an AI that's 90% accurate and you have a human confirm each decision, the human's mind will start drifting off and treat 90% as if it's 100%.

  • burnerToBetOut 19 minutes ago
    That article is right on the money for the request I made here yesterday: https://news.ycombinator.com/item?id=46595265
  • nwellinghoff 46 minutes ago
    Are these assumptions wrong? If I 1) execute the AI as an isolated user, 2) behind a whitelist-only inbound and outbound firewall, and 3) on an overlay file mount, am I pretty much good to go, in the sense that it can't do something I don't want it to do?
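
    Concretely I'm picturing something like this (bubblewrap via subprocess; assumes bwrap is installed, and it swaps the overlay for a throwaway tmpfs and drops the network entirely rather than whitelisting):

      import subprocess

      def run_agent_sandboxed(cmd):
          # Read-only root, throwaway writable home, own namespaces, no network.
          subprocess.run([
              "bwrap",
              "--ro-bind", "/", "/",     # whole root filesystem mounted read-only
              "--tmpfs", "/home",        # writes go to a throwaway tmpfs, not my files
              "--dev", "/dev",
              "--proc", "/proc",
              "--unshare-net",           # no network at all (stricter than a whitelist)
              "--die-with-parent",
              *cmd,
          ], check=True)

      # run_agent_sandboxed(["python3", "agent.py"])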

  • suriya-ganesh 1 hour ago
    This is true. But lately the direction of technology has largely been a race to the bottom, marketed as bold bets.

    It has created this dog-eat-dog system of crass negligence everywhere. All the security work on signed tokens and auth systems is meaningless now that we are piping cookies and everything else through AI browsers, which seemingly have an infinite attack surface. Feels like the last 30 years of security research have come to naught.

  • einpoklum 32 minutes ago
    Risk? It's a surveillance certainty.
  • tucnak 1 hour ago
    This is nothing new, really. The recommendation for MCP deployments in all off-the-shelf code editors has been RCE and storing credentials in plaintext from the get-go. I spent months trying to implement a sensible MCP proxy/gateway with sandbox capability at our company, and failed miserably. The issue is on the consumption side, as always. We tried enforcing a strict policy against RCE, but nobody cared for it. Forget prompt injection; it seems nobody takes zero trust seriously. This includes huge companies with dedicated, well-staffed security teams... Policy-making is hard, and maintaining the ever-growing set of rules is even harder. AI provides an incredible opportunity for implementing and auditing granular RBAC/ReBAC policies, but I have yet to see a company actually leverage it to that end.

    On a different note: we saw Microsoft seemingly "commit to zero trust"; in reality, however, their systems allowed dangling long-lived tokens in production, which resulted in compromise by state actors. The only FAANG company to take zero trust seriously is Google, and they get flak for permission granularity all the time. This is a much larger tragedy, and AI vulnerabilities are only the cherry on top.

  • falloutx 0 minutes ago
    We need to give AI agents full access to our computers so they can cure cancer, I don't see why that's hard to understand. /s
  • antibull 1 hour ago
    [dead]
  • apercu 1 hour ago
    A large percentage of my work is peripheral to info security (ISO 27001, CMMC, SOC 2), and I've been building internet companies and software since the 90's (so I have a technical background as well), which makes me think that I'm qualified to have an opinion here.

    And I completely agree that LLMs (the way they have been rolled out for most companies, and how I've witnessed them being used) are an incredibly underestimated risk vector.

    But on the other hand, I'm pragmatic (some might say cynical?), and I'm just left here thinking "what is Signal trying to sell us?"

    • jsheard 1 hour ago
      > But on the other hand, I'm pragmatic (some might say cynical?), and I'm just left here thinking "what is Signal trying to sell us?"

      A messaging app? I'm struggling to come up with a potential conflict of interest here unless they have a wild pivot coming up.

      • apercu 1 hour ago
        I didn't mean to imply a conflict of interest, I'm wondering what product or service offering (or maybe feature on their messaging app) prompted this.

        No other major tech leaders are saying the quiet parts out loud, right? About the efficacy, the cost to build and operate, or the security and privacy nightmares created by the way we have adopted LLMs.

        • contact9879 1 hour ago
          Whittaker’s background is in AI research. She talks a lot (and has been for a while) about the privacy implications of AI.

          I’m not sure there is any one thing that could be considered to have prompted it. But a large one is the wide deployment of models on devices with access to private information (Signal potentially included).

        • kuerbel 1 hour ago
          Maybe it's not about gaining something, but rather about not losing anything. Signal seems to operate from a kind of activism mindset, prioritizing privacy, security, and ethical responsibility, right? By warning about agentic AI, they’re not necessarily seeking a direct benefit. Or maybe the benefit is appearing more genuine and principled, which is what attracted their userbase in the first place.
          • jofla_net 49 minutes ago
            Exactly. If the masses cease to have "computers" anymore (deterministic boxes solely under the user's control), then it matters little how bulletproof Signal's ratchet protocol is, sadly.
    • JoshTriplett 1 hour ago
      Signal is conveying a message of wanting to be able to trust your technology/tools to work for you and work reliably. This is a completely reasonable message, and it's the best kind of healthy messaging: "apply this objectively good standard, and you will find that you want to use tools like ours".
    • fnwbr 1 hour ago
      Given that this was a talk held at #39c3, maybe, just maybe, this was not about selling anything at all?!

      I feel like that might be hard to grasp for some HN users.

      • pferde 1 hour ago
        Since Signal lives and dies on having the trust of its users, maybe that's all she is after?

        Saying the quiet thing out loud because she can, and feels like she should, as someone with big audience. She doesn't have to do the whole "AI for everything and kitchen sink!" cargo-culting to keep stock prices up or any of that nonsense.

    • usefulposter 1 hour ago
      >what is Signal trying to sell us?

      This: https://arstechnica.com/security/2026/01/signal-creator-moxi...

      Great timing! :^)

    • EA-3167 1 hour ago
      I'd argue that Signal is trying to sell sanity at their own direct expense, during a time when sanity is in short supply. Just like "Proof of Work" wasn't going to be the BIG THING that made Crypto the new money, the new way to program, 'Agents' are another damp squib. I'm not claiming that they're useless, but they aren't worth the cost within orders of magnitude.

      I'm really getting tired of people who insist on living in a future fantasy version of a technology at a time when there's no real significant evidence that their future is going to be realized. In essence this "I'll pay the costs now for the promise of a limitless future" is becoming a way to do terrible things without an awareness of the damage being done.

      It's not hard: any "agent" that you need to double-check constantly to keep it from doing something profoundly stupid that you would never do isn't going to fulfill the dream/nightmare of automating your work. It will certainly not be worth the trillions already sunk into its development and the cost of running it.

  • z3ratul163071 1 hour ago
    "Signal creator Moxie Marlinspike wants to do for AI what he did for messaging " what, turn it over to CIA and NSA?