Ask HN: How are LLMs supposed to be used for warfare?

I recently asked the same question in an HN thread, which was mysteriously downvoted. The question remains: there is a lot of talk between Anthropic and the DOW about adopting LLM technology for warfare. Specifically, for "fully autonomous weapons and mass domestic surveillance". Does anyone understand how these two goals could be achieved? LLMs don't seem to me like the right tool for either. Autonomous weapons would require a much faster, more reliable, and more deterministic AI. LLMs might be a better fit for mass surveillance, but I am not sure how they would cope with the massive amount of data and the limited context window (unless the data itself is used for training). RAG might only mitigate the problem. Does anyone have ideas?
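To make the "RAG might only mitigate the problem" point concrete: retrieval never puts the whole corpus in front of the model, it only selects the top-k passages most similar to a query and feeds those into the limited context window, so anything the retriever misses is invisible to the LLM. A minimal sketch of that selection step, using a stdlib-only TF-IDF scorer as a stand-in for a real vector store (the corpus and function names here are hypothetical, purely for illustration):

```python
from collections import Counter
import math

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of token lists."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))                # document frequency per term
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log((n + 1) / (df[t] + 1)) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(corpus, query, k=2):
    """Return the top-k documents most similar to the query --
    only these would ever reach the LLM's context window."""
    docs = [d.lower().split() for d in corpus]
    vecs = tfidf_vectors(docs)
    qvec = dict(Counter(query.lower().split()))   # raw counts for the query
    scored = sorted(zip(vecs, corpus),
                    key=lambda pair: cosine(qvec, pair[0]),
                    reverse=True)
    return [doc for _, doc in scored[:k]]
```

However large the corpus grows, only `k` snippets make it into the prompt; the ceiling is the retriever's recall, not the model's context length.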

5 points | by sirnicolaz 1 day ago

7 comments

  • JPLeRouzic 1 day ago
    I once saw a public request from a US administration for interested providers to create an LLM able to translate video meetings in Farsi, summarize them, and propose follow-up actions.

    There were training corpora in English, Hebrew, and Farsi. I guess that means the US had access to Iranian communication channels. I wondered how it could be possible to rely on an LLM to conduct international relations. It seems like a new low in stupidity.

    The request was made on Innocentive (an open innovation website).

    https://cttso.community.innocentive.com/challenge/487ad0cf48...

  • tornikeo 1 day ago
    IMO the one likely use case I see, aside from surveillance, is fully autonomous aerial drones actively planning how to act deep in enemy territory, e.g. prioritizing targets, collaborating with other drones, making decisions. Think of an LLM acting as a kamikaze pilot.
  • Someone 17 hours ago
    The LLM could be separate from the targeting software. “Here are a few camera feeds. Whenever you detect enemy combatants, keep tracking their location and generate a request to target them to automated system X”

    And yes, there could be problems with the implementation. In particular, the chance of many false positives is one reason many people are opposed to such tech.

  • not_your_vase 1 day ago

        > Autonomous weapons would require a much faster and much more reliable and deterministic AI. 
    
    I think this is only true when the bots are on the home field and you don't want to kill your own. When you are on the other side, you just want to shoot indiscriminately at everything that moves and monitor your surroundings to protect yourself. For this, today's LLMs seem to be more than enough. And since there was no human intervention in shooting everyone, it's not even a war crime.
    • farseer 22 hours ago
      To add to that, you can always make your drone/bot loiter for a while before getting a fix.
  • k310 1 day ago
    Target schools, blame vendor instead of operators.

    I fully expect this.

    This is based on industry experience, where vendors were hired (and paid well) precisely so that there would be, and I quote, "a throat to choke" when needed.

  • bjourne 11 hours ago
    > there is a lot of talk between Anthropic and the DOW about adopting LLM technology for warfare

    Please cite that talk. Fully autonomous weapons and mass domestic surveillance are not the same as "adopting LLM technology". Please be precise.