20 comments

  • zurfer 16 hours ago
    It makes me wonder if we'll see an explosion of purpose-trained LLMs because we've hit diminishing returns on investment in pre-training, or if it takes a couple of months to fold these advantages back into the frontier models.

    Given the size of frontier models I would assume that they can incorporate many specializations and the most lasting thing here is the training environment.

    But there is probably already some tradeoff, as GPT 3.5 was awesome at chess and current models don't seem trained extensively on chess anymore.

    • criemen 9 hours ago
      > or if it takes a couple of months to fold these advantages back into the frontier models.

      Right now, I believe we're seeing that the big general-purpose models outperform approximately everything else. Special-purpose models (essentially: fine-tunes) of smaller models make sense when you want to solve a specific task at lower cost/lower latency, and you transfer some/most of the abilities in that domain from a bigger model to a smaller one. Usually, people don't do that, because it's quite a costly process, and the frontier models develop so rapidly that you're perpetually behind them (so in fact, you're not providing the best possible abilities).

      If/when frontier model development speed slows down, training smaller models will make more sense.

      • nextos 4 hours ago
        The advantage of small purpose-specific models is that they might be much more robust, i.e., unlikely to generate wrong sequences for your particular domain. That is at least my experience working on this topic during 2025. And, obviously, smaller models mean you can deploy them on cheaper hardware, latency is reduced, energy consumption is lower, etc. In some domains like robotics, these two advantages might be very compelling, but it's obviously early to draw any long-term conclusions.
        • larodi 35 minutes ago
          I second this. Smaller models indeed may be much better positioned for fine-tuning for the very reason you point out - less noise to begin with.
      • barrell 2 hours ago
        > If/when frontier model development speed slows down

        You do not believe that this has already started? It seems to me that we’re well into a massive slowdown

      • fragmede 6 hours ago
        Right, the Costco problem. A small boutique, e.g. a wine store, might be able to do better at picking a very specific wine for a specific occasion, but Costco is just so much bigger that they can make it up in volume and buy cases and cases of everything with a lower markup, so it ends up being cheaper to shop at Costco, no matter how much you want to support the local wine boutique.
    • Imustaskforhelp 4 hours ago
      > But there is probably already some tradeoff, as GPT 3.5 was awesome at chess and current models don't seem trained extensively on chess anymore.

      Wow, I am so curious, can you provide a source?

      I am very interested in a chess benchmark for LLMs, as someone who occasionally plays chess. I have thought about creating things like this, but it would be very interesting to find the best model at chess that isn't Stockfish or Leela but a general-purpose large language model.

      I also agree that there might be an explosion of purpose-trained LLMs. I had this idea a year or so ago, when there was Llama and before DeepSeek: what if I want to write SvelteKit, and there are models like DeepSeek which know about SvelteKit, but they are so damn big and bloated when I only want a SvelteKit/Svelte model? Yes, there are arguments for why we might need the whole network to get better quality, but I genuinely feel like right now the better quality is debatable, thanks to all this benchmarkmaxxing. I would happily take a model trained on SvelteKit at preferably 4B-8B parameters, but if an extremely good SOTA-ish model for SvelteKit were even around 30-40B, I would be happy, since I could buy a GPU for my PC to run it or run it on my Mac.

      I think my brother, who actually knows what he's talking about in the AI space (unlike me), said the same thing to me a few months back as well.

      In fact, it's funny: I had asked him to create a website comparing benchmarks of AIs playing chess, with an option to make two LLMs play against each other while we watch, or to play against an LLM ourselves on an actual chess board on the web, and more. I gave him this idea a few months ago, right after the talk about small LLMs. He said it was good but that he was busy at the time. I think he later forgot about it, and I had forgotten about it too until now.

    • deepanwadhwa 14 hours ago
      > GPT 3.5 was awesome at chess

      I don't agree with this. I did try to play chess with GPT-3.5 and it was horrible, full of hallucinations.
      • miki123211 14 hours ago
        It was GPT-3 I think.

        As far as I remember, it's post-training that kills chess ability for some reason (GPT-3 wasn't post-trained).

        • Imustaskforhelp 4 hours ago
          This is so interesting. I am curious as to why; can you (or anyone) please share any resources or insightful comments about it? They would really help a ton out here, thanks!
          • pixelmelt 38 minutes ago
            GPT-3 was trained on completion data, so it likely saw lots of raw chess games laid out in whatever standard format moves are listed in, while 3.5 was post-trained on instruct data (talking back and forth), which would have needed to explicitly include those chess games as conversational training data for it to retain as much as it would have otherwise.
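
            Roughly, the two data formats look something like this (made-up toy records, just to illustrate the difference, not anyone's actual dataset):

              # Completion-style pretraining data: raw PGN move lists the model
              # simply learns to continue.
              completion_example = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Be7"

              # Instruct-style post-training data: chess only survives if someone
              # deliberately writes it into conversational records like this.
              chat_example = {
                  "messages": [
                      {"role": "user", "content": "I opened 1. e4 e5 2. Nf3. What's a solid reply for Black?"},
                      {"role": "assistant", "content": "2...Nc6 is the most common move, developing a knight and defending e5."},
                  ]
              }
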
    • onlyrealcuzzo 12 hours ago
      Isn't the whole point of the MOE architecture exactly this?

      That you can individually train and improve smaller segments as necessary

      • ainch 11 hours ago
        Generally you train each expert simultaneously. The benefit of MoEs is that you get cheap inference because you only use the active expert parameters, which constitute a small fraction of the total parameter count. For example Deepseek R1 (which is especially sparse) only uses 1/18th of the total parameters per-query.
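
        As a toy illustration of how top-k routing touches only a few experts per token (all sizes below are made up, not DeepSeek's real config):

          import numpy as np

          # Toy sparse-MoE layer: a router scores every expert for a token,
          # but only the top-k experts actually run.
          n_experts, top_k, d = 64, 4, 512
          router_w = np.random.randn(d, n_experts)
          experts = [np.random.randn(d, d) for _ in range(n_experts)]

          def moe_forward(token):
              scores = (token @ router_w) / np.sqrt(d)   # one score per expert
              chosen = np.argsort(scores)[-top_k:]       # indices of the top-k experts
              weights = np.exp(scores[chosen])
              weights /= weights.sum()                   # softmax over the chosen experts
              # Only top_k / n_experts of the expert weights are touched for this token.
              return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

          out = moe_forward(np.random.randn(d))
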
        • pama 6 hours ago
          > only uses 1/18th of the total parameters per-query.

          only uses 1/18th of the total parameters per token. It may use the large fraction of them in a single query.

      • idiotsecant 11 hours ago
        I think it's the exact opposite - you don't specifically train each 'expert' to be a SME at something. Each of the experts is a generalist but becomes better at portions of tasks in a distributed way. There is no 'best baker', but things evolve toward 'best applier of flour', 'best kneader', etc. I think explicitly domain-trained experts are pretty uncommon in modern schemes.
        • viraptor 11 hours ago
          That's not entirely correct. Most MoE models right now are fully balanced, but there is an idea of a domain-expert MoE where the training benefits from fewer switches. https://arxiv.org/abs/2410.07490
    • AmbroseBierce 6 hours ago
      It reminds me of a story I read somewhere: some guy high on drugs climbed to the top of some elevated campus floodlights, shouting things about being a moth and loving lights. The security guys tried telling him to come down, but he paid no attention, and time went on until a janitor came and shut off the lights, then turned on one of those high-powered handheld ones and pointed it at him, and the guy quickly climbed down.

      So yeah, I think there are different levels of thinking. Maybe future models will have some sort of internal models once they recognize patterns at some level of thinking. I'm not that knowledgeable about the internal workings of LLMs, so maybe this is all nonsense.

    • alephnerd 14 hours ago
      > if we'll see an explosion of purpose trained LLMs...

      Domain-specific models have been on the roadmap for most companies for years now, from both a competitive (why give up your moat to OpenAI or Anthropic) and a financial (why finance OpenAI's margins) perspective.

  • sumo43 13 hours ago
    I made a 4B Qwen3 distill of this model (and a synthetic dataset created with it) a while back. Both can be found here: https://huggingface.co/flashresearch
    • Imustaskforhelp 4 hours ago
      Can you please create a Hugging Face space or something similar? I am not sure about the current state of Hugging Face, but I would love to be able to try it out in a browser if possible, as I am really curious. I just love Qwen3 4B; it was one of the models that worked even on my Intel integrated graphics at a really impressive rate, and it was really nice the last time I tried it, but this looks even cooler and more practical.

      I once had an idea of using something like Qwen or some other small pre-trained model just to make a to-censor-or-not-to decision, after the Mecha-Hitler incidents. I thought that if there were some extremely cheap model which could detect harmful output that Grok's own models couldn't recognize, it would have been able to prevent the complete advertising disaster that happened.

      What are your thoughts on it? I would love to see a Qwen 4B version of something similar, if you or anyone is up to the challenge, or any small LLMs in general. I just want to know if this idea fundamentally makes sense or not.

      Another idea was to use it for routing purposes, similar to what ChatGPT does, but I am not sure about that now. I still think it may be worth it. I had this routing idea before ChatGPT implemented it, so now that it has, we are going to get some more data/insights about whether it's good or worth it, so that's nice.

      • bigyabai 1 hour ago
        > What are your thoughts on it?

        You don't really need an entire LLM to do this - lightweight encoder models like BERT are great at sentiment analysis. You feed it an arbitrary string of text, and it just returns a confidence value from 0.0 to 1.0 that it matches the characteristics you're looking for.
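
        For example, with the Hugging Face transformers library it's only a few lines (the model name here is just an illustrative off-the-shelf sentiment classifier, not a moderation-specific recommendation):

          from transformers import pipeline

          # Any small BERT-style classifier fine-tuned for the attribute you care
          # about (toxicity, sentiment, etc.) plugs in the same way.
          classifier = pipeline(
              "text-classification",
              model="distilbert-base-uncased-finetuned-sst-2-english",
          )

          result = classifier("some user-generated text to screen")[0]
          print(result["label"], result["score"])   # e.g. NEGATIVE 0.98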

    • Nymbo 9 hours ago
      Just tried this out with my web search mcp, extremely impressed with it. Never seen deep research this good from a model so small.
  • tbruckner 13 hours ago
    Has anyone found these deep research tools useful? In my experience, they generate really bland reports that don't go much further than a summarization of what a search engine would return.
    • andy99 10 hours ago
      My experience is the same as yours. It feels to me (similar to most LLM writing) like they write for someone who’s not going to read it or use it but is going to glance at it and judge the quality that way and assume it’s good.

      Not too different from a lot of consulting reports, in fact, and pretty much of no value if you're actually trying to learn something.

      Edit to add: even the name “deep research” to me feels like something defined to appeal to people who have never actually done or consumed research, sort of like the whole “phd level” thing.

      • tbruckner 8 hours ago
        "they write for someone who’s not going to read it" is a great way to phrase it.
    • ainch 11 hours ago
      The reports are definitely bland, but I find them very helpful for discovering sources. For example, if I'm trying to ask an academic question like "has X been done before," sending something to scour the internet and find me examples to dig into is really helpful - especially since LLMs have some base knowledge which can help with finding the right search terms. It's not doing all the thinking, but those kind of broad overviews are quite helpful, especially since they can just run in the background.
      • kmarc 10 hours ago
        I've caught myself realizing that most of my LLM usage is like this:

        ask a loaded "filter question" I more or less know the answer to, then mostly skip the prose and go straight to the links to its sources.

    • TACIXAT 2 hours ago
      I have used Gemini's 2.5 Pro deep research probably about 10 times. I love it. Most recently was reviewing PhD programs in my area then deep diving into faculty research areas.
    • blaesus 10 hours ago
      "Summarization of what a search engine would return" is good enough for many of my purposes though. Good for breaking into new grounds, finding unknown unknowns, brainstorming etc.
    • criemen 8 hours ago
      I tend to use them when I'm looking to buy something of category X, and want to get a market overview. I can then still dig in and decide whether I consider the sources used trustworthy or not, and before committing money, I'll read some reviews myself, too. Still, it's a speedup for me.
      • edot 6 hours ago
        Yes, this is one of my primary use cases for deep research right now. It will become garbage in a few short years once OpenAI starts selling influence / ads. I think they’ve started a bit with doing this but the recommendations still seem mostly “correct”. My prior way of doing this was Googling with site:Reddit.com for real reviews and not SEO spam reviewers.
    • alasr 6 hours ago
      I hadn't used any LLM deep research tools in the past; today, after reading this HN post, I gave Tongyi DeepResearch a try to see how it performs on a simple "research" task (in an area I have working experience in: healthcare and EHR), and I'm satisfied with its response (for the given task; I obviously can't say anything about how it'll perform on other "research" tasks I ask it in the future). I think I'll keep using this model for tasks for which I was using other local LLM models before.

      Besides I might give other large deep research models a try when needed.

  • rokob 16 hours ago
    This whole series of work is quite cool. The use of `word-break: break-word;` makes this really hard to read though.
    • soared 15 hours ago
      I actually can’t read it for some reason? My brain just can’t connect the words
      • don-bright 14 hours ago
        So it appears the entire text has been translated with non-breaking space unicode x00a0 instead of normal spaces x0020, so the web layout is treating all paragraph text as a super-long single word ('the\00a0quick\00a0brown\00a0fox' instead of 'the quick brown fox'). The non-breaking space character renders identically to a breaking space, but the underlying coding breaks the concept of "break at end of word", because there is no end: 00a0 literally means "non-breaking". Per Copilot spending half an hour explaining this to me, apparently this can be fixed by opening the web browser's developer view and copy/pasting this code into the console.

        function replaceInTextNodes(node) {
          if (node.nodeType === Node.TEXT_NODE) {
            // Replace non-breaking spaces (U+00A0) with ordinary spaces
            node.nodeValue = node.nodeValue.replace(/\u00A0/g, ' ');
          } else {
            // Recurse into child nodes
            node.childNodes.forEach(replaceInTextNodes);
          }
        }

        replaceInTextNodes(document.body);

        • nl 7 hours ago
          This is completely fascinating although puzzling how that happens.

          The script is great!

      • dlisboa 11 hours ago
        That’s why typography matters. You can’t read it because a very basic convention has been broken here and that throws everything off.
  • aliljet 16 hours ago
    Sunday morning, and I find myself wondering how the engineering tinkerer is supposed to best self-host these models? I'd love to load this up on the old 2080ti with 128gb of vram and play, even slowly. I'm curious what the current recommendation on that path looks like.

    Constraints are the fun part here. I know this isn't the 8x Blackwell Lamborghini, that's the point. :)

    • giobox 15 hours ago
      If you just want to get something running locally as fast as possible to play with (the 2080ti typically had 11gb of VRAM which will be one of the main limiting factors), the ollama app will run most of these models locally with minimum user effort:

      https://ollama.com/

      If you really do have a 2080ti with 128gb of VRAM, we'd love to hear more about how you did it!

    • jlokier 14 hours ago
      I use a Macbook Pro with 128GB RAM "unified memory" that's available to both CPU and GPU.

      It's slower than a rented Nvidia GPU, but usable for all the models I've tried (even gpt-oss-120b), and works well in a coffee shop on battery and with no internet connection.

      I use Ollama to run the models, so can't run the latest until they are ported to the Ollama library. But I don't have much time for tinkering anyway, so I don't mind the publishing delay.

      • anon373839 11 hours ago
        I’d strongly advise ditching Ollama for LM Studio, and using MLX versions of the models. They run quite a bit faster on Apple Silicon. Also, LM Studio is much more polished and feature rich than Ollama.
        • terhechte 10 hours ago
          Fully agree with this. LM Studio is much nicer to use, and with MLX it's faster on Apple Silicon.
      • MaxMatti 12 hours ago
        How's the battery holding up during vibe coding sessions or occasional LLM usage? I've been thinking about getting a MacBook or a laptop with a similar Ryzen chip specifically for that reason.
    • btbuildem 14 hours ago
      I've recently put together a setup that seemed reasonable for my limited budget. Mind you, most of the components were second-hand, open box deals, or deep discount of the moment.

      This comfortably fits FP8 quantized 30B models that seem to be "top of the line for hobbyists" grade across the board.

      - Ryzen 9 9950X

      - MSI MPG X670E Carbon

      - 96GB RAM

      - 2x RTX 3090 (24GB VRAM each)

      - 1600W PSU

      • nine_k 12 hours ago
        Does it offer more performance than a Macbook Pro that could be had for a comparable sum? Your build can be had for under $3k; a used MBP M3 with 64 GB RAM can be had for approximately $3.5k.
        • btbuildem 11 hours ago
          I'm not sure, I did not run any benchmarks. As a ballpark figure -- with both cards throttled down to 250W, running a Qwen-30B FP8 model (variant depending on task), I get upwards of 60 tok/sec. It feels on par with the premium models, tbh.

          Of course this is in a single-user environment, with vLLM keeping the model warm.

        • bee_rider 6 hours ago
          MacBooks have some clever chips, but 2x 3090 is a lot of brawn to overcome.
      • PeterStuer 10 hours ago
        Unfortunately the RTX 3090 has no native FP8 support.
      • pstuart 13 hours ago
        That's basically what I imagined would be my rig if I were to pull the trigger. Do you have an NVLink adapter as well?
        • btbuildem 11 hours ago
          No NVLink; it took me a long time to compose the exact hardware specs, because I wanted to optimize performance. Both cards are on x8 PCIe direct CPU channels, close to their max throughput anyway. It runs hot with the CPU engaged, but it runs fast.
    • jwr 13 hours ago
      I just use my laptop. A modern MacBook Pro will run ~30B models very well. I normally stick to "Max" CPUs (initially for more performance cores, recently also for the GPU power) with 64GB of RAM. My next update will probably be to 128GB of RAM, because 64GB doesn't quite cut it if you want to run large Docker containers and LLMs.
    • sigmarule 2 hours ago
      The Framework Desktop runs this perfectly well, and for just about $2k.
    • homarp 16 hours ago
      llama.cpp gives you the most control to tune it for your machine.
    • CuriousSkeptic 15 hours ago
      I'm sure this guy has some helpful hints on that: https://youtube.com/@azisk
    • sumo43 13 hours ago
    • aliljet 5 hours ago
      Oh my god. 128 GB of RAM! Way too late to repair this thread, but most people caught this.
    • 3abiton 9 hours ago
      As many pointed out, Macs are decent enough to run them (with maxed-out RAM). You also have more alternatives, like the DGX Spark (if you appreciate the ease of CUDA, albeit with a tad slower token generation performance) or the Strix Halo (good luck with ROCm though, AMD is still peddling hype). There is no straightforward "cheap" answer. You either go big (GPU server) or compromise. Either way, use vLLM, SGLang, or llama.cpp. Ollama is just inferior in every way to llama.cpp.
    • exe34 15 hours ago
      llama.cpp + quantized: https://huggingface.co/bartowski/Alibaba-NLP_Tongyi-DeepRese...

      get the biggest one that will fit in your vram.
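
      If you'd rather drive it from Python than the llama.cpp CLI, a minimal sketch with llama-cpp-python looks roughly like this (the GGUF filename is a placeholder for whichever quant you download):

        from llama_cpp import Llama

        # Load a quantized GGUF; n_gpu_layers=-1 offloads as many layers as will fit in VRAM.
        llm = Llama(
            model_path="tongyi-deepresearch-30b-a3b-q4_k_m.gguf",  # placeholder filename
            n_gpu_layers=-1,
            n_ctx=8192,
        )

        out = llm("Summarize the trade-offs of MoE models for local inference.", max_tokens=256)
        print(out["choices"][0]["text"])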

      • trebligdivad 10 hours ago
        How do people deal with all the different quantisations? Generally if I see an Unsloth I'm happy to try it locally; random other people's... how do I know what I'm getting?

        (If nothing else Tongyi are currently winning AI with cutest logo)

        • exe34 8 hours ago
          personally I've only used them for toying around - but in all cases you have to test them for your use case anyway.
      • davidsainez 12 hours ago
        This is the way. I managed to run (super) tiny models on CPU only with this approach.
  • theflyestpilot 16 hours ago
    I hope the translation for this is actually "Agree" Deep research. Just a dig at "You are absolutely right!" sycophancy.
    • numpad0 15 hours ago
      TIL the "full" name of Alibaba Qwen is 通義千問(romanized as "Tongyi Qianwen", something along "knows all thousand questions"), of which the first half without the Chinese accent flags is romanized identically to "同意", meaning "same intents" or "agreed".

      The Chinese version of the link says "通义 DeepResearch" in the title, so it doesn't look like the "agree" reading is the case. Completely agreed that it would be hilarious.

      1: https://www.alibabacloud.com/en/solutions/generative-ai/qwen...

      • rahimnathwani 14 hours ago
        For people who don't read Chinese: the two 'yi' characters numpad0 mentioned (义 and 義) are the same, but written in different variants of Chinese script (Simplified/Traditional).
  • VladVladikoff 5 hours ago
    Recently I gave a list of 300 links to deep research and asked it to go through each one to analyze a certain question about them. Repeatedly it would take shortcuts and not actually do the full list. Is this caused by context window limits? Or maybe OpenAI limits request size? Is it possible to not run into these types of limits with locally hosted models?
    • oofbey 2 hours ago
      I’ve also had extremely poor luck getting any LLM agent to go through a long list of repetitive tasks. Don’t know why. I’d guess it’s because they’re trained for transactional responses, and thus are horrible at anything repetitive.
  • embedding-shape 17 hours ago
    Isn't OpenAI "Deep research" (not "DeepResearch") a methodology/tooling thing, where you'll get different responses depending on which specific model you use with it? As far as the UI allows, you could use Deep research with GPT-5, GPT-4o, o3 and so on, and that will have an impact on the responses. Skimming the paper and searching for some simple terms makes it seem like they never expand on which exact models they've used, just that they've used a specific feature from ChatGPT?
    • simonw 15 hours ago
      At this point "deep research" is more of a pattern - OpenAI and Perplexity and Google Gemini all offer products with that name which work essentially the same way, and Anthropic and Grok have similar products with a slightly different name attached.

      The pattern is effectively long-running research tasks that drive a search tool. You give them a prompt, they churn away for 5-10 minutes running searches and they output a report (with "citations") at the end.

      This Tongyi model has been fine-tuned to be really good at using its search tool in a loop to produce a report.
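
      Stripped of vendor specifics, the loop is roughly this (every name below is a hypothetical placeholder, not any provider's actual API):

        # Sketch of the "deep research" pattern: ask the model for the next
        # search, run it, accumulate notes, and finish with a cited report.
        def deep_research(question, next_action, run_search, write_report, max_steps=20):
            notes = []
            for _ in range(max_steps):
                action = next_action(question, notes)    # e.g. {"kind": "search", "query": "..."}
                if action["kind"] == "finish":
                    break
                notes.append((action["query"], run_search(action["query"])))
            return write_report(question, notes)         # report with citations at the end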

      • embedding-shape 14 hours ago
        Yes, but I think my previous point still matters, namely that the exact model being used greatly affects the results.

        So without specifying which model is being used, it's really hard to know what is better than something else, because we don't understand what the underlying model is, and if it's better because of the model itself, or the tooling, which feels like an important distinction.

  • jychang 17 hours ago
    This is over a month old, they released the weights a long time ago.
    • jwr 13 hours ago
      That's OK — not all of us follow all the progress on a daily basis, and a model that is a month old doesn't become useless just by being a month old!
    • earthnail 17 hours ago
      And for those not so tightly in the loop: how does it compare?
  • mehdibl 16 hours ago
    It's a Qwen 3 MoE fine tune...
  • brutus1213 13 hours ago
    I recently got a 5090 with 64 GB of RAM (intel cpu). Was just looking for a strong model I can host locally. If I had performance of GPT4-o, I'd be content. Are there any suggestions or cases where people got disappointed?
    • bogtog 12 hours ago
      GPT-OSS-20B at 4- or 8-bit is probably your best bet. Qwen3-30B-A3B is probably the next best option. Maybe there exists some 1.7- or 2-bit version of GPT-OSS-120B.
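
      Rough back-of-envelope for the weights alone (this ignores KV cache and runtime overhead, so treat it as a lower bound):

        def weight_gb(params_billion, bits):
            # bytes = params * bits / 8; dividing by 1e9 gives GB, which cancels the 1e9 params factor
            return params_billion * bits / 8

        print(weight_gb(20, 4))    # ~10 GB: a 20B model at 4-bit fits a 32 GB card comfortably
        print(weight_gb(120, 2))   # ~30 GB: a 120B model even at 2-bit is right at the 32 GB limit
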
    • p1esk 13 hours ago
      The 5090 has 32GB of VRAM. Not sure if that’s enough to fit this model.
  • ugh123 5 hours ago
    Slightly off topic but why does word wrapping seem to be broken in this site? Chrome on Android
    • rippeltippel 2 hours ago
      Thank you for pointing that out, I was about to ask the same. It's giving my OCD a hard time reading it.
  • whiplash451 11 hours ago
    Has anyone tried running this on a 5090 or 6000 pro? What throughput do you see?
  • krystofee 13 hours ago
    Isn't it a huge deal that this 30B model can match and surpass huge closed models?
  • blueboo 3 hours ago
    When was the last time you did a deep research? Good agents just do research as necessary. I find GPT5 Pro >> all the top DR agents
  • Traubenfuchs 14 hours ago
    It still feels to me like OpenAI has zero moat. There are like 5 paid competitors + open source models.

    I switch between Gemini and ChatGPT whenever I feel one fails to fully grasp what I want, and I do coding in Claude.

    How are they supposed to become the 1 trillion dollar company they want to be, with strong competition and open source disruptions every few months?

    • nickpinkston 13 hours ago
      Yea, I agree.

      Arguably LLMs are both (1) far easier to switch between models than it is today to switch from AWS / GCP / Azure systems, and (2) will be rapidly decreasing switching costs for your legacy systems to port to new ones - ie Oracle's, etc. whole business model.

      Meanwhile, the whole world is building more chip fabs, data centers, AI software/hardware architectures, etc.

      Feels more like we're headed to commodification of the compute layer more than a few giant AI monopolies.

      And if true, that's actually even more exciting for our industry and "letting 100 flowers bloom".

    • Grimblewald 9 hours ago
      Of course they don't. The only advantage they ever had was the willingness to destroy trust on the internet by scraping everything from everyone, rules and expectations be damned.

      The underlying architecture isn't special, and the underlying skills and tools aren't special.

      There is nothing OpenAI brings to the table other than a willingness to lie, cheat, and steal. That only gives you an edge for so long.

    • red2awn 9 hours ago
      The moat of OpenAI is 1. the internal knowledge they've built over the last few years building frontier models, 2. their talent, and 3. the ChatGPT brand (go ask a random person on the street; they know ChatGPT but not Claude or Gemini).
    • whiplash451 11 hours ago
      Isn’t the moat in the product/UI/UX? I use Claude daily and love the “scratch notebook” feel of it. The barebone model does not get you any of this.
      • hamandcheese 11 hours ago
        I agree that the scaffolding around the model contributes greatly to the experience. But it doesn't take billions of dollars in GPUs to do that part.
    • rokob 14 hours ago
      I don’t know if they can pull it off but a lot of companies are built on strong enterprise sales being able to sell free stuff with a bow on it to someone who doesn’t know better or doesn’t care.
    • isoprophlex 14 hours ago
      Premium grade deals with Oracle. They will bullshit their way into government and enterprise environments where all the key decision makers are clueless and/or easily manipulated.
  • DataDaemon 13 hours ago
    Unfortunately, China will soon take the lead in AI.
    • davidsainez 12 hours ago
      I have been very impressed with the Qwen3 series. I'm still evaluating them, and I generally take LLM benchmarks with a huge grain of salt, but their MoE models in particular seem to offer a lot of bang for the compute. But what makes you so sure they will take the lead?
    • ninetyninenine 12 hours ago
      Isn't this an indication they are already in the lead? They currently have the best model that beats everyone on all quantitative metrics? Are you implying that the US has a better model somewhere?
      • mike_hearn 6 hours ago
        They aren't in the lead. They are very close behind, but that's not hard given the quantity of freely published papers. They keep proving they can train models competitive with US models, but, only months after the fact. And at least some of the Chinese models were trained via distillation from US models. Probably not at Alibaba but it seems at least some models were.
    • aeve890 13 hours ago
      Unfortunately? May I ask why? What country would you like to be the lead in AI?
      • ninetyninenine 12 hours ago
        The USA of course. Isn't it obvious? What other country is more Free and great? None. Why does this even need to be asked?

        China is full of people who want communism to dominate the world with totalitarian control so no one wants China to dominate anything at all because they are bad...

        • victorbjorklund 10 hours ago
          USA is threatening to invade Europe so not sure it can be considered great.
        • Krasnol 10 hours ago
          The USA is being led by a criminal pedo at the moment. There is military in the streets, and SA-like masked thugs are kidnapping people. Billionaires sit behind the wheel to profit from all these developments. Many of them are somehow related to AI. You can imagine what that will be/is being used for (see Palantir).

          The whole country is going down the drain right now. There is nothing about it that sane people outside the Republican bubble would consider "freedom".

          • GordonS 7 hours ago
            I rather think the GP was being sarcastic. At least, I hope they were.
  • steveny3456 14 hours ago
    Juju
  • ninetyninenine 12 hours ago
    Is China dominating the US in terms of AI? Given that they currently have a model that beats the best models at all formal quantitative benchmarks?

    What is the state of AI in China? My personal feeling is that it doesn't dominate the zeitgeist in China as it does in the US, and despite this, because of the massive amount of intellectual capital they have, just a small portion of their software engineering talent working on this is enough to go head to head with us, even though it takes only a fraction of their attention.

    • seanmcdirmid 3 hours ago
      America continues to dominate in the amount of money spent on AI resources, but China has more value in the human and hardware resources it brings to bear.

      China is also more willing to deploy AI apps that Americans would hesitate on, although I'm not sure I've seen much of it so far outside of Shenzhen cyberpunk clips. Let's see how this plays out in a decade.

    • idiotsecant 11 hours ago
      I think the lesson of the Chinese catchup in AI is that there is a massive disadvantage in being first, in this domain. You can do all the hard work and your competitors can distill that work out of your model for pennies on the dollar. Why should anyone want to do the work?
      • MaxPock 10 hours ago
        This sounds like copium. If it were just about distillation, we'd be seeing many awesome models from Europe, Japan, and even India.
        • mike_hearn 6 hours ago
          It's certainly both: it's a lot more than distillation, and at least some Chinese labs have been cloning OpenAI via distillation. That's why OpenAI instituted much tighter ID verification requirements earlier this year.

          No, the reason you don't see many open source models coming from the rest-of-world (other than Mistral in France) is that you still need a ton of capital to do it. China can compete because the CCP used a combination of the Great Firewall and lax copyright/patent enforcement to implement protectionism for internet services, which is a unique policy (one that obviously came with massive costs too). This allowed China to develop home grown tech companies which then have the datacenters, capital and talent density to train models. Rest of world didn't do this and wasn't able to build up domestic tech industries competitive with the USA.

          • MaxPock 5 hours ago
            There’s no Chinese lab that has been accused by OpenAI or anyone else of distillation. The accusations come from fringe right-wing media that are used to the “China only copies” trope. Training a model, by the way, is not about money, because many Western tech giants have more money than the CCP can allocate to Chinese labs. Apple, Meta, Amazon, SAP, IBM, and others have access to the same data as OpenAI and should thus be able to come up with a SOTA model in under a year, right? On lax copyright enforcement, I’d like to point out that it’s actually Western labs that have been taken to court for stealing content.

            On matters of protectionism, the Great Firewall was the best thing that China did. It prevented them from being digitally colonized like the rest of the world.

  • yalogin 13 hours ago
    In my experience using these supposedly expert models, they are all more or less the same, given they are all trained on the same internet data. The differentiation and value is in the context window management and how relevant info from your session is pulled in. So it's the interface to the model that makes all the difference. Even there, the differences are quite minimal. That is because all these companies want to walk the line between providing enough functionality to keep users engaged and pushing them to sign up for the subscription.

    All this to ask the question: if I host these open-source models locally, how is the user interface layer implemented, the one that remembers and picks the right data from my previous session, plus the agentic automation and the rest? Do I have to do it myself, or are there free options for that?

    • viksit 12 hours ago
      this is a great question. what are the main use cases that you have for this? i’ve been working on a library for something similar and exposing it via an mcp interface. would love to pick your brain on this (@viksit on twitter)