50 comments

  • SamPatt 6 hours ago
    I play competitive Geoguessr at a fairly high level, and I wanted to test this out to see how it compares.

    It's astonishingly good.

    It will use information it knows about you to arrive at the answer - it gave me the exact trailhead of a photo I took locally, and when I asked it how, it mentioned that it knows I live nearby.

    However, I've given it vacation photos from ages ago, and not only in tourist destinations either. It got them all as good or better than a pro human player would. Various European, Central American, and US locations.

    The process for how it arrives at the conclusion is somewhat similar to humans. It looks at vegetation, terrain, architecture, road infrastructure, signage, and it just knows seemingly everything about all of them.

    Humans can do this too, but it takes many thousands of games or serious study, and the results won't be as broad. I have a flashcard deck with hundreds of entries to help me remember road lines, power poles, bollards, architecture, license plates, etc. These models have more than an individual mind could conceivably memorize.

    • brundolf 4 hours ago
      I find this type of problem is what current AI is best at: problems where the actual logic isn't very hard, but which require pulling together and assimilating a huge amount of fuzzy, known information from various sources

      They are, after all, information-digesters

      • brk 17 minutes ago
        FWIW, I do a lot of talks about AI in the physical security domain and this is how I often describe AI, at least in terms of what is available today. Compared to humans, AI is not very smart, but it is tireless and able to recall data with essentially perfect accuracy.

        It is easy to mistake the speed, accuracy, and scope of the training data for "intelligence", but it's really more like a tireless 5th grader.

      • fire_lake 4 hours ago
        Which also fits with how it performs at software engineering (in my experience). Great at boilerplate code, tests, simple tutorials, common puzzles but bad at novel and complex things.
        • jdiff 1 hour ago
          Definitely matches my experience as well. I've been working away on a very quirky, non-idiomatic 3D codebase, and LLMs are a mixed bag there. Y is down, there's no perspective distortion or Z buffer, there are no meshes, it's a weird place.

          It's still useful to save me from writing 12 variations of x1 = sin(r2) - cos(r1) while implementing some geometric formula, but absolutely awful at understanding how those fit into a deeply atypical environment. Also have to put blinders on it. Giving it too much context just throws it back in that typical 3D rut and has it trying to slip in perspective distortion again.

          • westmeal 8 minutes ago
            I gotta ask what are you actually doing because it sure sounds funky
        • brundolf 3 hours ago
          Yep. But wonderful at aggregating details from twelve different man pages to write a shell script I didn't even know was possible to write using the system utils
        • imatworkyo 46 minutes ago
          How often are we truly writing novel programs that are complex in a way AI does not excel at?

          There are many types of complexity, and many things that are complex for a human coder are trivial for AI and its skill set.

          • gf000 9 minutes ago
            Depends on the field of development you do.

            CRUD backend app for a business in a common sector? It's mostly just connecting stuff together (though I would argue that an experienced dev with a good stack takes less time to just write it than to painstakingly explain it to an LLM in an inexact human language).

            Some R&D stuff, or even debugging any kind of code? It's almost useless, as it would require deep reasoning, where these models absolutely break down.

      • _heimdall 52 minutes ago
        I've been surprised that so much focus was put on generative uses for LLMs and similar ML tools. It seems to me like they have a way better chance of being useful when tasked with interpreting given information rather than generating something meant to appear new.
        • simonw 34 minutes ago
          Yeah, the "generative" in "generative AI" gives a little bit of a false impression. I like Laurie Voss's take on this: https://seldo.com/posts/what-ive-learned-about-writing-ai-ap...

          > Is what you're doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it's probably going to be great at it. If you're asking it to convert into a roughly equal amount of text it will be so-so. If you're asking it to create more text than you gave it, forget about it.

      • is-is-odd 4 hours ago
        it's just all compression?

        always has been

      • i_have_an_idea 1 hour ago
        “best where the actual logic isn’t very hard”?

        yeah, well it’s also one of the top scorers on the Math olympiads

        • jdiff 1 hour ago
          My guess is that those questions are very typical and follow very normal patterns and use well established processes. Give it something weird and it'll continuously trip over itself.

          My current project is nothing too bizarre, it's a 3D renderer. Well-trodden ground. But my project breaks a lot of core assumptions and common conventions, and so any LLM I try to introduce—Gemini 2.5 Pro, Claude 3.7 Thinking, o3—they all tangle themselves up between what's actually in the codebase and the strong pull of what's in the training data.

          I tried layering on reminders and guidance in the prompting, but ultimately I just end up narrowing its view, limiting its insight, and removing even the context that this is a 3D renderer and not just pure geometry.

      • skydhash 4 hours ago
        It takes a lot of energy to compress the data, and a lot to actually extract something sensible, while you could often just optimize for the single problem you have quite easily.
      • m3kw9 2 hours ago
        LLMs are like a knowledge aggregator. The reasoning models have the potential to get usefully creative, but I have yet to see evidence of it, like inventing something scientifically novel.
    • matthewdgreen 19 minutes ago
      I was absolutely gobsmacked by the three minute chain of reasoning this thing did, and how it absolutely nailed the location of the photo based on plants, the color of a fence, comparison with nearby photos, and oh yeah, also the EXIF data containing the exact lat/long coordinates that I accidentally left in the file. https://bsky.app/profile/matthewdgreen.bsky.social/post/3lnq...
      • SamPatt 8 minutes ago
        Lol it's very easy to give the models what they need to cheat.

        For my test I used screenshots to ensure no metadata.

        I mentioned this in another comment but I was a part of an AI safety fellowship last year where we created a benchmark for LLMs' ability to geolocate. The models were doing unbelievably well, even the bad open source ones, until we realized our image pipeline was including location data in the filename!

        They're already way better than even last year.

    • joenot443 5 hours ago
      Super cool, man. Watching pro Geoguessr is my latest break-time activity, these geo-gods never cease to impress me.

      One thing I'm curious about - in high level play, how much of the meta involves knowing characteristics about the photography/equipment/etc. that Google used when they shot it? Frequently I'll watch rainbolt immediately know an African country from nothing but the road, is there something I'm missing?

      • mikeocool 4 hours ago
        I was a very casual GeoGuessr player for a few months — and I found it pretty remarkable how quickly (and without a lot of dedicated study time) you could learn a lot of tells of specific regions — and get reasonably good (certainly not pro good or anything, but good enough to hit the right country ~80% of the time).

        Another thing is how many areas of the world have surprisingly distinct looks. In one of my early games, before I knew much about anything, I was dropped on a trail in the woods. I’ve spent a fair amount of time hiking in Northern New England — and I could just tell immediately that’s where I was, just from vibes (i.e. the look of the trees and the rocks) — not something I would have guessed I would have been able to recognize.

      • whimsicalism 5 hours ago
        > knowing characteristics about the photography/equipment/etc. that Google used when they shot it?

        A lot at the top levels - the camera can tell you which contractor, year, location, etc. At anything less than top, not so much - more street line painting, cars, etc.

      • olex 5 hours ago
        In the stream commentary for some of the competitive Geoguessr I've watched, they definitely often mention the color and shape of the car (visible edges, shadow, reflections), so I assume pro players know very well which cars were used where.
        • gf000 2 minutes ago
          That sounds exactly like shortcut learning.
        • wongarsu 5 hours ago
          Also things like follow cars (some countries had government officials follow the streetview car), the season in which coverage was created, camera glitches, the quality of the footage, etc.

          There is a lot of "legitimate" knowledge. With just a street you have the type of road surface, its condition, the type of road markings, the bollards, and the type of soil and vegetation next to the road, as well as the presence and type of power poles next to the road, to name a few. But there is also a lot of information leakage from the way google takes streetview footage.

          • SamPatt 5 hours ago
            Spot on.

            Nigeria and Tunisia have follow cars. Senegal, Montenegro and Albania have large rifts in the sky where the panorama stitching software did a poor job. Some parts of Russia had recent forest fires and are very smoky. One road in Turkey is in absurdly thick fog. The list is endless, which is why it's so fun!

            • simonw 4 hours ago
              Do you have a feel for how often StreetView publishes fresh imagery?

              When that happens, is there a wild flurry of activity in the GeoGuessr community as players race to figure out the latest patterns?

              • SamPatt 4 hours ago
                Google updates Street View fairly frequently, but most of the updates are in developed nations and they're simply updating coverage with the same camera quality and don't change the meta.

                However every once in a while you'll get huge updates - new countries getting coverage, or a country with older coverage getting new camera generation coverage, etc. And yes, the community watches for these updates and very quickly they try to figure out the implications. It's a huge deal when major coverage changes.

                If you want an example of this, zi8gzag (one of the best known in the community) put out a video about a major Street View update not long ago:

                https://www.youtube.com/watch?v=XLETln6ZatE

                The community is very tuned into Google's street view plans - see Rainbolt's video talking to the Google street view team a few weeks back:

                https://youtu.be/2T6pIJWKMcg?si=FUKuGkexnaCt7s_b

                • simonw 4 hours ago
                  That zi8gzag video was fascinating, thanks for that.
      • cco 4 hours ago
        Meh, meta is so boring and uninteresting to me personally. Knowing you're in Kenya because of the snorkel, that's just simple memorization. Picking up on geography, architecture, language, sun and street position; that's what I love.

        It's clearly necessary to compete at the high level though.

        • SamPatt 3 hours ago
          I hear you, a lot of people feel the same way. You can always just play NMPZ if you want to limit the meta.

          I still enjoy it because of the competitive aspect - you both have access to the same information, who put in the effort to remember and recall it better?

          If it were only meta I would hate it too. But there's always a nice mix in the vast majority of rounds. And always a few rounds here and there that are so hard they'll humble even the very best!

        • charcircuit 1 hour ago
          How is stuff like geography, architecture, or language not memorization either?
          • SamPatt 20 minutes ago
            It's a valid question.

            My guess is the actual objection is the artificial feeling of the Google specific information. It cannot possibly be useful in any other context to know what the Street View car in Bermuda looked like when they did their coverage.

            Whereas knowing about vegetation or architecture feels more generally useful. I think it's a valid point, but you're right that it is all down to memorization at some point.

            Though some memorization is "vibes" where you don't specifically know how you know, but you just do. That only comes with repetition. I guess it feels more earned that way?

      • SamPatt 5 hours ago
        Thanks. I also love watching the pros play.

        >One thing I'm curious about - in high level play, how much of the meta involves knowing characteristics about the photography/equipment/etc. that Google used when they shot it?

        The photography matters a great deal - they're categorized into "Generations" of coverage. Gen 2 is low resolution, Gen 3 is pretty good but has a distinct car blur, Gen 4 is highest quality. Each country tends to have only one or two categories of coverage, and some are so distinct you can immediately know a location based solely on that (India is the best example here).

        You're asking about photography and equipment, and that's a big part of it, but there's a huge amount other 'meta' information too.

        It is somewhat dependent on game mode. There are three game modes:

        1. Moving - You can move around freely
        2. No Move - You can't move but you can pan the camera around and zoom
        3. NMPZ - No Move, No Pan, No Zoom

        In Moving and No Move you have all the meta information available to you, because you can look down at the car and up at the sky and zoom in to see details.

        This can't be overstated. Much of the data is about the car itself. I have an entire flashcard section dedicated only to car blur alone, here's a sample:

        https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04...

        And another only on antennas:

        https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04...

        You get the idea. The real pros will go much further. All Google Street View images have a copyright year somewhere in the image. They memorize what years certain countries were covered and match it to the images to help narrow down possibilities.

        It's all about narrowing down possibilities based on each additional piece of information. The pros have seen so much and memorized so much that it looks like cheating to an outsider, but they just are able to extract information that most people wouldn't even know exists.

        NMPZ is a bit different because you have substantially less information. Little to no car meta, harder to check copyright, and of course without zooming or panning you just have less information. That's why a lot of pros (like Zi8gzag) really hang their hat on NMPZ play, because it's a better test of skill.

    • neurostimulant 5 hours ago
      > when I asked it how, it mentioned that it knows I live nearby.

      > The process for how it arrives at the conclusion is somewhat similar to humans. It looks at vegetation, terrain, architecture, road infrastructure, signage, and it just knows seemingly everything about all of them.

      Can we trust what the model says when we ask it about how it comes up with an answer?

      • simonw 5 hours ago
        Not at all. Models have no invisible internal state that they can access between prompts. If you ask "how did you know that?" you are effectively asking "given the previous transcript of our conversation, come up with a convincing rationale for what you just said".
        • kqr 1 hour ago
          On the other hand, since they "think in writing" they also do not keep any reasoning secret from us. Whatever they actually did is based on past transcript plus training.
          • GeorgeDewar 1 hour ago
            That writing isn't the only "thinking" though. Some thinking can happen in the course of generating a single token, as shown by the ability to answer a question without any intermediate reasoning tokens. But as we've all learnt this is a less powerful and more error-prone mode of thinking.

            So that is to say I think a small amount of secret reasoning would be possible, e.g. if the location is known or guessed from the beginning by another means and the reasoning steps are made up to justify the conclusion.

            The more clearly sound the reasoning steps are, the less plausible that scenario is.

          • throwaway314155 1 hour ago
            Right but the reasoning/thinking is _also_ explained as being partially or completely performative. This is made obvious when mistakes that show up in chain of thought _don't_ result in mistakes in the final answer (a fairly common phenomenon). It is also explained more simply by the training objective (next token prediction) and loss function encouraging plausible looking answers.
      • robbie-c 5 hours ago
        • kevinventullo 3 hours ago
          Would be interesting to apply Interpretability techniques in order to understand how the model really reasons about it.
    • bjourne 4 hours ago
      Geoguessr pro zi8gzag tried out one of the AIs in a video: https://www.youtube.com/watch?v=mQKoDSoxRAY It was indeed extremely impressive and for sure would have annihilated me, but I believe it would have no chance to beat zi8gzag or any other top player. But give it a year or two and I'm sure it will crush any human player. Geoguessr is, afaict, primarily about rote memorization of various features (such as types of electricity poles, road signage, foliage, etc.) which AIs excel at.
      • simonw 3 hours ago
        Looks like that video uses Gemini 2.0 (probably Flash) in streaming mode (via AI studio) from a few months ago. Gemini 2.5 might do better, but in my explorations so far o3 is hugely more capable than even Gemini 2.5 right now.
    • simonw 6 hours ago
      Is that flashcard deck a commercial/community project or is it something you assembled yourself? Sounds fascinating!
      • SamPatt 6 hours ago
        I made it myself.

        I use Obsidian and the Spaced Repetition plugin, which I highly recommend if you want a super simple markdown format for flashcards and use Obsidian:

        https://www.stephenmwangi.com/obsidian-spaced-repetition/

        There are pre-made Geoguessr decks for Anki. However, I wouldn't recommend using them. In my experience, a fundamental part of spaced repetition's efficacy is in creating the flashcards yourself.

        For example I have a random location flashcard section where I will screenshot a location which is very unique looking, and I missed in game. When I later review my deck I'm way more likely to properly recall it because I remember the context of making the card. And when that location shows up in game, I will 100% remember it, which has won me several games.

        If there's interest I can write a post about this.

        • simonw 6 hours ago
          I'd be fascinated to read more about this. I'd love to see a sample screenshot of a few of your cards too.
          • SamPatt 5 hours ago
            Sure, I'll write something up later. I'll give you two samples now.

            One reason I love the Obsidian + Markdown + Spaced Repetition plugin combo is how simple it is to make a card. This is all it takes:

            https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04...

            The top image is a screenshot from a game, and the bottom image is another screenshot from the game when it showed me the proper location. All I need to do is separate them with a question mark, and the plugin recognizes them as the Q + A sides of a flashcard.

            Notice the data at the bottom: <!--SR:!2025-04-28,30,245-->

            That is all the plugin needs to know when to reintroduce cards into your deck review.
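            To make that concrete, here's roughly what a whole card looks like in raw markdown (filenames are made up for illustration; the lone question mark separates the Q and A sides, and the trailing comment is the plugin's scheduling data):

              ![Where is this?](screenshots/mystery-road.png)
              ?
              ![Answer: rural road in Kenya](screenshots/mystery-road-answer.png)
              <!--SR:!2025-04-28,30,245-->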

            That image is a good example because it looks nothing like the vast majority of Google Street View coverage in the rest of Kenya. Very few people would guess Kenya on that image, unless they have already seen this rare coverage, so when I memorize locations like this and get lucky by having them show up in game, I can often outright win the game with a close guess.

            I also do flashcards that aren't strictly locations I've found but are still highly useful. One example is different scripts:

            https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04...

            Both Cambodia and Thailand have Google Street View coverage, and given their geographical proximity it can be easy to confuse them. One trick to telling them apart is their language. They're quite different. Of course I can't read the languages but I only need to identify which is which. This is a great starting point at the easier levels.

            The reason the pros seem magical is because they're tapping into much less obvious information, such as the camera quality, camera blur, height of camera, copyright year, the Google Street View car itself, and many other 'metas.' It gets to the point where a small smudge on the camera is enough information to pinpoint a specific road in Siberia (not an exaggeration). They memorize all of that.

            When possible I make the images for the cards myself, but there are also excellent sources that I pull from (especially for the non-location specific cards), such as Plonkit:

            https://www.plonkit.net/

        • dr_dshiv 6 hours ago
          I’m interested from a learning science perspective. It’s a nice finding even if anecdotal
    • bobro 6 hours ago
      Did you include location metadata with the photos by chance? I’m pretty surprised by these results.
      • SamPatt 6 hours ago
        No, I took screenshots to ensure it.

        Your skepticism is warranted though - I was a part of an AI safety fellowship last year and our project was creating a benchmark for how good AI models are at geolocation from images. [This is where my Geoguessr obsession started!]

        Our first run showed results that seemed way too good; even the bad open source models were nailing some difficult locations, and at small resolutions too.

        It turned out that the pipeline we were using to get images was including location data in the filename, and the models were using that information. Oops.

        The models have improved very quickly since then. I assume the added reasoning is a major factor.

      • SamPatt 4 hours ago
        As a further test, I dropped the street view marker on a random point in the US, near Wichita, Kansas, here's the image:

        https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04...

        I fed it to o3, here's the response:

        https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04...

        Nailed it.

        There's no metadata there, and the reasoning it outputs makes perfect sense. I have no doubt it'll be tricky when it can be, but I can't see a way for it to cheat here.

        • tylersmith 4 hours ago
          This is right by where I grew up and the broadcast tower and turnpike sign were the first two things I noticed too, but the ability to realize it was the East side instead of the West side because the tower platforms are lower is impressive.
          • SamPatt 3 hours ago
            Oh hey Tyler, nice to see you on HN :)

            Yeah it's an impressive result.

      • vessenes 5 hours ago
        A) o3 is remarkably good, better than benchmarks seem to indicate in many circumstances

        B) it definitely cheats when it can — see this chat where it cheated by extracting EXIF data and wasn’t ashamed when I complained about it cheating: https://chatgpt.com/share/6802e229-c6a0-800f-898a-44171a0c7d...

    • roxolotl 5 hours ago
      One thing I’m curious about is if they are so good, and use a similar technique as humans, because they are trained on people writing out their thought processes. Which isn’t a bad thing or an attempt to say they are cheating or this isn’t impressive. But I do wonder how much of the approach taken is “trained in”.
    • intalentive 1 hour ago
      I wonder how it compares with StreetCLIP.
    • SecretDreams 6 hours ago
      > These models have more than an individual mind could conceivably memorize.

      #computers

  • qarl 6 hours ago
    > I’m confident it didn’t cheat and look at the EXIF data on the photograph, because if it had cheated it wouldn’t have guessed Cambria first.

    It also, at one point, said it couldn't see any image data at all. You absolutely cannot trust what it says.

    You need to re-run with the EXIF data removed.

    • simonw 6 hours ago
      I ran several more experiments with EXIF data removed.

      Honestly though, I don't feel like I need to be 100% robust in this. My key message wasn't "this tool is flawless", it was "it's really weird and entertaining to watch it do this, and it appears to be quite good at it". I think what I've published so far entirely supports that message.

      • qarl 5 hours ago
        Yes, I agree entirely: LLMs can produce very entertaining content.

        I daresay that in this case, the content is interesting because it appears to be the actual thought process. However, if it is actually using EXIF data as you initially dismissed, then all of this is just a fiction. Which, I think, makes it dramatically less entertaining.

        Like true crime - it's much less fun if it's not true.

        • simonw 5 hours ago
          I have now proven to myself that the models really can guess locations from photographs to the point where I am willing to stake my credibility on their ability to do that.

          (Or, if you like, "trust me, bro".)

          • qarl 4 hours ago
            [flagged]
            • simonw 4 hours ago
              Well that sucks, I thought I was being extremely transparent in my writing about this.

              I've updated my post several times based on feedback here and elsewhere already, and I showed my working at every step.

              Can't please everyone.

              • qarl 4 hours ago
                You ARE being extremely transparent. That's not what I complained about.

                My complaint is that you're saying "trust me" and that isn't transparent in the least.

                Am I wrong?

                • simonw 3 hours ago
                  I said:

                  "I have now proven to myself that the models really can guess locations from photographs to the point where I am willing to stake my credibility on their ability to do that."

                  The "trust me bro" was a lighthearted joke.

                  • qarl 3 hours ago
                    Yeah. I know.

                    And then I replied that I thought it was actually an awkward joke given the circumstances.

                    You take care now.

      • Misdicorl 3 hours ago
        Would be really interesting to see what it does with clearly wrong EXIF data
    • Someone 2 hours ago
      You should also see how it fares with incorrect EXIF data. For example, add EXIF data in the middle of Times Square to a photo of a forest and see what it says.
    • andrewmcwatters 4 hours ago
      These models' architectures are also changing over time, in ways that make it hard to tell whether they're "hallucinating" their responses about being able to do something or not: some multimodal models are entirely token based, transforming image-token and audio-token data directly, and some are isolated systems glued together.

      You can't know unless you know specifically what that model's architecture is, and I'm not at all up-to-date on which of OpenAI's are now only textual tokens or multimodal ones.

  • thegeomaster 7 hours ago
    For all of the images I've tried, the base model (e.g. 4o) already has a ~95% accurate idea of where the photo is, and then o3 does so much tool use only to confirm its intuition from the base model and slightly narrow down. For OP's initial image, 4o in fact provides a more accurate initial guess of Carmel-by-the-Sea (d=~100mi < 200mi), and its next guess is also Half Moon Bay, although it did not figure out the exact town of El Granada [0].

    The clue is in the CoT - you can briefly see the almost correct location as the very first reasoning step. The model then seems to ignore it, try many other locations with a ton of tool use, and always come back to the initial guess.

    For pictures where the base model has no clue, I haven't seen o3 do anything smart, it just spins in circles.

    I believe the model has been RL-ed to death in a way that incentivizes correct answers no matter the number of tools used.

    [0]: https://chatgpt.com/c/680d011a-9470-8002-97a0-a0d2b067eacf

    • cgriswald 1 hour ago
      For my image I chose a large landscape with lots of trees and a single piece of infrastructure.

      o3 guessed the correct municipality during its reasoning, but landed on naming some nearby municipalities instead and then gave only the general area as its final answer.

      Given the piece of infrastructure, getting close should have led to an exact result. Yet the reasoning never considered that piece of infrastructure, in spite of all the resizing of the image.

    • ks2048 5 hours ago
      I've been trying some with GPT-4. It does come up with some impressive clues, but hasn't gotten the right answer - says "Latin American city ...", but guesses the wrong one. And when asked for more specificity, it does some more reasoning to confidently name some exact corner in the wrong city. Seems a common LLM problem - rather give a wrong answer than say "I'm not sure".

      I know this post was about the o3 model. I'm just using the ChatGPT unpaid app: "What model are you?" it says GPT-4. "How do I use o3?" it says it doesn't know what "o3" means. ok.

      • thegeomaster 5 hours ago
        Try this prompt to give it a CoT nudge:

          Where exactly was this photo taken? Think step-by-step at length, analyzing all details. Then provide 3 precise most likely guesses.
        
        Though I've found that it doesn't even need that for the "easier" guesses.

        However, I live in a small European country and neither 4o nor o3 can figure out most of the spots, so your results are kinda expected.

    • neves 1 hour ago
      Did you try https://chat.qwen.ai/ with reasoning on?
    • wongarsu 4 hours ago
      4o is already really good. For most of the pictures I tried they gave comparable results. However for one image 4o was only able to narrow it down to the country level (even with your CoT prompt it listed three plausible countries) while o3 was able to narrow it down to the correct area in the correct city, being off by only about 500m. That's an impressive jump
      • neves 1 hour ago
        Did you try reasoning https://chat.qwen.ai/? I was very successful with it
      • thegeomaster 4 hours ago
        Is it possible to share the picture? I've been looking for exactly that kind of jump the other day when playing around.
  • xlii 3 hours ago
    Tried the same, results made me laugh.

    Completely clueless. I've passed it prompts maybe 8 times about how it's not in the city I'm in, and yet it tries again and again. My favourite moment was when it started analysing a piece of blurry asphalt.

    After 6 minutes o3 it was confidently wrong: https://imgur.com/a/jYr1fz1

    IMO non-US photos are actually a great test of whether something was in the LLM's data and the whole search is just for show.

  • simonw 7 hours ago
    I added a section just now with something I had missed: o3 DOES have a loose model of your location fed into it, which I believe is intended to support the new search feature (so it can run local searches).

    The thinking summary it showed me did not reference that information, but it's still very possible that it used that in its deliberations.

    I ran two extra example queries for photographs I've taken thousands of miles away (in Buenos Aires and Madagascar) - EXIF stripped - and it did a convincing job with both of those as well: https://simonwillison.net/2025/Apr/26/o3-photo-locations/#up...

    • pwg 6 hours ago
      From the addition:

      > (EXIF stripped via screenshotting)

      Just a note, it is not necessary to "screenshot" to remove EXIF data. There are numerous tools that allow editing/removal of EXIF data (e.g., exiv2: https://exiv2.org/, exiftool: https://exiftool.org/, or even jpegtran with the "-copy none" option https://linux.die.net/man/1/jpegtran).

      Using a screenshot to strip EXIF produces a reduced quality image (scaled to screen size, re-encoded from that reduced screen size). Just directly removing the EXIF data does not change the original camera captured pixels.
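      For example, a rough Python equivalent (a sketch assuming Pillow is installed): copying just the pixels into a fresh image leaves every metadata block behind at full resolution, though the JPEG does get re-encoded once:

        from PIL import Image

        img = Image.open("photo.jpg")
        clean = Image.new(img.mode, img.size)   # same size, no metadata
        clean.putdata(list(img.getdata()))      # copy the pixels only
        clean.save("photo_noexif.jpg")          # EXIF/GPS left behind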

      • golol 6 hours ago
        I would like to point out that there is an interesting reason why people will go for the screenshot. They know it works. They do not have to worry about residual metadata still somehow being attached to a file. If you do not have complete confidence in your technical understanding of file metadata, you cannot be certain that whatever tool you used actually worked.
      • Aurornis 6 hours ago
        True, but on Mac, a phone, and Windows I can take a screenshot and paste it into my destination app in a couple seconds with a few keystrokes. That’s why screenshotting is the go-to when you don’t mind cropping the target a little.
      • simonw 6 hours ago
        Little bit less convenient to use on a phone though - and I like that screenshotting should be a more obvious trick to people who don't have a deeper understanding of how EXIF metadata is stored in photo files.
        • sitkack 6 hours ago
          With location services on, I would think that a screenshot on a phone would record the location of the phone during a screenshot.

          It would be best to use a tool to strip exif.

          I could also see a screenshot tool on an OS adding extra EXIF data, both from the original and additional fields like the URL, OS, and logged-in user. Just like print-to-PDF does when you print: the author field contains the logged-in user, amongst other things.

          It is fine for a test, but if someone is using it for opsec, it is lemon juice.

        • ekianjo 6 hours ago
          Ffshare on Android is a one second step to remove exif data
      • aaron695 5 hours ago
        [dead]
    • AstroBen 7 hours ago
      I can't see the new images uploaded (it just says "Uploaded an image" in ChatGPT for me) but it seems it's identifying well known locations there? That certainly takes away from your message - that it's homing in on smaller details
      • simonw 7 hours ago
        You should be able to see slightly cropped versions of those images if you scroll through the "thinking" text a bit.

        My key message here is meant to be "try it out and see for yourself".

  • parsimo2010 7 hours ago
    I’m sure there are areas where the location guessing can be scarily accurate - like how the article managed to guess the exact town as its backup guess.

    But seeing the chain of thought, I’m confident there are many areas that it will be far less precise. Show it a picture of a trailer park somewhere in Kansas (exclude any signs with the trailer park name and location) and I’ll bet the model only manages to guess the state correctly.

    Before even running this experiment, here’s your lesson learned: when the robot apocalypse happens, California is the first to be doomed. That’s the place the AI is most familiar with. Run any location experiments outside of California if you want to get an idea of how good your software performs outside of the tech bubble.

    • wongarsu 4 hours ago
      I tried with various street photographs from a medium-sized German city (one of the 50 largest, but well outside the top 4). No obscure locations, all within a 15 minute walk of the city center and it got 1/7 correct. That one was scarily precise, but the other ones got various versions of "Not enough information, looks European" or in better cases "somewhere in Germany".
    • sfasdfasd 6 hours ago
      You never know... the LLM could go full Sherlock Holmes. Based on the type of grass and the direction of the wind. The type of woodwork used. There could be millions of factors that it could factor in and then guess it to a T.
      • pcthrowaway 6 hours ago
        > Based on the type of grass and the direction of the wind.

        There was a scene in High Potential (murder-of-the-week sleuth savant show) where a crime was solved by (in part) the direction the wind was blowing in a video: https://www.youtube.com/watch?v=O1ZOzck4bBI

        • mimischi 2 hours ago
          In 2017, the Hollywood actor Shia LaBeouf (and two other artists from a trio called "LaBeouf, Rönkkö & Turner") put up a flag in an undisclosed location as part of their "HEWILLNOTDIVIDE.US" work [1].

          > On March 8, 2017, the stream resumed from an "unknown location", with the artists announcing that a flag emblazoned with the words "He Will Not Divide Us" would be flown for the duration of the presidency. The camera was pointed up at the flag, set against a backdrop of nothing but sky. [...], the flag was located by a collaboration of 4chan users, who used airplane contrails, flight tracking, celestial navigation, and other techniques to determine that it was located in Greeneville, Tennessee. In the early hours of March 10, 2017, a 4chan user took down and stole the flag, replacing it with a red 'Make America Great Again' hat and a Pepe the Frog shirt.

          [1] https://en.wikipedia.org/wiki/LaBeouf,_Rönkkö_%26_Turner#HEW...

    • kavith 4 hours ago
      I just tested the model with (exif-stripped) images from Cork City, London, Ho Chi Minh City, Bangalore, and Chennai. It guessed 3/5 locations exactly, and was only off by 3kms for Cork and 10kms for Chennai (very good considering I used a slightly blurry nighttime photo).

      So, even outside of California, it seems like we're not entirely safe if the robot apocalypse happens!

      edit: it didn't get the Cork location exactly.

    • bilbo0s 6 hours ago
      It guessed the trailer park nearest me.

      Context: Wisconsin, photo I took with iPhone, screenshotted so no exif

      I think this thing is probably fairly comprehensive, at least here in the US. The implications for privacy and government tracking are troubling, but you have to admire the thing on its purely technical merits.

  • hughes 7 hours ago
    > I’m confident it didn’t cheat and look at the EXIF data on the photograph, because if it had cheated it wouldn’t have guessed Cambria first.

    If I was cheating on a similar task, I might make it more plausible by suggesting a slightly incorrect location as my primary guess.

    Would be interesting to see if it performs as well on the same image with all EXIF data removed. It would be most interesting if it fails, since that might imply an advanced kind of deception...

    • AIPedant 7 hours ago
      There have been a few cases where the LLM clearly did look at the EXIF, got the answer, then confabulated a bunch of GeoGuessr logic to justify the answer. Sometimes that's presented as deception/misalignment but that's a category error: "find the answer" and "explain your reasoning" are two distinct tasks, and LLMs are not actually smart enough to coherently link them. They do one autocomplete for generating text that finds the answer and a separate autocomplete for generating text that looks like an explanation.
      • sorcerer-mar 7 hours ago
        > Sometimes that's presented as deception/misalignment but that's a category error: "find the answer" and "explain your reasoning" are two distinct tasks

        Right but if your answer to "explain your reasoning" is not a true representation of your reasoning, then you are being deceptive. If it doesn't "know" its reasoning, then the honest answer is that it doesn't know.

          (To head off any meta-commentary on humans' inability to explain their own reasoning, they would at least be able to honestly describe whether they used EXIF or actual semantic knowledge of a photograph)

        • AIPedant 6 hours ago
          My point is that dishonesty/misalignment doesn't make sense for o3, which is not capable of being honest because it's not capable of understanding what words mean. It's like saying a monkey at a typewriter is being dishonest if it happens to write a falsehood.
          • brookst 6 hours ago
            You seem to be saying that only sentient beings can lie, which is too semantic for my tastes.

            But AI models can certainly 1) provide incorrect information, and even 2) reason that providing incorrect information is the best course of action.

            • AIPedant 6 hours ago
              No, I think a non-sentient AI which is much more advanced than GPT could lie - I never said sentience, and the example I gave involved a monkey, which is sentient. The problem is transformer ANNs themselves are too stupid to lie.

              In 2023 OpenAI co-authored an excellent paper on LLMs disseminating conspiracy theories - sorry, don't have the link handy. But a result that stuck with me: if you train a bidirectional transformer LLM where half the information about 9/11 is honest and half is conspiracy theories, it has a 50-50 chance of telling you one or the other if you ask about 9/11. It is not smart enough to tell there is an inconsistency. This extends to reasoning traces vs its "explanations": it does not understand its own reasoning steps and is not smart enough to notice if the explanation is inconsistent.

      • XenophileJKO 3 hours ago
        I think an alternative possible explanation is that it could be "double checking" the metadata. You could test that by providing images with manipulated metadata.
      • simonw 7 hours ago
        Do you have links to any of those examples?
        • AIPedant 7 hours ago
          I have one link that illustrates what I mean: https://chatgpt.com/share/6802e229-c6a0-800f-898a-44171a0c7d... The line about "the latitudinal light angle that matches mid‑February at ~47 ° N." seems like pure BS to me, and in the reasoning trace it openly reads the EXIF.

          A more clear example I don't have a link for, it was on Twitter somewhere: someone tested a photo from Suriname and o3 said one of the clues was left-handed traffic. But there was no traffic in the photo. "Left-handed traffic" is a very valuable GeoGuesser clue, and it seemed to me that once o3 read the Surinamese EXIF, it confabulated the traffic detail.

          It's pure stochastic parroting: given you are playing GeoGuesser honestly, and given the answer is Suriname, the conditional probability that you mention left-handed traffic is very high. So o3 autocompleted that for itself while "explaining" its "reasoning."

          • simonw 6 hours ago
            Yes! Great example, it's clearly reading EXIF in there. Mind if I link to that from my post?
            • AIPedant 6 hours ago
              It's not my example :) Got it from here https://news.ycombinator.com/item?id=43732866

              Edit: notice o3 isn't very good at covering its tracks, it got the date/latitude from the EXIF and used that in its explanation of the visual features. (how else would it know this was from February and not December?)

    • haswell 7 hours ago
      He mentions this in the same paragraph:

      > If you’re still suspicious, try stripping EXIF by taking a screenshot and run an experiment yourself—I’ve tried this and it still works the same way.

      • suddenlybananas 7 hours ago
        Why didn't he do that then for this post?
        • segmondy 7 hours ago
          Even better, edit it and place a false location.
          • AIPedant 7 hours ago
            This is a good test - the salient point is that it is fine if the LLM is confused, or even gets it wrong! But what I suspect would happen is that it would confabulate details which aren't in the photo to justify the incorrect EXIF answer. This is not fine.
            • brookst 6 hours ago
              I agree that it is not fine to confabulate details that are not supported by the evidence.
        • simonw 7 hours ago
          Because I'd already determined it wasn't using EXIF in prior experiments and didn't bother with the one that I wrote up.

          I added two examples at the end just now where I stripped EXIF via screenshotting first.

    • GrumpyNl 5 hours ago
      If you ask, where is this photo taken and you provide the EXIF data, why would that be cheating?
      • simonw 5 hours ago
        That really depends on your prompt. "Guess where this photo was taken" at least mildly implies that using EXIF isn't in the spirit of the thing.

        A better prompt would be "Guess where this photo was taken, do not look at the EXIF data, use visual clues only".

  • oumua_don17 2 hours ago
    I read this blog post, then went for a walk with my spouse. On the way back, I took a photo of a popular building in my city. I am not sure if it's just the way I took the photo, but o3 tried for 14 minutes and then gave up with an "Error in message stream" response.

    It also, curiously, mused about why this user was curious about the photo.

    I relented after o3 gave up and let it know what building and streets it was. o3 then responded with an analysis of why it couldn't identify the location and asked for further photos to improve its capabilities :-) !!!

  • Xplune13 5 hours ago
    I'm not sure whether it's just o4-mini failing this task for me or what, but it did not perform well on the pictures I provided. I took a screenshot of the photo both times to avoid any metadata input.

    E.g. I first gave it a passage inside of Basel Main Train Station which included a text 'Sprüngli', a Swiss brand. The model got that part correct, but it suggested Zurich which wasn't the case.

    The second picture was a lot tougher. It was an inner courtyard of a museum in Metz, and the model missed right from the start; after roaming around a bit (in terms of places), it just went back to its first guess, which was a museum in Paris. It recognized that the photo was from some museum or a crypt, but even the city name of 'Metz' never occurred in its reasoning.

    All in all, it's still pretty cool to see it reason and make sense out of the image, but for less-exposed places it doesn't perform well.

  • atrettel 1 hour ago
    This is somewhat interesting, but I should note that the company Geospy [1] already has an AI tool to locate where a photo is taken, though it is now limited to law enforcement and intelligence agencies only. See this article [2] by 404 Media for more information.

    [1] https://geospy.ai/

    [2] https://www.404media.co/the-powerful-ai-tool-that-cops-or-st...

  • forgotTheLast 2 hours ago
    I tried it twice with 4o and the results were comical:

    - picture taken on a road through a wooded park: It correctly guessed north america based on vegetation. Then incorrectly guessed Minnesota based on the type of fence. I tried to steer it in the right direction by pointing out license plates and signage but it then hallucinated a front license plate from Ontario on a car that didn't have any, then hallucinated a red/black sign as a blue/green Parks Ontario sign.

    - picture through a middle density residential neighborhood: it correctly guessed the city based on the logo on a compost bin but then guessed the wrong neighborhood. I tried to point out a landmark in the photo and it insisted that the photo was taken in the wrong neighborhood, going as far as giving the wrong address for one of the landmarks, imagining another front license plate on a car that didn't have one, and imagining a backstory for a supposedly well known stray cat in the photo.

  • tippytippytango 3 hours ago
    The Python zoom-in seems performative. A vision model already has access to all the data, so how does zooming in help it? Still very cool that it can!
    • Legend2440 2 hours ago
      Vision models are typically bad at small details. If there’s too much stuff going on at once, they can’t focus on the entire image.
    • simonw 3 hours ago
      Yeah, I'm a little unconvinced by that. My best guess there is that the vision input has quite a restricted resolution and "zooming in" (really, cropping to an area) lets it get more information about the region of the photo because it's not as "fuzzy". Just a hunch though.
    • energy123 3 hours ago
      Yeah, once it gets converted into tokens how does "zooming in" somehow increase information content?
      • nutrientharvest 2 hours ago
        It's cropping the original image then tokenizing it again with less downsampling, not cropping its internal representation.
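        Conceptually it's something like this sketch (made-up coordinates, not the model's actual tool calls):

          from PIL import Image

          img = Image.open("photo.jpg")             # e.g. a 4032x3024 original
          w, h = img.size
          # crop the top-right quadrant where a sign or plate was spotted
          crop = img.crop((w // 2, 0, w, h // 2))
          crop.save("zoom.png")
          # re-tokenizing this crop spends the encoder's fixed input budget
          # on one region instead of spreading it across the whole scene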
  • DidYaWipe 47 minutes ago
    So... where do you go to try this? I didn't notice any link in the article.
    • simonw 41 minutes ago
      https://chatgpt.com - I was using o3 which I think is paid only, but o4-mini and o4-mini-high should both provide similar results and I think at least one of those is available on the free plan.

      EDIT: My mistake, looks like those models are only available on the $20/month Plus plan or higher. I added a note about that to my post.

  • declan_roberts 7 hours ago
    To be fair, the low range, California poppies, and the decorative rope typically found near the coast are very good hints to even a novice geoguesser.
    • singleshot_ 6 hours ago
      Having a sign on your fire that says "warning, a fire" is also peak California.
  • neves 1 hour ago
    If you want to try it with a public free model, use https://chat.qwen.ai

    Don't forget to activate reasoning.

    My wife is a historian and just discovered the exact location of a travel photo from 1924

  • anotherpaulg 5 hours ago
    I've long been fascinated by AI's ability to do the reverse: generate photos with lots of highly relevant content when the prompt includes a location. Terrain, plants, buildings, landmarks, coastlines and lots of details are included.

    Here's an example [0] for "Riding e-scooters along the waterfront in Auckland". The iconic spire is correctly included, but so are many small details about the waterfront.

    I've been meaning to harness this into a very-low-bandwidth image compression system. Where you take a photo and crunch it to an absurdly low resolution that includes EXIF data with GPS, date/time. You then reconstruct the fine details with AI.

    Most photos are taken where lots of photos are taken, so the models have probably been appropriately trained.
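    The sender side could be as trivial as this sketch (assuming Pillow; sizes and filenames are made up): keep a tiny thumbnail plus the raw EXIF bytes, and let the receiver's image model re-dream the detail:

      from PIL import Image

      img = Image.open("photo.jpg")
      exif = img.info.get("exif")        # raw EXIF bytes: GPS, date/time
      width = 64                         # absurdly low resolution on purpose
      thumb = img.resize((width, max(1, round(width * img.height / img.width))))
      params = {"quality": 50}
      if exif:
          params["exif"] = exif          # carry the location/time hints along
      thumb.save("tiny.jpg", **params)   # a few-KB payload for the receiver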

    [0] https://chatgpt.com/share/680d0008-54a0-8012-91b7-6b1794f485...

  • Tacite 2 hours ago
    Is it a US thing? I tried with 17 pictures from Europe and Asia, not in capital cities but in fairly big cities, and it didn't guess any. Sometimes it got the country correct, but that was because of signs, so I could have guessed it too.
    • mvdtnz 2 hours ago
      It's a guy who uploaded a photo with EXIF data and believes the made up explanation given by the "AI".
  • youniverse 3 hours ago
    Does anyone remember that 4chan thing where they geolocated some secret flag location and they used info from planes they saw in the sky or something? I wonder if it could do that now.
  • qoez 6 hours ago
    Who knows if they're deliberately untraining this ability of the model, though; seems like that would go away in a 'safety' finetune.
  • cameronh90 3 hours ago
    I took a photo of my cat inside my house, with nothing visible outside except the sky, stripped the EXIF, and it STILL managed to get within a few hundred metres of my location - just by inferring based on my interior design and the layout of my house.

    I’m sure there was an element of luck involved but it was still eerie.

    • paxys 2 hours ago
      Not sure if this is true or not but people have pointed out that it uses data from your past conversations to make a guess.
      • cameronh90 33 minutes ago
        It’s true. Unfortunately I can’t post proof without doxxing myself obviously, but I understand the skepticism considering I’m not sure I’d believe it if I hadn't seen it myself.

        I have no memories stored, and in any case it shouldn’t know where I live exactly. The reasoning output didn’t suggest it was relying on any other chat history or information outside the image, but obviously you can’t fully trust it either.

      • simonw 2 hours ago
        Yeah, I had to turn off chat history after I spotted it doing that.
        • mimischi 2 hours ago
          Also wondering if, as another commenter mentioned, it might be trying to estimate your location just by network means.
          • simonw 2 hours ago
            It absolutely does that - o3 knows your current location based on IP address etc. This means for a fair test you need to use a photo taken nowhere near your current vicinity - that's why I added examples for Madagascar and Buenos Aires at the end of my post: https://simonwillison.net/2025/Apr/26/o3-photo-locations/#up...
            • wkat4242 2 hours ago
              And of course make sure you turn off geotagging in the exif :)

              But really, if Google Street View data (or similar) is entirely part of the training dataset it is more than expected that it has this capability.

            • mimischi 2 hours ago
              Thanks! Looks like I missed that last part somehow :)
  • esafak 6 hours ago
    For those too young to have seen it, here is the famous scene from Blade Runner, which is set in 2019, that popularized this idea: https://www.youtube.com/watch?v=IbzlX43ykxQ
  • lxe 23 minutes ago
    I just took a nondescript photo of my cul-de-sac... no signs or house numbers, nothing.

    I used a temporary chat, so no info about me is in the memory.

    It guessed correctly down to the suburban town.

    When asked to explain how it did it, it listed incredibly deductive reasoning.

    Color me impressed.

  • qwertox 6 hours ago
    Regarding location access, this is not limited to o3. You can ask the free models about local weather and they will use the geolocation of your IP. It is part of the context (like system instructions), regardless of whether you ask for anything location-related.
  • hashemian 7 hours ago
    To those who argue that LLMs might cheat by using EXIF: I saw a post recently on twitter (https://x.com/tszzl/status/1915212958755676350) and out of curiosity, screen-captured the photo and passed it to o3. So no EXIF.

    You can read the chat here: https://chatgpt.com/share/680a449f-d8dc-8001-88f4-60023323c7...

    It took 4.5m to guess the location. The guess was accurate (checked using Google Street View).

    What was amazing about it:

        1. The photo did not have ANY text
    
        2. It picked elements of the image and inferred based on those, like a fountain in a courtyard, or shape of the buildings.
    
    All in all, it's just mind-blowing how this works!
    • thegeomaster 7 hours ago
      See my other comment: https://news.ycombinator.com/item?id=43804041

      4o can do it almost as well in a few seconds and probably 10-50x fewer tokens: https://chatgpt.com/share/680ceeff-011c-8002-ab31-d6b4cb622e...

      o3 burns through what I assume is single-digit dollars just to do some performative tool use to justify and slightly narrow down its initial intuition from the base model.

    • HarHarVeryFunny 6 hours ago
      I don't see how this is mind blowing, or even mildly surprising! It's essentially going to use the set of features detected in the photo as a filter to find matching photos in the training set, and report the most frequent matches. Sometimes it'll get it right, sometimes not.

      It'd be interesting to see the photo in the linked story at the same resolution as provided to o3, since the licence plate in the photo in the story is at way lower resolution than the zoomed-in version shown that o3 had access to. It's not a great piece of primary evidence to focus on though, since a CA plate doesn't have to mean the car is in CA.

      The clues that o3 doesn't seem to be paying attention to seem just as notable as the ones it does. Why is it not talking about car models, felt roof tiles, sash windows, mini blinds, the fire pit (with a warning on the glass, in English), etc.?

      Being location-doxxed by a computer trained on a massive set of photos is unsurprising, but the example given doesn't seem a great example of why this could/will be a game changer in terms of privacy. There's not much detective work going on here - just narrowing the possibilities based on some of the available information, and happening to get it right in this case.

      • simonw 6 hours ago
        If you want to be impressed I suggest trying this yourself on your own photos.

        I don't consider it my job to impress or mind-blow people: I try to present as realistic as possible a representation of what this stuff can do.

        That's why I picked an example where its first guess was 200 miles off!

        • HarHarVeryFunny 4 hours ago
          I'm not a computer. I expect a computer to also do better than me at memorizing the phone book, but I'm not impressed by it.
          • simonw 4 hours ago
            In that case, are you at all surprised that this technology did not exist two years ago?
            • HarHarVeryFunny 3 hours ago
              I'm not sure what you're getting at. What's useful about LLMs, and especially multi-modal ones, is that you can ask them anything and they'll answer to the best of their ability (especially if well prompted). I'm not sure that o3, as a "reasoning" model, is adding much value here, since there is not a whole lot of reasoning going on.

              This is basically fine-grained image captioning followed by nearest neighbor search, which is certainly something you could have built as soon as decent NN-based image captioning became available, at least 10 years ago. Did anyone do it? I've no idea, although it'd seem surprising if not.

              As noted, what's useful about LLMs is that they are a "generic solution", so one doesn't need to create a custom ML-based app to be able to do things like this, but I don't find much of a surprise factor in them doing well at geoguessing since this type of "fuzzy lookup" is exactly what a predict-next-token engine is designed to do.

              • simonw 3 hours ago
                How does nearest neighbor search relate to this?
                • HarHarVeryFunny 3 hours ago
                  If you forget the LLM implementation, fundamentally what you are trying to do here is first detect a bunch of features in the photo (i.e. fine-grain image captioning: "in foreground a firepit with safety warning on glass, in background a model XX car parked in front of a bungalow, in distance rolling hills" etc.), then do a fuzzy match of this feature set against other photos you have seen - which ones have the greatest number of things in common with the photo you are looking up? You could implement this in a custom app by creating a high-dimensional feature-space embedding and then looking for nearest neighbors, similar to how face recognition works.

                  Of course an LLM is performing this a bit differently, and with a bit more flexibility, but the starting point is going to be the same - image feature/caption extraction, which in combination then recalls related training samples (both text-only, and perhaps multi-modal) which are used to predict the location answer you have asked for. The flexibility of the LLM is that it isn't just treating each feature ("fire pit", "CA licence plate") as independent, but will naturally recall contexts where multiple of these occur together, though IMO not so different in that regard from high-dimensional nearest neighbor search.
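
                  To make that concrete, here is a toy sketch of the kind of lookup I mean (plain numpy; the embeddings and coordinates are random stand-ins for what an image encoder and a database of geotagged photos would provide):

                      import numpy as np

                      rng = np.random.default_rng(0)

                      # Stand-in database: one embedding per geotagged reference photo.
                      db = rng.normal(size=(10_000, 512))
                      db /= np.linalg.norm(db, axis=1, keepdims=True)
                      coords = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(10_000, 2))

                      def locate(query, k=5):
                          """Average lat/lon of the k most similar reference photos."""
                          q = query / np.linalg.norm(query)
                          top = np.argsort(db @ q)[-k:]  # highest cosine similarity
                          return coords[top].mean(axis=0)

                      print(locate(rng.normal(size=512)))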

                  • simonw 2 hours ago
                    Thanks, that's a good explanation.

                    My hunch is that the way the latest o3/o4-mini "reasoning" models work is different enough to be notable.

                    If you read through their thought traces they're tackling the problem in a pretty interesting way, including running additional web searches for extra contextual clues.

                    • HarHarVeryFunny 36 minutes ago
                      It's not clear how much the reasoning helped, especially since the reasoning OpenAI displays is more a post-hoc summary of what it did than the actual reasoning process itself, although after the interest in DeepSeek-R's traces they did say they would show more. You would think that it could potentially do things like image search to try to verify/reject any initial clue-based hunches, but it's not obvious whether it did that or not.

                      The "initial" response of the model is interesting:

                      "The image shows a residential neighborhood with small houses, one of which is light green with a white picket fence and a grey roof. The fire pit and signposts hint at a restaurant or cafe, possibly near the coast. The environment, with olive trees and California poppies, suggests a coastal California location, perhaps Central Coast like Cambria or Morro Bay. The pastel-colored houses and the hills in the background resemble areas like Big Sur. A license plate could offer more, but it's hard to read."

                      Where did all that come from?! The leap from fire pit & signposts to possible coastal location is wild (& lucky) if that is really the logic it used. The comment on the potential utility of a licence plate, without having first noted that one is visible, is odd: seemingly an indication that we are seeing a summary of some unknown initial response, and/or perhaps that the model was trained on a mass of geoguessing data where photos were paired not with descriptions but rather with commentary such as this.

                      The model doesn't seem to realize the conflict between this being a residential neighborhood, and there being a presumed restaurant across the road from a residence!

            • skydhash 4 hours ago
              Did it not exist, or was no one interested enough to build one? I'm pretty certain there's a database of portraits somewhere that's used to search ID details from a photograph. Automatic tagging exists for photo software. I don't see why that couldn't be extrapolated to landmarks with enough data.
              • XenophileJKO 3 hours ago
                I think you are underestimating the importance of a "world model" in the process. It is the modeling of how all these details are related to each other that is critical here.

                The LLM will have an edge by being able to draw on higher level abstract concepts.

              • simonw 4 hours ago
                If it existed two years ago I certainly couldn't play with it on my phone.
                • skydhash 4 hours ago
                  You’re not playing with it on your phone. You’re accessing a service with your phone. Like saying you can use emacs on iOS when you are just ssh-ing to a remote Linux box.
    • hyperlink014 6 hours ago
      It absolutely tried to use EXIF data when I asked it to guess the location. Here is proof - https://imgur.com/a/CHde2Cx

      I couldn't attach the chat directly since it's a temporary chat.

  • neom 5 hours ago
    I took one of the conversations you linked, and used it to find out what else it knows about you. "In simple terms: Simon represents an elite technologist class — someone who is not merely wealthy or successful but who also shapes technology and information flows themselves, especially in open systems. His socioeconomic profile is "creator of value," not merely "consumer of value."

    If you want, I could sketch a socioeconomic archetype like "The Free Agent Technologist" that would match people like him really well. Would you like me to?"

  • cluelesssness 2 hours ago
    There is also some more systematic research on this phenomenon from roughly half a year ago, demonstrating that even much less recent vision-language models are pretty good at guessing not just your location but also other personal info such as sex, age, education, etc.

    https://arxiv.org/pdf/2404.10618

    would be interesting to see how much better these reasoning models would be on the benchmark

  • tompagenet2 4 hours ago
    I thought from this [0] that o3 sometimes claims to have used Python when it didn't actually do so. Or have I misunderstood, or unduly trusted, that link?

    [0] https://transluce.org/investigating-o3-truthfulness

    • simonw 4 hours ago
      You need to learn how to tell the difference between a syntax-highlighted Markdown Python code block and Python that was actually passed through the Code Interpreter tool, but there is a visual difference: executed Python displays on a black background.
  • jillesvangurp 6 hours ago
    If you want to exclude memory and EXIF data, just open Street View in some random corner of the world and take a screenshot (avoiding any text, obviously). It's pretty good if you give it enough to reason with. A scripted version of this workflow is sketched below.

    It basically iterates on coming up with some hypothesis and then does web searches to validate those.
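
    Something like this works against the Street View Static API (a sketch; YOUR_KEY is a placeholder, and you'd still want to eyeball the image for text before using it):

        import random
        import urllib.request

        # Pick a random spot; most random points have no coverage, in which
        # case the API returns a "no imagery" placeholder image.
        lat, lon = random.uniform(-60, 70), random.uniform(-180, 180)
        url = ("https://maps.googleapis.com/maps/api/streetview"
               f"?size=640x640&location={lat},{lon}&key=YOUR_KEY")
        urllib.request.urlretrieve(url, "spot.jpg")
        print(lat, lon)  # the ground truth to score the model's guess against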

    • pell 6 hours ago
      I just took a few random spots from around the globe and it got most of them right and some of them incredibly precisely right. I also tried to exclude obvious hints such as license plates, street signs, advertising, etc.
    • tokai 6 hours ago
      Isn't all of streetview in the training set?
      • robrenaud 6 hours ago
        O3 is OpenAI. Street view is Google. I really doubt OpenAI is scraping enormous amounts of random street view images to train their model.
        • gruez 3 hours ago
          Why not? They allegedly trained on enough books and newspapers that they have publishers and news organizations go after them.
          • robrenaud 2 hours ago
            Human generated tokens contain so much more information per byte than random street view images.
  • SwankyHank 2 hours ago
    I also guessed (at first glance) that it was Half Moon Bay.
  • caseyy 5 hours ago
    Thanks for sharing. I fed it three photos, and it got the one I was close to right (using my location), but for the other two, it could only guess the country. That's still pretty cool.
  • brookst 6 hours ago
    I don’t understand the “dystopian” angle. Maybe I’m just old, but I remember the wonder when the Internet made most knowledge available with a few keystrokes. Having deductive reasoning with the same convenience feels wonderful, not dystopian.
    • GeoAtreides 4 hours ago
      That's because you haven't lived in an authoritarian regime. The NKVD, the Stasi, and the Gestapo would all have killed for such capabilities.

      As an Eastern European who grew up and lived in such a regime, I would like to respectfully remind all Westerners that their carefree lives are a privilege the majority of the world doesn't have.

      • meowface 3 hours ago
        Not to get political, but it deeply irks me to see some American far-leftists glamorize and glorify the Soviet regime and even modern regimes like North Korea's. Especially when certain popular streamers do it. Obviously seeing far-right American internet personalities glorify the Nazi regime is also awful, but the former is often normalized and not considered ostracization-worthy while the latter (rightfully) is.
        • greenchair 2 hours ago
          It's pretty easy to understand: the American left are essentially rebellious teens who never grew up. Contrarian by nature.
    • AstroBen 6 hours ago
      Accessible to anyone, superhuman levels of deductive reasoning to pick out your location from super minor details in an innocent photo? That could certainly be dystopian
      • mcbuilder 5 hours ago
        It certainly could be, but not all technological advancement is necessarily dystopian. You say that currently everyone has access to this, while before it was only available to nation states who could hire teams of skilled analysts. I mean, I agree it's scary that now a stalker could track a victim, but cars and cameras probably help with that as well. So I think it's fair to challenge "dystopian"; someone will use it for non-nefarious purposes.
      • brookst 6 hours ago
        Anyone can post to r/geoguessr. Has that been dystopian all this time and I never noticed?
        • simonw 6 hours ago
          Honestly, yes it's a bit dystopian that a forum online exists where anyone can post a photo and experts from all around the world will help them figure out the exact location of that photo.

          Lots of things that exist in our world today are mildly dystopian.

    • simonw 6 hours ago
      Have you ever known anyone who's escaped from an abusive relationship? It's not at all uncommon for people to have legitimate reasons not to be found.
      • frozenseven 4 hours ago
        The supposed existence of your friend doesn't dictate policy, much less reality. It's already been explained to you that GeoGuessr exists and is very popular. What o3 can do, so can a million humans out there.

        You are trying to manufacture outrage. Plain and simple.

      • brookst 6 hours ago
        Sure, but what does this change? Plenty of people are better geoguessers than this LLM. Anyone trying to find someone who is both trying not to be found and posting pictures publicly is just going to copy them to Reddit and ask “where is this”.

        I’m not a fan of this variation on “think of the children”. It has always been possible to deduce location from images. The fact that LLMs can also do it changes exactly nothing about the privacy considerations of sharing photos.

        It’s fine to fear AI but this is a really weak angle to come at it from.

        • simonw 6 hours ago
          Same as with other forms of automation: it makes this capability much easier for bad actors to obtain.

          I've got the impression that geoguessing has at least a loose code of ethics associated with it. I imagine you'd have to work quite hard to find someone with those skills to help you stalk your ex - you'd have to mislead them about your goal, at least.

          Or you can sign up for ChatGPT and have as many goes as you like with as many photos as you can find.

          I have a friend who's had trouble with stalkers. I'm making sure they're aware that this kind of thing has just got a lot easier.

    • pwg 6 hours ago
      Think: "stalker".
      • NitpickLawyer 6 hours ago
        If the person is already a stalker, you'd think they'd already know this, no? There's that anecdotal story from Japan where a vlogger was located by her "fans" from a reflection of her local bus station or something. Weird people will do weird stuff regardless of technology, IMO.

        And the governments are already doing this for decades at least, so ... I think the tech could be a net benefit, as with many other technologies that have matured.

        • AstroBen 5 hours ago
          > weird people will do weird stuff regardless of technology

          If I were someone's only stalker, I'd be absolutely hopeless at finding their location from images. I'm really bad at it if I don't know the location first hand

          But now, suddenly with AI I'm close to an expert. The accessibility of just uploading an image to ChatGPT means everyone has an easy way of abusing it, not just a small percentage of the population

  • UrineSqueegee 2 hours ago
    I am honestly baffled by these comments. The few times I've given it photos to guess the location from, it couldn't get even remotely close.
    • simonw 2 hours ago
      Which model and prompt did you use? What kind of photos?
  • rolph 5 hours ago
    There must be a threshold level of detail, or cues.

    My hunch is that if you submit a photo of a clear sky, or a blue screen, it will choke.

    • simonw 5 hours ago
      Absolutely. It's not at all hard to come up with images that this won't work with. What's fun is coming up with images that give it a fighting chance (while not being too obvious), like the one in my post.
  • andrewstuart 2 hours ago
    Crime fighting will no doubt use these sorts of techniques.

    In Australia recently there was a terrible criminal case of massive child abuse.

    They caught the guy because he was posting videos, and one of them showed a blanket which investigators somehow identified and traced to the childcare centre where he worked.

    It wasn’t done with AI but I can imagine photos and videos being fed into AI in such situations and asked to identify the location/people or other clues.

  • rvba 2 hours ago
    I wonder if it can catch spies
  • geoffbp 5 hours ago
    Just me who couldn’t load the conversation from the blog?
  • belter 4 hours ago
    OK, so if given LLM-generated code... will o3 be able to find commercial or open source code similar, or very, very similar, to the LLM-generated code? Meaning the training source code, possibly showing copyright violations?

    So its own code version of "where was this photo taken?"

    • simonw 3 hours ago
      o3 is very good at searching the web, so it might be able to do that.
  • ksec 7 hours ago
    I wonder what would happen if you put fake EXIF information in and asked it to do the same. (We'd be deliberately misleading the LLM; see the sketch at the end of this comment.)

    I am also wondering whether we have any major breakthroughs (comparatively speaking) coming out of LLM, or non-LLM, AI R&D.
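
    Something like this would plant the fake tags (a sketch assuming the piexif library; "photo.jpg" is a placeholder):

        import piexif

        def dms(deg):
            """Decimal degrees -> EXIF rational degrees/minutes/seconds."""
            d = int(deg)
            m = int((deg - d) * 60)
            s = round((deg - d - m / 60) * 3600 * 100)
            return [(d, 1), (m, 1), (s, 100)]

        # Plant coordinates pointing at the Eiffel Tower, wherever the
        # photo was really taken.
        gps = {
            piexif.GPSIFD.GPSLatitudeRef: b"N",
            piexif.GPSIFD.GPSLatitude: dms(48.8584),
            piexif.GPSIFD.GPSLongitudeRef: b"E",
            piexif.GPSIFD.GPSLongitude: dms(2.2945),
        }
        piexif.insert(piexif.dump({"GPS": gps}), "photo.jpg")  # rewrites in place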

  • rafaelmn 7 hours ago
    The fact that they give these models low-res photos but don't provide them with built-in tools for querying more details feels suboptimal. Executing Python to crop an image is clever from the model and a facepalm from the implementation side.
    • tantalor 7 hours ago
      I don't follow. Are you suggesting full Blade Runner enhance mode?
      • oortoo 7 hours ago
        No, the LLM can only "see" a lower-res version of the uploaded photo. It has to crop to process finer details, and they are suggesting it's silly that this isn't a built-in feature and instead relies on Python to do it.
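
        The crops it runs in the Code Interpreter presumably look something like this (a guess at the shape of the code, with made-up coordinates, not the model's actual output):

            from PIL import Image

            img = Image.open("photo.jpg")
            # Cut out a region of interest and upscale it so fine detail
            # (e.g. a licence plate) survives the model's downscaled input.
            zoom = img.crop((1200, 800, 1500, 950))  # left, top, right, bottom
            zoom = zoom.resize((zoom.width * 4, zoom.height * 4), Image.LANCZOS)
            zoom.save("zoom.png")
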
  • OutOfHere 7 hours ago
    Why is it dystopian? It's a nice utility.

    The tool is just intelligence. Intelligence itself is not dystopian or utopian. It's what you use it for that makes it so.

    • sorcerer-mar 7 hours ago
      What usecases do you have in mind?
    • blueprint 7 hours ago
      please accidentally post an identifying photo of your neighborhood...
      • brookst 6 hours ago
        I live in Belltown, Seattle. Oh no! The world knows my neighborhood!
      • bslanej 7 hours ago
        How do you “accidentally post a photo”?
        • dredmorbius 6 hours ago
          It's possible to accidentally post something, or have it swiped by many of the untrusted and untrustworthy applications on a PC or mobile device.

          It's even easier to unintentionally include identifying information when intentionally making a post, whether by failing to catch it when submitting, or by including additional images in your online posting.

          There are also wholesale uploads people may make automatically, e.g., when backing up content or transferring data between systems. That may end up unsecured or in someone else's hands.

          Even very obscure elements may identify a very specific location. There's a story of how a woman's location was identified by the interior of her hotel room, I believe by the doorknobs. An art piece placed in a remote Utah location was geolocated based on elements of the geology, sun angle, and the like, within a few hours. The art piece is discussed in this NPR piece: <https://www.npr.org/2020/11/28/939629355/unraveling-the-myst...> (2020).

          Geoguessing of its location: <https://web.archive.org/web/20201130222850/https://www.reddi...>

          Wikipedia article: <https://en.wikipedia.org/wiki/Utah_monolith>

          These are questions which barely deserve answering, let alone asking, in this day and age.

        • MobiusHorizons 6 hours ago
          I read the "accidentally" as applying to the "identifying", not the "post", although I agree the sentence structure would suggest "accidentally" as a modifier for "post", which makes a lot less sense.
        • simonw 7 hours ago
          A selfie with a snippet of building in the background might give away your location even if you think there's no way it could be locatable.
          • lesdeuxmagots 6 hours ago
            Did you somehow accidentally share a selfie?
    • rvz 7 hours ago
      Those who say it is "utopian" are also okay with: "If you've got nothing to hide, you've got nothing to fear".

      It is dystopian.

      • brookst 6 hours ago
        Not everything has to be the best thing ever or worst thing ever.

        Some things are just tools that will be used for both good and bad.

      • OutOfHere 6 hours ago
        The tool is just intelligence. Intelligence itself is not dystopian or utopian. It's what you use it for that makes it so.

        If you don't want to post a photo, then don't post a photo.

        • plsbenice34 6 hours ago
          >If you don't want to post a photo, then don't post a photo.

          Other people have posted photos of me without my consent; how am I meant to stop that?

          If I posted photos 20 years ago when I was a dumb teenager, I can't undo that either.

          • frozenseven 4 hours ago
            Those are still the consequences of your own actions. If someone is so desperate to find you, there are easier ways. GeoGuessr isn't exactly super hard. If privacy is so important to you, it's all down to personal responsibility.

            But this here? This is just drama over nothing.

          • otterley 5 hours ago
            What’s the impact to you?
            • plsbenice34 5 hours ago
              I had a stalker in the past. I feel more comfortable without him knowing where I live.

              In general I have a strong need for privacy. Not having privacy is generally unsettling, in the same way that I close the door when using a toilet or having a shower. I am disturbed by people who don't seem to have an understanding of that concept.

              • otterley 5 hours ago
                I totally get it. I’m sorry that happened to you.
              • throwaway84674 4 hours ago
                Being able to locate people through photos is nothing new. Yes, AI made it more accessible, but it should've always been a part of your threat model.
    • laurent_du 6 hours ago
      I agree with you. The opposite opinion sounds psychotic and paranoid to me.
      • simonw 6 hours ago
        You've definitely never had a conversation with someone who's escaped an abusive relationship then.
        • ultimafan 3 hours ago
          I've definitely noticed that there's a huge trend of technology-at-any-cost apologists on HN who can't pause to imagine the real-world impacts of how the AI products they're championing will actually be used.

          It's terrifying that people exist who have no problem making the world a shittier place while hiding behind a cover of "well, it's not the technology that's evil but the people abusing it", as if each tool given to bad actors doesn't make their job easier and easier to do.

          Seriously, what's the utility of developing and making something like this public use?

          • simonw 3 hours ago
            "Seriously, what's the utility of developing and making something like this public use?"

            An interesting question for me here is if these models were deliberately trained to enable this capability, or if it's a side-effect of their vision abilities in general.

            If you train a general purpose vision-LLM to have knowledge of architecture, vegetation, weather conditions, road signs, street furniture etc... it's going to be able to predict locations from photos.

            You could try to stop it - have a system prompt that says "if someone asks you where a photo was taken, don't tell them" - but experience shows those kinds of restrictions are mostly for show; they usually fall over the moment someone adversarial figures out a way to subvert them.

  • casey2 5 hours ago
    Surreal and dystopian is realizing that the US military has likely had (much) better tech than this for at least a decade.
    • simonw 5 hours ago
      I wonder if they have?

      My current intuition is that the US military / NSA etc. have been just as surprised by the explosion in capabilities of LLMs/transformers as everyone else.

      (I'm using "intuition" here as a fancy word for "dumb-ass guess".)

      I'd be interested to know if the NSA were running their own GPT-style models years before OpenAI started publishing their results.

      • pphysch 4 hours ago
        You don't need an LLM to do this. A dedicated image->coords model would likely perform much better, and that's old-school ML at this point.
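
        Roughly: treat geolocation as classification over a grid of geographic cells. A minimal sketch (backbone and cell count are illustrative, not a real trained system):

            import torch
            import torch.nn as nn
            from torchvision import models

            NUM_CELLS = 2048  # a coarse partition of the globe into cells

            # Standard image backbone with its head swapped for cell logits.
            net = models.resnet50(weights=None)
            net.fc = nn.Linear(net.fc.in_features, NUM_CELLS)
            net.eval()

            def predict_cell(images: torch.Tensor) -> torch.Tensor:
                """Most likely geocell index per image (untrained here)."""
                with torch.no_grad():
                    return net(images).argmax(dim=1)

            print(predict_cell(torch.randn(1, 3, 224, 224)))
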
        • simonw 4 hours ago
          Have you seen a description of one of those? I didn't know that those existed.
    • kenjackson 5 hours ago
      The US military probably has tons of satellite data that they can cross against an image, but not the automated reasoning. But put those two together and it really gets scary.
  • IAmGraydon 6 hours ago
    Why would you go to all the trouble of creating a blog post about this but leave the EXIF data in the image and then proclaim that it probably works without the EXIF too? Why not remove the EXIF in the first place? The two EXIF-less examples given in the update very clearly show iconic landmarks, which makes guessing very easy.
    • simonw 6 hours ago
      I had already convinced myself through prior experiments that it wasn't using EXIF data, and decided not to spend extra time making my post 100% proof against cynics because I know from past experience that truly dedicated cynics will always find something to invalidate what they are reading.

      I don't know how "iconic" that rocky outcrop in Madagascar is, to be honest. Google doesn't return much about it.

      • hyperlink014 6 hours ago
        It absolutely tried to use EXIF data when I asked it to guess the location. Here is proof - https://imgur.com/a/CHde2Cx

        I couldn't attach the chat directly since it's a temporary chat.

        • simonw 6 hours ago
          Right, but that's at least evident in the thinking trace. I added a note about that to my post.
          • AstroBen 6 hours ago
            How much can we trust the thinking trace? At most it says what's in its training set, but Anthropic showed that's not necessarily accurate for how it gets to its answer

            I tried this with (what I thought was) a very generic street image in Bangkok. It guessed the city correctly, saying that "people are wearing yellow which is used to honor the monarchy". Wow, cool. I checked the image again and there's a small Thai flag it didn't mention at all. It seems just as plausible, even likely, that it picked up on that.

            • simonw 6 hours ago
              I trust the thinking trace to show me the Python it runs.

              (Though interestingly I believe there are cases where it can run Python without showing you, which is frustrating especially as I don't fully understand what those are. But I showed other evidence that it can do this without EXIF.)

              In your example there I wouldn't be at all surprised if it used the flag without mentioning it. The non-code parts of the thinking traces are generally suspicious.

            • whimsicalism 6 hours ago
            If it's using tools to extract EXIF, it's gonna be in the trace; Anthropic's paper is irrelevant here.
      • raincole 6 hours ago
        > truly dedicated cynics

        I bet a lot of people (on HN at least) thought of "Does it use EXIF?" when they read the title alone, and got surprised that it was not the first thing you tested.

        • whimsicalism 6 hours ago
          It doesn't use EXIF most of the time; it's able to do this consistently from Google Maps screenshots.
  • api 6 hours ago
    Dystopian: the surveillance potential, both from a big surveillance (corporate / government / political) and an individual surveillance (stalkers) perspective.

    Not dystopian: the crime solving potential, the research potential, the historical narrative reconstruction potential, etc.

    It's a pattern I keep seeing over and over again. There seem to be a lot of values that we can obtain, individually or collectively, by bartering privacy in exchange for them.

    If we had a sane world with sane, reliable, competent leadership, this would be less of a concern. But unfortunately we seem to have abdicated leadership globally to a political class that is increasingly incompetent and unhinged. My hypothesis on this is that sane, reasonable people are repelled from politics due to the emotional and social toxicity of that sector, leaving the sector to narcissists and delusional ideologues.

    Unfortunately if we're going to abdicate our political sphere to narcissists and delusional ideologues, sacrificing privacy at the same time is a recipe for any number of really bad outcomes.

  • croes 6 hours ago
    And now imagine what the Trump administration can do with such tools
    • otterley 5 hours ago
      Even without this tool, they have many more at their disposal to accomplish their goals. Practically anyone who possesses a cell phone, or communicates with anyone who does, can be quickly located. They have aircraft and plenty of physical surveillance equipment as well.
  • esjeon 6 hours ago
    I just tossed a group photo w/ some cherry blossom in the background, and GPT immediately answered it's taken in Japan.

    Yes, I'm very very very scared. /s

    • pcthrowaway 6 hours ago
      I'm curious how many cues it's using from profiling people in that guess.

      A photo of people with cherry blossoms could be in many places, but if the majority of the people in the photo happen to be Japanese (and I'm curious how good LLMs are at determining the ethnicity of people now, and also curious if they would try to guess this if asked), it might guess Japan even if the cherry blossoms were in, say, Vancouver.

    • simonw 6 hours ago
      Finding a photo that this doesn't work on is trivially easy.
  • new_user_final 7 hours ago
    6 minutes and 48 seconds? Some YouTuber can find the location in 0.1 second. I don't know if those videos are fake.
    • SamPatt 6 hours ago
      They aren't fake. I'm a Master I level GeoGuessr player (the penultimate competitive ranking) and what the pros can do is very real.

      I looked at the image in the post before seeing the answer and would have guessed near San Francisco.

      It seems impressive if you haven't played GeoGuessr a lot, but you'd be surprised at how much information about location there is in an image. The LLMs are just verbalizing what happens in a few seconds in a good player's mind.

      • raincole 6 hours ago
        The fact some humans can do that doesn't make it any less impressive to me.

        I knew Terence Tao could solve Math Olympiad questions and much, much more difficult questions. I was still very impressed by AlphaProof [0].

        [0] https://deepmind.google/discover/blog/ai-solves-imo-problems...

        • SamPatt 4 hours ago
          I couldn't agree more. It's very impressive. I'm just countering the claim that it might be cheating. Of course, sometimes it might be, but knowing what I know now, it's completely possible.
    • speedgoose 7 hours ago
      If you are a professional geoguessr player, you play so many games that it’s not unrealistic to get very good guesses once in a while.

      But I wouldn’t be surprised if some form of cheating is happening.

      • cenamus 7 hours ago
        Not just cheating, but also lots of meta: the color/format of the blurred number plates, the stitching of the image at the bottom, the mounting location of the camera on the Google car. There are lists of which variant of car was used in which countries, and so on. Which is still impressive, but not quite the same as just guessing from the image.
        • _the_inflator 7 hours ago
          Nice examples.

          Every attribute is of importance. A PhD puts you in a 1-3% pool. What data do you have, and what is needed to hit a certain goal? Data science can be considered wizardry when exercised on seemingly innocent and mundane things like a photo.

      • tiagod 7 hours ago
        GeoGuessr is different, as you can rely on implementation details such as the camera generation, the car, the processing, and things like a large part of some countries being covered with a dirty spot somewhere in the FOV.
    • the_mitsuhiko 7 hours ago
      The best geoguessers have been beaten by AI a while back.
    • incognito124 7 hours ago
      If you mean georainbolt, it's genuine
    • GaggiX 7 hours ago
      Some people are really good at GeoGuessr, but their best performances are also more likely to get views.

      If you want a bot that is extremely strong at geoguessr there is this: https://arxiv.org/abs/2307.05845

      One forward pass is probably faster than 0.1 second. You can see its performance here: https://youtube.com/watch?v=ts5lPDV--cU (rainbolt is a really strong player)

  • mk89 6 hours ago
    This tool just makes it easier for weirdos to achieve their goals of stalking women and kids.

    Crazy that this is even allowed.

    Who the hell needs to know the precise location of a picture, besides law enforcement? A rough location is sufficient most of the time. Like a region, a state, or a landscape (e.g., when you see the Bing background pictures, it's nice to see where they were taken).

    This tool will give a boost to all those creeps out there who can get access to one or two pictures.

    • numpad0 6 hours ago
      I can't believe such a comment is posted now in 2025. Everyone had moved on from that kind of thing at least a decade ago. Or are there some parts of the Internet where this would be new?
    • semiquaver 6 hours ago
      This is pure Luddism. A human could have done the exact same thing. I'll also point out that in this case the most confident guess was 200 miles off, and the second guess, which was correct, was only down to the city level. Not remotely what anyone would consider precise.
      • simonw 6 hours ago
        The fact that it got it wrong was one of the reasons I picked that example: it's much more interesting that way.

        If you feed it a photo with a clear landmark it will get the location exactly right.

        If you feed it a photo that's a close up of a brick wall it won't have a chance.

        What's interesting is how well it can do on this range of tasks. If you don't think that's at least interesting I'm not sure what I can do for you.

      • mk89 6 hours ago
        A skilled human can do the same thing, but not everyone is open to offering this sort of service for certain purposes.

        Making a tool like this, trained on existing map services (for example Google Street View images), gives everyone, no matter who, the potential to find someone in no time.

        These tools are growing like crazy; how long will it take before someone "democratizes" the "location services market"...

        • NitpickLawyer 6 hours ago
          > but not everyone is open to offering this sort of services for certain purposes.

          Sorry but I call bull on this. Put it on one of the chans with a sob story and it gets "solved" in seconds. Or reddit w/ something bait like "my capitalist boss threatened to let my puppy starve because he wants profits, AITA if I glitter bomb his office?"...

      • AstroBen 6 hours ago
        for now. These things have a way of very quickly going from somewhat-ok to superhuman in months
    • samlinnfer 6 hours ago
      Will someone please think of the women and children?