I’ve banned query strings

(chrismorgan.info)

219 points | by susam 6 hours ago

32 comments

  • jedimastert 4 hours ago
    You know, I was actually really curious about this, so I went back to the HTML and URL standards, and surprisingly they don't define any format beyond percent encoding. One might conflate query strings with "form-urlencoded"[0] query strings, which is one potential interoperability format, but in general a query string is just any percent-encoded string following a "?" in a URL[1], and just another property of the URL object that can be used in generating a response. There is additionally a URLSearchParams object that results from parsing the query string with the form-urlencoded parser, but that is simply an interoperability layer for JavaScript.

    I'm going to be honest: I was pretty geared up to have a contrarian opinion until I looked at the standards, but they're actually pretty clear. A 404 could be a proper response to an unexpected query string; the query string is as much part of the URL API as the path is, and I think pretty much everyone can acknowledge that just tacking random stuff onto the path would be ill-advised and undefined behavior.

    [0]: https://url.spec.whatwg.org/#application/x-www-form-urlencod...

    [1]: https://url.spec.whatwg.org/#url-class
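    To make the two layers concrete, here is a small Python sketch (my own illustration, using the standard library's urllib.parse; it is not from the spec): the query component is just an opaque string, and form-urlencoded parsing is a separate, optional step, analogous to URLSearchParams in the browser.

```python
from urllib.parse import urlsplit, parse_qs

# The query component is just the raw string after "?"...
parts = urlsplit("https://example.com/path?a=1&a=2&junk")
raw_query = parts.query        # 'a=1&a=2&junk'

# ...and form-urlencoded parsing is a separate, optional layer.
# Note that "junk" (no "=") is simply dropped by this parser.
decoded = parse_qs(raw_query)  # {'a': ['1', '2']}
```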

    • wongarsu 2 hours ago
      Back in the day it was reasonably common for CMSs and forums to have only an index.php and route entirely by query string (in form-urlencoded form; people were not savages). So you would have index.php?p=home and index.php?p=shop. Or index.php?action=showthread&forum=42&thread=17976. It should be immediately obvious that in that scheme 404 is indeed the correct answer to unknown query parameters.

      In fact lots of sites still work like that; they just hide it behind a couple of rewrite rules in Apache/nginx for SEO reasons.

      • Semiapies 1 hour ago
        If you're routing like it's 1999, sure, 404.

        On the other hand, if it's a CRUD app and you're filtering a list of entities by various field values? Returning that no items matched your selection (or an empty list, if an API) makes more sense than a 404, which would be more appropriate for an attempt to pull up a nonexistent entity URI.

        • Sander_Marechal 21 minutes ago
          There is no reason you can't return that "no items matched your selection" with a 404 HTTP response code instead of a 200.
        • stouset 17 minutes ago
          The point was that returning a 404 for unexpected query strings doesn’t just happen to be okay per the specs, but that there is significant historical precedent for doing so, based on application design that was common in the past.
        • brightball 18 minutes ago
          Yea, empty response at a valid path. Isn’t 204 the code for it?

          Lots of REST libraries that I’ve used treat any 400-level response as an error, so generating a 404 for an empty list would just create more headaches.

      • sroussey 18 minutes ago
        Oh no, looks like my old forum software urls.
    • qiller 25 minutes ago
      Interestingly, quite a few places that should treat query strings transparently make a lot of assumptions about their structure. We ran into that when picking a new CDN, some providers didn't handle repeat parameters (?a=1&a=2) correctly.
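      A quick way to see the difference, sketched in Python with the standard library (my example, not tied to any particular CDN):

```python
from urllib.parse import parse_qs, parse_qsl

query = "a=1&a=2"

# A parser that preserves structure keeps every occurrence of a key...
repeats_kept = parse_qs(query)      # {'a': ['1', '2']}

# ...while a naive "last value wins" dictionary silently drops one,
# which is the kind of assumption an intermediary can get wrong.
last_wins = dict(parse_qsl(query))  # {'a': '2'}
```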
      • sroussey 17 minutes ago
        What do you mean by "correctly"?
    • nrds 2 hours ago
      Wait until you realize that the difference between path and query string is entirely arbitrary and decided by the server. Query strings should never have existed. They are an implementation detail of CGI webservers that leaked all over everything and now smells really bad.
      • mikeocool 1 hour ago
        I dunno, it seems like the fact that we arrived at a fairly standard structure for URL paths that works pretty well is not a bad outcome.

        Seems a lot better than the other potential world we could have lived in, where paths were a black box and every web server/framework invented their own structure for them.

        • gritzko 43 minutes ago
          In my current project I use URIs to refer to absolutely any entity in a git(-ish) repo. Files, branches, revisions, diffs, anything. URI turns out to be a really good addressing scheme for everything. Surprise. But the most used and abused element is always the path. Query takes a lot of that mess away. Might have been unmanageable otherwise.

          https://github.com/gritzko/beagle

          • gritzko 33 minutes ago
            In fact, GitHub URIs are a good example of overusing paths: https://github.com/gritzko/beagle/blob/a7e17290a39250092055f...

              - user gritzko,
              - project beagle, 
              - view blob, 
              - commit a7e17290a39250092055fcda5ae7015868dabdb4, 
              - file path VERBS.md
            
            ... all concatenated indiscriminately.
            • em-bee 17 minutes ago
              what would be a better way of doing that? i am not disagreeing, but i just can't think of any way to improve on this. put everything into the query part? i prefer to use the query only for optional arguments. in this example the blob argument is the only thing that doesn't fit in my opinion.
            • iainmerrick 13 minutes ago
              Back in the day there was an attempt to introduce "matrix URIs" as a more structured alternative to query strings: https://www.w3.org/DesignIssues/MatrixURIs.html

              Of course there's nothing to stop you using URIs like this (I think Angular does, or did at one point?) but I don't think the rules for relative matrix URIs were ever figured out and standardised, so browsers don't do anything useful with them.

        • hamburglar 1 hour ago
          My next website is going to have the path portion of the URL be a base64 encoded ASN.1 blob.
      • halayli 28 minutes ago
        Nothing you said here is correct. Paths, query strings, and fragments are all well defined entities. https://datatracker.ietf.org/doc/html/rfc3986#section-3.3
        • sroussey 14 minutes ago
          "A string between ? and #" isn't well defined. Or it is, and it says very little.
      • jolmg 2 hours ago
        It's arbitrary to a degree like the difference between using an attribute or child element in XML, but it's not entirely arbitrary. If you want to include data in the URL that's not part of the hierarchy of the path, query strings are good for that.
      • gpvos 1 hour ago
        Query strings existed before CGI did, and the way they're defined to be filled in from web forms is quite useful; I wouldn't want to need Javascript to fit that into path format. There's nothing wrong about having things decided by the server; I don't get that part of your argument at all.
        • cobbzilla 1 hour ago
          Maybe dumb question: how does the server “decide” anything other than what file to serve? Today we have many choices but back in the day CGI was the first standard way to do it.

          So yes query parameters existed before CGI but to use them you had to hack your server to do something with them (iirc NCSA web servers had some magic hacks for queries). CGI drove standardization.

          • stirfish 1 hour ago

                func specialHandler(w http.ResponseWriter, r *http.Request) {
                    if time.Now().Weekday() == time.Tuesday {
                        http.NotFound(w, r)
                        return
                    }

                    fmt.Fprintln(w, "server made a decision")
                }
            
            Your server can make decisions however you program it to, you know? It's just software.

            Forgive the phone-posting.

      • paulddraper 1 hour ago
        How do you figure?

        Paths are hierarchical; query strings are name/value.

        (Note I speak of common usage.)

        You can create a different convention, but that one is pretty dang useful.

  • ChrisMarshallNY 3 hours ago
    > It is a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent personal website owners.

    Back in the Stone Age, we called these “Webrings,” but they weren’t as fancy.

    One of the issues that I faced, while developing an open-source application framework, was that hosting that used FastCGI, would not honor Auth headers, so I was forced to pass the tokens in the query. It sucked, because that makes copy/paste of the Web address a real problem. It would often contain tokens. I guess maybe this has been fixed?

    In the backends that I control, and aren’t required to make available to any and all, I use headers.

    • bch 3 hours ago
      > an open-source application framework, was that hosting that used FastCGI, would not honor Auth headers

      So you were writing your application as a fcgi-app, and (e.g.) Apache was bungling Auth headers? Can you expand on this? Curious about the technical detail of (I guess) PARAM records not actually giving you what you expect?

      • ChrisMarshallNY 2 hours ago
        I don’t remember, exactly. Long time ago (I stepped away from that project many years ago).

        I just remember the auth headers never showing up in the $_SERVER global (it was a PHP app). This was what I was told was the issue. They made it sound like it was well-known.

  • Aardwolf 3 hours ago
    > You could argue that I’m abusing 414 URI Too Long. I respond that it’s funnier this way. Other options I considered were:

    Another option to consider is "418 I'm a teapot": teapots usually also don't support query strings

    • dredmorbius 2 hours ago
      Just straight "400" ("Bad Request") or "403" ("Forbidden") would also probably be defensible. Odd that there aren't any error response codes specific to URI parameters.

      Several options which seem like they might be appropriate aren't on close examination:

      - "406" ("Not Acceptable") which is based on content-negotiation headers.

      - "409" ("Conflict") which is largely for WebDAV requests.

      - Others such as 411, 422, and 431 are also for specific conditions which aren't relevant here.

      - 300 or 500 errors are inappropriate as this isn't a relocation or server-side failure, it's a client-side request problem.

      Teapot or too long seem best bets.

      • thayne 1 hour ago
        I think either 400 or 404 would be fine. 400 because the request isn't in the expected format, 404 because a resource with that query string doesn't exist.
      • thfuran 1 hour ago
        I'm willing to pay them $1 for a contract guaranteeing that they won't service such requests. That would make 451 the most appropriate.
      • mystraline 1 hour ago
        Just fire off a 200 OK with text body of "499 Bad query string"

        I'm not making this up btw. An old NOC I worked at emitted every error as 200 OK, with the real error in the body message. They were a real shitshow.

    • layer8 2 hours ago
      Of course they do. For example you can lower a string from the top to query the fill level. Or you can wrap a string around the pot to query the circumference.
  • 1shooner 4 hours ago
    >So I’ve decided to try a blanket ban for this site: no unauthorised query strings.

    His site returns (I think incorrectly) a 414 if a request includes a query string. If this protest is meant to advocate for the user, who presumably wasn't able to manage that string in the first place, why would you penalize them for it being there?

    Why not just use it as a cue to tell users how they can make this decision themselves (e.g. through browser tools)?

    • jampekka 4 hours ago
      "You could argue that I’m abusing 414 URI Too Long. I respond that it’s funnier this way. Other options I considered were:

          400 Bad Request, the generic client error code, which is correct but boring;
      
          402 Payment Required, and honestly if you want to pay me to make a particular URL with query string work, I’m open to it;
      
          404 Not Found, but it’s too likely to have side effects, and it doesn’t convey the idea that the request was malformed, which is what I’m going for; and
      
          303 See Other with no Location header, which is extremely uncommon these days but legitimate. Or at least it was in RFC 2616 (“The different URI SHOULD be given by the Location field in the response”), but it was reworded in 7231 and 9110 in a way that assumes the presence of a Location header (“… as indicated by a URI in the Location header field”), while 301, 302, 307 and 308 say “the server SHOULD generate a Location header field”. Well, I reckon See Other with no Location header is fair enough. But URI Too Long was funnier."
      
      https://chrismorgan.info/no-query-strings?foo
      • ollien 3 hours ago
        I don't think it's an abuse: RFC 9110 defines 414 as a response for "refusing to service the request because the target URI is longer than the server is willing to interpret". Since adding a query string only adds characters, this seems fine; there's no stipulation, as far as I can tell, that all pages a server hosts must adhere to the same length. I'd be curious if any well-known clients interpret it that way, though, and make caching decisions based on it. As far as I know, they shouldn't.

        Obviously it's against the spirit of the thing, but I don't think it's wrong per se.

        • lucketone 1 hour ago
          If the goal is to be misleading, but technically correct, it hits the bullseye
          • ollien 1 hour ago
            When the goal is "the funniest way", I think that's a hit :)
      • thayne 1 hour ago
        You could also redirect to the url with the query string dropped.
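        A minimal sketch of that idea as a WSGI app (names and setup are hypothetical, not anything from the article):

```python
# Hypothetical WSGI app: permanently redirect any request carrying a
# query string to the same path with the query string dropped.
def strip_query_app(environ, start_response):
    if environ.get("QUERY_STRING"):
        bare_path = environ.get("PATH_INFO", "/")
        start_response("301 Moved Permanently", [("Location", bare_path)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"no query string here"]
```

        (A 302 could be used instead if you don't want clients caching the redirect.)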
      • 1shooner 4 hours ago
        Also from the 414 page:

        >Complain to whoever gave you the bad link, and ask them to stop modifying URLs, because it’s bad manners.

        It's ironic that an error response so blatantly violating the robustness principle is throwing shade about bad manners.

        • btilly 2 hours ago
          Opinions vary on how good an idea the robustness principle is. That is why, for example, the XML standard requires a conforming validator to throw an error on invalid XML.

          In our modern world, the robustness principle has become an invitation to security bugs and vendor lock-in. Edge cases sneak through one system thanks to robustness, then trigger unfortunate behavior when they hit a different system. Two systems try to do something reasonable with an ambiguous case but do it differently, leading to software that works on one failing to work on the other.

          • 1shooner 1 hour ago
            I generally agree, but I don't think XML is the best example. Getting HTML out of XML is considered to have been the right move, isn't it? I was pro-XHTML2 at the time, but in retrospect, have we suffered much for not sending webpage validation errors to end users?
        • zaphar 21 minutes ago
          But, this is robust? I mean it's pretty clearly stating that you are visiting an unsupported URL. It provides direction on what to do about it to the user. It does not crash the browser or the server. In pretty much every dimension this is highly robust.
        • wizzwizz4 4 hours ago
          The robustness principle is itself bad manners, in plenty of contexts. If I deliver packages by throwing them at the customer, I really want a customer to tell me "hey, don't throw packages at me!" before I attempt to lob something fragile and breakable, or something heavy at someone fragile and breakable. Otherwise, how am I supposed to learn that I'm doing anything wrong?
    • bryanrasmussen 4 hours ago
      It's been years but I seem to remember there was a version of PLSQL server pages that would return 500 if you tried to pass in an unknown query string.
  • dspillett 26 minutes ago
    Maybe an alternative would be to inconvenience people following such links still, but somewhat less.

    Instead of responding with an error, give a page that states: “The link you followed to get here appears to have had some tracking gubbins added. In case you are a bot following arbitrary links, and/or using random URL additions to look like a more organic visit, please wait while we run a little PoW automaton deterrent before passing you on to the page you are looking for.” Then do a little busy work (perhaps a real PoW thingy) before redirecting. Or maybe don't redirect directly; just output the unadorned URL for the user to click (and pass on to others). This won't stop the extra gubbins being added, of course, but neither will the error, and it inconveniences potential readers less.

  • humodz 3 hours ago
    The tone of this and Chris's post gives me the impression that it's harmful to include these query parameters, but I don't understand how. Could someone enlighten me? I understand it can mangle some URLs, and that's a good enough reason not to do it, but even then it seems like a minor inconvenience.
    • cortesoft 3 hours ago
      You can read some of the issues people have had with this by reading up on the http referer header: https://en.wikipedia.org/wiki/HTTP_referer

      There are a lot of reasons I might not want a site to know where I came from to get to their site. It is basically sharing your browsing history with the site you are visiting.

      Because of this, there have been a lot of updates to the http referer header, with restrictions on when it is sent, and an ability to opt out of the feature entirely.

      Adding a url parameter with the same information bypasses any of these existing rules and ability to opt out. They should just use the standard.

      • odie5533 3 hours ago
        If I send out an email campaign, I can't use custom http headers to know that a user arrived from the newsletter.
        • cortesoft 1 hour ago
          If you are sending out an email, you can use whatever url form you like?

          This is talking about links to third party sites, not your own.

        • grg0 2 hours ago
          Do you really need to? Basic statistics will tell you if the email campaign had any significant effect on site visits.
          • maccard 2 hours ago
            If I release a video and send an email newsletter at the same time, which one caused the traffic increase? Should I invest in making more videos or in sending more emails?
            • hananova 2 hours ago
              If you insist on knowing, include a different url in both that goes to the same place and use your damn server logs. You don’t need google analytics and whatever.
              • vel0city 1 hour ago
                Isn't putting in a different query string "including a different url that goes to the same place"?

                Isn't this functionally the exact same?

                • zaphar 18 minutes ago
                  Presumably you control the URLs you are sending in the email. As a result, if you want to use query strings, that's fine. The issue only arises when you use query strings to implement tracking on someone else's site instead.
        • abigail95 2 hours ago
          use a unique url for each email
        • zahlman 2 hours ago
          As your reader, I might not actually want you to know.
    • legitster 2 hours ago
      What's interesting is that none of these sites have a "search" feature. Which is an important accessibility feature and a clear and legitimate use case for a query string.
      • saintfire 1 hour ago
        > If I ever start using any query strings, I’ll allow only known parameters.

        They aren't saying the concept of query strings is bad; they're saying unsolicited query strings added during referral are the issue.

      • j2kun 2 hours ago
        My website has search without a query string: https://www.jeremykun.com/
    • phoronixrly 3 hours ago
      Oh, I have a couple - the users did not agree on being tracked (these query params are tracking information), and the site administrator does not want incoming traffic to be tracked. I know the latter can be hard to understand, but I for example sure as hell do not want to have any info in my logs that can be used to harm my users.

      On a more personal note, I hate it when I go to copy a link to send via a message, and the tracking code glued onto it is twice as long as the original URL... I either have to fiddle around with it to clean it up, or leave the person I sent it to wondering wtf I'm on about with a screenful of random characters...

      So it's violating users' privacy, it's shit UX, and on top of that, nobody asked for it...

      • legitster 3 hours ago
        >(these query params are tracking information)

        Query strings are useful for way more than just tracking. Saving and servicing search queries is a way more common use case. So assuming it's only useful for tracking is very misleading.

        Query strings are probably the least invasive tracking. They are transparent, obvious, and anonymous. Users are free to strip out and edit query strings if they don't want them.

        More to the point, I can essentially do the same thing with HTTP routing - create an infinite number of unique URLs for tracking purposes. In that regard calling out query strings specifically for essentially the same thing but more transparently seems like splitting hairs.

        • phoronixrly 2 hours ago
          Thank you for explaining to me that query parameters can be used for other purposes apart from tracking. The articles in question though, are railing against query parameters being abused for tracking purposes - passing referers (sic) and UTM by adding them to URLs of sites that neither process them, nor want them.
          • legitster 1 hour ago
            Referral query strings are not for tracking though. The person putting them on the links gets nothing out of them. There is no PII being shared. They are purely added out of courtesy.

            If I am handing out maps to your address, letting people know who is publishing the map is generally a good thing.

            This is like saying having a return to sender address on mail is an invasion of privacy.

  • dang 3 hours ago
    Since the original source hadn't had a discussion on HN yet, I've put that link (https://chrismorgan.info/no-query-strings) at the top and moved the response link (https://susam.net/no-query-strings.html) to the toptext.

    Both are good but it seems fair to give priority to the original!

  • peesem 2 hours ago
    edit: not true https://news.ycombinator.com/item?id=48077990

    "I don’t like people adding tracking stuff to URLs" and "You abuse your users by adding that to the link" and "no unauthorised query strings" and "At present I don’t use any query strings" but for some reason ?igsh, which i'm pretty sure is an instagram tracking parameter, is allowed. weird

    • zahlman 2 hours ago
      ?igsh doesn't get through for me, and I don't see any links on the page including it.
      • peesem 2 hours ago
        oh, it's uBlock Origin (non-lite) removing it without telling me at all. retracted
  • gpvos 1 hour ago
    This is not the first site to do so. A few years back, scarygoround.com started blocking query strings, although it seems to have stopped doing so now. Back then, Facebook had started to add ?fbclid=... to every outgoing link.
  • hamdingers 2 hours ago
    While I don't take the author's hard stance, I do hate gratuitous query params that result in links that are thousands of characters long.

    I use this bookmarklet to strip query params before sharing a link:

        javascript:(()=>navigator.clipboard.writeText(location.origin+location.pathname))();
  • gtowey 4 hours ago
    "wander console" sounds like they're just web rings re-invented. In the era of forced feeds by giant corporations which consist of the things they want you to see, I've wondered if this old idea would make a comeback. Human curated content from trusted people seems like the only way forward.
    • SoftTalker 4 hours ago
      FTA: It is also a bit like web rings except that the community network is not restricted to being a cycle; it is a graph and it is flexible.
      • cosmicgadget 3 hours ago
        Is it not a random walk? Might sound pedantic but if there is graph structure I am interested.
        • susam 13 minutes ago
          I'll start with the clarification that the moderators have changed the URL of the original post from <https://susam.net/no-query-strings.html> to <https://chrismorgan.info/no-query-strings>. Hopefully, this will prevent any confusion about why we are discussing random walks in a post about query strings. Now let me answer your question.

          > Is it not a random walk? Might sound pedantic but if there is graph structure I am interested.

          The network is a directed graph. Every Wander Console declares a few other consoles as its neighbours. The person setting up the console decides who they want to list as their neighbours. So if we call the network graph X, then the set of vertices is:

            V(X) = the set of all URLs that point to Wander Consoles
          
          and the set of edges is:

            E(X) = {(u, v) in V(X) × V(X) : u declares v as its neighbour}
          
          The traversal between consoles is not strictly a random walk. If I could call it something, I would call it randomised graph exploration with frontier expansion. On each click of the 'Wander' button, the tool picks one console at random from the set of discovered consoles and visits that console. It then fetches the neighbours declared by that console and adds any newly discovered consoles to the set.

          The difference from a random walk is that the next console is not chosen from the neighbours of the last visited console. It is chosen from the whole set of consoles discovered so far. In other words, each click expands the known part of the graph, but the console used for that expansion is selected randomly from all discovered consoles, not just from the last console visited.
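          A sketch of that procedure in Python (my own reconstruction from the description above, not the Wander Console source; `neighbours` stands in for fetching a console's declared neighbour list):

```python
import random

def wander(neighbours, start, clicks, rng=random):
    """Randomised graph exploration with frontier expansion."""
    discovered = {start}  # every console seen so far
    visited = []
    for _ in range(clicks):
        # Pick from ALL discovered consoles, not just the last one's
        # neighbours -- this is what distinguishes it from a random walk.
        current = rng.choice(sorted(discovered))
        visited.append(current)
        discovered.update(neighbours.get(current, ()))
    return visited, discovered
```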

  • jameshart 2 hours ago
    There’s nothing ruder in hypertext etiquette than giving someone a link to navigate to someone else’s HTTP server, where you have manipulated that URL in some way unsanctioned by the server you are sending them to.

    You can’t just send arbitrary query string parameters to a server and assume they will just ignore them. Just like you can’t just remove query string parameters and assume the URL will work.

    • gojomo 1 hour ago
      In fact, you usually can just send arbitrary query string parameters to a server - that's why the behavior is so common, and often useful.

      Most sites don't mind or break, some sites get value from the behavior in ways that are hard to replicate otherwise, and those sites that don't like such additions can easily ignore them. A few lines of code will work better than ineffectually appealing to manners, since the freedom of the web's form of hypertext and its protocols gives outlink authors full freedom to craft URLs (and thus requests) however they like.

      • jameshart 1 hour ago
        Crafting outbound links with your own additions and handing them out to visitors to your site is similar to the practice of writing someone’s phone number on the door of a bathroom cubicle with ‘for a good time call:’ written above it.

        You’re handing out someone elses’s contact details, but giving the person you hand them to a completely fabricated expectation for how the interaction will go.

    • abecode 1 hour ago
      My use case for this is making separate bookmarks in different folders for a single URL:

      Example.com/interesting -> bookmark folder one

      Example.com/interesting?dummy=t -> bookmark folder two

      • jameshart 1 hour ago
        Use #fragment identifiers then
  • madprops 1 hour ago
    >Right click a youtube video from the results to copy the URL. I would have liked a short URL ready to share with people in chats, but no, I get: https://www.youtube.com/watch?v=IFfLCuHSZ-U&pp=ygUNcmF0Ym95I...

    >Want to share an amazon product on a chat to discuss about it. I would have liked a nice short url that I can copy, instead I get a monstrosity, it forces me to manually select only the id portion of it if I want to share it.

  • gojomo 1 hour ago
    Trying to bootstrap some taboo against novel unpermissioned URL munging is silly prudishness.

    Ensuring both sides of a hyperlink agree/consent was a design flaw that limited the uptake of pre-web hypertext systems. The web's laissez-faire approach demonstrated a looser coupling was far better for users, despite all the new failure modes.

    Of course any site/server has the practical power to treat inbound requests as rigorously (or harshly) as they want. But by the web's essential nature, it is equally part of the inherent range of freedom of outlink authors to craft their URLs (and thus the resulting requests) however they want. URLs are permissionless hyperlanguage, not the intellectual property of the entities named therein.

    Plenty of sites welcome such extra info, and those that don't want it can ignore it easily enough – including by simply not caring enough about the undefined behavior/failures to do anything.

    Though, when a web publisher has naively deployed a system that's fragile with respect to unexpected query-string values, they should want to upgrade their thinking for robustness, via either conscious strictness or conscious permissiveness. Thereafter, their work will be ready for the real web, not just some idealized sandbox where scolding unwanted behavior makes sense.

  • sigseg1v 4 hours ago
    Adding query strings is one of those things that I think a lot of sites could get away with more easily if they were reasonable about it.

    A link that is "https:// web.site" is fine.

    A link that is "https:// web.site?via=another.site" is fine.

    A link that is "https:// web.site?fbm=avddjur5rdcbbdehy63edjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63edaaaddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednzzddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63edn"

    is annoying as shit and I need to literally apologize to people after sending it if I forget to manually redact the query string. Don't abuse this.

    • culi 4 hours ago
      There are addons to remove unnecessary params from the worst offending sites:

      https://www.google.com/search?q=clearurls+addon

      • franciscop 4 hours ago
        Thanks for removing the rest on that google link, the one I get after switching to "images" and back to "web" is this monstrosity:

        https://www.google.com/search?newwindow=1&sca_esv=8061bd9cb1...

        Edit: which luckily and sensibly Hacker News cuts short since it's 463 characters

        • culi 29 minutes ago
          Yeah I removed the rest of the link as an example of how much cleaner the urls can be with that addon haha. I was being meta
        • dredmorbius 2 hours ago
          You can post the string as text (indent by 2+ spaces) to avoid that trimming.

          Since the purpose is to show the full URL with trackers and other cruft, that's sensible here:

            https://www.google.com/search?newwindow=1&sca_esv=8061bd9cb19cd450&sxsrf=ANbL-n7S60ZBdf0lh5kQ8RojJdQpnM0S5w:1778353180297&q=clearurls+addon&source=lnms&fbs=ADc_l-aN0CWEZBOHjofHoaMMDiKpeTF8ggB1qASWZfpybz5TQZmqMiWOgtbP_iLwZE3_BsqFrIkjQk30pNpcyOJjgYT1NYhSr_eVWusunSdIYLAa1WWhJm7VPvRsNUkHss5YZDSVhzEth7KnRsP0kwdL-3ylxxDz_j5WL-QtjJdzQePIWAeCwn7532w9WuSzSqnY0V2tn342eEk_wDwxk45MDY_JuA-5CA&sa=X&ved=2ahUKEwjH3uLs8ayUAxUghP0HHVXuOeIQ0pQJegQICxAB&biw=1296&bih=711&dpr=2.22
          
          And yeah, that's pretty awful.

          In conclusion, Google must be destroyed.

  • codingclaws 1 hour ago
    I was just wondering if I should do something like this. I use a couple query string values and I validate them and issue a 40x if the value is invalid. So, I was wondering if I should issue a 40x for an unused query string val.
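    A minimal sketch of that allowlist idea (the key names and the 404 choice are made up for illustration, not from the article):

```javascript
// Allowlist check: any query key outside the expected set gets a 404,
// matching the "40x for an unused query string val" idea above.
const ALLOWED_KEYS = new Set(["page", "sort"]);

function queryStatus(urlString) {
  const url = new URL(urlString);
  for (const key of url.searchParams.keys()) {
    if (!ALLOWED_KEYS.has(key)) return 404; // unknown key: refuse
  }
  return 200;
}

console.log(queryStatus("https://example.com/list?page=2"));       // 200
console.log(queryStatus("https://example.com/list?utm_source=x")); // 404
```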
  • ashley95 23 minutes ago
    But ?fbclid is not banned?
  • arjie 4 hours ago
    A referrer policy of strict-origin-when-cross-origin already yields a host-level Referer (sic) header in most mainstream browsers, unless the user has configured otherwise. That's usually enough for web authors to know what audience they're appealing to, and privacy-maximizers can turn off sending that header.
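    For reference, the standard way for a page to opt into that policy looks like this (this is the documented HTML mechanism, not something specific to this comment):

```html
<!-- strict-origin-when-cross-origin: send the full URL as Referer for
     same-origin requests, only the origin cross-origin, and nothing at
     all on an HTTPS -> HTTP downgrade. -->
<meta name="referrer" content="strict-origin-when-cross-origin">
```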
  • moritzwarhier 1 day ago
    This is cool and creative!

    It uses 4xx, but not just 400 :)

    https://chrismorgan.info/no-query-strings?why=unknown

  • itopaloglu83 1 hour ago
    YouTube is also quite famous for its source identifiers; especially with their short URLs, the tracking part is longer than the URL I'm trying to share.
  • notlive 2 hours ago
    Referrer is sometimes nice to know. If your site gets a traffic spike from an email newsletter, that traffic won't correctly identify its source in the HTTP headers.

    No qualms with OP, your site your rules.

  • gwern 4 hours ago
    Query strings break unpredictably, and that alone is enough to ban them by third parties, especially for something as minor as referral tracking.

    Example: The Browser is a well known link aggregation paid periodical. I subscribe, and every 1 in 10 or 20 links I clicked, it'd just break outright and I'd have to tediously edit the URL to fix it (assuming the website didn't do a silent ninja URL edit and make it impossible for me to remember what URL I opened possibly days or weeks ago in a tab and potentially fix it). This was annoying enough to bother me regularly, but not enough to figure out a workaround.

    Why? ...Because TB was injecting a '?referrer=The_Browser' or something, and the receiving website server got confused by an invalid query and errored out. 'Wow, how careless of The Browser! Are they really so incompetent as to not even check their URLs before mailing an issue out to paying subscribers?'

    I wondered the same thing, and I eventually complained to them. It turns out, they did check all their URLs carefully before emailing them out... emphasis on 'before', which meant that they were checking the query-string-free versions, which of course worked fine. (This is a good example of a testing failure due to not testing end-to-end or integration testing: they should have been testing draft emails sent to a testing account, to check for all possible issues like MIME mangling, not just query string shenanigans.)

    After that they fixed it by making sure they injected the query string before they checked the URLs. (I suggested not injecting it at all, but they said that for business reasons, it was too valuable to show receiving websites exactly how much traffic TB was driving to them on net, because referrers are typically stripped from emails and reshares and just in general - this, BTW, is why the OP suggestion of 'just set a HTTP referrer header!' is naive and limited to very narrow niches where you can be sure that you can, in fact, just set the referrer header.)

    But this error was affecting them for god knows how long and how many readers and how many clicks, and they didn't know. Because why would they? The most important thing any programmer or web dev should know about users is that "they may never tell you": https://pointersgonewild.com/2019/11/02/they-might-never-tel... (excerpts & more examples: https://gwern.net/ref/chevalier-boisvert-2019 ). No matter how badly broken a feature or service or URL may be, the odds are good that no user will ever tell you that. Laziness, public goods, learned helplessness / low standards, I don't know what it is, but never assume that you are aware of severe breakage (or vice-versa, as a user, never assume the creator is aware of even the most extreme problem or error).

    Even the biggest businesses.... I was watching a friend the other day try to set up a bank account in Central America, and clicking on one of the few banks' websites to download the forms on their main web page. None of the form PDF download links worked. "That's not a good sign", they said. No, but also not as surprising as you might think - the bank might have no idea that some server config tweak broke their form links. After all, at least while I was watching, my friend didn't tell them about their problem either!

    • gojomo 1 hour ago
      I don't see how your example, The Browser (thebrowser.com), supports your argument that ad-hoc query-string additions are so prone-to-breaking that 3rd parties should ban them.

      In fact, the example seems to suggest the opposite: a 17+ year successful paid-subscription business – of which you appear to be a generally-satisfied customer! – receives enough "business value" from the practice, despite its failure modes, that they don't want to stop. Improving their probe of the risk-of-failure was enough.

      Seemingly, the practice works often enough (pleasing more destination sites than it angers) that "referral tracking" is not something so minor after all.

      • gwern 35 minutes ago
        > Improving their probe of the risk-of-failure was enough.

        The point was it was dangerous in a way they didn't even realize was an issue, for a thin business rationale. Unless you are going to do thorough tests and understand the risk you are taking (which they did not, as evidenced by screwing it up systematically at scale for years), you should not be doing it.

        And it's not obvious that they are correct in their tightened-up testing, because even if a link is correct at the time they test it, it could break at any time thereafter.

        > to which you appear to be a generally-satisfied customer!

        No matter what _X_ is, _X_ would have to be a pretty epic screwup to make a customer unsubscribe solely over that! I never claimed it was such a major epic screwup that it could do that. So that is an unreasonable criterion: "well, you didn't outright quit, so I guess it can't be that bad." Indeed, but I never said it was, and somewhat bad is still bad; I was in fact fairly annoyed by the random breakage, and at the margin, everything matters. If TB did a few other things, in sum, they could potentially convince me to let my subscription lapse. An annoyance here, a papercut there, and pretty soon a generally-satisfied customer is no longer so satisfied...

  • dredmorbius 2 hours ago
    This is genius, kudos Chris.

    It also makes me wonder what other noxious online behaviours might be addressed through ... creative ... client-side responses similar to this.

    We've already seen, for years, sites attempting to socially-condition people over the use of ad-blockers and Javascript disablers. No reason why the Other Side can't fight back as well.

  • legitster 3 hours ago
    Query strings are awesome. Especially for one-page applications.

    I build a lot of internal applications, and one of my golden UI rules is that a user should be able to share their URL and other users should be able to see exactly what the sender did.

    So if you have a dashboard or visualization where the user can add filters or configurations, I have all of their settings saved automatically in the URL. It's visible, it's obvious, it's easy, it's convenient.
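    A rough sketch of that pattern (the filter names are invented; in a browser you would pair this with history.replaceState to keep the address bar in sync as the user tweaks filters):

```javascript
// Serialize UI state into the URL so a pasted link reproduces the view.
function buildShareUrl(base, filters) {
  const url = new URL(base);
  for (const [key, value] of Object.entries(filters)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// Reverse direction: restore the state object from a shared URL.
function readFilters(urlString) {
  return Object.fromEntries(new URL(urlString).searchParams);
}

const shared = buildShareUrl("https://dashboard.example/sales", {
  region: "emea",
  from: "2024-01-01",
});
console.log(shared); // https://dashboard.example/sales?region=emea&from=2024-01-01
console.log(readFilters(shared)); // { region: "emea", from: "2024-01-01" }
```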

    >There is also a moral question here about whether it is okay to modify a given URL on behalf of the user in order to insert a referral query string into it. I think it isn't.

    These dogmatic technical screeds are all so weird to me. They usually reveal more about the author's lack of experience or imagination than they provide a useful truism.

    • keane 3 hours ago
      Yes, query strings often enable useful features! But Chris's post, "no unauthorised query strings", is only regarding third parties adding them.
      • legitster 3 hours ago
        But... like... that's a weird hill to die on.

        > If I wanted to know I’d look at the Referer header; and if it isn’t there, it’s probably for a good reason. You abuse your users by adding that to the link.

        The reason is that referrer headers are a usability and privacy nightmare. It's weird for the author to jump to such a conclusion.

        This referral information is being done purely as a courtesy to the webhost. If we imagined a world in which ChatGPT or Wikipedia launched massive hugs of death on referral links without attributing themselves, that is a much, much worse outcome.

        • kyralis 37 minutes ago
          There's a referrer header, if the client wishes to send it. If they don't, the "courtesy to the web host" is done at the expense of the client. This particular web host takes umbrage at other sites taking advantage of their clients that way, which seems reasonable to me.
    • jimmaswell 3 hours ago
      A relatively minor concern: query strings create a new cache entry, both in the browser and (unless configured otherwise) in typical server-side caches. If the parameters are only used by client-side JavaScript and the server response is the same either way, you might want to use URL fragment parameters instead.
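      A sketch of the fragment approach (the param names are invented); the fragment never leaves the browser, so it can't split the cache:

```javascript
// Client-only state lives after the "#", which the browser never sends
// to the server. Here the hash is passed in as an argument; in a real
// page you'd read location.hash instead.
function fragmentParams(hash) {
  return new URLSearchParams(hash.replace(/^#/, ""));
}

const params = fragmentParams("#tab=fonts&size=12");
console.log(params.get("tab"));  // fonts
console.log(params.get("size")); // 12
```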
  • arexxbifs 2 hours ago
    Running your own small website is a constant battle against grifters and bad online etiquette. When people hotlink images, I usually make a point of having some personal fun with mod_rewrite.
  • lloydatkinson 2 hours ago
    This is really cool. My site is hosted by cloudflare, so I guess I could do the same with a cloudflare worker... maybe?
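    Probably, yes. An untested guess at what a Worker-style handler could look like (the pass-through branch is a placeholder, not real Cloudflare plumbing):

```javascript
// Hypothetical Worker-style handler: refuse any request that arrives
// with a query string, like the article's site does.
function handle(request) {
  const url = new URL(request.url);
  if (url.search !== "") {
    return new Response("No query strings here.\n", { status: 404 });
  }
  // Placeholder: a real Worker would fetch the underlying asset here.
  return new Response("ok\n", { status: 200 });
}

// In an actual Worker script:
// export default { fetch: (request) => handle(request) };
```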
  • julianlam 4 hours ago
    > After I implemented that feature, a page from one of my favourite websites refused to load in the console... the third URL returns an HTTP 404 error page. The website uses the query string to determine which one of its several font collections to show.

    Yes, let's unilaterally decide that query strings are bad because one website (ab)uses query strings to load different fonts.

    It's the query strings that are the problem, not the website!

    jfc.

    Look, I'm against utm fragments as much as the next guy, but let's not throw away a perfectly good thing because tracking is evil.

    • ergonaught 4 hours ago
      Adding your own garbage to someone else's URLs is in fact the problem. Could they handle your garbage better? Sure. Is your garbage still a problem? Yes.
      • SoftTalker 4 hours ago
        Postel's law worked OK when people operated in good faith. But today the internet is full of abusers. Rejecting requests that aren't exactly what they should be is probably the best policy now.
        • wtallis 3 hours ago
          Postel's law is typically stated as "be conservative in what you do, be liberal in what you accept from others". It's unfortunately common for people to ignore the first half and hallucinate a third clause demanding that the recipient stay silent about the errors they receive.
    • InsideOutSanta 4 hours ago
      That website is not abusing query strings, though, its usage of query strings is perfectly cromulent. And tfa is not saying not to use query strings, but not to append random garbage to other people's URLs.
    • jorams 4 hours ago
      The website uses the feature for its intended purpose. Adding random trash to the query string of another website assuming it'll ignore it is in fact a bad idea, always, even if you can usually get away with it.
    • LocalH 4 hours ago
      The problem is adding query strings to the URLs of others. It's peak entitlement to think that's proper
    • jedimastert 4 hours ago
      > one website (ab)uses query strings

      Really not abusing query strings from a standards perspective; a 404 is not an improper response to an unexpected query string

  • willthefirst 3 hours ago
    I mean…the site that broke should know what to do with arbitrary query strings. If your site breaks when someone puts in an invalid query string, that’s on you?
    • rglover 2 hours ago
      This. Query strings are a standard feature and have many more purposes beyond tracking.
      • kyralis 36 minutes ago
        Yes, and if the site actually used query strings, then it would of course accept them. Why does it have any reason to accept invalid query strings?
  • shevy-java 2 hours ago
    > It’s my website: I can do what I want with it.

    > And you can do what you want with yours!

    That does not make a lot of sense. Yes, you can do what you want with your website, but the query string is a way for users to ask for additional information or express wants or needs. I use query strings on my own websites for more flexibility. For instance:

        foobar.com/ducks?pdf
    
    That will download the website content as a formatted .pdf file.

    I can give many more examples here. I can't agree at all with "query strings are horrible". His websites don't allow query strings? That's fine. But in no way does that mean query strings are useless. Besides, what does it mean to "ban" them? You simply don't respond to query strings you don't want to handle; we do this via general routing in web applications these days.
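    The valueless-key trick in my example can be checked like this (a sketch; only the "pdf" key comes from the example above, the rest is invented):

```javascript
// "?pdf" is a key with an empty value; URLSearchParams still sees it,
// so has() is the right test for a flag-style parameter.
function responseFormat(urlString) {
  const url = new URL(urlString);
  if (url.searchParams.has("pdf")) return "application/pdf";
  return "text/html";
}

console.log(responseFormat("https://foobar.com/ducks?pdf")); // application/pdf
console.log(responseFormat("https://foobar.com/ducks"));     // text/html
```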

    • pessimizer 2 hours ago
      > foobar.com/ducks?pdf

      This isn't relevant when talking about links to his site. This is relevant when talking about links to your site.

      > Besides, what does it mean to "ban" it? You simply don't respond to query strings you don't want to handle.

      It means that you're going to get some sort of 400 error when you follow a link to his site with a query string attached to it. He simply will not respond to query strings that he doesn't want to handle, which is all of them.

  • ironfront 3 hours ago
    [flagged]
  • huflungdung 1 hour ago
    [dead]