With the rise of these retro-looking websites, I feel it's possible again to start using a browser from the '90s. Someone should make a static-site social media platform for full compatibility.
This is totally doable! It can be done with static sites + rss (and optionally email).
For example, I do this with my website. I receive comments via email (with the sender’s address hashed). Each page/comment-list/comment has its own RSS feed that people can “subscribe” to. This allows you to get notified when someone responds to a comment you left, or comments on a page. But all notifications are opt-in and require no login, because your RSS reader is fetching the updates.
Since I’m the moderator of my site, I subscribe to the “all-comments” feed and get notified upon every submission. I then review the comment and the site rebuilds. There are no logins or sign-ups. Commenting is just pushing and notifications are just pulling.
I plan on open sourcing the commenting aspect of this (it’s called https://r3ply.com) so this doesn’t have to be reinvented for each website, but comments are just one part of the whole system:
The web is the platform. RSS provides notifications (pull). Emailing provides a way to post (push) - and moderate - content. Links are for sharing and are always static (never change or break).
The one missing thing is like a “pending comments” cache, for when you occasionally get HN-like traffic and need comments to be temporarily displayed immediately. I’m building this now, but it’s really optional and would be the only thing in this system that even requires JS or SSR.
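Roughly, the flow looks something like this. This is a heavily simplified sketch, not the real r3ply code; the paths, field names and feed layout here are invented purely for illustration:

    # Sketch only: assumes the mail server pipes each incoming comment email to this script.
    import email
    import email.policy
    import email.utils
    import hashlib
    import html
    import sys
    from datetime import datetime, timezone
    from pathlib import Path

    msg = email.message_from_bytes(sys.stdin.buffer.read(), policy=email.policy.default)
    sender = email.utils.parseaddr(msg.get("From", ""))[1]
    page = (msg.get("Subject") or "").strip()  # the page or comment URL being replied to
    body = msg.get_body(preferencelist=("plain",)).get_content()  # assumes a text/plain part

    # Hash the sender's address so it is never stored or published in the clear.
    author = hashlib.sha256(sender.lower().encode()).hexdigest()[:12]
    now = datetime.now(timezone.utc)

    # One file per comment; the static-site generator folds these in on the next rebuild.
    slug = hashlib.sha256(page.encode()).hexdigest()[:12]
    comment_dir = Path("comments") / slug
    comment_dir.mkdir(parents=True, exist_ok=True)
    (comment_dir / f"{now:%Y%m%dT%H%M%S}-{author}.txt").write_text(body, encoding="utf-8")

    # Append an <item> to that page's feed so subscribers see it on their next pull.
    item = (
        f"<item><title>New comment on {html.escape(page)}</title>"
        f"<link>{html.escape(page)}</link>"
        f"<pubDate>{now:%a, %d %b %Y %H:%M:%S} +0000</pubDate>"
        f"<description>{html.escape(body)}</description></item>\n"
    )
    with open(comment_dir / "feed-items.xml", "a", encoding="utf-8") as f:
        f.write(item)

Moderation is then just me reading the new file (surfaced by the all-comments feed) before the rebuild publishes it, so the push-by-email / pull-by-RSS split stays intact.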
It does not work for people who only use the web interface of their e-mail. It would be nice to provide textual instructions (send this subject to this e-mail address) instead of mailto links only.
I really like that idea. I need to add it to my own site to test it out and let it bake.
Do you think this would work: a little icon that opens a pure HTML disclosure element with instructions, and a design with the text laid out sort of in the shape of an email?
“(Text only instructions) Send an email like this:
To: <site>@r3pl.com
Subject: <page_or_comment_url>
Body:
<write your comment here, be careful to not accidentally leave your email signature>”
Your comment system is fantastic. I have been looking for something like this literally for decades. Hope you will open source it soon. I would like to use it with my blog.
Not so much. While a lot of these websites use classic approaches (handcrafted HTML/CSS, server-side includes, etc.) and aesthetics, the actual versions of those technologies used are often rather modern. For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C Candidate Recommendation in 2017), while a period site would have used an HTML table to get the same effect. Of course, you wouldn't want to do it that way nowadays, because it wouldn't be responsive or mobile-friendly.
(I don't think this detracts from such sites, to be clear; they're adopting new technologies where they provide practical benefits to the reader because many indieweb proponents are pushing it as a progressive, rather than reactionary, praxis.)
A couple of years ago I made this https://bootstra386.com/ ... it's for a project. This is genuinely 1994 style with 1994 code that will load on 1994 browsers. It doesn't force SSL; this does work. I made sure of it.
The CSS on the page is only to make modern browsers behave like old ones in order to match the rendering.
The guestbook has some javascript, if you notice, to defeat spam: https://bootstra386.com/guestbook.html but it's the kind of javascript that netscape 2.0 can run without issue.
> This is genuinely 1994 style with 1994 code that will load on 1994 browsers.
Unfortunately it won’t, at least not when you’re serving it with that configuration.
It uses what used to be called “name-based virtual hosting” (before it became the norm), which looks at the Host request header to determine which site to serve. Internet Explorer 3, released in 1996, was the first version of Internet Explorer to send a Host header. I think Netscape 3, also released in 1996, might’ve been the first version to support it as well. So, for instance, Internet Explorer 2.0, released in 1995, will fail to load that site at that URL. If you test locally with localhost, for instance, then this problem won’t be apparent, because you aren’t using name-based virtual hosting in that situation.
If you need to support early-1996 browsers and older, then your site needs to be available when you request it without any Host header. In most cases, you can test this by using the IP address in your browser location bar instead of the hostname.
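If you don't have a period browser handy, you can approximate that pre-Host-header behaviour from a modern machine by writing the HTTP/1.0 request by hand and simply leaving the Host line out. A rough sketch (substitute the host you actually want to test):

    # Fetch "/" over HTTP/1.0 with no Host header, roughly what a 1995 browser sends.
    import socket

    def fetch_without_host(server: str, path: str = "/") -> bytes:
        with socket.create_connection((server, 80), timeout=10) as s:
            s.sendall(f"GET {path} HTTP/1.0\r\n\r\n".encode("ascii"))
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
            return b"".join(chunks)

    # On a purely name-based setup this tends to return the server's default
    # site (or an error) rather than the site you were hoping to see.
    print(fetch_without_host("example.com")[:300])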
Edit:
At one point around 1998, it wasn’t possible to directly install Internet Explorer 4 on Windows NT 4, because it shipped with Internet Explorer 2 and microsoft.com used name-based virtual hosting, or at least their downloads section did. So the method to install Internet Explorer 4 on Windows NT 4 was to use Internet Explorer 2 to download Netscape Navigator 4, and then use Netscape Navigator 4 to download Internet Explorer 4.
Using the IP address is a tricky one for something that is supposed to be Internet facing in the 2020s.
In the modern world, one common probe performed by attackers is to see whether a site responds with its own IP address in the Host: header, or the address-to-name lookup result of the IP address in the DNS, or the well-known defaults of some WWW servers.
What they're relying upon, of course, is people/softwares allowing IP addresses and the reverse-lookup domain names as virtual hosts, but forgetting to install security controls for those.
Or, equally as bad, the fallback if no Host: header is supplied being a private/internal WWW site of some kind.
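A cheap self-check for that class of misconfiguration is to request your own server with its bare IP address in the Host header and confirm that only a boring default (or an error) comes back. Something along these lines, with a documentation-range address standing in for your real one:

    # Send the same probe the bots do: Host set to the server's own IP address.
    # 203.0.113.7 is a placeholder from the documentation range, not a real host.
    import http.client

    conn = http.client.HTTPConnection("203.0.113.7", 80, timeout=10)
    conn.request("GET", "/", headers={"Host": "203.0.113.7"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # you want a harmless default or an error here,
    print(resp.read(300))            # never an internal/private site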
> For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C Candidate Recommendation in 2017), while a period site would have used an HTML table to get the same effect.
Are they going out of their way to recreate an aesthetic that was originally the easiest thing to create given the language specs of the past, or is there something about this look and feel that is so fundamental to the idea of making websites that basically anything that looks like any era or variety of HTML will converge on it?
I'm happy they didn't choose to go full authentic with quirks mode and table-based layouts, because Firefox has some truly ancient bugs in nested table rendering... that'll never get fixed, because... no one uses them anymore!
I think the layout as such (the grid of categories) isn't particularly dated, though a modern site would style them as tiles. The centered text can feel a little dated, but the biggest thing making it feel old is that it uses the default browser styles for a lot of page elements, particularly the font.
I think it’s the former. Many of these retro layouts are pretty terrible. They existed because they were the best at the time, but using modern HTML features to recreate bad layouts from the past is just missing the point completely.
I loaded up Windows 98SE SP2 in a VM and tried to use it to browse the modern web, but it was basically impossible since it only supported HTTP/1.1 websites. I was only able to find maybe 3-4 websites that still supported it and would load.
In theory, yes, although there are some fairly big stones falling in the avalanche of turning off HTTP/0.9 and HTTP/1.0 at the server end.
In practice, it's going to be tricky to know without measurement; and the shifting of the default at the client end from 0.9 and 1.0 to 1.1 began back in 2010. Asking the people who run robots for statistics will not help. Almost no good-actor robots are using 0.9 and 1.0 now, and 0.9 and 1.0 traffic dropped off a cliff in the 2010s, falling to 0% (to apparently 1 decimal place) by 2021 as measured by the Web Almanac.
* https://almanac.httparchive.org/en/2021/http
If a modern HTTP server stopped serving 0.9 and 1.0, or even just had a problem doing so to decades-old pre-1.1 client softwares, very few people would know. Almost 0% of HTTP client traffic would be affected.
And, indeed, http://url.town/ is one of the very places that has already turned 0.9 off. It does not speak it, and returns a 1.1 error response. And no-one in this thread (apart from edm0nd) knew.
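This is easy enough to check for any site, since an HTTP/0.9 “simple request” is just GET plus the path, with no version and no headers, and a genuine 0.9 response is the bare document with no status line. A rough probe:

    # Returns True if the server answers a bare 0.9 request with a document
    # rather than an HTTP/1.x status line.
    import socket

    def speaks_http09(host: str) -> bool:
        with socket.create_connection((host, 80), timeout=10) as s:
            s.sendall(b"GET /\r\n")  # the entire request, 0.9-style
            reply = s.recv(4096)
        return not reply.startswith(b"HTTP/")

    print(speaks_http09("url.town"))  # per the above, expect False: a 1.1 error comes back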
I tried old macOS ... sorry, Mac OS ... and yeah the main problem was SSL/TLS. HTTP/1.0 was fine but the SSL crypto algorithm negotiation never went through.
If your definition of social-media includes link aggregators, check https://brutalinks.tech. I've been working on things adjacent to that for quite a while now and I'm always looking for interested people.
The biggest issue there is that regardless of how old your HTML elements are, the old browsers only supported SSL 2/3 at best, and likely nothing at all, meaning you can't connect to basically any website.
(For the youth, this is basically what Yahoo was, originally; it was _ten years_ after Yahoo started before it had its own crawler-based search engine, though it did use various third parties after the first few years.)
(I recall too that when Yahoo did add their own web crawler, all web devs did was add "Pamela Anderson" a thousand times in their meta tags in order to get their pages ranked higher. Early SEO.)
This is cute, but I absolutely do not care about buying an omg.lol URL for $20/yr. I'm not trying to be a hater, because the concept is fine, but anybody who falls into this same boat should know this is explicitly "not for them".
While I'm usually one of those who complain about subscription services, $20 per year is not considerably more than registering a .com with whois protection. Given that you get a registered, valid domain name that you have control over, it's not a bad deal. Also, it does help filter out low-effort spam, especially if they decided to add a limit to allow only n registrations per credit card should it become a problem.
We're always discussing something along the lines of "if you're not paying for it, you're the product" in the context of social media, yet now that we're presented with a solution, we criticize that it's not free.
You can also roll your own webring/directory for free on your ISP's guest area (if they still offer that) and there's no significant network effect to url.town yet that would make you miss out if you don't pay.
I hadn't realised that this was tied to omg.lol until your comment, but now I'm confused. If it's from the omg.lol community, how come the address isn't something like url.omg.lol? (i.e. it's a community around a domain, so why isn't that domain used here?)
I don't think pointing out "this is a web directory full of links submitted by people willing to spend $20/yr" is being cheap, per se, the same way I don't think paying to be "verified" on Twitter means your content is worth paying attention to
There was a time where "willing to pay for access" was a decent spam control mechanism, but that was long ago
Agree. Recently I’ve noticed the complaints about paying for Kagi search [0]. HN loves to moan about how bad Google is, but paying $10 ($5 if you want a tiny plan) is apparently too much for something as critical as search?
As you say, those coffees seem to keep on selling…
[0] https://kagi.com/pricing
Everyone wants a Starbucks coffee per month from you. Even if you're on FAANG compensation, there's a finite number of coffees you can afford to pay for.
If you’re on FAANG compensation, and you earn roughly $200k after taxes in one year, and you spent all of it on Starbucks coffee, you could buy roughly a century’s worth of coffee if you drink one a day.
If, on the other hand, you spent the $200k on leasing an omg.lol domain in perpetuity, you could hold the domain for 10 millennia.
If we were in the Dune universe, that means your omg.lol domain would expire roughly around the same time as the Butlerian Jihad starts and the thinking machines are overthrown.
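(Checking the napkin maths, assuming a coffee costs about $5.50, which is what makes the numbers work out:)

    # Back-of-the-envelope check of the comparison above.
    budget = 200_000          # dollars, post-tax
    coffee_price = 5.50       # assumed price of one Starbucks coffee
    domain_per_year = 20      # omg.lol renewal, dollars per year

    print(budget / coffee_price / 365)  # ~100 years of one coffee a day
    print(budget / domain_per_year)     # 10000 years of the domain, i.e. 10 millennia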
Nice website. But do I need to buy an omg.lol subdomain before I can contribute links here? Why is it an omg.lol subdomain? I'm happy to buy a new domain, but not so happy about buying a subdomain. I'm not sure why I'd be paying omg.lol to contribute links to url.town? What's the connection between the two?
Having studied, and attempted to build, a few taxonomies / information hierarchies myself (a fraught endeavour, perhaps information is not in fact hierarchical? (Blasphemy!!!)), I'm wondering how stable the present organisational schema will prove, and how future migrations might be handled.
Unexpectedly related to the problem of perfect classification is McGilchrist’s The Master and His Emissary. It shows that the human mind is a duet where each part exhibits a different mode of attending to reality: one seeks patterns and classifies, while the other experiences reality as an indivisible whole. The former is impossible to do “correctly”[0]; the latter is impossible to communicate.
(As a bit of meta, one would notice how in making this argument it itself has to use the classifying approach, but that does not defeat the point and is rather more of a pre-requisite for communicating it.)
Notably, the classifying mode was shown in other animals (as this is common to probably every creature with two eyes and a brain) to engage when seeking food or interacting with friendly creatures. This highlights its ultimate purposes—consumption and communication, not truth.
In a healthy human both parts act in tandem by selectively inhibiting each other; I believe in later sections he goes a bit into the dangers of over-prioritizing exclusively the classifying part all the time.
Due to the unattainability of comprehensive and lossless classification, presenting information in ways that allow for the coexistence of different competing taxonomies (e.g., tagging) is perhaps a worthy compromise: it still serves the communication requirement, but without locking into a local optimum.
[0] I don’t recall off the top of my head exactly how Iain gets there (there is plenty of material), but similar arguments were made elsewhere—e.g., Clay Shirky’s points about the inherent lossiness of any ontology and the impossible requirement to be capable of mind reading and fortune telling, or I personally would extrapolate a point from the incompleteness theorem: we cannot pick apart and formally classify a system which we ourselves are part of in a way that is complete and provably correct.
Yes, the seeming hierarchy in information is a bit shallow. Yahoo, AltaVista and others tried this and it soon became unmanageable. Google realized that keywords and page-ranking were the way to go. I think keywords are sort of the same as dimensions in multi-dimensional embeddings.
Information is basically about relating something to other known things. A closer relation is interpreted as closer proximity in a taxonomy space.
The fact that it already has categories for most hobbies but absolutely nothing for cars, motorbikes, or any mechanical engineering-related topic, makes me sad. I know it's not their fault - young people simply don't care anymore.
... Possibly I'm missing something, but currently it has four categories under "Hobbies"; folklore, Pokemon, travel and yarn craft. Are you suggesting that if someone added "car stuff", that would be, well, basically complete, the big five hobbies represented?
It's clearly extremely new and has almost no content as yet.
Now its own Hacker News submission, with many concluding that it is entirely LLM-generated content and thus highly suspect for any kind of accuracy at all.
Sadly it's the same for Sci-Fi art. I had a link to submit, but you need to sign up and it's $20. Fair enough if they want to set some minimum barrier for the site to filter out suggestions from every Tom, Dick, and Harry (and Jane?), but I don't feel invested enough in this to give them $20 to provide a suggestion.
Someone wants to add it enough to click the button that adds the site. Sometimes you need to REALLY want to add it because no category is applicable so you also click the button to add the category.
Cool, but I'd like us to get past the idea that a site has to use Times font to be retro.
Times is really not well adapted to the web and is particularly bad on low-resolution screens. How many computer terminals used Times for anything but word processing?
Verdana was released in 1996 — is that too recent?
In the true spirit of the old web, you can adjust the default font in your browser's preferences to any font you prefer and the page respects it, as it doesn't specify what font to use at all.
example https://spenc.es/updates/posts/4513EBDF/
I like your thinking. Beautiful website, by the way!
https://portal.mozz.us/gopher/gopher.somnolescent.net/9/w2kr...
with these NEW values in about:config set to true:
Also, set these to false:
What do you mean by that? Especially the "social" part?
Isn't that https://subreply.com/ ?
2010 archive of dmoz: https://web.archive.org/web/20100227212554/http://www.dmoz.o...
What is (was) it? I can't find anything with a search (too many unrelated results).
Even if it was $10/year, people would still cry foul.
X is just one cappuccino, Y is just 3.5 bagels, Z costs not more than a pint, A costs almost as much as a nice meal … and so on. God's sake! :)
Anyone with an account already that wants to take requests for URLs to add?
(Hey, charge $1 a request and you should be able to break even on your $20 domain purchase before the day is up.)
I'll take requests, but I don't guarantee I'll add just anything.
(Whether for this or comparable projects.)
<https://en.wikipedia.org/wiki/Taxonomy>
<https://en.wikipedia.org/wiki/Library_classification>
https://web.archive.org/web/20191117161738/http://shirky.com...
* https://news.ycombinator.com/item?id=44789192
https://www.simonstalenhag.se/
^ The link is for the sci-fi art, not the hookers.
Also, the website styles don't specify font-family at all, so you are complaining about your own browser defaults.
Good pickup on the font being the default browser choice, I didn't notice that!