Design Graveyard

Meta page describing Gwern.net website design experiments and post-mortem analyses.

Often the most interesting parts of any design are the ones that are invisible—what was tried but did not work. Sometimes the ideas were unnecessary, other times readers didn’t understand them because they were too idiosyncratic, and sometimes we just can’t have nice things.

Some post-mortems of things I tried on Gwern.net but abandoned (in chronological order).

You can’t communicate complexity, only an awareness of it.

Alan J. Perlis

Gitit

Gitit wiki: I preferred to edit files in Emacs/Bash rather than a GUI/browser-based wiki.

A Pandoc-based wiki using Darcs as a history mechanism, serving mostly as a demo; the requirement that ‘one page edit = one Darcs revision’ quickly became stifling, and I began editing my Markdown files directly and recording patches at the end of each day, and syncing the HTML cache with my host (at the time, a personal directory on code.haskell.org).

Eventually I got tired of that and figured that since I wasn’t using the wiki, but only the static compiled pages, I might as well switch to Hakyll and a normal static website approach.

RSS Feed

Gitit, as part of the version-control approach, exposed as an RSS feed the history of each page (using a query) and the wiki as a whole, which included the diff as well.

This worked reasonably well for a collaborative wiki, where editors will want to monitor every edit; or for a documentation wiki, where updates tend to be big; or for a blog which updates in discrete, self-contained, daily units. But it was an awkward fit for Gwern.net longform essays/resources right from the beginning: while darcs/git do not particularly care about tracking tens of thousands of tiny edits, and I stopped trying to track each edit and instead batched them up, that made the RSS less useful for any Gwern.net readers.

It is not useful to know that today I ‘+links’, just like I did yesterday or the day before that. Nor is it helpful to see 30 pages updated today due to fixed dead links. It’s just a blizzard of unimportant tweaks; no one (including me) really needs to read changes at that fine-grained a level. And that is what the RSS history quickly turned into, as the corpus grew and needed maintenance and I heavily revised the formatting or engaged in various experiments.

Eventually, I just removed it.

This did not make everyone happy, as some people were, somehow, using it to follow site updates. I set up the Changelog & monthly newsletter to try to address this by providing a monthly list of new essays, but for them, this was now too coarse a level of summarization. (I also have not always mailed it out in a timely manner.)

Probably the desired granularity would be something like, ‘includes addition of sections to essays, but not addition of links or a few sentences’; however, this is more work than I want to put in. It is, however, something that might work with LLMs like GPT-4: pass in the Git log to pull out key commits, then summarize them appropriately as an itemized list. (This sort of functionality was already demonstrated years ago with Github tools pulling out major changes from git repositories, so it should work.)

JQuery Sausages Scrollbar

jQuery sausages: unhelpful UI visualization of section lengths.

A UI experiment, ‘sausages’ add a second scroll bar where vertical lozenges correspond to each top-level section of the page; it indicates to the reader how long each section is and where they are. (They look like a long link of pale white sausages.) I thought it might assist the reader in positioning themselves, like the popular ‘floating highlighted Table of Contents’ UI element, but without text labels, the sausages were meaningless. After a jQuery upgrade broke it, I didn’t bother fixing it.

Beeline Reader

Beeline Reader: a ‘reading aid’ which just annoyed readers.

BLR tries to aid reading by coloring the beginnings & endings of lines to indicate the continuation and make it easier for the reader’s eyes to saccade to the correct next line without distraction (apparently dyslexic readers in particular have trouble correctly fixating on the continuation of a line). The A/B test indicated no improvements in the time-on-page metric, and I received many complaints about it; I was not too happy with the browser performance or the appearance of it, either.

I’m sympathetic to the goal and think syntax highlighting aids are underused, but BLR was a bit half-baked and not worth the cost compared to more straightforward interventions like reducing paragraph lengths or more rigorous use of ‘semantic zoom’ formatting. (We may be able to do typography differently in the future with new technology, like VR/AR headsets which come with eye tracking technology intended for foveated rendering—forget simple tricks like emphasizing the beginning of the next line as the reader reaches the end of the current line, do we need ‘lines’ at all if we can do things like just-in-time display the next piece of text in-place to create an ‘infinite line’?)

Google Custom Search Engine

Google CSE: website search feature which too few people used.

A ‘custom search engine’, a CSE is a souped-up site:gwern.net/ Google search query; I wrote one covering Gwern.net and some of my accounts on other websites, and added it to the sidebar on 2013-05-25. Checking the analytics, perhaps 1 in 227 page-views used the CSE, and a decent number of them used it only by accident (eg. searching “e”); an A/B test of a feature used so little would be underpowered, and so I removed it on 2015-07-20 rather than try to formally test it.

I suspect that a website search feature is not useful because Gwern.net is not the kind of site that readers search at all. Readers are usually arriving at a specific landing page (eg. linked on social media), or they are arriving from a search engine in the first place, or they were reading a page and following links in it (and are better served by adding features like well-curated tags). No one is loading the site and then searching a random topic—it’s just not big enough or comprehensive enough like a Wikipedia to be worth doing so.

Further, it’s a bit difficult to provide your own search feature for a static site: search typically requires a server somewhere, to avoid downloading a large inverted index. (Although there are approaches which try to make the inverted index small enough to feasibly download into the reader’s browser so one can then interactively process it with JS, and there is an intriguing hack which downloads a small JS database engine such as WASMed SQLite which then queries a standard large database using HTTP Range queries to download just a few specific bytes & avoid downloading the entire database.1)
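
The core of that Range-request hack is nothing more exotic than partial fetches of a static file; a minimal sketch (the index filename & offsets are hypothetical, and real implementations such as a WASM SQLite virtual filesystem add much more machinery):

    // Fetch only bytes [start, end] of a large static file instead of downloading all of it.
    // A WASMed database engine can wrap this in a virtual filesystem so that ordinary SQL
    // queries transparently issue such Range requests for just the pages they touch.
    async function fetchRange(url, start, end) {
        const response = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
        if (response.status !== 206) throw new Error("server did not honor the Range request");
        return new Uint8Array(await response.arrayBuffer());
    }

    // Hypothetical usage: read one 4KB page of a pre-built index hosted as a static file.
    // fetchRange("/metadata/search-index.sqlite3", 0, 4095).then(page => console.log(page.length));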


In April 2024, because readers kept occasionally asking for search, and we still hadn’t found any search we liked, we experimented with adding the old Google CSE back. Our logic was that in the 9 years since 2015, the site has expanded, making search much more useful, and that with the theme toolbar, we now have somewhere to put a search widget which is not cluttered (and which can be done on demand, via transcluding a separate HTML page with the CSE JS widget).

Surprisingly, Google has not killed CSE (like so many other services/products of that age & catering to power users), and some poking indicated it still seemed to be functional. And it allowed nice integration with tab-completion.

This lets the reader simply pull up the eyeglass search icon anywhere they are and search, like so:

Screenshot of hovering over the site theme toggle to pull up a Google site-search interface and searching for the topic cat illusion, showing a few relevant pages & research papers.


In November 2024, we had to remove the Google CSE again.

In the wake of the Dwarkesh Patel interview, a reader intrigued by my discussion of “Suzanne Delage” went to search for it on the main page using the term “Dracula”, and the CSE returned one hit: the main page. There are scores of pages on Gwern.net which use the word “Dracula”; adding insult to injury, if you searched “Suzanne Delage” in the CSE, then it pulled up the right page and showed a snippet from the page metadata which has the word “Dracula”! Some quick testing with other queries like “Midjourney” showed that the CSE was wildly incomplete and/or buggy, and so worse than useless.

As the Google CSE is now actively harmful, and we would no longer trust it even if it were fixed, we have removed it for the last time, and will never use CSE again.

We replaced it with a much more reliable, but less convenient, alternative: a form which simply opens a Google search for ‘query site:gwern.net’ in a new tab.2
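
The replacement is almost trivially simple; a sketch of the idea (the element IDs are hypothetical, and a plain HTML form pointed at Google works just as well):

    // Open a new tab with a Google search restricted to gwern.net.
    function siteSearch(query) {
        const q = encodeURIComponent(query + " site:gwern.net");
        window.open("https://www.google.com/search?q=" + q, "_blank");
    }

    // Hypothetical wiring to a search box in the theme toolbar:
    // document.querySelector("#search-form").addEventListener("submit", event => {
    //     event.preventDefault();
    //     siteSearch(document.querySelector("#search-input").value);
    // });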

Tufte-CSS Sidenotes

Tufte-CSS Sidenotes: fundamentally broken, and superseded.

An early admirer of Tufte-CSS for its sidenotes, I gave a Pandoc plugin a try only to discover a terrible drawback: the CSS didn’t support block elements & so the plugin simply deleted them. This bug apparently can be fixed, but the density of footnotes led to using sidenotes.js instead.

DjVu Files

DjVu document format use: DjVu is a space-efficient document format with the fatal drawback that Google ignores it, and “if it’s not in Google, it doesn’t exist.”

DjVu is a document format superior to PDFs, especially standard PDFs: in the past, I used DjVu for documents I produced myself, as it produced much smaller scans than gscan2pdf’s default PDF settings (which suffer from a buggy Perl library)—at least half the size, sometimes one-tenth the size—making them more easily hosted & a superior browsing experience.

It worked fine in my document viewers (albeit not all of them, despite the format being 20 years old), and the Internet Archive & Libgen preferred DjVu (up until 2016, when the IA dropped it), so why not? Until one day I wondered if anyone was linking my scans and tried searching Google Scholar for some. Not a single hit! (As it happens, GS seems to specifically filter out books.) Perplexed, I tried Google—also nothing. Huh‽ My scans had been visible for years, DjVu dates to the 1990s and was widely used (if not remotely as popular as PDF), and G/GS picks up all my PDFs, which are hosted identically. What about filetype:djvu? I discovered to my horror that on the entire Internet, Google indexed about 50 DjVu files. Total. While apparently at one time Google did index DjVu files, that time must be long past.

Loath to take the space hit, which would noticeably increase my Amazon AWS S3 hosting costs, I looked into PDFs more carefully. I discovered PDF technology had advanced considerably over the default PDFs that gscan2pdf generates, and with JBIG2 compression, they were closer to DjVu in size; I could conveniently generate such PDFs using ocrmypdf.3 This let me convert over at moderate cost and now my documents do show up in Google.

Darcs/Github Repo

Darcs Patch-tag/Github Git repo: no useful contributions or patches submitted, added considerable process overhead, and I accidentally broke the repo by checking in too-large PDFs from a failed post-DjVu optimization pass (I misread the result as being smaller, when it was much larger).

I removed the site-content repo and replaced it with an infrastructure-specific repo for easier collaboration with Said Achmiz.

Long URLs

A consequence of starting my personal wiki using Gitit was defaulting to long URLs. Gitit encourages you to have filename+.page = title = URL+.html to simplify things. So the “DNB FAQ” page would just be ./DNB FAQ.page as a file on disk, and /DNB%20FAQ.html as the URL to visit/edit as a rendered page. Then, because I had no opinion on it at the time and it sounded technically-scary to do otherwise (HTTPS, and lots of jargon about subdomains and A or CNAME DNS records), I began hosting pages at http://​www.gwern.net. Thus, the final URL would be http://​www.gwern.net/DNB%20FAQ.html.

So, my URLs were:

  1. HTTP, not HTTPS;

  2. www. subdomain, not naked domain;

  3. long URLs/titles rather than single-word slugs where possible;

  4. mixed-case/capitalized words rather than lower-case4;

  5. space-separated, rather than hyphen-separated (or better yet, single-word); and

  6. files/directories inconsistently pluralized.

All wrong. In retrospect, all of these choices5 were mistakes: Derek Sivers & Sam Hughes were right: I should have made URLs as simple as possible (and then a bit simpler): a single word, lowercase alphanumerical, with no hyphens or underscores or spaces or punctuation of any sort.6 That is, the URL should have been https://​gwern.net/dnb or https://​gwern.net/faq, if that didn’t risk any confusion—but no longer than https://​gwern.net/dnb-faq! (And the .page extension for the source Markdown files was a minor nuisance in its own right: few things recognize the extension for Markdown, and it’s a 4-letter extension too.)

These papercuts would cost me a great deal of effort to fix while remaining backwards-compatible (ie. not breaking tens of thousands of inbound links created over a decade).

HTTP

Procrastination. The HTTP → HTTPS migration was already inevitable when I began writing an HTTP-using website. Injection attacks by the CCP and ISPs, general concerns over privacy, increasingly heavy-handed penalties & alarming GUI nags by search engines & web browsers… I knew everything was going HTTPS, I just didn’t want to pay for a certificate (Let’s Encrypt did not exist) or figure it out, because it’s not like my website in any meaningful way needed the security of HTTPS. Eventually, in November 2016, Cloudflare made it turnkey-easy to enable HTTPS at the CDN level without needing to update my server.

The switch has continued to cause problems due to web browser security policies7, but is worth it—if only so web browsers will stop scaring readers by displaying ugly but irrelevant security warnings!

Space-Separated URLs

Spaces in URLs: an OK idea but people are why we can’t have nice things.

Error-prone. I liked the idea of space-separated filenames in terms of readability & semantics, and letting one pun on the filename = title, saving time; I carried this over to Hakyll, but gradually, by monitoring analytics, realized this was a terrible mistake—as straightforward as URL-encoding spaces as %20 may seem, no one can do it properly. I didn’t want to fix it because by the time I realized how bad the problem was, it would have required breaking, or later on, redirecting, hundreds of URLs and updating all my pages. The final straw came in September 2017 when The Browser linked a page incorrectly, sending ~1,500 people to the 404 page. Oops.

I gave in and replaced spaces with hyphens. (Underscores are the other viable option8 but because of Markdown, I worry that it would trade one error for another.)

www Subdomain

The next change was migrating from www.gwern.net URLs to just gwern.net.

www is long & old. While I had always had redirects for gwern.net → www.gwern.net so going to the former didn’t result in broken links the way that space-separation did, it still led to problems: people would assume the absence of a www and use those URLs, leading to duplication failures or search problems; particularly on mobile, people would skip it, showing that the extra 4 letters were a nuisance (which frustration I began to understand myself when working on the mobile appearance); it was also more letters for me to constantly be typing while writing out links elsewhere to my site (eg. when providing PDF references); I noticed that web browsers & sites like Twitter increasingly show little of a URL (so the prefix meant you couldn’t see the important part, the actual page!) or suppressed the prefix entirely (leading to confusion); and finally, I began noticing that the prefix increasingly struck me as old in a bad way, smelling like an old unmaintained website that a reader would be discouraged from wanting to visit.

None of these were big problems, but why was I incurring them? What did the prefix do for me? I looked into it a little.

No length benefits. It was indeed old-fashioned and far from universal; of the domains I link, only 40% (2,008 / 4,978) use it, and it seems that usage is declining ~2% per year. Pro-www discussion seems relatively minimal, and there are even hate sites for www. It is not a standardized or special subdomain, was not even used by the first WWW domain historically, and was apparently accidental to begin with, so Chesterton’s fence is satisfied. It seemed that the only benefits were that the prefix was useful in a handful of extremely technically narrow ways involving cookie/security or load-balancing minutiae, that I couldn’t see ever applying; it was compatible with more domain name registrars, although all of the ones I am likely to use support it already; and it was my status quo, but the migration looked about as simple as flipping a switch in the Cloudflare DNS settings and then doing a big global rewrite (which would be safe because the string is so unique).

So, after stressing out about it for weeks & asking people if there was some reason not to do it that I was missing, I went ahead and did it in January 2023. It was surprisingly easy9, and I immediately appreciated the easier typing.

Simplified URLs

The final big change to naming practices was to simplify URLs in general: lower-case them all, shorten as much as reasonably mnemonic, and remove pluralization as much as possible—I had been inconsistent about naming, particularly in document directories.

This was for similar reasons as the subdomain, but more so.

Case/plural-insensitivity. Mixed-case URLs are prettier & more readable, but they cause many problems. The use of long mixed-case URLs led to endless 404 errors due to the combinatorial number of possible casings. (Is it ‘Death Note Anonymity’ or ‘Death Note anonymity’? Is it ‘Bitcoin Is Worse Is Better’ or ‘Bitcoin is Worse is Better’ or ‘Bitcoin is worse is better’? etc.) Typing mixed-case is especially miserable on smartphones, where the keyboard is now usually modal so it’s not as simple as holding a Shift key. Setting up individual redirects consumed time—and sometimes would backfire, creating redirect loops or redirecting other pages. The long names meant lots of typing, and shared prefixes like ‘the’ made tab-completion less helpful. I (and readers) would have to guess half-remembered names, and would occasionally screw up by typing a link to /doc/foo.pdf instead of /docs/foo.pdf.

This was a major change, in part because of all the bandaids I had put on the problems caused by the bad URLs—all of the redirects & lint checks I set up for each encountered error would have to be undone or updated—exacerbated by the complexity of the features which had been added to Gwern.net like the backlinks or local-archives, which were propagating stale URLs & other kinds of cache problems (cache invalidation being the other hard problem in CS…). So I only got around to it in February 2023 after the easier fixes were exhausted.

But now the URL for the DNB FAQ is https://gwern.net/dnb-faq—easier to type on mobile by at least 6 keystrokes (prefix plus two shifts), consistent, memorable, and timeless.

Ads

AdSense banner ads (and ads in general): reader-hostile and probably a net financial loss.

I hated running banner ads, but before my Patreon began working, it seemed the lesser of two evils. As my finances became less parlous, I became curious as to how much lesser—but I could find no Internet research whatsoever measuring something as basic as the traffic loss due to advertising! So I decided to run an A/B test myself, with a proper sample size and cost-benefit analysis; the harm point-estimate turned out to be so large that the analysis was unnecessary, and I removed AdSense permanently the first time I saw the results. Given the measured traffic reduction, I was probably losing several times more in potential donations than I ever earned from the ads. (Amazon affiliate links appear to not trigger this reaction, and so I’ve left them alone.)

Google Web Fonts

Google Fonts web fonts: slow and buggy.

The original idea of Google Fonts was a trusted high-performance provider of a wide variety of modern, multi-lingual, subsetted drop-in fonts which would likely be cached by browsers if you used a common font. You want a decent Baskerville font? Just customize a bit of CSS and off you go!

The reality turned out to be a bit different. The cache story turned out to be mostly wishful thinking as caches expired too quickly, and in any case, privacy concerns meant that major web browsers all split caches across domains, so a Google Font download on your domain did nothing at all to help with the download on my domain. With no cache help and another domain connection required, Google Fonts turned out to introduce noticeable latency in page rendering. The variety of fonts offered turned out to be somewhat illusory: while expanding over time, its selection of fonts was limited back then, and the fonts outdated or incomplete. Google Fonts was not trusted at all and routinely cited as an example of the invasiveness of the Google panopticon (without any abuse ever documented that I saw—nevertheless, it was), and for additional lulz, Google Fonts may have been declared illegal by the EU’s elastic interpretation of the GDPR.

Removing Google Fonts was one of the first design & performance optimizations Said made. We got both faster and nicer-looking pages by taking the master Github versions of Adobe Source Serif/Sans Pro (the Google Fonts version was both outdated & incomplete then) and subsetting them for Gwern.net specifically.

MathJax

MathJax JS: switched to static rendering during compilation for speed.

For math rendering, MathJax and KaTeX are reasonable options (inasmuch as MathML browser adoption is dead in the water). MathJax rendering is extremely slow on some pages: up to 6 seconds to load and render all the math. Not a great reading experience. When I learned that it was possible to preprocess MathJax-using pages, I dropped MathJax JS use the same day.

Quote Syntax Highlighting

<q> quote tags for English syntax highlighting: a neat use of an obscure semantic HTML element, but divisive and a maintenance burden.

I like the idea of treating English a little more like a formal language, such as a programming language, as it comes with benefits like syntax highlighting. In a program, the reader gets guidance from syntax highlighting indicating logical nesting and structure of the ‘argument’; in a natural language document, it’s one damn letter after another, spiced up with the occasional punctuation mark or indentation. (If Lisp looks like “oatmeal with fingernail clippings mixed in” due to the lack of “syntactic sugar”, then English must be plain oatmeal!) One of the most basic kinds of syntax highlighting is simply highlighting strings vs code: I learned early on as a coding novice that syntax highlighting was worth it just to make sure you hadn’t forgotten a quote or parenthesis somewhere. The same is true of regular writing: if you are extensively quoting or naming things, the reader can get a bit lost in the thickets of curly quotes and be unsure who said what.

I discovered an obscure HTML tag enabled by an obscurer Pandoc setting: the quote tag <q>, which replaces quote characters and is rendered by the browser as quotes (usually). Quote tags are parsed explicitly, rather than just being opaque natural language text blobs, and are primarily intended to allow the user’s browser to style appropriately the nesting of all the different kinds of quote marks without modifying the source HTML, especially for foreign languages which use different quoting conventions (eg. French double & single guillemets). But they can also be manipulated by the author’s JS/CSS for other purposes, such as… syntax-highlighting. Anything inside a pair of quotes would be tinted a gray to visually set it off similarly to the blockquotes. I was proud of this tweak, which I have never seen anywhere else.

The problems with it were that not everyone was a fan (to say the least); it was not always correct (there are many double-quotes which are not literal quotes of anything, like rhetorical questions); and it interacted badly with everything else. There were puzzling drawbacks: eg. web browsers delete them from copy-paste, so we had to use a JS copy-paste listener to convert them to normal quotes.10 Even when that was worked out, all the HTML/CSS/JS had to be constantly rejiggered to deal with interactions with them, browser updates would silently break what was working, and Said hated the look. I tried manually annotating quotes to ensure they were all correct and not used in dangerous ways, but even with interactive regexp search-and-replace to assist, the manual toil of constantly marking up quotes was a major obstacle to writing.
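
The copy-paste workaround looked roughly like this (a sketch only, not the actual Gwern.net listener): intercept the ‘copy’ event, clone the selection, and substitute literal quote marks for the <q> elements before handing the text to the clipboard.

    // Browsers render <q> quote marks via CSS, so they vanish when copied as plain text;
    // rebuild the copied selection with literal curly quotes instead.
    document.addEventListener("copy", event => {
        const selection = window.getSelection();
        if (!selection || selection.rangeCount === 0) return;
        const container = document.createElement("div");
        container.appendChild(selection.getRangeAt(0).cloneContents());
        // Replace innermost <q> elements first so nested quotes keep their marks.
        Array.from(container.querySelectorAll("q")).reverse().forEach(q => {
            q.replaceWith(document.createTextNode("\u201C" + q.textContent + "\u201D"));
        });
        event.clipboardData.setData("text/plain", container.textContent);
        event.preventDefault();   // (a full solution would also set a text/html version)
    });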

So I gave in. It was not meant to be.

Rubrication

Typographic rubrication: a solution in search of a problem.

Red emphasis is a visual strategy that works wonderfully well for many styles, but not, as far as I could find, for Gwern.net. Using it on the regular website resulted in too much emphasis, and the lack of color anywhere else made the design inconsistent; we tried using it in dark mode to add some color & preserve night vision by making headers/links/dropcaps red, but it looked like, as one reader put it, “a vampire fansite”. It is a good idea, but we just haven’t found a use for it. (Perhaps if I ever make another website, it will be designed around rubrication.)

wikipedia-popups.js

wikipedia-popups.js: a JS library written to imitate Wikipedia popups, which used the WP API to fetch article summaries; obsoleted by the faster & more general local static link annotations.

I disliked the delay and as I thought about it, it occurred to me that it would be nice to have popups for other websites, like Arxiv/BioRxiv links—but they didn’t have APIs which could be queried. If I fixed the first problem by fetching WP article summaries while compiling articles and inlining them into the page, then there was no reason to include summaries for only Wikipedia links, I could get summaries from any tool or service or API, and I could of course write my own! But that required an almost complete rewrite to turn it into popups.js.

The general popups functionality now handles WP articles as a special-case, which happens to call their API, but could also call another API, pop up the URL in an iframe (whether within the current page, on another page, or even on another website entirely), rewrite the URL being popped up in an iframe (such as trying to fetch a syntax-highlighted version of a linked file, or fetching the Ar5iv HTML version of an Arxiv paper), or fetch a pre-generated page like an annotation or backlinks or similar-links page.

Automatic Dark Mode

Auto-dark mode: a good idea but “readers are why we can’t have nice things”.

OSes/browsers have defined a ‘global dark mode’ toggle the reader can set if they want dark mode everywhere, and this is available to a web page; if you are implementing a dark mode for your website, it then seems natural to just make it a feature and turn on iff the toggle is on. There is no need for complicated UI-cluttering widgets with complicated implementations. And yet—if you do do that, readers will regularly complain about the website acting bizarre or being dark in the daytime, having apparently forgotten that they enabled it (or never understood what that setting meant).

A widget is necessary to give readers control, although even there it can be screwed up: many websites settle for a simple negation switch of the global toggle, but if you do that, someone who sets dark mode during the day will be exposed to blinding white at night… Our widget works better than that. Mostly.
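
A minimal sketch of the three-way logic (the localStorage key and data attribute names here are hypothetical, not the actual Gwern.net implementation):

    // 'auto' follows the OS/browser preference; explicit 'light'/'dark' overrides persist.
    // A simple negation of the OS toggle is what burns readers at night.
    const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

    function applyMode() {
        const setting = localStorage.getItem("theme") || "auto";   // "auto" | "light" | "dark"
        const dark = (setting === "auto") ? darkQuery.matches : (setting === "dark");
        document.documentElement.dataset.mode = dark ? "dark" : "light";   // CSS keys off this
    }

    function setMode(setting) {       // called by the widget's 3 buttons
        localStorage.setItem("theme", setting);
        applyMode();
    }

    darkQuery.addEventListener("change", applyMode);   // react when the OS flips at sunset/sunrise
    applyMode();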

Is it possible that someday dark-mode will become so widespread, and users so educated, that we could quietly drop the widget? Yes, even by 2023 dark-mode had become quite popular, and I suspect that an auto-dark-mode would cause much less confusion in 2024 or 2025. However, we are stuck with the widget—once we had a widget, the temptation to stick in more controls (for reader-mode and then disabling/enabling popups) was impossible to resist, and who knows, it may yet accrete more features (site-wide fulltext search?), rendering removal impossible.

Multi-Column Footnotes

Multi-column footnotes: mysteriously buggy and yielding overlaps.

Since most footnotes are short, and no one reads the endnote section, I thought rendering them as two columns, as many papers do, would be more space-efficient and tidy. It was a good idea, but it didn’t work.

Hyphenopoly Hyphenation

Hyphenopoly: it turned out to be more efficient (and not much harder to implement) to hyphenate the HTML during compilation than to run JS client-side.

To work around Google Chrome’s 2-decade-long refusal to ship hyphenation dictionaries on desktop and enable justified text (and incidentally use the better TeX hyphenation algorithm), the JS library Hyphenopoly will download the TeX English dictionary and typeset a webpage itself. While the performance cost was surprisingly minimal (<0.05s on a medium-sized page), it was there, and it caused problems with obscurer browsers like Internet Explorer.

So we scrapped Hyphenopoly, and I later implemented a compile-time Hakyll rewrite using a Haskell version of the TeX hyphenation algorithm & dictionary to insert at compile-time a ‘soft hyphen’ everywhere a browser could usefully break a word, which enables Chrome to hyphenate correctly, at the moderate cost of inlining them and a few edge cases.11 So the compile-time soft-hyphen approach had its own problems compared to Hyphenopoly’s dictionary-download + JS rewriting the whole page. We were not happy with either approach.
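
The compile-time pass amounts to joining Knuth-Liang syllable fragments with U+00AD; sketched here in JS for illustration only (the real pass ran in Haskell during site compilation, and hyphenateWord below stands in for a Knuth-Liang hyphenator):

    // Insert soft hyphens (U+00AD) at legal break points so the browser can break words there;
    // they are invisible unless a line actually breaks at one.
    // hyphenateWord is a stand-in returning syllable fragments, eg. ["hy", "phen", "ation"].
    function softHyphenate(text, hyphenateWord) {
        return text.replace(/[A-Za-z]{6,}/g,             // skip short words: breaking them gains little
            word => hyphenateWord(word).join("\u00AD"));
    }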

Desktop Chrome finally shipped hyphen support in early 2020, and I removed the soft-hyphen hyphenation pass in April 2021 when CanIUse indicated >96% global support.

In 2022, Achmiz revisited the topic of using Hyphenopoly (but not compile-time hyphens): the compatibility issue would get less important with every year, and the performance hit could be made near-invisible by being more selective about it and restricting its use to cases of narrow columns/screens where better hyphenation makes the most impact. So we re-enabled Hyphenopoly on: the page abstracts on non-Linux12 desktop (because they are the first thing a reader sees, and narrowed by the ToC); sidenotes; popups; and all mobile browsers.

Knuth-Plass Line Breaking

[Image (XKCD #1741): a lamp and a glass of water on a table, with every mundane detail annotated with the meetings, negotiations, safety recalls, and arguments behind it. “Sometimes I get overwhelmed thinking about the amount of work that went into the ordinary objects around me. Despite it being imaginary, I already have SUCH a strong opinion on the cord-switch firing incident.”]

XKCD #1741, “Work”

Knuth-Plass Line breaking: not to be confused with Knuth-Liang hyphenation discussed before, which simply optimizes the set of legal hyphens, Knuth-Plass line breaking tries to optimize the actual chosen linebreaks.

Particularly on narrow screens, justified text does not fit well, and must be distorted to fit, by microtypographic techniques like inserting spaces between/within words or changing glyph size. The default line breaking that web browsers use is a bad one: it is a greedy algorithm, which produces many unnecessarily poor layouts, causing many stretched-out words and blatant rivers. This bad layout gets worse the narrower the text, and so in Gwern.net lists on mobile, there are a lot of bad-looking list items when fully-justified with greedy layout.

Knuth-Plass instead looks at paragraphs as a whole, and calculates every possible layout to pick the best one. As can be seen in any TeX output, the results are much better. Knuth-Plass (or its competitors) would solve the justified mobile layout problem.
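
To make the contrast with greedy breaking concrete, here is the core idea in miniature (a sketch only: fixed-width characters and squared slack, with none of TeX’s stretchable glue, hyphenation points, or penalties):

    // Choose line breaks for the whole paragraph by minimizing total squared slack,
    // instead of greedily filling each line and dumping the leftovers on the next.
    function optimalBreaks(words, maxWidth) {
        const n = words.length;
        const cost = new Array(n + 1).fill(Infinity);    // cost[i] = best cost of laying out words i..n-1
        const breakAfter = new Array(n + 1).fill(n);
        cost[n] = 0;
        for (let i = n - 1; i >= 0; i--) {
            let width = -1;                              // running width of words i..j plus single spaces
            for (let j = i; j < n; j++) {
                width += words[j].length + 1;
                if (width > maxWidth) break;
                const slack = (j === n - 1) ? 0 : maxWidth - width;   // the last line is free
                const total = slack * slack + cost[j + 1];
                if (total < cost[i]) { cost[i] = total; breakAfter[i] = j + 1; }
            }
        }
        const lines = [];                                // reconstruct the chosen breaks
        for (let i = 0; i < n; i = breakAfter[i]) {
            lines.push(words.slice(i, breakAfter[i]).join(" "));
        }
        return lines;
    }

    // Example at width 6: greedy breaking gives ["aaa bb", "cc", "ddddd"] (badness 16), while
    // optimalBreaks("aaa bb cc ddddd".split(" "), 6) returns ["aaa", "bb cc", "ddddd"] (badness 10).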

Unfortunately, no browser implements any such algorithm (aside from a brief period where Internet Explorer, of all browsers, apparently did?). What do we have?

  • CSS: there is a property in CSS4, text-wrap: pretty (CanIUse), which might someday be implemented somehow by some browsers and be Knuth-Plass, but no one has any idea when or how.

    As of April 2024, only Chrome v117+ claims to support pretty; while based on Minikin, Android’s derivative of Knuth-Plass (and Knuth-Liang…?), it is unclear what it does, and the design doc seems to say that it is highly limited and, among other issues, only applies to the last 4 lines of paragraphs. (It does seem fast.)

    When we tried it on Gwern.net’s fully-justified text, we found that it degraded spacing too much to be worth using, despite helping fix orphan-words at the ends of paragraphs, and we couldn’t use it. Firefox has no active discussion of any implementation.

  • JS: unlike with Knuth-Liang hyphenation, doing it ourselves in JavaScript is not an option, because the available JS prototypes fail on Gwern.net pages. (There are also questions about whether the performance on long pages would be acceptable, as the JS libraries rely on inserting & manipulating a lot of DOM elements in order to force the browser to break where it should break, and our pages already inherently require so many DOM elements as to be a performance problem.)

    • Bramstein’s typeset explicitly excludes lists and blockquotes, Bramstein commenting in 2014 that “This is mostly a tech-demo, not something that should be used in production. I’m still hopeful browser will implement this functionality natively at some point.”

    • Knight’s tex-linebreak suffers from fatal bugs too.

  • Other: Matthew Petroff has a demo which uses the brilliantly stupid brute-force approach of pre-calculating offline the Knuth-Plass linebreaks for every possible width—after all, monitor widths can only range ~1–4000px with the ‘readable’ range one cares about being a small subset of that.

    It’s unclear, to say the least, how I’d ever use such a thing for Gwern.net (although it could work for server-side rendering), and doubtless has bugs or limitations of its own (particularly for dynamic text).

But all those concerns about correctness or performance are moot when the prototypes are so radically incomplete where not bitrotten. (My prediction is that the cost would be acceptable with careful optimization, and adding harmless constraints like considering a maximum of n lines; see West2006.)

So the line breaking situation is insoluble for the foreseeable future.

We decided to disable full justification on narrow screens, and settle for ragged-right.

Autopager

Autopager keyboard shortcuts: binding Home/PgUp & End/PgDwn keyboard shortcuts to go to the ‘previous’/‘next’ logical page (a metadata feature I also eventually removed) turned out to be glitchy & confusing.

HTML supports previous/next attributes (rel="prev"/"next") on links which specify what URL is the logical next or previous URL, which makes sense in many contexts like manuals or webcomics/web serials or series of essays (which generally fail to use it, however); browsers make little use of this metadata—typically not even to preload the next page! (Opera apparently was one of the few exceptions.)

Such metadata was typically available in older hypertext systems by default, and so older, more reader-oriented interfaces like pre-Web hypertext readers such as info browsers frequently overloaded the standard page-up/down keybindings to, if one was already at the beginning/ending of a hypertext node, go to the logical previous/next node. This was convenient, since it made paging through a long series of info nodes fast, almost as if the entire info manual were a single long page, and it was easy to discover: most readers will accidentally tap them twice at some point, either reflexively or by not realizing they were already at the top/bottom (as is the case on most info nodes due to their egregious shortness). In comparison, navigating the HTML version of an info manual is frustrating: not only do you have to use the mouse to page through potentially dozens of 1-paragraph pages, each page takes noticeable time to load (because of failure to exploit preloading) whereas a local info browser is instantaneous. The HTML version suffers from what I call the ‘twisty maze of passages each alike’ problem: the reader is confronted with countless hyperlinks, all of which will take a meaningful amount of time/effort to navigate (taking one out of flow) but where most of them are near-worthless while a few are all-important, and little distinguishes the two kinds.13

After defining a global sequence for Gwern.net pages, and adding a ‘navbar’ to the bottom of each page with previous/next HTML links encoding that sequence, I thought it’d be nice to support continuous scrolling through Gwern.net, and wrote some JS to detect whether at the top/bottom of page, and on each Home/PgUp/End/PgDwn, whether that key had been pressed in the previous 0.5s, and if so, proceed to the previous/next page.
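
The mechanism was roughly as follows (a sketch, assuming the navbar’s previous/next links carry rel attributes, and using the 0.5s double-press window described above):

    // Double-tapping Home/PageUp at the very top of the page goes to the 'previous' page;
    // double-tapping End/PageDown at the very bottom goes to the 'next' page.
    let lastPress = 0;
    document.addEventListener("keydown", event => {
        const up = (event.key === "Home" || event.key === "PageUp");
        const down = (event.key === "End" || event.key === "PageDown");
        if (!up && !down) return;
        const atTop = window.scrollY === 0;
        const atBottom = window.innerHeight + window.scrollY >= document.body.scrollHeight - 1;
        const doublePressed = (Date.now() - lastPress) < 500;   // second press within 0.5s
        lastPress = Date.now();
        if (!doublePressed) return;
        let link = null;
        if (up && atTop) link = document.querySelector('a[rel="prev"]');
        if (down && atBottom) link = document.querySelector('a[rel="next"]');
        if (link) window.location.href = link.href;
    });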

This worked, but proved buggy and opaque in practice, and tripped up even me occasionally. Since so few people know about that pre-WWW hypertext UI pattern (as useful as it is), and those who don’t would be unlikely to discover it, or use it much if they did discover it, I removed it.

Automatic Smallcaps

.smallcaps-auto class: the typography of Gwern.net relies on “smallcaps”. We use smallcaps extensively as an additional form of emphasis going beyond italic, bold, and capitalization (and this motivated the switch from system Baskerville fonts to Source Serif Pro fonts). For example, keywords in lists can be emphasized as bold 1st top-level, italics 2nd level, and smallcaps 3rd level, making them much easier to scan.

However, there are other uses of smallcaps: acronyms/initials. 2 capital letters, like “AM”, don’t stand out; but names like “NASA” or phrases like “HTML/CSS” stick out for the same reason that writing in all-caps is ‘shouting’—capital letters are big! Putting them in smallcaps to condense them is a typographic refinement recommended by some typographers.14

Manually annotating every such case is a lot of work, even using interactive regexp search-and-replace. After a month or two, I resolved to do it automatically in Pandoc. So I created a rewrite plugin which would run regexes over every string in the Pandoc AST looking for hits, split the string, and annotate each match with an HTML span element marked up with the .smallcaps-auto class, which was styled by CSS like the existing .smallcaps class. (Final code version.)
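
The transformation itself is simple to state; a DOM-level illustration of the same idea (a sketch only—the actual implementation was a Haskell Pandoc filter over the AST, with a much hairier regex and many special cases):

    // Wrap runs of 2+ capital letters in a styled span, splitting the surrounding text node.
    // (The real filter also had to avoid re-wrapping text already inside a smallcaps span.)
    function markSmallcaps(root) {
        const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
        const textNodes = [];
        while (walker.nextNode()) textNodes.push(walker.currentNode);
        for (const node of textNodes) {
            const parts = node.nodeValue.split(/\b([A-Z]{2,})\b/);   // odd indices are the all-caps hits
            if (parts.length === 1) continue;
            const fragment = document.createDocumentFragment();
            parts.forEach((part, i) => {
                if (i % 2 === 1) {
                    const span = document.createElement("span");
                    span.className = "smallcaps-auto";
                    span.textContent = part;
                    fragment.appendChild(span);
                } else if (part) {
                    fragment.appendChild(document.createTextNode(part));
                }
            });
            node.replaceWith(fragment);
        }
    }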

Doing so using Pandoc’s tree traversal library proved to be highly challenging due to a bunch of issues, and slow. (I believe it at least doubled website compilation times due to the extravagant inefficiency of the traversal code & cost of running complex regexps on every possible node repeatedly.) The rewrite approach meant that spans could be nested repeatedly, generating pointless <span><span><span>... sequences (only partially ameliorated by more rewrite code to detect & remove those). The smallcaps regex was also hard to get right, and constantly sprouted new special-cases and exceptions. The injected span elements caused further complications downstream as they would break pattern-matches or add raw HTML to text I was not expecting to have raw HTML in it. The smallcaps themselves had many odd side-effects, like interactions with italics & link drop-shadow trick necessary for underlined links. The speed penalty did not stop at the website compilation, but affected readers: Gwern.net pages are already intensive on browsers because of the extensive hyperlinks & formatting yielding a final large DOM (each atom of which caused additional load from the also-expanding set of JS & CSS), and the smallcaps markup added hundreds of additional DOM nodes to some pages. I also suspect that the very visibility of smallcaps contributed to the sense of “too fancy” or “overload” that many Gwern.net readers complain about: even if they don’t explicitly notice the smallcaps are smallcaps, they still notice that there is something unusual about all the acronyms. (If smallcaps were much more common, this would stop being a problem; but it is a problem and will remain one for as long as smallcaps are an exotic typographic flourish which must be explicitly enabled for each instance.)

The last straw was a change in annotations for Gwern.net essays to include their Table of Contents for easier browsing, where the ToCs in annotations got smallcaps-auto but the original ToCs did not (simply because the original ToCs are generated by Pandoc long after the rewrites are done, and are inaccessible to Pandoc plugins), creating an inconsistency and requiring even more CSS workarounds. At this point, with Said not a fan of smallcaps-auto and myself more than a little fed up, we decided to cut our losses and scrap the feature.

I still think that the idea of automatically using smallcaps for all-caps phrases like acronyms is valid—especially in technical writing, an acronym soup is overwhelming due to the capital letters!—but the costs of doing so in the HTML DOM as CSS/HTML markup on ordinary text are too high for both writers & readers.

It may make more sense for this sort of non-semantic change to be treated as a ligature and done by the font instead, which will have more control of the layout and avoid the need for special-cases. With smallcaps automatically done by the font, it can become a universal feature of online text, and lose its unpleasant unfamiliarity.

Disqus Comments

Disqus JS-based commenting system: a 2000s-era blog feature which had become vestigial as commenting moved to social media, while Disqus itself decayed.

A commenting system was the sine qua non of blogs in the 2000s, but they required either a server to process comments (barring static websites) or an extortionately-expensive service using oft-incompatible plugins (barring blogging); they were also one of the most reliable ways (after being hacked thanks to WordPress) to kill a blog by filling it up with spam. Disqus helped disrupt incumbents by providing spam-filtering in a free JS-based service; while proprietary and lightly ad-supported at the time, it had some nice features like email moderation, and it supported the critical features of comment exports & anonymous comments. It quickly became the default choice for static websites which wanted a commenting system—like mine.

I set up Gwern.net’s Disqus on 2010-10-10; I removed it 4,212 days later, on 2022-04-21 (archive of comment exports).

There was no single reason to scrap Disqus, just a steady accumulation of minor issues:

  • Shift to social media: the lively blogosphere of the 2000s gave way in the 2010s to social media like Digg, Twitter, Reddit, Facebook—even in geek circles, momentum moved from on-blog comments to aggregators like Hacker News.

    While there are still blogs with more comments on them than aggregators (eg. SlateStarCodex/Astral Codex Ten or LessWrong), this was increasingly only possible with a discrete community which centered on that blog. The culture of regular unaffiliated readers leaving comments is gone. I routinely saw aggregator:site comment ratios of >100:1. In the year before removal, I received 134 comments across >900,000 pageviews. For comparison, the last front-page Hacker News discussion had 254 comments, and the last weekly Astral Codex Ten ‘open thread’ discussion had >6× as many comments.

    So, now I add links to those social media discussions in the “External Links” sections of pages to serve the purpose that the comment section used to. If no one is using the Disqus comments, why bother? (Much less move to an alternative like Commento, which costs >$100/year.) I am not the first blogger to observe that their commenting system has become vestigial, and remove it.

  • Monetization decay: it is a law of Internet companies that scrappy disruptive startups become extractive sclerotic incumbents as the VC money runs out & investors demand a return.

    Disqus never became a unicorn and was eventually acquired by some sort of ad company. The new owners have not wrecked it the way many acquisitions go (eg. SourceForge), but it is clearly no longer as dynamic or invested-in as it used to be, the spam-filtering seemed to occasionally fall behind the attackers, and the Disqus-injected advertising has gradually gotten heavier.

    Many Disqus-user websites are unaware that Disqus lets you disable advertising on your website (it’s buried deep in the config), but Disqus’s reputation for advertising is bad enough that readers will accuse you of having Disqus ads anyway! (I think they look at one of the little boxes/page-cards for other pages on the same website which Disqus provides as recommendations, and without checking each one, assume that the rest are ads.) My ad experiments only investigated the harms of real advertising, so I don’t know how bad the effect of fake ads is—but I doubt it’s good.

    • odd bugs: One example of this decay is that I could never figure out why some Disqus comments on Gwern.net just… disappeared.

      They weren’t casualties of page renames changing the URL, because comments disappeared on pages that had never been renamed. They weren’t deleted, because I knew I didn’t & the author would complain about me deleting them so they didn’t either. They weren’t marked as spam in the dashboard (as odd as retroactive spam-filtering would be, given that they had been approved initially). In fact, they weren’t anywhere in the dashboard that I could see, which made reporting issues to Disqus rather odd (and given the Disqus decay, I lacked faith that reporting bugs would help). The only way I knew they existed was if I had a URL to them (because I linked them as a reference) or if I could retrieve the original Disqus email of the comment.

      So there are people out there who have left critical comments on Gwern.net, and are convinced that I deleted the comments to censor them and cover up what an intellectual fraud I am. Less than ideal. (One benefit of outsourcing comments to social media is that if someone is blamed for a bug, it won’t be me.)

    • dark mode: Disqus was designed for the 2000s, not the 2020s. Starting in the late 2010s, “dark mode” became a fad, driven mostly by smartphone use of web browsers in night-time contexts.

      Disqus has some support for dark mode patched in, but it doesn’t integrate seamlessly into a website’s native customized dark mode. Since we put a lot of effort into making Gwern.net’s dark mode great, Disqus was a frustration.

  • Performance: Disqus was never lightweight. But the sheer weight of all of the (dynamic, uncached) JS & CSS it pulled in, filled with warnings & errors, only seemed to grow over the years.

    Even with all of the features added to Gwern.net, I think the Disqus payload continued to outweigh it. Much of the burden looked to have little to do with commenting, and more to do with ads & tracking. It was frustrating to struggle with performance optimizations, only for any gains to be blown away as soon as the Disqus loaded, or, during debugging, to see the browser dev console rendered instantly unreadable.

    It helped to use tricks like IntersectionObserver to avoid loading Disqus until the reader scrolled to the end of the page (a sketch of this lazy-loading trick appears after this list), but these brought their own problems. (Getting IntersectionObserver to work at all was tricky, and this trick creates new bugs: for example, I can only use 1 IntersectionObserver at a time without it breaking mysteriously; or, if a reader clicks on a URL containing a Disqus ID anchor like #comment-123456789, when Disqus has not loaded then that ID cannot exist and so the browser will load the page & not jump to the comment. As we have code to check for wrong anchors, this further causes spurious errors to be logged.) The weight of these wasn’t too bad (the Gwern.net side of Disqus was only ~250 lines of JS, 20 lines of CSS, & 10 of HTML), but the added complexity of interactions was.

  • Poor integration: Disqus increasingly just does not fit into Gwern.net and cannot be made to.

    The dark mode & performance problems are examples of this, but it goes further. For example, the Disqus comment box does not respect the Gwern.net CSS and always looked lopsided because it did not line up with the main body. Disqus does not ‘know’ about page moves, so comments would be lost when I moved pages (which deterred me from ever renaming anything). Dealing with spam comments was annoying but had no solution other than locking down comments, defeating the point.

    As the design sophistication increases, the lack of control becomes a bigger fraction of the remaining problems.
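
The lazy-loading trick mentioned under ‘Performance’ above looked roughly like this (a sketch; “EXAMPLE-SHORTNAME” is a placeholder, though #disqus_thread is the conventional Disqus container ID):

    // Don't pay for Disqus until the reader nears the comment section: watch the
    // (initially empty) container and inject the embed script on first approach.
    const commentsContainer = document.getElementById("disqus_thread");
    if (commentsContainer) {
        const observer = new IntersectionObserver(entries => {
            if (!entries.some(entry => entry.isIntersecting)) return;
            observer.disconnect();                        // load once, then stop watching
            const script = document.createElement("script");
            script.src = "https://EXAMPLE-SHORTNAME.disqus.com/embed.js";   // hypothetical shortname
            script.setAttribute("data-timestamp", String(Date.now()));
            document.body.appendChild(script);
        }, { rootMargin: "500px" });                      // begin loading ~500px before it scrolls into view
        observer.observe(commentsContainer);
    }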

So eventually, a straw broke the camel’s back and I removed Disqus.

Double-Spaced Sentences

Considered, but rejected due to poor evidence and difficulty of HTML+CSS implementation even if it could be proven to be better.

Reactive Archiving

My original linkrot fighting approach was reactive: detect linkrot, fix broken links with their new links or with Internet Archive backups, and use bots to ensure that they were all archived by IA in advance. This turned out to miss some links when IA didn’t get them, so I added on local archiving tools to make local snapshots. This too turned out to be inadequate, sometimes missing URLs, and just being a lot of work to fix each link when it broke (sometimes repeatedly). Eventually, I had to resort to preemptive local link archiving: make & check & use local archives of every link when they are added, instead of waiting for them to break and dealing with breakage manually in a labor-intensive grinding way.

srcset Mobile Optimization

The srcset image optimization tries to serve smaller images to devices which can only display small images, to speed up loading & save bandwidth.

After 3 years, it proved to be implemented by browsers so poorly and inconsistently as to be useless, and I had to remove it when it broke yet again.

I do not recommend using srcset, and definitely not without a way to test regressions. You are better off using some server-side or JS-based solution, if you try to optimize image sizes at all.

Background

A ‘standard’ HTML optimization for images on mobile browsers is to serve a smaller image than the original. There is no point in serving a big 1600px image to a smartphone which is 800px tall, never mind wide. An appropriately resized image can be a tenth of the original size or less, reducing expensive mobile bandwidth use and speeding up page load times.

Implementing srcset

This can be done by the server by snooping the browser (which is a service offered by some CDNs), but the ‘official’ way to do this involves a weird extension to your vanilla <img> tag called a srcset attribute. This attribute does not simply specify an alternative smaller image, like one might expect, but rather, encodes multiple domain-specific languages in a pseudo-CSS for specifying many images and various properties which supposedly determine which image will be selected in a responsive design. In theory, this lets one do many image optimizations, like serving different images based on not just the width or height but eg. the pixel density of the screen, or to crop/uncrop or rotate the image for ‘art direction’ artistic purposes etc.
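
For reference, the attribute pair amounts to this (filenames & widths hypothetical; the ‘w’ descriptors give each candidate’s intrinsic width, and sizes tells the browser which slot applies at which viewport width):

    // Set from JS purely for illustration: one fallback src, a srcset of width-described
    // candidates, and a sizes attribute mapping viewport conditions to the displayed slot.
    const img = document.createElement("img");
    img.src = "/image/example-1600px.png";                                    // fallback for old browsers
    img.srcset = "/image/example-530px.png 530w, /image/example-1600px.png 1600w";
    img.sizes = "(max-width: 649px) 530px, 1600px";
    img.loading = "lazy";                                                     // the optimization that did stick
    document.body.appendChild(img);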

I set to doing this in May 2020 since it was a natural optimization to make, especially for the StyleGAN articles (which are heavy on generated-image samples & particularly punishing for mobile browsers to load)… only to discover: srcset is hella broken in browsers.

Issues With Browser Support

It is supposedly completely standardized and supported by all major browsers for many years now, and yet, whenever I tried a snippet from a tutorial on MDN or elsewhere—it didn’t work. Nothing would work the way the docs & tutorials said it would work. I would specify an image appropriately, render it in the HTML appropriately, and watch the ‘network’ tab of the dev tools reveal that it was ignored by the browser & the original image downloaded anyway. After much jiggering and poking, I got an invocation which worked, in that it downloaded the small image in the mobile simulators, and the original image in desktop mode.30

This was imperfect in that it wasn’t fully integrated with the popups, or with image-focus.js (if you ‘focused’ on an image to zoom-fullscreen it, it would remain small).

Nor was it a lot of fun on the backend, either. “There are only two hard problems in CS, naming and cache invalidation”, and storing small versions of all my images entails both. Generating, and then avoiding, the small versions caused perennial problems, especially once I began moving images around to genuinely organize them instead of dumping into unsorted mega-directories out of laziness.

Inability to Fix

And it broke, repeatedly. In April 2023, Achmiz was reviewing how to fix the image-focus.js bug, and noticed that strictly speaking, there was nothing there to fix because it was zooming into the original image—having loaded that in the first place. The srcset had stopped working entirely at some point. Aside from the difficulty of detecting such regressions, the biggest problem was that srcset hadn’t changed at all. The browsers had (again).

Achmiz looked into fixing srcset and discovered what I had: that the implementations were all unpredictably broken & violated the docs—he said that even the MDN tutorial was broken and didn’t do what it said it did (now), and exhibited bizarre behavior like loading the original when in the mobile simulator mode but then loading the small when in desktop mode, changed when ‘slots’ changed (in direct violation of the specification), or (wrongly) downloaded & displayed the original image but when queried via JavaScript would lie to the caller & claim it was the right small image! How did any of this get implemented, and how does anyone use this correctly? (Does anyone use it correctly?) Life is a bitter mystery.

Conclusion

So, it did not work, had not worked for a while, and it was unclear how to make it work again other than by trial-and-error, given that the documentation & browser implementations are lies; even if we somehow figured out what incantation currently yielded the correct behavior, it would likely silently fail again in a year or two (and we’d have no easy way to notice); and there was no sign any of this would ever be fixed, because the general bugginess has persisted for well over half a decade judging by people asking for help on Stack Overflow & elsewhere.31 It was a complicated & fragile feature delivering no actual benefits.

I decided I had given it a fair try, and ripped it out. The increased bandwidth use is unfortunate, but the use of lazy-loading images (via the loading="lazy" attribute) appears to have removed most of the reader-visible download problems, and in any case, it’s not like they were benefiting to begin with given that the optimization had been broken for an unknown period.

Postscript: Manual srcset

The one performance case I was worried about, optimizing thumbnails in popups so they have no perceptible lag and appear ‘instant’, could be handled as a special-case inside the annotation backend code, as opposed to trying to srcset all images on Gwern.net by default. (If I needed more than that, Achmiz could do a JS pass which detected screen size dynamically & rewrote <img src="foo"> paths to point to a small version, so the small ones get lazy-loaded instead.)

We implemented that in July 2024: all images have a corresponding 256px width version stored in /metadata/thumbnail/256px/, and the popup JS knows to rewrite images in popups to use those. Simple & reliable & working—unlike the so-called “standards”.
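
A sketch of what that rewrite amounts to (the exact path-mapping and the hook into the popup code are assumptions on my part, not the actual implementation):

    // Rewrite images inside a freshly-built popup to point at the pre-generated
    // 256px-wide thumbnails, reverting to the original if no thumbnail exists.
    function useThumbnails(popupElement) {
        popupElement.querySelectorAll("img").forEach((img) => {
            const originalSrc = img.getAttribute("src");
            if (!originalSrc || !originalSrc.startsWith("/")) return;   // local images only
            img.setAttribute("src", "/metadata/thumbnail/256px" + originalSrc); // assumed mapping
            img.onerror = () => {                                       // thumbnail missing? fall back
                img.onerror = null;
                img.setAttribute("src", originalSrc);
            };
        });
    }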

Interviews

A particularly unsatisfying area of website formatting was interviews (and roundtables or panels or discussions in general). There is no accepted way to format interviews which is easy to write, clearly depicts topics & speaker transitions, and has nice typography: approaches using paragraphs, tables, definition lists, and unordered lists all have flaws.

After using the conventional formatting of paragraph-separated speakers and experimenting with various alternatives over the years, we abandoned it for a custom approach.

Interviews are now formatted as a two-level list: topics at the top level, with speaker statements nested inside; these double-lists are parsed by JS to style speakers correctly, and CSS creates a 3-column layout which can be read vertically with minimal clutter.

Interviews are hard to stylize because they have a strong semantic structure of back-and-forths but of irregular lengths & contents, which does not fit naturally into the standard typographic constructs. One would like to exploit the clear semantics of individual speakers discussing topics back-and-forth in order to standardize their appearance & make reading them easier, but they do not fit into the standard Markdown-HTML toolkit: they are not an ordered or unordered list, they are not a blockquote, they are not (just) paragraphs, they may be splittable into sections but not usually at a question-level of granularity, they are not a table… They have speakers, but statements can be multiple paragraphs and contain other block elements like blockquotes (eg. a quotation in a prepared lecture or a public reading) so block-level transitions do not define speaker-level transitions. The speakers often speak multiple times, perhaps scores of times, so speaker labels can become repetitive. They have questions (usually), and an answer—usually, but not always, and sometimes more than one, as multiple people might respond to a single question or start arguing back and forth.

Ideally, I want a presentation of interviews which

  • semantically:

    • respects the natural back-and-forth, closely linking each utterance where there can be more than the standard “Q/A” pair,

    • while grouping them thematically,

    • designates speaker transitions clearly,

  • typographically:

    • is not visually cluttered with redundancy,

    • aligns text vertically in neat columns

  • technically:

    • is reasonably native to Markdown & writable by a forgetful author (myself) without consulting the manual, and doesn’t require heavyweight Semantic Web/XML-style notation (like marking up every speaker label & passage with unique IDs etc), and

    • compiles to reasonably native HTML which will be machine-parseable & reflow well on mobile devices etc.

Is there any existing typography/design writing on interviews we can draw on? Doesn’t seem like much. I don’t recall any discussions from the books I’ve read like Rutter or Butterick or Bringhurst; CTAN has nothing helpful (only performance scripts); and most magazines with interesting interview layouts are focused more on novelty & graphic design, with the text as an afterthought (typically just separate paragraphs with bolded questions).

Once you start looking at interview formatting on the Internet, you notice there are many approaches, and they’re all bad:

  1. Alternating emphasized paragraphs: this is perhaps the most common and basic approach. Just write down each paragraph as spoken, and put the interviewer’s questions or comments in non-roman text (bold if possible, otherwise italics32).

    **It has been alleged you huff kittens. Any comment?**
    
    Outrageous libel, for which I will be suing the parties responsible
    in a court of law in Trenton, New Jersey.
    
    **Duly noted.**

    Pros: Just alternating <p>s with some <strong>s salted in: it will work everywhere for the Web’s entire existence, and is lightweight to write—there is hardly any way to more easily encode in text the speaker label of each passage than simply typing some asterisks like **foo bar**. It doesn’t clutter the text with a lot of names, and it also handles multi-paragraph statements naturally: if it’s the interviewer, all of them get put in bold, otherwise, do nothing. This is so straightforward it tends to be used by even web publications which otherwise try to be more sophisticated, like The New York Times or New Yorker.

    Cons: The drawback is that it is simple to the point of being simple-minded. For short two-person Q&A, this is fine, but for more complex discussions, it begins to fail to handle the material adequately. The overall effect is just ‘one d—n thing after another’, and there is no way to skim it by topic. As you add in more metadata, the lack of more structured formatting begins to backfire: you wind up having large paragraphs in bold (which is not as bad as them being in italics which makes them hard to read & is especially confusing if fictional works are being discussed, but still, not what bold is for); and for more than two people, it gets confusing as one has to insert the labels of speakers (which introduces shifting column alignment based on the names pushing the text around). The bolding assumes you have suppressed the names, so if the names have to be reintroduced, then it becomes a drawback as now the name gets jammed into the statement (because it goes from the implicit **Question?** to explicit **Name: Question?**). You could expand it out to put speaker labels on separate lines/paragraphs, but this wastes a lot of vertical space:

    **Interviewer**:
    
    It is further alleged that you trade in bonsai kittens in violation of CITES.
    
    **Interviewee**:
    
    No comment.

    Not great: what ought to be 2 lines, max, expands out to 7 lines. (Centering the speaker labels and removing the colon helps a little, but is lipstick on a pig.)

    So, it’s a reasonable solution, particularly when the material is simple or convenience of the author is at a premium, but surely one can do better?

  2. Table: tables can encode Q&A with columns, one per speaker, or do almost arbitrarily more complex layouts.

    Pros: tables are space-efficient & inherently aligned (hard otherwise!), and column headers encode speakers clearly & efficiently; they are standard HTML. Some layout variations:

    ------------------------------------------|
    | **Interviewer**     | **Famous Person** |
    |---------------------|-------------------|
    | Shaken, or stirred? | Shaken.           |
    -------------------------------------------

    or

    --------------------------------------------------------
    | Interviewer         | Famous Person                  |
    |---------------------|--------------------------------|
    | Shaken, or stirred? |                                |
    |                     | Do I look like I give a d---n‽ |
    --------------------------------------------------------

    or:

    ----------------------------------------------
    | Speaker            | Statement             |
    |--------------------|-----------------------|
    | **Speaker 2**      | I'm on the rocks.     |
    | **Bartender**      | That's what she said. |
    ----------------------------------------------

    Cons: But they rapidly become unwieldy if asked to do anything more complicated than single-paragraph 2-person Q&A, and they forfeit their advantages like space-efficiency. (If there are 3 speakers and #3 only speaks once, do you waste an entire almost-empty column on him? And if you aren’t using columns for speakers but are doing a 1-column layout, then that’s just worse than alternating-paragraphs.) They are not easy to write or debug in Markdown, and they are an HTML nightmare.

    Tables for interviews made sense back in the 1990s when most layout was table-based, but you will not have seen it since, for good reason.

  3. Definition list: HTML, and some Markdown dialects like Pandoc, support a ‘definition’ <dl> element. Despite going back to ~1995, it’s obscure, and I’m not sure I’ve ever used it. (Even the intended use cases, like dictionaries or glossaries, seem to often avoid it in favor of more vanilla HTML layout.)

    Definition lists look like a single bold ‘term’ followed by an indented ‘definition’. To use it, one would either treat the Qs as the ‘term’ and the response/answer as the definition, for strict Q&A (perhaps adding in speaker labels if more than one person does Q or A), or perhaps simply have each definition be a single statement and the ‘term’ is the speaker label. So something like this:

    **Q**
    
    : Question?
    : Answer.
    
    <!-- Or: -->
    
    **Interviewer**
    
    : Question?
    
    **Respondent**
    
    : Answer.
    
        Humorous anecdote.
    : **Interviewer**: Followup query?

    Pros: Definition lists would work, but don’t have any notable advantages: they are technically compatible, not too cluttered, somewhat visually aligned etc; they indicate speaker transitions, but bulkily; and they are overall mediocre.

    Cons: Like alternating-paragraphs, definition lists aren’t too suited to more complex interviews, as there’s no clear way to encode the two-level structure of topics containing multiple exchanges. The default formatting of definition lists looks relatively bulky, and it’s so rarely used I would have a hard time remembering the syntax—it’s not terrible, at least in Pandoc Markdown, but I don’t need to transcribe interviews that often, so I would have to check or work at memorizing it. The HTML standard explicitly highlights ‘questions and answers’ as a use-case (“Name-value groups may be terms and definitions, metadata topics and values, questions and answers, or any other groups of name-value data.”)—but notes that this is meant more for uses like FAQs, and says it is inappropriate for general dialogue.

    So, while not as doomed as tables, definition lists are unappealing; if this were the only alternative to alternating-paragraphs, I would probably settle for those.

  4. Unordered list: definition lists may not work, but there are more familiar list types like unordered lists. (Interviews have a temporal order, of course, but there is usually not much point in numbering them, unless one is doing detailed citations.) Something like:

    - **Question**: Question?
    - **Answer**: Answer.

    Pros: This is easy to write/remember & highly technically compatible, makes visual sense, preserves half the semantics (it preserves speaker-level multi-paragraph statements as a single list item containing indented paragraphs) & gives them visual grouping with clear transitions (due to the list markers). And because the transitions between speakers are clear, one can abbreviate or eliminate them. Nor does it have any trouble handling any number of speakers trading roles; interviewers can be denoted by ‘Q’ or by their name, answers can be ‘A’ or their own name as necessary to disambiguate them etc.

    Cons: The drawbacks with 1-level deep unordered lists are that speaker labels necessarily make the text unaligned once a speaker statement wraps to the next line; there is still no thematic grouping, even though the reader can now more easily track speaker changes by seeing the list marker in the left margin; and while it handles complex interviews well, there are now visual-clutter problems with simple interviews where there are a lot of short statements, so it becomes a tall skinny list splattered with list markers. (If almost every line is a speaker transition because every question is a one-liner and the answers are often short, like an interjection or denial, the markers are no longer helpful and become distracting.)

    However, if we work at it, we could fix the visual alignment by either outdenting the speaker labels, or indenting each line after the first line; the list marker can then be suppressed & the speaker label used as both. This is much easier to accomplish when typesetting books or magazines than web pages, but still doable. If there is a ‘canonical’ way to typeset interviews for legibility, I think the unordered list with vertical alignment is it.

  5. Unordered two-level list: If the previous solution of single-level unordered lists doesn’t work (even with the cleaned-up layout) because it encodes only 1 level of grouping, what about two-level lists? In a two-level list transcription, the top-level encodes theme or exchange, and then the second sub-level encodes the statement as a whole. This can be written in Pandoc Markdown using ‘empty’ top-level list items which contain nothing but a nested sub-list.

    This was the implementation used on Gwern.net for a while, but it proved to be unsatisfactory due to details of how Pandoc Markdown operates: while a two-level list seemed simple to write, I had constant issues with the indentation or Pandoc not wrapping list items in <p> appropriately where it would mash together sub-lists, questions & answers, break HTML validation, or break the JS parsing it (which, if in a transclusion—as most interview excerpts are—usually broke the transclusion entirely). It was also impossible to tell from reading the compiled HTML where the issue was or how to fix it. Even interviews I thought I had carefully checked would turn out to have a problem somewhere. After one such case, we resolved to abandon this Markdown/HTML approach.

  6. Horizontal-ruler separated lists:

    Source code encoding. After getting fed up with the two-level list approach, I noted that since we weren’t using the list to encode anything more complex than a two-level structure, it would work just as well to write flat lists and simply include some sort of separator between thematic groups, like a self-closed span or div, or (easier to type in Markdown/HTML) a horizontal ruler.

    So now a Markdown interview simply looks like unordered lists, separated by a horizontal ruler ---, and the JS reformats it.

    <div class="interview">
    - **Q**: Question 1?
    - **A**: Answer 1.
    
         Elaboration.
    - **Q**: Skeptical query?
    - **A**: Wounded dignity!
    
    ---
    
    - **Q**: Question 2?
    - **A**: Answer 2.
    </div>

    Visual display. This still leaves us with the problems of alignment and list-marker clutter. However, now that one has fully-encoded the structure into the HTML as a separated list with a bold-colon speaker convention, it is possible to parse it with JS & then style it with CSS to improve the presentation however we wish (a sketch of such a pass follows this list), or revert to a simpler presentation. (The advantage of preserving the semantics is that it’s forward-compatible—we can always throw it away if we don’t need it after all.)

    In our case, we choose to suppress the second-level list marker icons, because the speaker transitions are unambiguously marked by the bold speaker names, and we leave the top-level list marker icons to indicate thematic transitions. We then indent the contents of each second-level list item to line up with the text on the first line after the speaker label. (We can see that we want to line up speaker names by considering an example which indents the response further—madness!)

    Pros: This produces a 3-column effect: the left-most column is the list markers, which indicate overall thematic transitions, so one can skim in content chunks; the second column is the ‘outdented’ speaker labels, as if they were margin notes, making it easy to see speaker transitions; the third column is the actual speech.

    We have largely resolved all the problems: we can encode the two-level structure in a way which looks good & can be skimmed easily at both levels, which is easy to write & read Markdown of, fully compatible with mobile views, and works well even if JS/CSS are disabled entirely (as it simply becomes more visually explicit & loses its nice vertical alignment). It looks like this:

    Example of discussion between William Shatner & Leonard Nimoy, which does not fall neatly into a simple Q&A but is readable when grouped & aligned in a two-level list organization. For another example, see Hamming1986’s Q&A (annotation examples: 1, 2, 3).

    Cons: This clean semantic appearance comes at the cost of some JS/CSS runtime complexity33 and the unavoidable need for the author to do extra work to encode the themes.
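
To make the mechanism concrete, here is a minimal sketch of the kind of JS pass just described (the class names & exact transformation are illustrative assumptions, not the actual Gwern.net code):

    // Rebuild a div.interview (flat <ul>s separated by <hr>s, with a bold
    // speaker-label convention) into a two-level list: one outer item per
    // hr-delimited thematic group, each containing its statement sub-list.
    document.querySelectorAll("div.interview").forEach((interview) => {
        const outer = document.createElement("ul");
        outer.className = "interview-topics";                    // illustrative class name
        interview.querySelectorAll(":scope > ul").forEach((group) => {
            group.querySelectorAll(":scope > li").forEach((statement) => {
                const label = statement.querySelector("strong"); // the '**Speaker**:' convention
                if (label) label.classList.add("interview-speaker");
            });
            const topic = document.createElement("li");
            topic.appendChild(group);                            // statements become a sub-list
            outer.appendChild(topic);
        });
        interview.replaceChildren(outer);
    });
    // CSS would then suppress the inner list markers, keep the outer ones for
    // thematic skimming, and 'outdent' .interview-speaker labels so the speech
    // text lines up as a third column.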

Last-Read Scroll Marker

Another feature considered but discarded yesterday was a “scroll marker”/“read progress marker”, to help mark one’s place on desktop when paging down (eg. while using PgDwn/Space). Sometimes one can lose track. Scroll markers used to be semi-common in desktop GUIs pre-2000, and I thought they might be useful to revive for long text documents (like these pages).

Demo of a ‘scroll marker’ in JavaScript, written by GPT-4; the red line is supposed to mark the bottom of the last visible line before the reader scrolls down 1 screen, enabling them to refind their place effortlessly.

After mocking up a prototype using GPT-4 to write the JS for me, I found that scrolling on Gwern.net seemed consistent enough in-browser, and the prototype buggy enough, that I wasn’t too sold on the idea. Said Achmiz is unconvinced it’s a real need at all34, and a proper solution has to deal with many annoying edge-cases in figuring out something as deceptively-simple-seeming as ‘last position’, which would make it harder to implement than one would hope for such a minor feature.
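
For concreteness, the naive version is only a few lines (this sketch ignores exactly the edge-cases at issue, eg. smooth scrolling, scrolling by other means, or partial line boundaries):

    // On paging down, draw a thin red line at the bottom of the screenful the
    // reader is leaving, so the eye can resume reading from just above it.
    const marker = document.createElement("div");
    marker.style.cssText = "position: absolute; left: 0; right: 0; height: 2px;" +
                           "background: red; pointer-events: none; display: none;";
    document.body.appendChild(marker);

    window.addEventListener("keydown", (event) => {
        if (event.key === "PageDown" || event.key === " ") {
            marker.style.top = (window.scrollY + window.innerHeight) + "px"; // pre-scroll bottom edge
            marker.style.display = "block";
        }
    });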

A more viable feature is a persistent last-read scroll marker for reading a page across multiple sessions, similar to how browsers try to store the last-read position and jump to it. This can be done non-invasively using LocalStorage.
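
A sketch of that approach (the key scheme & details here are my own, not necessarily what the site does):

    // Persist the reader's scroll position per page in localStorage (which
    // survives across sessions), and restore it on the next visit.
    const key = "last-read-position:" + location.pathname;     // hypothetical key scheme

    window.addEventListener("scroll", () => {
        localStorage.setItem(key, String(Math.round(window.scrollY)));
    }, { passive: true });

    window.addEventListener("DOMContentLoaded", () => {
        const saved = localStorage.getItem(key);
        if (saved !== null && !location.hash)                   // don't override an explicit #anchor
            window.scrollTo({ top: parseInt(saved, 10) });
    });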


  1. This raises an interesting possibility: a website which is truly database-centric—not merely doing calls to an API like REST endpoints which hide everything, but almost a brutalist, possibly “naked objects”-like website which is just a database engine JS stub and a list of database queries to download data/HTML. Thus, the user (or any code running) can do anything just by writing SQL queries; this would enable powerful search over a website, and extensibility like arbitrary levels of reskinning compared to websites where the semantics of data may be thrown away before being delivered to the client. (And you can even provide an in-browser SQL database viewer!)

    Because the client & server have equal access to the database, the queries can be done at any stage: all of the queries could be done client-side for maximum flexibility, but to speed things up, pages could be partially or fully pre-rendered before serving them to the client.↩︎

  2. So unlike the CSE, which supposedly searched multiple sites, the current site-search is indeed just a site-search. Frustratingly, it used to be possible to use multiple site: operators and OR to approximate a CSE—which was what I did long ago, before I set up any CSEs—but this functionality has apparently bitrotten in Google Search and no competing login-free search engine like Bing or Yandex implements a better site: operator. (Kagi might but requires accounts.)↩︎

  3. Why don’t all PDF generators use that? Software patents, which make it hard to install the actual JBIG2 encoder (supposedly all JBIG2 encoding patents had expired by 2017, but no one, like Linux distros, wants to take the risk of unknown patents surfacing), which therefore has to ship separately from ocrmypdf; and worries over edge-cases in JBIG2 where numbers might be visually changed to different numbers to save bits.↩︎

  4. I initially had a convention where lower-case URLs were ‘drafts’ and only mixed-case URLs were ‘finished’, but I abandoned it after a few years in favor of an explicit ‘status’ description in the metadata header. No one noticed the convention, and my perfectionism & scope-creep & lack of HTTP redirect support early on (requiring breaking links) meant I rarely ever flipped the switch.↩︎

  5. The one good choice was getting the gwern.TLD domain, and .net as its TLD: no other name would have worked over the years or be as memorable, and the connotations of .com remain poor—even if gwern.com hadn’t been domain-squatted, it would’ve been a bad choice.↩︎

  6. In terms of Zooko’s triangle, because I control the domain, all URLs are ‘secure’ and they cannot be made more ‘decentralized’, so the only improvement is to make them more ‘human-meaningful’—but in a UX way, being meaningful, short and easy to type, not trying to approximate a written-out English sentence or title.↩︎

  7. Mostly mixed-content issues: because Cloudflare was handling the HTTPS initially, I had problems with nginx redirects redirecting to the HTTP plaintext version, which browsers refuse to accept, breaking whatever it was. I eventually had to set up HTTPS in nginx itself.↩︎

  8. I couldn’t find any hard evidence about underscores being worse for SEO, so I was more concerned about the likelihood of mangled URLs & underscores being harder to type than hyphens.↩︎

  9. The main glitch turned out to be off-site entirely: while Google Analytics seems to’ve taken the migration in stride, I didn’t notice for a month that Google Search Console had crashed to zero traffic & reporting all indexed pages now blocked. (The old URLs were of course now redirecting, which GSC treats as an error.) GSC does support a ‘whole domain’ rather than subdomain registration, but it only lets you do that by proving you own the whole domain by screwing with DNS, and I had opted for the safer (but subdomain-only) verification method of inserting some metadata in the homepage. So I lost a month or two of data before I could migrate the old GSC to the new GSC. A minor but annoying glitch.↩︎

  10. Not as big a drawback as it initially seemed, because we would wind up needing copy-paste listeners for other things like math conversion or soft hyphens.↩︎

  11. Specifically: some OS/browsers preserve soft hyphens in copy-paste, which might confuse readers, so we use JS to delete soft hyphens; this breaks for readers with JS disabled, and on Linux, the X GUI bypasses the JS entirely for middle-click but no other way of copy-pasting. There were some additional costs: the soft-hyphens made the final HTML source code harder to read, made regexp & string searches/replaces more error-prone, and apparently some screen readers are so incompetent that they pronounce every soft-hyphen!↩︎

  12. The X11 middle-click thing again.↩︎

  13. This friction is then increased by all the other design problems: lack of preload means each hyperlink eats up seconds; ads & other visually-wasteful design elements clutter & slow every page; failing to set a:visited CSS means the reader will waste time on pages he already visited; broken links are slower still while adding a new dilemma on each link—try to search for a live copy because it might be important, or give up? and so on. For a medium whose goal was to be as fluid and effortless as thought, it is usually more akin to wading through pits of quicksand surrounded by Legos.↩︎

  14. eg. pg47, The Elements of Typographic Style (third edition), Bringhurst 2004; Richard Rutter; Dave Bricker etc.↩︎

  15. One could imagine using superscripted link icons, but like any other use of ‘ruby’ in HTML, this winds up looking pretty crazy.↩︎

  16. It probably doesn’t help link-icon popularity that the main link-icon people see, Wikipedia’s glyph for Adobe Acrobat ‘PDF’, is so ugly. Wikipedia, you can do better.↩︎

  17. MediaWiki uses the regexp approach, and struggles to cover all the useful cases, as their CSS indicates by having 6 different regexps:

    .mw-parser-output a[href$=".pdf"].external,
    .mw-parser-output a[href*=".pdf?"].external,
    .mw-parser-output a[href*=".pdf#"].external,
    .mw-parser-output a[href$=".PDF"].external,
    .mw-parser-output a[href*=".PDF?"].external,
    .mw-parser-output a[href*=".PDF#"].external {
        background: url(//upload.wikimedia.org/wikipedia/commons/4/4d/Icon_pdf_file.png) no-repeat right;
        padding: 8px 18px 8px 0;
    }

    This could be simplified to 3 regexps, and broadened to handle possible mixed-case/typo extensions like .Pdf, by using case-insensitive matching (ie. [href$=".pdf" i] & so on). Regardless, this suite will miss the /pdf/ or wrapper cases I tried to handle, but does handle cases with ?foo=bar query parameters, which I skip. (Presumably for servers that insist on various kinds of metadata & tracking & authorization gunk instead of just serving a PDF without any further hassle. I tend to regard such URLs as treacherous and just never link them, rehosting the PDF immediately.)↩︎

  18. My solution to that problem was to more frequently manually mirror PDFs (where they are guaranteed to follow the .pdf pattern), and eventually create a ‘local archiving’ system which would snapshot most remote URLs & thus ensure webpages that involved PDFs would be shown to readers as PDFs.↩︎

  19. One might be a little dubious, but as the joke goes, to a sheep, all sheep look distinct, and I can often tell a paper is by a DeepMind group before I’ve finished reading the abstract, and sometimes from the title, even when it’s ostensibly blinded peer-review.↩︎

  20. Making SVG link-icons takes time, but not necessarily as hard as it sounds.

    Many websites will have an SVG favicon or logo already; if they do not, their Wikipedia entry may include an SVG logo already, or Google Images may turn one up. If there is none, then the PNG/JPG can sometimes be traced in Inkscape with “Trace Bitmap”. (I have not had much luck directly using Potrace.) Once imported into Inkscape, even a newbie like myself can usually make it monochrome and simplify/exaggerate it to make it legible as a tiny link-icon. Then an SVG compression utility like vecta.io can trim the fat down to 1–4kb. The dark-mode CSS then usually can invert them automatically, and no further work is necessary.↩︎

  21. That operator is long since removed, so I switched to searching by title.↩︎

  22. I’d used it previously ~2012–2015 because I had a vague idea that seeing what links readers clicked on would be helpful in deciding which ones were useful, which ones needed better titles/descriptions, which ones might deserve lengthier treatment like blockquote excerpts (since superseded by annotations), etc. I wound up not using it for any of that because click rates were so low, decreased throughout the article as readers dropped off, and I found them meaningless anyway.↩︎

  23. Not that I was thrilled about the ugliness and difficulty of reading the classic ‘et-al’ style of inline citations either! I remembered when I first began reading academic papers, rather than books, and the difficulty I had dealing with the soups of names & dates marching across the page, making it hard to recall what a given parade was even supposed to be citations for… (That one gets used to it eventually, and forgets the burden, is not a good excuse.) My dislike would lead to my subscript notation.↩︎

  24. Amusingly, in 2021 I would go back and parse all of the existing tooltips to extract the metadata for annotations. It worked reasonably well.↩︎

  25. There are some implementations which do not hook links to load the fragment on demand, but instead, on page load, do an API call for each link. We found this to be completely unnecessary as a performance optimization because the WP API will generally return the fragment within ~50ms (while you typically need a UI delay of >500ms to avoid spurious popups when the reader was just moving his mouse), and would waste potentially hundreds of API calls per page load—on particularly heavily wikilinked Gwern.net pages, the API results might be a substantial fraction of the entire page! So please don’t do that if you ever make a WP popup yourself.↩︎

  26. Why not use that? The logged-in user preview, Lupin’s page navigation popup tool (current version), does include the inline links. But close inspection of its source shows that there is no secret API returning the right HTML. Instead, it downloads the page’s entire MediaWiki source, and compiles it via a JS library to HTML on its own! I later attempted to work with this using Pandoc to compile, and for simple articles this works well enough, but it fails badly on any article which makes heavy use of templates (which is many of them, particularly STEM ones), and hand-substitution or replacement couldn’t keep up with the infinitely long tail of WP templates.↩︎

  27. Neural net summarizers had already gotten good, and GPT-2 had come out in February 2019 and shown that it had learned summarization all on its own (amusingly, when prompted with a Reddit tl;dr:), and while I had not fully gotten on board the scaling hypothesis, I was quite sure that neural net summarization was going to get much better over the next decade. But I didn’t want to wait a decade to start using popups, and it seemed likely that I would need my own corpus to finetune a summarizer on my annotations. So I might as well get started.↩︎

  28. While it was elegant & simple to just pop up other Gwern.net pages when they were linked, this suffered from the same performance problem as the link-bibliographies: it can be a lot of HTML to parse & render, especially when the reader is expecting the popup to popup & render with no discernible delay—in the most extreme cases like the GPT-3 page, an unsuspecting reader might be left waiting 10–15s before the popup finally displayed anything!↩︎

  29. One might think that it would be easy: surely a Wikipedia article is simply every URL starting with https://en.wikipedia.org/wiki/, thereby excluding the API/infrastructure pages?

    Unfortunately, this is not the case. WP further namespaces pages under /wiki/Foo:—note the colon, which means that /wiki/Image:XYZ is completely different from /wiki/Image_XYZ—and each of these namespaces has different behavior for whether they have an introduction or if they can be live links inside a frame. For example, one must be careful to handle all the special characters in a page title like C++ or Aaahh!!! Real Monsters, and remember that titles like “Bouba/kiki effect” are simply a slash in the name & not a page named “kiki effect” inside a “Bouba” directory; pages inside the Wikipedia: namespace can be both annotated & live, like regular articles; Category: cannot be annotated but can be live; Special: pages can be neither.

    I had to set up a testsuite in Interwiki.hs to finally get all the permutations correct.↩︎

  30. eg.

    <img srcset="/doc/ai/nn/transformer/gpt/fiction/2021-07-08-gwern-meme-tuxedowinniethepooh-gpt3promptingwithwritingquality.jpg 768w,
        /doc/ai/nn/transformer/gpt/fiction/2021-07-08-gwern-meme-tuxedowinniethepooh-gpt3promptingwithwritingquality.jpg 994w"
        />
    ↩︎
  31. I’m going to cynically guess that srcset was pushed by FANG for their mobile websites in a half-baked manner, has been neglected since (in part because it fails silently), and they care only enough to debug their use-cases.↩︎

  32. I also experimented with putting speaker labels in monospace (code) formatting. This made them stand out better from general use of bold & italics, but had confusing connotations, and incurred another font load.↩︎

  33. The JS parsing could in theory be done statically, but not easily by Pandoc: classes must be set on elements like <ul>, <li>, <strong>, but for historical reasons, the Pandoc AST doesn’t allow arbitrary attributes to arbitrary elements (only some). So it was much easier to use JS.↩︎

  34. I later discovered that there is one usecase where a scroll marker would be useful: reading chapter-paginated novels, like on Wikisource, where one will reliably lose one’s place when one does the final page-down but the browser can only move a fraction of a screen before hitting the end of the page—thereby shattering the reader’s immersion and throwing them into confusion as they have to wake up & refind their place. This is also a bit of an issue in web serials, as one has to find the ‘next’ button, and then wait (entirely unnecessarily) for the next page to then load & render before one can start reading. (None of these issues apply to paper books, as pages can be turned unconsciously and there is never any confusion about where to start reading on the next page.)↩︎
