A place for musings, observations, design notes, code snippets - my thought gists.
The Illusion of ... the Illusion of Thinking
As of late there’s been a lot of thinking engendered by Apple’s The Illusion of Thinking paper.
I propose we get ahead of this sequence of paper-rebuttal-paper and define the iterated sequence \( \left( \textsf{(The Illusion of)}^{n}\,\textsf{(Thinking)} \right)_{n \ge 1} \). Whether this sequence of thoughts will converge to any fixed point is left as a conjecture for the reader.
A new post by Sean Goedecke, commenting on a recent publication, Is chain-of-thought AI reasoning a mirage?, includes an incisive critique. The paper claims:
“LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates … Models often incorporate … irrelevant details into their reasoning, revealing a lack of sensitivity to salient information … models may overthink easy problems and give up on harder ones … Together, these findings suggest that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text”
To which Sean responds: “I want to tear my hair out when I read quotes like these, because all of these statements are true of human reasoners. Humans rely on heuristics and templates, include irrelevant details, overthink easy problems, and give up on hard ones all the time! The big claim in the paper - that reasoning models struggle when they’re out of domain - is true of even the strongest human reasoners as well.”
Well, I think there’s a strong point here. I’m not maintaining that humans and LLMs are equally capable of arbitrary reasoning, but reasoning is an intrinsically difficult task, one we sometimes forget that humans grapple with intensively too.
Writing collapsible inline footnotes in Hugo with Markdown
I use Hugo as the static site builder for my blog. I love it because it’s fast (blisteringly fast) and for the most part it just gets out of the way.
However, Hugo’s support for customising the rendering of Markdown into HTML is … lacking, to say the least. Hugo uses Goldmark under the hood, which supports Extensions, but Hugo doesn’t natively plug into those. I’d previously been using a Hugo shortcode to wrap some arbitrary Markdown and convert it into footnote HTML:
{{- /* Checkbox-hack footnote: the label toggles the (hidden) checkbox, and CSS reveals the <small> when it is checked */ -}}
<label for="fn{{ .Get "id" }}">{{ .Get "label" }}</label>
<input type="checkbox" id="fn{{ .Get "id" }}" />
<small id="fn{{ .Get "id" }}-content">
{{ .Inner | markdownify }}
</small>
But using it in Markdown means writing something inordinately verbose and clunky like:
This is some text,{{< footnote id="1" label="footnote!" >}}and here's some **markdown** in a footnote{{< /footnote >}}
I considered switching to pandoc for my Markdown preprocessing, but that would mean changing my whole build system and using GitHub Actions with a pandoc workflow instead of the (super quick) Cloudflare Pages Hugo runner. I did some browsing and realised that I was not the only one thinking deeply about footnotes and sidenotes in Hugo.
I’d been struggling with this for a while, but then I realised that Hugo supports render hooks for select Markdown elements. Unfortunately, footnotes are not included, but images are! In Markdown, the standard syntax for images looks like ![alt](url "title"), which is not too dissimilar to the footnote syntax [^label]: footnote. So I built a little layouts/_markup/render-image.html hook:
{{- if eq .Destination "fn" -}}
{{- /* This is a footnote using image syntax */ -}}
{{- $label := .PlainText | default "note" -}}
{{- $content := .Title | default "Footnote content" -}}
{{- $id := .Ordinal -}}
<label for="fn{{ $id }}" class="footnote-trigger">{{ $label }}</label><input type="checkbox" id="fn{{ $id }}" class="footnote-checkbox" /><small class="footnote-aside" id="fn{{ $id }}-content">{{ $content | markdownify }}</small>
{{- else -}}
{{- /* This is a regular image */ -}}
<img src="{{ .Destination | safeURL }}"
{{- with .PlainText }} alt="{{ . }}"{{ end -}}
{{- with .Title }} title="{{ . }}"{{ end -}}
>
{{- end -}}
so that I can write:
![footnote!](fn "and here's some **markdown** in a footnote")
and, with some pure CSS magic, have it render into a collapsible inline footnote! You can see it live on my website now: I’ve started using it to add maths proofs and asides as footnotes that expand inline on hover.
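For completeness, here’s a minimal sketch of that CSS magic, assuming only the class names emitted by the render hook above (the actual styles on my site are a little fancier):

/* Hide the raw checkbox; the label acts as the visible toggle */
.footnote-checkbox { display: none; }
.footnote-trigger { cursor: pointer; }
/* The footnote is collapsed by default … */
.footnote-aside { display: none; }
/* … and revealed while the adjacent checkbox is checked */
.footnote-checkbox:checked + .footnote-aside { display: inline; }

The adjacent-sibling selector works because the hook emits the input immediately followed by the small element.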
A neat CSS trick to add favicons to website links
I discovered a neat CSS trick that automatically prepends an SVG ‘favicon’ to links pointing at a particular domain. For example, for links to github.com, you can embed an octocat SVG using an attribute selector:
a[href*="github.com"]::before {
content: "";
display: inline-block;
width: 1em;
height: 1em;
margin-right: 0.35em;
vertical-align: -0.15em;
background: currentColor;
mask: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><path fill="currentColor" d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82a7.62 7.62 0 0 1 2-.27c.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0 0 16 8c0-4.42-3.58-8-8-8Z"/></svg>') center/contain no-repeat;
}
We apply the favicon as a CSS mask over a currentColor background, rather than as a background image: the mask carves the octocat shape out of the solid colour, so the logo inherits the link colour and its transparent regions render properly against any background.
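One compatibility note: unprefixed mask support landed in Chromium only fairly recently, so if you care about older browsers it may be worth duplicating the declaration with the -webkit- prefix (the url() value is the same data URI as above, elided here):

a[href*="github.com"]::before {
  /* identical value to the unprefixed mask declaration above */
  -webkit-mask: url('data:image/svg+xml;utf8,<svg …>') center/contain no-repeat;
}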
MathJax v4.0 released
MathJax v4.0 has been released!
I just integrated MathJax v3 with Hugo last night for this blog to start writing some maths posts, and v4 is released today…
I’m hoping it improves rendering, though: I was seeing some issues with SVG output for inline formulae in Safari. KaTeX is neat and fast, but I find the quality of its typesetting to be quite sub-par compared to MathJax’s.
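For anyone wondering, ‘integrating’ MathJax with Hugo amounts to little more than a script include in a head partial. Here’s a minimal sketch of the v3 setup I’d been using (the delimiter config follows the standard MathJax options; adjust the CDN pin to taste):

<script>
  // Configure TeX input before MathJax loads
  MathJax = {
    tex: { inlineMath: [['\\(', '\\)'], ['$', '$']] }
  };
</script>
<script async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>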
A Character Study in Alliteration
I came across a short alliterative poem about Rorschach that I wrote a good while back after reading Watchmen:
Manic milieu meets maimed mien,
moralising meandering misanthropic menace,
metes most morbid measures masked,
macabre murmurs marry much moribund musings,
massacring measly men mirthlessly
I don’t know why I decided to pick ‘m’ here for the alliteration, but I often enjoy writing such whimsical snippets. I’m not sure what you’d call this; I suppose ‘extreme alliteration’ or ‘reverse lipogram’ would suffice. I quite like the whole field of constrained writing; it’s quite fun thinking about how to slot words into constraints just-so. Apparently there was (is?) a group of French mathematicians and writers who similarly delighted in such pursuits… the Oulipo (Ouvroir de littérature potentielle).
Amazingly, I asked ChatGPT who it thought the poem was about, with no other context provided, and it nailed it (link to chat).
Learning from LLMs ethically by practising code minimisation as a discipline
Like many people nowadays, I find LLMs an invaluable tool for learning new concepts or vibecoding in unfamiliar stacks and languages. They’re undeniably a massive accelerator when it comes to quickly iterating and learning.
But it has its downsides. There’s a strong argument to be made that by offloading the challenge of learning something new, your core critical thinking atrophies and your ability to focus slides down the slippery slope of instant gratification.
However, there really are only so many hours in a day, and far too many days’ worth of learning to fit into them. So a Faustian bargain I have struck…
A useful exercise I employ is asking “how far can I cut this code down before it stops working?” LLMs seem to love overdoing things, even when instructed against it. I find the code produced by LLMs to be full of single-use functions, over-zealous try-except clauses, and overdone encapsulation. (Of course, the same could be said of any idiomatic Java programmer.)
Of course, concision is not the sole determiner of ‘good code’, whatever that may be. But I think there is something to be said for parsimony as an ideal to strive for.1 Everything in computer science is just a long walk up a Ladder of Abstraction, after all.
In any case, I find that the sheer act of this “code minimisation” helps distil why the code works the way it does, and I find myself learning huge amounts along the way.
1. This may also be a case of Stockholm Syndrome: my statistical modelling professor was particularly zealous about parsimonious linear models. ↩︎