My first 5 months of residency at the Centre for Text Margins

Heikki Lotvonen | 4.2.2026

September — teaching and starting a new experimental DTP software

Experiment: Code

Technically my residency already started in September, but I spent most of it teaching a six-week "Experiment: Code" class. I'm happy with how the student projects turned out [1]. Many of them managed to create sites that have that certain self-made, creative and personal atmosphere reminiscent of the "old" web. They also made posters just using HTML and CSS! Who needs Adobe anymore when you can just use the browser? :)

I updated the course website [2] for this year's class, but it didn't change drastically from the first version I made in 2024 [3]. The website shows its own source code, but also works as a code sandbox, so I used it again for demonstrating HTML and CSS techniques during class. It still works quite wonderfully.

The code on the website is set in a font I made that has built-in syntax highlighting, achieved with a crazy amalgamation of OpenType features. I updated the blog post I wrote about it [4] to include some great examples of its use in the wild.

    Links

  1. https://hlnet.neocities.org/koe-koodi/nayttely/
  2. https://hlnet.neocities.org/koe-koodi/
  3. https://hlotvonen.github.io/koe-koodi24/
  4. https://blog.glyphdrawing.club/font-with-built-in-syntax-highlighting/

Hypergoblet — an experimental DTP software (WIP)

Even though my September was mostly filled with teaching, I did manage to start developing a new editor called Hypergoblet. It's still in very early development and not publicly available yet.

The idea is that it has the bones of a simple Desktop Publishing (DTP) software — you can make multiple pages (left panel) and then add text, images, graphics or filters on the pages using "frames". It has (or will have) master pages, PDF export and the works.

The difference from typical DTP software is that it doesn't provide you with any tools out-of-the-box. Instead, it provides an API for creating plugins that manipulate pixels on the page. Want to add an image to the page? Well, you need a plugin for that. The API is fairly simple: it takes the frame's pixel data as input and outputs an image that gets placed on the canvas. Whatever happens in between is up to the plugin, which can be as simple or as complex as needed.
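To make this concrete, here's a rough sketch of what such a plugin could look like. The plugin shape, parameter format and names here are hypothetical illustrations, since the actual API isn't public yet:

    // Hypothetical sketch of a Hypergoblet-style plugin: pixel data in,
    // pixel data out. The real API isn't public, so treat the shape of
    // this object (name, params, apply) as illustration only.
    const invertPlugin = {
      name: "Invert",
      // controls the host could render as UI automatically
      params: { strength: { type: "range", min: 0, max: 1, value: 1 } },
      // input: ImageData for the frame; returns ImageData for the canvas
      apply(input, { strength }) {
        const out = new ImageData(
          new Uint8ClampedArray(input.data), input.width, input.height);
        for (let i = 0; i < out.data.length; i += 4) {
          for (let c = 0; c < 3; c++) {
            // blend each RGB channel toward its inverse; alpha untouched
            out.data[i + c] += (255 - 2 * out.data[i + c]) * strength;
          }
        }
        return out;
      },
    };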

This means that you're not limited by a fixed set of general-purpose tools that you can't modify, because you can (and have to) make your own tools. As a result, designing a publication becomes explicitly shaped by all the chosen workflows, processes, algorithms and tools, giving designers more agency over how their work is done.

Of course, a few years ago this idea would have been ludicrous, because who has the time or skill to code plugins for everything? Sure, maybe it will have some kind of "asset store" where you can download and install plugins others have made. But the main idea is to use AI for writing the plugin code: if you roughly know what you want, you can feed an AI agent the provided API & UI template and an idea of some effect or tool, and it will just do it for you. A simple one-shot example of a prompt I used was "Can you make a plugin that smears the image context. Here's an example you can use as a template: [template code]". Then I just copy-paste the generated code into Hypergoblet, and I can immediately use the tool and control the smear amount with the generated UI. It's "AI powered" in the right way, I think. And if you know how to code yourself, you can do so without any AI involvement at all.

But even with AI, a more complex tool plugin requires a bit more thought and effort. Still, the fact that you can make stuff that is simply not available in any other software is pretty amazing. For example, the spiky and distorted "unconventional graphic design tools" type in the screenshot is made with an SVG curve displacement plugin I originally developed as a web component demo. It calculates the normal vector of points along the SVG path, then displaces them using mathematical functions, like the "Witch of Agnesi" (1 / (1 + x²)) [1]. The curve is then normalized by another curve, like a basic sine wave, so the starting and ending points match perfectly at y=0. Then, amplitude and frequency control the scale and repetition of the effect. It's easier to show than explain, so you can play around with it below:

Try this path for example: M102 406h203c57 0 101-71 101-162v-61H153v61h152c0 45-23 81-51 81H153c-29 0-51-36-51-81v-82c0-45 22-81 51-81h203V0H102C45 0 0 71 0 162v82c0 91 45 162 102 162Z
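Under the hood, the displacement boils down to something like this sketch (my reconstruction of the general technique, not the plugin's actual code):

    // Sample a path, push each sample along its normal by a Witch of
    // Agnesi bump, and window the whole thing with a sine so the start
    // and end points stay at zero displacement.
    function displacePath(path, amplitude = 20, frequency = 6, samples = 400) {
      const len = path.getTotalLength();
      const pts = [];
      for (let i = 0; i <= samples; i++) {
        const t = i / samples;
        const p = path.getPointAtLength(t * len);
        const q = path.getPointAtLength(Math.min(t + 0.001, 1) * len);
        const dx = q.x - p.x, dy = q.y - p.y;
        const d = Math.hypot(dx, dy) || 1;
        const nx = -dy / d, ny = dx / d;         // unit normal of the path
        const u = ((t * frequency) % 1) * 6 - 3; // repeat the bump over [-3, 3]
        const agnesi = 1 / (1 + u * u);          // Witch of Agnesi
        const win = Math.sin(Math.PI * t);       // 0 at both endpoints
        const off = amplitude * agnesi * win;
        pts.push([p.x + nx * off, p.y + ny * off]);
      }
      return "M" + pts.map(([x, y]) => `${x.toFixed(1)} ${y.toFixed(1)}`).join("L");
    }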

Another example. I found this 1986 book Geometric and Artistic Graphics — Design Generation with Microcomputers by Jean-Paul Delahaye in the Aalto library, which includes many instructions for drawing graphics with MS BASIC. I made a simple MS BASIC interpreter in JavaScript, which uses the commands to draw onto an HTML canvas. Then I turned that into a plugin for Hypergoblet. Here's a standalone demo you can try: press "RUN" to render the graphic, and check out the few examples I typed in from the book. (Open in new page)
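The interpreter idea is simpler than it sounds. Here's a toy version that handles just two MS BASIC drawing statements, PSET and LINE, to illustrate the approach (the real interpreter handles much more):

    // Toy MS BASIC interpreter: strips line numbers, then pattern-matches
    // two drawing statements and executes them on a canvas 2D context.
    function runBasic(source, ctx) {
      for (const raw of source.split("\n")) {
        const line = raw.replace(/^\d+\s*/, "").trim(); // drop "10", "20", ...
        let m;
        if ((m = line.match(/^PSET\s*\((\d+)\s*,\s*(\d+)\)/i))) {
          ctx.fillRect(+m[1], +m[2], 1, 1);             // PSET (x,y)
        } else if ((m = line.match(/^LINE\s*\((\d+)\s*,\s*(\d+)\)\s*-\s*\((\d+)\s*,\s*(\d+)\)/i))) {
          ctx.beginPath();                              // LINE (x1,y1)-(x2,y2)
          ctx.moveTo(+m[1], +m[2]);
          ctx.lineTo(+m[3], +m[4]);
          ctx.stroke();
        }
      }
    }

    // runBasic("10 LINE (0,0)-(100,80)\n20 PSET (50,40)", canvas.getContext("2d"));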

The future of Hypergoblet is still unknown. It shows a lot of promise, but I'm still a bit unsure about some of the directions I want to take it in, because it's a fairly complex project. I might get back to this in the spring.

    Links

  1. https://mathshistory.st-andrews.ac.uk/Curves/Witch/

October — ASCII AUTOMATA & UI experiments

ASCII AUTOMATA

In October, after my teaching duties ended, I could fully focus on the residency. And that focus yielded a new tool: ASCII AUTOMATA.

For some years I've been developing an ASCII art editor called MoebiusXBIN [1] which, among other things, supports custom fonts (besides the default IBM PC and Amiga ones). Because of that, I've spent a lot of time in Aseprite and Fontraption [2] looking at pixels and thinking about what makes a good ASCII art font. I like fonts that have a diverse set of characters that "connect" to each other at the edges of their bounding boxes. Good examples include Amiga ASCII's Topaz fonts and Commodore's PETSCII font. A font whose characters can connect to many other characters is good for creating ASCII art, because they can be assembled into an endless variety of continuous and nearly seamless shapes: logos, text, images and other graphics.

But when making a new ASCII art font, it's pretty overwhelming and difficult to keep track of how everything connects. If a character has an "edge connector" (part of a shape that touches one of the edges of its bounding box) but no counterpart in some other character, it's less useful than a character that connects to many other characters. Knowing beforehand which character shapes will actually be useful for drawing, and which ones will end up rarely used, is largely guesswork, even with a strict design system. You can't really know unless you spend a lot of time making art with the font and getting really familiar with each of the 256 characters.

To ease this font-making process, I started designing a tool to analyze the visual connectivity of characters in textmode fonts. It works by scoring the edge connectivity of each character and finding the best matching neighbour piece. With it, I can get a quicker sense of which characters have a lot of matching counterparts and should (in theory) be useful for ASCII art purposes.

Then, to visualize which characters have the best connectivity, I made a sort-of cellular automaton: starting from a random character, see if it touches an edge, and if so, place a matching character into the neighboring cell. Repeat until a dead end is reached or a character with no further edge connections is placed.
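The growth rule fits in a few lines. A rough sketch, where each glyph stores a connector pattern per edge; that data model is my hypothetical stand-in for the tool's actual scoring:

    // Grow outward from a seed glyph: for every edge that has a connector,
    // find a glyph whose opposite edge matches and place it in the
    // neighbouring cell, until no edge finds a match (a dead end).
    const OPPOSITE = { right: "left", left: "right", top: "bottom", bottom: "top" };
    const STEP = { right: [1, 0], left: [-1, 0], top: [0, -1], bottom: [0, 1] };

    function grow(grid, glyphs, x, y, glyph) {
      grid.set(`${x},${y}`, glyph);
      for (const side of Object.keys(STEP)) {
        const pattern = glyph.edges[side];
        if (!pattern) continue;                    // no connector on this edge
        const [dx, dy] = STEP[side];
        const nx = x + dx, ny = y + dy;
        if (grid.has(`${nx},${ny}`)) continue;     // cell already occupied
        // pick any glyph whose facing edge matches (the real tool scores these)
        const match = glyphs.find(g => g.edges[OPPOSITE[side]] === pattern);
        if (match) grow(grid, glyphs, nx, ny, match);
      }
    }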

Because this tool produces unexpectedly beautiful and strange emergent patterns, I made it into a proper little toy-tool for anyone to play around with. It's available at https://hlnet.neocities.org/ascii-automata/. It was a great kickstart to my residency!

    Links

  1. https://blog.glyphdrawing.club/moebiusxbin-ascii-and-text-mode-art-editor-with-custom-font-support/
  2. https://int10h.org/blog/2019/05/fontraption-vga-text-mode-font-editor/

A new experimental UI building method

It was also a good opportunity for testing an idea I had for constructing a flexible semi-complex UI.

A little background. During the spring/summer, I got into researching single stroke vector fonts, specifically Hershey fonts [1]. They are a collection of vector fonts made in 1967 by Allen Hershey at the US Naval Weapons Laboratory. They were originally designed to be rendered using vectors on early cathode ray tube displays, but are still often used for carving text with CNC and laser cutting machines.

The neat thing about single stroke vector fonts is that they can be freely scaled, stretched, skewed or otherwise transformed while keeping the stroke contrast consistent. In other words, unlike with conventional outline-based fonts, where any transformation applies to the whole shape, with single stroke fonts the transformations only affect the "skeleton", and the stroke is applied afterwards. Every stroke stays the same fixed width, and the overall visual consistency is maintained.

While working on new visuals with GRMMXI for Ruusut's upcoming album, we thought this text rendering technique could be interesting to use, so I made a little web editor in which you can create type compositions with the Hershey shapes that fill the entire canvas regardless of word length. (Click the "Append" checkbox, then click the characters to see it in action.)

They then asked us to do some visuals for their gig at Flow Festival, so, based on this web editor, I made a Python script that generated a signed distance field (SDF) for each word in their song lyrics, which were then fed through some VJ software and timed in Ableton. I couldn't attend the gig myself to take proper pictures, so I don't have any, but this video by @danielapartanen from Instagram shows a glimpse:

I heard that there were some difficulties with the VJ software, so the end result is not exactly as we would have hoped, but even so, I think the results are pretty interesting.

Anyway, this got me thinking: what if you used this technique to render the text of UI elements, like buttons and labels? The problem with designing UIs is that they're often incredibly information dense, and trying to maintain that density, ease of use, and ease of development while supporting all kinds of different screen sizes is a major headache. But if everything just stretched to fill the available screen space, I would only have to worry about different aspect ratios (desktop vs. mobile), not different screen or window sizes.

So, I made a web component which renders the Hershey vector shapes as SVG paths. The SVG fills the parent element, and the stroke is applied after the stretching happens (thanks to SVG's vector-effect="non-scaling-stroke"): the strings "A" and "AAA" take the same amount of space while remaining legible, because the stroke is independent of the text's transformations. Thus, I will never have problems with overflowing text in the UI again!

A AAA AAAAAA
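A minimal standalone version of the trick looks something like this (not the actual web component, just the core SVG behaviour it relies on):

    <!-- preserveAspectRatio="none" lets the drawing stretch to any box,
         while vector-effect keeps the stroke 1px however it deforms. -->
    <div style="width:100%; height:120px;">
      <svg viewBox="0 0 100 50" preserveAspectRatio="none" width="100%" height="100%">
        <path d="M10 40 L25 10 L40 40 M17 30 H33"
              fill="none" stroke="black" stroke-width="1"
              vector-effect="non-scaling-stroke"/>
      </svg>
    </div>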

This works wonderfully with the layout system I've been developing, which is based on CSS grids. For example, the sidebar is simply <div style="--cols:8;--rows:41;" class="sidebar grid"> and then each UI element gets a position and size: <vec-text style="--x:1;--y:19;--w:2;--h:1;">Cell Width</vec-text>. As a result, the layout is easy to make, because all you really need to specify is the position and size of each element in the grid. The sidebar itself can be any size or shape, all the UI elements stay exactly where I put them, and all text remains legible thanks to the stretchy, monolined vector font web component. It's great! The only downside is that even a 1px stroke can get muddy if the rendered text is tiny. But that is rarely a problem and can be worked around.
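The grid system itself boils down to something like the following CSS (a simplified sketch; the real system surely has more to it):

    /* the custom properties from the HTML above map directly to grid placement */
    .grid {
      display: grid;
      grid-template-columns: repeat(var(--cols), 1fr);
      grid-template-rows: repeat(var(--rows), 1fr);
    }
    .grid > * {
      grid-column: var(--x) / span var(--w, 1);
      grid-row: var(--y) / span var(--h, 1);
    }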

The WHOLE UI layout for ASCII AUTOMATA is just 120 lines of HTML, and 40 lines of CSS for around 90 UI elements, which is honestly pretty incredible! This technique has made UI design an actually pleasant activity, and one that doesn't fill me with dread anymore. (It did take a while to fiddle with the coordinate numbers, which was a bit painful, but I'm working on a WYSIWYG tool to make that easier too...)

    Links

  1. https://en.wikipedia.org/wiki/Hershey_fonts

November — a single stroke vector font editor

An experimental single stroke vector font editor

As great as the Hershey fonts are, I wanted to make my own single stroke vector fonts, so I spent November making an editor for that. It's not "officially" out yet, but you can already use it at https://hlnet.neocities.org/hershey/

The editor is split into two panels: in the right panel you design the font, and in the left panel you can compose paragraphs. The text changes in real time as the font is edited, which makes for a really great design feedback loop, because you can immediately see how design changes work in the context of actual text, and not just as separate entities.

The font editor

The font editor side is heavily based on the simplicity of the Hershey font format. It uses a coarse grid (each point is snapped to the grid) and two simple commands: moveTo and lineTo. But I also added a new command: conicTo, which draws a g-conic curve.

I first read about g-conics in the 1994 book Font Technology: Methods and Tools by Peter Karow. According to Karow, g-conic curves were used for rendering font outlines in the long-forgotten F3 font format in the late '80s and early '90s, but fell out of use as cubic Bézier curves and TrueType fonts became standard.

G-conics refer to a mathematical function for drawing conic sections (like parabolas, ellipses, and hyperbolas) using a set of three points and a "sharpness" value. Nowadays g-conics are mostly known as "rational quadratic Bézier curves", which sounds less exciting but is maybe more straightforward to understand. Basically, it's a normal quadratic Bézier (or a cubic Bézier constrained to the "magic triangle", as described in the Glyphs tutorial on drawing good paths [1]), but with an additional parameter that determines how pointy the curve is. Here's an interactive comparison that should make it clear. Crank the "sharpness" slider above ~0.73, and you get a sharper curve than what is possible with conventional Béziers:
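For reference, a rational quadratic Bézier is just a quadratic Bézier whose control point gets a weight; the editor's "sharpness" presumably maps onto that weight. A minimal evaluator:

    // Rational quadratic Bézier: w = 1 gives a plain quadratic Bézier,
    // w > 1 pulls the curve harder toward the control point p1.
    function conicPoint(p0, p1, p2, w, t) {
      const b0 = (1 - t) * (1 - t);
      const b1 = 2 * w * t * (1 - t);
      const b2 = t * t;
      const d = b0 + b1 + b2; // rational part: normalize by the weighted sum
      return {
        x: (b0 * p0.x + b1 * p1.x + b2 * p2.x) / d,
        y: (b0 * p0.y + b1 * p1.y + b2 * p2.y) / d,
      };
    }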

Because "good type design" already constrains Béziers to a magic triangle, but overall homogenizes the way fonts look [2], the sharpness parameter adds a layer that can break the algorithmic logic of how we design fonts.

In addition to the sharpness slider, the curves automatically follow cardinal directions, which removes the need to manually adjust handle directions. And with the shortcut "x", the curve can be toggled between clockwise and counter-clockwise directions.

Overall, it's really fast to design lettershapes with it. Here's an early test I made in a few hours with no planning or extra thought (in other words, I can use it quickly & effectively. Making an equivalent font in Glyphs, for example, would take infinitely more time):

The paragraph composer

The other part of the editor is the paragraph composer. It's fairly simple: you can add text and give it a position (x & y), size (width & height), font size, line height and tracking. But the way it composes the paragraph is quite unconventional. Words are spaced with collision-based optical kerning, and the paragraphs are composed with an obscure semi-justification method.

Collision based optical kerning

The simplest way to compose words is to place the letters next to each other based on their bounding boxes (or advance widths) and add a uniform amount of tracking between each letter. This is of course not satisfactory if we want decent looking typography, because missing kerning adjustments lead to inconsistent spacing. The conventional way to handle kerning is to assign each letter default left and right bearings, and then have an additional table for adjusting those numbers for each letter pair. Doing this manually is a huge amount of work. I didn't want that, so I made a collision-based kerning system which is more automatic but still generates decent results. With tracking set to 0, it looks like this:

Instead of using a set number for kerning values, it just uses the form of the letters to pack them as tightly as possible. Then tracking is added on top. It has a big drawback though: if you have a letter like "C" and then type a dash "—", the dash will go fully inside the shape of the C. So I added a "minWidth" parameter to each letter, which acts as a barrier for the collision kerning. For example, if the letter C is 20 cells wide, I can set the minWidth to 19, which stops the dash from sinking all the way in: it can overlap by at most 1 cell width. It works quite well and is only one additional number to fiddle with.
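As a sketch (not the editor's exact code), assuming each glyph stores, for every scanline row, how far its ink extends from the left and right:

    // Pack a glyph pair as tightly as their ink profiles allow; minWidth
    // acts as a floor so thin glyphs like "—" can't sink fully into open
    // shapes like "C". Profiles hold per-row ink extents, null = no ink.
    function pairAdvance(left, right, tracking = 0) {
      let advance = left.minWidth;                // collision barrier
      for (let row = 0; row < left.rightProfile.length; row++) {
        const a = left.rightProfile[row];         // rightmost ink in this row
        const b = right.leftProfile[row];         // leftmost ink in this row
        if (a === null || b === null) continue;
        advance = Math.max(advance, a - b + 1);   // keep the shapes from overlapping
      }
      return advance + tracking;
    }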

The last remaining drawback is that straight vertical strokes in letter pairs like "ll" end up too close to each other. I'm still trying to figure out the best way to automatically give them a bit more space, but I haven't found a simple yet foolproof way of doing that. Behdad Esfahbod on typo.social mentioned the method he uses for halfkern [3]:

The way the tool works is that for every pair of letters that are considered, it will blur their renderings and space the two such that the blurred images overlap a certain amount. This certain amount is found by first calibrating using the "ll", "nn", and "oo" pairs.

It seems to produce really good results, completely automatically, so I might give it a shot at some point. Jackson (@Okay) also reminded me of BubbleKern [4], but that requires drawing the collision shapes by hand, so it's not quite what I want.

Semi-justified text

The paragraphs are then composed with a semi-justification algorithm I came up with. I wrote about it in detail on my blog [5], so I'm not going to repeat everything here. In a nutshell, it's like basic greedy justification, but the word-spaces are bounded to some min and max width:

By bounding word-spaces, we don't need to decide which lines should be justified, because lines automatically self-select: if a line can reach the target width within acceptable spacing bounds, it gets justified, and if not, it stays ragged. This is achieved by restricting how much word-spaces are allowed to expand during space distribution. In addition, we can also allow the word-spaces to shrink for more flexibility.
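As code, the rule can be sketched roughly like this (a simplified reconstruction from the description above, with word widths assumed precomputed):

    // Greedy fill: a word fits if the line can still be laid out with all
    // word-spaces shrunk to minSpace. Then each line self-selects: justify
    // only when the leftover space stays within [minSpace, maxSpace].
    function semiJustify(words, widths, lineWidth, minSpace, idealSpace, maxSpace) {
      const lines = [];
      let line = [], ink = 0;
      for (let i = 0; i < words.length; i++) {
        if (line.length && ink + widths[i] + line.length * minSpace > lineWidth) {
          lines.push({ idx: line, ink });
          line = []; ink = 0;
        }
        line.push(i); ink += widths[i];
      }
      if (line.length) lines.push({ idx: line, ink });

      return lines.map(({ idx, ink }, n) => {
        const gaps = idx.length - 1;
        const space = gaps ? (lineWidth - ink) / gaps : idealSpace;
        const last = n === lines.length - 1;
        const justified = !last && gaps > 0 && space >= minSpace && space <= maxSpace;
        return { words: idx.map(i => words[i]), space: justified ? space : idealSpace };
      });
    }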

And here's a demo of how it works. Adjust the line width to see how the paragraph goes fully ragged at narrow widths and more justified at wider ones.


I really like this text alignment method. It hits the sweet spot of simple to implement, cheap to compute and good looking, with a lot of utility. I just wish it were available somewhere! I think it would be the perfect justification method for the web (I made a comparison of text alignment methods on CodePen [6]).

I'm very excited about the direction this tool is going in. I can make a font and lay out some paragraphs with it, export the result as SVG and print it, all using my own editor. And the results are completely legible, but also completely unique and kind of strange. Here's a test print:

Because it's all single stroke vectors, I hope to use this for some plotter prints at the Aalto workshops in the spring. Let's see!

    Links

  1. https://glyphsapp.com/learn/drawing-good-paths
  2. https://beyondbezier.ch/
  3. https://github.com/behdad/halfkern
  4. https://tosche.net/non-fonts/bubblekern
  5. https://blog.glyphdrawing.club/semi-justified-text/
  6. https://codepen.io/heikkilotvonen/pen/EaVeZBP

December — Unconventional Graphic Design Tools workshop, Mr. Baby Paint, accidentally discovering a new cellular automaton & pixel-fattening

Unconventional Graphic Design Tools workshop

I used to host Glyph Drawing Club workshops at Aalto, but a few years ago the university changed its policy so that only salaried teachers could organize them. I miss hosting them, so at the beginning of my residency I asked Arja if I could run one as part of the program. Fortunately she agreed and managed to make it happen within the bureaucracy of Aalto, and in the first week of December I hosted a workshop for 15 students (around 40 signed up...!) on "Unconventional Graphic Design Tools". It's hard to put into words how happy I am with how it went. The students were very motivated and active, and the results are amazing!

The Unconventional Graphic Design Tools workshop was based on a simple question: could one be a graphic designer WITHOUT Adobe software? All of the participants knew very well that the answer is no, and that Adobe has too much power over our field, but everyone wished the answer could be YES and would like to have some real alternatives. So that's what we did. The assignment was:

Make two spreads for a collective zine without using any Adobe software.

Rules:
  1. Use at least 4 different tools in total, in some way or another.
  2. It should include both textual content and images/illustration/graphics. Can be either analog, digital or a combination. Avoid defaulting to conventional ways of making. Avoid Adobe-like software, like Affinity, GIMP, Inkscape, Scribus or Photopea, unless it's to specifically use some niche feature not found in Adobe software.
  3. Content is whatever visual matter comes out of the experiments you do. Focus on the process, not the outcome. Have fun with it. Don't overthink the outcome. When putting everything together, you can lay out the content in any way you like. You can do this old-school by printing stuff out and assembling by hand, then scanning. If you do this, you can clean up the end result in Affinity/some other Adobe-like program. If you insist on doing everything digitally, you can use Affinity to lay out the visual matter if you can't find anything else to do that. But it's important to explore alternative ways of doing design first, and not immediately reach for the Adobe-like software out of habit / comfort.
  4. A pedantic sidenote: The PDF file format is an open standard, so it's not considered "Adobe software" anymore. You can also use Adobe Acrobat to view PDF files.

On the first day I gave a little introductory lecture, then introduced the students to the single stroke vector font editor, just to get the ball rolling.

And the next day I talked about some of my favourite non-Adobe tools, showed them my collection of analogue tools (lettering guides, rulers, etc.), and then shared a huge list of tools I've compiled: https://harvest-secretary-a65.notion.site/text-art-tools.

In the end, the students used 43 different digital tools, and many analogue tools too. Some of the favourites seemed to be Avocado Ibuprofen Paint [1], Glyph Drawing Club [2], tooooools.app [3] and constraint.systems [4]. The biggest pain point seemed to be alternative layout tools, so people resorted to doing layout in Affinity or by cutting, gluing and scanning by hand. That was also a good indicator that maybe I should focus on making some alternative layout tools :)

    Links

  1. https://nightphilosophy.github.io/avocado/
  2. https://glyphdrawing.club/
  3. https://www.tooooools.app/
  4. https://constraint.systems/

Designing software for toddlers: Mr. Baby Paint

I also managed to make and release a new editor during December! It's called Mr. Baby Paint [link: https://glyphdrawingclub.itch.io/mr-baby-paint].

My 3-year-old kid wants to participate in everything I do, including computer stuff. He enjoys pressing the springy keys, wiggling the mouse, making it do clicky sounds, and spinning the wheel. But with that kind of skillset, there's not yet a lot he can do with a computer. All I could think of was two things: experimental keyboard-smashing poetry in a text editor, or "action paint" in a drawing app.

We tried both WordPad and MS Paint. As simple as it gets, I thought. While he did manage to draw some beautiful but random scribbles and produce interesting yet unintelligible letter poems, overall the experience was more frustrating than fun. Because his mouse movements and clicks were haphazard and erratic, and his key presses random, I had to constantly intervene to bring back the typing or drawing mode after he "mis"clicked some random menu, toolbar or taskbar item. Changing colors, fonts or other options also meant I had to take the mouse away from him for a bit. He found these interruptions annoying, because he just wanted to keep playing. (But for some reason, when the computer is off, he's not interested in playing with the mouse and keyboard.)

So WordPad and MS Paint wouldn't do, and I couldn't find anything else simple enough for our needs and his skillset. I had to make my own. And I wanted to tackle this task with the same seriousness as any other software project, and really think about how to design a good software experience for toddlers (in co-op with their carers).

The result is a radically simple drawing app called Mr. Baby Paint. At first I thought that because the app is simple, it would be simple to make, but it turned out to be a much more interesting challenge than I expected, and it produced some surprising outcomes, like accidentally discovering a flood-fill-based cellular automaton.

Requirements

The minimum viable product I envisioned for this drawing app is as follows: a fullscreen blank canvas with no menus or toolbars, where clicking and dragging the mouse draws directly on the screen. My kid sits on my lap controlling the mouse while I handle keyboard shortcuts with my left hand — CMD+S to save his drawings and CMD+E to clear the canvas. The entire screen is the drawing area, and nothing breaks the experience, not even if he smashes the keyboard.

But I also wanted the app to be slightly more interesting and fun than that, and to really encourage drawing and creative play. Every action should be rewarding, whether it was intentional or not. So every action makes a mark, produces a sound effect or a visual effect, or some combination of these. There's no way to "mess up".

So, left-click draws, scroll drops sand and right-click paintbuckets. The faster you draw, the more paint splatter it produces.

One of the problems I had was that when you move the mouse really fast, the computer doesn't register the movement as a fluid continuous curve, but as discrete points in space captured every few milliseconds. This is fine for normal computer use, but unusable for a drawing app where you want to draw a continuous line. Most drawing apps solve this by connecting each point with a line, which works great for moderately fast mouse movements. But toddler mouse movements can be really fast, so the captured mouse positions can be tens or hundreds of pixels apart, making the supposedly fluid curve look very angular. I solved this by connecting the points with a Catmull-Rom spline, which creates a smooth continuous curve between them. Then I just stamp the brush texture along the curve every 1px. This approach was laggy for larger brushes, so I had to limit the stamping distance for them.
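The smoothing step, sketched (my simplified version, not the app's exact code):

    // Interpolate a Catmull-Rom segment between the two most recent mouse
    // points p1 and p2 (p0 and p3 are the points before and after, used as
    // context), stamping the brush at roughly 1px intervals.
    function catmullRom(p0, p1, p2, p3, t) {
      const t2 = t * t, t3 = t2 * t;
      const f = (a, b, c, d) => 0.5 * (2 * b + (c - a) * t
        + (2 * a - 5 * b + 4 * c - d) * t2 + (3 * b - 3 * c + d - a) * t3);
      return { x: f(p0.x, p1.x, p2.x, p3.x), y: f(p0.y, p1.y, p2.y, p3.y) };
    }

    function stampSegment(ctx, brush, p0, p1, p2, p3) {
      // one stamp per pixel of distance between the two captured points
      const steps = Math.max(1, Math.ceil(Math.hypot(p2.x - p1.x, p2.y - p1.y)));
      for (let i = 0; i <= steps; i++) {
        const p = catmullRom(p0, p1, p2, p3, i / steps);
        ctx.drawImage(brush, p.x - brush.width / 2, p.y - brush.height / 2);
      }
    }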

The UI

Instead of requiring precise mouse control, I made use of the erratic mouse movement: moving the cursor anywhere on the screen edges changes some setting:



Co-op paint

In reality, it's not really meant for toddlers to use all by themselves; the idea is to (of course) do this activity together with a toddler — so it's more like co-op paint for parents (or other carers) and toddlers. All of the more complex functionality is activated with keyboard shortcuts by the parent, while the toddler can focus on the main thing, drawing, without any unnecessary interruptions. For example, it can be difficult for young kids to actually hold down the left mouse button, so a parent can hold down the Alt (or Option on Mac) key to trigger the draw function while their child just moves the mouse around. Other keyboard shortcuts include:

Fill tool & accidentally discovering a sort of cellular automata

In most drawing apps the fill tool is instant, but I was inspired by Mario Paint [1], where you can actually see the fill happen slowly in real time. It's satisfying to watch it go. In Mario Paint, filling starts at the cursor position and proceeds line by line, both upwards and downwards. In Mr. Baby Paint the flood fill also grows left and right.

In Mario Paint you have to wait for the fill to finish before you can start another one. However, I found that unintuitive and unfun in my app. I didn't want any action to block drawing; I wanted multiple fills and drawing to happen simultaneously.

But when I implemented that, I accidentally discovered a flood-fill-based cellular automaton. And by "discovered", I mean it literally: I was just test-spamming the flood fill until patterns started to emerge!

Here's how: I started a new flood fill near the inside edge of another, still-growing flood fill. Because the "growth budget" is shared among all flood fills and capped at 1000 pixels per frame, newer flood fills grow faster than older ones, because they are smaller. So the smaller fill started colliding with the still-growing edge of the older fill. And when I placed yet another flood fill with the same color as the first one, they all started collapsing into each other, forming very organic-looking patterns that ripple and shift. It's semi-stable: the fills can go for thousands of generations before settling into a stable oscillating pattern, or sometimes one flood "wins" and the others disappear. It's a fun glitch, so I left it in.
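Roughly, the mechanism looks like this (a simplified sketch of my own, with every active fill sharing one per-frame pixel budget):

    // fills: [{ color, target, frontier: [pixelIndex, ...] }, ...]
    // All fills advance in lockstep, so a new small fill can eat into the
    // still-growing frontier of an older one and trigger the patterns.
    function stepFills(pixels, width, fills, budget = 1000) {
      let active = true;
      while (budget > 0 && active) {
        active = false;
        for (const f of fills) {
          const i = f.frontier.shift();
          if (i === undefined) continue;
          active = true;
          if (pixels[i] !== f.target) continue;   // another fill claimed it first
          pixels[i] = f.color;
          budget--;
          // enqueue 4-neighbours still holding the fill's target colour
          // (row edge wrap-around ignored for brevity)
          for (const n of [i - 1, i + 1, i - width, i + width]) {
            if (n >= 0 && n < pixels.length && pixels[n] === f.target)
              f.frontier.push(n);
          }
        }
      }
    }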

(Technical sidenote: I do everything on the CPU in one thread, so calculations are sequential, which is required for the flood fill automaton to work. I did try a GPU version with compute shaders, but while it's faster and produces no lag, it's somehow more boring. The inherent slowness of CPU-based calculation is a deliberate design choice, although I did my best to keep everything running at reasonable framerates.)

My friend Adel also suggested I try different fill patterns, so I made a little demo where the fill color automatically alternates between black and white, making the glitch super easy to achieve.

Instructions: Click to start a fill, then immediately start another fill close to the inner edge of the first one. Patterns should start to emerge. You can also click the "Auto" button and wait until patterns emerge. You can design and customize the fill patterns, and even change them mid-fill. Download the image by right-clicking on the canvas, then Save Image As...

Mr. Baby Paint is also the first app I've ever sold. I just wanted to see what that entails. I've sold about 30 copies so far! It's available for Mac, Windows and Linux, and you can get it on itch.io for $4.99.

Pixel fattening

Mr. Baby Paint has 16 fonts. Most of them are from various old-school computers, like the original Apple Macintosh. I sourced them from Rob Hagemans' hoard of bitfonts [2]. But I also wanted to make a few of my own, so I used my single stroke font editor for that. It already had a bitmap renderer which draws the fonts with 1px strokes, which I could use for the textures. But a 1px stroke is awfully thin and I wanted something thicker, and because I didn't want to do it by hand, I made a dozen experimental pixel-fattening scripts to do it for me.

Here's the original I wanted thicker, straight from my single stroke vector font editor:

The first idea was extremely simple: sample each pixel, and place a bigger square at every black pixel. This was of course not great, because it fills in important details in the lettershapes and generally looks quite clunky:
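That first attempt, as a sketch:

    // Naive fattening: stamp a (2r+1)×(2r+1) square at every black pixel.
    // src and out are flat bitmaps, 1 = black, 0 = white.
    function fatten(src, w, h, r = 1) {
      const out = new Uint8Array(src);
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          if (!src[y * w + x]) continue;
          for (let dy = -r; dy <= r; dy++) {
            for (let dx = -r; dx <= r; dx++) {
              const nx = x + dx, ny = y + dy;
              if (nx >= 0 && nx < w && ny >= 0 && ny < h) out[ny * w + nx] = 1;
            }
          }
        }
      }
      return out;
    }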

So I tried to preserve the details and gaps, while increasing the line width...

I tried a jump flooding algorithm (JFA) distance field with second-nearest-component distance to stop expansion at the midpoint between components:

Which didn't work so well, so then I tried JFA distance field with angular gap detection (finds black pixels in opposing directions >90° apart) to estimate and preserve gaps:

Which looked promising, so then I tried the same as #2 but adding a post-processing hole-fill pass that fills white pixels surrounded by black in all 4 cardinal directions:

Which was even more promising, so then I tried JFA distance field with white-space skeleton (medial axis ridge) detection (pixels near the skeleton are masked to preserve gaps):

Which was already pretty good... but I just had to keep going, so then I tried adaptive per-pixel radius, which uses JFA to compute distance to the nearest different component, then shrinks each pixel's expansion radius to maintain a minimum gap:

I should have stopped at #4, but had to then try the same angular gap method as #2 but with a diagonal tolerance offset and a post-process pass that fills isolated white pixels surrounded by 7+ black neighbors:

Then I got the itch to try a signed distance field (SDF) via JFA combined with 16-direction ray-casting to compute per-pixel maximum safe expansion before hitting an opposing gap boundary:

...and then I tried a SDF via JFA with per-pixel 8-direction ray-casting narrow-gap detection at render time (so it skips pixels found to be in the middle of a narrow gap):

...and a JFA distance field with 8-direction ray-cast gap detection using a distance-ratio threshold between the two nearest components:

...and Meijster's exact Euclidean distance transform with union-find connected-component labelling (CCL) and Voronoi boundary gap detection:

...and an iterative 1px dilation with collision detection which locks pixels where two different components would meet, controlled by a gap-width delay parameter:

...and a topology-preserving iterative dilation which after each 1px expansion backs out any pixel that would merge separate white regions. Or that's the theory, but it failed completely and just resulted in the original crude version:

...and finally, before snapping out of it, a JFA Voronoi boundaries as watershed lines that can never be crossed, plus a white-region merge check as a second guard:

All pretty interesting, but none of them perfect. I settled on method #4:

...which I manually edited and cleaned up in Aseprite:

And here's how it all looks in Mr. Baby Paint. The bigger font is another version with the same skeleton, just a different size and a different stroke applied.

Overall, I'm pretty happy with it! It's chunky, fun, and fits Mr. Baby Paint quite well. And now I have a full bitmap font pipeline that produces different weights and styles quite easily from the same source.

Here's a few extra tests I made. The details are weird, but overall they're surprisingly legible!

Then there are a few of the "failed" tests; even they're quite legible and could be used effectively in some situations:

And it works for drawings too! Or basically any image.

    Links

  1. Mario Paint gameplay: https://youtu.be/MX3HERvqHwI?t=312
  2. https://github.com/robhagemans/hoard-of-bitfonts

January — Il-Verse

Il-Verse

January produced yet another tool. The House of Text in Helsinki had a one-month residency period for a group of artists working with text. They asked Arja to come introduce the Centre for Text Margins, and they also wanted to hear more about my residency, so Arja invited me along. I had mentioned earlier that it would be really nice to collaborate with artists who work with text to test out some new ideas I had for a poetry tool, so this was the perfect opportunity. The result is a visual poetry & experimental typography tool called Il-Verse, which I introduced to the group during a one-afternoon workshop.

It's based on the same kerning idea I already implemented for the single stroke vector font editor: fit glyphs as close to each other as possible based on the actual vector shape, so they're just about touching, but don't collide. Then add tracking.

But instead of automatically composing paragraphs from words and words from letters, with Il-Verse you "drop" characters along a straight path (up, down, left or right) until they collide with another letter. It somewhat resembles manual letterpress typesetting, but as a writing tool rather than a design tool.

Parsing the fonts

Before starting on the editor, I had to figure out how to make the collision system and get proper data to work with. I knew AABB collision detection [1] is computationally fast and simple to implement, so I went with that. Then I only had to figure out how to parse the letterforms into boxes. I remembered a method from Sebastian Lague's video on ray tracing [2], which mentions a bounding volume hierarchy for subdividing a complex shape into smaller and smaller bounding boxes. I figured that would be best, as it would produce the least amount of boxes with sufficient resolution. (I also culled boxes that are not visible from one of the cardinal directions, because letters only drop from those directions.)
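The collision primitive itself is simple: two axis-aligned boxes overlap iff they overlap on both axes, and dropping a glyph is then just advancing it until any of its boxes would hit a placed box. A sketch (the box structures here are hypothetical):

    // a, b: { x, y, w, h }
    const overlaps = (a, b) =>
      a.x < b.x + b.w && b.x < a.x + a.w &&
      a.y < b.y + b.h && b.y < a.y + a.h;

    // glyphBoxes: the falling glyph's boxes relative to its origin;
    // placed: boxes already on the page. Returns the resting y position.
    function dropY(glyphBoxes, placed, x, floorY) {
      for (let y = 0; ; y++) {
        const hit = glyphBoxes.some(g =>
          placed.some(p => overlaps({ x: x + g.x, y: y + g.y, w: g.w, h: g.h }, p)));
        if (hit || y > floorY) return y - 1; // rest just above the first collision
      }
    }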

As mentioned earlier, this can create too much overlap in combinations like "C-", where the dash would go completely inside the C shape. I tried fixing this by adding a big bounding box that's some percentage of the original shape and acts as a minimum bounding box size. This worked decently, so combinations like AT, LY, etc. are not overkerned.

I used Typr.js [3] to get the font vector data, parsed each letter, then made a quick demo with matter.js [4]. The initial results were promising and fun! Almost too fun: I nearly pivoted into making a kinetic typography sandbox instead of a poetry tool.

But as I started working on the editor and had a friend try it, I realized that my initial approach was not going to work, for two reasons. First, the binary tree method would produce zero-width or zero-height boxes wherever it found a straight edge, and these caused all kinds of problems in the collision system. Second, my friend wanted to drop letters inside other letters! Like, dropping some letters inside the bowls of the letter "B", for example. Which, I agreed, he should be able to do. But that didn't work with my method, because the inner areas were either blocked by the inner bounding box, or I had culled the inner bounding boxes and there was nothing to collide with. I had committed the classic programming sin of optimizing too early.

I had to figure out a different approach, and I settled on the following: rasterize the shape, divide it into a 50×50 grid, find the largest axis-aligned rectangle inside the grid that the shape occupies, convert it to a bounding box, then repeat until some threshold is reached. This produces a decently low number of bounding boxes with decent precision. I bet there are better and simpler ways to solve this problem. But this works just fine — I only need to parse the font once, after all, and the larger number of bounding boxes doesn't impact performance as much as I had feared. I can easily render 3000+ glyphs on one page, which is more than enough.
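As a sketch, with a greedy stand-in for the proper largest-rectangle search:

    // Decompose a rasterized glyph (flat size×size grid, 1 = inked) into
    // boxes: grab a filled cell, grow a rectangle right and down while it
    // stays fully filled, record it, clear it, repeat. Rectangles smaller
    // than minCells are dropped: that's the stopping threshold.
    function decompose(grid, size, minCells = 2) {
      const boxes = [];
      const filled = (x, y, w, h) => {
        for (let j = y; j < y + h; j++)
          for (let i = x; i < x + w; i++)
            if (!grid[j * size + i]) return false;
        return true;
      };
      let found = true;
      while (found) {
        found = false;
        for (let y = 0; y < size && !found; y++) {
          for (let x = 0; x < size && !found; x++) {
            if (!grid[y * size + x]) continue;
            let w = 1, h = 1;
            while (x + w < size && filled(x, y, w + 1, h)) w++;
            while (y + h < size && filled(x, y, w, h + 1)) h++;
            if (w * h >= minCells) { boxes.push({ x, y, w, h }); found = true; }
            for (let j = y; j < y + h; j++)        // clear what we covered
              for (let i = x; i < x + w; i++) grid[j * size + i] = 0;
          }
        }
      }
      return boxes;
    }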

Then I did the rest of the editor. It has two modes: in TYPING MODE letters are dropped instantly on key press, and in DROP MODE letters can be placed more carefully with a mouse click. The drop direction can be changed, letters can be rotated and flipped, the font size follows Pierre-Simon Fournier's [5] type scale, and the editor is fully usable with just the keyboard. The cursor can be snapped to a grid for precise lines. TAB places an empty bounding box at the cursor location, which allows for creating all kinds of layouts. It saves files as a command history, so the undo state is always preserved! That also functions as a nifty timelapse playback tool. You can load your own font, or use one of the four Computer Modern fonts. It's pretty simple but surprisingly "powerful".

Finally, I used the same UI system as with ASCII AUTOMATA, but with a new, refined and much faster workflow. I built it with an Electron version of my Grid Drawing Club (which I've also been working on during the residency... I hope to turn it into a kind of WYSIWYG hand-coding / grid layout / experimental HTML & CSS tool... hard to explain). It's super fast. When making Il-Verse, I put all the UI elements into a flat list of HTML elements (so almost no nested elements). Then I wrapped the whole thing in my CSS grid system, wrapped each UI element in a <cell> (just a custom-named element), and then I could drag and resize each element on the page, live. And it's JUST vanilla HTML and CSS: no build steps, no heavy libraries, no pain. I just hit save in my Electron app and it's already there, because all it does is edit the HTML file. Making the whole UI took me only one day, and I could have done it even faster if I'd had any kind of plan, but I went back and forth positioning things here and there and trying many different layouts, which was easy and fast because of this workflow. It's so good! Can't wait to share this app with everyone. Here's a screenshot (it looks quite rough because it's still in its early stages, but it works!):

But, back to the workshop... the participants at the Text Laboratory workshop managed to produce many delightful works in just one afternoon:

I feel like I say this about all my recent projects, but I'm again overjoyed at the results, and very proud of the tool I made. I also think it's the perfect workshop tool, because it's really fast to learn and somewhat familiar to everyone (whether you're more text or more image oriented), yet nobody is an expert at it, so everyone starts from the same line, and everybody can produce some delightful stuff with it. And for those who really get into it, it can be a great tool for exploring a new spatial dimension of writing.

    Links

  1. https://en.wikipedia.org/wiki/Minimum_bounding_box#Axis-aligned_minimum_bounding_box
  2. Sebastian Lague video: https://youtu.be/C1H4zIiCOaI?t=1914
  3. https://github.com/photopea/Typr.js
  4. https://brm.io/matter-js/
  5. https://en.wikipedia.org/wiki/Pierre_Simon_Fournier

Water simulation

Almost forgot! I also got sick and was stuck at home, so I made this water simulation in Godot using a bunch of compute shaders. It's heavily based on Sebastian Lague's fluid sim video, but with additional rigid body collision stuff. The plan is to maybe turn this into a water-physics-based puzzle game at some point. But I'll return to it at a later date; otherwise it might take over my whole spring.

So far, it's very hypnotizing.


What's next?

I'm about halfway through my residency at the Centre for Text Margins and I feel like I've just started! But looking back, I realize I've already done quite a lot, which is nice. I'm very grateful to Arja for inviting me to be a resident, and grateful to the Kone Foundation for funding it. It's been a dream to just focus on doing my things: developing tools, hosting workshops, exploring, experimenting and everything!

What's next:

The rest is kind of unknown still, but I will figure things out. Here's a list of maybes (not promises):

  1. I hope to do some plotter stuff. Maybe riso too?
  2. I hope to finish my polycentric art drawing tool that I started developing last spring.
  3. I hope to finish the electron version of Grid Drawing Club and release the UI system for others to use too.
  4. I hope to continue making Hypergoblet.
  5. I hope to update Glyph Drawing Club (it's been years!)
  6. I hope to maybe make one mega-editor which combines all my other editors into one...?
  7. I hope to produce a book using all of my own tools, from fonts, to font renderings, to layout, graphics and PDF generation, etc.
  8. I hope to write more. I want to write about the grammar of type ornaments, and about my research on pictorial letterpress.
  9. I hope to continue developing the archive site for my archive/collection of pictorial letterpress and text art. It's already online, but it's missing about 500 pictures, so I haven't shared it anywhere yet.
  10. I hope I don't get too sidetracked with too many new shiny ideas, but could maintain focus on improving and developing what I already have.
  11. I hope to release these kinds of updates at least once a month going forward.
  12. I hope to learn how to make a really good onion soup.

That's all for now. If you want to contact me, send me an email at hlotvonen@gmail.com
