Noordstar Blog

Homepage and blog by Bram Noordstar

I write here because I like writing. Simple as that. Some of it is technical, some of it is just thoughts and little ideas. No tracking, no engagement farming or ads – just words on a page.

Things you might be looking for

  • Blog posts – Browse the latest or check out posts by topic
  • Code & Projects – My GitHub or Git server
  • Self-hosted services – If you're a friend or family member looking for something, you probably already know where to go. If not, ask me directly.
  • Contact – Reach out to me on Matrix

Who I am

I like open source, decentralized tech, and figuring things out for myself. I love D&D, public transport and Europe. I would've liked to use the term cyberpunk to describe my blog, had it not already been used to describe a dystopian hyper-capitalist setting.

As can be read in my first blog post, I intend to build an Elm-like language with some unique design choices – and the community has taught me some valuable things!

TLDR: My design turns out to be akin to what cutting-edge programming languages are doing. I have a functional proof-of-concept! I'm not content with the syntax yet.

My idea isn't original – but it seems new

One of the most intriguing parallels to selective mutability I've discovered is the “resurrection hypothesis” mentioned in a 2019 paper called Counting Immutable Beans. The resurrection hypothesis captures the idea that many objects die just before the creation of an object of the same kind.

The map function is a great example of this:

type Tree a = Leaf a | Node (Tree a) (Tree a)

map : (a -> b) -> Tree a -> Tree b
map f tree =
    case tree of
        Leaf x ->
            Leaf (f x)

        Node tree1 tree2 ->
            Node (map f tree1) (map f tree2)

If the language is purely immutable, you might use twice the necessary memory: you would traverse the tree, build a new tree with an identical structure, and then discard the old one. But imagine that the update function of our MUV system looks like this:

type alias Model = { name : String, tree : Tree Int }

update : Int -> Model -> Model
update n model =
    { model | tree = map (\x -> x * 2) model.tree }

Most immutable languages would duplicate the tree and discard the old one, effectively doubling memory usage. But with selective mutability, the tree could be updated in-place.
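To make this concrete, here is a minimal C sketch of how a reference-counting runtime might exploit the resurrection hypothesis for the Tree example above. This is my own illustration, not the actual design: the Tree struct layout, the refcount field and the name map_double are assumptions made for the sketch.

#include <stdlib.h>

/* Hypothetical runtime representation of a `Tree Int` node. Each node
 * carries a reference count so the runtime knows when it is the sole owner. */
typedef struct Tree {
    int refcount;
    int is_leaf;
    int value;              /* used when is_leaf */
    struct Tree *left;      /* used when !is_leaf */
    struct Tree *right;
} Tree;

/* map (\x -> x * 2) over the tree, consuming its argument.
 * When refcount == 1, the old node would die right before an identical
 * node is born, so it is mutated in place instead of allocated anew. */
Tree *map_double(Tree *t) {
    if (t->refcount == 1) {            /* sole owner: reuse the node */
        if (t->is_leaf) {
            t->value *= 2;
        } else {
            t->left  = map_double(t->left);
            t->right = map_double(t->right);
        }
        return t;
    }

    /* shared node: keep a reference to each child for the recursive calls,
     * drop our reference to the node, and build a fresh copy instead */
    Tree *fresh = malloc(sizeof(Tree));  /* NULL check omitted for brevity */
    fresh->refcount = 1;
    fresh->is_leaf  = t->is_leaf;
    if (t->is_leaf) {
        fresh->value = t->value * 2;
    } else {
        t->left->refcount++;
        t->right->refcount++;
        fresh->left  = map_double(t->left);
        fresh->right = map_double(t->right);
    }
    t->refcount--;
    return fresh;
}

In the common case where the tree is uniquely owned, no allocation happens at all: the "new" tree is the old one, updated in place.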

There are also other programming languages developing similar ideas. The research language Koka uses Perceus, an algorithm that offers reference counting in a way that avoids a garbage collector. Similarly, Neut does what it calls static memory management, finding malloc/free pairs of matching sizes to optimize around the resurrection hypothesis.

A proof-of-concept works – for now

As an experiment, I have built a transpiled proof-of-concept. A major challenge is that most operating systems aren't built for catching memory issues. From my understanding, C, LLVM and Rust typically rely on the OS to manage the stack & heap. If there's an overflow, or some other problem, the OS terminates the program, reporting a segmentation fault. Not very helpful!

As a result, I have designed my own stack/heap system in a C program. Similar to a VM, the code runs in a single block of memory that's assigned to the program on startup. It functions reliably, regardless of available memory size.
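As a rough illustration of the idea – not the actual code of my runtime – a stripped-down version could look like the arena allocator below; runtime_init and runtime_alloc are made-up names for the sketch.

#include <stdlib.h>
#include <stddef.h>

/* The single block of memory assigned to the program on startup. */
static unsigned char *arena      = NULL;
static size_t         arena_size = 0;
static size_t         arena_used = 0;

/* Request the block once, at startup. This is the only point where the OS
 * can refuse us memory. */
int runtime_init(size_t size) {
    arena = malloc(size);
    if (arena == NULL) return 0;
    arena_size = size;
    arena_used = 0;
    return 1;
}

/* Hand out memory from the block. Running out is reported to the caller
 * instead of ending in a segmentation fault.
 * (Alignment handling omitted for brevity.) */
void *runtime_alloc(size_t size) {
    if (size > arena_size - arena_used) return NULL;
    void *p = arena + arena_used;
    arena_used += size;
    return p;
}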

For now, this snippet represents the decompiled version of the file low.c:

main =
    h "Kaokkokos shall prevail by the hands of Alkbaard!"

h : String -> String
h x = f (g x)

f : String -> String
f = String.upper

g : String -> String
g = Console.print -- currently an identity function with side-effects

This proof-of-concept shows that a memory-aware runtime is feasible, though Mem.withDefault handling is still pending implementation.

Memory-aware language design might need some changes

There are two major challenges with a design using => operations. It's a difficult concept to understand, and it limits the language's ability to compile to environments where memory cannot be managed.

As a result, it might be rewarding to design the types in a Mem module that is only usable in environments where memory CAN be managed. This has several downsides to consider, and it might not necessarily be a better option than the confusing => operation. For example, consider the following two functions:

-- Example 1
foo1 : Foo => Foo -> Foo
foo1 x =
    (\y -> bar x y )
        |> Mem.withDefault identity

-- Example 2
foo2 : Foo -> Foo => Foo
foo2 x y =
    bar x y
        |> Mem.withDefault defaultFoo

The two functions are essentially different, offering different guarantees in different scenarios. While foo1 guarantees to return a Foo type once two Foo types have been inserted, foo2 guarantees to return a Foo -> Foo function after one Foo type has been inserted.

These guarantees sound relatively reasonable when you look at the code, but how much effort would it cost to write a fully memory-aware function?

foo : Foo => Foo => Foo
foo x =
    (\y -> bar x y |> Mem.withDefault defaultFoo )
        |> Mem.withDefault identity

This is a rather unappealing way to write code, and quite difficult to read. This might need some reworking.

Conclusion

I have learned a few more concepts and the development seems to be going rather well! I am encountering fewer hurdles than expected, and the design seems manageable.

As with the previous post, I am very much open to ideas. Let me know if you have any thoughts to share!

#foss #functional #languagedesign


This afternoon, on 19 February 2025 around 16:00, a large group of trams got stuck at Leidseplein in Amsterdam. They seemed to be tangled up.

I came across the tangle at 15:56 and walked around a bit to survey the situation. Based on my footage, the tangle looked roughly as follows.

Drawing of the trams stuck at Leidseplein, drawn on OpenRailwayMap

The front blue tram was a line 5 towards Amstelveen Stadshart. It needed to turn left, but it just couldn't, because a line 17 was in the way. That tram, in turn, couldn't move forward, because at the front a line 2 towards Amsterdam Centraal was waiting for the line 19 that wanted to cross the track.

Photo of the tram that cannot turn right because another tram cannot move forward quite far enough

By 16:02 the jam had been resolved. I was standing on the south side of Leidseplein and therefore didn't see how it was resolved – but my suspicion is that the tram in the curve was able to back up a short distance, letting a number of trams move forward.


I'm very curious whether this is a problem that occurs more often! Leidseplein has become busier and busier over the past few years, so I'd love to hear whether this happens a lot there. In any case, I hadn't seen it before.

#publictransport #traffic


For now, this is the only Dutch-language post!


Introduction

As a programmer who has experienced the elegance of writing Elm, I’ve often wished for a language that extends Elm’s core philosophy beyond the browser. While many programming languages emphasize type safety, immutability, and purity, few address memory safety as a core language feature.

What if we designed a programming language where memory failures never crash a program? Where aggressive dead code elimination produces highly optimized output? And where every function is guaranteed to be pure and immutable?

This article outlines a conceptual framework for such a language—its principles, challenges, and potential optimizations.


Core Principles

1. Functional, Pure & Immutable

Everything in the language is a function. Functions are pure, meaning they always return the same output for the same input, and immutability is enforced throughout. Even variables are just functions with zero arguments.

This ensures strong guarantees for compiler optimization and program correctness.

2. Side-Effects Managed by the Runtime

Like Elm, side-effects cannot be executed directly by user code. Instead, side-effects must be passed to the runtime for execution. This delegates responsibility to the runtime designers and allows the compiler to assume that all side-effects are managed safely.

3. Memory Safety as a Core Language Feature

This language ensures programs never crash due to memory exhaustion. A special memory-safe module (Mem) allows functions to specify default return values in case of memory failure:

add : Int -> Int => Int
add x y =
    x + y
        |> Mem.withDefault 0

Mechanism

  • The => syntax signals a memory-safe function.
  • Mem.withDefault 0 ensures a fallback return value in case of failure.
  • Default values are allocated at startup to prevent mid-execution failures.

Because default values are allocated up front, memory failures cannot occur once the runtime passes the initial startup phase.
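As a rough sketch of the mechanism – illustrative C, not a real implementation; the names startup and add_with_default are invented, and the integer is boxed only so the fallback path has something preallocated to return:

#include <stdlib.h>

/* The default for `Mem.withDefault 0`, allocated during startup so the
 * fallback itself can never fail mid-execution. */
static int *default_zero = NULL;

int startup(void) {
    default_zero = malloc(sizeof(int));
    if (default_zero == NULL) return 0;   /* startup is the only phase that may fail */
    *default_zero = 0;
    return 1;
}

/* add x y |> Mem.withDefault 0: try to allocate room for the result;
 * if that fails, hand back the default that already exists in memory. */
int *add_with_default(int x, int y) {
    int *result = malloc(sizeof(int));
    if (result == NULL) return default_zero;
    *result = x + y;
    return result;
}

The pattern generalizes to larger structures: whatever Mem.withDefault names must already live in memory before the first fallback can ever be needed.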


Handling Dynamic Data Structures

Since the language enforces immutability, dynamically sized data structures must be created at runtime. If memory limits are reached, functions must define fallback strategies:

  • Return the original input if allocation fails.
  • Return a default value specified by the developer.

Ideally, memory exhaustion can be explicitly handled with a dedicated return type:

type Answer = Number Int | OutOfMemory

fib : Int => Answer
fib n =
    case n of
        0 -> Number 1
        1 -> Number 1
        _ ->
            case (fib (n - 1), fib (n - 2)) of
                (Number a, Number b) -> Number (a + b)
                _ -> OutOfMemory
    |> Mem.withDefault OutOfMemory

Extreme Dead Code Elimination

The compiler aggressively removes unused computations, reducing program size. Consider:

type alias Message =
    { happy : String
    , angry : String
    , sad : String
    , mood : Bool
    }

toText : Message -> String
toText msg =
    if msg.mood then msg.happy else msg.angry

main =
    { happy = "I am happy today."
    , angry = "I am extremely mad!"
    , sad = "I am kinda sad..."
    , mood = True
    }
    |> toText
    |> Mem.withDefault "Ran out of memory"
    |> Console.print

Optimization Process

  1. Since mood is always True, the else branch is never used.
  2. The function simplifies to toText msg = msg.happy.
  3. The .angry, .sad, and .mood fields are removed.
  4. Message reduces to type alias Message = String.
  5. The toText function is removed as a redundant identity function.

Final optimized output:

main = Console.print "I am happy today."

While this may require a lot of computation at compile time, all of these optimizations seem like fair assessments for a compiler to make.


Compiler-Assisted Mutability for Performance

While immutability is enforced, the compiler introduces selective mutability when safe. If an old value is provably unused, it can be mutated in place to reduce memory allocations.

Example:

type alias Model = { name : String, age : Int }

capitalizeName : Model -> Model
capitalizeName model =
    { model | name = String.capitalize model.name }

Normally, this creates a new string and record. However, if the previous model.name isn't referenced anywhere else, the compiler mutates the name field in place, optimizing memory usage.


Compiler & Debugging Considerations

For effective optimizations, the compiler tracks:

  • Global variable usage to detect always-true conditions.
  • Usage patterns (e.g., optimizing predictable structures like Message).
  • External data sources, which are excluded from optimizations.

To aid debugging, the compiler could provide:

  • Graph-based visualization of variable flow.
  • Debugging toggles to disable optimizations selectively.

Conclusion: A New Paradigm for Functional Memory Safety?

Most languages handle memory through garbage collection (Java, Python), manual management (C, C++), or borrow checking (Rust). This language proposes a fourth approach:

Memory-aware functional programming

By making memory failures a core language feature with predictable handling, functional programming can become more robust.

Would this approach be practical? The next step is to prototype a minimal interpreter to explore these ideas further.

If you're interested in language design, memory safety, and functional programming, I’d love to hear your thoughts!

#foss #functional #languagedesign



As a resident of the Netherlands, I take part in traffic by bike on a daily basis. One of the major problems I encounter is that communication between cars and bicycles can be difficult.

I believe that requiring cars to have brake lights on the front might help communication in traffic and make it safer.

While cars have turn signals and headlights, these don’t clearly indicate when a driver is slowing down, especially for pedestrians and cyclists. This can lead to hesitation, miscommunication, and even accidents.

The agony of pedestrians that cross at the worst time

Imagine you’re a pedestrian or cyclist approaching a crossing. A car is coming fast—do you cross?

Maybe it’s slowing down, but its blinkers suggest a turn. Is it stopping for you or just taking the turn carefully? Maybe the driver noticed you and is coasting, but they’re still moving. Do you cross?

By the time the car finally stops, you realize this whole dance wasted time for both of you.

There are comedy sketches on the internet ridiculing this situation, and it's an annoying experience for all drivers, cyclists and pedestrians involved.


Cars among themselves don't have this problem. Drivers can't see each other's body language, so they simply have to trust that everyone follows the priority rules correctly.

Cyclists don’t have this issue either. They notoriously ignore traffic rules, but at least they can read each other’s body language. Priority usually goes to the one who pretends hardest they don’t see the other person.

Pedestrians still bump into one another, but this is rarely deadly.

The trouble lies in the unique combination of car drivers, whose body language is hard to read, and cyclists, who ignore traffic rules. As a result, neither really knows what the other is up to.

Frontal brake lights

Brake lights clearly signal to drivers behind, 'Look, I’m decelerating,' and they work. Even the third brake light on the back has been shown to improve safety on the road, and I believe that frontal brake lights might do the same.

From my understanding, the United Nations Economic Commission for Europe (UNECE) seems to regulate vehicles in the European Union. As an individual, I cannot simply join one of their meetings and ask “what about frontal brake lights?” But member state representatives can.

If you agree this could improve road safety, consider raising the idea with your local representatives. I'll be reaching out to mine to see if this can gain traction at the UNECE level. If you're more knowledgeable on the topic, feel free to reach out to me on Matrix or get in touch through the Fediverse. I'd like to hear whether this is a good idea before I try to pursue it politically.

Until then, whenever you find yourself hesitating at a crossing, ask yourself—would frontal brake lights have made this easier? If so, let’s make it happen.

#politics #regulation #frontalbrakelights #traffic

