We at SpeakEZ have been working on the Fidelity framework for a while, and it has been a journey to find the right balance between familiar conventions and new capabilities. Nowhere is that more apparent than in the async/task/actor models for concurrent programming.
The Iceberg Model: Familiar on the Surface, Revolutionary Underneath
Think of Fidelity’s concurrency model as an iceberg. Above the waterline, it looks remarkably similar to what you already know:
```fsharp
// This should look pretty familiar to F# developers
let processData = async {
    let! data = fetchDataAsync()
    let transformed = transform data
    do! saveResultAsync transformed
    return transformed
}
```
But beneath the surface? That’s where everything changes. Instead of relying on the CLR’s thread pool and garbage collector, Fidelity compiles your async code directly to native machine code through a sophisticated MLIR and LLVM pipeline.
Core Libraries in the Fidelity Framework
The Fidelity Framework includes several key libraries for concurrency, which we’ll explore in a logical order:
- Alloy: Automatic static resolution of functions and types
- BAREWire: Zero-copy memory protocol for efficient data handling
- Frosty: Lightweight task library that replaces .NET’s Task
- Olivier: Actor model implementation with Erlang-inspired semantics
- Prospero: Scheduling and orchestration within the actor model
- Dabbit: Transformation of F# code to MLIR operations
Alloy: Automatic Static Resolution
Alloy, inspired by the elegant fsil library (which automatically inlines select F# functions on .NET), is an extension library within the Fidelity Framework that provides more general static resolution of functions and types:
```fsharp
// Regular F# function that Alloy automatically optimizes
let processData (items: 'T[]) =
    items |> Array.map transformItem |> Array.sum

// No explicit inline keyword needed.
// Yet this compiles to the same efficient code as if you had written:
let inline processData (items: 'T[]) = ...
```
Alloy analyzes your code during compilation and automatically applies static resolution. This gives you the performance benefits of manually inlined code without littering your codebase with inline keywords. As a building block for other Fidelity libraries, Alloy's static resolution enables efficient compilation across the entire framework.
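For comparison, here is what the manual version looks like in standard F# today, where the inline keyword plus statically resolved type parameters (SRTP) do this resolution by hand — the step Alloy is described as automating:

```fsharp
// Standard F#: 'inline' forces the compiler to resolve the generic '+'
// constraint statically, once per concrete element type.
let inline addAll items = Array.reduce (+) items

// Each call site then compiles to direct, monomorphic arithmetic:
addAll [| 1; 2; 3 |]      // int addition
addAll [| 1.0; 2.5 |]     // float addition, no boxing or virtual dispatch
```

Without inline, the same function would either fail to generalize over `+` or fall back to a single concrete type; with it, the resolution happens entirely at compile time.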
BAREWire: Efficient Memory Protocol
BAREWire provides a high-performance protocol for memory management and cross-process communication:
```fsharp
// Define a BAREWire message schema
let messageSchema =
    BAREWire.schema {
        field "id" BAREWireType.Int64
        field "payload" BAREWireType.String
        field "timestamp" BAREWireType.Double
    }

// Send a message across process boundaries
let sendMessage (target: ProcessId) (message: Message) =
    // Zero-copy where possible
    BAREWire.sendMessage target messageSchema message
```
BAREWire enables zero-copy operations when possible and efficient serialization when necessary, providing optimal performance regardless of process boundaries. This library forms the foundation for efficient memory handling throughout the Fidelity Framework.
Solving the byref Problem
One pernicious issue in .NET’s memory model is “the byref problem”. In the CLR, byref pointers cannot escape the stack frame where they’re created, and you can’t store them in heap-allocated objects. This creates a significant limitation when working with performance-critical code that needs direct memory access.
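To make the restriction concrete, here is a minimal sketch in standard F# of code the compiler rejects today (the function and type names are illustrative):

```fsharp
// A byref parameter may be read and written freely within
// the stack frame that receives it:
let increment (x: byref<int>) =
    x <- x + 1

// But it cannot escape that frame. Both of the following
// are rejected by the F# compiler:
let escapeViaClosure (x: byref<int>) =
    fun () -> x <- x + 1          // error: byrefs cannot be captured by closures

type Holder = { mutable Cell: byref<int> }   // error: byref is not a valid field type
```

In other words, the CLR ties a byref's validity to a single stack frame, so there is no safe way to hand direct memory access to a longer-lived component.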
BAREWire solves this with its type-safe memory model:
```fsharp
// Create a memory-mapped buffer with static lifetime
let buffer = BAREWire.createBuffer<float> 1024

// Get direct access to the buffer with type safety
let span = buffer.AsSpan()

// Work with the span directly without copying
for i = 0 to span.Length - 1 do
    span[i] <- float i * 2.0

// Share the buffer with another process
let sharedBuffer = buffer.Share(targetProcess)

// No need to copy the data - targetProcess gets direct access
BAREWire.sendBuffer targetProcess sharedBuffer
```
The magic is in how BAREWire separates buffer lifetime from access permissions. Unlike the CLR where lifetime and access are tightly coupled through the garbage collector, BAREWire uses a capability-based model:
- Buffer Ownership: Explicit lifetime management without GC intervention
- Buffer Capabilities: Type-safe permissions that can be passed between components
- Memory Protection: Hardware-enforced boundaries that prevent invalid access
This means you can have multiple components access the same memory without copying, while still maintaining memory safety guarantees. For .NET developers accustomed to constant serialization and defensive copying, this represents a significant performance improvement.
Frosty Tasks: Native Task Implementation
Building on the foundation provided by Alloy and BAREWire, Frosty is our task library that looks familiar but compiles to native code. Frosty builds on lessons learned from IcedTasks, an innovative F# library created by Jimmy Byrd, reimplemented without .NET dependencies for native compilation:
```fsharp
// Creating a cold task (doesn't start until someone subscribes)
let coldTask = Frosty.startCold (fun () ->
    calculateSomething())

// Creating a hot task (starts immediately)
let hotTask = Frosty.startHot (fun () ->
    calculateSomething())

// Composing tasks with a computation expression (looks like async!)
let combinedTask = frosty {
    let! result1 = firstTask
    let! result2 = secondTask
    return result1 + result2
}
```
Through the Alloy library described above, all these tasks transform at compile time to optimal machine code for your target platform. No thread pool, no runtime overhead.
Platform Configuration: Just Below the Waterline
For .NET developers accustomed to letting the runtime handle everything, the Fidelity Framework offers a simple compromise - just dip your toes below the waterline with minimal configuration:
```fsharp
// This is typically done once at application startup
let platformConfig =
    PlatformConfig.Default
    |> PlatformConfig.withExecutionModel ExecutionModel.WorkStealing
    |> PlatformConfig.withMemoryStrategy MemoryStrategy.RegionBased

// Apply the configuration
Fidelity.configurePlatform platformConfig
```
This small step “into the waters beneath the semantic surface” gives you control over aspects of the computation graph that are normally hidden deep in the CLR’s implementation. Want cooperative multitasking for embedded systems with limited resources? Or work-stealing schedulers for server applications that need to maximize throughput? Perhaps you need deterministic memory management for real-time systems? All of these become configurable options rather than fixed runtime behaviors.
It’s this minimal configuration - the only visible difference from standard F# development - that unlocks the entire power of the Fidelity Framework. By making just a few explicit choices about execution and memory models, you gain access to capabilities that simply aren’t possible in a traditional runtime environment, all while keeping your application lightweight and the code very close to an idiomatic experience.
The beauty of this is that your core application logic remains unchanged regardless of the target platform. The same business logic can run efficiently on an embedded device or a high-performance server - only the platform configuration changes to match the environment’s capabilities and constraints.
Olivier: Complete Actor System
Olivier is the actor model implementation in Fidelity, providing an Erlang-inspired message-passing concurrency system:
```fsharp
// Define an actor behavior
let counterBehavior = actor {
    // Actor state
    let mutable count = 0

    // Message processing loop
    let rec loop() = async {
        // Receive a message
        let! msg = Actor.receive()
        match msg with
        | Increment ->
            count <- count + 1
            return! loop()
        | Decrement ->
            count <- count - 1
            return! loop()
        | GetCount replyTo ->
            // Send a reply
            replyTo <! CountValue count
            return! loop()
    }
    loop()
}

// Create an actor system
let system = Olivier.createSystem "my-system"

// Spawn an actor in the system
let counterActor = Olivier.spawn system "counter" counterBehavior

// Send a message to the actor
counterActor <! Increment
```
Olivier draws primary inspiration from Erlang’s OTP framework for its message-passing semantics and fault tolerance principles. The Olivier library contains everything needed for actor-based concurrency, including the Prospero library described next.
Prospero: Scheduling Within Olivier
Prospero is the scheduling and orchestration library contained within Olivier, handling the actor lifecycle and distribution:
```fsharp
// Once you have an Olivier system, you can configure its Prospero scheduler
let system = Olivier.createSystem "my-system"

// Configure local scheduling
let schedulerConfig =
    SchedulerConfig.create()
    |> SchedulerConfig.withWorkerCount 4
    |> SchedulerConfig.withPriorities ["critical"; "normal"; "background"]

let configuredSystem = system |> Olivier.configureScheduler schedulerConfig

// Optional: configure clustering capabilities
let clusterConfig =
    ClusterConfig.create()
    |> ClusterConfig.withSeedNodes ["akka.tcp://system@node1:2552"]
    |> ClusterConfig.withRoles ["worker"]

let distributedSystem = configuredSystem |> Olivier.withClustering clusterConfig

// Create a sharded entity region
let userRegion =
    Olivier.Sharding.start distributedSystem "user"
        (fun id -> userActorFactory id)
        (fun msg -> extractEntityId msg)
        (fun id -> extractShardId id)
```
While Prospero offers Akka.NET compatibility for clustering, its primary role is scheduling and orchestration within the Olivier actor model. It manages message delivery, supervision hierarchies, and actor lifecycle events within the system.
Here's a simplified view of how an async function is lowered through MLIR's dialect levels:

- High-Level MLIR: async.execute launches a high-level task
- Mid-Level MLIR: coroutine.create builds the state machine, coroutine.suspend marks suspension points, and coroutine.resume resumes execution
- Low-Level MLIR: memref.alloca allocates coroutine state, scf.if/scf.while express control flow, and memref.load/memref.store manage that state
Each level gets closer to the metal, with more explicit control over memory and execution. By the time we reach the lowest level, we have a representation that can be efficiently compiled to native code for any target platform.
Dabbit: Where F# Meets MLIR
Dabbit handles the critical transformation of F# code into MLIR operations within the compilation pipeline. Its name—inspired by the duck-rabbit illusion—perfectly captures its dual nature. Just as the famous image can be perceived as either a duck or a rabbit depending on your perspective, Dabbit sees F# code simultaneously as high-level functional programming and low-level hardware operations.
What makes this transformation remarkably smooth is the alignment between F#'s abstract syntax tree (AST) and MLIR's structural design. This is no coincidence: MLIR was explicitly designed to support multiple source languages and levels of abstraction, which is why newer languages such as Mojo have been built on it. The result is a compilation infrastructure unusually well suited to functional programming patterns.
```fsharp
// You never need to interact with this directly -
// it's part of the compilation pipeline
let mlirTransform = mlir {
    // F# async/task code gets transformed to MLIR operations,
    // which are then lowered through MLIR dialects and ultimately to machine code
    yield MLIRPrimitives.async_execute
    yield MLIRPrimitives.coroutine_suspend
    yield MLIRPrimitives.control_flow
}
```
Consider how F# represents function composition, pattern matching, and higher-order functions. These structures map naturally to MLIR's region-based operations and SSA (Static Single Assignment) form. For example, an F# pattern match translates cleanly to MLIR's scf.if and cf.switch operations, preserving both the logical structure and the optimization opportunities.
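For instance, a two-case boolean match such as `match isValid with | true -> x | false -> y` could be expressed with the real scf.if operation; the value names below are illustrative:

```mlir
// %isValid (i1), %x, and %y (i32) are assumed to be defined earlier
%result = scf.if %isValid -> (i32) {
  scf.yield %x : i32
} else {
  scf.yield %y : i32
}
```

Each branch is a region yielding a value, so the match stays a single SSA value definition rather than dissolving into unstructured jumps.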
Particularly fascinating is how F#’s computation expressions—the foundation of async workflows—correspond directly to MLIR’s structured control flow. Where .NET’s compiler stops at creating state machines that still require runtime support, Dabbit continues the transformation all the way to hardware-optimized instructions through MLIR’s progressive lowering process.
This alignment between F# and MLIR represents years of parallel evolution in programming language design. While developed separately, both embody similar principles around composition, immutability, and explicit data flow—principles that ultimately lead to more optimizable code. Dabbit simply connects these kindred spirits, enabling F# code to bypass the runtime entirely and speak directly to the hardware in its native tongue.
Conclusion: F# Unleashed
The Fidelity Framework represents nothing less than the liberation of F# from the constraints of the runtime environment. By maintaining the elegant, expressive syntax that F# developers love while revolutionizing what happens beneath the surface, we’ve created something truly transformative.
When you write code in the Fidelity Framework, you’re no longer limited by garbage collection pauses, thread pool configurations, or runtime overhead. Instead, your F# code flows through a seamless pipeline of specialized libraries—Alloy, BAREWire, Frosty, Olivier, and finally Dabbit—emerging as lean, efficient machine code precisely tailored to your target hardware.
This isn’t just a performance upgrade—it’s a fundamental expansion of what’s possible. The same F# code that powers your server applications can now run directly on embedded devices. The actor model concepts you apply in distributed systems can scale down to real-time applications. The memory safety you depend on remains rock-solid, but without the overhead of a runtime or monolithic garbage collector.
We built the Fidelity Framework because we believe F# deserves to run everywhere, at peak efficiency, without compromise. The language’s inherent clarity, safety, and expressiveness shouldn’t be limited to environments that can support a heavy runtime. Now, you have more choices than ever.
Join us in exploring what F# can achieve when truly unleashed. The possibilities are only limited by your imagination (and the device you’re running on). 🧊