Mixed Python/Mojo project for experimenting with Mojo kernels inside a fast.ai / PyTorch training loop, roughly following the "faster-ai-mojo" specification.
This project uses pixi to manage both Python and Mojo (via the Modular "max-nightly" channel).
- Install pixi (see the pixi documentation for installation instructions).
- From this directory, create the environment (this creates the pixi environment and installs all Python and Mojo dependencies):

      pixi install

- Open a shell in the environment:

      pixi shell

- (Optional) Check that Mojo is available:

      pixi run mojo-version
If you have built an `operations.mojopkg` containing the Mojo dense kernels exported for Python, make sure it is importable as `import operations` inside the pixi environment (for example by installing it or adjusting `PYTHONPATH`). If `operations` is not available, the code falls back to pure PyTorch implementations, so all experiments still run, just without the Mojo kernels.
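The fallback described above is typically implemented with a guarded import. A minimal sketch, assuming the `operations` module name from the text (the `matmul_backend` helper here is hypothetical, purely to illustrate the dispatch):

```python
# Guarded import: prefer the Mojo-backed `operations` package when it is
# importable, otherwise flag that the pure-PyTorch path should be used.
try:
    import operations  # Mojo dense kernels exported for Python
except ImportError:
    operations = None

HAVE_MOJO_KERNELS = operations is not None

def matmul_backend():
    """Report which backend a dense layer would dispatch to."""
    return "mojo" if HAVE_MOJO_KERNELS else "pytorch"
```

This keeps the rest of the code identical on both paths; only the dispatch point checks `HAVE_MOJO_KERNELS`.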
- `mojo/dense.mojo` – Mojo implementation of dense forward/backward kernels.
- `python/simple_mlp.py` – Baseline PyTorch MLP.
- `python/mojo_mlp.py` – MLP that calls Mojo kernels via `MojoDense` when available, otherwise falls back to pure PyTorch math.
- `python/data.py` – fast.ai MNIST `DataLoaders` helper (grayscale 28×28 images).
- `python/experiments/gradient_check.py` – Autograd gradient check.
- `python/experiments/performance_compare.py` – `torch.profiler` comparison.
- `python/experiments/train_compare.py` – Full training + accuracy comparison.
All commands below assume you are inside `pixi shell`. You can also run them directly as `pixi run <task>` without entering the shell.
- Gradient check (Mojo vs PyTorch autograd):

      pixi run gradient-check

- Performance comparison (emits `mojo_trace.json` and `simple_trace.json` for Chrome tracing):

      pixi run performance

- Training + accuracy comparison on MNIST (`MojoMLP` vs `SimpleMLP`, with a 1% tolerance assertion):

      pixi run train
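The gradient-check task compares analytic gradients against numerical ones. The core technique can be sketched in plain Python with central differences; this illustrates the idea only and is not the project's actual script:

```python
def numerical_grad(f, xs, eps=1e-6):
    """Central-difference gradient of a scalar function f at point xs."""
    grads = []
    for i in range(len(xs)):
        plus = list(xs); plus[i] += eps
        minus = list(xs); minus[i] -= eps
        grads.append((f(plus) - f(minus)) / (2 * eps))
    return grads

# Example: f(x) = x0^2 + 3*x1 has analytic gradient [2*x0, 3].
f = lambda x: x[0] ** 2 + 3 * x[1]
num = numerical_grad(f, [2.0, 1.0])
ana = [2 * 2.0, 3.0]
assert all(abs(n - a) < 1e-4 for n, a in zip(num, ana))
```

The real experiment does the same comparison with tensors, checking the Mojo kernels' backward pass against PyTorch autograd.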
At present, the Python side falls back to pure PyTorch math if the Mojo `operations` module is not available. To connect the real Mojo kernels, build an `operations.mojopkg` with your Mojo dense kernels and place it at the project root. `python/mojo_mlp.py` will load this package via `max.torch.CustomOpLibrary` and call `dense_forward` / `dense_backward` from Mojo (see `MojoDenseFunction.forward` and `MojoDenseFunction.backward`).
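For reference, the math the dense kernels implement (whether in Mojo or in the PyTorch fallback) is a plain affine layer. A minimal pure-Python sketch of the forward and backward passes, assuming single-row inputs and the signatures shown here (the real kernels operate on batched tensors):

```python
def dense_forward(x, w, b):
    # y[j] = sum_i x[i] * w[i][j] + b[j]
    return [sum(x[i] * w[i][j] for i in range(len(x))) + b[j]
            for j in range(len(b))]

def dense_backward(x, w, grad_y):
    # Backward pass of the affine layer for upstream gradient grad_y:
    #   dL/dx[i]    = sum_j grad_y[j] * w[i][j]
    #   dL/dw[i][j] = x[i] * grad_y[j]
    #   dL/db[j]    = grad_y[j]
    grad_x = [sum(grad_y[j] * w[i][j] for j in range(len(grad_y)))
              for i in range(len(x))]
    grad_w = [[x[i] * gy for gy in grad_y] for i in range(len(x))]
    grad_b = list(grad_y)
    return grad_x, grad_w, grad_b
```

Wrapping these two functions in a `torch.autograd.Function` subclass (as `MojoDenseFunction` does) is what lets autograd route gradients through the custom kernels.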