
fasterai-mojo

Mixed Python/Mojo project for experimenting with Mojo kernels inside a fast.ai / PyTorch training loop, roughly following the "faster-ai-mojo" specification.

Environment

This project uses pixi to manage both Python and Mojo (via the Modular "max-nightly" channel).

  1. Install pixi (see the pixi documentation for installation instructions).

  2. From this directory, create the environment (this installs all Python and Mojo dependencies):

     pixi install

  3. Open a shell in the environment:

     pixi shell

  4. (Optional) Check that Mojo is available:

     pixi run mojo-version

If you have built an operations.mojopkg containing the Mojo dense kernels exported for Python, ensure it is importable as import operations inside the pixi environment (for example by installing it or adjusting PYTHONPATH).

If operations is not available, the code falls back to pure PyTorch implementations, so all experiments still run – just without the Mojo kernels.
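The fallback described above can be sketched as a guarded import. This is a minimal illustration of the pattern, not the project's actual code: HAS_MOJO and backend_name are hypothetical names introduced here for clarity.

```python
try:
    # Mojo dense kernels packaged for Python as operations.mojopkg.
    import operations
    HAS_MOJO = True
except ImportError:
    # Package not importable: experiments run on pure PyTorch instead.
    HAS_MOJO = False

def backend_name():
    # Report which implementation the experiments will use.
    return "mojo" if HAS_MOJO else "pytorch"
```

With this pattern, the rest of the code only branches on HAS_MOJO and never imports operations directly, so a missing package degrades gracefully instead of crashing.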

Layout

  • mojo/dense.mojo – Mojo implementation of dense forward/backward kernels.
  • python/simple_mlp.py – Baseline PyTorch MLP.
  • python/mojo_mlp.py – MLP that calls Mojo kernels via MojoDense when available, otherwise falls back to pure PyTorch math.
  • python/data.py – fast.ai MNIST DataLoaders helper (grayscale 28×28 images).
  • python/experiments/gradient_check.py – Autograd gradient check.
  • python/experiments/performance_compare.py – PyTorch profiler comparison.
  • python/experiments/train_compare.py – Full training + accuracy comparison.

Running experiments

All commands below assume you are inside a pixi shell session; alternatively, run them directly as pixi run <task> without entering the shell.

  • Gradient check (Mojo vs PyTorch autograd):

    pixi run gradient-check
  • Performance comparison (emits mojo_trace.json and simple_trace.json for Chrome tracing):

    pixi run performance
  • Training + accuracy comparison on MNIST (MojoMLP vs SimpleMLP, with a 1% tolerance assertion):

    pixi run train
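The performance task's Chrome traces come from the PyTorch profiler. The following is a minimal sketch of that mechanism, not the project's actual script; the model and batch shape are placeholders.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder stand-in for the real MLP under test.
model = torch.nn.Linear(784, 10)
x = torch.randn(64, 784)

# Record CPU activity for one forward pass.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    model(x)

# Write a trace viewable in Chrome via chrome://tracing.
prof.export_chrome_trace("simple_trace.json")
```

The emitted JSON file can be loaded in Chrome's tracing UI (or Perfetto) to compare where the Mojo and pure-PyTorch runs spend their time.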

At present, the Python side falls back to pure PyTorch math if the Mojo operations module is not available. To connect the real Mojo kernels, build an operations.mojopkg with your Mojo dense kernels and place it at the project root. python/mojo_mlp.py will load this package via max.torch.CustomOpLibrary and call dense_forward / dense_backward from Mojo (see MojoDenseFunction.forward and MojoDenseFunction.backward).
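The forward/backward wiring can be sketched with a custom torch.autograd.Function. This is an illustrative approximation of the MojoDenseFunction idea: the operations.dense_forward call is a hypothetical entry point, and the real project routes through max.torch.CustomOpLibrary rather than a plain import.

```python
import torch

try:
    import operations  # hypothetical Mojo kernel package
except ImportError:
    operations = None

class DenseFunction(torch.autograd.Function):
    """Dense layer y = x @ w.T + b with an explicit backward pass,
    mirroring how a Mojo forward/backward kernel pair plugs into autograd."""

    @staticmethod
    def forward(ctx, x, w, b):
        ctx.save_for_backward(x, w)
        if operations is not None:
            # Assumed Mojo entry point; name is illustrative only.
            return operations.dense_forward(x, w, b)
        # Pure-PyTorch fallback path.
        return x @ w.t() + b

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        # Standard dense-layer gradients.
        grad_x = grad_out @ w          # (N, in)
        grad_w = grad_out.t() @ x      # (out, in)
        grad_b = grad_out.sum(dim=0)   # (out,)
        return grad_x, grad_w, grad_b
```

Because backward is written by hand, torch.autograd.gradcheck can verify it against finite differences – which is exactly what the gradient-check experiment does for the real kernels.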
