This bundle contains the code needed to export an ExecuTorch edge program and build a scenario for models that use grid_sample.
- Tested with Python 3.10 and 3.12 on Linux and Windows.
- Your own model file. The shipped `sample_model.py` works out of the box with the included `sample_inputs/`, and you can adapt it to your own model or copy the relevant snippets into an existing model script.
- A Python environment with the package versions from `requirements.txt`.
  - Please note that a development version of `executorch` is needed and several 'monkey-patches' are applied (see `arm_backend_monkey_patch.py`).
- `model-converter` and `flatc` on `PATH`. In a typical Python environment they are installed under the environment's `site-packages/model_converter/binaries/bin` and `site-packages/bin` directories.
- `scenario-runner` on `PATH`.
  - A recent version (newer than `d78c059`) may be needed depending on your model and the image formats that are used in the exported scenario.
  - The CMake build flag `SCENARIO_RUNNER_EXPERIMENTAL_IMAGE_FORMAT_SUPPORT=ON` may be needed depending on your model and the image formats that are used in the exported scenario.
- If the target device does not have native Data Graph Arm extension support, the runtime also needs the appropriate emulation layers enabled. See https://github.com/arm/ai-ml-emulation-layer-for-vulkan.
Bash example:

```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install -r requirements.txt
SITE_PACKAGES="$(python -c 'import site; print(site.getsitepackages()[0])')"
export PATH="$SITE_PACKAGES/model_converter/binaries/bin:$SITE_PACKAGES/bin:$PATH"
python sample_model.py
scenario-runner --scenario artifacts/scenario/scenario.json --output artifacts/scenario --log-level debug
```

The script resolves its sample inputs and artifact output relative to its own location, so it can be run from any current working directory. It exports and builds `artifacts/scenario/scenario.json`, then prints a sample `scenario-runner` command line you can run manually. The bundled example applies a 3x3 blur, rotates with `grid_sample`, then applies a 3x3 sharpen filter. It uses INT8 image IO and INT16 symmetric grid coordinates.
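The blur / rotate / sharpen pipeline above can be sketched in plain PyTorch. This is a hypothetical reconstruction for orientation only: the class name, kernel values, and rotation angle are illustrative, and the INT8/INT16 quantization performed by the shipped `sample_model.py` is omitted here.

```python
import math
import torch
import torch.nn.functional as F

class BlurRotateSharpen(torch.nn.Module):
    """Illustrative sketch: 3x3 blur -> grid_sample rotation -> 3x3 sharpen.
    Kernel values and the rotation angle are examples, not the shipped
    model's; quantization is omitted."""

    def __init__(self, angle_deg=30.0, channels=3):
        super().__init__()
        self.channels = channels
        blur = torch.full((3, 3), 1.0 / 9.0)         # 3x3 box blur
        sharpen = torch.tensor([[0., -1., 0.],
                                [-1., 5., -1.],
                                [0., -1., 0.]])      # 3x3 sharpen
        # Depthwise kernels: one 3x3 filter per channel.
        self.register_buffer("blur_k", blur.expand(channels, 1, 3, 3).clone())
        self.register_buffer("sharpen_k", sharpen.expand(channels, 1, 3, 3).clone())
        c = math.cos(math.radians(angle_deg))
        s = math.sin(math.radians(angle_deg))
        # 2x3 affine matrix for a pure rotation in normalized coordinates.
        self.register_buffer("theta", torch.tensor([[c, -s, 0.],
                                                    [s,  c, 0.]]))

    def forward(self, x):                            # x: (N, C, H, W)
        x = F.conv2d(x, self.blur_k, padding=1, groups=self.channels)
        grid = F.affine_grid(self.theta.expand(x.size(0), 2, 3),
                             list(x.shape), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)
        return F.conv2d(x, self.sharpen_k, padding=1, groups=self.channels)

model = BlurRotateSharpen().eval()
out = model(torch.rand(1, 3, 64, 64))
```

All three stages preserve the spatial size, so the output has the same `(N, C, H, W)` shape as the input.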
When building coordinate tensors for `grid_sample`, prefer `torch.linspace(...)` over `torch.arange(...)`. In practice, `arange` often introduces int64 ops into the exported graph, and those do not lower cleanly through TOSA.
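As a minimal sketch of that advice, the following builds a `grid_sample` coordinate grid with `torch.linspace` (the tensor sizes are illustrative, not tied to the bundled model):

```python
import torch
import torch.nn.functional as F

H, W = 8, 8
# linspace stays float32 end to end; arange would produce int64 indices
# that then need casting, and int64 ops lower poorly through TOSA.
ys = torch.linspace(-1.0, 1.0, H)        # normalized y coordinates
xs = torch.linspace(-1.0, 1.0, W)        # normalized x coordinates
gy, gx = torch.meshgrid(ys, xs, indexing="ij")
grid = torch.stack((gx, gy), dim=-1).unsqueeze(0)   # (1, H, W, 2), float32

x = torch.rand(1, 1, H, W)
# With align_corners=True this identity grid reproduces the input.
out = F.grid_sample(x, grid, align_corners=True)
```

Because `linspace(-1, 1, N)` lands exactly on the pixel positions when `align_corners=True`, the identity grid gives back the input, which makes it an easy sanity check for custom grids.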
`export_executorch.py` captures the model, applies the Arm/ExecuTorch export flow, and partitions the graph into regular NN segments and `grid_sample` segments.
- Regular NN segments are lowered through the Arm VGF path and emitted as `section_*.vgf` binaries.
- `grid_sample` segments are lowered by the custom backend into compute-shader work rather than VGF sections.

`export_scenario.py` turns those lowered pieces into one `scenario.json` that references all VGF sections, shader work, tensors, and intermediate resources together.

`sample_model.py` supplies the sample inputs, runs export, and leaves everything under `artifacts/` so the scenario can be executed later with `scenario-runner`.