Jupyter Notebooks
Jupyter notebooks provide a web-based environment for creating and sharing Mojo computational documents. They combine code, results, and explanation so readers explore what you built, how you built it, and why it matters.
You can run Mojo notebooks locally or in GPU-backed Google Colab environments to accelerate workloads. For teaching, learning, and exploration, notebooks provide a hands-on, iterative workflow.
Choose an environment
This overview assumes you'll work with Mojo notebooks in one of two ways:
- Google Colab: fast setup and optional GPU acceleration; ideal for quick experiments and for learning GPU programming when you don't have a compatible GPU-enabled computer on hand.
- Local JupyterLab: a private environment with full control over code, data, and dependencies.
Both options use the same notebook model and the same Mojo cell magic.
Using Mojo on Google Colab
1. Create a notebook
Visit Google Colab and create a new notebook.
2. Install Mojo
For the nightly release:
```
!pip install mojo --index-url https://dl.modular.com/public/nightly/python/simple/
```

For the stable release:
```
!pip install mojo
```

Wait for the "Successfully installed" message.
3. Enable Mojo
In the first cell, run:
```
import mojo.notebook
```

This adds the %%mojo cell magic, so you can compile and run Mojo code.
Your Colab notebook is now ready to run Mojo programs.
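As a quick smoke test (a minimal sketch; any complete Mojo program with a main() function will do), you can run a cell like:
```
%%mojo
def main():
    print("Mojo is ready")
```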
Using Mojo in local Jupyter notebooks
Local notebooks use pixi to manage an environment with
Jupyter and Mojo.
1. Create a project
```
pixi init notebooks \
    -c https://conda.modular.com/max-nightly/ \
    -c conda-forge
cd notebooks
pixi shell
```

This creates a project directory and enters the Pixi shell.
2. Install required tools
```
pixi add mojo jupyterlab ipykernel
```

This installs:
- Mojo
- JupyterLab
- The Python kernel required for notebook execution
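If you want to confirm the installation before launching JupyterLab, you can check the compiler version from inside the Pixi shell (optional):
```
mojo --version
```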
3. Start JupyterLab
```
jupyter lab
```

JupyterLab opens in your browser.
4. Create a Python-backed notebook
In your web browser:
- Select File > New > Notebook.
- Choose the Python kernel.
5. Enable Mojo support
In the first cell, run:
```
import mojo.notebook
```

This registers the %%mojo magic command.
Your local environment is now ready for interactive Mojo development.
Writing and running Mojo code
Mojo code runs inside notebook cells marked with the %%mojo cell magic.
Each Mojo cell must contain a complete program, including a main() function.
Example: Hello Mojo
```
%%mojo
def main():
    print("Hello Mojo")
```
Output:
```
Hello Mojo
```

Example: Parameterized compilation
```
%%mojo
# Function parameterized on a compile-time count
fn repeat[count: Int](msg: String):
    @parameter
    for i in range(count):
        print(msg)

# Calls repeat with the compile-time argument 3
fn threehello():
    repeat[3]("Hello 🔥!")

def main():
    threehello()
```
Output:
```
Hello 🔥!
Hello 🔥!
Hello 🔥!
```

Using Mojo with GPU support
Google Colab offers GPU-backed runtimes so you can run Mojo GPU examples even without local hardware. T4, L4, and A100 accelerators are supported. Before running GPU code, select Runtime > Change runtime type and choose a GPU hardware accelerator.
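To confirm that the selected runtime actually exposes a GPU, a check along these lines can help (a minimal sketch using the standard library's has_accelerator function):
```
%%mojo
from sys import has_accelerator

def main():
    if has_accelerator():
        print("GPU detected")
    else:
        print("No GPU found; check the runtime type")
```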
Example: GPU Hello World
```
%%mojo
from gpu.host import DeviceContext

fn kernel():
    print("Hello from the GPU")

def main():
    # Launch GPU kernel
    with DeviceContext() as ctx:
        ctx.enqueue_function_checked[kernel, kernel](grid_dim=1, block_dim=1)
        ctx.synchronize()
```
Output:
```
Hello from the GPU
```

Example: GPU vector addition
This example runs elementwise vector addition on the GPU. Each GPU thread updates one element.
```
%%mojo
# ===----------------------------------------------------------------------=== #
# Copyright (c) 2025, Modular Inc. All rights reserved.
#
# Licensed under the Apache License v2.0 with LLVM Exceptions:
# https://llvm.org/LICENSE.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===----------------------------------------------------------------------=== #
from gpu import thread_idx
from gpu.host import DeviceContext
from layout import Layout, LayoutTensor
from sys import has_nvidia_gpu_accelerator, has_amd_gpu_accelerator
comptime VECTOR_WIDTH = 10
comptime layout = Layout.row_major(VECTOR_WIDTH)
comptime active_dtype = DType.uint8
comptime Tensor = LayoutTensor[active_dtype, layout, MutAnyOrigin]
# Elementwise vector addition on GPU threads
fn vector_addition(left: Tensor, right: Tensor, output: Tensor):
    var idx = thread_idx.x
    output[idx] = left[idx] + right[idx]
def main():
    # Ensure a supported GPU (NVIDIA or AMD) is available
    constrained[
        has_nvidia_gpu_accelerator() or has_amd_gpu_accelerator(),
        "This example requires a supported GPU",
    ]()

    # Create GPU device context
    var ctx = DeviceContext()

    # Allocate buffers and tensors for left and right operands, and output
    var left_buffer = ctx.enqueue_create_buffer[active_dtype](VECTOR_WIDTH)
    var left_tensor = Tensor(left_buffer)
    var right_buffer = ctx.enqueue_create_buffer[active_dtype](VECTOR_WIDTH)
    var right_tensor = Tensor(right_buffer)
    var output_buffer = ctx.enqueue_create_buffer[active_dtype](VECTOR_WIDTH)
    var output_tensor = Tensor(output_buffer)

    # Initialize input buffers with sample data
    var message_bytes = List[UInt8](
        71, 100, 107, 107, 110, 31, 76, 110, 105, 110
    )
    with left_buffer.map_to_host() as mapped_buffer:
        var mapped_tensor = Tensor(mapped_buffer)
        for idx in range(VECTOR_WIDTH):
            mapped_tensor[idx] = message_bytes[idx]
    _ = right_buffer.enqueue_fill(1)

    # Launch GPU kernel
    ctx.enqueue_function_checked[vector_addition, vector_addition](
        left_tensor,
        right_tensor,
        output_tensor,
        grid_dim=1,
        block_dim=VECTOR_WIDTH,
    )
    ctx.synchronize()

    # Read results back and print as ASCII
    with output_buffer.map_to_host() as mapped_buffer:
        var mapped_tensor = Tensor(mapped_buffer)
        for idx in range(VECTOR_WIDTH):
            print(chr(Int(mapped_tensor[idx])), end="")
        print()
```
Output:
```
Hello Mojo
```
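A note on the launch configuration: grid_dim=1, block_dim=VECTOR_WIDTH launches a single thread block with one thread per vector element, which is why the kernel can use thread_idx.x directly as the element index. The map_to_host() blocks give host code a view of the GPU buffers, first to initialize the inputs and then to read back the results.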