Open Source · Apache 2.0

You write the math.
Aether handles the machine.

The AI-first language that compiles one codebase to CPU, GPU, and quantum — automatically. From research notebook to production deployment, without rewriting a single line.

macOS · Linux · Windows · WASM · IoT
One language. Every platform.
From training neural networks on GPU clusters to deploying on IoT microcontrollers.
Unique to Aether

parallel { } blocks

Run independent tasks simultaneously with a single keyword. No asyncio.gather, no Promise.all, no goroutines. Express dependencies with after(), race tasks with parallel.race, and get automatic cancellation built in.

The runtime auto-parallelizes independent operations. You opt out of parallelism, not in.

parallel.ae
// All three run simultaneously
parallel {
    users = fetch_users()
    orders = fetch_orders()
    stats = fetch_stats()
}
// All done here. Use them.

// With dependencies
parallel {
    data = load_csv("train.csv")
    config = load_config()
    model = after(data, config) {
        train(data, config)
    }
}
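For contrast, here is a sketch of the same fan-out in Python with asyncio.gather, the boilerplate the parallel block replaces. The fetch_* functions are hypothetical stand-ins for the calls above:

```python
import asyncio

# Hypothetical stand-ins for the fetch_* calls in the Aether example.
async def fetch_users():
    await asyncio.sleep(0.01)
    return ["ada", "grace"]

async def fetch_orders():
    await asyncio.sleep(0.01)
    return [42]

async def fetch_stats():
    await asyncio.sleep(0.01)
    return {"total": 1}

async def main():
    # Explicit fan-out/fan-in: what `parallel { }` expresses with one keyword.
    return await asyncio.gather(
        fetch_users(), fetch_orders(), fetch_stats()
    )

users, orders, stats = asyncio.run(main())
print(users, orders, stats)
```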
AI-native

@gpu — GPU without CUDA

Write tensor math in Aether. The compiler generates CUDA PTX, ROCm HIP, or WebGPU shaders automatically. Works on NVIDIA, AMD, and Intel GPUs.

No CUDA C++. No manual memory transfers. No thread block configuration. Just math.

gpu.ae
@gpu
def attention(
    q: Tensor<Float32>,
    k: Tensor<Float32>,
    v: Tensor<Float32>
) -> Tensor<Float32> {
    scores = (q ** k.T) / sqrt(k.dim)
    weights = softmax(scores)
    return weights ** v
}
// Compiles to CUDA, ROCm, or WebGPU
// automatically based on available hardware
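The same attention math, sketched in plain NumPy for reference. No GPU here; this just spells out the formula the kernel above implements (shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scores = (q @ k.T) / sqrt(d_k); weights = softmax(scores); out = weights @ v
    scores = (q @ k.T) / np.sqrt(k.shape[-1])
    weights = softmax(scores)
    return weights @ v

# Illustrative shapes: 4 tokens, head dimension 8.
q = np.random.default_rng(0).normal(size=(4, 8))
k = np.random.default_rng(1).normal(size=(4, 8))
v = np.random.default_rng(2).normal(size=(4, 8))
out = attention(q, k, v)
print(out.shape)  # (4, 8)
```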
Gradual typing

Start like Python. Ship like Rust.

No types needed to start. Add them when you're ready. Enable #strict for production. The same code grows from prototype to deployment without a rewrite.

gradual.ae
// Prototype: no types, no keywords
data = load("data.csv")
model = train(data)
print("Accuracy: {model.score()}")

// Production: add types + strict mode
#strict
let data: DataFrame = load("data.csv")?
let model: Classifier = train(data)
print("Accuracy: {model.score()}")

// Same logic. Same file. Just add types.
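Python follows a similar gradual-typing progression: start untyped, then add annotations that a checker such as mypy can enforce. A small sketch with hypothetical load/train stand-ins:

```python
# Prototype: untyped (load/train are hypothetical stand-ins).
def load(path):
    return [1.0, 2.0, 3.0]

def train(data):
    return sum(data) / len(data)

# Production: same logic, now annotated; mypy can check it statically.
def load_typed(path: str) -> list[float]:
    return [1.0, 2.0, 3.0]

def train_typed(data: list[float]) -> float:
    return sum(data) / len(data)

# Same result either way; only the annotations changed.
assert train(load("data.csv")) == train_typed(load_typed("data.csv"))
```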
Built for the AI era
Every feature designed for the developer who builds, trains, and deploys AI.

Unified Compute Model

Write once. The runtime auto-places operations on CPU, GPU, TPU, or quantum processor. No code changes.

Up to 100x faster than Python

Compiles to native code via Cranelift/LLVM. Predictable ARC memory management. Zero cold-start.

Python bridge

Use PyTorch, HuggingFace, pandas from Aether. Zero-copy tensor sharing. Zero switching cost.
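What zero-copy sharing means in practice: two handles reading the same buffer, no data movement. A NumPy-only sketch using views (cross-framework sharing is typically done via the DLPack protocol):

```python
import numpy as np

# Two "frameworks" viewing one buffer: reshape returns a view, not a copy.
a = np.arange(6, dtype=np.float32)
b = a.reshape(2, 3)

assert b.base is a        # b shares a's memory; nothing was copied
a[0] = 99.0
assert b[0, 0] == 99.0    # a write through one handle is visible via the other
```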

Formal verification

Mathematical contracts prove your code is correct at compile time. Built for safety-critical AI.

Deploy everywhere

One codebase compiles to server, iOS, Android, WASM, and IoT. No model conversion. No wrappers.

Experiment tracking

@experiment logs everything automatically. Compare runs, track metrics, snapshot models. Built in.

Start building with Aether

Open source. Apache 2.0. Community-driven.

Install Aether → GitHub ↗