OPEN SOURCE — MIT LICENSE

SYNAPSE
————

SYNTHETIC NEURAL API & PIPELINE SCRIPTING ENGINE

One language for every stage of the AI pipeline. Faster than Python. Safer than C. More expressive than anything in between.

READ THE SPEC → SEE THE SYNTAX
10–40×
FASTER THAN PYTHON
ZERO
MEMORY UNSAFETY
ONE
UNIFIED SYNTAX
MIT
OPEN SOURCE
// SYNTAX

Replace four tools
with one language.

Today's ML pipelines stitch together Python, YAML, JSON, and shell scripts. SYNAPSE replaces all of it.

● SYNAPSE
-- Schema, pipeline, and logic in one file

schema ImageInput {
  path:   str,
  label:  str?,
  width:  int,
  height: int
}

pipeline classify_images {
  source: ImageInput[]
  steps:  [preprocess -> embed -> infer]
  output: ClassResult[]
}

fn preprocess(img: ImageInput) -> Tensor {
  load_image(img.path)
    |> resize(224, 224)
    |> normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
}

async fn infer(t: Tensor) -> ClassResult {
  let model = load_model("resnet50.synmodel")
  model.forward(t)
}
● TODAY'S EQUIVALENT
# schema.json + config.yaml + pipeline.py + run.sh

# --- schema.json ---
{ "type": "object",
  "properties": {
    "path":   {"type": "string"},
    "label":  {"type": "string"},
    "width":  {"type": "integer"},
    "height": {"type": "integer"} },
  "required": ["path", "width", "height"] }

# --- pipeline.py ---
from torchvision import transforms
from PIL import Image
import torch, yaml, json, subprocess

def preprocess(img_path):
    t = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])
    return t(Image.open(img_path))

async def infer(tensor):
    model = torch.load("resnet50.pt")
    model.eval()
    with torch.no_grad():
        return model(tensor.unsqueeze(0))
// FEATURES

Built for the AI era.
Not retrofitted for it.

⚡
Native Speed
Compiles to native machine code via LLVM. Interpreted mode for prototyping, compiled mode for production. 10–40× faster than equivalent Python.
LLVM BACKEND
🛡
Memory Safety
Static typing with full inference. Null safety enforced at compile time. No buffer overflows. No silent failures. Errors are values, not exceptions.
COMPILE-TIME SAFETY
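
A hypothetical sketch of errors-as-values, extrapolated from the syntax shown above; the `Result` type, `Ok`/`Err` variants, and `match` form are illustrative assumptions, not confirmed SYNAPSE grammar.

```
-- Hypothetical: failures are returned as values and handled explicitly
fn read_image(path: str) -> Result<Tensor, IoError> {
  load_image(path) |> resize(224, 224)
}

match read_image("cat.png") {
  Ok(t)  => infer(t),
  Err(e) => log(e)   -- the compiler rejects code that ignores the Err case
}
```

Because the error is part of the return type, "forgetting" to handle it is a compile error rather than a runtime surprise.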
🧠
AI-Native Primitives
Built-in tensor and matrix types. Native async/await for model inference. Pipeline operators for composing model chains without boilerplate.
TENSOR<T, DIMS>
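
A hypothetical sketch of shaped tensor types and pipeline composition, extrapolated from the `TENSOR<T, DIMS>` tag and the `|>` operator above; the dimension syntax and the `conv_backbone`/`pool` helpers are illustrative assumptions.

```
-- Hypothetical: tensor shapes tracked in the type, checked at compile time
fn embed(t: Tensor<f32, [3, 224, 224]>) -> Tensor<f32, [512]> {
  t |> conv_backbone() |> pool()
}

async fn classify(img: ImageInput) -> ClassResult {
  preprocess(img) |> embed |> await infer
}
```

A shape mismatch between pipeline stages would then surface as a type error before the model ever runs.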
🔗
Unified Syntax
The same file defines schemas, logic, pipelines, and config. No format translation. No context switching between YAML, JSON, Python, and shell.
ONE FORMAT
🔄
Python Interop
Import any Python, C, or Rust library natively. Migrate incrementally. You don't have to abandon your existing ecosystem to adopt SYNAPSE.
FFI SUPPORT
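
A hypothetical sketch of what incremental Python interop could look like; the `import py` form and `py.Object` type are illustrative guesses, not confirmed SYNAPSE syntax.

```
-- Hypothetical: call into an existing Python ecosystem during migration
import py "numpy" as np
import py "torchvision.transforms" as T

fn to_array(t: Tensor) -> py.Object {
  np.asarray(t)   -- cross the FFI boundary only where needed
}
```

The point of the card above is the migration path: keep existing NumPy/PyTorch code, port one pipeline stage at a time.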
🌐
LLM-Legible Grammar
Deterministic, unambiguous grammar designed for human and AI collaboration. The first language built knowing that LLMs will read and write it.
AI-FIRST DESIGN
// ROADMAP

From spec to stable.

v0.1
Formal Grammar + Reference Parser
Complete language specification (EBNF), lexer, and recursive-descent parser in Python. The foundation everything else builds on.
COMPLETE
v0.2
Type System + Inference Engine
Full static type system with Hindley-Milner inference. Schema validation. Nullable types. Tensor shape tracking.
COMPLETE
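
A hypothetical sketch of the inference and null-safety behavior this milestone describes, extrapolated from `label: str?` in the schema above; the `??` default operator is an assumed form.

```
-- Hypothetical: types are inferred, nullables must be handled
let img = ImageInput { path: "cat.png", width: 224, height: 224 }

-- img.label is str?, so it cannot be used where a plain str is expected;
-- a default (or an explicit null check) is required to unwrap it:
let name = img.label ?? "unlabeled"
```
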
v0.3
LLVM Compilation Backend
Native code generation via LLVM IR. Interpreted mode retained. First benchmarks against Python and Rust.
COMPLETE
v0.4
Standard Library
syn::io, syn::tensor, syn::http, syn::async, syn::pipeline, syn::model, syn::data. The full runtime surface.
COMPLETE
v1.0
Stable Release + RFC Process
Stable language semantics. `syn` package manager CLI. Community governance via GitHub Discussions and formal RFC workflow.
COMPLETE