Primitives

Primitives are the fundamental building blocks of NeuroScript. They wrap PyTorch operations and provide the foundation for building complex neural architectures.

Categories

Basics

  • Linear - Fully-connected linear transformation

Activations

  • ReLU - Rectified Linear Unit
  • GELU - Gaussian Error Linear Unit
  • SiLU - Sigmoid Linear Unit (Swish)
  • Sigmoid - Sigmoid activation
  • Tanh - Hyperbolic tangent activation
  • ELU - Exponential Linear Unit
  • Mish - Mish activation
  • PReLU - Parametric ReLU with learnable slope
  • Softmax - Softmax normalization
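The activation primitives above are standard element-wise functions. As a point of reference, here is a pure-Python sketch of a few of them using their usual formulas (these helpers are illustrative only, not NeuroScript or PyTorch API; GELU is shown in its common tanh approximation):

```python
import math

def sigmoid(x: float) -> float:
    # Sigmoid: 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    # ReLU: max(0, x)
    return max(0.0, x)

def gelu(x: float) -> float:
    # GELU, tanh approximation: 0.5*x*(1 + tanh(sqrt(2/pi)*(x + 0.044715*x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def silu(x: float) -> float:
    # SiLU / Swish: x * sigmoid(x)
    return x * sigmoid(x)

def mish(x: float) -> float:
    # Mish: x * tanh(softplus(x)), with softplus(x) = log(1 + e^x)
    return x * math.tanh(math.log1p(math.exp(x)))

def softmax(xs):
    # Softmax over a 1-D list, subtracting the max for numerical stability
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```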

Attention

Normalization

Embeddings

Convolutions

Pooling

Regularization

Operations

  • MatMul - Batched matrix multiplication
  • Bias - Learnable bias addition
  • Scale - Element-wise scaling
  • Identity - Pass-through (no-op)
  • Einsum - Einstein summation
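The tensor operations above have simple mathematical definitions. A pure-Python sketch of a few (lists standing in for tensors; function names here are illustrative, not the NeuroScript surface syntax):

```python
def matmul(a, b):
    # Matrix multiply of two 2-D lists: (m, k) @ (k, n) -> (m, n)
    m, k = len(a), len(a[0])
    k2, n = len(b), len(b[0])
    assert k == k2, "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def bias(xs, b):
    # Bias addition: element-wise x + b
    return [x + bi for x, bi in zip(xs, b)]

def scale(xs, factor):
    # Element-wise scaling by a single factor
    return [x * factor for x in xs]

def identity(x):
    # Pass-through: returns the input unchanged
    return x
```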

Structural

  • Fork - Duplicate input to two outputs
  • Fork3 - Duplicate input to three outputs
  • Concat - Concatenate tensors along a dimension
  • Add - Element-wise addition
  • Multiply - Element-wise multiplication
  • Flatten - Flatten to 2D
  • Reshape - Reshape tensor dimensions
  • Transpose - Permute tensor dimensions
  • Slice - Extract a portion of a tensor
  • Split - Split tensor into multiple pieces
  • Pad - Pad tensor with zeros
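Structural primitives route and combine values rather than learn parameters. A minimal sketch of the routing semantics on flat lists (hypothetical helpers for illustration; for example, Fork feeding into Add is the shape of a residual connection):

```python
def fork(x):
    # Fork: duplicate one input into two identical outputs
    return x, x

def concat(a, b):
    # Concat: join two lists along their single dimension
    return a + b

def add(a, b):
    # Element-wise addition; shapes must match
    assert len(a) == len(b), "Add requires matching shapes"
    return [x + y for x, y in zip(a, b)]

def multiply(a, b):
    # Element-wise (Hadamard) multiplication; shapes must match
    assert len(a) == len(b), "Multiply requires matching shapes"
    return [x * y for x, y in zip(a, b)]

# Residual pattern: fork the input, transform one branch, add them back
x = [1.0, 2.0, 3.0]
skip, branch = fork(x)
out = add(skip, [v * 0.1 for v in branch])
```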

Usage

All primitives are automatically available in your NeuroScript programs. Simply use them by name:

neuron MyModel:
  graph:
    in -> Linear(512, 256) -> GELU() -> out
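Conceptually, this graph composes a fully-connected transform with a GELU activation. The composition can be sketched in pure Python (dimensions shrunk for readability; the weights, names, and structure here are illustrative, not the code NeuroScript actually generates):

```python
import math
import random

def linear(x, weight, b):
    # Fully-connected layer: y_j = sum_i x_i * W[i][j] + b[j]
    return [sum(xi * wi[j] for xi, wi in zip(x, weight)) + b[j]
            for j in range(len(b))]

def gelu(v):
    # GELU (tanh approximation), applied element-wise
    return [0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
            for x in v]

random.seed(0)
in_dim, out_dim = 4, 3  # small stand-ins for 512 and 256
W = [[random.uniform(-0.1, 0.1) for _ in range(out_dim)] for _ in range(in_dim)]
b = [0.0] * out_dim

x = [1.0, -2.0, 0.5, 3.0]
y = gelu(linear(x, W, b))  # in -> Linear -> GELU -> out
```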

Shape Contracts

Every primitive has a well-defined shape contract that specifies:

  • Input shapes (what tensor dimensions are expected)
  • Output shapes (what dimensions are produced)
  • Parameter constraints (what values are valid)

The NeuroScript compiler validates all shape contracts at compile time, catching dimensional errors before code generation.
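To make the idea concrete, a shape contract for Linear can be modeled as a function from input shape to output shape that rejects invalid inputs. This is a minimal sketch of the kind of check a compiler can run before code generation, not NeuroScript's actual validator:

```python
def check_linear_contract(input_shape, in_features, out_features):
    """Shape contract for Linear: (..., in_features) -> (..., out_features)."""
    if input_shape[-1] != in_features:
        # Dimensional error caught at "compile time", before any code runs
        raise TypeError(
            f"Linear({in_features}, {out_features}) expected last dimension "
            f"{in_features}, got {input_shape[-1]}"
        )
    return input_shape[:-1] + (out_features,)
```

For example, feeding a (batch, 512) tensor into Linear(512, 256) yields (batch, 256), while a (batch, 100) tensor is rejected with a type error rather than a runtime shape mismatch.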