# Primitives

Primitives are the fundamental building blocks of NeuroScript. They wrap PyTorch operations and provide the foundation for building complex neural architectures.

## Categories

### Basics
- Linear - Fully-connected linear transformation
### Activations
- ReLU - Rectified Linear Unit
- GELU - Gaussian Error Linear Unit
- SiLU - Sigmoid Linear Unit (Swish)
- Sigmoid - Sigmoid activation
- Tanh - Hyperbolic tangent activation
- ELU - Exponential Linear Unit
- Mish - Mish activation
- PReLU - Parametric ReLU with learnable slope
- Softmax - Softmax normalization
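All of these activations except Softmax are element-wise, so they never change the tensor's shape. As an illustration only (the function names below are NumPy stand-ins, not NeuroScript's actual implementations, which dispatch to PyTorch), a few of them can be sketched as:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    # SiLU / Swish: x * sigmoid(x)
    return x * sigmoid(x)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.array([[-2.0, -0.5, 0.0, 0.5, 2.0]])
for f in (relu, sigmoid, silu, gelu):
    assert f(x).shape == x.shape  # element-wise: shape is preserved
```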
### Attention
- ScaledDotProductAttention - Core attention mechanism (Q, K, V)
- MultiHeadSelfAttention - Multi-head self-attention
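The core mechanism is softmax(QKᵀ/√d_k)V: every query attends to every key, and the resulting weights mix the values. A minimal single-head NumPy sketch (again a stand-in, not NeuroScript's PyTorch-backed implementation) looks like:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, for (seq, d_k) inputs."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (seq, seq) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ v                              # weighted mix of values
```

MultiHeadSelfAttention runs several such heads in parallel on learned projections of the same input and concatenates the results.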
### Normalization
- LayerNorm - Layer normalization
- RMSNorm - Root Mean Square normalization
- BatchNorm - Batch normalization
- GroupNorm - Group normalization
- InstanceNorm - Instance normalization
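These variants differ mainly in which axes they normalize over. For the two most common in transformer stacks, the core math (omitting the learnable gain/bias, and using NumPy purely for illustration) is:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each feature vector to zero mean, unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def rms_norm(x, eps=1e-5):
    # RMSNorm skips the mean subtraction: divide by root-mean-square only
    rms = np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)
    return x / rms
```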
### Embeddings
- Embedding - Token embedding (index → dense vector)
- PositionalEncoding - Fixed sinusoidal positional encoding
- LearnedPositionalEmbedding - Learnable positional embeddings
- RotaryEmbedding - Rotary position embeddings (RoPE)
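The fixed sinusoidal encoding assigns each position a deterministic vector of sines and cosines at geometrically spaced frequencies, so no parameters are learned. A sketch of the standard construction (assuming an even `d_model`; the function name is illustrative, not NeuroScript API):

```python
import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    # pe[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    # pe[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000.0 ** (2 * i / d_model))
    pe = np.empty((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```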
### Convolutions
- Conv1d - 1D convolution (sequences, audio)
- Conv2d - 2D convolution (images)
- Conv3d - 3D convolution (volumetric data, video)
- DepthwiseConv - Depthwise convolution (one filter per input channel)
- SeparableConv - Depthwise + pointwise convolution
- TransposedConv - Transposed convolution (often called deconvolution), used for upsampling
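The appeal of the depthwise + pointwise factorization is its parameter count. The arithmetic (ignoring bias terms) is easy to check directly:

```python
# Parameter counts for a conv mapping c_in -> c_out channels with k x k kernels.
def standard_conv_params(c_in, c_out, k):
    # Every output channel gets its own k x k filter over all input channels
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv to mix channels
    return depthwise + pointwise

# Example: 64 -> 128 channels with 3 x 3 kernels
full = standard_conv_params(64, 128, 3)        # 73,728 parameters
sep = depthwise_separable_params(64, 128, 3)   # 8,768 parameters (~8.4x fewer)
```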
### Pooling
- MaxPool - 2D max pooling
- AvgPool - 2D average pooling
- GlobalAvgPool - Global average pooling
- GlobalMaxPool - Global max pooling
- AdaptiveAvgPool - Adaptive average pooling
- AdaptiveMaxPool - Adaptive max pooling
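Global pooling reduces each channel's entire spatial grid to a single value, which is what lets a classifier head accept any input resolution. In NumPy terms (a sketch of the semantics, not the PyTorch-backed implementation):

```python
import numpy as np

def global_avg_pool(x):
    # (N, C, H, W) -> (N, C): average over all spatial positions
    return x.mean(axis=(2, 3))

def global_max_pool(x):
    # (N, C, H, W) -> (N, C): maximum over all spatial positions
    return x.max(axis=(2, 3))
```

Adaptive pooling generalizes this: instead of fixing the kernel size, you fix the output size and the kernel is derived from the input.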
### Regularization
- Dropout - Random element dropout
- DropConnect - Random connection dropout
- DropPath - Stochastic depth (path dropout)
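All three follow the same pattern: randomly zero something during training (elements, weights, or whole residual branches) and rescale the survivors so the expected activation is unchanged, then become a no-op at inference. The element-wise case, as an inverted-dropout sketch in NumPy:

```python
import numpy as np

def dropout(x, p, rng, training=True):
    # Inverted dropout: zero each element with probability p, then rescale
    # survivors by 1/(1-p) so E[output] == E[input]; identity at eval time.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```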
### Operations
- MatMul - Batched matrix multiplication
- Bias - Learnable bias addition
- Scale - Element-wise scaling
- Identity - Pass-through (no-op)
- Einsum - Einstein summation
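Einsum subsumes several of the others: a subscript string names each axis, repeated indices are contracted, and shared leading indices act as batch dimensions. For instance, batched matrix multiplication (shown here with NumPy's `einsum`; the subscript semantics are the same):

```python
import numpy as np

# 'bij,bjk->bik' contracts the shared index j: a batched matrix multiply.
a = np.ones((2, 3, 4))
b = np.ones((2, 4, 5))
out = np.einsum('bij,bjk->bik', a, b)   # shape (2, 3, 5); every entry sums 4 ones
```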
### Structural
- Fork - Duplicate input to two outputs
- Fork3 - Duplicate input to three outputs
- Concat - Concatenate tensors along a dimension
- Add - Element-wise addition
- Multiply - Element-wise multiplication
- Flatten - Flatten to 2D
- Reshape - Reshape tensor dimensions
- Transpose - Permute tensor dimensions
- Slice - Extract a portion of a tensor
- Split - Split tensor into multiple pieces
- Pad - Pad tensor with zeros
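Structural primitives only rearrange or combine data; they add no parameters. Two common patterns they enable, shown as NumPy one-liners (illustrative only):

```python
import numpy as np

x = np.arange(12.0).reshape(3, 4)

# Split / Concat are inverses of each other along the same axis
a, b = np.split(x, 2, axis=1)               # (3, 4) -> two (3, 2) halves
restored = np.concatenate([a, b], axis=1)   # back to (3, 4)

def residual(x, f):
    # Fork duplicates x into two branches; Add merges them element-wise,
    # which is exactly a residual (skip) connection
    return x + f(x)

y = residual(x, lambda t: 2.0 * t)          # y == 3 * x
```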
## Usage

All primitives are automatically available in your NeuroScript programs. Simply use them by name:

```
neuron MyModel:
  graph:
    in -> Linear(512, 256) -> GELU() -> out
```
## Shape Contracts
Every primitive has a well-defined shape contract that specifies:
- Input shapes (what tensor dimensions are expected)
- Output shapes (what dimensions are produced)
- Parameter constraints (what values are valid)
The NeuroScript compiler validates all shape contracts at compile time, catching dimensional errors before code generation.
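To give a feel for what this kind of checking involves (a hypothetical sketch in Python, not the compiler's actual machinery), each contract can be modeled as a function from input shape to output shape that raises on a mismatch, with the compiler propagating shapes along the graph:

```python
def linear_contract(in_features, out_features):
    # Hypothetical contract for Linear: last dim must equal in_features,
    # and it is replaced by out_features in the output shape.
    def apply(shape):
        if shape[-1] != in_features:
            raise ValueError(
                f"Linear({in_features}, {out_features}) expected last dim "
                f"{in_features}, got {shape[-1]}")
        return shape[:-1] + (out_features,)
    return apply

shape = (32, 512)  # (batch, features)
for contract in (linear_contract(512, 256), linear_contract(256, 64)):
    shape = contract(shape)  # propagate through the graph; shape is now (32, 64)
```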