WAVE Specification v0.3

Version: 0.3 (Working Draft, Revised)
Authors: Ojima Abraham, Onyinye Okoli
Date: March 29, 2026
Status: Working Draft (Revised)

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this document are to be interpreted as described in RFC 2119.

This specification defines WAVE (Wide Architecture Virtual Encoding), a vendor-neutral instruction set architecture for general-purpose GPU computation. It specifies an abstract execution model, a register model, a memory model, structured control flow semantics, an instruction set, and a capability query system.

The specification follows the thin abstraction principle: it defines what a compliant implementation MUST be able to do, not how it must do it. Implementations MAY use any microarchitectural technique to achieve compliance.

This specification covers general-purpose compute workloads. Graphics pipeline operations (rasterization, tessellation, pixel export, ray tracing) are out of scope and MAY be addressed by future extensions.

  1. Thin abstraction. Every requirement traces to a hardware-invariant primitive observed across all four major GPU vendors. No requirement is imposed for software convenience alone.
  2. Queryable parameters. Values that differ across implementations (wave width, register count, scratchpad size) are exposed as queryable constants, not fixed in the specification.
  3. Structured divergence. The specification defines control flow semantics but not divergence mechanisms. Implementations are free to use any technique (execution masks, predication, hardware stacks, per-thread program counters) to achieve the specified behavior.
  4. Mandatory minimums. Every queryable parameter has a minimum value. A compliant implementation MUST meet or exceed all minimums.

This specification is complementary to existing standards. SPIR-V MAY be used as a distribution format for programs targeting this ISA. OpenCL and Vulkan MAY serve as host APIs for dispatching workloads. The distinction is that this specification defines the hardware execution model, while existing standards define host-device interaction.

This version incorporates changes discovered during implementation and hardware verification of vendor backends. Key changes:

  1. Modifier field widened from 3 to 4 bits (Section 8.2). The v0.2 encoding used a 3-bit modifier field, limiting sub-opcode values to 0-7. The toolchain required values up to 13 for wave reduce operations. The modifier field is now 4 bits (values 0-15), and the flags field is reduced from 3 to 2 bits.
  2. WAVE_REDUCE_FLAG eliminated. Wave reduce operation types (add, min, max, and_bits, or_bits, xor_bits) are now encoded directly in the modifier field (values 8-13) rather than using a separate flag bit.
  3. NON_RETURNING_ATOMIC_FLAG removed. Non-returning atomic variants are now detected by the decoder using the rd==0 convention (destination register zero implies no return value).
  4. Intra-wave shuffle added as 11th mandatory primitive (Section 2.2, Section 6.11). All four vendors implement shuffle in hardware. Benchmarks on NVIDIA T4 showed a 37.5% performance gap without it, confirming it belongs in the mandatory set.
  5. Three-vendor hardware verification documented (Section 9). The same WAVE program now produces identical results on Apple M4 Pro, NVIDIA T4, and AMD MI300X.

These changes were introduced in v0.2:

  1. Register encoding widened from 5-bit to 8-bit (Section 8.2).
  2. Minimum divergence stack depth specified (Section 5.4, Section 7.1).
  3. Predicate negation semantics clarified (Section 5.1).
  4. Per-Wave control flow state requirement added (Section 5.5).
  5. Full opcode table provided (Appendix A).
  6. Conformance test suite referenced (Section 9.4).

A compliant processor consists of one or more Cores. Each Core is an independent compute unit capable of executing multiple Workgroups concurrently. Cores are not addressable by software. The hardware assigns Workgroups to Cores, and the programmer MUST NOT assume any particular mapping.

The execution model defines five levels: four mandatory (Thread, Wave, Workgroup, Grid) and one optional (Cluster):

Level 0: Thread. The smallest unit of execution. A Thread has a private register file, a scalar program counter, and a position within the hierarchy identified by hardware-populated identity values. A Thread executes a sequential stream of instructions.

Level 1: Wave. A group of exactly W Threads that execute a single instruction simultaneously, where W is a hardware constant queryable at compile time (see Section 7). All Threads in a Wave share a program counter for the purpose of instruction fetch. When Threads in a Wave disagree on a branch condition, the implementation MUST ensure that both paths execute with inactive Threads producing no architectural side effects. The mechanism by which this is achieved is not specified.

A Wave is the fundamental scheduling unit. The hardware scheduler operates on Waves, not individual Threads. Each Wave MUST maintain independent control flow state (see Section 5.5).

The specification defines eleven mandatory wave-level primitives: shuffle (by lane), shuffle-up, shuffle-down, shuffle-xor, broadcast, ballot, any, all, prefix-sum, reduce, and intra-wave shuffle. All four major vendors implement these in hardware.

Level 2: Workgroup. A group of one or more Waves, containing up to MAX_WORKGROUP_SIZE Threads. All Waves in a Workgroup execute on the same Core, share access to Local Memory, and may synchronize via Barriers.

The number of Waves per Workgroup is ceil(workgroup_thread_count / W).

Workgroup dimensions are specified at dispatch time as a 3-dimensional size (x, y, z) where x * y * z <= MAX_WORKGROUP_SIZE.
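
The two dispatch-time relationships above can be sketched in a few lines of Python (a hypothetical host-side helper, not part of the specification; `wave_width` and `max_workgroup_size` would come from the query interface of Section 7):

```python
import math

def waves_per_workgroup(workgroup_size, wave_width):
    """Number of Waves covering a 3D workgroup: ceil(thread_count / W)."""
    x, y, z = workgroup_size
    return math.ceil((x * y * z) / wave_width)

def validate_workgroup(workgroup_size, max_workgroup_size):
    """A dispatch must be rejected when x * y * z exceeds MAX_WORKGROUP_SIZE."""
    x, y, z = workgroup_size
    return x * y * z <= max_workgroup_size

# A (16, 4, 1) workgroup on 32-wide Waves needs ceil(64/32) = 2 Waves.
print(waves_per_workgroup((16, 4, 1), 32))   # 2
print(validate_workgroup((16, 16, 2), 256))  # False: 512 > 256
```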

Level 3: Grid. The complete dispatch of Workgroups. A Grid is specified as a 3-dimensional count of Workgroups. Workgroups within a Grid MAY execute in any order, on any Core, at any time. No synchronization is available between Workgroups within a single Grid dispatch.

Level 2.5 (Optional): Cluster. A group of Workgroups guaranteed to execute concurrently on adjacent Cores with access to each other’s Local Memory. The Cluster size is queryable via CLUSTER_SIZE. If the implementation does not support Clusters, CLUSTER_SIZE is 1 and Cluster-scope operations behave identically to Workgroup-scope operations.

Every Thread has the following hardware-populated, read-only values available as special registers:

| Identifier | Type | Description |
| --- | --- | --- |
| thread_id.{x,y,z} | uint32 | Thread position within Workgroup (3D) |
| wave_id | uint32 | Wave index within Workgroup |
| lane_id | uint32 | Thread position within Wave (0 to W-1) |
| workgroup_id.{x,y,z} | uint32 | Workgroup position within Grid (3D) |
| workgroup_size.{x,y,z} | uint32 | Workgroup dimensions |
| grid_size.{x,y,z} | uint32 | Grid dimensions (in Workgroups) |
| num_waves | uint32 | Number of Waves in this Workgroup |

Each Core provides the following resources:

Register File. A fixed-size on-chip storage of F bytes, partitioned among all simultaneously resident Threads. Each Thread receives R registers (declared at compile time). The maximum number of simultaneously resident Waves is bounded by:

max_resident_waves = floor(F / (R * W * 4))

where 4 is the register width in bytes (32 bits). This is the occupancy equation. Implementations MUST support at least MIN_MAX_REGISTERS registers per Thread.
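
The occupancy equation can be computed directly. A minimal sketch (illustrative only; `F`, `R`, and `W` are the spec's symbols for register file size, registers per Thread, and wave width):

```python
def max_resident_waves(F, R, W, reg_bytes=4):
    """Occupancy equation: floor(F / (R * W * 4)) with 32-bit registers."""
    return F // (R * W * reg_bytes)

# A 16 KiB register file with 32 registers/Thread and 32-wide Waves:
print(max_resident_waves(16384, 32, 32))  # 4
# Doubling the per-Thread register count halves occupancy:
print(max_resident_waves(16384, 64, 32))  # 2
```

This is why compilers SHOULD minimize R: each additional register per Thread reduces the number of Waves the scheduler can hide latency with.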

Local Memory. A fixed-size on-chip scratchpad of S bytes, shared among all Waves in the same Workgroup. Local Memory is explicitly addressed via load and store instructions. There is no automatic caching or data placement. Local Memory contents are undefined at Workgroup start and are not preserved across Workgroup boundaries. Implementations MUST provide at least MIN_LOCAL_MEMORY_SIZE bytes.

Hardware Scheduler. Selects a ready Wave for execution each cycle. When a Wave stalls on a memory access, barrier, or other long-latency operation, the scheduler MUST be able to select another resident Wave without software-visible overhead. The scheduling policy is implementation-defined.

  1. All Threads in a Wave execute the same instruction in the same cycle, or appear to from the programmer’s perspective.
  2. Within a Wave, instruction execution is in program order.
  3. Between Waves in the same Workgroup, no execution order is guaranteed unless explicitly synchronized via Barriers or memory ordering operations.
  4. Between Workgroups, no execution order is guaranteed under any circumstances.
  5. A Workgroup that uses Local Memory or Barriers MUST have all its Waves resident on a single Core simultaneously.
  6. The implementation MUST guarantee forward progress for at least one Wave per Core at all times (no deadlock from scheduling).
  7. A Wave that has been dispatched MUST eventually complete, assuming the program terminates (no starvation).
  8. Execution MUST be deterministic: given identical inputs, program binary, and dispatch configuration, the same program MUST produce identical results across runs on the same implementation. (Non-determinism across different implementations is permitted for implementation-defined behaviors.)

A dispatch operation launches a Grid of Workgroups. The dispatch specifies:

  • The kernel (program entry point)
  • Grid dimensions
  • Workgroup dimensions
  • Register count per Thread (R)
  • Local Memory size required (in bytes)
  • Kernel arguments (buffer addresses, constants)

The implementation MUST reject a dispatch if the requested resources exceed Core capacity.

Each Thread has access to R general-purpose registers, where R is declared at compile time and MUST NOT exceed MAX_REGISTERS. Registers are 32 bits wide. They are untyped at the hardware level; the instruction determines how the register contents are interpreted (integer, float, bitfield). Registers are named r0 through r{R-1}.

Each 32-bit register MAY be accessed as two 16-bit halves: r{N}.lo (bits [15:0]) and r{N}.hi (bits [31:16]). This enables efficient F16 and BF16 operations without consuming additional registers.

Two consecutive registers MAY be used as a 64-bit value: r{N}:r{N+1} with r{N} as the low 32 bits. This is used for F64 operations (where supported) and 64-bit integer operations.
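
The half-register and register-pair rules can be modeled in plain integer arithmetic (an illustrative sketch, not toolchain code):

```python
MASK32 = 0xFFFFFFFF

def pair_u64(r_lo, r_hi):
    """r{N}:r{N+1} pairing: r{N} supplies the low 32 bits."""
    return ((r_hi & MASK32) << 32) | (r_lo & MASK32)

def halves(r):
    """r{N}.lo is bits [15:0], r{N}.hi is bits [31:16]."""
    return r & 0xFFFF, (r >> 16) & 0xFFFF

print(hex(pair_u64(0xDEADBEEF, 0x1)))        # 0x1deadbeef
print([hex(h) for h in halves(0x12345678)])  # ['0x5678', '0x1234']
```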

Hardware-populated read-only registers for thread identity:

  • sr_thread_id_x, sr_thread_id_y, sr_thread_id_z
  • sr_wave_id, sr_lane_id
  • sr_workgroup_id_x, sr_workgroup_id_y, sr_workgroup_id_z
  • sr_workgroup_size_x, sr_workgroup_size_y, sr_workgroup_size_z
  • sr_grid_size_x, sr_grid_size_y, sr_grid_size_z
  • sr_wave_width, sr_num_waves

The implementation MUST provide at least 4 predicate registers (p0 through p3), each 1 bit wide per Thread. Predicates are set by comparison instructions and consumed by conditional branch instructions.

The register count R is declared per-kernel at compile time. The implementation allocates R registers per Thread for all Threads in all resident Waves. The occupancy equation determines how many Waves can be resident simultaneously. Compilers SHOULD minimize R to maximize occupancy.

Three mandatory memory spaces:

  • Register Memory: Per-Thread, on-chip, single-cycle, not addressable
  • Local Memory: Per-Workgroup, on-chip, explicitly addressed, size S
  • Device Memory: Global, off-chip or unified, cached by hardware-managed caches, persistent across dispatches

Local Memory is organized as a flat byte-addressable array of S bytes. The base address is 0. Addresses outside [0, S) produce undefined behavior.

Local Memory is banked. When multiple Threads in a Wave access the same bank in the same cycle, a bank conflict MAY occur. Accesses to the same address within a bank (broadcast) MUST NOT cause a conflict.

Supports access widths of 8, 16, 32, and 64 bits.
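
Since the bank count and bank width are implementation-defined, the following sketch assumes illustrative values (32 banks, 4 bytes wide) purely to show how conflict degree is counted, including the broadcast exemption:

```python
from collections import defaultdict

def bank_conflict_degree(addresses, num_banks=32, bank_width=4):
    """Worst-case serialization factor for one Wave's Local Memory access.
    Distinct addresses in the same bank conflict; identical addresses
    broadcast and do not."""
    per_bank = defaultdict(set)
    for addr in addresses:
        bank = (addr // bank_width) % num_banks
        per_bank[bank].add(addr)   # set: same address counted once (broadcast)
    return max(len(s) for s in per_bank.values())

print(bank_conflict_degree([i * 4 for i in range(32)]))    # 1  (stride-1: conflict-free)
print(bank_conflict_degree([i * 128 for i in range(32)]))  # 32 (all lanes hit one bank)
print(bank_conflict_degree([64] * 32))                     # 1  (broadcast)
```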

Device Memory is byte-addressable with a 64-bit virtual address space. The implementation MUST support aligned loads and stores of 8, 16, 32, 64, and 128 bits. Coalescing of contiguous accesses is implementation-defined. Cache hierarchy is transparent to the ISA.

Default ordering is relaxed. Ordering is achieved through scoped fence operations at four scopes:

  • scope_wave
  • scope_workgroup
  • scope_device
  • scope_system

Fence semantics:

  • fence_acquire(scope) ensures subsequent loads see values at least as recent as those visible at the scope
  • fence_release(scope) ensures prior stores are visible at the scope
  • fence_acq_rel(scope) combines both

Store-to-load ordering within a Thread is always guaranteed.
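
A release/acquire pair is typically used for message passing. The sketch below is an illustrative Python model, not WAVE code: the spec's fences appear as comments at the points where a WAVE program would need them (Python's own memory model is already stronger than the relaxed default described above):

```python
import threading
import time

data = 0
flag = 0

def producer():
    global data, flag
    data = 42    # prior store
    # fence_release(scope_workgroup): make the store to `data` visible
    flag = 1     # publish

def consumer(out):
    while flag == 0:   # spin until published
        time.sleep(0)
    # fence_acquire(scope_workgroup): subsequent loads see released stores
    out.append(data)

result = []
t = threading.Thread(target=consumer, args=(result,))
t.start()
producer()
t.join()
print(result)  # [42]
```

Without the release/acquire pair, a relaxed implementation could legally let the consumer observe `flag == 1` while still reading a stale `data`.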

Atomic operations perform indivisible read-modify-write sequences on Local Memory and Device Memory. Required operations:

  • atomic_add (i32, u32, f32)
  • atomic_sub (i32, u32)
  • atomic_min (i32, u32)
  • atomic_max (i32, u32)
  • atomic_and (u32)
  • atomic_or (u32)
  • atomic_xor (u32)
  • atomic_exchange (u32)
  • atomic_compare_swap (u32)

Each takes a scope parameter. 64-bit atomics are OPTIONAL.
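
Any read-modify-write not in the required list can be synthesized from atomic_compare_swap. A single-threaded toy model of one memory word (illustrative; `Cell` is not a spec type) shows the standard CAS retry loop:

```python
class Cell:
    """Toy model of one memory word supporting compare_swap."""
    def __init__(self, value):
        self.value = value

    def compare_swap(self, expected, new):
        old = self.value
        if old == expected:
            self.value = new
        return old          # CAS returns the old value, like all atomics

def atomic_max_via_cas(cell, operand):
    """Retry until the word has not changed between the read and the swap."""
    while True:
        old = cell.value
        new = max(old, operand)
        if cell.compare_swap(old, new) == old:
            return old      # success: return the pre-update value

c = Cell(10)
atomic_max_via_cas(c, 25)
print(c.value)  # 25
```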

The ISA defines structured control flow primitives. All control flow MUST be expressible through these primitives. The implementation MUST NOT require the programmer to manage divergence masks, execution masks, or reconvergence points.

Conditional:

if (predicate)
<then-body>
else
<else-body>
endif

When Threads in a Wave evaluate the predicate differently, the implementation MUST execute both paths. Threads for which the predicate is false MUST NOT produce side effects during the then-body, and vice versa for the else-body. After endif, all Threads that were active before the if are active again.
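
One way to picture these semantics is a mask-based emulation (a sketch of the behavior, not of any mandated mechanism; Section 5.3 leaves the mechanism open):

```python
def wave_if_else(active, pred, then_body, else_body):
    """Emulate if/else/endif: both paths run, each over the subset of
    previously active lanes that takes it."""
    then_mask = [a and p for a, p in zip(active, pred)]
    else_mask = [a and not p for a, p in zip(active, pred)]
    then_body(then_mask)     # lanes outside the mask produce no side effects
    else_body(else_mask)
    return list(active)      # endif restores the original active set

W = 4
regs = [0] * W

def then_body(mask):
    for lane in range(W):
        if mask[lane]:
            regs[lane] = 1

def else_body(mask):
    for lane in range(W):
        if mask[lane]:
            regs[lane] = 2

wave_if_else([True] * W, [True, False, True, False], then_body, else_body)
print(regs)  # [1, 2, 1, 2]
```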

Loop:

loop
<body>
break (predicate) // exit loop for Threads where predicate is true
continue (predicate) // skip to next iteration for Threads where predicate is true
endloop

A loop executes until all active Threads have exited via break. The implementation MUST guarantee forward progress: if at least one Thread remains in the loop, execution continues.

Predicate negation on break and continue: When a break or continue instruction uses a negated predicate (e.g., break !p0), the instruction applies to Threads where the predicate is false. That is, break !p0 causes Threads where p0 is false to exit the loop. The negation is applied before evaluating which Threads are affected.
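
The loop and negated-break rules can be emulated the same way (an illustrative sketch; here `break !p0` exits lanes where p0 is false, exactly as specified above):

```python
def wave_loop(active, body):
    """Emulate loop/endloop: iterate while any lane remains active.
    `body` returns, per lane, whether that lane breaks this iteration."""
    active = list(active)
    while any(active):
        breaks = body(active)
        active = [a and not b for a, b in zip(active, breaks)]
    # endloop: lanes that broke out rejoin after the loop

W = 4
counters = [0] * W

def body(active):
    for lane in range(W):
        if active[lane]:
            counters[lane] += 1
    # p0 = "counter < lane + 1"; break !p0 exits lanes where p0 is FALSE
    p0 = [counters[lane] < lane + 1 for lane in range(W)]
    return [not p for p in p0]

wave_loop([True] * W, body)
print(counters)  # [1, 2, 3, 4] — lane i runs i+1 iterations
```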

Function call:

call <function>
return

Function calls push the return address onto an implementation-managed call stack. The call stack depth MUST support at least MAX_CALL_DEPTH levels of nesting (see Section 7.1). Recursion is OPTIONAL (see Section 7).

If all Threads in a Wave evaluate a branch identically (uniform), the implementation SHOULD avoid executing the not-taken path. This is a performance optimization, not a correctness requirement.

The implementation is free to use any mechanism to implement the structured control flow semantics of Section 5.1:

  • Compiler-managed execution masks (AMD approach)
  • Hardware per-thread program counters (NVIDIA approach)
  • Compiler-generated predicated instructions (Intel approach)
  • Hardware divergence stack (Apple approach)
  • Any other mechanism that preserves the specified semantics

The ISA does not expose or constrain the divergence mechanism.

Implementations MUST support nested control flow (if/else/endif, loop/break/endloop) to a depth of at least MIN_DIVERGENCE_DEPTH levels (see Section 7.1). This means a program may have up to MIN_DIVERGENCE_DEPTH nested if/else/endif blocks, or nested loops, or any combination thereof.

If a program exceeds the implementation’s maximum divergence depth, the behavior is undefined.

Each Wave MUST maintain independent control flow state. This includes, but is not limited to, the divergence stack (active mask history), loop iteration state, and reconvergence points. Two Waves executing the same program binary at different points in a loop or branch MUST NOT interfere with each other’s control flow state.

Rationale: This requirement was added in v0.2 after the reference emulator discovered that sharing control flow state across Waves in a Workgroup causes deadlock when Waves reach barriers at different loop iterations.

All instructions operate on registers. There are no memory-to-register or memory-to-memory instructions (except explicit load/store).

Instructions are specified in this document in assembly notation:

opcode destination, source1, source2

Predicated instructions are written as:

@predicate opcode destination, source1, source2 // execute if predicate is true
@!predicate opcode destination, source1, source2 // execute if predicate is false

A predicated instruction executes only for Threads where the predicate condition is met. Threads where the condition is not met are unaffected — their destination registers retain their previous values and no side effects (memory stores, atomics) occur.
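
A per-lane model of predicated execution (illustrative only) makes the retention rule concrete: masked-off lanes keep their old destination values:

```python
def predicated_add(rd, rs1, rs2, pred, negate=False):
    """@p / @!p execution of iadd: lanes failing the predicate condition
    retain their previous rd value."""
    return [
        a + b if (p != negate) else old
        for old, a, b, p in zip(rd, rs1, rs2, pred)
    ]

rd = [9, 9, 9, 9]
# @p0 iadd rd, rs1, rs2 with p0 = [T, F, T, F]:
print(predicated_add(rd, [1, 2, 3, 4], [10, 10, 10, 10],
                     [True, False, True, False]))  # [11, 9, 13, 9]
```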

The binary encoding of instructions is defined in Section 8.

All integer operations are performed per-Thread. Full set of integer arithmetic for i32/u32:

  • iadd, isub, imul, imul_hi, imad
  • idiv, imod, ineg, iabs
  • imin, imax, iclamp

Integer arithmetic uses wrapping semantics for overflow. Division by zero produces undefined behavior.

  • and, or, xor, not
  • shl, shr (logical), sar (arithmetic)
  • bitcount, bitfind, bitrev
  • bfe (bit field extract), bfi (bit field insert)

Shift amounts are masked to 5 bits (shift by rs2 & 0x1F).
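
The wrapping and shift-masking rules reduce to simple bit arithmetic; a minimal sketch:

```python
MASK32 = 0xFFFFFFFF

def iadd(a, b):
    """Wrapping 32-bit add."""
    return (a + b) & MASK32

def shl(a, b):
    """Shift amount is masked to 5 bits: shift by rs2 & 0x1F."""
    return (a << (b & 0x1F)) & MASK32

print(iadd(0xFFFFFFFF, 1))  # 0 — overflow wraps
print(shl(1, 33))           # 2 — 33 & 0x1F == 1
```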

6.4 Floating-Point Arithmetic (F32) — REQUIRED


IEEE 754 single precision. Operations:

  • fadd, fsub, fmul, fma, fdiv
  • fneg, fabs, fmin, fmax, fclamp
  • fsqrt, frsqrt, frcp
  • ffloor, fceil, fround, ftrunc, ffract, fsat

Transcendentals: fsin, fcos, fexp2, flog2 (maximum error of 2 ULP). Denormals MAY be flushed to zero.

6.5 Floating-Point Arithmetic (F16) — REQUIRED


F16 operations on register halves. Packed 2xF16 operations for throughput:

  • hadd2, hmul2, hma2

6.6 Floating-Point Arithmetic (F64) — OPTIONAL


F64 operations on register pairs:

  • dadd, dsub, dmul, dma, ddiv, dsqrt

Type conversion instructions are provided between i32, u32, f16, f32, and f64.

Comparison instructions set predicate registers. A select instruction provides conditional moves.

Local memory: local_load/local_store for u8, u16, u32, u64.

Device memory: device_load/device_store for u8, u16, u32, u64, u128. Device loads are asynchronous; use wait or fence before consuming.

Optional cache hints: .cached, .uncached, .streaming.

Atomic instructions operate on Local and Device Memory with scope suffixes (.wave, .workgroup, .device, .system). They return the old value; non-returning variants (rd==0, see Section 8.2) SHOULD be optimized by the implementation.

Wave operations communicate between Threads within a Wave without going through memory. Eleven mandatory primitives:

  • wave_shuffle (by lane), wave_shuffle_up, wave_shuffle_down, wave_shuffle_xor
  • wave_broadcast
  • wave_ballot, wave_any, wave_all
  • wave_prefix_sum (exclusive)
  • wave_reduce_add, wave_reduce_min, wave_reduce_max
  • wave_shuffle_idx — intra-wave shuffle by arbitrary index

Rationale for intra-wave shuffle: All four vendors implement intra-wave shuffle in hardware (NVIDIA __shfl_sync, AMD ds_permute, Intel sub_group_shuffle, Apple simd_shuffle). Benchmarks on NVIDIA T4 showed a 37.5% performance gap on communication-heavy kernels without it, confirming its inclusion as a mandatory primitive.

For shuffle operations, if the source lane is out of bounds (< 0 or >= W) or the source lane is inactive, the result is implementation-defined.

Wave operations operate only on active Threads. Inactive Threads (masked by divergence) do not participate in reductions, ballots, or prefix sums, and do not have their registers modified.
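
The active-lane rule can be illustrated with a small Python model of ballot and exclusive prefix sum (a behavioral sketch, not a mandated mechanism; inactive lanes here simply keep 0 in the output because the real instruction leaves their registers untouched):

```python
def wave_ballot(active, pred):
    """Bitmask of pred across active lanes; inactive lanes contribute 0."""
    return sum(1 << lane
               for lane, (a, p) in enumerate(zip(active, pred)) if a and p)

def wave_prefix_sum(active, values):
    """Exclusive prefix sum over active lanes only."""
    out, running = [0] * len(values), 0
    for lane, (a, v) in enumerate(zip(active, values)):
        if a:
            out[lane] = running
            running += v
    return out

active = [True, False, True, True]      # lane 1 masked off by divergence
print(bin(wave_ballot(active, [True, True, False, True])))  # 0b1001
print(wave_prefix_sum(active, [5, 99, 7, 3]))               # [0, 0, 5, 12]
```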

  • barrier — Workgroup-scope barrier. All Waves in the Workgroup MUST reach this point before any Wave proceeds past it. Memory operations before the barrier are visible to all Waves in the Workgroup after the barrier.
  • fence_acquire/fence_release/fence_acq_rel (scoped)
  • wait (for async loads)

Barrier restriction: A barrier instruction MUST NOT appear inside a divergent control flow path. That is, when a Wave reaches a barrier, all active Threads in that Wave (at the point of the outermost non-divergent scope) must reach the same barrier. Barriers inside uniform if blocks (where all Threads agree) are permitted. Barriers inside loops are permitted provided all Waves in the Workgroup execute the same number of barrier instructions per loop iteration.

| Instruction | Description |
| --- | --- |
| if pd | Begin conditional block (Threads where pd is false become inactive) |
| else | Switch active/inactive Threads |
| endif | End conditional block (restore original active set) |
| loop | Begin loop |
| break pd | Threads where pd is true exit the loop |
| break !pd | Threads where pd is false exit the loop |
| continue pd | Threads where pd is true skip to next iteration |
| continue !pd | Threads where pd is false skip to next iteration |
| endloop | End loop (branch back to loop if any Threads still active) |
| call <label> | Call function |
| return | Return from function |
| halt | Terminate this Thread |

6.14 Matrix Multiply-Accumulate — OPTIONAL

  • mma_f16_f32, mma_bf16_f32, mma_f32_f32

Tile dimensions are queryable via MMA_M, MMA_N, and MMA_K (see Section 7.2).

  • mov, mov_imm, nop

| Constant | Minimum | Description |
| --- | --- | --- |
| WAVE_WIDTH | 8 | Threads per Wave |
| MAX_REGISTERS | 64 | Maximum registers per Thread |
| REGISTER_FILE_SIZE | 16384 | Total register file size (bytes) |
| LOCAL_MEMORY_SIZE | 16384 | Local memory per Workgroup (bytes) |
| MAX_WORKGROUP_SIZE | 256 | Maximum Threads per Workgroup |
| MAX_WORKGROUPS_PER_CORE | 1 | Maximum concurrent Workgroups per Core |
| MAX_WAVES_PER_CORE | 4 | Maximum concurrent Waves per Core |
| DEVICE_MEMORY_SIZE | — | Total device memory (bytes) |
| CLUSTER_SIZE | 1 | Workgroups per Cluster |
| MAX_CALL_DEPTH | 8 | Maximum function call depth |
| MIN_DIVERGENCE_DEPTH | 32 | Minimum nested control flow depth |

  • CAP_F64 — 64-bit floating point
  • CAP_ATOMIC_64 — 64-bit atomics
  • CAP_ATOMIC_F32 — F32 atomic add
  • CAP_MMA — Matrix multiply-accumulate
  • CAP_RECURSION — Recursive function calls
  • CAP_CLUSTER — Cluster support

When CAP_MMA is present, the following are queryable:

  • MMA_M, MMA_N, MMA_K — Tile dimensions
  • MMA_TYPES — Supported input/output type combinations

Host API provides query_constant and query_capability functions.
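
A host program typically uses these queries to select a kernel variant at dispatch time. The sketch below is hypothetical: the dictionary-backed stubs stand in for the real query_constant/query_capability API, and the kernel names are invented for illustration:

```python
# Stand-in tables for one hypothetical implementation.
CONSTANTS = {"WAVE_WIDTH": 32, "MMA_M": 16, "MMA_N": 16, "MMA_K": 16}
CAPS = {"CAP_F64": False, "CAP_MMA": True}

def query_constant(name):
    return CONSTANTS[name]

def query_capability(name):
    return CAPS.get(name, False)

def pick_gemm_kernel():
    """Prefer the MMA path when the capability is reported."""
    if query_capability("CAP_MMA"):
        shape = tuple(query_constant(k) for k in ("MMA_M", "MMA_N", "MMA_K"))
        return ("gemm_mma", shape)
    return ("gemm_scalar", None)

print(pick_gemm_kernel())  # ('gemm_mma', (16, 16, 16))
```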

Instructions are encoded as fixed-width 48-bit (6-byte) words. Some instructions require an additional 32-bit word for immediate values or extended operands (80 bits / 10 bytes total).

v0.2 change: The v0.1 encoding used 32-bit base instructions with 5-bit register fields (max 32 registers). This conflicted with MAX_REGISTERS = 64. The encoding has been widened to 48 bits with 8-bit register fields (max 256 registers).

v0.3 change: The modifier field has been widened from 3 to 4 bits to accommodate wave reduce sub-opcodes (values 8-13), and the flags field is reduced from 3 to 2 bits. WAVE_REDUCE_FLAG and NON_RETURNING_ATOMIC_FLAG have been eliminated: wave reduce types are encoded directly in the modifier field, and non-returning atomics are detected via the rd==0 convention. See Section 8.2 for the updated bit layout.

| Bits | Field | Description |
| --- | --- | --- |
| [47:40] | opcode | 8 bits — 256 primary opcodes |
| [39:32] | rd | 8 bits — destination register (0-255); rd==0 on atomics indicates the non-returning variant |
| [31:24] | rs1 | 8 bits — source register 1 (0-255) |
| [23:16] | rs2 | 8 bits — source register 2 (0-255) |
| [15:11] | reserved | 5 bits — reserved, must be zero |
| [10:7] | modifier | 4 bits — instruction-specific sub-opcode (0-15) |
| [6:5] | scope | 2 bits — memory scope (00=wave, 01=workgroup, 10=device, 11=system) |
| [4:3] | pred | 2 bits — predicate register selector (p0-p3) |
| [2] | pred_neg | 1 bit — negate predicate (0=normal, 1=negated) |
| [1:0] | flags | 2 bits — instruction-specific flags |
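
The bit layout can be exercised with a small packer/unpacker (an illustrative sketch following the field positions above, not the reference toolchain):

```python
def encode(opcode, rd=0, rs1=0, rs2=0, modifier=0, scope=0,
           pred=0, pred_neg=0, flags=0):
    """Pack one 48-bit base instruction word."""
    assert 0 <= modifier <= 0xF and 0 <= scope <= 3 and 0 <= flags <= 3
    word = (opcode << 40) | (rd << 32) | (rs1 << 24) | (rs2 << 16)
    word |= (modifier << 7) | (scope << 5) | (pred << 3)
    word |= (pred_neg << 2) | flags       # bits [15:11] remain zero (reserved)
    return word

def decode(word):
    return {
        "opcode":   (word >> 40) & 0xFF,
        "rd":       (word >> 32) & 0xFF,
        "rs1":      (word >> 24) & 0xFF,
        "rs2":      (word >> 16) & 0xFF,
        "modifier": (word >> 7)  & 0xF,
        "scope":    (word >> 5)  & 0x3,
        "pred":     (word >> 3)  & 0x3,
        "pred_neg": (word >> 2)  & 0x1,
        "flags":    word & 0x3,
    }

# wave_reduce (0x59) with modifier 13 — the value that forced the v0.3
# widening — round-trips through the 4-bit modifier field:
w = encode(0x59, rd=1, rs1=2, modifier=13)
assert decode(w)["modifier"] == 13
# rd == 0 on an atomic (0x40 = atomic_add) marks the non-returning variant:
print(decode(encode(0x40, rd=0, rs1=3, rs2=4))["rd"])  # 0
```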

For instructions requiring a third source register or a 32-bit immediate:

  • Word 0 (48 bits): Base instruction as above, with flags indicating extended format
  • Word 1 (32 bits): [31:0] rs3 (8 bits) + imm24, or full imm32

The 8-bit opcode field provides 256 primary opcodes, organized as:

| Range | Category |
| --- | --- |
| 0x00-0x0F | Integer arithmetic |
| 0x10-0x1F | Floating-point arithmetic (F32) |
| 0x20-0x27 | Bitwise operations |
| 0x28-0x2F | Comparison and select |
| 0x30-0x37 | Local memory operations |
| 0x38-0x3F | Device memory operations |
| 0x40-0x4F | Atomic operations |
| 0x50-0x5F | Wave operations |
| 0x60-0x6F | Control flow and synchronization |
| 0x70-0x7F | Type conversion |
| 0x80-0x8F | F16 arithmetic |
| 0x90-0x9F | F64 arithmetic (optional) |
| 0xA0-0xAF | Matrix MMA (optional) |
| 0xB0-0xEF | Reserved for future extensions |
| 0xF0-0xFF | Miscellaneous (mov, mov_imm, nop, halt) |

See Appendix A for the complete opcode-to-mnemonic mapping.

A compliant implementation MUST:

  1. Support all mandatory instructions (Sections 6.2 through 6.15).
  2. Meet or exceed all minimum values in Section 7.1.
  3. Implement the memory ordering semantics of Section 4.4.
  4. Implement the structured control flow semantics of Section 5.1, including per-Wave control flow state (Section 5.5).
  5. Satisfy all execution guarantees of Section 2.5.
  6. Correctly report all capabilities of Section 7.2.
  7. Support nested control flow to at least MIN_DIVERGENCE_DEPTH levels (Section 5.4).

The following behaviors are implementation-defined (valid implementations may differ):

  • Denormal floating-point handling (flush to zero or preserve)
  • Bank conflict penalty in Local Memory
  • Device Memory coalescing policy
  • Cache hierarchy structure, sizes, and policies
  • Scheduling policy for Wave selection
  • Transcendental function precision beyond the specified minimum
  • Out-of-bounds shuffle source lane result
  • Shuffle from inactive source lane result
  • Unaligned memory access behavior
  • Wave scheduling order within a Workgroup

HIP conformance note: Implementations targeting AMD HIP MUST include the half-precision header (hip_fp16.h) when F16 operations are used.

The following constitute undefined behavior (no guarantees):

  • Accessing Local Memory outside [0, S)
  • Accessing Device Memory outside allocated regions
  • Using optional capabilities on hardware that does not support them
  • Exceeding MAX_CALL_DEPTH
  • Exceeding the implementation’s maximum divergence depth
  • Data races on Device Memory without proper fencing
  • Infinite loops with no forward progress
  • Barrier inside a divergent control flow path (where threads in a wave disagree on whether to execute the barrier)
  • Integer division by zero

A reference conformance test suite consisting of 102 tests is provided as a companion artifact in the WAVE toolchain repository. The test suite verifies:

  1. Correct execution of all mandatory instructions, including edge cases (overflow, NaN, infinity)
  2. Memory ordering compliance across scopes
  3. Barrier semantics with multi-wave workgroups, including barriers inside loops
  4. Atomic operation correctness on both local and device memory
  5. Structured control flow behavior under divergence, including nested divergence to depth 32
  6. Wave operations under divergence (shuffle, ballot, reduce with inactive threads)
  7. Capability reporting accuracy
  8. Real GPU program correctness (tiled GEMM, parallel reduction, histogram, prefix sum)

An implementation passes conformance if all 102 tests in the mandatory suite produce correct results. The test suite is versioned alongside the specification.

This specification has been validated through a reference toolchain (assembler, disassembler, emulator) and a conformance test suite. Four vendor backends (Apple Metal, NVIDIA PTX, AMD HIP, Intel SYCL) have been implemented. Three have been verified on real hardware: Apple M4 Pro (Apple Silicon), NVIDIA T4 (Turing), and AMD MI300X (CDNA 3). The same WAVE program produces identical results across all three platforms.

Integer Arithmetic (0x00-0x0F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x00 | iadd | Base | rd = rs1 + rs2 |
| 0x01 | isub | Base | rd = rs1 - rs2 |
| 0x02 | imul | Base | rd = (rs1 * rs2) & 0xFFFFFFFF |
| 0x03 | imul_hi | Base | rd = (rs1 * rs2) >> 32 |
| 0x04 | imad | Extended | rd = rs1 * rs2 + rs3 |
| 0x05 | idiv | Base | rd = rs1 / rs2 |
| 0x06 | imod | Base | rd = rs1 % rs2 |
| 0x07 | ineg | Base | rd = -rs1 |
| 0x08 | iabs | Base | rd = abs(rs1) |
| 0x09 | imin | Base | rd = min(rs1, rs2) (signed) |
| 0x0A | imax | Base | rd = max(rs1, rs2) (signed) |
| 0x0B | iclamp | Extended | rd = clamp(rs1, rs2, rs3) |
| 0x0C | umin | Base | rd = min(rs1, rs2) (unsigned) |
| 0x0D | umax | Base | rd = max(rs1, rs2) (unsigned) |
| 0x0E-0x0F | — | — | Reserved |
Floating-Point Arithmetic F32 (0x10-0x1F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x10 | fadd | Base | rd = rs1 + rs2 |
| 0x11 | fsub | Base | rd = rs1 - rs2 |
| 0x12 | fmul | Base | rd = rs1 * rs2 |
| 0x13 | fma | Extended | rd = rs1 * rs2 + rs3 |
| 0x14 | fdiv | Base | rd = rs1 / rs2 |
| 0x15 | fneg | Base | rd = -rs1 |
| 0x16 | fabs | Base | rd = abs(rs1) |
| 0x17 | fmin | Base | rd = min(rs1, rs2) |
| 0x18 | fmax | Base | rd = max(rs1, rs2) |
| 0x19 | fclamp | Extended | rd = clamp(rs1, rs2, rs3) |
| 0x1A | fsqrt | Base | rd = sqrt(rs1) |
| 0x1B | frsqrt | Base | rd = 1/sqrt(rs1) |
| 0x1C | frcp | Base | rd = 1/rs1 |
| 0x1D | fround | Base | modifier: 0=floor, 1=ceil, 2=round, 3=trunc |
| 0x1E | ffract | Base | rd = fract(rs1) |
| 0x1F | ftransc | Base | modifier: 0=sin, 1=cos, 2=exp2, 3=log2 |
Bitwise Operations (0x20-0x27)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x20 | and | Base | rd = rs1 & rs2 |
| 0x21 | or | Base | rd = rs1 \| rs2 |
| 0x22 | xor | Base | rd = rs1 ^ rs2 |
| 0x23 | not | Base | rd = ~rs1 |
| 0x24 | shift | Base | modifier: 0=shl, 1=shr, 2=sar |
| 0x25 | bitop | Base | modifier: 0=bitcount, 1=bitfind, 2=bitrev |
| 0x26 | bfe | Extended | Extract bit field |
| 0x27 | bfi | Extended | Insert bit field |

Comparison and Select (0x28-0x2F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x28 | icmp | Base | modifier: 0=eq, 1=ne, 2=lt, 3=le, 4=gt, 5=ge |
| 0x29 | ucmp | Base | modifier: 0=lt, 1=le |
| 0x2A | fcmp | Base | modifier: 0=eq, 1=lt, 2=le, 3=gt, 4=ne, 5=ord, 6=unord |
| 0x2B | select | Base | rd = pred ? rs1 : rs2 |
| 0x2C | fsat | Base | rd = clamp(rs1, 0.0, 1.0) |
| 0x2D-0x2F | — | — | Reserved |
Local Memory Operations (0x30-0x37)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x30 | local_load | Base | modifier: 0=u8, 1=u16, 2=u32, 3=u64 |
| 0x31 | local_store | Base | modifier: 0=u8, 1=u16, 2=u32, 3=u64 |
| 0x32-0x37 | — | — | Reserved |

Device Memory Operations (0x38-0x3F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x38 | device_load | Base | modifier: 0=u8, 1=u16, 2=u32, 3=u64, 4=u128 |
| 0x39 | device_store | Base | modifier: 0=u8, 1=u16, 2=u32, 3=u64, 4=u128 |
| 0x3A-0x3F | — | — | Reserved |

Atomic Operations (0x40-0x4F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x40 | atomic_add | Extended | Atomic add (scope in scope field) |
| 0x41 | atomic_sub | Extended | Atomic subtract |
| 0x42 | atomic_min | Extended | Atomic minimum |
| 0x43 | atomic_max | Extended | Atomic maximum |
| 0x44 | atomic_and | Extended | Atomic bitwise AND |
| 0x45 | atomic_or | Extended | Atomic bitwise OR |
| 0x46 | atomic_xor | Extended | Atomic bitwise XOR |
| 0x47 | atomic_exchange | Extended | Atomic swap |
| 0x48 | atomic_cas | Extended | Compare-and-swap |
| 0x49-0x4F | — | — | Reserved |
Wave Operations (0x50-0x5F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x50 | wave_shuffle | Base | rd = rs1 from lane rs2 |
| 0x51 | wave_shuffle_up | Base | rd = rs1 from lane (lane_id - rs2) |
| 0x52 | wave_shuffle_down | Base | rd = rs1 from lane (lane_id + rs2) |
| 0x53 | wave_shuffle_xor | Base | rd = rs1 from lane (lane_id ^ rs2) |
| 0x54 | wave_broadcast | Base | rd = rs1 from lane rs2 (all threads) |
| 0x55 | wave_ballot | Base | rd = bitmask of pd across active threads |
| 0x56 | wave_any | Base | pd_dst = any active thread has pd_src true |
| 0x57 | wave_all | Base | pd_dst = all active threads have pd_src true |
| 0x58 | wave_prefix_sum | Base | Exclusive prefix sum |
| 0x59 | wave_reduce | Base | modifier: 8=add, 9=min, 10=max, 11=and_bits, 12=or_bits, 13=xor_bits |
| 0x5A | wave_shuffle_idx | Base | rd = rs1 from lane rs2 (arbitrary index shuffle) |
| 0x5B-0x5F | — | — | Reserved |

Control Flow and Synchronization (0x60-0x6F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x60 | if | Base | Begin conditional block |
| 0x61 | else | Base | Switch active/inactive |
| 0x62 | endif | Base | End conditional, restore mask |
| 0x63 | loop | Base | Begin loop |
| 0x64 | break | Base | Exit loop for predicated threads |
| 0x65 | continue | Base | Skip to next iteration for predicated threads |
| 0x66 | endloop | Base | End loop, branch back if any active |
| 0x67 | call | Extended | Call function at imm32 address |
| 0x68 | return | Base | Return from function |
| 0x69 | barrier | Base | Workgroup barrier |
| 0x6A | fence | Base | modifier: 0=acquire, 1=release, 2=acq_rel |
| 0x6B | wait | Base | Wait for async loads |
| 0x6C | halt | Base | Terminate thread |
| 0x6D-0x6F | — | — | Reserved |
Type Conversion (0x70-0x7F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x70 | cvt_f32_i32 | Base | Signed int to float |
| 0x71 | cvt_f32_u32 | Base | Unsigned int to float |
| 0x72 | cvt_i32_f32 | Base | Float to signed int |
| 0x73 | cvt_u32_f32 | Base | Float to unsigned int |
| 0x74 | cvt_f32_f16 | Base | F16 to F32 |
| 0x75 | cvt_f16_f32 | Base | F32 to F16 |
| 0x76 | cvt_f32_f64 | Base | F64 to F32 (requires CAP_F64) |
| 0x77 | cvt_f64_f32 | Base | F32 to F64 (requires CAP_F64) |
| 0x78-0x7F | — | — | Reserved |

F16 Arithmetic (0x80-0x8F)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x80 | hadd | Base | F16 add |
| 0x81 | hsub | Base | F16 subtract |
| 0x82 | hmul | Base | F16 multiply |
| 0x83 | hma | Extended | F16 fused multiply-add |
| 0x84 | hadd2 | Base | Packed 2xF16 add |
| 0x85 | hmul2 | Base | Packed 2xF16 multiply |
| 0x86 | hma2 | Extended | Packed 2xF16 fused multiply-add |
| 0x87-0x8F | — | — | Reserved |

F64 Arithmetic (0x90-0x9F) — Requires CAP_F64

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0x90 | dadd | Base | F64 add |
| 0x91 | dsub | Base | F64 subtract |
| 0x92 | dmul | Base | F64 multiply |
| 0x93 | dma | Extended | F64 fused multiply-add |
| 0x94 | ddiv | Base | F64 divide |
| 0x95 | dsqrt | Base | F64 square root |
| 0x96-0x9F | — | — | Reserved |

Matrix MMA (0xA0-0xAF) — Requires CAP_MMA

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0xA0 | mma_f16_f32 | Extended | D = A*B+C, A/B F16, C/D F32 |
| 0xA1 | mma_bf16_f32 | Extended | D = A*B+C, A/B BF16, C/D F32 |
| 0xA2 | mma_f32_f32 | Extended | D = A*B+C, all F32 |
| 0xA3-0xAF | — | — | Reserved |

Miscellaneous (0xF0-0xFF)

| Opcode | Mnemonic | Format | Description |
| --- | --- | --- | --- |
| 0xF0 | mov | Base | rd = rs1 |
| 0xF1 | mov_imm | Extended | rd = imm32 |
| 0xF2 | mov_special | Base | rd = special register (rs1 encodes which) |
| 0xF3 | nop | Base | No operation |
| 0xF4-0xFF | — | — | Reserved |

| Abstract Concept | NVIDIA | AMD RDNA | AMD CDNA | Intel | Apple |
| --- | --- | --- | --- | --- | --- |
| Core | SM | WGP | CU | Xe-core | GPU core |
| Wave | Warp (32) | Wavefront (32) | Wavefront (64) | Sub-group (8-16) | SIMD-group (32) |
| Register | PTX register | VGPR | VGPR | GRF entry | GPR |
| Local Memory | Shared memory | LDS | LDS | SLM | Threadgroup mem |
| Barrier | bar.sync | S_BARRIER | S_BARRIER | barrier | threadgroup_barrier |
| Shuffle | __shfl_sync | DPP/ds_permute | DPP/ds_permute | sub_group_shuffle | simd_shuffle |
| Atomic | atom/red | ds/buffer atomic | ds/buffer atomic | atomic_ref (SEND) | atomic_fetch_add |
| Device Memory | Global memory | VRAM | VRAM | GDDR/HBM | LPDDR (unified) |
| Fence | fence.scope | S_WAITCNT | S_WAITCNT | scoreboard | wait_for_loads |

| Version | Date | Changes |
| --- | --- | --- |
| 0.1 | 2026-03-22 | Initial draft |
| 0.2 | 2026-03-23 | Register encoding widened to 8-bit; minimum divergence depth specified (32); predicate negation on break/continue clarified; per-Wave control flow state required; full opcode table added; deterministic execution guarantee added; barrier divergence restriction documented; integer overflow and division semantics specified; conformance test suite (102 tests) referenced |
| 0.3 | 2026-03-29 | Widened modifier field from 3 to 4 bits. Eliminated WAVE_REDUCE_FLAG and NON_RETURNING_ATOMIC_FLAG. Added intra-wave shuffle as 11th mandatory primitive. Documented three-vendor hardware verification (Apple M4 Pro, NVIDIA T4, AMD MI300X). |

Defensive Publication Statement: This specification is published as a defensive publication. The architectural concepts described herein are placed in the public domain for the purpose of establishing prior art and preventing proprietary claims on vendor-neutral GPU compute primitives.