
SHIP Language

Secure Heterogeneous Inference Programming—translating natural language to verifiable bytecode.

Why SHIP Is Necessary

SHIP translates model inference outputs into executable operations: asset transfers, agent interactions, and contract calls. It provides a constrained, verifiable layer between model output and runtime execution.

The Problem with Raw LLM Outputs

LLMs can generate text resembling executable logic, but they're non-deterministic and lack formal guarantees about structure, safety, or correctness. Using raw outputs for bytecode generation introduces serious issues.
Unpredictability: hallucinations, unsafe constructs
Unbounded execution: DoS risks
No proof anchoring: verification impossible
Opaque intent: implicit goals
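The risks above come from treating free-form model text as something executable. A minimal sketch of the alternative, a constrained layer that admits only outputs matching a closed schema (all names here are hypothetical, not the SHIP toolchain):

```python
import re
from dataclasses import dataclass

# Hypothetical constrained schema: the only action an output may express.
@dataclass(frozen=True)
class Transfer:
    recipient: str
    amount: int

# A closed grammar for one construct; anything else is rejected outright.
TRANSFER_RE = re.compile(r"^Transfer\((?P<to>agent_[a-z0-9]+),\s*(?P<amt>\d+)\)$")

def parse_constrained(output: str) -> Transfer:
    """Accept only outputs matching the schema; never execute free text."""
    m = TRANSFER_RE.match(output.strip())
    if m is None:
        raise ValueError("output does not match any allowed construct")
    amount = int(m.group("amt"))
    if amount <= 0 or amount > 1_000:  # static bound, illustrative only
        raise ValueError("amount outside permitted bounds")
    return Transfer(recipient=m.group("to"), amount=amount)
```

Anything that fails to parse, including hallucinated or adversarial text, never reaches execution.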

Ecosystem Examples

Public repositories in the Theseus ecosystem show how SHIP is used in deployed applications.

proof-of-lobster

Demonstrates persistent agent identity, scheduled execution, and social interaction flows.


the-prediction-market

Demonstrates agent-to-contract calls, contract-to-agent callbacks, and resolver workflows.


Design Principles

Determinism: static bounds, known gas/memory
Verifiability: Tensor Commit proofs
Traceability: tied to agent context
Composability: staged, delegated, templated
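Determinism in particular means worst-case cost is computable before execution. A sketch of a static gas-bound check, using the opcode names from the AIVM section but hypothetical per-opcode costs:

```python
from dataclasses import dataclass

# Hypothetical per-opcode costs; real AIVM gas accounting may differ.
GAS_COST = {"TLOAD": 3, "TCUSTOM": 10, "STATE_EXPORT": 5, "TRANSFER_TOKEN": 8}

@dataclass(frozen=True)
class Program:
    opcodes: tuple  # straight-line, no unbounded loops: the bound is a plain sum

def static_gas_bound(program: Program) -> int:
    """Determinism principle: the worst-case cost is known before execution."""
    return sum(GAS_COST[op] for op in program.opcodes)

def admit(program: Program, gas_limit: int) -> bool:
    """Reject any program whose static bound exceeds the runtime limit."""
    return static_gas_bound(program) <= gas_limit
```

Because every construct has a statically known cost, a runtime can refuse a program up front instead of aborting it mid-execution.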

Execution Flow

1. Inference: agent runs model, generates output
2. Compilation: NL→SHIP via fine-tuned agent, then SHIP→bounded opcodes
3. Verification: Tensor Commit proves inference integrity, bytecode validated
4. Execution: program submitted to runtime
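The four stages can be sketched end to end. Everything here is stubbed and illustrative, the function names and data shapes are assumptions, not the actual Theseus APIs:

```python
# Hypothetical end-to-end sketch of the four stages.

def inference(prompt: str) -> str:
    # Stage 1: agent runs its model; stubbed with a fixed output.
    return "Pay 10 $THE to agent_xyz for document processing"

def compile_to_ship(natural_language: str) -> str:
    # Stage 2a: NL -> SHIP; in practice a fine-tuned agent does this.
    return ("let payment = Transfer { recipient: agent_xyz, amount: 10 THE }; "
            "commit(payment);")

def compile_to_opcodes(ship_source: str) -> list:
    # Stage 2b: SHIP -> bounded opcodes, stubbed.
    return [("TRANSFER_TOKEN", "agent_xyz", 10)]

def verify(opcodes: list) -> bool:
    # Stage 3: Tensor Commit proof + bytecode validation, stubbed as a
    # structural check that only known, bounded opcodes appear.
    allowed = {"TRANSFER_TOKEN", "TLOAD", "STATE_EXPORT"}
    return all(op[0] in allowed for op in opcodes)

def execute(opcodes: list) -> str:
    # Stage 4: submit to the runtime; stubbed as a trace string.
    assert verify(opcodes), "unverified program must never reach execution"
    return "; ".join(f"{op[0]}({', '.join(map(str, op[1:]))})" for op in opcodes)
```

The point of the pipeline shape is that stage 4 only ever sees programs that survived stages 2 and 3.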

Example Use Case

A sovereign agent runs a summarization model. The summary contains a trigger like "Pay 10 $THE to agent_xyz for document processing".

Without SHIP

The text is parsed directly into bytecode, which can trigger execution unaligned with the agent's intention.

With SHIP

example.ship
let payment = Transfer {
  recipient: agent_xyz,
  amount: 10 THE
};
commit(payment);
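One way to picture what the explicit commit buys you: the typed operation can be bound to the inference output that motivated it, keeping intent auditable. A sketch in Python, where the commit semantics and record shape are assumptions for illustration:

```python
import hashlib
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Transfer:
    recipient: str
    amount: int
    denom: str

def commit(payment: Transfer, inference_output: str) -> dict:
    """Hypothetical commit: bind the typed operation to a digest of the
    inference output that motivated it, so intent stays auditable."""
    record = asdict(payment)
    record["inference_digest"] = hashlib.sha256(inference_output.encode()).hexdigest()
    return record

summary = "Pay 10 $THE to agent_xyz for document processing"
receipt = commit(Transfer("agent_xyz", 10, "THE"), summary)
```

The runtime then executes the typed record, never the summary text itself.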

Integration with AIVM

SHIP compiles to AIVM opcodes, executed via AGENT_TICK() or MODEL_INFER().

Each construct maps to safe primitives: TLOAD, TCUSTOM, STATE_EXPORT, TRANSFER_TOKEN.

Tensor Commits link inference outputs to on-chain outcomes.
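A lowering step like this can be pictured as a closed table from SHIP constructs to the primitives named above. The mapping shown is hypothetical; the real compiler's lowering may differ:

```python
# Hypothetical lowering table from SHIP constructs to safe AIVM primitives.
LOWERING = {
    "Transfer": ["TLOAD", "TRANSFER_TOKEN"],  # load context, then move tokens
    "Export":   ["STATE_EXPORT"],             # persist agent state
    "Custom":   ["TCUSTOM"],                  # user-defined tensor op
}

def lower(construct: str) -> list:
    """Lower one SHIP construct to its safe primitive sequence."""
    if construct not in LOWERING:
        raise KeyError(f"no safe lowering for construct {construct!r}")
    return list(LOWERING[construct])
```

Because the table is closed, a construct with no safe lowering simply cannot be compiled.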
