SHIP Language
Secure Heterogeneous Inference Programming: translating natural language to verifiable bytecode.
Why SHIP Is Necessary
SHIP translates model inference outputs into executable operations: asset transfers, agent interactions, and contract calls. It provides a constrained, verifiable layer between model output and runtime execution.
The Problem with Raw LLM Outputs
Hallucinated or unsafe constructs
DoS risks from unbounded execution
No way to verify outputs
Goals left implicit in free-form text
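Taken together, these risks mean raw model output must be rejected unless it fits a constrained form. A minimal Python sketch of that idea (all names and limits here are hypothetical, not the SHIP implementation):

```python
# Why raw LLM output cannot be executed directly: a validator admits
# only operations drawn from a small, explicitly allowed set, and
# rejects everything else rather than guessing at intent.
ALLOWED_OPS = {"Transfer", "AgentCall", "ContractCall"}  # hypothetical set

def validate_op(op: dict) -> bool:
    """Reject hallucinated or unbounded constructs up front."""
    if op.get("type") not in ALLOWED_OPS:
        return False  # hallucinated / unsafe construct
    if op.get("repeat", 1) > 1000:
        return False  # unbounded repetition is a DoS risk
    return True

# A hallucinated opcode is refused instead of silently executed.
print(validate_op({"type": "Transfer", "repeat": 1}))  # True
print(validate_op({"type": "SelfDestruct"}))           # False
```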
Ecosystem Examples
Public repositories in the Theseus ecosystem show how SHIP is used in deployed applications.
proof-of-lobster
Demonstrates persistent agent identity, scheduled execution, and social interaction flows.
the-prediction-market
Demonstrates agent-to-contract calls, contract-to-agent callbacks, and resolver workflows.
Design Principles
Static bounds with known gas and memory costs
Inference integrity via Tensor Commit proofs
Programs tied to their agent's context
Staged, delegated, and templated composition
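The bounding principle can be illustrated with a small sketch: because every construct has a fixed cost and repetition counts are static, a worst-case gas bound is computable before execution. The opcode names below appear elsewhere on this page; the costs themselves are invented for illustration:

```python
# Sketch of static gas bounding (costs are hypothetical): with fixed
# per-opcode costs and static repeat counts, the worst case is a sum
# computed at compile time, never discovered at run time.
GAS_COST = {"TLOAD": 3, "TRANSFER_TOKEN": 21, "STATE_EXPORT": 10}

def static_gas_bound(program: list) -> int:
    """Sum cost * static repeat count; unknown opcodes raise KeyError."""
    return sum(GAS_COST[op] * count for op, count in program)

prog = [("TLOAD", 2), ("TRANSFER_TOKEN", 1)]
print(static_gas_bound(prog))  # 27
```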
Execution Flow
Inference
Agent runs model, generates output
Compilation
NL→SHIP via fine-tuned agent, then SHIP→bounded opcodes
Verification
Tensor Commit proves inference integrity, bytecode validated
Execution
Program submitted to runtime
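The four stages above can be sketched as a gated pipeline, where each stage must succeed before the next runs. All functions here are hypothetical stubs, not the real agent runtime:

```python
# Sketch of the flow: inference -> compilation -> verification ->
# execution, with verification gating the runtime so unverified
# bytecode is never submitted.

def run_pipeline(prompt: str) -> str:
    nl_output = infer(prompt)            # 1. Inference: model output
    bytecode = compile_ship(nl_output)   # 2. Compilation: NL -> SHIP -> opcodes
    if not verify(bytecode):             # 3. Verification: proof + validation
        raise ValueError("verification failed; nothing is executed")
    return execute(bytecode)             # 4. Execution: submit to runtime

# Stub stages for illustration only.
def infer(prompt): return f"Transfer 10 THE per: {prompt}"
def compile_ship(text): return ["TLOAD", "TRANSFER_TOKEN"]
def verify(bc): return all(op in {"TLOAD", "TRANSFER_TOKEN"} for op in bc)
def execute(bc): return "submitted"

print(run_pipeline("pay for document processing"))  # submitted
```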
Example Use Case
A sovereign agent runs a summarization model. The summary contains a trigger like "Pay 10 $THE to agent_xyz for document processing".
Without SHIP
The text is parsed directly into bytecode, risking execution that is unaligned with the agent's intention.
With SHIP
let payment = Transfer {
    recipient: agent_xyz,
    amount: 10 THE
};
commit(payment);

Integration with AIVM
SHIP compiles to AIVM opcodes, executed via AGENT_TICK() or MODEL_INFER().
Each construct maps to safe primitives: TLOAD, TCUSTOM, STATE_EXPORT, TRANSFER_TOKEN.
Tensor Commits link inference outputs to on-chain outcomes.
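The lowering step can be sketched as a fixed table from SHIP constructs to AIVM primitives: the compiler emits only opcodes that have an entry, and refuses anything else. The opcode names come from this page; the construct names other than Transfer are hypothetical:

```python
# Sketch (hypothetical mapping table): each construct lowers to one
# fixed, safe AIVM primitive, so nothing outside the table can ever
# become bytecode.
CONSTRUCT_TO_OPCODE = {
    "Transfer":    "TRANSFER_TOKEN",
    "LoadTensor":  "TLOAD",
    "CustomOp":    "TCUSTOM",
    "ExportState": "STATE_EXPORT",
}

def lower(construct: str) -> str:
    """Map a SHIP-style construct to its safe primitive, or fail loudly."""
    try:
        return CONSTRUCT_TO_OPCODE[construct]
    except KeyError:
        raise ValueError(f"no safe primitive for {construct!r}")

print(lower("Transfer"))  # TRANSFER_TOKEN
```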