MotokoCoderV0

The first code generation model for Motoko, the native language of the Internet Computer blockchain.

Part of the Motoko Coder model series by Mercatura Forum AI Lab and ICP Hub Egypt. Smaller and larger models are planned for production use, along with an API for developers to try. This V0 release uses Qwen3-Coder-30B-A3B as the base, a commercially licensable model you can run and deploy freely.

Highlights

  • 70% compilation rate on a balanced evaluation set of 20 diverse Motoko programming tasks
  • Generates idiomatic persistent actor code with proper mo:core imports
  • Writes compilable AMM swap pools, escrow services, token ledgers, staking contracts, admin access control, and more
  • LoRA adapter (205MB) on top of Qwen3-Coder-30B-A3B-Instruct
  • Verified against the official moc compiler from DFINITY SDK

Motoko Coder Series

| Model | Base | Status | Use Case |
|---|---|---|---|
| MotokoCoderV0 | Qwen3-Coder-30B-A3B | ✅ Released | Local development, commercial use |
| MotokoCoderV1 | TBD | 🔜 Coming soon | Higher accuracy, self-repair |
| MotokoCoder-API | Hosted | 🔜 Coming soon | API access for all developers |
| MotokoCoder-Small | Qwen3-Coder-8B | 🔜 Planned | Edge deployment, IDE plugins |
| MotokoCoder-Pro | Qwen3-Coder-235B | 🔜 Planned | Production code generation |

Evaluation Results

Tested against the moc compiler: every result marked "compiled" is machine-checked code, verified by the compiler itself.

| Category | Compiled | Rate |
|---|---|---|
| Easy (contact forms, todo lists, profiles) | 4/7 | 57% |
| Medium (voting, ledgers, config stores, event logs) | 6/8 | 75% |
| Hard (AMM pools, staking, escrow, batch transfers) | 4/5 | 80% |
| Overall | 14/20 | 70% |
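The compile check behind these numbers is straightforward to reproduce. A minimal sketch of such a harness in Python, assuming the DFINITY SDK's moc binary is on the PATH (the helper name and the `moc_cmd` parameter are illustrative; `moc --check` type-checks a file without generating Wasm):

```python
import os
import subprocess
import tempfile

def compiles(source: str, moc_cmd: str = "moc") -> bool:
    """Write Motoko source to a temp file and type-check it with `moc --check`.

    Returns True when the compiler exits cleanly. `moc_cmd` is assumed to
    point at the DFINITY SDK's moc binary.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".mo", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run([moc_cmd, "--check", path], capture_output=True)
        return result.returncode == 0
    finally:
        os.unlink(path)
```

Running a harness like this over the 20 evaluation tasks and counting `True` results yields the per-category rates above.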

What it compiles

  • Persistent actors with Map, Set, Principal, Time state management
  • CRUD operations with proper Map.add/Map.get/Map.delete and compare functions
  • DeFi primitives: constant product AMM formula, fee collection, reserve tracking
  • State machines with variant types (#Created, #Funded, #Released)
  • Admin access control with Principal.equal checks
  • Record updates with { record with field = newValue } syntax
  • Result types with #ok/#err error handling
  • Query vs update function separation
  • Token ledgers with transfer, mint, burn operations
  • Escrow services with full lifecycle management
  • Online stores (bookstore, restaurant menus) with inventory management

Example: AMM Swap Pool (compiles ✅)

import Map "mo:core/Map";
import Nat "mo:core/Nat";
import Principal "mo:core/Principal";
import Result "mo:core/Result";

persistent actor AMMSwapPool {
  var reserveA : Nat = 1_000_000;
  var reserveB : Nat = 1_000_000;
  var totalFees : Nat = 0;

  func getOutputAmount(inputAmount : Nat, inputReserve : Nat, outputReserve : Nat) : Nat {
    let numerator = inputAmount * outputReserve * 997;
    let denominator = (inputReserve * 1000) + (inputAmount * 997);
    numerator / denominator;
  };

  public shared(msg) func swap(inputToken : Text, inputAmount : Nat) : async Result.Result<Nat, Text> {
    if (inputAmount == 0) { return #err("Amount must be > 0") };
    let outputAmount = getOutputAmount(inputAmount, reserveA, reserveB);
    let fee = inputAmount * 3 / 1000;
    totalFees += fee;
    reserveA += inputAmount;
    reserveB -= outputAmount;
    #ok(outputAmount);
  };

  public query func getReserves() : async { reserveA : Nat; reserveB : Nat } {
    { reserveA; reserveB };
  };
};
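The pricing math in `getOutputAmount` is the standard constant product formula with a 0.3% fee folded in (the 997/1000 factor). A direct Python transcription is handy for sanity-checking outputs against the invariant; the numbers below use the actor's initial reserves:

```python
def get_output_amount(input_amount: int, input_reserve: int, output_reserve: int) -> int:
    # Constant product pricing with a 0.3% fee: only 99.7% of the input
    # counts toward the swap (matches the Motoko function above).
    numerator = input_amount * output_reserve * 997
    denominator = (input_reserve * 1000) + (input_amount * 997)
    return numerator // denominator

reserve_a, reserve_b = 1_000_000, 1_000_000
out = get_output_amount(1_000, reserve_a, reserve_b)
print(out)  # 996 -- slightly under 1000 because of the fee

# The product of reserves never decreases across a swap; the fee accrues to the pool.
assert (reserve_a + 1_000) * (reserve_b - out) >= reserve_a * reserve_b
```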

Example: Escrow Service (compiles ✅, 156 lines; excerpt below, imports and function bodies elided)

persistent actor EscrowService {
  public type EscrowState = {
    #Created; #Funded; #Disputed; #Released; #Refunded;
  };

  public type Escrow = {
    id : Nat; buyer : Principal; seller : Principal;
    amount : Nat; state : EscrowState; createdAt : Int;
  };

  var escrows = Map.empty<Nat, Escrow>();

  public shared(msg) func createEscrow(seller : Principal, amount : Nat) : async Result.Result<Nat, Text> { ... };
  public shared(msg) func fundEscrow(id : Nat) : async Result.Result<(), Text> { ... };
  public shared(msg) func releaseFunds(id : Nat) : async Result.Result<(), Text> { ... };
  public shared(msg) func dispute(id : Nat) : async Result.Result<(), Text> { ... };
};

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
adapter = "ky00040/MotokoCoderV0"

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter)
model = model.merge_and_unload()

messages = [
    {"role": "system", "content": "You are a Motoko expert for the Internet Computer. Write clean, compilable Motoko code using mo:core imports. Use `persistent actor` for actors, Map.empty/add/get with compare functions."},
    {"role": "user", "content": "Write a Motoko persistent actor for a token balance ledger with transfer, mint, and balance query."}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.1, do_sample=True, top_p=0.95)

response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
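Chat models often wrap generated code in markdown fences, so before handing the response to moc it helps to strip them. A small helper (the function name is illustrative):

```python
import re

def extract_motoko(response: str) -> str:
    """Return the contents of the first ```motoko (or plain ```) fence,
    or the whole response stripped if no fence is present."""
    match = re.search(r"```(?:motoko)?\s*\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()
```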

System Prompt

For best results, use this system prompt:

You are a Motoko expert for the Internet Computer. Write clean, compilable Motoko code using mo:core imports. Use `persistent actor` for actors, Map.empty/add/get with compare functions.

Tips for Best Results

  1. Ask for full actors: "Write a Motoko persistent actor for X" works better than "Write a function that does X"
  2. Describe the types: "Store items with name, price, and category" helps the model define proper types
  3. Mention state: "Use Map for storage" guides the model toward correct patterns
  4. Temperature 0.1 for reliable code, 0.7 for creative variations

Hardware Requirements

This is a MoE (Mixture of Experts) model: 30B total parameters but only ~3B active per forward pass, so it needs far less compute per token than a dense 30B model, though the full weights must still fit in memory.

| Setup | VRAM | Precision | Works? |
|---|---|---|---|
| 1x RTX 5090 / A100 40GB | 32-40GB | INT8 | ✅ Recommended |
| 2x RTX 5090 / 1x A100 80GB | 64-80GB | bf16 | ✅ Full precision |
| 1x RTX 5080 / 4090 / 4080 | 16-24GB | AWQ 4-bit | ✅ Quantized |
| Apple M4 Pro/Max | 36-128GB unified | MLX / llama.cpp | ✅ |
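The VRAM column follows directly from bytes-per-parameter times total parameters (weights only; KV cache and activations add overhead, which is why the table's ranges sit above these floors). A back-of-envelope sketch, assuming ~30.5B total parameters:

```python
TOTAL_PARAMS = 30.5e9  # approximate MoE total; all experts stay resident in memory

def weight_gb(bytes_per_param: float) -> float:
    # Weight memory only, ignoring KV cache and activation overhead.
    return TOTAL_PARAMS * bytes_per_param / 1e9

print(f"bf16:  {weight_gb(2.0):.0f} GB")  # ~61 GB -> needs the 64-80GB tier
print(f"int8:  {weight_gb(1.0):.0f} GB")  # ~30 GB -> fits 32-40GB
print(f"4-bit: {weight_gb(0.5):.0f} GB")  # ~15 GB -> fits 16-24GB
```

Note that even though only ~3B parameters are active per token, all ~30B must be loaded, so memory requirements match a dense 30B model while per-token compute does not.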

Supported frameworks:

  • transformers + peft (recommended, tested)
  • vLLM for serving
  • llama.cpp / Ollama (with GGUF conversion)

Note: This model is NOT compatible with Unsloth due to MoE architecture limitations.

Known Limitations

  • Standalone function prompts without context may reference undefined types
  • Very long actors (200+ lines) may occasionally be truncated
  • String manipulation and regex-style operations are weak
  • HTTP outcall and inter-canister call patterns are limited
  • Sometimes uses OOP-style method calls (.toArray()) instead of module functions (Iter.toArray())

Model Details

  • Base model: Qwen3-Coder-30B-A3B-Instruct (MoE architecture, 30B total parameters, 3B active per forward pass)
  • Adapter type: LoRA with rsLoRA scaling
  • Adapter config: r=64, alpha=128
  • Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Trainable parameters: 53.5M (0.17% of total)
  • Compilation verification: All evaluation results verified against moc (Motoko compiler) from DFINITY SDK v0.31.0
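The 53.5M figure is consistent with the standard LoRA parameter formula: each adapted weight matrix of shape d_out × d_in contributes r × (d_in + d_out) trainable parameters (the A and B factor matrices). A rough back-of-envelope using the published Qwen3-30B-A3B attention dimensions (hidden size 2048, 32 query heads and 4 KV heads of head dim 128, 48 layers; treat these as assumptions) lands very close to the stated total from the attention projections alone:

```python
r = 64
hidden, head_dim = 2048, 128
q_heads, kv_heads = 32, 4
layers = 48

def lora_params(d_in: int, d_out: int, rank: int = r) -> int:
    # LoRA adds A (rank x d_in) and B (d_out x rank) to each adapted matrix.
    return rank * (d_in + d_out)

per_layer = (
    lora_params(hidden, q_heads * head_dim)     # q_proj
    + lora_params(hidden, kv_heads * head_dim)  # k_proj
    + lora_params(hidden, kv_heads * head_dim)  # v_proj
    + lora_params(q_heads * head_dim, hidden)   # o_proj
)
total = per_layer * layers
print(total)           # 53477376, i.e. ~53.5M
print(total / 30.5e9)  # ~0.00175, i.e. roughly 0.17% of total parameters
```

This sketch covers only the attention projections; how the gate/up/down targets map onto the MoE expert layers depends on the PEFT configuration, so treat the per-module accounting as illustrative.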

About Motoko

Motoko is a programming language designed specifically for the Internet Computer blockchain. Key features include:

  • Persistent actors: canister smart contracts with automatic state persistence
  • Async/await: native support for inter-canister communication
  • Strong type system: variants, options, and generics in the ML tradition
  • mo:core standard library: Map, Set, List, Array, Principal, Time, and more

MotokoCoderV0 uses the modern mo:core standard library (not the deprecated mo:base).

About

Mercatura Forum AI Lab and ICP Hub Egypt are building developer tooling and AI infrastructure for the Internet Computer ecosystem.

License

Apache 2.0, free for commercial use.
