ORCA Lab
Open governance research. Free tools. Real academic proof.
Every university in the world is using AI for research. Almost none of them can explain how it was used to a reviewer, an ethics board, or a grant officer. ORCA Lab exists to close that gap — free governed tools built for people who have to show their work.
Phase-locked execution, full traceability, reversibility tiers, and nightly self-audit loops give labs and students the missing layer that makes AI safe, reproducible, and IRB-ready.
New to the stack? OpenClaw is the open runtime. ORCA is the governance on top. The free pack below is Markdown + config for that runtime. If you don't use OpenClaw yet, start here, then follow INSTALL.md after you unzip.
Free for faculty, labs & grad programs
ORCA Research Scholar — a ready-to-run governed OpenClaw agent
A full professor-scoped research collaborator: literature synthesis, grant and paper scaffolding, reproducibility planning, and labeled peer-review simulation — wrapped in ORCA phase-locking and traceability. No signup: unzip, personalize IDENTITY.md, follow INSTALL.md, your keys and hardware.
Full governed research agent pack for OpenClaw — persona files, install guide, portability (prompt-only fallback), terms, compiled profile, and example snippets. Customize IDENTITY.md, run openclaw doctor. Assistive only; not IRB or legal advice.
What you're downloading
A complete governed research agent
The ZIP is a working OpenClaw persona + ORCA governance bundle tuned for faculty workload: deep literature passes when your tools allow, structured writing, grant blocks, experiment and dataset plans, and honest uncertainty flags — with logs and tiers you can point to when reviewers ask how AI-assisted work was produced.
- With OpenClaw installed, this folder becomes your agent workspace — not a single chat prompt. You get SOUL, IDENTITY, MEMORY, HEARTBEAT, SAFETY, phase-locked ORCA files (PHASES, RHYTHMS, governance), a compiled profile, and an `openclaw.template.json` you merge into your install.
- Built for professors and lab leads in any field: you edit one file (`IDENTITY.md` — name, department, institution, research focus), run `openclaw doctor`, and start. Typical first setup is on the order of 10–20 minutes if OpenClaw is new on the machine (`INSTALL.md` is in the ZIP).
- Designed outputs: structured literature reviews with gap analysis; LaTeX-ready paper sections; grant and experiment-design scaffolding; reproducible dataset and analysis plans; citation packages in styles you specify (APA, Chicago, Vancouver, etc.).
- Peer-review simulation: three simulated reviewer voices with section-level feedback — explicitly labeled as simulation, not a substitute for human peer review.
- Governance you can describe in a methods section: traceability anchors, self-audit patterns, professor-scoped reversibility tiers, and halt-on-deviation rules — assistive only; you retain IRB, legal, and submission authority.
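The `openclaw.template.json` merge mentioned above can be pictured as a shallow overlay onto your existing install config. This is a minimal sketch, not OpenClaw's actual merge logic: the function name and the config paths in the comments are hypothetical, and your install layout may differ — `INSTALL.md` in the ZIP is authoritative.

```python
import json
from pathlib import Path

def merge_config(install_cfg: dict, template_cfg: dict) -> dict:
    """Overlay the pack's template onto an existing config dict.

    Template keys win; nested dicts are merged one level deep so the
    pack's persona settings don't clobber unrelated install sections.
    """
    merged = dict(install_cfg)
    for key, value in template_cfg.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged

# Hypothetical paths -- check INSTALL.md for where your install keeps its config:
# install = json.loads(Path("~/.openclaw/config.json").expanduser().read_text())
# template = json.loads(Path("openclaw.template.json").read_text())
# merged = merge_config(install, template)
```

After merging, run `openclaw doctor` as the bullets describe to confirm the result is valid.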
Why Governance Matters for Research
When AI sits inside literature review, coding, analysis, or student mentoring, the risk isn't only accuracy — it's defensibility. Outputs that cannot be traced, prompts that are not retained, and workflows that differ from one session to the next are red flags for IRB reviewers, grant officers, and anyone trying to replicate your methods.
Governance closes that gap: every material step is phase-locked, logged, and reversible within tiers you define — so AI becomes something you can describe in a protocol, show an ethics board, and hand to a student without losing oversight.
- IRB & ethics reviews. Reviewers increasingly ask how AI-assisted work was produced — not just what the output says.
- Reproducibility & methods. Publication-grade science needs workflows and receipts, not one-off chats that vanish when the session ends.
- Grants & oversight. Funders want transparency: who acted, under what constraints, and how risk was bounded.
TMU CS197 pilot — Spring 2026
Pilot approved for Spring 2026; launch imminent. What follows is the pre-launch configuration — curriculum, governance templates, and exportable materials — so reviewers and partner labs can inspect the methodology before Week 1 runs.
CS197 Research Training Pod
Toronto Metropolitan University · Computer Science Department
Asynchronous research training pod for eight undergraduates with no prior research experience, adapting the Stanford CS197 curriculum structure (Prof. Michael Bernstein) into a governed, asynchronous format. Two governed personas (ARIA, SENTINEL) on the full ApexORCA stack — per-student memory isolation, nightly HEARTBEAT, Trust Meter™, and threshold-only faculty flags. Configured for Spring 2026; Week 1 begins when the instructor runs setup.sh.
- Turnkey ZIP delivered to the faculty member (not a DIY build)
- Per-student memory isolation + nightly HEARTBEAT audit
- Threshold-only faculty flags (noise-free supervision)
- Exact ORCA templates used in production — exportable for IRB
This pod extends the Stanford CS197 curriculum structure (Prof. Michael Bernstein and colleagues) into an asynchronous, governed-AI-assisted format. We credit the original framework explicitly; the governance layer, personas, and deployment pattern are ours.
Additional pilots are queued as partnerships close. Each new case study will ship with the same package: downloadable logs, governance configs, and ORCA templates so you can replicate or fork the methodology. apex@apexorca.io
Download Pilot Templates & Assets
Share your deployment
If you run a governed ORCA deployment in teaching or research, you can add it here as a public case note — short description, outcomes, and optional assets. Featured submissions help other institutions see what a reproducible, auditable rollout actually looks like (and give your lab a citable reference). Nothing below obligates you to partner with us or build anything jointly; it is voluntary documentation for the community hub.
What ORCA Delivers for Your Lab
Concrete governance mechanics you can point to in documentation — not a black box that only the model vendor understands.
Ethics Board Ready
Full audit trails, vetoes, and reversibility tiers give you exportable evidence for IRB and funders.
Reproducible by Design
Phase-locked runs mean the same workflow, the same receipts — no mystery prompts.
Scales Without Burnout
One governed pod mentors dozens of students asynchronously; faculty see flags only when it matters.
Model-Agnostic & Grant Edge
Bring your own keys — Claude, Grok, local models. ORCA logs reviewers can actually inspect.
IRB-Compliant & Safe
Threshold-only faculty flags and per-student memory isolation keep every deployment auditable.
Templates & OpenClaw
Standard ORCA templates and OpenClaw workflows — stand up a governed agent your lab can inspect and extend.
Value for Faculty, Labs & Students
Faculty & research groups
- Scale training without scaling your time — async Socratic guidance when students need it, without you in every thread.
- IRB-aligned trails: veto tiers, reversibility, exports you can show a board or attach to a protocol.
- Publishable, defensible methodology — not a black-box pilot you cannot describe in a paper.
- Stronger grant narratives as agencies ask how AI transparency and governance are handled.
Students & researchers
- Guidance that leads to discovery — not spoon-fed answers — with boundaries that keep exploration productive.
- Persistent memory across sessions; no Groundhog Day re-explaining of context every time you return.
- Safe to explore: governance cuts unproductive or high-risk paths early, with everything logged.
- Exceptional work can surface with clear, logged signals faculty can notice without micromanaging.
Free Academic Resources
Three paths, least friction first. Research Scholar and the Foundation Kit are already one click — no form, no queue (see Research Scholar · free pack above and the download row here). If you want the same public links mirrored to a verified institutional inbox (.edu, .ac.uk, .edu.au, recognized .ac.*, or known lab domains), use the form — automated once email delivery is live; until then, the page hands you the identical URLs immediately after you submit. The full institutional Playbook stays a short human thread: volume, updates, and terms do not belong on an anonymous CDN link.
Instant — no form
Foundation Kit — seven core governance files (Markdown; PDF when CI has built it). No card. No inbox required.
Research Scholar ZIP: jump to free pack ↑. General questions: info@apexorca.io.
Institutional inbox (automated)
Same three public URLs as the left column — we only verify the hostname before firing (or revealing) the packet. No extra "wait for a human" step for those files.
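The hostname verification described above can be approximated with a simple suffix allow-list. This is an illustrative sketch only — `is_academic_host` is a hypothetical name, the suffix list is incomplete, and the actual check also covers known lab domains not shown here:

```python
# Recognized suffixes from the section above -- a partial, illustrative list.
ACADEMIC_SUFFIXES = (".edu", ".ac.uk", ".edu.au")

def is_academic_host(email: str) -> bool:
    """Return True if the email's hostname looks institutional.

    Checks only the hostname; it does not prove the mailbox exists
    or that the sender controls it.
    """
    host = email.rsplit("@", 1)[-1].lower()
    parts = host.split(".")
    # Treat ".ac.*" country domains (e.g. lab.ac.jp) as a family.
    if len(parts) >= 2 and parts[-2] == "ac":
        return True
    return host.endswith(ACADEMIC_SUFFIXES)
```

A suffix check like this is deliberately conservative: it may miss legitimate lab domains (hence the whitelist-request email below), but it never blocks the self-serve public links.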
Full institutional Playbook (human)
The field manual plus templates for formal lab use — one email from your institutional address with your lab or department; we reply with next steps. Same inbox for Foundation Kit follow-ups if you need help after downloading.
Paid retail Playbook ($39) is on the Playbook page if you do not need an institutional grant. Explore products: Marketplace · The Wild.
Hit a snag? Campus mail filters, a download that 404s, or a hostname we should whitelist — email apex@apexorca.io (research hub) or info@apexorca.io (general). The self-serve paths above stay the default; this is backup when something is wrong.
Research Scholar vs. the default stack
Compared honestly on the dimensions that matter for research workflows. Not a feature war — a methodology gap.
| Dimension | Research Scholar (free) | ChatGPT Plus / Claude Pro |
|---|---|---|
| Traceability | Every material step logged to disk; exportable for IRB and reviewers | Chat history per session; no structured reasoning trail |
| Reproducibility | Phase-locked workflows; same inputs → same receipts | Drifts across sessions and silent model updates |
| Drift control | Self-audit ≥0.99 threshold + Tier-3 veto | None visible to the user |
| Data locality | Self-host on your machine / lab server; BYO keys | Vendor cloud; subject to vendor retention policy |
| Model-agnostic | Claude, Gemini, local Ollama, OpenRouter, DeepSeek — swap per task | Single-vendor model family |
| Cost (per outcome) | Typically 3–5× fewer tokens; small governed models beat big ungoverned ones | $20/mo + hidden token overhead |
| License | Free for academic use; no redistribution, no resale | Vendor ToS |
Claims about relative cost and token use reference the methodology published at /efficiency. If your workflow shows different numbers, we want to see them.
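The drift-control row above can be pictured as a guard that runs between phases. This is purely illustrative — ORCA's actual audit and veto logic ships in the pack, and `gate_step` and `self_audit_score` are hypothetical names; here the Tier-3 veto is modeled as a raised exception:

```python
SELF_AUDIT_THRESHOLD = 0.99  # the threshold named in the comparison table

def gate_step(step_output: str, self_audit_score: float) -> str:
    """Halt-on-deviation guard: pass output through only when the
    self-audit score meets the threshold; otherwise halt and escalate
    (modeled here as an exception a supervisor would catch and log)."""
    if self_audit_score < SELF_AUDIT_THRESHOLD:
        raise RuntimeError(
            f"halt-on-deviation: score {self_audit_score:.3f} "
            f"below {SELF_AUDIT_THRESHOLD}")
    return step_output
```

The point of the mechanic is that a low-confidence step stops the run rather than silently continuing — which is what makes the workflow describable in a methods section.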
The Paper
Orcinus orca: A Biologically-Grounded Governance Architecture for Autonomous AI Agents
B.W. Moore · Independent Researcher, Toronto, Canada
A design & methodology paper — the full technical framing of ORCA as a governance middleware mapped from orca behavioral ecology onto engineered controls: phase-locked reasoning, reversibility tiers, self-audit thresholds, traceability anchors, and pod-level coordination with a veto seat. It specifies the architecture, publishes the locked evaluation instrument, reports preliminary observational findings from the ApexORCA deployment, and compares ORCA against Constitutional AI, Reflexion, Voyager, LangGraph, CrewAI, and AutoGen in a feature matrix. The full controlled within-model efficiency study, a threshold sensitivity sweep, operator-approval statistics, an adversarial-robustness evaluation, and results from the TMU CS197 Spring 2026 pilot are the subject of a companion empirical paper now pre-registered and in preparation.
The PDF is the canonical version in the meantime; the arXiv DOI will be posted here and on /natures-blueprint the day it goes live. A one-page summary and a 90-second overview video are planned; when published, their links will appear below this paragraph.
No email gate on the paper. The Research Scholar ZIP above is also a direct download — we do not collect email for either. The full institutional Playbook is by email only (§05). Custom deployments and scoping go through Apex Agents™ on the Marketplace.
Stuck on a first use-case before you wire anything? Sixty concrete agent ideas — sorted by job function with honest ROI signals, not brainstorm theater.
License
License: ORCA Research Scholar is proprietary software provided free for academic use. You may install and run it inside your institution. You may modify it for your own use and cite it in publications. You may not redistribute it or resell it, with or without modification, in whole or in part. Partner or commercial deployments are handled via Apex Agents™. Full terms ship inside the ZIP (TERMS.md).