Library

Primitives, backends, and environments designed for researchers.

All outputs come from a single CLI binary. Consistent, reproducible, scriptable.

Core primitives

AIXI Approximation

MC-AIXI with any library-supported rate backend (see Predictive backends below), including FAC-CTW. A programmable environment API is included, both natively and through a secure, high-performance virtual machine environment, so you can set up a custom environment inside an OS (e.g. Linux), running whatever programs you like, without bottlenecking performance. This allows direct application to security, testing, and experimental use cases without expensive hardware.

Entropy and rates

Shannon entropy (H), entropy rate, joint/conditional entropy, and cross-entropy. Compute marginal estimates (order 0) or sequential estimates at a chosen Markov order.
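The two estimate families can be sketched with plug-in frequency estimators (illustrative only; the library uses its own backends):

```python
from collections import Counter
from math import log2

def marginal_entropy(s: str) -> float:
    """Order-0 (marginal) plug-in entropy, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * log2(c / n) for c in counts.values())

def conditional_entropy(s: str, order: int) -> float:
    """Sequential plug-in estimate of H(X_t | previous `order` symbols)."""
    ctx_counts = Counter()
    pair_counts = Counter()
    for i in range(order, len(s)):
        ctx = s[i - order:i]
        ctx_counts[ctx] += 1
        pair_counts[(ctx, s[i])] += 1
    n = sum(pair_counts.values())
    h = 0.0
    for (ctx, sym), c in pair_counts.items():
        h -= (c / n) * log2(c / ctx_counts[ctx])
    return h

print(marginal_entropy("0101010101"))        # 1.0: both symbols equally frequent
print(conditional_entropy("0101010101", 1))  # 0.0: next symbol fully determined
```

The contrast shows why a Markov order matters: an alternating string looks maximally random at order 0 but perfectly predictable at order 1.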

Dependence and distance

Mutual information, intrinsic dependence, normalized entropy distance, and normalized compression distance (Vitanyi).
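The Vitanyi form of normalized compression distance can be sketched with zlib standing in for the library's compressor backends (zlib is only an illustration; the library uses backends such as ZPAQ):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, NCD(x,y) =
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    with zlib as a stand-in compressor."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"a" * 1000
b_ = b"a" * 500 + b"b" * 500
print(ncd(a, a))   # near 0: identical inputs share all structure
print(ncd(a, b_))  # larger: the inputs differ structurally
```

Real compressors are imperfect approximations of Kolmogorov complexity, which is why the library offers conservative variants for stability.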

Divergences

KL, Jensen-Shannon, total variation, and normalized Hellinger for discrete distributions.
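For discrete distributions given as probability vectors, the four divergences can be sketched directly (a minimal reference implementation, not the library's code):

```python
from math import log2, sqrt

def kl(p, q):
    """Kullback-Leibler divergence in bits (assumes q > 0 wherever p > 0)."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: symmetric, bounded by 1 bit."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def tv(p, q):
    """Total variation distance, in [0, 1]."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    """Hellinger distance, normalized to [0, 1]."""
    return sqrt(0.5 * sum((sqrt(pi) - sqrt(qi)) ** 2 for pi, qi in zip(p, q)))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl(p, q), js(p, q), tv(p, q), hellinger(p, q))
```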

Transform effort

Normalized Transform Effort (NTE) and Resistance to Transformation for structure under noise or perturbation.

Predictive backends

ROSAPlus

Rapid Online Suffix Automaton with Witten-Bell smoothing; the default backend for entropy rate. Designed for fast online learning, with memory tuning and disk caching.

CTW / AC-CTW

Context Tree Weighting with the KT estimator. The CLI accepts ctw, plus the alias ac-ctw for action-conditional CTW in agent contexts.
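The KT (Krichevsky-Trofimov) estimator used at each context node can be sketched as follows (illustrative, not the crate's implementation):

```python
def kt_sequence_probability(bits):
    """KT estimator: sequential probability of a binary string, where
    the next bit's probability is (count + 1/2) / (total + 1)."""
    zeros = ones = 0
    prob = 1.0
    for bit in bits:
        if bit == 0:
            prob *= (zeros + 0.5) / (zeros + ones + 1)
            zeros += 1
        else:
            prob *= (ones + 0.5) / (zeros + ones + 1)
            ones += 1
    return prob

print(kt_sequence_probability([0, 0, 0, 0]))  # 0.2734375: skewed strings score high
print(kt_sequence_probability([0, 1, 0, 1]))  # 0.0234375: balanced strings score lower
```

CTW then mixes these node estimates over all tree depths, which is what gives it its strong redundancy guarantees.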

FAC-CTW

Factorized action-conditional CTW. Uses one context tree per percept bit with a shared history buffer, allowing action/percept conditioning without duplicating histories.
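The factorization can be illustrated with frequency counters standing in for the per-bit context trees (a hypothetical sketch; the real backend uses CTW per percept bit):

```python
from collections import defaultdict

class FactoredBitModel:
    """One predictor per percept bit, all reading one shared history buffer.
    Laplace-smoothed counters stand in for the per-bit context trees."""
    def __init__(self, percept_bits, context_len):
        self.k = context_len
        self.history = []                           # shared action/percept history
        self.counts = [defaultdict(lambda: [1, 1])  # [zeros, ones] per context
                       for _ in range(percept_bits)]

    def predict_bit(self, i):
        """P(percept bit i = 1 | context); call after pushing the action."""
        ctx = tuple(self.history[-self.k:])
        c0, c1 = self.counts[i][ctx]
        return c1 / (c0 + c1)

    def update(self, action_bits, percept_bits):
        self.history.extend(action_bits)            # condition on the action...
        ctx = tuple(self.history[-self.k:])
        for i, b in enumerate(percept_bits):
            self.counts[i][ctx][b] += 1             # ...then learn each percept bit
        self.history.extend(percept_bits)

m = FactoredBitModel(percept_bits=2, context_len=3)
for _ in range(50):
    m.update([1], [1, 0])        # percept (1, 0) always follows action 1
m.history.extend([1])            # push the next action, then query each bit
print(m.predict_bit(0), m.predict_bit(1))
```

The point of the shared buffer is visible here: both bit models condition on the same action/percept history without each keeping its own copy.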

RWKV7 (via rwkvzip)

InfoTheory includes its own RWKV7 inference implementation (in the workspace crate rwkvzip). Inference is available by default and is used as an optional rate backend / world model for MC-AIXI.

Training is provided by the rwkvzip project itself (not compiled into the infotheory crate). In this workspace, infotheory depends on rwkvzip with training disabled (inference-only). The rwkvzip CLI exposes a training command behind its training feature and will refuse to train when built without it.

Beyond being a rate backend, rwkvzip is also a byte-level neural compressor: it can compress/decompress using arithmetic coding or rANS. This is not exposed in the Workspace UI due to compute constraints, but it is available locally both via the rwkvzip binary and as an NCD compressor backend inside infotheory (CLI: --ncd-backend rwkv7, Rust API: NcdBackend::Rwkv7). For RWKV-based NCD, --method selects the entropy coder (ac or rans).

ZPAQ

Compression-based distance estimates with conservative variants for stability. ZPAQ is also available as a rate backend (streaming for standard levels, batch for arbitrary methods), in addition to NCD. ZPAQ is a powerful, highly configurable compressor that gives you fine-grained control over the compression algorithm.

Rate mixtures

Mix any supported rate models using Bayes, fading Bayes, switching, or MDL. The CLI accepts --rate-backend mixture with --method pointing to a JSON mixture spec. Mixtures can be nested to build hierarchical ensembles.
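The plain Bayes case can be sketched as Bayesian model averaging over predictors (illustrative only; the library's mixtures are configured via the JSON spec):

```python
class BayesMixture:
    """Bayesian model averaging over sequence predictors: each model's
    weight is its posterior given the symbols seen so far."""
    def __init__(self, models, prior=None):
        self.models = models
        self.weights = prior or [1 / len(models)] * len(models)

    def predict(self, context, symbol):
        """Mixture probability of `symbol` after `context`."""
        return sum(w * m(context, symbol)
                   for w, m in zip(self.weights, self.models))

    def update(self, context, symbol):
        """Posterior update: reweight each model by its likelihood."""
        likes = [m(context, symbol) for m in self.models]
        total = sum(w * l for w, l in zip(self.weights, likes))
        self.weights = [w * l / total for w, l in zip(self.weights, likes)]

# Two toy models: one strongly expects '1', one is uniform.
biased  = lambda ctx, s: 0.9 if s == "1" else 0.1
uniform = lambda ctx, s: 0.5

mix = BayesMixture([biased, uniform])
for ch in "11111111":
    mix.update("", ch)
print(mix.weights)           # mass shifts toward the model that predicted well
print(mix.predict("", "1"))  # mixture prediction now tracks the better model
```

Fading Bayes and switching modify this weight update (e.g. discounting old evidence or allowing the best model to change over time); MDL instead commits to the single shortest-description model.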

Bivariate by default, multivariate by batch

The CLI primitives operate on one or two inputs (bivariate). For multivariate analyses, use Batch JSONL to run many operations with different parameters in a single request. This makes the CLI usable as a kernel: you can run several different operations at once and receive structured results together.

{"op":"metrics","text":"hello world","max_order":8}
{"op":"cross_entropy","text_x":"0101010101","text_y":"0011001100","max_order":8}
{"op":"ncd","text1":"aaaaaa","text2":"aaabaa","method":"5","variant":"vitanyi"}
{"h0":2.845351,"h_rate":1.831446,"id":0.356338,"len":11}
{"cross_entropy":2.369176}
{"ncd":0.009524}

For specialized multivariate pipelines, the Rust API is the intended path: an optimized high-level multivariate implementation cannot be universally fast, so the library exposes low-level hooks for manual performance tuning.
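Because batch requests are plain JSON lines, a driver script can build and parse them with the standard library alone (the op names and fields below are the ones shown in the examples above):

```python
import json

# Build request lines for a batch run.
requests = [
    {"op": "metrics", "text": "hello world", "max_order": 8},
    {"op": "ncd", "text1": "aaaaaa", "text2": "aaabaa",
     "method": "5", "variant": "vitanyi"},
]
payload = "\n".join(json.dumps(r) for r in requests)

# Responses come back as one JSON object per line, in request order.
sample_response = '{"h0":2.845351,"h_rate":1.831446,"id":0.356338,"len":11}'
result = json.loads(sample_response)
print(result["h0"], result["len"])
```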

Validation via Lean testing (ite-bench)

The Lean 4 benchmark runner in ./ite-bench validates estimators against oracle truths and formal properties. It generates synthetic data, runs the Rust estimators, and checks identities and inequalities such as non-negativity and bounds.

Validated quantities

Shannon entropy, mutual information, KL divergence, JS divergence, conditional entropy, joint entropy, and cross-entropy.

Methodology

Lean generates oracle distributions (uniform, independent, etc.), the Rust CLI computes estimates, and tolerances and identities are verified automatically.
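The identity checks are of the kind sketched below, here in Python with an exact discrete distribution (the actual runner uses Lean 4 oracles against the Rust estimators):

```python
from math import log2

def entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# Joint distribution over (X, Y) as a probability matrix: two independent
# fair bits, so every identity below should hold exactly.
joint = [[0.25, 0.25],
         [0.25, 0.25]]

px = [sum(row) for row in joint]
py = [sum(col) for col in zip(*joint)]
h_xy = entropy([p for row in joint for p in row])
h_x, h_y = entropy(px), entropy(py)
mi = h_x + h_y - h_xy                    # I(X;Y) = H(X) + H(Y) - H(X,Y)

assert abs(h_xy - (h_x + h_y)) < 1e-12   # independence: H(X,Y) = H(X) + H(Y)
assert abs(mi) < 1e-12                   # so mutual information is zero
assert mi >= -1e-12                      # non-negativity, up to tolerance
print("identities hold")
```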

AIXI capability (agent + environments)

MC-AIXI planning

The agent is a full MC-AIXI implementation with MCTS planning. It simulates rollouts against an internal world model, supports configurable horizon, discounting, and exploration/exploitation ratio, and can run separate learn/eval phases.
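A single planning rollout reduces to sampling from the world model and accumulating discounted reward; MCTS averages many such rollouts per candidate action. A toy sketch (not the agent's planner; the model and policy here are placeholders):

```python
import random

def rollout(model, history, horizon, discount, actions, rng):
    """Sample one trajectory from the world model and return its
    discounted reward sum."""
    total, scale = 0.0, 1.0
    for _ in range(horizon):
        action = rng.choice(actions)          # random rollout policy
        percept, reward = model(history, action, rng)
        history = history + [(action, percept)]
        total += scale * reward
        scale *= discount
    return total

# Toy world model: action 1 yields reward 1, action 0 yields reward 0.
def toy_model(history, action, rng):
    return (action, float(action))

rng = random.Random(0)
returns = [rollout(toy_model, [], horizon=5, discount=0.9,
                   actions=[0, 1], rng=rng)
           for _ in range(200)]
print(sum(returns) / len(returns))  # near 2.05, the expected discounted return
```

Horizon, discount, and the rollout policy map directly onto the agent's configurable planning parameters.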

World models (rate backends)

The AIXI world model is backend-agnostic: use FAC-CTW (default), AC-CTW (single-tree), ROSAPlus, or RWKV7. FAC-CTW is recommended for action-conditional modeling. RWKV7 uses a provided model path for deep sequence prediction.

Observation and reward encoding

Configurable observation bits, reward bits, and stream length. Observation keying supports full-stream or reduced-key modes (first/last/hash) for tree branching. Reward encoding supports offsets for signed ranges, with validation of bounds.
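Offset encoding of a signed reward into an unsigned bit field, with bounds validation, can be sketched as follows (illustrative; the concrete bit widths and offsets are agent configuration):

```python
def encode_reward(reward, bits, offset):
    """Map a signed reward into an unsigned `bits`-wide field by adding
    `offset`, validating that the shifted value fits the field."""
    value = reward + offset
    if not 0 <= value < (1 << bits):
        raise ValueError(f"reward {reward} out of range for {bits} bits "
                         f"with offset {offset}")
    return [(value >> i) & 1 for i in reversed(range(bits))]

def decode_reward(bit_list, offset):
    value = 0
    for b in bit_list:
        value = (value << 1) | b
    return value - offset

# Rewards in [-10, 10] fit in 5 bits with offset 10 (range 0..20 < 32).
bits = encode_reward(-3, bits=5, offset=10)
print(bits, decode_reward(bits, offset=10))  # [0, 0, 1, 1, 1] -> -3
```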

Built-in environments

Standard environments include coin flip, CTW test, extended tiger, tic-tac-toe, biased rock-paper-scissors, and Kuhn poker. Each environment exposes bit-widths, rewards, and action space for correct agent configuration.

VM-backed environments (nyx-lite)

The VM mode uses Firecracker with nyx-lite snapshotting for high-frequency resets (Linux/KVM; performance depends on hardware and guest behavior). Communication is over shared memory plus hypercalls, with a structured wire protocol (OBS/REW/DONE framing) and optional serial output. It supports output hashing, raw output streams, or shared memory observations.

Action sources and filtering

Actions can be literal payloads or fuzz-mutated sequences. Optional action filters allow information-theoretic constraints (entropy thresholds, intrinsic dependence, novelty vs prior) and reject rewards for filtered actions.
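An entropy-threshold filter of this kind can be sketched as (the threshold value and scoring here are hypothetical; the library exposes such filters as configuration):

```python
from collections import Counter
from math import log2

def byte_entropy(payload: bytes) -> float:
    """Order-0 entropy of a byte string, in bits per byte."""
    counts = Counter(payload)
    n = len(payload)
    return -sum(c / n * log2(c / n) for c in counts.values())

def entropy_filter(actions, min_entropy):
    """Keep only candidate action payloads whose entropy clears the
    threshold; filtered actions would also have their reward rejected."""
    return [a for a in actions if byte_entropy(a) >= min_entropy]

candidates = [b"AAAAAAAA", b"ABABABAB", b"A1b2C3d4"]
kept = entropy_filter(candidates, min_entropy=1.5)
print(kept)  # only the payload with enough symbol diversity survives
```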

Rewards, shaping, and traces

Rewards can come from guest responses, pattern matching, or custom callbacks. Optional shaping includes entropy reduction vs baseline or trace-entropy scaling. Trace collection supports shared-memory tracing with configurable limits.

Security and features

Securely design AIXI experiments against real, existing programs via a VM environment. VM environments require the vm feature and provide a sandboxed execution model (Firecracker + KVM via nyx-lite). Firecracker primarily supports Linux guests, though other guests may work.

Data ingestion and preprocessing

The CLI accepts text, uploads, and URLs, and all operations ultimately work on files. That means any preprocessing you want can be done upstream, as long as it can be represented in a file or byte stream. This keeps the system compatible with existing ETL pipelines and custom encodings.