Andrew Jewell Sr.

HVAC Controls Technician · AI Systems Engineer

Current Mechanical · AutomataNexus LLC

About

Andrew Jewell Sr. is an HVAC Controls Technician (Journeyman) at Current Mechanical in Fort Wayne, Indiana, and the founder and AI Systems Engineer of AutomataNexus LLC. With nearly two decades of field experience in building automation and controls, he develops the NexusEdge platform — a Rust-native edge BAS stack deployed across 60+ units in 16+ commercial facilities — and the AxonML deep learning framework, a complete pure-Rust ML stack spanning 22 crates with native CUDA acceleration. His research spans nonlinear adaptive control for HVAC equipment, quantization-aware training for edge-deployable language models, multimodal biometric identity systems, and acoustic species identification. He holds a Journeyman credential through UA Local 166 and is a member of AI for Fort Wayne Community.

Affiliations

  • UA Local 166
  • AI for Fort Wayne Community
  • AutomataNexus LLC

Research Interests

  • Building automation systems
  • Adaptive nonlinear control
  • Pure-Rust ML systems
  • Edge inference & quantization-aware training
  • Multimodal biometric recognition
  • Acoustic species identification

Publications

Preprints · 2026

1. TMC: A Unified Adaptive Nonlinear Controller with Conditional Velocity Feedforward and Thermal-Coasting Brake Zone for Heterogeneous HVAC Equipment

Andrew Jewell Sr.

Preprint · 2026

We present TMC (Thermal Momentum Control), a nonlinear feedback controller for heterogeneous HVAC equipment. TMC combines a signed-demand PI core with wrong-direction-gated velocity feedforward, a thermal-coasting brake zone derived from first-order plant coasting distance, back-calculation anti-windup augmented by overshoot-triggered integrator reset, online adaptation of the thermal time constant via a rate-based midpoint-velocity estimator, and runtime Lyapunov monitoring. The controller is implemented as a single Rust core shared unchanged across ten HVAC equipment classes. We provide a closed-loop stability analysis via second-order ODE reduction, and validate against a Ziegler-Nichols-tuned PID baseline. On a cascaded two-stage thermal plant, TMC limits overshoot to 0.34°F where an aggressively tuned PID overshoots by 2.01°F, a 6× reduction. We further present field-validation results from a production deployment of 113 active HVAC units across six buildings via the NexusEdge control platform.

Keywords: HVAC control · adaptive control · nonlinear control · anti-windup · building automation
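Two of the mechanisms named in the abstract, the coasting-distance brake zone and back-calculation anti-windup around a signed-demand PI core, can be sketched in plain Rust. This is a minimal illustration under simplifying assumptions; all names, gains, and the folding of the integrator reset into the brake zone are hypothetical, not the TMC source:

```rust
// Illustrative sketch only: a simplified signed-demand PI step with a
// first-order coasting-distance brake zone and back-calculation anti-windup.
// Names (tau, kp, ki, kb) and structure are assumptions, not the TMC core.

pub struct PiState {
    pub integral: f64,
}

/// Coasting distance of a first-order plant: a process variable moving at
/// `rate` (deg/s) with time constant `tau` (s) keeps drifting roughly
/// `rate * tau` after the output is cut.
pub fn coasting_distance(rate: f64, tau: f64) -> f64 {
    rate * tau
}

/// One controller step. Returns the saturated output in [0, 1].
pub fn tmc_step(
    state: &mut PiState,
    error: f64, // setpoint - measurement (signed demand)
    rate: f64,  // d(measurement)/dt
    tau: f64,   // estimated thermal time constant
    dt: f64,
    kp: f64,
    ki: f64,
    kb: f64,    // back-calculation anti-windup gain
) -> f64 {
    // Brake zone: if the plant would coast past the setpoint on its own,
    // cut demand now. The integrator reset is folded in here for brevity.
    if error > 0.0 && coasting_distance(rate, tau) >= error {
        state.integral = 0.0;
        return 0.0;
    }
    let unsat = kp * error + state.integral;
    let out = unsat.clamp(0.0, 1.0);
    // Back-calculation: bleed the integrator by the saturation excess.
    state.integral += dt * (ki * error + kb * (out - unsat));
    out
}
```

The brake zone is what distinguishes this shape from a stock PI loop: demand is removed while the stored thermal momentum carries the plant to the setpoint, which is how a controller can hold overshoot well below an aggressively tuned PID.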

2. Trident: Training a 1.58-Bit Ternary Language Model From Scratch in Pure Rust

Andrew Jewell Sr.

Preprint · 2026

We present Trident, an end-to-end implementation of a BitNet b1.58 ternary language model in pure Rust, including the ternary linear layer, absmean weight quantization, Straight-Through Estimator backward pass, training loop, checkpoint serializer, and generation utility. The implementation contains no Python in the dependency tree and no C++ wrapper around libtorch; all components are built on the AxonML framework with native CUDA acceleration. We train a 616,448-parameter model on the complete works of Shakespeare (~5.4M characters) for 100 steps, observing cross-entropy loss decrease from 7.9 to 2.61 (perplexity 13.6) with ternary sparsity stabilizing near 25%. We report storage compression ratios of 9.71× to 11.99× for the Shakespeare configurations and 2.89× for a 162M-parameter variant. Released under MIT and Apache-2.0.

Keywords: quantization-aware training · BitNet · 1.58-bit ternary weights · Rust · edge inference
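The absmean quantization named in the abstract is the standard BitNet b1.58 scheme: scale each weight group by its mean absolute value, then round to {-1, 0, +1}. A self-contained Rust sketch (function name and shapes are illustrative, not Trident's API):

```rust
// Minimal sketch of BitNet b1.58 absmean ternary quantization.
// The signature is illustrative; Trident's actual layer API may differ.

/// Quantize a weight row to {-1, 0, +1} with an absmean scale.
/// Returns (ternary weights, scale); the dequantized value is q[i] * scale.
pub fn absmean_ternary(w: &[f32]) -> (Vec<i8>, f32) {
    let n = w.len().max(1) as f32;
    // Scale = mean absolute value; the epsilon guards all-zero rows.
    let scale = (w.iter().map(|x| x.abs()).sum::<f32>() / n).max(1e-8);
    let q = w
        .iter()
        .map(|x| (x / scale).round().clamp(-1.0, 1.0) as i8)
        .collect();
    (q, scale)
}
```

In quantization-aware training, the forward pass uses the ternary weights while the Straight-Through Estimator passes gradients to the underlying full-precision weights as if the rounding were the identity, which is what lets the loss keep decreasing despite the discrete forward path.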

3. AxonML: A Modular Pure-Rust Deep Learning Framework with CUDA Acceleration, Distributed Training, and End-to-End Production Tooling

Andrew Jewell Sr.

Preprint · 2026

We present AxonML (v0.6.1), an open-source deep learning framework implemented entirely in Rust, spanning 22 crates and 226,373 lines of code. The framework provides a complete ML stack: N-dimensional tensor operations with CUDA GPU acceleration via 15 PTX kernel modules, reverse-mode automatic differentiation with gradient checkpointing and automatic mixed precision, 41 neural network layer types including state-of-the-art mechanisms (rotary embeddings, grouped-query attention, mixture-of-experts, differential attention, ternary quantized layers), 5 optimizers, 7 learning rate schedulers, and 7 loss functions. AxonML includes nine LLM architectures, six quantization formats with 1.58-bit ternary support, distributed training (DDP, FSDP, pipeline parallelism), ONNX import/export, and a 33-subcommand CLI. The workspace contains 2,285 tests. Dual-licensed under MIT and Apache-2.0.

Keywords: deep learning framework · Rust · CUDA · automatic differentiation · distributed training · ONNX
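Reverse-mode automatic differentiation, the framework's core mechanism, can be illustrated with a toy scalar tape in plain Rust. This is a generic sketch of the technique, not AxonML's tensor or graph types:

```rust
// Toy scalar reverse-mode autodiff tape: record operations forward, then
// sweep the tape backward accumulating adjoints. Illustrative only; the
// real framework operates on N-dimensional tensors.

#[derive(Clone, Copy)]
enum Op {
    Leaf,
    Add(usize, usize),
    Mul(usize, usize),
}

struct Tape {
    vals: Vec<f64>,
    ops: Vec<Op>,
}

impl Tape {
    fn new() -> Self {
        Tape { vals: vec![], ops: vec![] }
    }
    fn leaf(&mut self, v: f64) -> usize {
        self.vals.push(v);
        self.ops.push(Op::Leaf);
        self.vals.len() - 1
    }
    fn add(&mut self, a: usize, b: usize) -> usize {
        self.vals.push(self.vals[a] + self.vals[b]);
        self.ops.push(Op::Add(a, b));
        self.vals.len() - 1
    }
    fn mul(&mut self, a: usize, b: usize) -> usize {
        self.vals.push(self.vals[a] * self.vals[b]);
        self.ops.push(Op::Mul(a, b));
        self.vals.len() - 1
    }
    /// Backward pass: seed d(out)/d(out) = 1, sweep the tape in reverse.
    fn grad(&self, out: usize) -> Vec<f64> {
        let mut g = vec![0.0; self.vals.len()];
        g[out] = 1.0;
        for i in (0..self.ops.len()).rev() {
            match self.ops[i] {
                Op::Leaf => {}
                Op::Add(a, b) => {
                    g[a] += g[i];
                    g[b] += g[i];
                }
                Op::Mul(a, b) => {
                    g[a] += g[i] * self.vals[b];
                    g[b] += g[i] * self.vals[a];
                }
            }
        }
        g
    }
}
```

For y = x·x + x at x = 3, the tape evaluates y = 12 and the reverse sweep yields dy/dx = 2x + 1 = 7, accumulating the three adjoint contributions to x in a single backward pass.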