Camel is a graph-based, multi-stage, and type-driven domain-specific language (DSL) designed to bridge the gap between AI research and production deployment. It combines the elegance of functional programming with the expressiveness of declarative programming. It provides born-async semantics and highly customizable graph manipulations, enabling developers to write high-level code that compiles to near-native performance.
Modern AI development faces a dilemma:
- **Semantic Fragmentation**: JIT tracing (e.g., in TensorFlow) creates a gap between code intent and the executed graph, forcing debugging workarounds and control-flow compromises.
- **Cognitive Overload**: Framework-specific concepts (gradient tapes, graph phases, staged execution) demand expertise orthogonal to ML theory.
- **Prototype-Deployment Divide**: Python's dynamism precludes deep optimization, while static languages lose high-level ML abstractions.
Camel solves this by:
- **First-Class Computation Graphs**: Native graph primitives replace fragile tracing; code directly defines compiler-optimized DAGs.
- **Phase-Polymorphic Semantics**: A single codebase executes interactively (Python-like immediacy) or compiles to optimized binaries (C++-level performance).
- **Type-Driven Automation**: Tensor shapes and types statically guide memory planning, operator fusion, and parallelization, with zero manual tuning.
```
// Build a graph with intuitive operators
func forward(x: Tensor) {
    let layer1 = dense<w1, b1>..relu..dropout
    let layer2 = dense<w2, b2>..relu..dropout
    let layer3 = dense<w3, b3>..softmax
    return x->layer1->layer2->layer3
}
```
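The `..` (composition) and `->` (application) operators above can be read as ordinary left-to-right function composition. A rough Python analogy (the `scale` and `relu` helpers are illustrative stand-ins, not Camel APIs):

```python
from functools import reduce

def compose(*fns):
    """Left-to-right composition, loosely like Camel's `..` operator."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

def relu(v):
    # Element-wise max(0, x) on a plain list
    return [max(0.0, e) for e in v]

def scale(factor):
    # Returns a "layer" that multiplies every element by `factor`
    return lambda v: [factor * e for e in v]

layer = compose(scale(2.0), relu)  # roughly `scale<2.0>..relu`
print(layer([-1.0, 3.0]))          # `x->layer`; prints [0.0, 6.0]
```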
```
// Compile-time graph optimization
inner macro func apply_gradients(g: functor): functor {
    // implemented internally by the compiler: automatically
    // adds the back-propagation part of the given graph
}

// usage
let train = apply_gradients(forward<w, b>..loss)
```
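To see what a graph-transforming functor like `apply_gradients` does conceptually, here is a hedged Python sketch: a higher-order function that wraps a scalar loss function so each call also yields a gradient. The central-finite-differences strategy is purely illustrative, not Camel's actual mechanism:

```python
def with_gradient(f, eps=1e-6):
    """Wrap a scalar function f(w) so it also returns df/dw.

    Loosely analogous to `apply_gradients` augmenting a forward
    graph with its back-propagation part; uses central finite
    differences for illustration only."""
    def wrapped(w):
        grad = (f(w + eps) - f(w - eps)) / (2 * eps)
        return f(w), grad
    return wrapped

loss = lambda w: (w - 3.0) ** 2
train_step = with_gradient(loss)
value, grad = train_step(1.0)  # value = 4.0, grad ≈ -4.0
```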
```
with <var w: Tensor, var b: Tensor, lr: float>
sync func train(x: Tensor, y: Tensor): Tensor {
    let y_hat = forward<w, b>(x)
    let pl_py = y_hat - y
    wait b = b - lr * pl_py
    wait w = w - lr * pl_py * x
    return loss(y_hat, y)
}
```
```sh
# Install via pip (Python toolchain required)
pip install camel-lang
```
```
// hello.cml
func main() {
    print(`Hello, ${os::user()}!`)
}
```
Run it:
```sh
camel hello.cml
```
- **Python-like prototyping**: Build graphs using intuitive operators and natural syntax
- **What-you-see-is-what-runs**: Code is the computation graph; no JIT magic or hidden control flow
- **Self-documenting architecture**: Explicit graph structure reduces legacy code complexity
- **Compile-time optimization**: Static graph analysis enables memory reuse and operator fusion
- **Single-source deployment**: Write once, run optimized, from server CPUs to edge TPUs without code changes
- **Maintainability by design**: Strong typing eliminates tensor shape errors, while explicit graph structure reduces technical debt
- **No more tracing hacks**: First-class graph IR captures user intent directly through language semantics
- **Pluggable optimization**: Extend compiler passes via composable functors instead of fragile AST manipulation
- **Unified backend support**: Generate optimized code for multiple targets from a shared graph representation
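As a rough illustration of what operator fusion buys (a Python stand-in, not Camel compiler output): an unfused pipeline materializes one temporary per operator, while the fused version computes the same result in a single pass.

```python
def unfused(xs, scale, shift):
    # Three passes over the data, two temporary lists
    t1 = [x * scale for x in xs]
    t2 = [t + shift for t in t1]
    return [max(0.0, t) for t in t2]  # relu

def fused(xs, scale, shift):
    # One pass, no temporaries: what a fusing compiler aims to emit
    return [max(0.0, x * scale + shift) for x in xs]

assert unfused([-1.0, 2.0], 2.0, 1.0) == fused([-1.0, 2.0], 2.0, 1.0)
```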
- Build Yourself - Environment setup and installation guide
- [WIP] Documentation - Language specs and API reference
- [WIP] Examples - From MNIST training to distributed pipelines
- [WIP] Whitepaper - Deep dive into the compiler architecture
We welcome contributions! Check out our:
- [WIP] Issue Tracker - Good first issues labeled `beginner-friendly`
- [WIP] Roadmap - Planned features like quantum backend support
- [WIP] Style Guide - Code formatting and design patterns
Camel is open-source under the MIT License.
Join the Herd 🌍🐪 – Build the future of AI infrastructure with us!
Enjoy! 🐪Camel Riders!