A cutting-edge MIPS assembly simulator built on progressive 🚀, forward-thinking 🔥 technology.
Run the SpAIm executable and provide an assembled MIPS assembly program file:

```sh
java -jar ./spaim.jar program.asm.out
```

SpAIm will read the file and execute its instructions.
> [!TIP]
> You can generate the assembled output of a MIPS assembly program using a tool
> like Spim (`spim -assemble`).
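For instance, a minimal end-to-end run might look like this (a sketch that assumes `spim -assemble` writes the assembled program to standard output):

```sh
# Assemble with Spim, then execute the result under SpAIm.
spim -assemble program.asm > program.asm.out
java -jar ./spaim.jar program.asm.out
```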
SpAIm can be configured using a config file named `spaim.config.json` in the working directory.
The following keys can be used in the config file:

- `syscalls`: an array of custom `syscall` commands
- `ollama`: configuration for SpAIm's AI integration 🔥🔥🔥
Each custom `syscall` command should be an object with these keys:

- `code`: the integer value in `$v0` corresponding to this `syscall`
- `run`: the path to the executable to run
- `args`: an array of arguments to be passed
Here's an example of a complete config file:

```json
{
  "syscalls": [
    {
      "code": 100,
      "run": "shutdown",
      "args": [
        "now"
      ]
    },
    {
      "code": 25565,
      "run": "prismlauncher",
      "args": [
        "-l",
        "Minecraft 1.8.9",
        "-s",
        "mc.hypixel.net"
      ]
    }
  ],
  "ollama": {
    "model": "deepseek-r1:671b"
  }
}
```
This config file creates the `syscall`s `100` and `25565`. `syscall` code `100` shuts down the system (on a Unix system), and `syscall` code `25565` launches a Minecraft instance through Prism Launcher.
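As a sketch, a MIPS program could then trigger the custom shutdown `syscall` defined above:

```asm
# Hypothetical program exercising custom syscall 100 from the config above.
.text
main:
    li   $v0, 100    # $v0 selects the custom syscall code
    syscall          # SpAIm runs `shutdown now`
    li   $v0, 10     # built-in exit syscall (if the machine is still up)
    syscall
```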
> [!WARNING]
> If you define a `syscall` command that uses the code of a built-in `syscall`,
> the built-in `syscall` will take precedence.
There are some features that are not currently supported, including but not limited to:

- Accessing memory that isn't word aligned (see the sketch after this list)
- Assembly directives other than `.data`, `.text`, and `.word`
- Floating-point arithmetic
- Hi and Lo registers, `mult`, and `multu`
- Decent performance
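As a minimal sketch of the first limitation, a load whose address is not a multiple of 4 is unsupported:

```asm
# Hypothetical example of an unaligned access that SpAIm does not support.
.data
value: .word 0x12345678

.text
main:
    la   $t0, value
    lw   $t1, 1($t0)   # address is value+1, not word aligned: unsupported
    li   $v0, 10
    syscall
```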
However, due to SpAIm's AI integration 🔥🔥🔥, SpAIm actually doesn't have any limitations 🚀.
SpAIm's AI 🚀 integration 🚀 requires an Ollama model to be running 🔥. You also need to specify a model in the config 🔥.
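For example, with a local Ollama install, serving the example config's model might look like this (`ollama serve` and `ollama pull` are standard Ollama CLI commands; see the note below about why a local model may disappoint):

```sh
ollama serve &                 # start the Ollama API on localhost:11434
ollama pull deepseek-r1:671b   # fetch the model named in the example config
```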
When you use `syscall` 🚀 with `$v0 == 11434` 🔥 and `$a0` 🔥 containing the address of a string in memory 🔥, SpAIm activates its best-of-breed, bleeding-edge AI integration 🚀🚀🚀🚀 to evaluate your prompt 🔥🔥🔥🔥🔥.
For example 🔥:

```asm
.data
buffer: .space 200        # room for the prompt string

.text
main:
    la   $a0, buffer      # $a0 = address of the prompt buffer
    li   $v0, 8           # syscall 8: read the prompt string from stdin
    syscall
    li   $v0, 11434       # 🔥🔥🔥🔥🔥🔥🔥
    syscall               # 🚀🚀🚀🚀🚀🚀🚀🚀
    li   $v0, 1           # syscall 1: print the integer the model left in $a0
    syscall
    li   $v0, 10          # syscall 10: exit
    syscall
```
Run the program above 🔥🔥 and input this prompt 🚀🚀:

```
Evaluate 2+3 and put the answer in $a0.
```
This will ~~always~~ 🔥🔥🔥 print out the number `5` 🔥🔥🔥🔥🔥 ~~100% 💯 of 💯 the 💯 time~~ some of the time 🔥.
The AI integration 🚀🚀🚀🚀🚀 can also, in theory 🔥, and theoretically 🔥🔥 in practice 🔥🔥🔥, read the register values 🚀🚀 when evaluating your prompt 🚀🚀🚀🚀🚀:
```asm
.data
buffer: .space 200        # 🔥 room for the prompt string

.text
main:
    la   $a0, buffer      # 🔥 $a0 = address of the prompt buffer
    li   $v0, 8           # 🔥 syscall 8: read the prompt string from stdin
    syscall               # 🔥
    li   $t0, 3           # 🔥🔥🔥
    li   $t1, 4           # 🔥🔥🔥🔥
    li   $t2, 5           # 🔥🔥🔥🔥🔥
    li   $v0, 11434       # 🔥🔥🔥🚀🚀🚀🔥🔥🔥
    syscall               # 🚀🚀🚀🚀🚀🔥🔥🔥🔥🔥
    move $a0, $t3         # 🔥🔥🔥 print whatever the model left in $t3
    li   $v0, 1           # 🔥🔥 syscall 1: print integer in $a0
    syscall               # 🔥
    li   $v0, 10          # 🔥 syscall 10: exit
    syscall               # 🔥
```
Run the program 🔥 above 🔥 with this prompt 🚀:

```
Multiply the values of $t0, $t1, and $t2 together and put the result in $t3.
```

This has a pretty good chance 🔥🔥🔥, by my standards 🔥🔥, of printing the number `60` 🚀🚀🔥🔥🔥.
In the `ollama` object 🔥🔥 in the config file 🚀, you can specify these values 🔥🔥🔥🔥🔥🔥:

- `endpoint` 🔥: the URL 🚀🚀🚀 of the Ollama 🔥 chat endpoint 🔥🔥🔥 (default 🚀: `http://localhost:11434/api/chat`)
- `model` 🔥 (required): the name of the model to use 🔥🔥🔥 (example: `deepseek-r1:671b` 🚀🚀🚀🚀🚀🚀)
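Putting the two keys together, an `ollama` section might look like this (both values are just the default and the example given above):

```json
{
  "ollama": {
    "endpoint": "http://localhost:11434/api/chat",
    "model": "deepseek-r1:671b"
  }
}
```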
> [!NOTE]
> SpAIm's AI integration 🔥🔥🔥🔥 requires a powerful 🔥 LLM 🚀 to work properly. If the LLM 🚀
> you are using runs on your machine 🔥🔥, it is too small 🔥🔥🔥🔥.
To build the project from source, run one of the following commands.

Mac/Linux:

```sh
./gradlew assemble
```

Windows:

```bat
.\gradlew.bat assemble
```

The executable JAR should be written to `build/libs`.
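You can then run the freshly built JAR directly (assuming the artifact is named `spaim.jar`; check `build/libs` for the exact file name):

```sh
java -jar build/libs/spaim.jar program.asm.out
```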
Special thanks to NVIDIA in advance for sponsoring this project 🚀.
Special unthanks to MIPS Tech LLC for not sponsoring this project 🚀.
SpAIm is MIT licensed.