Experimenting with AI models #127
pwgit-create
started this conversation in
General
Replies: 1 comment
Here are suggestions for effective AI models that you can experiment with using Appwish!
The model can be changed by opening "src/main/resources/" in the release folder and editing the model name in the file ollama_model.props.
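As a rough sketch of that step, assuming the props file stores the model as a simple `key=value` line (the actual key name inside ollama_model.props is an assumption here; check the file in your own release folder first):

```shell
# Sketch: point Appwish at a different Ollama model by editing ollama_model.props.
# ASSUMPTION: the file holds a single "model=<name>" line; the real key name
# may differ -- inspect the file shipped in your release folder before editing.
PROPS="src/main/resources/ollama_model.props"

mkdir -p "$(dirname "$PROPS")"
printf 'model=codestral:22b\n' > "$PROPS"   # demo file for illustration only

# Swap the default model for, e.g., llama3:latest:
sed -i 's/^model=.*/model=llama3:latest/' "$PROPS"

cat "$PROPS"   # -> model=llama3:latest
```

You will likely also need to fetch the new model locally (e.g. `ollama pull llama3:latest`) before restarting Appwish, since Ollama can only serve models that have been pulled.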
Appwish's default models
Appwish's current default model for the amd64 version (Linux) is codestral:22b.
https://ollama.com/library/codestral:22b
Appwish's current default model for the arm version (Raspberry Pi) is llama3 by Meta.
https://ollama.com/library/llama3:latest
Model tips
These models are also fun to experiment with:
https://ollama.com/library/gemma:latest
This model is both entertaining and easy on your hardware. The quality of this model is really good, which is kind of amazing considering its size of less than 5 GB. It is heavily censored though, so keep that in mind if you're thinking of generating network apps.
https://ollama.com/library/dolphin-mixtral:8x7b
This is an uncensored model that is good for advanced console apps. The truth is, it's really good. It is very demanding on your hardware, though, so it's not worth trying unless you run WSL with Nvidia CUDA.
https://ollama.com/library/dolphin-llama3:latest
An uncensored model based on the llama3 model from Meta. This model is a good choice if you want to generate more freely than with the default model, and its small size makes it suitable for limited hardware resources. Note, however, that it is more limited and will produce more compile errors than the original llama3 model.
https://ollama.com/library/granite-code:34b
A version of the IBM granite-code model with 34 billion parameters. If your hardware can handle it, give the 34b version of this model a shot for outstanding and dependable applications.