AI-powered development assistant that leverages Ollama's language models for code generation and assistance.
This project is currently under active development. Some features might not work as expected. Please report any issues you encounter.
Install the required dependencies:

```bash
# Install Node.js (v14+)
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Install DevLama globally
npm install -g devlama
```
Start the required services:

```bash
# Start the Ollama service
ollama serve &

# Start the getllm API (if using local models)
# Make sure to install getllm first: pip install getllm
getllm serve
```
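Before going further you can confirm that Ollama is actually serving by querying its model-list endpoint. The sketch below assumes Node 18+ (which ships a global `fetch`) and Ollama's default port 11434; `listOllamaModels` is just an illustrative helper name, not part of DevLama:

```javascript
// List the models available on a running Ollama instance.
// Returns an array of model names, or null if Ollama is unreachable.
async function listOllamaModels(baseUrl = 'http://localhost:11434') {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    if (!res.ok) return null;
    const body = await res.json();
    // /api/tags responds with { models: [{ name: 'codellama:latest', ... }, ...] }
    return (body.models || []).map(m => m.name);
  } catch {
    return null; // connection refused: `ollama serve` is not running
  }
}
```

If this returns `null`, start `ollama serve`; if the list is missing a code model, pull one first (e.g. `ollama pull codellama`).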
Test the installation:

```bash
devlama --version
```
- Fix CLI command execution
- Implement proper error handling for Ollama API calls
- Add connection test to getllm API
- Create proper configuration system
- Add proper logging
- Add support for different programming languages
- Implement context-aware code generation
- Add tests for all major components
- Create documentation website
- Add CI/CD pipeline
- Add plugin system
- Implement code review functionality
- Add support for custom templates
- Create VS Code extension
- Basic CLI interface
- Integration with Ollama
- Simple code generation
- Connection to getllm API
- Basic error handling
- Configuration system
- Improved error messages
- Better documentation
- Basic testing
- VS Code extension
- Plugin system
- Template support
- Improved logging
- Full test coverage
- Comprehensive documentation
- Performance optimizations
- Community guidelines
To use DevLama with getllm, make sure the getllm API is running:
```bash
# Install getllm if not already installed
pip install getllm

# Start the getllm API server
getllm serve

# In another terminal, test the connection
curl http://localhost:8000/health
```
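The same check can be done programmatically before wiring DevLama to the API. A minimal sketch, assuming getllm's default port 8000 from the commands above and Node 18+ for the global `fetch`; `serviceHealthy` is an illustrative helper name:

```javascript
// Probe a health endpoint; true only for a 2xx response.
async function serviceHealthy(url = 'http://localhost:8000/health') {
  try {
    const res = await fetch(url);
    return res.ok;
  } catch {
    return false; // service not running or unreachable
  }
}
```

If this returns `false`, start the service with `getllm serve` before using getllm-backed features.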
- Connection to getllm API might fail if the service is not running
- Some commands might not work as expected in the current version
- Limited error handling in the current implementation
Contributions are welcome! Please read our Contributing Guidelines for details on how to get started.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
| Project | Description | Links |
|---|---|---|
| DevLama | AI-powered development assistant | GitHub · NPM · Docs |
| GetLLM | LLM model management and code generation | GitHub · PyPI · Docs |
| LogLama | Centralized logging and environment management | GitHub · PyPI · Docs |
| APILama | API service for code generation | GitHub · Docs |
| BEXY | Sandbox for executing generated code | GitHub · NPM · Docs |
| JSLama | JavaScript code generation | GitHub · NPM · Docs |
| SheLLama | Shell command generation | GitHub · PyPI · Docs |
| WebLama | Web application generation | GitHub · Docs |
Tom Sapletta – DevOps Engineer & Systems Architect

- 15+ years in DevOps, Software Development, and Systems Architecture
- Founder & CEO at Telemonit (Portigen – edge computing power solutions)
- Based in Germany | Open to remote collaboration
- Passionate about edge computing, hypermodularization, and automated SDLC
If you find this project useful, please consider supporting it:
```bash
npm install -g devlama  # for global CLI usage
# or
yarn global add devlama
```
```bash
# Initialize a new project
devlama init my-project

# Generate code from a prompt
devlama generate "Create a React component that displays a counter"

# Start interactive mode
devlama

# Show version
devlama --version
```
```javascript
const { DevLama } = require('devlama');

const devlama = new DevLama({
  model: 'codellama', // default model
  temperature: 0.7,
});

async function main() {
  // Generate code from a prompt
  const code = await devlama.generateCode(
    'Create a function that sorts an array of objects by a property'
  );
  console.log(code);
}

main().catch(console.error);
```
- AI-powered code generation and assistance
- Support for multiple programming languages
- Integration with Ollama's language models
- Interactive REPL for development
- Configurable model parameters
- Project scaffolding and management
## Testing

To run tests for JSLama using the PyLama ecosystem:

```bash
cd ../../tests
./run_all_tests.sh
# or for a tolerant run
./run_all_tests_tolerant.sh
```

Or, from the jslama directory:

```bash
make test
```
Common Makefile commands:

- `make install` – Install dependencies
- `make lint` – Lint code
- `make test` – Run tests
- `make build` – Build the project
- `make clean` – Clean build artifacts and dependencies
- `make format` – Format code
- `make start` – Start the project (if supported)
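For reference, in an npm-based package these targets usually just delegate to npm scripts. A sketch of what such a Makefile might look like — the npm script names (`lint`, `build`, `format`) are assumptions about the package's `package.json`, not guaranteed by this repo:

```make
install:
	npm install

lint:
	npm run lint

test:
	npm test

build:
	npm run build

clean:
	rm -rf node_modules dist

format:
	npm run format

start:
	npm start
```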
```javascript
const JSLama = require('jslama');

JSLama.generate('Write a function to reverse a string.').then(code => {
  console.log(code);
  // Example output: function reverseString(str) { return str.split('').reverse().join(''); }
});
```
JSLama is a JavaScript code generation tool that leverages Ollama's language models. It is part of the PyLama ecosystem and integrates with LogLama as the primary service for centralized logging and environment management.