FizTorch is a lightweight deep learning framework designed for educational purposes and small-scale projects. It provides a simple and intuitive API for building and training neural networks, inspired by popular frameworks like PyTorch.
- Tensor Operations: Basic tensor operations with support for automatic differentiation (see the sketch after this list).
- Neural Network Layers: Common neural network layers such as Linear and ReLU.
- Sequential Model: Easy-to-use sequential model for stacking layers.
- Functional API: Functional operations for common neural network functions.
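For example, an autograd round trip might look like the following. This is a minimal sketch: `requires_grad`, `backward()`, and `.grad` follow the usage shown later in this README, but the exact set of supported operators (elementwise multiply, `sum()`) is an assumption.

```python
from fiztorch import Tensor

# Track gradients for x through a tiny computation
x = Tensor([[1.0, 2.0]], requires_grad=True)

# y = sum(x * x); elementwise multiply and sum are assumed to be supported
y = (x * x).sum()

# Backpropagate: dy/dx = 2x
y.backward()
print(x.grad)  # expected: [[2.0, 4.0]]
```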
To install FizTorch, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/ahammadnafiz/FizTorch.git
   cd FizTorch
   ```

2. Set up a virtual environment (optional but recommended):

   ```bash
   python -m venv fiztorch-env
   source fiztorch-env/bin/activate  # On Windows, use `fiztorch-env\Scripts\activate`
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Install FizTorch:

   ```bash
   pip install -e .
   ```
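To verify the installation, a quick smoke test (the `Tensor` import follows the examples below):

```bash
python -c "from fiztorch import Tensor; print(Tensor([1.0, 2.0]))"
```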
Here is a simple example of how to use FizTorch to build and train a neural network:

```python
import numpy as np

from fiztorch import Tensor
from fiztorch.nn import Linear, ReLU, Sequential
from fiztorch.optim.optimizer import SGD
import fiztorch.nn.functional as F

# Define a neural network: 2 inputs -> 3 hidden units -> 1 output
model = Sequential(
    Linear(2, 3),
    ReLU(),
    Linear(3, 1)
)

# Example of training the model
def train_example():
    optimizer = SGD(model.parameters(), lr=0.01)

    # Dummy data for demonstration
    X_train = Tensor(np.random.rand(100, 2), requires_grad=True)
    y_train = Tensor(np.random.rand(100, 1))

    for epoch in range(5):  # Simulate 5 epochs of training
        optimizer.zero_grad()                    # Reset accumulated gradients
        predictions = model(X_train)             # Forward pass
        loss = F.mse_loss(predictions, y_train)  # Mean squared error
        loss.backward()                          # Backpropagate
        optimizer.step()                         # Update parameters
        print(f"Epoch {epoch+1}, Loss: {loss.data}")

if __name__ == "__main__":
    train_example()
```
Larger demos built with FizTorch:

- Neural network training on the MNIST digits dataset, with the Adam optimizer (configurable learning rate), mini-batch support, and real-time accuracy/loss tracking (a sketch of such a loop follows below).
- Neural network training on the California Housing dataset.
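A minimal sketch of such a mini-batch loop, assuming an `Adam` optimizer is exposed alongside `SGD` (the import path below is an assumption, not confirmed by this README):

```python
import numpy as np

from fiztorch import Tensor
from fiztorch.nn import Linear, ReLU, Sequential
from fiztorch.optim.optimizer import Adam  # assumed import path
import fiztorch.nn.functional as F

model = Sequential(Linear(64, 32), ReLU(), Linear(32, 10))
optimizer = Adam(model.parameters(), lr=1e-3)  # configurable learning rate

# Dummy data standing in for a real dataset
X = np.random.rand(1000, 64)
y = np.random.rand(1000, 10)

batch_size = 32
for epoch in range(3):
    for start in range(0, len(X), batch_size):
        xb = Tensor(X[start:start + batch_size], requires_grad=True)
        yb = Tensor(y[start:start + batch_size])
        optimizer.zero_grad()
        loss = F.mse_loss(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch + 1}, last batch loss: {loss.data}")
```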
A `Linear` layer on its own:

```python
from fiztorch.tensor import Tensor
from fiztorch.nn import Linear

# Create a linear layer mapping 2 input features to 3 outputs
layer = Linear(2, 3)

# Create some input data
input = Tensor([[1.0, 2.0]])

# Forward pass
output = layer(input)

# Print the output
print(output)
```
A `ReLU` activation applied elementwise:

```python
from fiztorch.tensor import Tensor
from fiztorch.nn import ReLU

# Create a ReLU activation
relu = ReLU()

# Create some input data
input = Tensor([-1.0, 0.0, 1.0])

# Forward pass
output = relu(input)

# Print the output
print(output)
```
Stacking layers with `Sequential`:

```python
from fiztorch.tensor import Tensor
from fiztorch.nn import Linear, ReLU, Sequential

# Define a sequential model
model = Sequential(
    Linear(2, 3),
    ReLU(),
    Linear(3, 1)
)

# Create some input data
input = Tensor([[1.0, 2.0]])

# Forward pass
output = model(input)

# Print the output
print(output)
```
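Gradients flow through a `Sequential` model the same way as through a single layer. A minimal sketch, assuming parameters expose a `.grad` attribute after `backward()` (the attribute name is an assumption):

```python
from fiztorch.tensor import Tensor
from fiztorch.nn import Linear, ReLU, Sequential
import fiztorch.nn.functional as F

model = Sequential(Linear(2, 3), ReLU(), Linear(3, 1))

input = Tensor([[1.0, 2.0]], requires_grad=True)
target = Tensor([[0.5]])

# Forward pass, loss, and backpropagation through all layers
loss = F.mse_loss(model(input), target)
loss.backward()

# Inspect parameter gradients (assumes parameters expose `.grad`)
for p in model.parameters():
    print(p.grad)
```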
Planned improvements include:

- Enhance tensor operations with more advanced functionalities (e.g., broadcasting).
- Add support for GPU acceleration (e.g., via CUDA or ROCm).
- Improve the API for ease of use and consistency.
- Add additional layers such as Convolutional, Dropout, and BatchNorm.
- Expand activation functions (e.g., ELU).
- Integrate pre-trained models for common tasks.
- Implement additional optimizers.
- Add learning rate schedulers.
- Enhance support for custom loss functions.
- Provide built-in dataset utilities (e.g., MNIST, CIFAR).
- Create a flexible data loader with augmentation capabilities.
- Add utilities for loss/accuracy visualization.
- Integrate real-time training monitoring (e.g., TensorBoard support).
- Establish guidelines for community-driven feature additions.
- Host challenges to encourage usage and development.
Contributions are welcome! Please follow these steps to contribute:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Commit your changes (`git commit -am 'Add new feature'`).
4. Push to the branch (`git push origin feature-branch`).
5. Create a new Pull Request.
FizTorch is licensed under the MIT License. See the LICENSE file for more information.
For any questions or feedback, please open an issue or contact the maintainers.
Made with ❤️ by ahammadnafiz