add python integration
nilsmechtel committed Feb 10, 2025
1 parent a092809 commit ecf5bfb
Showing 2 changed files with 63 additions and 13 deletions.
63 changes: 60 additions & 3 deletions README.md
@@ -30,7 +30,11 @@ BioImageIO Colab combines two powerful tools:

### Integrate SAM Compute Service

The SAM compute service can be integrated in both JavaScript (browser) and Python environments.

#### JavaScript Integration (Browser)

##### Required Dependencies
```html
<!-- Hypha RPC WebSocket -->
<script src="https://cdn.jsdelivr.net/npm/hypha-rpc@0.20.47/dist/hypha-rpc-websocket.min.js">
Expand All @@ -45,7 +49,7 @@ BioImageIO Colab combines two powerful tools:
<script src="https://cdn.jsdelivr.net/gh/bioimage-io/bioimageio-colab@latest/plugins/onnx-mask-decoder.js">
```
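The `onnx-mask-decoder.js` plugin provides the mask-decoding helpers used in the integration steps below (e.g. `processMaskToGeoJSON`).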
##### Integration Steps
1. **Connect to BioEngine**
```javascript
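// Sketch of the connection step; assumes the hypha-rpc browser bundle loaded
// above exposes `hyphaWebsocketClient`.
const client = await hyphaWebsocketClient.connectToServer({
  server_url: "https://hypha.aicell.io",
});
const svc = await client.getService("bioimageio-colab/microsam", { mode: "last" });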
// ... later steps collapsed in this diff (@@ -88,4 +92,57 @@) ...
const polygonCoords = processMaskToGeoJSON({
  masks: mask,
  threshold: 0,
});
```
#### Python Integration
##### Required Dependencies
```bash
pip install opencv-python
pip install onnxruntime
pip install hypha-rpc
```
##### Integration Steps
1. **Connect to BioEngine**
```python
from hypha_rpc import connect_to_server

# connect_to_server is asynchronous, so these calls must run inside an async
# function or a notebook with top-level await (see the sketch after step 3).
client = await connect_to_server({"server_url": "https://hypha.aicell.io"})
svc = await client.get_service("bioimageio-colab/microsam", {"mode": "last"})
```
2. **Load SAM Model**
```python
model_id = "sam_vit_b_lm"  # or "sam_vit_b_em_organelles"
# load_sam_decoder (from bioimageio_colab/onnx_mask_decoder.py) loads the ONNX
# mask decoder for the selected model and returns an ONNX Runtime session.
model = load_sam_decoder(model_id)
```
3. **Process Images**
```python
import cv2
import numpy as np

# Load and process image
# (load_image and prepare_model_data are provided in bioimageio_colab/onnx_mask_decoder.py)
image = load_image("path/to/image.tif")

# Compute embedding on the BioEngine compute service
embedding_result = await svc.compute_embedding(
    image=image,
    model_id=model_id,
)

# Segment with point prompt
example_coordinates = (80, 80)
feeds = prepare_model_data(embedding_result, example_coordinates)
masks = model.run(["masks"], feeds)

# Process mask (example: convert to binary and find contours)
mask = masks[0].squeeze()
binary_mask = (mask > 0).astype(np.uint8)
contours, _ = cv2.findContours(
    binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
```
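The calls above are awaited, so outside a notebook they need a running event loop. A minimal sketch that wraps the steps in an async entry point (same server URL and service ID as above):
```python
import asyncio

from hypha_rpc import connect_to_server


async def main():
    # Step 1: connect to BioEngine and get the SAM compute service
    client = await connect_to_server({"server_url": "https://hypha.aicell.io"})
    svc = await client.get_service("bioimageio-colab/microsam", {"mode": "last"})
    # Steps 2 and 3 (model loading, embedding, and mask decoding) follow here.


asyncio.run(main())
```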
For complete implementations, see:
- JavaScript: [plugins/onnx-mask-decoder.js](plugins/onnx-mask-decoder.js)
- Python: [bioimageio_colab/onnx_mask_decoder.py](bioimageio_colab/onnx_mask_decoder.py)
13 changes: 3 additions & 10 deletions bioimageio_colab/onnx_mask_decoder.py
@@ -29,15 +29,6 @@ def load_image(file_path):
    return image


def get_compute_service(
    server_url: str = "https://hypha.aicell.io",
    service_id: str = "bioimageio-colab/microsam",
):
    client = connect_to_server({"server_url": server_url})
    svc = client.get_service(service_id, {"mode": "last"})
    return svc


def load_sam_decoder(
    model_id: str = "sam_vit_b_lm",
    output_dir: str = "../data",
@@ -119,7 +110,9 @@ def prepare_model_data(embedding_result, coordinates):

image = load_image("../data/example_image.tif")

svc = get_compute_service()
client = connect_to_server({"server_url": "https://hypha.aicell.io"})
svc = client.get_service("bioimageio-colab/microsam", {"mode": "last"})

embedding_result = svc.compute_embedding(
    image=image,
    model_id=model_id,
