MITK-nnInteractive
Revision as of 21:38, 11 March 2025
Overview and Download
nnInteractive, a universal promptable 3D segmentation model, is integrated into MITK. It enables interactive, open-set segmentation using various prompts such as points, scribbles, boxes, and lassos. The model translates intuitive 2D interactions into full 3D segmentations, facilitating efficient and accurate biomedical image analysis.
Download: MITK nnInteractive Preview 1 for Windows
Features
- Designed for medical imaging: Tailored for modalities such as MRI, CT, and microscopy, ensuring accurate segmentation of tissues and structures.
- Multi-modal prompt support: Points, scribbles, bounding boxes, and lassos.
- Full 3D segmentation from simple 2D interactions.
- Multi-plane interaction: Supports user interaction across axial, sagittal, and coronal planes for precise delineation of anatomical structures.
- 3D visualization: Provides 3D rendering of segmentations for enhanced spatial understanding and validation.
Installation
- Download the latest MITK nnInteractive Preview ZIP archive from above.
- No installation is needed: just unzip the archive and run MitkWorkbench.bat.
Basic Usage
- Load an image
  - Open the MITK Workbench and load your 3D medical image (e.g. MRI, CT, or microscopy data).
- Select the nnInteractive tool
  - Open the Segmentation plugin.
  - Create a segmentation.
  - Click on nnInteractive under 3D tools.
Workflow
- Click on the "Initialize" button to get the nnInteractive backend ready for a segmentation session.
- The following interactions/prompts are available after successful initialization:
- Point: Click on the image to place points.
- Box: Click and drag on the image to place a box (rectangle).
- Scribble: Click and move the mouse cursor over the image to scribble.
- Lasso: Click and move the mouse to outline a region of interest.
- Initialize with Mask: Use an existing segmentation label as initial mask for further refinement.
You can combine any of the interaction types and decide individually whether each interaction is positive or negative. Initializing with a mask resets all previous uncommitted interactions.
Don't forget to click "Confirm Segmentation" to commit the shown preview to a segmentation label.
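The prompt workflow above can be sketched as a small session object. Note that the class and method names below are purely illustrative and do not reflect the actual MITK or nnInteractive API; the sketch only mirrors the documented behavior (prompts accumulate, mask initialization resets uncommitted interactions, confirming commits the preview).

```python
# Illustrative sketch of the nnInteractive prompt workflow.
# All names are hypothetical, not the real MITK/nnInteractive API.
from dataclasses import dataclass, field

@dataclass
class PromptSession:
    prompts: list = field(default_factory=list)  # uncommitted interactions

    def add_point(self, xyz, positive=True):
        self.prompts.append(("point", xyz, positive))

    def add_box(self, corner_a, corner_b, positive=True):
        self.prompts.append(("box", (corner_a, corner_b), positive))

    def add_scribble(self, path, positive=True):
        self.prompts.append(("scribble", tuple(path), positive))

    def initialize_with_mask(self, mask_label):
        # Initializing with a mask resets all uncommitted interactions.
        self.prompts = [("mask", mask_label, True)]

    def confirm_segmentation(self):
        # Commits the current preview and clears uncommitted prompts.
        committed = list(self.prompts)
        self.prompts = []
        return committed
```

Each prompt carries a positive/negative flag, matching the ability in the plugin to mark any interaction as include or exclude.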
Save Results
Once your segmentation is complete, you can save your work by right-clicking the segmentation in the Data Manager (typically on the left) and selecting "Save...". This opens a file dialog where you can export segmentations in supported formats such as NRRD for further analysis or sharing. Note that saving segmentations as DICOM SEG is only supported if the reference image was also loaded from DICOM files.
Performance Considerations
nnInteractive is a deep learning-based model that requires significant computational resources. To ensure optimal performance, we highly recommend a CUDA-enabled GPU with tensor cores and at least 6 GB of VRAM, such as an NVIDIA GeForce RTX 2060 or better. On older GPUs without tensor cores, such as the NVIDIA GeForce GTX 1080, interactions already take about twice as long. Make sure to use an up-to-date graphics driver.
As a last resort, nnInteractive can also run on the CPU, but with significant (!) performance drops of at least an order of magnitude compared to a supported GPU.
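Assuming a PyTorch backend (nnInteractive belongs to the PyTorch-based nnU-Net family), a quick pre-flight probe can tell you which device a segmentation session would run on. The helper name below is ours, not part of any shipped tooling:

```python
# Hypothetical pre-flight check for the compute device nnInteractive
# would use; assumes PyTorch, degrades gracefully if it is missing.
def describe_compute_device():
    try:
        import torch
    except ImportError:
        return "cpu (PyTorch not installed)"
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        vram_gb = props.total_memory / 1e9
        return f"cuda: {torch.cuda.get_device_name(0)} ({vram_gb:.1f} GB VRAM)"
    return "cpu (expect at least an order of magnitude slowdown)"
```

If the probe reports less than roughly 6 GB of VRAM or falls back to the CPU, expect the slowdowns described above.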
Contributions and Feedback
Contributions and feedback are welcome! Feel free to report issues or suggest improvements via GitHub.
License
- MITK Workbench: BSD-3-Clause license
- nnInteractive: Apache 2.0 license
- nnInteractive model weights: CC-BY-NC-SA 4.0