Updated 4/19/2022

- Getting Started
  - Installation
  - Registration and login
- Analysis Workflows
  - Starting a new project
  - Importing images
  - Single label analysis
  - Choosing a detection model
  - Image analysis settings
  - Annotating images
  - Colocalization
  - Data output
- Custom Model Training
  - When and why to train a custom model
  - Annotation best practices
  - How to submit a training dataset
- Analysis Settings
- Hotkeys
- Options
Getting Started
Welcome to Pipsqueak Pro! From our humble origins as Pipsqueak AI, a Fiji/ImageJ plugin, we are now proud to offer our own user interface that is much easier to use and gives us unlimited potential to expand functionality to suit our users’ diverse image analysis needs. Our software exists to remove the tedious roadblocks scientists face when analyzing large amounts of microscopy data.
For now, Pipsqueak Pro is in beta and lacks the full feature set of our classic Pipsqueak AI. While we transition, your license will work the same in both Pipsqueak AI and Pipsqueak Pro. We encourage you to report bugs or provide suggestions via the bug report form within Pipsqueak Pro, or shoot us an email with screenshots at support@rewireneuro.com. We’d also love to hear suggestions for future features that matter to you and your lab!
Install Instructions
In order to use Pipsqueak, you’ll first need to order a license on our website. You can order a 1-month free trial (no payment info needed) to test it out before you commit to a 1-year license.
Unlike our original Pipsqueak AI, Pipsqueak Pro does NOT require installation of Fiji/ImageJ. You can download the latest Pipsqueak Pro install file from the link provided in your license order email.
If you already have a Pipsqueak AI license, you can also download Pipsqueak Pro directly from our downloads page: https://pipsqueak.ai/downloads/

Windows: For Windows users, run the downloaded .exe file and Pipsqueak Pro will automatically install and open. It will also appear as a Start menu app in a Rewire Neuro folder. Pipsqueak Pro is compatible with Windows 7 or newer.
Mac: Double-click the downloaded Pipsqueak Pro .dmg file and a Finder window will open. In that window, drag and drop the Pipsqueak Pro icon into the Applications folder. From there you can open Pipsqueak Pro from Launchpad or from Spotlight search. Pipsqueak Pro is compatible with macOS High Sierra (10.13) or newer.
First Time Startup (Windows & Mac): When Pipsqueak Pro starts for the first time, it can take a few minutes to download extra resources and updates in the background, so don’t close the window until this process is finished. You will know it’s finished when the splash screen disappears and the Pipsqueak Pro UI appears.
Registration and Login
Register within Pipsqueak Pro the first time you use the software. If you’ve purchased more than one license, you will need to register a different email for each license. A given account can be used on multiple computers, but by only one person at a time. The registration login and password are NOT the same as the account you created on rewireneuro.com when you ordered your license. If you’ve already used Pipsqueak AI (in Fiji/ImageJ) in the past, the same login will work in Pipsqueak Pro and you do not need to register again.

Analysis Workflows
Start a new project
Visit the Projects tab in the left-side column to start a new project or view your list of previously imported projects.

Image Import
You can import individual image files in a wide variety of formats or a whole experimental folder (max 500 MB at a time, with greater capacity coming soon). When importing folders, the file organization within the folder will be preserved in Pipsqueak Pro.

Folders can also be added below the project to group an experiment’s images. For example, images from test subjects and control subjects can be placed in separate folders below the project. Subfolders can be added to further separate images: the test and control folders could each have subfolders for images from male and female subjects. An appropriate folder setup allows data to be grouped on export as needed for further analysis.

Supported File Types
Supported file types include most tiffs (except stacks), png, and jpg. We recommend using the highest quality images possible (such as .tiff instead of .jpg) and staying consistent with image size and format within an experiment. A mix of file types or compression levels may negatively impact detection model performance. In the future we plan to add compatibility with major microscope formats such as Leica’s .lif and Zeiss’s .czi.
Example Single Label Analysis Workflow

1. Go to the projects tab and create a new project with a description of your experimental details.
2. Import image files into your new project. Delete and rename as needed by right-clicking the image or folder. You can drag images and whole folders into existing folders to create subfolders to further organize your images. Images are stored locally and tied to their original location on your computer.
3. Click on an image in the left side window to bring it up as the active image in the center window. This is the only image that will be affected by the settings in the right-side window.
4. Select an appropriate detection model from the pull-down menu in the top right-side window. Don’t know which model to choose? If your cell or biomarker of interest isn’t in the list, choose the one from the General Models list that most closely resembles your marker. Try different models until you find the one that works best. The model won’t be perfect because it wasn’t trained on your images, but you can manually fix any issues in the next step.
5. Once you’ve selected a model, click “Detect Cells” and see the detected regions of interest (ROIs) appear as boxes in your image. Try different image settings until you get the best detection. See the “Analysis Settings” section below for more information on what the settings do.
6. You can click to select an ROI on the image and delete it, or resize it by clicking and dragging its edges. Click and drag with your mouse to add new ROIs, which will be automatically added to the ROI list with the FFF prefix. You can even change the color of the ROI boxes under Options.
7. After you’re satisfied with your dataset, click “Approve ROIs”, which will add a checkmark next to that image in the file tree and add it to your data output file.
8. When you’re done analyzing all of your images, click “Export ROIs” to export your data to a .csv file. By default, the file saves to the same directory as your images.
9. Didn’t find a good detection model or settings for your images? Add your annotated images to a custom detection model. Check out our awesome new custom model training capability explained below!
Example Multi-Label Analysis Workflow for Colocalization
Coming soon
Custom Model Training
A custom detection model allows you to train your own unique model to detect your biomarker of interest. If none of our existing detection models worked well for your images, then this may be a good option for you. With a relatively small upfront effort, you can create a model that will dramatically reduce your analysis time while improving consistency of your results.
To do so you will need to fully annotate a minimum of 5 training images and provide at least 20 contoured objects. It is important to annotate all of the ROIs on each image. See our associated blog post for annotation best practices: https://medium.com/@rewireneuro/machine-learning-isnt-magic-b7d06cf9b305
To get started with this process, repeat the steps mentioned in the workflow above, but when you’re done manually fixing the detections, submit the image to a custom model dataset. We’ll get back to you within a few business days with a new custom model to test and refine as needed. We recommend this route for researchers who will be analyzing hundreds of similar images, as the custom model will be much faster and more consistent than manual analysis. As our database grows, our custom model training process will become more efficient and future models will require far fewer annotations to train a reliable model.
When setting up a new custom model, please provide a brief but descriptive name for your unique model. This will be displayed in the community models list if you decide to publicly share your model. We recommend including your lab or last name and an abbreviated biomarker or cell type name. A more detailed description about your experiment and image dataset will also be helpful for future users and allow you to differentiate between multiple custom models more easily.
Custom Model Training Process:
1. After fully annotating an image, click “Approve ROIs”, which will add a check mark next to that image in the file tree. Only approved images will appear in the custom model training submission window. Repeat for several images. We recommend a minimum of 30 images with at least 2000 annotations. Important: each training image set must be of the same single biomarker. Multi-channel images must be split into single channels prior to annotation and training.
2. Create a new model in the Custom Model tab by clicking the “Create Model” button.
3. Give your model a unique name. We recommend Lab/Company name + Biomarker, e.g. “Rewire Parvalbumin”. In the Model Description box, you can provide additional details about the experiment to distinguish between future models. This information will also be useful to other users if you choose to share your model publicly.
4. Choose the project folder that contains your annotated training images.
5. Select your new model from the custom model list and click “Add Images”
6. Deselect any images you don’t want to include in the training dataset and click “Add”.
7. Once the blue progress bar fills up, indicating you have enough images and annotations, click “Train Model” and click “Train” to submit. We’ll contact you within a few business days to let you know your model is ready. Once deployed, your custom model will appear in the model drop down list.
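Step 1 notes that multi-channel images must be split into single channels before annotation. If your imaging software can’t do this, the operation itself is simple. Here is a conceptual Python sketch on a tiny RGB image represented as nested lists; in practice you would use Fiji/ImageJ or a library such as Pillow on real image files.

```python
# Conceptual sketch only: split an RGB image (nested lists of (r, g, b)
# pixel tuples) into three single-channel images, one per biomarker channel.

def split_channels(rgb_image):
    """Return (red, green, blue) single-channel images from an RGB image."""
    channels = []
    for c in range(3):
        channels.append([[pixel[c] for pixel in row] for row in rgb_image])
    return tuple(channels)

# A hypothetical 2x2 image: each pixel is an (r, g, b) tuple.
tiny = [[(255, 0, 10), (0, 128, 10)],
        [(0, 0, 10), (64, 200, 10)]]
red, green, blue = split_channels(tiny)
print(green)  # [[0, 128], [0, 200]]
```

Each resulting single-channel image can then be saved and annotated as its own training image.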
Analysis Settings
Detection Models:
Pre-trained AI detection models for a wide variety of biomarker types. See our website FAQ [link?] for more information about each. Try different models to see which works best for your images or use custom model training to create your own.
Detect Cells:
This tells the selected model to scan the active image to identify regions of interest (ROIs) based on what it is trained to detect. In machine learning (ML)-speak this is called inference.
Detection Confidence:
Adjust this to change the detection sensitivity of the model, i.e. the number of ROIs detected based on the model’s confidence in its decisions. A lower number is stricter, only allowing objects the model is more confident about: fewer ROIs will be detected, but this raises the probability of false negatives (i.e. missed cells that you’ll need to add manually). A higher number accepts less certain results, so more ROI boxes will appear, with a higher probability of false positives that will need to be manually deleted. When first identifying a model that works best for your images, experiment with the detection confidence until you find the balance that gives the most accurate detection.
Overlap Removal:
This setting changes the amount of overlap allowed between two ROIs. A lower value allows more overlap between ROIs, which can help when cells are closely nested together. A higher value merges overlapping ROIs, which helps with false positives detected near an isolated cell. In general, if unneeded ROIs cluster around a cell and overlap one another, increasing the removal setting can fix this.
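Conceptually, overlap removal resembles the non-maximum suppression step used in object detection: keep the higher-confidence box and drop boxes that overlap it too much. A rough Python sketch under that assumption follows; Pipsqueak’s actual algorithm, and how the slider maps to the overlap threshold, may differ.

```python
# Illustrative sketch of overlap suppression, assuming a greedy
# non-maximum-suppression-style rule: keep boxes in order of confidence
# and drop any box whose overlap (IoU) with a kept box is too high.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def suppress_overlaps(boxes, scores, max_iou):
    """Keep highest-scoring boxes; drop boxes overlapping a kept one too much."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= max_iou for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
# The second box overlaps the first heavily, so it is suppressed.
print(suppress_overlaps(boxes, scores, max_iou=0.5))
```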
Image View Settings:
These settings only adjust the way you view the image to assist in visualizing your biomarkers or cells. They do not change the way the detection model functions or impact the results output.
Brightness and Contrast: You can adjust the brightness and contrast of the active image to bring out weaker signals.
Min & Max Pixel: These settings let you set the minimum and maximum intensity values displayed in the active image. Raising the Min and lowering the Max can work better than brightness/contrast for accentuating mid-range signal. Raising the Min level can also help remove some background noise.
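The Min/Max adjustment is a standard display windowing operation: values at or below Min render black, values at or above Max render white, and the range in between is stretched across the full display range. A small sketch of the idea on a single row of 8-bit pixel values (illustrative only; as noted above, it affects only what you see, not the underlying data):

```python
# Illustrative display windowing: linearly rescale pixel values so that
# the [vmin, vmax] window spans the full 0-255 display range.

def window(pixels, vmin, vmax):
    """Map values <= vmin to 0, values >= vmax to 255, stretch in between."""
    span = vmax - vmin
    out = []
    for p in pixels:
        clipped = min(max(p, vmin), vmax)
        out.append(round((clipped - vmin) / span * 255))
    return out

row = [5, 50, 100, 150, 250]           # one row of 8-bit pixel values
print(window(row, vmin=50, vmax=150))  # [0, 0, 128, 255, 255]
```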
Image Analysis Settings:
These settings will affect the detection model results and your resulting data output!
Background subtraction: Removing background may improve detection overall, but risks losing faint ROIs. We recommend adjusting this consistently across images within an experiment. Coming soon: select example background regions within your image to more accurately subtract background signal.
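As a rough illustration, the simplest form of background subtraction estimates a flat background level and subtracts it from every pixel. The sketch below uses the median as the background estimate; Pipsqueak’s actual method is not specified here, so treat this purely as a conceptual example. It also shows why faint ROIs are at risk: signal near the background level gets pushed toward zero.

```python
# Conceptual sketch of flat background subtraction: estimate the
# background level (here, the median pixel value) and subtract it
# from every pixel, flooring at zero. Faint signal barely above the
# background survives only weakly.

def subtract_background(pixels):
    """Subtract the median intensity from every pixel, flooring at 0."""
    values = sorted(pixels)
    median = values[len(values) // 2]
    return [max(0, p - median) for p in pixels]

# Hypothetical pixel values: mostly background (~10-13) plus two cells.
frame = [10, 12, 11, 200, 13, 11, 180, 12, 10]
print(subtract_background(frame))  # [0, 0, 0, 188, 1, 0, 168, 0, 0]
```

Note how the faint value 13 is reduced to 1, which is why the manual recommends applying the same setting consistently across an experiment.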
Crop to subregion: This will allow you to analyze only a specific subregion of your image. Coming soon.
Approve ROIs & Export ROIs
Approve ROIs to include the current image’s ROIs in your results output or make them available for custom model training.
Export ROIs to save your data for further analysis outside of Pipsqueak. By default, this will create the following files in the folder that contains the images. Coming soon: select your own destination folder.
- .csv file with ROI number, location, and a variety of intensity and area measurements.
- ROIs in zipped folder – can be imported into ImageJ for further analysis or to make figures
- Coming soon: image with ROI overlay
*New*: Batch export an entire folder of image data in one .csv file by right clicking the folder and selecting “Export ROIs”
A successful export will show a green notification with the file save location at the top of the window. By default, the data file saves to the same location as the image files.
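If you post-process the exported .csv in Python, the standard csv module is enough. The column names below ("roi_id", "mean_intensity") are hypothetical stand-ins; check the header row of your own export for the actual names.

```python
# Illustrative sketch of loading an exported ROI .csv. The header and
# values here are made up; a real export is opened with
# open("your_image_rois.csv") instead of the StringIO stand-in.
import csv
import io

export = io.StringIO("roi_id,x,y,mean_intensity\n"
                     "1,10,20,140.5\n"
                     "2,55,80,97.0\n")

rows = list(csv.DictReader(export))
mean_of_means = sum(float(r["mean_intensity"]) for r in rows) / len(rows)
print(mean_of_means)  # 118.75
```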
Colocalization Menu: Coming soon. This will allow users to analyze multiple biomarkers in an image via split channels and assess the number and degree of colocalization.
Batch Processing Menu: Coming soon. This will allow users to speed up their workflows by successively analyzing multiple images in sequence and outputting their data in aggregate for a given experimental folder.
Stack Summing Menu: Coming soon. This will allow users to convert tiff stacks of z-plane images to a single summed image for ROI detection analysis.
ROI Table: This shows a list of the ROIs drawn on the active image, with the corresponding x,y coordinates in the image as well as the size of the ROI box in pixels. IDs contain number prefixes for ROIs detected by the model and letter prefixes for ROIs added manually by the user. Hovering over an ROI in the table will highlight the corresponding ROI box in the active image. Conversely, hovering over an ROI in the image will move to the corresponding ROI in the table. ROIs with a blue checkmark will be included in the final data export. ROIs that are manually deleted from the image will automatically disappear from the ROI table.
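The ID convention described above (numeric prefixes for model detections, letter prefixes such as FFF for manually added ROIs) also makes it easy to separate the two in downstream scripts. A small illustrative sketch; the field names are made up, not Pipsqueak’s actual data model.

```python
# Illustrative only: distinguish model-detected ROIs (numeric-prefix IDs)
# from manually drawn ROIs (letter-prefix IDs), and collect the checked
# ROIs that would be included in the final export.

def is_manual(roi_id):
    """True if the ROI was drawn by hand (letter-prefixed ID)."""
    return roi_id[0].isalpha()

table = [
    {"id": "001", "x": 14, "y": 30, "w": 24, "h": 24, "checked": True},
    {"id": "FFF1", "x": 80, "y": 55, "w": 20, "h": 22, "checked": True},
    {"id": "002", "x": 40, "y": 90, "w": 25, "h": 23, "checked": False},
]

exported = [r["id"] for r in table if r["checked"]]    # blue-checked ROIs only
manual = [r["id"] for r in table if is_manual(r["id"])]
print(exported, manual)  # ['001', 'FFF1'] ['FFF1']
```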
Options
In the options menu in the upper right corner, you can choose a preferred ROI color, ROI shape, and adjust the zoom sensitivity and direction for your mouse scroll wheel (see hotkeys for how to use mouse to zoom). In addition, you can choose to clean up the UI by removing scrollbars.

Hotkeys
- Hide/unhide all ROIs on an active image: ctrl + t
- Remove ROIs on an active image (as well as from the ROI list): Delete or Backspace
- Collapse the left window: ctrl + <
- Expand the left window: shift + ctrl + <
- Collapse the right window: ctrl + >
- Expand the right window: shift + ctrl + >
- Zoom with mouse scroll wheel: hold ctrl + scroll
- Pan around image with mouse: hold ctrl + click & drag image
Suggestions for additional useful hotkeys or features? Let us know! support@rewireneuro.com