Help & FAQ

Need some help with Polygon? Check out these resources.

For help getting started with Polygon AI, click below to download our comprehensive how-to guide or watch a recording of our latest live webinar. Otherwise, scroll down to see our FAQ.

Frequently Asked Questions

ARE MY DATA AND IMAGES SAFE?

Yes. Polygon AI runs on a network of AWS servers around the world that use the highest level of security and encryption when processing your images. Your images and data belong to you and are never visible, apparent, or available to other users.

DOES BIT DEPTH OR IMAGE FORMAT MATTER?

Polygon can analyze color or greyscale images of any bit depth and any non-proprietary image format. However, bit depth and image format can affect some quantifications, such as pixel intensity, so we recommend importing images in a consistent, high-quality format such as .tiff.
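For example, the same field saved as 8-bit versus 16-bit will report raw pixel intensities on very different scales. The short Python sketch below (using OpenCV purely for illustration; the filenames are placeholders, not Polygon files) shows why mixing formats skews intensity comparisons:

```python
import cv2
import numpy as np

# Load images with their native bit depth preserved (IMREAD_UNCHANGED
# avoids OpenCV's default conversion to 8-bit).
# "example_8bit.tif" and "example_16bit.tif" are placeholder filenames.
img_8bit = cv2.imread("example_8bit.tif", cv2.IMREAD_UNCHANGED)
img_16bit = cv2.imread("example_16bit.tif", cv2.IMREAD_UNCHANGED)

for name, img in [("8-bit", img_8bit), ("16-bit", img_16bit)]:
    info = np.iinfo(img.dtype)  # assumes integer-typed images
    print(f"{name}: dtype={img.dtype}, possible range 0-{info.max}, "
          f"mean intensity={img.mean():.1f}")

# The same biological signal yields very different raw mean intensities
# depending on bit depth, so intensity-based comparisons should use images
# acquired and saved in a consistent format (e.g., 16-bit .tiff).
```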

CAN I USE POLYGON AI WITHOUT THE INTERNET?

Our machine learning models require powerful servers to quickly process your images. Currently, that means you need to stay connected to the internet so that Polygon can process your images and display your results. Offline processing may become available as we continue to develop new features and capabilities.

DOES POLYGON USE IMAGEJ/FIJI?

Our legacy open-source code (PIPSQUEAK, Pipsqueak Basic, and Pipsqueak AI) was based on ImageJ/FIJI’s reliable and trusted image quantification. Today, Polygon is built on a custom Java interface that enables faster, easier image analysis. Our quantification algorithms use a combination of widely used OpenCV and ImageJ/FIJI resources, so you can be confident in the quantification Polygon returns.

WILL THE AI LEARN WITH EXPERIENCE, AND WILL THAT AFFECT DATA REPRODUCIBILITY? WILL IMPROVEMENTS IN AI ACCURACY CAUSE DATA TO VARY OVER TIME?

While our AI engine, Sightologist.ai, does adapt and improve with use, detection models are locked and will not change on their own. We periodically improve the detection models, rigorously test those improvements internally, and release them as new versions, and no previous model is removed from the library when a new version is released. This ensures the reproducibility of your previous analyses while still pushing our capabilities forward. For the same reason, custom detection models trained by users will not alter community resources.

Polygon’s ROI approval step is another way we help ensure that the right cells are being measured, by giving the user a way to verify the AI’s results. We recognize that many users trust the AI’s detections and want to reduce the amount of verification required when using Polygon; as AI technology continues to advance, less user input will be needed.

IS POLYGON OPEN-SOURCE?

Clarity is everything in research. The Polygon/PIPSQUEAK methods have been peer-reviewed and published, and the FIJI analysis code is open and accessible.

CAN I CUSTOMIZE BACKGROUND OR ANALYSIS PARAMETERS?

Yes. Everything from background subtraction to ROI selection can be customized within the Polygon AI interface.

HOW DOES POLYGON’S BACKGROUND SUBTRACTION WORK? WHAT VARIABLES CAN I CHANGE?

The multi-faceted background subtraction in Polygon AI is designed to be highly customizable, since image-acquisition methods vary from lab to lab. Our automatic background subtraction algorithm combines the rolling ball algorithm, native to ImageJ, with custom thresholding. We offer both automatic and manual selection for background sampling. If automatic background sampling is selected, our algorithm places 22 square ROIs distributed uniformly around the perimeter of the image. If manual sampling is selected, the user places their own ROIs for background sampling. We recommend placing at least 5 ROIs, but the algorithm will still perform if fewer are selected.

After background ROIs are selected, the rolling ball algorithm settings are finalized and rolling ball smoothing is performed across the entire image. The ball’s radius is set to 50 pixels by default, but for best results we highly recommend changing it to roughly 1.15-1.25 times the average size of the objects of interest, in pixels.

To subtract the background, we first discard the brightest 33% and dimmest 33% of the background-sample ROIs, then calculate the mean pixel intensity and standard deviation across the remaining ROIs. All pixels dimmer than the calculated mean plus two standard deviations are then set to Not a Number (NaN), so the suppressed background pixels do not influence the measured cell intensity values.
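As a rough illustration of the recipe above (a sketch, not Polygon’s production code), the following Python example uses scikit-image and NumPy to perform rolling-ball smoothing, place 22 perimeter ROIs, trim the brightest and dimmest 33% of samples, and suppress pixels below the mean-plus-two-standard-deviations threshold as NaN. The ROI size, the filename, and the perimeter_rois helper are illustrative assumptions:

```python
import numpy as np
from skimage import io, restoration

def perimeter_rois(image, n_rois=22, roi_size=32):
    """Extract square ROIs spaced uniformly around the image perimeter.

    roi_size and the placement scheme are illustrative assumptions;
    Polygon's own ROI geometry may differ.
    """
    h, w = image.shape
    perimeter = 2 * (h + w)
    step = perimeter / n_rois
    rois = []
    for i in range(n_rois):
        d = i * step
        if d < w:                      # top edge
            y, x = 0, d
        elif d < w + h:                # right edge
            y, x = d - w, w - 1
        elif d < 2 * w + h:            # bottom edge
            y, x = h - 1, 2 * w + h - d
        else:                          # left edge
            y, x = perimeter - d, 0
        y0 = int(np.clip(y - roi_size / 2, 0, h - roi_size))
        x0 = int(np.clip(x - roi_size / 2, 0, w - roi_size))
        rois.append(image[y0:y0 + roi_size, x0:x0 + roi_size])
    return rois

# "example.tif" is a placeholder; a single-channel image is assumed.
image = io.imread("example.tif").astype(float)

# 1) Rolling-ball smoothing. The radius defaults to 50 px; the
#    recommendation is ~1.15-1.25x the average object size in pixels.
radius = 50
background = restoration.rolling_ball(image, radius=radius)
smoothed = image - background

# 2) Sample the background from 22 perimeter ROIs (automatic mode).
samples = perimeter_rois(smoothed)

# 3) Discard the brightest 33% and dimmest 33% of ROIs by mean intensity.
samples = sorted(samples, key=lambda r: r.mean())
n_trim = len(samples) // 3
kept = samples[n_trim:len(samples) - n_trim]

# 4) Threshold at background mean + 2 SD and set dimmer pixels to NaN so
#    they do not contribute to downstream cell-intensity measurements.
bg_pixels = np.concatenate([r.ravel() for r in kept])
threshold = bg_pixels.mean() + 2 * bg_pixels.std()
result = np.where(smoothed < threshold, np.nan, smoothed)
```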

IS BATCH PROCESSING AVAILABLE?

Batch processing features in Polygon allow users to run detection models on hundreds of images all at once. The AI models are locked so that they will not drift or learn, but we encourage users to continue to verify and approve ROI detections.

DOES THE PIXEL SCALE OF MY IMAGE MATTER FOR IMAGE QUANTIFICATION?

The size of the image and the physical scale are properties that won’t directly affect cell detection or intensity measurement. However, some labs choose to report cell intensity as mean intensity/area, in which case you would need to double-check the area calculation. If pixel scale information is not provided, metrics such as area will be returned in pixels rather than physical units.
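For instance, converting a pixel-based area to physical units is a single multiplication by the squared pixel scale; the numbers below are made-up values for illustration only:

```python
# Illustrative conversion of a pixel-based area to physical units.
pixel_size_um = 0.325     # microns per pixel, from microscope metadata (example value)
roi_area_px = 1840        # ROI area reported in pixels (example value)
mean_intensity = 1276.4   # mean pixel intensity of the ROI (example value)

roi_area_um2 = roi_area_px * pixel_size_um ** 2
intensity_per_area = mean_intensity / roi_area_um2

print(f"ROI area: {roi_area_um2:.1f} µm²")
print(f"Mean intensity per µm²: {intensity_per_area:.3f}")
```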

WHAT IS COLOCALIZATION AND HOW IS IT PERFORMED IN POLYGON?

This ImageJ article is a great exploration of why the choice of colocalization method matters. An excerpt: “…When we evaluate colocalization, we are usually attempting to demonstrate that a significant, non-random spatial correlation exists between two channels of a dual color image. The specific nature of that correlation, and what it means for your research, can vary quite a bit. It could mean that one signal of one channel is contained within the bounds of another, or that your stains/dyes are typically found separated by a certain distance or are generally clustered, or simply that the signal from both channels overlap each other when imaged at a particular spatial resolution. Importantly, colocalization results cannot indicate that two proteins/molecules are bound or interacting, only that they are both localized to within a certain volume, and is mostly dependent upon your microscope and its acquisition parameters. Regardless of your microscope, this volume is many, many times greater than the volume of a single protein. For this reason, colocalization is most often used to determine if a protein is localizing to an organelle or other well defined cellular structure…”

Colocalization is not as simple as looking for merged colors in a composite image (i.e., red and green channels overlapping to show yellow areas in a composite, a.k.a. merged, image). Most types of fluorescence microscopy use monochromatic detectors to capture photons that are sorted into channels by fluorescence emission filters. The resulting image is a reconstruction of the captured photons and can be represented in greyscale or falsely colored. Assuming colocalization based on the merging of red/green images is problematic because perceptual illusions can easily trick the brain into seeing colors or patterns that do not exist. For this reason, the best way to determine whether multiple channels are colocalized is quantitative evaluation that compares the independent channels directly, rather than looking for merged colors.

Polygon performs quantitative evaluation of colocalization using an object-based approach. In this method, objects in the independent channels are first detected or segmented to separate the objects of interest from the background in each channel. Colocalization is then evaluated from these detection coordinates, generally in one of three ways: A) object-based colocalization, which compares the area/volume of the intersection between the objects in each channel (this approach generally requires that the objects of interest from each channel actually overlap); B) adjacent-object colocalization, which compares the proximity of the detection coordinates between the channels; or C) multiple-objects-within colocalization, which compares the intersection between multiple objects in one channel and an object in another channel.

Which comparison method is used during analysis will depend on the exact question being asked. Being able to tailor the analysis to your specific circumstances is one of the biggest advantages of Polygon’s object-based analysis.
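As a rough sketch (not Polygon’s implementation), the two most common comparisons, overlap between segmented objects and distance between object centroids, can be written with NumPy and scikit-image. The helper names and the use of an overlap fraction rather than intersection-over-union are illustrative choices:

```python
import numpy as np
from skimage import measure

def overlap_fraction(mask_a, mask_b):
    """Fraction of channel-A object area that also lies inside channel-B
    objects (intersection over union is an equally common choice)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return intersection / mask_a.sum()

def nearest_centroid_distances(mask_a, mask_b):
    """Distance (in pixels) from each channel-A object centroid to the
    nearest channel-B object centroid, for adjacent-object colocalization."""
    cents_a = [r.centroid for r in measure.regionprops(measure.label(mask_a))]
    cents_b = np.array([r.centroid for r in measure.regionprops(measure.label(mask_b))])
    return [float(np.min(np.linalg.norm(cents_b - np.array(c), axis=1)))
            for c in cents_a]

# mask_a and mask_b are boolean segmentation masks of the two channels,
# e.g. produced by a detection model or by thresholding each channel.
# Toy example with two partially overlapping squares:
mask_a = np.zeros((100, 100), dtype=bool)
mask_b = np.zeros((100, 100), dtype=bool)
mask_a[20:40, 20:40] = True
mask_b[30:50, 30:50] = True

print(f"Overlap fraction: {overlap_fraction(mask_a, mask_b):.2f}")
print(f"Nearest-centroid distances: {nearest_centroid_distances(mask_a, mask_b)}")
```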

Transitioning From Pipsqueak to Polygon?

See below for some helpful information about upgrading to the newest version of Rewire’s automated image analysis platform.

What will happen to my projects in Pipsqueak?

All projects, images, and custom models will carry over from Pipsqueak Pro to Polygon after logging into Polygon with your existing Pipsqueak Pro credentials. Free accounts are allowed up to 1TB of file storage and up to three custom models. To add more storage, you can purchase another 1TB as an add-on, or upgrade to a Premium license for unlimited file storage and custom model training. To learn more about what is offered with Premium, or to purchase a Premium license, please select “Account” in Polygon, then select “Buy a License.”

Are there any changes to quantification methods in Polygon vs Pipsqueak?

No. All quantification and computing in Polygon is done identically to that in Pipsqueak Pro. All methods developed for Pipsqueak will still be relevant in Polygon.

Will I need to create a new login for Polygon?

No. Your login information for Polygon is exactly the same as it was for Pipsqueak Pro.

What if I used Pipsqueak Pro for a trial period but do not have an active license?

Your login information from your Pipsqueak Pro free trial will be the same for Polygon. Any projects, images, or models you used or created during your trial period will appear in Polygon when you complete the transition. There is no longer a “trial period” in Polygon; you will continue with a free Polygon license indefinitely and can upgrade to a Premium license at any time.

I am currently paying for a license to Pipsqueak Pro. What does this mean now that Polygon is free?

Any user with an active paid Pipsqueak Pro license when Polygon is released will be automatically upgraded to a Premium Polygon license for the remainder of their Pipsqueak Pro license term.