OpenText Media Analyzer
Advanced AI computer vision technology that identifies case-relevant pictures matching 12 pre-defined categories, including Pornography and Extremism. Media Analyzer reduces the amount of content investigators need to manually review, significantly decreasing the time to discover critical evidence.
Identifying case-relevant pictures in a criminal or civil investigation can be a very time-consuming process and is often like looking for a needle in a haystack. A single case can contain thousands of pictures, most of which are not relevant to the investigation. OpenText Media Analyzer, an optional add-on module for EnCase Forensics, allows investigators to harness the power of AI to automatically scan any picture files within the evidence for visual features that match the following pre-defined categories: Pornography, Child Abuse (CSAM), Extremism, Graphic Violence, Drugs, Weapons, ID/Credit Cards, Currency, Alcohol, Gambling, Swim/Underwear and Documents.
Identifies unknown child abuse imagery
Media Analyzer includes a Child Abuse (CSAM) category capable of detecting previously unseen illegal pictures that may be unknown to law enforcement. This allows investigators to quickly identify recently generated material and potential new victims.
Filter results based on risk
Once Media Analyzer has processed the evidence, the investigator can filter the results by category and confidence score. Only pictures matching the specified category and meeting the confidence score threshold are displayed in the gallery viewer, significantly reducing the time to discover critical evidence.
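The filtering concept described above can be illustrated with a minimal sketch. This is not Media Analyzer's actual API; the record structure, field names and confidence scale are assumptions made for illustration only:

```python
# Hypothetical classification results: one record per picture, with the
# matched category and a confidence score (assumed 0-100 scale).
results = [
    {"file": "IMG_0001.jpg", "category": "Weapons", "confidence": 92},
    {"file": "IMG_0002.jpg", "category": "Currency", "confidence": 35},
    {"file": "IMG_0003.jpg", "category": "Weapons", "confidence": 58},
]

def filter_results(records, category, threshold):
    """Keep only pictures matching the category at or above the threshold."""
    return [
        r for r in records
        if r["category"] == category and r["confidence"] >= threshold
    ]

# Show only high-confidence Weapons matches, as an investigator might
# when narrowing the gallery view to the strongest candidates.
matches = filter_results(results, "Weapons", 80)
```

Raising the threshold trades recall for precision: fewer pictures to review, at the risk of hiding weaker matches, which is why the threshold is left under the investigator's control.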