The PainQx ALGOS System is a machine learning-based medical device that objectively assesses the intensity of pain in chronic pain patients. Using a cascade of binary classifiers, the PainQx platform currently classifies subjects into pain levels of No Pain, Mild/Moderate Pain, or Severe Pain that correlate with the patient’s self-reported Numeric Rating Scale (NRS) Pain Score. To produce an objective pain assessment, the PainQx ALGOS System utilizes three core technologies to process the EEG and determine the pain classification:
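
As a rough illustration of the cascade structure, the sketch below shows how two binary decisions can yield the three pain levels. The stage ordering, the stand-in threshold rules, and the function names are assumptions made for illustration only; they are not PainQx’s actual classifiers.

    import numpy as np

    def stage_pain_vs_no_pain(features: np.ndarray) -> bool:
        # Hypothetical first binary stage (any pain vs. no pain). A real stage
        # would be a trained classifier; this stand-in thresholds one feature.
        return features[0] > 0.5

    def stage_severe_vs_mild_moderate(features: np.ndarray) -> bool:
        # Hypothetical second binary stage (severe vs. mild/moderate pain).
        return features[1] > 0.5

    def classify_pain(features: np.ndarray) -> str:
        # Cascade of binary classifiers yielding one of three pain levels.
        if not stage_pain_vs_no_pain(features):
            return "No Pain"
        if stage_severe_vs_mild_moderate(features):
            return "Severe Pain"
        return "Mild/Moderate Pain"

    print(classify_pain(np.array([0.8, 0.2])))  # -> Mild/Moderate Pain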

1. Artifacting and Epoch Selection

PainQx utilizes artifact identification routines to remove noise from the EEG signal. Common artifacts such as EMG, eye movement, and external interference are detected, and the epochs (fixed-length segments of EEG) containing them are discarded. The remaining epochs are processed to select the set of epochs most representative of steady-state brain activity.
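
A minimal sketch of this kind of epoch screening follows, assuming a single-channel recording, a simple peak-to-peak amplitude criterion for artifact rejection, and distance of each epoch’s variance from the median as a stand-in for “most representative of steady-state activity”; the actual PainQx routines are proprietary and not reproduced here.

    import numpy as np

    def reject_and_select_epochs(eeg: np.ndarray, fs: float,
                                 epoch_sec: float = 2.0,
                                 ptp_uv_limit: float = 100.0,
                                 n_keep: int = 20) -> np.ndarray:
        # 1) Split a single-channel recording into fixed-length epochs.
        samples_per_epoch = int(epoch_sec * fs)
        n_epochs = len(eeg) // samples_per_epoch
        epochs = eeg[:n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

        # 2) Discard epochs whose peak-to-peak amplitude suggests EMG, eye
        #    movement, or external interference (simple amplitude criterion).
        ptp = epochs.max(axis=1) - epochs.min(axis=1)
        clean = epochs[ptp < ptp_uv_limit]

        # 3) Keep the epochs whose variance is closest to the median of the
        #    surviving epochs, as a crude proxy for steady-state activity.
        variances = clean.var(axis=1)
        distance_to_median = np.abs(variances - np.median(variances))
        keep = np.argsort(distance_to_median)[:n_keep]
        return clean[keep]

    # Example with synthetic data: 60 s of noise sampled at 250 Hz
    rng = np.random.default_rng(0)
    selected = reject_and_select_epochs(rng.normal(0, 10, 60 * 250), fs=250.0)
    print(selected.shape)  # (n_selected_epochs, samples_per_epoch)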

2. Feature Extraction

From the selected set of “clean” epochs, PainQx’s feature extraction algorithms generate thousands of unique “features” of brain activity, also known as Quantitative EEG (qEEG) features. PainQx has developed a variety of proprietary features that, in conjunction with classic qEEG features such as absolute and relative power, make up the PainQx feature set.
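
The sketch below computes two of the classic qEEG features mentioned above, absolute and relative band power, for one epoch of one channel. The band edges and Welch settings are conventional textbook choices rather than PainQx’s, and the proprietary features are not represented.

    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_features(epoch: np.ndarray, fs: float) -> dict:
        # Estimate the power spectral density of the epoch with Welch's method.
        freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), int(2 * fs)))
        freq_res = freqs[1] - freqs[0]

        # Absolute power: approximate area under the PSD within each band.
        absolute = {}
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            absolute[name] = psd[mask].sum() * freq_res

        # Relative power: each band's power divided by the total across bands.
        total = sum(absolute.values())
        features = {}
        for name, power in absolute.items():
            features[f"abs_{name}"] = power
            features[f"rel_{name}"] = power / total
        return features

    # Example: a synthetic 2 s epoch at 250 Hz with a 10 Hz (alpha) rhythm
    fs = 250.0
    t = np.arange(0, 2, 1 / fs)
    epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
    print(band_power_features(epoch, fs))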

3. Classification

The qEEG features derived for a particular patient are fed into the classification algorithm, which determines the patient’s pain classification. The classification algorithm deployed in the commercial ALGOS system will be a fixed set of mathematical and logical operations established over the course of ALGOS development. Iterations of the algorithm over time will be handled through a series of 510(k) regulatory submissions.
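
To illustrate what a fixed set of mathematical and logical operations can look like at deployment time, the sketch below applies one frozen binary stage: a weighted sum of qEEG features compared against a threshold, with no learning occurring in the deployed system. The weights, bias, and threshold are invented for illustration and are not PainQx’s.

    import numpy as np

    # Hypothetical frozen parameters; in a deployed system these would be the
    # constants established during development, not values learned at run time.
    FROZEN_WEIGHTS = np.array([0.8, -1.2, 0.4])
    FROZEN_BIAS = -0.1
    DECISION_THRESHOLD = 0.0

    def frozen_binary_stage(feature_vector: np.ndarray) -> bool:
        # Fixed arithmetic (weighted sum plus bias) followed by fixed logic
        # (comparison against a threshold) -- deterministic for a given input.
        score = float(np.dot(FROZEN_WEIGHTS, feature_vector) + FROZEN_BIAS)
        return score > DECISION_THRESHOLD

    print(frozen_binary_stage(np.array([0.5, 0.2, 1.0])))  # deterministic output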

Product Development Science & Technology

Constructing and optimizing the algorithm used for pain classification requires a significant amount of technology that is not deployed with the commercial system.

Curated Dataset from Multiple Sites

PainQx, in part through an exclusive license with the NYU School of Medicine, has established a dataset of over 600 EEG recordings with a specific focus on pain assessment. The dataset spans five collection sites and three EEG acquisition devices, with all data collected under protocols whose sole purpose is to support pain assessment research. To PainQx’s knowledge, this is the largest dataset of its kind, and it continues to grow. As the development dataset grows to provide a more complete representation of chronic pain, PainQx expects the performance of its pain assessment algorithm to increase. In addition to raw EEG files, the PainQx algorithm development database contains, for each case, a complete set of qEEG features derived using the artifacting, epoch selection, and feature extraction software modules described above, along with additional clinical data collected from the subjects. All data in the PainQx Development Dataset is de-identified, satisfying HIPAA requirements.
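
For illustration only, a record in a development database of this kind might be structured roughly as follows; the field names and types are assumptions, not PainQx’s actual schema.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class DevelopmentCase:
        # Hypothetical layout for one de-identified case in an algorithm
        # development database: raw EEG pointer, derived qEEG features,
        # clinical data, and the self-reported pain label.
        case_id: str                          # de-identified identifier (no PHI)
        collection_site: str                  # one of the collection sites
        eeg_device: str                       # one of the acquisition devices
        raw_eeg_path: str                     # pointer to the raw EEG file
        qeeg_features: Dict[str, float]       # derived features (e.g., abs/rel power)
        nrs_pain_score: Optional[int] = None  # self-reported NRS label, 0-10

    example = DevelopmentCase(
        case_id="case-0001",
        collection_site="site_A",
        eeg_device="device_1",
        raw_eeg_path="eeg/case-0001.edf",
        qeeg_features={"abs_alpha": 12.3, "rel_alpha": 0.31},
        nrs_pain_score=6,
    )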

Machine Learning Technology

The role of Machine Learning (ML) in PainQx product development is to computationally analyze the set of cases contained in the PainQx algorithm development dataset, looking for patterns in the data and identifying those features that, when combined, provide the most accurate classification of subjects. PainQx has utilized a variety of machine learning tools to determine which are the best fit for the problem space, and also to facilitate looking across tools to identify the most powerful qEEG features regardless of the specific ML tool being used.
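
The sketch below illustrates the general idea of looking across tools: several different learners are fit to the same (here, synthetic) feature matrix, each tool scores the features, and the per-tool rankings are averaged so that features that look strong regardless of the tool rise to the top. The scikit-learn models and the rank-averaging rule are generic stand-ins, not PainQx’s actual toolchain.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for a qEEG feature matrix (rows = cases, cols = features).
    X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                               random_state=0)

    # Fit several different learners to the same data.
    models = {
        "random_forest": RandomForestClassifier(random_state=0).fit(X, y),
        "gradient_boosting": GradientBoostingClassifier(random_state=0).fit(X, y),
        "logistic_regression": LogisticRegression(max_iter=1000).fit(X, y),
    }

    # Score each feature with each tool (absolute coefficients serve as the
    # importance proxy for the linear model).
    scores = {
        "random_forest": models["random_forest"].feature_importances_,
        "gradient_boosting": models["gradient_boosting"].feature_importances_,
        "logistic_regression": np.abs(models["logistic_regression"].coef_[0]),
    }

    # Rank features within each tool, then average ranks across tools so that
    # features that look powerful regardless of the ML tool float to the top.
    ranks = np.array([np.argsort(np.argsort(-s)) for s in scores.values()])
    mean_rank = ranks.mean(axis=0)
    print("Top features across tools:", np.argsort(mean_rank)[:5])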

PainQx is focused on a form of machine learning referred to as “supervised learning”. A prediction model is constructed using a subset of the collected data, referred to as the train/test set. The remaining subset of the data is used as a “hold-out” for validating the model established using the train/test dataset. The machine learning tools are run on the train/test dataset, and control parameters are adjusted to optimize performance assessed via cross-validation. This step is carried out carefully, emphasizing features supported by domain knowledge while minimizing the potential for overtraining. Following algorithm optimization using the train/test dataset, the performance of the resulting prediction model is measured on the hold-out dataset. This creates a ‘blinded’ performance measure that can be used to predict performance on future datasets.
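
A generic sketch of this workflow, using scikit-learn and synthetic data as stand-ins for the PainQx tools and dataset, is shown below: control parameters are tuned by cross-validation on the train/test portion only, and the frozen model is scored once on the hold-out portion to obtain the blinded performance estimate.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import GridSearchCV, train_test_split

    # Synthetic stand-in for labeled qEEG feature data.
    X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                               random_state=0)

    # Split once into a train/test set (used for model building and tuning) and
    # a hold-out set that is not touched until the final, blinded evaluation.
    X_traintest, X_holdout, y_traintest, y_holdout = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    # Adjust control parameters via cross-validation on the train/test set only,
    # limiting the opportunity for overtraining to leak into the final estimate.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [3, 5, None]},
        cv=5,
    )
    search.fit(X_traintest, y_traintest)

    # Measure the optimized model once on the hold-out set; this score is the
    # blinded estimate of performance on future data.
    holdout_accuracy = accuracy_score(y_holdout,
                                      search.best_estimator_.predict(X_holdout))
    print(f"Cross-validated score: {search.best_score_:.3f}  "
          f"Hold-out score: {holdout_accuracy:.3f}")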