This paper proposes a novel approach to continuous Arabic Sign Language recognition. We use a dataset of 40 sentences composed of an 80-word sign language lexicon, collected using sensor-based gloves. We propose a set of features suited to sensor readings, based on covariance, smoothness, entropy, and uniformity, together with a classification approach based on a polynomial classifier modified for sequential data. The classification scheme takes the context of the feature vectors into account by filtering the predicted class labels with median and mode filters. Compared against a vision-based solution, the proposed approach performs better, achieving a sentence recognition rate of 85%.
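The contextual post-processing step described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it only shows the general idea of mode-filtering a sequence of per-frame predicted class labels with a sliding window, so that isolated misclassifications are overridden by their temporal context (the window size of 5 is an assumption for illustration):

```python
from statistics import mode

def mode_filter(labels, window=5):
    """Replace each predicted label by the most frequent label
    in a centred sliding window (illustrative sketch only)."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo = max(0, i - half)
        hi = min(len(labels), i + half + 1)
        # statistics.mode returns the most common label in the window
        smoothed.append(mode(labels[lo:hi]))
    return smoothed

# A spurious single-frame misclassification (label 7) is smoothed away:
noisy = [3, 3, 3, 7, 3, 3, 5, 5, 5, 5]
print(mode_filter(noisy))  # [3, 3, 3, 3, 3, 3, 5, 5, 5, 5]
```

A median filter over the label sequence works analogously, replacing `mode` with a median over the window; mode filtering is the more natural choice when class labels are categorical rather than ordinal.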