The present disclosure describes a multi-initialization ensemble-based defense strategy against adversarial attacks. In one embodiment, an exemplary method includes training a plurality of convolutional neural networks (CNNs) with a training set of images, wherein the images include original images and images modified by an adversarial attack; after training of the plurality of convolutional neural networks, providing an input image to the plurality of convolutional neural networks, wherein the input image has been modified by an adversarial attack; receiving a probability output for the input image from each of the plurality of convolutional neural networks; producing an ensemble probability output for the input image by combining the probability outputs from each of the plurality of convolutional neural networks; and labeling the input image as belonging to one of one or more categories based on the ensemble probability output.
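The combination and labeling steps above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the disclosure does not specify the combination rule, so simple averaging of the per-network probability outputs is assumed here, and the category names and the `ensemble_label` helper are hypothetical.

```python
import numpy as np

def ensemble_label(prob_outputs, categories):
    """Combine per-network probability outputs (assumed: by averaging),
    then label the input with the highest-probability category."""
    # Stack the probability vectors from each trained network and average.
    ensemble_probs = np.mean(np.stack(prob_outputs, axis=0), axis=0)
    # Label the input as the category with the highest ensemble probability.
    return categories[int(np.argmax(ensemble_probs))], ensemble_probs

# Example: three networks, each trained from a different random
# initialization, score an adversarially modified input over two
# hypothetical categories.
probs = [
    np.array([0.70, 0.30]),
    np.array([0.55, 0.45]),
    np.array([0.40, 0.60]),
]
label, combined = ensemble_label(probs, ["benign", "malignant"])
# combined ≈ [0.55, 0.45]; label is "benign"
```

The intuition behind the multi-initialization ensemble is that an adversarial perturbation crafted against one network's decision boundary is less likely to fool several independently initialized networks at once, so the averaged probabilities are more robust than any single network's output.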