Authors
Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, Radha Poovendran
Publication date
2017/12/18
Conference
2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)
Pages
352-358
Publisher
IEEE
Description
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance on a variety of computer vision tasks, particularly visual classification problems, where new algorithms have been reported to match or even surpass human performance. In this paper, we examine whether CNNs are capable of learning the semantics of training data. To this end, we evaluate CNNs on negative images, since they share the same structure and semantics as regular images and humans can classify them correctly. Our experimental results indicate that when a model is trained on regular images and tested on negative images, its accuracy is significantly lower than when it is tested on regular images. This leads us to the conjecture that current training methods do not effectively train models to generalize the concepts they learn. We then introduce the notion of semantic adversarial examples - transformed inputs that semantically represent …
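As an illustration of the evaluation protocol described above, the following is a minimal sketch (not the authors' code) of testing a CNN on negative images, i.e., images whose pixel intensities are replaced by their complements. The dataset choice, batch size, and the assumption that `model` is a CNN already trained on regular images are placeholders.

```python
# Sketch: evaluate a trained CNN on negative (pixel-inverted) images.
# Assumes `model` is a classifier trained on regular, non-inverted images.
import torch
import torchvision
import torchvision.transforms as T

# Negative-image transform: convert to a [0, 1] tensor, then take the
# pixel-wise complement 1 - x to obtain the negative image.
negative_transform = T.Compose([
    T.ToTensor(),
    T.Lambda(lambda x: 1.0 - x),
])

test_set = torchvision.datasets.MNIST(
    root="./data", train=False, download=True, transform=negative_transform
)
loader = torch.utils.data.DataLoader(test_set, batch_size=256)

def accuracy(model, loader, device="cpu"):
    """Fraction of test samples the model classifies correctly."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.size(0)
    return correct / total

# print(f"accuracy on negative images: {accuracy(model, loader):.3f}")
```

Comparing this number against the accuracy on the untransformed test set reproduces the comparison the paper describes: a large gap indicates the model has not learned the semantics shared by regular and negative images.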
Total citations
[Citation histogram: citations per year, 2017–2024]