Table of Contents
- Also test this for adversarial perturbation.
- Robustness Metric
  o Robustness in Classification
- Related Works
- Methodology
  o Datasets: Caltech-101, CIFAR10, CIFAR100, Oxford-IIIT Pets, ImageNet-1K validation
  o Models: ResNet-18 to ResNet-152, Swin, ViT, ConvNeXt
  o All subparts of the methodology, one by one
  o Brief description of all datasets and models used
- Experiments and Results
- Conclusion
  o Future Prospects
  o Publications
- References
- Time Complexity: inference time (pre- and post-robustness verification), model parameters vs. FLOPs vs. time
Abstract
Chapter 1. Introduction
1.1 Image classification and background
1.2 Challenges and the Need for Explainable Artificial Intelligence
1.3 Robustness Verification as a solution for Trustworthy AI
1.4 Adversarial Robustness techniques
Chapter 2. Robustness Verification and its Applications
2.1 Robustness Verification in Classification Tasks
2.2 Robustness Verification in Regression Tasks
Chapter 3. Datasets covered under Robustness Verification
3.1 ImageNet-1K Validation Dataset
3.2 CIFAR10
3.3 CIFAR100
3.4 Oxford-IIIT Pets Dataset
3.5 Caltech-101 Dataset
Chapter 4. Robustness Verification Methodology
4.1 Image Perturbations
4.2 Perturbation Weights
4.3 Robustness Analysis and Confidence Interval
4.4 Conventional Metrics
4.4.1 Top-1 and Top-5 Accuracy
4.5 Novel metrics developed for our study
4.5.1 Weighted Mean and Weighted Standard Deviation
4.5.2 Robustness Score
4.5.3 Prediction Stability Score
4.5.4 Confidence Interval
4.6 Cases for Confidence Interval
Chapter 5. Experimental Analysis and Results
5.1 Quantitative Analysis
5.2 Qualitative Analysis
5.3 Ablation Studies
5.4 Random noise factor
5.4.1 Additional Ablation Studies
5.4.2 SSIM
5.4.3 MSE
5.4.4 Ablation Study on β vs. Similarity Metrics
Chapter 6. Conclusion
Chapter 7. Publications
Chapter 8. Future Work
Chapter 9. References