Mulder, Kevin;
(2023)
Adversarial training to improve robustness of adversarial deep neural classifiers in the NOvA experiment.
Doctoral thesis (Ph.D), UCL (University College London).
Mulder_10167208_Thesis.pdf (55MB)
Abstract
The NOvA experiment is a long-baseline neutrino oscillation experiment consisting of two functionally identical detectors situated off-axis in Fermilab’s NuMI neutrino beam. The Near Detector observes the unoscillated beam at Fermilab, while the Far Detector observes the oscillated beam 810 km away. This allows measurements of the oscillation probabilities for multiple oscillation channels, ν_µ → ν_µ, ν̄_µ → ν̄_µ, ν_µ → ν_e and ν̄_µ → ν̄_e, leading to measurements of the neutrino oscillation parameters sin^2 θ_23, ∆m^2_32 and δ_CP. These measurements are produced from an extensive analysis of the recorded data, in which deep neural networks are deployed at multiple stages. The Event CVN network is deployed to identify and classify the interaction types of selected neutrino events. The effects on network performance of the systematic uncertainties present in the measurements are investigated and found to cause negligible variations. This demonstrates the robustness of these network trainings, further justifying their current usage in the analysis beyond the standard validation. The effects on network performance of larger systematic alterations to the training datasets, beyond the systematic uncertainties, such as an exchange of the neutrino event generators, are also investigated; the differences in network performance corresponding to the introduced variations are found to be minimal. Finally, domain adaptation techniques are implemented in the AdCVN framework and deployed to improve the robustness of the Event CVN in scenarios with systematic variations in the underlying data.
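The abstract names domain adaptation as the mechanism behind AdCVN without detailing it. A common instance of adversarial domain adaptation is a gradient-reversal layer (as in domain-adversarial neural networks): a shared feature extractor feeds both a class head and a domain head, and the domain head's gradient is negated before reaching the extractor, pushing the learned features to be indistinguishable across domains (e.g. two event generators). The sketch below is a minimal, hypothetical numpy illustration with hand-written gradients — all names, shapes, and data are illustrative assumptions, not the actual AdCVN implementation (which is a convolutional network).

```python
import numpy as np

# Hypothetical sketch of domain-adversarial training with a gradient-reversal
# layer. "Source" and "target" stand in for two systematically different
# training datasets (e.g. two neutrino event generators); labels exist only
# for the source domain. All shapes and hyperparameters are illustrative.

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (64, 8))           # source-domain features
Xt = rng.normal(0.5, 1.0, (64, 8))           # target-domain features (shifted)
ys = (Xs[:, 0] > 0).astype(float)            # class labels, source only

W_f = rng.normal(0, 0.1, (8, 4))             # shared feature extractor
W_c = rng.normal(0, 0.1, (4, 1))             # classification head
W_d = rng.normal(0, 0.1, (4, 1))             # domain-discriminator head
lam, lr = 1.0, 0.1                           # reversal strength, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    # Forward pass.
    Fs, Ft = Xs @ W_f, Xt @ W_f
    p_cls = sigmoid(Fs @ W_c)[:, 0]                        # class probs (source)
    F_all = np.vstack([Fs, Ft])
    d_true = np.concatenate([np.zeros(64), np.ones(64)])   # domain labels
    p_dom = sigmoid(F_all @ W_d)[:, 0]                     # domain probs

    # Backward pass: binary cross-entropy gradients w.r.t. the logits.
    g_cls = (p_cls - ys)[:, None] / 64
    g_dom = (p_dom - d_true)[:, None] / 128

    grad_Wc = Fs.T @ g_cls
    grad_Wd = F_all.T @ g_dom
    # Gradient reversal: the class gradient flows into the feature extractor
    # normally, while the domain gradient is negated (scaled by -lam), so the
    # extractor learns features the domain head cannot separate.
    grad_Wf = Xs.T @ (g_cls @ W_c.T) \
              - lam * np.vstack([Xs, Xt]).T @ (g_dom @ W_d.T)

    W_c -= lr * grad_Wc
    W_d -= lr * grad_Wd
    W_f -= lr * grad_Wf

acc = np.mean((sigmoid(Xs @ W_f @ W_c)[:, 0] > 0.5) == ys)
print(f"source classification accuracy: {acc:.2f}")
```

The heads minimise their own losses as usual; only the sign flip on the domain gradient entering `W_f` makes the training adversarial, which is the same idea as training the feature extractor to maximise the domain-discriminator's loss.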
Type: | Thesis (Doctoral) |
---|---|
Qualification: | Ph.D |
Title: | Adversarial training to improve robustness of adversarial deep neural classifiers in the NOvA experiment |
Open access status: | An open access version is available from UCL Discovery |
Language: | English |
Additional information: | Copyright © The Author 2023. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request. |
UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences > Dept of Physics and Astronomy |
URI: | https://discovery-pp.ucl.ac.uk/id/eprint/10167208 |