Research Intern at CERN
email: garimasingh0028@gmail.com
Education: B. Tech in Information Technology, Manipal Institute of Technology, Manipal, India
Completed project:
Add Automatic Differentiation to RooFit
RooFit is a toolkit for statistical modeling and fitting used by most
experiments in particle physics. As data sets from next-generation
experiments grow, the processing requirements for physics analysis become
more computationally demanding, necessitating performance optimizations
for RooFit. One way to speed up minimization and add stability is the use
of automatic differentiation (AD). Unlike with numerical differentiation,
the computational cost of AD scales linearly with the number of
parameters, making AD particularly appealing for statistical models with
many parameters. The goal of this project is to add preliminary support
for AD in RooFit and to develop benchmarks that demonstrate the advantages
of AD for large statistical fits using RooFit (a small illustrative sketch
of the Clad workflow this builds on follows this entry).
Project Proposal: URL
Mentors: Vassil Vassilev, David Lange
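
The sketch below shows the kind of reverse-mode gradient that this project relies on. It is not RooFit code: the Gaussian negative log-likelihood and the parameter names are illustrative assumptions, but the clad::gradient and execute calls follow Clad's documented interface.

// Minimal sketch (not actual RooFit code): reverse-mode AD with Clad.
// Build with Clang and the Clad plugin loaded, e.g.
//   clang++ -fplugin=/path/to/clad.so sketch.cpp
#include "clad/Differentiator/Differentiator.h"
#include <cmath>
#include <cstdio>

// Illustrative negative log-likelihood of one Gaussian observation x
// with parameters mu and sigma (constant terms dropped).
double nll(double x, double mu, double sigma) {
  double z = (x - mu) / sigma;
  return 0.5 * z * z + std::log(sigma);
}

int main() {
  // A single reverse-mode pass produces all partial derivatives of nll.
  auto dnll = clad::gradient(nll);
  double dx = 0, dmu = 0, dsigma = 0;
  dnll.execute(/*x=*/1.2, /*mu=*/1.0, /*sigma=*/2.0, &dx, &dmu, &dsigma);
  std::printf("d/dmu = %g, d/dsigma = %g\n", dmu, dsigma);
  return 0;
}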
Completed project:
Add Numerical Differentiation Support in Clad
In mathematics and computer algebra, automatic differentiation (AD) is a
set of techniques to numerically evaluate the derivative of a function
specified by a computer program. It is an alternative to symbolic
differentiation and to numerical differentiation (the method of finite
differences). Clad is based on Clang, which provides the necessary
facilities for code transformation. The AD library can differentiate
non-trivial functions, find partial derivatives for trivial cases, and has
good unit-test coverage. In several cases, however, various limitations
make it either inefficient or impossible to differentiate a function; for
example, Clad cannot differentiate functions that are declared but not
defined and currently issues an error. Instead, Clad should fall back to
its future numerical differentiation facilities (a sketch of the
finite-difference idea behind such a fallback follows this entry).
Project Proposal: URL
Project Reports: Final Report
Mentors: Vassil Vassilev, Alexander Penev
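
The fallback this project introduced is based on finite differences. The snippet below is a hand-written sketch of the central-difference rule that underlies such a fallback; it does not show Clad's generated code or its actual numerical-differentiation API, and the helper name is an illustrative assumption.

#include <cmath>
#include <cstdio>
#include <limits>

// Sketch of the method of finite differences (central rule):
//   f'(x) ~ (f(x + h) - f(x - h)) / (2h)
// This is the kind of estimate a numerical fallback can use when
// source-level AD is impossible, e.g. for a declared-but-not-defined
// function whose body the tool cannot see.
template <typename Fn>
double central_difference(Fn f, double x) {
  // Step size relative to x, balancing truncation and round-off error.
  double h = std::sqrt(std::numeric_limits<double>::epsilon()) *
             std::fmax(std::fabs(x), 1.0);
  return (f(x + h) - f(x - h)) / (2.0 * h);
}

int main() {
  auto f = [](double x) { return std::sin(x); };
  std::printf("approx f'(1) = %g, exact cos(1) = %g\n",
              central_difference(f, 1.0), std::cos(1.0));
  return 0;
}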
Completed project:
Floating point error evaluation with Clad
Floating-point errors are a consequence of the finite nature of computing,
and the predominance of floating-point numbers in real-valued computation
makes them unavoidable. Floating-point computations are highly dependent
on precision, and in most cases very high precision is either unavailable
or prohibitively inefficient. One therefore has to resort to
lower-precision computation, which is prone to error. These errors can
lead to inaccurate and sometimes catastrophic results; hence, it is
imperative to estimate them accurately. This project aims to use Clad, a
source-transformation AD tool for C++ implemented as a plugin for the
Clang compiler, to develop a generic error estimation framework that is
not bound to a particular error approximation model. It will allow users
to select their preferred estimation logic and automatically generate
functions augmented with code for the specified error estimator (a sketch
of one such error model follows this entry).
Project Proposal: URL
Project Reports: GSoC 2020 Archive
Mentors: Vassil Vassilev, David Lange
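
One common error model of this kind propagates the rounding error of each intermediate result using the derivatives an AD tool provides. The snippet below is a hand-written sketch of that idea for a tiny function; it is not the code the framework generates, and the function and variable names are illustrative assumptions.

#include <cmath>
#include <cstdio>
#include <limits>

// Sketch of a first-order floating-point error model: each intermediate
// value v contributes roughly |df/dv| * |v| * eps to the absolute error
// of the result; an AD tool supplies the |df/dv| factors automatically.
// Illustrative function: f(x, y) = x * y + x, where both derivative
// factors happen to be 1.
double f_with_error_estimate(double x, double y, double& error) {
  const double eps = std::numeric_limits<double>::epsilon();

  double t = x * y;  // intermediate value, |df/dt| = 1
  double r = t + x;  // final result,       |df/dr| = 1

  // Accumulate |df/dv| * |v| * eps for every assignment.
  error = std::fabs(t) * eps   // rounding error introduced by t
        + std::fabs(r) * eps;  // rounding error introduced by r
  return r;
}

int main() {
  double err = 0.0;
  double r = f_with_error_estimate(1e8, 3.0, err);
  std::printf("f = %g, estimated absolute error <= %g\n", r, err);
  return 0;
}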