Simeon Ehrig, Game of Life on GPU Using Cling-CUDA (2021-11-09).
- This tutorial demonstrates some of the capabilities of Cling-CUDA and Jupyter Notebooks
and gives an idea of what you can do with C++ in a web browser. The example
shows the usual workflow of simulation and analysis: the simulation runs
Conway’s Game of Life on a GPU (a kernel sketch follows below).
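A minimal sketch of one Game of Life update step as a CUDA kernel, roughly as it might appear in a Cling-CUDA notebook cell. The kernel name, the toroidal boundary handling, and the int-per-cell layout are illustrative assumptions, not the tutorial's actual code.

```cpp
// One generation of Conway's Game of Life; each thread updates one cell.
__global__ void gol_step(const int* in, int* out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Count live neighbors with periodic (toroidal) boundaries.
    int alive = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            int nx = (x + dx + width) % width;
            int ny = (y + dy + height) % height;
            alive += in[ny * width + nx];
        }

    // Conway's rules: a live cell survives with 2 or 3 neighbors;
    // a dead cell becomes alive with exactly 3.
    int cell = in[y * width + x];
    out[y * width + x] = (alive == 3 || (cell && alive == 2)) ? 1 : 0;
}
```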
Garima Singh, Floating-Point Error Estimation Using Automatic Differentiation with Clad (2021-08-21).
- Clad provides a built-in floating-point error estimation framework that
can automatically annotate code with error-estimation information. The
framework also lets users write their own error models and use them to
generate error estimates. The aim of this tutorial is to demonstrate how
to build a simple custom error model and use it in conjunction with Clad’s
error estimation framework (a sketch of the default workflow follows below).
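A minimal sketch of Clad's error-estimation entry point, clad::estimate_error, using the default built-in model; a custom model plugs into this same workflow. The example function and values are illustrative, and the exact generated signature may vary across Clad versions.

```cpp
#include "clad/Differentiator/Differentiator.h"
#include <iostream>

double func(double x, double y) { return x * y + y; }

int main() {
  // Generate a gradient of func that is also instrumented with
  // error-estimation code for the active error model.
  auto df = clad::estimate_error(func);
  double dx = 0, dy = 0, final_error = 0;
  df.execute(2.0, 3.0, &dx, &dy, final_error);
  std::cout << "estimated floating-point error: " << final_error << "\n";
}
```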
Ioana Ifrim, Interactive Automatic Differentiation With Clad and Jupyter Notebooks (2021-08-20).
- xeus-cling provides a Jupyter kernel for C++ with the help of the C++
interpreter cling and the native implementation of the Jupyter protocol,
xeus. Within the xeus-cling framework, Clad can enable automatic
differentiation (AD) so that users can automatically generate C++ code
that computes the derivatives of their functions. In mathematical
optimization, the Rosenbrock function is a non-convex function used as a
performance test problem; this tutorial shows how to compute the
function’s derivatives by employing either Clad’s forward mode or its
reverse mode (a sketch follows below).
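A minimal sketch of both modes applied to the 2-D Rosenbrock function, following Clad's documented clad::differentiate and clad::gradient API; in a xeus-cling notebook each step would live in its own cell, and the main() wrapper here is only for standalone use.

```cpp
#include "clad/Differentiator/Differentiator.h"
#include <iostream>

// 2-D Rosenbrock: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2.
double rosenbrock(double x, double y) {
  return (1 - x) * (1 - x) + 100 * (y - x * x) * (y - x * x);
}

int main() {
  // Forward mode: differentiate with respect to one variable at a time.
  auto df_dx = clad::differentiate(rosenbrock, "x");
  std::cout << "df/dx = " << df_dx.execute(1.0, 2.0) << "\n";

  // Reverse mode: one generated function yields the full gradient.
  auto grad = clad::gradient(rosenbrock);
  double dx = 0, dy = 0;
  grad.execute(1.0, 2.0, &dx, &dy);
  std::cout << "grad = (" << dx << ", " << dy << ")\n";
}
```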
Ioana Ifrim, How to Execute Gradients Generated by Clad on a CUDA GPU (2021-08-20).
- Clad provides automatic differentiation (AD) for C/C++ and works without
code modification, so it can also be applied to legacy code. Because AD is
typically applied to problems with high computational requirements, those
problems can benefit greatly from parallel implementations on graphics
processing units (GPUs). This tutorial showcases, first, how to use Clad
to obtain your function’s gradient and, second, how to schedule the
function’s execution on the GPU (a conceptual sketch follows below).
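A conceptual sketch of those two steps, under the assumption, borrowed from Clad's CUDA demos, that a Clad-generated gradient of a __device__ __host__ function can be invoked from a kernel; the function, kernel, and buffer names are hypothetical, and the exact mechanics may differ between Clad versions.

```cpp
#include "clad/Differentiator/Differentiator.h"

// Function to differentiate; __device__ __host__ so the generated
// derivative can run on the GPU as well as the CPU.
__device__ __host__ double f(double x, double y) { return x * x + y * y; }

// Step 1: generate the gradient once, on the host.
auto f_grad = clad::gradient(f);

// Step 2: schedule the gradient's execution on the GPU,
// one thread per input pair.
__global__ void grad_kernel(const double* x, const double* y,
                            double* dx, double* dy, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n) return;
  double gx = 0, gy = 0;
  f_grad.execute(x[i], y[i], &gx, &gy);  // Clad-generated gradient call
  dx[i] = gx;
  dy[i] = gy;
}
```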