## Index of Key Terms and Abbreviations

**Autograd**

Short for automatic gradient (automatic differentiation): the machinery that implements backpropagation.

**Backpropagation**

Backpropagation is an algorithm for efficiently evaluating the gradient of a loss function with respect to the weights of a neural network. With those gradients, we can iteratively tune the weights to minimize the loss and therefore improve the accuracy of the model.

From Wikipedia: Backpropagation computes the gradient of a loss function with respect to the weights of the network for a single input–output example, and does so efficiently, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule.
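As a tiny worked example of the chain rule that backpropagation applies, here is a two-operation graph differentiated by hand, last node to first (plain Python for illustration, not micrograd code):

```python
# Forward pass: L = a*b + c
a, b, c = 2.0, -3.0, 10.0
e = a * b          # intermediate node
L = e + c          # the "loss"

# Backward pass, one node at a time, from the last node back:
dL_dL = 1.0
dL_de = 1.0 * dL_dL   # d(e + c)/de = 1
dL_dc = 1.0 * dL_dL   # d(e + c)/dc = 1
dL_da = b * dL_de     # chain rule: dL/da = (dL/de) * (de/da), de/da = b
dL_db = a * dL_de     # chain rule: dL/db = (dL/de) * (de/db), de/db = a

print(dL_da, dL_db, dL_dc)  # -3.0 2.0 1.0
```

Note how `dL_de` is computed once and reused for both `dL_da` and `dL_db`; that reuse of intermediate terms is exactly the efficiency the Wikipedia definition describes.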

#### Tanh: hyperbolic tangent function

`Plot[Tanh[x], {x, -4, 4}]` (Mathematica)

We can see that regardless of the plotted x-range, the curve always plateaus around -1 and 1 on the y-axis.

Formula: $\tanh(x) = \dfrac{e^{2x} - 1}{e^{2x} + 1}$
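A quick sanity check of this formula against Python's built-in `math.tanh` (my own sketch, not part of the original notes):

```python
import math

def tanh_from_exp(x):
    # tanh(x) = (e^(2x) - 1) / (e^(2x) + 1), the closed form above
    e2x = math.exp(2 * x)
    return (e2x - 1) / (e2x + 1)

# The two agree to floating-point precision across the plotted range
for x in (-4.0, -1.0, 0.0, 0.5, 4.0):
    assert abs(tanh_from_exp(x) - math.tanh(x)) < 1e-12

# The plateau: at x = 4 the output is already within ~0.001 of 1
print(tanh_from_exp(4.0))  # ≈ 0.9993
```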

## Why micrograd?

Micrograd: building an autograd engine from scratch (understanding how things work at a fundamental level)

Steps in building a neural network:

- define the neuron
- define a layer of neurons
- define the MLP (multi-layer perceptron)
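The three steps above can be sketched as plain-Python classes. This is a hedged sketch using floats and `math.tanh` rather than micrograd's `Value` objects; the class names mirror micrograd's `Neuron`/`Layer`/`MLP`, but the details are assumptions:

```python
import math
import random

class Neuron:
    """One neuron: tanh(w . x + b)."""
    def __init__(self, nin):
        self.w = [random.uniform(-1, 1) for _ in range(nin)]
        self.b = random.uniform(-1, 1)
    def __call__(self, x):
        act = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return math.tanh(act)

class Layer:
    """A layer is just a list of independent neurons fed the same input."""
    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]
    def __call__(self, x):
        return [n(x) for n in self.neurons]

class MLP:
    """Multi-layer perceptron: layers chained one after another."""
    def __init__(self, nin, nouts):
        sizes = [nin] + nouts
        self.layers = [Layer(sizes[i], sizes[i + 1]) for i in range(len(nouts))]
    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

net = MLP(3, [4, 4, 1])        # 3 inputs -> two hidden layers of 4 -> 1 output
out = net([2.0, 3.0, -1.0])
print(out)                     # one tanh-squashed value in (-1, 1)
```

Because every output passes through tanh, each neuron's activation is bounded to (-1, 1), matching the plateau discussed in the tanh section.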

This image was generated with matplotlib (source: my github’s ai repo, should link). What is the derivative of this function, $3x^{2} - 4x + 5$, at any single point $x$? (rhetorical question)

… more to be updated soon (currently starting on the core `Value` object of micrograd and its visualization); code with current progress:
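In the meantime, here is a minimal sketch of what that `Value` object looks like, assumed from micrograd's general design (only `+` and `*` are supported here; the real object has more operations):

```python
class Value:
    """A scalar that remembers its children and how to pass gradients back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # closure set by the op that made this node
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1, so the gradient just flows through
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b and d(a*b)/db = a, times the incoming gradient
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological order so each node runs after everything downstream of it
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(-3.0)
L = a * b + 10.0
L.backward()
print(a.grad, b.grad)  # dL/da = -3.0, dL/db = 2.0
```

Running this reproduces by machine the same gradients computed by hand in the chain-rule discussion under Backpropagation above.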