12 changes: 4 additions & 8 deletions C1-TensorFlow.qmd
@@ -778,10 +778,7 @@ mean_Torch(y) == mean(y)
::: {.callout-caution icon="false"}
#### Question: Runtime

1. What is the meaning of "An effect is not significant"?
2. Is an effect with three \*\*\* more significant / certain than an effect with one \*?

`r hide("Click here to see the solution")` This exercise compares the speed of R to torch The first exercise is to rewrite the following function in torch:
This exercise compares the speed of R to torch. The first exercise is to rewrite the following function in torch:

```{r chunk_chapter3_task_torch_8, eval=TRUE}
do_something_R = function(x = matrix(0.0, 10L, 10L)){
@@ -791,7 +788,7 @@ do_something_R = function(x = matrix(0.0, 10L, 10L)){
}
```

Here, we provide a skeleton for a TensorFlow function:
Here, we provide a skeleton for a torch function:

```{r chunk_chapter3_task_torch_9, eval=FALSE, purl=FALSE}
do_something_torch = function(x = matrix(0.0, 10L, 10L)){
@@ -926,7 +923,7 @@ linalg_det(A)

Torch supports automatic differentiation (analytical, not numerical!). Let's have a look at the function $f(x) = 5 x^2 + 3$ with derivative $f'(x) = 10x$. So for $f'(5)$ we will get $50$.
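As a quick preview (a minimal sketch, assuming the torch R package is installed), the whole computation can be done with torch's autograd in a few lines:

```{r, eval=FALSE}
library(torch)

x = torch_tensor(5.0, requires_grad = TRUE)  # tell torch to track gradients for x
y = 5.0 * torch_pow(x, 2.0) + 3.0            # f(x) = 5x^2 + 3
y$backward()                                  # compute dy/dx analytically
x$grad                                        # holds f'(5) = 10 * 5 = 50
```

We will now build this up step by step.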

Let's do this in torch Define the function:
Let's do this in torch. Define the function:

```{r chunk_chapter3_task_torch_17, eval=TRUE}
f = function(x){ return(5.0 * torch_pow(x, 2.) + 3.0) }
@@ -982,7 +979,7 @@ In R we would do the following to fit a linear regression model:
summary(lm(y~x))
```

Let's build our own model in TensorFlow. Here, we use now the variable data container type (remember they are mutable and we need this type for the weights ($\boldsymbol{w}$) of the regression model). We want our model to learn these weights.
Let's build our own model in torch. Here, we now use the variable data container type (remember, variables are mutable, and we need this type for the weights ($\boldsymbol{w}$) of the regression model). We want our model to learn these weights.

The inputs (predictors, independent variables, or features, $\boldsymbol{X}$) and the observations (response, $\boldsymbol{y}$) are constants and will not be learned/optimized.
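Putting these pieces together, a minimal sketch of such a model could look as follows (the simulated data, learning rate, and number of steps are illustrative assumptions, not taken from the text):

```{r, eval=FALSE}
library(torch)

set.seed(42)
x = matrix(runif(100), 100L, 1L)
y = 2.0 * x[, 1] + 0.5 + rnorm(100, sd = 0.1)  # true weight 2.0, intercept 0.5

X = torch_tensor(x)                           # constant input
Y = torch_tensor(matrix(y, 100L, 1L))         # constant response
w = torch_tensor(matrix(0.0, 1L, 1L), requires_grad = TRUE)  # learnable weight
b = torch_zeros(1L, requires_grad = TRUE)                    # learnable intercept

optimizer = optim_adam(list(w, b), lr = 0.1)
for (step in 1:300) {
  optimizer$zero_grad()
  pred = torch_matmul(X, w) + b
  loss = torch_mean((pred - Y)^2)  # mean squared error
  loss$backward()
  optimizer$step()
}
# w and b should now be close to 2.0 and 0.5
```

Only `w` and `b` are created with `requires_grad = TRUE`, so only they receive gradients and are updated by the optimizer; `X` and `Y` stay fixed.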

Expand Down Expand Up @@ -1067,4 +1064,3 @@ cat("Original intercept: ", intercept, "\n")

`r unhide()`
:::

4 changes: 2 additions & 2 deletions C2-DeepNeuralNetworks.qmd
@@ -75,7 +75,7 @@ A sequential Keras model is a higher order type of model within Keras and consis

**2. Add hidden layers to the model (we will learn more about hidden layers during the next days).**

When specifying the hidden layers, we also have to specify the shape and a so called *activation function*. You can think of the activation function as decision for what is forwarded to the next neuron (but we will learn more about it later). If you want to know this topic in even more depth, consider watching the videos presented in section \@ref(basicMath).
When specifying the hidden layers, we also have to specify the shape and a so-called *activation function*. You can think of the activation function as deciding what is forwarded to the next neuron (but we will learn more about it later). If you want to explore this topic in even more depth, consider watching the videos presented in section @sec-basicMath.

The shape of the input is the number of predictors (here 4) and the shape of the output is the number of classes (here 3).
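A minimal sketch of such a sequential model (the hidden layer width and activation choices are illustrative assumptions, not prescribed by the text):

```{r, eval=FALSE}
library(keras)

model = keras_model_sequential() %>%
  layer_dense(units = 10L, activation = "relu", input_shape = 4L) %>%  # hidden layer; 4 predictors in
  layer_dense(units = 3L, activation = "softmax")                      # one output unit per class

summary(model)
```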

@@ -905,7 +905,7 @@ Remarks:
- A positive aspect of stochastic gradient descent is that local valleys or hills may be escaped and global ones found instead.
:::
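The last remark can be illustrated with a small toy experiment (a sketch with a made-up function and step sizes, not from the text): plain gradient descent started in a local valley stays there, while a noisy gradient may jump out and reach the global minimum:

```{r, eval=FALSE}
# f has a local minimum near x = 1.14 and a global one near x = -1.30
f = function(x) x^4 - 3 * x^2 + x
grad_f = function(x) 4 * x^3 - 6 * x + 1

set.seed(1)
x_gd = 1.0   # plain gradient descent
x_sgd = 1.0  # gradient descent with noisy ("stochastic") gradients
for (i in 1:1000) {
  x_gd  = x_gd  - 0.01 * grad_f(x_gd)                        # stays in the local valley
  x_sgd = x_sgd - 0.01 * (grad_f(x_sgd) + rnorm(1, sd = 5))  # may escape it
}
c(x_gd, x_sgd)
```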

## Underlying mathematical concepts - optional {#basicMath}
## Underlying mathematical concepts - optional {#sec-basicMath}

If you are not yet familiar with the underlying concepts of neural networks and want to know more, we suggest reading/viewing the following sites and videos. Consider the links and videos with descriptions in parentheses as an optional bonus.
