How to print the gradient of intermediate variables in PyTorch

Thanks to Adam Paszke’s post on the PyTorch discussion forum.

I struggled with a problem today: my parameter “b” was not updating in the following code:
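The original snippet is not reproduced here, so this is a minimal, hypothetical reconstruction of the situation, assuming two learnable parameters and a loss that includes the norm of “b”:

```python
import torch

# Hypothetical reconstruction -- the exact snippet is not shown in the post.
# Two parameters; "b" is (wrongly) initialized to all zeros.
a = torch.randn(5, requires_grad=True)
b = torch.zeros(5, requires_grad=True)   # all-zero init: the bug

# Assumed loss: some term in "a" plus the L2 norm of "b".
loss = (a ** 2).sum() + torch.norm(b)
loss.backward()

print("a.grad:", a.grad)   # non-zero, so "a" trains fine
print("b.grad:", b.grad)   # zero, so "b" never updates
```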

There’s nothing wrong with the gradient of “a”. So what’s the problem?
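To see what is happening, you can print the gradient of any intermediate (non-leaf) variable, which does not keep its `.grad` by default, by attaching a hook that fires during `backward()`, as suggested in the linked discussion. A small self-contained sketch:

```python
import torch

# A non-leaf variable's .grad is not retained, but register_hook lets us
# inspect the gradient flowing through it during backward().
x = torch.randn(3, requires_grad=True)
y = x * 2                                  # intermediate variable
y.register_hook(lambda g: print("grad of y:", g))

y.sum().backward()                         # the hook prints dL/dy = [1., 1., 1.]
```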

The problem is that I used the wrong initialization for “b”: I initialized it with all zeros, and the gradient of the norm of an all-zero vector is always zero, so “b” never receives a non-zero update.
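With a non-zero (for example, random) initialization, the norm has a well-defined, non-zero gradient, since the gradient of the L2 norm is `b / ||b||`:

```python
import torch

# Non-zero init fixes the problem: d||b||/db = b / ||b||, which is
# non-zero whenever b is non-zero.
b = torch.randn(5, requires_grad=True)
torch.norm(b).backward()
print(b.grad)          # equals b / ||b||, hence non-zero
```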

NumPy Precision

Another gotcha: the precision changed when moving data between NumPy and PyTorch. NumPy arrays default to 64-bit floats, while PyTorch tensors default to 32-bit.
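A quick sketch of the dtype mismatch: `torch.from_numpy` preserves NumPy’s `float64`, while tensors created directly in PyTorch default to `float32`.

```python
import numpy as np
import torch

# NumPy defaults to 64-bit floats, PyTorch to 32-bit.
x = np.zeros(3)
print(x.dtype)                    # float64

t = torch.from_numpy(x)           # from_numpy preserves the NumPy dtype
print(t.dtype)                    # torch.float64

print(torch.zeros(3).dtype)       # torch.float32 -- the PyTorch default
```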