--

When learning patterns during training, the network finds non-linear ones; that is the purpose of the activation layers. But when interpolating in the latent space during inference, I believe the result is always a linear combination of the latent vectors. I did think about this when I wrote that line, and I am moderately confident it is correct.
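To make that concrete, here is a minimal sketch (made-up three-dimensional latent vectors, plain NumPy, not any particular model): each interpolated point is just a weighted sum of the two latent vectors, regardless of how non-linear the layers that produced them or decode them are.

```python
import numpy as np

# Hypothetical latent vectors, as an encoder might produce for two inputs.
z1 = np.array([0.2, -1.3, 0.7])
z2 = np.array([1.1, 0.4, -0.5])

# Interpolating in latent space: every intermediate point is a
# linear (convex) combination of z1 and z2.
for t in np.linspace(0.0, 1.0, 5):
    z = (1 - t) * z1 + t * z2
    print(f"t={t:.2f} -> z={z}")

# The decoder's non-linear layers are then applied to z, but the
# interpolation itself never leaves the straight line between z1 and z2.
```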

--

--

Written by Pierz Newton-John

Writer, coder, former psychotherapist, founding member of The School Of Life Melbourne. Essayist for Dumbo Feather magazine, author of Fault Lines (fiction).
