• by cgadski on 5/31/2024, 10:14:23 AM

    Tensor network notation is really useful for differentiating with respect to tensors. For example, where F is a real function of a matrix variable, think of how you'd differentiate F(A X) with respect to X. Conceptually this is easy, but I used to have to slow down to write it in Einstein notation. Thinking in terms of tensor diagrams, I just see F', X and A strung together in a triangle. Differentiating with respect to X means removing it from the triangle. The dangling edges are the indices of the derivative, and what's left is a matrix product of A and F' along the index that doesn't involve X.
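    The rule described above can be checked numerically. This is a minimal sketch (the concrete F, shapes, and tolerance are my own choices for illustration): for a scalar F, the diagram rule says d/dX F(AX) = Aᵀ F′(AX), which we compare against finite differences.

```python
import numpy as np

# Assumed example function: F(M) = sum(M**2), so its gradient is F'(M) = 2M.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))

def F(M):
    return np.sum(M ** 2)

def grad_F(M):
    # F'(M) for this particular choice of F
    return 2 * M

# The diagrammatic rule: remove X from the triangle, then contract A and F'
# along the one index that doesn't touch X. That contraction is A^T @ F'(AX).
analytic = A.T @ grad_F(A @ X)

# Numerical check via central finite differences, entry by entry.
eps = 1e-6
numeric = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        numeric[i, j] = (F(A @ (X + E)) - F(A @ (X - E))) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))  # True
```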

    This blog post made me realize that tensor diagrams are the same as the factor graphs we talk about in random field theory. Indices of a tensor network become variables of a factor graph, and tensors become factors. The contraction of a tensor network with positive tensors is the partition function of the corresponding random field, and so on.
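    The correspondence in the last sentence can be made concrete with a toy model (my own choice: a chain of three binary variables with Ising-style pairwise factors). Contracting the network with einsum gives the same number as the brute-force partition-function sum.

```python
import numpy as np

# Assumed toy factor graph: Z = sum_{x1,x2,x3} psi(x1, x2) * psi(x2, x3),
# where each x is binary and psi is a positive pairwise factor.
beta = 0.5
psi = np.exp(beta * np.array([[1.0, -1.0], [-1.0, 1.0]]))

# Tensor-network view: the shared factor-graph variables are the contracted
# indices, so contracting everything yields the partition function.
Z_contract = np.einsum('ab,bc->', psi, psi)

# Factor-graph view: brute-force sum over all variable assignments.
Z_brute = sum(psi[x1, x2] * psi[x2, x3]
              for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1))

print(np.isclose(Z_contract, Z_brute))  # True
```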