0cf8612b2e1e 6 hours ago

  The corresponding row vector is denoted by x^T when we need to distinguish them. We can also ignore the transpose for readability, if the shape is clear from context.
I am tilting at windmills, but I am continually annoyed at the sloppiness of mathematicians in writing. Fine, you don’t like verbosity, but for didactic purposes, please do not assume the reader is equipped to know that variable x actually implies variable y.
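For concreteness, here is what I take the quoted convention to mean (my own sketch, not the book's exact formulation): x denotes a column vector by default, and x^T the corresponding row vector, so an expression like x^T y is the usual inner product.

    % My own sketch of the convention (not taken from the book):
    % x is a column vector in R^n; x^T is the corresponding row vector.
    \[
      x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \in \mathbb{R}^n,
      \qquad
      x^{\mathsf{T}} = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix},
      \qquad
      x^{\mathsf{T}} y = \sum_{i=1}^{n} x_i y_i .
    \]

It is in expressions like the last one that the transpose tends to get dropped once the shapes are "clear from context".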

All that being said, the writing style from the first chapter is very encouraging at how approachable this will be.

  • runeblaze 2 hours ago

    It is weird, to be honest. I first learned Coq and then started taking upper-level maths classes. My group theory proofs were panned by my TAs as overly verbose and overly precise: I was specializing H_1s and H_2s everywhere and had IHns flying around like crazy, because I could not fathom how one proves things without formally connecting everything up.

    Then my profs told me I was not “wrong”, but that to most mathematicians proofs and expositions are not programs (ha! How did I not know? You teach me natural deduction and expect me not to program?) so much as convincing arguments, prose. At some point one abstracts.
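    To give a flavour of what I mean, here is a toy sketch in Lean 4 (not one of my actual Coq proofs): even a fact a textbook would dismiss as "obvious by induction" forces you to name the induction hypothesis and invoke it explicitly.

        -- Toy illustration (Lean 4, my own sketch): proving 0 + n = n requires
        -- naming the induction hypothesis `ih` and rewriting with it, where a
        -- textbook proof would just say "by induction on n, obvious".
        theorem zero_add' (n : Nat) : 0 + n = n := by
          induction n with
          | zero => rfl
          | succ k ih => rw [Nat.add_succ, ih]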

  • JadeNB 3 hours ago

    > I am tilting at windmills, but I am continually annoyed at the sloppiness of mathematicians in writing. Fine, you don’t like verbosity, but for didactic purposes, please do not assume the reader is equipped to know that variable x actually implies variable y.

    I am a practicing mathematician who felt the same way you do when I started, and who still writes papers in a way that many of my colleagues find gallingly pedantic. With that as my credentials, I hope I may say that it can be much worse, as a reader, to read something where every detail is spelled out, because a bit of syntactic sugar begins to seem as important as the heart of the argument. Where the dividing line between precision and obfuscation falls depends on the reader, and so any choice will inevitably leave some readers on the wrong side, but a trade-off does have to be made somewhere.

superjose 8 hours ago

Wow, kudos to the author. Very easy to digest, beautifully crafted, and it takes the time to explain concepts that most places take for granted.

magnio 8 hours ago

This looks like a good practical companion for a more theoretical text, such as Deep Learning by Bishop.

odyssey7 7 hours ago

It would be nice if arXiv included a small-layout PDF or native EPUB option for e-readers. Now that they serve the TeX files and are experimenting with HTML, it feels like a natural step.

dunefox 2 hours ago

And I just bought the physical book...

fossa1 9 hours ago

Glad to see JAX featured alongside PyTorch. JAX still feels like the best-kept secret in deep learning.

kittikitti 8 hours ago

Although I love this, it's not peer reviewed and I don't trust arXiv.

  • SiempreViernes 8 hours ago

    Actually, it is peer reviewed following the standard practice for books: some other people read it and provided feedback, as evidenced by the Acknowledgments section.

  • odyssey7 7 hours ago

    It’s more a book than academic research.

    The funny thing about books is that authors in free societies are allowed to self-publish whatever they want. The norms are different and, frankly, more democratic, with less gatekeeping.

  • ethan_smith 3 hours ago

    arXiv is a preprint server that the scientific community has trusted for decades; papers there often undergo peer review later, and many top ML researchers post their work there first for faster dissemination.

ProofHouse 9 hours ago

Damn, beefy. I'll need a month at ten pages a day. Thanks, this looks awesome. Could ultimately append diffusion too.