I had an interesting discussion today with a few PhD students from my group.

I was reminiscing about how the world of analytics looked before the advent of Machine Learning/Deep Learning.

I talked about some of the research I did as part of my PhD, many years ago now, which involved finding analytical and semi-analytical solutions to heat transfer problems.

I spoke of working as a Senior Quantitative Analyst in my early years, modelling complex financial derivatives using the famous Black-Scholes equation, amongst other approaches, and mentioned certain techniques used in stochastic calculus. Once again, I discussed seeking analytical and semi-analytical solutions, and using approaches like Monte Carlo simulation when, unfortunately, you needed more of a ‘brute force’ approach over a mathematically more aesthetic one.
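To make the contrast concrete, here is a small sketch (parameters are illustrative, not from any real trade) comparing the ‘aesthetic’ closed-form Black-Scholes price of a European call with a ‘brute force’ Monte Carlo estimate of the same quantity:

```python
import math
import random

# Illustrative parameters: spot, strike, risk-free rate, volatility, maturity.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price: the analytical solution."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n=200_000, seed=42):
    """'Brute force' Monte Carlo estimate under the same dynamics."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        # Simulate the terminal stock price under geometric Brownian motion.
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        payoff_sum += max(ST - K, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-r * T) * payoff_sum / n

analytic = bs_call(S0, K, r, sigma, T)
estimate = mc_call(S0, K, r, sigma, T)
```

The two agree to within sampling error; the appeal of the Monte Carlo route is that it still works for exotic payoffs where no closed form exists.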

We also spoke of various traditional modelling techniques in the field of statistics, from both the frequentist and Bayesian schools.

And I discussed some of the mathematics that underpins one of my current areas of research, Confidential Computing, which is based on interesting cryptographic techniques such as Homomorphic Encryption, and effectively involves working with polynomials and primes.
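As a small aside, the homomorphic property itself can be glimpsed in textbook RSA, which is multiplicatively homomorphic. This is only a toy illustration with insecure, tiny numbers, and it is not one of the lattice-based polynomial schemes actually used in Confidential Computing, but it shows where the primes come in:

```python
# Toy textbook RSA: multiplying ciphertexts multiplies the plaintexts.
# Tiny, insecure parameters chosen purely for demonstration.
p, q = 61, 53          # two small primes
n = p * q              # public modulus (3233)
e, d = 17, 413         # public/private exponents: e*d ≡ 1 (mod lcm(p-1, q-1))

def enc(m):
    """Textbook RSA encryption: c = m^e mod n."""
    return pow(m, e, n)

def dec(c):
    """Textbook RSA decryption: m = c^d mod n."""
    return pow(c, d, n)

a, b = 7, 12
# Multiply the two ciphertexts without ever decrypting them...
product_ciphertext = (enc(a) * enc(b)) % n
# ...and the decryption is the product of the original plaintexts (mod n).
result = dec(product_ciphertext)
```

Computing on encrypted data like this, without seeing the plaintext, is exactly the property that full Homomorphic Encryption schemes extend to arbitrary computations.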

As a mathematician, I see beauty in deriving analytical solutions and working with equations, polynomials and primes 🙂

Then the conversation became more interesting…

One of the students said how excited he was to have been introduced to the study of causality, and talked about how much he’s enjoyed reading Judea Pearl’s book The Book of Why, which I recommended to him, along with some papers I sent him.

We then spoke about some of the Deep Learning models we were using in an NLP course I’m running, which he’s helping to tutor.

This led him to explain why he decided, after two years of research, to move out of Deep Learning and pursue a PhD in Logic instead, which is a significant and admirable decision for a young researcher to make! He also talked about finally having to learn mathematics properly 🙂

His motivation was fuelled by what he felt was the lack of an overall ‘framework’ in Deep Learning, and we discussed how the focus there is essentially on applying various algorithms to solve problems in a brute-force manner, often involving much manual tuning. In Logic, by contrast, he finds purity in trying to create a framework that supports solving a broad range of problems.

This got me thinking, **is there inherent beauty in Deep Learning?**