In general, these models need to be backed by big data and require extensive tuning of hyperparameters, regularization, and so on. Applications include computer vision and image recognition based on CNNs (convolutional neural networks), and automatic translation based on NLP techniques such as long short-term memory (LSTM) models.
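To make the tuning burden concrete, here is a minimal sketch (assuming PyTorch) of a small CNN for image classification, showing where the hyperparameters and regularization mentioned above actually appear. The layer sizes, learning rate, dropout rate, and weight decay are illustrative assumptions, not values from the text.

```python
# Minimal illustrative CNN sketch (assumed PyTorch API); all numeric
# settings below are hypothetical hyperparameters one would tune.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10, dropout: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # filter count: a hyperparameter
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),                  # regularization: dropout
            nn.Linear(64 * 8 * 8, num_classes),   # assumes 32x32 inputs (e.g. CIFAR-10-sized images)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
# lr is typically the most sensitive hyperparameter;
# weight_decay adds L2 regularization on the weights.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
```

Even in this tiny sketch, the number of filters, kernel sizes, dropout rate, learning rate, and weight decay all interact, which is why such models generally demand both large datasets and careful tuning.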
In fact, the basic theory of deep learning was available decades ago, so why was it not developed sooner? Because it was constrained by two conditions: the amount of available data and the computing power of machines.
When the amount of data is relatively small, traditional machine learning methods can achieve better results. But as the amount of data grows past a certain threshold, the performance of traditional machine learning methods stops improving, whereas the performance of deep learning models continues to increase significantly as the data grows. In other words, deep learning methods can maximize the value of big data. The development of big data thus promoted the rise of deep learning, and deep learning in turn amplifies the value of the data; the two promote and complement each other.