Eleonora Angius
Skillsoft-issued completion badges are earned by viewing the required percentage of the course or by receiving a passing score when an assessment is required.

In the journey to understand deep learning models for natural language processing (NLP), the next step is memory-based networks, which are far more capable of handling extended context in language. While basic neural networks outperform classical machine learning (ML) models, they still fall short on larger and more complex language data problems.
In this course, you will learn about memory-based networks such as the gated recurrent unit (GRU) and long short-term memory (LSTM). Explore their architectures and variants, and where they succeed and fail for NLP tasks. Then, consider their implementations using product classification data and compare the results to understand each architecture's effectiveness.
Upon completing this course, you will understand the basics of memory-based networks and their implementation in TensorFlow, along with the effect of memory and extended context on NLP datasets.
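As a rough illustration of the kind of comparison the course describes, here is a minimal sketch of an LSTM and a GRU text classifier in TensorFlow's Keras API. The product titles and labels below are hypothetical placeholders, not the course's actual product classification dataset, and the layer sizes are arbitrary.

import numpy as np
import tensorflow as tf

# Hypothetical stand-in for product classification data (not the course dataset).
texts = [
    "wireless bluetooth headphones with noise cancelling",
    "stainless steel 10 inch frying pan",
    "cotton crew neck t-shirt",
    "usb c fast charging cable",
]
labels = np.array([0, 1, 2, 0])  # e.g. electronics, kitchen, apparel
num_classes = 3

# Turn raw strings into fixed-length sequences of token ids.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=1000, output_sequence_length=16
)
vectorizer.adapt(texts)
x = vectorizer(np.array(texts))

def build_model(recurrent_layer):
    """Embed token ids, run them through a memory-based layer, then classify."""
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=1000, output_dim=32),
        recurrent_layer,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Train the same architecture with an LSTM and a GRU cell and compare accuracy.
for name, layer in [("LSTM", tf.keras.layers.LSTM(32)),
                    ("GRU", tf.keras.layers.GRU(32))]:
    model = build_model(layer)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, labels, epochs=3, verbose=0)
    loss, acc = model.evaluate(x, labels, verbose=0)
    print(f"{name}: accuracy={acc:.2f}")

On a real dataset, the accuracy printed for each architecture is what would let you compare the effectiveness of the two memory-based cells, as the course description suggests.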
Issued on
February 9, 2024
Expires on
Does not expire