Deep recurrent nets extend deep neural nets to process and output sequential data. They have exploded onto the deep learning scene over the past few years, are no longer considered hard to train, and have enabled progress on everything from speech recognition and language modeling to image captioning. In this talk, we will look at what recurrent nets can do for you, and go over some tips and tricks we've learnt from building Deep Speech, so you can train seriously deep recurrent networks on your own.
Some knowledge of recurrent nets is expected, such as having read http://karpathy.github.io/2015/05/21/rnn-effectiveness/
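As a quick refresher on the core idea, here is a minimal sketch of a vanilla recurrent cell in NumPy: the same weights are applied at every timestep, and the hidden state carries information forward through the sequence. The tanh nonlinearity, sizes, and random weights are illustrative assumptions, not anything from Deep Speech.

```python
import numpy as np

# Illustrative sketch of a vanilla RNN step; real speech models are
# far deeper and wider than this toy example.
rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3

# Small random weights (hypothetical, for demonstration only).
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # state -> state
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> state
b_h = np.zeros(hidden_size)

def rnn_step(h, x):
    """One recurrent step: the new state depends on the old state and the current input."""
    return np.tanh(W_hh @ h + W_xh @ x + b_h)

# Unroll the same cell over a short input sequence of 5 timesteps.
h = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):
    h = rnn_step(h, x)
```

The key point is the weight sharing across timesteps, which is what lets a single cell handle sequences of arbitrary length.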