
Generating discrete sequences: language and music

There is one session available:

After a course session ends, it will be archived.
  • Started Nov 8
  • Ends Dec 31
  • Estimated 8 weeks, 8–9 hours per week
  • Instructor-paced (instructor-led on a course schedule)
  • Free, with an optional upgrade available

About this course


This course covers modern approaches to generating sequential data, including natural language as a sequence of subword tokens and music as a sequence of notes. We focus mainly on modern deep learning methods and pay particular attention to challenges and open questions in the field. The main goal of the course is to expose students to novel techniques in sequence generation and to help them develop the skills to use these techniques in practice. The course aims to bring students to the point where they have a general understanding of sequence generation and are ready to dive deeper into any particular area that interests them: language, music, or bioinformatic sequences.

At a glance

  • Language: English
  • Video Transcript: English

What you'll learn


Word2Vec, BPE, Markov chain-based language models, RNN, LSTM, autoencoder, self-attention, transformer, BERT

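To give a flavor of the first item on this list, here is a minimal, illustrative sketch (not taken from the course materials) of a bigram Markov chain language model: it counts which token follows which in a toy corpus and then samples a new discrete sequence from those counts.

```python
# Illustrative sketch (not from the course): a bigram Markov chain language model.
import random
from collections import defaultdict

def train_bigram_model(tokens):
    """For each token, record the tokens observed immediately after it."""
    transitions = defaultdict(list)
    for current, nxt in zip(tokens, tokens[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=10):
    """Sample a sequence by repeatedly drawing a successor of the last token."""
    sequence = [start]
    for _ in range(length - 1):
        successors = transitions.get(sequence[-1])
        if not successors:  # dead end: no observed successor for this token
            break
        sequence.append(random.choice(successors))
    return sequence

# Toy corpus of word tokens; the same idea applies to subword tokens or notes.
corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram_model(corpus)
print(" ".join(generate(model, start="the", length=8)))
```

The same sampling loop carries over to the neural models listed above; they simply replace the transition counts with a learned distribution over the next token.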


Who can take this course?

Unfortunately, learners residing in one or more of the following countries or regions will not be able to register for this course: Iran, Cuba and the Crimea region of Ukraine. While edX has sought licenses from the U.S. Office of Foreign Assets Control (OFAC) to offer our courses to learners in these countries and regions, the licenses we have received are not broad enough to allow us to offer this course in all locations. edX truly regrets that U.S. sanctions prevent us from offering all of our courses to everyone, no matter where they live.
