Generate Rap Lyrics – Fresh Machine Learning #4

This episode of Fresh Machine Learning is about generating rap lyrics! Lyric generation is possible using either Hidden Markov Models or deep learning. In this episode, I go through a few past examples of what’s been done before, then dive into our own example that we can code in Python. Welcome to the machine MC revolution!
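The demo code itself isn’t reproduced here, but a minimal word-level Markov-chain lyric generator in the same spirit might look like the sketch below. The tiny `corpus` string is a made-up placeholder, and `build_chain` / `generate` are illustrative names, not the demo’s actual API:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=8, seed=None):
    """Random-walk the chain to produce a line of (pseudo-)lyrics."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    while len(out) < length:
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Made-up toy corpus; real training data would be actual rap lyrics.
corpus = "the mic is hot and the flow is cold and the rhymes are gold"
print(generate(build_chain(corpus), length=8, seed=42))
```

Training on a real lyric corpus (and raising `order`) gives more coherent lines; swapping the chain for an LSTM trained on the same corpus is the deep-learning route the video mentions.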

The demo code for this video can be found here:

Try it out live here:

I introduce three papers in this video:

Unsupervised Rhyme Scheme Identification in Hip Hop Lyrics Using Hidden Markov Models:

Modeling Hip Hop Challenge-Response Lyrics as Machine Translation:

DopeLearning: A Computational Approach to Rap Lyrics Generation:

More info about Hidden Markov Models:

I love you guys! Thanks for watching my videos, I do it for you. I left my awesome job at Twilio and I’m doing this full time now.

I recently created a Patreon page. If you like my videos, feel free to help support my effort here!:

Much more to come so please subscribe, like, and comment.

30 comments

  1. MartinDxt says:

    Can you make a generative model for images once trained on certain set like
    rooms or faces or cars or whatever

  2. Royal Crew says:

    What’s the instrumental in the bg? Can’t remember and it drives me crazy.
    Also props to Inspectah Deck!

  3. Robert Swift says:

    I thought that I was smart.

    Ya, I thought that I knew my art.

    My rhymes were so elementary.

    Now they’re so college after hearing you drop that dope knowledge.

  4. Austinopolis says:

    This was my favorite video so far. This is exactly the kind of AI that I
    want to work on; the ability to mimic human talent.

  5. Sebastian Gonzalez Aseretto says:

    Have you done anything related to avoiding or reducing catastrophic
    forgetting? That is, training a neural network on one task and then on
    another very different task without losing too much performance on the
    previously learned task. For example, a deep Q-learner that learns 3 or 4
    different types of games and then plays them without losing too much
    performance, e.g. Pong, Space Invaders, Kangaroo, and Gopher.

  6. Denin Davis says:

    @Sirajology
    Can you please make a video on speech recognition with an RNN/LSTM in
    Python using TensorFlow (with a custom-made (simple) acoustic model and
    language model)?

  7. Malte Hildebrand says:

    There are awesome insights here (
    ) when it comes to far-fetched
    rhyme patterns; it would be interesting to know whether this could
    boost the model you used to be a spaceship. Maybe you could watch the
    ten hours of video and answer my question. Gold standard that would be 😉
    – thanks for the great work.

  8. Chris says:

    Daaaaang son, you got some dank memes going in your vids. This one was
    actually a lot clearer than many of your others, but to be honest I still
    didn’t quite grasp how that model works.

  9. Dan Allison says:

    Great videos. I love the short videos, but I think you should consider
    making some longer, in-depth videos depending on what the viewers want to
    see. You’re putting out great content, so I’m sure you’ll be rewarded
    richly over time.

  10. Bot_MarZ says:

    I felt like you sped up the video of you talking. If you cannot say it in
    time, then make the video itself longer; I don’t mind spending a few extra
    mins (I had to play it at 0.50 speed to understand). (Love your videos)

  11. creatorleo says:

    So it basically just tries to predict the next word, right? What could I
    do to implement rhymes? Oh, thanks for being so awesome! Your channel is
    definitely helping me A LOT.

  12. Kwun Kit Wong says:

    I love your videos so much!!
    How did you find these 2 papers from Hong Kong and Finland? I looked them
    up on Google Scholar and they have been cited fewer than 10 times.
    Also, where do you normally go to look for machine learning papers?
    Thank you so much!

  13. Federico Baldassarre says:

    I checked out the code from GitHub; I’m not really sure it works well with
    multiple training files, because the new counts would *kind of* overwrite
    the old transition probabilities. Also, the code for the loops seemed very
    C-like and not very Pythonic.

    I’ve made some adjustments and created a pull request. It would be great
    if you could review and merge it.

    As a side note, inspired by the idea, I started my own version of a Markov
    model text generator. It uses ordered lists instead of dictionaries and
    numpy for faster computations, and also does some preprocessing on the
    text. It’s still a work in progress, but you can check it out on

    Great job with the Learn Python for Data Science series, keep it going!!

Comments are closed.