
GenreMIDI: AI-Powered MIDI Pattern Generation by EDM Sub-Genres

Overview

GenreMIDI leverages machine learning, specifically Google's Magenta library (Performance RNN), to generate MIDI patterns across a range of EDM genres. By training Recurrent Neural Networks (RNNs) on genre-specific datasets, the project aims to produce unique musical patterns that can inspire musicians, producers, and DJs. The long-term goal is to develop the trained models into a Max for Live device for use as a real-time performance tool within Ableton Live.

Features

  • MIDI pattern generation for multiple EDM genres
  • Utilization of Google's Magenta library (Performance RNN) for advanced music generation
  • Genre-specific model training and generation
  • Customizable output parameters (e.g., tempo, complexity, pattern length)
  • Easy-to-use command-line interface and Python API
  • Planned integration with Max MSP via Node for Max

Installation

  1. Clone the repository:

    git clone https://github.com/cvbesJulian/Performance_RNN_For_EDM_RealTime.git
    cd Performance_RNN_For_EDM_RealTime
    
  2. Download the training MIDI files from Google Drive (the "Download MIDI Files" link).

  3. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
    
  4. Install the required dependencies:

    pip install -r requirements.txt
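
To confirm that the core dependencies imported correctly, a quick sanity check such as the following can help (this snippet is illustrative and not part of the repository's documented setup):

    # Quick sanity check (illustrative, not part of the repo's setup steps)
    import tensorflow as tf
    import magenta

    print("TensorFlow:", tf.__version__)
    print("Magenta:", magenta.__version__)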
    

Usage

  1. Train a model for a specific genre:

    python train.py --genre techno --dataset_path /path/to/techno_midis
    
  2. Generate MIDI patterns:

    python generate.py --genre techno --num_patterns 5 --output_dir ./output
    
  3. Customize generation parameters:

    python generate.py --genre house --num_patterns 3 --tempo 120 --complexity 0.7 --output_dir ./output
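
For scripted or batch use, the documented generate.py flags can also be driven from Python. The sketch below is a hypothetical helper (the script name and the genre/tempo values are illustrative); it only wraps the command-line interface shown above:

    # batch_generate.py -- hypothetical wrapper around the documented CLI
    import subprocess
    from pathlib import Path

    GENRES = ["house", "techno", "trance"]  # any of the supported genres

    for genre in GENRES:
        out_dir = Path("output") / genre
        out_dir.mkdir(parents=True, exist_ok=True)
        # Invokes generate.py with the flags documented in the Usage section
        subprocess.run(
            ["python", "generate.py",
             "--genre", genre,
             "--num_patterns", "3",
             "--tempo", "124",
             "--output_dir", str(out_dir)],
            check=True,
        )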
    

Supported EDM Genres

  • House
  • Techno
  • Trance
  • Hip-Hop
  • Progressive
  • Tech House
  • (More genres can be added by training on appropriate datasets)

Technical Details

GenreMIDI currently uses Recurrent Neural Networks (RNNs), which are implemented with TensorFlow and the Magenta library, to learn and generate musical patterns. The project employs the following key technologies:

  • TensorFlow 2.x
  • Magenta
  • Python 3.8+
  • NumPy
  • pretty_midi
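
As an illustration of where pretty_midi fits into the pipeline, the sketch below inspects a generated file; the path is a placeholder and the snippet is not taken from the repository's code:

    # inspect_pattern.py -- hypothetical example using the pretty_midi API
    import pretty_midi

    pm = pretty_midi.PrettyMIDI("output/techno/pattern_1.mid")  # placeholder path

    print("Estimated tempo:", pm.estimate_tempo())
    for instrument in pm.instruments:
        print("Instrument program:", instrument.program)
        for note in instrument.notes[:8]:  # first few notes only
            print(f"  pitch={note.pitch}  start={note.start:.2f}s  "
                  f"end={note.end:.2f}s  velocity={note.velocity}")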

In the future, we plan to transition to TensorFlow.js for a more seamless implementation in Max MSP with Node for Max.

Contributing

We welcome contributions to GenreMIDI! Please see our CONTRIBUTING.md file for details on how to get started.

License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). See the LICENSE file for details.

Acknowledgments

  • Google's Magenta team for their incredible work on AI-powered music generation
  • The open-source community for their valuable contributions to machine learning and music technology

Contact

For questions, suggestions, or collaborations, please open an issue in this repository or contact the project maintainers.
