# Bucket of Code

My personal bucket of code that I will probably use and reuse across different projects or just some fun things that I liked or learned.

Notes:

- The requirements list the last version tested for each package used.

## Configuration files

I found this type of configuration file in the mmcv project, and I liked how clean these files can be and how easy they are to use: we can create a separate file for each experiment, saving the setup used for later reproducibility. The configuration file is a Python file (extension `.py`) where we define variables for the settings we want to use. The example below shows a setup for a convolutional model with its training options:

```python
model = dict(
    # Initializer for the kernel weights and biases. Note that not all models
    # use this option to initialize the weights.
    # The value can be a dict or just a string: kernel_initializer="he_normal"
    kernel_initializer=dict(
        type="he_normal",  # Valid types: "he_normal", "normal", "xavier",
                           # "constant", "truncated_normal", "uniform".
        stddev=0.009       # Standard deviation to be used. Default is 0.009
    ),
    # Activation function to be used in the model and its required parameters.
    # Note that not all models use this option. Some of them have a fixed function.
    activation_function=dict(
        type="relu",     # Type of act. function: "relu", "leaky_relu", and "elu"
        lrelu_alpha=0.2  # leaky relu alpha. Default 0.2
    ),
    # Dropout rate to be considered in the model when training.
    # Note that not all models use this option.
    dropout=0.1,
    # Options for the optimizer. Default: Adam
    optimizer=dict(
        learning_rate=5e-2,
        beta_1=0.9,  # Default 0.9
        decay=1e-12  # Default 0.0
    )
)
```
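One nice property of the `dict(type=..., **kwargs)` pattern above is that a builder can dispatch on `type` and forward the remaining keys as constructor arguments, as mmcv does with its registries. The sketch below illustrates the idea with a toy registry; `INITIALIZERS` and `build_from_cfg` are illustrative names, not part of this repository:

```python
# Toy registry mapping initializer type names to factory callables.
# In a real model these would build framework initializer objects.
INITIALIZERS = {
    "he_normal": lambda stddev=0.009: ("he_normal", stddev),
    "normal": lambda stddev=0.009: ("normal", stddev),
}


def build_from_cfg(cfg, registry):
    """Build an object from a config dict of the form dict(type=..., **kwargs)."""
    cfg = dict(cfg)  # copy so the original config stays untouched
    obj_type = cfg.pop("type")
    if obj_type not in registry:
        raise KeyError(f"Unknown type: {obj_type}")
    # Every remaining key becomes a keyword argument of the factory
    return registry[obj_type](**cfg)


initializer = build_from_cfg(dict(type="he_normal", stddev=0.009), INITIALIZERS)
```

Because the `type` key is popped before dispatch, adding a new option to a config dict only requires the corresponding factory to accept it as a keyword argument.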

### How to use

- Requirements: `pip install addict==2.4.0`

- Import the `src/config.py` file as in:

  ```python
  from config import Config

  config_file = "path/to/config/file.py"
  # Load the config file
  cfg = Config(config_file)

  # Read some options
  dropout = cfg.model.dropout

  # Create a TF optimizer with the setup defined in the config file
  import tensorflow as tf

  optimizer = tf.keras.optimizers.Adam(
      learning_rate=cfg.model.optimizer.learning_rate,
      beta_1=cfg.model.optimizer.get("beta_1", 0.9),
      decay=cfg.model.optimizer.get("decay", 0.0)
  )
  ```
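To make the loading step concrete, here is a minimal sketch of what a loader like `src/config.py` might look like. The actual implementation in this repository may differ; the `AttrDict` stand-in below only mimics the attribute-style access that `addict.Dict` provides, so the example stays self-contained:

```python
import importlib.util


class AttrDict(dict):
    """Stand-in for addict.Dict: attribute access plus dict.get() with defaults."""

    def __getattr__(self, name):
        value = self[name]
        # Wrap nested dicts so chained access (cfg.model.optimizer) works
        return AttrDict(value) if isinstance(value, dict) else value


class Config:
    """Load a Python config file and expose its top-level variables."""

    def __init__(self, filename):
        spec = importlib.util.spec_from_file_location("cfg_module", filename)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Keep every top-level variable that is not a dunder
        self._cfg = AttrDict({k: v for k, v in vars(module).items()
                              if not k.startswith("__")})

    def __getattr__(self, name):
        return getattr(self._cfg, name)
```

Executing the config file as a module is what lets the settings be plain Python: values can be computed, shared, or commented like any other code.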

## Pre-commit

Setup that runs a local pre-commit hook which organizes the imports in each Python file (with isort), runs a code formatter to follow the PEP 8 style guide (with autopep8), runs a complementary code style checker (flake8), and finally runs a static code analysis tool over the source files (with pylint).

### How to use

- Requirements:

  ```shell
  pip install autopep8==1.5.5 pylint==2.7.2 isort==5.7.0 flake8==3.9.2
  ```

- Define the pre-commit setup in the `Makefile` and the corresponding pylint configuration in `.pylintrc`.

- Run the following command inside the project directory: `make pre-commit`
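A `make pre-commit` target along these lines would run the four tools in the order described above. This is a hypothetical sketch; the actual `Makefile` in this repository may differ (note that Makefile recipes must be indented with tabs):

```make
pre-commit:
	isort src/
	autopep8 --in-place --recursive src/
	flake8 src/
	pylint src/
```

Running the formatters (isort, autopep8) before the checkers (flake8, pylint) means most style complaints are already fixed by the time the linters run.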
