Contents:

  1. Introduction
  2. Data Shaping
  3. Neural Network Parameters
  4. Results
  5. Code

1. Introduction

Using Morgan (circular) fingerprints as input data, we create an ensemble of 1D convolutional neural networks (CNNs) with Keras to predict lipophilicity values. See Prediction of Small Molecule Lipophilicity: Part 1 – Data Exploration for a link to the raw data, as well as the code for converting SMILES strings to fingerprints via RDKit.
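As a reminder of that conversion, here is a minimal sketch; the Morgan radius of 2 is an assumption for illustration (Part 1 has the actual settings):

import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def smiles_to_fingerprint(smiles, n_bits=2048, radius=2):
    # parse the SMILES string into an RDKit molecule
    mol = Chem.MolFromSmiles(smiles)
    # Morgan (circular) fingerprint as a fixed-length bit vector
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    # convert the bit vector to a 1D numpy array of 1s and 0s
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

print(smiles_to_fingerprint('CCO').shape)  # (2048,)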

2. Data Shaping

Input data is in the form of 1D NumPy arrays of length 2048 consisting of 1s and 0s. In the CNN, we use the channels-first option with a single channel. First, we read in a NumPy array with all of the input data and then reshape it to (number of data points, 1, 2048), where 1 is the number of channels and 2048 is the length of each 1D array (see the main code at the end of this article).
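Schematically, using the same file and variable names as the main code below:

import numpy as np

xcalib = np.load('x_calib_fingerprints_1d.npy')  # shape: (number of data points, 2048)
num_pts = xcalib.shape[0]
length_of_input = xcalib[0].shape[0]             # length of each fingerprint = 2048
xcalib = xcalib.reshape(num_pts, 1, length_of_input)  # (number of data points, 1, 2048)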

Next, we must ensure that the 1D CNN specifies the correct data shape. We have

input_tensor = Input(shape=(1, input_length))

Here input_length = 2048, and each 1D convolutional layer uses data_format='channels_first' (see the model construction code at the end of this article).
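With data_format='channels_first', Keras treats axis 1 as channels and the last axis as the sequence, so convolution runs along the 2048 fingerprint bits. A quick standalone check (the filter count and kernel size here are arbitrary):

from keras.layers import Input, Conv1D
from keras.models import Model

inp = Input(shape=(1, 2048))  # (channels, sequence length)
out = Conv1D(filters=32, kernel_size=5,
             data_format='channels_first')(inp)
# with no padding, output length = 2048 - 5 + 1 = 2044
print(Model(inp, out).output_shape)  # (None, 32, 2044)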

3. Neural Network Parameters

We use randomized parameter search for the hyperparameters of the 1D CNNs.

import numpy as np
from sklearn.model_selection import ParameterSampler

def parameters_cnn(num_samples):
    
    # candidate values for each hyperparameter
    batch_size = [32, 64]
    dropout = np.linspace(0.0, 0.5, num=11, endpoint=True)  # 0.0, 0.05, ..., 0.5
    activation = ['relu', 'elu']
    cnn_filters = [16, 32, 64]
    cnn_kernel_size = [5, 7, 9, 11, 13, 15]
    cnn_blocks = [2, 3, 4, 5]
    layer_1_nodes_num = [16, 32, 64, 128]
        
    grid = {'batch_size':batch_size, 'dropout':dropout,
            'filters':cnn_filters, 'kernel_size':cnn_kernel_size,
            'blocks':cnn_blocks, 'activation':activation, 
            'layer_1_nodes_num':layer_1_nodes_num}
   
    # draw num_samples random combinations from the grid
    parameter_grid = list(ParameterSampler(grid, n_iter=num_samples))
       
    return parameter_grid
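For example, drawing three random hyperparameter combinations (output varies from run to run):

list_param_dict = parameters_cnn(3)
for dictionary_parameter in list_param_dict:
    # each dictionary holds one sampled value per hyperparameter
    print(dictionary_parameter)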

To create an ensemble of neural networks, we set a cross-validation error threshold and keep the 10 models whose cross-validation errors fall below it; their parameters are listed in the table below.
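A minimal sketch of that selection step, assuming the cross-validation results were saved to a DataFrame as in the main code below (create_ensemble in Part 3 performs the equivalent):

import pandas as pd

df = pd.read_pickle('df_parameter_results.pkl')        # saved by the calibration code below
df_best = df.sort_values(by='Mean CV Error').head(10)  # 10 models with the lowest mean CV error
print(df_best[['Model Name', 'Mean CV Error', 'Max CV Epochs']])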

1D CNN Hyperparameters

| layer_1_nodes_num | kernel_size | filters | dropout | blocks | batch_size | activation | Model Name | Metric | Mean CV Error | Max CV Epochs |
|---|---|---|---|---|---|---|---|---|---|---|
| 64 | 15 | 64 | 0 | 5 | 64 | elu | cnn_1d_fingerprints_82 | mean_squared_error | 0.002808 | 10 |
| 16 | 9 | 64 | 0 | 3 | 32 | elu | cnn_1d_fingerprints_37 | mean_squared_error | 0.002864 | 4 |
| 32 | 9 | 64 | 0.05 | 2 | 32 | elu | cnn_1d_fingerprints_29 | mean_squared_error | 0.003006 | 18 |
| 16 | 11 | 32 | 0 | 5 | 32 | elu | cnn_1d_fingerprints_18 | mean_squared_error | 0.003013 | 5 |
| 32 | 13 | 32 | 0.05 | 3 | 32 | elu | cnn_1d_fingerprints_6 | mean_squared_error | 0.003051 | 4 |
| 128 | 11 | 32 | 0.05 | 3 | 32 | elu | cnn_1d_fingerprints_10 | mean_squared_error | 0.003136 | 17 |
| 16 | 5 | 64 | 0 | 2 | 64 | relu | cnn_1d_fingerprints_65 | mean_squared_error | 0.003138 | 6 |
| 64 | 7 | 64 | 0.1 | 2 | 32 | elu | cnn_1d_fingerprints_23 | mean_squared_error | 0.003182 | 20 |
| 128 | 5 | 16 | 0.05 | 4 | 64 | relu | cnn_1d_fingerprints_17 | mean_squared_error | 0.003203 | 16 |
| 16 | 5 | 32 | 0.05 | 2 | 32 | elu | cnn_1d_fingerprints_14 | mean_squared_error | 0.003274 | 24 |

4. Results

Results were produced using unseen data.

Lipophilicity Errors

| Model | Mean Squared Error | Root Mean Squared Error | Median Absolute Error | Correlation Coefficient | R2 |
|---|---|---|---|---|---|
| cnn_1d_fingerprints_82 | 0.7664 | 0.8754 | 0.4951 | 0.7212 | 0.4915 |
| cnn_1d_fingerprints_37 | 0.8057 | 0.8976 | 0.5173 | 0.7129 | 0.4654 |
| cnn_1d_fingerprints_29 | 0.7911 | 0.8894 | 0.5235 | 0.7128 | 0.4751 |
| cnn_1d_fingerprints_18 | 0.8034 | 0.8963 | 0.528 | 0.7168 | 0.4669 |
| cnn_1d_fingerprints_6 | 0.7724 | 0.8789 | 0.509 | 0.7223 | 0.4875 |
| cnn_1d_fingerprints_10 | 0.7614 | 0.8726 | 0.5351 | 0.7235 | 0.4948 |
| cnn_1d_fingerprints_65 | 0.8579 | 0.9263 | 0.5319 | 0.6925 | 0.4307 |
| cnn_1d_fingerprints_23 | 0.7681 | 0.8764 | 0.5074 | 0.7187 | 0.4904 |
| cnn_1d_fingerprints_17 | 0.7738 | 0.8797 | 0.5601 | 0.7315 | 0.4866 |
| cnn_1d_fingerprints_14 | 0.8296 | 0.9108 | 0.5159 | 0.6977 | 0.4496 |
| Ensemble Median | 0.7335 | 0.8565 | 0.5036 | 0.7317 | 0.5133 |


Below is a comparison of our results to those of the MoleculeNet benchmark from MoleculeNet: A Benchmark for Molecular Machine Learning (page 51).

Our Results Versus Benchmark Results

| Model | Root Mean Squared Error |
|---|---|
| Random Forest (RF) | 0.876 +/- 0.040 |
| Multitask (Multilayer Perceptron with multiple outputs) | 0.859 +/- 0.013 |
| XGBoost | 0.799 +/- 0.054 |
| Kernel Ridge Regression (KRR) | 0.899 +/- 0.043 |
| Graph Convolutional Network (GC) | 0.655 +/- 0.036 |
| Directed Acyclic Graph (DAG) | 0.835 +/- 0.039 |
| Weave | 0.715 +/- 0.035 |
| Message Passing Neural Network (MPNN) | 0.719 +/- 0.031 |
| TPOT Best | 0.767 |
| TPOT Ensemble | 0.787 |
| Multilayer Perceptron Ensemble | 0.823 |
| 1D CNN Fingerprints | 0.857 |
| 2D CNN Fingerprints | 0.844 |

R2 results in bar chart form can be seen at the MoleculeNet site (see section Physical Chemistry -> lipo).

The benchmark results were obtained by running multiple random permutations of the dataset, which we did not do, so the comparisons with our results are not exact.

5. Code

Model construction function:

from keras.layers import Dense, Input, Dropout
from keras.layers import Conv1D, MaxPooling1D, Flatten
from keras.models import Model
from keras.optimizers import Adam

def cnn_1d_model_construction(dict_params_cnn, num_output_nodes,
                              input_length, loss_type, metric):
    
    # cap the pooling window at 2; since kernel_size >= 5 here, this
    # always yields pool_size = 2
    poolsize = dict_params_cnn['kernel_size'] - 2
    poolsize = min(poolsize, 2)
    
    # single channel, channels first: (channels, sequence length)
    input_tensor = Input(shape=(1, input_length))
    
    # 1st block
    x = Conv1D(filters=dict_params_cnn['filters'], 
               kernel_size=dict_params_cnn['kernel_size'], 
               activation=dict_params_cnn['activation'],
               data_format='channels_first')(input_tensor)
    x = Dropout(dict_params_cnn['dropout'])(x)
    # pooling must also be channels first so that it acts on the
    # sequence axis rather than the filter axis
    x = MaxPooling1D(pool_size=poolsize, data_format='channels_first')(x)
    
    # remaining blocks
    for iblock in range(dict_params_cnn['blocks'] - 1):
        x = Conv1D(filters=dict_params_cnn['filters'], 
                   kernel_size=dict_params_cnn['kernel_size'], 
                   activation=dict_params_cnn['activation'],
                   data_format='channels_first')(x)
        x = Dropout(dict_params_cnn['dropout'])(x)
        x = MaxPooling1D(pool_size=poolsize, data_format='channels_first')(x)
 
    x = Flatten()(x)
    
    x = Dropout(dict_params_cnn['dropout'])(x)
    
    x = Dense(dict_params_cnn['layer_1_nodes_num'], 
              activation=dict_params_cnn['activation'])(x)
    
    x = Dropout(dict_params_cnn['dropout'])(x)
    
    # target is in [-0.15, 0.45] (scaled), so the output is linear
    # (no activation function)
    output_tensor = Dense(num_output_nodes)(x)
       
    model_cnn_f = Model(input_tensor, output_tensor)
     
    # compile the model
    opt = Adam(lr=0.00025)  # Keras default learning rate is 0.001
    model_cnn_f.compile(optimizer=opt, loss=loss_type, metrics=[metric])
     
    return model_cnn_f
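As a usage sketch, here is how one model is constructed, using the hyperparameters of cnn_1d_fingerprints_37 from the table above:

dict_params = {'filters': 64, 'kernel_size': 9, 'activation': 'elu',
               'dropout': 0.0, 'blocks': 3, 'layer_1_nodes_num': 16,
               'batch_size': 32}
model = cnn_1d_model_construction(dict_params, num_output_nodes=1,
                                  input_length=2048,
                                  loss_type='mean_squared_error',
                                  metric='mean_squared_error')
model.summary()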

Cross validation, thresholding, and saving good models function:

import numpy as np
from keras.callbacks import EarlyStopping, ReduceLROnPlateau

def nn_cv(x_calib, y_calib, cv_folds, metric,
          nn_model, nn_model_copy, batchsize, num_epochs, stop_epochs, verbosity,
          models_dir, model_name, threshold, learn_rate_epochs):
      
    # stop training when validation loss stops improving, and reduce the
    # learning rate when it plateaus
    callbacks_list = [EarlyStopping(monitor='val_loss', patience=stop_epochs),
                      ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                                        patience=learn_rate_epochs, verbose=0,
                                        mode='auto', min_lr=1.0e-6)]
    
    # number of validation points per fold
    cv_num = int(x_calib.shape[0]/cv_folds)
    list_cv_metrics = []
    list_cv_epochs = []
    for fold in range(cv_folds):
        # get train/valid
        x_train = np.vstack((x_calib[:cv_num*fold], x_calib[cv_num*(fold+1):]))        
        y_train = np.hstack((y_calib[:cv_num*fold], y_calib[cv_num*(fold+1):]))
         
        x_valid = x_calib[cv_num*fold:cv_num*(fold+1)]
        y_valid = y_calib[cv_num*fold:cv_num*(fold+1)]
        
        # fit the model
        h = nn_model.fit(x_train, y_train, epochs=num_epochs, 
                         batch_size=batchsize, validation_data=(x_valid, y_valid), 
                         callbacks=callbacks_list, verbose=verbosity)
     
        # collect cv stats
        list_cv_metrics.append(np.min(h.history['val_' + metric]))
        epoch_best_early_stop = len(h.history['val_loss']) - stop_epochs
        list_cv_epochs.append(epoch_best_early_stop)
                 
    dict_return = {}
    mean_cv_error = np.mean(list_cv_metrics)
    max_cv_epochs = np.max(list_cv_epochs)
    print('mean_cv_error =',mean_cv_error,'\tthreshold =',threshold)
    print('max_cv_epochs =',max_cv_epochs)
    
    if mean_cv_error < threshold:
        # fit the model (use copy) with all calib data
        h_refit = nn_model_copy.fit(x_calib, y_calib, epochs=max_cv_epochs, 
                                    batch_size=batchsize, verbose=0)
        nn_model_copy.save(models_dir + model_name + '.h5')
        dict_return['Model Name'] = model_name
        dict_return['Metric'] = metric
        dict_return['Mean CV Error'] = mean_cv_error
        dict_return['Max CV Epochs'] = max_cv_epochs
     
    return dict_return

In the main code below, note that we divide the lipophilicity values by 10 so that their range is [-0.15, 0.45]; this is why no output activation function is used.
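Schematically (y_raw, x_new, and model are hypothetical names for illustration):

target_divisor = 10.0
y_scaled = y_raw/target_divisor  # raw values in roughly [-1.5, 4.5] map to [-0.15, 0.45]
# ... train the model on y_scaled ...
y_pred = model.predict(x_new)*target_divisor  # rescale predictions back to original units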

import os
import pandas as pd
import numpy as np
import time
from pathlib import Path

import neural_network_ensemble  # ensemble utilities from Part 3 of this series

if __name__ == '__main__':
    
    calculation_type = 'calibration'  # 'calibration' or 'production'
    nn_type = 'cnn_1d'
    target_divisor = 10.0  # targets scaled to [-0.15, 0.45]; no output activation needed
       
    base_directory = 'YOUR DIRECTORY'  # set this to your own base path, ending with '/'
    data_directory = base_directory + 'data/'
    
    results_directory_stub = base_directory + nn_type + '_fingerprints/'
    if not Path(results_directory_stub).is_dir():
        os.mkdir(results_directory_stub)
    
    results_directory = results_directory_stub + 'results_' + nn_type + '_fingerprints/'
    if not Path(results_directory).is_dir():
        os.mkdir(results_directory)
    
    models_directory = results_directory_stub + 'models_' + nn_type + '_fingerprints/'
    if not Path(models_directory).is_dir():
        os.mkdir(models_directory)
      
    # limits and params
    metric_type = 'mean_squared_error'
    type_of_loss = metric_type
    verbose = 0
    epochs = 200
    checkpoint_epochs_stop = 15
    learning_rate_epochs = 10
    
    cross_valid_folds = 3
    size_of_parameter_list = 1000
    error_threshold = 10.0
    max_time_minutes = 6*60
    number_of_models = 10
        
    if calculation_type == 'calibration':
        print('\n\n*** starting calibration at',pd.Timestamp.now())
        start_time_calibration = time.time()
            
        # get data
        ycalib = np.load(data_directory + 'y_calib.npy')/target_divisor   
        xcalib = np.load(data_directory + 'x_calib_fingerprints_1d.npy')
       
        # number of output nodes (1 for a single regression target)
        try:
            number_of_outputs = ycalib.shape[1]
        except IndexError:  # ycalib is 1D
            number_of_outputs = 1
   
        # reshape data
        num_pts = xcalib.shape[0]
        length_of_input = xcalib[0].shape[0] # length_of_fingerprint_vector = 2048
        xcalib = xcalib.reshape(num_pts, 1, length_of_input)
        
        # set up parameters (stochastic)
        list_param_dict = parameters_cnn(size_of_parameter_list)
        list_dict_results = []
        counter = 0
        # loop over parameters
        for dictionary_parameter in list_param_dict:   
            
            modelname = nn_type + '_fingerprints_' + str(counter)
            counter = counter + 1
                
            print('\n\n*** starting at',pd.Timestamp.now())
            start_time_loop = time.time()            
            
            model_nn = cnn_1d_model_construction(dictionary_parameter, 
                                                 number_of_outputs, length_of_input,
                                                 type_of_loss, metric_type) 
            print(dictionary_parameter)                     
            
            dict_results = nn_cv(xcalib, ycalib, 
                                    cross_valid_folds, metric_type,
                                    model_nn, model_nn, 
                                    dictionary_parameter['batch_size'], epochs, 
                                    checkpoint_epochs_stop, verbose,
                                    models_directory, modelname, error_threshold,
                                    learning_rate_epochs)
            
            elapsed_time_minutes = (time.time() - start_time_loop)/60
            print('*** elapsed time =',elapsed_time_minutes,' mins\tcounter =',counter-1)
            
            # add epochs and modelname to param dict and save it
            if len(dict_results) > 0:
                dictionary_parameter.update(dict_results)
                list_dict_results.append(dictionary_parameter)
                
            # check elapsed time for early stopping
            elapsed_time_minutes = (time.time() - start_time_calibration)/60
            if elapsed_time_minutes > max_time_minutes:
                break
        
        print('\n\n*** ending calibration at',pd.Timestamp.now())
        print('elapsed time =',(time.time()-start_time_calibration)/60,' min')
        
        # collect results in a df, then save
        df_param_results = pd.DataFrame(data=list_dict_results)
        df_param_results.to_pickle(models_directory + 'df_parameter_results.pkl')
        df_param_results.to_csv(models_directory + 'df_parameter_results.csv')
        list_dict_results.clear()
            
    elif calculation_type == 'production':
        
        # get data
        yprod = np.load(data_directory + 'y_prod.npy')  # UNALTERED (SEE CREATE_ENSEMBLE)
        xprod = np.load(data_directory + 'x_prod_fingerprints_1d.npy')
        
        # reshape
        num_pts = xprod.shape[0]
        length_of_input = xprod[0].shape[0] # length_of_fingerprint_vector = 2048
        xprod = xprod.reshape(num_pts, 1, length_of_input)
            
        df_results = pd.read_pickle(models_directory + 'df_parameter_results.pkl')
                      
        neural_network_ensemble.create_ensemble('regression', 
                       'Mean CV Error', 'Keras', 
                       df_results, number_of_models, models_directory, 
                       results_directory, xprod, yprod, target_divisor)
        
    else:
        raise NameError('unknown calculation_type: ' + calculation_type)

For the production (unseen data) code, we sort the cross-validation results, then read in the 10 best models. The final result is the median lipophilicity prediction of these models. See Prediction of Small Molecule Lipophilicity: Part 3 – Ensemble of Multilayer Perceptrons With Morgan (Circular) Fingerprints.
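A minimal sketch of what that production step amounts to, using file, variable, and column names from the code above (the actual implementation is neural_network_ensemble.create_ensemble from Part 3):

import numpy as np
import pandas as pd
from keras.models import load_model

df = pd.read_pickle(models_directory + 'df_parameter_results.pkl')
best_names = df.sort_values(by='Mean CV Error')['Model Name'].iloc[:10]

predictions = []
for name in best_names:
    model = load_model(models_directory + name + '.h5')
    # undo the divide-by-10 target scaling applied during calibration
    predictions.append(model.predict(xprod).ravel()*target_divisor)

# final prediction: per-molecule median across the 10 models
y_pred_ensemble = np.median(np.array(predictions), axis=0)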