Previous articles in this series:
- Prediction of Small Molecule Lipophilicity: Part 1 – Data Exploration
- Prediction of Small Molecule Lipophilicity: Part 2 – TPOT (Tree-based Pipeline Optimization Tool) With Morgan (Circular) Fingerprints
- Prediction of Small Molecule Lipophilicity: Part 3 – Ensemble of Multilayer Perceptrons With Morgan (Circular) Fingerprints
1. Introduction
Using Morgan (circular) fingerprints as input data, we create an ensemble of 1D convolutional neural networks (CNNs) with Keras to predict lipophilicity values. See Prediction of Small Molecule Lipophilicity: Part 1 – Data Exploration for the link to the raw data as well as the code for converting SMILES strings to fingerprints via RDKit.
2. Data Shaping
Input data is in the form of 1D Numpy arrays of length 2048 consisting of 1s and 0s. In the CNN we use the channels-first option with a single channel. First, we read in a Numpy array with all of the input data and then reshape it to (number of data points, 1, 2048), where 1 is the number of channels and 2048 is the length of each 1D Numpy array (see the main code at the end of this article).
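As a minimal sketch (assuming the calibration fingerprint file name used in the main code at the end of this article), the reshape looks like this:

```python
import numpy as np

# load all 2048-bit fingerprints: shape (number of data points, 2048)
x_calib = np.load('x_calib_fingerprints_1d.npy')

# add a single channel axis: shape (number of data points, 1, 2048)
x_calib = x_calib.reshape(x_calib.shape[0], 1, x_calib.shape[1])
```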
Next, we must ensure that the 1D CNN specifies the correct data shape. We have
```python
input_tensor = Input(shape=(1, input_length))
```

where input_length = 2048, and in the 1D convolutional layers we set data_format='channels_first' (see the model construction code at the end of this article).
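As a quick sanity check, the sketch below (assuming Keras with a TensorFlow backend) builds a single channels-first convolutional layer on a fingerprint-shaped input; the filter count and kernel size are arbitrary illustrative values.

```python
from keras.layers import Input, Conv1D
from keras.models import Model

# input: (channels=1, length=2048); output length: 2048 - kernel_size + 1
inp = Input(shape=(1, 2048))
out = Conv1D(filters=32, kernel_size=9, data_format='channels_first')(inp)
Model(inp, out).summary()  # expect an output shape of (None, 32, 2040)
```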
3. Neural Network Parameters
We use randomized parameter search for the hyperparameters of the 1D CNNs.
```python
import numpy as np
from sklearn.model_selection import ParameterSampler

def parameters_cnn(num_samples):
    batch_size = [32, 64]
    dropout = np.linspace(0.0, 0.5, num=11, endpoint=True)
    activation = ['relu', 'elu']
    cnn_filters = [16, 32, 64]
    cnn_kernel_size = [5, 7, 9, 11, 13, 15]
    cnn_blocks = [2, 3, 4, 5]
    layer_1_nodes_num = [16, 32, 64, 128]

    grid = {'batch_size': batch_size, 'dropout': dropout,
            'filters': cnn_filters, 'kernel_size': cnn_kernel_size,
            'blocks': cnn_blocks, 'activation': activation,
            'layer_1_nodes_num': layer_1_nodes_num}

    parameter_grid = list(ParameterSampler(grid, n_iter=num_samples))

    return parameter_grid
```
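For example, drawing a handful of random hyperparameter combinations looks like this (the printed dictionary is illustrative, not an actual sampled value):

```python
# draw 5 random hyperparameter combinations from the grid above
list_param_dict = parameters_cnn(5)
print(list_param_dict[0])
# e.g. {'layer_1_nodes_num': 64, 'kernel_size': 9, 'filters': 32,
#       'dropout': 0.05, 'blocks': 3, 'batch_size': 32, 'activation': 'elu'}
```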
To create an ensemble of neural networks, we set a cross validation error threshold and keep 10 models with cross validation errors below the threshold. These are the parameters of those models.
1D CNN Hyperparameters
layer_1_nodes_num | kernel_size | filters | dropout | blocks | batch_size | activation | Model Name | Metric | Mean CV Error | Max CV Epochs |
---|---|---|---|---|---|---|---|---|---|---|
64 | 15 | 64 | 0 | 5 | 64 | elu | cnn_1d_fingerprints_82 | mean_squared_error | 0.002808 | 10 |
16 | 9 | 64 | 0 | 3 | 32 | elu | cnn_1d_fingerprints_37 | mean_squared_error | 0.002864 | 4 |
32 | 9 | 64 | 0.05 | 2 | 32 | elu | cnn_1d_fingerprints_29 | mean_squared_error | 0.003006 | 18 |
16 | 11 | 32 | 0 | 5 | 32 | elu | cnn_1d_fingerprints_18 | mean_squared_error | 0.003013 | 5 |
32 | 13 | 32 | 0.05 | 3 | 32 | elu | cnn_1d_fingerprints_6 | mean_squared_error | 0.003051 | 4 |
128 | 11 | 32 | 0.05 | 3 | 32 | elu | cnn_1d_fingerprints_10 | mean_squared_error | 0.003136 | 17 |
16 | 5 | 64 | 0 | 2 | 64 | relu | cnn_1d_fingerprints_65 | mean_squared_error | 0.003138 | 6 |
64 | 7 | 64 | 0.1 | 2 | 32 | elu | cnn_1d_fingerprints_23 | mean_squared_error | 0.003182 | 20 |
128 | 5 | 16 | 0.05 | 4 | 64 | relu | cnn_1d_fingerprints_17 | mean_squared_error | 0.003203 | 16 |
16 | 5 | 32 | 0.05 | 2 | 32 | elu | cnn_1d_fingerprints_14 | mean_squared_error | 0.003274 | 24 |
4. Results
Results were produced using unseen data.
Lipophilicity Errors
Model | Mean Squared Error | Root Mean Squared Error | Median Absolute Error | Correlation Coefficient | R2 |
---|---|---|---|---|---|
cnn_1d_fingerprints_82 | 0.7664 | 0.8754 | 0.4951 | 0.7212 | 0.4915 |
cnn_1d_fingerprints_37 | 0.8057 | 0.8976 | 0.5173 | 0.7129 | 0.4654 |
cnn_1d_fingerprints_29 | 0.7911 | 0.8894 | 0.5235 | 0.7128 | 0.4751 |
cnn_1d_fingerprints_18 | 0.8034 | 0.8963 | 0.528 | 0.7168 | 0.4669 |
cnn_1d_fingerprints_6 | 0.7724 | 0.8789 | 0.509 | 0.7223 | 0.4875 |
cnn_1d_fingerprints_10 | 0.7614 | 0.8726 | 0.5351 | 0.7235 | 0.4948 |
cnn_1d_fingerprints_65 | 0.8579 | 0.9263 | 0.5319 | 0.6925 | 0.4307 |
cnn_1d_fingerprints_23 | 0.7681 | 0.8764 | 0.5074 | 0.7187 | 0.4904 |
cnn_1d_fingerprints_17 | 0.7738 | 0.8797 | 0.5601 | 0.7315 | 0.4866 |
cnn_1d_fingerprints_14 | 0.8296 | 0.9108 | 0.5159 | 0.6977 | 0.4496 |
Ensemble Median | 0.7335 | 0.8565 | 0.5036 | 0.7317 | 0.5133 |
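For reference, the error metrics in the table can be reproduced with standard SciPy/scikit-learn routines, as in the sketch below; the arrays are placeholders, and we assume the Correlation Coefficient is Pearson's r.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_squared_error, median_absolute_error, r2_score

# placeholder arrays standing in for unseen-data targets and one model's predictions
y_true = np.array([2.10, -0.30, 1.50, 0.70, 3.20])
y_pred = np.array([1.80, 0.10, 1.20, 0.90, 2.90])

mse = mean_squared_error(y_true, y_pred)        # Mean Squared Error
rmse = np.sqrt(mse)                             # Root Mean Squared Error
med_ae = median_absolute_error(y_true, y_pred)  # Median Absolute Error
corr = pearsonr(y_true, y_pred)[0]              # Correlation Coefficient (Pearson r)
r2 = r2_score(y_true, y_pred)                   # R2
```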

Below is a comparison of our results to those of the Molecule Net benchmark from MoleculeNet: A Benchmark for Molecular Machine Learning (page 51).
Our Results Versus Benchmark Results
Model | Root Mean Squared Error |
---|---|
Random Forest (RF) | 0.876 +/- 0.040 |
Multitask (Multilayer Perceptron with multiple outputs) | 0.859 +/- 0.013 |
XGBoost | 0.799 +/- 0.054 |
Kernel Ridge Regression (KRR) | 0.899 +/- 0.043 |
Graph Convolutional Network (GC) | 0.655 +/- 0.036 |
Directed Acyclic Graph (DAG) | 0.835 +/- 0.039 |
Weave | 0.715 +/- 0.035 |
Message Passing Neural Network (MPNN) | 0.719 +/- 0.031 |
TPOT Best | 0.767 |
TPOT Ensemble | 0.787 |
Multilayer Perceptron Ensemble | 0.823 |
1D CNN Fingerprints | 0.857 |
2D CNN Fingerprints | 0.844 |
R2 results in bar chart form can be seen at the Molecule Net site (see section Physical Chemistry -> lipo).
The benchmark results were obtained by running multiple random permutations of the dataset, which we did not do, so the comparisons with our results are not exact.
5. Code
Model construction function:
```python
from keras.layers import Dense, Input, Dropout
from keras.layers import Conv1D, MaxPooling1D, Flatten
from keras.models import Model
from keras.optimizers import Adam

def cnn_1d_model_construction(dict_params_cnn, num_output_nodes,
                              input_length, loss_type, metric):
    poolsize = dict_params_cnn['kernel_size'] - 2
    poolsize = min(poolsize, 2)

    input_tensor = Input(shape=(1, input_length))

    # 1st block
    x = Conv1D(filters=dict_params_cnn['filters'],
               kernel_size=dict_params_cnn['kernel_size'],
               activation=dict_params_cnn['activation'],
               data_format='channels_first')(input_tensor)
    x = Dropout(dict_params_cnn['dropout'])(x)
    x = MaxPooling1D(pool_size=poolsize)(x)

    # remaining blocks
    for iblock in range(dict_params_cnn['blocks'] - 1):
        x = Conv1D(filters=dict_params_cnn['filters'],
                   kernel_size=dict_params_cnn['kernel_size'],
                   activation=dict_params_cnn['activation'],
                   data_format='channels_first')(x)
        x = Dropout(dict_params_cnn['dropout'])(x)
        x = MaxPooling1D(pool_size=poolsize)(x)

    x = Flatten()(x)
    x = Dropout(dict_params_cnn['dropout'])(x)
    x = Dense(dict_params_cnn['layer_1_nodes_num'],
              activation=dict_params_cnn['activation'])(x)
    x = Dropout(dict_params_cnn['dropout'])(x)

    # target E [-0.15, 0.45] so no need of activation function
    output_tensor = Dense(num_output_nodes)(x)

    model_cnn_f = Model(input_tensor, output_tensor)

    # compile the model
    opt = Adam(lr=0.00025)  # default = 0.001
    model_cnn_f.compile(optimizer=opt, loss=loss_type, metrics=[metric])

    return model_cnn_f
```
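As a quick illustration (the parameter values below are arbitrary choices from the search grid, not one of the retained models), a single model can be built and inspected like so:

```python
# build one model from an example parameter dictionary and print its layers
params = {'filters': 32, 'kernel_size': 9, 'activation': 'elu', 'dropout': 0.05,
          'blocks': 3, 'layer_1_nodes_num': 64, 'batch_size': 32}
model = cnn_1d_model_construction(params, num_output_nodes=1, input_length=2048,
                                  loss_type='mean_squared_error',
                                  metric='mean_squared_error')
model.summary()
```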
Cross validation, thresholding, and saving good models function:
```python
import numpy as np
from keras.callbacks import EarlyStopping, ReduceLROnPlateau

def nn_cv(x_calib, y_calib, cv_folds, metric, nn_model, nn_model_copy,
          batchsize, num_epochs, stop_epochs, verbosity, models_dir,
          model_name, threshold, learn_rate_epochs):

    callbacks_list = [EarlyStopping(monitor='val_loss', patience=stop_epochs),
                      ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                                        patience=learn_rate_epochs,
                                        verbose=0, mode='auto', min_lr=1.0e-6)]

    cv_num = int(x_calib.shape[0]/cv_folds)
    list_cv_metrics = []
    list_cv_epochs = []

    for fold in range(cv_folds):
        # get train/valid
        x_train = np.vstack((x_calib[:cv_num*fold], x_calib[cv_num*(fold+1):]))
        y_train = np.hstack((y_calib[:cv_num*fold], y_calib[cv_num*(fold+1):]))

        x_valid = x_calib[cv_num*fold:cv_num*(fold+1)]
        y_valid = y_calib[cv_num*fold:cv_num*(fold+1)]

        # fit the model
        h = nn_model.fit(x_train, y_train, epochs=num_epochs,
                         batch_size=batchsize,
                         validation_data=(x_valid, y_valid),
                         callbacks=callbacks_list, verbose=verbosity)

        # collect cv stats
        list_cv_metrics.append(np.min(h.history['val_' + metric]))
        epoch_best_early_stop = len(h.history['val_loss']) - stop_epochs
        list_cv_epochs.append(epoch_best_early_stop)

    dict_return = {}
    mean_cv_error = np.mean(list_cv_metrics)
    max_cv_epochs = np.max(list_cv_epochs)
    print('mean_cv_error =', mean_cv_error, '\tthreshold =', threshold)
    print('max_cv_epochs =', max_cv_epochs)

    if mean_cv_error < threshold:
        # fit the model (use copy) with all calib data
        h_refit = nn_model_copy.fit(x_calib, y_calib, epochs=max_cv_epochs,
                                    batch_size=batchsize, verbose=0)
        nn_model_copy.save(models_dir + model_name + '.h5')
        dict_return['Model Name'] = model_name
        dict_return['Metric'] = metric
        dict_return['Mean CV Error'] = mean_cv_error
        dict_return['Max CV Epochs'] = max_cv_epochs

    return dict_return
```
In the main code below, we note that we divided the lipophilicity values by 10 so that their range is [-0.15, 0.45] and thus no output activation function is used.
```python
import os
import pandas as pd
import numpy as np
import time
from pathlib import Path
import neural_network_ensemble  # ensemble creation code (see Part 3)
# parameters_cnn, cnn_1d_model_construction, and nn_cv are defined above

if __name__ == '__main__':

    calculation_type = 'calibration'  # production calibration
    nn_type = 'cnn_1d'
    target_divisor = 10.0  # target E [-0.15, 0.45] so no need of activation function

    base_directory = YOUR DIRECTORY
    data_directory = base_directory + 'data/'

    results_directory_stub = base_directory + nn_type + '_fingerprints/'
    if not Path(results_directory_stub).is_dir():
        os.mkdir(results_directory_stub)

    results_directory = results_directory_stub + 'results_' + nn_type + '_fingerprints/'
    if not Path(results_directory).is_dir():
        os.mkdir(results_directory)

    models_directory = results_directory_stub + 'models_' + nn_type + '_fingerprints/'
    if not Path(models_directory).is_dir():
        os.mkdir(models_directory)

    # limits and params
    metric_type = 'mean_squared_error'
    type_of_loss = metric_type
    verbose = 0
    epochs = 200
    checkpoint_epochs_stop = 15
    learning_rate_epochs = 10

    cross_valid_folds = 3
    size_of_parameter_list = 1000
    error_threshold = 10.0
    max_time_minutes = 6*60
    number_of_models = 10

    if calculation_type == 'calibration':
        print('\n\n*** starting calibration at', pd.Timestamp.now())
        start_time_calibration = time.time()

        # get data
        ycalib = np.load(data_directory + 'y_calib.npy')/target_divisor
        xcalib = np.load(data_directory + 'x_calib_fingerprints_1d.npy')

        try:
            number_of_outputs = ycalib.shape[1]
        except:
            number_of_outputs = 1  # ycalib.shape[1]

        # reshape data
        num_pts = xcalib.shape[0]
        length_of_input = xcalib[0].shape[0]  # length_of_fingerprint_vector = 2048
        xcalib = xcalib.reshape(num_pts, 1, length_of_input)

        # set up parameters (stochastic)
        list_param_dict = parameters_cnn(size_of_parameter_list)
        list_dict_results = []
        counter = 0

        # loop over parameters
        for dictionary_parameter in list_param_dict:

            modelname = nn_type + '_fingerprints_' + str(counter)
            counter = counter + 1

            print('\n\n*** starting at', pd.Timestamp.now())
            start_time_loop = time.time()

            model_nn = cnn_1d_model_construction(dictionary_parameter,
                                                 number_of_outputs,
                                                 length_of_input,
                                                 type_of_loss, metric_type)
            print(dictionary_parameter)

            dict_results = nn_cv(xcalib, ycalib, cross_valid_folds, metric_type,
                                 model_nn, model_nn,
                                 dictionary_parameter['batch_size'],
                                 epochs, checkpoint_epochs_stop, verbose,
                                 models_directory, modelname,
                                 error_threshold, learning_rate_epochs)

            elapsed_time_minutes = (time.time() - start_time_loop)/60
            print('*** elapsed time =', elapsed_time_minutes, ' mins\tcounter =', counter-1)

            # add epochs and modelname to param dict and save it
            if len(dict_results) > 0:
                dictionary_parameter.update(dict_results)
                list_dict_results.append(dictionary_parameter)

            # check elapsed time for early stopping
            elapsed_time_minutes = (time.time() - start_time_calibration)/60
            if elapsed_time_minutes > max_time_minutes:
                break

        print('\n\n*** ending calibration at', pd.Timestamp.now())
        print('elapsed time =', (time.time()-start_time_calibration)/60, ' min')

        # collect results in a df, then save
        df_param_results = pd.DataFrame(data=list_dict_results)
        df_param_results.to_pickle(models_directory + 'df_parameter_results.pkl')
        df_param_results.to_csv(models_directory + 'df_parameter_results.csv')

        list_dict_results.clear()

    elif calculation_type == 'production':

        # get data
        yprod = np.load(data_directory + 'y_prod.npy')  # UNALTERED (SEE CREATE_ENSEMBLE)
        xprod = np.load(data_directory + 'x_prod_fingerprints_1d.npy')

        # reshape
        num_pts = xprod.shape[0]
        length_of_input = xprod[0].shape[0]  # length_of_fingerprint_vector = 2048
        xprod = xprod.reshape(num_pts, 1, length_of_input)

        df_results = pd.read_pickle(models_directory + 'df_parameter_results.pkl')

        neural_network_ensemble.create_ensemble('regression', 'Mean CV Error',
                                                'Keras', df_results,
                                                number_of_models,
                                                models_directory,
                                                results_directory,
                                                xprod, yprod, target_divisor)

    else:
        raise NameError
```
For the production (unseen data) code, we sort the cross validation results and then read in the 10 best models. The final result is the median lipophilicity prediction of these models. See Prediction of Small Molecule Lipophilicity: Part 3 – Ensemble of Multilayer Perceptrons With Morgan (Circular) Fingerprints.
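A minimal sketch of this production step is shown below; it assumes the models_directory, xprod, and target_divisor variables from the main code above, and stands in for the neural_network_ensemble.create_ensemble call, which does the actual work.

```python
import numpy as np
import pandas as pd
from keras.models import load_model

# sort the saved search results and keep the 10 models with the lowest CV error
df_results = pd.read_pickle(models_directory + 'df_parameter_results.pkl')
df_best = df_results.sort_values(by='Mean CV Error').head(10)

predictions = []
for name in df_best['Model Name']:
    model = load_model(models_directory + name + '.h5')
    # undo the target scaling (lipophilicity values were divided by target_divisor = 10)
    predictions.append(target_divisor * model.predict(xprod).ravel())

# final prediction: median across the 10 models
y_pred_ensemble = np.median(np.vstack(predictions), axis=0)
```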