For each epoch, we randomly select an SSL task, $t_n$, from the multitask combination. The corresponding private and shared GAT models generate the task-specific representation $R_n$ and the common representation $R_s$, respectively; $R_n$ and $R_s$ are concatenated and then fed into the MLP-based predictor of SSL task $t_n$. In addition, $R_s$ is fed into the MLP-based discriminator, which predicts which type of task the shared representation vectors come from. The parameters of the current private and shared GAT models are updated by back-propagation based on the loss values from the SSL task predictor and the discriminator, respectively. Finally, the parameters of the current shared model are assigned to all of the other shared models. After multitask SSL training, we therefore obtain $n$ private GAT models and $n$ shared GAT models with identical parameters. In other words, MSSL2drug generates the private representations with all of the private GATs and the shared representation with any one of the shared models.
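The following is a minimal PyTorch sketch of this per-epoch procedure, not the authors' implementation: task selection, private/shared encoding, concatenated input to the task predictor, the discriminator over $R_s$, back-propagation, and parameter sharing across the shared models. Plain MLP encoders stand in for the GAT encoders, the batch is random placeholder data, and all module and variable names (`private_encoders`, `shared_encoders`, `predictors`, `discriminator`) are hypothetical; the exact form of the discriminator objective follows the paper's loss definitions rather than this sketch.

```python
# Sketch only: MLPs stand in for GAT encoders; graph inputs and the paper's
# exact loss formulation are omitted.
import random
import torch
import torch.nn as nn

n_tasks, feat_dim, hid_dim, batch = 3, 64, 32, 16

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, out_dim))

# One private encoder per SSL task, one shared encoder per task (kept in sync),
# one MLP predictor per task, and a single discriminator over shared representations.
private_encoders = nn.ModuleList([mlp(feat_dim, hid_dim) for _ in range(n_tasks)])
shared_encoders  = nn.ModuleList([mlp(feat_dim, hid_dim) for _ in range(n_tasks)])
predictors       = nn.ModuleList([mlp(2 * hid_dim, 1) for _ in range(n_tasks)])
discriminator    = mlp(hid_dim, n_tasks)  # predicts which task R_s came from

opt = torch.optim.Adam(
    list(private_encoders.parameters()) + list(shared_encoders.parameters())
    + list(predictors.parameters()) + list(discriminator.parameters()), lr=1e-3)

for epoch in range(10):
    t = random.randrange(n_tasks)                          # randomly select SSL task t_n
    x = torch.randn(batch, feat_dim)                       # placeholder batch for task t
    y = torch.randn(batch, 1)                              # placeholder SSL labels

    R_n = private_encoders[t](x)                           # task-specific representation
    R_s = shared_encoders[t](x)                            # common (shared) representation
    pred = predictors[t](torch.cat([R_n, R_s], dim=1))     # predictor sees [R_n ; R_s]
    task_loss = nn.functional.mse_loss(pred, y)

    # Discriminator tries to identify the source task of the shared representation.
    disc_logits = discriminator(R_s)
    disc_loss = nn.functional.cross_entropy(
        disc_logits, torch.full((batch,), t, dtype=torch.long))

    opt.zero_grad()
    (task_loss + disc_loss).backward()                     # update private/shared models
    opt.step()

    # Assign the current shared model's parameters to all other shared models.
    with torch.no_grad():
        for k in range(n_tasks):
            if k != t:
                shared_encoders[k].load_state_dict(shared_encoders[t].state_dict())
```

Because every shared encoder is overwritten with the current shared parameters at the end of each epoch, any one of them can be used at inference time to produce the shared representation, while each private encoder remains task-specific.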