The rapid development of, and high demand for, artificial intelligence (AI) has led practitioners to prioritize the performance of algorithms over their interpretability; this lack of interpretability in machine learning techniques poses legal, operational, and ethical problems. A new area of research is emerging that focuses on the impenetrability of AI: interpretable machine learning (ML). In this new dynamic, interpretability could become the new criterion for evaluating models. Our E3 project consists of building explanations of machine learning models, which we treat as "black boxes", in the form of a "toolbox". We focus on the techniques that seem most relevant, namely LIME, SHAP, PDP, ICE, permutation feature importance, and Shapley values.
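As an illustration of the kind of tooling the project targets, here is a minimal sketch of one listed technique, permutation feature importance, computed with scikit-learn's `permutation_importance`; the dataset and model are illustrative placeholders, not the project's own API.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model: any fitted estimator can stand in
# for the "black box" whose behavior we want to explain.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most important features with their variability.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: "
          f"{result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Because it only requires predictions on perturbed data, this technique applies to any model, which is what makes it a natural candidate for a model-agnostic toolbox.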