In recent years, a torrent of Computational Intelligence (CI) applications with a high degree of autonomy in their behaviour has been successfully developed (such as fuzzy systems, neural networks, etc.). These systems can learn and act without human intervention and achieve remarkable results. At the same time, however, their complexity has increased dramatically, and the machine often cannot provide a reasonable explanation of its results to a human user. For instance, when a medical expert system advises a patient to take a particular drug to treat a disease, the patient needs to understand why this is a good prescription in order to follow it.

Nowadays, the interaction between human users and autonomous systems is increasing and, consequently, the development of explainable CI is a real challenge; i.e., systems that can explain their results and justify their decisions; ultimately, white boxes whose output a human user can understand together with the main reasons that support it. Explainable Computational Intelligence (XCI) is a new step in the development of CI which can determine, in a significant way, its future direction.

The main aim of the XCI Workshop is to lay the groundwork for building a multidisciplinary scientific community that shares this goal. Its keystone is natural language, analysed from different perspectives (e.g., natural language generation, natural language understanding, word sense disambiguation, argumentation theory, etc.), each of which has been deeply developed in its own right but not within the CI framework.