Abstract
We highlight the strong link between learning systems such as deep neural networks and neutrosophy. Neutrosophy is, above all, a representation that includes a neutral state, a state which lies at the heart of many phenomena of reality as well as of mathematical and information theories. We start from the recent understanding of neural networks, which views their internal functioning and the learning that characterizes them as based on adapted representations (both of the information to be processed and of the task to be performed) that are in fact optimal compressions. Compression means discriminating the similar and the different between cases by identifying representative characteristics, focusing on the most significant ones and rejecting the rest. These three operations of neural processing, learning, and compression owe their existence to non-linear treatments, which essentially amount to distinguishing a neutral zone between two filtered zones, corresponding to the neutrosophic approach.
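As a minimal illustration of the final idea (a non-linearity that distinguishes a neutral zone between two filtered zones), one can sketch a soft-thresholding activation; the function name and the threshold value below are our own illustrative choices, not taken from the text:

```python
def neutral_zone_activation(x, t=0.5):
    """Illustrative soft-thresholding non-linearity.

    Inputs inside the neutral zone [-t, t] are mapped to 0
    (the indeterminate, neutral state); inputs outside it pass
    through, shrunk toward zero (the two filtered zones).
    """
    if x > t:
        return x - t      # positive filtered zone
    if x < -t:
        return x + t      # negative filtered zone
    return 0.0            # neutral zone

# The three regions, loosely mirroring the neutrosophic triple:
print(neutral_zone_activation(1.5))   # positive zone -> 1.0
print(neutral_zone_activation(0.2))   # neutral zone  -> 0.0
print(neutral_zone_activation(-1.5))  # negative zone -> -1.0
```

Such dead-zone non-linearities appear, for instance, in sparse coding and in shrinkage-based denoising, where the neutral band suppresses insignificant values while the two outer zones retain the significant ones.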