Fandango

New Fandango publications on Machine Learning

CERTH's research on image manipulation detection and neural networks is already producing scientific publications.

September 10, 2019

This past June, Petros Daras, a researcher at CERTH, presented a paper at the 25th International Conference on Engineering, Technology and Innovation (ICE/IEEE ITMC 2019). The paper, “Embedding Big Data in Graph Convolutional Networks”, examines the deep learning architectures and learning methods being researched as part of FANDANGO’s image manipulation detection techniques:

Deep learning architectures and Convolutional Neural Networks (CNNs) have made a significant impact in learning embeddings of high-dimensional datasets. In some cases, and especially in the case of high-dimensional graph data, the interlinkage of data points may be hard to model. Previous approaches to applying the convolution function on graphs, namely the Graph Convolutional Networks (GCNs), presented neural network architectures that encode information about individual nodes along with their connectivity. Nonetheless, these methods face the same issue as traditional graph-based machine learning techniques, i.e., the requirement of full matrix computations. This requirement limits the applicability of GCNs to the available computational resources. In this paper, the following assumption is evaluated: training a GCN with multiple subsets of the full data matrix is possible and converges to the full-matrix training scores, thus lifting the aforementioned limitation. Following this outcome, different subset selection methodologies are also examined to evaluate the impact of the learning curriculum on the performance of the trained model on small-scale as well as very large-scale graph datasets.
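To make the subset-training idea concrete, here is a minimal sketch of a two-layer GCN trained on randomly sampled induced subgraphs rather than the full adjacency matrix. This is not the authors' code: the network, the random toy data, and the uniform node-sampling strategy are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def normalize_adjacency(adj):
    """Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCN(nn.Module):
    """Two-layer graph convolutional network: A' relu(A' X W0) W1."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hidden_dim)
        self.w1 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, adj_norm):
        h = torch.relu(adj_norm @ self.w0(x))
        return adj_norm @ self.w1(h)

# Toy full graph: 1000 nodes, 16 features, 4 classes (random, illustrative).
n, feat_dim, n_classes = 1000, 16, 4
features = torch.randn(n, feat_dim)
labels = torch.randint(0, n_classes, (n,))
adjacency = (torch.rand(n, n) < 0.01).float()
adjacency = torch.maximum(adjacency, adjacency.t())  # make symmetric

model = GCN(feat_dim, 32, n_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train on random node subsets: each step only uses the induced
# subgraph of a sample, never the full n x n matrix at once.
for step in range(100):
    idx = torch.randperm(n)[:200]                    # sampled node subset
    sub_adj = normalize_adjacency(adjacency[idx][:, idx])
    logits = model(features[idx], sub_adj)
    loss = loss_fn(logits, labels[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Each optimization step here touches only a 200 × 200 adjacency block rather than the full n × n matrix, which is the computational-resource limitation the paper's subset-training assumption aims to lift.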

Another paper by the CERTH team, “Graph-based Multimodal Fusion with metric learning for multimodal classification”, will be published in November 2019 and introduces methods for working with multimodal data:

In this paper, a graph-based, supervised classification method for multimodal data is introduced. It can be applied to data of any type consisting of any number of modalities, and can also be used for the classification of datasets with missing modalities. The proposed method maps the features extracted from every modality to a space where the intrinsic structure of the multimodal data is kept. In order to map the extracted features of the different modalities into the same space and, at the same time, maintain the feature distances between similar and dissimilar modality data instances, a metric learning method is used. The proposed method has been evaluated on the NUS-WIDE, NTU-RGBD and AV-Letters multimodal datasets and has shown results competitive with the state-of-the-art methods in the field, while being able to cope with datasets with missing modalities.
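As a rough illustration of the mapping-plus-metric-learning idea (a sketch under our own assumptions, not the published method), the snippet below projects two modalities into a shared embedding space and trains the projections with a standard contrastive loss, so matching instances stay close and non-matching ones are pushed beyond a margin:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityProjector(nn.Module):
    """Maps one modality's features into the shared embedding space."""
    def __init__(self, in_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-length embeddings

def contrastive_loss(z_a, z_b, same, margin=0.5):
    """Pull matching pairs together, push non-matching pairs apart
    up to a margin (a standard metric-learning objective)."""
    dist = (z_a - z_b).pow(2).sum(dim=1)
    # epsilon keeps sqrt differentiable when distances approach zero
    gap = F.relu(margin - (dist + 1e-8).sqrt()).pow(2)
    return torch.where(same, dist, gap).mean()

# Two modalities with different feature dimensions (random, illustrative).
img_proj, txt_proj = ModalityProjector(512, 64), ModalityProjector(300, 64)
params = list(img_proj.parameters()) + list(txt_proj.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

for step in range(100):
    img_feats = torch.randn(32, 512)   # e.g. visual features
    txt_feats = torch.randn(32, 300)   # e.g. textual features
    same = torch.rand(32) < 0.5        # whether each pair matches
    loss = contrastive_loss(img_proj(img_feats), txt_proj(txt_feats), same)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because each modality gets its own projector into the same space, an instance with a missing modality can still be embedded and classified from the modalities it does have, which is the property the abstract highlights.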