Research

I am interested in how AI is shaping the creative industry, especially how natural language processing, deep learning, generative design and robotics are pushing the boundaries of art, design, architecture and engineering.

DS – DATASET | NLP – NATURAL LANGUAGE PROCESSING | DL – DEEP LEARNING | CV – COMPUTER VISION | RL – REINFORCEMENT LEARNING | GD – GENERATIVE DESIGN | R – ROBOTICS

ADARIBERT


CV-NLP Our multimodal transformer model addresses two main tasks:

  • Grounding high-level attributes in images with an object-agnostic approach
  • Using within-language attention mechanisms to find relevant sequences in unstructured text (sketched below)
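As a rough illustration of the single-stream setup (not the published ADARIBERT code), the sketch below concatenates projected image-region features with token embeddings, adds a modality segment embedding, and lets one transformer encoder attend across both; all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

# Minimal single-stream multimodal encoder sketch (illustrative, not the
# published ADARIBERT implementation). Image-region features and token ids
# are assumed to be precomputed.
class MultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=30522, d_model=768, n_heads=12,
                 n_layers=4, img_feat_dim=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.img_proj = nn.Linear(img_feat_dim, d_model)  # project region features
        self.seg_emb = nn.Embedding(2, d_model)           # 0 = text, 1 = image
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids, img_feats):
        t = self.tok_emb(token_ids)                       # (B, Lt, D)
        v = self.img_proj(img_feats)                      # (B, Lv, D)
        seg = torch.cat([torch.zeros_like(token_ids),
                         torch.ones(img_feats.shape[:2], dtype=torch.long,
                                    device=img_feats.device)], dim=1)
        x = torch.cat([t, v], dim=1) + self.seg_emb(seg)
        # joint self-attention lets text tokens attend to image regions
        # (grounding) and to each other (within-language attention)
        return self.encoder(x)
```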


ADARI dataset


DS We have created ADARI—Ambiguous Descriptions and Art Images—the first large-scale self-annotated dataset of contemporary workpieces, with the aim of providing a foundational resource for subjective image description.
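For context, a loader for ADARI-style (image, descriptions) pairs might look like the sketch below; the file layout and field names are assumptions, since they are not specified here.

```python
import json
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

# Hypothetical ADARI pair loader. The annotations file, its fields, and the
# directory layout are illustrative assumptions about the released data.
class AdariPairs(Dataset):
    def __init__(self, root):
        self.root = Path(root)
        self.records = json.loads((self.root / "annotations.json").read_text())

    def __len__(self):
        return len(self.records)

    def __getitem__(self, i):
        rec = self.records[i]                     # one self-annotated workpiece
        image = Image.open(self.root / rec["image"]).convert("RGB")
        return image, rec["descriptions"]         # image + subjective sentences
```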

Design intent prediction


NLP-CV We have trained a model that predicts ambiguous design intents given an image.
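One common way to frame this is multi-label classification over a fixed vocabulary of design adjectives; the sketch below takes that framing, with an assumed backbone and label count rather than the published architecture.

```python
import torch.nn as nn
from torchvision import models

# Sketch: predict design-intent adjectives from an image as multi-label
# classification. Backbone choice and label-set size are assumptions.
class IntentPredictor(nn.Module):
    def __init__(self, n_intents=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()        # expose the 512-d pooled features
        self.backbone = backbone
        self.head = nn.Linear(512, n_intents)

    def forward(self, images):
        feats = self.backbone(images)      # (B, 512)
        return self.head(feats)            # logits; train with BCEWithLogitsLoss
```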



ADARI generative model


DL A deep multimodal learning model learns a joint embedding space of ADARI images and their subjective descriptions.
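A standard recipe for such a joint space is a symmetric contrastive objective over paired embeddings, sketched below; this is the common formulation, not necessarily the exact loss used here.

```python
import torch
import torch.nn.functional as F

# Symmetric InfoNCE sketch: paired (image, text) embeddings should land close
# together, unpaired ones far apart. The temperature value is an assumption.
def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(len(logits), device=logits.device)
    # true pairs sit on the diagonal of the similarity matrix
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```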

ADARI language model


NLP A model has been trained on more than 260k sentences that convey subjective design intents.

After training word embeddings on ADARI, words such as "organic" and "curvy" become closer in a low-dimensional embedding space.
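A minimal way to reproduce this kind of inspection with gensim is sketched below; the inline corpus is a tiny stand-in for the ~260k tokenized ADARI sentences, and the hyperparameters are illustrative.

```python
from gensim.models import Word2Vec

# Train skip-gram embeddings and inspect neighbors of a subjective word.
# The toy corpus below stands in for the tokenized ADARI sentences.
corpus = [
    ["an", "organic", "curvy", "silhouette"],
    ["soft", "organic", "curves", "define", "the", "form"],
    ["a", "curvy", "flowing", "organic", "profile"],
]
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1)
print(model.wv.most_similar("organic", topn=3))   # expect "curvy" among neighbors
```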


ARTISTIC STYLE ROBOTIC PAINTING


R-RL We have trained a robot to paint in the style of an artist.

A reinforcement learning algorithm learns to convert a painting into brush strokes, and a robot learns to translate the virtual strokes into real strokes in the style of an artist via Learning from Demonstration.
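The usual reward in stroke-based painting RL is the drop in pixel distance to the target after rendering a proposed stroke; the sketch below shows that signal, with render_stroke standing in for whatever renderer is used (a hypothetical function, not the authors' code).

```python
import numpy as np

# Reward sketch for stroke-based painting RL: an action (stroke parameters)
# is rendered onto the canvas, and the reward is the reduction in distance
# to the target painting. `render_stroke` is a hypothetical renderer.
def stroke_reward(canvas, target, stroke, render_stroke):
    new_canvas = render_stroke(canvas, stroke)
    before = np.mean((canvas - target) ** 2)
    after = np.mean((new_canvas - target) ** 2)
    return before - after, new_canvas     # positive when the stroke helps
```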

SAM dataset


DS SAM—Strokes And Motions—is a bi-modal dataset curated to support ongoing research on creative robotics and creative machine-learning toolmaking. The dataset consists of pairs of brushstrokes, stored as pixel-based 2D arrays, and the corresponding brush motions with six degrees of freedom (6-DoF).



The dataset contains more than 700 examples of brushstrokes demonstrated by a user. Each brushstroke is available as a pair: 1) the sequence of brush motions in space, and 2) the scanned brushstroke as an image. Use this notebook to process and review the data.
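In the absence of that notebook here, a minimal review script might look like the sketch below; the file names and the (T, 6) motion layout are assumptions about the released format.

```python
import numpy as np
import matplotlib.pyplot as plt

# Review one SAM pair. Paths and array layouts are assumptions: motions as
# (T, 6) arrays of 6-DoF poses, strokes as fixed-size grayscale images.
motion = np.load("sam/motions/0001.npy")   # (T, 6): position + orientation
stroke = np.load("sam/strokes/0001.npy")   # (H, W) scanned brushstroke

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
ax0.plot(motion[:, 0], motion[:, 1])       # brush-tip path in the xy-plane
ax0.set_title("brush motion (xy)")
ax1.imshow(stroke, cmap="gray")
ax1.set_title("scanned stroke")
plt.show()
```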

Brush motions were collected using a motion capture system and a custom-made rigid-body marker. The coordinates were post-processed so that the origin of the coordinate system is located at the center of each cell. Brush motions are saved as NumPy arrays.
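The re-centering step described above amounts to a translation of the position channels; a sketch, assuming the first three columns are xyz and cell_center comes from the capture calibration:

```python
import numpy as np

# Shift raw mocap positions so the origin sits at the center of the cell.
# Assumes columns 0-2 are xyz; orientation channels are left unchanged.
def recenter(motion: np.ndarray, cell_center: np.ndarray) -> np.ndarray:
    out = motion.copy()
    out[:, :3] -= cell_center
    return out
```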

Brushstrokes are scanned, converted to fixed-size images, and saved as NumPy arrays.
