Facial Expression Recognition Oriented Discrete Music Phrase Generator for Music Expression Research

In music expression research, researchers collect music expression data from users using music stimuli, which are usually relatively long segments of real pieces of music. This type of stimulus gives rise to several problems, e.g., how musical expressions build up from the bottom up, and how to create and/or organize dependent and independent musical parameters for testing. At the same time, known pieces of music may create a confounding effect related to the individual past experiences of users.

The discrete music phrase generator creates rule-based diatonic and non-diatonic music excerpts with parameters that give researchers control over important aspects of the music, e.g., harmony, melody, musical expression, dynamics, and tempo. In this way, each parameter can potentially serve as an independent variable against which the responses of music listeners are measured.
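As an illustration, a rule-based generator of this kind could be sketched as follows. This is a minimal hypothetical sketch, not the device's actual implementation: the function name `generate_phrase`, the C-major pitch set, and the 80% stepwise-motion rule are all assumptions chosen to show how tempo, dynamics, and diatonicity can be exposed as independent, controllable parameters.

```python
import random

# MIDI pitch numbers of one octave of the C-major scale (assumed key).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def generate_phrase(length=8, tempo_bpm=90, dynamic=64, diatonic=True, seed=None):
    """Return a phrase as (midi_pitch, duration_sec, velocity) tuples.

    Illustrative rules: stepwise melodic motion is preferred (80%)
    over small leaps; in non-diatonic mode a note may occasionally
    be altered chromatically by a semitone.
    """
    rng = random.Random(seed)
    beat = 60.0 / tempo_bpm          # one quarter note, in seconds
    idx = rng.randrange(len(C_MAJOR))
    phrase = []
    for _ in range(length):
        pitch = C_MAJOR[idx]
        if not diatonic and rng.random() < 0.2:
            pitch += rng.choice([-1, 1])   # chromatic alteration
        phrase.append((pitch, beat, dynamic))
        # Prefer stepwise motion; occasionally leap by a third or fourth.
        step = rng.choice([-1, 1]) if rng.random() < 0.8 else rng.choice([-3, -2, 2, 3])
        idx = max(0, min(len(C_MAJOR) - 1, idx + step))
    return phrase
```

Because each knob (`tempo_bpm`, `dynamic`, `diatonic`) varies independently of the others, a researcher can hold all but one fixed across trials, which is the experimental control the paragraph above describes.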

Another important contribution of the present device is that it captures users' facial expressions while they listen to musical excerpts and maps these expressions onto the harmonies sounding at particular split moments (e.g., 2-second windows) of the music. In this way, users' negative- or positive-valence facial expressions become responses to discrete musical fragments within a music excerpt, which is impossible to implement with classical survey methods.
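The alignment step described above can be sketched as a simple windowed aggregation. This is a hypothetical illustration, assuming valence has already been extracted as timestamped scores in [-1, 1] and the harmony of each 2-second window is known; the function name `map_valence_to_harmonies` and both input formats are assumptions, not the device's actual interface.

```python
from collections import defaultdict

def map_valence_to_harmonies(valence_samples, harmony_per_window, window_sec=2.0):
    """Attach the mean facial-valence score to each harmony window.

    valence_samples:    list of (time_sec, valence) pairs, valence in [-1, 1]
    harmony_per_window: chord label for each consecutive window_sec window
    Returns {(window_index, chord_label): mean_valence}.
    """
    buckets = defaultdict(list)
    for t, v in valence_samples:
        idx = int(t // window_sec)            # which split moment this sample falls in
        if 0 <= idx < len(harmony_per_window):
            buckets[idx].append(v)
    return {
        (idx, harmony_per_window[idx]): sum(vals) / len(vals)
        for idx, vals in sorted(buckets.items())
    }

# Example: three 2-second windows with harmonies C, G, Am.
harmonies = ["C", "G", "Am"]
samples = [(0.5, 0.2), (1.0, 0.4), (2.5, -0.6), (5.0, 1.0)]
result = map_valence_to_harmonies(samples, harmonies)
```

In the example, the two samples inside the first window average to 0.3 for the C harmony, while the single sample at 2.5 s yields -0.6 for G, giving per-harmony valence responses of the kind the paragraph describes.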