Posts

Showing posts from February 2018.

Using a MIDI keyboard with minimal latency in Ubuntu 16.04

The synth I'm using is SunVox, and I've got a Samson Graphite 49 keyboard (5 keys stopped working after a year of only occasional use; don't buy). I work on Ubuntu 16.04, and the best low-latency audio driver is JACK, which SunVox also supports. Two additional pieces of software are needed to get cracking: a2jmidid and qjackctl. The a2jmidid program provides a MIDI bridge between ALSA and JACK, which is required to use any MIDI-sending source with JACK. All of this software is available via apt-get. Once you've installed everything, here's how you get a basic minimum-latency MIDI keyboard-to-SunVox setup running: open the SunVox settings, set the driver to "JACK", and leave everything else on auto. Close SunVox. Then open qjackctl, a GUI for the JACK audio driver. In its settings you need to experiment with the buffer size and sampling frequency; these are the two parameters that determine the latency. ...
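How those two settings translate into latency is easy to estimate up front. A minimal sketch, assuming JACK's usual relation of frames per period times number of periods divided by the sample rate (the function name and the particular values are illustrative, not from the post):

```python
def jack_latency_ms(frames_per_period, sample_rate, periods=2):
    # Rough JACK buffering latency: buffer length in samples divided
    # by samples per second, converted to milliseconds.
    return frames_per_period * periods / sample_rate * 1000

# Smaller buffers mean lower latency, at the cost of possible xruns.
for frames in (64, 128, 256, 512):
    print(f"{frames:4d} frames @ 48000 Hz: {jack_latency_ms(frames, 48000):.1f} ms")
```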

Calculating the index of a tensor

For my neural net classes, I have a Tensor class that acts as value storage. Internally it holds only a one-dimensional array; the multidimensionality is achieved by computing a flat index from a tuple of indices, the tuple length being the number of dimensions of the Tensor. Let's say we have an $n$-dimensional tensor $T$, where each tensor dimension $i$ has a size $d_i$. For example, for a $3\times 4\times 5$ Tensor, $\vec{d} = \{3,4,5\}$. First we need a one-dimensional array $\vec a$ with enough room for all the values of the Tensor, so $\vec a$ needs to hold $n_v = \prod_{i=1}^{n} d_i$ values. Given a location $\vec l$ in the Tensor, we need to obtain the corresponding location $l_a$ in the array $\vec a$. In a 2-dimensional setting, it's $l_a = l_1 + d_1 l_2$. I guess that in a 3-dimensional setting, it is $l_a = l_1 + d_1 (l_2 + d_2 l_3) = l_1 + d_1 l_2 + d_1 d_2 l_3$. So in an $n$-dimensional set...
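Following the pattern of the 2- and 3-dimensional cases, the flat index is $l_a = \sum_{i=1}^{n} l_i \prod_{j=1}^{i-1} d_j$. A minimal sketch of that computation (the function and variable names are mine, not from the Tensor class itself):

```python
def flat_index(loc, dims):
    """Map a multidimensional location to its index in the flat array.

    Implements l_a = l_1 + d_1*l_2 + d_1*d_2*l_3 + ... by accumulating
    the running product of dimension sizes as the stride.
    """
    idx, stride = 0, 1
    for l, d in zip(loc, dims):
        idx += l * stride
        stride *= d
    return idx

# For the 3x4x5 example: location (2, 3, 4) lands at 2 + 3*3 + 4*3*4 = 59,
# the last slot of the 60-element array.
print(flat_index((2, 3, 4), (3, 4, 5)))  # 59
```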

Training Recurrent Neural Networks without Backpropagation Through Time

Normally, for a recurrent net to learn dependencies between two states that are $n$ steps apart, it has to be unrolled through time for at least $n$ steps during backpropagation. That is not plausible in the brain, it is computationally expensive compared to feedforward neural network training, and it puts a hard limit on how far apart in time learnable dependencies can be. Recalling the basics of neural nets: a neural network is a function with parameters $\theta$ mapping an input vector $\vec{v_i}$ nonlinearly to an output vector $\vec{v_o}$, so $f(\theta, \vec{v_i}) = \vec{v_o}$. The parameters are initialized randomly, and the error $E$ between the actual and the desired output for a given input is partially differentiated with respect to each parameter; the parameters are then altered in the direction of less error. This is called backpropagation and is the state of the art for training neural nets. Its basic unit is a neuron. It takes a vector ...
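A minimal sketch of that update rule, assuming a toy single-neuron model $f(\theta, \vec{v_i}) = \tanh(\theta \cdot \vec{v_i})$ with squared error; the model, values, and learning rate here are illustrative, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(3)        # parameters, initialized randomly
v_i = np.array([0.5, -1.0, 2.0])      # input vector
target = 0.3                          # desired output

for step in range(5):
    v_o = np.tanh(theta @ v_i)        # actual output
    E = 0.5 * (v_o - target) ** 2     # error between actual and desired
    # dE/dtheta via the chain rule: (v_o - target) * (1 - v_o^2) * v_i
    grad = (v_o - target) * (1 - v_o ** 2) * v_i
    theta -= 0.1 * grad               # alter parameters toward less error
    print(f"step {step}: E = {E:.4f}")
```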