Here’s a snapshot of some of our recent work on guitar amplifiers
A preliminary quantitative exploration of guitar amplifier simulation software
Orbisant’s founder, Trent, is an absolute guitar fanatic and gear nerd. As such, it did not take long before some of Orbisant’s work began to gravitate towards this space. This post is the first of many to come. Here, we make a preliminary statistical exploration of some available guitar amplifier Virtual Studio Technology (VST) software.
What is a VST?
A VST, also commonly called a “plugin”, is a piece of software that can be used either standalone or inside a music production program (called a Digital Audio Workstation). Emulating the signal processing of an amplifier can be done in a few ways, but many high-end plugins take a machine learning approach: a model is trained to “learn” the mapping from a raw guitar signal recorded directly into the computer to the “processed” signal produced when that same raw signal is passed through the target amplifier. These models are pre-trained before the software ships, so the conversion happens in real time for the user, making the experience feel just like playing through a real amplifier.
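To make the idea concrete, here is a deliberately simplified toy sketch in R. Real amp modellers use far more sophisticated architectures that account for temporal context; this example just fits a tiny feedforward network (via the nnet package) to learn a memoryless tanh-style soft-clipping curve, which stands in for “what the target amplifier does” purely for illustration:

```r
library(nnet)  # small feedforward neural networks

set.seed(123)

# Toy "raw" guitar signal: a decaying 110 Hz sine wave at 44.1 kHz
fs <- 44100
t <- seq(0, 0.05, by = 1 / fs)
raw <- exp(-20 * t) * sin(2 * pi * 110 * t)

# Pretend the target amplifier applies tanh-style soft clipping (an
# assumption for illustration; real amps are far more complex)
processed <- tanh(5 * raw)

# "Training": learn the raw -> processed mapping sample by sample
fit <- nnet(x = matrix(raw, ncol = 1), y = matrix(processed, ncol = 1),
            size = 8, linout = TRUE, maxit = 500, trace = FALSE)

# "Run time": the pre-trained model converts new raw samples in one pass
emulated <- predict(fit, matrix(raw, ncol = 1))
cor(emulated, processed)  # close to 1 if the mapping was learned
```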
Some example screenshots of guitar amp VSTs are shown below.
The present analysis
This analysis comprises the first part of an ongoing exploration of guitar VSTs. As a starting point, it includes only amplifiers produced by two companies: Neural DSP and STL Tones. Specifically, we are comparing the following VSTs (the list will be added to over time until it comprehensively covers most VSTs, rather than just these examples from Neural DSP and STL Tones):
Archetype Nolly (Neural DSP) - 4 amplifiers
Archetype Gojira (Neural DSP) - 3 amplifiers
Archetype Cory Wong (Neural DSP) - 3 amplifiers
Archetype Plini (Neural DSP) - 3 amplifiers
Archetype Tim Henson (Neural DSP) - 3 amplifiers
Fortin Nameless (Neural DSP) - 1 amplifier
STL Tonality Will Putney (STL Tones) - 4 amplifiers with 3-4 tube settings for each
Note the number of different amplifiers within each VST; each amplifier model is treated as its own time series in the analysis that follows.
Analytical Approach
An audio file can be represented as a time series by taking the amplitude at each sample (time point). This means we can apply time-series analysis methods to understand the differences in dynamics across audio signals. Very cool. The analysis follows this procedure (a rough code sketch of the middle steps follows the list):
Each amplifier head's settings are set to noon, and all pedals, cabinets, and effects are turned off
A 20Hz-20kHz sine sweep is fed into each amplifier, producing a standardised output response for each amplifier
The waveform for each amplifier is converted to a numerical time series of amplitudes, giving a time series (amplifier) x amplitude matrix
Time-series features are computed on each row of this matrix, producing a time series x feature matrix
Analysis is conducted on the feature space to identify any informative empirical structure in the data
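As a rough sketch of steps 2 and 3, the sweep can be generated and the re-amped outputs stacked into a matrix in R with the tuneR package. The linear sweep shape, ten-second duration, file paths, and 16-bit mono format are all illustrative assumptions rather than the exact settings used:

```r
library(tuneR)  # reading and writing WAV files

# Generate a 20 Hz - 20 kHz linear sine sweep at 44.1 kHz
fs <- 44100
dur <- 10  # sweep length in seconds (assumed)
t <- seq(0, dur, by = 1 / fs)
f0 <- 20
f1 <- 20000
sweep <- sin(2 * pi * (f0 * t + (f1 - f0) * t^2 / (2 * dur)))

# Write as a 16-bit mono WAV to feed through each amplifier plugin
writeWave(Wave(left = round(sweep * 32000), samp.rate = fs, bit = 16),
          "sweep.wav")

# After re-amping, read each amplifier's output (assumed to be equal-length
# WAVs in an outputs/ folder) into an amplifier x amplitude matrix
files <- list.files("outputs", pattern = "\\.wav$", full.names = TRUE)
amp_matrix <- t(sapply(files, function(f) readWave(f)@left))
```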
Feature-based time-series analysis
Feature-based time-series analysis is a rapidly growing approach to tackling time series problems. A “feature” is a summary statistic computed on a time series vector. Simple examples include a mean or standard deviation. More complex features include quantities such as the autocorrelation function at a given lag or spectral entropy. Essentially, you can reduce an entire time series to a vector of features which can then be used for statistical analysis or machine learning. This is not only usually much more computationally efficient, but also lets you obtain a deeper understanding of time series properties that cannot be observed in the raw measurement space. Importantly, features provide an interpretable method for understanding temporal dynamics, as the features themselves tell you interesting things about the properties of a time series—it’s not an opaque black-box.
Fortunately, feature-based time-series analysis is a massive focus of my (Trent's) PhD, and I have written software that makes it easy to extract many features and automates statistical visualisations of them. The package for R is called theft: Tools for Handling Extraction of Features from Time Series, and it has an official CRAN release and a corresponding journal article coming soon!
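For context, a feature-extraction call with theft looks roughly like the following. The long input format (one row per observation, with id, time, and value columns) follows the package's documentation, but argument names may differ between versions, so treat this as a sketch rather than the exact code behind this post:

```r
library(theft)

# amp_data: assumed long data frame with one row per (amplifier, sample)
# pair and columns id (amplifier name), timepoint, and values (amplitude)
features <- calculate_features(
  data = amp_data,
  id_var = "id",
  time_var = "timepoint",
  values_var = "values",
  feature_set = "catch22"
)
```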
For this specific post, I used a feature set called catch22: a group of 22 features distilled from a much larger candidate library by a pipeline that aimed to maximise classification performance on applied problems while minimising redundancy between features. The R implementation is in my package Rcatch22 on CRAN.
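At the lowest level, the 22 features for a single amplifier's response can be computed directly with Rcatch22; here x is a hypothetical numeric vector of amplitudes (e.g. one row of the amplifier x amplitude matrix sketched earlier):

```r
library(Rcatch22)

# x: numeric vector of amplitudes for one amplifier's sweep response
x <- amp_matrix[1, ]

# Returns a tidy data frame of the 22 feature names and their values
catch22_all(x)
```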
Results
So what can we do with a vector of 22 features for each time series (amplifier)? Well, we can combine them into a feature x time series matrix. This then enables us to compute something called an adjacency matrix, which here is the matrix of all pairwise correlations between the amplifiers' feature vectors. In plain words, we can plot how strongly each pair of amplifiers correlates in terms of their feature values. This is shown in the plot below.
Prior to plotting, the adjacency matrix was hierarchically clustered across both rows and columns, as this helps to pull out informative empirical structure in the data. Evidently, we can see big chunks of blue (indicating moderately to strongly negatively correlated amplifiers), as well as a few dark patches of red, indicating groups of amplifiers that are very similar in terms of their values on these 22 features. Self-correlations of course sit on the diagonal running from bottom-left to top-right.
Notice the ordering of the amps on the axes. After clustering, we can see that amplifiers from within the same plugin (e.g. Neural DSP Gojira) tend to be grouped together, with a few exceptions. This is interesting, and suggests that the designers of these VSTs know the product (sound) they intend to emulate very well. Also of interest are the STL Tonality amps. Each of these heads has three separate tube options (except Head 4, which has four), and yet all the different tube settings for a given head tend to cluster together. Fascinating that an empirical approach can pull out things we musicians intuitively know, right?
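For the curious, here is a minimal sketch of how such a clustered correlation heatmap can be produced in base R, assuming amp_features is a hypothetical amplifier x feature matrix with amplifier names as row names (base R's heatmap() hierarchically clusters rows and columns by default):

```r
# z-score each feature column first so no single feature dominates
amp_features_z <- scale(amp_features)

# Pairwise Pearson correlations between the amplifiers' feature vectors
adjacency <- cor(t(amp_features_z))  # amplifier x amplifier matrix

# heatmap() reorders rows and columns via hierarchical clustering,
# pulling blocks of similar amplifiers together
heatmap(adjacency, symm = TRUE,
        col = hcl.colors(50, "RdBu", rev = TRUE))
```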
So far we have looked at pairwise relationships. But something else we can do is use dimensionality reduction techniques to understand structure across the 22 features in an interpretable low-dimensional projection. A common method for this is principal component analysis (PCA), which computes a set of components ordered by how much of the variance in the data each one explains: the first principal component explains the most variance, the second explains the next most, and so on, up to as many components as the data can support. One way to graph a PCA is to plot each time series with respect to the first two principal components, which together explain the most variance in the data. This can easily be drawn as a scatterplot, shown below.
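As a sketch, the projection can be computed with base R's prcomp() on the same hypothetical amplifier x feature matrix, scaling each feature to unit variance first:

```r
# PCA on the amplifier x feature matrix; centre and scale each feature
pca <- prcomp(amp_features, center = TRUE, scale. = TRUE)

# Proportion of variance explained by the first two components
summary(pca)$importance["Proportion of Variance", 1:2]

# Scatterplot of the amplifiers in the first two principal components
plot(pca$x[, 1], pca$x[, 2], xlab = "PC1", ylab = "PC2", pch = 19)
text(pca$x[, 1], pca$x[, 2], labels = rownames(amp_features),
     pos = 3, cex = 0.7)
```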
Evidently, our low-dimensional projection clearly pulls out informative structure in the data. Most of the amplifiers group together as expected (i.e. each head in STL Tonality regardless of tube setting, and each head within a given Neural DSP plugin), with some exceptions. I want to talk about three of these exceptions here:
Neural DSP Tim Henson 1 - This head is visible right at the top of the graph. I want to pull this example out because this amp is an acoustic guitar simulator. That makes it very different from the other amplifiers in this analysis, and it is reassuring to see it sitting out on its own in the low-dimensional projection.
Neural DSP Cory Wong 1 - This head is visible in the middle-left of the graph. While not extremely far from the other heads, it does not sit in a cluster. This head is actually a D.I. Funk Console modelled after an analog channel strip, which, like the Tim Henson 1 head discussed above, makes it quite distinct from the traditional amplifier models in this analysis. It’s interesting to see it by itself in this space.
Neural DSP Plini (all amps) - This plugin is an interesting one. It is the only plugin whose heads all sit at vastly different points in the low-dimensional space. In other words, if you buy this plugin, you are getting three amplifiers that may cover distinctly different sonic ground from one another. This needs to be explored in further detail.
What’s next?
While these results are very interesting and promising, much more work needs to be done. This post is intended to be the first in a series in which more amplifiers from Neural DSP, STL Tones, and many other companies will be added. If the list becomes comprehensive enough, I may develop an interactive web application where people can explore all of the amplifier relationships themselves. Please stay posted for updates!
All source code for this project can be found on GitHub.