Add missing wiki files

Add the files in the wiki repository as a subdirectory. Note, these
are still in wiki format, and should be changed to Markdown to be
more useful. But this commit just restores what was there before.
master
Raph Levien 8 years ago
parent 40a6b2d753
commit 8e7b3f3268
  1. wiki/Dx7Envelope.wiki (69 lines)
  2. wiki/Dx7Hardware.wiki (29 lines)
  3. wiki/FrequencyModulation.wiki (19 lines)
  4. wiki/GettingStarted.wiki (79 lines)
  5. wiki/SinePoly.wiki (60 lines)
  6. wiki/YamahaDx7.wiki (35 lines)
  7. wiki/img/cheby_vs_smooth.png (binary)
  8. wiki/img/cheby_vs_smooth_fr.png (binary)
  9. wiki/img/env.html (167 lines)
  10. wiki/img/outlevel.html (71 lines)
  11. wiki/img/saw_spectrum.png (binary)

@@ -0,0 +1,69 @@ wiki/Dx7Envelope.wiki
#summary Detailed description of DX7 envelope generation
= Interactive model =
Explore the interactive !JavaScript [http://wiki.music-synthesizer-for-android.googlecode.com/git/img/env.html implementation] of a nearly bit-accurate model of the DX7 envelope.
Also see [http://wiki.music-synthesizer-for-android.googlecode.com/git/img/outlevel.html plots] of the scaling tables for both output level and rate, from the 0..99 values in DX7 patches to real-world values, measured in dB and dB/s.
= The DX7 Envelope =
This page contains a detailed description of the envelope generation in the DX7. Conceptually, there's an "idealized envelope" which has smooth variation of amplitude as a function of time, and then there's the actual realization in the DX7's hardware, which introduces various forms of quantization, mostly due to running with very small numbers of bits for state.
== Idealized envelope ==
The envelope logic is fairly simple, but also quite musically expressive. The main parameters are four levels and four rates, plus the output level (described in more detail below). The shape of the envelope is asymmetrical - while the decay portions are purely exponential, the attack portions have a more complex shape approximating linear. This asymmetry is visible in the envelope figures in Chowning's original paper on FM synthesis - see the reference below. Chowning says, "A general characteristic of percussive sounds is that the decay shape of the envelope is roughly exponential as shown in Fig. 14", while the attacks shown in his examples of envelopes for brass and woodwind sounds are much closer to linear.
<wiki:comment>Figures from Chowning</wiki:comment>
The full state of the idealized envelope is represented as an _index_ of which part of the envelope is active (labeled 0-3 in this discussion), combined with a _level_. Typically, the envelope starts out at L4 and increases to L1. Then, when it reaches the _target_ of L1, the index increments, and it proceeds to L2 (either by decay or attack, depending on whether L1 or L2 is greater).
The conversion from level parameter (L1, L2, L3, L4 in the patch) to actual level is as follows:
{{{
Level 0..5 -> actual level = 2 * l
Level 5..16 -> actual level = 5 + l
Level 17..20 -> actual level = 4 + l
Level 20..99 -> actual level = 14 + (l >> 1)
}}}
The output level is scaled similarly, but is just 28 + l for values 20..99. It
has twice the precision of the level parameters. The exact lookup table for values 0..19 is [0, 5, 9, 13, 17, 20, 23, 25, 27, 29, 31, 33, 35, 37, 39,
41, 42, 43, 45, 46].
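As an illustration (a sketch, not the synthesizer's actual source; the function names are invented), the two scalings above in JavaScript:

```javascript
// Level parameter (L1..L4, 0..99) to "actual level" (0..63), per the table above.
function scaleLevel(l) {
  if (l < 5) return 2 * l;       // 0..4
  if (l < 17) return 5 + l;      // 5..16
  if (l < 20) return 4 + l;      // 17..19
  return 14 + (l >> 1);          // 20..99
}

// Output level: exact lookup for values 0..19, then 28 + l (twice the precision).
var outLevelLut = [0, 5, 9, 13, 17, 20, 23, 25, 27, 29, 31, 33, 35, 37, 39,
                   41, 42, 43, 45, 46];
function scaleOutLevel(l) {
  return l < 20 ? outLevelLut[l] : 28 + l;
}
```

Note that the ranges meet consistently at their boundaries: scaleLevel(5) is 10 under either the first or the second rule.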
Then, the total level is 64 `*` actual level + 32 `*` output level, normalized
for full scale. This "total level" is in units of approx .0235 dB (20 log<sub>10</sub>(2) / 256), so that 256 of these steps doubles the amplitude.
From measurement of timing, the minimum level seems to be clipped at
3824 counts from full scale -> 14.9375 doublings. Note, however, that velocity values > 100 can cause amplitudes greater than full scale. Full scale means both L and output level set to 99 in the patch, and no additional scaling.
As mentioned above, the decay shape is simpler than the attack. An exponential decay corresponds to a linear change in dB units. First, the R parameter in the patch (range 0..99) is converted to a 6 bit value (0..63), by the formula qrate = (rate `*` 41) / 64.
The rate of decay is then 0.2819 `*` 2<sup>(qrate / 4)</sup> `*` (1 + 0.25 `*` (qrate mod 4)) dB/s. This is a reasonably good approximation to 0.28 `*` 2<sup>(qrate `*` 0.25)</sup>.
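A sketch of that rate computation in JavaScript (the 49096 Hz sample rate comes from the hardware page; the function name is invented):

```javascript
// Patch rate (0..99) to decay speed in dB/s. At qrate 0, the level drops one
// ~0.0235 dB step every 4096 samples: 49096 / 4096 * 0.023518 ≈ 0.2819 dB/s.
function decayRateDbPerSec(rate) {
  var qrate = (rate * 41) >> 6;   // map 0..99 onto 0..63
  return 0.2819 * (1 << (qrate >> 2)) * (1 + 0.25 * (qrate & 3));
}
```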
Attack is based on decay, multiplying it by a factor dependent on the current level. In .0235 dB units, this factor is 2 + floor((full scale - current level) / 256). Also, level _immediately_ rises to 39.98 dB (1700 steps) above the minimum level, which helps create a crisper attack.
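In code (mirroring the interactive model linked above, where full scale is 3840 = 15 `*` 256 steps), the attack increment per enabled clock is:

```javascript
// Attack slope: steep far below full scale, flattening as the level rises.
// This is the "2 + (distance from full scale) / 256" factor with full scale
// at 15 << 8 = 3840.
function attackSlope(level) {
  return 17 - (level >> 8);
}
```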
== Output level ==
The output level is computed once, at the beginning of the note, and affects both the overall amplitude of the operator and also the timing. In addition to the "output level" setting in the patch, output level is also affected by velocity and scaling.
The output level in the patch is in the range 0..99, and this is scaled in units of 0.7526 dB (ie 32 steps).
== Hardware ==
Careful measurement of the DX7 reveals quite rich detail on how envelopes are actually computed. Clearly the resolution for amplitude is .0235 dB, and there are 12 bits total (for a maximum dynamic range of 72.25 dB).
At a qrate of 0, the amplitude decreases by one step every 4096 samples, in other words halves every 2<sup>20</sup> samples. Each increase of 4 doubles the clock rate. Careful examination reveals that fractional multiples of qrate (ie qrate is not a multiple of 4) are clocked out using a pattern:
{{{
01010101
01010111
01110111
01111111
}}}
For attacks, instead of decrementing by 1, the factor is added (thus, no actual multiplication is needed). When the clock rate increases to the point where the increment would be needed more than once per sample clock (ie for qrate >= 48), the increment value is shifted left by (qrate / 4) - 11 instead, and the increment (masked by the bit pattern above) is applied every single sample clock.
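This clocking can be sketched as follows (matching the interactive model: qrate mod 4 selects one of the four bit patterns above, and qrate / 4 sets how often the pattern is consulted):

```javascript
var envMask = [[0, 1, 0, 1, 0, 1, 0, 1],
               [0, 1, 0, 1, 0, 1, 1, 1],
               [0, 1, 1, 1, 0, 1, 1, 1],
               [0, 1, 1, 1, 1, 1, 1, 1]];

// True when the envelope should step on this sample clock.
function envStepEnabled(clock, qrate) {
  var shift = (qrate >> 2) - 11;
  if (shift < 0) {
    // Slow rates: only consult the pattern once per 2^-shift clocks.
    var mask = (1 << -shift) - 1;
    if ((clock & mask) != mask) return false;
    clock >>= -shift;
  }
  return envMask[qrate & 3][clock & 7] != 0;
}
```

At qrate 0 this yields one step every 4096 samples, and each increase of qrate by 4 doubles the stepping rate, as described above.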
= References =
* [http://people.ece.cornell.edu/land/courses/ece4760/Math/GCC644/FM_synth/Chowning.pdf The Synthesis of Complex Audio Spectra by Means of Frequency Modulation], John Chowning, J AES, Sept 1973.

@@ -0,0 +1,29 @@ wiki/Dx7Hardware.wiki
#summary Analysis of the hardware in the DX7
= Yamaha DX7 hardware =
There's a fair amount known about the Yamaha DX7 hardware, and more detail (and confidence) can be obtained through black-box testing. One of the best sources of information is the service manual published by Yamaha.
The main CPU is a 63X03, a variant of the Motorola 6800, most likely running at 2MHz. There's also a sub-cpu, a 6805, which is responsible for scanning the input keys and panel switches.
The main sound generation is done by a pair of VLSI chips - the YM21290 EGS (envelope generator), and the YM21280 OPS (operator). Every audio sample (at a sampling rate of 49096 Hz), these chips cycle through 96 subsamples, one for each of 6 operators x 16 voice polyphony.
The EGS contains state for 96 envelopes and is also the main point of interface from the main CPU. For each clock (approx 4.7MHz), the EGS chip supplies a 12-bit envelope and 14-bit frequency value to the second chip, the OPS (operator) chip.
Through measurement, it's clear that the 12-bit envelope value is a simple Q8 fixed-point representation of logarithmic (base 2) gain. Linear gain is equal to 2^(value / 256). The steps are particularly clearly seen in plots of amplitude for slow-decaying envelopes (todo: insert decay30.png image). This gives a total of about 96dB of dynamic range, in steps of approximately 6 / 256 = .0234 dB, which is smooth to the ear.
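In code, the measured mapping is simply:

```javascript
// 12-bit envelope value as Q8 log2 gain: linear gain = 2^(value / 256).
function envToLinearGain(value) {
  return Math.pow(2, value / 256);
}

// One step is 20*log10(2)/256 ≈ 0.0235 dB; 4096 steps span about 96 dB.
var envStepDb = 20 * Math.log(2) / Math.log(10) / 256;
```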
Similarly, it's clear that the frequency value is a Q10 fixed-point representation of logarithmic (base 2) frequency. The top four bits represent the octave (thus giving a total range of 16 octaves, from below half a Hz to the Nyquist limit of 24.5kHz). The resolution of the lower 10 bits is approximately 1.17 cents.
From careful measurement of exact frequencies of sine waves generated by the OPS chip, it's clear that frequencies within an octave are quantized to linear values of 1/4096 resolution. Thus, it's likely that the lower 10 bits are run through a LUT containing 2^(value / 1024), and the result is then used to increment a phase accumulator. (It's a reasonable guess that the phase accumulator has 27 bits of precision - 12 bits of mantissa at the lowest frequencies, shifted left by up to 15 at the highest. However, it's also possible that at the very lowest frequencies the phase is only incremented for a fraction of the sample clocks).
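That guess can be sketched as follows (the LUT contents and the Q27 accumulator width are hypotheses inferred from measurement, not confirmed hardware details):

```javascript
// Hypothesized log-frequency (Q10, 14 bits) to phase-increment conversion:
// the low 10 bits index a 2^(x/1024) table at 1/4096 resolution, then the
// top 4 bits (the octave) shift the result.
function logFreqToPhaseInc(value) {
  var octave = value >> 10;       // top 4 bits
  var mantissa = value & 1023;    // ~1.17 cents per step
  var lutEntry = Math.round(4096 * Math.pow(2, mantissa / 1024));
  return lutEntry << octave;      // added to the phase accumulator each sample
}
```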
It is known that Yamaha's later single-chip FM tone generating chips avoided the need for a multiplier by storing log(sin(x)) in one LUT, and 2^x in another. Instead of multiplying by the gain signal, the gain is simply added to the output of the first LUT. Matthew Gambrell and Olli Niemitalo decapsulated a YM3812 chip and recovered the contents of these ROMs - which are both 256-element (8 bits in). Note that the sine LUT actually only stores a quarter-cycle - the other three are reconstructed through symmetry. It is very likely that the DX7 chips work on the same principle. It is not yet known whether the DX7 ROMs are the same size as these or more precise.
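The principle can be illustrated directly (with math functions standing in for the ROMs; the fold-and-sign logic reconstructs the full cycle from a quarter):

```javascript
// Multiplier-free sine: log2(sin) from one "ROM", 2^x from another; the
// log-domain gain (attenuation, in log2 units) is added, not multiplied.
function opOutput(phase, attenLog2) {       // phase in [0, 1)
  var sign = phase < 0.5 ? 1 : -1;          // second half-cycle is negative
  var q = phase % 0.5;
  if (q > 0.25) q = 0.5 - q;                // quarter-wave symmetry
  if (q === 0) return 0;                    // avoid log2(0)
  var logSin = Math.log2(Math.sin(2 * Math.PI * q));   // first "ROM"
  return sign * Math.pow(2, logSin - attenLog2);       // second "ROM"
}
```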
Since the log-frequency to linear-frequency LUT computes the same actual function as logarithmic-to-linear gain, it's entirely plausible that the ROM is shared, and two lookups are done per clock. If so, it's most likely that the exponential function has 10 low-order bits of accuracy (plus 4 bits of exponent).
The OPS chip also contains two buffers (M and F) for assembling the 6 operators into a single voice. These store linear values, so are the output of the exp LUT. The combination of multiple operators is simple linear addition. The F buffer implements Tomisawa's "anti-hunting" filter (this is determined from measurement of feedback waveforms) - so buffers the previous _two_ values, and the mean of those is used (feedback gain is a power of two, so multiplication by the feedback gain is a shift) as the input to the next cycle.
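A sketch of that feedback path (names invented; the exact bit widths are not confirmed):

```javascript
// Anti-hunting feedback: mean of the previous two operator outputs, then a
// power-of-two feedback gain applied as a right shift (larger fbShift means
// less feedback).
function feedbackValue(prev1, prev2, fbShift) {
  return (prev1 + prev2) >> (fbShift + 1);  // /2 for the mean, then >> fbShift
}
```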
= References =
* [https://docs.google.com/a/google.com/Doc?id=dd8kqn9f_13cqjkf4gp OPLx decapsulated], Matthew Gambrell and Olli Niemitalo, 2008/04/20, also see [http://yehar.com/blog/?p=665 blog post]
* [http://www.abdn.ac.uk/~mth192/dx7/manuals/dx7-9_service_manual_1.pdf DX7/9 Service Manual]
* [http://en.wikipedia.org/wiki/Yamaha_YM3812 Yamaha YM3812] at Wikipedia

@@ -0,0 +1,19 @@ wiki/FrequencyModulation.wiki
#summary Discussion and resources for FM Synthesis
= FM Synthesis =
Frequency modulation (FM) synthesis is the core technique used by the Yamaha DX7 to produce sounds. It was originally invented by John Chowning around 1966, published in 1973 (see below), refined throughout the '70s (with Yamaha producing the innovative but not particularly successful [http://www.synthtopia.com/content/2010/03/05/yamaha-gs-1/ GS-1], massing 90kg and relying on 50 discrete ICs to compute the sound - and using magnetic strip memory to store patches), and achieving mass popularity for the first time in the DX7.
There is some terminological confusion around FM synthesis, with some believing that a more correct term would be "phase modulation" or "phase distortion." The two concepts are very closely related: phase modulation by a signal y(x) is equivalent to frequency modulation by dy/dx, and in the basic case the modulating signal is a sine wave, so the derivative is also a sine wave, albeit with different phase and amplitude. Part of the confusion, no doubt, is due to Casio's use of "Phase Distortion" terminology for their competing CZ line of synthesizers.
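The equivalence is easy to verify numerically (a sketch with illustrative constants, not code from the synthesizer): phase modulation by sin(wm t) matches frequency modulation whose instantaneous frequency is wc + I wm cos(wm t).

```javascript
// Phase modulation: modulate the phase directly.
function pm(t, wc, wm, I) {
  return Math.sin(wc * t + I * Math.sin(wm * t));
}

// Frequency modulation: integrate the instantaneous frequency
// wc + I*wm*cos(wm*t) (midpoint rule), then take the sine of the phase.
function fm(nsteps, dt, wc, wm, I) {
  var phase = 0;
  for (var i = 0; i < nsteps; i++) {
    var tMid = (i + 0.5) * dt;
    phase += (wc + I * wm * Math.cos(wm * tMid)) * dt;
  }
  return Math.sin(phase);
}
```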
For the most part, the DX7 implements pure FM synthesis, using six operators for each voice, in 32 possible configurations (known as "algorithms" in Yamaha lingo). However, it also implements "feedback FM", an innovation by Tomisawa that expands the range of waveforms and spectra available. Feedback FM produces a waveform resembling a sawtooth wave (very familiar in analog synthesizers and their digital modeling counterparts), with monotonically decreasing amplitudes of the overtones, as opposed to the wavelet-like shape of spectra (deriving from Bessel functions) of standard FM. Also, when driven at very high loop gains, feedback FM can become chaotic and white-noise like.
The mathematics and history of FM synthesis are authoritatively covered in the links below, which together make excellent reading.
= References =
* [http://users.ece.gatech.edu/~mcclella/2025/labs-s05/Chowning.pdf The Synthesis of Complex Audio Spectra by Means of Frequency Modulation], John M. Chowning, J. AES 21(7), Sept. 1973
* [http://en.wikipedia.org/wiki/Frequency_modulation_synthesis Frequency modulation synthesis] at Wikipedia
* [http://www.abdn.ac.uk/~mth192/html/Chowning.html Interview with John Chowning], Aftertouch Magazine 1(2)
* [https://ccrma.stanford.edu/software/snd/snd/fm.html An Introduction to FM], Bill Schottstaedt, CCRMA (Stanford)
* [http://www.maths.abdn.ac.uk/~bensondj/html/music.pdf Music: A Mathematical Offering], Dave Benson, Cambridge University Press, Nov 2006

@@ -0,0 +1,79 @@ wiki/GettingStarted.wiki
#summary How to get started with development on Music Synthesizer for Android.
#labels Featured
= Getting started with Music Synthesizer Development =
The following steps will get you started working with the Music Synthesizer for Android code in a Unix-like environment. The following environment variables are used.
* `SYNTH_PATH` - Location of the Music Synthesizer source code.
* `PROTO_PATH` - Location where Protocol Buffers are installed.
== Installing Protocol Buffers ==
Download the Google [http://code.google.com/p/protobuf/ Protocol Buffer] package from [http://code.google.com/p/protobuf/downloads/list here].
To build the `protoc` compiler, run the following commands. If you are using Windows, you can skip this step by downloading the prebuilt Windows `protoc` compiler and installing it in `$SYNTH_PATH/music-synthesizer-for-android/core/bin/`.
{{{
tar -xzvf protobuf-2.4.0a.tar.gz
cd protobuf-2.4.0a
./configure --prefix=$PROTO_PATH
make
make check
make install
mkdir $SYNTH_PATH/music-synthesizer-for-android/core/bin/
cp $PROTO_PATH/bin/protoc $SYNTH_PATH/music-synthesizer-for-android/core/bin/
}}}
Build the protocol buffer runtime libraries jar.
{{{
cd java/
mvn test
mvn install
mvn package
mkdir $SYNTH_PATH/music-synthesizer-for-android/core/lib/
cp target/protobuf-java-2.4.*.jar $SYNTH_PATH/music-synthesizer-for-android/core/lib/libprotobuf.jar
}}}
==Installing Eclipse==
Other development environments are unsupported; however, the core, test, and j2se packages can be built with Ant, so the desktop tools in the j2se package can still be built without Eclipse.
To download and install Eclipse, visit [http://www.eclipse.org/downloads/ eclipse.org].
==Installing the Android SDK==
Download and Install the Android SDK using the instructions at [http://developer.android.com/sdk/index.html android.com].
==Installing Music Synthesizer for Android==
Using Git, download the Music Synthesizer for Android source code. Visit [http://code.google.com/p/music-synthesizer-for-android/source/checkout here] for more details.
{{{
git clone https://code.google.com/p/music-synthesizer-for-android/
}}}
==Testing Music Synthesizer for Android core components==
To make sure everything so far is installed correctly, run the tests and make sure they all build and pass.
{{{
cd $SYNTH_PATH/music-synthesizer-for-android/
ant test
}}}
==Setting up NDK==
The new synth engine is written in C++ for higher performance, and uses OpenSL ES to output sound. Install the [http://developer.android.com/sdk/ndk/index.html Android NDK]. Then, you can either manually run the ndk compile, or set up your Eclipse project to run it automatically.
To run it manually: make sure that ndk-build is on your path, go into the android subdirectory and run:
{{{
ndk-build
}}}
To set up automatic building, edit `android/.externalToolBuilders/NDK Builder.launch` to make sure that ATTR_LOCATION points to a valid location for the ndk-build binary. The default is `${HOME}/install/android-ndk-r7b/ndk-build`, so if you unpacked the NDK into the install subdirectory of your home directory, and the versions match, it may just work.
The result of the ndk-build step is to create a libsynth.so file containing the shared library. For example, android/libs/armeabi-v7a/libsynth.so.
The shared library build depends on the target architecture (unlike Java code). The default is armeabi-v7a, and can be changed by editing APP_ABI in the android/jni/Application.mk file. Note that code built for armeabi will run on ARM v7 devices, but more slowly. It might make sense to set this to "all" so that it will run on more devices, but at the expense of slowing the compile cycle and potentially bloating the APK file size.
==Setting up Music Synthesizer in Eclipse==
Make a new Eclipse workspace. Import the project into Eclipse. This should be File > Import... > Android > Existing Android Code Into Workspace. You will probably get errors on import (duplicate entry 'src', empty ${project_loc}, and maybe others). You can ignore these (although it would be great to clean them up).

@@ -0,0 +1,60 @@ wiki/SinePoly.wiki
#summary Computing sines with polynomials
= Sine generation: polynomials =
There are two techniques used in the synthesizer for generating sine
waves. The scalar code uses a 1024-element lookup table with linear
interpolation, but the NEON code uses a polynomial. Polynomial evaluation
parallelizes easily, unlike lookups.
Most math library implementations of sin() give results accurate to a
couple of LSB's of floating point accuracy. But that's overkill for our
purposes. We want a sine so that the errors are just below what you can
hear. A sixth order polynomial is a good choice here, as the loudest
harmonic (the 3rd) is almost 100dB down from the fundamental.
Also, harmonic distortion in low frequencies is "musical". It seems silly
to go to a huge amount of trouble to suppress harmonics and create a
pure sine tone, when in practice these sines are going to be assembled
into an FM modulation graph to make rich harmonics. However, high
frequency noise is bad because it will create aliasing.
The usual criterion for designing a polynomial for function approximation
is to minimize the worst case error. But for this application, not all
error is created equal. We're willing to tolerate a small increase in
absolute error if we can shape the spectrum to concentrate the error
mostly in the low frequencies.
The design we ended up with was to compute a minimum absolute error
for a 5-th order polynomial, then integrate it. The result is:
{{{
y = 1 - 1.2333439964934032 * x**2 + 0.25215252666796095 * x**4 - 0.01880853017455781 * x**6
}}}
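For reference, the polynomial evaluated in Horner form (from the endpoint values, it approximates cos(πx/2) on [-1, 1] - y(0) = 1 and y(1) ≈ 0 - i.e. a sine up to a phase shift):

```javascript
// Even polynomial approximating cos(pi*x/2) on [-1, 1]; only three
// multiplies and three adds after computing x^2, which maps well to NEON.
function sinePoly(x) {
  var x2 = x * x;
  return 1 + x2 * (-1.2333439964934032 +
         x2 * (0.25215252666796095 - 0.01880853017455781 * x2));
}
```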
In this graph of the error compared to true sine, you can see the difference.
The absolute error computed with Chebyshev fitting is smaller, but there's
a discontinuity when the sign flips, and the frequency gets high. The
"smooth" variant gets rid of the discontinuity, and the high frequency
ripples get attenuated, of course at the cost of the absolute error being
higher.
http://wiki.music-synthesizer-for-android.googlecode.com/git/img/cheby_vs_smooth.png
The spectrum tells a similar story: the tail of the "smooth" variant has
about 10dB less energy than the Chebyshev fit, while the low frequency harmonics
are a touch higher.
http://wiki.music-synthesizer-for-android.googlecode.com/git/img/cheby_vs_smooth_fr.png
This was fun to do, as it felt like an optimization across all levels of
the stack, from cycle counts in the NEON code all the way up to
how musical the tones would sound.
= References =
* [http://www.rossbencina.com/code/sinusoids Fun with Sinusoids] by Ross Bencina
* [http://www.excamera.com/sphinx/article-chebyshev.html Chebyshev approximation in Python] — the simple but effective tool I used to compute the polynomials
* [http://pulsar.webshaker.net/ccc/sample-506402de Pulsar cycle counter] with scheduling analysis of resulting NEON code for the inner loop (each iteration computes 12 values of the FM synthesis kernel, of which the bulk of the calculation is the sine)

@@ -0,0 +1,35 @@ wiki/YamahaDx7.wiki
#summary Home page for documentation and resources about the Yamaha DX7
= Overview =
The Yamaha DX7 was the first highly successful purely digital synthesizer, and the best selling synthesizer of its day.
It is based on FM synthesis, invented by John Chowning at Stanford University in the early 1970's. As with any synthesis technique, it has strengths and weaknesses, but the strengths make it particularly suitable as the basis for a synthesizer on the Android platform. One significant advantage is that a wide range of different sounds can be created and represented in a tiny amount of storage space - a DX7 patch is 128 bytes. In addition, generating the sound requires modest computing resources.
= Reverse Engineering =
There are a number of software implementations of the DX7 (most notably, FM7, Hexter, and a CSound translator), but all suffer from imperfect emulation of the original.
A major goal of the DX7 synthesis module in this project is to match the original as precisely as possible, or, in some cases, to surpass it in sound quality. To do this, we have a test framework which sends MIDI patches and test notes to a physical DX7s, and a sound capture rig (a Roland Quad-Capture) to capture the sound with very high quality and resolution (192ksamples/s, 24 bits). The goal is to understand and document the synthesis techniques used in the actual DX7 almost to the bit level.
Fortunately, this is an achievable goal. The actual synthesis is done by a pair of LSI chips (the YM21290 for envelope generation, and the YM21280 for generating the modulated sine waves), all controlled by an 8 bit microprocessor (a 68B03, a variant of the Motorola 6800, probably running at 2MHz). None of this was capable of a huge amount of complexity. Thus, careful measurement can reveal all the secrets of this hardware. This work is in progress, and as it is completed, the results will be reported in subpages.
Much of the publicly available research on the DX7 is based on the DX7 to Csound translator work done by Jeff Harrington and Sylvain Marchand. However, there are numerous details which are inaccurate.
= Synthesis =
The new synthesis engine is designed with a number of goals in mind:
* Top-notch sound quality, meeting or exceeding that of the original DX7
* High performance, for good battery life and robust performance even on limited hardware
* A portable C++ codebase, optimized for 32-bit fixed point arithmetic
The code draws ideas from a number of different sources, including the original DX7, Hexter, and the Sonivox FM synthesizer (which is now part of the Android source, at [https://github.com/android/platform_external_sonivox/tree/master/arm-fm-22k/lib_src external/sonivox/arm-fm-22k]).
= Links =
* [http://en.wikipedia.org/wiki/Yamaha_DX7 DX7 Wikipedia page]
* [http://www.abdn.ac.uk/~mth192/html/dx7.html Dave Benson's DX7 page]
* [http://dssi.sourceforge.net/hexter.html Hexter], another free software emulator
* [http://www.vorc.org/text/column/hally/ymxxxx.html Yamaha YM chips numerical classification]
* [http://www.parnasse.com/dx72csnd.shtml DX7 to Csound Translator]

Binary file wiki/img/cheby_vs_smooth.png not shown (new file, 70 KiB)

Binary file wiki/img/cheby_vs_smooth_fr.png not shown (new file, 125 KiB)

@@ -0,0 +1,167 @@ wiki/img/env.html
<html>
<head>
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load("visualization", "1", {packages:["corechart"]});
google.setOnLoadCallback(drawChart);
function drawChart() {
var options = {
title: 'Wave form qr=60'
};
var chart = new google.visualization.LineChart(document.getElementById('chart_div'));
//chart.draw(data, options);
function redraw() {
var params = [];
for (var i = 0; i < 8; i++) {
params.push(parseInt(document.getElementById("param" + i).value));
}
var nsamp = parseInt(document.getElementById("nsamp").value);
var rawdata = envdata(params, nsamp);
var data = google.visualization.arrayToDataTable(rawdata);
chart.draw(data, {title: 'Envelope'});
}
for (var i = 0; i < 8; i++) {
document.getElementById("param" + i).addEventListener("change", redraw);
}
document.getElementById("nsamp").addEventListener("change", redraw);
redraw();
}
</script>
</head>
<body>
Level 1: <input id="param0" type="text" value="99">
Rate 1: <input id="param4" type="text" value="80"><br/>
Level 2: <input id="param1" type="text" value="80">
Rate 2: <input id="param5" type="text" value="80"><br/>
Level 3: <input id="param2" type="text" value="99">
Rate 3: <input id="param6" type="text" value="70"><br/>
Level 4: <input id="param3" type="text" value="0">
Rate 4: <input id="param7" type="text" value="80"><br/>
Number of samples: <input id="nsamp" type="text" value="4000"><br/>
<div id="chart_div" style="width: 900px; height: 500px;"></div>
</body>
<script>
var envmask = [[0, 1, 0, 1, 0, 1, 0, 1],
[0, 1, 0, 1, 0, 1, 1, 1],
[0, 1, 1, 1, 0, 1, 1, 1],
[0, 1, 1, 1, 1, 1, 1, 1]];
var outputlevel = [0, 5, 9, 13, 17, 20, 23, 25, 27, 29, 31, 33, 35, 37, 39,
41, 42, 43, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127];
function envenable(i, qr) {
var shift = (qr >> 2) - 11;
if (shift < 0) {
var sm = (1 << -shift) - 1;
if ((i & sm) != sm) return false;
i >>= -shift;
}
return envmask[qr & 3][i & 7] != 0;
}
function attackstep(lev, i, qr) {
var shift = (qr >> 2) - 11;
if (!envenable(i, qr)) return lev;
var slope = 17 - (lev >> 8);
lev += slope << Math.max(shift, 0);
return lev;
}
function decaystep(lev, i, qr) {
var shift = (qr >> 2) - 11;
if (!envenable(i, qr)) return lev;
lev -= 1 << Math.max(shift, 0);
return lev;
}
function Env(params) {
this.params = params;
this.level = 0;
this.ix = 0;
this.i = 0;
this.down = true;
this.advance(0);
}
Env.prototype.getsample = function() {
if (envenable(this.i, this.qr) && (this.ix < 3 || (this.ix < 4 && !this.down))) {
if (this.rising) {
var lev = attackstep(this.level, this.i, this.qr);
console.log(lev);
if (lev >= this.targetlevel) {
lev = this.targetlevel;
this.advance(this.ix + 1);
}
this.level = lev;
} else {
var lev = decaystep(this.level, this.i, this.qr);
if (lev <= this.targetlevel) {
lev = this.targetlevel;
this.advance(this.ix + 1);
}
this.level = lev;
}
}
this.i++;
return this.level;
}
Env.prototype.advance = function(newix) {
this.ix = newix;
if (this.ix < 4) {
var newlevel = this.params[this.ix];
var scaledlevel = Math.max(0, (outputlevel[newlevel] << 5) - 224);
this.targetlevel = scaledlevel;
this.rising = (this.targetlevel - this.level) > 0;
var rate_scaling = 0;
this.qr = Math.min(63, rate_scaling + ((this.params[this.ix + 4] * 41) >> 6));
}
//console.log("advance ix="+this.ix+", qr="+this.qr+", target="+this.targetlevel+", rising="+this.rising);
}
Env.prototype.keyup = function() {
this.down = false;
this.advance(3);
}
function attackdata(qr) {
var result = [['samp', 'env']];
var i = 0;
var count = 0;
var lev = 1716;
while (true) {
result.push([i, lev]);
lev = attackstep(lev, i, qr);
lev = Math.min(lev, 15 << 8);
if (lev >= 15 << 8) {
count++;
if (count > 100) break;
}
i++;
}
return result;
}
function envdata(params, nsamp) {
console.log(nsamp);
var result = [['samp', 'env']];
var env = new Env(params);
for (var i = 0; i < nsamp; i++) {
if (i == 3 * nsamp / 4) {
env.keyup();
}
result.push([i, env.getsample()]);
}
return result;
}
</script>
</html>

@@ -0,0 +1,71 @@ wiki/img/outlevel.html
<html>
<head>
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load("visualization", "1", {packages:["corechart"]});
google.setOnLoadCallback(drawChart);
function drawChart() {
var options = {
title: 'Wave form qr=60'
};
var chart = new google.visualization.LineChart(document.getElementById('chart_div'));
draw_outputlevel(chart);
var chart = new google.visualization.LineChart(document.getElementById('rate_div'));
draw_rate(chart);
}
</script>
</head>
<body>
<p>Output level (in dB) for DX7 envelope, scaled from 0-99 output level values. This data is based on measurement of an actual device.</p>
<div id="chart_div" style="width: 600px; height: 500px;"></div>
<p>Similarly for rate, this time measured in dB/s, for decay. Attack is faster, and has a nonlinear curve.</p>
<div id="rate_div" style="width:600px; height:500px"></div>
</body>
<script>
var outputlevel = [0, 5, 9, 13, 17, 20, 23, 25, 27, 29, 31, 33, 35, 37, 39,
41, 42, 43, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127];
function draw_outputlevel(chart) {
var rawdata = [['value', 'gain']];
for (var i = 0; i < outputlevel.length; i++) {
var ol = outputlevel[i];
var db = 20 * Math.log(2) / Math.log(10) * (ol - 127) / 8;
rawdata.push([i, db]);
}
var data = google.visualization.arrayToDataTable(rawdata);
chart.draw(data, {title: 'Output level to dB scaling', hAxis: {
title: 'Output level value (0-99)'
}, vAxis: {
title: 'dB'
}});
}
function draw_rate(chart) {
var rawdata = [['value', 'rate']];
for (var i = 0; i < outputlevel.length; i++) {
var qr = i * 41 / 64;
var samplerate = 49096;
var baserate = samplerate / (1<<20) * 20 * Math.log(2) / Math.log(10);
var rate = baserate * (1 << (qr >> 2)) * (1 + .25 * (qr & 3));
rawdata.push([i, rate]);
}
var data = google.visualization.arrayToDataTable(rawdata);
chart.draw(data, {title: 'Rate values to actual level change', hAxis: {
title: 'Rate value (0-99)'
}, vAxis: {
title: 'dB/s',
logScale: true
}});
}
</script>
</html>

Binary file wiki/img/saw_spectrum.png not shown (new file, 1011 KiB)
